refactor: extract skills from commands across 10 plugins #338

Merged
lmiranda merged 7 commits from refactor/all-plugins-skills-extraction into development 2026-01-30 22:35:21 +00:00
127 changed files with 7423 additions and 7633 deletions

View File

@@ -2,16 +2,12 @@
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 💬 CLARITY-ASSIST · Prompt Optimization │
└──────────────────────────────────────────────────────────────────┘
++----------------------------------------------------------------------+
+| CLARITY-ASSIST - Prompt Optimization |
++----------------------------------------------------------------------+
```
Then proceed with the workflow.
## Purpose
Transform vague, incomplete, or ambiguous requests into clear, actionable specifications using the 4-D methodology with neurodivergent-friendly accommodations.
@@ -23,127 +19,22 @@ Transform vague, incomplete, or ambiguous requests into clear, actionable specif
- Tasks requiring significant context gathering
- When user seems uncertain about what they want
+## Skills to Load
+Load these skills before proceeding:
+- `skills/4d-methodology.md` - Core 4-phase process
+- `skills/nd-accommodations.md` - ND-friendly question patterns
+- `skills/clarification-techniques.md` - Anti-patterns and templates
+- `skills/escalation-patterns.md` - When to adjust approach
+## Workflow
+1. **Deconstruct** - Break down request into components
+2. **Diagnose** - Identify gaps and conflicts
+3. **Develop** - Gather clarifications via structured questions
+4. **Deliver** - Present refined specification
## 4-D Methodology
### Phase 1: Deconstruct
Break down the user's request into components:
1. **Extract explicit requirements** - What was directly stated
2. **Identify implicit assumptions** - What seems assumed but not stated
3. **Note ambiguities** - Points that could go multiple ways
4. **List dependencies** - External factors that might affect implementation
### Phase 2: Diagnose
Analyze gaps and potential issues:
1. **Missing information** - What do we need to know?
2. **Conflicting requirements** - Do any stated goals contradict?
3. **Scope boundaries** - What's in/out of scope?
4. **Technical constraints** - Platform, language, architecture limits
### Phase 3: Develop
Gather clarifications through structured questioning:
**ND-Friendly Question Rules:**
- Present 2-4 concrete options (never open-ended alone)
- Include "Other" for custom responses
- Ask 1-2 questions at a time maximum
- Provide brief context for why you're asking
- Check for conflicts with previous answers
**Example Format:**
```
To help me understand the scope better:
**How should errors be handled?**
1. Silent logging (user sees nothing)
2. Toast notifications (brief, dismissible)
3. Modal dialogs (requires user action)
4. Other
[Context: This affects both UX and how much error-handling code we need]
```
### Phase 4: Deliver
Produce the refined specification:
```markdown
## Clarified Request
### Summary
[1-2 sentence description of what will be built]
### Scope
**In Scope:**
- [Item 1]
- [Item 2]
**Out of Scope:**
- [Item 1]
### Requirements
| # | Requirement | Priority | Notes |
|---|-------------|----------|-------|
| 1 | ... | Must | ... |
| 2 | ... | Should | ... |
### Assumptions
- [Assumption made based on conversation]
### Open Questions
- [Any remaining ambiguities, if any]
```
## Conflict Detection
Before each question, review previous answers for conflicts:
```
[Internal check]
Previous: User said "keep it simple"
Current question about: Adding configuration options
Potential conflict: More options = more complexity
→ Acknowledge: "Earlier you mentioned keeping it simple. With that in mind..."
```
## Escalation Protocol
If the request is particularly messy or the user seems overwhelmed:
1. Acknowledge the complexity
2. Offer to start with just ONE aspect
3. Build incrementally
```
"This touches a lot of areas. Rather than tackle everything at once,
let's start with [most critical piece]. Once that's clear, we can
add the other parts. Sound good?"
```
## Example Interaction
**User:** Make the app faster
**Clarify Response:**
```
Let me help clarify what kind of performance improvements you're looking for.
**What type of slowness are you experiencing?**
1. Initial page load takes too long
2. Interactions feel sluggish (clicking, typing)
3. API calls are slow
4. All of the above
5. Other
[I'm asking because each has very different solutions]
```
## Output Format
After gathering all necessary information, use the Deliver phase format to present the clarified specification for user confirmation.
+Use the Deliver phase template from `skills/4d-methodology.md` to present the clarified specification for user confirmation.

View File

@@ -2,16 +2,12 @@
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 💬 CLARITY-ASSIST · Quick Clarify │
└──────────────────────────────────────────────────────────────────┘
++----------------------------------------------------------------------+
+| CLARITY-ASSIST - Quick Clarify |
++----------------------------------------------------------------------+
```
Then proceed with the workflow.
## Purpose
Single-pass clarification for requests that are mostly clear but need minor disambiguation.
@@ -23,74 +19,27 @@ Single-pass clarification for requests that are mostly clear but need minor disa
- Follow-up to an already-clarified request
- Simple feature additions or bug fixes
+## Skills to Load
+- `skills/nd-accommodations.md` - ND-friendly question patterns
+- `skills/clarification-techniques.md` - Echo and micro-summary techniques
+- `skills/escalation-patterns.md` - When to escalate to full /clarify
+## Workflow
+1. **Echo Understanding** - Restate in a single sentence
+2. **Quick Disambiguation** - Ask ONE multiple-choice question if needed
+3. **Proceed or Confirm** - Start work or offer micro-summary
## Process
### Step 1: Echo Understanding
Restate what you understood in a single sentence:
```
"I understand you want [X] that does [Y]."
```
### Step 2: Quick Disambiguation
If needed, ask ONE multiple-choice question:
```
**Just to confirm:** Should this [A] or [B]?
1. [Option A with brief description]
2. [Option B with brief description]
3. Both
4. Neither / Other approach
```
### Step 3: Proceed or Confirm
If confident (>90%), proceed with a brief confirmation:
```
"Got it. I'll implement [X with chosen option]. Starting now."
```
If less confident, offer a micro-summary:
```
"Quick summary before I start:
- [Key point 1]
- [Key point 2]
- [Assumption made]
Proceed? (Or clarify anything)"
```
## Example Interaction
**User:** Add a delete button to the user list
**Quick-Clarify Response:**
```
I'll add a delete button to each row in the user list.
**Should deletion require confirmation?**
1. Yes - show confirmation dialog (Recommended)
2. No - delete immediately
3. Soft delete - mark as deleted but keep data
[Asking because accidental deletions can be problematic]
```
## Escalation to Full /clarify
If quick-clarify reveals complexity:
```
"This is more involved than it first appeared - there are
several decisions to make. Want me to switch to a more
thorough clarification process? (Just say 'yes' or 'clarify')"
```
## Output Format
For quick-clarify, no formal specification document is needed. Just proceed with the task after brief confirmation, documenting assumptions inline with the work.
+No formal specification document needed. Proceed after brief confirmation, documenting assumptions inline with the work.
+## Escalation
+If complexity emerges, offer to switch to full `/clarify`:
+```
+"This is more involved than it first appeared. Want me to switch
+to a more thorough clarification process?"
+```

View File

@@ -0,0 +1,76 @@
# 4-D Methodology for Prompt Clarification
The 4-D methodology transforms vague requests into actionable specifications.
## Phase 1: Deconstruct
Break down the user's request into components:
1. **Extract explicit requirements** - What was directly stated
2. **Identify implicit assumptions** - What seems assumed but not stated
3. **Note ambiguities** - Points that could go multiple ways
4. **List dependencies** - External factors that might affect implementation
## Phase 2: Diagnose
Analyze gaps and potential issues:
1. **Missing information** - What do we need to know?
2. **Conflicting requirements** - Do any stated goals contradict?
3. **Scope boundaries** - What is in/out of scope?
4. **Technical constraints** - Platform, language, architecture limits
## Phase 3: Develop
Gather clarifications through structured questioning:
- Present 2-4 concrete options (never open-ended alone)
- Include "Other" for custom responses
- Ask 1-2 questions at a time maximum
- Provide brief context for why you are asking
- Check for conflicts with previous answers
**Example Format:**
```
To help me understand the scope better:
**How should errors be handled?**
1. Silent logging (user sees nothing)
2. Toast notifications (brief, dismissible)
3. Modal dialogs (requires user action)
4. Other
[Context: This affects both UX and how much error-handling code we need]
```
## Phase 4: Deliver
Produce the refined specification:
```markdown
## Clarified Request
### Summary
[1-2 sentence description of what will be built]
### Scope
**In Scope:**
- [Item 1]
- [Item 2]
**Out of Scope:**
- [Item 1]
### Requirements
| # | Requirement | Priority | Notes |
|---|-------------|----------|-------|
| 1 | ... | Must | ... |
| 2 | ... | Should | ... |
### Assumptions
- [Assumption made based on conversation]
### Open Questions
- [Any remaining ambiguities, if any]
```

View File

@@ -0,0 +1,86 @@
# Clarification Techniques
Structured approaches for disambiguating user requests.
## Anti-Patterns to Detect
### Vague Requests
**Triggers:** "improve", "fix", "update", "change", "better", "faster", "cleaner"
**Response:** Ask for specific metrics or outcomes
### Scope Creep Signals
**Triggers:** "while you're at it", "also", "might as well", "and another thing"
**Response:** Acknowledge, then isolate: "I'll note that for after the main task"
### Assumption Gaps
**Triggers:** References to "the" thing (which thing?), "it" (what?), "there" (where?)
**Response:** Echo back specific understanding
### Conflicting Requirements
**Triggers:** "Simple but comprehensive", "Fast but thorough", "Minimal but complete"
**Response:** Prioritize: "Which matters more: simplicity or completeness?"
## Question Templates
### For Unclear Purpose
```
**What problem does this solve?**
1. [Specific problem A]
2. [Specific problem B]
3. Combination
4. Different problem: ____
```
### For Missing Scope
```
**What should this include?**
- [ ] Feature A
- [ ] Feature B
- [ ] Feature C
- [ ] Other: ____
```
### For Ambiguous Behavior
```
**When [trigger event], what should happen?**
1. [Behavior option A]
2. [Behavior option B]
3. Nothing (ignore)
4. Depends on: ____
```
### For Technical Decisions
```
**Implementation approach:**
1. [Approach A] - pros: X, cons: Y
2. [Approach B] - pros: X, cons: Y
3. Let me decide based on codebase
4. Need more info about: ____
```
## Echo Understanding Technique
Before diving into questions, restate understanding:
```
"I understand you want [X] that does [Y]."
```
This validates comprehension and gives the user a chance to correct early.
## Micro-Summary Technique
For quick confirmations before proceeding:
```
"Quick summary before I start:
- [Key point 1]
- [Key point 2]
- [Assumption made]
Proceed? (Or clarify anything)"
```

View File

@@ -0,0 +1,57 @@
# Escalation Patterns
Guidelines for when to escalate between clarification modes.
## Quick-Clarify to Full Clarify
Escalate when quick-clarify reveals unexpected complexity:
```
"This is more involved than it first appeared - there are
several decisions to make. Want me to switch to a more
thorough clarification process? (Just say 'yes' or 'clarify')"
```
### Triggers for Escalation
- Multiple ambiguities discovered during quick pass
- User's answer reveals hidden dependencies
- Scope expands beyond original understanding
- Technical constraints emerge that need discussion
- Conflicting requirements surface
## Full Clarify to Incremental
When user is overwhelmed by full 4-D process:
```
"This touches a lot of areas. Rather than tackle everything at once,
let's start with [most critical piece]. Once that's clear, we can
add the other parts. Sound good?"
```
### Signs of Overwhelm
- Long pauses or hesitation
- "I don't know" responses
- Requesting breaks
- Contradicting earlier answers
- Expressing frustration
## Choosing Initial Mode
### Use /quick-clarify When
- Request is fairly clear, just one or two ambiguities
- User is in a hurry
- Follow-up to an already-clarified request
- Simple feature additions or bug fixes
- Confidence is high (>90%)
### Use /clarify When
- Complex multi-step requests
- Requirements with multiple possible interpretations
- Tasks requiring significant context gathering
- User seems uncertain about what they want
- First time working on this feature/area

View File

@@ -0,0 +1,74 @@
# Neurodivergent-Friendly Accommodations
Guidelines for making clarification interactions accessible and comfortable for neurodivergent users.
## Core Principles
### Reduce Cognitive Load
- Maximum 4 options per question
- Always include "Other" escape hatch
- Provide examples, not just descriptions
- Use numbered lists for easy reference
### Support Working Memory
- Summarize frequently
- Reference earlier decisions explicitly
- Do not assume the user remembers context from many turns ago
- Echo back understanding before proceeding
### Allow Processing Time
- Do not rapid-fire questions
- Validate answers before moving on
- Offer to revisit or change earlier answers
- One question block at a time
### Manage Overwhelm
- Offer to break into smaller sessions
- Prioritize must-haves vs nice-to-haves
- Provide "good enough for now" options
- Acknowledge complexity openly
## Question Formatting Rules
**Always do:**
```
**How should errors be handled?**
1. Silent logging (user sees nothing)
2. Toast notifications (brief, dismissible)
3. Modal dialogs (requires user action)
4. Other
[Context: This affects both UX and error-handling complexity]
```
**Never do:**
```
How do you want to handle errors? There are many approaches...
```
## Conflict Acknowledgment
Before asking about something that might conflict with a previous answer:
```
[Internal check]
Previous: User said "keep it simple"
Current question about: Adding configuration options
Potential conflict: More options = more complexity
```
Then acknowledge: "Earlier you mentioned keeping it simple. With that in mind..."
## Escalation for Overwhelm
If the request is particularly complex or the user seems overwhelmed:
1. Acknowledge the complexity openly
2. Offer to start with just ONE aspect
3. Build incrementally
```
"This touches a lot of areas. Rather than tackle everything at once,
let's start with [most critical piece]. Once that's clear, we can
add the other parts. Sound good?"
```

View File

@@ -4,29 +4,18 @@ description: Analyze CLAUDE.md for optimization opportunities and plugin integra
# Analyze CLAUDE.md
This command analyzes your project's CLAUDE.md file and provides a detailed report on optimization opportunities and plugin integration status.
+Analyze your CLAUDE.md and provide a scored report with recommendations.
+## Skills to Load
+- skills/visual-header.md
+- skills/analysis-workflow.md
+- skills/optimization-patterns.md
+- skills/pre-change-protocol.md
## Visual Output
When executing this command, display the plugin header:
+Display: `CONFIG-MAINTAINER - CLAUDE.md Analysis`
```
┌──────────────────────────────────────────────────────────────────┐
│ ⚙️ CONFIG-MAINTAINER · CLAUDE.md Analysis │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the analysis.
## What This Command Does
1. **Read CLAUDE.md** - Locates and reads the project's CLAUDE.md file
2. **Analyze Structure** - Evaluates organization, headers, and flow
3. **Check Content** - Reviews clarity, completeness, and conciseness
4. **Identify Issues** - Finds redundancy, verbosity, and missing sections
5. **Detect Active Plugins** - Identifies marketplace plugins enabled in the project
6. **Check Plugin Integration** - Verifies CLAUDE.md references active plugins
7. **Generate Report** - Provides scored assessment with recommendations
## Usage
@@ -34,202 +23,27 @@ Then proceed with the analysis.
```
/config-analyze
```
+## Workflow
+1. Locate and parse CLAUDE.md
+2. Evaluate structure, clarity, completeness, conciseness
+3. Find redundancy, verbosity, missing sections
+4. Detect active marketplace plugins
+5. Verify plugin integration in CLAUDE.md
+6. Generate scored report with recommendations
+## Scoring (100 points)
+| Category | Points |
+|----------|--------|
+| Structure | 25 |
+| Clarity | 25 |
+| Completeness | 25 |
+| Conciseness | 25 |
Or invoke the maintainer agent directly:
```
Analyze the CLAUDE.md file in this project
```
## Analysis Criteria
### Structure (25 points)
- Logical section ordering
- Clear header hierarchy
- Easy navigation
- Appropriate grouping
### Clarity (25 points)
- Clear instructions
- Good examples
- Unambiguous language
- Appropriate detail level
### Completeness (25 points)
- Project overview present
- Quick start commands documented
- Critical rules highlighted
- Key workflows covered
- **Pre-Change Protocol section present** (MANDATORY - see below)
### Conciseness (25 points)
- No unnecessary repetition
- Efficient information density
- Appropriate length for project size
- No generic filler content
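For reference, the example report further below aggregates exactly these four 25-point categories. A minimal sketch of that arithmetic (the rating thresholds are illustrative assumptions, not the plugin's actual cut-offs):
```python
# Hypothetical sketch: roll the four 25-point categories up into the
# 100-point overall score and attach a qualitative rating to each.

def rating(points, maximum=25):
    """Map a category score onto the ratings used in the report."""
    pct = points / maximum
    if pct >= 0.85:
        return "Excellent"
    if pct >= 0.70:
        return "Good"
    return "Needs Work"

scores = {"Structure": 20, "Clarity": 18, "Completeness": 22, "Conciseness": 12}
overall = sum(scores.values())  # each category contributes up to 25 points

for category, points in scores.items():
    print(f"- {category}: {points}/25 ({rating(points)})")
print(f"Overall Score: {overall}/100")
```
With the example values this reproduces the 72/100 report shown below.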
## Pre-Change Protocol Check (MANDATORY)
**This check is CRITICAL.** The Pre-Change Protocol section ensures Claude performs comprehensive dependency analysis before making any code changes, preventing missed references and incomplete updates.
### What to Check
Search CLAUDE.md for:
- Section header containing "Pre-Change" or "Before Any Code Change"
- References to `grep -rn` or impact search
- Checklist with "Files That Will Be Affected"
- Requirement for user verification before proceeding
### If Missing
**Flag as HIGH PRIORITY issue:**
```
1. [HIGH] Missing Pre-Change Protocol section
CLAUDE.md lacks mandatory dependency-check protocol.
Impact: Claude may miss file references when making changes,
leading to broken dependencies and incomplete updates.
Recommendation: Add Pre-Change Protocol section immediately.
This is the #1 cause of cascading bugs from incomplete changes.
```
### Required Section Content
The Pre-Change Protocol section must include:
1. Requirement to run grep search and show results
2. List of files that will be affected
3. List of files searched but not changed (with reasoning)
4. Documentation that references the change target
5. User verification checkpoint before proceeding
6. Post-change verification step
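A rough sketch of how these presence checks could be automated. The markers mirror the "What to Check" list above; the regexes and file handling are illustrative assumptions:
```python
# Illustrative sketch: detect whether CLAUDE.md contains the Pre-Change
# Protocol markers listed in "What to Check" above.
import re
from pathlib import Path

REQUIRED_MARKERS = {
    "section header": re.compile(r"^#{1,6}.*(Pre-Change|Before Any Code Change)", re.M | re.I),
    "impact search": re.compile(r"grep -rn|impact search", re.I),
    "affected-files checklist": re.compile(r"Files That Will Be Affected", re.I),
    "user verification": re.compile(r"user verif", re.I),
}

def missing_pre_change_markers(path="CLAUDE.md"):
    """Return the markers not found in the file (empty list = looks complete)."""
    text = Path(path).read_text(encoding="utf-8")
    return [name for name, pattern in REQUIRED_MARKERS.items() if not pattern.search(text)]

missing = missing_pre_change_markers()
if missing:
    print("[HIGH] Missing Pre-Change Protocol markers:", ", ".join(missing))
```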
## Plugin Integration Analysis
After the content analysis, the command detects and analyzes marketplace plugin integration:
### Detection Method
1. **Read `.claude/settings.local.json`** - Check for enabled MCP servers
2. **Map MCP servers to plugins** - Use marketplace registry to identify active plugins:
- `gitea` → projman
- `netbox` → cmdb-assistant
3. **Check for hooks** - Identify hook-based plugins (project-hygiene)
4. **Scan CLAUDE.md** - Look for plugin integration content
### Plugin Coverage Scoring
For each detected plugin, verify CLAUDE.md contains:
- Plugin section header or mention
- Available commands documentation
- MCP tools reference (if applicable)
- Usage guidelines
Coverage is reported as a percentage: `(plugins referenced / plugins detected) * 100`
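A minimal sketch of the detection-and-coverage logic described above. The server-to-plugin mapping comes from the marketplace registry; the `mcpServers` key and the substring-matching heuristic are assumptions for illustration:
```python
# Hypothetical sketch of plugin detection and coverage scoring; the exact
# layout of .claude/settings.local.json is assumed here.
import json
from pathlib import Path

SERVER_TO_PLUGIN = {"gitea": "projman", "netbox": "cmdb-assistant"}

def detect_plugins(settings_path=".claude/settings.local.json"):
    settings = json.loads(Path(settings_path).read_text())
    servers = settings.get("mcpServers", {})  # assumed key name
    return {SERVER_TO_PLUGIN[s] for s in servers if s in SERVER_TO_PLUGIN}

def coverage(detected, claude_md="CLAUDE.md"):
    """Percentage of detected plugins that CLAUDE.md mentions by name."""
    text = Path(claude_md).read_text(encoding="utf-8").lower()
    referenced = {p for p in detected if p in text}
    return 100.0 * len(referenced) / len(detected) if detected else 100.0

plugins = detect_plugins()
print(f"Plugin Coverage: {coverage(plugins):.0f}% ({len(plugins)} plugins detected)")
```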
## Expected Output
```
CLAUDE.md Analysis Report
=========================
File: /path/to/project/CLAUDE.md
Lines: 245
Last Modified: 2025-01-18
Overall Score: 72/100
Category Scores:
- Structure: 20/25 (Good)
- Clarity: 18/25 (Good)
- Completeness: 22/25 (Excellent)
- Conciseness: 12/25 (Needs Work)
Strengths:
+ Clear project overview with good context
+ Critical rules prominently displayed
+ Comprehensive coverage of workflows
Issues Found:
1. [HIGH] Verbose explanations (lines 45-78)
Section "Running Tests" has 34 lines that could be 8 lines.
Impact: Harder to scan, important info buried
2. [MEDIUM] Duplicate content (lines 102-115, 189-200)
Same git workflow documented twice.
Impact: Maintenance burden, inconsistency risk
3. [MEDIUM] Missing Quick Start section
No clear "how to get started" instructions.
Impact: Slower onboarding for Claude
4. [LOW] Inconsistent header formatting
Mix of "## Title" and "## Title:" styles.
Impact: Minor readability issue
Recommendations:
1. Add Quick Start section at top (priority: high)
2. Condense Testing section to essentials (priority: high)
3. Remove duplicate git workflow (priority: medium)
4. Standardize header formatting (priority: low)
Estimated improvement: 15-20 points after changes
---
Plugin Integration Analysis
===========================
Detected Active Plugins:
✓ projman (via gitea MCP server)
✓ cmdb-assistant (via netbox MCP server)
✓ project-hygiene (via PostToolUse hook)
Plugin Coverage: 33% (1/3 plugins referenced)
✓ projman - Referenced in CLAUDE.md
✗ cmdb-assistant - NOT referenced
✗ project-hygiene - NOT referenced
Missing Integration Content:
1. cmdb-assistant
Add infrastructure management commands and NetBox MCP tools reference.
2. project-hygiene
Add cleanup hook documentation and configuration options.
---
Would you like me to:
[1] Implement all content recommendations
[2] Add missing plugin integrations to CLAUDE.md
[3] Do both (recommended)
[4] Show preview of changes first
```
## When to Use
Run `/config-analyze` when:
- Setting up a new project with existing CLAUDE.md
- CLAUDE.md feels too long or hard to use
- Claude seems to miss instructions
- Before major project changes
- Periodic maintenance (quarterly)
- After installing new marketplace plugins
- When Claude doesn't seem to use available plugin tools
## Follow-Up Actions
+1. Implement content recommendations
+2. Add missing plugin integrations
+3. Do both (recommended)
+4. Show preview first
After analysis, you can:
- Run `/config-optimize` to automatically improve the file
- Manually address specific issues
- Request detailed recommendations for any section
- Compare with best practice templates
## Tips
- Run analysis after significant project changes
- Address HIGH priority issues first
- Keep scores above 70/100 for best results
- Re-analyze after making changes to verify improvement

View File

@@ -4,248 +4,45 @@ description: Show diff between current CLAUDE.md and last commit
# Compare CLAUDE.md Changes
This command shows differences between your current CLAUDE.md file and previous versions, helping track configuration drift and review changes before committing.
+Show differences between CLAUDE.md versions to track configuration drift.
+## Skills to Load
+- skills/visual-header.md
+- skills/diff-analysis.md
## Visual Output
When executing this command, display the plugin header:
+Display: `CONFIG-MAINTAINER - CLAUDE.md Diff`
```
┌──────────────────────────────────────────────────────────────────┐
│ ⚙️ CONFIG-MAINTAINER · CLAUDE.md Diff │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the diff.
## What This Command Does
1. **Detect CLAUDE.md Location** - Finds the project's CLAUDE.md file
2. **Compare Versions** - Shows diff against last commit or specified revision
3. **Highlight Sections** - Groups changes by affected sections
4. **Summarize Impact** - Explains what the changes mean for Claude's behavior
## Usage
```
/config-diff
+/config-diff # Working vs last commit
+/config-diff --commit=abc1234 # Working vs specific commit
+/config-diff --from=v1.0 --to=v2.0 # Compare two commits
+/config-diff --section="Critical Rules" # Specific section only
```
+## Workflow
+1. Find project's CLAUDE.md file
+2. Show diff against target revision
+3. Group changes by affected sections
+4. Explain behavioral implications
Compare against a specific commit:
```
/config-diff --commit=abc1234
/config-diff --commit=HEAD~3
```
Compare two specific commits:
```
/config-diff --from=abc1234 --to=def5678
```
Show only specific sections:
```
/config-diff --section="Critical Rules"
/config-diff --section="Quick Start"
```
## Comparison Modes
### Default: Working vs Last Commit
Shows uncommitted changes to CLAUDE.md:
```
/config-diff
```
### Working vs Specific Commit
Shows changes since a specific point:
```
/config-diff --commit=v1.0.0
```
### Commit to Commit
Shows changes between two historical versions:
```
/config-diff --from=v1.0.0 --to=v2.0.0
```
### Branch Comparison
Shows CLAUDE.md differences between branches:
```
/config-diff --branch=main
/config-diff --from=feature-branch --to=main
```
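All of these modes reduce to ordinary `git diff` invocations scoped to the CLAUDE.md path. A sketch of that mapping (an assumption about the mechanics, not the plugin's actual implementation):
```python
# Assumed mapping from comparison modes to git invocations.
import subprocess

def claude_md_diff(from_ref=None, to_ref=None):
    """Diff CLAUDE.md; with no refs, compares working copy against HEAD."""
    args = ["git", "diff"]
    if from_ref and to_ref:
        args += [from_ref, to_ref]   # commit-to-commit / branch comparison
    elif from_ref:
        args += [from_ref]           # working copy vs a specific commit/tag
    args += ["--", "CLAUDE.md"]
    return subprocess.run(args, capture_output=True, text=True, check=True).stdout

print(claude_md_diff())                    # /config-diff
print(claude_md_diff("v1.0.0"))            # /config-diff --commit=v1.0.0
print(claude_md_diff("v1.0.0", "v2.0.0"))  # /config-diff --from=v1.0.0 --to=v2.0.0
```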
## Expected Output
```
CLAUDE.md Diff Report
=====================
File: /path/to/project/CLAUDE.md
Comparing: Working copy vs HEAD (last commit)
Commit: abc1234 "Update build commands" (2 days ago)
Summary:
- Lines added: 12
- Lines removed: 5
- Net change: +7 lines
- Sections affected: 3
Section Changes:
----------------
## Quick Start [MODIFIED]
- Added new environment variable requirement
- Updated test command with coverage flag
## Critical Rules [ADDED CONTENT]
+ New rule: "Never modify database migrations directly"
## Architecture [UNCHANGED]
## Common Operations [MODIFIED]
- Removed deprecated deployment command
- Added new Docker workflow
Detailed Diff:
--------------
--- CLAUDE.md (HEAD)
+++ CLAUDE.md (working)
@@ -15,7 +15,10 @@
## Quick Start
```bash
+export DATABASE_URL=postgres://... # Required
pip install -r requirements.txt
-pytest
+pytest --cov=src # Run with coverage
uvicorn main:app --reload
```
@@ -45,6 +48,7 @@
## Critical Rules
- Never modify `.env` files directly
+- Never modify database migrations directly
- Always run tests before committing
Behavioral Impact:
------------------
These changes will affect Claude's behavior:
1. [NEW REQUIREMENT] Claude will now export DATABASE_URL before running
2. [MODIFIED] Test command now includes coverage reporting
3. [NEW RULE] Claude will avoid direct migration modifications
Review: Do these changes reflect your intended configuration?
```
## Section-Focused View
When using `--section`, output focuses on specific areas:
```
/config-diff --section="Critical Rules"
CLAUDE.md Section Diff: Critical Rules
======================================
--- HEAD
+++ Working
## Critical Rules
- Never modify `.env` files directly
+- Never modify database migrations directly
+- Always use type hints in Python code
- Always run tests before committing
-- Keep functions under 50 lines
Changes:
+ 2 rules added
- 1 rule removed
Impact: Claude will follow 2 new constraints and no longer enforce
the 50-line function limit.
```
## Options
| Option | Description |
|--------|-------------|
| `--commit=REF` | Compare working copy against specific commit/tag |
| `--from=REF` | Starting point for comparison |
| `--to=REF` | Ending point for comparison (default: HEAD) |
| `--branch=NAME` | Compare against branch tip |
| `--section=NAME` | Show only changes to specific section |
| `--stat` | Show only statistics, no detailed diff |
| `--no-color` | Disable colored output |
| `--context=N` | Lines of context around changes (default: 3) |
+| `--commit=REF` | Compare against specific commit |
+| `--from=REF` | Starting point |
+| `--to=REF` | Ending point (default: HEAD) |
+| `--section=NAME` | Show only specific section |
+| `--stat` | Statistics only |
## Understanding the Output
### Change Indicators
| Symbol | Meaning |
|--------|---------|
| `+` | Line added |
| `-` | Line removed |
| `@@` | Location marker showing line numbers |
| `[MODIFIED]` | Section has changes |
| `[ADDED]` | New section created |
| `[REMOVED]` | Section deleted |
| `[UNCHANGED]` | No changes to section |
### Impact Categories
- **NEW REQUIREMENT** - Claude will now need to do something new
- **REMOVED REQUIREMENT** - Claude no longer needs to do something
- **MODIFIED** - Existing behavior changed
- **NEW RULE** - New constraint added
- **RELAXED RULE** - Constraint removed or softened
## When to Use
Run `/config-diff` when:
- Before committing CLAUDE.md changes
- Reviewing what changed after pulling updates
- Debugging unexpected Claude behavior
- Auditing configuration changes over time
- Comparing configurations across branches
+- Reviewing changes after pull
+- Debugging unexpected behavior
## Integration with Other Commands
| Workflow | Commands |
|----------|----------|
| Review before commit | `/config-diff` then `git commit` |
| After optimization | `/config-optimize` then `/config-diff` |
| Audit history | `/config-diff --from=v1.0.0 --to=HEAD` |
| Branch comparison | `/config-diff --branch=main` |
## Tips
1. **Review before committing** - Always check what changed
2. **Track behavioral changes** - Focus on rules and requirements sections
3. **Use section filtering** - Large files benefit from focused diffs
4. **Compare across releases** - Use tags to track major changes
5. **Check after merges** - Ensure CLAUDE.md didn't get conflict artifacts
## Troubleshooting
### "No changes detected"
- CLAUDE.md matches the comparison target
- Check if you're comparing the right commits
### "File not found in commit"
- CLAUDE.md didn't exist at that commit
- Use `git log -- CLAUDE.md` to find when it was created
### "Not a git repository"
- This command requires git history
- Initialize git or use file backup comparison instead

View File

@@ -4,343 +4,45 @@ description: Lint CLAUDE.md for common anti-patterns and best practices
# Lint CLAUDE.md
This command checks your CLAUDE.md file against best practices and detects common anti-patterns that can cause issues with Claude Code.
+Check CLAUDE.md against best practices and detect common anti-patterns.
+## Skills to Load
+- skills/visual-header.md
+- skills/lint-rules.md
## Visual Output
When executing this command, display the plugin header:
+Display: `CONFIG-MAINTAINER - CLAUDE.md Lint`
```
┌──────────────────────────────────────────────────────────────────┐
│ ⚙️ CONFIG-MAINTAINER · CLAUDE.md Lint │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the linting.
## What This Command Does
1. **Parse Structure** - Validates markdown structure and hierarchy
2. **Check Security** - Detects hardcoded paths, secrets, and sensitive data
3. **Validate Content** - Identifies anti-patterns and problematic instructions
4. **Verify Format** - Ensures consistent formatting and style
5. **Generate Report** - Provides actionable findings with fix suggestions
## Usage
```
/config-lint
+/config-lint # Full lint
+/config-lint --fix # Auto-fix issues
+/config-lint --rules=security # Check specific category
```
+## Workflow
+1. Parse markdown structure and hierarchy
+2. Check for hardcoded paths, secrets, sensitive data
+3. Identify content anti-patterns
+4. Verify consistent formatting
+5. Generate report with fix suggestions
Lint with auto-fix:
```
/config-lint --fix
```
Check specific rules only:
```
/config-lint --rules=security,structure
```
## Linting Rules
### Security Rules (SEC)
| Rule | Description | Severity |
|------|-------------|----------|
| SEC001 | Hardcoded absolute paths | Warning |
| SEC002 | Potential secrets/API keys | Error |
| SEC003 | Hardcoded IP addresses | Warning |
| SEC004 | Exposed credentials patterns | Error |
| SEC005 | Hardcoded URLs with tokens | Error |
| SEC006 | Environment variable values (not names) | Warning |
### Structure Rules (STR)
| Rule | Description | Severity |
|------|-------------|----------|
| STR001 | Missing required sections | Error |
| STR002 | Invalid header hierarchy (h3 before h2) | Warning |
| STR003 | Orphaned content (text before first header) | Info |
| STR004 | Excessive nesting depth (>4 levels) | Warning |
| STR005 | Empty sections | Warning |
| STR006 | Missing section content | Warning |
### Content Rules (CNT)
| Rule | Description | Severity |
|------|-------------|----------|
| CNT001 | Contradictory instructions | Error |
| CNT002 | Vague or ambiguous rules | Warning |
| CNT003 | Overly long sections (>100 lines) | Info |
| CNT004 | Duplicate content | Warning |
| CNT005 | TODO/FIXME in production config | Warning |
| CNT006 | Outdated version references | Info |
| CNT007 | Broken internal links | Warning |
### Format Rules (FMT)
| Rule | Description | Severity |
|------|-------------|----------|
| FMT001 | Inconsistent header styles | Info |
| FMT002 | Inconsistent list markers | Info |
| FMT003 | Missing code block language | Info |
| FMT004 | Trailing whitespace | Info |
| FMT005 | Missing blank lines around headers | Info |
| FMT006 | Inconsistent indentation | Info |
### Best Practice Rules (BPR)
| Rule | Description | Severity |
|------|-------------|----------|
| BPR001 | No Quick Start section | Warning |
| BPR002 | No Critical Rules section | Warning |
| BPR003 | Instructions without examples | Info |
| BPR004 | Commands without explanation | Info |
| BPR005 | Rules without rationale | Info |
| BPR006 | Missing plugin integration docs | Info |
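To make the rule tables concrete, here is a sketch of how two of these rules might be implemented. The rule IDs come from the tables above; the detection heuristics are illustrative assumptions, not the plugin's actual code:
```python
import re

def sec002_secrets(lines):
    """SEC002: flag likely hardcoded keys, e.g. api_key = 'sk-...' (Error)."""
    pattern = re.compile(r"""(api[_-]?key|token|secret)\s*[:=]\s*["']?[\w-]{12,}""", re.I)
    return [(i, "SEC002") for i, line in enumerate(lines, 1) if pattern.search(line)]

def fmt003_bare_fences(lines):
    """FMT003: flag opening ``` fences that carry no language tag (Info)."""
    findings, in_block = [], False
    for i, line in enumerate(lines, 1):
        if line.startswith("```"):
            if not in_block and line.strip() == "```":
                findings.append((i, "FMT003"))
            in_block = not in_block
    return findings

lines = open("CLAUDE.md", encoding="utf-8").read().splitlines()
for line_no, rule in sorted(sec002_secrets(lines) + fmt003_bare_fences(lines)):
    print(f"{rule} at line {line_no}")
```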
## Expected Output
```
CLAUDE.md Lint Report
=====================
File: /path/to/project/CLAUDE.md
Rules checked: 25
Time: 0.3s
Summary:
Errors: 2
Warnings: 5
Info: 3
Findings:
---------
[ERROR] SEC002: Potential secret detected (line 45)
│ api_key = "sk-1234567890abcdef"
│ ^^^^^^^^^^^^^^^^^^^^^^
└─ Hardcoded API key found. Use environment variable reference instead.
Suggested fix:
- api_key = "sk-1234567890abcdef"
+ api_key = $OPENAI_API_KEY # Set in environment
[ERROR] CNT001: Contradictory instructions (lines 23, 67)
│ Line 23: "Always run tests before committing"
│ Line 67: "Skip tests for documentation-only changes"
└─ These rules conflict. Clarify the exception explicitly.
Suggested fix:
+ "Always run tests before committing, except for documentation-only
+ changes (files in docs/ directory)"
[WARNING] SEC001: Hardcoded absolute path (line 12)
│ Database location: /home/user/data/myapp.db
│ ^^^^^^^^^^^^^^^^^^^^^^^^
└─ Absolute paths break portability. Use relative or variable.
Suggested fix:
- Database location: /home/user/data/myapp.db
+ Database location: ./data/myapp.db # Or $DATA_DIR/myapp.db
[WARNING] STR002: Invalid header hierarchy (line 34)
│ ### Subsection
│ (no preceding ## header)
└─ H3 header without parent H2. Add H2 or promote to H2.
[WARNING] CNT004: Duplicate content (lines 45-52, 89-96)
│ Same git workflow documented twice
└─ Remove duplicate or consolidate into single section.
[WARNING] STR005: Empty section (line 78)
│ ## Troubleshooting
│ (no content)
└─ Add content or remove empty section.
[WARNING] BPR002: No Critical Rules section
│ Missing "Critical Rules" or "Important Rules" section
└─ Add a section highlighting must-follow rules for Claude.
[INFO] FMT003: Missing code block language (line 56)
│ ```
│ npm install
│ ```
└─ Specify language for syntax highlighting: ```bash
[INFO] CNT003: Overly long section (lines 100-215)
│ "Architecture" section is 115 lines
└─ Consider breaking into subsections or condensing.
[INFO] FMT001: Inconsistent header styles
│ Line 10: "## Quick Start"
│ Line 25: "## Architecture:"
│ (colon suffix inconsistent)
└─ Standardize header format throughout document.
---
Auto-fixable: 4 issues (run with --fix)
Manual review required: 6 issues
Run `/config-lint --fix` to apply automatic fixes.
```
## Options
| Option | Description |
|--------|-------------|
| `--fix` | Automatically fix auto-fixable issues |
| `--rules=LIST` | Check only specified rule categories |
| `--ignore=LIST` | Skip specified rules (e.g., `--ignore=FMT001,FMT002`) |
| `--severity=LEVEL` | Show only issues at or above level (error/warning/info) |
| `--format=FORMAT` | Output format: `text` (default), `json`, `sarif` |
| `--config=FILE` | Use custom lint configuration |
| `--strict` | Treat warnings as errors |
+| `--fix` | Auto-fix issues |
+| `--rules=LIST` | Check specific categories |
+| `--ignore=LIST` | Skip specified rules |
+| `--severity=LEVEL` | Filter by severity |
+| `--strict` | Treat warnings as errors |
## Rule Categories
Use `--rules` to focus on specific areas:
```
/config-lint --rules=security # Only security checks
/config-lint --rules=structure # Only structure checks
/config-lint --rules=security,content # Multiple categories
```
Available categories:
- `security` - SEC rules
- `structure` - STR rules
- `content` - CNT rules
- `format` - FMT rules
- `bestpractice` - BPR rules
## Custom Configuration
Create `.claude-lint.json` in project root:
```json
{
"rules": {
"SEC001": "warning",
"FMT001": "off",
"CNT003": {
"severity": "warning",
"maxLines": 150
}
},
"ignore": [
"FMT*"
],
"requiredSections": [
"Quick Start",
"Critical Rules",
"Project Overview"
]
}
```
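A sketch of how such a configuration could be applied, assuming the key names from the example above; the merge semantics (per-rule overrides, `off`, glob ignores) are an interpretation, not a documented contract:
```python
import json
from fnmatch import fnmatch
from pathlib import Path

def effective_severity(rule_id, default, config):
    """Return the severity to report for rule_id, or None if disabled."""
    if any(fnmatch(rule_id, pat) for pat in config.get("ignore", [])):
        return None  # e.g. "FMT*" silences every FMT rule
    override = config.get("rules", {}).get(rule_id, default)
    if isinstance(override, dict):  # e.g. CNT003 with a custom maxLines
        override = override.get("severity", default)
    return None if override == "off" else override

config = json.loads(Path(".claude-lint.json").read_text())
print(effective_severity("SEC001", "warning", config))  # "warning"
print(effective_severity("FMT001", "info", config))     # None (matches "FMT*")
```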
## Anti-Pattern Examples
### Hardcoded Secrets (SEC002)
```markdown
# BAD
API_KEY=sk-1234567890abcdef
# GOOD
API_KEY=$OPENAI_API_KEY # Set via environment
```
### Hardcoded Paths (SEC001)
```markdown
# BAD
Config file: /home/john/projects/myapp/config.yml
# GOOD
Config file: ./config.yml
Config file: $PROJECT_ROOT/config.yml
```
### Contradictory Rules (CNT001)
```markdown
# BAD
- Always use TypeScript
- JavaScript files are acceptable for scripts
# GOOD
- Always use TypeScript for source code
- JavaScript (.js) is acceptable only for config files and scripts
```
### Vague Instructions (CNT002)
```markdown
# BAD
- Be careful with the database
# GOOD
- Never run DELETE without WHERE clause
- Always backup before migrations
```
### Invalid Hierarchy (STR002)
```markdown
# BAD
# Main Title
### Skipped Level
# GOOD
# Main Title
## Section
### Subsection
```
## When to Use
Run `/config-lint` when:
- Before committing CLAUDE.md changes
- During code review for CLAUDE.md modifications
- Setting up CI/CD checks for configuration files
- After major edits to catch introduced issues
- Periodically as maintenance check
+- During code review
+- Periodically as maintenance
## Integration with CI/CD
Add to your CI pipeline:
```yaml
# GitHub Actions example
- name: Lint CLAUDE.md
run: claude /config-lint --strict --format=sarif > lint-results.sarif
- name: Upload SARIF
uses: github/codeql-action/upload-sarif@v2
with:
sarif_file: lint-results.sarif
```
## Tips
1. **Start with errors** - Fix errors before warnings
2. **Use --fix carefully** - Review auto-fixes before committing
3. **Configure per-project** - Different projects have different needs
4. **Integrate in CI** - Catch issues before they reach main
5. **Review periodically** - Run lint check monthly as maintenance
## Related Commands
| Command | Relationship |
|---------|--------------|
| `/config-analyze` | Deeper content analysis (complements lint) |
| `/config-optimize` | Applies fixes and improvements |
| `/config-diff` | Shows what changed (run lint before commit) |

View File

@@ -4,255 +4,46 @@ description: Initialize a new CLAUDE.md file for a project
# Initialize CLAUDE.md
This command creates a new CLAUDE.md file tailored to your project, gathering context and generating appropriate content.
+Create a new CLAUDE.md file tailored to your project.
+## Skills to Load
+- skills/visual-header.md
+- skills/claude-md-structure.md
+- skills/pre-change-protocol.md
## Visual Output
When executing this command, display the plugin header:
+Display: `CONFIG-MAINTAINER - CLAUDE.md Initialization`
```
┌──────────────────────────────────────────────────────────────────┐
│ ⚙️ CONFIG-MAINTAINER · CLAUDE.md Initialization │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the initialization.
## What This Command Does
1. **Gather Context** - Analyzes project structure and asks clarifying questions
2. **Detect Stack** - Identifies technologies, frameworks, and tools
3. **Generate Content** - Creates tailored CLAUDE.md sections
4. **Review & Refine** - Allows customization before saving
5. **Save File** - Creates the CLAUDE.md in project root
## Usage
```
/config-init
+/config-init # Interactive
+/config-init --minimal # Minimal version
+/config-init --comprehensive # Detailed version
```
+## Workflow
+1. Analyze project structure, ask clarifying questions
+2. Detect technologies, frameworks, tools
+3. Generate tailored CLAUDE.md sections
+4. Allow review and customization
+5. Save file in project root
Or with options:
```
/config-init --template=api # Use API project template
/config-init --minimal # Create minimal version
/config-init --comprehensive # Create detailed version
```
## Initialization Workflow
```
CLAUDE.md Initialization
========================
Step 1: Project Analysis
------------------------
Scanning project structure...
Detected:
- Language: Python 3.11
- Framework: FastAPI
- Package Manager: pip (requirements.txt found)
- Testing: pytest
- Docker: Yes (Dockerfile found)
- Git: Yes (.git directory)
Step 2: Clarifying Questions
----------------------------
1. Project Description:
What does this project do? (1-2 sentences)
> [User provides description]
2. Build/Run Commands:
Detected commands - are these correct?
- Install: pip install -r requirements.txt
- Test: pytest
- Run: uvicorn main:app --reload
[Y/n/edit]
3. Critical Rules:
Any rules Claude MUST follow?
Examples: "Never modify migrations", "Always use type hints"
> [User provides rules]
4. Sensitive Areas:
Any files/directories Claude should be careful with?
> [User provides or skips]
Step 3: Generate CLAUDE.md
--------------------------
Generating content based on:
- Project type: FastAPI web API
- Detected technologies
- Your provided context
Preview:
---
# CLAUDE.md
## Project Overview
[Generated description]
## Quick Start
```bash
pip install -r requirements.txt # Install dependencies
pytest # Run tests
uvicorn main:app --reload # Start dev server
```
## Architecture
[Generated based on structure]
## Critical Rules
[Your provided rules]
## File Structure
[Generated from analysis]
---
Save this CLAUDE.md? [Y/n/edit]
Step 4: Complete
----------------
CLAUDE.md created successfully!
Location: /path/to/project/CLAUDE.md
Lines: 87
Score: 85/100 (following best practices)
Recommendations:
- Run /config-analyze periodically to maintain quality
- Update when adding major features
- Add troubleshooting section as issues are discovered
```
## Templates
+| Template | Sections |
+|----------|----------|
+| Minimal | Overview, Quick Start, Critical Rules, Pre-Change Protocol |
+| Standard | + Architecture, Common Operations, File Structure |
+| Comprehensive | + Troubleshooting, Integration Points, Workflow |
+**Note:** Pre-Change Protocol is MANDATORY in all templates.
### Minimal Template
For small projects or when starting fresh:
- Project Overview (required)
- Quick Start (required)
- Critical Rules (required)
- **Pre-Change Protocol (required)**
### Standard Template (default)
For typical projects:
- Project Overview
- Quick Start
- Architecture
- Critical Rules
- **Pre-Change Protocol**
- Common Operations
- File Structure
### Comprehensive Template
For large or complex projects:
- All standard sections plus:
- Detailed Architecture
- **Pre-Change Protocol**
- Troubleshooting
- Integration Points
- Development Workflow
- Deployment Notes
### Pre-Change Protocol Section (MANDATORY in ALL templates)
**This section MUST be included in every generated CLAUDE.md:**
```markdown
## ⛔ MANDATORY: Before Any Code Change
**Claude MUST show this checklist BEFORE editing any file:**
### 1. Impact Search Results
Run and show output of:
\`\`\`bash
grep -rn "PATTERN" --include="*.sh" --include="*.md" --include="*.json" --include="*.py" | grep -v ".git"
\`\`\`
### 2. Files That Will Be Affected
Numbered list of every file to be modified, with the specific change for each.
### 3. Files Searched But Not Changed (and why)
Proof that related files were checked and determined unchanged.
### 4. Documentation That References This
List of docs that mention this feature/script/function.
**User verifies this list before Claude proceeds. If Claude skips this, stop immediately.**
### After Changes
Run the same grep and show results proving no references remain unaddressed.
```
**Rationale:** This protocol prevents incomplete changes where Claude modifies some files but misses others that reference the same code, causing cascading bugs.
## Auto-Detection
The command automatically detects:
| What | How |
|------|-----|
| Language | File extensions, config files |
| Framework | package.json, requirements.txt, etc. |
| Build system | Makefile, package.json scripts, etc. |
| Testing | pytest.ini, jest.config, etc. |
| Docker | Dockerfile, docker-compose.yml |
| Database | Connection strings, ORM configs |
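Detection of this kind typically reduces to checking for marker files. A minimal sketch (the marker list is illustrative and far from exhaustive):
```python
from pathlib import Path

MARKERS = {
    "requirements.txt": ("language", "Python"),
    "package.json": ("language", "JavaScript/TypeScript"),
    "Cargo.toml": ("language", "Rust"),
    "pytest.ini": ("testing", "pytest"),
    "jest.config.js": ("testing", "jest"),
    "Dockerfile": ("docker", "yes"),
    "docker-compose.yml": ("docker", "compose"),
    "Makefile": ("build", "make"),
}

def detect_stack(root="."):
    """Map each category to the first marker found in the project root."""
    found = {}
    for marker, (category, value) in MARKERS.items():
        if (Path(root) / marker).exists():
            found.setdefault(category, value)
    return found

print(detect_stack())  # e.g. {'language': 'Python', 'testing': 'pytest', 'docker': 'yes'}
```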
## Customization
After generation, you can:
- Edit any section before saving
- Add additional sections
- Remove unnecessary sections
- Adjust detail level
- Add project-specific content
## When to Use
Run `/config-init` when:
- Starting a new project
- Project lacks CLAUDE.md
- Existing CLAUDE.md is outdated/poor quality
- Taking over an unfamiliar project
+- Taking over unfamiliar project
## Tips
1. **Provide accurate description** - This shapes the whole file
2. **Include critical rules** - What must Claude never do?
3. **Review generated content** - Auto-detection isn't perfect
4. **Start minimal, grow as needed** - Add sections when required
5. **Keep it current** - Update when project changes significantly
## Examples
### For a CLI Tool
```
/config-init
> Description: CLI tool for managing cloud infrastructure
> Critical rules: Never delete resources without confirmation, always show dry-run first
```
### For a Web App
```
/config-init
> Description: E-commerce platform with React frontend and Node.js backend
> Critical rules: Never expose API keys, always validate user input, follow the existing component patterns
```
### For a Library
```
/config-init --template=minimal
> Description: Python library for parsing log files
> Critical rules: Maintain backward compatibility, all public functions need docstrings
```

View File

@@ -4,231 +4,47 @@ description: Optimize CLAUDE.md structure and content
# Optimize CLAUDE.md
This command automatically optimizes your project's CLAUDE.md file based on best practices and identified issues.
+Automatically optimize CLAUDE.md based on best practices.
+## Skills to Load
+- skills/visual-header.md
+- skills/optimization-patterns.md
+- skills/pre-change-protocol.md
+- skills/claude-md-structure.md
## Visual Output
When executing this command, display the plugin header:
+Display: `CONFIG-MAINTAINER - CLAUDE.md Optimization`
```
┌──────────────────────────────────────────────────────────────────┐
│ ⚙️ CONFIG-MAINTAINER · CLAUDE.md Optimization │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the optimization.
## What This Command Does
1. **Analyze Current File** - Identifies all optimization opportunities
2. **Plan Changes** - Determines what to restructure, condense, or add
3. **Show Preview** - Displays before/after comparison
4. **Apply Changes** - Updates the file with your approval
5. **Verify Results** - Confirms improvements achieved
## Usage
```
/config-optimize
+/config-optimize # Full optimization
+/config-optimize --condense # Reduce verbosity
+/config-optimize --dry-run # Preview only
```
+## Workflow
+1. Identify optimization opportunities
+2. Plan restructure, condense, or add actions
+3. Show before/after preview
+4. Apply changes with approval
+5. Verify improvements
Or specify specific optimizations:
```
/config-optimize --condense # Focus on reducing verbosity
/config-optimize --restructure # Focus on reorganization
/config-optimize --add-missing # Focus on adding missing sections
```
## Optimization Actions
### Restructure
- Reorder sections by importance
- Group related content together
- Improve header hierarchy
- Add navigation aids
### Condense
- Remove redundant explanations
- Convert verbose text to bullet points
- Eliminate duplicate content
- Shorten overly detailed sections
### Enhance
- Add missing essential sections
- **Add Pre-Change Protocol if missing (HIGH PRIORITY)**
- Improve unclear instructions
- Add helpful examples
- Highlight critical rules
### Format
- Standardize header styles
- Fix code block formatting
- Align list formatting
- Improve table layouts
## Expected Output
```
CLAUDE.md Optimization
======================
Current Analysis:
- Score: 72/100
- Lines: 245
- Issues: 4
Planned Optimizations:
1. ADD: Quick Start section (new, ~15 lines)
+ Build command
+ Test command
+ Run command
2. CONDENSE: Testing section (34 → 8 lines)
Before: Verbose explanation with redundant setup info
After: Concise command reference with comments
3. REMOVE: Duplicate git workflow (lines 189-200)
Keeping: Original at lines 102-115
4. FORMAT: Standardize headers
Changing 12 headers from "## Title:" to "## Title"
Preview Changes? [Y/n] y
--- CLAUDE.md (before)
+++ CLAUDE.md (after)
@@ -1,5 +1,20 @@
# CLAUDE.md
+## Quick Start
+
+```bash
+# Install dependencies
+pip install -r requirements.txt
+
+# Run tests
+pytest
+
+# Start development server
+python manage.py runserver
+```
+
## Project Overview
...
[Full diff shown]
Apply these changes? [Y/n] y
Optimization Complete!
- Previous score: 72/100
- New score: 89/100
- Lines reduced: 245 → 198 (-19%)
- Issues resolved: 4/4
Backup saved to: .claude/backups/CLAUDE.md.2025-01-18
```
## Safety Features
### Backup Creation
- Automatic backup before changes
- Stored in `.claude/backups/`
- Easy restoration if needed
### Preview Mode
- All changes shown before applying
- Diff format for easy review
- Option to approve/reject
### Selective Application
- Can apply individual changes
- Skip specific optimizations
- Iterative refinement
## Options
| Option | Description |
|--------|-------------|
| `--dry-run` | Show changes without applying |
| `--no-backup` | Skip backup creation |
| `--aggressive` | Maximum condensation |
| `--preserve-comments` | Keep all existing comments |
| `--section=NAME` | Optimize specific section only |
+| `--dry-run` | Preview without applying |
+| `--no-backup` | Skip backup |
+| `--aggressive` | Maximum condensation |
+| `--section=NAME` | Optimize specific section |
+**Priority:** Add Pre-Change Protocol if missing.
+## Safety
+- Auto backup to `.claude/backups/`
+- Preview before applying
## Pre-Change Protocol (Mandatory Addition)
**If CLAUDE.md is missing the Pre-Change Protocol section, optimization MUST add it.**
This is the highest priority enhancement because it prevents cascading bugs from incomplete code changes.
### Detection
Search CLAUDE.md for:
- "Pre-Change" or "Before Any Code Change" in headers
- References to impact search or grep verification
- User verification checkpoint
### If Missing
Add this section (position: after Critical Rules, before Common Operations):
```markdown
## ⛔ MANDATORY: Before Any Code Change
**Claude MUST show this checklist BEFORE editing any file:**
### 1. Impact Search Results
Run and show output of:
\`\`\`bash
grep -rn "PATTERN" --include="*.sh" --include="*.md" --include="*.json" --include="*.py" | grep -v ".git"
\`\`\`
### 2. Files That Will Be Affected
Numbered list of every file to be modified, with the specific change for each.
### 3. Files Searched But Not Changed (and why)
Proof that related files were checked and determined unchanged.
### 4. Documentation That References This
List of docs that mention this feature/script/function.
**User verifies this list before Claude proceeds. If Claude skips this, stop immediately.**
### After Changes
Run the same grep and show results proving no references remain unaddressed.
```
## When to Use
Run `/config-optimize` when:
- Analysis shows score below 70
- File has grown too long
- Structure needs reorganization
- Missing critical sections
- After major refactoring
## Best Practices
1. **Run analysis first** - Understand current state
2. **Review preview carefully** - Ensure nothing important lost
3. **Test after changes** - Verify Claude follows instructions
4. **Keep backups** - Restore if issues arise
5. **Iterate** - Multiple small optimizations beat one large one
## Rollback
If optimization causes issues:
```bash
# Restore from backup
cp .claude/backups/CLAUDE.md.TIMESTAMP ./CLAUDE.md
```
Or ask:
```
Restore CLAUDE.md from the most recent backup
```

View File

@@ -0,0 +1,112 @@
# CLAUDE.md Analysis Workflow
This skill defines the workflow for analyzing CLAUDE.md files.
## Analysis Steps
1. **Locate File** - Find CLAUDE.md in project root
2. **Parse Structure** - Extract headers and sections
3. **Evaluate Content** - Score against criteria
4. **Detect Plugins** - Identify active marketplace plugins
5. **Check Integration** - Verify plugin references
6. **Generate Report** - Provide scored assessment
## Content Analysis
### What to Check
| Area | Check For |
|------|-----------|
| Structure | Header hierarchy, section ordering, grouping |
| Clarity | Clear instructions, examples, unambiguous language |
| Completeness | Required sections present, workflows documented |
| Conciseness | No redundancy, efficient density, appropriate length |
### Required Sections Check
1. Project Overview - present?
2. Quick Start - present with commands?
3. Critical Rules - present?
4. **Pre-Change Protocol** - present? (HIGH PRIORITY if missing)
## Plugin Integration Analysis
### Detection Method
1. Read `.claude/settings.local.json` for enabled MCP servers
2. Map MCP servers to plugins:
- `gitea` -> projman
- `netbox` -> cmdb-assistant
3. Check for hook-based plugins (project-hygiene)
4. Scan CLAUDE.md for plugin references
### Coverage Scoring
For each detected plugin, verify CLAUDE.md contains:
- Plugin section header or mention
- Available commands documentation
- MCP tools reference (if applicable)
- Usage guidelines
Coverage = (plugins referenced / plugins detected) * 100%
## Report Format
```
CLAUDE.md Analysis Report
=========================
File: /path/to/project/CLAUDE.md
Lines: N
Last Modified: YYYY-MM-DD
Overall Score: NN/100
Category Scores:
- Structure: NN/25 (Rating)
- Clarity: NN/25 (Rating)
- Completeness: NN/25 (Rating)
- Conciseness: NN/25 (Rating)
Strengths:
+ [Positive finding]
Issues Found:
N. [SEVERITY] Issue description (location)
Context explaining the problem.
Impact: What happens if not fixed.
Recommendations:
N. Action to take (priority: high/medium/low)
---
Plugin Integration Analysis
===========================
Detected Active Plugins:
[check] plugin-name (via detection method)
Plugin Coverage: NN% (N/N plugins referenced)
Missing Integration Content:
N. plugin-name
What to add.
```
## Issue Severity
| Level | When to Use |
|-------|-------------|
| HIGH | Missing mandatory sections, security issues |
| MEDIUM | Missing recommended content, duplicate content |
| LOW | Formatting issues, minor improvements |
## Follow-Up Actions
After analysis, offer:
1. Implement all content recommendations
2. Add missing plugin integrations
3. Do both (recommended)
4. Show preview of changes first

View File

@@ -0,0 +1,113 @@
# CLAUDE.md Structure Reference
This skill defines the standard structure, required sections, and templates for CLAUDE.md files.
## Required Sections
Every CLAUDE.md MUST have these sections:
| Section | Purpose | Priority |
|---------|---------|----------|
| Project Overview | What the project does | Required |
| Quick Start | Build/test/run commands | Required |
| Critical Rules | Must-follow constraints | Required |
| Pre-Change Protocol | Dependency check before edits | **MANDATORY** |
## Recommended Sections
| Section | When to Include |
|---------|-----------------|
| Architecture | Complex projects with multiple components |
| Common Operations | Projects with repetitive tasks |
| File Structure | Large codebases |
| Troubleshooting | Projects with known gotchas |
| Integration Points | Projects with external dependencies |
## Header Hierarchy
```
# CLAUDE.md (H1 - only one)
## Section (H2 - main sections)
### Subsection (H3 - within sections)
#### Detail (H4 - rarely needed)
```
**Rules:**
- Never skip levels (no H3 before H2)
- Maximum depth: 4 levels
- No orphaned content before first header
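As a rough illustration of these rules, a heuristic skipped-level check might look like this (it treats any line starting with `#` as a header, so it will over-match inside fenced code blocks):
```bash
# Flag headers that jump more than one level deeper than the previous one.
awk '/^#+ / {
  depth = length($1)                  # number of leading "#" characters
  if (prev && depth > prev + 1)
    printf "line %d: H%d follows H%d (skipped level)\n", NR, depth, prev
  prev = depth
}' CLAUDE.md
```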
## Templates
### Minimal Template
For small projects:
```markdown
# CLAUDE.md
## Project Overview
[Description]
## Quick Start
[Commands]
## Critical Rules
[Constraints]
## Pre-Change Protocol
[Mandatory section - see pre-change-protocol.md]
```
### Standard Template (Default)
```markdown
# CLAUDE.md
## Project Overview
## Quick Start
## Architecture
## Critical Rules
## Pre-Change Protocol
## Common Operations
## File Structure
```
### Comprehensive Template
For large projects - adds:
- Detailed Architecture
- Troubleshooting
- Integration Points
- Development Workflow
- Deployment Notes
## Auto-Detection Signals
| Technology | Detection Method |
|------------|------------------|
| Language | File extensions, config files |
| Framework | package.json, requirements.txt, Cargo.toml |
| Build system | Makefile, scripts in package.json |
| Testing | pytest.ini, jest.config.js, go.mod |
| Docker | Dockerfile, docker-compose.yml |
| Database | ORM configs, connection strings |
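A sketch of how these signals might be probed from a project root; the file names come from the table above, and the output wording is illustrative:
```bash
# Probe for common auto-detection signals in the current directory.
[ -f package.json ]     && echo "Framework: Node.js (package.json)"
[ -f requirements.txt ] && echo "Framework: Python (requirements.txt)"
[ -f Cargo.toml ]       && echo "Framework: Rust (Cargo.toml)"
[ -f Makefile ]         && echo "Build system: make (Makefile)"
[ -f pytest.ini ]       && echo "Testing: pytest (pytest.ini)"
[ -f jest.config.js ]   && echo "Testing: jest (jest.config.js)"
if [ -f Dockerfile ] || [ -f docker-compose.yml ]; then
  echo "Docker: yes"
fi
```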
## Section Content Guidelines
### Project Overview
- 1-3 sentences describing purpose
- Target audience if relevant
- Key technologies used
### Quick Start
- Install command
- Test command
- Run command
- Each with brief inline comment
### Critical Rules
- Numbered or bulleted list
- Specific, actionable constraints
- Include rationale for non-obvious rules
### Architecture
- High-level component diagram (ASCII or description)
- Data flow explanation
- Key file/directory purposes

View File

@@ -0,0 +1,97 @@
# CLAUDE.md Diff Analysis
This skill defines how to analyze and present CLAUDE.md differences.
## Comparison Modes
| Mode | Command | Description |
|------|---------|-------------|
| Working vs HEAD | `/config-diff` | Uncommitted changes |
| Working vs Commit | `--commit=REF` | Changes since specific point |
| Commit to Commit | `--from=X --to=Y` | Historical comparison |
| Branch Comparison | `--branch=NAME` | Cross-branch differences |
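One plausible mapping of these modes onto plain git, assuming CLAUDE.md sits at the repository root (`abc123`, `def456`, and `main` are placeholder refs):
```bash
git diff HEAD -- CLAUDE.md            # Working vs HEAD (uncommitted changes)
git diff abc123 -- CLAUDE.md          # Working vs commit (--commit=abc123)
git diff abc123..def456 -- CLAUDE.md  # Commit to commit (--from/--to)
git diff main -- CLAUDE.md            # Branch comparison (--branch=main)
```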
## Change Indicators
| Symbol | Meaning |
|--------|---------|
| `+` | Line added |
| `-` | Line removed |
| `@@` | Location marker (line numbers) |
| `[MODIFIED]` | Section has changes |
| `[ADDED]` | New section created |
| `[REMOVED]` | Section deleted |
| `[UNCHANGED]` | No changes to section |
## Impact Categories
| Category | Meaning |
|----------|---------|
| NEW REQUIREMENT | Claude will need to do something new |
| REMOVED REQUIREMENT | Claude no longer needs to do something |
| MODIFIED | Existing behavior changed |
| NEW RULE | New constraint added |
| RELAXED RULE | Constraint removed or softened |
## Report Format
```
CLAUDE.md Diff Report
=====================
File: /path/to/project/CLAUDE.md
Comparing: [mode description]
Commit: [ref] "[message]" (time ago)
Summary:
- Lines added: N
- Lines removed: N
- Net change: +/-N lines
- Sections affected: N
Section Changes:
----------------
## Section Name [STATUS]
+/- Change description
Detailed Diff:
--------------
--- CLAUDE.md (before)
+++ CLAUDE.md (after)
@@ -N,M +N,M @@
context
-removed
+added
context
Behavioral Impact:
------------------
These changes will affect Claude's behavior:
N. [CATEGORY] Description of impact
```
## Section-Focused View
When using `--section=NAME`:
- Filter diff to only that section
- Show section-specific statistics
- Highlight behavioral impact for that area
## Troubleshooting
### No changes detected
- File matches comparison target
- Verify comparing correct commits
### File not found in commit
- CLAUDE.md didn't exist at that point
- Use `git log -- CLAUDE.md` to find creation
### Not a git repository
- Command requires git history
- Initialize git or use file backup comparison

View File

@@ -0,0 +1,136 @@
# CLAUDE.md Lint Rules
This skill defines all linting rules for validating CLAUDE.md files.
## Rule Categories
### Security Rules (SEC)
| Rule | Description | Severity | Auto-fix |
|------|-------------|----------|----------|
| SEC001 | Hardcoded absolute paths | Warning | Yes |
| SEC002 | Potential secrets/API keys | Error | No |
| SEC003 | Hardcoded IP addresses | Warning | No |
| SEC004 | Exposed credentials patterns | Error | No |
| SEC005 | Hardcoded URLs with tokens | Error | No |
| SEC006 | Environment variable values (not names) | Warning | No |
### Structure Rules (STR)
| Rule | Description | Severity | Auto-fix |
|------|-------------|----------|----------|
| STR001 | Missing required sections | Error | Yes |
| STR002 | Invalid header hierarchy (h3 before h2) | Warning | Yes |
| STR003 | Orphaned content before first header | Info | No |
| STR004 | Excessive nesting depth (>4 levels) | Warning | No |
| STR005 | Empty sections | Warning | Yes |
| STR006 | Missing section content | Warning | No |
### Content Rules (CNT)
| Rule | Description | Severity | Auto-fix |
|------|-------------|----------|----------|
| CNT001 | Contradictory instructions | Error | No |
| CNT002 | Vague or ambiguous rules | Warning | No |
| CNT003 | Overly long sections (>100 lines) | Info | No |
| CNT004 | Duplicate content | Warning | No |
| CNT005 | TODO/FIXME in production config | Warning | No |
| CNT006 | Outdated version references | Info | No |
| CNT007 | Broken internal links | Warning | No |
### Format Rules (FMT)
| Rule | Description | Severity | Auto-fix |
|------|-------------|----------|----------|
| FMT001 | Inconsistent header styles | Info | Yes |
| FMT002 | Inconsistent list markers | Info | Yes |
| FMT003 | Missing code block language | Info | Yes |
| FMT004 | Trailing whitespace | Info | Yes |
| FMT005 | Missing blank lines around headers | Info | Yes |
| FMT006 | Inconsistent indentation | Info | Yes |
### Best Practice Rules (BPR)
| Rule | Description | Severity | Auto-fix |
|------|-------------|----------|----------|
| BPR001 | No Quick Start section | Warning | No |
| BPR002 | No Critical Rules section | Warning | No |
| BPR003 | Instructions without examples | Info | No |
| BPR004 | Commands without explanation | Info | No |
| BPR005 | Rules without rationale | Info | No |
| BPR006 | Missing plugin integration docs | Info | No |
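Several of these rules reduce to simple grep probes. A sketch with deliberately rough regexes (real detection would need more careful patterns):
```bash
# SEC002: likely hardcoded API keys (illustrative pattern only)
grep -nE 'sk-[A-Za-z0-9]{16,}|[Aa][Pp][Ii]_?[Kk][Ee][Yy] *= *[A-Za-z0-9]{8,}' CLAUDE.md

# SEC001: hardcoded absolute home paths
grep -nE '/home/[A-Za-z0-9_-]+/|/Users/[A-Za-z0-9_-]+/' CLAUDE.md

# FMT004: trailing whitespace
grep -nE '[ \t]+$' CLAUDE.md
```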
## Anti-Pattern Examples
### SEC002: Hardcoded Secrets
```markdown
# BAD
API_KEY=sk-1234567890abcdef
# GOOD
API_KEY=$OPENAI_API_KEY # Set via environment
```
### SEC001: Hardcoded Paths
```markdown
# BAD
Config file: /home/john/projects/myapp/config.yml
# GOOD
Config file: ./config.yml
Config file: $PROJECT_ROOT/config.yml
```
### CNT001: Contradictory Rules
```markdown
# BAD
- Always use TypeScript
- JavaScript files are acceptable for scripts
# GOOD
- Always use TypeScript for source code
- JavaScript (.js) is acceptable only for config files and scripts
```
### CNT002: Vague Instructions
```markdown
# BAD
- Be careful with the database
# GOOD
- Never run DELETE without WHERE clause
- Always backup before migrations
```
### STR002: Invalid Hierarchy
```markdown
# BAD
# Main Title
### Skipped Level
# GOOD
# Main Title
## Section
### Subsection
```
## Output Format
```
[SEVERITY] RULE_ID: Description (line N)
| Context line showing issue
| ^^^^^^ indicator
+-- Explanation of problem
Suggested fix:
- old line
+ new line
```
## Severity Levels
| Level | Meaning | Action |
|-------|---------|--------|
| Error | Must fix | Blocks commit |
| Warning | Should fix | Review recommended |
| Info | Consider fixing | Optional improvement |
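If the Error level is enforced via a pre-commit hook, the wiring might look like this; `claude-md-lint` is a placeholder name, not a CLI this plugin ships:
```bash
#!/bin/sh
# .git/hooks/pre-commit (hypothetical wiring)
if ! claude-md-lint CLAUDE.md; then
  echo "CLAUDE.md has lint errors; commit blocked." >&2
  exit 1
fi
```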

View File

@@ -0,0 +1,136 @@
# CLAUDE.md Optimization Patterns
This skill defines patterns for optimizing CLAUDE.md files.
## Optimization Categories
### Restructure
- Reorder sections by importance (Quick Start near top)
- Group related content together
- Improve header hierarchy
- Add navigation aids (TOC for long files)
### Condense
- Remove redundant explanations
- Convert verbose text to bullet points
- Eliminate duplicate content
- Shorten overly detailed sections
### Enhance
- Add missing essential sections
- **Add Pre-Change Protocol if missing (HIGH PRIORITY)**
- Improve unclear instructions
- Add helpful examples
- Highlight critical rules
### Format
- Standardize header styles (no trailing colons)
- Fix code block formatting (add language tags)
- Align list formatting (consistent markers)
- Improve table layouts
## Scoring Criteria
### Structure (25 points)
- Logical section ordering
- Clear header hierarchy
- Easy navigation
- Appropriate grouping
### Clarity (25 points)
- Clear instructions
- Good examples
- Unambiguous language
- Appropriate detail level
### Completeness (25 points)
- Project overview present
- Quick start commands documented
- Critical rules highlighted
- Key workflows covered
- Pre-Change Protocol present (MANDATORY)
### Conciseness (25 points)
- No unnecessary repetition
- Efficient information density
- Appropriate length for project size
- No generic filler content
## Score Interpretation
| Score | Rating | Action |
|-------|--------|--------|
| 90-100 | Excellent | Maintenance only |
| 70-89 | Good | Minor improvements |
| 50-69 | Needs Work | Optimization recommended |
| Below 50 | Poor | Major restructuring needed |
## Common Optimizations
### Verbose to Concise
```markdown
# Before (34 lines)
## Running Tests
To run the tests, you first need to make sure you have all the
dependencies installed. The dependencies are listed in requirements.txt.
Once you have installed the dependencies, you can run the tests using
pytest. Pytest will automatically discover all test files...
# After (8 lines)
## Running Tests
```bash
pip install -r requirements.txt # Install dependencies
pytest # Run all tests
pytest -v # Verbose output
pytest tests/unit/ # Run specific directory
```
```
### Duplicate Removal
- Keep first occurrence
- Add cross-reference if needed: "See [Section Name] above"
### Header Standardization
```markdown
# Before
## Quick Start:
## Architecture
## Testing:
# After
## Quick Start
## Architecture
## Testing
```
### Code Block Enhancement
```markdown
# Before
```
npm install
npm test
```
# After
```bash
npm install # Install dependencies
npm test # Run test suite
```
```
## Safety Features
### Backup Creation
- Always backup before changes
- Store in `.claude/backups/CLAUDE.md.TIMESTAMP`
- Easy restoration if needed
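A minimal sketch of the backup step, assuming a `YYYYmmdd-HHMMSS` timestamp format (the exact format is not specified here):
```bash
mkdir -p .claude/backups
cp CLAUDE.md ".claude/backups/CLAUDE.md.$(date +%Y%m%d-%H%M%S)"
```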
### Preview Mode
- Show all changes before applying
- Use diff format for easy review
- Allow approve/reject per change
### Selective Application
- Can apply individual changes
- Skip specific optimizations
- Iterative refinement supported

View File

@@ -0,0 +1,83 @@
# Pre-Change Protocol
This skill defines the mandatory Pre-Change Protocol section that MUST be included in every CLAUDE.md file.
## Why This Is Mandatory
The Pre-Change Protocol prevents the #1 cause of bugs from AI-assisted coding: **incomplete changes where Claude modifies some files but misses others that reference the same code**.
Without this protocol:
- Claude may rename a function but miss callers
- Claude may modify a config but miss documentation
- Claude may update a schema but miss dependent code
## Detection
Search CLAUDE.md for these indicators:
- Header containing "Pre-Change" or "Before Any Code Change"
- References to `grep -rn` or impact search
- Checklist with "Files That Will Be Affected"
- User verification checkpoint
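A sketch of that detection as a single case-insensitive grep, using the indicator phrases above:
```bash
if grep -qiE '^#{1,4} .*(pre-change|before any code change)' CLAUDE.md; then
  echo "Pre-Change Protocol: present"
else
  echo "Pre-Change Protocol: MISSING (flag as HIGH PRIORITY)"
fi
```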
## Required Section Content
```markdown
## MANDATORY: Before Any Code Change
**Claude MUST show this checklist BEFORE editing any file:**
### 1. Impact Search Results
Run and show output of:
```bash
grep -rn "PATTERN" --include="*.sh" --include="*.md" --include="*.json" --include="*.py" | grep -v ".git"
```
### 2. Files That Will Be Affected
Numbered list of every file to be modified, with the specific change for each.
### 3. Files Searched But Not Changed (and why)
Proof that related files were checked and determined unchanged.
### 4. Documentation That References This
List of docs that mention this feature/script/function.
**User verifies this list before Claude proceeds. If Claude skips this, stop immediately.**
### After Changes
Run the same grep and show results proving no references remain unaddressed.
```
## Placement
Insert Pre-Change Protocol section:
- **After:** Critical Rules section
- **Before:** Common Operations section
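A sketch of the insertion, assuming a literal `## Common Operations` header follows Critical Rules and the protocol text is saved as `protocol.md` (both are assumptions):
```bash
# Splice the protocol in just before the Common Operations section.
awk '/^## Common Operations$/ {
  while ((getline line < "protocol.md") > 0) print line
  print ""
}
{ print }' CLAUDE.md > CLAUDE.md.new && mv CLAUDE.md.new CLAUDE.md
```
If the file has no Common Operations section, nothing is inserted, so a fallback (append after Critical Rules) would be needed in practice.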
## If Missing During Analysis
Flag as **HIGH PRIORITY** issue:
```
1. [HIGH] Missing Pre-Change Protocol section
CLAUDE.md lacks mandatory dependency-check protocol.
Impact: Claude may miss file references when making changes,
leading to broken dependencies and incomplete updates.
Recommendation: Add Pre-Change Protocol section immediately.
This is the #1 cause of cascading bugs from incomplete changes.
```
## If Missing During Optimization
**Automatically add the section** at the correct position. This is the highest priority enhancement.
## Variations
The exact wording can vary, but these elements are required:
1. **Search requirement** - Must run grep/search before changes
2. **Affected files list** - Must enumerate all files to modify
3. **Non-affected files proof** - Must show what was checked but unchanged
4. **Documentation check** - Must list referencing docs
5. **User checkpoint** - Must pause for user verification
6. **Post-change verification** - Must verify after changes

View File

@@ -0,0 +1,52 @@
# Visual Header Display
This skill defines the standard visual header for claude-config-maintainer commands.
## Header Format
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - [Command Name] |
+-----------------------------------------------------------------+
```
## Command-Specific Headers
### /config-analyze
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - CLAUDE.md Analysis |
+-----------------------------------------------------------------+
```
### /config-optimize
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - CLAUDE.md Optimization |
+-----------------------------------------------------------------+
```
### /config-lint
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - CLAUDE.md Lint |
+-----------------------------------------------------------------+
```
### /config-diff
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - CLAUDE.md Diff |
+-----------------------------------------------------------------+
```
### /config-init
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - CLAUDE.md Initialization |
+-----------------------------------------------------------------+
```
## Usage
Display the header at the start of command execution, before any analysis or output.
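A sketch of rendering these headers generically, assuming the 65-dash width shown above (`name` is whatever command-specific label applies):
```bash
name="CLAUDE.md Lint"
bar=$(printf '%.0s-' $(seq 1 65))
printf '+%s+\n| %-63s |\n+%s+\n' "$bar" "CONFIG-MAINTAINER - $name" "$bar"
```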

View File

@@ -1,21 +1,20 @@
# CMDB Assistant Agent # CMDB Assistant Agent
You are an infrastructure management assistant specialized in NetBox CMDB operations. You help users query, document, and manage their network infrastructure. You are an infrastructure management assistant specialized in NetBox CMDB operations.
## Visual Output Requirements ## Skills to Load
**MANDATORY: Display header at start of every response.** - `skills/visual-header.md`
- `skills/netbox-patterns/SKILL.md`
- `skills/mcp-tools-reference.md`
``` ## Visual Output
┌──────────────────────────────────────────────────────────────────┐
│ 🖥️ CMDB-ASSISTANT · Infrastructure Management Execute `skills/visual-header.md` with context "Infrastructure Management".
└──────────────────────────────────────────────────────────────────┘
```
## Capabilities ## Capabilities
You have full access to NetBox via MCP tools covering: Full access to NetBox via MCP tools covering:
- **DCIM**: Sites, locations, racks, devices, interfaces, cables, power - **DCIM**: Sites, locations, racks, devices, interfaces, cables, power
- **IPAM**: IP addresses, prefixes, VLANs, VRFs, ASNs, services - **IPAM**: IP addresses, prefixes, VLANs, VRFs, ASNs, services
- **Circuits**: Providers, circuits, terminations - **Circuits**: Providers, circuits, terminations
@@ -29,183 +28,66 @@ You have full access to NetBox via MCP tools covering:
### Query Operations ### Query Operations
- Start with list operations to find objects - Start with list operations to find objects
- Use filters to narrow results (name, status, site_id, etc.) - Use filters to narrow results
- Follow up with get operations for detailed information - Follow up with get operations for details
- Present results in clear, organized format
### Create Operations ### Create Operations
- Always confirm required fields with user before creating - Confirm required fields before creating
- Look up related object IDs (device_type, role, site) first - Look up related object IDs first
- Provide the created object details after success - Suggest follow-up actions after success
- Suggest follow-up actions (add interfaces, assign IPs, etc.)
### Update Operations ### Update Operations
- Show current values before updating - Show current values before updating
- Confirm changes with user - Confirm changes with user
- Report what was changed after success
### Delete Operations ### Delete Operations
- ALWAYS ask for explicit confirmation before deleting - ALWAYS ask for explicit confirmation
- Show what will be deleted - Warn about dependent objects
- Warn about dependent objects that may be affected
## Common Workflows
### Document a New Server
1. Create device with `dcim_create_device`
2. Add interfaces with `dcim_create_interface`
3. Assign IPs with `ipam_create_ip_address`
4. Add journal entry with `extras_create_journal_entry`
### Allocate IP Space
1. Find available prefixes with `ipam_list_available_prefixes`
2. Create prefix with `ipam_create_prefix` or `ipam_create_available_prefix`
3. Allocate IPs with `ipam_create_available_ip`
### Audit Infrastructure
1. List recent changes with `extras_list_object_changes`
2. Review devices by site with `dcim_list_devices`
3. Check IP utilization with prefix operations
### Cable Management
1. List interfaces with `dcim_list_interfaces`
2. Create cable with `dcim_create_cable`
3. Verify connectivity
## Response Format
When presenting data:
- Use tables for lists
- Highlight key fields (name, status, IPs)
- Include IDs for reference in follow-up operations
- Suggest next steps when appropriate
## Error Handling
- If an operation fails, explain why clearly
- Suggest corrective actions
- For permission errors, note what access is needed
- For validation errors, explain required fields/formats
## Data Quality Validation ## Data Quality Validation
**IMPORTANT:** Load the `netbox-patterns` skill for best practice reference. Reference `skills/netbox-patterns/SKILL.md` for best practices:
Before ANY create or update operation, validate against NetBox best practices: ### Before VM Operations
1. Cluster/Site assignment required
2. Recommend tenant if not provided
3. Check naming convention
### VM Operations ### Before Device Operations
1. Site is REQUIRED
2. Recommend platform
3. Check naming convention
4. Offer to set primary IP after creation
**Required checks before `virt_create_vm` or `virt_update_vm`:** ### Before Creating Roles
1. List existing roles first
2. Recommend consolidation if >10 specific roles
1. **Cluster/Site Assignment** - VMs must have either cluster or site ## Dependency Order
2. **Tenant Assignment** - Recommend if not provided
3. **Platform Assignment** - Recommend for OS tracking
4. **Naming Convention** - Check against `{env}-{app}-{number}` pattern
5. **Role Assignment** - Recommend appropriate role
**If user provides no site/tenant, ASK:**
> "This VM has no site or tenant assigned. NetBox best practices recommend:
> - **Site**: For location-based queries and power budgeting
> - **Tenant**: For resource isolation and ownership tracking
>
> Would you like me to:
> 1. Assign to an existing site/tenant (list available)
> 2. Create new site/tenant first
> 3. Proceed without (not recommended for production use)"
### Device Operations
**Required checks before `dcim_create_device` or `dcim_update_device`:**
1. **Site is REQUIRED** - Fail without it
2. **Platform Assignment** - Recommend for OS tracking
3. **Naming Convention** - Check against `{role}-{location}-{number}` pattern
4. **Role Assignment** - Ensure appropriate role selected
5. **After Creation** - Offer to set primary IP
### Cluster Operations
**Required checks before `virt_create_cluster`:**
1. **Site Scope** - Recommend assigning to site
2. **Cluster Type** - Ensure appropriate type selected
3. **Device Association** - Recommend linking to host device
### Role Management
**Before creating a new device role:**
1. List existing roles with `dcim_list_device_roles`
2. Check if a more general role already exists
3. Recommend role consolidation if >10 specific roles exist
**Example guidance:**
> "You're creating role 'nginx-web-server'. An existing 'web-server' role exists.
> Consider using 'web-server' and tracking nginx via the platform field instead.
> This reduces role fragmentation and improves maintainability."
## Dependency Order Enforcement
When creating multiple objects, follow this order:
Follow order from `skills/netbox-patterns/SKILL.md`:
``` ```
1. Regions → Sites → Locations → Racks 1. Regions -> Sites -> Locations -> Racks
2. Tenant Groups → Tenants 2. Tenant Groups -> Tenants
3. Manufacturers → Device Types 3. Manufacturers -> Device Types
4. Device Roles, Platforms 4. Device Roles, Platforms
5. Devices (with site, role, type) 5. Devices (with site, role, type)
6. Clusters (with type, optional site) 6. Clusters (with type, optional site)
7. VMs (with cluster) 7. VMs (with cluster)
8. Interfaces → IP Addresses → Primary IP assignment 8. Interfaces -> IP Addresses -> Primary IP
``` ```
**CRITICAL Rules:**
- NEVER create a VM before its cluster exists
- NEVER create a device before its site exists
- NEVER create an interface before its device exists
- NEVER create an IP before its interface exists (if assigning)
## Naming Convention Enforcement
When user provides a name, check against patterns:
| Object Type | Pattern | Example |
|-------------|---------|---------|
| Device | `{role}-{site}-{number}` | `web-dc1-01` |
| VM | `{env}-{app}-{number}` or `{prefix}_{service}` | `prod-api-01` |
| Cluster | `{site}-{type}` | `dc1-vmware`, `home-docker` |
| Prefix | Include purpose in description | "Production /24 for web tier" |
**If name doesn't match patterns, warn:**
> "The name 'HotServ' doesn't follow naming conventions.
> Suggested: `prod-hotserv-01` or `hotserv-cloud-01`.
> Consistent naming improves searchability and automation compatibility.
> Proceed with original name? [Y/n]"
## Duplicate Prevention ## Duplicate Prevention
Before creating objects, always check for existing duplicates: Before creating, check for existing:
``` ```
# Before creating device
dcim_list_devices name=<proposed-name> dcim_list_devices name=<proposed-name>
# Before creating VM
virt_list_vms name=<proposed-name> virt_list_vms name=<proposed-name>
# Before creating prefix
ipam_list_prefixes prefix=<proposed-prefix> ipam_list_prefixes prefix=<proposed-prefix>
``` ```
If duplicate found, inform user and suggest update instead of create.
## Available Commands ## Available Commands
Users can invoke these commands for structured workflows:
| Command | Purpose | | Command | Purpose |
|---------|---------| |---------|---------|
| `/cmdb-search <query>` | Search across all CMDB objects | | `/cmdb-search <query>` | Search across all CMDB objects |
@@ -215,3 +97,6 @@ Users can invoke these commands for structured workflows:
| `/cmdb-audit [scope]` | Data quality analysis | | `/cmdb-audit [scope]` | Data quality analysis |
| `/cmdb-register` | Register current machine | | `/cmdb-register` | Register current machine |
| `/cmdb-sync` | Sync machine state with NetBox | | `/cmdb-sync` | Sync machine state with NetBox |
| `/cmdb-topology <view>` | Generate infrastructure diagrams |
| `/change-audit [filters]` | Audit NetBox changes |
| `/ip-conflicts [scope]` | Detect IP conflicts |

View File

@@ -4,20 +4,14 @@ description: Audit NetBox changes with filtering by date, user, or object type
# CMDB Change Audit # CMDB Change Audit
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 🖥️ CMDB-ASSISTANT · Change Audit │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the audit.
Query and analyze the NetBox audit log for change tracking and compliance. Query and analyze the NetBox audit log for change tracking and compliance.
## Skills to Load
- `skills/visual-header.md`
- `skills/change-audit.md`
- `skills/mcp-tools-reference.md`
## Usage ## Usage
``` ```
@@ -33,142 +27,30 @@ Query and analyze the NetBox audit log for change tracking and compliance.
## Instructions ## Instructions
You are a change auditor that queries NetBox's object change log and generates audit reports. Execute `skills/visual-header.md` with context "Change Audit".
### MCP Tools Execute `skills/change-audit.md` which covers:
1. Parse user request for filters
2. Query object changes via MCP
3. Enrich data with detailed records
4. Analyze patterns
5. Generate report
Use these tools to query the audit log: ## Security Audit Mode
- `extras_list_object_changes` - List changes with filters:
- `user_id` - Filter by user ID
- `changed_object_type` - Filter by object type (e.g., "dcim.device", "ipam.ipaddress")
- `action` - Filter by action: "create", "update", "delete"
- `extras_get_object_change` - Get detailed change record by ID
### Common Object Types
| Category | Object Types |
|----------|--------------|
| DCIM | `dcim.device`, `dcim.interface`, `dcim.site`, `dcim.rack`, `dcim.cable` |
| IPAM | `ipam.ipaddress`, `ipam.prefix`, `ipam.vlan`, `ipam.vrf` |
| Virtualization | `virtualization.virtualmachine`, `virtualization.cluster` |
| Tenancy | `tenancy.tenant`, `tenancy.contact` |
### Workflow
1. **Parse user request** to determine filters
2. **Query object changes** using `extras_list_object_changes`
3. **Enrich data** by fetching detailed records if needed
4. **Analyze patterns** in the changes
5. **Generate report** in structured format
### Report Format
```markdown
## NetBox Change Audit Report
**Generated:** [timestamp]
**Period:** [date range or "All time"]
**Filters:** [applied filters]
### Summary
| Metric | Count |
|--------|-------|
| Total Changes | X |
| Creates | Y |
| Updates | Z |
| Deletes | W |
| Unique Users | N |
| Object Types | M |
### Changes by Action
#### Created Objects (Y)
| Time | User | Object Type | Object | Details |
|------|------|-------------|--------|---------|
| 2024-01-15 14:30 | admin | dcim.device | server-01 | Created device |
| ... | ... | ... | ... | ... |
#### Updated Objects (Z)
| Time | User | Object Type | Object | Changed Fields |
|------|------|-------------|--------|----------------|
| 2024-01-15 15:00 | john | ipam.ipaddress | 10.0.1.50/24 | status, description |
| ... | ... | ... | ... | ... |
#### Deleted Objects (W)
| Time | User | Object Type | Object | Details |
|------|------|-------------|--------|---------|
| 2024-01-14 09:00 | admin | dcim.interface | eth2 | Removed from server-01 |
| ... | ... | ... | ... | ... |
### Changes by User
| User | Creates | Updates | Deletes | Total |
|------|---------|---------|---------|-------|
| admin | 5 | 10 | 2 | 17 |
| john | 3 | 8 | 0 | 11 |
### Changes by Object Type
| Object Type | Creates | Updates | Deletes | Total |
|-------------|---------|---------|---------|-------|
| dcim.device | 2 | 5 | 0 | 7 |
| ipam.ipaddress | 4 | 3 | 1 | 8 |
### Timeline
```
2024-01-15: ████████ 8 changes
2024-01-14: ████ 4 changes
2024-01-13: ██ 2 changes
```
### Notable Patterns
- **Bulk operations:** [Identify if many changes happened in short time]
- **Unusual activity:** [Flag unexpected deletions or after-hours changes]
- **Missing audit trail:** [Note if expected changes are not logged]
### Recommendations
1. [Any security or process recommendations based on findings]
```
### Time Period Handling
When user specifies "last N days":
- The NetBox API may not have direct date filtering in `extras_list_object_changes`
- Fetch recent changes and filter client-side by the `time` field
- Note any limitations in the report
### Enriching Change Details
For detailed audit, use `extras_get_object_change` with the change ID to see:
- `prechange_data` - Object state before change
- `postchange_data` - Object state after change
- `request_id` - Links related changes in same request
### Security Audit Mode
If user asks for "security audit" or "compliance report": If user asks for "security audit" or "compliance report":
1. Focus on deletions and permission-sensitive changes - Focus on deletions and permission-sensitive changes
2. Highlight changes to critical objects (firewalls, VRFs, prefixes) - Highlight changes to critical objects (firewalls, VRFs, prefixes)
3. Flag changes outside business hours - Flag changes outside business hours
4. Identify users with high change counts - Identify users with high change counts
## Examples ## Examples
- `/change-audit` - Show recent changes (last 24 hours) - `/change-audit` - Recent changes (last 24 hours)
- `/change-audit last 7 days` - Changes in past week - `/change-audit last 7 days` - Past week
- `/change-audit by admin` - All changes by admin user - `/change-audit by admin` - All changes by admin
- `/change-audit type dcim.device` - Device changes only - `/change-audit type dcim.device` - Device changes only
- `/change-audit action delete` - All deletions - `/change-audit action delete` - All deletions
- `/change-audit object server-01` - Changes to server-01
## User Request ## User Request

View File

@@ -4,20 +4,15 @@ description: Audit NetBox data quality and identify consistency issues
# CMDB Data Quality Audit # CMDB Data Quality Audit
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 🖥️ CMDB-ASSISTANT · Data Quality Audit │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the audit.
Analyze NetBox data for quality issues and best practice violations. Analyze NetBox data for quality issues and best practice violations.
## Skills to Load
- `skills/visual-header.md`
- `skills/audit-workflow.md`
- `skills/netbox-patterns/SKILL.md`
- `skills/mcp-tools-reference.md`
## Usage ## Usage
``` ```
@@ -33,174 +28,30 @@ Analyze NetBox data for quality issues and best practice violations.
## Instructions ## Instructions
You are a data quality auditor for NetBox. Your job is to identify consistency issues and best practice violations. Execute `skills/visual-header.md` with context "Data Quality Audit".
**IMPORTANT:** Load the `netbox-patterns` skill for best practice reference. Execute `skills/audit-workflow.md` which covers:
1. Data collection via MCP
2. Quality checks by severity (CRITICAL, HIGH, MEDIUM, LOW)
3. Naming convention analysis
4. Role fragmentation analysis
5. Report generation with recommendations
### Phase 1: Data Collection ## Scope-Specific Focus
Run these MCP tool calls to gather data for analysis: | Scope | Focus |
|-------|-------|
| `all` | Full audit across all categories |
| `vms` | Virtual Machine checks only |
| `devices` | Device checks only |
| `naming` | Naming convention analysis |
| `roles` | Role fragmentation analysis |
``` ## Examples
1. virt_list_vms (no filters - get all)
2. dcim_list_devices (no filters - get all)
3. virt_list_clusters (no filters)
4. dcim_list_sites
5. tenancy_list_tenants
6. dcim_list_device_roles
7. dcim_list_platforms
```
Store the results for analysis. - `/cmdb-audit` - Full audit
- `/cmdb-audit vms` - VM-specific checks
### Phase 2: Quality Checks - `/cmdb-audit naming` - Naming conventions
Analyze collected data for these issues by severity:
#### CRITICAL Issues (must fix immediately)
| Check | Detection |
|-------|-----------|
| VMs without cluster | `cluster` field is null AND `site` field is null |
| Devices without site | `site` field is null |
| Active devices without primary IP | `status=active` AND `primary_ip4` is null AND `primary_ip6` is null |
#### HIGH Issues (should fix soon)
| Check | Detection |
|-------|-----------|
| VMs without site | VM has no site (neither direct nor via cluster.site) |
| VMs without tenant | `tenant` field is null |
| Devices without platform | `platform` field is null |
| Clusters not scoped to site | `site` field is null on cluster |
| VMs without role | `role` field is null |
#### MEDIUM Issues (plan to address)
| Check | Detection |
|-------|-----------|
| Inconsistent naming | Names don't match patterns: devices=`{role}-{site}-{num}`, VMs=`{env}-{app}-{num}` |
| Role fragmentation | More than 10 device roles with <3 assignments each |
| Missing tags on production | Active resources without any tags |
| Mixed naming separators | Some names use `_`, others use `-` |
#### LOW Issues (informational)
| Check | Detection |
|-------|-----------|
| Docker containers as VMs | Cluster type is "Docker Compose" - document this modeling choice |
| VMs without description | `description` field is empty |
| Sites without physical address | `physical_address` is empty |
| Devices without serial | `serial` field is empty |
### Phase 3: Naming Convention Analysis
For naming scope, analyze patterns:
1. **Extract naming patterns** from existing objects
2. **Identify dominant patterns** (most common conventions)
3. **Flag outliers** that don't match dominant patterns
4. **Suggest standardization** based on best practices
**Expected Patterns:**
- Devices: `{role}-{location}-{number}` (e.g., `web-dc1-01`)
- VMs: `{prefix}_{service}` or `{env}-{app}-{number}` (e.g., `prod-api-01`)
- Clusters: `{site}-{type}` (e.g., `home-docker`)
### Phase 4: Role Analysis
For roles scope, analyze fragmentation:
1. **List all device roles** with assignment counts
2. **Identify single-use roles** (only 1 device/VM)
3. **Identify similar roles** that could be consolidated
4. **Suggest consolidation** based on patterns
**Red Flags:**
- More than 15 highly specific roles
- Roles with technology in name (use platform instead)
- Roles that duplicate functionality
### Phase 5: Report Generation
Present findings in this structure:
```markdown
## CMDB Data Quality Audit Report
**Generated:** [timestamp]
**Scope:** [scope parameter]
### Summary
| Metric | Count |
|--------|-------|
| Total VMs | X |
| Total Devices | Y |
| Total Clusters | Z |
| **Total Issues** | **N** |
| Severity | Count |
|----------|-------|
| Critical | A |
| High | B |
| Medium | C |
| Low | D |
### Critical Issues
[List each with specific object names and IDs]
**Example:**
- VM `HotServ` (ID: 1) - No cluster or site assignment
- Device `server-01` (ID: 5) - No site assignment
### High Issues
[List each with specific object names]
### Medium Issues
[Grouped by category with counts]
### Recommendations
1. **[Most impactful fix]** - affects N objects
2. **[Second priority]** - affects M objects
...
### Quick Fixes
Commands to fix common issues:
```
# Assign site to VM
virt_update_vm id=X site=Y
# Assign platform to device
dcim_update_device id=X platform=Y
```
### Next Steps
- Run `/cmdb-register` to properly register new machines
- Use `/cmdb-sync` to update existing registrations
- Consider bulk updates via NetBox web UI for >10 items
```
## Scope-Specific Instructions
### For `vms` scope:
Focus only on Virtual Machine checks. Skip device and role analysis.
### For `devices` scope:
Focus only on Device checks. Skip VM and cluster analysis.
### For `naming` scope:
Focus on naming convention analysis across all objects. Generate detailed pattern report.
### For `roles` scope:
Focus on role fragmentation analysis. Generate consolidation recommendations.
## User Request ## User Request

View File

@@ -1,18 +1,11 @@
# CMDB Device Management # CMDB Device Management
## Visual Output Manage network devices in NetBox.
When executing this command, display the plugin header: ## Skills to Load
``` - `skills/visual-header.md`
┌──────────────────────────────────────────────────────────────────┐ - `skills/mcp-tools-reference.md`
│ 🖥️ CMDB-ASSISTANT · Device Management │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the operation.
Manage network devices in NetBox - create, view, update, or delete.
## Usage ## Usage
@@ -22,42 +15,40 @@ Manage network devices in NetBox - create, view, update, or delete.
## Instructions ## Instructions
You are a device management assistant with full CRUD access to NetBox devices. Execute `skills/visual-header.md` with context "Device Management".
### Actions ### Actions
**List/View:** **List/View:**
- `list` or `show all` - List all devices using `dcim_list_devices` - `list` or `show all` - List all devices: `dcim_list_devices`
- `show <name>` - Get device details using `dcim_list_devices` with name filter, then `dcim_get_device` - `show <name>` - Get device details: `dcim_get_device`
- `at <site>` - List devices at a specific site - `at <site>` - List devices at site
**Create:** **Create:**
- `create <name>` - Create a new device - `create <name>` - Create new device
- Required: name, device_type, role, site - Required: name, device_type, role, site
- Use `dcim_list_device_types`, `dcim_list_device_roles`, `dcim_list_sites` to help user find IDs - Use `dcim_list_device_types`, `dcim_list_device_roles`, `dcim_list_sites` to find IDs
- Then use `dcim_create_device`
**Update:** **Update:**
- `update <name>` - Update device properties - `update <name>` - Update device properties
- First get the device ID, then use `dcim_update_device` - Get device ID first, then use `dcim_update_device`
**Delete:** **Delete:**
- `delete <name>` - Delete a device (ask for confirmation first) - `delete <name>` - Delete device (ask confirmation first)
- Use `dcim_delete_device`
### Related Operations ### Related Operations
After creating a device, offer to: After creating a device, offer to:
- Add interfaces with `dcim_create_interface` - Add interfaces: `dcim_create_interface`
- Assign IP addresses with `ipam_create_ip_address` - Assign IP addresses: `ipam_create_ip_address`
- Add to a rack with `dcim_update_device` - Add to rack: `dcim_update_device`
## Examples ## Examples
- `/cmdb-device list` - Show all devices - `/cmdb-device list`
- `/cmdb-device show core-router-01` - Get details for specific device - `/cmdb-device show core-router-01`
- `/cmdb-device create web-server-03` - Create a new device - `/cmdb-device create web-server-03`
- `/cmdb-device at headquarters` - List devices at headquarters site - `/cmdb-device at headquarters`
## User Request ## User Request

View File

@@ -1,19 +1,13 @@
# CMDB IP Management # CMDB IP Management
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 🖥️ CMDB-ASSISTANT · IP Management │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the operation.
Manage IP addresses and prefixes in NetBox. Manage IP addresses and prefixes in NetBox.
## Skills to Load
- `skills/visual-header.md`
- `skills/ip-management.md`
- `skills/mcp-tools-reference.md`
## Usage ## Usage
``` ```
@@ -22,43 +16,36 @@ Manage IP addresses and prefixes in NetBox.
## Instructions ## Instructions
You are an IP address management (IPAM) assistant with access to NetBox. Execute `skills/visual-header.md` with context "IP Management".
Execute operations from `skills/ip-management.md`.
### Actions ### Actions
**Prefixes:** **Prefixes:**
- `prefixes` - List all prefixes using `ipam_list_prefixes` - `prefixes` - List all prefixes
- `prefix <cidr>` - Get prefix details or find prefix containing address - `prefix <cidr>` - Get prefix details
- `available in <prefix>` - Show available IPs in a prefix using `ipam_list_available_ips` - `available in <prefix>` - Show available IPs
- `create prefix <cidr>` - Create new prefix using `ipam_create_prefix` - `create prefix <cidr>` - Create new prefix
**IP Addresses:** **IP Addresses:**
- `list` - List all IP addresses using `ipam_list_ip_addresses` - `list` - List all IP addresses
- `show <address>` - Get IP details - `show <address>` - Get IP details
- `allocate from <prefix>` - Auto-allocate next available IP using `ipam_create_available_ip` - `allocate from <prefix>` - Auto-allocate next available IP
- `create <address>` - Create specific IP using `ipam_create_ip_address` - `create <address>` - Create specific IP
- `assign <ip> to <device>` - Assign IP to device interface - `assign <ip> to <device> <interface>` - Assign IP to interface
**VLANs:** **VLANs and VRFs:**
- `vlans` - List VLANs using `ipam_list_vlans` - `vlans` - List VLANs
- `vlan <id>` - Get VLAN details - `vlan <id>` - Get VLAN details
- `vrfs` - List VRFs
**VRFs:**
- `vrfs` - List VRFs using `ipam_list_vrfs`
### Workflow Examples
**Allocate IP to new server:**
1. Find available IPs in target prefix
2. Create the IP address
3. Assign to device interface
## Examples ## Examples
- `/cmdb-ip prefixes` - List all prefixes - `/cmdb-ip prefixes`
- `/cmdb-ip available in 10.0.1.0/24` - Show available IPs - `/cmdb-ip available in 10.0.1.0/24`
- `/cmdb-ip allocate from 10.0.1.0/24` - Get next available IP - `/cmdb-ip allocate from 10.0.1.0/24`
- `/cmdb-ip assign 10.0.1.50/24 to web-server-01 eth0` - Assign IP to interface - `/cmdb-ip assign 10.0.1.50/24 to web-server-01 eth0`
## User Request ## User Request

View File

@@ -4,19 +4,15 @@ description: Register the current machine into NetBox with all running applicati
# CMDB Machine Registration # CMDB Machine Registration
## Visual Output Register the current machine into NetBox, including hardware info, network interfaces, and running applications.
When executing this command, display the plugin header: ## Skills to Load
``` - `skills/visual-header.md`
┌──────────────────────────────────────────────────────────────────┐ - `skills/device-registration.md`
│ 🖥️ CMDB-ASSISTANT · Machine Registration │ - `skills/system-discovery.md`
└──────────────────────────────────────────────────────────────────┘ - `skills/netbox-patterns/SKILL.md`
``` - `skills/mcp-tools-reference.md`
Then proceed with the registration.
Register the current machine into NetBox, including hardware info, network interfaces, and running applications (Docker containers, services).
## Usage ## Usage
@@ -31,303 +27,24 @@ Register the current machine into NetBox, including hardware info, network inter
## Instructions ## Instructions
You are registering the current machine into NetBox. This is a multi-phase process that discovers local system information and creates corresponding NetBox objects. Execute `skills/visual-header.md` with context "Machine Registration".
**IMPORTANT:** Load the `netbox-patterns` skill for best practice reference. Execute `skills/device-registration.md` which covers:
1. System discovery via Bash (use `skills/system-discovery.md`)
### Phase 1: System Discovery (via Bash) 2. Pre-registration checks (device exists?, site?, platform?, role?)
3. Device creation via MCP
Gather system information using these commands: 4. Interface and IP creation
5. Container registration (if Docker found)
#### 1.1 Basic Device Info 6. Journal entry documentation
```bash
# Hostname
hostname
# OS/Platform info
cat /etc/os-release 2>/dev/null || uname -a
# Hardware model (varies by system)
# Raspberry Pi:
cat /proc/device-tree/model 2>/dev/null || echo "Unknown"
# x86 systems:
cat /sys/class/dmi/id/product_name 2>/dev/null || echo "Unknown"
# Serial number
# Raspberry Pi:
cat /proc/device-tree/serial-number 2>/dev/null || cat /proc/cpuinfo | grep Serial | cut -d: -f2 | tr -d ' ' 2>/dev/null
# x86 systems:
cat /sys/class/dmi/id/product_serial 2>/dev/null || echo "Unknown"
# CPU info
nproc
# Memory (MB)
free -m | awk '/Mem:/ {print $2}'
# Disk (GB, root filesystem)
df -BG / | awk 'NR==2 {print $2}' | tr -d 'G'
```
#### 1.2 Network Interfaces
```bash
# Get interfaces with IPs (JSON format)
ip -j addr show 2>/dev/null || ip addr show
# Get default gateway interface
ip route | grep default | awk '{print $5}' | head -1
# Get MAC addresses
ip -j link show 2>/dev/null || ip link show
```
#### 1.3 Running Applications
```bash
# Docker containers (if docker available)
docker ps --format '{"name":"{{.Names}}","image":"{{.Image}}","status":"{{.Status}}","ports":"{{.Ports}}"}' 2>/dev/null || echo "Docker not available"
# Docker Compose projects (check common locations)
find ~/apps /home/*/apps -name "docker-compose.yml" -o -name "docker-compose.yaml" 2>/dev/null | head -20
# Systemd services (running)
systemctl list-units --type=service --state=running --no-pager --plain 2>/dev/null | grep -v "^UNIT" | head -30
```
### Phase 2: Pre-Registration Checks (via MCP)
Before creating objects, verify prerequisites:
#### 2.1 Check if Device Already Exists
```
dcim_list_devices name=<hostname>
```
**If device exists:**
- Inform user and suggest `/cmdb-sync` instead
- Ask if they want to proceed with re-registration (will update existing)
#### 2.2 Verify/Create Site
If `--site` provided:
```
dcim_list_sites name=<site-name>
```
If site doesn't exist, ask user if they want to create it.
If no site provided, list available sites and ask user to choose:
```
dcim_list_sites
```
#### 2.3 Verify/Create Platform
Based on OS detected, check if platform exists:
```
dcim_list_platforms name=<platform-name>
```
**Platform naming:**
- `Raspberry Pi OS (Bookworm)` for Raspberry Pi
- `Ubuntu 24.04 LTS` for Ubuntu
- `Debian 12` for Debian
- Use format: `{OS Name} {Version}`
If platform doesn't exist, create it:
```
dcim_create_platform name=<platform-name> slug=<slug>
```
#### 2.4 Verify/Create Device Role
Based on detected services:
- If Docker containers found → `Docker Host`
- If only basic services → `Server`
- If specific role specified → Use that
```
dcim_list_device_roles name=<role-name>
```
### Phase 3: Device Registration (via MCP)
#### 3.1 Get/Create Manufacturer and Device Type
For Raspberry Pi:
```
dcim_list_manufacturers name="Raspberry Pi Foundation"
dcim_list_device_types manufacturer_id=X model="Raspberry Pi 4 Model B"
```
Create if not exists.
For generic x86:
```
dcim_list_manufacturers name=<detected-manufacturer>
```
#### 3.2 Create Device
```
dcim_create_device
name=<hostname>
device_type=<device_type_id>
role=<role_id>
site=<site_id>
platform=<platform_id>
tenant=<tenant_id> # if provided
serial=<serial>
description="Registered via cmdb-assistant"
```
#### 3.3 Create Interfaces
For each network interface discovered:
```
dcim_create_interface
device=<device_id>
name=<interface_name> # eth0, wlan0, tailscale0, etc.
type=<type> # 1000base-t, virtual, other
mac_address=<mac>
enabled=true
```
**Interface type mapping:**
- `eth*`, `enp*``1000base-t`
- `wlan*``ieee802.11ax` (or appropriate wifi type)
- `tailscale*`, `docker*`, `br-*``virtual`
- `lo` → skip (loopback)
#### 3.4 Create IP Addresses
For each IP on each interface:
```
ipam_create_ip_address
address=<ip/prefix> # e.g., "192.168.1.100/24"
assigned_object_type="dcim.interface"
assigned_object_id=<interface_id>
status="active"
description="Discovered via cmdb-register"
```
#### 3.5 Set Primary IP
Identify primary IP (interface with default route):
```
dcim_update_device
id=<device_id>
primary_ip4=<primary_ip_id>
```
### Phase 4: Container Registration (via MCP)
If Docker containers were discovered:
#### 4.1 Create/Get Cluster Type
```
virt_list_cluster_types name="Docker Compose"
```
Create if not exists:
```
virt_create_cluster_type name="Docker Compose" slug="docker-compose"
```
#### 4.2 Create Cluster
For each Docker Compose project directory found:
```
virt_create_cluster
name=<project-name> # e.g., "apps-hotport"
type=<cluster_type_id>
site=<site_id>
description="Docker Compose stack on <hostname>"
```
#### 4.3 Create VMs for Containers
For each running container:
```
virt_create_vm
name=<container_name>
cluster=<cluster_id>
site=<site_id>
role=<role_id> # Map container function to role
status="active"
vcpus=<cpu_shares> # Default 1.0 if unknown
memory=<memory_mb> # Default 256 if unknown
disk=<disk_gb> # Default 5 if unknown
description=<container purpose>
comments=<image, ports, volumes info>
```
**Container role mapping:**
- `*caddy*`, `*nginx*`, `*traefik*` → "Reverse Proxy"
- `*db*`, `*postgres*`, `*mysql*`, `*redis*` → "Database"
- `*webui*`, `*frontend*` → "Web Application"
- Others → Infer from image name or use generic "Container"
### Phase 5: Documentation
#### 5.1 Add Journal Entry
```
extras_create_journal_entry
assigned_object_type="dcim.device"
assigned_object_id=<device_id>
comments="Device registered via /cmdb-register command\n\nDiscovered:\n- X network interfaces\n- Y IP addresses\n- Z Docker containers"
```
### Phase 6: Summary Report
Present registration summary:
```markdown
## Machine Registration Complete
### Device Created
- **Name:** <hostname>
- **Site:** <site>
- **Platform:** <platform>
- **Role:** <role>
- **ID:** <device_id>
- **URL:** https://netbox.example.com/dcim/devices/<id>/
### Network Interfaces
| Interface | Type | MAC | IP Address |
|-----------|------|-----|------------|
| eth0 | 1000base-t | aa:bb:cc:dd:ee:ff | 192.168.1.100/24 |
| tailscale0 | virtual | - | 100.x.x.x/32 |
### Primary IP: 192.168.1.100
### Docker Containers Registered (if applicable)
**Cluster:** <cluster_name> (ID: <cluster_id>)
| Container | Role | vCPUs | Memory | Status |
|-----------|------|-------|--------|--------|
| media_jellyfin | Media Server | 2.0 | 2048MB | Active |
| media_sonarr | Media Management | 1.0 | 512MB | Active |
### Next Steps
- Run `/cmdb-sync` periodically to keep data current
- Run `/cmdb-audit` to check data quality
- Add tags for classification (env:*, team:*, etc.)
```
## Error Handling ## Error Handling
- **Device already exists:** Suggest `/cmdb-sync` or ask to proceed | Error | Action |
- **Site not found:** List available sites, offer to create new |-------|--------|
- **Docker not available:** Skip container registration, note in summary | Device already exists | Suggest `/cmdb-sync` or ask to proceed |
- **Permission denied:** Note which operations failed, suggest fixes | Site not found | List available sites, offer to create new |
| Docker not available | Skip container registration, note in summary |
| Permission denied | Note which operations failed, suggest fixes |
## User Request ## User Request

View File

@@ -1,19 +1,12 @@
# CMDB Site Management # CMDB Site Management
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 🖥️ CMDB-ASSISTANT · Site Management │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the operation.
Manage sites and locations in NetBox. Manage sites and locations in NetBox.
## Skills to Load
- `skills/visual-header.md`
- `skills/mcp-tools-reference.md`
## Usage ## Usage
``` ```
@@ -22,46 +15,35 @@ Manage sites and locations in NetBox.
## Instructions ## Instructions
You are a site/location management assistant with access to NetBox. Execute `skills/visual-header.md` with context "Site Management".
### Actions ### Actions
**Sites:** **Sites:**
- `list` - List all sites using `dcim_list_sites` - `list` - List all sites: `dcim_list_sites`
- `show <name>` - Get site details using `dcim_get_site` - `show <name>` - Get site details: `dcim_get_site`
- `create <name>` - Create new site using `dcim_create_site` - `create <name>` - Create new site: `dcim_create_site`
- `update <name>` - Update site using `dcim_update_site` - `update <name>` - Update site: `dcim_update_site`
- `delete <name>` - Delete site (with confirmation) - `delete <name>` - Delete site (with confirmation)
**Locations (within sites):** **Locations:**
- `locations at <site>` - List locations using `dcim_list_locations` - `locations at <site>` - List locations: `dcim_list_locations`
- `create location <name> at <site>` - Create location using `dcim_create_location` - `create location <name> at <site>` - Create location
**Racks:** **Racks:**
- `racks at <site>` - List racks using `dcim_list_racks` - `racks at <site>` - List racks: `dcim_list_racks`
- `create rack <name> at <site>` - Create rack using `dcim_create_rack` - `create rack <name> at <site>` - Create rack
**Regions:** **Regions:**
- `regions` - List regions using `dcim_list_regions` - `regions` - List regions: `dcim_list_regions`
- `create region <name>` - Create region using `dcim_create_region` - `create region <name>` - Create region
### Site Properties
When creating/updating sites:
- name (required)
- slug (required, auto-generated if not provided)
- status: active, planned, staging, decommissioning, retired
- region: parent region ID
- facility: datacenter/building name
- physical_address, shipping_address
- time_zone
## Examples ## Examples
- `/cmdb-site list` - Show all sites - `/cmdb-site list`
- `/cmdb-site show headquarters` - Get HQ site details - `/cmdb-site show headquarters`
- `/cmdb-site create branch-office-nyc` - Create new site - `/cmdb-site create branch-office-nyc`
- `/cmdb-site racks at headquarters` - List racks at HQ - `/cmdb-site racks at headquarters`
## User Request ## User Request

View File

@@ -4,19 +4,14 @@ description: Synchronize current machine state with existing NetBox record
# CMDB Machine Sync # CMDB Machine Sync
## Visual Output Update an existing NetBox device record with the current machine state.
When executing this command, display the plugin header: ## Skills to Load
``` - `skills/visual-header.md`
┌──────────────────────────────────────────────────────────────────┐ - `skills/sync-workflow.md`
│ 🖥️ CMDB-ASSISTANT · Machine Sync │ - `skills/system-discovery.md`
└──────────────────────────────────────────────────────────────────┘ - `skills/mcp-tools-reference.md`
```
Then proceed with the synchronization.
Update an existing NetBox device record with the current machine state. Compares local system information with NetBox and applies changes.
## Usage ## Usage
@@ -30,318 +25,32 @@ Update an existing NetBox device record with the current machine state. Compares
## Instructions ## Instructions
You are synchronizing the current machine's state with its NetBox record. This involves comparing current system state with stored data and updating differences. Execute `skills/visual-header.md` with context "Machine Sync".
**IMPORTANT:** Load the `netbox-patterns` skill for best practice reference. Execute `skills/sync-workflow.md` which covers:
1. Device lookup via MCP
### Phase 1: Device Lookup (via MCP) 2. Current state discovery via Bash
3. Comparison of NetBox vs local state
First, find the existing device record: 4. Diff report generation
5. User confirmation (unless dry-run)
```bash 6. Apply updates via MCP
# Get current hostname 7. Journal entry creation
hostname
``` ## Modes
``` | Mode | Behavior |
dcim_list_devices name=<hostname> |------|----------|
``` | Default | Show diff, ask confirmation, apply changes |
| `--dry-run` | Show diff only, no changes applied |
**If device not found:** | `--full` | Skip confirmation, update all fields |
- Inform user: "Device '<hostname>' not found in NetBox"
- Suggest: "Run `/cmdb-register` to register this machine first"
- Exit sync
**If device found:**
- Store device ID and all current field values
- Fetch interfaces: `dcim_list_interfaces device_id=<device_id>`
- Fetch IPs: `ipam_list_ip_addresses device_id=<device_id>`
Also check for associated clusters/VMs:
```
virt_list_clusters # Look for cluster associated with this device
virt_list_vms cluster=<cluster_id> # If cluster found
```
### Phase 2: Current State Discovery (via Bash)
Gather current system information (same as `/cmdb-register`):
```bash
# Device info
hostname
cat /etc/os-release 2>/dev/null || uname -a
nproc
free -m | awk '/Mem:/ {print $2}'
df -BG / | awk 'NR==2 {print $2}' | tr -d 'G'
# Network interfaces with IPs
ip -j addr show 2>/dev/null || ip addr show
# Docker containers
docker ps --format '{"name":"{{.Names}}","image":"{{.Image}}","status":"{{.Status}}"}' 2>/dev/null || echo "[]"
```
### Phase 3: Comparison
Compare discovered state with NetBox record:
#### 3.1 Device Attributes
| Field | Compare |
|-------|---------|
| Platform | OS version changed? |
| Status | Still active? |
| Serial | Match? |
| Description | Keep existing |
#### 3.2 Network Interfaces
| Change Type | Detection |
|-------------|-----------|
| New interface | Interface exists locally but not in NetBox |
| Removed interface | Interface in NetBox but not locally |
| Changed MAC | MAC address different |
| Interface type | Type mismatch |
#### 3.3 IP Addresses
| Change Type | Detection |
|-------------|-----------|
| New IP | IP exists locally but not in NetBox |
| Removed IP | IP in NetBox but not locally (on this device) |
| Primary IP changed | Default route interface changed |
#### 3.4 Docker Containers
| Change Type | Detection |
|-------------|-----------|
| New container | Container running locally but no VM in cluster |
| Stopped container | VM exists but container not running |
| Resource change | vCPUs/memory different (if trackable) |
### Phase 4: Diff Report
Present changes to user:
```markdown
## Sync Diff Report
**Device:** <hostname> (ID: <device_id>)
**NetBox URL:** https://netbox.example.com/dcim/devices/<id>/
### Device Attributes
| Field | NetBox Value | Current Value | Action |
|-------|--------------|---------------|--------|
| Platform | Ubuntu 22.04 | Ubuntu 24.04 | UPDATE |
| Status | active | active | - |
### Network Interfaces
#### New Interfaces (will create)
| Interface | Type | MAC | IPs |
|-----------|------|-----|-----|
| tailscale0 | virtual | - | 100.x.x.x/32 |
#### Removed Interfaces (will mark offline)
| Interface | Type | Reason |
|-----------|------|--------|
| eth1 | 1000base-t | Not found locally |
#### Changed Interfaces
| Interface | Field | Old | New |
|-----------|-------|-----|-----|
| eth0 | mac_address | aa:bb:cc:00:00:00 | aa:bb:cc:11:11:11 |
### IP Addresses
#### New IPs (will create)
- 192.168.1.150/24 on eth0
#### Removed IPs (will unassign)
- 192.168.1.100/24 from eth0
### Docker Containers
#### New Containers (will create VMs)
| Container | Image | Role |
|-----------|-------|------|
| media_lidarr | linuxserver/lidarr | Media Management |
#### Stopped Containers (will mark offline)
| Container | Last Status |
|-----------|-------------|
| media_bazarr | Exited |
### Summary
- **Updates:** X
- **Creates:** Y
- **Removals/Offline:** Z
```
### Phase 5: User Confirmation
If not `--dry-run`:
```
The following changes will be applied:
- Update device platform to "Ubuntu 24.04"
- Create interface "tailscale0"
- Create IP "100.x.x.x/32" on tailscale0
- Create VM "media_lidarr" in cluster
- Mark VM "media_bazarr" as offline
Proceed with sync? [Y/n]
```
**Use AskUserQuestion** to get confirmation.
### Phase 6: Apply Updates (via MCP)
Only if user confirms (or `--full` specified):
#### 6.1 Device Updates
```
dcim_update_device
id=<device_id>
platform=<new_platform_id>
# ... other changed fields
```
#### 6.2 Interface Updates
**For new interfaces:**
```
dcim_create_interface
device=<device_id>
name=<interface_name>
type=<type>
mac_address=<mac>
enabled=true
```
**For removed interfaces:**
```
dcim_update_interface
id=<interface_id>
enabled=false
description="Marked offline by cmdb-sync - interface no longer present"
```
**For changed interfaces:**
```
dcim_update_interface
id=<interface_id>
mac_address=<new_mac>
```
#### 6.3 IP Address Updates
**For new IPs:**
```
ipam_create_ip_address
address=<ip/prefix>
assigned_object_type="dcim.interface"
assigned_object_id=<interface_id>
status="active"
```
**For removed IPs:**
```
ipam_update_ip_address
id=<ip_id>
assigned_object_type=null
assigned_object_id=null
description="Unassigned by cmdb-sync"
```
#### 6.4 Primary IP Update
If primary IP changed:
```
dcim_update_device
id=<device_id>
primary_ip4=<new_primary_ip_id>
```
#### 6.5 Container/VM Updates
**For new containers:**
```
virt_create_vm
name=<container_name>
cluster=<cluster_id>
status="active"
# ... other fields
```
**For stopped containers:**
```
virt_update_vm
id=<vm_id>
status="offline"
description="Container stopped - detected by cmdb-sync"
```
### Phase 7: Journal Entry
Document the sync:
```
extras_create_journal_entry
assigned_object_type="dcim.device"
assigned_object_id=<device_id>
comments="Device synced via /cmdb-sync command\n\nChanges applied:\n- <list of changes>"
```
### Phase 8: Summary Report
```markdown
## Sync Complete
**Device:** <hostname>
**Sync Time:** <timestamp>
### Changes Applied
- Updated platform: Ubuntu 22.04 → Ubuntu 24.04
- Created interface: tailscale0 (ID: X)
- Created IP: 100.x.x.x/32 (ID: Y)
- Created VM: media_lidarr (ID: Z)
- Marked VM offline: media_bazarr (ID: W)
### Current State
- **Interfaces:** 4 (3 active, 1 offline)
- **IP Addresses:** 5
- **Containers/VMs:** 8 (7 active, 1 offline)
### Next Sync
Run `/cmdb-sync` again after:
- Adding/removing Docker containers
- Changing network configuration
- OS upgrades
```
## Dry Run Mode
If `--dry-run` specified:
- Complete Phase 1-4 (lookup, discovery, compare, diff report)
- Skip Phase 5-8 (no confirmation, no updates, no journal)
- End with: "Dry run complete. No changes applied. Run without --dry-run to apply."
## Full Sync Mode
If `--full` specified:
- Skip user confirmation
- Update all fields even if unchanged (force refresh)
- Useful for ensuring NetBox matches current state exactly
## Error Handling
| Error | Action |
|-------|--------|
| Device not found | Suggest `/cmdb-register` |
| Permission denied | Note which failed, continue with others |
| Cluster not found | Offer to create or skip container sync |
| API errors | Log error, continue with remaining updates |
## User Request
@@ -4,20 +4,14 @@ description: Generate infrastructure topology diagrams from NetBox data
# CMDB Topology Visualization
Generate Mermaid diagrams showing infrastructure topology from NetBox.
## Skills to Load
- `skills/visual-header.md`
- `skills/topology-generation.md`
- `skills/mcp-tools-reference.md`
## Usage
@@ -26,168 +20,34 @@ Generate Mermaid diagrams showing infrastructure topology from NetBox.
**Views:**
- `rack <rack-name>` - Rack elevation showing devices and positions
- `network [site]` - Network topology showing device connections via cables
- `site <site-name>` - Site overview with racks and device counts
- `full` - Full infrastructure overview
## Instructions
Execute `skills/visual-header.md` with context "Topology".
Execute `skills/topology-generation.md` which covers:
- Data collection via MCP for each view type
- Mermaid diagram generation with proper shapes
- Legend and data notes
## Output Format
Always provide:
1. **Summary** - Brief description
2. **Mermaid Code Block** - The diagram
3. **Legend** - Shape explanations
4. **Data Notes** - Quality issues found
You are a topology visualization assistant that queries NetBox and generates Mermaid diagrams.
### View: Rack Elevation
Generate a rack view showing devices and their positions.
**Data Collection:**
1. Use `dcim_list_racks` to find the rack by name
2. Use `dcim_list_devices` with `rack_id` filter to get devices in rack
3. For each device, note: `position`, `u_height`, `face`, `name`, `role`
**Mermaid Output:**
```mermaid
graph TB
subgraph rack["Rack: <rack-name> (U<height>)"]
direction TB
u42["U42: empty"]
u41["U41: empty"]
u40["U40: server-01 (Server)"]
u39["U39: server-01 (cont.)"]
u38["U38: switch-01 (Switch)"]
%% ... continue for all units
end
```
**For devices spanning multiple U:**
- Mark the top U with device name and role
- Mark subsequent Us as "(cont.)" for the same device
- Empty Us should show "empty"
### View: Network Topology
Generate a network diagram showing device connections.
**Data Collection:**
1. Use `dcim_list_sites` if no site specified (get all)
2. Use `dcim_list_devices` with optional `site_id` filter
3. Use `dcim_list_cables` to get all connections
4. Use `dcim_list_interfaces` for each device to understand port names
**Mermaid Output:**
```mermaid
graph TD
subgraph site1["Site: Home"]
router1[("core-router-01<br/>Router")]
switch1[["dist-switch-01<br/>Switch"]]
server1["web-server-01<br/>Server"]
server2["db-server-01<br/>Server"]
end
router1 -->|"eth0 - eth1"| switch1
switch1 -->|"gi0/1 - eth0"| server1
switch1 -->|"gi0/2 - eth0"| server2
```
**Node shapes by role:**
- Router: `[(" ")]` (cylinder/database shape)
- Switch: `[[ ]]` (double brackets)
- Server: `[ ]` (rectangle)
- Firewall: `{{ }}` (hexagon)
- Other: `[ ]` (rectangle)
**Edge labels:** Show interface names on both ends (A-side - B-side)
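A hypothetical sketch of the generation step in Python; the input shapes here are illustrative assumptions, not the MCP response format:
```python
# Emit Mermaid nodes and edges from simplified device/link records.
# Assumed (hypothetical) shapes:
#   devices = [{"name": "core-router-01", "role": "router"}, ...]
#   links = [{"a_device": "core-router-01", "a_iface": "eth0",
#             "b_device": "dist-switch-01", "b_iface": "eth1"}, ...]
SHAPES = {
    "router": ('[("', '")]'),    # cylinder
    "switch": ('[["', '"]]'),    # double brackets
    "firewall": ('{{"', '"}}'),  # hexagon
}

def mermaid_topology(devices, links):
    lines = ["graph TD"]
    for dev in devices:
        open_, close = SHAPES.get(dev["role"], ('["', '"]'))  # default: rectangle
        lines.append(f'    {dev["name"]}{open_}{dev["name"]}<br/>{dev["role"].title()}{close}')
    for lk in links:
        lines.append(f'    {lk["a_device"]} -->|"{lk["a_iface"]} - {lk["b_iface"]}"| {lk["b_device"]}')
    return "\n".join(lines)
```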
### View: Site Overview
Generate a site-level view showing racks and summary counts.
**Data Collection:**
1. Use `dcim_get_site` to get site details
2. Use `dcim_list_racks` with `site_id` filter
3. Use `dcim_list_devices` with `site_id` filter for counts per rack
**Mermaid Output:**
```mermaid
graph TB
subgraph site["Site: Headquarters"]
subgraph row1["Row 1"]
rack1["Rack A1<br/>12/42 U used<br/>5 devices"]
rack2["Rack A2<br/>20/42 U used<br/>8 devices"]
end
subgraph row2["Row 2"]
rack3["Rack B1<br/>8/42 U used<br/>3 devices"]
end
end
```
### View: Full Infrastructure
Generate a high-level view of all sites and their relationships.
**Data Collection:**
1. Use `dcim_list_regions` to get hierarchy
2. Use `dcim_list_sites` to get all sites
3. Use `dcim_list_devices` with status filter for counts
**Mermaid Output:**
```mermaid
graph TB
subgraph region1["Region: Americas"]
site1["Headquarters<br/>3 racks, 25 devices"]
site2["Branch Office<br/>1 rack, 5 devices"]
end
subgraph region2["Region: Europe"]
site3["EU Datacenter<br/>10 racks, 100 devices"]
end
site1 -.->|"WAN Link"| site3
```
### Output Format
Always provide:
1. **Summary** - Brief description of what the diagram shows
2. **Mermaid Code Block** - The diagram code in a fenced code block
3. **Legend** - Explanation of shapes and colors used
4. **Data Notes** - Any data quality issues (e.g., devices without position, missing cables)
**Example Output:**
```markdown
## Network Topology: Home Site
This diagram shows the network connections between 4 devices at the Home site.
```mermaid
graph TD
router1[("core-router<br/>Router")]
switch1[["main-switch<br/>Switch"]]
server1["homelab-01<br/>Server"]
router1 -->|"eth0 - gi0/24"| switch1
switch1 -->|"gi0/1 - eth0"| server1
```
**Legend:**
- Cylinder shape: Routers
- Double brackets: Switches
- Rectangle: Servers
**Data Notes:**
- 1 device (nas-01) has no cable connections documented
```
## Examples
- `/cmdb-topology rack server-rack-01` - Show devices in server-rack-01
- `/cmdb-topology network` - Show all network connections
- `/cmdb-topology network Home` - Show network topology for Home site only
- `/cmdb-topology site Headquarters` - Show rack overview for Headquarters
- `/cmdb-topology full` - Show full infrastructure overview
## User Request
@@ -1,176 +1,74 @@
---
description: Interactive setup wizard for cmdb-assistant plugin
---
# CMDB Assistant Setup Wizard
Configure the cmdb-assistant plugin with NetBox integration.
## Skills to Load
- `skills/visual-header.md`
## Important Context
- **Uses Bash, Read, Write, AskUserQuestion tools** - NOT MCP tools
- **MCP tools unavailable until after setup + session restart**
## Usage
```
/initial-setup
```
## Instructions
Execute `skills/visual-header.md` with context "Setup Wizard".
### Phase 1: Environment Validation
```bash
python3 --version
```
If below 3.10, stop and inform user.
### Phase 2: MCP Server Setup
1. Locate NetBox MCP server in marketplace
2. Check virtual environment exists
3. Create venv if missing: `python3 -m venv .venv && pip install -r requirements.txt`
### Phase 3: System Configuration
1. Create config directory: `mkdir -p ~/.config/claude`
2. Check `~/.config/claude/netbox.env` exists
3. If missing, ask user for NetBox API URL (must include `/api`)
4. Create config file with placeholder token
5. Instruct user to add API token manually
### Phase 4: Validation
1. Test API connection if token was added
2. Report result (200=success, 403=invalid token)
3. Display completion summary
4. Remind user to restart session for MCP tools
## Completion Summary
```
CMDB-ASSISTANT SETUP COMPLETE
MCP Server (NetBox): Ready
System Config: ~/.config/claude/netbox.env
Restart your Claude Code session for MCP tools.
```
## User Request
$ARGUMENTS
This command sets up the cmdb-assistant plugin with NetBox integration.
## Important Context
- **This command uses Bash, Read, Write, and AskUserQuestion tools** - NOT MCP tools
- **MCP tools won't work until after setup + session restart**
- **Uses NetBox MCP server (separate from Gitea MCP)**
---
## Phase 1: Environment Validation
### Step 1.1: Check Python Version
```bash
python3 --version
```
If below 3.10, stop setup and inform user.
---
## Phase 2: MCP Server Setup
### Step 2.1: Locate NetBox MCP Server
```bash
find ~/.claude ~/.config/claude -name "mcp_server" -path "*netbox*" 2>/dev/null | head -5
```
If not found, ask user for marketplace location.
### Step 2.2: Check Virtual Environment
```bash
ls -la /path/to/mcp-servers/netbox/.venv/bin/python 2>/dev/null && echo "VENV_EXISTS" || echo "VENV_MISSING"
```
### Step 2.3: Create Virtual Environment (if missing)
```bash
cd /path/to/mcp-servers/netbox && python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip && pip install -r requirements.txt && deactivate
```
---
## Phase 3: System Configuration
### Step 3.1: Create Config Directory
```bash
mkdir -p ~/.config/claude
```
### Step 3.2: Check NetBox Configuration
```bash
cat ~/.config/claude/netbox.env 2>/dev/null || echo "FILE_NOT_FOUND"
```
**If file exists with valid values:** Skip to Phase 4.
**If missing or has placeholders:** Continue.
### Step 3.3: Gather NetBox Information
Use AskUserQuestion:
- Question: "What is your NetBox API URL? (e.g., https://netbox.company.com/api)"
- Header: "NetBox URL"
- Options:
- "Other (I'll provide the URL)"
Ask user to provide the URL.
**Important:** The URL must include `/api` at the end. If the user provides a URL without `/api`, append it automatically.
### Step 3.4: Create Configuration File
```bash
cat > ~/.config/claude/netbox.env << 'EOF'
# NetBox API Configuration
# Generated by cmdb-assistant /initial-setup
NETBOX_API_URL=<USER_PROVIDED_URL>
NETBOX_API_TOKEN=PASTE_YOUR_TOKEN_HERE
EOF
chmod 600 ~/.config/claude/netbox.env
```
### Step 3.5: Token Instructions
---
**Action Required: Add Your NetBox API Token**
I've created `~/.config/claude/netbox.env` but you need to add your API token manually.
**Steps:**
1. Open: `nano ~/.config/claude/netbox.env`
2. Generate token in NetBox: Admin → API Tokens → Add Token
3. Replace `PASTE_YOUR_TOKEN_HERE` with your token
4. Save the file
---
Use AskUserQuestion:
- Question: "Have you added your NetBox token?"
- Header: "Token"
- Options:
- "Yes, I've added the token"
- "Skip for now"
---
## Phase 4: Validation
### Step 4.1: Test Configuration (if token was added)
```bash
source ~/.config/claude/netbox.env && curl -s -o /dev/null -w "%{http_code}" -H "Authorization: Token $NETBOX_API_TOKEN" "$NETBOX_API_URL/"
```
**Note:** The URL already includes `/api`, so we just append `/` for the root API endpoint.
Report result:
- 200: Success
- 403: Invalid token
- Other: Connection issue
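For reference, a minimal Python sketch of the same check (assumes the `requests` package is available and the env file created in Step 3.4; it mirrors the curl call above and is not itself part of the wizard):
```python
# Validate the NetBox token using the env file written during setup.
import pathlib
import requests

def load_env(path="~/.config/claude/netbox.env"):
    env = {}
    for line in pathlib.Path(path).expanduser().read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            env[key] = value
    return env

env = load_env()
resp = requests.get(
    env["NETBOX_API_URL"] + "/",  # URL already includes /api
    headers={"Authorization": f"Token {env['NETBOX_API_TOKEN']}"},
    timeout=10,
)
print({200: "Success", 403: "Invalid token"}.get(resp.status_code,
      f"Connection issue ({resp.status_code})"))
```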
### Step 4.2: Summary
```
╔════════════════════════════════════════════════════════════╗
║ CMDB-ASSISTANT SETUP COMPLETE ║
╠════════════════════════════════════════════════════════════╣
║ MCP Server (NetBox): ✓ Ready ║
║ System Config: ✓ ~/.config/claude/netbox.env ║
╚════════════════════════════════════════════════════════════╝
After restart, try:
- /cmdb-device <hostname>
- /cmdb-ip <address>
- /cmdb-site <name>
- /cmdb-search <query>
```
### Step 4.3: Session Restart Notice
---
**⚠️ Session Restart Required**
Restart your Claude Code session for MCP tools to become available.
**After restart, you can:**
- Run `/cmdb-device <hostname>` to look up a device
- Run `/cmdb-ip <address>` to look up an IP address
- Run `/cmdb-site <name>` to look up a site
- Run `/cmdb-search <query>` for general search
---
## Note on Project Configuration
cmdb-assistant does not require project-level configuration. The NetBox connection is system-wide and not tied to specific repositories.
@@ -4,20 +4,14 @@ description: Detect IP address conflicts and overlapping prefixes in NetBox
# CMDB IP Conflict Detection
Scan NetBox IPAM data to identify IP address conflicts and overlapping prefixes.
## Skills to Load
- `skills/visual-header.md`
- `skills/ip-management.md`
- `skills/mcp-tools-reference.md`
## Usage
@@ -33,205 +27,31 @@ Scan NetBox IPAM data to identify IP address conflicts and overlapping prefixes.
## Instructions
Execute `skills/visual-header.md` with context "IP Conflict Detection".
Execute conflict detection from `skills/ip-management.md`:
1. **Data Collection** - Fetch IPs, prefixes, VRFs via MCP
2. **Duplicate Detection** - Group by address+VRF, flag >1 record
3. **Overlap Detection** - Compare prefixes pairwise using CIDR math
4. **Orphan IP Detection** - Find IPs without containing prefix
5. **Generate Report** - Use template from skill
## Conflict Types
| Type | Severity |
|------|----------|
| Duplicate IP (same interface type) | CRITICAL |
| Duplicate IP (different roles) | HIGH |
| Overlapping prefixes (same status) | HIGH |
| Overlapping prefixes (container ok) | LOW |
| Orphan IP | MEDIUM |
You are an IP conflict detection specialist that analyzes NetBox IPAM data for conflicts and issues.
### Conflict Types to Detect
#### 1. Duplicate IP Addresses
Multiple IP address records with the same address (within same VRF).
**Detection:**
1. Use `ipam_list_ip_addresses` to get all addresses
2. Group by address + VRF combination
3. Flag groups with more than one record
**Exception:** Anycast addresses may legitimately appear multiple times - check the `role` field for "anycast".
#### 2. Overlapping Prefixes
Prefixes that contain the same address space (within same VRF).
**Detection:**
1. Use `ipam_list_prefixes` to get all prefixes
2. For each prefix pair in the same VRF, check if one contains the other
3. Legitimate hierarchies should have proper parent-child relationships
**Legitimate Overlaps:**
- Parent/child prefix hierarchy (e.g., 10.0.0.0/8 contains 10.0.1.0/24)
- Different VRFs (isolated routing tables)
- Marked as "container" status
#### 3. IPs Outside Their Prefix
IP addresses that don't fall within any defined prefix.
**Detection:**
1. For each IP address, find the most specific prefix that contains it
2. Flag IPs with no matching prefix
#### 4. Prefix Overlap Across VRFs (Informational)
Same prefix appearing in multiple VRFs - not necessarily a conflict, but worth noting.
### MCP Tools
- `ipam_list_ip_addresses` - Get all IP addresses with filters:
- `address` - Filter by specific address
- `vrf_id` - Filter by VRF
- `parent` - Filter by parent prefix
- `status` - Filter by status
- `ipam_list_prefixes` - Get all prefixes with filters:
- `prefix` - Filter by prefix CIDR
- `vrf_id` - Filter by VRF
- `within` - Find prefixes within a parent
- `contains` - Find prefixes containing an address
- `ipam_list_vrfs` - List VRFs for context
- `ipam_get_ip_address` - Get detailed IP info including assigned device/interface
- `ipam_get_prefix` - Get detailed prefix info
### Workflow
1. **Data Collection**
- Fetch all IP addresses (or filtered set)
- Fetch all prefixes (or filtered set)
- Fetch VRFs for context
2. **Duplicate Detection**
- Build address map: `{address+vrf: [records]}`
- Filter for entries with >1 record
3. **Overlap Detection**
- For each VRF, compare prefixes pairwise
- Check using CIDR math: does prefix A contain prefix B or vice versa?
- Ignore legitimate hierarchies (status=container)
4. **Orphan IP Detection**
- For each IP, find containing prefix
- Flag IPs with no prefix match
5. **Generate Report**
### Report Format
```markdown
## IP Conflict Detection Report
**Generated:** [timestamp]
**Scope:** [scope parameter]
### Summary
| Check | Status | Count |
|-------|--------|-------|
| Duplicate IPs | [PASS/FAIL] | X |
| Overlapping Prefixes | [PASS/FAIL] | Y |
| Orphan IPs | [PASS/FAIL] | Z |
| Total Issues | - | N |
### Critical Issues
#### Duplicate IP Addresses
| Address | VRF | Count | Assigned To |
|---------|-----|-------|-------------|
| 10.0.1.50/24 | Global | 2 | server-01 (eth0), server-02 (eth0) |
| 192.168.1.100/24 | Global | 2 | router-01 (gi0/1), switch-01 (vlan10) |
**Impact:** IP conflicts cause network connectivity issues. Devices will have intermittent connectivity.
**Resolution:**
- Determine which device should have the IP
- Update or remove the duplicate assignment
- Consider IP reservation to prevent future conflicts
#### Overlapping Prefixes
| Prefix 1 | Prefix 2 | VRF | Type |
|----------|----------|-----|------|
| 10.0.0.0/24 | 10.0.0.0/25 | Global | Unstructured overlap |
| 192.168.0.0/16 | 192.168.1.0/24 | Production | Missing container flag |
**Impact:** Overlapping prefixes can cause routing ambiguity and IP management confusion.
**Resolution:**
- For legitimate hierarchies: Mark parent prefix as status="container"
- For accidental overlaps: Consolidate or re-address one prefix
### Warnings
#### IPs Without Prefix
| Address | VRF | Assigned To | Nearest Prefix |
|---------|-----|-------------|----------------|
| 172.16.5.10/24 | Global | server-03 (eth0) | None found |
**Impact:** IPs without a prefix bypass IPAM allocation controls.
**Resolution:**
- Create appropriate prefix to contain the IP
- Or update IP to correct address within existing prefix
### Informational
#### Same Prefix in Multiple VRFs
| Prefix | VRFs | Purpose |
|--------|------|---------|
| 10.0.0.0/24 | Global, DMZ, Internal | [Check if intentional] |
### Statistics
| Metric | Value |
|--------|-------|
| Total IP Addresses | X |
| Total Prefixes | Y |
| Total VRFs | Z |
| Utilization (IPs/Prefix space) | W% |
### Remediation Commands
```
# Remove duplicate IP (keep server-01's assignment)
ipam_delete_ip_address id=123
# Mark prefix as container
ipam_update_prefix id=456 status=container
# Create missing prefix for orphan IP
ipam_create_prefix prefix=172.16.5.0/24 status=active
```
```
### CIDR Math Reference
For overlap detection, use these rules:
- Prefix A **contains** Prefix B if: A.network <= B.network AND A.broadcast >= B.broadcast
- Two prefixes **overlap** if: A.network <= B.broadcast AND B.network <= A.broadcast
**Example:**
- 10.0.0.0/8 contains 10.0.1.0/24 (legitimate hierarchy)
- 10.0.0.0/24 and 10.0.0.128/25 overlap (10.0.0.128/25 is within 10.0.0.0/24)
### Severity Levels
| Issue | Severity | Description |
|-------|----------|-------------|
| Duplicate IP (same interface type) | CRITICAL | Active conflict, causes outages |
| Duplicate IP (different roles) | HIGH | Potential conflict |
| Overlapping prefixes (same status) | HIGH | IPAM management issue |
| Overlapping prefixes (container ok) | LOW | May need status update |
| Orphan IP | MEDIUM | Bypasses IPAM controls |
## Examples
- `/ip-conflicts` - Full scan for all conflicts
- `/ip-conflicts addresses` - Check only for duplicate IPs
- `/ip-conflicts prefixes` - Check only for overlapping prefixes
- `/ip-conflicts vrf Production` - Scan only Production VRF
- `/ip-conflicts prefix 10.0.0.0/8` - Scan within specific prefix range
## User Request
@@ -1,10 +1,14 @@
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "${CLAUDE_PLUGIN_ROOT}/hooks/startup-check.sh"
          }
        ]
      }
    ],
    "PreToolUse": [
      {
@@ -0,0 +1,163 @@
# Audit Workflow Skill
How to audit NetBox data quality.
## Prerequisites
Load these skills:
- `netbox-patterns` - Best practices reference
- `mcp-tools-reference` - MCP tool reference
## Data Collection
```
virt_list_vms
dcim_list_devices
virt_list_clusters
dcim_list_sites
tenancy_list_tenants
dcim_list_device_roles
dcim_list_platforms
```
## Quality Checks by Severity
### CRITICAL (must fix immediately)
| Check | Detection |
|-------|-----------|
| VMs without cluster | `cluster` is null AND `site` is null |
| Devices without site | `site` is null |
| Active devices without primary IP | `status=active` AND `primary_ip4` is null AND `primary_ip6` is null |
### HIGH (should fix soon)
| Check | Detection |
|-------|-----------|
| VMs without site | No site (neither direct nor via cluster.site) |
| VMs without tenant | `tenant` is null |
| Devices without platform | `platform` is null |
| Clusters not scoped to site | `site` is null on cluster |
| VMs without role | `role` is null |
### MEDIUM (plan to address)
| Check | Detection |
|-------|-----------|
| Inconsistent naming | Names don't match patterns |
| Role fragmentation | >10 device roles with <3 assignments each |
| Missing tags on production | Active resources without tags |
| Mixed naming separators | Some `_`, others `-` |
### LOW (informational)
| Check | Detection |
|-------|-----------|
| Docker containers as VMs | Cluster type is "Docker Compose" |
| VMs without description | `description` is empty |
| Sites without physical address | `physical_address` is empty |
| Devices without serial | `serial` is empty |
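A sketch of two of these checks in Python; the field names are assumptions about the record shape and should be adjusted to the actual MCP output:
```python
# Flag devices with no site, and active devices with no primary IP.
def critical_issues(devices):
    issues = []
    for dev in devices:
        if dev.get("site") is None:
            issues.append((dev["name"], "No site assignment"))
        if dev.get("status") == "active" and not (dev.get("primary_ip4") or dev.get("primary_ip6")):
            issues.append((dev["name"], "Active device without primary IP"))
    return issues
```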
## Naming Convention Analysis
### Expected Patterns
| Object Type | Pattern | Example |
|-------------|---------|---------|
| Devices | `{role}-{location}-{number}` | `web-dc1-01` |
| VMs | `{env}-{app}-{number}` | `prod-api-01` |
| Clusters | `{site}-{type}` | `home-docker` |
### Analysis Steps
1. Extract naming patterns from existing objects
2. Identify dominant patterns (most common)
3. Flag outliers that don't match
4. Suggest standardization
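A rough sketch of this analysis as a heuristic: reduce each name to a shape such as `word-word-NN`, count shapes, and flag names that deviate from the dominant one:
```python
import re
from collections import Counter

def shape(name):
    # "web-dc1-01" -> "word-word-NN"; treats "-" and "_" as separators
    tokens = re.split(r"[-_]", name.lower())
    return "-".join("NN" if t.isdigit() else "word" for t in tokens)

def find_outliers(names):
    shapes = Counter(shape(n) for n in names)
    dominant, _ = shapes.most_common(1)[0]
    return dominant, [n for n in names if shape(n) != dominant]

print(find_outliers(["web-dc1-01", "web-dc1-02", "HotServ"]))
# ('word-word-NN', ['HotServ'])
```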
## Role Fragmentation Analysis
### Red Flags
- More than 15 highly specific roles
- Roles with technology in name (use platform instead)
- Roles that duplicate functionality
- Single-use roles (only 1 device/VM)
### Recommended Consolidation
Use general roles + platform/tags for specificity:
- Instead of `nginx-web-server`, use `web-server` + platform `nginx`
## Report Template
```markdown
## CMDB Data Quality Audit Report
**Generated:** [timestamp]
**Scope:** [scope parameter]
### Summary
| Metric | Count |
|--------|-------|
| Total VMs | X |
| Total Devices | Y |
| Total Clusters | Z |
| **Total Issues** | **N** |
| Severity | Count |
|----------|-------|
| Critical | A |
| High | B |
| Medium | C |
| Low | D |
### Critical Issues
[List each with specific object names and IDs]
- VM `HotServ` (ID: 1) - No cluster or site assignment
- Device `server-01` (ID: 5) - No site assignment
### High Issues
[List each with specific object names]
### Medium Issues
[Grouped by category with counts]
### Recommendations
1. **[Most impactful fix]** - affects N objects
2. **[Second priority]** - affects M objects
### Quick Fixes
Commands to fix common issues:
```
# Assign site to VM
virt_update_vm id=X site=Y
# Assign platform to device
dcim_update_device id=X platform=Y
```
### Next Steps
- Run `/cmdb-register` to properly register new machines
- Use `/cmdb-sync` to update existing registrations
- Consider bulk updates via NetBox web UI for >10 items
```
## Scope-Specific Focus
| Scope | Focus |
|-------|-------|
| `all` | Full audit across all categories |
| `vms` | Virtual Machine checks only |
| `devices` | Device checks only |
| `naming` | Naming convention analysis |
| `roles` | Role fragmentation analysis |
@@ -0,0 +1,130 @@
# Change Audit Skill
Audit NetBox changes for tracking and compliance.
## Prerequisites
Load skill: `mcp-tools-reference`
## MCP Tools
| Tool | Purpose | Parameters |
|------|---------|------------|
| `extras_list_object_changes` | List changes | `user_id`, `changed_object_type`, `action` |
| `extras_get_object_change` | Get change details | `id` |
## Common Object Types
| Category | Object Types |
|----------|--------------|
| DCIM | `dcim.device`, `dcim.interface`, `dcim.site`, `dcim.rack`, `dcim.cable` |
| IPAM | `ipam.ipaddress`, `ipam.prefix`, `ipam.vlan`, `ipam.vrf` |
| Virtualization | `virtualization.virtualmachine`, `virtualization.cluster` |
| Tenancy | `tenancy.tenant`, `tenancy.contact` |
## Audit Workflow
1. **Parse user request** - Determine filters
2. **Query object changes** - `extras_list_object_changes`
3. **Enrich data** - Fetch detailed records if needed
4. **Analyze patterns** - Identify bulk operations, unusual activity
5. **Generate report** - Structured format
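As a sketch, the summary tables can be derived with a few counters; the key names (`user`, `action`, `changed_object_type`) are taken from the report template below and are assumptions about the record shape:
```python
from collections import Counter

def summarize(changes):
    by_action = Counter(c["action"] for c in changes)             # creates/updates/deletes
    by_user = Counter((c["user"], c["action"]) for c in changes)  # per-user breakdown
    by_type = Counter(c["changed_object_type"] for c in changes)  # per-object-type
    return by_action, by_user, by_type
```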
## Report Template
```markdown
## NetBox Change Audit Report
**Generated:** [timestamp]
**Period:** [date range or "All time"]
**Filters:** [applied filters]
### Summary
| Metric | Count |
|--------|-------|
| Total Changes | X |
| Creates | Y |
| Updates | Z |
| Deletes | W |
| Unique Users | N |
| Object Types | M |
### Changes by Action
#### Created Objects (Y)
| Time | User | Object Type | Object | Details |
|------|------|-------------|--------|---------|
| 2024-01-15 14:30 | admin | dcim.device | server-01 | Created device |
#### Updated Objects (Z)
| Time | User | Object Type | Object | Changed Fields |
|------|------|-------------|--------|----------------|
| 2024-01-15 15:00 | john | ipam.ipaddress | 10.0.1.50/24 | status, description |
#### Deleted Objects (W)
| Time | User | Object Type | Object | Details |
|------|------|-------------|--------|---------|
| 2024-01-14 09:00 | admin | dcim.interface | eth2 | Removed from server-01 |
### Changes by User
| User | Creates | Updates | Deletes | Total |
|------|---------|---------|---------|-------|
| admin | 5 | 10 | 2 | 17 |
| john | 3 | 8 | 0 | 11 |
### Changes by Object Type
| Object Type | Creates | Updates | Deletes | Total |
|-------------|---------|---------|---------|-------|
| dcim.device | 2 | 5 | 0 | 7 |
| ipam.ipaddress | 4 | 3 | 1 | 8 |
### Timeline
```
2024-01-15: ######## 8 changes
2024-01-14: #### 4 changes
2024-01-13: ## 2 changes
```
### Notable Patterns
- **Bulk operations:** [Many changes in short time]
- **Unusual activity:** [Unexpected deletions, after-hours changes]
- **Missing audit trail:** [Expected changes not logged]
### Recommendations
1. [Security or process recommendations based on findings]
```
## Enriching Change Details
For detailed audit, use `extras_get_object_change` to see:
- `prechange_data` - Object state before change
- `postchange_data` - Object state after change
- `request_id` - Links related changes in same request
## Security Audit Mode
When user asks for "security audit" or "compliance report":
1. Focus on deletions and permission-sensitive changes
2. Highlight changes to critical objects (firewalls, VRFs, prefixes)
3. Flag changes outside business hours
4. Identify users with high change counts
## Filter Examples
| Request | Filter |
|---------|--------|
| Recent changes | None (last 24 hours default) |
| Last 7 days | Filter by `time` field |
| By user | `user_id=<id>` |
| Device changes | `changed_object_type=dcim.device` |
| All deletions | `action=delete` |
@@ -0,0 +1,177 @@
# Device Registration Skill
How to register devices into NetBox.
## Prerequisites
Load these skills:
- `system-discovery` - Bash commands for gathering system info
- `netbox-patterns` - Best practices for data quality
- `mcp-tools-reference` - MCP tool reference
## Registration Workflow
### Phase 1: System Discovery
Use commands from `system-discovery` skill to gather:
- Hostname, OS, hardware model, serial number
- CPU, memory, disk
- Network interfaces with IPs
- Running Docker containers
### Phase 2: Pre-Registration Checks
1. **Check if device exists:**
```
dcim_list_devices name=<hostname>
```
If exists, suggest `/cmdb-sync` instead.
2. **Verify/Create site:**
```
dcim_list_sites name=<site-name>
```
If not found, list available sites or offer to create.
3. **Verify/Create platform:**
```
dcim_list_platforms name=<platform-name>
```
Create if not exists with `dcim_create_platform`.
4. **Verify/Create device role:**
```
dcim_list_device_roles name=<role-name>
```
### Phase 3: Device Creation
1. **Get/Create manufacturer and device type:**
```
dcim_list_manufacturers name="<manufacturer>"
dcim_list_device_types manufacturer_id=X model="<model>"
```
2. **Create device:**
```
dcim_create_device
name=<hostname>
device_type=<device_type_id>
role=<role_id>
site=<site_id>
platform=<platform_id>
tenant=<tenant_id> # if provided
serial=<serial>
description="Registered via cmdb-assistant"
```
3. **Create interfaces:**
For each network interface:
```
dcim_create_interface
device=<device_id>
name=<interface_name>
type=<type>
mac_address=<mac>
enabled=true
```
4. **Create IP addresses:**
For each IP:
```
ipam_create_ip_address
address=<ip/prefix>
assigned_object_type="dcim.interface"
assigned_object_id=<interface_id>
status="active"
```
5. **Set primary IP:**
```
dcim_update_device
id=<device_id>
primary_ip4=<primary_ip_id>
```
### Phase 4: Container Registration (if Docker)
1. **Create/Get cluster type:**
```
virt_list_cluster_types name="Docker Compose"
virt_create_cluster_type name="Docker Compose" slug="docker-compose"
```
2. **Create cluster:**
```
virt_create_cluster
name=<project-name>
type=<cluster_type_id>
site=<site_id>
description="Docker Compose stack on <hostname>"
```
3. **Create VMs for containers:**
For each running container:
```
virt_create_vm
name=<container_name>
cluster=<cluster_id>
site=<site_id>
role=<role_id>
status="active"
vcpus=<cpu_shares>
memory=<memory_mb>
disk=<disk_gb>
```
### Phase 5: Documentation
Add journal entry:
```
extras_create_journal_entry
assigned_object_type="dcim.device"
assigned_object_id=<device_id>
comments="Device registered via /cmdb-register command\n\nDiscovered:\n- X network interfaces\n- Y IP addresses\n- Z Docker containers"
```
## Summary Report Template
```markdown
## Machine Registration Complete
### Device Created
- **Name:** <hostname>
- **Site:** <site>
- **Platform:** <platform>
- **Role:** <role>
- **ID:** <device_id>
- **URL:** https://netbox.example.com/dcim/devices/<id>/
### Network Interfaces
| Interface | Type | MAC | IP Address |
|-----------|------|-----|------------|
| eth0 | 1000base-t | aa:bb:cc:dd:ee:ff | 192.168.1.100/24 |
### Primary IP: 192.168.1.100
### Docker Containers Registered (if applicable)
**Cluster:** <cluster_name> (ID: <cluster_id>)
| Container | Role | vCPUs | Memory | Status |
|-----------|------|-------|--------|--------|
| media_jellyfin | Media Server | 2.0 | 2048MB | Active |
### Next Steps
- Run `/cmdb-sync` periodically to keep data current
- Run `/cmdb-audit` to check data quality
- Add tags for classification
```
## Error Handling
| Error | Action |
|-------|--------|
| Device already exists | Suggest `/cmdb-sync` or ask to proceed |
| Site not found | List available sites, offer to create new |
| Docker not available | Skip container registration, note in summary |
| Permission denied | Note which operations failed, suggest fixes |
@@ -0,0 +1,162 @@
# IP Management Skill
IP address and prefix management in NetBox.
## Prerequisites
Load skill: `mcp-tools-reference`
## IPAM Operations
### Prefix Management
| Action | Tool | Key Parameters |
|--------|------|----------------|
| List prefixes | `ipam_list_prefixes` | `prefix`, `vrf_id`, `within`, `contains` |
| Get details | `ipam_get_prefix` | `id` |
| Find available child | `ipam_list_available_prefixes` | `prefix_id` |
| Create prefix | `ipam_create_prefix` | `prefix`, `status`, `site`, `vrf` |
| Allocate child | `ipam_create_available_prefix` | `prefix_id`, `prefix_length` |
### IP Address Management
| Action | Tool | Key Parameters |
|--------|------|----------------|
| List IPs | `ipam_list_ip_addresses` | `address`, `vrf_id`, `device_id` |
| Get details | `ipam_get_ip_address` | `id` |
| Find available | `ipam_list_available_ips` | `prefix_id` |
| Create IP | `ipam_create_ip_address` | `address`, `assigned_object_type`, `assigned_object_id` |
| Allocate next | `ipam_create_available_ip` | `prefix_id` |
| Assign to interface | `ipam_update_ip_address` | `id`, `assigned_object_id` |
### VLAN and VRF
| Action | Tool |
|--------|------|
| List VLANs | `ipam_list_vlans` |
| Get VLAN | `ipam_get_vlan` |
| Create VLAN | `ipam_create_vlan` |
| List VRFs | `ipam_list_vrfs` |
| Get VRF | `ipam_get_vrf` |
## IP Allocation Workflow
1. **Find available IPs in target prefix:**
```
ipam_list_available_ips prefix_id=<id>
```
2. **Create the IP address:**
```
ipam_create_ip_address
address=<ip/prefix>
assigned_object_type="dcim.interface"
assigned_object_id=<interface_id>
status="active"
```
3. **Set as primary (if needed):**
```
dcim_update_device id=<device_id> primary_ip4=<ip_id>
```
## IP Conflict Detection
### Conflict Types
1. **Duplicate IP Addresses**
- Multiple records with same address in same VRF
- Exception: Anycast addresses (check `role` field)
2. **Overlapping Prefixes**
- Prefixes containing same address space in same VRF
- Legitimate: Parent/child hierarchy, different VRFs, "container" status
3. **IPs Outside Prefix**
- IP addresses not within any defined prefix
4. **Same Prefix in Multiple VRFs** (informational)
### Detection Workflow
1. **Duplicate Detection:**
- Get all addresses: `ipam_list_ip_addresses`
- Group by address + VRF
- Flag groups with >1 record
2. **Overlap Detection:**
- Get all prefixes: `ipam_list_prefixes`
- For each VRF, compare prefixes pairwise
- Check if prefix A contains prefix B or vice versa
- Ignore legitimate hierarchies (status=container)
3. **Orphan IP Detection:**
- For each IP, find containing prefix
- Flag IPs with no prefix match
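A minimal sketch of the duplicate check, assuming records shaped roughly like the `ipam_list_ip_addresses` output (field names are assumptions):
```python
from collections import defaultdict

def find_duplicates(ip_records):
    groups = defaultdict(list)
    for rec in ip_records:
        vrf = (rec.get("vrf") or {}).get("name", "Global")
        groups[(rec["address"], vrf)].append(rec)
    # Keep groups with >1 record, minus the anycast exception
    return {key: recs for key, recs in groups.items()
            if len(recs) > 1 and not all(r.get("role") == "anycast" for r in recs)}
```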
### CIDR Math Rules
- Prefix A **contains** Prefix B if: `A.network <= B.network AND A.broadcast >= B.broadcast`
- Two prefixes **overlap** if: `A.network <= B.broadcast AND B.network <= A.broadcast`
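The same rules are available via Python's standard `ipaddress` module, as a quick sketch:
```python
import ipaddress

a = ipaddress.ip_network("10.0.0.0/24")
b = ipaddress.ip_network("10.0.0.128/25")

print(b.subnet_of(a))  # True: A contains B (candidate parent/child hierarchy)
print(a.overlaps(b))   # True: the address spaces intersect
```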
### Severity Levels
| Issue | Severity |
|-------|----------|
| Duplicate IP (same interface type) | CRITICAL |
| Duplicate IP (different roles) | HIGH |
| Overlapping prefixes (same status) | HIGH |
| Overlapping prefixes (container ok) | LOW |
| Orphan IP | MEDIUM |
## Conflict Report Template
```markdown
## IP Conflict Detection Report
**Generated:** [timestamp]
**Scope:** [scope parameter]
### Summary
| Check | Status | Count |
|-------|--------|-------|
| Duplicate IPs | [PASS/FAIL] | X |
| Overlapping Prefixes | [PASS/FAIL] | Y |
| Orphan IPs | [PASS/FAIL] | Z |
### Critical Issues
#### Duplicate IP Addresses
| Address | VRF | Count | Assigned To |
|---------|-----|-------|-------------|
| 10.0.1.50/24 | Global | 2 | server-01, server-02 |
**Resolution:**
- Determine which device should have the IP
- Update or remove the duplicate
#### Overlapping Prefixes
| Prefix 1 | Prefix 2 | VRF | Type |
|----------|----------|-----|------|
| 10.0.0.0/24 | 10.0.0.0/25 | Global | Unstructured |
**Resolution:**
- For legitimate hierarchies: Mark parent as status="container"
- For accidental: Consolidate or re-address
### Remediation Commands
```
# Remove duplicate IP
ipam_delete_ip_address id=123
# Mark prefix as container
ipam_update_prefix id=456 status=container
# Create missing prefix
ipam_create_prefix prefix=172.16.5.0/24 status=active
```
```
@@ -0,0 +1,281 @@
# NetBox MCP Tools Reference
Complete reference for NetBox MCP tools organized by category.
## DCIM (Data Center Infrastructure Management)
### Sites and Locations
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `dcim_list_sites` | List all sites | `name`, `status`, `region_id` |
| `dcim_get_site` | Get site details | `id` |
| `dcim_create_site` | Create new site | `name`, `slug`, `status` |
| `dcim_update_site` | Update site | `id`, fields to update |
| `dcim_delete_site` | Delete site | `id` |
| `dcim_list_locations` | List locations within sites | `site_id`, `parent_id` |
| `dcim_get_location` | Get location details | `id` |
| `dcim_create_location` | Create location | `name`, `slug`, `site` |
| `dcim_update_location` | Update location | `id`, fields to update |
| `dcim_delete_location` | Delete location | `id` |
| `dcim_list_regions` | List regions | `name` |
| `dcim_get_region` | Get region details | `id` |
| `dcim_create_region` | Create region | `name`, `slug` |
| `dcim_update_region` | Update region | `id`, fields to update |
| `dcim_delete_region` | Delete region | `id` |
### Racks
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `dcim_list_racks` | List racks | `site_id`, `location_id`, `name` |
| `dcim_get_rack` | Get rack details | `id` |
| `dcim_create_rack` | Create rack | `name`, `site`, `u_height` |
| `dcim_update_rack` | Update rack | `id`, fields to update |
| `dcim_delete_rack` | Delete rack | `id` |
### Devices
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `dcim_list_devices` | List devices | `name`, `site_id`, `role_id`, `status` |
| `dcim_get_device` | Get device details | `id` |
| `dcim_create_device` | Create device | `name`, `device_type`, `role`, `site` |
| `dcim_update_device` | Update device | `id`, `primary_ip4`, etc. |
| `dcim_delete_device` | Delete device | `id` |
### Device Types and Roles
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `dcim_list_device_types` | List device types | `manufacturer_id`, `model` |
| `dcim_get_device_type` | Get type details | `id` |
| `dcim_create_device_type` | Create device type | `manufacturer`, `model`, `slug` |
| `dcim_update_device_type` | Update device type | `id`, fields |
| `dcim_delete_device_type` | Delete device type | `id` |
| `dcim_list_device_roles` | List device roles | `name` |
| `dcim_get_device_role` | Get role details | `id` |
| `dcim_create_device_role` | Create device role | `name`, `slug` |
| `dcim_update_device_role` | Update device role | `id`, fields |
| `dcim_delete_device_role` | Delete device role | `id` |
### Manufacturers and Platforms
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `dcim_list_manufacturers` | List manufacturers | `name` |
| `dcim_get_manufacturer` | Get manufacturer details | `id` |
| `dcim_create_manufacturer` | Create manufacturer | `name`, `slug` |
| `dcim_update_manufacturer` | Update manufacturer | `id`, fields |
| `dcim_delete_manufacturer` | Delete manufacturer | `id` |
| `dcim_list_platforms` | List platforms | `name` |
| `dcim_get_platform` | Get platform details | `id` |
| `dcim_create_platform` | Create platform | `name`, `slug` |
| `dcim_update_platform` | Update platform | `id`, fields |
| `dcim_delete_platform` | Delete platform | `id` |
### Interfaces and Cables
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `dcim_list_interfaces` | List interfaces | `device_id`, `name`, `type` |
| `dcim_get_interface` | Get interface details | `id` |
| `dcim_create_interface` | Create interface | `device`, `name`, `type` |
| `dcim_update_interface` | Update interface | `id`, `enabled`, `mac_address` |
| `dcim_delete_interface` | Delete interface | `id` |
| `dcim_list_cables` | List cables | `device_id`, `site_id` |
| `dcim_get_cable` | Get cable details | `id` |
| `dcim_create_cable` | Create cable | `a_terminations`, `b_terminations` |
| `dcim_update_cable` | Update cable | `id`, fields |
| `dcim_delete_cable` | Delete cable | `id` |
### Power
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `dcim_list_power_panels` | List power panels | `site_id` |
| `dcim_get_power_panel` | Get panel details | `id` |
| `dcim_create_power_panel` | Create power panel | `name`, `site` |
| `dcim_list_power_feeds` | List power feeds | `power_panel_id` |
| `dcim_get_power_feed` | Get feed details | `id` |
| `dcim_create_power_feed` | Create power feed | `name`, `power_panel`, `supply` |
### Other DCIM
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `dcim_list_virtual_chassis` | List virtual chassis | (varies) |
| `dcim_get_virtual_chassis` | Get virtual chassis | `id` |
| `dcim_list_inventory_items` | List inventory items | `device_id` |
| `dcim_get_inventory_item` | Get inventory item | `id` |
## IPAM (IP Address Management)
### Prefixes
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `ipam_list_prefixes` | List prefixes | `prefix`, `vrf_id`, `within`, `contains` |
| `ipam_get_prefix` | Get prefix details | `id` |
| `ipam_create_prefix` | Create prefix | `prefix`, `status`, `site`, `vrf` |
| `ipam_update_prefix` | Update prefix | `id`, `status`, etc. |
| `ipam_delete_prefix` | Delete prefix | `id` |
| `ipam_list_available_prefixes` | List available child prefixes | `prefix_id` |
| `ipam_create_available_prefix` | Allocate from parent | `prefix_id`, `prefix_length` |
### IP Addresses
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `ipam_list_ip_addresses` | List IP addresses | `address`, `vrf_id`, `device_id`, `status` |
| `ipam_get_ip_address` | Get IP details | `id` |
| `ipam_create_ip_address` | Create IP address | `address`, `assigned_object_type`, `assigned_object_id` |
| `ipam_update_ip_address` | Update IP address | `id`, `status`, etc. |
| `ipam_delete_ip_address` | Delete IP address | `id` |
| `ipam_list_available_ips` | List available IPs in prefix | `prefix_id` |
| `ipam_create_available_ip` | Allocate next available | `prefix_id` |
### VLANs and VRFs
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `ipam_list_vlans` | List VLANs | `vid`, `name`, `site_id` |
| `ipam_get_vlan` | Get VLAN details | `id` |
| `ipam_create_vlan` | Create VLAN | `vid`, `name`, `site` |
| `ipam_update_vlan` | Update VLAN | `id`, fields |
| `ipam_delete_vlan` | Delete VLAN | `id` |
| `ipam_list_vlan_groups` | List VLAN groups | `site_id` |
| `ipam_get_vlan_group` | Get VLAN group | `id` |
| `ipam_create_vlan_group` | Create VLAN group | `name`, `slug`, `scope_type` |
| `ipam_list_vrfs` | List VRFs | `name` |
| `ipam_get_vrf` | Get VRF details | `id` |
| `ipam_create_vrf` | Create VRF | `name`, `rd` |
| `ipam_update_vrf` | Update VRF | `id`, fields |
| `ipam_delete_vrf` | Delete VRF | `id` |
### Other IPAM
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `ipam_list_asns` | List ASNs | (varies) |
| `ipam_get_asn` | Get ASN details | `id` |
| `ipam_create_asn` | Create ASN | `asn`, `rir` |
| `ipam_list_rirs` | List RIRs | `name` |
| `ipam_get_rir` | Get RIR details | `id` |
| `ipam_list_aggregates` | List aggregates | `prefix`, `rir_id` |
| `ipam_get_aggregate` | Get aggregate | `id` |
| `ipam_create_aggregate` | Create aggregate | `prefix`, `rir` |
| `ipam_list_services` | List services | `device_id`, `name` |
| `ipam_get_service` | Get service details | `id` |
| `ipam_create_service` | Create service | `name`, `ports`, `protocol` |
## Virtualization
### Clusters
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `virt_list_cluster_types` | List cluster types | `name` |
| `virt_get_cluster_type` | Get cluster type | `id` |
| `virt_create_cluster_type` | Create cluster type | `name`, `slug` |
| `virt_list_cluster_groups` | List cluster groups | `name` |
| `virt_get_cluster_group` | Get cluster group | `id` |
| `virt_create_cluster_group` | Create cluster group | `name`, `slug` |
| `virt_list_clusters` | List clusters | `name`, `site_id`, `type_id` |
| `virt_get_cluster` | Get cluster details | `id` |
| `virt_create_cluster` | Create cluster | `name`, `type`, `site` |
| `virt_update_cluster` | Update cluster | `id`, fields |
| `virt_delete_cluster` | Delete cluster | `id` |
### Virtual Machines
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `virt_list_vms` | List VMs | `name`, `cluster_id`, `site_id`, `status` |
| `virt_get_vm` | Get VM details | `id` |
| `virt_create_vm` | Create VM | `name`, `cluster`, `site`, `status` |
| `virt_update_vm` | Update VM | `id`, `status`, etc. |
| `virt_delete_vm` | Delete VM | `id` |
| `virt_list_vm_ifaces` | List VM interfaces | `virtual_machine_id` |
| `virt_get_vm_iface` | Get VM interface | `id` |
| `virt_create_vm_iface` | Create VM interface | `virtual_machine`, `name` |
## Circuits
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `circuits_list_providers` | List providers | `name` |
| `circuits_get_provider` | Get provider | `id` |
| `circuits_create_provider` | Create provider | `name`, `slug` |
| `circuits_update_provider` | Update provider | `id`, fields |
| `circuits_delete_provider` | Delete provider | `id` |
| `circ_list_types` | List circuit types | `name` |
| `circ_get_type` | Get circuit type | `id` |
| `circ_create_type` | Create circuit type | `name`, `slug` |
| `circuits_list_circuits` | List circuits | `provider_id`, `type_id` |
| `circuits_get_circuit` | Get circuit | `id` |
| `circuits_create_circuit` | Create circuit | `cid`, `provider`, `type` |
| `circuits_update_circuit` | Update circuit | `id`, fields |
| `circuits_delete_circuit` | Delete circuit | `id` |
| `circ_list_terminations` | List terminations | `circuit_id` |
| `circ_get_termination` | Get termination | `id` |
| `circ_create_termination` | Create termination | `circuit`, `site`, `term_side` |
## Tenancy
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `tenancy_list_tenant_groups` | List tenant groups | `name` |
| `tenancy_get_tenant_group` | Get tenant group | `id` |
| `tenancy_create_tenant_group` | Create tenant group | `name`, `slug` |
| `tenancy_list_tenants` | List tenants | `name`, `group_id` |
| `tenancy_get_tenant` | Get tenant | `id` |
| `tenancy_create_tenant` | Create tenant | `name`, `slug` |
| `tenancy_update_tenant` | Update tenant | `id`, fields |
| `tenancy_delete_tenant` | Delete tenant | `id` |
| `tenancy_list_contacts` | List contacts | `name` |
| `tenancy_get_contact` | Get contact | `id` |
| `tenancy_create_contact` | Create contact | `name` |
## VPN
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `vpn_list_tunnels` | List VPN tunnels | `name` |
| `vpn_get_tunnel` | Get tunnel | `id` |
| `vpn_create_tunnel` | Create tunnel | `name`, `status` |
| `vpn_list_l2vpns` | List L2VPNs | `name` |
| `vpn_get_l2vpn` | Get L2VPN | `id` |
| `vpn_create_l2vpn` | Create L2VPN | `name`, `type` |
| `vpn_list_ike_policies` | List IKE policies | (varies) |
| `vpn_list_ipsec_policies` | List IPSec policies | (varies) |
| `vpn_list_ipsec_profiles` | List IPSec profiles | (varies) |
## Wireless
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `wlan_list_groups` | List WLAN groups | `name` |
| `wlan_get_group` | Get WLAN group | `id` |
| `wlan_create_group` | Create WLAN group | `name`, `slug` |
| `wlan_list_lans` | List WLANs | `ssid` |
| `wlan_get_lan` | Get WLAN | `id` |
| `wlan_create_lan` | Create WLAN | `ssid`, `group` |
| `wlan_list_links` | List wireless links | (varies) |
| `wlan_get_link` | Get wireless link | `id` |
## Extras
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `extras_list_tags` | List tags | `name` |
| `extras_get_tag` | Get tag | `id` |
| `extras_create_tag` | Create tag | `name`, `slug`, `color` |
| `extras_update_tag` | Update tag | `id`, fields |
| `extras_delete_tag` | Delete tag | `id` |
| `extras_list_custom_fields` | List custom fields | `name` |
| `extras_get_custom_field` | Get custom field | `id` |
| `extras_list_webhooks` | List webhooks | `name` |
| `extras_get_webhook` | Get webhook | `id` |
| `extras_list_journal_entries` | List journal entries | `assigned_object_type`, `assigned_object_id` |
| `extras_get_journal_entry` | Get journal entry | `id` |
| `extras_create_journal_entry` | Create journal entry | `assigned_object_type`, `assigned_object_id`, `comments` |
| `extras_list_object_changes` | List audit log | `user_id`, `changed_object_type`, `action` |
| `extras_get_object_change` | Get change details | `id` |
| `extras_list_config_contexts` | List config contexts | `name` |
| `extras_get_config_context` | Get config context | `id` |
## Common Object Types for Filtering
| Category | Object Types |
|----------|--------------|
| DCIM | `dcim.device`, `dcim.interface`, `dcim.site`, `dcim.rack`, `dcim.cable` |
| IPAM | `ipam.ipaddress`, `ipam.prefix`, `ipam.vlan`, `ipam.vrf` |
| Virtualization | `virtualization.virtualmachine`, `virtualization.cluster` |
| Tenancy | `tenancy.tenant`, `tenancy.contact` |
@@ -0,0 +1,191 @@
# Sync Workflow Skill
How to synchronize machine state with NetBox.
## Prerequisites
Load these skills:
- `system-discovery` - Bash commands for system info
- `mcp-tools-reference` - MCP tool reference
## Sync Workflow
### Phase 1: Device Lookup
```
dcim_list_devices name=<hostname>
```
If not found, suggest `/cmdb-register` first.
If found:
- Store device ID and current field values
- Fetch interfaces: `dcim_list_interfaces device_id=<device_id>`
- Fetch IPs: `ipam_list_ip_addresses device_id=<device_id>`
- Check clusters/VMs: `virt_list_clusters`, `virt_list_vms cluster=<cluster_id>`
### Phase 2: Current State Discovery
Use commands from `system-discovery` skill.
### Phase 3: Comparison
#### Device Attributes
| Field | Compare |
|-------|---------|
| Platform | OS version changed? |
| Status | Still active? |
| Serial | Match? |
| Description | Keep existing |
#### Network Interfaces
| Change Type | Detection |
|-------------|-----------|
| New interface | Exists locally but not in NetBox |
| Removed interface | In NetBox but not locally |
| Changed MAC | MAC address different |
| Interface type | Type mismatch |
#### IP Addresses
| Change Type | Detection |
|-------------|-----------|
| New IP | Exists locally but not in NetBox |
| Removed IP | In NetBox but not locally |
| Primary IP changed | Default route interface changed |
#### Docker Containers
| Change Type | Detection |
|-------------|-----------|
| New container | Running locally but no VM in cluster |
| Stopped container | VM exists but container not running |
| Resource change | vCPUs/memory different |
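A minimal sketch of the interface comparison, assuming NetBox records from `dcim_list_interfaces` and local records from `ip -j addr show` (field names are assumptions):
```python
def diff_interfaces(netbox_ifaces, local_ifaces):
    nb = {i["name"]: i for i in netbox_ifaces}
    local = {i["ifname"]: i for i in local_ifaces}
    new = [n for n in local if n not in nb]      # create in NetBox
    removed = [n for n in nb if n not in local]  # mark offline
    changed = [n for n in nb.keys() & local.keys()
               if (nb[n].get("mac_address") or "").lower()
               != local[n].get("address", "").lower()]  # MAC drift
    return new, removed, changed
```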
### Phase 4: Diff Report
```markdown
## Sync Diff Report
**Device:** <hostname> (ID: <device_id>)
**NetBox URL:** https://netbox.example.com/dcim/devices/<id>/
### Device Attributes
| Field | NetBox Value | Current Value | Action |
|-------|--------------|---------------|--------|
| Platform | Ubuntu 22.04 | Ubuntu 24.04 | UPDATE |
### Network Interfaces
#### New Interfaces (will create)
| Interface | Type | MAC | IPs |
|-----------|------|-----|-----|
| tailscale0 | virtual | - | 100.x.x.x/32 |
#### Removed Interfaces (will mark offline)
| Interface | Type | Reason |
|-----------|------|--------|
| eth1 | 1000base-t | Not found locally |
#### Changed Interfaces
| Interface | Field | Old | New |
|-----------|-------|-----|-----|
| eth0 | mac_address | aa:bb:cc:00:00:00 | aa:bb:cc:11:11:11 |
### IP Addresses
#### New IPs (will create)
- 192.168.1.150/24 on eth0
#### Removed IPs (will unassign)
- 192.168.1.100/24 from eth0
### Docker Containers
#### New Containers (will create VMs)
| Container | Image | Role |
|-----------|-------|------|
| media_lidarr | linuxserver/lidarr | Media Management |
### Summary
- **Updates:** X
- **Creates:** Y
- **Removals/Offline:** Z
```
### Phase 5: Apply Updates
#### Device Updates
```
dcim_update_device id=<device_id> platform=<new_platform_id>
```
#### Interface Updates
New:
```
dcim_create_interface device=<device_id> name=<name> type=<type>
```
Removed (mark offline):
```
dcim_update_interface id=<id> enabled=false description="Marked offline by cmdb-sync"
```
Changed:
```
dcim_update_interface id=<id> mac_address=<new_mac>
```
#### IP Address Updates
New:
```
ipam_create_ip_address address=<ip/prefix> assigned_object_type="dcim.interface" assigned_object_id=<id>
```
Removed (unassign):
```
ipam_update_ip_address id=<id> assigned_object_type=null assigned_object_id=null
```
#### Primary IP Update
```
dcim_update_device id=<device_id> primary_ip4=<new_primary_ip_id>
```
#### Container/VM Updates
New:
```
virt_create_vm name=<name> cluster=<cluster_id> status="active"
```
Stopped:
```
virt_update_vm id=<id> status="offline"
```
### Phase 6: Journal Entry
```
extras_create_journal_entry
assigned_object_type="dcim.device"
assigned_object_id=<device_id>
comments="Device synced via /cmdb-sync command\n\nChanges applied:\n- <list>"
```
## Sync Modes
### Dry Run Mode
- Complete phases 1-4 (lookup, discovery, compare, diff report)
- Skip phases 5-6 (no updates, no journal)
- End with: "Dry run complete. No changes applied."
### Full Sync Mode
- Skip user confirmation
- Update all fields even if unchanged (force refresh)
## Error Handling
| Error | Action |
|-------|--------|
| Device not found | Suggest `/cmdb-register` |
| Permission denied | Note which failed, continue others |
| Cluster not found | Offer to create or skip container sync |
| API errors | Log error, continue with remaining |

View File

@@ -0,0 +1,101 @@
# System Discovery Skill
Bash commands for gathering system information from the current machine.
## Basic Device Information
```bash
# Hostname
hostname
# OS/Platform info
cat /etc/os-release 2>/dev/null || uname -a
# Hardware model - Raspberry Pi
cat /proc/device-tree/model 2>/dev/null || echo "Unknown"
# Hardware model - x86 systems
cat /sys/class/dmi/id/product_name 2>/dev/null || echo "Unknown"
# Serial number - Raspberry Pi
cat /proc/device-tree/serial-number 2>/dev/null || grep Serial /proc/cpuinfo 2>/dev/null | cut -d: -f2 | tr -d ' '
# Serial number - x86 systems
cat /sys/class/dmi/id/product_serial 2>/dev/null || echo "Unknown"
# CPU count
nproc
# Memory in MB
free -m | awk '/Mem:/ {print $2}'
# Disk size in GB (root filesystem)
df -BG / | awk 'NR==2 {print $2}' | tr -d 'G'
```
## Network Interfaces
```bash
# Get interfaces with IPs (JSON format)
ip -j addr show 2>/dev/null || ip addr show
# Get default gateway interface
ip route | grep default | awk '{print $5}' | head -1
# Get MAC addresses
ip -j link show 2>/dev/null || ip link show
```
## Running Applications
```bash
# Docker containers (JSON format)
docker ps --format '{"name":"{{.Names}}","image":"{{.Image}}","status":"{{.Status}}","ports":"{{.Ports}}"}' 2>/dev/null || echo "Docker not available"
# Docker Compose projects (find compose files)
find ~/apps /home/*/apps \( -name "docker-compose.yml" -o -name "docker-compose.yaml" \) 2>/dev/null | head -20
# Running systemd services
systemctl list-units --type=service --state=running --no-pager --plain 2>/dev/null | grep -v "^UNIT" | head -30
```
## Interface Type Mapping
| Interface Pattern | NetBox Type |
|-------------------|-------------|
| `eth*`, `enp*` | `1000base-t` |
| `wlan*` | `ieee802.11ax` |
| `tailscale*`, `docker*`, `br-*` | `virtual` |
| `lo` | Skip (loopback) |
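A short sketch of applying this table with glob matching (the helper name and the fallback value are illustrative):
```python
from fnmatch import fnmatch

# Ordered (pattern, NetBox type) pairs mirroring the table above.
INTERFACE_TYPE_MAP = [
    ("eth*", "1000base-t"),
    ("enp*", "1000base-t"),
    ("wlan*", "ieee802.11ax"),
    ("tailscale*", "virtual"),
    ("docker*", "virtual"),
    ("br-*", "virtual"),
]

def netbox_interface_type(name: str):
    """Return the NetBox type for an interface name; None means skip it."""
    if name == "lo":
        return None  # loopback: skip
    for pattern, nb_type in INTERFACE_TYPE_MAP:
        if fnmatch(name, pattern):
            return nb_type
    return "virtual"  # assumed fallback for unrecognized interfaces
```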
## Platform Detection
Based on the detected OS, determine the platform name:
| OS Detection | Platform Name |
|--------------|---------------|
| Raspberry Pi OS | `Raspberry Pi OS (Bookworm)` |
| Ubuntu | `Ubuntu {version} LTS` |
| Debian | `Debian {version}` |
| Default | `{OS Name} {Version}` |
## Device Role Auto-Detection
Based on detected services:
| Detection | Suggested Role |
|-----------|----------------|
| Docker containers found | `Docker Host` |
| Only basic services | `Server` |
| Specific role specified | Use specified |
## Container Role Mapping
Map container names/images to roles:
| Container Pattern | Role |
|-------------------|------|
| `*caddy*`, `*nginx*`, `*traefik*` | Reverse Proxy |
| `*db*`, `*postgres*`, `*mysql*`, `*redis*` | Database |
| `*webui*`, `*frontend*` | Web Application |
| Others | Infer from image or use "Container" |

View File

@@ -0,0 +1,155 @@
# Topology Generation Skill
Generate Mermaid diagrams from NetBox data.
## Prerequisites
Load skill: `mcp-tools-reference`
## View: Rack Elevation
### Data Collection
1. Find rack: `dcim_list_racks name=<name>`
2. Get devices: `dcim_list_devices rack_id=<id>`
3. Note for each: `position`, `u_height`, `face`, `name`, `role`
### Mermaid Template
```mermaid
graph TB
subgraph rack["Rack: <rack-name> (U<height>)"]
direction TB
u42["U42: empty"]
u41["U41: empty"]
u40["U40: server-01 (Server)"]
u39["U39: server-01 (cont.)"]
u38["U38: switch-01 (Switch)"]
end
```
### Rules
- Mark top U with device name and role
- Mark subsequent Us as "(cont.)" for multi-U devices
- Empty Us show "empty"
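A rough sketch of generating these U-slot lines, assuming NetBox's convention that `position` is the lowest U a device occupies:
```python
def rack_unit_lines(height: int, devices: list) -> list[str]:
    """devices: (name, role, position, u_height) tuples; returns Mermaid node lines."""
    labels = {}
    for name, role, position, u_height in devices:
        labels[position + u_height - 1] = f"{name} ({role})"  # top U: name + role
        for u in range(position, position + u_height - 1):
            labels[u] = f"{name} (cont.)"                     # remaining Us
    return [f'u{u}["U{u}: {labels.get(u, "empty")}"]' for u in range(height, 0, -1)]
```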
## View: Network Topology
### Data Collection
1. List sites: `dcim_list_sites`
2. List devices: `dcim_list_devices site_id=<id>`
3. List cables: `dcim_list_cables`
4. List interfaces: `dcim_list_interfaces device_id=<id>`
### Mermaid Template
```mermaid
graph TD
subgraph site1["Site: Home"]
router1[("core-router-01<br/>Router")]
switch1[["dist-switch-01<br/>Switch"]]
server1["web-server-01<br/>Server"]
server2["db-server-01<br/>Server"]
end
router1 -->|"eth0 - eth1"| switch1
switch1 -->|"gi0/1 - eth0"| server1
switch1 -->|"gi0/2 - eth0"| server2
```
### Node Shapes by Role
| Role | Shape | Mermaid Syntax |
|------|-------|----------------|
| Router | Cylinder | `[(" ")]` |
| Switch | Double brackets | `[[ ]]` |
| Server | Rectangle | `[ ]` |
| Firewall | Hexagon | `{{ }}` |
| Other | Rectangle | `[ ]` |
### Edge Labels
Show interface names: `A-side - B-side`
## View: Site Overview
### Data Collection
1. Get site: `dcim_get_site id=<id>`
2. List racks: `dcim_list_racks site_id=<id>`
3. Count devices per rack: `dcim_list_devices rack_id=<id>`
### Mermaid Template
```mermaid
graph TB
subgraph site["Site: Headquarters"]
subgraph row1["Row 1"]
rack1["Rack A1<br/>12/42 U used<br/>5 devices"]
rack2["Rack A2<br/>20/42 U used<br/>8 devices"]
end
subgraph row2["Row 2"]
rack3["Rack B1<br/>8/42 U used<br/>3 devices"]
end
end
```
## View: Full Infrastructure
### Data Collection
1. List regions: `dcim_list_regions`
2. List sites: `dcim_list_sites`
3. Count devices: `dcim_list_devices status=active`
### Mermaid Template
```mermaid
graph TB
subgraph region1["Region: Americas"]
site1["Headquarters<br/>3 racks, 25 devices"]
site2["Branch Office<br/>1 rack, 5 devices"]
end
subgraph region2["Region: Europe"]
site3["EU Datacenter<br/>10 racks, 100 devices"]
end
site1 -.->|"WAN Link"| site3
```
## Output Format
Always provide:
1. **Summary** - Brief description of diagram content
2. **Mermaid Code Block** - The diagram code
3. **Legend** - Explanation of shapes and colors
4. **Data Notes** - Any data quality issues
### Example Output
```markdown
## Network Topology: Home Site
This diagram shows network connections between 4 devices at Home site.
```mermaid
graph TD
router1[("core-router<br/>Router")]
switch1[["main-switch<br/>Switch"]]
server1["homelab-01<br/>Server"]
router1 -->|"eth0 - gi0/24"| switch1
switch1 -->|"gi0/1 - eth0"| server1
```
**Legend:**
- Cylinder shape: Routers
- Double brackets: Switches
- Rectangle: Servers
**Data Notes:**
- 1 device (nas-01) has no cable connections documented
```

View File

@@ -0,0 +1,32 @@
# Visual Header Skill
Standard visual header for cmdb-assistant commands.
## Header Template
```
+----------------------------------------------------------------------+
| CMDB-ASSISTANT - [Context] |
+----------------------------------------------------------------------+
```
## Context Values by Command
| Command | Context |
|---------|---------|
| `/cmdb-search` | Search |
| `/cmdb-device` | Device Management |
| `/cmdb-ip` | IP Management |
| `/cmdb-site` | Site Management |
| `/cmdb-audit` | Data Quality Audit |
| `/cmdb-register` | Machine Registration |
| `/cmdb-sync` | Machine Sync |
| `/cmdb-topology` | Topology |
| `/change-audit` | Change Audit |
| `/ip-conflicts` | IP Conflict Detection |
| `/initial-setup` | Setup Wizard |
| Agent mode | Infrastructure Management |
## Usage
Display header at the start of every command response before proceeding with the operation.

View File

@@ -8,16 +8,12 @@ Analyze and preview refactoring opportunities without making changes.
## Visual Output ## Visual Output
When executing this command, display the plugin header:
``` ```
┌──────────────────────────────────────────────────────────────────┐ +----------------------------------------------------------------------+
│ 🔒 CODE-SENTINEL · Refactor Preview | CODE-SENTINEL - Refactor Preview |
└──────────────────────────────────────────────────────────────────┘ +----------------------------------------------------------------------+
``` ```
Then proceed with the analysis.
## Usage ## Usage
``` ```
/refactor-dry <target> [--all] /refactor-dry <target> [--all]
@@ -26,44 +22,31 @@ Then proceed with the analysis.
**Target:** File path, function name, or "." for current file **Target:** File path, function name, or "." for current file
**--all:** Show all opportunities, not just recommended **--all:** Show all opportunities, not just recommended
## Skills to Load
- skills/refactoring-patterns.md
- skills/dry-run-workflow.md
## Process ## Process
1. **Scan Target** 1. **Scan Target** - Analyze code using patterns from skill
Analyze code for refactoring opportunities. 2. **Score Opportunities** - Rate by Impact/Risk/Effort (see dry-run-workflow skill)
3. **Output** - Group by recommended vs optional
2. **Score Opportunities** ## Output Format
Each opportunity rated by:
- Impact (how much it improves code)
- Risk (likelihood of breaking something)
- Effort (complexity of the refactoring)
3. **Output**
``` ```
## Refactoring Opportunities: src/handlers.py ## Refactoring Opportunities: <target>
### Recommended (High Impact, Low Risk) ### Recommended (High Impact, Low Risk)
1. **pattern** at lines X-Y
- Impact: High | Risk: Low
- Run: `/refactor <target> --pattern=<pattern>`
1. **extract-method** at lines 45-67 ### Optional
- Extract order validation logic - Lower priority items
- Impact: High (reduces complexity from 12 to 4)
- Risk: Low (pure function, no side effects)
- Run: `/refactor src/handlers.py:45 --pattern=extract-method`
2. **use-dataclass** for OrderInput class
- Convert to dataclass with validation
- Impact: Medium (reduces boilerplate)
- Risk: Low
- Run: `/refactor src/models.py:OrderInput --pattern=use-dataclass`
### Optional (Consider Later)
3. **use-fstring** at 12 locations
- Modernize string formatting
- Impact: Low (readability only)
- Risk: None
### Summary ### Summary
- 2 recommended refactorings - X recommended, Y optional
- 1 optional improvement - Estimated complexity reduction: Z%
- Estimated complexity reduction: 35%
``` ```

View File

@@ -8,16 +8,12 @@ Apply refactoring transformations to specified code.
## Visual Output ## Visual Output
When executing this command, display the plugin header:
``` ```
┌──────────────────────────────────────────────────────────────────┐ +----------------------------------------------------------------------+
│ 🔒 CODE-SENTINEL · Refactor | CODE-SENTINEL - Refactor |
└──────────────────────────────────────────────────────────────────┘ +----------------------------------------------------------------------+
``` ```
Then proceed with the refactoring workflow.
## Usage ## Usage
``` ```
/refactor <target> [--pattern=<pattern>] /refactor <target> [--pattern=<pattern>]
@@ -26,68 +22,31 @@ Then proceed with the refactoring workflow.
**Target:** File path, function name, or "." for current context **Target:** File path, function name, or "." for current context
**Pattern:** Specific refactoring pattern (optional) **Pattern:** Specific refactoring pattern (optional)
## Available Patterns ## Skills to Load
### Structure - skills/refactoring-patterns.md
| Pattern | Description |
|---------|-------------|
| `extract-method` | Extract code block into named function |
| `extract-class` | Move related methods to new class |
| `inline` | Inline trivial function/variable |
| `rename` | Rename with all references updated |
| `move` | Move function/class to different module |
### Simplification
| Pattern | Description |
|---------|-------------|
| `simplify-conditional` | Flatten nested if/else |
| `remove-dead-code` | Delete unreachable code |
| `consolidate-duplicate` | Merge duplicate code blocks |
| `decompose-conditional` | Break complex conditions into named parts |
### Modernization
| Pattern | Description |
|---------|-------------|
| `use-comprehension` | Convert loops to list/dict comprehensions |
| `use-pathlib` | Replace os.path with pathlib |
| `use-fstring` | Convert .format() to f-strings |
| `use-typing` | Add type hints |
| `use-dataclass` | Convert class to dataclass |
## Process ## Process
1. **Analyze Target** 1. **Analyze Target** - Parse code, identify opportunities from skill, check dependencies
- Parse code structure 2. **Propose Changes** - Show before/after diff, explain improvement, list affected files
- Identify refactoring opportunities 3. **Apply (with confirmation)** - Make changes, update references, run tests
- Check for side effects and dependencies
2. **Propose Changes** ## Output Format
- Show before/after diff
- Explain the improvement
- List affected files/references
3. **Apply (with confirmation)**
- Make changes
- Update all references
- Run existing tests if available
4. **Output**
``` ```
## Refactoring: extract-method ## Refactoring: <pattern>
### Target ### Target
src/handlers.py:create_order (lines 45-89) <file>:<function> (lines X-Y)
### Changes ### Changes
- Extracted validation logic → validate_order_input() - Change description
- Extracted pricing logic → calculate_order_total()
- Original function now 15 lines (was 44)
### Files Modified ### Files Modified
- src/handlers.py - file1.py
- tests/test_handlers.py (updated calls)
### Metrics ### Metrics
- Cyclomatic complexity: 12 → 4 - Cyclomatic complexity: X -> Y
- Function length: 44 → 15 lines - Function length: X -> Y lines
``` ```

View File

@@ -8,61 +8,34 @@ Comprehensive security audit of the project.
## Visual Output ## Visual Output
When executing this command, display the plugin header:
``` ```
┌──────────────────────────────────────────────────────────────────┐ +----------------------------------------------------------------------+
│ 🔒 CODE-SENTINEL · Security Scan | CODE-SENTINEL - Security Scan |
└──────────────────────────────────────────────────────────────────┘ +----------------------------------------------------------------------+
``` ```
Then proceed with the scan workflow. ## Skills to Load
- skills/security-patterns/SKILL.md
## Process ## Process
1. **File Discovery** 1. **File Discovery** - Scan: .py, .js, .ts, .jsx, .tsx, .go, .rs, .java, .rb, .php, .sh
Scan all code files: .py, .js, .ts, .jsx, .tsx, .go, .rs, .java, .rb, .php, .sh 2. **Pattern Detection** - Apply patterns from skill (Critical/High/Medium severity)
3. **Report** - Group by severity, include code snippets and fixes
2. **Pattern Detection** ## Output Format
### Critical Vulnerabilities
| Pattern | Risk | Detection |
|---------|------|-----------|
| SQL Injection | High | String concat in SQL queries |
| Command Injection | High | shell=True, os.system with vars |
| XSS | High | innerHTML with user input |
| Code Injection | Critical | eval/exec with external input |
| Deserialization | Critical | pickle.loads, yaml.load unsafe |
| Path Traversal | High | File ops without sanitization |
| Hardcoded Secrets | High | API keys, passwords in code |
| SSRF | Medium | URL from user input in requests |
### Code Quality Issues
| Pattern | Risk | Detection |
|---------|------|-----------|
| Broad Exceptions | Low | `except:` or `except Exception:` |
| Debug Statements | Low | print/console.log with data |
| TODO/FIXME Security | Medium | Comments mentioning security |
| Deprecated Functions | Medium | Known insecure functions |
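For illustration, a toy detector covering two of the patterns above (the regexes are simplified assumptions; real scanners need far more robust rules):
```python
import re

PATTERNS = {
    "Code Injection": re.compile(r"\b(eval|exec)\s*\("),
    "Hardcoded Secret": re.compile(r"(?i)\b(api_key|password|secret)\s*=\s*['\"]"),
}

def scan_line(path: str, lineno: int, line: str) -> list[tuple]:
    """Return (path, line number, pattern name) for each match on one line."""
    return [(path, lineno, name) for name, rx in PATTERNS.items() if rx.search(line)]
```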
3. **Output Format**
``` ```
## Security Scan Report ## Security Scan Report
### Critical (Immediate Action Required) ### Critical (Immediate Action Required)
🔴 src/db.py:45 - SQL Injection [red] file:line - Vulnerability Type
Code: `f"SELECT * FROM users WHERE id = {user_id}"` Code: `problematic code`
Fix: Use parameterized query: `cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))` Fix: Recommended solution
### High ### High / Medium / Low
🟠 config.py:12 - Hardcoded Secret [Similar format]
Code: `API_KEY = "sk-1234..."`
Fix: Use environment variable: `API_KEY = os.environ.get("API_KEY")`
### Medium
🟡 utils.py:78 - Broad Exception
Code: `except:`
Fix: Catch specific exceptions
### Summary ### Summary
- Critical: X (must fix before deploy) - Critical: X (must fix before deploy)
@@ -70,7 +43,8 @@ Then proceed with the scan workflow.
- Medium: X (improve when possible) - Medium: X (improve when possible)
``` ```
4. **Exit Code Guidance** ## Exit Guidance
- Critical findings: Recommend blocking merge/deploy
- High findings: Recommend fixing before release - Critical findings: Block merge/deploy
- High findings: Fix before release
- Medium/Low: Informational - Medium/Low: Informational

View File

@@ -0,0 +1,99 @@
---
description: Workflow for previewing changes safely before applying them
---
# Dry Run Workflow Skill
## Overview
Dry run mode analyzes code and shows proposed changes without modifying files. Essential for reviewing impact before committing to changes.
## Opportunity Scoring
Rate each refactoring opportunity on three dimensions:
### Impact Score (1-5)
| Score | Meaning | Example |
|-------|---------|---------|
| 5 | Major improvement | Cyclomatic complexity 15 -> 3 |
| 4 | Significant improvement | Function 50 lines -> 15 lines |
| 3 | Moderate improvement | Better naming, clearer structure |
| 2 | Minor improvement | Code style modernization |
| 1 | Cosmetic only | Formatting changes |
### Risk Score (1-5)
| Score | Meaning | Example |
|-------|---------|---------|
| 5 | Very high risk | Changes to core business logic |
| 4 | High risk | Modifies shared utilities |
| 3 | Moderate risk | Changes function signatures |
| 2 | Low risk | Internal implementation only |
| 1 | Minimal risk | Pure functions, no side effects |
### Effort Score (1-5)
| Score | Meaning | Example |
|-------|---------|---------|
| 5 | Major effort | Requires architecture changes |
| 4 | Significant effort | Many files affected |
| 3 | Moderate effort | Multiple related changes |
| 2 | Low effort | Single file, clear scope |
| 1 | Trivial | Automated transformation |
## Priority Calculation
```
Priority = (Impact * 2) - Risk - (Effort * 0.5)
```
| Priority Range | Recommendation |
|---------------|----------------|
| > 5 | Recommended - do it |
| 3-5 | Optional - consider it |
| < 3 | Skip - not worth it |
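The formula and thresholds, expressed as a tiny helper (a sketch, not plugin code):
```python
def priority(impact: int, risk: int, effort: int) -> float:
    """All inputs are 1-5 scores from the tables above."""
    return impact * 2 - risk - effort * 0.5

def recommendation(score: float) -> str:
    if score > 5:
        return "Recommended"
    return "Optional" if score >= 3 else "Skip"

# Example: impact 5, risk 1, effort 2 -> priority 8.0 -> "Recommended"
```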
## Output Format
### Recommended Section
High impact, low risk opportunities:
```
1. **pattern-name** at file:lines
- Description of the change
- Impact: High/Medium/Low (specific metric improvement)
- Risk: Low/Medium/High (why)
- Run: `/refactor <target> --pattern=<pattern>`
```
### Optional Section
Lower priority opportunities grouped by type.
### Summary
- Count of recommended vs optional
- Estimated overall improvement percentage
- Any blockers or dependencies
## Dependency Detection
Before recommending changes, check for:
1. **Test Coverage** - Does this code have tests?
2. **Usage Scope** - Is it used elsewhere?
3. **Side Effects** - Does it modify external state?
4. **Breaking Changes** - Will it change public API?
Flag dependencies in output:
```
Note: This refactoring requires updating 3 callers:
- src/api/handlers.py:45
- src/cli/commands.py:78
- tests/test_handlers.py:23
```
## Safety Checklist
Before recommending any change:
- [ ] All affected code locations identified
- [ ] No breaking API changes without flag
- [ ] Test coverage assessed
- [ ] Side effects documented
- [ ] Rollback path clear (git)

View File

@@ -0,0 +1,119 @@
---
description: Code refactoring patterns and techniques for improving structure and maintainability
---
# Refactoring Patterns Skill
## Structure Patterns
| Pattern | Description | When to Use |
|---------|-------------|-------------|
| `extract-method` | Extract code block into named function | Long functions, repeated logic |
| `extract-class` | Move related methods to new class | Class doing too much |
| `inline` | Inline trivial function/variable | Over-abstracted code |
| `rename` | Rename with all references updated | Unclear naming |
| `move` | Move function/class to different module | Misplaced code |
### extract-method Example
```python
# BEFORE
def process_order(order):
# Validate (extract this)
if not order.items:
raise ValueError("Empty order")
if order.total < 0:
raise ValueError("Invalid total")
# Process
for item in order.items:
inventory.reserve(item)
return create_invoice(order)
# AFTER
def validate_order(order):
if not order.items:
raise ValueError("Empty order")
if order.total < 0:
raise ValueError("Invalid total")
def process_order(order):
validate_order(order)
for item in order.items:
inventory.reserve(item)
return create_invoice(order)
```
## Simplification Patterns
| Pattern | Description | When to Use |
|---------|-------------|-------------|
| `simplify-conditional` | Flatten nested if/else | Deep nesting (>3 levels) |
| `remove-dead-code` | Delete unreachable code | After refactoring |
| `consolidate-duplicate` | Merge duplicate code blocks | DRY violations |
| `decompose-conditional` | Break complex conditions into named parts | Long boolean expressions |
### simplify-conditional Example
```python
# BEFORE
if user:
if user.active:
if user.has_permission:
do_action()
# AFTER (guard clauses)
if not user:
return
if not user.active:
return
if not user.has_permission:
return
do_action()
```
## Modernization Patterns
| Pattern | Description | When to Use |
|---------|-------------|-------------|
| `use-comprehension` | Convert loops to list/dict comprehensions | Simple transformations |
| `use-pathlib` | Replace os.path with pathlib | File path operations |
| `use-fstring` | Convert .format() to f-strings | String formatting |
| `use-typing` | Add type hints | Public APIs, complex functions |
| `use-dataclass` | Convert class to dataclass | Data-holding classes |
### use-dataclass Example
```python
# BEFORE
class User:
def __init__(self, name, email, age):
self.name = name
self.email = email
self.age = age
# AFTER
from dataclasses import dataclass
@dataclass
class User:
name: str
email: str
age: int
```
## Code Smell Detection
| Smell | Indicators | Suggested Pattern |
|-------|------------|-------------------|
| Long Method | >20 lines, multiple responsibilities | extract-method |
| Large Class | >10 methods, low cohesion | extract-class |
| Primitive Obsession | Many related primitives | use-dataclass |
| Nested Conditionals | >3 nesting levels | simplify-conditional |
| Duplicate Code | Copy-pasted blocks | consolidate-duplicate |
| Dead Code | Unreachable branches | remove-dead-code |
## Metrics
After refactoring, measure improvement:
| Metric | Good Target | Tool |
|--------|-------------|------|
| Cyclomatic Complexity | <10 per function | radon, lizard |
| Function Length | <25 lines | manual count |
| Class Cohesion | LCOM <0.5 | pylint |
| Duplication | <3% | jscpd, radon |
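As one example, cyclomatic complexity can be measured with radon's Python API (usage assumed from radon's documentation; verify against your installed version):
```python
from radon.complexity import cc_visit

source = open("src/handlers.py").read()
for block in cc_visit(source):  # one block per function/method
    status = "refactor candidate" if block.complexity >= 10 else "ok"
    print(f"{block.name}: complexity {block.complexity} ({status})")
```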

View File

@@ -1,18 +1,10 @@
# /check-agent - Validate Agent Definition # /check-agent - Validate Agent Definition
## Visual Output ## Skills to Load
- skills/visual-output.md
When executing this command, display the plugin header: - skills/interface-parsing.md
- skills/validation-rules.md
``` - skills/mcp-tools-reference.md
┌──────────────────────────────────────────────────────────────────┐
│ ✅ CONTRACT-VALIDATOR · Agent Check │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the validation.
Validate a single agent's tool references and data flow.
## Usage ## Usage
@@ -22,30 +14,26 @@ Validate a single agent's tool references and data flow.
## Parameters ## Parameters
- `agent_name` (required): Name of the agent to validate (e.g., "Planner", "Orchestrator") - `agent_name` (required): Agent to validate (e.g., "Planner", "Orchestrator")
- `claude_md_path` (optional): Path to CLAUDE.md file. Defaults to `./CLAUDE.md` - `claude_md_path` (optional): Path to CLAUDE.md. Defaults to `./CLAUDE.md`
## Workflow ## Workflow
1. **Parse agent definition**: 1. **Display header** per `skills/visual-output.md`
- Locate agent in CLAUDE.md (Four-Agent Model table or Agents section)
- Extract responsibilities, tool references, workflow steps
2. **Validate tool references**: 2. **Parse agent** per `skills/interface-parsing.md`
- Check each referenced tool exists in available plugins - Use `parse_claude_md_agents` to extract agent definition
- Report missing or misspelled tool names - Get responsibilities, tool references, workflow steps
- Suggest corrections for common mistakes
3. **Validate data flow**: 3. **Validate** per `skills/validation-rules.md`
- Analyze sequence of tools in agent workflow - Use `validate_agent_refs` - check all tools exist
- Verify data producers precede data consumers - Use `validate_data_flow` - verify producer/consumer order
- Check for orphaned data references
4. **Report findings**: 4. **Report findings**:
- List all tool references found - Tool references found
- List any missing tools - Missing tools (with suggestions)
- Data flow validation results - Data flow issues
- Suggestions for improvement - Recommendations
## Examples ## Examples
@@ -54,10 +42,3 @@ Validate a single agent's tool references and data flow.
/check-agent Orchestrator ./CLAUDE.md /check-agent Orchestrator ./CLAUDE.md
/check-agent data-analysis ~/project/CLAUDE.md /check-agent data-analysis ~/project/CLAUDE.md
``` ```
## Available Tools
Use these MCP tools:
- `validate_agent_refs` - Check agent tool references exist
- `validate_data_flow` - Verify data flow through agent sequence
- `parse_claude_md_agents` - Parse all agents from CLAUDE.md

View File

@@ -1,18 +1,11 @@
# /dependency-graph - Generate Dependency Visualization # /dependency-graph - Generate Dependency Visualization
## Visual Output ## Skills to Load
- skills/visual-output.md
When executing this command, display the plugin header: - skills/plugin-discovery.md
- skills/interface-parsing.md
``` - skills/dependency-analysis.md
┌──────────────────────────────────────────────────────────────────┐ - skills/mcp-tools-reference.md
│ ✅ CONTRACT-VALIDATOR · Dependency Graph │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the visualization.
Generate a Mermaid flowchart showing plugin dependencies, data flows, and tool relationships.
## Usage ## Usage
@@ -28,236 +21,35 @@ Generate a Mermaid flowchart showing plugin dependencies, data flows, and tool r
## Workflow ## Workflow
1. **Discover plugins**: 1. **Display header** per `skills/visual-output.md`
- Scan plugins directory for all plugins with `.claude-plugin/` marker
- Parse each plugin's README.md to extract interface
- Parse CLAUDE.md for agent definitions and tool sequences
2. **Analyze dependencies**: 2. **Discover plugins** per `skills/plugin-discovery.md`
- Identify shared MCP servers (plugins using same server)
- Detect tool dependencies (which plugins produce vs consume data)
- Find agent tool references across plugins
- Categorize as required (ERROR if missing) or optional (WARNING if missing)
3. **Build dependency graph**: 3. **Parse interfaces** per `skills/interface-parsing.md`
- Create nodes for each plugin - Use `parse_plugin_interface` for each plugin
- Create edges for: - Use `parse_claude_md_agents` for CLAUDE.md
- Shared MCP servers (bidirectional)
- Data producers -> consumers (directional)
- Agent tool dependencies (directional)
- Mark edges as optional or required
4. **Generate Mermaid output**: 4. **Analyze dependencies** per `skills/dependency-analysis.md`
- Create flowchart diagram syntax - Identify shared MCP servers
- Style required dependencies with solid lines - Detect data producers/consumers
- Style optional dependencies with dashed lines - Categorize as required or optional
- Group by MCP server or data flow
## Output Format 5. **Generate output**:
- Mermaid: Create flowchart TD diagram with styled edges
### Mermaid (default) - Text: Create text summary with counts and lists
```mermaid
flowchart TD
subgraph mcp_gitea["MCP: gitea"]
projman["projman"]
pr-review["pr-review"]
end
subgraph mcp_data["MCP: data-platform"]
data-platform["data-platform"]
end
subgraph mcp_viz["MCP: viz-platform"]
viz-platform["viz-platform"]
end
%% Data flow dependencies
data-platform -->|"data_ref (required)"| viz-platform
%% Optional dependencies
projman -.->|"lessons (optional)"| pr-review
%% Styling
classDef required stroke:#e74c3c,stroke-width:2px
classDef optional stroke:#f39c12,stroke-dasharray:5 5
```
### Text Format
```
DEPENDENCY GRAPH
================
Plugins: 12
MCP Servers: 4
Dependencies: 8 (5 required, 3 optional)
MCP Server Groups:
gitea: projman, pr-review
data-platform: data-platform
viz-platform: viz-platform
netbox: cmdb-assistant
Data Flow Dependencies:
data-platform -> viz-platform (data_ref) [REQUIRED]
data-platform -> data-platform (data_ref) [INTERNAL]
Cross-Plugin Tool Usage:
projman.Planner uses: create_issue, search_lessons
pr-review.reviewer uses: get_pr_diff, create_pr_review
```
## Dependency Types
| Type | Line Style | Meaning |
|------|------------|---------|
| Required | Solid (`-->`) | Plugin cannot function without this dependency |
| Optional | Dashed (`-.->`) | Plugin works but with reduced functionality |
| Internal | Dotted (`-..->`) | Self-dependency within same plugin |
| Shared MCP | Double (`==>`) | Plugins share same MCP server instance |
## Known Data Flow Patterns
The command recognizes these producer/consumer relationships:
### Data Producers
- `read_csv`, `read_parquet`, `read_json` - File loaders
- `pg_query`, `pg_execute` - Database queries
- `filter`, `select`, `groupby`, `join` - Transformations
### Data Consumers
- `describe`, `head`, `tail` - Data inspection
- `to_csv`, `to_parquet` - File writers
- `chart_create` - Visualization
### Cross-Plugin Flows
- `data-platform` produces `data_ref` -> `viz-platform` consumes for charts
- `projman` produces issues -> `pr-review` references in reviews
- `gitea` wiki -> `projman` lessons learned
## Examples ## Examples
### Basic Usage
``` ```
/dependency-graph /dependency-graph
```
Generates Mermaid diagram for current marketplace.
### With Tool Details
```
/dependency-graph --show-tools /dependency-graph --show-tools
```
Includes individual tool nodes showing which tools each plugin provides.
### Text Summary
```
/dependency-graph --format text /dependency-graph --format text
```
Outputs text-based summary suitable for CLAUDE.md inclusion.
### Specific Path
```
/dependency-graph ~/claude-plugins-work /dependency-graph ~/claude-plugins-work
``` ```
Analyze marketplace at specified path. ## Integration
## Integration with Other Commands Use with `/validate-contracts`:
1. Run `/dependency-graph` to visualize
Use with `/validate-contracts` to: 2. Run `/validate-contracts` to find issues
1. Run `/dependency-graph` to visualize relationships 3. Fix and regenerate
2. Run `/validate-contracts` to find issues in those relationships
3. Fix issues and regenerate graph to verify
## Available Tools
Use these MCP tools:
- `parse_plugin_interface` - Extract interface from plugin README.md
- `parse_claude_md_agents` - Extract agents and their tool sequences
- `generate_compatibility_report` - Get full interface data (JSON format for analysis)
- `validate_data_flow` - Verify data producer/consumer relationships
## Implementation Notes
### Detecting Shared MCP Servers
Check each plugin's `.mcp.json` file for server definitions:
```bash
# List all .mcp.json files in plugins
find plugins/ -name ".mcp.json" -exec cat {} \;
```
Plugins with identical MCP server names share that server.
### Identifying Data Flows
1. Parse tool categories from README.md
2. Map known producer tools to their output types
3. Map known consumer tools to their input requirements
4. Create edges where outputs match inputs
### Optional vs Required
- **Required**: Consumer tool has no default/fallback behavior
- **Optional**: Consumer works without producer (e.g., lessons search returns empty)
Determination is based on:
- Issue severity from `validate_data_flow` (ERROR = required, WARNING = optional)
- Tool documentation stating "requires" vs "uses if available"
## Sample Output
For the leo-claude-mktplace:
```mermaid
flowchart TD
subgraph gitea_mcp["Shared MCP: gitea"]
projman["projman<br/>14 commands"]
pr-review["pr-review<br/>6 commands"]
end
subgraph netbox_mcp["Shared MCP: netbox"]
cmdb-assistant["cmdb-assistant<br/>3 commands"]
end
subgraph data_mcp["Shared MCP: data-platform"]
data-platform["data-platform<br/>7 commands"]
end
subgraph viz_mcp["Shared MCP: viz-platform"]
viz-platform["viz-platform<br/>7 commands"]
end
subgraph standalone["Standalone Plugins"]
doc-guardian["doc-guardian"]
code-sentinel["code-sentinel"]
clarity-assist["clarity-assist"]
git-flow["git-flow"]
claude-config-maintainer["claude-config-maintainer"]
contract-validator["contract-validator"]
end
%% Data flow: data-platform -> viz-platform
data-platform -->|"data_ref"| viz-platform
%% Cross-plugin: projman lessons -> pr-review context
projman -.->|"lessons"| pr-review
%% Styling
classDef mcpGroup fill:#e8f4fd,stroke:#2196f3
classDef standalone fill:#f5f5f5,stroke:#9e9e9e
classDef required stroke:#e74c3c,stroke-width:2px
classDef optional stroke:#f39c12,stroke-dasharray:5 5
class gitea_mcp,netbox_mcp,data_mcp,viz_mcp mcpGroup
class standalone standalone
```

View File

@@ -1,164 +1,49 @@
--- ---
description: Interactive setup wizard for contract-validator plugin - verifies MCP server and shows capabilities description: Interactive setup wizard for contract-validator plugin
--- ---
# Contract-Validator Setup Wizard # /initial-setup - Contract-Validator Setup Wizard
## Visual Output ## Skills to Load
- skills/visual-output.md
- skills/mcp-tools-reference.md
When executing this command, display the plugin header: **Important:** This command uses Bash, Read, Write tools - NOT MCP tools (they work only after setup + restart).
``` ## Workflow
┌──────────────────────────────────────────────────────────────────┐
│ ✅ CONTRACT-VALIDATOR · Setup Wizard │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the setup. 1. **Display header** per `skills/visual-output.md`
This command sets up the contract-validator plugin for cross-plugin compatibility validation.
## Important Context
- **This command uses Bash, Read, Write, and AskUserQuestion tools** - NOT MCP tools
- **MCP tools won't work until after setup + session restart**
- **No external credentials required** - this plugin validates local files only
---
## Phase 1: Environment Validation
### Step 1.1: Check Python Version
2. **Check Python version**:
```bash ```bash
python3 --version python3 --version
``` ```
Requires 3.10+. Stop if below.
Requires Python 3.10+. If below, stop setup and inform user: 3. **Locate MCP server**:
``` - Installed: `~/.claude/plugins/marketplaces/leo-claude-mktplace/mcp-servers/contract-validator/`
Python 3.10 or higher is required. Please install it and run /initial-setup again. - Source: `~/claude-plugins-work/mcp-servers/contract-validator/`
```
---
## Phase 2: MCP Server Setup
### Step 2.1: Locate Contract-Validator MCP Server
4. **Check/create venv**:
```bash ```bash
# If running from installed marketplace ls .venv/bin/python || (python3 -m venv .venv && .venv/bin/pip install -r requirements.txt)
ls -la ~/.claude/plugins/marketplaces/leo-claude-mktplace/mcp-servers/contract-validator/ 2>/dev/null || echo "NOT_FOUND_INSTALLED"
# If running from source
ls -la ~/claude-plugins-work/mcp-servers/contract-validator/ 2>/dev/null || echo "NOT_FOUND_SOURCE"
``` ```
Determine which path exists and use that as the MCP server path. 5. **Verify MCP server**:
### Step 2.2: Check Virtual Environment
```bash ```bash
ls -la /path/to/mcp-servers/contract-validator/.venv/bin/python 2>/dev/null && echo "VENV_EXISTS" || echo "VENV_MISSING" .venv/bin/python -c "from mcp_server.server import ContractValidatorMCPServer; print('OK')"
``` ```
### Step 2.3: Create Virtual Environment (if missing) 6. **Display success** per `skills/visual-output.md`
```bash 7. **Inform user**: Session restart required for MCP tools.
cd /path/to/mcp-servers/contract-validator && python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip && pip install -r requirements.txt && deactivate
```
**If pip install fails:** ## Post-Setup Commands
- Show the error to the user
- Suggest: "Check your internet connection and try again."
--- - `/validate-contracts` - Full marketplace validation
- `/check-agent` - Validate single agent
## Phase 3: Validation - `/list-interfaces` - Show all plugin interfaces
### Step 3.1: Verify MCP Server
```bash
cd /path/to/mcp-servers/contract-validator && .venv/bin/python -c "from mcp_server.server import ContractValidatorMCPServer; print('MCP Server OK')"
```
If this fails, check the error and report it to the user.
### Step 3.2: Summary
Display:
```
╔════════════════════════════════════════════════════════════════╗
║ CONTRACT-VALIDATOR SETUP COMPLETE ║
╠════════════════════════════════════════════════════════════════╣
║ MCP Server: ✓ Ready ║
║ Parse Tools: ✓ Available (2 tools) ║
║ Validation Tools: ✓ Available (3 tools) ║
║ Report Tools: ✓ Available (2 tools) ║
╚════════════════════════════════════════════════════════════════╝
```
### Step 3.3: Session Restart Notice
---
**Session Restart Required**
Restart your Claude Code session for MCP tools to become available.
**After restart, you can:**
- Run `/validate-contracts` to check all plugins for compatibility issues
- Run `/check-agent` to validate a single agent definition
- Run `/list-interfaces` to see all plugin commands and tools
---
## Available Tools
| Category | Tools | Description |
|----------|-------|-------------|
| Parse | `parse_plugin_interface`, `parse_claude_md_agents` | Extract interfaces from README.md and agents from CLAUDE.md |
| Validation | `validate_compatibility`, `validate_agent_refs`, `validate_data_flow` | Check conflicts, tool references, and data flows |
| Report | `generate_compatibility_report`, `list_issues` | Generate reports and filter issues |
---
## Available Commands
| Command | Description |
|---------|-------------|
| `/validate-contracts` | Full marketplace compatibility validation |
| `/check-agent` | Validate single agent definition |
| `/list-interfaces` | Show all plugin interfaces |
---
## Use Cases
### 1. Pre-Release Validation
Run `/validate-contracts` before releasing a new marketplace version to catch:
- Command name conflicts between plugins
- Missing tool references in agents
- Broken data flows
### 2. Agent Development
Run `/check-agent` when creating or modifying agents to verify:
- All referenced tools exist
- Data flows are valid
- No undeclared dependencies
### 3. Plugin Audit
Run `/list-interfaces` to get a complete view of:
- All commands across plugins
- All tools available
- Potential overlap areas
---
## No Configuration Required ## No Configuration Required
This plugin doesn't require any configuration files. It reads plugin manifests and README files directly from the filesystem. This plugin reads plugin manifests and README files directly - no credentials needed.
**Paths it scans:**
- Marketplace: `~/.claude/plugins/marketplaces/leo-claude-mktplace/plugins/`
- Source (if available): `~/claude-plugins-work/plugins/`

View File

@@ -1,18 +1,10 @@
# /list-interfaces - Show Plugin Interfaces # /list-interfaces - Show Plugin Interfaces
## Visual Output ## Skills to Load
- skills/visual-output.md
When executing this command, display the plugin header: - skills/plugin-discovery.md
- skills/interface-parsing.md
``` - skills/mcp-tools-reference.md
┌──────────────────────────────────────────────────────────────────┐
│ ✅ CONTRACT-VALIDATOR · List Interfaces │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the interface listing.
Display what each plugin in the marketplace produces and accepts.
## Usage ## Usage
@@ -26,45 +18,29 @@ Display what each plugin in the marketplace produces and accepts.
## Workflow ## Workflow
1. **Discover plugins**: 1. **Display header** per `skills/visual-output.md`
- Scan plugins directory for all plugins with `.claude-plugin/` marker
- Read each plugin's README.md
2. **Parse interfaces**: 2. **Discover plugins** per `skills/plugin-discovery.md`
- Extract commands (slash commands offered by plugin)
- Extract agents (autonomous agents defined)
- Extract tools (MCP tools provided)
- Identify tool categories and features
3. **Display summary**: 3. **Parse interfaces** per `skills/interface-parsing.md`
- Table of plugins with command/agent/tool counts - Use `parse_plugin_interface` for each plugin README.md
- Detailed breakdown per plugin - Extract commands, agents, tools
- Tool categories and their contents
## Output Format
4. **Display summary table**:
``` ```
| Plugin | Commands | Agents | Tools | | Plugin | Commands | Agents | Tools |
|-------------|----------|--------|-------| |-------------|----------|--------|-------|
| projman | 12 | 4 | 26 | | projman | 12 | 4 | 26 |
| data-platform| 7 | 2 | 32 |
| ... | ... | ... | ... |
## projman
- Commands: /sprint-plan, /sprint-start, ...
- Agents: Planner, Orchestrator, Executor, Code Reviewer
- Tools: list_issues, create_issue, ...
``` ```
5. **Display per-plugin details**:
- List of commands
- List of agents
- Tool categories
## Examples ## Examples
``` ```
/list-interfaces /list-interfaces
/list-interfaces ~/claude-plugins-work /list-interfaces ~/claude-plugins-work
``` ```
## Available Tools
Use these MCP tools:
- `parse_plugin_interface` - Parse individual plugin README
- `generate_compatibility_report` - Get full interface data (JSON format)

View File

@@ -1,18 +1,11 @@
# /validate-contracts - Full Contract Validation # /validate-contracts - Full Contract Validation
## Visual Output ## Skills to Load
- skills/visual-output.md
When executing this command, display the plugin header: - skills/plugin-discovery.md
- skills/interface-parsing.md
``` - skills/validation-rules.md
┌──────────────────────────────────────────────────────────────────┐ - skills/mcp-tools-reference.md
│ ✅ CONTRACT-VALIDATOR · Full Validation │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the validation.
Run comprehensive cross-plugin compatibility validation for the entire marketplace.
## Usage ## Usage
@@ -26,25 +19,22 @@ Run comprehensive cross-plugin compatibility validation for the entire marketpla
## Workflow ## Workflow
1. **Discover plugins**: 1. **Display header** per `skills/visual-output.md`
- Scan plugins directory for all plugins with `.claude-plugin/` marker
- Parse each plugin's README.md to extract interface
2. **Run compatibility checks**: 2. **Discover plugins** per `skills/plugin-discovery.md`
- Perform pairwise compatibility validation between all plugins
- Check for command name conflicts
- Check for tool name overlaps
- Identify interface mismatches
3. **Validate CLAUDE.md agents**: 3. **Parse interfaces** per `skills/interface-parsing.md`
- Parse agent definitions from CLAUDE.md - Use `parse_plugin_interface` for each plugin
- Validate all tool references exist
- Check data flow through agent sequences
4. **Generate report**: 4. **Run validations** per `skills/validation-rules.md`
- Summary statistics (plugins, commands, tools, issues) - Use `validate_compatibility` for pairwise checks
- Detailed findings by severity (error, warning, info) - Use `validate_agent_refs` for CLAUDE.md agents
- Actionable suggestions for each issue - Use `validate_data_flow` for data sequences
5. **Generate report**:
- Use `generate_compatibility_report` for full report
- Use `list_issues` to filter by severity
- Display summary and actionable suggestions
## Examples ## Examples
@@ -52,11 +42,3 @@ Run comprehensive cross-plugin compatibility validation for the entire marketpla
/validate-contracts /validate-contracts
/validate-contracts ~/claude-plugins-work /validate-contracts ~/claude-plugins-work
``` ```
## Available Tools
Use these MCP tools:
- `generate_compatibility_report` - Generate full marketplace report
- `list_issues` - Filter issues by severity or type
- `parse_plugin_interface` - Parse individual plugin interface
- `validate_compatibility` - Check two plugins for conflicts

View File

@@ -0,0 +1,57 @@
# Skill: Dependency Analysis
Analyze dependencies between plugins including shared MCP servers and data flows.
## Dependency Types
| Type | Line Style | Meaning |
|------|------------|---------|
| Required | Solid (`-->`) | Plugin cannot function without this |
| Optional | Dashed (`-.->`) | Plugin works with reduced functionality |
| Internal | Dotted (`-..->`) | Self-dependency within same plugin |
| Shared MCP | Double (`==>`) | Plugins share same MCP server instance |
## Shared MCP Server Detection
Check each plugin's `.mcp.json` for server definitions. Plugins with identical MCP server names share that server.
**Common shared servers:**
- `gitea` - Used by projman, pr-review
- `netbox` - Used by cmdb-assistant
- `data-platform` - Used by data-platform
- `viz-platform` - Used by viz-platform
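A minimal sketch of this detection, assuming the conventional `mcpServers` key inside each plugin's `.mcp.json`:
```python
import json
from collections import defaultdict
from pathlib import Path

def shared_servers(plugins_dir: str) -> dict[str, list[str]]:
    """Map MCP server name -> plugins declaring it, keeping shared ones only."""
    groups = defaultdict(list)
    for mcp_file in Path(plugins_dir).expanduser().glob("*/.mcp.json"):
        config = json.loads(mcp_file.read_text())
        for server_name in config.get("mcpServers", {}):
            groups[server_name].append(mcp_file.parent.name)
    return {name: plugins for name, plugins in groups.items() if len(plugins) > 1}
```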
## Data Flow Patterns
### Data Producers
- `read_csv`, `read_parquet`, `read_json` - File loaders
- `pg_query`, `pg_execute` - Database queries
- `filter`, `select`, `groupby`, `join` - Transformations
### Data Consumers
- `describe`, `head`, `tail` - Data inspection
- `to_csv`, `to_parquet` - File writers
- `chart_create` - Visualization
### Cross-Plugin Flows
| Producer | Consumer | Data Type |
|----------|----------|-----------|
| data-platform | viz-platform | `data_ref` |
| projman | pr-review | issues/lessons |
| gitea wiki | projman | lessons learned |
## Required vs Optional
- **Required**: Consumer has no default/fallback (ERROR if missing)
- **Optional**: Consumer works without producer (WARNING if missing)
Determined by:
- Issue severity from `validate_data_flow`
- Tool docs stating "requires" vs "uses if available"
## MCP Tools
| Tool | Purpose |
|------|---------|
| `validate_data_flow` | Verify producer/consumer relationships |
| `generate_compatibility_report` | Get full dependency data |

View File

@@ -0,0 +1,52 @@
# Skill: Interface Parsing
Parse plugin interfaces from README.md and agent definitions from CLAUDE.md.
## Interface Components
| Component | Source | Description |
|-----------|--------|-------------|
| Commands | README.md | Slash commands offered by plugin |
| Agents | README.md, CLAUDE.md | Autonomous agents defined |
| Tools | README.md | MCP tools provided |
| Categories | README.md | Tool groupings and features |
## Parsing README.md
Extract from these sections:
1. **Commands section**: Look for tables with `| Command |` or lists of `/command-name`
2. **Tools section**: Look for tables with `| Tool |` or code blocks with tool names
3. **Agents section**: Look for "Four-Agent Model" or "Agents" headings
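For illustration, a naive version of the command extraction in step 1 (the regex is an assumption; the actual parsing is done by `parse_plugin_interface`):
```python
import re

# Match /command-name either in backticks or at the start of a table row.
COMMAND_RE = re.compile(r"(?:`|^\|\s*)(/[a-z0-9][a-z0-9-]*)", re.MULTILINE)

def extract_commands(readme_text: str) -> list[str]:
    return sorted({m.group(1) for m in COMMAND_RE.finditer(readme_text)})
```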
## Parsing CLAUDE.md
Extract agent definitions from:
1. **Four-Agent Model table**: `| Agent | Personality | Responsibilities |`
2. **Agent sections**: Headings like `### Planner Agent` or `## Agents`
3. **Tool sequences**: Lists of tools in workflow steps
## Agent Definition Structure
```yaml
agent:
name: "Planner"
personality: "Thoughtful, methodical"
responsibilities:
- "Sprint planning"
- "Architecture analysis"
tools:
- "create_issue"
- "search_lessons"
workflow:
- step: "Analyze requirements"
tools: ["list_issues", "get_issue"]
```
## MCP Tools
| Tool | Purpose |
|------|---------|
| `parse_plugin_interface` | Extract interface from README.md |
| `parse_claude_md_agents` | Extract agents and tool sequences from CLAUDE.md |

View File

@@ -0,0 +1,61 @@
# Skill: MCP Tools Reference
Available MCP tools for contract-validator operations.
## Tool Categories
### Parse Tools
| Tool | Description |
|------|-------------|
| `parse_plugin_interface` | Extract interface from plugin README.md |
| `parse_claude_md_agents` | Extract agents and tool sequences from CLAUDE.md |
### Validation Tools
| Tool | Description |
|------|-------------|
| `validate_compatibility` | Check two plugins for conflicts |
| `validate_agent_refs` | Check agent tool references exist |
| `validate_data_flow` | Verify data flow through agent sequence |
### Report Tools
| Tool | Description |
|------|-------------|
| `generate_compatibility_report` | Generate full marketplace report (JSON) |
| `list_issues` | Filter issues by severity or type |
## Tool Usage Patterns
### Full Marketplace Validation
```
1. generate_compatibility_report(marketplace_path)
2. list_issues(severity="ERROR") # Get critical issues
3. list_issues(severity="WARNING") # Get warnings
```
### Single Agent Check
```
1. parse_claude_md_agents(claude_md_path)
2. validate_agent_refs(agent_name, agents_data)
3. validate_data_flow(agent_workflow)
```
### Interface Listing
```
1. For each plugin:
parse_plugin_interface(readme_path)
2. Aggregate results into summary table
```
### Dependency Graph
```
1. generate_compatibility_report(marketplace_path)
2. validate_data_flow(cross_plugin_flows)
3. Build Mermaid diagram from results
```
## Error Handling
If MCP tools fail:
1. Check if `/initial-setup` has been run
2. Verify session was restarted after setup
3. Check MCP server venv exists and is valid

View File

@@ -0,0 +1,42 @@
# Skill: Plugin Discovery
Discover plugins in a marketplace by scanning for `.claude-plugin/` markers.
## Discovery Process
1. **Identify marketplace root**:
- Use provided path or default to current project root
- Look for `plugins/` subdirectory
2. **Scan for plugins**:
- Find all directories containing `.claude-plugin/plugin.json`
- Each is a valid plugin (see the sketch after this list)
3. **Build plugin list**:
- Extract plugin name from directory name
- Record path to plugin root
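A compact sketch of steps 2-3 (standard paths are listed in the table below):
```python
from pathlib import Path

def discover_plugins(marketplace_root: str) -> dict[str, Path]:
    """Map plugin name -> plugin root for each .claude-plugin/plugin.json marker."""
    plugins_dir = Path(marketplace_root).expanduser() / "plugins"
    return {
        marker.parent.parent.name: marker.parent.parent
        for marker in plugins_dir.glob("*/.claude-plugin/plugin.json")
    }
```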
## Standard Paths
| Context | Path |
|---------|------|
| Installed | `~/.claude/plugins/marketplaces/leo-claude-mktplace/plugins/` |
| Source | `~/claude-plugins-work/plugins/` |
## Expected Structure
```
plugins/
plugin-name/
.claude-plugin/
plugin.json # Required marker
commands/ # Command definitions
agents/ # Agent definitions (optional)
hooks/ # Hook definitions (optional)
skills/ # Skill files (optional)
README.md # Interface documentation
```
## MCP Tool
Use `parse_plugin_interface` with each discovered plugin's README.md path.

View File

@@ -0,0 +1,57 @@
# Skill: Validation Rules
Rules for validating plugin compatibility and agent definitions.
## Compatibility Checks
### 1. Command Name Conflicts
- No two plugins should have identical command names (sketched below)
- Severity: ERROR
### 2. Tool Name Overlaps
- Tools with same name must have same signature
- Severity: WARNING (may be intentional aliasing)
### 3. Interface Mismatches
- Producer output types must match consumer input types
- Severity: ERROR
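Check 1 can be sketched in a few lines (illustrative; the real check is performed by `validate_compatibility`):
```python
from collections import Counter

def command_conflicts(interfaces: dict[str, list[str]]) -> list[str]:
    """interfaces: plugin name -> its command names; returns duplicated commands."""
    counts = Counter(cmd for cmds in interfaces.values() for cmd in cmds)
    return [cmd for cmd, n in counts.items() if n > 1]  # each one is an ERROR
```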
## Agent Validation
### Tool Reference Check
1. Parse agent's tool references from workflow steps
2. Verify each tool exists in available plugins
3. Report missing or misspelled tool names
4. Suggest corrections for common mistakes
### Data Flow Validation
1. Analyze sequence of tools in agent workflow
2. Verify data producers precede data consumers (see the sketch after this list)
3. Check for orphaned data references
4. Ensure required data is available at each step
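A sketch of steps 1-2, reusing the producer/consumer sets from the dependency-analysis skill (sets abbreviated):
```python
PRODUCERS = {"read_csv", "read_parquet", "read_json", "pg_query",
             "filter", "select", "groupby", "join"}
CONSUMERS = {"describe", "head", "tail", "to_csv", "to_parquet", "chart_create"}

def data_flow_issues(workflow: list[str]) -> list[str]:
    """Flag consumers that run before any producer in the tool sequence."""
    issues, data_available = [], False
    for tool in workflow:
        if tool in CONSUMERS and not data_available:
            issues.append(f"{tool}: consumer called before any data producer")
        if tool in PRODUCERS:
            data_available = True
    return issues
```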
## Severity Levels
| Level | Meaning | Action |
|-------|---------|--------|
| ERROR | Will cause runtime failures | Must fix |
| WARNING | May cause issues | Review needed |
| INFO | Informational only | No action required |
## Common Issues
| Issue | Cause | Fix |
|-------|-------|-----|
| Missing tool | Typo or plugin not installed | Check spelling, install plugin |
| Data flow gap | Producer not called before consumer | Reorder workflow steps |
| Name conflict | Two plugins use same command | Rename one command |
| Orphan reference | Data produced but never consumed | Remove or use the data |
## MCP Tools
| Tool | Purpose |
|------|---------|
| `validate_compatibility` | Check two plugins for conflicts |
| `validate_agent_refs` | Check agent tool references |
| `validate_data_flow` | Verify data flow sequences |
| `list_issues` | Filter issues by severity or type |

View File

@@ -0,0 +1,88 @@
# Skill: Visual Output
Standard visual formatting for contract-validator commands.
## Command Header
Display at the start of every command:
```
+----------------------------------------------------------------------+
| CONTRACT-VALIDATOR - [Command Name] |
+----------------------------------------------------------------------+
```
## Result Boxes
### Success Box
```
+====================================================================+
| CONTRACT-VALIDATOR [ACTION] COMPLETE |
+====================================================================+
| Item 1: [checkmark] Status |
| Item 2: [checkmark] Status |
+====================================================================+
```
### Error Box
```
+====================================================================+
| CONTRACT-VALIDATOR [ACTION] FAILED |
+====================================================================+
| Error: [description] |
| Suggestion: [fix] |
+====================================================================+
```
## Summary Tables
### Plugin Summary
```
| Plugin | Commands | Agents | Tools |
|-------------|----------|--------|-------|
| projman | 12 | 4 | 26 |
| data-platform| 7 | 2 | 32 |
```
### Issue Summary
```
| Severity | Count |
|----------|-------|
| ERROR | 2 |
| WARNING | 5 |
| INFO | 8 |
```
## Mermaid Diagrams
For dependency graphs, use flowchart TD format:
```mermaid
flowchart TD
subgraph group_name["Group Label"]
node1["Plugin Name"]
end
node1 -->|"label"| node2
node1 -.->|"optional"| node3
classDef required stroke:#e74c3c,stroke-width:2px
classDef optional stroke:#f39c12,stroke-dasharray:5 5
```
## Text Format Alternative
For non-graphical output:
```
DEPENDENCY GRAPH
================
Plugins: 12
MCP Servers: 4
Dependencies: 8 (5 required, 3 optional)
MCP Server Groups:
gitea: projman, pr-review
data-platform: data-platform
```

View File

@@ -1,18 +1,13 @@
# /data-quality - Data Quality Assessment # /data-quality - Data Quality Assessment
## Skills to Load
- skills/data-profiling.md
- skills/mcp-tools-reference.md
- skills/visual-header.md
## Visual Output ## Visual Output
When executing this command, display the plugin header: Display header: `DATA-PLATFORM - Data Quality`
```
┌──────────────────────────────────────────────────────────────────┐
│ 📊 DATA-PLATFORM · Data Quality │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the assessment.
Comprehensive data quality check for DataFrames with pass/warn/fail scoring.
## Usage ## Usage
@@ -22,72 +17,18 @@ Comprehensive data quality check for DataFrames with pass/warn/fail scoring.
## Workflow
Execute `skills/data-profiling.md` quality assessment:
1. **Get data reference**: Use `list_data` if none provided
2. **Run quality checks**: Nulls, duplicates, types, outliers
3. **Calculate score**: Apply weighted scoring formula
4. **Generate report**: Issues and recommendations
## Options
| Flag | Description |
|------|-------------|
| `--strict` | Stricter thresholds (WARN at 1%, FAIL at 5% nulls) |
## Examples
@@ -96,20 +37,12 @@ Overall Score: 82/100 [PASS]
/data-quality df_customers --strict
```
## Quality Thresholds
See `skills/data-profiling.md` for detailed thresholds and scoring.
## Required MCP Tools
- `describe` - Get statistical summary
- `head` - Preview data
- `list_data` - List available DataFrames


@@ -1,18 +1,13 @@
# /dbt-test - Run dbt Tests
## Skills to Load
- skills/dbt-workflow.md
- skills/mcp-tools-reference.md
- skills/visual-header.md
## Visual Output
Display header: `DATA-PLATFORM - dbt Tests`
Execute dbt tests with formatted pass/fail results.
## Usage
@@ -22,75 +17,17 @@ Execute dbt tests with formatted pass/fail results.
## Workflow
Execute `skills/dbt-workflow.md` test workflow:
1. **Pre-validation** (MANDATORY): Run `dbt_parse` first
2. **Execute tests**: Use `dbt_test` with selection
3. **Format results**: Group by test type, show pass/fail/warn counts
## Options
| Flag | Description |
|------|-------------|
| `--warn-only` | Treat failures as warnings |
## Examples
@@ -98,34 +35,9 @@ Severity: warn
/dbt-test                 # Run all tests
/dbt-test dim_customers   # Tests for specific model
/dbt-test tag:critical    # Run critical tests only
```
## Required MCP Tools
- `dbt_parse` - Pre-validation (ALWAYS RUN FIRST)
- `dbt_test` - Execute tests (REQUIRED)


@@ -1,18 +1,14 @@
# /explain - dbt Model Explanation
## Skills to Load
- skills/dbt-workflow.md
- skills/lineage-analysis.md
- skills/mcp-tools-reference.md
- skills/visual-header.md
## Visual Output
Display header: `DATA-PLATFORM - Model Explanation`
Explain a dbt model's purpose, dependencies, and SQL logic.
## Usage
@@ -22,24 +18,10 @@ Explain a dbt model's purpose, dependencies, and SQL logic.
## Workflow
1. **Get model info**: Use `dbt_lineage` for metadata (description, tags, materialization)
2. **Analyze dependencies**: Show upstream/downstream as tree
3. **Compile SQL**: Use `dbt_compile` to get rendered SQL
4. **Report**: Purpose, materialization, dependencies, key SQL logic
## Examples
@@ -48,9 +30,8 @@ Explain a dbt model's purpose, dependencies, and SQL logic.
/explain fct_orders
```
## Required MCP Tools
- `dbt_lineage` - Get model dependencies
- `dbt_compile` - Get compiled SQL
- `dbt_ls` - List related resources


@@ -1,18 +1,12 @@
# /ingest - Data Ingestion
## Skills to Load
- skills/mcp-tools-reference.md
- skills/visual-header.md
## Visual Output
Display header: `DATA-PLATFORM - Ingest`
Load data from files or database into the data platform.
## Usage
@@ -22,21 +16,17 @@ Load data from files or database into the data platform.
## Workflow
1. **Identify source**:
   - File path: determine format (CSV, Parquet, JSON)
   - SQL query or table name: query PostgreSQL
2. **Load data**:
   - Files: `read_csv`, `read_parquet`, `read_json`
   - Database: `pg_query`
3. **Validate**: Check row count against 100k limit
4. **Report**: data_ref, row count, columns, memory usage, preview
## Examples
@@ -46,9 +36,8 @@ Load data from files or database into the data platform.
/ingest "SELECT * FROM orders WHERE created_at > '2024-01-01'" /ingest "SELECT * FROM orders WHERE created_at > '2024-01-01'"
``` ```
## Available Tools ## Required MCP Tools
Use these MCP tools:
- `read_csv` - Load CSV files - `read_csv` - Load CSV files
- `read_parquet` - Load Parquet files - `read_parquet` - Load Parquet files
- `read_json` - Load JSON/JSONL files - `read_json` - Load JSON/JSONL files


@@ -1,243 +1,49 @@
# /initial-setup - Data Platform Setup Wizard
## Skills to Load
- skills/setup-workflow.md
- skills/visual-header.md
## Visual Output
Display header: `DATA-PLATFORM - Setup Wizard`
## Usage
```
/initial-setup
```
## Workflow
Execute `skills/setup-workflow.md` phases in order:
### Phase 1: Environment Validation
- Check Python 3.10+ installed
- Stop if version too old
### Phase 2: MCP Server Setup
- Locate MCP server (installed or source path)
- Check/create virtual environment
- Install dependencies if needed
### Phase 3: PostgreSQL Configuration (Optional)
- Ask user if they want PostgreSQL access
- If yes: create `~/.config/claude/postgres.env`
- Test connection and report status
### Phase 4: dbt Configuration (Optional)
- Ask user if they use dbt
- If yes: explain auto-detection via `dbt_project.yml`
- Check dbt CLI installation
### Phase 5: Validation
- Verify MCP server can be imported
- Display summary with component status
- Inform user to restart session
## Important Notes
- Uses Bash, Read, Write, AskUserQuestion tools (NOT MCP tools)
- MCP tools unavailable until session restart
- PostgreSQL and dbt are optional - pandas works without them


@@ -1,18 +1,13 @@
# /lineage-viz - Mermaid Lineage Visualization
## Skills to Load
- skills/lineage-analysis.md
- skills/mcp-tools-reference.md
- skills/visual-header.md
## Visual Output
Display header: `DATA-PLATFORM - Lineage Visualization`
Generate Mermaid flowchart syntax for dbt model lineage.
## Usage
@@ -22,61 +17,16 @@ Generate Mermaid flowchart syntax for dbt model lineage.
## Workflow
1. **Get lineage data**: Use `dbt_lineage` to fetch model dependencies
2. **Build Mermaid graph**: Apply node shapes from `skills/lineage-analysis.md`
3. **Output**: Render copy-paste ready Mermaid flowchart
## Options
| Flag | Description |
|------|-------------|
| `--direction TB` | Top-to-bottom layout (default: LR) |
| `--depth N` | Limit lineage depth |
## Examples
@@ -86,52 +36,8 @@ flowchart LR
/lineage-viz rpt_revenue --depth 2
```
## Required MCP Tools
- `dbt_lineage` - Get model dependencies (REQUIRED)
- `dbt_ls` - List dbt resources
- `dbt_docs_generate` - Generate full manifest if needed


@@ -1,18 +1,13 @@
# /lineage - Data Lineage Visualization
## Skills to Load
- skills/lineage-analysis.md
- skills/mcp-tools-reference.md
- skills/visual-header.md
## Visual Output
Display header: `DATA-PLATFORM - Lineage`
Show data lineage for dbt models or database tables.
## Usage
@@ -22,24 +17,10 @@ Show data lineage for dbt models or database tables.
## Workflow
1. **Get lineage data**: Use `dbt_lineage` for dbt models
2. **Build lineage graph**: Identify upstream sources and downstream consumers
3. **Visualize**: ASCII tree with depth levels (see `skills/lineage-analysis.md`)
4. **Report**: Full dependency chain and refresh implications
## Examples
@@ -48,25 +29,8 @@ Show data lineage for dbt models or database tables.
/lineage fct_orders --depth 3
```
## Required MCP Tools
- `dbt_lineage` - Get model dependencies
- `dbt_ls` - List dbt resources
- `dbt_docs_generate` - Generate full manifest


@@ -1,18 +1,13 @@
# /profile - Data Profiling
## Skills to Load
- skills/data-profiling.md
- skills/mcp-tools-reference.md
- skills/visual-header.md
## Visual Output
Display header: `DATA-PLATFORM - Data Profile`
Generate statistical profile and quality report for a DataFrame.
## Usage
@@ -22,24 +17,12 @@ Generate statistical profile and quality report for a DataFrame.
## Workflow
Execute `skills/data-profiling.md` profiling workflow:
1. **Get data reference**: Use `list_data` if none provided
2. **Generate profile**: Use `describe` for statistics
3. **Quality assessment**: Identify null columns, potential issues
4. **Report**: Statistics, types, memory usage, quality score
## Examples
@@ -48,9 +31,8 @@ Generate statistical profile and quality report for a DataFrame.
/profile df_a1b2c3d4
```
## Required MCP Tools
- `describe` - Get statistical summary
- `head` - Preview first rows
- `list_data` - List available DataFrames


@@ -1,18 +1,13 @@
# /run - Execute dbt Models
## Skills to Load
- skills/dbt-workflow.md
- skills/mcp-tools-reference.md
- skills/visual-header.md
## Visual Output
Display header: `DATA-PLATFORM - dbt Run`
Run dbt models with automatic pre-validation.
## Usage
@@ -22,46 +17,28 @@ Run dbt models with automatic pre-validation.
## Workflow
Execute `skills/dbt-workflow.md` run workflow:
1. **Pre-validation** (MANDATORY): Run `dbt_parse` first
2. **Execute models**: Use `dbt_run` with selection
3. **Report results**: Status, execution time, row counts
## Selection Syntax
See `skills/dbt-workflow.md` for full selection patterns.
## Examples
```
/run                  # Run all models
/run dim_customers    # Run specific model
/run +fct_orders      # Run model and upstream
/run tag:daily        # Run models with tag
/run --full-refresh   # Rebuild incremental models
```
## Required MCP Tools
- `dbt_parse` - Pre-validation (ALWAYS RUN FIRST)
- `dbt_run` - Execute models
- `dbt_build` - Run + test


@@ -1,18 +1,12 @@
# /schema - Schema Exploration
## Skills to Load
- skills/mcp-tools-reference.md
- skills/visual-header.md
## Visual Output
Display header: `DATA-PLATFORM - Schema Explorer`
Display schema information for database tables or DataFrames.
## Usage
@@ -23,23 +17,15 @@ Display schema information for database tables or DataFrames.
## Workflow
1. **Determine target**:
   - DataFrame: show pandas schema via `describe`
   - Database table: query via `pg_columns`
   - No argument: list all tables and DataFrames
2. **For DataFrames**: Show dtypes, null counts, sample values
3. **For database tables**: Show columns, types, constraints, indexes
4. **For PostGIS**: Include geometry type and SRID via `st_tables`
## Examples
@@ -49,9 +35,8 @@ Display schema information for database tables or DataFrames.
/schema sales_data # Show DataFrame schema
```
## Required MCP Tools
- `pg_tables` - List database tables
- `pg_columns` - Get column info
- `pg_schemas` - List schemas


@@ -0,0 +1,72 @@
# Data Profiling
## Profiling Workflow
1. **Get data reference** via `list_data`
2. **Generate statistics** via `describe`
3. **Analyze quality** (nulls, duplicates, types, outliers)
4. **Calculate score** and generate report
## Quality Checks
### Null Analysis
- Calculate null percentage per column
- **PASS**: < 5% nulls
- **WARN**: 5-20% nulls
- **FAIL**: > 20% nulls
### Duplicate Detection
- Check for fully duplicated rows
- **PASS**: 0% duplicates
- **WARN**: < 1% duplicates
- **FAIL**: >= 1% duplicates
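A minimal pandas sketch of the null and duplicate checks above (the helper name is illustrative; the DataFrame is assumed to be already loaded):
```python
import pandas as pd

def null_and_dup_pct(df: pd.DataFrame) -> tuple[pd.Series, float]:
    """Per-column null % and fully-duplicated-row % for the two checks above."""
    null_pct = df.isna().mean().mul(100)      # null percentage per column
    dup_pct = 100.0 * df.duplicated().mean()  # fully duplicated rows, in %
    return null_pct, dup_pct
```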
### Type Consistency
- Identify mixed-type columns
- Flag numeric columns with string values
- **PASS**: Consistent types
- **FAIL**: Mixed types detected
### Outlier Detection (IQR Method)
- Calculate Q1, Q3, IQR = Q3 - Q1
- Outliers: values < Q1 - 1.5*IQR or > Q3 + 1.5*IQR
- **PASS**: < 1% outliers
- **WARN**: 1-5% outliers
- **FAIL**: > 5% outliers
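A pandas sketch of the IQR check, assuming the column is available locally as a Series (helper names are illustrative):
```python
import pandas as pd

def iqr_outlier_pct(series: pd.Series) -> float:
    """Percentage of values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]."""
    q1, q3 = series.quantile(0.25), series.quantile(0.75)
    iqr = q3 - q1
    mask = (series < q1 - 1.5 * iqr) | (series > q3 + 1.5 * iqr)
    return 100.0 * mask.mean()

def outlier_status(pct: float) -> str:
    """Map an outlier percentage to the thresholds above."""
    if pct < 1.0:
        return "PASS"
    return "WARN" if pct <= 5.0 else "FAIL"
```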
## Quality Scoring
| Component | Weight | Formula |
|-----------|--------|---------|
| Nulls | 30% | 100 - (avg_null_pct * 2) |
| Duplicates | 20% | 100 - (dup_pct * 50) |
| Type consistency | 25% | 100 if clean, 0 if mixed |
| Outliers | 25% | 100 - (avg_outlier_pct * 10) |
Final score: Weighted average, capped at 0-100
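A sketch of the weighted score using the formulas above; clamping each component to 0-100 before weighting is an assumption about how the cap is applied:
```python
def quality_score(avg_null_pct: float, dup_pct: float,
                  types_clean: bool, avg_outlier_pct: float) -> float:
    """Weighted quality score per the table above, capped at 0-100."""
    clamp = lambda x: max(0.0, min(100.0, x))
    weighted = (
        0.30 * clamp(100 - avg_null_pct * 2) +
        0.20 * clamp(100 - dup_pct * 50) +
        0.25 * (100.0 if types_clean else 0.0) +
        0.25 * clamp(100 - avg_outlier_pct * 10)
    )
    return round(weighted, 1)

# Example: 15.2% nulls, 1.2% dups, clean types, 3.1% outliers
print(quality_score(15.2, 1.2, True, 3.1))  # 71.1
```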
## Report Format
```
=== Data Quality Report ===
Dataset: [data_ref]
Rows: X | Columns: Y
Overall Score: XX/100 [PASS/WARN/FAIL]
--- Column Analysis ---
| Column | Nulls | Dups | Type | Outliers | Status |
|--------|-------|------|------|----------|--------|
| col1 | X.X% | - | type | X.X% | PASS |
--- Issues Found ---
[WARN/FAIL] Column 'X': Issue description
--- Recommendations ---
1. Suggested remediation steps
```
## Strict Mode
With `--strict` flag:
- **WARN** at 1% nulls (vs 5%)
- **FAIL** at 5% nulls (vs 20%)


@@ -0,0 +1,85 @@
# dbt Workflow
## Pre-Validation (MANDATORY)
**Always run `dbt_parse` before any dbt operation.**
This validates:
- dbt_project.yml syntax
- Model SQL syntax
- schema.yml definitions
- Deprecated syntax (dbt 1.9+)
If validation fails, show errors and STOP.
## Model Selection Syntax
| Pattern | Meaning |
|---------|---------|
| `model_name` | Single model |
| `+model_name` | Model and upstream dependencies |
| `model_name+` | Model and downstream dependents |
| `+model_name+` | Model with all dependencies |
| `tag:name` | Models with specific tag |
| `path:models/staging` | Models in path |
| `test_type:schema` | Schema tests only |
| `test_type:data` | Data tests only |
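To make the graph operators concrete, a toy resolver for the `+model`/`model+` patterns over adjacency mappings (not dbt's actual implementation; tag and path selectors are omitted):
```python
def resolve(selector: str, parents: dict[str, set[str]],
            children: dict[str, set[str]]) -> set[str]:
    """Toy resolver for model, +model, model+ and +model+ selection."""
    name = selector.strip("+")
    selected = {name}

    def walk(node: str, graph: dict[str, set[str]]) -> None:
        for nxt in graph.get(node, set()):
            if nxt not in selected:
                selected.add(nxt)
                walk(nxt, graph)

    if selector.startswith("+"):
        walk(name, parents)    # upstream dependencies
    if selector.endswith("+"):
        walk(name, children)   # downstream dependents
    return selected

parents = {"fct_orders": {"stg_orders"}, "stg_orders": {"raw_orders"}}
children = {"stg_orders": {"fct_orders"}}
print(resolve("+fct_orders", parents, children))
# {'fct_orders', 'stg_orders', 'raw_orders'} (set order varies)
```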
## Execution Workflow
1. **Parse**: `dbt_parse` - Validate project
2. **Run**: `dbt_run` - Execute models
3. **Test**: `dbt_test` - Run tests
4. **Build**: `dbt_build` - Run + test together
## Test Types
### Schema Tests
Defined in `schema.yml`:
- `unique` - No duplicate values
- `not_null` - No null values
- `accepted_values` - Value in allowed list
- `relationships` - Foreign key integrity
### Data Tests
Custom SQL in `tests/` directory:
- Return rows that fail assertion
- Zero rows = pass, any rows = fail
## Materialization Types
| Type | Description |
|------|-------------|
| `view` | Virtual table, always fresh |
| `table` | Physical table, full rebuild |
| `incremental` | Append/merge new rows only |
| `ephemeral` | CTE, no physical object |
## Exit Codes
| Code | Meaning |
|------|---------|
| 0 | Success |
| 1 | Test/run failure |
| 2 | dbt error (parse failure) |
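For reference, a hedged sketch of the parse-then-run gate and these exit codes against the dbt CLI (the MCP tools wrap this behavior; the direct `subprocess` calls here are illustrative, not how the plugin invokes dbt):
```python
import subprocess

def run_dbt(*args: str) -> int:
    """Run a dbt CLI command and return its exit code (0/1/2 as above)."""
    result = subprocess.run(["dbt", *args], capture_output=True, text=True)
    print(result.stdout)
    return result.returncode

def run_with_validation(selection: str) -> int:
    if (code := run_dbt("parse")) != 0:  # mandatory pre-validation gate
        print(f"Parse failed (exit {code}) - stopping")
        return code
    return run_dbt("run", "--select", selection)
```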
## Result Formatting
```
=== dbt [Operation] Results ===
Project: [project_name]
Selection: [selection_pattern]
--- Summary ---
Total: X models/tests
PASS: X (%)
FAIL: X (%)
WARN: X (%)
SKIP: X (%)
--- Details ---
[Model/Test details with status]
--- Failure Details ---
[Error messages and remediation]
```


@@ -0,0 +1,73 @@
# Lineage Analysis
## Lineage Workflow
1. **Get lineage data** via `dbt_lineage`
2. **Build dependency graph** (upstream + downstream)
3. **Visualize** (ASCII tree or Mermaid)
4. **Report** critical path and refresh implications
## ASCII Tree Format
```
Sources:
|-- raw_customers (source)
|-- raw_orders (source)
model_name (materialization)
|-- upstream:
| |-- stg_model (view)
| |-- raw_source (source)
|-- downstream:
|-- fct_model (incremental)
|-- rpt_model (table)
```
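A minimal sketch of rendering the upstream branch of such a tree from a dependency mapping (the `parents`/`kinds` dict shapes are assumptions; real `dbt_lineage` output would need adapting):
```python
def render_upstream(node: str, parents: dict[str, list[str]],
                    kinds: dict[str, str], depth: int = 0) -> list[str]:
    """Recursively render a node and its upstream dependencies as a tree."""
    prefix = "" if depth == 0 else "|   " * (depth - 1) + "|-- "
    lines = [f"{prefix}{node} ({kinds.get(node, 'model')})"]
    for parent in parents.get(node, []):
        lines += render_upstream(parent, parents, kinds, depth + 1)
    return lines

parents = {"fct_orders": ["stg_orders"], "stg_orders": ["raw_orders"]}
kinds = {"fct_orders": "incremental", "stg_orders": "view", "raw_orders": "source"}
print("\n".join(render_upstream("fct_orders", parents, kinds)))
# fct_orders (incremental)
# |-- stg_orders (view)
# |   |-- raw_orders (source)
```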
## Mermaid Diagram Format
```mermaid
flowchart LR
subgraph Sources
raw_data[(raw_data)]
end
subgraph Staging
stg_model[stg_model]
end
subgraph Marts
dim_model{{dim_model}}
end
raw_data --> stg_model
stg_model --> dim_model
```
## Mermaid Node Shapes
| Materialization | Shape | Syntax |
|-----------------|-------|--------|
| source | Cylinder | `[(name)]` |
| view | Rectangle | `[name]` |
| table | Hexagon | `{{name}}` |
| incremental | Hexagon (same shape as table) | `{{name}}` |
| ephemeral | Parallelogram | `[/name/]` |
## Mermaid Options
| Flag | Description |
|------|-------------|
| `--direction TB` | Top-to-bottom (default: LR) |
| `--depth N` | Limit lineage depth |
## Styling Target Model
```mermaid
style target_model fill:#f96,stroke:#333,stroke-width:2px
```
## Usage Tips
1. **Documentation**: Copy Mermaid to README.md
2. **GitHub/GitLab**: Both render Mermaid natively
3. **Live Editor**: https://mermaid.live for interactive editing


@@ -0,0 +1,69 @@
# MCP Tools Reference
## pandas Tools
| Tool | Description |
|------|-------------|
| `read_csv` | Load CSV file into DataFrame |
| `read_parquet` | Load Parquet file into DataFrame |
| `read_json` | Load JSON/JSONL file into DataFrame |
| `to_csv` | Export DataFrame to CSV |
| `to_parquet` | Export DataFrame to Parquet |
| `describe` | Get statistical summary (count, mean, std, min, max) |
| `head` | Preview first N rows |
| `tail` | Preview last N rows |
| `filter` | Filter rows by condition |
| `select` | Select specific columns |
| `groupby` | Aggregate data by columns |
| `join` | Join two DataFrames |
| `list_data` | List all loaded DataFrames |
| `drop_data` | Remove DataFrame from memory |
## PostgreSQL Tools
| Tool | Description |
|------|-------------|
| `pg_connect` | Establish database connection |
| `pg_query` | Execute SELECT query, return DataFrame |
| `pg_execute` | Execute INSERT/UPDATE/DELETE |
| `pg_tables` | List tables in schema |
| `pg_columns` | Get column info for table |
| `pg_schemas` | List available schemas |
## PostGIS Tools
| Tool | Description |
|------|-------------|
| `st_tables` | List tables with geometry columns |
| `st_geometry_type` | Get geometry type for column |
| `st_srid` | Get SRID for geometry column |
| `st_extent` | Get bounding box for geometry |
## dbt Tools
| Tool | Description |
|------|-------------|
| `dbt_parse` | Validate project (ALWAYS RUN FIRST) |
| `dbt_run` | Execute models |
| `dbt_test` | Run tests |
| `dbt_build` | Run + test together |
| `dbt_compile` | Compile SQL without execution |
| `dbt_ls` | List dbt resources |
| `dbt_docs_generate` | Generate documentation manifest |
| `dbt_lineage` | Get model dependencies |
## Tool Selection Guidelines
**For data loading:**
- Files: `read_csv`, `read_parquet`, `read_json`
- Database: `pg_query`
**For data exploration:**
- Schema: `describe`, `pg_columns`, `st_tables`
- Preview: `head`, `tail`
- Available data: `list_data`, `pg_tables`
**For dbt operations:**
- Always start with `dbt_parse` for validation
- Use `dbt_lineage` for dependency analysis
- Use `dbt_compile` to see rendered SQL
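As a hedged illustration of this ordering only - the `call` helper and its keyword arguments below are placeholders, not the real MCP invocation syntax:
```python
def call(tool: str, **kwargs):
    """Stand-in for an MCP tool invocation (illustrative only)."""
    print(f"-> {tool} {kwargs}")
    return "df_example"  # pretend data_ref

ref = call("read_csv", path="data/sales.csv")  # load
call("head", data_ref=ref, n=5)                # preview
call("describe", data_ref=ref)                 # explore stats
call("dbt_parse")                              # validate before any dbt op
call("dbt_lineage", model="fct_orders")        # dependency analysis
```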


@@ -0,0 +1,108 @@
# Setup Workflow
## Important Context
- **This workflow uses Bash, Read, Write, AskUserQuestion tools** - NOT MCP tools
- **MCP tools won't work until after setup + session restart**
- **PostgreSQL and dbt are optional** - pandas tools work without them
## Phase 1: Environment Validation
### Check Python Version
```bash
python3 --version
```
Requires Python 3.10+. If below, stop and inform user.
## Phase 2: MCP Server Setup
### Locate MCP Server
Check both paths:
```bash
# Installed marketplace
ls -la ~/.claude/plugins/marketplaces/leo-claude-mktplace/mcp-servers/data-platform/
# Source
ls -la ~/claude-plugins-work/mcp-servers/data-platform/
```
### Check/Create Virtual Environment
```bash
# Check
ls -la /path/to/mcp-servers/data-platform/.venv/bin/python
# Create if missing
cd /path/to/mcp-servers/data-platform
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
deactivate
```
## Phase 3: PostgreSQL Configuration (Optional)
### Config Location
`~/.config/claude/postgres.env`
### Config Format
```bash
# PostgreSQL Configuration
POSTGRES_URL=postgresql://user:pass@host:5432/db
```
Set permissions: `chmod 600 ~/.config/claude/postgres.env`
### Test Connection
```bash
source ~/.config/claude/postgres.env && python3 -c "
import asyncio, asyncpg
async def test():
conn = await asyncpg.connect('$POSTGRES_URL', timeout=5)
ver = await conn.fetchval('SELECT version()')
await conn.close()
print(f'SUCCESS: {ver.split(\",\")[0]}')
asyncio.run(test())
"
```
## Phase 4: dbt Configuration (Optional)
dbt is **project-level** (auto-detected via `dbt_project.yml`).
For subdirectory projects, set in `.env`:
```
DBT_PROJECT_DIR=./transform
DBT_PROFILES_DIR=~/.dbt
```
### Check dbt Installation
```bash
dbt --version
```
## Phase 5: Validation
### Verify MCP Server
```bash
cd /path/to/mcp-servers/data-platform
.venv/bin/python -c "from mcp_server.server import DataPlatformMCPServer; print('OK')"
```
## Memory Limits
Default: 100,000 rows per DataFrame
Override in project `.env`:
```
DATA_PLATFORM_MAX_ROWS=500000
```
For larger datasets:
- Use chunked processing (`chunk_size` parameter)
- Filter data before loading
- Store to Parquet for efficient re-loading
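A sketch of the chunked-processing route for data above the row limit (file path and the `category`/`amount` columns are illustrative; `chunk_size` corresponds to pandas' `chunksize`):
```python
import pandas as pd

MAX_ROWS = 100_000  # default DATA_PLATFORM_MAX_ROWS

# Aggregate a large CSV without holding more than MAX_ROWS in memory at once
totals: dict[str, float] = {}
for chunk in pd.read_csv("big_file.csv", chunksize=MAX_ROWS):
    for key, value in chunk.groupby("category")["amount"].sum().items():
        totals[key] = totals.get(key, 0.0) + value

# Store the reduced result to Parquet for efficient re-loading
pd.DataFrame({"category": list(totals), "amount": list(totals.values())}) \
    .to_parquet("big_file_agg.parquet")
```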
## Session Restart
After setup, restart Claude Code session for MCP tools to become available.


@@ -0,0 +1,45 @@
# Visual Header
## Standard Format
Display at the start of every command execution:
```
+----------------------------------------------------------------------+
| DATA-PLATFORM - [Command Name] |
+----------------------------------------------------------------------+
```
## Command Headers
| Command | Header Text |
|---------|-------------|
| initial-setup | Setup Wizard |
| ingest | Ingest |
| profile | Data Profile |
| schema | Schema Explorer |
| data-quality | Data Quality |
| run | dbt Run |
| dbt-test | dbt Tests |
| lineage | Lineage |
| lineage-viz | Lineage Visualization |
| explain | Model Explanation |
## Summary Box Format
For completion summaries:
```
+============================================================+
| DATA-PLATFORM [OPERATION] COMPLETE |
+============================================================+
| Component: [Status] |
| Component: [Status] |
+============================================================+
```
## Status Indicators
- Success: `[check]` or `Ready`
- Warning: `[!]` or `Partial`
- Failure: `[X]` or `Failed`


@@ -6,116 +6,49 @@ description: Generate changelog from conventional commits in Keep-a-Changelog fo
Generate a changelog entry from conventional commits.
## Skills to Load
- skills/changelog-format.md
## Visual Output
```
+------------------------------------------------------------------+
| DOC-GUARDIAN - Changelog Generation |
+------------------------------------------------------------------+
```
## Process
1. **Identify Commit Range**
   Execute `skills/changelog-format.md` - detect range from tags:
   - Default: commits since last tag
   - Optional: specify range (e.g., `v1.0.0..HEAD`)
   - Detect if this is first release (no previous tags)
2. **Parse Conventional Commits**
   Use pattern from skill: `<type>(<scope>): <description>`
3. **Group by Type**
   Map to Keep-a-Changelog sections per skill
4. **Format Entries**
   - Extract scope as bold prefix
   - Use description as entry text
   - Link commit hashes if repo URL available
   - Include PR/issue references from footer
5. **Output**
   Use format from `skills/changelog-format.md`
## Options
| Flag | Description | Default |
|------|-------------|---------|
| `--from <tag>` | Start from tag | Latest tag |
| `--to <ref>` | End at ref | HEAD |
| `--version <ver>` | Version header | [Unreleased] |
| `--include-merge` | Include merges | false |
| `--group-by-scope` | Group by scope | false |
## Integration
Output designed for direct copy to CHANGELOG.md:
- Follows [Keep a Changelog](https://keepachangelog.com) format
- Compatible with semantic versioning
- Excludes non-user-facing commits (chore, ci, test by default)
## Example Usage
```
/changelog-gen
/changelog-gen --from v1.0.0 --version 1.1.0
/changelog-gen --include-merge --group-by-scope
```
## Non-Conventional Commits
Commits not following conventional format are:
- Listed under "Other" section
- Flagged for manual categorization
- Skipped if `--strict` flag is used


@@ -6,33 +6,26 @@ description: Full documentation audit - scans entire project for doc drift witho
Perform a comprehensive documentation drift analysis.
## Skills to Load
- skills/drift-detection.md
- skills/doc-patterns.md
## Visual Output
```
+------------------------------------------------------------------+
| DOC-GUARDIAN - Documentation Audit |
+------------------------------------------------------------------+
```
## Process
1. **Inventory Documentation Files**
   Execute `skills/doc-patterns.md` - identify all doc files:
   - README.md (root and subdirectories)
   - CLAUDE.md
   - API documentation
   - Docstrings in code files
   - Configuration references
2. **Cross-Reference Analysis**
   Execute `skills/drift-detection.md` - verify all references:
   - Extract referenced functions, classes, endpoints, configs
   - Verify each reference exists in codebase
   - Check signatures/types match documentation
   - Flag deprecated or renamed items still in docs
3. **Completeness Check**
   - Public functions without docstrings
@@ -40,23 +33,7 @@ Then proceed with the audit.
   - Environment variables used but not documented
   - CLI commands not in help text
4. **Output**
   Use format from `skills/drift-detection.md`
5. **Do NOT make changes** - audit only, report findings


@@ -6,135 +6,43 @@ description: Calculate documentation coverage percentage for functions and class
Analyze codebase to calculate documentation coverage metrics.
## Skills to Load
- skills/coverage-calculation.md
- skills/doc-patterns.md
## Visual Output
```
+------------------------------------------------------------------+
| DOC-GUARDIAN - Documentation Coverage |
+------------------------------------------------------------------+
```
## Process
1. **Scan Source Files**
   Execute `skills/coverage-calculation.md` - identify documentable items:
   - Python: functions, classes, methods, module-level docstrings
   - JavaScript/TypeScript: functions (incl. arrow functions), classes, methods, JSDoc
   - Other languages: adapt patterns for Go, Rust, etc.
2. **Determine Documentation Status**
   Check each item has a meaningful docstring/JSDoc (not just `pass` or `TODO`); in detailed mode, also verify parameters and return types are documented
3. **Calculate Metrics**
   Use formula from skill: `Coverage = (Documented / Total) * 100`
   Levels: Basic (any docstring), Standard (describes purpose), Complete (all parameters and return documented)
4. **Output**
   Use format from `skills/coverage-calculation.md`
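For Python sources, steps 1-3 can be sketched with the standard `ast` module (this counts any non-empty docstring, i.e. the Basic level; the helper name is illustrative):
```python
import ast

def basic_coverage(source: str) -> float:
    """Percent of module/class/function nodes with a non-empty docstring."""
    tree = ast.parse(source)
    nodes = [tree] + [n for n in ast.walk(tree)
                      if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef,
                                        ast.ClassDef))]
    documented = sum(1 for n in nodes if (ast.get_docstring(n) or "").strip())
    return 100.0 * documented / len(nodes)

with open("module.py") as f:
    print(f"{basic_coverage(f.read()):.1f}%")
```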
## Options
| Flag | Description | Default |
|------|-------------|---------|
| `--path <dir>` | Scan specific directory | Project root |
| `--exclude <glob>` | Exclude pattern | `**/test_*` |
| `--min-coverage <pct>` | Fail if below | none |
| `--detailed` | Check params/returns | false |
## Exit Codes
- 0: Coverage meets threshold (or no threshold set)
- 1: Coverage below threshold
## Example Usage
```
/doc-coverage
/doc-coverage --path src/
/doc-coverage --min-coverage 85 --exclude "**/generated/**"
/doc-coverage --detailed --include-private
```
## Language Detection
File extensions mapped to documentation patterns:
| Extension | Language | Doc Format |
|-----------|----------|------------|
| .py | Python | Docstrings (""") |
| .js, .ts | JavaScript/TypeScript | JSDoc (/** */) |
| .go | Go | // comments above |
| .rs | Rust | /// doc comments |
| .rb | Ruby | # comments, YARD |
| .java | Java | Javadoc (/** */) |


@@ -6,22 +6,23 @@ description: Synchronize all pending documentation updates in a single commit
Apply all pending documentation updates detected by doc-guardian hooks.
## Skills to Load
- skills/sync-workflow.md
- skills/drift-detection.md
## Visual Output
```
+------------------------------------------------------------------+
| DOC-GUARDIAN - Documentation Sync |
+------------------------------------------------------------------+
```
## Process
1. **Review Pending Queue**
   Execute `skills/sync-workflow.md` - read `.doc-guardian-queue`
2. **Batch Updates**
   For each pending item:
@@ -29,48 +30,13 @@ Then proceed with the sync.
   - Apply the update
   - Track in change list
3. **Commit Strategy**
   - Stage all doc changes together
   - Single commit: `docs: sync documentation with code changes`
   - Include summary in commit body
4. **Clear Queue**
   After successful sync, clear the queue file
5. **Output**
   Use format from `skills/sync-workflow.md`


@@ -6,30 +6,23 @@ description: Detect documentation files that are stale relative to their associa
Identify documentation files that may be outdated based on commit history.
## Skills to Load
- skills/staleness-metrics.md
- skills/drift-detection.md
## Visual Output
```
+------------------------------------------------------------------+
| DOC-GUARDIAN - Stale Documentation Check |
+------------------------------------------------------------------+
```
## Process
1. **Map Documentation to Code**
   Execute `skills/staleness-metrics.md` - build relationships
2. **Analyze Commit History**
   For each doc file:
@@ -38,118 +31,22 @@ Then proceed with the check.
   - Count commits to code since doc was updated
3. **Calculate Staleness**
   Use levels from skill (Fresh/Aging/Stale/Critical)
4. **Output**
   Use format from `skills/staleness-metrics.md`
## Options ## Options
| Flag | Description | Default | | Flag | Description | Default |
|------|-------------|---------| |------|-------------|---------|
| `--threshold <n>` | Commits behind to flag as stale | 10 | | `--threshold <n>` | Commits behind to flag | 10 |
| `--days` | Use days instead of commits | false | | `--days` | Use days instead | false |
| `--path <dir>` | Scan specific directory | Project root | | `--path <dir>` | Scan directory | Project root |
| `--doc-pattern <glob>` | Pattern for doc files | `**/*.md,**/README*` | | `--show-fresh` | Include fresh docs | false |
| `--ignore <glob>` | Ignore specific docs | `CHANGELOG.md,LICENSE` |
| `--show-fresh` | Include fresh docs in output | false |
| `--format <fmt>` | Output format (table, json) | table |
## Relationship Detection
How docs are mapped to code:
1. **Same Directory**
- `src/api/README.md` relates to `src/api/**/*`
2. **Name Matching**
- `docs/auth.md` relates to `**/auth.*`, `**/auth/**`
3. **Explicit Links**
- Parse `[link](path)` in docs to find related files
4. **Import Analysis**
- Track which modules are referenced in code examples
## Configuration
Create `.doc-guardian.yml` to customize mappings:
```yaml
stale-docs:
threshold: 10
mappings:
- doc: docs/deployment.md
code:
- Dockerfile
- docker-compose.yml
- .github/workflows/deploy.yml
- doc: ARCHITECTURE.md
code:
- src/**/*
ignore:
- CHANGELOG.md
- LICENSE
- vendor/**
```
## Example Usage
```
/stale-docs
/stale-docs --threshold 5
/stale-docs --days --threshold 30
/stale-docs --path docs/ --show-fresh
```
## Integration with doc-audit
`/stale-docs` focuses specifically on commit-based staleness, while `/doc-audit` checks content accuracy. Use both for comprehensive documentation health:
```
/doc-audit # Check for broken references and content drift
/stale-docs # Check for files that may need review
```
## Exit Codes ## Exit Codes
- 0: No critical or stale documentation - 0: No critical or stale docs
- 1: Stale documentation found (useful for CI) - 1: Stale docs found
- 2: Critical documentation found - 2: Critical docs found

View File

@@ -0,0 +1,121 @@
---
name: changelog-format
description: Keep a Changelog format and Conventional Commits parsing
---
# Changelog Format
## Purpose
Defines the Keep a Changelog format and how to parse Conventional Commits.
## When to Use
- **changelog-gen**: Generating changelog entries from commits
- **git-flow integration**: Validating commit message format
---
## Conventional Commits Pattern
```
<type>(<scope>): <description>
[optional body]
[optional footer(s)]
```
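A minimal parsing sketch using `sed` (the subject line is illustrative; this is not the plugin's implementation):
```bash
# Split a conventional subject line into type / scope / description
subject='feat(auth)!: add password reset'
type=$(printf '%s' "$subject" | sed -E 's/^([a-z]+).*/\1/')
scope=$(printf '%s' "$subject" | sed -nE 's/^[a-z]+\(([^)]*)\).*/\1/p')
desc=$(printf '%s' "$subject" | sed -E 's/^[^:]*:[[:space:]]*//')
echo "type=$type scope=${scope:-none} desc=$desc"
```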
---
## Type to Section Mapping
| Commit Type | Changelog Section |
|-------------|------------------|
| `feat` | Added |
| `fix` | Fixed |
| `docs` | Documentation |
| `perf` | Performance |
| `refactor` | Changed |
| `style` | Changed |
| `test` | Testing |
| `build` | Build |
| `ci` | CI/CD |
| `chore` | Maintenance |
| `BREAKING CHANGE` | Breaking Changes |
---
## Keep a Changelog Sections
Standard order (include only non-empty sections):
1. Breaking Changes
2. Added
3. Changed
4. Deprecated
5. Removed
6. Fixed
7. Security
---
## Breaking Changes Detection
Detected by:
- `!` suffix on type: `feat!: new auth system`
- `BREAKING CHANGE` in footer
- `BREAKING-CHANGE` in footer
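Both markers can be surfaced with plain `git log`; a sketch (the tag name is illustrative):
```bash
# Subjects using the `!` marker
git log --format='%h %s' v1.0.0..HEAD | grep -E ' [a-z]+(\([^)]*\))?!: '
# Commits declaring BREAKING CHANGE / BREAKING-CHANGE in the body
git log --format='%h %s' --grep='BREAKING[ -]CHANGE' v1.0.0..HEAD
```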
---
## Entry Formatting
For each commit:
1. Extract scope (if present) as bold prefix: `**scope**: `
2. Use description as entry text
3. Link to commit hash if repository URL available
4. Include PR/issue references from footer
### Example Output
```markdown
## [Unreleased]
### Breaking Changes
- **auth**: Remove deprecated OAuth1 support
### Added
- **api**: New batch processing endpoint
- User preference saving feature
### Changed
- **core**: Improve error message clarity
### Fixed
- **api**: Handle null values in response
```
---
## Non-Conventional Handling
Commits that don't follow the format:
- List under "Other" section
- Flag for manual categorization
- Skip if `--strict` flag used
---
## Commit Range Detection
1. Default: commits since last tag
2. First release: all commits from initial
3. Explicit: `--from <tag> --to <ref>`
```bash
# Find last tag
git describe --tags --abbrev=0
# Commits since tag
git log v1.0.0..HEAD --oneline
```

View File

@@ -0,0 +1,131 @@
---
name: coverage-calculation
description: Documentation coverage metrics, language patterns, and thresholds
---
# Coverage Calculation
## Purpose
Defines how to calculate documentation coverage and the thresholds used to rate it.
## When to Use
- **doc-coverage**: Full coverage analysis
- **doc-audit**: Completeness checks
---
## Coverage Formula
```
Coverage = (Documented Items / Total Items) * 100
```
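The arithmetic as a shell one-liner (the counts are illustrative):
```bash
documented=142 total=156
awk -v d="$documented" -v t="$total" 'BEGIN { printf "Coverage: %.1f%%\n", d / t * 100 }'
```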
---
## Documentable Items by Language
### Python
- Functions (`def`)
- Classes
- Methods
- Module-level docstrings
### JavaScript/TypeScript
- Functions (`function`, arrow functions)
- Classes
- Methods
- JSDoc comments (`/** */`)
### Go
- Functions
- Types
- Methods
- Package comments (`//` above declaration)
### Rust
- Functions
- Structs/Enums
- Impl blocks
- Doc comments (`///`)
---
## Language Detection
| Extension | Language | Doc Format |
|-----------|----------|------------|
| .py | Python | Docstrings (`"""`) |
| .js, .ts | JavaScript/TypeScript | JSDoc (`/** */`) |
| .go | Go | `//` comments above |
| .rs | Rust | `///` doc comments |
| .rb | Ruby | `#` comments, YARD |
| .java | Java | Javadoc (`/** */`) |
---
## Coverage Levels
### Basic
- Item has any docstring/comment
- Not empty or placeholder
### Standard
- Docstring describes purpose
- Non-trivial content (not just `pass` or `TODO`)
### Complete
- All parameters documented
- Return type documented
- Raises/throws documented
---
## Coverage Thresholds
| Level | Coverage | Description |
|-------|----------|-------------|
| Minimal | 60% | Basic documentation exists |
| Good | 80% | Most public APIs documented |
| Excellent | 95% | Comprehensive documentation |
---
## Output Format
```
## Documentation Coverage Report
### Summary
- Total documentable items: 156
- Documented: 142
- Coverage: 91.0%
### By Type
| Type | Total | Documented | Coverage |
|------|-------|------------|----------|
| Functions | 89 | 85 | 95.5% |
| Classes | 23 | 21 | 91.3% |
| Methods | 44 | 36 | 81.8% |
### By Directory
| Path | Total | Documented | Coverage |
|------|-------|------------|----------|
| src/api/ | 34 | 32 | 94.1% |
| src/utils/ | 28 | 28 | 100.0% |
### Undocumented Items
- [ ] src/api/handlers.py:45 `create_order()`
- [ ] src/models/user.py:23 `UserModel.validate()`
```
---
## Exclusion Patterns
Default exclusions:
- `**/test_*` - Test files
- `**/*_test.*` - Test files
- Private members (`_prefixed`) unless `--include-private`
- Generated code (`**/generated/**`)

View File

@@ -0,0 +1,67 @@
---
name: doc-patterns
description: Common documentation structures and patterns
---
# Documentation Patterns
## Purpose
Defines common documentation file structures and their contents.
## When to Use
- **doc-audit**: Understanding what to check in each doc type
- **doc-coverage**: Identifying documentation locations
---
## README.md Patterns
Typical sections:
- **Installation**: Version requirements, dependencies
- **Usage**: Function calls, CLI commands
- **Configuration**: Environment vars, config files
- **API**: Endpoint references
---
## CLAUDE.md Patterns
Typical sections:
- **Project Context**: Tech stack versions
- **File Structure**: Directory layout
- **Commands**: Available operations
- **Workflows**: Process descriptions
---
## Code Documentation
### Docstrings
- Function signatures
- Parameters and types
- Return values
- Raised exceptions
### Type Hints
- Should match docstring types
- Verify consistency
### Inline Comments
- References to other code
- TODO markers
- Warning notes
---
## File Inventory
Standard documentation files to check:
- `README.md` (root and subdirectories)
- `CLAUDE.md`
- `CONTRIBUTING.md`
- `CHANGELOG.md`
- `docs/**/*.md`
- API documentation
- Configuration references

View File

@@ -1,39 +0,0 @@
---
description: Knowledge of documentation patterns and structures for drift detection
---
# Documentation Patterns Skill
## Common Documentation Structures
### README.md Patterns
- Installation section: version requirements, dependencies
- Usage section: function calls, CLI commands
- Configuration section: env vars, config files
- API section: endpoint references
### CLAUDE.md Patterns
- Project context: tech stack versions
- File structure: directory layout
- Commands: available operations
- Workflows: process descriptions
### Code Documentation
- Docstrings: function signatures, parameters, returns
- Type hints: should match docstring types
- Comments: inline references to other code
## Drift Detection Rules
1. **Version Mismatch**: Any hardcoded version in docs must match package.json, pyproject.toml, requirements.txt
2. **Function References**: Function names in docs must exist in codebase with matching signatures
3. **Path References**: File paths in docs must exist in current directory structure
4. **Config Keys**: Environment variables and config keys in docs must be used in code
5. **Command Examples**: CLI examples in docs should be valid commands
## Priority Levels
- **P0 (Critical)**: Broken references that would cause user errors
- **P1 (High)**: Outdated information that misleads users
- **P2 (Medium)**: Missing documentation for public interfaces
- **P3 (Low)**: Style inconsistencies, minor wording issues

View File

@@ -0,0 +1,102 @@
---
name: drift-detection
description: Core drift detection rules, cross-reference analysis, and priority levels
---
# Drift Detection
## Purpose
Defines how to detect documentation drift through cross-reference analysis.
## When to Use
- **doc-audit**: Full cross-reference analysis
- **stale-docs**: Commit-based staleness detection
- **SessionStart hook**: Real-time drift detection
---
## Cross-Reference Analysis
For each documentation file:
1. Extract referenced functions, classes, endpoints, configs
2. Verify each reference exists in codebase
3. Check signatures/types match documentation
4. Flag deprecated or renamed items still in docs
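A sketch of step 2 for a single reference (the function name and search path are illustrative):
```bash
fn="compute_total"
grep -rq --include='*.py' -E "def ${fn}\(" src/ \
  || echo "P0: ${fn} is referenced in docs but not defined under src/"
```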
---
## Drift Detection Rules
| Rule | Check | Priority |
|------|-------|----------|
| Version Mismatch | Hardcoded versions must match package.json, pyproject.toml, requirements.txt | P0 |
| Function References | Function names must exist with matching signatures | P0 |
| Path References | File paths must exist in directory structure | P0 |
| Config Keys | Env vars and config keys must be used in code | P1 |
| Command Examples | CLI examples must be valid commands | P1 |
---
## Priority Levels
| Level | Description | Action |
|-------|-------------|--------|
| **P0 (Critical)** | Broken references causing user errors | Immediate fix |
| **P1 (High)** | Outdated information misleading users | Fix in current session |
| **P2 (Medium)** | Missing documentation for public interfaces | Add to backlog |
| **P3 (Low)** | Style inconsistencies, minor wording | Optional |
---
## Drift Categories
### Critical (Broken References)
- Function/class renamed but docs not updated
- File moved/deleted but docs still reference old path
- API endpoint changed but docs show old URL
### Stale (Outdated Info)
- Version numbers not matching actual
- Configuration examples using deprecated keys
- Screenshots of old UI
### Missing (Undocumented)
- Public functions without docstrings
- New features not in README
- Environment variables used but not documented
---
## Documentation File Mapping
| Doc File | Related Code |
|----------|--------------|
| README.md | All files in same directory |
| API.md | src/api/**/* |
| CLAUDE.md | Configuration files, scripts |
| docs/module.md | src/module/**/* |
| Component.md | Component.tsx, Component.css |
---
## Output Format
```
## Documentation Drift Report
### Critical (Broken References)
- [ ] README.md:45 references `calculate_total()` - function renamed to `compute_total()`
### Stale (Outdated Info)
- [ ] CLAUDE.md:23 lists Python 3.9 - project uses 3.11
### Missing (Undocumented)
- [ ] api/handlers.py:`create_order()` - no docstring
### Summary
- Critical: X items
- Stale: X items
- Missing: X items
```

View File

@@ -0,0 +1,127 @@
---
name: staleness-metrics
description: Documentation staleness levels, thresholds, and relationship detection
---
# Staleness Metrics
## Purpose
Defines how to measure documentation staleness relative to code changes.
## When to Use
- **stale-docs**: Commit-based staleness detection
- **doc-audit**: Age-based analysis
---
## Staleness Calculation
```
Commits Behind = Code Commits Since Doc Update
Days Behind = Days Since Doc Update - Days Since Code Update
```
---
## Staleness Levels
| Commits Behind | Level | Action |
|----------------|-------|--------|
| 0-5 | Fresh | No action needed |
| 6-10 | Aging | Review recommended |
| 11-20 | Stale | Update needed |
| 20+ | Critical | Immediate attention |
---
## Relationship Detection
### 1. Same Directory
`src/api/README.md` relates to `src/api/**/*`
### 2. Name Matching
`docs/auth.md` relates to `**/auth.*`, `**/auth/**`
### 3. Explicit Links
Parse `[link](path)` in docs to find related files
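For example (doc path is illustrative; relative links would need resolving against the doc's directory):
```bash
# List [link](path) targets that exist on disk
grep -oE '\]\([^)#]+\)' docs/api.md | sed -e 's/^](//' -e 's/)$//' |
while read -r p; do
  [ -e "$p" ] && echo "related: $p"
done
```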
### 4. Import Analysis
Track which modules are referenced in code examples
---
## Git Commands
```bash
# Last doc commit
git log -1 --format="%H %ai" -- docs/api.md
# Last code commit for related files
git log -1 --format="%H %ai" -- src/api/
# Commits to code since doc update
git rev-list <doc-commit>..HEAD -- src/api/ | wc -l
```
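Combining the pieces into a classifier for one doc/code pair (paths are illustrative, not the plugin's implementation):
```bash
doc="docs/api.md" code="src/api/"
doc_commit=$(git log -1 --format=%H -- "$doc")
# Commits touching the related code since the doc was last updated
behind=$(git rev-list "${doc_commit}..HEAD" -- "$code" | wc -l)
if   [ "$behind" -le 5 ];  then level="Fresh"
elif [ "$behind" -le 10 ]; then level="Aging"
elif [ "$behind" -le 20 ]; then level="Stale"
else                            level="Critical"
fi
echo "$doc: $behind commits behind ($level)"
```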
---
## Configuration
`.doc-guardian.yml`:
```yaml
stale-docs:
threshold: 10
mappings:
- doc: docs/deployment.md
code:
- Dockerfile
- docker-compose.yml
- .github/workflows/deploy.yml
- doc: ARCHITECTURE.md
code:
- src/**/*
ignore:
- CHANGELOG.md
- LICENSE
- vendor/**
```
---
## Output Format
```
## Stale Documentation Report
### Critical (20+ commits behind)
| File | Last Updated | Commits Behind | Related Code |
|------|--------------|----------------|--------------|
| docs/api.md | 2024-01-15 | 34 | src/api/**/* |
### Stale (11-20 commits behind)
| File | Last Updated | Commits Behind | Related Code |
|------|--------------|----------------|--------------|
| README.md | 2024-02-20 | 15 | package.json, src/index.ts |
### Aging (6-10 commits behind)
| File | Last Updated | Commits Behind | Related Code |
|------|--------------|----------------|--------------|
| CONTRIBUTING.md | 2024-03-01 | 8 | .github/*, scripts/* |
### Summary
- Critical: 1 file
- Stale: 1 file
- Aging: 1 file
- Fresh: 12 files
- Total documentation files: 15
```
---
## Exit Codes
- 0: No critical or stale documentation
- 1: Stale documentation found (for CI)
- 2: Critical documentation found

View File

@@ -0,0 +1,109 @@
---
name: sync-workflow
description: Documentation synchronization process and queue management
---
# Sync Workflow
## Purpose
Defines how to synchronize documentation with code changes.
## When to Use
- **doc-sync**: Apply pending documentation updates
- **PostToolUse hook**: Queue drift for later sync
---
## Queue File
Location: `.doc-guardian-queue` in project root
Format:
```
# Doc Guardian Queue
# Generated: YYYY-MM-DD HH:MM:SS
## Pending Updates
- README.md:45 | reference | calculate_total -> compute_total
- CLAUDE.md:23 | version | Python 3.9 -> 3.11
- src/api/README.md:12 | path | old/path.py -> new/path.py
```
---
## Update Types
### Reference Fixes
- Renamed function/class: update all doc references
- Changed signature: update parameter documentation
- Removed item: remove or mark deprecated in docs
### Content Sync
- Version numbers (Python, Node, dependencies)
- Configuration keys/values
- File paths and directory structures
- Command examples and outputs
### Structural
- Add missing sections for new features
- Remove sections for deleted features
- Reorder to match current code organization
---
## Sync Process
1. **Review Queue**
- Read `.doc-guardian-queue`
- List all pending items
2. **Batch Updates**
- Apply each update
- Track in change list
3. **Commit Strategy**
- Stage all doc changes together
- Single commit: `docs: sync documentation with code changes`
- Include summary in commit body
4. **Clear Queue**
```bash
echo "# Doc Guardian Queue - cleared after sync on $(date +%Y-%m-%d)" > .doc-guardian-queue
```
---
## Output Format
```
## Documentation Sync Complete
### Files Updated
- README.md (3 changes)
- CLAUDE.md (1 change)
- src/api/README.md (2 changes)
### Changes Applied
- Updated function reference: calculate_total -> compute_total
- Updated Python version: 3.9 -> 3.11
- Added docstring to create_order()
Committed: abc123f
```
---
## Queue Entry Format
```
<file>:<line> | <type> | <old> -> <new>
```
Types:
- `reference` - Function/class name change
- `version` - Version number update
- `path` - File path change
- `config` - Configuration key/value
- `missing` - New documentation needed
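A sketch for summarizing pending entries by type (assumes the entry format above):
```bash
# Count queued updates per type; entry lines are the only ones containing pipes
grep '|' .doc-guardian-queue |
  awk -F' \\| ' '{ count[$2]++ } END { for (t in count) print t ": " count[t] }'
```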

View File

@@ -1,124 +1,44 @@
---
name: branch-cleanup
description: Remove merged and stale branches locally and optionally on remote
agent: git-assistant
---
# /branch-cleanup - Clean Merged and Stale Branches # /branch-cleanup - Clean Merged and Stale Branches
## Visual Output ## Skills
When executing this command, display the plugin header: - skills/visual-header.md
- skills/git-safety.md
``` - skills/sync-workflow.md
┌──────────────────────────────────────────────────────────────────┐ - skills/environment-variables.md
│ 🔀 GIT-FLOW · Branch Cleanup │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the workflow.
## Purpose ## Purpose
Remove branches that have been merged OR whose remote tracking branch no longer exists, both locally and optionally on remote. Remove branches that have been merged OR whose remote tracking branch no longer exists.
## Behavior ## Parameters
### Step 1: Prune Remote Refs | Parameter | Description |
|-----------|-------------|
| `--dry-run` | Preview without deleting |
| `--remote` | Also delete remote branches |
| `--stale-only` | Only delete stale branches (upstream gone) |
```bash ## Workflow
# Remove stale remote-tracking references
git fetch --prune
```
### Step 2: Identify Branches for Cleanup 1. **Display header** - Show GIT-FLOW Branch Cleanup header
2. **Prune remote refs** - `git fetch --prune`
```bash 3. **Find merged branches** - `git branch --merged <base-branch>`
# Find merged local branches 4. **Find stale branches** - `git branch -vv | grep ': gone]'`
git branch --merged <base-branch> 5. **Exclude protected** - Never delete protected branches (per git-safety.md)
6. **Present findings** - Show merged, stale, and protected lists
# Find merged remote branches 7. **Confirm deletion** - Options: all, merged only, stale only, pick, cancel
git branch -r --merged <base-branch> 8. **Execute cleanup** - Delete selected branches
9. **Report** - Show deletion summary
# Find local branches with deleted upstreams (stale)
git branch -vv | grep ': gone]'
```
### Step 3: Present Findings
```
Found branches for cleanup:
Merged (safe to delete):
- feat/login-page (merged 3 days ago)
- fix/typo-header (merged 1 week ago)
- chore/deps-update (merged 2 weeks ago)
Stale (remote deleted):
- feat/old-feature (upstream gone)
- fix/already-merged (upstream gone)
Remote (merged into base):
- origin/feat/login-page
- origin/fix/typo-header
Protected (won't delete):
- main
- development
- staging
Delete these branches?
1. Delete all (local merged + stale + remote)
2. Delete merged only (skip stale)
3. Delete stale only (upstream gone)
4. Let me pick which ones
5. Cancel
```
### Step 4: Execute Cleanup
```bash
# Delete merged local branches
git branch -d <branch-name>
# Delete stale local branches (force needed since no upstream)
git branch -D <stale-branch-name>
# Delete remote branches
git push origin --delete <branch-name>
```
### Step 5: Report
```
Cleanup complete:
Deleted local (merged): 3 branches
Deleted local (stale): 2 branches
Deleted remote: 2 branches
Skipped: 0 branches
Remaining local branches:
- main
- development
- feat/current-work (not merged, has upstream)
```
## Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `GIT_DEFAULT_BASE` | `development` | Base branch for merge detection |
| `GIT_PROTECTED_BRANCHES` | `main,master,development,staging,production` | Never delete these |
| `GIT_AUTO_DELETE_REMOTE` | `false` | Auto-delete remote branches |
| `GIT_CLEANUP_STALE` | `true` | Include stale branches (upstream gone) in cleanup |
## Safety
- Never deletes protected branches
- Warns about unmerged branches that still have upstreams
- Confirms before deleting remote branches
- Uses `-d` (safe delete) for merged branches
- Uses `-D` (force delete) only for stale branches with confirmation
- Stale branches are highlighted separately for review
## Output ## Output
On success:
``` ```
Cleaned up: Cleaned up:
Local (merged): 3 branches deleted Local (merged): 3 branches deleted

View File

@@ -1,106 +1,43 @@
---
name: branch-start
description: Create a new feature/fix/chore branch with consistent naming
agent: git-assistant
---
# /branch-start - Start New Branch # /branch-start - Start New Branch
## Visual Output ## Skills
When executing this command, display the plugin header: - skills/visual-header.md
- skills/branch-naming.md
``` - skills/git-safety.md
┌──────────────────────────────────────────────────────────────────┐ - skills/environment-variables.md
│ 🔀 GIT-FLOW · Branch Start │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the workflow.
## Purpose ## Purpose
Create a new feature/fix/chore branch with consistent naming conventions. Create a new branch with consistent naming conventions, based on the configured base branch.
## Usage ## Parameters
``` | Parameter | Description |
/branch-start [description] |-----------|-------------|
``` | `<description>` | Brief description for branch name |
| `--type` | Branch type: feat, fix, chore, docs, refactor |
| `--issue` | Issue number to include in branch name |
## Behavior ## Workflow
### Step 1: Determine Branch Type 1. **Display header** - Show GIT-FLOW Branch Start header
2. **Determine type** - Prompt for branch type if not provided
``` 3. **Get description** - Prompt for description if not provided
What type of change is this? 4. **Generate name** - Convert to kebab-case (per branch-naming.md)
1. feat - New feature 5. **Validate** - Check naming rules, truncate if needed
2. fix - Bug fix 6. **Update base** - Checkout and pull base branch
3. chore - Maintenance task 7. **Create branch** - `git checkout -b <new-branch>`
4. docs - Documentation 8. **Confirm** - Display created branch info
5. refactor - Code refactoring
```
### Step 2: Get Description
If not provided, ask:
```
Brief description (2-4 words):
> add user authentication
```
### Step 3: Generate Branch Name
Convert to kebab-case:
- `feat/add-user-authentication`
- `fix/login-timeout-error`
- `chore/update-dependencies`
### Step 4: Create Branch
```bash
# Ensure base branch is up-to-date
git checkout <base-branch>
git pull origin <base-branch>
# Create and switch to new branch
git checkout -b <new-branch>
```
### Step 5: Confirm
```
Created branch: feat/add-user-authentication
Based on: development (abc1234)
Ready to start coding!
```
## Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `GIT_DEFAULT_BASE` | `development` | Branch to create from |
| `GIT_BRANCH_PREFIX` | `true` | Use type/ prefix |
## Naming Rules
- Lowercase only
- Hyphens for spaces
- No special characters
- Max 50 characters
## Validation
```
Branch name validation:
✓ Lowercase
✓ Valid prefix (feat/)
✓ Descriptive (3+ words recommended)
✗ Too long (52 chars, max 50)
Suggested: feat/add-user-auth
Use this instead? (y/n)
```
## Output ## Output
On success:
``` ```
Branch: feat/add-user-authentication Branch: feat/add-user-authentication
Base: development @ abc1234 Base: development @ abc1234

View File

@@ -1,94 +1,49 @@
---
name: commit-merge
description: Commit current changes and merge branch into target
agent: git-assistant
---
# /commit-merge - Commit and Merge # /commit-merge - Commit and Merge
## Visual Output ## Skills
When executing this command, display the plugin header: - skills/visual-header.md
- skills/commit-conventions.md
``` - skills/merge-workflow.md
┌──────────────────────────────────────────────────────────────────┐ - skills/git-safety.md
│ 🔀 GIT-FLOW · Commit & Merge │ - skills/environment-variables.md
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the workflow.
## Purpose ## Purpose
Commit current changes, then merge the current branch into a target branch. Commit current changes, then merge the current branch into a target branch.
## Behavior ## Parameters
### Step 1: Run /commit | Parameter | Description |
|-----------|-------------|
| `--target` | Target branch (default: GIT_DEFAULT_BASE) |
| `--squash` | Squash commits on merge |
| `--no-delete` | Keep branch after merge |
Execute the standard commit workflow. ## Workflow
### Step 2: Identify Target Branch 1. **Display header** - Show GIT-FLOW Commit & Merge header
2. **Run /commit** - Execute standard commit workflow
Check environment or ask: 3. **Identify target** - Prompt for target branch if not specified
4. **Select strategy** - Merge commit, squash, or rebase (per merge-workflow.md)
``` 5. **Execute merge** - Switch to target, pull, merge, push
Merge into which branch? 6. **Handle conflicts** - Guide resolution if needed
1. development (Recommended - GIT_DEFAULT_BASE) 7. **Cleanup** - Offer to delete merged branch (per git-safety.md)
2. main 8. **Report** - Show merge summary
3. Other: ____
```
### Step 3: Merge Strategy
```
How should I merge?
1. Merge commit (preserves history)
2. Squash and merge (single commit)
3. Rebase (linear history)
```
### Step 4: Execute Merge
```bash
# Switch to target
git checkout <target>
# Pull latest
git pull origin <target>
# Merge feature branch
git merge <feature-branch> [--squash] [--no-ff]
# Push
git push origin <target>
```
### Step 5: Cleanup (Optional)
```
Merge complete. Delete the feature branch?
1. Yes, delete local and remote (Recommended)
2. Delete local only
3. Keep the branch
```
## Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `GIT_DEFAULT_BASE` | `development` | Default branch to merge into |
| `GIT_MERGE_STRATEGY` | `merge` | Default merge strategy |
| `GIT_AUTO_DELETE_MERGED` | `true` | Auto-delete merged branches |
## Safety Checks
- Verify target branch exists
- Check for uncommitted changes before switching
- Ensure merge doesn't conflict (preview first)
## Output ## Output
On success:
``` ```
Committed: abc1234 Committed: abc1234
feat(auth): add password reset functionality feat(auth): add password reset functionality
Merged feat/password-reset development Merged feat/password-reset -> development
Deleted branch: feat/password-reset Deleted branch: feat/password-reset
development is now at: def5678 development is now at: def5678

View File

@@ -1,65 +1,42 @@
---
name: commit-push
description: Create a commit and push to remote in one operation
agent: git-assistant
---
# /commit-push - Commit and Push # /commit-push - Commit and Push
## Visual Output ## Skills
When executing this command, display the plugin header: - skills/visual-header.md
- skills/commit-conventions.md
``` - skills/sync-workflow.md
┌──────────────────────────────────────────────────────────────────┐ - skills/git-safety.md
│ 🔀 GIT-FLOW · Commit & Push │ - skills/environment-variables.md
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the workflow.
## Purpose ## Purpose
Create a commit and push to the remote repository in one operation. Create a commit and push to the remote repository in one operation.
## Behavior ## Parameters
### Step 1: Run /commit | Parameter | Description |
|-----------|-------------|
| `--message`, `-m` | Override auto-generated message |
| `--force` | Force push (requires confirmation) |
Execute the standard commit workflow (see commit.md). ## Workflow
### Step 2: Push to Remote 1. **Display header** - Show GIT-FLOW Commit & Push header
2. **Run /commit** - Execute standard commit workflow
After successful commit: 3. **Check upstream** - Set up tracking if needed (`git push -u`)
4. **Push** - Push to remote
1. Check if branch has upstream tracking 5. **Handle conflicts** - Offer rebase/merge/force if push fails (per sync-workflow.md)
2. If no upstream, set it: `git push -u origin <branch>` 6. **Verify safety** - Warn before push to protected branches (per git-safety.md)
3. If upstream exists: `git push` 7. **Report** - Show push result
### Step 3: Handle Conflicts
If push fails due to diverged history:
```
Remote has changes not in your local branch.
Options:
1. Pull and rebase, then push (Recommended)
2. Pull and merge, then push
3. Force push (⚠️ destructive)
4. Cancel and review manually
```
## Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `GIT_AUTO_PUSH` | `true` | Auto-push after commit |
| `GIT_PUSH_STRATEGY` | `rebase` | How to handle diverged branches |
## Safety Checks
- **Protected branches**: Warn before pushing to main/master/production
- **Force push**: Require explicit confirmation
- **No tracking**: Ask before creating new remote branch
## Output ## Output
On success:
``` ```
Committed: abc1234 Committed: abc1234
feat(auth): add password reset functionality feat(auth): add password reset functionality

View File

@@ -1,116 +1,49 @@
---
name: commit-sync
description: Commit, push, and sync with base branch
agent: git-assistant
---
# /commit-sync - Commit, Push, and Sync # /commit-sync - Commit, Push, and Sync
## Visual Output ## Skills
When executing this command, display the plugin header: - skills/visual-header.md
- skills/commit-conventions.md
``` - skills/sync-workflow.md
┌──────────────────────────────────────────────────────────────────┐ - skills/merge-workflow.md
│ 🔀 GIT-FLOW · Commit Sync │ - skills/environment-variables.md
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the workflow.
## Purpose ## Purpose
Full sync operation: commit local changes, push to remote, sync with upstream/base branch, and clean up stale remote-tracking branches. Full sync operation: commit local changes, push to remote, sync with upstream/base branch, and detect stale branches.
## Behavior ## Parameters
### Step 1: Run /commit | Parameter | Description |
|-----------|-------------|
| `--base` | Override default base branch |
| `--no-rebase` | Use merge instead of rebase |
Execute the standard commit workflow. ## Workflow
### Step 2: Push to Remote 1. **Display header** - Show GIT-FLOW Commit Sync header
2. **Run /commit** - Execute standard commit workflow
Push committed changes to remote branch. 3. **Push to remote** - Push committed changes
4. **Fetch with prune** - `git fetch --all --prune`
### Step 3: Sync with Base 5. **Sync with base** - Rebase on base branch (per sync-workflow.md)
6. **Handle conflicts** - Guide resolution if conflicts occur (per merge-workflow.md)
Pull latest from base branch and rebase/merge: 7. **Push again** - `git push --force-with-lease` if rebased
8. **Detect stale** - Report stale local branches
```bash 9. **Report status** - Show sync summary
# Fetch all with prune (removes stale remote-tracking refs)
git fetch --all --prune
# Rebase on base branch
git rebase origin/<base-branch>
# Push again (if rebased)
git push --force-with-lease
```
### Step 4: Detect Stale Local Branches
Check for local branches tracking deleted remotes:
```bash
# Find local branches with gone upstreams
git branch -vv | grep ': gone]'
```
If stale branches found, report them:
```
Stale local branches (remote deleted):
- feat/old-feature (was tracking origin/feat/old-feature)
- fix/merged-bugfix (was tracking origin/fix/merged-bugfix)
Run /branch-cleanup to remove these branches.
```
### Step 5: Report Status
```
Sync complete:
Local: feat/password-reset @ abc1234
Remote: origin/feat/password-reset @ abc1234
Base: development @ xyz7890 (synced)
Your branch is up-to-date with development.
No conflicts detected.
Cleanup:
Remote refs pruned: 2
Stale local branches: 2 (run /branch-cleanup to remove)
```
## Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `GIT_DEFAULT_BASE` | `development` | Branch to sync with |
| `GIT_SYNC_STRATEGY` | `rebase` | How to incorporate upstream changes |
| `GIT_AUTO_PRUNE` | `true` | Auto-prune stale remote refs on sync |
## Conflict Handling
If conflicts occur during rebase:
```
Conflicts detected while syncing with development.
Conflicting files:
- src/auth/login.ts
- src/auth/types.ts
Options:
1. Open conflict resolution (I'll guide you)
2. Abort sync (keep local state)
3. Accept all theirs (⚠️ loses your changes in conflicts)
4. Accept all ours (⚠️ ignores upstream in conflicts)
```
## Output ## Output
On success:
``` ```
Committed: abc1234 Committed: abc1234
Pushed to: origin/feat/password-reset Pushed to: origin/feat/password-reset
Synced with: development (xyz7890) Synced with: development (xyz7890)
Status: Clean, up-to-date Status: Clean, up-to-date
Stale branches: None (or N found - run /branch-cleanup) Stale branches: 2 found - run /branch-cleanup
``` ```

View File

@@ -1,158 +1,41 @@
---
name: commit
description: Create a git commit with auto-generated conventional commit message
agent: git-assistant
---
# /commit - Smart Commit # /commit - Smart Commit
## Visual Output ## Skills
When executing this command, display the plugin header: - skills/visual-header.md
- skills/git-safety.md
``` - skills/commit-conventions.md
┌──────────────────────────────────────────────────────────────────┐ - skills/environment-variables.md
│ 🔀 GIT-FLOW · Smart Commit │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the commit workflow.
## Purpose ## Purpose
Create a git commit with an auto-generated conventional commit message based on staged changes. Create a git commit with an auto-generated conventional commit message based on staged changes.
## Behavior ## Parameters
### Step 1: Check for Protected Branch | Parameter | Description |
|-----------|-------------|
| `--message`, `-m` | Override auto-generated message |
| `--all`, `-a` | Stage all changes before commit |
Before any commit operation, check if the current branch is protected: ## Workflow
1. Get current branch: `git branch --show-current` 1. **Display header** - Show GIT-FLOW Smart Commit header
2. Check against `GIT_PROTECTED_BRANCHES` (default: `main,master,development,staging,production`) 2. **Check protected branch** - Warn if on protected branch (per git-safety.md)
3. **Analyze changes** - Run `git status` and `git diff --staged`
If on a protected branch, warn the user: 4. **Handle unstaged** - Prompt to stage if nothing staged
5. **Generate message** - Create conventional commit message (per commit-conventions.md)
``` 6. **Confirm or edit** - Present message with options to use, edit, regenerate, or cancel
⚠️ You are on a protected branch: development 7. **Execute commit** - Run `git commit` with message and co-author footer
Protected branches typically have push restrictions that will prevent
direct commits from being pushed to the remote.
Options:
1. Create a feature branch and continue (Recommended)
2. Continue on this branch anyway (may fail on push)
3. Cancel
```
**If option 1 (create feature branch):**
- Prompt for branch type (feat/fix/chore/docs/refactor)
- Prompt for brief description
- Create branch using `/branch-start` naming conventions
- Continue with commit on the new branch
**If option 2 (continue anyway):**
- Proceed with commit (user accepts risk of push rejection)
- Display reminder: "Remember: push may be rejected by remote protection rules"
### Step 2: Analyze Changes
1. Run `git status` to see staged and unstaged changes
2. Run `git diff --staged` to examine staged changes
3. If nothing staged, prompt user to stage changes
### Step 3: Generate Commit Message
Analyze the changes and generate a conventional commit message:
```
<type>(<scope>): <description>
[optional body]
[optional footer]
```
**Types:**
- `feat`: New feature
- `fix`: Bug fix
- `docs`: Documentation only
- `style`: Formatting, missing semicolons, etc.
- `refactor`: Code change that neither fixes a bug nor adds a feature
- `perf`: Performance improvement
- `test`: Adding/updating tests
- `chore`: Maintenance tasks
- `build`: Build system or external dependencies
- `ci`: CI configuration
**Scope:** Determined from changed files (e.g., `auth`, `api`, `ui`)
### Step 4: Confirm or Edit
Present the generated message:
```
Proposed commit message:
───────────────────────
feat(auth): add password reset functionality
Implement forgot password flow with email verification.
Includes rate limiting and token expiration.
───────────────────────
Options:
1. Use this message (Recommended)
2. Edit the message
3. Regenerate with different focus
4. Cancel
```
### Step 5: Execute Commit
If confirmed, run:
```bash
git commit -m "$(cat <<'EOF'
<message>
Co-Authored-By: Claude <noreply@anthropic.com>
EOF
)"
```
## Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `GIT_PROTECTED_BRANCHES` | `main,master,development,staging,production` | Branches that trigger protection warning |
| `GIT_COMMIT_STYLE` | `conventional` | Message style (conventional, simple, detailed) |
| `GIT_SIGN_COMMITS` | `false` | Use GPG signing |
| `GIT_CO_AUTHOR` | `true` | Include Claude co-author footer |
## Edge Cases
### No Changes Staged
```
No changes staged for commit.
Would you like to:
1. Stage all changes (`git add -A`)
2. Stage specific files (I'll help you choose)
3. Cancel
```
### Untracked Files
```
Found 3 untracked files:
- src/new-feature.ts
- tests/new-feature.test.ts
- docs/new-feature.md
Include these in the commit?
1. Yes, stage all (Recommended)
2. Let me pick which ones
3. No, commit only tracked files
```
## Output ## Output
On success:
``` ```
Committed: abc1234 Committed: abc1234
feat(auth): add password reset functionality feat(auth): add password reset functionality

View File

@@ -1,106 +1,50 @@
---
name: git-config
description: Configure git-flow settings for the current project
agent: git-assistant
---
# /git-config - Configure git-flow # /git-config - Configure git-flow
## Visual Output ## Skills
When executing this command, display the plugin header: - skills/visual-header.md
- skills/environment-variables.md
``` - skills/workflow-patterns/branching-strategies.md
┌──────────────────────────────────────────────────────────────────┐
│ 🔀 GIT-FLOW · Configuration │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the configuration.
## Purpose ## Purpose
Configure git-flow settings for the current project. Configure git-flow settings interactively or display current configuration.
## Behavior ## Parameters
### Interactive Configuration | Parameter | Description |
|-----------|-------------|
| `--show` | Display current settings without prompting |
| `--reset` | Reset all settings to defaults |
| `<key>=<value>` | Set specific variable directly |
``` ## Workflow
git-flow Configuration
═══════════════════════════════════════════
Current settings: 1. **Display header** - Show GIT-FLOW Configuration header
GIT_WORKFLOW_STYLE: feature-branch 2. **Load current settings** - Read from project and user config
GIT_DEFAULT_BASE: development 3. **Present menu** - Show configuration options
GIT_AUTO_DELETE_MERGED: true 4. **Handle selection** - Configure workflow style, base branch, protected branches, etc.
GIT_AUTO_PUSH: false 5. **Save settings** - Write to `.env` or `.claude/settings.json`
6. **Confirm** - Display saved configuration
What would you like to configure? ## Configuration Menu
1. Workflow style
1. Workflow style (simple, feature-branch, pr-required, trunk-based)
2. Default base branch 2. Default base branch
3. Auto-delete merged branches 3. Auto-delete merged branches
4. Auto-push after commit 4. Auto-push after commit
5. Protected branches 5. Protected branches
6. View all settings 6. View all settings
7. Reset to defaults 7. Reset to defaults
```
### Setting: Workflow Style
```
Choose your workflow style:
1. simple
- Direct commits to development
- No feature branches required
- Good for solo projects
2. feature-branch (Recommended)
- Feature branches from development
- Merge when complete
- Good for small teams
3. pr-required
- Feature branches from development
- Requires PR for merge
- Good for code review workflows
4. trunk-based
- Short-lived branches
- Frequent integration
- Good for CI/CD heavy workflows
```
### Setting: Protected Branches
```
Protected branches (comma-separated):
Current: main, master, development, staging, production
These branches will:
- Never be auto-deleted
- Require confirmation before direct commits
- Warn before force push
```
## Environment Variables
| Variable | Default | Options |
|----------|---------|---------|
| `GIT_WORKFLOW_STYLE` | `feature-branch` | simple, feature-branch, pr-required, trunk-based |
| `GIT_DEFAULT_BASE` | `development` | Any branch name |
| `GIT_AUTO_DELETE_MERGED` | `true` | true, false |
| `GIT_AUTO_PUSH` | `false` | true, false |
| `GIT_PROTECTED_BRANCHES` | `main,master,development,staging,production` | Comma-separated |
| `GIT_COMMIT_STYLE` | `conventional` | conventional, simple, detailed |
| `GIT_CO_AUTHOR` | `true` | true, false |
## Storage
Settings are stored in:
- Project: `.env` or `.claude/settings.json`
- User: `~/.config/claude/git-flow.env`
Project settings override user settings.
## Output ## Output
After configuration:
``` ```
Configuration saved! Configuration saved!

View File

@@ -1,84 +1,56 @@
---
name: git-status
description: Show comprehensive git status with recommendations
agent: git-assistant
---
# /git-status - Enhanced Status # /git-status - Enhanced Status
## Visual Output ## Skills
When executing this command, display the plugin header: - skills/visual-header.md
- skills/commit-conventions.md
``` - skills/environment-variables.md
┌──────────────────────────────────────────────────────────────────┐
│ 🔀 GIT-FLOW · Status │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the status display.
## Purpose ## Purpose
Show comprehensive git status with recommendations and insights. Show comprehensive git status with recommendations and insights beyond standard `git status`.
## Behavior ## Parameters
### Output Format | Parameter | Description |
|-----------|-------------|
| `--short` | Compact output format |
## Workflow
1. **Display header** - Show GIT-FLOW Status header
2. **Gather info** - Branch, base comparison, remote status
3. **Categorize changes** - Staged, unstaged, untracked, deleted, renamed
4. **Generate recommendations** - What to stage, commit, sync
5. **Show quick actions** - Relevant /commands for current state
## Output Format
``` ```
═══════════════════════════════════════════
Git Status: <repo-name> Git Status: <repo-name>
═══════════════════════════════════════════
Branch: feat/password-reset Branch: feat/password-reset
Base: development (3 commits ahead, 0 behind) Base: development (3 commits ahead, 0 behind)
Remote: origin/feat/password-reset (synced) Remote: origin/feat/password-reset (synced)
─── Changes ─────────────────────────────── --- Changes ---
Staged (ready to commit): Staged (ready to commit):
src/auth/reset.ts (modified) [x] src/auth/reset.ts (modified)
✓ src/auth/types.ts (modified)
Unstaged: Unstaged:
tests/auth.test.ts (modified) [ ] tests/auth.test.ts (modified)
• src/utils/email.ts (new file, untracked)
─── Recommendations ───────────────────────
--- Recommendations ---
1. Stage test file: git add tests/auth.test.ts 1. Stage test file: git add tests/auth.test.ts
2. Consider adding new file: git add src/utils/email.ts 2. Ready to commit with 1 staged file
3. Ready to commit with 2 staged files
─── Quick Actions ───────────────────────── --- Quick Actions ---
/commit - Commit staged changes
/commit - Commit staged changes /commit-push - Commit and push
• /commit-push - Commit and push
• /commit-sync - Full sync with development
═══════════════════════════════════════════
``` ```
## Analysis Provided
### Branch Health
- Commits ahead/behind base branch
- Sync status with remote
- Age of branch
### Change Categories
- Staged (ready to commit)
- Modified (not staged)
- Untracked (new files)
- Deleted
- Renamed
### Recommendations
- What to stage
- What to ignore
- When to commit
- When to sync
### Warnings
- Large number of changes (consider splitting)
- Old branch (consider rebasing)
- Conflicts with upstream
## Output
Always produces the formatted status report with context-aware recommendations.

View File

@@ -0,0 +1,97 @@
# Branch Naming
## Purpose
Defines branch naming conventions and validation rules for consistent repository organization.
## When to Use
- Creating new branches with `/branch-start`
- Validating branch names
- Converting descriptions to branch names
## Branch Name Format
```
<type>/<description>
```
## Branch Types
| Type | Purpose | Example |
|------|---------|---------|
| `feat` | New feature | `feat/user-authentication` |
| `fix` | Bug fix | `fix/login-timeout` |
| `chore` | Maintenance | `chore/update-deps` |
| `docs` | Documentation | `docs/api-reference` |
| `refactor` | Code restructure | `refactor/auth-module` |
| `test` | Test additions | `test/auth-coverage` |
| `perf` | Performance | `perf/query-optimization` |
| `debug` | Debugging work | `debug/memory-leak` |
## Naming Rules
1. **Lowercase only** - Never use uppercase
2. **Hyphens for spaces** - Use `-` not `_` or ` `
3. **No special characters** - Alphanumeric and hyphens only
4. **Descriptive** - 2-4 words recommended
5. **Max 50 characters** - Keep concise
## Conversion Algorithm
```
Input: "Add User Authentication"
Output: "feat/add-user-authentication"
Steps:
1. Lowercase: "add user authentication"
2. Replace spaces: "add-user-authentication"
3. Remove special chars: (none to remove)
4. Add prefix: "feat/add-user-authentication"
5. Truncate if > 50: (not needed)
```
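The algorithm in POSIX shell (the input is illustrative):
```bash
type="feat" desc="Add User Authentication"
# Lowercase, strip special chars, hyphenate spaces, collapse hyphen runs
slug=$(printf '%s' "$desc" | tr '[:upper:]' '[:lower:]' \
  | sed -e 's/[^a-z0-9 ]//g' -e 's/  */-/g' -e 's/--*/-/g')
printf '%.50s\n' "$type/$slug"   # feat/add-user-authentication
```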
## Validation Checks
```
Branch name validation:
[x] Lowercase
[x] Valid prefix (feat/)
[x] Descriptive (3+ words recommended)
[ ] Too long (52 chars, max 50)
Suggested: feat/add-user-auth
```
## Examples
**Valid:**
```
feat/add-password-reset
fix/null-pointer-login
chore/upgrade-typescript-5
docs/update-readme
refactor/simplify-auth
```
**Invalid:**
```
Feature/Add_Password_Reset (wrong case, underscores)
fix-bug (too vague, no prefix)
my-branch (no type prefix)
feat/add-new-super-amazing-feature-for-users (too long)
```
## Issue-Linked Branches
When working on issues, include issue number:
```
feat/123-add-password-reset
fix/456-login-timeout
```
## Related Skills
- skills/commit-conventions.md
- skills/git-safety.md
- skills/workflow-patterns/branching-strategies.md

View File

@@ -0,0 +1,95 @@
# Commit Conventions
## Purpose
Defines conventional commit message format for consistent, parseable commit history.
## When to Use
- Generating commit messages in `/commit`
- Validating user-provided commit messages
- Explaining commit format to users
## Message Format
```
<type>(<scope>): <description>
[optional body]
[optional footer]
```
## Commit Types
| Type | Purpose | Example Scope |
|------|---------|---------------|
| `feat` | New feature | `auth`, `api`, `ui` |
| `fix` | Bug fix | `login`, `validation` |
| `docs` | Documentation only | `readme`, `api-docs` |
| `style` | Formatting, whitespace | `lint`, `format` |
| `refactor` | Code change (no bug fix, no feature) | `auth-module` |
| `perf` | Performance improvement | `query`, `cache` |
| `test` | Adding/updating tests | `unit`, `e2e` |
| `chore` | Maintenance tasks | `deps`, `build` |
| `build` | Build system or dependencies | `webpack`, `npm` |
| `ci` | CI configuration | `github-actions` |
## Scope Detection
Derive scope from changed files:
- `src/auth/*` -> `auth`
- `src/api/*` -> `api`
- `tests/*` -> `test`
- `docs/*` -> `docs`
- Multiple directories -> most significant or omit
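A sketch of the heuristic for staged files (assumes code lives under `src/`):
```bash
# Most common top-level src/ directory among staged files becomes the scope
git diff --staged --name-only \
  | sed -n 's|^src/\([^/]*\)/.*|\1|p' \
  | sort | uniq -c | sort -rn | awk 'NR==1 { print $2 }'
```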
## Examples
**Feature commit:**
```
feat(auth): add password reset flow
Implement forgot password with email verification.
Includes rate limiting (5 attempts/hour) and 24h token expiration.
Closes #123
```
**Bug fix:**
```
fix(ui): resolve button alignment on mobile
The submit button was misaligned on screens < 768px.
Added responsive flex rules.
```
**Maintenance:**
```
chore(deps): update dependencies
- typescript 5.3 -> 5.4
- react 18.2 -> 18.3
- node 18 -> 20 (LTS)
```
## Footer Conventions
| Footer | Purpose |
|--------|---------|
| `Closes #123` | Auto-close issue |
| `Refs #123` | Reference without closing |
| `BREAKING CHANGE:` | Breaking change description |
| `Co-Authored-By:` | Credit co-author |
## Co-Author Footer
When Claude assists with commits:
```
Co-Authored-By: Claude <noreply@anthropic.com>
```
## Related Skills
- skills/branch-naming.md
- skills/git-safety.md

View File

@@ -0,0 +1,92 @@
# Environment Variables
## Purpose
Centralized reference for all git-flow environment variables and their defaults.
## When to Use
- Configuring git-flow behavior in `/git-config`
- Documenting available options to users
- Setting up project-specific overrides
## Core Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `GIT_DEFAULT_BASE` | `development` | Base branch for new branches and merges |
| `GIT_PROTECTED_BRANCHES` | `main,master,development,staging,production` | Comma-separated list of protected branches |
| `GIT_WORKFLOW_STYLE` | `feature-branch` | Workflow: simple, feature-branch, pr-required, trunk-based |
## Commit Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `GIT_COMMIT_STYLE` | `conventional` | Message style: conventional, simple, detailed |
| `GIT_SIGN_COMMITS` | `false` | Use GPG signing |
| `GIT_CO_AUTHOR` | `true` | Include Claude co-author footer |
## Push/Sync Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `GIT_AUTO_PUSH` | `false` | Auto-push after commit |
| `GIT_PUSH_STRATEGY` | `rebase` | Handle diverged branches: rebase, merge |
| `GIT_SYNC_STRATEGY` | `rebase` | Incorporate upstream changes: rebase, merge |
| `GIT_AUTO_PRUNE` | `true` | Auto-prune stale remote refs on sync |
## Branch Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `GIT_BRANCH_PREFIX` | `true` | Use type/ prefix for branches |
| `GIT_AUTO_DELETE_MERGED` | `true` | Auto-delete merged branches |
| `GIT_AUTO_DELETE_REMOTE` | `false` | Auto-delete remote branches |
| `GIT_CLEANUP_STALE` | `true` | Include stale branches in cleanup |
## Workflow Styles
### simple
- Direct commits to main/development
- No feature branches required
- Best for: Solo projects, small scripts
### feature-branch (Default)
- Feature branches from development
- Merge when complete
- Best for: Small teams
### pr-required
- Feature branches from development
- Requires PR for merge
- Best for: Code review workflows
### trunk-based
- Short-lived branches (< 1 day)
- Frequent integration
- Best for: CI/CD heavy workflows
## Storage Locations
| Scope | Location | Priority |
|-------|----------|----------|
| Project | `.env` or `.claude/settings.json` | Highest |
| User | `~/.config/claude/git-flow.env` | Lower |
Project settings override user settings.
## Example Configuration
**.env file:**
```bash
GIT_DEFAULT_BASE=main
GIT_WORKFLOW_STYLE=pr-required
GIT_AUTO_DELETE_MERGED=true
GIT_COMMIT_STYLE=conventional
GIT_PROTECTED_BRANCHES=main,staging,production
```
## Related Skills
- skills/git-safety.md
- skills/commit-conventions.md

View File

@@ -0,0 +1,105 @@
# Git Safety
## Purpose
Defines protected branches, destructive command warnings, and safety checks to prevent accidental data loss.
## When to Use
- Before any commit, push, or merge operation
- When user attempts to work on protected branches
- Before executing destructive commands
## Protected Branches
Default protected branches (configurable via `GIT_PROTECTED_BRANCHES`):
- `main`
- `master`
- `development`
- `staging`
- `production`
## Protection Rules
| Action | Behavior |
|--------|----------|
| Direct commit | Warn and offer to create feature branch |
| Force push | Require explicit confirmation |
| Deletion | Block completely |
| Merge into | Allow with standard workflow |
## Protected Branch Warning
When committing on protected branch:
```
You are on a protected branch: development
Protected branches typically have push restrictions that will prevent
direct commits from being pushed to the remote.
Options:
1. Create a feature branch and continue (Recommended)
2. Continue on this branch anyway (may fail on push)
3. Cancel
```
## Destructive Commands
Commands requiring extra confirmation:
| Command | Risk | Mitigation |
|---------|------|------------|
| `git push --force` | Overwrites remote history | Use `--force-with-lease` |
| `git reset --hard` | Loses uncommitted changes | Warn about unsaved work |
| `git branch -D` | Deletes unmerged branch | Confirm branch name |
| `git clean -fd` | Deletes untracked files | List files first |
## Safe Alternatives
| Risky | Safe Alternative |
|-------|------------------|
| `git push --force` | `git push --force-with-lease` |
| `git branch -D` | `git branch -d` (merged only) |
| `git reset --hard` | `git stash` first |
| `git checkout .` | Review changes first |
## Branch Deletion Safety
**Merged branches (`-d`):**
```bash
git branch -d feat/old-feature # Safe: only deletes if merged
```
**Unmerged branches (`-D`):**
```bash
# Requires confirmation
git branch -D feat/abandoned # Force: deletes regardless
```
## Push Rejection Handling
When push fails on protected branch:
```
Push rejected: Remote protection rules prevent direct push to development.
Options:
1. Create a pull request instead (Recommended)
2. Review branch protection settings
3. Cancel
```
## Stale Branch Detection
Branches with deleted remotes:
```bash
git branch -vv | grep ': gone]'
```
These are safe to delete locally with `-D` after confirmation.
## Related Skills
- skills/branch-naming.md
- skills/merge-workflow.md

View File

@@ -0,0 +1,124 @@
# Merge Workflow
## Purpose
Defines merge strategies, conflict resolution approaches, and post-merge cleanup procedures.
## When to Use
- Merging feature branches in `/commit-merge`
- Resolving conflicts during sync operations
- Cleaning up after successful merges
## Merge Strategies
### 1. Merge Commit (Default)
```bash
git merge <branch> --no-ff
```
- **Preserves history** - All commits visible
- **Creates merge commit** - Clear merge point
- **Best for:** Feature branches, team workflows
### 2. Squash and Merge
```bash
git merge <branch> --squash
git commit -m "feat: complete feature"
```
- **Single commit** - Clean main branch history
- **Loses individual commits** - Combined into one
- **Best for:** PR workflows, many small commits
### 3. Rebase
```bash
git checkout <branch>
git rebase <target>
git checkout <target>
git merge <branch> --ff-only
```
- **Linear history** - No merge commits
- **Rewrites history** - Changes commit hashes
- **Best for:** Personal branches, clean history
## Standard Merge Procedure
```bash
# 1. Switch to target branch
git checkout <target>
# 2. Pull latest changes
git pull origin <target>
# 3. Merge feature branch
git merge <feature-branch> [--squash] [--no-ff]
# 4. Push merged result
git push origin <target>
```
## Conflict Resolution
When conflicts occur:
```
Conflicts detected while merging.
Conflicting files:
- src/auth/login.ts
- src/auth/types.ts
Options:
1. Open conflict resolution (guided)
2. Abort merge (keep local state)
3. Accept all theirs (loses your changes in conflicts)
4. Accept all ours (ignores upstream in conflicts)
```
### Manual Resolution Steps
1. Open conflicting file
2. Find conflict markers: `<<<<<<<`, `=======`, `>>>>>>>`
3. Edit to desired state
4. Remove markers
5. Stage resolved file: `git add <file>`
6. Continue: `git merge --continue` or `git rebase --continue`
## Post-Merge Cleanup
After successful merge:
```
Merge complete. Delete the feature branch?
1. Yes, delete local and remote (Recommended)
2. Delete local only
3. Keep the branch
```
### Cleanup Commands
```bash
# Delete local branch
git branch -d <branch>
# Delete remote branch
git push origin --delete <branch>
```
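Option 1 maps onto both commands; chaining them means the remote delete only runs if the merged-only local delete succeeds (branch name illustrative):
```bash
branch="feat/password-reset"
git branch -d "$branch" && git push origin --delete "$branch"
```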
## Pre-Merge Checks
Before merging:
1. **Verify target exists** - `git branch -a | grep <target>`
2. **Check for uncommitted changes** - `git status`
3. **Preview conflicts** - `git merge --no-commit --no-ff <branch>`
4. **Abort preview** - `git merge --abort`
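Checks 2-4 can be combined into a single pre-flight pass; a sketch, assuming `$branch` holds the branch to merge:
```bash
# Abort if the working tree has uncommitted changes
if ! git diff-index --quiet HEAD --; then
  echo "Uncommitted changes present - commit or stash first"
  exit 1
fi
# Trial-merge to surface conflicts without committing
if git merge --no-commit --no-ff "$branch"; then
  echo "No conflicts expected"
else
  echo "Conflicting files:"
  git diff --name-only --diff-filter=U
fi
# Undo the trial merge either way
git merge --abort 2>/dev/null || true
```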
## Related Skills
- skills/git-safety.md
- skills/sync-workflow.md

View File

@@ -0,0 +1,146 @@
# Sync Workflow
## Purpose
Defines push/pull patterns, rebase strategies, upstream tracking, and stale branch detection.
## When to Use
- Pushing commits in `/commit-push`
- Full sync operations in `/commit-sync`
- Detecting and reporting stale branches
## Push Workflow
### First Push (No Upstream)
```bash
git push -u origin <branch>
```
Sets upstream tracking for future pushes.
### Subsequent Pushes
```bash
git push
```
Pushes to tracked upstream.
### Push After Rebase
```bash
git push --force-with-lease
```
Safer force push - rejected if the remote gained commits since your last fetch, so you cannot silently overwrite someone else's work.
## Sync with Base Branch
```bash
# 1. Fetch all with prune
git fetch --all --prune
# 2. Rebase on base branch
git rebase origin/<base-branch>
# 3. Push (force if rebased)
git push --force-with-lease
```
## Push Conflict Handling
When push fails due to diverged history:
```
Remote has changes not in your local branch.
Options:
1. Pull and rebase, then push (Recommended)
2. Pull and merge, then push
3. Force push (destructive - requires confirmation)
4. Cancel and review manually
```
### Rebase Resolution
```bash
git pull --rebase origin <branch>
git push
```
### Merge Resolution
```bash
git pull origin <branch>
git push
```
## Stale Branch Detection
Find local branches tracking deleted remotes:
```bash
git branch -vv | grep ': gone]'
```
### Report Format
```
Stale local branches (remote deleted):
- feat/old-feature (was tracking origin/feat/old-feature)
- fix/merged-bugfix (was tracking origin/fix/merged-bugfix)
Run /branch-cleanup to remove these branches.
```
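A sketch that produces this report from the `gone]` listing; it assumes the upstream name mirrors the local branch name, as in the example above:
```bash
echo "Stale local branches (remote deleted):"
git branch -vv | awk '$1 != "*" && /: gone]/ {
  printf "  - %s (was tracking origin/%s)\n", $1, $1
}'
```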
## Remote Pruning
Remove stale remote-tracking references:
```bash
git fetch --prune
```
Or fetch all remotes:
```bash
git fetch --all --prune
```
## Sync Status Report
```
Sync complete:
Local: feat/password-reset @ abc1234
Remote: origin/feat/password-reset @ abc1234
Base: development @ xyz7890 (synced)
Your branch is up-to-date with development.
No conflicts detected.
Cleanup:
Remote refs pruned: 2
Stale local branches: 2 (run /branch-cleanup to remove)
```
## Tracking Setup
Check tracking status:
```bash
git branch -vv
```
Set upstream:
```bash
git branch --set-upstream-to=origin/<branch> <branch>
```
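A script can combine both push variants by testing for an upstream ref first; a minimal sketch:
```bash
# Push, setting upstream only when none exists yet
if git rev-parse --abbrev-ref '@{u}' >/dev/null 2>&1; then
  git push
else
  git push -u origin "$(git branch --show-current)"
fi
```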
## Related Skills
- skills/git-safety.md
- skills/merge-workflow.md

View File

@@ -0,0 +1,84 @@
# Visual Header
## Purpose
Standard header format for consistent visual output across all git-flow commands.
## When to Use
- At the start of every git-flow command execution
- Displaying command identity to user
## Header Format
```
+----------------------------------------------------------------------+
| GIT-FLOW [Command Name] |
+----------------------------------------------------------------------+
```
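A command implementation could render this with a small helper; a sketch (the helper name and 70-column width are assumptions):
```bash
print_header() {
  local width=70
  local border
  border=$(printf '%.0s-' $(seq 1 "$width"))
  printf '+%s+\n' "$border"
  printf '| %-*s |\n' $((width - 2)) "GIT-FLOW  $1"
  printf '+%s+\n' "$border"
}
# Usage: print_header "Smart Commit"
```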
## Command Headers
### /commit
```
+----------------------------------------------------------------------+
| GIT-FLOW Smart Commit |
+----------------------------------------------------------------------+
```
### /commit-push
```
+----------------------------------------------------------------------+
| GIT-FLOW Commit & Push |
+----------------------------------------------------------------------+
```
### /commit-sync
```
+----------------------------------------------------------------------+
| GIT-FLOW Commit Sync |
+----------------------------------------------------------------------+
```
### /commit-merge
```
+----------------------------------------------------------------------+
| GIT-FLOW Commit & Merge |
+----------------------------------------------------------------------+
```
### /branch-start
```
+----------------------------------------------------------------------+
| GIT-FLOW Branch Start |
+----------------------------------------------------------------------+
```
### /branch-cleanup
```
+----------------------------------------------------------------------+
| GIT-FLOW Branch Cleanup |
+----------------------------------------------------------------------+
```
### /git-status
```
+----------------------------------------------------------------------+
| GIT-FLOW Status |
+----------------------------------------------------------------------+
```
### /git-config
```
+----------------------------------------------------------------------+
| GIT-FLOW Configuration |
+----------------------------------------------------------------------+
```
## Usage
Display header immediately after command invocation, before any workflow steps.
## Related Skills
- None (standalone utility)

View File

@@ -1,285 +1,45 @@
**Before (original command, 285 lines):**
---
description: Interactive setup wizard for pr-review plugin - configures Gitea MCP and project settings
---
# PR Review Setup Wizard
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 🔍 PR-REVIEW · Setup Wizard │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the setup.
This command sets up the pr-review plugin. It shares the Gitea MCP server with projman, so if you've already run `/initial-setup` for projman, most of the work is done.
## Important Context
- **This command uses Bash, Read, Write, and AskUserQuestion tools** - NOT MCP tools
- **MCP tools won't work until after setup + session restart**
- **Shares Gitea MCP server with projman plugin**
---
## Phase 1: Check Existing Setup
### Step 1.1: Check if Gitea MCP is Already Configured
First, check if the system configuration already exists (from projman or previous setup):
```bash
cat ~/.config/claude/gitea.env 2>/dev/null || echo "FILE_NOT_FOUND"
```
**If file exists with valid values (no placeholders):**
- Skip to Phase 3 (Project Configuration)
- Inform user: "Gitea configuration found. Skipping system setup."
**If file doesn't exist or has placeholders:**
- Continue to Phase 2
### Step 1.2: Check if projman is Installed
Check if projman plugin exists (they share MCP server):
```bash
find ~/.claude ~/.config/claude -name "projman" -type d 2>/dev/null | head -1
```
**If projman exists:**
- Suggest: "The projman plugin is installed and shares the same Gitea MCP server. Consider running `/initial-setup` from projman for the full setup wizard."
Use AskUserQuestion:
- Question: "How would you like to proceed with setup?"
- Header: "Setup"
- Options:
- "Continue with pr-review setup (Recommended if not using projman)"
- "I'll use projman's /initial-setup instead"
**If user chooses projman setup:** End here with instructions to run projman's setup.
---
## Phase 2: System Setup (if needed)
This is a condensed version focusing on what pr-review needs.
### Step 2.1: Python and MCP Server
Check Python version:
```bash
python3 --version
```
If below 3.10, stop and inform user.
Locate and set up the MCP server:
```bash
find ~/.claude ~/.config/claude -name "mcp_server" -path "*gitea*" 2>/dev/null | head -5
```
If venv doesn't exist, create it:
```bash
cd /path/to/mcp-servers/gitea && python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip && pip install -r requirements.txt && deactivate
```
### Step 2.2: Gitea Configuration
Create config directory:
```bash
mkdir -p ~/.config/claude
```
Use AskUserQuestion:
- Question: "What is your Gitea server URL?"
- Header: "Gitea URL"
- Options:
- "https://gitea.hotserv.cloud"
- "Other (I'll provide the URL)"
Create configuration file (credentials only, org is per-project):
```bash
cat > ~/.config/claude/gitea.env << 'EOF'
# Gitea API Configuration
# Generated by pr-review /initial-setup
# Note: GITEA_ORG is configured per-project in .env
GITEA_API_URL=<USER_PROVIDED_URL>
GITEA_API_TOKEN=PASTE_YOUR_TOKEN_HERE
EOF
chmod 600 ~/.config/claude/gitea.env
```
### Step 2.3: Token Instructions
Display these instructions:
---
**Action Required: Add Your Gitea API Token**
I've created `~/.config/claude/gitea.env` but you need to add your API token manually.
**Steps:**
1. Open: `nano ~/.config/claude/gitea.env`
2. Generate token in Gitea: Settings → Applications → Generate New Token
- Permissions needed: `repo`, `read:org`, `read:user`
3. Replace `PASTE_YOUR_TOKEN_HERE` with your token
4. Save the file
---
Use AskUserQuestion:
- Question: "Have you added your Gitea token?"
- Header: "Token"
- Options:
- "Yes, I've added the token"
- "Skip for now"
---
## Phase 3: Project Configuration
### Step 3.1: Check Current Directory
```bash
pwd && git rev-parse --show-toplevel 2>/dev/null || echo "NOT_A_GIT_REPO"
```
### Step 3.2: Check Existing Project Config
```bash
cat .env 2>/dev/null | grep GITEA_REPO || echo "NOT_FOUND"
```
If `GITEA_REPO` is already set, skip to Phase 4.
### Step 3.3: Detect Organization and Repository
Extract organization:
```bash
git remote get-url origin 2>/dev/null | sed 's/.*[:/]\([^/]*\)\/[^/]*$/\1/'
```
Extract repository:
```bash
git remote get-url origin 2>/dev/null | sed 's/.*[:/]\([^/]*\)\.git$/\1/' | sed 's/.*\/\([^/]*\)$/\1/'
```
### Step 3.4: Validate Repository via Gitea API
```bash
source ~/.config/claude/gitea.env
curl -s -o /dev/null -w "%{http_code}" -H "Authorization: token $GITEA_API_TOKEN" "$GITEA_API_URL/repos/<detected-org>/<detected-repo>"
```
| HTTP Code | Action |
|-----------|--------|
| **200** | Auto-fill - "Verified: <org>/<repo> exists" - skip to Step 3.7 |
| **404** | Not found - proceed to Step 3.5 |
| **401/403** | Permission issue - warn, proceed to Step 3.5 |
### Step 3.5: Confirm Organization (only if validation failed)
Use AskUserQuestion:
- Question: "Repository not found. Is '<detected-org>' the correct organization?"
- Header: "Organization"
- Options:
- "Yes, that's correct"
- "No, let me specify"
### Step 3.6: Confirm Repository (only if validation failed)
Use AskUserQuestion:
- Question: "Is '<detected-repo-name>' the correct repository?"
- Header: "Repository"
- Options:
- "Yes, that's correct"
- "No, let me specify"
**After corrections, re-validate via API (Step 3.4).**
### Step 3.7: Create/Update Project Config
If `.env` exists, append:
```bash
echo "GITEA_ORG=<ORG_NAME>" >> .env
echo "GITEA_REPO=<REPO_NAME>" >> .env
```
If `.env` doesn't exist:
```bash
cat > .env << 'EOF'
# Project Configuration
GITEA_ORG=<ORG_NAME>
GITEA_REPO=<REPO_NAME>
EOF
```
### Step 3.8: PR Review Settings (Optional)
Use AskUserQuestion:
- Question: "Do you want to configure PR review settings?"
- Header: "Settings"
- Options:
- "Use defaults (Recommended)"
- "Let me customize"
If customize, ask about:
- `PR_REVIEW_CONFIDENCE_THRESHOLD` (default: 0.5)
- `PR_REVIEW_AUTO_SUBMIT` (default: false)
---
## Phase 4: Validation and Next Steps
### Step 4.1: Test Configuration (if token was added)
```bash
source ~/.config/claude/gitea.env && curl -s -o /dev/null -w "%{http_code}" -H "Authorization: token $GITEA_API_TOKEN" "$GITEA_API_URL/user"
```
Report result:
- 200: Success
- 401: Invalid token
- Other: Connection issue
### Step 4.2: Summary
```
╔════════════════════════════════════════════════════════════╗
║ PR-REVIEW SETUP COMPLETE ║
╠════════════════════════════════════════════════════════════╣
║ MCP Server (Gitea): ✓ Ready ║
║ System Config: ✓ ~/.config/claude/gitea.env ║
║ Project Config: ✓ ./.env ║
╚════════════════════════════════════════════════════════════╝
```
### Step 4.3: Session Restart Notice
---
**⚠️ Session Restart Required**
Restart your Claude Code session for MCP tools to become available.
**After restart, you can:**
- Run `/pr-review <PR_NUMBER>` to review a pull request
- Run `/pr-summary <PR_NUMBER>` for a quick summary
- Run `/pr-findings <PR_NUMBER>` to list actionable findings
---
## Available Commands After Setup
| Command | Description |
|---------|-------------|
| `/pr-review <number>` | Full multi-agent PR review with confidence scoring |
| `/pr-summary <number>` | Quick PR summary |
| `/pr-findings <number>` | List findings with severity and line numbers |
**After (refactored command, 45 lines):**
---
description: Interactive setup wizard for pr-review plugin
---
# PR Review Setup Wizard
## Visual Output
Display header: `PR-REVIEW - Setup Wizard`
## Skills to Load
- skills/setup-workflow.md
- skills/mcp-tools-reference.md
- skills/output-formats.md
## Important Context
- Uses Bash, Read, Write, AskUserQuestion - NOT MCP tools
- MCP tools won't work until after setup + session restart
- Shares Gitea MCP server with projman plugin
## Workflow
### Phase 1: Check Existing Setup
Check `~/.config/claude/gitea.env`. If valid, skip to Phase 3.
### Phase 2: System Setup (if needed)
Execute `skills/setup-workflow.md`: verify Python, create gitea.env, prompt for token
### Phase 3: Project Configuration
Execute `skills/setup-workflow.md`: auto-detect org/repo, validate via API, create .env
### Phase 4: Validation
Test API connection, display completion summary, remind to restart session
## Available Commands After Setup
- `/pr-review <number>` - Full multi-agent review
- `/pr-summary <number>` - Quick summary
- `/pr-findings <number>` - List findings

View File

@@ -2,165 +2,47 @@
**Before (original command, 165 lines):**
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 🔍 PR-REVIEW · Diff Viewer │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the diff display.
## Purpose
Display the PR diff with inline annotations from review comments, making it easy to see what feedback has been given alongside the code changes.
## Usage
```
/pr-diff <pr-number> [--repo owner/repo] [--context <lines>]
```
### Options
```
--repo <owner/repo>   Override repository (default: from .env)
--context <n>         Lines of context around changes (default: 3)
--no-comments         Show diff without comment annotations
--file <pattern>      Filter to specific files (glob pattern)
```
## Behavior
### Step 1: Fetch PR Data
Using Gitea MCP tools:
1. `get_pr_diff` - Unified diff of all changes
2. `get_pr_comments` - All review comments on the PR
### Step 2: Parse and Annotate
Parse the diff and overlay comments at their respective file/line positions:
```
═══════════════════════════════════════════════════
PR #123 Diff - Add user authentication
═══════════════════════════════════════════════════
Branch: feat/user-auth → development
Files: 12 changed (+234 / -45)
───────────────────────────────────────────────────
src/api/users.ts (+85 / -12)
───────────────────────────────────────────────────
@@ -42,6 +42,15 @@ export async function getUser(id: string) {
42 │ const db = getDatabase();
43 │
44 │- const user = db.query("SELECT * FROM users WHERE id = " + id);
│ ┌─────────────────────────────────────────────────────────────
│ │ COMMENT by @reviewer (2h ago):
│ │ This is a SQL injection vulnerability. Use parameterized
│ │ queries instead: `db.query("SELECT * FROM users WHERE id = ?", [id])`
│ └─────────────────────────────────────────────────────────────
45 │+ const query = "SELECT * FROM users WHERE id = ?";
46 │+ const user = db.query(query, [id]);
47 │
48 │ if (!user) {
49 │ throw new NotFoundError("User not found");
50 │ }
@@ -78,3 +87,12 @@ export async function updateUser(id: string, data: UserInput) {
87 │+ // Validate input before update
88 │+ validateUserInput(data);
89 │+
90 │+ const result = db.query(
91 │+ "UPDATE users SET name = ?, email = ? WHERE id = ?",
92 │+ [data.name, data.email, id]
93 │+ );
│ ┌─────────────────────────────────────────────────────────────
│ │ COMMENT by @maintainer (1h ago):
│ │ Good use of parameterized query here!
│ │
│ │ REPLY by @author (30m ago):
│ │ Thanks! Applied the same pattern throughout.
│ └─────────────────────────────────────────────────────────────
───────────────────────────────────────────────────
src/components/LoginForm.tsx (+65 / -0) [NEW FILE]
───────────────────────────────────────────────────
@@ -0,0 +1,65 @@
1 │+import React, { useState } from 'react';
2 │+import { useAuth } from '../context/AuthContext';
3 │+
4 │+export function LoginForm() {
5 │+ const [email, setEmail] = useState('');
6 │+ const [password, setPassword] = useState('');
7 │+ const { login } = useAuth();
... (remaining diff content)
═══════════════════════════════════════════════════
Comment Summary: 5 comments, 2 resolved
═══════════════════════════════════════════════════
```
### Step 3: Filter by Confidence (Optional)
If `PR_REVIEW_CONFIDENCE_THRESHOLD` is set, also annotate with high-confidence findings from previous reviews:
```
44 │- const user = db.query("SELECT * FROM users WHERE id = " + id);
│ ┌─── REVIEW FINDING (0.95 HIGH) ─────────────────────────────
│ │ [SEC-001] SQL Injection Vulnerability
│ │ Use parameterized queries to prevent injection attacks.
│ └─────────────────────────────────────────────────────────────
│ ┌─── COMMENT by @reviewer ────────────────────────────────────
│ │ This is a SQL injection vulnerability...
│ └─────────────────────────────────────────────────────────────
```
## Output Formats
### Default (Annotated Diff)
Full diff with inline comments as shown above.
### Plain (--no-comments)
```
/pr-diff 123 --no-comments
# Standard unified diff output without annotations
```
### File Filter (--file)
```
/pr-diff 123 --file "src/api/*"
# Shows diff only for files matching pattern
```
## Use Cases
- **Review preparation**: See the full context of changes with existing feedback
- **Followup work**: Understand what was commented on and where
- **Discussion context**: View threaded conversations alongside the code
- **Progress tracking**: See which comments have been resolved
## Configuration
| Variable | Default | Description |
|----------|---------|-------------|
| `PR_REVIEW_CONFIDENCE_THRESHOLD` | `0.7` | Minimum confidence for showing review findings |
## Related Commands
| Command | Purpose |
|---------|---------|
| `/pr-summary` | Quick overview without diff |
| `/pr-review` | Full multi-agent review |
| `/pr-findings` | Filter review findings by category |
**After (refactored command, 47 lines):**
## Visual Output
Display header: `PR-REVIEW - Diff Viewer`
## Skills to Load
- skills/mcp-tools-reference.md
- skills/pr-analysis.md
- skills/output-formats.md
## Usage
```
/pr-diff <pr-number> [--repo owner/repo] [--context <n>] [--no-comments] [--file <pattern>]
```
## Workflow
### Step 1: Fetch Data
Load MCP tools, then: `get_pr_diff`, `get_pr_comments`
### Step 2: Parse and Annotate
Execute `skills/pr-analysis.md` Annotated Diff Display:
- Overlay comments at file/line positions
- Show commenter, timestamp, replies
- Mark resolved vs open
### Step 3: Display
Use annotated diff format from `skills/output-formats.md`
## Use Cases
- Review preparation with existing feedback
- Followup work - see what was commented
- Progress tracking on resolved comments
## Related Commands
| Command | Purpose |
|---------|---------|
| `/pr-summary` | Quick overview |
| `/pr-review` | Full review |
| `/pr-findings` | Filter findings |

Some files were not shown because too many files have changed in this diff.