feat(marketplace): command consolidation + 8 new plugins (v8.1.0 → v9.0.0) [BREAKING]

Phase 1b: Rename all ~94 commands across 12 plugins to the
/<noun> <action> sub-command pattern. Consolidate git-flow from 8 to
5 commands (commit variants absorbed into --push/--merge/--sync
flags). Add dispatch files and name: frontmatter, and update
cross-references across all plugins.

Phase 2: Design documents for 8 new plugins in docs/designs/.

Phase 3: Scaffold 8 new plugins — saas-api-platform, saas-db-migrate,
saas-react-platform, saas-test-pilot, data-seed, ops-release-manager,
ops-deploy-pipeline, debug-mcp. Each with plugin.json, commands, agents,
skills, README, and claude-md-integration. The marketplace grows from 12 to 20 plugins.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 14:52:11 -05:00
parent 5098422858
commit 2d51df7a42
321 changed files with 13582 additions and 1019 deletions
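
The rename is mechanical and repeats across every plugin in the diffs below: each command file gains a `name:` frontmatter key carrying its `<noun> <action>` pair, its heading switches to the spaced form, and each plugin gains a new dispatch file that routes the bare `/<noun>` command to a sub-command table. A minimal sketch of the pattern, taken from the clarity-assist hunks (the file paths are illustrative assumptions; this view does not show the repository layout):

```markdown
<!-- commands/quick-clarify.md (hypothetical path), before -->
# /quick-clarify - Rapid Clarification Mode

<!-- commands/quick-clarify.md, after: frontmatter added, heading renamed -->
---
name: clarity quick-clarify
---
# /clarity quick-clarify - Rapid Clarification Mode

<!-- commands/clarity.md (hypothetical path), new dispatch file -->
---
description: Prompt optimization and requirement clarification
---
# /clarity

## Sub-commands

| Sub-command | Description |
|-------------|-------------|
| `/clarity clarify` | Full 4-D methodology for complex requests |
| `/clarity quick-clarify` | Rapid mode for simple disambiguation |
```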

View File

@@ -119,7 +119,7 @@ Track gathered information in a mental model:
### After Clarification
Produce a clear specification (see /clarify command for format).
Produce a clear specification (see /clarity clarify command for format).
## Example Session

View File

@@ -18,8 +18,8 @@ This project uses the clarity-assist plugin for requirement gathering.
| Command | Use Case |
|---------|----------|
| `/clarify` | Full 4-D methodology for complex requests |
| `/quick-clarify` | Rapid mode for simple disambiguation |
| `/clarity clarify` | Full 4-D methodology for complex requests |
| `/clarity quick-clarify` | Rapid mode for simple disambiguation |
### Communication Style

View File

@@ -1,4 +1,8 @@
# /clarify - Full Prompt Optimization
---
name: clarity clarify
---
# /clarity clarify - Full Prompt Optimization
## Visual Output

View File

@@ -1,4 +1,8 @@
# /quick-clarify - Rapid Clarification Mode
---
name: clarity quick-clarify
---
# /clarity quick-clarify - Rapid Clarification Mode
## Visual Output
@@ -23,7 +27,7 @@ Single-pass clarification for requests that are mostly clear but need minor disa
- `skills/nd-accommodations.md` - ND-friendly question patterns
- `skills/clarification-techniques.md` - Echo and micro-summary techniques
- `skills/escalation-patterns.md` - When to escalate to full /clarify
- `skills/escalation-patterns.md` - When to escalate to full `/clarity clarify`
## Workflow
@@ -37,7 +41,7 @@ No formal specification document needed. Proceed after brief confirmation, docum
## Escalation
If complexity emerges, offer to switch to full `/clarify`:
If complexity emerges, offer to switch to full `/clarity clarify`:
```
"This is more involved than it first appeared. Want me to switch

View File

@@ -0,0 +1,14 @@
---
description: Prompt optimization and requirement clarification
---
# /clarity
Prompt optimization and requirement clarification with ND-friendly accommodations.
## Sub-commands
| Sub-command | Description |
|-------------|-------------|
| `/clarity clarify` | Full 4-D methodology for complex requests |
| `/clarity quick-clarify` | Rapid mode for simple disambiguation |

View File

@@ -58,8 +58,8 @@ Our design philosophy centers on three principles:
### 3. Customizable Verbosity
**Detail Levels**
- `/clarify` - Full methodology for complex requests (more thorough, more questions)
- `/quick-clarify` - Rapid mode for simple disambiguation (fewer questions, faster)
- `/clarity clarify` - Full methodology for complex requests (more thorough, more questions)
- `/clarity quick-clarify` - Rapid mode for simple disambiguation (fewer questions, faster)
**User Control**
- Users can always say "that's enough detail" to end questioning early
@@ -68,7 +68,7 @@ Our design philosophy centers on three principles:
### 4. Vagueness Detection
The `UserPromptSubmit` hook automatically detects prompts that might benefit from clarification and gently suggests using `/clarify`.
The `UserPromptSubmit` hook automatically detects prompts that might benefit from clarification and gently suggests using `/clarity clarify`.
**Detection Signals**
- Short prompts (< 10 words) without specific technical terms
@@ -156,10 +156,10 @@ This triggers vagueness detection because:
- No specific technical context
- No measurable outcome
### After: Clarified with /clarify
### After: Clarified with /clarity clarify
```
User: /clarify Make the app faster
User: /clarity clarify Make the app faster
Claude: Let me help clarify what kind of performance improvements you're looking for.
@@ -235,7 +235,7 @@ Optimize initial page load time to under 2 seconds by addressing bundle size and
For simpler requests that just need minor disambiguation:
```
User: /quick-clarify Add a delete button to the user list
User: /clarity quick-clarify Add a delete button to the user list
Claude: I'll add a delete button to each row in the user list.
@@ -286,14 +286,14 @@ export CLARITY_ASSIST_VAGUENESS_THRESHOLD=0.8
### If You're Feeling Overwhelmed
- Use `/quick-clarify` instead of `/clarify` for faster interactions
- Use `/clarity quick-clarify` instead of `/clarity clarify` for faster interactions
- Say "let's focus on just one thing" to narrow scope
- Ask to "pause and summarize" at any point
- It's OK to say "I don't know" - the plugin will offer concrete alternatives
### If You Have Executive Function Challenges
- Start with `/clarify` even for tasks you think are simple - it helps with planning
- Start with `/clarity clarify` even for tasks you think are simple - it helps with planning
- The structured specification can serve as a checklist
- Use the scope boundaries to prevent scope creep

View File

@@ -240,7 +240,7 @@ if (( $(echo "$SCORE >= $THRESHOLD" | bc -l) )); then
# Gentle, non-blocking suggestion
echo "$PREFIX Your prompt could benefit from more clarity."
echo "$PREFIX Consider running /clarify to refine your request."
echo "$PREFIX Consider running /clarity clarify to refine your request."
echo "$PREFIX (Vagueness score: ${SCORE_PCT}% - this is a suggestion, not a block)"
# Additional RFC suggestion if feature request detected

View File

@@ -40,7 +40,7 @@ add the other parts. Sound good?"
## Choosing Initial Mode
### Use /quick-clarify When
### Use /clarity quick-clarify When
- Request is fairly clear, just one or two ambiguities
- User is in a hurry
@@ -48,7 +48,7 @@ add the other parts. Sound good?"
- Simple feature additions or bug fixes
- Confidence is high (>90%)
### Use /clarify When
### Use /clarity clarify When
- Complex multi-step requests
- Requirements with multiple possible interpretations

View File

@@ -102,7 +102,7 @@ Also check for hook-based plugins (project-hygiene uses `PostToolUse` hooks).
For each detected plugin, search CLAUDE.md for:
- Plugin name mention (e.g., "projman", "cmdb-assistant")
- Command references (e.g., `/sprint-plan`, `/cmdb-search`)
- Command references (e.g., `/sprint plan`, `/cmdb search`)
- MCP tool mentions (e.g., `list_issues`, `dcim_list_devices`)
**Step 3: Load Integration Snippets**

View File

@@ -6,14 +6,14 @@ This project uses the **claude-config-maintainer** plugin to analyze and optimiz
| Command | Description |
|---------|-------------|
| `/config-analyze` | Analyze CLAUDE.md for optimization opportunities with 100-point scoring |
| `/config-optimize` | Automatically optimize CLAUDE.md structure and content |
| `/config-init` | Initialize a new CLAUDE.md file for a project |
| `/config-diff` | Track CLAUDE.md changes over time with behavioral impact analysis |
| `/config-lint` | Lint CLAUDE.md for anti-patterns and best practices (31 rules) |
| `/config-audit-settings` | Audit settings.local.json permissions with 100-point scoring |
| `/config-optimize-settings` | Optimize permission patterns and apply named profiles |
| `/config-permissions-map` | Visual map of review layers and permission coverage |
| `/claude-config analyze` | Analyze CLAUDE.md for optimization opportunities with 100-point scoring |
| `/claude-config optimize` | Automatically optimize CLAUDE.md structure and content |
| `/claude-config init` | Initialize a new CLAUDE.md file for a project |
| `/claude-config diff` | Track CLAUDE.md changes over time with behavioral impact analysis |
| `/claude-config lint` | Lint CLAUDE.md for anti-patterns and best practices (31 rules) |
| `/claude-config audit-settings` | Audit settings.local.json permissions with 100-point scoring |
| `/claude-config optimize-settings` | Optimize permission patterns and apply named profiles |
| `/claude-config permissions-map` | Visual map of review layers and permission coverage |
### CLAUDE.md Scoring System
@@ -47,10 +47,10 @@ The settings audit uses a 100-point scoring system across four categories:
### Usage Guidelines
- Run `/config-analyze` periodically to assess CLAUDE.md quality
- Run `/config-audit-settings` to check permission efficiency
- Run `/claude-config analyze` periodically to assess CLAUDE.md quality
- Run `/claude-config audit-settings` to check permission efficiency
- Target a score of **70+/100** for effective Claude Code operation
- Address HIGH priority issues first when optimizing
- Use `/config-init` when setting up new projects to start with best practices
- Use `/config-permissions-map` to visualize review layer coverage
- Use `/claude-config init` when setting up new projects to start with best practices
- Use `/claude-config permissions-map` to visualize review layer coverage
- Re-analyze after making changes to verify improvements

View File

@@ -1,8 +1,9 @@
---
name: claude-config analyze
description: Analyze CLAUDE.md for optimization opportunities and plugin integration
---
# Analyze CLAUDE.md
# /claude-config analyze
Analyze your CLAUDE.md and provide a scored report with recommendations.
@@ -20,7 +21,7 @@ Display: `CONFIG-MAINTAINER - CLAUDE.md Analysis`
## Usage
```
/config-analyze
/claude-config analyze
```
## Workflow

View File

@@ -1,9 +1,9 @@
---
name: config-audit-settings
name: claude-config audit-settings
description: Audit settings.local.json for permission optimization opportunities
---
# /config-audit-settings
# /claude-config audit-settings
Audit Claude Code `settings.local.json` permissions with 100-point scoring across redundancy, coverage, safety alignment, and profile fit.
@@ -24,8 +24,8 @@ Before executing, load:
## Usage
```
/config-audit-settings # Full audit with recommendations
/config-audit-settings --diagram # Include Mermaid diagram of review layer coverage
/claude-config audit-settings # Full audit with recommendations
/claude-config audit-settings --diagram # Include Mermaid diagram of review layer coverage
```
## Workflow
@@ -128,9 +128,9 @@ Recommendations:
...
Follow-Up Actions:
1. Run /config-optimize-settings to apply recommendations
2. Run /config-optimize-settings --dry-run to preview first
3. Run /config-optimize-settings --profile=reviewed to apply profile
1. Run /claude-config optimize-settings to apply recommendations
2. Run /claude-config optimize-settings --dry-run to preview first
3. Run /claude-config optimize-settings --profile=reviewed to apply profile
```
## Diagram Output (--diagram flag)

View File

@@ -1,8 +1,9 @@
---
name: claude-config diff
description: Show diff between current CLAUDE.md and last commit
---
# Compare CLAUDE.md Changes
# /claude-config diff
Show differences between CLAUDE.md versions to track configuration drift.
@@ -18,10 +19,10 @@ Display: `CONFIG-MAINTAINER - CLAUDE.md Diff`
## Usage
```
/config-diff # Working vs last commit
/config-diff --commit=abc1234 # Working vs specific commit
/config-diff --from=v1.0 --to=v2.0 # Compare two commits
/config-diff --section="Critical Rules" # Specific section only
/claude-config diff # Working vs last commit
/claude-config diff --commit=abc1234 # Working vs specific commit
/claude-config diff --from=v1.0 --to=v2.0 # Compare two commits
/claude-config diff --section="Critical Rules" # Specific section only
```
## Workflow

View File

@@ -1,8 +1,9 @@
---
name: claude-config init
description: Initialize a new CLAUDE.md file for a project
---
# Initialize CLAUDE.md
# /claude-config init
Create a new CLAUDE.md file tailored to your project.
@@ -19,9 +20,9 @@ Display: `CONFIG-MAINTAINER - CLAUDE.md Initialization`
## Usage
```
/config-init # Interactive
/config-init --minimal # Minimal version
/config-init --comprehensive # Detailed version
/claude-config init # Interactive
/claude-config init --minimal # Minimal version
/claude-config init --comprehensive # Detailed version
```
## Workflow

View File

@@ -1,8 +1,9 @@
---
name: claude-config lint
description: Lint CLAUDE.md for common anti-patterns and best practices
---
# Lint CLAUDE.md
# /claude-config lint
Check CLAUDE.md against best practices and detect common anti-patterns.
@@ -18,9 +19,9 @@ Display: `CONFIG-MAINTAINER - CLAUDE.md Lint`
## Usage
```
/config-lint # Full lint
/config-lint --fix # Auto-fix issues
/config-lint --rules=security # Check specific category
/claude-config lint # Full lint
/claude-config lint --fix # Auto-fix issues
/claude-config lint --rules=security # Check specific category
```
## Workflow

View File

@@ -1,9 +1,9 @@
---
name: config-optimize-settings
name: claude-config optimize-settings
description: Optimize settings.local.json permissions based on audit recommendations
---
# /config-optimize-settings
# /claude-config optimize-settings
Optimize Claude Code `settings.local.json` permission patterns and apply named profiles.
@@ -25,10 +25,10 @@ Before executing, load:
## Usage
```
/config-optimize-settings # Apply audit recommendations
/config-optimize-settings --dry-run # Preview only, no changes
/config-optimize-settings --profile=reviewed # Apply named profile
/config-optimize-settings --consolidate-only # Only merge/dedupe, no new rules
/claude-config optimize-settings # Apply audit recommendations
/claude-config optimize-settings --dry-run # Preview only, no changes
/claude-config optimize-settings --profile=reviewed # Apply named profile
/claude-config optimize-settings --consolidate-only # Only merge/dedupe, no new rules
```
## Options
@@ -44,7 +44,7 @@ Before executing, load:
### Step 1: Run Audit Analysis
Execute the same analysis as `/config-audit-settings`:
Execute the same analysis as `/claude-config audit-settings`:
1. Locate settings file
2. Parse permission arrays
3. Detect issues (duplicates, subsets, merge candidates, etc.)
@@ -214,7 +214,7 @@ DRY RUN - No changes will be made
[... preview content ...]
To apply these changes, run:
/config-optimize-settings
/claude-config optimize-settings
```
### Applied Output

View File

@@ -1,8 +1,9 @@
---
name: claude-config optimize
description: Optimize CLAUDE.md structure and content
---
# Optimize CLAUDE.md
# /claude-config optimize
Automatically optimize CLAUDE.md based on best practices.
@@ -20,9 +21,9 @@ Display: `CONFIG-MAINTAINER - CLAUDE.md Optimization`
## Usage
```
/config-optimize # Full optimization
/config-optimize --condense # Reduce verbosity
/config-optimize --dry-run # Preview only
/claude-config optimize # Full optimization
/claude-config optimize --condense # Reduce verbosity
/claude-config optimize --dry-run # Preview only
```
## Workflow

View File

@@ -1,9 +1,9 @@
---
name: config-permissions-map
name: claude-config permissions-map
description: Generate visual map of review layers and permission coverage
---
# /config-permissions-map
# /claude-config permissions-map
Generate a Mermaid diagram showing the relationship between file operations, review layers, and permission status.
@@ -26,8 +26,8 @@ Also read: `/mnt/skills/user/mermaid-diagrams/SKILL.md` (for diagram requirement
## Usage
```
/config-permissions-map # Generate and display diagram
/config-permissions-map --save # Save diagram to .mermaid file
/claude-config permissions-map # Generate and display diagram
/claude-config permissions-map --save # Save diagram to .mermaid file
```
## Workflow

View File

@@ -0,0 +1,20 @@
---
description: CLAUDE.md and settings optimization
---
# /claude-config
CLAUDE.md and settings.local.json optimization for Claude Code projects.
## Sub-commands
| Sub-command | Description |
|-------------|-------------|
| `/claude-config analyze` | Analyze CLAUDE.md for optimization opportunities |
| `/claude-config optimize` | Optimize CLAUDE.md structure with preview/backup |
| `/claude-config init` | Initialize new CLAUDE.md for a project |
| `/claude-config diff` | Track CLAUDE.md changes over time with behavioral impact |
| `/claude-config lint` | Lint CLAUDE.md for anti-patterns and best practices |
| `/claude-config audit-settings` | Audit settings.local.json permissions (100-point score) |
| `/claude-config optimize-settings` | Optimize permissions (profiles, consolidation, dry-run) |
| `/claude-config permissions-map` | Visual review layer + permission coverage map |

View File

@@ -6,7 +6,7 @@ This skill defines how to analyze and present CLAUDE.md differences.
| Mode | Command | Description |
|------|---------|-------------|
| Working vs HEAD | `/config-diff` | Uncommitted changes |
| Working vs HEAD | `/claude-config diff` | Uncommitted changes |
| Working vs Commit | `--commit=REF` | Changes since specific point |
| Commit to Commit | `--from=X --to=Y` | Historical comparison |
| Branch Comparison | `--branch=NAME` | Cross-branch differences |

View File

@@ -12,56 +12,56 @@ This skill defines the standard visual header for claude-config-maintainer comma
## Command-Specific Headers
### /config-analyze
### /claude-config analyze
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - CLAUDE.md Analysis |
+-----------------------------------------------------------------+
```
### /config-optimize
### /claude-config optimize
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - CLAUDE.md Optimization |
+-----------------------------------------------------------------+
```
### /config-lint
### /claude-config lint
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - CLAUDE.md Lint |
+-----------------------------------------------------------------+
```
### /config-diff
### /claude-config diff
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - CLAUDE.md Diff |
+-----------------------------------------------------------------+
```
### /config-init
### /claude-config init
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - CLAUDE.md Initialization |
+-----------------------------------------------------------------+
```
### /config-audit-settings
### /claude-config audit-settings
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - Settings Audit |
+-----------------------------------------------------------------+
```
### /config-optimize-settings
### /claude-config optimize-settings
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - Settings Optimization |
+-----------------------------------------------------------------+
```
### /config-permissions-map
### /claude-config permissions-map
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - Permissions Map |

View File

@@ -97,13 +97,13 @@ ipam_list_prefixes prefix=<proposed-prefix>
| Command | Purpose |
|---------|---------|
| `/cmdb-search <query>` | Search across all CMDB objects |
| `/cmdb-device <action>` | Device CRUD operations |
| `/cmdb-ip <action>` | IP address and prefix management |
| `/cmdb-site <action>` | Site and location management |
| `/cmdb-audit [scope]` | Data quality analysis |
| `/cmdb-register` | Register current machine |
| `/cmdb-sync` | Sync machine state with NetBox |
| `/cmdb-topology <view>` | Generate infrastructure diagrams |
| `/change-audit [filters]` | Audit NetBox changes |
| `/ip-conflicts [scope]` | Detect IP conflicts |
| `/cmdb search <query>` | Search across all CMDB objects |
| `/cmdb device <action>` | Device CRUD operations |
| `/cmdb ip <action>` | IP address and prefix management |
| `/cmdb site <action>` | Site and location management |
| `/cmdb audit [scope]` | Data quality analysis |
| `/cmdb register` | Register current machine |
| `/cmdb sync` | Sync machine state with NetBox |
| `/cmdb topology <view>` | Generate infrastructure diagrams |
| `/cmdb change-audit [filters]` | Audit NetBox changes |
| `/cmdb ip-conflicts [scope]` | Detect IP conflicts |

View File

@@ -6,10 +6,10 @@ This project uses the **cmdb-assistant** plugin for NetBox CMDB integration to m
| Command | Description |
|---------|-------------|
| `/cmdb-search` | Search across all NetBox objects |
| `/cmdb-device` | Manage devices (create, update, list) |
| `/cmdb-ip` | Manage IP addresses and prefixes |
| `/cmdb-site` | Manage sites and locations |
| `/cmdb search` | Search across all NetBox objects |
| `/cmdb device` | Manage devices (create, update, list) |
| `/cmdb ip` | Manage IP addresses and prefixes |
| `/cmdb site` | Manage sites and locations |
### MCP Tools Available

View File

@@ -1,8 +1,9 @@
---
name: cmdb audit
description: Audit NetBox data quality and identify consistency issues
---
# CMDB Data Quality Audit
# /cmdb audit
Analyze NetBox data for quality issues and best practice violations.
@@ -16,7 +17,7 @@ Analyze NetBox data for quality issues and best practice violations.
## Usage
```
/cmdb-audit [scope]
/cmdb audit [scope]
```
**Scopes:**
@@ -49,9 +50,9 @@ Execute `skills/audit-workflow.md` which covers:
## Examples
- `/cmdb-audit` - Full audit
- `/cmdb-audit vms` - VM-specific checks
- `/cmdb-audit naming` - Naming conventions
- `/cmdb audit` - Full audit
- `/cmdb audit vms` - VM-specific checks
- `/cmdb audit naming` - Naming conventions
## User Request

View File

@@ -1,8 +1,9 @@
---
name: cmdb change-audit
description: Audit NetBox changes with filtering by date, user, or object type
---
# CMDB Change Audit
# /cmdb change-audit
Query and analyze the NetBox audit log for change tracking and compliance.
@@ -15,7 +16,7 @@ Query and analyze the NetBox audit log for change tracking and compliance.
## Usage
```
/change-audit [filters]
/cmdb change-audit [filters]
```
**Filters:**
@@ -46,11 +47,11 @@ If user asks for "security audit" or "compliance report":
## Examples
- `/change-audit` - Recent changes (last 24 hours)
- `/change-audit last 7 days` - Past week
- `/change-audit by admin` - All changes by admin
- `/change-audit type dcim.device` - Device changes only
- `/change-audit action delete` - All deletions
- `/cmdb change-audit` - Recent changes (last 24 hours)
- `/cmdb change-audit last 7 days` - Past week
- `/cmdb change-audit by admin` - All changes by admin
- `/cmdb change-audit type dcim.device` - Device changes only
- `/cmdb change-audit action delete` - All deletions
## User Request

View File

@@ -1,4 +1,8 @@
# CMDB Device Management
---
name: cmdb device
---
# /cmdb device
Manage network devices in NetBox.
@@ -10,7 +14,7 @@ Manage network devices in NetBox.
## Usage
```
/cmdb-device <action> [options]
/cmdb device <action> [options]
```
## Instructions
@@ -45,10 +49,10 @@ After creating a device, offer to:
## Examples
- `/cmdb-device list`
- `/cmdb-device show core-router-01`
- `/cmdb-device create web-server-03`
- `/cmdb-device at headquarters`
- `/cmdb device list`
- `/cmdb device show core-router-01`
- `/cmdb device create web-server-03`
- `/cmdb device at headquarters`
## User Request

View File

@@ -1,8 +1,9 @@
---
name: cmdb ip-conflicts
description: Detect IP address conflicts and overlapping prefixes in NetBox
---
# CMDB IP Conflict Detection
# /cmdb ip-conflicts
Scan NetBox IPAM data to identify IP address conflicts and overlapping prefixes.
@@ -15,7 +16,7 @@ Scan NetBox IPAM data to identify IP address conflicts and overlapping prefixes.
## Usage
```
/ip-conflicts [scope]
/cmdb ip-conflicts [scope]
```
**Scopes:**
@@ -49,9 +50,9 @@ Execute conflict detection from `skills/ip-management.md`:
## Examples
- `/ip-conflicts` - Full scan
- `/ip-conflicts addresses` - Duplicate IPs only
- `/ip-conflicts vrf Production` - Scan specific VRF
- `/cmdb ip-conflicts` - Full scan
- `/cmdb ip-conflicts addresses` - Duplicate IPs only
- `/cmdb ip-conflicts vrf Production` - Scan specific VRF
## User Request

View File

@@ -1,4 +1,8 @@
# CMDB IP Management
---
name: cmdb ip
---
# /cmdb ip
Manage IP addresses and prefixes in NetBox.
@@ -11,7 +15,7 @@ Manage IP addresses and prefixes in NetBox.
## Usage
```
/cmdb-ip <action> [options]
/cmdb ip <action> [options]
```
## Instructions
@@ -42,10 +46,10 @@ Execute operations from `skills/ip-management.md`.
## Examples
- `/cmdb-ip prefixes`
- `/cmdb-ip available in 10.0.1.0/24`
- `/cmdb-ip allocate from 10.0.1.0/24`
- `/cmdb-ip assign 10.0.1.50/24 to web-server-01 eth0`
- `/cmdb ip prefixes`
- `/cmdb ip available in 10.0.1.0/24`
- `/cmdb ip allocate from 10.0.1.0/24`
- `/cmdb ip assign 10.0.1.50/24 to web-server-01 eth0`
## User Request

View File

@@ -1,8 +1,9 @@
---
name: cmdb register
description: Register the current machine into NetBox with all running applications
---
# CMDB Machine Registration
# /cmdb register
Register the current machine into NetBox, including hardware info, network interfaces, and running applications.
@@ -17,7 +18,7 @@ Register the current machine into NetBox, including hardware info, network inter
## Usage
```
/cmdb-register [--site <site-name>] [--tenant <tenant-name>] [--role <role-name>]
/cmdb register [--site <site-name>] [--tenant <tenant-name>] [--role <role-name>]
```
**Options:**
@@ -41,7 +42,7 @@ Execute `skills/device-registration.md` which covers:
| Error | Action |
|-------|--------|
| Device already exists | Suggest `/cmdb-sync` or ask to proceed |
| Device already exists | Suggest `/cmdb sync` or ask to proceed |
| Site not found | List available sites, offer to create new |
| Docker not available | Skip container registration, note in summary |
| Permission denied | Note which operations failed, suggest fixes |

View File

@@ -1,4 +1,8 @@
# CMDB Search
---
name: cmdb search
---
# /cmdb search
## Visual Output
@@ -17,7 +21,7 @@ Search NetBox for devices, IPs, sites, or any CMDB object.
## Usage
```
/cmdb-search <query>
/cmdb search <query>
```
## Instructions
@@ -37,9 +41,9 @@ For broad searches, query multiple endpoints and consolidate results.
## Examples
- `/cmdb-search router` - Find all devices with "router" in the name
- `/cmdb-search 10.0.1.0/24` - Find prefix and IPs within it
- `/cmdb-search datacenter` - Find sites matching "datacenter"
- `/cmdb search router` - Find all devices with "router" in the name
- `/cmdb search 10.0.1.0/24` - Find prefix and IPs within it
- `/cmdb search datacenter` - Find sites matching "datacenter"
## User Query

View File

@@ -1,8 +1,9 @@
---
name: cmdb setup
description: Interactive setup wizard for cmdb-assistant plugin
---
# CMDB Assistant Setup Wizard
# /cmdb setup
Configure the cmdb-assistant plugin with NetBox integration.
@@ -18,7 +19,7 @@ Configure the cmdb-assistant plugin with NetBox integration.
## Usage
```
/cmdb-setup
/cmdb setup
```
## Instructions
@@ -63,10 +64,10 @@ System Config: ~/.config/claude/netbox.env
Restart your Claude Code session for MCP tools.
After restart, try:
- /cmdb-device <hostname>
- /cmdb-ip <address>
- /cmdb-site <name>
- /cmdb-search <query>
- /cmdb device <hostname>
- /cmdb ip <address>
- /cmdb site <name>
- /cmdb search <query>
```
## User Request

View File

@@ -1,4 +1,8 @@
# CMDB Site Management
---
name: cmdb site
---
# /cmdb site
Manage sites and locations in NetBox.
@@ -10,7 +14,7 @@ Manage sites and locations in NetBox.
## Usage
```
/cmdb-site <action> [options]
/cmdb site <action> [options]
```
## Instructions
@@ -40,10 +44,10 @@ Execute `skills/visual-header.md` with context "Site Management".
## Examples
- `/cmdb-site list`
- `/cmdb-site show headquarters`
- `/cmdb-site create branch-office-nyc`
- `/cmdb-site racks at headquarters`
- `/cmdb site list`
- `/cmdb site show headquarters`
- `/cmdb site create branch-office-nyc`
- `/cmdb site racks at headquarters`
## User Request

View File

@@ -1,8 +1,9 @@
---
name: cmdb sync
description: Synchronize current machine state with existing NetBox record
---
# CMDB Machine Sync
# /cmdb sync
Update an existing NetBox device record with the current machine state.
@@ -16,7 +17,7 @@ Update an existing NetBox device record with the current machine state.
## Usage
```
/cmdb-sync [--full] [--dry-run]
/cmdb sync [--full] [--dry-run]
```
**Options:**
@@ -48,7 +49,7 @@ Execute `skills/sync-workflow.md` which covers:
| Error | Action |
|-------|--------|
| Device not found | Suggest `/cmdb-register` |
| Device not found | Suggest `/cmdb register` |
| Permission denied | Note which failed, continue others |
| Cluster not found | Offer to create or skip container sync |

View File

@@ -1,8 +1,9 @@
---
name: cmdb topology
description: Generate infrastructure topology diagrams from NetBox data
---
# CMDB Topology Visualization
# /cmdb topology
Generate Mermaid diagrams showing infrastructure topology from NetBox.
@@ -15,7 +16,7 @@ Generate Mermaid diagrams showing infrastructure topology from NetBox.
## Usage
```
/cmdb-topology <view> [scope]
/cmdb topology <view> [scope]
```
**Views:**
@@ -43,11 +44,11 @@ Always provide:
## Examples
- `/cmdb-topology rack server-rack-01` - Rack elevation
- `/cmdb-topology network` - All network connections
- `/cmdb-topology network Home` - Network for Home site
- `/cmdb-topology site Headquarters` - Site overview
- `/cmdb-topology full` - Full infrastructure
- `/cmdb topology rack server-rack-01` - Rack elevation
- `/cmdb topology network` - All network connections
- `/cmdb topology network Home` - Network for Home site
- `/cmdb topology site Headquarters` - Site overview
- `/cmdb topology full` - Full infrastructure
## User Request

View File

@@ -0,0 +1,23 @@
---
description: NetBox CMDB infrastructure management
---
# /cmdb
NetBox CMDB integration for infrastructure management.
## Sub-commands
| Sub-command | Description |
|-------------|-------------|
| `/cmdb search` | Search NetBox for devices, IPs, sites |
| `/cmdb device` | Manage network devices (create, view, update, delete) |
| `/cmdb ip` | Manage IP addresses and prefixes |
| `/cmdb site` | Manage sites, locations, racks, and regions |
| `/cmdb audit` | Data quality analysis (VMs, devices, naming, roles) |
| `/cmdb register` | Register current machine into NetBox |
| `/cmdb sync` | Sync machine state with NetBox (detect drift) |
| `/cmdb topology` | Infrastructure topology diagrams |
| `/cmdb change-audit` | NetBox audit trail queries with filtering |
| `/cmdb ip-conflicts` | Detect IP conflicts and overlapping prefixes |
| `/cmdb setup` | Setup wizard for NetBox MCP server |

View File

@@ -147,8 +147,8 @@ dcim_update_device id=X platform=Y
### Next Steps
- Run `/cmdb-register` to properly register new machines
- Use `/cmdb-sync` to update existing registrations
- Run `/cmdb register` to properly register new machines
- Use `/cmdb sync` to update existing registrations
- Consider bulk updates via NetBox web UI for >10 items
```

View File

@@ -25,7 +25,7 @@ Use commands from `system-discovery` skill to gather:
```
dcim_list_devices name=<hostname>
```
If exists, suggest `/cmdb-sync` instead.
If exists, suggest `/cmdb sync` instead.
2. **Verify/Create site:**
```
@@ -131,7 +131,7 @@ Add journal entry:
extras_create_journal_entry
assigned_object_type="dcim.device"
assigned_object_id=<device_id>
comments="Device registered via /cmdb-register command\n\nDiscovered:\n- X network interfaces\n- Y IP addresses\n- Z Docker containers"
comments="Device registered via /cmdb register command\n\nDiscovered:\n- X network interfaces\n- Y IP addresses\n- Z Docker containers"
```
## Summary Report Template
@@ -162,8 +162,8 @@ extras_create_journal_entry
| media_jellyfin | Media Server | 2.0 | 2048MB | Active |
### Next Steps
- Run `/cmdb-sync` periodically to keep data current
- Run `/cmdb-audit` to check data quality
- Run `/cmdb sync` periodically to keep data current
- Run `/cmdb audit` to check data quality
- Add tags for classification
```
@@ -171,7 +171,7 @@ extras_create_journal_entry
| Error | Action |
|-------|--------|
| Device already exists | Suggest `/cmdb-sync` or ask to proceed |
| Device already exists | Suggest `/cmdb sync` or ask to proceed |
| Site not found | List available sites, offer to create new |
| Docker not available | Skip container registration, note in summary |
| Permission denied | Note which operations failed, suggest fixes |

View File

@@ -16,7 +16,7 @@ Load these skills:
dcim_list_devices name=<hostname>
```
If not found, suggest `/cmdb-register` first.
If not found, suggest `/cmdb register` first.
If found:
- Store device ID and current field values
@@ -167,7 +167,7 @@ virt_update_vm id=<id> status="offline"
extras_create_journal_entry
assigned_object_type="dcim.device"
assigned_object_id=<device_id>
comments="Device synced via /cmdb-sync command\n\nChanges applied:\n- <list>"
comments="Device synced via /cmdb sync command\n\nChanges applied:\n- <list>"
```
## Sync Modes
@@ -185,7 +185,7 @@ extras_create_journal_entry
| Error | Action |
|-------|--------|
| Device not found | Suggest `/cmdb-register` |
| Device not found | Suggest `/cmdb register` |
| Permission denied | Note which failed, continue others |
| Cluster not found | Offer to create or skip container sync |
| API errors | Log error, continue with remaining |

View File

@@ -14,17 +14,17 @@ Standard visual header for cmdb-assistant commands.
| Command | Context |
|---------|---------|
| `/cmdb-search` | Search |
| `/cmdb-device` | Device Management |
| `/cmdb-ip` | IP Management |
| `/cmdb-site` | Site Management |
| `/cmdb-audit` | Data Quality Audit |
| `/cmdb-register` | Machine Registration |
| `/cmdb-sync` | Machine Sync |
| `/cmdb-topology` | Topology |
| `/change-audit` | Change Audit |
| `/ip-conflicts` | IP Conflict Detection |
| `/cmdb-setup` | Setup Wizard |
| `/cmdb search` | Search |
| `/cmdb device` | Device Management |
| `/cmdb ip` | IP Management |
| `/cmdb site` | Site Management |
| `/cmdb audit` | Data Quality Audit |
| `/cmdb register` | Machine Registration |
| `/cmdb sync` | Machine Sync |
| `/cmdb topology` | Topology |
| `/cmdb change-audit` | Change Audit |
| `/cmdb ip-conflicts` | IP Conflict Detection |
| `/cmdb setup` | Setup Wizard |
| Agent mode | Infrastructure Management |
## Usage

View File

@@ -16,11 +16,11 @@ PreToolUse hooks scan all code changes for:
Critical issues are blocked. Warnings are noted but allowed.
### Commands
- `/security-scan` - Full project security audit
- `/refactor <target>` - Apply refactoring pattern
- `/refactor-dry <target>` - Preview refactoring opportunities
- `/sentinel scan` - Full project security audit
- `/sentinel refactor <target>` - Apply refactoring pattern
- `/sentinel refactor-dry <target>` - Preview refactoring opportunities
### Severity Levels
- 🔴 Critical: Must fix immediately
- 🟠 High: Fix before release
- 🟡 Medium: Improve when possible
- Critical: Must fix immediately
- High: Fix before release
- Medium: Improve when possible

View File

@@ -1,8 +1,9 @@
---
name: sentinel refactor-dry
description: Preview refactoring changes without applying them
---
# Refactor Dry Run
# /sentinel refactor-dry
Analyze and preview refactoring opportunities without making changes.
@@ -16,7 +17,7 @@ Analyze and preview refactoring opportunities without making changes.
## Usage
```
/refactor-dry <target> [--all]
/sentinel refactor-dry <target> [--all]
```
**Target:** File path, function name, or "." for current file
@@ -41,7 +42,7 @@ Analyze and preview refactoring opportunities without making changes.
### Recommended (High Impact, Low Risk)
1. **pattern** at lines X-Y
- Impact: High | Risk: Low
- Run: `/refactor <target> --pattern=<pattern>`
- Run: `/sentinel refactor <target> --pattern=<pattern>`
### Optional
- Lower priority items

View File

@@ -1,8 +1,9 @@
---
name: sentinel refactor
description: Apply refactoring patterns to improve code structure and maintainability
---
# Refactor
# /sentinel refactor
Apply refactoring transformations to specified code.
@@ -16,7 +17,7 @@ Apply refactoring transformations to specified code.
## Usage
```
/refactor <target> [--pattern=<pattern>]
/sentinel refactor <target> [--pattern=<pattern>]
```
**Target:** File path, function name, or "." for current context

View File

@@ -1,8 +1,9 @@
---
name: sentinel scan
description: Full security audit of codebase - scans all files for vulnerability patterns
---
# Security Scan
# /sentinel scan
Comprehensive security audit of the project.

View File

@@ -0,0 +1,15 @@
---
description: Security scanning and code refactoring
---
# /sentinel
Security scanning and safe code refactoring tools.
## Sub-commands
| Sub-command | Description |
|-------------|-------------|
| `/sentinel scan` | Full security audit (SQL injection, XSS, secrets, etc.) |
| `/sentinel refactor` | Apply refactoring patterns to improve code |
| `/sentinel refactor-dry` | Preview refactoring without applying changes |

View File

@@ -60,7 +60,7 @@ High impact, low risk opportunities:
- Description of the change
- Impact: High/Medium/Low (specific metric improvement)
- Risk: Low/Medium/High (why)
- Run: `/refactor <target> --pattern=<pattern>`
- Run: `/sentinel refactor <target> --pattern=<pattern>`
```
### Optional Section

View File

@@ -91,7 +91,7 @@ You are an agent definition validator. Your role is to verify that a specific ag
## Example Interaction
**User**: /check-agent Orchestrator
**User**: /cv check-agent Orchestrator
**Agent**:
1. Parses CLAUDE.md, finds Orchestrator agent
@@ -101,7 +101,7 @@ You are an agent definition validator. Your role is to verify that a specific ag
5. Validates data flow: no data producers/consumers used
6. Reports: "Agent Orchestrator: VALID - all 3 tool references found"
**User**: /check-agent InvalidAgent
**User**: /cv check-agent InvalidAgent
**Agent**:
1. Parses CLAUDE.md, agent not found

View File

@@ -93,7 +93,7 @@ You are a contract validation specialist. Your role is to perform comprehensive
## Example Interaction
**User**: /validate-contracts ~/claude-plugins-work
**User**: /cv validate ~/claude-plugins-work
**Agent**:
1. Discovers 12 plugins in marketplace

View File

@@ -13,15 +13,15 @@ This marketplace uses the contract-validator plugin for cross-plugin compatibili
| Command | Purpose |
|---------|---------|
| `/validate-contracts` | Full marketplace compatibility validation |
| `/check-agent` | Validate single agent definition |
| `/list-interfaces` | Show all plugin interfaces |
| `/cv validate` | Full marketplace compatibility validation |
| `/cv check-agent` | Validate single agent definition |
| `/cv list-interfaces` | Show all plugin interfaces |
### Validation Workflow
Run before merging plugin changes:
1. `/validate-contracts` - Check for conflicts
1. `/cv validate` - Check for conflicts
2. Review errors (must fix) and warnings (should review)
3. Fix issues before merging
@@ -91,7 +91,7 @@ Avoid generic names that may conflict:
| `/setup` | Setup wizard |
# GOOD - Plugin-specific prefix
| `/data-setup` | Data platform setup wizard |
| `/data setup` | Data platform setup wizard |
```
### Document All Tools
@@ -125,20 +125,20 @@ This agent uses tools from:
```
# Before merging new plugin
/validate-contracts
/cv validate
# Check specific agent after changes
/check-agent Orchestrator
/cv check-agent Orchestrator
```
### Plugin Development
```
# See what interfaces exist
/list-interfaces
/cv list-interfaces
# After adding new command, verify no conflicts
/validate-contracts
/cv validate
```
### CI/CD Integration
@@ -148,5 +148,5 @@ Add to your pipeline:
```yaml
- name: Validate Plugin Contracts
run: |
claude --skill contract-validator:validate-contracts --args "${{ github.workspace }}"
claude --skill contract-validator:cv-validate --args "${{ github.workspace }}"
```

View File

@@ -1,4 +1,8 @@
# /check-agent - Validate Agent Definition
---
name: cv check-agent
---
# /cv check-agent
## Skills to Load
- skills/visual-output.md
@@ -9,7 +13,7 @@
## Usage
```
/check-agent <agent_name> [claude_md_path]
/cv check-agent <agent_name> [claude_md_path]
```
## Parameters
@@ -38,7 +42,7 @@
## Examples
```
/check-agent Planner
/check-agent Orchestrator ./CLAUDE.md
/check-agent data-analysis ~/project/CLAUDE.md
/cv check-agent Planner
/cv check-agent Orchestrator ./CLAUDE.md
/cv check-agent data-analysis ~/project/CLAUDE.md
```

View File

@@ -1,4 +1,8 @@
# /dependency-graph - Generate Dependency Visualization
---
name: cv dependency-graph
---
# /cv dependency-graph
## Skills to Load
- skills/visual-output.md
@@ -10,7 +14,7 @@
## Usage
```
/dependency-graph [marketplace_path] [--format <mermaid|text>] [--show-tools]
/cv dependency-graph [marketplace_path] [--format <mermaid|text>] [--show-tools]
```
## Parameters
@@ -41,15 +45,15 @@
## Examples
```
/dependency-graph
/dependency-graph --show-tools
/dependency-graph --format text
/dependency-graph ~/claude-plugins-work
/cv dependency-graph
/cv dependency-graph --show-tools
/cv dependency-graph --format text
/cv dependency-graph ~/claude-plugins-work
```
## Integration
Use with `/validate-contracts`:
1. Run `/dependency-graph` to visualize
2. Run `/validate-contracts` to find issues
Use with `/cv validate`:
1. Run `/cv dependency-graph` to visualize
2. Run `/cv validate` to find issues
3. Fix and regenerate

View File

@@ -1,4 +1,8 @@
# /list-interfaces - Show Plugin Interfaces
---
name: cv list-interfaces
---
# /cv list-interfaces
## Skills to Load
- skills/visual-output.md
@@ -9,7 +13,7 @@
## Usage
```
/list-interfaces [marketplace_path]
/cv list-interfaces [marketplace_path]
```
## Parameters
@@ -41,6 +45,6 @@
## Examples
```
/list-interfaces
/list-interfaces ~/claude-plugins-work
/cv list-interfaces
/cv list-interfaces ~/claude-plugins-work
```

View File

@@ -1,8 +1,9 @@
---
name: cv setup
description: Interactive setup wizard for contract-validator plugin
---
# /cv-setup - Contract Validator Setup Wizard
# /cv setup
## Skills to Load
- skills/visual-output.md
@@ -40,9 +41,9 @@ description: Interactive setup wizard for contract-validator plugin
## Post-Setup Commands
- `/validate-contracts` - Full marketplace validation
- `/check-agent` - Validate single agent
- `/list-interfaces` - Show all plugin interfaces
- `/cv validate` - Full marketplace validation
- `/cv check-agent` - Validate single agent
- `/cv list-interfaces` - Show all plugin interfaces
## No Configuration Required

View File

@@ -1,4 +1,5 @@
---
name: cv status
description: Marketplace-wide health check across all installed plugins
---

View File

@@ -1,4 +1,8 @@
# /validate-contracts - Full Contract Validation
---
name: cv validate
---
# /cv validate
## Skills to Load
- skills/visual-output.md
@@ -10,7 +14,7 @@
## Usage
```
/validate-contracts [marketplace_path]
/cv validate [marketplace_path]
```
## Parameters
@@ -40,6 +44,6 @@
## Examples
```
/validate-contracts
/validate-contracts ~/claude-plugins-work
/cv validate
/cv validate ~/claude-plugins-work
```

View File

@@ -0,0 +1,18 @@
---
description: Cross-plugin compatibility validation
---
# /cv
Cross-plugin compatibility validation and agent verification.
## Sub-commands
| Sub-command | Description |
|-------------|-------------|
| `/cv validate` | Full marketplace compatibility validation |
| `/cv check-agent` | Validate single agent definition |
| `/cv list-interfaces` | Show all plugin interfaces |
| `/cv dependency-graph` | Mermaid visualization of plugin dependencies |
| `/cv setup` | Setup wizard for contract-validator MCP |
| `/cv status` | Marketplace-wide health check |

View File

@@ -65,6 +65,6 @@ Available MCP tools for contract-validator operations.
## Error Handling
If MCP tools fail:
1. Check if `/cv-setup` has been run
1. Check if `/cv setup` has been run
2. Verify session was restarted after setup
3. Check MCP server venv exists and is valid

View File

@@ -23,8 +23,8 @@ You are a strict data integrity auditor. Your role is to review code for proper
## Trigger Conditions
Activate this agent when:
- User runs `/data-review <path>`
- User runs `/data-gate <path>`
- User runs `/data review <path>`
- User runs `/data gate <path>`
- Projman orchestrator requests data domain gate check
- Code review includes database operations, dbt models, or data pipelines
@@ -78,7 +78,7 @@ Activate this agent when:
### Review Mode (default)
Triggered by `/data-review <path>`
Triggered by `/data review <path>`
**Characteristics:**
- Produces detailed report with all findings
@@ -89,7 +89,7 @@ Triggered by `/data-review <path>`
### Gate Mode
Triggered by `/data-gate <path>` or projman orchestrator domain gate
Triggered by `/data gate <path>` or projman orchestrator domain gate
**Characteristics:**
- Binary PASS/FAIL output
@@ -203,7 +203,7 @@ Blocking Issues (2):
2. portfolio_app/toronto/loaders/census.py:67 - References table 'census_raw' which does not exist
Fix: Table was renamed to 'census_demographics' in migration 003.
Run /data-review for full audit report.
Run /data review for full audit report.
```
### Review Mode Output
@@ -292,7 +292,7 @@ When called as a domain gate by projman orchestrator:
## Example Interactions
**User**: `/data-review dbt/models/staging/`
**User**: `/data review dbt/models/staging/`
**Agent**:
1. Scans all .sql files in staging/
2. Runs dbt_parse to validate project
@@ -301,7 +301,7 @@ When called as a domain gate by projman orchestrator:
5. Cross-references test coverage
6. Returns detailed report
**User**: `/data-gate portfolio_app/toronto/`
**User**: `/data gate portfolio_app/toronto/`
**Agent**:
1. Scans for Python files with pg_query/pg_execute
2. Checks if referenced tables exist

View File

@@ -18,12 +18,12 @@ This project uses the data-platform plugin for data engineering workflows.
| Command | Purpose |
|---------|---------|
| `/data-ingest` | Load data from files or database |
| `/data-profile` | Generate statistical profile |
| `/data-schema` | Show schema information |
| `/data-explain` | Explain dbt model |
| `/data-lineage` | Show data lineage |
| `/data-run` | Execute dbt models |
| `/data ingest` | Load data from files or database |
| `/data profile` | Generate statistical profile |
| `/data schema` | Show schema information |
| `/data explain` | Explain dbt model |
| `/data lineage` | Show data lineage |
| `/data run` | Execute dbt models |
### data_ref Convention
@@ -36,9 +36,9 @@ DataFrames are stored with references. Use meaningful names:
### dbt Workflow
1. Always validate before running: `/data-run` includes automatic `dbt_parse`
1. Always validate before running: `/data run` includes automatic `dbt_parse`
2. For dbt 1.9+, check for deprecated syntax before commits
3. Use `/data-lineage` to understand impact of changes
3. Use `/data lineage` to understand impact of changes
### Database Access
@@ -69,22 +69,22 @@ DATA_PLATFORM_MAX_ROWS=100000
### Data Exploration
```
/data-ingest data/raw_customers.csv
/data-profile raw_customers
/data-schema
/data ingest data/raw_customers.csv
/data profile raw_customers
/data schema
```
### ETL Development
```
/data-schema orders # Understand source
/data-explain stg_orders # Understand transformation
/data-run stg_orders # Test the model
/data-lineage fct_orders # Check downstream impact
/data schema orders # Understand source
/data explain stg_orders # Understand transformation
/data run stg_orders # Test the model
/data lineage fct_orders # Check downstream impact
```
### Database Analysis
```
/data-schema # List all tables
/data schema # List all tables
pg_columns orders # Detailed schema
st_tables # Find spatial data
```

View File

@@ -1,4 +1,8 @@
# /dbt-test - Run dbt Tests
---
name: data dbt-test
---
# /data dbt-test - Run dbt Tests
## Skills to Load
- skills/dbt-workflow.md
@@ -12,7 +16,7 @@ Display header: `DATA-PLATFORM - dbt Tests`
## Usage
```
/dbt-test [selection] [--warn-only]
/data dbt-test [selection] [--warn-only]
```
## Workflow
@@ -32,9 +36,9 @@ Execute `skills/dbt-workflow.md` test workflow:
## Examples
```
/dbt-test # Run all tests
/dbt-test dim_customers # Tests for specific model
/dbt-test tag:critical # Run critical tests only
/data dbt-test # Run all tests
/data dbt-test dim_customers # Tests for specific model
/data dbt-test tag:critical # Run critical tests only
```
## Required MCP Tools

View File

@@ -1,4 +1,8 @@
# /data-explain - dbt Model Explanation
---
name: data explain
---
# /data explain - dbt Model Explanation
## Skills to Load
- skills/dbt-workflow.md
@@ -13,7 +17,7 @@ Display header: `DATA-PLATFORM - Model Explanation`
## Usage
```
/data-explain <model_name>
/data explain <model_name>
```
## Workflow
@@ -26,8 +30,8 @@ Display header: `DATA-PLATFORM - Model Explanation`
## Examples
```
/data-explain dim_customers
/data-explain fct_orders
/data explain dim_customers
/data explain fct_orders
```
## Required MCP Tools

View File

@@ -1,4 +1,5 @@
---
name: data gate
description: Data integrity compliance gate (pass/fail) for sprint execution
gate_contract: v1
arguments:
@@ -7,21 +8,21 @@ arguments:
required: true
---
# /data-gate
# /data gate
Binary pass/fail validation for data integrity compliance. Used by projman orchestrator during sprint execution to gate issue completion.
## Usage
```
/data-gate <path>
/data gate <path>
```
**Examples:**
```
/data-gate ./dbt/models/staging/
/data-gate ./portfolio_app/toronto/parsers/
/data-gate ./dbt/
/data gate ./dbt/models/staging/
/data gate ./portfolio_app/toronto/parsers/
/data gate ./dbt/
```
## What It Does
@@ -63,7 +64,7 @@ Blocking Issues (2):
2. portfolio_app/toronto/loaders/census.py:67 - References table 'census_raw' which does not exist
Fix: Table was renamed to 'census_demographics' in migration 003.
Run /data-review for full audit report.
Run /data review for full audit report.
```
## Integration with projman
@@ -78,9 +79,9 @@ This command is automatically invoked by the projman orchestrator when:
- PASS: Issue can be marked complete
- FAIL: Issue stays open, blocker comment added with failure details
## Differences from /data-review
## Differences from /data review
| Aspect | /data-gate | /data-review |
| Aspect | /data gate | /data review |
|--------|------------|--------------|
| Output | Binary PASS/FAIL | Detailed report with all severities |
| Severity | FAIL only | FAIL + WARN + INFO |
@@ -95,7 +96,7 @@ This command is automatically invoked by the projman orchestrator when:
- **Quick validation**: Fast pass/fail without full report
- **Pre-merge checks**: Verify data changes before integration
For detailed findings including warnings and suggestions, use `/data-review` instead.
For detailed findings including warnings and suggestions, use `/data review` instead.
## Requirements

View File

@@ -1,4 +1,8 @@
# /data-ingest - Data Ingestion
---
name: data ingest
---
# /data ingest - Data Ingestion
## Skills to Load
- skills/mcp-tools-reference.md
@@ -11,7 +15,7 @@ Display header: `DATA-PLATFORM - Ingest`
## Usage
```
/data-ingest [source]
/data ingest [source]
```
## Workflow
@@ -31,9 +35,9 @@ Display header: `DATA-PLATFORM - Ingest`
## Examples
```
/data-ingest data/sales.csv
/data-ingest data/customers.parquet
/data-ingest "SELECT * FROM orders WHERE created_at > '2024-01-01'"
/data ingest data/sales.csv
/data ingest data/customers.parquet
/data ingest "SELECT * FROM orders WHERE created_at > '2024-01-01'"
```
## Required MCP Tools

View File

@@ -1,4 +1,8 @@
# /lineage-viz - Mermaid Lineage Visualization
---
name: data lineage-viz
---
# /data lineage-viz - Mermaid Lineage Visualization
## Skills to Load
- skills/lineage-analysis.md
@@ -12,7 +16,7 @@ Display header: `DATA-PLATFORM - Lineage Visualization`
## Usage
```
/lineage-viz <model_name> [--direction TB|LR] [--depth N]
/data lineage-viz <model_name> [--direction TB|LR] [--depth N]
```
## Workflow
@@ -31,9 +35,9 @@ Display header: `DATA-PLATFORM - Lineage Visualization`
## Examples
```
/lineage-viz dim_customers
/lineage-viz fct_orders --direction TB
/lineage-viz rpt_revenue --depth 2
/data lineage-viz dim_customers
/data lineage-viz fct_orders --direction TB
/data lineage-viz rpt_revenue --depth 2
```
## Required MCP Tools

View File

@@ -1,4 +1,8 @@
# /data-lineage - Data Lineage Visualization
---
name: data lineage
---
# /data lineage - Data Lineage Visualization
## Skills to Load
- skills/lineage-analysis.md
@@ -12,7 +16,7 @@ Display header: `DATA-PLATFORM - Lineage`
## Usage
```
/data-lineage <model_name> [--depth N]
/data lineage <model_name> [--depth N]
```
## Workflow
@@ -25,8 +29,8 @@ Display header: `DATA-PLATFORM - Lineage`
## Examples
```
/data-lineage dim_customers
/data-lineage fct_orders --depth 3
/data lineage dim_customers
/data lineage fct_orders --depth 3
```
## Required MCP Tools

View File

@@ -1,4 +1,8 @@
# /data-profile - Data Profiling
---
name: data profile
---
# /data profile - Data Profiling
## Skills to Load
- skills/data-profiling.md
@@ -12,7 +16,7 @@ Display header: `DATA-PLATFORM - Data Profile`
## Usage
```
/data-profile <data_ref>
/data profile <data_ref>
```
## Workflow
@@ -27,8 +31,8 @@ Execute `skills/data-profiling.md` profiling workflow:
## Examples
```
/data-profile sales_data
/data-profile df_a1b2c3d4
/data profile sales_data
/data profile df_a1b2c3d4
```
## Required MCP Tools

View File

@@ -1,4 +1,8 @@
# /data-quality - Data Quality Assessment
---
name: data quality
---
# /data quality - Data Quality Assessment
## Skills to Load
- skills/data-profiling.md
@@ -12,7 +16,7 @@ Display header: `DATA-PLATFORM - Data Quality`
## Usage
```
/data-quality <data_ref> [--strict]
/data quality <data_ref> [--strict]
```
## Workflow
@@ -33,8 +37,8 @@ Execute `skills/data-profiling.md` quality assessment:
## Examples
```
/data-quality sales_data
/data-quality df_customers --strict
/data quality sales_data
/data quality df_customers --strict
```
## Quality Thresholds

View File

@@ -1,4 +1,5 @@
---
name: data review
description: Audit data integrity, schema validity, and dbt compliance
arguments:
- name: path
@@ -6,21 +7,21 @@ arguments:
required: true
---
# /data-review
# /data review
Comprehensive data integrity audit producing a detailed report with findings at all severity levels. For human review and standalone codebase auditing.
## Usage
```
/data-review <path>
/data review <path>
```
**Examples:**
```
/data-review ./dbt/
/data-review ./portfolio_app/toronto/
/data-review ./dbt/models/marts/
/data review ./dbt/
/data review ./portfolio_app/toronto/
/data review ./dbt/models/marts/
```
## What It Does
@@ -79,46 +80,46 @@ VERDICT: PASS | FAIL (N blocking issues)
### Before Sprint Planning
Audit data layer health to identify tech debt and inform sprint scope.
```
/data-review ./dbt/
/data review ./dbt/
```
### During Code Review
Get detailed data integrity findings alongside code review comments.
```
/data-review ./dbt/models/staging/stg_new_source.sql
/data review ./dbt/models/staging/stg_new_source.sql
```
### After Migrations
Verify schema changes didn't break anything downstream.
```
/data-review ./migrations/
/data review ./migrations/
```
### Periodic Health Checks
Regular data infrastructure audits for proactive maintenance.
```
/data-review ./data_pipeline/
/data review ./data_pipeline/
```
### New Project Onboarding
Understand the current state of data architecture.
```
/data-review .
/data review .
```
## Severity Levels
| Level | Meaning | Gate Impact |
|-------|---------|-------------|
| **FAIL** | Blocking issues that will cause runtime errors | Would block `/data-gate` |
| **FAIL** | Blocking issues that will cause runtime errors | Would block `/data gate` |
| **WARN** | Quality issues that should be addressed | Does not block gate |
| **INFO** | Suggestions for improvement | Does not block gate |
## Differences from /data-gate
## Differences from /data gate
`/data-review` gives you the full picture. `/data-gate` gives the orchestrator a yes/no.
`/data review` gives you the full picture. `/data gate` gives the orchestrator a yes/no.
| Aspect | /data-gate | /data-review |
| Aspect | /data gate | /data review |
|--------|------------|--------------|
| Output | Binary PASS/FAIL | Detailed report |
| Severity | FAIL only | FAIL + WARN + INFO |
@@ -126,8 +127,8 @@ Understand the current state of data architecture.
| Verbosity | Minimal | Comprehensive |
| Speed | Fast (skips INFO) | Thorough |
Use `/data-review` when you want to understand.
Use `/data-gate` when you want to automate.
Use `/data review` when you want to understand.
Use `/data gate` when you want to automate.
## Requirements
@@ -144,6 +145,6 @@ Use `/data-gate` when you want to automate.
## Related Commands
- `/data-gate` - Binary pass/fail for automation
- `/data-lineage` - Visualize dbt model dependencies
- `/data-schema` - Explore database schema
- `/data gate` - Binary pass/fail for automation
- `/data lineage` - Visualize dbt model dependencies
- `/data schema` - Explore database schema

View File

@@ -1,4 +1,8 @@
# /data-run - Execute dbt Models
---
name: data run
---
# /data run - Execute dbt Models
## Skills to Load
- skills/dbt-workflow.md
@@ -12,7 +16,7 @@ Display header: `DATA-PLATFORM - dbt Run`
## Usage
```
/data-run [model_selection] [--full-refresh]
/data run [model_selection] [--full-refresh]
```
## Workflow
@@ -30,11 +34,11 @@ See `skills/dbt-workflow.md` for full selection patterns.
## Examples
```
/data-run # Run all models
/data-run dim_customers # Run specific model
/data-run +fct_orders # Run model and upstream
/data-run tag:daily # Run models with tag
/data-run --full-refresh # Rebuild incremental models
/data run # Run all models
/data run dim_customers # Run specific model
/data run +fct_orders # Run model and upstream
/data run tag:daily # Run models with tag
/data run --full-refresh # Rebuild incremental models
```
## Required MCP Tools

View File

@@ -1,4 +1,8 @@
# /data-schema - Schema Exploration
---
name: data schema
---
# /data schema - Schema Exploration
## Skills to Load
- skills/mcp-tools-reference.md
@@ -11,7 +15,7 @@ Display header: `DATA-PLATFORM - Schema Explorer`
## Usage
```
/data-schema [table_name | data_ref]
/data schema [table_name | data_ref]
```
## Workflow
@@ -30,9 +34,9 @@ Display header: `DATA-PLATFORM - Schema Explorer`
## Examples
```
/data-schema # List all tables and DataFrames
/data-schema customers # Show table schema
/data-schema sales_data # Show DataFrame schema
/data schema # List all tables and DataFrames
/data schema customers # Show table schema
/data schema sales_data # Show DataFrame schema
```
## Required MCP Tools

View File

@@ -1,4 +1,8 @@
# /data-setup - Data Platform Setup Wizard
---
name: data setup
---
# /data setup - Data Platform Setup Wizard
## Skills to Load
- skills/setup-workflow.md
@@ -11,7 +15,7 @@ Display header: `DATA-PLATFORM - Setup Wizard`
## Usage
```
/data-setup
/data setup
```
## Workflow

View File

@@ -0,0 +1,24 @@
---
description: Data engineering tools with pandas, PostgreSQL, and dbt
---
# /data
Data engineering tools with pandas, PostgreSQL/PostGIS, and dbt integration.
## Sub-commands
| Sub-command | Description |
|-------------|-------------|
| `/data ingest` | Load data from CSV, Parquet, JSON into DataFrame |
| `/data profile` | Generate data profiling report with statistics |
| `/data schema` | Explore database schemas, tables, columns |
| `/data explain` | Explain query execution plan |
| `/data lineage` | Show dbt model lineage and dependencies |
| `/data lineage-viz` | dbt lineage visualization as Mermaid diagrams |
| `/data run` | Run dbt models with validation |
| `/data dbt-test` | Formatted dbt test runner with summary |
| `/data quality` | DataFrame quality checks |
| `/data review` | Comprehensive data integrity audits |
| `/data gate` | Binary pass/fail data integrity gates |
| `/data setup` | Setup wizard for data-platform MCP servers |

View File

@@ -215,7 +215,7 @@ Blocking Issues (N):
2. <location> - <violation description>
Fix: <actionable fix>
Run /data-review for full audit report.
Run /data review for full audit report.
```
### Review Mode (Detailed)
@@ -293,7 +293,7 @@ Do not flag violations in:
When called as a domain gate:
1. Orchestrator detects `Domain/Data` label on issue
2. Orchestrator identifies changed files
3. Orchestrator invokes `/data-gate <path>`
3. Orchestrator invokes `/data gate <path>`
4. Agent runs gate mode scan
5. Returns PASS/FAIL to orchestrator
6. Orchestrator decides whether to complete issue
@@ -301,7 +301,7 @@ When called as a domain gate:
### Standalone Usage
For manual audits:
1. User runs `/data-review <path>`
1. User runs `/data review <path>`
2. Agent runs full review mode scan
3. Returns detailed report with all severity levels
4. User decides on actions

View File

@@ -0,0 +1,25 @@
{
"name": "data-seed",
"version": "1.0.0",
"description": "Test data generation and database seeding with reproducible profiles",
"author": {
"name": "Leo Miranda",
"email": "leobmiranda@gmail.com"
},
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/data-seed/README.md",
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"license": "MIT",
"keywords": [
"test-data",
"seeding",
"faker",
"fixtures",
"schema",
"database",
"reproducible"
],
"commands": [
"./commands/"
],
"domain": "data"
}

View File

@@ -0,0 +1,74 @@
# data-seed Plugin
Test data generation and database seeding with reproducible profiles for Claude Code.
## Overview
The data-seed plugin generates realistic test data from schema definitions. It supports multiple schema sources (SQLAlchemy, Prisma, Django ORM, raw SQL DDL), handles foreign key dependencies automatically, and produces output in SQL, JSON, or CSV formats.
Key features:
- **Schema-first**: Parses your existing schema — no manual configuration needed
- **Realistic data**: Locale-aware faker providers for names, emails, addresses, and more
- **Reproducible**: Deterministic generation from seed profiles
- **Dependency-aware**: Resolves FK relationships and generates in correct insertion order
- **Profile-based**: Reusable profiles for small (unit tests), medium (development), and large (stress tests)
## Installation
This plugin is part of the Leo Claude Marketplace. Install via the marketplace or copy the `plugins/data-seed/` directory to your Claude Code plugins path.
## Commands
| Command | Description |
|---------|-------------|
| `/seed setup` | Setup wizard — detect schema source, configure output format |
| `/seed generate` | Generate seed data from schema or models |
| `/seed apply` | Apply seed data to database or create fixture files |
| `/seed profile` | Define and manage reusable data profiles |
| `/seed validate` | Validate seed data against schema constraints |
## Quick Start
```
/seed setup # Detect schema, configure output
/seed generate # Generate data with medium profile
/seed validate # Verify generated data integrity
/seed apply # Write fixture files
```
## Agents
| Agent | Model | Role |
|-------|-------|------|
| `seed-generator` | Sonnet | Data generation, profile management, and seed application |
| `seed-validator` | Haiku | Read-only validation of seed data integrity |
## Skills
| Skill | Purpose |
|-------|---------|
| `schema-inference` | Parse ORM models and SQL DDL into normalized schema |
| `faker-patterns` | Map columns to realistic faker providers |
| `relationship-resolution` | FK dependency ordering and circular dependency handling |
| `profile-management` | Seed profile CRUD and configuration |
| `visual-header` | Standard visual output formatting |
## Supported Schema Sources
- SQLAlchemy models (2.0+ and legacy 1.x)
- Prisma schema
- Django ORM models
- Raw SQL DDL (CREATE TABLE statements)
- JSON Schema definitions
## Output Formats
- SQL INSERT statements
- JSON fixtures (Django-compatible)
- CSV files
- Prisma seed scripts
- Python factory objects
## License
MIT License — Part of the Leo Claude Marketplace.

View File

@@ -0,0 +1,96 @@
---
name: seed-generator
description: Data generation, profile management, and seed application. Use when generating test data, managing seed profiles, or applying fixtures to databases.
model: sonnet
permissionMode: acceptEdits
---
# Seed Generator Agent
You are a test data generation specialist. Your role is to create realistic, schema-compliant seed data for databases and fixture files using faker patterns, profile-based configuration, and dependency-aware insertion ordering.
## Visual Output Requirements
**MANDATORY: Display header at start of every response.**
```
+----------------------------------------------------------------------+
| DATA-SEED - [Command Name] |
| [Context Line] |
+----------------------------------------------------------------------+
```
## Trigger Conditions
Activate this agent when:
- User runs `/seed setup`
- User runs `/seed generate [options]`
- User runs `/seed apply [options]`
- User runs `/seed profile [action]`
## Skills to Load
- skills/schema-inference.md
- skills/faker-patterns.md
- skills/relationship-resolution.md
- skills/profile-management.md
- skills/visual-header.md
## Core Principles
### Schema-First Approach
Always derive data generation rules from the schema definition, never from assumptions:
- Parse the actual schema source (SQLAlchemy, Prisma, Django, raw SQL)
- Respect every constraint: NOT NULL, UNIQUE, CHECK, foreign keys, defaults
- Map types precisely — do not generate strings for integer columns or vice versa
### Reproducibility
- Seed the random number generator from the profile name + table name for deterministic output
- Same profile + same schema = same data every time
- Document the seed value in output metadata for reproducibility
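A minimal sketch of this convention, assuming the Python `Faker` library; the hash-based derivation of the seed value is illustrative, not prescribed:

```python
import hashlib

from faker import Faker

def seeded_faker(profile_name: str, table_name: str, locale: str = "en_US") -> Faker:
    """Derive a deterministic Faker instance from profile + table name."""
    # hashlib is stable across runs and machines, unlike Python's built-in hash().
    digest = hashlib.sha256(f"{profile_name}:{table_name}".encode()).hexdigest()
    fake = Faker(locale)
    fake.seed_instance(int(digest[:8], 16))  # same profile + table -> same data
    return fake

fake = seeded_faker("medium", "users")
print(fake.name())  # identical output on every run
```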
### Realistic Data
- Use locale-aware faker providers for names, addresses, phone numbers
- Generate plausible relationships (not every user has exactly one order)
- Include edge cases at configurable ratios (empty strings, boundary integers, unicode)
- Distribute enum values with realistic skew (not uniform)
### Safety
- Never modify schema or drop tables
- Database operations always wrapped in transactions
- TRUNCATE operations require explicit user confirmation
- Display execution plan before applying to database
## Operating Modes
### Setup Mode
- Detect project ORM/schema type
- Configure output format and directory
- Initialize default profiles
### Generate Mode
- Parse schema, resolve dependencies, generate data
- Output to configured format (SQL, JSON, CSV, factory objects)
### Apply Mode
- Read generated seed data
- Apply to database or write framework-specific fixture files
- Support clean (TRUNCATE) + seed workflow
### Profile Mode
- CRUD operations on data profiles
- Configure row counts, edge case ratios, custom overrides
## Error Handling
| Error | Response |
|-------|----------|
| Schema source not found | Prompt user to run `/seed setup` |
| Circular FK dependency detected | Use deferred constraint strategy, explain to user |
| UNIQUE constraint collision after 100 retries | FAIL: report column and suggest increasing uniqueness pool |
| Database connection failed (apply mode) | Report error, suggest using file target instead |
| Unsupported ORM dialect | WARN: fall back to raw SQL DDL parsing |
## Communication Style
Clear and structured. Show what will be generated before generating it. Display progress per table during generation. Summarize output with file paths and row counts. For errors, explain the constraint that was violated and suggest a fix.

View File

@@ -0,0 +1,106 @@
---
name: seed-validator
description: Read-only validation of seed data integrity and schema compliance. Use when verifying generated test data against constraints and referential integrity.
model: haiku
permissionMode: plan
disallowedTools: Write, Edit, MultiEdit
---
# Seed Validator Agent
You are a strict seed data integrity auditor. Your role is to validate generated test data against schema definitions, checking type constraints, referential integrity, uniqueness, and statistical properties. You never modify files or data — analysis and reporting only.
## Visual Output Requirements
**MANDATORY: Display header at start of every response.**
```
+----------------------------------------------------------------------+
| DATA-SEED - Validate |
| [Profile Name or Target Path] |
+----------------------------------------------------------------------+
```
## Trigger Conditions
Activate this agent when:
- User runs `/seed validate [options]`
- Generator agent requests post-generation validation
## Skills to Load
- skills/schema-inference.md
- skills/relationship-resolution.md
- skills/visual-header.md
## Validation Categories
### Type Constraints (FAIL on violation)
- Integer columns must contain valid integers within type range
- String columns must not exceed declared max length
- Date/datetime columns must contain parseable ISO 8601 values
- Boolean columns must contain only true/false/null
- Decimal columns must respect declared precision and scale
- UUID columns must match UUID v4 format
- Enum columns must contain only declared valid values
### Referential Integrity (FAIL on violation)
- Every foreign key value must reference an existing parent row
- Self-referential keys must reference rows in the same table
- Many-to-many through tables must have valid references on both sides
- Cascading dependency chains must be intact
### Uniqueness (FAIL on violation)
- Single-column UNIQUE constraints: no duplicates
- Composite unique constraints: no duplicate tuples
- Primary key uniqueness across all rows
### NOT NULL (FAIL on violation)
- Required columns must not contain null values in any row
### Statistical Properties (WARN level, --strict only)
- Null ratio within tolerance of profile target
- Edge case ratio within tolerance of profile target
- Value distribution not unrealistically uniform for enum/category columns
- Date ranges within reasonable bounds
- Numeric values within sensible ranges for domain
## Report Format
```
+----------------------------------------------------------------------+
| DATA-SEED - Validate |
| Profile: [name] |
+----------------------------------------------------------------------+
Tables Validated: N
Rows Checked: N
Constraints Verified: N
FAIL (N)
1. [table.column] Description of violation
Fix: Suggested corrective action
WARN (N)
1. [table.column] Description of concern
Suggestion: Recommended improvement
INFO (N)
1. [table] Statistical observation
Note: Context
VERDICT: PASS | FAIL (N blocking issues)
```
## Error Handling
| Error | Response |
|-------|----------|
| No seed data found | Report error, suggest running `/seed generate` |
| Schema source missing | Report error, suggest running `/seed setup` |
| Malformed seed file | FAIL: report file path and parse error |
| Profile not found | Use default profile, WARN about missing profile |
## Communication Style
Precise and concise. Report exact locations of violations with table name, column name, and row numbers where applicable. Group findings by severity. Always include a clear PASS/FAIL verdict at the end.

View File

@@ -0,0 +1,93 @@
# data-seed Plugin - CLAUDE.md Integration
Add this section to your project's CLAUDE.md to enable data-seed plugin features.
## Suggested CLAUDE.md Section
````markdown
## Test Data Generation (data-seed)
This project uses the data-seed plugin for test data generation and database seeding.
### Configuration
**Schema Source**: Auto-detected from project ORM (SQLAlchemy, Prisma, Django, raw SQL)
**Output Directory**: `seeds/` or `fixtures/` (configurable via `/seed setup`)
**Profiles**: `seed-profiles.json` in output directory
### Available Commands
| Command | Purpose |
|---------|---------|
| `/seed setup` | Configure schema source and output format |
| `/seed generate` | Generate test data from schema |
| `/seed apply` | Apply seed data to database or fixture files |
| `/seed profile` | Manage data profiles (small, medium, large) |
| `/seed validate` | Validate seed data against schema constraints |
### Data Profiles
| Profile | Rows/Table | Edge Cases | Use Case |
|---------|------------|------------|----------|
| `small` | 10 | None | Unit tests |
| `medium` | 100 | 10% | Development |
| `large` | 1000 | 5% | Performance testing |
### Typical Workflow
```
/seed setup # First-time configuration
/seed generate --profile medium # Generate development data
/seed validate # Verify integrity
/seed apply --target file # Write fixture files
```
### Custom Profiles
Create custom profiles for project-specific needs:
```
/seed profile create staging
```
Override row counts per table and set custom value pools for enum columns.
````
## Environment Variables
Add to project `.env` if needed:
```env
# Seed data configuration
SEED_OUTPUT_DIR=./seeds
SEED_DEFAULT_PROFILE=medium
SEED_DEFAULT_LOCALE=en_US
```
## Typical Workflows
### Initial Setup
```
/seed setup # Detect schema, configure output
/seed generate # Generate with default profile
/seed validate # Verify data integrity
```
### CI/CD Integration
```
/seed generate --profile small # Fast, minimal data for tests
/seed apply --target file # Write fixtures
# Run test suite with fixtures
```
### Development Environment
```
/seed generate --profile medium # Realistic development data
/seed apply --target database --clean # Clean and seed database
```
### Performance Testing
```
/seed generate --profile large # High-volume data
/seed apply --target database # Load into test database
# Run performance benchmarks
```

View File

@@ -0,0 +1,70 @@
---
name: seed apply
---
# /seed apply - Apply Seed Data
## Skills to Load
- skills/profile-management.md
- skills/visual-header.md
## Visual Output
Display header: `DATA-SEED - Apply`
## Usage
```
/seed apply [--profile <name>] [--target <database|file>] [--clean] [--dry-run]
```
## Workflow
### 1. Locate Seed Data
- Look for generated seed files in configured output directory
- If no seed data found, prompt user to run `/seed generate` first
- Display available seed datasets with timestamps and profiles
### 2. Determine Target
- `--target database`: Apply directly to connected database via SQL execution
- `--target file` (default): Write fixture files for framework consumption
- Auto-detect framework for file output:
- Django: `fixtures/` directory as JSON fixtures compatible with `loaddata`
- SQLAlchemy: Python factory files or SQL insert scripts
- Prisma: `prisma/seed.ts` compatible format
- Generic: SQL insert statements or CSV files
### 3. Pre-Apply Validation
- If targeting database: verify connection, check table existence
- If `--clean` specified: generate TRUNCATE/DELETE statements for affected tables (respecting FK order)
- Display execution plan showing table order, row counts, and clean operations
- If `--dry-run`: display plan and exit without applying
### 4. Apply Data
- Execute in dependency order (parents before children)
- If targeting database: wrap in a transaction and roll back on error (see the sketch after this list)
- If targeting files: write all files atomically
- Track progress: display per-table status during application
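A sketch of the database path, assuming SQLAlchemy 2.0; the DSN and the `ordered_inserts` structure prepared by the generate step are illustrative:

```python
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://localhost/testdb")  # illustrative DSN

# Prepared by the generate step: (table, statements) pairs in dependency order.
ordered_inserts = [
    ("users", ["INSERT INTO users (id, name) VALUES (1, 'Alice Johnson')"]),
    ("orders", ["INSERT INTO orders (id, user_id) VALUES (1, 1)"]),
]

with engine.begin() as conn:  # commits on success, rolls back on any error
    for table, statements in ordered_inserts:
        for stmt in statements:
            conn.execute(text(stmt))
        print(f"{table}: {len(statements)} rows applied")
```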
### 5. Post-Apply Summary
- Report rows inserted per table
- Report any errors or skipped rows
- Display total execution time
- If database target: verify row counts match expectations
## Examples
```
/seed apply # Write fixture files (default)
/seed apply --target database # Insert directly into database
/seed apply --profile small --clean # Clean + apply small dataset
/seed apply --dry-run # Preview without applying
/seed apply --target database --clean # Truncate then seed database
```
## Safety
- Database operations always use transactions
- `--clean` requires explicit confirmation before executing TRUNCATE
- Never drops tables or modifies schema — seed data only
- `--dry-run` is always safe and produces no side effects

View File

@@ -0,0 +1,71 @@
---
name: seed generate
---
# /seed generate - Generate Seed Data
## Skills to Load
- skills/schema-inference.md
- skills/faker-patterns.md
- skills/relationship-resolution.md
- skills/visual-header.md
## Visual Output
Display header: `DATA-SEED - Generate`
## Usage
```
/seed generate [table_name] [--profile <name>] [--rows <count>] [--format <sql|json|csv>] [--locale <locale>]
```
## Workflow
### 1. Parse Schema
- Load schema from configured source (see `/seed setup`)
- Extract tables, columns, types, constraints, and relationships
- Use `skills/schema-inference.md` to normalize types across ORM dialects
### 2. Resolve Generation Order
- Build dependency graph from foreign key relationships
- Use `skills/relationship-resolution.md` to determine insertion order
- Handle circular dependencies via deferred constraint resolution
- If specific `table_name` provided, generate only that table plus its dependencies
### 3. Select Profile
- Load profile from `seed-profiles.json` (default: `medium`)
- Override row count if `--rows` specified
- Apply profile-specific edge case ratios and custom value overrides
### 4. Generate Data
- For each table in dependency order:
- Map column types to faker providers using `skills/faker-patterns.md`
- Respect NOT NULL constraints (never generate null for required fields)
- Respect UNIQUE constraints (track generated values, retry on collision; sketched after this list)
- Generate foreign key values from previously generated parent rows
- Apply locale-specific patterns for names, addresses, phone numbers
- Handle enum/check constraints by selecting from valid values only
- Include edge cases per profile settings (empty strings, boundary values, unicode)
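A sketch of the UNIQUE retry rule, mirroring the generator agent's 100-retry failure threshold:

```python
def unique_value(generator, seen: set, max_retries: int = 100):
    """Call generator() until a fresh value appears; fail after max_retries."""
    for _ in range(max_retries):
        value = generator()
        if value not in seen:
            seen.add(value)
            return value
    raise RuntimeError("UNIQUE collision after 100 retries; widen the value pool")
```

Usage: `unique_value(fake.email, seen=set())`, assuming a seeded `fake` instance.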
### 5. Output Results
- Write generated data in requested format to configured output directory
- Display summary: tables generated, row counts, file paths
- Report any constraint violations or generation warnings
## Examples
```
/seed generate # All tables, medium profile
/seed generate users # Only users table + dependencies
/seed generate --profile large # All tables, 1000 rows each
/seed generate orders --rows 50 # 50 order rows
/seed generate --format json # Output as JSON fixtures
/seed generate --locale pt_BR # Brazilian Portuguese data
```
## Edge Cases
- Self-referential foreign keys (e.g., `manager_id` on `employees`): generate root rows first, then assign managers from existing rows
- Many-to-many through tables: generate both sides first, then populate junction table
- Nullable foreign keys: generate null values at the profile's configured null ratio

View File

@@ -0,0 +1,86 @@
---
name: seed profile
---
# /seed profile - Manage Data Profiles
## Skills to Load
- skills/profile-management.md
- skills/visual-header.md
## Visual Output
Display header: `DATA-SEED - Profile Management`
## Usage
```
/seed profile list
/seed profile show <name>
/seed profile create <name>
/seed profile edit <name>
/seed profile delete <name>
```
## Workflow
### list — Show All Profiles
- Read `seed-profiles.json` from configured output directory
- Display table: name, row counts per table, edge case ratio, description
- Highlight the default profile
### show — Profile Details
- Display full profile definition including:
- Per-table row counts
- Edge case configuration (null ratio, boundary values, unicode strings)
- Custom value overrides per column
- Locale settings
- Relationship density settings
### create — New Profile
- Ask user for profile name and description
- Ask for base row count (applies to all tables unless overridden)
- Ask for per-table overrides (optional)
- Ask for edge case ratio (0.0 = no edge cases, 1.0 = all edge cases)
- Ask for custom column overrides (e.g., `users.role` always "admin")
- Save to `seed-profiles.json`
### edit — Modify Profile
- Load existing profile, display current values
- Allow user to modify any field interactively
- Save updated profile
### delete — Remove Profile
- Confirm deletion with user
- Cannot delete the last remaining profile
- Remove from `seed-profiles.json`
## Profile Schema
```json
{
"name": "medium",
"description": "Realistic dataset for development and manual testing",
"default_rows": 100,
"table_overrides": {
"users": 50,
"orders": 200,
"order_items": 500
},
"edge_case_ratio": 0.1,
"null_ratio": 0.05,
"locale": "en_US",
"custom_values": {
"users.status": ["active", "active", "active", "inactive"],
"users.role": ["user", "user", "user", "admin"]
}
}
```
## Built-in Profiles
| Profile | Rows | Edge Cases | Use Case |
|---------|------|------------|----------|
| `small` | 10 | 0% | Unit tests, quick validation |
| `medium` | 100 | 10% | Development, manual testing |
| `large` | 1000 | 5% | Performance testing, stress testing |

View File

@@ -0,0 +1,59 @@
---
name: seed setup
---
# /seed setup - Data Seed Setup Wizard
## Skills to Load
- skills/schema-inference.md
- skills/visual-header.md
## Visual Output
Display header: `DATA-SEED - Setup Wizard`
## Usage
```
/seed setup
```
## Workflow
### Phase 1: Environment Detection
- Detect project type: Python (SQLAlchemy, Django ORM), Node.js (Prisma, TypeORM), or raw SQL
- Check for existing schema files: `schema.prisma`, `models.py`, `*.sql` DDL files
- Identify package manager and installed ORM libraries
### Phase 2: Schema Source Configuration
- Ask user to confirm detected schema source or specify manually
- Supported sources:
- SQLAlchemy models (`models.py`, `models/` directory)
- Prisma schema (`prisma/schema.prisma`)
- Django models (`models.py` with Django imports)
- Raw SQL DDL files (`*.sql` with CREATE TABLE statements)
- JSON Schema definitions (`*.schema.json`)
- Store schema source path for future commands
### Phase 3: Output Configuration
- Ask preferred output format: SQL inserts, JSON fixtures, CSV files, or ORM factory objects
- Ask preferred output directory (default: `seeds/` or `fixtures/`)
- Ask default locale for faker data (default: `en_US`)
### Phase 4: Profile Initialization
- Create default profiles if none exist:
- `small` — 10 rows per table, minimal relationships
- `medium` — 100 rows per table, realistic relationships
- `large` — 1000 rows per table, stress-test volume
- Store profiles in `seed-profiles.json` in output directory
### Phase 5: Validation
- Verify schema can be parsed from detected source
- Display summary with detected tables, column counts, and relationship map
- Inform user of available commands
## Important Notes
- Uses Bash, Read, Write, AskUserQuestion tools
- Does not require database connection (schema-first approach)
- Profile definitions are portable across environments

View File

@@ -0,0 +1,98 @@
---
name: seed validate
---
# /seed validate - Validate Seed Data
## Skills to Load
- skills/schema-inference.md
- skills/relationship-resolution.md
- skills/visual-header.md
## Visual Output
Display header: `DATA-SEED - Validate`
## Usage
```
/seed validate [--profile <name>] [--strict]
```
## Workflow
### 1. Load Schema and Seed Data
- Parse schema from configured source using `skills/schema-inference.md`
- Load generated seed data from output directory
- If no seed data found, report error and suggest running `/seed generate`
### 2. Type Constraint Validation
- For each column in each table, verify generated values match declared type:
- Integer columns contain only integers within range (INT, BIGINT, SMALLINT)
- String columns respect max length constraints (VARCHAR(N))
- Date/datetime columns contain parseable date values
- Boolean columns contain only true/false/null
- Decimal columns respect precision and scale
- UUID columns contain valid UUID format
- Enum columns contain only declared valid values
### 3. Referential Integrity Validation
- Use `skills/relationship-resolution.md` to build FK dependency graph
- For every foreign key value in child tables, verify parent row exists
- For self-referential keys, verify referenced row exists in same table
- For many-to-many through tables, verify both sides exist
- Report orphaned references as FAIL
### 4. Constraint Compliance
- NOT NULL: verify no null values in required columns
- UNIQUE: verify no duplicate values in unique columns or unique-together groups
- CHECK constraints: evaluate check expressions against generated data
- Default values: verify defaults are applied where column value is omitted
### 5. Statistical Validation (--strict mode)
- Verify null ratio matches profile configuration within tolerance
- Verify edge case ratio matches profile configuration
- Verify row counts match profile specification
- Verify distribution of enum/category values is not unrealistically uniform
- Verify date ranges are within reasonable bounds (not year 9999)
### 6. Report
- Display validation results grouped by severity:
- **FAIL**: Type mismatch, FK violation, NOT NULL violation, UNIQUE violation
- **WARN**: Unrealistic distributions, unexpected null ratios, date range issues
- **INFO**: Statistics summary, coverage metrics
```
+----------------------------------------------------------------------+
| DATA-SEED - Validate |
| Profile: medium |
+----------------------------------------------------------------------+
Tables Validated: 8
Rows Checked: 1,450
Constraints Verified: 42
FAIL (0)
No blocking violations found.
WARN (2)
1. [orders.created_at] Date range spans 200 years
Suggestion: Constrain date generator to recent years
2. [users.email] 3 duplicate values detected
Suggestion: Increase faker uniqueness retry count
INFO (1)
1. [order_items] Null ratio 0.12 (profile target: 0.10)
Within acceptable tolerance.
VERDICT: PASS (0 blocking issues)
```
## Examples
```
/seed validate # Standard validation
/seed validate --profile large # Validate large profile data
/seed validate --strict # Include statistical checks
```

View File

@@ -0,0 +1,17 @@
---
description: Test data generation — create realistic fake data from schema definitions
---
# /seed
Test data generation and database seeding with reproducible profiles.
## Sub-commands
| Sub-command | Description |
|-------------|-------------|
| `/seed setup` | Setup wizard for data-seed configuration |
| `/seed generate` | Generate seed data from schema or models |
| `/seed apply` | Apply seed data to database or create fixture files |
| `/seed profile` | Define reusable data profiles (small, medium, large) |
| `/seed validate` | Validate seed data against schema constraints |

View File

@@ -0,0 +1,90 @@
---
name: faker-patterns
description: Realistic data generation patterns using faker providers with locale awareness
---
# Faker Patterns
## Purpose
Map schema column types and naming conventions to appropriate faker data generators. This skill ensures generated test data is realistic, locale-aware, and respects type constraints.
---
## Column Name to Provider Mapping
Use column name heuristics to select the most realistic faker provider:
| Column Name Pattern | Faker Provider | Example Output |
|---------------------|---------------|----------------|
| `*name`, `first_name` | `faker.name()` / `faker.first_name()` | "Alice Johnson" |
| `*last_name`, `surname` | `faker.last_name()` | "Rodriguez" |
| `*email` | `faker.email()` | "alice@example.com" |
| `*phone*`, `*tel*` | `faker.phone_number()` | "+1-555-0123" |
| `*address*`, `*street*` | `faker.street_address()` | "742 Evergreen Terrace" |
| `*city` | `faker.city()` | "Toronto" |
| `*state*`, `*province*` | `faker.state()` | "Ontario" |
| `*country*` | `faker.country()` | "Canada" |
| `*zip*`, `*postal*` | `faker.postcode()` | "M5V 2H1" |
| `*url*`, `*website*` | `faker.url()` | "https://example.com" |
| `*company*`, `*org*` | `faker.company()` | "Acme Corp" |
| `*title*`, `*subject*` | `faker.sentence(nb_words=5)` | "Updated quarterly report summary" |
| `*description*`, `*bio*`, `*body*` | `faker.paragraph()` | Multi-sentence text |
| `*created*`, `*updated*`, `*_at` | `faker.date_time_between(start_date='-2y')` | "2024-06-15T10:30:00" |
| `*date*`, `*dob*`, `*birth*` | `faker.date_of_birth(minimum_age=18)` | "1990-03-22" |
| `*price*`, `*amount*`, `*cost*` | `faker.pydecimal(min_value=0.01, max_value=9999.99)` | 49.99 |
| `*quantity*`, `*count*` | `faker.random_int(min=1, max=100)` | 7 |
| `*status*` | Random from enum or `["active", "inactive", "pending"]` | "active" |
| `*uuid*`, `*guid*` | `faker.uuid4()` | "550e8400-e29b-41d4-a716-446655440000" |
| `*ip*`, `*ip_address*` | `faker.ipv4()` | "192.168.1.42" |
| `*color*`, `*colour*` | `faker.hex_color()` | "#3498db" |
| `*password*`, `*hash*` | `faker.sha256()` | Hash string (never plaintext) |
| `*image*`, `*avatar*`, `*photo*` | `faker.image_url()` | "https://picsum.photos/200" |
| `*slug*` | `faker.slug()` | "updated-quarterly-report" |
| `*username*`, `*login*` | `faker.user_name()` | "alice_johnson42" |
## Type Fallback Mapping
When the column name does not match any pattern, fall back to type-based generation:
| Canonical Type | Generator |
|----------------|-----------|
| `string` | `faker.pystr(max_chars=max_length)` |
| `integer` | `faker.random_int(min=0, max=2147483647)` |
| `float` | `faker.pyfloat(min_value=0, max_value=10000)` |
| `decimal` | `faker.pydecimal(left_digits=precision-scale, right_digits=scale)` |
| `boolean` | `faker.pybool()` |
| `datetime` | `faker.date_time_between(start_date='-2y', end_date='now')` |
| `date` | `faker.date_between(start_date='-2y', end_date='today')` |
| `uuid` | `faker.uuid4()` |
| `json` | `{"key": faker.word(), "value": faker.sentence()}` |
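A sketch of the lookup order (name heuristics first, then the type fallback), with both tables abbreviated to two entries each; glob-style matching via `fnmatch` is an assumption:

```python
import fnmatch

from faker import Faker

fake = Faker("en_US")

NAME_PATTERNS = {  # abbreviated from the mapping table above
    "*email": lambda col: fake.email(),
    "*city": lambda col: fake.city(),
}
TYPE_FALLBACKS = {  # abbreviated from the fallback table above
    "string": lambda col: fake.pystr(max_chars=col.get("max_length", 20)),
    "integer": lambda col: fake.random_int(min=0, max=2_147_483_647),
}

def generate_value(column_name: str, column: dict):
    # Name heuristics take priority over the canonical type.
    for pattern, provider in NAME_PATTERNS.items():
        if fnmatch.fnmatch(column_name.lower(), pattern):
            return provider(column)
    return TYPE_FALLBACKS[column["type"]](column)
```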
## Locale Support
Supported locales affect names, addresses, phone formats, and postal codes:
| Locale | Names | Addresses | Phone | Currency |
|--------|-------|-----------|-------|----------|
| `en_US` | English names | US addresses | US format | USD |
| `en_CA` | English names | Canadian addresses | CA format | CAD |
| `en_GB` | English names | UK addresses | UK format | GBP |
| `pt_BR` | Portuguese names | Brazilian addresses | BR format | BRL |
| `fr_FR` | French names | French addresses | FR format | EUR |
| `de_DE` | German names | German addresses | DE format | EUR |
| `ja_JP` | Japanese names | Japanese addresses | JP format | JPY |
| `es_ES` | Spanish names | Spanish addresses | ES format | EUR |
Default locale: `en_US`. Override per-profile or per-command with `--locale`.
## Edge Case Values
Include at configurable ratio (default 10%):
| Type | Edge Cases |
|------|------------|
| `string` | Empty string `""`, max-length string, unicode characters, emoji, SQL special chars `'; DROP TABLE --` |
| `integer` | 0, -1, MAX_INT, MIN_INT |
| `float` | 0.0, -0.0, very small (0.0001), very large (999999.99) |
| `date` | Today, yesterday, epoch (1970-01-01), leap day (2024-02-29) |
| `boolean` | null (if nullable) |
| `email` | Plus-addressed `user+tag@example.com`, long domain, subdomain email |
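A sketch of ratio-based injection, assuming per-type pools like the table above and a profile-supplied ratio:

```python
import random

EDGE_CASES = {  # abbreviated from the table above
    "integer": [0, -1, 2_147_483_647, -2_147_483_648],
    "string": ["", "é漢😀"],
}

def maybe_edge_case(canonical_type: str, normal_value, ratio: float = 0.1):
    """Swap in an edge case at the configured ratio (default 10%)."""
    pool = EDGE_CASES.get(canonical_type)
    if pool and random.random() < ratio:
        return random.choice(pool)
    return normal_value
```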

View File

@@ -0,0 +1,116 @@
---
name: profile-management
description: Seed profile definitions with row counts, edge case ratios, and custom value overrides
---
# Profile Management
## Purpose
Define and manage reusable seed data profiles that control how much data is generated, what edge cases are included, and what custom overrides apply. Profiles enable reproducible, consistent test data across environments.
---
## Profile Storage
Profiles are stored in `seed-profiles.json` in the configured output directory (default: `seeds/` or `fixtures/`).
## Profile Schema
```json
{
"profiles": [
{
"name": "profile-name",
"description": "Human-readable description",
"default_rows": 100,
"table_overrides": {
"table_name": 200
},
"edge_case_ratio": 0.1,
"null_ratio": 0.05,
"locale": "en_US",
"seed_value": 42,
"custom_values": {
"table.column": ["value1", "value2", "value3"]
},
"relationship_density": {
"many_to_many": 0.3,
"self_ref_max_depth": 3
}
}
],
"default_profile": "medium"
}
```
## Field Definitions
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `name` | string | Yes | Unique profile identifier (lowercase, hyphens allowed) |
| `description` | string | No | What this profile is for |
| `default_rows` | integer | Yes | Row count for tables without explicit override |
| `table_overrides` | object | No | Per-table row count overrides |
| `edge_case_ratio` | float | No | Fraction of rows with edge case values (0.0 to 1.0, default 0.1) |
| `null_ratio` | float | No | Fraction of nullable columns set to null (0.0 to 1.0, default 0.05) |
| `locale` | string | No | Faker locale for name/address generation (default "en_US") |
| `seed_value` | integer | No | Random seed for reproducibility (default: hash of profile name) |
| `custom_values` | object | No | Column-specific value pools (table.column -> array of values) |
| `relationship_density` | object | No | Controls many-to-many fill ratio and self-referential depth |
## Built-in Profiles
### small
- `default_rows`: 10
- `edge_case_ratio`: 0.0
- `null_ratio`: 0.0
- Use case: unit tests, schema validation, quick smoke tests
- Characteristics: minimal data, no edge cases, all required fields populated
### medium
- `default_rows`: 100
- `edge_case_ratio`: 0.1
- `null_ratio`: 0.05
- Use case: development, manual testing, demo environments
- Characteristics: realistic volume, occasional edge cases, some nulls
### large
- `default_rows`: 1000
- `edge_case_ratio`: 0.05
- `null_ratio`: 0.03
- Use case: performance testing, pagination testing, stress testing
- Characteristics: high volume, lower edge case ratio to avoid noise
## Custom Value Overrides
Override the faker generator for specific columns with a weighted value pool:
```json
{
"custom_values": {
"users.role": ["user", "user", "user", "admin"],
"orders.status": ["completed", "completed", "pending", "cancelled", "refunded"],
"products.currency": ["USD"]
}
}
```
Values are selected randomly with replacement. Duplicate entries in the array increase that value's probability (e.g., "user" appears 3x = 75% probability).
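Because selection is uniform over the array, repetition is the whole weighting mechanism; a two-line sketch:

```python
import random

pool = ["user", "user", "user", "admin"]  # "user": 3/4 = 75% probability
value = random.choice(pool)               # uniform pick, with replacement
```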
## Profile Operations
### Resolution Order
When determining row count for a table:
1. Command-line `--rows` flag (highest priority)
2. Profile `table_overrides` for that specific table
3. Profile `default_rows`
4. Built-in default: 100
### Validation Rules
- Profile name must be unique within `seed-profiles.json`
- `default_rows` must be >= 1
- `edge_case_ratio` must be between 0.0 and 1.0
- `null_ratio` must be between 0.0 and 1.0
- Custom value arrays must not be empty
- Cannot delete the last remaining profile

View File

@@ -0,0 +1,118 @@
---
name: relationship-resolution
description: Foreign key resolution, dependency ordering, and circular dependency handling for seed data
---
# Relationship Resolution
## Purpose
Determine the correct order for generating and inserting seed data across tables with foreign key dependencies. Handle edge cases including circular dependencies, self-referential relationships, and many-to-many through tables.
---
## Dependency Graph Construction
### Step 1: Extract Foreign Keys
For each table, identify all columns with foreign key constraints:
- Direct FK references to other tables
- Self-referential FKs (same table)
- Composite FKs spanning multiple columns
### Step 2: Build Directed Graph
- Each table is a node
- Each FK creates a directed edge: child -> parent (child depends on parent)
- Self-referential edges are noted but excluded from ordering (handled separately)
### Step 3: Topological Sort
- Apply topological sort to determine insertion order
- Tables with no dependencies come first
- Tables depending on others come after their dependencies
- Result: ordered list where every table's dependencies appear before it
## Insertion Order Example
Given schema:
```
users (no FK)
categories (no FK)
products (FK -> categories)
orders (FK -> users)
order_items (FK -> orders, FK -> products)
reviews (FK -> users, FK -> products)
```
Insertion order: `users, categories, products, orders, order_items, reviews`
Deletion order (reverse): `reviews, order_items, orders, products, categories, users`
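A minimal sketch of the ordering step above (Kahn's algorithm), assuming the graph arrives as a child-to-parents mapping:

```python
from collections import deque

def insertion_order(deps: dict[str, set[str]]) -> list[str]:
    """Topological sort; deps maps each table to the tables it references."""
    remaining = {t: set(parents) for t, parents in deps.items()}
    ready = deque(sorted(t for t, parents in remaining.items() if not parents))
    order: list[str] = []
    while ready:
        table = ready.popleft()
        order.append(table)
        for child, parents in remaining.items():
            if table in parents:
                parents.discard(table)
                if not parents and child not in order and child not in ready:
                    ready.append(child)
    if len(order) != len(remaining):
        raise ValueError("cycle detected; fall back to a deferral strategy")
    return order

deps = {
    "users": set(), "categories": set(),
    "products": {"categories"}, "orders": {"users"},
    "order_items": {"orders", "products"}, "reviews": {"users", "products"},
}
print(insertion_order(deps))
# ['categories', 'users', 'products', 'orders', 'reviews', 'order_items']
```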
## Circular Dependency Handling
When topological sort detects a cycle:
### Strategy 1: Nullable FK Deferral
If one FK in the cycle is nullable:
1. Insert rows with nullable FK set to NULL
2. Complete the cycle for the other table
3. UPDATE the nullable FK to point to the now-existing rows
Example: `departments.manager_id -> employees`, `employees.department_id -> departments`
1. Insert departments with `manager_id = NULL`
2. Insert employees referencing departments
3. UPDATE departments to set `manager_id` to an employee
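The departments/employees example end-to-end, using an in-memory SQLite database for illustration (identifiers and values are placeholders):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE departments (id INTEGER PRIMARY KEY, name TEXT, manager_id INTEGER);
    CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT,
        department_id INTEGER REFERENCES departments(id));
""")

# 1. Insert the parent with the nullable FK left NULL.
cur.execute("INSERT INTO departments (id, name, manager_id) VALUES (?, ?, NULL)",
            (1, "Engineering"))
# 2. The child can now reference it.
cur.execute("INSERT INTO employees (id, name, department_id) VALUES (?, ?, ?)",
            (10, "Alice Johnson", 1))
# 3. Close the cycle with an UPDATE.
cur.execute("UPDATE departments SET manager_id = ? WHERE id = ?", (10, 1))
conn.commit()
```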
### Strategy 2: Deferred Constraints
If database supports deferred constraints (PostgreSQL):
1. Set FK constraints to DEFERRED within transaction
2. Insert all rows in any order
3. Constraints checked at COMMIT time
### Strategy 3: Two-Pass Generation
If neither strategy works:
1. First pass: generate all rows without cross-cycle FK values
2. Second pass: update FK values to reference generated rows from the other table
## Self-Referential Relationships
Common pattern: `employees.manager_id -> employees.id`
### Generation Strategy
1. Generate root rows first (manager_id = NULL) — these are top-level managers
2. Generate second tier referencing root rows
3. Generate remaining rows referencing any previously generated row
4. Depth distribution controlled by profile (default: max depth 3, pyramid shape)
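A sketch of the tiered fill, assuming rows are plain dicts; depth capping per `self_ref_max_depth` is omitted for brevity:

```python
import random

def assign_managers(rows: list[dict], root_ratio: float = 0.1) -> None:
    """Roots get manager_id = None; later rows reference earlier rows only."""
    n_roots = max(1, int(len(rows) * root_ratio))
    for i, row in enumerate(rows):
        if i < n_roots:
            row["manager_id"] = None                        # top-level managers
        else:
            row["manager_id"] = random.choice(rows[:i])["id"]

employees = [{"id": i} for i in range(1, 11)]
assign_managers(employees)
```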
### Configuration
```json
{
"self_ref_null_ratio": 0.1,
"self_ref_max_depth": 3,
"self_ref_distribution": "pyramid"
}
```
## Many-to-Many Through Tables
Detection: a table with exactly two FK columns and no non-FK data columns (excluding PK and timestamps).
### Generation Strategy
1. Generate both parent tables first
2. Generate through table rows pairing random parents
3. Respect uniqueness on the (FK1, FK2) composite — no duplicate pairings
4. Density controlled by profile: sparse (10% of possible pairs), medium (30%), dense (60%)
## Deletion Order
When `--clean` is specified for `/seed apply`:
1. Reverse the insertion order
2. TRUNCATE or DELETE in this order to avoid FK violations
3. For circular dependencies: disable FK checks, truncate, re-enable (with user confirmation)
## Error Handling
| Scenario | Response |
|----------|----------|
| Unresolvable cycle (no nullable FKs, no deferred constraints) | FAIL: report cycle, suggest schema modification |
| Missing parent table in schema | FAIL: report orphaned FK reference |
| FK references non-existent column | FAIL: report schema inconsistency |
| Through table detection false positive | WARN: ask user to confirm junction table identification |

View File

@@ -0,0 +1,81 @@
---
name: schema-inference
description: Infer data types, constraints, and relationships from ORM models or raw SQL DDL
---
# Schema Inference
## Purpose
Parse and normalize schema definitions from multiple ORM dialects into a unified internal representation. This skill enables data generation and validation commands to work across SQLAlchemy, Prisma, Django ORM, and raw SQL DDL without dialect-specific logic in every command.
---
## Supported Schema Sources
| Source | Detection | File Patterns |
|--------|-----------|---------------|
| SQLAlchemy | `from sqlalchemy import`, `Column(`, `mapped_column(` | `models.py`, `models/*.py` |
| Prisma | `model` blocks with `@id`, `@relation` | `prisma/schema.prisma` |
| Django ORM | `from django.db import models`, `models.CharField` | `models.py` with Django imports |
| Raw SQL DDL | `CREATE TABLE` statements | `*.sql`, `schema.sql`, `migrations/*.sql` |
| JSON Schema | `"type": "object"`, `"properties":` | `*.schema.json` |
## Type Normalization
Map dialect-specific types to a canonical set:
| Canonical Type | SQLAlchemy | Prisma | Django | SQL |
|----------------|------------|--------|--------|-----|
| `string` | `String(N)`, `Text` | `String` | `CharField`, `TextField` | `VARCHAR(N)`, `TEXT` |
| `integer` | `Integer`, `BigInteger`, `SmallInteger` | `Int`, `BigInt` | `IntegerField`, `BigIntegerField` | `INT`, `BIGINT`, `SMALLINT` |
| `float` | `Float`, `Numeric` | `Float` | `FloatField` | `FLOAT`, `REAL`, `DOUBLE` |
| `decimal` | `Numeric(P,S)` | `Decimal` | `DecimalField` | `DECIMAL(P,S)`, `NUMERIC(P,S)` |
| `boolean` | `Boolean` | `Boolean` | `BooleanField` | `BOOLEAN`, `BIT` |
| `datetime` | `DateTime` | `DateTime` | `DateTimeField` | `TIMESTAMP`, `DATETIME` |
| `date` | `Date` | `DateTime` | `DateField` | `DATE` |
| `uuid` | `UUID` | `String @default(uuid())` | `UUIDField` | `UUID` |
| `json` | `JSON` | `Json` | `JSONField` | `JSON`, `JSONB` |
| `enum` | `Enum(...)` | `enum` block | `choices=` | `ENUM(...)`, `CHECK IN (...)` |
## Constraint Extraction
For each column, extract:
- **nullable**: Whether NULL values are allowed (default: true unless PK or explicit NOT NULL)
- **unique**: Whether values must be unique
- **max_length**: For string types, the maximum character length
- **precision/scale**: For decimal types
- **default**: Default value expression
- **check**: CHECK constraint expressions (e.g., `age >= 0`)
- **primary_key**: Whether this column is part of the primary key
## Relationship Extraction
Identify foreign key relationships:
- **parent_table**: The referenced table
- **parent_column**: The referenced column (usually PK)
- **on_delete**: CASCADE, SET NULL, RESTRICT, NO ACTION
- **self_referential**: True if FK references same table
- **many_to_many**: Detected from junction/through tables with two FKs and no additional non-FK columns
## Output Format
Internal representation used by other skills:
```json
{
"tables": {
"users": {
"columns": {
"id": {"type": "integer", "primary_key": true, "nullable": false},
"email": {"type": "string", "max_length": 255, "unique": true, "nullable": false},
"name": {"type": "string", "max_length": 100, "nullable": false},
"manager_id": {"type": "integer", "nullable": true, "foreign_key": {"table": "users", "column": "id"}}
},
"relationships": [
{"type": "self_referential", "column": "manager_id", "references": "users.id"}
]
}
}
}
```

View File

@@ -0,0 +1,27 @@
# Visual Header Skill
Standard visual header for data-seed commands.
## Header Template
```
+----------------------------------------------------------------------+
| DATA-SEED - [Context] |
+----------------------------------------------------------------------+
```
## Context Values by Command
| Command | Context |
|---------|---------|
| `/seed setup` | Setup Wizard |
| `/seed generate` | Generate |
| `/seed apply` | Apply |
| `/seed profile` | Profile Management |
| `/seed validate` | Validate |
| Agent mode (seed-generator) | Data Generation |
| Agent mode (seed-validator) | Validation |
## Usage
Display header at the start of every command response before proceeding with the operation.

View File

@@ -0,0 +1,25 @@
{
"name": "debug-mcp",
"version": "1.0.0",
"description": "MCP server debugging, inspection, and development toolkit",
"author": {
"name": "Leo Miranda",
"email": "leobmiranda@gmail.com"
},
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/debug-mcp/README.md",
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"license": "MIT",
"keywords": [
"mcp",
"debugging",
"inspection",
"development",
"server",
"protocol",
"diagnostics"
],
"commands": [
"./commands/"
],
"domain": "debug"
}

View File

@@ -0,0 +1,62 @@
# debug-mcp
MCP server debugging, inspection, and development toolkit.
## Overview
This plugin provides tools for diagnosing MCP server issues, testing tool invocations, analyzing server logs, inspecting configurations and dependencies, and scaffolding new MCP servers. It is essential for maintaining and developing MCP integrations in the Leo Claude Marketplace.
## Commands
| Command | Description |
|---------|-------------|
| `/debug-mcp status` | Show all MCP servers with health status |
| `/debug-mcp test` | Test a specific MCP tool call |
| `/debug-mcp logs` | View recent MCP server logs and errors |
| `/debug-mcp inspect` | Inspect MCP server config and dependencies |
| `/debug-mcp scaffold` | Generate MCP server skeleton project |
## Agent
| Agent | Model | Mode | Purpose |
|-------|-------|------|---------|
| mcp-debugger | sonnet | default | All debug-mcp operations: inspection, testing, log analysis, scaffolding |
## Skills
| Skill | Description |
|-------|-------------|
| mcp-protocol | MCP stdio protocol specification, JSON-RPC messages, tool/resource/prompt definitions |
| server-patterns | Python MCP server directory structure, FastMCP pattern, config loader, entry points |
| venv-diagnostics | Virtual environment health checks: existence, Python binary, packages, imports |
| log-analysis | Common MCP error patterns with root causes and fixes |
| visual-header | Standard command output header |
## Architecture
```
plugins/debug-mcp/
├── .claude-plugin/
│ └── plugin.json
├── commands/
│ ├── debug-mcp.md # Dispatch file
│ ├── debug-mcp-status.md
│ ├── debug-mcp-test.md
│ ├── debug-mcp-logs.md
│ ├── debug-mcp-inspect.md
│ └── debug-mcp-scaffold.md
├── agents/
│ └── mcp-debugger.md
├── skills/
│ ├── mcp-protocol.md
│ ├── server-patterns.md
│ ├── venv-diagnostics.md
│ ├── log-analysis.md
│ └── visual-header.md
├── claude-md-integration.md
└── README.md
```
## License
MIT License - Part of the Leo Claude Marketplace.

View File

@@ -0,0 +1,95 @@
---
name: mcp-debugger
description: MCP server inspection, log analysis, and scaffold generation. Use for debugging MCP connectivity issues, testing tools, inspecting server configs, and creating new MCP servers.
model: sonnet
permissionMode: default
---
# MCP Debugger Agent
You are an MCP (Model Context Protocol) server specialist. You diagnose MCP server issues, inspect configurations, analyze logs, test tool invocations, and scaffold new servers.
## Skills to Load
- `skills/visual-header.md`
- `skills/mcp-protocol.md`
- `skills/server-patterns.md`
- `skills/venv-diagnostics.md`
- `skills/log-analysis.md`
## Visual Output Requirements
**MANDATORY: Display header at start of every response.**
```
+----------------------------------------------------------------------+
| DEBUG-MCP - [Context] |
+----------------------------------------------------------------------+
```
## Core Knowledge
### .mcp.json Structure
The `.mcp.json` file in the project root defines all MCP servers:
```json
{
"mcpServers": {
"server-name": {
"command": "path/to/.venv/bin/python",
"args": ["-m", "mcp_server.server"],
"cwd": "path/to/server/dir"
}
}
}
```
### MCP Server Lifecycle
1. Claude Code reads `.mcp.json` at session start
2. For each server, spawns the command as a subprocess
3. Communication happens over stdio (JSON-RPC)
4. Server registers tools, resources, and prompts
5. Claude Code makes tool calls as needed during conversation
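A hedged sketch of steps 2-3, assuming newline-delimited JSON-RPC over stdio; the MCP initialize handshake that precedes any tool traffic is omitted, and the paths are illustrative:

```python
import json
import subprocess

# Step 2: spawn the server exactly as its .mcp.json entry describes.
proc = subprocess.Popen(
    ["path/to/.venv/bin/python", "-m", "mcp_server.server"],
    cwd="path/to/server/dir",
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

# Step 3: one JSON-RPC message per line; "tools/list" asks what was registered.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()
print(proc.stdout.readline())  # JSON-RPC response listing tool definitions
```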
### Common Failure Points
| Failure | Symptom | Root Cause |
|---------|---------|------------|
| "X MCP servers failed" | Session start warning | Broken venv, missing deps, bad config |
| Tool not found | Tool call returns error | Server not loaded, tool name wrong |
| Timeout | Tool call hangs | Server crashed, infinite loop, network |
| Permission denied | API errors | Invalid token, expired credentials |
## Behavior Guidelines
### Diagnostics
1. **Always start with .mcp.json** - Read it first to understand the server landscape
2. **Check venvs systematically** - Use `skills/venv-diagnostics.md` patterns
3. **Read actual error messages** - Parse logs rather than guessing
4. **Test incrementally** - Verify executable, then import, then tool call
### Scaffolding
1. **Follow existing patterns** - Match the structure of existing servers in `mcp-servers/`
2. **Use FastMCP** - Prefer the decorator-based pattern for new servers
3. **Include config.py** - Always generate a configuration loader with env file support
4. **Register in .mcp.json** - Show the user the entry to add, confirm before writing
### Security
1. **Never display full API tokens** - Mask all but last 4 characters
2. **Check .gitignore** - Ensure credential files are excluded from version control
3. **Validate SSL settings** - Warn if SSL verification is disabled
## Available Commands
| Command | Purpose |
|---------|---------|
| `/debug-mcp status` | Server health overview |
| `/debug-mcp test` | Test a specific tool call |
| `/debug-mcp logs` | View and analyze server logs |
| `/debug-mcp inspect` | Deep server inspection |
| `/debug-mcp scaffold` | Generate new server skeleton |

View File

@@ -0,0 +1,34 @@
# Debug MCP Integration
Add to your project's CLAUDE.md:
## MCP Server Debugging (debug-mcp)
This project uses the **debug-mcp** plugin for diagnosing and developing MCP server integrations.
### Available Commands
| Command | Description |
|---------|-------------|
| `/debug-mcp status` | Show health status of all configured MCP servers |
| `/debug-mcp test` | Test a specific MCP tool call with parameters |
| `/debug-mcp logs` | View and analyze recent MCP server error logs |
| `/debug-mcp inspect` | Deep inspection of server config, dependencies, and tools |
| `/debug-mcp scaffold` | Generate a new MCP server project skeleton |
### Usage Guidelines
- Run `/debug-mcp status` when Claude Code reports MCP server failures at session start
- Use `/debug-mcp inspect <server> --deps` to diagnose missing package issues
- Use `/debug-mcp test <server> <tool>` to verify individual tool functionality
- Use `/debug-mcp logs --errors-only` to quickly find error patterns
- Use `/debug-mcp scaffold` when creating a new MCP server integration
### Common Troubleshooting
| Symptom | Command |
|---------|---------|
| "X MCP servers failed" at startup | `/debug-mcp status` |
| Tool call returns error | `/debug-mcp test <server> <tool>` |
| ImportError in server | `/debug-mcp inspect <server> --deps` |
| Unknown server errors | `/debug-mcp logs --server=<name>` |

View File

@@ -0,0 +1,129 @@
---
name: debug-mcp inspect
description: Inspect MCP server configuration, dependencies, and tool definitions
---
# /debug-mcp inspect
Deep inspection of an MCP server's configuration, dependencies, and tool definitions.
## Skills to Load
- `skills/visual-header.md`
- `skills/venv-diagnostics.md`
- `skills/mcp-protocol.md`
## Agent
Delegate to `agents/mcp-debugger.md`.
## Usage
```
/debug-mcp inspect <server_name> [--tools] [--deps] [--config]
```
**Arguments:**
- `server_name` - Name of the MCP server from .mcp.json
**Options:**
- `--tools` - List all registered tools with their schemas
- `--deps` - Show dependency analysis (installed vs required)
- `--config` - Show configuration files and environment variables
- (no flags) - Show all sections
## Instructions
Execute `skills/visual-header.md` with context "Server Inspection".
### Phase 1: Configuration
1. Read `.mcp.json` and extract the server definition
2. Display:
- Server name
- Command and arguments
- Working directory
- Environment variable references
```
## Server: gitea
### Configuration (.mcp.json)
- Command: /path/to/mcp-servers/gitea/.venv/bin/python
- Args: ["-m", "mcp_server.server"]
- CWD: /path/to/mcp-servers/gitea
- Env file: ~/.config/claude/gitea.env
```
### Phase 2: Dependencies (--deps)
Apply `skills/venv-diagnostics.md`:
1. Read `requirements.txt` from the server's cwd
2. Compare with installed packages:
```bash
cd <cwd> && .venv/bin/pip freeze
```
3. Identify:
- Missing packages (in requirements but not installed)
- Version mismatches (installed version differs from required)
- Extra packages (installed but not in requirements)
```
### Dependencies
| Package | Required | Installed | Status |
|---------|----------|-----------|--------|
| mcp | >=1.0.0 | 1.2.3 | OK |
| httpx | >=0.24 | 0.25.0 | OK |
| pynetbox | >=7.0 | — | MISSING |
- Missing: 1 package
- Mismatched: 0 packages
```
### Phase 3: Tools (--tools)
Parse the server source code to extract tool definitions:
1. Find Python files with `@mcp.tool` decorators or `server.add_tool()` calls
2. Extract tool name, description, and parameter schema
3. Group by module/category if applicable
```
### Tools (42 registered)
#### Issues (6 tools)
| Tool | Description | Params |
|------|-------------|--------|
| list_issues | List issues from repository | state?, labels?, repo? |
| get_issue | Get specific issue | issue_number (required) |
| create_issue | Create new issue | title (required), body (required) |
...
```
### Phase 4: Environment Configuration (--config)
1. Locate env file referenced in .mcp.json
2. Read the file (mask secret values)
3. Check for missing required variables
```
### Environment Configuration
File: ~/.config/claude/gitea.env
| Variable | Value | Status |
|----------|-------|--------|
| GITEA_API_URL | https://gitea.example.com/api/v1 | OK |
| GITEA_API_TOKEN | ****...a1b2 | OK |
File: .env (project level)
| Variable | Value | Status |
|----------|-------|--------|
| GITEA_ORG | personal-projects | OK |
| GITEA_REPO | leo-claude-mktplace | OK |
```
## User Request
$ARGUMENTS

View File

@@ -0,0 +1,98 @@
---
name: debug-mcp logs
description: View recent MCP server logs and error patterns
---
# /debug-mcp logs
View and analyze recent MCP server log output.
## Skills to Load
- `skills/visual-header.md`
- `skills/log-analysis.md`
## Agent
Delegate to `agents/mcp-debugger.md`.
## Usage
```
/debug-mcp logs [--server=<name>] [--lines=<count>] [--errors-only]
```
**Options:**
- `--server` - Filter to a specific server (default: all)
- `--lines` - Number of recent lines to show (default: 50)
- `--errors-only` - Show only error-level log entries
## Instructions
Execute `skills/visual-header.md` with context "Log Analysis".
### Phase 1: Locate Log Sources
MCP servers launched by Claude Code write log output to stderr. Where that output ends up varies:
1. **Claude Code session logs** - Check `~/.claude/logs/` for recent session logs
2. **Server stderr** - If server runs as a subprocess, logs go to Claude Code's stderr
3. **Custom log files** - Some servers may write to files in their cwd
Search for log files:
```bash
# Claude Code logs
ls -la ~/.claude/logs/ 2>/dev/null
# Server-specific logs
ls -la <server_cwd>/*.log 2>/dev/null
ls -la <server_cwd>/logs/ 2>/dev/null
```
### Phase 2: Parse Logs
1. Read the most recent log entries (default 50 lines)
2. Filter by server name if `--server` specified
3. If `--errors-only`, filter for patterns:
- Lines containing `ERROR`, `CRITICAL`, `FATAL`
- Python tracebacks (`Traceback (most recent call last)`)
- JSON-RPC error responses (`"error":`)
### Phase 3: Error Analysis
Apply patterns from `skills/log-analysis.md`:
1. **Categorize errors** by type (ImportError, ConnectionError, TimeoutError, etc.)
2. **Count occurrences** of each error pattern
3. **Identify root cause** using the common patterns from the skill
4. **Suggest fix** for each error category (a categorization sketch follows)
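A sketch of the filter-and-count step; the patterns mirror `skills/log-analysis.md`, while the exact log line format is an assumption:
```python
# Sketch: classify log lines by error pattern and count occurrences.
# Pattern order matters: the first match wins, so specific patterns
# come before the generic level check.
import re
from collections import Counter

ERROR_PATTERNS = {
    "ImportError": re.compile(r"\b(ImportError|ModuleNotFoundError)\b"),
    "ConnectionError": re.compile(r"\b(ConnectionRefused(Error)?|ConnectionError)\b"),
    "TimeoutError": re.compile(r"\b(TimeoutError|ReadTimeout)\b"),
    "JSON-RPC error": re.compile(r'"error"\s*:'),
    "Generic": re.compile(r"\b(ERROR|CRITICAL|FATAL)\b"),
}

def categorize_errors(lines: list[str]) -> Counter:
    counts: Counter = Counter()
    for line in lines:
        for category, pattern in ERROR_PATTERNS.items():
            if pattern.search(line):
                counts[category] += 1
                break
    return counts
```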
### Phase 4: Report
```
## MCP Server Logs
### Server: gitea
Last 10 entries:
[2025-11-15 10:00:01] INFO Initialized with 42 tools
[2025-11-15 10:00:05] INFO Tool call: list_issues (245ms)
...
### Server: netbox
Last 10 entries:
[2025-11-15 09:58:00] ERROR ImportError: No module named 'pynetbox'
### Error Summary
| Server | Error Type | Count | Root Cause | Fix |
|--------|-----------|-------|------------|-----|
| netbox | ImportError | 3 | Missing dependency | pip install pynetbox |
### Recommendations
1. Fix netbox: Reinstall dependencies in venv
2. All other servers: No issues detected
```
## User Request
$ARGUMENTS

View File

@@ -0,0 +1,138 @@
---
name: debug-mcp scaffold
description: Generate a new MCP server skeleton project with standard structure
---
# /debug-mcp scaffold
Generate a new MCP server project with the standard directory structure, entry point, and configuration.
## Skills to Load
- `skills/visual-header.md`
- `skills/server-patterns.md`
- `skills/mcp-protocol.md`
## Agent
Delegate to `agents/mcp-debugger.md`.
## Usage
```
/debug-mcp scaffold <server_name> [--tools=<tool1,tool2,...>] [--location=<path>]
```
**Arguments:**
- `server_name` - Name for the new MCP server (lowercase, hyphens)
**Options:**
- `--tools` - Comma-separated list of initial tool names to generate stubs
- `--location` - Where to create the server (default: `mcp-servers/<server_name>`)
## Instructions
Execute `skills/visual-header.md` with context "Server Scaffold".
### Phase 1: Gather Requirements
1. Ask user for:
- Server purpose (one sentence)
- External service it integrates with (if any)
- Authentication type (API key, OAuth, none)
- Initial tools to register (at least one)
### Phase 2: Generate Project Structure
Apply patterns from `skills/server-patterns.md`:
```
mcp-servers/<server_name>/
├── mcp_server/
│ ├── __init__.py
│ ├── config.py # Configuration loader (env files)
│ ├── server.py # MCP server entry point
│ └── tools/
│ ├── __init__.py
│ └── <category>.py # Tool implementations
├── tests/
│ ├── __init__.py
│ └── test_tools.py # Tool unit tests
├── requirements.txt # Python dependencies
└── README.md # Server documentation
```
### Phase 3: Generate Files
#### server.py
- Import FastMCP or raw MCP protocol handler
- Register tools from tools/ directory
- Configure stdio transport
- Add startup logging with tool count
#### config.py
- Load from `~/.config/claude/<server_name>.env`
- Fall back to project-level `.env`
- Validate required variables on startup
- Mask sensitive values in logs (loader sketch below)
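A sketch of what the generated `config.py` might look like, using `python-dotenv` (already in the scaffold's requirements); the variable names are placeholders:
```python
# Sketch of a generated config.py. REQUIRED names are hypothetical.
import os
from pathlib import Path

from dotenv import load_dotenv

REQUIRED = ["EXAMPLE_API_URL", "EXAMPLE_API_TOKEN"]  # placeholder names

def load_config(server_name: str) -> dict[str, str]:
    # User-level env file wins; project .env only fills unset variables,
    # because load_dotenv does not override existing values by default.
    load_dotenv(Path.home() / ".config" / "claude" / f"{server_name}.env")
    load_dotenv(".env")
    missing = [k for k in REQUIRED if not os.getenv(k)]
    if missing:
        raise RuntimeError(f"Missing required env vars: {missing}")
    return {k: os.environ[k] for k in REQUIRED}
```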
#### tools/<category>.py
- For each tool name provided in `--tools`:
- Generate a stub function with `@mcp.tool` decorator
- Include docstring with description
- Define inputSchema with parameter types
- Return a placeholder response (example stub below)
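For illustration, a stub the scaffold might emit for a hypothetical tool named `list_items`; the import path assumes the official `mcp` Python SDK:
```python
# Example stub as the scaffold might generate it for --tools=list_items.
# Tool name and parameters are placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("<server_name>")

@mcp.tool()
def list_items(state: str = "open") -> str:
    """List items from the external service.

    Args:
        state: Filter by state (open, closed, all).
    """
    # TODO: call the external API and return real data
    return f"placeholder: list_items(state={state!r})"
```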
#### requirements.txt
```
mcp>=1.0.0
httpx>=0.24.0
python-dotenv>=1.0.0
```
#### README.md
- Server name and description
- Installation instructions (venv setup)
- Configuration (env variables)
- Available tools table
- Architecture diagram
### Phase 4: Register in .mcp.json
1. Read the project's `.mcp.json`
2. Add the new server entry:
```json
"<server_name>": {
"command": "mcp-servers/<server_name>/.venv/bin/python",
"args": ["-m", "mcp_server.server"],
"cwd": "mcp-servers/<server_name>"
}
```
3. Show the change and ask the user to confirm before writing (update sketch below)
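A sketch of the update step, assuming `.mcp.json` has a top-level `mcpServers` object as in this project:
```python
# Sketch: merge the new server entry into .mcp.json.
import json
from pathlib import Path

def register_server(name: str, location: str) -> None:
    cfg_path = Path(".mcp.json")
    cfg = json.loads(cfg_path.read_text())
    cfg.setdefault("mcpServers", {})[name] = {
        "command": f"{location}/.venv/bin/python",
        "args": ["-m", "mcp_server.server"],
        "cwd": location,
    }
    cfg_path.write_text(json.dumps(cfg, indent=2) + "\n")
```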
### Phase 5: Completion
```
## Scaffold Complete
### Created Files
- mcp-servers/<name>/mcp_server/server.py
- mcp-servers/<name>/mcp_server/config.py
- mcp-servers/<name>/mcp_server/tools/<category>.py
- mcp-servers/<name>/requirements.txt
- mcp-servers/<name>/README.md
### Next Steps
1. Create virtual environment:
cd mcp-servers/<name> && python3 -m venv .venv && .venv/bin/pip install -r requirements.txt
2. Add credentials:
Edit ~/.config/claude/<name>.env
3. Implement tool logic:
Edit mcp-servers/<name>/mcp_server/tools/<category>.py
4. Restart Claude Code session to load the new server
5. Test: /debug-mcp test <name> <tool_name>
```
## User Request
$ARGUMENTS

View File

@@ -0,0 +1,101 @@
---
name: debug-mcp status
description: Show all configured MCP servers with health status, venv state, and tool counts
---
# /debug-mcp status
Display the health status of all MCP servers configured in the project.
## Skills to Load
- `skills/visual-header.md`
- `skills/venv-diagnostics.md`
- `skills/log-analysis.md`
## Agent
Delegate to `agents/mcp-debugger.md`.
## Usage
```
/debug-mcp status [--server=<name>] [--verbose]
```
**Options:**
- `--server` - Check a specific server only
- `--verbose` - Show detailed output including tool lists
## Instructions
Execute `skills/visual-header.md` with context "Server Status".
### Phase 1: Locate Configuration
1. Read `.mcp.json` from the project root
2. Parse the `mcpServers` object to extract all server definitions
3. For each server, extract:
- Server name (key in mcpServers)
- Command path (usually Python interpreter in .venv)
- Arguments (module path)
- Working directory (`cwd`)
- Environment variables or env file references
### Phase 2: Check Each Server
For each configured MCP server:
1. **Executable check** - Does the command path exist?
```bash
test -f <command_path> && echo "OK" || echo "MISSING"
```
2. **Virtual environment check** - Apply `skills/venv-diagnostics.md`:
- Does `.venv/` directory exist in the server's cwd?
- Is the Python binary intact (not broken symlink)?
- Are requirements satisfied?
3. **Config file check** - Does the referenced env file exist?
```bash
test -f <env_file_path> && echo "OK" || echo "MISSING"
```
4. **Module check** - Can the server module be imported? (A combined sketch of all four checks follows the list.)
```bash
cd <cwd> && .venv/bin/python -c "import <module_name>" 2>&1
```
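Taken together, a sketch that runs these checks for every configured server; the module name `mcp_server.server` follows this repo's convention and is an assumption for third-party servers:
```python
# Sketch: run checks 1-4 for each server defined in .mcp.json.
import json
import subprocess
from pathlib import Path

def check_all(mcp_json: str = ".mcp.json") -> None:
    servers = json.loads(Path(mcp_json).read_text())["mcpServers"]
    for name, spec in servers.items():
        cwd = Path(spec.get("cwd", "."))
        if not Path(spec["command"]).is_file():
            print(f"{name}: executable MISSING")
            continue
        python = cwd / ".venv" / "bin" / "python"
        if not python.is_file():
            print(f"{name}: venv MISSING")
            continue
        probe = subprocess.run(
            [str(python), "-c", "import mcp_server.server"],
            cwd=cwd, capture_output=True, text=True,
        )
        print(f"{name}: {'HEALTHY' if probe.returncode == 0 else 'import FAIL'}")
```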
### Phase 3: Report
```
## MCP Server Status
| Server | Executable | Venv | Config | Import | Status |
|--------|-----------|------|--------|--------|--------|
| gitea | OK | OK | OK | OK | HEALTHY |
| netbox | OK | MISSING | OK | FAIL | ERROR |
| data-platform | OK | OK | OK | OK | HEALTHY |
### Errors
#### netbox
- Venv missing: /path/to/mcp-servers/netbox/.venv does not exist
- Import failed: ModuleNotFoundError: No module named 'pynetbox'
- Fix: cd /path/to/mcp-servers/netbox && python3 -m venv .venv && .venv/bin/pip install -r requirements.txt
### Summary
- Healthy: 2/3
- Errors: 1/3
```
### Phase 4: Verbose Mode
If `--verbose`, additionally show for each healthy server:
- Tool count (parse server source for `@mcp.tool` decorators or tool registration)
- Resource count
- Last modification time of server.py
## User Request
$ARGUMENTS

View File

@@ -0,0 +1,103 @@
---
name: debug-mcp test
description: Test a specific MCP tool call by invoking it and displaying the result
---
# /debug-mcp test
Test a specific MCP tool by invoking it with sample parameters.
## Skills to Load
- `skills/visual-header.md`
- `skills/mcp-protocol.md`
## Agent
Delegate to `agents/mcp-debugger.md`.
## Usage
```
/debug-mcp test <server_name> <tool_name> [--params=<json>]
```
**Arguments:**
- `server_name` - Name of the MCP server from .mcp.json
- `tool_name` - Name of the tool to invoke
- `--params` - JSON object with tool parameters (optional)
## Instructions
Execute `skills/visual-header.md` with context "Tool Test".
### Phase 1: Validate Inputs
1. Read `.mcp.json` and verify the server exists
2. Check if the server is healthy (run quick executable check)
3. If tool_name is not provided, list the available tools for the server and ask the user to select one
### Phase 2: Tool Discovery
1. Parse the server source code to find the tool definition
2. Extract the tool's `inputSchema` (parameters, types, required fields)
3. Display the schema to the user:
```
## Tool: list_issues
Server: gitea
Parameters:
- state (string, optional): "open", "closed", "all" [default: "open"]
- labels (array[string], optional): Filter by labels
- repo (string, optional): Repository name
```
### Phase 3: Parameter Preparation
1. If `--params` is provided, validate it against the tool's inputSchema
2. If no params are provided and the tool has required params, ask the user for values
3. If no params are provided and all are optional, invoke with defaults (validation sketch below)
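A validation sketch for step 1, assuming the `jsonschema` package is installed; any JSON Schema validator would do, and this helper is hypothetical rather than part of the plugin:
```python
# Sketch: parse --params and validate against the tool's inputSchema.
import json

import jsonschema

def validate_params(params_json: str, input_schema: dict) -> dict:
    params = json.loads(params_json)
    jsonschema.validate(instance=params, schema=input_schema)  # raises on mismatch
    return params
```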
### Phase 4: Invocation
Invoke the tool through the MCP tool interface available in the current session:
1. Call the tool with prepared parameters
2. Capture the response
3. Measure response time
### Phase 5: Result Display
```
## Test Result
### Request
- Server: gitea
- Tool: list_issues
- Params: {"state": "open", "repo": "leo-claude-mktplace"}
### Response
- Status: Success
- Time: 245ms
- Result:
[formatted JSON response, truncated if large]
### Schema Validation
- All required params provided: YES
- Response type matches expected: YES
```
### Error Handling
If the tool call fails, apply `skills/mcp-protocol.md` error patterns:
```
### Error
- Type: ConnectionRefused
- Message: Could not connect to MCP server
- Likely Cause: Server not running or venv broken
- Fix: Run /debug-mcp status to diagnose
```
## User Request
$ARGUMENTS

View File

@@ -0,0 +1,17 @@
---
description: MCP debugging — inspect servers, test tools, view logs, scaffold new servers
---
# /debug-mcp
MCP server debugging, inspection, and development toolkit.
## Sub-commands
| Sub-command | Description |
|-------------|-------------|
| `/debug-mcp status` | Show all MCP servers with health status |
| `/debug-mcp test` | Test a specific MCP tool call |
| `/debug-mcp logs` | View recent MCP server logs and errors |
| `/debug-mcp inspect` | Inspect MCP server config and dependencies |
| `/debug-mcp scaffold` | Generate MCP server skeleton project |

View File

@@ -0,0 +1,105 @@
# Log Analysis Skill
Common MCP server error patterns, their root causes, and fixes.
## Error Pattern: ImportError
```
ImportError: No module named 'pynetbox'
```
**Root Cause:** Missing Python package in the virtual environment.
**Fix:**
```bash
cd <server_cwd> && .venv/bin/pip install -r requirements.txt
```
**Prevention:** Always run `pip install -r requirements.txt` after creating or updating a venv.
## Error Pattern: ConnectionRefused
```
ConnectionRefusedError: [Errno 111] Connection refused
```
**Root Cause:** The external service the MCP server connects to is not running or not reachable.
**Checks:**
1. Is the target service running? (e.g., Gitea, NetBox)
2. Is the URL correct in the env file?
3. Is there a firewall or VPN issue?
**Fix:** Verify the service URL in `~/.config/claude/<server>.env` and confirm the service is accessible.
## Error Pattern: JSONDecodeError
```
json.decoder.JSONDecodeError: Expecting value: line 1 column 1
```
**Root Cause:** The server received a non-JSON response from the external API. This usually means:
- API returned HTML error page (wrong URL)
- API returned empty response (auth failed silently)
- Proxy intercepted the request
**Fix:** Check that the API URL ends with the correct path (e.g., `/api/v1` for Gitea, `/api` for NetBox).
## Error Pattern: TimeoutError
```
TimeoutError: timed out
httpx.ReadTimeout:
```
**Root Cause:** Server startup took too long or external API is slow.
**Checks:**
1. Network latency to the external service
2. Server doing heavy initialization (loading all tools)
3. Large response from API
**Fix:** Increase the timeout in the server config, or defer heavy initialization so tool registration completes quickly.
## Error Pattern: PermissionError
```
PermissionError: [Errno 13] Permission denied: '/path/to/file'
```
**Root Cause:** Server process cannot read/write required files.
**Fix:** Check file ownership and permissions. Common locations:
- `~/.config/claude/*.env` (should be readable by user)
- Server's `.venv/` directory
- Log files
## Error Pattern: FileNotFoundError (Venv)
```
FileNotFoundError: [Errno 2] No such file or directory: '.venv/bin/python'
```
**Root Cause:** Virtual environment does not exist or was deleted.
**Fix:** Create the venv:
```bash
cd <server_cwd> && python3 -m venv .venv && .venv/bin/pip install -r requirements.txt
```
## Error Pattern: SSL Certificate Error
```
ssl.SSLCertVerificationError: certificate verify failed
```
**Root Cause:** Self-signed certificate on the target service.
**Fix:** Set `VERIFY_SSL=false` in the env file (not recommended for production), or add the internal CA certificate to the system trust store.
## Log Parsing Tips
1. **Python tracebacks** - Read from the bottom up; the last line names the actual error.
2. **JSON-RPC errors** - Look for `"error"` key in JSON responses.
3. **Startup failures** - First few lines after server spawn reveal initialization issues.
4. **Repeated errors** - Same error in a loop means the server is retrying and failing.

View File

@@ -0,0 +1,131 @@
# MCP Protocol Skill
Model Context Protocol (MCP) specification reference for debugging and development.
## Protocol Overview
MCP uses JSON-RPC 2.0 over stdio (standard input/output) for communication between Claude Code and MCP servers.
### Transport Types
| Transport | Description | Use Case |
|-----------|-------------|----------|
| **stdio** | JSON-RPC over stdin/stdout | Default for Claude Code |
| **SSE** | Server-Sent Events over HTTP | Remote servers |
## Tool Definitions
Tools are the primary way MCP servers expose functionality.
### Tool Registration
```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("example-server")

@mcp.tool()
def list_issues(state: str = "open", labels: list[str] | None = None) -> str:
    """List issues from the repository.

    Args:
        state: Issue state filter (open, closed, all)
        labels: Filter by label names
    """
    ...  # implementation
```
### Tool Schema (JSON)
```json
{
"name": "list_issues",
"description": "List issues from the repository",
"inputSchema": {
"type": "object",
"properties": {
"state": {
"type": "string",
"enum": ["open", "closed", "all"],
"default": "open",
"description": "Issue state filter"
},
"labels": {
"type": "array",
"items": { "type": "string" },
"description": "Filter by label names"
}
},
"required": []
}
}
```
## Resource Definitions
Resources expose data that can be read by the client.
```python
import json

@mcp.resource("config://settings")
def get_settings() -> str:
    """Return current configuration."""
    return json.dumps(config)  # `config` is the server's loaded settings dict
```
## Prompt Definitions
Prompts provide reusable prompt templates.
```python
@mcp.prompt()
def analyze_issue(issue_number: int) -> str:
"""Generate a prompt to analyze a specific issue."""
return f"Analyze issue #{issue_number} and suggest solutions."
```
## JSON-RPC Message Format
### Request
```json
{
"jsonrpc": "2.0",
"id": 1,
"method": "tools/call",
"params": {
"name": "list_issues",
"arguments": {"state": "open"}
}
}
```
### Response (Success)
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"content": [{"type": "text", "text": "..."}]
}
}
```
### Response (Error)
```json
{
"jsonrpc": "2.0",
"id": 1,
"error": {
"code": -32600,
"message": "Invalid Request"
}
}
```
## Error Codes
| Code | Meaning |
|------|---------|
| -32700 | Parse error (invalid JSON) |
| -32600 | Invalid request |
| -32601 | Method not found |
| -32602 | Invalid params |
| -32603 | Internal error |
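For debugging output, a small sketch that maps these codes to readable diagnoses (the table above is the source of truth; the helper itself is hypothetical):
```python
# Sketch: turn a JSON-RPC error response into a readable diagnosis.
JSONRPC_ERRORS = {
    -32700: "Parse error (invalid JSON)",
    -32600: "Invalid request",
    -32601: "Method not found",
    -32602: "Invalid params",
    -32603: "Internal error",
}

def describe_error(response: dict) -> str:
    err = response.get("error")
    if err is None:
        return "success"
    code = err.get("code")
    meaning = JSONRPC_ERRORS.get(code, "Unknown error")
    return f"{meaning} ({code}): {err.get('message', '')}"
```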

Some files were not shown because too many files have changed in this diff Show More