94 Commits

Author SHA1 Message Date
2173f3389a Merge pull request 'development' (#289) from development into main
Reviewed-on: #289
2026-01-28 22:49:18 +00:00
d8971efafe Merge pull request 'release: v5.3.0 - Sprint 6 Visual Branding Overhaul' (#288) from release/v5.3.0 into development
Reviewed-on: #288
2026-01-28 22:47:46 +00:00
e3a8ebd4da release: v5.3.0 - Sprint 6 Visual Branding Overhaul
Sprint 6 adds consistent visual headers across all 12 plugins:
- Projman: Double-line headers with phase indicators
- Other plugins: Single-line headers with plugin icons
- 109 files updated (23 agents + 86 commands)
- 4 Wiki branding specification pages

Also fixes documentation drift:
- Version sync across CLAUDE.md, marketplace.json, README.md
- Added 18 missing commands to documentation

Closes Sprint 6 milestone.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 17:45:47 -05:00
fab1345bcb Merge pull request 'development' (#286) from development into main
Reviewed-on: #286
2026-01-28 22:41:07 +00:00
193908721c Merge pull request 'feat(projman): add visual output requirements (Sprint 6 Batch 1)' (#284) from feat/sprint-6-projman-visual into development
Reviewed-on: #284
2026-01-28 22:37:46 +00:00
7e9f70d0a7 Merge pull request 'feat(plugins): Add visual output headers to other plugin agents and commands (#275, #276)' (#285) from feat/sprint-6-other-plugins-visual into development
Reviewed-on: #285
2026-01-28 22:37:37 +00:00
86413c4801 docs: sync documentation with Sprint 4 & 5 commands
- Update CLAUDE.md version from 5.1.0 to 5.2.0
- Update marketplace.json plugin versions to match CHANGELOG
  - projman 3.2.0 → 3.3.0
  - git-flow 1.0.0 → 1.2.0
  - pr-review 1.0.0 → 1.1.0
  - clarity-assist 1.0.0 → 1.2.0
  - doc-guardian 1.0.0 → 1.1.0
  - claude-config-maintainer 1.0.0 → 1.1.0
  - cmdb-assistant 1.1.0 → 1.2.0
  - data-platform 1.0.0 → 1.2.0
  - viz-platform 1.0.0 → 1.1.0
  - contract-validator 1.0.0 → 1.2.0
- Add 18 missing commands to README.md and COMMANDS-CHEATSHEET.md
  - /sprint-diagram, /pr-diff, /changelog-gen, /doc-coverage
  - /stale-docs, /config-diff, /config-lint, /cmdb-topology
  - /change-audit, /ip-conflicts, /lineage-viz, /dbt-test
  - /data-quality, /chart-export, /accessibility-check
  - /breakpoints, /dependency-graph
- Add contract-validator plugin to COMMANDS-CHEATSHEET.md

Closes #277

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 17:32:24 -05:00
b5d36865ee feat(plugins): add Visual Output headers to all other plugin commands
Add single-line visual headers to 66 command files across 10 plugins:
- clarity-assist (2 commands): 💬
- claude-config-maintainer (5 commands): ⚙️
- cmdb-assistant (11 commands): 🖥️
- code-sentinel (3 commands): 🔒
- contract-validator (5 commands): ✅
- data-platform (10 commands): 📊
- doc-guardian (5 commands): 📝
- git-flow (8 commands): 🔀
- pr-review (7 commands): 🔍
- viz-platform (10 commands): 🎨

Each command now displays a consistent header at execution start:
┌────────────────────────────────────────────────────────────────┐
│  [icon] PLUGIN-NAME · Command Description                       │
└────────────────────────────────────────────────────────────────┘

Addresses #275 (other plugin commands visual output)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 17:24:49 -05:00
79ee93ea88 feat(plugins): add visual output requirements to all plugin agents
Add single-line box headers to 19 agents across all non-projman plugins:
- clarity-assist (1): Clarity Coach
- claude-config-maintainer (1): Maintainer
- code-sentinel (2): Security Reviewer, Refactor Advisor
- doc-guardian (1): Doc Analyzer
- git-flow (1): Git Assistant
- pr-review (5): Coordinator, Security, Maintainability, Performance, Test
- data-platform (2): Data Analysis, Data Ingestion
- viz-platform (3): Component Check, Layout Builder, Theme Setup
- contract-validator (2): Agent Check, Full Validation
- cmdb-assistant (1): CMDB Assistant

Uses single-line box format (not double-line like projman).

Part of #275

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 17:15:05 -05:00
3561025dfc feat(projman): add visual output requirements to agents and commands
Add Visual Output sections to all projman files:
- 4 agent files with phase-specific headers (PLANNING, EXECUTION, CLOSING)
- 16 command files with appropriate headers

Headers use double-line box characters for projman branding:
- Planning phase: 🎯 PLANNING
- Execution phase: ⚡ EXECUTION (+ progress block for orchestrator)
- Closing phase: 🏁 CLOSING
- Setup commands: ⚙️ SETUP

Closes #273, #274

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 17:10:49 -05:00
36e6ac2dd0 Merge pull request 'development' (#283) from development into main
Reviewed-on: #283
2026-01-28 21:48:57 +00:00
d27c440631 Merge pull request 'fix(gitea-mcp): address MCP tool issues from Sprint 6' (#282) from fix/281-mcp-tool-issues into development
Reviewed-on: #282
2026-01-28 21:48:07 +00:00
e56d685a68 fix(gitea-mcp): address MCP tool issues from Sprint 6
Fixes #281 - Multiple MCP tool issues discovered during sprint execution

## Changes

1. **list_issues Token Overflow** (Issue 1)
   - Added `milestone` parameter to filter issues server-side
   - Reduces response size by filtering at API level instead of client-side

2. **Type Coercion for MCP Serialization** (Issues 2 & 4)
   - Added `_coerce_types()` helper function in server.py
   - Handles integers passed as strings (milestone_id, issue_number, etc.)
   - Handles arrays passed as JSON strings (labels, tags, etc.)
   - Applied to all tool calls automatically

3. **Sprint Approval Check Clarification** (Issue 3)
   - Updated sprint-start.md to clarify approval is RECOMMENDED, not enforced
   - Changed STOP/block language to WARN/suggest language
   - Added note explaining this is workflow guidance, not code-enforced

## Files Changed
- mcp-servers/gitea/mcp_server/gitea_client.py: Added milestone param
- mcp-servers/gitea/mcp_server/tools/issues.py: Pass milestone param
- mcp-servers/gitea/mcp_server/server.py: Type coercion + milestone schema
- plugins/projman/commands/sprint-start.md: Clarified approval check

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 15:57:42 -05:00
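To make the type-coercion idea above concrete, here is a minimal sketch of such a helper. The field names and the exact behavior of `_coerce_types()` in server.py are assumptions, not the shipped code.

```python
import json
from typing import Any

# Assumed field names; the real helper in server.py may cover more.
INT_FIELDS = {"milestone_id", "issue_number", "limit"}
LIST_FIELDS = {"labels", "tags"}

def coerce_types(arguments: dict[str, Any]) -> dict[str, Any]:
    """Normalize MCP arguments that arrive serialized as strings."""
    coerced = dict(arguments)
    for key, value in arguments.items():
        if key in INT_FIELDS and isinstance(value, str) and value.isdigit():
            coerced[key] = int(value)            # "42" -> 42
        elif key in LIST_FIELDS and isinstance(value, str):
            try:
                parsed = json.loads(value)       # '["bug"]' -> ["bug"]
                if isinstance(parsed, list):
                    coerced[key] = parsed
            except json.JSONDecodeError:
                pass                             # leave the value untouched
    return coerced
```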
3e0e779803 Merge pull request 'development' (#280) from development into main
Reviewed-on: #280
2026-01-28 20:37:52 +00:00
5638891d01 Merge pull request 'fix(projman): add new command verification step to sprint-close' (#279) from fix/sprint-close-new-command-verification into development
Reviewed-on: #279
2026-01-28 20:37:23 +00:00
611b50b150 fix(projman): add new command verification step to sprint-close
Addresses issue #278 - sprint-diagram command not discoverable after Sprint 4.

Root cause: Claude Code discovers skills at session start. Commands added
during a session are NOT discoverable until restart.

Prevention: Added step 7 "New Command Verification" to sprint-close workflow:
- Reminds about session restart requirement
- Creates follow-up verification task
- Explains why this happens

Lesson learned created: lessons/patterns/sprint-4---new-commands-not-discoverable-until-session-restart

Closes #278

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 15:36:11 -05:00
74198743ab Merge pull request 'development' (#271) from development into main
Reviewed-on: #271
2026-01-28 19:27:45 +00:00
cde5c67134 Merge pull request 'feat(plugins): implement Sprint 5 documentation and fixes (#266-#269)' (#270) from feat/sprint-5-documentation into development
Reviewed-on: #270
2026-01-28 19:27:15 +00:00
baad41da98 feat(plugins): implement Sprint 5 documentation and fixes (#266-#269)
Release v5.2.0

Documentation:
- Add git-flow branching strategy guide (docs/BRANCHING-STRATEGY.md)
- Add clarity-assist ND support documentation (docs/ND-SUPPORT.md)
- Update DEBUGGING-CHECKLIST.md with Gitea auto-close behavior and MCP restart notes
- Update plugin READMEs to reference new documentation

Bug Fix:
- Add milestone parameter to update_issue MCP tool (gitea_client.py, server.py, tools/issues.py)

Version Updates:
- Marketplace version: 5.1.0 → 5.2.0
- README title: v5.1.0 → v5.2.0
- CHANGELOG: [Unreleased] → [5.2.0] - 2026-01-28

Closes #266, Closes #267, Closes #268, Closes #269

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 14:26:42 -05:00
d57bff184e Merge pull request 'development' (#265) from development into main
Reviewed-on: #265
2026-01-28 18:45:38 +00:00
f6d9fcaae2 Merge pull request 'docs: fix GITEA_REPO format documentation' (#264) from fix/gitea-repo-format-docs into development
Reviewed-on: #264
2026-01-28 18:45:24 +00:00
8d94bb606c docs: fix GITEA_REPO format documentation
Update documentation to reflect that GITEA_REPO expects owner/repo
format (e.g., my-org/my-repo) instead of separate GITEA_ORG and
GITEA_REPO variables.

This matches the actual MCP server implementation in config.py.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 13:44:46 -05:00
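A small illustration of the documented owner/repo format; the variable handling shown here is an assumption, not quoted from config.py.

```python
import os

# GITEA_REPO is documented as a single owner/repo value, e.g. "my-org/my-repo".
repo = os.environ.get("GITEA_REPO", "")
if "/" not in repo:
    raise ValueError("GITEA_REPO must use the owner/repo format, e.g. my-org/my-repo")
owner, name = repo.split("/", 1)
```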
b175d4d890 Merge pull request 'development' (#262) from development into main
Reviewed-on: #262
2026-01-28 18:35:04 +00:00
6973f657d7 Merge pull request 'docs(changelog): add Sprint 4 changes to [Unreleased]' (#263) from docs/sprint-4-changelog into development
Reviewed-on: #263
2026-01-28 18:34:49 +00:00
a0d1b38c6e docs(changelog): add Sprint 4 changes to [Unreleased]
- 18 new commands across 8 plugins
- viz-platform accessibility tools and chart export
- MCP project directory detection fix
- Links to wiki implementation and lessons learned

Sprint 4 - Commands milestone closed.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 12:56:35 -05:00
c5f68256c5 Merge pull request 'feat(plugins): implement Sprint 4 commands (#241-#258)' (#261) from feat/sprint-4-commands into development
Reviewed-on: #261
2026-01-28 17:36:06 +00:00
9698e8724d feat(plugins): implement Sprint 4 commands (#241-#258)
Sprint 4 - Plugin Commands implementation adding 18 new user-facing
commands across 8 plugins as part of V5.2.0 Plugin Enhancements.

**projman:**
- #241: /sprint-diagram - Mermaid visualization of sprint issues

**pr-review:**
- #242: Confidence threshold config (PR_REVIEW_CONFIDENCE_THRESHOLD)
- #243: /pr-diff - Formatted diff with inline review comments

**data-platform:**
- #244: /data-quality - DataFrame quality checks (nulls, duplicates, outliers)
- #245: /lineage-viz - dbt lineage as Mermaid diagrams
- #246: /dbt-test - Formatted dbt test runner

**viz-platform:**
- #247: /chart-export - Export charts to PNG/SVG/PDF via kaleido
- #248: /accessibility-check - Color blind validation (WCAG contrast)
- #249: /breakpoints - Responsive layout configuration

**contract-validator:**
- #250: /dependency-graph - Plugin dependency visualization

**doc-guardian:**
- #251: /changelog-gen - Generate changelog from conventional commits
- #252: /doc-coverage - Documentation coverage metrics
- #253: /stale-docs - Flag outdated documentation

**claude-config-maintainer:**
- #254: /config-diff - Track CLAUDE.md changes over time
- #255: /config-lint - 31 lint rules for CLAUDE.md best practices

**cmdb-assistant:**
- #256: /cmdb-topology - Infrastructure topology diagrams
- #257: /change-audit - NetBox audit trail queries
- #258: /ip-conflicts - Detect IP conflicts and overlaps

Closes #241, #242, #243, #244, #245, #246, #247, #248, #249,
#250, #251, #252, #253, #254, #255, #256, #257, #258

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 12:02:26 -05:00
f3e1f42413 Merge pull request 'development' (#260) from development into main
Reviewed-on: #260
2026-01-28 16:44:43 +00:00
8a957b1b69 Merge pull request 'fix(mcp): capture CLAUDE_PROJECT_DIR from PWD before cd' (#259) from fix/mcp-project-dir-detection into development
Reviewed-on: #259
2026-01-28 16:44:32 +00:00
6e90064160 fix(mcp): capture CLAUDE_PROJECT_DIR from PWD before cd
All MCP server run.sh scripts now capture the original working
directory as CLAUDE_PROJECT_DIR before changing to the script
directory. This fixes the branch detection issue where MCP tools
detected the plugin repo's branch instead of the user's project branch.

This is a follow-up fix to #231 - the original fix relied on
CLAUDE_PROJECT_DIR being set by Claude Code, but it isn't.
Now we capture it ourselves from PWD at startup time.

Closes #231 (proper fix)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 11:43:07 -05:00
5c4e97a3f6 Merge pull request 'development' (#240) from development into main
Reviewed-on: #240
2026-01-28 16:02:47 +00:00
351be5a40d Merge pull request 'feat(projman): implement 8 improvement issues (#231-#238)' (#239) from feat/projman-improvements-231-238 into development
Reviewed-on: #239
2026-01-28 15:53:50 +00:00
67944a7e1c Merge branch 'fix/233-approval-token' into development 2026-01-28 10:51:42 -05:00
e37653f956 Merge branch 'fix/237-checkpoint-resume' into development 2026-01-28 10:51:42 -05:00
235e72d3d7 Merge branch 'fix/234-parallel-conflicts' into development 2026-01-28 10:51:42 -05:00
ba8e86e31c Merge branch 'fix/236-runaway-detection' into development 2026-01-28 10:51:42 -05:00
67f330be6c Merge branch 'fix/238-task-scoping' into development 2026-01-28 10:51:42 -05:00
445b744196 Merge branch 'fix/232-progress-visibility' into development 2026-01-28 10:51:42 -05:00
ad73c526b7 Merge branch 'fix/235-status-labels' into development 2026-01-28 10:51:42 -05:00
26310d05f0 feat(projman): add sprint approval requirement before execution (#233)
Sprint-plan approval workflow:
- Request explicit approval after creating issues
- Present scope summary (branches, files, dependencies)
- User must type "approve sprint N" to authorize
- Record approval in milestone description with timestamp

Sprint-start verification:
- Check milestone for "## Sprint Approval" section
- If missing, STOP and direct to /sprint-plan
- Extract approved scope (branches, files)
- Enforce scope during execution

Orchestrator scope enforcement:
- Verify approval before any execution
- Check each operation against approved scope
- Operations outside scope require re-approval

This separates planning (review) from execution (action),
preventing agents from executing without explicit user consent.

Closes #233

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:51:10 -05:00
459550e7d3 feat(projman): add checkpoint/resume for interrupted agent work (#237)
Executor checkpointing:
- Standard checkpoint comment format with branch, commit, phase
- Files modified with status (created, modified)
- Completed and pending steps tracking
- State notes for resumption context
- Save checkpoint after major steps, before stopping

Orchestrator resume detection:
- Scan issue comments for "## Checkpoint" markers
- Offer resume options: resume, start fresh, review details
- Verify branch exists and files match before resuming
- Dispatch executor with checkpoint context

Sprint-start integration:
- Checkpoint detection as first workflow step
- Resume flow documentation with example
- Checkpoint format specification

This enables resuming work after:
- Budget exhaustion (100 tool call limit)
- Agent failure/circuit breaker
- Manual interruption
- Session timeout

Closes #237

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:49:34 -05:00
a69a4d19d0 feat(projman): add file conflict prevention for parallel agents (#234)
Add pre-dispatch conflict detection:
- Analyze target files for each task before parallel dispatch
- Check for file overlap between tasks in same batch
- If overlap detected, sequentialize those specific tasks
- Example analysis showing conflict detection workflow

Branch isolation protocol:
- Each task MUST have its own branch
- Never have two agents work on the same branch
- Sequential merge after completion (not simultaneous)
- Handle merge conflicts by stopping second task

Conflict resolution rules:
- Same file → MUST sequentialize
- Same directory → Usually safe, review
- Shared config → Sequentialize
- Shared test fixture → Sequentialize or assign different files

This prevents parallel agents from modifying the same files
and causing git merge conflicts.

Closes #234

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:47:36 -05:00
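A minimal sketch of the pre-dispatch overlap check described above, assuming each task lists its target files; the task structure and function name are illustrative.

```python
# Tasks that touch a file already claimed by another task in the batch are
# moved to a sequential batch instead of being dispatched in parallel.
def split_batches(tasks: list[dict]) -> tuple[list[dict], list[dict]]:
    """Return (parallel_safe, must_sequentialize) based on target-file overlap."""
    claimed: set[str] = set()
    parallel, sequential = [], []
    for task in tasks:
        files = set(task["target_files"])
        if files & claimed:          # overlap with an already-dispatched task
            sequential.append(task)
        else:
            claimed |= files
            parallel.append(task)
    return parallel, sequential

# Example: two tasks editing the same README cannot run in parallel.
tasks = [
    {"issue": 101, "target_files": ["plugins/projman/README.md"]},
    {"issue": 102, "target_files": ["plugins/projman/README.md", "CHANGELOG.md"]},
    {"issue": 103, "target_files": ["docs/BRANCHING-STRATEGY.md"]},
]
parallel, sequential = split_batches(tasks)   # 101 and 103 parallel, 102 sequential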
f2a62627d0 feat(projman): add runaway detection and circuit breaker for agents (#236)
Executor self-monitoring:
- 10+ calls without progress → stop and reassess
- Same error 3+ times → circuit breaker, report failure
- 50+ calls → mandatory progress update
- 80+ calls → budget warning, evaluate completion
- 100+ calls → hard stop, save checkpoint

Orchestrator monitoring:
- Detect stuck agents (no progress for X minutes)
- Intervention protocol for runaway agents
- Timeout guidelines by task size (XS: 15min, S: 30min, M: 45min)
- Recovery actions with Status/Failed label

This prevents agents from running indefinitely (400+ tool calls
observed in Sprint 3) and provides clear stopping criteria.

Closes #236

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:46:04 -05:00
0abf510ec0 feat(projman): add strict task sizing rules to prevent runaway agents (#238)
Add mandatory task scoping rules:
- XS: 1 file, 0-2 checklist items, ~30 tool calls
- S: 1 file, 2-4 checklist items, ~50 tool calls
- M: 2-3 files, 4-6 checklist items, ~80 tool calls
- L/XL: MUST be broken down into smaller tasks

Sprint 3 showed agents running 400+ tool calls on single tasks,
causing 1+ hour waits with no visibility. This enforces:
- Maximum task scope (M = 2-3 files, 80 tool calls)
- Mandatory breakdown for L/XL tasks
- Clear scoping checklist for planners
- Good/bad examples showing proper breakdown

Planner must refuse to create L/XL tasks without breakdown.

Closes #238

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:44:52 -05:00
008187a0a4 feat(projman): add structured progress comments for real-time visibility (#232)
Add structured progress comment format for agents:
- Standard format with Status, Phase, Tool Calls budget
- Completed/In-Progress/Blockers/Next sections
- Clear examples for starting, blocked, and failed states
- Guidance on when to post (every 20-30 tool calls)

Update sprint-status.md:
- Document how to parse progress comments
- Show enhanced in-progress display with tool call tracking
- Add progress comment detection to blocker analysis

This enables users to see:
- Real-time agent progress
- Tool call budget consumption
- Current phase and step
- Blockers as they occur

Closes #232

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:43:28 -05:00
4bd15e5deb feat(projman): add Status labels for accurate issue state tracking (#235)
- Add Status/In-Progress, Status/Blocked, Status/Failed, Status/Deferred labels
- Update orchestrator.md with Status Label Management section
- Update executor.md with honest Status Reporting requirements
- Update labels-reference.md with Status detection guidelines

Status labels enable accurate tracking:
- In-Progress: Work actively being done
- Blocked: Waiting for dependency/external factor
- Failed: Attempted but couldn't complete
- Deferred: Moved to future sprint

Agents must report honestly - never say "completed" when blocked/failed.

Closes #235

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:42:14 -05:00
8234683bc3 fix(gitea-mcp): use project directory for branch detection (#231)
The _get_current_branch() method was running git commands from the
installed plugin directory instead of the user's project directory.
This caused incorrect branch detection (always seeing 'main' from
the marketplace repo instead of the user's actual branch).

Fix: Use CLAUDE_PROJECT_DIR environment variable to get the correct
project directory and pass it as cwd to subprocess.run().

Fixes #231

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:38:17 -05:00
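A simplified sketch of the fix described above; the real `_get_current_branch()` may differ, but the key change is passing the project directory as `cwd`.

```python
import os
import subprocess

def get_current_branch() -> str:
    # Run git from the user's project directory, not the plugin install directory.
    project_dir = os.environ.get("CLAUDE_PROJECT_DIR") or os.getcwd()
    result = subprocess.run(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"],
        cwd=project_dir,           # the key change
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()
```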
5b3da8da85 Merge feat/230-breaking-change-detection into development
Resolved conflict in hooks.json - combined SessionStart and PostToolUse hooks

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:12:46 -05:00
894e015c01 Merge feat/229-session-auto-validate into development 2026-01-28 10:12:18 -05:00
a66a2bc519 Merge feat/228-schema-diff-hook into development 2026-01-28 10:12:13 -05:00
b8851a0ae3 Merge feat/227-vagueness-detection-hook into development 2026-01-28 10:12:08 -05:00
aee199e6cf Merge feat/226-branch-name-validation into development 2026-01-28 10:12:03 -05:00
223a2d626a Merge feat/225-commit-message-hook into development 2026-01-28 10:11:57 -05:00
b7fce0fafd docs(changelog): add Sprint 3 hooks implementation
- git-flow: commit message and branch name validation hooks
- clarity-assist: vagueness detection hook
- data-platform: schema diff detection hook
- contract-validator: SessionStart auto-validate and breaking change hooks
- Document MCP bug #231 (branch detection from wrong directory)

Sprint 3 - Hooks completed (issues #225-#230)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:11:51 -05:00
551c60fb45 feat(contract-validator): add breaking change detection hook (#230)
Implements PostToolUse hook to warn about breaking interface changes:
- Detects changes to plugin.json, hooks.json, .mcp.json, agents/*.md
- Compares with git HEAD to find removed/changed elements
- Warns on: removed hooks, changed matchers, removed MCP servers
- Warns on: plugin name changes, major version bumps
- Non-blocking, configurable via CONTRACT_VALIDATOR_BREAKING_WARN

Depends on #229 (SessionStart auto-validate infrastructure).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:05:02 -05:00
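A rough sketch of the compare-with-HEAD idea for a single interface file, assuming a plain `git show HEAD:<path>` lookup; the shipped hook's wiring and warning format may differ.

```python
import subprocess
from typing import Optional

def previous_version(path: str) -> Optional[str]:
    """Return the file content at HEAD, or None if it did not exist there."""
    result = subprocess.run(
        ["git", "show", f"HEAD:{path}"], capture_output=True, text=True
    )
    return result.stdout if result.returncode == 0 else None

def removed_lines(path: str, current_text: str) -> list[str]:
    """Lines present at HEAD but gone from the current text (potential breakage)."""
    old = previous_version(path)
    if old is None:
        return []
    current = set(current_text.splitlines())
    return [line for line in old.splitlines() if line not in current]
```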
af6a42b2ac feat(git-flow): add branch name validation hook (#226)
Implements PreToolUse/Bash hook to validate branch naming convention:
- Validates type/description format
- Allowed types: feat, fix, chore, docs, refactor, test, perf, debug
- Enforces lowercase, hyphens, max 50 chars
- Non-branch commands pass through

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 18:02:20 -05:00
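A sketch of the naming rule as described above (type/description, lowercase, hyphens, max 50 chars); the hook's actual regex may differ.

```python
import re

BRANCH_RE = re.compile(
    r"^(feat|fix|chore|docs|refactor|test|perf|debug)/[a-z0-9]+(-[a-z0-9]+)*$"
)

def branch_name_ok(name: str) -> bool:
    return len(name) <= 50 and bool(BRANCH_RE.match(name))

assert branch_name_ok("fix/mcp-project-dir-detection")
assert not branch_name_ok("Fix/MCP_Project_Dir")
```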
7cae21f7c9 feat(contract-validator): add SessionStart auto-validate hook (#229)
Implements smart SessionStart hook for plugin validation:
- Validates plugin contracts only when files change (smart mode)
- Caches file hashes to avoid redundant checks
- Non-blocking warnings for compatibility issues
- Configurable via CONTRACT_VALIDATOR_AUTO_CHECK

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 18:02:00 -05:00
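A minimal sketch of the file-hash caching idea behind the smart mode, assuming a JSON cache file; the cache path and file selection are assumptions.

```python
import hashlib
import json
from pathlib import Path

CACHE_FILE = Path.home() / ".cache" / "contract-validator" / "hashes.json"  # assumed location

def file_hashes(paths: list[Path]) -> dict[str, str]:
    return {str(p): hashlib.sha256(p.read_bytes()).hexdigest() for p in paths if p.is_file()}

def needs_validation(paths: list[Path]) -> bool:
    """Only re-validate when a watched plugin file changed since the last check."""
    current = file_hashes(paths)
    previous = json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}
    if current == previous:
        return False
    CACHE_FILE.parent.mkdir(parents=True, exist_ok=True)
    CACHE_FILE.write_text(json.dumps(current))
    return True
```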
8048fba931 feat(clarity-assist): add vagueness detection hook (#227)
Implements UserPromptSubmit hook to detect vague prompts:
- Checks for short prompts without context
- Detects ambiguous phrases ("fix it", "help me with")
- Suggests /clarity-assist when beneficial
- Non-blocking, configurable via CLARITY_ASSIST_AUTO_SUGGEST

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 18:01:53 -05:00
1b36ca77ab feat(git-flow): add commit message enforcement hook (#225)
Implements PreToolUse/Bash hook to validate conventional commit format:
- Validates type(scope): description format
- Supports all 10 types: feat, fix, docs, style, refactor, perf, test, chore, build, ci
- Optional scope support
- Helpful error messages with examples
- Non-commit commands pass through
- Uses Python for reliable JSON parsing

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 17:44:59 -05:00
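A sketch of what a stdin-JSON PreToolUse validator for conventional commits could look like; the input field names and exit-code semantics are assumptions based on Claude Code hook conventions, not the shipped hook.

```python
import json
import re
import sys

CONVENTIONAL = re.compile(
    r"^(feat|fix|docs|style|refactor|perf|test|chore|build|ci)(\([\w./-]+\))?: .+"
)

# PreToolUse hooks receive the pending tool call as JSON on stdin (assumed shape).
payload = json.load(sys.stdin)
command = payload.get("tool_input", {}).get("command", "")

match = re.search(r'git commit[^|;&]*-m\s+"([^"]+)"', command)
if match and not CONVENTIONAL.match(match.group(1)):
    print('Commit message must follow "type(scope): description", '
          "e.g. feat(projman): add sprint diagram command", file=sys.stderr)
    sys.exit(2)   # non-zero exit blocks the command (assumed hook behavior)
```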
eb85ea31bb feat(data-platform): add schema diff detection hook (#228)
Implements PostToolUse hook to warn about potentially breaking schema changes:
- DROP COLUMN/TABLE/INDEX detection
- Column type changes (ALTER TYPE, MODIFY COLUMN)
- NOT NULL constraint additions
- RENAME operations
- ORM model field removals

Non-blocking - outputs warnings only.
Configurable via DATA_PLATFORM_SCHEMA_WARN env var.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 17:38:26 -05:00
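A sketch of the breaking-change patterns listed above as simple regexes; the real hook also inspects ORM models and honors DATA_PLATFORM_SCHEMA_WARN.

```python
import re

BREAKING_PATTERNS = [
    r"\bDROP\s+(COLUMN|TABLE|INDEX)\b",
    r"\bALTER\s+COLUMN\b.*\bTYPE\b",
    r"\bMODIFY\s+COLUMN\b",
    r"\bNOT\s+NULL\b",
    r"\bRENAME\s+(COLUMN|TABLE|TO)\b",
]

def schema_warnings(sql: str) -> list[str]:
    """Return the patterns that match, for a non-blocking warning."""
    return [p for p in BREAKING_PATTERNS if re.search(p, sql, re.IGNORECASE)]

print(schema_warnings("ALTER TABLE orders DROP COLUMN legacy_id;"))
```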
8627d9e968 Merge pull request 'development' (#224) from development into main
Reviewed-on: #224
2026-01-27 20:16:30 +00:00
3da9adf44e Merge pull request 'docs: sync documentation with code changes for v5.1.0' (#223) from docs/sync-documentation-v5.1.0 into development
Reviewed-on: #223
2026-01-27 20:16:17 +00:00
bcb24ae641 docs: sync documentation with code changes for v5.1.0
Changes applied:
- Updated version references from 5.0.0 to 5.1.0 (CLAUDE.md, CANONICAL-PATHS.md, setup.sh)
- Added missing projman commands to README.md (/suggest-version, /proposal-status)
- Added missing cmdb-assistant commands to README.md (/cmdb-audit, /cmdb-register, /cmdb-sync)
- Added /proposal-status to projman section in COMMANDS-CHEATSHEET.md
- Added 3 cmdb-assistant commands to COMMANDS-CHEATSHEET.md
- Added /suggest-version documentation to plugins/projman/README.md
- Added 4 missing scripts to CANONICAL-PATHS.md (verify-hooks.sh, setup-venvs.sh, venv-repair.sh, release.sh)

Fixes 14 documentation drift issues identified by /doc-guardian:doc-audit.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 15:12:11 -05:00
c8ede3c30b Merge pull request 'development' (#222) from development into main
Reviewed-on: #222
2026-01-27 19:58:14 +00:00
fb1c664309 Merge pull request 'fix(post-update): clear Claude plugin cache on update' (#221) from fix/clear-plugin-cache-on-update into development
Reviewed-on: #221
2026-01-27 19:57:59 +00:00
90f19dfc0f fix(post-update): clear Claude plugin cache on update
The Claude plugin cache at ~/.claude/plugins/cache/leo-claude-mktplace/
holds versioned copies of .mcp.json files. When we update .mcp.json
to use run.sh instead of direct python paths, the cache retains old
versions, causing MCP servers to fail.

Now post-update.sh clears this cache, forcing Claude to read fresh
configs from the installed marketplace on next session start.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 14:55:42 -05:00
75492b0d38 Merge pull request 'development' (#220) from development into main
Reviewed-on: #220
2026-01-27 19:42:17 +00:00
54bb347ee1 Merge pull request 'fix(gitea-mcp): add fix/* and other branch patterns to permissions' (#219) from fix/branch-permission-patterns into development
Reviewed-on: #219
2026-01-27 19:42:02 +00:00
51bcc26ea9 fix(gitea-mcp): add fix/* and other branch patterns to permissions
The branch permission check was only allowing feat/, feature/, and dev/
prefixes for write operations. This blocked PR creation from fix/*
branches, forcing fallback to direct API calls.

Added patterns:
- fix/, bugfix/, hotfix/ - for bug fixes
- chore/, refactor/ - for maintenance
- docs/, test/ - for documentation and tests

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 14:41:18 -05:00
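A minimal sketch of the widened prefix check; the prefix list comes from the commit message above, everything else is illustrative.

```python
WRITE_ALLOWED_PREFIXES = (
    "feat/", "feature/", "dev/",
    "fix/", "bugfix/", "hotfix/",
    "chore/", "refactor/",
    "docs/", "test/",
)

def write_allowed(branch: str) -> bool:
    # Write operations are allowed on development and work branches (assumed rule).
    return branch == "development" or branch.startswith(WRITE_ALLOWED_PREFIXES)

assert write_allowed("fix/branch-permission-patterns")
assert not write_allowed("main")
```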
d813147ca7 Merge pull request 'development' (#218) from development into main
Reviewed-on: #218
2026-01-27 19:38:48 +00:00
dbb6d46fa4 Merge pull request 'fix(mcp): use wrapper scripts instead of venv symlinks' (#217) from fix/mcp-venv-wrapper-scripts into development
Reviewed-on: #217
2026-01-27 19:38:31 +00:00
e7050e2ad8 fix(mcp): use wrapper scripts instead of venv symlinks
Replace direct python path in .mcp.json with run.sh wrapper scripts
that automatically locate the venv in cache or local directory.

Problem: .venv symlinks are gitignored, causing them to be wiped on
every git operation. The SessionStart hook should recreate them but
this was unreliable, leading to repeated MCP server failures.

Solution: run.sh scripts that:
- First check ~/.cache/claude-mcp-venvs/leo-claude-mktplace/{server}/.venv
- Fallback to local .venv if exists
- Exit with helpful error if neither found

This eliminates dependency on symlinks entirely - the scripts are
tracked in git and will always be present after clone/pull/update.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 14:36:05 -05:00
206f1c378e Merge pull request 'development' (#216) from development into main
Reviewed-on: #216
2026-01-27 17:34:34 +00:00
35380594b4 Merge pull request 'feat(cmdb-assistant): add data quality validation v1.1.0' (#215) from feat/cmdb-assistant-data-quality into development
Reviewed-on: #215
2026-01-27 17:34:19 +00:00
0055c9ecf2 feat(gitea-mcp): add create_pull_request tool
Add missing create_pull_request tool to Gitea MCP server. This completes
the PR lifecycle - previously only had list/get/review/comment tools.

- Add create_pull_request to GiteaClient
- Add async wrapper to PullRequestTools with branch permissions
- Register tool in server.py with proper schema
- Parameters: title, body, head, base, labels (optional)
- Branch-aware security: only allowed on development/feature branches

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 12:30:48 -05:00
a74a048898 feat(cmdb-assistant): add data quality validation v1.1.0
Add validation hooks, best practices skill, and new commands to enforce
NetBox data quality standards:

Hooks:
- SessionStart: Test NetBox connectivity, report data quality issues
- PreToolUse: Validate VM/device parameters before create/update

New Commands:
- /cmdb-audit: Data quality analysis (vms, devices, naming, roles)
- /cmdb-register: Register current machine with running applications
- /cmdb-sync: Sync machine state with NetBox, detect drift

Best Practices Skill:
- Dependency order (regions -> sites -> devices -> VMs)
- Site/tenant/platform assignment requirements
- Naming conventions enforcement
- Role consolidation guidance

Updated agent with validation requirements, dependency order checks,
naming convention warnings, and duplicate prevention.

Marketplace: 5.0.0 -> 5.1.0
Plugin: 1.0.0 -> 1.1.0

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 12:27:23 -05:00
37676d4645 Merge pull request 'development' (#214) from development into main
Reviewed-on: #214
2026-01-27 16:57:12 +00:00
7492cfad66 Merge pull request 'fix(post-update): use venv-repair for instant symlink restoration' (#213) from fix/post-update-venv-repair into development
Reviewed-on: #213
2026-01-27 16:56:52 +00:00
59db9ea0b0 fix(post-update): use venv-repair for instant symlink restoration
Replace the old approach (create venvs in marketplace directory) with
the new venv-repair.sh approach (symlinks to external cache).

This ensures post-update.sh:
- Instantly restores symlinks if cache exists
- Only does full pip install on first run
- Works correctly after marketplace updates

Flow after this fix:
  Update marketplace → post-update.sh → venv-repair → Session works

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 11:55:43 -05:00
9234cf1add Merge pull request 'development' (#212) from development into main
Reviewed-on: #212
2026-01-27 16:49:33 +00:00
a21199d3db Merge pull request 'fix(mcp): persistent venv cache survives marketplace updates' (#211) from fix/persistent-venv-cache into development
Reviewed-on: #211
2026-01-27 16:49:19 +00:00
1abda1ca0f fix(mcp): persistent venv cache survives marketplace updates
Problem:
- Venvs in marketplace directory got deleted on every update
- Users had to manually run setup.sh and wait for full pip install
- This caused MCP servers to fail until manually fixed

Solution:
- Store venvs in external cache (~/.cache/claude-mcp-venvs/)
- Auto-repair symlinks via SessionStart hook (instant operation)
- Only run pip install on first use or when requirements change

Architecture:
  Cache (runtime) → Marketplaces → External venv cache

  The chain of symlinks ensures all three locations work:
  1. ~/.claude/plugins/cache/.../mcp-servers/* (runtime)
  2. ~/.claude/plugins/marketplaces/.../mcp-servers/* (install)
  3. ~/.cache/claude-mcp-venvs/* (persistent venvs)

Performance:
- First install: ~2-3 min (unchanged)
- After marketplace update: 0.03 sec (was 2-3 min)

Files:
- scripts/venv-repair.sh: Fast symlink restoration for hooks
- scripts/setup-venvs.sh: Full setup with external cache
- plugins/projman/hooks/startup-check.sh: Auto-repair on session start
- .gitignore: Ignore .venv symlinks

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 11:48:13 -05:00
0118bc7b9b Merge pull request 'development' (#210) from development into main
Reviewed-on: #210
2026-01-27 15:51:57 +00:00
bbb822db16 Merge pull request 'fix(post-update): create missing venvs instead of just warning' (#209) from fix/post-update-create-venvs into development
Reviewed-on: #209
2026-01-27 15:51:39 +00:00
08e1dcb1f5 fix(post-update): create missing venvs instead of just warning
Previously post-update.sh would only warn when venvs were missing,
requiring a separate setup.sh run. Now it automatically creates
missing venvs and installs dependencies including editable packages.

Also added viz-platform and contract-validator to the update list.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 10:41:04 -05:00
ec7141a5aa Merge pull request 'development' (#208) from development into main
Reviewed-on: #208
2026-01-26 22:27:33 +00:00
1b029d97b8 Merge pull request 'fix/data-platform-filter-index' (#207) from fix/data-platform-filter-index into development
Reviewed-on: #207
2026-01-26 22:27:19 +00:00
4ed3ed7e14 fix(data-platform): reset index after filter to prevent extra column
The filter tool was adding an __index_level_0__ column to results
because pandas query() preserves the original index, which gets
converted to a column when storing the DataFrame.

Added .reset_index(drop=True) after query() to drop the preserved
index and create a clean sequential index.

Fixes #203

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 17:25:37 -05:00
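A minimal reproduction of the pandas behavior described above, and the one-line fix.

```python
import pandas as pd

df = pd.DataFrame({"city": ["Lisbon", "Porto", "Faro"], "population": [545, 231, 68]})

# query() preserves the original index; here the result keeps index [1, 2].
# When such a frame is stored with its index (e.g. written to parquet), the
# unnamed non-default index can surface as an __index_level_0__ column.
filtered = df.query("population < 300")

# The fix: drop the preserved index and return to a clean sequential one.
clean = filtered.reset_index(drop=True)
```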
c5232bd7bf Merge pull request 'development' (#202) from development into main
Reviewed-on: #202
2026-01-26 21:49:08 +00:00
f9e23fd6eb Merge pull request 'fix/setup-editable-install' (#201) from fix/setup-editable-install into development
Reviewed-on: #201
2026-01-26 21:48:50 +00:00
457ed9c9ff fix(setup): install MCP packages in editable mode for viz-platform and contract-validator
The setup script only installed requirements.txt dependencies but not the
local package itself, causing MCP servers to fail with ModuleNotFoundError.

Changes:
- Add `pip install -e .` for servers with pyproject.toml
- Add viz-platform and contract-validator to MCP server setup
- Add symlink verification for new plugins
- Update version banner to v5.0.0

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 16:44:11 -05:00
dadb4d3576 Merge pull request 'Merge development into main: contract-validator setup docs' (#200) from development into main 2026-01-26 20:57:23 +00:00
ba771f100f Merge pull request 'docs: add contract-validator initial-setup and update version tags' (#199) from docs/contract-validator-setup into development 2026-01-26 20:57:09 +00:00
2b9cb5defd docs: add contract-validator initial-setup and update version tags
- Add /initial-setup command for contract-validator plugin
- Add contract-validator section to README.md (NEW in v5.0.0)
- Update data-platform version tag: *NEW* -> *NEW in v4.0.0*
- Update viz-platform version tag: *NEW* -> *NEW in v4.0.0*
- Add Contract Validator MCP Server section to README.md
- Add contract-validator to test commands table and repo structure

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 15:55:17 -05:00
173 changed files with 11414 additions and 242 deletions

marketplace.json

@@ -6,12 +6,12 @@
   },
   "metadata": {
     "description": "Project management plugins with Gitea and NetBox integrations",
-    "version": "5.0.0"
+    "version": "5.3.0"
   },
   "plugins": [
     {
       "name": "projman",
-      "version": "3.2.0",
+      "version": "3.3.0",
       "description": "Sprint planning and project management with Gitea integration",
       "source": "./plugins/projman",
       "author": {
@@ -27,7 +27,7 @@
     },
     {
       "name": "doc-guardian",
-      "version": "1.0.0",
+      "version": "1.1.0",
       "description": "Automatic documentation drift detection and synchronization",
       "source": "./plugins/doc-guardian",
       "author": {
@@ -75,8 +75,8 @@
     },
     {
       "name": "cmdb-assistant",
-      "version": "1.0.0",
-      "description": "NetBox CMDB integration for infrastructure management",
+      "version": "1.2.0",
+      "description": "NetBox CMDB integration with data quality validation and machine registration",
       "source": "./plugins/cmdb-assistant",
       "author": {
         "name": "Leo Miranda",
@@ -86,12 +86,12 @@
       "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
       "mcpServers": ["./.mcp.json"],
       "category": "infrastructure",
-      "tags": ["cmdb", "netbox", "dcim", "ipam"],
+      "tags": ["cmdb", "netbox", "dcim", "ipam", "data-quality", "validation"],
       "license": "MIT"
     },
     {
       "name": "claude-config-maintainer",
-      "version": "1.0.0",
+      "version": "1.1.0",
       "description": "CLAUDE.md optimization and maintenance for Claude Code projects",
       "source": "./plugins/claude-config-maintainer",
       "author": {
@@ -106,7 +106,7 @@
     },
     {
       "name": "clarity-assist",
-      "version": "1.0.0",
+      "version": "1.2.0",
       "description": "Prompt optimization and requirement clarification with ND-friendly accommodations",
       "source": "./plugins/clarity-assist",
       "author": {
@@ -121,7 +121,7 @@
     },
     {
       "name": "git-flow",
-      "version": "1.0.0",
+      "version": "1.2.0",
       "description": "Git workflow automation with intelligent commit messages and branch management",
       "source": "./plugins/git-flow",
       "author": {
@@ -136,7 +136,7 @@
     },
     {
       "name": "pr-review",
-      "version": "1.0.0",
+      "version": "1.1.0",
       "description": "Multi-agent pull request review with confidence scoring and actionable feedback",
       "source": "./plugins/pr-review",
       "author": {
@@ -152,7 +152,7 @@
     },
     {
       "name": "data-platform",
-      "version": "1.0.0",
+      "version": "1.2.0",
       "description": "Data engineering tools with pandas, PostgreSQL/PostGIS, and dbt integration",
       "source": "./plugins/data-platform",
       "author": {
@@ -168,7 +168,7 @@
     },
     {
       "name": "viz-platform",
-      "version": "1.0.0",
+      "version": "1.1.0",
       "description": "Visualization tools with Dash Mantine Components validation, Plotly charts, and theming",
       "source": "./plugins/viz-platform",
       "author": {
@@ -184,7 +184,7 @@
     },
     {
       "name": "contract-validator",
-      "version": "1.0.0",
+      "version": "1.2.0",
       "description": "Cross-plugin compatibility validation and Claude.md agent verification",
       "source": "./plugins/contract-validator",
       "author": {

.gitignore

@@ -31,6 +31,8 @@ venv/
 ENV/
 env/
 .venv/
+.venv
+**/.venv
 
 # PyCharm
 .idea/

CHANGELOG.md

@@ -4,6 +4,196 @@ All notable changes to the Leo Claude Marketplace will be documented in this fil
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## [5.3.0] - 2026-01-28
### Added
#### Sprint 6: Visual Branding Overhaul
Consistent visual headers and progress tracking across all plugins.
**Visual Output Headers (109 files):**
- **Projman**: Double-line headers (╔═╗) with phase indicators (🎯 PLANNING, ⚡ EXECUTION, 🏁 CLOSING)
- **Other Plugins**: Single-line headers (┌─┐) with plugin icons
- **All 23 agents** updated with Visual Output Requirements section
- **All 86 commands** updated with Visual Output section and header templates
**Plugin Icon Registry:**
| Plugin | Icon |
|--------|------|
| projman | 📋 |
| code-sentinel | 🔒 |
| doc-guardian | 📝 |
| pr-review | 🔍 |
| clarity-assist | 💬 |
| git-flow | 🔀 |
| cmdb-assistant | 🖥️ |
| data-platform | 📊 |
| viz-platform | 🎨 |
| contract-validator | ✅ |
| claude-config-maintainer | ⚙️ |
**Wiki Branding Specification (4 pages):**
- [branding/visual-spec](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/branding%2Fvisual-spec) - Central specification
- [branding/plugin-registry](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/branding%2Fplugin-registry) - Icons and styles
- [branding/header-templates](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/branding%2Fheader-templates) - Copy-paste templates
- [branding/progress-templates](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/branding%2Fprogress-templates) - Sprint progress blocks
### Fixed
- **Docs:** Version sync - CLAUDE.md, marketplace.json, README.md now consistent
- **Docs:** Added 18 missing commands from Sprint 4 & 5 to README.md and COMMANDS-CHEATSHEET.md
- **MCP:** Registered `/sprint-diagram` as invokable skill
**Sprint Completed:**
- Milestone: Sprint 6 - Visual Branding Overhaul (closed 2026-01-28)
- Issues: #272, #273, #274, #275, #276, #277, #278
- PRs: #284, #285
- Wiki: [Sprint 6 Lessons](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/lessons/sprints/sprint-6---visual-branding-and-documentation-maintenance)
---
## [5.2.0] - 2026-01-28
### Added
#### Sprint 5: Documentation (V5.2.0 Plugin Enhancements)
Documentation and guides for the plugin enhancements initiative.
**git-flow v1.2.0:**
- **Branching Strategy Guide** (`docs/BRANCHING-STRATEGY.md`) - Complete documentation of `development -> staging -> main` promotion flow with Mermaid diagrams
**clarity-assist v1.2.0:**
- **ND Support Guide** (`docs/ND-SUPPORT.md`) - Documentation of neurodivergent accommodations, features, and usage examples
**Gitea MCP Server:**
- **`update_issue` milestone parameter** - Can now assign/change milestones programmatically
**Sprint Completed:**
- Milestone: Sprint 5 - Documentation (closed 2026-01-28)
- Issues: #266, #267, #268, #269
- Wiki: [Change V5.2.0: Plugin Enhancements (Sprint 5 Documentation)](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/Change-V5.2.0%3A-Plugin-Enhancements-%28Sprint-5-Documentation%29)
---
#### Sprint 4: Commands (V5.2.0 Plugin Enhancements)
Implementation of 18 new user-facing commands across 8 plugins.
**projman v3.3.0:**
- **`/sprint-diagram`** - Generate Mermaid diagram of sprint issues with dependencies and status
**pr-review v1.1.0:**
- **`/pr-diff`** - Formatted diff with inline review comments and annotations
- **Confidence threshold config** - `PR_REVIEW_CONFIDENCE_THRESHOLD` env var (default: 0.7)
**data-platform v1.2.0:**
- **`/data-quality`** - DataFrame quality checks (nulls, duplicates, types, outliers) with pass/warn/fail scoring
- **`/lineage-viz`** - dbt lineage visualization as Mermaid diagrams
- **`/dbt-test`** - Formatted dbt test runner with summary and failure details
**viz-platform v1.1.0:**
- **`/chart-export`** - Export charts to PNG, SVG, PDF via kaleido
- **`/accessibility-check`** - Color blind validation (WCAG contrast ratios)
- **`/breakpoints`** - Responsive layout breakpoint configuration
- **New MCP tools**: `chart_export`, `accessibility_validate_colors`, `accessibility_validate_theme`, `accessibility_suggest_alternative`, `layout_set_breakpoints`
- **New dependency**: kaleido>=0.2.1 for chart rendering
**contract-validator v1.2.0:**
- **`/dependency-graph`** - Mermaid visualization of plugin dependencies with data flow
**doc-guardian v1.1.0:**
- **`/changelog-gen`** - Generate changelog from conventional commits
- **`/doc-coverage`** - Documentation coverage metrics by function/class
- **`/stale-docs`** - Flag documentation behind code changes
**claude-config-maintainer v1.1.0:**
- **`/config-diff`** - Track CLAUDE.md changes over time with behavioral impact analysis
- **`/config-lint`** - 31 lint rules for CLAUDE.md (security, structure, content, format, best practices)
**cmdb-assistant v1.2.0:**
- **`/cmdb-topology`** - Infrastructure topology diagrams (rack, network, site views)
- **`/change-audit`** - NetBox audit trail queries with filtering
- **`/ip-conflicts`** - Detect IP conflicts and overlapping prefixes
**Sprint Completed:**
- Milestone: Sprint 4 - Commands (closed 2026-01-28)
- Issues: #241-#258 (18/18 closed)
- Wiki: [Change V5.2.0: Plugin Enhancements (Sprint 4 Commands)](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/Change-V5.2.0%3A-Plugin-Enhancements-%28Sprint-4-Commands%29)
- Lessons: [Sprint 4 - Plugin Commands Implementation](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/lessons/sprints/sprint-4---plugin-commands-implementation)
### Fixed
- **MCP:** Project directory detection - all run.sh scripts now capture `CLAUDE_PROJECT_DIR` from PWD before changing directories
- **Docs:** Added Gitea auto-close behavior and MCP session restart notes to DEBUGGING-CHECKLIST.md
---
#### Sprint 3: Hooks (V5.2.0 Plugin Enhancements)
Implementation of 6 foundational hooks across 4 plugins.
**git-flow v1.1.0:**
- **Commit message enforcement hook** - PreToolUse hook validates conventional commit format on all `git commit` commands (not just `/commit`). Blocks invalid commits with format guidance.
- **Branch name validation hook** - PreToolUse hook validates branch naming on `git checkout -b` and `git switch -c`. Enforces `type/description` format, lowercase, max 50 chars.
**clarity-assist v1.1.0:**
- **Vagueness detection hook** - UserPromptSubmit hook detects vague prompts and suggests `/clarify` when ambiguity, missing context, or unclear scope detected.
**data-platform v1.1.0:**
- **Schema diff detection hook** - PostToolUse hook monitors edits to schema files (dbt models, SQL migrations). Warns on breaking changes (column removal, type narrowing, constraint addition).
**contract-validator v1.1.0:**
- **SessionStart auto-validate hook** - Smart validation that only runs when plugin files changed since last check. Detects interface compatibility issues at session start.
- **Breaking change detection hook** - PostToolUse hook monitors plugin interface files (README.md, plugin.json). Warns when changes would break consumers.
**Sprint Completed:**
- Milestone: Sprint 3 - Hooks (closed 2026-01-28)
- Issues: #225, #226, #227, #228, #229, #230
- Wiki: [Change V5.2.0: Plugin Enhancements Proposal](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/Change-V5.2.0:-Plugin-Enhancements-Proposal)
- Lessons: Background agent permissions, agent runaway detection, MCP branch detection bug
### Known Issues
- **MCP Bug #231:** Branch detection in Gitea MCP runs from installed plugin directory, not user's project directory. Workaround: close issues via Gitea web UI.
---
#### Gitea MCP Server - create_pull_request Tool
- **`create_pull_request`**: Create new pull requests via MCP
- Parameters: title, body, head (source branch), base (target branch), labels
- Branch-aware security: only allowed on development/feature branches
- Completes the PR lifecycle (was previously missing - only had list/get/review/comment)
#### cmdb-assistant v1.1.0 - Data Quality Validation
- **SessionStart Hook**: Tests NetBox API connectivity at session start
- Warns if VMs exist without site assignment
- Warns if devices exist without platform
- Non-blocking: displays warning, doesn't prevent work
- **PreToolUse Hook**: Validates input parameters before VM/device operations
- Warns about missing site, tenant, platform
- Non-blocking: suggests best practices without blocking
- **`/cmdb-audit` Command**: Comprehensive data quality analysis
- Scopes: all, vms, devices, naming, roles
- Identifies Critical/High/Medium/Low issues
- Provides prioritized remediation recommendations
- **`/cmdb-register` Command**: Register current machine into NetBox
- Discovers system info: hostname, platform, hardware, network interfaces
- Discovers running apps: Docker containers, systemd services
- Creates device with interfaces, IPs, and sets primary IP
- Creates cluster and VMs for Docker containers
- **`/cmdb-sync` Command**: Sync machine state with NetBox
- Compares current state with NetBox record
- Shows diff of changes (interfaces, IPs, containers)
- Updates with user confirmation
- Supports --full and --dry-run flags
- **NetBox Best Practices Skill**: Reference documentation
- Dependency order for object creation
- Naming conventions (`{role}-{site}-{number}`, `{env}-{app}-{number}`)
- Role consolidation guidance
- Site/tenant/platform assignment requirements
- **Agent Enhancement**: Updated cmdb-assistant agent with validation requirements
- Proactive suggestions for missing fields
- Naming convention checks
- Dependency order enforcement
- Duplicate prevention
---
## [5.0.0] - 2026-01-26
### Added

CLAUDE.md

@@ -50,24 +50,24 @@ See `docs/DEBUGGING-CHECKLIST.md` for details on cache timing.
 ## Project Overview
 **Repository:** leo-claude-mktplace
-**Version:** 5.0.0
+**Version:** 5.3.0
 **Status:** Production Ready
 A plugin marketplace for Claude Code containing:
 | Plugin | Description | Version |
 |--------|-------------|---------|
-| `projman` | Sprint planning and project management with Gitea integration | 3.2.0 |
-| `git-flow` | Git workflow automation with smart commits and branch management | 1.0.0 |
-| `pr-review` | Multi-agent PR review with confidence scoring | 1.0.0 |
-| `clarity-assist` | Prompt optimization with ND-friendly accommodations | 1.0.0 |
-| `doc-guardian` | Automatic documentation drift detection and synchronization | 1.0.0 |
+| `projman` | Sprint planning and project management with Gitea integration | 3.3.0 |
+| `git-flow` | Git workflow automation with smart commits and branch management | 1.2.0 |
+| `pr-review` | Multi-agent PR review with confidence scoring | 1.1.0 |
+| `clarity-assist` | Prompt optimization with ND-friendly accommodations | 1.2.0 |
+| `doc-guardian` | Automatic documentation drift detection and synchronization | 1.1.0 |
 | `code-sentinel` | Security scanning and code refactoring tools | 1.0.0 |
-| `claude-config-maintainer` | CLAUDE.md optimization and maintenance | 1.0.0 |
-| `cmdb-assistant` | NetBox CMDB integration for infrastructure management | 1.0.0 |
-| `data-platform` | pandas, PostgreSQL, and dbt integration for data engineering | 1.0.0 |
-| `viz-platform` | DMC validation, Plotly charts, and theming for dashboards | 1.0.0 |
-| `contract-validator` | Cross-plugin compatibility validation and agent verification | 1.0.0 |
+| `claude-config-maintainer` | CLAUDE.md optimization and maintenance | 1.1.0 |
+| `cmdb-assistant` | NetBox CMDB integration for infrastructure management | 1.2.0 |
+| `data-platform` | pandas, PostgreSQL, and dbt integration for data engineering | 1.2.0 |
+| `viz-platform` | DMC validation, Plotly charts, and theming for dashboards | 1.1.0 |
+| `contract-validator` | Cross-plugin compatibility validation and agent verification | 1.2.0 |
 | `project-hygiene` | Post-task cleanup automation via hooks | 0.1.0 |
 ## Quick Start
@@ -85,16 +85,17 @@ A plugin marketplace for Claude Code containing:
 | Category | Commands |
 |----------|----------|
 | **Setup** | `/initial-setup`, `/project-init`, `/project-sync` |
-| **Sprint** | `/sprint-plan`, `/sprint-start`, `/sprint-status`, `/sprint-close` |
+| **Sprint** | `/sprint-plan`, `/sprint-start`, `/sprint-status`, `/sprint-close`, `/sprint-diagram` |
 | **Quality** | `/review`, `/test-check`, `/test-gen` |
 | **Versioning** | `/suggest-version` |
-| **PR Review** | `/pr-review:initial-setup`, `/pr-review:project-init` |
-| **Docs** | `/doc-audit`, `/doc-sync` |
+| **PR Review** | `/pr-review`, `/pr-summary`, `/pr-findings`, `/pr-diff` |
+| **Docs** | `/doc-audit`, `/doc-sync`, `/changelog-gen`, `/doc-coverage`, `/stale-docs` |
 | **Security** | `/security-scan`, `/refactor`, `/refactor-dry` |
-| **Config** | `/config-analyze`, `/config-optimize` |
-| **Data** | `/ingest`, `/profile`, `/schema`, `/explain`, `/lineage`, `/run` |
-| **Visualization** | `/component`, `/chart`, `/dashboard`, `/theme`, `/theme-new`, `/theme-css` |
-| **Validation** | `/validate-contracts`, `/check-agent`, `/list-interfaces` |
+| **Config** | `/config-analyze`, `/config-optimize`, `/config-diff`, `/config-lint` |
+| **Data** | `/ingest`, `/profile`, `/schema`, `/explain`, `/lineage`, `/lineage-viz`, `/run`, `/dbt-test`, `/data-quality` |
+| **Visualization** | `/component`, `/chart`, `/chart-export`, `/dashboard`, `/theme`, `/theme-new`, `/theme-css`, `/accessibility-check`, `/breakpoints` |
+| **Validation** | `/validate-contracts`, `/check-agent`, `/list-interfaces`, `/dependency-graph` |
+| **CMDB** | `/cmdb-search`, `/cmdb-device`, `/cmdb-ip`, `/cmdb-site`, `/cmdb-audit`, `/cmdb-register`, `/cmdb-sync`, `/cmdb-topology`, `/change-audit`, `/ip-conflicts` |
 | **Debug** | `/debug-report`, `/debug-review` |
 ## Repository Structure
@@ -376,4 +377,4 @@ The script will:
 ---
-**Last Updated:** 2026-01-24
+**Last Updated:** 2026-01-28

View File

@@ -1,4 +1,4 @@
# Leo Claude Marketplace - v5.3.0

A collection of Claude Code plugins for project management, infrastructure automation, and development workflows.

@@ -19,7 +19,7 @@ AI-guided sprint planning with full Gitea integration. Transforms a proven 15-sp
- Branch-aware security (development/staging/production)
- Pre-sprint-close code quality review and test verification

**Commands:** `/sprint-plan`, `/sprint-start`, `/sprint-status`, `/sprint-close`, `/sprint-diagram`, `/labels-sync`, `/initial-setup`, `/project-init`, `/project-sync`, `/review`, `/test-check`, `/test-gen`, `/debug-report`, `/debug-review`, `/suggest-version`, `/proposal-status`

#### [git-flow](./plugins/git-flow/README.md) *NEW in v3.0.0*
**Git Workflow Automation**

@@ -44,14 +44,27 @@ Comprehensive pull request review using specialized agents.
- Actionable feedback with suggested fixes
- Gitea integration for automated review submission

**Commands:** `/pr-review`, `/pr-summary`, `/pr-findings`, `/pr-diff`, `/initial-setup`, `/project-init`, `/project-sync`

#### [claude-config-maintainer](./plugins/claude-config-maintainer/README.md)
**CLAUDE.md Optimization and Maintenance**

Analyze, optimize, and create CLAUDE.md configuration files for Claude Code projects.

**Commands:** `/config-analyze`, `/config-optimize`, `/config-init`, `/config-diff`, `/config-lint`
#### [contract-validator](./plugins/contract-validator/README.md) *NEW in v5.0.0*
**Cross-Plugin Compatibility Validation**
Validate plugin marketplaces for command conflicts, tool overlaps, and broken agent references.
- Interface parsing from plugin README.md files
- Agent extraction from CLAUDE.md definitions
- Pairwise compatibility checks between all plugins
- Data flow validation for agent sequences
- Markdown or JSON reports with actionable suggestions
**Commands:** `/validate-contracts`, `/check-agent`, `/list-interfaces`, `/dependency-graph`, `/initial-setup`
### Productivity

@@ -71,7 +84,7 @@ Transform vague requests into clear specifications using structured methodology.
Automatic documentation drift detection and synchronization.

**Commands:** `/doc-audit`, `/doc-sync`, `/changelog-gen`, `/doc-coverage`, `/stale-docs`

#### [project-hygiene](./plugins/project-hygiene/README.md)
**Post-Task Cleanup Automation**

@@ -94,11 +107,11 @@ Security vulnerability detection and code refactoring tools.
Full CRUD operations for network infrastructure management directly from Claude Code.

**Commands:** `/initial-setup`, `/cmdb-search`, `/cmdb-device`, `/cmdb-ip`, `/cmdb-site`, `/cmdb-audit`, `/cmdb-register`, `/cmdb-sync`, `/cmdb-topology`, `/change-audit`, `/ip-conflicts`

### Data Engineering

#### [data-platform](./plugins/data-platform/README.md) *NEW in v4.0.0*
**pandas, PostgreSQL/PostGIS, and dbt Integration**

Comprehensive data engineering toolkit with persistent DataFrame storage.

@@ -109,11 +122,11 @@ Comprehensive data engineering toolkit with persistent DataFrame storage.
- 100k row limit with chunking support
- Auto-detection of dbt projects

**Commands:** `/ingest`, `/profile`, `/schema`, `/explain`, `/lineage`, `/lineage-viz`, `/run`, `/dbt-test`, `/data-quality`, `/initial-setup`

### Visualization

#### [viz-platform](./plugins/viz-platform/README.md) *NEW in v4.0.0*
**Dash Mantine Components Validation and Theming**

Visualization toolkit with version-locked component validation and design token theming.

@@ -125,7 +138,7 @@ Visualization toolkit with version-locked component validation and design token
- 5 Page tools for multi-page app structure
- Dual theme storage: user-level and project-level

**Commands:** `/chart`, `/chart-export`, `/dashboard`, `/theme`, `/theme-new`, `/theme-css`, `/component`, `/accessibility-check`, `/breakpoints`, `/initial-setup`
## MCP Servers

@@ -157,7 +170,7 @@ Comprehensive NetBox REST API integration for infrastructure management.
| Virtualization | Clusters, VMs, Interfaces |
| Extras | Tags, Custom Fields, Audit Log |

### Data Platform MCP Server (shared) *NEW in v4.0.0*

pandas, PostgreSQL/PostGIS, and dbt integration for data engineering.

@@ -168,7 +181,7 @@ pandas, PostgreSQL/PostGIS, and dbt integration for data engineering.
| PostGIS | `st_tables`, `st_geometry_type`, `st_srid`, `st_extent` |
| dbt | `dbt_parse`, `dbt_run`, `dbt_test`, `dbt_build`, `dbt_compile`, `dbt_ls`, `dbt_docs_generate`, `dbt_lineage` |

### Viz Platform MCP Server (shared) *NEW in v4.0.0*

Dash Mantine Components validation and visualization tools.

@@ -180,6 +193,16 @@ Dash Mantine Components validation and visualization tools.
| Theme | `theme_create`, `theme_extend`, `theme_validate`, `theme_export_css`, `theme_list`, `theme_activate` |
| Page | `page_create`, `page_add_navbar`, `page_set_auth`, `page_list`, `page_get_app_config` |
### Contract Validator MCP Server (shared) *NEW in v5.0.0*
Cross-plugin compatibility validation tools.
| Category | Tools |
|----------|-------|
| Parse | `parse_plugin_interface`, `parse_claude_md_agents` |
| Validation | `validate_compatibility`, `validate_agent_refs`, `validate_data_flow` |
| Report | `generate_compatibility_report`, `list_issues` |
## Installation

### Prerequisites

@@ -278,6 +301,7 @@ After installing plugins, the `/plugin` command may show `(no content)` - this i
| cmdb-assistant | `/cmdb-assistant:cmdb-search` |
| data-platform | `/data-platform:ingest` |
| viz-platform | `/viz-platform:chart` |
| contract-validator | `/contract-validator:validate-contracts` |

## Repository Structure

@@ -289,14 +313,16 @@ leo-claude-mktplace/
│ ├── gitea/ # Gitea MCP (issues, PRs, wiki)
│ ├── netbox/ # NetBox MCP (CMDB)
│ ├── data-platform/ # Data engineering (pandas, PostgreSQL, dbt)
│ ├── viz-platform/ # Visualization (DMC, Plotly, theming)
│ └── contract-validator/ # Cross-plugin validation (v5.0.0)
├── plugins/ # All plugins
│ ├── projman/ # Sprint management
│ ├── git-flow/ # Git workflow automation
│ ├── pr-review/ # PR review
│ ├── clarity-assist/ # Prompt optimization
│ ├── data-platform/ # Data engineering
│ ├── viz-platform/ # Visualization
│ ├── contract-validator/ # Cross-plugin validation (NEW)
│ ├── claude-config-maintainer/ # CLAUDE.md optimization
│ ├── cmdb-assistant/ # NetBox CMDB integration
│ ├── doc-guardian/ # Documentation drift detection

View File

@@ -2,7 +2,7 @@

**This file defines ALL valid paths in this repository. No exceptions. No inference. No assumptions.**

Last Updated: 2026-01-27 (v5.1.0)

---

@@ -165,7 +165,11 @@ leo-claude-mktplace/
│ ├── setup.sh # Initial setup (create venvs, config templates)
│ ├── post-update.sh # Post-update (rebuild venvs, verify symlinks)
│ ├── check-venv.sh # Check if venvs exist (for hooks)
│ ├── validate-marketplace.sh # Marketplace compliance validation
│ ├── verify-hooks.sh # Verify all hooks use correct event types
│ ├── setup-venvs.sh # Setup/repair MCP server venvs
│ ├── venv-repair.sh # Repair broken venv symlinks
│ └── release.sh # Release automation with version bumping
├── CLAUDE.md
├── README.md
├── LICENSE

View File

@@ -23,6 +23,8 @@ Quick reference for all commands in the Leo Claude Marketplace.
| **projman** | `/debug-report` | | X | Run diagnostics and create structured issue in marketplace |
| **projman** | `/debug-review` | | X | Investigate diagnostic issues and propose fixes with approval gates |
| **projman** | `/suggest-version` | | X | Analyze CHANGELOG and recommend semantic version bump |
| **projman** | `/proposal-status` | | X | View proposal and implementation hierarchy with status |
| **projman** | `/sprint-diagram` | | X | Generate Mermaid diagram of sprint issues with dependencies |
| **git-flow** | `/commit` | | X | Create commit with auto-generated conventional message |
| **git-flow** | `/commit-push` | | X | Commit and push to remote in one operation |
| **git-flow** | `/commit-merge` | | X | Commit current changes, then merge into target branch |
@@ -38,10 +40,14 @@ Quick reference for all commands in the Leo Claude Marketplace.
| **pr-review** | `/pr-review` | | X | Full multi-agent PR review with confidence scoring |
| **pr-review** | `/pr-summary` | | X | Quick summary of PR changes |
| **pr-review** | `/pr-findings` | | X | List and filter review findings by category/severity |
| **pr-review** | `/pr-diff` | | X | Formatted diff with inline review comments and annotations |
| **clarity-assist** | `/clarify` | | X | Full 4-D prompt optimization with ND accommodations |
| **clarity-assist** | `/quick-clarify` | | X | Rapid single-pass clarification for simple requests |
| **doc-guardian** | `/doc-audit` | | X | Full documentation audit - scans for doc drift |
| **doc-guardian** | `/doc-sync` | | X | Synchronize pending documentation updates |
| **doc-guardian** | `/changelog-gen` | | X | Generate changelog from conventional commits |
| **doc-guardian** | `/doc-coverage` | | X | Documentation coverage metrics by function/class |
| **doc-guardian** | `/stale-docs` | | X | Flag documentation behind code changes |
| **doc-guardian** | *PostToolUse hook* | X | | Silently detects doc drift on Write/Edit |
| **code-sentinel** | `/security-scan` | | X | Full security audit (SQL injection, XSS, secrets, etc.) |
| **code-sentinel** | `/refactor` | | X | Apply refactoring patterns to improve code |
@@ -50,11 +56,19 @@ Quick reference for all commands in the Leo Claude Marketplace.
| **claude-config-maintainer** | `/config-analyze` | | X | Analyze CLAUDE.md for optimization opportunities |
| **claude-config-maintainer** | `/config-optimize` | | X | Optimize CLAUDE.md structure with preview/backup |
| **claude-config-maintainer** | `/config-init` | | X | Initialize new CLAUDE.md for a project |
| **claude-config-maintainer** | `/config-diff` | | X | Track CLAUDE.md changes over time with behavioral impact |
| **claude-config-maintainer** | `/config-lint` | | X | Lint CLAUDE.md for anti-patterns and best practices |
| **cmdb-assistant** | `/initial-setup` | | X | Setup wizard for NetBox MCP server |
| **cmdb-assistant** | `/cmdb-search` | | X | Search NetBox for devices, IPs, sites |
| **cmdb-assistant** | `/cmdb-device` | | X | Manage network devices (create, view, update, delete) |
| **cmdb-assistant** | `/cmdb-ip` | | X | Manage IP addresses and prefixes |
| **cmdb-assistant** | `/cmdb-site` | | X | Manage sites, locations, racks, and regions |
| **cmdb-assistant** | `/cmdb-audit` | | X | Data quality analysis (VMs, devices, naming, roles) |
| **cmdb-assistant** | `/cmdb-register` | | X | Register current machine into NetBox with running apps |
| **cmdb-assistant** | `/cmdb-sync` | | X | Sync machine state with NetBox (detect drift, update) |
| **cmdb-assistant** | `/cmdb-topology` | | X | Infrastructure topology diagrams (rack, network, site views) |
| **cmdb-assistant** | `/change-audit` | | X | NetBox audit trail queries with filtering |
| **cmdb-assistant** | `/ip-conflicts` | | X | Detect IP conflicts and overlapping prefixes |
| **project-hygiene** | *PostToolUse hook* | X | | Removes temp files, warns about unexpected root files |
| **data-platform** | `/ingest` | | X | Load data from CSV, Parquet, JSON into DataFrame |
| **data-platform** | `/profile` | | X | Generate data profiling report with statistics |
@@ -62,6 +76,9 @@ Quick reference for all commands in the Leo Claude Marketplace.
| **data-platform** | `/explain` | | X | Explain query execution plan |
| **data-platform** | `/lineage` | | X | Show dbt model lineage and dependencies |
| **data-platform** | `/run` | | X | Run dbt models with validation |
| **data-platform** | `/lineage-viz` | | X | dbt lineage visualization as Mermaid diagrams |
| **data-platform** | `/dbt-test` | | X | Formatted dbt test runner with summary and failure details |
| **data-platform** | `/data-quality` | | X | DataFrame quality checks (nulls, duplicates, types, outliers) |
| **data-platform** | `/initial-setup` | | X | Setup wizard for data-platform MCP servers |
| **data-platform** | *SessionStart hook* | X | | Checks PostgreSQL connection (non-blocking warning) |
| **viz-platform** | `/initial-setup` | | X | Setup wizard for viz-platform MCP server |
@@ -71,7 +88,15 @@ Quick reference for all commands in the Leo Claude Marketplace.
| **viz-platform** | `/theme-new` | | X | Create new custom theme with design tokens |
| **viz-platform** | `/theme-css` | | X | Export theme as CSS custom properties |
| **viz-platform** | `/component` | | X | Inspect DMC component props and validation |
| **viz-platform** | `/chart-export` | | X | Export charts to PNG, SVG, PDF via kaleido |
| **viz-platform** | `/accessibility-check` | | X | Color blind validation (WCAG contrast ratios) |
| **viz-platform** | `/breakpoints` | | X | Configure responsive layout breakpoints |
| **viz-platform** | *SessionStart hook* | X | | Checks DMC version (non-blocking warning) |
| **contract-validator** | `/validate-contracts` | | X | Full marketplace compatibility validation |
| **contract-validator** | `/check-agent` | | X | Validate single agent definition |
| **contract-validator** | `/list-interfaces` | | X | Show all plugin interfaces |
| **contract-validator** | `/dependency-graph` | | X | Mermaid visualization of plugin dependencies |
| **contract-validator** | `/initial-setup` | | X | Setup wizard for contract-validator MCP |

---

@@ -87,6 +112,7 @@ Quick reference for all commands in the Leo Claude Marketplace.
| **Infrastructure** | cmdb-assistant | NetBox CMDB management |
| **Data Engineering** | data-platform | pandas, PostgreSQL, dbt operations |
| **Visualization** | viz-platform | DMC validation, Plotly charts, theming |
| **Validation** | contract-validator | Cross-plugin compatibility checks |
| **Maintenance** | project-hygiene | Automatic cleanup |

---

@@ -245,9 +271,10 @@ Some plugins require MCP server connectivity:
| cmdb-assistant | NetBox | Infrastructure CMDB |
| data-platform | pandas, PostgreSQL, dbt | DataFrames, database queries, dbt builds |
| viz-platform | viz-platform | DMC validation, charts, layouts, themes, pages |
| contract-validator | contract-validator | Plugin interface parsing, compatibility validation |

Ensure credentials are configured in `~/.config/claude/gitea.env`, `~/.config/claude/netbox.env`, or `~/.config/claude/postgres.env`.

---

*Last Updated: 2026-01-28*

View File

@@ -171,8 +171,7 @@ This marketplace uses a **hybrid configuration** approach:
│ PROJECT-LEVEL (once per project)                                │
│ <project-root>/.env                                             │
├─────────────────────────────────────────────────────────────────┤
│ GITEA_REPO          │ Repository as owner/repo format           │
│ GIT_WORKFLOW_STYLE  │ (optional) Override system default        │
│ PR_REVIEW_*         │ (optional) PR review settings             │
└─────────────────────────────────────────────────────────────────┘

@@ -262,8 +261,7 @@ In each project root:

```bash
cat > .env << 'EOF'
GITEA_REPO=your-organization/your-repo-name
EOF
```

@@ -307,7 +305,7 @@ GITEA_API_TOKEN=your_gitea_token_here
| `GITEA_API_URL` | Gitea API endpoint (with `/api/v1`) | `https://gitea.example.com/api/v1` |
| `GITEA_API_TOKEN` | Personal access token | `abc123...` |

**Note:** `GITEA_REPO` is configured at the project level in `owner/repo` format since different projects may belong to different organizations.

**Generating a Gitea Token:**
1. Log into Gitea → **User Icon** → **Settings**

@@ -362,9 +360,8 @@ GIT_CO_AUTHOR=true
Create `.env` in each project root:

```bash
# Required for projman, pr-review (use owner/repo format)
GITEA_REPO=your-organization/your-repo-name

# Optional: Override git-flow defaults
GIT_WORKFLOW_STYLE=pr-required

@@ -377,8 +374,7 @@ PR_REVIEW_AUTO_SUBMIT=false
| Variable | Required | Description |
|----------|----------|-------------|
| `GITEA_REPO` | Yes | Repository in `owner/repo` format (e.g., `my-org/my-repo`) |
| `GIT_WORKFLOW_STYLE` | No | Override system default |
| `PR_REVIEW_*` | No | PR review settings |

@@ -388,8 +384,8 @@ PR_REVIEW_AUTO_SUBMIT=false
| Plugin | System Config | Project Config | Setup Commands |
|--------|---------------|----------------|----------------|
| **projman** | gitea.env | .env (GITEA_REPO=owner/repo) | `/initial-setup`, `/project-init`, `/project-sync` |
| **pr-review** | gitea.env | .env (GITEA_REPO=owner/repo) | `/initial-setup`, `/project-init`, `/project-sync` |
| **git-flow** | git-flow.env (optional) | .env (optional) | None needed |
| **clarity-assist** | None | None | None needed |
| **cmdb-assistant** | netbox.env | None | `/initial-setup` |

@@ -441,7 +437,7 @@ This catches typos and permission issues before saving configuration.
When you start a Claude Code session, a hook automatically:

1. Reads `GITEA_REPO` (in `owner/repo` format) from `.env`
2. Compares with current `git remote get-url origin`
3. **Warns** if mismatch detected: "Repository location mismatch. Run `/project-sync` to update."

@@ -520,7 +516,8 @@ deactivate
# Check project .env
cat .env

# Verify GITEA_REPO is in owner/repo format and matches Gitea exactly
# Example: GITEA_REPO=my-org/my-repo
```

---

View File

@@ -2,7 +2,7 @@

**Purpose:** Systematic approach to diagnose and fix plugin loading issues.

Last Updated: 2026-01-28

---

@@ -128,7 +128,7 @@ cat ~/.config/claude/netbox.env
# Project-level config (in target project)
cat /path/to/project/.env

# Should contain: GITEA_REPO=owner/repo (e.g., my-org/my-repo)
```

---

@@ -186,6 +186,47 @@ echo -e "\n=== Config Files ==="
| Wrong path edits | Changes don't take effect | Edit installed path or reinstall after source changes |
| Missing credentials | MCP connection errors | Create `~/.config/claude/gitea.env` with API credentials |
| Invalid hook events | Hooks don't fire | Use only valid event names (see Step 7) |
| Gitea issues not closing | Merged to non-default branch | Manually close issues (see below) |
| MCP changes not taking effect | Session caching | Restart Claude Code session (see below) |
### Gitea Auto-Close Behavior
**Issue:** Using `Closes #XX` or `Fixes #XX` in commit/PR messages does NOT auto-close issues when merging to `development`.
**Root Cause:** Gitea only auto-closes issues when merging to the **default branch** (typically `main` or `master`). Merging to `development`, `staging`, or any other branch will NOT trigger auto-close.
**Workaround:**
1. Use the Gitea MCP tool to manually close issues after merging to development:
```
mcp__plugin_projman_gitea__update_issue(issue_number=XX, state="closed")
```
2. Or close issues via the Gitea web UI
3. The auto-close keywords will still work when the changes are eventually merged to `main`
**Recommendation:** Include the `Closes #XX` keywords in commits anyway - they'll work when the final merge to `main` happens.
### MCP Session Restart Requirement
**Issue:** Changes to MCP servers, hooks, or plugin configuration don't take effect immediately.
**Root Cause:** Claude Code loads MCP tools and plugin configuration at session start. These are cached in session memory and not reloaded dynamically.
**What requires a session restart:**
- MCP server code changes (Python files in `mcp-servers/`)
- Changes to `.mcp.json` files
- Changes to `hooks/hooks.json`
- Changes to `plugin.json`
- Adding new MCP tools or modifying tool signatures
**What does NOT require a restart:**
- Command/skill markdown files (`.md`) - these are read on invocation
- Agent markdown files - read when agent is invoked
**Correct workflow after plugin changes:**
1. Make changes to source files
2. Run `./scripts/verify-hooks.sh` to validate
3. Inform user: "Please restart Claude Code for changes to take effect"
4. **Do NOT clear cache mid-session** - see "Cache Clearing" section
--- ---

View File

@@ -0,0 +1,21 @@
#!/bin/bash
# Capture original working directory before any cd operations
# This should be the user's project directory when launched by Claude Code
export CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/contract-validator/.venv"
LOCAL_VENV="$SCRIPT_DIR/.venv"
if [[ -f "$CACHE_VENV/bin/python" ]]; then
PYTHON="$CACHE_VENV/bin/python"
elif [[ -f "$LOCAL_VENV/bin/python" ]]; then
PYTHON="$LOCAL_VENV/bin/python"
else
echo "ERROR: No venv found. Run: ./scripts/setup-venvs.sh" >&2
exit 1
fi
cd "$SCRIPT_DIR"
export PYTHONPATH="$SCRIPT_DIR"
exec "$PYTHON" -m mcp_server.server "$@"

View File

@@ -330,7 +330,7 @@ class PandasTools:
return {'error': f'DataFrame not found: {data_ref}'}
try:
filtered = df.query(condition).reset_index(drop=True)
result_name = name or f"{data_ref}_filtered"
return self._check_and_store(
filtered,
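The `.reset_index(drop=True)` added above changes what the stored filtered DataFrame looks like to downstream tools. A minimal pandas sketch (the frame and condition are illustrative, not taken from the plugin):

```python
import pandas as pd

df = pd.DataFrame({"x": [1, 5, 3, 8]})

kept = df.query("x > 2")                          # keeps original labels: index [1, 2, 3]
clean = df.query("x > 2").reset_index(drop=True)  # renumbers rows: index [0, 1, 2]

print(kept.index.tolist())   # [1, 2, 3]
print(clean.index.tolist())  # [0, 1, 2]
```

Without the reset, the stored result keeps gaps in its row index, which can surprise positional slicing on the filtered DataFrame later.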

View File

@@ -0,0 +1,21 @@
#!/bin/bash
# Capture original working directory before any cd operations
# This should be the user's project directory when launched by Claude Code
export CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/data-platform/.venv"
LOCAL_VENV="$SCRIPT_DIR/.venv"
if [[ -f "$CACHE_VENV/bin/python" ]]; then
PYTHON="$CACHE_VENV/bin/python"
elif [[ -f "$LOCAL_VENV/bin/python" ]]; then
PYTHON="$LOCAL_VENV/bin/python"
else
echo "ERROR: No venv found. Run: ./scripts/setup-venvs.sh" >&2
exit 1
fi
cd "$SCRIPT_DIR"
export PYTHONPATH="$SCRIPT_DIR"
exec "$PYTHON" -m mcp_server.server "$@"

View File

@@ -53,6 +53,7 @@ class GiteaClient:
self,
state: str = 'open',
labels: Optional[List[str]] = None,
milestone: Optional[str] = None,
repo: Optional[str] = None
) -> List[Dict]:
"""
@@ -61,6 +62,7 @@ class GiteaClient:
Args:
state: Issue state (open, closed, all)
labels: Filter by labels
milestone: Filter by milestone title (exact match)
repo: Repository in 'owner/repo' format
Returns:
@@ -71,6 +73,8 @@ class GiteaClient:
params = {'state': state}
if labels:
params['labels'] = ','.join(labels)
if milestone:
params['milestones'] = milestone
logger.info(f"Listing issues from {owner}/{target_repo} with state={state}")
response = self.session.get(url, params=params)
response.raise_for_status()
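A hypothetical call against the updated client; `client`, the repo, and the milestone title are placeholders rather than values from this repository:

```python
# Sketch only: 'client' is an already-initialized GiteaClient.
sprint_issues = client.list_issues(
    state="open",
    labels=["sprint"],
    milestone="Sprint 6",   # exact milestone title, sent as the 'milestones' query param
    repo="my-org/my-repo",
)
print(f"{len(sprint_issues)} open issues in the milestone")
```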
@@ -135,9 +139,24 @@ class GiteaClient:
body: Optional[str] = None,
state: Optional[str] = None,
labels: Optional[List[str]] = None,
milestone: Optional[int] = None,
repo: Optional[str] = None
) -> Dict:
"""
Update existing issue.
Args:
issue_number: Issue number to update
title: New title (optional)
body: New body (optional)
state: New state - 'open' or 'closed' (optional)
labels: New labels (optional)
milestone: Milestone ID to assign (optional)
repo: Repository in 'owner/repo' format
Returns:
Updated issue dictionary
"""
owner, target_repo = self._parse_repo(repo)
url = f"{self.base_url}/repos/{owner}/{target_repo}/issues/{issue_number}"
data = {}
@@ -149,6 +168,8 @@ class GiteaClient:
data['state'] = state
if labels is not None:
data['labels'] = labels
if milestone is not None:
data['milestone'] = milestone
logger.info(f"Updating issue #{issue_number} in {owner}/{target_repo}")
response = self.session.patch(url, json=data)
response.raise_for_status()
@@ -787,3 +808,42 @@ class GiteaClient:
response = self.session.post(url, json=data)
response.raise_for_status()
return response.json()
def create_pull_request(
self,
title: str,
body: str,
head: str,
base: str,
labels: Optional[List[str]] = None,
repo: Optional[str] = None
) -> Dict:
"""
Create a new pull request.
Args:
title: PR title
body: PR description/body
head: Source branch name (the branch with changes)
base: Target branch name (the branch to merge into)
labels: Optional list of label names
repo: Repository in 'owner/repo' format
Returns:
Created pull request dictionary
"""
owner, target_repo = self._parse_repo(repo)
url = f"{self.base_url}/repos/{owner}/{target_repo}/pulls"
data = {
'title': title,
'body': body,
'head': head,
'base': base
}
if labels:
label_ids = self._resolve_label_ids(labels, owner, target_repo)
data['labels'] = label_ids
logger.info(f"Creating PR '{title}' in {owner}/{target_repo}: {head} -> {base}")
response = self.session.post(url, json=data)
response.raise_for_status()
return response.json()
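A hedged usage sketch for the new method; `client`, the branch names, and the label are placeholders:

```python
# Sketch only: 'client' is an already-initialized GiteaClient.
pr = client.create_pull_request(
    title="feat: add visual output headers",
    body="Adds the Sprint 6 single-line headers to command files.",
    head="feat/my-feature",   # branch with the changes
    base="development",       # branch to merge into
    labels=["enhancement"],   # resolved to label IDs before sending
    repo="my-org/my-repo",
)
print(pr.get("number"), pr.get("html_url"))
```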

View File

@@ -26,6 +26,44 @@ logging.getLogger("mcp").setLevel(logging.ERROR)
logger = logging.getLogger(__name__)
def _coerce_types(arguments: dict) -> dict:
"""
Coerce argument types to handle MCP serialization quirks.
MCP sometimes passes integers as strings and arrays as JSON strings.
This function normalizes them to the expected Python types.
"""
coerced = {}
for key, value in arguments.items():
if value is None:
coerced[key] = value
continue
# Coerce integer fields
int_fields = {'issue_number', 'milestone_id', 'pr_number', 'depends_on', 'milestone', 'limit'}
if key in int_fields and isinstance(value, str):
try:
coerced[key] = int(value)
continue
except ValueError:
pass
# Coerce array fields that might be JSON strings
array_fields = {'labels', 'tags', 'issue_numbers', 'comments'}
if key in array_fields and isinstance(value, str):
try:
parsed = json.loads(value)
if isinstance(parsed, list):
coerced[key] = parsed
continue
except json.JSONDecodeError:
pass
coerced[key] = value
return coerced
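A quick illustration of the coercion above; the values are made up, and the snippet assumes `_coerce_types` is importable from this server module:

```python
raw = {
    "issue_number": "42",             # integer field arriving as a string
    "labels": '["bug", "sprint-6"]',  # array field arriving as a JSON string
    "title": "Fix venv symlinks",     # unrelated fields pass through untouched
    "milestone": None,                # None is preserved
}

print(_coerce_types(raw))
# {'issue_number': 42, 'labels': ['bug', 'sprint-6'],
#  'title': 'Fix venv symlinks', 'milestone': None}
```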
class GiteaMCPServer:
"""MCP Server for Gitea integration"""
@@ -88,6 +126,10 @@ class GiteaMCPServer:
"items": {"type": "string"}, "items": {"type": "string"},
"description": "Filter by labels" "description": "Filter by labels"
}, },
"milestone": {
"type": "string",
"description": "Filter by milestone title (exact match)"
},
"repo": { "repo": {
"type": "string", "type": "string",
"description": "Repository name (for PMO mode)" "description": "Repository name (for PMO mode)"
@@ -168,6 +210,10 @@ class GiteaMCPServer:
"items": {"type": "string"}, "items": {"type": "string"},
"description": "New labels" "description": "New labels"
}, },
"milestone": {
"type": "integer",
"description": "Milestone ID to assign"
},
"repo": { "repo": {
"type": "string", "type": "string",
"description": "Repository name (for PMO mode)" "description": "Repository name (for PMO mode)"
@@ -844,6 +890,41 @@ class GiteaMCPServer:
},
"required": ["pr_number", "body"]
}
),
Tool(
name="create_pull_request",
description="Create a new pull request",
inputSchema={
"type": "object",
"properties": {
"title": {
"type": "string",
"description": "PR title"
},
"body": {
"type": "string",
"description": "PR description/body"
},
"head": {
"type": "string",
"description": "Source branch name (the branch with changes)"
},
"base": {
"type": "string",
"description": "Target branch name (the branch to merge into)"
},
"labels": {
"type": "array",
"items": {"type": "string"},
"description": "Optional list of label names"
},
"repo": {
"type": "string",
"description": "Repository name (owner/repo format)"
}
},
"required": ["title", "body", "head", "base"]
}
) )
]
@@ -860,6 +941,9 @@ class GiteaMCPServer:
List of TextContent with results
"""
try:
# Coerce types to handle MCP serialization quirks
arguments = _coerce_types(arguments)
# Route to appropriate tool handler
if name == "list_issues":
result = await self.issue_tools.list_issues(**arguments)
@@ -959,6 +1043,8 @@ class GiteaMCPServer:
result = await self.pr_tools.create_pr_review(**arguments)
elif name == "add_pr_comment":
result = await self.pr_tools.add_pr_comment(**arguments)
elif name == "create_pull_request":
result = await self.pr_tools.create_pull_request(**arguments)
else:
raise ValueError(f"Unknown tool: {name}")

View File

@@ -7,6 +7,7 @@ Provides async wrappers for issue CRUD operations with:
- Comprehensive error handling
"""
import asyncio
import os
import subprocess
import logging
from typing import List, Dict, Optional
@@ -27,19 +28,34 @@ class IssueTools:
""" """
self.gitea = gitea_client self.gitea = gitea_client
def _get_project_directory(self) -> Optional[str]:
"""
Get the user's project directory from environment.
Returns:
Project directory path or None if not set
"""
return os.environ.get('CLAUDE_PROJECT_DIR')
def _get_current_branch(self) -> str: def _get_current_branch(self) -> str:
""" """
Get current git branch. Get current git branch from user's project directory.
Uses CLAUDE_PROJECT_DIR environment variable to determine the correct
directory for git operations, avoiding the bug where git runs from
the installed plugin directory instead of the user's project.
Returns: Returns:
Current branch name or 'unknown' if not in a git repo Current branch name or 'unknown' if not in a git repo
""" """
try: try:
project_dir = self._get_project_directory()
result = subprocess.run( result = subprocess.run(
['git', 'rev-parse', '--abbrev-ref', 'HEAD'], ['git', 'rev-parse', '--abbrev-ref', 'HEAD'],
capture_output=True, capture_output=True,
text=True, text=True,
check=True check=True,
cwd=project_dir # Run git in project directory, not plugin directory
) )
return result.stdout.strip() return result.stdout.strip()
except subprocess.CalledProcessError: except subprocess.CalledProcessError:
@@ -66,7 +82,13 @@ class IssueTools:
return operation in ['list_issues', 'get_issue', 'get_labels', 'create_issue']
# Development branches (full access)
# Include all common feature/fix branch patterns
dev_prefixes = (
'feat/', 'feature/', 'dev/',
'fix/', 'bugfix/', 'hotfix/',
'chore/', 'refactor/', 'docs/', 'test/'
)
if branch in ['development', 'develop'] or branch.startswith(dev_prefixes):
return True
# Unknown branch - be restrictive
@@ -76,6 +98,7 @@ class IssueTools:
self,
state: str = 'open',
labels: Optional[List[str]] = None,
milestone: Optional[str] = None,
repo: Optional[str] = None
) -> List[Dict]:
"""
@@ -84,6 +107,7 @@ class IssueTools:
Args:
state: Issue state (open, closed, all)
labels: Filter by labels
milestone: Filter by milestone title (exact match)
repo: Override configured repo (for PMO multi-repo)
Returns:
@@ -102,7 +126,7 @@ class IssueTools:
loop = asyncio.get_event_loop()
return await loop.run_in_executor(
None,
lambda: self.gitea.list_issues(state, labels, milestone, repo)
)
async def get_issue(
@@ -178,6 +202,7 @@ class IssueTools:
body: Optional[str] = None,
state: Optional[str] = None,
labels: Optional[List[str]] = None,
milestone: Optional[int] = None,
repo: Optional[str] = None
) -> Dict:
"""
@@ -189,6 +214,7 @@ class IssueTools:
body: New body (optional)
state: New state - 'open' or 'closed' (optional)
labels: New labels (optional)
milestone: Milestone ID to assign (optional)
repo: Override configured repo (for PMO multi-repo)
Returns:
@@ -207,7 +233,7 @@ class IssueTools:
loop = asyncio.get_event_loop()
return await loop.run_in_executor(
None,
lambda: self.gitea.update_issue(issue_number, title, body, state, labels, milestone, repo)
)
async def add_comment(

View File

@@ -7,6 +7,7 @@ Provides async wrappers for PR operations with:
- Comprehensive error handling
"""
import asyncio
import os
import subprocess
import logging
from typing import List, Dict, Optional
@@ -27,19 +28,34 @@ class PullRequestTools:
""" """
self.gitea = gitea_client self.gitea = gitea_client
def _get_project_directory(self) -> Optional[str]:
"""
Get the user's project directory from environment.
Returns:
Project directory path or None if not set
"""
return os.environ.get('CLAUDE_PROJECT_DIR')
def _get_current_branch(self) -> str: def _get_current_branch(self) -> str:
""" """
Get current git branch. Get current git branch from user's project directory.
Uses CLAUDE_PROJECT_DIR environment variable to determine the correct
directory for git operations, avoiding the bug where git runs from
the installed plugin directory instead of the user's project.
Returns: Returns:
Current branch name or 'unknown' if not in a git repo Current branch name or 'unknown' if not in a git repo
""" """
try: try:
project_dir = self._get_project_directory()
result = subprocess.run( result = subprocess.run(
['git', 'rev-parse', '--abbrev-ref', 'HEAD'], ['git', 'rev-parse', '--abbrev-ref', 'HEAD'],
capture_output=True, capture_output=True,
text=True, text=True,
check=True check=True,
cwd=project_dir # Run git in project directory, not plugin directory
) )
return result.stdout.strip() return result.stdout.strip()
except subprocess.CalledProcessError: except subprocess.CalledProcessError:
@@ -69,7 +85,13 @@ class PullRequestTools:
return operation in read_ops + ['add_pr_comment']
# Development branches (full access)
# Include all common feature/fix branch patterns
dev_prefixes = (
'feat/', 'feature/', 'dev/',
'fix/', 'bugfix/', 'hotfix/',
'chore/', 'refactor/', 'docs/', 'test/'
)
if branch in ['development', 'develop'] or branch.startswith(dev_prefixes):
return True
# Unknown branch - be restrictive
@@ -272,3 +294,42 @@ class PullRequestTools:
None,
lambda: self.gitea.add_pr_comment(pr_number, body, repo)
)
async def create_pull_request(
self,
title: str,
body: str,
head: str,
base: str,
labels: Optional[List[str]] = None,
repo: Optional[str] = None
) -> Dict:
"""
Create a new pull request (async wrapper with branch check).
Args:
title: PR title
body: PR description/body
head: Source branch name (the branch with changes)
base: Target branch name (the branch to merge into)
labels: Optional list of label names
repo: Override configured repo (for PMO multi-repo)
Returns:
Created pull request dictionary
Raises:
PermissionError: If operation not allowed on current branch
"""
if not self._check_branch_permissions('create_pull_request'):
branch = self._get_current_branch()
raise PermissionError(
f"Cannot create PR on branch '{branch}'. "
f"Switch to a development or feature branch to create PRs."
)
loop = asyncio.get_event_loop()
return await loop.run_in_executor(
None,
lambda: self.gitea.create_pull_request(title, body, head, base, labels, repo)
)
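A hypothetical end-to-end sketch of the async wrapper; `pr_tools` stands in for a `PullRequestTools` instance built around a configured `GiteaClient`, and the branch and titles are placeholders:

```python
import asyncio

async def open_pr():
    try:
        pr = await pr_tools.create_pull_request(
            title="feat: add visual output headers",
            body="Implements the Sprint 6 header spec.",
            head="feat/my-feature",
            base="development",
        )
        print("Created PR", pr.get("number"))
    except PermissionError as err:
        # Raised when the current branch does not allow PR creation
        print(f"Blocked: {err}")

asyncio.run(open_pr())
```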

mcp-servers/gitea/run.sh Executable file
View File

@@ -0,0 +1,21 @@
#!/bin/bash
# Capture original working directory before any cd operations
# This should be the user's project directory when launched by Claude Code
export CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/gitea/.venv"
LOCAL_VENV="$SCRIPT_DIR/.venv"
if [[ -f "$CACHE_VENV/bin/python" ]]; then
PYTHON="$CACHE_VENV/bin/python"
elif [[ -f "$LOCAL_VENV/bin/python" ]]; then
PYTHON="$LOCAL_VENV/bin/python"
else
echo "ERROR: No venv found. Run: ./scripts/setup-venvs.sh" >&2
exit 1
fi
cd "$SCRIPT_DIR"
export PYTHONPATH="$SCRIPT_DIR"
exec "$PYTHON" -m mcp_server.server "$@"

mcp-servers/netbox/run.sh Executable file
View File

@@ -0,0 +1,21 @@
#!/bin/bash
# Capture original working directory before any cd operations
# This should be the user's project directory when launched by Claude Code
export CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/netbox/.venv"
LOCAL_VENV="$SCRIPT_DIR/.venv"
if [[ -f "$CACHE_VENV/bin/python" ]]; then
PYTHON="$CACHE_VENV/bin/python"
elif [[ -f "$LOCAL_VENV/bin/python" ]]; then
PYTHON="$LOCAL_VENV/bin/python"
else
echo "ERROR: No venv found. Run: ./scripts/setup-venvs.sh" >&2
exit 1
fi
cd "$SCRIPT_DIR"
export PYTHONPATH="$SCRIPT_DIR"
exec "$PYTHON" -m mcp_server.server "$@"

View File

@@ -0,0 +1,479 @@
"""
Accessibility validation tools for color blindness and WCAG compliance.
Provides tools for validating color palettes against color blindness
simulations and WCAG contrast requirements.
"""
import logging
import math
from typing import Dict, List, Optional, Any, Tuple
logger = logging.getLogger(__name__)
# Color-blind safe palettes
SAFE_PALETTES = {
"categorical": {
"name": "Paul Tol's Qualitative",
"colors": ["#4477AA", "#EE6677", "#228833", "#CCBB44", "#66CCEE", "#AA3377", "#BBBBBB"],
"description": "Distinguishable for all types of color blindness"
},
"ibm": {
"name": "IBM Design",
"colors": ["#648FFF", "#785EF0", "#DC267F", "#FE6100", "#FFB000"],
"description": "IBM's accessible color palette"
},
"okabe_ito": {
"name": "Okabe-Ito",
"colors": ["#E69F00", "#56B4E9", "#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7", "#000000"],
"description": "Optimized for all color vision deficiencies"
},
"tableau_colorblind": {
"name": "Tableau Colorblind 10",
"colors": ["#006BA4", "#FF800E", "#ABABAB", "#595959", "#5F9ED1",
"#C85200", "#898989", "#A2C8EC", "#FFBC79", "#CFCFCF"],
"description": "Industry-standard accessible palette"
}
}
# Simulation matrices for color blindness (LMS color space transformation)
# These approximate how colors appear to people with different types of color blindness
SIMULATION_MATRICES = {
"deuteranopia": {
# Green-blind (most common)
"severity": "common",
"population": "6% males, 0.4% females",
"description": "Difficulty distinguishing red from green (green-blind)",
"matrix": [
[0.625, 0.375, 0.0],
[0.700, 0.300, 0.0],
[0.0, 0.300, 0.700]
]
},
"protanopia": {
# Red-blind
"severity": "common",
"population": "2.5% males, 0.05% females",
"description": "Difficulty distinguishing red from green (red-blind)",
"matrix": [
[0.567, 0.433, 0.0],
[0.558, 0.442, 0.0],
[0.0, 0.242, 0.758]
]
},
"tritanopia": {
# Blue-blind (rare)
"severity": "rare",
"population": "0.01% total",
"description": "Difficulty distinguishing blue from yellow",
"matrix": [
[0.950, 0.050, 0.0],
[0.0, 0.433, 0.567],
[0.0, 0.475, 0.525]
]
}
}
class AccessibilityTools:
"""
Color accessibility validation tools.
Validates colors for WCAG compliance and color blindness accessibility.
"""
def __init__(self, theme_store=None):
"""
Initialize accessibility tools.
Args:
theme_store: Optional ThemeStore for theme color extraction
"""
self.theme_store = theme_store
def _hex_to_rgb(self, hex_color: str) -> Tuple[int, int, int]:
"""Convert hex color to RGB tuple."""
hex_color = hex_color.lstrip('#')
if len(hex_color) == 3:
hex_color = ''.join([c * 2 for c in hex_color])
return tuple(int(hex_color[i:i+2], 16) for i in (0, 2, 4))
def _rgb_to_hex(self, rgb: Tuple[int, int, int]) -> str:
"""Convert RGB tuple to hex color."""
return '#{:02x}{:02x}{:02x}'.format(
max(0, min(255, int(rgb[0]))),
max(0, min(255, int(rgb[1]))),
max(0, min(255, int(rgb[2])))
)
def _get_relative_luminance(self, rgb: Tuple[int, int, int]) -> float:
"""
Calculate relative luminance per WCAG 2.1.
https://www.w3.org/WAI/GL/wiki/Relative_luminance
"""
def channel_luminance(value: int) -> float:
v = value / 255
return v / 12.92 if v <= 0.03928 else ((v + 0.055) / 1.055) ** 2.4
r, g, b = rgb
return (
0.2126 * channel_luminance(r) +
0.7152 * channel_luminance(g) +
0.0722 * channel_luminance(b)
)
def _get_contrast_ratio(self, color1: str, color2: str) -> float:
"""
Calculate contrast ratio between two colors per WCAG 2.1.
Returns ratio between 1:1 and 21:1.
"""
rgb1 = self._hex_to_rgb(color1)
rgb2 = self._hex_to_rgb(color2)
l1 = self._get_relative_luminance(rgb1)
l2 = self._get_relative_luminance(rgb2)
lighter = max(l1, l2)
darker = min(l1, l2)
return (lighter + 0.05) / (darker + 0.05)
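A worked example of the WCAG 2.1 math above (assuming `AccessibilityTools` from this module is importable): mid-gray `#767676` has a relative luminance of roughly 0.18, so against white the ratio is (1.0 + 0.05) / (0.18 + 0.05) ≈ 4.5, right at the AA threshold for body text.

```python
tools = AccessibilityTools()

print(round(tools._get_contrast_ratio("#767676", "#FFFFFF"), 2))  # ~4.54
print(round(tools._get_contrast_ratio("#FFFFFF", "#000000"), 2))  # 21.0 (maximum possible)
```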
def _simulate_color_blindness(
self,
hex_color: str,
deficiency_type: str
) -> str:
"""
Simulate how a color appears with a specific color blindness type.
Uses linear RGB transformation approximation.
"""
if deficiency_type not in SIMULATION_MATRICES:
return hex_color
rgb = self._hex_to_rgb(hex_color)
matrix = SIMULATION_MATRICES[deficiency_type]["matrix"]
# Apply transformation matrix
r = rgb[0] * matrix[0][0] + rgb[1] * matrix[0][1] + rgb[2] * matrix[0][2]
g = rgb[0] * matrix[1][0] + rgb[1] * matrix[1][1] + rgb[2] * matrix[1][2]
b = rgb[0] * matrix[2][0] + rgb[1] * matrix[2][1] + rgb[2] * matrix[2][2]
return self._rgb_to_hex((r, g, b))
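A small sketch of the simulation in practice (again assuming the class is importable); under this approximate transform, pure red and pure green both shift away from their original hues:

```python
tools = AccessibilityTools()

print(tools._simulate_color_blindness("#FF0000", "deuteranopia"))  # '#9fb200'
print(tools._simulate_color_blindness("#00FF00", "deuteranopia"))  # '#5f4c4c'
```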
def _get_color_distance(self, color1: str, color2: str) -> float:
"""
Calculate approximate perceptual color distance (Euclidean distance in RGB space).
Returns a value where anything below roughly 30 means the two colors may be hard to distinguish.
"""
rgb1 = self._hex_to_rgb(color1)
rgb2 = self._hex_to_rgb(color2)
# Simple Euclidean distance in RGB space (approximation)
# For production, should use CIEDE2000
return math.sqrt(
(rgb1[0] - rgb2[0]) ** 2 +
(rgb1[1] - rgb2[1]) ** 2 +
(rgb1[2] - rgb2[2]) ** 2
)
async def accessibility_validate_colors(
self,
colors: List[str],
check_types: Optional[List[str]] = None,
min_contrast_ratio: float = 4.5
) -> Dict[str, Any]:
"""
Validate a list of colors for accessibility.
Args:
colors: List of hex colors to validate
check_types: Color blindness types to check (default: all)
min_contrast_ratio: Minimum WCAG contrast ratio (default: 4.5 for AA)
Returns:
Dict with:
- issues: List of accessibility issues found
- simulations: How colors appear under each deficiency
- recommendations: Suggestions for improvement
- safe_palettes: Color-blind safe palette suggestions
"""
check_types = check_types or list(SIMULATION_MATRICES.keys())
issues = []
simulations = {}
# Normalize colors
normalized_colors = [c.upper() if c.startswith('#') else f'#{c.upper()}' for c in colors]
# Simulate each color blindness type
for deficiency in check_types:
if deficiency not in SIMULATION_MATRICES:
continue
simulated = [self._simulate_color_blindness(c, deficiency) for c in normalized_colors]
simulations[deficiency] = {
"original": normalized_colors,
"simulated": simulated,
"info": SIMULATION_MATRICES[deficiency]
}
# Check if any color pairs become indistinguishable
for i in range(len(normalized_colors)):
for j in range(i + 1, len(normalized_colors)):
distance = self._get_color_distance(simulated[i], simulated[j])
if distance < 30: # Threshold for distinguishability
issues.append({
"type": "distinguishability",
"severity": "warning" if distance > 15 else "error",
"colors": [normalized_colors[i], normalized_colors[j]],
"affected_by": [deficiency],
"simulated_colors": [simulated[i], simulated[j]],
"distance": round(distance, 1),
"message": f"Colors may be hard to distinguish for {deficiency} ({SIMULATION_MATRICES[deficiency]['description']})"
})
# Check contrast ratios against white and black backgrounds
for color in normalized_colors:
white_contrast = self._get_contrast_ratio(color, "#FFFFFF")
black_contrast = self._get_contrast_ratio(color, "#000000")
if white_contrast < min_contrast_ratio and black_contrast < min_contrast_ratio:
issues.append({
"type": "contrast_ratio",
"severity": "error",
"colors": [color],
"white_contrast": round(white_contrast, 2),
"black_contrast": round(black_contrast, 2),
"required": min_contrast_ratio,
"message": f"Insufficient contrast against both white ({white_contrast:.1f}:1) and black ({black_contrast:.1f}:1) backgrounds"
})
# Generate recommendations
recommendations = self._generate_recommendations(issues)
# Calculate overall score
error_count = sum(1 for i in issues if i["severity"] == "error")
warning_count = sum(1 for i in issues if i["severity"] == "warning")
if error_count == 0 and warning_count == 0:
score = "A"
elif error_count == 0 and warning_count <= 2:
score = "B"
elif error_count <= 2:
score = "C"
else:
score = "D"
return {
"colors_checked": normalized_colors,
"overall_score": score,
"issue_count": len(issues),
"issues": issues,
"simulations": simulations,
"recommendations": recommendations,
"safe_palettes": SAFE_PALETTES
}
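A hypothetical driver for the validator above; the palette is arbitrary, and since the method is async it needs an event loop:

```python
import asyncio

async def main():
    tools = AccessibilityTools()
    report = await tools.accessibility_validate_colors(
        ["#FF0000", "#00FF00", "#0000FF"],
        check_types=["deuteranopia", "protanopia"],
    )
    print(report["overall_score"], f"({report['issue_count']} issues)")
    for issue in report["issues"]:
        print(f"- [{issue['severity']}] {issue['message']}")

asyncio.run(main())
```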
async def accessibility_validate_theme(
self,
theme_name: str
) -> Dict[str, Any]:
"""
Validate a theme's colors for accessibility.
Args:
theme_name: Theme name to validate
Returns:
Dict with accessibility validation results
"""
if not self.theme_store:
return {
"error": "Theme store not configured",
"theme_name": theme_name
}
theme = self.theme_store.get_theme(theme_name)
if not theme:
available = self.theme_store.list_themes()
return {
"error": f"Theme '{theme_name}' not found. Available: {available}",
"theme_name": theme_name
}
# Extract colors from theme
colors = []
tokens = theme.get("tokens", {})
color_tokens = tokens.get("colors", {})
def extract_colors(obj, prefix=""):
"""Recursively extract color values."""
if isinstance(obj, str) and (obj.startswith('#') or len(obj) == 6):
colors.append(obj if obj.startswith('#') else f'#{obj}')
elif isinstance(obj, dict):
for key, value in obj.items():
extract_colors(value, f"{prefix}.{key}")
elif isinstance(obj, list):
for item in obj:
extract_colors(item, prefix)
extract_colors(color_tokens)
# Validate extracted colors
result = await self.accessibility_validate_colors(colors)
result["theme_name"] = theme_name
# Add theme-specific checks
primary = color_tokens.get("primary")
background = color_tokens.get("background", {})
text = color_tokens.get("text", {})
if primary and background:
bg_color = background.get("base") if isinstance(background, dict) else background
if bg_color:
contrast = self._get_contrast_ratio(primary, bg_color)
if contrast < 4.5:
result["issues"].append({
"type": "primary_contrast",
"severity": "error",
"colors": [primary, bg_color],
"ratio": round(contrast, 2),
"required": 4.5,
"message": f"Primary color has insufficient contrast ({contrast:.1f}:1) against background"
})
return result
async def accessibility_suggest_alternative(
self,
color: str,
deficiency_type: str
) -> Dict[str, Any]:
"""
Suggest accessible alternative colors.
Args:
color: Original hex color
deficiency_type: Type of color blindness to optimize for
Returns:
Dict with alternative color suggestions
"""
rgb = self._hex_to_rgb(color)
suggestions = []
# Suggest shifting hue while maintaining saturation and brightness
# For red-green deficiency, shift toward blue or yellow
if deficiency_type in ["deuteranopia", "protanopia"]:
# Shift toward blue
blue_shift = self._rgb_to_hex((
max(0, rgb[0] - 50),
max(0, rgb[1] - 30),
min(255, rgb[2] + 80)
))
suggestions.append({
"color": blue_shift,
"description": "Blue-shifted alternative",
"preserves": "approximate brightness"
})
# Shift toward yellow/orange
yellow_shift = self._rgb_to_hex((
min(255, rgb[0] + 50),
min(255, rgb[1] + 30),
max(0, rgb[2] - 80)
))
suggestions.append({
"color": yellow_shift,
"description": "Yellow-shifted alternative",
"preserves": "approximate brightness"
})
elif deficiency_type == "tritanopia":
# For blue-yellow deficiency, shift toward red or green
red_shift = self._rgb_to_hex((
min(255, rgb[0] + 60),
max(0, rgb[1] - 20),
max(0, rgb[2] - 40)
))
suggestions.append({
"color": red_shift,
"description": "Red-shifted alternative",
"preserves": "approximate brightness"
})
# Add safe palette suggestions
for palette_name, palette in SAFE_PALETTES.items():
# Find closest color in safe palette
min_distance = float('inf')
closest = None
for safe_color in palette["colors"]:
distance = self._get_color_distance(color, safe_color)
if distance < min_distance:
min_distance = distance
closest = safe_color
if closest:
suggestions.append({
"color": closest,
"description": f"From {palette['name']} palette",
"palette": palette_name
})
return {
"original_color": color,
"deficiency_type": deficiency_type,
"suggestions": suggestions[:5] # Limit to 5 suggestions
}
def _generate_recommendations(self, issues: List[Dict[str, Any]]) -> List[str]:
"""Generate actionable recommendations based on issues."""
recommendations = []
# Check for distinguishability issues
distinguishability_issues = [i for i in issues if i["type"] == "distinguishability"]
if distinguishability_issues:
affected_types = set()
for issue in distinguishability_issues:
affected_types.update(issue.get("affected_by", []))
if "deuteranopia" in affected_types or "protanopia" in affected_types:
recommendations.append(
"Avoid using red and green as the only differentiators - "
"add patterns, shapes, or labels"
)
recommendations.append(
"Consider using a color-blind safe palette like Okabe-Ito or IBM Design"
)
# Check for contrast issues
contrast_issues = [i for i in issues if i["type"] in ["contrast_ratio", "primary_contrast"]]
if contrast_issues:
recommendations.append(
"Increase contrast by darkening colors for light backgrounds "
"or lightening for dark backgrounds"
)
recommendations.append(
"Use WCAG contrast checker tools to verify text readability"
)
# General recommendations
if len(issues) > 0:
recommendations.append(
"Add secondary visual cues (icons, patterns, labels) "
"to not rely solely on color"
)
if not recommendations:
recommendations.append(
"Color palette appears accessible! Consider adding patterns "
"for additional distinguishability"
)
return recommendations
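
As a quick orientation aid, here is a minimal sketch of calling the validator above directly. It assumes the `mcp_server.accessibility_tools` module path used elsewhere in this diff; the red/green palette is illustrative, not a definitive integration.

```python
# Hedged sketch: exercising accessibility_validate_colors directly.
# Assumes AccessibilityTools from mcp_server.accessibility_tools as added in this diff.
import asyncio
from mcp_server.accessibility_tools import AccessibilityTools

async def main():
    tools = AccessibilityTools()
    result = await tools.accessibility_validate_colors(
        ["#FF0000", "#00FF00"],        # red/green pair likely to flag deuteranopia issues
        check_types=["deuteranopia"],  # limit simulation to one deficiency type
        min_contrast_ratio=4.5,        # WCAG AA threshold
    )
    print(result["overall_score"], result["issue_count"])
    for rec in result["recommendations"]:
        print("-", rec)

asyncio.run(main())
```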

View File

@@ -3,11 +3,21 @@ Chart creation tools using Plotly.
Provides tools for creating data visualizations with automatic theme integration.
"""
import base64
import logging
import os
from typing import Dict, List, Optional, Any, Union
logger = logging.getLogger(__name__)
# Check for kaleido availability
KALEIDO_AVAILABLE = False
try:
import kaleido
KALEIDO_AVAILABLE = True
except ImportError:
logger.debug("kaleido not installed - chart export will be unavailable")
# Default color palette based on Mantine theme
DEFAULT_COLORS = [
@@ -395,3 +405,129 @@ class ChartTools:
"figure": figure,
"interactions_added": []
}
async def chart_export(
self,
figure: Dict[str, Any],
format: str = "png",
width: Optional[int] = None,
height: Optional[int] = None,
scale: float = 2.0,
output_path: Optional[str] = None
) -> Dict[str, Any]:
"""
Export a Plotly chart to a static image format.
Args:
figure: Plotly figure JSON to export
format: Output format - png, svg, or pdf
width: Image width in pixels (default: from figure or 1200)
height: Image height in pixels (default: from figure or 800)
scale: Resolution scale factor (default: 2 for retina)
output_path: Optional file path to save the image
Returns:
Dict with:
- image_data: Base64-encoded image (if no output_path)
- file_path: Path to saved file (if output_path provided)
- format: Export format used
- dimensions: {width, height, scale}
- error: Error message if export failed
"""
# Validate format
valid_formats = ['png', 'svg', 'pdf']
format = format.lower()
if format not in valid_formats:
return {
"error": f"Invalid format '{format}'. Must be one of: {valid_formats}",
"format": format,
"image_data": None
}
# Check kaleido availability
if not KALEIDO_AVAILABLE:
return {
"error": "kaleido package not installed. Install with: pip install kaleido",
"format": format,
"image_data": None,
"install_hint": "pip install kaleido"
}
# Validate figure
if not figure or 'data' not in figure:
return {
"error": "Invalid figure: must contain 'data' key",
"format": format,
"image_data": None
}
try:
import plotly.graph_objects as go
import plotly.io as pio
# Create Plotly figure object
fig = go.Figure(figure)
# Determine dimensions
layout = figure.get('layout', {})
export_width = width or layout.get('width') or 1200
export_height = height or layout.get('height') or 800
# Export to bytes
image_bytes = pio.to_image(
fig,
format=format,
width=export_width,
height=export_height,
scale=scale
)
result = {
"format": format,
"dimensions": {
"width": export_width,
"height": export_height,
"scale": scale,
"effective_width": int(export_width * scale),
"effective_height": int(export_height * scale)
}
}
# Save to file or return base64
if output_path:
# Ensure directory exists
output_dir = os.path.dirname(output_path)
if output_dir and not os.path.exists(output_dir):
os.makedirs(output_dir, exist_ok=True)
# Add extension if missing
if not output_path.endswith(f'.{format}'):
output_path = f"{output_path}.{format}"
with open(output_path, 'wb') as f:
f.write(image_bytes)
result["file_path"] = output_path
result["file_size_bytes"] = len(image_bytes)
else:
# Return as base64
result["image_data"] = base64.b64encode(image_bytes).decode('utf-8')
result["data_uri"] = f"data:image/{format};base64,{result['image_data']}"
return result
except ImportError as e:
logger.error(f"Chart export failed - missing dependency: {e}")
return {
"error": f"Missing dependency for export: {e}",
"format": format,
"image_data": None,
"install_hint": "pip install plotly kaleido"
}
except Exception as e:
logger.error(f"Chart export failed: {e}")
return {
"error": str(e),
"format": format,
"image_data": None
}
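
For reference, a hedged sketch of driving `chart_export` outside the MCP server. It assumes `ChartTools` can be constructed with no arguments (as the other tool classes are) and that kaleido is installed; the figure and output path are illustrative.

```python
# Hedged sketch: exporting a small bar chart to PNG via ChartTools.chart_export.
# Assumes mcp_server.chart_tools.ChartTools as extended in this diff and kaleido installed.
import asyncio
from mcp_server.chart_tools import ChartTools

async def main():
    tools = ChartTools()
    figure = {
        "data": [{"type": "bar", "x": ["A", "B", "C"], "y": [3, 7, 2]}],
        "layout": {"title": {"text": "Demo"}, "width": 800, "height": 500},
    }
    result = await tools.chart_export(
        figure, format="png", scale=2.0, output_path="./out/demo"
    )
    # Either a saved file path or an error explaining what is missing (e.g. kaleido)
    print(result.get("file_path") or result.get("error"))

asyncio.run(main())
```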

View File

@@ -10,6 +10,46 @@ from uuid import uuid4
logger = logging.getLogger(__name__)
# Standard responsive breakpoints (Mantine/Bootstrap-aligned)
DEFAULT_BREAKPOINTS = {
"xs": {
"min_width": "0px",
"max_width": "575px",
"cols": 1,
"spacing": "xs",
"description": "Extra small devices (phones, portrait)"
},
"sm": {
"min_width": "576px",
"max_width": "767px",
"cols": 2,
"spacing": "sm",
"description": "Small devices (phones, landscape)"
},
"md": {
"min_width": "768px",
"max_width": "991px",
"cols": 6,
"spacing": "md",
"description": "Medium devices (tablets)"
},
"lg": {
"min_width": "992px",
"max_width": "1199px",
"cols": 12,
"spacing": "md",
"description": "Large devices (desktops)"
},
"xl": {
"min_width": "1200px",
"max_width": None,
"cols": 12,
"spacing": "lg",
"description": "Extra large devices (large desktops)"
}
}
# Layout templates
TEMPLATES = {
"dashboard": {
@@ -365,3 +405,149 @@ class LayoutTools:
}
for name, config in FILTER_TYPES.items()
}
async def layout_set_breakpoints(
self,
layout_ref: str,
breakpoints: Dict[str, Dict[str, Any]],
mobile_first: bool = True
) -> Dict[str, Any]:
"""
Configure responsive breakpoints for a layout.
Args:
layout_ref: Layout name to configure
breakpoints: Breakpoint configuration dict:
{
"xs": {"cols": 1, "spacing": "xs"},
"sm": {"cols": 2, "spacing": "sm"},
"md": {"cols": 6, "spacing": "md"},
"lg": {"cols": 12, "spacing": "md"},
"xl": {"cols": 12, "spacing": "lg"}
}
mobile_first: If True, use min-width media queries (default)
Returns:
Dict with:
- breakpoints: Complete breakpoint configuration
- css_media_queries: Generated CSS media queries
- mobile_first: Whether mobile-first approach is used
"""
# Validate layout exists
if layout_ref not in self._layouts:
return {
"error": f"Layout '{layout_ref}' not found. Create it first with layout_create.",
"breakpoints": None
}
layout = self._layouts[layout_ref]
# Validate breakpoint names
valid_breakpoints = ["xs", "sm", "md", "lg", "xl"]
for bp_name in breakpoints.keys():
if bp_name not in valid_breakpoints:
return {
"error": f"Invalid breakpoint '{bp_name}'. Must be one of: {valid_breakpoints}",
"breakpoints": layout.get("breakpoints")
}
# Merge with defaults
merged_breakpoints = {}
for bp_name in valid_breakpoints:
default = DEFAULT_BREAKPOINTS[bp_name].copy()
if bp_name in breakpoints:
default.update(breakpoints[bp_name])
merged_breakpoints[bp_name] = default
# Validate spacing values
valid_spacing = ["xs", "sm", "md", "lg", "xl"]
for bp_name, bp_config in merged_breakpoints.items():
if "spacing" in bp_config and bp_config["spacing"] not in valid_spacing:
return {
"error": f"Invalid spacing '{bp_config['spacing']}' for breakpoint '{bp_name}'. Must be one of: {valid_spacing}",
"breakpoints": layout.get("breakpoints")
}
# Validate column counts
for bp_name, bp_config in merged_breakpoints.items():
if "cols" in bp_config:
cols = bp_config["cols"]
if not isinstance(cols, int) or cols < 1 or cols > 24:
return {
"error": f"Invalid cols '{cols}' for breakpoint '{bp_name}'. Must be integer between 1 and 24.",
"breakpoints": layout.get("breakpoints")
}
# Generate CSS media queries
css_queries = self._generate_media_queries(merged_breakpoints, mobile_first)
# Store in layout
layout["breakpoints"] = merged_breakpoints
layout["mobile_first"] = mobile_first
layout["responsive_css"] = css_queries
return {
"layout_ref": layout_ref,
"breakpoints": merged_breakpoints,
"mobile_first": mobile_first,
"css_media_queries": css_queries
}
def _generate_media_queries(
self,
breakpoints: Dict[str, Dict[str, Any]],
mobile_first: bool
) -> List[str]:
"""Generate CSS media queries for breakpoints."""
queries = []
bp_order = ["xs", "sm", "md", "lg", "xl"]
if mobile_first:
# Use min-width queries (mobile-first)
for bp_name in bp_order[1:]: # Skip xs (base styles)
bp = breakpoints[bp_name]
min_width = bp.get("min_width", DEFAULT_BREAKPOINTS[bp_name]["min_width"])
if min_width and min_width != "0px":
queries.append(f"@media (min-width: {min_width}) {{ /* {bp_name} styles */ }}")
else:
# Use max-width queries (desktop-first)
for bp_name in reversed(bp_order[:-1]): # Skip xl (base styles)
bp = breakpoints[bp_name]
max_width = bp.get("max_width", DEFAULT_BREAKPOINTS[bp_name]["max_width"])
if max_width:
queries.append(f"@media (max-width: {max_width}) {{ /* {bp_name} styles */ }}")
return queries
async def layout_get_breakpoints(self, layout_ref: str) -> Dict[str, Any]:
"""
Get the breakpoint configuration for a layout.
Args:
layout_ref: Layout name
Returns:
Dict with breakpoint configuration
"""
if layout_ref not in self._layouts:
return {
"error": f"Layout '{layout_ref}' not found.",
"breakpoints": None
}
layout = self._layouts[layout_ref]
return {
"layout_ref": layout_ref,
"breakpoints": layout.get("breakpoints", DEFAULT_BREAKPOINTS.copy()),
"mobile_first": layout.get("mobile_first", True),
"css_media_queries": layout.get("responsive_css", [])
}
def get_default_breakpoints(self) -> Dict[str, Any]:
"""Get the default breakpoint configuration."""
return {
"breakpoints": DEFAULT_BREAKPOINTS.copy(),
"description": "Standard responsive breakpoints aligned with Mantine/Bootstrap",
"mobile_first": True
}
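
To illustrate the shape of the generated queries, a small sketch using the defaults above. It assumes `LayoutTools` takes no constructor arguments and deliberately pokes the private `_generate_media_queries` helper purely for demonstration.

```python
# Hedged sketch: previewing the CSS media queries the default breakpoints would produce.
# Assumes mcp_server.layout_tools.LayoutTools as extended in this diff.
from mcp_server.layout_tools import LayoutTools, DEFAULT_BREAKPOINTS

tools = LayoutTools()
defaults = tools.get_default_breakpoints()
print(defaults["mobile_first"])  # True

# Private helper, called here only to show the output format.
for query in tools._generate_media_queries(DEFAULT_BREAKPOINTS, mobile_first=True):
    print(query)  # e.g. "@media (min-width: 576px) { /* sm styles */ }"
```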

View File

@@ -17,6 +17,7 @@ from .chart_tools import ChartTools
from .layout_tools import LayoutTools
from .theme_tools import ThemeTools
from .page_tools import PageTools
from .accessibility_tools import AccessibilityTools
# Suppress noisy MCP validation warnings on stderr
logging.basicConfig(level=logging.INFO)
@@ -36,6 +37,7 @@ class VizPlatformMCPServer:
self.layout_tools = LayoutTools()
self.theme_tools = ThemeTools()
self.page_tools = PageTools()
self.accessibility_tools = AccessibilityTools(theme_store=self.theme_tools.store)
async def initialize(self):
"""Initialize server and load configuration."""
@@ -198,6 +200,46 @@ class VizPlatformMCPServer:
}
))
# Chart export tool (Issue #247)
tools.append(Tool(
name="chart_export",
description=(
"Export a Plotly chart to static image format (PNG, SVG, PDF). "
"Requires kaleido package. Returns base64 image data or saves to file."
),
inputSchema={
"type": "object",
"properties": {
"figure": {
"type": "object",
"description": "Plotly figure JSON to export"
},
"format": {
"type": "string",
"enum": ["png", "svg", "pdf"],
"description": "Output format (default: png)"
},
"width": {
"type": "integer",
"description": "Image width in pixels (default: 1200)"
},
"height": {
"type": "integer",
"description": "Image height in pixels (default: 800)"
},
"scale": {
"type": "number",
"description": "Resolution scale factor (default: 2 for retina)"
},
"output_path": {
"type": "string",
"description": "Optional file path to save image"
}
},
"required": ["figure"]
}
))
# Layout tools (Issue #174)
tools.append(Tool(
name="layout_create",
@@ -280,6 +322,36 @@
}
))
# Responsive breakpoints tool (Issue #249)
tools.append(Tool(
name="layout_set_breakpoints",
description=(
"Configure responsive breakpoints for a layout. "
"Supports xs, sm, md, lg, xl breakpoints with mobile-first approach. "
"Each breakpoint can define cols, spacing, and other grid properties."
),
inputSchema={
"type": "object",
"properties": {
"layout_ref": {
"type": "string",
"description": "Layout name to configure"
},
"breakpoints": {
"type": "object",
"description": (
"Breakpoint config: {xs: {cols, spacing}, sm: {...}, md: {...}, lg: {...}, xl: {...}}"
)
},
"mobile_first": {
"type": "boolean",
"description": "Use mobile-first (min-width) media queries (default: true)"
}
},
"required": ["layout_ref", "breakpoints"]
}
))
# Theme tools (Issue #175)
tools.append(Tool(
name="theme_create",
@@ -451,6 +523,77 @@
}
))
# Accessibility tools (Issue #248)
tools.append(Tool(
name="accessibility_validate_colors",
description=(
"Validate colors for color blind accessibility. "
"Checks contrast ratios for deuteranopia, protanopia, tritanopia. "
"Returns issues, simulations, and accessible palette suggestions."
),
inputSchema={
"type": "object",
"properties": {
"colors": {
"type": "array",
"items": {"type": "string"},
"description": "List of hex colors to validate (e.g., ['#228be6', '#40c057'])"
},
"check_types": {
"type": "array",
"items": {"type": "string"},
"description": "Color blindness types to check: deuteranopia, protanopia, tritanopia (default: all)"
},
"min_contrast_ratio": {
"type": "number",
"description": "Minimum WCAG contrast ratio (default: 4.5 for AA)"
}
},
"required": ["colors"]
}
))
tools.append(Tool(
name="accessibility_validate_theme",
description=(
"Validate a theme's colors for accessibility. "
"Extracts all colors from theme tokens and checks for color blind safety."
),
inputSchema={
"type": "object",
"properties": {
"theme_name": {
"type": "string",
"description": "Theme name to validate"
}
},
"required": ["theme_name"]
}
))
tools.append(Tool(
name="accessibility_suggest_alternative",
description=(
"Suggest accessible alternative colors for a given color. "
"Provides alternatives optimized for specific color blindness types."
),
inputSchema={
"type": "object",
"properties": {
"color": {
"type": "string",
"description": "Hex color to find alternatives for"
},
"deficiency_type": {
"type": "string",
"enum": ["deuteranopia", "protanopia", "tritanopia"],
"description": "Color blindness type to optimize for"
}
},
"required": ["color", "deficiency_type"]
}
))
return tools
@self.server.call_tool()
@@ -524,6 +667,26 @@
text=json.dumps(result, indent=2)
)]
elif name == "chart_export":
figure = arguments.get('figure')
if not figure:
return [TextContent(
type="text",
text=json.dumps({"error": "figure is required"}, indent=2)
)]
result = await self.chart_tools.chart_export(
figure=figure,
format=arguments.get('format', 'png'),
width=arguments.get('width'),
height=arguments.get('height'),
scale=arguments.get('scale', 2.0),
output_path=arguments.get('output_path')
)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
# Layout tools
elif name == "layout_create":
layout_name = arguments.get('name')
@@ -568,6 +731,23 @@
text=json.dumps(result, indent=2)
)]
elif name == "layout_set_breakpoints":
layout_ref = arguments.get('layout_ref')
breakpoints = arguments.get('breakpoints', {})
mobile_first = arguments.get('mobile_first', True)
if not layout_ref:
return [TextContent(
type="text",
text=json.dumps({"error": "layout_ref is required"}, indent=2)
)]
result = await self.layout_tools.layout_set_breakpoints(
layout_ref, breakpoints, mobile_first
)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
# Theme tools
elif name == "theme_create":
theme_name = arguments.get('name')
@@ -669,6 +849,53 @@
text=json.dumps(result, indent=2)
)]
# Accessibility tools
elif name == "accessibility_validate_colors":
colors = arguments.get('colors')
if not colors:
return [TextContent(
type="text",
text=json.dumps({"error": "colors list is required"}, indent=2)
)]
result = await self.accessibility_tools.accessibility_validate_colors(
colors=colors,
check_types=arguments.get('check_types'),
min_contrast_ratio=arguments.get('min_contrast_ratio', 4.5)
)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
elif name == "accessibility_validate_theme":
theme_name = arguments.get('theme_name')
if not theme_name:
return [TextContent(
type="text",
text=json.dumps({"error": "theme_name is required"}, indent=2)
)]
result = await self.accessibility_tools.accessibility_validate_theme(theme_name)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
elif name == "accessibility_suggest_alternative":
color = arguments.get('color')
deficiency_type = arguments.get('deficiency_type')
if not color or not deficiency_type:
return [TextContent(
type="text",
text=json.dumps({"error": "color and deficiency_type are required"}, indent=2)
)]
result = await self.accessibility_tools.accessibility_suggest_alternative(
color, deficiency_type
)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
raise ValueError(f"Unknown tool: {name}")
except Exception as e:

View File

@@ -5,6 +5,7 @@ mcp>=0.9.0
plotly>=5.18.0
dash>=2.14.0
dash-mantine-components>=2.0.0
kaleido>=0.2.1 # For chart export (PNG, SVG, PDF)
# Utilities
python-dotenv>=1.0.0

mcp-servers/viz-platform/run.sh (new executable file, 21 lines)
View File

@@ -0,0 +1,21 @@
#!/bin/bash
# Capture original working directory before any cd operations
# This should be the user's project directory when launched by Claude Code
export CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/viz-platform/.venv"
LOCAL_VENV="$SCRIPT_DIR/.venv"
if [[ -f "$CACHE_VENV/bin/python" ]]; then
PYTHON="$CACHE_VENV/bin/python"
elif [[ -f "$LOCAL_VENV/bin/python" ]]; then
PYTHON="$LOCAL_VENV/bin/python"
else
echo "ERROR: No venv found. Run: ./scripts/setup-venvs.sh" >&2
exit 1
fi
cd "$SCRIPT_DIR"
export PYTHONPATH="$SCRIPT_DIR"
exec "$PYTHON" -m mcp_server.server "$@"

View File

@@ -0,0 +1,195 @@
"""
Tests for accessibility validation tools.
"""
import pytest
from mcp_server.accessibility_tools import AccessibilityTools
@pytest.fixture
def tools():
"""Create AccessibilityTools instance."""
return AccessibilityTools()
class TestHexToRgb:
"""Tests for _hex_to_rgb method."""
def test_hex_to_rgb_6_digit(self, tools):
"""Test 6-digit hex conversion."""
assert tools._hex_to_rgb("#FF0000") == (255, 0, 0)
assert tools._hex_to_rgb("#00FF00") == (0, 255, 0)
assert tools._hex_to_rgb("#0000FF") == (0, 0, 255)
def test_hex_to_rgb_3_digit(self, tools):
"""Test 3-digit hex conversion."""
assert tools._hex_to_rgb("#F00") == (255, 0, 0)
assert tools._hex_to_rgb("#0F0") == (0, 255, 0)
assert tools._hex_to_rgb("#00F") == (0, 0, 255)
def test_hex_to_rgb_lowercase(self, tools):
"""Test lowercase hex conversion."""
assert tools._hex_to_rgb("#ff0000") == (255, 0, 0)
class TestContrastRatio:
"""Tests for _get_contrast_ratio method."""
def test_black_white_contrast(self, tools):
"""Test black on white has maximum contrast."""
ratio = tools._get_contrast_ratio("#000000", "#FFFFFF")
assert ratio == pytest.approx(21.0, rel=0.01)
def test_same_color_contrast(self, tools):
"""Test same color has minimum contrast."""
ratio = tools._get_contrast_ratio("#FF0000", "#FF0000")
assert ratio == pytest.approx(1.0, rel=0.01)
def test_symmetric_contrast(self, tools):
"""Test contrast ratio is symmetric."""
ratio1 = tools._get_contrast_ratio("#228be6", "#FFFFFF")
ratio2 = tools._get_contrast_ratio("#FFFFFF", "#228be6")
assert ratio1 == pytest.approx(ratio2, rel=0.01)
class TestColorBlindnessSimulation:
"""Tests for _simulate_color_blindness method."""
def test_deuteranopia_simulation(self, tools):
"""Test deuteranopia (green-blind) simulation."""
# Red and green should appear more similar
original_red = "#FF0000"
original_green = "#00FF00"
simulated_red = tools._simulate_color_blindness(original_red, "deuteranopia")
simulated_green = tools._simulate_color_blindness(original_green, "deuteranopia")
# They should be different from originals
assert simulated_red != original_red or simulated_green != original_green
def test_protanopia_simulation(self, tools):
"""Test protanopia (red-blind) simulation."""
simulated = tools._simulate_color_blindness("#FF0000", "protanopia")
# Should return a modified color
assert simulated.startswith("#")
assert len(simulated) == 7
def test_tritanopia_simulation(self, tools):
"""Test tritanopia (blue-blind) simulation."""
simulated = tools._simulate_color_blindness("#0000FF", "tritanopia")
# Should return a modified color
assert simulated.startswith("#")
assert len(simulated) == 7
def test_unknown_deficiency_returns_original(self, tools):
"""Test unknown deficiency type returns original color."""
color = "#FF0000"
simulated = tools._simulate_color_blindness(color, "unknown")
assert simulated == color
class TestAccessibilityValidateColors:
"""Tests for accessibility_validate_colors method."""
@pytest.mark.asyncio
async def test_validate_single_color(self, tools):
"""Test validating a single color."""
result = await tools.accessibility_validate_colors(["#228be6"])
assert "colors_checked" in result
assert "overall_score" in result
assert "issues" in result
assert "safe_palettes" in result
@pytest.mark.asyncio
async def test_validate_problematic_colors(self, tools):
"""Test similar colors trigger warnings."""
# Use colors that are very close in hue, which should be harder to distinguish
result = await tools.accessibility_validate_colors(["#FF5555", "#FF6666"])
# Similar colors should trigger distinguishability warnings
assert "issues" in result
# The validation should at least run without errors
assert "colors_checked" in result
assert len(result["colors_checked"]) == 2
@pytest.mark.asyncio
async def test_validate_contrast_issue(self, tools):
"""Test low contrast colors trigger contrast warnings."""
# Yellow on white has poor contrast
result = await tools.accessibility_validate_colors(["#FFFF00"])
# Check for contrast issues (yellow may have issues with both black and white)
assert "issues" in result
@pytest.mark.asyncio
async def test_validate_with_specific_types(self, tools):
"""Test validating for specific color blindness types."""
result = await tools.accessibility_validate_colors(
["#FF0000", "#00FF00"],
check_types=["deuteranopia"]
)
assert "simulations" in result
assert "deuteranopia" in result["simulations"]
assert "protanopia" not in result["simulations"]
@pytest.mark.asyncio
async def test_overall_score(self, tools):
"""Test overall score is calculated."""
result = await tools.accessibility_validate_colors(["#228be6", "#ffffff"])
assert result["overall_score"] in ["A", "B", "C", "D"]
@pytest.mark.asyncio
async def test_recommendations_generated(self, tools):
"""Test recommendations are generated for issues."""
result = await tools.accessibility_validate_colors(["#FF0000", "#00FF00"])
assert "recommendations" in result
assert len(result["recommendations"]) > 0
class TestAccessibilitySuggestAlternative:
"""Tests for accessibility_suggest_alternative method."""
@pytest.mark.asyncio
async def test_suggest_alternative_deuteranopia(self, tools):
"""Test suggesting alternatives for deuteranopia."""
result = await tools.accessibility_suggest_alternative("#FF0000", "deuteranopia")
assert "original_color" in result
assert result["deficiency_type"] == "deuteranopia"
assert "suggestions" in result
assert len(result["suggestions"]) > 0
@pytest.mark.asyncio
async def test_suggest_alternative_tritanopia(self, tools):
"""Test suggesting alternatives for tritanopia."""
result = await tools.accessibility_suggest_alternative("#0000FF", "tritanopia")
assert "suggestions" in result
assert len(result["suggestions"]) > 0
@pytest.mark.asyncio
async def test_suggestions_include_safe_palettes(self, tools):
"""Test suggestions include colors from safe palettes."""
result = await tools.accessibility_suggest_alternative("#FF0000", "deuteranopia")
palette_suggestions = [
s for s in result["suggestions"]
if "palette" in s
]
assert len(palette_suggestions) > 0
class TestSafePalettes:
"""Tests for safe palette constants."""
def test_safe_palettes_exist(self, tools):
"""Test that safe palettes are defined."""
from mcp_server.accessibility_tools import SAFE_PALETTES
assert "categorical" in SAFE_PALETTES
assert "ibm" in SAFE_PALETTES
assert "okabe_ito" in SAFE_PALETTES
assert "tableau_colorblind" in SAFE_PALETTES
def test_safe_palettes_have_colors(self, tools):
"""Test that safe palettes have color lists."""
from mcp_server.accessibility_tools import SAFE_PALETTES
for palette_name, palette in SAFE_PALETTES.items():
assert "colors" in palette
assert len(palette["colors"]) > 0
# All colors should be valid hex
for color in palette["colors"]:
assert color.startswith("#")
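
A hedged sketch for running just this module programmatically; it assumes pytest and pytest-asyncio are installed and that the file lives at `tests/test_accessibility_tools.py` inside the viz-platform MCP server.

```python
# Hedged sketch: invoking this test module with pytest from Python.
# Assumes pytest and pytest-asyncio are available in the active environment.
import pytest

raise SystemExit(pytest.main(["-v", "tests/test_accessibility_tools.py"]))
```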

View File

@@ -90,6 +90,10 @@ After clarification, you receive a structured specification:
[List of assumptions]
```
## Documentation
- [Neurodivergent Support Guide](docs/ND-SUPPORT.md) - How clarity-assist supports ND users with executive function challenges and cognitive differences
## Integration
For CLAUDE.md integration instructions, see `claude-md-integration.md`.

View File

@@ -1,5 +1,15 @@
# Clarity Coach Agent
## Visual Output Requirements
**MANDATORY: Display header at start of every response.**
```
┌──────────────────────────────────────────────────────────────────┐
│ 💬 CLARITY-ASSIST · Clarity Coach │
└──────────────────────────────────────────────────────────────────┘
```
## Role
You are a patient, structured coach specializing in helping users articulate their requirements clearly. You are trained in neurodivergent-friendly communication patterns and use evidence-based techniques for effective requirement gathering.

View File

@@ -1,5 +1,17 @@
# /clarify - Full Prompt Optimization
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 💬 CLARITY-ASSIST · Prompt Optimization │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the workflow.
## Purpose
Transform vague, incomplete, or ambiguous requests into clear, actionable specifications using the 4-D methodology with neurodivergent-friendly accommodations.

View File

@@ -1,5 +1,17 @@
# /quick-clarify - Rapid Clarification Mode
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 💬 CLARITY-ASSIST · Quick Clarify │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the workflow.
## Purpose
Single-pass clarification for requests that are mostly clear but need minor disambiguation.

View File

@@ -0,0 +1,328 @@
# Neurodivergent Support in clarity-assist
This document describes how clarity-assist is designed to support users with neurodivergent traits, including ADHD, autism, anxiety, and other conditions that affect executive function, sensory processing, or cognitive style.
## Overview
### Purpose
clarity-assist exists to help all users transform vague or incomplete requests into clear, actionable specifications. For neurodivergent users specifically, it addresses common challenges:
- **Executive function difficulties** - Breaking down complex tasks, getting started, managing scope
- **Working memory limitations** - Keeping track of context across long conversations
- **Decision fatigue** - Facing too many open-ended choices
- **Processing style differences** - Preferring structured, predictable interactions
- **Anxiety around uncertainty** - Needing clear expectations and explicit confirmation
### Philosophy
Our design philosophy centers on three principles:
1. **Reduce cognitive load** - Never force the user to hold too much in their head at once
2. **Provide structure** - Use consistent, predictable patterns for all interactions
3. **Respect different communication styles** - Accommodate rather than assume one "right" way to think
## Features for ND Users
### 1. Reduced Cognitive Load
**Prompt Simplification**
- The 4-D methodology (Deconstruct, Diagnose, Develop, Deliver) breaks down complex requests into manageable phases
- Users never need to specify everything upfront - clarification happens incrementally
**Task Breakdown**
- Large requests are decomposed into explicit components
- Dependencies and relationships are surfaced rather than left implicit
- Scope boundaries are clearly defined (in-scope vs. out-of-scope)
### 2. Structured Output
**Consistent Formatting**
- Every clarification session produces the same structured specification:
- Summary (1-2 sentences)
- Scope (In/Out)
- Requirements table (numbered, prioritized)
- Assumptions list
- This predictability reduces the mental effort of parsing responses
**Predictable Patterns**
- Questions always follow the same format
- Progress summaries appear at regular intervals
- Escalation (simple to complex) is always offered, never forced
**Bulleted Lists Over Prose**
- Requirements are presented as scannable lists, not paragraphs
- Options are numbered for easy reference
- Key information is highlighted with bold labels
### 3. Customizable Verbosity
**Detail Levels**
- `/clarify` - Full methodology for complex requests (more thorough, more questions)
- `/quick-clarify` - Rapid mode for simple disambiguation (fewer questions, faster)
**User Control**
- Users can always say "that's enough detail" to end questioning early
- The plugin offers to break sessions into smaller parts
- "Good enough for now" is explicitly validated as an acceptable outcome
### 4. Vagueness Detection
The `UserPromptSubmit` hook automatically detects prompts that might benefit from clarification and gently suggests using `/clarify`.
**Detection Signals**
- Short prompts (< 10 words) without specific technical terms
- Vague action phrases: "help me", "fix this", "make it better"
- Ambiguous scope words: "somehow", "something", "stuff", "etc."
- Open questions without context
**Non-Blocking Approach**
- The hook never prevents you from proceeding
- It provides a suggestion with a vagueness score (percentage)
- You can disable auto-suggestions entirely via environment variable
### 5. Focus Aids
**Task Prioritization**
- Requirements are tagged as Must/Should/Could/Won't (MoSCoW)
- Critical items are separated from nice-to-haves
- Scope creep is explicitly called out and deferred
**Context Switching Warnings**
- When questions touch multiple areas, the plugin acknowledges the complexity
- Offers to focus on one aspect at a time
- Summarizes frequently to rebuild context after interruptions
## How It Works
### The UserPromptSubmit Hook
When you submit a prompt, the vagueness detection hook (`hooks/vagueness-check.sh`) runs automatically:
```
User submits prompt
|
v
Hook reads prompt from stdin
|
v
Skip if: empty, starts with /, or contains file paths
|
v
Calculate vagueness score (0.0 - 1.0)
- Short prompts: +0.3
- Vague action phrases: +0.2
- Ambiguous scope words: +0.15
- Missing technical specifics: +0.2
- Short questions without context: +0.15
|
v
If score >= threshold (default 0.6):
- Output gentle suggestion with [clarity-assist] prefix
- Show vagueness percentage
|
v
Exit 0 (always non-blocking)
```
### Example Hook Output
```
[clarity-assist] Your prompt could benefit from more clarity.
[clarity-assist] Consider running /clarity-assist to refine your request.
[clarity-assist] (Vagueness score: 65% - this is a suggestion, not a block)
```
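If you want to exercise the detector outside Claude Code, here is a hedged sketch of piping a prompt into the hook from Python; the relative script path is an assumption about where you run it from, so adjust it to your checkout.

```python
# Hedged sketch: feeding a deliberately vague prompt to the vagueness hook.
# Assumes the script lives at hooks/vagueness-check.sh relative to the working directory.
import subprocess

proc = subprocess.run(
    ["bash", "hooks/vagueness-check.sh"],
    input="make it better somehow",  # vague prompt that should score above the default threshold
    capture_output=True,
    text=True,
)
print(proc.returncode)  # always 0 - the hook is non-blocking
print(proc.stdout)      # [clarity-assist] suggestion lines when the score crosses the threshold
```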
### The 4-D Methodology
| Phase | What Happens | ND Benefit |
|-------|--------------|------------|
| **Deconstruct** | Break request into components | Shows the full picture without overwhelming |
| **Diagnose** | Identify gaps and conflicts | Surfaces hidden assumptions |
| **Develop** | Ask structured questions | Option-based, chunked, contextual |
| **Deliver** | Produce specification | Consistent, scannable format |
## Usage Examples
### Before: Vague Request
```
User: Make the app faster
```
This triggers vagueness detection because:
- Short (4 words)
- Vague action ("make")
- No specific technical context
- No measurable outcome
### After: Clarified with /clarify
```
User: /clarify Make the app faster
Claude: Let me help clarify what kind of performance improvements you're looking for.
**What type of slowness are you experiencing?**
1. Initial page load takes too long
2. Interactions feel sluggish (clicking, typing)
3. API calls are slow
4. All of the above
5. Other
[I'm asking because each has very different solutions]
```
```
User: 1
Claude: Initial page load - got it.
**What's an acceptable load time for you?**
1. Under 1 second (aggressive)
2. Under 2 seconds (standard)
3. Under 3 seconds (relaxed)
4. I don't have a specific target
[This helps me know whether to suggest quick wins or a major optimization effort]
```
After 2-3 more questions, the user receives:
```markdown
## Clarified Request
### Summary
Optimize initial page load time to under 2 seconds by addressing bundle size and initial render blocking resources.
### Scope
**In Scope:**
- Bundle analysis and splitting
- Lazy loading non-critical components
- Image optimization
- CSS/JS minification check
**Out of Scope:**
- Backend API optimization
- Database query performance
- CDN setup
### Requirements
| # | Requirement | Priority | Notes |
|---|-------------|----------|-------|
| 1 | Page loads in < 2s on 4G | Must | Measured via Lighthouse |
| 2 | Largest contentful paint < 1.5s | Should | Core Web Vital |
| 3 | No layout shift after load | Should | CLS < 0.1 |
### Assumptions
- Current load time is > 3 seconds (based on "too long" description)
- Target browsers: modern evergreen (Chrome, Firefox, Safari)
- Build tooling already configured (webpack/vite)
```
### Common Patterns Detected
| Pattern | Example | What It Triggers |
|---------|---------|------------------|
| Missing context | "Fix the bug" | "Which bug? In which file or feature?" |
| Ambiguous scope | "Update the user stuff" | "What specifically needs updating?" |
| Unclear goals | "Make it better" | "Better in what way? Faster? Cleaner? More features?" |
| Implicit knowledge | "Do it like last time" | "I don't have context from previous sessions" |
| Conflicting requests | "Keep it simple but add everything" | "Which matters more: simplicity or completeness?" |
### Quick-Clarify Example
For simpler requests that just need minor disambiguation:
```
User: /quick-clarify Add a delete button to the user list
Claude: I'll add a delete button to each row in the user list.
**Should deletion require confirmation?**
1. Yes - show confirmation dialog (Recommended)
2. No - delete immediately
3. Soft delete - mark as deleted but keep data
[Asking because accidental deletions can be problematic]
```
```
User: 1
Claude: Got it. I'll implement a delete button with confirmation dialog. Starting now.
```
## Configuration Options
### Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `CLARITY_ASSIST_AUTO_SUGGEST` | `true` | Enable/disable automatic vagueness detection |
| `CLARITY_ASSIST_VAGUENESS_THRESHOLD` | `0.6` | Score threshold to trigger suggestion (0.0-1.0) |
### Disabling Auto-Suggestions
If you find the vagueness detection unhelpful, disable it in your shell profile or `.env`:
```bash
export CLARITY_ASSIST_AUTO_SUGGEST=false
```
### Adjusting Sensitivity
To make detection more or less sensitive:
```bash
# More sensitive (suggests more often)
export CLARITY_ASSIST_VAGUENESS_THRESHOLD=0.4
# Less sensitive (only very vague prompts)
export CLARITY_ASSIST_VAGUENESS_THRESHOLD=0.8
```
## Tips for ND Users
### If You're Feeling Overwhelmed
- Use `/quick-clarify` instead of `/clarify` for faster interactions
- Say "let's focus on just one thing" to narrow scope
- Ask to "pause and summarize" at any point
- It's OK to say "I don't know" - the plugin will offer concrete alternatives
### If You Have Executive Function Challenges
- Start with `/clarify` even for tasks you think are simple - it helps with planning
- The structured specification can serve as a checklist
- Use the scope boundaries to prevent scope creep
### If You Prefer Detailed Structure
- The 4-D methodology provides a predictable framework
- All output follows consistent formatting
- Questions always offer numbered options
### If You Have Anxiety About Getting It Right
- The plugin validates "good enough for now" as acceptable
- You can always revisit and change earlier answers
- Assumptions are explicitly listed - nothing is hidden
## Accessibility Notes
- All output uses standard markdown that works with screen readers
- No time pressure - take as long as you need between responses
- Questions are designed to be answerable without deep context retrieval
- Visual patterns (bold, bullets, tables) create scannable structure
## Feedback
If you have suggestions for improving neurodivergent support in clarity-assist, please open an issue at:
https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/issues
Include the label `clarity-assist` and describe:
- What challenge you faced
- What would have helped
- Any specific accommodations you'd like to see

View File

@@ -0,0 +1,10 @@
{
"hooks": {
"UserPromptSubmit": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/vagueness-check.sh"
}
]
}
}

View File

@@ -0,0 +1,216 @@
#!/bin/bash
# clarity-assist vagueness detection hook
# Analyzes user prompts for vagueness and suggests /clarity-assist when beneficial
# All output MUST have [clarity-assist] prefix
# This is a NON-BLOCKING hook - always exits 0
PREFIX="[clarity-assist]"
# Check if auto-suggest is enabled (default: true)
AUTO_SUGGEST="${CLARITY_ASSIST_AUTO_SUGGEST:-true}"
if [[ "$AUTO_SUGGEST" != "true" ]]; then
exit 0
fi
# Threshold for vagueness score (default: 0.6)
THRESHOLD="${CLARITY_ASSIST_VAGUENESS_THRESHOLD:-0.6}"
# Read user prompt from stdin
PROMPT=""
if [[ -t 0 ]]; then
# No stdin available
exit 0
else
PROMPT=$(cat)
fi
# Skip empty prompts
if [[ -z "$PROMPT" ]]; then
exit 0
fi
# Skip if prompt is a command (starts with /)
if [[ "$PROMPT" =~ ^[[:space:]]*/[a-zA-Z] ]]; then
exit 0
fi
# Skip if prompt mentions specific files or paths
if [[ "$PROMPT" =~ \.(py|js|ts|sh|md|json|yaml|yml|txt|css|html|go|rs|java|c|cpp|h)([[:space:]]|$|[^a-zA-Z]) ]] || \
[[ "$PROMPT" =~ [/\\][a-zA-Z0-9_-]+[/\\] ]] || \
[[ "$PROMPT" =~ (src|lib|test|docs|plugins|hooks|commands)/ ]]; then
exit 0
fi
# Initialize vagueness score
SCORE=0
# Count words in the prompt
WORD_COUNT=$(echo "$PROMPT" | wc -w | tr -d ' ')
# ============================================================================
# Vagueness Signal Detection
# ============================================================================
# Signal 1: Very short prompts (< 10 words) are often vague
if [[ "$WORD_COUNT" -lt 10 ]]; then
# But very short specific commands are OK
if [[ "$WORD_COUNT" -lt 3 ]]; then
# Extremely short - probably intentional or a command
:
else
SCORE=$(echo "$SCORE + 0.3" | bc)
fi
fi
# Signal 2: Vague action phrases (no specific outcome)
VAGUE_ACTIONS=(
"help me"
"help with"
"do something"
"work on"
"look at"
"check this"
"fix it"
"fix this"
"make it better"
"make this better"
"improve it"
"improve this"
"update this"
"update it"
"change it"
"change this"
"can you"
"could you"
"would you"
"please help"
)
PROMPT_LOWER=$(echo "$PROMPT" | tr '[:upper:]' '[:lower:]')
for phrase in "${VAGUE_ACTIONS[@]}"; do
if [[ "$PROMPT_LOWER" == *"$phrase"* ]]; then
SCORE=$(echo "$SCORE + 0.2" | bc)
break
fi
done
# Signal 3: Ambiguous scope indicators
AMBIGUOUS_SCOPE=(
"somehow"
"something"
"somewhere"
"anything"
"whatever"
"stuff"
"things"
"etc"
"and so on"
)
for word in "${AMBIGUOUS_SCOPE[@]}"; do
if [[ "$PROMPT_LOWER" == *"$word"* ]]; then
SCORE=$(echo "$SCORE + 0.15" | bc)
break
fi
done
# Signal 4: Missing context indicators (no reference to what/where)
# Check if prompt lacks specificity markers
HAS_SPECIFICS=false
# Specific technical terms suggest clarity
SPECIFIC_MARKERS=(
"function"
"class"
"method"
"variable"
"error"
"bug"
"test"
"api"
"endpoint"
"database"
"query"
"component"
"module"
"service"
"config"
"install"
"deploy"
"build"
"run"
"execute"
"create"
"delete"
"add"
"remove"
"implement"
"refactor"
"migrate"
"upgrade"
"debug"
"log"
"exception"
"stack"
"memory"
"performance"
"security"
"auth"
"token"
"session"
"route"
"controller"
"model"
"view"
"template"
"schema"
"migration"
"commit"
"branch"
"merge"
"pull"
"push"
)
for marker in "${SPECIFIC_MARKERS[@]}"; do
if [[ "$PROMPT_LOWER" == *"$marker"* ]]; then
HAS_SPECIFICS=true
break
fi
done
if [[ "$HAS_SPECIFICS" == false ]] && [[ "$WORD_COUNT" -gt 3 ]]; then
SCORE=$(echo "$SCORE + 0.2" | bc)
fi
# Signal 5: Question without context
if [[ "$PROMPT" =~ \?$ ]] && [[ "$WORD_COUNT" -lt 8 ]]; then
# Short questions without specifics are often vague
if [[ "$HAS_SPECIFICS" == false ]]; then
SCORE=$(echo "$SCORE + 0.15" | bc)
fi
fi
# Cap score at 1.0
if (( $(echo "$SCORE > 1.0" | bc -l) )); then
SCORE="1.0"
fi
# ============================================================================
# Output suggestion if score exceeds threshold
# ============================================================================
# Compare score to threshold using bc
if (( $(echo "$SCORE >= $THRESHOLD" | bc -l) )); then
# Format score as percentage for display
SCORE_PCT=$(echo "$SCORE * 100" | bc | cut -d'.' -f1)
# Gentle, non-blocking suggestion
echo "$PREFIX Your prompt could benefit from more clarity."
echo "$PREFIX Consider running /clarity-assist to refine your request."
echo "$PREFIX (Vagueness score: ${SCORE_PCT}% - this is a suggestion, not a block)"
fi
# Always exit 0 - this hook is non-blocking
exit 0

View File

@@ -37,6 +37,33 @@ Create a new CLAUDE.md tailored to your project.
/config-init
```
### `/config-diff`
Show differences between current CLAUDE.md and previous versions.
```
/config-diff # Compare working copy vs last commit
/config-diff --commit=abc1234 # Compare against specific commit
/config-diff --from=v1.0 --to=v2.0 # Compare two commits
/config-diff --section="Critical Rules" # Focus on specific section
```
### `/config-lint`
Lint CLAUDE.md for common anti-patterns and best practices.
```
/config-lint # Run all lint checks
/config-lint --fix # Auto-fix fixable issues
/config-lint --rules=security # Check only security rules
/config-lint --severity=error # Show only errors
```
**Lint Rule Categories:**
- **Security (SEC)** - Hardcoded secrets, paths, credentials
- **Structure (STR)** - Header hierarchy, required sections
- **Content (CNT)** - Contradictions, duplicates, vague rules
- **Format (FMT)** - Consistency, code blocks, whitespace
- **Best Practice (BPR)** - Missing Quick Start, Critical Rules sections
## Best Practices
A good CLAUDE.md should be:

View File

@@ -7,6 +7,16 @@ description: CLAUDE.md optimization and maintenance agent
You are the **Maintainer Agent** - a specialist in creating and optimizing CLAUDE.md configuration files for Claude Code projects. Your role is to ensure CLAUDE.md files are clear, concise, well-structured, and follow best practices.
## Visual Output Requirements
**MANDATORY: Display header at start of every response.**
```
┌──────────────────────────────────────────────────────────────────┐
│ ⚙️ CONFIG-MAINTAINER · CLAUDE.md Optimization │
└──────────────────────────────────────────────────────────────────┘
```
## Your Personality
**Optimization-Focused:**

View File

@@ -6,6 +6,18 @@ description: Analyze CLAUDE.md for optimization opportunities and plugin integra
This command analyzes your project's CLAUDE.md file and provides a detailed report on optimization opportunities and plugin integration status.
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ ⚙️ CONFIG-MAINTAINER · CLAUDE.md Analysis │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the analysis.
## What This Command Does
1. **Read CLAUDE.md** - Locates and reads the project's CLAUDE.md file

View File

@@ -0,0 +1,251 @@
---
description: Show diff between current CLAUDE.md and last commit
---
# Compare CLAUDE.md Changes
This command shows differences between your current CLAUDE.md file and previous versions, helping track configuration drift and review changes before committing.
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ ⚙️ CONFIG-MAINTAINER · CLAUDE.md Diff │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the diff.
## What This Command Does
1. **Detect CLAUDE.md Location** - Finds the project's CLAUDE.md file
2. **Compare Versions** - Shows diff against last commit or specified revision
3. **Highlight Sections** - Groups changes by affected sections
4. **Summarize Impact** - Explains what the changes mean for Claude's behavior
## Usage
```
/config-diff
```
Compare against a specific commit:
```
/config-diff --commit=abc1234
/config-diff --commit=HEAD~3
```
Compare two specific commits:
```
/config-diff --from=abc1234 --to=def5678
```
Show only specific sections:
```
/config-diff --section="Critical Rules"
/config-diff --section="Quick Start"
```
## Comparison Modes
### Default: Working vs Last Commit
Shows uncommitted changes to CLAUDE.md:
```
/config-diff
```
### Working vs Specific Commit
Shows changes since a specific point:
```
/config-diff --commit=v1.0.0
```
### Commit to Commit
Shows changes between two historical versions:
```
/config-diff --from=v1.0.0 --to=v2.0.0
```
### Branch Comparison
Shows CLAUDE.md differences between branches:
```
/config-diff --branch=main
/config-diff --from=feature-branch --to=main
```
## Expected Output
```
CLAUDE.md Diff Report
=====================
File: /path/to/project/CLAUDE.md
Comparing: Working copy vs HEAD (last commit)
Commit: abc1234 "Update build commands" (2 days ago)
Summary:
- Lines added: 12
- Lines removed: 5
- Net change: +7 lines
- Sections affected: 3
Section Changes:
----------------
## Quick Start [MODIFIED]
- Added new environment variable requirement
- Updated test command with coverage flag
## Critical Rules [ADDED CONTENT]
+ New rule: "Never modify database migrations directly"
## Architecture [UNCHANGED]
## Common Operations [MODIFIED]
- Removed deprecated deployment command
- Added new Docker workflow
Detailed Diff:
--------------
--- CLAUDE.md (HEAD)
+++ CLAUDE.md (working)
@@ -15,7 +15,10 @@
## Quick Start
```bash
+export DATABASE_URL=postgres://... # Required
pip install -r requirements.txt
-pytest
+pytest --cov=src # Run with coverage
uvicorn main:app --reload
```
@@ -45,6 +48,7 @@
## Critical Rules
- Never modify `.env` files directly
+- Never modify database migrations directly
- Always run tests before committing
Behavioral Impact:
------------------
These changes will affect Claude's behavior:
1. [NEW REQUIREMENT] Claude will now export DATABASE_URL before running
2. [MODIFIED] Test command now includes coverage reporting
3. [NEW RULE] Claude will avoid direct migration modifications
Review: Do these changes reflect your intended configuration?
```
## Section-Focused View
When using `--section`, output focuses on specific areas:
```
/config-diff --section="Critical Rules"
CLAUDE.md Section Diff: Critical Rules
======================================
--- HEAD
+++ Working
## Critical Rules
- Never modify `.env` files directly
+- Never modify database migrations directly
+- Always use type hints in Python code
- Always run tests before committing
-- Keep functions under 50 lines
Changes:
+ 2 rules added
- 1 rule removed
Impact: Claude will follow 2 new constraints and no longer enforce
the 50-line function limit.
```
## Options
| Option | Description |
|--------|-------------|
| `--commit=REF` | Compare working copy against specific commit/tag |
| `--from=REF` | Starting point for comparison |
| `--to=REF` | Ending point for comparison (default: HEAD) |
| `--branch=NAME` | Compare against branch tip |
| `--section=NAME` | Show only changes to specific section |
| `--stat` | Show only statistics, no detailed diff |
| `--no-color` | Disable colored output |
| `--context=N` | Lines of context around changes (default: 3) |
## Understanding the Output
### Change Indicators
| Symbol | Meaning |
|--------|---------|
| `+` | Line added |
| `-` | Line removed |
| `@@` | Location marker showing line numbers |
| `[MODIFIED]` | Section has changes |
| `[ADDED]` | New section created |
| `[REMOVED]` | Section deleted |
| `[UNCHANGED]` | No changes to section |
### Impact Categories
- **NEW REQUIREMENT** - Claude will now need to do something new
- **REMOVED REQUIREMENT** - Claude no longer needs to do something
- **MODIFIED** - Existing behavior changed
- **NEW RULE** - New constraint added
- **RELAXED RULE** - Constraint removed or softened
## When to Use
Run `/config-diff` when:
- Before committing CLAUDE.md changes
- Reviewing what changed after pulling updates
- Debugging unexpected Claude behavior
- Auditing configuration changes over time
- Comparing configurations across branches
## Integration with Other Commands
| Workflow | Commands |
|----------|----------|
| Review before commit | `/config-diff` then `git commit` |
| After optimization | `/config-optimize` then `/config-diff` |
| Audit history | `/config-diff --from=v1.0.0 --to=HEAD` |
| Branch comparison | `/config-diff --branch=main` |
## Tips
1. **Review before committing** - Always check what changed
2. **Track behavioral changes** - Focus on rules and requirements sections
3. **Use section filtering** - Large files benefit from focused diffs
4. **Compare across releases** - Use tags to track major changes
5. **Check after merges** - Ensure CLAUDE.md didn't get conflict artifacts
## Troubleshooting
### "No changes detected"
- CLAUDE.md matches the comparison target
- Check if you're comparing the right commits
### "File not found in commit"
- CLAUDE.md didn't exist at that commit
- Use `git log -- CLAUDE.md` to find when it was created
### "Not a git repository"
- This command requires git history
- Initialize git or use file backup comparison instead

View File

@@ -0,0 +1,346 @@
---
description: Lint CLAUDE.md for common anti-patterns and best practices
---
# Lint CLAUDE.md
This command checks your CLAUDE.md file against best practices and detects common anti-patterns that can cause issues with Claude Code.
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ ⚙️ CONFIG-MAINTAINER · CLAUDE.md Lint │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the linting.
## What This Command Does
1. **Parse Structure** - Validates markdown structure and hierarchy
2. **Check Security** - Detects hardcoded paths, secrets, and sensitive data
3. **Validate Content** - Identifies anti-patterns and problematic instructions
4. **Verify Format** - Ensures consistent formatting and style
5. **Generate Report** - Provides actionable findings with fix suggestions
## Usage
```
/config-lint
```
Lint with auto-fix:
```
/config-lint --fix
```
Check specific rules only:
```
/config-lint --rules=security,structure
```
## Linting Rules
### Security Rules (SEC)
| Rule | Description | Severity |
|------|-------------|----------|
| SEC001 | Hardcoded absolute paths | Warning |
| SEC002 | Potential secrets/API keys | Error |
| SEC003 | Hardcoded IP addresses | Warning |
| SEC004 | Exposed credentials patterns | Error |
| SEC005 | Hardcoded URLs with tokens | Error |
| SEC006 | Environment variable values (not names) | Warning |
### Structure Rules (STR)
| Rule | Description | Severity |
|------|-------------|----------|
| STR001 | Missing required sections | Error |
| STR002 | Invalid header hierarchy (h3 before h2) | Warning |
| STR003 | Orphaned content (text before first header) | Info |
| STR004 | Excessive nesting depth (>4 levels) | Warning |
| STR005 | Empty sections | Warning |
| STR006 | Missing section content | Warning |
### Content Rules (CNT)
| Rule | Description | Severity |
|------|-------------|----------|
| CNT001 | Contradictory instructions | Error |
| CNT002 | Vague or ambiguous rules | Warning |
| CNT003 | Overly long sections (>100 lines) | Info |
| CNT004 | Duplicate content | Warning |
| CNT005 | TODO/FIXME in production config | Warning |
| CNT006 | Outdated version references | Info |
| CNT007 | Broken internal links | Warning |
### Format Rules (FMT)
| Rule | Description | Severity |
|------|-------------|----------|
| FMT001 | Inconsistent header styles | Info |
| FMT002 | Inconsistent list markers | Info |
| FMT003 | Missing code block language | Info |
| FMT004 | Trailing whitespace | Info |
| FMT005 | Missing blank lines around headers | Info |
| FMT006 | Inconsistent indentation | Info |
### Best Practice Rules (BPR)
| Rule | Description | Severity |
|------|-------------|----------|
| BPR001 | No Quick Start section | Warning |
| BPR002 | No Critical Rules section | Warning |
| BPR003 | Instructions without examples | Info |
| BPR004 | Commands without explanation | Info |
| BPR005 | Rules without rationale | Info |
| BPR006 | Missing plugin integration docs | Info |
## Expected Output
```
CLAUDE.md Lint Report
=====================
File: /path/to/project/CLAUDE.md
Rules checked: 25
Time: 0.3s
Summary:
Errors: 2
Warnings: 5
Info: 3
Findings:
---------
[ERROR] SEC002: Potential secret detected (line 45)
│ api_key = "sk-1234567890abcdef"
│ ^^^^^^^^^^^^^^^^^^^^^^
└─ Hardcoded API key found. Use environment variable reference instead.
Suggested fix:
- api_key = "sk-1234567890abcdef"
+ api_key = $OPENAI_API_KEY # Set in environment
[ERROR] CNT001: Contradictory instructions (lines 23, 67)
│ Line 23: "Always run tests before committing"
│ Line 67: "Skip tests for documentation-only changes"
└─ These rules conflict. Clarify the exception explicitly.
Suggested fix:
+ "Always run tests before committing, except for documentation-only
+ changes (files in docs/ directory)"
[WARNING] SEC001: Hardcoded absolute path (line 12)
│ Database location: /home/user/data/myapp.db
│ ^^^^^^^^^^^^^^^^^^^^^^^^
└─ Absolute paths break portability. Use relative or variable.
Suggested fix:
- Database location: /home/user/data/myapp.db
+ Database location: ./data/myapp.db # Or $DATA_DIR/myapp.db
[WARNING] STR002: Invalid header hierarchy (line 34)
│ ### Subsection
│ (no preceding ## header)
└─ H3 header without parent H2. Add H2 or promote to H2.
[WARNING] CNT004: Duplicate content (lines 45-52, 89-96)
│ Same git workflow documented twice
└─ Remove duplicate or consolidate into single section.
[WARNING] STR005: Empty section (line 78)
│ ## Troubleshooting
│ (no content)
└─ Add content or remove empty section.
[WARNING] BPR002: No Critical Rules section
│ Missing "Critical Rules" or "Important Rules" section
└─ Add a section highlighting must-follow rules for Claude.
[INFO] FMT003: Missing code block language (line 56)
│ ```
│ npm install
│ ```
└─ Specify language for syntax highlighting: ```bash
[INFO] CNT003: Overly long section (lines 100-215)
│ "Architecture" section is 115 lines
└─ Consider breaking into subsections or condensing.
[INFO] FMT001: Inconsistent header styles
│ Line 10: "## Quick Start"
│ Line 25: "## Architecture:"
│ (colon suffix inconsistent)
└─ Standardize header format throughout document.
---
Auto-fixable: 4 issues (run with --fix)
Manual review required: 6 issues
Run `/config-lint --fix` to apply automatic fixes.
```
## Options
| Option | Description |
|--------|-------------|
| `--fix` | Automatically fix auto-fixable issues |
| `--rules=LIST` | Check only specified rule categories |
| `--ignore=LIST` | Skip specified rules (e.g., `--ignore=FMT001,FMT002`) |
| `--severity=LEVEL` | Show only issues at or above level (error/warning/info) |
| `--format=FORMAT` | Output format: `text` (default), `json`, `sarif` |
| `--config=FILE` | Use custom lint configuration |
| `--strict` | Treat warnings as errors |
## Rule Categories
Use `--rules` to focus on specific areas:
```
/config-lint --rules=security # Only security checks
/config-lint --rules=structure # Only structure checks
/config-lint --rules=security,content # Multiple categories
```
Available categories:
- `security` - SEC rules
- `structure` - STR rules
- `content` - CNT rules
- `format` - FMT rules
- `bestpractice` - BPR rules
## Custom Configuration
Create `.claude-lint.json` in project root:
```json
{
  "rules": {
    "SEC001": "warning",
    "FMT001": "off",
    "CNT003": {
      "severity": "warning",
      "maxLines": 150
    }
  },
  "ignore": [
    "FMT*"
  ],
  "requiredSections": [
    "Quick Start",
    "Critical Rules",
    "Project Overview"
  ]
}
```
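In practice the lint pass is performed by Claude following this command, so there is no literal config parser. Still, it helps to see how `rules` overrides, `ignore` globs, and defaults are meant to combine. The sketch below is illustrative only: the default severities and the resolver itself are assumptions, not part of the plugin.
```python
import fnmatch
import json

# Illustrative defaults; the real rule set and severities live in this command's tables.
DEFAULT_SEVERITY = {"SEC001": "warning", "FMT001": "info", "CNT003": "info"}

def effective_severity(rule_id: str, config: dict) -> str | None:
    """Resolve a rule's severity from .claude-lint.json; None means the rule is disabled."""
    # Globs in "ignore" (e.g. "FMT*") switch matching rules off entirely.
    for pattern in config.get("ignore", []):
        if fnmatch.fnmatch(rule_id, pattern):
            return None
    override = config.get("rules", {}).get(rule_id)
    if override == "off":
        return None
    if isinstance(override, str):      # e.g. "SEC001": "warning"
        return override
    if isinstance(override, dict):     # e.g. {"severity": "warning", "maxLines": 150}
        return override.get("severity", DEFAULT_SEVERITY.get(rule_id, "info"))
    return DEFAULT_SEVERITY.get(rule_id, "info")

with open(".claude-lint.json") as fh:
    config = json.load(fh)

print(effective_severity("SEC001", config))  # "warning" with the example config above
print(effective_severity("FMT004", config))  # None - matched by the "FMT*" ignore glob
```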
## Anti-Pattern Examples
### Hardcoded Secrets (SEC002)
```markdown
# BAD
API_KEY=sk-1234567890abcdef
# GOOD
API_KEY=$OPENAI_API_KEY # Set via environment
```
### Hardcoded Paths (SEC001)
```markdown
# BAD
Config file: /home/john/projects/myapp/config.yml
# GOOD
Config file: ./config.yml
Config file: $PROJECT_ROOT/config.yml
```
### Contradictory Rules (CNT001)
```markdown
# BAD
- Always use TypeScript
- JavaScript files are acceptable for scripts
# GOOD
- Always use TypeScript for source code
- JavaScript (.js) is acceptable only for config files and scripts
```
### Vague Instructions (CNT002)
```markdown
# BAD
- Be careful with the database
# GOOD
- Never run DELETE without WHERE clause
- Always backup before migrations
```
### Invalid Hierarchy (STR002)
```markdown
# BAD
# Main Title
### Skipped Level
# GOOD
# Main Title
## Section
### Subsection
```
## When to Use
Run `/config-lint`:
- Before committing CLAUDE.md changes
- During code review of CLAUDE.md modifications
- When setting up CI/CD checks for configuration files
- After major edits, to catch newly introduced issues
- Periodically, as a maintenance check
## Integration with CI/CD
Add to your CI pipeline:
```yaml
# GitHub Actions example
- name: Lint CLAUDE.md
  run: claude /config-lint --strict --format=sarif > lint-results.sarif
- name: Upload SARIF
  uses: github/codeql-action/upload-sarif@v2
  with:
    sarif_file: lint-results.sarif
```
## Tips
1. **Start with errors** - Fix errors before warnings
2. **Use --fix carefully** - Review auto-fixes before committing
3. **Configure per-project** - Different projects have different needs
4. **Integrate in CI** - Catch issues before they reach main
5. **Review periodically** - Run lint check monthly as maintenance
## Related Commands
| Command | Relationship |
|---------|--------------|
| `/config-analyze` | Deeper content analysis (complements lint) |
| `/config-optimize` | Applies fixes and improvements |
| `/config-diff` | Shows what changed (run lint before commit) |

View File

@@ -6,6 +6,18 @@ description: Initialize a new CLAUDE.md file for a project
This command creates a new CLAUDE.md file tailored to your project, gathering context and generating appropriate content.
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ ⚙️ CONFIG-MAINTAINER · CLAUDE.md Initialization │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the initialization.
## What This Command Does
1. **Gather Context** - Analyzes project structure and asks clarifying questions

View File

@@ -6,6 +6,18 @@ description: Optimize CLAUDE.md structure and content
This command automatically optimizes your project's CLAUDE.md file based on best practices and identified issues.
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ ⚙️ CONFIG-MAINTAINER · CLAUDE.md Optimization │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the optimization.
## What This Command Does
1. **Analyze Current File** - Identifies all optimization opportunities

View File

@@ -1,7 +1,7 @@
{
  "name": "cmdb-assistant",
- "version": "1.0.0",
+ "version": "1.2.0",
- "description": "NetBox CMDB integration for infrastructure management - query, create, update, and manage network devices, IP addresses, sites, and more",
+ "description": "NetBox CMDB integration with data quality validation - query, create, update, and manage network devices, IP addresses, sites, and more with best practices enforcement",
  "author": {
    "name": "Leo Miranda",
    "email": "leobmiranda@gmail.com"
@@ -15,7 +15,9 @@
    "infrastructure",
    "network",
    "ipam",
- "dcim"
+ "dcim",
+ "data-quality",
+ "validation"
  ],
  "commands": ["./commands/"],
  "mcpServers": ["./.mcp.json"]

View File

@@ -1,12 +1,8 @@
{
  "mcpServers": {
    "netbox": {
-     "command": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/netbox/.venv/bin/python",
-     "args": ["-m", "mcp_server.server"],
-     "cwd": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/netbox",
-     "env": {
-       "PYTHONPATH": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/netbox"
-     }
+     "command": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/netbox/run.sh",
+     "args": []
    }
  }
}

View File

@@ -2,6 +2,20 @@
A Claude Code plugin for NetBox CMDB integration - query, create, update, and manage your network infrastructure directly from Claude Code.
## What's New in v1.2.0
- **`/cmdb-topology`**: Generate Mermaid diagrams showing infrastructure topology (rack view, network view, site overview)
- **`/change-audit`**: Query and analyze NetBox audit log for change tracking and compliance
- **`/ip-conflicts`**: Detect IP address conflicts and overlapping prefixes
## What's New in v1.1.0
- **Data Quality Validation**: Hooks for SessionStart and PreToolUse that check data quality and warn about missing fields
- **Best Practices Skill**: Reference documentation for NetBox patterns (naming conventions, dependency order, role management)
- **`/cmdb-audit`**: Analyze data quality across VMs, devices, naming conventions, and roles
- **`/cmdb-register`**: Register the current machine into NetBox with all running applications (Docker containers, systemd services)
- **`/cmdb-sync`**: Synchronize existing machine state with NetBox (detect drift, update with confirmation)
## Features
- **Full CRUD Operations**: Create, read, update, and delete across all NetBox modules
@@ -9,6 +23,9 @@ A Claude Code plugin for NetBox CMDB integration - query, create, update, and ma
- **IP Management**: Allocate IPs, manage prefixes, track VLANs
- **Infrastructure Documentation**: Document servers, network devices, and connections
- **Audit Trail**: Review changes and maintain infrastructure history
- **Data Quality Validation**: Proactive checks for missing site, tenant, platform assignments
- **Machine Registration**: Auto-discover and register servers with running applications
- **Drift Detection**: Sync machine state and detect changes over time
## Installation
@@ -40,10 +57,17 @@ Add to your Claude Code plugins or marketplace configuration.
| Command | Description |
|---------|-------------|
| `/initial-setup` | Interactive setup wizard for NetBox MCP server |
| `/cmdb-search <query>` | Search for devices, IPs, sites, or any CMDB object |
| `/cmdb-device <action>` | Manage network devices (list, create, update, delete) |
| `/cmdb-ip <action>` | Manage IP addresses and prefixes |
| `/cmdb-site <action>` | Manage sites and locations |
| `/cmdb-audit [scope]` | Data quality analysis (all, vms, devices, naming, roles) |
| `/cmdb-register` | Register current machine into NetBox with running apps |
| `/cmdb-sync` | Sync machine state with NetBox (detect drift, update) |
| `/cmdb-topology <view>` | Generate Mermaid diagrams (rack, network, site, full) |
| `/change-audit [filters]` | Query NetBox audit log for change tracking |
| `/ip-conflicts [scope]` | Detect IP conflicts and overlapping prefixes |
## Agent
@@ -103,6 +127,15 @@ This plugin provides access to the full NetBox API:
- **Wireless**: WLANs, Wireless Links
- **Extras**: Tags, Custom Fields, Journal Entries, Audit Log
## Hooks
| Event | Purpose |
|-------|---------|
| `SessionStart` | Test NetBox connectivity, report data quality issues |
| `PreToolUse` | Validate VM/device parameters before create/update |
Hooks are **non-blocking** - they emit warnings but never prevent operations.
## Architecture
```
@@ -115,13 +148,26 @@ cmdb-assistant/
│ ├── cmdb-search.md # Search command
│ ├── cmdb-device.md # Device management
│ ├── cmdb-ip.md # IP management
│ ├── cmdb-site.md # Site management
│ ├── cmdb-audit.md # Data quality audit
│ ├── cmdb-register.md # Machine registration
│ ├── cmdb-sync.md # Machine sync
│ ├── cmdb-topology.md # Topology visualization (NEW)
│ ├── change-audit.md # Change audit log (NEW)
│ └── ip-conflicts.md # IP conflict detection (NEW)
├── hooks/
│ ├── hooks.json # Hook configuration
│ ├── startup-check.sh # SessionStart validation
│ └── validate-input.sh # PreToolUse validation
├── skills/
│ └── netbox-patterns/
│ └── SKILL.md # NetBox best practices reference
├── agents/
│ └── cmdb-assistant.md # Main assistant agent
└── README.md
```
- The plugin uses the shared NetBox MCP server at `../mcp-servers/netbox/`.
+ The plugin uses the shared NetBox MCP server at `mcp-servers/netbox/`.
## Configuration

View File

@@ -2,6 +2,16 @@
You are an infrastructure management assistant specialized in NetBox CMDB operations. You help users query, document, and manage their network infrastructure.
## Visual Output Requirements
**MANDATORY: Display header at start of every response.**
```
┌──────────────────────────────────────────────────────────────────┐
│ 🖥️ CMDB-ASSISTANT · Infrastructure Management │
└──────────────────────────────────────────────────────────────────┘
```
## Capabilities
You have full access to NetBox via MCP tools covering:
@@ -76,3 +86,132 @@ When presenting data:
- Suggest corrective actions
- For permission errors, note what access is needed
- For validation errors, explain required fields/formats
## Data Quality Validation
**IMPORTANT:** Load the `netbox-patterns` skill for best practice reference.
Before ANY create or update operation, validate against NetBox best practices:
### VM Operations
**Required checks before `virt_create_vm` or `virt_update_vm`:**
1. **Cluster/Site Assignment** - VMs must have either cluster or site
2. **Tenant Assignment** - Recommend if not provided
3. **Platform Assignment** - Recommend for OS tracking
4. **Naming Convention** - Check against `{env}-{app}-{number}` pattern
5. **Role Assignment** - Recommend appropriate role
**If user provides no site/tenant, ASK:**
> "This VM has no site or tenant assigned. NetBox best practices recommend:
> - **Site**: For location-based queries and power budgeting
> - **Tenant**: For resource isolation and ownership tracking
>
> Would you like me to:
> 1. Assign to an existing site/tenant (list available)
> 2. Create new site/tenant first
> 3. Proceed without (not recommended for production use)"
### Device Operations
**Required checks before `dcim_create_device` or `dcim_update_device`:**
1. **Site is REQUIRED** - Fail without it
2. **Platform Assignment** - Recommend for OS tracking
3. **Naming Convention** - Check against `{role}-{location}-{number}` pattern
4. **Role Assignment** - Ensure appropriate role selected
5. **After Creation** - Offer to set primary IP
### Cluster Operations
**Required checks before `virt_create_cluster`:**
1. **Site Scope** - Recommend assigning to site
2. **Cluster Type** - Ensure appropriate type selected
3. **Device Association** - Recommend linking to host device
### Role Management
**Before creating a new device role:**
1. List existing roles with `dcim_list_device_roles`
2. Check if a more general role already exists
3. Recommend role consolidation if >10 specific roles exist
**Example guidance:**
> "You're creating role 'nginx-web-server'. An existing 'web-server' role exists.
> Consider using 'web-server' and tracking nginx via the platform field instead.
> This reduces role fragmentation and improves maintainability."
## Dependency Order Enforcement
When creating multiple objects, follow this order:
```
1. Regions → Sites → Locations → Racks
2. Tenant Groups → Tenants
3. Manufacturers → Device Types
4. Device Roles, Platforms
5. Devices (with site, role, type)
6. Clusters (with type, optional site)
7. VMs (with cluster)
8. Interfaces → IP Addresses → Primary IP assignment
```
**CRITICAL Rules:**
- NEVER create a VM before its cluster exists
- NEVER create a device before its site exists
- NEVER create an interface before its device exists
- NEVER create an IP before its interface exists (if assigning)
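A minimal sketch of how this ordering could be checked mechanically before issuing any MCP calls. The `DEPENDS_ON` map and the plan format are illustrative assumptions; the check only verifies ordering within a planned batch, not whether prerequisites already exist in NetBox.
```python
# Illustrative dependency map for the creation order listed above.
DEPENDS_ON = {
    "site": [], "tenant": [], "device_role": [], "platform": [], "cluster_type": [],
    "device": ["site", "device_role"],
    "cluster": ["cluster_type"],
    "vm": ["cluster"],
    "interface": ["device"],
    "ip_address": ["interface"],
}

def creation_order_errors(plan: list[str]) -> list[str]:
    """Return violations where an object type is planned before its prerequisites."""
    errors, seen = [], set()
    for obj_type in plan:
        for dep in DEPENDS_ON.get(obj_type, []):
            if dep not in seen:
                errors.append(f"'{obj_type}' planned before required '{dep}'")
        seen.add(obj_type)
    return errors

print(creation_order_errors(["vm", "cluster_type", "cluster"]))
# ["'vm' planned before required 'cluster'"]
print(creation_order_errors(["site", "device_role", "device", "interface", "ip_address"]))
# []
```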
## Naming Convention Enforcement
When user provides a name, check against patterns:
| Object Type | Pattern | Example |
|-------------|---------|---------|
| Device | `{role}-{site}-{number}` | `web-dc1-01` |
| VM | `{env}-{app}-{number}` or `{prefix}_{service}` | `prod-api-01` |
| Cluster | `{site}-{type}` | `dc1-vmware`, `home-docker` |
| Prefix | Include purpose in description | "Production /24 for web tier" |
**If name doesn't match patterns, warn:**
> "The name 'HotServ' doesn't follow naming conventions.
> Suggested: `prod-hotserv-01` or `hotserv-cloud-01`.
> Consistent naming improves searchability and automation compatibility.
> Proceed with original name? [Y/n]"
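A rough sketch of the pattern check itself, assuming illustrative regexes for the conventions in the table above; the exact patterns and the suggestion logic are judgment calls to adapt per environment.
```python
import re

# Illustrative regexes for the naming patterns above; adjust to local conventions.
PATTERNS = {
    "device":  r"^[a-z]+-[a-z0-9]+-\d{2}$",                     # web-dc1-01
    "vm":      r"^[a-z]+-[a-z0-9]+-\d{2}$|^[a-z]+_[a-z0-9]+$",  # prod-api-01 or media_jellyfin
    "cluster": r"^[a-z0-9]+-[a-z]+$",                           # dc1-vmware, home-docker
}

def name_warning(object_type: str, name: str) -> str | None:
    """Return a warning if the name breaks convention, or None if it looks fine."""
    pattern = PATTERNS.get(object_type)
    if pattern is None or re.fullmatch(pattern, name):
        return None
    suggestion = re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")
    return (f"The name '{name}' doesn't follow naming conventions. "
            f"Suggested: '{suggestion}-01'. Proceed with original name? [Y/n]")

print(name_warning("device", "web-dc1-01"))  # None
print(name_warning("vm", "HotServ"))         # warning suggesting 'hotserv-01'
```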
## Duplicate Prevention
Before creating objects, always check for existing duplicates:
```
# Before creating device
dcim_list_devices name=<proposed-name>
# Before creating VM
virt_list_vms name=<proposed-name>
# Before creating prefix
ipam_list_prefixes prefix=<proposed-prefix>
```
If duplicate found, inform user and suggest update instead of create.
## Available Commands
Users can invoke these commands for structured workflows:
| Command | Purpose |
|---------|---------|
| `/cmdb-search <query>` | Search across all CMDB objects |
| `/cmdb-device <action>` | Device CRUD operations |
| `/cmdb-ip <action>` | IP address and prefix management |
| `/cmdb-site <action>` | Site and location management |
| `/cmdb-audit [scope]` | Data quality analysis |
| `/cmdb-register` | Register current machine |
| `/cmdb-sync` | Sync machine state with NetBox |

View File

@@ -0,0 +1,175 @@
---
description: Audit NetBox changes with filtering by date, user, or object type
---
# CMDB Change Audit
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 🖥️ CMDB-ASSISTANT · Change Audit │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the audit.
Query and analyze the NetBox audit log for change tracking and compliance.
## Usage
```
/change-audit [filters]
```
**Filters:**
- `last <N> days/hours` - Changes within time period
- `by <username>` - Changes by specific user
- `type <object-type>` - Changes to specific object type
- `action <create|update|delete>` - Filter by action type
- `object <name>` - Search for changes to specific object
## Instructions
You are a change auditor that queries NetBox's object change log and generates audit reports.
### MCP Tools
Use these tools to query the audit log:
- `extras_list_object_changes` - List changes with filters:
- `user_id` - Filter by user ID
- `changed_object_type` - Filter by object type (e.g., "dcim.device", "ipam.ipaddress")
- `action` - Filter by action: "create", "update", "delete"
- `extras_get_object_change` - Get detailed change record by ID
### Common Object Types
| Category | Object Types |
|----------|--------------|
| DCIM | `dcim.device`, `dcim.interface`, `dcim.site`, `dcim.rack`, `dcim.cable` |
| IPAM | `ipam.ipaddress`, `ipam.prefix`, `ipam.vlan`, `ipam.vrf` |
| Virtualization | `virtualization.virtualmachine`, `virtualization.cluster` |
| Tenancy | `tenancy.tenant`, `tenancy.contact` |
### Workflow
1. **Parse user request** to determine filters
2. **Query object changes** using `extras_list_object_changes`
3. **Enrich data** by fetching detailed records if needed
4. **Analyze patterns** in the changes
5. **Generate report** in structured format
### Report Format
```markdown
## NetBox Change Audit Report
**Generated:** [timestamp]
**Period:** [date range or "All time"]
**Filters:** [applied filters]
### Summary
| Metric | Count |
|--------|-------|
| Total Changes | X |
| Creates | Y |
| Updates | Z |
| Deletes | W |
| Unique Users | N |
| Object Types | M |
### Changes by Action
#### Created Objects (Y)
| Time | User | Object Type | Object | Details |
|------|------|-------------|--------|---------|
| 2024-01-15 14:30 | admin | dcim.device | server-01 | Created device |
| ... | ... | ... | ... | ... |
#### Updated Objects (Z)
| Time | User | Object Type | Object | Changed Fields |
|------|------|-------------|--------|----------------|
| 2024-01-15 15:00 | john | ipam.ipaddress | 10.0.1.50/24 | status, description |
| ... | ... | ... | ... | ... |
#### Deleted Objects (W)
| Time | User | Object Type | Object | Details |
|------|------|-------------|--------|---------|
| 2024-01-14 09:00 | admin | dcim.interface | eth2 | Removed from server-01 |
| ... | ... | ... | ... | ... |
### Changes by User
| User | Creates | Updates | Deletes | Total |
|------|---------|---------|---------|-------|
| admin | 5 | 10 | 2 | 17 |
| john | 3 | 8 | 0 | 11 |
### Changes by Object Type
| Object Type | Creates | Updates | Deletes | Total |
|-------------|---------|---------|---------|-------|
| dcim.device | 2 | 5 | 0 | 7 |
| ipam.ipaddress | 4 | 3 | 1 | 8 |
### Timeline
```
2024-01-15: ████████ 8 changes
2024-01-14: ████ 4 changes
2024-01-13: ██ 2 changes
```
### Notable Patterns
- **Bulk operations:** [Identify if many changes happened in short time]
- **Unusual activity:** [Flag unexpected deletions or after-hours changes]
- **Missing audit trail:** [Note if expected changes are not logged]
### Recommendations
1. [Any security or process recommendations based on findings]
```
### Time Period Handling
When user specifies "last N days":
- The NetBox API may not have direct date filtering in `extras_list_object_changes`
- Fetch recent changes and filter client-side by the `time` field
- Note any limitations in the report
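A small sketch of that client-side filtering, assuming each change record carries an ISO 8601 `time` string as NetBox returns it; the record shape beyond that field is illustrative.
```python
from datetime import datetime, timedelta, timezone

def within_last_days(changes: list[dict], days: int) -> list[dict]:
    """Keep only change records whose 'time' field falls inside the window."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    recent = []
    for change in changes:
        # NetBox timestamps look like '2024-01-15T14:30:00.000000Z'.
        when = datetime.fromisoformat(change["time"].replace("Z", "+00:00"))
        if when >= cutoff:
            recent.append(change)
    return recent

# Example: trim whatever extras_list_object_changes returned down to the last 7 days.
changes = [{"time": "2024-01-15T14:30:00Z", "action": "create", "user": "admin"}]
print(len(within_last_days(changes, days=7)))
```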
### Enriching Change Details
For detailed audit, use `extras_get_object_change` with the change ID to see:
- `prechange_data` - Object state before change
- `postchange_data` - Object state after change
- `request_id` - Links related changes in same request
### Security Audit Mode
If user asks for "security audit" or "compliance report":
1. Focus on deletions and permission-sensitive changes
2. Highlight changes to critical objects (firewalls, VRFs, prefixes)
3. Flag changes outside business hours
4. Identify users with high change counts
## Examples
- `/change-audit` - Show recent changes (last 24 hours)
- `/change-audit last 7 days` - Changes in past week
- `/change-audit by admin` - All changes by admin user
- `/change-audit type dcim.device` - Device changes only
- `/change-audit action delete` - All deletions
- `/change-audit object server-01` - Changes to server-01
## User Request
$ARGUMENTS

View File

@@ -0,0 +1,207 @@
---
description: Audit NetBox data quality and identify consistency issues
---
# CMDB Data Quality Audit
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 🖥️ CMDB-ASSISTANT · Data Quality Audit │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the audit.
Analyze NetBox data for quality issues and best practice violations.
## Usage
```
/cmdb-audit [scope]
```
**Scopes:**
- `all` (default) - Full audit across all categories
- `vms` - Virtual machines only
- `devices` - Physical devices only
- `naming` - Naming convention analysis
- `roles` - Role fragmentation analysis
## Instructions
You are a data quality auditor for NetBox. Your job is to identify consistency issues and best practice violations.
**IMPORTANT:** Load the `netbox-patterns` skill for best practice reference.
### Phase 1: Data Collection
Run these MCP tool calls to gather data for analysis:
```
1. virt_list_vms (no filters - get all)
2. dcim_list_devices (no filters - get all)
3. virt_list_clusters (no filters)
4. dcim_list_sites
5. tenancy_list_tenants
6. dcim_list_device_roles
7. dcim_list_platforms
```
Store the results for analysis.
### Phase 2: Quality Checks
Analyze collected data for these issues by severity:
#### CRITICAL Issues (must fix immediately)
| Check | Detection |
|-------|-----------|
| VMs without cluster | `cluster` field is null AND `site` field is null |
| Devices without site | `site` field is null |
| Active devices without primary IP | `status=active` AND `primary_ip4` is null AND `primary_ip6` is null |
#### HIGH Issues (should fix soon)
| Check | Detection |
|-------|-----------|
| VMs without site | VM has no site (neither direct nor via cluster.site) |
| VMs without tenant | `tenant` field is null |
| Devices without platform | `platform` field is null |
| Clusters not scoped to site | `site` field is null on cluster |
| VMs without role | `role` field is null |
#### MEDIUM Issues (plan to address)
| Check | Detection |
|-------|-----------|
| Inconsistent naming | Names don't match patterns: devices=`{role}-{site}-{num}`, VMs=`{env}-{app}-{num}` |
| Role fragmentation | More than 10 device roles with <3 assignments each |
| Missing tags on production | Active resources without any tags |
| Mixed naming separators | Some names use `_`, others use `-` |
#### LOW Issues (informational)
| Check | Detection |
|-------|-----------|
| Docker containers as VMs | Cluster type is "Docker Compose" - document this modeling choice |
| VMs without description | `description` field is empty |
| Sites without physical address | `physical_address` is empty |
| Devices without serial | `serial` field is empty |
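To make the detection rules concrete, here is a minimal sketch of the CRITICAL checks applied to already-fetched records. The flat record shape (`cluster`, `site`, `status`, `primary_ip4`/`primary_ip6` as simple fields) is an assumption; adapt the field access to the actual tool output.
```python
def find_critical_issues(vms: list[dict], devices: list[dict]) -> list[str]:
    """Flag the CRITICAL checks from the table above on simplified records."""
    issues = []
    for vm in vms:
        if not vm.get("cluster") and not vm.get("site"):
            issues.append(f"CRITICAL: VM '{vm['name']}' has no cluster or site assignment")
    for dev in devices:
        if not dev.get("site"):
            issues.append(f"CRITICAL: Device '{dev['name']}' has no site assignment")
        if dev.get("status") == "active" and not dev.get("primary_ip4") and not dev.get("primary_ip6"):
            issues.append(f"CRITICAL: Active device '{dev['name']}' has no primary IP")
    return issues

vms = [{"name": "HotServ", "cluster": None, "site": None}]
devices = [{"name": "server-01", "site": None, "status": "active",
            "primary_ip4": None, "primary_ip6": None}]
for issue in find_critical_issues(vms, devices):
    print(issue)
```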
### Phase 3: Naming Convention Analysis
For naming scope, analyze patterns:
1. **Extract naming patterns** from existing objects
2. **Identify dominant patterns** (most common conventions)
3. **Flag outliers** that don't match dominant patterns
4. **Suggest standardization** based on best practices
**Expected Patterns:**
- Devices: `{role}-{location}-{number}` (e.g., `web-dc1-01`)
- VMs: `{prefix}_{service}` or `{env}-{app}-{number}` (e.g., `prod-api-01`)
- Clusters: `{site}-{type}` (e.g., `home-docker`)
### Phase 4: Role Analysis
For roles scope, analyze fragmentation:
1. **List all device roles** with assignment counts
2. **Identify single-use roles** (only 1 device/VM)
3. **Identify similar roles** that could be consolidated
4. **Suggest consolidation** based on patterns
**Red Flags:**
- More than 15 highly specific roles
- Roles with technology in name (use platform instead)
- Roles that duplicate functionality
### Phase 5: Report Generation
Present findings in this structure:
```markdown
## CMDB Data Quality Audit Report
**Generated:** [timestamp]
**Scope:** [scope parameter]
### Summary
| Metric | Count |
|--------|-------|
| Total VMs | X |
| Total Devices | Y |
| Total Clusters | Z |
| **Total Issues** | **N** |
| Severity | Count |
|----------|-------|
| Critical | A |
| High | B |
| Medium | C |
| Low | D |
### Critical Issues
[List each with specific object names and IDs]
**Example:**
- VM `HotServ` (ID: 1) - No cluster or site assignment
- Device `server-01` (ID: 5) - No site assignment
### High Issues
[List each with specific object names]
### Medium Issues
[Grouped by category with counts]
### Recommendations
1. **[Most impactful fix]** - affects N objects
2. **[Second priority]** - affects M objects
...
### Quick Fixes
Commands to fix common issues:
```
# Assign site to VM
virt_update_vm id=X site=Y
# Assign platform to device
dcim_update_device id=X platform=Y
```
### Next Steps
- Run `/cmdb-register` to properly register new machines
- Use `/cmdb-sync` to update existing registrations
- Consider bulk updates via NetBox web UI for >10 items
```
## Scope-Specific Instructions
### For `vms` scope:
Focus only on Virtual Machine checks. Skip device and role analysis.
### For `devices` scope:
Focus only on Device checks. Skip VM and cluster analysis.
### For `naming` scope:
Focus on naming convention analysis across all objects. Generate detailed pattern report.
### For `roles` scope:
Focus on role fragmentation analysis. Generate consolidation recommendations.
## User Request
$ARGUMENTS

View File

@@ -1,5 +1,17 @@
# CMDB Device Management
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 🖥️ CMDB-ASSISTANT · Device Management │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the operation.
Manage network devices in NetBox - create, view, update, or delete.
## Usage

View File

@@ -1,5 +1,17 @@
# CMDB IP Management
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 🖥️ CMDB-ASSISTANT · IP Management │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the operation.
Manage IP addresses and prefixes in NetBox.
## Usage

View File

@@ -0,0 +1,334 @@
---
description: Register the current machine into NetBox with all running applications
---
# CMDB Machine Registration
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 🖥️ CMDB-ASSISTANT · Machine Registration │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the registration.
Register the current machine into NetBox, including hardware info, network interfaces, and running applications (Docker containers, services).
## Usage
```
/cmdb-register [--site <site-name>] [--tenant <tenant-name>] [--role <role-name>]
```
**Options:**
- `--site <name>`: Site to assign (will prompt if not provided)
- `--tenant <name>`: Tenant for resource isolation (optional)
- `--role <name>`: Device role (default: auto-detect based on services)
## Instructions
You are registering the current machine into NetBox. This is a multi-phase process that discovers local system information and creates corresponding NetBox objects.
**IMPORTANT:** Load the `netbox-patterns` skill for best practice reference.
### Phase 1: System Discovery (via Bash)
Gather system information using these commands:
#### 1.1 Basic Device Info
```bash
# Hostname
hostname
# OS/Platform info
cat /etc/os-release 2>/dev/null || uname -a
# Hardware model (varies by system)
# Raspberry Pi:
cat /proc/device-tree/model 2>/dev/null || echo "Unknown"
# x86 systems:
cat /sys/class/dmi/id/product_name 2>/dev/null || echo "Unknown"
# Serial number
# Raspberry Pi:
cat /proc/device-tree/serial-number 2>/dev/null || cat /proc/cpuinfo | grep Serial | cut -d: -f2 | tr -d ' ' 2>/dev/null
# x86 systems:
cat /sys/class/dmi/id/product_serial 2>/dev/null || echo "Unknown"
# CPU info
nproc
# Memory (MB)
free -m | awk '/Mem:/ {print $2}'
# Disk (GB, root filesystem)
df -BG / | awk 'NR==2 {print $2}' | tr -d 'G'
```
#### 1.2 Network Interfaces
```bash
# Get interfaces with IPs (JSON format)
ip -j addr show 2>/dev/null || ip addr show
# Get default gateway interface
ip route | grep default | awk '{print $5}' | head -1
# Get MAC addresses
ip -j link show 2>/dev/null || ip link show
```
#### 1.3 Running Applications
```bash
# Docker containers (if docker available)
docker ps --format '{"name":"{{.Names}}","image":"{{.Image}}","status":"{{.Status}}","ports":"{{.Ports}}"}' 2>/dev/null || echo "Docker not available"
# Docker Compose projects (check common locations)
find ~/apps /home/*/apps -name "docker-compose.yml" -o -name "docker-compose.yaml" 2>/dev/null | head -20
# Systemd services (running)
systemctl list-units --type=service --state=running --no-pager --plain 2>/dev/null | grep -v "^UNIT" | head -30
```
### Phase 2: Pre-Registration Checks (via MCP)
Before creating objects, verify prerequisites:
#### 2.1 Check if Device Already Exists
```
dcim_list_devices name=<hostname>
```
**If device exists:**
- Inform user and suggest `/cmdb-sync` instead
- Ask if they want to proceed with re-registration (will update existing)
#### 2.2 Verify/Create Site
If `--site` provided:
```
dcim_list_sites name=<site-name>
```
If site doesn't exist, ask user if they want to create it.
If no site provided, list available sites and ask user to choose:
```
dcim_list_sites
```
#### 2.3 Verify/Create Platform
Based on OS detected, check if platform exists:
```
dcim_list_platforms name=<platform-name>
```
**Platform naming:**
- `Raspberry Pi OS (Bookworm)` for Raspberry Pi
- `Ubuntu 24.04 LTS` for Ubuntu
- `Debian 12` for Debian
- Use format: `{OS Name} {Version}`
If platform doesn't exist, create it:
```
dcim_create_platform name=<platform-name> slug=<slug>
```
#### 2.4 Verify/Create Device Role
Based on detected services:
- If Docker containers found → `Docker Host`
- If only basic services → `Server`
- If specific role specified → Use that
```
dcim_list_device_roles name=<role-name>
```
### Phase 3: Device Registration (via MCP)
#### 3.1 Get/Create Manufacturer and Device Type
For Raspberry Pi:
```
dcim_list_manufacturers name="Raspberry Pi Foundation"
dcim_list_device_types manufacturer_id=X model="Raspberry Pi 4 Model B"
```
Create if not exists.
For generic x86:
```
dcim_list_manufacturers name=<detected-manufacturer>
```
#### 3.2 Create Device
```
dcim_create_device
name=<hostname>
device_type=<device_type_id>
role=<role_id>
site=<site_id>
platform=<platform_id>
tenant=<tenant_id> # if provided
serial=<serial>
description="Registered via cmdb-assistant"
```
#### 3.3 Create Interfaces
For each network interface discovered:
```
dcim_create_interface
device=<device_id>
name=<interface_name> # eth0, wlan0, tailscale0, etc.
type=<type> # 1000base-t, virtual, other
mac_address=<mac>
enabled=true
```
**Interface type mapping:**
- `eth*`, `enp*` → `1000base-t`
- `wlan*` → `ieee802.11ax` (or appropriate wifi type)
- `tailscale*`, `docker*`, `br-*` → `virtual`
- `lo` → skip (loopback)
#### 3.4 Create IP Addresses
For each IP on each interface:
```
ipam_create_ip_address
address=<ip/prefix> # e.g., "192.168.1.100/24"
assigned_object_type="dcim.interface"
assigned_object_id=<interface_id>
status="active"
description="Discovered via cmdb-register"
```
#### 3.5 Set Primary IP
Identify primary IP (interface with default route):
```
dcim_update_device
id=<device_id>
primary_ip4=<primary_ip_id>
```
### Phase 4: Container Registration (via MCP)
If Docker containers were discovered:
#### 4.1 Create/Get Cluster Type
```
virt_list_cluster_types name="Docker Compose"
```
Create if not exists:
```
virt_create_cluster_type name="Docker Compose" slug="docker-compose"
```
#### 4.2 Create Cluster
For each Docker Compose project directory found:
```
virt_create_cluster
name=<project-name> # e.g., "apps-hotport"
type=<cluster_type_id>
site=<site_id>
description="Docker Compose stack on <hostname>"
```
#### 4.3 Create VMs for Containers
For each running container:
```
virt_create_vm
name=<container_name>
cluster=<cluster_id>
site=<site_id>
role=<role_id> # Map container function to role
status="active"
vcpus=<cpu_shares> # Default 1.0 if unknown
memory=<memory_mb> # Default 256 if unknown
disk=<disk_gb> # Default 5 if unknown
description=<container purpose>
comments=<image, ports, volumes info>
```
**Container role mapping:**
- `*caddy*`, `*nginx*`, `*traefik*` → "Reverse Proxy"
- `*db*`, `*postgres*`, `*mysql*`, `*redis*` → "Database"
- `*webui*`, `*frontend*` → "Web Application"
- Others → Infer from image name or use generic "Container"
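A small sketch of that mapping, with the keyword lists taken from the bullets above; the fallback role name and matching by name substring are illustrative assumptions.
```python
# Illustrative container-name hints; the role labels must already exist in NetBox
# (or be created first) before assigning them to VMs.
ROLE_HINTS = [
    (("caddy", "nginx", "traefik"), "Reverse Proxy"),
    (("db", "postgres", "mysql", "redis"), "Database"),
    (("webui", "frontend"), "Web Application"),
]

def container_role(container_name: str) -> str:
    """Pick a role for a container based on substrings in its name."""
    name = container_name.lower()
    for keywords, role in ROLE_HINTS:
        if any(keyword in name for keyword in keywords):
            return role
    return "Container"  # generic fallback when nothing matches

print(container_role("media_caddy"))     # Reverse Proxy
print(container_role("apps_postgres"))   # Database
print(container_role("media_jellyfin"))  # Container (would need an image-based hint)
```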
### Phase 5: Documentation
#### 5.1 Add Journal Entry
```
extras_create_journal_entry
assigned_object_type="dcim.device"
assigned_object_id=<device_id>
comments="Device registered via /cmdb-register command\n\nDiscovered:\n- X network interfaces\n- Y IP addresses\n- Z Docker containers"
```
### Phase 6: Summary Report
Present registration summary:
```markdown
## Machine Registration Complete
### Device Created
- **Name:** <hostname>
- **Site:** <site>
- **Platform:** <platform>
- **Role:** <role>
- **ID:** <device_id>
- **URL:** https://netbox.example.com/dcim/devices/<id>/
### Network Interfaces
| Interface | Type | MAC | IP Address |
|-----------|------|-----|------------|
| eth0 | 1000base-t | aa:bb:cc:dd:ee:ff | 192.168.1.100/24 |
| tailscale0 | virtual | - | 100.x.x.x/32 |
### Primary IP: 192.168.1.100
### Docker Containers Registered (if applicable)
**Cluster:** <cluster_name> (ID: <cluster_id>)
| Container | Role | vCPUs | Memory | Status |
|-----------|------|-------|--------|--------|
| media_jellyfin | Media Server | 2.0 | 2048MB | Active |
| media_sonarr | Media Management | 1.0 | 512MB | Active |
### Next Steps
- Run `/cmdb-sync` periodically to keep data current
- Run `/cmdb-audit` to check data quality
- Add tags for classification (env:*, team:*, etc.)
```
## Error Handling
- **Device already exists:** Suggest `/cmdb-sync` or ask to proceed
- **Site not found:** List available sites, offer to create new
- **Docker not available:** Skip container registration, note in summary
- **Permission denied:** Note which operations failed, suggest fixes
## User Request
$ARGUMENTS

View File

@@ -1,5 +1,17 @@
# CMDB Search
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 🖥️ CMDB-ASSISTANT · Search │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the search.
Search NetBox for devices, IPs, sites, or any CMDB object.
## Usage

View File

@@ -1,5 +1,17 @@
# CMDB Site Management
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 🖥️ CMDB-ASSISTANT · Site Management │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the operation.
Manage sites and locations in NetBox.
## Usage

View File

@@ -0,0 +1,348 @@
---
description: Synchronize current machine state with existing NetBox record
---
# CMDB Machine Sync
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 🖥️ CMDB-ASSISTANT · Machine Sync │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the synchronization.
Update an existing NetBox device record with the current machine state. Compares local system information with NetBox and applies changes.
## Usage
```
/cmdb-sync [--full] [--dry-run]
```
**Options:**
- `--full`: Force refresh all fields, even unchanged ones
- `--dry-run`: Show what would change without applying updates
## Instructions
You are synchronizing the current machine's state with its NetBox record. This involves comparing current system state with stored data and updating differences.
**IMPORTANT:** Load the `netbox-patterns` skill for best practice reference.
### Phase 1: Device Lookup (via MCP)
First, find the existing device record:
```bash
# Get current hostname
hostname
```
```
dcim_list_devices name=<hostname>
```
**If device not found:**
- Inform user: "Device '<hostname>' not found in NetBox"
- Suggest: "Run `/cmdb-register` to register this machine first"
- Exit sync
**If device found:**
- Store device ID and all current field values
- Fetch interfaces: `dcim_list_interfaces device_id=<device_id>`
- Fetch IPs: `ipam_list_ip_addresses device_id=<device_id>`
Also check for associated clusters/VMs:
```
virt_list_clusters # Look for cluster associated with this device
virt_list_vms cluster=<cluster_id> # If cluster found
```
### Phase 2: Current State Discovery (via Bash)
Gather current system information (same as `/cmdb-register`):
```bash
# Device info
hostname
cat /etc/os-release 2>/dev/null || uname -a
nproc
free -m | awk '/Mem:/ {print $2}'
df -BG / | awk 'NR==2 {print $2}' | tr -d 'G'
# Network interfaces with IPs
ip -j addr show 2>/dev/null || ip addr show
# Docker containers
docker ps --format '{"name":"{{.Names}}","image":"{{.Image}}","status":"{{.Status}}"}' 2>/dev/null || echo "[]"
```
### Phase 3: Comparison
Compare discovered state with NetBox record:
#### 3.1 Device Attributes
| Field | Compare |
|-------|---------|
| Platform | OS version changed? |
| Status | Still active? |
| Serial | Match? |
| Description | Keep existing |
#### 3.2 Network Interfaces
| Change Type | Detection |
|-------------|-----------|
| New interface | Interface exists locally but not in NetBox |
| Removed interface | Interface in NetBox but not locally |
| Changed MAC | MAC address different |
| Interface type | Type mismatch |
#### 3.3 IP Addresses
| Change Type | Detection |
|-------------|-----------|
| New IP | IP exists locally but not in NetBox |
| Removed IP | IP in NetBox but not locally (on this device) |
| Primary IP changed | Default route interface changed |
#### 3.4 Docker Containers
| Change Type | Detection |
|-------------|-----------|
| New container | Container running locally but no VM in cluster |
| Stopped container | VM exists but container not running |
| Resource change | vCPUs/memory different (if trackable) |
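A minimal sketch of the interface comparison, assuming earlier phases have already reduced both sides to `{name: {...}}` maps; the field names are illustrative.
```python
def diff_interfaces(local: dict[str, dict], netbox: dict[str, dict]) -> dict:
    """Classify interfaces for the diff report by comparing name-keyed maps.

    'local' would come from parsing `ip -j addr show`; 'netbox' from
    dcim_list_interfaces, both flattened to {name: {"mac": ...}} beforehand.
    """
    local_names, netbox_names = set(local), set(netbox)
    changed_mac = {
        name: (netbox[name].get("mac"), local[name].get("mac"))
        for name in local_names & netbox_names
        if local[name].get("mac") != netbox[name].get("mac")
    }
    return {
        "new": sorted(local_names - netbox_names),      # create in NetBox
        "removed": sorted(netbox_names - local_names),  # mark offline in NetBox
        "changed_mac": changed_mac,                     # update mac_address
    }

local = {"eth0": {"mac": "aa:bb:cc:11:11:11"}, "tailscale0": {"mac": None}}
netbox = {"eth0": {"mac": "aa:bb:cc:00:00:00"}, "eth1": {"mac": "aa:bb:cc:22:22:22"}}
print(diff_interfaces(local, netbox))
# {'new': ['tailscale0'], 'removed': ['eth1'],
#  'changed_mac': {'eth0': ('aa:bb:cc:00:00:00', 'aa:bb:cc:11:11:11')}}
```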
### Phase 4: Diff Report
Present changes to user:
```markdown
## Sync Diff Report
**Device:** <hostname> (ID: <device_id>)
**NetBox URL:** https://netbox.example.com/dcim/devices/<id>/
### Device Attributes
| Field | NetBox Value | Current Value | Action |
|-------|--------------|---------------|--------|
| Platform | Ubuntu 22.04 | Ubuntu 24.04 | UPDATE |
| Status | active | active | - |
### Network Interfaces
#### New Interfaces (will create)
| Interface | Type | MAC | IPs |
|-----------|------|-----|-----|
| tailscale0 | virtual | - | 100.x.x.x/32 |
#### Removed Interfaces (will mark offline)
| Interface | Type | Reason |
|-----------|------|--------|
| eth1 | 1000base-t | Not found locally |
#### Changed Interfaces
| Interface | Field | Old | New |
|-----------|-------|-----|-----|
| eth0 | mac_address | aa:bb:cc:00:00:00 | aa:bb:cc:11:11:11 |
### IP Addresses
#### New IPs (will create)
- 192.168.1.150/24 on eth0
#### Removed IPs (will unassign)
- 192.168.1.100/24 from eth0
### Docker Containers
#### New Containers (will create VMs)
| Container | Image | Role |
|-----------|-------|------|
| media_lidarr | linuxserver/lidarr | Media Management |
#### Stopped Containers (will mark offline)
| Container | Last Status |
|-----------|-------------|
| media_bazarr | Exited |
### Summary
- **Updates:** X
- **Creates:** Y
- **Removals/Offline:** Z
```
### Phase 5: User Confirmation
If not `--dry-run`:
```
The following changes will be applied:
- Update device platform to "Ubuntu 24.04"
- Create interface "tailscale0"
- Create IP "100.x.x.x/32" on tailscale0
- Create VM "media_lidarr" in cluster
- Mark VM "media_bazarr" as offline
Proceed with sync? [Y/n]
```
**Use AskUserQuestion** to get confirmation.
### Phase 6: Apply Updates (via MCP)
Only if user confirms (or `--full` specified):
#### 6.1 Device Updates
```
dcim_update_device
id=<device_id>
platform=<new_platform_id>
# ... other changed fields
```
#### 6.2 Interface Updates
**For new interfaces:**
```
dcim_create_interface
device=<device_id>
name=<interface_name>
type=<type>
mac_address=<mac>
enabled=true
```
**For removed interfaces:**
```
dcim_update_interface
id=<interface_id>
enabled=false
description="Marked offline by cmdb-sync - interface no longer present"
```
**For changed interfaces:**
```
dcim_update_interface
id=<interface_id>
mac_address=<new_mac>
```
#### 6.3 IP Address Updates
**For new IPs:**
```
ipam_create_ip_address
address=<ip/prefix>
assigned_object_type="dcim.interface"
assigned_object_id=<interface_id>
status="active"
```
**For removed IPs:**
```
ipam_update_ip_address
id=<ip_id>
assigned_object_type=null
assigned_object_id=null
description="Unassigned by cmdb-sync"
```
#### 6.4 Primary IP Update
If primary IP changed:
```
dcim_update_device
id=<device_id>
primary_ip4=<new_primary_ip_id>
```
#### 6.5 Container/VM Updates
**For new containers:**
```
virt_create_vm
name=<container_name>
cluster=<cluster_id>
status="active"
# ... other fields
```
**For stopped containers:**
```
virt_update_vm
id=<vm_id>
status="offline"
description="Container stopped - detected by cmdb-sync"
```
### Phase 7: Journal Entry
Document the sync:
```
extras_create_journal_entry
assigned_object_type="dcim.device"
assigned_object_id=<device_id>
comments="Device synced via /cmdb-sync command\n\nChanges applied:\n- <list of changes>"
```
### Phase 8: Summary Report
```markdown
## Sync Complete
**Device:** <hostname>
**Sync Time:** <timestamp>
### Changes Applied
- Updated platform: Ubuntu 22.04 → Ubuntu 24.04
- Created interface: tailscale0 (ID: X)
- Created IP: 100.x.x.x/32 (ID: Y)
- Created VM: media_lidarr (ID: Z)
- Marked VM offline: media_bazarr (ID: W)
### Current State
- **Interfaces:** 4 (3 active, 1 offline)
- **IP Addresses:** 5
- **Containers/VMs:** 8 (7 active, 1 offline)
### Next Sync
Run `/cmdb-sync` again after:
- Adding/removing Docker containers
- Changing network configuration
- OS upgrades
```
## Dry Run Mode
If `--dry-run` specified:
- Complete Phase 1-4 (lookup, discovery, compare, diff report)
- Skip Phase 5-8 (no confirmation, no updates, no journal)
- End with: "Dry run complete. No changes applied. Run without --dry-run to apply."
## Full Sync Mode
If `--full` specified:
- Skip user confirmation
- Update all fields even if unchanged (force refresh)
- Useful for ensuring NetBox matches current state exactly
## Error Handling
- **Device not found:** Suggest `/cmdb-register`
- **Permission denied on updates:** Note which failed, continue with others
- **Cluster not found:** Offer to create or skip container sync
- **API errors:** Log error, continue with remaining updates
## User Request
$ARGUMENTS

View File

@@ -0,0 +1,194 @@
---
description: Generate infrastructure topology diagrams from NetBox data
---
# CMDB Topology Visualization
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 🖥️ CMDB-ASSISTANT · Topology │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the visualization.
Generate Mermaid diagrams showing infrastructure topology from NetBox.
## Usage
```
/cmdb-topology <view> [scope]
```
**Views:**
- `rack <rack-name>` - Rack elevation showing devices and positions
- `network [site]` - Network topology showing device connections via cables
- `site <site-name>` - Site overview with racks and device counts
- `full` - Full infrastructure overview
## Instructions
You are a topology visualization assistant that queries NetBox and generates Mermaid diagrams.
### View: Rack Elevation
Generate a rack view showing devices and their positions.
**Data Collection:**
1. Use `dcim_list_racks` to find the rack by name
2. Use `dcim_list_devices` with `rack_id` filter to get devices in rack
3. For each device, note: `position`, `u_height`, `face`, `name`, `role`
**Mermaid Output:**
```mermaid
graph TB
subgraph rack["Rack: <rack-name> (U<height>)"]
direction TB
u42["U42: empty"]
u41["U41: empty"]
u40["U40: server-01 (Server)"]
u39["U39: server-01 (cont.)"]
u38["U38: switch-01 (Switch)"]
%% ... continue for all units
end
```
**For devices spanning multiple U:**
- Mark the top U with device name and role
- Mark subsequent Us as "(cont.)" for the same device
- Empty Us should show "empty"
### View: Network Topology
Generate a network diagram showing device connections.
**Data Collection:**
1. Use `dcim_list_sites` if no site specified (get all)
2. Use `dcim_list_devices` with optional `site_id` filter
3. Use `dcim_list_cables` to get all connections
4. Use `dcim_list_interfaces` for each device to understand port names
**Mermaid Output:**
```mermaid
graph TD
subgraph site1["Site: Home"]
router1[("core-router-01<br/>Router")]
switch1[["dist-switch-01<br/>Switch"]]
server1["web-server-01<br/>Server"]
server2["db-server-01<br/>Server"]
end
router1 -->|"eth0 - eth1"| switch1
switch1 -->|"gi0/1 - eth0"| server1
switch1 -->|"gi0/2 - eth0"| server2
```
**Node shapes by role:**
- Router: `[(" ")]` (cylinder/database shape)
- Switch: `[[ ]]` (double brackets)
- Server: `[ ]` (rectangle)
- Firewall: `{{ }}` (hexagon)
- Other: `[ ]` (rectangle)
**Edge labels:** Show interface names on both ends (A-side - B-side)
### View: Site Overview
Generate a site-level view showing racks and summary counts.
**Data Collection:**
1. Use `dcim_get_site` to get site details
2. Use `dcim_list_racks` with `site_id` filter
3. Use `dcim_list_devices` with `site_id` filter for counts per rack
**Mermaid Output:**
```mermaid
graph TB
subgraph site["Site: Headquarters"]
subgraph row1["Row 1"]
rack1["Rack A1<br/>12/42 U used<br/>5 devices"]
rack2["Rack A2<br/>20/42 U used<br/>8 devices"]
end
subgraph row2["Row 2"]
rack3["Rack B1<br/>8/42 U used<br/>3 devices"]
end
end
```
### View: Full Infrastructure
Generate a high-level view of all sites and their relationships.
**Data Collection:**
1. Use `dcim_list_regions` to get hierarchy
2. Use `dcim_list_sites` to get all sites
3. Use `dcim_list_devices` with status filter for counts
**Mermaid Output:**
```mermaid
graph TB
subgraph region1["Region: Americas"]
site1["Headquarters<br/>3 racks, 25 devices"]
site2["Branch Office<br/>1 rack, 5 devices"]
end
subgraph region2["Region: Europe"]
site3["EU Datacenter<br/>10 racks, 100 devices"]
end
site1 -.->|"WAN Link"| site3
```
### Output Format
Always provide:
1. **Summary** - Brief description of what the diagram shows
2. **Mermaid Code Block** - The diagram code in a fenced code block
3. **Legend** - Explanation of shapes and colors used
4. **Data Notes** - Any data quality issues (e.g., devices without position, missing cables)
**Example Output:**
```markdown
## Network Topology: Home Site
This diagram shows the network connections between 4 devices at the Home site.
```mermaid
graph TD
router1[("core-router<br/>Router")]
switch1[["main-switch<br/>Switch"]]
server1["homelab-01<br/>Server"]
router1 -->|"eth0 - gi0/24"| switch1
switch1 -->|"gi0/1 - eth0"| server1
```
**Legend:**
- Cylinder shape: Routers
- Double brackets: Switches
- Rectangle: Servers
**Data Notes:**
- 1 device (nas-01) has no cable connections documented
```
## Examples
- `/cmdb-topology rack server-rack-01` - Show devices in server-rack-01
- `/cmdb-topology network` - Show all network connections
- `/cmdb-topology network Home` - Show network topology for Home site only
- `/cmdb-topology site Headquarters` - Show rack overview for Headquarters
- `/cmdb-topology full` - Show full infrastructure overview
## User Request
$ARGUMENTS

View File

@@ -4,6 +4,18 @@ description: Interactive setup wizard for cmdb-assistant plugin - configures Net
# CMDB Assistant Setup Wizard
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 🖥️ CMDB-ASSISTANT · Setup Wizard │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the setup.
This command sets up the cmdb-assistant plugin with NetBox integration.
## Important Context

View File

@@ -0,0 +1,238 @@
---
description: Detect IP address conflicts and overlapping prefixes in NetBox
---
# CMDB IP Conflict Detection
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 🖥️ CMDB-ASSISTANT · IP Conflict Detection │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the analysis.
Scan NetBox IPAM data to identify IP address conflicts and overlapping prefixes.
## Usage
```
/ip-conflicts [scope]
```
**Scopes:**
- `all` (default) - Full scan of all IP data
- `addresses` - Check for duplicate IP addresses only
- `prefixes` - Check for overlapping prefixes only
- `vrf <name>` - Scan specific VRF only
- `prefix <cidr>` - Scan within specific prefix
## Instructions
You are an IP conflict detection specialist that analyzes NetBox IPAM data for conflicts and issues.
### Conflict Types to Detect
#### 1. Duplicate IP Addresses
Multiple IP address records with the same address (within same VRF).
**Detection:**
1. Use `ipam_list_ip_addresses` to get all addresses
2. Group by address + VRF combination
3. Flag groups with more than one record
**Exception:** Anycast addresses may legitimately appear multiple times - check the `role` field for "anycast".
#### 2. Overlapping Prefixes
Prefixes that contain the same address space (within same VRF).
**Detection:**
1. Use `ipam_list_prefixes` to get all prefixes
2. For each prefix pair in the same VRF, check if one contains the other
3. Legitimate hierarchies should have proper parent-child relationships
**Legitimate Overlaps:**
- Parent/child prefix hierarchy (e.g., 10.0.0.0/8 contains 10.0.1.0/24)
- Different VRFs (isolated routing tables)
- Marked as "container" status
#### 3. IPs Outside Their Prefix
IP addresses that don't fall within any defined prefix.
**Detection:**
1. For each IP address, find the most specific prefix that contains it
2. Flag IPs with no matching prefix
#### 4. Prefix Overlap Across VRFs (Informational)
Same prefix appearing in multiple VRFs - not necessarily a conflict, but worth noting.
### MCP Tools
- `ipam_list_ip_addresses` - Get all IP addresses with filters:
- `address` - Filter by specific address
- `vrf_id` - Filter by VRF
- `parent` - Filter by parent prefix
- `status` - Filter by status
- `ipam_list_prefixes` - Get all prefixes with filters:
- `prefix` - Filter by prefix CIDR
- `vrf_id` - Filter by VRF
- `within` - Find prefixes within a parent
- `contains` - Find prefixes containing an address
- `ipam_list_vrfs` - List VRFs for context
- `ipam_get_ip_address` - Get detailed IP info including assigned device/interface
- `ipam_get_prefix` - Get detailed prefix info
### Workflow
1. **Data Collection**
- Fetch all IP addresses (or filtered set)
- Fetch all prefixes (or filtered set)
- Fetch VRFs for context
2. **Duplicate Detection**
- Build address map: `{address+vrf: [records]}`
- Filter for entries with >1 record
3. **Overlap Detection**
- For each VRF, compare prefixes pairwise
- Check using CIDR math: does prefix A contain prefix B or vice versa?
- Ignore legitimate hierarchies (status=container)
4. **Orphan IP Detection**
- For each IP, find containing prefix
- Flag IPs with no prefix match
5. **Generate Report**
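A minimal Python sketch of steps 2 and 4 above, assuming the MCP tool results have already been collected into plain dicts (the `address`, `vrf`, and `role` field names follow the NetBox API and may differ from the exact tool output):
```python
from collections import defaultdict
import ipaddress

def find_duplicates(ip_records):
    """Group IP records by (address, VRF); anycast entries are skipped."""
    groups = defaultdict(list)
    for rec in ip_records:
        if rec.get("role") == "anycast":
            continue
        groups[(rec["address"], rec.get("vrf"))].append(rec)
    return {key: recs for key, recs in groups.items() if len(recs) > 1}

def find_orphans(ip_records, prefixes):
    """Flag IPs that fall inside no defined prefix in their VRF."""
    orphans = []
    for rec in ip_records:
        ip = ipaddress.ip_interface(rec["address"]).ip
        nets = [ipaddress.ip_network(p["prefix"]) for p in prefixes
                if p.get("vrf") == rec.get("vrf")]
        if not any(ip in net for net in nets):
            orphans.append(rec)
    return orphans
```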
### Report Format
```markdown
## IP Conflict Detection Report
**Generated:** [timestamp]
**Scope:** [scope parameter]
### Summary
| Check | Status | Count |
|-------|--------|-------|
| Duplicate IPs | [PASS/FAIL] | X |
| Overlapping Prefixes | [PASS/FAIL] | Y |
| Orphan IPs | [PASS/FAIL] | Z |
| Total Issues | - | N |
### Critical Issues
#### Duplicate IP Addresses
| Address | VRF | Count | Assigned To |
|---------|-----|-------|-------------|
| 10.0.1.50/24 | Global | 2 | server-01 (eth0), server-02 (eth0) |
| 192.168.1.100/24 | Global | 2 | router-01 (gi0/1), switch-01 (vlan10) |
**Impact:** IP conflicts cause network connectivity issues. Devices will have intermittent connectivity.
**Resolution:**
- Determine which device should have the IP
- Update or remove the duplicate assignment
- Consider IP reservation to prevent future conflicts
#### Overlapping Prefixes
| Prefix 1 | Prefix 2 | VRF | Type |
|----------|----------|-----|------|
| 10.0.0.0/24 | 10.0.0.0/25 | Global | Unstructured overlap |
| 192.168.0.0/16 | 192.168.1.0/24 | Production | Missing container flag |
**Impact:** Overlapping prefixes can cause routing ambiguity and IP management confusion.
**Resolution:**
- For legitimate hierarchies: Mark parent prefix as status="container"
- For accidental overlaps: Consolidate or re-address one prefix
### Warnings
#### IPs Without Prefix
| Address | VRF | Assigned To | Nearest Prefix |
|---------|-----|-------------|----------------|
| 172.16.5.10/24 | Global | server-03 (eth0) | None found |
**Impact:** IPs without a prefix bypass IPAM allocation controls.
**Resolution:**
- Create appropriate prefix to contain the IP
- Or update IP to correct address within existing prefix
### Informational
#### Same Prefix in Multiple VRFs
| Prefix | VRFs | Purpose |
|--------|------|---------|
| 10.0.0.0/24 | Global, DMZ, Internal | [Check if intentional] |
### Statistics
| Metric | Value |
|--------|-------|
| Total IP Addresses | X |
| Total Prefixes | Y |
| Total VRFs | Z |
| Utilization (IPs/Prefix space) | W% |
### Remediation Commands
```
# Remove duplicate IP (keep server-01's assignment)
ipam_delete_ip_address id=123
# Mark prefix as container
ipam_update_prefix id=456 status=container
# Create missing prefix for orphan IP
ipam_create_prefix prefix=172.16.5.0/24 status=active
```
```
### CIDR Math Reference
For overlap detection, use these rules:
- Prefix A **contains** Prefix B if: A.network <= B.network AND A.broadcast >= B.broadcast
- Two prefixes **overlap** if: A.network <= B.broadcast AND B.network <= A.broadcast
**Example:**
- 10.0.0.0/8 contains 10.0.1.0/24 (legitimate hierarchy)
- 10.0.0.0/24 and 10.0.0.128/25 overlap (10.0.0.128/25 is within 10.0.0.0/24)
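The same rules, sketched with Python's standard `ipaddress` module (illustration only; the command itself works from the MCP tool output):
```python
import ipaddress

a = ipaddress.ip_network("10.0.0.0/24")
b = ipaddress.ip_network("10.0.0.128/25")

# Containment: A contains B if A.network <= B.network and A.broadcast >= B.broadcast
print(b.subnet_of(a))   # True - legitimate parent/child hierarchy

# Overlap: A.network <= B.broadcast and B.network <= A.broadcast
print(a.overlaps(b))    # True - 10.0.0.128/25 lies within 10.0.0.0/24
```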
### Severity Levels
| Issue | Severity | Description |
|-------|----------|-------------|
| Duplicate IP (same interface type) | CRITICAL | Active conflict, causes outages |
| Duplicate IP (different roles) | HIGH | Potential conflict |
| Overlapping prefixes (same status) | HIGH | IPAM management issue |
| Overlapping prefixes (container ok) | LOW | May need status update |
| Orphan IP | MEDIUM | Bypasses IPAM controls |
## Examples
- `/ip-conflicts` - Full scan for all conflicts
- `/ip-conflicts addresses` - Check only for duplicate IPs
- `/ip-conflicts prefixes` - Check only for overlapping prefixes
- `/ip-conflicts vrf Production` - Scan only Production VRF
- `/ip-conflicts prefix 10.0.0.0/8` - Scan within specific prefix range
## User Request
$ARGUMENTS

View File

@@ -0,0 +1,21 @@
{
"hooks": {
"SessionStart": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/startup-check.sh"
}
],
"PreToolUse": [
{
"matcher": "mcp__plugin_cmdb-assistant_netbox__virt_create|mcp__plugin_cmdb-assistant_netbox__virt_update|mcp__plugin_cmdb-assistant_netbox__dcim_create|mcp__plugin_cmdb-assistant_netbox__dcim_update",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/validate-input.sh"
}
]
}
]
}
}

View File

@@ -0,0 +1,66 @@
#!/bin/bash
# cmdb-assistant SessionStart hook
# Tests NetBox API connectivity and checks for data quality issues
# All output MUST have [cmdb-assistant] prefix
# Non-blocking: always exits 0
set -euo pipefail
PREFIX="[cmdb-assistant]"
# Load NetBox configuration
NETBOX_CONFIG="$HOME/.config/claude/netbox.env"
if [[ ! -f "$NETBOX_CONFIG" ]]; then
echo "$PREFIX NetBox not configured - run /cmdb-assistant:initial-setup"
exit 0
fi
# Source config
source "$NETBOX_CONFIG"
# Validate required variables
if [[ -z "${NETBOX_API_URL:-}" ]] || [[ -z "${NETBOX_API_TOKEN:-}" ]]; then
echo "$PREFIX Missing NETBOX_API_URL or NETBOX_API_TOKEN in config"
exit 0
fi
# Quick API connectivity test (5s timeout)
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" -m 5 \
-H "Authorization: Token $NETBOX_API_TOKEN" \
-H "Accept: application/json" \
"${NETBOX_API_URL}/" 2>/dev/null || echo "000")
if [[ "$HTTP_CODE" == "000" ]]; then
echo "$PREFIX NetBox API unreachable (timeout/connection error)"
exit 0
elif [[ "$HTTP_CODE" != "200" ]]; then
echo "$PREFIX NetBox API returned HTTP $HTTP_CODE - check credentials"
exit 0
fi
# Check for VMs without site assignment (data quality)
VMS_RESPONSE=$(curl -s -m 5 \
-H "Authorization: Token $NETBOX_API_TOKEN" \
-H "Accept: application/json" \
"${NETBOX_API_URL}/virtualization/virtual-machines/?site__isnull=true&limit=1" 2>/dev/null || echo '{"count":0}')
VMS_NO_SITE=$(echo "$VMS_RESPONSE" | grep -o '"count":[0-9]*' | cut -d: -f2 || echo "0")
if [[ "$VMS_NO_SITE" -gt 0 ]]; then
echo "$PREFIX $VMS_NO_SITE VMs without site assignment - run /cmdb-audit for details"
fi
# Check for devices without platform
DEVICES_RESPONSE=$(curl -s -m 5 \
-H "Authorization: Token $NETBOX_API_TOKEN" \
-H "Accept: application/json" \
"${NETBOX_API_URL}/dcim/devices/?platform__isnull=true&limit=1" 2>/dev/null || echo '{"count":0}')
DEVICES_NO_PLATFORM=$(echo "$DEVICES_RESPONSE" | grep -o '"count":[0-9]*' | cut -d: -f2 || echo "0")
if [[ "$DEVICES_NO_PLATFORM" -gt 0 ]]; then
echo "$PREFIX $DEVICES_NO_PLATFORM devices without platform - consider updating"
fi
exit 0

View File

@@ -0,0 +1,79 @@
#!/bin/bash
# cmdb-assistant PreToolUse validation hook
# Validates input parameters for create/update operations
# NON-BLOCKING: Warns but allows operation to proceed (always exits 0)
set -euo pipefail
PREFIX="[cmdb-assistant]"
# Read tool input from stdin
INPUT=$(cat)
# Extract tool name from the input
# Format varies, try to find tool_name or name field
TOOL_NAME=""
if echo "$INPUT" | grep -q '"tool_name"'; then
TOOL_NAME=$(echo "$INPUT" | grep -o '"tool_name"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1 | sed 's/.*"\([^"]*\)"$/\1/' || true)
elif echo "$INPUT" | grep -q '"name"'; then
TOOL_NAME=$(echo "$INPUT" | grep -o '"name"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1 | sed 's/.*"\([^"]*\)"$/\1/' || true)
fi
# If we can't determine the tool, exit silently
if [[ -z "$TOOL_NAME" ]]; then
exit 0
fi
# VM creation/update validation
if echo "$TOOL_NAME" | grep -qE "virt_create_vm|virt_create_virtual_machine|virt_update_vm|virt_update_virtual_machine"; then
WARNINGS=()
# Check for missing site
if ! echo "$INPUT" | grep -qE '"site"[[:space:]]*:[[:space:]]*[0-9]'; then
WARNINGS+=("no site assigned")
fi
# Check for missing tenant
if ! echo "$INPUT" | grep -qE '"tenant"[[:space:]]*:[[:space:]]*[0-9]'; then
WARNINGS+=("no tenant assigned")
fi
# Check for missing platform
if ! echo "$INPUT" | grep -qE '"platform"[[:space:]]*:[[:space:]]*[0-9]'; then
WARNINGS+=("no platform assigned")
fi
if [[ ${#WARNINGS[@]} -gt 0 ]]; then
echo "$PREFIX VM best practice: $(IFS=', '; echo "${WARNINGS[*]}") - consider assigning for data quality"
fi
fi
# Device creation/update validation
if echo "$TOOL_NAME" | grep -qE "dcim_create_device|dcim_update_device"; then
WARNINGS=()
# Check for missing platform
if ! echo "$INPUT" | grep -qE '"platform"[[:space:]]*:[[:space:]]*[0-9]'; then
WARNINGS+=("no platform assigned")
fi
# Check for missing tenant
if ! echo "$INPUT" | grep -qE '"tenant"[[:space:]]*:[[:space:]]*[0-9]'; then
WARNINGS+=("no tenant assigned")
fi
if [[ ${#WARNINGS[@]} -gt 0 ]]; then
echo "$PREFIX Device best practice: $(IFS=', '; echo "${WARNINGS[*]}") - consider assigning"
fi
fi
# Cluster creation validation
if echo "$TOOL_NAME" | grep -qE "virt_create_cluster"; then
# Check for missing site scope
if ! echo "$INPUT" | grep -qE '"site"[[:space:]]*:[[:space:]]*[0-9]'; then
echo "$PREFIX Cluster best practice: no site scope - clusters should be scoped to a site"
fi
fi
# Always allow operation (non-blocking)
exit 0

View File

@@ -0,0 +1,249 @@
---
description: NetBox best practices for data quality and consistency based on official NetBox Labs guidelines
---
# NetBox Best Practices Skill
Reference documentation for proper NetBox data modeling, following official NetBox Labs guidelines.
## CRITICAL: Dependency Order
Objects must be created in this order due to foreign key dependencies. Creating objects out of order results in validation errors.
```
1. ORGANIZATION (no dependencies)
├── Tenant Groups
├── Tenants (optional: Tenant Group)
├── Regions
├── Site Groups
└── Tags
2. SITES AND LOCATIONS
├── Sites (optional: Region, Site Group, Tenant)
└── Locations (requires: Site, optional: parent Location)
3. DCIM PREREQUISITES
├── Manufacturers
├── Device Types (requires: Manufacturer)
├── Platforms
├── Device Roles
└── Rack Roles
4. RACKS
└── Racks (requires: Site, optional: Location, Rack Role, Tenant)
5. DEVICES
├── Devices (requires: Device Type, Role, Site; optional: Rack, Location)
└── Interfaces (requires: Device)
6. VIRTUALIZATION
├── Cluster Types
├── Cluster Groups
├── Clusters (requires: Cluster Type, optional: Site)
├── Virtual Machines (requires: Cluster OR Site)
└── VM Interfaces (requires: Virtual Machine)
7. IPAM
├── VRFs (optional: Tenant)
├── Prefixes (optional: VRF, Site, Tenant)
├── IP Addresses (optional: VRF, Tenant, Interface)
└── VLANs (optional: Site, Tenant)
8. CONNECTIONS (last)
└── Cables (requires: endpoints)
```
**Key Rule:** NEVER create a VM before its cluster exists. NEVER create a device before its site exists.
## HIGH: Site Assignment
**All infrastructure objects should have a site:**
| Object Type | Site Requirement |
|-------------|------------------|
| Devices | **REQUIRED** |
| Racks | **REQUIRED** |
| VMs | RECOMMENDED (via cluster or direct) |
| Clusters | RECOMMENDED |
| Prefixes | RECOMMENDED |
| VLANs | RECOMMENDED |
**Why Sites Matter:**
- Location-based queries and filtering
- Power and capacity budgeting
- Physical inventory tracking
- Compliance and audit requirements
## HIGH: Tenant Usage
Use tenants for logical resource separation:
**When to Use Tenants:**
- Multi-team environments (assign resources to teams)
- Multi-customer scenarios (MSP, hosting)
- Cost allocation requirements
- Access control boundaries
**Apply Tenants To:**
- Sites (who owns the physical location)
- Devices (who operates the hardware)
- VMs (who owns the workload)
- Prefixes (who owns the IP space)
- VLANs (who owns the network segment)
## HIGH: Platform Tracking
Platforms track OS/runtime information for automation and lifecycle management.
**Platform Examples:**
| Device Type | Platform Examples |
|-------------|-------------------|
| Servers | Ubuntu 24.04, Windows Server 2022, RHEL 9 |
| Network | Cisco IOS 17.x, Junos 23.x, Arista EOS |
| Raspberry Pi | Raspberry Pi OS (Bookworm), Ubuntu Server ARM |
| Containers | Docker Container (as runtime indicator) |
**Benefits:**
- Vulnerability tracking (CVE correlation)
- Configuration management integration
- Lifecycle management (EOL tracking)
- Automation targeting
## MEDIUM: Tag Conventions
Use tags for cross-cutting classification that spans object types.
**Recommended Tag Patterns:**
| Pattern | Purpose | Examples |
|---------|---------|----------|
| `env:*` | Environment classification | `env:production`, `env:staging`, `env:development` |
| `app:*` | Application grouping | `app:web`, `app:database`, `app:monitoring` |
| `team:*` | Ownership | `team:platform`, `team:infra`, `team:devops` |
| `backup:*` | Backup policy | `backup:daily`, `backup:weekly`, `backup:none` |
| `monitoring:*` | Monitoring level | `monitoring:critical`, `monitoring:standard` |
**Tags vs Custom Fields:**
- Tags: Cross-object classification, multiple values, filtering
- Custom Fields: Object-specific structured data, single values, reporting
## MEDIUM: Naming Conventions
Consistent naming improves searchability and automation compatibility.
**Recommended Patterns:**
| Object Type | Pattern | Examples |
|-------------|---------|----------|
| Devices | `{role}-{location}-{number}` | `web-dc1-01`, `db-cloud-02`, `fw-home-01` |
| VMs | `{env}-{app}-{number}` | `prod-api-01`, `dev-worker-03` |
| Clusters | `{site}-{type}` | `dc1-vmware`, `home-docker` |
| Prefixes | Include purpose in description | "Production web tier /24" |
| VLANs | `{site}-{function}` | `dc1-mgmt`, `home-iot` |
**Avoid:**
- Inconsistent casing (mixing `HotServ` and `hotserv`)
- Mixed separators (mixing `hhl_cluster` and `media-cluster`)
- Generic names without context (`server1`, `vm2`)
- Special characters other than hyphen
## MEDIUM: Role Consolidation
Avoid role fragmentation - use general roles with platform/tags for specificity.
**Instead of:**
```
nginx-web-server
apache-web-server
web-server-frontend
web-server-api
```
**Use:**
```
web-server (role) + platform (nginx/apache) + tags (frontend, api)
```
**Recommended Role Categories:**
| Category | Roles |
|----------|-------|
| Infrastructure | `hypervisor`, `storage-server`, `network-device`, `firewall` |
| Compute | `application-server`, `database-server`, `web-server`, `api-server` |
| Services | `container-host`, `load-balancer`, `monitoring-server`, `backup-server` |
| Development | `development-workstation`, `ci-runner`, `build-server` |
| Containers | `reverse-proxy`, `database`, `cache`, `queue`, `worker` |
## Docker Containers as VMs
NetBox's Virtualization module can model Docker containers:
**Approach:**
1. Create device for physical Docker host
2. Create cluster (type: "Docker Compose" or "Docker Swarm")
3. Associate cluster with host device
4. Create VMs for each container in the cluster
**VM Fields for Containers:**
- `name`: Container name (e.g., `media_jellyfin`)
- `role`: Container function (e.g., `Media Server`)
- `vcpus`: CPU limit/shares
- `memory`: Memory limit (MB)
- `disk`: Volume size estimate
- `description`: Container purpose
- `comments`: Image, ports, volumes, dependencies
**This is a pragmatic modeling choice** - containers aren't VMs, but the Virtualization module is the closest fit for tracking container workloads.
## Primary IP Workflow
To set a device/VM's primary IP:
1. Create interface on device/VM
2. Create IP address assigned to interface
3. Set IP as `primary_ip4` or `primary_ip6` on device/VM
**Why Primary IP Matters:**
- Used for device connectivity checks
- Displayed in device list views
- Used by automation tools (NAPALM, Ansible)
- Required for many integrations
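A rough sketch of the three steps against the NetBox REST API (the URL, token, and device ID `42` are placeholders; the MCP tools `dcim_create_interface`, `ipam_create_ip_address`, and `dcim_update_device` wrap the same operations):
```python
import requests

NETBOX = "https://netbox.example.com/api"   # placeholder URL
HEADERS = {"Authorization": "Token <token>", "Accept": "application/json"}

# 1. Create the interface on the device
iface = requests.post(f"{NETBOX}/dcim/interfaces/", headers=HEADERS,
                      json={"device": 42, "name": "eth0", "type": "1000base-t"}).json()

# 2. Create the IP address assigned to that interface
ip = requests.post(f"{NETBOX}/ipam/ip-addresses/", headers=HEADERS,
                   json={"address": "10.0.1.50/24",
                         "assigned_object_type": "dcim.interface",
                         "assigned_object_id": iface["id"]}).json()

# 3. Set the IP as the device's primary IPv4 address
requests.patch(f"{NETBOX}/dcim/devices/42/", headers=HEADERS,
               json={"primary_ip4": ip["id"]})
```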
## Data Quality Checklist
Before closing a sprint or audit:
- [ ] All VMs have site assignment (direct or via cluster)
- [ ] All VMs have tenant assignment
- [ ] All active devices have platform
- [ ] All active devices have primary IP
- [ ] Naming follows conventions
- [ ] No orphaned prefixes (allocated but unused)
- [ ] Tags applied consistently
- [ ] Clusters scoped to sites
- [ ] Roles not overly fragmented
## MCP Tool Reference
**Dependency Order for Creation:**
```
1. dcim_create_site
2. dcim_create_manufacturer
3. dcim_create_device_type
4. dcim_create_device_role
5. dcim_create_platform
6. dcim_create_device
7. virt_create_cluster_type
8. virt_create_cluster
9. virt_create_vm
10. dcim_create_interface / virt_create_vm_interface
11. ipam_create_ip_address
12. dcim_update_device (set primary_ip4)
```
**Lookup Before Create:**
Always check if object exists before creating to avoid duplicates:
```
1. dcim_list_devices name=<hostname>
2. If exists, update; if not, create
```
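The same lookup-before-create pattern sketched against the REST API (payload fields and IDs are placeholders; the MCP tools expose equivalent list, create, and update calls):
```python
import requests

NETBOX = "https://netbox.example.com/api"   # placeholder URL
HEADERS = {"Authorization": "Token <token>", "Accept": "application/json"}
payload = {"name": "web-dc1-01", "device_type": 1, "role": 2, "site": 3}  # example IDs

existing = requests.get(f"{NETBOX}/dcim/devices/", headers=HEADERS,
                        params={"name": payload["name"]}).json()
if existing["count"] == 0:
    requests.post(f"{NETBOX}/dcim/devices/", headers=HEADERS, json=payload)       # create
else:
    device_id = existing["results"][0]["id"]
    requests.patch(f"{NETBOX}/dcim/devices/{device_id}/", headers=HEADERS,
                   json=payload)                                                  # update
```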

View File

@@ -6,6 +6,16 @@ description: Code structure and refactoring specialist
You are a software architect specializing in code quality, design patterns, and refactoring.
## Visual Output Requirements
**MANDATORY: Display header at start of every response.**
```
┌──────────────────────────────────────────────────────────────────┐
│ 🔒 CODE-SENTINEL · Refactor Advisory │
└──────────────────────────────────────────────────────────────────┘
```
## Expertise
- Martin Fowler's refactoring catalog

View File

@@ -6,6 +6,16 @@ description: Security-focused code review agent
You are a security engineer specializing in application security and secure coding practices.
## Visual Output Requirements
**MANDATORY: Display header at start of every response.**
```
┌──────────────────────────────────────────────────────────────────┐
│ 🔒 CODE-SENTINEL · Security Review │
└──────────────────────────────────────────────────────────────────┘
```
## Expertise
- OWASP Top 10 vulnerabilities

View File

@@ -6,6 +6,18 @@ description: Preview refactoring changes without applying them
Analyze and preview refactoring opportunities without making changes.
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 🔒 CODE-SENTINEL · Refactor Preview │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the analysis.
## Usage
```
/refactor-dry <target> [--all]

View File

@@ -6,6 +6,18 @@ description: Apply refactoring patterns to improve code structure and maintainab
Apply refactoring transformations to specified code.
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 🔒 CODE-SENTINEL · Refactor │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the refactoring workflow.
## Usage
```
/refactor <target> [--pattern=<pattern>]

View File

@@ -6,6 +6,18 @@ description: Full security audit of codebase - scans all files for vulnerability
Comprehensive security audit of the project.
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 🔒 CODE-SENTINEL · Security Scan │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the scan workflow.
## Process
1. **File Discovery**

View File

@@ -2,9 +2,8 @@
"mcpServers": { "mcpServers": {
"contract-validator": { "contract-validator": {
"type": "stdio", "type": "stdio",
"command": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/contract-validator/.venv/bin/python", "command": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/contract-validator/run.sh",
"args": ["-m", "mcp_server.server"], "args": []
"cwd": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/contract-validator"
} }
} }
} }

View File

@@ -19,6 +19,7 @@ Contract-validator solves these by parsing plugin interfaces and validating comp
- **Agent Extraction**: Parse CLAUDE.md Four-Agent Model tables and Agents sections
- **Compatibility Checks**: Pairwise validation between all plugins in a marketplace
- **Data Flow Validation**: Verify agent tool sequences have valid data producers/consumers
- **Dependency Visualization**: Generate Mermaid flowcharts showing plugin relationships
- **Comprehensive Reports**: Markdown or JSON reports with actionable suggestions
## Installation
@@ -40,9 +41,11 @@ pip install -r requirements.txt
| Command | Description |
|---------|-------------|
| `/initial-setup` | Interactive setup wizard |
| `/validate-contracts` | Full marketplace compatibility validation |
| `/check-agent` | Validate single agent definition |
| `/list-interfaces` | Show all plugin interfaces |
| `/dependency-graph` | Generate Mermaid flowchart of plugin dependencies |
## Agents
@@ -105,6 +108,16 @@ pip install -r requirements.txt
# Data Flow: No issues detected
```
```
/dependency-graph
# Output: Mermaid flowchart showing:
# - Plugins grouped by shared MCP servers
# - Data flow from data-platform to viz-platform
# - Required vs optional dependencies
# - Command counts per plugin
```
## Issue Types
| Type | Severity | Description |

View File

@@ -2,6 +2,16 @@
You are an agent definition validator. Your role is to verify that a specific agent's tool references and data flow are valid.
## Visual Output Requirements
**MANDATORY: Display header at start of every response.**
```
┌──────────────────────────────────────────────────────────────────┐
│ ✅ CONTRACT-VALIDATOR · Agent Validation │
└──────────────────────────────────────────────────────────────────┘
```
## Capabilities
- Parse agent definitions from CLAUDE.md

View File

@@ -2,6 +2,16 @@
You are a contract validation specialist. Your role is to perform comprehensive cross-plugin compatibility validation for the entire marketplace.
## Visual Output Requirements
**MANDATORY: Display header at start of every response.**
```
┌──────────────────────────────────────────────────────────────────┐
│ ✅ CONTRACT-VALIDATOR · Full Validation │
└──────────────────────────────────────────────────────────────────┘
```
## Capabilities
- Parse plugin interfaces from README.md files

View File

@@ -1,5 +1,17 @@
# /check-agent - Validate Agent Definition
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ ✅ CONTRACT-VALIDATOR · Agent Check │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the validation.
Validate a single agent's tool references and data flow.
## Usage

View File

@@ -0,0 +1,263 @@
# /dependency-graph - Generate Dependency Visualization
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ ✅ CONTRACT-VALIDATOR · Dependency Graph │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the visualization.
Generate a Mermaid flowchart showing plugin dependencies, data flows, and tool relationships.
## Usage
```
/dependency-graph [marketplace_path] [--format <mermaid|text>] [--show-tools]
```
## Parameters
- `marketplace_path` (optional): Path to marketplace root. Defaults to current project root.
- `--format` (optional): Output format - `mermaid` (default) or `text`
- `--show-tools` (optional): Include individual tool nodes in the graph
## Workflow
1. **Discover plugins**:
- Scan plugins directory for all plugins with `.claude-plugin/` marker
- Parse each plugin's README.md to extract interface
- Parse CLAUDE.md for agent definitions and tool sequences
2. **Analyze dependencies**:
- Identify shared MCP servers (plugins using same server)
- Detect tool dependencies (which plugins produce vs consume data)
- Find agent tool references across plugins
- Categorize as required (ERROR if missing) or optional (WARNING if missing)
3. **Build dependency graph**:
- Create nodes for each plugin
- Create edges for:
- Shared MCP servers (bidirectional)
- Data producers -> consumers (directional)
- Agent tool dependencies (directional)
- Mark edges as optional or required
4. **Generate Mermaid output**:
- Create flowchart diagram syntax
- Style required dependencies with solid lines
- Style optional dependencies with dashed lines
- Group by MCP server or data flow
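A rough sketch of step 4, assuming the MCP-server grouping and dependency edges from steps 1-3 are already available (names and structure are illustrative, not the command's actual implementation):
```python
def to_mermaid(server_plugins, edges):
    """server_plugins: {"gitea": ["projman", "pr-review"], ...}
    edges: [(source, target, label, required), ...]"""
    lines = ["flowchart TD"]
    for server, plugins in server_plugins.items():
        lines.append(f'    subgraph mcp_{server}["MCP: {server}"]')
        for plugin in plugins:
            lines.append(f'        {plugin}["{plugin}"]')
        lines.append("    end")
    for source, target, label, required in edges:
        arrow = "-->" if required else "-.->"   # solid = required, dashed = optional
        lines.append(f'    {source} {arrow}|"{label}"| {target}')
    return "\n".join(lines)

print(to_mermaid({"gitea": ["projman", "pr-review"]},
                 [("data-platform", "viz-platform", "data_ref", True)]))
```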
## Output Format
### Mermaid (default)
```mermaid
flowchart TD
subgraph mcp_gitea["MCP: gitea"]
projman["projman"]
pr-review["pr-review"]
end
subgraph mcp_data["MCP: data-platform"]
data-platform["data-platform"]
end
subgraph mcp_viz["MCP: viz-platform"]
viz-platform["viz-platform"]
end
%% Data flow dependencies
data-platform -->|"data_ref (required)"| viz-platform
%% Optional dependencies
projman -.->|"lessons (optional)"| pr-review
%% Styling
classDef required stroke:#e74c3c,stroke-width:2px
classDef optional stroke:#f39c12,stroke-dasharray:5 5
```
### Text Format
```
DEPENDENCY GRAPH
================
Plugins: 12
MCP Servers: 4
Dependencies: 8 (5 required, 3 optional)
MCP Server Groups:
gitea: projman, pr-review
data-platform: data-platform
viz-platform: viz-platform
netbox: cmdb-assistant
Data Flow Dependencies:
data-platform -> viz-platform (data_ref) [REQUIRED]
data-platform -> data-platform (data_ref) [INTERNAL]
Cross-Plugin Tool Usage:
projman.Planner uses: create_issue, search_lessons
pr-review.reviewer uses: get_pr_diff, create_pr_review
```
## Dependency Types
| Type | Line Style | Meaning |
|------|------------|---------|
| Required | Solid (`-->`) | Plugin cannot function without this dependency |
| Optional | Dashed (`-.->`) | Plugin works but with reduced functionality |
| Internal | Dotted (`...>`) | Self-dependency within same plugin |
| Shared MCP | Double (`==>`) | Plugins share same MCP server instance |
## Known Data Flow Patterns
The command recognizes these producer/consumer relationships:
### Data Producers
- `read_csv`, `read_parquet`, `read_json` - File loaders
- `pg_query`, `pg_execute` - Database queries
- `filter`, `select`, `groupby`, `join` - Transformations
### Data Consumers
- `describe`, `head`, `tail` - Data inspection
- `to_csv`, `to_parquet` - File writers
- `chart_create` - Visualization
### Cross-Plugin Flows
- `data-platform` produces `data_ref` -> `viz-platform` consumes for charts
- `projman` produces issues -> `pr-review` references in reviews
- `gitea` wiki -> `projman` lessons learned
## Examples
### Basic Usage
```
/dependency-graph
```
Generates Mermaid diagram for current marketplace.
### With Tool Details
```
/dependency-graph --show-tools
```
Includes individual tool nodes showing which tools each plugin provides.
### Text Summary
```
/dependency-graph --format text
```
Outputs text-based summary suitable for CLAUDE.md inclusion.
### Specific Path
```
/dependency-graph ~/claude-plugins-work
```
Analyze marketplace at specified path.
## Integration with Other Commands
Use with `/validate-contracts` to:
1. Run `/dependency-graph` to visualize relationships
2. Run `/validate-contracts` to find issues in those relationships
3. Fix issues and regenerate graph to verify
## Available Tools
Use these MCP tools:
- `parse_plugin_interface` - Extract interface from plugin README.md
- `parse_claude_md_agents` - Extract agents and their tool sequences
- `generate_compatibility_report` - Get full interface data (JSON format for analysis)
- `validate_data_flow` - Verify data producer/consumer relationships
## Implementation Notes
### Detecting Shared MCP Servers
Check each plugin's `.mcp.json` file for server definitions:
```bash
# List all .mcp.json files in plugins
find plugins/ -name ".mcp.json" -exec cat {} \;
```
Plugins with identical MCP server names share that server.
### Identifying Data Flows
1. Parse tool categories from README.md
2. Map known producer tools to their output types
3. Map known consumer tools to their input requirements
4. Create edges where outputs match inputs
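One possible sketch of this matching, using the producer/consumer lists above as lookup tables (illustrative only):
```python
# Illustrative producer/consumer maps mirroring the "Known Data Flow Patterns" section
PRODUCES = {"read_csv": "data_ref", "pg_query": "data_ref", "groupby": "data_ref"}
CONSUMES = {"describe": "data_ref", "to_parquet": "data_ref", "chart_create": "data_ref"}

def data_flow_edges(plugin_tools):
    """plugin_tools: {"data-platform": ["read_csv", ...], "viz-platform": ["chart_create"]}"""
    edges = []
    for src, src_tools in plugin_tools.items():
        outputs = {PRODUCES[t] for t in src_tools if t in PRODUCES}
        for dst, dst_tools in plugin_tools.items():
            needs = {CONSUMES[t] for t in dst_tools if t in CONSUMES}
            matched = outputs & needs
            if matched and src != dst:
                edges.append((src, dst, ", ".join(sorted(matched))))
    return edges
```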
### Optional vs Required
- **Required**: Consumer tool has no default/fallback behavior
- **Optional**: Consumer works without producer (e.g., lessons search returns empty)
Determination is based on:
- Issue severity from `validate_data_flow` (ERROR = required, WARNING = optional)
- Tool documentation stating "requires" vs "uses if available"
## Sample Output
For the leo-claude-mktplace:
```mermaid
flowchart TD
subgraph gitea_mcp["Shared MCP: gitea"]
projman["projman<br/>14 commands"]
pr-review["pr-review<br/>6 commands"]
end
subgraph netbox_mcp["Shared MCP: netbox"]
cmdb-assistant["cmdb-assistant<br/>3 commands"]
end
subgraph data_mcp["Shared MCP: data-platform"]
data-platform["data-platform<br/>7 commands"]
end
subgraph viz_mcp["Shared MCP: viz-platform"]
viz-platform["viz-platform<br/>7 commands"]
end
subgraph standalone["Standalone Plugins"]
doc-guardian["doc-guardian"]
code-sentinel["code-sentinel"]
clarity-assist["clarity-assist"]
git-flow["git-flow"]
claude-config-maintainer["claude-config-maintainer"]
contract-validator["contract-validator"]
end
%% Data flow: data-platform -> viz-platform
data-platform -->|"data_ref"| viz-platform
%% Cross-plugin: projman lessons -> pr-review context
projman -.->|"lessons"| pr-review
%% Styling
classDef mcpGroup fill:#e8f4fd,stroke:#2196f3
classDef standalone fill:#f5f5f5,stroke:#9e9e9e
classDef required stroke:#e74c3c,stroke-width:2px
classDef optional stroke:#f39c12,stroke-dasharray:5 5
class gitea_mcp,netbox_mcp,data_mcp,viz_mcp mcpGroup
class standalone standalone
```

View File

@@ -0,0 +1,164 @@
---
description: Interactive setup wizard for contract-validator plugin - verifies MCP server and shows capabilities
---
# Contract-Validator Setup Wizard
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ ✅ CONTRACT-VALIDATOR · Setup Wizard │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the setup.
This command sets up the contract-validator plugin for cross-plugin compatibility validation.
## Important Context
- **This command uses Bash, Read, Write, and AskUserQuestion tools** - NOT MCP tools
- **MCP tools won't work until after setup + session restart**
- **No external credentials required** - this plugin validates local files only
---
## Phase 1: Environment Validation
### Step 1.1: Check Python Version
```bash
python3 --version
```
Requires Python 3.10+. If below, stop setup and inform user:
```
Python 3.10 or higher is required. Please install it and run /initial-setup again.
```
---
## Phase 2: MCP Server Setup
### Step 2.1: Locate Contract-Validator MCP Server
```bash
# If running from installed marketplace
ls -la ~/.claude/plugins/marketplaces/leo-claude-mktplace/mcp-servers/contract-validator/ 2>/dev/null || echo "NOT_FOUND_INSTALLED"
# If running from source
ls -la ~/claude-plugins-work/mcp-servers/contract-validator/ 2>/dev/null || echo "NOT_FOUND_SOURCE"
```
Determine which path exists and use that as the MCP server path.
### Step 2.2: Check Virtual Environment
```bash
ls -la /path/to/mcp-servers/contract-validator/.venv/bin/python 2>/dev/null && echo "VENV_EXISTS" || echo "VENV_MISSING"
```
### Step 2.3: Create Virtual Environment (if missing)
```bash
cd /path/to/mcp-servers/contract-validator && python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip && pip install -r requirements.txt && deactivate
```
**If pip install fails:**
- Show the error to the user
- Suggest: "Check your internet connection and try again."
---
## Phase 3: Validation
### Step 3.1: Verify MCP Server
```bash
cd /path/to/mcp-servers/contract-validator && .venv/bin/python -c "from mcp_server.server import ContractValidatorMCPServer; print('MCP Server OK')"
```
If this fails, check the error and report it to the user.
### Step 3.2: Summary
Display:
```
╔════════════════════════════════════════════════════════════════╗
║ CONTRACT-VALIDATOR SETUP COMPLETE ║
╠════════════════════════════════════════════════════════════════╣
║ MCP Server: ✓ Ready ║
║ Parse Tools: ✓ Available (2 tools) ║
║ Validation Tools: ✓ Available (3 tools) ║
║ Report Tools: ✓ Available (2 tools) ║
╚════════════════════════════════════════════════════════════════╝
```
### Step 3.3: Session Restart Notice
---
**Session Restart Required**
Restart your Claude Code session for MCP tools to become available.
**After restart, you can:**
- Run `/validate-contracts` to check all plugins for compatibility issues
- Run `/check-agent` to validate a single agent definition
- Run `/list-interfaces` to see all plugin commands and tools
---
## Available Tools
| Category | Tools | Description |
|----------|-------|-------------|
| Parse | `parse_plugin_interface`, `parse_claude_md_agents` | Extract interfaces from README.md and agents from CLAUDE.md |
| Validation | `validate_compatibility`, `validate_agent_refs`, `validate_data_flow` | Check conflicts, tool references, and data flows |
| Report | `generate_compatibility_report`, `list_issues` | Generate reports and filter issues |
---
## Available Commands
| Command | Description |
|---------|-------------|
| `/validate-contracts` | Full marketplace compatibility validation |
| `/check-agent` | Validate single agent definition |
| `/list-interfaces` | Show all plugin interfaces |
---
## Use Cases
### 1. Pre-Release Validation
Run `/validate-contracts` before releasing a new marketplace version to catch:
- Command name conflicts between plugins
- Missing tool references in agents
- Broken data flows
### 2. Agent Development
Run `/check-agent` when creating or modifying agents to verify:
- All referenced tools exist
- Data flows are valid
- No undeclared dependencies
### 3. Plugin Audit
Run `/list-interfaces` to get a complete view of:
- All commands across plugins
- All tools available
- Potential overlap areas
---
## No Configuration Required
This plugin doesn't require any configuration files. It reads plugin manifests and README files directly from the filesystem.
**Paths it scans:**
- Marketplace: `~/.claude/plugins/marketplaces/leo-claude-mktplace/plugins/`
- Source (if available): `~/claude-plugins-work/plugins/`

View File

@@ -1,5 +1,17 @@
# /list-interfaces - Show Plugin Interfaces
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ ✅ CONTRACT-VALIDATOR · List Interfaces │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the interface listing.
Display what each plugin in the marketplace produces and accepts.
## Usage

View File

@@ -1,5 +1,17 @@
# /validate-contracts - Full Contract Validation
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ ✅ CONTRACT-VALIDATOR · Full Validation │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the validation.
Run comprehensive cross-plugin compatibility validation for the entire marketplace.
## Usage

View File

@@ -0,0 +1,195 @@
#!/bin/bash
# contract-validator SessionStart auto-validate hook
# Validates plugin contracts only when plugin files have changed since last check
# All output MUST have [contract-validator] prefix
PREFIX="[contract-validator]"
# ============================================================================
# Configuration
# ============================================================================
# Enable/disable auto-check (default: true)
AUTO_CHECK="${CONTRACT_VALIDATOR_AUTO_CHECK:-true}"
# Cache location for storing last check hash
CACHE_DIR="$HOME/.cache/claude-plugins/contract-validator"
HASH_FILE="$CACHE_DIR/last-check.hash"
# Marketplace location (installed plugins)
MARKETPLACE_PATH="$HOME/.claude/plugins/marketplaces/leo-claude-mktplace"
# ============================================================================
# Early exit if disabled
# ============================================================================
if [[ "$AUTO_CHECK" != "true" ]]; then
exit 0
fi
# ============================================================================
# Smart mode: Check if plugin files have changed
# ============================================================================
# Function to compute hash of all plugin manifest files
compute_plugin_hash() {
local hash_input=""
if [[ -d "$MARKETPLACE_PATH/plugins" ]]; then
# Hash all plugin.json, hooks.json, and agent files
while IFS= read -r -d '' file; do
if [[ -f "$file" ]]; then
hash_input+="$(md5sum "$file" 2>/dev/null | cut -d' ' -f1)"
fi
done < <(find "$MARKETPLACE_PATH/plugins" \
\( -name "plugin.json" -o -name "hooks.json" -o -name "*.md" -path "*/agents/*" \) \
-print0 2>/dev/null | sort -z)
fi
# Also include marketplace.json
if [[ -f "$MARKETPLACE_PATH/.claude-plugin/marketplace.json" ]]; then
hash_input+="$(md5sum "$MARKETPLACE_PATH/.claude-plugin/marketplace.json" 2>/dev/null | cut -d' ' -f1)"
fi
# Compute final hash
echo "$hash_input" | md5sum | cut -d' ' -f1
}
# Ensure cache directory exists
mkdir -p "$CACHE_DIR" 2>/dev/null
# Compute current hash
CURRENT_HASH=$(compute_plugin_hash)
# Check if we have a previous hash
if [[ -f "$HASH_FILE" ]]; then
PREVIOUS_HASH=$(cat "$HASH_FILE" 2>/dev/null)
# If hashes match, no changes - skip validation
if [[ "$CURRENT_HASH" == "$PREVIOUS_HASH" ]]; then
exit 0
fi
fi
# ============================================================================
# Run validation (hashes differ or no cache)
# ============================================================================
ISSUES_FOUND=0
WARNINGS=""
# Function to add warning
add_warning() {
WARNINGS+=" - $1"$'\n'
((ISSUES_FOUND++))
}
# 1. Check all installed plugins have valid plugin.json
if [[ -d "$MARKETPLACE_PATH/plugins" ]]; then
for plugin_dir in "$MARKETPLACE_PATH/plugins"/*/; do
if [[ -d "$plugin_dir" ]]; then
plugin_name=$(basename "$plugin_dir")
plugin_json="$plugin_dir/.claude-plugin/plugin.json"
if [[ ! -f "$plugin_json" ]]; then
add_warning "$plugin_name: missing .claude-plugin/plugin.json"
continue
fi
# Basic JSON validation
if ! python3 -c "import json; json.load(open('$plugin_json'))" 2>/dev/null; then
add_warning "$plugin_name: invalid JSON in plugin.json"
continue
fi
# Check required fields
if ! python3 -c "
import json
with open('$plugin_json') as f:
    data = json.load(f)
required = ['name', 'version', 'description']
missing = [r for r in required if r not in data]
if missing:
    exit(1)
" 2>/dev/null; then
add_warning "$plugin_name: plugin.json missing required fields"
fi
fi
done
fi
# 2. Check hooks.json files are properly formatted
if [[ -d "$MARKETPLACE_PATH/plugins" ]]; then
while IFS= read -r -d '' hooks_file; do
plugin_name=$(basename "$(dirname "$(dirname "$hooks_file")")")
# Validate JSON
if ! python3 -c "import json; json.load(open('$hooks_file'))" 2>/dev/null; then
add_warning "$plugin_name: invalid JSON in hooks/hooks.json"
continue
fi
# Validate hook structure
if ! python3 -c "
import json
with open('$hooks_file') as f:
    data = json.load(f)
if 'hooks' not in data:
    exit(1)
valid_events = ['PreToolUse', 'PostToolUse', 'UserPromptSubmit', 'SessionStart', 'SessionEnd', 'Notification', 'Stop', 'SubagentStop', 'PreCompact']
for event in data['hooks']:
    if event not in valid_events:
        exit(1)
    for hook in data['hooks'][event]:
        # Support both flat structure (type at top) and nested structure (matcher + hooks array)
        if 'type' in hook:
            # Flat structure: {type: 'command', command: '...'}
            pass
        elif 'matcher' in hook and 'hooks' in hook:
            # Nested structure: {matcher: '...', hooks: [{type: 'command', ...}]}
            for nested_hook in hook['hooks']:
                if 'type' not in nested_hook:
                    exit(1)
        else:
            exit(1)
" 2>/dev/null; then
add_warning "$plugin_name: hooks.json has invalid structure or events"
fi
done < <(find "$MARKETPLACE_PATH/plugins" -path "*/hooks/hooks.json" -print0 2>/dev/null)
fi
# 3. Check agent references are valid (agent files exist and are markdown)
if [[ -d "$MARKETPLACE_PATH/plugins" ]]; then
while IFS= read -r -d '' agent_file; do
plugin_name=$(basename "$(dirname "$(dirname "$agent_file")")")
agent_name=$(basename "$agent_file")
# Check file is not empty
if [[ ! -s "$agent_file" ]]; then
add_warning "$plugin_name: empty agent file $agent_name"
continue
fi
# Check file has markdown content (at least a header)
if ! grep -q '^#' "$agent_file" 2>/dev/null; then
add_warning "$plugin_name: agent $agent_name missing markdown header"
fi
done < <(find "$MARKETPLACE_PATH/plugins" -path "*/agents/*.md" -print0 2>/dev/null)
fi
# ============================================================================
# Store new hash and report results
# ============================================================================
# Always store the new hash (even if issues found - we don't want to recheck)
echo "$CURRENT_HASH" > "$HASH_FILE"
# Report any issues found (non-blocking warning)
if [[ $ISSUES_FOUND -gt 0 ]]; then
echo "$PREFIX Plugin contract validation found $ISSUES_FOUND issue(s):"
echo "$WARNINGS"
echo "$PREFIX Run /validate-contracts for full details"
fi
# Always exit 0 (non-blocking)
exit 0

View File

@@ -0,0 +1,174 @@
#!/bin/bash
# contract-validator breaking change detection hook
# Warns when plugin interface changes might break consumers
# This is a PostToolUse hook - non-blocking, warnings only
PREFIX="[contract-validator]"
# Check if warnings are enabled (default: true)
if [[ "${CONTRACT_VALIDATOR_BREAKING_WARN:-true}" != "true" ]]; then
exit 0
fi
# Read tool input from stdin
INPUT=$(cat)
# Extract file_path from JSON input
FILE_PATH=$(echo "$INPUT" | grep -o '"file_path"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1 | sed 's/.*"file_path"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/')
# If no file_path found, exit silently
if [ -z "$FILE_PATH" ]; then
exit 0
fi
# Check if file is a plugin interface file
is_interface_file() {
local file="$1"
case "$file" in
*/plugin.json) return 0 ;;
*/.claude-plugin/plugin.json) return 0 ;;
*/hooks.json) return 0 ;;
*/hooks/hooks.json) return 0 ;;
*/.mcp.json) return 0 ;;
*/agents/*.md) return 0 ;;
*/commands/*.md) return 0 ;;
*/skills/*.md) return 0 ;;
esac
return 1
}
# Exit if not an interface file
if ! is_interface_file "$FILE_PATH"; then
exit 0
fi
# Check if file exists and is in a git repo
if [[ ! -f "$FILE_PATH" ]]; then
exit 0
fi
# Get the directory containing the file
FILE_DIR=$(dirname "$FILE_PATH")
FILE_NAME=$(basename "$FILE_PATH")
# Try to get the previous version from git
cd "$FILE_DIR" 2>/dev/null || exit 0
# Check if we're in a git repo
if ! git rev-parse --git-dir > /dev/null 2>&1; then
exit 0
fi
# Get previous version (HEAD version before current changes)
PREV_CONTENT=$(git show HEAD:"$FILE_PATH" 2>/dev/null || echo "")
# If no previous version, this is a new file - no breaking changes possible
if [ -z "$PREV_CONTENT" ]; then
exit 0
fi
# Read current content
CURR_CONTENT=$(cat "$FILE_PATH" 2>/dev/null || echo "")
if [ -z "$CURR_CONTENT" ]; then
exit 0
fi
BREAKING_CHANGES=()
# Detect breaking changes based on file type
case "$FILE_PATH" in
*/plugin.json|*/.claude-plugin/plugin.json)
# Check for removed or renamed fields in plugin.json
# Check if name changed
PREV_NAME=$(echo "$PREV_CONTENT" | grep -o '"name"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1)
CURR_NAME=$(echo "$CURR_CONTENT" | grep -o '"name"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1)
if [ -n "$PREV_NAME" ] && [ "$PREV_NAME" != "$CURR_NAME" ]; then
BREAKING_CHANGES+=("Plugin name changed - consumers may need updates")
fi
# Check if version had major bump (semantic versioning)
PREV_VER=$(echo "$PREV_CONTENT" | grep -o '"version"[[:space:]]*:[[:space:]]*"[^"]*"' | sed 's/.*"\([0-9]*\)\..*/\1/')
CURR_VER=$(echo "$CURR_CONTENT" | grep -o '"version"[[:space:]]*:[[:space:]]*"[^"]*"' | sed 's/.*"\([0-9]*\)\..*/\1/')
if [ -n "$PREV_VER" ] && [ -n "$CURR_VER" ] && [ "$CURR_VER" -gt "$PREV_VER" ] 2>/dev/null; then
BREAKING_CHANGES+=("Major version bump detected - verify breaking changes documented")
fi
;;
*/hooks.json|*/hooks/hooks.json)
# Check for removed hook events
PREV_EVENTS=$(echo "$PREV_CONTENT" | grep -oE '"(PreToolUse|PostToolUse|UserPromptSubmit|SessionStart|SessionEnd|Notification|Stop|SubagentStop|PreCompact)"' | sort -u)
CURR_EVENTS=$(echo "$CURR_CONTENT" | grep -oE '"(PreToolUse|PostToolUse|UserPromptSubmit|SessionStart|SessionEnd|Notification|Stop|SubagentStop|PreCompact)"' | sort -u)
# Find removed events
REMOVED_EVENTS=$(comm -23 <(echo "$PREV_EVENTS") <(echo "$CURR_EVENTS") 2>/dev/null)
if [ -n "$REMOVED_EVENTS" ]; then
BREAKING_CHANGES+=("Hook events removed: $(echo $REMOVED_EVENTS | tr '\n' ' ')")
fi
# Check for changed matchers
PREV_MATCHERS=$(echo "$PREV_CONTENT" | grep -o '"matcher"[[:space:]]*:[[:space:]]*"[^"]*"' | sort -u)
CURR_MATCHERS=$(echo "$CURR_CONTENT" | grep -o '"matcher"[[:space:]]*:[[:space:]]*"[^"]*"' | sort -u)
if [ "$PREV_MATCHERS" != "$CURR_MATCHERS" ]; then
BREAKING_CHANGES+=("Hook matchers changed - verify tool coverage")
fi
;;
*/.mcp.json)
# Check for removed MCP servers
PREV_SERVERS=$(echo "$PREV_CONTENT" | grep -o '"[^"]*"[[:space:]]*:' | grep -v "mcpServers" | sort -u)
CURR_SERVERS=$(echo "$CURR_CONTENT" | grep -o '"[^"]*"[[:space:]]*:' | grep -v "mcpServers" | sort -u)
REMOVED_SERVERS=$(comm -23 <(echo "$PREV_SERVERS") <(echo "$CURR_SERVERS") 2>/dev/null)
if [ -n "$REMOVED_SERVERS" ]; then
BREAKING_CHANGES+=("MCP servers removed - tools may be unavailable")
fi
;;
*/agents/*.md)
# Check if agent file was significantly reduced (might indicate removal of capabilities)
PREV_LINES=$(echo "$PREV_CONTENT" | wc -l)
CURR_LINES=$(echo "$CURR_CONTENT" | wc -l)
# If more than 50% reduction, warn
if [ "$PREV_LINES" -gt 10 ] && [ "$CURR_LINES" -lt $((PREV_LINES / 2)) ]; then
BREAKING_CHANGES+=("Agent definition significantly reduced - capabilities may be removed")
fi
# Check if agent name/description changed in frontmatter
PREV_DESC=$(echo "$PREV_CONTENT" | head -20 | grep -i "description" | head -1)
CURR_DESC=$(echo "$CURR_CONTENT" | head -20 | grep -i "description" | head -1)
if [ -n "$PREV_DESC" ] && [ "$PREV_DESC" != "$CURR_DESC" ]; then
BREAKING_CHANGES+=("Agent description changed - verify consumer expectations")
fi
;;
*/commands/*.md|*/skills/*.md)
# Check if command/skill was significantly changed
PREV_LINES=$(echo "$PREV_CONTENT" | wc -l)
CURR_LINES=$(echo "$CURR_CONTENT" | wc -l)
if [ "$PREV_LINES" -gt 10 ] && [ "$CURR_LINES" -lt $((PREV_LINES / 2)) ]; then
BREAKING_CHANGES+=("Command/skill significantly reduced - behavior may change")
fi
;;
esac
# Output warnings if any breaking changes detected
if [[ ${#BREAKING_CHANGES[@]} -gt 0 ]]; then
echo ""
echo "$PREFIX WARNING: Potential breaking changes in $(basename "$FILE_PATH")"
echo "$PREFIX ============================================"
for change in "${BREAKING_CHANGES[@]}"; do
echo "$PREFIX - $change"
done
echo "$PREFIX ============================================"
echo "$PREFIX Consider updating CHANGELOG and notifying consumers"
echo ""
fi
# Always exit 0 - non-blocking
exit 0

View File

@@ -0,0 +1,21 @@
{
"hooks": {
"SessionStart": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/auto-validate.sh"
}
],
"PostToolUse": [
{
"matcher": "Edit|Write",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/breaking-change-check.sh"
}
]
}
]
}
}

View File

@@ -2,9 +2,8 @@
"mcpServers": { "mcpServers": {
"data-platform": { "data-platform": {
"type": "stdio", "type": "stdio",
"command": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/data-platform/.venv/bin/python", "command": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/data-platform/run.sh",
"args": ["-m", "mcp_server.server"], "args": []
"cwd": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/data-platform"
} }
} }
} }

View File

@@ -49,10 +49,13 @@ DBT_PROFILES_DIR=~/.dbt
| `/initial-setup` | Interactive setup wizard for PostgreSQL and dbt configuration |
| `/ingest` | Load data from files or database |
| `/profile` | Generate data profile and statistics |
| `/data-quality` | Data quality assessment with pass/warn/fail scoring |
| `/schema` | Show database/DataFrame schema |
| `/explain` | Explain dbt model lineage |
| `/lineage` | Visualize data dependencies (ASCII) |
| `/lineage-viz` | Generate Mermaid flowchart for dbt lineage |
| `/run` | Execute dbt models |
| `/dbt-test` | Run dbt tests with formatted results |
## Agents

View File

@@ -2,6 +2,16 @@
You are a data analysis specialist. Your role is to help users explore, profile, and understand their data.
## Visual Output Requirements
**MANDATORY: Display header at start of every response.**
```
┌──────────────────────────────────────────────────────────────────┐
│ 📊 DATA-PLATFORM · Data Analysis │
└──────────────────────────────────────────────────────────────────┘
```
## Capabilities
- Profile datasets with statistical summaries

View File

@@ -2,6 +2,16 @@
You are a data ingestion specialist. Your role is to help users load, transform, and prepare data for analysis.
## Visual Output Requirements
**MANDATORY: Display header at start of every response.**
```
┌──────────────────────────────────────────────────────────────────┐
│ 📊 DATA-PLATFORM · Data Ingestion │
└──────────────────────────────────────────────────────────────────┘
```
## Capabilities
- Load data from CSV, Parquet, JSON files

View File

@@ -0,0 +1,115 @@
# /data-quality - Data Quality Assessment
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 📊 DATA-PLATFORM · Data Quality │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the assessment.
Comprehensive data quality check for DataFrames with pass/warn/fail scoring.
## Usage
```
/data-quality <data_ref> [--strict]
```
## Workflow
1. **Get data reference**:
- If no data_ref provided, use `list_data` to show available options
- Validate the data_ref exists
2. **Null analysis**:
- Calculate null percentage per column
- **PASS**: < 5% nulls
- **WARN**: 5-20% nulls
- **FAIL**: > 20% nulls
3. **Duplicate detection**:
- Check for fully duplicated rows
- **PASS**: 0% duplicates
- **WARN**: < 1% duplicates
- **FAIL**: >= 1% duplicates
4. **Type consistency**:
- Identify mixed-type columns (object columns with mixed content)
- Flag columns that could be numeric but contain strings
- **PASS**: All columns have consistent types
- **FAIL**: Mixed types detected
5. **Outlier detection** (numeric columns):
- Use IQR method (values beyond 1.5 * IQR)
- Report percentage of outliers per column
- **PASS**: < 1% outliers
- **WARN**: 1-5% outliers
- **FAIL**: > 5% outliers
6. **Generate quality report**:
- Overall quality score (0-100)
- Per-column breakdown
- Recommendations for remediation
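The per-column checks in steps 2-5 could look roughly like this in pandas (a sketch; the command itself works through the MCP `describe` and `head` tools rather than a local DataFrame):
```python
import pandas as pd

def column_checks(df: pd.DataFrame) -> pd.DataFrame:
    """Per-column null % and IQR outlier % with PASS/WARN/FAIL status."""
    rows = []
    dup_pct = df.duplicated().mean() * 100
    for col in df.columns:
        null_pct = df[col].isna().mean() * 100
        outlier_pct = None
        if pd.api.types.is_numeric_dtype(df[col]):
            q1, q3 = df[col].quantile([0.25, 0.75])
            iqr = q3 - q1
            mask = (df[col] < q1 - 1.5 * iqr) | (df[col] > q3 + 1.5 * iqr)
            outlier_pct = round(mask.mean() * 100, 1)
        # Thresholds from step 2: PASS < 5%, WARN 5-20%, FAIL > 20% nulls
        status = "PASS" if null_pct < 5 else "WARN" if null_pct <= 20 else "FAIL"
        rows.append({"column": col, "null_pct": round(null_pct, 1),
                     "outlier_pct": outlier_pct, "status": status})
    print(f"Duplicate rows: {dup_pct:.1f}%")
    return pd.DataFrame(rows)
```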
## Report Format
```
=== Data Quality Report ===
Dataset: sales_data
Rows: 10,000 | Columns: 15
Overall Score: 82/100 [PASS]
--- Column Analysis ---
| Column | Nulls | Dups | Type | Outliers | Status |
|--------------|-------|------|----------|----------|--------|
| customer_id | 0.0% | - | int64 | 0.2% | PASS |
| email | 2.3% | - | object | - | PASS |
| amount | 15.2% | - | float64 | 3.1% | WARN |
| created_at | 0.0% | - | datetime | - | PASS |
--- Issues Found ---
[WARN] Column 'amount': 15.2% null values (threshold: 5%)
[WARN] Column 'amount': 3.1% outliers detected
[FAIL] 1.2% duplicate rows detected (12 rows)
--- Recommendations ---
1. Investigate null values in 'amount' column
2. Review outliers in 'amount' - may be data entry errors
3. Remove or deduplicate 12 duplicate rows
```
## Options
| Flag | Description |
|------|-------------|
| `--strict` | Use stricter thresholds (WARN at 1% nulls, FAIL at 5%) |
## Examples
```
/data-quality sales_data
/data-quality df_customers --strict
```
## Scoring
| Component | Weight | Scoring |
|-----------|--------|---------|
| Nulls | 30% | 100 - (avg_null_pct * 2) |
| Duplicates | 20% | 100 - (dup_pct * 50) |
| Type consistency | 25% | 100 if clean, 0 if mixed |
| Outliers | 25% | 100 - (avg_outlier_pct * 10) |
Final score: Weighted average, capped at 0-100
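A small sketch of this weighting, with the component formulas taken directly from the table above:
```python
def quality_score(avg_null_pct, dup_pct, types_clean, avg_outlier_pct):
    """Weighted quality score (0-100) per the scoring table."""
    nulls    = max(0, 100 - avg_null_pct * 2)
    dups     = max(0, 100 - dup_pct * 50)
    types    = 100 if types_clean else 0
    outliers = max(0, 100 - avg_outlier_pct * 10)
    score = 0.30 * nulls + 0.20 * dups + 0.25 * types + 0.25 * outliers
    return round(min(100, max(0, score)))

# Example: 4% nulls, 0.5% duplicates, clean types, 2% outliers -> 88
print(quality_score(4, 0.5, True, 2))
```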
## Available Tools
Use these MCP tools:
- `describe` - Get statistical summary (for outlier detection)
- `head` - Preview data
- `list_data` - List available DataFrames

View File

@@ -0,0 +1,131 @@
# /dbt-test - Run dbt Tests
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 📊 DATA-PLATFORM · dbt Tests │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the tests.
Execute dbt tests with formatted pass/fail results.
## Usage
```
/dbt-test [selection] [--warn-only]
```
## Workflow
1. **Pre-validation** (MANDATORY):
- Use `dbt_parse` to validate project first
- If validation fails, show errors and STOP
2. **Execute tests**:
- Use `dbt_test` with provided selection
- Capture all test results
3. **Format results**:
- Group by test type (schema vs. data)
- Show pass/fail status with counts
- Display failure details
## Report Format
```
=== dbt Test Results ===
Project: my_project
Selection: tag:critical
--- Summary ---
Total: 24 tests
PASS: 22 (92%)
FAIL: 1 (4%)
WARN: 1 (4%)
SKIP: 0 (0%)
--- Schema Tests (18) ---
[PASS] unique_dim_customers_customer_id
[PASS] not_null_dim_customers_customer_id
[PASS] not_null_dim_customers_email
[PASS] accepted_values_dim_customers_status
[FAIL] relationships_fct_orders_customer_id
--- Data Tests (6) ---
[PASS] assert_positive_order_amounts
[PASS] assert_valid_dates
[WARN] assert_recent_orders (threshold: 7 days)
--- Failure Details ---
Test: relationships_fct_orders_customer_id
Type: schema (relationships)
Model: fct_orders
Message: 15 records failed referential integrity check
Query: SELECT * FROM fct_orders WHERE customer_id NOT IN (SELECT customer_id FROM dim_customers)
--- Warning Details ---
Test: assert_recent_orders
Type: data
Message: No orders in last 7 days (expected for dev environment)
Severity: warn
```
## Selection Syntax
| Pattern | Meaning |
|---------|---------|
| (none) | Run all tests |
| `model_name` | Tests for specific model |
| `+model_name` | Tests for model and upstream |
| `tag:critical` | Tests with tag |
| `test_type:schema` | Only schema tests |
| `test_type:data` | Only data tests |
## Options
| Flag | Description |
|------|-------------|
| `--warn-only` | Treat failures as warnings (don't fail CI) |
## Examples
```
/dbt-test # Run all tests
/dbt-test dim_customers # Tests for specific model
/dbt-test tag:critical # Run critical tests only
/dbt-test +fct_orders # Test model and its upstream
```
## Test Types
### Schema Tests
Built-in tests defined in `schema.yml`:
- `unique` - No duplicate values
- `not_null` - No null values
- `accepted_values` - Value in allowed list
- `relationships` - Foreign key integrity
### Data Tests
Custom SQL tests in `tests/` directory:
- Return rows that fail the assertion
- Zero rows = pass, any rows = fail
## Exit Codes
| Code | Meaning |
|------|---------|
| 0 | All tests passed |
| 1 | One or more tests failed |
| 2 | dbt error (parse failure, etc.) |
## Available Tools
Use these MCP tools:
- `dbt_parse` - Pre-validation (ALWAYS RUN FIRST)
- `dbt_test` - Execute tests (REQUIRED)
- `dbt_build` - Alternative: run + test together


@@ -1,5 +1,17 @@
# /explain - dbt Model Explanation
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 📊 DATA-PLATFORM · Model Explanation │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the explanation.
Explain a dbt model's purpose, dependencies, and SQL logic.
## Usage


@@ -1,5 +1,17 @@
# /ingest - Data Ingestion
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 📊 DATA-PLATFORM · Ingest │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the ingestion.
Load data from files or database into the data platform.
## Usage


@@ -4,6 +4,18 @@ description: Interactive setup wizard for data-platform plugin - configures MCP
# Data Platform Setup Wizard
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 📊 DATA-PLATFORM · Setup Wizard │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the setup.
This command sets up the data-platform plugin with pandas, PostgreSQL, and dbt integration.
## Important Context


@@ -0,0 +1,137 @@
# /lineage-viz - Mermaid Lineage Visualization
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 📊 DATA-PLATFORM · Lineage Visualization │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the visualization.
Generate Mermaid flowchart syntax for dbt model lineage.
## Usage
```
/lineage-viz <model_name> [--direction TB|LR] [--depth N]
```
## Workflow
1. **Get lineage data**:
- Use `dbt_lineage` to fetch model dependencies
- Capture upstream sources and downstream consumers
2. **Build Mermaid graph**:
- Create nodes for each model/source
- Style nodes by materialization type
- Add directional arrows for dependencies
3. **Output**:
- Render Mermaid flowchart syntax
- Include copy-paste ready code block
## Output Format
```mermaid
flowchart LR
subgraph Sources
raw_customers[(raw_customers)]
raw_orders[(raw_orders)]
end
subgraph Staging
stg_customers[stg_customers]
stg_orders[stg_orders]
end
subgraph Marts
dim_customers{{dim_customers}}
fct_orders{{fct_orders}}
end
raw_customers --> stg_customers
raw_orders --> stg_orders
stg_customers --> dim_customers
stg_orders --> fct_orders
dim_customers --> fct_orders
```
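A flowchart like the one above can be assembled mechanically from the lineage edges. A small sketch, assuming the lineage has already been fetched (via `dbt_lineage`) into `(upstream, downstream)` pairs plus a materialization lookup:
```python
SHAPES = {
    "source": "[({name})]",
    "view": "[{name}]",
    "table": "{{{{{name}}}}}",
    "incremental": "{{{{{name}}}}}",
    "ephemeral": "[/{name}/]",
}

def to_mermaid(edges, materializations, direction="LR"):
    """Render dbt lineage edges as a Mermaid flowchart (sketch, not the plugin's code)."""
    lines = [f"flowchart {direction}"]
    nodes = {n for edge in edges for n in edge}
    for node in sorted(nodes):
        shape = SHAPES.get(materializations.get(node, "view"), "[{name}]")
        lines.append(f"    {node}{shape.format(name=node)}")
    for upstream, downstream in edges:
        lines.append(f"    {upstream} --> {downstream}")
    return "\n".join(lines)

print(to_mermaid(
    edges=[("raw_orders", "stg_orders"), ("stg_orders", "fct_orders")],
    materializations={"raw_orders": "source", "stg_orders": "view", "fct_orders": "table"},
))
```
Printing the result yields copy-paste-ready Mermaid that uses the node shapes listed below.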
## Node Styles
| Materialization | Mermaid Shape | Example |
|-----------------|---------------|---------|
| source | Cylinder `[( )]` | `raw_data[(raw_data)]` |
| view | Rectangle `[ ]` | `stg_model[stg_model]` |
| table | Double braces `{{ }}` | `dim_model{{dim_model}}` |
| incremental | Hexagon `{{ }}` | `fct_model{{fct_model}}` |
| ephemeral | Dashed `[/ /]` | `tmp_model[/tmp_model/]` |
## Options
| Flag | Description |
|------|-------------|
| `--direction TB` | Top-to-bottom layout (default: LR = left-to-right) |
| `--depth N` | Limit lineage depth (default: unlimited) |
## Examples
```
/lineage-viz dim_customers
/lineage-viz fct_orders --direction TB
/lineage-viz rpt_revenue --depth 2
```
## Usage Tips
1. **Paste in documentation**: Copy the output directly into README.md or docs
2. **GitHub/GitLab rendering**: Both platforms render Mermaid natively
3. **Mermaid Live Editor**: Paste at https://mermaid.live for interactive editing
## Example Output
For `/lineage-viz fct_orders`:
~~~markdown
```mermaid
flowchart LR
%% Sources
raw_customers[(raw_customers)]
raw_orders[(raw_orders)]
raw_products[(raw_products)]
%% Staging
stg_customers[stg_customers]
stg_orders[stg_orders]
stg_products[stg_products]
%% Marts
dim_customers{{dim_customers}}
dim_products{{dim_products}}
fct_orders{{fct_orders}}
%% Dependencies
raw_customers --> stg_customers
raw_orders --> stg_orders
raw_products --> stg_products
stg_customers --> dim_customers
stg_products --> dim_products
stg_orders --> fct_orders
dim_customers --> fct_orders
dim_products --> fct_orders
%% Highlight target model
style fct_orders fill:#f96,stroke:#333,stroke-width:2px
```
~~~
## Available Tools
Use these MCP tools:
- `dbt_lineage` - Get model dependencies (REQUIRED)
- `dbt_ls` - List dbt resources
- `dbt_docs_generate` - Generate full manifest if needed


@@ -1,5 +1,17 @@
# /lineage - Data Lineage Visualization
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 📊 DATA-PLATFORM · Lineage │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the visualization.
Show data lineage for dbt models or database tables.
## Usage


@@ -1,5 +1,17 @@
# /profile - Data Profiling
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 📊 DATA-PLATFORM · Data Profile │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the profiling.
Generate statistical profile and quality report for a DataFrame.
## Usage


@@ -1,5 +1,17 @@
# /run - Execute dbt Models
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 📊 DATA-PLATFORM · dbt Run │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the execution.
Run dbt models with automatic pre-validation.
## Usage


@@ -1,5 +1,17 @@
# /schema - Schema Exploration
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 📊 DATA-PLATFORM · Schema Explorer │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the exploration.
Display schema information for database tables or DataFrames.
## Usage


@@ -5,6 +5,17 @@
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/startup-check.sh"
}
],
"PostToolUse": [
{
"matcher": "Edit|Write",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/schema-diff-check.sh"
}
]
}
]
}
}


@@ -0,0 +1,138 @@
#!/bin/bash
# data-platform schema diff detection hook
# Warns about potentially breaking schema changes
# This is a command hook - non-blocking, warnings only
PREFIX="[data-platform]"
# Check if warnings are enabled (default: true)
if [[ "${DATA_PLATFORM_SCHEMA_WARN:-true}" != "true" ]]; then
exit 0
fi
# Read tool input from stdin (JSON with file_path)
INPUT=$(cat)
# Extract file_path from JSON input
FILE_PATH=$(echo "$INPUT" | grep -o '"file_path"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1 | sed 's/.*"file_path"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/')
# If no file_path found, exit silently
if [ -z "$FILE_PATH" ]; then
exit 0
fi
# Check if file is a schema-related file
is_schema_file() {
local file="$1"
# Check file extension
case "$file" in
*.sql) return 0 ;;
*/migrations/*.py) return 0 ;;
*/migrations/*.sql) return 0 ;;
*/models/*.py) return 0 ;;
*/models/*.sql) return 0 ;;
*schema.prisma) return 0 ;;
*schema.graphql) return 0 ;;
*/dbt/models/*.sql) return 0 ;;
*/dbt/models/*.yml) return 0 ;;
*/alembic/versions/*.py) return 0 ;;
esac
# Check directory patterns
if echo "$file" | grep -qE "(migrations?|schemas?|models)/"; then
return 0
fi
return 1
}
# Exit if not a schema file
if ! is_schema_file "$FILE_PATH"; then
exit 0
fi
# Read the file content (if it exists and is readable)
if [[ ! -f "$FILE_PATH" ]]; then
exit 0
fi
FILE_CONTENT=$(cat "$FILE_PATH" 2>/dev/null || echo "")
if [[ -z "$FILE_CONTENT" ]]; then
exit 0
fi
# Detect breaking changes
BREAKING_CHANGES=()
# Check for DROP COLUMN
if echo "$FILE_CONTENT" | grep -qiE "DROP[[:space:]]+COLUMN"; then
BREAKING_CHANGES+=("DROP COLUMN detected - may break existing queries")
fi
# Check for DROP TABLE
if echo "$FILE_CONTENT" | grep -qiE "DROP[[:space:]]+TABLE"; then
BREAKING_CHANGES+=("DROP TABLE detected - data loss risk")
fi
# Check for DROP INDEX
if echo "$FILE_CONTENT" | grep -qiE "DROP[[:space:]]+INDEX"; then
BREAKING_CHANGES+=("DROP INDEX detected - may impact query performance")
fi
# Check for ALTER TYPE / MODIFY COLUMN type changes
if echo "$FILE_CONTENT" | grep -qiE "ALTER[[:space:]]+.*(TYPE|COLUMN.*TYPE)"; then
BREAKING_CHANGES+=("Column type change detected - may cause data truncation")
fi
if echo "$FILE_CONTENT" | grep -qiE "MODIFY[[:space:]]+COLUMN"; then
BREAKING_CHANGES+=("MODIFY COLUMN detected - verify data compatibility")
fi
# Check for adding NOT NULL to existing column
if echo "$FILE_CONTENT" | grep -qiE "ALTER[[:space:]]+.*SET[[:space:]]+NOT[[:space:]]+NULL"; then
BREAKING_CHANGES+=("Adding NOT NULL constraint - existing NULL values will fail")
fi
if echo "$FILE_CONTENT" | grep -qiE "ADD[[:space:]]+.*NOT[[:space:]]+NULL[^[:space:]]*[[:space:]]+DEFAULT"; then
# Adding NOT NULL with DEFAULT is usually safe - don't warn
:
elif echo "$FILE_CONTENT" | grep -qiE "ADD[[:space:]]+.*NOT[[:space:]]+NULL"; then
BREAKING_CHANGES+=("Adding NOT NULL column without DEFAULT - INSERT may fail")
fi
# Check for RENAME TABLE/COLUMN
if echo "$FILE_CONTENT" | grep -qiE "RENAME[[:space:]]+(TABLE|COLUMN|TO)"; then
BREAKING_CHANGES+=("RENAME detected - update all references")
fi
# Check for removed Django/SQLAlchemy model fields (Python files)
# Field removals are only visible in the diff against the last committed
# version, not in the saved file, so compare with git diff
if [[ "$FILE_PATH" == *.py ]]; then
if git diff -- "$FILE_PATH" 2>/dev/null | grep -qE "^-[[:space:]]*[a-z_]+[[:space:]]*=.*Field\("; then
BREAKING_CHANGES+=("Model field removal detected in Python ORM")
fi
fi
# Check for Prisma schema changes
if [[ "$FILE_PATH" == *schema.prisma ]]; then
if echo "$FILE_CONTENT" | grep -qE "@relation.*onDelete.*Cascade"; then
BREAKING_CHANGES+=("Cascade delete detected - verify data safety")
fi
fi
# Output warnings if any breaking changes detected
if [[ ${#BREAKING_CHANGES[@]} -gt 0 ]]; then
echo ""
echo "$PREFIX WARNING: Potential breaking schema changes in $(basename "$FILE_PATH")"
echo "$PREFIX ============================================"
for change in "${BREAKING_CHANGES[@]}"; do
echo "$PREFIX - $change"
done
echo "$PREFIX ============================================"
echo "$PREFIX Review before deploying to production"
echo ""
fi
# Always exit 0 - non-blocking
exit 0


@@ -22,6 +22,9 @@ doc-guardian monitors your code changes via hooks:
|---------|-------------|
| `/doc-audit` | Full project scan - reports all drift without changing anything |
| `/doc-sync` | Apply all pending documentation updates in one commit |
| `/changelog-gen` | Generate changelog from conventional commits in Keep-a-Changelog format |
| `/doc-coverage` | Calculate documentation coverage percentage for functions and classes |
| `/stale-docs` | Detect documentation files that are stale relative to their associated code |
## Hooks
@@ -33,6 +36,8 @@ doc-guardian monitors your code changes via hooks:
- **Version Drift**: Python 3.9 in docs but 3.11 in pyproject.toml
- **Missing Docs**: Public functions without docstrings
- **Stale Examples**: CLI examples that no longer work
- **Low Coverage**: Undocumented functions and classes
- **Stale Files**: Documentation that hasn't been updated alongside code changes
## Installation


@@ -6,6 +6,16 @@ description: Specialized agent for documentation analysis and drift detection
You are an expert technical writer and documentation analyst. Your role is to detect discrepancies between code and documentation.
## Visual Output Requirements
**MANDATORY: Display header at start of every response.**
```
┌──────────────────────────────────────────────────────────────────┐
│ 📝 DOC-GUARDIAN · Documentation Analysis │
└──────────────────────────────────────────────────────────────────┘
```
## Capabilities
1. **Pattern Recognition**


@@ -11,6 +11,9 @@ This project uses doc-guardian for automatic documentation synchronization.
- Pending updates are queued silently during work
- Run `/doc-sync` to apply all pending documentation updates
- Run `/doc-audit` for a full project documentation review
- Run `/changelog-gen` to generate changelog from conventional commits
- Run `/doc-coverage` to check documentation coverage metrics
- Run `/stale-docs` to find documentation that may be outdated
### Documentation Files Tracked
- README.md (root and subdirectories)


@@ -0,0 +1,121 @@
---
description: Generate changelog from conventional commits in Keep-a-Changelog format
---
# Changelog Generation
Generate a changelog entry from conventional commits.
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 📝 DOC-GUARDIAN · Changelog Generation │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the generation.
## Process
1. **Identify Commit Range**
- Default: commits since last tag
- Optional: specify range (e.g., `v1.0.0..HEAD`)
- Detect if this is first release (no previous tags)
2. **Parse Conventional Commits** (a parsing sketch follows this list)
Extract from commit messages following the pattern:
```
<type>(<scope>): <description>
[optional body]
[optional footer(s)]
```
**Recognized Types:**
| Type | Changelog Section |
|------|------------------|
| `feat` | Added |
| `fix` | Fixed |
| `docs` | Documentation |
| `perf` | Performance |
| `refactor` | Changed |
| `style` | Changed |
| `test` | Testing |
| `build` | Build |
| `ci` | CI/CD |
| `chore` | Maintenance |
| `BREAKING CHANGE` | Breaking Changes |
3. **Group by Type**
Organize commits into Keep-a-Changelog sections:
- Breaking Changes (if any `!` suffix or `BREAKING CHANGE` footer)
- Added (feat)
- Changed (refactor, style, perf)
- Deprecated
- Removed
- Fixed (fix)
- Security
4. **Format Entries**
For each commit:
- Extract scope (if present) as prefix
- Use description as entry text
- Link to commit hash if repository URL available
- Include PR/issue references from footer
5. **Output Format**
```markdown
## [Unreleased]
### Breaking Changes
- **scope**: Description of breaking change
### Added
- **scope**: New feature description
- Another feature without scope
### Changed
- **scope**: Refactoring description
### Fixed
- **scope**: Bug fix description
### Documentation
- Updated README with new examples
```
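A minimal sketch of the parsing in step 2, assuming commit subjects are read straight from `git log` (the command's internal implementation may differ):
```python
import re
import subprocess
from collections import defaultdict

SECTION = {  # type -> Keep-a-Changelog section, per the table above
    "feat": "Added", "fix": "Fixed", "docs": "Documentation", "perf": "Performance",
    "refactor": "Changed", "style": "Changed", "test": "Testing",
    "build": "Build", "ci": "CI/CD", "chore": "Maintenance",
}
PATTERN = re.compile(r"^(?P<type>\w+)(?:\((?P<scope>[^)]*)\))?(?P<bang>!)?:\s*(?P<desc>.+)$")

def changelog_sections(rev_range: str = "HEAD"):
    """Group commit subjects in rev_range into changelog sections."""
    log = subprocess.run(
        ["git", "log", "--no-merges", "--pretty=%s", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    sections = defaultdict(list)
    for subject in log:
        m = PATTERN.match(subject)
        if not m:
            sections["Other"].append(subject)  # non-conventional commit
            continue
        section = "Breaking Changes" if m["bang"] else SECTION.get(m["type"], "Other")
        entry = f"**{m['scope']}**: {m['desc']}" if m["scope"] else m["desc"]
        sections[section].append(entry)
    return sections

# Note: only the `!` marker is handled here; BREAKING CHANGE footers would
# need the full commit body rather than the subject line.
```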
## Options
| Flag | Description | Default |
|------|-------------|---------|
| `--from <tag>` | Start from specific tag | Latest tag |
| `--to <ref>` | End at specific ref | HEAD |
| `--version <ver>` | Set version header | [Unreleased] |
| `--include-merge` | Include merge commits | false |
| `--group-by-scope` | Group by scope within sections | false |
## Integration
The generated output is designed to be copied directly into CHANGELOG.md:
- Follows [Keep a Changelog](https://keepachangelog.com) format
- Compatible with semantic versioning
- Excludes non-user-facing commits (chore, ci, test by default)
## Example Usage
```
/changelog-gen
/changelog-gen --from v1.0.0 --version 1.1.0
/changelog-gen --include-merge --group-by-scope
```
## Non-Conventional Commits
Commits not following conventional format are:
- Listed under "Other" section
- Flagged for manual categorization
- Skipped if `--strict` flag is used


@@ -6,6 +6,18 @@ description: Full documentation audit - scans entire project for doc drift witho
Perform a comprehensive documentation drift analysis.
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 📝 DOC-GUARDIAN · Documentation Audit │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the audit.
## Process
1. **Inventory Documentation Files**


@@ -0,0 +1,140 @@
---
description: Calculate documentation coverage percentage for functions and classes
---
# Documentation Coverage
Analyze codebase to calculate documentation coverage metrics.
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 📝 DOC-GUARDIAN · Documentation Coverage │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the analysis.
## Process
1. **Scan Source Files**
Identify all documentable items:
**Python:**
- Functions (def)
- Classes
- Methods
- Module-level docstrings
**JavaScript/TypeScript:**
- Functions (function, arrow functions)
- Classes
- Methods
- JSDoc comments
**Other Languages:**
- Adapt patterns for Go, Rust, etc.
2. **Determine Documentation Status**
For each item, check:
- Has docstring/JSDoc comment
- Docstring is non-empty and meaningful (not just `pass` or `TODO`)
- Parameters are documented (for detailed mode)
- Return type is documented (for detailed mode)
3. **Calculate Metrics** (see the sketch after the report example below)
```
Coverage = (Documented Items / Total Items) * 100
```
**Levels:**
- Basic: Item has any docstring
- Standard: Docstring describes purpose
- Complete: All parameters and return documented
4. **Output Format**
```
## Documentation Coverage Report
### Summary
- Total documentable items: 156
- Documented: 142
- Coverage: 91.0%
### By Type
| Type | Total | Documented | Coverage |
|------|-------|------------|----------|
| Functions | 89 | 85 | 95.5% |
| Classes | 23 | 21 | 91.3% |
| Methods | 44 | 36 | 81.8% |
### By Directory
| Path | Total | Documented | Coverage |
|------|-------|------------|----------|
| src/api/ | 34 | 32 | 94.1% |
| src/utils/ | 28 | 28 | 100.0% |
| src/models/ | 45 | 38 | 84.4% |
| tests/ | 49 | 44 | 89.8% |
### Undocumented Items
- [ ] src/api/handlers.py:45 `create_order()`
- [ ] src/api/handlers.py:78 `update_order()`
- [ ] src/models/user.py:23 `UserModel.validate()`
```
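The metric in step 3 can be approximated for Python sources with the standard `ast` module; other languages need their own parsers. A sketch under that assumption:
```python
import ast
from pathlib import Path

def python_doc_coverage(root: str = ".") -> float:
    """Percentage of functions, classes, and methods under root with a docstring."""
    total = documented = 0
    for path in Path(root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except (SyntaxError, UnicodeDecodeError):
            continue  # skip unparseable files
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
                total += 1
                if ast.get_docstring(node):
                    documented += 1
    return 100.0 * documented / total if total else 100.0

print(f"Coverage: {python_doc_coverage('src'):.1f}%")
```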
## Options
| Flag | Description | Default |
|------|-------------|---------|
| `--path <dir>` | Scan specific directory | Project root |
| `--exclude <glob>` | Exclude files matching pattern | `**/test_*,**/*_test.*` |
| `--include-private` | Include private members (_prefixed) | false |
| `--include-tests` | Include test files | false |
| `--min-coverage <pct>` | Fail if below threshold | none |
| `--format <fmt>` | Output format (table, json, markdown) | table |
| `--detailed` | Check parameter/return docs | false |
## Thresholds
Common coverage targets:
| Level | Coverage | Description |
|-------|----------|-------------|
| Minimal | 60% | Basic documentation exists |
| Good | 80% | Most public APIs documented |
| Excellent | 95% | Comprehensive documentation |
## CI Integration
Use `--min-coverage` to enforce standards:
```bash
# Fail if coverage drops below 80%
claude /doc-coverage --min-coverage 80
```
Exit codes:
- 0: Coverage meets threshold (or no threshold set)
- 1: Coverage below threshold
## Example Usage
```
/doc-coverage
/doc-coverage --path src/
/doc-coverage --min-coverage 85 --exclude "**/generated/**"
/doc-coverage --detailed --include-private
```
## Language Detection
File extensions mapped to documentation patterns:
| Extension | Language | Doc Format |
|-----------|----------|------------|
| .py | Python | Docstrings (""") |
| .js, .ts | JavaScript/TypeScript | JSDoc (/** */) |
| .go | Go | // comments above |
| .rs | Rust | /// doc comments |
| .rb | Ruby | # comments, YARD |
| .java | Java | Javadoc (/** */) |


@@ -6,6 +6,18 @@ description: Synchronize all pending documentation updates in a single commit
Apply all pending documentation updates detected by doc-guardian hooks.
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 📝 DOC-GUARDIAN · Documentation Sync │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the sync.
## Process
1. **Review Pending Queue**


@@ -0,0 +1,155 @@
---
description: Detect documentation files that are stale relative to their associated code
---
# Stale Documentation Detection
Identify documentation files that may be outdated based on commit history.
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 📝 DOC-GUARDIAN · Stale Documentation Check │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the check.
## Process
1. **Map Documentation to Code**
Build relationships between docs and code:
| Doc File | Related Code |
|----------|--------------|
| README.md | All files in same directory |
| API.md | src/api/**/* |
| CLAUDE.md | Configuration files, scripts |
| docs/module.md | src/module/**/* |
| Component.md | Component.tsx, Component.css |
2. **Analyze Commit History**
For each doc file:
- Find last commit that modified the doc
- Find last commit that modified related code
- Count commits to code since doc was updated
3. **Calculate Staleness** (see the sketch after the report example below)
```
Commits Behind = Code Commits Since Doc Update
Days Behind = Days Since Doc Update - Days Since Code Update
```
4. **Apply Threshold**
Default: Flag if documentation is 10+ commits behind related code
**Staleness Levels:**
| Commits Behind | Level | Action |
|----------------|-------|--------|
| 0-5 | Fresh | No action needed |
| 6-10 | Aging | Review recommended |
| 11-20 | Stale | Update needed |
| 21+ | Critical | Immediate attention |
5. **Output Format**
```
## Stale Documentation Report
### Critical (20+ commits behind)
| File | Last Updated | Commits Behind | Related Code |
|------|--------------|----------------|--------------|
| docs/api.md | 2024-01-15 | 34 | src/api/**/* |
### Stale (11-20 commits behind)
| File | Last Updated | Commits Behind | Related Code |
|------|--------------|----------------|--------------|
| README.md | 2024-02-20 | 15 | package.json, src/index.ts |
### Aging (6-10 commits behind)
| File | Last Updated | Commits Behind | Related Code |
|------|--------------|----------------|--------------|
| CONTRIBUTING.md | 2024-03-01 | 8 | .github/*, scripts/* |
### Summary
- Critical: 1 file
- Stale: 1 file
- Aging: 1 file
- Fresh: 12 files
- Total documentation files: 15
```
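The commits-behind figure in step 3 can be computed from git history alone. A sketch, assuming the doc-to-code mapping has already been resolved to a list of pathspecs:
```python
import subprocess

def commits_behind(doc_path: str, code_paths: list[str]) -> int:
    """Count commits touching code_paths since the last commit that touched doc_path."""
    last_doc_commit = subprocess.run(
        ["git", "log", "-1", "--pretty=%H", "--", doc_path],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    if not last_doc_commit:
        return 0  # doc has never been committed; nothing to compare against
    code_commits = subprocess.run(
        ["git", "rev-list", f"{last_doc_commit}..HEAD", "--", *code_paths],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return len(code_commits)

print(commits_behind("docs/api.md", ["src/api"]))
```
A result of 34 for `docs/api.md` against `src/api` would place it in the Critical band above.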
## Options
| Flag | Description | Default |
|------|-------------|---------|
| `--threshold <n>` | Commits behind to flag as stale | 10 |
| `--days` | Use days instead of commits | false |
| `--path <dir>` | Scan specific directory | Project root |
| `--doc-pattern <glob>` | Pattern for doc files | `**/*.md,**/README*` |
| `--ignore <glob>` | Ignore specific docs | `CHANGELOG.md,LICENSE` |
| `--show-fresh` | Include fresh docs in output | false |
| `--format <fmt>` | Output format (table, json) | table |
## Relationship Detection
How docs are mapped to code:
1. **Same Directory**
- `src/api/README.md` relates to `src/api/**/*`
2. **Name Matching**
- `docs/auth.md` relates to `**/auth.*`, `**/auth/**`
3. **Explicit Links**
- Parse `[link](path)` in docs to find related files
4. **Import Analysis**
- Track which modules are referenced in code examples
## Configuration
Create `.doc-guardian.yml` to customize mappings:
```yaml
stale-docs:
threshold: 10
mappings:
- doc: docs/deployment.md
code:
- Dockerfile
- docker-compose.yml
- .github/workflows/deploy.yml
- doc: ARCHITECTURE.md
code:
- src/**/*
ignore:
- CHANGELOG.md
- LICENSE
- vendor/**
```
## Example Usage
```
/stale-docs
/stale-docs --threshold 5
/stale-docs --days --threshold 30
/stale-docs --path docs/ --show-fresh
```
## Integration with doc-audit
`/stale-docs` focuses specifically on commit-based staleness, while `/doc-audit` checks content accuracy. Use both for comprehensive documentation health:
```
/doc-audit # Check for broken references and content drift
/stale-docs # Check for files that may need review
```
## Exit Codes
- 0: No critical or stale documentation
- 1: Stale documentation found (useful for CI)
- 2: Critical documentation found


@@ -119,6 +119,10 @@ The git-assistant agent helps resolve merge conflicts with analysis and recommen
→ Status: Clean, up-to-date
```
## Documentation
- [Branching Strategy Guide](docs/BRANCHING-STRATEGY.md) - Detailed documentation of the `development -> staging -> main` promotion flow
## Integration
For CLAUDE.md integration instructions, see `claude-md-integration.md`.

Some files were not shown because too many files have changed in this diff.