75 Commits

baad41da98 feat(plugins): implement Sprint 5 documentation and fixes (#266-#269)
Release v5.2.0

Documentation:
- Add git-flow branching strategy guide (docs/BRANCHING-STRATEGY.md)
- Add clarity-assist ND support documentation (docs/ND-SUPPORT.md)
- Update DEBUGGING-CHECKLIST.md with Gitea auto-close behavior and MCP restart notes
- Update plugin READMEs to reference new documentation

Bug Fix:
- Add milestone parameter to update_issue MCP tool (gitea_client.py, server.py, tools/issues.py)

Version Updates:
- Marketplace version: 5.1.0 → 5.2.0
- README title: v5.1.0 → v5.2.0
- CHANGELOG: [Unreleased] → [5.2.0] - 2026-01-28

Closes #266, Closes #267, Closes #268, Closes #269

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 14:26:42 -05:00
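For reference, a hedged example of using the new parameter (tool prefix copied from the `update_issue` example in DEBUGGING-CHECKLIST.md; the milestone ID `5` is a hypothetical value — the tool expects the numeric milestone ID, not its title):

```
mcp__plugin_projman_gitea__update_issue(issue_number=266, milestone=5)
```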
f6d9fcaae2 Merge pull request 'docs: fix GITEA_REPO format documentation' (#264) from fix/gitea-repo-format-docs into development
Reviewed-on: #264
2026-01-28 18:45:24 +00:00
8d94bb606c docs: fix GITEA_REPO format documentation
Update documentation to reflect that GITEA_REPO expects owner/repo
format (e.g., my-org/my-repo) instead of separate GITEA_ORG and
GITEA_REPO variables.

This matches the actual MCP server implementation in config.py.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 13:44:46 -05:00
b175d4d890 Merge pull request 'development' (#262) from development into main
Reviewed-on: #262
2026-01-28 18:35:04 +00:00
6973f657d7 Merge pull request 'docs(changelog): add Sprint 4 changes to [Unreleased]' (#263) from docs/sprint-4-changelog into development
Reviewed-on: #263
2026-01-28 18:34:49 +00:00
a0d1b38c6e docs(changelog): add Sprint 4 changes to [Unreleased]
- 18 new commands across 8 plugins
- viz-platform accessibility tools and chart export
- MCP project directory detection fix
- Links to wiki implementation and lessons learned

Sprint 4 - Commands milestone closed.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 12:56:35 -05:00
c5f68256c5 Merge pull request 'feat(plugins): implement Sprint 4 commands (#241-#258)' (#261) from feat/sprint-4-commands into development
Reviewed-on: #261
2026-01-28 17:36:06 +00:00
9698e8724d feat(plugins): implement Sprint 4 commands (#241-#258)
Sprint 4 - Plugin Commands implementation adding 18 new user-facing
commands across 8 plugins as part of V5.2.0 Plugin Enhancements.

**projman:**
- #241: /sprint-diagram - Mermaid visualization of sprint issues

**pr-review:**
- #242: Confidence threshold config (PR_REVIEW_CONFIDENCE_THRESHOLD)
- #243: /pr-diff - Formatted diff with inline review comments

**data-platform:**
- #244: /data-quality - DataFrame quality checks (nulls, duplicates, outliers)
- #245: /lineage-viz - dbt lineage as Mermaid diagrams
- #246: /dbt-test - Formatted dbt test runner

**viz-platform:**
- #247: /chart-export - Export charts to PNG/SVG/PDF via kaleido
- #248: /accessibility-check - Color blind validation (WCAG contrast)
- #249: /breakpoints - Responsive layout configuration

**contract-validator:**
- #250: /dependency-graph - Plugin dependency visualization

**doc-guardian:**
- #251: /changelog-gen - Generate changelog from conventional commits
- #252: /doc-coverage - Documentation coverage metrics
- #253: /stale-docs - Flag outdated documentation

**claude-config-maintainer:**
- #254: /config-diff - Track CLAUDE.md changes over time
- #255: /config-lint - 31 lint rules for CLAUDE.md best practices

**cmdb-assistant:**
- #256: /cmdb-topology - Infrastructure topology diagrams
- #257: /change-audit - NetBox audit trail queries
- #258: /ip-conflicts - Detect IP conflicts and overlaps

Closes #241, #242, #243, #244, #245, #246, #247, #248, #249,
#250, #251, #252, #253, #254, #255, #256, #257, #258

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 12:02:26 -05:00
f3e1f42413 Merge pull request 'development' (#260) from development into main
Reviewed-on: #260
2026-01-28 16:44:43 +00:00
8a957b1b69 Merge pull request 'fix(mcp): capture CLAUDE_PROJECT_DIR from PWD before cd' (#259) from fix/mcp-project-dir-detection into development
Reviewed-on: #259
2026-01-28 16:44:32 +00:00
6e90064160 fix(mcp): capture CLAUDE_PROJECT_DIR from PWD before cd
All MCP server run.sh scripts now capture the original working
directory as CLAUDE_PROJECT_DIR before changing to the script
directory. This fixes the branch detection issue where MCP tools
detected the plugin repo's branch instead of the user's project branch.

This is a follow-up fix to #231 - the original fix relied on
CLAUDE_PROJECT_DIR being set by Claude Code, but Claude Code does not
set that variable. Now we capture it ourselves from PWD at startup time.

Closes #231 (proper fix)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 11:43:07 -05:00
5c4e97a3f6 Merge pull request 'development' (#240) from development into main
Reviewed-on: #240
2026-01-28 16:02:47 +00:00
351be5a40d Merge pull request 'feat(projman): implement 8 improvement issues (#231-#238)' (#239) from feat/projman-improvements-231-238 into development
Reviewed-on: #239
2026-01-28 15:53:50 +00:00
67944a7e1c Merge branch 'fix/233-approval-token' into development 2026-01-28 10:51:42 -05:00
e37653f956 Merge branch 'fix/237-checkpoint-resume' into development 2026-01-28 10:51:42 -05:00
235e72d3d7 Merge branch 'fix/234-parallel-conflicts' into development 2026-01-28 10:51:42 -05:00
ba8e86e31c Merge branch 'fix/236-runaway-detection' into development 2026-01-28 10:51:42 -05:00
67f330be6c Merge branch 'fix/238-task-scoping' into development 2026-01-28 10:51:42 -05:00
445b744196 Merge branch 'fix/232-progress-visibility' into development 2026-01-28 10:51:42 -05:00
ad73c526b7 Merge branch 'fix/235-status-labels' into development 2026-01-28 10:51:42 -05:00
26310d05f0 feat(projman): add sprint approval requirement before execution (#233)
Sprint-plan approval workflow:
- Request explicit approval after creating issues
- Present scope summary (branches, files, dependencies)
- User must type "approve sprint N" to authorize
- Record approval in milestone description with timestamp

Sprint-start verification:
- Check milestone for "## Sprint Approval" section
- If missing, STOP and direct to /sprint-plan
- Extract approved scope (branches, files)
- Enforce scope during execution

Orchestrator scope enforcement:
- Verify approval before any execution
- Check each operation against approved scope
- Operations outside scope require re-approval

This separates planning (review) from execution (action),
preventing agents from executing without explicit user consent.

Closes #233

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:51:10 -05:00
459550e7d3 feat(projman): add checkpoint/resume for interrupted agent work (#237)
Executor checkpointing:
- Standard checkpoint comment format with branch, commit, phase
- Files modified with status (created, modified)
- Completed and pending steps tracking
- State notes for resumption context
- Save checkpoint after major steps, before stopping

Orchestrator resume detection:
- Scan issue comments for "## Checkpoint" markers
- Offer resume options: resume, start fresh, review details
- Verify branch exists and files match before resuming
- Dispatch executor with checkpoint context

Sprint-start integration:
- Checkpoint detection as first workflow step
- Resume flow documentation with example
- Checkpoint format specification

This enables resuming work after:
- Budget exhaustion (100 tool call limit)
- Agent failure/circuit breaker
- Manual interruption
- Session timeout

Closes #237

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:49:34 -05:00
a69a4d19d0 feat(projman): add file conflict prevention for parallel agents (#234)
Add pre-dispatch conflict detection:
- Analyze target files for each task before parallel dispatch
- Check for file overlap between tasks in same batch
- If overlap detected, sequentialize those specific tasks
- Example analysis showing conflict detection workflow

Branch isolation protocol:
- Each task MUST have its own branch
- Never have two agents work on the same branch
- Sequential merge after completion (not simultaneous)
- Handle merge conflicts by stopping second task

Conflict resolution rules:
- Same file → MUST sequentialize
- Same directory → Usually safe, review
- Shared config → Sequentialize
- Shared test fixture → Sequentialize or assign different files

This prevents parallel agents from modifying the same files
and causing git merge conflicts.

Closes #234

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:47:36 -05:00
f2a62627d0 feat(projman): add runaway detection and circuit breaker for agents (#236)
Executor self-monitoring:
- 10+ calls without progress → stop and reassess
- Same error 3+ times → circuit breaker, report failure
- 50+ calls → mandatory progress update
- 80+ calls → budget warning, evaluate completion
- 100+ calls → hard stop, save checkpoint

Orchestrator monitoring:
- Detect stuck agents (no progress for X minutes)
- Intervention protocol for runaway agents
- Timeout guidelines by task size (XS: 15min, S: 30min, M: 45min)
- Recovery actions with Status/Failed label

This prevents agents from running indefinitely (400+ tool calls
observed in Sprint 3) and provides clear stopping criteria.

Closes #236

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:46:04 -05:00
0abf510ec0 feat(projman): add strict task sizing rules to prevent runaway agents (#238)
Add mandatory task scoping rules:
- XS: 1 file, 0-2 checklist items, ~30 tool calls
- S: 1 file, 2-4 checklist items, ~50 tool calls
- M: 2-3 files, 4-6 checklist items, ~80 tool calls
- L/XL: MUST be broken down into smaller tasks

Sprint 3 showed agents running 400+ tool calls on single tasks,
causing 1+ hour waits with no visibility. This enforces:
- Maximum task scope (M = 2-3 files, 80 tool calls)
- Mandatory breakdown for L/XL tasks
- Clear scoping checklist for planners
- Good/bad examples showing proper breakdown

Planner must refuse to create L/XL tasks without breakdown.

Closes #238

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:44:52 -05:00
008187a0a4 feat(projman): add structured progress comments for real-time visibility (#232)
Add structured progress comment format for agents:
- Standard format with Status, Phase, Tool Calls budget
- Completed/In-Progress/Blockers/Next sections
- Clear examples for starting, blocked, and failed states
- Guidance on when to post (every 20-30 tool calls)

Update sprint-status.md:
- Document how to parse progress comments
- Show enhanced in-progress display with tool call tracking
- Add progress comment detection to blocker analysis

This enables users to see:
- Real-time agent progress
- Tool call budget consumption
- Current phase and step
- Blockers as they occur

Closes #232

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:43:28 -05:00
4bd15e5deb feat(projman): add Status labels for accurate issue state tracking (#235)
- Add Status/In-Progress, Status/Blocked, Status/Failed, Status/Deferred labels
- Update orchestrator.md with Status Label Management section
- Update executor.md with honest Status Reporting requirements
- Update labels-reference.md with Status detection guidelines

Status labels enable accurate tracking:
- In-Progress: Work actively being done
- Blocked: Waiting for dependency/external factor
- Failed: Attempted but couldn't complete
- Deferred: Moved to future sprint

Agents must report honestly - never say "completed" when blocked/failed.

Closes #235

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:42:14 -05:00
8234683bc3 fix(gitea-mcp): use project directory for branch detection (#231)
The _get_current_branch() method was running git commands from the
installed plugin directory instead of the user's project directory.
This caused incorrect branch detection (always seeing 'main' from
the marketplace repo instead of the user's actual branch).

Fix: Use CLAUDE_PROJECT_DIR environment variable to get the correct
project directory and pass it as cwd to subprocess.run().

Fixes #231

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:38:17 -05:00
5b3da8da85 Merge feat/230-breaking-change-detection into development
Resolved conflict in hooks.json - combined SessionStart and PostToolUse hooks

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:12:46 -05:00
894e015c01 Merge feat/229-session-auto-validate into development 2026-01-28 10:12:18 -05:00
a66a2bc519 Merge feat/228-schema-diff-hook into development 2026-01-28 10:12:13 -05:00
b8851a0ae3 Merge feat/227-vagueness-detection-hook into development 2026-01-28 10:12:08 -05:00
aee199e6cf Merge feat/226-branch-name-validation into development 2026-01-28 10:12:03 -05:00
223a2d626a Merge feat/225-commit-message-hook into development 2026-01-28 10:11:57 -05:00
b7fce0fafd docs(changelog): add Sprint 3 hooks implementation
- git-flow: commit message and branch name validation hooks
- clarity-assist: vagueness detection hook
- data-platform: schema diff detection hook
- contract-validator: SessionStart auto-validate and breaking change hooks
- Document MCP bug #231 (branch detection from wrong directory)

Sprint 3 - Hooks completed (issues #225-#230)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:11:51 -05:00
551c60fb45 feat(contract-validator): add breaking change detection hook (#230)
Implements PostToolUse hook to warn about breaking interface changes:
- Detects changes to plugin.json, hooks.json, .mcp.json, agents/*.md
- Compares with git HEAD to find removed/changed elements
- Warns on: removed hooks, changed matchers, removed MCP servers
- Warns on: plugin name changes, major version bumps
- Non-blocking, configurable via CONTRACT_VALIDATOR_BREAKING_WARN

Depends on #229 (SessionStart auto-validate infrastructure).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:05:02 -05:00
af6a42b2ac feat(git-flow): add branch name validation hook (#226)
Implements PreToolUse/Bash hook to validate branch naming convention:
- Validates type/description format
- Allowed types: feat, fix, chore, docs, refactor, test, perf, debug
- Enforces lowercase, hyphens, max 50 chars
- Non-branch commands pass through

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 18:02:20 -05:00
7cae21f7c9 feat(contract-validator): add SessionStart auto-validate hook (#229)
Implements smart SessionStart hook for plugin validation:
- Validates plugin contracts only when files change (smart mode)
- Caches file hashes to avoid redundant checks
- Non-blocking warnings for compatibility issues
- Configurable via CONTRACT_VALIDATOR_AUTO_CHECK

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 18:02:00 -05:00
8048fba931 feat(clarity-assist): add vagueness detection hook (#227)
Implements UserPromptSubmit hook to detect vague prompts:
- Checks for short prompts without context
- Detects ambiguous phrases ("fix it", "help me with")
- Suggests /clarity-assist when beneficial
- Non-blocking, configurable via CLARITY_ASSIST_AUTO_SUGGEST

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 18:01:53 -05:00
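A minimal sketch of what such a heuristic could look like (the word-count threshold, phrase list, and default for CLARITY_ASSIST_AUTO_SUGGEST are illustrative assumptions, not the plugin source):

```python
import os

# Illustrative phrase list; the actual hook's heuristics are not shown here
VAGUE_PHRASES = ("fix it", "help me with", "make it better")

def should_suggest_clarity(prompt: str, min_words: int = 6) -> bool:
    """Return True when a prompt looks vague enough to suggest /clarity-assist."""
    if os.environ.get("CLARITY_ASSIST_AUTO_SUGGEST", "true").lower() in ("false", "0"):
        return False  # suggestion disabled via env var
    text = prompt.lower().strip()
    too_short = len(text.split()) < min_words          # short prompt without context
    ambiguous = any(phrase in text for phrase in VAGUE_PHRASES)
    return too_short or ambiguous
```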
1b36ca77ab feat(git-flow): add commit message enforcement hook (#225)
Implements PreToolUse/Bash hook to validate conventional commit format:
- Validates type(scope): description format
- Supports all 10 types: feat, fix, docs, style, refactor, perf, test, chore, build, ci
- Optional scope support
- Helpful error messages with examples
- Non-commit commands pass through
- Uses Python for reliable JSON parsing

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 17:44:59 -05:00
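For illustration, the core of such a check could look roughly like the sketch below (the payload field `tool_input.command`, the `-m "<message>"` extraction, and the exit-code convention are assumptions about the hook interface, not the plugin source):

```python
#!/usr/bin/env python3
"""Hedged sketch of a conventional-commit check for a PreToolUse/Bash hook."""
import json
import re
import sys

TYPES = "feat|fix|docs|style|refactor|perf|test|chore|build|ci"
# type(scope): description -- scope is optional
PATTERN = re.compile(rf"^({TYPES})(\([a-z0-9-]+\))?: .+")

def main() -> int:
    payload = json.load(sys.stdin)                      # hook payload on stdin (assumed shape)
    command = payload.get("tool_input", {}).get("command", "")
    if "git commit" not in command:
        return 0                                        # non-commit commands pass through
    match = re.search(r'-m\s+"([^"]+)"', command)       # naive -m "<message>" extraction
    if not match:
        return 0
    message = match.group(1)
    if PATTERN.match(message):
        return 0
    print(f"Invalid commit message: {message!r}", file=sys.stderr)
    print('Expected: type(scope): description, e.g. feat(projman): add sprint diagram',
          file=sys.stderr)
    return 2                                            # non-zero exit blocks the command (assumed convention)

if __name__ == "__main__":
    sys.exit(main())
```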
eb85ea31bb feat(data-platform): add schema diff detection hook (#228)
Implements PostToolUse hook to warn about potentially breaking schema changes:
- DROP COLUMN/TABLE/INDEX detection
- Column type changes (ALTER TYPE, MODIFY COLUMN)
- NOT NULL constraint additions
- RENAME operations
- ORM model field removals

Non-blocking - outputs warnings only.
Configurable via DATA_PLATFORM_SCHEMA_WARN env var.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 17:38:26 -05:00
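As a rough sketch of the detection side (the regex patterns and the DATA_PLATFORM_SCHEMA_WARN default are assumptions based only on the list above, not the plugin source):

```python
import os
import re

# Patterns roughly matching the breaking changes listed above (illustrative only)
BREAKING_PATTERNS = [
    (re.compile(r"\bDROP\s+(COLUMN|TABLE|INDEX)\b", re.IGNORECASE), "DROP of column/table/index"),
    (re.compile(r"\bALTER\s+COLUMN\b.*\bTYPE\b", re.IGNORECASE), "column type change"),
    (re.compile(r"\bMODIFY\s+COLUMN\b", re.IGNORECASE), "column type change"),
    (re.compile(r"\bNOT\s+NULL\b", re.IGNORECASE), "NOT NULL constraint added"),
    (re.compile(r"\bRENAME\s+(COLUMN|TO)\b", re.IGNORECASE), "RENAME operation"),
]

def schema_warnings(changed_sql: str) -> list[str]:
    """Return non-blocking warning strings for potentially breaking schema edits."""
    if os.environ.get("DATA_PLATFORM_SCHEMA_WARN", "true").lower() in ("false", "0"):
        return []  # warnings disabled via env var (default assumed to be enabled)
    return [reason for pattern, reason in BREAKING_PATTERNS if pattern.search(changed_sql)]
```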
8627d9e968 Merge pull request 'development' (#224) from development into main
Reviewed-on: #224
2026-01-27 20:16:30 +00:00
3da9adf44e Merge pull request 'docs: sync documentation with code changes for v5.1.0' (#223) from docs/sync-documentation-v5.1.0 into development
Reviewed-on: #223
2026-01-27 20:16:17 +00:00
bcb24ae641 docs: sync documentation with code changes for v5.1.0
Changes applied:
- Updated version references from 5.0.0 to 5.1.0 (CLAUDE.md, CANONICAL-PATHS.md, setup.sh)
- Added missing projman commands to README.md (/suggest-version, /proposal-status)
- Added missing cmdb-assistant commands to README.md (/cmdb-audit, /cmdb-register, /cmdb-sync)
- Added /proposal-status to projman section in COMMANDS-CHEATSHEET.md
- Added 3 cmdb-assistant commands to COMMANDS-CHEATSHEET.md
- Added /suggest-version documentation to plugins/projman/README.md
- Added 4 missing scripts to CANONICAL-PATHS.md (verify-hooks.sh, setup-venvs.sh, venv-repair.sh, release.sh)

Fixes 14 documentation drift issues identified by /doc-guardian:doc-audit.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 15:12:11 -05:00
c8ede3c30b Merge pull request 'development' (#222) from development into main
Reviewed-on: #222
2026-01-27 19:58:14 +00:00
fb1c664309 Merge pull request 'fix(post-update): clear Claude plugin cache on update' (#221) from fix/clear-plugin-cache-on-update into development
Reviewed-on: #221
2026-01-27 19:57:59 +00:00
90f19dfc0f fix(post-update): clear Claude plugin cache on update
The Claude plugin cache at ~/.claude/plugins/cache/leo-claude-mktplace/
holds versioned copies of .mcp.json files. When we update .mcp.json
to use run.sh instead of direct python paths, the cache retains old
versions, causing MCP servers to fail.

Now post-update.sh clears this cache, forcing Claude to read fresh
configs from the installed marketplace on next session start.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 14:55:42 -05:00
75492b0d38 Merge pull request 'development' (#220) from development into main
Reviewed-on: #220
2026-01-27 19:42:17 +00:00
54bb347ee1 Merge pull request 'fix(gitea-mcp): add fix/* and other branch patterns to permissions' (#219) from fix/branch-permission-patterns into development
Reviewed-on: #219
2026-01-27 19:42:02 +00:00
51bcc26ea9 fix(gitea-mcp): add fix/* and other branch patterns to permissions
The branch permission check was only allowing feat/, feature/, and dev/
prefixes for write operations. This blocked PR creation from fix/*
branches, forcing fallback to direct API calls.

Added patterns:
- fix/, bugfix/, hotfix/ - for bug fixes
- chore/, refactor/ - for maintenance
- docs/, test/ - for documentation and tests

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 14:41:18 -05:00
d813147ca7 Merge pull request 'development' (#218) from development into main
Reviewed-on: #218
2026-01-27 19:38:48 +00:00
dbb6d46fa4 Merge pull request 'fix(mcp): use wrapper scripts instead of venv symlinks' (#217) from fix/mcp-venv-wrapper-scripts into development
Reviewed-on: #217
2026-01-27 19:38:31 +00:00
e7050e2ad8 fix(mcp): use wrapper scripts instead of venv symlinks
Replace direct python path in .mcp.json with run.sh wrapper scripts
that automatically locate the venv in cache or local directory.

Problem: .venv symlinks are gitignored, causing them to be wiped on
every git operation. The SessionStart hook should recreate them but
this was unreliable, leading to repeated MCP server failures.

Solution: run.sh scripts that:
- First check ~/.cache/claude-mcp-venvs/leo-claude-mktplace/{server}/.venv
- Fallback to local .venv if exists
- Exit with helpful error if neither found

This eliminates dependency on symlinks entirely - the scripts are
tracked in git and will always be present after clone/pull/update.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 14:36:05 -05:00
206f1c378e Merge pull request 'development' (#216) from development into main
Reviewed-on: #216
2026-01-27 17:34:34 +00:00
35380594b4 Merge pull request 'feat(cmdb-assistant): add data quality validation v1.1.0' (#215) from feat/cmdb-assistant-data-quality into development
Reviewed-on: #215
2026-01-27 17:34:19 +00:00
0055c9ecf2 feat(gitea-mcp): add create_pull_request tool
Add missing create_pull_request tool to Gitea MCP server. This completes
the PR lifecycle - previously the server only had list/get/review/comment tools.

- Add create_pull_request to GiteaClient
- Add async wrapper to PullRequestTools with branch permissions
- Register tool in server.py with proper schema
- Parameters: title, body, head, base, labels (optional)
- Branch-aware security: only allowed on development/feature branches

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 12:30:48 -05:00
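A hedged usage example, assuming the tool is exposed under the same `mcp__plugin_projman_gitea__` prefix as `update_issue` (branch names, title, and label are placeholders):

```
mcp__plugin_projman_gitea__create_pull_request(
    title="feat(projman): add sprint diagram",
    body="Implements /sprint-diagram for sprint visualization",
    head="feat/sprint-diagram",
    base="development",
    labels=["enhancement"]
)
```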
a74a048898 feat(cmdb-assistant): add data quality validation v1.1.0
Add validation hooks, best practices skill, and new commands to enforce
NetBox data quality standards:

Hooks:
- SessionStart: Test NetBox connectivity, report data quality issues
- PreToolUse: Validate VM/device parameters before create/update

New Commands:
- /cmdb-audit: Data quality analysis (vms, devices, naming, roles)
- /cmdb-register: Register current machine with running applications
- /cmdb-sync: Sync machine state with NetBox, detect drift

Best Practices Skill:
- Dependency order (regions -> sites -> devices -> VMs)
- Site/tenant/platform assignment requirements
- Naming conventions enforcement
- Role consolidation guidance

Updated agent with validation requirements, dependency order checks,
naming convention warnings, and duplicate prevention.

Marketplace: 5.0.0 -> 5.1.0
Plugin: 1.0.0 -> 1.1.0

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 12:27:23 -05:00
37676d4645 Merge pull request 'development' (#214) from development into main
Reviewed-on: #214
2026-01-27 16:57:12 +00:00
7492cfad66 Merge pull request 'fix(post-update): use venv-repair for instant symlink restoration' (#213) from fix/post-update-venv-repair into development
Reviewed-on: #213
2026-01-27 16:56:52 +00:00
59db9ea0b0 fix(post-update): use venv-repair for instant symlink restoration
Replace the old approach (create venvs in marketplace directory) with
the new venv-repair.sh approach (symlinks to external cache).

This ensures post-update.sh:
- Instantly restores symlinks if cache exists
- Only does full pip install on first run
- Works correctly after marketplace updates

Flow after this fix:
  Update marketplace → post-update.sh → venv-repair → Session works

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 11:55:43 -05:00
9234cf1add Merge pull request 'development' (#212) from development into main
Reviewed-on: #212
2026-01-27 16:49:33 +00:00
a21199d3db Merge pull request 'fix(mcp): persistent venv cache survives marketplace updates' (#211) from fix/persistent-venv-cache into development
Reviewed-on: #211
2026-01-27 16:49:19 +00:00
1abda1ca0f fix(mcp): persistent venv cache survives marketplace updates
Problem:
- Venvs in marketplace directory got deleted on every update
- Users had to manually run setup.sh and wait for full pip install
- This caused MCP servers to fail until manually fixed

Solution:
- Store venvs in external cache (~/.cache/claude-mcp-venvs/)
- Auto-repair symlinks via SessionStart hook (instant operation)
- Only run pip install on first use or when requirements change

Architecture:
  Cache (runtime) → Marketplaces → External venv cache

  The chain of symlinks ensures all three locations work:
  1. ~/.claude/plugins/cache/.../mcp-servers/* (runtime)
  2. ~/.claude/plugins/marketplaces/.../mcp-servers/* (install)
  3. ~/.cache/claude-mcp-venvs/* (persistent venvs)

Performance:
- First install: ~2-3 min (unchanged)
- After marketplace update: 0.03 sec (was 2-3 min)

Files:
- scripts/venv-repair.sh: Fast symlink restoration for hooks
- scripts/setup-venvs.sh: Full setup with external cache
- plugins/projman/hooks/startup-check.sh: Auto-repair on session start
- .gitignore: Ignore .venv symlinks

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 11:48:13 -05:00
0118bc7b9b Merge pull request 'development' (#210) from development into main
Reviewed-on: #210
2026-01-27 15:51:57 +00:00
bbb822db16 Merge pull request 'fix(post-update): create missing venvs instead of just warning' (#209) from fix/post-update-create-venvs into development
Reviewed-on: #209
2026-01-27 15:51:39 +00:00
08e1dcb1f5 fix(post-update): create missing venvs instead of just warning
Previously post-update.sh would only warn when venvs were missing,
requiring a separate setup.sh run. Now it automatically creates
missing venvs and installs dependencies including editable packages.

Also added viz-platform and contract-validator to the update list.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 10:41:04 -05:00
ec7141a5aa Merge pull request 'development' (#208) from development into main
Reviewed-on: #208
2026-01-26 22:27:33 +00:00
1b029d97b8 Merge pull request 'fix/data-platform-filter-index' (#207) from fix/data-platform-filter-index into development
Reviewed-on: #207
2026-01-26 22:27:19 +00:00
4ed3ed7e14 fix(data-platform): reset index after filter to prevent extra column
The filter tool was adding an __index_level_0__ column to results
because pandas query() preserves the original index, which gets
converted to a column when storing the DataFrame.

Added .reset_index(drop=True) after query() to drop the preserved
index and create a clean sequential index.

Fixes #203

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 17:25:37 -05:00
c5232bd7bf Merge pull request 'development' (#202) from development into main
Reviewed-on: #202
2026-01-26 21:49:08 +00:00
f9e23fd6eb Merge pull request 'fix/setup-editable-install' (#201) from fix/setup-editable-install into development
Reviewed-on: #201
2026-01-26 21:48:50 +00:00
457ed9c9ff fix(setup): install MCP packages in editable mode for viz-platform and contract-validator
The setup script only installed requirements.txt dependencies but not the
local package itself, causing MCP servers to fail with ModuleNotFoundError.

Changes:
- Add `pip install -e .` for servers with pyproject.toml
- Add viz-platform and contract-validator to MCP server setup
- Add symlink verification for new plugins
- Update version banner to v5.0.0

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 16:44:11 -05:00
dadb4d3576 Merge pull request 'Merge development into main: contract-validator setup docs' (#200) from development into main 2026-01-26 20:57:23 +00:00
ba771f100f Merge pull request 'docs: add contract-validator initial-setup and update version tags' (#199) from docs/contract-validator-setup into development 2026-01-26 20:57:09 +00:00
2b9cb5defd docs: add contract-validator initial-setup and update version tags
- Add /initial-setup command for contract-validator plugin
- Add contract-validator section to README.md (NEW in v5.0.0)
- Update data-platform version tag: *NEW* -> *NEW in v4.0.0*
- Update viz-platform version tag: *NEW* -> *NEW in v4.0.0*
- Add Contract Validator MCP Server section to README.md
- Add contract-validator to test commands table and repo structure

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 15:55:17 -05:00
98 changed files with 9881 additions and 198 deletions

View File

@@ -6,7 +6,7 @@
},
"metadata": {
"description": "Project management plugins with Gitea and NetBox integrations",
"version": "5.0.0"
"version": "5.2.0"
},
"plugins": [
{
@@ -75,8 +75,8 @@
},
{
"name": "cmdb-assistant",
"version": "1.0.0",
"description": "NetBox CMDB integration for infrastructure management",
"version": "1.1.0",
"description": "NetBox CMDB integration with data quality validation and machine registration",
"source": "./plugins/cmdb-assistant",
"author": {
"name": "Leo Miranda",
@@ -86,7 +86,7 @@
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"mcpServers": ["./.mcp.json"],
"category": "infrastructure",
"tags": ["cmdb", "netbox", "dcim", "ipam"],
"tags": ["cmdb", "netbox", "dcim", "ipam", "data-quality", "validation"],
"license": "MIT"
},
{

.gitignore
View File

@@ -31,6 +31,8 @@ venv/
ENV/
env/
.venv/
.venv
**/.venv
# PyCharm
.idea/

View File

@@ -4,6 +4,149 @@ All notable changes to the Leo Claude Marketplace will be documented in this fil
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## [5.2.0] - 2026-01-28
### Added
#### Sprint 5: Documentation (V5.2.0 Plugin Enhancements)
Documentation and guides for the plugin enhancements initiative.
**git-flow v1.2.0:**
- **Branching Strategy Guide** (`docs/BRANCHING-STRATEGY.md`) - Complete documentation of `development -> staging -> main` promotion flow with Mermaid diagrams
**clarity-assist v1.2.0:**
- **ND Support Guide** (`docs/ND-SUPPORT.md`) - Documentation of neurodivergent accommodations, features, and usage examples
**Gitea MCP Server:**
- **`update_issue` milestone parameter** - Can now assign/change milestones programmatically
**Sprint Completed:**
- Milestone: Sprint 5 - Documentation (closed 2026-01-28)
- Issues: #266, #267, #268, #269
- Wiki: [Change V5.2.0: Plugin Enhancements (Sprint 5 Documentation)](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/Change-V5.2.0%3A-Plugin-Enhancements-%28Sprint-5-Documentation%29)
---
#### Sprint 4: Commands (V5.2.0 Plugin Enhancements)
Implementation of 18 new user-facing commands across 8 plugins.
**projman v3.3.0:**
- **`/sprint-diagram`** - Generate Mermaid diagram of sprint issues with dependencies and status
**pr-review v1.1.0:**
- **`/pr-diff`** - Formatted diff with inline review comments and annotations
- **Confidence threshold config** - `PR_REVIEW_CONFIDENCE_THRESHOLD` env var (default: 0.7)
**data-platform v1.2.0:**
- **`/data-quality`** - DataFrame quality checks (nulls, duplicates, types, outliers) with pass/warn/fail scoring
- **`/lineage-viz`** - dbt lineage visualization as Mermaid diagrams
- **`/dbt-test`** - Formatted dbt test runner with summary and failure details
**viz-platform v1.1.0:**
- **`/chart-export`** - Export charts to PNG, SVG, PDF via kaleido
- **`/accessibility-check`** - Color blind validation (WCAG contrast ratios)
- **`/breakpoints`** - Responsive layout breakpoint configuration
- **New MCP tools**: `chart_export`, `accessibility_validate_colors`, `accessibility_validate_theme`, `accessibility_suggest_alternative`, `layout_set_breakpoints`
- **New dependency**: kaleido>=0.2.1 for chart rendering
**contract-validator v1.2.0:**
- **`/dependency-graph`** - Mermaid visualization of plugin dependencies with data flow
**doc-guardian v1.1.0:**
- **`/changelog-gen`** - Generate changelog from conventional commits
- **`/doc-coverage`** - Documentation coverage metrics by function/class
- **`/stale-docs`** - Flag documentation behind code changes
**claude-config-maintainer v1.1.0:**
- **`/config-diff`** - Track CLAUDE.md changes over time with behavioral impact analysis
- **`/config-lint`** - 31 lint rules for CLAUDE.md (security, structure, content, format, best practices)
**cmdb-assistant v1.2.0:**
- **`/cmdb-topology`** - Infrastructure topology diagrams (rack, network, site views)
- **`/change-audit`** - NetBox audit trail queries with filtering
- **`/ip-conflicts`** - Detect IP conflicts and overlapping prefixes
**Sprint Completed:**
- Milestone: Sprint 4 - Commands (closed 2026-01-28)
- Issues: #241-#258 (18/18 closed)
- Wiki: [Change V5.2.0: Plugin Enhancements (Sprint 4 Commands)](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/Change-V5.2.0%3A-Plugin-Enhancements-%28Sprint-4-Commands%29)
- Lessons: [Sprint 4 - Plugin Commands Implementation](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/lessons/sprints/sprint-4---plugin-commands-implementation)
### Fixed
- **MCP:** Project directory detection - all run.sh scripts now capture `CLAUDE_PROJECT_DIR` from PWD before changing directories
- **Docs:** Added Gitea auto-close behavior and MCP session restart notes to DEBUGGING-CHECKLIST.md
---
#### Sprint 3: Hooks (V5.2.0 Plugin Enhancements)
Implementation of 6 foundational hooks across 4 plugins.
**git-flow v1.1.0:**
- **Commit message enforcement hook** - PreToolUse hook validates conventional commit format on all `git commit` commands (not just `/commit`). Blocks invalid commits with format guidance.
- **Branch name validation hook** - PreToolUse hook validates branch naming on `git checkout -b` and `git switch -c`. Enforces `type/description` format, lowercase, max 50 chars.
**clarity-assist v1.1.0:**
- **Vagueness detection hook** - UserPromptSubmit hook detects vague prompts and suggests `/clarify` when ambiguity, missing context, or unclear scope detected.
**data-platform v1.1.0:**
- **Schema diff detection hook** - PostToolUse hook monitors edits to schema files (dbt models, SQL migrations). Warns on breaking changes (column removal, type narrowing, constraint addition).
**contract-validator v1.1.0:**
- **SessionStart auto-validate hook** - Smart validation that only runs when plugin files changed since last check. Detects interface compatibility issues at session start.
- **Breaking change detection hook** - PostToolUse hook monitors plugin interface files (README.md, plugin.json). Warns when changes would break consumers.
**Sprint Completed:**
- Milestone: Sprint 3 - Hooks (closed 2026-01-28)
- Issues: #225, #226, #227, #228, #229, #230
- Wiki: [Change V5.2.0: Plugin Enhancements Proposal](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/Change-V5.2.0:-Plugin-Enhancements-Proposal)
- Lessons: Background agent permissions, agent runaway detection, MCP branch detection bug
### Known Issues
- **MCP Bug #231:** Branch detection in Gitea MCP runs from installed plugin directory, not user's project directory. Workaround: close issues via Gitea web UI.
---
#### Gitea MCP Server - create_pull_request Tool
- **`create_pull_request`**: Create new pull requests via MCP
- Parameters: title, body, head (source branch), base (target branch), labels
- Branch-aware security: only allowed on development/feature branches
- Completes the PR lifecycle (was previously missing - only had list/get/review/comment)
#### cmdb-assistant v1.1.0 - Data Quality Validation
- **SessionStart Hook**: Tests NetBox API connectivity at session start
- Warns if VMs exist without site assignment
- Warns if devices exist without platform
- Non-blocking: displays warning, doesn't prevent work
- **PreToolUse Hook**: Validates input parameters before VM/device operations
- Warns about missing site, tenant, platform
- Non-blocking: suggests best practices without blocking
- **`/cmdb-audit` Command**: Comprehensive data quality analysis
- Scopes: all, vms, devices, naming, roles
- Identifies Critical/High/Medium/Low issues
- Provides prioritized remediation recommendations
- **`/cmdb-register` Command**: Register current machine into NetBox
- Discovers system info: hostname, platform, hardware, network interfaces
- Discovers running apps: Docker containers, systemd services
- Creates device with interfaces, IPs, and sets primary IP
- Creates cluster and VMs for Docker containers
- **`/cmdb-sync` Command**: Sync machine state with NetBox
- Compares current state with NetBox record
- Shows diff of changes (interfaces, IPs, containers)
- Updates with user confirmation
- Supports --full and --dry-run flags
- **NetBox Best Practices Skill**: Reference documentation
- Dependency order for object creation
- Naming conventions (`{role}-{site}-{number}`, `{env}-{app}-{number}`)
- Role consolidation guidance
- Site/tenant/platform assignment requirements
- **Agent Enhancement**: Updated cmdb-assistant agent with validation requirements
- Proactive suggestions for missing fields
- Naming convention checks
- Dependency order enforcement
- Duplicate prevention
---
## [5.0.0] - 2026-01-26
### Added

View File

@@ -50,7 +50,7 @@ See `docs/DEBUGGING-CHECKLIST.md` for details on cache timing.
## Project Overview
**Repository:** leo-claude-mktplace
**Version:** 5.0.0
**Version:** 5.1.0
**Status:** Production Ready
A plugin marketplace for Claude Code containing:

View File

@@ -1,4 +1,4 @@
# Leo Claude Marketplace - v5.0.0
# Leo Claude Marketplace - v5.2.0
A collection of Claude Code plugins for project management, infrastructure automation, and development workflows.
@@ -19,7 +19,7 @@ AI-guided sprint planning with full Gitea integration. Transforms a proven 15-sp
- Branch-aware security (development/staging/production)
- Pre-sprint-close code quality review and test verification
**Commands:** `/sprint-plan`, `/sprint-start`, `/sprint-status`, `/sprint-close`, `/labels-sync`, `/initial-setup`, `/project-init`, `/project-sync`, `/review`, `/test-check`, `/test-gen`, `/debug-report`, `/debug-review`
**Commands:** `/sprint-plan`, `/sprint-start`, `/sprint-status`, `/sprint-close`, `/labels-sync`, `/initial-setup`, `/project-init`, `/project-sync`, `/review`, `/test-check`, `/test-gen`, `/debug-report`, `/debug-review`, `/suggest-version`, `/proposal-status`
#### [git-flow](./plugins/git-flow/README.md) *NEW in v3.0.0*
**Git Workflow Automation**
@@ -53,6 +53,19 @@ Analyze, optimize, and create CLAUDE.md configuration files for Claude Code proj
**Commands:** `/config-analyze`, `/config-optimize`, `/config-init`
#### [contract-validator](./plugins/contract-validator/README.md) *NEW in v5.0.0*
**Cross-Plugin Compatibility Validation**
Validate plugin marketplaces for command conflicts, tool overlaps, and broken agent references.
- Interface parsing from plugin README.md files
- Agent extraction from CLAUDE.md definitions
- Pairwise compatibility checks between all plugins
- Data flow validation for agent sequences
- Markdown or JSON reports with actionable suggestions
**Commands:** `/validate-contracts`, `/check-agent`, `/list-interfaces`, `/initial-setup`
### Productivity
#### [clarity-assist](./plugins/clarity-assist/README.md) *NEW in v3.0.0*
@@ -94,11 +107,11 @@ Security vulnerability detection and code refactoring tools.
Full CRUD operations for network infrastructure management directly from Claude Code.
**Commands:** `/initial-setup`, `/cmdb-search`, `/cmdb-device`, `/cmdb-ip`, `/cmdb-site`
**Commands:** `/initial-setup`, `/cmdb-search`, `/cmdb-device`, `/cmdb-ip`, `/cmdb-site`, `/cmdb-audit`, `/cmdb-register`, `/cmdb-sync`
### Data Engineering
#### [data-platform](./plugins/data-platform/README.md) *NEW*
#### [data-platform](./plugins/data-platform/README.md) *NEW in v4.0.0*
**pandas, PostgreSQL/PostGIS, and dbt Integration**
Comprehensive data engineering toolkit with persistent DataFrame storage.
@@ -113,7 +126,7 @@ Comprehensive data engineering toolkit with persistent DataFrame storage.
### Visualization
#### [viz-platform](./plugins/viz-platform/README.md) *NEW*
#### [viz-platform](./plugins/viz-platform/README.md) *NEW in v4.0.0*
**Dash Mantine Components Validation and Theming**
Visualization toolkit with version-locked component validation and design token theming.
@@ -157,7 +170,7 @@ Comprehensive NetBox REST API integration for infrastructure management.
| Virtualization | Clusters, VMs, Interfaces |
| Extras | Tags, Custom Fields, Audit Log |
### Data Platform MCP Server (shared) *NEW*
### Data Platform MCP Server (shared) *NEW in v4.0.0*
pandas, PostgreSQL/PostGIS, and dbt integration for data engineering.
@@ -168,7 +181,7 @@ pandas, PostgreSQL/PostGIS, and dbt integration for data engineering.
| PostGIS | `st_tables`, `st_geometry_type`, `st_srid`, `st_extent` |
| dbt | `dbt_parse`, `dbt_run`, `dbt_test`, `dbt_build`, `dbt_compile`, `dbt_ls`, `dbt_docs_generate`, `dbt_lineage` |
### Viz Platform MCP Server (shared) *NEW*
### Viz Platform MCP Server (shared) *NEW in v4.0.0*
Dash Mantine Components validation and visualization tools.
@@ -180,6 +193,16 @@ Dash Mantine Components validation and visualization tools.
| Theme | `theme_create`, `theme_extend`, `theme_validate`, `theme_export_css`, `theme_list`, `theme_activate` |
| Page | `page_create`, `page_add_navbar`, `page_set_auth`, `page_list`, `page_get_app_config` |
### Contract Validator MCP Server (shared) *NEW in v5.0.0*
Cross-plugin compatibility validation tools.
| Category | Tools |
|----------|-------|
| Parse | `parse_plugin_interface`, `parse_claude_md_agents` |
| Validation | `validate_compatibility`, `validate_agent_refs`, `validate_data_flow` |
| Report | `generate_compatibility_report`, `list_issues` |
## Installation
### Prerequisites
@@ -278,6 +301,7 @@ After installing plugins, the `/plugin` command may show `(no content)` - this i
| cmdb-assistant | `/cmdb-assistant:cmdb-search` |
| data-platform | `/data-platform:ingest` |
| viz-platform | `/viz-platform:chart` |
| contract-validator | `/contract-validator:validate-contracts` |
## Repository Structure
@@ -289,14 +313,16 @@ leo-claude-mktplace/
│ ├── gitea/ # Gitea MCP (issues, PRs, wiki)
│ ├── netbox/ # NetBox MCP (CMDB)
│ ├── data-platform/ # Data engineering (pandas, PostgreSQL, dbt)
│ └── viz-platform/ # Visualization (DMC, Plotly, theming)
│ ├── viz-platform/ # Visualization (DMC, Plotly, theming)
│ └── contract-validator/ # Cross-plugin validation (v5.0.0)
├── plugins/ # All plugins
│ ├── projman/ # Sprint management
│ ├── git-flow/ # Git workflow automation
│ ├── pr-review/ # PR review
│ ├── clarity-assist/ # Prompt optimization
│ ├── data-platform/ # Data engineering
│ ├── viz-platform/ # Visualization (NEW)
│ ├── viz-platform/ # Visualization
│ ├── contract-validator/ # Cross-plugin validation (NEW)
│ ├── claude-config-maintainer/ # CLAUDE.md optimization
│ ├── cmdb-assistant/ # NetBox CMDB integration
│ ├── doc-guardian/ # Documentation drift detection

View File

@@ -2,7 +2,7 @@
**This file defines ALL valid paths in this repository. No exceptions. No inference. No assumptions.**
Last Updated: 2026-01-26 (v5.0.0)
Last Updated: 2026-01-27 (v5.1.0)
---
@@ -165,7 +165,11 @@ leo-claude-mktplace/
│ ├── setup.sh # Initial setup (create venvs, config templates)
│ ├── post-update.sh # Post-update (rebuild venvs, verify symlinks)
│ ├── check-venv.sh # Check if venvs exist (for hooks)
│ └── validate-marketplace.sh # Marketplace compliance validation
│ ├── validate-marketplace.sh # Marketplace compliance validation
│ ├── verify-hooks.sh # Verify all hooks use correct event types
│ ├── setup-venvs.sh # Setup/repair MCP server venvs
│ ├── venv-repair.sh # Repair broken venv symlinks
│ └── release.sh # Release automation with version bumping
├── CLAUDE.md
├── README.md
├── LICENSE

View File

@@ -23,6 +23,7 @@ Quick reference for all commands in the Leo Claude Marketplace.
| **projman** | `/debug-report` | | X | Run diagnostics and create structured issue in marketplace |
| **projman** | `/debug-review` | | X | Investigate diagnostic issues and propose fixes with approval gates |
| **projman** | `/suggest-version` | | X | Analyze CHANGELOG and recommend semantic version bump |
| **projman** | `/proposal-status` | | X | View proposal and implementation hierarchy with status |
| **git-flow** | `/commit` | | X | Create commit with auto-generated conventional message |
| **git-flow** | `/commit-push` | | X | Commit and push to remote in one operation |
| **git-flow** | `/commit-merge` | | X | Commit current changes, then merge into target branch |
@@ -55,6 +56,9 @@ Quick reference for all commands in the Leo Claude Marketplace.
| **cmdb-assistant** | `/cmdb-device` | | X | Manage network devices (create, view, update, delete) |
| **cmdb-assistant** | `/cmdb-ip` | | X | Manage IP addresses and prefixes |
| **cmdb-assistant** | `/cmdb-site` | | X | Manage sites, locations, racks, and regions |
| **cmdb-assistant** | `/cmdb-audit` | | X | Data quality analysis (VMs, devices, naming, roles) |
| **cmdb-assistant** | `/cmdb-register` | | X | Register current machine into NetBox with running apps |
| **cmdb-assistant** | `/cmdb-sync` | | X | Sync machine state with NetBox (detect drift, update) |
| **project-hygiene** | *PostToolUse hook* | X | | Removes temp files, warns about unexpected root files |
| **data-platform** | `/ingest` | | X | Load data from CSV, Parquet, JSON into DataFrame |
| **data-platform** | `/profile` | | X | Generate data profiling report with statistics |

View File

@@ -171,8 +171,7 @@ This marketplace uses a **hybrid configuration** approach:
│ PROJECT-LEVEL (once per project) │
│ <project-root>/.env │
├─────────────────────────────────────────────────────────────────┤
│ GITEA_ORG │ Organization for this project │
│ GITEA_REPO │ Repository name for this project │
│ GITEA_REPO │ Repository as owner/repo format │
│ GIT_WORKFLOW_STYLE │ (optional) Override system default │
│ PR_REVIEW_* │ (optional) PR review settings │
└─────────────────────────────────────────────────────────────────┘
@@ -262,8 +261,7 @@ In each project root:
```bash
cat > .env << 'EOF'
GITEA_ORG=your-organization
GITEA_REPO=your-repo-name
GITEA_REPO=your-organization/your-repo-name
EOF
```
@@ -307,7 +305,7 @@ GITEA_API_TOKEN=your_gitea_token_here
| `GITEA_API_URL` | Gitea API endpoint (with `/api/v1`) | `https://gitea.example.com/api/v1` |
| `GITEA_API_TOKEN` | Personal access token | `abc123...` |
**Note:** `GITEA_ORG` is configured at the project level (see below) since different projects may belong to different organizations.
**Note:** `GITEA_REPO` is configured at the project level in `owner/repo` format since different projects may belong to different organizations.
**Generating a Gitea Token:**
1. Log into Gitea → **User Icon** → **Settings**
@@ -362,9 +360,8 @@ GIT_CO_AUTHOR=true
Create `.env` in each project root:
```bash
# Required for projman, pr-review
GITEA_ORG=your-organization
GITEA_REPO=your-repo-name
# Required for projman, pr-review (use owner/repo format)
GITEA_REPO=your-organization/your-repo-name
# Optional: Override git-flow defaults
GIT_WORKFLOW_STYLE=pr-required
@@ -377,8 +374,7 @@ PR_REVIEW_AUTO_SUBMIT=false
| Variable | Required | Description |
|----------|----------|-------------|
| `GITEA_ORG` | Yes | Gitea organization for this project |
| `GITEA_REPO` | Yes | Repository name (must match Gitea exactly) |
| `GITEA_REPO` | Yes | Repository in `owner/repo` format (e.g., `my-org/my-repo`) |
| `GIT_WORKFLOW_STYLE` | No | Override system default |
| `PR_REVIEW_*` | No | PR review settings |
@@ -388,8 +384,8 @@ PR_REVIEW_AUTO_SUBMIT=false
| Plugin | System Config | Project Config | Setup Commands |
|--------|---------------|----------------|----------------|
| **projman** | gitea.env | .env (GITEA_ORG, GITEA_REPO) | `/initial-setup`, `/project-init`, `/project-sync` |
| **pr-review** | gitea.env | .env (GITEA_ORG, GITEA_REPO) | `/initial-setup`, `/project-init`, `/project-sync` |
| **projman** | gitea.env | .env (GITEA_REPO=owner/repo) | `/initial-setup`, `/project-init`, `/project-sync` |
| **pr-review** | gitea.env | .env (GITEA_REPO=owner/repo) | `/initial-setup`, `/project-init`, `/project-sync` |
| **git-flow** | git-flow.env (optional) | .env (optional) | None needed |
| **clarity-assist** | None | None | None needed |
| **cmdb-assistant** | netbox.env | None | `/initial-setup` |
@@ -441,7 +437,7 @@ This catches typos and permission issues before saving configuration.
When you start a Claude Code session, a hook automatically:
1. Reads `GITEA_ORG` and `GITEA_REPO` from `.env`
1. Reads `GITEA_REPO` (in `owner/repo` format) from `.env`
2. Compares with current `git remote get-url origin`
3. **Warns** if mismatch detected: "Repository location mismatch. Run `/project-sync` to update."
@@ -520,7 +516,8 @@ deactivate
# Check project .env
cat .env
# Verify GITEA_REPO matches the Gitea repository name exactly
# Verify GITEA_REPO is in owner/repo format and matches Gitea exactly
# Example: GITEA_REPO=my-org/my-repo
```
---

View File

@@ -2,7 +2,7 @@
**Purpose:** Systematic approach to diagnose and fix plugin loading issues.
Last Updated: 2026-01-22
Last Updated: 2026-01-28
---
@@ -128,7 +128,7 @@ cat ~/.config/claude/netbox.env
# Project-level config (in target project)
cat /path/to/project/.env
# Should contain: GITEA_ORG, GITEA_REPO
# Should contain: GITEA_REPO=owner/repo (e.g., my-org/my-repo)
```
---
@@ -186,6 +186,47 @@ echo -e "\n=== Config Files ==="
| Wrong path edits | Changes don't take effect | Edit installed path or reinstall after source changes |
| Missing credentials | MCP connection errors | Create `~/.config/claude/gitea.env` with API credentials |
| Invalid hook events | Hooks don't fire | Use only valid event names (see Step 7) |
| Gitea issues not closing | Merged to non-default branch | Manually close issues (see below) |
| MCP changes not taking effect | Session caching | Restart Claude Code session (see below) |
### Gitea Auto-Close Behavior
**Issue:** Using `Closes #XX` or `Fixes #XX` in commit/PR messages does NOT auto-close issues when merging to `development`.
**Root Cause:** Gitea only auto-closes issues when merging to the **default branch** (typically `main` or `master`). Merging to `development`, `staging`, or any other branch will NOT trigger auto-close.
**Workaround:**
1. Use the Gitea MCP tool to manually close issues after merging to development:
```
mcp__plugin_projman_gitea__update_issue(issue_number=XX, state="closed")
```
2. Or close issues via the Gitea web UI
3. The auto-close keywords will still work when the changes are eventually merged to `main`
**Recommendation:** Include the `Closes #XX` keywords in commits anyway - they'll work when the final merge to `main` happens.
### MCP Session Restart Requirement
**Issue:** Changes to MCP servers, hooks, or plugin configuration don't take effect immediately.
**Root Cause:** Claude Code loads MCP tools and plugin configuration at session start. These are cached in session memory and not reloaded dynamically.
**What requires a session restart:**
- MCP server code changes (Python files in `mcp-servers/`)
- Changes to `.mcp.json` files
- Changes to `hooks/hooks.json`
- Changes to `plugin.json`
- Adding new MCP tools or modifying tool signatures
**What does NOT require a restart:**
- Command/skill markdown files (`.md`) - these are read on invocation
- Agent markdown files - read when agent is invoked
**Correct workflow after plugin changes:**
1. Make changes to source files
2. Run `./scripts/verify-hooks.sh` to validate
3. Inform user: "Please restart Claude Code for changes to take effect"
4. **Do NOT clear cache mid-session** - see "Cache Clearing" section
---

View File

@@ -0,0 +1,21 @@
#!/bin/bash
# Capture original working directory before any cd operations
# This should be the user's project directory when launched by Claude Code
export CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/contract-validator/.venv"
LOCAL_VENV="$SCRIPT_DIR/.venv"
if [[ -f "$CACHE_VENV/bin/python" ]]; then
PYTHON="$CACHE_VENV/bin/python"
elif [[ -f "$LOCAL_VENV/bin/python" ]]; then
PYTHON="$LOCAL_VENV/bin/python"
else
echo "ERROR: No venv found. Run: ./scripts/setup-venvs.sh" >&2
exit 1
fi
cd "$SCRIPT_DIR"
export PYTHONPATH="$SCRIPT_DIR"
exec "$PYTHON" -m mcp_server.server "$@"

View File

@@ -330,7 +330,7 @@ class PandasTools:
return {'error': f'DataFrame not found: {data_ref}'}
try:
filtered = df.query(condition)
filtered = df.query(condition).reset_index(drop=True)
result_name = name or f"{data_ref}_filtered"
return self._check_and_store(
filtered,

View File

@@ -0,0 +1,21 @@
#!/bin/bash
# Capture original working directory before any cd operations
# This should be the user's project directory when launched by Claude Code
export CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/data-platform/.venv"
LOCAL_VENV="$SCRIPT_DIR/.venv"
if [[ -f "$CACHE_VENV/bin/python" ]]; then
PYTHON="$CACHE_VENV/bin/python"
elif [[ -f "$LOCAL_VENV/bin/python" ]]; then
PYTHON="$LOCAL_VENV/bin/python"
else
echo "ERROR: No venv found. Run: ./scripts/setup-venvs.sh" >&2
exit 1
fi
cd "$SCRIPT_DIR"
export PYTHONPATH="$SCRIPT_DIR"
exec "$PYTHON" -m mcp_server.server "$@"

View File

@@ -135,9 +135,24 @@ class GiteaClient:
body: Optional[str] = None,
state: Optional[str] = None,
labels: Optional[List[str]] = None,
milestone: Optional[int] = None,
repo: Optional[str] = None
) -> Dict:
"""Update existing issue. Repo must be 'owner/repo' format."""
"""
Update existing issue.
Args:
issue_number: Issue number to update
title: New title (optional)
body: New body (optional)
state: New state - 'open' or 'closed' (optional)
labels: New labels (optional)
milestone: Milestone ID to assign (optional)
repo: Repository in 'owner/repo' format
Returns:
Updated issue dictionary
"""
owner, target_repo = self._parse_repo(repo)
url = f"{self.base_url}/repos/{owner}/{target_repo}/issues/{issue_number}"
data = {}
@@ -149,6 +164,8 @@ class GiteaClient:
data['state'] = state
if labels is not None:
data['labels'] = labels
if milestone is not None:
data['milestone'] = milestone
logger.info(f"Updating issue #{issue_number} in {owner}/{target_repo}")
response = self.session.patch(url, json=data)
response.raise_for_status()
@@ -787,3 +804,42 @@ class GiteaClient:
response = self.session.post(url, json=data)
response.raise_for_status()
return response.json()
def create_pull_request(
self,
title: str,
body: str,
head: str,
base: str,
labels: Optional[List[str]] = None,
repo: Optional[str] = None
) -> Dict:
"""
Create a new pull request.
Args:
title: PR title
body: PR description/body
head: Source branch name (the branch with changes)
base: Target branch name (the branch to merge into)
labels: Optional list of label names
repo: Repository in 'owner/repo' format
Returns:
Created pull request dictionary
"""
owner, target_repo = self._parse_repo(repo)
url = f"{self.base_url}/repos/{owner}/{target_repo}/pulls"
data = {
'title': title,
'body': body,
'head': head,
'base': base
}
if labels:
label_ids = self._resolve_label_ids(labels, owner, target_repo)
data['labels'] = label_ids
logger.info(f"Creating PR '{title}' in {owner}/{target_repo}: {head} -> {base}")
response = self.session.post(url, json=data)
response.raise_for_status()
return response.json()

View File

@@ -168,6 +168,10 @@ class GiteaMCPServer:
"items": {"type": "string"},
"description": "New labels"
},
"milestone": {
"type": "integer",
"description": "Milestone ID to assign"
},
"repo": {
"type": "string",
"description": "Repository name (for PMO mode)"
@@ -844,6 +848,41 @@ class GiteaMCPServer:
},
"required": ["pr_number", "body"]
}
),
Tool(
name="create_pull_request",
description="Create a new pull request",
inputSchema={
"type": "object",
"properties": {
"title": {
"type": "string",
"description": "PR title"
},
"body": {
"type": "string",
"description": "PR description/body"
},
"head": {
"type": "string",
"description": "Source branch name (the branch with changes)"
},
"base": {
"type": "string",
"description": "Target branch name (the branch to merge into)"
},
"labels": {
"type": "array",
"items": {"type": "string"},
"description": "Optional list of label names"
},
"repo": {
"type": "string",
"description": "Repository name (owner/repo format)"
}
},
"required": ["title", "body", "head", "base"]
}
)
]
@@ -959,6 +998,8 @@ class GiteaMCPServer:
result = await self.pr_tools.create_pr_review(**arguments)
elif name == "add_pr_comment":
result = await self.pr_tools.add_pr_comment(**arguments)
elif name == "create_pull_request":
result = await self.pr_tools.create_pull_request(**arguments)
else:
raise ValueError(f"Unknown tool: {name}")

View File

@@ -7,6 +7,7 @@ Provides async wrappers for issue CRUD operations with:
- Comprehensive error handling
"""
import asyncio
import os
import subprocess
import logging
from typing import List, Dict, Optional
@@ -27,19 +28,34 @@ class IssueTools:
"""
self.gitea = gitea_client
def _get_project_directory(self) -> Optional[str]:
"""
Get the user's project directory from environment.
Returns:
Project directory path or None if not set
"""
return os.environ.get('CLAUDE_PROJECT_DIR')
def _get_current_branch(self) -> str:
"""
Get current git branch.
Get current git branch from user's project directory.
Uses CLAUDE_PROJECT_DIR environment variable to determine the correct
directory for git operations, avoiding the bug where git runs from
the installed plugin directory instead of the user's project.
Returns:
Current branch name or 'unknown' if not in a git repo
"""
try:
project_dir = self._get_project_directory()
result = subprocess.run(
['git', 'rev-parse', '--abbrev-ref', 'HEAD'],
capture_output=True,
text=True,
check=True
check=True,
cwd=project_dir # Run git in project directory, not plugin directory
)
return result.stdout.strip()
except subprocess.CalledProcessError:
@@ -66,7 +82,13 @@ class IssueTools:
return operation in ['list_issues', 'get_issue', 'get_labels', 'create_issue']
# Development branches (full access)
if branch in ['development', 'develop'] or branch.startswith(('feat/', 'feature/', 'dev/')):
# Include all common feature/fix branch patterns
dev_prefixes = (
'feat/', 'feature/', 'dev/',
'fix/', 'bugfix/', 'hotfix/',
'chore/', 'refactor/', 'docs/', 'test/'
)
if branch in ['development', 'develop'] or branch.startswith(dev_prefixes):
return True
# Unknown branch - be restrictive
@@ -178,6 +200,7 @@ class IssueTools:
body: Optional[str] = None,
state: Optional[str] = None,
labels: Optional[List[str]] = None,
milestone: Optional[int] = None,
repo: Optional[str] = None
) -> Dict:
"""
@@ -189,6 +212,7 @@ class IssueTools:
body: New body (optional)
state: New state - 'open' or 'closed' (optional)
labels: New labels (optional)
milestone: Milestone ID to assign (optional)
repo: Override configured repo (for PMO multi-repo)
Returns:
@@ -207,7 +231,7 @@ class IssueTools:
loop = asyncio.get_event_loop()
return await loop.run_in_executor(
None,
lambda: self.gitea.update_issue(issue_number, title, body, state, labels, repo)
lambda: self.gitea.update_issue(issue_number, title, body, state, labels, milestone, repo)
)
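# Illustrative usage (sketch): assuming `issue_tools` is an IssueTools instance, closing
# an issue and attaching it to a milestone is now a single call; the IDs are placeholders:
#
#     await issue_tools.update_issue(
#         issue_number=42,
#         state="closed",
#         milestone=7,       # milestone ID, not its title
#     )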
async def add_comment(

View File

@@ -7,6 +7,7 @@ Provides async wrappers for PR operations with:
- Comprehensive error handling
"""
import asyncio
import os
import subprocess
import logging
from typing import List, Dict, Optional
@@ -27,19 +28,34 @@ class PullRequestTools:
"""
self.gitea = gitea_client
def _get_project_directory(self) -> Optional[str]:
"""
Get the user's project directory from environment.
Returns:
Project directory path or None if not set
"""
return os.environ.get('CLAUDE_PROJECT_DIR')
def _get_current_branch(self) -> str:
"""
Get current git branch.
Get current git branch from user's project directory.
Uses CLAUDE_PROJECT_DIR environment variable to determine the correct
directory for git operations, avoiding the bug where git runs from
the installed plugin directory instead of the user's project.
Returns:
Current branch name or 'unknown' if not in a git repo
"""
try:
project_dir = self._get_project_directory()
result = subprocess.run(
['git', 'rev-parse', '--abbrev-ref', 'HEAD'],
capture_output=True,
text=True,
check=True
check=True,
cwd=project_dir # Run git in project directory, not plugin directory
)
return result.stdout.strip()
except subprocess.CalledProcessError:
@@ -69,7 +85,13 @@ class PullRequestTools:
return operation in read_ops + ['add_pr_comment']
# Development branches (full access)
if branch in ['development', 'develop'] or branch.startswith(('feat/', 'feature/', 'dev/')):
# Include all common feature/fix branch patterns
dev_prefixes = (
'feat/', 'feature/', 'dev/',
'fix/', 'bugfix/', 'hotfix/',
'chore/', 'refactor/', 'docs/', 'test/'
)
if branch in ['development', 'develop'] or branch.startswith(dev_prefixes):
return True
# Unknown branch - be restrictive
@@ -272,3 +294,42 @@ class PullRequestTools:
None,
lambda: self.gitea.add_pr_comment(pr_number, body, repo)
)
async def create_pull_request(
self,
title: str,
body: str,
head: str,
base: str,
labels: Optional[List[str]] = None,
repo: Optional[str] = None
) -> Dict:
"""
Create a new pull request (async wrapper with branch check).
Args:
title: PR title
body: PR description/body
head: Source branch name (the branch with changes)
base: Target branch name (the branch to merge into)
labels: Optional list of label names
repo: Override configured repo (for PMO multi-repo)
Returns:
Created pull request dictionary
Raises:
PermissionError: If operation not allowed on current branch
"""
if not self._check_branch_permissions('create_pull_request'):
branch = self._get_current_branch()
raise PermissionError(
f"Cannot create PR on branch '{branch}'. "
f"Switch to a development or feature branch to create PRs."
)
loop = asyncio.get_event_loop()
return await loop.run_in_executor(
None,
lambda: self.gitea.create_pull_request(title, body, head, base, labels, repo)
)
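# Illustrative behaviour (sketch): assuming `pr_tools` is a PullRequestTools instance,
# the wrapper enforces the branch check before calling the API. On a protected branch
# such as main this is expected to raise PermissionError; on development or a
# feat/fix/chore/... branch it delegates to the client:
#
#     pr = await pr_tools.create_pull_request(
#         title="feat: sprint work",
#         body="Implements the sprint issues.",
#         head="feat/sprint-4-commands",
#         base="development",
#     )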

mcp-servers/gitea/run.sh Executable file
View File

@@ -0,0 +1,21 @@
#!/bin/bash
# Capture original working directory before any cd operations
# This should be the user's project directory when launched by Claude Code
export CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/gitea/.venv"
LOCAL_VENV="$SCRIPT_DIR/.venv"
if [[ -f "$CACHE_VENV/bin/python" ]]; then
PYTHON="$CACHE_VENV/bin/python"
elif [[ -f "$LOCAL_VENV/bin/python" ]]; then
PYTHON="$LOCAL_VENV/bin/python"
else
echo "ERROR: No venv found. Run: ./scripts/setup-venvs.sh" >&2
exit 1
fi
cd "$SCRIPT_DIR"
export PYTHONPATH="$SCRIPT_DIR"
exec "$PYTHON" -m mcp_server.server "$@"

mcp-servers/netbox/run.sh Executable file
View File

@@ -0,0 +1,21 @@
#!/bin/bash
# Capture original working directory before any cd operations
# This should be the user's project directory when launched by Claude Code
export CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/netbox/.venv"
LOCAL_VENV="$SCRIPT_DIR/.venv"
if [[ -f "$CACHE_VENV/bin/python" ]]; then
PYTHON="$CACHE_VENV/bin/python"
elif [[ -f "$LOCAL_VENV/bin/python" ]]; then
PYTHON="$LOCAL_VENV/bin/python"
else
echo "ERROR: No venv found. Run: ./scripts/setup-venvs.sh" >&2
exit 1
fi
cd "$SCRIPT_DIR"
export PYTHONPATH="$SCRIPT_DIR"
exec "$PYTHON" -m mcp_server.server "$@"

View File

@@ -0,0 +1,479 @@
"""
Accessibility validation tools for color blindness and WCAG compliance.
Provides tools for validating color palettes against color blindness
simulations and WCAG contrast requirements.
"""
import logging
import math
from typing import Dict, List, Optional, Any, Tuple
logger = logging.getLogger(__name__)
# Color-blind safe palettes
SAFE_PALETTES = {
"categorical": {
"name": "Paul Tol's Qualitative",
"colors": ["#4477AA", "#EE6677", "#228833", "#CCBB44", "#66CCEE", "#AA3377", "#BBBBBB"],
"description": "Distinguishable for all types of color blindness"
},
"ibm": {
"name": "IBM Design",
"colors": ["#648FFF", "#785EF0", "#DC267F", "#FE6100", "#FFB000"],
"description": "IBM's accessible color palette"
},
"okabe_ito": {
"name": "Okabe-Ito",
"colors": ["#E69F00", "#56B4E9", "#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7", "#000000"],
"description": "Optimized for all color vision deficiencies"
},
"tableau_colorblind": {
"name": "Tableau Colorblind 10",
"colors": ["#006BA4", "#FF800E", "#ABABAB", "#595959", "#5F9ED1",
"#C85200", "#898989", "#A2C8EC", "#FFBC79", "#CFCFCF"],
"description": "Industry-standard accessible palette"
}
}
# Simulation matrices for color blindness (LMS color space transformation)
# These approximate how colors appear to people with different types of color blindness
SIMULATION_MATRICES = {
"deuteranopia": {
# Green-blind (most common)
"severity": "common",
"population": "6% males, 0.4% females",
"description": "Difficulty distinguishing red from green (green-blind)",
"matrix": [
[0.625, 0.375, 0.0],
[0.700, 0.300, 0.0],
[0.0, 0.300, 0.700]
]
},
"protanopia": {
# Red-blind
"severity": "common",
"population": "2.5% males, 0.05% females",
"description": "Difficulty distinguishing red from green (red-blind)",
"matrix": [
[0.567, 0.433, 0.0],
[0.558, 0.442, 0.0],
[0.0, 0.242, 0.758]
]
},
"tritanopia": {
# Blue-blind (rare)
"severity": "rare",
"population": "0.01% total",
"description": "Difficulty distinguishing blue from yellow",
"matrix": [
[0.950, 0.050, 0.0],
[0.0, 0.433, 0.567],
[0.0, 0.475, 0.525]
]
}
}
class AccessibilityTools:
"""
Color accessibility validation tools.
Validates colors for WCAG compliance and color blindness accessibility.
"""
def __init__(self, theme_store=None):
"""
Initialize accessibility tools.
Args:
theme_store: Optional ThemeStore for theme color extraction
"""
self.theme_store = theme_store
def _hex_to_rgb(self, hex_color: str) -> Tuple[int, int, int]:
"""Convert hex color to RGB tuple."""
hex_color = hex_color.lstrip('#')
if len(hex_color) == 3:
hex_color = ''.join([c * 2 for c in hex_color])
return tuple(int(hex_color[i:i+2], 16) for i in (0, 2, 4))
def _rgb_to_hex(self, rgb: Tuple[int, int, int]) -> str:
"""Convert RGB tuple to hex color."""
return '#{:02x}{:02x}{:02x}'.format(
max(0, min(255, int(rgb[0]))),
max(0, min(255, int(rgb[1]))),
max(0, min(255, int(rgb[2])))
)
def _get_relative_luminance(self, rgb: Tuple[int, int, int]) -> float:
"""
Calculate relative luminance per WCAG 2.1.
https://www.w3.org/WAI/GL/wiki/Relative_luminance
"""
def channel_luminance(value: int) -> float:
v = value / 255
return v / 12.92 if v <= 0.03928 else ((v + 0.055) / 1.055) ** 2.4
r, g, b = rgb
return (
0.2126 * channel_luminance(r) +
0.7152 * channel_luminance(g) +
0.0722 * channel_luminance(b)
)
def _get_contrast_ratio(self, color1: str, color2: str) -> float:
"""
Calculate contrast ratio between two colors per WCAG 2.1.
Returns ratio between 1:1 and 21:1.
"""
rgb1 = self._hex_to_rgb(color1)
rgb2 = self._hex_to_rgb(color2)
l1 = self._get_relative_luminance(rgb1)
l2 = self._get_relative_luminance(rgb2)
lighter = max(l1, l2)
darker = min(l1, l2)
return (lighter + 0.05) / (darker + 0.05)
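# Worked example (WCAG 2.1): white (#FFFFFF) has relative luminance 1.0 and black
# (#000000) has 0.0, so the ratio is (1.0 + 0.05) / (0.0 + 0.05) = 21:1, the maximum;
# identical colors give (L + 0.05) / (L + 0.05) = 1:1, the minimum.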
def _simulate_color_blindness(
self,
hex_color: str,
deficiency_type: str
) -> str:
"""
Simulate how a color appears with a specific color blindness type.
Uses linear RGB transformation approximation.
"""
if deficiency_type not in SIMULATION_MATRICES:
return hex_color
rgb = self._hex_to_rgb(hex_color)
matrix = SIMULATION_MATRICES[deficiency_type]["matrix"]
# Apply transformation matrix
r = rgb[0] * matrix[0][0] + rgb[1] * matrix[0][1] + rgb[2] * matrix[0][2]
g = rgb[0] * matrix[1][0] + rgb[1] * matrix[1][1] + rgb[2] * matrix[1][2]
b = rgb[0] * matrix[2][0] + rgb[1] * matrix[2][1] + rgb[2] * matrix[2][2]
return self._rgb_to_hex((r, g, b))
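# Worked example using the deuteranopia matrix above: pure red #FF0000 (255, 0, 0) maps
# to r = int(255 * 0.625) = 159, g = int(255 * 0.700) = 178, b = 0, i.e. roughly #9fb200,
# which is why red/green pairs lose much of their separation under this simulation.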
def _get_color_distance(self, color1: str, color2: str) -> float:
"""
Calculate approximate perceptual color distance (Euclidean distance in RGB space).
The validator treats simulated distances below ~30 as potentially hard to distinguish.
"""
rgb1 = self._hex_to_rgb(color1)
rgb2 = self._hex_to_rgb(color2)
# Simple Euclidean distance in RGB space (approximation)
# For production, should use CIEDE2000
return math.sqrt(
(rgb1[0] - rgb2[0]) ** 2 +
(rgb1[1] - rgb2[1]) ** 2 +
(rgb1[2] - rgb2[2]) ** 2
)
async def accessibility_validate_colors(
self,
colors: List[str],
check_types: Optional[List[str]] = None,
min_contrast_ratio: float = 4.5
) -> Dict[str, Any]:
"""
Validate a list of colors for accessibility.
Args:
colors: List of hex colors to validate
check_types: Color blindness types to check (default: all)
min_contrast_ratio: Minimum WCAG contrast ratio (default: 4.5 for AA)
Returns:
Dict with:
- issues: List of accessibility issues found
- simulations: How colors appear under each deficiency
- recommendations: Suggestions for improvement
- safe_palettes: Color-blind safe palette suggestions
"""
check_types = check_types or list(SIMULATION_MATRICES.keys())
issues = []
simulations = {}
# Normalize colors
normalized_colors = [c.upper() if c.startswith('#') else f'#{c.upper()}' for c in colors]
# Simulate each color blindness type
for deficiency in check_types:
if deficiency not in SIMULATION_MATRICES:
continue
simulated = [self._simulate_color_blindness(c, deficiency) for c in normalized_colors]
simulations[deficiency] = {
"original": normalized_colors,
"simulated": simulated,
"info": SIMULATION_MATRICES[deficiency]
}
# Check if any color pairs become indistinguishable
for i in range(len(normalized_colors)):
for j in range(i + 1, len(normalized_colors)):
distance = self._get_color_distance(simulated[i], simulated[j])
if distance < 30: # Threshold for distinguishability
issues.append({
"type": "distinguishability",
"severity": "warning" if distance > 15 else "error",
"colors": [normalized_colors[i], normalized_colors[j]],
"affected_by": [deficiency],
"simulated_colors": [simulated[i], simulated[j]],
"distance": round(distance, 1),
"message": f"Colors may be hard to distinguish for {deficiency} ({SIMULATION_MATRICES[deficiency]['description']})"
})
# Check contrast ratios against white and black backgrounds
for color in normalized_colors:
white_contrast = self._get_contrast_ratio(color, "#FFFFFF")
black_contrast = self._get_contrast_ratio(color, "#000000")
if white_contrast < min_contrast_ratio and black_contrast < min_contrast_ratio:
issues.append({
"type": "contrast_ratio",
"severity": "error",
"colors": [color],
"white_contrast": round(white_contrast, 2),
"black_contrast": round(black_contrast, 2),
"required": min_contrast_ratio,
"message": f"Insufficient contrast against both white ({white_contrast:.1f}:1) and black ({black_contrast:.1f}:1) backgrounds"
})
# Generate recommendations
recommendations = self._generate_recommendations(issues)
# Calculate overall score
error_count = sum(1 for i in issues if i["severity"] == "error")
warning_count = sum(1 for i in issues if i["severity"] == "warning")
if error_count == 0 and warning_count == 0:
score = "A"
elif error_count == 0 and warning_count <= 2:
score = "B"
elif error_count <= 2:
score = "C"
else:
score = "D"
return {
"colors_checked": normalized_colors,
"overall_score": score,
"issue_count": len(issues),
"issues": issues,
"simulations": simulations,
"recommendations": recommendations,
"safe_palettes": SAFE_PALETTES
}
async def accessibility_validate_theme(
self,
theme_name: str
) -> Dict[str, Any]:
"""
Validate a theme's colors for accessibility.
Args:
theme_name: Theme name to validate
Returns:
Dict with accessibility validation results
"""
if not self.theme_store:
return {
"error": "Theme store not configured",
"theme_name": theme_name
}
theme = self.theme_store.get_theme(theme_name)
if not theme:
available = self.theme_store.list_themes()
return {
"error": f"Theme '{theme_name}' not found. Available: {available}",
"theme_name": theme_name
}
# Extract colors from theme
colors = []
tokens = theme.get("tokens", {})
color_tokens = tokens.get("colors", {})
def extract_colors(obj, prefix=""):
"""Recursively extract color values."""
if isinstance(obj, str) and (obj.startswith('#') or len(obj) == 6):
colors.append(obj if obj.startswith('#') else f'#{obj}')
elif isinstance(obj, dict):
for key, value in obj.items():
extract_colors(value, f"{prefix}.{key}")
elif isinstance(obj, list):
for item in obj:
extract_colors(item, prefix)
extract_colors(color_tokens)
# Validate extracted colors
result = await self.accessibility_validate_colors(colors)
result["theme_name"] = theme_name
# Add theme-specific checks
primary = color_tokens.get("primary")
background = color_tokens.get("background", {})
text = color_tokens.get("text", {})
if primary and background:
bg_color = background.get("base") if isinstance(background, dict) else background
if bg_color:
contrast = self._get_contrast_ratio(primary, bg_color)
if contrast < 4.5:
result["issues"].append({
"type": "primary_contrast",
"severity": "error",
"colors": [primary, bg_color],
"ratio": round(contrast, 2),
"required": 4.5,
"message": f"Primary color has insufficient contrast ({contrast:.1f}:1) against background"
})
return result
async def accessibility_suggest_alternative(
self,
color: str,
deficiency_type: str
) -> Dict[str, Any]:
"""
Suggest accessible alternative colors.
Args:
color: Original hex color
deficiency_type: Type of color blindness to optimize for
Returns:
Dict with alternative color suggestions
"""
rgb = self._hex_to_rgb(color)
suggestions = []
# Suggest shifting hue while maintaining saturation and brightness
# For red-green deficiency, shift toward blue or yellow
if deficiency_type in ["deuteranopia", "protanopia"]:
# Shift toward blue
blue_shift = self._rgb_to_hex((
max(0, rgb[0] - 50),
max(0, rgb[1] - 30),
min(255, rgb[2] + 80)
))
suggestions.append({
"color": blue_shift,
"description": "Blue-shifted alternative",
"preserves": "approximate brightness"
})
# Shift toward yellow/orange
yellow_shift = self._rgb_to_hex((
min(255, rgb[0] + 50),
min(255, rgb[1] + 30),
max(0, rgb[2] - 80)
))
suggestions.append({
"color": yellow_shift,
"description": "Yellow-shifted alternative",
"preserves": "approximate brightness"
})
elif deficiency_type == "tritanopia":
# For blue-yellow deficiency, shift toward red or green
red_shift = self._rgb_to_hex((
min(255, rgb[0] + 60),
max(0, rgb[1] - 20),
max(0, rgb[2] - 40)
))
suggestions.append({
"color": red_shift,
"description": "Red-shifted alternative",
"preserves": "approximate brightness"
})
# Add safe palette suggestions
for palette_name, palette in SAFE_PALETTES.items():
# Find closest color in safe palette
min_distance = float('inf')
closest = None
for safe_color in palette["colors"]:
distance = self._get_color_distance(color, safe_color)
if distance < min_distance:
min_distance = distance
closest = safe_color
if closest:
suggestions.append({
"color": closest,
"description": f"From {palette['name']} palette",
"palette": palette_name
})
return {
"original_color": color,
"deficiency_type": deficiency_type,
"suggestions": suggestions[:5] # Limit to 5 suggestions
}
def _generate_recommendations(self, issues: List[Dict[str, Any]]) -> List[str]:
"""Generate actionable recommendations based on issues."""
recommendations = []
# Check for distinguishability issues
distinguishability_issues = [i for i in issues if i["type"] == "distinguishability"]
if distinguishability_issues:
affected_types = set()
for issue in distinguishability_issues:
affected_types.update(issue.get("affected_by", []))
if "deuteranopia" in affected_types or "protanopia" in affected_types:
recommendations.append(
"Avoid using red and green as the only differentiators - "
"add patterns, shapes, or labels"
)
recommendations.append(
"Consider using a color-blind safe palette like Okabe-Ito or IBM Design"
)
# Check for contrast issues
contrast_issues = [i for i in issues if i["type"] in ["contrast_ratio", "primary_contrast"]]
if contrast_issues:
recommendations.append(
"Increase contrast by darkening colors for light backgrounds "
"or lightening for dark backgrounds"
)
recommendations.append(
"Use WCAG contrast checker tools to verify text readability"
)
# General recommendations
if len(issues) > 0:
recommendations.append(
"Add secondary visual cues (icons, patterns, labels) "
"to not rely solely on color"
)
if not recommendations:
recommendations.append(
"Color palette appears accessible! Consider adding patterns "
"for additional distinguishability"
)
return recommendations
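if __name__ == "__main__":
    # Minimal self-check sketch (illustrative, not required by the MCP server):
    # validate a classic red/green pairing and print the overall grade.
    import asyncio
    import json

    async def _demo() -> None:
        tools = AccessibilityTools()
        report = await tools.accessibility_validate_colors(["#FF0000", "#00FF00"])
        print(report["overall_score"], f"{report['issue_count']} issue(s)")
        print(json.dumps(report["recommendations"], indent=2))

    asyncio.run(_demo())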

View File

@@ -3,11 +3,21 @@ Chart creation tools using Plotly.
Provides tools for creating data visualizations with automatic theme integration.
"""
import base64
import logging
import os
from typing import Dict, List, Optional, Any, Union
logger = logging.getLogger(__name__)
# Check for kaleido availability
KALEIDO_AVAILABLE = False
try:
import kaleido
KALEIDO_AVAILABLE = True
except ImportError:
logger.debug("kaleido not installed - chart export will be unavailable")
# Default color palette based on Mantine theme
DEFAULT_COLORS = [
@@ -395,3 +405,129 @@ class ChartTools:
"figure": figure,
"interactions_added": []
}
async def chart_export(
self,
figure: Dict[str, Any],
format: str = "png",
width: Optional[int] = None,
height: Optional[int] = None,
scale: float = 2.0,
output_path: Optional[str] = None
) -> Dict[str, Any]:
"""
Export a Plotly chart to a static image format.
Args:
figure: Plotly figure JSON to export
format: Output format - png, svg, or pdf
width: Image width in pixels (default: from figure or 1200)
height: Image height in pixels (default: from figure or 800)
scale: Resolution scale factor (default: 2 for retina)
output_path: Optional file path to save the image
Returns:
Dict with:
- image_data: Base64-encoded image (if no output_path)
- file_path: Path to saved file (if output_path provided)
- format: Export format used
- dimensions: {width, height, scale}
- error: Error message if export failed
"""
# Validate format
valid_formats = ['png', 'svg', 'pdf']
format = format.lower()
if format not in valid_formats:
return {
"error": f"Invalid format '{format}'. Must be one of: {valid_formats}",
"format": format,
"image_data": None
}
# Check kaleido availability
if not KALEIDO_AVAILABLE:
return {
"error": "kaleido package not installed. Install with: pip install kaleido",
"format": format,
"image_data": None,
"install_hint": "pip install kaleido"
}
# Validate figure
if not figure or 'data' not in figure:
return {
"error": "Invalid figure: must contain 'data' key",
"format": format,
"image_data": None
}
try:
import plotly.graph_objects as go
import plotly.io as pio
# Create Plotly figure object
fig = go.Figure(figure)
# Determine dimensions
layout = figure.get('layout', {})
export_width = width or layout.get('width') or 1200
export_height = height or layout.get('height') or 800
# Export to bytes
image_bytes = pio.to_image(
fig,
format=format,
width=export_width,
height=export_height,
scale=scale
)
result = {
"format": format,
"dimensions": {
"width": export_width,
"height": export_height,
"scale": scale,
"effective_width": int(export_width * scale),
"effective_height": int(export_height * scale)
}
}
# Save to file or return base64
if output_path:
# Ensure directory exists
output_dir = os.path.dirname(output_path)
if output_dir and not os.path.exists(output_dir):
os.makedirs(output_dir, exist_ok=True)
# Add extension if missing
if not output_path.endswith(f'.{format}'):
output_path = f"{output_path}.{format}"
with open(output_path, 'wb') as f:
f.write(image_bytes)
result["file_path"] = output_path
result["file_size_bytes"] = len(image_bytes)
else:
# Return as base64
result["image_data"] = base64.b64encode(image_bytes).decode('utf-8')
result["data_uri"] = f"data:image/{format};base64,{result['image_data']}"
return result
except ImportError as e:
logger.error(f"Chart export failed - missing dependency: {e}")
return {
"error": f"Missing dependency for export: {e}",
"format": format,
"image_data": None,
"install_hint": "pip install plotly kaleido"
}
except Exception as e:
logger.error(f"Chart export failed: {e}")
return {
"error": str(e),
"format": format,
"image_data": None
}
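# Illustrative usage (sketch): assuming `chart_tools` is a ChartTools instance and
# `figure` is a Plotly figure dict (e.g. produced by an earlier chart tool), export
# a retina-scale PNG to disk:
#
#     result = await chart_tools.chart_export(
#         figure=figure,
#         format="png",
#         scale=2.0,
#         output_path="exports/revenue",   # ".png" is appended automatically
#     )
#     # result["file_path"] -> "exports/revenue.png"; without output_path the image is
#     # returned base64-encoded in result["image_data"] / result["data_uri"].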

View File

@@ -10,6 +10,46 @@ from uuid import uuid4
logger = logging.getLogger(__name__)
# Standard responsive breakpoints (Mantine/Bootstrap-aligned)
DEFAULT_BREAKPOINTS = {
"xs": {
"min_width": "0px",
"max_width": "575px",
"cols": 1,
"spacing": "xs",
"description": "Extra small devices (phones, portrait)"
},
"sm": {
"min_width": "576px",
"max_width": "767px",
"cols": 2,
"spacing": "sm",
"description": "Small devices (phones, landscape)"
},
"md": {
"min_width": "768px",
"max_width": "991px",
"cols": 6,
"spacing": "md",
"description": "Medium devices (tablets)"
},
"lg": {
"min_width": "992px",
"max_width": "1199px",
"cols": 12,
"spacing": "md",
"description": "Large devices (desktops)"
},
"xl": {
"min_width": "1200px",
"max_width": None,
"cols": 12,
"spacing": "lg",
"description": "Extra large devices (large desktops)"
}
}
# Layout templates
TEMPLATES = {
"dashboard": {
@@ -365,3 +405,149 @@ class LayoutTools:
}
for name, config in FILTER_TYPES.items()
}
async def layout_set_breakpoints(
self,
layout_ref: str,
breakpoints: Dict[str, Dict[str, Any]],
mobile_first: bool = True
) -> Dict[str, Any]:
"""
Configure responsive breakpoints for a layout.
Args:
layout_ref: Layout name to configure
breakpoints: Breakpoint configuration dict:
{
"xs": {"cols": 1, "spacing": "xs"},
"sm": {"cols": 2, "spacing": "sm"},
"md": {"cols": 6, "spacing": "md"},
"lg": {"cols": 12, "spacing": "md"},
"xl": {"cols": 12, "spacing": "lg"}
}
mobile_first: If True, use min-width media queries (default)
Returns:
Dict with:
- breakpoints: Complete breakpoint configuration
- css_media_queries: Generated CSS media queries
- mobile_first: Whether mobile-first approach is used
"""
# Validate layout exists
if layout_ref not in self._layouts:
return {
"error": f"Layout '{layout_ref}' not found. Create it first with layout_create.",
"breakpoints": None
}
layout = self._layouts[layout_ref]
# Validate breakpoint names
valid_breakpoints = ["xs", "sm", "md", "lg", "xl"]
for bp_name in breakpoints.keys():
if bp_name not in valid_breakpoints:
return {
"error": f"Invalid breakpoint '{bp_name}'. Must be one of: {valid_breakpoints}",
"breakpoints": layout.get("breakpoints")
}
# Merge with defaults
merged_breakpoints = {}
for bp_name in valid_breakpoints:
default = DEFAULT_BREAKPOINTS[bp_name].copy()
if bp_name in breakpoints:
default.update(breakpoints[bp_name])
merged_breakpoints[bp_name] = default
# Validate spacing values
valid_spacing = ["xs", "sm", "md", "lg", "xl"]
for bp_name, bp_config in merged_breakpoints.items():
if "spacing" in bp_config and bp_config["spacing"] not in valid_spacing:
return {
"error": f"Invalid spacing '{bp_config['spacing']}' for breakpoint '{bp_name}'. Must be one of: {valid_spacing}",
"breakpoints": layout.get("breakpoints")
}
# Validate column counts
for bp_name, bp_config in merged_breakpoints.items():
if "cols" in bp_config:
cols = bp_config["cols"]
if not isinstance(cols, int) or cols < 1 or cols > 24:
return {
"error": f"Invalid cols '{cols}' for breakpoint '{bp_name}'. Must be integer between 1 and 24.",
"breakpoints": layout.get("breakpoints")
}
# Generate CSS media queries
css_queries = self._generate_media_queries(merged_breakpoints, mobile_first)
# Store in layout
layout["breakpoints"] = merged_breakpoints
layout["mobile_first"] = mobile_first
layout["responsive_css"] = css_queries
return {
"layout_ref": layout_ref,
"breakpoints": merged_breakpoints,
"mobile_first": mobile_first,
"css_media_queries": css_queries
}
def _generate_media_queries(
self,
breakpoints: Dict[str, Dict[str, Any]],
mobile_first: bool
) -> List[str]:
"""Generate CSS media queries for breakpoints."""
queries = []
bp_order = ["xs", "sm", "md", "lg", "xl"]
if mobile_first:
# Use min-width queries (mobile-first)
for bp_name in bp_order[1:]: # Skip xs (base styles)
bp = breakpoints[bp_name]
min_width = bp.get("min_width", DEFAULT_BREAKPOINTS[bp_name]["min_width"])
if min_width and min_width != "0px":
queries.append(f"@media (min-width: {min_width}) {{ /* {bp_name} styles */ }}")
else:
# Use max-width queries (desktop-first)
for bp_name in reversed(bp_order[:-1]): # Skip xl (base styles)
bp = breakpoints[bp_name]
max_width = bp.get("max_width", DEFAULT_BREAKPOINTS[bp_name]["max_width"])
if max_width:
queries.append(f"@media (max-width: {max_width}) {{ /* {bp_name} styles */ }}")
return queries
async def layout_get_breakpoints(self, layout_ref: str) -> Dict[str, Any]:
"""
Get the breakpoint configuration for a layout.
Args:
layout_ref: Layout name
Returns:
Dict with breakpoint configuration
"""
if layout_ref not in self._layouts:
return {
"error": f"Layout '{layout_ref}' not found.",
"breakpoints": None
}
layout = self._layouts[layout_ref]
return {
"layout_ref": layout_ref,
"breakpoints": layout.get("breakpoints", DEFAULT_BREAKPOINTS.copy()),
"mobile_first": layout.get("mobile_first", True),
"css_media_queries": layout.get("responsive_css", [])
}
def get_default_breakpoints(self) -> Dict[str, Any]:
"""Get the default breakpoint configuration."""
return {
"breakpoints": DEFAULT_BREAKPOINTS.copy(),
"description": "Standard responsive breakpoints aligned with Mantine/Bootstrap",
"mobile_first": True
}
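# Illustrative usage (sketch): assuming `layout_tools` is a LayoutTools instance and a
# layout named "sales-dashboard" was already created with layout_create:
#
#     result = await layout_tools.layout_set_breakpoints(
#         layout_ref="sales-dashboard",
#         breakpoints={"xs": {"cols": 1}, "md": {"cols": 6}, "lg": {"cols": 12}},
#         mobile_first=True,
#     )
#     # Unspecified breakpoints fall back to DEFAULT_BREAKPOINTS, and
#     # result["css_media_queries"] contains min-width queries such as
#     # "@media (min-width: 768px) { /* md styles */ }".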

View File

@@ -17,6 +17,7 @@ from .chart_tools import ChartTools
from .layout_tools import LayoutTools
from .theme_tools import ThemeTools
from .page_tools import PageTools
from .accessibility_tools import AccessibilityTools
# Suppress noisy MCP validation warnings on stderr
logging.basicConfig(level=logging.INFO)
@@ -36,6 +37,7 @@ class VizPlatformMCPServer:
self.layout_tools = LayoutTools()
self.theme_tools = ThemeTools()
self.page_tools = PageTools()
self.accessibility_tools = AccessibilityTools(theme_store=self.theme_tools.store)
async def initialize(self):
"""Initialize server and load configuration."""
@@ -198,6 +200,46 @@ class VizPlatformMCPServer:
}
))
# Chart export tool (Issue #247)
tools.append(Tool(
name="chart_export",
description=(
"Export a Plotly chart to static image format (PNG, SVG, PDF). "
"Requires kaleido package. Returns base64 image data or saves to file."
),
inputSchema={
"type": "object",
"properties": {
"figure": {
"type": "object",
"description": "Plotly figure JSON to export"
},
"format": {
"type": "string",
"enum": ["png", "svg", "pdf"],
"description": "Output format (default: png)"
},
"width": {
"type": "integer",
"description": "Image width in pixels (default: 1200)"
},
"height": {
"type": "integer",
"description": "Image height in pixels (default: 800)"
},
"scale": {
"type": "number",
"description": "Resolution scale factor (default: 2 for retina)"
},
"output_path": {
"type": "string",
"description": "Optional file path to save image"
}
},
"required": ["figure"]
}
))
# Layout tools (Issue #174)
tools.append(Tool(
name="layout_create",
@@ -280,6 +322,36 @@ class VizPlatformMCPServer:
}
))
# Responsive breakpoints tool (Issue #249)
tools.append(Tool(
name="layout_set_breakpoints",
description=(
"Configure responsive breakpoints for a layout. "
"Supports xs, sm, md, lg, xl breakpoints with mobile-first approach. "
"Each breakpoint can define cols, spacing, and other grid properties."
),
inputSchema={
"type": "object",
"properties": {
"layout_ref": {
"type": "string",
"description": "Layout name to configure"
},
"breakpoints": {
"type": "object",
"description": (
"Breakpoint config: {xs: {cols, spacing}, sm: {...}, md: {...}, lg: {...}, xl: {...}}"
)
},
"mobile_first": {
"type": "boolean",
"description": "Use mobile-first (min-width) media queries (default: true)"
}
},
"required": ["layout_ref", "breakpoints"]
}
))
# Theme tools (Issue #175)
tools.append(Tool(
name="theme_create",
@@ -451,6 +523,77 @@ class VizPlatformMCPServer:
}
))
# Accessibility tools (Issue #248)
tools.append(Tool(
name="accessibility_validate_colors",
description=(
"Validate colors for color blind accessibility. "
"Checks contrast ratios for deuteranopia, protanopia, tritanopia. "
"Returns issues, simulations, and accessible palette suggestions."
),
inputSchema={
"type": "object",
"properties": {
"colors": {
"type": "array",
"items": {"type": "string"},
"description": "List of hex colors to validate (e.g., ['#228be6', '#40c057'])"
},
"check_types": {
"type": "array",
"items": {"type": "string"},
"description": "Color blindness types to check: deuteranopia, protanopia, tritanopia (default: all)"
},
"min_contrast_ratio": {
"type": "number",
"description": "Minimum WCAG contrast ratio (default: 4.5 for AA)"
}
},
"required": ["colors"]
}
))
tools.append(Tool(
name="accessibility_validate_theme",
description=(
"Validate a theme's colors for accessibility. "
"Extracts all colors from theme tokens and checks for color blind safety."
),
inputSchema={
"type": "object",
"properties": {
"theme_name": {
"type": "string",
"description": "Theme name to validate"
}
},
"required": ["theme_name"]
}
))
tools.append(Tool(
name="accessibility_suggest_alternative",
description=(
"Suggest accessible alternative colors for a given color. "
"Provides alternatives optimized for specific color blindness types."
),
inputSchema={
"type": "object",
"properties": {
"color": {
"type": "string",
"description": "Hex color to find alternatives for"
},
"deficiency_type": {
"type": "string",
"enum": ["deuteranopia", "protanopia", "tritanopia"],
"description": "Color blindness type to optimize for"
}
},
"required": ["color", "deficiency_type"]
}
))
return tools
@self.server.call_tool()
@@ -524,6 +667,26 @@ class VizPlatformMCPServer:
text=json.dumps(result, indent=2)
)]
elif name == "chart_export":
figure = arguments.get('figure')
if not figure:
return [TextContent(
type="text",
text=json.dumps({"error": "figure is required"}, indent=2)
)]
result = await self.chart_tools.chart_export(
figure=figure,
format=arguments.get('format', 'png'),
width=arguments.get('width'),
height=arguments.get('height'),
scale=arguments.get('scale', 2.0),
output_path=arguments.get('output_path')
)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
# Layout tools
elif name == "layout_create":
layout_name = arguments.get('name')
@@ -568,6 +731,23 @@ class VizPlatformMCPServer:
text=json.dumps(result, indent=2)
)]
elif name == "layout_set_breakpoints":
layout_ref = arguments.get('layout_ref')
breakpoints = arguments.get('breakpoints', {})
mobile_first = arguments.get('mobile_first', True)
if not layout_ref:
return [TextContent(
type="text",
text=json.dumps({"error": "layout_ref is required"}, indent=2)
)]
result = await self.layout_tools.layout_set_breakpoints(
layout_ref, breakpoints, mobile_first
)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
# Theme tools
elif name == "theme_create":
theme_name = arguments.get('name')
@@ -669,6 +849,53 @@ class VizPlatformMCPServer:
text=json.dumps(result, indent=2)
)]
# Accessibility tools
elif name == "accessibility_validate_colors":
colors = arguments.get('colors')
if not colors:
return [TextContent(
type="text",
text=json.dumps({"error": "colors list is required"}, indent=2)
)]
result = await self.accessibility_tools.accessibility_validate_colors(
colors=colors,
check_types=arguments.get('check_types'),
min_contrast_ratio=arguments.get('min_contrast_ratio', 4.5)
)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
elif name == "accessibility_validate_theme":
theme_name = arguments.get('theme_name')
if not theme_name:
return [TextContent(
type="text",
text=json.dumps({"error": "theme_name is required"}, indent=2)
)]
result = await self.accessibility_tools.accessibility_validate_theme(theme_name)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
elif name == "accessibility_suggest_alternative":
color = arguments.get('color')
deficiency_type = arguments.get('deficiency_type')
if not color or not deficiency_type:
return [TextContent(
type="text",
text=json.dumps({"error": "color and deficiency_type are required"}, indent=2)
)]
result = await self.accessibility_tools.accessibility_suggest_alternative(
color, deficiency_type
)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
raise ValueError(f"Unknown tool: {name}")
except Exception as e:

View File

@@ -5,6 +5,7 @@ mcp>=0.9.0
plotly>=5.18.0
dash>=2.14.0
dash-mantine-components>=2.0.0
kaleido>=0.2.1 # For chart export (PNG, SVG, PDF)
# Utilities
python-dotenv>=1.0.0

mcp-servers/viz-platform/run.sh Executable file
View File

@@ -0,0 +1,21 @@
#!/bin/bash
# Capture original working directory before any cd operations
# This should be the user's project directory when launched by Claude Code
export CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/viz-platform/.venv"
LOCAL_VENV="$SCRIPT_DIR/.venv"
if [[ -f "$CACHE_VENV/bin/python" ]]; then
PYTHON="$CACHE_VENV/bin/python"
elif [[ -f "$LOCAL_VENV/bin/python" ]]; then
PYTHON="$LOCAL_VENV/bin/python"
else
echo "ERROR: No venv found. Run: ./scripts/setup-venvs.sh" >&2
exit 1
fi
cd "$SCRIPT_DIR"
export PYTHONPATH="$SCRIPT_DIR"
exec "$PYTHON" -m mcp_server.server "$@"

View File

@@ -0,0 +1,195 @@
"""
Tests for accessibility validation tools.
"""
import pytest
from mcp_server.accessibility_tools import AccessibilityTools
@pytest.fixture
def tools():
"""Create AccessibilityTools instance."""
return AccessibilityTools()
class TestHexToRgb:
"""Tests for _hex_to_rgb method."""
def test_hex_to_rgb_6_digit(self, tools):
"""Test 6-digit hex conversion."""
assert tools._hex_to_rgb("#FF0000") == (255, 0, 0)
assert tools._hex_to_rgb("#00FF00") == (0, 255, 0)
assert tools._hex_to_rgb("#0000FF") == (0, 0, 255)
def test_hex_to_rgb_3_digit(self, tools):
"""Test 3-digit hex conversion."""
assert tools._hex_to_rgb("#F00") == (255, 0, 0)
assert tools._hex_to_rgb("#0F0") == (0, 255, 0)
assert tools._hex_to_rgb("#00F") == (0, 0, 255)
def test_hex_to_rgb_lowercase(self, tools):
"""Test lowercase hex conversion."""
assert tools._hex_to_rgb("#ff0000") == (255, 0, 0)
class TestContrastRatio:
"""Tests for _get_contrast_ratio method."""
def test_black_white_contrast(self, tools):
"""Test black on white has maximum contrast."""
ratio = tools._get_contrast_ratio("#000000", "#FFFFFF")
assert ratio == pytest.approx(21.0, rel=0.01)
def test_same_color_contrast(self, tools):
"""Test same color has minimum contrast."""
ratio = tools._get_contrast_ratio("#FF0000", "#FF0000")
assert ratio == pytest.approx(1.0, rel=0.01)
def test_symmetric_contrast(self, tools):
"""Test contrast ratio is symmetric."""
ratio1 = tools._get_contrast_ratio("#228be6", "#FFFFFF")
ratio2 = tools._get_contrast_ratio("#FFFFFF", "#228be6")
assert ratio1 == pytest.approx(ratio2, rel=0.01)
class TestColorBlindnessSimulation:
"""Tests for _simulate_color_blindness method."""
def test_deuteranopia_simulation(self, tools):
"""Test deuteranopia (green-blind) simulation."""
# Red and green should appear more similar
original_red = "#FF0000"
original_green = "#00FF00"
simulated_red = tools._simulate_color_blindness(original_red, "deuteranopia")
simulated_green = tools._simulate_color_blindness(original_green, "deuteranopia")
# They should be different from originals
assert simulated_red != original_red or simulated_green != original_green
def test_protanopia_simulation(self, tools):
"""Test protanopia (red-blind) simulation."""
simulated = tools._simulate_color_blindness("#FF0000", "protanopia")
# Should return a modified color
assert simulated.startswith("#")
assert len(simulated) == 7
def test_tritanopia_simulation(self, tools):
"""Test tritanopia (blue-blind) simulation."""
simulated = tools._simulate_color_blindness("#0000FF", "tritanopia")
# Should return a modified color
assert simulated.startswith("#")
assert len(simulated) == 7
def test_unknown_deficiency_returns_original(self, tools):
"""Test unknown deficiency type returns original color."""
color = "#FF0000"
simulated = tools._simulate_color_blindness(color, "unknown")
assert simulated == color
class TestAccessibilityValidateColors:
"""Tests for accessibility_validate_colors method."""
@pytest.mark.asyncio
async def test_validate_single_color(self, tools):
"""Test validating a single color."""
result = await tools.accessibility_validate_colors(["#228be6"])
assert "colors_checked" in result
assert "overall_score" in result
assert "issues" in result
assert "safe_palettes" in result
@pytest.mark.asyncio
async def test_validate_problematic_colors(self, tools):
"""Test similar colors trigger warnings."""
# Use colors that are very close in hue, which should be harder to distinguish
result = await tools.accessibility_validate_colors(["#FF5555", "#FF6666"])
# Similar colors should trigger distinguishability warnings
assert "issues" in result
# The validation should at least run without errors
assert "colors_checked" in result
assert len(result["colors_checked"]) == 2
@pytest.mark.asyncio
async def test_validate_contrast_issue(self, tools):
"""Test low contrast colors trigger contrast warnings."""
# Yellow on white has poor contrast
result = await tools.accessibility_validate_colors(["#FFFF00"])
# Check for contrast issues (yellow may have issues with both black and white)
assert "issues" in result
@pytest.mark.asyncio
async def test_validate_with_specific_types(self, tools):
"""Test validating for specific color blindness types."""
result = await tools.accessibility_validate_colors(
["#FF0000", "#00FF00"],
check_types=["deuteranopia"]
)
assert "simulations" in result
assert "deuteranopia" in result["simulations"]
assert "protanopia" not in result["simulations"]
@pytest.mark.asyncio
async def test_overall_score(self, tools):
"""Test overall score is calculated."""
result = await tools.accessibility_validate_colors(["#228be6", "#ffffff"])
assert result["overall_score"] in ["A", "B", "C", "D"]
@pytest.mark.asyncio
async def test_recommendations_generated(self, tools):
"""Test recommendations are generated for issues."""
result = await tools.accessibility_validate_colors(["#FF0000", "#00FF00"])
assert "recommendations" in result
assert len(result["recommendations"]) > 0
class TestAccessibilitySuggestAlternative:
"""Tests for accessibility_suggest_alternative method."""
@pytest.mark.asyncio
async def test_suggest_alternative_deuteranopia(self, tools):
"""Test suggesting alternatives for deuteranopia."""
result = await tools.accessibility_suggest_alternative("#FF0000", "deuteranopia")
assert "original_color" in result
assert result["deficiency_type"] == "deuteranopia"
assert "suggestions" in result
assert len(result["suggestions"]) > 0
@pytest.mark.asyncio
async def test_suggest_alternative_tritanopia(self, tools):
"""Test suggesting alternatives for tritanopia."""
result = await tools.accessibility_suggest_alternative("#0000FF", "tritanopia")
assert "suggestions" in result
assert len(result["suggestions"]) > 0
@pytest.mark.asyncio
async def test_suggestions_include_safe_palettes(self, tools):
"""Test suggestions include colors from safe palettes."""
result = await tools.accessibility_suggest_alternative("#FF0000", "deuteranopia")
palette_suggestions = [
s for s in result["suggestions"]
if "palette" in s
]
assert len(palette_suggestions) > 0
class TestSafePalettes:
"""Tests for safe palette constants."""
def test_safe_palettes_exist(self, tools):
"""Test that safe palettes are defined."""
from mcp_server.accessibility_tools import SAFE_PALETTES
assert "categorical" in SAFE_PALETTES
assert "ibm" in SAFE_PALETTES
assert "okabe_ito" in SAFE_PALETTES
assert "tableau_colorblind" in SAFE_PALETTES
def test_safe_palettes_have_colors(self, tools):
"""Test that safe palettes have color lists."""
from mcp_server.accessibility_tools import SAFE_PALETTES
for palette_name, palette in SAFE_PALETTES.items():
assert "colors" in palette
assert len(palette["colors"]) > 0
# All colors should be valid hex
for color in palette["colors"]:
assert color.startswith("#")

View File

@@ -90,6 +90,10 @@ After clarification, you receive a structured specification:
[List of assumptions]
```
## Documentation
- [Neurodivergent Support Guide](docs/ND-SUPPORT.md) - How clarity-assist supports ND users with executive function challenges and cognitive differences
## Integration
For CLAUDE.md integration instructions, see `claude-md-integration.md`.

View File

@@ -0,0 +1,328 @@
# Neurodivergent Support in clarity-assist
This document describes how clarity-assist is designed to support users with neurodivergent traits, including ADHD, autism, anxiety, and other conditions that affect executive function, sensory processing, or cognitive style.
## Overview
### Purpose
clarity-assist exists to help all users transform vague or incomplete requests into clear, actionable specifications. For neurodivergent users specifically, it addresses common challenges:
- **Executive function difficulties** - Breaking down complex tasks, getting started, managing scope
- **Working memory limitations** - Keeping track of context across long conversations
- **Decision fatigue** - Facing too many open-ended choices
- **Processing style differences** - Preferring structured, predictable interactions
- **Anxiety around uncertainty** - Needing clear expectations and explicit confirmation
### Philosophy
Our design philosophy centers on three principles:
1. **Reduce cognitive load** - Never force the user to hold too much in their head at once
2. **Provide structure** - Use consistent, predictable patterns for all interactions
3. **Respect different communication styles** - Accommodate rather than assume one "right" way to think
## Features for ND Users
### 1. Reduced Cognitive Load
**Prompt Simplification**
- The 4-D methodology (Deconstruct, Diagnose, Develop, Deliver) breaks down complex requests into manageable phases
- Users never need to specify everything upfront - clarification happens incrementally
**Task Breakdown**
- Large requests are decomposed into explicit components
- Dependencies and relationships are surfaced rather than left implicit
- Scope boundaries are clearly defined (in-scope vs. out-of-scope)
### 2. Structured Output
**Consistent Formatting**
- Every clarification session produces the same structured specification:
- Summary (1-2 sentences)
- Scope (In/Out)
- Requirements table (numbered, prioritized)
- Assumptions list
- This predictability reduces the mental effort of parsing responses
**Predictable Patterns**
- Questions always follow the same format
- Progress summaries appear at regular intervals
- Escalation (simple to complex) is always offered, never forced
**Bulleted Lists Over Prose**
- Requirements are presented as scannable lists, not paragraphs
- Options are numbered for easy reference
- Key information is highlighted with bold labels
### 3. Customizable Verbosity
**Detail Levels**
- `/clarify` - Full methodology for complex requests (more thorough, more questions)
- `/quick-clarify` - Rapid mode for simple disambiguation (fewer questions, faster)
**User Control**
- Users can always say "that's enough detail" to end questioning early
- The plugin offers to break sessions into smaller parts
- "Good enough for now" is explicitly validated as an acceptable outcome
### 4. Vagueness Detection
The `UserPromptSubmit` hook automatically detects prompts that might benefit from clarification and gently suggests using `/clarify`.
**Detection Signals**
- Short prompts (< 10 words) without specific technical terms
- Vague action phrases: "help me", "fix this", "make it better"
- Ambiguous scope words: "somehow", "something", "stuff", "etc."
- Open questions without context
**Non-Blocking Approach**
- The hook never prevents you from proceeding
- It provides a suggestion with a vagueness score (percentage)
- You can disable auto-suggestions entirely via an environment variable
### 5. Focus Aids
**Task Prioritization**
- Requirements are tagged as Must/Should/Could/Won't (MoSCoW)
- Critical items are separated from nice-to-haves
- Scope creep is explicitly called out and deferred
**Context Switching Warnings**
- When questions touch multiple areas, the plugin acknowledges the complexity
- Offers to focus on one aspect at a time
- Summarizes frequently to rebuild context after interruptions
## How It Works
### The UserPromptSubmit Hook
When you submit a prompt, the vagueness detection hook (`hooks/vagueness-check.sh`) runs automatically:
```
User submits prompt
|
v
Hook reads prompt from stdin
|
v
Skip if: empty, starts with /, or contains file paths
|
v
Calculate vagueness score (0.0 - 1.0)
- Short prompts: +0.3
- Vague action phrases: +0.2
- Ambiguous scope words: +0.15
- Missing technical specifics: +0.2
- Short questions without context: +0.15
|
v
If score >= threshold (default 0.6):
- Output gentle suggestion with [clarity-assist] prefix
- Show vagueness percentage
|
v
Exit 0 (always non-blocking)
```
### Example Hook Output
```
[clarity-assist] Your prompt could benefit from more clarity.
[clarity-assist] Consider running /clarity-assist to refine your request.
[clarity-assist] (Vagueness score: 65% - this is a suggestion, not a block)
```
### The 4-D Methodology
| Phase | What Happens | ND Benefit |
|-------|--------------|------------|
| **Deconstruct** | Break request into components | Shows the full picture without overwhelming |
| **Diagnose** | Identify gaps and conflicts | Surfaces hidden assumptions |
| **Develop** | Ask structured questions | Option-based, chunked, contextual |
| **Deliver** | Produce specification | Consistent, scannable format |
## Usage Examples
### Before: Vague Request
```
User: Make the app faster
```
This triggers vagueness detection because:
- Short (4 words)
- Vague action ("make")
- No specific technical context
- No measurable outcome
### After: Clarified with /clarify
```
User: /clarify Make the app faster
Claude: Let me help clarify what kind of performance improvements you're looking for.
**What type of slowness are you experiencing?**
1. Initial page load takes too long
2. Interactions feel sluggish (clicking, typing)
3. API calls are slow
4. All of the above
5. Other
[I'm asking because each has very different solutions]
```
```
User: 1
Claude: Initial page load - got it.
**What's an acceptable load time for you?**
1. Under 1 second (aggressive)
2. Under 2 seconds (standard)
3. Under 3 seconds (relaxed)
4. I don't have a specific target
[This helps me know whether to suggest quick wins or a major optimization effort]
```
After 2-3 more questions, the user receives:
```markdown
## Clarified Request
### Summary
Optimize initial page load time to under 2 seconds by addressing bundle size and initial render blocking resources.
### Scope
**In Scope:**
- Bundle analysis and splitting
- Lazy loading non-critical components
- Image optimization
- CSS/JS minification check
**Out of Scope:**
- Backend API optimization
- Database query performance
- CDN setup
### Requirements
| # | Requirement | Priority | Notes |
|---|-------------|----------|-------|
| 1 | Page loads in < 2s on 4G | Must | Measured via Lighthouse |
| 2 | Largest contentful paint < 1.5s | Should | Core Web Vital |
| 3 | No layout shift after load | Should | CLS < 0.1 |
### Assumptions
- Current load time is > 3 seconds (based on "too long" description)
- Target browsers: modern evergreen (Chrome, Firefox, Safari)
- Build tooling already configured (webpack/vite)
```
### Common Patterns Detected
| Pattern | Example | What It Triggers |
|---------|---------|------------------|
| Missing context | "Fix the bug" | "Which bug? In which file or feature?" |
| Ambiguous scope | "Update the user stuff" | "What specifically needs updating?" |
| Unclear goals | "Make it better" | "Better in what way? Faster? Cleaner? More features?" |
| Implicit knowledge | "Do it like last time" | "I don't have context from previous sessions" |
| Conflicting requests | "Keep it simple but add everything" | "Which matters more: simplicity or completeness?" |
### Quick-Clarify Example
For simpler requests that just need minor disambiguation:
```
User: /quick-clarify Add a delete button to the user list
Claude: I'll add a delete button to each row in the user list.
**Should deletion require confirmation?**
1. Yes - show confirmation dialog (Recommended)
2. No - delete immediately
3. Soft delete - mark as deleted but keep data
[Asking because accidental deletions can be problematic]
```
```
User: 1
Claude: Got it. I'll implement a delete button with confirmation dialog. Starting now.
```
## Configuration Options
### Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `CLARITY_ASSIST_AUTO_SUGGEST` | `true` | Enable/disable automatic vagueness detection |
| `CLARITY_ASSIST_VAGUENESS_THRESHOLD` | `0.6` | Score threshold to trigger suggestion (0.0-1.0) |
### Disabling Auto-Suggestions
If you find the vagueness detection unhelpful, disable it in your shell profile or `.env`:
```bash
export CLARITY_ASSIST_AUTO_SUGGEST=false
```
### Adjusting Sensitivity
To make detection more or less sensitive:
```bash
# More sensitive (suggests more often)
export CLARITY_ASSIST_VAGUENESS_THRESHOLD=0.4
# Less sensitive (only very vague prompts)
export CLARITY_ASSIST_VAGUENESS_THRESHOLD=0.8
```
## Tips for ND Users
### If You're Feeling Overwhelmed
- Use `/quick-clarify` instead of `/clarify` for faster interactions
- Say "let's focus on just one thing" to narrow scope
- Ask to "pause and summarize" at any point
- It's OK to say "I don't know" - the plugin will offer concrete alternatives
### If You Have Executive Function Challenges
- Start with `/clarify` even for tasks you think are simple - it helps with planning
- The structured specification can serve as a checklist
- Use the scope boundaries to prevent scope creep
### If You Prefer Detailed Structure
- The 4-D methodology provides a predictable framework
- All output follows consistent formatting
- Questions always offer numbered options
### If You Have Anxiety About Getting It Right
- The plugin validates "good enough for now" as acceptable
- You can always revisit and change earlier answers
- Assumptions are explicitly listed - nothing is hidden
## Accessibility Notes
- All output uses standard markdown that works with screen readers
- No time pressure - take as long as you need between responses
- Questions are designed to be answerable without deep context retrieval
- Visual patterns (bold, bullets, tables) create scannable structure
## Feedback
If you have suggestions for improving neurodivergent support in clarity-assist, please open an issue at:
https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/issues
Include the label `clarity-assist` and describe:
- What challenge you faced
- What would have helped
- Any specific accommodations you'd like to see

View File

@@ -0,0 +1,10 @@
{
"hooks": {
"UserPromptSubmit": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/vagueness-check.sh"
}
]
}
}

View File

@@ -0,0 +1,216 @@
#!/bin/bash
# clarity-assist vagueness detection hook
# Analyzes user prompts for vagueness and suggests /clarity-assist when beneficial
# All output MUST have [clarity-assist] prefix
# This is a NON-BLOCKING hook - always exits 0
PREFIX="[clarity-assist]"
# Check if auto-suggest is enabled (default: true)
AUTO_SUGGEST="${CLARITY_ASSIST_AUTO_SUGGEST:-true}"
if [[ "$AUTO_SUGGEST" != "true" ]]; then
exit 0
fi
# Threshold for vagueness score (default: 0.6)
THRESHOLD="${CLARITY_ASSIST_VAGUENESS_THRESHOLD:-0.6}"
# Read user prompt from stdin
PROMPT=""
if [[ -t 0 ]]; then
# No stdin available
exit 0
else
PROMPT=$(cat)
fi
# Skip empty prompts
if [[ -z "$PROMPT" ]]; then
exit 0
fi
# Skip if prompt is a command (starts with /)
if [[ "$PROMPT" =~ ^[[:space:]]*/[a-zA-Z] ]]; then
exit 0
fi
# Skip if prompt mentions specific files or paths
if [[ "$PROMPT" =~ \.(py|js|ts|sh|md|json|yaml|yml|txt|css|html|go|rs|java|c|cpp|h)([[:space:]]|$|[^a-zA-Z]) ]] || \
[[ "$PROMPT" =~ [/\\][a-zA-Z0-9_-]+[/\\] ]] || \
[[ "$PROMPT" =~ (src|lib|test|docs|plugins|hooks|commands)/ ]]; then
exit 0
fi
# Initialize vagueness score
SCORE=0
# Count words in the prompt
WORD_COUNT=$(echo "$PROMPT" | wc -w | tr -d ' ')
# ============================================================================
# Vagueness Signal Detection
# ============================================================================
# Signal 1: Very short prompts (< 10 words) are often vague
if [[ "$WORD_COUNT" -lt 10 ]]; then
# But very short specific commands are OK
if [[ "$WORD_COUNT" -lt 3 ]]; then
# Extremely short - probably intentional or a command
:
else
SCORE=$(echo "$SCORE + 0.3" | bc)
fi
fi
# Signal 2: Vague action phrases (no specific outcome)
VAGUE_ACTIONS=(
"help me"
"help with"
"do something"
"work on"
"look at"
"check this"
"fix it"
"fix this"
"make it better"
"make this better"
"improve it"
"improve this"
"update this"
"update it"
"change it"
"change this"
"can you"
"could you"
"would you"
"please help"
)
PROMPT_LOWER=$(echo "$PROMPT" | tr '[:upper:]' '[:lower:]')
for phrase in "${VAGUE_ACTIONS[@]}"; do
if [[ "$PROMPT_LOWER" == *"$phrase"* ]]; then
SCORE=$(echo "$SCORE + 0.2" | bc)
break
fi
done
# Signal 3: Ambiguous scope indicators
AMBIGUOUS_SCOPE=(
"somehow"
"something"
"somewhere"
"anything"
"whatever"
"stuff"
"things"
"etc"
"and so on"
)
for word in "${AMBIGUOUS_SCOPE[@]}"; do
if [[ "$PROMPT_LOWER" == *"$word"* ]]; then
SCORE=$(echo "$SCORE + 0.15" | bc)
break
fi
done
# Signal 4: Missing context indicators (no reference to what/where)
# Check if prompt lacks specificity markers
HAS_SPECIFICS=false
# Specific technical terms suggest clarity
SPECIFIC_MARKERS=(
"function"
"class"
"method"
"variable"
"error"
"bug"
"test"
"api"
"endpoint"
"database"
"query"
"component"
"module"
"service"
"config"
"install"
"deploy"
"build"
"run"
"execute"
"create"
"delete"
"add"
"remove"
"implement"
"refactor"
"migrate"
"upgrade"
"debug"
"log"
"exception"
"stack"
"memory"
"performance"
"security"
"auth"
"token"
"session"
"route"
"controller"
"model"
"view"
"template"
"schema"
"migration"
"commit"
"branch"
"merge"
"pull"
"push"
)
for marker in "${SPECIFIC_MARKERS[@]}"; do
if [[ "$PROMPT_LOWER" == *"$marker"* ]]; then
HAS_SPECIFICS=true
break
fi
done
if [[ "$HAS_SPECIFICS" == false ]] && [[ "$WORD_COUNT" -gt 3 ]]; then
SCORE=$(echo "$SCORE + 0.2" | bc)
fi
# Signal 5: Question without context
if [[ "$PROMPT" =~ \?$ ]] && [[ "$WORD_COUNT" -lt 8 ]]; then
# Short questions without specifics are often vague
if [[ "$HAS_SPECIFICS" == false ]]; then
SCORE=$(echo "$SCORE + 0.15" | bc)
fi
fi
# Cap score at 1.0
if (( $(echo "$SCORE > 1.0" | bc -l) )); then
SCORE="1.0"
fi
# ============================================================================
# Output suggestion if score exceeds threshold
# ============================================================================
# Compare score to threshold using bc
if (( $(echo "$SCORE >= $THRESHOLD" | bc -l) )); then
# Format score as percentage for display
SCORE_PCT=$(echo "$SCORE * 100" | bc | cut -d'.' -f1)
# Gentle, non-blocking suggestion
echo "$PREFIX Your prompt could benefit from more clarity."
echo "$PREFIX Consider running /clarity-assist to refine your request."
echo "$PREFIX (Vagueness score: ${SCORE_PCT}% - this is a suggestion, not a block)"
fi
# Always exit 0 - this hook is non-blocking
exit 0
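The hook can be exercised outside Claude Code by piping a prompt into it. A minimal sketch, assuming the script lives at `hooks/check-vagueness.sh` (the actual path and filename may differ):

```bash
# Vague prompt: 6 words, "can you", "something", no technical markers -> suggestion printed
echo "can you help me with something?" | bash hooks/check-vagueness.sh

# Auto-suggest disabled via the documented variable -> no output, still exits 0
echo "can you help me with something?" | CLARITY_ASSIST_AUTO_SUGGEST=false bash hooks/check-vagueness.sh
```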

View File

@@ -37,6 +37,33 @@ Create a new CLAUDE.md tailored to your project.
/config-init
```
### `/config-diff`
Show differences between current CLAUDE.md and previous versions.
```
/config-diff # Compare working copy vs last commit
/config-diff --commit=abc1234 # Compare against specific commit
/config-diff --from=v1.0 --to=v2.0 # Compare two commits
/config-diff --section="Critical Rules" # Focus on specific section
```
### `/config-lint`
Lint CLAUDE.md for common anti-patterns and best practices.
```
/config-lint # Run all lint checks
/config-lint --fix # Auto-fix fixable issues
/config-lint --rules=security # Check only security rules
/config-lint --severity=error # Show only errors
```
**Lint Rule Categories:**
- **Security (SEC)** - Hardcoded secrets, paths, credentials
- **Structure (STR)** - Header hierarchy, required sections
- **Content (CNT)** - Contradictions, duplicates, vague rules
- **Format (FMT)** - Consistency, code blocks, whitespace
- **Best Practice (BPR)** - Missing Quick Start, Critical Rules sections
## Best Practices
A good CLAUDE.md should be:

View File

@@ -0,0 +1,239 @@
---
description: Show diff between current CLAUDE.md and last commit
---
# Compare CLAUDE.md Changes
This command shows differences between your current CLAUDE.md file and previous versions, helping you track configuration drift and review changes before committing.
## What This Command Does
1. **Detect CLAUDE.md Location** - Finds the project's CLAUDE.md file
2. **Compare Versions** - Shows diff against last commit or specified revision
3. **Highlight Sections** - Groups changes by affected sections
4. **Summarize Impact** - Explains what the changes mean for Claude's behavior
## Usage
```
/config-diff
```
Compare against a specific commit:
```
/config-diff --commit=abc1234
/config-diff --commit=HEAD~3
```
Compare two specific commits:
```
/config-diff --from=abc1234 --to=def5678
```
Show only specific sections:
```
/config-diff --section="Critical Rules"
/config-diff --section="Quick Start"
```
## Comparison Modes
### Default: Working vs Last Commit
Shows uncommitted changes to CLAUDE.md:
```
/config-diff
```
### Working vs Specific Commit
Shows changes since a specific point:
```
/config-diff --commit=v1.0.0
```
### Commit to Commit
Shows changes between two historical versions:
```
/config-diff --from=v1.0.0 --to=v2.0.0
```
### Branch Comparison
Shows CLAUDE.md differences between branches:
```
/config-diff --branch=main
/config-diff --from=feature-branch --to=main
```
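Each mode maps roughly onto a plain `git diff` invocation scoped to CLAUDE.md. A sketch, assuming CLAUDE.md sits at the repository root (the command adds section grouping and impact analysis on top):
```bash
git diff HEAD -- CLAUDE.md              # working copy vs last commit (default)
git diff v1.0.0 -- CLAUDE.md            # working copy vs a tag or commit
git diff v1.0.0 v2.0.0 -- CLAUDE.md     # two historical versions
git diff main -- CLAUDE.md              # against another branch tip
```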
## Expected Output
```
CLAUDE.md Diff Report
=====================
File: /path/to/project/CLAUDE.md
Comparing: Working copy vs HEAD (last commit)
Commit: abc1234 "Update build commands" (2 days ago)
Summary:
- Lines added: 12
- Lines removed: 5
- Net change: +7 lines
- Sections affected: 3
Section Changes:
----------------
## Quick Start [MODIFIED]
- Added new environment variable requirement
- Updated test command with coverage flag
## Critical Rules [ADDED CONTENT]
+ New rule: "Never modify database migrations directly"
## Architecture [UNCHANGED]
## Common Operations [MODIFIED]
- Removed deprecated deployment command
- Added new Docker workflow
Detailed Diff:
--------------
--- CLAUDE.md (HEAD)
+++ CLAUDE.md (working)
@@ -15,7 +15,10 @@
## Quick Start
```bash
+export DATABASE_URL=postgres://... # Required
pip install -r requirements.txt
-pytest
+pytest --cov=src # Run with coverage
uvicorn main:app --reload
```
@@ -45,6 +48,7 @@
## Critical Rules
- Never modify `.env` files directly
+- Never modify database migrations directly
- Always run tests before committing
Behavioral Impact:
------------------
These changes will affect Claude's behavior:
1. [NEW REQUIREMENT] Claude will now export DATABASE_URL before running
2. [MODIFIED] Test command now includes coverage reporting
3. [NEW RULE] Claude will avoid direct migration modifications
Review: Do these changes reflect your intended configuration?
```
## Section-Focused View
When using `--section`, output focuses on specific areas:
```
/config-diff --section="Critical Rules"
CLAUDE.md Section Diff: Critical Rules
======================================
--- HEAD
+++ Working
## Critical Rules
- Never modify `.env` files directly
+- Never modify database migrations directly
+- Always use type hints in Python code
- Always run tests before committing
-- Keep functions under 50 lines
Changes:
+ 2 rules added
- 1 rule removed
Impact: Claude will follow 2 new constraints and no longer enforce
the 50-line function limit.
```
## Options
| Option | Description |
|--------|-------------|
| `--commit=REF` | Compare working copy against specific commit/tag |
| `--from=REF` | Starting point for comparison |
| `--to=REF` | Ending point for comparison (default: HEAD) |
| `--branch=NAME` | Compare against branch tip |
| `--section=NAME` | Show only changes to specific section |
| `--stat` | Show only statistics, no detailed diff |
| `--no-color` | Disable colored output |
| `--context=N` | Lines of context around changes (default: 3) |
## Understanding the Output
### Change Indicators
| Symbol | Meaning |
|--------|---------|
| `+` | Line added |
| `-` | Line removed |
| `@@` | Location marker showing line numbers |
| `[MODIFIED]` | Section has changes |
| `[ADDED]` | New section created |
| `[REMOVED]` | Section deleted |
| `[UNCHANGED]` | No changes to section |
### Impact Categories
- **NEW REQUIREMENT** - Claude will now need to do something new
- **REMOVED REQUIREMENT** - Claude no longer needs to do something
- **MODIFIED** - Existing behavior changed
- **NEW RULE** - New constraint added
- **RELAXED RULE** - Constraint removed or softened
## When to Use
Run `/config-diff`:
- Before committing CLAUDE.md changes
- When reviewing what changed after pulling updates
- When debugging unexpected Claude behavior
- When auditing configuration changes over time
- When comparing configurations across branches
## Integration with Other Commands
| Workflow | Commands |
|----------|----------|
| Review before commit | `/config-diff` then `git commit` |
| After optimization | `/config-optimize` then `/config-diff` |
| Audit history | `/config-diff --from=v1.0.0 --to=HEAD` |
| Branch comparison | `/config-diff --branch=main` |
## Tips
1. **Review before committing** - Always check what changed
2. **Track behavioral changes** - Focus on rules and requirements sections
3. **Use section filtering** - Large files benefit from focused diffs
4. **Compare across releases** - Use tags to track major changes
5. **Check after merges** - Ensure CLAUDE.md didn't get conflict artifacts
## Troubleshooting
### "No changes detected"
- CLAUDE.md matches the comparison target
- Check if you're comparing the right commits
### "File not found in commit"
- CLAUDE.md didn't exist at that commit
- Use `git log -- CLAUDE.md` to find when it was created
### "Not a git repository"
- This command requires git history
- Initialize git or use file backup comparison instead

View File

@@ -0,0 +1,334 @@
---
description: Lint CLAUDE.md for common anti-patterns and best practices
---
# Lint CLAUDE.md
This command checks your CLAUDE.md file against best practices and detects common anti-patterns that can cause issues with Claude Code.
## What This Command Does
1. **Parse Structure** - Validates markdown structure and hierarchy
2. **Check Security** - Detects hardcoded paths, secrets, and sensitive data
3. **Validate Content** - Identifies anti-patterns and problematic instructions
4. **Verify Format** - Ensures consistent formatting and style
5. **Generate Report** - Provides actionable findings with fix suggestions
## Usage
```
/config-lint
```
Lint with auto-fix:
```
/config-lint --fix
```
Check specific rules only:
```
/config-lint --rules=security,structure
```
## Linting Rules
### Security Rules (SEC)
| Rule | Description | Severity |
|------|-------------|----------|
| SEC001 | Hardcoded absolute paths | Warning |
| SEC002 | Potential secrets/API keys | Error |
| SEC003 | Hardcoded IP addresses | Warning |
| SEC004 | Exposed credentials patterns | Error |
| SEC005 | Hardcoded URLs with tokens | Error |
| SEC006 | Environment variable values (not names) | Warning |
### Structure Rules (STR)
| Rule | Description | Severity |
|------|-------------|----------|
| STR001 | Missing required sections | Error |
| STR002 | Invalid header hierarchy (h3 before h2) | Warning |
| STR003 | Orphaned content (text before first header) | Info |
| STR004 | Excessive nesting depth (>4 levels) | Warning |
| STR005 | Empty sections | Warning |
| STR006 | Missing section content | Warning |
### Content Rules (CNT)
| Rule | Description | Severity |
|------|-------------|----------|
| CNT001 | Contradictory instructions | Error |
| CNT002 | Vague or ambiguous rules | Warning |
| CNT003 | Overly long sections (>100 lines) | Info |
| CNT004 | Duplicate content | Warning |
| CNT005 | TODO/FIXME in production config | Warning |
| CNT006 | Outdated version references | Info |
| CNT007 | Broken internal links | Warning |
### Format Rules (FMT)
| Rule | Description | Severity |
|------|-------------|----------|
| FMT001 | Inconsistent header styles | Info |
| FMT002 | Inconsistent list markers | Info |
| FMT003 | Missing code block language | Info |
| FMT004 | Trailing whitespace | Info |
| FMT005 | Missing blank lines around headers | Info |
| FMT006 | Inconsistent indentation | Info |
### Best Practice Rules (BPR)
| Rule | Description | Severity |
|------|-------------|----------|
| BPR001 | No Quick Start section | Warning |
| BPR002 | No Critical Rules section | Warning |
| BPR003 | Instructions without examples | Info |
| BPR004 | Commands without explanation | Info |
| BPR005 | Rules without rationale | Info |
| BPR006 | Missing plugin integration docs | Info |
## Expected Output
```
CLAUDE.md Lint Report
=====================
File: /path/to/project/CLAUDE.md
Rules checked: 31
Time: 0.3s
Summary:
Errors: 2
Warnings: 5
Info: 3
Findings:
---------
[ERROR] SEC002: Potential secret detected (line 45)
│ api_key = "sk-1234567890abcdef"
│ ^^^^^^^^^^^^^^^^^^^^^^
└─ Hardcoded API key found. Use environment variable reference instead.
Suggested fix:
- api_key = "sk-1234567890abcdef"
+ api_key = $OPENAI_API_KEY # Set in environment
[ERROR] CNT001: Contradictory instructions (lines 23, 67)
│ Line 23: "Always run tests before committing"
│ Line 67: "Skip tests for documentation-only changes"
└─ These rules conflict. Clarify the exception explicitly.
Suggested fix:
+ "Always run tests before committing, except for documentation-only
+ changes (files in docs/ directory)"
[WARNING] SEC001: Hardcoded absolute path (line 12)
│ Database location: /home/user/data/myapp.db
│ ^^^^^^^^^^^^^^^^^^^^^^^^
└─ Absolute paths break portability. Use relative or variable.
Suggested fix:
- Database location: /home/user/data/myapp.db
+ Database location: ./data/myapp.db # Or $DATA_DIR/myapp.db
[WARNING] STR002: Invalid header hierarchy (line 34)
│ ### Subsection
│ (no preceding ## header)
└─ H3 header without parent H2. Add H2 or promote to H2.
[WARNING] CNT004: Duplicate content (lines 45-52, 89-96)
│ Same git workflow documented twice
└─ Remove duplicate or consolidate into single section.
[WARNING] STR005: Empty section (line 78)
│ ## Troubleshooting
│ (no content)
└─ Add content or remove empty section.
[WARNING] BPR002: No Critical Rules section
│ Missing "Critical Rules" or "Important Rules" section
└─ Add a section highlighting must-follow rules for Claude.
[INFO] FMT003: Missing code block language (line 56)
│ ```
│ npm install
│ ```
└─ Specify language for syntax highlighting: ```bash
[INFO] CNT003: Overly long section (lines 100-215)
│ "Architecture" section is 115 lines
└─ Consider breaking into subsections or condensing.
[INFO] FMT001: Inconsistent header styles
│ Line 10: "## Quick Start"
│ Line 25: "## Architecture:"
│ (colon suffix inconsistent)
└─ Standardize header format throughout document.
---
Auto-fixable: 4 issues (run with --fix)
Manual review required: 6 issues
Run `/config-lint --fix` to apply automatic fixes.
```
## Options
| Option | Description |
|--------|-------------|
| `--fix` | Automatically fix auto-fixable issues |
| `--rules=LIST` | Check only specified rule categories |
| `--ignore=LIST` | Skip specified rules (e.g., `--ignore=FMT001,FMT002`) |
| `--severity=LEVEL` | Show only issues at or above level (error/warning/info) |
| `--format=FORMAT` | Output format: `text` (default), `json`, `sarif` |
| `--config=FILE` | Use custom lint configuration |
| `--strict` | Treat warnings as errors |
## Rule Categories
Use `--rules` to focus on specific areas:
```
/config-lint --rules=security # Only security checks
/config-lint --rules=structure # Only structure checks
/config-lint --rules=security,content # Multiple categories
```
Available categories:
- `security` - SEC rules
- `structure` - STR rules
- `content` - CNT rules
- `format` - FMT rules
- `bestpractice` - BPR rules
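These category filters compose with the other flags documented in the Options table above, for example:
```
/config-lint --rules=security --severity=error      # only security errors
/config-lint --ignore=FMT001,FMT004 --format=json   # skip two format rules, machine-readable output
```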
## Custom Configuration
Create `.claude-lint.json` in project root:
```json
{
"rules": {
"SEC001": "warning",
"FMT001": "off",
"CNT003": {
"severity": "warning",
"maxLines": 150
}
},
"ignore": [
"FMT*"
],
"requiredSections": [
"Quick Start",
"Critical Rules",
"Project Overview"
]
}
```
## Anti-Pattern Examples
### Hardcoded Secrets (SEC002)
```markdown
# BAD
API_KEY=sk-1234567890abcdef
# GOOD
API_KEY=$OPENAI_API_KEY # Set via environment
```
### Hardcoded Paths (SEC001)
```markdown
# BAD
Config file: /home/john/projects/myapp/config.yml
# GOOD
Config file: ./config.yml
Config file: $PROJECT_ROOT/config.yml
```
### Contradictory Rules (CNT001)
```markdown
# BAD
- Always use TypeScript
- JavaScript files are acceptable for scripts
# GOOD
- Always use TypeScript for source code
- JavaScript (.js) is acceptable only for config files and scripts
```
### Vague Instructions (CNT002)
```markdown
# BAD
- Be careful with the database
# GOOD
- Never run DELETE without WHERE clause
- Always backup before migrations
```
### Invalid Hierarchy (STR002)
```markdown
# BAD
# Main Title
### Skipped Level
# GOOD
# Main Title
## Section
### Subsection
```
## When to Use
Run `/config-lint`:
- Before committing CLAUDE.md changes
- During code review of CLAUDE.md modifications
- When setting up CI/CD checks for configuration files
- After major edits, to catch newly introduced issues
- Periodically, as a maintenance check
## Integration with CI/CD
Add to your CI pipeline:
```yaml
# GitHub Actions example
- name: Lint CLAUDE.md
run: claude /config-lint --strict --format=sarif > lint-results.sarif
- name: Upload SARIF
uses: github/codeql-action/upload-sarif@v2
with:
sarif_file: lint-results.sarif
```
## Tips
1. **Start with errors** - Fix errors before warnings
2. **Use --fix carefully** - Review auto-fixes before committing
3. **Configure per-project** - Different projects have different needs
4. **Integrate in CI** - Catch issues before they reach main
5. **Review periodically** - Run lint check monthly as maintenance
## Related Commands
| Command | Relationship |
|---------|--------------|
| `/config-analyze` | Deeper content analysis (complements lint) |
| `/config-optimize` | Applies fixes and improvements |
| `/config-diff` | Shows what changed (run lint before commit) |

View File

@@ -1,7 +1,7 @@
{
"name": "cmdb-assistant",
"version": "1.0.0",
"description": "NetBox CMDB integration for infrastructure management - query, create, update, and manage network devices, IP addresses, sites, and more",
"version": "1.2.0",
"description": "NetBox CMDB integration with data quality validation - query, create, update, and manage network devices, IP addresses, sites, and more with best practices enforcement",
"author": {
"name": "Leo Miranda",
"email": "leobmiranda@gmail.com"
@@ -15,7 +15,9 @@
"infrastructure",
"network",
"ipam",
"dcim"
"dcim",
"data-quality",
"validation"
],
"commands": ["./commands/"],
"mcpServers": ["./.mcp.json"]

View File

@@ -1,12 +1,8 @@
{
"mcpServers": {
"netbox": {
"command": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/netbox/.venv/bin/python",
"args": ["-m", "mcp_server.server"],
"cwd": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/netbox",
"env": {
"PYTHONPATH": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/netbox"
}
"command": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/netbox/run.sh",
"args": []
}
}
}

View File

@@ -2,6 +2,20 @@
A Claude Code plugin for NetBox CMDB integration - query, create, update, and manage your network infrastructure directly from Claude Code.
## What's New in v1.2.0
- **`/cmdb-topology`**: Generate Mermaid diagrams showing infrastructure topology (rack view, network view, site overview)
- **`/change-audit`**: Query and analyze NetBox audit log for change tracking and compliance
- **`/ip-conflicts`**: Detect IP address conflicts and overlapping prefixes
## What's New in v1.1.0
- **Data Quality Validation**: Hooks for SessionStart and PreToolUse that check data quality and warn about missing fields
- **Best Practices Skill**: Reference documentation for NetBox patterns (naming conventions, dependency order, role management)
- **`/cmdb-audit`**: Analyze data quality across VMs, devices, naming conventions, and roles
- **`/cmdb-register`**: Register the current machine into NetBox with all running applications (Docker containers, systemd services)
- **`/cmdb-sync`**: Synchronize existing machine state with NetBox (detect drift, update with confirmation)
## Features
- **Full CRUD Operations**: Create, read, update, and delete across all NetBox modules
@@ -9,6 +23,9 @@ A Claude Code plugin for NetBox CMDB integration - query, create, update, and ma
- **IP Management**: Allocate IPs, manage prefixes, track VLANs
- **Infrastructure Documentation**: Document servers, network devices, and connections
- **Audit Trail**: Review changes and maintain infrastructure history
- **Data Quality Validation**: Proactive checks for missing site, tenant, platform assignments
- **Machine Registration**: Auto-discover and register servers with running applications
- **Drift Detection**: Sync machine state and detect changes over time
## Installation
@@ -40,10 +57,17 @@ Add to your Claude Code plugins or marketplace configuration.
| Command | Description |
|---------|-------------|
| `/initial-setup` | Interactive setup wizard for NetBox MCP server |
| `/cmdb-search <query>` | Search for devices, IPs, sites, or any CMDB object |
| `/cmdb-device <action>` | Manage network devices (list, create, update, delete) |
| `/cmdb-ip <action>` | Manage IP addresses and prefixes |
| `/cmdb-site <action>` | Manage sites and locations |
| `/cmdb-audit [scope]` | Data quality analysis (all, vms, devices, naming, roles) |
| `/cmdb-register` | Register current machine into NetBox with running apps |
| `/cmdb-sync` | Sync machine state with NetBox (detect drift, update) |
| `/cmdb-topology <view>` | Generate Mermaid diagrams (rack, network, site, full) |
| `/change-audit [filters]` | Query NetBox audit log for change tracking |
| `/ip-conflicts [scope]` | Detect IP conflicts and overlapping prefixes |
## Agent
@@ -103,6 +127,15 @@ This plugin provides access to the full NetBox API:
- **Wireless**: WLANs, Wireless Links
- **Extras**: Tags, Custom Fields, Journal Entries, Audit Log
## Hooks
| Event | Purpose |
|-------|---------|
| `SessionStart` | Test NetBox connectivity, report data quality issues |
| `PreToolUse` | Validate VM/device parameters before create/update |
Hooks are **non-blocking** - they emit warnings but never prevent operations.
## Architecture
```
@@ -115,13 +148,26 @@ cmdb-assistant/
│ ├── cmdb-search.md # Search command
│ ├── cmdb-device.md # Device management
│ ├── cmdb-ip.md # IP management
│ └── cmdb-site.md # Site management
│ ├── cmdb-site.md # Site management
│ ├── cmdb-audit.md # Data quality audit
│ ├── cmdb-register.md # Machine registration
│ ├── cmdb-sync.md # Machine sync
│ ├── cmdb-topology.md # Topology visualization (NEW)
│ ├── change-audit.md # Change audit log (NEW)
│ └── ip-conflicts.md # IP conflict detection (NEW)
├── hooks/
│ ├── hooks.json # Hook configuration
│ ├── startup-check.sh # SessionStart validation
│ └── validate-input.sh # PreToolUse validation
├── skills/
│ └── netbox-patterns/
│ └── SKILL.md # NetBox best practices reference
├── agents/
│ └── cmdb-assistant.md # Main assistant agent
└── README.md
```
The plugin uses the shared NetBox MCP server at `../mcp-servers/netbox/`.
The plugin uses the shared NetBox MCP server at `mcp-servers/netbox/`.
## Configuration

View File

@@ -76,3 +76,132 @@ When presenting data:
- Suggest corrective actions
- For permission errors, note what access is needed
- For validation errors, explain required fields/formats
## Data Quality Validation
**IMPORTANT:** Load the `netbox-patterns` skill for best practice reference.
Before ANY create or update operation, validate against NetBox best practices:
### VM Operations
**Required checks before `virt_create_vm` or `virt_update_vm`:**
1. **Cluster/Site Assignment** - VMs must have either cluster or site
2. **Tenant Assignment** - Recommend if not provided
3. **Platform Assignment** - Recommend for OS tracking
4. **Naming Convention** - Check against `{env}-{app}-{number}` pattern
5. **Role Assignment** - Recommend appropriate role
**If user provides no site/tenant, ASK:**
> "This VM has no site or tenant assigned. NetBox best practices recommend:
> - **Site**: For location-based queries and power budgeting
> - **Tenant**: For resource isolation and ownership tracking
>
> Would you like me to:
> 1. Assign to an existing site/tenant (list available)
> 2. Create new site/tenant first
> 3. Proceed without (not recommended for production use)"
### Device Operations
**Required checks before `dcim_create_device` or `dcim_update_device`:**
1. **Site is REQUIRED** - Fail without it
2. **Platform Assignment** - Recommend for OS tracking
3. **Naming Convention** - Check against `{role}-{location}-{number}` pattern
4. **Role Assignment** - Ensure appropriate role selected
5. **After Creation** - Offer to set primary IP
### Cluster Operations
**Required checks before `virt_create_cluster`:**
1. **Site Scope** - Recommend assigning to site
2. **Cluster Type** - Ensure appropriate type selected
3. **Device Association** - Recommend linking to host device
### Role Management
**Before creating a new device role:**
1. List existing roles with `dcim_list_device_roles`
2. Check if a more general role already exists
3. Recommend role consolidation if >10 specific roles exist
**Example guidance:**
> "You're creating role 'nginx-web-server'. An existing 'web-server' role exists.
> Consider using 'web-server' and tracking nginx via the platform field instead.
> This reduces role fragmentation and improves maintainability."
## Dependency Order Enforcement
When creating multiple objects, follow this order:
```
1. Regions → Sites → Locations → Racks
2. Tenant Groups → Tenants
3. Manufacturers → Device Types
4. Device Roles, Platforms
5. Devices (with site, role, type)
6. Clusters (with type, optional site)
7. VMs (with cluster)
8. Interfaces → IP Addresses → Primary IP assignment
```
**CRITICAL Rules:**
- NEVER create a VM before its cluster exists
- NEVER create a device before its site exists
- NEVER create an interface before its device exists
- NEVER create an IP before its interface exists (if assigning)
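A minimal sketch of that order, using tool calls referenced elsewhere in this plugin (values in angle brackets are placeholder IDs returned by the preceding calls):
```
virt_create_cluster name=home-docker type=<cluster_type_id> site=<site_id>   # cluster before any VM
virt_create_vm name=prod-api-01 cluster=<cluster_id> status=active           # VM only after its cluster
dcim_create_interface device=<device_id> name=eth0 type=1000base-t           # interface only after its device
ipam_create_ip_address address=192.168.1.100/24 assigned_object_type="dcim.interface" assigned_object_id=<interface_id>
```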
## Naming Convention Enforcement
When user provides a name, check against patterns:
| Object Type | Pattern | Example |
|-------------|---------|---------|
| Device | `{role}-{site}-{number}` | `web-dc1-01` |
| VM | `{env}-{app}-{number}` or `{prefix}_{service}` | `prod-api-01` |
| Cluster | `{site}-{type}` | `dc1-vmware`, `home-docker` |
| Prefix | Include purpose in description | "Production /24 for web tier" |
**If name doesn't match patterns, warn:**
> "The name 'HotServ' doesn't follow naming conventions.
> Suggested: `prod-hotserv-01` or `hotserv-cloud-01`.
> Consistent naming improves searchability and automation compatibility.
> Proceed with original name? [Y/n]"
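A hypothetical shell check of the device pattern; the exact regex is an illustration, not part of the plugin:
```bash
name="web-dc1-01"
# {role}-{site}-{number}: lowercase role, alphanumeric site, two-digit index
if [[ "$name" =~ ^[a-z]+-[a-z0-9]+-[0-9]{2}$ ]]; then
  echo "ok: '$name' matches {role}-{site}-{number}"
else
  echo "warn: '$name' does not match {role}-{site}-{number} - suggest renaming"
fi
```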
## Duplicate Prevention
Before creating objects, always check for existing duplicates:
```
# Before creating device
dcim_list_devices name=<proposed-name>
# Before creating VM
virt_list_vms name=<proposed-name>
# Before creating prefix
ipam_list_prefixes prefix=<proposed-prefix>
```
If duplicate found, inform user and suggest update instead of create.
## Available Commands
Users can invoke these commands for structured workflows:
| Command | Purpose |
|---------|---------|
| `/cmdb-search <query>` | Search across all CMDB objects |
| `/cmdb-device <action>` | Device CRUD operations |
| `/cmdb-ip <action>` | IP address and prefix management |
| `/cmdb-site <action>` | Site and location management |
| `/cmdb-audit [scope]` | Data quality analysis |
| `/cmdb-register` | Register current machine |
| `/cmdb-sync` | Sync machine state with NetBox |

View File

@@ -0,0 +1,163 @@
---
description: Audit NetBox changes with filtering by date, user, or object type
---
# CMDB Change Audit
Query and analyze the NetBox audit log for change tracking and compliance.
## Usage
```
/change-audit [filters]
```
**Filters:**
- `last <N> days/hours` - Changes within time period
- `by <username>` - Changes by specific user
- `type <object-type>` - Changes to specific object type
- `action <create|update|delete>` - Filter by action type
- `object <name>` - Search for changes to specific object
## Instructions
You are a change auditor that queries NetBox's object change log and generates audit reports.
### MCP Tools
Use these tools to query the audit log:
- `extras_list_object_changes` - List changes with filters:
- `user_id` - Filter by user ID
- `changed_object_type` - Filter by object type (e.g., "dcim.device", "ipam.ipaddress")
- `action` - Filter by action: "create", "update", "delete"
- `extras_get_object_change` - Get detailed change record by ID
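Typical filter combinations, using only the parameters listed above (values in angle brackets are placeholders):
```
extras_list_object_changes action=delete changed_object_type=dcim.device   # recent device deletions
extras_list_object_changes user_id=<user_id>                               # all changes by one user
extras_get_object_change id=<change_id>                                    # full pre/post data for one change
```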
### Common Object Types
| Category | Object Types |
|----------|--------------|
| DCIM | `dcim.device`, `dcim.interface`, `dcim.site`, `dcim.rack`, `dcim.cable` |
| IPAM | `ipam.ipaddress`, `ipam.prefix`, `ipam.vlan`, `ipam.vrf` |
| Virtualization | `virtualization.virtualmachine`, `virtualization.cluster` |
| Tenancy | `tenancy.tenant`, `tenancy.contact` |
### Workflow
1. **Parse user request** to determine filters
2. **Query object changes** using `extras_list_object_changes`
3. **Enrich data** by fetching detailed records if needed
4. **Analyze patterns** in the changes
5. **Generate report** in structured format
### Report Format
```markdown
## NetBox Change Audit Report
**Generated:** [timestamp]
**Period:** [date range or "All time"]
**Filters:** [applied filters]
### Summary
| Metric | Count |
|--------|-------|
| Total Changes | X |
| Creates | Y |
| Updates | Z |
| Deletes | W |
| Unique Users | N |
| Object Types | M |
### Changes by Action
#### Created Objects (Y)
| Time | User | Object Type | Object | Details |
|------|------|-------------|--------|---------|
| 2024-01-15 14:30 | admin | dcim.device | server-01 | Created device |
| ... | ... | ... | ... | ... |
#### Updated Objects (Z)
| Time | User | Object Type | Object | Changed Fields |
|------|------|-------------|--------|----------------|
| 2024-01-15 15:00 | john | ipam.ipaddress | 10.0.1.50/24 | status, description |
| ... | ... | ... | ... | ... |
#### Deleted Objects (W)
| Time | User | Object Type | Object | Details |
|------|------|-------------|--------|---------|
| 2024-01-14 09:00 | admin | dcim.interface | eth2 | Removed from server-01 |
| ... | ... | ... | ... | ... |
### Changes by User
| User | Creates | Updates | Deletes | Total |
|------|---------|---------|---------|-------|
| admin | 5 | 10 | 2 | 17 |
| john | 3 | 8 | 0 | 11 |
### Changes by Object Type
| Object Type | Creates | Updates | Deletes | Total |
|-------------|---------|---------|---------|-------|
| dcim.device | 2 | 5 | 0 | 7 |
| ipam.ipaddress | 4 | 3 | 1 | 8 |
### Timeline
```
2024-01-15: ████████ 8 changes
2024-01-14: ████ 4 changes
2024-01-13: ██ 2 changes
```
### Notable Patterns
- **Bulk operations:** [Identify if many changes happened in short time]
- **Unusual activity:** [Flag unexpected deletions or after-hours changes]
- **Missing audit trail:** [Note if expected changes are not logged]
### Recommendations
1. [Any security or process recommendations based on findings]
```
### Time Period Handling
When user specifies "last N days":
- The NetBox API may not have direct date filtering in `extras_list_object_changes`
- Fetch recent changes and filter client-side by the `time` field
- Note any limitations in the report
### Enriching Change Details
For detailed audit, use `extras_get_object_change` with the change ID to see:
- `prechange_data` - Object state before change
- `postchange_data` - Object state after change
- `request_id` - Links related changes in same request
### Security Audit Mode
If user asks for "security audit" or "compliance report":
1. Focus on deletions and permission-sensitive changes
2. Highlight changes to critical objects (firewalls, VRFs, prefixes)
3. Flag changes outside business hours
4. Identify users with high change counts
## Examples
- `/change-audit` - Show recent changes (last 24 hours)
- `/change-audit last 7 days` - Changes in past week
- `/change-audit by admin` - All changes by admin user
- `/change-audit type dcim.device` - Device changes only
- `/change-audit action delete` - All deletions
- `/change-audit object server-01` - Changes to server-01
## User Request
$ARGUMENTS

View File

@@ -0,0 +1,195 @@
---
description: Audit NetBox data quality and identify consistency issues
---
# CMDB Data Quality Audit
Analyze NetBox data for quality issues and best practice violations.
## Usage
```
/cmdb-audit [scope]
```
**Scopes:**
- `all` (default) - Full audit across all categories
- `vms` - Virtual machines only
- `devices` - Physical devices only
- `naming` - Naming convention analysis
- `roles` - Role fragmentation analysis
## Instructions
You are a data quality auditor for NetBox. Your job is to identify consistency issues and best practice violations.
**IMPORTANT:** Load the `netbox-patterns` skill for best practice reference.
### Phase 1: Data Collection
Run these MCP tool calls to gather data for analysis:
```
1. virt_list_vms (no filters - get all)
2. dcim_list_devices (no filters - get all)
3. virt_list_clusters (no filters)
4. dcim_list_sites
5. tenancy_list_tenants
6. dcim_list_device_roles
7. dcim_list_platforms
```
Store the results for analysis.
### Phase 2: Quality Checks
Analyze collected data for these issues by severity:
#### CRITICAL Issues (must fix immediately)
| Check | Detection |
|-------|-----------|
| VMs without cluster | `cluster` field is null AND `site` field is null |
| Devices without site | `site` field is null |
| Active devices without primary IP | `status=active` AND `primary_ip4` is null AND `primary_ip6` is null |
#### HIGH Issues (should fix soon)
| Check | Detection |
|-------|-----------|
| VMs without site | VM has no site (neither direct nor via cluster.site) |
| VMs without tenant | `tenant` field is null |
| Devices without platform | `platform` field is null |
| Clusters not scoped to site | `site` field is null on cluster |
| VMs without role | `role` field is null |
#### MEDIUM Issues (plan to address)
| Check | Detection |
|-------|-----------|
| Inconsistent naming | Names don't match patterns: devices=`{role}-{site}-{num}`, VMs=`{env}-{app}-{num}` |
| Role fragmentation | More than 10 device roles with <3 assignments each |
| Missing tags on production | Active resources without any tags |
| Mixed naming separators | Some names use `_`, others use `-` |
#### LOW Issues (informational)
| Check | Detection |
|-------|-----------|
| Docker containers as VMs | Cluster type is "Docker Compose" - document this modeling choice |
| VMs without description | `description` field is empty |
| Sites without physical address | `physical_address` is empty |
| Devices without serial | `serial` field is empty |
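As one concrete illustration, the CRITICAL "VMs without cluster" check could run client-side over the collected data. A hedged sketch, assuming the `virt_list_vms` results were saved as a JSON array in `vms.json` and that `jq` is available:
```bash
# VMs with neither cluster nor site assigned (CRITICAL)
jq -r '.[] | select(.cluster == null and .site == null) | "\(.id)\t\(.name)"' vms.json
```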
### Phase 3: Naming Convention Analysis
For naming scope, analyze patterns:
1. **Extract naming patterns** from existing objects
2. **Identify dominant patterns** (most common conventions)
3. **Flag outliers** that don't match dominant patterns
4. **Suggest standardization** based on best practices
**Expected Patterns:**
- Devices: `{role}-{location}-{number}` (e.g., `web-dc1-01`)
- VMs: `{prefix}_{service}` or `{env}-{app}-{number}` (e.g., `prod-api-01`)
- Clusters: `{site}-{type}` (e.g., `home-docker`)
### Phase 4: Role Analysis
For roles scope, analyze fragmentation:
1. **List all device roles** with assignment counts
2. **Identify single-use roles** (only 1 device/VM)
3. **Identify similar roles** that could be consolidated
4. **Suggest consolidation** based on patterns
**Red Flags:**
- More than 15 highly specific roles
- Roles with technology in name (use platform instead)
- Roles that duplicate functionality
### Phase 5: Report Generation
Present findings in this structure:
```markdown
## CMDB Data Quality Audit Report
**Generated:** [timestamp]
**Scope:** [scope parameter]
### Summary
| Metric | Count |
|--------|-------|
| Total VMs | X |
| Total Devices | Y |
| Total Clusters | Z |
| **Total Issues** | **N** |
| Severity | Count |
|----------|-------|
| Critical | A |
| High | B |
| Medium | C |
| Low | D |
### Critical Issues
[List each with specific object names and IDs]
**Example:**
- VM `HotServ` (ID: 1) - No cluster or site assignment
- Device `server-01` (ID: 5) - No site assignment
### High Issues
[List each with specific object names]
### Medium Issues
[Grouped by category with counts]
### Recommendations
1. **[Most impactful fix]** - affects N objects
2. **[Second priority]** - affects M objects
...
### Quick Fixes
Commands to fix common issues:
```
# Assign site to VM
virt_update_vm id=X site=Y
# Assign platform to device
dcim_update_device id=X platform=Y
```
### Next Steps
- Run `/cmdb-register` to properly register new machines
- Use `/cmdb-sync` to update existing registrations
- Consider bulk updates via NetBox web UI for >10 items
```
## Scope-Specific Instructions
### For `vms` scope:
Focus only on Virtual Machine checks. Skip device and role analysis.
### For `devices` scope:
Focus only on Device checks. Skip VM and cluster analysis.
### For `naming` scope:
Focus on naming convention analysis across all objects. Generate detailed pattern report.
### For `roles` scope:
Focus on role fragmentation analysis. Generate consolidation recommendations.
## User Request
$ARGUMENTS

View File

@@ -0,0 +1,322 @@
---
description: Register the current machine into NetBox with all running applications
---
# CMDB Machine Registration
Register the current machine into NetBox, including hardware info, network interfaces, and running applications (Docker containers, services).
## Usage
```
/cmdb-register [--site <site-name>] [--tenant <tenant-name>] [--role <role-name>]
```
**Options:**
- `--site <name>`: Site to assign (will prompt if not provided)
- `--tenant <name>`: Tenant for resource isolation (optional)
- `--role <name>`: Device role (default: auto-detect based on services)
## Instructions
You are registering the current machine into NetBox. This is a multi-phase process that discovers local system information and creates corresponding NetBox objects.
**IMPORTANT:** Load the `netbox-patterns` skill for best practice reference.
### Phase 1: System Discovery (via Bash)
Gather system information using these commands:
#### 1.1 Basic Device Info
```bash
# Hostname
hostname
# OS/Platform info
cat /etc/os-release 2>/dev/null || uname -a
# Hardware model (varies by system)
# Raspberry Pi:
cat /proc/device-tree/model 2>/dev/null || echo "Unknown"
# x86 systems:
cat /sys/class/dmi/id/product_name 2>/dev/null || echo "Unknown"
# Serial number
# Raspberry Pi:
cat /proc/device-tree/serial-number 2>/dev/null || cat /proc/cpuinfo | grep Serial | cut -d: -f2 | tr -d ' ' 2>/dev/null
# x86 systems:
cat /sys/class/dmi/id/product_serial 2>/dev/null || echo "Unknown"
# CPU info
nproc
# Memory (MB)
free -m | awk '/Mem:/ {print $2}'
# Disk (GB, root filesystem)
df -BG / | awk 'NR==2 {print $2}' | tr -d 'G'
```
#### 1.2 Network Interfaces
```bash
# Get interfaces with IPs (JSON format)
ip -j addr show 2>/dev/null || ip addr show
# Get default gateway interface
ip route | grep default | awk '{print $5}' | head -1
# Get MAC addresses
ip -j link show 2>/dev/null || ip link show
```
#### 1.3 Running Applications
```bash
# Docker containers (if docker available)
docker ps --format '{"name":"{{.Names}}","image":"{{.Image}}","status":"{{.Status}}","ports":"{{.Ports}}"}' 2>/dev/null || echo "Docker not available"
# Docker Compose projects (check common locations)
find ~/apps /home/*/apps -name "docker-compose.yml" -o -name "docker-compose.yaml" 2>/dev/null | head -20
# Systemd services (running)
systemctl list-units --type=service --state=running --no-pager --plain 2>/dev/null | grep -v "^UNIT" | head -30
```
### Phase 2: Pre-Registration Checks (via MCP)
Before creating objects, verify prerequisites:
#### 2.1 Check if Device Already Exists
```
dcim_list_devices name=<hostname>
```
**If device exists:**
- Inform user and suggest `/cmdb-sync` instead
- Ask if they want to proceed with re-registration (will update existing)
#### 2.2 Verify/Create Site
If `--site` provided:
```
dcim_list_sites name=<site-name>
```
If site doesn't exist, ask user if they want to create it.
If no site provided, list available sites and ask user to choose:
```
dcim_list_sites
```
#### 2.3 Verify/Create Platform
Based on OS detected, check if platform exists:
```
dcim_list_platforms name=<platform-name>
```
**Platform naming:**
- `Raspberry Pi OS (Bookworm)` for Raspberry Pi
- `Ubuntu 24.04 LTS` for Ubuntu
- `Debian 12` for Debian
- Use format: `{OS Name} {Version}`
If platform doesn't exist, create it:
```
dcim_create_platform name=<platform-name> slug=<slug>
```
#### 2.4 Verify/Create Device Role
Based on detected services:
- If Docker containers found → `Docker Host`
- If only basic services → `Server`
- If specific role specified → Use that
```
dcim_list_device_roles name=<role-name>
```
### Phase 3: Device Registration (via MCP)
#### 3.1 Get/Create Manufacturer and Device Type
For Raspberry Pi:
```
dcim_list_manufacturers name="Raspberry Pi Foundation"
dcim_list_device_types manufacturer_id=X model="Raspberry Pi 4 Model B"
```
Create if not exists.
For generic x86:
```
dcim_list_manufacturers name=<detected-manufacturer>
```
#### 3.2 Create Device
```
dcim_create_device
name=<hostname>
device_type=<device_type_id>
role=<role_id>
site=<site_id>
platform=<platform_id>
tenant=<tenant_id> # if provided
serial=<serial>
description="Registered via cmdb-assistant"
```
#### 3.3 Create Interfaces
For each network interface discovered:
```
dcim_create_interface
device=<device_id>
name=<interface_name> # eth0, wlan0, tailscale0, etc.
type=<type> # 1000base-t, virtual, other
mac_address=<mac>
enabled=true
```
**Interface type mapping:**
- `eth*`, `enp*` → `1000base-t`
- `wlan*` → `ieee802.11ax` (or appropriate wifi type)
- `tailscale*`, `docker*`, `br-*` → `virtual`
- `lo` → skip (loopback)
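A hypothetical shell sketch of this mapping, where `$iface` holds a name discovered in Phase 1.2:
```bash
case "$iface" in
  eth*|enp*)                nb_type="1000base-t" ;;
  wlan*)                    nb_type="ieee802.11ax" ;;   # or the wifi standard actually in use
  tailscale*|docker*|br-*)  nb_type="virtual" ;;
  lo)                       nb_type="" ;;               # loopback - skip entirely
  *)                        nb_type="other" ;;
esac
```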
#### 3.4 Create IP Addresses
For each IP on each interface:
```
ipam_create_ip_address
address=<ip/prefix> # e.g., "192.168.1.100/24"
assigned_object_type="dcim.interface"
assigned_object_id=<interface_id>
status="active"
description="Discovered via cmdb-register"
```
#### 3.5 Set Primary IP
Identify primary IP (interface with default route):
```
dcim_update_device
id=<device_id>
primary_ip4=<primary_ip_id>
```
### Phase 4: Container Registration (via MCP)
If Docker containers were discovered:
#### 4.1 Create/Get Cluster Type
```
virt_list_cluster_types name="Docker Compose"
```
Create if not exists:
```
virt_create_cluster_type name="Docker Compose" slug="docker-compose"
```
#### 4.2 Create Cluster
For each Docker Compose project directory found:
```
virt_create_cluster
name=<project-name> # e.g., "apps-hotport"
type=<cluster_type_id>
site=<site_id>
description="Docker Compose stack on <hostname>"
```
#### 4.3 Create VMs for Containers
For each running container:
```
virt_create_vm
name=<container_name>
cluster=<cluster_id>
site=<site_id>
role=<role_id> # Map container function to role
status="active"
vcpus=<cpu_shares> # Default 1.0 if unknown
memory=<memory_mb> # Default 256 if unknown
disk=<disk_gb> # Default 5 if unknown
description=<container purpose>
comments=<image, ports, volumes info>
```
**Container role mapping:**
- `*caddy*`, `*nginx*`, `*traefik*` → "Reverse Proxy"
- `*db*`, `*postgres*`, `*mysql*`, `*redis*` → "Database"
- `*webui*`, `*frontend*` → "Web Application"
- Others → Infer from image name or use generic "Container"
### Phase 5: Documentation
#### 5.1 Add Journal Entry
```
extras_create_journal_entry
assigned_object_type="dcim.device"
assigned_object_id=<device_id>
comments="Device registered via /cmdb-register command\n\nDiscovered:\n- X network interfaces\n- Y IP addresses\n- Z Docker containers"
```
### Phase 6: Summary Report
Present registration summary:
```markdown
## Machine Registration Complete
### Device Created
- **Name:** <hostname>
- **Site:** <site>
- **Platform:** <platform>
- **Role:** <role>
- **ID:** <device_id>
- **URL:** https://netbox.example.com/dcim/devices/<id>/
### Network Interfaces
| Interface | Type | MAC | IP Address |
|-----------|------|-----|------------|
| eth0 | 1000base-t | aa:bb:cc:dd:ee:ff | 192.168.1.100/24 |
| tailscale0 | virtual | - | 100.x.x.x/32 |
### Primary IP: 192.168.1.100
### Docker Containers Registered (if applicable)
**Cluster:** <cluster_name> (ID: <cluster_id>)
| Container | Role | vCPUs | Memory | Status |
|-----------|------|-------|--------|--------|
| media_jellyfin | Media Server | 2.0 | 2048MB | Active |
| media_sonarr | Media Management | 1.0 | 512MB | Active |
### Next Steps
- Run `/cmdb-sync` periodically to keep data current
- Run `/cmdb-audit` to check data quality
- Add tags for classification (env:*, team:*, etc.)
```
## Error Handling
- **Device already exists:** Suggest `/cmdb-sync` or ask to proceed
- **Site not found:** List available sites, offer to create new
- **Docker not available:** Skip container registration, note in summary
- **Permission denied:** Note which operations failed, suggest fixes
## User Request
$ARGUMENTS

View File

@@ -0,0 +1,336 @@
---
description: Synchronize current machine state with existing NetBox record
---
# CMDB Machine Sync
Update an existing NetBox device record with the current machine state. Compares local system information with NetBox and applies changes.
## Usage
```
/cmdb-sync [--full] [--dry-run]
```
**Options:**
- `--full`: Force refresh all fields, even unchanged ones
- `--dry-run`: Show what would change without applying updates
## Instructions
You are synchronizing the current machine's state with its NetBox record. This involves comparing current system state with stored data and updating differences.
**IMPORTANT:** Load the `netbox-patterns` skill for best practice reference.
### Phase 1: Device Lookup (via MCP)
First, find the existing device record:
```bash
# Get current hostname
hostname
```
```
dcim_list_devices name=<hostname>
```
**If device not found:**
- Inform user: "Device '<hostname>' not found in NetBox"
- Suggest: "Run `/cmdb-register` to register this machine first"
- Exit sync
**If device found:**
- Store device ID and all current field values
- Fetch interfaces: `dcim_list_interfaces device_id=<device_id>`
- Fetch IPs: `ipam_list_ip_addresses device_id=<device_id>`
Also check for associated clusters/VMs:
```
virt_list_clusters # Look for cluster associated with this device
virt_list_vms cluster=<cluster_id> # If cluster found
```
### Phase 2: Current State Discovery (via Bash)
Gather current system information (same as `/cmdb-register`):
```bash
# Device info
hostname
cat /etc/os-release 2>/dev/null || uname -a
nproc
free -m | awk '/Mem:/ {print $2}'
df -BG / | awk 'NR==2 {print $2}' | tr -d 'G'
# Network interfaces with IPs
ip -j addr show 2>/dev/null || ip addr show
# Docker containers
docker ps --format '{"name":"{{.Names}}","image":"{{.Image}}","status":"{{.Status}}"}' 2>/dev/null || echo "[]"
```
### Phase 3: Comparison
Compare discovered state with NetBox record:
#### 3.1 Device Attributes
| Field | Compare |
|-------|---------|
| Platform | OS version changed? |
| Status | Still active? |
| Serial | Match? |
| Description | Keep existing |
#### 3.2 Network Interfaces
| Change Type | Detection |
|-------------|-----------|
| New interface | Interface exists locally but not in NetBox |
| Removed interface | Interface in NetBox but not locally |
| Changed MAC | MAC address different |
| Interface type | Type mismatch |
#### 3.3 IP Addresses
| Change Type | Detection |
|-------------|-----------|
| New IP | IP exists locally but not in NetBox |
| Removed IP | IP in NetBox but not locally (on this device) |
| Primary IP changed | Default route interface changed |
#### 3.4 Docker Containers
| Change Type | Detection |
|-------------|-----------|
| New container | Container running locally but no VM in cluster |
| Stopped container | VM exists but container not running |
| Resource change | vCPUs/memory different (if trackable) |
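One hedged way to compute the interface deltas from table 3.2, assuming the local and NetBox interface names have been dumped one per line:
```bash
# local.txt:  ip -j link show | jq -r '.[].ifname'            (from Phase 2)
# netbox.txt: names from dcim_list_interfaces device_id=<id>  (from Phase 1)
comm -23 <(sort local.txt) <(sort netbox.txt)   # present locally only  -> create in NetBox
comm -13 <(sort local.txt) <(sort netbox.txt)   # present in NetBox only -> mark offline
```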
### Phase 4: Diff Report
Present changes to user:
```markdown
## Sync Diff Report
**Device:** <hostname> (ID: <device_id>)
**NetBox URL:** https://netbox.example.com/dcim/devices/<id>/
### Device Attributes
| Field | NetBox Value | Current Value | Action |
|-------|--------------|---------------|--------|
| Platform | Ubuntu 22.04 | Ubuntu 24.04 | UPDATE |
| Status | active | active | - |
### Network Interfaces
#### New Interfaces (will create)
| Interface | Type | MAC | IPs |
|-----------|------|-----|-----|
| tailscale0 | virtual | - | 100.x.x.x/32 |
#### Removed Interfaces (will mark offline)
| Interface | Type | Reason |
|-----------|------|--------|
| eth1 | 1000base-t | Not found locally |
#### Changed Interfaces
| Interface | Field | Old | New |
|-----------|-------|-----|-----|
| eth0 | mac_address | aa:bb:cc:00:00:00 | aa:bb:cc:11:11:11 |
### IP Addresses
#### New IPs (will create)
- 192.168.1.150/24 on eth0
#### Removed IPs (will unassign)
- 192.168.1.100/24 from eth0
### Docker Containers
#### New Containers (will create VMs)
| Container | Image | Role |
|-----------|-------|------|
| media_lidarr | linuxserver/lidarr | Media Management |
#### Stopped Containers (will mark offline)
| Container | Last Status |
|-----------|-------------|
| media_bazarr | Exited |
### Summary
- **Updates:** X
- **Creates:** Y
- **Removals/Offline:** Z
```
### Phase 5: User Confirmation
If not `--dry-run`:
```
The following changes will be applied:
- Update device platform to "Ubuntu 24.04"
- Create interface "tailscale0"
- Create IP "100.x.x.x/32" on tailscale0
- Create VM "media_lidarr" in cluster
- Mark VM "media_bazarr" as offline
Proceed with sync? [Y/n]
```
**Use AskUserQuestion** to get confirmation.
### Phase 6: Apply Updates (via MCP)
Only if user confirms (or `--full` specified):
#### 6.1 Device Updates
```
dcim_update_device
id=<device_id>
platform=<new_platform_id>
# ... other changed fields
```
#### 6.2 Interface Updates
**For new interfaces:**
```
dcim_create_interface
device=<device_id>
name=<interface_name>
type=<type>
mac_address=<mac>
enabled=true
```
**For removed interfaces:**
```
dcim_update_interface
id=<interface_id>
enabled=false
description="Marked offline by cmdb-sync - interface no longer present"
```
**For changed interfaces:**
```
dcim_update_interface
id=<interface_id>
mac_address=<new_mac>
```
#### 6.3 IP Address Updates
**For new IPs:**
```
ipam_create_ip_address
address=<ip/prefix>
assigned_object_type="dcim.interface"
assigned_object_id=<interface_id>
status="active"
```
**For removed IPs:**
```
ipam_update_ip_address
id=<ip_id>
assigned_object_type=null
assigned_object_id=null
description="Unassigned by cmdb-sync"
```
#### 6.4 Primary IP Update
If primary IP changed:
```
dcim_update_device
id=<device_id>
primary_ip4=<new_primary_ip_id>
```
#### 6.5 Container/VM Updates
**For new containers:**
```
virt_create_vm
name=<container_name>
cluster=<cluster_id>
status="active"
# ... other fields
```
**For stopped containers:**
```
virt_update_vm
id=<vm_id>
status="offline"
description="Container stopped - detected by cmdb-sync"
```
### Phase 7: Journal Entry
Document the sync:
```
extras_create_journal_entry
assigned_object_type="dcim.device"
assigned_object_id=<device_id>
comments="Device synced via /cmdb-sync command\n\nChanges applied:\n- <list of changes>"
```
### Phase 8: Summary Report
```markdown
## Sync Complete
**Device:** <hostname>
**Sync Time:** <timestamp>
### Changes Applied
- Updated platform: Ubuntu 22.04 → Ubuntu 24.04
- Created interface: tailscale0 (ID: X)
- Created IP: 100.x.x.x/32 (ID: Y)
- Created VM: media_lidarr (ID: Z)
- Marked VM offline: media_bazarr (ID: W)
### Current State
- **Interfaces:** 4 (3 active, 1 offline)
- **IP Addresses:** 5
- **Containers/VMs:** 8 (7 active, 1 offline)
### Next Sync
Run `/cmdb-sync` again after:
- Adding/removing Docker containers
- Changing network configuration
- OS upgrades
```
## Dry Run Mode
If `--dry-run` specified:
- Complete Phase 1-4 (lookup, discovery, compare, diff report)
- Skip Phase 5-8 (no confirmation, no updates, no journal)
- End with: "Dry run complete. No changes applied. Run without --dry-run to apply."
## Full Sync Mode
If `--full` specified:
- Skip user confirmation
- Update all fields even if unchanged (force refresh)
- Useful for ensuring NetBox matches current state exactly
## Error Handling
- **Device not found:** Suggest `/cmdb-register`
- **Permission denied on updates:** Note which failed, continue with others
- **Cluster not found:** Offer to create or skip container sync
- **API errors:** Log error, continue with remaining updates
## User Request
$ARGUMENTS

View File

@@ -0,0 +1,182 @@
---
description: Generate infrastructure topology diagrams from NetBox data
---
# CMDB Topology Visualization
Generate Mermaid diagrams showing infrastructure topology from NetBox.
## Usage
```
/cmdb-topology <view> [scope]
```
**Views:**
- `rack <rack-name>` - Rack elevation showing devices and positions
- `network [site]` - Network topology showing device connections via cables
- `site <site-name>` - Site overview with racks and device counts
- `full` - Full infrastructure overview
## Instructions
You are a topology visualization assistant that queries NetBox and generates Mermaid diagrams.
### View: Rack Elevation
Generate a rack view showing devices and their positions.
**Data Collection:**
1. Use `dcim_list_racks` to find the rack by name
2. Use `dcim_list_devices` with `rack_id` filter to get devices in rack
3. For each device, note: `position`, `u_height`, `face`, `name`, `role`
**Mermaid Output:**
```mermaid
graph TB
subgraph rack["Rack: <rack-name> (U<height>)"]
direction TB
u42["U42: empty"]
u41["U41: empty"]
u40["U40: server-01 (Server)"]
u39["U39: server-01 (cont.)"]
u38["U38: switch-01 (Switch)"]
%% ... continue for all units
end
```
**For devices spanning multiple U:**
- Mark the top U with device name and role
- Mark subsequent Us as "(cont.)" for the same device
- Empty Us should show "empty"
### View: Network Topology
Generate a network diagram showing device connections.
**Data Collection:**
1. Use `dcim_list_sites` if no site specified (get all)
2. Use `dcim_list_devices` with optional `site_id` filter
3. Use `dcim_list_cables` to get all connections
4. Use `dcim_list_interfaces` for each device to understand port names
**Mermaid Output:**
```mermaid
graph TD
subgraph site1["Site: Home"]
router1[("core-router-01<br/>Router")]
switch1[["dist-switch-01<br/>Switch"]]
server1["web-server-01<br/>Server"]
server2["db-server-01<br/>Server"]
end
router1 -->|"eth0 - eth1"| switch1
switch1 -->|"gi0/1 - eth0"| server1
switch1 -->|"gi0/2 - eth0"| server2
```
**Node shapes by role:**
- Router: `[(" ")]` (cylinder/database shape)
- Switch: `[[ ]]` (double brackets)
- Server: `[ ]` (rectangle)
- Firewall: `{{ }}` (hexagon)
- Other: `[ ]` (rectangle)
**Edge labels:** Show interface names on both ends (A-side - B-side)
### View: Site Overview
Generate a site-level view showing racks and summary counts.
**Data Collection:**
1. Use `dcim_get_site` to get site details
2. Use `dcim_list_racks` with `site_id` filter
3. Use `dcim_list_devices` with `site_id` filter for counts per rack
**Mermaid Output:**
```mermaid
graph TB
subgraph site["Site: Headquarters"]
subgraph row1["Row 1"]
rack1["Rack A1<br/>12/42 U used<br/>5 devices"]
rack2["Rack A2<br/>20/42 U used<br/>8 devices"]
end
subgraph row2["Row 2"]
rack3["Rack B1<br/>8/42 U used<br/>3 devices"]
end
end
```
### View: Full Infrastructure
Generate a high-level view of all sites and their relationships.
**Data Collection:**
1. Use `dcim_list_regions` to get hierarchy
2. Use `dcim_list_sites` to get all sites
3. Use `dcim_list_devices` with status filter for counts
**Mermaid Output:**
```mermaid
graph TB
subgraph region1["Region: Americas"]
site1["Headquarters<br/>3 racks, 25 devices"]
site2["Branch Office<br/>1 rack, 5 devices"]
end
subgraph region2["Region: Europe"]
site3["EU Datacenter<br/>10 racks, 100 devices"]
end
site1 -.->|"WAN Link"| site3
```
### Output Format
Always provide:
1. **Summary** - Brief description of what the diagram shows
2. **Mermaid Code Block** - The diagram code in a fenced code block
3. **Legend** - Explanation of shapes and colors used
4. **Data Notes** - Any data quality issues (e.g., devices without position, missing cables)
**Example Output:**
```markdown
## Network Topology: Home Site
This diagram shows the network connections between 4 devices at the Home site.
```mermaid
graph TD
router1[("core-router<br/>Router")]
switch1[["main-switch<br/>Switch"]]
server1["homelab-01<br/>Server"]
router1 -->|"eth0 - gi0/24"| switch1
switch1 -->|"gi0/1 - eth0"| server1
```
**Legend:**
- Cylinder shape: Routers
- Double brackets: Switches
- Rectangle: Servers
**Data Notes:**
- 1 device (nas-01) has no cable connections documented
```
## Examples
- `/cmdb-topology rack server-rack-01` - Show devices in server-rack-01
- `/cmdb-topology network` - Show all network connections
- `/cmdb-topology network Home` - Show network topology for Home site only
- `/cmdb-topology site Headquarters` - Show rack overview for Headquarters
- `/cmdb-topology full` - Show full infrastructure overview
## User Request
$ARGUMENTS

View File

@@ -0,0 +1,226 @@
---
description: Detect IP address conflicts and overlapping prefixes in NetBox
---
# CMDB IP Conflict Detection
Scan NetBox IPAM data to identify IP address conflicts and overlapping prefixes.
## Usage
```
/ip-conflicts [scope]
```
**Scopes:**
- `all` (default) - Full scan of all IP data
- `addresses` - Check for duplicate IP addresses only
- `prefixes` - Check for overlapping prefixes only
- `vrf <name>` - Scan specific VRF only
- `prefix <cidr>` - Scan within specific prefix
## Instructions
You are an IP conflict detection specialist that analyzes NetBox IPAM data for conflicts and issues.
### Conflict Types to Detect
#### 1. Duplicate IP Addresses
Multiple IP address records with the same address (within same VRF).
**Detection:**
1. Use `ipam_list_ip_addresses` to get all addresses
2. Group by address + VRF combination
3. Flag groups with more than one record
**Exception:** Anycast addresses may legitimately appear multiple times - check the `role` field for "anycast".
#### 2. Overlapping Prefixes
Prefixes that contain the same address space (within same VRF).
**Detection:**
1. Use `ipam_list_prefixes` to get all prefixes
2. For each prefix pair in the same VRF, check if one contains the other (see the sketch below)
3. Legitimate hierarchies should have proper parent-child relationships
**Legitimate Overlaps:**
- Parent/child prefix hierarchy (e.g., 10.0.0.0/8 contains 10.0.1.0/24)
- Different VRFs (isolated routing tables)
- Marked as "container" status
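A minimal sketch of that containment test (step 2) using Python's standard `ipaddress` module from the shell; the prefixes are illustrative:
```bash
a="10.0.0.0/24"; b="10.0.0.0/25"
python3 -c "import ipaddress as ip; A=ip.ip_network('$a'); B=ip.ip_network('$b'); print(A.supernet_of(B) or B.supernet_of(A))"
# prints True -> the pair overlaps; check VRF and container status before flagging
```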
#### 3. IPs Outside Their Prefix
IP addresses that don't fall within any defined prefix.
**Detection:**
1. For each IP address, find the most specific prefix that contains it
2. Flag IPs with no matching prefix
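A minimal sketch of the lookup, assuming plain CIDR strings from `ipam_list_prefixes` and `ipam_list_ip_addresses`:

```python
import ipaddress

def most_specific_prefix(ip_str, prefix_strs):
    """Return the longest (most specific) prefix containing the IP, or None if orphaned."""
    ip = ipaddress.ip_interface(ip_str).ip            # strips the /mask from "172.16.5.10/24"
    candidates = [ipaddress.ip_network(p) for p in prefix_strs]
    matches = [net for net in candidates if ip in net]
    return max(matches, key=lambda net: net.prefixlen) if matches else None
```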
#### 4. Prefix Overlap Across VRFs (Informational)
Same prefix appearing in multiple VRFs - not necessarily a conflict, but worth noting.
### MCP Tools
- `ipam_list_ip_addresses` - Get all IP addresses with filters:
- `address` - Filter by specific address
- `vrf_id` - Filter by VRF
- `parent` - Filter by parent prefix
- `status` - Filter by status
- `ipam_list_prefixes` - Get all prefixes with filters:
- `prefix` - Filter by prefix CIDR
- `vrf_id` - Filter by VRF
- `within` - Find prefixes within a parent
- `contains` - Find prefixes containing an address
- `ipam_list_vrfs` - List VRFs for context
- `ipam_get_ip_address` - Get detailed IP info including assigned device/interface
- `ipam_get_prefix` - Get detailed prefix info
### Workflow
1. **Data Collection**
- Fetch all IP addresses (or filtered set)
- Fetch all prefixes (or filtered set)
- Fetch VRFs for context
2. **Duplicate Detection**
- Build address map: `{address+vrf: [records]}`
- Filter for entries with >1 record
3. **Overlap Detection**
- For each VRF, compare prefixes pairwise
- Check using CIDR math: does prefix A contain prefix B or vice versa?
- Ignore legitimate hierarchies (status=container)
4. **Orphan IP Detection**
- For each IP, find containing prefix
- Flag IPs with no prefix match
5. **Generate Report**
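A minimal sketch of step 3, assuming each prefix record is a flat dict with `prefix`, `vrf`, and `status` keys (the real API nests these as objects):

```python
import ipaddress
from itertools import combinations

def find_prefix_overlaps(prefixes):
    """Return overlapping prefix pairs within the same VRF, ignoring container parents."""
    by_vrf = {}
    for p in prefixes:
        by_vrf.setdefault(p.get("vrf") or "Global", []).append(p)

    issues = []
    for vrf, group in by_vrf.items():
        for a, b in combinations(group, 2):
            na, nb = ipaddress.ip_network(a["prefix"]), ipaddress.ip_network(b["prefix"])
            if na.version != nb.version or not na.overlaps(nb):
                continue
            # A container prefix legitimately holds children - skip that case
            parent = a if na.prefixlen <= nb.prefixlen else b
            if parent.get("status") == "container":
                continue
            issues.append((vrf, a["prefix"], b["prefix"]))
    return issues
```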
### Report Format
~~~markdown
## IP Conflict Detection Report
**Generated:** [timestamp]
**Scope:** [scope parameter]
### Summary
| Check | Status | Count |
|-------|--------|-------|
| Duplicate IPs | [PASS/FAIL] | X |
| Overlapping Prefixes | [PASS/FAIL] | Y |
| Orphan IPs | [PASS/FAIL] | Z |
| Total Issues | - | N |
### Critical Issues
#### Duplicate IP Addresses
| Address | VRF | Count | Assigned To |
|---------|-----|-------|-------------|
| 10.0.1.50/24 | Global | 2 | server-01 (eth0), server-02 (eth0) |
| 192.168.1.100/24 | Global | 2 | router-01 (gi0/1), switch-01 (vlan10) |
**Impact:** IP conflicts cause network connectivity issues. Devices will have intermittent connectivity.
**Resolution:**
- Determine which device should have the IP
- Update or remove the duplicate assignment
- Consider IP reservation to prevent future conflicts
#### Overlapping Prefixes
| Prefix 1 | Prefix 2 | VRF | Type |
|----------|----------|-----|------|
| 10.0.0.0/24 | 10.0.0.0/25 | Global | Unstructured overlap |
| 192.168.0.0/16 | 192.168.1.0/24 | Production | Missing container flag |
**Impact:** Overlapping prefixes can cause routing ambiguity and IP management confusion.
**Resolution:**
- For legitimate hierarchies: Mark parent prefix as status="container"
- For accidental overlaps: Consolidate or re-address one prefix
### Warnings
#### IPs Without Prefix
| Address | VRF | Assigned To | Nearest Prefix |
|---------|-----|-------------|----------------|
| 172.16.5.10/24 | Global | server-03 (eth0) | None found |
**Impact:** IPs without a prefix bypass IPAM allocation controls.
**Resolution:**
- Create appropriate prefix to contain the IP
- Or update IP to correct address within existing prefix
### Informational
#### Same Prefix in Multiple VRFs
| Prefix | VRFs | Purpose |
|--------|------|---------|
| 10.0.0.0/24 | Global, DMZ, Internal | [Check if intentional] |
### Statistics
| Metric | Value |
|--------|-------|
| Total IP Addresses | X |
| Total Prefixes | Y |
| Total VRFs | Z |
| Utilization (IPs/Prefix space) | W% |
### Remediation Commands
```
# Remove duplicate IP (keep server-01's assignment)
ipam_delete_ip_address id=123
# Mark prefix as container
ipam_update_prefix id=456 status=container
# Create missing prefix for orphan IP
ipam_create_prefix prefix=172.16.5.0/24 status=active
```
~~~
### CIDR Math Reference
For overlap detection, use these rules:
- Prefix A **contains** Prefix B if: A.network <= B.network AND A.broadcast >= B.broadcast
- Two prefixes **overlap** if: A.network <= B.broadcast AND B.network <= A.broadcast
**Example:**
- 10.0.0.0/8 contains 10.0.1.0/24 (legitimate hierarchy)
- 10.0.0.0/24 and 10.0.0.128/25 overlap (10.0.0.128/25 is within 10.0.0.0/24)
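A minimal sketch of these rules using Python's `ipaddress` module:

```python
import ipaddress

def contains(a: str, b: str) -> bool:
    """True if prefix a fully contains prefix b (same address family)."""
    na, nb = ipaddress.ip_network(a), ipaddress.ip_network(b)
    return na.version == nb.version and nb.subnet_of(na)

def overlaps(a: str, b: str) -> bool:
    """True if the two prefixes share any address space."""
    na, nb = ipaddress.ip_network(a), ipaddress.ip_network(b)
    return na.version == nb.version and na.overlaps(nb)

assert contains("10.0.0.0/8", "10.0.1.0/24")       # legitimate hierarchy
assert overlaps("10.0.0.0/24", "10.0.0.128/25")    # the /25 sits inside the /24
```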
### Severity Levels
| Issue | Severity | Description |
|-------|----------|-------------|
| Duplicate IP (same interface type) | CRITICAL | Active conflict, causes outages |
| Duplicate IP (different roles) | HIGH | Potential conflict |
| Overlapping prefixes (same status) | HIGH | IPAM management issue |
| Overlapping prefixes (container ok) | LOW | May need status update |
| Orphan IP | MEDIUM | Bypasses IPAM controls |
## Examples
- `/ip-conflicts` - Full scan for all conflicts
- `/ip-conflicts addresses` - Check only for duplicate IPs
- `/ip-conflicts prefixes` - Check only for overlapping prefixes
- `/ip-conflicts vrf Production` - Scan only Production VRF
- `/ip-conflicts prefix 10.0.0.0/8` - Scan within specific prefix range
## User Request
$ARGUMENTS

View File

@@ -0,0 +1,21 @@
{
"hooks": {
"SessionStart": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/startup-check.sh"
}
],
"PreToolUse": [
{
"matcher": "mcp__plugin_cmdb-assistant_netbox__virt_create|mcp__plugin_cmdb-assistant_netbox__virt_update|mcp__plugin_cmdb-assistant_netbox__dcim_create|mcp__plugin_cmdb-assistant_netbox__dcim_update",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/validate-input.sh"
}
]
}
]
}
}

View File

@@ -0,0 +1,66 @@
#!/bin/bash
# cmdb-assistant SessionStart hook
# Tests NetBox API connectivity and checks for data quality issues
# All output MUST have [cmdb-assistant] prefix
# Non-blocking: always exits 0
set -euo pipefail
PREFIX="[cmdb-assistant]"
# Load NetBox configuration
NETBOX_CONFIG="$HOME/.config/claude/netbox.env"
if [[ ! -f "$NETBOX_CONFIG" ]]; then
echo "$PREFIX NetBox not configured - run /cmdb-assistant:initial-setup"
exit 0
fi
# Source config
source "$NETBOX_CONFIG"
# Validate required variables
if [[ -z "${NETBOX_API_URL:-}" ]] || [[ -z "${NETBOX_API_TOKEN:-}" ]]; then
echo "$PREFIX Missing NETBOX_API_URL or NETBOX_API_TOKEN in config"
exit 0
fi
# Quick API connectivity test (5s timeout)
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" -m 5 \
-H "Authorization: Token $NETBOX_API_TOKEN" \
-H "Accept: application/json" \
"${NETBOX_API_URL}/" 2>/dev/null || echo "000")
if [[ "$HTTP_CODE" == "000" ]]; then
echo "$PREFIX NetBox API unreachable (timeout/connection error)"
exit 0
elif [[ "$HTTP_CODE" != "200" ]]; then
echo "$PREFIX NetBox API returned HTTP $HTTP_CODE - check credentials"
exit 0
fi
# Check for VMs without site assignment (data quality)
VMS_RESPONSE=$(curl -s -m 5 \
-H "Authorization: Token $NETBOX_API_TOKEN" \
-H "Accept: application/json" \
"${NETBOX_API_URL}/virtualization/virtual-machines/?site__isnull=true&limit=1" 2>/dev/null || echo '{"count":0}')
VMS_NO_SITE=$(echo "$VMS_RESPONSE" | grep -o '"count":[0-9]*' | cut -d: -f2 || echo "0")
if [[ "$VMS_NO_SITE" -gt 0 ]]; then
echo "$PREFIX $VMS_NO_SITE VMs without site assignment - run /cmdb-audit for details"
fi
# Check for devices without platform
DEVICES_RESPONSE=$(curl -s -m 5 \
-H "Authorization: Token $NETBOX_API_TOKEN" \
-H "Accept: application/json" \
"${NETBOX_API_URL}/dcim/devices/?platform__isnull=true&limit=1" 2>/dev/null || echo '{"count":0}')
DEVICES_NO_PLATFORM=$(echo "$DEVICES_RESPONSE" | grep -o '"count":[0-9]*' | cut -d: -f2 || echo "0")
if [[ "$DEVICES_NO_PLATFORM" -gt 0 ]]; then
echo "$PREFIX $DEVICES_NO_PLATFORM devices without platform - consider updating"
fi
exit 0

View File

@@ -0,0 +1,79 @@
#!/bin/bash
# cmdb-assistant PreToolUse validation hook
# Validates input parameters for create/update operations
# NON-BLOCKING: Warns but allows operation to proceed (always exits 0)
set -euo pipefail
PREFIX="[cmdb-assistant]"
# Read tool input from stdin
INPUT=$(cat)
# Extract tool name from the input
# Format varies, try to find tool_name or name field
TOOL_NAME=""
if echo "$INPUT" | grep -q '"tool_name"'; then
TOOL_NAME=$(echo "$INPUT" | grep -o '"tool_name"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1 | sed 's/.*"\([^"]*\)"$/\1/' || true)
elif echo "$INPUT" | grep -q '"name"'; then
TOOL_NAME=$(echo "$INPUT" | grep -o '"name"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1 | sed 's/.*"\([^"]*\)"$/\1/' || true)
fi
# If we can't determine the tool, exit silently
if [[ -z "$TOOL_NAME" ]]; then
exit 0
fi
# VM creation/update validation
if echo "$TOOL_NAME" | grep -qE "virt_create_vm|virt_create_virtual_machine|virt_update_vm|virt_update_virtual_machine"; then
WARNINGS=()
# Check for missing site
if ! echo "$INPUT" | grep -qE '"site"[[:space:]]*:[[:space:]]*[0-9]'; then
WARNINGS+=("no site assigned")
fi
# Check for missing tenant
if ! echo "$INPUT" | grep -qE '"tenant"[[:space:]]*:[[:space:]]*[0-9]'; then
WARNINGS+=("no tenant assigned")
fi
# Check for missing platform
if ! echo "$INPUT" | grep -qE '"platform"[[:space:]]*:[[:space:]]*[0-9]'; then
WARNINGS+=("no platform assigned")
fi
if [[ ${#WARNINGS[@]} -gt 0 ]]; then
echo "$PREFIX VM best practice: $(IFS=', '; echo "${WARNINGS[*]}") - consider assigning for data quality"
fi
fi
# Device creation/update validation
if echo "$TOOL_NAME" | grep -qE "dcim_create_device|dcim_update_device"; then
WARNINGS=()
# Check for missing platform
if ! echo "$INPUT" | grep -qE '"platform"[[:space:]]*:[[:space:]]*[0-9]'; then
WARNINGS+=("no platform assigned")
fi
# Check for missing tenant
if ! echo "$INPUT" | grep -qE '"tenant"[[:space:]]*:[[:space:]]*[0-9]'; then
WARNINGS+=("no tenant assigned")
fi
if [[ ${#WARNINGS[@]} -gt 0 ]]; then
echo "$PREFIX Device best practice: $(IFS=', '; echo "${WARNINGS[*]}") - consider assigning"
fi
fi
# Cluster creation validation
if echo "$TOOL_NAME" | grep -qE "virt_create_cluster"; then
# Check for missing site scope
if ! echo "$INPUT" | grep -qE '"site"[[:space:]]*:[[:space:]]*[0-9]'; then
echo "$PREFIX Cluster best practice: no site scope - clusters should be scoped to a site"
fi
fi
# Always allow operation (non-blocking)
exit 0

View File

@@ -0,0 +1,249 @@
---
description: NetBox best practices for data quality and consistency based on official NetBox Labs guidelines
---
# NetBox Best Practices Skill
Reference documentation for proper NetBox data modeling, following official NetBox Labs guidelines.
## CRITICAL: Dependency Order
Objects must be created in this order due to foreign key dependencies. Creating objects out of order results in validation errors.
```
1. ORGANIZATION (no dependencies)
├── Tenant Groups
├── Tenants (optional: Tenant Group)
├── Regions
├── Site Groups
└── Tags
2. SITES AND LOCATIONS
├── Sites (optional: Region, Site Group, Tenant)
└── Locations (requires: Site, optional: parent Location)
3. DCIM PREREQUISITES
├── Manufacturers
├── Device Types (requires: Manufacturer)
├── Platforms
├── Device Roles
└── Rack Roles
4. RACKS
└── Racks (requires: Site, optional: Location, Rack Role, Tenant)
5. DEVICES
├── Devices (requires: Device Type, Role, Site; optional: Rack, Location)
└── Interfaces (requires: Device)
6. VIRTUALIZATION
├── Cluster Types
├── Cluster Groups
├── Clusters (requires: Cluster Type, optional: Site)
├── Virtual Machines (requires: Cluster OR Site)
└── VM Interfaces (requires: Virtual Machine)
7. IPAM
├── VRFs (optional: Tenant)
├── Prefixes (optional: VRF, Site, Tenant)
├── IP Addresses (optional: VRF, Tenant, Interface)
└── VLANs (optional: Site, Tenant)
8. CONNECTIONS (last)
└── Cables (requires: endpoints)
```
**Key Rule:** NEVER create a VM before its cluster exists. NEVER create a device before its site exists.
## HIGH: Site Assignment
**All infrastructure objects should have a site:**
| Object Type | Site Requirement |
|-------------|------------------|
| Devices | **REQUIRED** |
| Racks | **REQUIRED** |
| VMs | RECOMMENDED (via cluster or direct) |
| Clusters | RECOMMENDED |
| Prefixes | RECOMMENDED |
| VLANs | RECOMMENDED |
**Why Sites Matter:**
- Location-based queries and filtering
- Power and capacity budgeting
- Physical inventory tracking
- Compliance and audit requirements
## HIGH: Tenant Usage
Use tenants for logical resource separation:
**When to Use Tenants:**
- Multi-team environments (assign resources to teams)
- Multi-customer scenarios (MSP, hosting)
- Cost allocation requirements
- Access control boundaries
**Apply Tenants To:**
- Sites (who owns the physical location)
- Devices (who operates the hardware)
- VMs (who owns the workload)
- Prefixes (who owns the IP space)
- VLANs (who owns the network segment)
## HIGH: Platform Tracking
Platforms track OS/runtime information for automation and lifecycle management.
**Platform Examples:**
| Device Type | Platform Examples |
|-------------|-------------------|
| Servers | Ubuntu 24.04, Windows Server 2022, RHEL 9 |
| Network | Cisco IOS 17.x, Junos 23.x, Arista EOS |
| Raspberry Pi | Raspberry Pi OS (Bookworm), Ubuntu Server ARM |
| Containers | Docker Container (as runtime indicator) |
**Benefits:**
- Vulnerability tracking (CVE correlation)
- Configuration management integration
- Lifecycle management (EOL tracking)
- Automation targeting
## MEDIUM: Tag Conventions
Use tags for cross-cutting classification that spans object types.
**Recommended Tag Patterns:**
| Pattern | Purpose | Examples |
|---------|---------|----------|
| `env:*` | Environment classification | `env:production`, `env:staging`, `env:development` |
| `app:*` | Application grouping | `app:web`, `app:database`, `app:monitoring` |
| `team:*` | Ownership | `team:platform`, `team:infra`, `team:devops` |
| `backup:*` | Backup policy | `backup:daily`, `backup:weekly`, `backup:none` |
| `monitoring:*` | Monitoring level | `monitoring:critical`, `monitoring:standard` |
**Tags vs Custom Fields:**
- Tags: Cross-object classification, multiple values, filtering
- Custom Fields: Object-specific structured data, single values, reporting
## MEDIUM: Naming Conventions
Consistent naming improves searchability and automation compatibility.
**Recommended Patterns:**
| Object Type | Pattern | Examples |
|-------------|---------|----------|
| Devices | `{role}-{location}-{number}` | `web-dc1-01`, `db-cloud-02`, `fw-home-01` |
| VMs | `{env}-{app}-{number}` | `prod-api-01`, `dev-worker-03` |
| Clusters | `{site}-{type}` | `dc1-vmware`, `home-docker` |
| Prefixes | Include purpose in description | "Production web tier /24" |
| VLANs | `{site}-{function}` | `dc1-mgmt`, `home-iot` |
**Avoid:**
- Inconsistent casing (mixing `HotServ` and `hotserv`)
- Mixed separators (mixing `hhl_cluster` and `media-cluster`)
- Generic names without context (`server1`, `vm2`)
- Special characters other than hyphen
## MEDIUM: Role Consolidation
Avoid role fragmentation - use general roles with platform/tags for specificity.
**Instead of:**
```
nginx-web-server
apache-web-server
web-server-frontend
web-server-api
```
**Use:**
```
web-server (role) + platform (nginx/apache) + tags (frontend, api)
```
**Recommended Role Categories:**
| Category | Roles |
|----------|-------|
| Infrastructure | `hypervisor`, `storage-server`, `network-device`, `firewall` |
| Compute | `application-server`, `database-server`, `web-server`, `api-server` |
| Services | `container-host`, `load-balancer`, `monitoring-server`, `backup-server` |
| Development | `development-workstation`, `ci-runner`, `build-server` |
| Containers | `reverse-proxy`, `database`, `cache`, `queue`, `worker` |
## Docker Containers as VMs
NetBox's Virtualization module can model Docker containers:
**Approach:**
1. Create device for physical Docker host
2. Create cluster (type: "Docker Compose" or "Docker Swarm")
3. Associate cluster with host device
4. Create VMs for each container in the cluster
**VM Fields for Containers:**
- `name`: Container name (e.g., `media_jellyfin`)
- `role`: Container function (e.g., `Media Server`)
- `vcpus`: CPU limit/shares
- `memory`: Memory limit (MB)
- `disk`: Volume size estimate
- `description`: Container purpose
- `comments`: Image, ports, volumes, dependencies
**This is a pragmatic modeling choice** - containers aren't VMs, but the Virtualization module is the closest fit for tracking container workloads.
## Primary IP Workflow
To set a device/VM's primary IP:
1. Create interface on device/VM
2. Create IP address assigned to interface
3. Set IP as `primary_ip4` or `primary_ip6` on device/VM
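For illustration, a minimal sketch of the same three steps against the NetBox REST API using the pynetbox client; the MCP tools (`dcim_create_interface`, `ipam_create_ip_address`, `dcim_update_device`) wrap equivalent calls. The URL, token, device name, interface type, and address are placeholders:

```python
import pynetbox

nb = pynetbox.api("https://netbox.example.com", token="...")  # placeholders

device = nb.dcim.devices.get(name="homelab-01")

# 1. Create the interface on the device
iface = nb.dcim.interfaces.create(device=device.id, name="eth0", type="1000base-t")

# 2. Create the IP address assigned to that interface
ip = nb.ipam.ip_addresses.create(
    address="10.0.1.50/24",
    assigned_object_type="dcim.interface",
    assigned_object_id=iface.id,
)

# 3. Point the device's primary_ip4 at the new address
device.primary_ip4 = ip.id
device.save()
```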
**Why Primary IP Matters:**
- Used for device connectivity checks
- Displayed in device list views
- Used by automation tools (NAPALM, Ansible)
- Required for many integrations
## Data Quality Checklist
Before closing a sprint or audit:
- [ ] All VMs have site assignment (direct or via cluster)
- [ ] All VMs have tenant assignment
- [ ] All active devices have platform
- [ ] All active devices have primary IP
- [ ] Naming follows conventions
- [ ] No orphaned prefixes (allocated but unused)
- [ ] Tags applied consistently
- [ ] Clusters scoped to sites
- [ ] Roles not overly fragmented
## MCP Tool Reference
**Dependency Order for Creation:**
```
1. dcim_create_site
2. dcim_create_manufacturer
3. dcim_create_device_type
4. dcim_create_device_role
5. dcim_create_platform
6. dcim_create_device
7. virt_create_cluster_type
8. virt_create_cluster
9. virt_create_vm
10. dcim_create_interface / virt_create_vm_interface
11. ipam_create_ip_address
12. dcim_update_device (set primary_ip4)
```
**Lookup Before Create:**
Always check if object exists before creating to avoid duplicates:
```
1. dcim_list_devices name=<hostname>
2. If exists, update; if not, create
```
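A minimal pynetbox sketch of the lookup-before-create pattern (URL, token, and the numeric IDs are placeholders; field names such as `role` vs `device_role` vary slightly across NetBox versions):

```python
import pynetbox

nb = pynetbox.api("https://netbox.example.com", token="...")  # placeholders

def get_or_create_device(name, **fields):
    """Look the device up by name first; only create it if it does not exist."""
    existing = nb.dcim.devices.get(name=name)
    return existing if existing else nb.dcim.devices.create(name=name, **fields)

device = get_or_create_device("web-dc1-01", device_type=1, role=2, site=3)  # illustrative IDs
```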

View File

@@ -2,9 +2,8 @@
"mcpServers": {
"contract-validator": {
"type": "stdio",
"command": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/contract-validator/.venv/bin/python",
"args": ["-m", "mcp_server.server"],
"cwd": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/contract-validator"
"command": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/contract-validator/run.sh",
"args": []
}
}
}

View File

@@ -19,6 +19,7 @@ Contract-validator solves these by parsing plugin interfaces and validating comp
- **Agent Extraction**: Parse CLAUDE.md Four-Agent Model tables and Agents sections
- **Compatibility Checks**: Pairwise validation between all plugins in a marketplace
- **Data Flow Validation**: Verify agent tool sequences have valid data producers/consumers
- **Dependency Visualization**: Generate Mermaid flowcharts showing plugin relationships
- **Comprehensive Reports**: Markdown or JSON reports with actionable suggestions
## Installation
@@ -40,9 +41,11 @@ pip install -r requirements.txt
| Command | Description |
|---------|-------------|
| `/initial-setup` | Interactive setup wizard |
| `/validate-contracts` | Full marketplace compatibility validation |
| `/check-agent` | Validate single agent definition |
| `/list-interfaces` | Show all plugin interfaces |
| `/dependency-graph` | Generate Mermaid flowchart of plugin dependencies |
## Agents
@@ -105,6 +108,16 @@ pip install -r requirements.txt
# Data Flow: No issues detected
```
```
/dependency-graph
# Output: Mermaid flowchart showing:
# - Plugins grouped by shared MCP servers
# - Data flow from data-platform to viz-platform
# - Required vs optional dependencies
# - Command counts per plugin
```
## Issue Types
| Type | Severity | Description |

View File

@@ -0,0 +1,251 @@
# /dependency-graph - Generate Dependency Visualization
Generate a Mermaid flowchart showing plugin dependencies, data flows, and tool relationships.
## Usage
```
/dependency-graph [marketplace_path] [--format <mermaid|text>] [--show-tools]
```
## Parameters
- `marketplace_path` (optional): Path to marketplace root. Defaults to current project root.
- `--format` (optional): Output format - `mermaid` (default) or `text`
- `--show-tools` (optional): Include individual tool nodes in the graph
## Workflow
1. **Discover plugins**:
- Scan plugins directory for all plugins with `.claude-plugin/` marker
- Parse each plugin's README.md to extract interface
- Parse CLAUDE.md for agent definitions and tool sequences
2. **Analyze dependencies**:
- Identify shared MCP servers (plugins using same server)
- Detect tool dependencies (which plugins produce vs consume data)
- Find agent tool references across plugins
- Categorize as required (ERROR if missing) or optional (WARNING if missing)
3. **Build dependency graph**:
- Create nodes for each plugin
- Create edges for:
- Shared MCP servers (bidirectional)
- Data producers -> consumers (directional)
- Agent tool dependencies (directional)
- Mark edges as optional or required
4. **Generate Mermaid output**:
- Create flowchart diagram syntax
- Style required dependencies with solid lines
- Style optional dependencies with dashed lines
- Group by MCP server or data flow
## Output Format
### Mermaid (default)
```mermaid
flowchart TD
subgraph mcp_gitea["MCP: gitea"]
projman["projman"]
pr-review["pr-review"]
end
subgraph mcp_data["MCP: data-platform"]
data-platform["data-platform"]
end
subgraph mcp_viz["MCP: viz-platform"]
viz-platform["viz-platform"]
end
%% Data flow dependencies
data-platform -->|"data_ref (required)"| viz-platform
%% Optional dependencies
projman -.->|"lessons (optional)"| pr-review
%% Styling
classDef required stroke:#e74c3c,stroke-width:2px
classDef optional stroke:#f39c12,stroke-dasharray:5 5
```
### Text Format
```
DEPENDENCY GRAPH
================
Plugins: 12
MCP Servers: 4
Dependencies: 8 (5 required, 3 optional)
MCP Server Groups:
gitea: projman, pr-review
data-platform: data-platform
viz-platform: viz-platform
netbox: cmdb-assistant
Data Flow Dependencies:
data-platform -> viz-platform (data_ref) [REQUIRED]
data-platform -> data-platform (data_ref) [INTERNAL]
Cross-Plugin Tool Usage:
projman.Planner uses: create_issue, search_lessons
pr-review.reviewer uses: get_pr_diff, create_pr_review
```
## Dependency Types
| Type | Line Style | Meaning |
|------|------------|---------|
| Required | Solid (`-->`) | Plugin cannot function without this dependency |
| Optional | Dashed (`-.->`) | Plugin works but with reduced functionality |
| Internal | Dotted (`...>`) | Self-dependency within same plugin |
| Shared MCP | Double (`==>`) | Plugins share same MCP server instance |
## Known Data Flow Patterns
The command recognizes these producer/consumer relationships:
### Data Producers
- `read_csv`, `read_parquet`, `read_json` - File loaders
- `pg_query`, `pg_execute` - Database queries
- `filter`, `select`, `groupby`, `join` - Transformations
### Data Consumers
- `describe`, `head`, `tail` - Data inspection
- `to_csv`, `to_parquet` - File writers
- `chart_create` - Visualization
### Cross-Plugin Flows
- `data-platform` produces `data_ref` -> `viz-platform` consumes for charts
- `projman` produces issues -> `pr-review` references in reviews
- `gitea` wiki -> `projman` lessons learned
## Examples
### Basic Usage
```
/dependency-graph
```
Generates Mermaid diagram for current marketplace.
### With Tool Details
```
/dependency-graph --show-tools
```
Includes individual tool nodes showing which tools each plugin provides.
### Text Summary
```
/dependency-graph --format text
```
Outputs text-based summary suitable for CLAUDE.md inclusion.
### Specific Path
```
/dependency-graph ~/claude-plugins-work
```
Analyze marketplace at specified path.
## Integration with Other Commands
Use with `/validate-contracts` to:
1. Run `/dependency-graph` to visualize relationships
2. Run `/validate-contracts` to find issues in those relationships
3. Fix issues and regenerate graph to verify
## Available Tools
Use these MCP tools:
- `parse_plugin_interface` - Extract interface from plugin README.md
- `parse_claude_md_agents` - Extract agents and their tool sequences
- `generate_compatibility_report` - Get full interface data (JSON format for analysis)
- `validate_data_flow` - Verify data producer/consumer relationships
## Implementation Notes
### Detecting Shared MCP Servers
Check each plugin's `.mcp.json` file for server definitions:
```bash
# List all .mcp.json files in plugins
find plugins/ -name ".mcp.json" -exec cat {} \;
```
Plugins with identical MCP server names share that server.
### Identifying Data Flows
1. Parse tool categories from README.md
2. Map known producer tools to their output types
3. Map known consumer tools to their input requirements
4. Create edges where outputs match inputs
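A minimal sketch of step 4, assuming the producer and consumer sets have already been derived from each plugin's README (the dictionaries below are illustrative, and the REQUIRED/INTERNAL labelling is simplified):

```python
# Hypothetical tool inventories parsed from each plugin's README.md
PRODUCES = {"data-platform": {"data_ref"}}          # read_csv, pg_query, ... all yield a data_ref
CONSUMES = {"viz-platform": {"data_ref"}, "data-platform": {"data_ref"}}

def build_data_flow_edges(produces, consumes):
    """Create (producer, consumer, artifact, kind) edges where an output type matches an input type."""
    edges = []
    for producer, outputs in produces.items():
        for consumer, inputs in consumes.items():
            for artifact in outputs & inputs:
                kind = "INTERNAL" if producer == consumer else "REQUIRED"
                edges.append((producer, consumer, artifact, kind))
    return edges

for edge in build_data_flow_edges(PRODUCES, CONSUMES):
    print(edge)   # e.g. ('data-platform', 'viz-platform', 'data_ref', 'REQUIRED')
```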
### Optional vs Required
- **Required**: Consumer tool has no default/fallback behavior
- **Optional**: Consumer works without producer (e.g., lessons search returns empty)
Determination is based on:
- Issue severity from `validate_data_flow` (ERROR = required, WARNING = optional)
- Tool documentation stating "requires" vs "uses if available"
## Sample Output
For the leo-claude-mktplace:
```mermaid
flowchart TD
subgraph gitea_mcp["Shared MCP: gitea"]
projman["projman<br/>14 commands"]
pr-review["pr-review<br/>6 commands"]
end
subgraph netbox_mcp["Shared MCP: netbox"]
cmdb-assistant["cmdb-assistant<br/>3 commands"]
end
subgraph data_mcp["Shared MCP: data-platform"]
data-platform["data-platform<br/>7 commands"]
end
subgraph viz_mcp["Shared MCP: viz-platform"]
viz-platform["viz-platform<br/>7 commands"]
end
subgraph standalone["Standalone Plugins"]
doc-guardian["doc-guardian"]
code-sentinel["code-sentinel"]
clarity-assist["clarity-assist"]
git-flow["git-flow"]
claude-config-maintainer["claude-config-maintainer"]
contract-validator["contract-validator"]
end
%% Data flow: data-platform -> viz-platform
data-platform -->|"data_ref"| viz-platform
%% Cross-plugin: projman lessons -> pr-review context
projman -.->|"lessons"| pr-review
%% Styling
classDef mcpGroup fill:#e8f4fd,stroke:#2196f3
classDef standalone fill:#f5f5f5,stroke:#9e9e9e
classDef required stroke:#e74c3c,stroke-width:2px
classDef optional stroke:#f39c12,stroke-dasharray:5 5
class gitea_mcp,netbox_mcp,data_mcp,viz_mcp mcpGroup
class standalone standalone
```

View File

@@ -0,0 +1,152 @@
---
description: Interactive setup wizard for contract-validator plugin - verifies MCP server and shows capabilities
---
# Contract-Validator Setup Wizard
This command sets up the contract-validator plugin for cross-plugin compatibility validation.
## Important Context
- **This command uses Bash, Read, Write, and AskUserQuestion tools** - NOT MCP tools
- **MCP tools won't work until after setup + session restart**
- **No external credentials required** - this plugin validates local files only
---
## Phase 1: Environment Validation
### Step 1.1: Check Python Version
```bash
python3 --version
```
Requires Python 3.10+. If below, stop setup and inform user:
```
Python 3.10 or higher is required. Please install it and run /initial-setup again.
```
---
## Phase 2: MCP Server Setup
### Step 2.1: Locate Contract-Validator MCP Server
```bash
# If running from installed marketplace
ls -la ~/.claude/plugins/marketplaces/leo-claude-mktplace/mcp-servers/contract-validator/ 2>/dev/null || echo "NOT_FOUND_INSTALLED"
# If running from source
ls -la ~/claude-plugins-work/mcp-servers/contract-validator/ 2>/dev/null || echo "NOT_FOUND_SOURCE"
```
Determine which path exists and use that as the MCP server path.
### Step 2.2: Check Virtual Environment
```bash
ls -la /path/to/mcp-servers/contract-validator/.venv/bin/python 2>/dev/null && echo "VENV_EXISTS" || echo "VENV_MISSING"
```
### Step 2.3: Create Virtual Environment (if missing)
```bash
cd /path/to/mcp-servers/contract-validator && python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip && pip install -r requirements.txt && deactivate
```
**If pip install fails:**
- Show the error to the user
- Suggest: "Check your internet connection and try again."
---
## Phase 3: Validation
### Step 3.1: Verify MCP Server
```bash
cd /path/to/mcp-servers/contract-validator && .venv/bin/python -c "from mcp_server.server import ContractValidatorMCPServer; print('MCP Server OK')"
```
If this fails, check the error and report it to the user.
### Step 3.2: Summary
Display:
```
╔════════════════════════════════════════════════════════════════╗
║ CONTRACT-VALIDATOR SETUP COMPLETE ║
╠════════════════════════════════════════════════════════════════╣
║ MCP Server: ✓ Ready ║
║ Parse Tools: ✓ Available (2 tools) ║
║ Validation Tools: ✓ Available (3 tools) ║
║ Report Tools: ✓ Available (2 tools) ║
╚════════════════════════════════════════════════════════════════╝
```
### Step 3.3: Session Restart Notice
---
**Session Restart Required**
Restart your Claude Code session for MCP tools to become available.
**After restart, you can:**
- Run `/validate-contracts` to check all plugins for compatibility issues
- Run `/check-agent` to validate a single agent definition
- Run `/list-interfaces` to see all plugin commands and tools
---
## Available Tools
| Category | Tools | Description |
|----------|-------|-------------|
| Parse | `parse_plugin_interface`, `parse_claude_md_agents` | Extract interfaces from README.md and agents from CLAUDE.md |
| Validation | `validate_compatibility`, `validate_agent_refs`, `validate_data_flow` | Check conflicts, tool references, and data flows |
| Report | `generate_compatibility_report`, `list_issues` | Generate reports and filter issues |
---
## Available Commands
| Command | Description |
|---------|-------------|
| `/validate-contracts` | Full marketplace compatibility validation |
| `/check-agent` | Validate single agent definition |
| `/list-interfaces` | Show all plugin interfaces |
---
## Use Cases
### 1. Pre-Release Validation
Run `/validate-contracts` before releasing a new marketplace version to catch:
- Command name conflicts between plugins
- Missing tool references in agents
- Broken data flows
### 2. Agent Development
Run `/check-agent` when creating or modifying agents to verify:
- All referenced tools exist
- Data flows are valid
- No undeclared dependencies
### 3. Plugin Audit
Run `/list-interfaces` to get a complete view of:
- All commands across plugins
- All tools available
- Potential overlap areas
---
## No Configuration Required
This plugin doesn't require any configuration files. It reads plugin manifests and README files directly from the filesystem.
**Paths it scans:**
- Marketplace: `~/.claude/plugins/marketplaces/leo-claude-mktplace/plugins/`
- Source (if available): `~/claude-plugins-work/plugins/`

View File

@@ -0,0 +1,195 @@
#!/bin/bash
# contract-validator SessionStart auto-validate hook
# Validates plugin contracts only when plugin files have changed since last check
# All output MUST have [contract-validator] prefix
PREFIX="[contract-validator]"
# ============================================================================
# Configuration
# ============================================================================
# Enable/disable auto-check (default: true)
AUTO_CHECK="${CONTRACT_VALIDATOR_AUTO_CHECK:-true}"
# Cache location for storing last check hash
CACHE_DIR="$HOME/.cache/claude-plugins/contract-validator"
HASH_FILE="$CACHE_DIR/last-check.hash"
# Marketplace location (installed plugins)
MARKETPLACE_PATH="$HOME/.claude/plugins/marketplaces/leo-claude-mktplace"
# ============================================================================
# Early exit if disabled
# ============================================================================
if [[ "$AUTO_CHECK" != "true" ]]; then
exit 0
fi
# ============================================================================
# Smart mode: Check if plugin files have changed
# ============================================================================
# Function to compute hash of all plugin manifest files
compute_plugin_hash() {
local hash_input=""
if [[ -d "$MARKETPLACE_PATH/plugins" ]]; then
# Hash all plugin.json, hooks.json, and agent files
while IFS= read -r -d '' file; do
if [[ -f "$file" ]]; then
hash_input+="$(md5sum "$file" 2>/dev/null | cut -d' ' -f1)"
fi
done < <(find "$MARKETPLACE_PATH/plugins" \
\( -name "plugin.json" -o -name "hooks.json" -o -name "*.md" -path "*/agents/*" \) \
-print0 2>/dev/null | sort -z)
fi
# Also include marketplace.json
if [[ -f "$MARKETPLACE_PATH/.claude-plugin/marketplace.json" ]]; then
hash_input+="$(md5sum "$MARKETPLACE_PATH/.claude-plugin/marketplace.json" 2>/dev/null | cut -d' ' -f1)"
fi
# Compute final hash
echo "$hash_input" | md5sum | cut -d' ' -f1
}
# Ensure cache directory exists
mkdir -p "$CACHE_DIR" 2>/dev/null
# Compute current hash
CURRENT_HASH=$(compute_plugin_hash)
# Check if we have a previous hash
if [[ -f "$HASH_FILE" ]]; then
PREVIOUS_HASH=$(cat "$HASH_FILE" 2>/dev/null)
# If hashes match, no changes - skip validation
if [[ "$CURRENT_HASH" == "$PREVIOUS_HASH" ]]; then
exit 0
fi
fi
# ============================================================================
# Run validation (hashes differ or no cache)
# ============================================================================
ISSUES_FOUND=0
WARNINGS=""
# Function to add warning
add_warning() {
WARNINGS+=" - $1"$'\n'
((ISSUES_FOUND++))
}
# 1. Check all installed plugins have valid plugin.json
if [[ -d "$MARKETPLACE_PATH/plugins" ]]; then
for plugin_dir in "$MARKETPLACE_PATH/plugins"/*/; do
if [[ -d "$plugin_dir" ]]; then
plugin_name=$(basename "$plugin_dir")
plugin_json="$plugin_dir/.claude-plugin/plugin.json"
if [[ ! -f "$plugin_json" ]]; then
add_warning "$plugin_name: missing .claude-plugin/plugin.json"
continue
fi
# Basic JSON validation
if ! python3 -c "import json; json.load(open('$plugin_json'))" 2>/dev/null; then
add_warning "$plugin_name: invalid JSON in plugin.json"
continue
fi
# Check required fields
if ! python3 -c "
import json
with open('$plugin_json') as f:
data = json.load(f)
required = ['name', 'version', 'description']
missing = [r for r in required if r not in data]
if missing:
exit(1)
" 2>/dev/null; then
add_warning "$plugin_name: plugin.json missing required fields"
fi
fi
done
fi
# 2. Check hooks.json files are properly formatted
if [[ -d "$MARKETPLACE_PATH/plugins" ]]; then
while IFS= read -r -d '' hooks_file; do
plugin_name=$(basename "$(dirname "$(dirname "$hooks_file")")")
# Validate JSON
if ! python3 -c "import json; json.load(open('$hooks_file'))" 2>/dev/null; then
add_warning "$plugin_name: invalid JSON in hooks/hooks.json"
continue
fi
# Validate hook structure
if ! python3 -c "
import json
with open('$hooks_file') as f:
data = json.load(f)
if 'hooks' not in data:
exit(1)
valid_events = ['PreToolUse', 'PostToolUse', 'UserPromptSubmit', 'SessionStart', 'SessionEnd', 'Notification', 'Stop', 'SubagentStop', 'PreCompact']
for event in data['hooks']:
if event not in valid_events:
exit(1)
for hook in data['hooks'][event]:
# Support both flat structure (type at top) and nested structure (matcher + hooks array)
if 'type' in hook:
# Flat structure: {type: 'command', command: '...'}
pass
elif 'matcher' in hook and 'hooks' in hook:
# Nested structure: {matcher: '...', hooks: [{type: 'command', ...}]}
for nested_hook in hook['hooks']:
if 'type' not in nested_hook:
exit(1)
else:
exit(1)
" 2>/dev/null; then
add_warning "$plugin_name: hooks.json has invalid structure or events"
fi
done < <(find "$MARKETPLACE_PATH/plugins" -path "*/hooks/hooks.json" -print0 2>/dev/null)
fi
# 3. Check agent references are valid (agent files exist and are markdown)
if [[ -d "$MARKETPLACE_PATH/plugins" ]]; then
while IFS= read -r -d '' agent_file; do
plugin_name=$(basename "$(dirname "$(dirname "$agent_file")")")
agent_name=$(basename "$agent_file")
# Check file is not empty
if [[ ! -s "$agent_file" ]]; then
add_warning "$plugin_name: empty agent file $agent_name"
continue
fi
# Check file has markdown content (at least a header)
if ! grep -q '^#' "$agent_file" 2>/dev/null; then
add_warning "$plugin_name: agent $agent_name missing markdown header"
fi
done < <(find "$MARKETPLACE_PATH/plugins" -path "*/agents/*.md" -print0 2>/dev/null)
fi
# ============================================================================
# Store new hash and report results
# ============================================================================
# Always store the new hash (even if issues found - we don't want to recheck)
echo "$CURRENT_HASH" > "$HASH_FILE"
# Report any issues found (non-blocking warning)
if [[ $ISSUES_FOUND -gt 0 ]]; then
echo "$PREFIX Plugin contract validation found $ISSUES_FOUND issue(s):"
echo "$WARNINGS"
echo "$PREFIX Run /validate-contracts for full details"
fi
# Always exit 0 (non-blocking)
exit 0

View File

@@ -0,0 +1,174 @@
#!/bin/bash
# contract-validator breaking change detection hook
# Warns when plugin interface changes might break consumers
# This is a PostToolUse hook - non-blocking, warnings only
PREFIX="[contract-validator]"
# Check if warnings are enabled (default: true)
if [[ "${CONTRACT_VALIDATOR_BREAKING_WARN:-true}" != "true" ]]; then
exit 0
fi
# Read tool input from stdin
INPUT=$(cat)
# Extract file_path from JSON input
FILE_PATH=$(echo "$INPUT" | grep -o '"file_path"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1 | sed 's/.*"file_path"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/')
# If no file_path found, exit silently
if [ -z "$FILE_PATH" ]; then
exit 0
fi
# Check if file is a plugin interface file
is_interface_file() {
local file="$1"
case "$file" in
*/plugin.json) return 0 ;;
*/.claude-plugin/plugin.json) return 0 ;;
*/hooks.json) return 0 ;;
*/hooks/hooks.json) return 0 ;;
*/.mcp.json) return 0 ;;
*/agents/*.md) return 0 ;;
*/commands/*.md) return 0 ;;
*/skills/*.md) return 0 ;;
esac
return 1
}
# Exit if not an interface file
if ! is_interface_file "$FILE_PATH"; then
exit 0
fi
# Check if file exists and is in a git repo
if [[ ! -f "$FILE_PATH" ]]; then
exit 0
fi
# Get the directory containing the file
FILE_DIR=$(dirname "$FILE_PATH")
FILE_NAME=$(basename "$FILE_PATH")
# Try to get the previous version from git
cd "$FILE_DIR" 2>/dev/null || exit 0
# Check if we're in a git repo
if ! git rev-parse --git-dir > /dev/null 2>&1; then
exit 0
fi
# Get previous version (HEAD version before current changes)
PREV_CONTENT=$(git show HEAD:./"$FILE_NAME" 2>/dev/null || echo "")
# If no previous version, this is a new file - no breaking changes possible
if [ -z "$PREV_CONTENT" ]; then
exit 0
fi
# Read current content
CURR_CONTENT=$(cat "$FILE_PATH" 2>/dev/null || echo "")
if [ -z "$CURR_CONTENT" ]; then
exit 0
fi
BREAKING_CHANGES=()
# Detect breaking changes based on file type
case "$FILE_PATH" in
*/plugin.json|*/.claude-plugin/plugin.json)
# Check for removed or renamed fields in plugin.json
# Check if name changed
PREV_NAME=$(echo "$PREV_CONTENT" | grep -o '"name"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1)
CURR_NAME=$(echo "$CURR_CONTENT" | grep -o '"name"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1)
if [ -n "$PREV_NAME" ] && [ "$PREV_NAME" != "$CURR_NAME" ]; then
BREAKING_CHANGES+=("Plugin name changed - consumers may need updates")
fi
# Check if version had major bump (semantic versioning)
PREV_VER=$(echo "$PREV_CONTENT" | grep -o '"version"[[:space:]]*:[[:space:]]*"[^"]*"' | sed 's/.*"\([0-9]*\)\..*/\1/')
CURR_VER=$(echo "$CURR_CONTENT" | grep -o '"version"[[:space:]]*:[[:space:]]*"[^"]*"' | sed 's/.*"\([0-9]*\)\..*/\1/')
if [ -n "$PREV_VER" ] && [ -n "$CURR_VER" ] && [ "$CURR_VER" -gt "$PREV_VER" ] 2>/dev/null; then
BREAKING_CHANGES+=("Major version bump detected - verify breaking changes documented")
fi
;;
*/hooks.json|*/hooks/hooks.json)
# Check for removed hook events
PREV_EVENTS=$(echo "$PREV_CONTENT" | grep -oE '"(PreToolUse|PostToolUse|UserPromptSubmit|SessionStart|SessionEnd|Notification|Stop|SubagentStop|PreCompact)"' | sort -u)
CURR_EVENTS=$(echo "$CURR_CONTENT" | grep -oE '"(PreToolUse|PostToolUse|UserPromptSubmit|SessionStart|SessionEnd|Notification|Stop|SubagentStop|PreCompact)"' | sort -u)
# Find removed events
REMOVED_EVENTS=$(comm -23 <(echo "$PREV_EVENTS") <(echo "$CURR_EVENTS") 2>/dev/null)
if [ -n "$REMOVED_EVENTS" ]; then
BREAKING_CHANGES+=("Hook events removed: $(echo $REMOVED_EVENTS | tr '\n' ' ')")
fi
# Check for changed matchers
PREV_MATCHERS=$(echo "$PREV_CONTENT" | grep -o '"matcher"[[:space:]]*:[[:space:]]*"[^"]*"' | sort -u)
CURR_MATCHERS=$(echo "$CURR_CONTENT" | grep -o '"matcher"[[:space:]]*:[[:space:]]*"[^"]*"' | sort -u)
if [ "$PREV_MATCHERS" != "$CURR_MATCHERS" ]; then
BREAKING_CHANGES+=("Hook matchers changed - verify tool coverage")
fi
;;
*/.mcp.json)
# Check for removed MCP servers
PREV_SERVERS=$(echo "$PREV_CONTENT" | grep -o '"[^"]*"[[:space:]]*:' | grep -v "mcpServers" | sort -u)
CURR_SERVERS=$(echo "$CURR_CONTENT" | grep -o '"[^"]*"[[:space:]]*:' | grep -v "mcpServers" | sort -u)
REMOVED_SERVERS=$(comm -23 <(echo "$PREV_SERVERS") <(echo "$CURR_SERVERS") 2>/dev/null)
if [ -n "$REMOVED_SERVERS" ]; then
BREAKING_CHANGES+=("MCP servers removed - tools may be unavailable")
fi
;;
*/agents/*.md)
# Check if agent file was significantly reduced (might indicate removal of capabilities)
PREV_LINES=$(echo "$PREV_CONTENT" | wc -l)
CURR_LINES=$(echo "$CURR_CONTENT" | wc -l)
# If more than 50% reduction, warn
if [ "$PREV_LINES" -gt 10 ] && [ "$CURR_LINES" -lt $((PREV_LINES / 2)) ]; then
BREAKING_CHANGES+=("Agent definition significantly reduced - capabilities may be removed")
fi
# Check if agent name/description changed in frontmatter
PREV_DESC=$(echo "$PREV_CONTENT" | head -20 | grep -i "description" | head -1)
CURR_DESC=$(echo "$CURR_CONTENT" | head -20 | grep -i "description" | head -1)
if [ -n "$PREV_DESC" ] && [ "$PREV_DESC" != "$CURR_DESC" ]; then
BREAKING_CHANGES+=("Agent description changed - verify consumer expectations")
fi
;;
*/commands/*.md|*/skills/*.md)
# Check if command/skill was significantly changed
PREV_LINES=$(echo "$PREV_CONTENT" | wc -l)
CURR_LINES=$(echo "$CURR_CONTENT" | wc -l)
if [ "$PREV_LINES" -gt 10 ] && [ "$CURR_LINES" -lt $((PREV_LINES / 2)) ]; then
BREAKING_CHANGES+=("Command/skill significantly reduced - behavior may change")
fi
;;
esac
# Output warnings if any breaking changes detected
if [[ ${#BREAKING_CHANGES[@]} -gt 0 ]]; then
echo ""
echo "$PREFIX WARNING: Potential breaking changes in $(basename "$FILE_PATH")"
echo "$PREFIX ============================================"
for change in "${BREAKING_CHANGES[@]}"; do
echo "$PREFIX - $change"
done
echo "$PREFIX ============================================"
echo "$PREFIX Consider updating CHANGELOG and notifying consumers"
echo ""
fi
# Always exit 0 - non-blocking
exit 0

View File

@@ -0,0 +1,21 @@
{
"hooks": {
"SessionStart": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/auto-validate.sh"
}
],
"PostToolUse": [
{
"matcher": "Edit|Write",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/breaking-change-check.sh"
}
]
}
]
}
}

View File

@@ -2,9 +2,8 @@
"mcpServers": {
"data-platform": {
"type": "stdio",
"command": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/data-platform/.venv/bin/python",
"args": ["-m", "mcp_server.server"],
"cwd": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/data-platform"
"command": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/data-platform/run.sh",
"args": []
}
}
}

View File

@@ -49,10 +49,13 @@ DBT_PROFILES_DIR=~/.dbt
| `/initial-setup` | Interactive setup wizard for PostgreSQL and dbt configuration |
| `/ingest` | Load data from files or database |
| `/profile` | Generate data profile and statistics |
| `/data-quality` | Data quality assessment with pass/warn/fail scoring |
| `/schema` | Show database/DataFrame schema |
| `/explain` | Explain dbt model lineage |
| `/lineage` | Visualize data dependencies |
| `/lineage` | Visualize data dependencies (ASCII) |
| `/lineage-viz` | Generate Mermaid flowchart for dbt lineage |
| `/run` | Execute dbt models |
| `/dbt-test` | Run dbt tests with formatted results |
## Agents

View File

@@ -0,0 +1,103 @@
# /data-quality - Data Quality Assessment
Comprehensive data quality check for DataFrames with pass/warn/fail scoring.
## Usage
```
/data-quality [data_ref] [--strict]
```
## Workflow
1. **Get data reference**:
- If no data_ref provided, use `list_data` to show available options
- Validate the data_ref exists
2. **Null analysis**:
- Calculate null percentage per column
- **PASS**: < 5% nulls
- **WARN**: 5-20% nulls
- **FAIL**: > 20% nulls
3. **Duplicate detection**:
- Check for fully duplicated rows
- **PASS**: 0% duplicates
- **WARN**: < 1% duplicates
- **FAIL**: >= 1% duplicates
4. **Type consistency**:
- Identify mixed-type columns (object columns with mixed content)
- Flag columns that could be numeric but contain strings
- **PASS**: All columns have consistent types
- **FAIL**: Mixed types detected
5. **Outlier detection** (numeric columns):
- Use IQR method (values beyond 1.5 * IQR)
- Report percentage of outliers per column
- **PASS**: < 1% outliers
- **WARN**: 1-5% outliers
- **FAIL**: > 5% outliers
6. **Generate quality report**:
- Overall quality score (0-100)
- Per-column breakdown
- Recommendations for remediation
## Report Format
```
=== Data Quality Report ===
Dataset: sales_data
Rows: 10,000 | Columns: 15
Overall Score: 82/100 [PASS]
--- Column Analysis ---
| Column | Nulls | Dups | Type | Outliers | Status |
|--------------|-------|------|----------|----------|--------|
| customer_id | 0.0% | - | int64 | 0.2% | PASS |
| email | 2.3% | - | object | - | PASS |
| amount | 15.2% | - | float64 | 3.1% | WARN |
| created_at | 0.0% | - | datetime | - | PASS |
--- Issues Found ---
[WARN] Column 'amount': 15.2% null values (threshold: 5%)
[WARN] Column 'amount': 3.1% outliers detected
[FAIL] 1.2% duplicate rows detected (12 rows)
--- Recommendations ---
1. Investigate null values in 'amount' column
2. Review outliers in 'amount' - may be data entry errors
3. Remove or deduplicate 12 duplicate rows
```
## Options
| Flag | Description |
|------|-------------|
| `--strict` | Use stricter thresholds (WARN at 1% nulls, FAIL at 5%) |
## Examples
```
/data-quality sales_data
/data-quality df_customers --strict
```
## Scoring
| Component | Weight | Scoring |
|-----------|--------|---------|
| Nulls | 30% | 100 - (avg_null_pct * 2) |
| Duplicates | 20% | 100 - (dup_pct * 50) |
| Type consistency | 25% | 100 if clean, 0 if mixed |
| Outliers | 25% | 100 - (avg_outlier_pct * 10) |
Final score: Weighted average, capped at 0-100
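A minimal pandas sketch of the checks and weighted score described above; the thresholds and weights mirror the tables, while clamping each component at 0 and the exact mixed-type test are assumptions:

```python
import pandas as pd

def quality_score(df: pd.DataFrame) -> float:
    """Weighted 0-100 score: nulls 30%, duplicates 20%, type consistency 25%, outliers 25%."""
    avg_null_pct = df.isna().mean().mean() * 100
    dup_pct = df.duplicated().mean() * 100

    # Type consistency: object columns holding more than one Python type count as mixed
    mixed = any(
        df[col].dropna().map(type).nunique() > 1
        for col in df.select_dtypes(include="object").columns
    )

    # Outliers via the IQR method on numeric columns
    outlier_pcts = []
    for col in df.select_dtypes(include="number").columns:
        q1, q3 = df[col].quantile([0.25, 0.75])
        iqr = q3 - q1
        mask = (df[col] < q1 - 1.5 * iqr) | (df[col] > q3 + 1.5 * iqr)
        outlier_pcts.append(mask.mean() * 100)
    avg_outlier_pct = sum(outlier_pcts) / len(outlier_pcts) if outlier_pcts else 0.0

    score = (
        0.30 * max(0, 100 - avg_null_pct * 2)
        + 0.20 * max(0, 100 - dup_pct * 50)
        + 0.25 * (0 if mixed else 100)
        + 0.25 * max(0, 100 - avg_outlier_pct * 10)
    )
    return round(min(100.0, max(0.0, score)), 1)
```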
## Available Tools
Use these MCP tools:
- `describe` - Get statistical summary (for outlier detection)
- `head` - Preview data
- `list_data` - List available DataFrames

View File

@@ -0,0 +1,119 @@
# /dbt-test - Run dbt Tests
Execute dbt tests with formatted pass/fail results.
## Usage
```
/dbt-test [selection] [--warn-only]
```
## Workflow
1. **Pre-validation** (MANDATORY):
- Use `dbt_parse` to validate project first
- If validation fails, show errors and STOP
2. **Execute tests**:
- Use `dbt_test` with provided selection
- Capture all test results
3. **Format results**:
- Group by test type (schema vs. data)
- Show pass/fail status with counts
- Display failure details
## Report Format
```
=== dbt Test Results ===
Project: my_project
Selection: tag:critical
--- Summary ---
Total: 24 tests
PASS: 22 (92%)
FAIL: 1 (4%)
WARN: 1 (4%)
SKIP: 0 (0%)
--- Schema Tests (18) ---
[PASS] unique_dim_customers_customer_id
[PASS] not_null_dim_customers_customer_id
[PASS] not_null_dim_customers_email
[PASS] accepted_values_dim_customers_status
[FAIL] relationships_fct_orders_customer_id
--- Data Tests (6) ---
[PASS] assert_positive_order_amounts
[PASS] assert_valid_dates
[WARN] assert_recent_orders (threshold: 7 days)
--- Failure Details ---
Test: relationships_fct_orders_customer_id
Type: schema (relationships)
Model: fct_orders
Message: 15 records failed referential integrity check
Query: SELECT * FROM fct_orders WHERE customer_id NOT IN (SELECT customer_id FROM dim_customers)
--- Warning Details ---
Test: assert_recent_orders
Type: data
Message: No orders in last 7 days (expected for dev environment)
Severity: warn
```
## Selection Syntax
| Pattern | Meaning |
|---------|---------|
| (none) | Run all tests |
| `model_name` | Tests for specific model |
| `+model_name` | Tests for model and upstream |
| `tag:critical` | Tests with tag |
| `test_type:schema` | Only schema tests |
| `test_type:data` | Only data tests |
## Options
| Flag | Description |
|------|-------------|
| `--warn-only` | Treat failures as warnings (don't fail CI) |
## Examples
```
/dbt-test # Run all tests
/dbt-test dim_customers # Tests for specific model
/dbt-test tag:critical # Run critical tests only
/dbt-test +fct_orders # Test model and its upstream
```
## Test Types
### Schema Tests
Built-in tests defined in `schema.yml`:
- `unique` - No duplicate values
- `not_null` - No null values
- `accepted_values` - Value in allowed list
- `relationships` - Foreign key integrity
### Data Tests
Custom SQL tests in `tests/` directory:
- Return rows that fail the assertion
- Zero rows = pass, any rows = fail
## Exit Codes
| Code | Meaning |
|------|---------|
| 0 | All tests passed |
| 1 | One or more tests failed |
| 2 | dbt error (parse failure, etc.) |
## Available Tools
Use these MCP tools:
- `dbt_parse` - Pre-validation (ALWAYS RUN FIRST)
- `dbt_test` - Execute tests (REQUIRED)
- `dbt_build` - Alternative: run + test together

View File

@@ -0,0 +1,125 @@
# /lineage-viz - Mermaid Lineage Visualization
Generate Mermaid flowchart syntax for dbt model lineage.
## Usage
```
/lineage-viz <model_name> [--direction TB|LR] [--depth N]
```
## Workflow
1. **Get lineage data**:
- Use `dbt_lineage` to fetch model dependencies
- Capture upstream sources and downstream consumers
2. **Build Mermaid graph**:
- Create nodes for each model/source
- Style nodes by materialization type
- Add directional arrows for dependencies
3. **Output**:
- Render Mermaid flowchart syntax
- Include copy-paste ready code block
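A minimal sketch of steps 2-3, assuming `dbt_lineage` yields a materialization per model plus a list of upstream -> downstream edges; the shape map mirrors the Node Styles table below:

```python
SHAPES = {
    "source":      lambda n: f"{n}[({n})]",      # cylinder
    "view":        lambda n: f"{n}[{n}]",        # rectangle
    "table":       lambda n: f"{n}{{{{{n}}}}}",  # hexagon
    "incremental": lambda n: f"{n}{{{{{n}}}}}",
    "ephemeral":   lambda n: f"{n}[/{n}/]",      # parallelogram
}

def lineage_to_mermaid(nodes, edges, direction="LR"):
    """nodes: {name: materialization}, edges: iterable of (upstream, downstream) pairs."""
    lines = [f"flowchart {direction}"]
    for name, mat in nodes.items():
        shape = SHAPES.get(mat, SHAPES["view"])
        lines.append(f"    {shape(name)}")
    for upstream, downstream in edges:
        lines.append(f"    {upstream} --> {downstream}")
    return "\n".join(lines)

print(lineage_to_mermaid(
    {"raw_orders": "source", "stg_orders": "view", "fct_orders": "table"},
    [("raw_orders", "stg_orders"), ("stg_orders", "fct_orders")],
))
```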
## Output Format
```mermaid
flowchart LR
subgraph Sources
raw_customers[(raw_customers)]
raw_orders[(raw_orders)]
end
subgraph Staging
stg_customers[stg_customers]
stg_orders[stg_orders]
end
subgraph Marts
dim_customers{{dim_customers}}
fct_orders{{fct_orders}}
end
raw_customers --> stg_customers
raw_orders --> stg_orders
stg_customers --> dim_customers
stg_orders --> fct_orders
dim_customers --> fct_orders
```
## Node Styles
| Materialization | Mermaid Shape | Example |
|-----------------|---------------|---------|
| source | Cylinder `[( )]` | `raw_data[(raw_data)]` |
| view | Rectangle `[ ]` | `stg_model[stg_model]` |
| table | Hexagon `{{ }}` | `dim_model{{dim_model}}` |
| incremental | Hexagon `{{ }}` (same shape as table) | `fct_model{{fct_model}}` |
| ephemeral | Parallelogram `[/ /]` | `tmp_model[/tmp_model/]` |
## Options
| Flag | Description |
|------|-------------|
| `--direction TB` | Top-to-bottom layout (default: LR = left-to-right) |
| `--depth N` | Limit lineage depth (default: unlimited) |
## Examples
```
/lineage-viz dim_customers
/lineage-viz fct_orders --direction TB
/lineage-viz rpt_revenue --depth 2
```
## Usage Tips
1. **Paste in documentation**: Copy the output directly into README.md or docs
2. **GitHub/GitLab rendering**: Both platforms render Mermaid natively
3. **Mermaid Live Editor**: Paste at https://mermaid.live for interactive editing
## Example Output
For `/lineage-viz fct_orders`:
~~~markdown
```mermaid
flowchart LR
%% Sources
raw_customers[(raw_customers)]
raw_orders[(raw_orders)]
raw_products[(raw_products)]
%% Staging
stg_customers[stg_customers]
stg_orders[stg_orders]
stg_products[stg_products]
%% Marts
dim_customers{{dim_customers}}
dim_products{{dim_products}}
fct_orders{{fct_orders}}
%% Dependencies
raw_customers --> stg_customers
raw_orders --> stg_orders
raw_products --> stg_products
stg_customers --> dim_customers
stg_products --> dim_products
stg_orders --> fct_orders
dim_customers --> fct_orders
dim_products --> fct_orders
%% Highlight target model
style fct_orders fill:#f96,stroke:#333,stroke-width:2px
```
~~~
## Available Tools
Use these MCP tools:
- `dbt_lineage` - Get model dependencies (REQUIRED)
- `dbt_ls` - List dbt resources
- `dbt_docs_generate` - Generate full manifest if needed

View File

@@ -5,6 +5,17 @@
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/startup-check.sh"
}
],
"PostToolUse": [
{
"matcher": "Edit|Write",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/schema-diff-check.sh"
}
]
}
]
}
}

View File

@@ -0,0 +1,138 @@
#!/bin/bash
# data-platform schema diff detection hook
# Warns about potentially breaking schema changes
# This is a PostToolUse hook - non-blocking, warnings only
PREFIX="[data-platform]"
# Check if warnings are enabled (default: true)
if [[ "${DATA_PLATFORM_SCHEMA_WARN:-true}" != "true" ]]; then
exit 0
fi
# Read tool input from stdin (JSON with file_path)
INPUT=$(cat)
# Extract file_path from JSON input
FILE_PATH=$(echo "$INPUT" | grep -o '"file_path"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1 | sed 's/.*"file_path"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/')
# If no file_path found, exit silently
if [ -z "$FILE_PATH" ]; then
exit 0
fi
# Check if file is a schema-related file
is_schema_file() {
local file="$1"
# Check file extension
case "$file" in
*.sql) return 0 ;;
*/migrations/*.py) return 0 ;;
*/migrations/*.sql) return 0 ;;
*/models/*.py) return 0 ;;
*/models/*.sql) return 0 ;;
*schema.prisma) return 0 ;;
*schema.graphql) return 0 ;;
*/dbt/models/*.sql) return 0 ;;
*/dbt/models/*.yml) return 0 ;;
*/alembic/versions/*.py) return 0 ;;
esac
# Check directory patterns
if echo "$file" | grep -qE "(migrations?|schemas?|models)/"; then
return 0
fi
return 1
}
# Exit if not a schema file
if ! is_schema_file "$FILE_PATH"; then
exit 0
fi
# Read the file content (if it exists and is readable)
if [[ ! -f "$FILE_PATH" ]]; then
exit 0
fi
FILE_CONTENT=$(cat "$FILE_PATH" 2>/dev/null || echo "")
if [[ -z "$FILE_CONTENT" ]]; then
exit 0
fi
# Detect breaking changes
BREAKING_CHANGES=()
# Check for DROP COLUMN
if echo "$FILE_CONTENT" | grep -qiE "DROP[[:space:]]+COLUMN"; then
BREAKING_CHANGES+=("DROP COLUMN detected - may break existing queries")
fi
# Check for DROP TABLE
if echo "$FILE_CONTENT" | grep -qiE "DROP[[:space:]]+TABLE"; then
BREAKING_CHANGES+=("DROP TABLE detected - data loss risk")
fi
# Check for DROP INDEX
if echo "$FILE_CONTENT" | grep -qiE "DROP[[:space:]]+INDEX"; then
BREAKING_CHANGES+=("DROP INDEX detected - may impact query performance")
fi
# Check for ALTER TYPE / MODIFY COLUMN type changes
if echo "$FILE_CONTENT" | grep -qiE "ALTER[[:space:]]+.*(TYPE|COLUMN.*TYPE)"; then
BREAKING_CHANGES+=("Column type change detected - may cause data truncation")
fi
if echo "$FILE_CONTENT" | grep -qiE "MODIFY[[:space:]]+COLUMN"; then
BREAKING_CHANGES+=("MODIFY COLUMN detected - verify data compatibility")
fi
# Check for adding NOT NULL to existing column
if echo "$FILE_CONTENT" | grep -qiE "ALTER[[:space:]]+.*SET[[:space:]]+NOT[[:space:]]+NULL"; then
BREAKING_CHANGES+=("Adding NOT NULL constraint - existing NULL values will fail")
fi
if echo "$FILE_CONTENT" | grep -qiE "ADD[[:space:]]+.*NOT[[:space:]]+NULL[^[:space:]]*[[:space:]]+DEFAULT"; then
# Adding NOT NULL with DEFAULT is usually safe - don't warn
:
elif echo "$FILE_CONTENT" | grep -qiE "ADD[[:space:]]+.*NOT[[:space:]]+NULL"; then
BREAKING_CHANGES+=("Adding NOT NULL column without DEFAULT - INSERT may fail")
fi
# Check for RENAME TABLE/COLUMN
if echo "$FILE_CONTENT" | grep -qiE "RENAME[[:space:]]+(TABLE|COLUMN|TO)"; then
BREAKING_CHANGES+=("RENAME detected - update all references")
fi
# Check for removing from Django/SQLAlchemy models (Python files)
if [[ "$FILE_PATH" == *.py ]]; then
if echo "$FILE_CONTENT" | grep -qE "^-[[:space:]]*[a-z_]+[[:space:]]*=.*Field\("; then
BREAKING_CHANGES+=("Model field removal detected in Python ORM")
fi
fi
# Check for Prisma schema changes
if [[ "$FILE_PATH" == *schema.prisma ]]; then
if echo "$FILE_CONTENT" | grep -qE "@relation.*onDelete.*Cascade"; then
BREAKING_CHANGES+=("Cascade delete detected - verify data safety")
fi
fi
# Output warnings if any breaking changes detected
if [[ ${#BREAKING_CHANGES[@]} -gt 0 ]]; then
echo ""
echo "$PREFIX WARNING: Potential breaking schema changes in $(basename "$FILE_PATH")"
echo "$PREFIX ============================================"
for change in "${BREAKING_CHANGES[@]}"; do
echo "$PREFIX - $change"
done
echo "$PREFIX ============================================"
echo "$PREFIX Review before deploying to production"
echo ""
fi
# Always exit 0 - non-blocking
exit 0

View File

@@ -22,6 +22,9 @@ doc-guardian monitors your code changes via hooks:
|---------|-------------|
| `/doc-audit` | Full project scan - reports all drift without changing anything |
| `/doc-sync` | Apply all pending documentation updates in one commit |
| `/changelog-gen` | Generate changelog from conventional commits in Keep-a-Changelog format |
| `/doc-coverage` | Calculate documentation coverage percentage for functions and classes |
| `/stale-docs` | Detect documentation files that are stale relative to their associated code |
## Hooks
@@ -33,6 +36,8 @@ doc-guardian monitors your code changes via hooks:
- **Version Drift**: Python 3.9 in docs but 3.11 in pyproject.toml
- **Missing Docs**: Public functions without docstrings
- **Stale Examples**: CLI examples that no longer work
- **Low Coverage**: Undocumented functions and classes
- **Stale Files**: Documentation that hasn't been updated alongside code changes
## Installation

View File

@@ -11,6 +11,9 @@ This project uses doc-guardian for automatic documentation synchronization.
- Pending updates are queued silently during work
- Run `/doc-sync` to apply all pending documentation updates
- Run `/doc-audit` for a full project documentation review
- Run `/changelog-gen` to generate changelog from conventional commits
- Run `/doc-coverage` to check documentation coverage metrics
- Run `/stale-docs` to find documentation that may be outdated
### Documentation Files Tracked
- README.md (root and subdirectories)

View File

@@ -0,0 +1,109 @@
---
description: Generate changelog from conventional commits in Keep-a-Changelog format
---
# Changelog Generation
Generate a changelog entry from conventional commits.
## Process
1. **Identify Commit Range**
- Default: commits since last tag
- Optional: specify range (e.g., `v1.0.0..HEAD`)
- Detect if this is first release (no previous tags)
2. **Parse Conventional Commits**
Extract from commit messages following the pattern:
```
<type>(<scope>): <description>
[optional body]
[optional footer(s)]
```
**Recognized Types:**
| Type | Changelog Section |
|------|------------------|
| `feat` | Added |
| `fix` | Fixed |
| `docs` | Documentation |
| `perf` | Performance |
| `refactor` | Changed |
| `style` | Changed |
| `test` | Testing |
| `build` | Build |
| `ci` | CI/CD |
| `chore` | Maintenance |
| `BREAKING CHANGE` | Breaking Changes |
3. **Group by Type**
Organize commits into Keep-a-Changelog sections:
- Breaking Changes (if any `!` suffix or `BREAKING CHANGE` footer)
- Added (feat)
- Changed (refactor, style, perf)
- Deprecated
- Removed
- Fixed (fix)
- Security
4. **Format Entries**
For each commit:
- Extract scope (if present) as prefix
- Use description as entry text
- Link to commit hash if repository URL available
- Include PR/issue references from footer
5. **Output Format**
```markdown
## [Unreleased]
### Breaking Changes
- **scope**: Description of breaking change
### Added
- **scope**: New feature description
- Another feature without scope
### Changed
- **scope**: Refactoring description
### Fixed
- **scope**: Bug fix description
### Documentation
- Updated README with new examples
```
## Options
| Flag | Description | Default |
|------|-------------|---------|
| `--from <tag>` | Start from specific tag | Latest tag |
| `--to <ref>` | End at specific ref | HEAD |
| `--version <ver>` | Set version header | [Unreleased] |
| `--include-merge` | Include merge commits | false |
| `--group-by-scope` | Group by scope within sections | false |
## Integration
The generated output is designed to be copied directly into CHANGELOG.md:
- Follows [Keep a Changelog](https://keepachangelog.com) format
- Compatible with semantic versioning
- Excludes non-user-facing commits (chore, ci, test by default)
## Example Usage
```
/changelog-gen
/changelog-gen --from v1.0.0 --version 1.1.0
/changelog-gen --include-merge --group-by-scope
```
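For reference, a rough shell equivalent of the parsing and grouping steps might look like the following. This is a minimal sketch, not the command's actual implementation: it assumes `git` is on PATH and buckets commits by raw type (mapping each type to its changelog section is then a lookup in the table above).
```bash
#!/usr/bin/env bash
# Minimal sketch: list commits since the last tag and bucket them by conventional type.
LAST_TAG=$(git describe --tags --abbrev=0 2>/dev/null)
RANGE=${LAST_TAG:+$LAST_TAG..HEAD}   # empty when no tags exist (first release: scan all commits)
for TYPE in feat fix docs perf refactor style test build ci chore; do
  ENTRIES=$(git log --no-merges --pretty=format:'%s' $RANGE \
    | grep -E "^${TYPE}(\([^)]+\))?!?: " \
    | sed -E "s/^${TYPE}(\([^)]+\))?(!)?: /- /")
  if [ -n "$ENTRIES" ]; then
    printf '### %s\n%s\n\n' "$TYPE" "$ENTRIES"
  fi
done
```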
## Non-Conventional Commits
Commits not following conventional format are:
- Listed under "Other" section
- Flagged for manual categorization
- Skipped if `--strict` flag is used

View File

@@ -0,0 +1,128 @@
---
description: Calculate documentation coverage percentage for functions and classes
---
# Documentation Coverage
Analyze codebase to calculate documentation coverage metrics.
## Process
1. **Scan Source Files**
Identify all documentable items:
**Python:**
- Functions (def)
- Classes
- Methods
- Module-level docstrings
**JavaScript/TypeScript:**
- Functions (function, arrow functions)
- Classes
- Methods
- JSDoc comments
**Other Languages:**
- Adapt patterns for Go, Rust, etc.
2. **Determine Documentation Status**
For each item, check:
- Has docstring/JSDoc comment
- Docstring is non-empty and meaningful (not just a placeholder such as `pass` or `TODO`)
- Parameters are documented (for detailed mode)
- Return type is documented (for detailed mode)
3. **Calculate Metrics**
```
Coverage = (Documented Items / Total Items) * 100
```
**Levels:**
- Basic: Item has any docstring
- Standard: Docstring describes purpose
- Complete: All parameters and return documented
4. **Output Format**
```
## Documentation Coverage Report
### Summary
- Total documentable items: 156
- Documented: 142
- Coverage: 91.0%
### By Type
| Type | Total | Documented | Coverage |
|------|-------|------------|----------|
| Functions | 89 | 85 | 95.5% |
| Classes | 23 | 21 | 91.3% |
| Methods | 44 | 36 | 81.8% |
### By Directory
| Path | Total | Documented | Coverage |
|------|-------|------------|----------|
| src/api/ | 34 | 32 | 94.1% |
| src/utils/ | 28 | 28 | 100.0% |
| src/models/ | 45 | 38 | 84.4% |
| tests/ | 49 | 44 | 89.8% |
### Undocumented Items
- [ ] src/api/handlers.py:45 `create_order()`
- [ ] src/api/handlers.py:78 `update_order()`
- [ ] src/models/user.py:23 `UserModel.validate()`
```
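To make the metric concrete, here is a rough, hypothetical sketch for Python sources only. It treats a `def`/`class` as documented when the very next line opens a docstring, so decorators and multi-line signatures are miscounted; an AST-based parser would be more accurate. The `src/` path is illustrative.
```bash
#!/usr/bin/env bash
# Rough sketch: count def/class lines whose next line opens a docstring.
TOTAL=0
DOCUMENTED=0
while IFS= read -r FILE; do
  while IFS=: read -r LINE _; do
    TOTAL=$((TOTAL + 1))
    NEXT=$(sed -n "$((LINE + 1))p" "$FILE")
    case "$NEXT" in
      *'"""'* | *"'''"*) DOCUMENTED=$((DOCUMENTED + 1)) ;;
    esac
  done < <(grep -nE '^[[:space:]]*(def|class)[[:space:]]' "$FILE")
done < <(find src -name '*.py' -not -name 'test_*')
if [ "$TOTAL" -gt 0 ]; then
  awk -v d="$DOCUMENTED" -v t="$TOTAL" 'BEGIN { printf "Coverage: %.1f%% (%d/%d)\n", 100 * d / t, d, t }'
fi
```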
## Options
| Flag | Description | Default |
|------|-------------|---------|
| `--path <dir>` | Scan specific directory | Project root |
| `--exclude <glob>` | Exclude files matching pattern | `**/test_*,**/*_test.*` |
| `--include-private` | Include private members (_prefixed) | false |
| `--include-tests` | Include test files | false |
| `--min-coverage <pct>` | Fail if below threshold | none |
| `--format <fmt>` | Output format (table, json, markdown) | table |
| `--detailed` | Check parameter/return docs | false |
## Thresholds
Common coverage targets:
| Level | Coverage | Description |
|-------|----------|-------------|
| Minimal | 60% | Basic documentation exists |
| Good | 80% | Most public APIs documented |
| Excellent | 95% | Comprehensive documentation |
## CI Integration
Use `--min-coverage` to enforce standards:
```bash
# Fail if coverage drops below 80%
claude /doc-coverage --min-coverage 80
```
Exit codes:
- 0: Coverage meets threshold (or no threshold set)
- 1: Coverage below threshold
## Example Usage
```
/doc-coverage
/doc-coverage --path src/
/doc-coverage --min-coverage 85 --exclude "**/generated/**"
/doc-coverage --detailed --include-private
```
## Language Detection
File extensions mapped to documentation patterns:
| Extension | Language | Doc Format |
|-----------|----------|------------|
| .py | Python | Docstrings (""") |
| .js, .ts | JavaScript/TypeScript | JSDoc (/** */) |
| .go | Go | // comments above |
| .rs | Rust | /// doc comments |
| .rb | Ruby | # comments, YARD |
| .java | Java | Javadoc (/** */) |

View File

@@ -0,0 +1,143 @@
---
description: Detect documentation files that are stale relative to their associated code
---
# Stale Documentation Detection
Identify documentation files that may be outdated based on commit history.
## Process
1. **Map Documentation to Code**
Build relationships between docs and code:
| Doc File | Related Code |
|----------|--------------|
| README.md | All files in same directory |
| API.md | src/api/**/* |
| CLAUDE.md | Configuration files, scripts |
| docs/module.md | src/module/**/* |
| Component.md | Component.tsx, Component.css |
2. **Analyze Commit History**
For each doc file:
- Find last commit that modified the doc
- Find last commit that modified related code
- Count commits to code since doc was updated
3. **Calculate Staleness**
```
Commits Behind = Code Commits Since Doc Update
Days Behind = Days Since Doc Update - Days Since Code Update
```
4. **Apply Threshold**
Default: Flag if documentation is 10+ commits behind related code
**Staleness Levels:**
| Commits Behind | Level | Action |
|----------------|-------|--------|
| 0-5 | Fresh | No action needed |
| 6-10 | Aging | Review recommended |
| 11-20 | Stale | Update needed |
| 21+ | Critical | Immediate attention |
5. **Output Format**
```
## Stale Documentation Report
### Critical (21+ commits behind)
| File | Last Updated | Commits Behind | Related Code |
|------|--------------|----------------|--------------|
| docs/api.md | 2024-01-15 | 34 | src/api/**/* |
### Stale (11-20 commits behind)
| File | Last Updated | Commits Behind | Related Code |
|------|--------------|----------------|--------------|
| README.md | 2024-02-20 | 15 | package.json, src/index.ts |
### Aging (6-10 commits behind)
| File | Last Updated | Commits Behind | Related Code |
|------|--------------|----------------|--------------|
| CONTRIBUTING.md | 2024-03-01 | 8 | .github/*, scripts/* |
### Summary
- Critical: 1 file
- Stale: 1 file
- Aging: 1 file
- Fresh: 12 files
- Total documentation files: 15
```
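The commits-behind number in step 3 can be reproduced with plain git. A minimal sketch for a single mapping follows; the paths are illustrative, and the command derives mappings automatically as described under Relationship Detection below.
```bash
#!/usr/bin/env bash
# Sketch: how many commits have touched the related code since the doc last changed?
DOC="docs/api.md"     # illustrative paths
CODE="src/api"
DOC_LAST=$(git log -1 --format=%H -- "$DOC")
if [ -z "$DOC_LAST" ]; then
  echo "$DOC has no commit history"
  exit 0
fi
COMMITS_BEHIND=$(git rev-list --count "${DOC_LAST}..HEAD" -- "$CODE")
echo "$DOC: $COMMITS_BEHIND code commits since last doc update"
```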
## Options
| Flag | Description | Default |
|------|-------------|---------|
| `--threshold <n>` | Commits behind to flag as stale | 10 |
| `--days` | Use days instead of commits | false |
| `--path <dir>` | Scan specific directory | Project root |
| `--doc-pattern <glob>` | Pattern for doc files | `**/*.md,**/README*` |
| `--ignore <glob>` | Ignore specific docs | `CHANGELOG.md,LICENSE` |
| `--show-fresh` | Include fresh docs in output | false |
| `--format <fmt>` | Output format (table, json) | table |
## Relationship Detection
How docs are mapped to code:
1. **Same Directory**
- `src/api/README.md` relates to `src/api/**/*`
2. **Name Matching**
- `docs/auth.md` relates to `**/auth.*`, `**/auth/**`
3. **Explicit Links**
- Parse `[link](path)` in docs to find related files
4. **Import Analysis**
- Track which modules are referenced in code examples
## Configuration
Create `.doc-guardian.yml` to customize mappings:
```yaml
stale-docs:
threshold: 10
mappings:
- doc: docs/deployment.md
code:
- Dockerfile
- docker-compose.yml
- .github/workflows/deploy.yml
- doc: ARCHITECTURE.md
code:
- src/**/*
ignore:
- CHANGELOG.md
- LICENSE
- vendor/**
```
## Example Usage
```
/stale-docs
/stale-docs --threshold 5
/stale-docs --days --threshold 30
/stale-docs --path docs/ --show-fresh
```
## Integration with doc-audit
`/stale-docs` focuses specifically on commit-based staleness, while `/doc-audit` checks content accuracy. Use both for comprehensive documentation health:
```
/doc-audit # Check for broken references and content drift
/stale-docs # Check for files that may need review
```
## Exit Codes
- 0: No critical or stale documentation
- 1: Stale documentation found (useful for CI)
- 2: Critical documentation found

View File

@@ -119,6 +119,10 @@ The git-assistant agent helps resolve merge conflicts with analysis and recommen
→ Status: Clean, up-to-date
```
## Documentation
- [Branching Strategy Guide](docs/BRANCHING-STRATEGY.md) - Detailed documentation of the `development -> staging -> main` promotion flow
## Integration
For CLAUDE.md integration instructions, see `claude-md-integration.md`.

View File

@@ -0,0 +1,541 @@
# Branching Strategy
## Overview
This document defines the branching strategy used by git-flow to manage code changes across environments. The strategy ensures:
- **Stability**: Production code is always release-ready
- **Isolation**: Features are developed in isolation without affecting others
- **Traceability**: Clear history of what changed and when
- **Collaboration**: Multiple developers can work simultaneously without conflicts
The strategy follows a hierarchical promotion model: feature branches merge into development, which promotes to staging for testing, and finally to main for production release.
## Branch Types
### Main Branches
| Branch | Purpose | Stability | Direct Commits |
|--------|---------|-----------|----------------|
| `main` | Production releases | Highest | Never |
| `staging` | Pre-production testing | High | Never |
| `development` | Active development | Medium | Never |
### Feature Branches
Feature branches are short-lived branches created from `development` for implementing specific changes.
| Prefix | Purpose | Example |
|--------|---------|---------|
| `feat/` | New features | `feat/user-authentication` |
| `fix/` | Bug fixes | `fix/login-timeout-error` |
| `docs/` | Documentation only | `docs/api-reference` |
| `refactor/` | Code restructuring | `refactor/auth-module` |
| `test/` | Test additions/fixes | `test/auth-coverage` |
| `chore/` | Maintenance tasks | `chore/update-dependencies` |
### Special Branches
| Branch | Purpose | Created From | Merges Into |
|--------|---------|--------------|-------------|
| `hotfix/*` | Emergency production fixes | `main` | `main` AND `development` |
| `release/*` | Release preparation | `development` | `staging` then `main` |
## Branch Hierarchy
```
PRODUCTION
┌───────────────[ main ]───────────────┐
│ (stable releases) │
│ ▲ │
│ │ PR │
│ │ │
│ ┌───────┴───────┐ │
│ │ staging │ │
│ │ (testing) │ │
│ └───────▲───────┘ │
│ │ PR │
│ │ │
│ ┌─────────┴─────────┐ │
│ │ development │ │
│ │ (integration) │ │
│ └─────────▲─────────┘ │
│ ╲ │
│ PR │PR ╲PR │
│ ╲ │
│ ┌─────┴──┐ ┌──┴───┐ ┌┴─────┐ │
│ │ feat/* │ │ fix/*│ │docs/*│ │
│ └────────┘ └──────┘ └──────┘ │
│ FEATURE BRANCHES │
└───────────────────────────────────────┘
```
## Workflow
### Feature Development
The standard workflow for implementing a new feature or fix:
```mermaid
gitGraph
commit id: "initial"
branch development
checkout development
commit id: "dev-base"
branch feat/new-feature
checkout feat/new-feature
commit id: "implement"
commit id: "tests"
commit id: "polish"
checkout development
merge feat/new-feature id: "PR merged"
commit id: "other-work"
```
**Steps:**
1. **Create branch from development**
```bash
git checkout development
git pull origin development
git checkout -b feat/add-user-auth
```
Or use git-flow command:
```
/branch-start add user authentication
```
2. **Implement changes**
- Make commits following conventional commit format
- Keep commits atomic and focused
- Push regularly to remote
3. **Create Pull Request**
- Target: `development`
- Include description of changes
- Link related issues
4. **Review and merge**
- Address review feedback
- Squash or rebase as needed
- Merge when approved
5. **Cleanup**
- Delete feature branch after merge
```
/branch-cleanup
```
### Release Promotion
Promoting code from development to production:
```mermaid
gitGraph
commit id: "v1.0.0" tag: "v1.0.0"
branch development
checkout development
commit id: "feat-1"
commit id: "feat-2"
commit id: "fix-1"
branch staging
checkout staging
commit id: "staging-test"
checkout main
merge staging id: "v1.1.0" tag: "v1.1.0"
checkout development
merge main id: "sync"
```
**Steps:**
1. **Prepare release**
- Ensure development is stable
- Update version numbers
- Update CHANGELOG.md
2. **Promote to staging**
```bash
git checkout staging
git merge development
git push origin staging
```
3. **Test in staging**
- Run integration tests
- Perform QA validation
- Fix any issues (merge fixes to development first, then re-promote)
4. **Promote to main**
```bash
git checkout main
git merge staging
git tag -a v1.1.0 -m "Release v1.1.0"
git push origin main --tags
```
5. **Sync development**
```bash
git checkout development
git merge main
git push origin development
```
### Hotfix Handling
Emergency fixes for production issues:
```mermaid
gitGraph
commit id: "v1.0.0" tag: "v1.0.0"
branch development
checkout development
commit id: "ongoing-work"
checkout main
branch hotfix/critical-bug
checkout hotfix/critical-bug
commit id: "fix"
checkout main
merge hotfix/critical-bug id: "v1.0.1" tag: "v1.0.1"
checkout development
merge main id: "sync-fix"
```
**Steps:**
1. **Create hotfix branch from main**
```bash
git checkout main
git pull origin main
git checkout -b hotfix/critical-security-fix
```
2. **Implement fix**
- Minimal, focused changes only
- Include tests for the fix
- Follow conventional commit format
3. **Merge to main**
```bash
git checkout main
git merge hotfix/critical-security-fix
git tag -a v1.0.1 -m "Hotfix: critical security fix"
git push origin main --tags
```
4. **Merge to development** (critical step)
```bash
git checkout development
git merge main
git push origin development
```
5. **Delete hotfix branch**
```bash
git branch -d hotfix/critical-security-fix
git push origin --delete hotfix/critical-security-fix
```
## PR Requirements
### To development
| Requirement | Description |
|-------------|-------------|
| Passing tests | All CI tests must pass |
| Conventional commit | Message follows `type(scope): description` format |
| No conflicts | Branch must be rebased on latest development |
| Code review | At least one approval (recommended) |
### To staging
| Requirement | Description |
|-------------|-------------|
| All checks pass | CI, linting, tests, security scans |
| From development | Only PRs from development branch |
| Clean history | Squashed or rebased commits |
| Release notes | CHANGELOG updated with version |
### To main (Protected)
| Requirement | Description |
|-------------|-------------|
| Source restriction | PRs only from `staging` or `development` |
| All checks pass | Complete CI pipeline success |
| Approvals | Minimum 1-2 reviewer approvals |
| No direct push | Force push disabled |
| Version tag | Must include version tag on merge |
## Branch Flow Diagram
### Complete Lifecycle
```mermaid
flowchart TD
subgraph Feature["Feature Development"]
A[Create feature branch] --> B[Implement changes]
B --> C[Push commits]
C --> D[Create PR to development]
D --> E{Review passed?}
E -->|No| B
E -->|Yes| F[Merge to development]
F --> G[Delete feature branch]
end
subgraph Release["Release Cycle"]
H[development stable] --> I[Create PR to staging]
I --> J[Test in staging]
J --> K{Tests passed?}
K -->|No| L[Fix in development]
L --> H
K -->|Yes| M[Create PR to main]
M --> N[Tag release]
N --> O[Sync development]
end
subgraph Hotfix["Hotfix Process"]
P[Critical bug in prod] --> Q[Create hotfix from main]
Q --> R[Implement fix]
R --> S[Merge to main + tag]
S --> T[Merge to development]
end
G --> H
O --> Feature
T --> Feature
```
### Release Cycle Timeline
```mermaid
gantt
title Release Cycle
dateFormat YYYY-MM-DD
section Feature Development
Feature A :a1, 2024-01-01, 5d
Feature B :a2, 2024-01-03, 4d
Bug Fix :a3, 2024-01-06, 2d
section Integration
Merge to development:b1, after a1, 1d
Integration testing :b2, after b1, 2d
section Release
Promote to staging :c1, after b2, 1d
QA testing :c2, after c1, 3d
Release to main :milestone, after c2, 0d
```
## Examples
### Example 1: Adding a New Feature
**Scenario:** Add user password reset functionality
```bash
# 1. Start from development
git checkout development
git pull origin development
# 2. Create feature branch
git checkout -b feat/password-reset
# 3. Make changes and commit
git add src/auth/password-reset.ts
git commit -m "feat(auth): add password reset request handler"
git add src/email/templates/reset-email.html
git commit -m "feat(email): add password reset email template"
git add tests/auth/password-reset.test.ts
git commit -m "test(auth): add password reset tests"
# 4. Push and create PR
git push -u origin feat/password-reset
# Create PR targeting development
# 5. After merge, cleanup
git checkout development
git pull origin development
git branch -d feat/password-reset
```
### Example 2: Bug Fix
**Scenario:** Fix login timeout issue
```bash
# 1. Create fix branch
git checkout development
git pull origin development
git checkout -b fix/login-timeout
# 2. Fix and commit
git add src/auth/session.ts
git commit -m "fix(auth): increase session timeout to 30 minutes
The default 5-minute timeout was causing frequent re-authentication.
Extended to 30 minutes based on user feedback.
Closes #456"
# 3. Push and create PR
git push -u origin fix/login-timeout
```
### Example 3: Documentation Update
**Scenario:** Update API documentation
```bash
# 1. Create docs branch
git checkout development
git checkout -b docs/api-authentication
# 2. Update docs
git add docs/api/authentication.md
git commit -m "docs(api): document authentication endpoints
Add detailed documentation for:
- POST /auth/login
- POST /auth/logout
- POST /auth/refresh
- GET /auth/me"
# 3. Push and create PR
git push -u origin docs/api-authentication
```
### Example 4: Emergency Hotfix
**Scenario:** Critical security vulnerability in production
```bash
# 1. Create hotfix from main
git checkout main
git pull origin main
git checkout -b hotfix/sql-injection
# 2. Fix the issue
git add src/db/queries.ts
git commit -m "fix(security): sanitize SQL query parameters
CRITICAL: Addresses SQL injection vulnerability in user search.
All user inputs are now properly parameterized.
CVE: CVE-2024-XXXXX"
# 3. Merge to main with tag
git checkout main
git merge hotfix/sql-injection
git tag -a v2.1.1 -m "Hotfix: SQL injection vulnerability"
git push origin main --tags
# 4. Sync to development (IMPORTANT!)
git checkout development
git merge main
git push origin development
# 5. Cleanup
git branch -d hotfix/sql-injection
```
### Example 5: Release Promotion
**Scenario:** Release version 2.2.0
```bash
# 1. Ensure development is ready
git checkout development
git pull origin development
# Run full test suite
npm test
# 2. Update version and changelog
# Edit package.json, CHANGELOG.md
git add package.json CHANGELOG.md
git commit -m "chore(release): prepare v2.2.0"
# 3. Promote to staging
git checkout staging
git merge development
git push origin staging
# 4. After QA approval, promote to main
git checkout main
git merge staging
git tag -a v2.2.0 -m "Release v2.2.0"
git push origin main --tags
# 5. Sync development
git checkout development
git merge main
git push origin development
```
## Configuration
### Environment Variables
Configure the branching strategy via environment variables:
```bash
# Default base branch for new features
GIT_DEFAULT_BASE=development
# Protected branches (comma-separated)
GIT_PROTECTED_BRANCHES=main,master,development,staging,production
# Workflow style
GIT_WORKFLOW_STYLE=feature-branch
# Auto-delete merged branches
GIT_AUTO_DELETE_MERGED=true
```
### Per-Project Configuration
Create `.git-flow.json` in project root:
```json
{
"defaultBase": "development",
"protectedBranches": ["main", "staging", "development"],
"branchPrefixes": ["feat", "fix", "docs", "refactor", "test", "chore"],
"requirePR": {
"main": true,
"staging": true,
"development": false
},
"squashOnMerge": {
"development": true,
"staging": false,
"main": false
}
}
```
## Best Practices
### Do
- Keep feature branches short-lived (< 1 week)
- Rebase feature branches on development regularly
- Write descriptive commit messages
- Delete branches after merging
- Tag all releases on main
- Always sync development after hotfixes
### Avoid
- Long-lived feature branches
- Direct commits to protected branches
- Force pushing to shared branches
- Merging untested code to staging
- Skipping the development sync after hotfixes
## Related Documentation
- [Branching Strategies Skill](/plugins/git-flow/skills/workflow-patterns/branching-strategies.md)
- [git-flow README](/plugins/git-flow/README.md)
- [Conventional Commits](https://www.conventionalcommits.org/)

View File

@@ -0,0 +1,102 @@
#!/bin/bash
# git-flow branch name validation hook
# Validates branch names follow the convention: <type>/<description>
# Command hook - guaranteed predictable behavior
# Read tool input from stdin (JSON format)
INPUT=$(cat)
# Extract command from JSON input
# The Bash tool sends {"command": "..."} format
COMMAND=$(echo "$INPUT" | grep -o '"command"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1 | sed 's/.*"command"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/')
# If no command found, exit silently (allow)
if [ -z "$COMMAND" ]; then
exit 0
fi
# Check if this is a branch creation command
# Patterns: git checkout -b, git branch (without -d/-D), git switch -c/-C
IS_BRANCH_CREATE=false
BRANCH_NAME=""
# git checkout -b <branch>
if echo "$COMMAND" | grep -qE 'git\s+checkout\s+(-b|--branch)\s+'; then
IS_BRANCH_CREATE=true
BRANCH_NAME=$(echo "$COMMAND" | sed -n 's/.*git\s\+checkout\s\+\(-b\|--branch\)\s\+\([^ ]*\).*/\2/p')
fi
# git switch -c/-C <branch>
if echo "$COMMAND" | grep -qE 'git\s+switch\s+(-c|-C|--create|--force-create)\s+'; then
IS_BRANCH_CREATE=true
BRANCH_NAME=$(echo "$COMMAND" | sed -n 's/.*git\s\+switch\s\+\(-c\|-C\|--create\|--force-create\)\s\+\([^ ]*\).*/\2/p')
fi
# git branch <name> (without -d/-D/-m/-M which are delete/rename)
if echo "$COMMAND" | grep -qE 'git\s+branch\s+[^-]' && ! echo "$COMMAND" | grep -qE 'git\s+branch\s+(-d|-D|-m|-M|--delete|--move|--list|--show-current)'; then
IS_BRANCH_CREATE=true
BRANCH_NAME=$(echo "$COMMAND" | sed -n 's/.*git\s\+branch\s\+\([^ -][^ ]*\).*/\1/p')
fi
# If not a branch creation command, exit silently (allow)
if [ "$IS_BRANCH_CREATE" = false ]; then
exit 0
fi
# If we couldn't extract the branch name, exit silently (allow)
if [ -z "$BRANCH_NAME" ]; then
exit 0
fi
# Remove any quotes from branch name
BRANCH_NAME=$(echo "$BRANCH_NAME" | tr -d '"' | tr -d "'")
# Skip validation for special branches
case "$BRANCH_NAME" in
main|master|develop|development|staging|release|hotfix)
exit 0
;;
esac
# Allowed branch types
VALID_TYPES="feat|fix|chore|docs|refactor|test|perf|debug"
# Validate branch name format: <type>/<description>
# Description: lowercase letters, numbers, hyphens only, max 50 chars total
if ! echo "$BRANCH_NAME" | grep -qE "^($VALID_TYPES)/[a-z0-9][a-z0-9-]*$"; then
echo ""
echo "[git-flow] Branch name validation failed"
echo ""
echo "Branch: $BRANCH_NAME"
echo ""
echo "Expected format: <type>/<description>"
echo ""
echo "Valid types: feat, fix, chore, docs, refactor, test, perf, debug"
echo ""
echo "Description rules:"
echo " - Lowercase letters, numbers, and hyphens only"
echo " - Must start with letter or number"
echo " - No spaces or special characters"
echo ""
echo "Examples:"
echo " feat/add-user-auth"
echo " fix/login-timeout"
echo " chore/update-deps"
echo " docs/api-reference"
echo ""
exit 1
fi
# Check total length (max 50 chars)
if [ ${#BRANCH_NAME} -gt 50 ]; then
echo ""
echo "[git-flow] Branch name too long"
echo ""
echo "Branch: $BRANCH_NAME (${#BRANCH_NAME} chars)"
echo "Maximum: 50 characters"
echo ""
exit 1
fi
# Valid branch name
exit 0

View File

@@ -0,0 +1,74 @@
#!/bin/bash
# git-flow commit message validation hook
# Validates git commit messages follow conventional commit format
# PreToolUse hook for Bash commands - type: command
# Read tool input from stdin
INPUT=$(cat)
# Use Python to properly parse JSON and extract the command
COMMAND=$(echo "$INPUT" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('command',''))" 2>/dev/null)
# If no command or python failed, allow through
if [ -z "$COMMAND" ]; then
exit 0
fi
# Check if it is a git commit command with -m flag
if ! echo "$COMMAND" | grep -qE 'git\s+commit.*-m'; then
# Not a git commit with -m, allow through
exit 0
fi
# Extract commit message - handle various quoting styles
# Try double quotes first
COMMIT_MSG=$(echo "$COMMAND" | sed -n 's/.*-m[[:space:]]*"\([^"]*\)".*/\1/p')
# If empty, try single quotes
if [ -z "$COMMIT_MSG" ]; then
COMMIT_MSG=$(echo "$COMMAND" | sed -n "s/.*-m[[:space:]]*'\\([^']*\\)'.*/\\1/p")
fi
# If still empty, try HEREDOC pattern
if [ -z "$COMMIT_MSG" ]; then
if echo "$COMMAND" | grep -qE -- '-m[[:space:]]+"\$\(cat <<'; then
# HEREDOC pattern - too complex to parse, allow through
exit 0
fi
fi
# If no message extracted, allow through
if [ -z "$COMMIT_MSG" ]; then
exit 0
fi
# Validate conventional commit format
# Format: <type>(<scope>): <description>
# or: <type>: <description>
# Valid types: feat, fix, docs, style, refactor, perf, test, chore, build, ci
VALID_TYPES="feat|fix|docs|style|refactor|perf|test|chore|build|ci"
# Check if message matches conventional commit format
if echo "$COMMIT_MSG" | grep -qE "^($VALID_TYPES)(\([a-zA-Z0-9_-]+\))?:[[:space:]]+.+"; then
# Valid format
exit 0
fi
# Invalid format - output warning
echo "[git-flow] WARNING: Commit message does not follow conventional commit format"
echo ""
echo "Expected format: <type>(<scope>): <description>"
echo " or: <type>: <description>"
echo ""
echo "Valid types: feat, fix, docs, style, refactor, perf, test, chore, build, ci"
echo ""
echo "Examples:"
echo " feat(auth): add password reset functionality"
echo " fix: resolve login timeout issue"
echo " docs(readme): update installation instructions"
echo ""
echo "Your message: $COMMIT_MSG"
echo ""
echo "To proceed anyway, use /commit command which auto-generates valid messages."
# Exit with non-zero to block
exit 1

View File

@@ -0,0 +1,19 @@
{
"hooks": {
"PreToolUse": [
{
"matcher": "Bash",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/branch-check.sh"
},
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/commit-msg-check.sh"
}
]
}
]
}
}

View File

@@ -1,12 +1,8 @@
{
"mcpServers": {
"gitea": {
"command": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/gitea/.venv/bin/python",
"args": ["-m", "mcp_server.server"],
"cwd": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/gitea",
"env": {
"PYTHONPATH": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/gitea"
}
"command": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/gitea/run.sh",
"args": []
}
}
}

View File

@@ -13,6 +13,7 @@ pr-review conducts comprehensive code reviews using specialized agents for secur
| `/pr-review <pr#>` | Full multi-agent review |
| `/pr-summary <pr#>` | Quick summary without full review |
| `/pr-findings <pr#>` | Filter findings by category/confidence |
| `/pr-diff <pr#>` | View diff with inline comment annotations |
| `/initial-setup` | Full interactive setup wizard |
| `/project-init` | Quick project setup (system already configured) |
| `/project-sync` | Sync configuration with current git remote |
@@ -51,14 +52,38 @@ Requires Gitea MCP server configuration.
## Configuration
Environment variables can be set in your project's `.env` file or shell environment.
| Variable | Default | Description |
|----------|---------|-------------|
| `PR_REVIEW_CONFIDENCE_THRESHOLD` | `0.7` | Minimum confidence score (0.0-1.0) for reporting findings. Findings below this threshold are filtered out to reduce noise. |
| `PR_REVIEW_AUTO_SUBMIT` | `false` | Automatically submit review to Gitea without confirmation prompt |
### Example Configuration
```bash
# Minimum confidence to report (default: 0.5)
PR_REVIEW_CONFIDENCE_THRESHOLD=0.5
# Project .env file
# Only show high-confidence findings (MEDIUM and HIGH)
PR_REVIEW_CONFIDENCE_THRESHOLD=0.7
# Auto-submit review to Gitea (default: false)
PR_REVIEW_AUTO_SUBMIT=false
```
### Confidence Threshold Details
The confidence threshold filters which findings appear in review output:
| Threshold | Effect |
|-----------|--------|
| `0.9` | Only definite issues (HIGH confidence) |
| `0.7` | Likely issues and above (MEDIUM+HIGH) - **recommended** |
| `0.5` | Include possible concerns (LOW+MEDIUM+HIGH) |
| `0.3` | Include speculative findings |
Lower thresholds show more findings but may include false positives. Higher thresholds reduce noise but may miss some valid concerns.
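As an illustration of the filtering behaviour only, the threshold acts like a simple cut-off. The `findings.json` file and its schema below are hypothetical, not part of the plugin.
```bash
# Keep only findings at or above the configured threshold (requires jq).
THRESHOLD="${PR_REVIEW_CONFIDENCE_THRESHOLD:-0.7}"
jq --argjson t "$THRESHOLD" '[ .[] | select(.confidence >= $t) ]' findings.json
```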
## Usage Examples
### Full Review

View File

@@ -120,10 +120,13 @@ Focus on findings that:
### Respect Confidence Thresholds
Never report findings below 0.5 confidence. Be transparent about uncertainty:
- 0.9+ → "This is definitely an issue"
- 0.7-0.89 → "This is likely an issue"
- 0.5-0.69 → "This might be an issue"
Filter findings based on `PR_REVIEW_CONFIDENCE_THRESHOLD` (default: 0.7). Be transparent about uncertainty:
- 0.9+ → "This is definitely an issue" (HIGH)
- 0.7-0.89 → "This is likely an issue" (MEDIUM)
- 0.5-0.69 → "This might be an issue" (LOW)
- < threshold → Filtered from output
With the default threshold of 0.7, only MEDIUM and HIGH confidence findings are reported.
### Avoid Noise

View File

@@ -15,6 +15,7 @@ This project uses the pr-review plugin for automated code review.
| `/pr-review <pr#>` | Full multi-agent review |
| `/pr-summary <pr#>` | Quick change summary |
| `/pr-findings <pr#>` | Filter review findings |
| `/pr-diff <pr#>` | View diff with inline comments |
### Review Categories
@@ -26,11 +27,16 @@ Reviews analyze:
### Confidence Threshold
Findings below 0.5 confidence are suppressed.
Configure via `PR_REVIEW_CONFIDENCE_THRESHOLD` (default: 0.7).
- HIGH (0.9+): Definite issue
- MEDIUM (0.7-0.89): Likely issue
- LOW (0.5-0.69): Possible concern
| Range | Label | Action |
|-------|-------|--------|
| 0.9 - 1.0 | HIGH | Must address |
| 0.7 - 0.89 | MEDIUM | Should address |
| 0.5 - 0.69 | LOW | Consider addressing |
| < threshold | (filtered) | Not reported |
With default threshold of 0.7, only MEDIUM and HIGH findings are shown.
### Verdict Rules

View File

@@ -0,0 +1,154 @@
# /pr-diff - Annotated PR Diff Viewer
## Purpose
Display the PR diff with inline annotations from review comments, making it easy to see what feedback has been given alongside the code changes.
## Usage
```
/pr-diff <pr-number> [--repo owner/repo] [--context <lines>]
```
### Options
```
--repo <owner/repo> Override repository (default: from .env)
--context <n> Lines of context around changes (default: 3)
--no-comments Show diff without comment annotations
--file <pattern> Filter to specific files (glob pattern)
```
## Behavior
### Step 1: Fetch PR Data
Using Gitea MCP tools:
1. `get_pr_diff` - Unified diff of all changes
2. `get_pr_comments` - All review comments on the PR
### Step 2: Parse and Annotate
Parse the diff and overlay comments at their respective file/line positions:
```
═══════════════════════════════════════════════════
PR #123 Diff - Add user authentication
═══════════════════════════════════════════════════
Branch: feat/user-auth → development
Files: 12 changed (+234 / -45)
───────────────────────────────────────────────────
src/api/users.ts (+85 / -12)
───────────────────────────────────────────────────
@@ -42,6 +42,15 @@ export async function getUser(id: string) {
42 │ const db = getDatabase();
43 │
44 │- const user = db.query("SELECT * FROM users WHERE id = " + id);
│ ┌─────────────────────────────────────────────────────────────
│ │ COMMENT by @reviewer (2h ago):
│ │ This is a SQL injection vulnerability. Use parameterized
│ │ queries instead: `db.query("SELECT * FROM users WHERE id = ?", [id])`
│ └─────────────────────────────────────────────────────────────
45 │+ const query = "SELECT * FROM users WHERE id = ?";
46 │+ const user = db.query(query, [id]);
47 │
48 │ if (!user) {
49 │ throw new NotFoundError("User not found");
50 │ }
@@ -78,3 +87,12 @@ export async function updateUser(id: string, data: UserInput) {
87 │+ // Validate input before update
88 │+ validateUserInput(data);
89 │+
90 │+ const result = db.query(
91 │+ "UPDATE users SET name = ?, email = ? WHERE id = ?",
92 │+ [data.name, data.email, id]
93 │+ );
│ ┌─────────────────────────────────────────────────────────────
│ │ COMMENT by @maintainer (1h ago):
│ │ Good use of parameterized query here!
│ │
│ │ REPLY by @author (30m ago):
│ │ Thanks! Applied the same pattern throughout.
│ └─────────────────────────────────────────────────────────────
───────────────────────────────────────────────────
src/components/LoginForm.tsx (+65 / -0) [NEW FILE]
───────────────────────────────────────────────────
@@ -0,0 +1,65 @@
1 │+import React, { useState } from 'react';
2 │+import { useAuth } from '../context/AuthContext';
3 │+
4 │+export function LoginForm() {
5 │+ const [email, setEmail] = useState('');
6 │+ const [password, setPassword] = useState('');
7 │+ const { login } = useAuth();
... (remaining diff content)
═══════════════════════════════════════════════════
Comment Summary: 5 comments, 2 resolved
═══════════════════════════════════════════════════
```
### Step 3: Filter by Confidence (Optional)
If `PR_REVIEW_CONFIDENCE_THRESHOLD` is set, also annotate with high-confidence findings from previous reviews:
```
44 │- const user = db.query("SELECT * FROM users WHERE id = " + id);
│ ┌─── REVIEW FINDING (0.95 HIGH) ─────────────────────────────
│ │ [SEC-001] SQL Injection Vulnerability
│ │ Use parameterized queries to prevent injection attacks.
│ └─────────────────────────────────────────────────────────────
│ ┌─── COMMENT by @reviewer ────────────────────────────────────
│ │ This is a SQL injection vulnerability...
│ └─────────────────────────────────────────────────────────────
```
## Output Formats
### Default (Annotated Diff)
Full diff with inline comments as shown above.
### Plain (--no-comments)
```
/pr-diff 123 --no-comments
# Standard unified diff output without annotations
```
### File Filter (--file)
```
/pr-diff 123 --file "src/api/*"
# Shows diff only for files matching pattern
```
## Use Cases
- **Review preparation**: See the full context of changes with existing feedback
- **Follow-up work**: Understand what was commented on and where
- **Discussion context**: View threaded conversations alongside the code
- **Progress tracking**: See which comments have been resolved
## Configuration
| Variable | Default | Description |
|----------|---------|-------------|
| `PR_REVIEW_CONFIDENCE_THRESHOLD` | `0.7` | Minimum confidence for showing review findings |
## Related Commands
| Command | Purpose |
|---------|---------|
| `/pr-summary` | Quick overview without diff |
| `/pr-review` | Full multi-agent review |
| `/pr-findings` | Filter review findings by category |

View File

@@ -46,14 +46,16 @@ Collect findings from all agents, each with:
### Step 4: Filter by Confidence
Only display findings with confidence >= 0.5:
Filter findings based on `PR_REVIEW_CONFIDENCE_THRESHOLD` (default: 0.7):
| Confidence | Label | Description |
|------------|-------|-------------|
| 0.9 - 1.0 | HIGH | Definite issue, must address |
| 0.7 - 0.89 | MEDIUM | Likely issue, should address |
| 0.5 - 0.69 | LOW | Possible concern, consider addressing |
| < 0.5 | (suppressed) | Too uncertain to report |
| < threshold | (filtered) | Below configured threshold |
**Note:** With the default threshold of 0.7, only MEDIUM and HIGH confidence findings are shown. Adjust `PR_REVIEW_CONFIDENCE_THRESHOLD` to include more or fewer findings.
### Step 5: Generate Report
@@ -135,5 +137,5 @@ Full review report with:
| Variable | Default | Description |
|----------|---------|-------------|
| `PR_REVIEW_CONFIDENCE_THRESHOLD` | `0.5` | Minimum confidence to report |
| `PR_REVIEW_CONFIDENCE_THRESHOLD` | `0.7` | Minimum confidence to report (0.0-1.0) |
| `PR_REVIEW_AUTO_SUBMIT` | `false` | Auto-submit to Gitea |

View File

@@ -73,10 +73,12 @@ Base confidence by pattern:
## Threshold Configuration
The default threshold is 0.5. This can be adjusted:
The default threshold is 0.7 (showing MEDIUM and HIGH confidence findings). This can be adjusted:
```bash
PR_REVIEW_CONFIDENCE_THRESHOLD=0.7 # Only high-confidence
PR_REVIEW_CONFIDENCE_THRESHOLD=0.9 # Only definite issues (HIGH)
PR_REVIEW_CONFIDENCE_THRESHOLD=0.7 # Likely issues and above (MEDIUM+HIGH) - default
PR_REVIEW_CONFIDENCE_THRESHOLD=0.5 # Include possible concerns (LOW+)
PR_REVIEW_CONFIDENCE_THRESHOLD=0.3 # Include more speculative
```

View File

@@ -1,12 +1,8 @@
{
"mcpServers": {
"gitea": {
"command": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/gitea/.venv/bin/python",
"args": ["-m", "mcp_server.server"],
"cwd": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/gitea",
"env": {
"PYTHONPATH": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/gitea"
}
"command": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/gitea/run.sh",
"args": []
}
}
}

View File

@@ -278,6 +278,17 @@ Investigate diagnostic issues and propose fixes with human approval.
**When to use:** In the marketplace repo, to investigate and fix issues reported by `/debug-report`.
### `/suggest-version`
Analyze CHANGELOG and recommend semantic version bump.
**What it does:**
- Reads CHANGELOG.md `[Unreleased]` section
- Analyzes changes to determine bump type (major/minor/patch)
- Applies SemVer rules: breaking changes → major, features → minor, fixes → patch
- Returns recommended version with rationale
**When to use:** Before creating a release to determine the appropriate version number.
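As a rough illustration of those SemVer rules only (not how the command is implemented), a script-level heuristic could look like this:
```bash
# Sketch: read the [Unreleased] section and map its contents to a bump type.
UNRELEASED=$(sed -n '/^## \[Unreleased\]/,/^## \[.*\] -/p' CHANGELOG.md)
if echo "$UNRELEASED" | grep -qiE 'breaking change'; then
  echo "major"
elif echo "$UNRELEASED" | grep -qE '^### Added'; then
  echo "minor"
else
  echo "patch"
fi
```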
## Code Quality Commands
The `/review` and `/test-check` commands complement the Executor agent by catching issues before work is marked complete. Run both commands before `/sprint-close` for a complete quality check.

View File

@@ -108,7 +108,127 @@ git branch --show-current
## Your Responsibilities
### 1. Implement Features Following Specs
### 1. Status Reporting (Structured Progress)
**CRITICAL: Post structured progress comments for visibility.**
**Standard Progress Comment Format:**
```markdown
## Progress Update
**Status:** In Progress | Blocked | Failed
**Phase:** [current phase name]
**Tool Calls:** X (budget: Y)
### Completed
- [x] Step 1
- [x] Step 2
### In Progress
- [ ] Current step (estimated: Z more calls)
### Blockers
- None | [blocker description]
### Next
- What happens after current step
```
**When to Post Progress Comments:**
- **Immediately on starting** - Post initial status
- **Every 20-30 tool calls** - Show progress
- **On phase transitions** - Moving from implementation to testing
- **When blocked or encountering errors**
- **Before budget limit** - If approaching turn limit
**Starting Work Example:**
```
add_comment(
issue_number=45,
body="""## Progress Update
**Status:** In Progress
**Phase:** Starting
**Tool Calls:** 5 (budget: 100)
### Completed
- [x] Read issue and acceptance criteria
- [x] Created feature branch feat/45-jwt-service
### In Progress
- [ ] Implementing JWT service
### Blockers
- None
### Next
- Create auth/jwt_service.py
- Implement core token functions
"""
)
```
**Blocked Example:**
```
add_comment(
issue_number=45,
body="""## Progress Update
**Status:** Blocked
**Phase:** Testing
**Tool Calls:** 67 (budget: 100)
### Completed
- [x] Implemented jwt_service.py
- [x] Wrote unit tests
### In Progress
- [ ] Running tests - BLOCKED
### Blockers
- Missing PyJWT dependency in requirements.txt
- Need orchestrator to add dependency
### Next
- Resume after blocker resolved
"""
)
```
**Failed Example:**
```
add_comment(
issue_number=45,
body="""## Progress Update
**Status:** Failed
**Phase:** Implementation
**Tool Calls:** 89 (budget: 100)
### Completed
- [x] Created jwt_service.py
- [x] Implemented generate_token()
### In Progress
- [ ] verify_token() - FAILED
### Blockers
- Critical: Cannot decode tokens - algorithm mismatch
- Attempted: HS256, HS384, RS256
- Error: InvalidSignatureError consistently
### Next
- Needs human investigation
- Possible issue with secret key encoding
"""
)
```
**NEVER report "completed" unless:**
- All acceptance criteria are met
- Tests pass
- Code is committed and pushed
- No unresolved errors
**If you cannot complete, report failure honestly.** The orchestrator needs accurate status to coordinate effectively.
### 2. Implement Features Following Specs
**You receive:**
- Issue number and description
@@ -122,7 +242,7 @@ git branch --show-current
- Proper error handling
- Edge case coverage
### 2. Follow Best Practices
### 3. Follow Best Practices
**Code Quality Standards:**
@@ -150,7 +270,7 @@ git branch --show-current
- Handle errors gracefully
- Follow OWASP guidelines
### 3. Handle Edge Cases
### 4. Handle Edge Cases
Always consider:
- What if input is None/null/undefined?
@@ -160,7 +280,7 @@ Always consider:
- What if user doesn't have permission?
- What if resource doesn't exist?
### 4. Apply Lessons Learned
### 5. Apply Lessons Learned
Reference relevant lessons in your implementation:
@@ -179,7 +299,7 @@ def test_verify_expired_token(jwt_service):
...
```
### 5. Create Merge Requests (When Branch Protected)
### 6. Create Merge Requests (When Branch Protected)
**MR Body Template - NO SUBTASKS:**
@@ -208,7 +328,7 @@ Closes #45
The issue already tracks subtasks. MR body should be summary only.
### 6. Auto-Close Issues via Commit Messages
### 7. Auto-Close Issues via Commit Messages
**Always include closing keywords in commits:**
@@ -229,7 +349,7 @@ Closes #45"
This ensures issues auto-close when MR is merged.
### 7. Generate Completion Reports
### 8. Generate Completion Reports
After implementation, provide a concise completion report:
@@ -304,18 +424,185 @@ As the executor, you interact with MCP tools for status updates:
- Apply best practices
- Deliver quality work
## Checkpointing (Save Progress for Resume)
**CRITICAL: Save checkpoints so work can be resumed if interrupted.**
**Checkpoint Comment Format:**
```markdown
## Checkpoint
**Branch:** feat/45-jwt-service
**Commit:** abc123 (or "uncommitted")
**Phase:** [current phase]
**Tool Calls:** 45
### Files Modified
- auth/jwt_service.py (created)
- tests/test_jwt.py (created)
### Completed Steps
- [x] Created jwt_service.py skeleton
- [x] Implemented generate_token()
- [x] Implemented verify_token()
### Pending Steps
- [ ] Write unit tests
- [ ] Add token refresh logic
- [ ] Commit and push
### State Notes
[Any important context for resumption]
```
**When to Save Checkpoints:**
- After completing each major step (every 20-30 tool calls)
- Before stopping due to budget limit
- When encountering a blocker
- After any commit
**Checkpoint Example:**
```
add_comment(
issue_number=45,
body="""## Checkpoint
**Branch:** feat/45-jwt-service
**Commit:** uncommitted (changes staged)
**Phase:** Testing
**Tool Calls:** 67
### Files Modified
- auth/jwt_service.py (created, 120 lines)
- auth/__init__.py (modified, added import)
- tests/test_jwt.py (created, 50 lines, incomplete)
### Completed Steps
- [x] Created auth/jwt_service.py
- [x] Implemented generate_token() with HS256
- [x] Implemented verify_token()
- [x] Updated auth/__init__.py exports
### Pending Steps
- [ ] Complete test_jwt.py (5 tests remaining)
- [ ] Add token refresh logic
- [ ] Commit changes
- [ ] Push to remote
### State Notes
- Using PyJWT 2.8.0
- Secret key from JWT_SECRET env var
- Tests use pytest fixtures in conftest.py
"""
)
```
**Checkpoint on Interruption:**
If you must stop (budget, failure, blocker), ALWAYS post a checkpoint FIRST.
## Runaway Detection (Self-Monitoring)
**CRITICAL: Monitor yourself to prevent infinite loops and wasted resources.**
**Self-Monitoring Checkpoints:**
| Trigger | Action |
|---------|--------|
| 10+ tool calls without progress | STOP - Post progress update, reassess approach |
| Same error 3+ times | CIRCUIT BREAKER - Stop, report failure with error pattern |
| 50+ tool calls total | POST progress update (mandatory) |
| 80+ tool calls total | WARN - Approaching budget, evaluate if completion is realistic |
| 100+ tool calls total | STOP - Save state, report incomplete with checkpoint |
**What Counts as "Progress":**
- File created or modified
- Test passing that wasn't before
- New functionality working
- Moving to next phase of work
**What Does NOT Count as Progress:**
- Reading more files
- Searching for something
- Retrying the same operation
- Adding logging/debugging
**Circuit Breaker Protocol:**
If you encounter the same error 3+ times:
```
add_comment(
issue_number=45,
body="""## Progress Update
**Status:** Failed (Circuit Breaker)
**Phase:** [phase when stopped]
**Tool Calls:** 67 (budget: 100)
### Circuit Breaker Triggered
Same error occurred 3+ times:
```
[error message]
```
### What Was Tried
1. [first attempt]
2. [second attempt]
3. [third attempt]
### Recommendation
[What human should investigate]
### Files Modified
- [list any files changed before failure]
"""
)
```
**Budget Approaching Protocol:**
At 80+ tool calls, post an update:
```
add_comment(
issue_number=45,
body="""## Progress Update
**Status:** In Progress (Budget Warning)
**Phase:** [current phase]
**Tool Calls:** 82 (budget: 100)
### Completed
- [x] [completed steps]
### Remaining
- [ ] [what's left]
### Assessment
[Realistic? Should I continue or stop and checkpoint?]
"""
)
```
**Hard Stop at 100 Calls:**
If you reach 100 tool calls:
1. STOP immediately
2. Save current state
3. Post checkpoint comment
4. Report as incomplete (not failed)
## Critical Reminders
1. **Never use CLI tools** - Use MCP tools exclusively for Gitea
2. **Branch naming** - Always use `feat/`, `fix/`, or `debug/` prefix with issue number
3. **Branch check FIRST** - Never implement on staging/production
4. **Follow specs precisely** - Respect architectural decisions
5. **Apply lessons learned** - Reference in code and tests
6. **Write tests** - Cover edge cases, not just happy path
7. **Clean code** - Readable, maintainable, documented
8. **No MR subtasks** - MR body should NOT have checklists
9. **Use closing keywords** - `Closes #XX` in commit messages
10. **Report thoroughly** - Complete summary when done
2. **Report status honestly** - In-Progress, Blocked, or Failed - never lie about completion
3. **Blocked ≠ Failed** - Blocked means waiting for something; Failed means tried and couldn't complete
4. **Self-monitor** - Watch for runaway patterns, trigger circuit breaker when stuck
5. **Branch naming** - Always use `feat/`, `fix/`, or `debug/` prefix with issue number
6. **Branch check FIRST** - Never implement on staging/production
7. **Follow specs precisely** - Respect architectural decisions
8. **Apply lessons learned** - Reference in code and tests
9. **Write tests** - Cover edge cases, not just happy path
10. **Clean code** - Readable, maintainable, documented
11. **No MR subtasks** - MR body should NOT have checklists
12. **Use closing keywords** - `Closes #XX` in commit messages
13. **Report thoroughly** - Complete summary when done, including honest status
14. **Hard stop at 100 calls** - Save checkpoint and report incomplete
## Your Mission

View File

@@ -57,9 +57,42 @@ curl -X POST "https://gitea.../api/..."
- Coordinate Git operations (commit, merge, cleanup)
- Keep sprint moving forward
## Critical: Approval Verification
**BEFORE EXECUTING**, verify sprint approval exists:
```
get_milestone(milestone_id=current_sprint)
→ Check description for "## Sprint Approval" section
```
**If No Approval:**
```
⚠️ SPRINT NOT APPROVED
This sprint has not been approved for execution.
Please run /sprint-plan to approve the sprint first.
```
**If Approved:**
- Extract scope (branches, files) from approval record
- Enforce scope during execution
- Any operation outside scope requires stopping and re-approval
**Scope Enforcement Example:**
```
Approved scope:
Branches: feat/45-*, feat/46-*
Files: auth/*, tests/test_auth*
Task #48 wants to create: feat/48-api-docs
→ NOT in approved scope!
→ STOP and ask user to approve expanded scope
```
## Critical: Branch Detection
**BEFORE DOING ANYTHING**, check the current git branch:
**AFTER approval verification**, check the current git branch:
```bash
git branch --show-current
@@ -93,7 +126,44 @@ git branch --show-current
**Workflow:**
**A. Fetch Sprint Issues**
**A. Fetch Sprint Issues and Detect Checkpoints**
```
list_issues(state="open", labels=["sprint-current"])
```
**For each open issue, check for checkpoint comments:**
```
get_issue(issue_number=45) # Comments included
→ Look for comments containing "## Checkpoint"
```
**If Checkpoint Found:**
```
Checkpoint Detected for #45
Found checkpoint from previous session:
Branch: feat/45-jwt-service
Phase: Testing
Tool Calls: 67
Files Modified: 3
Completed: 4/7 steps
Options:
1. Resume from checkpoint (recommended)
2. Start fresh (discard previous work)
3. Review checkpoint details first
Would you like to resume?
```
**Resume Protocol:**
1. Verify branch exists: `git branch -a | grep feat/45-jwt-service`
2. Switch to branch: `git checkout feat/45-jwt-service`
3. Verify files match checkpoint
4. Dispatch executor with checkpoint context
5. Executor continues from pending steps
**B. Fetch Sprint Issues (Standard)**
```
list_issues(state="open", labels=["sprint-current"])
```
@@ -147,11 +217,56 @@ Relevant Lessons:
Ready to start? I can dispatch multiple tasks in parallel.
```
### 2. Parallel Task Dispatch
### 2. File Conflict Prevention (Pre-Dispatch)
**When starting execution:**
**BEFORE dispatching parallel agents, analyze file overlap.**
For independent tasks (same batch), spawn multiple Executor agents in parallel:
**Conflict Detection Workflow:**
1. **Read each issue's checklist/body** to identify target files
2. **Build file map** for all tasks in the batch
3. **Check for overlap** - Same file in multiple tasks?
4. **Sequentialize conflicts** - Don't parallelize if same file
**Example Analysis:**
```
Analyzing Batch 1 for conflicts:
#45 - Implement JWT service
→ auth/jwt_service.py, auth/__init__.py, tests/test_jwt.py
#48 - Update API documentation
→ docs/api.md, README.md
Overlap check: NONE
Decision: Safe to parallelize ✅
```
**If Conflict Detected:**
```
Analyzing Batch 2 for conflicts:
#46 - Build login endpoint
→ api/routes/auth.py, auth/__init__.py
#49 - Add auth tests
→ tests/test_auth.py, auth/__init__.py
Overlap: auth/__init__.py ⚠️
Decision: Sequentialize - run #46 first, then #49
```
**Conflict Resolution:**
- Same file → MUST sequentialize
- Same directory → Usually safe, review file names
- Shared config → Sequentialize
- Shared test fixture → Assign different fixture files or sequentialize
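A minimal sketch of the overlap check itself is shown below; the file lists mirror the conflict example above and would normally come from the issue checklists.
```bash
# Sketch: flag files planned by more than one task before dispatching in parallel.
printf '%s\n' api/routes/auth.py auth/__init__.py | sort > /tmp/task-46-files.txt
printf '%s\n' tests/test_auth.py auth/__init__.py | sort > /tmp/task-49-files.txt
OVERLAP=$(comm -12 /tmp/task-46-files.txt /tmp/task-49-files.txt)
if [ -n "$OVERLAP" ]; then
  echo "Conflict detected - run sequentially:"
  echo "$OVERLAP"
else
  echo "No overlap - safe to parallelize"
fi
```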
### 3. Parallel Task Dispatch
**After conflict check passes, dispatch parallel agents:**
For independent tasks (same batch) WITH NO FILE CONFLICTS, spawn multiple Executor agents in parallel:
```
Dispatching Batch 1 (2 tasks in parallel):
@@ -167,6 +282,14 @@ Task 2: #48 - Update API documentation
Both tasks running in parallel. I'll monitor progress.
```
**Branch Isolation:** Each task MUST have its own branch. Never have two agents work on the same branch.
**Sequential Merge Protocol:**
1. Wait for task to complete
2. Merge its branch to development
3. Then merge next completed task
4. Never merge simultaneously
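When development is not branch-protected, the sequential merge amounts to the sketch below, using the example branch names from this document; with protection enabled, each merge instead goes through an MR as covered under Branch Protection Detection.
```bash
# Sketch: merge completed task branches one at a time, never simultaneously.
for BRANCH in feat/45-jwt-service fix/46-login-timeout; do
  git checkout development
  git pull origin development
  git merge --no-ff "$BRANCH"
  git push origin development
done
```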
**Branch Naming Convention (MANDATORY):**
- Features: `feat/<issue-number>-<short-description>`
- Bug fixes: `fix/<issue-number>-<short-description>`
@@ -177,7 +300,7 @@ Both tasks running in parallel. I'll monitor progress.
- `fix/46-login-timeout`
- `debug/47-investigate-memory-leak`
### 3. Generate Lean Execution Prompts
### 4. Generate Lean Execution Prompts
**NOT THIS (too verbose):**
```
@@ -222,11 +345,127 @@ Dependencies: None (can start immediately)
Ready to start? Say "yes" and I'll monitor progress.
```
### 4. Progress Tracking
### 5. Status Label Management
**Monitor and Update:**
**CRITICAL: Use Status labels to communicate issue state accurately.**
**Add Progress Comments:**
**When dispatching a task:**
```
update_issue(
issue_number=45,
labels=["Status/In-Progress", ...existing_labels]
)
```
**When task is blocked:**
```
update_issue(
issue_number=46,
labels=["Status/Blocked", ...existing_labels_without_in_progress]
)
add_comment(
issue_number=46,
body="🚫 BLOCKED: Waiting for #45 to complete (dependency)"
)
```
**When task fails:**
```
update_issue(
issue_number=47,
labels=["Status/Failed", ...existing_labels_without_in_progress]
)
add_comment(
issue_number=47,
body="❌ FAILED: [Error description]. Needs investigation."
)
```
**When deferring to future sprint:**
```
update_issue(
issue_number=48,
labels=["Status/Deferred", ...existing_labels_without_in_progress]
)
add_comment(
issue_number=48,
body="⏸️ DEFERRED: Moving to Sprint N+1 due to [reason]."
)
```
**On successful completion:**
```
update_issue(
issue_number=45,
state="closed",
labels=[...existing_labels_without_status] # Remove all Status/* labels
)
```
**Status Label Rules:**
- Only ONE Status label at a time (In-Progress, Blocked, Failed, or Deferred)
- Remove Status labels when closing successfully
- Always add comment explaining status changes
### 6. Progress Tracking (Structured Comments)
**CRITICAL: Use structured progress comments for visibility.**
**Standard Progress Comment Format:**
```markdown
## Progress Update
**Status:** In Progress | Blocked | Failed
**Phase:** [current phase name]
**Tool Calls:** X (budget: Y)
### Completed
- [x] Step 1
- [x] Step 2
### In Progress
- [ ] Current step (estimated: Z more calls)
### Blockers
- None | [blocker description]
### Next
- What happens after current step
```
**Example Progress Comment:**
```
add_comment(
issue_number=45,
body="""## Progress Update
**Status:** In Progress
**Phase:** Implementation
**Tool Calls:** 45 (budget: 100)
### Completed
- [x] Created auth/jwt_service.py
- [x] Implemented generate_token()
- [x] Implemented verify_token()
### In Progress
- [ ] Writing unit tests (estimated: 20 more calls)
### Blockers
- None
### Next
- Run tests and fix any failures
- Commit and push
"""
)
```
**When to Post Progress Comments:**
- After completing each major phase (every 20-30 tool calls)
- When status changes (blocked, failed)
- When encountering unexpected issues
- Before approaching tool call budget limit
**Simple progress updates (for minor milestones):**
```
add_comment(
issue_number=45,
@@ -264,7 +503,7 @@ add_comment(
- Notify that new tasks are ready for execution
- Update the execution queue
### 5. Monitor Parallel Execution
### 7. Monitor Parallel Execution
**Track multiple running tasks:**
```
@@ -282,7 +521,7 @@ Batch 2 (now unblocked):
Starting #46 while #48 continues...
```
### 6. Branch Protection Detection
### 8. Branch Protection Detection
Before merging, check if development branch is protected:
@@ -312,7 +551,7 @@ Closes #45
**NEVER include subtask checklists in MR body.** The issue already has them.
### 7. Sprint Close - Capture Lessons Learned
### 9. Sprint Close - Capture Lessons Learned
**Invoked by:** `/sprint-close`
@@ -564,6 +803,64 @@ Would you like me to handle git operations?
- Document blockers promptly
- Never let tasks slip through
## Runaway Detection (Monitoring Dispatched Agents)
**Monitor dispatched agents for runaway behavior:**
**Warning Signs:**
- Agent running 30+ minutes with no progress comment
- Progress comment shows "same phase" for 20+ tool calls
- Error patterns repeating in progress comments
**Intervention Protocol:**
When you detect an agent may be stuck:
1. **Read latest progress comment** - Check tool call count and phase
2. **If no progress in 20+ calls** - Consider stopping the agent
3. **If same error 3+ times** - Stop and mark issue as Status/Failed
**Agent Timeout Guidelines:**
| Task Size | Expected Duration | Intervention Point |
|-----------|-------------------|-------------------|
| XS | ~5-10 min | 15 min no progress |
| S | ~10-20 min | 30 min no progress |
| M | ~20-40 min | 45 min no progress |
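A minimal sketch of that check, mapping the table above to a simple threshold test; `should_intervene` is a hypothetical helper, not an MCP tool:
```python
# Intervention thresholds from the table above (minutes with no progress)
INTERVENTION_MINUTES = {"XS": 15, "S": 30, "M": 45}

def should_intervene(effort: str, minutes_since_progress: float) -> bool:
    """True when a dispatched agent has been silent past its intervention point."""
    limit = INTERVENTION_MINUTES.get(effort)
    return limit is not None and minutes_since_progress >= limit

# should_intervene("S", 35) -> True: an S task silent for 35 minutes needs review
```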
**Recovery Actions:**
If agent appears stuck:
```
# Stop the agent
[Use TaskStop if available]
# Update issue status
update_issue(
issue_number=45,
labels=["Status/Failed", ...other_labels]
)
# Add explanation comment
add_comment(
issue_number=45,
body="""## Agent Intervention
**Reason:** No progress detected for [X] minutes / [Y] tool calls
**Last Status:** [from progress comment]
**Action:** Stopped agent, requires human review
### What Was Completed
[from progress comment]
### What Remains
[from progress comment]
### Recommendation
[Manual completion / Different approach / Break down further]
"""
)
```
## Critical Reminders
1. **Never use CLI tools** - Use MCP tools exclusively for Gitea
@@ -572,14 +869,18 @@ Would you like me to handle git operations?
4. **Parallel dispatch** - Run independent tasks simultaneously
5. **Lean prompts** - Brief, actionable, not verbose documents
6. **Branch naming** - `feat/`, `fix/`, `debug/` prefixes required
7. **No MR subtasks** - MR body should NOT have checklists
8. **Auto-check subtasks** - Mark issue subtasks complete on close
9. **Track meticulously** - Update issues immediately, document blockers
10. **Capture lessons** - At sprint close, interview thoroughly
11. **Update wiki status** - At sprint close, update implementation and proposal pages
12. **Link lessons to wiki** - Include lesson links in implementation completion summary
13. **Update CHANGELOG** - MANDATORY at sprint close, never skip
14. **Run suggest-version** - Check if release is needed after CHANGELOG update
7. **Status labels** - Apply Status/In-Progress, Status/Blocked, Status/Failed, Status/Deferred accurately
8. **One status at a time** - Remove old Status/* label before applying new one
9. **Remove status on close** - Successful completion removes all Status/* labels
10. **Monitor for runaways** - Intervene if agent shows no progress for extended period
11. **No MR subtasks** - MR body should NOT have checklists
12. **Auto-check subtasks** - Mark issue subtasks complete on close
13. **Track meticulously** - Update issues immediately, document blockers
14. **Capture lessons** - At sprint close, interview thoroughly
15. **Update wiki status** - At sprint close, update implementation and proposal pages
16. **Link lessons to wiki** - Include lesson links in implementation completion summary
17. **Update CHANGELOG** - MANDATORY at sprint close, never skip
18. **Run suggest-version** - Check if release is needed after CHANGELOG update
## Your Mission

View File

@@ -310,14 +310,55 @@ Think through the technical approach:
- `[Sprint 17] fix: Resolve login timeout issue`
- `[Sprint 18] refactor: Extract authentication module`
**Task Granularity Guidelines:**
| Size | Scope | Example |
|------|-------|---------|
| **Small** | 1-2 hours, single file/component | Add validation to one field |
| **Medium** | Half day, multiple files, one feature | Implement new API endpoint |
| **Large** | Should be broken down | Full authentication system |
**Task Sizing Rules (MANDATORY):**
**If a task is too large, break it down into smaller tasks.**
| Effort | Files | Checklist Items | Max Tool Calls | Agent Scope |
|--------|-------|-----------------|----------------|-------------|
| **XS** | 1 file | 0-2 items | ~30 | Single function/fix |
| **S** | 1 file | 2-4 items | ~50 | Single file feature |
| **M** | 2-3 files | 4-6 items | ~80 | Multi-file feature |
| **L** | MUST BREAK DOWN | - | - | Too large for one agent |
| **XL** | MUST BREAK DOWN | - | - | Way too large |
**CRITICAL: L and XL tasks MUST be broken into subtasks.**
**Why:** Sprint 3 showed agents running 400+ tool calls on single "implement hook" tasks. This causes:
- Long wait times (1+ hour per task)
- No progress visibility
- Resource exhaustion
- Difficult debugging
**Task Scoping Checklist:**
1. Can this be completed in one file? → XS or S
2. Does it touch 2-3 files? → M (max)
3. Does it touch 4+ files? → MUST break down
4. Does it require complex decision-making? → MUST break down
5. Would you estimate 50+ tool calls? → MUST break down
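A minimal sketch of that checklist as a sizing helper (illustrative only, mirroring the table above; not an MCP tool):
```python
def estimate_effort(file_count: int, checklist_items: int) -> str:
    """Rough effort bucket per the sizing table; anything larger must be split."""
    if file_count <= 1 and checklist_items <= 2:
        return "XS"
    if file_count <= 1 and checklist_items <= 4:
        return "S"
    if file_count <= 3 and checklist_items <= 6:
        return "M"
    return "BREAK DOWN"  # L/XL territory - create subtasks instead

# estimate_effort(1, 3) -> "S"    estimate_effort(5, 8) -> "BREAK DOWN"
```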
**Breaking Down Large Tasks:**
**BAD (L/XL - too broad):**
```
[Sprint 3] feat: Implement git-flow branch validation hook
Labels: Efforts/L, ...
```
**GOOD (broken into S/M tasks):**
```
[Sprint 3] feat: Create branch validation hook skeleton
Labels: Efforts/S, ...
[Sprint 3] feat: Add prefix pattern validation (feat/, fix/, etc.)
Labels: Efforts/S, ...
[Sprint 3] feat: Add issue number extraction and validation
Labels: Efforts/S, ...
[Sprint 3] test: Add branch validation unit tests
Labels: Efforts/S, ...
```
**If a task is estimated L or XL, STOP and break it down before creating.**
**IMPORTANT: Include wiki implementation reference in issue body:**
@@ -479,5 +520,9 @@ Sprint 17 - User Authentication (Due: 2025-02-01)
11. **Always use suggest_labels** - Don't guess labels
12. **Always think through architecture** - Consider edge cases
13. **Always clean up local files** - Delete after migrating to wiki
14. **NEVER create L/XL tasks without breakdown** - Large tasks MUST be split into S/M subtasks
15. **Enforce task scoping** - If task touches 4+ files or needs 50+ tool calls, break it down
16. **ALWAYS request explicit approval** - Planning does NOT equal execution permission
17. **Record approval in milestone** - Sprint-start verifies approval before executing
You are the thoughtful planner who ensures sprints are well-prepared, architecturally sound, and informed by past experiences. Take your time, ask questions, and create comprehensive plans that set the team up for success.

View File

@@ -0,0 +1,180 @@
---
description: Generate Mermaid diagram of sprint issues with dependencies and status
---
# Sprint Diagram
This command generates a visual Mermaid diagram showing the current sprint's issues, their dependencies, and execution flow.
## What This Command Does
1. **Fetch Sprint Issues** - Gets all issues for the current sprint milestone
2. **Fetch Dependencies** - Retrieves dependency relationships between issues
3. **Generate Mermaid Syntax** - Creates flowchart showing issue flow
4. **Apply Status Styling** - Colors nodes based on issue state (open/closed/in-progress)
5. **Show Execution Order** - Visualizes parallel batches and critical path
## Usage
```
/sprint-diagram
/sprint-diagram --milestone "Sprint 4"
```
## MCP Tools Used
**Issue Tools:**
- `list_issues(state="all")` - Fetch all sprint issues
- `list_milestones()` - Find current sprint milestone
**Dependency Tools:**
- `list_issue_dependencies(issue_number)` - Get dependencies for each issue
- `get_execution_order(issue_numbers)` - Get parallel execution batches
## Implementation Steps
1. **Get Current Milestone:**
```
milestones = list_milestones(state="open")
current_sprint = milestones[0] # Most recent open milestone
```
2. **Fetch Sprint Issues:**
```
issues = list_issues(state="all", milestone=current_sprint.title)
```
3. **Fetch Dependencies for Each Issue:**
```python
dependencies = {}
for issue in issues:
deps = list_issue_dependencies(issue.number)
dependencies[issue.number] = deps
```
4. **Generate Mermaid Diagram:**
```mermaid
flowchart TD
subgraph Sprint["Sprint 4 - Commands"]
241["#241: sprint-diagram"]
242["#242: confidence threshold"]
243["#243: pr-diff"]
241 --> 242
242 --> 243
end
classDef completed fill:#90EE90,stroke:#228B22
classDef inProgress fill:#FFD700,stroke:#DAA520
classDef open fill:#ADD8E6,stroke:#4682B4
classDef blocked fill:#FFB6C1,stroke:#DC143C
class 241 completed
class 242 inProgress
class 243 open
```
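An illustrative sketch of how the agent can assemble that diagram text, assuming the issue objects expose `.number`, `.title`, and `.state` as used above and each dependency entry exposes `.number` (append the `classDef` lines from the example for styling):
```python
def build_mermaid(issues, dependencies):
    """Assemble flowchart text from sprint issues and their dependencies."""
    lines = ["flowchart TD"]
    for issue in issues:
        lines.append(f'    {issue.number}["#{issue.number}: {issue.title}"]')
    for issue in issues:
        for dep in dependencies.get(issue.number, []):
            lines.append(f"    {dep.number} --> {issue.number}")
    for issue in issues:
        status = "completed" if issue.state == "closed" else "open"
        lines.append(f"    class {issue.number} {status}")
    return "\n".join(lines)
```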
## Expected Output
```
Sprint Diagram: Sprint 4 - Commands
===================================
```mermaid
flowchart TD
subgraph batch1["Batch 1 - No Dependencies"]
241["#241: sprint-diagram<br/>projman"]
242["#242: confidence threshold<br/>pr-review"]
244["#244: data-quality<br/>data-platform"]
247["#247: chart-export<br/>viz-platform"]
250["#250: dependency-graph<br/>contract-validator"]
251["#251: changelog-gen<br/>doc-guardian"]
254["#254: config-diff<br/>config-maintainer"]
256["#256: cmdb-topology<br/>cmdb-assistant"]
end
subgraph batch2["Batch 2 - After Batch 1"]
243["#243: pr-diff<br/>pr-review"]
245["#245: lineage-viz<br/>data-platform"]
248["#248: color blind<br/>viz-platform"]
252["#252: doc-coverage<br/>doc-guardian"]
255["#255: linting<br/>config-maintainer"]
257["#257: change-audit<br/>cmdb-assistant"]
end
subgraph batch3["Batch 3 - Final"]
246["#246: dbt-test<br/>data-platform"]
249["#249: responsive<br/>viz-platform"]
253["#253: stale-docs<br/>doc-guardian"]
258["#258: IP conflict<br/>cmdb-assistant"]
end
batch1 --> batch2
batch2 --> batch3
classDef completed fill:#90EE90,stroke:#228B22
classDef inProgress fill:#FFD700,stroke:#DAA520
classDef open fill:#ADD8E6,stroke:#4682B4
class 241,242 completed
class 243,244 inProgress
```
## Status Legend
| Status | Color | Description |
|--------|-------|-------------|
| Completed | Green | Issue closed |
| In Progress | Yellow | Currently being worked on |
| Open | Blue | Ready to start |
| Blocked | Red | Waiting on dependencies |
## Diagram Types
### Default: Dependency Flow
Shows how issues depend on each other with arrows indicating blockers.
### Batch View (--batch)
Groups issues by execution batch for parallel work visualization.
### Plugin View (--by-plugin)
Groups issues by plugin for component-level overview.
## When to Use
- **Sprint Planning**: Visualize scope and dependencies
- **Daily Standups**: Show progress at a glance
- **Documentation**: Include in wiki pages
- **Stakeholder Updates**: Visual progress reports
## Integration
The generated Mermaid diagram can be:
- Pasted into GitHub/Gitea issues
- Rendered in wiki pages
- Included in PRs for context
- Used in sprint retrospectives
## Example
```
User: /sprint-diagram
Generating sprint diagram...
Milestone: Sprint 4 - Commands (18 issues)
Fetching dependencies...
Building diagram...
```mermaid
flowchart LR
241[sprint-diagram] --> |enables| 242[confidence]
242 --> 243[pr-diff]
style 241 fill:#90EE90
style 242 fill:#ADD8E6
style 243 fill:#ADD8E6
```
Open: 16 | In Progress: 1 | Completed: 1
```

View File

@@ -136,6 +136,58 @@ The planner agent will:
- Document dependency graph
- Provide sprint overview with wiki links
11. **Request Sprint Approval**
- Present approval request with scope summary
- Capture explicit user approval
- Record approval in milestone description
- Approval scopes what sprint-start can execute
## Sprint Approval (MANDATORY)
**Planning DOES NOT equal execution permission.**
After creating issues, the planner MUST request explicit approval:
```
Sprint 17 Planning Complete
===========================
Created Issues:
- #45: [Sprint 17] feat: JWT token generation
- #46: [Sprint 17] feat: Login endpoint
- #47: [Sprint 17] test: Auth tests
Execution Scope:
- Branches: feat/45-*, feat/46-*, feat/47-*
- Files: auth/*, api/routes/auth.py, tests/test_auth*
- Dependencies: PyJWT, python-jose
⚠️ APPROVAL REQUIRED
Do you approve this sprint for execution?
This grants permission for agents to:
- Create and modify files in the listed scope
- Create branches with the listed prefixes
- Install listed dependencies
Type "approve sprint 17" to authorize execution.
```
**On Approval:**
1. Record approval in milestone description
2. Note timestamp and scope
3. Sprint-start will verify approval exists
**Approval Record Format:**
```markdown
## Sprint Approval
**Approved:** 2026-01-28 14:30
**Approver:** User
**Scope:**
- Branches: feat/45-*, feat/46-*, feat/47-*
- Files: auth/*, api/routes/auth.py, tests/test_auth*
```
## Issue Title Format (MANDATORY)
```
@@ -155,15 +207,70 @@ The planner agent will:
- `[Sprint 17] fix: Resolve login timeout issue`
- `[Sprint 18] refactor: Extract authentication module`
## Task Granularity Guidelines
## Task Sizing Rules (MANDATORY)
| Size | Scope | Example |
|------|-------|---------|
| **Small** | 1-2 hours, single file/component | Add validation to one field |
| **Medium** | Half day, multiple files, one feature | Implement new API endpoint |
| **Large** | Should be broken down | Full authentication system |
**CRITICAL: Tasks sized L or XL MUST be broken down into smaller tasks.**
**If a task is too large, break it down into smaller tasks.**
| Effort | Files | Checklist Items | Max Tool Calls | Agent Scope |
|--------|-------|-----------------|----------------|-------------|
| **XS** | 1 file | 0-2 items | ~30 | Single function/fix |
| **S** | 1 file | 2-4 items | ~50 | Single file feature |
| **M** | 2-3 files | 4-6 items | ~80 | Multi-file feature |
| **L** | MUST BREAK DOWN | - | - | Too large for one agent |
| **XL** | MUST BREAK DOWN | - | - | Way too large |
**Why This Matters:**
- Agents running 400+ tool calls take 1+ hour, with no visibility
- Large tasks lack clear completion criteria
- Debugging failures is extremely difficult
- Small tasks enable parallel execution
**Scoping Checklist:**
1. Can this be completed in one file? → XS or S
2. Does it touch 2-3 files? → M (maximum for single task)
3. Does it touch 4+ files? → MUST break down
4. Would you estimate 50+ tool calls? → MUST break down
5. Does it require complex decision-making mid-task? → MUST break down
**Example Breakdown:**
**BAD (L - too broad):**
```
[Sprint 3] feat: Implement schema diff detection hook
Labels: Efforts/L
- Hook skeleton
- Pattern detection for DROP/ALTER/RENAME
- Warning output formatting
- Integration with hooks.json
```
**GOOD (broken into S tasks):**
```
[Sprint 3] feat: Create schema-diff-check.sh hook skeleton
Labels: Efforts/S
- [ ] Create hook file with standard header
- [ ] Add file type detection for SQL/migrations
- [ ] Exit 0 (non-blocking)
[Sprint 3] feat: Add DROP/ALTER pattern detection
Labels: Efforts/S
- [ ] Detect DROP COLUMN/TABLE/INDEX
- [ ] Detect ALTER TYPE changes
- [ ] Detect RENAME operations
[Sprint 3] feat: Add warning output formatting
Labels: Efforts/S
- [ ] Format breaking change warnings
- [ ] Add hook prefix to output
- [ ] Test output visibility
[Sprint 3] chore: Register hook in hooks.json
Labels: Efforts/XS
- [ ] Add PostToolUse:Edit hook entry
- [ ] Test hook triggers on SQL edits
```
**The planner MUST refuse to create L/XL tasks without breakdown.**
## MCP Tools Available

View File

@@ -6,6 +6,47 @@ description: Begin sprint execution with relevant lessons learned from previous
You are initiating sprint execution. The orchestrator agent will coordinate the work, analyze dependencies for parallel execution, search for relevant lessons learned, and guide you through the implementation process.
## Sprint Approval Verification
**CRITICAL: Sprint must be approved before execution.**
The orchestrator checks for approval in the milestone description:
```
get_milestone(milestone_id=17)
→ Check description for "## Sprint Approval" section
```
**If Approval Missing:**
```
⚠️ SPRINT NOT APPROVED
Sprint 17 has not been approved for execution.
The milestone description does not contain an approval record.
Please run /sprint-plan to:
1. Review the sprint scope
2. Approve the execution plan
Then run /sprint-start again.
```
**If Approval Found:**
```
✓ Sprint Approval Verified
Approved: 2026-01-28 14:30
Scope:
Branches: feat/45-*, feat/46-*, feat/47-*
Files: auth/*, api/routes/auth.py, tests/test_auth*
Proceeding with execution within approved scope...
```
**Scope Enforcement:**
- Agents can ONLY create branches matching approved patterns
- Agents can ONLY modify files within approved paths
- Operations outside scope require re-approval via `/sprint-plan`
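A minimal sketch of the scope check, assuming the approved branch patterns have already been parsed out of the milestone approval record; `branch_in_scope` is illustrative, not an MCP tool:
```python
from fnmatch import fnmatch

def branch_in_scope(branch: str, approved_patterns: list[str]) -> bool:
    """True when the branch matches one of the approved patterns (e.g. 'feat/45-*')."""
    return any(fnmatch(branch, pattern) for pattern in approved_patterns)

# branch_in_scope("feat/45-jwt-service", ["feat/45-*", "feat/46-*"])  -> True
# branch_in_scope("feat/99-new-idea", ["feat/45-*", "feat/46-*"])     -> False (re-approve)
```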
## Branch Detection
**CRITICAL:** Before proceeding, check the current git branch:
@@ -25,7 +66,18 @@ If you are on a production or staging branch, you MUST stop and ask the user to
The orchestrator agent will:
1. **Fetch Sprint Issues**
1. **Verify Sprint Approval**
- Check milestone description for `## Sprint Approval` section
- If no approval found, STOP and direct user to `/sprint-plan`
- If approval found, extract scope (branches, files)
- Agents operate ONLY within approved scope
2. **Detect Checkpoints (Resume Support)**
- Check each open issue for `## Checkpoint` comments
- If checkpoint found, offer to resume from that point
- Resume preserves: branch, completed work, pending steps
3. **Fetch Sprint Issues**
- Use `list_issues` to fetch open issues for the sprint
- Identify priorities based on labels (Priority/Critical, Priority/High, etc.)
@@ -72,6 +124,67 @@ Parallel Execution Batches:
**Independent tasks in the same batch run in parallel.**
## File Conflict Prevention (MANDATORY)
**CRITICAL: Before dispatching parallel agents, check for file overlap.**
**Pre-Dispatch Conflict Check:**
1. **Identify target files** for each task in the batch
2. **Check for overlap** - Do any tasks modify the same file?
3. **If overlap detected** - Sequentialize those specific tasks
**Example Conflict Detection:**
```
Batch 1 Analysis:
#45 - Implement JWT service
Files: auth/jwt_service.py, auth/__init__.py, tests/test_jwt.py
#48 - Update API documentation
Files: docs/api.md, README.md
Overlap: NONE → Safe to parallelize
Batch 2 Analysis:
#46 - Build login endpoint
Files: api/routes/auth.py, auth/__init__.py
#49 - Add auth tests
Files: tests/test_auth.py, auth/__init__.py
Overlap: auth/__init__.py → CONFLICT!
Action: Sequentialize #46 and #49 (run #46 first)
```
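A minimal sketch of the overlap check, assuming each task's target files are known from its checklist; `find_conflicts` is illustrative, not an MCP tool:
```python
from itertools import combinations

def find_conflicts(task_files: dict[int, set[str]]) -> list[tuple[int, int, set[str]]]:
    """Return (issue_a, issue_b, shared_files) for every pair of tasks that overlaps."""
    conflicts = []
    for (a, files_a), (b, files_b) in combinations(task_files.items(), 2):
        shared = files_a & files_b
        if shared:
            conflicts.append((a, b, shared))
    return conflicts

# find_conflicts({46: {"api/routes/auth.py", "auth/__init__.py"},
#                 49: {"tests/test_auth.py", "auth/__init__.py"}})
# -> [(46, 49, {"auth/__init__.py"})]  => sequentialize #46 and #49
```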
**Conflict Resolution Rules:**
| Conflict Type | Action |
|---------------|--------|
| Same file in checklist | Sequentialize tasks |
| Same directory | Review if safe, usually OK |
| Shared test file | Sequentialize or assign different test files |
| Shared config | Sequentialize |
**Branch Isolation Protocol:**
Even for parallel tasks, each MUST run on its own branch:
```
Task #45 → feat/45-jwt-service (isolated)
Task #48 → feat/48-api-docs (isolated)
```
**Sequential Merge After Completion:**
```
1. Task #45 completes → merge feat/45-jwt-service to development
2. Task #48 completes → merge feat/48-api-docs to development
3. Never merge simultaneously - always sequential to detect conflicts
```
**If Merge Conflict Occurs:**
1. Stop second task
2. Resolve conflict manually or assign to human
3. Resume/restart second task with updated base
## Branch Naming Convention (MANDATORY)
When creating branches for tasks:
@@ -239,6 +352,61 @@ Batch 2 (now unblocked):
Starting #46 while #48 continues...
```
## Checkpoint Resume Support
If a previous session was interrupted (agent stopped, failure, budget exhausted), checkpoints enable resumption.
**Checkpoint Detection:**
The orchestrator scans issue comments for `## Checkpoint` markers containing:
- Branch name
- Last commit hash
- Completed/pending steps
- Files modified
**Resume Flow:**
```
User: /sprint-start
Orchestrator: Checking for checkpoints...
Found checkpoint for #45 (JWT service):
Branch: feat/45-jwt-service
Last activity: 2 hours ago
Progress: 4/7 steps completed
Pending: Write tests, add refresh, commit
Options:
1. Resume from checkpoint (recommended)
2. Start fresh (lose previous work)
3. Review checkpoint details
User: 1
Orchestrator: Resuming #45 from checkpoint...
✓ Branch exists
✓ Files match checkpoint
✓ Dispatching executor with context
Executor continues from pending steps...
```
**Checkpoint Format:**
Executors save checkpoints after major steps:
```markdown
## Checkpoint
**Branch:** feat/45-jwt-service
**Commit:** abc123
**Phase:** Testing
### Completed Steps
- [x] Step 1
- [x] Step 2
### Pending Steps
- [ ] Step 3
- [ ] Step 4
```
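A minimal sketch of checkpoint detection, assuming issue comment bodies are available as plain strings (e.g. via `get_issue`); the parsing below is illustrative only:
```python
def find_checkpoint(comment_bodies: list[str]) -> dict | None:
    """Return branch, commit, and pending steps from the newest '## Checkpoint' comment."""
    for body in reversed(comment_bodies):  # newest first
        if "## Checkpoint" not in body:
            continue
        checkpoint = {"pending": []}
        for line in body.splitlines():
            if line.startswith("**Branch:**"):
                checkpoint["branch"] = line.split("**Branch:**", 1)[1].strip()
            elif line.startswith("**Commit:**"):
                checkpoint["commit"] = line.split("**Commit:**", 1)[1].strip()
            elif line.strip().startswith("- [ ]"):
                checkpoint["pending"].append(line.strip()[6:])
        return checkpoint
    return None  # no checkpoint - start fresh
```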
## Getting Started
Simply invoke `/sprint-start` and the orchestrator will:

View File

@@ -79,7 +79,12 @@ Completed Issues (3):
In Progress (2):
#46: [Sprint 18] feat: Build login endpoint [Type/Feature, Priority/High]
Status: In Progress | Phase: Implementation | Tool Calls: 45/100
Progress: 3/5 steps | Current: Writing validation logic
#49: [Sprint 18] test: Add auth tests [Type/Test, Priority/Medium]
Status: In Progress | Phase: Testing | Tool Calls: 30/100
Progress: 2/4 steps | Current: Testing edge cases
Ready to Start (2):
#50: [Sprint 18] feat: Integrate OAuth providers [Type/Feature, Priority/Low]
@@ -137,12 +142,53 @@ Show only backend issues:
list_issues(labels=["Component/Backend"])
```
## Progress Comment Parsing
Agents post structured progress comments in this format:
```markdown
## Progress Update
**Status:** In Progress | Blocked | Failed
**Phase:** [current phase name]
**Tool Calls:** X (budget: Y)
### Completed
- [x] Step 1
### In Progress
- [ ] Current step
### Blockers
- None | [blocker description]
```
**To extract real-time progress:**
1. Fetch issue comments: `get_issue(number)` includes recent comments
2. Look for comments containing `## Progress Update`
3. Parse the **Status:** line for current state
4. Parse **Tool Calls:** for budget consumption
5. Extract blockers from `### Blockers` section
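A minimal sketch of that parsing, assuming the comment body is a plain string in the standard format above; `parse_progress` is illustrative, not an MCP tool:
```python
import re

def parse_progress(body: str) -> dict | None:
    """Extract status, phase, tool-call budget, and blockers from a progress comment."""
    if "## Progress Update" not in body:
        return None
    status = re.search(r"\*\*Status:\*\*\s*(.+)", body)
    phase = re.search(r"\*\*Phase:\*\*\s*(.+)", body)
    calls = re.search(r"\*\*Tool Calls:\*\*\s*(\d+)\s*\(budget:\s*(\d+)\)", body)
    blockers = re.search(r"### Blockers\n(.*?)(?:\n###|\Z)", body, re.S)
    return {
        "status": status.group(1).strip() if status else None,
        "phase": phase.group(1).strip() if phase else None,
        "tool_calls": (int(calls.group(1)), int(calls.group(2))) if calls else None,
        "blockers": blockers.group(1).strip() if blockers else None,
    }
```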
**Progress Summary Display:**
```
In Progress Issues:
#45: [Sprint 18] feat: JWT service
Status: In Progress | Phase: Testing | Tool Calls: 67/100
Completed: 4/6 steps | Current: Writing unit tests
#46: [Sprint 18] feat: Login endpoint
Status: Blocked | Phase: Implementation | Tool Calls: 23/100
Blocker: Waiting for JWT service (#45)
```
## Blocker Detection
The command identifies blocked issues by:
1. **Dependency Analysis** - Uses `list_issue_dependencies` to find unmet dependencies
2. **Comment Keywords** - Checks for "blocked", "blocker", "waiting for"
3. **Stale Issues** - Issues with no recent activity (>7 days)
1. **Progress Comments** - Parse `### Blockers` section from structured comments
2. **Status Labels** - Check for `Status/Blocked` label on issue
3. **Dependency Analysis** - Uses `list_issue_dependencies` to find unmet dependencies
4. **Comment Keywords** - Checks for "blocked", "blocker", "waiting for"
5. **Stale Issues** - Issues with no recent activity (>7 days)
## When to Use

View File

@@ -5,13 +5,43 @@
PREFIX="[projman]"
# Check if MCP venv exists
# Calculate paths
PLUGIN_ROOT="${CLAUDE_PLUGIN_ROOT:-$(dirname "$(dirname "$(realpath "$0")")")}"
VENV_PATH="$PLUGIN_ROOT/mcp-servers/gitea/.venv/bin/python"
# Marketplace root is 2 levels up from plugin root (plugins/projman -> .)
MARKETPLACE_ROOT="$(dirname "$(dirname "$PLUGIN_ROOT")")"
VENV_REPAIR_SCRIPT="$MARKETPLACE_ROOT/scripts/venv-repair.sh"
PLUGIN_CACHE="$HOME/.claude/plugins/cache/leo-claude-mktplace"
if [[ ! -f "$VENV_PATH" ]]; then
echo "$PREFIX MCP venvs missing - run setup.sh from installed marketplace"
exit 0
# ============================================================================
# Clear stale plugin cache (MUST run before MCP servers load)
# ============================================================================
# The cache at ~/.claude/plugins/cache/ holds versioned .mcp.json files.
# After marketplace updates, cached configs may point to old paths.
# Clearing forces Claude to read fresh configs from installed marketplace.
if [[ -d "$PLUGIN_CACHE" ]]; then
rm -rf "$PLUGIN_CACHE"
# Don't output anything - this should be silent and automatic
fi
# ============================================================================
# Auto-repair MCP venvs (runs before other checks)
# ============================================================================
if [[ -x "$VENV_REPAIR_SCRIPT" ]]; then
# Run venv repair - this creates symlinks to cached venvs
# Only outputs messages if something needed fixing
"$VENV_REPAIR_SCRIPT" 2>/dev/null || {
echo "$PREFIX MCP venv setup failed - run: cd $MARKETPLACE_ROOT && ./scripts/setup-venvs.sh"
exit 0
}
else
# Fallback: just check if venv exists
VENV_PATH="$PLUGIN_ROOT/mcp-servers/gitea/.venv/bin/python"
if [[ ! -f "$VENV_PATH" ]]; then
echo "$PREFIX MCP venvs missing - run setup.sh from installed marketplace"
exit 0
fi
fi
# Check git remote vs .env config (only if .env exists)

View File

@@ -13,9 +13,9 @@ description: Dynamic reference for Gitea label taxonomy (organization + reposito
This skill provides the current label taxonomy used for issue classification in Gitea. Labels are **fetched dynamically** from Gitea and should never be hardcoded.
**Current Taxonomy:** 43 labels (27 organization + 16 repository)
**Current Taxonomy:** 47 labels (31 organization + 16 repository)
## Organization Labels (27)
## Organization Labels (31)
Organization-level labels are shared across all repositories in your configured organization.
@@ -60,6 +60,12 @@ Organization-level labels are shared across all repositories in your configured
- `Type/Test` (#1d76db) - Testing-related work (unit, integration, e2e)
- `Type/Chore` (#fef2c0) - Maintenance, tooling, dependencies, build tasks
### Status (4)
- `Status/In-Progress` (#0052cc) - Work is actively being done on this issue
- `Status/Blocked` (#ff5630) - Blocked by external dependency or issue
- `Status/Failed` (#de350b) - Implementation attempted but failed, needs investigation
- `Status/Deferred` (#6554c0) - Moved to a future sprint or backlog
## Repository Labels (16)
Repository-level labels are specific to each project.
@@ -168,6 +174,28 @@ When suggesting labels for issues, consider the following patterns:
- Keywords: "deploy", "deployment", "docker", "infrastructure", "ci/cd", "production"
- Example: "Deploy authentication service to production"
### Status Detection
**Status/In-Progress:**
- Applied when: Agent starts working on an issue
- Remove when: Work completes, fails, or is blocked
- Example: Orchestrator applies when dispatching task to executor
**Status/Blocked:**
- Applied when: Issue cannot proceed due to external dependency
- Context: Waiting for another issue, external service, or decision
- Example: "Blocked by #45 - need JWT service first"
**Status/Failed:**
- Applied when: Implementation was attempted but failed
- Context: Errors, permission issues, technical blockers
- Example: Agent hit permission errors and couldn't complete
**Status/Deferred:**
- Applied when: Work is moved to a future sprint
- Context: Scope reduction, reprioritization
- Example: "Moving to Sprint 5 due to scope constraints"
### Tech Detection
**Tech/Python:**

View File

@@ -2,9 +2,8 @@
"mcpServers": {
"viz-platform": {
"type": "stdio",
"command": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/viz-platform/.venv/bin/python",
"args": ["-m", "mcp_server.server"],
"cwd": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/viz-platform"
"command": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/viz-platform/run.sh",
"args": []
}
}
}

View File

@@ -51,7 +51,10 @@ DMC_VERSION=0.14.7
| `/initial-setup` | Interactive setup wizard for DMC and theme preferences |
| `/component {name}` | Inspect component props and validation |
| `/chart {type}` | Create a Plotly chart |
| `/chart-export {format}` | Export chart to PNG, SVG, or PDF |
| `/dashboard {template}` | Create a dashboard layout |
| `/breakpoints {layout}` | Configure responsive breakpoints |
| `/accessibility-check` | Validate colors for color blind users |
| `/theme {name}` | Apply an existing theme |
| `/theme-new {name}` | Create a new custom theme |
| `/theme-css {name}` | Export theme as CSS |
@@ -75,15 +78,16 @@ Prevent invalid component props before runtime.
| `get_component_props` | Get detailed prop specifications |
| `validate_component` | Validate a component configuration |
### Charts (2 tools)
### Charts (3 tools)
Create Plotly charts with theme integration.
| Tool | Description |
|------|-------------|
| `chart_create` | Create a chart (line, bar, scatter, pie, etc.) |
| `chart_configure_interaction` | Configure chart interactivity |
| `chart_export` | Export chart to PNG, SVG, or PDF |
### Layouts (5 tools)
### Layouts (6 tools)
Build dashboard structures with filters and grids.
| Tool | Description |
@@ -91,9 +95,19 @@ Build dashboard structures with filters and grids.
| `layout_create` | Create a layout structure |
| `layout_add_filter` | Add filter components |
| `layout_set_grid` | Configure responsive grid |
| `layout_set_breakpoints` | Configure responsive breakpoints (xs-xl) |
| `layout_add_section` | Add content sections |
| `layout_get` | Retrieve layout details |
### Accessibility (3 tools)
Validate colors for accessibility and color blindness.
| Tool | Description |
|------|-------------|
| `accessibility_validate_colors` | Check colors for color blind accessibility |
| `accessibility_validate_theme` | Validate a theme's color accessibility |
| `accessibility_suggest_alternative` | Suggest accessible color alternatives |
### Themes (6 tools)
Manage design tokens and styling.
@@ -188,9 +202,37 @@ viz-platform works seamlessly with data-platform:
| `tabs` | Multi-page dashboards |
| `split` | Comparisons, master-detail |
## Responsive Breakpoints
The plugin supports mobile-first responsive design with standard breakpoints:
| Breakpoint | Min Width | Description |
|------------|-----------|-------------|
| `xs` | 0px | Extra small (mobile portrait) |
| `sm` | 576px | Small (mobile landscape) |
| `md` | 768px | Medium (tablet) |
| `lg` | 992px | Large (desktop) |
| `xl` | 1200px | Extra large (large desktop) |
Example:
```
/breakpoints my-dashboard
# Configure cols, spacing per breakpoint
```
## Color Accessibility
The plugin validates colors for color blindness:
- **Deuteranopia** (green-blind) - 6% of males
- **Protanopia** (red-blind) - 2.5% of males
- **Tritanopia** (blue-blind) - 0.01% of population
Includes WCAG contrast ratio checking and accessible palette suggestions.
## Requirements
- Python 3.10+
- dash-mantine-components >= 0.14.0
- plotly >= 5.18.0
- dash >= 2.14.0
- kaleido >= 0.2.1 (for chart export)

View File

@@ -0,0 +1,144 @@
---
description: Validate color accessibility for color blind users
---
# Accessibility Check
Validate theme or chart colors for color blind accessibility, checking contrast ratios and suggesting alternatives.
## Usage
```
/accessibility-check {target}
```
## Arguments
- `target` (optional): "theme" or "chart" - defaults to active theme
## Examples
```
/accessibility-check
/accessibility-check theme
/accessibility-check chart
```
## Tool Mapping
This command uses the `accessibility_validate_colors` MCP tool:
```python
accessibility_validate_colors(
colors=["#228be6", "#40c057", "#fa5252"], # Colors to check
check_types=["deuteranopia", "protanopia", "tritanopia"],
min_contrast_ratio=4.5 # WCAG AA standard
)
```
Or validate a full theme:
```python
accessibility_validate_theme(
theme_name="corporate"
)
```
## Workflow
1. **User invokes**: `/accessibility-check theme`
2. **Tool analyzes**: Theme color palette
3. **Tool simulates**: Color perception for each deficiency type
4. **Tool checks**: Contrast ratios between color pairs
5. **Tool returns**: Issues found and alternative suggestions
## Color Blindness Types
| Type | Affected Colors | Population |
|------|-----------------|------------|
| **Deuteranopia** | Red-Green (green-blind) | ~6% males, 0.4% females |
| **Protanopia** | Red-Green (red-blind) | ~2.5% males, 0.05% females |
| **Tritanopia** | Blue-Yellow | ~0.01% total |
## Output Example
```json
{
"theme_name": "corporate",
"overall_score": "B",
"issues": [
{
"type": "contrast",
"severity": "warning",
"colors": ["#fa5252", "#40c057"],
"affected_by": ["deuteranopia", "protanopia"],
"message": "Red and green may be indistinguishable for red-green color blind users",
"suggestion": "Use blue (#228be6) instead of green to differentiate from red"
},
{
"type": "contrast_ratio",
"severity": "error",
"colors": ["#fab005", "#ffffff"],
"ratio": 2.1,
"required": 4.5,
"message": "Insufficient contrast for WCAG AA compliance",
"suggestion": "Darken yellow to #e6a200 for ratio of 4.5+"
}
],
"recommendations": [
"Add patterns or shapes to distinguish data series, not just color",
"Include labels directly on chart elements",
"Consider using a color-blind safe palette"
],
"safe_palettes": {
"categorical": ["#4477AA", "#EE6677", "#228833", "#CCBB44", "#66CCEE", "#AA3377", "#BBBBBB"],
"sequential": ["#FEE0D2", "#FC9272", "#DE2D26"],
"diverging": ["#4575B4", "#FFFFBF", "#D73027"]
}
}
```
## WCAG Contrast Standards
| Level | Ratio | Use Case |
|-------|-------|----------|
| AA (normal text) | 4.5:1 | Body text, labels |
| AA (large text) | 3:1 | Headings, 14pt+ bold |
| AAA (enhanced) | 7:1 | Highest accessibility |
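For reference, the contrast ratio is derived from WCAG relative luminance; a minimal sketch of the formula (the MCP tool's internals may differ):
```python
def _channel(value: int) -> float:
    """Linearize one sRGB channel (0-255) per the WCAG definition."""
    c = value / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast_ratio(color_a: str, color_b: str) -> float:
    lighter, darker = sorted(
        (relative_luminance(color_a), relative_luminance(color_b)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# contrast_ratio("#fab005", "#ffffff") is roughly 2:1 - below the 4.5:1 AA threshold
```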
## Color-Blind Safe Palettes
The tool can suggest complete color-blind safe palettes:
### IBM Design Colors
Designed for accessibility:
```
#648FFF #785EF0 #DC267F #FE6100 #FFB000
```
### Tableau Colorblind 10
Industry-standard accessible palette:
```
#006BA4 #FF800E #ABABAB #595959 #5F9ED1
#C85200 #898989 #A2C8EC #FFBC79 #CFCFCF
```
### Okabe-Ito
Optimized for all types of color blindness:
```
#E69F00 #56B4E9 #009E73 #F0E442 #0072B2
#D55E00 #CC79A7 #000000
```
## Related Commands
- `/theme-new {name}` - Create accessible theme from the start
- `/theme-validate {name}` - General theme validation
- `/chart {type}` - Create chart (check colors after)
## Best Practices
1. **Don't rely on color alone** - Use shapes, patterns, or labels
2. **Test with simulation** - View your visualizations through color blindness simulators
3. **Use sufficient contrast** - Minimum 4.5:1 for text, 3:1 for large elements
4. **Limit color count** - Fewer colors = easier to distinguish
5. **Use semantic colors** - Blue for information, red for errors (with icons)

View File

@@ -0,0 +1,193 @@
---
description: Configure responsive breakpoints for dashboard layouts
---
# Configure Breakpoints
Configure responsive breakpoints for a layout to support mobile-first design across different screen sizes.
## Usage
```
/breakpoints {layout_ref}
```
## Arguments
- `layout_ref` (required): Layout name to configure breakpoints for
## Examples
```
/breakpoints my-dashboard
/breakpoints sales-report
```
## Tool Mapping
This command uses the `layout_set_breakpoints` MCP tool:
```python
layout_set_breakpoints(
layout_ref="my-dashboard",
breakpoints={
"xs": {"cols": 1, "spacing": "xs"}, # < 576px (mobile)
"sm": {"cols": 2, "spacing": "sm"}, # >= 576px (large mobile)
"md": {"cols": 6, "spacing": "md"}, # >= 768px (tablet)
"lg": {"cols": 12, "spacing": "md"}, # >= 992px (desktop)
"xl": {"cols": 12, "spacing": "lg"} # >= 1200px (large desktop)
},
mobile_first=True
)
```
## Workflow
1. **User invokes**: `/breakpoints my-dashboard`
2. **Agent asks**: Which breakpoints to customize? (shows current settings)
3. **Agent asks**: Mobile column count? (xs, typically 1-2)
4. **Agent asks**: Tablet column count? (md, typically 4-6)
5. **Agent applies**: Breakpoint configuration
6. **Agent returns**: Complete responsive configuration
## Breakpoint Sizes
| Name | Min Width | Common Devices |
|------|-----------|----------------|
| `xs` | 0px | Small phones (portrait) |
| `sm` | 576px | Large phones, small tablets |
| `md` | 768px | Tablets (portrait) |
| `lg` | 992px | Tablets (landscape), laptops |
| `xl` | 1200px | Desktops, large screens |
## Mobile-First Approach
When `mobile_first=True` (default), styles cascade up:
- Define base styles for `xs` (mobile)
- Override only what changes at larger breakpoints
- Smaller CSS footprint, better performance
```python
# Mobile-first example
{
"xs": {"cols": 1}, # Stack everything on mobile
"md": {"cols": 6}, # Two-column on tablet
"lg": {"cols": 12} # Full grid on desktop
}
```
When `mobile_first=False`, styles cascade down:
- Define base styles for `xl` (desktop)
- Override for smaller screens
- Traditional "desktop-first" approach
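For contrast, a desktop-first configuration (illustrative values only) defines the large-screen layout and overrides downward:
```python
# Desktop-first example (mobile_first=False)
{
    "xl": {"cols": 12},  # Full grid on large desktop
    "md": {"cols": 6},   # Reduce on tablet
    "xs": {"cols": 1}    # Stack everything on mobile
}
```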
## Grid Configuration per Breakpoint
Each breakpoint can configure:
| Property | Description | Values |
|----------|-------------|--------|
| `cols` | Grid column count | 1-24 |
| `spacing` | Gap between items | xs, sm, md, lg, xl |
| `gutter` | Outer padding | xs, sm, md, lg, xl |
| `grow` | Items grow to fill | true, false |
## Common Patterns
### Dashboard (Charts & Filters)
```python
{
"xs": {"cols": 1, "spacing": "xs"}, # Full-width cards
"sm": {"cols": 2, "spacing": "sm"}, # 2 cards per row
"md": {"cols": 3, "spacing": "md"}, # 3 cards per row
"lg": {"cols": 4, "spacing": "md"}, # 4 cards per row
"xl": {"cols": 6, "spacing": "lg"} # 6 cards per row
}
```
### Data Table
```python
{
"xs": {"cols": 1, "scroll": true}, # Horizontal scroll
"md": {"cols": 1, "scroll": false}, # Full table visible
"lg": {"cols": 1} # Same as md
}
```
### Form Layout
```python
{
"xs": {"cols": 1}, # Single column
"md": {"cols": 2}, # Two columns
"lg": {"cols": 3} # Three columns
}
```
### Sidebar Layout
```python
{
"xs": {"sidebar": "hidden"}, # No sidebar on mobile
"md": {"sidebar": "collapsed"}, # Icon-only sidebar
"lg": {"sidebar": "expanded"} # Full sidebar
}
```
## Component Span
Control how many columns a component spans at each breakpoint:
```python
# A chart that spans full width on mobile, half on desktop
{
"component": "sales-chart",
"span": {
"xs": 1, # Full width (1/1)
"md": 3, # Half width (3/6)
"lg": 6 # Half width (6/12)
}
}
```
## DMC Grid Integration
This maps to Dash Mantine Components Grid:
```python
dmc.Grid(
children=[
dmc.GridCol(
children=[chart],
span={"base": 12, "sm": 6, "lg": 4} # Responsive span
)
],
gutter="md"
)
```
## Output
```json
{
"layout_ref": "my-dashboard",
"breakpoints": {
"xs": {"cols": 1, "spacing": "xs", "min_width": "0px"},
"sm": {"cols": 2, "spacing": "sm", "min_width": "576px"},
"md": {"cols": 6, "spacing": "md", "min_width": "768px"},
"lg": {"cols": 12, "spacing": "md", "min_width": "992px"},
"xl": {"cols": 12, "spacing": "lg", "min_width": "1200px"}
},
"mobile_first": true,
"css_media_queries": [
"@media (min-width: 576px) { ... }",
"@media (min-width: 768px) { ... }",
"@media (min-width: 992px) { ... }",
"@media (min-width: 1200px) { ... }"
]
}
```
## Related Commands
- `/dashboard {template}` - Create layout with default breakpoints
- `/layout-set-grid` - Configure grid without responsive settings
- `/theme {name}` - Theme includes default spacing values

View File

@@ -0,0 +1,114 @@
---
description: Export a Plotly chart to PNG, SVG, or PDF format
---
# Export Chart
Export a Plotly chart to static image formats for sharing, embedding, or printing.
## Usage
```
/chart-export {format}
```
## Arguments
- `format` (required): Output format - one of: png, svg, pdf
## Examples
```
/chart-export png
/chart-export svg
/chart-export pdf
```
## Tool Mapping
This command uses the `chart_export` MCP tool:
```python
chart_export(
figure=figure_json, # Plotly figure JSON from chart_create
format="png", # Output format: png, svg, pdf
width=1200, # Optional: image width in pixels
height=800, # Optional: image height in pixels
scale=2, # Optional: resolution scale factor
output_path=None # Optional: save to file path
)
```
## Workflow
1. **User invokes**: `/chart-export png`
2. **Agent asks**: Which chart to export? (if multiple charts in context)
3. **Agent asks**: Image dimensions? (optional, uses chart defaults)
4. **Agent exports**: Chart with `chart_export` tool
5. **Agent returns**: Base64 image data or file path
## Output Formats
| Format | Best For | File Size |
|--------|----------|-----------|
| `png` | Web, presentations, general use | Medium |
| `svg` | Scalable graphics, editing | Small |
| `pdf` | Print, documents, archival | Large |
## Resolution Options
### Width & Height
Specify exact pixel dimensions:
```python
chart_export(figure, format="png", width=1920, height=1080)
```
### Scale Factor
Increase resolution for high-DPI displays:
```python
chart_export(figure, format="png", scale=3) # 3x resolution
```
Common scale values:
- `1` - Standard resolution (72 DPI)
- `2` - Retina/HiDPI (144 DPI)
- `3` - Print quality (216 DPI)
- `4` - High-quality print (288 DPI)
## Output Options
### Return as Base64
Default behavior - returns base64-encoded image data:
```python
result = chart_export(figure, format="png")
# result["image_data"] contains base64 string
```
### Save to File
Specify output path to save directly:
```python
chart_export(figure, format="png", output_path="/path/to/chart.png")
# result["file_path"] contains the saved path
```
## Requirements
This tool requires the `kaleido` package for rendering:
```bash
pip install kaleido
```
Kaleido is a cross-platform library that renders Plotly figures without a browser.
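Under the hood the export presumably delegates to Plotly's kaleido-backed image functions; a minimal standalone sketch of the equivalent calls (requires `plotly` and `kaleido` installed):
```python
import base64

import plotly.graph_objects as go
import plotly.io as pio

fig = go.Figure(data=[go.Bar(x=["A", "B", "C"], y=[3, 1, 2])])

# Save directly to disk ...
fig.write_image("chart.png", width=1200, height=800, scale=2)

# ... or get base64-encoded bytes, as the tool returns by default
image_b64 = base64.b64encode(
    pio.to_image(fig, format="png", width=1200, height=800, scale=2)
).decode("ascii")
```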
## Error Handling
Common issues:
- **Kaleido not installed**: Install with `pip install kaleido`
- **Invalid figure**: Ensure figure is valid Plotly JSON
- **Permission denied**: Check write permissions for output path
## Related Commands
- `/chart {type}` - Create a chart
- `/theme {name}` - Apply theme before export
- `/dashboard` - Create layout containing charts

View File

@@ -1,17 +1,20 @@
#!/usr/bin/env bash
#
# post-update.sh - Run after pulling updates
# post-update.sh - Run after pulling updates or marketplace sync
#
# Usage: ./scripts/post-update.sh
#
# This script:
# 1. Updates Python dependencies for MCP servers
# 2. Validates configuration still works
# 3. Reports any new manual steps from CHANGELOG
# 1. Clears Claude plugin cache (forces fresh .mcp.json reads)
# 2. Restores MCP venv symlinks (instant if cache exists)
# 3. Creates venvs in external cache if missing (first run only)
# 4. Shows recent changelog updates
#
set -euo pipefail
CLAUDE_PLUGIN_CACHE="$HOME/.claude/plugins/cache/leo-claude-mktplace"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(dirname "$SCRIPT_DIR")"
@@ -19,42 +22,25 @@ REPO_ROOT="$(dirname "$SCRIPT_DIR")"
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
RED='\033[0;31m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[OK]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
update_mcp_server() {
local server_name="$1"
local server_path="$REPO_ROOT/mcp-servers/$server_name"
log_info "Updating $server_name dependencies..."
if [[ -d "$server_path/.venv" ]] && [[ -f "$server_path/requirements.txt" ]]; then
cd "$server_path"
source .venv/bin/activate
pip install -q --upgrade pip
pip install -q -r requirements.txt
deactivate
cd "$REPO_ROOT"
log_success "$server_name dependencies updated"
else
log_warn "$server_name not fully set up - run ./scripts/setup.sh first"
fi
}
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
check_changelog() {
log_info "Checking CHANGELOG for recent updates..."
if [[ -f "$REPO_ROOT/CHANGELOG.md" ]]; then
# Show the Unreleased section
echo ""
echo "Recent changes (from CHANGELOG.md):"
echo "-----------------------------------"
sed -n '/## \[Unreleased\]/,/## \[/p' "$REPO_ROOT/CHANGELOG.md" | head -30
echo "-----------------------------------"
echo ""
local unreleased
unreleased=$(sed -n '/## \[Unreleased\]/,/## \[/p' "$REPO_ROOT/CHANGELOG.md" | grep -E '^### ' | head -1 || true)
if [[ -n "$unreleased" ]]; then
echo ""
log_info "Recent changes (from CHANGELOG.md):"
echo "-----------------------------------"
sed -n '/## \[Unreleased\]/,/## \[/p' "$REPO_ROOT/CHANGELOG.md" | head -20
echo "-----------------------------------"
fi
fi
}
@@ -64,23 +50,37 @@ main() {
echo "=============================================="
echo ""
# Shared MCP servers at repository root (v3.0.0+)
update_mcp_server "gitea"
update_mcp_server "netbox"
update_mcp_server "data-platform"
# Clear Claude plugin cache to force fresh .mcp.json reads
# This cache holds versioned copies that become stale after updates
if [[ -d "$CLAUDE_PLUGIN_CACHE" ]]; then
log_info "Clearing Claude plugin cache..."
rm -rf "$CLAUDE_PLUGIN_CACHE"
log_success "Plugin cache cleared"
fi
# Run venv-repair.sh to restore symlinks to external cache
# This is instant if cache exists, or does full setup on first run
if [[ -x "$SCRIPT_DIR/venv-repair.sh" ]]; then
log_info "Restoring MCP venv symlinks..."
if "$SCRIPT_DIR/venv-repair.sh"; then
log_success "MCP venvs ready"
else
log_error "MCP venv setup failed"
log_warn "Run: $SCRIPT_DIR/setup-venvs.sh for full setup"
exit 1
fi
else
log_error "venv-repair.sh not found at $SCRIPT_DIR"
exit 1
fi
check_changelog
echo ""
log_success "Post-update complete!"
echo ""
echo "If you see new features in the changelog that require"
echo "configuration changes, update your ~/.config/claude/*.env files."
echo "IMPORTANT: Restart Claude Code for changes to take effect."
echo "MCP servers will work immediately on next session start."
}
main "$@"
# Clear plugin cache to ensure fresh hooks are loaded
echo "Clearing plugin cache..."
rm -rf ~/.claude/plugins/cache/leo-claude-mktplace/
echo "Cache cleared"

281
scripts/setup-venvs.sh Executable file
View File

@@ -0,0 +1,281 @@
#!/usr/bin/env bash
#
# setup-venvs.sh - Smart MCP server venv management with external cache
#
# This script manages Python virtual environments for MCP servers in a
# PERSISTENT location outside the marketplace directory, so they survive
# marketplace updates.
#
# Features:
# - Stores venvs in ~/.cache/claude-mcp-venvs/ (survives updates)
# - Incremental installs (only missing packages)
# - Hash-based change detection (skip if requirements unchanged)
# - Can be called from SessionStart hooks safely
#
# Usage:
# ./scripts/setup-venvs.sh # Full setup
# ./scripts/setup-venvs.sh --check # Check only, no install
# ./scripts/setup-venvs.sh --quick # Skip if hash unchanged
# ./scripts/setup-venvs.sh gitea # Setup specific server only
#
set -euo pipefail
# ============================================================================
# Configuration
# ============================================================================
# Persistent venv location (outside marketplace)
VENV_CACHE_DIR="${HOME}/.cache/claude-mcp-venvs/leo-claude-mktplace"
# Script and repo paths
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(dirname "$SCRIPT_DIR")"
# MCP servers to manage
MCP_SERVERS=(gitea netbox data-platform viz-platform contract-validator)
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
# Flags
CHECK_ONLY=false
QUICK_MODE=false
SPECIFIC_SERVER=""
# ============================================================================
# Argument Parsing
# ============================================================================
while [[ $# -gt 0 ]]; do
case $1 in
--check)
CHECK_ONLY=true
shift
;;
--quick)
QUICK_MODE=true
shift
;;
-h|--help)
echo "Usage: $0 [OPTIONS] [SERVER]"
echo ""
echo "Options:"
echo " --check Check venv status without installing"
echo " --quick Skip servers with unchanged requirements"
echo " -h,--help Show this help"
echo ""
echo "Servers: ${MCP_SERVERS[*]}"
exit 0
;;
*)
SPECIFIC_SERVER="$1"
shift
;;
esac
done
# ============================================================================
# Helper Functions
# ============================================================================
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_ok() { echo -e "${GREEN}[OK]${NC} $1"; }
log_skip() { echo -e "${YELLOW}[SKIP]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1" >&2; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
# Calculate hash of requirements file(s)
requirements_hash() {
local server_path="$1"
local hash_input=""
if [[ -f "$server_path/requirements.txt" ]]; then
hash_input+=$(cat "$server_path/requirements.txt")
fi
if [[ -f "$server_path/pyproject.toml" ]]; then
hash_input+=$(cat "$server_path/pyproject.toml")
fi
echo "$hash_input" | sha256sum | cut -d' ' -f1
}
# Check if requirements changed since last install
requirements_changed() {
local server_name="$1"
local server_path="$2"
local hash_file="$VENV_CACHE_DIR/$server_name/.requirements_hash"
local current_hash
current_hash=$(requirements_hash "$server_path")
if [[ -f "$hash_file" ]]; then
local stored_hash
stored_hash=$(cat "$hash_file")
if [[ "$current_hash" == "$stored_hash" ]]; then
return 1 # Not changed
fi
fi
return 0 # Changed or no hash file
}
# Save requirements hash after successful install
save_requirements_hash() {
local server_name="$1"
local server_path="$2"
local hash_file="$VENV_CACHE_DIR/$server_name/.requirements_hash"
requirements_hash "$server_path" > "$hash_file"
}
# ============================================================================
# Main Setup Function
# ============================================================================
setup_server() {
local server_name="$1"
local server_path="$REPO_ROOT/mcp-servers/$server_name"
local venv_path="$VENV_CACHE_DIR/$server_name/.venv"
# Verify server exists in repo
if [[ ! -d "$server_path" ]]; then
log_error "$server_name: source directory not found at $server_path"
return 1
fi
# Check-only mode
if [[ "$CHECK_ONLY" == true ]]; then
if [[ -f "$venv_path/bin/python" ]]; then
log_ok "$server_name: venv exists"
else
log_error "$server_name: venv MISSING"
return 1
fi
return 0
fi
# Quick mode: skip if requirements unchanged
if [[ "$QUICK_MODE" == true ]] && [[ -f "$venv_path/bin/python" ]]; then
if ! requirements_changed "$server_name" "$server_path"; then
log_skip "$server_name: requirements unchanged"
return 0
fi
fi
log_info "$server_name: setting up venv..."
# Create cache directory
mkdir -p "$VENV_CACHE_DIR/$server_name"
# Create venv if missing
if [[ ! -d "$venv_path" ]]; then
python3 -m venv "$venv_path"
log_ok "$server_name: venv created"
fi
# Activate and install
# shellcheck disable=SC1091
source "$venv_path/bin/activate"
# Upgrade pip quietly
pip install -q --upgrade pip
# Install requirements (incremental - pip handles already-installed)
if [[ -f "$server_path/requirements.txt" ]]; then
pip install -q -r "$server_path/requirements.txt"
fi
# Install local package in editable mode if pyproject.toml exists
if [[ -f "$server_path/pyproject.toml" ]]; then
pip install -q -e "$server_path"
log_ok "$server_name: package installed (editable)"
fi
deactivate
# Save hash for quick mode
save_requirements_hash "$server_name" "$server_path"
log_ok "$server_name: ready"
}
# ============================================================================
# Create Symlinks (for backward compatibility)
# ============================================================================
create_symlinks() {
log_info "Creating symlinks for backward compatibility..."
for server_name in "${MCP_SERVERS[@]}"; do
local server_path="$REPO_ROOT/mcp-servers/$server_name"
local venv_path="$VENV_CACHE_DIR/$server_name/.venv"
local link_path="$server_path/.venv"
# Skip if source doesn't exist
[[ ! -d "$server_path" ]] && continue
# Skip if venv not in cache
[[ ! -d "$venv_path" ]] && continue
# Remove existing venv or symlink
if [[ -L "$link_path" ]]; then
rm "$link_path"
elif [[ -d "$link_path" ]]; then
log_warn "$server_name: removing old venv directory (now using cache)"
rm -rf "$link_path"
fi
# Create symlink
ln -s "$venv_path" "$link_path"
log_ok "$server_name: symlink created"
done
}
# ============================================================================
# Main
# ============================================================================
main() {
echo "=============================================="
echo " MCP Server Venv Manager"
echo "=============================================="
echo "Cache: $VENV_CACHE_DIR"
echo ""
local failed=0
if [[ -n "$SPECIFIC_SERVER" ]]; then
# Setup specific server
if setup_server "$SPECIFIC_SERVER"; then
: # success
else
failed=1
fi
else
# Setup all servers
for server in "${MCP_SERVERS[@]}"; do
if ! setup_server "$server"; then
((failed++)) || true
fi
done
fi
# Create symlinks for backward compatibility
if [[ "$CHECK_ONLY" != true ]]; then
create_symlinks
fi
echo ""
if [[ $failed -eq 0 ]]; then
log_ok "All MCP servers ready"
else
log_error "$failed server(s) failed"
return 1
fi
}
main "$@"

View File

@@ -72,11 +72,16 @@ setup_shared_mcp() {
log_success "$server_name venv created"
fi
# Install/update dependencies
# Install/update dependencies and local package
if [[ -f "requirements.txt" ]]; then
source .venv/bin/activate
pip install -q --upgrade pip
pip install -q -r requirements.txt
# Install local package in editable mode (required for MCP server to work)
if [[ -f "pyproject.toml" ]]; then
pip install -q -e .
log_success "$server_name package installed (editable mode)"
fi
deactivate
log_success "$server_name dependencies installed"
else
@@ -125,6 +130,24 @@ verify_symlinks() {
log_error "data-platform symlink missing"
log_todo "Run: ln -s ../../../mcp-servers/data-platform plugins/data-platform/mcp-servers/data-platform"
fi
# Check viz-platform -> viz-platform symlink
local vizplatform_link="$REPO_ROOT/plugins/viz-platform/mcp-servers/viz-platform"
if [[ -L "$vizplatform_link" ]]; then
log_success "viz-platform symlink exists"
else
log_error "viz-platform symlink missing"
log_todo "Run: ln -s ../../../mcp-servers/viz-platform plugins/viz-platform/mcp-servers/viz-platform"
fi
# Check contract-validator -> contract-validator symlink
local contractvalidator_link="$REPO_ROOT/plugins/contract-validator/mcp-servers/contract-validator"
if [[ -L "$contractvalidator_link" ]]; then
log_success "contract-validator symlink exists"
else
log_error "contract-validator symlink missing"
log_todo "Run: ln -s ../../../mcp-servers/contract-validator plugins/contract-validator/mcp-servers/contract-validator"
fi
}
# --- Section 3: Config File Templates ---
@@ -301,7 +324,7 @@ print_report() {
# --- Main ---
main() {
echo "=============================================="
echo " Leo Claude Marketplace Setup (v3.0.0)"
echo " Leo Claude Marketplace Setup (v5.1.0)"
echo "=============================================="
echo ""
@@ -309,6 +332,8 @@ main() {
setup_shared_mcp "gitea"
setup_shared_mcp "netbox"
setup_shared_mcp "data-platform"
setup_shared_mcp "viz-platform"
setup_shared_mcp "contract-validator"
# Verify symlinks from plugins to shared MCP servers
verify_symlinks

169
scripts/venv-repair.sh Executable file
View File

@@ -0,0 +1,169 @@
#!/usr/bin/env bash
#
# venv-repair.sh - Fast MCP venv auto-repair for SessionStart hooks
#
# This script is designed to run at session start. It:
# 1. Checks if venvs exist in external cache (~/.cache/claude-mcp-venvs/)
# 2. Creates symlinks from marketplace to cache (instant operation)
# 3. Only runs pip install if cache is missing (first install)
#
# Output format: All messages prefixed with [mcp-venv] for hook display
#
# Usage:
# ./scripts/venv-repair.sh # Auto-repair (default)
# ./scripts/venv-repair.sh --silent # Silent mode (no output unless error)
#
set -euo pipefail
# ============================================================================
# Configuration
# ============================================================================
PREFIX="[mcp-venv]"
VENV_CACHE_DIR="${HOME}/.cache/claude-mcp-venvs/leo-claude-mktplace"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(dirname "$SCRIPT_DIR")"
# MCP servers
MCP_SERVERS=(gitea netbox data-platform viz-platform contract-validator)
# Parse args
SILENT=false
[[ "${1:-}" == "--silent" ]] && SILENT=true
log() {
[[ "$SILENT" == true ]] && return
echo "$PREFIX $1"
}
log_error() {
echo "$PREFIX ERROR: $1" >&2
}
# ============================================================================
# Check if all venvs exist in cache
# ============================================================================
cache_complete() {
for server in "${MCP_SERVERS[@]}"; do
local venv_python="$VENV_CACHE_DIR/$server/.venv/bin/python"
[[ ! -f "$venv_python" ]] && return 1
done
return 0
}
# ============================================================================
# Create symlinks from marketplace to cache
# ============================================================================
create_symlink() {
local server_name="$1"
local server_path="$REPO_ROOT/mcp-servers/$server_name"
local venv_cache="$VENV_CACHE_DIR/$server_name/.venv"
local venv_link="$server_path/.venv"
# Skip if server doesn't exist
[[ ! -d "$server_path" ]] && return 0
# Skip if cache doesn't exist
[[ ! -d "$venv_cache" ]] && return 1
# Already correct symlink?
if [[ -L "$venv_link" ]]; then
local target
target=$(readlink "$venv_link")
[[ "$target" == "$venv_cache" ]] && return 0
rm "$venv_link"
elif [[ -d "$venv_link" ]]; then
# Old venv directory exists - back it up or remove
rm -rf "$venv_link"
fi
# Create symlink
ln -s "$venv_cache" "$venv_link"
return 0
}
create_all_symlinks() {
local created=0
for server in "${MCP_SERVERS[@]}"; do
if create_symlink "$server"; then
((created++)) || true
fi
done
[[ $created -gt 0 ]] && log "Restored $created venv symlinks"
return 0  # avoid a non-zero status under set -e when nothing needed restoring
}
# ============================================================================
# Full setup (only if cache missing)
# ============================================================================
setup_server() {
local server_name="$1"
local server_path="$REPO_ROOT/mcp-servers/$server_name"
local venv_path="$VENV_CACHE_DIR/$server_name/.venv"
[[ ! -d "$server_path" ]] && return 0
mkdir -p "$VENV_CACHE_DIR/$server_name"
# Create venv
if [[ ! -d "$venv_path" ]]; then
python3 -m venv "$venv_path"
fi
# Install dependencies
# shellcheck disable=SC1091
source "$venv_path/bin/activate"
pip install -q --upgrade pip
if [[ -f "$server_path/requirements.txt" ]]; then
pip install -q -r "$server_path/requirements.txt"
fi
if [[ -f "$server_path/pyproject.toml" ]]; then
pip install -q -e "$server_path"
fi
deactivate
# Save hash for future quick checks
local hash_file="$VENV_CACHE_DIR/$server_name/.requirements_hash"
{
if [[ -f "$server_path/requirements.txt" ]]; then
cat "$server_path/requirements.txt"
fi
if [[ -f "$server_path/pyproject.toml" ]]; then
cat "$server_path/pyproject.toml"
fi
echo "" # Ensure non-empty input for sha256sum
} | sha256sum | cut -d' ' -f1 > "$hash_file"
}
full_setup() {
log "First run - setting up MCP venvs (this only happens once)..."
for server in "${MCP_SERVERS[@]}"; do
log " Setting up $server..."
setup_server "$server"
done
log "Setup complete. Future sessions will be instant."
}
# ============================================================================
# Main
# ============================================================================
main() {
# Fast path: cache exists, just ensure symlinks
if cache_complete; then
create_all_symlinks
exit 0
fi
# Slow path: need to create venvs (first install)
full_setup
create_all_symlinks
}
main "$@"

View File

@@ -23,7 +23,7 @@ if [ -d ~/.claude/plugins/cache/leo-claude-mktplace ]; then
fi
# Verify installed hooks are command type
for plugin in doc-guardian code-sentinel projman pr-review project-hygiene data-platform; do
for plugin in doc-guardian code-sentinel projman pr-review project-hygiene data-platform cmdb-assistant; do
HOOK_FILE=~/.claude/plugins/marketplaces/leo-claude-mktplace/plugins/$plugin/hooks/hooks.json
if [ -f "$HOOK_FILE" ]; then
if grep -q '"type": "command"' "$HOOK_FILE" || grep -q '"type":"command"' "$HOOK_FILE"; then