162 Commits

Author SHA1 Message Date
c6182a3fda Merge pull request 'feat(projman): add plan-then-batch skill optimization' (#421) from feat/plan-then-batch-optimization into development
Reviewed-on: #421
2026-02-04 00:59:03 +00:00
0e70156e26 feat(projman): add plan-then-batch skill optimization
Separate cognitive work from mechanical API execution to reduce
skill-related token consumption by ~76-83% during sprint workflows.

Changes:
- Add batch-execution.md skill with 4-phase protocol
- Promote mcp-tools-reference and batch-execution to frontmatter
  for planner and orchestrator agents (auto-injected, zero re-read)
- Replace "Skills to Load" with phase-based "Skill Loading Protocol"
- Restructure planning-workflow.md Steps 8-10 for batch execution
- Update agent matrix in CLAUDE.md and docs/CONFIGURATION.md
- Add Phase-Based Skill Loading documentation section
- Clean up .gitignore (transient files, dev symlinks)

Token impact:
- 6-issue sprint planning: ~76% reduction
- 10-issue sprint planning: ~80% reduction
- 8-issue status updates: ~83% reduction

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 19:57:10 -05:00
01c225540b Merge pull request 'development' (#420) from development into main
Reviewed-on: #420
2026-02-03 20:48:48 +00:00
52c5be32c4 Merge pull request 'chore: release v5.10.0' (#419) from chore/release-5.10.0 into development
Reviewed-on: #419
2026-02-03 20:48:33 +00:00
46e83bc711 chore: release v5.10.0
- NetBox MCP: Module-based tool filtering for token optimization
- Gitea MCP: Standardized build backend to setuptools
- cmdb-assistant: Fixed documentation tool name references

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 15:11:15 -05:00
c0443a7f36 Merge pull request 'development' (#418) from development into main
Reviewed-on: #418
2026-02-03 20:07:40 +00:00
c4dd4ee25d Merge pull request 'fix(gitea): standardize build backend to setuptools' (#417) from fix/gitea-mcp-setuptools into development
Reviewed-on: #417
2026-02-03 20:07:24 +00:00
184ab48933 fix(gitea): standardize build backend to setuptools
Replace hatchling with setuptools to match all other MCP servers
(contract-validator, viz-platform, data-platform).

Changes:
- build-system: hatchling → setuptools>=61.0
- license: string → PEP 639 format {text = "MIT"}
- Remove redundant License classifier
- Add [tool.setuptools.packages.find] config
- Add [tool.pytest.ini_options] for consistency

Verified: pip install -e . succeeds, 36 tools registered, 64 tests pass.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 15:05:56 -05:00
a741ec3f88 Merge pull request 'development' (#416) from development into main
Reviewed-on: #416
2026-02-03 19:47:50 +00:00
f1732f07c1 Merge pull request 'feat(gitea): add pip-installable packaging for external consumption' (#415) from feat/gitea-mcp-packaging into development
Reviewed-on: #415
2026-02-03 19:47:33 +00:00
f9df3b57ea Merge pull request 'development' (#414) from development into main
Reviewed-on: #414
2026-02-03 19:23:55 +00:00
b0e6d738fa Merge pull request 'fix(gitea): fix 15 failing tests and update documentation' (#413) from fix/gitea-mcp-tests-docs into development
Reviewed-on: #413
2026-02-03 19:23:41 +00:00
9044fe28ec fix(gitea): fix 15 failing tests and update documentation
Test Fixes:
- Fix mock_config fixture to use 'owner/repo' format (was separate fields)
- Update test_client_initialization to match current client API
- Add required 'org' argument to get_org_labels, list_repos, aggregate_issues tests
- Update error message assertion in test_no_repo_specified_error
- Fix test_create_issue to mock is_org_repo and label resolution
- Update aggregate_issues tests in test_issues.py with org argument

Documentation Updates:
- Expand tools table from 8 to 36 tools (organized by category)
- Update directory structure to show all 6 tool files
- Remove unused GITEA_OWNER from configuration docs
- Add automatic repository detection documentation
- Add project directory detection strategies
- Update test count from 42 to 64
- Create CHANGELOG.md with full version history

All 64 tests now pass. No production code changes.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 14:22:02 -05:00
c37107fc42 feat(gitea): add pip-installable packaging for external consumption
Extract tool definitions and dispatcher from server.py into tool_registry.py
to enable transport-agnostic reuse. External consumers (e.g., HTTP transport
in gitea-mcp-remote) can now import and use the Gitea MCP tools without
duplicating code.

Changes:
- Create pyproject.toml with PEP 621 compliant package manifest (hatchling)
- Create tool_registry.py with get_tool_definitions() and create_tool_dispatcher()
- Refactor server.py to use registry (1100 -> 93 lines)
- Update __init__.py with package exports and __version__

The tool_filter parameter enables selective tool exposure for remote servers.
Stdio transport behavior is unchanged - all 36 tools still work identically.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 13:57:59 -05:00
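The registry split above can be pictured with a minimal sketch. Only the names get_tool_definitions, create_tool_dispatcher, and tool_filter come from the commit message; the dict-based registry and the handler signature below are assumptions for illustration, not the actual module.

```python
# Minimal sketch of a transport-agnostic registry. get_tool_definitions,
# create_tool_dispatcher, and tool_filter are named in the commit; the dict-based
# registry and handler signature are assumptions.
from typing import Any, Awaitable, Callable, Optional

ToolHandler = Callable[[dict[str, Any]], Awaitable[Any]]

# Hypothetical internal registry: tool name -> (definition, handler)
_REGISTRY: dict[str, tuple[dict[str, Any], ToolHandler]] = {}

def get_tool_definitions(tool_filter: Optional[set[str]] = None) -> list[dict[str, Any]]:
    """Return tool definitions, optionally restricted to a subset of tool names."""
    return [
        definition
        for name, (definition, _handler) in _REGISTRY.items()
        if tool_filter is None or name in tool_filter
    ]

def create_tool_dispatcher(tool_filter: Optional[set[str]] = None):
    """Build an async dispatcher that routes call_tool requests to handlers."""
    async def dispatch(name: str, arguments: dict[str, Any]) -> Any:
        if name not in _REGISTRY or (tool_filter is not None and name not in tool_filter):
            raise ValueError(f"Unknown or disabled tool: {name}")
        _definition, handler = _REGISTRY[name]
        return await handler(arguments)
    return dispatch
```

A stdio server and an external HTTP transport can then import the same registry, which is the reuse the commit describes.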
841ce67dae Merge pull request 'development' (#412) from development into main
Reviewed-on: #412
2026-02-03 17:12:09 +00:00
da0be51946 Merge pull request 'feat(netbox): add module-based tool filtering for token optimization' (#411) from feat/netbox-module-filtering into development
Reviewed-on: #411
2026-02-03 17:11:44 +00:00
d9d80d77cb feat(netbox): add module-based tool filtering for token optimization
Reduces NetBox MCP context token consumption from ~19,810 tokens (182 tools)
to ~4,500 tokens (~43 tools) by enabling environment-variable-driven module
filtering.

Key changes:
- Add NETBOX_ENABLED_MODULES env var to config.py
- Filter tool registration based on enabled modules in server.py
- Conditional tool class instantiation for memory efficiency
- Routing guard with clear error messages for disabled modules
- Startup logging shows enabled modules and tool count

Also fixes documentation referencing incorrect tool names:
- virtualization_* → virt_* in cmdb-assistant docs
- wireless_* → wlan_* in README
- circuits_list_circuit_terminations → circ_list_terminations

Recommended config for cmdb-assistant users:
NETBOX_ENABLED_MODULES=dcim,ipam,virtualization,extras

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 12:08:46 -05:00
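A minimal sketch of what the env-driven module filtering can look like. NETBOX_ENABLED_MODULES and the recommended value come from the commit; the full module catalogue and the helper names are hypothetical.

```python
# Illustrative module filter. NETBOX_ENABLED_MODULES and the recommended value
# "dcim,ipam,virtualization,extras" come from the commit; the full module list
# and the helper names are hypothetical.
import os

ALL_MODULES = ["dcim", "ipam", "virtualization", "wireless", "circuits", "extras"]

def enabled_modules() -> list[str]:
    raw = os.environ.get("NETBOX_ENABLED_MODULES", "")
    if not raw.strip():
        return ALL_MODULES  # no filter configured: expose all tools
    requested = [m.strip() for m in raw.split(",") if m.strip()]
    return [m for m in requested if m in ALL_MODULES]

def routing_guard(module: str) -> None:
    """Fail with a clear message when a tool from a disabled module is called."""
    if module not in enabled_modules():
        raise RuntimeError(
            f"Module '{module}' is disabled; add it to NETBOX_ENABLED_MODULES to use this tool."
        )
```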
3557f17177 Merge pull request 'development' (#410) from development into main
Reviewed-on: #410
2026-02-03 16:11:00 +00:00
a005610a37 Merge pull request 'feat(agents): add permissionMode, disallowedTools, skills frontmatter to all 25 agents' (#409) from feat/agent-frontmatter-hardening-v3 into development
Reviewed-on: #409
2026-02-03 16:10:25 +00:00
19ba80191f feat(agents): add permissionMode, disallowedTools, skills frontmatter to all 25 agents
- permissionMode: 1 bypassPermissions, 7 acceptEdits, 7 default, 10 plan
- disallowedTools: 12 agents blocked from Write/Edit/MultiEdit
- model: promote Planner + Code Reviewer to opus
- skills: auto-inject on Executor (7), Code Reviewer (4), Maintainer (2)
- docs: CLAUDE.md + CONFIGURATION.md updated with full agent matrix

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 11:08:49 -05:00
8f9ba64688 Merge pull request 'development' (#408) from development into main
Reviewed-on: #408
2026-02-03 07:42:17 +00:00
e35e22cffb Merge pull request 'chore: release v5.9.0' (#407) from release/v5.9.0 into development
Reviewed-on: #407
2026-02-03 07:41:20 +00:00
61907b78db chore: release v5.9.0
- Plugin installation scripts for consumer projects
- MCP server mapping via mcp_servers field in plugin.json
- CLAUDE.md section markers for install/uninstall
- Agent model selection (25 agents with model frontmatter)
- Agent frontmatter standardization

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 02:38:31 -05:00
0ea30e0d75 Merge pull request 'development' (#406) from development into main
Reviewed-on: #406
2026-02-03 07:15:10 +00:00
c4037f505c Merge pull request 'fix(plugins): remove invalid mcp_servers key from plugin.json files' (#405) from fix/startup-hook-venv-cache-path into development
Reviewed-on: #405
2026-02-03 07:14:24 +00:00
dbf3fa7e0d fix(plugins): remove invalid mcp_servers key from plugin.json files
The mcp_servers key is not a valid key in the Claude plugin manifest
schema. MCP servers are configured in the root .mcp.json file instead.

Affected plugins:
- cmdb-assistant
- contract-validator
- data-platform
- pr-review
- projman
- viz-platform

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 02:13:46 -05:00
c530f568ed Merge pull request 'development' (#404) from development into main
Reviewed-on: #404
2026-02-03 03:40:17 +00:00
6d093e83b6 Merge pull request 'fix(hooks): add auto-symlink creation in data-platform startup hook' (#403) from fix/startup-hook-venv-cache-path into development
Reviewed-on: #403
2026-02-03 03:40:00 +00:00
13de992638 fix(hooks): add auto-symlink creation in data-platform startup hook
Note: This fix may not help because MCP servers fail BEFORE hooks run.
See lessons learned wiki page for full debug trace.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 22:38:27 -05:00
ef28f172d6 fix(plugins): sync plugin.json versions with marketplace.json
Plugin load failures were caused by version mismatch between
marketplace.json and individual plugin.json files:
- contract-validator: 1.2.0 vs 1.1.0
- git-flow: 1.2.0 vs 1.0.0
- projman: 3.4.0 vs 3.3.0
- clarity-assist: 1.2.0 vs 1.0.0
- doc-guardian: 1.1.0 vs 1.0.0

Claude Code silently fails to load plugins when versions don't match.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 22:26:24 -05:00
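Because Claude Code skips mismatched plugins silently, a small consistency check catches this class of drift early. The script below is a hypothetical helper (not shipped in the repo) and assumes marketplace.json at the repo root with plugins/<name>/plugin.json manifests.

```python
# Hypothetical drift check: compare marketplace.json plugin versions against each
# plugin's own plugin.json. Paths assume the layout described in the commits.
import json
from pathlib import Path

def find_version_mismatches(root: Path) -> list[str]:
    marketplace = json.loads((root / "marketplace.json").read_text())
    problems = []
    for entry in marketplace.get("plugins", []):
        name = entry["name"]
        manifest = root / "plugins" / name / "plugin.json"
        if not manifest.exists():
            problems.append(f"{name}: plugin.json not found")
            continue
        plugin_version = json.loads(manifest.read_text()).get("version")
        if plugin_version != entry.get("version"):
            problems.append(f"{name}: marketplace {entry.get('version')} vs plugin {plugin_version}")
    return problems

if __name__ == "__main__":
    for line in find_version_mismatches(Path(".")):
        print(line)
```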
027ae660c4 Merge pull request 'development' (#402) from development into main
Reviewed-on: #402
2026-02-03 03:15:17 +00:00
39556dbb59 Merge pull request 'fix(hooks): check venv cache path before marketplace path' (#401) from fix/startup-hook-venv-cache-path into development
Reviewed-on: #401
2026-02-03 03:15:00 +00:00
c9e054e013 fix(hooks): check venv cache path before marketplace path
Startup hooks in data-platform and pr-review were checking for venvs
at the marketplace path (~/.claude/plugins/marketplaces/.../mcp-servers/)
which gets wiped on updates. The actual venvs live in the cache directory
(~/.cache/claude-mcp-venvs/) which survives updates.

This caused false "MCP venv missing" errors even when venvs existed,
wasting hours of debugging time.

Fixed hooks now check cache path first, matching the pattern used
by run.sh scripts.

Also updated docs/CANONICAL-PATHS.md with the correct venv path pattern
to prevent future occurrences.

Lesson learned: lessons/patterns/startup-hooks-must-check-venv-cache-path-first

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 22:13:40 -05:00
5e20c6b6ef Merge pull request 'development' (#400) from development into main
Reviewed-on: #400
2026-02-03 02:58:24 +00:00
db8fec42f2 Merge pull request 'fix(hooks): correct venv path in startup-check scripts' (#399) from fix/startup-hook-venv-paths into development
Reviewed-on: #399
2026-02-03 02:58:09 +00:00
ba1dee4553 fix(hooks): correct venv path in startup-check scripts
The startup hooks were looking for MCP venvs relative to the plugin
directory instead of the marketplace root, causing false "venv missing"
errors on every session start.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 21:56:43 -05:00
5e20a4a229 Merge pull request 'development' (#398) from development into main
Reviewed-on: #398
2026-02-03 01:40:43 +00:00
01e184b68f Merge pull request 'feat(agents): add model selection and standardize frontmatter' (#397) from fix/plugin-install-mcp-mapping into development
Reviewed-on: #397
2026-02-03 01:39:32 +00:00
c0d62f4957 feat(agents): add model selection and standardize frontmatter
Add per-agent model selection using Claude Code's now-supported `model`
frontmatter field, and standardize all agent frontmatter across the
marketplace.

Changes:
- Add `model` field to all 25 agents (18 sonnet, 7 haiku)
- Fix viz-platform/data-platform agents using `agent:` instead of `name:`
- Remove non-standard `triggers:` field from domain agents
- Add missing frontmatter to 13 agents
- Document model selection in CLAUDE.md and CONFIGURATION.md
- Fix undocumented commands in README.md

Model assignments based on reasoning depth, tool complexity, and latency:
- sonnet: Planner, Orchestrator, Executor, Coordinator, Security Reviewers
- haiku: Maintainability Auditor, Test Validator, Git Assistant, etc.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 20:37:58 -05:00
56c9a38813 Merge pull request 'development' (#396) from development into main
Reviewed-on: #396
2026-02-03 00:38:17 +00:00
5b1dde694c Merge pull request 'fix(scripts): MCP server mapping and CLAUDE.md section markers' (#395) from fix/plugin-install-mcp-mapping into development
Reviewed-on: #395
2026-02-03 00:37:15 +00:00
eafcfe5bd1 fix(scripts): MCP server mapping and CLAUDE.md section markers
Issue 1 - MCP Server Mapping:
- Add mcp_servers field to plugin.json for plugins using shared MCP servers
- projman/pr-review now install gitea MCP server
- cmdb-assistant now installs netbox MCP server
- Scripts read MCP server names from plugin.json

Issue 2 - CLAUDE.md Section Markers:
- Install wraps content with HTML comment markers for precise removal
- Uninstall uses markers first, falls back to legacy header detection
- Fixes code block false positives during uninstall

Bug fix:
- Change ((servers_added++)) to ((++servers_added)) to avoid exit code 1
  with set -e when incrementing from 0

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 19:33:45 -05:00
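The marker approach in Issue 2 is easiest to see in code. The actual install/uninstall scripts are shell; this Python sketch only illustrates the idea, and the marker strings themselves are invented.

```python
# Sketch of marker-based CLAUDE.md section handling, in Python for illustration.
# Marker text is hypothetical; the wrap/remove/fallback behavior is from the commit.
import re

def begin_marker(plugin: str) -> str:
    return f"<!-- BEGIN {plugin} (managed by install-plugin.sh) -->"

def end_marker(plugin: str) -> str:
    return f"<!-- END {plugin} -->"

def install_section(claude_md: str, plugin: str, section: str) -> str:
    """Append the plugin section wrapped in HTML comment markers (idempotent)."""
    if begin_marker(plugin) in claude_md:
        return claude_md  # already installed
    return f"{claude_md.rstrip()}\n\n{begin_marker(plugin)}\n{section.strip()}\n{end_marker(plugin)}\n"

def uninstall_section(claude_md: str, plugin: str) -> str:
    """Remove everything between the markers; markers avoid code-block false positives."""
    pattern = re.compile(
        re.escape(begin_marker(plugin)) + r".*?" + re.escape(end_marker(plugin)) + r"\n?",
        re.DOTALL,
    )
    return pattern.sub("", claude_md)
```

The fallback to legacy header detection mentioned in the commit would only run when no markers are found.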
67d769e9e5 Merge pull request 'development' (#394) from development into main
Reviewed-on: #394
2026-02-02 22:03:39 +00:00
dc113d8b09 Merge pull request 'feat(scripts): add plugin installation mechanism for consumer projects' (#393) from feat/plugin-install-scripts into development
Reviewed-on: #393
2026-02-02 22:03:24 +00:00
aca5c6e5b1 feat(scripts): add plugin installation mechanism for consumer projects
Add three new scripts for installing marketplace plugins to consumer projects:

- install-plugin.sh: Install plugin to target project (.mcp.json + CLAUDE.md)
- uninstall-plugin.sh: Remove plugin from target project
- list-installed.sh: Show installed/available plugins in a project

Features:
- Idempotent operations (safe to run multiple times)
- Handles plugins with/without MCP servers
- Code block aware CLAUDE.md section removal
- Flexible header format detection

Documentation updated in docs/CONFIGURATION.md with usage examples.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 17:01:35 -05:00
3d2f14b0ab Merge pull request 'development' (#392) from development into main
Reviewed-on: #392
2026-02-02 21:03:07 +00:00
120f00ece6 Merge pull request 'feat(claude-config-maintainer): add settings.local.json audit feature v1.2.0' (#391) from feat/config-maintainer-settings-audit into development
Reviewed-on: #391
2026-02-02 21:02:43 +00:00
3012a7af68 feat(claude-config-maintainer): add settings.local.json audit feature v1.2.0
Add 3 new commands for auditing and optimizing Claude Code permission
configurations, leveraging the marketplace's multi-layer review architecture.

New commands:
- /config-audit-settings - 100-point scoring across redundancy, coverage,
  safety alignment, and profile fit
- /config-optimize-settings - apply optimizations with dry-run, named
  profiles (conservative, reviewed, autonomous), consolidation modes
- /config-permissions-map - Mermaid diagram of review layer coverage

New skill:
- settings-optimization.md - 7 sections covering file formats, syntax
  reference, consolidation rules, review-layer-aware recommendations,
  named profiles, scoring criteria, and hook detection

Updated agent maintainer.md with new "Audit Settings Files" responsibility.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 15:54:15 -05:00
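The 100-point scoring could be modeled roughly as below. The four categories are named in the commit; the equal 25-point weighting and the redundancy heuristic are assumptions, and the other three checks are left as placeholders.

```python
# Rough illustration of a 100-point settings audit; weighting and heuristics are
# assumptions, only the category names come from the commit.
def redundancy_score(allow_rules: list[str]) -> int:
    """Penalize exact-duplicate permission rules (one possible redundancy check)."""
    duplicates = len(allow_rules) - len(set(allow_rules))
    return max(0, 25 - 5 * duplicates)

def audit_settings(settings: dict) -> dict[str, int]:
    allow = settings.get("permissions", {}).get("allow", [])
    scores = {
        "redundancy": redundancy_score(allow),
        "coverage": 25,          # placeholder: are common workflows permitted?
        "safety_alignment": 25,  # placeholder: do deny rules match review layers?
        "profile_fit": 25,       # placeholder: conservative / reviewed / autonomous
    }
    return {**scores, "total": sum(scores.values())}
```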
b76e53c215 Merge pull request 'development' (#390) from development into main
Reviewed-on: #390
2026-02-02 19:30:43 +00:00
d12d9b4962 Merge pull request 'docs: close 5.8.0 punch list — version sync, stale header, emoji clarity' (#389) from docs/punch-list-5.8.0-cleanup into development
Reviewed-on: #389
2026-02-02 19:30:27 +00:00
b302a4237d docs: close 5.8.0 punch list — version sync, stale header, emoji clarity
- Fix /review command visual header: replace inline CLOSING box with
  skill reference (Code Reviewer uses 🔍 REVIEW, not 🏁 CLOSING)
- Update CLAUDE.md version 5.4.0 → 5.8.0, fix data-platform version
  in plugin table (1.1.0 → 1.3.0), update Last Updated date
- Add Unicode emoji to Phase Registry tables in visual-output.md
  (now shows 🎯 Target instead of just "Target")

Items verified complete:
- README.md already shows v5.8.0
- marketplace.json already shows 5.8.0
- CHANGELOG 5.8.0 entry complete (rfc-reject in both Changed/Removed)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 14:28:09 -05:00
3d96f6b505 Merge pull request 'development' (#388) from development into main
Reviewed-on: #388
2026-02-02 19:10:30 +00:00
a034c12eb6 Merge pull request 'feat(projman): hardening sprint v5.8.0' (#387) from feat/projman-hardening into development
Reviewed-on: #387
2026-02-02 19:10:13 +00:00
636bd0af59 chore: bump version to 5.8.0, update CHANGELOG
- Added [5.8.0] section documenting all projman hardening changes
- Updated README.md title to v5.8.0
- Updated marketplace.json version to 5.8.0

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 14:07:15 -05:00
bf5029d6dc fix(contract-validator): update test for new contract INFO issue
Test fixture without gate_contract now correctly expects INFO issue
rather than zero issues.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 14:05:24 -05:00
e939af0689 docs(projman): expand /test command documentation
- Added sprint integration section (pre-close verification workflow)
- Added concrete usage examples for all modes
- Added edge cases table
- Added DO NOT rules for both modes

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 14:02:32 -05:00
eb6ce62a76 feat(projman): add sprint dispatch log for session recovery
- progress-tracking.md: new Sprint Dispatch Log section with event types
- orchestrator.md: new responsibility to maintain dispatch log
- Enables timeline reconstruction after interrupted sessions

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 14:00:24 -05:00
f7bcd48fc0 refactor(projman): create shared visual-output skill for DRY headers
- New skill: visual-output.md defines all header, progress, and verdict formats
- All 4 agent files now reference the skill instead of inline templates
- Phase Registry table maps agents to their emoji and phase name
- Single source of truth for visual branding changes

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 13:57:56 -05:00
72b3436a24 feat(contract-validator): add gate contract versioning
- design-gate.md and data-gate.md declare gate_contract: v1
- domain-consultation.md Gate Command Reference includes Contract column
- validate_workflow_integration now checks contract version compatibility
- Tests added for match, mismatch, and missing contract scenarios

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 13:54:19 -05:00
bea46d7689 feat(projman): add sprint lifecycle state machine via milestone metadata
- New skill: sprint-lifecycle.md defines states, transitions, and check protocol
- All sprint commands now check and set lifecycle state
- States tracked in milestone description metadata (Sprint/Planning, Sprint/Executing, Sprint/Reviewing)
- Out-of-order calls produce warnings with guidance
- --force override available for all lifecycle checks
- Added Sprint/* labels to label taxonomy documentation

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 13:48:13 -05:00
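A minimal sketch of the lifecycle check follows. The three Sprint/* states and the --force override are from the commit; the transition table and warning wording are illustrative.

```python
# Sketch of the lifecycle check using the states named in the commit; the
# transition table and warning text are illustrative.
from typing import Optional

ALLOWED_TRANSITIONS = {
    None: {"Sprint/Planning"},                 # no state recorded in the milestone yet
    "Sprint/Planning": {"Sprint/Executing"},
    "Sprint/Executing": {"Sprint/Reviewing"},
    "Sprint/Reviewing": {"Sprint/Planning"},   # next sprint starts planning again
}

def check_transition(current: Optional[str], target: str, force: bool = False) -> str:
    if force or target in ALLOWED_TRANSITIONS.get(current, set()):
        return f"OK: {current} -> {target}"
    expected = ", ".join(sorted(ALLOWED_TRANSITIONS.get(current, set()))) or "none"
    return (
        f"WARNING: out-of-order call ({current} -> {target}); "
        f"expected {expected}. Re-run with --force to override."
    )
```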
f2fddafca3 refactor(projman): normalize RFC to sub-command pattern, absorb clear-cache
- Created unified /rfc command with create|list|review|approve|reject sub-commands
- Deleted 5 individual rfc-*.md command files
- Moved /clear-cache into /setup --clear-cache
- Updated all cross-references in skills, docs, and integration files
- Command count: 17 -> 12 (net -5)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 13:43:29 -05:00
fdcb5d9874 feat(projman): harden sprint approval gate with --force override
- sprint-approval.md: approval is now a hard block, not a warning
- sprint-start.md: added --force flag documentation
- orchestrator.md: approval verification is now a hard stop
- docs: updated commands cheatsheet

BREAKING: /sprint-start now requires approval or --force flag

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 13:37:35 -05:00
edee44088c Merge pull request 'development' (#385) from development into main
Reviewed-on: #385
2026-02-02 16:49:08 +00:00
45510b44c0 Merge pull request 'fix(marketplace): remove integrates_with field (schema violation)' (#384) from fix/marketplace-schema-hotfix into development
Reviewed-on: #384
2026-02-02 16:47:12 +00:00
79468f5d9e fix(marketplace): remove integrates_with field (schema violation)
Claude Code's marketplace schema does not support custom fields.
The `integrates_with` field on data-platform and viz-platform caused:
"Invalid schema: plugins.9: Unrecognized key: integrates_with"

This reverts the schema extension while keeping the validate_workflow_integration
MCP tool functional (it reads from plugin.json files directly, not marketplace.json).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 11:46:29 -05:00
938ddd7b69 Merge pull request 'development' (#383) from development into main
Reviewed-on: #383
2026-02-02 16:27:41 +00:00
9148e21bc5 Merge pull request 'feat(contract-validator): add validate_workflow_integration tool (v5.7.1)' (#382) from feat/v5.7.1-domain-advisory-hardening into development
Reviewed-on: #382
2026-02-02 16:27:18 +00:00
2925ec4229 feat(contract-validator): add validate_workflow_integration tool (v5.7.1)
Domain Advisory Pattern Hardening - patch release to close gaps from v5.6.0/v5.7.0:

## Added
- New `validate_workflow_integration` MCP tool validates domain plugins expose
  required advisory interfaces (gate command, review command, advisory agent)
- New `MISSING_INTEGRATION` issue type for workflow integration validation
- New `WorkflowIntegrationResult` Pydantic model for structured validation output
- `integrates_with` field on viz-platform and data-platform in marketplace.json
  declaring projman integration metadata
- 4 new test cases for workflow integration validation

## Fixed
- scripts/setup.sh banner version updated from v5.1.0 to v5.7.1

## Documentation
- Updated mcp-tools-reference.md with new tool
- Updated validation-rules.md with Workflow Integration Checks section
- Added /design-gate, /design-review, /data-gate, /data-review to COMMANDS-CHEATSHEET
- Added contract-validator to CONFIGURATION.md plugin table
- Updated README.md Contract Validator tools table

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 11:24:51 -05:00
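The tool's likely shape, sketched under assumptions: the names validate_workflow_integration, MISSING_INTEGRATION, and WorkflowIntegrationResult are from the commit, while the Pydantic fields and the glob-based interface checks are guesses.

```python
# Illustrative shape only: tool and issue-type names come from the commit,
# the fields and checks are guesses.
from pathlib import Path
from pydantic import BaseModel

class WorkflowIntegrationResult(BaseModel):
    plugin: str
    issues: list[str] = []

def validate_workflow_integration(plugin_dir: Path) -> WorkflowIntegrationResult:
    checks = {
        "gate command": list(plugin_dir.glob("commands/*-gate.md")),
        "review command": list(plugin_dir.glob("commands/*-review.md")),
        "advisory agent": list(plugin_dir.glob("agents/*.md")),
    }
    issues = [
        f"MISSING_INTEGRATION: no {label} found in {plugin_dir.name}"
        for label, matches in checks.items()
        if not matches
    ]
    return WorkflowIntegrationResult(plugin=plugin_dir.name, issues=issues)
```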
aa68883f87 Merge pull request 'development' (#381) from development into main
Reviewed-on: #381
2026-02-02 14:57:54 +00:00
4340873f4e Merge pull request 'fix: update README.md version to 5.7.0' (#380) from fix/readme-version-5.7.0 into development
Reviewed-on: #380
2026-02-02 14:57:32 +00:00
98fd4e45e2 fix: update README.md version to 5.7.0
The README title was still showing v5.6.0 after the Sprint 10 release.
All other version references (CHANGELOG.md, marketplace.json) were correct.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 09:55:36 -05:00
dd36a79bcb Merge pull request 'development' (#379) from development into main
Reviewed-on: #379
2026-02-02 14:50:21 +00:00
793565aaaa Merge pull request 'feat: Sprint 10 - Data Platform Domain Advisory Pattern (v5.7.0)' (#378) from feat/sprint-10-data-platform-advisory into development
Reviewed-on: #378
2026-02-02 14:49:57 +00:00
27f3603f52 Merge feat/377-version-docs 2026-02-02 01:32:24 -05:00
2872092554 docs: bump version to 5.7.0 and update documentation
- marketplace.json: 5.6.0 → 5.7.0
- data-platform plugin.json: 1.1.0 → 1.3.0
- CHANGELOG.md: Add v5.7.0 entry with data-platform additions
- README.md: Update Domain Advisory table (Data: planned → active)

Domain Advisory Pattern now fully operational for both domains.

Closes #377

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 01:31:39 -05:00
4c857dde47 Merge feat/376-data-review-command 2026-02-02 01:30:26 -05:00
efe3808bd2 feat(data-platform): add /data-review command
Comprehensive data integrity audit for human review.

- Detailed report with FAIL/WARN/INFO severity levels
- Schema, lineage, dbt, and PostGIS validation
- Actionable recommendations for each finding
- Graceful degradation when components unavailable

Use cases: sprint planning, code review, post-migration,
periodic health checks, project onboarding.

Closes #376

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 01:29:41 -05:00
61c315a605 Merge feat/375-data-gate-command 2026-02-02 01:29:03 -05:00
56d3307cd3 feat(data-platform): add /data-gate command
Binary pass/fail gate command for projman orchestrator integration.

- Invoked when Domain/Data label present on issue
- Checks FAIL-level violations only (speed optimization)
- Returns compact PASS/FAIL output for automation
- Graceful degradation when database/dbt unavailable

Completes the Domain Advisory Pattern for data domain.

Closes #375

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 01:28:19 -05:00
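The gate behavior described above (FAIL-level checks only, compact PASS/FAIL output, graceful degradation) can be sketched as follows; the finding structure and function name are invented for illustration.

```python
# Sketch of a binary gate over audit findings; FAIL/WARN/INFO and the FAIL-only
# rule come from the commits, the data shapes are invented.
from dataclasses import dataclass

@dataclass
class Finding:
    severity: str  # "FAIL", "WARN", or "INFO"
    message: str

def data_gate(findings: list[Finding], components_available: bool = True) -> str:
    if not components_available:
        # Graceful degradation: do not block when the database or dbt is unreachable.
        return "PASS (degraded: checks skipped)"
    failures = [f for f in findings if f.severity == "FAIL"]
    if failures:
        return "FAIL: " + "; ".join(f.message for f in failures)
    return "PASS"
```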
ea9d501a5d Merge feat/374-data-advisor-agent 2026-02-02 01:27:38 -05:00
cd27b92b28 feat(data-platform): add data-advisor agent
Advisory agent for data integrity validation using existing MCP tools.

Features:
- Two operating modes: review (detailed) and gate (binary)
- PostgreSQL schema validation
- dbt project health checks (parse, compile, test, lineage)
- PostGIS spatial validation
- Python code pattern scanning
- Graceful degradation when components unavailable

Integrates with projman orchestrator for Domain/Data gates.

Closes #374

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 01:26:54 -05:00
61654057b8 Merge feat/373-data-integrity-audit-skill 2026-02-02 01:25:39 -05:00
2c998dff1d feat(data-platform): add data-integrity-audit skill
Define audit rules, severity classification, scanning strategies,
and report templates for data integrity validation.

Covers:
- Schema validity (PostgreSQL tables, columns, types)
- dbt project health (parse, compile, test, lineage)
- PostGIS compliance (SRID, geometry types, extent)
- Data type consistency (DataFrame dtypes)
- Query safety patterns

Closes #373

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 01:24:54 -05:00
091e3d25f3 Merge pull request 'development' (#372) from development into main
Reviewed-on: #372
2026-02-02 00:11:11 +00:00
192f808f48 Merge pull request 'feat: Domain Advisory Pattern - viz-platform integration (Sprint 9)' (#370) from feat/sprint-9-domain-advisory-pattern into development
Reviewed-on: #370
2026-02-01 23:46:53 +00:00
65574a03fb feat(projman): add domain-consultation skill and update orchestrator
- Create domain-consultation.md skill with detection rules and gate protocols
- Update orchestrator.md to load skill and run domain gates before completion
- Add critical reminder for domain gate enforcement

Closes #356
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 18:32:40 -05:00
571c713697 Merge feat/363-domain-labels-version-bump 2026-02-01 18:30:59 -05:00
5429f193b3 Merge feat/361-planner-domain-consultation 2026-02-01 18:30:59 -05:00
a866e0d43d Merge feat/359-design-review-command 2026-02-01 18:30:49 -05:00
2f5161675c Merge feat/360-design-gate-command 2026-02-01 18:30:49 -05:00
f95b7eb650 Merge feat/358-design-reviewer-agent 2026-02-01 18:30:49 -05:00
8095f16cd6 Merge feat/357-design-system-audit-skill 2026-02-01 18:30:43 -05:00
7d2705e3bf feat(labels): add Domain/Viz and Domain/Data labels for cross-plugin integration
Add new Domain category with 2 labels for domain-specific validation gates:
- Domain/Viz: triggers viz-platform design gates for frontend/visualization work
- Domain/Data: triggers data-platform data gates for data engineering work

Update version to 5.6.0 and document Domain Advisory Pattern feature.

Closes #363

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 18:23:27 -05:00
0d04c8703a feat(viz-platform): add /design-review command for design audits
Add standalone command that invokes the design-reviewer agent to
perform detailed design system compliance audits on target paths.
Returns comprehensive findings grouped by severity with file paths,
line numbers, and recommended fixes.

Closes #359

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 18:20:14 -05:00
407dd1b93b feat(viz-platform): add design-gate command for binary pass/fail validation
Add /design-gate command that provides binary pass/fail validation for
design system compliance. This command is used by projman orchestrator
during sprint execution to gate issue completion for Domain/Viz issues.

Features:
- Binary PASS/FAIL output for automation
- Checks only FAIL-level violations (invalid props, missing components)
- Integrates with projman sprint execution workflow
- Lightweight alternative to /design-review

Closes #360

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 18:18:09 -05:00
4614726350 feat(projman): add domain consultation step to planner agent
Integrate domain-consultation skill into the planner agent workflow:
- Add skills/domain-consultation.md to Skills to Load section
- Add new responsibility "7. Domain Consultation" between Task Sizing and Issue Creation
- Renumber subsequent sections (Issue Creation -> 8, Request Approval -> 9)
- Add critical reminder #8 to always check domain signals

The domain consultation step analyzes planned issues for domain-specific
signals (viz-platform, data-platform) and appends appropriate acceptance
criteria before issue creation.

Closes #361

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 18:15:15 -05:00
69358a78ba feat(viz-platform): add design-reviewer agent for design system compliance
Add design reviewer agent that uses viz-platform MCP tools to audit code
for design system compliance in Dash/DMC applications.

Features:
- Review mode: detailed report with FAIL/WARN/INFO severity levels
- Gate mode: binary PASS/FAIL for CI/CD integration
- Component validation using validate_component, get_component_props
- Theme compliance checking for hardcoded colors/sizes
- Accessibility validation using accessibility_validate_colors
- Structured output for projman orchestrator integration

Closes #358

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 18:13:15 -05:00
557bf6115b feat(viz-platform): add design-system-audit skill
Add comprehensive design system audit skill that provides:
- What to check: component prop validity, theme token usage,
  accessibility compliance, responsive design
- Common violation patterns at FAIL/WARN/INFO severity levels
- Scanning strategy for finding DMC components in Python files
- Report template for audit output
- MCP tool integration patterns

Closes #357

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 18:06:21 -05:00
6eeb4a4e9a feat(projman): add domain-consultation skill for cross-plugin integration
Add the core skill that enables projman agents to detect and consult
domain-specific plugins (viz-platform, data-platform) during sprint
lifecycle. Includes domain detection rules, planning protocol,
execution gate protocol, review protocol, and extensibility guidelines.

Closes #356

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 18:05:39 -05:00
b605a2de5e Merge pull request 'development' (#354) from development into main
Reviewed-on: #354
2026-02-01 19:33:57 +00:00
fde61f5f73 Merge pull request 'chore: release v5.5.0' (#353) from chore/release-5.5.0 into development
Reviewed-on: #353
2026-02-01 19:33:20 +00:00
addfc9c1d5 chore: release v5.5.0
- RFC System for Feature Tracking
- 5 new commands: /rfc-create, /rfc-list, /rfc-review, /rfc-approve, /rfc-reject
- New MCP tool: allocate_rfc_number
- Sprint integration: /sprint-plan detects approved RFCs
- Clarity-assist integration: feature request detection

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 14:29:25 -05:00
4aa0baa2a6 Merge pull request 'development' (#352) from development into main
Reviewed-on: #352
2026-02-01 19:10:34 +00:00
733b8679c9 Merge pull request 'docs: add RFC commands to README and bump projman version' (#351) from feat/rfc-system into development
Reviewed-on: #351
2026-02-01 19:10:13 +00:00
5becd3d41c Merge pull request 'docs: add RFC commands to README and bump projman version' (#350) from fix/rfc-docs into development
Reviewed-on: #350
2026-02-01 19:08:56 +00:00
305c534402 docs: add RFC commands to README and bump projman version
- Add 5 RFC commands to projman Commands line in README.md
- Bump projman version 3.3.0 → 3.4.0 in marketplace.json

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 14:05:02 -05:00
7b7e7dce16 docs: add RFC commands to README and bump projman version
- Add 5 RFC commands to projman Commands line in README.md
- Bump projman version 3.3.0 → 3.4.0 in marketplace.json

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 13:58:59 -05:00
e3db084195 Merge pull request 'development' (#349) from development into main
Reviewed-on: #349
2026-02-01 18:49:22 +00:00
b9eaae5317 Merge pull request 'feat(projman): add RFC system for feature tracking' (#348) from feat/rfc-system into development
Reviewed-on: #348
2026-02-01 17:42:01 +00:00
16acc0609e feat(projman): add RFC system for feature tracking
Implement wiki-based Request for Comments system for capturing,
reviewing, and tracking feature ideas through their lifecycle.

New commands:
- /rfc-create: Create RFC from conversation or clarified spec
- /rfc-list: List RFCs grouped by status
- /rfc-review: Submit Draft RFC for review
- /rfc-approve: Approve RFC for sprint planning
- /rfc-reject: Reject RFC with documented reason

RFC lifecycle: Draft → Review → Approved → Implementing → Implemented

Integration:
- /sprint-plan detects approved RFCs and offers selection
- /sprint-close updates RFC status on completion
- clarity-assist suggests /rfc-create for feature ideas

New MCP tool: allocate_rfc_number

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 12:38:02 -05:00
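One plausible shape for allocate_rfc_number, assuming RFC wiki pages are titled "RFC-NNN ..."; both the title format and the page-listing input are assumptions.

```python
# Hypothetical allocation logic: find the highest existing RFC number and add one.
import re

def allocate_rfc_number(existing_page_titles: list[str]) -> int:
    numbers = [
        int(m.group(1))
        for title in existing_page_titles
        if (m := re.match(r"RFC-(\d+)", title))
    ]
    return max(numbers, default=0) + 1
```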
2a92211b28 Merge pull request 'development' (#347) from development into main
Reviewed-on: #347
2026-01-31 21:15:06 +00:00
f0f4369eac Merge pull request 'fix: correct hooks.json structure with nested matcher objects' (#346) from fix/hooks-json-structure into development
Reviewed-on: #346
2026-01-31 21:14:44 +00:00
89e841d448 fix: correct hooks.json structure with nested matcher objects
The hooks.json files had incorrect structure. Claude Code requires:
- Top-level "hooks" object
- Event names as keys containing arrays
- Each array element must have "matcher" and nested "hooks" array

Fixed 8 plugins:
- clarity-assist
- claude-config-maintainer
- cmdb-assistant
- contract-validator
- data-platform
- pr-review
- projman
- viz-platform

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 16:07:25 -05:00
de6cba5f31 Merge pull request 'development' (#345) from development into main
Reviewed-on: #345
2026-01-31 19:28:42 +00:00
b6b09c1754 Merge pull request 'fix: correct hooks.json structure per Claude Code documentation' (#344) from fix/hooks-json-structure into development
Reviewed-on: #344
2026-01-31 19:28:26 +00:00
94d8f03cf9 fix: correct hooks.json structure per Claude Code documentation
Three plugins had incorrect hooks.json structure that caused hooks to fail:

- pr-review: SessionStart used nested `hooks` array without matcher
- cmdb-assistant: SessionStart used nested `hooks` array without matcher
- git-flow: Used completely wrong format (array with `event` field)

Per Claude Code documentation:
- Without matcher: direct `type`/`command` in the event array
- With matcher: nested `hooks` array inside matcher object

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 14:26:51 -05:00
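The two valid shapes described above can be checked mechanically. The validator below is a hypothetical helper written in Python for illustration; the structural rules it encodes are the ones stated in the commit messages.

```python
# Hypothetical validator for the two shapes described in the commits:
# without a matcher, event arrays hold {"type", "command"} entries directly;
# with a matcher, each entry carries "matcher" plus a nested "hooks" array.
import json
from pathlib import Path

def validate_hooks_file(path: Path) -> list[str]:
    data = json.loads(path.read_text())
    errors = []
    for event, entries in data.get("hooks", {}).items():
        if not isinstance(entries, list):
            errors.append(f"{event}: expected an array of entries")
            continue
        for i, entry in enumerate(entries):
            if "matcher" in entry:
                if not isinstance(entry.get("hooks"), list):
                    errors.append(f"{event}[{i}]: matcher entry needs a nested 'hooks' array")
            elif not {"type", "command"} <= entry.keys():
                errors.append(f"{event}[{i}]: entry needs 'type' and 'command' (or a matcher)")
    return errors
```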
31dcf0338c Merge pull request 'development' (#343) from development into main
Reviewed-on: #343
2026-01-30 23:15:45 +00:00
f23e047842 Merge pull request 'docs: add mandatory behavior rules to CLAUDE.md' (#342) from fix/revert-corrupted-hooks-and-update-rules into development
Reviewed-on: #342
2026-01-30 23:14:53 +00:00
dc59cb3713 docs: add mandatory behavior rules to CLAUDE.md
Add prominent behavior rules section at top of CLAUDE.md to ensure
thorough verification before claiming completion and believing user
reports of issues.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 18:13:28 -05:00
569dc9a8f2 Merge pull request 'development' (#341) from development into main
Reviewed-on: #341
2026-01-30 23:07:33 +00:00
a78bde2e42 Merge pull request 'fix: address critical issues from codebase analysis' (#340) from fix/critical-issues-from-analysis into development
Reviewed-on: #340
2026-01-30 23:07:21 +00:00
f082b78c0b fix: address critical issues from codebase analysis
- Add hooks declarations to 9 plugins missing them in marketplace.json
- Change .mcp.json to use relative paths (portable across users)
- Fix pr-review hook schema to use standard nested hooks structure
- Fix token exposure in cmdb-assistant startup-check.sh (use curl -K)
- Update version to 5.4.1 in marketplace.json, README.md
- Fix CANONICAL-PATHS.md version (was incorrectly showing 5.5.0)

All 12 plugins now have hooks properly registered.
All validations pass.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 18:06:04 -05:00
7217790143 Merge pull request 'development' (#339) from development into main
Reviewed-on: #339
2026-01-30 22:35:42 +00:00
f684b47161 Merge pull request 'refactor: extract skills from commands across 10 plugins' (#338) from refactor/all-plugins-skills-extraction into development
Reviewed-on: #338
2026-01-30 22:35:20 +00:00
7c8a20c804 refactor: extract skills from commands across 8 plugins
Refactored commands to extract reusable skills following the
Commands → Skills separation pattern. Each command is now <50 lines
and references skill files for detailed knowledge.

Plugins refactored:
- claude-config-maintainer: 5 commands → 7 skills
- code-sentinel: 3 commands → 2 skills
- contract-validator: 5 commands → 6 skills
- data-platform: 10 commands → 6 skills
- doc-guardian: 5 commands → 6 skills (replaced nested dir)
- git-flow: 8 commands → 7 skills

Skills contain: workflows, validation rules, conventions,
reference data, tool documentation

Commands now contain: YAML frontmatter, agent assignment,
skills list, brief workflow steps, parameters

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 17:32:24 -05:00
aad02ef2d9 Merge branch 'refactor/viz-platform-skills-extraction' into refactor/all-plugins-skills-extraction 2026-01-30 17:31:56 -05:00
db8ffa23ec Merge pr-review branch, keep clarity-assist changes 2026-01-30 17:31:09 -05:00
5bf1271347 refactor(clarity-assist): extract skills from commands
Extract shared knowledge from clarify.md and quick-clarify.md into
reusable skill files:
- 4d-methodology.md: Core 4-phase clarification process
- nd-accommodations.md: Neurodivergent-friendly question patterns
- clarification-techniques.md: Anti-patterns and question templates
- escalation-patterns.md: Mode switching guidelines

Commands slimmed from 149/96 lines to 44/49 lines respectively.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 17:23:40 -05:00
747a2b15e5 refactor(cmdb-assistant): extract skills and slim commands
- Extract 9 skill files from command knowledge:
  - mcp-tools-reference.md: Complete NetBox MCP tools reference
  - system-discovery.md: Bash commands for system info gathering
  - device-registration.md: Device registration workflow
  - sync-workflow.md: Machine sync process
  - audit-workflow.md: Data quality audit checks
  - ip-management.md: IP/prefix management and conflict detection
  - topology-generation.md: Mermaid diagram generation
  - change-audit.md: NetBox change audit workflow
  - visual-header.md: Standard visual header pattern

- Slim all 11 commands to under 60 lines:
  - cmdb-sync.md: 348 -> 57 lines
  - cmdb-register.md: 334 -> 51 lines
  - ip-conflicts.md: 238 -> 58 lines
  - cmdb-audit.md: 207 -> 58 lines
  - cmdb-topology.md: 194 -> 54 lines
  - initial-setup.md: 176 -> 74 lines
  - change-audit.md: 175 -> 57 lines
  - cmdb-site.md: 68 -> 50 lines
  - cmdb-ip.md: 65 -> 52 lines
  - cmdb-device.md: 64 -> 55 lines
  - cmdb-search.md: 46 lines (unchanged)

- Update agent to reference skills for best practices
- Preserve existing netbox-patterns skill

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 17:21:21 -05:00
5152cda161 refactor(viz-platform): extract skills and slim commands
Extract shared knowledge from 10 commands into 7 reusable skills:
- mcp-tools-reference.md: All viz-platform MCP tool signatures
- theming-system.md: Theme tokens, CSS variables, color palettes
- accessibility-rules.md: WCAG contrast, color-blind safe palettes
- dmc-components.md: DMC categories, validation, common props
- responsive-design.md: Breakpoints, mobile-first, grid config
- chart-types.md: Plotly chart types, export formats, resolution
- layout-templates.md: Dashboard templates, filter types

All commands now reference skills via "Skills to Load" section.
Commands reduced from 1396 lines total to 427 lines (69% reduction).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 16:59:26 -05:00
80b9e919c7 refactor(contract-validator): extract skills from commands
Extract shared knowledge from 5 command files into 6 reusable skills:
- plugin-discovery.md: Plugin scanning and discovery
- interface-parsing.md: README.md and CLAUDE.md parsing
- dependency-analysis.md: MCP server and data flow analysis
- validation-rules.md: Compatibility and agent validation
- mcp-tools-reference.md: Available MCP tools
- visual-output.md: Standard formatting and headers

Slim commands from 164-263 lines down to 44-55 lines each.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 16:57:13 -05:00
97159274c7 Merge pull request 'development' (#337) from development into main
Reviewed-on: #337
2026-01-30 20:04:20 +00:00
3437ece76e Merge pull request 'refactor(projman): extract skills and consolidate commands' (#336) from refactor/projman-skills-commands-consolidation into development
Reviewed-on: #336
2026-01-30 20:03:57 +00:00
2e65b60725 refactor(projman): extract skills and consolidate commands
Major refactoring of projman plugin architecture:

Skills Extraction (17 new files):
- Extracted reusable knowledge from commands and agents into skills/
- branch-security, dependency-management, git-workflow, input-detection
- issue-conventions, lessons-learned, mcp-tools-reference, planning-workflow
- progress-tracking, repo-validation, review-checklist, runaway-detection
- setup-workflows, sprint-approval, task-sizing, test-standards, wiki-conventions

Command Consolidation (17 → 12 commands):
- /setup: consolidates initial-setup, project-init, project-sync (--full/--quick/--sync)
- /debug: consolidates debug-report, debug-review (report/review modes)
- /test: consolidates test-check, test-gen (run/gen modes)
- /sprint-status: absorbs sprint-diagram via --diagram flag

Architecture Cleanup:
- Remove plugin-level mcp-servers/ symlinks (6 plugins)
- Remove plugin README.md files (12 files, ~2000 lines)
- Update all documentation to reflect new command structure
- Fix documentation drift in CONFIGURATION.md, COMMANDS-CHEATSHEET.md

Commands are now thin dispatchers (~20-50 lines) that reference skills.
Agents reference skills for domain knowledge instead of inline content.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 15:02:16 -05:00
5cf4b4a78c Merge pull request 'development' (#335) from development into main
Reviewed-on: #335
2026-01-30 18:24:55 +00:00
8fe685037e Merge pull request 'perf(hooks): Sprint 8 - Hook Efficiency Quick Wins' (#334) from feat/sprint-8-hook-efficiency into development
Reviewed-on: #334
2026-01-30 18:24:40 +00:00
c9276d983b docs(changelog): add Sprint 8 hook efficiency changes to Unreleased
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 13:22:12 -05:00
63327ecf65 perf(hooks): improve hook efficiency with early exits and cooldowns
Sprint 8 - Hook Efficiency Quick Wins (v5.5.0)

- Remove viz-platform SessionStart hook (zero value, just echoed "loaded")
- Add early git check to git-flow branch-check.sh (skip JSON parsing for non-git commands)
- Add early git check to git-flow commit-msg-check.sh (skip Python spawn for non-commit commands)
- Add 60-second cooldown to project-hygiene cleanup.sh (reduce find operations)

Closes #321, #322, #323, #324

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 13:19:03 -05:00
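The real hooks are shell scripts; the 60-second cooldown idea is sketched here in Python purely for illustration, with an invented stamp-file location.

```python
# Cooldown sketch: the 60-second window comes from the commit, the stamp-file
# path is invented, and the actual hook is a shell script.
import time
from pathlib import Path

COOLDOWN_SECONDS = 60
STAMP = Path("/tmp/project-hygiene.last-run")  # hypothetical location

def should_run() -> bool:
    if STAMP.exists() and time.time() - STAMP.stat().st_mtime < COOLDOWN_SECONDS:
        return False  # ran recently: skip the expensive find operations
    STAMP.touch()
    return True
```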
96a612a1f4 Merge pull request 'development' (#333) from development into main
Reviewed-on: #333
2026-01-30 17:15:50 +00:00
3839575272 Merge pull request 'fix(projman): add YAML frontmatter and remove unsupported model fields' (#332) from fix/projman-command-frontmatter into development
Reviewed-on: #332
2026-01-30 17:15:37 +00:00
11d77ebe84 revert: remove unsupported defaultModel and model fields
Claude Code rejects `defaultModel` in plugin.json and `model` in agent
frontmatter with "Unrecognized key" validation error.

Removed:
- defaultModel from 6 plugin.json files
- model from 7 agent frontmatter files
- docs/MODEL-RECOMMENDATIONS.md (deleted)
- Model config sections from CONFIGURATION.md and CLAUDE.md
- Model validation from validate-marketplace.sh

This reverts Sprint 7 (v5.4.0) multi-model feature that was never
supported by Claude Code's plugin schema.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 12:09:38 -05:00
47a3a8b48a Merge pull request 'development' (#330) from development into main
Reviewed-on: #330
2026-01-30 16:50:26 +00:00
e190eb8b28 Merge pull request 'fix(projman): add missing YAML frontmatter to command files' (#329) from fix/projman-command-frontmatter into development
Reviewed-on: #329
2026-01-30 16:49:20 +00:00
c81c4a9981 fix(projman): add missing YAML frontmatter to command files
clear-cache.md and suggest-version.md were missing the required YAML
frontmatter with description field. This caused Claude Code to skip
loading the entire projman plugin's commands.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 11:45:56 -05:00
1b75b10fec Merge pull request 'development' (#328) from development into main
Reviewed-on: #328
2026-01-30 16:36:27 +00:00
45af713366 Merge pull request 'fix(hooks): remove all venv auto-repair code that deleted .venv directories' (#327) from fix/remove-venv-auto-repair into development
Reviewed-on: #327
2026-01-30 16:34:58 +00:00
9550a85f4d fix(hooks): remove all venv auto-repair code that deleted .venv directories
BREAKING: Removes automatic venv management that was causing session failures

Changes:
- Delete scripts/venv-repair.sh (was deleting and recreating venvs)
- Remove auto-repair code from projman/hooks/startup-check.sh
- Remove venv-repair call from scripts/post-update.sh
- Remove rm -rf .venv instructions from docs/UPDATING.md and CONFIGURATION.md
- Update docs/CANONICAL-PATHS.md to remove venv-repair.sh reference

Additionally:
- Add Pre-Change Protocol to CLAUDE.md (mandatory dependency check before edits)
- Add Pre-Change Protocol enforcement to claude-config-maintainer plugin
- Add Development Context section to CLAUDE.md clarifying which plugins are
  used in this project vs only being developed
- Reorganize commands table to separate relevant vs non-relevant commands

The venv auto-repair was the root cause of repeated MCP server failures,
requiring manual setup.sh runs after every session start.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 11:33:25 -05:00
e925f80252 Merge pull request 'development' (#326) from development into main
Reviewed-on: #326
2026-01-29 23:10:38 +00:00
e1d7ec46ae Merge pull request 'fix(plugins): remove broken mcpServers references that broke plugin loading' (#325) from fix/remove-broken-mcp-references into development
Reviewed-on: #325
2026-01-29 23:10:25 +00:00
c8b91f6a87 fix(plugins): remove broken mcpServers references that broke plugin loading
The MCP consolidation commit (afd4c44) deleted plugin-level .mcp.json files
but left references to them in plugin.json and marketplace.json. This caused
7 plugins to fail loading (projman, pr-review, cmdb-assistant, data-platform,
viz-platform, contract-validator, and indirectly git-flow/clarity-assist).

Changes:
- Remove mcpServers field from 6 plugin.json files (file no longer exists)
- Remove mcpServers field from 6 marketplace.json entries
- Add file reference validation to validate-marketplace.sh:
  - Validates mcpServers references point to existing files
  - Validates hooks references point to existing files
  - Validates commands references point to existing paths
- Add pre-commit hook (.git/hooks/pre-commit) to enforce validation

The validation script will now FAIL if any config file references a
non-existent file, preventing this class of bug from happening again.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-29 18:09:08 -05:00
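A Python rendering of the reference check the commit adds to validate-marketplace.sh (the real script is bash). The field names come from the commit; the plugin layout is assumed.

```python
# Reference check sketch: every mcpServers/hooks/commands entry must point to an
# existing path, otherwise validation fails.
import json
from pathlib import Path

def check_references(plugin_dir: Path) -> list[str]:
    manifest = json.loads((plugin_dir / "plugin.json").read_text())
    errors = []
    for field in ("mcpServers", "hooks", "commands"):
        refs = manifest.get(field, [])
        refs = refs if isinstance(refs, list) else [refs]
        for ref in refs:
            if not (plugin_dir / ref).exists():
                errors.append(f"{plugin_dir.name}: {field} references missing path {ref}")
    return errors
```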
b1070aac52 Merge pull request 'development' (#320) from development into main
Reviewed-on: #320
2026-01-29 17:13:12 +00:00
ce106ace8a Merge pull request 'fix(mcp): consolidate all MCP servers at marketplace root' (#319) from fix/consolidate-mcp-servers into development
Reviewed-on: #319
2026-01-29 17:12:51 +00:00
afd4c44d11 fix(mcp): consolidate all MCP servers at marketplace root
Move all MCP server declarations from individual plugin .mcp.json files
to a single .mcp.json at the marketplace root. This fixes MCP loading
failures where only one plugin's MCP would load.

- Add .mcp.json at marketplace root with all 5 servers
- Remove plugin-level .mcp.json files (projman, pr-review, cmdb-assistant,
  data-platform, viz-platform, contract-validator)
- Update CLAUDE.md to reflect new architecture

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-29 12:12:08 -05:00
d2b6560fba Merge pull request 'development' (#318) from development into main
Reviewed-on: #318
2026-01-29 17:00:15 +00:00
7c3a2ac31c Merge pull request 'fix(projman): remove automatic cache clearing from session start hook' (#317) from fix/remove-auto-cache-clear into development
Reviewed-on: #317
2026-01-29 16:59:53 +00:00
e0ab4c2ddf fix(projman): remove automatic cache clearing from session start hook
The startup-check.sh hook was clearing ~/.claude/plugins/cache/ at every
session start, which was aggressive and potentially disruptive. Cache
clearing is now a manual operation via the new /clear-cache command.

Changes:
- Remove automatic cache clearing from startup-check.sh
- Add /clear-cache command for manual cache clearing when needed

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-29 11:58:31 -05:00
4b1c561bb6 Merge pull request 'development' (#316) from development into main
Reviewed-on: #316
2026-01-29 03:20:50 +00:00
5f82f8ebbd Merge pull request 'fix(gitea-mcp): accept string or integer for numeric params' (#315) from fix/mcp-integer-type-coercion into development
Reviewed-on: #315
2026-01-29 03:20:10 +00:00
b492a13702 fix(gitea-mcp): accept string or integer for numeric params
MCP library validates schema BEFORE call_tool handler runs, so
our _coerce_types function never gets a chance to convert strings.

Changed all integer fields to accept both types:
- issue_number, milestone_id, pr_number, depends_on, milestone, limit, position

This fixes: "Input validation error: '312' is not of type 'integer'"

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 22:17:19 -05:00
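The fix is easiest to see as a schema fragment plus the coercion helper it pairs with. The anyOf approach and the affected field names come from the commit; the exact schema text and the _coerce_types body are assumptions.

```python
# Illustrative schema fragment and coercion helper; only the anyOf idea and the
# field names are from the commit.
INT_OR_STRING = {"anyOf": [{"type": "integer"}, {"type": "string", "pattern": "^[0-9]+$"}]}

EXAMPLE_INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "issue_number": INT_OR_STRING,  # previously {"type": "integer"} only
        "milestone_id": INT_OR_STRING,
        "limit": INT_OR_STRING,
    },
    "required": ["issue_number"],
}

def _coerce_types(arguments: dict) -> dict:
    """Convert numeric strings to int once the request reaches the handler."""
    return {
        key: int(value) if isinstance(value, str) and value.isdigit() else value
        for key, value in arguments.items()
    }
```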
786d3c0013 Merge pull request 'development' (#314) from development into main
Reviewed-on: #314
2026-01-29 03:13:15 +00:00
5aaab4cb9a Merge pull request 'fix(doc-guardian): make hook silent by default' (#313) from fix/312-doc-guardian-silent-hook into development
Reviewed-on: #313
2026-01-29 03:12:40 +00:00
3c3b3b4575 fix(doc-guardian): make hook silent by default
- Remove all output by default to prevent workflow interruption
- Queue changes silently to .doc-guardian-queue
- Add file+type deduplication (same file won't be queued twice)
- Add DOC_GUARDIAN_VERBOSE=1 env var for opt-in notifications
- Users run /doc-sync or /doc-audit to process queue

Fixes #312 (partial - addresses issues 1, 2, 3)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 22:12:09 -05:00
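A sketch of silent queueing with file+type deduplication. The .doc-guardian-queue file name and the DOC_GUARDIAN_VERBOSE flag come from the commit; the entry format and the implementation language are assumptions (the real hook may well be a shell script).

```python
# Silent queueing sketch: dedupe on file+type, speak only when verbose is enabled.
import os
from pathlib import Path

QUEUE = Path(".doc-guardian-queue")

def queue_change(file_path: str, change_type: str) -> None:
    entry = f"{file_path}\t{change_type}"
    existing = QUEUE.read_text().splitlines() if QUEUE.exists() else []
    if entry in existing:
        return  # same file+type already queued: stay silent and skip
    with QUEUE.open("a") as fh:
        fh.write(entry + "\n")
    if os.environ.get("DOC_GUARDIAN_VERBOSE") == "1":
        print(f"doc-guardian: queued {change_type} for {file_path}")
```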
306 changed files with 20299 additions and 18233 deletions

marketplace.json

@@ -6,12 +6,12 @@
   },
   "metadata": {
     "description": "Project management plugins with Gitea and NetBox integrations",
-    "version": "5.4.0"
+    "version": "5.10.0"
   },
   "plugins": [
     {
       "name": "projman",
-      "version": "3.3.0",
+      "version": "3.4.0",
       "description": "Sprint planning and project management with Gitea integration",
       "source": "./plugins/projman",
       "author": {
@@ -20,7 +20,7 @@
       },
       "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/projman/README.md",
       "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
-      "mcpServers": ["./.mcp.json"],
+      "hooks": ["./hooks/hooks.json"],
       "category": "development",
       "tags": ["sprint", "agile", "gitea", "project-management"],
       "license": "MIT"
@@ -43,7 +43,7 @@
     },
     {
       "name": "code-sentinel",
-      "version": "1.0.0",
+      "version": "1.0.1",
       "description": "Security scanning and code refactoring tools",
       "source": "./plugins/code-sentinel",
       "author": {
@@ -84,15 +84,15 @@
       },
       "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/cmdb-assistant/README.md",
       "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
-      "mcpServers": ["./.mcp.json"],
+      "hooks": ["./hooks/hooks.json"],
       "category": "infrastructure",
       "tags": ["cmdb", "netbox", "dcim", "ipam", "data-quality", "validation"],
       "license": "MIT"
     },
     {
       "name": "claude-config-maintainer",
-      "version": "1.1.0",
-      "description": "CLAUDE.md optimization and maintenance for Claude Code projects",
+      "version": "1.2.0",
+      "description": "CLAUDE.md and settings.local.json optimization for Claude Code projects",
       "source": "./plugins/claude-config-maintainer",
       "author": {
         "name": "Leo Miranda",
@@ -100,6 +100,7 @@
       },
       "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/claude-config-maintainer/README.md",
       "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
+      "hooks": ["./hooks/hooks.json"],
       "category": "development",
       "tags": ["claude-md", "configuration", "optimization"],
       "license": "MIT"
@@ -115,6 +116,7 @@
       },
       "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/clarity-assist/README.md",
       "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
+      "hooks": ["./hooks/hooks.json"],
       "category": "productivity",
       "tags": ["prompts", "requirements", "clarification", "nd-friendly"],
       "license": "MIT"
@@ -130,6 +132,7 @@
       },
       "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/git-flow/README.md",
       "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
+      "hooks": ["./hooks/hooks.json"],
       "category": "development",
       "tags": ["git", "workflow", "commits", "branching"],
       "license": "MIT"
@@ -145,14 +148,14 @@
       },
       "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/pr-review/README.md",
       "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
-      "mcpServers": ["./.mcp.json"],
+      "hooks": ["./hooks/hooks.json"],
       "category": "development",
       "tags": ["code-review", "pull-requests", "security", "quality"],
       "license": "MIT"
     },
     {
       "name": "data-platform",
-      "version": "1.2.0",
+      "version": "1.3.0",
       "description": "Data engineering tools with pandas, PostgreSQL/PostGIS, and dbt integration",
       "source": "./plugins/data-platform",
       "author": {
@@ -161,7 +164,7 @@
       },
       "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/data-platform/README.md",
       "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
-      "mcpServers": ["./.mcp.json"],
+      "hooks": ["./hooks/hooks.json"],
       "category": "data",
       "tags": ["pandas", "postgresql", "postgis", "dbt", "data-engineering", "etl"],
       "license": "MIT"
@@ -177,7 +180,7 @@
       },
       "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/viz-platform/README.md",
       "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"mcpServers": ["./.mcp.json"], "hooks": ["./hooks/hooks.json"],
"category": "visualization", "category": "visualization",
"tags": ["dash", "plotly", "mantine", "charts", "dashboards", "theming", "dmc"], "tags": ["dash", "plotly", "mantine", "charts", "dashboards", "theming", "dmc"],
"license": "MIT" "license": "MIT"
@@ -193,7 +196,7 @@
}, },
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/contract-validator/README.md", "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/contract-validator/README.md",
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git", "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"mcpServers": ["./.mcp.json"], "hooks": ["./hooks/hooks.json"],
"category": "development", "category": "development",
"tags": ["validation", "contracts", "compatibility", "agents", "interfaces", "cross-plugin"], "tags": ["validation", "contracts", "compatibility", "agents", "interfaces", "cross-plugin"],
"license": "MIT" "license": "MIT"


@@ -0,0 +1,249 @@
# CLAUDE.md
This file provides guidance to Claude Code when working with code in this repository.
## Project Overview
**Repository:** leo-claude-mktplace
**Version:** 3.0.1
**Status:** Production Ready
A plugin marketplace for Claude Code containing:
| Plugin | Description | Version |
|--------|-------------|---------|
| `projman` | Sprint planning and project management with Gitea integration | 3.0.0 |
| `git-flow` | Git workflow automation with smart commits and branch management | 1.0.0 |
| `pr-review` | Multi-agent PR review with confidence scoring | 1.0.0 |
| `clarity-assist` | Prompt optimization with ND-friendly accommodations | 1.0.0 |
| `doc-guardian` | Automatic documentation drift detection and synchronization | 1.0.0 |
| `code-sentinel` | Security scanning and code refactoring tools | 1.0.0 |
| `claude-config-maintainer` | CLAUDE.md optimization and maintenance | 1.0.0 |
| `cmdb-assistant` | NetBox CMDB integration for infrastructure management | 1.0.0 |
| `project-hygiene` | Post-task cleanup automation via hooks | 0.1.0 |
## Quick Start
```bash
# Validate marketplace compliance
./scripts/validate-marketplace.sh
# Setup commands (in a target project with plugin installed)
/initial-setup # First time: full setup wizard
/project-init # New project: quick config
/project-sync # After repo move: sync config
# Run projman commands
/sprint-plan # Start sprint planning
/sprint-status # Check progress
/review # Pre-close code quality review
/test-check # Verify tests before close
/sprint-close # Complete sprint
```
## Repository Structure
```
leo-claude-mktplace/
├── .claude-plugin/
│ └── marketplace.json # Marketplace manifest
├── mcp-servers/ # SHARED MCP servers (v3.0.0+)
│ ├── gitea/ # Gitea MCP (issues, PRs, wiki)
│ └── netbox/ # NetBox MCP (CMDB)
├── plugins/
│ ├── projman/ # Sprint management
│ │ ├── .claude-plugin/plugin.json
│ │ ├── .mcp.json
│ │ ├── mcp-servers/gitea -> ../../../mcp-servers/gitea # SYMLINK
│ │ ├── commands/ # 12 commands (incl. setup)
│ │ ├── hooks/ # SessionStart mismatch detection
│ │ ├── agents/ # 4 agents
│ │ └── skills/label-taxonomy/
│ ├── git-flow/ # Git workflow automation
│ │ ├── .claude-plugin/plugin.json
│ │ ├── commands/ # 8 commands
│ │ └── agents/
│ ├── pr-review/ # Multi-agent PR review
│ │ ├── .claude-plugin/plugin.json
│ │ ├── .mcp.json
│ │ ├── mcp-servers/gitea -> ../../../mcp-servers/gitea # SYMLINK
│ │ ├── commands/ # 6 commands (incl. setup)
│ │ ├── hooks/ # SessionStart mismatch detection
│ │ └── agents/ # 5 agents
│ ├── clarity-assist/ # Prompt optimization (NEW v3.0.0)
│ │ ├── .claude-plugin/plugin.json
│ │ ├── commands/ # 2 commands
│ │ └── agents/
│ ├── doc-guardian/ # Documentation drift detection
│ ├── code-sentinel/ # Security scanning & refactoring
│ ├── claude-config-maintainer/
│ ├── cmdb-assistant/
│ └── project-hygiene/
├── scripts/
│ ├── setup.sh, post-update.sh
│ └── validate-marketplace.sh # Marketplace compliance validation
└── docs/
├── CANONICAL-PATHS.md # Single source of truth for paths
└── CONFIGURATION.md # Centralized configuration guide
```
## CRITICAL: Rules You MUST Follow
### File Operations
- **NEVER** create files in repository root unless listed in "Allowed Root Files"
- **NEVER** modify `.gitignore` without explicit permission
- **ALWAYS** use `.scratch/` for temporary/exploratory work
- **ALWAYS** verify paths against `docs/CANONICAL-PATHS.md` before creating files
### Plugin Development
- **plugin.json MUST be in `.claude-plugin/` directory** (not plugin root)
- **Every plugin MUST be listed in marketplace.json**
- **MCP servers are SHARED at root** with symlinks from plugins
- **MCP server venv path**: `${CLAUDE_PLUGIN_ROOT}/mcp-servers/{name}/.venv/bin/python`
- **CLI tools forbidden** - Use MCP tools exclusively (never `tea`, `gh`, etc.)
### Hooks (Valid Events Only)
`PreToolUse`, `PostToolUse`, `UserPromptSubmit`, `SessionStart`, `SessionEnd`, `Notification`, `Stop`, `SubagentStop`, `PreCompact`
**INVALID:** `task-completed`, `file-changed`, `git-commit-msg-needed`
### Allowed Root Files
`CLAUDE.md`, `README.md`, `LICENSE`, `CHANGELOG.md`, `.gitignore`, `.env.example`
### Allowed Root Directories
`.claude/`, `.claude-plugin/`, `.claude-plugins/`, `.scratch/`, `docs/`, `hooks/`, `mcp-servers/`, `plugins/`, `scripts/`
## Architecture
### Four-Agent Model (projman)
| Agent | Personality | Responsibilities |
|-------|-------------|------------------|
| **Planner** | Thoughtful, methodical | Sprint planning, architecture analysis, issue creation, lesson search |
| **Orchestrator** | Concise, action-oriented | Sprint execution, parallel batching, Git operations, lesson capture |
| **Executor** | Implementation-focused | Code implementation, branch management, MR creation |
| **Code Reviewer** | Thorough, practical | Pre-close quality review, security scan, test verification |
### MCP Server Tools (Gitea)
| Category | Tools |
|----------|-------|
| Issues | `list_issues`, `get_issue`, `create_issue`, `update_issue`, `add_comment` |
| Labels | `get_labels`, `suggest_labels`, `create_label` |
| Milestones | `list_milestones`, `get_milestone`, `create_milestone`, `update_milestone` |
| Dependencies | `list_issue_dependencies`, `create_issue_dependency`, `get_execution_order` |
| Wiki | `list_wiki_pages`, `get_wiki_page`, `create_wiki_page`, `create_lesson`, `search_lessons` |
| **Pull Requests** | `list_pull_requests`, `get_pull_request`, `get_pr_diff`, `get_pr_comments`, `create_pr_review`, `add_pr_comment` *(NEW v3.0.0)* |
| Validation | `validate_repo_org`, `get_branch_protection` |
### Hybrid Configuration
| Level | Location | Purpose |
|-------|----------|---------|
| System | `~/.config/claude/gitea.env` | Credentials (GITEA_API_URL, GITEA_API_TOKEN) |
| Project | `.env` in project root | Repository specification (GITEA_ORG, GITEA_REPO) |
**Note:** `GITEA_ORG` is at project level since different projects may belong to different organizations.
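Concretely, the two layers might look like the sketch below — the variable names are the ones listed above, while the values are illustrative placeholders, not this repository's actual configuration:
```bash
# ~/.config/claude/gitea.env  (system level: credentials, shared by all projects)
GITEA_API_URL=https://gitea.example.com      # illustrative URL
GITEA_API_TOKEN=replace-with-your-token

# .env in the project root (project level: repository specification)
GITEA_ORG=example-org
GITEA_REPO=example-repo
```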
### Branch-Aware Security
| Branch Pattern | Mode | Capabilities |
|----------------|------|--------------|
| `development`, `feat/*` | Development | Full access |
| `staging` | Staging | Read-only code, can create issues |
| `main`, `master` | Production | Read-only, emergency only |
## Label Taxonomy
43 labels total: 27 organization + 16 repository
**Organization:** Agent/2, Complexity/3, Efforts/5, Priority/4, Risk/3, Source/4, Type/6
**Repository:** Component/9, Tech/7
Sync with `/labels-sync` command.
## Lessons Learned System
Stored in Gitea Wiki under `lessons-learned/sprints/`.
**Workflow:**
1. Orchestrator captures at sprint close via MCP tools
2. Planner searches at sprint start using `search_lessons`
3. Tags enable cross-project discovery
## Common Operations
### Adding a New Plugin
1. Create `plugins/{name}/.claude-plugin/plugin.json`
2. Add entry to `.claude-plugin/marketplace.json` with category, tags, license
3. Create `README.md` and `claude-md-integration.md`
4. If using MCP server, create symlink: `ln -s ../../../mcp-servers/{server} plugins/{name}/mcp-servers/{server}`
5. Run `./scripts/validate-marketplace.sh`
6. Update `CHANGELOG.md`
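For step 1, a minimal `plugin.json` might look like the sketch below; the field set shown is illustrative, and `./scripts/validate-marketplace.sh` remains the authority on what is actually required:
```json
{
  "name": "my-plugin",
  "version": "0.1.0",
  "description": "One-line summary of what the plugin does"
}
```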
### Adding a Command to projman
1. Create `plugins/projman/commands/{name}.md`
2. Update `plugins/projman/README.md`
3. Update marketplace description if significant
### Validation
```bash
./scripts/validate-marketplace.sh # Validates all manifests
```
## Path Verification Protocol
**Before creating any file:**
1. Read `docs/CANONICAL-PATHS.md`
2. List all paths to be created/modified
3. Verify each against canonical paths
4. If not in canonical paths, STOP and ask
## Documentation Index
| Document | Purpose |
|----------|---------|
| `docs/CANONICAL-PATHS.md` | **Single source of truth** for paths |
| `docs/COMMANDS-CHEATSHEET.md` | All commands quick reference with workflow examples |
| `docs/CONFIGURATION.md` | Centralized setup guide |
| `docs/UPDATING.md` | Update guide for the marketplace |
| `plugins/projman/CONFIGURATION.md` | Quick reference (links to central) |
| `plugins/projman/README.md` | Projman full documentation |
## Versioning and Changelog Rules
### Version Display
**The marketplace version is displayed ONLY in the main `README.md` title.**
- Format: `# Leo Claude Marketplace - vX.Y.Z`
- Do NOT add version numbers to individual plugin documentation titles
- Do NOT add version numbers to configuration guides
- Do NOT add version numbers to CLAUDE.md or other docs
### Changelog Maintenance (MANDATORY)
**`CHANGELOG.md` is the authoritative source for version history.**
When releasing a new version:
1. Update main `README.md` title with new version
2. Update `CHANGELOG.md` with:
- Version number and date: `## [X.Y.Z] - YYYY-MM-DD`
- **Added**: New features, commands, files
- **Changed**: Modifications to existing functionality
- **Fixed**: Bug fixes
- **Removed**: Deleted features, files, deprecated items
3. Update `marketplace.json` metadata version
4. Update plugin `plugin.json` versions if plugin-specific changes
### Version Format
- Follow [Semantic Versioning](https://semver.org/): MAJOR.MINOR.PATCH
- MAJOR: Breaking changes
- MINOR: New features, backward compatible
- PATCH: Bug fixes, minor improvements
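Putting the changelog and versioning rules together, a release entry follows this shape (version, date, and items are placeholders):
```markdown
## [X.Y.Z] - 2026-02-15
### Added
- New command, file, or feature
### Fixed
- Short description of the bug fix
```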
---
**Last Updated:** 2026-01-20

.doc-guardian-queue Normal file

@@ -0,0 +1,27 @@
# Doc Guardian Queue - cleared after sync on 2026-02-02
2026-02-02T11:41:00 | .claude-plugin | /home/lmiranda/claude-plugins-work/.claude-plugin/marketplace.json | CLAUDE.md .claude-plugin/marketplace.json
2026-02-02T13:35:48 | skills | /home/lmiranda/claude-plugins-work/plugins/projman/skills/sprint-approval.md | README.md
2026-02-02T13:36:03 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/sprint-start.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:36:16 | agents | /home/lmiranda/claude-plugins-work/plugins/projman/agents/orchestrator.md | README.md CLAUDE.md
2026-02-02T13:39:07 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/rfc.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:39:15 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/setup.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:39:32 | skills | /home/lmiranda/claude-plugins-work/plugins/projman/skills/rfc-workflow.md | README.md
2026-02-02T13:43:14 | skills | /home/lmiranda/claude-plugins-work/plugins/projman/skills/rfc-templates.md | README.md
2026-02-02T13:44:55 | skills | /home/lmiranda/claude-plugins-work/plugins/projman/skills/sprint-lifecycle.md | README.md
2026-02-02T13:45:04 | skills | /home/lmiranda/claude-plugins-work/plugins/projman/skills/label-taxonomy/labels-reference.md | README.md
2026-02-02T13:45:14 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/sprint-plan.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:45:48 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/review.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:46:07 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/sprint-close.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:46:21 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/sprint-status.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:46:38 | agents | /home/lmiranda/claude-plugins-work/plugins/projman/agents/planner.md | README.md CLAUDE.md
2026-02-02T13:46:57 | agents | /home/lmiranda/claude-plugins-work/plugins/projman/agents/code-reviewer.md | README.md CLAUDE.md
2026-02-02T13:49:13 | commands | /home/lmiranda/claude-plugins-work/plugins/viz-platform/commands/design-gate.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:49:24 | commands | /home/lmiranda/claude-plugins-work/plugins/data-platform/commands/data-gate.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:49:35 | skills | /home/lmiranda/claude-plugins-work/plugins/projman/skills/domain-consultation.md | README.md
2026-02-02T13:50:04 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/validation_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-02-02T13:50:59 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/server.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-02-02T13:51:32 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/tests/test_validation_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-02-02T13:51:49 | skills | /home/lmiranda/claude-plugins-work/plugins/contract-validator/skills/validation-rules.md | README.md
2026-02-02T13:52:07 | skills | /home/lmiranda/claude-plugins-work/plugins/contract-validator/skills/mcp-tools-reference.md | README.md
2026-02-02T13:59:09 | skills | /home/lmiranda/claude-plugins-work/plugins/projman/skills/progress-tracking.md | README.md
2026-02-02T14:01:34 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/test.md | docs/COMMANDS-CHEATSHEET.md README.md

.env Normal file

@@ -0,0 +1 @@
GITEA_REPO=personal-projects/leo-claude-mktplace

.gitignore vendored

@@ -84,6 +84,13 @@ Thumbs.db
# Claude Code
.claude/settings.local.json
.claude/history/
+.claude/backups/
+# Doc Guardian transient files
+.doc-guardian-queue
+# Development convenience links
+.marketplaces-link
# Logs
logs/

.mcp.json Normal file

@@ -0,0 +1,24 @@
{
"mcpServers": {
"gitea": {
"command": "./mcp-servers/gitea/run.sh",
"args": []
},
"netbox": {
"command": "./mcp-servers/netbox/run.sh",
"args": []
},
"viz-platform": {
"command": "./mcp-servers/viz-platform/run.sh",
"args": []
},
"data-platform": {
"command": "./mcp-servers/data-platform/run.sh",
"args": []
},
"contract-validator": {
"command": "./mcp-servers/contract-validator/run.sh",
"args": []
}
}
}

CHANGELOG.md

@@ -4,46 +4,416 @@ All notable changes to the Leo Claude Marketplace will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
-## [5.4.0] - 2026-01-28
+## [Unreleased]
+---
+## [6.0.0] - YYYY-MM-DD
### Added
-#### Sprint 7: Multi-Model Agent Support
-Configurable model selection for agents with inheritance chain.
-**Model Configuration:**
-- Agent-level `model` field in YAML frontmatter (opus|sonnet|haiku)
-- Plugin-level `defaultModel` in plugin.json
-- Inheritance: Agent → Plugin → System default (sonnet)
-**Recommended Model Assignments:**
-| Model | Use Case | Agents |
-|-------|----------|--------|
-| **Opus** | Complex reasoning, security analysis | planner, code-reviewer, security-reviewer, data-analysis |
-| **Sonnet** | Implementation, coordination | orchestrator, executor, layout-builder, data-ingestion |
-| **Haiku** | Quick validation | component-check, agent-check |
+#### Plan-Then-Batch Skill Optimization (projman)
+New execution pattern that separates cognitive work from mechanical API execution, reducing skill-related token consumption by ~76-83% during sprint workflows.
+- **`skills/batch-execution.md`** — New skill defining the plan-then-batch protocol:
+- Phase 1: Cognitive work with all skills loaded
+- Phase 2: Execution manifest (structured plan of all API operations)
+- Phase 3: Batch execute API calls using only frontmatter skills
+- Phase 4: Batch report with success/failure summary
+- Error handling: continue on individual failures, report at end
- **Frontmatter skill promotion:**
- Planner agent: `mcp-tools-reference` and `batch-execution` promoted to frontmatter (auto-injected, zero re-read cost)
- Orchestrator agent: same promotion
- Eliminates per-operation skill file re-reads during API execution loops
- **Phase-based skill loading:**
- Planner: 3 phases (validation → analysis → approval) with explicit "read once" instructions
- Orchestrator: 2 phases (startup → dispatch) with same pattern
- New `## Skill Loading Protocol` section replaces flat `## Skills to Load` in agent files
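The manifest format itself is defined in `skills/batch-execution.md` and is not reproduced here; as a rough illustration only, the Phase 2 output could be a list of pre-resolved MCP calls along these lines (the JSON shape and parameter names are assumptions, not the skill's actual schema — only the tool names come from the Gitea MCP tool list):
```json
[
  {"tool": "create_issue", "params": {"title": "Example issue", "labels": ["Type/Feature"], "milestone": 7}},
  {"tool": "create_issue_dependency", "params": {"issue": 102, "depends_on": 101}}
]
```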
### Changed
- **`planning-workflow.md`** — Steps 8-10 restructured:
- Step 8: "Draft Issue Specifications" (no API calls — resolve all parameters first)
- Step 8a: "Batch Execute Issue Creation" (tight API loop, frontmatter skills only)
- Step 9: Merged into Step 8a (dependencies created in batch)
- Step 10: Milestone creation moved before batch (must exist for assignment)
- **Agent matrix updated:**
- Planner: `body text (14)` → `frontmatter (2) + body text (12)`
- Orchestrator: `body text (12)` → `frontmatter (2) + body text (10)`
- **`docs/CONFIGURATION.md`** — New "Phase-Based Skill Loading" subsection documenting the pattern
### Token Impact
| Scenario | Before | After | Savings |
|----------|--------|-------|---------|
| 6-issue sprint (planning) | ~23,800 lines | ~5,600 lines | ~76% |
| 10-issue sprint (planning) | ~35,000 lines | ~7,000 lines | ~80% |
| 8-issue status updates (orchestrator) | ~9,600 lines | ~1,600 lines | ~83% |
---
## [5.10.0] - 2026-02-03
### Added
#### NetBox MCP Server: Module-Based Tool Filtering
Environment-variable-driven module filtering to reduce token consumption:
- **New config option**: `NETBOX_ENABLED_MODULES` in `~/.config/claude/netbox.env`
- **Token savings**: ~15,000 tokens (from ~19,810 to ~4,500) with recommended config
- **Default behavior**: All modules enabled if env var unset (backward compatible)
- **Startup logging**: Shows enabled modules and tool count on initialization
- **Routing guard**: Clear error message when calling disabled module's tools
**Recommended configuration for cmdb-assistant users:**
```bash
NETBOX_ENABLED_MODULES=dcim,ipam,virtualization,extras
```
This enables ~43 tools covering all cmdb-assistant commands while staying well below the 25K token warning threshold.
### Fixed
#### cmdb-assistant Documentation: Incorrect Tool Names
Fixed documentation referencing non-existent `virtualization_*` tool names:
| File | Wrong | Correct |
|------|-------|---------|
| `claude-md-integration.md` | `virtualization_list_virtual_machines` | `virt_list_vms` |
| `claude-md-integration.md` | `virtualization_create_virtual_machine` | `virt_create_vm` |
| `cmdb-search.md` | `virtualization_list_virtual_machines` | `virt_list_vms` |
Also fixed NetBox README.md tool name references for virtualization, wireless, and circuits modules.
#### Gitea MCP Server: Standardized Build Backend
Changed `mcp-servers/gitea/pyproject.toml` from hatchling to setuptools:
- Matches all other MCP servers (contract-validator, viz-platform, data-platform)
- Updated license format to PEP 639 compliance
- Added pytest configuration for consistency
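Pieced together from the items above, the relevant `pyproject.toml` sections now look roughly like this (a sketch; the package and test paths are assumed, and unrelated metadata is omitted):
```toml
[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"

[project]
license = {text = "MIT"}   # PEP 639 format

[tool.setuptools.packages.find]
include = ["mcp_server*"]  # assumed package name

[tool.pytest.ini_options]
testpaths = ["tests"]      # assumed test directory, added for consistency
```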
---
## [5.9.0] - 2026-02-03
### Added
#### Plugin Installation Scripts
New scripts for installing marketplace plugins into consumer projects:
- **`scripts/install-plugin.sh`** — Install a plugin to a consumer project
- Adds MCP server entry to target's `.mcp.json` (if plugin has MCP server)
- Appends integration snippet to target's `CLAUDE.md`
- Idempotent: safe to run multiple times
- Validates plugin exists and target path is valid
- **`scripts/uninstall-plugin.sh`** — Remove a plugin from a consumer project
- Removes MCP server entry from `.mcp.json`
- Removes integration section from `CLAUDE.md`
- **`scripts/list-installed.sh`** — Show installed plugins in a project
- Lists fully installed, partially installed, and available plugins
- Shows plugin versions and descriptions
**Usage:**
```bash
./scripts/install-plugin.sh data-platform ~/projects/personal-portfolio
./scripts/list-installed.sh ~/projects/personal-portfolio
./scripts/uninstall-plugin.sh data-platform ~/projects/personal-portfolio
```
**Documentation:** `docs/CONFIGURATION.md` updated with "Installing Plugins to Consumer Projects" section.
### Fixed
#### Plugin Installation Scripts — MCP Mapping & Section Markers
**MCP Server Mapping:**
- Added `mcp_servers` field to plugin.json for plugins that use shared MCP servers
- `projman` and `pr-review` now correctly install `gitea` MCP server
- `cmdb-assistant` now correctly installs `netbox` MCP server
- Scripts read MCP server names from plugin.json instead of assuming plugin name = server name
**CLAUDE.md Section Markers:**
- Install script now wraps integration content with HTML comment markers:
`<!-- BEGIN marketplace-plugin: {name} -->` and `<!-- END marketplace-plugin: {name} -->`
- Uninstall script uses markers for precise section removal (no more code block false positives)
- Backward compatible: falls back to legacy header detection for pre-marker installations
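Concretely, an installed section in a consumer project's `CLAUDE.md` ends up wrapped like this (the inner content is whatever the plugin's `claude-md-integration.md` provides; `data-platform` is just an example name):
```markdown
<!-- BEGIN marketplace-plugin: data-platform -->
...integration snippet from claude-md-integration.md...
<!-- END marketplace-plugin: data-platform -->
```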
**Plugins updated with `mcp_servers` field:**
- `projman` → `["gitea"]`
- `pr-review` → `["gitea"]`
- `cmdb-assistant` → `["netbox"]`
- `data-platform` → `["data-platform"]`
- `viz-platform` → `["viz-platform"]`
- `contract-validator` → `["contract-validator"]`
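As a sketch, the new field sits alongside the existing `plugin.json` metadata (other fields omitted; exact placement assumed):
```json
{
  "name": "projman",
  "mcp_servers": ["gitea"]
}
```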
#### Agent Model Selection
Per-agent model selection using Claude Code's now-supported `model` frontmatter field.
- All 25 marketplace agents assigned appropriate model (`sonnet`, `haiku`, or `inherit`)
- Model assignment based on reasoning depth, tool complexity, and latency requirements
- Documentation added to `CLAUDE.md` and `docs/CONFIGURATION.md`
**Supported values:** `sonnet` (default), `opus`, `haiku`, `inherit`
**Model assignments:**
| Model | Agent Types |
|-------|-------------|
| sonnet | Planner, Orchestrator, Executor, Code Reviewer, Coordinator, Security Reviewers, Data Advisor, Design Reviewer, etc. |
| haiku | Maintainability Auditor, Test Validator, Component Check, Theme Setup, Git Assistant, Data Ingestion, Agent Check |
### Fixed
#### Agent Frontmatter Standardization
- Fixed viz-platform and data-platform agents using non-standard `agent:` field (now `name:`)
- Removed non-standard `triggers:` field from domain agents (trigger info already in agent body)
- Added missing frontmatter to 13 agents across pr-review, viz-platform, contract-validator, clarity-assist, git-flow, doc-guardian, code-sentinel, cmdb-assistant, and data-platform
- All 25 agents now have consistent `name`, `description`, and `model` fields
### Changed
#### Agent Frontmatter Hardening v3
Comprehensive agent-level configuration using Claude Code's supported frontmatter fields.
**permissionMode added to all 25 agents:**
- `bypassPermissions` (1): Executor — full autonomy with code-sentinel + Code Reviewer safety nets
- `acceptEdits` (7): Orchestrator, Data Ingestion, Theme Setup, Refactor Advisor, Doc Analyzer, Git Assistant, Maintainer
- `default` (7): Planner, Code Reviewer, Data Advisor, Layout Builder, Full Validation, Clarity Coach, CMDB Assistant
- `plan` (10): All pr-review agents (5), Data Analysis, Design Reviewer, Component Check, Agent Check, Security Reviewer (code-sentinel)
**disallowedTools added to 12 agents:**
- All `plan`-mode agents (10) + Code Reviewer + Clarity Coach receive `disallowedTools: Write, Edit, MultiEdit`
- Enforces read-only contracts at platform level (defense-in-depth with `permissionMode`)
**Model promotions:**
- Planner: `sonnet` → `opus` (architectural reasoning benefits from deeper analysis)
- Code Reviewer: `sonnet` → `opus` (quality gate benefits from thorough review)
**skills frontmatter on 3 agents:**
- Executor: 7 safety-critical skills auto-injected (branch-security, runaway-detection, etc.)
- Code Reviewer: 4 review skills auto-injected
- Maintainer: 2 config skills auto-injected
- Body text `## Skills to Load` removed for these agents to avoid duplication
**Documentation:**
-- `docs/MODEL-RECOMMENDATIONS.md` - Central model selection guide
-- `docs/CONFIGURATION.md` - Added agent model configuration section
-- `CLAUDE.md` - Added model config quick reference
-**Agent Updates (7 files):**
-- Opus: planner, code-reviewer (projman), security-reviewer (pr-review, code-sentinel), data-analysis
-- Haiku: component-check (viz-platform), agent-check (contract-validator)
-**Plugin Manifest Updates (6 files):**
-- All plugins with agents now have `defaultModel: sonnet`
-- Version bumps: projman 3.3.0, pr-review 1.1.0, data-platform 1.1.0, viz-platform 1.1.0, code-sentinel 1.0.1, contract-validator 1.1.0
-**Validation:**
-- `scripts/validate-marketplace.sh` - Added model field validation (v5.4.0+)
-**Sprint Completed:**
-- Milestone: Sprint 7 - Multi-Model Agent Support
+- `CLAUDE.md` and `docs/CONFIGURATION.md` updated with complete agent configuration matrix
+- New subsections: permissionMode Guide, disallowedTools Guide, skills Frontmatter Guide
+---
+## [5.8.0] - 2026-02-02
+### Added
+#### claude-config-maintainer v1.2.0 - Settings Audit Feature
New commands for auditing and optimizing `settings.local.json` permission configurations:
- **`/config-audit-settings`** — Audit `settings.local.json` permissions with 100-point scoring across redundancy, coverage, safety alignment, and profile fit
- **`/config-optimize-settings`** — Apply permission optimizations with dry-run, named profiles (`conservative`, `reviewed`, `autonomous`), and consolidation modes
- **`/config-permissions-map`** — Generate Mermaid diagram of review layer coverage and permission gaps
- **`skills/settings-optimization.md`** — Comprehensive skill for permission pattern analysis, consolidation rules, review-layer-aware recommendations, and named profiles
**Key Features:**
- Settings Efficiency Score (100 points) alongside existing CLAUDE.md score
- Review layer verification — agent reads `hooks/hooks.json` from installed plugins before recommending auto-allow patterns
- Three named profiles: `conservative` (prompts for most writes), `reviewed` (for projects with ≥2 review layers), `autonomous` (sandboxed environments)
- Pattern consolidation detection: duplicates, subsets, merge candidates, stale entries, conflicts
#### Projman Hardening Sprint
Targeted improvements to safety gates, command structure, lifecycle tracking, and cross-plugin contracts.
**Sprint Lifecycle State Machine:**
- New `skills/sprint-lifecycle.md` - defines valid states and transitions via milestone metadata
- States: idle -> Sprint/Planning -> Sprint/Executing -> Sprint/Reviewing -> idle
- All sprint commands check and set lifecycle state on entry/exit
- Out-of-order calls produce warnings with guidance, `--force` override available
**Sprint Dispatch Log:**
- Orchestrator now maintains a structured dispatch log during execution
- Records task dispatch, completion, failure, gate checks, and resume events
- Enables timeline reconstruction after interrupted sessions
**Gate Contract Versioning:**
- Gate commands (`/design-gate`, `/data-gate`) declare `gate_contract: v1` in frontmatter
- `domain-consultation.md` Gate Command Reference includes expected contract version
- `validate_workflow_integration` now checks contract version compatibility
- Mismatch produces WARNING, missing contract produces INFO suggestion
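In a gate command's frontmatter this is a one-line declaration; the sketch below assumes the command file otherwise carries ordinary `name`/`description` frontmatter, which may differ from the actual files:
```yaml
---
name: design-gate
description: Binary pass/fail design system validation gate
gate_contract: v1
---
```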
**Shared Visual Output Skill:**
- New `skills/visual-output.md` - single source of truth for projman visual headers
- All 4 agent files reference the skill instead of inline templates
- Phase Registry maps agents to emoji and phase names
### Changed
**Sprint Approval Gate Hardened:**
- Approval is now a hard block, not a warning (was "recommended", now required)
- `--force` flag added to bypass in emergencies (logged to milestone)
- Consistent language across sprint-approval.md, sprint-start.md, and orchestrator.md
**RFC Commands Normalized:**
- 5 individual commands (`/rfc-create`, `/rfc-list`, `/rfc-review`, `/rfc-approve`, `/rfc-reject`) consolidated into `/rfc create|list|review|approve|reject`
- `/clear-cache` absorbed into `/setup --clear-cache`
- Command count reduced from 17 to 12
**`/test` Command Documentation Expanded:**
- Sprint integration section (pre-close verification workflow)
- Concrete usage examples for all modes
- Edge cases table
- DO NOT rules for both modes
### Removed
- `plugins/projman/commands/rfc-create.md` (replaced by `/rfc create`)
- `plugins/projman/commands/rfc-list.md` (replaced by `/rfc list`)
- `plugins/projman/commands/rfc-review.md` (replaced by `/rfc review`)
- `plugins/projman/commands/rfc-approve.md` (replaced by `/rfc approve`)
- `plugins/projman/commands/rfc-reject.md` (replaced by `/rfc reject`)
- `plugins/projman/commands/clear-cache.md` (replaced by `/setup --clear-cache`)
---
## [5.7.1] - 2026-02-02
### Added
- **contract-validator**: New `validate_workflow_integration` MCP tool — validates domain plugins expose required advisory interfaces (gate command, review command, advisory agent)
- **contract-validator**: New `MISSING_INTEGRATION` issue type for workflow integration validation
### Fixed
- `scripts/setup.sh` banner version updated from v5.1.0 to v5.7.1
### Reverted
- **marketplace.json**: Removed `integrates_with` field — Claude Code schema does not support custom plugin fields (causes marketplace load failure)
---
## [5.7.0] - 2026-02-02
### Added
- **data-platform**: New `data-advisor` agent for data integrity, schema, and dbt compliance validation
- **data-platform**: New `data-integrity-audit.md` skill defining audit rules, severity levels, and scanning strategies
- **data-platform**: New `/data-gate` command for binary pass/fail data integrity gates (projman integration)
- **data-platform**: New `/data-review` command for comprehensive data integrity audits
### Changed
- Domain Advisory Pattern now fully operational for both Viz and Data domains
- projman orchestrator `Domain/Data` gates now resolve to live `/data-gate` command (previously fell through to "gate unavailable" warning)
---
## [5.6.0] - 2026-02-01
### Added
- **Domain Advisory Pattern**: Cross-plugin integration enabling projman to consult domain-specific plugins during sprint lifecycle
- **projman**: New `domain-consultation.md` skill for domain detection and gate protocols
- **viz-platform**: New `design-reviewer` agent for design system compliance auditing
- **viz-platform**: New `design-system-audit.md` skill defining audit rules and severity levels
- **viz-platform**: New `/design-review` command for detailed design system audits
- **viz-platform**: New `/design-gate` command for binary pass/fail validation gates
- **Labels**: New `Domain/Viz` and `Domain/Data` labels for domain routing
### Changed
- **projman planner**: Now loads domain-consultation skill and performs domain detection during planning
- **projman orchestrator**: Now runs domain gates before marking Domain/* labeled issues as complete
---
## [5.5.0] - 2026-02-01
### Added
#### RFC System for Feature Tracking
Wiki-based Request for Comments (RFC) system for capturing, reviewing, and tracking feature ideas through their lifecycle.
**New Commands (projman):**
- `/rfc-create` - Create new RFC from conversation or clarified specification
- `/rfc-list` - List all RFCs grouped by status (Draft, Review, Approved, Implementing, Implemented, Rejected, Stale)
- `/rfc-review` - Submit Draft RFC for maintainer review
- `/rfc-approve` - Approve RFC, making it available for sprint planning
- `/rfc-reject` - Reject RFC with documented reason
**RFC Lifecycle:**
- Draft → Review → Approved → Implementing → Implemented
- Terminal states: Rejected, Superseded
- Stale: Drafts with no activity >90 days
**Sprint Integration:**
- `/sprint-plan` now detects approved RFCs and offers selection
- `/sprint-close` updates RFC status to Implemented on completion
- RFC-Index wiki page auto-maintained with status sections
**Clarity-Assist Integration:**
- Vagueness hook now detects feature request patterns
- Suggests `/rfc-create` for feature ideas
- `/clarify` offers RFC creation after delivering clarified spec
**New MCP Tool:**
- `allocate_rfc_number` - Allocates next sequential RFC number
**New Skills:**
- `skills/rfc-workflow.md` - RFC lifecycle and state transitions
- `skills/rfc-templates.md` - RFC page template specifications
### Changed
#### Sprint 8: Hook Efficiency Quick Wins
Performance optimizations for plugin hooks to reduce overhead on every command.
**Changes:**
- **viz-platform:** Remove SessionStart hook that only echoed "loaded" (zero value)
- **git-flow:** Add early exit to `branch-check.sh` for non-git commands (skip JSON parsing)
- **git-flow:** Add early exit to `commit-msg-check.sh` for non-git commands (skip Python spawn)
- **project-hygiene:** Add 60-second cooldown to `cleanup.sh` (reduce find operations)
**Impact:** Hooks now exit immediately for 90%+ of Bash commands that don't need processing.
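The guard itself is roughly the following — a sketch only, assuming the hook receives the tool payload as JSON on stdin (the real scripts live under `plugins/git-flow/hooks/` and may differ in detail):
```bash
#!/usr/bin/env bash
# Sketch of the early-exit guard, not the actual branch-check.sh.
payload="$(cat)"

# Cheap substring test before any JSON parsing or Python spawn:
# if the Bash command being checked never mentions git, there is nothing to do.
if ! grep -q 'git' <<<"$payload"; then
  exit 0
fi

# ...full JSON parsing and branch checks happen only past this point...
```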
**Issues:** #321, #322, #323, #324
**PR:** #334
---
## [5.4.1] - 2026-01-30
### Removed
#### Multi-Model Agent Support (REVERTED)
**Reason:** Claude Code does not support `defaultModel` in plugin.json or `model` in agent frontmatter. The schema validation rejects these as "Unrecognized key".
**Removed:**
- `defaultModel` field from all plugin.json files (6 plugins)
- `model` field references from agent frontmatter
- `docs/MODEL-RECOMMENDATIONS.md` - Deleted entirely
- Model configuration sections from `docs/CONFIGURATION.md` and `CLAUDE.md`
**Lesson:** Do not implement features without verifying they are supported by Claude Code's plugin schema.
---
## [5.4.0] - 2026-01-28 [REVERTED]
### Added (NOW REMOVED - See 5.4.1)
#### Sprint 7: Multi-Model Agent Support
~~Configurable model selection for agents with inheritance chain.~~
**This feature was reverted in 5.4.1 - Claude Code does not support these fields.**
Original sprint work:
- Issues: #302, #303, #304, #305, #306
- PRs: #307, #308
-- Wiki: [Change V5.4.0: Multi-Model Support (Sprint 7 Implementation)](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/Change-V5.4.0%3A-Multi-Model-Support-%28Sprint-7-Implementation%29)
---

CLAUDE.md

@@ -1,5 +1,37 @@
# CLAUDE.md
## ⛔ MANDATORY BEHAVIOR RULES - READ FIRST
**These rules are NON-NEGOTIABLE. Violating them wastes the user's time and money.**
### 1. WHEN USER ASKS YOU TO CHECK SOMETHING - CHECK EVERYTHING
- Search ALL locations, not just where you think it is
- Check cache directories: `~/.claude/plugins/cache/`
- Check installed: `~/.claude/plugins/marketplaces/`
- Check source directories
- **NEVER say "no" or "that's not the issue" without exhaustive verification**
### 2. WHEN USER SAYS SOMETHING IS WRONG - BELIEVE THEM
- The user knows their system better than you
- Investigate thoroughly before disagreeing
- **Your confidence is often wrong. User's instincts are often right.**
### 3. NEVER SAY "DONE" WITHOUT VERIFICATION
- Run the actual command/script to verify
- Show the output to the user
- **"Done" means VERIFIED WORKING, not "I made changes"**
### 4. SHOW EXACTLY WHAT USER ASKS FOR
- If user asks for messages, show the MESSAGES
- If user asks for code, show the CODE
- **Do not interpret or summarize unless asked**
**FAILURE TO FOLLOW THESE RULES = WASTED USER TIME = UNACCEPTABLE**
---
This file provides guidance to Claude Code when working with code in this repository.
## ⛔ RULES - READ FIRST
@@ -35,18 +67,86 @@ Run `./scripts/verify-hooks.sh`. If changes affect MCP servers or hooks, inform
| **File creation** | Only in allowed paths. Use `.scratch/` for temp work. Verify against `docs/CANONICAL-PATHS.md` |
| **plugin.json location** | Must be in `.claude-plugin/` directory |
| **Hooks** | Use `hooks/hooks.json` (auto-discovered). Never inline in plugin.json |
-| **MCP servers** | Shared at root with symlinks. Use MCP tools, never CLI (`tea`, `gh`) |
+| **MCP servers** | Defined in root `.mcp.json`. Use MCP tools, never CLI (`tea`, `gh`) |
| **Allowed root files** | `CLAUDE.md`, `README.md`, `LICENSE`, `CHANGELOG.md`, `.gitignore`, `.env.example` |
**Valid hook events:** `PreToolUse`, `PostToolUse`, `UserPromptSubmit`, `SessionStart`, `SessionEnd`, `Notification`, `Stop`, `SubagentStop`, `PreCompact`
### ⛔ MANDATORY: Before Any Code Change
**Claude MUST show this checklist BEFORE editing any file:**
#### 1. Impact Search Results
Run and show output of:
```bash
grep -rn "PATTERN" --include="*.sh" --include="*.md" --include="*.json" --include="*.py" | grep -v ".git"
```
#### 2. Files That Will Be Affected
Numbered list of every file to be modified, with the specific change for each.
#### 3. Files Searched But Not Changed (and why)
Proof that related files were checked and determined unchanged.
#### 4. Documentation That References This
List of docs that mention this feature/script/function.
**User verifies this list before Claude proceeds. If Claude skips this, STOP IMMEDIATELY.**
#### After Changes
Run the same grep and show results proving no references remain unaddressed.
---
## ⚠️ Development Context: We Build AND Use These Plugins
**This is a self-referential project.** We are:
1. **BUILDING** a plugin marketplace (source code in `plugins/`)
2. **USING** the installed marketplace to build it (dogfooding)
### Plugins ACTIVELY USED in This Project
These plugins are installed and should be used during development:
| Plugin | Used For |
|--------|----------|
| **projman** | Sprint planning, issue management, lessons learned |
| **git-flow** | Commits, branch management |
| **pr-review** | Pull request reviews |
| **doc-guardian** | Documentation drift detection |
| **code-sentinel** | Security scanning, refactoring |
| **clarity-assist** | Prompt clarification |
| **claude-config-maintainer** | CLAUDE.md optimization |
| **contract-validator** | Cross-plugin compatibility |
### Plugins NOT Used Here (Development Only)
These plugins exist in source but are **NOT relevant** to this project's workflow:
| Plugin | Why Not Used |
|--------|--------------|
| **data-platform** | For data engineering projects (pandas, PostgreSQL, dbt) |
| **viz-platform** | For dashboard projects (Dash, Plotly) |
| **cmdb-assistant** | For infrastructure projects (NetBox) |
**Do NOT suggest** `/ingest`, `/profile`, `/chart`, `/cmdb-*` commands - they don't apply here.
### Key Distinction
| Context | Path | What To Do |
|---------|------|------------|
| **Editing plugin source** | `~/claude-plugins-work/plugins/` | Modify code, add features |
| **Using installed plugins** | `~/.claude/plugins/marketplaces/` | Run commands like `/sprint-plan` |
When user says "run /sprint-plan", use the INSTALLED plugin.
When user says "fix the sprint-plan command", edit the SOURCE code.
---
## Project Overview
**Repository:** leo-claude-mktplace
-**Version:** 5.4.0
+**Version:** 5.9.0
**Status:** Production Ready
A plugin marketplace for Claude Code containing:
@@ -61,7 +161,7 @@ A plugin marketplace for Claude Code containing:
| `code-sentinel` | Security scanning and code refactoring tools | 1.0.1 |
| `claude-config-maintainer` | CLAUDE.md optimization and maintenance | 1.0.0 |
| `cmdb-assistant` | NetBox CMDB integration for infrastructure management | 1.2.0 |
-| `data-platform` | pandas, PostgreSQL, and dbt integration for data engineering | 1.1.0 |
+| `data-platform` | pandas, PostgreSQL, and dbt integration for data engineering | 1.3.0 |
| `viz-platform` | DMC validation, Plotly charts, and theming for dashboards | 1.1.0 |
| `contract-validator` | Cross-plugin compatibility validation and agent verification | 1.1.0 |
| `project-hygiene` | Post-task cleanup automation via hooks | 0.1.0 |
@@ -73,26 +173,33 @@ A plugin marketplace for Claude Code containing:
./scripts/validate-marketplace.sh
# After updates
-./scripts/post-update.sh # Rebuild venvs, verify symlinks
+./scripts/post-update.sh # Rebuild venvs
```
-### Plugin Commands by Category
+### Plugin Commands - USE THESE in This Project
| Category | Commands |
|----------|----------|
-| **Setup** | `/initial-setup`, `/project-init`, `/project-sync` |
+| **Setup** | `/setup` (modes: `--full`, `--quick`, `--sync`) |
-| **Sprint** | `/sprint-plan`, `/sprint-start`, `/sprint-status`, `/sprint-close`, `/sprint-diagram` |
+| **Sprint** | `/sprint-plan`, `/sprint-start`, `/sprint-status` (with `--diagram`), `/sprint-close` |
-| **Quality** | `/review`, `/test-check`, `/test-gen` |
+| **Quality** | `/review`, `/test` (modes: `run`, `gen`) |
| **Versioning** | `/suggest-version` |
| **PR Review** | `/pr-review`, `/pr-summary`, `/pr-findings`, `/pr-diff` |
| **Docs** | `/doc-audit`, `/doc-sync`, `/changelog-gen`, `/doc-coverage`, `/stale-docs` |
| **Security** | `/security-scan`, `/refactor`, `/refactor-dry` |
| **Config** | `/config-analyze`, `/config-optimize`, `/config-diff`, `/config-lint` |
-| **Data** | `/ingest`, `/profile`, `/schema`, `/explain`, `/lineage`, `/lineage-viz`, `/run`, `/dbt-test`, `/data-quality` |
-| **Visualization** | `/component`, `/chart`, `/chart-export`, `/dashboard`, `/theme`, `/theme-new`, `/theme-css`, `/accessibility-check`, `/breakpoints` |
| **Validation** | `/validate-contracts`, `/check-agent`, `/list-interfaces`, `/dependency-graph` |
-| **CMDB** | `/cmdb-search`, `/cmdb-device`, `/cmdb-ip`, `/cmdb-site`, `/cmdb-audit`, `/cmdb-register`, `/cmdb-sync`, `/cmdb-topology`, `/change-audit`, `/ip-conflicts` |
+| **Debug** | `/debug` (modes: `report`, `review`) |
-| **Debug** | `/debug-report`, `/debug-review` |
### Plugin Commands - NOT RELEVANT to This Project
These commands are being developed but don't apply to this project's workflow:
| Category | Commands | For Projects Using |
|----------|----------|-------------------|
| **Data** | `/ingest`, `/profile`, `/schema`, `/lineage`, `/dbt-test` | pandas, PostgreSQL, dbt |
| **Visualization** | `/component`, `/chart`, `/dashboard`, `/theme` | Dash, Plotly dashboards |
| **CMDB** | `/cmdb-search`, `/cmdb-device`, `/cmdb-sync` | NetBox infrastructure |
## Repository Structure
@@ -100,46 +207,40 @@ A plugin marketplace for Claude Code containing:
leo-claude-mktplace/
├── .claude-plugin/
│ └── marketplace.json # Marketplace manifest
-├── mcp-servers/ # SHARED MCP servers (v3.0.0+)
+├── .mcp.json # MCP server configuration (all servers)
+├── mcp-servers/ # SHARED MCP servers
│ ├── gitea/ # Gitea MCP (issues, PRs, wiki)
│ ├── netbox/ # NetBox MCP (CMDB)
│ ├── data-platform/ # pandas, PostgreSQL, dbt
│ ├── viz-platform/ # DMC validation, charts, themes
+│ └── contract-validator/ # Plugin compatibility validation
├── plugins/
│ ├── projman/ # Sprint management
│ │ ├── .claude-plugin/plugin.json
-│ │ ├── .mcp.json
-│ │ ├── mcp-servers/gitea -> ../../../mcp-servers/gitea # SYMLINK
-│ │ ├── commands/ # 14 commands (incl. setup, debug, suggest-version)
-│ │ ├── hooks/ # SessionStart: mismatch detection + sprint suggestions
+│ │ ├── commands/ # 12 commands
+│ │ ├── hooks/ # SessionStart: mismatch detection
│ │ ├── agents/ # 4 agents
-│ │ └── skills/label-taxonomy/
+│ │ └── skills/ # 17 reusable skill files
│ ├── git-flow/ # Git workflow automation
│ │ ├── .claude-plugin/plugin.json
│ │ ├── commands/ # 8 commands
│ │ └── agents/
│ ├── pr-review/ # Multi-agent PR review
│ │ ├── .claude-plugin/plugin.json
-│ │ ├── .mcp.json
-│ │ ├── mcp-servers/gitea -> ../../../mcp-servers/gitea # SYMLINK
-│ │ ├── commands/ # 6 commands (incl. setup)
+│ │ ├── commands/ # 6 commands
│ │ ├── hooks/ # SessionStart mismatch detection
│ │ └── agents/ # 5 agents
│ ├── clarity-assist/ # Prompt optimization
│ │ ├── .claude-plugin/plugin.json
│ │ ├── commands/ # 2 commands
│ │ └── agents/
-│ ├── data-platform/ # Data engineering (NEW v4.0.0)
+│ ├── data-platform/ # Data engineering
│ │ ├── .claude-plugin/plugin.json
-│ │ ├── .mcp.json
-│ │ ├── mcp-servers/ # pandas, postgresql, dbt MCPs
│ │ ├── commands/ # 7 commands
│ │ ├── hooks/ # SessionStart PostgreSQL check
│ │ └── agents/ # 2 agents
-│ ├── viz-platform/ # Visualization (NEW v4.0.0)
+│ ├── viz-platform/ # Visualization
│ │ ├── .claude-plugin/plugin.json
-│ │ ├── .mcp.json
-│ │ ├── mcp-servers/ # viz-platform MCP
│ │ ├── commands/ # 7 commands
│ │ ├── hooks/ # SessionStart DMC check
│ │ └── agents/ # 3 agents
@@ -147,6 +248,7 @@ leo-claude-mktplace/
│ ├── code-sentinel/ # Security scanning & refactoring
│ ├── claude-config-maintainer/
│ ├── cmdb-assistant/
+│ ├── contract-validator/
│ └── project-hygiene/
├── scripts/
│ ├── setup.sh, post-update.sh
@@ -169,6 +271,61 @@ leo-claude-mktplace/
| **Executor** | Implementation-focused | Code implementation, branch management, MR creation |
| **Code Reviewer** | Thorough, practical | Pre-close quality review, security scan, test verification |
### Agent Frontmatter Configuration
Agents specify their configuration in frontmatter using Claude Code's supported fields. Reference: https://code.claude.com/docs/en/sub-agents
**Supported frontmatter fields:**
| Field | Required | Default | Description |
|-------|----------|---------|-------------|
| `name` | Yes | — | Unique identifier, lowercase + hyphens |
| `description` | Yes | — | When Claude should delegate to this subagent |
| `model` | No | `inherit` | `sonnet`, `opus`, `haiku`, or `inherit` |
| `permissionMode` | No | `default` | Controls permission prompts: `default`, `acceptEdits`, `dontAsk`, `bypassPermissions`, `plan` |
| `disallowedTools` | No | none | Comma-separated tools to remove from agent's toolset |
| `skills` | No | none | Comma-separated skills auto-injected into context at startup |
| `hooks` | No | none | Lifecycle hooks scoped to this subagent |
**Complete agent matrix:**
| Plugin | Agent | `model` | `permissionMode` | `disallowedTools` | `skills` |
|--------|-------|---------|-------------------|--------------------|----------|
| projman | planner | opus | default | — | frontmatter (2) + body text (12) |
| projman | orchestrator | sonnet | acceptEdits | — | frontmatter (2) + body text (10) |
| projman | executor | sonnet | bypassPermissions | — | frontmatter (7) |
| projman | code-reviewer | opus | default | Write, Edit, MultiEdit | frontmatter (4) |
| pr-review | coordinator | sonnet | plan | Write, Edit, MultiEdit | — |
| pr-review | security-reviewer | sonnet | plan | Write, Edit, MultiEdit | — |
| pr-review | performance-analyst | sonnet | plan | Write, Edit, MultiEdit | — |
| pr-review | maintainability-auditor | haiku | plan | Write, Edit, MultiEdit | — |
| pr-review | test-validator | haiku | plan | Write, Edit, MultiEdit | — |
| data-platform | data-advisor | sonnet | default | — | — |
| data-platform | data-analysis | sonnet | plan | Write, Edit, MultiEdit | — |
| data-platform | data-ingestion | haiku | acceptEdits | — | — |
| viz-platform | design-reviewer | sonnet | plan | Write, Edit, MultiEdit | — |
| viz-platform | layout-builder | sonnet | default | — | — |
| viz-platform | component-check | haiku | plan | Write, Edit, MultiEdit | — |
| viz-platform | theme-setup | haiku | acceptEdits | — | — |
| contract-validator | full-validation | sonnet | default | — | — |
| contract-validator | agent-check | haiku | plan | Write, Edit, MultiEdit | — |
| code-sentinel | security-reviewer | sonnet | plan | Write, Edit, MultiEdit | — |
| code-sentinel | refactor-advisor | sonnet | acceptEdits | — | — |
| doc-guardian | doc-analyzer | sonnet | acceptEdits | — | — |
| clarity-assist | clarity-coach | sonnet | default | Write, Edit, MultiEdit | — |
| git-flow | git-assistant | haiku | acceptEdits | — | — |
| claude-config-maintainer | maintainer | sonnet | acceptEdits | — | frontmatter (2) |
| cmdb-assistant | cmdb-assistant | sonnet | default | — | — |
**Design principles:**
- `bypassPermissions` is granted to exactly ONE agent (Executor) which has code-sentinel PreToolUse hook + Code Reviewer downstream as safety nets.
- `plan` mode is assigned to all pure analysis agents (pr-review, read-only validators).
- `disallowedTools: Write, Edit, MultiEdit` provides defense-in-depth on agents that should never write files.
- `skills` frontmatter is used for agents with ≤7 skills where guaranteed loading is safety-critical. Agents with 8+ skills use body text `## Skills to Load` for selective loading.
- `hooks` (agent-scoped) is reserved for future use (v6.0+).
Override any field by editing the agent's `.md` file in `plugins/{plugin}/agents/`.
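Tying the matrix together, a single agent file combines these fields in its frontmatter roughly as follows (values taken from the planner row above; the description wording and skill names are illustrative):
```yaml
---
name: planner
description: Sprint planning, architecture analysis, and issue creation
model: opus
permissionMode: default
skills: mcp-tools-reference, batch-execution
---
```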
### MCP Server Tools (Gitea) ### MCP Server Tools (Gitea)
| Category | Tools | | Category | Tools |
@@ -177,7 +334,7 @@ leo-claude-mktplace/
| Labels | `get_labels`, `suggest_labels`, `create_label`, `create_label_smart` |
| Milestones | `list_milestones`, `get_milestone`, `create_milestone`, `update_milestone`, `delete_milestone` |
| Dependencies | `list_issue_dependencies`, `create_issue_dependency`, `remove_issue_dependency`, `get_execution_order` |
| Wiki | `list_wiki_pages`, `get_wiki_page`, `create_wiki_page`, `update_wiki_page`, `create_lesson`, `search_lessons`, `allocate_rfc_number` |
| **Pull Requests** | `list_pull_requests`, `get_pull_request`, `get_pr_diff`, `get_pr_comments`, `create_pr_review`, `add_pr_comment` |
| Validation | `validate_repo_org`, `get_branch_protection` |
@@ -190,21 +347,6 @@ leo-claude-mktplace/
**Note:** `GITEA_ORG` is at project level since different projects may belong to different organizations.
### Agent Model Configuration
Agents can specify preferred Claude models for cost/performance optimization:
| Model | Use For | Agents |
|-------|---------|--------|
| `opus` | Complex reasoning, security | planner, code-reviewer, security-reviewer |
| `sonnet` | Implementation, coordination | orchestrator, executor, most agents |
| `haiku` | Simple validation | component-check, agent-check |
**Configuration:** Add `model: opus|sonnet|haiku` to agent frontmatter, or `defaultModel` to plugin.json.
**Inheritance:** Agent → Plugin default → System default (sonnet)
See `docs/MODEL-RECOMMENDATIONS.md` for detailed guidance.
### Branch-Aware Security
| Branch Pattern | Mode | Capabilities |
@@ -213,6 +355,20 @@ See `docs/MODEL-RECOMMENDATIONS.md` for detailed guidance.
| `staging` | Staging | Read-only code, can create issues |
| `main`, `master` | Production | Read-only, emergency only |
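For orientation, a minimal sketch of how a hook might map the current branch onto these modes. Only the `staging` and `main`/`master` rows are taken from the table above; the fallback for other branches is an assumption, not the plugin's actual hook logic.

```bash
# Illustrative only - not the plugin's actual branch-security hook.
branch="$(git branch --show-current)"
case "$branch" in
  staging)      mode="staging" ;;      # read-only code, can create issues
  main|master)  mode="production" ;;   # read-only, emergency only
  *)            mode="development" ;;  # assumed fallback for feature/development branches
esac
echo "Branch '$branch' -> $mode mode"
```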
### RFC System
Wiki-based Request for Comments system for tracking feature ideas from proposal through implementation.
**RFC Wiki Naming:**
- RFC pages: `RFC-NNNN: Short Title` (4-digit zero-padded)
- Index page: `RFC-Index` (auto-maintained)
**Lifecycle:** Draft → Review → Approved → Implementing → Implemented
**Integration with Sprint Planning:**
- `/sprint-plan` detects approved RFCs and offers selection
- `/sprint-close` updates RFC status on completion
## Label Taxonomy
43 labels total: 27 organization + 16 repository
@@ -237,16 +393,15 @@ Stored in Gitea Wiki under `lessons-learned/sprints/`.
1. Create `plugins/{name}/.claude-plugin/plugin.json`
2. Add entry to `.claude-plugin/marketplace.json` with category, tags, license
3. Create `claude-md-integration.md`
4. If using new MCP server, add to root `mcp-servers/` and update `.mcp.json`
5. Run `./scripts/validate-marketplace.sh`
6. Update `CHANGELOG.md`
### Adding a Command to projman
1. Create `plugins/projman/commands/{name}.md`
2. Update marketplace description if significant
### Validation
@@ -273,7 +428,6 @@ Stored in Gitea Wiki under `lessons-learned/sprints/`.
| `docs/DEBUGGING-CHECKLIST.md` | Systematic troubleshooting guide |
| `docs/UPDATING.md` | Update guide for the marketplace |
| `plugins/projman/CONFIGURATION.md` | Projman quick reference (links to central) |
| `plugins/projman/README.md` | Projman full documentation |
## Installation Paths
@@ -295,12 +449,12 @@ See `docs/DEBUGGING-CHECKLIST.md` for systematic troubleshooting.
| Symptom | Likely Cause | Fix |
|---------|--------------|-----|
| "X MCP servers failed" | Missing venv in installed path | `cd ~/.claude/plugins/marketplaces/leo-claude-mktplace && ./scripts/setup.sh` |
| MCP tools not available | Venv missing or .mcp.json misconfigured | Run `/debug report` to diagnose |
| Changes not taking effect | Editing source, not installed | Reinstall plugin or edit installed path |
**Debug Commands:**
- `/debug report` - Run full diagnostics, create issue if needed
- `/debug review` - Investigate and propose fixes
## Versioning Workflow
@@ -354,4 +508,4 @@ The script will:
---
**Last Updated:** 2026-02-02

View File

@@ -1,4 +1,4 @@
# Leo Claude Marketplace - v5.10.0
A collection of Claude Code plugins for project management, infrastructure automation, and development workflows.
@@ -6,12 +6,13 @@ A collection of Claude Code plugins for project management, infrastructure autom
### Development & Project Management
#### [projman](./plugins/projman)
**Sprint Planning and Project Management**
AI-guided sprint planning with full Gitea integration. Transforms a proven 15-sprint workflow into a distributable plugin.
- Four-agent model: Planner, Orchestrator, Executor, Code Reviewer
- Plan-then-batch execution: skills loaded once per phase, API calls batched for ~80% token savings
- Intelligent label suggestions from 43-label taxonomy
- Lessons learned capture via Gitea Wiki
- Native issue dependencies with parallel execution
@@ -19,9 +20,9 @@ AI-guided sprint planning with full Gitea integration. Transforms a proven 15-sp
- Branch-aware security (development/staging/production)
- Pre-sprint-close code quality review and test verification
**Commands:** `/sprint-plan`, `/sprint-start`, `/sprint-status`, `/sprint-close`, `/labels-sync`, `/setup`, `/review`, `/test`, `/debug`, `/suggest-version`, `/proposal-status`, `/rfc`
#### [git-flow](./plugins/git-flow) *NEW in v3.0.0*
**Git Workflow Automation**
Smart git operations with intelligent commit messages and branch management.
@@ -34,7 +35,7 @@ Smart git operations with intelligent commit messages and branch management.
**Commands:** `/commit`, `/commit-push`, `/commit-merge`, `/commit-sync`, `/branch-start`, `/branch-cleanup`, `/git-status`, `/git-config`
#### [pr-review](./plugins/pr-review) *NEW in v3.0.0*
**Multi-Agent PR Review**
Comprehensive pull request review using specialized agents.
@@ -46,14 +47,14 @@ Comprehensive pull request review using specialized agents.
**Commands:** `/pr-review`, `/pr-summary`, `/pr-findings`, `/pr-diff`, `/initial-setup`, `/project-init`, `/project-sync`
#### [claude-config-maintainer](./plugins/claude-config-maintainer)
**CLAUDE.md and Settings Optimization**
Analyze, optimize, and create CLAUDE.md configuration files. Audit and optimize settings.local.json permissions.
**Commands:** `/analyze`, `/optimize`, `/init`, `/config-diff`, `/config-lint`, `/config-audit-settings`, `/config-optimize-settings`, `/config-permissions-map`
#### [contract-validator](./plugins/contract-validator) *NEW in v5.0.0*
**Cross-Plugin Compatibility Validation**
Validate plugin marketplaces for command conflicts, tool overlaps, and broken agent references.
@@ -68,7 +69,7 @@ Validate plugin marketplaces for command conflicts, tool overlaps, and broken ag
### Productivity
#### [clarity-assist](./plugins/clarity-assist) *NEW in v3.0.0*
**Prompt Optimization with ND Accommodations**
Transform vague requests into clear specifications using structured methodology.
@@ -79,21 +80,21 @@ Transform vague requests into clear specifications using structured methodology.
**Commands:** `/clarify`, `/quick-clarify`
#### [doc-guardian](./plugins/doc-guardian)
**Documentation Lifecycle Management**
Automatic documentation drift detection and synchronization.
**Commands:** `/doc-audit`, `/doc-sync`, `/changelog-gen`, `/doc-coverage`, `/stale-docs`
#### [project-hygiene](./plugins/project-hygiene)
**Post-Task Cleanup Automation**
Hook-based cleanup that runs after Claude completes work.
### Security
#### [code-sentinel](./plugins/code-sentinel)
**Security Scanning & Refactoring**
Security vulnerability detection and code refactoring tools.
@@ -102,7 +103,7 @@ Security vulnerability detection and code refactoring tools.
### Infrastructure
#### [cmdb-assistant](./plugins/cmdb-assistant)
**NetBox CMDB Integration**
Full CRUD operations for network infrastructure management directly from Claude Code.
@@ -111,7 +112,7 @@ Full CRUD operations for network infrastructure management directly from Claude
### Data Engineering
#### [data-platform](./plugins/data-platform) *NEW in v4.0.0*
**pandas, PostgreSQL/PostGIS, and dbt Integration**
Comprehensive data engineering toolkit with persistent DataFrame storage.
@@ -122,11 +123,11 @@ Comprehensive data engineering toolkit with persistent DataFrame storage.
- 100k row limit with chunking support
- Auto-detection of dbt projects
**Commands:** `/ingest`, `/profile`, `/schema`, `/explain`, `/lineage`, `/lineage-viz`, `/run`, `/dbt-test`, `/data-quality`, `/data-review`, `/data-gate`, `/initial-setup`
### Visualization
#### [viz-platform](./plugins/viz-platform) *NEW in v4.0.0*
**Dash Mantine Components Validation and Theming**
Visualization toolkit with version-locked component validation and design token theming.
@@ -138,11 +139,26 @@ Visualization toolkit with version-locked component validation and design token
- 5 Page tools for multi-page app structure
- Dual theme storage: user-level and project-level
**Commands:** `/chart`, `/chart-export`, `/dashboard`, `/theme`, `/theme-new`, `/theme-css`, `/component`, `/accessibility-check`, `/breakpoints`, `/design-review`, `/design-gate`, `/initial-setup`
## Domain Advisory Pattern
The marketplace supports cross-plugin domain advisory integration:
- **Domain Detection**: projman automatically detects when issues involve specialized domains (frontend/viz, data engineering)
- **Acceptance Criteria**: Domain-specific acceptance criteria are added to issues during planning
- **Execution Gates**: Domain validation gates (`/design-gate`, `/data-gate`) run before issue completion
- **Extensible**: New domains can be added by creating advisory agents and gate commands
**Current Domains:**
| Domain | Plugin | Gate Command |
|--------|--------|--------------|
| Visualization | viz-platform | `/design-gate` |
| Data | data-platform | `/data-gate` |
## MCP Servers
MCP servers are **shared at repository root** and configured in `.mcp.json`.
### Gitea MCP Server (shared)
@@ -200,7 +216,7 @@ Cross-plugin compatibility validation tools.
| Category | Tools |
|----------|-------|
| Parse | `parse_plugin_interface`, `parse_claude_md_agents` |
| Validation | `validate_compatibility`, `validate_agent_refs`, `validate_data_flow`, `validate_workflow_integration` |
| Report | `generate_compatibility_report`, `list_issues` |
## Installation
@@ -297,7 +313,7 @@ After installing plugins, the `/plugin` command may show `(no content)` - this i
| clarity-assist | `/clarity-assist:clarify` |
| doc-guardian | `/doc-guardian:doc-audit` |
| code-sentinel | `/code-sentinel:security-scan` |
| claude-config-maintainer | `/claude-config-maintainer:analyze` |
| cmdb-assistant | `/cmdb-assistant:cmdb-search` |
| data-platform | `/data-platform:ingest` |
| viz-platform | `/viz-platform:chart` |

View File

@@ -2,7 +2,7 @@
**This file defines ALL valid paths in this repository. No exceptions. No inference. No assumptions.**
Last Updated: 2026-01-30 (v5.4.1)
---
@@ -76,9 +76,6 @@ leo-claude-mktplace/
├── plugins/ # ALL plugins
│ ├── projman/ # Sprint management
│ │ ├── .claude-plugin/
│ │ ├── .mcp.json
│ │ ├── mcp-servers/
│ │ │ └── gitea -> ../../../mcp-servers/gitea # SYMLINK
│ │ ├── commands/
│ │ ├── agents/
│ │ ├── skills/
@@ -99,9 +96,6 @@ leo-claude-mktplace/
│ │ └── claude-md-integration.md
│ ├── cmdb-assistant/ # NetBox CMDB integration
│ │ ├── .claude-plugin/
│ │ ├── .mcp.json
│ │ ├── mcp-servers/
│ │ │ └── netbox -> ../../../mcp-servers/netbox # SYMLINK
│ │ ├── commands/
│ │ ├── agents/
│ │ └── claude-md-integration.md
@@ -114,61 +108,48 @@ leo-claude-mktplace/
│ │ ├── .claude-plugin/
│ │ ├── hooks/
│ │ └── claude-md-integration.md
│ ├── clarity-assist/
│ │ ├── .claude-plugin/
│ │ ├── commands/
│ │ ├── agents/
│ │ ├── skills/
│ │ └── claude-md-integration.md
│ ├── git-flow/
│ │ ├── .claude-plugin/
│ │ ├── commands/
│ │ ├── agents/
│ │ ├── skills/
│ │ └── claude-md-integration.md
│ ├── pr-review/
│ │ ├── .claude-plugin/
│ │ ├── .mcp.json
│ │ ├── mcp-servers/
│ │ │ └── gitea -> ../../../mcp-servers/gitea # SYMLINK
│ │ ├── commands/
│ │ ├── agents/
│ │ ├── skills/
│ │ └── claude-md-integration.md
│ ├── data-platform/
│ │ ├── .claude-plugin/
│ │ ├── .mcp.json
│ │ ├── mcp-servers/
│ │ │ └── data-platform -> ../../../mcp-servers/data-platform # SYMLINK
│ │ ├── commands/
│ │ ├── agents/
│ │ ├── hooks/
│ │ └── claude-md-integration.md
│ ├── contract-validator/
│ │ ├── .claude-plugin/
│ │ ├── .mcp.json
│ │ ├── mcp-servers/
│ │ │ └── contract-validator -> ../../../mcp-servers/contract-validator # SYMLINK
│ │ ├── commands/
│ │ ├── agents/
│ │ └── claude-md-integration.md
│ └── viz-platform/
│ ├── .claude-plugin/
│ ├── .mcp.json
│ ├── mcp-servers/
│ │ └── viz-platform -> ../../../mcp-servers/viz-platform # SYMLINK
│ ├── commands/
│ ├── agents/
│ ├── hooks/
│ └── claude-md-integration.md
├── scripts/ # Setup and maintenance scripts
│ ├── setup.sh # Initial setup (create venvs, config templates)
│ ├── post-update.sh # Post-update (clear cache, show changelog)
│ ├── check-venv.sh # Check if venvs exist (read-only)
│ ├── validate-marketplace.sh # Marketplace compliance validation
│ ├── verify-hooks.sh # Verify all hooks use correct event types
│ ├── setup-venvs.sh # Setup MCP server venvs (create only, never delete)
│ ├── venv-repair.sh # Repair broken venv symlinks
│ └── release.sh # Release automation with version bumping
├── CLAUDE.md
├── README.md
@@ -189,29 +170,53 @@ leo-claude-mktplace/
| Plugin manifest | `plugins/{plugin-name}/.claude-plugin/plugin.json` | `plugins/projman/.claude-plugin/plugin.json` |
| Plugin commands | `plugins/{plugin-name}/commands/` | `plugins/projman/commands/` |
| Plugin agents | `plugins/{plugin-name}/agents/` | `plugins/projman/agents/` |
| Plugin skills | `plugins/{plugin-name}/skills/` | `plugins/projman/skills/` |
| Plugin integration snippet | `plugins/{plugin-name}/claude-md-integration.md` | `plugins/projman/claude-md-integration.md` |
### MCP Server Paths
MCP servers are **shared at repository root** and configured in `.mcp.json`.
| Context | Pattern | Example |
|---------|---------|---------|
| MCP configuration | `.mcp.json` | `.mcp.json` (at repo root) |
| Shared MCP server | `mcp-servers/{server}/` | `mcp-servers/gitea/` |
| MCP server code | `mcp-servers/{server}/mcp_server/` | `mcp-servers/gitea/mcp_server/` |
| MCP venv (local) | `mcp-servers/{server}/.venv/` | `mcp-servers/gitea/.venv/` |
**Note:** Plugins do NOT have their own `mcp-servers/` directories. All MCP servers are shared at root and configured via `.mcp.json`.
### MCP Venv Paths - CRITICAL
**Venvs live in a CACHE directory that SURVIVES marketplace updates.**
When checking for venvs, ALWAYS check in this order:
| Priority | Path | Survives Updates? |
|----------|------|-------------------|
| 1 (CHECK FIRST) | `~/.cache/claude-mcp-venvs/leo-claude-mktplace/{server}/.venv/` | YES |
| 2 (fallback) | `{marketplace}/mcp-servers/{server}/.venv/` | NO |
**Why cache first?**
- Marketplace directory gets WIPED on every update/reinstall
- Cache directory SURVIVES updates
- False "venv missing" errors waste hours of debugging
**Pattern for hooks checking venvs:**
```bash
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/{server}/.venv/bin/python"
LOCAL_VENV="$MARKETPLACE_ROOT/mcp-servers/{server}/.venv/bin/python"
if [[ -f "$CACHE_VENV" ]]; then
VENV_PATH="$CACHE_VENV"
elif [[ -f "$LOCAL_VENV" ]]; then
VENV_PATH="$LOCAL_VENV"
else
echo "venv missing"
fi
```
**See lesson learned:** [Startup Hooks Must Check Venv Cache Path First](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/lessons/patterns/startup-hooks-must-check-venv-cache-path-first)
### Documentation Paths
@@ -240,15 +245,12 @@ The symlink target is relative: `../../../mcp-servers/{server}`
2. Verify each path against patterns in this file
3. Show verification to user before proceeding
### Relative Path Calculation
From `.mcp.json` (at root) to `mcp-servers/gitea/`:
```
.mcp.json (at repository root)
→ Uses absolute installed path: ~/.claude/plugins/marketplaces/.../mcp-servers/gitea/run.sh
```
From `.claude-plugin/marketplace.json` to `plugins/projman/`:
@@ -267,30 +269,34 @@ Result: ./plugins/projman
| Wrong | Why | Correct |
|-------|-----|---------|
| `projman/` at root | Plugins go in `plugins/` | `plugins/projman/` |
| `mcp-servers/` inside plugins | MCP servers are shared at root | Use root `mcp-servers/` |
| Plugin-level `.mcp.json` | MCP config is at root | Use root `.mcp.json` |
| Hardcoding absolute paths in source | Breaks portability | Use relative paths or `${CLAUDE_PLUGIN_ROOT}` |
---
## Architecture Note
MCP servers are **shared at repository root** and configured in a single `.mcp.json` file.
**Benefits:**
- Single source of truth for each MCP server
- Updates apply to all plugins automatically
- No duplication - clean plugin structure
- Simple configuration in one place
**Configuration:**
All MCP servers are defined in `.mcp.json` at repository root:
```json
{
  "mcpServers": {
    "gitea": { "command": ".../mcp-servers/gitea/run.sh" },
    "netbox": { "command": ".../mcp-servers/netbox/run.sh" },
    "data-platform": { "command": ".../mcp-servers/data-platform/run.sh" },
    "viz-platform": { "command": ".../mcp-servers/viz-platform/run.sh" },
    "contract-validator": { "command": ".../mcp-servers/contract-validator/run.sh" }
  }
}
```
---
@@ -299,6 +305,7 @@ plugins/contract-validator/mcp-servers/contract-validator -> ../../../mcp-server
| Date | Change | By |
|------|--------|-----|
| 2026-01-30 | v5.5.0: Removed plugin-level mcp-servers symlinks - all MCP config now in root .mcp.json | Claude Code |
| 2026-01-26 | v5.0.0: Added contract-validator plugin and MCP server | Claude Code |
| 2026-01-26 | v4.1.0: Added viz-platform plugin and MCP server | Claude Code |
| 2026-01-25 | v4.0.0: Added data-platform plugin and MCP server | Claude Code |

View File

@@ -9,22 +9,18 @@ Quick reference for all commands in the Leo Claude Marketplace.
| Plugin | Command | Auto | Manual | Description |
|--------|---------|:----:|:------:|-------------|
| **projman** | `/sprint-plan` | | X | Start sprint planning with AI-guided architecture analysis and issue creation |
| **projman** | `/sprint-start` | | X | Begin sprint execution with dependency analysis and parallel task coordination (requires approval or `--force`) |
| **projman** | `/sprint-status` | | X | Check current sprint progress (add `--diagram` for Mermaid visualization) |
| **projman** | `/review` | | X | Pre-sprint-close code quality review (debug artifacts, security, error handling) |
| **projman** | `/test` | | X | Run tests (`/test run`) or generate tests (`/test gen <target>`) |
| **projman** | `/sprint-close` | | X | Complete sprint and capture lessons learned to Gitea Wiki |
| **projman** | `/labels-sync` | | X | Synchronize label taxonomy from Gitea |
| **projman** | `/setup` | | X | Auto-detect mode or use `--full`, `--quick`, `--sync`, `--clear-cache` |
| **projman** | *SessionStart hook* | X | | Detects git remote vs .env mismatch, warns to run `/setup --sync` |
| **projman** | `/debug` | | X | Diagnostics (`/debug report`) or investigate (`/debug review`) |
| **projman** | `/suggest-version` | | X | Analyze CHANGELOG and recommend semantic version bump |
| **projman** | `/proposal-status` | | X | View proposal and implementation hierarchy with status |
| **projman** | `/rfc` | | X | RFC lifecycle management (`/rfc create\|list\|review\|approve\|reject`) |
| **git-flow** | `/commit` | | X | Create commit with auto-generated conventional message |
| **git-flow** | `/commit-push` | | X | Commit and push to remote in one operation |
| **git-flow** | `/commit-merge` | | X | Commit current changes, then merge into target branch |
@@ -58,6 +54,9 @@ Quick reference for all commands in the Leo Claude Marketplace.
| **claude-config-maintainer** | `/config-init` | | X | Initialize new CLAUDE.md for a project |
| **claude-config-maintainer** | `/config-diff` | | X | Track CLAUDE.md changes over time with behavioral impact |
| **claude-config-maintainer** | `/config-lint` | | X | Lint CLAUDE.md for anti-patterns and best practices |
| **claude-config-maintainer** | `/config-audit-settings` | | X | Audit settings.local.json permissions (100-point score) |
| **claude-config-maintainer** | `/config-optimize-settings` | | X | Optimize permissions (profiles, consolidation, dry-run) |
| **claude-config-maintainer** | `/config-permissions-map` | | X | Visual review layer + permission coverage map |
| **cmdb-assistant** | `/initial-setup` | | X | Setup wizard for NetBox MCP server |
| **cmdb-assistant** | `/cmdb-search` | | X | Search NetBox for devices, IPs, sites |
| **cmdb-assistant** | `/cmdb-device` | | X | Manage network devices (create, view, update, delete) |
@@ -91,7 +90,11 @@ Quick reference for all commands in the Leo Claude Marketplace.
| **viz-platform** | `/chart-export` | | X | Export charts to PNG, SVG, PDF via kaleido |
| **viz-platform** | `/accessibility-check` | | X | Color blind validation (WCAG contrast ratios) |
| **viz-platform** | `/breakpoints` | | X | Configure responsive layout breakpoints |
| **viz-platform** | `/design-review` | | X | Detailed design system audits |
| **viz-platform** | `/design-gate` | | X | Binary pass/fail design system validation gates |
| **viz-platform** | *SessionStart hook* | X | | Checks DMC version (non-blocking warning) |
| **data-platform** | `/data-review` | | X | Comprehensive data integrity audits |
| **data-platform** | `/data-gate` | | X | Binary pass/fail data integrity gates |
| **contract-validator** | `/validate-contracts` | | X | Full marketplace compatibility validation |
| **contract-validator** | `/check-agent` | | X | Validate single agent definition |
| **contract-validator** | `/list-interfaces` | | X | Show all plugin interfaces |
@@ -104,7 +107,7 @@ Quick reference for all commands in the Leo Claude Marketplace.
| Category | Plugins | Primary Use |
|----------|---------|-------------|
| **Setup** | projman, pr-review, cmdb-assistant, data-platform | `/setup`, `/initial-setup` |
| **Task Planning** | projman, clarity-assist | Sprint management, requirement clarification |
| **Code Quality** | code-sentinel, pr-review | Security scanning, PR reviews |
| **Documentation** | doc-guardian, claude-config-maintainer | Doc sync, CLAUDE.md maintenance |
@@ -133,6 +136,22 @@ Quick reference for all commands in the Leo Claude Marketplace.
## Dev Workflow Examples
### Example 0: RFC-Driven Feature Development
Full workflow from idea to implementation using RFCs:
```
1. /clarify # Clarify the feature idea
2. /rfc create # Create RFC from clarified spec
... refine RFC content ...
3. /rfc review 0001 # Submit RFC for review
... review discussion ...
4. /rfc approve 0001 # Approve RFC for implementation
5. /sprint-plan # Select approved RFC for sprint
... implement feature ...
6. /sprint-close # Complete sprint, RFC marked Implemented
```
### Example 1: Starting a New Feature Sprint
A typical workflow for planning and executing a feature sprint:
@@ -145,9 +164,9 @@ A typical workflow for planning and executing a feature sprint:
5. /branch-start feat/...    # Create feature branch
... implement features ...
6. /commit                   # Commit with conventional message
7. /sprint-status --diagram  # Check progress with visualization
8. /review                   # Pre-close quality review
9. /test run                 # Verify test coverage
10. /sprint-close            # Capture lessons learned
```
@@ -194,7 +213,7 @@ Safe refactoring with preview:
1. /refactor-dry     # Preview opportunities
2. /security-scan    # Baseline security check
3. /refactor         # Apply improvements
4. /test run         # Verify nothing broke
5. /commit           # Commit with descriptive message
```
@@ -227,7 +246,7 @@ Working with data pipelines:
Setting up the marketplace for the first time:
```
1. /setup --full     # Full setup: MCP + system config + project
   # → Follow prompts for Gitea URL, org
   # → Add token manually when prompted
   # → Confirm repository name
@@ -241,7 +260,7 @@ Setting up the marketplace for the first time:
Adding a new project when system config exists:
```
1. /setup --quick    # Quick project setup (auto-detected)
   # → Confirms detected repo name
   # → Creates .env
2. /labels-sync      # Sync Gitea labels
@@ -277,4 +296,4 @@ Ensure credentials are configured in `~/.config/claude/gitea.env`, `~/.config/cl
---
*Last Updated: 2026-02-02*

View File

@@ -9,10 +9,10 @@ Centralized configuration documentation for all plugins and MCP servers in the L
**After installing the marketplace and plugins via Claude Code:**
```
/setup
```
The interactive wizard auto-detects what's needed and handles everything except manually adding your API tokens.
---
@@ -25,7 +25,8 @@ The interactive wizard handles everything except manually adding your API tokens
└─────────────────────────────────────────────────────────────────────────────┘
                              /setup --full
                        (or /setup auto-detects)
┌──────────────────────────────┼──────────────────────────────┐
▼                              ▼                              ▼
@@ -78,8 +79,8 @@ The interactive wizard handles everything except manually adding your API tokens
┌───────────────┴───────────────┐
▼                               ▼
/setup --quick                /setup
(explicit mode)      (auto-detects mode)
│                               │
│                    ┌──────────┴──────────┐
│                    ▼                     ▼
@@ -108,7 +109,7 @@ The interactive wizard handles everything except manually adding your API tokens
## What Runs Automatically vs User Interaction
### `/setup --full` - Full Setup
| Phase | Type | What Happens |
|-------|------|--------------|
@@ -120,7 +121,7 @@ The interactive wizard handles everything except manually adding your API tokens
| **6. Project Config** | Automated | Creates `.env` file, checks `.gitignore` |
| **7. Validation** | Automated | Tests API connectivity, shows summary |
### `/setup --quick` - Quick Project Setup
| Phase | Type | What Happens |
|-------|------|--------------|
@@ -131,23 +132,25 @@ The interactive wizard handles everything except manually adding your API tokens
---
## One Command, Three Modes
| Mode | When to Use | What It Does |
|------|-------------|--------------|
| `/setup` | Any time | Auto-detects: runs full, quick, or sync as needed |
| `/setup --full` | First time on a machine | Full setup: MCP server + system config + project config |
| `/setup --quick` | Starting a new project | Quick setup: project config only (assumes system is ready) |
| `/setup --sync` | After repo move/rename | Updates .env to match current git remote |
**Auto-detection logic:**
1. No system config → **full** mode
2. System config exists, no project config → **quick** mode
3. Both exist, git remote differs → **sync** mode
4. Both exist, match → already configured, offer to reconfigure
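A minimal sketch of that detection order, assuming the config locations documented elsewhere in this file (`~/.config/claude/gitea.env` for system config, a project-level `.env` with `GITEA_REPO=owner/repo`); the real `/setup` command may differ in the details.

```bash
# Sketch only - mirrors the documented detection order, not the command's actual code.
SYSTEM_CONFIG="$HOME/.config/claude/gitea.env"   # system-level config
PROJECT_CONFIG=".env"                            # project-level config (GITEA_REPO=owner/repo)

if [[ ! -f "$SYSTEM_CONFIG" ]]; then
  mode="full"
elif [[ ! -f "$PROJECT_CONFIG" ]]; then
  mode="quick"
else
  configured="$(grep -E '^GITEA_REPO=' "$PROJECT_CONFIG" | cut -d= -f2)"
  remote="$(git remote get-url origin 2>/dev/null)"
  if [[ -n "$remote" && "$remote" != *"$configured"* ]]; then
    mode="sync"
  else
    mode="already configured (offer to reconfigure)"
  fi
fi
echo "/setup would run in: $mode mode"
```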
**Typical workflow:**
1. Install plugin → run `/setup` (auto-runs full mode)
2. Start new project → run `/setup` (auto-runs quick mode)
3. Repository moved? → run `/setup` (auto-runs sync mode)
**Smart features:**
- `/initial-setup` detects existing system config and offers quick project setup
- All commands validate org/repo via Gitea API before saving (auto-fills if verified)
- SessionStart hook automatically detects git remote vs .env mismatches
---
@@ -179,7 +182,7 @@ This marketplace uses a **hybrid configuration** approach:
**Benefits:**
- Single token per service (update once, use everywhere)
- Easy multi-project setup (just run `/setup` in each project)
- Security (tokens never committed to git, never typed into AI chat)
- Project isolation (each project can override defaults)
@@ -187,7 +190,7 @@ This marketplace uses a **hybrid configuration** approach:
## Prerequisites
Before running `/setup`:
1. **Python 3.10+** installed
```bash
@@ -210,10 +213,10 @@ Before running `/initial-setup`:
Run the setup wizard in Claude Code:
```
/setup
```
The wizard will guide you through each step interactively and auto-detect the appropriate mode.
**Note:** After first-time setup, you'll need to restart your Claude Code session for MCP tools to become available.
@@ -382,10 +385,10 @@ PR_REVIEW_AUTO_SUBMIT=false
## Plugin Configuration Summary
| Plugin | System Config | Project Config | Setup Command |
|--------|---------------|----------------|---------------|
| **projman** | gitea.env | .env (GITEA_REPO=owner/repo) | `/setup` |
| **pr-review** | gitea.env | .env (GITEA_REPO=owner/repo) | `/initial-setup` |
| **git-flow** | git-flow.env (optional) | .env (optional) | None needed |
| **clarity-assist** | None | None | None needed |
| **cmdb-assistant** | netbox.env | None | `/initial-setup` |
@@ -395,6 +398,7 @@ PR_REVIEW_AUTO_SUBMIT=false
| **code-sentinel** | None | None | None needed |
| **project-hygiene** | None | None | None needed |
| **claude-config-maintainer** | None | None | None needed |
| **contract-validator** | None | None | `/initial-setup` |
---
@@ -402,21 +406,224 @@ PR_REVIEW_AUTO_SUBMIT=false
Once system-level config is set up, adding new projects is simple:
```
cd ~/projects/new-project
/setup
```
The command auto-detects that system config exists and runs quick project setup.
---
## Installing Plugins to Consumer Projects
The marketplace provides scripts to install plugins into consumer projects. This sets up the MCP server connections and adds CLAUDE.md integration snippets.
### Install a Plugin
```bash
cd /path/to/leo-claude-mktplace
./scripts/install-plugin.sh <plugin-name> <target-project-path>
```
**Examples:**
```bash
# Install data-platform to a portfolio project
./scripts/install-plugin.sh data-platform ~/projects/personal-portfolio
# Install multiple plugins
./scripts/install-plugin.sh viz-platform ~/projects/personal-portfolio
./scripts/install-plugin.sh projman ~/projects/personal-portfolio
```
**What it does:**
1. Validates the plugin exists in the marketplace
2. Adds MCP server entry to target's `.mcp.json` (if plugin has MCP server)
3. Appends integration snippet to target's `CLAUDE.md`
4. Reports changes and lists available commands
**After installation:** Restart your Claude Code session for MCP tools to become available.
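Roughly, the MCP step amounts to merging one entry into the target's `.mcp.json`. A hedged sketch of that merge with `jq` is shown below; the installed-path value is an assumption for illustration, not the script's actual logic.

```bash
# Illustration only - install-plugin.sh's real implementation may differ.
TARGET=~/projects/personal-portfolio
SERVER=data-platform
RUN_SH="$HOME/.claude/plugins/marketplaces/leo-claude-mktplace/mcp-servers/$SERVER/run.sh"  # assumed path

jq --arg name "$SERVER" --arg cmd "$RUN_SH" \
   '.mcpServers[$name] = {command: $cmd}' \
   "$TARGET/.mcp.json" > "$TARGET/.mcp.json.tmp" \
  && mv "$TARGET/.mcp.json.tmp" "$TARGET/.mcp.json"
```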
### Uninstall a Plugin
```bash
./scripts/uninstall-plugin.sh <plugin-name> <target-project-path>
```
Removes the MCP server entry and CLAUDE.md integration section.
### List Installed Plugins
```bash
./scripts/list-installed.sh <target-project-path>
```
Shows which marketplace plugins are installed, partially installed, or available.
**Output example:**
```
✓ Fully Installed:
PLUGIN VERSION DESCRIPTION
------ ------- -----------
data-platform 1.3.0 pandas, PostgreSQL, and dbt integration...
viz-platform 1.1.0 DMC validation, Plotly charts, and theming...
○ Available (not installed):
projman 3.4.0 Sprint planning and project management...
```
### Plugins with MCP Servers
Not all plugins have MCP servers. The install script handles this automatically:
| Plugin | Has MCP Server | Notes |
|--------|---------------|-------|
| data-platform | ✓ | pandas, PostgreSQL, dbt tools |
| viz-platform | ✓ | DMC validation, chart, theme tools |
| contract-validator | ✓ | Plugin compatibility validation |
| cmdb-assistant | ✓ (via netbox) | NetBox CMDB tools |
| projman | ✓ (via gitea) | Issue, wiki, PR tools |
| pr-review | ✓ (via gitea) | PR review tools |
| git-flow | ✗ | Commands only |
| doc-guardian | ✗ | Commands and hooks only |
| code-sentinel | ✗ | Commands and hooks only |
| clarity-assist | ✗ | Commands only |
### Script Requirements
- **jq** must be installed (`sudo apt install jq`)
- Scripts are idempotent (safe to run multiple times)
---
## Agent Frontmatter Configuration
Agents specify their configuration in frontmatter using Claude Code's supported fields. Reference: https://code.claude.com/docs/en/sub-agents
### Supported Frontmatter Fields
| Field | Required | Default | Description |
|-------|----------|---------|-------------|
| `name` | Yes | — | Unique identifier, lowercase + hyphens |
| `description` | Yes | — | When Claude should delegate to this subagent |
| `model` | No | `inherit` | `sonnet`, `opus`, `haiku`, or `inherit` |
| `permissionMode` | No | `default` | Controls permission prompts: `default`, `acceptEdits`, `dontAsk`, `bypassPermissions`, `plan` |
| `disallowedTools` | No | none | Comma-separated tools to remove from agent's toolset |
| `skills` | No | none | Comma-separated skills auto-injected into context at startup |
| `hooks` | No | none | Lifecycle hooks scoped to this subagent |
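Putting the fields together, a hypothetical agent frontmatter might look like this; the agent name and values are illustrative only and not taken from any plugin in the marketplace.

```yaml
---
name: example-reviewer                        # hypothetical agent, for illustration
description: Reviews generated code for style and safety issues
model: sonnet                                 # sonnet | opus | haiku | inherit
permissionMode: plan                          # read-only analysis, no modifications
disallowedTools: Write, Edit, MultiEdit       # defense-in-depth for a non-writing agent
skills: branch-security, runaway-detection    # auto-injected into context at startup
---
```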
### Complete Agent Matrix
| Plugin | Agent | `model` | `permissionMode` | `disallowedTools` | `skills` |
|--------|-------|---------|-------------------|--------------------|----------|
| projman | planner | opus | default | — | frontmatter (2) + body text (12) |
| projman | orchestrator | sonnet | acceptEdits | — | frontmatter (2) + body text (10) |
| projman | executor | sonnet | bypassPermissions | — | frontmatter (7) |
| projman | code-reviewer | opus | default | Write, Edit, MultiEdit | frontmatter (4) |
| pr-review | coordinator | sonnet | plan | Write, Edit, MultiEdit | — |
| pr-review | security-reviewer | sonnet | plan | Write, Edit, MultiEdit | — |
| pr-review | performance-analyst | sonnet | plan | Write, Edit, MultiEdit | — |
| pr-review | maintainability-auditor | haiku | plan | Write, Edit, MultiEdit | — |
| pr-review | test-validator | haiku | plan | Write, Edit, MultiEdit | — |
| data-platform | data-advisor | sonnet | default | — | — |
| data-platform | data-analysis | sonnet | plan | Write, Edit, MultiEdit | — |
| data-platform | data-ingestion | haiku | acceptEdits | — | — |
| viz-platform | design-reviewer | sonnet | plan | Write, Edit, MultiEdit | — |
| viz-platform | layout-builder | sonnet | default | — | — |
| viz-platform | component-check | haiku | plan | Write, Edit, MultiEdit | — |
| viz-platform | theme-setup | haiku | acceptEdits | — | — |
| contract-validator | full-validation | sonnet | default | — | — |
| contract-validator | agent-check | haiku | plan | Write, Edit, MultiEdit | — |
| code-sentinel | security-reviewer | sonnet | plan | Write, Edit, MultiEdit | — |
| code-sentinel | refactor-advisor | sonnet | acceptEdits | — | — |
| doc-guardian | doc-analyzer | sonnet | acceptEdits | — | — |
| clarity-assist | clarity-coach | sonnet | default | Write, Edit, MultiEdit | — |
| git-flow | git-assistant | haiku | acceptEdits | — | — |
| claude-config-maintainer | maintainer | sonnet | acceptEdits | — | frontmatter (2) |
| cmdb-assistant | cmdb-assistant | sonnet | default | — | — |
### Design Principles
- `bypassPermissions` is granted to exactly ONE agent (Executor) which has code-sentinel PreToolUse hook + Code Reviewer downstream as safety nets.
- `plan` mode is assigned to all pure analysis agents (pr-review, read-only validators).
- `disallowedTools: Write, Edit, MultiEdit` provides defense-in-depth on agents that should never write files.
- `skills` frontmatter is used for agents with ≤7 skills where guaranteed loading is safety-critical. Agents with 8+ skills use body text `## Skills to Load` for selective loading.
- `hooks` (agent-scoped) is reserved for future use (v6.0+).
Override any field by editing the agent's `.md` file in `plugins/{plugin}/agents/`.
### permissionMode Guide
| Value | Prompts for file ops? | Prompts for Bash? | Prompts for MCP? | Use when |
|-------|-----------------------|-------------------|-------------------|----------|
| `default` | Yes | Yes | No (MCP bypasses permissions) | You want full visibility |
| `acceptEdits` | No | Yes | No | Core job is file read/write, Bash visibility useful |
| `dontAsk` | No | No (most) | No | Even Bash prompts are friction |
| `bypassPermissions` | No | No | No | Agent has downstream safety layers |
| `plan` | N/A (read-only) | N/A (read-only) | No | Pure analysis, no modifications |
### disallowedTools Guide
Use `disallowedTools` to remove specific tools from an agent's toolset. This is a blacklist — the agent inherits all tools from the main thread, then the listed tools are removed.
Prefer `disallowedTools` over `tools` (whitelist) because:
- New MCP servers are automatically available without updating every agent.
- Less configuration to maintain.
- Easier to audit — you only list what's blocked.
Common patterns:
- `disallowedTools: Write, Edit, MultiEdit` — read-only agent, cannot modify files.
- `disallowedTools: Bash` — no shell access (rare, most agents need at least read-only Bash).
### skills Frontmatter Guide
The `skills` field auto-injects skill file contents into the agent's context window at startup. The agent does NOT need to read the files — they are already present.
**When to use frontmatter `skills`:**
- Agent has ≤7 skills.
- Skills are safety-critical (e.g., `branch-security`, `runaway-detection`).
- You need guaranteed loading — no risk of the agent skipping a skill.
**When to keep body text `## Skills to Load`:**
- Agent has 8+ skills (context window cost too high for full injection).
- Skills are situational — not all needed for every invocation.
- Agent benefits from selective loading based on the specific task.
Skill names in frontmatter are resolved relative to the plugin's `skills/` directory. Use the filename without the `.md` extension.
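For example, assuming a projman agent with the two skills mentioned above (names illustrative), the resolution looks like:

```
# Frontmatter:
skills: branch-security, runaway-detection

# Resolves to:
plugins/projman/skills/branch-security.md
plugins/projman/skills/runaway-detection.md
```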
### Phase-Based Skill Loading (Body Text)
For agents with 8+ skills, use **phase-based loading** in the agent body text. This structures skill reads into logical phases, with explicit instructions to read each skill exactly once.
**Pattern:**
```markdown
## Skill Loading Protocol
**Frontmatter skills (auto-injected, always available — DO NOT re-read these):**
- `skill-a` — description
- `skill-b` — description
**Phase 1 skills — read ONCE at session start:**
- skills/validation-skill.md
- skills/safety-skill.md
**Phase 2 skills — read ONCE when entering main work:**
- skills/workflow-skill.md
- skills/domain-skill.md
**CRITICAL: Read each skill file exactly ONCE. Do NOT re-read skill files between MCP API calls.**
```
**Benefits:**
- Frontmatter skills (always needed) are auto-injected — zero file read cost
- Phase skills are read once at the appropriate time — not re-read per API call
- `batch-execution` skill provides protocol for API-heavy phases
- ~76-83% reduction in skill-related token consumption for typical sprints
**Currently applied to:**
- Planner agent: 2 frontmatter + 12 body text (3 phases)
- Orchestrator agent: 2 frontmatter + 10 body text (2 phases)
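Putting the two halves together, a planner-style agent declares only its always-needed skills in frontmatter and leaves the phase skills to the body protocol above (illustrative sketch using the placeholder skill names from the pattern, not the shipped planner definition):
```yaml
---
name: planner-example        # hypothetical; the shipped planner has its own definition
model: opus
skills:                      # the always-needed skills, auto-injected at startup
  - skill-a
  - skill-b
# The remaining phase skills stay in the body under "## Skill Loading Protocol".
---
```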
---
@@ -424,12 +631,12 @@ Both approaches work. Use `/project-init` when you know the system is already co
 ### API Validation
-When running `/initial-setup`, `/project-init`, or `/project-sync`, the commands:
+When running `/setup`, the command:
-1. **Detect** organization and repository from git remote URL
+1. **Detects** organization and repository from git remote URL
-2. **Validate** via Gitea API: `GET /api/v1/repos/{org}/{repo}`
+2. **Validates** via Gitea API: `GET /api/v1/repos/{org}/{repo}`
-3. **Auto-fill** if repository exists and is accessible (no confirmation needed)
+3. **Auto-fills** if repository exists and is accessible (no confirmation needed)
-4. **Ask for confirmation** only if validation fails (404 or permission error)
+4. **Asks for confirmation** only if validation fails (404 or permission error)
 This catches typos and permission issues before saving configuration.
@@ -439,7 +646,7 @@ When you start a Claude Code session, a hook automatically:
 1. Reads `GITEA_REPO` (in `owner/repo` format) from `.env`
 2. Compares with current `git remote get-url origin`
-3. **Warns** if mismatch detected: "Repository location mismatch. Run `/project-sync` to update."
+3. **Warns** if mismatch detected: "Repository location mismatch. Run `/setup --sync` to update."
 This helps when you:
 - Move a repository to a different organization
@@ -501,9 +708,8 @@ If you get 401, regenerate your token in Gitea.
 # Check venv exists
 ls /path/to/mcp-servers/gitea/.venv
-# Reinstall if missing
+# If missing, create venv (do NOT delete existing venvs)
 cd /path/to/mcp-servers/gitea
-rm -rf .venv
 python3 -m venv .venv
 source .venv/bin/activate
 pip install -r requirements.txt
@@ -522,56 +728,6 @@ cat .env
---
## Agent Model Configuration
Agents can specify which Claude model to use for optimal cost/performance.
### Model Options
| Model | Use For | Cost |
|-------|---------|------|
| `opus` | Complex reasoning, security analysis | Highest |
| `sonnet` | Implementation, coordination (default) | Medium |
| `haiku` | Simple validation, quick checks | Lowest |
### Configuration Levels
**1. Agent-Level (highest priority)**
Add to agent frontmatter in `agents/*.md`:
```yaml
---
name: planner
description: Sprint planning agent
model: opus
---
```
**2. Plugin-Level (fallback)**
Add to `plugin.json`:
```json
{
"defaultModel": "sonnet"
}
```
**3. System Default**
If neither is specified, agents use `sonnet`.
### Inheritance Chain
```
Agent model → Plugin defaultModel → System default (sonnet)
```
See [Model Recommendations](MODEL-RECOMMENDATIONS.md) for detailed guidance on model selection by task type.
---
## Security Best Practices
1. **Never commit tokens**
@@ -585,7 +741,7 @@ See [Model Recommendations](MODEL-RECOMMENDATIONS.md) for detailed guidance on m
 3. **Never type tokens into AI chat**
 - Always edit config files directly in your editor
-- The `/initial-setup` wizard respects this
+- The `/setup` wizard respects this
 4. **Rotate tokens periodically**
 - Every 6-12 months

View File

@@ -73,25 +73,19 @@ cd $RUNTIME && ./scripts/setup.sh
 ---
-## Step 4: Verify Symlink Resolution
+## Step 4: Verify MCP Configuration
-Plugins use symlinks to shared MCP servers. Verify they resolve correctly:
+Check `.mcp.json` at marketplace root is correctly configured:
 ```bash
 RUNTIME=~/.claude/plugins/marketplaces/leo-claude-mktplace
-# Check symlinks exist and resolve
+# Check .mcp.json exists and has valid content
-readlink -f $RUNTIME/plugins/projman/mcp-servers/gitea
+cat $RUNTIME/.mcp.json | jq '.mcpServers | keys'
-readlink -f $RUNTIME/plugins/pr-review/mcp-servers/gitea
-readlink -f $RUNTIME/plugins/cmdb-assistant/mcp-servers/netbox
-# Should resolve to:
+# Should list: gitea, netbox, data-platform, viz-platform, contract-validator
-# $RUNTIME/mcp-servers/gitea
-# $RUNTIME/mcp-servers/netbox
 ```
-**If broken:** Symlinks are relative. If directory structure differs, they'll break.
 ---
 ## Step 5: Test MCP Server Startup
@@ -165,10 +159,8 @@ echo -e "\n=== Virtual Environments ==="
 [ -f "$RUNTIME/mcp-servers/gitea/.venv/bin/python" ] && echo "Gitea venv: OK" || echo "Gitea venv: MISSING"
 [ -f "$RUNTIME/mcp-servers/netbox/.venv/bin/python" ] && echo "NetBox venv: OK" || echo "NetBox venv: MISSING"
-echo -e "\n=== Symlinks ==="
+echo -e "\n=== MCP Configuration ==="
-[ -L "$RUNTIME/plugins/projman/mcp-servers/gitea" ] && echo "projman->gitea: OK" || echo "projman->gitea: MISSING"
+[ -f "$RUNTIME/.mcp.json" ] && echo ".mcp.json: OK" || echo ".mcp.json: MISSING"
-[ -L "$RUNTIME/plugins/pr-review/mcp-servers/gitea" ] && echo "pr-review->gitea: OK" || echo "pr-review->gitea: MISSING"
-[ -L "$RUNTIME/plugins/cmdb-assistant/mcp-servers/netbox" ] && echo "cmdb-assistant->netbox: OK" || echo "cmdb-assistant->netbox: MISSING"
 echo -e "\n=== Config Files ==="
 [ -f ~/.config/claude/gitea.env ] && echo "gitea.env: OK" || echo "gitea.env: MISSING"
@@ -182,7 +174,7 @@ echo -e "\n=== Config Files ==="
 | Issue | Symptom | Fix |
 |-------|---------|-----|
 | Missing venvs | "X MCP servers failed" | `cd ~/.claude/plugins/marketplaces/leo-claude-mktplace && ./scripts/setup.sh` |
-| Broken symlinks | MCP tools not available | Reinstall marketplace or manually recreate symlinks |
+| Missing .mcp.json | MCP tools not available | Check `.mcp.json` exists at marketplace root |
 | Wrong path edits | Changes don't take effect | Edit installed path or reinstall after source changes |
 | Missing credentials | MCP connection errors | Create `~/.config/claude/gitea.env` with API credentials |
 | Invalid hook events | Hooks don't fire | Use only valid event names (see Step 7) |
@@ -287,8 +279,8 @@ Error: Could not find a suitable TLS CA certificate bundle, invalid path:
 Use these commands for automated checking:
-- `/debug-report` - Run full diagnostics, create issue if problems found
+- `/debug report` - Run full diagnostics, create issue if problems found
-- `/debug-review` - Investigate existing diagnostic issues and propose fixes
+- `/debug review` - Investigate existing diagnostic issues and propose fixes
---

View File

@@ -1,149 +0,0 @@
# Model Recommendations
Guidelines for selecting Claude models (opus, sonnet, haiku) for plugin agents.
---
## Model Overview
| Model | Best For | Cost | Speed |
|-------|----------|------|-------|
| **Opus** | Complex reasoning, architecture decisions, security analysis | Highest | Slower |
| **Sonnet** | Implementation, coordination, standard tasks | Medium | Balanced |
| **Haiku** | Simple validation, quick checks, status queries | Lowest | Fastest |
---
## Task-Type Recommendations
| Task Type | Model | Rationale |
|-----------|-------|-----------|
| Architecture decisions | Opus | Requires deep reasoning, trade-off analysis |
| Security analysis | Opus | Critical thinking, vulnerability pattern recognition |
| Code review (quality) | Opus | Thorough analysis, edge case detection |
| Sprint planning | Opus | Strategic thinking, dependency analysis |
| Complex data analysis | Opus | Multi-step reasoning, insight generation |
| Code implementation | Sonnet | Fast, capable code generation |
| Coordination/dispatch | Sonnet | Task management, status tracking |
| Data transformation | Sonnet | ETL operations, query building |
| Documentation | Sonnet | Clear writing, structure |
| Simple validation | Haiku | Fast prop checks, schema validation |
| Status checks | Haiku | Quick queries, cost-effective |
| Quick verification | Haiku | Simple pass/fail checks |
---
## Agent Model Assignments
### projman (Sprint Management)
| Agent | Model | Rationale |
|-------|-------|-----------|
| `planner` | opus | Architecture decisions, issue structuring |
| `orchestrator` | sonnet | Coordination, parallel execution |
| `executor` | sonnet | Code implementation |
| `code-reviewer` | opus | Quality review, security analysis |
### pr-review (PR Analysis)
| Agent | Model | Rationale |
|-------|-------|-----------|
| `coordinator` | sonnet | Task dispatch, result aggregation |
| `security-reviewer` | opus | Security vulnerability detection |
| `performance-analyst` | sonnet | Pattern recognition |
| `maintainability-auditor` | sonnet | Code quality checks |
| `test-validator` | sonnet | Test coverage analysis |
### code-sentinel (Security & Refactoring)
| Agent | Model | Rationale |
|-------|-------|-----------|
| `security-reviewer` | opus | Security scanning |
| `refactor-advisor` | sonnet | Refactoring suggestions |
### data-platform (Data Engineering)
| Agent | Model | Rationale |
|-------|-------|-----------|
| `data-analysis` | opus | Complex data insights |
| `data-ingestion` | sonnet | ETL operations |
### viz-platform (Visualization)
| Agent | Model | Rationale |
|-------|-------|-----------|
| `component-check` | haiku | Simple prop validation |
| `layout-builder` | sonnet | UI construction |
| `theme-setup` | sonnet | Design configuration |
### contract-validator (Compatibility)
| Agent | Model | Rationale |
|-------|-------|-----------|
| `full-validation` | sonnet | Contract checking |
| `agent-check` | haiku | Quick verification |
---
## Configuration Schema
### Agent-Level (Frontmatter)
Add `model` field to agent YAML frontmatter:
```yaml
---
name: planner
description: Sprint planning agent
model: opus
---
```
**Valid values:** `opus`, `sonnet`, `haiku`
### Plugin-Level (plugin.json)
Add `defaultModel` for plugin-wide fallback:
```json
{
"name": "projman",
"version": "3.4.0",
"defaultModel": "sonnet"
}
```
---
## Inheritance Chain
Model selection follows this precedence:
```
1. Agent model field (highest priority)
↓ if not specified
2. Plugin defaultModel (plugin.json)
↓ if not specified
3. System default: sonnet
```
**Example:**
- Agent has `model: opus` → Uses opus
- Agent has no model, plugin has `defaultModel: sonnet` → Uses sonnet
- Neither specified → Uses sonnet (system default)
---
## Cost Optimization Tips
1. **Default to Sonnet** - Good balance for most tasks
2. **Reserve Opus** for critical decisions (security, architecture)
3. **Use Haiku** for validation and quick checks
4. **Batch simple tasks** - Use haiku for parallel validation
---
## See Also
- [Configuration Guide](CONFIGURATION.md) - Full configuration reference
- [Plugin Development](../README.md) - Adding new plugins

View File

@@ -132,10 +132,8 @@ When updating, review if changes affect the setup workflow:
 ### Dependencies fail to install
 ```bash
-# Rebuild virtual environment
+# Install missing dependencies (do NOT delete .venv)
 cd mcp-servers/gitea
-rm -rf .venv
-python3 -m venv .venv
 source .venv/bin/activate
 pip install -r requirements.txt
 deactivate
@@ -164,7 +162,7 @@ If that doesn't work:
 ls ~/.claude/plugins/marketplaces/leo-claude-mktplace/mcp-servers/gitea/.venv
 ls ~/.claude/plugins/marketplaces/leo-claude-mktplace/mcp-servers/netbox/.venv
 ```
-3. If missing, the symlinks won't resolve. Run setup.sh as shown above.
+3. If missing, run setup.sh as shown above.
 4. Restart Claude Code session
 5. Check logs for specific errors

View File

@@ -0,0 +1,20 @@
2026-01-26T14:36:42 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T14:37:38 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T14:37:48 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T14:38:05 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T14:38:55 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T14:39:35 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T14:40:19 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T15:02:30 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/tests/test_parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T15:02:37 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/tests/test_parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T15:03:41 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/tests/test_report_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-02-02T10:56:19 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/validation_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-02-02T10:57:49 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/tests/test_validation_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-02-02T10:58:22 | skills | /home/lmiranda/claude-plugins-work/plugins/contract-validator/skills/mcp-tools-reference.md | README.md
2026-02-02T10:58:38 | skills | /home/lmiranda/claude-plugins-work/plugins/contract-validator/skills/validation-rules.md | README.md
2026-02-02T10:59:13 | .claude-plugin | /home/lmiranda/claude-plugins-work/.claude-plugin/marketplace.json | CLAUDE.md .claude-plugin/marketplace.json
2026-02-02T13:55:33 | skills | /home/lmiranda/claude-plugins-work/plugins/projman/skills/visual-output.md | README.md
2026-02-02T13:55:41 | agents | /home/lmiranda/claude-plugins-work/plugins/projman/agents/planner.md | README.md CLAUDE.md
2026-02-02T13:55:55 | agents | /home/lmiranda/claude-plugins-work/plugins/projman/agents/orchestrator.md | README.md CLAUDE.md
2026-02-02T13:56:14 | agents | /home/lmiranda/claude-plugins-work/plugins/projman/agents/executor.md | README.md CLAUDE.md
2026-02-02T13:56:34 | agents | /home/lmiranda/claude-plugins-work/plugins/projman/agents/code-reviewer.md | README.md CLAUDE.md

View File

@@ -131,6 +131,28 @@ class ContractValidatorMCPServer:
"required": ["agent_name", "claude_md_path"] "required": ["agent_name", "claude_md_path"]
} }
), ),
Tool(
name="validate_workflow_integration",
description="Validate that a domain plugin exposes the required advisory interfaces (gate command, review command, advisory agent) expected by projman's domain-consultation skill. Also checks gate contract version compatibility.",
inputSchema={
"type": "object",
"properties": {
"plugin_path": {
"type": "string",
"description": "Path to the domain plugin directory"
},
"domain_label": {
"type": "string",
"description": "The Domain/* label it claims to handle, e.g. Domain/Viz"
},
"expected_contract": {
"type": "string",
"description": "Expected contract version (e.g., 'v1'). If provided, validates the gate command's contract matches."
}
},
"required": ["plugin_path", "domain_label"]
}
),
# Report tools (to be implemented in #188) # Report tools (to be implemented in #188)
Tool( Tool(
name="generate_compatibility_report", name="generate_compatibility_report",
@@ -198,6 +220,8 @@ class ContractValidatorMCPServer:
result = await self._validate_agent_refs(**arguments) result = await self._validate_agent_refs(**arguments)
elif name == "validate_data_flow": elif name == "validate_data_flow":
result = await self._validate_data_flow(**arguments) result = await self._validate_data_flow(**arguments)
elif name == "validate_workflow_integration":
result = await self._validate_workflow_integration(**arguments)
elif name == "generate_compatibility_report": elif name == "generate_compatibility_report":
result = await self._generate_compatibility_report(**arguments) result = await self._generate_compatibility_report(**arguments)
elif name == "list_issues": elif name == "list_issues":
@@ -241,6 +265,17 @@ class ContractValidatorMCPServer:
"""Validate agent data flow""" """Validate agent data flow"""
return await self.validation_tools.validate_data_flow(agent_name, claude_md_path) return await self.validation_tools.validate_data_flow(agent_name, claude_md_path)
async def _validate_workflow_integration(
self,
plugin_path: str,
domain_label: str,
expected_contract: str = None
) -> dict:
"""Validate domain plugin exposes required advisory interfaces"""
return await self.validation_tools.validate_workflow_integration(
plugin_path, domain_label, expected_contract
)
# Report tool implementations (Issue #188) # Report tool implementations (Issue #188)
async def _generate_compatibility_report(self, marketplace_path: str, format: str = "markdown") -> dict: async def _generate_compatibility_report(self, marketplace_path: str, format: str = "markdown") -> dict:

View File

@@ -26,6 +26,7 @@ class IssueType(str, Enum):
OPTIONAL_DEPENDENCY = "optional_dependency" OPTIONAL_DEPENDENCY = "optional_dependency"
UNDECLARED_OUTPUT = "undeclared_output" UNDECLARED_OUTPUT = "undeclared_output"
INVALID_SEQUENCE = "invalid_sequence" INVALID_SEQUENCE = "invalid_sequence"
MISSING_INTEGRATION = "missing_integration"
class ValidationIssue(BaseModel): class ValidationIssue(BaseModel):
@@ -65,6 +66,18 @@ class DataFlowResult(BaseModel):
issues: list[ValidationIssue] = [] issues: list[ValidationIssue] = []
class WorkflowIntegrationResult(BaseModel):
"""Result of workflow integration validation for domain plugins"""
plugin_name: str
domain_label: str
valid: bool
gate_command_found: bool
gate_contract: Optional[str] = None # Contract version declared by gate command
review_command_found: bool
advisory_agent_found: bool
issues: list[ValidationIssue] = []
class ValidationTools: class ValidationTools:
"""Tools for validating plugin compatibility and agent references""" """Tools for validating plugin compatibility and agent references"""
@@ -336,3 +349,145 @@ class ValidationTools:
) )
return result.model_dump() return result.model_dump()
async def validate_workflow_integration(
self,
plugin_path: str,
domain_label: str,
expected_contract: Optional[str] = None
) -> dict:
"""
Validate that a domain plugin exposes required advisory interfaces.
Checks for:
- Gate command (e.g., /design-gate, /data-gate) - REQUIRED
- Gate contract version (gate_contract in frontmatter) - INFO if missing
- Review command (e.g., /design-review, /data-review) - recommended
- Advisory agent referencing the domain label - recommended
Args:
plugin_path: Path to the domain plugin directory
domain_label: The Domain/* label it claims to handle (e.g., Domain/Viz)
expected_contract: Expected contract version (e.g., 'v1'). If provided,
validates the gate command's contract matches.
Returns:
Validation result with found interfaces and issues
"""
import re
plugin_path_obj = Path(plugin_path)
issues = []
# Extract plugin name from path
plugin_name = plugin_path_obj.name
if not plugin_path_obj.exists():
return {
"error": f"Plugin directory not found: {plugin_path}",
"plugin_path": plugin_path,
"domain_label": domain_label
}
# Extract domain short name from label (e.g., "Domain/Viz" -> "viz", "Domain/Data" -> "data")
domain_short = domain_label.split("/")[-1].lower() if "/" in domain_label else domain_label.lower()
# Check for gate command
commands_dir = plugin_path_obj / "commands"
gate_command_found = False
gate_contract = None
gate_patterns = ["pass", "fail", "PASS", "FAIL", "Binary pass/fail", "gate"]
if commands_dir.exists():
for cmd_file in commands_dir.glob("*.md"):
if "gate" in cmd_file.name.lower():
# Verify it's actually a gate command by checking content
content = cmd_file.read_text()
if any(pattern in content for pattern in gate_patterns):
gate_command_found = True
# Parse frontmatter for gate_contract
frontmatter_match = re.match(r'^---\n(.*?)\n---', content, re.DOTALL)
if frontmatter_match:
frontmatter = frontmatter_match.group(1)
contract_match = re.search(r'gate_contract:\s*(\S+)', frontmatter)
if contract_match:
gate_contract = contract_match.group(1)
break
if not gate_command_found:
issues.append(ValidationIssue(
severity=IssueSeverity.ERROR,
issue_type=IssueType.MISSING_INTEGRATION,
message=f"Plugin '{plugin_name}' lacks a gate command for domain '{domain_label}'",
location=str(commands_dir),
suggestion=f"Create commands/{domain_short}-gate.md with binary PASS/FAIL output"
))
# Check for review command
review_command_found = False
if commands_dir.exists():
for cmd_file in commands_dir.glob("*.md"):
if "review" in cmd_file.name.lower() and "gate" not in cmd_file.name.lower():
review_command_found = True
break
if not review_command_found:
issues.append(ValidationIssue(
severity=IssueSeverity.WARNING,
issue_type=IssueType.MISSING_INTEGRATION,
message=f"Plugin '{plugin_name}' lacks a review command for domain '{domain_label}'",
location=str(commands_dir),
suggestion=f"Create commands/{domain_short}-review.md for detailed audits"
))
# Check for advisory agent
agents_dir = plugin_path_obj / "agents"
advisory_agent_found = False
if agents_dir.exists():
for agent_file in agents_dir.glob("*.md"):
content = agent_file.read_text()
# Check if agent references the domain label or gate command
if domain_label in content or f"{domain_short}-gate" in content.lower() or "advisor" in agent_file.name.lower() or "reviewer" in agent_file.name.lower():
advisory_agent_found = True
break
if not advisory_agent_found:
issues.append(ValidationIssue(
severity=IssueSeverity.WARNING,
issue_type=IssueType.MISSING_INTEGRATION,
message=f"Plugin '{plugin_name}' lacks an advisory agent for domain '{domain_label}'",
location=str(agents_dir) if agents_dir.exists() else str(plugin_path_obj),
suggestion=f"Create agents/{domain_short}-advisor.md referencing '{domain_label}'"
))
# Check gate contract version
if gate_command_found:
if not gate_contract:
issues.append(ValidationIssue(
severity=IssueSeverity.INFO,
issue_type=IssueType.MISSING_INTEGRATION,
message=f"Gate command does not declare a contract version",
location=str(commands_dir),
suggestion="Consider adding `gate_contract: v1` to frontmatter for version tracking"
))
elif expected_contract and gate_contract != expected_contract:
issues.append(ValidationIssue(
severity=IssueSeverity.WARNING,
issue_type=IssueType.INTERFACE_MISMATCH,
message=f"Contract version mismatch: gate declares {gate_contract}, projman expects {expected_contract}",
location=str(commands_dir),
suggestion=f"Update domain-consultation.md Gate Command Reference table to {gate_contract}, or update gate command to {expected_contract}"
))
result = WorkflowIntegrationResult(
plugin_name=plugin_name,
domain_label=domain_label,
valid=gate_command_found, # Only gate is required for validity
gate_command_found=gate_command_found,
gate_contract=gate_contract,
review_command_found=review_command_found,
advisory_agent_found=advisory_agent_found,
issues=issues
)
return result.model_dump()

View File

@@ -254,3 +254,261 @@ async def test_validate_data_flow_missing_producer(validation_tools, tmp_path):
# Should have warning about missing producer # Should have warning about missing producer
warning_issues = [i for i in result["issues"] if i["severity"].value == "warning"] warning_issues = [i for i in result["issues"] if i["severity"].value == "warning"]
assert len(warning_issues) > 0 assert len(warning_issues) > 0
# --- Workflow Integration Tests ---
@pytest.fixture
def domain_plugin_complete(tmp_path):
"""Create a complete domain plugin with gate, review, and advisory agent"""
plugin_dir = tmp_path / "viz-platform"
plugin_dir.mkdir()
(plugin_dir / ".claude-plugin").mkdir()
(plugin_dir / "commands").mkdir()
(plugin_dir / "agents").mkdir()
# Gate command with PASS/FAIL pattern
gate_cmd = plugin_dir / "commands" / "design-gate.md"
gate_cmd.write_text("""# /design-gate
Binary pass/fail validation gate for design system compliance.
## Output
- **PASS**: All design system checks passed
- **FAIL**: Design system violations detected
""")
# Review command
review_cmd = plugin_dir / "commands" / "design-review.md"
review_cmd.write_text("""# /design-review
Comprehensive design system audit.
""")
# Advisory agent
agent = plugin_dir / "agents" / "design-reviewer.md"
agent.write_text("""# design-reviewer
Design system compliance auditor.
Handles issues with `Domain/Viz` label.
""")
return str(plugin_dir)
@pytest.fixture
def domain_plugin_missing_gate(tmp_path):
"""Create domain plugin with review and agent but no gate command"""
plugin_dir = tmp_path / "data-platform"
plugin_dir.mkdir()
(plugin_dir / ".claude-plugin").mkdir()
(plugin_dir / "commands").mkdir()
(plugin_dir / "agents").mkdir()
# Review command (but no gate)
review_cmd = plugin_dir / "commands" / "data-review.md"
review_cmd.write_text("""# /data-review
Data integrity audit.
""")
# Advisory agent
agent = plugin_dir / "agents" / "data-advisor.md"
agent.write_text("""# data-advisor
Data integrity advisor for Domain/Data issues.
""")
return str(plugin_dir)
@pytest.fixture
def domain_plugin_minimal(tmp_path):
"""Create minimal plugin with no commands or agents"""
plugin_dir = tmp_path / "minimal-plugin"
plugin_dir.mkdir()
(plugin_dir / ".claude-plugin").mkdir()
readme = plugin_dir / "README.md"
readme.write_text("# Minimal Plugin\n\nNo commands or agents.")
return str(plugin_dir)
@pytest.mark.asyncio
async def test_validate_workflow_integration_complete(validation_tools, domain_plugin_complete):
"""Test complete domain plugin returns valid with all interfaces found"""
result = await validation_tools.validate_workflow_integration(
domain_plugin_complete,
"Domain/Viz"
)
assert "error" not in result
assert result["valid"] is True
assert result["gate_command_found"] is True
assert result["review_command_found"] is True
assert result["advisory_agent_found"] is True
# May have INFO issue about missing contract version (not an error/warning)
error_or_warning = [i for i in result["issues"]
if i["severity"].value in ("error", "warning")]
assert len(error_or_warning) == 0
@pytest.mark.asyncio
async def test_validate_workflow_integration_missing_gate(validation_tools, domain_plugin_missing_gate):
"""Test plugin missing gate command returns invalid with ERROR"""
result = await validation_tools.validate_workflow_integration(
domain_plugin_missing_gate,
"Domain/Data"
)
assert "error" not in result
assert result["valid"] is False
assert result["gate_command_found"] is False
assert result["review_command_found"] is True
assert result["advisory_agent_found"] is True
# Should have one ERROR for missing gate
error_issues = [i for i in result["issues"] if i["severity"].value == "error"]
assert len(error_issues) == 1
assert "gate" in error_issues[0]["message"].lower()
@pytest.mark.asyncio
async def test_validate_workflow_integration_minimal(validation_tools, domain_plugin_minimal):
"""Test minimal plugin returns invalid with multiple issues"""
result = await validation_tools.validate_workflow_integration(
domain_plugin_minimal,
"Domain/Test"
)
assert "error" not in result
assert result["valid"] is False
assert result["gate_command_found"] is False
assert result["review_command_found"] is False
assert result["advisory_agent_found"] is False
# Should have one ERROR (gate) and two WARNINGs (review, agent)
error_issues = [i for i in result["issues"] if i["severity"].value == "error"]
warning_issues = [i for i in result["issues"] if i["severity"].value == "warning"]
assert len(error_issues) == 1
assert len(warning_issues) == 2
@pytest.mark.asyncio
async def test_validate_workflow_integration_nonexistent_plugin(validation_tools, tmp_path):
"""Test error when plugin directory doesn't exist"""
result = await validation_tools.validate_workflow_integration(
str(tmp_path / "nonexistent"),
"Domain/Test"
)
assert "error" in result
assert "not found" in result["error"].lower()
# --- Gate Contract Version Tests ---
@pytest.fixture
def domain_plugin_with_contract(tmp_path):
"""Create domain plugin with gate_contract: v1 in frontmatter"""
plugin_dir = tmp_path / "viz-platform-versioned"
plugin_dir.mkdir()
(plugin_dir / ".claude-plugin").mkdir()
(plugin_dir / "commands").mkdir()
(plugin_dir / "agents").mkdir()
# Gate command with gate_contract in frontmatter
gate_cmd = plugin_dir / "commands" / "design-gate.md"
gate_cmd.write_text("""---
description: Design system compliance gate (pass/fail)
gate_contract: v1
---
# /design-gate
Binary pass/fail validation gate for design system compliance.
## Output
- **PASS**: All design system checks passed
- **FAIL**: Design system violations detected
""")
# Review command
review_cmd = plugin_dir / "commands" / "design-review.md"
review_cmd.write_text("""# /design-review
Comprehensive design system audit.
""")
# Advisory agent
agent = plugin_dir / "agents" / "design-reviewer.md"
agent.write_text("""# design-reviewer
Design system compliance auditor for Domain/Viz issues.
""")
return str(plugin_dir)
@pytest.mark.asyncio
async def test_validate_workflow_contract_match(validation_tools, domain_plugin_with_contract):
"""Test that matching expected_contract produces no warning"""
result = await validation_tools.validate_workflow_integration(
domain_plugin_with_contract,
"Domain/Viz",
expected_contract="v1"
)
assert "error" not in result
assert result["valid"] is True
assert result["gate_contract"] == "v1"
# Should have no warnings about contract mismatch
warning_issues = [i for i in result["issues"] if i["severity"].value == "warning"]
contract_warnings = [i for i in warning_issues if "contract" in i["message"].lower()]
assert len(contract_warnings) == 0
@pytest.mark.asyncio
async def test_validate_workflow_contract_mismatch(validation_tools, domain_plugin_with_contract):
"""Test that mismatched expected_contract produces WARNING"""
result = await validation_tools.validate_workflow_integration(
domain_plugin_with_contract,
"Domain/Viz",
expected_contract="v2" # Gate has v1
)
assert "error" not in result
assert result["valid"] is True # Contract mismatch doesn't affect validity
assert result["gate_contract"] == "v1"
# Should have warning about contract mismatch
warning_issues = [i for i in result["issues"] if i["severity"].value == "warning"]
contract_warnings = [i for i in warning_issues if "contract" in i["message"].lower()]
assert len(contract_warnings) == 1
assert "mismatch" in contract_warnings[0]["message"].lower()
assert "v1" in contract_warnings[0]["message"]
assert "v2" in contract_warnings[0]["message"]
@pytest.mark.asyncio
async def test_validate_workflow_no_contract(validation_tools, domain_plugin_complete):
"""Test that missing gate_contract produces INFO suggestion"""
result = await validation_tools.validate_workflow_integration(
domain_plugin_complete,
"Domain/Viz"
)
assert "error" not in result
assert result["valid"] is True
assert result["gate_contract"] is None
# Should have info issue about missing contract
info_issues = [i for i in result["issues"] if i["severity"].value == "info"]
contract_info = [i for i in info_issues if "contract" in i["message"].lower()]
assert len(contract_info) == 1
assert "does not declare" in contract_info[0]["message"].lower()

View File

@@ -0,0 +1,6 @@
2026-02-03T14:09:25 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/gitea/tests/test_config.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-02-03T14:09:33 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/gitea/tests/test_gitea_client.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-02-03T14:10:22 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/gitea/tests/test_issues.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-02-03T14:17:12 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/gitea/README.md | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-02-03T14:18:27 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/gitea/CHANGELOG.md | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-02-03T14:18:41 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/gitea/TESTING.md | docs/COMMANDS-CHEATSHEET.md CLAUDE.md

View File

@@ -0,0 +1,92 @@
# Changelog
All notable changes to the Gitea MCP Server will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [1.3.0] - 2026-02-03
### Added
- Pull request tools (7 tools):
- `list_pull_requests` - List PRs from repository
- `get_pull_request` - Get specific PR details
- `get_pr_diff` - Get PR diff content
- `get_pr_comments` - Get comments on a PR
- `create_pr_review` - Create PR review (approve/request changes/comment)
- `add_pr_comment` - Add comment to PR
- `create_pull_request` - Create new pull request
- Label creation tools (3 tools):
- `create_label` - Create repo-level label
- `create_org_label` - Create organization-level label
- `create_label_smart` - Auto-detect org vs repo for label creation
- Validation tools (2 tools):
- `validate_repo_org` - Check if repo belongs to organization
- `get_branch_protection` - Get branch protection rules
### Changed
- Total tools increased from 20 to 36
- Updated test suite to 64 tests (was 42)
### Fixed
- Test fixtures updated to use `owner/repo` format
- Fixed aggregate_issues tests to pass required `org` argument
## [1.2.0] - 2026-01-28
### Added
- Milestone management tools (5 tools):
- `list_milestones` - List all milestones
- `get_milestone` - Get specific milestone
- `create_milestone` - Create new milestone
- `update_milestone` - Update existing milestone
- `delete_milestone` - Delete a milestone
- Issue dependency tools (4 tools):
- `list_issue_dependencies` - List blocking issues
- `create_issue_dependency` - Create dependency between issues
- `remove_issue_dependency` - Remove dependency
- `get_execution_order` - Calculate parallelizable execution order
## [1.1.0] - 2026-01-21
### Added
- Wiki and lessons learned tools (7 tools):
- `list_wiki_pages` - List all wiki pages
- `get_wiki_page` - Get specific wiki page content
- `create_wiki_page` - Create new wiki page
- `update_wiki_page` - Update existing wiki page
- `create_lesson` - Create lessons learned entry
- `search_lessons` - Search lessons by query/tags
- `allocate_rfc_number` - Get next available RFC number
- Automatic git remote URL detection for repository configuration
- Support for SSH, HTTPS, and HTTP git URL formats
### Changed
- Configuration now uses `owner/repo` format exclusively
- Removed separate `GITEA_OWNER` configuration (now derived from repo path)
## [1.0.0] - 2025-01-06
### Added
- Initial release with 8 core tools:
- `list_issues` - List issues from repository
- `get_issue` - Get specific issue details
- `create_issue` - Create new issue with labels
- `update_issue` - Update existing issue
- `add_comment` - Add comment to issue
- `get_labels` - Get all labels (org + repo)
- `suggest_labels` - Intelligent label suggestion
- `aggregate_issues` - Cross-repository issue aggregation (PMO mode)
- Hybrid configuration system (system + project level)
- Branch-aware security model
- Mode detection (project vs company/PMO)
- 42 unit tests with mocks
- Comprehensive documentation
[Unreleased]: https://github.com/owner/repo/compare/v1.3.0...HEAD
[1.3.0]: https://github.com/owner/repo/compare/v1.2.0...v1.3.0
[1.2.0]: https://github.com/owner/repo/compare/v1.1.0...v1.2.0
[1.1.0]: https://github.com/owner/repo/compare/v1.0.0...v1.1.0
[1.0.0]: https://github.com/owner/repo/releases/tag/v1.0.0

View File

@@ -19,8 +19,9 @@ The Gitea MCP Server provides Claude Code with direct access to Gitea for issue
- **Hybrid Configuration**: System-level credentials + project-level paths - **Hybrid Configuration**: System-level credentials + project-level paths
- **PMO Support**: Multi-repository aggregation for organization-wide views - **PMO Support**: Multi-repository aggregation for organization-wide views
### Tools Provided ### Tools Provided (36 total)
#### Issue Management (6 tools)
| Tool | Description | Mode | | Tool | Description | Mode |
|------|-------------|------| |------|-------------|------|
| `list_issues` | List issues from repository | Both | | `list_issues` | List issues from repository | Both |
@@ -28,9 +29,61 @@ The Gitea MCP Server provides Claude Code with direct access to Gitea for issue
| `create_issue` | Create new issue with labels | Both | | `create_issue` | Create new issue with labels | Both |
| `update_issue` | Update existing issue | Both | | `update_issue` | Update existing issue | Both |
| `add_comment` | Add comment to issue | Both | | `add_comment` | Add comment to issue | Both |
| `aggregate_issues` | Cross-repository issue aggregation | PMO Only |
#### Label Management (5 tools)
| Tool | Description | Mode |
|------|-------------|------|
| `get_labels` | Get all labels (org + repo) | Both | | `get_labels` | Get all labels (org + repo) | Both |
| `suggest_labels` | Intelligent label suggestion | Both | | `suggest_labels` | Intelligent label suggestion | Both |
| `aggregate_issues` | Cross-repository issue aggregation | PMO Only | | `create_label` | Create repo-level label | Both |
| `create_org_label` | Create organization-level label | Both |
| `create_label_smart` | Auto-detect org vs repo for label creation | Both |
#### Wiki & Lessons Learned (7 tools)
| Tool | Description | Mode |
|------|-------------|------|
| `list_wiki_pages` | List all wiki pages | Both |
| `get_wiki_page` | Get specific wiki page content | Both |
| `create_wiki_page` | Create new wiki page | Both |
| `update_wiki_page` | Update existing wiki page | Both |
| `create_lesson` | Create lessons learned entry | Both |
| `search_lessons` | Search lessons by query/tags | Both |
| `allocate_rfc_number` | Get next available RFC number | Both |
#### Milestone Management (5 tools)
| Tool | Description | Mode |
|------|-------------|------|
| `list_milestones` | List all milestones | Both |
| `get_milestone` | Get specific milestone | Both |
| `create_milestone` | Create new milestone | Both |
| `update_milestone` | Update existing milestone | Both |
| `delete_milestone` | Delete a milestone | Both |
#### Issue Dependencies (4 tools)
| Tool | Description | Mode |
|------|-------------|------|
| `list_issue_dependencies` | List blocking issues | Both |
| `create_issue_dependency` | Create dependency between issues | Both |
| `remove_issue_dependency` | Remove dependency | Both |
| `get_execution_order` | Calculate parallelizable execution order | Both |
#### Pull Request Tools (7 tools)
| Tool | Description | Mode |
|------|-------------|------|
| `list_pull_requests` | List PRs from repository | Both |
| `get_pull_request` | Get specific PR details | Both |
| `get_pr_diff` | Get PR diff content | Both |
| `get_pr_comments` | Get comments on a PR | Both |
| `create_pr_review` | Create PR review (approve/request changes) | Both |
| `add_pr_comment` | Add comment to PR | Both |
| `create_pull_request` | Create new pull request | Both |
#### Validation Tools (2 tools)
| Tool | Description | Mode |
|------|-------------|------|
| `validate_repo_org` | Check if repo belongs to organization | Both |
| `get_branch_protection` | Get branch protection rules | Both |
## Architecture ## Architecture
@@ -40,15 +93,20 @@ The Gitea MCP Server provides Claude Code with direct access to Gitea for issue
mcp-servers/gitea/ mcp-servers/gitea/
├── .venv/ # Python virtual environment ├── .venv/ # Python virtual environment
├── requirements.txt # Python dependencies ├── requirements.txt # Python dependencies
├── run.sh # Entry point script
├── mcp_server/ ├── mcp_server/
│ ├── __init__.py │ ├── __init__.py
│ ├── server.py # MCP server entry point │ ├── server.py # MCP server entry point (36 tools)
│ ├── config.py # Configuration loader │ ├── config.py # Configuration loader with auto-detection
│ ├── gitea_client.py # Gitea API client │ ├── gitea_client.py # Gitea API client
│ └── tools/ │ └── tools/
│ ├── __init__.py │ ├── __init__.py
│ ├── issues.py # Issue tools │ ├── issues.py # Issue management tools
── labels.py # Label tools ── labels.py # Label management tools
│ ├── wiki.py # Wiki & lessons learned tools
│ ├── milestones.py # Milestone management tools
│ ├── dependencies.py # Issue dependency tools
│ └── pull_requests.py # Pull request tools
├── tests/ ├── tests/
│ ├── __init__.py │ ├── __init__.py
│ ├── test_config.py │ ├── test_config.py
@@ -56,7 +114,8 @@ mcp-servers/gitea/
│ ├── test_issues.py │ ├── test_issues.py
│ └── test_labels.py │ └── test_labels.py
├── README.md # This file ├── README.md # This file
── TESTING.md # Testing instructions ── TESTING.md # Testing instructions
└── CHANGELOG.md # Version history
``` ```
### Mode Detection ### Mode Detection
@@ -111,7 +170,6 @@ mkdir -p ~/.config/claude
cat > ~/.config/claude/gitea.env << EOF cat > ~/.config/claude/gitea.env << EOF
GITEA_API_URL=https://gitea.example.com/api/v1 GITEA_API_URL=https://gitea.example.com/api/v1
GITEA_API_TOKEN=your_gitea_token_here GITEA_API_TOKEN=your_gitea_token_here
GITEA_OWNER=bandit
EOF EOF
chmod 600 ~/.config/claude/gitea.env chmod 600 ~/.config/claude/gitea.env
@@ -137,14 +195,34 @@ For company/PMO mode, omit the `.env` file or don't set `GITEA_REPO`.
**Required Variables**: **Required Variables**:
- `GITEA_API_URL` - Gitea API endpoint (e.g., `https://gitea.example.com/api/v1`) - `GITEA_API_URL` - Gitea API endpoint (e.g., `https://gitea.example.com/api/v1`)
- `GITEA_API_TOKEN` - Personal access token with repo permissions - `GITEA_API_TOKEN` - Personal access token with repo permissions
- `GITEA_OWNER` - Organization or user name (e.g., `bandit`)
### Project-Level Configuration ### Project-Level Configuration
**File**: `<project-root>/.env` **File**: `<project-root>/.env`
**Optional Variables**: **Optional Variables**:
- `GITEA_REPO` - Repository name (enables project mode) - `GITEA_REPO` - Repository in `owner/repo` format (enables project mode)
### Automatic Repository Detection
If `GITEA_REPO` is not set, the server auto-detects the repository from your git remote:
**Supported URL Formats**:
- SSH: `ssh://git@gitea.example.com:22/owner/repo.git`
- SSH short: `git@gitea.example.com:owner/repo.git`
- HTTPS: `https://gitea.example.com/owner/repo.git`
- HTTP: `http://gitea.example.com/owner/repo.git`
The repository is extracted as `owner/repo` format automatically.
### Project Directory Detection
The server finds your project directory using these strategies (in order):
1. `CLAUDE_PROJECT_DIR` environment variable (highest priority)
2. `PWD` environment variable (if `.git` or `.env` present)
3. Current working directory (if `.git` or `.env` present)
4. Falls back to company/PMO mode if no project found
### Generating Gitea API Token ### Generating Gitea API Token
@@ -220,13 +298,13 @@ suggestions = await label_tools.suggest_labels(context)
### Unit Tests ### Unit Tests
Run all 42 unit tests with mocks: Run all 64 unit tests with mocks:
```bash ```bash
pytest tests/ -v pytest tests/ -v
``` ```
Expected: `42 passed in 0.57s` Expected: `64 passed`
### Integration Tests ### Integration Tests
@@ -327,11 +405,15 @@ See [TESTING.md](./TESTING.md#troubleshooting) for more details.
### Project Structure ### Project Structure
- `config.py` - Hybrid configuration loader with mode detection - `config.py` - Hybrid configuration loader with auto-detection
- `gitea_client.py` - Synchronous Gitea API client using requests - `gitea_client.py` - Synchronous Gitea API client using requests
- `tools/issues.py` - Async wrappers with branch detection - `tools/issues.py` - Issue management with branch detection
- `tools/labels.py` - Label management and suggestion - `tools/labels.py` - Label management and intelligent suggestions
- `server.py` - MCP server with JSON-RPC 2.0 over stdio - `tools/wiki.py` - Wiki pages and lessons learned
- `tools/milestones.py` - Milestone CRUD operations
- `tools/dependencies.py` - Issue dependency tracking
- `tools/pull_requests.py` - PR review and management
- `server.py` - MCP server with 36 tools over JSON-RPC 2.0 stdio
### Adding New Tools ### Adding New Tools
@@ -374,18 +456,14 @@ def list_issues(self, state='open', labels=None, repo=None):
## Changelog ## Changelog
### v1.0.0 (2025-01-06) - Phase 1 Complete See [CHANGELOG.md](./CHANGELOG.md) for full version history.
✅ Initial implementation: ### Recent Updates
- Configuration management (hybrid system + project)
- Gitea API client with all CRUD operations - **v1.3.0** - Pull request tools (7 tools), label creation tools (3)
- MCP server with 8 tools - **v1.2.0** - Milestone management (5 tools), issue dependencies (4 tools)
- Issue tools with branch detection - **v1.1.0** - Wiki & lessons learned system (7 tools)
- Label tools with intelligent suggestions - **v1.0.0** - Initial release with core issue/label tools (8 tools)
- Mode detection (project vs company)
- Branch-aware security model
- 42 unit tests (100% passing)
- Comprehensive documentation
## License ## License
@@ -407,6 +485,6 @@ For issues or questions:
--- ---
**Built for**: Leo Claude Marketplace - Project Management Plugins **Built for**: Leo Claude Marketplace - Project Management Plugins
**Phase**: 1 (Complete) **Tools**: 36
**Status**: ✅ Production Ready **Status**: ✅ Production Ready
**Last Updated**: 2025-01-06 **Last Updated**: 2026-02-03

View File

@@ -28,7 +28,7 @@ source .venv/bin/activate  # Linux/Mac
 ### Running All Tests
-Run all 42 unit tests:
+Run all 64 unit tests:
 ```bash
 pytest tests/ -v
@@ -36,7 +36,7 @@ pytest tests/ -v
 Expected output:
 ```
-============================== 42 passed in 0.57s ==============================
+============================== 64 passed ==============================
 ```
 ### Running Specific Test Files
@@ -532,7 +532,7 @@ python -m mcp_server.server
 After completing all tests, verify:
-- ✅ All 42 unit tests pass
+- ✅ All 64 unit tests pass
 - ✅ MCP server starts without errors
 - ✅ Configuration loads correctly
 - ✅ Gitea API client connects successfully
@@ -548,7 +548,7 @@ After completing all tests, verify:
 Phase 1 is complete when:
-1. **All unit tests pass** (42/42)
+1. **All unit tests pass** (64/64)
 2. **MCP server starts without errors**
 3. **Can list issues from Gitea**
 4. **Can create issues with labels** (in development mode)

View File

@@ -0,0 +1,30 @@
"""
Gitea MCP Server package.
Provides MCP tools for Gitea integration via JSON-RPC 2.0.
For external consumers (e.g., HTTP transport), use:
from mcp_server import get_tool_definitions, create_tool_dispatcher, GiteaClient
# Get tool schemas
tools = get_tool_definitions()
# Create dispatcher bound to a client
client = GiteaClient()
dispatch = create_tool_dispatcher(client)
result = await dispatch("list_issues", {"state": "open"})
"""
__version__ = "1.0.0"
from .tool_registry import get_tool_definitions, create_tool_dispatcher
from .gitea_client import GiteaClient
from .config import GiteaConfig
__all__ = [
"__version__",
"get_tool_definitions",
"create_tool_dispatcher",
"GiteaClient",
"GiteaConfig",
]

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -4,9 +4,11 @@ Wiki management tools for MCP server.
Provides async wrappers for wiki operations to support lessons learned: Provides async wrappers for wiki operations to support lessons learned:
- Page CRUD operations - Page CRUD operations
- Lessons learned creation and search - Lessons learned creation and search
- RFC number allocation
""" """
import asyncio import asyncio
import logging import logging
import re
from typing import List, Dict, Optional from typing import List, Dict, Optional
logging.basicConfig(level=logging.INFO) logging.basicConfig(level=logging.INFO)
@@ -147,3 +149,39 @@ class WikiTools:
lambda: self.gitea.search_lessons(query, tags, repo) lambda: self.gitea.search_lessons(query, tags, repo)
) )
return results[:limit] return results[:limit]
async def allocate_rfc_number(self, repo: Optional[str] = None) -> Dict:
"""
Allocate the next available RFC number.
Scans existing wiki pages for RFC-NNNN pattern and returns
the next sequential number.
Args:
repo: Repository in owner/repo format
Returns:
Dict with 'next_number' (int) and 'formatted' (str like 'RFC-0001')
"""
pages = await self.list_wiki_pages(repo)
# Extract RFC numbers from page titles
rfc_numbers = []
rfc_pattern = re.compile(r'^RFC-(\d{4})')
for page in pages:
title = page.get('title', '')
match = rfc_pattern.match(title)
if match:
rfc_numbers.append(int(match.group(1)))
# Calculate next number
if rfc_numbers:
next_num = max(rfc_numbers) + 1
else:
next_num = 1
return {
'next_number': next_num,
'formatted': f'RFC-{next_num:04d}'
}

View File

@@ -0,0 +1,43 @@
[build-system]
requires = ["setuptools>=61.0", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "gitea-mcp-server"
version = "1.0.0"
description = "MCP Server for Gitea integration - provides issue, label, wiki, milestone, dependency, and PR tools"
readme = "README.md"
requires-python = ">=3.10"
license = {text = "MIT"}
authors = [
{ name = "Leo Miranda" }
]
keywords = ["mcp", "gitea", "claude", "tools"]
classifiers = [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
]
dependencies = [
"mcp>=0.9.0",
"python-dotenv>=1.0.0",
"requests>=2.31.0",
"pydantic>=2.5.0",
]
[project.optional-dependencies]
test = [
"pytest>=7.4.3",
"pytest-asyncio>=0.23.0",
]
[tool.setuptools.packages.find]
where = ["."]
include = ["mcp_server*"]
[tool.pytest.ini_options]
asyncio_mode = "auto"
testpaths = ["tests"]

View File

@@ -28,7 +28,6 @@ def test_load_system_config(tmp_path, monkeypatch):
assert result['api_url'] == 'https://test.com/api/v1' assert result['api_url'] == 'https://test.com/api/v1'
assert result['api_token'] == 'test_token' assert result['api_token'] == 'test_token'
assert result['owner'] == 'test_owner'
assert result['mode'] == 'company' # No repo specified assert result['mode'] == 'company' # No repo specified
assert result['repo'] is None assert result['repo'] is None

View File

@@ -14,8 +14,7 @@ def mock_config():
mock_instance.load.return_value = { mock_instance.load.return_value = {
'api_url': 'https://test.com/api/v1', 'api_url': 'https://test.com/api/v1',
'api_token': 'test_token', 'api_token': 'test_token',
'owner': 'test_owner', 'repo': 'test_owner/test_repo', # Combined owner/repo format
'repo': 'test_repo',
'mode': 'project' 'mode': 'project'
} }
yield mock_cfg yield mock_cfg
@@ -31,8 +30,7 @@ def test_client_initialization(gitea_client):
"""Test client initializes with correct configuration""" """Test client initializes with correct configuration"""
assert gitea_client.base_url == 'https://test.com/api/v1' assert gitea_client.base_url == 'https://test.com/api/v1'
assert gitea_client.token == 'test_token' assert gitea_client.token == 'test_token'
assert gitea_client.owner == 'test_owner' assert gitea_client.repo == 'test_owner/test_repo' # Combined format
assert gitea_client.repo == 'test_repo'
assert gitea_client.mode == 'project' assert gitea_client.mode == 'project'
assert 'Authorization' in gitea_client.session.headers assert 'Authorization' in gitea_client.session.headers
assert gitea_client.session.headers['Authorization'] == 'token test_token' assert gitea_client.session.headers['Authorization'] == 'token test_token'
@@ -92,15 +90,20 @@ def test_create_issue(gitea_client):
} }
mock_response.raise_for_status = Mock() mock_response.raise_for_status = Mock()
with patch.object(gitea_client.session, 'post', return_value=mock_response): # Mock is_org_repo to avoid network call during label resolution
issue = gitea_client.create_issue( with patch.object(gitea_client, 'is_org_repo', return_value=True):
title='New Issue', # Mock get_org_labels and get_labels for label resolution
body='Issue body', with patch.object(gitea_client, 'get_org_labels', return_value=[{'name': 'Type/Bug', 'id': 1}]):
labels=['Type/Bug'] with patch.object(gitea_client, 'get_labels', return_value=[]):
) with patch.object(gitea_client.session, 'post', return_value=mock_response):
issue = gitea_client.create_issue(
title='New Issue',
body='Issue body',
labels=['Type/Bug']
)
assert issue['title'] == 'New Issue' assert issue['title'] == 'New Issue'
gitea_client.session.post.assert_called_once() gitea_client.session.post.assert_called_once()
def test_update_issue(gitea_client): def test_update_issue(gitea_client):
@@ -161,7 +164,7 @@ def test_get_org_labels(gitea_client):
mock_response.raise_for_status = Mock() mock_response.raise_for_status = Mock()
with patch.object(gitea_client.session, 'get', return_value=mock_response): with patch.object(gitea_client.session, 'get', return_value=mock_response):
labels = gitea_client.get_org_labels() labels = gitea_client.get_org_labels(org='test_owner')
assert len(labels) == 2 assert len(labels) == 2
@@ -176,7 +179,7 @@ def test_list_repos(gitea_client):
mock_response.raise_for_status = Mock() mock_response.raise_for_status = Mock()
with patch.object(gitea_client.session, 'get', return_value=mock_response): with patch.object(gitea_client.session, 'get', return_value=mock_response):
repos = gitea_client.list_repos() repos = gitea_client.list_repos(org='test_owner')
assert len(repos) == 2 assert len(repos) == 2
assert repos[0]['name'] == 'repo1' assert repos[0]['name'] == 'repo1'
@@ -196,7 +199,7 @@ def test_aggregate_issues(gitea_client):
[{'number': 2, 'title': 'Issue 2'}] # repo2 [{'number': 2, 'title': 'Issue 2'}] # repo2
]) ])
aggregated = gitea_client.aggregate_issues(state='open') aggregated = gitea_client.aggregate_issues(org='test_owner', state='open')
assert 'repo1' in aggregated assert 'repo1' in aggregated
assert 'repo2' in aggregated assert 'repo2' in aggregated
@@ -205,14 +208,13 @@ def test_aggregate_issues(gitea_client):
 def test_no_repo_specified_error(gitea_client):
-    """Test error when repository not specified"""
+    """Test error when repository not specified or invalid format"""
     # Create client without repo
     with patch('mcp_server.gitea_client.GiteaConfig') as mock_cfg:
         mock_instance = mock_cfg.return_value
         mock_instance.load.return_value = {
             'api_url': 'https://test.com/api/v1',
             'api_token': 'test_token',
-            'owner': 'test_owner',
             'repo': None,  # No repo
             'mode': 'company'
         }
@@ -221,7 +223,7 @@ def test_no_repo_specified_error(gitea_client):
     with pytest.raises(ValueError) as exc_info:
         client.list_issues()
-    assert "Repository not specified" in str(exc_info.value)
+    assert "Use 'owner/repo' format" in str(exc_info.value)
 # ========================================

View File

@@ -119,22 +119,26 @@ async def test_aggregate_issues_company_mode(issue_tools):
         'repo2': [{'number': 2}]
     })
-    aggregated = await issue_tools.aggregate_issues()
+    aggregated = await issue_tools.aggregate_issues(org='test_owner')
     assert 'repo1' in aggregated
     assert 'repo2' in aggregated
 @pytest.mark.asyncio
-async def test_aggregate_issues_project_mode_error(issue_tools):
-    """Test that aggregate_issues fails in project mode"""
+async def test_aggregate_issues_project_mode(issue_tools):
+    """Test that aggregate_issues works in project mode with org argument"""
     issue_tools.gitea.mode = 'project'
     with patch.object(issue_tools, '_get_current_branch', return_value='development'):
-        with pytest.raises(ValueError) as exc_info:
-            await issue_tools.aggregate_issues()
-        assert "only available in company mode" in str(exc_info.value)
+        issue_tools.gitea.aggregate_issues = Mock(return_value={
+            'repo1': [{'number': 1}]
+        })
+        # aggregate_issues now works in any mode when org is provided
+        aggregated = await issue_tools.aggregate_issues(org='test_owner')
+        assert 'repo1' in aggregated
 def test_branch_detection():

View File

@@ -79,6 +79,69 @@ Add to your Claude Code MCP configuration (`~/.config/claude/mcp.json` or projec
1. **System-level** (`~/.config/claude/netbox.env`): Credentials and defaults
2. **Project-level** (`.env` in current directory): Optional overrides
## Module Filtering (Token Optimization)
By default, the NetBox MCP server registers all 182 tools across 8 modules, consuming ~19,810 tokens of context. For most workflows, you only need a subset of modules.
### Configuration
Add `NETBOX_ENABLED_MODULES` to your `~/.config/claude/netbox.env`:
```bash
# Enable only specific modules (comma-separated)
NETBOX_ENABLED_MODULES=dcim,ipam,virtualization,extras
```
If unset, all modules are enabled (backward compatible).
### Available Modules
| Module | Tool Count | Description | cmdb-assistant Commands |
|--------|------------|-------------|------------------------|
| `dcim` | ~60 | Sites, devices, racks, interfaces, cables | `/cmdb-device`, `/cmdb-site`, `/cmdb-search`, `/cmdb-topology` |
| `ipam` | ~40 | IP addresses, prefixes, VLANs, VRFs | `/cmdb-ip`, `/ip-conflicts`, `/cmdb-search` |
| `virtualization` | ~20 | Clusters, VMs, VM interfaces | `/cmdb-search`, `/cmdb-audit`, `/cmdb-register` |
| `extras` | ~12 | Tags, journal entries, audit log | `/change-audit`, `/cmdb-register` |
| `circuits` | ~15 | Providers, circuits, terminations | — |
| `tenancy` | ~12 | Tenants, contacts | — |
| `vpn` | ~15 | Tunnels, IKE/IPSec policies, L2VPN | — |
| `wireless` | ~8 | Wireless LANs, links, groups | — |
### Recommended Configurations
**For cmdb-assistant users** (~43 tools, ~4,500 tokens):
```bash
NETBOX_ENABLED_MODULES=dcim,ipam,virtualization,extras
```
**Basic infrastructure** (~100 tools):
```bash
NETBOX_ENABLED_MODULES=dcim,ipam
```
**Full CMDB** (all modules, ~182 tools):
```bash
# Omit NETBOX_ENABLED_MODULES or set to all modules
NETBOX_ENABLED_MODULES=dcim,ipam,circuits,virtualization,tenancy,vpn,wireless,extras
```
### Startup Logging
On startup, the server logs enabled modules and tool count:
```
NetBox MCP Server initialized: 43 tools registered (modules: dcim, extras, ipam, virtualization)
```
### Disabled Tool Behavior
Calling a tool from a disabled module returns a clear error:
```
Tool 'circuits_list_circuits' is not available (module 'circuits' not enabled).
Enabled modules: dcim, extras, ipam, virtualization
```
## Available Tools
### DCIM (Data Center Infrastructure Management)
@@ -128,18 +191,18 @@ Add to your Claude Code MCP configuration (`~/.config/claude/mcp.json` or projec
 | `circuits_create_provider` | Create a provider |
 | `circuits_list_circuits` | List circuits |
 | `circuits_create_circuit` | Create a circuit |
-| `circuits_list_circuit_terminations` | List terminations |
+| `circ_list_terminations` | List terminations |
 | ... and more |
 ### Virtualization
 | Tool | Description |
 |------|-------------|
-| `virtualization_list_clusters` | List clusters |
-| `virtualization_create_cluster` | Create a cluster |
-| `virtualization_list_virtual_machines` | List VMs |
-| `virtualization_create_virtual_machine` | Create a VM |
-| `virtualization_list_vm_interfaces` | List VM interfaces |
+| `virt_list_clusters` | List clusters |
+| `virt_create_cluster` | Create a cluster |
+| `virt_list_vms` | List VMs |
+| `virt_create_vm` | Create a VM |
+| `virt_list_vm_ifaces` | List VM interfaces |
 | ... and more |
 ### Tenancy
@@ -167,9 +230,9 @@ Add to your Claude Code MCP configuration (`~/.config/claude/mcp.json` or projec
 | Tool | Description |
 |------|-------------|
-| `wireless_list_wireless_lans` | List wireless LANs |
-| `wireless_create_wireless_lan` | Create a WLAN |
-| `wireless_list_wireless_links` | List wireless links |
+| `wlan_list_lans` | List wireless LANs |
+| `wlan_create_lan` | Create a WLAN |
+| `wlan_list_links` | List wireless links |
 | ... and more |
 ### Extras

View File

@@ -9,11 +9,17 @@ from pathlib import Path
from dotenv import load_dotenv from dotenv import load_dotenv
import os import os
import logging import logging
from typing import Dict, Optional from typing import Dict, List, Optional, Set
logging.basicConfig(level=logging.INFO) logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
# All available NetBox modules
ALL_MODULES = frozenset([
'dcim', 'ipam', 'circuits', 'virtualization',
'tenancy', 'vpn', 'wireless', 'extras'
])
class NetBoxConfig: class NetBoxConfig:
"""Configuration loader for NetBox MCP Server""" """Configuration loader for NetBox MCP Server"""
@@ -23,6 +29,7 @@ class NetBoxConfig:
self.api_token: Optional[str] = None self.api_token: Optional[str] = None
self.verify_ssl: bool = True self.verify_ssl: bool = True
self.timeout: int = 30 self.timeout: int = 30
self.enabled_modules: Set[str] = set(ALL_MODULES)
def load(self) -> Dict[str, any]: def load(self) -> Dict[str, any]:
""" """
@@ -73,6 +80,9 @@ class NetBoxConfig:
self.timeout = 30 self.timeout = 30
logger.warning(f"Invalid NETBOX_TIMEOUT value '{timeout_str}', using default 30") logger.warning(f"Invalid NETBOX_TIMEOUT value '{timeout_str}', using default 30")
# Module filtering
self.enabled_modules = self._load_enabled_modules()
# Validate required variables # Validate required variables
self._validate() self._validate()
@@ -84,7 +94,8 @@ class NetBoxConfig:
'api_url': self.api_url, 'api_url': self.api_url,
'api_token': self.api_token, 'api_token': self.api_token,
'verify_ssl': self.verify_ssl, 'verify_ssl': self.verify_ssl,
'timeout': self.timeout 'timeout': self.timeout,
'enabled_modules': self.enabled_modules
} }
def _validate(self) -> None: def _validate(self) -> None:
@@ -106,3 +117,40 @@ class NetBoxConfig:
f"Missing required configuration: {', '.join(missing)}\n" f"Missing required configuration: {', '.join(missing)}\n"
"Check your ~/.config/claude/netbox.env file" "Check your ~/.config/claude/netbox.env file"
) )
def _load_enabled_modules(self) -> Set[str]:
"""
Load enabled modules from NETBOX_ENABLED_MODULES environment variable.
Format: Comma-separated list of module names.
Example: NETBOX_ENABLED_MODULES=dcim,ipam,virtualization,extras
Returns:
Set of enabled module names. If env var is unset/empty, returns all modules.
"""
modules_str = os.getenv('NETBOX_ENABLED_MODULES', '').strip()
if not modules_str:
logger.info("NETBOX_ENABLED_MODULES not set, all modules enabled (default)")
return set(ALL_MODULES)
# Parse comma-separated list, strip whitespace
requested = {m.strip().lower() for m in modules_str.split(',') if m.strip()}
# Validate module names
invalid = requested - ALL_MODULES
if invalid:
logger.warning(
f"Unknown modules in NETBOX_ENABLED_MODULES: {', '.join(sorted(invalid))}. "
f"Valid modules: {', '.join(sorted(ALL_MODULES))}"
)
# Return only valid modules
enabled = requested & ALL_MODULES
if not enabled:
logger.warning("No valid modules enabled, falling back to all modules")
return set(ALL_MODULES)
logger.info(f"Enabled modules: {', '.join(sorted(enabled))}")
return enabled

View File

@@ -8,11 +8,12 @@ Tenancy, VPN, Wireless, and Extras.
import asyncio import asyncio
import logging import logging
import json import json
from typing import Optional, Set
from mcp.server import Server from mcp.server import Server
from mcp.server.stdio import stdio_server from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent from mcp.types import Tool, TextContent
from .config import NetBoxConfig from .config import NetBoxConfig, ALL_MODULES
from .netbox_client import NetBoxClient from .netbox_client import NetBoxClient
from .tools.dcim import DCIMTools from .tools.dcim import DCIMTools
from .tools.ipam import IPAMTools from .tools.ipam import IPAMTools
@@ -1453,6 +1454,49 @@ TOOL_NAME_MAP = {
} }
# Map tool name prefixes to module names.
# This handles both full prefixes and shortened prefixes used in TOOL_NAME_MAP.
PREFIX_TO_MODULE = {
'dcim': 'dcim',
'ipam': 'ipam',
'circuits': 'circuits',
'circ': 'circuits', # Shortened prefix
'virtualization': 'virtualization',
'virt': 'virtualization', # Shortened prefix
'tenancy': 'tenancy',
'vpn': 'vpn',
'wireless': 'wireless',
'wlan': 'wireless', # Shortened prefix
'extras': 'extras',
}
def _get_tool_module(tool_name: str) -> Optional[str]:
"""
Determine which module a tool belongs to.
Checks TOOL_NAME_MAP first for shortened names, then falls back to prefix extraction.
Args:
tool_name: The tool name (e.g., 'dcim_list_devices', 'virt_list_vms')
Returns:
Module name (e.g., 'dcim', 'virtualization') or None if unknown
"""
# Check mapped short names first
if tool_name in TOOL_NAME_MAP:
category, _ = TOOL_NAME_MAP[tool_name]
return category
# Fall back to prefix extraction
parts = tool_name.split('_', 1)
if len(parts) < 2:
return None
prefix = parts[0]
return PREFIX_TO_MODULE.get(prefix)
class NetBoxMCPServer: class NetBoxMCPServer:
"""MCP Server for NetBox integration""" """MCP Server for NetBox integration"""
@@ -1460,6 +1504,8 @@ class NetBoxMCPServer:
self.server = Server("netbox-mcp") self.server = Server("netbox-mcp")
self.config = None self.config = None
self.client = None self.client = None
self.enabled_modules: Set[str] = set(ALL_MODULES)
# Tool instances - only instantiated for enabled modules
self.dcim_tools = None self.dcim_tools = None
self.ipam_tools = None self.ipam_tools = None
self.circuits_tools = None self.circuits_tools = None
@@ -1474,18 +1520,39 @@ class NetBoxMCPServer:
try: try:
config_loader = NetBoxConfig() config_loader = NetBoxConfig()
self.config = config_loader.load() self.config = config_loader.load()
self.enabled_modules = self.config['enabled_modules']
self.client = NetBoxClient() self.client = NetBoxClient()
self.dcim_tools = DCIMTools(self.client)
self.ipam_tools = IPAMTools(self.client)
self.circuits_tools = CircuitsTools(self.client)
self.virtualization_tools = VirtualizationTools(self.client)
self.tenancy_tools = TenancyTools(self.client)
self.vpn_tools = VPNTools(self.client)
self.wireless_tools = WirelessTools(self.client)
self.extras_tools = ExtrasTools(self.client)
logger.info(f"NetBox MCP Server initialized for {self.config['api_url']}") # Conditionally instantiate tool classes for enabled modules only
if 'dcim' in self.enabled_modules:
self.dcim_tools = DCIMTools(self.client)
if 'ipam' in self.enabled_modules:
self.ipam_tools = IPAMTools(self.client)
if 'circuits' in self.enabled_modules:
self.circuits_tools = CircuitsTools(self.client)
if 'virtualization' in self.enabled_modules:
self.virtualization_tools = VirtualizationTools(self.client)
if 'tenancy' in self.enabled_modules:
self.tenancy_tools = TenancyTools(self.client)
if 'vpn' in self.enabled_modules:
self.vpn_tools = VPNTools(self.client)
if 'wireless' in self.enabled_modules:
self.wireless_tools = WirelessTools(self.client)
if 'extras' in self.enabled_modules:
self.extras_tools = ExtrasTools(self.client)
# Count tools that will be registered
tool_count = sum(
1 for name in TOOL_DEFINITIONS
if _get_tool_module(name) in self.enabled_modules
)
modules_str = ', '.join(sorted(self.enabled_modules))
logger.info(
f"NetBox MCP Server initialized: {tool_count} tools registered "
f"(modules: {modules_str})"
)
except Exception as e: except Exception as e:
logger.error(f"Failed to initialize: {e}") logger.error(f"Failed to initialize: {e}")
raise raise
@@ -1495,9 +1562,14 @@ class NetBoxMCPServer:
@self.server.list_tools() @self.server.list_tools()
async def list_tools() -> list[Tool]: async def list_tools() -> list[Tool]:
"""Return list of available tools""" """Return list of available tools, filtered by enabled modules"""
tools = [] tools = []
for name, definition in TOOL_DEFINITIONS.items(): for name, definition in TOOL_DEFINITIONS.items():
# Filter tools by enabled modules
module = _get_tool_module(name)
if module not in self.enabled_modules:
continue
tools.append(Tool( tools.append(Tool(
name=name, name=name,
description=definition['description'], description=definition['description'],
@@ -1532,6 +1604,14 @@ class NetBoxMCPServer:
'virtualization_list_virtual_machines') to meet the 28-character 'virtualization_list_virtual_machines') to meet the 28-character
limit. TOOL_NAME_MAP handles the translation to actual method names. limit. TOOL_NAME_MAP handles the translation to actual method names.
""" """
# Check module is enabled (routing guard)
module = _get_tool_module(name)
if module and module not in self.enabled_modules:
raise ValueError(
f"Tool '{name}' is not available (module '{module}' not enabled). "
f"Enabled modules: {', '.join(sorted(self.enabled_modules))}"
)
# Check if this is a mapped short name # Check if this is a mapped short name
if name in TOOL_NAME_MAP: if name in TOOL_NAME_MAP:
category, method_name = TOOL_NAME_MAP[name] category, method_name = TOOL_NAME_MAP[name]

View File

@@ -0,0 +1,5 @@
2026-01-26T11:40:11 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/viz-platform/registry/dmc_2_5.json | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T13:46:31 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/viz-platform/tests/test_chart_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T13:46:32 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/viz-platform/tests/test_theme_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T13:46:34 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/viz-platform/tests/test_theme_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T13:46:35 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/viz-platform/tests/test_theme_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md

View File

@@ -112,4 +112,4 @@ pytest tests/ -v
 ## Usage
-This MCP server is used by the `viz-platform` plugin. See [plugins/viz-platform/README.md](../../plugins/viz-platform/README.md) for usage instructions.
+This MCP server is used by the `viz-platform` plugin. See the plugin's commands in `plugins/viz-platform/commands/` for usage.

View File

@@ -1,6 +1,6 @@
 {
   "name": "clarity-assist",
-  "version": "1.0.0",
+  "version": "1.2.0",
   "description": "Prompt optimization and requirement clarification with ND-friendly accommodations",
   "author": {
     "name": "Leo Miranda",

View File

@@ -1,103 +0,0 @@
# clarity-assist
Prompt optimization and requirement clarification plugin with neurodivergent-friendly accommodations.
## Overview
clarity-assist helps transform vague, incomplete, or ambiguous requests into clear, actionable specifications. It uses a structured 4-D methodology (Deconstruct, Diagnose, Develop, Deliver) and ND-friendly communication patterns.
## Commands
| Command | Description |
|---------|-------------|
| `/clarify` | Full 4-D prompt optimization for complex requests |
| `/quick-clarify` | Rapid single-pass clarification for simple requests |
## Features
### 4-D Methodology
1. **Deconstruct** - Break down the request into components
2. **Diagnose** - Analyze gaps and potential issues
3. **Develop** - Gather clarifications through structured questions
4. **Deliver** - Produce refined specification
### ND-Friendly Design
- **Option-based questioning** - Always provide 2-4 concrete choices
- **Chunked questions** - Ask 1-2 questions at a time
- **Context for questions** - Explain why you're asking
- **Conflict detection** - Check previous answers before new questions
- **Progress acknowledgment** - Summarize frequently
### Escalation Protocol
When requests are complex or users seem overwhelmed:
- Acknowledge complexity
- Offer to focus on one aspect at a time
- Build incrementally
## Installation
Add to your project's `.claude/settings.json`:
```json
{
"plugins": ["clarity-assist"]
}
```
## Usage
### Full Clarification
```
/clarify
[Your vague or complex request here]
```
### Quick Clarification
```
/quick-clarify
[Your mostly-clear request here]
```
## Configuration
No configuration required. The plugin uses sensible defaults.
## Output Format
After clarification, you receive a structured specification:
```markdown
## Clarified Request
### Summary
[Description of what will be built]
### Scope
**In Scope:** [items]
**Out of Scope:** [items]
### Requirements
[Prioritized table]
### Assumptions
[List of assumptions]
```
## Documentation
- [Neurodivergent Support Guide](docs/ND-SUPPORT.md) - How clarity-assist supports ND users with executive function challenges and cognitive differences
## Integration
For CLAUDE.md integration instructions, see `claude-md-integration.md`.
## License
MIT

View File

@@ -1,3 +1,11 @@
---
name: clarity-coach
description: Patient, structured coach helping users articulate requirements clearly. Uses neurodivergent-friendly communication patterns.
model: sonnet
permissionMode: default
disallowedTools: Write, Edit, MultiEdit
---
# Clarity Coach Agent
## Visual Output Requirements

View File

@@ -2,16 +2,12 @@
## Visual Output ## Visual Output
When executing this command, display the plugin header:
``` ```
┌──────────────────────────────────────────────────────────────────┐ +----------------------------------------------------------------------+
│ 💬 CLARITY-ASSIST · Prompt Optimization | CLARITY-ASSIST - Prompt Optimization |
└──────────────────────────────────────────────────────────────────┘ +----------------------------------------------------------------------+
``` ```
Then proceed with the workflow.
## Purpose ## Purpose
Transform vague, incomplete, or ambiguous requests into clear, actionable specifications using the 4-D methodology with neurodivergent-friendly accommodations. Transform vague, incomplete, or ambiguous requests into clear, actionable specifications using the 4-D methodology with neurodivergent-friendly accommodations.
@@ -23,127 +19,46 @@ Transform vague, incomplete, or ambiguous requests into clear, actionable specif
- Tasks requiring significant context gathering - Tasks requiring significant context gathering
- When user seems uncertain about what they want - When user seems uncertain about what they want
## 4-D Methodology ## Skills to Load
### Phase 1: Deconstruct Load these skills before proceeding:
Break down the user's request into components: - `skills/4d-methodology.md` - Core 4-phase process
- `skills/nd-accommodations.md` - ND-friendly question patterns
- `skills/clarification-techniques.md` - Anti-patterns and templates
- `skills/escalation-patterns.md` - When to adjust approach
1. **Extract explicit requirements** - What was directly stated ## Workflow
2. **Identify implicit assumptions** - What seems assumed but not stated
3. **Note ambiguities** - Points that could go multiple ways
4. **List dependencies** - External factors that might affect implementation
### Phase 2: Diagnose 1. **Deconstruct** - Break down request into components
2. **Diagnose** - Identify gaps and conflicts
Analyze gaps and potential issues: 3. **Develop** - Gather clarifications via structured questions
4. **Deliver** - Present refined specification
1. **Missing information** - What do we need to know? 5. **Offer RFC Creation** - For feature work, offer to save as RFC
2. **Conflicting requirements** - Do any stated goals contradict?
3. **Scope boundaries** - What's in/out of scope?
4. **Technical constraints** - Platform, language, architecture limits
### Phase 3: Develop
Gather clarifications through structured questioning:
**ND-Friendly Question Rules:**
- Present 2-4 concrete options (never open-ended alone)
- Include "Other" for custom responses
- Ask 1-2 questions at a time maximum
- Provide brief context for why you're asking
- Check for conflicts with previous answers
**Example Format:**
```
To help me understand the scope better:
**How should errors be handled?**
1. Silent logging (user sees nothing)
2. Toast notifications (brief, dismissible)
3. Modal dialogs (requires user action)
4. Other
[Context: This affects both UX and how much error-handling code we need]
```
### Phase 4: Deliver
Produce the refined specification:
```markdown
## Clarified Request
### Summary
[1-2 sentence description of what will be built]
### Scope
**In Scope:**
- [Item 1]
- [Item 2]
**Out of Scope:**
- [Item 1]
### Requirements
| # | Requirement | Priority | Notes |
|---|-------------|----------|-------|
| 1 | ... | Must | ... |
| 2 | ... | Should | ... |
### Assumptions
- [Assumption made based on conversation]
### Open Questions
- [Any remaining ambiguities, if any]
```
## Conflict Detection
Before each question, review previous answers for conflicts:
```
[Internal check]
Previous: User said "keep it simple"
Current question about: Adding configuration options
Potential conflict: More options = more complexity
→ Acknowledge: "Earlier you mentioned keeping it simple. With that in mind..."
```
## Escalation Protocol
If the request is particularly messy or the user seems overwhelmed:
1. Acknowledge the complexity
2. Offer to start with just ONE aspect
3. Build incrementally
```
"This touches a lot of areas. Rather than tackle everything at once,
let's start with [most critical piece]. Once that's clear, we can
add the other parts. Sound good?"
```
## Example Interaction
**User:** Make the app faster
**Clarify Response:**
```
Let me help clarify what kind of performance improvements you're looking for.
**What type of slowness are you experiencing?**
1. Initial page load takes too long
2. Interactions feel sluggish (clicking, typing)
3. API calls are slow
4. All of the above
5. Other
[I'm asking because each has very different solutions]
```
## Output Format ## Output Format
After gathering all necessary information, use the Deliver phase format to present the clarified specification for user confirmation. Use the Deliver phase template from `skills/4d-methodology.md` to present the clarified specification for user confirmation.
## RFC Creation Offer (Step 5)
After presenting the clarified specification, if the request appears to be a feature or enhancement:
```
---
Would you like to save this as an RFC for formal tracking?
An RFC (Request for Comments) provides:
- Structured documentation of the proposal
- Review workflow before implementation
- Integration with sprint planning
[1] Yes, create RFC from this specification
[2] No, proceed with implementation directly
```
If user selects [1]:
- Pass clarified specification to `/rfc-create`
- The Summary, Motivation, and Design sections will be populated from the clarified spec
- User can then refine the RFC and submit for review

View File

@@ -2,16 +2,12 @@
## Visual Output ## Visual Output
When executing this command, display the plugin header:
``` ```
┌──────────────────────────────────────────────────────────────────┐ +----------------------------------------------------------------------+
│ 💬 CLARITY-ASSIST · Quick Clarify | CLARITY-ASSIST - Quick Clarify |
└──────────────────────────────────────────────────────────────────┘ +----------------------------------------------------------------------+
``` ```
Then proceed with the workflow.
## Purpose ## Purpose
Single-pass clarification for requests that are mostly clear but need minor disambiguation. Single-pass clarification for requests that are mostly clear but need minor disambiguation.
@@ -23,74 +19,27 @@ Single-pass clarification for requests that are mostly clear but need minor disa
- Follow-up to an already-clarified request - Follow-up to an already-clarified request
- Simple feature additions or bug fixes - Simple feature additions or bug fixes
## Process ## Skills to Load
### Step 1: Echo Understanding - `skills/nd-accommodations.md` - ND-friendly question patterns
- `skills/clarification-techniques.md` - Echo and micro-summary techniques
- `skills/escalation-patterns.md` - When to escalate to full /clarify
Restate what you understood in a single sentence: ## Workflow
``` 1. **Echo Understanding** - Restate in a single sentence
"I understand you want [X] that does [Y]." 2. **Quick Disambiguation** - Ask ONE multiple-choice question if needed
``` 3. **Proceed or Confirm** - Start work or offer micro-summary
### Step 2: Quick Disambiguation
If needed, ask ONE multiple-choice question:
```
**Just to confirm:** Should this [A] or [B]?
1. [Option A with brief description]
2. [Option B with brief description]
3. Both
4. Neither / Other approach
```
### Step 3: Proceed or Confirm
If confident (>90%), proceed with a brief confirmation:
```
"Got it. I'll implement [X with chosen option]. Starting now."
```
If less confident, offer a micro-summary:
```
"Quick summary before I start:
- [Key point 1]
- [Key point 2]
- [Assumption made]
Proceed? (Or clarify anything)"
```
## Example Interaction
**User:** Add a delete button to the user list
**Quick-Clarify Response:**
```
I'll add a delete button to each row in the user list.
**Should deletion require confirmation?**
1. Yes - show confirmation dialog (Recommended)
2. No - delete immediately
3. Soft delete - mark as deleted but keep data
[Asking because accidental deletions can be problematic]
```
## Escalation to Full /clarify
If quick-clarify reveals complexity:
```
"This is more involved than it first appeared - there are
several decisions to make. Want me to switch to a more
thorough clarification process? (Just say 'yes' or 'clarify')"
```
## Output Format ## Output Format
For quick-clarify, no formal specification document is needed. Just proceed with the task after brief confirmation, documenting assumptions inline with the work. No formal specification document needed. Proceed after brief confirmation, documenting assumptions inline with the work.
## Escalation
If complexity emerges, offer to switch to full `/clarify`:
```
"This is more involved than it first appeared. Want me to switch
to a more thorough clarification process?"
```

View File

@@ -2,8 +2,13 @@
   "hooks": {
     "UserPromptSubmit": [
       {
-        "type": "command",
-        "command": "${CLAUDE_PLUGIN_ROOT}/hooks/vagueness-check.sh"
+        "matcher": "",
+        "hooks": [
+          {
+            "type": "command",
+            "command": "${CLAUDE_PLUGIN_ROOT}/hooks/vagueness-check.sh"
+          }
+        ]
       }
     ]
   }

View File

@@ -197,6 +197,38 @@ if (( $(echo "$SCORE > 1.0" | bc -l) )); then
SCORE="1.0" SCORE="1.0"
fi fi
# ============================================================================
# Feature Request Detection (for RFC suggestion)
# ============================================================================
FEATURE_REQUEST=false
# Feature request phrases
FEATURE_PHRASES=(
"we should"
"it would be nice"
"feature request"
"idea:"
"suggestion:"
"what if we"
"wouldn't it be great"
"i think we need"
"we need to add"
"we could add"
"how about adding"
"can we add"
"new feature"
"enhancement"
"proposal"
)
for phrase in "${FEATURE_PHRASES[@]}"; do
if [[ "$PROMPT_LOWER" == *"$phrase"* ]]; then
FEATURE_REQUEST=true
break
fi
done
# ============================================================================ # ============================================================================
# Output suggestion if score exceeds threshold # Output suggestion if score exceeds threshold
# ============================================================================ # ============================================================================
@@ -208,8 +240,16 @@ if (( $(echo "$SCORE >= $THRESHOLD" | bc -l) )); then
# Gentle, non-blocking suggestion # Gentle, non-blocking suggestion
echo "$PREFIX Your prompt could benefit from more clarity." echo "$PREFIX Your prompt could benefit from more clarity."
echo "$PREFIX Consider running /clarity-assist to refine your request." echo "$PREFIX Consider running /clarify to refine your request."
echo "$PREFIX (Vagueness score: ${SCORE_PCT}% - this is a suggestion, not a block)" echo "$PREFIX (Vagueness score: ${SCORE_PCT}% - this is a suggestion, not a block)"
# Additional RFC suggestion if feature request detected
if [[ "$FEATURE_REQUEST" == true ]]; then
echo "$PREFIX This looks like a feature idea. Consider /rfc-create to track it formally."
fi
elif [[ "$FEATURE_REQUEST" == true ]]; then
# Feature request detected but not vague - still suggest RFC
echo "$PREFIX This looks like a feature idea. Consider /rfc-create to track it formally."
fi fi
# Always exit 0 - this hook is non-blocking # Always exit 0 - this hook is non-blocking

View File

@@ -0,0 +1,76 @@
# 4-D Methodology for Prompt Clarification
The 4-D methodology transforms vague requests into actionable specifications.
## Phase 1: Deconstruct
Break down the user's request into components:
1. **Extract explicit requirements** - What was directly stated
2. **Identify implicit assumptions** - What seems assumed but not stated
3. **Note ambiguities** - Points that could go multiple ways
4. **List dependencies** - External factors that might affect implementation
## Phase 2: Diagnose
Analyze gaps and potential issues:
1. **Missing information** - What do we need to know?
2. **Conflicting requirements** - Do any stated goals contradict?
3. **Scope boundaries** - What is in/out of scope?
4. **Technical constraints** - Platform, language, architecture limits
## Phase 3: Develop
Gather clarifications through structured questioning:
- Present 2-4 concrete options (never open-ended alone)
- Include "Other" for custom responses
- Ask 1-2 questions at a time maximum
- Provide brief context for why you are asking
- Check for conflicts with previous answers
**Example Format:**
```
To help me understand the scope better:
**How should errors be handled?**
1. Silent logging (user sees nothing)
2. Toast notifications (brief, dismissible)
3. Modal dialogs (requires user action)
4. Other
[Context: This affects both UX and how much error-handling code we need]
```
## Phase 4: Deliver
Produce the refined specification:
```markdown
## Clarified Request
### Summary
[1-2 sentence description of what will be built]
### Scope
**In Scope:**
- [Item 1]
- [Item 2]
**Out of Scope:**
- [Item 1]
### Requirements
| # | Requirement | Priority | Notes |
|---|-------------|----------|-------|
| 1 | ... | Must | ... |
| 2 | ... | Should | ... |
### Assumptions
- [Assumption made based on conversation]
### Open Questions
- [Any remaining ambiguities, if any]
```

View File

@@ -0,0 +1,86 @@
# Clarification Techniques
Structured approaches for disambiguating user requests.
## Anti-Patterns to Detect
### Vague Requests
**Triggers:** "improve", "fix", "update", "change", "better", "faster", "cleaner"
**Response:** Ask for specific metrics or outcomes
### Scope Creep Signals
**Triggers:** "while you're at it", "also", "might as well", "and another thing"
**Response:** Acknowledge, then isolate: "I'll note that for after the main task"
### Assumption Gaps
**Triggers:** References to "the" thing (which thing?), "it" (what?), "there" (where?)
**Response:** Echo back specific understanding
### Conflicting Requirements
**Triggers:** "Simple but comprehensive", "Fast but thorough", "Minimal but complete"
**Response:** Prioritize: "Which matters more: simplicity or completeness?"
## Question Templates
### For Unclear Purpose
```
**What problem does this solve?**
1. [Specific problem A]
2. [Specific problem B]
3. Combination
4. Different problem: ____
```
### For Missing Scope
```
**What should this include?**
- [ ] Feature A
- [ ] Feature B
- [ ] Feature C
- [ ] Other: ____
```
### For Ambiguous Behavior
```
**When [trigger event], what should happen?**
1. [Behavior option A]
2. [Behavior option B]
3. Nothing (ignore)
4. Depends on: ____
```
### For Technical Decisions
```
**Implementation approach:**
1. [Approach A] - pros: X, cons: Y
2. [Approach B] - pros: X, cons: Y
3. Let me decide based on codebase
4. Need more info about: ____
```
## Echo Understanding Technique
Before diving into questions, restate understanding:
```
"I understand you want [X] that does [Y]."
```
This validates comprehension and gives the user a chance to correct early.
## Micro-Summary Technique
For quick confirmations before proceeding:
```
"Quick summary before I start:
- [Key point 1]
- [Key point 2]
- [Assumption made]
Proceed? (Or clarify anything)"
```

View File

@@ -0,0 +1,57 @@
# Escalation Patterns
Guidelines for when to escalate between clarification modes.
## Quick-Clarify to Full Clarify
Escalate when quick-clarify reveals unexpected complexity:
```
"This is more involved than it first appeared - there are
several decisions to make. Want me to switch to a more
thorough clarification process? (Just say 'yes' or 'clarify')"
```
### Triggers for Escalation
- Multiple ambiguities discovered during quick pass
- User's answer reveals hidden dependencies
- Scope expands beyond original understanding
- Technical constraints emerge that need discussion
- Conflicting requirements surface
## Full Clarify to Incremental
When user is overwhelmed by full 4-D process:
```
"This touches a lot of areas. Rather than tackle everything at once,
let's start with [most critical piece]. Once that's clear, we can
add the other parts. Sound good?"
```
### Signs of Overwhelm
- Long pauses or hesitation
- "I don't know" responses
- Requesting breaks
- Contradicting earlier answers
- Expressing frustration
## Choosing Initial Mode
### Use /quick-clarify When
- Request is fairly clear, just one or two ambiguities
- User is in a hurry
- Follow-up to an already-clarified request
- Simple feature additions or bug fixes
- Confidence is high (>90%)
### Use /clarify When
- Complex multi-step requests
- Requirements with multiple possible interpretations
- Tasks requiring significant context gathering
- User seems uncertain about what they want
- First time working on this feature/area

View File

@@ -0,0 +1,74 @@
# Neurodivergent-Friendly Accommodations
Guidelines for making clarification interactions accessible and comfortable for neurodivergent users.
## Core Principles
### Reduce Cognitive Load
- Maximum 4 options per question
- Always include "Other" escape hatch
- Provide examples, not just descriptions
- Use numbered lists for easy reference
### Support Working Memory
- Summarize frequently
- Reference earlier decisions explicitly
- Do not assume the user remembers context from many turns ago
- Echo back understanding before proceeding
### Allow Processing Time
- Do not rapid-fire questions
- Validate answers before moving on
- Offer to revisit or change earlier answers
- One question block at a time
### Manage Overwhelm
- Offer to break into smaller sessions
- Prioritize must-haves vs nice-to-haves
- Provide "good enough for now" options
- Acknowledge complexity openly
## Question Formatting Rules
**Always do:**
```
**How should errors be handled?**
1. Silent logging (user sees nothing)
2. Toast notifications (brief, dismissible)
3. Modal dialogs (requires user action)
4. Other
[Context: This affects both UX and error-handling complexity]
```
**Never do:**
```
How do you want to handle errors? There are many approaches...
```
## Conflict Acknowledgment
Before asking about something that might conflict with a previous answer:
```
[Internal check]
Previous: User said "keep it simple"
Current question about: Adding configuration options
Potential conflict: More options = more complexity
```
Then acknowledge: "Earlier you mentioned keeping it simple. With that in mind..."
## Escalation for Overwhelm
If the request is particularly complex or user seems overwhelmed:
1. Acknowledge the complexity openly
2. Offer to start with just ONE aspect
3. Build incrementally
```
"This touches a lot of areas. Rather than tackle everything at once,
let's start with [most critical piece]. Once that's clear, we can
add the other parts. Sound good?"
```

View File

@@ -1,7 +1,7 @@
 {
   "name": "claude-config-maintainer",
-  "version": "1.0.0",
+  "version": "1.2.0",
-  "description": "Maintains and optimizes CLAUDE.md configuration files for Claude Code projects",
+  "description": "Maintains and optimizes CLAUDE.md and settings.local.json configuration files for Claude Code projects",
   "author": {
     "name": "Leo Miranda",
     "email": "leobmiranda@gmail.com"
@@ -14,7 +14,9 @@
     "configuration",
     "optimization",
     "claude-md",
-    "developer-tools"
+    "developer-tools",
+    "settings",
+    "permissions"
   ],
   "commands": ["./commands/"]
 }

View File

@@ -1,126 +0,0 @@
# Claude Config Maintainer
A Claude Code plugin for creating and maintaining optimized CLAUDE.md configuration files.
## Overview
CLAUDE.md files provide instructions to Claude Code when working with a project. This plugin helps you:
- **Analyze** existing CLAUDE.md files for improvement opportunities
- **Optimize** structure, clarity, and conciseness
- **Initialize** new CLAUDE.md files with project-specific content
## Installation
This plugin is part of the Leo Claude Marketplace. Install the marketplace and the plugin will be available.
## Commands
### `/config-analyze`
Analyze your CLAUDE.md and get a detailed report with scores and recommendations.
```
/config-analyze
```
### `/config-optimize`
Automatically optimize your CLAUDE.md based on best practices.
```
/config-optimize
```
### `/config-init`
Create a new CLAUDE.md tailored to your project.
```
/config-init
```
### `/config-diff`
Show differences between current CLAUDE.md and previous versions.
```
/config-diff # Compare working copy vs last commit
/config-diff --commit=abc1234 # Compare against specific commit
/config-diff --from=v1.0 --to=v2.0 # Compare two commits
/config-diff --section="Critical Rules" # Focus on specific section
```
### `/config-lint`
Lint CLAUDE.md for common anti-patterns and best practices.
```
/config-lint # Run all lint checks
/config-lint --fix # Auto-fix fixable issues
/config-lint --rules=security # Check only security rules
/config-lint --severity=error # Show only errors
```
**Lint Rule Categories:**
- **Security (SEC)** - Hardcoded secrets, paths, credentials
- **Structure (STR)** - Header hierarchy, required sections
- **Content (CNT)** - Contradictions, duplicates, vague rules
- **Format (FMT)** - Consistency, code blocks, whitespace
- **Best Practice (BPR)** - Missing Quick Start, Critical Rules sections
## Best Practices
A good CLAUDE.md should be:
- **Clear** - Easy to understand at a glance
- **Concise** - No unnecessary content
- **Complete** - All essential information included
- **Current** - Up to date with the project
### Recommended Structure
```markdown
# CLAUDE.md
## Project Overview
What does this project do?
## Quick Start
Essential build/test/run commands.
## Critical Rules
What must Claude NEVER do?
## Architecture (optional)
Key technical decisions.
## Common Operations (optional)
Frequent tasks and workflows.
```
### Length Guidelines
| Project Size | Recommended Lines |
|-------------|------------------|
| Small | 50-100 |
| Medium | 100-200 |
| Large | 200-400 |
## Scoring System
The analyzer scores CLAUDE.md files on:
- **Structure** (25 pts) - Organization and navigation
- **Clarity** (25 pts) - Clear, unambiguous instructions
- **Completeness** (25 pts) - Essential sections present
- **Conciseness** (25 pts) - Efficient information density
Target score: **70+** for effective Claude Code usage.
## Tips
1. Run `/config-analyze` periodically to maintain quality
2. Update CLAUDE.md when adding major features
3. Keep critical rules prominent and clear
4. Include examples where they add clarity
5. Remove generic advice that applies to all projects
## Contributing
This plugin is part of the personal-projects/leo-claude-mktplace repository.

View File

@@ -1,6 +1,9 @@
---
name: maintainer
description: CLAUDE.md optimization and maintenance agent
model: sonnet
permissionMode: acceptEdits
skills: visual-header, settings-optimization
---
# CLAUDE.md Maintainer Agent
@@ -114,7 +117,54 @@ Report plugin coverage percentage and offer to add missing integrations:
- Display the integration content that would be added - Display the integration content that would be added
- Ask user for confirmation before modifying CLAUDE.md - Ask user for confirmation before modifying CLAUDE.md
### 2. Optimize CLAUDE.md Structure ### 2. Audit Settings Files
When auditing settings files, perform the following checks:
#### A. Permission Analysis
Read `.claude/settings.local.json` (primary) and check `.claude/settings.json` and `~/.claude.json` project entries (secondary).
Evaluate using `skills/settings-optimization.md`:
**Redundancy:**
- Duplicate entries in allow/deny arrays
- Subset patterns covered by broader patterns
- Patterns that could be merged
**Coverage:**
- Common safe tools missing from allow list
- MCP server tools not covered
- Directory scopes with no matching permission
**Safety Alignment:**
- Deny rules cover secrets and destructive commands
- Allow rules don't bypass active review layers
- No overly broad patterns without justification
**Profile Fit:**
- Compare against recommended profile for the project's review architecture
- Identify specific additions/removals to reach target profile
#### B. Review Layer Verification
Before recommending auto-allow patterns, verify active review layers:
1. Read `plugins/*/hooks/hooks.json` for each installed plugin
2. Map hook types (PreToolUse, PostToolUse) to tool matchers (Write, Edit, Bash)
3. Confirm plugins are listed in `.claude-plugin/marketplace.json`
4. Only recommend auto-allow for scopes covered by ≥2 verified review layers
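As a reference point for step 2, a command-type hook entry in a plugin's `hooks.json` generally follows the shape sketched below; this is a minimal sketch, and the matcher and script path are illustrative, not taken from any specific plugin:
```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Write|Edit|MultiEdit",
        "hooks": [
          {
            "type": "command",
            "command": "${CLAUDE_PLUGIN_ROOT}/hooks/pre-write-review.sh"
          }
        ]
      }
    ]
  }
}
```
Only command-type entries like this should be counted toward the ≥2-layer threshold; prompt-type hooks are not reliable enough to justify auto-allow.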
#### C. Settings Efficiency Score (100 points)
| Category | Points |
|----------|--------|
| Redundancy | 25 |
| Coverage | 25 |
| Safety Alignment | 25 |
| Profile Fit | 25 |
### 3. Optimize CLAUDE.md Structure
**Recommended Structure:** **Recommended Structure:**
@@ -149,7 +199,7 @@ Common issues and solutions.
- Use headers that scan easily - Use headers that scan easily
- Include examples where they add clarity - Include examples where they add clarity
### 3. Apply Best Practices ### 4. Apply Best Practices
**DO:** **DO:**
- Use clear, direct language - Use clear, direct language
@@ -166,7 +216,7 @@ Common issues and solutions.
- Add generic advice that applies to all projects - Add generic advice that applies to all projects
- Use emojis unless project requires them - Use emojis unless project requires them
### 4. Generate Improvement Reports ### 5. Generate Improvement Reports
After analyzing a CLAUDE.md, provide: After analyzing a CLAUDE.md, provide:
@@ -202,7 +252,7 @@ Suggested Actions:
Would you like me to implement these improvements? Would you like me to implement these improvements?
``` ```
### 5. Insert Plugin Integrations ### 6. Insert Plugin Integrations
When adding plugin integration content to CLAUDE.md: When adding plugin integration content to CLAUDE.md:
@@ -237,7 +287,7 @@ Add this integration to CLAUDE.md?
- Allow users to skip specific plugins they don't want documented - Allow users to skip specific plugins they don't want documented
- Preserve existing CLAUDE.md structure and content - Preserve existing CLAUDE.md structure and content
### 6. Create New CLAUDE.md Files ### 7. Create New CLAUDE.md Files
When creating a new CLAUDE.md: When creating a new CLAUDE.md:
@@ -277,6 +327,39 @@ Every CLAUDE.md should have:
1. **Project Overview** - What is this? 1. **Project Overview** - What is this?
2. **Quick Start** - How do I build/test/run? 2. **Quick Start** - How do I build/test/run?
3. **Important Rules** - What must I NOT do? 3. **Important Rules** - What must I NOT do?
4. **Pre-Change Protocol** - Mandatory dependency check before code changes
### Pre-Change Protocol Section (MANDATORY)
**This section is REQUIRED in every CLAUDE.md.** It ensures Claude performs comprehensive dependency analysis before making any code changes.
```markdown
## ⛔ MANDATORY: Before Any Code Change
**Claude MUST show this checklist BEFORE editing any file:**
### 1. Impact Search Results
Run and show output of:
```bash
grep -rn "PATTERN" --include="*.sh" --include="*.md" --include="*.json" --include="*.py" | grep -v ".git"
```
### 2. Files That Will Be Affected
Numbered list of every file to be modified, with the specific change for each.
### 3. Files Searched But Not Changed (and why)
Proof that related files were checked and determined unchanged.
### 4. Documentation That References This
List of docs that mention this feature/script/function.
**User verifies this list before Claude proceeds. If Claude skips this, stop immediately.**
### After Changes
Run the same grep and show results proving no references remain unaddressed.
```
**When analyzing a CLAUDE.md, flag as HIGH priority issue if this section is missing.**
### Optional Sections (as needed) ### Optional Sections (as needed)

View File

@@ -1,6 +1,6 @@
 ## CLAUDE.md Maintenance (claude-config-maintainer)
-This project uses the **claude-config-maintainer** plugin to analyze and optimize CLAUDE.md configuration files.
+This project uses the **claude-config-maintainer** plugin to analyze and optimize CLAUDE.md and settings.local.json configuration files.
 ### Available Commands
@@ -9,8 +9,13 @@ This project uses the **claude-config-maintainer** plugin to analyze and optimiz
 | `/config-analyze` | Analyze CLAUDE.md for optimization opportunities with 100-point scoring |
 | `/config-optimize` | Automatically optimize CLAUDE.md structure and content |
 | `/config-init` | Initialize a new CLAUDE.md file for a project |
+| `/config-diff` | Track CLAUDE.md changes over time with behavioral impact analysis |
+| `/config-lint` | Lint CLAUDE.md for anti-patterns and best practices (31 rules) |
+| `/config-audit-settings` | Audit settings.local.json permissions with 100-point scoring |
+| `/config-optimize-settings` | Optimize permission patterns and apply named profiles |
+| `/config-permissions-map` | Visual map of review layers and permission coverage |
-### Scoring System
+### CLAUDE.md Scoring System
 The analysis uses a 100-point scoring system across four categories:
@@ -21,10 +26,31 @@ The analysis uses a 100-point scoring system across four categories:
 | Completeness | 25 | Overview, quick start, critical rules, workflows |
 | Conciseness | 25 | Efficiency, no repetition, appropriate length |
+### Settings Scoring System
+The settings audit uses a 100-point scoring system across four categories:
+| Category | Points | What It Measures |
+|----------|--------|------------------|
+| Redundancy | 25 | No duplicates, no subset patterns, efficient rules |
+| Coverage | 25 | Common tools allowed, MCP servers covered |
+| Safety Alignment | 25 | Deny rules for secrets/destructive ops, review layers verified |
+| Profile Fit | 25 | Alignment with recommended profile for review layer count |
+### Permission Profiles
+| Profile | Use Case |
+|---------|----------|
+| `conservative` | New users, minimal auto-allow, prompts for most writes |
+| `reviewed` | Projects with 2+ review layers (code-sentinel, doc-guardian, PR review) |
+| `autonomous` | Trusted CI/sandboxed environments only |
 ### Usage Guidelines
 - Run `/config-analyze` periodically to assess CLAUDE.md quality
+- Run `/config-audit-settings` to check permission efficiency
 - Target a score of **70+/100** for effective Claude Code operation
 - Address HIGH priority issues first when optimizing
 - Use `/config-init` when setting up new projects to start with best practices
+- Use `/config-permissions-map` to visualize review layer coverage
 - Re-analyze after making changes to verify improvements

View File

@@ -4,29 +4,18 @@ description: Analyze CLAUDE.md for optimization opportunities and plugin integra
# Analyze CLAUDE.md # Analyze CLAUDE.md
This command analyzes your project's CLAUDE.md file and provides a detailed report on optimization opportunities and plugin integration status. Analyze your CLAUDE.md and provide a scored report with recommendations.
## Skills to Load
- skills/visual-header.md
- skills/analysis-workflow.md
- skills/optimization-patterns.md
- skills/pre-change-protocol.md
## Visual Output ## Visual Output
When executing this command, display the plugin header: Display: `CONFIG-MAINTAINER - CLAUDE.md Analysis`
```
┌──────────────────────────────────────────────────────────────────┐
│ ⚙️ CONFIG-MAINTAINER · CLAUDE.md Analysis │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the analysis.
## What This Command Does
1. **Read CLAUDE.md** - Locates and reads the project's CLAUDE.md file
2. **Analyze Structure** - Evaluates organization, headers, and flow
3. **Check Content** - Reviews clarity, completeness, and conciseness
4. **Identify Issues** - Finds redundancy, verbosity, and missing sections
5. **Detect Active Plugins** - Identifies marketplace plugins enabled in the project
6. **Check Plugin Integration** - Verifies CLAUDE.md references active plugins
7. **Generate Report** - Provides scored assessment with recommendations
## Usage ## Usage
@@ -34,165 +23,27 @@ Then proceed with the analysis.
/config-analyze /config-analyze
``` ```
Or invoke the maintainer agent directly: ## Workflow
``` 1. Locate and parse CLAUDE.md
Analyze the CLAUDE.md file in this project 2. Evaluate structure, clarity, completeness, conciseness
``` 3. Find redundancy, verbosity, missing sections
4. Detect active marketplace plugins
5. Verify plugin integration in CLAUDE.md
6. Generate scored report with recommendations
## Analysis Criteria ## Scoring (100 points)
### Structure (25 points) | Category | Points |
- Logical section ordering |----------|--------|
- Clear header hierarchy | Structure | 25 |
- Easy navigation | Clarity | 25 |
- Appropriate grouping | Completeness | 25 |
| Conciseness | 25 |
### Clarity (25 points)
- Clear instructions
- Good examples
- Unambiguous language
- Appropriate detail level
### Completeness (25 points)
- Project overview present
- Quick start commands documented
- Critical rules highlighted
- Key workflows covered
### Conciseness (25 points)
- No unnecessary repetition
- Efficient information density
- Appropriate length for project size
- No generic filler content
## Plugin Integration Analysis
After the content analysis, the command detects and analyzes marketplace plugin integration:
### Detection Method
1. **Read `.claude/settings.local.json`** - Check for enabled MCP servers
2. **Map MCP servers to plugins** - Use marketplace registry to identify active plugins:
- `gitea` → projman
- `netbox` → cmdb-assistant
3. **Check for hooks** - Identify hook-based plugins (project-hygiene)
4. **Scan CLAUDE.md** - Look for plugin integration content
### Plugin Coverage Scoring
For each detected plugin, verify CLAUDE.md contains:
- Plugin section header or mention
- Available commands documentation
- MCP tools reference (if applicable)
- Usage guidelines
Coverage is reported as percentage: `(plugins referenced / plugins detected) * 100`
## Expected Output
```
CLAUDE.md Analysis Report
=========================
File: /path/to/project/CLAUDE.md
Lines: 245
Last Modified: 2025-01-18
Overall Score: 72/100
Category Scores:
- Structure: 20/25 (Good)
- Clarity: 18/25 (Good)
- Completeness: 22/25 (Excellent)
- Conciseness: 12/25 (Needs Work)
Strengths:
+ Clear project overview with good context
+ Critical rules prominently displayed
+ Comprehensive coverage of workflows
Issues Found:
1. [HIGH] Verbose explanations (lines 45-78)
Section "Running Tests" has 34 lines that could be 8 lines.
Impact: Harder to scan, important info buried
2. [MEDIUM] Duplicate content (lines 102-115, 189-200)
Same git workflow documented twice.
Impact: Maintenance burden, inconsistency risk
3. [MEDIUM] Missing Quick Start section
No clear "how to get started" instructions.
Impact: Slower onboarding for Claude
4. [LOW] Inconsistent header formatting
Mix of "## Title" and "## Title:" styles.
Impact: Minor readability issue
Recommendations:
1. Add Quick Start section at top (priority: high)
2. Condense Testing section to essentials (priority: high)
3. Remove duplicate git workflow (priority: medium)
4. Standardize header formatting (priority: low)
Estimated improvement: 15-20 points after changes
---
Plugin Integration Analysis
===========================
Detected Active Plugins:
✓ projman (via gitea MCP server)
✓ cmdb-assistant (via netbox MCP server)
✓ project-hygiene (via PostToolUse hook)
Plugin Coverage: 33% (1/3 plugins referenced)
✓ projman - Referenced in CLAUDE.md
✗ cmdb-assistant - NOT referenced
✗ project-hygiene - NOT referenced
Missing Integration Content:
1. cmdb-assistant
Add infrastructure management commands and NetBox MCP tools reference.
2. project-hygiene
Add cleanup hook documentation and configuration options.
---
Would you like me to:
[1] Implement all content recommendations
[2] Add missing plugin integrations to CLAUDE.md
[3] Do both (recommended)
[4] Show preview of changes first
```
## When to Use
Run `/config-analyze` when:
- Setting up a new project with existing CLAUDE.md
- CLAUDE.md feels too long or hard to use
- Claude seems to miss instructions
- Before major project changes
- Periodic maintenance (quarterly)
- After installing new marketplace plugins
- When Claude doesn't seem to use available plugin tools
## Follow-Up Actions
After analysis, you can:
- Run `/config-optimize` to automatically improve the file
- Manually address specific issues
- Request detailed recommendations for any section
- Compare with best practice templates
## Tips
- Run analysis after significant project changes
- Address HIGH priority issues first
- Keep scores above 70/100 for best results
- Re-analyze after making changes to verify improvement

View File

@@ -0,0 +1,204 @@
---
name: config-audit-settings
description: Audit settings.local.json for permission optimization opportunities
---
# /config-audit-settings
Audit Claude Code `settings.local.json` permissions with 100-point scoring across redundancy, coverage, safety alignment, and profile fit.
## Skills to Load
Before executing, load:
- `skills/visual-header.md`
- `skills/settings-optimization.md`
## Visual Output
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - Settings Audit |
+-----------------------------------------------------------------+
```
## Usage
```
/config-audit-settings # Full audit with recommendations
/config-audit-settings --diagram # Include Mermaid diagram of review layer coverage
```
## Workflow
### Step 1: Locate Settings Files
Search in order:
1. `.claude/settings.local.json` (primary target)
2. `.claude/settings.json` (shared config)
3. `~/.claude.json` project entry (legacy)
Report which format is in use.
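A minimal lookup sketch for this step (the search order mirrors the list above; treating the first match as authoritative is an assumption):
```bash
# Hypothetical sketch: report which settings format is in use.
for candidate in .claude/settings.local.json .claude/settings.json "$HOME/.claude.json"; do
  if [ -f "$candidate" ]; then
    echo "Settings file in use: $candidate"
    exit 0
  fi
done
echo "No Claude Code settings file found" >&2
exit 1
```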
### Step 2: Parse Permission Arrays
Extract and analyze:
- `permissions.allow` array
- `permissions.deny` array
- `permissions.ask` array (if present)
- Legacy `allowedTools` array (if legacy format)
### Step 3: Run Pattern Consolidation Analysis
Using `settings-optimization.md` Section 3, detect:
| Check | Description |
|-------|-------------|
| Duplicates | Exact same pattern appearing multiple times |
| Subsets | Narrower patterns covered by broader ones |
| Merge candidates | 4+ similar patterns that could be consolidated |
| Overly broad | Unscoped tool permissions (e.g., `Bash` without pattern) |
| Stale entries | Patterns referencing non-existent paths |
| Conflicts | Same pattern in both allow and deny |
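As an illustrative sketch, the Step 2 parsing plus two of these checks (exact duplicates and allow/deny conflicts) reduce to set operations over sorted pattern lists, assuming the `permissions.allow`/`permissions.deny` keys and a `jq` install:
```bash
# Hypothetical sketch of the duplicate and conflict checks.
settings=".claude/settings.local.json"
jq -r '.permissions.allow[]?' "$settings" | sort > /tmp/allow.txt
jq -r '.permissions.deny[]?'  "$settings" | sort > /tmp/deny.txt

echo "Exact duplicates in allow:"
uniq -d /tmp/allow.txt

echo "Patterns present in both allow and deny (conflicts):"
comm -12 <(uniq /tmp/allow.txt) <(uniq /tmp/deny.txt)
```
The subset, merge-candidate, and stale-path checks need pattern-aware logic beyond this sketch.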
### Step 4: Detect Active Marketplace Hooks
Read `plugins/*/hooks/hooks.json` files:
```bash
# Check each plugin's hooks
plugins/code-sentinel/hooks/hooks.json # PreToolUse security
plugins/doc-guardian/hooks/hooks.json # PostToolUse drift detection
plugins/project-hygiene/hooks/hooks.json # PostToolUse cleanup
plugins/data-platform/hooks/hooks.json # PostToolUse schema diff
plugins/contract-validator/hooks/hooks.json # Plugin validation
```
Parse each to identify:
- Hook event type (PreToolUse, PostToolUse)
- Tool matchers (Write, Edit, MultiEdit, Bash)
- Whether hook is command type (reliable) or prompt type (unreliable)
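A hypothetical parsing sketch for this step, assuming the nested `matcher`/`hooks` shape used by the newer hooks files (older flat entries will simply show an empty `types` field):
```bash
# Hypothetical sketch: summarize event type, matcher, and hook type per plugin.
for f in plugins/*/hooks/hooks.json; do
  [ -f "$f" ] || continue
  echo "== $f"
  jq -r '.hooks | to_entries[] | .key as $event | .value[]
         | "\($event)  matcher=\(.matcher // "*")  types=\([(.hooks // [])[] | .type] | join(","))"' "$f"
done
```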
### Step 5: Map Review Layers to Directory Scopes
For each directory scope in `settings-optimization.md` Section 4:
1. Count how many review layers are verified active
2. Determine if auto-allow is justified (≥2 layers required)
3. Note any scopes that lack coverage
### Step 6: Compare Against Recommended Profile
Based on review layer count:
- 0-1 layers: Recommend `conservative` profile
- 2+ layers: Recommend `reviewed` profile
- CI/sandboxed: May recommend `autonomous` profile
Calculate profile fit percentage.
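One way the fit percentage might be computed, purely as a sketch; the two pattern files are placeholders, and the authoritative formula lives in `settings-optimization.md` Section 6:
```bash
# Hypothetical sketch: profile fit = share of the target profile's allow
# patterns already present in the current allow list.
sort -u current-allow-patterns.txt > /tmp/current.txt   # placeholder input
sort -u reviewed-profile-allow.txt > /tmp/profile.txt   # placeholder input
matched=$(comm -12 /tmp/current.txt /tmp/profile.txt | wc -l)
total=$(wc -l < /tmp/profile.txt)
if [ "$total" -gt 0 ]; then
  echo "Profile fit: $(( matched * 100 / total ))%"
fi
```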
### Step 7: Generate Scored Report
Calculate scores using `settings-optimization.md` Section 6.
## Output Format
```
Settings Efficiency Score: XX/100
Redundancy: XX/25
Coverage: XX/25
Safety Alignment: XX/25
Profile Fit: XX/25
Current Profile: [closest match or "custom"]
Recommended Profile: [target based on review layers]
Issues Found:
🔴 CRITICAL: [description]
🟠 HIGH: [description]
🟡 MEDIUM: [description]
🔵 LOW: [description]
Active Review Layers Detected:
✓ code-sentinel (PreToolUse: Write|Edit|MultiEdit)
✓ doc-guardian (PostToolUse: Write|Edit|MultiEdit)
✓ project-hygiene (PostToolUse: Write|Edit)
✗ data-platform schema-diff (not detected)
Recommendations:
1. [specific action with pattern]
2. [specific action with pattern]
...
Follow-Up Actions:
1. Run /config-optimize-settings to apply recommendations
2. Run /config-optimize-settings --dry-run to preview first
3. Run /config-optimize-settings --profile=reviewed to apply profile
```
## Diagram Output (--diagram flag)
When `--diagram` is specified, generate a Mermaid flowchart showing:
**Before generating:** Read `/mnt/skills/user/mermaid-diagrams/SKILL.md` for diagram requirements.
**Diagram structure:**
- Left column: File operation types (Write, Edit, Bash)
- Middle: Review layers that intercept each operation
- Right column: Current permission status (auto-allowed, prompted, denied)
**Color coding:**
- PreToolUse hooks: Blue
- PostToolUse hooks: Green
- Sprint Approval: Amber
- PR Review: Purple
Example structure:
```mermaid
flowchart LR
subgraph Operations
W[Write]
E[Edit]
B[Bash]
end
subgraph Review Layers
CS[code-sentinel]
DG[doc-guardian]
PR[pr-review]
end
subgraph Permission
A[Auto-allowed]
P[Prompted]
D[Denied]
end
W --> CS
W --> DG
E --> CS
E --> DG
CS --> A
DG --> A
B --> P
classDef preHook fill:#e3f2fd
classDef postHook fill:#e8f5e9
classDef prReview fill:#f3e5f5
class CS preHook
class DG postHook
class PR prReview
```
## Issue Severity Levels
| Severity | Icon | Examples |
|----------|------|----------|
| CRITICAL | 🔴 | Unscoped `Bash` in allow, missing deny for secrets |
| HIGH | 🟠 | Overly broad patterns, missing MCP coverage |
| MEDIUM | 🟡 | Subset redundancy, merge candidates |
| LOW | 🔵 | Exact duplicates, minor optimizations |
## DO NOT
- Modify any files (this is audit only)
- Recommend `autonomous` profile unless explicitly sandboxed environment
- Recommend auto-allow for scopes with <2 verified review layers
- Skip hook verification before making recommendations

View File

@@ -4,248 +4,45 @@ description: Show diff between current CLAUDE.md and last commit
# Compare CLAUDE.md Changes
This command shows differences between your current CLAUDE.md file and previous versions, helping track configuration drift and review changes before committing.
## Skills to Load
- skills/visual-header.md
- skills/diff-analysis.md
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ ⚙️ CONFIG-MAINTAINER · CLAUDE.md Diff │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the diff.
## What This Command Does
1. **Detect CLAUDE.md Location** - Finds the project's CLAUDE.md file
2. **Compare Versions** - Shows diff against last commit or specified revision
3. **Highlight Sections** - Groups changes by affected sections
4. **Summarize Impact** - Explains what the changes mean for Claude's behavior
## Usage
```
/config-diff
```
Compare against a specific commit:
```
/config-diff --commit=abc1234
/config-diff --commit=HEAD~3
```
Compare two specific commits:
```
/config-diff --from=abc1234 --to=def5678
```
Show only specific sections:
```
/config-diff --section="Critical Rules"
/config-diff --section="Quick Start"
```
## Comparison Modes
### Default: Working vs Last Commit
Shows uncommitted changes to CLAUDE.md:
```
/config-diff
```
### Working vs Specific Commit
Shows changes since a specific point:
```
/config-diff --commit=v1.0.0
```
### Commit to Commit
Shows changes between two historical versions:
```
/config-diff --from=v1.0.0 --to=v2.0.0
```
### Branch Comparison
Shows CLAUDE.md differences between branches:
```
/config-diff --branch=main
/config-diff --from=feature-branch --to=main
```
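Under the hood, these modes presumably map onto plain `git diff` invocations along these lines (illustrative, not the command's literal implementation):
```bash
git diff HEAD -- CLAUDE.md              # working copy vs last commit
git diff v1.0.0 -- CLAUDE.md            # working copy vs a specific commit/tag
git diff v1.0.0 v2.0.0 -- CLAUDE.md     # commit to commit
git diff main -- CLAUDE.md              # against another branch tip
```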
## Expected Output
```
CLAUDE.md Diff Report
=====================
File: /path/to/project/CLAUDE.md
Comparing: Working copy vs HEAD (last commit)
Commit: abc1234 "Update build commands" (2 days ago)
Summary:
- Lines added: 12
- Lines removed: 5
- Net change: +7 lines
- Sections affected: 3
Section Changes:
----------------
## Quick Start [MODIFIED]
- Added new environment variable requirement
- Updated test command with coverage flag
## Critical Rules [ADDED CONTENT]
+ New rule: "Never modify database migrations directly"
## Architecture [UNCHANGED]
## Common Operations [MODIFIED]
- Removed deprecated deployment command
- Added new Docker workflow
Detailed Diff:
--------------
--- CLAUDE.md (HEAD)
+++ CLAUDE.md (working)
@@ -15,7 +15,10 @@
## Quick Start
```bash
+export DATABASE_URL=postgres://... # Required
pip install -r requirements.txt
-pytest
+pytest --cov=src # Run with coverage
uvicorn main:app --reload
```
@@ -45,6 +48,7 @@
## Critical Rules
- Never modify `.env` files directly
+- Never modify database migrations directly
- Always run tests before committing
Behavioral Impact:
------------------
These changes will affect Claude's behavior:
1. [NEW REQUIREMENT] Claude will now export DATABASE_URL before running
2. [MODIFIED] Test command now includes coverage reporting
3. [NEW RULE] Claude will avoid direct migration modifications
Review: Do these changes reflect your intended configuration?
```
## Section-Focused View
When using `--section`, output focuses on specific areas:
```
/config-diff --section="Critical Rules"
CLAUDE.md Section Diff: Critical Rules
======================================
--- HEAD
+++ Working
## Critical Rules
- Never modify `.env` files directly
+- Never modify database migrations directly
+- Always use type hints in Python code
- Always run tests before committing
-- Keep functions under 50 lines
Changes:
+ 2 rules added
- 1 rule removed
Impact: Claude will follow 2 new constraints and no longer enforce
the 50-line function limit.
```
## Options
| Option | Description |
|--------|-------------|
| `--commit=REF` | Compare working copy against specific commit/tag |
| `--from=REF` | Starting point for comparison |
| `--to=REF` | Ending point for comparison (default: HEAD) |
| `--branch=NAME` | Compare against branch tip |
| `--section=NAME` | Show only changes to specific section |
| `--stat` | Show only statistics, no detailed diff |
| `--no-color` | Disable colored output |
| `--context=N` | Lines of context around changes (default: 3) |
## Understanding the Output
### Change Indicators
| Symbol | Meaning |
|--------|---------|
| `+` | Line added |
| `-` | Line removed |
| `@@` | Location marker showing line numbers |
| `[MODIFIED]` | Section has changes |
| `[ADDED]` | New section created |
| `[REMOVED]` | Section deleted |
| `[UNCHANGED]` | No changes to section |
### Impact Categories
- **NEW REQUIREMENT** - Claude will now need to do something new
- **REMOVED REQUIREMENT** - Claude no longer needs to do something
- **MODIFIED** - Existing behavior changed
- **NEW RULE** - New constraint added
- **RELAXED RULE** - Constraint removed or softened
## When to Use
Run `/config-diff` when:
- Before committing CLAUDE.md changes
- Reviewing what changed after pulling updates
- Debugging unexpected Claude behavior
- Auditing configuration changes over time
- Comparing configurations across branches
## Integration with Other Commands
| Workflow | Commands |
|----------|----------|
| Review before commit | `/config-diff` then `git commit` |
| After optimization | `/config-optimize` then `/config-diff` |
| Audit history | `/config-diff --from=v1.0.0 --to=HEAD` |
| Branch comparison | `/config-diff --branch=main` |
## Tips
1. **Review before committing** - Always check what changed
2. **Track behavioral changes** - Focus on rules and requirements sections
3. **Use section filtering** - Large files benefit from focused diffs
4. **Compare across releases** - Use tags to track major changes
5. **Check after merges** - Ensure CLAUDE.md didn't get conflict artifacts
## Troubleshooting
### "No changes detected"
- CLAUDE.md matches the comparison target
- Check if you're comparing the right commits
### "File not found in commit"
- CLAUDE.md didn't exist at that commit
- Use `git log -- CLAUDE.md` to find when it was created
### "Not a git repository"
- This command requires git history
- Initialize git or use file backup comparison instead

View File

@@ -4,343 +4,45 @@ description: Lint CLAUDE.md for common anti-patterns and best practices
# Lint CLAUDE.md
This command checks your CLAUDE.md file against best practices and detects common anti-patterns that can cause issues with Claude Code.
## Skills to Load
- skills/visual-header.md
- skills/lint-rules.md
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ ⚙️ CONFIG-MAINTAINER · CLAUDE.md Lint │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the linting.
## What This Command Does
1. **Parse Structure** - Validates markdown structure and hierarchy
2. **Check Security** - Detects hardcoded paths, secrets, and sensitive data
3. **Validate Content** - Identifies anti-patterns and problematic instructions
4. **Verify Format** - Ensures consistent formatting and style
5. **Generate Report** - Provides actionable findings with fix suggestions
## Usage
```
/config-lint
```
Lint with auto-fix:
```
/config-lint --fix
```
Check specific rules only:
```
/config-lint --rules=security,structure
```
## Linting Rules
### Security Rules (SEC)
| Rule | Description | Severity |
|------|-------------|----------|
| SEC001 | Hardcoded absolute paths | Warning |
| SEC002 | Potential secrets/API keys | Error |
| SEC003 | Hardcoded IP addresses | Warning |
| SEC004 | Exposed credentials patterns | Error |
| SEC005 | Hardcoded URLs with tokens | Error |
| SEC006 | Environment variable values (not names) | Warning |
### Structure Rules (STR)
| Rule | Description | Severity |
|------|-------------|----------|
| STR001 | Missing required sections | Error |
| STR002 | Invalid header hierarchy (h3 before h2) | Warning |
| STR003 | Orphaned content (text before first header) | Info |
| STR004 | Excessive nesting depth (>4 levels) | Warning |
| STR005 | Empty sections | Warning |
| STR006 | Missing section content | Warning |
### Content Rules (CNT)
| Rule | Description | Severity |
|------|-------------|----------|
| CNT001 | Contradictory instructions | Error |
| CNT002 | Vague or ambiguous rules | Warning |
| CNT003 | Overly long sections (>100 lines) | Info |
| CNT004 | Duplicate content | Warning |
| CNT005 | TODO/FIXME in production config | Warning |
| CNT006 | Outdated version references | Info |
| CNT007 | Broken internal links | Warning |
### Format Rules (FMT)
| Rule | Description | Severity |
|------|-------------|----------|
| FMT001 | Inconsistent header styles | Info |
| FMT002 | Inconsistent list markers | Info |
| FMT003 | Missing code block language | Info |
| FMT004 | Trailing whitespace | Info |
| FMT005 | Missing blank lines around headers | Info |
| FMT006 | Inconsistent indentation | Info |
### Best Practice Rules (BPR)
| Rule | Description | Severity |
|------|-------------|----------|
| BPR001 | No Quick Start section | Warning |
| BPR002 | No Critical Rules section | Warning |
| BPR003 | Instructions without examples | Info |
| BPR004 | Commands without explanation | Info |
| BPR005 | Rules without rationale | Info |
| BPR006 | Missing plugin integration docs | Info |
## Expected Output
```
CLAUDE.md Lint Report
=====================
File: /path/to/project/CLAUDE.md
Rules checked: 25
Time: 0.3s
Summary:
Errors: 2
Warnings: 5
Info: 3
Findings:
---------
[ERROR] SEC002: Potential secret detected (line 45)
│ api_key = "sk-1234567890abcdef"
│ ^^^^^^^^^^^^^^^^^^^^^^
└─ Hardcoded API key found. Use environment variable reference instead.
Suggested fix:
- api_key = "sk-1234567890abcdef"
+ api_key = $OPENAI_API_KEY # Set in environment
[ERROR] CNT001: Contradictory instructions (lines 23, 67)
│ Line 23: "Always run tests before committing"
│ Line 67: "Skip tests for documentation-only changes"
└─ These rules conflict. Clarify the exception explicitly.
Suggested fix:
+ "Always run tests before committing, except for documentation-only
+ changes (files in docs/ directory)"
[WARNING] SEC001: Hardcoded absolute path (line 12)
│ Database location: /home/user/data/myapp.db
│ ^^^^^^^^^^^^^^^^^^^^^^^^
└─ Absolute paths break portability. Use relative or variable.
Suggested fix:
- Database location: /home/user/data/myapp.db
+ Database location: ./data/myapp.db # Or $DATA_DIR/myapp.db
[WARNING] STR002: Invalid header hierarchy (line 34)
│ ### Subsection
│ (no preceding ## header)
└─ H3 header without parent H2. Add H2 or promote to H2.
[WARNING] CNT004: Duplicate content (lines 45-52, 89-96)
│ Same git workflow documented twice
└─ Remove duplicate or consolidate into single section.
[WARNING] STR005: Empty section (line 78)
│ ## Troubleshooting
│ (no content)
└─ Add content or remove empty section.
[WARNING] BPR002: No Critical Rules section
│ Missing "Critical Rules" or "Important Rules" section
└─ Add a section highlighting must-follow rules for Claude.
[INFO] FMT003: Missing code block language (line 56)
│ ```
│ npm install
│ ```
└─ Specify language for syntax highlighting: ```bash
[INFO] CNT003: Overly long section (lines 100-215)
│ "Architecture" section is 115 lines
└─ Consider breaking into subsections or condensing.
[INFO] FMT001: Inconsistent header styles
│ Line 10: "## Quick Start"
│ Line 25: "## Architecture:"
│ (colon suffix inconsistent)
└─ Standardize header format throughout document.
---
Auto-fixable: 4 issues (run with --fix)
Manual review required: 6 issues
Run `/config-lint --fix` to apply automatic fixes.
```
## Options
| Option | Description |
|--------|-------------|
| `--fix` | Automatically fix auto-fixable issues |
| `--rules=LIST` | Check only specified rule categories |
| `--ignore=LIST` | Skip specified rules (e.g., `--ignore=FMT001,FMT002`) |
| `--severity=LEVEL` | Show only issues at or above level (error/warning/info) |
| `--format=FORMAT` | Output format: `text` (default), `json`, `sarif` |
| `--config=FILE` | Use custom lint configuration |
| `--strict` | Treat warnings as errors |
## Rule Categories
Use `--rules` to focus on specific areas:
```
/config-lint --rules=security # Only security checks
/config-lint --rules=structure # Only structure checks
/config-lint --rules=security,content # Multiple categories
```
Available categories:
- `security` - SEC rules
- `structure` - STR rules
- `content` - CNT rules
- `format` - FMT rules
- `bestpractice` - BPR rules
## Custom Configuration
Create `.claude-lint.json` in project root:
```json
{
"rules": {
"SEC001": "warning",
"FMT001": "off",
"CNT003": {
"severity": "warning",
"maxLines": 150
}
},
"ignore": [
"FMT*"
],
"requiredSections": [
"Quick Start",
"Critical Rules",
"Project Overview"
]
}
```
## Anti-Pattern Examples
### Hardcoded Secrets (SEC002)
```markdown
# BAD
API_KEY=sk-1234567890abcdef
# GOOD
API_KEY=$OPENAI_API_KEY # Set via environment
```
### Hardcoded Paths (SEC001)
```markdown
# BAD
Config file: /home/john/projects/myapp/config.yml
# GOOD
Config file: ./config.yml
Config file: $PROJECT_ROOT/config.yml
```
### Contradictory Rules (CNT001)
```markdown
# BAD
- Always use TypeScript
- JavaScript files are acceptable for scripts
# GOOD
- Always use TypeScript for source code
- JavaScript (.js) is acceptable only for config files and scripts
```
### Vague Instructions (CNT002)
```markdown
# BAD
- Be careful with the database
# GOOD
- Never run DELETE without WHERE clause
- Always backup before migrations
```
### Invalid Hierarchy (STR002)
```markdown
# BAD
# Main Title
### Skipped Level
# GOOD
# Main Title
## Section
### Subsection
```
## When to Use
Run `/config-lint` when:
- Before committing CLAUDE.md changes
- During code review for CLAUDE.md modifications
- Setting up CI/CD checks for configuration files
- After major edits to catch introduced issues
- Periodically as maintenance check
## Integration with CI/CD
Add to your CI pipeline:
```yaml
# GitHub Actions example
- name: Lint CLAUDE.md
run: claude /config-lint --strict --format=sarif > lint-results.sarif
- name: Upload SARIF
uses: github/codeql-action/upload-sarif@v2
with:
sarif_file: lint-results.sarif
```
## Tips
1. **Start with errors** - Fix errors before warnings
2. **Use --fix carefully** - Review auto-fixes before committing
3. **Configure per-project** - Different projects have different needs
4. **Integrate in CI** - Catch issues before they reach main
5. **Review periodically** - Run lint check monthly as maintenance
## Related Commands
| Command | Relationship |
|---------|--------------|
| `/config-analyze` | Deeper content analysis (complements lint) |
| `/config-optimize` | Applies fixes and improvements |
| `/config-diff` | Shows what changed (run lint before commit) |

View File

@@ -0,0 +1,243 @@
---
name: config-optimize-settings
description: Optimize settings.local.json permissions based on audit recommendations
---
# /config-optimize-settings
Optimize Claude Code `settings.local.json` permission patterns and apply named profiles.
## Skills to Load
Before executing, load:
- `skills/visual-header.md`
- `skills/settings-optimization.md`
- `skills/pre-change-protocol.md`
## Visual Output
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - Settings Optimization |
+-----------------------------------------------------------------+
```
## Usage
```
/config-optimize-settings # Apply audit recommendations
/config-optimize-settings --dry-run # Preview only, no changes
/config-optimize-settings --profile=reviewed # Apply named profile
/config-optimize-settings --consolidate-only # Only merge/dedupe, no new rules
```
## Options
| Option | Description |
|--------|-------------|
| `--dry-run` | Preview changes without applying |
| `--profile=NAME` | Apply named profile (`conservative`, `reviewed`, `autonomous`) |
| `--consolidate-only` | Only deduplicate and merge patterns, don't add new rules |
| `--no-backup` | Skip backup (not recommended) |
## Workflow
### Step 1: Run Audit Analysis
Execute the same analysis as `/config-audit-settings`:
1. Locate settings file
2. Parse permission arrays
3. Detect issues (duplicates, subsets, merge candidates, etc.)
4. Verify active review layers
5. Calculate current score
### Step 2: Generate Optimization Plan
Based on audit results, create a change plan:
**For `--consolidate-only`:**
- Remove exact duplicates
- Remove subset patterns covered by broader patterns
- Merge similar patterns (4+ threshold)
- Remove stale patterns for non-existent paths
- Remove conflicting allow entries that are already denied
**For `--profile=NAME`:**
- Calculate diff between current permissions and target profile
- Show additions and removals
- Preserve any custom deny rules not in profile
**For default (full optimization):**
- Apply all consolidation changes
- Add recommended patterns based on verified review layers
- Suggest profile alignment if appropriate
### Step 3: Show Before/After Preview
**MANDATORY:** Always show preview before applying changes.
```
Current Settings:
allow: [12 patterns]
deny: [4 patterns]
Proposed Changes:
REMOVE from allow (redundant):
- Write(plugins/projman/*) [covered by Write(plugins/**)]
- Write(plugins/git-flow/*) [covered by Write(plugins/**)]
- Bash(git status) [covered by Bash(git *)]
ADD to allow (recommended):
+ Bash(npm *) [2 review layers active]
+ Bash(pytest *) [2 review layers active]
ADD to deny (security):
+ Bash(curl * | bash*) [missing safety rule]
After Optimization:
allow: [10 patterns]
deny: [5 patterns]
Score Impact: 67/100 → 85/100 (+18 points)
```
### Step 4: Request User Approval
Ask for confirmation before proceeding:
```
Apply these changes to .claude/settings.local.json?
[1] Yes, apply changes
[2] No, cancel
[3] Apply partial (select which changes)
```
### Step 5: Create Backup
**Before any write operation:**
```bash
# Backup location
.claude/backups/settings.local.json.{YYYYMMDD-HHMMSS}
```
Create the `.claude/backups/` directory if it doesn't exist.
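A minimal sketch of this backup step, matching the timestamp format shown above:
```bash
# Create the backup directory if needed, then copy with a timestamp suffix.
mkdir -p .claude/backups
ts=$(date +%Y%m%d-%H%M%S)
cp .claude/settings.local.json ".claude/backups/settings.local.json.$ts"
echo "Backup saved: .claude/backups/settings.local.json.$ts"
```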
### Step 6: Apply Changes
Write the optimized `settings.local.json` file.
### Step 7: Verify
Re-read the file and re-calculate the score to confirm improvement.
```
Optimization Complete!
Backup saved: .claude/backups/settings.local.json.20260202-143022
Settings Efficiency Score: 85/100 (+18 from 67)
Redundancy: 25/25 (+8)
Coverage: 22/25 (+5)
Safety Alignment: 23/25 (+3)
Profile Fit: 15/25 (+2)
Changes applied:
- Removed 3 redundant patterns
- Added 2 recommended patterns
- Added 1 safety deny rule
```
## Profile Application
When using `--profile=NAME`:
### `conservative`
```
Switching to conservative profile...
This profile:
- Allows: Read, Glob, Grep, LS, basic Bash commands
- Allows: Write/Edit only for docs/
- Denies: .env*, secrets/, rm -rf, sudo
All other Write/Edit operations will prompt for approval.
```
### `reviewed`
```
Switching to reviewed profile...
Prerequisites verified:
✓ code-sentinel hook active (PreToolUse)
✓ doc-guardian hook active (PostToolUse)
✓ 2+ review layers detected
This profile:
- Allows: All file operations (Edit, Write, MultiEdit)
- Allows: Scoped Bash commands (git, npm, python, etc.)
- Denies: .env*, secrets/, rm -rf, sudo, curl|bash
```
### `autonomous`
```
⚠️ WARNING: Autonomous profile requested
This profile allows unscoped Bash execution.
Only use in fully sandboxed environments (CI, containers).
Confirm this is a sandboxed environment?
[1] Yes, this is sandboxed - apply autonomous profile
[2] No, cancel
```
## Safety Rules
1. **ALWAYS backup before writing** (unless `--no-backup`)
2. **NEVER remove deny rules without explicit confirmation**
3. **NEVER add unscoped `Bash` to allow** — always use scoped patterns
4. **Preview is MANDATORY** before applying changes
5. **Verify review layers** before recommending broad permissions
## Output Format
### Dry Run Output
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - Settings Optimization |
+-----------------------------------------------------------------+
DRY RUN - No changes will be made
[... preview content ...]
To apply these changes, run:
/config-optimize-settings
```
### Applied Output
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - Settings Optimization |
+-----------------------------------------------------------------+
Optimization Applied Successfully
Backup: .claude/backups/settings.local.json.20260202-143022
[... summary of changes ...]
Score: 67/100 → 85/100
```
## DO NOT
- Apply changes without showing preview
- Remove deny rules silently
- Add unscoped `Bash` permission
- Skip backup without explicit `--no-backup` flag
- Apply `autonomous` profile without sandbox confirmation
- Recommend broad permissions without verifying review layers

View File

@@ -0,0 +1,256 @@
---
name: config-permissions-map
description: Generate visual map of review layers and permission coverage
---
# /config-permissions-map
Generate a Mermaid diagram showing the relationship between file operations, review layers, and permission status.
## Skills to Load
Before executing, load:
- `skills/visual-header.md`
- `skills/settings-optimization.md`
Also read: `/mnt/skills/user/mermaid-diagrams/SKILL.md` (for diagram requirements)
## Visual Output
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - Permissions Map |
+-----------------------------------------------------------------+
```
## Usage
```
/config-permissions-map # Generate and display diagram
/config-permissions-map --save # Save diagram to .mermaid file
```
## Workflow
### Step 1: Detect Active Hooks
Read all plugin hooks from the marketplace:
```
plugins/code-sentinel/hooks/hooks.json
plugins/doc-guardian/hooks/hooks.json
plugins/project-hygiene/hooks/hooks.json
plugins/data-platform/hooks/hooks.json
plugins/contract-validator/hooks/hooks.json
plugins/cmdb-assistant/hooks/hooks.json
```
For each hook, extract:
- Event type (PreToolUse, PostToolUse, SessionStart, etc.)
- Tool matchers (Write, Edit, MultiEdit, Bash patterns)
- Hook command/script
### Step 2: Map Hooks to File Scopes
Create a mapping of which review layers cover which operations:
| Operation | PreToolUse Hooks | PostToolUse Hooks | Other Gates |
|-----------|------------------|-------------------|-------------|
| Write | code-sentinel | doc-guardian, project-hygiene | PR review |
| Edit | code-sentinel | doc-guardian, project-hygiene | PR review |
| MultiEdit | code-sentinel | doc-guardian | PR review |
| Bash(git *) | git-flow | — | — |
### Step 3: Read Current Permissions
Load `.claude/settings.local.json` and parse:
- `allow` array → auto-allowed operations
- `deny` array → blocked operations
- `ask` array → always-prompted operations
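As an illustrative sketch of this lookup (exact string matching only; real permission patterns also need wildcard matching, which is omitted here):
```bash
# Hypothetical sketch: classify one operation pattern against the parsed arrays.
settings=".claude/settings.local.json"
op='Write(docs/**)'   # example pattern, not taken from the actual file
status="Prompted"
if jq -e --arg p "$op" '(.permissions.deny // []) | index($p)' "$settings" >/dev/null; then
  status="Denied"
elif jq -e --arg p "$op" '(.permissions.allow // []) | index($p)' "$settings" >/dev/null; then
  status="Auto-Allowed"
fi
echo "$op -> $status"
```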
### Step 4: Generate Mermaid Flowchart
**Diagram requirements (from mermaid-diagrams skill):**
- Use `classDef` for styling
- Maximum 3 colors (blue, green, amber/purple)
- Semantic arrow labels
- Left-to-right flow
**Structure:**
```mermaid
flowchart LR
subgraph ops[File Operations]
direction TB
W[Write]
E[Edit]
ME[MultiEdit]
BG[Bash git]
BN[Bash npm]
BO[Bash other]
end
subgraph pre[PreToolUse Hooks]
direction TB
CS[code-sentinel<br/>Security Scan]
GF[git-flow<br/>Branch Check]
end
subgraph post[PostToolUse Hooks]
direction TB
DG[doc-guardian<br/>Drift Detection]
PH[project-hygiene<br/>Cleanup]
DP[data-platform<br/>Schema Diff]
end
subgraph perm[Permission Status]
direction TB
AA[Auto-Allowed]
PR[Prompted]
DN[Denied]
end
W -->|intercepted| CS
W -->|tracked| DG
E -->|intercepted| CS
E -->|tracked| DG
BG -->|checked| GF
CS -->|passed| AA
DG -->|logged| AA
GF -->|valid| AA
BO -->|no hook| PR
classDef preHook fill:#e3f2fd,stroke:#1976d2
classDef postHook fill:#e8f5e9,stroke:#388e3c
classDef sprint fill:#fff3e0,stroke:#f57c00
classDef prReview fill:#f3e5f5,stroke:#7b1fa2
classDef allowed fill:#c8e6c9,stroke:#2e7d32
classDef prompted fill:#fff9c4,stroke:#f9a825
classDef denied fill:#ffcdd2,stroke:#c62828
class CS,GF preHook
class DG,PH,DP postHook
class AA allowed
class PR prompted
class DN denied
```
### Step 5: Generate Coverage Summary Table
```
Review Layer Coverage Summary
=============================
| Directory Scope | Layers | Status | Recommendation |
|--------------------------|--------|-----------------|----------------|
| plugins/*/commands/*.md | 3 | ✓ Auto-allowed | — |
| plugins/*/skills/*.md | 2 | ✓ Auto-allowed | — |
| mcp-servers/**/*.py | 3 | ✓ Auto-allowed | — |
| docs/** | 2 | ✓ Auto-allowed | — |
| scripts/*.sh | 2 | ⚠ Prompted | Consider auto-allow |
| .env* | 0 | ✗ Denied | Correct - secrets |
| Root directory | 1 | ⚠ Prompted | Keep prompted |
Legend:
✓ = Covered by ≥2 review layers, auto-allowed
⚠ = Fewer than 2 layers or not allowed
✗ = Explicitly denied
```
### Step 6: Identify Gaps
Report any gaps in coverage:
```
Coverage Gaps Detected:
1. Bash(npm *) — not in allow list, but npm operations are common
→ 2 review layers active, could be auto-allowed
2. mcp__data-platform__* — MCP server configured but tools not allowed
→ Add to allow list to avoid prompts
3. scripts/*.sh — 2 review layers but still prompted
→ Consider adding Write(scripts/**) to allow
```
### Step 7: Output Diagram
Display the Mermaid diagram inline.
If `--save` flag is used:
- Save to `.claude/permissions-map.mermaid`
- Report the file path
## Output Format
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - Permissions Map |
+-----------------------------------------------------------------+
Review Layer Status
===================
PreToolUse Hooks (intercept before operation):
✓ code-sentinel — Write, Edit, MultiEdit
✓ git-flow — Bash(git checkout *), Bash(git commit *)
PostToolUse Hooks (track after operation):
✓ doc-guardian — Write, Edit, MultiEdit
✓ project-hygiene — Write, Edit
✗ data-platform — not detected
Other Review Gates:
✓ Sprint Approval (projman milestone workflow)
✓ PR Review (pr-review multi-agent)
Permissions Flow Diagram
========================
```mermaid
[diagram here]
```
Coverage Summary
================
[table here]
Gaps & Recommendations
======================
[gaps list here]
```
## File Output (--save flag)
When `--save` is specified:
```
Diagram saved to: .claude/permissions-map.mermaid
To view:
- Open in VS Code with Mermaid extension
- Paste into https://mermaid.live
- Include in documentation with ```mermaid code fence
```
## Color Scheme
| Element | Color | Hex |
|---------|-------|-----|
| PreToolUse hooks | Blue | #e3f2fd |
| PostToolUse hooks | Green | #e8f5e9 |
| Sprint/Planning gates | Amber | #fff3e0 |
| PR Review | Purple | #f3e5f5 |
| Auto-allowed | Light green | #c8e6c9 |
| Prompted | Light yellow | #fff9c4 |
| Denied | Light red | #ffcdd2 |
## DO NOT
- Generate diagrams without reading the mermaid-diagrams skill
- Use more than 3 primary colors in the diagram
- Skip the coverage summary table
- Fail to identify coverage gaps

View File

@@ -4,220 +4,46 @@ description: Initialize a new CLAUDE.md file for a project
# Initialize CLAUDE.md
This command creates a new CLAUDE.md file tailored to your project, gathering context and generating appropriate content.
## Skills to Load
- skills/visual-header.md
- skills/claude-md-structure.md
- skills/pre-change-protocol.md
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ ⚙️ CONFIG-MAINTAINER · CLAUDE.md Initialization │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the initialization.
## What This Command Does
1. **Gather Context** - Analyzes project structure and asks clarifying questions
2. **Detect Stack** - Identifies technologies, frameworks, and tools
3. **Generate Content** - Creates tailored CLAUDE.md sections
4. **Review & Refine** - Allows customization before saving
5. **Save File** - Creates the CLAUDE.md in project root
## Usage
```
/config-init
```
Or with options:
```
/config-init --template=api      # Use API project template
/config-init --minimal           # Create minimal version
/config-init --comprehensive     # Create detailed version
```
## Initialization Workflow
```
CLAUDE.md Initialization
========================
Step 1: Project Analysis
------------------------
Scanning project structure...
Detected:
- Language: Python 3.11
- Framework: FastAPI
- Package Manager: pip (requirements.txt found)
- Testing: pytest
- Docker: Yes (Dockerfile found)
- Git: Yes (.git directory)
Step 2: Clarifying Questions
----------------------------
1. Project Description:
What does this project do? (1-2 sentences)
> [User provides description]
2. Build/Run Commands:
Detected commands - are these correct?
- Install: pip install -r requirements.txt
- Test: pytest
- Run: uvicorn main:app --reload
[Y/n/edit]
3. Critical Rules:
Any rules Claude MUST follow?
Examples: "Never modify migrations", "Always use type hints"
> [User provides rules]
4. Sensitive Areas:
Any files/directories Claude should be careful with?
> [User provides or skips]
Step 3: Generate CLAUDE.md
--------------------------
Generating content based on:
- Project type: FastAPI web API
- Detected technologies
- Your provided context
Preview:
---
# CLAUDE.md
## Project Overview
[Generated description]
## Quick Start
```bash
pip install -r requirements.txt # Install dependencies
pytest # Run tests
uvicorn main:app --reload # Start dev server
```
## Architecture
[Generated based on structure]
## Critical Rules
[Your provided rules]
## File Structure
[Generated from analysis]
---
Save this CLAUDE.md? [Y/n/edit]
Step 4: Complete
----------------
CLAUDE.md created successfully!
Location: /path/to/project/CLAUDE.md
Lines: 87
Score: 85/100 (following best practices)
Recommendations:
- Run /config-analyze periodically to maintain quality
- Update when adding major features
- Add troubleshooting section as issues are discovered
```
## Templates
### Minimal Template
For small projects or when starting fresh:
- Project Overview (required)
- Quick Start (required)
- Critical Rules (required)
### Standard Template (default)
For typical projects:
- Project Overview
- Quick Start
- Architecture
- Critical Rules
- Common Operations
- File Structure
### Comprehensive Template
For large or complex projects:
- All standard sections plus:
- Detailed Architecture
- Troubleshooting
- Integration Points
- Development Workflow
- Deployment Notes
## Auto-Detection
The command automatically detects:
| What | How |
|------|-----|
| Language | File extensions, config files |
| Framework | package.json, requirements.txt, etc. |
| Build system | Makefile, package.json scripts, etc. |
| Testing | pytest.ini, jest.config, etc. |
| Docker | Dockerfile, docker-compose.yml |
| Database | Connection strings, ORM configs |
## Customization
After generation, you can:
- Edit any section before saving
- Add additional sections
- Remove unnecessary sections
- Adjust detail level
- Add project-specific content
## When to Use
Run `/config-init` when:
- Starting a new project
- Project lacks CLAUDE.md
- Existing CLAUDE.md is outdated/poor quality
- Taking over an unfamiliar project
## Tips
1. **Provide accurate description** - This shapes the whole file
2. **Include critical rules** - What must Claude never do?
3. **Review generated content** - Auto-detection isn't perfect
4. **Start minimal, grow as needed** - Add sections when required
5. **Keep it current** - Update when project changes significantly
## Examples
### For a CLI Tool
```
/config-init
> Description: CLI tool for managing cloud infrastructure
> Critical rules: Never delete resources without confirmation, always show dry-run first
```
### For a Web App
```
/config-init
> Description: E-commerce platform with React frontend and Node.js backend
> Critical rules: Never expose API keys, always validate user input, follow the existing component patterns
```
### For a Library
```
/config-init --template=minimal
> Description: Python library for parsing log files
> Critical rules: Maintain backward compatibility, all public functions need docstrings
```

View File

@@ -4,187 +4,47 @@ description: Optimize CLAUDE.md structure and content
# Optimize CLAUDE.md
This command automatically optimizes your project's CLAUDE.md file based on best practices and identified issues.
## Skills to Load
- skills/visual-header.md
- skills/optimization-patterns.md
- skills/pre-change-protocol.md
- skills/claude-md-structure.md
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ ⚙️ CONFIG-MAINTAINER · CLAUDE.md Optimization │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the optimization.
## What This Command Does
1. **Analyze Current File** - Identifies all optimization opportunities
2. **Plan Changes** - Determines what to restructure, condense, or add
3. **Show Preview** - Displays before/after comparison
4. **Apply Changes** - Updates the file with your approval
5. **Verify Results** - Confirms improvements achieved
## Usage
```
/config-optimize
```
Or specify specific optimizations:
```
/config-optimize --condense      # Focus on reducing verbosity
/config-optimize --restructure   # Focus on reorganization
/config-optimize --add-missing   # Focus on adding missing sections
```
## Optimization Actions
### Restructure
- Reorder sections by importance
- Group related content together
- Improve header hierarchy
- Add navigation aids
### Condense
- Remove redundant explanations
- Convert verbose text to bullet points
- Eliminate duplicate content
- Shorten overly detailed sections
### Enhance
- Add missing essential sections
- Improve unclear instructions
- Add helpful examples
- Highlight critical rules
### Format
- Standardize header styles
- Fix code block formatting
- Align list formatting
- Improve table layouts
## Expected Output
```
CLAUDE.md Optimization
======================
Current Analysis:
- Score: 72/100
- Lines: 245
- Issues: 4
Planned Optimizations:
1. ADD: Quick Start section (new, ~15 lines)
+ Build command
+ Test command
+ Run command
2. CONDENSE: Testing section (34 → 8 lines)
Before: Verbose explanation with redundant setup info
After: Concise command reference with comments
3. REMOVE: Duplicate git workflow (lines 189-200)
Keeping: Original at lines 102-115
4. FORMAT: Standardize headers
Changing 12 headers from "## Title:" to "## Title"
Preview Changes? [Y/n] y
--- CLAUDE.md (before)
+++ CLAUDE.md (after)
@@ -1,5 +1,20 @@
# CLAUDE.md
+## Quick Start
+
+```bash
+# Install dependencies
+pip install -r requirements.txt
+
+# Run tests
+pytest
+
+# Start development server
+python manage.py runserver
+```
+
## Project Overview
...
[Full diff shown]
Apply these changes? [Y/n] y
Optimization Complete!
- Previous score: 72/100
- New score: 89/100
- Lines reduced: 245 → 198 (-19%)
- Issues resolved: 4/4
Backup saved to: .claude/backups/CLAUDE.md.2025-01-18
```
## Safety Features
### Backup Creation
- Automatic backup before changes
- Stored in `.claude/backups/`
- Easy restoration if needed
### Preview Mode
- All changes shown before applying
- Diff format for easy review
- Option to approve/reject
### Selective Application
- Can apply individual changes
- Skip specific optimizations
- Iterative refinement
## Options
| Option | Description |
|--------|-------------|
| `--dry-run` | Show changes without applying |
| `--no-backup` | Skip backup creation |
| `--aggressive` | Maximum condensation |
| `--preserve-comments` | Keep all existing comments |
| `--section=NAME` | Optimize specific section only |
## When to Use
Run `/config-optimize` when:
- Analysis shows score below 70
- File has grown too long
- Structure needs reorganization
- Missing critical sections
- After major refactoring
## Best Practices
1. **Run analysis first** - Understand current state
2. **Review preview carefully** - Ensure nothing important lost
3. **Test after changes** - Verify Claude follows instructions
4. **Keep backups** - Restore if issues arise
5. **Iterate** - Multiple small optimizations beat one large one
## Rollback
If optimization causes issues:
```bash
# Restore from backup
cp .claude/backups/CLAUDE.md.TIMESTAMP ./CLAUDE.md
```
Or ask:
```
Restore CLAUDE.md from the most recent backup
```

View File

@@ -2,8 +2,13 @@
"hooks": { "hooks": {
"SessionStart": [ "SessionStart": [
{ {
"type": "command", "matcher": "",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/enforce-rules.sh" "hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/enforce-rules.sh"
}
]
} }
] ]
} }

View File

@@ -0,0 +1,112 @@
# CLAUDE.md Analysis Workflow
This skill defines the workflow for analyzing CLAUDE.md files.
## Analysis Steps
1. **Locate File** - Find CLAUDE.md in project root
2. **Parse Structure** - Extract headers and sections
3. **Evaluate Content** - Score against criteria
4. **Detect Plugins** - Identify active marketplace plugins
5. **Check Integration** - Verify plugin references
6. **Generate Report** - Provide scored assessment
## Content Analysis
### What to Check
| Area | Check For |
|------|-----------|
| Structure | Header hierarchy, section ordering, grouping |
| Clarity | Clear instructions, examples, unambiguous language |
| Completeness | Required sections present, workflows documented |
| Conciseness | No redundancy, efficient density, appropriate length |
### Required Sections Check
1. Project Overview - present?
2. Quick Start - present with commands?
3. Critical Rules - present?
4. **Pre-Change Protocol** - present? (HIGH PRIORITY if missing)
## Plugin Integration Analysis
### Detection Method
1. Read `.claude/settings.local.json` for enabled MCP servers
2. Map MCP servers to plugins:
- `gitea` -> projman
- `netbox` -> cmdb-assistant
3. Check for hook-based plugins (project-hygiene)
4. Scan CLAUDE.md for plugin references
### Coverage Scoring
For each detected plugin, verify CLAUDE.md contains:
- Plugin section header or mention
- Available commands documentation
- MCP tools reference (if applicable)
- Usage guidelines
Coverage = (plugins referenced / plugins detected) * 100%
## Report Format
```
CLAUDE.md Analysis Report
=========================
File: /path/to/project/CLAUDE.md
Lines: N
Last Modified: YYYY-MM-DD
Overall Score: NN/100
Category Scores:
- Structure: NN/25 (Rating)
- Clarity: NN/25 (Rating)
- Completeness: NN/25 (Rating)
- Conciseness: NN/25 (Rating)
Strengths:
+ [Positive finding]
Issues Found:
N. [SEVERITY] Issue description (location)
Context explaining the problem.
Impact: What happens if not fixed.
Recommendations:
N. Action to take (priority: high/medium/low)
---
Plugin Integration Analysis
===========================
Detected Active Plugins:
[check] plugin-name (via detection method)
Plugin Coverage: NN% (N/N plugins referenced)
Missing Integration Content:
N. plugin-name
What to add.
```
## Issue Severity
| Level | When to Use |
|-------|-------------|
| HIGH | Missing mandatory sections, security issues |
| MEDIUM | Missing recommended content, duplicate content |
| LOW | Formatting issues, minor improvements |
## Follow-Up Actions
After analysis, offer:
1. Implement all content recommendations
2. Add missing plugin integrations
3. Do both (recommended)
4. Show preview of changes first

View File

@@ -0,0 +1,113 @@
# CLAUDE.md Structure Reference
This skill defines the standard structure, required sections, and templates for CLAUDE.md files.
## Required Sections
Every CLAUDE.md MUST have these sections:
| Section | Purpose | Priority |
|---------|---------|----------|
| Project Overview | What the project does | Required |
| Quick Start | Build/test/run commands | Required |
| Critical Rules | Must-follow constraints | Required |
| Pre-Change Protocol | Dependency check before edits | **MANDATORY** |
## Recommended Sections
| Section | When to Include |
|---------|-----------------|
| Architecture | Complex projects with multiple components |
| Common Operations | Projects with repetitive tasks |
| File Structure | Large codebases |
| Troubleshooting | Projects with known gotchas |
| Integration Points | Projects with external dependencies |
## Header Hierarchy
```
# CLAUDE.md (H1 - only one)
## Section (H2 - main sections)
### Subsection (H3 - within sections)
#### Detail (H4 - rarely needed)
```
**Rules:**
- Never skip levels (no H3 before H2)
- Maximum depth: 4 levels
- No orphaned content before first header
## Templates
### Minimal Template
For small projects:
```markdown
# CLAUDE.md
## Project Overview
[Description]
## Quick Start
[Commands]
## Critical Rules
[Constraints]
## Pre-Change Protocol
[Mandatory section - see pre-change-protocol.md]
```
### Standard Template (Default)
```markdown
# CLAUDE.md
## Project Overview
## Quick Start
## Architecture
## Critical Rules
## Pre-Change Protocol
## Common Operations
## File Structure
```
### Comprehensive Template
For large projects - adds:
- Detailed Architecture
- Troubleshooting
- Integration Points
- Development Workflow
- Deployment Notes
## Auto-Detection Signals
| Technology | Detection Method |
|------------|------------------|
| Language | File extensions, config files |
| Framework | package.json, requirements.txt, Cargo.toml |
| Build system | Makefile, scripts in package.json |
| Testing | pytest.ini, jest.config.js, go.mod |
| Docker | Dockerfile, docker-compose.yml |
| Database | ORM configs, connection strings |
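A few of these signals can be probed with simple file checks; a sketch, not the plugin's actual detection logic:
```bash
# Hypothetical sketch of stack detection from the signals above.
[ -f requirements.txt ] && echo "Language hint: Python (requirements.txt)"
[ -f package.json ]     && echo "Language hint: JavaScript/TypeScript (package.json)"
[ -f Cargo.toml ]       && echo "Language hint: Rust (Cargo.toml)"
[ -f Makefile ]         && echo "Build system: make"
[ -f Dockerfile ]       && echo "Docker: yes"
if [ -f pytest.ini ] || grep -qs '^\[tool\.pytest' pyproject.toml; then
  echo "Testing: pytest"
fi
```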
## Section Content Guidelines
### Project Overview
- 1-3 sentences describing purpose
- Target audience if relevant
- Key technologies used
### Quick Start
- Install command
- Test command
- Run command
- Each with brief inline comment
### Critical Rules
- Numbered or bulleted list
- Specific, actionable constraints
- Include rationale for non-obvious rules
### Architecture
- High-level component diagram (ASCII or description)
- Data flow explanation
- Key file/directory purposes

View File

@@ -0,0 +1,97 @@
# CLAUDE.md Diff Analysis
This skill defines how to analyze and present CLAUDE.md differences.
## Comparison Modes
| Mode | Command | Description |
|------|---------|-------------|
| Working vs HEAD | `/config-diff` | Uncommitted changes |
| Working vs Commit | `--commit=REF` | Changes since specific point |
| Commit to Commit | `--from=X --to=Y` | Historical comparison |
| Branch Comparison | `--branch=NAME` | Cross-branch differences |
## Change Indicators
| Symbol | Meaning |
|--------|---------|
| `+` | Line added |
| `-` | Line removed |
| `@@` | Location marker (line numbers) |
| `[MODIFIED]` | Section has changes |
| `[ADDED]` | New section created |
| `[REMOVED]` | Section deleted |
| `[UNCHANGED]` | No changes to section |
## Impact Categories
| Category | Meaning |
|----------|---------|
| NEW REQUIREMENT | Claude will need to do something new |
| REMOVED REQUIREMENT | Claude no longer needs to do something |
| MODIFIED | Existing behavior changed |
| NEW RULE | New constraint added |
| RELAXED RULE | Constraint removed or softened |
## Report Format
```
CLAUDE.md Diff Report
=====================
File: /path/to/project/CLAUDE.md
Comparing: [mode description]
Commit: [ref] "[message]" (time ago)
Summary:
- Lines added: N
- Lines removed: N
- Net change: +/-N lines
- Sections affected: N
Section Changes:
----------------
## Section Name [STATUS]
+/- Change description
Detailed Diff:
--------------
--- CLAUDE.md (before)
+++ CLAUDE.md (after)
@@ -N,M +N,M @@
context
-removed
+added
context
Behavioral Impact:
------------------
These changes will affect Claude's behavior:
N. [CATEGORY] Description of impact
```
## Section-Focused View
When using `--section=NAME`:
- Filter diff to only that section
- Show section-specific statistics
- Highlight behavioral impact for that area
## Troubleshooting
### No changes detected
- File matches comparison target
- Verify comparing correct commits
### File not found in commit
- CLAUDE.md didn't exist at that point
- Use `git log -- CLAUDE.md` to find creation
### Not a git repository
- Command requires git history
- Initialize git or use file backup comparison

View File

@@ -0,0 +1,136 @@
# CLAUDE.md Lint Rules
This skill defines all linting rules for validating CLAUDE.md files.
## Rule Categories
### Security Rules (SEC)
| Rule | Description | Severity | Auto-fix |
|------|-------------|----------|----------|
| SEC001 | Hardcoded absolute paths | Warning | Yes |
| SEC002 | Potential secrets/API keys | Error | No |
| SEC003 | Hardcoded IP addresses | Warning | No |
| SEC004 | Exposed credentials patterns | Error | No |
| SEC005 | Hardcoded URLs with tokens | Error | No |
| SEC006 | Environment variable values (not names) | Warning | No |
### Structure Rules (STR)
| Rule | Description | Severity | Auto-fix |
|------|-------------|----------|----------|
| STR001 | Missing required sections | Error | Yes |
| STR002 | Invalid header hierarchy (h3 before h2) | Warning | Yes |
| STR003 | Orphaned content before first header | Info | No |
| STR004 | Excessive nesting depth (>4 levels) | Warning | No |
| STR005 | Empty sections | Warning | Yes |
| STR006 | Missing section content | Warning | No |
### Content Rules (CNT)
| Rule | Description | Severity | Auto-fix |
|------|-------------|----------|----------|
| CNT001 | Contradictory instructions | Error | No |
| CNT002 | Vague or ambiguous rules | Warning | No |
| CNT003 | Overly long sections (>100 lines) | Info | No |
| CNT004 | Duplicate content | Warning | No |
| CNT005 | TODO/FIXME in production config | Warning | No |
| CNT006 | Outdated version references | Info | No |
| CNT007 | Broken internal links | Warning | No |
### Format Rules (FMT)
| Rule | Description | Severity | Auto-fix |
|------|-------------|----------|----------|
| FMT001 | Inconsistent header styles | Info | Yes |
| FMT002 | Inconsistent list markers | Info | Yes |
| FMT003 | Missing code block language | Info | Yes |
| FMT004 | Trailing whitespace | Info | Yes |
| FMT005 | Missing blank lines around headers | Info | Yes |
| FMT006 | Inconsistent indentation | Info | Yes |
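Purely as an illustration, a couple of the auto-fixable format rules can be spot-checked with standard tools (the linter's real detection logic may differ):
```bash
# FMT004: lines with trailing whitespace
grep -nE '[ \t]+$' CLAUDE.md
# FMT003 (heuristic): odd-numbered bare ``` lines are opening fences with no language tag
awk '/^```$/ { if (++n % 2 == 1) print FNR ": missing code block language" }' CLAUDE.md
```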
### Best Practice Rules (BPR)
| Rule | Description | Severity | Auto-fix |
|------|-------------|----------|----------|
| BPR001 | No Quick Start section | Warning | No |
| BPR002 | No Critical Rules section | Warning | No |
| BPR003 | Instructions without examples | Info | No |
| BPR004 | Commands without explanation | Info | No |
| BPR005 | Rules without rationale | Info | No |
| BPR006 | Missing plugin integration docs | Info | No |
## Anti-Pattern Examples
### SEC002: Hardcoded Secrets
```markdown
# BAD
API_KEY=sk-1234567890abcdef
# GOOD
API_KEY=$OPENAI_API_KEY # Set via environment
```
### SEC001: Hardcoded Paths
```markdown
# BAD
Config file: /home/john/projects/myapp/config.yml
# GOOD
Config file: ./config.yml
Config file: $PROJECT_ROOT/config.yml
```
### CNT001: Contradictory Rules
```markdown
# BAD
- Always use TypeScript
- JavaScript files are acceptable for scripts
# GOOD
- Always use TypeScript for source code
- JavaScript (.js) is acceptable only for config files and scripts
```
### CNT002: Vague Instructions
```markdown
# BAD
- Be careful with the database
# GOOD
- Never run DELETE without WHERE clause
- Always backup before migrations
```
### STR002: Invalid Hierarchy
```markdown
# BAD
# Main Title
### Skipped Level
# GOOD
# Main Title
## Section
### Subsection
```
## Output Format
```
[SEVERITY] RULE_ID: Description (line N)
| Context line showing issue
| ^^^^^^ indicator
+-- Explanation of problem
Suggested fix:
- old line
+ new line
```
## Severity Levels
| Level | Meaning | Action |
|-------|---------|--------|
| Error | Must fix | Blocks commit |
| Warning | Should fix | Review recommended |
| Info | Consider fixing | Optional improvement |
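A rough sketch of how these severity levels could gate a commit, assuming a simple findings list; the data shapes and names here are illustrative, not the plugin's real API:
```python
from dataclasses import dataclass

@dataclass
class Finding:
    rule_id: str
    severity: str   # "Error", "Warning", or "Info"
    line: int
    message: str

def should_block_commit(findings):
    """Per the table above, only Error-level findings block the commit."""
    return any(f.severity == "Error" for f in findings)

findings = [
    Finding("SEC002", "Error", 12, "Potential secrets/API keys"),
    Finding("FMT004", "Info", 30, "Trailing whitespace"),
]
print("Commit blocked" if should_block_commit(findings) else "Commit allowed")
```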

View File

@@ -0,0 +1,136 @@
# CLAUDE.md Optimization Patterns
This skill defines patterns for optimizing CLAUDE.md files.
## Optimization Categories
### Restructure
- Reorder sections by importance (Quick Start near top)
- Group related content together
- Improve header hierarchy
- Add navigation aids (TOC for long files)
### Condense
- Remove redundant explanations
- Convert verbose text to bullet points
- Eliminate duplicate content
- Shorten overly detailed sections
### Enhance
- Add missing essential sections
- **Add Pre-Change Protocol if missing (HIGH PRIORITY)**
- Improve unclear instructions
- Add helpful examples
- Highlight critical rules
### Format
- Standardize header styles (no trailing colons)
- Fix code block formatting (add language tags)
- Align list formatting (consistent markers)
- Improve table layouts
## Scoring Criteria
### Structure (25 points)
- Logical section ordering
- Clear header hierarchy
- Easy navigation
- Appropriate grouping
### Clarity (25 points)
- Clear instructions
- Good examples
- Unambiguous language
- Appropriate detail level
### Completeness (25 points)
- Project overview present
- Quick start commands documented
- Critical rules highlighted
- Key workflows covered
- Pre-Change Protocol present (MANDATORY)
### Conciseness (25 points)
- No unnecessary repetition
- Efficient information density
- Appropriate length for project size
- No generic filler content
## Score Interpretation
| Score | Rating | Action |
|-------|--------|--------|
| 90-100 | Excellent | Maintenance only |
| 70-89 | Good | Minor improvements |
| 50-69 | Needs Work | Optimization recommended |
| Below 50 | Poor | Major restructuring needed |
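As a concrete illustration, a minimal sketch of combining the four 25-point categories into the rating above; the input dict shape is an assumption for illustration only:
```python
def interpret(scores):
    """Combine the four 25-point categories and map the total to the table above."""
    total = sum(scores[c] for c in ("structure", "clarity", "completeness", "conciseness"))
    if total >= 90:
        rating = "Excellent: maintenance only"
    elif total >= 70:
        rating = "Good: minor improvements"
    elif total >= 50:
        rating = "Needs Work: optimization recommended"
    else:
        rating = "Poor: major restructuring needed"
    return total, rating

print(interpret({"structure": 22, "clarity": 20, "completeness": 18, "conciseness": 21}))
# (81, 'Good: minor improvements')
```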
## Common Optimizations
### Verbose to Concise
````markdown
# Before (34 lines)
## Running Tests
To run the tests, you first need to make sure you have all the
dependencies installed. The dependencies are listed in requirements.txt.
Once you have installed the dependencies, you can run the tests using
pytest. Pytest will automatically discover all test files...
# After (8 lines)
## Running Tests
```bash
pip install -r requirements.txt # Install dependencies
pytest # Run all tests
pytest -v # Verbose output
pytest tests/unit/ # Run specific directory
```
````
### Duplicate Removal
- Keep first occurrence
- Add cross-reference if needed: "See [Section Name] above"
### Header Standardization
```markdown
# Before
## Quick Start:
## Architecture
## Testing:
# After
## Quick Start
## Architecture
## Testing
```
### Code Block Enhancement
````markdown
# Before
```
npm install
npm test
```
# After
```bash
npm install # Install dependencies
npm test # Run test suite
```
````
## Safety Features
### Backup Creation
- Always backup before changes
- Store in `.claude/backups/CLAUDE.md.TIMESTAMP`
- Easy restoration if needed
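A minimal sketch of the backup step, assuming only what the bullets above specify; the timestamp format is an illustrative choice:
```python
import shutil
import time
from pathlib import Path

def backup_claude_md(project_root: str = ".") -> Path:
    """Copy CLAUDE.md to .claude/backups/CLAUDE.md.<TIMESTAMP> before editing."""
    root = Path(project_root)
    source = root / "CLAUDE.md"
    backup_dir = root / ".claude" / "backups"
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    target = backup_dir / f"CLAUDE.md.{stamp}"
    shutil.copy2(source, target)  # copy2 preserves timestamps for later comparison
    return target
```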
### Preview Mode
- Show all changes before applying
- Use diff format for easy review
- Allow approve/reject per change
### Selective Application
- Can apply individual changes
- Skip specific optimizations
- Iterative refinement supported

View File

@@ -0,0 +1,83 @@
# Pre-Change Protocol
This skill defines the mandatory Pre-Change Protocol section that MUST be included in every CLAUDE.md file.
## Why This Is Mandatory
The Pre-Change Protocol prevents the #1 cause of bugs from AI-assisted coding: **incomplete changes where Claude modifies some files but misses others that reference the same code**.
Without this protocol:
- Claude may rename a function but miss callers
- Claude may modify a config but miss documentation
- Claude may update a schema but miss dependent code
## Detection
Search CLAUDE.md for these indicators:
- Header containing "Pre-Change" or "Before Any Code Change"
- References to `grep -rn` or impact search
- Checklist with "Files That Will Be Affected"
- User verification checkpoint
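A hedged sketch of this detection pass; the regex indicators paraphrase the bullets above and are not the plugin's exact matching logic:
```python
import re
from pathlib import Path

# Indicator patterns paraphrase the detection bullets above; they are
# illustrative, not the plugin's exact detection logic.
INDICATORS = {
    "header": re.compile(r"^#+ .*(Pre-Change|Before Any Code Change)", re.I | re.M),
    "impact_search": re.compile(r"grep -rn|impact search", re.I),
    "affected_files": re.compile(r"Files That Will Be Affected", re.I),
    "user_checkpoint": re.compile(r"User verifies", re.I),
}

def has_pre_change_protocol(path: str = "CLAUDE.md"):
    """Return (present, per_indicator) for the Pre-Change Protocol section."""
    text = Path(path).read_text(encoding="utf-8")
    found = {name: bool(pattern.search(text)) for name, pattern in INDICATORS.items()}
    return all(found.values()), found
```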
## Required Section Content
````markdown
## MANDATORY: Before Any Code Change
**Claude MUST show this checklist BEFORE editing any file:**
### 1. Impact Search Results
Run and show output of:
```bash
grep -rn "PATTERN" --include="*.sh" --include="*.md" --include="*.json" --include="*.py" | grep -v ".git"
```
### 2. Files That Will Be Affected
Numbered list of every file to be modified, with the specific change for each.
### 3. Files Searched But Not Changed (and why)
Proof that related files were checked and determined unchanged.
### 4. Documentation That References This
List of docs that mention this feature/script/function.
**User verifies this list before Claude proceeds. If Claude skips this, stop immediately.**
### After Changes
Run the same grep and show results proving no references remain unaddressed.
````
## Placement
Insert Pre-Change Protocol section:
- **After:** Critical Rules section
- **Before:** Common Operations section
## If Missing During Analysis
Flag as **HIGH PRIORITY** issue:
```
1. [HIGH] Missing Pre-Change Protocol section
CLAUDE.md lacks mandatory dependency-check protocol.
Impact: Claude may miss file references when making changes,
leading to broken dependencies and incomplete updates.
Recommendation: Add Pre-Change Protocol section immediately.
This is the #1 cause of cascading bugs from incomplete changes.
```
## If Missing During Optimization
**Automatically add the section** at the correct position. This is the highest priority enhancement.
## Variations
The exact wording can vary, but these elements are required:
1. **Search requirement** - Must run grep/search before changes
2. **Affected files list** - Must enumerate all files to modify
3. **Non-affected files proof** - Must show what was checked but unchanged
4. **Documentation check** - Must list referencing docs
5. **User checkpoint** - Must pause for user verification
6. **Post-change verification** - Must verify after changes

View File

@@ -0,0 +1,377 @@
# Settings Optimization Skill
This skill provides comprehensive knowledge for auditing and optimizing Claude Code `settings.local.json` permission configurations.
---
## Section 1: Settings File Locations & Format
Claude Code uses two configuration formats for permissions:
### Newer Format (Recommended)
**Primary target:** `.claude/settings.local.json` (project-local, gitignored)
**Secondary locations:**
- `.claude/settings.json` (shared, committed)
- `~/.claude.json` (legacy global config)
```json
{
"permissions": {
"allow": ["Edit", "Write(plugins/**)", "Bash(git *)"],
"deny": ["Read(.env*)", "Bash(rm *)"],
"ask": ["Bash(pip install *)"]
}
}
```
**Field meanings:**
- `allow`: Operations auto-approved without prompting
- `deny`: Operations blocked entirely
- `ask`: Operations that always prompt (overrides allow)
### Legacy Format
Found in `~/.claude.json` with per-project entries:
```json
{
"projects": {
"/path/to/project": {
"allowedTools": ["Read", "Write", "Bash(git *)"]
}
}
}
```
**Detection strategy:**
1. Check `.claude/settings.local.json` first (primary)
2. Check `.claude/settings.json` (shared)
3. Check `~/.claude.json` for project entry (legacy)
4. Report which format is in use
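A minimal sketch of that detection order, with simplified error handling and without the legacy per-project lookup:
```python
import json
from pathlib import Path

def detect_settings(project_root: str = ".", home: Path = Path.home()):
    """Return (format_name, path, parsed_json) for the first permissions file found.

    Illustrative sketch of the detection order described above; error handling
    and the legacy per-project entry lookup are intentionally simplified.
    """
    candidates = [
        ("local", Path(project_root) / ".claude" / "settings.local.json"),
        ("shared", Path(project_root) / ".claude" / "settings.json"),
        ("legacy", home / ".claude.json"),
    ]
    for name, path in candidates:
        if path.is_file():
            return name, path, json.loads(path.read_text())
    return None, None, {}
```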
---
## Section 2: Permission Rule Syntax Reference
| Pattern | Meaning |
|---------|---------|
| `Tool` or `Tool(*)` | Allow all uses of that tool |
| `Bash(npm run build)` | Exact command match |
| `Bash(npm run test *)` | Prefix match (space+asterisk = word boundary) |
| `Bash(npm*)` | Prefix match without word boundary |
| `Write(plugins/**)` | Glob — all files recursively under `plugins/` |
| `Write(plugins/projman/*)` | Glob — direct children only |
| `Read(.env*)` | Pattern matching `.env`, `.env.local`, etc. |
| `mcp__gitea__*` | All tools from the gitea MCP server |
| `mcp__netbox__list_*` | Specific MCP tool pattern |
| `WebFetch(domain:github.com)` | Domain-restricted web fetch |
### Important Nuances
**Word boundary matching:**
- `Bash(ls *)` (with space) matches `ls -la` but NOT `lsof`
- `Bash(ls*)` (no space) matches both `ls -la` AND `lsof`
**Precedence rules:**
- `deny` rules take precedence over `allow` rules
- `ask` rules override both (always prompts even if allowed)
- More specific patterns do NOT override broader patterns
**Command operators:**
- Piped commands (`cmd1 | cmd2`) may not match individual command rules (known Claude Code limitation)
- Shell operators (`&&`, `||`) — Claude Code is aware of these and won't let prefix rules bypass them
- Commands with redirects (`>`, `>>`, `<`) are evaluated as complete strings
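To illustrate the word-boundary nuance, a small sketch that approximates the matching behaviour described above; it is not Claude Code's actual matcher and ignores operators and pipes:
```python
def bash_prefix_matches(rule: str, command: str) -> bool:
    """Approximate the word-boundary nuance for Bash() rules (illustration only)."""
    assert rule.startswith("Bash(") and rule.endswith(")")
    pattern = rule[len("Bash("):-1]
    if pattern.endswith(" *"):          # space + asterisk = word boundary
        prefix = pattern[:-2]
        return command == prefix or command.startswith(prefix + " ")
    if pattern.endswith("*"):           # bare asterisk = plain prefix
        return command.startswith(pattern[:-1])
    return command == pattern           # exact match

print(bash_prefix_matches("Bash(ls *)", "ls -la"))   # True
print(bash_prefix_matches("Bash(ls *)", "lsof -i"))  # False
print(bash_prefix_matches("Bash(ls*)", "lsof -i"))   # True
```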
---
## Section 3: Pattern Consolidation Rules
The audit detects these optimization opportunities:
| Issue | Example | Recommendation |
|-------|---------|----------------|
| **Exact duplicates** | `Write(plugins/**)` listed twice | Remove duplicate |
| **Subset redundancy** | `Write(plugins/projman/*)` when `Write(plugins/**)` exists | Remove the narrower pattern — already covered |
| **Merge candidates** | `Write(plugins/projman/*)`, `Write(plugins/git-flow/*)`, `Write(plugins/pr-review/*)` ... (4+ similar patterns) | Merge to `Write(plugins/**)` |
| **Overly broad** | `Bash` (no specifier = allows ALL bash) | Flag as security concern, suggest scoped patterns |
| **Stale patterns** | `Write(plugins/old-plugin/**)` for a plugin that no longer exists | Remove stale entry |
| **Missing MCP permissions** | MCP servers in `.mcp.json` but no `mcp__servername__*` in allow | Suggest adding if server is trusted |
| **Conflicting rules** | Same pattern in both `allow` and `deny` | Flag conflict — deny wins, but allow is dead weight |
### Consolidation Algorithm
1. **Deduplicate:** Remove exact duplicates from each array
2. **Subset elimination:** For each pattern, check if a broader pattern exists
- `Write(plugins/projman/*)` is subset of `Write(plugins/**)`
- `Bash(git status)` is subset of `Bash(git *)`
3. **Merge detection:** If 4+ patterns share a common prefix, suggest merge
- Threshold: 4 patterns minimum before suggesting consolidation
4. **Stale detection:** Cross-reference file patterns against actual filesystem
5. **Conflict detection:** Check for patterns appearing in multiple arrays
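A simplified sketch of steps 1-3; real glob semantics are richer than the prefix comparison used here, and stale/conflict detection is omitted:
```python
from collections import Counter

def _split(rule):
    tool, _, spec = rule.partition("(")
    return tool, spec.rstrip(")")

def _covers(broad, narrow):
    """Rough subset test: 'Write(plugins/**)' covers 'Write(plugins/projman/*)',
    'Bash(git *)' covers 'Bash(git status)'. Only the tool name plus the literal
    prefix before the first '*' is compared; real glob matching is richer."""
    b_tool, b_spec = _split(broad)
    n_tool, n_spec = _split(narrow)
    if b_tool != n_tool or broad == narrow or "*" not in b_spec:
        return False
    return n_spec.startswith(b_spec.split("*")[0])

def consolidate(allow):
    allow = list(dict.fromkeys(allow))                                  # 1. deduplicate
    kept = [r for r in allow if not any(_covers(o, r) for o in allow)]  # 2. subset elimination
    groups = Counter()                                                  # 3. merge detection
    for rule in kept:
        tool, spec = _split(rule)
        if "/" in spec:
            groups[(tool, spec.split("/")[0])] += 1
    merges = [f"{tool}({top}/**)" for (tool, top), n in groups.items() if n >= 4]
    return kept, merges
```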
---
## Section 4: Review-Layer-Aware Recommendations
This is the key section. Map upstream review processes to directory scopes:
| Directory Scope | Active Review Layers | Auto-Allow Recommendation |
|----------------|---------------------|---------------------------|
| `plugins/*/commands/*.md` | Sprint approval, PR review, doc-guardian PostToolUse | `Write(plugins/*/commands/**)` — 3 layers cover this |
| `plugins/*/skills/*.md` | Sprint approval, PR review | `Write(plugins/*/skills/**)` — 2 layers |
| `plugins/*/agents/*.md` | Sprint approval, PR review, contract-validator | `Write(plugins/*/agents/**)` — 3 layers |
| `mcp-servers/*/mcp_server/*.py` | Code-sentinel PreToolUse, sprint approval, PR review | `Write(mcp-servers/**)` + `Edit(mcp-servers/**)` — sentinel catches secrets |
| `docs/*.md` | Doc-guardian PostToolUse, PR review | `Write(docs/**)` + `Edit(docs/**)` |
| `.claude-plugin/*.json` | validate-marketplace.sh, PR review | `Write(.claude-plugin/**)` |
| `scripts/*.sh` | Code-sentinel, PR review | `Write(scripts/**)` — with caution flag |
| `CLAUDE.md`, `CHANGELOG.md`, `README.md` | Doc-guardian, PR review | `Write(CLAUDE.md)`, `Write(CHANGELOG.md)`, `Write(README.md)` |
### Critical Rule: Hook Verification
**Before recommending auto-allow for a scope, the agent MUST verify the hook is actually configured.**
Read the relevant `plugins/*/hooks/hooks.json` file:
- If code-sentinel's hook is missing or disabled, do NOT recommend auto-allowing `mcp-servers/**` writes
- If doc-guardian's hook is missing, do NOT recommend auto-allowing `docs/**` without caution
- Count the number of verified review layers before making recommendations
**Minimum threshold:** Recommend auto-allow only for scopes covered by ≥2 verified review layers.
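A hedged sketch of this verification, assuming the hooks.json field names shown in Section 7; confirm against the actual files before relying on the count:
```python
import json
from pathlib import Path

# Hook file paths and field names follow the tables in this skill; treat them
# as assumptions and verify against the repository before trusting the count.
HOOK_FILES = {
    "code-sentinel": "plugins/code-sentinel/hooks/hooks.json",
    "doc-guardian": "plugins/doc-guardian/hooks/hooks.json",
}

def verified_hook_layers(repo_root: str = "."):
    """Return the hook-based review layers that are actually configured."""
    layers = []
    for plugin, rel_path in HOOK_FILES.items():
        path = Path(repo_root) / rel_path
        if not path.is_file():
            continue
        config = json.loads(path.read_text())
        for hook in config.get("hooks", []):
            if hook.get("type") == "command" and {"Write", "Edit"} & set(hook.get("tools", [])):
                layers.append(f"{plugin}:{hook.get('event')}")
                break
    return layers

def may_auto_allow(verified_layers) -> bool:
    """Recommend auto-allow only for scopes covered by >= 2 verified layers."""
    return len(verified_layers) >= 2
```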
---
## Section 5: Permission Profiles
Three named profiles for different project contexts:
### `conservative` (Default for New Users)
Minimal permissions, prompts for most write operations:
```json
{
"permissions": {
"allow": [
"Read",
"Glob",
"Grep",
"LS",
"Write(docs/**)",
"Edit(docs/**)",
"Bash(git status *)",
"Bash(git diff *)",
"Bash(git log *)",
"Bash(cat *)",
"Bash(ls *)",
"Bash(head *)",
"Bash(tail *)",
"Bash(wc *)",
"Bash(grep *)"
],
"deny": [
"Read(.env*)",
"Read(./secrets/**)",
"Bash(rm -rf *)",
"Bash(sudo *)"
]
}
}
```
### `reviewed` (Projects with ≥2 Upstream Review Layers)
This is the target profile for projects using the marketplace's multi-layer review architecture:
```json
{
"permissions": {
"allow": [
"Read",
"Glob",
"Grep",
"LS",
"Edit",
"Write",
"MultiEdit",
"Bash(git *)",
"Bash(python *)",
"Bash(pip install *)",
"Bash(cd *)",
"Bash(cat *)",
"Bash(ls *)",
"Bash(head *)",
"Bash(tail *)",
"Bash(wc *)",
"Bash(grep *)",
"Bash(find *)",
"Bash(mkdir *)",
"Bash(cp *)",
"Bash(mv *)",
"Bash(touch *)",
"Bash(chmod *)",
"Bash(source *)",
"Bash(echo *)",
"Bash(sed *)",
"Bash(awk *)",
"Bash(sort *)",
"Bash(uniq *)",
"Bash(diff *)",
"Bash(jq *)",
"Bash(npm *)",
"Bash(npx *)",
"Bash(node *)",
"Bash(pytest *)",
"Bash(python -m *)",
"Bash(./scripts/*)",
"WebFetch",
"WebSearch"
],
"deny": [
"Read(.env*)",
"Read(./secrets/**)",
"Bash(rm -rf *)",
"Bash(sudo *)",
"Bash(curl * | bash*)",
"Bash(wget * | bash*)"
]
}
}
```
### `autonomous` (Trusted CI/Sandboxed Environments Only)
Maximum permissions for automated environments:
```json
{
"permissions": {
"allow": [
"Read",
"Glob",
"Grep",
"LS",
"Edit",
"Write",
"MultiEdit",
"Bash",
"WebFetch",
"WebSearch"
],
"deny": [
"Read(.env*)",
"Read(./secrets/**)",
"Bash(rm -rf /)",
"Bash(sudo *)"
]
}
}
```
**Warning:** The `autonomous` profile allows unscoped `Bash` — only use in fully sandboxed environments.
---
## Section 6: Scoring Criteria (Settings Efficiency Score — 100 points)
| Category | Points | What It Measures |
|----------|--------|------------------|
| **Redundancy** | 25 | No duplicates, no subset patterns, merged where possible |
| **Coverage** | 25 | Common tools allowed, MCP servers covered, no unnecessary gaps |
| **Safety Alignment** | 25 | Deny rules cover secrets, destructive commands; review layers verified |
| **Profile Fit** | 25 | How close to recommended profile for the project's review layer count |
### Scoring Breakdown
**Redundancy (25 points):**
- 25: No duplicates, no subsets, patterns are consolidated
- 20: 1-2 minor redundancies
- 15: 3-5 redundancies or 1 merge candidate group
- 10: 6+ redundancies or 2+ merge candidate groups
- 5: Significant redundancy (10+ issues)
- 0: Severe redundancy (20+ issues)
**Coverage (25 points):**
- 25: All common tools allowed, MCP servers covered
- 20: Missing 1-2 common tool patterns
- 15: Missing 3-5 patterns or 1 MCP server
- 10: Missing 6+ patterns or 2+ MCP servers
- 5: Significant gaps causing frequent prompts
- 0: Minimal coverage (prompts on most operations)
**Safety Alignment (25 points):**
- 25: Deny rules cover secrets + destructive ops, review layers verified
- 20: Minor gaps (e.g., missing one secret pattern)
- 15: Overly broad allow without review layer coverage
- 10: Missing deny rules for secrets or destructive commands
- 5: Unsafe patterns without review layer justification
- 0: Security concerns (e.g., unscoped `Bash` without review layers)
**Profile Fit (25 points):**
- 25: Matches recommended profile exactly
- 20: Within 90% of recommended profile
- 15: Within 80% of recommended profile
- 10: Within 70% of recommended profile
- 5: Significant deviation from recommended profile
- 0: No alignment with any named profile
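As one worked example, the Redundancy band above could be mapped to points like this; the thresholds come from the breakdown, while tie-breaking between overlapping conditions is an assumption:
```python
def redundancy_points(redundant_count: int, merge_groups: int) -> int:
    """Map redundancy findings to the 25-point band described above."""
    if redundant_count >= 20:
        return 0
    if redundant_count >= 10:
        return 5
    if redundant_count >= 6 or merge_groups >= 2:
        return 10
    if redundant_count >= 3 or merge_groups == 1:
        return 15
    if redundant_count >= 1:
        return 20
    return 25

total = redundancy_points(2, 0) + 25 + 20 + 15   # plus Coverage, Safety, Profile Fit
print(total)  # 80, which the table below rates as "Good"
```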
### Score Interpretation
| Score Range | Status | Meaning |
|-------------|--------|---------|
| 90-100 | Optimized | Minimal prompt interruptions, safety maintained |
| 70-89 | Good | Minor consolidation opportunities |
| 50-69 | Needs Work | Significant redundancy or missing permissions |
| Below 50 | Poor | Likely getting constant approval prompts unnecessarily |
---
## Section 7: Hook Detection Method
To verify which review layers are active, read these files:
| File | Hook Type | Tool Matcher | Purpose |
|------|-----------|--------------|---------|
| `plugins/code-sentinel/hooks/hooks.json` | PreToolUse | Write\|Edit\|MultiEdit | Blocks hardcoded secrets |
| `plugins/doc-guardian/hooks/hooks.json` | PostToolUse | Write\|Edit\|MultiEdit | Tracks documentation drift |
| `plugins/project-hygiene/hooks/hooks.json` | PostToolUse | Write\|Edit | Cleanup tracking |
| `plugins/data-platform/hooks/hooks.json` | PostToolUse | Edit\|Write | Schema diff detection |
| `plugins/cmdb-assistant/hooks/hooks.json` | PreToolUse | (if exists) | Input validation |
### Verification Process
1. **Read each hooks.json file**
2. **Parse the JSON to find hook configurations**
3. **Check the `type` field** — must be `"command"` (not `"prompt"`)
4. **Check the `event` field** — maps to when hook runs
5. **Check the `tools` array** — which operations are intercepted
6. **Verify plugin is in marketplace** — check `.claude-plugin/marketplace.json`
### Example Hook Structure
```json
{
"hooks": [
{
"event": "PreToolUse",
"type": "command",
"command": "./hooks/security-check.sh",
"tools": ["Write", "Edit", "MultiEdit"]
}
]
}
```
### Review Layer Count
Count verified review layers for each scope:
| Layer | Verification |
|-------|-------------|
| Sprint approval | Check if projman plugin is installed (milestone workflow) |
| PR review | Check if pr-review plugin is installed |
| code-sentinel PreToolUse | hooks.json exists with PreToolUse on Write/Edit |
| doc-guardian PostToolUse | hooks.json exists with PostToolUse on Write/Edit |
| contract-validator | Plugin installed + hooks present |
**Recommendation threshold:** Only recommend auto-allow for scopes with ≥2 verified layers.

View File

@@ -0,0 +1,73 @@
# Visual Header Display
This skill defines the standard visual header for claude-config-maintainer commands.
## Header Format
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - [Command Name] |
+-----------------------------------------------------------------+
```
## Command-Specific Headers
### /config-analyze
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - CLAUDE.md Analysis |
+-----------------------------------------------------------------+
```
### /config-optimize
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - CLAUDE.md Optimization |
+-----------------------------------------------------------------+
```
### /config-lint
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - CLAUDE.md Lint |
+-----------------------------------------------------------------+
```
### /config-diff
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - CLAUDE.md Diff |
+-----------------------------------------------------------------+
```
### /config-init
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - CLAUDE.md Initialization |
+-----------------------------------------------------------------+
```
### /config-audit-settings
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - Settings Audit |
+-----------------------------------------------------------------+
```
### /config-optimize-settings
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - Settings Optimization |
+-----------------------------------------------------------------+
```
### /config-permissions-map
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - Permissions Map |
+-----------------------------------------------------------------+
```
## Usage
Display the header at the start of command execution, before any analysis or output.

View File

@@ -19,6 +19,5 @@
"data-quality", "data-quality",
"validation" "validation"
], ],
"commands": ["./commands/"], "commands": ["./commands/"]
"mcpServers": ["./.mcp.json"]
} }

View File

@@ -1,8 +0,0 @@
{
"mcpServers": {
"netbox": {
"command": "${CLAUDE_PLUGIN_ROOT}/mcp-servers/netbox/run.sh",
"args": []
}
}
}

View File

@@ -1,217 +0,0 @@
# CMDB Assistant
A Claude Code plugin for NetBox CMDB integration - query, create, update, and manage your network infrastructure directly from Claude Code.
## What's New in v1.2.0
- **`/cmdb-topology`**: Generate Mermaid diagrams showing infrastructure topology (rack view, network view, site overview)
- **`/change-audit`**: Query and analyze NetBox audit log for change tracking and compliance
- **`/ip-conflicts`**: Detect IP address conflicts and overlapping prefixes
## What's New in v1.1.0
- **Data Quality Validation**: Hooks for SessionStart and PreToolUse that check data quality and warn about missing fields
- **Best Practices Skill**: Reference documentation for NetBox patterns (naming conventions, dependency order, role management)
- **`/cmdb-audit`**: Analyze data quality across VMs, devices, naming conventions, and roles
- **`/cmdb-register`**: Register the current machine into NetBox with all running applications (Docker containers, systemd services)
- **`/cmdb-sync`**: Synchronize existing machine state with NetBox (detect drift, update with confirmation)
## Features
- **Full CRUD Operations**: Create, read, update, and delete across all NetBox modules
- **Smart Search**: Find devices, IPs, sites, and more with natural language queries
- **IP Management**: Allocate IPs, manage prefixes, track VLANs
- **Infrastructure Documentation**: Document servers, network devices, and connections
- **Audit Trail**: Review changes and maintain infrastructure history
- **Data Quality Validation**: Proactive checks for missing site, tenant, platform assignments
- **Machine Registration**: Auto-discover and register servers with running applications
- **Drift Detection**: Sync machine state and detect changes over time
## Installation
### Prerequisites
1. A running NetBox instance (v4.x recommended)
2. NetBox API token with appropriate permissions
3. The NetBox MCP server configured (see below)
### Configure NetBox Credentials
Create the configuration file:
```bash
mkdir -p ~/.config/claude
cat > ~/.config/claude/netbox.env << 'EOF'
NETBOX_API_URL=https://your-netbox-instance/api
NETBOX_API_TOKEN=your-api-token-here
NETBOX_VERIFY_SSL=true
NETBOX_TIMEOUT=30
EOF
```
### Install the Plugin
Add to your Claude Code plugins or marketplace configuration.
## Commands
| Command | Description |
|---------|-------------|
| `/initial-setup` | Interactive setup wizard for NetBox MCP server |
| `/cmdb-search <query>` | Search for devices, IPs, sites, or any CMDB object |
| `/cmdb-device <action>` | Manage network devices (list, create, update, delete) |
| `/cmdb-ip <action>` | Manage IP addresses and prefixes |
| `/cmdb-site <action>` | Manage sites and locations |
| `/cmdb-audit [scope]` | Data quality analysis (all, vms, devices, naming, roles) |
| `/cmdb-register` | Register current machine into NetBox with running apps |
| `/cmdb-sync` | Sync machine state with NetBox (detect drift, update) |
| `/cmdb-topology <view>` | Generate Mermaid diagrams (rack, network, site, full) |
| `/change-audit [filters]` | Query NetBox audit log for change tracking |
| `/ip-conflicts [scope]` | Detect IP conflicts and overlapping prefixes |
## Agent
The **cmdb-assistant** agent provides conversational infrastructure management:
```
@cmdb-assistant Show me all devices at the headquarters site
@cmdb-assistant Allocate the next available IP from 10.0.1.0/24 for the new web server
@cmdb-assistant What changes were made to the network today?
```
## Usage Examples
### Search for Infrastructure
```
/cmdb-search router
/cmdb-search 10.0.1.0/24
/cmdb-search datacenter
```
### Device Management
```
/cmdb-device list
/cmdb-device show core-router-01
/cmdb-device create web-server-03
/cmdb-device at headquarters
```
### IP Address Management
```
/cmdb-ip prefixes
/cmdb-ip available in 10.0.1.0/24
/cmdb-ip allocate from 10.0.1.0/24
```
### Site Management
```
/cmdb-site list
/cmdb-site show headquarters
/cmdb-site racks at datacenter-east
```
## NetBox Coverage
This plugin provides access to the full NetBox API:
- **DCIM**: Sites, Locations, Racks, Devices, Interfaces, Cables, Power
- **IPAM**: IP Addresses, Prefixes, VLANs, VRFs, ASNs, Services
- **Circuits**: Providers, Circuits, Terminations
- **Virtualization**: Clusters, Virtual Machines, VM Interfaces
- **Tenancy**: Tenants, Contacts
- **VPN**: Tunnels, L2VPNs, IKE/IPSec Policies
- **Wireless**: WLANs, Wireless Links
- **Extras**: Tags, Custom Fields, Journal Entries, Audit Log
## Hooks
| Event | Purpose |
|-------|---------|
| `SessionStart` | Test NetBox connectivity, report data quality issues |
| `PreToolUse` | Validate VM/device parameters before create/update |
Hooks are **non-blocking** - they emit warnings but never prevent operations.
## Architecture
```
cmdb-assistant/
├── .claude-plugin/
│ └── plugin.json # Plugin manifest
├── .mcp.json # MCP server configuration
├── commands/
│ ├── initial-setup.md # Setup wizard
│ ├── cmdb-search.md # Search command
│ ├── cmdb-device.md # Device management
│ ├── cmdb-ip.md # IP management
│ ├── cmdb-site.md # Site management
│ ├── cmdb-audit.md # Data quality audit
│ ├── cmdb-register.md # Machine registration
│ ├── cmdb-sync.md # Machine sync
│ ├── cmdb-topology.md # Topology visualization (NEW)
│ ├── change-audit.md # Change audit log (NEW)
│ └── ip-conflicts.md # IP conflict detection (NEW)
├── hooks/
│ ├── hooks.json # Hook configuration
│ ├── startup-check.sh # SessionStart validation
│ └── validate-input.sh # PreToolUse validation
├── skills/
│ └── netbox-patterns/
│ └── SKILL.md # NetBox best practices reference
├── agents/
│ └── cmdb-assistant.md # Main assistant agent
└── README.md
```
The plugin uses the shared NetBox MCP server at `mcp-servers/netbox/`.
## Configuration
### Required Environment Variables
| Variable | Description |
|----------|-------------|
| `NETBOX_API_URL` | Full URL to NetBox API (e.g., `https://netbox.example.com/api`) |
| `NETBOX_API_TOKEN` | API authentication token |
### Optional Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `NETBOX_VERIFY_SSL` | `true` | Verify SSL certificates |
| `NETBOX_TIMEOUT` | `30` | Request timeout in seconds |
## Getting a NetBox API Token
1. Log into your NetBox instance
2. Navigate to your profile (top-right menu)
3. Go to "API Tokens"
4. Click "Add a token"
5. Set appropriate permissions (read-only or read-write)
6. Copy the generated token
## Troubleshooting
### Connection Issues
- Verify `NETBOX_API_URL` is correct and accessible
- Check firewall rules allow access to NetBox
- For self-signed certificates, set `NETBOX_VERIFY_SSL=false`
### Authentication Errors
- Ensure API token is valid and not expired
- Check token has required permissions for the operation
### Timeout Errors
- Increase `NETBOX_TIMEOUT` for slow connections
- Check network latency to NetBox instance
## License
MIT License - Part of the Leo Claude Marketplace.

View File

@@ -1,21 +1,27 @@
---
name: cmdb-assistant
description: Infrastructure management assistant specialized in NetBox CMDB operations. Use for device management, IP addressing, and infrastructure queries.
model: sonnet
permissionMode: default
---
# CMDB Assistant Agent # CMDB Assistant Agent
You are an infrastructure management assistant specialized in NetBox CMDB operations. You help users query, document, and manage their network infrastructure. You are an infrastructure management assistant specialized in NetBox CMDB operations.
## Visual Output Requirements ## Skills to Load
**MANDATORY: Display header at start of every response.** - `skills/visual-header.md`
- `skills/netbox-patterns/SKILL.md`
- `skills/mcp-tools-reference.md`
``` ## Visual Output
┌──────────────────────────────────────────────────────────────────┐
│ 🖥️ CMDB-ASSISTANT · Infrastructure Management Execute `skills/visual-header.md` with context "Infrastructure Management".
└──────────────────────────────────────────────────────────────────┘
```
## Capabilities ## Capabilities
You have full access to NetBox via MCP tools covering: Full access to NetBox via MCP tools covering:
- **DCIM**: Sites, locations, racks, devices, interfaces, cables, power - **DCIM**: Sites, locations, racks, devices, interfaces, cables, power
- **IPAM**: IP addresses, prefixes, VLANs, VRFs, ASNs, services - **IPAM**: IP addresses, prefixes, VLANs, VRFs, ASNs, services
- **Circuits**: Providers, circuits, terminations - **Circuits**: Providers, circuits, terminations
@@ -29,183 +35,66 @@ You have full access to NetBox via MCP tools covering:
### Query Operations ### Query Operations
- Start with list operations to find objects - Start with list operations to find objects
- Use filters to narrow results (name, status, site_id, etc.) - Use filters to narrow results
- Follow up with get operations for detailed information - Follow up with get operations for details
- Present results in clear, organized format
### Create Operations ### Create Operations
- Always confirm required fields with user before creating - Confirm required fields before creating
- Look up related object IDs (device_type, role, site) first - Look up related object IDs first
- Provide the created object details after success - Suggest follow-up actions after success
- Suggest follow-up actions (add interfaces, assign IPs, etc.)
### Update Operations ### Update Operations
- Show current values before updating - Show current values before updating
- Confirm changes with user - Confirm changes with user
- Report what was changed after success
### Delete Operations ### Delete Operations
- ALWAYS ask for explicit confirmation before deleting - ALWAYS ask for explicit confirmation
- Show what will be deleted - Warn about dependent objects
- Warn about dependent objects that may be affected
## Common Workflows
### Document a New Server
1. Create device with `dcim_create_device`
2. Add interfaces with `dcim_create_interface`
3. Assign IPs with `ipam_create_ip_address`
4. Add journal entry with `extras_create_journal_entry`
### Allocate IP Space
1. Find available prefixes with `ipam_list_available_prefixes`
2. Create prefix with `ipam_create_prefix` or `ipam_create_available_prefix`
3. Allocate IPs with `ipam_create_available_ip`
### Audit Infrastructure
1. List recent changes with `extras_list_object_changes`
2. Review devices by site with `dcim_list_devices`
3. Check IP utilization with prefix operations
### Cable Management
1. List interfaces with `dcim_list_interfaces`
2. Create cable with `dcim_create_cable`
3. Verify connectivity
## Response Format
When presenting data:
- Use tables for lists
- Highlight key fields (name, status, IPs)
- Include IDs for reference in follow-up operations
- Suggest next steps when appropriate
## Error Handling
- If an operation fails, explain why clearly
- Suggest corrective actions
- For permission errors, note what access is needed
- For validation errors, explain required fields/formats
## Data Quality Validation ## Data Quality Validation
**IMPORTANT:** Load the `netbox-patterns` skill for best practice reference. Reference `skills/netbox-patterns/SKILL.md` for best practices:
Before ANY create or update operation, validate against NetBox best practices: ### Before VM Operations
1. Cluster/Site assignment required
2. Recommend tenant if not provided
3. Check naming convention
### VM Operations ### Before Device Operations
1. Site is REQUIRED
2. Recommend platform
3. Check naming convention
4. Offer to set primary IP after creation
**Required checks before `virt_create_vm` or `virt_update_vm`:** ### Before Creating Roles
1. List existing roles first
2. Recommend consolidation if >10 specific roles
1. **Cluster/Site Assignment** - VMs must have either cluster or site ## Dependency Order
2. **Tenant Assignment** - Recommend if not provided
3. **Platform Assignment** - Recommend for OS tracking
4. **Naming Convention** - Check against `{env}-{app}-{number}` pattern
5. **Role Assignment** - Recommend appropriate role
**If user provides no site/tenant, ASK:**
> "This VM has no site or tenant assigned. NetBox best practices recommend:
> - **Site**: For location-based queries and power budgeting
> - **Tenant**: For resource isolation and ownership tracking
>
> Would you like me to:
> 1. Assign to an existing site/tenant (list available)
> 2. Create new site/tenant first
> 3. Proceed without (not recommended for production use)"
### Device Operations
**Required checks before `dcim_create_device` or `dcim_update_device`:**
1. **Site is REQUIRED** - Fail without it
2. **Platform Assignment** - Recommend for OS tracking
3. **Naming Convention** - Check against `{role}-{location}-{number}` pattern
4. **Role Assignment** - Ensure appropriate role selected
5. **After Creation** - Offer to set primary IP
### Cluster Operations
**Required checks before `virt_create_cluster`:**
1. **Site Scope** - Recommend assigning to site
2. **Cluster Type** - Ensure appropriate type selected
3. **Device Association** - Recommend linking to host device
### Role Management
**Before creating a new device role:**
1. List existing roles with `dcim_list_device_roles`
2. Check if a more general role already exists
3. Recommend role consolidation if >10 specific roles exist
**Example guidance:**
> "You're creating role 'nginx-web-server'. An existing 'web-server' role exists.
> Consider using 'web-server' and tracking nginx via the platform field instead.
> This reduces role fragmentation and improves maintainability."
## Dependency Order Enforcement
When creating multiple objects, follow this order:
Follow order from `skills/netbox-patterns/SKILL.md`:
``` ```
1. Regions Sites Locations Racks 1. Regions -> Sites -> Locations -> Racks
2. Tenant Groups Tenants 2. Tenant Groups -> Tenants
3. Manufacturers Device Types 3. Manufacturers -> Device Types
4. Device Roles, Platforms 4. Device Roles, Platforms
5. Devices (with site, role, type) 5. Devices (with site, role, type)
6. Clusters (with type, optional site) 6. Clusters (with type, optional site)
7. VMs (with cluster) 7. VMs (with cluster)
8. Interfaces IP Addresses Primary IP assignment 8. Interfaces -> IP Addresses -> Primary IP
``` ```
**CRITICAL Rules:**
- NEVER create a VM before its cluster exists
- NEVER create a device before its site exists
- NEVER create an interface before its device exists
- NEVER create an IP before its interface exists (if assigning)
## Naming Convention Enforcement
When user provides a name, check against patterns:
| Object Type | Pattern | Example |
|-------------|---------|---------|
| Device | `{role}-{site}-{number}` | `web-dc1-01` |
| VM | `{env}-{app}-{number}` or `{prefix}_{service}` | `prod-api-01` |
| Cluster | `{site}-{type}` | `dc1-vmware`, `home-docker` |
| Prefix | Include purpose in description | "Production /24 for web tier" |
**If name doesn't match patterns, warn:**
> "The name 'HotServ' doesn't follow naming conventions.
> Suggested: `prod-hotserv-01` or `hotserv-cloud-01`.
> Consistent naming improves searchability and automation compatibility.
> Proceed with original name? [Y/n]"
## Duplicate Prevention ## Duplicate Prevention
Before creating objects, always check for existing duplicates: Before creating, check for existing:
``` ```
# Before creating device
dcim_list_devices name=<proposed-name> dcim_list_devices name=<proposed-name>
# Before creating VM
virt_list_vms name=<proposed-name> virt_list_vms name=<proposed-name>
# Before creating prefix
ipam_list_prefixes prefix=<proposed-prefix> ipam_list_prefixes prefix=<proposed-prefix>
``` ```
If duplicate found, inform user and suggest update instead of create.
## Available Commands ## Available Commands
Users can invoke these commands for structured workflows:
| Command | Purpose | | Command | Purpose |
|---------|---------| |---------|---------|
| `/cmdb-search <query>` | Search across all CMDB objects | | `/cmdb-search <query>` | Search across all CMDB objects |
@@ -215,3 +104,6 @@ Users can invoke these commands for structured workflows:
| `/cmdb-audit [scope]` | Data quality analysis | | `/cmdb-audit [scope]` | Data quality analysis |
| `/cmdb-register` | Register current machine | | `/cmdb-register` | Register current machine |
| `/cmdb-sync` | Sync machine state with NetBox | | `/cmdb-sync` | Sync machine state with NetBox |
| `/cmdb-topology <view>` | Generate infrastructure diagrams |
| `/change-audit [filters]` | Audit NetBox changes |
| `/ip-conflicts [scope]` | Detect IP conflicts |

View File

@@ -32,9 +32,9 @@ The following NetBox MCP tools are available for infrastructure management:
 - `ipam_list_available_ips`, `ipam_create_available_ip` - IP allocation
 **Virtualization:**
-- `virtualization_list_virtual_machines`, `virtualization_create_virtual_machine` - VM management
-- `virtualization_list_clusters`, `virtualization_create_cluster` - Cluster management
-- `virtualization_list_vm_interfaces` - VM interface management
+- `virt_list_vms`, `virt_create_vm`, `virt_update_vm`, `virt_delete_vm` - VM management
+- `virt_list_clusters`, `virt_create_cluster`, `virt_update_cluster`, `virt_delete_cluster` - Cluster management
+- `virt_list_vm_ifaces`, `virt_create_vm_iface` - VM interface management
 **Circuits:**
 - `circuits_list_circuits`, `circuits_create_circuit` - Circuit management

View File

@@ -4,20 +4,14 @@ description: Audit NetBox changes with filtering by date, user, or object type
# CMDB Change Audit # CMDB Change Audit
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 🖥️ CMDB-ASSISTANT · Change Audit │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the audit.
Query and analyze the NetBox audit log for change tracking and compliance. Query and analyze the NetBox audit log for change tracking and compliance.
## Skills to Load
- `skills/visual-header.md`
- `skills/change-audit.md`
- `skills/mcp-tools-reference.md`
## Usage ## Usage
``` ```
@@ -33,142 +27,30 @@ Query and analyze the NetBox audit log for change tracking and compliance.
## Instructions ## Instructions
You are a change auditor that queries NetBox's object change log and generates audit reports. Execute `skills/visual-header.md` with context "Change Audit".
### MCP Tools Execute `skills/change-audit.md` which covers:
1. Parse user request for filters
2. Query object changes via MCP
3. Enrich data with detailed records
4. Analyze patterns
5. Generate report
Use these tools to query the audit log: ## Security Audit Mode
- `extras_list_object_changes` - List changes with filters:
- `user_id` - Filter by user ID
- `changed_object_type` - Filter by object type (e.g., "dcim.device", "ipam.ipaddress")
- `action` - Filter by action: "create", "update", "delete"
- `extras_get_object_change` - Get detailed change record by ID
### Common Object Types
| Category | Object Types |
|----------|--------------|
| DCIM | `dcim.device`, `dcim.interface`, `dcim.site`, `dcim.rack`, `dcim.cable` |
| IPAM | `ipam.ipaddress`, `ipam.prefix`, `ipam.vlan`, `ipam.vrf` |
| Virtualization | `virtualization.virtualmachine`, `virtualization.cluster` |
| Tenancy | `tenancy.tenant`, `tenancy.contact` |
### Workflow
1. **Parse user request** to determine filters
2. **Query object changes** using `extras_list_object_changes`
3. **Enrich data** by fetching detailed records if needed
4. **Analyze patterns** in the changes
5. **Generate report** in structured format
### Report Format
```markdown
## NetBox Change Audit Report
**Generated:** [timestamp]
**Period:** [date range or "All time"]
**Filters:** [applied filters]
### Summary
| Metric | Count |
|--------|-------|
| Total Changes | X |
| Creates | Y |
| Updates | Z |
| Deletes | W |
| Unique Users | N |
| Object Types | M |
### Changes by Action
#### Created Objects (Y)
| Time | User | Object Type | Object | Details |
|------|------|-------------|--------|---------|
| 2024-01-15 14:30 | admin | dcim.device | server-01 | Created device |
| ... | ... | ... | ... | ... |
#### Updated Objects (Z)
| Time | User | Object Type | Object | Changed Fields |
|------|------|-------------|--------|----------------|
| 2024-01-15 15:00 | john | ipam.ipaddress | 10.0.1.50/24 | status, description |
| ... | ... | ... | ... | ... |
#### Deleted Objects (W)
| Time | User | Object Type | Object | Details |
|------|------|-------------|--------|---------|
| 2024-01-14 09:00 | admin | dcim.interface | eth2 | Removed from server-01 |
| ... | ... | ... | ... | ... |
### Changes by User
| User | Creates | Updates | Deletes | Total |
|------|---------|---------|---------|-------|
| admin | 5 | 10 | 2 | 17 |
| john | 3 | 8 | 0 | 11 |
### Changes by Object Type
| Object Type | Creates | Updates | Deletes | Total |
|-------------|---------|---------|---------|-------|
| dcim.device | 2 | 5 | 0 | 7 |
| ipam.ipaddress | 4 | 3 | 1 | 8 |
### Timeline
```
2024-01-15: ████████ 8 changes
2024-01-14: ████ 4 changes
2024-01-13: ██ 2 changes
```
### Notable Patterns
- **Bulk operations:** [Identify if many changes happened in short time]
- **Unusual activity:** [Flag unexpected deletions or after-hours changes]
- **Missing audit trail:** [Note if expected changes are not logged]
### Recommendations
1. [Any security or process recommendations based on findings]
```
### Time Period Handling
When user specifies "last N days":
- The NetBox API may not have direct date filtering in `extras_list_object_changes`
- Fetch recent changes and filter client-side by the `time` field
- Note any limitations in the report
### Enriching Change Details
For detailed audit, use `extras_get_object_change` with the change ID to see:
- `prechange_data` - Object state before change
- `postchange_data` - Object state after change
- `request_id` - Links related changes in same request
### Security Audit Mode
If user asks for "security audit" or "compliance report": If user asks for "security audit" or "compliance report":
1. Focus on deletions and permission-sensitive changes - Focus on deletions and permission-sensitive changes
2. Highlight changes to critical objects (firewalls, VRFs, prefixes) - Highlight changes to critical objects (firewalls, VRFs, prefixes)
3. Flag changes outside business hours - Flag changes outside business hours
4. Identify users with high change counts - Identify users with high change counts
## Examples ## Examples
- `/change-audit` - Show recent changes (last 24 hours) - `/change-audit` - Recent changes (last 24 hours)
- `/change-audit last 7 days` - Changes in past week - `/change-audit last 7 days` - Past week
- `/change-audit by admin` - All changes by admin user - `/change-audit by admin` - All changes by admin
- `/change-audit type dcim.device` - Device changes only - `/change-audit type dcim.device` - Device changes only
- `/change-audit action delete` - All deletions - `/change-audit action delete` - All deletions
- `/change-audit object server-01` - Changes to server-01
## User Request ## User Request

View File

@@ -4,20 +4,15 @@ description: Audit NetBox data quality and identify consistency issues
# CMDB Data Quality Audit # CMDB Data Quality Audit
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 🖥️ CMDB-ASSISTANT · Data Quality Audit │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the audit.
Analyze NetBox data for quality issues and best practice violations. Analyze NetBox data for quality issues and best practice violations.
## Skills to Load
- `skills/visual-header.md`
- `skills/audit-workflow.md`
- `skills/netbox-patterns/SKILL.md`
- `skills/mcp-tools-reference.md`
## Usage ## Usage
``` ```
@@ -33,174 +28,30 @@ Analyze NetBox data for quality issues and best practice violations.
## Instructions ## Instructions
You are a data quality auditor for NetBox. Your job is to identify consistency issues and best practice violations. Execute `skills/visual-header.md` with context "Data Quality Audit".
**IMPORTANT:** Load the `netbox-patterns` skill for best practice reference. Execute `skills/audit-workflow.md` which covers:
1. Data collection via MCP
2. Quality checks by severity (CRITICAL, HIGH, MEDIUM, LOW)
3. Naming convention analysis
4. Role fragmentation analysis
5. Report generation with recommendations
### Phase 1: Data Collection ## Scope-Specific Focus
Run these MCP tool calls to gather data for analysis: | Scope | Focus |
|-------|-------|
| `all` | Full audit across all categories |
| `vms` | Virtual Machine checks only |
| `devices` | Device checks only |
| `naming` | Naming convention analysis |
| `roles` | Role fragmentation analysis |
``` ## Examples
1. virt_list_vms (no filters - get all)
2. dcim_list_devices (no filters - get all)
3. virt_list_clusters (no filters)
4. dcim_list_sites
5. tenancy_list_tenants
6. dcim_list_device_roles
7. dcim_list_platforms
```
Store the results for analysis. - `/cmdb-audit` - Full audit
- `/cmdb-audit vms` - VM-specific checks
### Phase 2: Quality Checks - `/cmdb-audit naming` - Naming conventions
Analyze collected data for these issues by severity:
#### CRITICAL Issues (must fix immediately)
| Check | Detection |
|-------|-----------|
| VMs without cluster | `cluster` field is null AND `site` field is null |
| Devices without site | `site` field is null |
| Active devices without primary IP | `status=active` AND `primary_ip4` is null AND `primary_ip6` is null |
#### HIGH Issues (should fix soon)
| Check | Detection |
|-------|-----------|
| VMs without site | VM has no site (neither direct nor via cluster.site) |
| VMs without tenant | `tenant` field is null |
| Devices without platform | `platform` field is null |
| Clusters not scoped to site | `site` field is null on cluster |
| VMs without role | `role` field is null |
#### MEDIUM Issues (plan to address)
| Check | Detection |
|-------|-----------|
| Inconsistent naming | Names don't match patterns: devices=`{role}-{site}-{num}`, VMs=`{env}-{app}-{num}` |
| Role fragmentation | More than 10 device roles with <3 assignments each |
| Missing tags on production | Active resources without any tags |
| Mixed naming separators | Some names use `_`, others use `-` |
#### LOW Issues (informational)
| Check | Detection |
|-------|-----------|
| Docker containers as VMs | Cluster type is "Docker Compose" - document this modeling choice |
| VMs without description | `description` field is empty |
| Sites without physical address | `physical_address` is empty |
| Devices without serial | `serial` field is empty |
### Phase 3: Naming Convention Analysis
For naming scope, analyze patterns:
1. **Extract naming patterns** from existing objects
2. **Identify dominant patterns** (most common conventions)
3. **Flag outliers** that don't match dominant patterns
4. **Suggest standardization** based on best practices
**Expected Patterns:**
- Devices: `{role}-{location}-{number}` (e.g., `web-dc1-01`)
- VMs: `{prefix}_{service}` or `{env}-{app}-{number}` (e.g., `prod-api-01`)
- Clusters: `{site}-{type}` (e.g., `home-docker`)
### Phase 4: Role Analysis
For roles scope, analyze fragmentation:
1. **List all device roles** with assignment counts
2. **Identify single-use roles** (only 1 device/VM)
3. **Identify similar roles** that could be consolidated
4. **Suggest consolidation** based on patterns
**Red Flags:**
- More than 15 highly specific roles
- Roles with technology in name (use platform instead)
- Roles that duplicate functionality
### Phase 5: Report Generation
Present findings in this structure:
```markdown
## CMDB Data Quality Audit Report
**Generated:** [timestamp]
**Scope:** [scope parameter]
### Summary
| Metric | Count |
|--------|-------|
| Total VMs | X |
| Total Devices | Y |
| Total Clusters | Z |
| **Total Issues** | **N** |
| Severity | Count |
|----------|-------|
| Critical | A |
| High | B |
| Medium | C |
| Low | D |
### Critical Issues
[List each with specific object names and IDs]
**Example:**
- VM `HotServ` (ID: 1) - No cluster or site assignment
- Device `server-01` (ID: 5) - No site assignment
### High Issues
[List each with specific object names]
### Medium Issues
[Grouped by category with counts]
### Recommendations
1. **[Most impactful fix]** - affects N objects
2. **[Second priority]** - affects M objects
...
### Quick Fixes
Commands to fix common issues:
```
# Assign site to VM
virt_update_vm id=X site=Y
# Assign platform to device
dcim_update_device id=X platform=Y
```
### Next Steps
- Run `/cmdb-register` to properly register new machines
- Use `/cmdb-sync` to update existing registrations
- Consider bulk updates via NetBox web UI for >10 items
```
## Scope-Specific Instructions
### For `vms` scope:
Focus only on Virtual Machine checks. Skip device and role analysis.
### For `devices` scope:
Focus only on Device checks. Skip VM and cluster analysis.
### For `naming` scope:
Focus on naming convention analysis across all objects. Generate detailed pattern report.
### For `roles` scope:
Focus on role fragmentation analysis. Generate consolidation recommendations.
## User Request ## User Request

View File

@@ -1,18 +1,11 @@
# CMDB Device Management # CMDB Device Management
## Visual Output Manage network devices in NetBox.
When executing this command, display the plugin header: ## Skills to Load
``` - `skills/visual-header.md`
┌──────────────────────────────────────────────────────────────────┐ - `skills/mcp-tools-reference.md`
│ 🖥️ CMDB-ASSISTANT · Device Management │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the operation.
Manage network devices in NetBox - create, view, update, or delete.
## Usage ## Usage
@@ -22,42 +15,40 @@ Manage network devices in NetBox - create, view, update, or delete.
## Instructions ## Instructions
You are a device management assistant with full CRUD access to NetBox devices. Execute `skills/visual-header.md` with context "Device Management".
### Actions ### Actions
**List/View:** **List/View:**
- `list` or `show all` - List all devices using `dcim_list_devices` - `list` or `show all` - List all devices: `dcim_list_devices`
- `show <name>` - Get device details using `dcim_list_devices` with name filter, then `dcim_get_device` - `show <name>` - Get device details: `dcim_get_device`
- `at <site>` - List devices at a specific site - `at <site>` - List devices at site
**Create:** **Create:**
- `create <name>` - Create a new device - `create <name>` - Create new device
- Required: name, device_type, role, site - Required: name, device_type, role, site
- Use `dcim_list_device_types`, `dcim_list_device_roles`, `dcim_list_sites` to help user find IDs - Use `dcim_list_device_types`, `dcim_list_device_roles`, `dcim_list_sites` to find IDs
- Then use `dcim_create_device`
**Update:** **Update:**
- `update <name>` - Update device properties - `update <name>` - Update device properties
- First get the device ID, then use `dcim_update_device` - Get device ID first, then use `dcim_update_device`
**Delete:** **Delete:**
- `delete <name>` - Delete a device (ask for confirmation first) - `delete <name>` - Delete device (ask confirmation first)
- Use `dcim_delete_device`
### Related Operations ### Related Operations
After creating a device, offer to: After creating a device, offer to:
- Add interfaces with `dcim_create_interface` - Add interfaces: `dcim_create_interface`
- Assign IP addresses with `ipam_create_ip_address` - Assign IP addresses: `ipam_create_ip_address`
- Add to a rack with `dcim_update_device` - Add to rack: `dcim_update_device`
## Examples ## Examples
- `/cmdb-device list` - Show all devices - `/cmdb-device list`
- `/cmdb-device show core-router-01` - Get details for specific device - `/cmdb-device show core-router-01`
- `/cmdb-device create web-server-03` - Create a new device - `/cmdb-device create web-server-03`
- `/cmdb-device at headquarters` - List devices at headquarters site - `/cmdb-device at headquarters`
## User Request ## User Request

View File

@@ -1,19 +1,13 @@
# CMDB IP Management # CMDB IP Management
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 🖥️ CMDB-ASSISTANT · IP Management │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the operation.
Manage IP addresses and prefixes in NetBox. Manage IP addresses and prefixes in NetBox.
## Skills to Load
- `skills/visual-header.md`
- `skills/ip-management.md`
- `skills/mcp-tools-reference.md`
## Usage ## Usage
``` ```
@@ -22,43 +16,36 @@ Manage IP addresses and prefixes in NetBox.
## Instructions ## Instructions
You are an IP address management (IPAM) assistant with access to NetBox. Execute `skills/visual-header.md` with context "IP Management".
Execute operations from `skills/ip-management.md`.
### Actions ### Actions
**Prefixes:** **Prefixes:**
- `prefixes` - List all prefixes using `ipam_list_prefixes` - `prefixes` - List all prefixes
- `prefix <cidr>` - Get prefix details or find prefix containing address - `prefix <cidr>` - Get prefix details
- `available in <prefix>` - Show available IPs in a prefix using `ipam_list_available_ips` - `available in <prefix>` - Show available IPs
- `create prefix <cidr>` - Create new prefix using `ipam_create_prefix` - `create prefix <cidr>` - Create new prefix
**IP Addresses:** **IP Addresses:**
- `list` - List all IP addresses using `ipam_list_ip_addresses` - `list` - List all IP addresses
- `show <address>` - Get IP details - `show <address>` - Get IP details
- `allocate from <prefix>` - Auto-allocate next available IP using `ipam_create_available_ip` - `allocate from <prefix>` - Auto-allocate next available
- `create <address>` - Create specific IP using `ipam_create_ip_address` - `create <address>` - Create specific IP
- `assign <ip> to <device>` - Assign IP to device interface - `assign <ip> to <device> <interface>` - Assign IP to interface
**VLANs:** **VLANs and VRFs:**
- `vlans` - List VLANs using `ipam_list_vlans` - `vlans` - List VLANs
- `vlan <id>` - Get VLAN details - `vlan <id>` - Get VLAN details
- `vrfs` - List VRFs
**VRFs:**
- `vrfs` - List VRFs using `ipam_list_vrfs`
### Workflow Examples
**Allocate IP to new server:**
1. Find available IPs in target prefix
2. Create the IP address
3. Assign to device interface
## Examples ## Examples
- `/cmdb-ip prefixes` - List all prefixes - `/cmdb-ip prefixes`
- `/cmdb-ip available in 10.0.1.0/24` - Show available IPs - `/cmdb-ip available in 10.0.1.0/24`
- `/cmdb-ip allocate from 10.0.1.0/24` - Get next available IP - `/cmdb-ip allocate from 10.0.1.0/24`
- `/cmdb-ip assign 10.0.1.50/24 to web-server-01 eth0` - Assign IP to interface - `/cmdb-ip assign 10.0.1.50/24 to web-server-01 eth0`
## User Request ## User Request

View File

@@ -4,19 +4,15 @@ description: Register the current machine into NetBox with all running applicati
# CMDB Machine Registration # CMDB Machine Registration
## Visual Output Register the current machine into NetBox, including hardware info, network interfaces, and running applications.
When executing this command, display the plugin header: ## Skills to Load
``` - `skills/visual-header.md`
┌──────────────────────────────────────────────────────────────────┐ - `skills/device-registration.md`
│ 🖥️ CMDB-ASSISTANT · Machine Registration │ - `skills/system-discovery.md`
└──────────────────────────────────────────────────────────────────┘ - `skills/netbox-patterns/SKILL.md`
``` - `skills/mcp-tools-reference.md`
Then proceed with the registration.
Register the current machine into NetBox, including hardware info, network interfaces, and running applications (Docker containers, services).
## Usage ## Usage
@@ -31,303 +27,24 @@ Register the current machine into NetBox, including hardware info, network inter
## Instructions
Execute `skills/visual-header.md` with context "Machine Registration".
Execute `skills/device-registration.md` which covers:
1. System discovery via Bash (use `skills/system-discovery.md`)
2. Pre-registration checks (device exists?, site?, platform?, role?)
3. Device creation via MCP
4. Interface and IP creation
5. Container registration (if Docker found)
6. Journal entry documentation
### Phase 1: System Discovery (via Bash)
Gather system information using these commands:
#### 1.1 Basic Device Info
```bash
# Hostname
hostname
# OS/Platform info
cat /etc/os-release 2>/dev/null || uname -a
# Hardware model (varies by system)
# Raspberry Pi:
cat /proc/device-tree/model 2>/dev/null || echo "Unknown"
# x86 systems:
cat /sys/class/dmi/id/product_name 2>/dev/null || echo "Unknown"
# Serial number
# Raspberry Pi:
cat /proc/device-tree/serial-number 2>/dev/null || cat /proc/cpuinfo | grep Serial | cut -d: -f2 | tr -d ' ' 2>/dev/null
# x86 systems:
cat /sys/class/dmi/id/product_serial 2>/dev/null || echo "Unknown"
# CPU info
nproc
# Memory (MB)
free -m | awk '/Mem:/ {print $2}'
# Disk (GB, root filesystem)
df -BG / | awk 'NR==2 {print $2}' | tr -d 'G'
```
#### 1.2 Network Interfaces
```bash
# Get interfaces with IPs (JSON format)
ip -j addr show 2>/dev/null || ip addr show
# Get default gateway interface
ip route | grep default | awk '{print $5}' | head -1
# Get MAC addresses
ip -j link show 2>/dev/null || ip link show
```
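A minimal parsing sketch (assumes `jq` is installed; fall back to reading `ip addr show` by hand if it is not) that flattens the JSON output into one `interface MAC address/prefix` line per IPv4 address:
```bash
# Flatten `ip -j addr show` into "ifname mac ip/prefix" lines (IPv4 only, loopback skipped).
ip -j addr show | jq -r '
  .[]
  | select(.ifname != "lo")
  | .ifname as $name
  | (.address // "-") as $mac
  | .addr_info[]?
  | select(.family == "inet")
  | "\($name) \($mac) \(.local)/\(.prefixlen)"'
```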
#### 1.3 Running Applications
```bash
# Docker containers (if docker available)
docker ps --format '{"name":"{{.Names}}","image":"{{.Image}}","status":"{{.Status}}","ports":"{{.Ports}}"}' 2>/dev/null || echo "Docker not available"
# Docker Compose projects (check common locations)
find ~/apps /home/*/apps -name "docker-compose.yml" -o -name "docker-compose.yaml" 2>/dev/null | head -20
# Systemd services (running)
systemctl list-units --type=service --state=running --no-pager --plain 2>/dev/null | grep -v "^UNIT" | head -30
```
### Phase 2: Pre-Registration Checks (via MCP)
Before creating objects, verify prerequisites:
#### 2.1 Check if Device Already Exists
```
dcim_list_devices name=<hostname>
```
**If device exists:**
- Inform user and suggest `/cmdb-sync` instead
- Ask if they want to proceed with re-registration (will update existing)
#### 2.2 Verify/Create Site
If `--site` provided:
```
dcim_list_sites name=<site-name>
```
If site doesn't exist, ask user if they want to create it.
If no site provided, list available sites and ask user to choose:
```
dcim_list_sites
```
#### 2.3 Verify/Create Platform
Based on OS detected, check if platform exists:
```
dcim_list_platforms name=<platform-name>
```
**Platform naming:**
- `Raspberry Pi OS (Bookworm)` for Raspberry Pi
- `Ubuntu 24.04 LTS` for Ubuntu
- `Debian 12` for Debian
- Use format: `{OS Name} {Version}`
If platform doesn't exist, create it:
```
dcim_create_platform name=<platform-name> slug=<slug>
```
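A sketch for deriving the platform name and slug from the discovery output (assumes `/etc/os-release` exists; `NAME` and `VERSION_ID` are standard fields there):
```bash
# Build "{OS Name} {Version}" and a URL-safe slug from /etc/os-release.
if [ -r /etc/os-release ]; then
  . /etc/os-release
  PLATFORM_NAME="${NAME} ${VERSION_ID}"     # e.g. "Debian 12", "Ubuntu 24.04"
else
  PLATFORM_NAME="$(uname -s) $(uname -r)"   # fallback when os-release is missing
fi
PLATFORM_SLUG=$(echo "$PLATFORM_NAME" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/^-//; s/-$//')
echo "$PLATFORM_NAME -> $PLATFORM_SLUG"
```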
#### 2.4 Verify/Create Device Role
Based on detected services:
- If Docker containers found → `Docker Host`
- If only basic services → `Server`
- If specific role specified → Use that
```
dcim_list_device_roles name=<role-name>
```
### Phase 3: Device Registration (via MCP)
#### 3.1 Get/Create Manufacturer and Device Type
For Raspberry Pi:
```
dcim_list_manufacturers name="Raspberry Pi Foundation"
dcim_list_device_types manufacturer_id=X model="Raspberry Pi 4 Model B"
```
Create if not exists.
For generic x86:
```
dcim_list_manufacturers name=<detected-manufacturer>
```
#### 3.2 Create Device
```
dcim_create_device
name=<hostname>
device_type=<device_type_id>
role=<role_id>
site=<site_id>
platform=<platform_id>
tenant=<tenant_id> # if provided
serial=<serial>
description="Registered via cmdb-assistant"
```
#### 3.3 Create Interfaces
For each network interface discovered:
```
dcim_create_interface
device=<device_id>
name=<interface_name> # eth0, wlan0, tailscale0, etc.
type=<type> # 1000base-t, virtual, other
mac_address=<mac>
enabled=true
```
**Interface type mapping:**
- `eth*`, `enp*` → `1000base-t`
- `wlan*` → `ieee802.11ax` (or appropriate wifi type)
- `tailscale*`, `docker*`, `br-*` → `virtual`
- `lo` → skip (loopback)
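A sketch of that mapping as a shell helper (the patterns mirror the list above; extend the cases for other interface naming schemes as needed):
```bash
# Map a discovered interface name to a NetBox interface type (or "skip").
netbox_iface_type() {
  case "$1" in
    lo)                       echo "skip" ;;          # loopback, not registered
    eth*|enp*)                echo "1000base-t" ;;
    wlan*)                    echo "ieee802.11ax" ;;  # adjust to the actual wifi standard
    tailscale*|docker*|br-*)  echo "virtual" ;;
    *)                        echo "other" ;;
  esac
}
netbox_iface_type eth0        # -> 1000base-t
netbox_iface_type tailscale0  # -> virtual
```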
#### 3.4 Create IP Addresses
For each IP on each interface:
```
ipam_create_ip_address
address=<ip/prefix> # e.g., "192.168.1.100/24"
assigned_object_type="dcim.interface"
assigned_object_id=<interface_id>
status="active"
description="Discovered via cmdb-register"
```
#### 3.5 Set Primary IP
Identify primary IP (interface with default route):
```
dcim_update_device
id=<device_id>
primary_ip4=<primary_ip_id>
```
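A sketch for picking the primary IP locally (assumes `jq`): take the first IPv4 on the interface that carries the default route, then pass the matching NetBox IP object ID from step 3.4 as `primary_ip4`:
```bash
# Interface holding the default route, then its first IPv4 address.
PRIMARY_IF=$(ip route | awk '/^default/ {print $5; exit}')
PRIMARY_IP=$(ip -j addr show dev "$PRIMARY_IF" \
  | jq -r '.[0].addr_info[] | select(.family == "inet") | "\(.local)/\(.prefixlen)"' \
  | head -1)
echo "primary interface: $PRIMARY_IF, primary IP: $PRIMARY_IP"
```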
### Phase 4: Container Registration (via MCP)
If Docker containers were discovered:
#### 4.1 Create/Get Cluster Type
```
virt_list_cluster_types name="Docker Compose"
```
Create if not exists:
```
virt_create_cluster_type name="Docker Compose" slug="docker-compose"
```
#### 4.2 Create Cluster
For each Docker Compose project directory found:
```
virt_create_cluster
name=<project-name> # e.g., "apps-hotport"
type=<cluster_type_id>
site=<site_id>
description="Docker Compose stack on <hostname>"
```
#### 4.3 Create VMs for Containers
For each running container:
```
virt_create_vm
name=<container_name>
cluster=<cluster_id>
site=<site_id>
role=<role_id> # Map container function to role
status="active"
vcpus=<cpu_shares> # Default 1.0 if unknown
memory=<memory_mb> # Default 256 if unknown
disk=<disk_gb> # Default 5 if unknown
description=<container purpose>
comments=<image, ports, volumes info>
```
**Container role mapping:**
- `*caddy*`, `*nginx*`, `*traefik*` → "Reverse Proxy"
- `*db*`, `*postgres*`, `*mysql*`, `*redis*` → "Database"
- `*webui*`, `*frontend*` → "Web Application"
- Others → Infer from image name or use generic "Container"
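A sketch of that mapping applied to the running containers (role names are the ones listed above; anything unmatched falls back to "Container"):
```bash
# Derive a coarse role for each running container from its name.
docker ps --format '{{.Names}}' | while read -r name; do
  case "$name" in
    *caddy*|*nginx*|*traefik*)        role="Reverse Proxy" ;;
    *db*|*postgres*|*mysql*|*redis*)  role="Database" ;;
    *webui*|*frontend*)               role="Web Application" ;;
    *)                                role="Container" ;;
  esac
  echo "$name -> $role"
done
```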
### Phase 5: Documentation
#### 5.1 Add Journal Entry
```
extras_create_journal_entry
assigned_object_type="dcim.device"
assigned_object_id=<device_id>
comments="Device registered via /cmdb-register command\n\nDiscovered:\n- X network interfaces\n- Y IP addresses\n- Z Docker containers"
```
### Phase 6: Summary Report
Present registration summary:
```markdown
## Machine Registration Complete
### Device Created
- **Name:** <hostname>
- **Site:** <site>
- **Platform:** <platform>
- **Role:** <role>
- **ID:** <device_id>
- **URL:** https://netbox.example.com/dcim/devices/<id>/
### Network Interfaces
| Interface | Type | MAC | IP Address |
|-----------|------|-----|------------|
| eth0 | 1000base-t | aa:bb:cc:dd:ee:ff | 192.168.1.100/24 |
| tailscale0 | virtual | - | 100.x.x.x/32 |
### Primary IP: 192.168.1.100
### Docker Containers Registered (if applicable)
**Cluster:** <cluster_name> (ID: <cluster_id>)
| Container | Role | vCPUs | Memory | Status |
|-----------|------|-------|--------|--------|
| media_jellyfin | Media Server | 2.0 | 2048MB | Active |
| media_sonarr | Media Management | 1.0 | 512MB | Active |
### Next Steps
- Run `/cmdb-sync` periodically to keep data current
- Run `/cmdb-audit` to check data quality
- Add tags for classification (env:*, team:*, etc.)
```
## Error Handling
| Error | Action |
|-------|--------|
| Device already exists | Suggest `/cmdb-sync` or ask to proceed |
| Site not found | List available sites, offer to create new |
| Docker not available | Skip container registration, note in summary |
| Permission denied | Note which operations failed, suggest fixes |
## User Request
View File
@@ -31,7 +31,7 @@ When the user provides a search query, determine the best approach:
3. **Site search**: Use `dcim_list_sites` with name filter
4. **Prefix search**: Use `ipam_list_prefixes` with prefix or within filter
5. **VLAN search**: Use `ipam_list_vlans` with vid or name filter
6. **VM search**: Use `virt_list_vms` with name filter
For broad searches, query multiple endpoints and consolidate results.
View File
@@ -1,19 +1,12 @@
# CMDB Site Management
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 🖥️ CMDB-ASSISTANT · Site Management │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the operation.
Manage sites and locations in NetBox.
## Skills to Load
- `skills/visual-header.md`
- `skills/mcp-tools-reference.md`
## Usage
```
@@ -22,46 +15,35 @@ Manage sites and locations in NetBox.
## Instructions
Execute `skills/visual-header.md` with context "Site Management".
### Actions
**Sites:**
- `list` - List all sites: `dcim_list_sites`
- `show <name>` - Get site details: `dcim_get_site`
- `create <name>` - Create new site: `dcim_create_site`
- `update <name>` - Update site: `dcim_update_site`
- `delete <name>` - Delete site (with confirmation)
**Locations:**
- `locations at <site>` - List locations: `dcim_list_locations`
- `create location <name> at <site>` - Create location
**Racks:**
- `racks at <site>` - List racks: `dcim_list_racks`
- `create rack <name> at <site>` - Create rack
**Regions:**
- `regions` - List regions: `dcim_list_regions`
- `create region <name>` - Create region
### Site Properties
When creating/updating sites:
- name (required)
- slug (required, auto-generated if not provided)
- status: active, planned, staging, decommissioning, retired
- region: parent region ID
- facility: datacenter/building name
- physical_address, shipping_address
- time_zone
## Examples
- `/cmdb-site list`
- `/cmdb-site show headquarters`
- `/cmdb-site create branch-office-nyc`
- `/cmdb-site racks at headquarters`
## User Request
View File
@@ -4,19 +4,14 @@ description: Synchronize current machine state with existing NetBox record
# CMDB Machine Sync
Update an existing NetBox device record with the current machine state. Compares local system information with NetBox and applies changes.
## Skills to Load
- `skills/visual-header.md`
- `skills/sync-workflow.md`
- `skills/system-discovery.md`
- `skills/mcp-tools-reference.md`
## Usage
@@ -30,318 +25,32 @@ Update an existing NetBox device record with the current machine state. Compares
## Instructions
Execute `skills/visual-header.md` with context "Machine Sync".
Execute `skills/sync-workflow.md` which covers:
1. Device lookup via MCP
2. Current state discovery via Bash
3. Comparison of NetBox vs local state
4. Diff report generation
5. User confirmation (unless dry-run)
6. Apply updates via MCP
7. Journal entry creation
## Modes
| Mode | Behavior |
|------|----------|
| Default | Show diff, ask confirmation, apply changes |
| `--dry-run` | Show diff only, no changes applied |
| `--full` | Skip confirmation, update all fields |
### Phase 1: Device Lookup (via MCP)
First, find the existing device record:
```bash
# Get current hostname
hostname
```
```
dcim_list_devices name=<hostname>
```
**If device not found:**
- Inform user: "Device '<hostname>' not found in NetBox"
- Suggest: "Run `/cmdb-register` to register this machine first"
- Exit sync
**If device found:**
- Store device ID and all current field values
- Fetch interfaces: `dcim_list_interfaces device_id=<device_id>`
- Fetch IPs: `ipam_list_ip_addresses device_id=<device_id>`
Also check for associated clusters/VMs:
```
virt_list_clusters # Look for cluster associated with this device
virt_list_vms cluster=<cluster_id> # If cluster found
```
### Phase 2: Current State Discovery (via Bash)
Gather current system information (same as `/cmdb-register`):
```bash
# Device info
hostname
cat /etc/os-release 2>/dev/null || uname -a
nproc
free -m | awk '/Mem:/ {print $2}'
df -BG / | awk 'NR==2 {print $2}' | tr -d 'G'
# Network interfaces with IPs
ip -j addr show 2>/dev/null || ip addr show
# Docker containers
docker ps --format '{"name":"{{.Names}}","image":"{{.Image}}","status":"{{.Status}}"}' 2>/dev/null || echo "[]"
```
### Phase 3: Comparison
Compare discovered state with NetBox record:
#### 3.1 Device Attributes
| Field | Compare |
|-------|---------|
| Platform | OS version changed? |
| Status | Still active? |
| Serial | Match? |
| Description | Keep existing |
#### 3.2 Network Interfaces
| Change Type | Detection |
|-------------|-----------|
| New interface | Interface exists locally but not in NetBox |
| Removed interface | Interface in NetBox but not locally |
| Changed MAC | MAC address different |
| Interface type | Type mismatch |
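A sketch of the interface comparison using sorted name sets (assumes `jq`; `netbox_ifaces.txt` is a placeholder for the names returned by `dcim_list_interfaces`, one per line):
```bash
# Classify interfaces: present locally only -> create; present in NetBox only -> mark offline.
ip -j link show | jq -r '.[].ifname' | grep -v '^lo$' | sort > local_ifaces.txt
sort -o netbox_ifaces.txt netbox_ifaces.txt
comm -23 local_ifaces.txt netbox_ifaces.txt   # new interfaces (create in NetBox)
comm -13 local_ifaces.txt netbox_ifaces.txt   # removed interfaces (mark offline)
```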
#### 3.3 IP Addresses
| Change Type | Detection |
|-------------|-----------|
| New IP | IP exists locally but not in NetBox |
| Removed IP | IP in NetBox but not locally (on this device) |
| Primary IP changed | Default route interface changed |
#### 3.4 Docker Containers
| Change Type | Detection |
|-------------|-----------|
| New container | Container running locally but no VM in cluster |
| Stopped container | VM exists but container not running |
| Resource change | vCPUs/memory different (if trackable) |
### Phase 4: Diff Report
Present changes to user:
```markdown
## Sync Diff Report
**Device:** <hostname> (ID: <device_id>)
**NetBox URL:** https://netbox.example.com/dcim/devices/<id>/
### Device Attributes
| Field | NetBox Value | Current Value | Action |
|-------|--------------|---------------|--------|
| Platform | Ubuntu 22.04 | Ubuntu 24.04 | UPDATE |
| Status | active | active | - |
### Network Interfaces
#### New Interfaces (will create)
| Interface | Type | MAC | IPs |
|-----------|------|-----|-----|
| tailscale0 | virtual | - | 100.x.x.x/32 |
#### Removed Interfaces (will mark offline)
| Interface | Type | Reason |
|-----------|------|--------|
| eth1 | 1000base-t | Not found locally |
#### Changed Interfaces
| Interface | Field | Old | New |
|-----------|-------|-----|-----|
| eth0 | mac_address | aa:bb:cc:00:00:00 | aa:bb:cc:11:11:11 |
### IP Addresses
#### New IPs (will create)
- 192.168.1.150/24 on eth0
#### Removed IPs (will unassign)
- 192.168.1.100/24 from eth0
### Docker Containers
#### New Containers (will create VMs)
| Container | Image | Role |
|-----------|-------|------|
| media_lidarr | linuxserver/lidarr | Media Management |
#### Stopped Containers (will mark offline)
| Container | Last Status |
|-----------|-------------|
| media_bazarr | Exited |
### Summary
- **Updates:** X
- **Creates:** Y
- **Removals/Offline:** Z
```
### Phase 5: User Confirmation
If not `--dry-run`:
```
The following changes will be applied:
- Update device platform to "Ubuntu 24.04"
- Create interface "tailscale0"
- Create IP "100.x.x.x/32" on tailscale0
- Create VM "media_lidarr" in cluster
- Mark VM "media_bazarr" as offline
Proceed with sync? [Y/n]
```
**Use AskUserQuestion** to get confirmation.
### Phase 6: Apply Updates (via MCP)
Only if user confirms (or `--full` specified):
#### 6.1 Device Updates
```
dcim_update_device
id=<device_id>
platform=<new_platform_id>
# ... other changed fields
```
#### 6.2 Interface Updates
**For new interfaces:**
```
dcim_create_interface
device=<device_id>
name=<interface_name>
type=<type>
mac_address=<mac>
enabled=true
```
**For removed interfaces:**
```
dcim_update_interface
id=<interface_id>
enabled=false
description="Marked offline by cmdb-sync - interface no longer present"
```
**For changed interfaces:**
```
dcim_update_interface
id=<interface_id>
mac_address=<new_mac>
```
#### 6.3 IP Address Updates
**For new IPs:**
```
ipam_create_ip_address
address=<ip/prefix>
assigned_object_type="dcim.interface"
assigned_object_id=<interface_id>
status="active"
```
**For removed IPs:**
```
ipam_update_ip_address
id=<ip_id>
assigned_object_type=null
assigned_object_id=null
description="Unassigned by cmdb-sync"
```
#### 6.4 Primary IP Update
If primary IP changed:
```
dcim_update_device
id=<device_id>
primary_ip4=<new_primary_ip_id>
```
#### 6.5 Container/VM Updates
**For new containers:**
```
virt_create_vm
name=<container_name>
cluster=<cluster_id>
status="active"
# ... other fields
```
**For stopped containers:**
```
virt_update_vm
id=<vm_id>
status="offline"
description="Container stopped - detected by cmdb-sync"
```
### Phase 7: Journal Entry
Document the sync:
```
extras_create_journal_entry
assigned_object_type="dcim.device"
assigned_object_id=<device_id>
comments="Device synced via /cmdb-sync command\n\nChanges applied:\n- <list of changes>"
```
### Phase 8: Summary Report
```markdown
## Sync Complete
**Device:** <hostname>
**Sync Time:** <timestamp>
### Changes Applied
- Updated platform: Ubuntu 22.04 → Ubuntu 24.04
- Created interface: tailscale0 (ID: X)
- Created IP: 100.x.x.x/32 (ID: Y)
- Created VM: media_lidarr (ID: Z)
- Marked VM offline: media_bazarr (ID: W)
### Current State
- **Interfaces:** 4 (3 active, 1 offline)
- **IP Addresses:** 5
- **Containers/VMs:** 8 (7 active, 1 offline)
### Next Sync
Run `/cmdb-sync` again after:
- Adding/removing Docker containers
- Changing network configuration
- OS upgrades
```
## Dry Run Mode
If `--dry-run` specified:
- Complete Phase 1-4 (lookup, discovery, compare, diff report)
- Skip Phase 5-8 (no confirmation, no updates, no journal)
- End with: "Dry run complete. No changes applied. Run without --dry-run to apply."
## Full Sync Mode
If `--full` specified:
- Skip user confirmation
- Update all fields even if unchanged (force refresh)
- Useful for ensuring NetBox matches current state exactly
## Error Handling
| Error | Action |
|-------|--------|
| Device not found | Suggest `/cmdb-register` |
| Permission denied | Note which failed, continue others |
| Cluster not found | Offer to create or skip container sync |
| API errors | Log error, continue with remaining updates |
## User Request
View File
@@ -4,20 +4,14 @@ description: Generate infrastructure topology diagrams from NetBox data
# CMDB Topology Visualization
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 🖥️ CMDB-ASSISTANT · Topology │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the visualization.
Generate Mermaid diagrams showing infrastructure topology from NetBox.
## Skills to Load
- `skills/visual-header.md`
- `skills/topology-generation.md`
- `skills/mcp-tools-reference.md`
## Usage
```
@@ -26,168 +20,34 @@ Generate Mermaid diagrams showing infrastructure topology from NetBox.
**Views:**
- `rack <rack-name>` - Rack elevation showing devices and positions
- `network [site]` - Network topology showing device connections via cables
- `site <site-name>` - Site overview with racks and device counts
- `full` - Full infrastructure overview
## Instructions
Execute `skills/visual-header.md` with context "Topology".
Execute `skills/topology-generation.md` which covers:
- Data collection via MCP for each view type
- Mermaid diagram generation with proper shapes
- Legend and data notes
### View: Rack Elevation
Generate a rack view showing devices and their positions.
**Data Collection:**
1. Use `dcim_list_racks` to find the rack by name
2. Use `dcim_list_devices` with `rack_id` filter to get devices in rack
3. For each device, note: `position`, `u_height`, `face`, `name`, `role`
**Mermaid Output:**
```mermaid
graph TB
subgraph rack["Rack: <rack-name> (U<height>)"]
direction TB
u42["U42: empty"]
u41["U41: empty"]
u40["U40: server-01 (Server)"]
u39["U39: server-01 (cont.)"]
u38["U38: switch-01 (Switch)"]
%% ... continue for all units
end
```
**For devices spanning multiple U:**
- Mark the top U with device name and role
- Mark subsequent Us as "(cont.)" for the same device
- Empty Us should show "empty"
### View: Network Topology
Generate a network diagram showing device connections.
**Data Collection:**
1. Use `dcim_list_sites` if no site specified (get all)
2. Use `dcim_list_devices` with optional `site_id` filter
3. Use `dcim_list_cables` to get all connections
4. Use `dcim_list_interfaces` for each device to understand port names
**Mermaid Output:**
```mermaid
graph TD
subgraph site1["Site: Home"]
router1[("core-router-01<br/>Router")]
switch1[["dist-switch-01<br/>Switch"]]
server1["web-server-01<br/>Server"]
server2["db-server-01<br/>Server"]
end
router1 -->|"eth0 - eth1"| switch1
switch1 -->|"gi0/1 - eth0"| server1
switch1 -->|"gi0/2 - eth0"| server2
```
**Node shapes by role:**
- Router: `[(" ")]` (cylinder/database shape)
- Switch: `[[ ]]` (double brackets)
- Server: `[ ]` (rectangle)
- Firewall: `{{ }}` (hexagon)
- Other: `[ ]` (rectangle)
**Edge labels:** Show interface names on both ends (A-side - B-side)
### View: Site Overview
Generate a site-level view showing racks and summary counts.
**Data Collection:**
1. Use `dcim_get_site` to get site details
2. Use `dcim_list_racks` with `site_id` filter
3. Use `dcim_list_devices` with `site_id` filter for counts per rack
**Mermaid Output:**
```mermaid
graph TB
subgraph site["Site: Headquarters"]
subgraph row1["Row 1"]
rack1["Rack A1<br/>12/42 U used<br/>5 devices"]
rack2["Rack A2<br/>20/42 U used<br/>8 devices"]
end
subgraph row2["Row 2"]
rack3["Rack B1<br/>8/42 U used<br/>3 devices"]
end
end
```
### View: Full Infrastructure
Generate a high-level view of all sites and their relationships.
**Data Collection:**
1. Use `dcim_list_regions` to get hierarchy
2. Use `dcim_list_sites` to get all sites
3. Use `dcim_list_devices` with status filter for counts
**Mermaid Output:**
```mermaid
graph TB
subgraph region1["Region: Americas"]
site1["Headquarters<br/>3 racks, 25 devices"]
site2["Branch Office<br/>1 rack, 5 devices"]
end
subgraph region2["Region: Europe"]
site3["EU Datacenter<br/>10 racks, 100 devices"]
end
site1 -.->|"WAN Link"| site3
```
### Output Format
Always provide:
1. **Summary** - Brief description of what the diagram shows
2. **Mermaid Code Block** - The diagram code in a fenced code block
3. **Legend** - Explanation of shapes and colors used
4. **Data Notes** - Any data quality issues (e.g., devices without position, missing cables)
**Example Output:**
```markdown
## Network Topology: Home Site
This diagram shows the network connections between 4 devices at the Home site.
```mermaid
graph TD
router1[("core-router<br/>Router")]
switch1[["main-switch<br/>Switch"]]
server1["homelab-01<br/>Server"]
router1 -->|"eth0 - gi0/24"| switch1
switch1 -->|"gi0/1 - eth0"| server1
```
**Legend:**
- Cylinder shape: Routers
- Double brackets: Switches
- Rectangle: Servers
**Data Notes:**
- 1 device (nas-01) has no cable connections documented
```
## Examples
- `/cmdb-topology rack server-rack-01` - Rack elevation
- `/cmdb-topology network` - All network connections
- `/cmdb-topology network Home` - Network for Home site
- `/cmdb-topology site Headquarters` - Site overview
- `/cmdb-topology full` - Full infrastructure
## User Request
View File
@@ -1,176 +1,74 @@
---
description: Interactive setup wizard for cmdb-assistant plugin - configures NetBox MCP server
---
# CMDB Assistant Setup Wizard
Configure the cmdb-assistant plugin with NetBox integration.
## Skills to Load
- `skills/visual-header.md`
## Important Context
- **Uses Bash, Read, Write, AskUserQuestion tools** - NOT MCP tools
- **MCP tools unavailable until after setup + session restart**
- **Uses NetBox MCP server (separate from Gitea MCP)**
## Usage
```
/initial-setup
```
## Instructions
Execute `skills/visual-header.md` with context "Setup Wizard".
### Phase 1: Environment Validation
```bash
python3 --version
```
If below 3.10, stop setup and inform user.
### Phase 2: MCP Server Setup
1. Locate NetBox MCP server in marketplace
2. Check virtual environment exists
3. Create venv if missing: `python3 -m venv .venv && pip install -r requirements.txt`
### Phase 3: System Configuration
1. Create config directory: `mkdir -p ~/.config/claude`
2. Check `~/.config/claude/netbox.env` exists
3. If missing, ask user for NetBox API URL (must include `/api`)
4. Create config file with placeholder token
5. Instruct user to add API token manually
### Phase 4: Validation
1. Test API connection if token was added
2. Report result (200=success, 403=invalid token)
3. Display completion summary
4. Remind user to restart session for MCP tools
### Step 2.1: Locate NetBox MCP Server
```bash
find ~/.claude ~/.config/claude -name "mcp_server" -path "*netbox*" 2>/dev/null | head -5
```
If not found, ask user for marketplace location.
### Step 2.2: Check Virtual Environment
```bash
ls -la /path/to/mcp-servers/netbox/.venv/bin/python 2>/dev/null && echo "VENV_EXISTS" || echo "VENV_MISSING"
```
### Step 2.3: Create Virtual Environment (if missing)
```bash
cd /path/to/mcp-servers/netbox && python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip && pip install -r requirements.txt && deactivate
```
---
## Phase 3: System Configuration
### Step 3.1: Create Config Directory
```bash
mkdir -p ~/.config/claude
```
### Step 3.2: Check NetBox Configuration
```bash
cat ~/.config/claude/netbox.env 2>/dev/null || echo "FILE_NOT_FOUND"
```
**If file exists with valid values:** Skip to Phase 4.
**If missing or has placeholders:** Continue.
### Step 3.3: Gather NetBox Information
Use AskUserQuestion:
- Question: "What is your NetBox API URL? (e.g., https://netbox.company.com/api)"
- Header: "NetBox URL"
- Options:
- "Other (I'll provide the URL)"
Ask user to provide the URL.
**Important:** The URL must include `/api` at the end. If the user provides a URL without `/api`, append it automatically.
### Step 3.4: Create Configuration File
```bash
cat > ~/.config/claude/netbox.env << 'EOF'
# NetBox API Configuration
# Generated by cmdb-assistant /initial-setup
NETBOX_API_URL=<USER_PROVIDED_URL>
NETBOX_API_TOKEN=PASTE_YOUR_TOKEN_HERE
EOF
chmod 600 ~/.config/claude/netbox.env
```
### Step 3.5: Token Instructions
---
**Action Required: Add Your NetBox API Token**
I've created `~/.config/claude/netbox.env` but you need to add your API token manually.
**Steps:**
1. Open: `nano ~/.config/claude/netbox.env`
2. Generate token in NetBox: Admin → API Tokens → Add Token
3. Replace `PASTE_YOUR_TOKEN_HERE` with your token
4. Save the file
---
Use AskUserQuestion:
- Question: "Have you added your NetBox token?"
- Header: "Token"
- Options:
- "Yes, I've added the token"
- "Skip for now"
---
## Phase 4: Validation
### Step 4.1: Test Configuration (if token was added)
```bash
source ~/.config/claude/netbox.env && curl -s -o /dev/null -w "%{http_code}" -H "Authorization: Token $NETBOX_API_TOKEN" "$NETBOX_API_URL/"
```
**Note:** The URL already includes `/api`, so we just append `/` for the root API endpoint.
Report result:
- 200: Success
- 403: Invalid token
- Other: Connection issue
### Step 4.2: Summary
```
╔════════════════════════════════════════════════════════════╗
║ CMDB-ASSISTANT SETUP COMPLETE                               ║
╠════════════════════════════════════════════════════════════╣
║ MCP Server (NetBox): ✓ Ready                                ║
║ System Config: ✓ ~/.config/claude/netbox.env                ║
╚════════════════════════════════════════════════════════════╝
After restart, try:
- /cmdb-device <hostname>
- /cmdb-ip <address>
- /cmdb-site <name>
- /cmdb-search <query>
```
### Step 4.3: Session Restart Notice
---
**⚠️ Session Restart Required**
Restart your Claude Code session for MCP tools to become available.
**After restart, you can:**
- Run `/cmdb-device <hostname>` to look up a device
- Run `/cmdb-ip <address>` to look up an IP address
- Run `/cmdb-site <name>` to look up a site
- Run `/cmdb-search <query>` for general search
---
## Note on Project Configuration
cmdb-assistant does not require project-level configuration. The NetBox connection is system-wide and not tied to specific repositories.
## User Request
$ARGUMENTS
View File
@@ -4,20 +4,14 @@ description: Detect IP address conflicts and overlapping prefixes in NetBox
# CMDB IP Conflict Detection
## Visual Output
When executing this command, display the plugin header:
```
┌──────────────────────────────────────────────────────────────────┐
│ 🖥️ CMDB-ASSISTANT · IP Conflict Detection │
└──────────────────────────────────────────────────────────────────┘
```
Then proceed with the analysis.
Scan NetBox IPAM data to identify IP address conflicts and overlapping prefixes.
## Skills to Load
- `skills/visual-header.md`
- `skills/ip-management.md`
- `skills/mcp-tools-reference.md`
## Usage
```
@@ -33,205 +27,31 @@ Scan NetBox IPAM data to identify IP address conflicts and overlapping prefixes.
## Instructions
Execute `skills/visual-header.md` with context "IP Conflict Detection".
Execute conflict detection from `skills/ip-management.md`:
1. **Data Collection** - Fetch IPs, prefixes, VRFs via MCP
2. **Duplicate Detection** - Group by address+VRF, flag >1 record
3. **Overlap Detection** - Compare prefixes pairwise using CIDR math
4. **Orphan IP Detection** - Find IPs without containing prefix
5. **Generate Report** - Use template from skill
## Conflict Types
| Type | Severity |
|------|----------|
| Duplicate IP (same interface type) | CRITICAL |
| Duplicate IP (different roles) | HIGH |
| Overlapping prefixes (same status) | HIGH |
| Overlapping prefixes (container ok) | LOW |
| Orphan IP | MEDIUM |
### Conflict Types to Detect
#### 1. Duplicate IP Addresses
Multiple IP address records with the same address (within same VRF).
**Detection:**
1. Use `ipam_list_ip_addresses` to get all addresses
2. Group by address + VRF combination
3. Flag groups with more than one record
**Exception:** Anycast addresses may legitimately appear multiple times - check the `role` field for "anycast".
#### 2. Overlapping Prefixes
Prefixes that contain the same address space (within same VRF).
**Detection:**
1. Use `ipam_list_prefixes` to get all prefixes
2. For each prefix pair in the same VRF, check if one contains the other
3. Legitimate hierarchies should have proper parent-child relationships
**Legitimate Overlaps:**
- Parent/child prefix hierarchy (e.g., 10.0.0.0/8 contains 10.0.1.0/24)
- Different VRFs (isolated routing tables)
- Marked as "container" status
#### 3. IPs Outside Their Prefix
IP addresses that don't fall within any defined prefix.
**Detection:**
1. For each IP address, find the most specific prefix that contains it
2. Flag IPs with no matching prefix
#### 4. Prefix Overlap Across VRFs (Informational)
Same prefix appearing in multiple VRFs - not necessarily a conflict, but worth noting.
### MCP Tools
- `ipam_list_ip_addresses` - Get all IP addresses with filters:
- `address` - Filter by specific address
- `vrf_id` - Filter by VRF
- `parent` - Filter by parent prefix
- `status` - Filter by status
- `ipam_list_prefixes` - Get all prefixes with filters:
- `prefix` - Filter by prefix CIDR
- `vrf_id` - Filter by VRF
- `within` - Find prefixes within a parent
- `contains` - Find prefixes containing an address
- `ipam_list_vrfs` - List VRFs for context
- `ipam_get_ip_address` - Get detailed IP info including assigned device/interface
- `ipam_get_prefix` - Get detailed prefix info
### Workflow
1. **Data Collection**
- Fetch all IP addresses (or filtered set)
- Fetch all prefixes (or filtered set)
- Fetch VRFs for context
2. **Duplicate Detection**
- Build address map: `{address+vrf: [records]}`
- Filter for entries with >1 record
3. **Overlap Detection**
- For each VRF, compare prefixes pairwise
- Check using CIDR math: does prefix A contain prefix B or vice versa?
- Ignore legitimate hierarchies (status=container)
4. **Orphan IP Detection**
- For each IP, find containing prefix
- Flag IPs with no prefix match
5. **Generate Report**
### Report Format
```markdown
## IP Conflict Detection Report
**Generated:** [timestamp]
**Scope:** [scope parameter]
### Summary
| Check | Status | Count |
|-------|--------|-------|
| Duplicate IPs | [PASS/FAIL] | X |
| Overlapping Prefixes | [PASS/FAIL] | Y |
| Orphan IPs | [PASS/FAIL] | Z |
| Total Issues | - | N |
### Critical Issues
#### Duplicate IP Addresses
| Address | VRF | Count | Assigned To |
|---------|-----|-------|-------------|
| 10.0.1.50/24 | Global | 2 | server-01 (eth0), server-02 (eth0) |
| 192.168.1.100/24 | Global | 2 | router-01 (gi0/1), switch-01 (vlan10) |
**Impact:** IP conflicts cause network connectivity issues. Devices will have intermittent connectivity.
**Resolution:**
- Determine which device should have the IP
- Update or remove the duplicate assignment
- Consider IP reservation to prevent future conflicts
#### Overlapping Prefixes
| Prefix 1 | Prefix 2 | VRF | Type |
|----------|----------|-----|------|
| 10.0.0.0/24 | 10.0.0.0/25 | Global | Unstructured overlap |
| 192.168.0.0/16 | 192.168.1.0/24 | Production | Missing container flag |
**Impact:** Overlapping prefixes can cause routing ambiguity and IP management confusion.
**Resolution:**
- For legitimate hierarchies: Mark parent prefix as status="container"
- For accidental overlaps: Consolidate or re-address one prefix
### Warnings
#### IPs Without Prefix
| Address | VRF | Assigned To | Nearest Prefix |
|---------|-----|-------------|----------------|
| 172.16.5.10/24 | Global | server-03 (eth0) | None found |
**Impact:** IPs without a prefix bypass IPAM allocation controls.
**Resolution:**
- Create appropriate prefix to contain the IP
- Or update IP to correct address within existing prefix
### Informational
#### Same Prefix in Multiple VRFs
| Prefix | VRFs | Purpose |
|--------|------|---------|
| 10.0.0.0/24 | Global, DMZ, Internal | [Check if intentional] |
### Statistics
| Metric | Value |
|--------|-------|
| Total IP Addresses | X |
| Total Prefixes | Y |
| Total VRFs | Z |
| Utilization (IPs/Prefix space) | W% |
### Remediation Commands
```
# Remove duplicate IP (keep server-01's assignment)
ipam_delete_ip_address id=123
# Mark prefix as container
ipam_update_prefix id=456 status=container
# Create missing prefix for orphan IP
ipam_create_prefix prefix=172.16.5.0/24 status=active
```
```
### CIDR Math Reference
For overlap detection, use these rules:
- Prefix A **contains** Prefix B if: A.network <= B.network AND A.broadcast >= B.broadcast
- Two prefixes **overlap** if: A.network <= B.broadcast AND B.network <= A.broadcast
**Example:**
- 10.0.0.0/8 contains 10.0.1.0/24 (legitimate hierarchy)
- 10.0.0.0/24 and 10.0.0.128/25 overlap (10.0.0.128/25 is within 10.0.0.0/24)
### Severity Levels
| Issue | Severity | Description |
|-------|----------|-------------|
| Duplicate IP (same interface type) | CRITICAL | Active conflict, causes outages |
| Duplicate IP (different roles) | HIGH | Potential conflict |
| Overlapping prefixes (same status) | HIGH | IPAM management issue |
| Overlapping prefixes (container ok) | LOW | May need status update |
| Orphan IP | MEDIUM | Bypasses IPAM controls |
## Examples
- `/ip-conflicts` - Full scan
- `/ip-conflicts addresses` - Duplicate IPs only
- `/ip-conflicts vrf Production` - Scan specific VRF
## User Request
View File
@@ -2,8 +2,13 @@
"hooks": { "hooks": {
"SessionStart": [ "SessionStart": [
{ {
"type": "command", "matcher": "",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/startup-check.sh" "hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/startup-check.sh"
}
]
} }
], ],
"PreToolUse": [ "PreToolUse": [
View File
@@ -25,11 +25,24 @@ if [[ -z "${NETBOX_API_URL:-}" ]] || [[ -z "${NETBOX_API_TOKEN:-}" ]]; then
  exit 0
fi
# Helper function to make authenticated API calls
# Token passed via curl config to avoid exposure in process listings
netbox_curl() {
local url="$1"
curl -s -K - <<EOF 2>/dev/null
-H "Authorization: Token ${NETBOX_API_TOKEN}"
-H "Accept: application/json"
url = "${url}"
EOF
}
# Quick API connectivity test (5s timeout)
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" -m 5 -K - <<EOF 2>/dev/null || echo "000"
-H "Authorization: Token ${NETBOX_API_TOKEN}"
-H "Accept: application/json"
url = "${NETBOX_API_URL}/"
EOF
)
if [[ "$HTTP_CODE" == "000" ]]; then if [[ "$HTTP_CODE" == "000" ]]; then
echo "$PREFIX NetBox API unreachable (timeout/connection error)" echo "$PREFIX NetBox API unreachable (timeout/connection error)"
@@ -40,10 +53,12 @@ elif [[ "$HTTP_CODE" != "200" ]]; then
fi
# Check for VMs without site assignment (data quality)
VMS_RESPONSE=$(curl -s -m 5 -K - <<EOF 2>/dev/null || echo '{"count":0}'
-H "Authorization: Token ${NETBOX_API_TOKEN}"
-H "Accept: application/json"
url = "${NETBOX_API_URL}/virtualization/virtual-machines/?site__isnull=true&limit=1"
EOF
)
VMS_NO_SITE=$(echo "$VMS_RESPONSE" | grep -o '"count":[0-9]*' | cut -d: -f2 || echo "0")
@@ -52,10 +67,12 @@ if [[ "$VMS_NO_SITE" -gt 0 ]]; then
fi
# Check for devices without platform
DEVICES_RESPONSE=$(curl -s -m 5 -K - <<EOF 2>/dev/null || echo '{"count":0}'
-H "Authorization: Token ${NETBOX_API_TOKEN}"
-H "Accept: application/json"
url = "${NETBOX_API_URL}/dcim/devices/?platform__isnull=true&limit=1"
EOF
)
DEVICES_NO_PLATFORM=$(echo "$DEVICES_RESPONSE" | grep -o '"count":[0-9]*' | cut -d: -f2 || echo "0")
View File
@@ -1 +0,0 @@
../../../mcp-servers/netbox
View File
@@ -0,0 +1,163 @@
# Audit Workflow Skill
How to audit NetBox data quality.
## Prerequisites
Load these skills:
- `netbox-patterns` - Best practices reference
- `mcp-tools-reference` - MCP tool reference
## Data Collection
```
virt_list_vms
dcim_list_devices
virt_list_clusters
dcim_list_sites
tenancy_list_tenants
dcim_list_device_roles
dcim_list_platforms
```
## Quality Checks by Severity
### CRITICAL (must fix immediately)
| Check | Detection |
|-------|-----------|
| VMs without cluster | `cluster` is null AND `site` is null |
| Devices without site | `site` is null |
| Active devices without primary IP | `status=active` AND `primary_ip4` is null AND `primary_ip6` is null |
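For example, the third check can be scripted against exported device data (a sketch; `devices.json` is a placeholder for `dcim_list_devices` output and assumes the raw NetBox field shapes, where `status` is an object with a `value`):
```bash
# Active devices with neither an IPv4 nor an IPv6 primary address.
jq -r '.[] | select(.status.value == "active" and .primary_ip4 == null and .primary_ip6 == null) | .name' devices.json
```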
### HIGH (should fix soon)
| Check | Detection |
|-------|-----------|
| VMs without site | No site (neither direct nor via cluster.site) |
| VMs without tenant | `tenant` is null |
| Devices without platform | `platform` is null |
| Clusters not scoped to site | `site` is null on cluster |
| VMs without role | `role` is null |
### MEDIUM (plan to address)
| Check | Detection |
|-------|-----------|
| Inconsistent naming | Names don't match patterns |
| Role fragmentation | >10 device roles with <3 assignments each |
| Missing tags on production | Active resources without tags |
| Mixed naming separators | Some `_`, others `-` |
### LOW (informational)
| Check | Detection |
|-------|-----------|
| Docker containers as VMs | Cluster type is "Docker Compose" |
| VMs without description | `description` is empty |
| Sites without physical address | `physical_address` is empty |
| Devices without serial | `serial` is empty |
## Naming Convention Analysis
### Expected Patterns
| Object Type | Pattern | Example |
|-------------|---------|---------|
| Devices | `{role}-{location}-{number}` | `web-dc1-01` |
| VMs | `{env}-{app}-{number}` | `prod-api-01` |
| Clusters | `{site}-{type}` | `home-docker` |
### Analysis Steps
1. Extract naming patterns from existing objects
2. Identify dominant patterns (most common)
3. Flag outliers that don't match
4. Suggest standardization
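A sketch for step 3 (`names.txt` is a placeholder for one object name per line; the regex encodes the `{role}-{location}-{number}` device pattern above):
```bash
# Print names that do not match the expected device pattern, e.g. "web-dc1-01".
grep -Ev '^[a-z]+-[a-z0-9]+-[0-9]+$' names.txt || echo "all names conform"
```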
## Role Fragmentation Analysis
### Red Flags
- More than 15 highly specific roles
- Roles with technology in name (use platform instead)
- Roles that duplicate functionality
- Single-use roles (only 1 device/VM)
### Recommended Consolidation
Use general roles + platform/tags for specificity:
- Instead of `nginx-web-server`, use `web-server` + platform `nginx`
## Report Template
```markdown
## CMDB Data Quality Audit Report
**Generated:** [timestamp]
**Scope:** [scope parameter]
### Summary
| Metric | Count |
|--------|-------|
| Total VMs | X |
| Total Devices | Y |
| Total Clusters | Z |
| **Total Issues** | **N** |
| Severity | Count |
|----------|-------|
| Critical | A |
| High | B |
| Medium | C |
| Low | D |
### Critical Issues
[List each with specific object names and IDs]
- VM `HotServ` (ID: 1) - No cluster or site assignment
- Device `server-01` (ID: 5) - No site assignment
### High Issues
[List each with specific object names]
### Medium Issues
[Grouped by category with counts]
### Recommendations
1. **[Most impactful fix]** - affects N objects
2. **[Second priority]** - affects M objects
### Quick Fixes
Commands to fix common issues:
```
# Assign site to VM
virt_update_vm id=X site=Y
# Assign platform to device
dcim_update_device id=X platform=Y
```
### Next Steps
- Run `/cmdb-register` to properly register new machines
- Use `/cmdb-sync` to update existing registrations
- Consider bulk updates via NetBox web UI for >10 items
```
## Scope-Specific Focus
| Scope | Focus |
|-------|-------|
| `all` | Full audit across all categories |
| `vms` | Virtual Machine checks only |
| `devices` | Device checks only |
| `naming` | Naming convention analysis |
| `roles` | Role fragmentation analysis |
View File
@@ -0,0 +1,130 @@
# Change Audit Skill
Audit NetBox changes for tracking and compliance.
## Prerequisites
Load skill: `mcp-tools-reference`
## MCP Tools
| Tool | Purpose | Parameters |
|------|---------|------------|
| `extras_list_object_changes` | List changes | `user_id`, `changed_object_type`, `action` |
| `extras_get_object_change` | Get change details | `id` |
## Common Object Types
| Category | Object Types |
|----------|--------------|
| DCIM | `dcim.device`, `dcim.interface`, `dcim.site`, `dcim.rack`, `dcim.cable` |
| IPAM | `ipam.ipaddress`, `ipam.prefix`, `ipam.vlan`, `ipam.vrf` |
| Virtualization | `virtualization.virtualmachine`, `virtualization.cluster` |
| Tenancy | `tenancy.tenant`, `tenancy.contact` |
## Audit Workflow
1. **Parse user request** - Determine filters
2. **Query object changes** - `extras_list_object_changes`
3. **Enrich data** - Fetch detailed records if needed
4. **Analyze patterns** - Identify bulk operations, unusual activity
5. **Generate report** - Structured format
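A sketch for the timeline part of step 4 (`changes.json` is a placeholder for the `extras_list_object_changes` result as a JSON array; assumes each record carries an ISO-8601 `time` field):
```bash
# Changes per day, most active days first.
jq -r '.[].time[:10]' changes.json | sort | uniq -c | sort -rn
```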
## Report Template
```markdown
## NetBox Change Audit Report
**Generated:** [timestamp]
**Period:** [date range or "All time"]
**Filters:** [applied filters]
### Summary
| Metric | Count |
|--------|-------|
| Total Changes | X |
| Creates | Y |
| Updates | Z |
| Deletes | W |
| Unique Users | N |
| Object Types | M |
### Changes by Action
#### Created Objects (Y)
| Time | User | Object Type | Object | Details |
|------|------|-------------|--------|---------|
| 2024-01-15 14:30 | admin | dcim.device | server-01 | Created device |
#### Updated Objects (Z)
| Time | User | Object Type | Object | Changed Fields |
|------|------|-------------|--------|----------------|
| 2024-01-15 15:00 | john | ipam.ipaddress | 10.0.1.50/24 | status, description |
#### Deleted Objects (W)
| Time | User | Object Type | Object | Details |
|------|------|-------------|--------|---------|
| 2024-01-14 09:00 | admin | dcim.interface | eth2 | Removed from server-01 |
### Changes by User
| User | Creates | Updates | Deletes | Total |
|------|---------|---------|---------|-------|
| admin | 5 | 10 | 2 | 17 |
| john | 3 | 8 | 0 | 11 |
### Changes by Object Type
| Object Type | Creates | Updates | Deletes | Total |
|-------------|---------|---------|---------|-------|
| dcim.device | 2 | 5 | 0 | 7 |
| ipam.ipaddress | 4 | 3 | 1 | 8 |
### Timeline
```
2024-01-15: ######## 8 changes
2024-01-14: #### 4 changes
2024-01-13: ## 2 changes
```
### Notable Patterns
- **Bulk operations:** [Many changes in short time]
- **Unusual activity:** [Unexpected deletions, after-hours changes]
- **Missing audit trail:** [Expected changes not logged]
### Recommendations
1. [Security or process recommendations based on findings]
```
## Enriching Change Details
For detailed audit, use `extras_get_object_change` to see:
- `prechange_data` - Object state before change
- `postchange_data` - Object state after change
- `request_id` - Links related changes in same request
## Security Audit Mode
When user asks for "security audit" or "compliance report":
1. Focus on deletions and permission-sensitive changes
2. Highlight changes to critical objects (firewalls, VRFs, prefixes)
3. Flag changes outside business hours
4. Identify users with high change counts
## Filter Examples
| Request | Filter |
|---------|--------|
| Recent changes | None (last 24 hours default) |
| Last 7 days | Filter by `time` field |
| By user | `user_id=<id>` |
| Device changes | `changed_object_type=dcim.device` |
| All deletions | `action=delete` |
View File
@@ -0,0 +1,177 @@
# Device Registration Skill
How to register devices into NetBox.
## Prerequisites
Load these skills:
- `system-discovery` - Bash commands for gathering system info
- `netbox-patterns` - Best practices for data quality
- `mcp-tools-reference` - MCP tool reference
## Registration Workflow
### Phase 1: System Discovery
Use commands from `system-discovery` skill to gather:
- Hostname, OS, hardware model, serial number
- CPU, memory, disk
- Network interfaces with IPs
- Running Docker containers
### Phase 2: Pre-Registration Checks
1. **Check if device exists:**
```
dcim_list_devices name=<hostname>
```
If exists, suggest `/cmdb-sync` instead.
2. **Verify/Create site:**
```
dcim_list_sites name=<site-name>
```
If not found, list available sites or offer to create.
3. **Verify/Create platform:**
```
dcim_list_platforms name=<platform-name>
```
Create if not exists with `dcim_create_platform`.
4. **Verify/Create device role:**
```
dcim_list_device_roles name=<role-name>
```
### Phase 3: Device Creation
1. **Get/Create manufacturer and device type:**
```
dcim_list_manufacturers name="<manufacturer>"
dcim_list_device_types manufacturer_id=X model="<model>"
```
2. **Create device:**
```
dcim_create_device
name=<hostname>
device_type=<device_type_id>
role=<role_id>
site=<site_id>
platform=<platform_id>
tenant=<tenant_id> # if provided
serial=<serial>
description="Registered via cmdb-assistant"
```
3. **Create interfaces:**
For each network interface:
```
dcim_create_interface
device=<device_id>
name=<interface_name>
type=<type>
mac_address=<mac>
enabled=true
```
4. **Create IP addresses:**
For each IP:
```
ipam_create_ip_address
address=<ip/prefix>
assigned_object_type="dcim.interface"
assigned_object_id=<interface_id>
status="active"
```
5. **Set primary IP:**
```
dcim_update_device
id=<device_id>
primary_ip4=<primary_ip_id>
```
### Phase 4: Container Registration (if Docker)
1. **Create/Get cluster type:**
```
virt_list_cluster_types name="Docker Compose"
virt_create_cluster_type name="Docker Compose" slug="docker-compose"
```
2. **Create cluster:**
```
virt_create_cluster
name=<project-name>
type=<cluster_type_id>
site=<site_id>
description="Docker Compose stack on <hostname>"
```
3. **Create VMs for containers:**
For each running container:
```
virt_create_vm
name=<container_name>
cluster=<cluster_id>
site=<site_id>
role=<role_id>
status="active"
vcpus=<cpu_shares>
memory=<memory_mb>
disk=<disk_gb>
```
### Phase 5: Documentation
Add journal entry:
```
extras_create_journal_entry
assigned_object_type="dcim.device"
assigned_object_id=<device_id>
comments="Device registered via /cmdb-register command\n\nDiscovered:\n- X network interfaces\n- Y IP addresses\n- Z Docker containers"
```
## Summary Report Template
```markdown
## Machine Registration Complete
### Device Created
- **Name:** <hostname>
- **Site:** <site>
- **Platform:** <platform>
- **Role:** <role>
- **ID:** <device_id>
- **URL:** https://netbox.example.com/dcim/devices/<id>/
### Network Interfaces
| Interface | Type | MAC | IP Address |
|-----------|------|-----|------------|
| eth0 | 1000base-t | aa:bb:cc:dd:ee:ff | 192.168.1.100/24 |
### Primary IP: 192.168.1.100
### Docker Containers Registered (if applicable)
**Cluster:** <cluster_name> (ID: <cluster_id>)
| Container | Role | vCPUs | Memory | Status |
|-----------|------|-------|--------|--------|
| media_jellyfin | Media Server | 2.0 | 2048MB | Active |
### Next Steps
- Run `/cmdb-sync` periodically to keep data current
- Run `/cmdb-audit` to check data quality
- Add tags for classification
```
## Error Handling
| Error | Action |
|-------|--------|
| Device already exists | Suggest `/cmdb-sync` or ask to proceed |
| Site not found | List available sites, offer to create new |
| Docker not available | Skip container registration, note in summary |
| Permission denied | Note which operations failed, suggest fixes |
View File
@@ -0,0 +1,162 @@
# IP Management Skill
IP address and prefix management in NetBox.
## Prerequisites
Load skill: `mcp-tools-reference`
## IPAM Operations
### Prefix Management
| Action | Tool | Key Parameters |
|--------|------|----------------|
| List prefixes | `ipam_list_prefixes` | `prefix`, `vrf_id`, `within`, `contains` |
| Get details | `ipam_get_prefix` | `id` |
| Find available child | `ipam_list_available_prefixes` | `prefix_id` |
| Create prefix | `ipam_create_prefix` | `prefix`, `status`, `site`, `vrf` |
| Allocate child | `ipam_create_available_prefix` | `prefix_id`, `prefix_length` |
### IP Address Management
| Action | Tool | Key Parameters |
|--------|------|----------------|
| List IPs | `ipam_list_ip_addresses` | `address`, `vrf_id`, `device_id` |
| Get details | `ipam_get_ip_address` | `id` |
| Find available | `ipam_list_available_ips` | `prefix_id` |
| Create IP | `ipam_create_ip_address` | `address`, `assigned_object_type`, `assigned_object_id` |
| Allocate next | `ipam_create_available_ip` | `prefix_id` |
| Assign to interface | `ipam_update_ip_address` | `id`, `assigned_object_id` |
### VLAN and VRF
| Action | Tool |
|--------|------|
| List VLANs | `ipam_list_vlans` |
| Get VLAN | `ipam_get_vlan` |
| Create VLAN | `ipam_create_vlan` |
| List VRFs | `ipam_list_vrfs` |
| Get VRF | `ipam_get_vrf` |
## IP Allocation Workflow
1. **Find available IPs in target prefix:**
```
ipam_list_available_ips prefix_id=<id>
```
2. **Create the IP address:**
```
ipam_create_ip_address
address=<ip/prefix>
assigned_object_type="dcim.interface"
assigned_object_id=<interface_id>
status="active"
```
3. **Set as primary (if needed):**
```
dcim_update_device id=<device_id> primary_ip4=<ip_id>
```
## IP Conflict Detection
### Conflict Types
1. **Duplicate IP Addresses**
- Multiple records with same address in same VRF
- Exception: Anycast addresses (check `role` field)
2. **Overlapping Prefixes**
- Prefixes containing same address space in same VRF
- Legitimate: Parent/child hierarchy, different VRFs, "container" status
3. **IPs Outside Prefix**
- IP addresses not within any defined prefix
4. **Same Prefix in Multiple VRFs** (informational)
### Detection Workflow
1. **Duplicate Detection:**
- Get all addresses: `ipam_list_ip_addresses`
- Group by address + VRF
- Flag groups with >1 record
2. **Overlap Detection:**
- Get all prefixes: `ipam_list_prefixes`
- For each VRF, compare prefixes pairwise
- Check if prefix A contains prefix B or vice versa
- Ignore legitimate hierarchies (status=container)
3. **Orphan IP Detection:**
- For each IP, find containing prefix
- Flag IPs with no prefix match
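A sketch of the duplicate grouping (`ips.json` is a placeholder for the `ipam_list_ip_addresses` result as a JSON array; assumes `vrf` is null or an object with a `name`):
```bash
# Count records per "address + VRF" pair and report anything seen more than once.
jq -r '.[] | "\(.address) \(.vrf.name // "Global")"' ips.json \
  | sort | uniq -c | awk '$1 > 1 {print "DUPLICATE:", $2, "in VRF", $3}'
```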
### CIDR Math Rules
- Prefix A **contains** Prefix B if: `A.network <= B.network AND A.broadcast >= B.broadcast`
- Two prefixes **overlap** if: `A.network <= B.broadcast AND B.network <= A.broadcast`
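A pure-shell sketch of those two rules for IPv4 (a worked check under the stated CIDR math, not a production validator):
```bash
# Integer bounds of an IPv4 prefix, then the pairwise overlap test from above.
ip_to_int() { local IFS=. a b c d; read -r a b c d <<< "$1"; echo $(( (a<<24) | (b<<16) | (c<<8) | d )); }
prefix_bounds() {                      # prints "<network> <broadcast>" as integers
  local ip=${1%/*} len=${1#*/} base mask
  base=$(ip_to_int "$ip")
  mask=$(( len == 0 ? 0 : (0xFFFFFFFF << (32 - len)) & 0xFFFFFFFF ))
  echo "$(( base & mask )) $(( (base & mask) | (0xFFFFFFFF & ~mask) ))"
}
overlaps() {                           # exit 0 if the two prefixes overlap
  read -r a_net a_bcast <<< "$(prefix_bounds "$1")"
  read -r b_net b_bcast <<< "$(prefix_bounds "$2")"
  (( a_net <= b_bcast && b_net <= a_bcast ))
}
overlaps 10.0.0.0/24 10.0.0.128/25 && echo "overlap" || echo "disjoint"   # -> overlap
```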
### Severity Levels
| Issue | Severity |
|-------|----------|
| Duplicate IP (same interface type) | CRITICAL |
| Duplicate IP (different roles) | HIGH |
| Overlapping prefixes (same status) | HIGH |
| Overlapping prefixes (container ok) | LOW |
| Orphan IP | MEDIUM |
## Conflict Report Template
````markdown
## IP Conflict Detection Report
**Generated:** [timestamp]
**Scope:** [scope parameter]
### Summary
| Check | Status | Count |
|-------|--------|-------|
| Duplicate IPs | [PASS/FAIL] | X |
| Overlapping Prefixes | [PASS/FAIL] | Y |
| Orphan IPs | [PASS/FAIL] | Z |
### Critical Issues
#### Duplicate IP Addresses
| Address | VRF | Count | Assigned To |
|---------|-----|-------|-------------|
| 10.0.1.50/24 | Global | 2 | server-01, server-02 |
**Resolution:**
- Determine which device should have the IP
- Update or remove the duplicate
#### Overlapping Prefixes
| Prefix 1 | Prefix 2 | VRF | Type |
|----------|----------|-----|------|
| 10.0.0.0/24 | 10.0.0.0/25 | Global | Unstructured |
**Resolution:**
- For legitimate hierarchies: Mark parent as status="container"
- For accidental: Consolidate or re-address
### Remediation Commands
```
# Remove duplicate IP
ipam_delete_ip_address id=123
# Mark prefix as container
ipam_update_prefix id=456 status=container
# Create missing prefix
ipam_create_prefix prefix=172.16.5.0/24 status=active
```
````


@@ -0,0 +1,281 @@
# NetBox MCP Tools Reference
Complete reference for NetBox MCP tools organized by category.
## DCIM (Data Center Infrastructure Management)
### Sites and Locations
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `dcim_list_sites` | List all sites | `name`, `status`, `region_id` |
| `dcim_get_site` | Get site details | `id` |
| `dcim_create_site` | Create new site | `name`, `slug`, `status` |
| `dcim_update_site` | Update site | `id`, fields to update |
| `dcim_delete_site` | Delete site | `id` |
| `dcim_list_locations` | List locations within sites | `site_id`, `parent_id` |
| `dcim_get_location` | Get location details | `id` |
| `dcim_create_location` | Create location | `name`, `slug`, `site` |
| `dcim_update_location` | Update location | `id`, fields to update |
| `dcim_delete_location` | Delete location | `id` |
| `dcim_list_regions` | List regions | `name` |
| `dcim_get_region` | Get region details | `id` |
| `dcim_create_region` | Create region | `name`, `slug` |
| `dcim_update_region` | Update region | `id`, fields to update |
| `dcim_delete_region` | Delete region | `id` |
### Racks
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `dcim_list_racks` | List racks | `site_id`, `location_id`, `name` |
| `dcim_get_rack` | Get rack details | `id` |
| `dcim_create_rack` | Create rack | `name`, `site`, `u_height` |
| `dcim_update_rack` | Update rack | `id`, fields to update |
| `dcim_delete_rack` | Delete rack | `id` |
### Devices
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `dcim_list_devices` | List devices | `name`, `site_id`, `role_id`, `status` |
| `dcim_get_device` | Get device details | `id` |
| `dcim_create_device` | Create device | `name`, `device_type`, `role`, `site` |
| `dcim_update_device` | Update device | `id`, `primary_ip4`, etc. |
| `dcim_delete_device` | Delete device | `id` |
### Device Types and Roles
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `dcim_list_device_types` | List device types | `manufacturer_id`, `model` |
| `dcim_get_device_type` | Get type details | `id` |
| `dcim_create_device_type` | Create device type | `manufacturer`, `model`, `slug` |
| `dcim_update_device_type` | Update device type | `id`, fields |
| `dcim_delete_device_type` | Delete device type | `id` |
| `dcim_list_device_roles` | List device roles | `name` |
| `dcim_get_device_role` | Get role details | `id` |
| `dcim_create_device_role` | Create device role | `name`, `slug` |
| `dcim_update_device_role` | Update device role | `id`, fields |
| `dcim_delete_device_role` | Delete device role | `id` |
### Manufacturers and Platforms
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `dcim_list_manufacturers` | List manufacturers | `name` |
| `dcim_get_manufacturer` | Get manufacturer details | `id` |
| `dcim_create_manufacturer` | Create manufacturer | `name`, `slug` |
| `dcim_update_manufacturer` | Update manufacturer | `id`, fields |
| `dcim_delete_manufacturer` | Delete manufacturer | `id` |
| `dcim_list_platforms` | List platforms | `name` |
| `dcim_get_platform` | Get platform details | `id` |
| `dcim_create_platform` | Create platform | `name`, `slug` |
| `dcim_update_platform` | Update platform | `id`, fields |
| `dcim_delete_platform` | Delete platform | `id` |
### Interfaces and Cables
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `dcim_list_interfaces` | List interfaces | `device_id`, `name`, `type` |
| `dcim_get_interface` | Get interface details | `id` |
| `dcim_create_interface` | Create interface | `device`, `name`, `type` |
| `dcim_update_interface` | Update interface | `id`, `enabled`, `mac_address` |
| `dcim_delete_interface` | Delete interface | `id` |
| `dcim_list_cables` | List cables | `device_id`, `site_id` |
| `dcim_get_cable` | Get cable details | `id` |
| `dcim_create_cable` | Create cable | `a_terminations`, `b_terminations` |
| `dcim_update_cable` | Update cable | `id`, fields |
| `dcim_delete_cable` | Delete cable | `id` |
### Power
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `dcim_list_power_panels` | List power panels | `site_id` |
| `dcim_get_power_panel` | Get panel details | `id` |
| `dcim_create_power_panel` | Create power panel | `name`, `site` |
| `dcim_list_power_feeds` | List power feeds | `power_panel_id` |
| `dcim_get_power_feed` | Get feed details | `id` |
| `dcim_create_power_feed` | Create power feed | `name`, `power_panel`, `supply` |
### Other DCIM
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `dcim_list_virtual_chassis` | List virtual chassis | (varies) |
| `dcim_get_virtual_chassis` | Get virtual chassis | `id` |
| `dcim_list_inventory_items` | List inventory items | `device_id` |
| `dcim_get_inventory_item` | Get inventory item | `id` |
## IPAM (IP Address Management)
### Prefixes
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `ipam_list_prefixes` | List prefixes | `prefix`, `vrf_id`, `within`, `contains` |
| `ipam_get_prefix` | Get prefix details | `id` |
| `ipam_create_prefix` | Create prefix | `prefix`, `status`, `site`, `vrf` |
| `ipam_update_prefix` | Update prefix | `id`, `status`, etc. |
| `ipam_delete_prefix` | Delete prefix | `id` |
| `ipam_list_available_prefixes` | List available child prefixes | `prefix_id` |
| `ipam_create_available_prefix` | Allocate from parent | `prefix_id`, `prefix_length` |
### IP Addresses
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `ipam_list_ip_addresses` | List IP addresses | `address`, `vrf_id`, `device_id`, `status` |
| `ipam_get_ip_address` | Get IP details | `id` |
| `ipam_create_ip_address` | Create IP address | `address`, `assigned_object_type`, `assigned_object_id` |
| `ipam_update_ip_address` | Update IP address | `id`, `status`, etc. |
| `ipam_delete_ip_address` | Delete IP address | `id` |
| `ipam_list_available_ips` | List available IPs in prefix | `prefix_id` |
| `ipam_create_available_ip` | Allocate next available | `prefix_id` |
### VLANs and VRFs
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `ipam_list_vlans` | List VLANs | `vid`, `name`, `site_id` |
| `ipam_get_vlan` | Get VLAN details | `id` |
| `ipam_create_vlan` | Create VLAN | `vid`, `name`, `site` |
| `ipam_update_vlan` | Update VLAN | `id`, fields |
| `ipam_delete_vlan` | Delete VLAN | `id` |
| `ipam_list_vlan_groups` | List VLAN groups | `site_id` |
| `ipam_get_vlan_group` | Get VLAN group | `id` |
| `ipam_create_vlan_group` | Create VLAN group | `name`, `slug`, `scope_type` |
| `ipam_list_vrfs` | List VRFs | `name` |
| `ipam_get_vrf` | Get VRF details | `id` |
| `ipam_create_vrf` | Create VRF | `name`, `rd` |
| `ipam_update_vrf` | Update VRF | `id`, fields |
| `ipam_delete_vrf` | Delete VRF | `id` |
### Other IPAM
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `ipam_list_asns` | List ASNs | (varies) |
| `ipam_get_asn` | Get ASN details | `id` |
| `ipam_create_asn` | Create ASN | `asn`, `rir` |
| `ipam_list_rirs` | List RIRs | `name` |
| `ipam_get_rir` | Get RIR details | `id` |
| `ipam_list_aggregates` | List aggregates | `prefix`, `rir_id` |
| `ipam_get_aggregate` | Get aggregate | `id` |
| `ipam_create_aggregate` | Create aggregate | `prefix`, `rir` |
| `ipam_list_services` | List services | `device_id`, `name` |
| `ipam_get_service` | Get service details | `id` |
| `ipam_create_service` | Create service | `name`, `ports`, `protocol` |
## Virtualization
### Clusters
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `virt_list_cluster_types` | List cluster types | `name` |
| `virt_get_cluster_type` | Get cluster type | `id` |
| `virt_create_cluster_type` | Create cluster type | `name`, `slug` |
| `virt_list_cluster_groups` | List cluster groups | `name` |
| `virt_get_cluster_group` | Get cluster group | `id` |
| `virt_create_cluster_group` | Create cluster group | `name`, `slug` |
| `virt_list_clusters` | List clusters | `name`, `site_id`, `type_id` |
| `virt_get_cluster` | Get cluster details | `id` |
| `virt_create_cluster` | Create cluster | `name`, `type`, `site` |
| `virt_update_cluster` | Update cluster | `id`, fields |
| `virt_delete_cluster` | Delete cluster | `id` |
### Virtual Machines
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `virt_list_vms` | List VMs | `name`, `cluster_id`, `site_id`, `status` |
| `virt_get_vm` | Get VM details | `id` |
| `virt_create_vm` | Create VM | `name`, `cluster`, `site`, `status` |
| `virt_update_vm` | Update VM | `id`, `status`, etc. |
| `virt_delete_vm` | Delete VM | `id` |
| `virt_list_vm_ifaces` | List VM interfaces | `virtual_machine_id` |
| `virt_get_vm_iface` | Get VM interface | `id` |
| `virt_create_vm_iface` | Create VM interface | `virtual_machine`, `name` |
## Circuits
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `circuits_list_providers` | List providers | `name` |
| `circuits_get_provider` | Get provider | `id` |
| `circuits_create_provider` | Create provider | `name`, `slug` |
| `circuits_update_provider` | Update provider | `id`, fields |
| `circuits_delete_provider` | Delete provider | `id` |
| `circ_list_types` | List circuit types | `name` |
| `circ_get_type` | Get circuit type | `id` |
| `circ_create_type` | Create circuit type | `name`, `slug` |
| `circuits_list_circuits` | List circuits | `provider_id`, `type_id` |
| `circuits_get_circuit` | Get circuit | `id` |
| `circuits_create_circuit` | Create circuit | `cid`, `provider`, `type` |
| `circuits_update_circuit` | Update circuit | `id`, fields |
| `circuits_delete_circuit` | Delete circuit | `id` |
| `circ_list_terminations` | List terminations | `circuit_id` |
| `circ_get_termination` | Get termination | `id` |
| `circ_create_termination` | Create termination | `circuit`, `site`, `term_side` |
## Tenancy
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `tenancy_list_tenant_groups` | List tenant groups | `name` |
| `tenancy_get_tenant_group` | Get tenant group | `id` |
| `tenancy_create_tenant_group` | Create tenant group | `name`, `slug` |
| `tenancy_list_tenants` | List tenants | `name`, `group_id` |
| `tenancy_get_tenant` | Get tenant | `id` |
| `tenancy_create_tenant` | Create tenant | `name`, `slug` |
| `tenancy_update_tenant` | Update tenant | `id`, fields |
| `tenancy_delete_tenant` | Delete tenant | `id` |
| `tenancy_list_contacts` | List contacts | `name` |
| `tenancy_get_contact` | Get contact | `id` |
| `tenancy_create_contact` | Create contact | `name` |
## VPN
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `vpn_list_tunnels` | List VPN tunnels | `name` |
| `vpn_get_tunnel` | Get tunnel | `id` |
| `vpn_create_tunnel` | Create tunnel | `name`, `status` |
| `vpn_list_l2vpns` | List L2VPNs | `name` |
| `vpn_get_l2vpn` | Get L2VPN | `id` |
| `vpn_create_l2vpn` | Create L2VPN | `name`, `type` |
| `vpn_list_ike_policies` | List IKE policies | (varies) |
| `vpn_list_ipsec_policies` | List IPSec policies | (varies) |
| `vpn_list_ipsec_profiles` | List IPSec profiles | (varies) |
## Wireless
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `wlan_list_groups` | List WLAN groups | `name` |
| `wlan_get_group` | Get WLAN group | `id` |
| `wlan_create_group` | Create WLAN group | `name`, `slug` |
| `wlan_list_lans` | List WLANs | `ssid` |
| `wlan_get_lan` | Get WLAN | `id` |
| `wlan_create_lan` | Create WLAN | `ssid`, `group` |
| `wlan_list_links` | List wireless links | (varies) |
| `wlan_get_link` | Get wireless link | `id` |
## Extras
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `extras_list_tags` | List tags | `name` |
| `extras_get_tag` | Get tag | `id` |
| `extras_create_tag` | Create tag | `name`, `slug`, `color` |
| `extras_update_tag` | Update tag | `id`, fields |
| `extras_delete_tag` | Delete tag | `id` |
| `extras_list_custom_fields` | List custom fields | `name` |
| `extras_get_custom_field` | Get custom field | `id` |
| `extras_list_webhooks` | List webhooks | `name` |
| `extras_get_webhook` | Get webhook | `id` |
| `extras_list_journal_entries` | List journal entries | `assigned_object_type`, `assigned_object_id` |
| `extras_get_journal_entry` | Get journal entry | `id` |
| `extras_create_journal_entry` | Create journal entry | `assigned_object_type`, `assigned_object_id`, `comments` |
| `extras_list_object_changes` | List audit log | `user_id`, `changed_object_type`, `action` |
| `extras_get_object_change` | Get change details | `id` |
| `extras_list_config_contexts` | List config contexts | `name` |
| `extras_get_config_context` | Get config context | `id` |
## Common Object Types for Filtering
| Category | Object Types |
|----------|--------------|
| DCIM | `dcim.device`, `dcim.interface`, `dcim.site`, `dcim.rack`, `dcim.cable` |
| IPAM | `ipam.ipaddress`, `ipam.prefix`, `ipam.vlan`, `ipam.vrf` |
| Virtualization | `virtualization.virtualmachine`, `virtualization.cluster` |
| Tenancy | `tenancy.tenant`, `tenancy.contact` |


@@ -0,0 +1,191 @@
# Sync Workflow Skill
How to synchronize machine state with NetBox.
## Prerequisites
Load these skills:
- `system-discovery` - Bash commands for system info
- `mcp-tools-reference` - MCP tool reference
## Sync Workflow
### Phase 1: Device Lookup
```
dcim_list_devices name=<hostname>
```
If not found, suggest `/cmdb-register` first.
If found:
- Store device ID and current field values
- Fetch interfaces: `dcim_list_interfaces device_id=<device_id>`
- Fetch IPs: `ipam_list_ip_addresses device_id=<device_id>`
- Check clusters/VMs: `virt_list_clusters`, `virt_list_vms cluster_id=<cluster_id>`
### Phase 2: Current State Discovery
Use commands from `system-discovery` skill.
### Phase 3: Comparison
#### Device Attributes
| Field | Compare |
|-------|---------|
| Platform | OS version changed? |
| Status | Still active? |
| Serial | Match? |
| Description | Keep existing |
#### Network Interfaces
| Change Type | Detection |
|-------------|-----------|
| New interface | Exists locally but not in NetBox |
| Removed interface | In NetBox but not locally |
| Changed MAC | MAC address different |
| Interface type | Type mismatch |
#### IP Addresses
| Change Type | Detection |
|-------------|-----------|
| New IP | Exists locally but not in NetBox |
| Removed IP | In NetBox but not locally |
| Primary IP changed | Default route interface changed |
#### Docker Containers
| Change Type | Detection |
|-------------|-----------|
| New container | Running locally but no VM in cluster |
| Stopped container | VM exists but container not running |
| Resource change | vCPUs/memory different |
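A sketch of the interface comparison, assuming both sides have been normalized to dicts keyed by interface name (local data from system discovery, NetBox data from `dcim_list_interfaces`); the compared field names are illustrative.
```python
def diff_interfaces(local, netbox):
    """Both arguments: dicts keyed by interface name, e.g.
    {"eth0": {"mac_address": "aa:bb:...", "type": "1000base-t"}}."""
    new = sorted(set(local) - set(netbox))       # will create in NetBox
    removed = sorted(set(netbox) - set(local))   # will mark offline
    changed = []
    for name in set(local) & set(netbox):
        for field in ("mac_address", "type"):
            old, cur = netbox[name].get(field), local[name].get(field)
            if old != cur:
                changed.append({"interface": name, "field": field, "old": old, "new": cur})
    return {"new": new, "removed": removed, "changed": changed}
```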
### Phase 4: Diff Report
```markdown
## Sync Diff Report
**Device:** <hostname> (ID: <device_id>)
**NetBox URL:** https://netbox.example.com/dcim/devices/<id>/
### Device Attributes
| Field | NetBox Value | Current Value | Action |
|-------|--------------|---------------|--------|
| Platform | Ubuntu 22.04 | Ubuntu 24.04 | UPDATE |
### Network Interfaces
#### New Interfaces (will create)
| Interface | Type | MAC | IPs |
|-----------|------|-----|-----|
| tailscale0 | virtual | - | 100.x.x.x/32 |
#### Removed Interfaces (will mark offline)
| Interface | Type | Reason |
|-----------|------|--------|
| eth1 | 1000base-t | Not found locally |
#### Changed Interfaces
| Interface | Field | Old | New |
|-----------|-------|-----|-----|
| eth0 | mac_address | aa:bb:cc:00:00:00 | aa:bb:cc:11:11:11 |
### IP Addresses
#### New IPs (will create)
- 192.168.1.150/24 on eth0
#### Removed IPs (will unassign)
- 192.168.1.100/24 from eth0
### Docker Containers
#### New Containers (will create VMs)
| Container | Image | Role |
|-----------|-------|------|
| media_lidarr | linuxserver/lidarr | Media Management |
### Summary
- **Updates:** X
- **Creates:** Y
- **Removals/Offline:** Z
```
### Phase 5: Apply Updates
#### Device Updates
```
dcim_update_device id=<device_id> platform=<new_platform_id>
```
#### Interface Updates
New:
```
dcim_create_interface device=<device_id> name=<name> type=<type>
```
Removed (mark offline):
```
dcim_update_interface id=<id> enabled=false description="Marked offline by cmdb-sync"
```
Changed:
```
dcim_update_interface id=<id> mac_address=<new_mac>
```
#### IP Address Updates
New:
```
ipam_create_ip_address address=<ip/prefix> assigned_object_type="dcim.interface" assigned_object_id=<id>
```
Removed (unassign):
```
ipam_update_ip_address id=<id> assigned_object_type=null assigned_object_id=null
```
#### Primary IP Update
```
dcim_update_device id=<device_id> primary_ip4=<new_primary_ip_id>
```
#### Container/VM Updates
New:
```
virt_create_vm name=<name> cluster=<cluster_id> status="active"
```
Stopped:
```
virt_update_vm id=<id> status="offline"
```
### Phase 6: Journal Entry
```
extras_create_journal_entry
assigned_object_type="dcim.device"
assigned_object_id=<device_id>
comments="Device synced via /cmdb-sync command\n\nChanges applied:\n- <list>"
```
## Sync Modes
### Dry Run Mode
- Complete phases 1-4 (lookup, discovery, compare, diff report)
- Skip phases 5-6 (no updates, no journal)
- End with: "Dry run complete. No changes applied."
### Full Sync Mode
- Skip user confirmation
- Update all fields even if unchanged (force refresh)
## Error Handling
| Error | Action |
|-------|--------|
| Device not found | Suggest `/cmdb-register` |
| Permission denied | Note which failed, continue others |
| Cluster not found | Offer to create or skip container sync |
| API errors | Log error, continue with remaining |


@@ -0,0 +1,101 @@
# System Discovery Skill
Bash commands for gathering system information from the current machine.
## Basic Device Information
```bash
# Hostname
hostname
# OS/Platform info
cat /etc/os-release 2>/dev/null || uname -a
# Hardware model - Raspberry Pi
cat /proc/device-tree/model 2>/dev/null || echo "Unknown"
# Hardware model - x86 systems
cat /sys/class/dmi/id/product_name 2>/dev/null || echo "Unknown"
# Serial number - Raspberry Pi
cat /proc/device-tree/serial-number 2>/dev/null || grep Serial /proc/cpuinfo 2>/dev/null | cut -d: -f2 | tr -d ' '
# Serial number - x86 systems
cat /sys/class/dmi/id/product_serial 2>/dev/null || echo "Unknown"
# CPU count
nproc
# Memory in MB
free -m | awk '/Mem:/ {print $2}'
# Disk size in GB (root filesystem)
df -BG / | awk 'NR==2 {print $2}' | tr -d 'G'
```
## Network Interfaces
```bash
# Get interfaces with IPs (JSON format)
ip -j addr show 2>/dev/null || ip addr show
# Get default gateway interface
ip route | grep default | awk '{print $5}' | head -1
# Get MAC addresses
ip -j link show 2>/dev/null || ip link show
```
## Running Applications
```bash
# Docker containers (JSON format)
docker ps --format '{"name":"{{.Names}}","image":"{{.Image}}","status":"{{.Status}}","ports":"{{.Ports}}"}' 2>/dev/null || echo "Docker not available"
# Docker Compose projects (find compose files)
find ~/apps /home/*/apps -name "docker-compose.yml" -o -name "docker-compose.yaml" 2>/dev/null | head -20
# Running systemd services
systemctl list-units --type=service --state=running --no-pager --plain 2>/dev/null | grep -v "^UNIT" | head -30
```
## Interface Type Mapping
| Interface Pattern | NetBox Type |
|-------------------|-------------|
| `eth*`, `enp*` | `1000base-t` |
| `wlan*` | `ieee802.11ax` |
| `tailscale*`, `docker*`, `br-*` | `virtual` |
| `lo` | Skip (loopback) |
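The table above as a lookup, using shell-style glob matching. The `other` fallback is an assumption rather than part of the table; substitute whatever default your NetBox instance expects.
```python
from fnmatch import fnmatch

INTERFACE_TYPE_MAP = [
    (("eth*", "enp*"), "1000base-t"),
    (("wlan*",), "ieee802.11ax"),
    (("tailscale*", "docker*", "br-*"), "virtual"),
]

def netbox_interface_type(name):
    if name == "lo":
        return None  # skip loopback entirely
    for patterns, nb_type in INTERFACE_TYPE_MAP:
        if any(fnmatch(name, pattern) for pattern in patterns):
            return nb_type
    return "other"  # assumed fallback, not part of the table above
```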
## Platform Detection
Based on the detected OS, determine the platform name:
| OS Detection | Platform Name |
|--------------|---------------|
| Raspberry Pi OS | `Raspberry Pi OS (Bookworm)` |
| Ubuntu | `Ubuntu {version} LTS` |
| Debian | `Debian {version}` |
| Default | `{OS Name} {Version}` |
## Device Role Auto-Detection
Based on detected services:
| Detection | Suggested Role |
|-----------|----------------|
| Docker containers found | `Docker Host` |
| Only basic services | `Server` |
| Specific role specified | Use specified |
## Container Role Mapping
Map container names/images to roles:
| Container Pattern | Role |
|-------------------|------|
| `*caddy*`, `*nginx*`, `*traefik*` | Reverse Proxy |
| `*db*`, `*postgres*`, `*mysql*`, `*redis*` | Database |
| `*webui*`, `*frontend*` | Web Application |
| Others | Infer from image or use "Container" |


@@ -0,0 +1,155 @@
# Topology Generation Skill
Generate Mermaid diagrams from NetBox data.
## Prerequisites
Load skill: `mcp-tools-reference`
## View: Rack Elevation
### Data Collection
1. Find rack: `dcim_list_racks name=<name>`
2. Get devices: `dcim_list_devices rack_id=<id>`
3. Note for each: `position`, `u_height`, `face`, `name`, `role`
### Mermaid Template
```mermaid
graph TB
subgraph rack["Rack: <rack-name> (U<height>)"]
direction TB
u42["U42: empty"]
u41["U41: empty"]
u40["U40: server-01 (Server)"]
u39["U39: server-01 (cont.)"]
u38["U38: switch-01 (Switch)"]
end
```
### Rules
- Mark top U with device name and role
- Mark subsequent Us as "(cont.)" for multi-U devices
- Empty Us show "empty"
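A sketch of these rules, assuming each device dict carries `name`, `role`, `position` (lowest occupied U, per NetBox convention), and `u_height`, and that every device fits inside the rack:
```python
def rack_elevation_lines(devices, u_height=42):
    """Render one Mermaid node line per U, top U first."""
    units = {u: f'u{u}["U{u}: empty"]' for u in range(1, u_height + 1)}
    for dev in devices:
        bottom = dev["position"]                 # NetBox: lowest occupied U
        top = bottom + dev["u_height"] - 1
        units[top] = f'u{top}["U{top}: {dev["name"]} ({dev["role"]})"]'
        for u in range(bottom, top):             # lower Us of a multi-U device
            units[u] = f'u{u}["U{u}: {dev["name"]} (cont.)"]'
    return [units[u] for u in range(u_height, 0, -1)]
```
Wrap the returned lines in the `subgraph rack[...]` block from the template above.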
## View: Network Topology
### Data Collection
1. List sites: `dcim_list_sites`
2. List devices: `dcim_list_devices site_id=<id>`
3. List cables: `dcim_list_cables`
4. List interfaces: `dcim_list_interfaces device_id=<id>`
### Mermaid Template
```mermaid
graph TD
subgraph site1["Site: Home"]
router1[("core-router-01<br/>Router")]
switch1[["dist-switch-01<br/>Switch"]]
server1["web-server-01<br/>Server"]
server2["db-server-01<br/>Server"]
end
router1 -->|"eth0 - eth1"| switch1
switch1 -->|"gi0/1 - eth0"| server1
switch1 -->|"gi0/2 - eth0"| server2
```
### Node Shapes by Role
| Role | Shape | Mermaid Syntax |
|------|-------|----------------|
| Router | Cylinder | `[(" ")]` |
| Switch | Double brackets | `[[ ]]` |
| Server | Rectangle | `[ ]` |
| Firewall | Hexagon | `{{ }}` |
| Other | Rectangle | `[ ]` |
### Edge Labels
Show interface names: `A-side - B-side`
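A sketch of the node and edge emission, following the shape table and edge-label convention above. The function signatures and the way interface names are passed in are illustrative; adapt them to what `dcim_list_cables` actually returns.
```python
SHAPES = {
    "Router": ('[("', '")]'),    # cylinder
    "Switch": ('[["', '"]]'),    # double brackets
    "Firewall": ('{{"', '"}}'),  # hexagon
}

def node_line(node_id, name, role):
    open_, close = SHAPES.get(role, ('["', '"]'))   # default: rectangle
    return f"{node_id}{open_}{name}<br/>{role}{close}"

def edge_line(a_id, a_iface, b_id, b_iface):
    return f'{a_id} -->|"{a_iface} - {b_iface}"| {b_id}'

print(node_line("router1", "core-router-01", "Router"))
# router1[("core-router-01<br/>Router")]
print(edge_line("router1", "eth0", "switch1", "eth1"))
# router1 -->|"eth0 - eth1"| switch1
```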
## View: Site Overview
### Data Collection
1. Get site: `dcim_get_site id=<id>`
2. List racks: `dcim_list_racks site_id=<id>`
3. Count devices per rack: `dcim_list_devices rack_id=<id>`
### Mermaid Template
```mermaid
graph TB
subgraph site["Site: Headquarters"]
subgraph row1["Row 1"]
rack1["Rack A1<br/>12/42 U used<br/>5 devices"]
rack2["Rack A2<br/>20/42 U used<br/>8 devices"]
end
subgraph row2["Row 2"]
rack3["Rack B1<br/>8/42 U used<br/>3 devices"]
end
end
```
## View: Full Infrastructure
### Data Collection
1. List regions: `dcim_list_regions`
2. List sites: `dcim_list_sites`
3. Count devices: `dcim_list_devices status=active`
### Mermaid Template
```mermaid
graph TB
subgraph region1["Region: Americas"]
site1["Headquarters<br/>3 racks, 25 devices"]
site2["Branch Office<br/>1 rack, 5 devices"]
end
subgraph region2["Region: Europe"]
site3["EU Datacenter<br/>10 racks, 100 devices"]
end
site1 -.->|"WAN Link"| site3
```
## Output Format
Always provide:
1. **Summary** - Brief description of diagram content
2. **Mermaid Code Block** - The diagram code
3. **Legend** - Explanation of shapes and colors
4. **Data Notes** - Any data quality issues
### Example Output
````markdown
## Network Topology: Home Site
This diagram shows network connections between 4 devices at Home site.
```mermaid
graph TD
router1[("core-router<br/>Router")]
switch1[["main-switch<br/>Switch"]]
server1["homelab-01<br/>Server"]
router1 -->|"eth0 - gi0/24"| switch1
switch1 -->|"gi0/1 - eth0"| server1
```
**Legend:**
- Cylinder shape: Routers
- Double brackets: Switches
- Rectangle: Servers
**Data Notes:**
- 1 device (nas-01) has no cable connections documented
````


@@ -0,0 +1,32 @@
# Visual Header Skill
Standard visual header for cmdb-assistant commands.
## Header Template
```
+----------------------------------------------------------------------+
| CMDB-ASSISTANT - [Context] |
+----------------------------------------------------------------------+
```
## Context Values by Command
| Command | Context |
|---------|---------|
| `/cmdb-search` | Search |
| `/cmdb-device` | Device Management |
| `/cmdb-ip` | IP Management |
| `/cmdb-site` | Site Management |
| `/cmdb-audit` | Data Quality Audit |
| `/cmdb-register` | Machine Registration |
| `/cmdb-sync` | Machine Sync |
| `/cmdb-topology` | Topology |
| `/change-audit` | Change Audit |
| `/ip-conflicts` | IP Conflict Detection |
| `/initial-setup` | Setup Wizard |
| Agent mode | Infrastructure Management |
## Usage
Display the header at the start of every command response, before proceeding with the operation.
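A minimal sketch for emitting the header, assuming a total box width of 72 characters; adjust `WIDTH` to match the template's exact dash count.
```python
WIDTH = 72  # total line width including the '+' corners -- match the template

def visual_header(context):
    border = "+" + "-" * (WIDTH - 2) + "+"
    body = ("| CMDB-ASSISTANT - " + context).ljust(WIDTH - 1) + "|"
    return "\n".join([border, body, border])

print(visual_header("IP Management"))
```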


@@ -2,7 +2,6 @@
"name": "code-sentinel", "name": "code-sentinel",
"description": "Security scanning and code refactoring tools", "description": "Security scanning and code refactoring tools",
"version": "1.0.1", "version": "1.0.1",
"defaultModel": "sonnet",
"author": { "author": {
"name": "Leo Miranda", "name": "Leo Miranda",
"email": "leobmiranda@gmail.com" "email": "leobmiranda@gmail.com"


@@ -1,47 +0,0 @@
# code-sentinel
Security scanning and code refactoring tools for Claude Code projects.
## Features
### Security Scanning
- **PreToolUse Hook**: Catches vulnerabilities BEFORE code is written
- **Full Audit**: `/security-scan` for comprehensive project review
- **Pattern Detection**: SQL injection, XSS, command injection, secrets, and more
### Refactoring
- **Pattern Library**: Extract method, simplify conditionals, modernize syntax
- **Safe Transforms**: Preview changes before applying
- **Reference Updates**: Automatically updates all call sites
## Commands
| Command | Description |
|---------|-------------|
| `/security-scan` | Full project security audit |
| `/refactor <target>` | Apply refactoring with pattern |
| `/refactor-dry <target>` | Preview opportunities without changes |
## Hooks
- **PreToolUse (Write\|Edit)**: Scans code for security patterns before writing
## Security Patterns Detected
| Category | Examples |
|----------|----------|
| Injection | SQL, Command, Code (eval), XSS |
| Secrets | Hardcoded API keys, passwords |
| Deserialization | Pickle, unsafe YAML |
| Path Traversal | Unsanitized file paths |
## Installation
```bash
/plugin marketplace add https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git
/plugin install code-sentinel
```
## Integration
See claude-md-integration.md for CLAUDE.md additions.


@@ -1,5 +1,8 @@
 ---
-description: Code structure and refactoring specialist
+name: refactor-advisor
+description: Code structure and refactoring specialist. Use when analyzing code quality, design patterns, or planning refactoring work.
+model: sonnet
+permissionMode: acceptEdits
 ---
 # Refactor Advisor Agent


@@ -1,7 +1,9 @@
 ---
 name: security-reviewer
 description: Security-focused code review agent
-model: opus
+model: sonnet
+permissionMode: plan
+disallowedTools: Write, Edit, MultiEdit
 ---
 # Security Reviewer Agent
# Security Reviewer Agent # Security Reviewer Agent

Some files were not shown because too many files have changed in this diff.