464 Commits

61907b78db chore: release v5.9.0
- Plugin installation scripts for consumer projects
- MCP server mapping via mcp_servers field in plugin.json
- CLAUDE.md section markers for install/uninstall
- Agent model selection (25 agents with model frontmatter)
- Agent frontmatter standardization

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 02:38:31 -05:00
c4037f505c Merge pull request 'fix(plugins): remove invalid mcp_servers key from plugin.json files' (#405) from fix/startup-hook-venv-cache-path into development
Reviewed-on: #405
2026-02-03 07:14:24 +00:00
dbf3fa7e0d fix(plugins): remove invalid mcp_servers key from plugin.json files
The mcp_servers key is not a valid key in the Claude plugin manifest
schema. MCP servers are configured in the root .mcp.json file instead.

Affected plugins:
- cmdb-assistant
- contract-validator
- data-platform
- pr-review
- projman
- viz-platform

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 02:13:46 -05:00
6d093e83b6 Merge pull request 'fix(hooks): add auto-symlink creation in data-platform startup hook' (#403) from fix/startup-hook-venv-cache-path into development
Reviewed-on: #403
2026-02-03 03:40:00 +00:00
13de992638 fix(hooks): add auto-symlink creation in data-platform startup hook
Note: This fix may not help because MCP servers fail BEFORE hooks run.
See lessons learned wiki page for full debug trace.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 22:38:27 -05:00
ef28f172d6 fix(plugins): sync plugin.json versions with marketplace.json
Plugin load failures were caused by version mismatch between
marketplace.json and individual plugin.json files:
- contract-validator: 1.2.0 vs 1.1.0
- git-flow: 1.2.0 vs 1.0.0
- projman: 3.4.0 vs 3.3.0
- clarity-assist: 1.2.0 vs 1.0.0
- doc-guardian: 1.1.0 vs 1.0.0

Claude Code silently fails to load plugins when versions don't match.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 22:26:24 -05:00
39556dbb59 Merge pull request 'fix(hooks): check venv cache path before marketplace path' (#401) from fix/startup-hook-venv-cache-path into development
Reviewed-on: #401
2026-02-03 03:15:00 +00:00
c9e054e013 fix(hooks): check venv cache path before marketplace path
Startup hooks in data-platform and pr-review were checking for venvs
at the marketplace path (~/.claude/plugins/marketplaces/.../mcp-servers/)
which gets wiped on updates. The actual venvs live in the cache directory
(~/.cache/claude-mcp-venvs/) which survives updates.

This caused false "MCP venv missing" errors even when venvs existed,
wasting hours of debugging time.

Fixed hooks now check cache path first, matching the pattern used
by run.sh scripts.

Also updated docs/CANONICAL-PATHS.md with the correct venv path pattern
to prevent future occurrences.

Lesson learned: lessons/patterns/startup-hooks-must-check-venv-cache-path-first

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 22:13:40 -05:00
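
The path-ordering fix described in this commit can be sketched roughly as follows; the per-server cache layout and the marketplace subpath are assumptions (the commit only names the two parent directories), and "gitea" is a placeholder server name.

```bash
#!/usr/bin/env bash
# Hedged sketch of the check order above: prefer the persistent cache venv,
# fall back to the marketplace copy that gets wiped on updates.
# "gitea" and the <marketplace> subpath are illustrative placeholders.
SERVER="gitea"
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/$SERVER/.venv"
MARKETPLACE_VENV="$HOME/.claude/plugins/marketplaces/<marketplace>/mcp-servers/$SERVER/.venv"

if [ -d "$CACHE_VENV" ]; then
    VENV="$CACHE_VENV"          # survives plugin updates
elif [ -d "$MARKETPLACE_VENV" ]; then
    VENV="$MARKETPLACE_VENV"    # wiped on updates; last resort
else
    echo "WARN: MCP venv missing for $SERVER" >&2
    exit 0
fi
echo "Using venv: $VENV"
```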
db8fec42f2 Merge pull request 'fix(hooks): correct venv path in startup-check scripts' (#399) from fix/startup-hook-venv-paths into development
Reviewed-on: #399
2026-02-03 02:58:09 +00:00
ba1dee4553 fix(hooks): correct venv path in startup-check scripts
The startup hooks were looking for MCP venvs relative to the plugin
directory instead of the marketplace root, causing false "venv missing"
errors on every session start.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 21:56:43 -05:00
01e184b68f Merge pull request 'feat(agents): add model selection and standardize frontmatter' (#397) from fix/plugin-install-mcp-mapping into development
Reviewed-on: #397
2026-02-03 01:39:32 +00:00
c0d62f4957 feat(agents): add model selection and standardize frontmatter
Add per-agent model selection using Claude Code's now-supported `model`
frontmatter field, and standardize all agent frontmatter across the
marketplace.

Changes:
- Add `model` field to all 25 agents (18 sonnet, 7 haiku)
- Fix viz-platform/data-platform agents using `agent:` instead of `name:`
- Remove non-standard `triggers:` field from domain agents
- Add missing frontmatter to 13 agents
- Document model selection in CLAUDE.md and CONFIGURATION.md
- Fix undocumented commands in README.md

Model assignments based on reasoning depth, tool complexity, and latency:
- sonnet: Planner, Orchestrator, Executor, Coordinator, Security Reviewers
- haiku: Maintainability Auditor, Test Validator, Git Assistant, etc.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 20:37:58 -05:00
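
For illustration, standardized agent frontmatter along the lines this commit describes might look like the sketch below; the agent name, description, and file path are hypothetical, while `name`, `description`, and `model` (sonnet or haiku) are the fields named in the commit.

```bash
#!/usr/bin/env bash
# Hypothetical example of the standardized frontmatter; values are made up.
cat > agents/example-auditor.md <<'EOF'
---
name: example-auditor
description: Hypothetical agent illustrating the standardized frontmatter fields.
model: haiku
---
EOF
```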
5b1dde694c Merge pull request 'fix(scripts): MCP server mapping and CLAUDE.md section markers' (#395) from fix/plugin-install-mcp-mapping into development
Reviewed-on: #395
2026-02-03 00:37:15 +00:00
eafcfe5bd1 fix(scripts): MCP server mapping and CLAUDE.md section markers
Issue 1 - MCP Server Mapping:
- Add mcp_servers field to plugin.json for plugins using shared MCP servers
- projman/pr-review now install gitea MCP server
- cmdb-assistant now installs netbox MCP server
- Scripts read MCP server names from plugin.json

Issue 2 - CLAUDE.md Section Markers:
- Install wraps content with HTML comment markers for precise removal
- Uninstall uses markers first, falls back to legacy header detection
- Fixes code block false positives during uninstall

Bug fix:
- Change ((servers_added++)) to ((++servers_added)) to avoid exit code 1
  with set -e when incrementing from 0

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 19:33:45 -05:00
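
The `((servers_added++))` → `((++servers_added))` change noted above works because a bash arithmetic command returns a non-zero exit status when the expression evaluates to 0: post-increment yields the old value (0) and trips `set -e`, while pre-increment yields the new value (1).

```bash
#!/usr/bin/env bash
set -e
servers_added=0

# ((servers_added++)) evaluates to the OLD value, 0, so the arithmetic command
# exits with status 1 and `set -e` would abort the script right here.

((++servers_added))   # evaluates to the NEW value, 1, so execution continues
echo "servers added: $servers_added"
```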
dc113d8b09 Merge pull request 'feat(scripts): add plugin installation mechanism for consumer projects' (#393) from feat/plugin-install-scripts into development
Reviewed-on: #393
2026-02-02 22:03:24 +00:00
aca5c6e5b1 feat(scripts): add plugin installation mechanism for consumer projects
Add three new scripts for installing marketplace plugins to consumer projects:

- install-plugin.sh: Install plugin to target project (.mcp.json + CLAUDE.md)
- uninstall-plugin.sh: Remove plugin from target project
- list-installed.sh: Show installed/available plugins in a project

Features:
- Idempotent operations (safe to run multiple times)
- Handles plugins with/without MCP servers
- Code-block-aware CLAUDE.md section removal
- Flexible header format detection

Documentation updated in docs/CONFIGURATION.md with usage examples.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 17:01:35 -05:00
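
A minimal sketch of the idempotent `.mcp.json` half of `install-plugin.sh`, assuming jq is available; the server name and entry shape are illustrative, and only the "already installed, do nothing" behaviour is taken from the commit message.

```bash
#!/usr/bin/env bash
# Hypothetical idempotent merge into the target project's .mcp.json.
TARGET=".mcp.json"
NAME="gitea"   # illustrative server name

if jq -e --arg n "$NAME" '.mcpServers[$n] != null' "$TARGET" >/dev/null; then
    echo "MCP server '$NAME' already configured - nothing to do"
else
    jq --arg n "$NAME" \
       '.mcpServers[$n] = {"command": "bash", "args": ["mcp-servers/gitea/run.sh"]}' \
       "$TARGET" > "$TARGET.tmp" && mv "$TARGET.tmp" "$TARGET"
fi
```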
120f00ece6 Merge pull request 'feat(claude-config-maintainer): add settings.local.json audit feature v1.2.0' (#391) from feat/config-maintainer-settings-audit into development
Reviewed-on: #391
2026-02-02 21:02:43 +00:00
3012a7af68 feat(claude-config-maintainer): add settings.local.json audit feature v1.2.0
Add 3 new commands for auditing and optimizing Claude Code permission
configurations, leveraging the marketplace's multi-layer review architecture.

New commands:
- /config-audit-settings - 100-point scoring across redundancy, coverage,
  safety alignment, and profile fit
- /config-optimize-settings - apply optimizations with dry-run, named
  profiles (conservative, reviewed, autonomous), consolidation modes
- /config-permissions-map - Mermaid diagram of review layer coverage

New skill:
- settings-optimization.md - 7 sections covering file formats, syntax
  reference, consolidation rules, review-layer-aware recommendations,
  named profiles, scoring criteria, and hook detection

Updated agent maintainer.md with new "Audit Settings Files" responsibility.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 15:54:15 -05:00
d12d9b4962 Merge pull request 'docs: close 5.8.0 punch list — version sync, stale header, emoji clarity' (#389) from docs/punch-list-5.8.0-cleanup into development
Reviewed-on: #389
2026-02-02 19:30:27 +00:00
b302a4237d docs: close 5.8.0 punch list — version sync, stale header, emoji clarity
- Fix /review command visual header: replace inline CLOSING box with
  skill reference (Code Reviewer uses 🔍 REVIEW, not 🏁 CLOSING)
- Update CLAUDE.md version 5.4.0 → 5.8.0, fix data-platform version
  in plugin table (1.1.0 → 1.3.0), update Last Updated date
- Add Unicode emoji to Phase Registry tables in visual-output.md
  (now shows 🎯 Target instead of just "Target")

Items verified complete:
- README.md already shows v5.8.0
- marketplace.json already shows 5.8.0
- CHANGELOG 5.8.0 entry complete (rfc-reject in both Changed/Removed)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 14:28:09 -05:00
a034c12eb6 Merge pull request 'feat(projman): hardening sprint v5.8.0' (#387) from feat/projman-hardening into development
Reviewed-on: #387
2026-02-02 19:10:13 +00:00
636bd0af59 chore: bump version to 5.8.0, update CHANGELOG
- Added [5.8.0] section documenting all projman hardening changes
- Updated README.md title to v5.8.0
- Updated marketplace.json version to 5.8.0

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 14:07:15 -05:00
bf5029d6dc fix(contract-validator): update test for new contract INFO issue
Test fixture without gate_contract now correctly expects INFO issue
rather than zero issues.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 14:05:24 -05:00
e939af0689 docs(projman): expand /test command documentation
- Added sprint integration section (pre-close verification workflow)
- Added concrete usage examples for all modes
- Added edge cases table
- Added DO NOT rules for both modes

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 14:02:32 -05:00
eb6ce62a76 feat(projman): add sprint dispatch log for session recovery
- progress-tracking.md: new Sprint Dispatch Log section with event types
- orchestrator.md: new responsibility to maintain dispatch log
- Enables timeline reconstruction after interrupted sessions

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 14:00:24 -05:00
f7bcd48fc0 refactor(projman): create shared visual-output skill for DRY headers
- New skill: visual-output.md defines all header, progress, and verdict formats
- All 4 agent files now reference the skill instead of inline templates
- Phase Registry table maps agents to their emoji and phase name
- Single source of truth for visual branding changes

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 13:57:56 -05:00
72b3436a24 feat(contract-validator): add gate contract versioning
- design-gate.md and data-gate.md declare gate_contract: v1
- domain-consultation.md Gate Command Reference includes Contract column
- validate_workflow_integration now checks contract version compatibility
- Tests added for match, mismatch, and missing contract scenarios

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 13:54:19 -05:00
bea46d7689 feat(projman): add sprint lifecycle state machine via milestone metadata
- New skill: sprint-lifecycle.md defines states, transitions, and check protocol
- All sprint commands now check and set lifecycle state
- States tracked in milestone description metadata (Sprint/Planning, Sprint/Executing, Sprint/Reviewing)
- Out-of-order calls produce warnings with guidance
- --force override available for all lifecycle checks
- Added Sprint/* labels to label taxonomy documentation

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 13:48:13 -05:00
f2fddafca3 refactor(projman): normalize RFC to sub-command pattern, absorb clear-cache
- Created unified /rfc command with create|list|review|approve|reject sub-commands
- Deleted 5 individual rfc-*.md command files
- Moved /clear-cache into /setup --clear-cache
- Updated all cross-references in skills, docs, and integration files
- Command count: 17 -> 12 (net -5)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 13:43:29 -05:00
fdcb5d9874 feat(projman): harden sprint approval gate with --force override
- sprint-approval.md: approval is now a hard block, not a warning
- sprint-start.md: added --force flag documentation
- orchestrator.md: approval verification is now a hard stop
- docs: updated commands cheatsheet

BREAKING: /sprint-start now requires approval or --force flag

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 13:37:35 -05:00
45510b44c0 Merge pull request 'fix(marketplace): remove integrates_with field (schema violation)' (#384) from fix/marketplace-schema-hotfix into development
Reviewed-on: #384
2026-02-02 16:47:12 +00:00
79468f5d9e fix(marketplace): remove integrates_with field (schema violation)
Claude Code's marketplace schema does not support custom fields.
The `integrates_with` field on data-platform and viz-platform caused:
"Invalid schema: plugins.9: Unrecognized key: integrates_with"

This reverts the schema extension while keeping the validate_workflow_integration
MCP tool functional (it reads from plugin.json files directly, not marketplace.json).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 11:46:29 -05:00
9148e21bc5 Merge pull request 'feat(contract-validator): add validate_workflow_integration tool (v5.7.1)' (#382) from feat/v5.7.1-domain-advisory-hardening into development
Reviewed-on: #382
2026-02-02 16:27:18 +00:00
2925ec4229 feat(contract-validator): add validate_workflow_integration tool (v5.7.1)
Domain Advisory Pattern Hardening - patch release to close gaps from v5.6.0/v5.7.0:

## Added
- New `validate_workflow_integration` MCP tool validates domain plugins expose
  required advisory interfaces (gate command, review command, advisory agent)
- New `MISSING_INTEGRATION` issue type for workflow integration validation
- New `WorkflowIntegrationResult` Pydantic model for structured validation output
- `integrates_with` field on viz-platform and data-platform in marketplace.json
  declaring projman integration metadata
- 4 new test cases for workflow integration validation

## Fixed
- scripts/setup.sh banner version updated from v5.1.0 to v5.7.1

## Documentation
- Updated mcp-tools-reference.md with new tool
- Updated validation-rules.md with Workflow Integration Checks section
- Added /design-gate, /design-review, /data-gate, /data-review to COMMANDS-CHEATSHEET
- Added contract-validator to CONFIGURATION.md plugin table
- Updated README.md Contract Validator tools table

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 11:24:51 -05:00
4340873f4e Merge pull request 'fix: update README.md version to 5.7.0' (#380) from fix/readme-version-5.7.0 into development
Reviewed-on: #380
2026-02-02 14:57:32 +00:00
98fd4e45e2 fix: update README.md version to 5.7.0
The README title was still showing v5.6.0 after the Sprint 10 release.
All other version references (CHANGELOG.md, marketplace.json) were correct.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 09:55:36 -05:00
793565aaaa Merge pull request 'feat: Sprint 10 - Data Platform Domain Advisory Pattern (v5.7.0)' (#378) from feat/sprint-10-data-platform-advisory into development
Reviewed-on: #378
2026-02-02 14:49:57 +00:00
27f3603f52 Merge feat/377-version-docs 2026-02-02 01:32:24 -05:00
2872092554 docs: bump version to 5.7.0 and update documentation
- marketplace.json: 5.6.0 → 5.7.0
- data-platform plugin.json: 1.1.0 → 1.3.0
- CHANGELOG.md: Add v5.7.0 entry with data-platform additions
- README.md: Update Domain Advisory table (Data: planned → active)

Domain Advisory Pattern now fully operational for both domains.

Closes #377

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 01:31:39 -05:00
4c857dde47 Merge feat/376-data-review-command 2026-02-02 01:30:26 -05:00
efe3808bd2 feat(data-platform): add /data-review command
Comprehensive data integrity audit for human review.

- Detailed report with FAIL/WARN/INFO severity levels
- Schema, lineage, dbt, and PostGIS validation
- Actionable recommendations for each finding
- Graceful degradation when components unavailable

Use cases: sprint planning, code review, post-migration,
periodic health checks, project onboarding.

Closes #376

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 01:29:41 -05:00
61c315a605 Merge feat/375-data-gate-command 2026-02-02 01:29:03 -05:00
56d3307cd3 feat(data-platform): add /data-gate command
Binary pass/fail gate command for projman orchestrator integration.

- Invoked when Domain/Data label present on issue
- Checks FAIL-level violations only (speed optimization)
- Returns compact PASS/FAIL output for automation
- Graceful degradation when database/dbt unavailable

Completes the Domain Advisory Pattern for data domain.

Closes #375

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 01:28:19 -05:00
ea9d501a5d Merge feat/374-data-advisor-agent 2026-02-02 01:27:38 -05:00
cd27b92b28 feat(data-platform): add data-advisor agent
Advisory agent for data integrity validation using existing MCP tools.

Features:
- Two operating modes: review (detailed) and gate (binary)
- PostgreSQL schema validation
- dbt project health checks (parse, compile, test, lineage)
- PostGIS spatial validation
- Python code pattern scanning
- Graceful degradation when components unavailable

Integrates with projman orchestrator for Domain/Data gates.

Closes #374

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 01:26:54 -05:00
61654057b8 Merge feat/373-data-integrity-audit-skill 2026-02-02 01:25:39 -05:00
2c998dff1d feat(data-platform): add data-integrity-audit skill
Define audit rules, severity classification, scanning strategies,
and report templates for data integrity validation.

Covers:
- Schema validity (PostgreSQL tables, columns, types)
- dbt project health (parse, compile, test, lineage)
- PostGIS compliance (SRID, geometry types, extent)
- Data type consistency (DataFrame dtypes)
- Query safety patterns

Closes #373

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 01:24:54 -05:00
192f808f48 Merge pull request 'feat: Domain Advisory Pattern - viz-platform integration (Sprint 9)' (#370) from feat/sprint-9-domain-advisory-pattern into development
Reviewed-on: #370
2026-02-01 23:46:53 +00:00
65574a03fb feat(projman): add domain-consultation skill and update orchestrator
- Create domain-consultation.md skill with detection rules and gate protocols
- Update orchestrator.md to load skill and run domain gates before completion
- Add critical reminder for domain gate enforcement

Closes #356
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 18:32:40 -05:00
571c713697 Merge feat/363-domain-labels-version-bump 2026-02-01 18:30:59 -05:00
5429f193b3 Merge feat/361-planner-domain-consultation 2026-02-01 18:30:59 -05:00
a866e0d43d Merge feat/359-design-review-command 2026-02-01 18:30:49 -05:00
2f5161675c Merge feat/360-design-gate-command 2026-02-01 18:30:49 -05:00
f95b7eb650 Merge feat/358-design-reviewer-agent 2026-02-01 18:30:49 -05:00
8095f16cd6 Merge feat/357-design-system-audit-skill 2026-02-01 18:30:43 -05:00
7d2705e3bf feat(labels): add Domain/Viz and Domain/Data labels for cross-plugin integration
Add new Domain category with 2 labels for domain-specific validation gates:
- Domain/Viz: triggers viz-platform design gates for frontend/visualization work
- Domain/Data: triggers data-platform data gates for data engineering work

Update version to 5.6.0 and document Domain Advisory Pattern feature.

Closes #363

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 18:23:27 -05:00
0d04c8703a feat(viz-platform): add /design-review command for design audits
Add standalone command that invokes the design-reviewer agent to
perform detailed design system compliance audits on target paths.
Returns comprehensive findings grouped by severity with file paths,
line numbers, and recommended fixes.

Closes #359

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 18:20:14 -05:00
407dd1b93b feat(viz-platform): add design-gate command for binary pass/fail validation
Add /design-gate command that provides binary pass/fail validation for
design system compliance. This command is used by projman orchestrator
during sprint execution to gate issue completion for Domain/Viz issues.

Features:
- Binary PASS/FAIL output for automation
- Checks only FAIL-level violations (invalid props, missing components)
- Integrates with projman sprint execution workflow
- Lightweight alternative to /design-review

Closes #360

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 18:18:09 -05:00
4614726350 feat(projman): add domain consultation step to planner agent
Integrate domain-consultation skill into the planner agent workflow:
- Add skills/domain-consultation.md to Skills to Load section
- Add new responsibility "7. Domain Consultation" between Task Sizing and Issue Creation
- Renumber subsequent sections (Issue Creation -> 8, Request Approval -> 9)
- Add critical reminder #8 to always check domain signals

The domain consultation step analyzes planned issues for domain-specific
signals (viz-platform, data-platform) and appends appropriate acceptance
criteria before issue creation.

Closes #361

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 18:15:15 -05:00
69358a78ba feat(viz-platform): add design-reviewer agent for design system compliance
Add design reviewer agent that uses viz-platform MCP tools to audit code
for design system compliance in Dash/DMC applications.

Features:
- Review mode: detailed report with FAIL/WARN/INFO severity levels
- Gate mode: binary PASS/FAIL for CI/CD integration
- Component validation using validate_component, get_component_props
- Theme compliance checking for hardcoded colors/sizes
- Accessibility validation using accessibility_validate_colors
- Structured output for projman orchestrator integration

Closes #358

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 18:13:15 -05:00
557bf6115b feat(viz-platform): add design-system-audit skill
Add comprehensive design system audit skill that provides:
- What to check: component prop validity, theme token usage,
  accessibility compliance, responsive design
- Common violation patterns at FAIL/WARN/INFO severity levels
- Scanning strategy for finding DMC components in Python files
- Report template for audit output
- MCP tool integration patterns

Closes #357

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 18:06:21 -05:00
6eeb4a4e9a feat(projman): add domain-consultation skill for cross-plugin integration
Add the core skill that enables projman agents to detect and consult
domain-specific plugins (viz-platform, data-platform) during sprint
lifecycle. Includes domain detection rules, planning protocol,
execution gate protocol, review protocol, and extensibility guidelines.

Closes #356

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 18:05:39 -05:00
fde61f5f73 Merge pull request 'chore: release v5.5.0' (#353) from chore/release-5.5.0 into development
Reviewed-on: #353
2026-02-01 19:33:20 +00:00
addfc9c1d5 chore: release v5.5.0
- RFC System for Feature Tracking
- 5 new commands: /rfc-create, /rfc-list, /rfc-review, /rfc-approve, /rfc-reject
- New MCP tool: allocate_rfc_number
- Sprint integration: /sprint-plan detects approved RFCs
- Clarity-assist integration: feature request detection

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 14:29:25 -05:00
733b8679c9 Merge pull request 'docs: add RFC commands to README and bump projman version' (#351) from feat/rfc-system into development
Reviewed-on: #351
2026-02-01 19:10:13 +00:00
5becd3d41c Merge pull request 'docs: add RFC commands to README and bump projman version' (#350) from fix/rfc-docs into development
Reviewed-on: #350
2026-02-01 19:08:56 +00:00
305c534402 docs: add RFC commands to README and bump projman version
- Add 5 RFC commands to projman Commands line in README.md
- Bump projman version 3.3.0 → 3.4.0 in marketplace.json

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 14:05:02 -05:00
7b7e7dce16 docs: add RFC commands to README and bump projman version
- Add 5 RFC commands to projman Commands line in README.md
- Bump projman version 3.3.0 → 3.4.0 in marketplace.json

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 13:58:59 -05:00
b9eaae5317 Merge pull request 'feat(projman): add RFC system for feature tracking' (#348) from feat/rfc-system into development
Reviewed-on: #348
2026-02-01 17:42:01 +00:00
16acc0609e feat(projman): add RFC system for feature tracking
Implement wiki-based Request for Comments system for capturing,
reviewing, and tracking feature ideas through their lifecycle.

New commands:
- /rfc-create: Create RFC from conversation or clarified spec
- /rfc-list: List RFCs grouped by status
- /rfc-review: Submit Draft RFC for review
- /rfc-approve: Approve RFC for sprint planning
- /rfc-reject: Reject RFC with documented reason

RFC lifecycle: Draft → Review → Approved → Implementing → Implemented

Integration:
- /sprint-plan detects approved RFCs and offers selection
- /sprint-close updates RFC status on completion
- clarity-assist suggests /rfc-create for feature ideas

New MCP tool: allocate_rfc_number

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 12:38:02 -05:00
f0f4369eac Merge pull request 'fix: correct hooks.json structure with nested matcher objects' (#346) from fix/hooks-json-structure into development
Reviewed-on: #346
2026-01-31 21:14:44 +00:00
89e841d448 fix: correct hooks.json structure with nested matcher objects
The hooks.json files had incorrect structure. Claude Code requires:
- Top-level "hooks" object
- Event names as keys containing arrays
- Each array element must have "matcher" and nested "hooks" array

Fixed 8 plugins:
- clarity-assist
- claude-config-maintainer
- cmdb-assistant
- contract-validator
- data-platform
- pr-review
- projman
- viz-platform

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 16:07:25 -05:00
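
A sketch of the corrected layout this commit describes (top-level `hooks` object, event arrays, and a nested `hooks` array inside each matcher object); the event name, matcher, and command path are illustrative.

```bash
#!/usr/bin/env bash
cat > hooks/hooks.json <<'EOF'
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "hooks/branch-check.sh" }
        ]
      }
    ]
  }
}
EOF
```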
b6b09c1754 Merge pull request 'fix: correct hooks.json structure per Claude Code documentation' (#344) from fix/hooks-json-structure into development
Reviewed-on: #344
2026-01-31 19:28:26 +00:00
94d8f03cf9 fix: correct hooks.json structure per Claude Code documentation
Three plugins had incorrect hooks.json structure that caused hooks to fail:

- pr-review: SessionStart used nested `hooks` array without matcher
- cmdb-assistant: SessionStart used nested `hooks` array without matcher
- git-flow: Used completely wrong format (array with `event` field)

Per Claude Code documentation:
- Without matcher: direct `type`/`command` in the event array
- With matcher: nested `hooks` array inside matcher object

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 14:26:51 -05:00
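
And a companion sketch of the "without matcher" layout this commit describes for SessionStart hooks, with `type`/`command` placed directly in the event array; the command path is illustrative.

```bash
#!/usr/bin/env bash
cat > hooks/hooks.json <<'EOF'
{
  "hooks": {
    "SessionStart": [
      { "type": "command", "command": "hooks/startup-check.sh" }
    ]
  }
}
EOF
```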
f23e047842 Merge pull request 'docs: add mandatory behavior rules to CLAUDE.md' (#342) from fix/revert-corrupted-hooks-and-update-rules into development
Reviewed-on: #342
2026-01-30 23:14:53 +00:00
dc59cb3713 docs: add mandatory behavior rules to CLAUDE.md
Add a prominent behavior rules section at the top of CLAUDE.md to ensure
thorough verification before claiming completion and to take user
reports of issues at face value.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 18:13:28 -05:00
a78bde2e42 Merge pull request 'fix: address critical issues from codebase analysis' (#340) from fix/critical-issues-from-analysis into development
Reviewed-on: #340
2026-01-30 23:07:21 +00:00
f082b78c0b fix: address critical issues from codebase analysis
- Add hooks declarations to 9 plugins missing them in marketplace.json
- Change .mcp.json to use relative paths (portable across users)
- Fix pr-review hook schema to use standard nested hooks structure
- Fix token exposure in cmdb-assistant startup-check.sh (use curl -K)
- Update version to 5.4.1 in marketplace.json, README.md
- Fix CANONICAL-PATHS.md version (was incorrectly showing 5.5.0)

All 12 plugins now have hooks properly registered.
All validations pass.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 18:06:04 -05:00
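
The `curl -K` fix mentioned above relies on curl's config-file option, which keeps the token out of the process list; a rough sketch, with the NetBox URL/token variables and API path as placeholders:

```bash
#!/usr/bin/env bash
# Token is written to a private temp file and read via -K/--config,
# so it never appears as a command-line argument.
CURL_CFG="$(mktemp)"
trap 'rm -f "$CURL_CFG"' EXIT
cat > "$CURL_CFG" <<EOF
header = "Authorization: Token ${NETBOX_TOKEN}"
url = "${NETBOX_URL}/api/status/"
EOF
curl --silent --fail -K "$CURL_CFG"
```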
f684b47161 Merge pull request 'refactor: extract skills from commands across 10 plugins' (#338) from refactor/all-plugins-skills-extraction into development
Reviewed-on: #338
2026-01-30 22:35:20 +00:00
7c8a20c804 refactor: extract skills from commands across 8 plugins
Refactored commands to extract reusable skills following the
Commands → Skills separation pattern. Each command is now <50 lines
and references skill files for detailed knowledge.

Plugins refactored:
- claude-config-maintainer: 5 commands → 7 skills
- code-sentinel: 3 commands → 2 skills
- contract-validator: 5 commands → 6 skills
- data-platform: 10 commands → 6 skills
- doc-guardian: 5 commands → 6 skills (replaced nested dir)
- git-flow: 8 commands → 7 skills

Skills contain: workflows, validation rules, conventions,
reference data, tool documentation

Commands now contain: YAML frontmatter, agent assignment,
skills list, brief workflow steps, parameters

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 17:32:24 -05:00
aad02ef2d9 Merge branch 'refactor/viz-platform-skills-extraction' into refactor/all-plugins-skills-extraction 2026-01-30 17:31:56 -05:00
db8ffa23ec Merge pr-review branch, keep clarity-assist changes 2026-01-30 17:31:09 -05:00
5bf1271347 refactor(clarity-assist): extract skills from commands
Extract shared knowledge from clarify.md and quick-clarify.md into
reusable skill files:
- 4d-methodology.md: Core 4-phase clarification process
- nd-accommodations.md: Neurodivergent-friendly question patterns
- clarification-techniques.md: Anti-patterns and question templates
- escalation-patterns.md: Mode switching guidelines

Commands slimmed from 149/96 lines to 44/49 lines respectively.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 17:23:40 -05:00
747a2b15e5 refactor(cmdb-assistant): extract skills and slim commands
- Extract 9 skill files from command knowledge:
  - mcp-tools-reference.md: Complete NetBox MCP tools reference
  - system-discovery.md: Bash commands for system info gathering
  - device-registration.md: Device registration workflow
  - sync-workflow.md: Machine sync process
  - audit-workflow.md: Data quality audit checks
  - ip-management.md: IP/prefix management and conflict detection
  - topology-generation.md: Mermaid diagram generation
  - change-audit.md: NetBox change audit workflow
  - visual-header.md: Standard visual header pattern

- Slim all 11 commands to under 60 lines:
  - cmdb-sync.md: 348 -> 57 lines
  - cmdb-register.md: 334 -> 51 lines
  - ip-conflicts.md: 238 -> 58 lines
  - cmdb-audit.md: 207 -> 58 lines
  - cmdb-topology.md: 194 -> 54 lines
  - initial-setup.md: 176 -> 74 lines
  - change-audit.md: 175 -> 57 lines
  - cmdb-site.md: 68 -> 50 lines
  - cmdb-ip.md: 65 -> 52 lines
  - cmdb-device.md: 64 -> 55 lines
  - cmdb-search.md: 46 lines (unchanged)

- Update agent to reference skills for best practices
- Preserve existing netbox-patterns skill

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 17:21:21 -05:00
5152cda161 refactor(viz-platform): extract skills and slim commands
Extract shared knowledge from 10 commands into 7 reusable skills:
- mcp-tools-reference.md: All viz-platform MCP tool signatures
- theming-system.md: Theme tokens, CSS variables, color palettes
- accessibility-rules.md: WCAG contrast, color-blind safe palettes
- dmc-components.md: DMC categories, validation, common props
- responsive-design.md: Breakpoints, mobile-first, grid config
- chart-types.md: Plotly chart types, export formats, resolution
- layout-templates.md: Dashboard templates, filter types

All commands now reference skills via "Skills to Load" section.
Commands reduced from 1396 lines total to 427 lines (69% reduction).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 16:59:26 -05:00
80b9e919c7 refactor(contract-validator): extract skills from commands
Extract shared knowledge from 5 command files into 6 reusable skills:
- plugin-discovery.md: Plugin scanning and discovery
- interface-parsing.md: README.md and CLAUDE.md parsing
- dependency-analysis.md: MCP server and data flow analysis
- validation-rules.md: Compatibility and agent validation
- mcp-tools-reference.md: Available MCP tools
- visual-output.md: Standard formatting and headers

Slim commands from 164-263 lines down to 44-55 lines each.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 16:57:13 -05:00
3437ece76e Merge pull request 'refactor(projman): extract skills and consolidate commands' (#336) from refactor/projman-skills-commands-consolidation into development
Reviewed-on: #336
2026-01-30 20:03:57 +00:00
2e65b60725 refactor(projman): extract skills and consolidate commands
Major refactoring of projman plugin architecture:

Skills Extraction (17 new files):
- Extracted reusable knowledge from commands and agents into skills/
- branch-security, dependency-management, git-workflow, input-detection
- issue-conventions, lessons-learned, mcp-tools-reference, planning-workflow
- progress-tracking, repo-validation, review-checklist, runaway-detection
- setup-workflows, sprint-approval, task-sizing, test-standards, wiki-conventions

Command Consolidation (17 → 12 commands):
- /setup: consolidates initial-setup, project-init, project-sync (--full/--quick/--sync)
- /debug: consolidates debug-report, debug-review (report/review modes)
- /test: consolidates test-check, test-gen (run/gen modes)
- /sprint-status: absorbs sprint-diagram via --diagram flag

Architecture Cleanup:
- Remove plugin-level mcp-servers/ symlinks (6 plugins)
- Remove plugin README.md files (12 files, ~2000 lines)
- Update all documentation to reflect new command structure
- Fix documentation drift in CONFIGURATION.md, COMMANDS-CHEATSHEET.md

Commands are now thin dispatchers (~20-50 lines) that reference skills.
Agents reference skills for domain knowledge instead of inline content.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 15:02:16 -05:00
8fe685037e Merge pull request 'perf(hooks): Sprint 8 - Hook Efficiency Quick Wins' (#334) from feat/sprint-8-hook-efficiency into development
Reviewed-on: #334
2026-01-30 18:24:40 +00:00
c9276d983b docs(changelog): add Sprint 8 hook efficiency changes to Unreleased
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 13:22:12 -05:00
63327ecf65 perf(hooks): improve hook efficiency with early exits and cooldowns
Sprint 8 - Hook Efficiency Quick Wins (v5.5.0)

- Remove viz-platform SessionStart hook (zero value, just echoed "loaded")
- Add early git check to git-flow branch-check.sh (skip JSON parsing for non-git commands)
- Add early git check to git-flow commit-msg-check.sh (skip Python spawn for non-commit commands)
- Add 60-second cooldown to project-hygiene cleanup.sh (reduce find operations)

Closes #321, #322, #323, #324

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 13:19:03 -05:00
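
The early-exit and cooldown patterns from this commit, combined into one hedged sketch; the stdin payload handling, stamp-file location, and variable names are assumptions, while the cheap pre-check and the 60-second window come from the commit message.

```bash
#!/usr/bin/env bash
# Early exit: skip expensive JSON parsing unless the payload mentions git.
INPUT="$(cat)"
case "$INPUT" in
    *git*) ;;          # likely a git command, continue with the real checks
    *)     exit 0 ;;   # not git-related, do nothing
esac

# Cooldown: skip the cleanup's find operations if we ran within 60 seconds.
STAMP="/tmp/project-hygiene.last-run"
NOW=$(date +%s)
LAST=$(cat "$STAMP" 2>/dev/null || echo 0)
if [ $((NOW - LAST)) -lt 60 ]; then
    exit 0
fi
echo "$NOW" > "$STAMP"
```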
3839575272 Merge pull request 'fix(projman): add YAML frontmatter and remove unsupported model fields' (#332) from fix/projman-command-frontmatter into development
Reviewed-on: #332
2026-01-30 17:15:37 +00:00
11d77ebe84 revert: remove unsupported defaultModel and model fields
Claude Code rejects `defaultModel` in plugin.json and `model` in agent
frontmatter with "Unrecognized key" validation error.

Removed:
- defaultModel from 6 plugin.json files
- model from 7 agent frontmatter files
- docs/MODEL-RECOMMENDATIONS.md (deleted)
- Model config sections from CONFIGURATION.md and CLAUDE.md
- Model validation from validate-marketplace.sh

This reverts Sprint 7 (v5.4.0) multi-model feature that was never
supported by Claude Code's plugin schema.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 12:09:38 -05:00
e190eb8b28 Merge pull request 'fix(projman): add missing YAML frontmatter to command files' (#329) from fix/projman-command-frontmatter into development
Reviewed-on: #329
2026-01-30 16:49:20 +00:00
c81c4a9981 fix(projman): add missing YAML frontmatter to command files
clear-cache.md and suggest-version.md were missing the required YAML
frontmatter with description field. This caused Claude Code to skip
loading the entire projman plugin's commands.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 11:45:56 -05:00
45af713366 Merge pull request 'fix(hooks): remove all venv auto-repair code that deleted .venv directories' (#327) from fix/remove-venv-auto-repair into development
Reviewed-on: #327
2026-01-30 16:34:58 +00:00
9550a85f4d fix(hooks): remove all venv auto-repair code that deleted .venv directories
BREAKING: Removes automatic venv management that was causing session failures

Changes:
- Delete scripts/venv-repair.sh (was deleting and recreating venvs)
- Remove auto-repair code from projman/hooks/startup-check.sh
- Remove venv-repair call from scripts/post-update.sh
- Remove rm -rf .venv instructions from docs/UPDATING.md and CONFIGURATION.md
- Update docs/CANONICAL-PATHS.md to remove venv-repair.sh reference

Additionally:
- Add Pre-Change Protocol to CLAUDE.md (mandatory dependency check before edits)
- Add Pre-Change Protocol enforcement to claude-config-maintainer plugin
- Add Development Context section to CLAUDE.md clarifying which plugins are
  used in this project vs only being developed
- Reorganize commands table to separate relevant vs non-relevant commands

The venv auto-repair was the root cause of repeated MCP server failures,
requiring manual setup.sh runs after every session start.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 11:33:25 -05:00
e1d7ec46ae Merge pull request 'fix(plugins): remove broken mcpServers references that broke plugin loading' (#325) from fix/remove-broken-mcp-references into development
Reviewed-on: #325
2026-01-29 23:10:25 +00:00
c8b91f6a87 fix(plugins): remove broken mcpServers references that broke plugin loading
The MCP consolidation commit (afd4c44) deleted plugin-level .mcp.json files
but left references to them in plugin.json and marketplace.json. This caused
7 plugins to fail loading (projman, pr-review, cmdb-assistant, data-platform,
viz-platform, contract-validator, and indirectly git-flow/clarity-assist).

Changes:
- Remove mcpServers field from 6 plugin.json files (file no longer exists)
- Remove mcpServers field from 6 marketplace.json entries
- Add file reference validation to validate-marketplace.sh:
  - Validates mcpServers references point to existing files
  - Validates hooks references point to existing files
  - Validates commands references point to existing paths
- Add pre-commit hook (.git/hooks/pre-commit) to enforce validation

The validation script will now FAIL if any config file references a
non-existent file, preventing this class of bug from happening again.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-29 18:09:08 -05:00
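
The file-reference validation added here could look roughly like the sketch below; the `plugins/*/plugin.json` layout and the assumption that references are relative to the plugin directory are guesses, while the three checked fields come from the commit message.

```bash
#!/usr/bin/env bash
status=0
for manifest in plugins/*/plugin.json; do
    dir="$(dirname "$manifest")"
    refs=$(jq -r '[.mcpServers, .hooks, .commands] | flatten
                  | map(select(type == "string")) | .[]' "$manifest")
    for ref in $refs; do
        if [ ! -e "$dir/$ref" ]; then
            echo "FAIL: $manifest references missing path: $ref" >&2
            status=1
        fi
    done
done
exit $status
```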
ce106ace8a Merge pull request 'fix(mcp): consolidate all MCP servers at marketplace root' (#319) from fix/consolidate-mcp-servers into development
Reviewed-on: #319
2026-01-29 17:12:51 +00:00
afd4c44d11 fix(mcp): consolidate all MCP servers at marketplace root
Move all MCP server declarations from individual plugin .mcp.json files
to a single .mcp.json at the marketplace root. This fixes MCP loading
failures where only one plugin's MCP would load.

- Add .mcp.json at marketplace root with all 5 servers
- Remove plugin-level .mcp.json files (projman, pr-review, cmdb-assistant,
  data-platform, viz-platform, contract-validator)
- Update CLAUDE.md to reflect new architecture

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-29 12:12:08 -05:00
7c3a2ac31c Merge pull request 'fix(projman): remove automatic cache clearing from session start hook' (#317) from fix/remove-auto-cache-clear into development
Reviewed-on: #317
2026-01-29 16:59:53 +00:00
e0ab4c2ddf fix(projman): remove automatic cache clearing from session start hook
The startup-check.sh hook was clearing ~/.claude/plugins/cache/ at every
session start, which was aggressive and potentially disruptive. Cache
clearing is now a manual operation via the new /clear-cache command.

Changes:
- Remove automatic cache clearing from startup-check.sh
- Add /clear-cache command for manual cache clearing when needed

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-29 11:58:31 -05:00
5f82f8ebbd Merge pull request 'fix(gitea-mcp): accept string or integer for numeric params' (#315) from fix/mcp-integer-type-coercion into development
Reviewed-on: #315
2026-01-29 03:20:10 +00:00
b492a13702 fix(gitea-mcp): accept string or integer for numeric params
MCP library validates schema BEFORE call_tool handler runs, so
our _coerce_types function never gets a chance to convert strings.

Changed all integer fields to accept both types:
- issue_number, milestone_id, pr_number, depends_on, milestone, limit, position

This fixes: "Input validation error: '312' is not of type 'integer'"

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 22:17:19 -05:00
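
One way to express "accept both types" in the tool's JSON Schema is a type array, as in this trimmed, illustrative fragment (the actual schema shape in server.py may differ):

```bash
#!/usr/bin/env bash
cat <<'EOF'
{
  "type": "object",
  "properties": {
    "issue_number": { "type": ["integer", "string"] },
    "milestone_id": { "type": ["integer", "string"] }
  }
}
EOF
```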
5aaab4cb9a Merge pull request 'fix(doc-guardian): make hook silent by default' (#313) from fix/312-doc-guardian-silent-hook into development
Reviewed-on: #313
2026-01-29 03:12:40 +00:00
3c3b3b4575 fix(doc-guardian): make hook silent by default
- Remove all output by default to prevent workflow interruption
- Queue changes silently to .doc-guardian-queue
- Add file+type deduplication (same file won't be queued twice)
- Add DOC_GUARDIAN_VERBOSE=1 env var for opt-in notifications
- Users run /doc-sync or /doc-audit to process queue

Fixes #312 (partial - addresses issues 1, 2, 3)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 22:12:09 -05:00
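
A hedged sketch of the queueing behaviour described above; the `file|type` record format and argument handling are assumptions, while the silent default, the file+type dedup, and the DOC_GUARDIAN_VERBOSE opt-in come from the commit message.

```bash
#!/usr/bin/env bash
QUEUE=".doc-guardian-queue"
FILE="$1"    # edited file (assumed to arrive as an argument)
TYPE="$2"    # kind of drift detected
ENTRY="${FILE}|${TYPE}"

# Deduplicate: the same file+type pair is only queued once.
if ! grep -qxF "$ENTRY" "$QUEUE" 2>/dev/null; then
    echo "$ENTRY" >> "$QUEUE"
fi

# Silent by default; notifications are opt-in.
if [ "${DOC_GUARDIAN_VERBOSE:-0}" = "1" ]; then
    echo "doc-guardian: drift queued for $FILE ($TYPE)"
fi
```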
6e9b703151 Merge pull request 'chore: release v5.4.0' (#310) from chore/release-v5.4.0 into development
Reviewed-on: #310
2026-01-29 03:02:36 +00:00
b603743811 chore: release v5.4.0
- CHANGELOG.md: [Unreleased] → [5.4.0] - 2026-01-28
- README.md: title → v5.4.0
- marketplace.json: version → 5.4.0
- CLAUDE.md: version → 5.4.0

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 22:00:58 -05:00
a63ccc079d Merge pull request 'chore: add Sprint 7 changelog and fix version table' (#309) from chore/sprint-7-changelog into development
Reviewed-on: #309
2026-01-29 02:59:12 +00:00
d4481ec09f chore: add Sprint 7 changelog and fix version table
- Add [Unreleased] section with Sprint 7 multi-model support changes
- Fix CLAUDE.md plugin version table to match actual plugin.json files

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 21:58:10 -05:00
50951378f7 Merge pull request '[Sprint 7] feat: add model field validation to marketplace script' (#308) from feat/306-model-validation into development
Reviewed-on: #308
2026-01-29 02:56:24 +00:00
b3975c2f4f Merge pull request '[Sprint 7] feat: add defaultModel to plugin manifests' (#307) from feat/305-plugin-defaults into development
Reviewed-on: #307
2026-01-29 02:56:08 +00:00
8a95e061ad feat: add model field validation to marketplace script
Add validation for:
- defaultModel field in plugin.json (must be opus|sonnet|haiku)
- model field in agent frontmatter (must be opus|sonnet|haiku)

The validation passes when fields are absent (optional) but errors
if present with invalid values.

Fixes #306

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 21:55:16 -05:00
4983cc9feb feat: add defaultModel to plugin manifests
Add defaultModel: sonnet to all plugin manifests that have agents,
establishing the plugin-level default in the model inheritance chain.

Version bumps:
- projman: 3.2.0 → 3.3.0 (minor: new feature)
- pr-review: 1.0.0 → 1.1.0 (minor: new feature)
- data-platform: 1.0.0 → 1.1.0 (minor: new feature)
- viz-platform: 1.0.0 → 1.1.0 (minor: new feature)
- code-sentinel: 1.0.0 → 1.0.1 (patch: config addition)
- contract-validator: 1.0.0 → 1.1.0 (minor: new feature)

Fixes #305

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 21:53:36 -05:00
cf4d1b595c feat: add model:haiku to validation agents
- viz-platform/component-check.md - simple prop validation
- contract-validator/agent-check.md - quick verification

Fixes #304

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 21:52:02 -05:00
5aff53972e feat: add model:opus to critical reasoning agents
- projman/planner.md - architecture decisions
- projman/code-reviewer.md - quality review
- pr-review/security-reviewer.md - security analysis
- code-sentinel/security-reviewer.md - security scanning
- data-platform/data-analysis.md - complex data insights

Fixes #303

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 21:51:20 -05:00
d429319392 docs: add model recommendations documentation
- Create docs/MODEL-RECOMMENDATIONS.md with task-type guidance
- Update docs/CONFIGURATION.md with model configuration section
- Update CLAUDE.md with agent model configuration overview

Fixes #302

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 21:48:17 -05:00
5b1b0f609c Merge pull request 'fix(git-flow): use array format for hooks.json' (#300) from fix/git-flow-hooks-format into development
Reviewed-on: #300
2026-01-29 02:25:49 +00:00
0acd42ea65 fix(git-flow): use array format for hooks.json
Changed from nested object format to array format to fix
"PreToolUse:Bash hook error" with no message.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 21:24:52 -05:00
8c1890c258 Merge pull request 'docs: add protected branch rule to CLAUDE.md' (#298) from docs/add-protected-branch-rule into development
Reviewed-on: #298
2026-01-29 02:18:34 +00:00
e44d97edc2 docs: add protected branch rule to CLAUDE.md
NEVER push directly to development/main. Always use feature branch + PR.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 21:16:36 -05:00
dc96207da7 Merge pull request 'docs: consolidate CLAUDE.md redundant rules sections' (#295) from docs/claude-md-optimization into development
Reviewed-on: #295
2026-01-29 02:14:43 +00:00
c998c0a2dc Merge pull request 'docs(projman): add warning about manual issue closing in debug-review' (#294) from docs/debug-review-manual-close-warning into development
Reviewed-on: #294
2026-01-29 02:14:26 +00:00
14633736aa docs: add mandatory CLI tools prohibition rule
NEVER use gh, tea, curl to APIs, or any CLI for external services.
MCP tools are the ONLY allowed method.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 21:11:28 -05:00
af46046bc8 docs: consolidate CLAUDE.md rules sections
Merged two overlapping rules sections into one unified section:
- "MANDATORY BEHAVIOR RULES" + "CRITICAL: Rules" → "RULES"
- Converted verbose lists to scannable tables
- Reduced from 381 to 332 lines (-13%)
- All rules preserved, just better organized

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 21:02:33 -05:00
f669479122 docs(projman): add warning about manual issue closing in debug-review
PRs merged to development don't trigger Gitea's auto-close feature
(only merges to main do). Added warnings in Steps 13 and 15 to remind
about mandatory manual issue closing after PR merge.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 20:53:30 -05:00
d457e458a8 Merge pull request 'feat(projman): add SessionStart version sync check' (#292) from fix/issue-290-version-sync-check into development
Reviewed-on: #292
2026-01-29 01:41:17 +00:00
7c4959fb77 feat(projman): add SessionStart version sync check
Adds early detection of version drift between README.md, marketplace.json,
and CHANGELOG.md at session start. When versions don't match, displays
a warning and suggests running /suggest-version to analyze and fix.

This addresses acceptance criteria #4 from issue #290:
- [x] SessionStart warns about version drift

Remaining criteria for future PRs:
- [ ] Version mismatch detected before commit (PreToolUse hook)
- [ ] /sprint-close includes version bump step (enforcement)
- [ ] Release workflow works with protected branches (PR creation)

Partial fix for #290

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 20:39:49 -05:00
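
A rough sketch of the drift check; the grep/jq extraction patterns assume the version formats mentioned elsewhere in this log (a "vX.Y.Z" README title, a top-level version field in marketplace.json, and "[X.Y.Z]" CHANGELOG headings), so treat them as assumptions rather than the actual implementation.

```bash
#!/usr/bin/env bash
readme=$(grep -oE 'v[0-9]+\.[0-9]+\.[0-9]+' README.md | head -1 | tr -d 'v')
market=$(jq -r '.version' marketplace.json)          # field name assumed
changelog=$(grep -oE '\[[0-9]+\.[0-9]+\.[0-9]+\]' CHANGELOG.md | head -1 | tr -d '[]')

if [ "$readme" != "$market" ] || [ "$market" != "$changelog" ]; then
    echo "WARNING: version drift detected:" >&2
    echo "  README.md=$readme marketplace.json=$market CHANGELOG.md=$changelog" >&2
    echo "  Run /suggest-version to analyze and fix." >&2
fi
```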
ba4db941ab Merge pull request 'fix(doc-guardian): use passive wording and add debouncing to reduce interruptions' (#291) from fix/issue-287-doc-guardian-hook-wording into development
Reviewed-on: #291
2026-01-29 01:34:43 +00:00
1dad393eaf fix(doc-guardian): use passive wording and add debouncing to reduce interruptions
The PostToolUse hook was causing workflow interruptions because:
1. Actionable language ("update needed") triggered Claude to seek confirmation
2. Rapid edits (4+ in sequence) generated multiple notifications

Changes:
- Message changed from "update needed" to "drift queued" (passive, informational)
- Added 5-second debouncing: same-type edits within window are silently queued
- Added queue clearing step to doc-sync.md command

Note: Issue #287 also mentions URL restriction behavior, but this was not
found in the current codebase - may have been a different component or
already fixed. Marking as partial fix.

Fixes #287

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 20:32:46 -05:00
d8971efafe Merge pull request 'release: v5.3.0 - Sprint 6 Visual Branding Overhaul' (#288) from release/v5.3.0 into development
Reviewed-on: #288
2026-01-28 22:47:46 +00:00
e3a8ebd4da release: v5.3.0 - Sprint 6 Visual Branding Overhaul
Sprint 6 adds consistent visual headers across all 12 plugins:
- Projman: Double-line headers with phase indicators
- Other plugins: Single-line headers with plugin icons
- 109 files updated (23 agents + 86 commands)
- 4 Wiki branding specification pages

Also fixes documentation drift:
- Version sync across CLAUDE.md, marketplace.json, README.md
- Added 18 missing commands to documentation

Closes Sprint 6 milestone.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 17:45:47 -05:00
193908721c Merge pull request 'feat(projman): add visual output requirements (Sprint 6 Batch 1)' (#284) from feat/sprint-6-projman-visual into development
Reviewed-on: #284
2026-01-28 22:37:46 +00:00
7e9f70d0a7 Merge pull request 'feat(plugins): Add visual output headers to other plugin agents and commands (#275, #276)' (#285) from feat/sprint-6-other-plugins-visual into development
Reviewed-on: #285
2026-01-28 22:37:37 +00:00
86413c4801 docs: sync documentation with Sprint 4 & 5 commands
- Update CLAUDE.md version from 5.1.0 to 5.2.0
- Update marketplace.json plugin versions to match CHANGELOG
  - projman 3.2.0 → 3.3.0
  - git-flow 1.0.0 → 1.2.0
  - pr-review 1.0.0 → 1.1.0
  - clarity-assist 1.0.0 → 1.2.0
  - doc-guardian 1.0.0 → 1.1.0
  - claude-config-maintainer 1.0.0 → 1.1.0
  - cmdb-assistant 1.1.0 → 1.2.0
  - data-platform 1.0.0 → 1.2.0
  - viz-platform 1.0.0 → 1.1.0
  - contract-validator 1.0.0 → 1.2.0
- Add 18 missing commands to README.md and COMMANDS-CHEATSHEET.md
  - /sprint-diagram, /pr-diff, /changelog-gen, /doc-coverage
  - /stale-docs, /config-diff, /config-lint, /cmdb-topology
  - /change-audit, /ip-conflicts, /lineage-viz, /dbt-test
  - /data-quality, /chart-export, /accessibility-check
  - /breakpoints, /dependency-graph
- Add contract-validator plugin to COMMANDS-CHEATSHEET.md

Closes #277

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 17:32:24 -05:00
b5d36865ee feat(plugins): add Visual Output headers to all other plugin commands
Add single-line visual headers to 66 command files across 10 plugins:
- clarity-assist (2 commands): 💬
- claude-config-maintainer (5 commands): ⚙️
- cmdb-assistant (11 commands): 🖥️
- code-sentinel (3 commands): 🔒
- contract-validator (5 commands): 
- data-platform (10 commands): 📊
- doc-guardian (5 commands): 📝
- git-flow (8 commands): 🔀
- pr-review (7 commands): 🔍
- viz-platform (10 commands): 🎨

Each command now displays a consistent header at execution start:
┌────────────────────────────────────────────────────────────────┐
│  [icon] PLUGIN-NAME · Command Description                       │
└────────────────────────────────────────────────────────────────┘

Addresses #275 (other plugin commands visual output)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 17:24:49 -05:00
79ee93ea88 feat(plugins): add visual output requirements to all plugin agents
Add single-line box headers to 19 agents across all non-projman plugins:
- clarity-assist (1): Clarity Coach
- claude-config-maintainer (1): Maintainer
- code-sentinel (2): Security Reviewer, Refactor Advisor
- doc-guardian (1): Doc Analyzer
- git-flow (1): Git Assistant
- pr-review (5): Coordinator, Security, Maintainability, Performance, Test
- data-platform (2): Data Analysis, Data Ingestion
- viz-platform (3): Component Check, Layout Builder, Theme Setup
- contract-validator (2): Agent Check, Full Validation
- cmdb-assistant (1): CMDB Assistant

Uses single-line box format (not double-line like projman).

Part of #275

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 17:15:05 -05:00
3561025dfc feat(projman): add visual output requirements to agents and commands
Add Visual Output sections to all projman files:
- 4 agent files with phase-specific headers (PLANNING, EXECUTION, CLOSING)
- 16 command files with appropriate headers

Headers use double-line box characters for projman branding:
- Planning phase: TARGET PLANNING
- Execution phase: LIGHTNING EXECUTION (+ progress block for orchestrator)
- Closing phase: FLAG CLOSING
- Setup commands: GEAR SETUP

Closes #273, #274

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 17:10:49 -05:00
d27c440631 Merge pull request 'fix(gitea-mcp): address MCP tool issues from Sprint 6' (#282) from fix/281-mcp-tool-issues into development
Reviewed-on: #282
2026-01-28 21:48:07 +00:00
e56d685a68 fix(gitea-mcp): address MCP tool issues from Sprint 6
Fixes #281 - Multiple MCP tool issues discovered during sprint execution

## Changes

1. **list_issues Token Overflow** (Issue 1)
   - Added `milestone` parameter to filter issues server-side
   - Reduces response size by filtering at API level instead of client-side

2. **Type Coercion for MCP Serialization** (Issues 2 & 4)
   - Added `_coerce_types()` helper function in server.py
   - Handles integers passed as strings (milestone_id, issue_number, etc.)
   - Handles arrays passed as JSON strings (labels, tags, etc.)
   - Applied to all tool calls automatically

3. **Sprint Approval Check Clarification** (Issue 3)
   - Updated sprint-start.md to clarify approval is RECOMMENDED, not enforced
   - Changed STOP/block language to WARN/suggest language
   - Added note explaining this is workflow guidance, not code-enforced

## Files Changed
- mcp-servers/gitea/mcp_server/gitea_client.py: Added milestone param
- mcp-servers/gitea/mcp_server/tools/issues.py: Pass milestone param
- mcp-servers/gitea/mcp_server/server.py: Type coercion + milestone schema
- plugins/projman/commands/sprint-start.md: Clarified approval check

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 15:57:42 -05:00
5638891d01 Merge pull request 'fix(projman): add new command verification step to sprint-close' (#279) from fix/sprint-close-new-command-verification into development
Reviewed-on: #279
2026-01-28 20:37:23 +00:00
611b50b150 fix(projman): add new command verification step to sprint-close
Addresses issue #278 - sprint-diagram command not discoverable after Sprint 4.

Root cause: Claude Code discovers commands at session start. Commands added
during a session are NOT discoverable until restart.

Prevention: Added step 7 "New Command Verification" to sprint-close workflow:
- Reminds about session restart requirement
- Creates follow-up verification task
- Explains why this happens

Lesson learned created: lessons/patterns/sprint-4---new-commands-not-discoverable-until-session-restart

Closes #278

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 15:36:11 -05:00
cde5c67134 Merge pull request 'feat(plugins): implement Sprint 5 documentation and fixes (#266-#269)' (#270) from feat/sprint-5-documentation into development
Reviewed-on: #270
2026-01-28 19:27:15 +00:00
baad41da98 feat(plugins): implement Sprint 5 documentation and fixes (#266-#269)
Release v5.2.0

Documentation:
- Add git-flow branching strategy guide (docs/BRANCHING-STRATEGY.md)
- Add clarity-assist ND support documentation (docs/ND-SUPPORT.md)
- Update DEBUGGING-CHECKLIST.md with Gitea auto-close behavior and MCP restart notes
- Update plugin READMEs to reference new documentation

Bug Fix:
- Add milestone parameter to update_issue MCP tool (gitea_client.py, server.py, tools/issues.py)

Version Updates:
- Marketplace version: 5.1.0 → 5.2.0
- README title: v5.1.0 → v5.2.0
- CHANGELOG: [Unreleased] → [5.2.0] - 2026-01-28

Closes #266, Closes #267, Closes #268, Closes #269

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 14:26:42 -05:00
f6d9fcaae2 Merge pull request 'docs: fix GITEA_REPO format documentation' (#264) from fix/gitea-repo-format-docs into development
Reviewed-on: #264
2026-01-28 18:45:24 +00:00
8d94bb606c docs: fix GITEA_REPO format documentation
Update documentation to reflect that GITEA_REPO expects owner/repo
format (e.g., my-org/my-repo) instead of separate GITEA_ORG and
GITEA_REPO variables.

This matches the actual MCP server implementation in config.py.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 13:44:46 -05:00
b175d4d890 Merge pull request 'development' (#262) from development into main
Reviewed-on: #262
2026-01-28 18:35:04 +00:00
6973f657d7 Merge pull request 'docs(changelog): add Sprint 4 changes to [Unreleased]' (#263) from docs/sprint-4-changelog into development
Reviewed-on: #263
2026-01-28 18:34:49 +00:00
a0d1b38c6e docs(changelog): add Sprint 4 changes to [Unreleased]
- 18 new commands across 8 plugins
- viz-platform accessibility tools and chart export
- MCP project directory detection fix
- Links to wiki implementation and lessons learned

Sprint 4 - Commands milestone closed.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 12:56:35 -05:00
c5f68256c5 Merge pull request 'feat(plugins): implement Sprint 4 commands (#241-#258)' (#261) from feat/sprint-4-commands into development
Reviewed-on: #261
2026-01-28 17:36:06 +00:00
9698e8724d feat(plugins): implement Sprint 4 commands (#241-#258)
Sprint 4 - Plugin Commands implementation adding 18 new user-facing
commands across 8 plugins as part of V5.2.0 Plugin Enhancements.

**projman:**
- #241: /sprint-diagram - Mermaid visualization of sprint issues

**pr-review:**
- #242: Confidence threshold config (PR_REVIEW_CONFIDENCE_THRESHOLD)
- #243: /pr-diff - Formatted diff with inline review comments

**data-platform:**
- #244: /data-quality - DataFrame quality checks (nulls, duplicates, outliers)
- #245: /lineage-viz - dbt lineage as Mermaid diagrams
- #246: /dbt-test - Formatted dbt test runner

**viz-platform:**
- #247: /chart-export - Export charts to PNG/SVG/PDF via kaleido
- #248: /accessibility-check - Color blind validation (WCAG contrast)
- #249: /breakpoints - Responsive layout configuration

**contract-validator:**
- #250: /dependency-graph - Plugin dependency visualization

**doc-guardian:**
- #251: /changelog-gen - Generate changelog from conventional commits
- #252: /doc-coverage - Documentation coverage metrics
- #253: /stale-docs - Flag outdated documentation

**claude-config-maintainer:**
- #254: /config-diff - Track CLAUDE.md changes over time
- #255: /config-lint - 31 lint rules for CLAUDE.md best practices

**cmdb-assistant:**
- #256: /cmdb-topology - Infrastructure topology diagrams
- #257: /change-audit - NetBox audit trail queries
- #258: /ip-conflicts - Detect IP conflicts and overlaps

Closes #241, #242, #243, #244, #245, #246, #247, #248, #249,
#250, #251, #252, #253, #254, #255, #256, #257, #258

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 12:02:26 -05:00
f3e1f42413 Merge pull request 'development' (#260) from development into main
Reviewed-on: #260
2026-01-28 16:44:43 +00:00
8a957b1b69 Merge pull request 'fix(mcp): capture CLAUDE_PROJECT_DIR from PWD before cd' (#259) from fix/mcp-project-dir-detection into development
Reviewed-on: #259
2026-01-28 16:44:32 +00:00
6e90064160 fix(mcp): capture CLAUDE_PROJECT_DIR from PWD before cd
All MCP server run.sh scripts now capture the original working
directory as CLAUDE_PROJECT_DIR before changing to the script
directory. This fixes the branch detection issue where MCP tools
detected the plugin repo's branch instead of the user's project branch.

This is a follow-up fix to #231 - the original fix relied on
CLAUDE_PROJECT_DIR being set by Claude Code, but it isn't.
Now we capture it ourselves from PWD at startup time.

Closes #231 (proper fix)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 11:43:07 -05:00
5c4e97a3f6 Merge pull request 'development' (#240) from development into main
Reviewed-on: #240
2026-01-28 16:02:47 +00:00
351be5a40d Merge pull request 'feat(projman): implement 8 improvement issues (#231-#238)' (#239) from feat/projman-improvements-231-238 into development
Reviewed-on: #239
2026-01-28 15:53:50 +00:00
67944a7e1c Merge branch 'fix/233-approval-token' into development 2026-01-28 10:51:42 -05:00
e37653f956 Merge branch 'fix/237-checkpoint-resume' into development 2026-01-28 10:51:42 -05:00
235e72d3d7 Merge branch 'fix/234-parallel-conflicts' into development 2026-01-28 10:51:42 -05:00
ba8e86e31c Merge branch 'fix/236-runaway-detection' into development 2026-01-28 10:51:42 -05:00
67f330be6c Merge branch 'fix/238-task-scoping' into development 2026-01-28 10:51:42 -05:00
445b744196 Merge branch 'fix/232-progress-visibility' into development 2026-01-28 10:51:42 -05:00
ad73c526b7 Merge branch 'fix/235-status-labels' into development 2026-01-28 10:51:42 -05:00
26310d05f0 feat(projman): add sprint approval requirement before execution (#233)
Sprint-plan approval workflow:
- Request explicit approval after creating issues
- Present scope summary (branches, files, dependencies)
- User must type "approve sprint N" to authorize
- Record approval in milestone description with timestamp

Sprint-start verification:
- Check milestone for "## Sprint Approval" section
- If missing, STOP and direct to /sprint-plan
- Extract approved scope (branches, files)
- Enforce scope during execution

Orchestrator scope enforcement:
- Verify approval before any execution
- Check each operation against approved scope
- Operations outside scope require re-approval

This separates planning (review) from execution (action),
preventing agents from executing without explicit user consent.

Closes #233

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:51:10 -05:00
459550e7d3 feat(projman): add checkpoint/resume for interrupted agent work (#237)
Executor checkpointing:
- Standard checkpoint comment format with branch, commit, phase
- Files modified with status (created, modified)
- Completed and pending steps tracking
- State notes for resumption context
- Save checkpoint after major steps, before stopping

Orchestrator resume detection:
- Scan issue comments for "## Checkpoint" markers
- Offer resume options: resume, start fresh, review details
- Verify branch exists and files match before resuming
- Dispatch executor with checkpoint context

Sprint-start integration:
- Checkpoint detection as first workflow step
- Resume flow documentation with example
- Checkpoint format specification

This enables resuming work after:
- Budget exhaustion (100 tool call limit)
- Agent failure/circuit breaker
- Manual interruption
- Session timeout

Closes #237

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:49:34 -05:00
a69a4d19d0 feat(projman): add file conflict prevention for parallel agents (#234)
Add pre-dispatch conflict detection:
- Analyze target files for each task before parallel dispatch
- Check for file overlap between tasks in same batch
- If overlap detected, sequentialize those specific tasks
- Example analysis showing conflict detection workflow

Branch isolation protocol:
- Each task MUST have its own branch
- Never have two agents work on the same branch
- Sequential merge after completion (not simultaneous)
- Handle merge conflicts by stopping second task

Conflict resolution rules:
- Same file → MUST sequentialize
- Same directory → Usually safe, review
- Shared config → Sequentialize
- Shared test fixture → Sequentialize or assign different files

This prevents parallel agents from modifying the same files
and causing git merge conflicts.

Closes #234

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:47:36 -05:00
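The overlap check described above is workflow guidance rather than shipped code; a hedged sketch of the same idea in Python (task shape assumed) would be:

```python
from itertools import combinations

def find_conflicts(tasks: dict[str, set[str]]) -> list[tuple[str, str, set[str]]]:
    """Return pairs of tasks whose target files overlap.

    tasks maps a task id to the set of files it plans to touch.
    Overlapping pairs must be sequentialized, not dispatched in parallel.
    """
    conflicts = []
    for (a, files_a), (b, files_b) in combinations(tasks.items(), 2):
        shared = files_a & files_b
        if shared:
            conflicts.append((a, b, shared))
    return conflicts

# Example: two tasks touching server.py -> run them sequentially.
print(find_conflicts({
    "#12": {"mcp_server/server.py", "tools/issues.py"},
    "#13": {"docs/README.md"},
    "#14": {"mcp_server/server.py"},
}))
```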
f2a62627d0 feat(projman): add runaway detection and circuit breaker for agents (#236)
Executor self-monitoring:
- 10+ calls without progress → stop and reassess
- Same error 3+ times → circuit breaker, report failure
- 50+ calls → mandatory progress update
- 80+ calls → budget warning, evaluate completion
- 100+ calls → hard stop, save checkpoint

Orchestrator monitoring:
- Detect stuck agents (no progress for X minutes)
- Intervention protocol for runaway agents
- Timeout guidelines by task size (XS: 15min, S: 30min, M: 45min)
- Recovery actions with Status/Failed label

This prevents agents from running indefinitely (400+ tool calls
observed in Sprint 3) and provides clear stopping criteria.

Closes #236

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:46:04 -05:00
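In the plugin these thresholds are prose instructions to the agent, not enforced code; as a sketch only, they map to actions roughly like this:

```python
def budget_action(tool_calls: int, same_error_count: int = 0) -> str:
    """Map the runaway-detection thresholds above to an executor action."""
    if same_error_count >= 3:
        return "circuit breaker: stop and report failure"
    if tool_calls >= 100:
        return "hard stop: save checkpoint"
    if tool_calls >= 80:
        return "budget warning: evaluate completion"
    if tool_calls >= 50:
        return "mandatory progress update"
    return "continue"
```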
0abf510ec0 feat(projman): add strict task sizing rules to prevent runaway agents (#238)
Add mandatory task scoping rules:
- XS: 1 file, 0-2 checklist items, ~30 tool calls
- S: 1 file, 2-4 checklist items, ~50 tool calls
- M: 2-3 files, 4-6 checklist items, ~80 tool calls
- L/XL: MUST be broken down into smaller tasks

Sprint 3 showed agents running 400+ tool calls on single tasks,
causing 1+ hour waits with no visibility. This enforces:
- Maximum task scope (M = 2-3 files, 80 tool calls)
- Mandatory breakdown for L/XL tasks
- Clear scoping checklist for planners
- Good/bad examples showing proper breakdown

Planner must refuse to create L/XL tasks without breakdown.

Closes #238

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:44:52 -05:00
008187a0a4 feat(projman): add structured progress comments for real-time visibility (#232)
Add structured progress comment format for agents:
- Standard format with Status, Phase, Tool Calls budget
- Completed/In-Progress/Blockers/Next sections
- Clear examples for starting, blocked, and failed states
- Guidance on when to post (every 20-30 tool calls)

Update sprint-status.md:
- Document how to parse progress comments
- Show enhanced in-progress display with tool call tracking
- Add progress comment detection to blocker analysis

This enables users to see:
- Real-time agent progress
- Tool call budget consumption
- Current phase and step
- Blockers as they occur

Closes #232

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:43:28 -05:00
4bd15e5deb feat(projman): add Status labels for accurate issue state tracking (#235)
- Add Status/In-Progress, Status/Blocked, Status/Failed, Status/Deferred labels
- Update orchestrator.md with Status Label Management section
- Update executor.md with honest Status Reporting requirements
- Update labels-reference.md with Status detection guidelines

Status labels enable accurate tracking:
- In-Progress: Work actively being done
- Blocked: Waiting for dependency/external factor
- Failed: Attempted but couldn't complete
- Deferred: Moved to future sprint

Agents must report honestly - never say "completed" when blocked/failed.

Closes #235

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:42:14 -05:00
8234683bc3 fix(gitea-mcp): use project directory for branch detection (#231)
The _get_current_branch() method was running git commands from the
installed plugin directory instead of the user's project directory.
This caused incorrect branch detection (always seeing 'main' from
the marketplace repo instead of the user's actual branch).

Fix: Use CLAUDE_PROJECT_DIR environment variable to get the correct
project directory and pass it as cwd to subprocess.run().

Fixes #231

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:38:17 -05:00
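The fix amounts to passing the project directory as `cwd`. A minimal sketch of the method (body assumed, not the repo's exact code):

```python
import os
import subprocess

def _get_current_branch() -> str:
    """Detect the branch of the user's project, not the plugin repo.

    CLAUDE_PROJECT_DIR is expected in the environment; the later
    follow-up fix (#259) has run.sh capture it from PWD before cd'ing
    into the server directory.
    """
    project_dir = os.environ.get("CLAUDE_PROJECT_DIR", os.getcwd())
    result = subprocess.run(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"],
        cwd=project_dir,
        capture_output=True,
        text=True,
        check=False,
    )
    return result.stdout.strip() or "unknown"
```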
5b3da8da85 Merge feat/230-breaking-change-detection into development
Resolved conflict in hooks.json - combined SessionStart and PostToolUse hooks

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:12:46 -05:00
894e015c01 Merge feat/229-session-auto-validate into development 2026-01-28 10:12:18 -05:00
a66a2bc519 Merge feat/228-schema-diff-hook into development 2026-01-28 10:12:13 -05:00
b8851a0ae3 Merge feat/227-vagueness-detection-hook into development 2026-01-28 10:12:08 -05:00
aee199e6cf Merge feat/226-branch-name-validation into development 2026-01-28 10:12:03 -05:00
223a2d626a Merge feat/225-commit-message-hook into development 2026-01-28 10:11:57 -05:00
b7fce0fafd docs(changelog): add Sprint 3 hooks implementation
- git-flow: commit message and branch name validation hooks
- clarity-assist: vagueness detection hook
- data-platform: schema diff detection hook
- contract-validator: SessionStart auto-validate and breaking change hooks
- Document MCP bug #231 (branch detection from wrong directory)

Sprint 3 - Hooks completed (issues #225-#230)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:11:51 -05:00
551c60fb45 feat(contract-validator): add breaking change detection hook (#230)
Implements PostToolUse hook to warn about breaking interface changes:
- Detects changes to plugin.json, hooks.json, .mcp.json, agents/*.md
- Compares with git HEAD to find removed/changed elements
- Warns on: removed hooks, changed matchers, removed MCP servers
- Warns on: plugin name changes, major version bumps
- Non-blocking, configurable via CONTRACT_VALIDATOR_BREAKING_WARN

Depends on #229 (SessionStart auto-validate infrastructure).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 10:05:02 -05:00
af6a42b2ac feat(git-flow): add branch name validation hook (#226)
Implements PreToolUse/Bash hook to validate branch naming convention:
- Validates type/description format
- Allowed types: feat, fix, chore, docs, refactor, test, perf, debug
- Enforces lowercase, hyphens, max 50 chars
- Non-branch commands pass through

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 18:02:20 -05:00
7cae21f7c9 feat(contract-validator): add SessionStart auto-validate hook (#229)
Implements smart SessionStart hook for plugin validation:
- Validates plugin contracts only when files change (smart mode)
- Caches file hashes to avoid redundant checks
- Non-blocking warnings for compatibility issues
- Configurable via CONTRACT_VALIDATOR_AUTO_CHECK

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 18:02:00 -05:00
8048fba931 feat(clarity-assist): add vagueness detection hook (#227)
Implements UserPromptSubmit hook to detect vague prompts:
- Checks for short prompts without context
- Detects ambiguous phrases ("fix it", "help me with")
- Suggests /clarity-assist when beneficial
- Non-blocking, configurable via CLARITY_ASSIST_AUTO_SUGGEST

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 18:01:53 -05:00
1b36ca77ab feat(git-flow): add commit message enforcement hook (#225)
Implements PreToolUse/Bash hook to validate conventional commit format:
- Validates type(scope): description format
- Supports all 10 types: feat, fix, docs, style, refactor, perf, test, chore, build, ci
- Optional scope support
- Helpful error messages with examples
- Non-commit commands pass through
- Uses Python for reliable JSON parsing

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 17:44:59 -05:00
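A hedged sketch of such a hook in Python. It assumes the hook receives JSON on stdin with the Bash command under `tool_input.command` and that a non-zero exit blocks the call; adjust to the actual hook contract the plugin uses.

```python
#!/usr/bin/env python3
"""Conventional-commit check for a PreToolUse/Bash hook (illustrative)."""
import json
import re
import sys

PATTERN = re.compile(
    r'^(feat|fix|docs|style|refactor|perf|test|chore|build|ci)'
    r'(\([a-z0-9-]+\))?: .+'
)

def main() -> int:
    payload = json.load(sys.stdin)
    command = payload.get("tool_input", {}).get("command", "")
    match = re.search(r'git commit\b.*?-m\s+["\'](.+?)["\']', command, re.DOTALL)
    if not match:
        return 0  # not a commit with an inline message -- pass through
    message = match.group(1).splitlines()[0]
    if PATTERN.match(message):
        return 0
    print(f"Commit message '{message}' is not conventional "
          f"(expected type(scope): description, e.g. fix(gitea-mcp): ...)",
          file=sys.stderr)
    return 2

if __name__ == "__main__":
    sys.exit(main())
```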
eb85ea31bb feat(data-platform): add schema diff detection hook (#228)
Implements PostToolUse hook to warn about potentially breaking schema changes:
- DROP COLUMN/TABLE/INDEX detection
- Column type changes (ALTER TYPE, MODIFY COLUMN)
- NOT NULL constraint additions
- RENAME operations
- ORM model field removals

Non-blocking - outputs warnings only.
Configurable via DATA_PLATFORM_SCHEMA_WARN env var.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 17:38:26 -05:00
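The detection described above boils down to pattern matching over the changed SQL/ORM text; a minimal sketch (patterns assumed from the list in the commit, not the hook's actual regexes):

```python
import re

BREAKING_PATTERNS = {
    "DROP": re.compile(r"\bDROP\s+(COLUMN|TABLE|INDEX)\b", re.IGNORECASE),
    "TYPE_CHANGE": re.compile(r"\b(ALTER\s+(COLUMN\s+\w+\s+)?TYPE|MODIFY\s+COLUMN)\b", re.IGNORECASE),
    "NOT_NULL": re.compile(r"\b(SET\s+NOT\s+NULL|ADD\s+COLUMN\b.*\bNOT\s+NULL)\b", re.IGNORECASE),
    "RENAME": re.compile(r"\bRENAME\s+(COLUMN|TO)\b", re.IGNORECASE),
}

def schema_warnings(diff_text: str) -> list[str]:
    """Return non-blocking warnings for potentially breaking schema changes."""
    return [
        f"[schema-diff] possible breaking change: {name}"
        for name, pattern in BREAKING_PATTERNS.items()
        if pattern.search(diff_text)
    ]
```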
8627d9e968 Merge pull request 'development' (#224) from development into main
Reviewed-on: #224
2026-01-27 20:16:30 +00:00
3da9adf44e Merge pull request 'docs: sync documentation with code changes for v5.1.0' (#223) from docs/sync-documentation-v5.1.0 into development
Reviewed-on: #223
2026-01-27 20:16:17 +00:00
bcb24ae641 docs: sync documentation with code changes for v5.1.0
Changes applied:
- Updated version references from 5.0.0 to 5.1.0 (CLAUDE.md, CANONICAL-PATHS.md, setup.sh)
- Added missing projman commands to README.md (/suggest-version, /proposal-status)
- Added missing cmdb-assistant commands to README.md (/cmdb-audit, /cmdb-register, /cmdb-sync)
- Added /proposal-status to projman section in COMMANDS-CHEATSHEET.md
- Added 3 cmdb-assistant commands to COMMANDS-CHEATSHEET.md
- Added /suggest-version documentation to plugins/projman/README.md
- Added 4 missing scripts to CANONICAL-PATHS.md (verify-hooks.sh, setup-venvs.sh, venv-repair.sh, release.sh)

Fixes 14 documentation drift issues identified by /doc-guardian:doc-audit.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 15:12:11 -05:00
c8ede3c30b Merge pull request 'development' (#222) from development into main
Reviewed-on: #222
2026-01-27 19:58:14 +00:00
fb1c664309 Merge pull request 'fix(post-update): clear Claude plugin cache on update' (#221) from fix/clear-plugin-cache-on-update into development
Reviewed-on: #221
2026-01-27 19:57:59 +00:00
90f19dfc0f fix(post-update): clear Claude plugin cache on update
The Claude plugin cache at ~/.claude/plugins/cache/leo-claude-mktplace/
holds versioned copies of .mcp.json files. When we update .mcp.json
to use run.sh instead of direct python paths, the cache retains old
versions, causing MCP servers to fail.

Now post-update.sh clears this cache, forcing Claude to read fresh
configs from the installed marketplace on next session start.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 14:55:42 -05:00
75492b0d38 Merge pull request 'development' (#220) from development into main
Reviewed-on: #220
2026-01-27 19:42:17 +00:00
54bb347ee1 Merge pull request 'fix(gitea-mcp): add fix/* and other branch patterns to permissions' (#219) from fix/branch-permission-patterns into development
Reviewed-on: #219
2026-01-27 19:42:02 +00:00
51bcc26ea9 fix(gitea-mcp): add fix/* and other branch patterns to permissions
The branch permission check was only allowing feat/, feature/, and dev/
prefixes for write operations. This blocked PR creation from fix/*
branches, forcing fallback to direct API calls.

Added patterns:
- fix/, bugfix/, hotfix/ - for bug fixes
- chore/, refactor/ - for maintenance
- docs/, test/ - for documentation and tests

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 14:41:18 -05:00
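The resulting allow-list reduces to a simple prefix check; a sketch (exact rule for non-prefixed branches like development is assumed):

```python
ALLOWED_WRITE_PREFIXES = (
    "feat/", "feature/", "dev/",
    "fix/", "bugfix/", "hotfix/",
    "chore/", "refactor/",
    "docs/", "test/",
)

def branch_allows_write(branch: str) -> bool:
    """True if write operations (e.g. PR creation) are permitted from this branch."""
    return branch == "development" or branch.startswith(ALLOWED_WRITE_PREFIXES)
```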
d813147ca7 Merge pull request 'development' (#218) from development into main
Reviewed-on: #218
2026-01-27 19:38:48 +00:00
dbb6d46fa4 Merge pull request 'fix(mcp): use wrapper scripts instead of venv symlinks' (#217) from fix/mcp-venv-wrapper-scripts into development
Reviewed-on: #217
2026-01-27 19:38:31 +00:00
e7050e2ad8 fix(mcp): use wrapper scripts instead of venv symlinks
Replace direct python path in .mcp.json with run.sh wrapper scripts
that automatically locate the venv in cache or local directory.

Problem: .venv symlinks are gitignored, so they get wiped on every
git operation. The SessionStart hook was supposed to recreate them,
but this was unreliable, leading to repeated MCP server failures.

Solution: run.sh scripts that:
- First check ~/.cache/claude-mcp-venvs/leo-claude-mktplace/{server}/.venv
- Fallback to local .venv if exists
- Exit with helpful error if neither found

This eliminates dependency on symlinks entirely - the scripts are
tracked in git and will always be present after clone/pull/update.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 14:36:05 -05:00
206f1c378e Merge pull request 'development' (#216) from development into main
Reviewed-on: #216
2026-01-27 17:34:34 +00:00
35380594b4 Merge pull request 'feat(cmdb-assistant): add data quality validation v1.1.0' (#215) from feat/cmdb-assistant-data-quality into development
Reviewed-on: #215
2026-01-27 17:34:19 +00:00
0055c9ecf2 feat(gitea-mcp): add create_pull_request tool
Add missing create_pull_request tool to Gitea MCP server. This completes
the PR lifecycle - previously only had list/get/review/comment tools.

- Add create_pull_request to GiteaClient
- Add async wrapper to PullRequestTools with branch permissions
- Register tool in server.py with proper schema
- Parameters: title, body, head, base, labels (optional)
- Branch-aware security: only allowed on development/feature branches

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 12:30:48 -05:00
a74a048898 feat(cmdb-assistant): add data quality validation v1.1.0
Add validation hooks, best practices skill, and new commands to enforce
NetBox data quality standards:

Hooks:
- SessionStart: Test NetBox connectivity, report data quality issues
- PreToolUse: Validate VM/device parameters before create/update

New Commands:
- /cmdb-audit: Data quality analysis (vms, devices, naming, roles)
- /cmdb-register: Register current machine with running applications
- /cmdb-sync: Sync machine state with NetBox, detect drift

Best Practices Skill:
- Dependency order (regions -> sites -> devices -> VMs)
- Site/tenant/platform assignment requirements
- Naming conventions enforcement
- Role consolidation guidance

Updated agent with validation requirements, dependency order checks,
naming convention warnings, and duplicate prevention.

Marketplace: 5.0.0 -> 5.1.0
Plugin: 1.0.0 -> 1.1.0

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 12:27:23 -05:00
37676d4645 Merge pull request 'development' (#214) from development into main
Reviewed-on: #214
2026-01-27 16:57:12 +00:00
7492cfad66 Merge pull request 'fix(post-update): use venv-repair for instant symlink restoration' (#213) from fix/post-update-venv-repair into development
Reviewed-on: #213
2026-01-27 16:56:52 +00:00
59db9ea0b0 fix(post-update): use venv-repair for instant symlink restoration
Replace the old approach (create venvs in marketplace directory) with
the new venv-repair.sh approach (symlinks to external cache).

This ensures post-update.sh:
- Instantly restores symlinks if cache exists
- Only does full pip install on first run
- Works correctly after marketplace updates

Flow after this fix:
  Update marketplace → post-update.sh → venv-repair → Session works

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 11:55:43 -05:00
9234cf1add Merge pull request 'development' (#212) from development into main
Reviewed-on: #212
2026-01-27 16:49:33 +00:00
a21199d3db Merge pull request 'fix(mcp): persistent venv cache survives marketplace updates' (#211) from fix/persistent-venv-cache into development
Reviewed-on: #211
2026-01-27 16:49:19 +00:00
1abda1ca0f fix(mcp): persistent venv cache survives marketplace updates
Problem:
- Venvs in marketplace directory got deleted on every update
- Users had to manually run setup.sh and wait for full pip install
- This caused MCP servers to fail until manually fixed

Solution:
- Store venvs in external cache (~/.cache/claude-mcp-venvs/)
- Auto-repair symlinks via SessionStart hook (instant operation)
- Only run pip install on first use or when requirements change

Architecture:
  Cache (runtime) → Marketplaces → External venv cache

  The chain of symlinks ensures all three locations work:
  1. ~/.claude/plugins/cache/.../mcp-servers/* (runtime)
  2. ~/.claude/plugins/marketplaces/.../mcp-servers/* (install)
  3. ~/.cache/claude-mcp-venvs/* (persistent venvs)

Performance:
- First install: ~2-3 min (unchanged)
- After marketplace update: 0.03 sec (was 2-3 min)

Files:
- scripts/venv-repair.sh: Fast symlink restoration for hooks
- scripts/setup-venvs.sh: Full setup with external cache
- plugins/projman/hooks/startup-check.sh: Auto-repair on session start
- .gitignore: Ignore .venv symlinks

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 11:48:13 -05:00
0118bc7b9b Merge pull request 'development' (#210) from development into main
Reviewed-on: #210
2026-01-27 15:51:57 +00:00
bbb822db16 Merge pull request 'fix(post-update): create missing venvs instead of just warning' (#209) from fix/post-update-create-venvs into development
Reviewed-on: #209
2026-01-27 15:51:39 +00:00
08e1dcb1f5 fix(post-update): create missing venvs instead of just warning
Previously post-update.sh would only warn when venvs were missing,
requiring a separate setup.sh run. Now it automatically creates
missing venvs and installs dependencies including editable packages.

Also added viz-platform and contract-validator to the update list.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 10:41:04 -05:00
ec7141a5aa Merge pull request 'development' (#208) from development into main
Reviewed-on: #208
2026-01-26 22:27:33 +00:00
1b029d97b8 Merge pull request 'fix/data-platform-filter-index' (#207) from fix/data-platform-filter-index into development
Reviewed-on: #207
2026-01-26 22:27:19 +00:00
4ed3ed7e14 fix(data-platform): reset index after filter to prevent extra column
The filter tool was adding an __index_level_0__ column to results
because pandas query() preserves the original index, which gets
converted to a column when storing the DataFrame.

Added .reset_index(drop=True) after query() to drop the preserved
index and create a clean sequential index.

Fixes #203

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 17:25:37 -05:00
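The fix is a one-liner on the filtered frame; a small illustration of the behavior (not the plugin's actual filter tool):

```python
import pandas as pd

def filter_data(df: pd.DataFrame, expr: str) -> pd.DataFrame:
    """Filter rows with DataFrame.query and return a clean result.

    Without reset_index(drop=True), the original index survives the
    filter and later surfaces as an __index_level_0__ column when the
    frame is persisted (the bug fixed in #203).
    """
    return df.query(expr).reset_index(drop=True)

df = pd.DataFrame({"city": ["Berlin", "Oslo", "Lyon"], "pop": [3.6, 0.7, 0.5]})
print(filter_data(df, "pop > 0.6"))  # sequential index 0..1, no leftover index
```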
c5232bd7bf Merge pull request 'development' (#202) from development into main
Reviewed-on: #202
2026-01-26 21:49:08 +00:00
f9e23fd6eb Merge pull request 'fix/setup-editable-install' (#201) from fix/setup-editable-install into development
Reviewed-on: #201
2026-01-26 21:48:50 +00:00
457ed9c9ff fix(setup): install MCP packages in editable mode for viz-platform and contract-validator
The setup script only installed requirements.txt dependencies but not the
local package itself, causing MCP servers to fail with ModuleNotFoundError.

Changes:
- Add `pip install -e .` for servers with pyproject.toml
- Add viz-platform and contract-validator to MCP server setup
- Add symlink verification for new plugins
- Update version banner to v5.0.0

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 16:44:11 -05:00
dadb4d3576 Merge pull request 'Merge development into main: contract-validator setup docs' (#200) from development into main 2026-01-26 20:57:23 +00:00
ba771f100f Merge pull request 'docs: add contract-validator initial-setup and update version tags' (#199) from docs/contract-validator-setup into development 2026-01-26 20:57:09 +00:00
2b9cb5defd docs: add contract-validator initial-setup and update version tags
- Add /initial-setup command for contract-validator plugin
- Add contract-validator section to README.md (NEW in v5.0.0)
- Update data-platform version tag: *NEW* -> *NEW in v4.0.0*
- Update viz-platform version tag: *NEW* -> *NEW in v4.0.0*
- Add Contract Validator MCP Server section to README.md
- Add contract-validator to test commands table and repo structure

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 15:55:17 -05:00
34227126c2 Merge pull request 'Release v5.0.0' (#198) from development into main 2026-01-26 20:29:28 +00:00
ef94602eba Merge pull request 'chore: release v5.0.0 - version updates' (#197) from release/v5.0.0 into development 2026-01-26 20:29:05 +00:00
155e7be399 chore: release v5.0.0
- Update version to 5.0.0 in README.md, marketplace.json, CLAUDE.md
- Convert [Unreleased] to [5.0.0] in CHANGELOG.md
- Add contract-validator to CANONICAL-PATHS.md

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 15:28:27 -05:00
1fb9e6cece Merge pull request 'Release: Merge development into main (Sprint 2 - contract-validator v5.0.0)' (#196) from development into main 2026-01-26 20:24:46 +00:00
f2cf082ba8 Merge pull request 'feat(contract-validator): Complete Sprint 2 - Contract Validator Plugin' (#195) from feat/193-tests into development 2026-01-26 20:22:41 +00:00
d580464f4a test(contract-validator): add comprehensive tests (#193)
Add 34 tests across 3 test modules:
- test_parse_tools.py: 11 tests for README/CLAUDE.md parsing
- test_validation_tools.py: 11 tests for compatibility and agent validation
- test_report_tools.py: 12 tests for report generation and filtering

Coverage: parse_tools 79%, validation_tools 96%, report_tools 89%

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 15:04:26 -05:00
fe6b354ee2 chore(contract-validator): marketplace integration (#192)
- Add contract-validator to marketplace.json with proper metadata
- Update CLAUDE.md plugin table and commands list
- Validation passes with ./scripts/validate-marketplace.sh

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 15:00:09 -05:00
ec965dc8ee docs(contract-validator): create documentation (#191)
Add comprehensive plugin documentation:
- README.md: Plugin overview, problem statement, tool docs,
  example workflows, issue types, best practices
- claude-md-integration.md: CLAUDE.md snippets, interface
  documentation standards, CI/CD integration guide

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 14:58:34 -05:00
cb07a382ea feat(contract-validator): create agents (#190)
Add 2 autonomous agents:
- full-validation: Complete cross-plugin compatibility validation
  triggered by /validate-contracts command
- agent-check: Single agent definition validation triggered
  by /check-agent command

Each agent documents capabilities, workflow, validation rules,
and example interactions.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 14:57:15 -05:00
8f450c0e7b feat(contract-validator): create commands (#189)
Add 3 user-facing commands:
- /validate-contracts: Full marketplace compatibility validation
- /check-agent: Validate single agent definition
- /list-interfaces: Show plugin interfaces summary

Each command documents usage, workflow, parameters, and available
MCP tools for implementation.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 14:55:58 -05:00
fdee539371 Merge feat/188-report-tools: implement report tools 2026-01-26 14:54:24 -05:00
0e9187c5a9 feat(contract-validator): implement report tools (#188)
Add report generation and issue listing tools:
- generate_compatibility_report: Full marketplace validation with
  markdown or JSON output, includes summary statistics
- list_issues: Filtered issue listing by severity and type

The report tools coordinate parse_tools and validation_tools to
scan all plugins in a marketplace, run pairwise compatibility
checks, and aggregate findings into comprehensive reports.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 14:54:03 -05:00
46af00019c feat(contract-validator): implement validation tools (#187)
Implement validate_compatibility, validate_agent_refs, and
validate_data_flow tools for cross-plugin validation.

validate_compatibility:
- Compares tool and command names between plugins
- Identifies naming conflicts (ERROR) and shared tools (WARNING)
- Found: data-platform and projman share /initial-setup command

validate_agent_refs:
- Checks agent tool references against available plugins
- Reports missing tools and undocumented references
- Supports optional plugin_paths for tool lookup

validate_data_flow:
- Validates data flow through agent tool sequences
- Checks producer/consumer patterns (e.g., data_ref)
- Extracts workflow steps from responsibilities

Issue types detected:
- missing_tool (ERROR)
- interface_mismatch (ERROR/WARNING)
- optional_dependency (WARNING)
- undeclared_output (WARNING/INFO)

Sprint: Sprint 2 - contract-validator Plugin

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 14:50:12 -05:00
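At its core the compatibility check is a set comparison over parsed interfaces; a hedged sketch, assuming inputs shaped like the parse_plugin_interface output described in the next commit ({"name": ..., "commands": [...], "tools": [...]}):

```python
def compare_interfaces(plugin_a: dict, plugin_b: dict) -> list[dict]:
    """Flag command/tool names defined by both plugins.

    Illustration only: the real tool escalates true naming conflicts
    to ERROR and reports shared tools as WARNING.
    """
    issues = []
    for kind in ("commands", "tools"):
        shared = set(plugin_a[kind]) & set(plugin_b[kind])
        for name in sorted(shared):
            issues.append({
                "type": "interface_mismatch",
                "severity": "WARNING",
                "detail": f"{plugin_a['name']} and {plugin_b['name']} both define {name} ({kind[:-1]})",
            })
    return issues
```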
2b041cb771 feat(contract-validator): implement parse tools (#186)
Implement parse_plugin_interface and parse_claude_md_agents tools
for extracting structured data from plugin documentation.

parse_plugin_interface extracts:
- Plugin name and description
- Commands (from tables and ### headers)
- Agents (from Agents section tables)
- Tools (with categories from Tools Summary)
- Features list

parse_claude_md_agents extracts:
- Agent definitions from Four-Agent Model tables
- Agent personality and responsibilities
- Tool references in agent workflows

Tested on: projman (12 cmds), data-platform (7 cmds, 2 agents, 32 tools),
pr-review (3 cmds), code-sentinel (1 cmd), and CLAUDE.md (4 agents)

Sprint: Sprint 2 - contract-validator Plugin

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 14:41:10 -05:00
0fc40d0fda feat(contract-validator): create plugin structure (#185)
Create the basic plugin structure for contract-validator:

Plugin structure:
- plugins/contract-validator/.claude-plugin/plugin.json
- plugins/contract-validator/.mcp.json
- plugins/contract-validator/mcp-servers/contract-validator -> symlink

MCP server scaffolding:
- mcp-servers/contract-validator/mcp_server/server.py (7 placeholder tools)
- mcp-servers/contract-validator/pyproject.toml
- mcp-servers/contract-validator/requirements.txt
- Virtual environment with mcp>=0.9.0

Tools defined (placeholders):
- parse_plugin_interface
- parse_claude_md_agents
- validate_compatibility
- validate_agent_refs
- validate_data_flow
- generate_compatibility_report
- list_issues

Sprint: Sprint 2 - contract-validator Plugin

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 14:32:10 -05:00
68f50fed55 Merge pull request 'docs: Fix documentation drift (19 items)' (#194) from fix/documentation-drift into development 2026-01-26 19:26:32 +00:00
9d5615409c docs: fix documentation drift (19 items)
Critical fixes:
- Fix projman version mismatch (3.1.0 -> 3.2.0 in plugin.json)
- Update CLAUDE.md version to 4.1.0

Stale documentation updates:
- Add viz-platform to README.md (plugins, MCP servers, verification table)
- Add viz-platform to COMMANDS-CHEATSHEET.md (8 commands, hooks, MCP table)
- Add viz-platform and data-platform to CANONICAL-PATHS.md
- Add viz-platform and data-platform to CONFIGURATION.md
- Fix CHANGELOG.md tool counts (Layout 5, Theme 6, Page 5)
- Add data-platform to CLAUDE.md repository structure

Missing documentation:
- Create mcp-servers/viz-platform/README.md (21 tools documented)
- Add viz-platform SessionStart hook to COMMANDS-CHEATSHEET.md

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 14:11:53 -05:00
48ce693bb5 Merge pull request 'development' (#184) from development into main
Reviewed-on: #184
2026-01-26 18:59:23 +00:00
74531e06d0 Merge pull request 'feat(viz-platform): Complete Sprint 1 - Plugin Structure and Tests' (#183) from feat/viz-platform-sprint1-completion into development
Reviewed-on: #183
2026-01-26 18:58:58 +00:00
20458add3f feat(viz-platform): complete Sprint 1 - plugin structure and tests
Sprint 1 - viz-platform Plugin completed (13/13 issues):
- Commands: 7 files (initial-setup, chart, dashboard, theme, theme-new, theme-css, component)
- Agents: 3 files (theme-setup, layout-builder, component-check)
- Documentation: README.md, claude-md-integration.md
- Tests: 94 tests passing (68-99% coverage)
- CHANGELOG updated with completion status

Closes: #178, #179, #180, #181, #182

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 13:53:03 -05:00
45b899b093 feat(viz-platform): create plugin structure (#177)
- Add plugins/viz-platform/ directory structure
- Create .claude-plugin/plugin.json with metadata
- Create .mcp.json pointing to viz-platform MCP server
- Create hooks/hooks.json with SessionStart hook
- Create symlink to mcp-servers/viz-platform
- Add plugin to marketplace.json

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 12:00:54 -05:00
12133f698e feat(viz-platform): implement page tools (#176)
- Add page_create tool: create pages with routing
- Add page_add_navbar tool: generate top/side navigation
- Add page_set_auth tool: configure page authentication
- Support DMC AppShell component structure
- Auth types: none, basic, oauth, custom

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 11:56:05 -05:00
797b3064c9 feat(viz-platform): implement theme tools (#175)
- Add theme_create tool: create themes with design tokens
- Add theme_extend tool: extend existing themes with overrides
- Add theme_validate tool: validate theme completeness
- Add theme_export_css tool: export as CSS custom properties
- Add ThemeStore for theme persistence (user and project level)
- Default theme based on Mantine defaults with 47 CSS variables

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 11:53:21 -05:00
057c61fb16 feat(viz-platform): implement layout tools (#174)
- Add layout_create tool: create layouts with templates (dashboard, report, form, blank)
- Add layout_add_filter tool: add filter controls (dropdown, date_range, search, etc.)
- Add layout_set_grid tool: configure responsive grid system (1-24 cols, breakpoints)
- 4 templates and 9 filter types supported
- Maps to DMC Grid and AppShell component patterns

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 11:50:10 -05:00
c91f21f3d1 feat(viz-platform): implement chart tools (#173)
- Add chart_create tool: create line, bar, scatter, pie, heatmap, histogram, area charts
- Add chart_configure_interaction tool: hover templates, click data, selection, zoom
- Support theme color integration when theme is active
- Default color palette based on Mantine theme

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 11:47:54 -05:00
67fae6a93d feat(viz-platform): implement DMC validation tools (#172)
- Add list_components tool: list DMC components by category
- Add get_component_props tool: get props schema with types/defaults/enums
- Add validate_component tool: validate props with error/warning messages
- Includes typo detection and common mistake warnings
- All tools registered in MCP server with proper JSON schemas

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 11:45:40 -05:00
9e4b3d5a91 feat(viz-platform): implement DMC 2.x component registry (#171)
- Add component_registry.py with version-locked registry loading
- Create dmc_2_5.json with 39 components for DMC 2.5.1/Mantine 7
- Add generate-dmc-registry.py script for future registry generation
- Update dependencies to DMC >=2.0.0 (installs 2.5.1)
- Includes prop validation with typo detection

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 11:41:42 -05:00
bf0745cc94 feat(viz-platform): create MCP server foundation (#170)
Add viz-platform MCP server structure at mcp-servers/viz-platform/:
- mcp_server/server.py: Main MCP server entry point with async initialization
- mcp_server/config.py: Hybrid config loader with DMC version detection
- mcp_server/dmc_tools.py: Placeholder for DMC validation tools
- pyproject.toml and requirements.txt for dependencies
- tests/ directory structure

Server starts without errors with empty tool list.
Config detects DMC installation status via importlib.metadata.

Closes #170

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 11:31:44 -05:00
72049c2518 docs(projman): document Sprint 1 viz-platform planning
Add [Unreleased] section with Sprint 1 planning for viz-platform plugin:
- 15 MCP tools across 5 categories (DMC, Charts, Layouts, Themes, Pages)
- 7 commands and 3 agents
- Links to Gitea milestone, issues #170-#182, and wiki implementation plan

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 11:25:45 -05:00
bc0282b5f8 Merge pull request 'Merge development into main (v4.1.0 release)' (#169) from development into main 2026-01-26 15:57:59 +00:00
4c262a7227 Merge pull request 'fix(gitea-mcp): URL-encode wiki page names and include title in updates' (#168) from fix/160-wiki-page-unnamed-bug into development 2026-01-26 15:55:16 +00:00
530c5f4aa0 fix(gitea-mcp): URL-encode wiki page names and include title in updates
Fixes #160: update_wiki_page was renaming pages to "unnamed"

Root causes:
1. page_name wasn't URL-encoded, breaking pages with special chars like ':'
2. PATCH request was missing 'title' field, causing Gitea to use default name

Changes:
- Add URL encoding (urllib.parse.quote) to get_wiki_page, update_wiki_page, delete_wiki_page
- Add 'title': page_name to update_wiki_page payload to preserve page name

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 10:54:33 -05:00
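A sketch of the fixed update call. The endpoint path and payload fields are assumed from the Gitea wiki API that the client wraps; `session` stands in for an authenticated requests.Session.

```python
from urllib.parse import quote

def update_wiki_page(session, base_url, owner, repo, page_name, content_b64):
    """Update a wiki page without renaming it to "unnamed"."""
    # ':' and other special characters must be percent-encoded in the path
    encoded = quote(page_name, safe="")
    url = f"{base_url}/repos/{owner}/{repo}/wiki/page/{encoded}"
    payload = {
        "title": page_name,          # omitting this let Gitea fall back to a default name
        "content_base64": content_b64,
    }
    response = session.patch(url, json=payload)
    response.raise_for_status()
    return response.json()
```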
61a3b4611f Merge pull request 'chore: release v4.1.0 with versioning workflow fix' (#167) from release/v4.1.0 into development 2026-01-26 15:41:35 +00:00
834cf3ac56 chore: release v4.1.0 2026-01-26 10:38:19 -05:00
28c9552d1d fix(projman): add mandatory CHANGELOG and versioning to sprint-close workflow
Problem: Version workflow was documented but not enforced in sprint-close.
After completing sprints, CHANGELOG wasn't updated and releases weren't created.

Solution:
- Add "Update CHANGELOG" as mandatory step 7 in sprint-close
- Add "Version Check" as step 8 with /suggest-version and release.sh
- Update orchestrator agent with CHANGELOG and version reminders
- Add V04.1.0 wiki workflow changes to CHANGELOG [Unreleased]
- Document MCP bug #160 in Known Issues

This ensures versioning workflow is part of the sprint close process,
not just documented in CLAUDE.md.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 10:38:19 -05:00
db67d3cc76 Merge pull request 'development' (#166) from development into main
Reviewed-on: #166
2026-01-26 15:29:35 +00:00
896cdcfa0f Merge pull request '[V04.1.0] feat: Wiki-Based Planning Workflow Enhancement' (#165) from feat/v04.1.0-wiki-planning-workflow into development
Reviewed-on: #165
2026-01-26 15:28:00 +00:00
34de0e4e82 [Sprint V04.1.0] feat: Add /proposal-status command for proposal tree view
- Create new proposal-status.md command
- Shows all proposals with status (Pending, In Progress, Implemented, Abandoned)
- Tree view of implementations under each proposal
- Links to issues and lessons learned
- Update README with new command documentation

Phase 4 of V04.1.0 Wiki-Based Planning Enhancement.
Closes #164

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 10:23:53 -05:00
1a0f3aa779 [Sprint V04.1.0] feat: Add cross-linking between issues, wiki, and lessons
- Add Metadata section to lesson structure with Implementation link
- Update lesson examples to include metadata with wiki reference
- Enable bidirectional traceability: lessons ↔ implementation pages

Phase 3 of V04.1.0 Wiki-Based Planning Enhancement.
Closes #163

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 10:21:53 -05:00
30b379b68c [Sprint V04.1.0] feat: Add wiki status updates to sprint-close workflow
- Add step 5: Update wiki implementation page status (Implemented/Partial/Failed)
- Add step 6: Update wiki proposal page when all implementations complete
- Add update_wiki_page to MCP tools in both sprint-close.md and orchestrator.md
- Add critical reminders for wiki status updates

Implements Phase 2 of V04.1.0 Wiki-Based Planning Enhancement.
Closes #162

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 10:15:08 -05:00
842ce3f6bc feat(projman): add wiki-based planning workflow to sprint-plan
Phase 1 implementation for Change V04.1.0:
- Add flexible input source detection (file, wiki, conversation)
- Add wiki proposal and implementation page creation
- Add wiki reference to created issues
- Add cleanup step to delete local files after migration
- Update planner agent with wiki workflow responsibilities

Closes #161

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 10:01:13 -05:00
a933edeef1 Merge pull request 'development' (#159) from development into main
Reviewed-on: #159
2026-01-25 23:46:30 +00:00
74ff305b67 Merge pull request 'fix(data-platform): use separate hooks.json for development' (#158) from fix/data-platform-hooks-dev into development
Reviewed-on: #158
2026-01-25 20:52:19 +00:00
f22d49ed07 docs: correct plugin.json hooks format rules
Hooks should be in separate hooks/hooks.json file (auto-discovered),
NOT inline in plugin.json.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 15:48:58 -05:00
3f79288b54 fix(data-platform): use separate hooks.json file (not inline)
Cherry-pick fix from hotfix/data-platform-hooks to development.
Hooks must be in separate hooks/hooks.json file (auto-discovered),
NOT inline in plugin.json.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 15:48:41 -05:00
1df9573f7a Merge pull request 'hotfix(data-platform): use separate hooks.json file' (#157) from hotfix/data-platform-hooks into main
Reviewed-on: #157
2026-01-25 20:46:14 +00:00
35d5f14003 docs: correct plugin.json hooks format rules
Hooks should be in separate hooks/hooks.json file (auto-discovered),
NOT inline in plugin.json. Previous documentation was wrong.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 15:42:13 -05:00
5321b2929e fix(data-platform): use separate hooks.json file (not inline)
Previous fix was wrong. Hooks should be in separate hooks/hooks.json
file (auto-discovered), NOT inline in plugin.json.

Pattern matches working plugins: projman, pr-review, claude-config-maintainer

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 15:41:46 -05:00
05aa50d409 Merge pull request 'development' (#156) from development into main
Reviewed-on: #156
2026-01-25 20:37:15 +00:00
a910e9327d Merge pull request 'fix(data-platform): correct invalid plugin.json manifest format' (#155) from fix/data-platform-manifest into development
Reviewed-on: #155
2026-01-25 20:36:53 +00:00
9d1fedd3a5 docs: add plugin.json format rules to CLAUDE.md
Add critical warning about hooks and agents field formats
to prevent future manifest validation failures.

References lesson: plugin-manifest-validation---hooks-and-agents-format-requirements

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 15:33:49 -05:00
320ea6b72b fix(data-platform): correct invalid plugin.json manifest format
- Remove invalid "agents": ["./agents/"] - agent .md files don't need registration
- Inline hooks instead of external reference "hooks/hooks.json"
- Delete orphaned hooks.json file (content now in plugin.json)

This fixes "invalid input" validation errors for hooks and agents fields.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 15:32:54 -05:00
54e8e694b1 Merge pull request 'development' (#154) from development into main
Reviewed-on: #154
2026-01-25 20:24:29 +00:00
69bbffd9cc Merge pull request 'fix: proactive documentation and sprint planning automation' (#153) from fix/proactive-automation-docs into development
Reviewed-on: #153
2026-01-25 20:23:56 +00:00
6b666bff46 fix: proactive documentation and sprint planning automation
- doc-guardian: Hook now tracks documentation dependencies and outputs
  specific files needing updates (e.g., commands → COMMANDS-CHEATSHEET.md)
- projman: SessionStart hook now suggests /sprint-plan when open issues
  exist without milestone, and warns about unreleased CHANGELOG entries
- projman: Add /suggest-version command for semantic version recommendations
- docs: Update COMMANDS-CHEATSHEET.md with data-platform plugin (was missing)
- docs: Update CLAUDE.md with data-platform and version 4.0.0

Fixes documentation drift and lack of proactive workflow suggestions.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 15:18:30 -05:00
8ec7cbb1e9 Merge pull request 'development' (#152) from development into main
Reviewed-on: #152
2026-01-25 19:37:53 +00:00
a77b8ee123 Merge pull request 'chore: release v4.0.0' (#151) from chore/release-v4.0.0 into development
Reviewed-on: #151
2026-01-25 19:36:25 +00:00
498dac5230 chore: release v4.0.0
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 14:33:47 -05:00
3519a96d06 Merge pull request 'development' (#150) from development into main
Reviewed-on: #150
2026-01-25 19:28:43 +00:00
af0b92461a Merge pull request 'feat: add data-platform plugin (v4.0.0)' (#149) from feat/data-platform-plugin-v4.0.0 into development
Reviewed-on: #149
2026-01-25 19:27:26 +00:00
89f0354ccc feat: add data-platform plugin (v4.0.0)
Add new data-platform plugin for data engineering workflows with:

MCP Server (32 tools):
- pandas operations (14 tools): read_csv, read_parquet, read_json,
  to_csv, to_parquet, describe, head, tail, filter, select, groupby,
  join, list_data, drop_data
- PostgreSQL/PostGIS (10 tools): pg_connect, pg_query, pg_execute,
  pg_tables, pg_columns, pg_schemas, st_tables, st_geometry_type,
  st_srid, st_extent
- dbt integration (8 tools): dbt_parse, dbt_run, dbt_test, dbt_build,
  dbt_compile, dbt_ls, dbt_docs_generate, dbt_lineage

Plugin Features:
- Arrow IPC data_ref system for DataFrame persistence across tool calls
- Pre-execution validation for dbt with `dbt parse`
- SessionStart hook for PostgreSQL connectivity check (non-blocking)
- Hybrid configuration (system ~/.config/claude/postgres.env + project .env)
- Memory management with 100k row limit and chunking support

Commands: /initial-setup, /ingest, /profile, /schema, /explain, /lineage, /run
Agents: data-ingestion, data-analysis

Test suite: 71 tests covering config, data store, pandas, postgres, dbt tools

Addresses data workflow issues from personal-portfolio project:
- Lost data after multiple interactions (solved by Arrow IPC data_ref)
- dbt 1.9+ syntax deprecation (solved by pre-execution validation)
- Ungraceful PostgreSQL error handling (solved by SessionStart hook)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 14:24:03 -05:00
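The data_ref mechanism amounts to writing each DataFrame to Arrow IPC once and handing back an opaque handle; a minimal sketch under assumed names and storage location (the plugin's real store and limits differ):

```python
import uuid
from pathlib import Path

import pandas as pd
import pyarrow as pa
import pyarrow.feather as feather

DATA_DIR = Path("/tmp/claude-data-refs")  # location assumed for illustration

def store_dataframe(df: pd.DataFrame) -> str:
    """Persist a DataFrame as Arrow IPC and return an opaque data_ref."""
    DATA_DIR.mkdir(parents=True, exist_ok=True)
    data_ref = f"df_{uuid.uuid4().hex[:8]}"
    feather.write_feather(pa.Table.from_pandas(df), str(DATA_DIR / f"{data_ref}.arrow"))
    return data_ref

def load_dataframe(data_ref: str) -> pd.DataFrame:
    """Resolve a data_ref back into a DataFrame in a later tool call."""
    return feather.read_table(str(DATA_DIR / f"{data_ref}.arrow")).to_pandas()
```

Later tool calls pass the data_ref instead of re-reading the source, which is what keeps data available across interactions.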
f809c672b5 Merge pull request 'development' (#148) from development into main
Reviewed-on: #148
2026-01-24 18:16:29 +00:00
6a267d074b Merge pull request 'feat: add versioning workflow with release script + release v3.2.0' (#147) from fix/issue-143-versioning-workflow into development
Reviewed-on: #147
2026-01-24 18:16:05 +00:00
bcde33c7d0 chore: release v3.2.0
Version 3.2.0 includes:
- New features: netbox device params, claude-config-maintainer auto-enforce
- Bug fixes: cmdb-assistant schemas, netbox tool names, API URLs
- All hooks converted to command type
- Versioning workflow with release script

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-24 13:15:30 -05:00
ee3268fbe0 feat: add versioning workflow with release script
- Add scripts/release.sh for consistent version releases
- Fix CHANGELOG.md: consolidate all changes under [Unreleased]
- Update CLAUDE.md with comprehensive versioning documentation
- Include all commits since v3.1.1 in [Unreleased] section

The release script ensures version consistency across:
- Git tags
- README.md title
- marketplace.json version
- CHANGELOG.md sections

Addresses #143

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-24 13:15:30 -05:00
f6a38ffaa8 Merge pull request 'fix: remove destructive cache-clear instruction from CLAUDE.md' (#146) from fix/issue-145-cache-clear-instructions into development
Reviewed-on: #146
2026-01-24 18:14:03 +00:00
b13ffce0a0 fix: remove destructive cache-clear instruction from CLAUDE.md
The CLAUDE.md mandatory rule instructed clearing the plugin cache
mid-session, which breaks all MCP tools that were already loaded.
MCP tools are loaded with absolute paths to the venv, and deleting
the cache removes the venv while the session still references it.

Changes:
- CLAUDE.md: Replace "ALWAYS CLEAR CACHE" with "VERIFY AND RESTART"
- verify-hooks.sh: Change cache existence from error to info message
- DEBUGGING-CHECKLIST.md: Add section explaining cache timing

Fixes #145

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-24 12:57:32 -05:00
5d205c9c13 Merge pull request 'development' (#142) from development into main
Reviewed-on: #142
2026-01-24 17:36:43 +00:00
b39e01efd7 Merge pull request 'feat(projman): add user-reported issue mode to /debug-report' (#141) from feat/issue-139-debug-report-user-mode into development
Reviewed-on: #141
2026-01-24 17:36:16 +00:00
98eea5b6f9 Merge pull request 'chore: remove unreleased v3.1.2 changelog entry' (#140) from chore/remove-unreleased-changelog into development
Reviewed-on: #140
2026-01-24 17:35:58 +00:00
fe36ed91f2 feat(projman): add user-reported issue mode to /debug-report
Adds a new "user-reported" mode alongside the existing automated
diagnostics mode. Users can now choose to:

1. Run automated diagnostics (existing behavior)
2. Report an issue they experienced while using any plugin command

User-reported mode:
- Step 0: Mode selection via AskUserQuestion
- Step 0.1: Structured feedback gathering
  - Which plugin/command was affected
  - What the user was trying to do
  - What went wrong (error, missing feature, unexpected behavior, docs)
  - Expected vs actual behavior
  - Any workarounds found
- Step 5.1: Smart label generation based on problem type
- Step 6.1: User-friendly issue template with investigation hints

This allows capturing UX issues, missing features, and documentation
problems that wouldn't be caught by automated MCP tool tests.

Fixes #139

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-24 12:30:37 -05:00
8c85f9ca5f chore: remove unreleased v3.1.2 changelog entry
Tag v3.1.2 was never created. Remove the changelog entry to
avoid confusion between documented and actual releases.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-24 12:26:08 -05:00
98df35a33e Merge pull request 'fix(cmdb-assistant): complete MCP tool schemas for update operations' (#138) from fix/issue-137-update-schema-completeness into development
Reviewed-on: #138
2026-01-24 17:22:00 +00:00
70d6963d0d fix(cmdb-assistant): complete MCP tool schemas for update operations
Expand parameter schemas for 15 update tools that previously only exposed
the 'id' field. The underlying Python implementation already supported
all fields via **kwargs, but Claude couldn't discover available parameters.

Updated tools:
- virtualization: update_virtual_machine, update_cluster
- dcim: update_site, update_location, update_rack, update_manufacturer,
        update_device_type, update_device_role, update_platform,
        update_interface, update_cable
- ipam: update_vrf, update_prefix, update_ip_address, update_vlan

Each tool now exposes all commonly-used optional fields matching the
NetBox API, following the pattern established by dcim_update_device.

Fixes #137

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-24 12:18:57 -05:00
efb83e0f28 Merge pull request 'development' (#135) from development into main
Reviewed-on: #135
2026-01-24 16:48:40 +00:00
54c6694117 Merge pull request 'fix(netbox): shorten tool names to meet 64-char API limit' (#134) from fix/netbox-tool-name-length into development
Reviewed-on: #134
2026-01-24 16:47:48 +00:00
2402f88daf fix(netbox): shorten tool names to meet 64-char API limit
Claude API has a 64-character limit on tool names. Claude Code uses a
36-character prefix (mcp__plugin_cmdb-assistant_netbox__), leaving only
28 characters for the actual tool name.

Shortened 33 tools that exceeded this limit:
- virtualization_* -> virt_* (19 tools)
- circuits_*_circuit_type* -> circ_*_type* (3 tools)
- circuits_*_circuit_termination* -> circ_*_termination* (3 tools)
- wireless_*_wireless_* -> wlan_* (8 tools)

Added TOOL_NAME_MAP to route shortened names to original method names.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-24 11:45:34 -05:00
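Routing via a name map keeps the backend methods unchanged; a sketch with hypothetical entries (the listed mappings follow the renaming pattern in the commit but are not the repo's exact tool names):

```python
# Short MCP-facing names -> original client method names (hypothetical subset)
TOOL_NAME_MAP = {
    "virt_update_virtual_machine": "virtualization_update_virtual_machine",
    "circ_create_type": "circuits_create_circuit_type",
    "wlan_list_links": "wireless_list_wireless_links",
}

def resolve_tool(client, tool_name: str):
    """Route a (possibly shortened) tool name to the underlying client method."""
    method_name = TOOL_NAME_MAP.get(tool_name, tool_name)
    return getattr(client, method_name)
```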
7bedfa2c65 Merge pull request 'development' (#133) from development into main
Reviewed-on: #133
2026-01-23 22:54:42 +00:00
1cf1dbefb8 Merge pull request 'fix(cmdb-assistant): correct NetBox API URL format in setup wizard' (#132) from fix/issue-126-netbox-url-format into development
Reviewed-on: #132
2026-01-23 22:54:14 +00:00
dafa8db8bb fix(cmdb-assistant): correct NetBox API URL format in setup wizard
The /initial-setup command was generating NETBOX_API_URL without the
/api suffix, causing all MCP tools to fail with JSON parsing errors.

Changes:
- Update example URL to include /api suffix
- Add instruction to auto-append /api if user forgets
- Fix validation test to be consistent with actual usage

Root cause: Setup asked for the base URL (https://netbox.company.com) but
NetBoxClient expects the API URL (https://netbox.company.com/api). The
validation test incorrectly appended /api/, which made it pass while the
actual MCP server failed.

Fixes #126

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-23 17:52:23 -05:00
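The "auto-append /api if user forgets" instruction reduces to a small normalization step; a sketch (helper name assumed):

```python
def normalize_netbox_url(url: str) -> str:
    """Ensure NETBOX_API_URL ends with /api, as NetBoxClient expects."""
    url = url.rstrip("/")
    if not url.endswith("/api"):
        url += "/api"
    return url

assert normalize_netbox_url("https://netbox.company.com") == "https://netbox.company.com/api"
assert normalize_netbox_url("https://netbox.company.com/api/") == "https://netbox.company.com/api"
```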
42ab4f13cf Merge pull request 'development' (#125) from development into main
Reviewed-on: #125
2026-01-23 21:48:13 +00:00
65e79efb24 Merge pull request 'fix(gitea,projman): type safety for create_label_smart, curl-based debug-report' (#124) from fix/issue-123-mcp-tool-failures into development
Reviewed-on: #124
2026-01-23 21:47:51 +00:00
5ffc13b635 fix(gitea,projman): type safety for create_label_smart, curl-based debug-report
Fixes multiple issues from diagnostic #123:

1. create_label_smart type safety (labels.py)
   - Add isinstance(result, dict) checks after API calls
   - Return structured error dict if API returns unexpected type
   - Prevents "list indices must be integers" crash

2. debug-report always uses curl with labels
   - Remove MCP option - always use curl for marketplace issues
   - Add label ID fetching step (Source/Diagnostic, Type/Bug)
   - Include labels in curl POST payload
   - Avoids branch protection restrictions on main branch

Fixes #123

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-23 16:46:37 -05:00
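
The isinstance guard described in item 1 might look roughly like this; the client call and error shape are assumptions made for illustration:

    def get_label_checked(client, owner, repo, name):
        # Illustrative sketch: guard against non-dict API responses so a
        # later result["id"] lookup cannot raise "list indices must be
        # integers".
        result = client.get_label(owner, repo, name)  # hypothetical call
        if not isinstance(result, dict):
            return {"error": f"unexpected response type: {type(result).__name__}"}
        return result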
77dc122079 Merge pull request 'development' (#122) from development into main
Reviewed-on: #122
2026-01-23 21:08:10 +00:00
50bfd20fd4 Merge pull request 'fix(netbox): add diagnostic logging for JSON parse errors' (#121) from fix/issue-120-json-parse-diagnostics into development
Reviewed-on: #121
2026-01-23 21:07:52 +00:00
c14f1f46cd fix(netbox): add diagnostic logging for JSON parse errors
When NetBox MCP tools fail with JSON decode errors, the error message
now includes:
- HTTP status code
- Response content length
- Preview of actual content received (first 200 bytes)

This helps diagnose transient issues like network timeouts or
incomplete responses that result in cryptic "Expecting value" errors.

Fixes #120

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-23 16:06:51 -05:00
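
A sketch of the diagnostic wrapper described above; the function and attribute names are assumptions (any HTTP client exposing status_code and text would do):

    import json

    def parse_netbox_response(response):
        # Hypothetical: surface status, length, and a content preview when
        # decoding fails, instead of a bare "Expecting value" error.
        try:
            return json.loads(response.text)
        except json.JSONDecodeError as exc:
            raise ValueError(
                f"NetBox returned non-JSON content "
                f"(status={response.status_code}, length={len(response.text)}): "
                f"{response.text[:200]!r}"
            ) from exc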
ce774bcc6f Merge pull request 'development' (#119) from development into main
Reviewed-on: #119
2026-01-23 19:47:39 +00:00
52c8371f4a Merge pull request 'feat(netbox): add platform and primary_ip params to device tools' (#118) from feat/netbox-device-update-params into development
Reviewed-on: #118
2026-01-23 19:47:10 +00:00
f8d6d42150 feat(netbox): add platform and primary_ip params to device tools
Expose additional parameters in dcim_create_device and dcim_update_device
MCP tools that were already supported by the backend but not exposed:

dcim_create_device:
- platform, primary_ip4, primary_ip6, asset_tag, description, comments

dcim_update_device:
- platform, primary_ip4, primary_ip6, serial, asset_tag, site, rack,
  position, description, comments

This enables setting the platform (OS) and primary IP address when
creating or updating devices in NetBox.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-23 14:46:09 -05:00
b3abe863af Merge pull request 'development' (#117) from development into main
Reviewed-on: #117
2026-01-23 17:50:25 +00:00
469487f6ed Merge pull request 'fix(labels): add duplicate check before creating labels' (#116) from fix/labels-duplicate-check into development
Reviewed-on: #116
2026-01-23 17:49:57 +00:00
7a2966367d feat(claude-config-maintainer): auto-enforce mandatory behavior rules
SessionStart hook checks if CLAUDE.md has mandatory rules.
If missing, adds them automatically.

Rules enforced:
- Check everything when user asks
- Believe user when they say something's wrong
- Never say "done" without verification
- Show exactly what user asks for

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-23 12:46:18 -05:00
0466b299a7 fix: add mandatory behavior rules and verification
- Add MANDATORY BEHAVIOR RULES to CLAUDE.md (read every session)
- Rules: check everything, believe user, verify before saying done
- post-update.sh now clears plugin cache
- verify-hooks.sh checks all locations for prompt hooks

These rules prevent wasted user time from AI overconfidence.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-23 12:44:27 -05:00
b34304ed57 fix: add cache clearing and hook verification scripts
- post-update.sh now clears plugin cache automatically
- verify-hooks.sh checks ALL locations for prompt hooks
- Prevents cached old hooks from overriding fixed hooks

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-23 12:41:33 -05:00
96963531fc fix(labels): add duplicate check before creating labels
create_label_smart now checks if label already exists before creating.
- Checks both org and repo labels
- Handles format variations (Type/Bug vs Type: Bug)
- Returns {skipped: true} if label already exists
- Prevents duplicate label creation errors

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-23 12:36:00 -05:00
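
A sketch of that existence check, assuming a flat list of existing label names is available; the normalization and helper names below are illustrative:

    def create_label_smart(client, owner, repo, name, color):
        # Hypothetical sketch: skip creation when the label already exists
        # at either level, tolerating "Type/Bug" vs "Type: Bug" variants.
        existing = {n.replace(": ", "/").lower()
                    for n in client.list_label_names(owner, repo)}  # assumed helper
        if name.replace(": ", "/").lower() in existing:
            return {"skipped": True, "reason": "label already exists"}
        return client.create_label(owner, repo, name, color)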
4c9a7c55ae Merge pull request 'development' (#115) from development into main
Reviewed-on: #115
2026-01-23 17:25:18 +00:00
8a75203251 Merge pull request 'fix: hooks command type conversion + org-level labels' (#114) from fix/hooks-and-labels-v3.2.0 into development
Reviewed-on: #114
2026-01-23 17:24:08 +00:00
da6e81260e fix(hooks): convert ALL hooks to command type with proper prefixes
ALL hooks now use command type (bash scripts) instead of prompt type.
Prompt hooks are unreliable - model ignores instructions.

Changes:
- projman: SessionStart → startup-check.sh with [projman] prefix
- pr-review: SessionStart → startup-check.sh with [pr-review] prefix
- project-hygiene: cleanup.sh now has [project-hygiene] prefix
- doc-guardian: already fixed (notify.sh with [doc-guardian] prefix)
- code-sentinel: already fixed (security-check.sh with [code-sentinel] prefix)

All hook output now guaranteed to have plugin name prefix.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-23 12:18:40 -05:00
e1f1335655 fix(code-sentinel): replace prompt hook with command hook
Same fix as doc-guardian - prompt hooks unreliable.
Command hook guarantees exact behavior.

- Add security-check.sh that skips config/doc files silently
- Only checks code files for hardcoded secrets
- Outputs with [code-sentinel] prefix

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-23 12:16:06 -05:00
b017db83a1 feat(gitea): add organization-level label creation
- Add create_org_label() method to gitea_client.py for org-level labels
- Add create_label_smart() to labels.py that auto-detects correct level
- Register both tools in server.py
- Update labels-sync.md to use create_label_smart

Label level detection:
- Organization: Type/*, Priority/*, Complexity/*, Effort/*, Risk/*, Source/*, Agent/*
- Repository: Component/*, Tech/*

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-23 12:15:14 -05:00
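
A sketch of the level-detection rule, with the prefix lists taken from the message above; the helper name and fallback default are assumptions:

    ORG_PREFIXES = ("Type/", "Priority/", "Complexity/", "Effort/",
                    "Risk/", "Source/", "Agent/")
    REPO_PREFIXES = ("Component/", "Tech/")

    def detect_label_level(name: str) -> str:
        # Hypothetical helper used by create_label_smart to pick the level.
        if name.startswith(ORG_PREFIXES):
            return "organization"
        if name.startswith(REPO_PREFIXES):
            return "repository"
        return "repository"  # assumed default for unrecognized prefixes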
bc136fab7e docs: update CHANGELOG with actual fix for #110
Prompt hook approach didn't work - Claude ignores instructions.
Real fix was switching to command hook type.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-23 11:51:31 -05:00
6c24bcbb91 fix(doc-guardian): replace prompt hook with command hook
Prompt hooks are unreliable - Claude ignores instructions and generates
verbose analysis despite explicit FORBIDDEN rules. Command hooks guarantee
the exact output we want.

- Add notify.sh script that only outputs for config file changes
- Change hooks.json from prompt type to command type
- Script exits silently for non-config files (no blocking)

Fixes #110

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-23 11:48:54 -05:00
11a05799d3 docs: sync documentation with codebase
- CLAUDE.md: Update version 3.0.1 → 3.1.2, projman 3.0.0 → 3.1.0, command count 12 → 13
- README.md: Add debug-report/debug-review to projman commands, add DEBUGGING-CHECKLIST.md to docs table
- CANONICAL-PATHS.md: Update version, remove non-existent docs/workflows/, add COMMANDS-CHEATSHEET.md
- projman/README.md: Fix "Three-Agent" → "Four-Agent", update architecture to show symlink
- pr-review/README.md: Add missing setup commands (initial-setup, project-init, project-sync)
- cmdb-assistant/README.md: Add initial-setup.md to architecture
- project-hygiene/README.md: Fix invalid hook event name (task-completed → PostToolUse)
- doc-guardian/plugin.json: Add missing commands field
- code-sentinel/plugin.json: Add missing commands field

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-23 11:25:14 -05:00
403271dc0c Merge pull request 'development' (#112) from development into main
Reviewed-on: #112
2026-01-23 16:10:12 +00:00
cc4abf67b9 Merge pull request 'fix: protected branch detection and non-blocking hooks' (#111) from fix/issue-109-110-hooks-and-protected-branch into development
Reviewed-on: #111
2026-01-23 16:08:38 +00:00
35cf20e02d fix: protected branch detection and non-blocking hooks
- Add protected branch detection to /commit command (Step 1)
- Warn users before committing to protected branches
- Offer to create feature branch automatically
- Rewrite doc-guardian hook to be truly non-blocking
- Enforce strict [plugin-name] prefix in all hook outputs
- Add forbidden words list to prevent accidental blocking

Fixes #109, #110

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-23 11:08:00 -05:00
5209f82efb Merge pull request 'development' (#108) from development into main
Reviewed-on: #108
2026-01-22 23:03:57 +00:00
1f55387e9e Merge pull request 'feat: enhance debug commands with sprint awareness and lessons learned' (#107) from feat/debug-commands-enhancements into development
Reviewed-on: #107
2026-01-22 23:03:34 +00:00
32bbca73ba feat: enhance debug commands with sprint awareness and lessons learned
Debug Report (/debug-report):
- Add Step 1.5: Sprint context detection based on branch and milestone
- Add Step 5: Smart labeling via suggest_labels MCP tool
- Update issue creation to support milestone association

Debug Review (/debug-review):
- Add Step 9.5: Search lessons learned before proposing fixes
- Add Step 15: Verify, close issue, and optionally capture lesson

Hooks:
- Simplify doc-guardian hook to be truly non-blocking (15 words max)
- Update code-sentinel to skip docs/config files entirely

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-22 18:02:09 -05:00
0e6999ea21 Merge pull request 'development' (#106) from development into main
Reviewed-on: #106
2026-01-22 21:17:18 +00:00
0d120bd041 Merge pull request 'feat: add plugin name prefixes to hooks and improve git-flow sync' (#105) from feat/hook-improvements-and-sync-prune into development 2026-01-22 21:14:32 +00:00
508832dae1 feat: add plugin name prefixes to hooks and improve git-flow sync
- Add [plugin-name] prefix to all hook messages for better identification
- Make doc-guardian hook notification-only (non-blocking)
- Add stale branch detection to /commit-sync with git fetch --prune
- Enhance /branch-cleanup to handle stale branches separately

Closes improvements for hook UX and git workflow

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-22 16:13:17 -05:00
6cf3c1830c Merge pull request 'development' (#104) from development into main
Reviewed-on: #104
2026-01-22 20:02:06 +00:00
0b23a02886 Merge pull request 'fix: add curl fallback to /debug-report when MCP tools unavailable' (#103) from fix/issue-100-debug-report-fallback into development
Reviewed-on: #103
2026-01-22 20:01:32 +00:00
71987ee537 fix: switch to development branch before adding issue comments
The MCP server's branch-aware security blocks write operations on
protected branches (main, fix/*, etc.). After pushing a feature branch
and creating a PR, we must switch back to development before adding
comments to issues via MCP tools.
2026-01-22 14:28:01 -05:00
b7829dca05 fix: add curl fallback to /debug-report when MCP tools unavailable
When MCP tools are not available in a session (the very scenario
/debug-report is designed to diagnose), the command now falls back to:

1. Check for Gitea credentials at ~/.config/claude/gitea.env
2. Use curl + jq to create the issue via Gitea REST API
3. If no credentials, save report to local file for manual submission

Security measures:
- Uses mktemp -m 600 for restrictive file permissions
- Uses jq --rawfile for safe JSON construction (no command substitution)
- Proper cleanup of temporary files

Fixes #100

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-22 14:19:05 -05:00
9b0e9a69b1 Merge pull request 'development' (#102) from development into main
Reviewed-on: #102
2026-01-22 18:36:36 +00:00
ad0e14d07f Merge pull request 'feat: add debugging infrastructure and CLAUDE.md optimization' (#101) from feat/debugging-infrastructure into development
Reviewed-on: #101
2026-01-22 18:36:15 +00:00
7fd5fffedf feat: add debugging infrastructure and CLAUDE.md optimization
- Add docs/DEBUGGING-CHECKLIST.md with systematic troubleshooting guide
- Enhance SessionStart hooks to detect missing MCP venvs and warn users
- Add Installation Paths and Debugging sections to CLAUDE.md
- Add Plugin Commands by Category table to Quick Start
- Condense Versioning section for better readability
- Add scripts/check-venv.sh for programmatic venv checking
- Update docs/CANONICAL-PATHS.md with new files

Addresses issues #97, #98, #99

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-22 13:33:32 -05:00
620173eef6 Merge pull request 'Release: suggest_labels fix' (#96) from development into main 2026-01-22 15:54:28 +00:00
0fe4f62a30 Merge pull request 'fix: expose repo parameter in suggest_labels MCP tool' (#95) from fix/suggest-labels-repo-parameter into development 2026-01-22 15:54:26 +00:00
533810f018 fix: expose repo parameter in suggest_labels MCP tool
The suggest_labels tool accepted a repo parameter in the implementation
but didn't expose it in the MCP tool schema, causing it to always rely
on auto-detection which failed in some contexts.

Fixes #94

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-22 10:54:07 -05:00
2ee23a39d8 Merge pull request 'Release: Critical MCP setup documentation' (#93) from development into main 2026-01-22 15:35:36 +00:00
894c85bd54 Merge pull request 'docs: add critical setup instructions for installed marketplace' (#92) from docs/mcp-setup-critical-fix into development 2026-01-22 15:35:23 +00:00
01809a7367 docs: add critical setup instructions for installed marketplace
MCP servers fail when venvs don't exist in ~/.claude/plugins/marketplaces/.
Claude Code doesn't run setup.sh when installing marketplaces, so users
must run it manually.

Added:
- Critical warning section at top of UPDATING.md
- Step to run setup in installed location after updates
- Troubleshooting for "X MCP servers failed" error

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-22 10:35:07 -05:00
a20f1bfdf8 Merge pull request 'Release: pr-review MCP config fix' (#91) from development into main 2026-01-22 15:22:30 +00:00
7879e07815 Merge pull request 'fix: add missing PYTHONPATH env to pr-review MCP config' (#90) from fix/pr-review-mcp-pythonpath into development 2026-01-22 15:21:11 +00:00
eced0fbd07 fix: add missing PYTHONPATH env to pr-review MCP config
Aligns pr-review .mcp.json with projman by adding PYTHONPATH environment
variable. This inconsistency may have caused MCP server failures when
both plugins are loaded.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-22 10:20:27 -05:00
aa6d7f5866 Merge pull request 'Release: SessionStart hook fix' (#89) from development into main 2026-01-22 15:08:29 +00:00
3e5197779d Merge pull request 'fix: correct SessionStart hook structure in projman and pr-review' (#88) from fix/session-start-hook-structure into development 2026-01-22 15:08:11 +00:00
9206931a3c fix: correct SessionStart hook structure in projman and pr-review
Remove incorrect nested matcher/hooks structure from SessionStart hooks.
SessionStart events don't use matchers - that format is only for tool-based
hooks like PreToolUse/PostToolUse.

Fixes recurring "SessionStart:startup hook error" on Claude Code startup.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-22 10:07:12 -05:00
ff3be54f1c Merge pull request 'Release: Add debug commands documentation' (#87) from development into main 2026-01-22 13:11:35 +00:00
1b0f5f4973 Merge pull request 'docs: add debug commands to projman documentation' (#86) from docs/add-debug-commands-documentation into development 2026-01-22 13:11:08 +00:00
8ed0d8f207 docs: add debug commands to projman documentation
- Add /debug-report and /debug-review to projman README
- Add debug commands to COMMANDS-CHEATSHEET.md
- Remove obsolete doc-guardian Stop hook references from cheatsheet
- Update Architecture section with debug command files

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-22 08:10:33 -05:00
007b55916c Merge pull request 'Release: doc-guardian README fix' (#85) from development into main 2026-01-22 12:58:20 +00:00
eeef35aa61 Merge pull request 'fix: correct doc-guardian README to match actual implementation' (#84) from fix/doc-guardian-readme-accuracy into development 2026-01-22 12:44:50 +00:00
be2d989899 fix: correct doc-guardian README to match actual implementation
- Remove false claim about Stop hook (was removed in d2ad90d)
- Fix Solution section to accurately describe prompt-based behavior
- Remove misleading "queue" language since there's no persistent queue

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-22 07:42:51 -05:00
306143882a Merge pull request 'Release v3.1.0: Debug workflow commands + hook fix' (#83) from development into main
Reviewed-on: #83
2026-01-21 23:00:12 +00:00
0c07820b5a Merge pull request 'fix: remove broken Stop hook from doc-guardian' (#82) from fix/remove-broken-stop-hook into development 2026-01-21 22:56:50 +00:00
d2ad90d5bb fix: remove broken Stop hook from doc-guardian
The Stop hook referenced a non-existent "internal queue" for tracking
documentation drift. Each hook runs in isolation with no way to pass
data between invocations, so the queue concept couldn't work.

The hook was causing errors on every session end:
"Stop hook error: Prompt hook condition was not met..."

Changes:
- Removed the Stop hook entirely
- Updated PostToolUse hook to report drift immediately when found
  (instead of referencing non-existent queue)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 17:55:53 -05:00
642dca7062 Merge pull request 'Release v3.1.0: Debug workflow commands' (#81) from development into main
Reviewed-on: #81
2026-01-21 22:47:22 +00:00
faafced061 Merge pull request 'feat: add debug workflow commands and bump to v3.1.0' (#80) from feat/debug-workflow-v3.1.0 into development 2026-01-21 22:46:13 +00:00
c3df0f95e6 feat: add debug workflow commands and bump to v3.1.0
New commands for structured debugging workflow:

/debug-report (for test projects):
- Runs 5 diagnostic MCP tool tests
- Captures full project context (git remote, cwd, branch)
- Generates structured issue with hypothesis
- Creates issue in marketplace repo automatically

/debug-review (for marketplace repo):
- Lists open diagnostic issues for triage
- Maps errors to relevant code files
- MANDATORY: Reads files before proposing fixes
- Three human approval gates
- Creates feature branch, commits, PR with linking

Also includes:
- Dynamic label format detection in suggest_labels
- Rewritten labels-sync.md with explicit execution steps
- Version bump to 3.1.0

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 17:44:54 -05:00
f714957d83 Merge pull request 'Merge development into main: labels-sync repo detection fix' (#79) from development into main
Reviewed-on: #79
2026-01-21 22:33:11 +00:00
40af243229 Merge pull request 'fix: rewrite labels-sync.md with explicit repo detection steps' (#78) from fix/labels-sync-repo-detection into development 2026-01-21 22:26:12 +00:00
69b71fc7cf fix: rewrite labels-sync.md with explicit repo detection steps
ROOT CAUSE: The MCP server runs with cwd set to the plugin directory,
not the user's project. It cannot auto-detect the repository.

SOLUTION: The command documentation now explicitly instructs Claude to:
1. Run `git remote get-url origin` via Bash first
2. Parse the URL to extract owner/repo
3. Pass repo parameter to ALL MCP tool calls

Also removed the "Label Reference" section that was causing Claude
to ask about creating a local reference file.

Key changes:
- Added "CRITICAL: Execution Steps" section with numbered steps
- Added "DO NOT" section to prevent common mistakes
- Removed confusing reference file documentation
- Made all MCP tool calls show required repo parameter

Fixes #77

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 17:16:48 -05:00
5ad207520a Merge pull request 'Merge development into main: dynamic label format detection' (#76) from development into main
Reviewed-on: #76
2026-01-21 22:06:52 +00:00
78d77c1e0a Merge pull request 'fix: make suggest_labels detect actual label format from repository' (#75) from fix/dynamic-label-format into development 2026-01-21 22:04:55 +00:00
5cf43d5de2 fix: make suggest_labels detect actual label format from repository
The suggest_labels function now dynamically detects the label naming convention
used in the repository (slash format like Type/Bug or colon-space format like
Type: Bug) instead of hardcoding slash format.

Changes:
- Added _build_label_lookup() to parse and normalize label formats
- Added _find_label() to find actual labels from the lookup
- Updated suggest_labels() to accept optional repo parameter
- Labels are fetched first, then suggestions match actual names
- Supports Efforts/Effort normalization (handles singular/plural)

Fixes issue #73 sub-issue 3: label format mismatch

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 16:58:44 -05:00
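
A sketch of the lookup-building idea behind _build_label_lookup and _find_label; the normalization scheme here is an illustration, not the actual code:

    def _normalize(label: str) -> str:
        # Hypothetical: fold "Type: Bug" into "type/bug" and treat
        # "Efforts/" and "Effort/" as the same category.
        key = label.replace(": ", "/").strip().lower()
        return key.replace("efforts/", "effort/")

    def build_label_lookup(labels):
        # Map normalized keys to the label names that actually exist in the
        # repository, so suggestions resolve to real names in either format.
        return {_normalize(name): name for name in labels}

    # build_label_lookup(["Type: Bug"]) and build_label_lookup(["Type/Bug"])
    # both let a suggestion keyed "type/bug" resolve to the repo's label.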
51ef10633b Merge pull request 'fix: remove conflicting reference file instruction from labels-sync' (#74) from fix/labels-sync-no-reference-file into development 2026-01-21 21:50:14 +00:00
83094598c5 fix: remove conflicting reference file instruction from labels-sync
The command description still said "updates the local reference file"
which caused the agent to prompt about creating a reference file.

Updated to clarify: no local files are created or modified.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 16:47:57 -05:00
5da29c8e35 Merge pull request 'Release: fix project directory detection for MCP server' (#72) from development into main
Reviewed-on: #72
2026-01-21 21:41:55 +00:00
4f3560d121 Merge pull request 'fix: detect project directory correctly for git remote parsing' (#71) from fix/project-directory-detection into development 2026-01-21 21:38:24 +00:00
d5e521a759 fix: detect project directory correctly for git remote parsing
The MCP server runs with cwd set to the plugin directory, not the
user's project directory. This caused git remote auto-detection to
fail because it was looking at the wrong directory.

Changes:
- Added _find_project_directory() method with multiple strategies:
  1. CLAUDE_PROJECT_DIR environment variable
  2. PWD environment variable (if it has .git or .env)
  3. Current working directory (if it has .git or .env)
- Updated _detect_repo_from_git() to accept project_dir parameter
- Added 3 new tests for project directory detection

Fixes #70

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 16:25:14 -05:00
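
A sketch of the three-strategy fallback; the environment variable names come from the commit, while the .git/.env checks and the final default are assumptions:

    import os
    from pathlib import Path

    def find_project_directory() -> Path:
        # Hypothetical sketch: CLAUDE_PROJECT_DIR first, then PWD, then cwd,
        # accepting a candidate only if it looks like a project.
        def looks_like_project(p: Path) -> bool:
            return (p / ".git").exists() or (p / ".env").exists()

        env_dir = os.environ.get("CLAUDE_PROJECT_DIR")
        if env_dir:
            return Path(env_dir)
        for candidate in (os.environ.get("PWD"), os.getcwd()):
            if candidate and looks_like_project(Path(candidate)):
                return Path(candidate)
        return Path.cwd()  # assumed fallback when nothing matches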
b2c51251f3 Merge pull request 'Release: labels-sync documentation improvements' (#69) from development into main
Reviewed-on: #69
2026-01-21 21:16:28 +00:00
71efa1aafa Merge pull request 'docs: clarify labels-sync command behavior' (#68) from docs/labels-sync-clarification into development 2026-01-21 21:14:14 +00:00
aa3ff016e2 docs: clarify labels-sync command behavior and remove misleading prompts
Updates labels-sync.md to:
- Remove [Y/n] prompts that suggested manual confirmation
- Clarify that execution is autonomous
- Document that labels are fetched dynamically (not from reference file)
- Update examples to show actual behavior
- Improve troubleshooting for common issues
- Add user-owned repo handling documentation

Related to #67

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 16:02:05 -05:00
4557d2ce40 Merge pull request 'Release: repo auto-detection and org validation fixes' (#66) from development into main
Reviewed-on: #66
2026-01-21 20:15:02 +00:00
d282a65fc6 Merge pull request 'fix: add repo auto-detection and improve org validation' (#65) from fix/repo-auto-detection-and-org-validation into development 2026-01-21 20:12:30 +00:00
ad56700059 fix: add repo auto-detection and improve org validation
1. Repo Auto-Detection (config.py):
   - Added _detect_repo_from_git() to parse git remote URL
   - Supports SSH, SSH short, HTTPS, HTTP URL formats
   - Falls back to git remote when GITEA_REPO not set

2. Organization Validation (gitea_client.py):
   - Changed is_org_repo() to use /orgs/{owner} endpoint
   - Added _is_organization() method for reliable org detection
   - Fixes issue where owner.type was null in Gitea API

3. Tests:
   - Added 6 tests for git URL parsing
   - Added 3 tests for org detection

Fixes #64

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 15:10:53 -05:00
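
A sketch of extracting owner/repo from the remote URL formats listed in item 1; the regex and function name are illustrative, not the config.py implementation:

    import re

    def parse_owner_repo(remote_url: str):
        # Hypothetical: handles git@host:owner/repo.git, host:owner/repo,
        # and http(s)://host/owner/repo[.git].
        match = re.search(r"[:/]([^/:]+)/([^/]+?)(\.git)?$", remote_url.strip())
        if not match:
            return None
        return match.group(1), match.group(2)

    # parse_owner_repo("git@gitea.example.com:personal-projects/leo-claude-mktplace.git")
    # -> ("personal-projects", "leo-claude-mktplace")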
df2f5ebb47 Merge pull request 'Release: fix user-owned repo labels' (#63) from development into main
Reviewed-on: #63
2026-01-21 19:48:24 +00:00
feb86b059f Merge pull request 'fix: handle user-owned repos in get_labels (skip org labels)' (#62) from fix/user-owned-repo-labels into development
Merge pull request #62 from fix/user-owned-repo-labels
2026-01-21 19:47:10 +00:00
c23e84f965 fix: handle user-owned repos in get_labels (skip org labels)
The get_labels tool was failing with 404 for user-owned repositories
because it always queried /api/v1/orgs/{owner}/labels, which only
works for organizations.

Changes:
- labels.py: Check is_org_repo() before fetching org labels
- gitea_client.py: Same fix in _resolve_label_ids()
- test_labels.py: Added tests for both org and user-owned repos

Fixes #61

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 14:40:30 -05:00
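
A sketch of the guarded org-label fetch; the client method names and signatures are assumptions standing in for the labels.py / gitea_client.py calls:

    def get_all_labels(client, owner, repo):
        # Hypothetical: only hit /orgs/{owner}/labels when the owner really
        # is an organization; user-owned repos would 404 on that endpoint.
        labels = list(client.get_repo_labels(owner, repo))
        if client.is_org_repo(owner):
            labels.extend(client.get_org_labels(owner))
        return labels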
195ca5c10c Merge pull request 'Release v3.0.1' (#60) from development into main
Reviewed-on: #60
2026-01-21 17:07:45 +00:00
53f1b9662f Merge pull request 'chore: bump version to v3.0.1' (#59) from chore/version-bump-3.0.1 into development 2026-01-21 17:01:34 +00:00
eeffb9e853 chore: bump version to v3.0.1
Updates version references in:
- README.md title
- CHANGELOG.md (new entry with all changes since v3.0.0)
- marketplace.json metadata
- CLAUDE.md project overview

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 12:00:39 -05:00
6c142a9710 Merge pull request 'development' (#57) from development into main
Reviewed-on: #57
2026-01-21 16:56:12 +00:00
f781c6f7b1 Merge pull request 'fix: rename env vars to match MCP expectations' (#58) from fix/env-variable-names into development 2026-01-21 16:54:59 +00:00
8228c20d47 fix: rename environment variables to match MCP server expectations
Gitea MCP server expects:
- GITEA_API_URL (not GITEA_URL) - must include /api/v1
- GITEA_API_TOKEN (not GITEA_TOKEN)

NetBox MCP server expects:
- NETBOX_API_URL (not NETBOX_URL) - must include /api
- NETBOX_API_TOKEN (not NETBOX_TOKEN)

Changes:
- Update all setup commands to use correct variable names
- Update docs/CONFIGURATION.md with correct variables and URL format
- Update scripts/setup.sh templates
- Update CLAUDE.md hybrid configuration reference
- Remove GITEA_ORG from system-level config (it's project-level)
- Fix API URL paths in curl commands (remove redundant /api/v1)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 11:53:04 -05:00
85953d8e1e Merge pull request 'fix: remove redundant manual setup from README' (#56) from fix/readme-simplify into development 2026-01-21 16:37:18 +00:00
f8b6131bfc fix: remove redundant manual setup from README
Manual setup instructions belong in docs/CONFIGURATION.md, not README.
README should only show the simple wizard path (/initial-setup).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 11:36:58 -05:00
cd3d4c69f0 Merge pull request 'fix: update README.md with new setup commands and correct GITEA_ORG placement' (#55) from fix/readme-setup-commands into development 2026-01-21 16:31:56 +00:00
7f6e0893dd fix: update README.md with new setup commands and correct GITEA_ORG placement
- Add /project-init and /project-sync commands to projman
- Add setup commands (/initial-setup, /project-init, /project-sync) to pr-review
- Add /initial-setup to cmdb-assistant
- Replace manual MCP setup instructions with /initial-setup wizard
- Move GITEA_ORG from system-level to project-level in examples
- Add COMMANDS-CHEATSHEET.md and UPDATING.md to documentation table
- Fix GITEA_ORG validation in projman initial-setup (was checking system config)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 11:30:04 -05:00
39105688a5 Merge pull request 'development' (#54) from development into main
Reviewed-on: #54
2026-01-21 16:24:26 +00:00
2a6b3df8e1 Merge pull request 'feat: add interactive setup wizard with API validation and mismatch detection' (#53) from feat/interactive-setup-wizard into development
Reviewed-on: #53
2026-01-21 16:24:02 +00:00
0c2fc8c0d9 feat: add interactive setup wizard with API validation and mismatch detection
Major improvements to plugin setup experience:

Setup Commands:
- Redesign /initial-setup as interactive wizard (MCP + system + project config)
- Add /project-init for quick project-only setup
- Add /project-sync for handling repository moves/renames
- Add Gitea API validation to auto-fill org/repo when verified

Configuration Changes:
- Move GITEA_ORG from system to project level (supports multi-org users)
- System config now only contains GITEA_URL and GITEA_TOKEN
- Project .env now contains GITEA_ORG and GITEA_REPO

Automation:
- Add SessionStart hook for projman and pr-review
- Automatically detects git remote vs .env mismatch
- Warns user to run /project-sync when mismatch found

Documentation:
- Unify configuration docs (remove duplicate in plugins/projman)
- Add flow diagrams to CONFIGURATION.md
- Add setup script review guidance to UPDATING.md
- Update COMMANDS-CHEATSHEET.md with new commands and hooks

Files added:
- plugins/projman/commands/project-init.md
- plugins/projman/commands/project-sync.md
- plugins/projman/hooks/hooks.json
- plugins/pr-review/commands/initial-setup.md
- plugins/pr-review/commands/project-init.md
- plugins/pr-review/commands/project-sync.md
- plugins/pr-review/hooks/hooks.json
- plugins/cmdb-assistant/commands/initial-setup.md

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 11:20:36 -05:00
b5144de0cf Merge pull request 'development' (#52) from development into main
Reviewed-on: #52
2026-01-21 15:15:35 +00:00
29c54279a9 Merge pull request 'fix: marketplace schema corrections for Claude Code compatibility' (#51) from fix/marketplace-schema into development
Reviewed-on: #51
2026-01-21 15:15:21 +00:00
178593f355 docs: add plugin installation verification guide
Explains how to verify plugins are installed since /plugin shows
"(no content)" which can confuse users. Includes test commands
for each plugin.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 22:19:44 -05:00
a70df64cae Merge pull request 'development' (#50) from development into main
Reviewed-on: #50
2026-01-21 02:50:35 +00:00
2a2ac5f85e Merge pull request 'fix: remove unsupported agents field from all plugin manifests' (#49) from fix/marketplace-schema into development
Reviewed-on: #49
2026-01-21 02:50:17 +00:00
e01ba74e84 fix: remove agents field from all plugin.json files
Claude Code v2.0.44 doesn't support the agents field in plugin manifests.
Removed from: clarity-assist, pr-review, projman, claude-config-maintainer, cmdb-assistant

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 21:48:39 -05:00
565540d0ba fix: remove agents field from git-flow plugin.json
Claude Code doesn't support the agents field in plugin manifests.
Commands-only manifest should work.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 21:47:36 -05:00
394c91f8cf Merge pull request 'development' (#48) from development into main
Reviewed-on: #48
2026-01-21 02:13:48 +00:00
89bfd98d9f Merge pull request 'fix: simplify plugin.json schema for commands/agents/skills' (#47) from fix/marketplace-schema into development
Reviewed-on: #47
2026-01-21 02:13:28 +00:00
5c9dd8d6e0 fix: simplify plugin.json schema for commands/agents/skills
Use directory paths instead of object arrays for:
- git-flow
- clarity-assist
- pr-review

Claude Code expects ["./commands/"] format, not detailed object arrays.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 21:11:46 -05:00
374912b463 Merge pull request 'development' (#46) from development into main
Reviewed-on: #46
2026-01-21 01:58:17 +00:00
debb91aa7e Merge pull request 'fix: correct marketplace.json schema for Claude Code compatibility' (#45) from fix/marketplace-schema into development
Reviewed-on: #45
2026-01-21 01:57:48 +00:00
40860c172e fix: correct marketplace.json schema for Claude Code compatibility
- mcpServers: use relative paths (./.mcp.json) instead of names
- hooks: use relative paths (./hooks/hooks.json) instead of event names
- Remove unsupported integrationFile field from all plugins

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 20:55:52 -05:00
50ebe83c0a Merge pull request 'development' (#44) from development into main
Reviewed-on: #44
2026-01-21 01:21:53 +00:00
7295345013 Merge pull request 'chore: rename marketplace to Leo Claude Marketplace' (#43) from chore/rename-marketplace into development
Reviewed-on: #43
2026-01-21 01:18:42 +00:00
a2502c708b chore: rename marketplace to Leo Claude Marketplace
Update all references from old names to new marketplace identity:
- support-claude-mktplace → leo-claude-mktplace (URLs)
- lm-claude-plugins → leo-claude-mktplace (repo name)
- Claude Code Marketplace → Leo Claude Marketplace (display name)

Files updated:
- Core docs (CLAUDE.md, README.md, CHANGELOG.md)
- Documentation (CANONICAL-PATHS, CONFIGURATION, UPDATING, COMMANDS-CHEATSHEET)
- Marketplace manifest and all 9 plugin.json files
- Plugin READMEs and MCP server READMEs
- Setup script and label taxonomy reference

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 20:17:27 -05:00
4ede59e89a Merge pull request 'development' (#42) from development into main
Reviewed-on: #42
2026-01-20 22:31:17 +00:00
3017e4c097 Merge pull request 'Update README.md' (#41) from fix/marketplace-name-change into development
Reviewed-on: #41
2026-01-20 22:30:56 +00:00
de7675a649 Update README.md 2026-01-20 22:30:37 +00:00
aa7bb8f1a4 Merge pull request 'development' (#40) from development into main
Reviewed-on: personal-projects/support-claude-mktplace#40
2026-01-20 22:14:18 +00:00
0a8af05f9c Merge pull request 'docs: add plugin commands cheat sheet with workflow examples' (#39) from docs/commands-cheatsheet into development
Reviewed-on: personal-projects/support-claude-mktplace#39
2026-01-20 22:13:57 +00:00
04322732bc docs: add plugin commands cheat sheet with workflow examples
Add comprehensive documentation for all marketplace plugin commands:
- Reference table with all 37 commands across 9 plugins
- Auto vs manual trigger indicators
- 7 dev workflow examples for practical usage
- Hook-based automation summary
- MCP server requirements

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 17:12:38 -05:00
09d82b310e Merge pull request 'development' (#38) from development into main
Reviewed-on: personal-projects/support-claude-mktplace#38
2026-01-20 22:04:00 +00:00
50b45f4834 Merge pull request 'feat/v3.0.0-architecture-overhaul' (#37) from feat/v3.0.0-architecture-overhaul into development
Reviewed-on: personal-projects/support-claude-mktplace#37
2026-01-20 22:03:21 +00:00
39ad0043c6 fix: bump projman version to 3.0.0
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 17:01:25 -05:00
e5ca804692 feat: v3.0.0 architecture overhaul
- Rename marketplace to lm-claude-plugins
- Move MCP servers to root with symlinks
- Add 6 PR tools to Gitea MCP (list_pull_requests, get_pull_request,
  get_pr_diff, get_pr_comments, create_pr_review, add_pr_comment)
- Add clarity-assist plugin (prompt optimization with ND accommodations)
- Add git-flow plugin (workflow automation)
- Add pr-review plugin (multi-agent review with confidence scoring)
- Centralize configuration docs
- Update all documentation for v3.0.0

BREAKING CHANGE: MCP server paths changed, marketplace renamed

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 16:56:53 -05:00
1c694b6469 Merge pull request 'development' (#36) from development into main
Reviewed-on: personal-projects/support-claude-mktplace#36
2026-01-20 17:49:26 +00:00
c1e9382031 docs: sync all documentation with v2.3.0 changes
Updates missed in initial implementation:
- projman/README.md: add /test-gen command documentation and update architecture
- CLAUDE.md: bump to v2.3.0, add doc-guardian and code-sentinel to plugin table,
  update projman version, update command count to 9, update repository structure
- docs/CANONICAL-PATHS.md: add doc-guardian and code-sentinel plugin paths

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 12:43:02 -05:00
b6c632b75f docs: update CHANGELOG for v2.3.0 release
Document all additions in v2.3.0:
- doc-guardian plugin with /doc-audit and /doc-sync commands
- code-sentinel plugin with /security-scan, /refactor, /refactor-dry commands
- projman /test-gen command for test generation
- Version bumps for marketplace and projman

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 12:36:23 -05:00
a8ea1fcc25 docs: add doc-guardian and code-sentinel to README, update projman commands
- Update marketplace version to v2.3.0
- Add doc-guardian plugin section (documentation lifecycle management)
- Add code-sentinel plugin section (security scanning & refactoring)
- Update projman commands to include /test-gen
- Update repository structure diagram

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 12:35:44 -05:00
ebb950d39c chore(projman): bump version to 2.3.0 for test-gen command
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 12:34:43 -05:00
b508d4bcce feat: register doc-guardian and code-sentinel plugins in marketplace
- Add doc-guardian v1.0.0 with PostToolUse and Stop hooks
- Add code-sentinel v1.0.0 with PreToolUse hook
- Update marketplace version to 2.3.0
- Update projman version to 2.3.0
- Update hookMapping for new plugins

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 12:34:09 -05:00
23537158bc feat(projman): add /test-gen command for test generation
Adds test generation command that complements existing /test-check:
- Auto-detects test framework (pytest, jest, vitest, go test, etc.)
- Generates unit, integration, e2e, or snapshot tests
- Creates happy path, edge case, and error tests
- Supports multiple languages (Python, JavaScript, Go, etc.)
- Integrates with /test-check for generate-then-verify workflow

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 12:33:22 -05:00
870ed26510 feat: add code-sentinel plugin for security scanning and refactoring
Adds security scanning via PreToolUse hooks + refactoring commands:
- PreToolUse hook catches security issues before code is written
- /security-scan command for comprehensive security audit
- /refactor command to apply refactoring patterns
- /refactor-dry command to preview refactoring opportunities
- security-reviewer agent for vulnerability analysis
- refactor-advisor agent for code structure improvements
- security-patterns skill for vulnerability detection rules

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 12:32:43 -05:00
395daecda8 feat: add doc-guardian plugin for documentation lifecycle management
Adds automatic documentation drift detection and synchronization:
- PostToolUse hook detects when code changes affect docs
- Stop hook reminds of pending updates before session ends
- /doc-audit command for full project documentation scan
- /doc-sync command to batch apply pending updates
- doc-analyzer agent for cross-reference analysis
- doc-patterns skill for documentation structure knowledge

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 12:30:42 -05:00
337f40600a Merge pull request 'fix: remove Wiki.js references from architecture docs' (#35) from development into main
Reviewed-on: personal-projects/support-claude-mktplace#35
2026-01-20 16:24:53 +00:00
14425cfad1 fix: remove Wiki.js references from architecture docs
- Updated agent-workflow.spec.md to use Gitea Wiki instead of Wiki.js
- Updated component-map.spec.md to show Gitea (Issues + Wiki) as single service
- Changed GraphQL references to REST API (Gitea uses REST)
- Added Code Reviewer agent swimlane to agent-workflow diagram
- Added ARCHITECTURE NOTES sections to clarify current design

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 11:22:20 -05:00
c38404a98a Merge pull request 'development: small fixes in the changelog' (#34) from development into main
Reviewed-on: personal-projects/support-claude-mktplace#34
2026-01-20 16:17:13 +00:00
70d3933da4 docs: update CHANGELOG with complete version history and add maintenance rules
CHANGELOG.md updates:
- Added proper dates to all versions (2.1.0, 2.0.0, 1.0.0, 0.1.0)
- Added v2.0.0 section (was missing)
- Added v1.0.0 section with initial features
- Updated v2.2.0 with version consolidation changes
- Follows Keep a Changelog format

CLAUDE.md updates:
- Added "Changelog Maintenance (MANDATORY)" section
- Documented required steps when releasing new version
- Added Semantic Versioning reference
- Clarified version display rules

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 11:15:55 -05:00
cce2066d3b fix: consolidate version display to main README.md only
Version standardization:
- Move version to main README.md title: "Claude Code Marketplace - v2.2.0"
- Remove version from plugins/projman/README.md title
- Remove version from plugins/projman/CONFIGURATION.md title
- Remove version from plugin listings in README.md
- Remove "Key Features (v2.2.0)" -> "Key Features"
- Remove Version section from plugins/projman/README.md
- Remove version references from CLAUDE.md structure comments

Added versioning rule to CLAUDE.md:
- Version displayed ONLY in main README.md title
- Do NOT add versions to plugin docs or config guides
- Version history maintained in CHANGELOG.md only

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 11:12:44 -05:00
5d95e42eb5 Merge pull request 'development: final tuning adjustments' (#33) from development into main
Reviewed-on: personal-projects/support-claude-mktplace#33
2026-01-20 15:55:14 +00:00
f35706e621 chore: remove obsolete docs/references/ planning documents
Deleted (obsolete planning docs from pre-implementation phase):
- docs/references/MCP-GITEA.md - wrong MCP server location
- docs/references/MCP-WIKIJS.md - Wiki.js MCP never implemented
- docs/references/PLUGIN-PMO.md - projman-pmo not yet built
- docs/references/PLUGIN-PROJMAN.md - superseded by plugins/projman/README.md
- docs/references/PROJECT-SUMMARY.md - outdated architecture

Updated references in:
- CLAUDE.md - removed docs/references/ from documentation index
- docs/CANONICAL-PATHS.md - removed references directory from structure
- docs/UPDATING.md - updated to reference current documentation
- plugins/projman/mcp-servers/gitea/README.md - fixed related docs links
- plugins/projman/mcp-servers/gitea/TESTING.md - fixed resource links

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 10:52:35 -05:00
697031c526 docs: overhaul CLAUDE.md for v2.2.0 with maintainer best practices
Complete rewrite following claude-config-maintainer principles:
- Clear, concise, complete, current
- Scannable structure with tables
- Critical rules prominently placed
- Quick Start section added

Updates:
- Version: 2.2.0, Status: Production Ready
- Four-Agent Model (added Code Reviewer)
- New commands: /review, /test-check
- New file: validate-marketplace.sh
- Removed outdated "Next Steps" and stale dates
- Consolidated verbose sections into tables
- Added Version History table

Reduced from ~480 lines to ~195 lines while maintaining all essential information.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 10:34:13 -05:00
32797ce473 Merge pull request 'development: new version launched (v2.2.0)' (#32) from development into main
Reviewed-on: personal-projects/support-claude-mktplace#32
2026-01-20 15:23:12 +00:00
368f9a4c2e Merge feat/marketplace-compliance-and-review into development
Release v2.2.0 - Marketplace Compliance Updates & Feature Additions

New Features:
- /review command for pre-sprint-close code quality checks
- /test-check command for test verification before sprint close
- code-reviewer agent for structured code review
- Validation script (scripts/validate-marketplace.sh)

Compliance Fixes:
- Updated marketplace.json with required fields (metadata, author, homepage, repository)
- Updated all plugin manifests with required fields
- Fixed installation documentation to use official Claude Code methods
- Prioritized public HTTPS URL over Tailscale SSH URL

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 10:16:36 -05:00
cb0a5ddec5 chore: bump version to v2.2.0 and update documentation
Version updates:
- marketplace.json metadata.version: 2.0.0 → 2.2.0
- marketplace.json projman version: 2.0.0 → 2.2.0
- plugins/projman/plugin.json version: 2.0.0 → 2.2.0

Documentation updates:
- CHANGELOG.md: Changed [Unreleased] to [2.2.0] - 2026-01-20
- README.md: Updated projman version to v2.2.0
- README.md: Added /review and /test-check to commands list
- README.md: Added code-reviewer to agent list
- README.md: Updated Key Features section to v2.2.0
- README.md: Added validate-marketplace.sh to structure
- plugins/projman/README.md: Updated title to v2.2.0
- plugins/projman/README.md: Updated version changelog

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 10:09:01 -05:00
8da7117b89 docs: update projman readme with new commands
- Add /review command documentation
- Add /test-check command documentation
- Add Code Quality Commands section
- Add code-reviewer agent to Agents section
- Update architecture directory listing

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 10:00:37 -05:00
c322cf4b2f docs: update changelog with compliance and feature additions
- Document all compliance fixes (marketplace, plugins, README)
- Document new /review and /test-check commands
- Document code-reviewer agent addition
- Document validation script addition

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 09:59:43 -05:00
a6d3fe6c6c feat(projman): add /test-check command for test verification
- Add test-check.md command for pre-sprint-close test verification
- Automatic framework detection (pytest, jest, go test, cargo, etc.)
- Coverage reporting when available
- Sprint file analysis for untested changes
- Clear pass/fail summary with recommendations

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 09:59:16 -05:00
c62e0dbd2c feat(projman): add /review command for code quality checks
- Add review.md command for pre-sprint-close code quality review
- Add code-reviewer.md agent for structured review workflow
- Covers debug artifacts, code quality, security, error handling
- Integrates with projman sprint context when available
- Provides structured output with severity levels

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 09:58:35 -05:00
34c8f0bdb6 feat: add validation script for marketplace compliance
- Validates marketplace.json syntax and required fields
- Checks each plugin entry has required fields
- Validates individual plugin.json files
- Checks for recommended fields (author, homepage, repository)
- Returns non-zero exit code on errors

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 09:57:40 -05:00
72941c1fe1 docs: fix installation instructions for official methods
- Replace pluginMarketplace with extraKnownMarketplaces
- Add CLI command method (recommended)
- Add settings.json method for team distribution
- Add local development option
- Prioritize public HTTPS URL over Tailscale SSH URL
- Reorganize installation sections for clarity

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 09:56:41 -05:00
1adb434c58 fix: add required fields to plugin manifests
- Add author, homepage, repository to all plugin manifests
- Add license field where missing
- Add commands and agents directory references
- Update keywords for better discoverability

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 09:55:57 -05:00
76462d5d8c fix: update marketplace.json with required fields
- Add metadata wrapper for description and version
- Add author, homepage, and repository to all plugin entries
- Ensure owner.email field exists and is valid

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 09:55:13 -05:00
5b7c1f3fce Update plugins/projman/README.md 2026-01-20 13:55:26 +00:00
87b30cbb85 Update plugins/projman/README.md 2026-01-20 13:54:46 +00:00
67c057f9ee Merge pull request 'Update README.md' (#31) from development into main
Reviewed-on: personal-projects/support-claude-mktplace#31
2026-01-20 13:44:27 +00:00
c34fd22852 Update README.md 2026-01-20 13:42:39 +00:00
5fc18f28e9 Merge pull request 'feat: add plugin integration analysis to config-analyze command' (#30) from development into main
Reviewed-on: personal-projects/support-claude-mktplace#30
2026-01-19 23:27:15 +00:00
113839b0d8 feat: add plugin integration analysis to config-analyze command
Add automatic detection of active marketplace plugins and verification
that CLAUDE.md properly references them. This ensures projects using
marketplace plugins will have proper documentation to guide Claude Code
in using available tools.

Changes:
- Add claude-md-integration.md snippets to all 4 plugins (projman,
  cmdb-assistant, claude-config-maintainer, project-hygiene)
- Update marketplace.json with MCP server mappings for plugin detection
- Enhance /config-analyze to detect active plugins via MCP server names
- Update maintainer agent with plugin integration workflow
- Add plugin coverage percentage to analysis report
- User confirmation required before adding plugin references

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 18:24:29 -05:00
a12665c22d Merge pull request 'chore: rebrand for public release' (#28) from development into main
Reviewed-on: personal-projects/support-claude-mktplace#28
2026-01-19 22:54:29 +00:00
15e0654950 chore: rebrand for public release
- Move repository from bandit to personal-projects organization
- Remove all "Bandit Labs" references
- Update author to Leo Miranda
- Rename marketplace from bandit-claude-marketplace to claude-code-marketplace
- Add MIT LICENSE file
- Remove outdated root .mcp.json (MCP servers now bundled in plugins)
- Update all repository URLs to new location

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 17:49:40 -05:00
3c3b9329c5 Merge pull request 'development' (#27) from development into main
Reviewed-on: bandit/support-claude-mktplace#27
2026-01-19 22:23:28 +00:00
1e63de6679 Merge pull request 'refactor: bundle MCP servers inside plugins for cache compatibility' (#26) from development into main
Reviewed-on: bandit/support-claude-mktplace#26
2025-12-15 22:23:57 +00:00
6c77c264f6 Merge pull request 'fix: correct labels path in setup.sh' (#25) from development into main
Reviewed-on: bandit/support-claude-mktplace#25
2025-12-15 22:09:32 +00:00
b3a41f722c Merge pull request 'development' (#24) from development into main
Reviewed-on: bandit/support-claude-mktplace#24
2025-12-12 17:38:07 +00:00
97085021a9 Merge pull request 'fix: add missing plugin declarations and fix hardcoded paths' (#23) from development into main
Reviewed-on: bandit/support-claude-mktplace#23
2025-12-12 15:57:19 +00:00
eb2c184641 Merge pull request 'development' (#22) from development into main
Reviewed-on: bandit/support-claude-mktplace#22
2025-12-12 07:13:09 +00:00
5cfe858b12 Merge pull request 'Add path safeguards to prevent structural damage' (#21) from development into main
Reviewed-on: bandit/support-claude-mktplace#21
2025-12-12 05:15:04 +00:00
388 changed files with 45710 additions and 11926 deletions

@@ -1,35 +1,205 @@
{
"name": "bandit-claude-marketplace",
"version": "2.0.0",
"description": "Project management plugins with Gitea and NetBox integrations",
"name": "leo-claude-mktplace",
"owner": {
"name": "Bandit Labs",
"email": "dev@banditlabs.io"
"name": "Leo Miranda",
"email": "leobmiranda@gmail.com"
},
"metadata": {
"description": "Project management plugins with Gitea and NetBox integrations",
"version": "5.9.0"
},
"plugins": [
{
"name": "projman",
"version": "2.0.0",
"version": "3.4.0",
"description": "Sprint planning and project management with Gitea integration",
"source": "./plugins/projman"
"source": "./plugins/projman",
"author": {
"name": "Leo Miranda",
"email": "leobmiranda@gmail.com"
},
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/projman/README.md",
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"hooks": ["./hooks/hooks.json"],
"category": "development",
"tags": ["sprint", "agile", "gitea", "project-management"],
"license": "MIT"
},
{
"name": "doc-guardian",
"version": "1.1.0",
"description": "Automatic documentation drift detection and synchronization",
"source": "./plugins/doc-guardian",
"author": {
"name": "Leo Miranda",
"email": "leobmiranda@gmail.com"
},
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/doc-guardian/README.md",
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"hooks": ["./hooks/hooks.json"],
"category": "productivity",
"tags": ["documentation", "drift-detection", "sync"],
"license": "MIT"
},
{
"name": "code-sentinel",
"version": "1.0.1",
"description": "Security scanning and code refactoring tools",
"source": "./plugins/code-sentinel",
"author": {
"name": "Leo Miranda",
"email": "leobmiranda@gmail.com"
},
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/code-sentinel/README.md",
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"hooks": ["./hooks/hooks.json"],
"category": "security",
"tags": ["security-scan", "refactoring", "vulnerabilities"],
"license": "MIT"
},
{
"name": "project-hygiene",
"version": "0.1.0",
"description": "Post-task cleanup hook that removes temp files and manages orphaned files",
"source": "./plugins/project-hygiene"
"source": "./plugins/project-hygiene",
"author": {
"name": "Leo Miranda",
"email": "leobmiranda@gmail.com"
},
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/project-hygiene/README.md",
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"hooks": ["./hooks/hooks.json"],
"category": "productivity",
"tags": ["cleanup", "automation", "hygiene"],
"license": "MIT"
},
{
"name": "cmdb-assistant",
"version": "1.0.0",
"description": "NetBox CMDB integration for infrastructure management",
"source": "./plugins/cmdb-assistant"
"version": "1.2.0",
"description": "NetBox CMDB integration with data quality validation and machine registration",
"source": "./plugins/cmdb-assistant",
"author": {
"name": "Leo Miranda",
"email": "leobmiranda@gmail.com"
},
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/cmdb-assistant/README.md",
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"hooks": ["./hooks/hooks.json"],
"category": "infrastructure",
"tags": ["cmdb", "netbox", "dcim", "ipam", "data-quality", "validation"],
"license": "MIT"
},
{
"name": "claude-config-maintainer",
"version": "1.0.0",
"description": "CLAUDE.md optimization and maintenance for Claude Code projects",
"source": "./plugins/claude-config-maintainer"
"version": "1.2.0",
"description": "CLAUDE.md and settings.local.json optimization for Claude Code projects",
"source": "./plugins/claude-config-maintainer",
"author": {
"name": "Leo Miranda",
"email": "leobmiranda@gmail.com"
},
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/claude-config-maintainer/README.md",
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"hooks": ["./hooks/hooks.json"],
"category": "development",
"tags": ["claude-md", "configuration", "optimization"],
"license": "MIT"
},
{
"name": "clarity-assist",
"version": "1.2.0",
"description": "Prompt optimization and requirement clarification with ND-friendly accommodations",
"source": "./plugins/clarity-assist",
"author": {
"name": "Leo Miranda",
"email": "leobmiranda@gmail.com"
},
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/clarity-assist/README.md",
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"hooks": ["./hooks/hooks.json"],
"category": "productivity",
"tags": ["prompts", "requirements", "clarification", "nd-friendly"],
"license": "MIT"
},
{
"name": "git-flow",
"version": "1.2.0",
"description": "Git workflow automation with intelligent commit messages and branch management",
"source": "./plugins/git-flow",
"author": {
"name": "Leo Miranda",
"email": "leobmiranda@gmail.com"
},
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/git-flow/README.md",
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"hooks": ["./hooks/hooks.json"],
"category": "development",
"tags": ["git", "workflow", "commits", "branching"],
"license": "MIT"
},
{
"name": "pr-review",
"version": "1.1.0",
"description": "Multi-agent pull request review with confidence scoring and actionable feedback",
"source": "./plugins/pr-review",
"author": {
"name": "Leo Miranda",
"email": "leobmiranda@gmail.com"
},
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/pr-review/README.md",
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"hooks": ["./hooks/hooks.json"],
"category": "development",
"tags": ["code-review", "pull-requests", "security", "quality"],
"license": "MIT"
},
{
"name": "data-platform",
"version": "1.3.0",
"description": "Data engineering tools with pandas, PostgreSQL/PostGIS, and dbt integration",
"source": "./plugins/data-platform",
"author": {
"name": "Leo Miranda",
"email": "leobmiranda@gmail.com"
},
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/data-platform/README.md",
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"hooks": ["./hooks/hooks.json"],
"category": "data",
"tags": ["pandas", "postgresql", "postgis", "dbt", "data-engineering", "etl"],
"license": "MIT"
},
{
"name": "viz-platform",
"version": "1.1.0",
"description": "Visualization tools with Dash Mantine Components validation, Plotly charts, and theming",
"source": "./plugins/viz-platform",
"author": {
"name": "Leo Miranda",
"email": "leobmiranda@gmail.com"
},
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/viz-platform/README.md",
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"hooks": ["./hooks/hooks.json"],
"category": "visualization",
"tags": ["dash", "plotly", "mantine", "charts", "dashboards", "theming", "dmc"],
"license": "MIT"
},
{
"name": "contract-validator",
"version": "1.2.0",
"description": "Cross-plugin compatibility validation and Claude.md agent verification",
"source": "./plugins/contract-validator",
"author": {
"name": "Leo Miranda",
"email": "leobmiranda@gmail.com"
},
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/contract-validator/README.md",
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"hooks": ["./hooks/hooks.json"],
"category": "development",
"tags": ["validation", "contracts", "compatibility", "agents", "interfaces", "cross-plugin"],
"license": "MIT"
}
]
}


@@ -0,0 +1,249 @@
# CLAUDE.md
This file provides guidance to Claude Code when working with code in this repository.
## Project Overview
**Repository:** leo-claude-mktplace
**Version:** 3.0.1
**Status:** Production Ready
A plugin marketplace for Claude Code containing:
| Plugin | Description | Version |
|--------|-------------|---------|
| `projman` | Sprint planning and project management with Gitea integration | 3.0.0 |
| `git-flow` | Git workflow automation with smart commits and branch management | 1.0.0 |
| `pr-review` | Multi-agent PR review with confidence scoring | 1.0.0 |
| `clarity-assist` | Prompt optimization with ND-friendly accommodations | 1.0.0 |
| `doc-guardian` | Automatic documentation drift detection and synchronization | 1.0.0 |
| `code-sentinel` | Security scanning and code refactoring tools | 1.0.0 |
| `claude-config-maintainer` | CLAUDE.md optimization and maintenance | 1.0.0 |
| `cmdb-assistant` | NetBox CMDB integration for infrastructure management | 1.0.0 |
| `project-hygiene` | Post-task cleanup automation via hooks | 0.1.0 |
## Quick Start
```bash
# Validate marketplace compliance
./scripts/validate-marketplace.sh
# Setup commands (in a target project with plugin installed)
/initial-setup # First time: full setup wizard
/project-init # New project: quick config
/project-sync # After repo move: sync config
# Run projman commands
/sprint-plan # Start sprint planning
/sprint-status # Check progress
/review # Pre-close code quality review
/test-check # Verify tests before close
/sprint-close # Complete sprint
```
## Repository Structure
```
leo-claude-mktplace/
├── .claude-plugin/
│ └── marketplace.json # Marketplace manifest
├── mcp-servers/ # SHARED MCP servers (v3.0.0+)
│ ├── gitea/ # Gitea MCP (issues, PRs, wiki)
│ └── netbox/ # NetBox MCP (CMDB)
├── plugins/
│ ├── projman/ # Sprint management
│ │ ├── .claude-plugin/plugin.json
│ │ ├── .mcp.json
│ │ ├── mcp-servers/gitea -> ../../../mcp-servers/gitea # SYMLINK
│ │ ├── commands/ # 12 commands (incl. setup)
│ │ ├── hooks/ # SessionStart mismatch detection
│ │ ├── agents/ # 4 agents
│ │ └── skills/label-taxonomy/
│ ├── git-flow/ # Git workflow automation
│ │ ├── .claude-plugin/plugin.json
│ │ ├── commands/ # 8 commands
│ │ └── agents/
│ ├── pr-review/ # Multi-agent PR review
│ │ ├── .claude-plugin/plugin.json
│ │ ├── .mcp.json
│ │ ├── mcp-servers/gitea -> ../../../mcp-servers/gitea # SYMLINK
│ │ ├── commands/ # 6 commands (incl. setup)
│ │ ├── hooks/ # SessionStart mismatch detection
│ │ └── agents/ # 5 agents
│ ├── clarity-assist/ # Prompt optimization (NEW v3.0.0)
│ │ ├── .claude-plugin/plugin.json
│ │ ├── commands/ # 2 commands
│ │ └── agents/
│ ├── doc-guardian/ # Documentation drift detection
│ ├── code-sentinel/ # Security scanning & refactoring
│ ├── claude-config-maintainer/
│ ├── cmdb-assistant/
│ └── project-hygiene/
├── scripts/
│ ├── setup.sh, post-update.sh
│ └── validate-marketplace.sh # Marketplace compliance validation
└── docs/
├── CANONICAL-PATHS.md # Single source of truth for paths
└── CONFIGURATION.md # Centralized configuration guide
```
## CRITICAL: Rules You MUST Follow
### File Operations
- **NEVER** create files in repository root unless listed in "Allowed Root Files"
- **NEVER** modify `.gitignore` without explicit permission
- **ALWAYS** use `.scratch/` for temporary/exploratory work
- **ALWAYS** verify paths against `docs/CANONICAL-PATHS.md` before creating files
### Plugin Development
- **plugin.json MUST be in `.claude-plugin/` directory** (not plugin root)
- **Every plugin MUST be listed in marketplace.json**
- **MCP servers are SHARED at root** with symlinks from plugins
- **MCP server venv path**: `${CLAUDE_PLUGIN_ROOT}/mcp-servers/{name}/.venv/bin/python`
- **CLI tools forbidden** - Use MCP tools exclusively (never `tea`, `gh`, etc.)
### Hooks (Valid Events Only)
`PreToolUse`, `PostToolUse`, `UserPromptSubmit`, `SessionStart`, `SessionEnd`, `Notification`, `Stop`, `SubagentStop`, `PreCompact`
**INVALID:** `task-completed`, `file-changed`, `git-commit-msg-needed`
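For orientation, a minimal `hooks/hooks.json` sketch wired to one of the valid events — the matcher, script path, and exact schema here are assumptions; check an existing plugin's `hooks/hooks.json` for the authoritative shape:
```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "${CLAUDE_PLUGIN_ROOT}/hooks/notify.sh" }
        ]
      }
    ]
  }
}
```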
### Allowed Root Files
`CLAUDE.md`, `README.md`, `LICENSE`, `CHANGELOG.md`, `.gitignore`, `.env.example`
### Allowed Root Directories
`.claude/`, `.claude-plugin/`, `.claude-plugins/`, `.scratch/`, `docs/`, `hooks/`, `mcp-servers/`, `plugins/`, `scripts/`
## Architecture
### Four-Agent Model (projman)
| Agent | Personality | Responsibilities |
|-------|-------------|------------------|
| **Planner** | Thoughtful, methodical | Sprint planning, architecture analysis, issue creation, lesson search |
| **Orchestrator** | Concise, action-oriented | Sprint execution, parallel batching, Git operations, lesson capture |
| **Executor** | Implementation-focused | Code implementation, branch management, MR creation |
| **Code Reviewer** | Thorough, practical | Pre-close quality review, security scan, test verification |
### MCP Server Tools (Gitea)
| Category | Tools |
|----------|-------|
| Issues | `list_issues`, `get_issue`, `create_issue`, `update_issue`, `add_comment` |
| Labels | `get_labels`, `suggest_labels`, `create_label` |
| Milestones | `list_milestones`, `get_milestone`, `create_milestone`, `update_milestone` |
| Dependencies | `list_issue_dependencies`, `create_issue_dependency`, `get_execution_order` |
| Wiki | `list_wiki_pages`, `get_wiki_page`, `create_wiki_page`, `create_lesson`, `search_lessons` |
| **Pull Requests** | `list_pull_requests`, `get_pull_request`, `get_pr_diff`, `get_pr_comments`, `create_pr_review`, `add_pr_comment` *(NEW v3.0.0)* |
| Validation | `validate_repo_org`, `get_branch_protection` |
### Hybrid Configuration
| Level | Location | Purpose |
|-------|----------|---------|
| System | `~/.config/claude/gitea.env` | Credentials (GITEA_API_URL, GITEA_API_TOKEN) |
| Project | `.env` in project root | Repository specification (GITEA_ORG, GITEA_REPO) |
**Note:** `GITEA_ORG` is at project level since different projects may belong to different organizations.
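A sketch of the two files with placeholder values (variable names match the table above; real values come from your Gitea instance):
```bash
# ~/.config/claude/gitea.env — system level (credentials)
GITEA_API_URL=https://gitea.example.com/api/v1   # must include /api/v1
GITEA_API_TOKEN=replace-with-your-token

# <project>/.env — project level (repository specification)
GITEA_ORG=my-org
GITEA_REPO=my-repo
```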
### Branch-Aware Security
| Branch Pattern | Mode | Capabilities |
|----------------|------|--------------|
| `development`, `feat/*` | Development | Full access |
| `staging` | Staging | Read-only code, can create issues |
| `main`, `master` | Production | Read-only, emergency only |
## Label Taxonomy
43 labels total: 27 organization + 16 repository
**Organization:** Agent/2, Complexity/3, Efforts/5, Priority/4, Risk/3, Source/4, Type/6
**Repository:** Component/9, Tech/7
Sync with `/labels-sync` command.
## Lessons Learned System
Stored in Gitea Wiki under `lessons-learned/sprints/`.
**Workflow:**
1. Orchestrator captures at sprint close via MCP tools
2. Planner searches at sprint start using `search_lessons`
3. Tags enable cross-project discovery
## Common Operations
### Adding a New Plugin
1. Create `plugins/{name}/.claude-plugin/plugin.json`
2. Add entry to `.claude-plugin/marketplace.json` with category, tags, license
3. Create `README.md` and `claude-md-integration.md`
4. If using MCP server, create symlink: `ln -s ../../../mcp-servers/{server} plugins/{name}/mcp-servers/{server}`
5. Run `./scripts/validate-marketplace.sh`
6. Update `CHANGELOG.md`
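For step 1, a minimal `plugin.json` sketch — the fields mirror the marketplace entries in `.claude-plugin/marketplace.json`; treat the exact schema as an assumption and copy an existing plugin's manifest as the starting point:
```json
{
  "name": "my-plugin",
  "version": "0.1.0",
  "description": "One-line summary of the plugin",
  "author": {
    "name": "Leo Miranda",
    "email": "leobmiranda@gmail.com"
  },
  "commands": "./commands",
  "agents": "./agents",
  "hooks": "./hooks/hooks.json"
}
```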
### Adding a Command to projman
1. Create `plugins/projman/commands/{name}.md`
2. Update `plugins/projman/README.md`
3. Update marketplace description if significant
### Validation
```bash
./scripts/validate-marketplace.sh # Validates all manifests
```
## Path Verification Protocol
**Before creating any file:**
1. Read `docs/CANONICAL-PATHS.md`
2. List all paths to be created/modified
3. Verify each against canonical paths
4. If not in canonical paths, STOP and ask
## Documentation Index
| Document | Purpose |
|----------|---------|
| `docs/CANONICAL-PATHS.md` | **Single source of truth** for paths |
| `docs/COMMANDS-CHEATSHEET.md` | All commands quick reference with workflow examples |
| `docs/CONFIGURATION.md` | Centralized setup guide |
| `docs/UPDATING.md` | Update guide for the marketplace |
| `plugins/projman/CONFIGURATION.md` | Quick reference (links to central) |
| `plugins/projman/README.md` | Projman full documentation |
## Versioning and Changelog Rules
### Version Display
**The marketplace version is displayed ONLY in the main `README.md` title.**
- Format: `# Leo Claude Marketplace - vX.Y.Z`
- Do NOT add version numbers to individual plugin documentation titles
- Do NOT add version numbers to configuration guides
- Do NOT add version numbers to CLAUDE.md or other docs
### Changelog Maintenance (MANDATORY)
**`CHANGELOG.md` is the authoritative source for version history.**
When releasing a new version:
1. Update main `README.md` title with new version
2. Update `CHANGELOG.md` with:
- Version number and date: `## [X.Y.Z] - YYYY-MM-DD`
- **Added**: New features, commands, files
- **Changed**: Modifications to existing functionality
- **Fixed**: Bug fixes
- **Removed**: Deleted features, files, deprecated items
3. Update `marketplace.json` metadata version
4. Update plugin `plugin.json` versions if plugin-specific changes
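A hypothetical entry in that shape (version, date, and items are placeholders):
```markdown
## [5.10.0] - 2026-02-10
### Added
- `/example-command` for projman
### Fixed
- Startup hook venv path in data-platform
```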
### Version Format
- Follow [Semantic Versioning](https://semver.org/): MAJOR.MINOR.PATCH
- MAJOR: Breaking changes
- MINOR: New features, backward compatible
- PATCH: Bug fixes, minor improvements
---
**Last Updated:** 2026-01-20

.doc-guardian-queue Normal file

@@ -0,0 +1,27 @@
# Doc Guardian Queue - cleared after sync on 2026-02-02
2026-02-02T11:41:00 | .claude-plugin | /home/lmiranda/claude-plugins-work/.claude-plugin/marketplace.json | CLAUDE.md .claude-plugin/marketplace.json
2026-02-02T13:35:48 | skills | /home/lmiranda/claude-plugins-work/plugins/projman/skills/sprint-approval.md | README.md
2026-02-02T13:36:03 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/sprint-start.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:36:16 | agents | /home/lmiranda/claude-plugins-work/plugins/projman/agents/orchestrator.md | README.md CLAUDE.md
2026-02-02T13:39:07 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/rfc.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:39:15 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/setup.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:39:32 | skills | /home/lmiranda/claude-plugins-work/plugins/projman/skills/rfc-workflow.md | README.md
2026-02-02T13:43:14 | skills | /home/lmiranda/claude-plugins-work/plugins/projman/skills/rfc-templates.md | README.md
2026-02-02T13:44:55 | skills | /home/lmiranda/claude-plugins-work/plugins/projman/skills/sprint-lifecycle.md | README.md
2026-02-02T13:45:04 | skills | /home/lmiranda/claude-plugins-work/plugins/projman/skills/label-taxonomy/labels-reference.md | README.md
2026-02-02T13:45:14 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/sprint-plan.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:45:48 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/review.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:46:07 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/sprint-close.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:46:21 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/sprint-status.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:46:38 | agents | /home/lmiranda/claude-plugins-work/plugins/projman/agents/planner.md | README.md CLAUDE.md
2026-02-02T13:46:57 | agents | /home/lmiranda/claude-plugins-work/plugins/projman/agents/code-reviewer.md | README.md CLAUDE.md
2026-02-02T13:49:13 | commands | /home/lmiranda/claude-plugins-work/plugins/viz-platform/commands/design-gate.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:49:24 | commands | /home/lmiranda/claude-plugins-work/plugins/data-platform/commands/data-gate.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:49:35 | skills | /home/lmiranda/claude-plugins-work/plugins/projman/skills/domain-consultation.md | README.md
2026-02-02T13:50:04 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/validation_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-02-02T13:50:59 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/server.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-02-02T13:51:32 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/tests/test_validation_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-02-02T13:51:49 | skills | /home/lmiranda/claude-plugins-work/plugins/contract-validator/skills/validation-rules.md | README.md
2026-02-02T13:52:07 | skills | /home/lmiranda/claude-plugins-work/plugins/contract-validator/skills/mcp-tools-reference.md | README.md
2026-02-02T13:59:09 | skills | /home/lmiranda/claude-plugins-work/plugins/projman/skills/progress-tracking.md | README.md
2026-02-02T14:01:34 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/test.md | docs/COMMANDS-CHEATSHEET.md README.md

.env Normal file

@@ -0,0 +1 @@
GITEA_REPO=personal-projects/leo-claude-mktplace

.gitignore vendored

@@ -31,6 +31,8 @@ venv/
ENV/
env/
.venv/
.venv
**/.venv
# PyCharm
.idea/


@@ -1,28 +1,24 @@
{
"mcpServers": {
"gitea": {
"command": "./mcp-servers/gitea/run.sh",
"args": []
},
"netbox": {
"command": "./mcp-servers/netbox/run.sh",
"args": []
},
"viz-platform": {
"command": "./mcp-servers/viz-platform/run.sh",
"args": []
},
"data-platform": {
"command": "./mcp-servers/data-platform/run.sh",
"args": []
},
"contract-validator": {
"command": "./mcp-servers/contract-validator/run.sh",
"args": []
}
}
}


@@ -1,11 +1,825 @@
# Changelog
All notable changes to the Leo Claude Marketplace will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## [Unreleased]
---
## [5.9.0] - 2026-02-03
### Added
#### Plugin Installation Scripts
New scripts for installing marketplace plugins into consumer projects:
- **`scripts/install-plugin.sh`** — Install a plugin to a consumer project
- Adds MCP server entry to target's `.mcp.json` (if plugin has MCP server)
- Appends integration snippet to target's `CLAUDE.md`
- Idempotent: safe to run multiple times
- Validates plugin exists and target path is valid
- **`scripts/uninstall-plugin.sh`** — Remove a plugin from a consumer project
- Removes MCP server entry from `.mcp.json`
- Removes integration section from `CLAUDE.md`
- **`scripts/list-installed.sh`** — Show installed plugins in a project
- Lists fully installed, partially installed, and available plugins
- Shows plugin versions and descriptions
**Usage:**
```bash
./scripts/install-plugin.sh data-platform ~/projects/personal-portfolio
./scripts/list-installed.sh ~/projects/personal-portfolio
./scripts/uninstall-plugin.sh data-platform ~/projects/personal-portfolio
```
**Documentation:** `docs/CONFIGURATION.md` updated with "Installing Plugins to Consumer Projects" section.
### Fixed
#### Plugin Installation Scripts — MCP Mapping & Section Markers
**MCP Server Mapping:**
- Added `mcp_servers` field to plugin.json for plugins that use shared MCP servers
- `projman` and `pr-review` now correctly install `gitea` MCP server
- `cmdb-assistant` now correctly installs `netbox` MCP server
- Scripts read MCP server names from plugin.json instead of assuming plugin name = server name
**CLAUDE.md Section Markers:**
- Install script now wraps integration content with HTML comment markers:
`<!-- BEGIN marketplace-plugin: {name} -->` and `<!-- END marketplace-plugin: {name} -->`
- Uninstall script uses markers for precise section removal (no more code block false positives)
- Backward compatible: falls back to legacy header detection for pre-marker installations
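For illustration, the wrapped section in a consumer project's `CLAUDE.md` looks roughly like this (the body between the markers is a placeholder):
```markdown
<!-- BEGIN marketplace-plugin: data-platform -->
## data-platform
(integration snippet appended by install-plugin.sh)
<!-- END marketplace-plugin: data-platform -->
```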
**Plugins updated with `mcp_servers` field:**
- `projman` → `["gitea"]`
- `pr-review` → `["gitea"]`
- `cmdb-assistant` → `["netbox"]`
- `data-platform` → `["data-platform"]`
- `viz-platform` → `["viz-platform"]`
- `contract-validator` → `["contract-validator"]`
#### Agent Model Selection
Per-agent model selection using Claude Code's now-supported `model` frontmatter field.
- All 25 marketplace agents assigned appropriate model (`sonnet`, `haiku`, or `inherit`)
- Model assignment based on reasoning depth, tool complexity, and latency requirements
- Documentation added to `CLAUDE.md` and `docs/CONFIGURATION.md`
**Supported values:** `sonnet` (default), `opus`, `haiku`, `inherit`
**Model assignments:**
| Model | Agent Types |
|-------|-------------|
| sonnet | Planner, Orchestrator, Executor, Code Reviewer, Coordinator, Security Reviewers, Data Advisor, Design Reviewer, etc. |
| haiku | Maintainability Auditor, Test Validator, Component Check, Theme Setup, Git Assistant, Data Ingestion, Agent Check |
### Fixed
#### Agent Frontmatter Standardization
- Fixed viz-platform and data-platform agents using non-standard `agent:` field (now `name:`)
- Removed non-standard `triggers:` field from domain agents (trigger info already in agent body)
- Added missing frontmatter to 13 agents across pr-review, viz-platform, contract-validator, clarity-assist, git-flow, doc-guardian, code-sentinel, cmdb-assistant, and data-platform
- All 25 agents now have consistent `name`, `description`, and `model` fields
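A sketch of the standardized frontmatter on an agent file (the name and description are placeholders; `model` uses one of the supported values listed above):
```markdown
---
name: example-reviewer
description: Pre-close quality review and test verification
model: sonnet
---
```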
---
## [5.8.0] - 2026-02-02
### Added
#### claude-config-maintainer v1.2.0 - Settings Audit Feature
New commands for auditing and optimizing `settings.local.json` permission configurations:
- **`/config-audit-settings`** — Audit `settings.local.json` permissions with 100-point scoring across redundancy, coverage, safety alignment, and profile fit
- **`/config-optimize-settings`** — Apply permission optimizations with dry-run, named profiles (`conservative`, `reviewed`, `autonomous`), and consolidation modes
- **`/config-permissions-map`** — Generate Mermaid diagram of review layer coverage and permission gaps
- **`skills/settings-optimization.md`** — Comprehensive skill for permission pattern analysis, consolidation rules, review-layer-aware recommendations, and named profiles
**Key Features:**
- Settings Efficiency Score (100 points) alongside existing CLAUDE.md score
- Review layer verification — agent reads `hooks/hooks.json` from installed plugins before recommending auto-allow patterns
- Three named profiles: `conservative` (prompts for most writes), `reviewed` (for projects with ≥2 review layers), `autonomous` (sandboxed environments)
- Pattern consolidation detection: duplicates, subsets, merge candidates, stale entries, conflicts
#### Projman Hardening Sprint
Targeted improvements to safety gates, command structure, lifecycle tracking, and cross-plugin contracts.
**Sprint Lifecycle State Machine:**
- New `skills/sprint-lifecycle.md` - defines valid states and transitions via milestone metadata
- States: idle -> Sprint/Planning -> Sprint/Executing -> Sprint/Reviewing -> idle
- All sprint commands check and set lifecycle state on entry/exit
- Out-of-order calls produce warnings with guidance, `--force` override available
**Sprint Dispatch Log:**
- Orchestrator now maintains a structured dispatch log during execution
- Records task dispatch, completion, failure, gate checks, and resume events
- Enables timeline reconstruction after interrupted sessions
**Gate Contract Versioning:**
- Gate commands (`/design-gate`, `/data-gate`) declare `gate_contract: v1` in frontmatter
- `domain-consultation.md` Gate Command Reference includes expected contract version
- `validate_workflow_integration` now checks contract version compatibility
- Mismatch produces WARNING, missing contract produces INFO suggestion
**Shared Visual Output Skill:**
- New `skills/visual-output.md` - single source of truth for projman visual headers
- All 4 agent files reference the skill instead of inline templates
- Phase Registry maps agents to emoji and phase names
### Changed
**Sprint Approval Gate Hardened:**
- Approval is now a hard block, not a warning (was "recommended", now required)
- `--force` flag added to bypass in emergencies (logged to milestone)
- Consistent language across sprint-approval.md, sprint-start.md, and orchestrator.md
**RFC Commands Normalized:**
- 5 individual commands (`/rfc-create`, `/rfc-list`, `/rfc-review`, `/rfc-approve`, `/rfc-reject`) consolidated into `/rfc create|list|review|approve|reject`
- `/clear-cache` absorbed into `/setup --clear-cache`
- Command count reduced from 17 to 12
**`/test` Command Documentation Expanded:**
- Sprint integration section (pre-close verification workflow)
- Concrete usage examples for all modes
- Edge cases table
- DO NOT rules for both modes
### Removed
- `plugins/projman/commands/rfc-create.md` (replaced by `/rfc create`)
- `plugins/projman/commands/rfc-list.md` (replaced by `/rfc list`)
- `plugins/projman/commands/rfc-review.md` (replaced by `/rfc review`)
- `plugins/projman/commands/rfc-approve.md` (replaced by `/rfc approve`)
- `plugins/projman/commands/rfc-reject.md` (replaced by `/rfc reject`)
- `plugins/projman/commands/clear-cache.md` (replaced by `/setup --clear-cache`)
---
## [5.7.1] - 2026-02-02
### Added
- **contract-validator**: New `validate_workflow_integration` MCP tool — validates domain plugins expose required advisory interfaces (gate command, review command, advisory agent)
- **contract-validator**: New `MISSING_INTEGRATION` issue type for workflow integration validation
### Fixed
- `scripts/setup.sh` banner version updated from v5.1.0 to v5.7.1
### Reverted
- **marketplace.json**: Removed `integrates_with` field — Claude Code schema does not support custom plugin fields (causes marketplace load failure)
---
## [5.7.0] - 2026-02-02
### Added
- **data-platform**: New `data-advisor` agent for data integrity, schema, and dbt compliance validation
- **data-platform**: New `data-integrity-audit.md` skill defining audit rules, severity levels, and scanning strategies
- **data-platform**: New `/data-gate` command for binary pass/fail data integrity gates (projman integration)
- **data-platform**: New `/data-review` command for comprehensive data integrity audits
### Changed
- Domain Advisory Pattern now fully operational for both Viz and Data domains
- projman orchestrator `Domain/Data` gates now resolve to live `/data-gate` command (previously fell through to "gate unavailable" warning)
---
## [5.6.0] - 2026-02-01
### Added
- **Domain Advisory Pattern**: Cross-plugin integration enabling projman to consult domain-specific plugins during sprint lifecycle
- **projman**: New `domain-consultation.md` skill for domain detection and gate protocols
- **viz-platform**: New `design-reviewer` agent for design system compliance auditing
- **viz-platform**: New `design-system-audit.md` skill defining audit rules and severity levels
- **viz-platform**: New `/design-review` command for detailed design system audits
- **viz-platform**: New `/design-gate` command for binary pass/fail validation gates
- **Labels**: New `Domain/Viz` and `Domain/Data` labels for domain routing
### Changed
- **projman planner**: Now loads domain-consultation skill and performs domain detection during planning
- **projman orchestrator**: Now runs domain gates before marking Domain/* labeled issues as complete
---
## [5.5.0] - 2026-02-01
### Added
#### RFC System for Feature Tracking
Wiki-based Request for Comments (RFC) system for capturing, reviewing, and tracking feature ideas through their lifecycle.
**New Commands (projman):**
- `/rfc-create` - Create new RFC from conversation or clarified specification
- `/rfc-list` - List all RFCs grouped by status (Draft, Review, Approved, Implementing, Implemented, Rejected, Stale)
- `/rfc-review` - Submit Draft RFC for maintainer review
- `/rfc-approve` - Approve RFC, making it available for sprint planning
- `/rfc-reject` - Reject RFC with documented reason
**RFC Lifecycle:**
- Draft → Review → Approved → Implementing → Implemented
- Terminal states: Rejected, Superseded
- Stale: Drafts with no activity >90 days
**Sprint Integration:**
- `/sprint-plan` now detects approved RFCs and offers selection
- `/sprint-close` updates RFC status to Implemented on completion
- RFC-Index wiki page auto-maintained with status sections
**Clarity-Assist Integration:**
- Vagueness hook now detects feature request patterns
- Suggests `/rfc-create` for feature ideas
- `/clarify` offers RFC creation after delivering clarified spec
**New MCP Tool:**
- `allocate_rfc_number` - Allocates next sequential RFC number
**New Skills:**
- `skills/rfc-workflow.md` - RFC lifecycle and state transitions
- `skills/rfc-templates.md` - RFC page template specifications
### Changed
#### Sprint 8: Hook Efficiency Quick Wins
Performance optimizations for plugin hooks to reduce overhead on every command.
**Changes:**
- **viz-platform:** Remove SessionStart hook that only echoed "loaded" (zero value)
- **git-flow:** Add early exit to `branch-check.sh` for non-git commands (skip JSON parsing)
- **git-flow:** Add early exit to `commit-msg-check.sh` for non-git commands (skip Python spawn)
- **project-hygiene:** Add 60-second cooldown to `cleanup.sh` (reduce find operations)
**Impact:** Hooks now exit immediately for 90%+ of Bash commands that don't need processing.
**Issues:** #321, #322, #323, #324
**PR:** #334
---
## [5.4.1] - 2026-01-30
### Removed
#### Multi-Model Agent Support (REVERTED)
**Reason:** Claude Code does not support `defaultModel` in plugin.json or `model` in agent frontmatter. The schema validation rejects these as "Unrecognized key".
**Removed:**
- `defaultModel` field from all plugin.json files (6 plugins)
- `model` field references from agent frontmatter
- `docs/MODEL-RECOMMENDATIONS.md` - Deleted entirely
- Model configuration sections from `docs/CONFIGURATION.md` and `CLAUDE.md`
**Lesson:** Do not implement features without verifying they are supported by Claude Code's plugin schema.
---
## [5.4.0] - 2026-01-28 [REVERTED]
### Added (NOW REMOVED - See 5.4.1)
#### Sprint 7: Multi-Model Agent Support
~~Configurable model selection for agents with inheritance chain.~~
**This feature was reverted in 5.4.1 - Claude Code does not support these fields.**
Original sprint work:
- Issues: #302, #303, #304, #305, #306
- PRs: #307, #308
---
## [5.3.0] - 2026-01-28
### Added
#### Sprint 6: Visual Branding Overhaul
Consistent visual headers and progress tracking across all plugins.
**Visual Output Headers (109 files):**
- **Projman**: Double-line headers (╔═╗) with phase indicators (🎯 PLANNING, ⚡ EXECUTION, 🏁 CLOSING)
- **Other Plugins**: Single-line headers (┌─┐) with plugin icons
- **All 23 agents** updated with Visual Output Requirements section
- **All 86 commands** updated with Visual Output section and header templates
**Plugin Icon Registry:**
| Plugin | Icon |
|--------|------|
| projman | 📋 |
| code-sentinel | 🔒 |
| doc-guardian | 📝 |
| pr-review | 🔍 |
| clarity-assist | 💬 |
| git-flow | 🔀 |
| cmdb-assistant | 🖥️ |
| data-platform | 📊 |
| viz-platform | 🎨 |
| contract-validator | ✅ |
| claude-config-maintainer | ⚙️ |
**Wiki Branding Specification (4 pages):**
- [branding/visual-spec](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/branding%2Fvisual-spec) - Central specification
- [branding/plugin-registry](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/branding%2Fplugin-registry) - Icons and styles
- [branding/header-templates](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/branding%2Fheader-templates) - Copy-paste templates
- [branding/progress-templates](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/branding%2Fprogress-templates) - Sprint progress blocks
### Fixed
- **Docs:** Version sync - CLAUDE.md, marketplace.json, README.md now consistent
- **Docs:** Added 18 missing commands from Sprint 4 & 5 to README.md and COMMANDS-CHEATSHEET.md
- **MCP:** Registered `/sprint-diagram` as invokable skill
**Sprint Completed:**
- Milestone: Sprint 6 - Visual Branding Overhaul (closed 2026-01-28)
- Issues: #272, #273, #274, #275, #276, #277, #278
- PRs: #284, #285
- Wiki: [Sprint 6 Lessons](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/lessons/sprints/sprint-6---visual-branding-and-documentation-maintenance)
---
## [5.2.0] - 2026-01-28
### Added
#### Sprint 5: Documentation (V5.2.0 Plugin Enhancements)
Documentation and guides for the plugin enhancements initiative.
**git-flow v1.2.0:**
- **Branching Strategy Guide** (`docs/BRANCHING-STRATEGY.md`) - Complete documentation of `development -> staging -> main` promotion flow with Mermaid diagrams
**clarity-assist v1.2.0:**
- **ND Support Guide** (`docs/ND-SUPPORT.md`) - Documentation of neurodivergent accommodations, features, and usage examples
**Gitea MCP Server:**
- **`update_issue` milestone parameter** - Can now assign/change milestones programmatically
**Sprint Completed:**
- Milestone: Sprint 5 - Documentation (closed 2026-01-28)
- Issues: #266, #267, #268, #269
- Wiki: [Change V5.2.0: Plugin Enhancements (Sprint 5 Documentation)](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/Change-V5.2.0%3A-Plugin-Enhancements-%28Sprint-5-Documentation%29)
---
#### Sprint 4: Commands (V5.2.0 Plugin Enhancements)
Implementation of 18 new user-facing commands across 8 plugins.
**projman v3.3.0:**
- **`/sprint-diagram`** - Generate Mermaid diagram of sprint issues with dependencies and status
**pr-review v1.1.0:**
- **`/pr-diff`** - Formatted diff with inline review comments and annotations
- **Confidence threshold config** - `PR_REVIEW_CONFIDENCE_THRESHOLD` env var (default: 0.7)
**data-platform v1.2.0:**
- **`/data-quality`** - DataFrame quality checks (nulls, duplicates, types, outliers) with pass/warn/fail scoring
- **`/lineage-viz`** - dbt lineage visualization as Mermaid diagrams
- **`/dbt-test`** - Formatted dbt test runner with summary and failure details
**viz-platform v1.1.0:**
- **`/chart-export`** - Export charts to PNG, SVG, PDF via kaleido
- **`/accessibility-check`** - Color blind validation (WCAG contrast ratios)
- **`/breakpoints`** - Responsive layout breakpoint configuration
- **New MCP tools**: `chart_export`, `accessibility_validate_colors`, `accessibility_validate_theme`, `accessibility_suggest_alternative`, `layout_set_breakpoints`
- **New dependency**: kaleido>=0.2.1 for chart rendering
**contract-validator v1.2.0:**
- **`/dependency-graph`** - Mermaid visualization of plugin dependencies with data flow
**doc-guardian v1.1.0:**
- **`/changelog-gen`** - Generate changelog from conventional commits
- **`/doc-coverage`** - Documentation coverage metrics by function/class
- **`/stale-docs`** - Flag documentation behind code changes
**claude-config-maintainer v1.1.0:**
- **`/config-diff`** - Track CLAUDE.md changes over time with behavioral impact analysis
- **`/config-lint`** - 31 lint rules for CLAUDE.md (security, structure, content, format, best practices)
**cmdb-assistant v1.2.0:**
- **`/cmdb-topology`** - Infrastructure topology diagrams (rack, network, site views)
- **`/change-audit`** - NetBox audit trail queries with filtering
- **`/ip-conflicts`** - Detect IP conflicts and overlapping prefixes
**Sprint Completed:**
- Milestone: Sprint 4 - Commands (closed 2026-01-28)
- Issues: #241-#258 (18/18 closed)
- Wiki: [Change V5.2.0: Plugin Enhancements (Sprint 4 Commands)](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/Change-V5.2.0%3A-Plugin-Enhancements-%28Sprint-4-Commands%29)
- Lessons: [Sprint 4 - Plugin Commands Implementation](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/lessons/sprints/sprint-4---plugin-commands-implementation)
### Fixed
- **MCP:** Project directory detection - all run.sh scripts now capture `CLAUDE_PROJECT_DIR` from PWD before changing directories
- **Docs:** Added Gitea auto-close behavior and MCP session restart notes to DEBUGGING-CHECKLIST.md
---
#### Sprint 3: Hooks (V5.2.0 Plugin Enhancements)
Implementation of 6 foundational hooks across 4 plugins.
**git-flow v1.1.0:**
- **Commit message enforcement hook** - PreToolUse hook validates conventional commit format on all `git commit` commands (not just `/commit`). Blocks invalid commits with format guidance.
- **Branch name validation hook** - PreToolUse hook validates branch naming on `git checkout -b` and `git switch -c`. Enforces `type/description` format, lowercase, max 50 chars.
**clarity-assist v1.1.0:**
- **Vagueness detection hook** - UserPromptSubmit hook detects vague prompts and suggests `/clarify` when ambiguity, missing context, or unclear scope detected.
**data-platform v1.1.0:**
- **Schema diff detection hook** - PostToolUse hook monitors edits to schema files (dbt models, SQL migrations). Warns on breaking changes (column removal, type narrowing, constraint addition).
**contract-validator v1.1.0:**
- **SessionStart auto-validate hook** - Smart validation that only runs when plugin files changed since last check. Detects interface compatibility issues at session start.
- **Breaking change detection hook** - PostToolUse hook monitors plugin interface files (README.md, plugin.json). Warns when changes would break consumers.
**Sprint Completed:**
- Milestone: Sprint 3 - Hooks (closed 2026-01-28)
- Issues: #225, #226, #227, #228, #229, #230
- Wiki: [Change V5.2.0: Plugin Enhancements Proposal](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/Change-V5.2.0:-Plugin-Enhancements-Proposal)
- Lessons: Background agent permissions, agent runaway detection, MCP branch detection bug
### Known Issues
- **MCP Bug #231:** Branch detection in Gitea MCP runs from installed plugin directory, not user's project directory. Workaround: close issues via Gitea web UI.
---
#### Gitea MCP Server - create_pull_request Tool
- **`create_pull_request`**: Create new pull requests via MCP
- Parameters: title, body, head (source branch), base (target branch), labels
- Branch-aware security: only allowed on development/feature branches
- Completes the PR lifecycle (was previously missing - only had list/get/review/comment)
#### cmdb-assistant v1.1.0 - Data Quality Validation
- **SessionStart Hook**: Tests NetBox API connectivity at session start
- Warns if VMs exist without site assignment
- Warns if devices exist without platform
- Non-blocking: displays warning, doesn't prevent work
- **PreToolUse Hook**: Validates input parameters before VM/device operations
- Warns about missing site, tenant, platform
- Non-blocking: suggests best practices without blocking
- **`/cmdb-audit` Command**: Comprehensive data quality analysis
- Scopes: all, vms, devices, naming, roles
- Identifies Critical/High/Medium/Low issues
- Provides prioritized remediation recommendations
- **`/cmdb-register` Command**: Register current machine into NetBox
- Discovers system info: hostname, platform, hardware, network interfaces
- Discovers running apps: Docker containers, systemd services
- Creates device with interfaces, IPs, and sets primary IP
- Creates cluster and VMs for Docker containers
- **`/cmdb-sync` Command**: Sync machine state with NetBox
- Compares current state with NetBox record
- Shows diff of changes (interfaces, IPs, containers)
- Updates with user confirmation
- Supports --full and --dry-run flags
- **NetBox Best Practices Skill**: Reference documentation
- Dependency order for object creation
- Naming conventions (`{role}-{site}-{number}`, `{env}-{app}-{number}`)
- Role consolidation guidance
- Site/tenant/platform assignment requirements
- **Agent Enhancement**: Updated cmdb-assistant agent with validation requirements
- Proactive suggestions for missing fields
- Naming convention checks
- Dependency order enforcement
- Duplicate prevention
---
## [5.0.0] - 2026-01-26
### Added
#### Sprint 1: viz-platform Plugin ✅ Completed
- **viz-platform** v1.0.0 - Visualization tools with Dash Mantine Components validation and theming
- **DMC Tools** (3 tools): `list_components`, `get_component_props`, `validate_component`
- Version-locked component registry prevents Claude from hallucinating invalid props
- Static JSON registry approach for fast, deterministic validation
- **Chart Tools** (2 tools): `chart_create`, `chart_configure_interaction`
- Plotly-based visualization with theme token support
- **Layout Tools** (5 tools): `layout_create`, `layout_add_filter`, `layout_set_grid`, `layout_get`, `layout_add_section`
- Dashboard composition with responsive grid system
- **Theme Tools** (6 tools): `theme_create`, `theme_extend`, `theme_validate`, `theme_export_css`, `theme_list`, `theme_activate`
- Design token-based theming system
- Dual storage: user-level (`~/.config/claude/themes/`) and project-level
- **Page Tools** (5 tools): `page_create`, `page_add_navbar`, `page_set_auth`, `page_list`, `page_get_app_config`
- Multi-page Dash app structure generation
- **Commands**: `/chart`, `/dashboard`, `/theme`, `/theme-new`, `/theme-css`, `/component`, `/initial-setup`
- **Agents**: `theme-setup`, `layout-builder`, `component-check`
- **SessionStart Hook**: DMC version check (non-blocking)
- **Tests**: 94 tests passing
- config.py: 82% coverage
- component_registry.py: 92% coverage
- dmc_tools.py: 88% coverage
- chart_tools.py: 68% coverage
- theme_tools.py: 99% coverage
**Sprint Completed:**
- Milestone: Sprint 1 - viz-platform Plugin (closed 2026-01-26)
- Issues: #170-#182 (13/13 closed)
- Wiki: [Sprint-1-viz-platform-Implementation-Plan](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/Sprint-1-viz-platform-Implementation-Plan)
- Lessons: [sprint-1---viz-platform-plugin-implementation](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/lessons/sprints/sprint-1---viz-platform-plugin-implementation)
- Reference: `docs/changes/CHANGE_V04_0_0_PROPOSAL_ORIGINAL.md` (Phases 4-5)
---
## [4.1.0] - 2026-01-26
### Added
- **projman:** Wiki-based planning workflow enhancement (V04.1.0)
- Flexible input source detection in `/sprint-plan` (file, wiki, or conversation)
- Wiki proposal and implementation page creation during sprint planning
- Wiki reference linking in created issues
- Wiki status updates in `/sprint-close` (Implemented/Partial/Failed)
- Metadata section in lessons learned with implementation link for traceability
- New `/proposal-status` command for viewing proposal/implementation tree
- **projman:** `/suggest-version` command - Analyzes CHANGELOG and recommends semantic version bump
- **projman:** SessionStart hook now suggests sprint planning when open issues exist without milestone
- **projman:** SessionStart hook now warns about unreleased CHANGELOG entries
### Changed
- **doc-guardian:** Hook now tracks documentation dependencies and queues specific files needing updates
- Outputs which specific docs need updating (e.g., "commands changed → update needed: docs/COMMANDS-CHEATSHEET.md README.md")
- Maintains queue file (`.doc-guardian-queue`) for batch processing
- **docs:** COMMANDS-CHEATSHEET.md updated with data-platform plugin (7 commands + hook)
### Fixed
- Documentation drift: COMMANDS-CHEATSHEET.md was missing data-platform plugin added in v4.0.0
- Proactive sprint planning: projman now suggests `/sprint-plan` at session start when unplanned issues exist
### Known Issues
- **MCP Bug #160:** `update_wiki_page` tool renames pages to "unnamed" when page_name contains URL-encoded characters (`:` → `%3A`). Workaround: use `create_wiki_page` to overwrite instead.
---
## [4.0.0] - 2026-01-25
### Added
#### New Plugin: data-platform v1.0.0
- **pandas MCP Tools** (14 tools): DataFrame operations with Arrow IPC data_ref persistence
- `read_csv`, `read_parquet`, `read_json` - Load data with chunking support
- `to_csv`, `to_parquet` - Export to various formats
- `describe`, `head`, `tail` - Data exploration
- `filter`, `select`, `groupby`, `join` - Data transformation
- `list_data`, `drop_data` - Memory management
- **PostgreSQL MCP Tools** (10 tools): Database operations with asyncpg connection pooling
- `pg_connect`, `pg_query`, `pg_execute` - Core database operations
- `pg_tables`, `pg_columns`, `pg_schemas` - Schema exploration
- `st_tables`, `st_geometry_type`, `st_srid`, `st_extent` - PostGIS spatial support
- **dbt MCP Tools** (8 tools): Build tool wrapper with pre-execution validation
- `dbt_parse` - Pre-flight validation (catches dbt 1.9+ deprecations)
- `dbt_run`, `dbt_test`, `dbt_build` - Execution with auto-validation
- `dbt_compile`, `dbt_ls`, `dbt_docs_generate`, `dbt_lineage` - Analysis tools
- **Commands**: `/ingest`, `/profile`, `/schema`, `/explain`, `/lineage`, `/run`
- **Agents**: `data-ingestion` (loading/transformation), `data-analysis` (exploration/profiling)
- **SessionStart Hook**: Graceful PostgreSQL connection check (non-blocking warning)
- **Key Features**:
- data_ref system for DataFrame persistence across tool calls
- 100k row limit with chunking support for large datasets
- Hybrid configuration (system: `~/.config/claude/postgres.env`, project: `.env`)
- Auto-detection of dbt projects
- Arrow IPC format for efficient memory management
---
## [3.2.0] - 2026-01-24
### Added
- **git-flow:** `/commit` now detects protected branches before committing
- Warns when on protected branch (main, master, development, staging, production)
- Offers to create feature branch automatically instead of committing directly
- Configurable via `GIT_PROTECTED_BRANCHES` environment variable
- **netbox:** Platform and primary_ip parameters added to device update tools
- **claude-config-maintainer:** Auto-enforce mandatory behavior rules via SessionStart hook
- **scripts:** `release.sh` - Versioning workflow script for consistent releases
- **scripts:** `verify-hooks.sh` - Verify all hooks are command type
### Changed
- **doc-guardian:** Hook switched from `prompt` type to `command` type
- Prompt hooks unreliable - Claude ignores explicit instructions
- New `notify.sh` bash script guarantees exact output behavior
- Only notifies for config file changes (commands/, agents/, skills/, hooks/)
- Silent exit for all other files - no blocking possible
- **All hooks:** Converted to command type with stricter plugin prefix enforcement
- All hooks now mandate `[plugin-name]` prefix with "NO EXCEPTIONS" rule
- Simplified output formats with word limits
- Consistent structure across projman, pr-review, code-sentinel, doc-guardian
- **CLAUDE.md:** Replaced destructive "ALWAYS CLEAR CACHE" rule with "VERIFY AND RESTART"
- Cache clearing mid-session breaks MCP tools
- Added guidance for proper plugin development workflow
### Fixed
- **cmdb-assistant:** Complete MCP tool schemas for update operations (#138)
- **netbox:** Shorten tool names to meet 64-char API limit (#134)
- **cmdb-assistant:** Correct NetBox API URL format in setup wizard (#132)
- **gitea/projman:** Type safety for `create_label_smart`, curl-based debug-report (#124)
- **netbox:** Add diagnostic logging for JSON parse errors (#121)
- **labels:** Add duplicate check before creating labels (#116)
- **hooks:** Convert ALL hooks to command type with proper prefixes (#114)
- Protected branch workflow: Claude no longer commits directly to protected branches (fixes #109)
- doc-guardian hook no longer blocks workflow (fixes #110)
---
## [3.1.1] - 2026-01-22
### Added
- **git-flow:** `/commit-sync` now prunes stale remote-tracking branches with `git fetch --prune`
- **git-flow:** `/commit-sync` detects and reports local branches with deleted upstreams
- **git-flow:** `/branch-cleanup` now handles stale branches (upstream gone) separately from merged branches
- **git-flow:** New `GIT_CLEANUP_STALE` environment variable for stale branch cleanup control
### Changed
- **All hooks:** Added `[plugin-name]` prefix to all hook messages for better identification
- `[projman]`, `[pr-review]`, `[code-sentinel]`, `[doc-guardian]` prefixes
- **doc-guardian:** Hook now notification-only (no file reads or blocking operations)
- Suggests running `/doc-sync` instead of performing inline checks
- Significantly reduces workflow interruption
### Fixed
- doc-guardian hook no longer stalls workflow with deep file analysis
---
## [3.1.0] - 2026-01-21
### Added
#### Debug Workflow Commands (projman)
- **`/debug-report`** - Run diagnostics in test projects, create structured issues in marketplace
- Runs 5 diagnostic MCP tool tests with explicit repo parameter
- Captures full project context (git remote, cwd, branch)
- Generates structured issue with hypothesis and investigation steps
- Creates issue in configured marketplace repository automatically
- **`/debug-review`** - Investigate diagnostic issues with human approval gates
- Lists open diagnostic issues for triage
- Maps errors to relevant code files using error-to-file mapping
- MANDATORY: Reads relevant files before proposing any fix
- Three approval gates: investigation summary, fix approach, PR creation
- Creates feature branch, commits, and PR with proper linking
#### MCP Server Improvements
- Dynamic label format detection in `suggest_labels`
- Supports slash format (`Type/Bug`) and colon-space format (`Type: Bug`)
- Fetches actual labels from repo and matches suggestions to real format
- Handles Effort/Efforts singular/plural normalization
### Changed
- **`/labels-sync`** completely rewritten with explicit execution steps
- Step 1 now explicitly requires running `git remote get-url origin` via Bash
- All MCP tool calls show required `repo` parameter
- Added "DO NOT" section preventing common mistakes
- Removed confusing "Label Reference" section that caused file creation prompts
### Fixed
- MCP tools no longer fail with "Use 'owner/repo' format" error
- Root cause: MCP server is sandboxed and cannot auto-detect project directory
- Solution: Command documentation now instructs Claude to detect repo via Bash first
---
## [3.0.1] - 2026-01-21
### Added
- `/project-init` command for quick project setup when system is already configured
- `/project-sync` command to sync .env with git remote after repository move/rename
- SessionStart hooks for automatic mismatch detection between git remote and .env
- Interactive setup wizard (`/initial-setup`) redesigned to use Claude tools instead of bash script
### Changed
- `GITEA_ORG` moved from system-level to project-level configuration (different projects may belong to different organizations)
- Environment variables renamed to match MCP server expectations:
- `GITEA_URL` → `GITEA_API_URL` (must include `/api/v1`)
- `GITEA_TOKEN` → `GITEA_API_TOKEN`
- `NETBOX_URL` → `NETBOX_API_URL` (must include `/api`)
- `NETBOX_TOKEN` → `NETBOX_API_TOKEN`
- Setup commands now validate repository via Gitea API before saving configuration
- README.md simplified to show only wizard setup path (manual setup moved to CONFIGURATION.md)
### Fixed
- API URL paths in curl commands (removed redundant `/api/v1` since it's now in the URL variable)
- Documentation now correctly references environment variable names
---
## [3.0.0] - 2026-01-20
### Added
#### New Plugins
- **clarity-assist** v1.0.0 - Prompt optimization with ND accommodations
- `/clarify` command for full 4-D methodology optimization
- `/quick-clarify` command for rapid single-pass clarification
- clarity-coach agent with ND-friendly questioning patterns
- prompt-patterns skill with optimization rules
- **git-flow** v1.0.0 - Git workflow automation
- `/commit` command with smart conventional commit messages
- `/commit-push`, `/commit-merge`, `/commit-sync` workflow commands
- `/branch-start`, `/branch-cleanup` branch management commands
- `/git-status` enhanced status with recommendations
- `/git-config` interactive configuration
- git-assistant agent for complex operations
- workflow-patterns skill with branching strategies
- **pr-review** v1.0.0 - Multi-agent pull request review
- `/pr-review` command for comprehensive multi-agent review
- `/pr-summary` command for quick PR overview
- `/pr-findings` command for filtering review findings
- coordinator agent for orchestrating reviews
- security-reviewer, performance-analyst, maintainability-auditor, test-validator agents
- review-patterns skill with confidence scoring rules
#### Gitea MCP Server Enhancements
- 6 new Pull Request tools:
- `list_pull_requests` - List PRs with filters
- `get_pull_request` - Get PR details
- `get_pr_diff` - Get PR diff
- `get_pr_comments` - Get PR comments
- `create_pr_review` - Create review (approve, request changes, comment)
- `add_pr_comment` - Add comment to PR
#### Documentation
- `docs/CONFIGURATION.md` - Centralized configuration guide for all plugins
### Changed
- **BREAKING:** Marketplace renamed from `claude-code-marketplace` to `leo-claude-mktplace`
- **BREAKING:** MCP servers moved from plugin directories to shared `mcp-servers/` at repository root
- All plugins now have `category`, `tags`, and `license` fields in marketplace.json
- Plugin MCP dependencies now use symlinks to shared servers
- projman version bumped to 3.0.0 (includes PR tools integration)
- projman CONFIGURATION.md slimmed down, links to central docs
### Removed
- Standalone MCP server directories inside plugins (replaced with symlinks)
---
## [2.3.0] - 2026-01-20
### Added
#### New Plugins
- **doc-guardian** v1.0.0 - Documentation lifecycle management
- `/doc-audit` command for full project documentation drift analysis
- `/doc-sync` command to batch apply pending documentation updates
- PostToolUse hook for automatic drift detection
- Stop hook reminder for pending updates
- doc-analyzer agent for cross-reference analysis
- doc-patterns skill for documentation structure knowledge
- **code-sentinel** v1.0.0 - Security scanning and refactoring
- `/security-scan` command for comprehensive security audit
- `/refactor` command to apply refactoring patterns
- `/refactor-dry` command to preview refactoring opportunities
- PreToolUse hook for real-time security scanning
- security-reviewer agent for vulnerability analysis
- refactor-advisor agent for code structure improvements
- security-patterns skill for vulnerability detection rules
#### projman Enhancements
- `/test-gen` command - Generate unit, integration, and e2e tests for specified code
### Changed
- Marketplace version bumped to 2.3.0
- projman version bumped to 2.3.0
## [2.2.0] - 2026-01-20
### Added
- `/review` command for pre-sprint-close code quality checks (projman)
- `/test-check` command for test verification before sprint close (projman)
- `code-reviewer` agent for structured code review workflow (projman)
- Validation script (`scripts/validate-marketplace.sh`) for marketplace compliance
- `homepage` and `repository` fields to all plugin entries in marketplace.json
- `metadata` wrapper for description/version in marketplace.json
- Keywords to all plugin manifests for better discoverability
- `commands` and `agents` directory references to plugin manifests
- Versioning rule: version displayed only in main README.md title
### Changed
- Updated marketplace.json with required fields per Claude Code spec
- Fixed installation documentation to use official Claude Code methods
- Prioritized public HTTPS URL over Tailscale SSH URL in documentation
- Updated all plugin manifests with author, homepage, repository, license fields
- Consolidated version display to main README.md title only
- Removed version numbers from plugin documentation titles
### Fixed
- Plugin manifests now include all required fields per Claude Code spec
- Installation section uses `extraKnownMarketplaces` instead of undocumented `pluginMarketplace`
### Removed
- `docs/references/` directory (obsolete planning documents)
- Version numbers from individual plugin README titles
- Version section from plugins/projman/README.md
## [2.1.0] - 2026-01-15
### Added
- `docs/CANONICAL-PATHS.md` - Single source of truth for all file paths
- Path verification rules in CLAUDE.md (mandatory pre-flight check)
@@ -15,16 +829,12 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
- Update documentation (`docs/UPDATING.md`)
- `/initial-setup` slash command
- File creation governance rules in CLAUDE.md
- Architecture diagram specifications in `docs/architecture/`
- `.scratch/` directory for transient work
- `scripts/` directory for setup automation
- `docs/architecture/` for Draw.io diagrams
- `docs/workflows/` for workflow documentation
### Changed
- Replaced `docs/CORRECT-ARCHITECTURE.md` reference with `docs/CANONICAL-PATHS.md`
- Added mandatory path verification section to CLAUDE.md
- Reorganized documentation into `docs/references/`, `docs/architecture/`, `docs/workflows/`
- Updated CLAUDE.md with file creation governance
### Fixed
@@ -32,21 +842,44 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
### Removed
- Organization/workspace GID variable (no longer needed)
- Deprecated `cmdb-assistant/` plugin
- Development output files (test scripts, status reports)
- IDE-specific workspace files
- Stray files from project root
## [0.1.0] - Initial Release
## [2.0.0] - 2026-01-06
### Added
- projman plugin for sprint management
- Full Gitea integration with wiki, milestones, dependencies
- Parallel execution batching via dependency graph
- Wiki tools for lessons learned (`create_lesson`, `search_lessons`)
- Milestone tools (`list_milestones`, `create_milestone`, `update_milestone`)
- Dependency tools (`list_issue_dependencies`, `create_issue_dependency`, `get_execution_order`)
- Validation tools (`validate_repo_org`, `get_branch_protection`)
- MCP servers bundled inside plugins (not shared at root)
### Changed
- MCP server architecture: bundled in plugins instead of shared at root
- Configuration uses `${CLAUDE_PLUGIN_ROOT}/mcp-servers/` paths
## [1.0.0] - 2025-12-15
### Added
- projman plugin with basic sprint commands
- `/sprint-plan`, `/sprint-start`, `/sprint-status`, `/sprint-close` commands
- `/labels-sync` command for label taxonomy synchronization
- Three-agent model (planner, orchestrator, executor)
- Gitea MCP server with issue and label tools
- 43-label taxonomy system
- Hybrid configuration system (system + project level)
- Branch-aware security model
## [0.1.0] - 2025-12-01
### Added
- Initial repository structure
- projman plugin structure (planned)
- projman-pmo plugin structure (planned)
- project-hygiene plugin for cleanup automation
- Gitea MCP server
- Wiki.js MCP server
- 43-label taxonomy system
- Lessons learned capture system
- Hybrid configuration system (system + project level)
- Three-agent model (planner, orchestrator, executor)
- Branch-aware security model
- claude-config-maintainer plugin structure
- cmdb-assistant plugin structure
- Basic marketplace manifest

CLAUDE.md (838 lines changed)

@@ -1,478 +1,490 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## ⛔ MANDATORY BEHAVIOR RULES - READ FIRST
**These rules are NON-NEGOTIABLE. Violating them wastes the user's time and money.**
### 1. WHEN USER ASKS YOU TO CHECK SOMETHING - CHECK EVERYTHING
- Search ALL locations, not just where you think it is
- Check cache directories: `~/.claude/plugins/cache/`
- Check installed: `~/.claude/plugins/marketplaces/`
- Check source directories
- **NEVER say "no" or "that's not the issue" without exhaustive verification**
### 2. WHEN USER SAYS SOMETHING IS WRONG - BELIEVE THEM
- The user knows their system better than you
- Investigate thoroughly before disagreeing
- **Your confidence is often wrong. User's instincts are often right.**
### 3. NEVER SAY "DONE" WITHOUT VERIFICATION
- Run the actual command/script to verify
- Show the output to the user
- **"Done" means VERIFIED WORKING, not "I made changes"**
### 4. SHOW EXACTLY WHAT USER ASKS FOR
- If user asks for messages, show the MESSAGES
- If user asks for code, show the CODE
- **Do not interpret or summarize unless asked**
**FAILURE TO FOLLOW THESE RULES = WASTED USER TIME = UNACCEPTABLE**
---
This file provides guidance to Claude Code when working with code in this repository.
## ⛔ RULES - READ FIRST
### Behavioral Rules
| Rule | Summary |
|------|---------|
| **Check everything** | Search cache (`~/.claude/plugins/cache/`), installed (`~/.claude/plugins/marketplaces/`), and source (`~/claude-plugins-work/`) |
| **Believe the user** | User knows their system. Investigate before disagreeing. |
| **Verify before "done"** | Run commands, show output, check all locations. "Done" = verified working. |
| **Show what's asked** | Don't interpret or summarize unless asked. |
### After Plugin Updates
Run `./scripts/verify-hooks.sh`. If changes affect MCP servers or hooks, inform user to restart session.
**DO NOT clear cache mid-session** - breaks loaded MCP tools.
### NEVER USE CLI TOOLS FOR EXTERNAL SERVICES
- **FORBIDDEN:** `gh`, `tea`, `curl` to APIs, any CLI that talks to Gitea/GitHub/external services
- **REQUIRED:** Use MCP tools exclusively (`mcp__plugin_projman_gitea__*`, `mcp__plugin_pr-review_gitea__*`)
- **NO EXCEPTIONS.** Don't try CLI first. Don't fall back to CLI. MCP ONLY.
### NEVER PUSH DIRECTLY TO PROTECTED BRANCHES
- **FORBIDDEN:** `git push origin development`, `git push origin main`, `git push origin master`
- **REQUIRED:** Create feature branch → push feature branch → create PR via MCP
- If you accidentally commit to a protected branch locally: `git checkout -b fix/branch-name` then reset the protected branch
### Repository Rules
| Rule | Details |
|------|---------|
| **File creation** | Only in allowed paths. Use `.scratch/` for temp work. Verify against `docs/CANONICAL-PATHS.md` |
| **plugin.json location** | Must be in `.claude-plugin/` directory |
| **Hooks** | Use `hooks/hooks.json` (auto-discovered). Never inline in plugin.json |
| **MCP servers** | Defined in root `.mcp.json`. Use MCP tools, never CLI (`tea`, `gh`) |
| **Allowed root files** | `CLAUDE.md`, `README.md`, `LICENSE`, `CHANGELOG.md`, `.gitignore`, `.env.example` |
**Valid hook events:** `PreToolUse`, `PostToolUse`, `UserPromptSubmit`, `SessionStart`, `SessionEnd`, `Notification`, `Stop`, `SubagentStop`, `PreCompact`
### ⛔ MANDATORY: Before Any Code Change
**Claude MUST show this checklist BEFORE editing any file:**
#### 1. Impact Search Results
Run and show output of:
```bash
grep -rn "PATTERN" --include="*.sh" --include="*.md" --include="*.json" --include="*.py" | grep -v ".git"
```
#### 2. Files That Will Be Affected
Numbered list of every file to be modified, with the specific change for each.
#### 3. Files Searched But Not Changed (and why)
Proof that related files were checked and determined unchanged.
#### 4. Documentation That References This
List of docs that mention this feature/script/function.
**User verifies this list before Claude proceeds. If Claude skips this, STOP IMMEDIATELY.**
#### After Changes
Run the same grep and show results proving no references remain unaddressed.
---
## ⚠️ Development Context: We Build AND Use These Plugins
**This is a self-referential project.** We are:
1. **BUILDING** a plugin marketplace (source code in `plugins/`)
2. **USING** the installed marketplace to build it (dogfooding)
### Plugins ACTIVELY USED in This Project
These plugins are installed and should be used during development:
| Plugin | Used For |
|--------|----------|
| **projman** | Sprint planning, issue management, lessons learned |
| **git-flow** | Commits, branch management |
| **pr-review** | Pull request reviews |
| **doc-guardian** | Documentation drift detection |
| **code-sentinel** | Security scanning, refactoring |
| **clarity-assist** | Prompt clarification |
| **claude-config-maintainer** | CLAUDE.md optimization |
| **contract-validator** | Cross-plugin compatibility |
### Plugins NOT Used Here (Development Only)
These plugins exist in source but are **NOT relevant** to this project's workflow:
| Plugin | Why Not Used |
|--------|--------------|
| **data-platform** | For data engineering projects (pandas, PostgreSQL, dbt) |
| **viz-platform** | For dashboard projects (Dash, Plotly) |
| **cmdb-assistant** | For infrastructure projects (NetBox) |
**Do NOT suggest** `/ingest`, `/profile`, `/chart`, `/cmdb-*` commands - they don't apply here.
### Key Distinction
| Context | Path | What To Do |
|---------|------|------------|
| **Editing plugin source** | `~/claude-plugins-work/plugins/` | Modify code, add features |
| **Using installed plugins** | `~/.claude/plugins/marketplaces/` | Run commands like `/sprint-plan` |
When user says "run /sprint-plan", use the INSTALLED plugin.
When user says "fix the sprint-plan command", edit the SOURCE code.
---
## Project Overview
This repository contains Claude Code plugins for project management:
**Repository:** leo-claude-mktplace
**Version:** 5.9.0
**Status:** Production Ready
1. **`projman`** - Single-repository project management plugin with Gitea integration
2. **`projman-pmo`** - Multi-project PMO coordination plugin
3. **`claude-config-maintainer`** - CLAUDE.md optimization and maintenance plugin
4. **`cmdb-assistant`** - NetBox CMDB integration for infrastructure management
A plugin marketplace for Claude Code containing:
These plugins transform a proven 15-sprint workflow into reusable, distributable tools for managing software development with Claude Code, Gitea, and agile methodologies.
| Plugin | Description | Version |
|--------|-------------|---------|
| `projman` | Sprint planning and project management with Gitea integration | 3.3.0 |
| `git-flow` | Git workflow automation with smart commits and branch management | 1.0.0 |
| `pr-review` | Multi-agent PR review with confidence scoring | 1.1.0 |
| `clarity-assist` | Prompt optimization with ND-friendly accommodations | 1.0.0 |
| `doc-guardian` | Automatic documentation drift detection and synchronization | 1.0.0 |
| `code-sentinel` | Security scanning and code refactoring tools | 1.0.1 |
| `claude-config-maintainer` | CLAUDE.md optimization and maintenance | 1.0.0 |
| `cmdb-assistant` | NetBox CMDB integration for infrastructure management | 1.2.0 |
| `data-platform` | pandas, PostgreSQL, and dbt integration for data engineering | 1.3.0 |
| `viz-platform` | DMC validation, Plotly charts, and theming for dashboards | 1.1.0 |
| `contract-validator` | Cross-plugin compatibility validation and agent verification | 1.1.0 |
| `project-hygiene` | Post-task cleanup automation via hooks | 0.1.0 |
**Status:** projman v1.0.0 complete with full Gitea integration
## Quick Start
## File Creation Governance
```bash
# Validate marketplace compliance
./scripts/validate-marketplace.sh
### Allowed Root Files
# After updates
./scripts/post-update.sh # Rebuild venvs
```
Only these files may exist at the repository root:
### Plugin Commands - USE THESE in This Project
- `CLAUDE.md` - This file
- `README.md` - Repository overview
- `LICENSE` - License file
- `CHANGELOG.md` - Version history
- `.gitignore` - Git ignore rules
- `.env.example` - Environment template (if needed)
| Category | Commands |
|----------|----------|
| **Setup** | `/setup` (modes: `--full`, `--quick`, `--sync`) |
| **Sprint** | `/sprint-plan`, `/sprint-start`, `/sprint-status` (with `--diagram`), `/sprint-close` |
| **Quality** | `/review`, `/test` (modes: `run`, `gen`) |
| **Versioning** | `/suggest-version` |
| **PR Review** | `/pr-review`, `/pr-summary`, `/pr-findings`, `/pr-diff` |
| **Docs** | `/doc-audit`, `/doc-sync`, `/changelog-gen`, `/doc-coverage`, `/stale-docs` |
| **Security** | `/security-scan`, `/refactor`, `/refactor-dry` |
| **Config** | `/config-analyze`, `/config-optimize`, `/config-diff`, `/config-lint` |
| **Validation** | `/validate-contracts`, `/check-agent`, `/list-interfaces`, `/dependency-graph` |
| **Debug** | `/debug` (modes: `report`, `review`) |
### Allowed Root Directories
### Plugin Commands - NOT RELEVANT to This Project
Only these directories may exist at the repository root:
These commands are being developed but don't apply to this project's workflow:
| Directory | Purpose |
|-----------|---------|
| `.claude/` | Claude Code local settings |
| `.claude-plugin/` | Marketplace manifest |
| `.claude-plugins/` | Local marketplace definitions |
| `.scratch/` | Transient work (auto-cleaned) |
| `docs/` | Documentation |
| `hooks/` | Shared hooks (if any) |
| `plugins/` | All plugins (projman, projman-pmo, project-hygiene, cmdb-assistant, claude-config-maintainer) |
| `scripts/` | Setup and maintenance scripts |
| Category | Commands | For Projects Using |
|----------|----------|-------------------|
| **Data** | `/ingest`, `/profile`, `/schema`, `/lineage`, `/dbt-test` | pandas, PostgreSQL, dbt |
| **Visualization** | `/component`, `/chart`, `/dashboard`, `/theme` | Dash, Plotly dashboards |
| **CMDB** | `/cmdb-search`, `/cmdb-device`, `/cmdb-sync` | NetBox infrastructure |
### File Creation Rules
## Repository Structure
1. **No new root files** - Do not create files directly in the repository root unless listed above
2. **No new root directories** - Do not create top-level directories without explicit approval
3. **Transient work goes in `.scratch/`** - Any temporary files, test outputs, or exploratory work must be created in `.scratch/`
4. **Clean up after tasks** - Delete files in `.scratch/` when the task is complete
5. **Documentation location** - All documentation goes in `docs/` with appropriate subdirectory:
- `docs/references/` - Reference specifications and summaries
- `docs/architecture/` - Architecture diagrams (Draw.io files)
- `docs/workflows/` - Workflow documentation
6. **No output files** - Do not leave generated output, logs, or test results outside designated directories
```
leo-claude-mktplace/
├── .claude-plugin/
│ └── marketplace.json # Marketplace manifest
├── .mcp.json # MCP server configuration (all servers)
├── mcp-servers/ # SHARED MCP servers
│ ├── gitea/ # Gitea MCP (issues, PRs, wiki)
│ ├── netbox/ # NetBox MCP (CMDB)
│ ├── data-platform/ # pandas, PostgreSQL, dbt
│ ├── viz-platform/ # DMC validation, charts, themes
│ └── contract-validator/ # Plugin compatibility validation
├── plugins/
│ ├── projman/ # Sprint management
│ │ ├── .claude-plugin/plugin.json
│ │ ├── commands/ # 12 commands
│ │ ├── hooks/ # SessionStart: mismatch detection
│ │ ├── agents/ # 4 agents
│ │ └── skills/ # 17 reusable skill files
│ ├── git-flow/ # Git workflow automation
│ │ ├── .claude-plugin/plugin.json
│ │ ├── commands/ # 8 commands
│ │ └── agents/
│ ├── pr-review/ # Multi-agent PR review
│ │ ├── .claude-plugin/plugin.json
│ │ ├── commands/ # 6 commands
│ │ ├── hooks/ # SessionStart mismatch detection
│ │ └── agents/ # 5 agents
│ ├── clarity-assist/ # Prompt optimization
│ │ ├── .claude-plugin/plugin.json
│ │ ├── commands/ # 2 commands
│ │ └── agents/
│ ├── data-platform/ # Data engineering
│ │ ├── .claude-plugin/plugin.json
│ │ ├── commands/ # 7 commands
│ │ ├── hooks/ # SessionStart PostgreSQL check
│ │ └── agents/ # 2 agents
│ ├── viz-platform/ # Visualization
│ │ ├── .claude-plugin/plugin.json
│ │ ├── commands/ # 7 commands
│ │ ├── hooks/ # SessionStart DMC check
│ │ └── agents/ # 3 agents
│ ├── doc-guardian/ # Documentation drift detection
│ ├── code-sentinel/ # Security scanning & refactoring
│ ├── claude-config-maintainer/
│ ├── cmdb-assistant/
│ ├── contract-validator/
│ └── project-hygiene/
├── scripts/
│ ├── setup.sh, post-update.sh
│ ├── validate-marketplace.sh # Marketplace compliance validation
│ ├── verify-hooks.sh # Verify all hooks are command type
│ └── check-venv.sh # Check MCP server venvs exist
└── docs/
├── CANONICAL-PATHS.md # Single source of truth for paths
└── CONFIGURATION.md # Centralized configuration guide
```
### Enforcement
## Architecture
Before creating any file, verify:
### Four-Agent Model (projman)
1. Is this file type allowed in the target location?
2. If temporary, am I using `.scratch/`?
3. If documentation, am I using the correct `docs/` subdirectory?
4. Will this file be cleaned up after the task?
| Agent | Personality | Responsibilities |
|-------|-------------|------------------|
| **Planner** | Thoughtful, methodical | Sprint planning, architecture analysis, issue creation, lesson search |
| **Orchestrator** | Concise, action-oriented | Sprint execution, parallel batching, Git operations, lesson capture |
| **Executor** | Implementation-focused | Code implementation, branch management, MR creation |
| **Code Reviewer** | Thorough, practical | Pre-close quality review, security scan, test verification |
**Violation of these rules creates technical debt and project chaos.**
### Agent Model Selection
## Path Verification (MANDATORY)
Agents specify their model in frontmatter using Claude Code's `model` field. Supported values: `sonnet` (default), `opus`, `haiku`, `inherit`.
### Before Generating Any Prompt or Creating Any File
| Plugin | Agent | Model | Rationale |
|--------|-------|-------|-----------|
| projman | Planner | sonnet | Architectural analysis, sprint planning |
| projman | Orchestrator | sonnet | Coordination and tool dispatch |
| projman | Executor | sonnet | Code generation and implementation |
| projman | Code Reviewer | sonnet | Quality gate, pattern detection |
| pr-review | Coordinator | sonnet | Orchestrates sub-agents, aggregates findings |
| pr-review | Security Reviewer | sonnet | Security analysis |
| pr-review | Performance Analyst | sonnet | Performance pattern detection |
| pr-review | Maintainability Auditor | haiku | Pattern matching (complexity, duplication) |
| pr-review | Test Validator | haiku | Coverage gap detection |
| data-platform | Data Advisor | sonnet | Schema validation, dbt orchestration |
| data-platform | Data Analysis | sonnet | Data exploration and profiling |
| data-platform | Data Ingestion | haiku | Data loading operations |
| viz-platform | Design Reviewer | sonnet | DMC validation + accessibility |
| viz-platform | Layout Builder | sonnet | Dashboard design guidance |
| viz-platform | Component Check | haiku | Quick component validation |
| viz-platform | Theme Setup | haiku | Theme configuration |
| contract-validator | Agent Check | haiku | Reference checking |
| contract-validator | Full Validation | sonnet | Marketplace sweep |
| code-sentinel | Security Reviewer | sonnet | Security analysis |
| code-sentinel | Refactor Advisor | sonnet | Code refactoring advice |
| doc-guardian | Doc Analyzer | sonnet | Documentation drift detection |
| clarity-assist | Clarity Coach | sonnet | Conversational coaching |
| git-flow | Git Assistant | haiku | Git operations |
| claude-config-maintainer | Maintainer | sonnet | CLAUDE.md optimization |
| cmdb-assistant | CMDB Assistant | sonnet | NetBox operations |
**This is non-negotiable. Failure to follow causes structural damage.**
Override by editing the `model:` field in `plugins/{plugin}/agents/{agent}.md`.
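As an illustration, an agent file's frontmatter might look like the sketch below (the `name`/`description` values and the body are placeholders; only the `model` field and its allowed values come from this document):
```markdown
---
name: git-assistant              # placeholder agent name
description: Handles routine git operations for the git-flow plugin
model: haiku                     # sonnet (default), opus, haiku, or inherit
---
# Git Assistant

Agent instructions go here.
```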
1. **READ `docs/CANONICAL-PATHS.md` FIRST**
- This file is the single source of truth
- Never infer paths from memory or context
- Never assume paths based on conversation history
### MCP Server Tools (Gitea)
2. **List All Paths**
- Before generating a prompt, list every file path it will create/modify
- Show the list to the user
| Category | Tools |
|----------|-------|
| Issues | `list_issues`, `get_issue`, `create_issue`, `update_issue`, `add_comment`, `aggregate_issues` |
| Labels | `get_labels`, `suggest_labels`, `create_label`, `create_label_smart` |
| Milestones | `list_milestones`, `get_milestone`, `create_milestone`, `update_milestone`, `delete_milestone` |
| Dependencies | `list_issue_dependencies`, `create_issue_dependency`, `remove_issue_dependency`, `get_execution_order` |
| Wiki | `list_wiki_pages`, `get_wiki_page`, `create_wiki_page`, `update_wiki_page`, `create_lesson`, `search_lessons`, `allocate_rfc_number` |
| **Pull Requests** | `list_pull_requests`, `get_pull_request`, `get_pr_diff`, `get_pr_comments`, `create_pr_review`, `add_pr_comment` |
| Validation | `validate_repo_org`, `get_branch_protection` |
3. **Verify Each Path**
- Check each path against `docs/CANONICAL-PATHS.md`
- If a path is not in that file, STOP and ask
### Hybrid Configuration
4. **Show Verification**
- Present a verification table to user:
```
| Path | Matches CANONICAL-PATHS.md? |
|------|----------------------------|
| plugins/projman/... | ✅ Yes |
```
| Level | Location | Purpose |
|-------|----------|---------|
| System | `~/.config/claude/gitea.env` | Credentials (GITEA_API_URL, GITEA_API_TOKEN) |
| Project | `.env` in project root | Repository specification (GITEA_ORG, GITEA_REPO) |
5. **Get Confirmation**
- User must confirm paths are correct before proceeding
**Note:** `GITEA_ORG` is at project level since different projects may belong to different organizations.
### Relative Path Rules
### Branch-Aware Security
- Plugin to bundled MCP server: `${CLAUDE_PLUGIN_ROOT}/mcp-servers/{server}`
- Marketplace to plugin: `./../../../plugins/{plugin-name}`
- **ALWAYS calculate from CANONICAL-PATHS.md, never from memory**
| Branch Pattern | Mode | Capabilities |
|----------------|------|--------------|
| `development`, `feat/*` | Development | Full access |
| `staging` | Staging | Read-only code, can create issues |
| `main`, `master` | Production | Read-only, emergency only |
### Recovery Protocol
### RFC System
If you suspect paths are wrong:
1. Read `docs/CANONICAL-PATHS.md`
2. Compare actual structure against documented structure
3. Report discrepancies
4. Generate corrective prompt if needed
Wiki-based Request for Comments system for tracking feature ideas from proposal through implementation.
## Core Architecture
**RFC Wiki Naming:**
- RFC pages: `RFC-NNNN: Short Title` (4-digit zero-padded)
- Index page: `RFC-Index` (auto-maintained)
### Three-Agent Model
**Lifecycle:** Draft → Review → Approved → Implementing → Implemented
The plugins implement a three-agent architecture that mirrors the proven workflow:
**Integration with Sprint Planning:**
- `/sprint-plan` detects approved RFCs and offers selection
- `/sprint-close` updates RFC status on completion
**Planner Agent** (`agents/planner.md`)
- Performs architecture analysis and sprint planning
- Creates detailed planning documents
- Makes architectural decisions
- Creates Gitea issues with appropriate labels
- Personality: Asks clarifying questions, thinks through edge cases, never rushes
## Label Taxonomy
**Orchestrator Agent** (`agents/orchestrator.md`)
- Coordinates sprint execution
- Generates lean execution prompts (not full docs)
- Tracks progress and updates documentation
- Handles Git operations (commit, merge, cleanup)
- Manages task dependencies
- Personality: Concise, action-oriented, tracks details meticulously
43 labels total: 27 organization + 16 repository
**Executor Agent** (`agents/executor.md`)
- Implements features according to execution prompts
- Writes clean, tested code
- Follows architectural decisions from planning
- Generates completion reports
- Personality: Implementation-focused, follows specs precisely
**Organization:** Agent/2, Complexity/3, Efforts/5, Priority/4, Risk/3, Source/4, Type/6
**Repository:** Component/9, Tech/7
### MCP Server Integration
**Gitea MCP Server** (Python) - bundled in projman plugin
**Issue Tools:**
- `list_issues` - Query issues with filters
- `get_issue` - Fetch single issue details
- `create_issue` - Create new issue with labels
- `update_issue` - Modify existing issue
- `add_comment` - Add comments to issues
- `get_labels` - Fetch org + repo label taxonomy
- `suggest_labels` - Analyze context and suggest appropriate labels
**Milestone Tools:**
- `list_milestones` - List sprint milestones
- `get_milestone` - Get milestone details
- `create_milestone` - Create sprint milestone
- `update_milestone` - Update/close milestone
**Dependency Tools:**
- `list_issue_dependencies` - Get issue dependencies
- `create_issue_dependency` - Create dependency between issues
- `get_execution_order` - Get parallel execution batches
**Wiki Tools (Gitea Wiki):**
- `list_wiki_pages` - List wiki pages
- `get_wiki_page` - Fetch specific page content
- `create_wiki_page` - Create new wiki page
- `create_lesson` - Create lessons learned document
- `search_lessons` - Search past lessons by tags
**Validation Tools:**
- `validate_repo_org` - Check repo belongs to organization
- `get_branch_protection` - Check branch protection rules
- `create_label` - Create missing required labels
**Key Architecture Points:**
- MCP servers are **bundled inside each plugin** at `plugins/{plugin}/mcp-servers/`
- This ensures plugins work when cached by Claude Code (only plugin directory is cached)
- Configuration uses hybrid approach (system-level + project-level)
- All plugins reference `${CLAUDE_PLUGIN_ROOT}/mcp-servers/` in their `.mcp.json` files
## Branch-Aware Security Model
Plugin behavior adapts to the current Git branch to prevent accidental changes:
**Development Mode** (`development`, `feat/*`)
- Full access to all operations
- Can create Gitea issues
- Can modify all files
**Staging Mode** (`staging`)
- Read-only for application code
- Can modify `.env` files
- Can create issues to document needed fixes
- Warns on attempted code changes
**Production Mode** (`main`)
- Read-only for application code
- Emergency-only `.env` modifications
- Can create incident issues
- Blocks code changes
This behavior is implemented in both CLAUDE.md (file-level) and plugin agents (tool-level).
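As a rough sketch of the tool-level layer (not the plugins' actual code), branch detection can be as simple as:
```bash
#!/usr/bin/env bash
# Map the current git branch to an operating mode (illustrative sketch).
branch=$(git rev-parse --abbrev-ref HEAD)

case "$branch" in
  development|feat/*)
    mode="development"   # full access
    ;;
  staging)
    mode="staging"       # read-only code, issues allowed
    ;;
  main|master)
    mode="production"    # read-only, emergency only
    ;;
  *)
    mode="development"   # assumption: unlisted branches treated as development
    ;;
esac

echo "Branch: $branch -> mode: $mode"
```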
## Label Taxonomy System
The project uses a sophisticated 43-label taxonomy at organization level:
**Organization Labels (27):**
- Agent/2, Complexity/3, Efforts/5, Priority/4, Risk/3, Source/4, Type/6
**Repository Labels (16):**
- Component/9, Tech/7
**Important Labels:**
- `Type/Refactor` - For architectural changes and code restructuring (exclusive Type label)
- Used for service extraction, architecture modifications, technical debt
The label system includes:
- `skills/label-taxonomy/labels-reference.md` - Local reference synced from Gitea
- Label suggestion logic that detects appropriate labels from context
- `/labels-sync` command to review and sync changes from Gitea
Sync with `/labels-sync` command.
## Lessons Learned System
**Critical Feature:** After 15 sprints without lesson capture, repeated mistakes occurred (e.g., Claude Code infinite loops on similar issues 2-3 times).
**Gitea Wiki Structure:**
Lessons learned are stored in the Gitea repository's built-in wiki under `lessons-learned/sprints/`.
Stored in Gitea Wiki under `lessons-learned/sprints/`.
**Workflow:**
- Orchestrator captures lessons at sprint close via Gitea Wiki MCP tools
- Planner searches relevant lessons at sprint start using `search_lessons`
- Tags enable cross-project lesson discovery
- Focus on preventable repetitions, not every detail
- Web interface available through Gitea Wiki
1. Orchestrator captures at sprint close via MCP tools
2. Planner searches at sprint start using `search_lessons`
3. Tags enable cross-project discovery
## Development Workflow
## Common Operations
### Build Order
### Adding a New Plugin
1. **Phase 1-8:** Build `projman` plugin first (single-repo)
2. **Phase 9-11:** Build `pmo` plugin second (multi-project)
3. **Phase 12:** Production deployment
1. Create `plugins/{name}/.claude-plugin/plugin.json`
2. Add entry to `.claude-plugin/marketplace.json` with category, tags, license
3. Create `claude-md-integration.md`
4. If using new MCP server, add to root `mcp-servers/` and update `.mcp.json`
5. Run `./scripts/validate-marketplace.sh`
6. Update `CHANGELOG.md`
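For step 1, a new plugin's manifest might start from a sketch like this (values are placeholders; the required fields and exact shapes are enforced by `./scripts/validate-marketplace.sh`):
```json
{
  "name": "my-plugin",
  "version": "0.1.0",
  "description": "Short description of what the plugin does",
  "author": "Leo Miranda",
  "license": "MIT",
  "keywords": ["placeholder", "example"],
  "commands": "./commands",
  "agents": "./agents"
}
```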
See [docs/reference-material/projman-implementation-plan.md](docs/reference-material/projman-implementation-plan.md) for the complete 12-phase implementation plan.
### Adding a Command to projman
### Repository Structure (DEFINITIVE)
1. Create `plugins/projman/commands/{name}.md`
2. Update marketplace description if significant
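A command file is plain markdown; a minimal sketch (the frontmatter field shown is illustrative, not a guaranteed schema):
```markdown
---
description: One-line summary shown in the command list
---
# /my-command

Explain what Claude should do when the user runs this command,
including which MCP tools to call and what output to produce.
```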
⚠️ **See `docs/CANONICAL-PATHS.md` for the authoritative path reference - THIS IS THE SINGLE SOURCE OF TRUTH**
### Validation
```
bandit/support-claude-mktplace/
├── .claude-plugin/
│ └── marketplace.json
├── plugins/ # ← ALL PLUGINS (with bundled MCP servers)
│ ├── projman/ # ← PROJECT PLUGIN
│ │ ├── .claude-plugin/
│ │ │ └── plugin.json
│ │ ├── .mcp.json # Points to ${CLAUDE_PLUGIN_ROOT}/mcp-servers/
│ │ ├── mcp-servers/ # ← MCP servers BUNDLED IN plugin
│ │ │ └── gitea/ # Gitea + Wiki tools
│ │ │ ├── .venv/
│ │ │ ├── requirements.txt
│ │ │ ├── mcp_server/
│ │ │ └── tests/
│ │ ├── commands/
│ │ │ ├── sprint-plan.md
│ │ │ ├── sprint-start.md
│ │ │ ├── sprint-status.md
│ │ │ ├── sprint-close.md
│ │ │ ├── labels-sync.md
│ │ │ └── initial-setup.md
│ │ ├── agents/
│ │ │ ├── planner.md
│ │ │ ├── orchestrator.md
│ │ │ └── executor.md
│ │ ├── skills/
│ │ │ └── label-taxonomy/
│ │ │ └── labels-reference.md
│ │ ├── README.md
│ │ └── CONFIGURATION.md
│ ├── projman-pmo/ # ← PMO PLUGIN
│ │ ├── .claude-plugin/
│ │ │ └── plugin.json
│ │ ├── .mcp.json
│ │ ├── commands/
│ │ ├── agents/
│ │ │ └── pmo-coordinator.md
│ │ └── README.md
│ ├── cmdb-assistant/ # ← CMDB PLUGIN
│ │ ├── .claude-plugin/
│ │ │ └── plugin.json
│ │ ├── .mcp.json # Points to ${CLAUDE_PLUGIN_ROOT}/mcp-servers/
│ │ ├── mcp-servers/ # ← MCP servers BUNDLED IN plugin
│ │ │ └── netbox/
│ │ │ ├── .venv/
│ │ │ ├── requirements.txt
│ │ │ └── mcp_server/
│ │ ├── commands/
│ │ └── agents/
│ └── project-hygiene/ # ← CLEANUP PLUGIN
│ └── ...
├── scripts/ # Setup and maintenance scripts
│ ├── setup.sh
│ └── post-update.sh
└── docs/
```
```bash
./scripts/validate-marketplace.sh # Validates all manifests
```
### Key Design Decisions
## Path Verification Protocol
**MCP Servers (Bundled in Plugins):**
- **Gitea MCP**: Issues, labels, wiki, milestones, dependencies (bundled in projman)
- **NetBox MCP**: Infrastructure management (bundled in cmdb-assistant)
- Servers are **bundled inside each plugin** that needs them
- This ensures plugins work when cached by Claude Code
**Before creating any file:**
**Python Implementation:**
- Python chosen over Node.js for MCP servers
- Better suited for configuration management and modular code
- Easier to maintain and extend
- Virtual environment (.venv) per MCP server
**Hybrid Configuration:**
- **System-level**: `~/.config/claude/gitea.env` (credentials)
- **Project-level**: `project-root/.env` (repository specification)
- Merge strategy: project overrides system
- Benefits: Single token per service, easy multi-project setup
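A minimal shell sketch of that merge order, assuming both files are plain KEY=value lines (the real loaders live in the MCP servers' Python config code):
```bash
#!/usr/bin/env bash
# Load system-level credentials first, then let project-level values override them.
set -a                                        # export everything that gets sourced
[ -f ~/.config/claude/gitea.env ] && . ~/.config/claude/gitea.env
[ -f ./.env ] && . ./.env                     # project root .env wins on duplicate keys
set +a

echo "Using org: ${GITEA_ORG:-<unset>}  repo: ${GITEA_REPO:-<unset>}"
```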
**Skills as Knowledge, Not Orchestrators:**
- Skills provide supporting knowledge loaded when relevant
- Agents are the primary interface
- Reduces token usage
- Makes knowledge reusable across agents
**Branch Detection:**
- Two layers: CLAUDE.md (file access) + Plugin agents (tool usage)
- Defense in depth approach
- Plugin works with or without CLAUDE.md
## Multi-Project Context (PMO Plugin)
The `projman-pmo` plugin coordinates interdependent projects across an organization. Example use cases:
- Main product repository
- Marketing/documentation sites
- Extracted services
- Supporting tools
PMO plugin adds:
- Cross-project issue aggregation (all repos in organization)
- Dependency tracking and visualization
- Resource allocation across projects
- Deployment coordination
- Multi-project prioritization
- Company-wide lessons learned search
**Configuration Difference:**
- PMO operates at company level (no `GITEA_REPO`)
- Accesses all repositories in organization
- Aggregates issues and lessons across projects
Build PMO plugin AFTER projman is working and validated.
## Testing Approach
**Local Marketplace:**
Create local marketplace for plugin development:
```
~/projman-dev-marketplace/
├── .claude-plugin/
│ └── marketplace.json
└── projman/ # Symlink to plugin directory
```
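A sketch of setting that up (the marketplace.json fields shown are a minimal assumption, not the full schema):
```bash
mkdir -p ~/projman-dev-marketplace/.claude-plugin

# Minimal manifest pointing at the plugin under development (fields are illustrative)
cat > ~/projman-dev-marketplace/.claude-plugin/marketplace.json << 'EOF'
{
  "name": "projman-dev",
  "plugins": [
    { "name": "projman", "source": "./projman" }
  ]
}
EOF

# Symlink the plugin source into the local marketplace
ln -s ~/claude-plugins-work/plugins/projman ~/projman-dev-marketplace/projman
```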
**Integration Testing:**
Test in a real repository with actual Gitea instance before distribution.
**Success Metrics:**
- Sprint planning time reduced 40%
- Manual steps eliminated: 10+ per sprint
- Lessons learned capture rate: 100% (vs 0% before)
- Label accuracy on issues: 90%+
- User satisfaction: Better than current manual workflow
## Important Notes
- **Never modify docker-compose files with 'version' attribute** - It's obsolete
- **Focus on implementation, not over-engineering** - This system has been validated over 15 sprints
- **Lessons learned is critical** - Prevents repeated mistakes (e.g., Claude infinite loops)
- **Type/Refactor label** - Newly implemented at org level for architectural work
- **Branch detection must be 100% reliable** - Prevents production accidents
- **Python for MCP servers** - Use Python 3.8+ with virtual environments
- **CLI tools forbidden** - Use MCP tools exclusively, never CLI tools like `tea` or `gh`
## CRITICAL: Rules You MUST Follow
### DO NOT MODIFY .gitignore Without Explicit Permission
- This is a **private repository** - credentials in `.env` files are intentional
- **NEVER** add `.env` or `.env.*` to .gitignore
- **NEVER** add venv patterns unless explicitly asked
- If you think something should be ignored, ASK FIRST
### Plugin Structure Requirements
- **plugin.json MUST be in `.claude-plugin/` directory** - NOT in plugin root
- Every plugin in the repo MUST be listed in the marketplace.json
- After creating/modifying a plugin, VERIFY it's in the marketplace
### Hooks Syntax (Claude Code Official)
- **Valid events**: `PreToolUse`, `PostToolUse`, `UserPromptSubmit`, `SessionStart`, `SessionEnd`, `Notification`, `Stop`, `SubagentStop`, `PreCompact`
- **INVALID events**: `task-completed`, `file-changed`, `git-commit-msg-needed` (these DO NOT exist)
- Hooks schema:
```json
{
"hooks": {
"EventName": [
{
"matcher": "optional-pattern",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/path/to/script.sh"
}
]
}
]
}
}
```
### MCP Server Configuration
- MCP servers MUST use venv python: `${CLAUDE_PLUGIN_ROOT}/../../mcp-servers/NAME/.venv/bin/python`
- NEVER use bare `python` command - always use venv path
- Test MCP servers after any config change
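A server entry following that rule might look like this sketch (the `args` shown, running the bundled `mcp_server` package as a module, are an assumption; check the actual `.mcp.json` for the real invocation):
```json
{
  "mcpServers": {
    "gitea": {
      "command": "${CLAUDE_PLUGIN_ROOT}/../../mcp-servers/gitea/.venv/bin/python",
      "args": ["-m", "mcp_server.server"]
    }
  }
}
```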
### Before Completing Any Plugin Work
1. Verify plugin.json is in `.claude-plugin/` directory
2. Verify plugin is listed in marketplace.json
3. Test MCP server configs load correctly
4. Verify hooks use valid event types
5. Check .gitignore wasn't modified inappropriately
1. Read `docs/CANONICAL-PATHS.md`
2. List all paths to be created/modified
3. Verify each against canonical paths
4. If not in canonical paths, STOP and ask
## Documentation Index
This repository contains comprehensive planning documentation:
| Document | Purpose |
|----------|---------|
| `docs/CANONICAL-PATHS.md` | **Single source of truth** for paths |
| `docs/COMMANDS-CHEATSHEET.md` | All commands quick reference |
| `docs/CONFIGURATION.md` | Centralized setup guide |
| `docs/DEBUGGING-CHECKLIST.md` | Systematic troubleshooting guide |
| `docs/UPDATING.md` | Update guide for the marketplace |
| `plugins/projman/CONFIGURATION.md` | Projman quick reference (links to central) |
- **`docs/CANONICAL-PATHS.md`** - ⚠️ SINGLE SOURCE OF TRUTH for all paths (MANDATORY reading before any file operations)
- **`docs/DOCUMENT-INDEX.md`** - Complete guide to all planning documents
- **`docs/projman-implementation-plan-updated.md`** - Full 12-phase implementation plan
- **`docs/projman-python-quickstart.md`** - Python-specific implementation guide
- **`docs/two-mcp-architecture-guide.md`** - Deep dive into two-MCP architecture
## Installation Paths
**Start with:** `docs/DOCUMENT-INDEX.md` for navigation guidance
Understanding where files live is critical for debugging:
## Recent Updates (Updated: 2025-06-11)
| Context | Path | Purpose |
|---------|------|---------|
| **Source** | `~/claude-plugins-work/` | Development - edit here |
| **Installed** | `~/.claude/plugins/marketplaces/leo-claude-mktplace/` | Runtime - Claude uses this |
| **Cache** | `~/.claude/` | Plugin metadata and settings |
### Planning Phase Complete
- Comprehensive 12-phase implementation plan finalized
- Architecture decisions documented and validated
- Two-MCP-server approach confirmed (Gitea + Wiki.js)
- Python selected for MCP server implementation
- Hybrid configuration strategy defined (system + project level)
- Wiki.js structure planned with configurable base path
- Repository structure designed with shared MCP servers
**Key insight:** Edits to source require reinstall/update to take effect at runtime.
### Key Architectural Decisions Made
1. **Shared MCP Servers**: Both plugins use the same MCP codebase at `mcp-servers/`
2. **Mode Detection**: MCP servers detect project vs company-wide mode via environment variables
3. **Python Implementation**: MCP servers written in Python (not Node.js) for better configuration handling
4. **Wiki.js Integration**: Lessons learned and documentation moved to Wiki.js for better collaboration
5. **Hybrid Config**: System-level credentials + project-level paths for flexibility
## Debugging & Troubleshooting
### Next Steps
- Begin Phase 1.1a: Gitea MCP Server implementation
- Set up Python virtual environments
- Create configuration loaders
- Implement core Gitea tools (issues, labels)
- Write integration tests
See `docs/DEBUGGING-CHECKLIST.md` for systematic troubleshooting.
**Common Issues:**
| Symptom | Likely Cause | Fix |
|---------|--------------|-----|
| "X MCP servers failed" | Missing venv in installed path | `cd ~/.claude/plugins/marketplaces/leo-claude-mktplace && ./scripts/setup.sh` |
| MCP tools not available | Venv missing or .mcp.json misconfigured | Run `/debug report` to diagnose |
| Changes not taking effect | Editing source, not installed | Reinstall plugin or edit installed path |
**Debug Commands:**
- `/debug report` - Run full diagnostics, create issue if needed
- `/debug review` - Investigate and propose fixes
## Versioning Workflow
This project follows [SemVer](https://semver.org/) and [Keep a Changelog](https://keepachangelog.com).
### Version Locations (must stay in sync)
| Location | Format | Example |
|----------|--------|---------|
| Git tags | `vX.Y.Z` | `v3.2.0` |
| README.md title | `# Leo Claude Marketplace - vX.Y.Z` | `v3.2.0` |
| marketplace.json | `"version": "X.Y.Z"` | `3.2.0` |
| CHANGELOG.md | `## [X.Y.Z] - YYYY-MM-DD` | `[3.2.0] - 2026-01-24` |
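A quick, throwaway check that the four locations agree (a convenience sketch, not one of the repository's scripts):
```bash
#!/usr/bin/env bash
# Print the version recorded in each location listed above.
grep -m1 '"version"' .claude-plugin/marketplace.json
grep -m1 '^# Leo Claude Marketplace' README.md
grep -m1 '^## \[' CHANGELOG.md     # newest section (may be [Unreleased] during development)
git describe --tags --abbrev=0 2>/dev/null || echo "no tags yet"
```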
### During Development
**All changes go under `[Unreleased]` in CHANGELOG.md.** Never create a versioned section until release time.
```markdown
## [Unreleased]
### Added
- New feature description
### Fixed
- Bug fix description
```
### Creating a Release
Use the release script to ensure consistency:
```bash
./scripts/release.sh 3.2.0
```
The script will:
1. Validate `[Unreleased]` section has content
2. Replace `[Unreleased]` with `[3.2.0] - YYYY-MM-DD`
3. Update README.md title
4. Update marketplace.json version
5. Commit and create git tag
### SemVer Guidelines
| Change Type | Version Bump | Example |
|-------------|--------------|---------|
| Bug fixes only | PATCH (x.y.**Z**) | 3.1.1 → 3.1.2 |
| New features (backwards compatible) | MINOR (x.**Y**.0) | 3.1.2 → 3.2.0 |
| Breaking changes | MAJOR (**X**.0.0) | 3.2.0 → 4.0.0 |
---
**Last Updated:** 2026-02-02

LICENSE (new file, 21 lines)

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2025 Leo Miranda
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md (416 lines changed)

@@ -1,75 +1,179 @@
# Claude Code Marketplace - Bandit Labs
# Leo Claude Marketplace - v5.9.0
A collection of Claude Code plugins for project management, infrastructure automation, and development workflows.
## Plugins
### [projman](./plugins/projman/README.md) v2.0.0
### Development & Project Management
#### [projman](./plugins/projman)
**Sprint Planning and Project Management**
AI-guided sprint planning with full Gitea integration. Transforms a proven 15-sprint workflow into a distributable plugin.
- Three-agent model: Planner, Orchestrator, Executor
- Four-agent model: Planner, Orchestrator, Executor, Code Reviewer
- Intelligent label suggestions from 43-label taxonomy
- Lessons learned capture via Gitea Wiki
- Native issue dependencies with parallel execution
- Milestone management for sprint organization
- Branch-aware security (development/staging/production)
- Pre-sprint-close code quality review and test verification
**Commands:** `/sprint-plan`, `/sprint-start`, `/sprint-status`, `/sprint-close`, `/labels-sync`, `/initial-setup`
**Commands:** `/sprint-plan`, `/sprint-start`, `/sprint-status`, `/sprint-close`, `/labels-sync`, `/setup`, `/review`, `/test`, `/debug`, `/suggest-version`, `/proposal-status`, `/rfc`
### [claude-config-maintainer](./plugins/claude-config-maintainer/README.md)
**CLAUDE.md Optimization and Maintenance**
#### [git-flow](./plugins/git-flow) *NEW in v3.0.0*
**Git Workflow Automation**
Analyze, optimize, and create CLAUDE.md configuration files for Claude Code projects.
Smart git operations with intelligent commit messages and branch management.
- Structure and clarity scoring (100-point system)
- Automatic optimization with preview and backup
- Project-aware initialization with stack detection
- Best practices enforcement
- Auto-generated conventional commit messages
- Multiple workflow styles (simple, feature-branch, pr-required, trunk-based)
- Branch naming enforcement
- Merge and cleanup automation
- Protected branch awareness
**Commands:** `/config-analyze`, `/config-optimize`, `/config-init`
**Commands:** `/commit`, `/commit-push`, `/commit-merge`, `/commit-sync`, `/branch-start`, `/branch-cleanup`, `/git-status`, `/git-config`
### [cmdb-assistant](./plugins/cmdb-assistant/README.md)
**NetBox CMDB Integration**
#### [pr-review](./plugins/pr-review) *NEW in v3.0.0*
**Multi-Agent PR Review**
Full CRUD operations for network infrastructure management directly from Claude Code.
Comprehensive pull request review using specialized agents.
- Device, IP, site, and rack management
- Smart search across all NetBox modules
- Conversational infrastructure queries
- Audit trail and change tracking
- Multi-agent review: Security, Performance, Maintainability, Tests
- Confidence scoring (only reports HIGH/MEDIUM confidence findings)
- Actionable feedback with suggested fixes
- Gitea integration for automated review submission
**Commands:** `/cmdb-search`, `/cmdb-device`, `/cmdb-ip`, `/cmdb-site`
**Commands:** `/pr-review`, `/pr-summary`, `/pr-findings`, `/pr-diff`, `/initial-setup`, `/project-init`, `/project-sync`
### [project-hygiene](./plugins/project-hygiene/README.md)
#### [claude-config-maintainer](./plugins/claude-config-maintainer)
**CLAUDE.md and Settings Optimization**
Analyze, optimize, and create CLAUDE.md configuration files. Audit and optimize settings.local.json permissions.
**Commands:** `/analyze`, `/optimize`, `/init`, `/config-diff`, `/config-lint`, `/config-audit-settings`, `/config-optimize-settings`, `/config-permissions-map`
#### [contract-validator](./plugins/contract-validator) *NEW in v5.0.0*
**Cross-Plugin Compatibility Validation**
Validate plugin marketplaces for command conflicts, tool overlaps, and broken agent references.
- Interface parsing from plugin README.md files
- Agent extraction from CLAUDE.md definitions
- Pairwise compatibility checks between all plugins
- Data flow validation for agent sequences
- Markdown or JSON reports with actionable suggestions
**Commands:** `/validate-contracts`, `/check-agent`, `/list-interfaces`, `/dependency-graph`, `/initial-setup`
### Productivity
#### [clarity-assist](./plugins/clarity-assist) *NEW in v3.0.0*
**Prompt Optimization with ND Accommodations**
Transform vague requests into clear specifications using structured methodology.
- 4-D methodology: Deconstruct, Diagnose, Develop, Deliver
- ND-friendly question patterns (option-based, chunked)
- Conflict detection and escalation protocols
**Commands:** `/clarify`, `/quick-clarify`
#### [doc-guardian](./plugins/doc-guardian)
**Documentation Lifecycle Management**
Automatic documentation drift detection and synchronization.
**Commands:** `/doc-audit`, `/doc-sync`, `/changelog-gen`, `/doc-coverage`, `/stale-docs`
#### [project-hygiene](./plugins/project-hygiene)
**Post-Task Cleanup Automation**
Hook-based cleanup that runs after Claude completes work.
- Deletes temp files (`*.tmp`, `*.bak`, `__pycache__`, etc.)
- Warns about unexpected files in project root
- Identifies orphaned supporting files
- Configurable via `.hygiene.json`
### Security
#### [code-sentinel](./plugins/code-sentinel)
**Security Scanning & Refactoring**
Security vulnerability detection and code refactoring tools.
**Commands:** `/security-scan`, `/refactor`, `/refactor-dry`
### Infrastructure
#### [cmdb-assistant](./plugins/cmdb-assistant)
**NetBox CMDB Integration**
Full CRUD operations for network infrastructure management directly from Claude Code.
**Commands:** `/initial-setup`, `/cmdb-search`, `/cmdb-device`, `/cmdb-ip`, `/cmdb-site`, `/cmdb-audit`, `/cmdb-register`, `/cmdb-sync`, `/cmdb-topology`, `/change-audit`, `/ip-conflicts`
### Data Engineering
#### [data-platform](./plugins/data-platform) *NEW in v4.0.0*
**pandas, PostgreSQL/PostGIS, and dbt Integration**
Comprehensive data engineering toolkit with persistent DataFrame storage.
- 14 pandas tools with Arrow IPC data_ref system
- 10 PostgreSQL/PostGIS tools with connection pooling
- 8 dbt tools with automatic pre-validation
- 100k row limit with chunking support
- Auto-detection of dbt projects
**Commands:** `/ingest`, `/profile`, `/schema`, `/explain`, `/lineage`, `/lineage-viz`, `/run`, `/dbt-test`, `/data-quality`, `/data-review`, `/data-gate`, `/initial-setup`
### Visualization
#### [viz-platform](./plugins/viz-platform) *NEW in v4.0.0*
**Dash Mantine Components Validation and Theming**
Visualization toolkit with version-locked component validation and design token theming.
- 3 DMC tools with static JSON registry (prevents prop hallucination)
- 2 Chart tools with Plotly and theme integration
- 5 Layout tools for dashboard composition
- 6 Theme tools with design token system
- 5 Page tools for multi-page app structure
- Dual theme storage: user-level and project-level
**Commands:** `/chart`, `/chart-export`, `/dashboard`, `/theme`, `/theme-new`, `/theme-css`, `/component`, `/accessibility-check`, `/breakpoints`, `/design-review`, `/design-gate`, `/initial-setup`
## Domain Advisory Pattern
The marketplace supports cross-plugin domain advisory integration:
- **Domain Detection**: projman automatically detects when issues involve specialized domains (frontend/viz, data engineering)
- **Acceptance Criteria**: Domain-specific acceptance criteria are added to issues during planning
- **Execution Gates**: Domain validation gates (`/design-gate`, `/data-gate`) run before issue completion
- **Extensible**: New domains can be added by creating advisory agents and gate commands
**Current Domains:**
| Domain | Plugin | Gate Command |
|--------|--------|--------------|
| Visualization | viz-platform | `/design-gate` |
| Data | data-platform | `/data-gate` |
## MCP Servers
MCP servers are **bundled inside each plugin** that needs them. This ensures plugins work when cached by Claude Code.
MCP servers are **shared at repository root** and configured in `.mcp.json`.
### Gitea MCP Server (bundled in projman)
### Gitea MCP Server (shared)
Full Gitea API integration for project management.
| Category | Tools |
|----------|-------|
| Issues | `list_issues`, `get_issue`, `create_issue`, `update_issue`, `add_comment` |
| Labels | `get_labels`, `suggest_labels`, `create_label` |
| Wiki | `list_wiki_pages`, `get_wiki_page`, `create_wiki_page`, `create_lesson`, `search_lessons` |
| Milestones | `list_milestones`, `get_milestone`, `create_milestone`, `update_milestone` |
| Dependencies | `list_issue_dependencies`, `create_issue_dependency`, `get_execution_order` |
| Issues | `list_issues`, `get_issue`, `create_issue`, `update_issue`, `add_comment`, `aggregate_issues` |
| Labels | `get_labels`, `suggest_labels`, `create_label`, `create_label_smart` |
| Wiki | `list_wiki_pages`, `get_wiki_page`, `create_wiki_page`, `update_wiki_page`, `create_lesson`, `search_lessons` |
| Milestones | `list_milestones`, `get_milestone`, `create_milestone`, `update_milestone`, `delete_milestone` |
| Dependencies | `list_issue_dependencies`, `create_issue_dependency`, `remove_issue_dependency`, `get_execution_order` |
| **Pull Requests** | `list_pull_requests`, `get_pull_request`, `get_pr_diff`, `get_pr_comments`, `create_pr_review`, `add_pr_comment` *(NEW in v3.0.0)* |
| Validation | `validate_repo_org`, `get_branch_protection` |
### NetBox MCP Server (bundled in cmdb-assistant)
### NetBox MCP Server (shared)
Comprehensive NetBox REST API integration for infrastructure management.
@@ -81,6 +185,39 @@ Comprehensive NetBox REST API integration for infrastructure management.
| Virtualization | Clusters, VMs, Interfaces |
| Extras | Tags, Custom Fields, Audit Log |
### Data Platform MCP Server (shared) *NEW in v4.0.0*
pandas, PostgreSQL/PostGIS, and dbt integration for data engineering.
| Category | Tools |
|----------|-------|
| pandas | `read_csv`, `read_parquet`, `read_json`, `to_csv`, `to_parquet`, `describe`, `head`, `tail`, `filter`, `select`, `groupby`, `join`, `list_data`, `drop_data` |
| PostgreSQL | `pg_connect`, `pg_query`, `pg_execute`, `pg_tables`, `pg_columns`, `pg_schemas` |
| PostGIS | `st_tables`, `st_geometry_type`, `st_srid`, `st_extent` |
| dbt | `dbt_parse`, `dbt_run`, `dbt_test`, `dbt_build`, `dbt_compile`, `dbt_ls`, `dbt_docs_generate`, `dbt_lineage` |
### Viz Platform MCP Server (shared) *NEW in v4.0.0*
Dash Mantine Components validation and visualization tools.
| Category | Tools |
|----------|-------|
| DMC | `list_components`, `get_component_props`, `validate_component` |
| Chart | `chart_create`, `chart_configure_interaction` |
| Layout | `layout_create`, `layout_add_filter`, `layout_set_grid`, `layout_get`, `layout_add_section` |
| Theme | `theme_create`, `theme_extend`, `theme_validate`, `theme_export_css`, `theme_list`, `theme_activate` |
| Page | `page_create`, `page_add_navbar`, `page_set_auth`, `page_list`, `page_get_app_config` |
### Contract Validator MCP Server (shared) *NEW in v5.0.0*
Cross-plugin compatibility validation tools.
| Category | Tools |
|----------|-------|
| Parse | `parse_plugin_interface`, `parse_claude_md_agents` |
| Validation | `validate_compatibility`, `validate_agent_refs`, `validate_data_flow`, `validate_workflow_integration` |
| Report | `generate_compatibility_report`, `list_issues` |
## Installation
### Prerequisites
@@ -89,132 +226,147 @@ Comprehensive NetBox REST API integration for infrastructure management.
- Python 3.10+
- Access to target services (Gitea, NetBox as needed)
### Quick Start
### Add Marketplace to Claude Code
1. **Clone the repository:**
```bash
git clone ssh://git@hotserv.tailc9b278.ts.net:2222/bandit/support-claude-mktplace.git
cd support-claude-mktplace
```
**Option 1 - CLI command (recommended):**
```bash
/plugin marketplace add https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git
```
**Option 2 - Settings file (for team distribution):**
Add to `.claude/settings.json` in your target project:
```json
{
"extraKnownMarketplaces": {
"leo-claude-mktplace": {
"source": {
"source": "git",
"url": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git"
}
}
}
}
```
### Run Interactive Setup
After installing plugins, run the setup wizard:
```
/initial-setup
```
The wizard handles everything:
- Sets up MCP server (Python venv + dependencies)
- Creates system config (`~/.config/claude/gitea.env`)
- Guides you through adding your API token
- Detects and validates your repository via API
- Creates project config (`.env`)
**For new projects** (when system is already configured):
```
/project-init
```
**After moving a repository:**
```
/project-sync
```
See [docs/CONFIGURATION.md](./docs/CONFIGURATION.md) for manual setup and advanced options.
## Verifying Plugin Installation
After installing plugins, the `/plugin` command may show `(no content)` - this is normal Claude Code behavior and doesn't indicate an error.
**To verify a plugin is installed correctly:**
1. **Check installed plugins list:**
```
/plugin list
```
2. **Install MCP server dependencies:**
```bash
# Gitea MCP (for projman)
cd plugins/projman/mcp-servers/gitea
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
deactivate
# NetBox MCP (for cmdb-assistant)
cd ../../../cmdb-assistant/mcp-servers/netbox
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
deactivate
```
Look for `✔ plugin-name · Installed`
3. **Configure system-level credentials:**
```bash
mkdir -p ~/.config/claude
# Gitea credentials
cat > ~/.config/claude/gitea.env << 'EOF'
GITEA_URL=https://gitea.example.com
GITEA_TOKEN=your_token
GITEA_ORG=your_org
EOF
# NetBox credentials
cat > ~/.config/claude/netbox.env << 'EOF'
NETBOX_API_URL=https://netbox.example.com/api
NETBOX_API_TOKEN=your_token
EOF
chmod 600 ~/.config/claude/*.env
2. **Test a plugin command directly:**
```
/git-flow:git-status
/projman:sprint-status
/clarity-assist:clarify
```
If the command executes and shows output, the plugin is working.
4. **Configure project-level settings:**
```bash
# In your target project root
cat > .env << 'EOF'
GITEA_REPO=your-repository-name
EOF
```
5. **Add marketplace to Claude Code:**
Add to your project's `.claude/settings.json`:
```json
{
"pluginMarketplace": "/path/to/support-claude-mktplace"
}
```
3. **Check for loading errors:**
```
/plugin list
```
Look for any `Plugin Loading Errors` section - this indicates manifest issues.
**Command format:** All plugin commands use the format `/plugin-name:command-name`
| Plugin | Test Command |
|--------|--------------|
| git-flow | `/git-flow:git-status` |
| projman | `/projman:sprint-status` |
| pr-review | `/pr-review:pr-summary` |
| clarity-assist | `/clarity-assist:clarify` |
| doc-guardian | `/doc-guardian:doc-audit` |
| code-sentinel | `/code-sentinel:security-scan` |
| claude-config-maintainer | `/claude-config-maintainer:analyze` |
| cmdb-assistant | `/cmdb-assistant:cmdb-search` |
| data-platform | `/data-platform:ingest` |
| viz-platform | `/viz-platform:chart` |
| contract-validator | `/contract-validator:validate-contracts` |
## Repository Structure
```
support-claude-mktplace/
leo-claude-mktplace/
├── .claude-plugin/ # Marketplace manifest
│ └── marketplace.json
├── plugins/ # All plugins (with bundled MCP servers)
│ ├── projman/ # Sprint management plugin
│ ├── .claude-plugin/
│ ├── .mcp.json
│ ├── mcp-servers/ # Bundled MCP server
└── gitea/
├── commands/
│ ├── agents/
│ └── skills/
│ ├── claude-config-maintainer/ # CLAUDE.md optimization plugin
├── .claude-plugin/
│ ├── commands/
│ └── agents/
├── mcp-servers/ # SHARED MCP servers (v3.0.0+)
│ ├── gitea/ # Gitea MCP (issues, PRs, wiki)
├── netbox/ # NetBox MCP (CMDB)
├── data-platform/ # Data engineering (pandas, PostgreSQL, dbt)
├── viz-platform/ # Visualization (DMC, Plotly, theming)
└── contract-validator/ # Cross-plugin validation (v5.0.0)
├── plugins/ # All plugins
├── projman/ # Sprint management
├── git-flow/ # Git workflow automation
│ ├── pr-review/ # PR review
│ ├── clarity-assist/ # Prompt optimization
├── data-platform/ # Data engineering
├── viz-platform/ # Visualization
│ ├── contract-validator/ # Cross-plugin validation (NEW)
│ ├── claude-config-maintainer/ # CLAUDE.md optimization
│ ├── cmdb-assistant/ # NetBox CMDB integration
│ ├── .claude-plugin/
│ ├── .mcp.json
│ ├── mcp-servers/ # Bundled MCP server
└── netbox/
│ ├── commands/
│ └── agents/
│ ├── projman-pmo/ # PMO coordination plugin (planned)
│ └── project-hygiene/ # Cleanup automation plugin
├── docs/ # Reference documentation
│ ├── CANONICAL-PATHS.md # Single source of truth for paths
│ └── references/
└── scripts/ # Setup and maintenance scripts
├── doc-guardian/ # Documentation drift detection
├── code-sentinel/ # Security scanning
└── project-hygiene/ # Cleanup automation
├── docs/ # Documentation
├── CANONICAL-PATHS.md # Path reference
└── CONFIGURATION.md # Setup guide
├── scripts/ # Setup scripts
└── CHANGELOG.md # Version history
```
## Key Features (v2.0.0)
### Parallel Execution
Tasks are batched by dependency graph for optimal parallel execution:
```
Batch 1 (parallel): Task A, Task B, Task C
Batch 2 (parallel): Task D, Task E (depend on Batch 1)
Batch 3 (sequential): Task F (depends on Batch 2)
```
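One way to derive such batches from issue dependencies (a self-contained sketch of the idea, not the Gitea MCP server's actual `get_execution_order` implementation):
```python
from collections import defaultdict

def execution_batches(deps: dict[int, set[int]]) -> list[list[int]]:
    """Group issues into batches; each batch depends only on earlier batches.

    deps maps issue number -> set of issue numbers it depends on.
    """
    remaining = {issue: set(blockers) for issue, blockers in deps.items()}
    batches: list[list[int]] = []
    done: set[int] = set()
    while remaining:
        # Everything whose blockers are already done can run in parallel now.
        ready = sorted(i for i, blockers in remaining.items() if blockers <= done)
        if not ready:
            raise ValueError("dependency cycle detected")
        batches.append(ready)
        done.update(ready)
        for i in ready:
            del remaining[i]
    return batches

# Mirrors the Batch 1/2/3 example above (tasks A..F mapped to issues 1..6).
print(execution_batches({1: set(), 2: set(), 3: set(),
                         4: {1, 2}, 5: {2, 3}, 6: {4, 5}}))
```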
### Naming Conventions
- **Tasks:** `[Sprint XX] <type>: <description>`
- **Branches:** `feat/`, `fix/`, `debug/` prefixes with issue numbers
### CLI Tools Blocked
All agents use MCP tools exclusively. CLI tools like `tea` or `gh` are forbidden to ensure consistent, auditable operations.
## Documentation
| Document | Description |
|----------|-------------|
| [CLAUDE.md](./CLAUDE.md) | Main project instructions |
| [CONFIGURATION.md](./docs/CONFIGURATION.md) | Centralized setup guide |
| [COMMANDS-CHEATSHEET.md](./docs/COMMANDS-CHEATSHEET.md) | All commands quick reference |
| [UPDATING.md](./docs/UPDATING.md) | Update guide for the marketplace |
| [CANONICAL-PATHS.md](./docs/CANONICAL-PATHS.md) | Authoritative path reference |
| [projman/CONFIGURATION.md](./plugins/projman/CONFIGURATION.md) | Projman setup guide |
| [DEBUGGING-CHECKLIST.md](./docs/DEBUGGING-CHECKLIST.md) | Systematic troubleshooting guide |
| [CHANGELOG.md](./CHANGELOG.md) | Version history |
## License
MIT License
## Support
- **Issues**: Contact repository maintainer
- **Repository**: `https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git`

docs/CANONICAL-PATHS.md
@@ -2,44 +2,155 @@
**This file defines ALL valid paths in this repository. No exceptions. No inference. No assumptions.**
Last Updated: 2026-01-30 (v5.4.1)
---
## Repository Root Structure
```
leo-claude-mktplace/
├── .claude/ # Claude Code local settings
├── .claude-plugin/ # Marketplace manifest
│ └── marketplace.json
├── .scratch/ # Transient work (auto-cleaned)
├── docs/ # All documentation
│ ├── architecture/ # Draw.io diagrams and specs
│ ├── references/ # Reference specifications
│ ├── workflows/ # Workflow documentation
│ ├── CANONICAL-PATHS.md # This file - single source of truth
│ ├── COMMANDS-CHEATSHEET.md # All commands quick reference
│ ├── CONFIGURATION.md # Centralized configuration guide
│ ├── DEBUGGING-CHECKLIST.md # Systematic troubleshooting guide
│ └── UPDATING.md # Update guide
├── hooks/ # Shared hooks (if any)
├── mcp-servers/ # SHARED MCP servers (v3.0.0+)
│ ├── gitea/ # Gitea MCP server
│ │ ├── mcp_server/
│ │ │ ├── server.py
│ │ │ ├── gitea_client.py
│ │ │ ├── config.py
│ │ │ └── tools/
│ │ │ ├── issues.py
│ │ │ ├── labels.py
│ │ │ ├── wiki.py
│ │ │ ├── milestones.py
│ │ │ ├── dependencies.py
│ │ │ └── pull_requests.py # NEW in v3.0.0
│ │ ├── requirements.txt
│ │ └── .venv/
│ ├── netbox/ # NetBox MCP server
│ │ ├── mcp_server/
│ │ ├── requirements.txt
│ │ └── .venv/
│ ├── data-platform/ # Data engineering MCP (NEW v4.0.0)
│ │ ├── mcp_server/
│ │ │ ├── server.py
│ │ │ ├── pandas_tools.py
│ │ │ ├── postgres_tools.py
│ │ │ └── dbt_tools.py
│ │ ├── requirements.txt
│ │ └── .venv/
│ ├── contract-validator/ # Contract validation MCP (NEW v5.0.0)
│ │ ├── mcp_server/
│ │ │ ├── server.py
│ │ │ ├── parse_tools.py
│ │ │ ├── validation_tools.py
│ │ │ └── report_tools.py
│ │ ├── tests/
│ │ ├── requirements.txt
│ │ └── .venv/
│ └── viz-platform/ # Visualization MCP (NEW v4.1.0)
│ ├── mcp_server/
│ │ ├── server.py
│ │ ├── config.py
│ │ ├── component_registry.py
│ │ ├── dmc_tools.py
│ │ ├── chart_tools.py
│ │ ├── layout_tools.py
│ │ ├── theme_tools.py
│ │ ├── theme_store.py
│ │ └── page_tools.py
│ ├── registry/ # DMC component JSON registries
│ ├── tests/ # 94 tests
│ ├── requirements.txt
│ └── .venv/
├── plugins/ # ALL plugins
│ ├── projman/ # Sprint management
│ │ ├── .claude-plugin/
│ │ ├── commands/
│ │ ├── agents/
│ │ ├── skills/
│ │ └── claude-md-integration.md
│ ├── doc-guardian/ # Documentation drift detection
│ │ ├── .claude-plugin/
│ │ ├── hooks/
│ │ ├── commands/
│ │ ├── agents/
│ │ ├── skills/
│ │ └── claude-md-integration.md
│ ├── code-sentinel/ # Security scanning & refactoring
│ │ ├── .claude-plugin/
│ │ ├── hooks/
│ │ ├── commands/
│ │ ├── agents/
│ │ ├── skills/
│ │ └── claude-md-integration.md
│ ├── cmdb-assistant/ # NetBox CMDB integration
│ │ ├── .claude-plugin/
│ │ ├── commands/
│ │ ├── agents/
│ │ └── claude-md-integration.md
│ ├── claude-config-maintainer/
│ │ ├── .claude-plugin/
│ │ ├── commands/
│ │ ├── agents/
│ │ └── claude-md-integration.md
│ ├── project-hygiene/
│ │ ├── .claude-plugin/
│ │ ├── hooks/
│ │ └── claude-md-integration.md
│ ├── clarity-assist/
│ │ ├── .claude-plugin/
│ │ ├── commands/
│ │ ├── agents/
│ │ ├── skills/
│ │ └── claude-md-integration.md
│ ├── git-flow/
│ │ ├── .claude-plugin/
│ │ ├── commands/
│ │ ├── agents/
│ │ ├── skills/
│ │ └── claude-md-integration.md
│ ├── pr-review/
│ │ ├── .claude-plugin/
│ │ ├── commands/
│ │ ├── agents/
│ │ ├── skills/
│ │ └── claude-md-integration.md
│ ├── data-platform/
│ │ ├── .claude-plugin/
│ │ ├── commands/
│ │ ├── agents/
│ │ ├── hooks/
│ │ └── claude-md-integration.md
│ ├── contract-validator/
│ │ ├── .claude-plugin/
│ │ ├── commands/
│ │ ├── agents/
│ │ └── claude-md-integration.md
│ └── viz-platform/
│ ├── .claude-plugin/
│ ├── commands/
│ ├── agents/
│ ├── hooks/
│ └── claude-md-integration.md
├── scripts/ # Setup and maintenance scripts
│ ├── setup.sh # Initial setup (create venvs, config templates)
│ ├── post-update.sh # Post-update (clear cache, show changelog)
│ ├── check-venv.sh # Check if venvs exist (read-only)
│ ├── validate-marketplace.sh # Marketplace compliance validation
│ ├── verify-hooks.sh # Verify all hooks use correct event types
│ ├── setup-venvs.sh # Setup MCP server venvs (create only, never delete)
│ └── release.sh # Release automation with version bumping
├── CLAUDE.md
├── README.md
├── LICENSE
```
@@ -59,33 +170,64 @@ support-claude-mktplace/
| Plugin manifest | `plugins/{plugin-name}/.claude-plugin/plugin.json` | `plugins/projman/.claude-plugin/plugin.json` |
| Plugin commands | `plugins/{plugin-name}/commands/` | `plugins/projman/commands/` |
| Plugin agents | `plugins/{plugin-name}/agents/` | `plugins/projman/agents/` |
| Plugin .mcp.json | `plugins/{plugin-name}/.mcp.json` | `plugins/projman/.mcp.json` |
| Plugin skills | `plugins/{plugin-name}/skills/` | `plugins/projman/skills/` |
| Plugin integration snippet | `plugins/{plugin-name}/claude-md-integration.md` | `plugins/projman/claude-md-integration.md` |
### MCP Server Paths
MCP servers are **shared at repository root** and configured in `.mcp.json`.
| Context | Pattern | Example |
|---------|---------|---------|
| MCP configuration | `.mcp.json` | `.mcp.json` (at repo root) |
| Shared MCP server | `mcp-servers/{server}/` | `mcp-servers/gitea/` |
| MCP server code | `mcp-servers/{server}/mcp_server/` | `mcp-servers/gitea/mcp_server/` |
| MCP venv (local) | `mcp-servers/{server}/.venv/` | `mcp-servers/gitea/.venv/` |
**Note:** Plugins do NOT have their own `mcp-servers/` directories. All MCP servers are shared at root and configured via `.mcp.json`.
### Relative Path Patterns (CRITICAL)
| From | To | Pattern |
|------|----|---------|
| marketplace.json | Plugin | `./plugins/{plugin-name}` |
### MCP Venv Paths - CRITICAL
**Venvs live in a CACHE directory that SURVIVES marketplace updates.**
When checking for venvs, ALWAYS check in this order:
| Priority | Path | Survives Updates? |
|----------|------|-------------------|
| 1 (CHECK FIRST) | `~/.cache/claude-mcp-venvs/leo-claude-mktplace/{server}/.venv/` | YES |
| 2 (fallback) | `{marketplace}/mcp-servers/{server}/.venv/` | NO |
**Why cache first?**
- Marketplace directory gets WIPED on every update/reinstall
- Cache directory SURVIVES updates
- False "venv missing" errors waste hours of debugging
**Pattern for hooks checking venvs:**
```bash
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/{server}/.venv/bin/python"
LOCAL_VENV="$MARKETPLACE_ROOT/mcp-servers/{server}/.venv/bin/python"
if [[ -f "$CACHE_VENV" ]]; then
VENV_PATH="$CACHE_VENV"
elif [[ -f "$LOCAL_VENV" ]]; then
VENV_PATH="$LOCAL_VENV"
else
echo "venv missing"
fi
```
**See lesson learned:** [Startup Hooks Must Check Venv Cache Path First](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/lessons/patterns/startup-hooks-must-check-venv-cache-path-first)
### Documentation Paths
| Type | Location |
|------|----------|
| Reference specs | `docs/references/` |
| Architecture diagrams | `docs/architecture/` |
| Workflow docs | `docs/workflows/` |
| This file | `docs/CANONICAL-PATHS.md` |
| Update guide | `docs/UPDATING.md` |
| Configuration guide | `docs/CONFIGURATION.md` |
| Commands cheat sheet | `docs/COMMANDS-CHEATSHEET.md` |
| Debugging checklist | `docs/DEBUGGING-CHECKLIST.md` |
---
@@ -105,13 +247,10 @@ MCP servers are now **bundled inside each plugin** to ensure they work when plug
### Relative Path Calculation
From `.mcp.json` (at root) to `mcp-servers/gitea/`:
```
.mcp.json (at repository root)
Uses absolute installed path: ~/.claude/plugins/marketplaces/.../mcp-servers/gitea/run.sh
```
From `.claude-plugin/marketplace.json` to `plugins/projman/`:
@@ -130,18 +269,35 @@ Result: ./plugins/projman
| Wrong | Why | Correct |
|-------|-----|---------|
| `projman/` at root | Plugins go in `plugins/` | `plugins/projman/` |
| `./../../../plugins/projman` in marketplace | Wrong (old nested structure) | `./plugins/projman` |
| `mcp-servers/` inside plugins | MCP servers are shared at root | Use root `mcp-servers/` |
| Plugin-level `.mcp.json` | MCP config is at root | Use root `.mcp.json` |
| Hardcoding absolute paths in source | Breaks portability | Use relative paths or `${CLAUDE_PLUGIN_ROOT}` |
---
## Architecture Note
MCP servers are **shared at repository root** and configured in a single `.mcp.json` file.
**Benefits:**
- Single source of truth for each MCP server
- Updates apply to all plugins automatically
- No duplication - clean plugin structure
- Simple configuration in one place
**Configuration:**
All MCP servers are defined in `.mcp.json` at repository root:
```json
{
"mcpServers": {
"gitea": { "command": ".../mcp-servers/gitea/run.sh" },
"netbox": { "command": ".../mcp-servers/netbox/run.sh" },
"data-platform": { "command": ".../mcp-servers/data-platform/run.sh" },
"viz-platform": { "command": ".../mcp-servers/viz-platform/run.sh" },
"contract-validator": { "command": ".../mcp-servers/contract-validator/run.sh" }
}
}
```
---
@@ -149,5 +305,15 @@ MCP servers are bundled inside each plugin (not shared at root) because:
| Date | Change | By |
|------|--------|-----|
| 2026-01-30 | v5.5.0: Removed plugin-level mcp-servers symlinks - all MCP config now in root .mcp.json | Claude Code |
| 2026-01-26 | v5.0.0: Added contract-validator plugin and MCP server | Claude Code |
| 2026-01-26 | v4.1.0: Added viz-platform plugin and MCP server | Claude Code |
| 2026-01-25 | v4.0.0: Added data-platform plugin and MCP server | Claude Code |
| 2026-01-20 | v3.0.0: MCP servers moved to root with symlinks | Claude Code |
| 2026-01-20 | v3.0.0: Added clarity-assist, git-flow, pr-review plugins | Claude Code |
| 2026-01-20 | v3.0.0: Added docs/CONFIGURATION.md | Claude Code |
| 2026-01-20 | v3.0.0: Renamed marketplace to leo-claude-mktplace | Claude Code |
| 2026-01-20 | Removed docs/references/ (obsolete planning docs) | Claude Code |
| 2026-01-19 | Added claude-md-integration.md path pattern | Claude Code |
| 2025-12-15 | Restructured: MCP servers bundled in plugins | Claude Code |
| 2025-12-12 | Initial creation | Claude Code |

docs/COMMANDS-CHEATSHEET.md
@@ -0,0 +1,299 @@
# Plugin Commands Cheat Sheet
Quick reference for all commands in the Leo Claude Marketplace.
---
## Command Reference Table
| Plugin | Command | Auto | Manual | Description |
|--------|---------|:----:|:------:|-------------|
| **projman** | `/sprint-plan` | | X | Start sprint planning with AI-guided architecture analysis and issue creation |
| **projman** | `/sprint-start` | | X | Begin sprint execution with dependency analysis and parallel task coordination (requires approval or `--force`) |
| **projman** | `/sprint-status` | | X | Check current sprint progress (add `--diagram` for Mermaid visualization) |
| **projman** | `/review` | | X | Pre-sprint-close code quality review (debug artifacts, security, error handling) |
| **projman** | `/test` | | X | Run tests (`/test run`) or generate tests (`/test gen <target>`) |
| **projman** | `/sprint-close` | | X | Complete sprint and capture lessons learned to Gitea Wiki |
| **projman** | `/labels-sync` | | X | Synchronize label taxonomy from Gitea |
| **projman** | `/setup` | | X | Auto-detect mode or use `--full`, `--quick`, `--sync`, `--clear-cache` |
| **projman** | *SessionStart hook* | X | | Detects git remote vs .env mismatch, warns to run `/setup --sync` |
| **projman** | `/debug` | | X | Diagnostics (`/debug report`) or investigate (`/debug review`) |
| **projman** | `/suggest-version` | | X | Analyze CHANGELOG and recommend semantic version bump |
| **projman** | `/proposal-status` | | X | View proposal and implementation hierarchy with status |
| **projman** | `/rfc` | | X | RFC lifecycle management (`/rfc create\|list\|review\|approve\|reject`) |
| **git-flow** | `/commit` | | X | Create commit with auto-generated conventional message |
| **git-flow** | `/commit-push` | | X | Commit and push to remote in one operation |
| **git-flow** | `/commit-merge` | | X | Commit current changes, then merge into target branch |
| **git-flow** | `/commit-sync` | | X | Full sync: commit, push, and sync with upstream/base branch |
| **git-flow** | `/branch-start` | | X | Create new feature/fix/chore branch with naming conventions |
| **git-flow** | `/branch-cleanup` | | X | Remove merged branches locally and optionally on remote |
| **git-flow** | `/git-status` | | X | Enhanced git status with recommendations |
| **git-flow** | `/git-config` | | X | Configure git-flow settings for the project |
| **pr-review** | `/initial-setup` | | X | Setup wizard for pr-review (shares Gitea MCP with projman) |
| **pr-review** | `/project-init` | | X | Quick project setup for PR reviews |
| **pr-review** | `/project-sync` | | X | Sync config with git remote after repo move/rename |
| **pr-review** | *SessionStart hook* | X | | Detects git remote vs .env mismatch |
| **pr-review** | `/pr-review` | | X | Full multi-agent PR review with confidence scoring |
| **pr-review** | `/pr-summary` | | X | Quick summary of PR changes |
| **pr-review** | `/pr-findings` | | X | List and filter review findings by category/severity |
| **pr-review** | `/pr-diff` | | X | Formatted diff with inline review comments and annotations |
| **clarity-assist** | `/clarify` | | X | Full 4-D prompt optimization with ND accommodations |
| **clarity-assist** | `/quick-clarify` | | X | Rapid single-pass clarification for simple requests |
| **doc-guardian** | `/doc-audit` | | X | Full documentation audit - scans for doc drift |
| **doc-guardian** | `/doc-sync` | | X | Synchronize pending documentation updates |
| **doc-guardian** | `/changelog-gen` | | X | Generate changelog from conventional commits |
| **doc-guardian** | `/doc-coverage` | | X | Documentation coverage metrics by function/class |
| **doc-guardian** | `/stale-docs` | | X | Flag documentation behind code changes |
| **doc-guardian** | *PostToolUse hook* | X | | Silently detects doc drift on Write/Edit |
| **code-sentinel** | `/security-scan` | | X | Full security audit (SQL injection, XSS, secrets, etc.) |
| **code-sentinel** | `/refactor` | | X | Apply refactoring patterns to improve code |
| **code-sentinel** | `/refactor-dry` | | X | Preview refactoring without applying changes |
| **code-sentinel** | *PreToolUse hook* | X | | Scans code before writing; blocks critical issues |
| **claude-config-maintainer** | `/config-analyze` | | X | Analyze CLAUDE.md for optimization opportunities |
| **claude-config-maintainer** | `/config-optimize` | | X | Optimize CLAUDE.md structure with preview/backup |
| **claude-config-maintainer** | `/config-init` | | X | Initialize new CLAUDE.md for a project |
| **claude-config-maintainer** | `/config-diff` | | X | Track CLAUDE.md changes over time with behavioral impact |
| **claude-config-maintainer** | `/config-lint` | | X | Lint CLAUDE.md for anti-patterns and best practices |
| **claude-config-maintainer** | `/config-audit-settings` | | X | Audit settings.local.json permissions (100-point score) |
| **claude-config-maintainer** | `/config-optimize-settings` | | X | Optimize permissions (profiles, consolidation, dry-run) |
| **claude-config-maintainer** | `/config-permissions-map` | | X | Visual review layer + permission coverage map |
| **cmdb-assistant** | `/initial-setup` | | X | Setup wizard for NetBox MCP server |
| **cmdb-assistant** | `/cmdb-search` | | X | Search NetBox for devices, IPs, sites |
| **cmdb-assistant** | `/cmdb-device` | | X | Manage network devices (create, view, update, delete) |
| **cmdb-assistant** | `/cmdb-ip` | | X | Manage IP addresses and prefixes |
| **cmdb-assistant** | `/cmdb-site` | | X | Manage sites, locations, racks, and regions |
| **cmdb-assistant** | `/cmdb-audit` | | X | Data quality analysis (VMs, devices, naming, roles) |
| **cmdb-assistant** | `/cmdb-register` | | X | Register current machine into NetBox with running apps |
| **cmdb-assistant** | `/cmdb-sync` | | X | Sync machine state with NetBox (detect drift, update) |
| **cmdb-assistant** | `/cmdb-topology` | | X | Infrastructure topology diagrams (rack, network, site views) |
| **cmdb-assistant** | `/change-audit` | | X | NetBox audit trail queries with filtering |
| **cmdb-assistant** | `/ip-conflicts` | | X | Detect IP conflicts and overlapping prefixes |
| **project-hygiene** | *PostToolUse hook* | X | | Removes temp files, warns about unexpected root files |
| **data-platform** | `/ingest` | | X | Load data from CSV, Parquet, JSON into DataFrame |
| **data-platform** | `/profile` | | X | Generate data profiling report with statistics |
| **data-platform** | `/schema` | | X | Explore database schemas, tables, columns |
| **data-platform** | `/explain` | | X | Explain query execution plan |
| **data-platform** | `/lineage` | | X | Show dbt model lineage and dependencies |
| **data-platform** | `/run` | | X | Run dbt models with validation |
| **data-platform** | `/lineage-viz` | | X | dbt lineage visualization as Mermaid diagrams |
| **data-platform** | `/dbt-test` | | X | Formatted dbt test runner with summary and failure details |
| **data-platform** | `/data-quality` | | X | DataFrame quality checks (nulls, duplicates, types, outliers) |
| **data-platform** | `/initial-setup` | | X | Setup wizard for data-platform MCP servers |
| **data-platform** | *SessionStart hook* | X | | Checks PostgreSQL connection (non-blocking warning) |
| **viz-platform** | `/initial-setup` | | X | Setup wizard for viz-platform MCP server |
| **viz-platform** | `/chart` | | X | Create Plotly charts with theme integration |
| **viz-platform** | `/dashboard` | | X | Create dashboard layouts with filters and grids |
| **viz-platform** | `/theme` | | X | Apply existing theme to visualizations |
| **viz-platform** | `/theme-new` | | X | Create new custom theme with design tokens |
| **viz-platform** | `/theme-css` | | X | Export theme as CSS custom properties |
| **viz-platform** | `/component` | | X | Inspect DMC component props and validation |
| **viz-platform** | `/chart-export` | | X | Export charts to PNG, SVG, PDF via kaleido |
| **viz-platform** | `/accessibility-check` | | X | Color blind validation (WCAG contrast ratios) |
| **viz-platform** | `/breakpoints` | | X | Configure responsive layout breakpoints |
| **viz-platform** | `/design-review` | | X | Detailed design system audits |
| **viz-platform** | `/design-gate` | | X | Binary pass/fail design system validation gates |
| **viz-platform** | *SessionStart hook* | X | | Checks DMC version (non-blocking warning) |
| **data-platform** | `/data-review` | | X | Comprehensive data integrity audits |
| **data-platform** | `/data-gate` | | X | Binary pass/fail data integrity gates |
| **contract-validator** | `/validate-contracts` | | X | Full marketplace compatibility validation |
| **contract-validator** | `/check-agent` | | X | Validate single agent definition |
| **contract-validator** | `/list-interfaces` | | X | Show all plugin interfaces |
| **contract-validator** | `/dependency-graph` | | X | Mermaid visualization of plugin dependencies |
| **contract-validator** | `/initial-setup` | | X | Setup wizard for contract-validator MCP |
---
## Plugins by Category
| Category | Plugins | Primary Use |
|----------|---------|-------------|
| **Setup** | projman, pr-review, cmdb-assistant, data-platform | `/setup`, `/initial-setup` |
| **Task Planning** | projman, clarity-assist | Sprint management, requirement clarification |
| **Code Quality** | code-sentinel, pr-review | Security scanning, PR reviews |
| **Documentation** | doc-guardian, claude-config-maintainer | Doc sync, CLAUDE.md maintenance |
| **Git Operations** | git-flow | Commits, branches, workflow automation |
| **Infrastructure** | cmdb-assistant | NetBox CMDB management |
| **Data Engineering** | data-platform | pandas, PostgreSQL, dbt operations |
| **Visualization** | viz-platform | DMC validation, Plotly charts, theming |
| **Validation** | contract-validator | Cross-plugin compatibility checks |
| **Maintenance** | project-hygiene | Automatic cleanup |
---
## Hook-Based Automation Summary
| Plugin | Hook Event | Behavior |
|--------|------------|----------|
| **projman** | SessionStart | Checks git remote vs .env; warns if mismatch detected; suggests sprint planning if issues exist |
| **pr-review** | SessionStart | Checks git remote vs .env; warns if mismatch detected |
| **doc-guardian** | PostToolUse (Write/Edit) | Tracks documentation drift; auto-updates dependent docs |
| **code-sentinel** | PreToolUse (Write/Edit) | Scans for security issues; blocks critical vulnerabilities |
| **project-hygiene** | PostToolUse (Write/Edit) | Cleans temp files, warns about misplaced files |
| **data-platform** | SessionStart | Checks PostgreSQL connection; non-blocking warning if unavailable |
| **viz-platform** | SessionStart | Checks DMC version; non-blocking warning if mismatch detected |
---
## Dev Workflow Examples
### Example 0: RFC-Driven Feature Development
Full workflow from idea to implementation using RFCs:
```
1. /clarify # Clarify the feature idea
2. /rfc create # Create RFC from clarified spec
... refine RFC content ...
3. /rfc review 0001 # Submit RFC for review
... review discussion ...
4. /rfc approve 0001 # Approve RFC for implementation
5. /sprint-plan # Select approved RFC for sprint
... implement feature ...
6. /sprint-close # Complete sprint, RFC marked Implemented
```
### Example 1: Starting a New Feature Sprint
A typical workflow for planning and executing a feature sprint:
```
1. /clarify # Clarify requirements if vague
2. /sprint-plan # Plan the sprint with architecture analysis
3. /labels-sync # Ensure labels are up-to-date
4. /sprint-start # Begin execution with dependency ordering
5. /branch-start feat/... # Create feature branch
... implement features ...
6. /commit # Commit with conventional message
7. /sprint-status --diagram # Check progress with visualization
8. /review # Pre-close quality review
9. /test run # Verify test coverage
10. /sprint-close # Capture lessons learned
```
### Example 2: Daily Development Cycle
Quick daily workflow with git-flow:
```
1. /git-status # Check current state
2. /branch-start fix/... # Start bugfix branch
... make changes ...
3. /commit # Auto-generate commit message
4. /commit-push # Push to remote
5. /branch-cleanup # Clean merged branches
```
### Example 3: Pull Request Review Workflow
Reviewing a PR before merge:
```
1. /pr-summary # Quick overview of changes
2. /pr-review # Full multi-agent review
3. /pr-findings # Filter findings by severity
4. /security-scan # Deep security audit if needed
```
### Example 4: Documentation Maintenance
Keeping docs in sync:
```
1. /doc-audit # Scan for documentation drift
2. /doc-sync # Apply pending updates
3. /config-analyze # Check CLAUDE.md health
4. /config-optimize # Optimize if needed
```
### Example 5: Code Refactoring Session
Safe refactoring with preview:
```
1. /refactor-dry # Preview opportunities
2. /security-scan # Baseline security check
3. /refactor # Apply improvements
4. /test run # Verify nothing broke
5. /commit # Commit with descriptive message
```
### Example 6: Infrastructure Documentation
Managing infrastructure with CMDB:
```
1. /cmdb-search "server" # Find existing devices
2. /cmdb-device view X # Check device details
3. /cmdb-ip list # List available IPs
4. /cmdb-site view Y # Check site info
```
### Example 6b: Data Engineering Workflow
Working with data pipelines:
```
1. /ingest file.csv # Load data into DataFrame
2. /profile # Generate data profiling report
3. /schema # Explore database schemas
4. /lineage model_name # View dbt model dependencies
5. /run model_name # Execute dbt models
6. /explain "SELECT ..." # Analyze query execution plan
```
### Example 7: First-Time Setup (New Machine)
Setting up the marketplace for the first time:
```
1. /setup --full # Full setup: MCP + system config + project
# → Follow prompts for Gitea URL, org
# → Add token manually when prompted
# → Confirm repository name
2. # Restart Claude Code session
3. /labels-sync # Sync Gitea labels
4. /sprint-plan # Plan first sprint
```
### Example 8: New Project Setup (System Already Configured)
Adding a new project when system config exists:
```
1. /setup --quick # Quick project setup (auto-detected)
# → Confirms detected repo name
# → Creates .env
2. /labels-sync # Sync Gitea labels
3. /sprint-plan # Plan first sprint
```
---
## Quick Tips
- **Hooks run automatically** - doc-guardian and code-sentinel protect you without manual invocation
- **Use `/commit` over `git commit`** - generates better commit messages following conventions
- **Run `/review` before `/sprint-close`** - catches issues before closing the sprint
- **Use `/clarify` for vague requests** - especially helpful for complex requirements
- **`/refactor-dry` is safe** - always preview before applying refactoring changes
---
## MCP Server Requirements
Some plugins require MCP server connectivity:
| Plugin | MCP Server | Purpose |
|--------|------------|---------|
| projman | Gitea | Issues, PRs, wiki, labels, milestones |
| pr-review | Gitea | PR operations and reviews |
| cmdb-assistant | NetBox | Infrastructure CMDB |
| data-platform | pandas, PostgreSQL, dbt | DataFrames, database queries, dbt builds |
| viz-platform | viz-platform | DMC validation, charts, layouts, themes, pages |
| contract-validator | contract-validator | Plugin interface parsing, compatibility validation |
Ensure the relevant credentials are configured in `~/.config/claude/gitea.env`, `~/.config/claude/netbox.env`, and/or `~/.config/claude/postgres.env` as needed.
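To quickly confirm which credential files are present (paths as listed above):
```bash
for f in gitea netbox postgres; do
  [ -f "$HOME/.config/claude/$f.env" ] && echo "$f.env: present" || echo "$f.env: missing"
done
```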
---
*Last Updated: 2026-02-02*

docs/CONFIGURATION.md
@@ -0,0 +1,678 @@
# Configuration Guide
Centralized configuration documentation for all plugins and MCP servers in the Leo Claude Marketplace.
---
## Quick Start
**After installing the marketplace and plugins via Claude Code:**
```
/setup
```
The interactive wizard auto-detects what's needed and handles everything except manually adding your API tokens.
---
## Setup Flow Diagram
```
┌─────────────────────────────────────────────────────────────────────────────┐
│ FIRST TIME SETUP │
│ (once per machine) │
└─────────────────────────────────────────────────────────────────────────────┘
/setup --full
(or /setup auto-detects)
┌──────────────────────────────┼──────────────────────────────┐
▼ ▼ ▼
┌─────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ PHASE 1 │ │ PHASE 2 │ │ PHASE 3 │
│ Automated │───────────▶│ Automated │───────────▶│ Interactive │
│ │ │ │ │ │
│ • Check │ │ • Find MCP path │ │ • Ask Gitea URL │
│ Python │ │ • Create venv │ │ • Ask Org name │
│ version │ │ • Install deps │ │ • Create config │
└─────────────┘ └─────────────────┘ └─────────────────┘
┌───────────────────────────┐
│ PHASE 4 │
│ USER ACTION │
│ │
│ Edit config file to add │
│ API token (for security) │
│ │
│ nano ~/.config/claude/ │
│ gitea.env │
└───────────────────────────┘
┌──────────────────────────────┬──────────────────────────────┐
▼ ▼ ▼
┌─────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ PHASE 5 │ │ PHASE 6 │ │ PHASE 7 │
│ Interactive │ │ Automated │ │ Automated │
│ │ │ │ │ │
│ • Confirm │ │ • Create .env │ │ • Test API │
│ repo name │ │ • Check │ │ • Show summary │
│ from git │ │ .gitignore │ │ • Restart note │
└─────────────┘ └─────────────────┘ └─────────────────┘
┌───────────────────────────┐
│ RESTART SESSION │
│ │
│ MCP tools available │
│ after restart │
└───────────────────────────┘
┌─────────────────────────────────────────────────────────────────────────────┐
│ NEW PROJECT SETUP │
│ (once per project) │
└─────────────────────────────────────────────────────────────────────────────┘
┌───────────────┴───────────────┐
▼ ▼
/setup --quick /setup
(explicit mode) (auto-detects mode)
│ │
│ ┌──────────┴──────────┐
│ ▼ ▼
│ "Quick setup" "Full setup"
│ (skips to (re-runs
│ project config) everything)
│ │ │
└────────────────────┴─────────────────────┘
┌─────────────────────┐
│ PROJECT CONFIG │
│ │
│ • Detect repo from │
│ git remote │
│ • Confirm with user │
│ • Create .env │
│ • Check .gitignore │
└─────────────────────┘
Done!
```
---
## What Runs Automatically vs User Interaction
### `/setup --full` - Full Setup
| Phase | Type | What Happens |
|-------|------|--------------|
| **1. Environment Check** | Automated | Verifies Python 3.10+ is installed |
| **2. MCP Server Setup** | Automated | Finds plugin path, creates venv, installs dependencies |
| **3. System Config Creation** | Interactive | Asks for Gitea URL and organization name |
| **4. Token Entry** | **User Action** | User manually edits config file to add API token |
| **5. Project Detection** | Interactive | Shows detected repo name, asks for confirmation |
| **6. Project Config** | Automated | Creates `.env` file, checks `.gitignore` |
| **7. Validation** | Automated | Tests API connectivity, shows summary |
### `/setup --quick` - Quick Project Setup
| Phase | Type | What Happens |
|-------|------|--------------|
| **1. Pre-flight Check** | Automated | Verifies system config exists |
| **2. Project Detection** | Interactive | Shows detected repo name, asks for confirmation |
| **3. Project Config** | Automated | Creates/updates `.env` file |
| **4. Gitignore Check** | Interactive | Asks to add `.env` to `.gitignore` if missing |
---
## One Command, Three Modes
| Mode | When to Use | What It Does |
|------|-------------|--------------|
| `/setup` | Any time | Auto-detects: runs full, quick, or sync as needed |
| `/setup --full` | First time on a machine | Full setup: MCP server + system config + project config |
| `/setup --quick` | Starting a new project | Quick setup: project config only (assumes system is ready) |
| `/setup --sync` | After repo move/rename | Updates .env to match current git remote |
**Auto-detection logic:**
1. No system config → **full** mode
2. System config exists, no project config → **quick** mode
3. Both exist, git remote differs → **sync** mode
4. Both exist, match → already configured, offer to reconfigure
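A rough sketch of that decision order (file paths are the ones used throughout this guide; the wizard's actual checks may differ):
```bash
if [ ! -f "$HOME/.config/claude/gitea.env" ]; then
  echo "full mode"      # no system config yet
elif [ ! -f .env ]; then
  echo "quick mode"     # system ready, project not configured
else
  echo "quick or sync"  # compare GITEA_REPO in .env with the git remote (see Mismatch Detection below)
fi
```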
**Typical workflow:**
1. Install plugin → run `/setup` (auto-runs full mode)
2. Start new project → run `/setup` (auto-runs quick mode)
3. Repository moved? → run `/setup` (auto-runs sync mode)
---
## Configuration Architecture
This marketplace uses a **hybrid configuration** approach:
```
┌─────────────────────────────────────────────────────────────────┐
│ SYSTEM-LEVEL (once per machine) │
│ ~/.config/claude/ │
├─────────────────────────────────────────────────────────────────┤
│ gitea.env │ GITEA_API_URL, GITEA_API_TOKEN │
│ netbox.env │ NETBOX_API_URL, NETBOX_API_TOKEN │
│ git-flow.env │ GIT_WORKFLOW_STYLE, GIT_DEFAULT_BASE, etc. │
└─────────────────────────────────────────────────────────────────┘
│ Shared across all projects
┌─────────────────────────────────────────────────────────────────┐
│ PROJECT-LEVEL (once per project) │
│ <project-root>/.env │
├─────────────────────────────────────────────────────────────────┤
│ GITEA_REPO │ Repository as owner/repo format │
│ GIT_WORKFLOW_STYLE │ (optional) Override system default │
│ PR_REVIEW_* │ (optional) PR review settings │
└─────────────────────────────────────────────────────────────────┘
```
**Benefits:**
- Single token per service (update once, use everywhere)
- Easy multi-project setup (just run `/setup` in each project)
- Security (tokens never committed to git, never typed into AI chat)
- Project isolation (each project can override defaults)
---
## Prerequisites
Before running `/setup`:
1. **Python 3.10+** installed
```bash
python3 --version # Should be 3.10.0 or higher
```
2. **Git repository** initialized (for project setup)
```bash
git status # Should show initialized repository
```
3. **Claude Code** installed and working with the marketplace
---
## Setup Methods
### Method 1: Interactive Wizard (Recommended)
Run the setup wizard in Claude Code:
```
/setup
```
The wizard will guide you through each step interactively and auto-detect the appropriate mode.
**Note:** After first-time setup, you'll need to restart your Claude Code session for MCP tools to become available.
### Method 2: Manual Setup
If you prefer to set up manually or need to troubleshoot:
#### Step 1: MCP Server Setup
```bash
# Navigate to marketplace directory
cd /path/to/leo-claude-mktplace
# Set up Gitea MCP server
cd mcp-servers/gitea
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
deactivate
# (Optional) Set up NetBox MCP server
cd ../netbox
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
deactivate
```
#### Step 2: System Configuration
```bash
mkdir -p ~/.config/claude
# Gitea configuration (credentials only)
cat > ~/.config/claude/gitea.env << 'EOF'
GITEA_API_URL=https://gitea.example.com
GITEA_API_TOKEN=your_token_here
EOF
chmod 600 ~/.config/claude/gitea.env
```
#### Step 3: Project Configuration
In each project root:
```bash
cat > .env << 'EOF'
GITEA_REPO=your-organization/your-repo-name
EOF
```
Add `.env` to `.gitignore` if not already there.
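One idempotent way to do that:
```bash
grep -qxF '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore
```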
### Method 3: Automation Script (CI/Scripting)
For automated setups or CI environments:
```bash
cd /path/to/leo-claude-mktplace
./scripts/setup.sh
```
This script is useful for CI/CD pipelines and bulk provisioning.
---
## Configuration Reference
### System-Level Files
Located in `~/.config/claude/`:
| File | Required By | Purpose |
|------|-------------|---------|
| `gitea.env` | projman, pr-review | Gitea API credentials |
| `netbox.env` | cmdb-assistant | NetBox API credentials |
| `git-flow.env` | git-flow | Default git workflow settings |
### Gitea Configuration
```bash
# ~/.config/claude/gitea.env
GITEA_API_URL=https://gitea.example.com/api/v1
GITEA_API_TOKEN=your_gitea_token_here
```
| Variable | Description | Example |
|----------|-------------|---------|
| `GITEA_API_URL` | Gitea API endpoint (with `/api/v1`) | `https://gitea.example.com/api/v1` |
| `GITEA_API_TOKEN` | Personal access token | `abc123...` |
**Note:** `GITEA_REPO` is configured at the project level in `owner/repo` format since different projects may belong to different organizations.
**Generating a Gitea Token:**
1. Log into Gitea → **User Icon** → **Settings**
2. **Applications** tab → **Manage Access Tokens**
3. **Generate New Token** with permissions:
- `repo` (all sub-permissions)
- `read:org`
- `read:user`
- `write:repo` (for wiki access)
4. Copy token immediately (shown only once)
### NetBox Configuration
```bash
# ~/.config/claude/netbox.env
NETBOX_API_URL=https://netbox.example.com
NETBOX_API_TOKEN=your_netbox_token_here
```
| Variable | Description | Example |
|----------|-------------|---------|
| `NETBOX_API_URL` | NetBox base URL | `https://netbox.example.com` |
| `NETBOX_API_TOKEN` | API token | `abc123...` |
### Git-Flow Configuration
```bash
# ~/.config/claude/git-flow.env
GIT_WORKFLOW_STYLE=feature-branch
GIT_DEFAULT_BASE=development
GIT_AUTO_DELETE_MERGED=true
GIT_AUTO_PUSH=false
GIT_PROTECTED_BRANCHES=main,master,development,staging,production
GIT_COMMIT_STYLE=conventional
GIT_CO_AUTHOR=true
```
| Variable | Default | Description |
|----------|---------|-------------|
| `GIT_WORKFLOW_STYLE` | `feature-branch` | Branching strategy |
| `GIT_DEFAULT_BASE` | `development` | Default base branch |
| `GIT_AUTO_DELETE_MERGED` | `true` | Delete merged branches |
| `GIT_AUTO_PUSH` | `false` | Auto-push after commit |
| `GIT_PROTECTED_BRANCHES` | `main,master,...` | Protected branches |
| `GIT_COMMIT_STYLE` | `conventional` | Commit message style |
| `GIT_CO_AUTHOR` | `true` | Include Claude co-author |
---
## Project-Level Configuration
Create `.env` in each project root:
```bash
# Required for projman, pr-review (use owner/repo format)
GITEA_REPO=your-organization/your-repo-name
# Optional: Override git-flow defaults
GIT_WORKFLOW_STYLE=pr-required
GIT_DEFAULT_BASE=main
# Optional: PR review settings
PR_REVIEW_CONFIDENCE_THRESHOLD=0.5
PR_REVIEW_AUTO_SUBMIT=false
```
| Variable | Required | Description |
|----------|----------|-------------|
| `GITEA_REPO` | Yes | Repository in `owner/repo` format (e.g., `my-org/my-repo`) |
| `GIT_WORKFLOW_STYLE` | No | Override system default |
| `PR_REVIEW_*` | No | PR review settings |
---
## Plugin Configuration Summary
| Plugin | System Config | Project Config | Setup Command |
|--------|---------------|----------------|---------------|
| **projman** | gitea.env | .env (GITEA_REPO=owner/repo) | `/setup` |
| **pr-review** | gitea.env | .env (GITEA_REPO=owner/repo) | `/initial-setup` |
| **git-flow** | git-flow.env (optional) | .env (optional) | None needed |
| **clarity-assist** | None | None | None needed |
| **cmdb-assistant** | netbox.env | None | `/initial-setup` |
| **data-platform** | postgres.env | .env (optional) | `/initial-setup` |
| **viz-platform** | None | .env (optional DMC_VERSION) | `/initial-setup` |
| **doc-guardian** | None | None | None needed |
| **code-sentinel** | None | None | None needed |
| **project-hygiene** | None | None | None needed |
| **claude-config-maintainer** | None | None | None needed |
| **contract-validator** | None | None | `/initial-setup` |
---
## Multi-Project Workflow
Once system-level config is set up, adding new projects is simple:
```
cd ~/projects/new-project
/setup
```
The command auto-detects that system config exists and runs quick project setup.
---
## Installing Plugins to Consumer Projects
The marketplace provides scripts to install plugins into consumer projects. This sets up the MCP server connections and adds CLAUDE.md integration snippets.
### Install a Plugin
```bash
cd /path/to/leo-claude-mktplace
./scripts/install-plugin.sh <plugin-name> <target-project-path>
```
**Examples:**
```bash
# Install data-platform to a portfolio project
./scripts/install-plugin.sh data-platform ~/projects/personal-portfolio
# Install multiple plugins
./scripts/install-plugin.sh viz-platform ~/projects/personal-portfolio
./scripts/install-plugin.sh projman ~/projects/personal-portfolio
```
**What it does:**
1. Validates the plugin exists in the marketplace
2. Adds MCP server entry to target's `.mcp.json` (if plugin has MCP server)
3. Appends integration snippet to target's `CLAUDE.md`
4. Reports changes and lists available commands
**After installation:** Restart your Claude Code session for MCP tools to become available.
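To confirm what changed in the target project (target path and plugin name reuse the examples above):
```bash
# MCP server entry added to the project's .mcp.json (requires jq)
jq '.mcpServers | keys' ~/projects/personal-portfolio/.mcp.json
# Integration snippet appended to CLAUDE.md
grep -n "data-platform" ~/projects/personal-portfolio/CLAUDE.md | head -5
```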
### Uninstall a Plugin
```bash
./scripts/uninstall-plugin.sh <plugin-name> <target-project-path>
```
Removes the MCP server entry and CLAUDE.md integration section.
### List Installed Plugins
```bash
./scripts/list-installed.sh <target-project-path>
```
Shows which marketplace plugins are installed, partially installed, or available.
**Output example:**
```
✓ Fully Installed:
PLUGIN VERSION DESCRIPTION
------ ------- -----------
data-platform 1.3.0 pandas, PostgreSQL, and dbt integration...
viz-platform 1.1.0 DMC validation, Plotly charts, and theming...
○ Available (not installed):
projman 3.4.0 Sprint planning and project management...
```
### Plugins with MCP Servers
Not all plugins have MCP servers. The install script handles this automatically:
| Plugin | Has MCP Server | Notes |
|--------|---------------|-------|
| data-platform | ✓ | pandas, PostgreSQL, dbt tools |
| viz-platform | ✓ | DMC validation, chart, theme tools |
| contract-validator | ✓ | Plugin compatibility validation |
| cmdb-assistant | ✓ (via netbox) | NetBox CMDB tools |
| projman | ✓ (via gitea) | Issue, wiki, PR tools |
| pr-review | ✓ (via gitea) | PR review tools |
| git-flow | ✗ | Commands only |
| doc-guardian | ✗ | Commands and hooks only |
| code-sentinel | ✗ | Commands and hooks only |
| clarity-assist | ✗ | Commands only |
### Script Requirements
- **jq** must be installed (`sudo apt install jq`)
- Scripts are idempotent (safe to run multiple times)
---
## Agent Model Selection
Marketplace agents specify their preferred model using Claude Code's `model` frontmatter field. This allows cost/performance optimization per agent.
### Supported Values
| Value | Description |
|-------|-------------|
| `sonnet` | Default. Balanced performance and cost. |
| `opus` | Higher reasoning depth. Use for complex analysis. |
| `haiku` | Faster, lower cost. Use for mechanical tasks. |
| `inherit` | Use session's current model setting. |
### How It Works
Each agent in `plugins/{plugin}/agents/{agent}.md` has frontmatter like:
```yaml
---
name: planner
description: Sprint planning agent - thoughtful architecture analysis
model: sonnet
---
```
Claude Code reads this field when invoking the agent as a subagent.
### Model Assignments
Agents are assigned models based on their task complexity:
| Model | Agents | Rationale |
|-------|--------|-----------|
| **sonnet** | Planner, Orchestrator, Executor, Code Reviewer, Coordinator, Security Reviewers, Performance Analyst, Data Advisor, Data Analysis, Design Reviewer, Layout Builder, Full Validation, Doc Analyzer, Clarity Coach, Maintainer, CMDB Assistant, Refactor Advisor | Standard reasoning, tool orchestration, code generation |
| **haiku** | Maintainability Auditor, Test Validator, Component Check, Theme Setup, Agent Check, Data Ingestion, Git Assistant | Pattern matching, quick validation, mechanical tasks |
### Overriding Model Selection
**Per-agent override:** Edit the `model:` field in the agent file:
```bash
# Change executor to use opus for heavy implementation work
nano plugins/projman/agents/executor.md
# Change model: sonnet to model: opus
```
**Session-level:** Users on an Opus subscription can set an agent's model to `inherit` so it uses whatever model the current session is using.
### Best Practices
1. **Default to sonnet** - Good balance for most tasks
2. **Use haiku for speed-sensitive agents** - Sub-agents dispatched in parallel, read-only tasks
3. **Reserve opus for heavy analysis** - Only when sonnet's reasoning isn't sufficient
4. **Use inherit sparingly** - Only when you want session-level control
---
## Automatic Validation Features
### API Validation
When running `/setup`, the command:
1. **Detects** organization and repository from git remote URL
2. **Validates** via Gitea API: `GET /api/v1/repos/{org}/{repo}`
3. **Auto-fills** if repository exists and is accessible (no confirmation needed)
4. **Asks for confirmation** only if validation fails (404 or permission error)
This catches typos and permission issues before saving configuration.
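The same check can be run by hand; `GITEA_API_URL` already ends in `/api/v1`, and `my-org/my-repo` is a placeholder:
```bash
source ~/.config/claude/gitea.env
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: token $GITEA_API_TOKEN" \
  "$GITEA_API_URL/repos/my-org/my-repo"
# 200 = accessible, 404 = not found or no permission
```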
### Mismatch Detection (SessionStart Hook)
When you start a Claude Code session, a hook automatically:
1. Reads `GITEA_REPO` (in `owner/repo` format) from `.env`
2. Compares with current `git remote get-url origin`
3. **Warns** if mismatch detected: "Repository location mismatch. Run `/setup --sync` to update."
This helps when you:
- Move a repository to a different organization
- Rename a repository
- Clone a repo but forget to update `.env`
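A minimal sketch of the comparison the hook performs (variable names are illustrative, not the actual hook script):
```bash
EXPECTED="$(grep -E '^GITEA_REPO=' .env | cut -d= -f2)"
ACTUAL="$(git remote get-url origin | sed -E 's#\.git$##; s#.*[:/]([^/]+/[^/]+)$#\1#')"
if [ -n "$EXPECTED" ] && [ "$EXPECTED" != "$ACTUAL" ]; then
  echo "Repository location mismatch ($EXPECTED vs $ACTUAL). Run /setup --sync to update."
fi
```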
---
## Verification
### Test Gitea Connection
```bash
source ~/.config/claude/gitea.env
curl -H "Authorization: token $GITEA_API_TOKEN" "$GITEA_API_URL/user"
```
### Verify Project Setup
In Claude Code, after restarting your session:
```
/labels-sync
```
If this works, your setup is complete.
---
## Troubleshooting
### MCP tools not available
**Cause:** Session wasn't restarted after setup.
**Solution:** Exit Claude Code and start a new session.
### "Configuration not found" error
```bash
# Check system config exists
ls -la ~/.config/claude/gitea.env
# Check permissions (should be 600)
stat ~/.config/claude/gitea.env
```
### Authentication failed
```bash
# Test token directly
source ~/.config/claude/gitea.env
curl -H "Authorization: token $GITEA_API_TOKEN" "$GITEA_API_URL/user"
```
If you get 401, regenerate your token in Gitea.
### MCP server won't start
```bash
# Check venv exists
ls /path/to/mcp-servers/gitea/.venv
# If missing, create venv (do NOT delete existing venvs)
cd /path/to/mcp-servers/gitea
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
deactivate
```
### Wrong repository
```bash
# Check project .env
cat .env
# Verify GITEA_REPO is in owner/repo format and matches Gitea exactly
# Example: GITEA_REPO=my-org/my-repo
```
---
## Security Best Practices
1. **Never commit tokens**
- Keep credentials in `~/.config/claude/` only
- Add `.env` to `.gitignore`
2. **Secure configuration files**
```bash
chmod 600 ~/.config/claude/*.env
```
3. **Never type tokens into AI chat**
- Always edit config files directly in your editor
- The `/setup` wizard respects this
4. **Rotate tokens periodically**
- Every 6-12 months
- Immediately if compromised
5. **Minimum permissions**
- Only grant required token permissions
- Use separate tokens for different environments

docs/DEBUGGING-CHECKLIST.md
@@ -0,0 +1,291 @@
# Debugging Checklist for Marketplace Troubleshooting
**Purpose:** Systematic approach to diagnose and fix plugin loading issues.
Last Updated: 2026-01-28
---
## Step 1: Identify the Loading Path
Claude Code loads plugins from different locations depending on context:
| Location | Path | When Used |
|----------|------|-----------|
| **Source** | `~/claude-plugins-work/` | When developing in this directory |
| **Installed** | `~/.claude/plugins/marketplaces/leo-claude-mktplace/` | After marketplace install |
| **Cache** | `~/.claude/` | Plugin metadata, settings |
**Determine which path Claude is using:**
```bash
# Check if installed marketplace exists
ls -la ~/.claude/plugins/marketplaces/leo-claude-mktplace/
# Check Claude's current plugin loading
cat ~/.claude/settings.local.json | grep -A5 "mcpServers"
```
**Key insight:** If you're editing the source repository but Claude is loading the installed copy, your changes won't take effect until you reinstall or edit the installed path.
---
## Step 2: Verify Files Exist at Runtime Location
Check the files Claude will actually load:
```bash
# For installed marketplace
RUNTIME=~/.claude/plugins/marketplaces/leo-claude-mktplace
# Check MCP server exists
ls -la $RUNTIME/mcp-servers/gitea/
ls -la $RUNTIME/mcp-servers/netbox/
# Check plugin manifests
ls -la $RUNTIME/plugins/projman/.claude-plugin/plugin.json
ls -la $RUNTIME/plugins/pr-review/.claude-plugin/plugin.json
# Check .mcp.json files
cat $RUNTIME/plugins/projman/.mcp.json
```
---
## Step 3: Verify Virtual Environments Exist
**This is the most common failure point after installation.**
MCP servers require Python venvs to exist at the INSTALLED location:
```bash
RUNTIME=~/.claude/plugins/marketplaces/leo-claude-mktplace
# Check venvs exist
ls -la $RUNTIME/mcp-servers/gitea/.venv/bin/python
ls -la $RUNTIME/mcp-servers/netbox/.venv/bin/python
# If missing, create them:
cd $RUNTIME && ./scripts/setup.sh
```
**Common error:** "X MCP servers failed to start" usually means the venvs don't exist at the installed path.
---
## Step 4: Verify MCP Configuration
Check `.mcp.json` at marketplace root is correctly configured:
```bash
RUNTIME=~/.claude/plugins/marketplaces/leo-claude-mktplace
# Check .mcp.json exists and has valid content
cat $RUNTIME/.mcp.json | jq '.mcpServers | keys'
# Should list: gitea, netbox, data-platform, viz-platform, contract-validator
```
---
## Step 5: Test MCP Server Startup
Manually test if the MCP server can start:
```bash
RUNTIME=~/.claude/plugins/marketplaces/leo-claude-mktplace
# Test Gitea MCP
cd $RUNTIME/mcp-servers/gitea
PYTHONPATH=. .venv/bin/python -c "from mcp_server.server import main; print('OK')"
# Test NetBox MCP
cd $RUNTIME/mcp-servers/netbox
PYTHONPATH=. .venv/bin/python -c "from mcp_server.server import main; print('OK')"
```
**If import fails:** Check requirements.txt installed, check Python version compatibility.
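Two quick checks before recreating anything (the existing venv is kept; reinstalling requirements into it is non-destructive):
```bash
cd $RUNTIME/mcp-servers/gitea
.venv/bin/python --version                  # should be 3.10 or higher
.venv/bin/pip install -r requirements.txt   # reinstall dependencies into the existing venv
```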
---
## Step 6: Verify Configuration Files
Check environment variables are set:
```bash
# System-level credentials (should exist)
cat ~/.config/claude/gitea.env
# Should contain: GITEA_API_URL, GITEA_API_TOKEN
cat ~/.config/claude/netbox.env
# Should contain: NETBOX_API_URL, NETBOX_API_TOKEN
# Project-level config (in target project)
cat /path/to/project/.env
# Should contain: GITEA_REPO=owner/repo (e.g., my-org/my-repo)
```
---
## Step 7: Verify Hooks Configuration
Check hooks are valid:
```bash
RUNTIME=~/.claude/plugins/marketplaces/leo-claude-mktplace
# List all hooks.json files
find $RUNTIME/plugins -name "hooks.json" -exec echo "=== {} ===" \; -exec cat {} \;
# Verify hook events are valid
# Valid: PreToolUse, PostToolUse, UserPromptSubmit, SessionStart, SessionEnd,
# Notification, Stop, SubagentStop, PreCompact
# INVALID: task-completed, file-changed, git-commit-msg-needed
```
---
## Quick Diagnostic Commands
Run these to quickly identify issues:
```bash
RUNTIME=~/.claude/plugins/marketplaces/leo-claude-mktplace
echo "=== Installation Status ==="
[ -d "$RUNTIME" ] && echo "Installed: YES" || echo "Installed: NO"
echo -e "\n=== Virtual Environments ==="
[ -f "$RUNTIME/mcp-servers/gitea/.venv/bin/python" ] && echo "Gitea venv: OK" || echo "Gitea venv: MISSING"
[ -f "$RUNTIME/mcp-servers/netbox/.venv/bin/python" ] && echo "NetBox venv: OK" || echo "NetBox venv: MISSING"
echo -e "\n=== MCP Configuration ==="
[ -f "$RUNTIME/.mcp.json" ] && echo ".mcp.json: OK" || echo ".mcp.json: MISSING"
echo -e "\n=== Config Files ==="
[ -f ~/.config/claude/gitea.env ] && echo "gitea.env: OK" || echo "gitea.env: MISSING"
[ -f ~/.config/claude/netbox.env ] && echo "netbox.env: OK" || echo "netbox.env: MISSING"
```
---
## Common Issues and Fixes
| Issue | Symptom | Fix |
|-------|---------|-----|
| Missing venvs | "X MCP servers failed" | `cd ~/.claude/plugins/marketplaces/leo-claude-mktplace && ./scripts/setup.sh` |
| Missing .mcp.json | MCP tools not available | Check `.mcp.json` exists at marketplace root |
| Wrong path edits | Changes don't take effect | Edit installed path or reinstall after source changes |
| Missing credentials | MCP connection errors | Create `~/.config/claude/gitea.env` with API credentials |
| Invalid hook events | Hooks don't fire | Use only valid event names (see Step 7) |
| Gitea issues not closing | Merged to non-default branch | Manually close issues (see below) |
| MCP changes not taking effect | Session caching | Restart Claude Code session (see below) |
### Gitea Auto-Close Behavior
**Issue:** Using `Closes #XX` or `Fixes #XX` in commit/PR messages does NOT auto-close issues when merging to `development`.
**Root Cause:** Gitea only auto-closes issues when merging to the **default branch** (typically `main` or `master`). Merging to `development`, `staging`, or any other branch will NOT trigger auto-close.
**Workaround:**
1. Use the Gitea MCP tool to manually close issues after merging to development:
```
mcp__plugin_projman_gitea__update_issue(issue_number=XX, state="closed")
```
2. Or close issues via the Gitea web UI
3. The auto-close keywords will still work when the changes are eventually merged to `main`
**Recommendation:** Include the `Closes #XX` keywords in commits anyway - they'll work when the final merge to `main` happens.
### MCP Session Restart Requirement
**Issue:** Changes to MCP servers, hooks, or plugin configuration don't take effect immediately.
**Root Cause:** Claude Code loads MCP tools and plugin configuration at session start. These are cached in session memory and not reloaded dynamically.
**What requires a session restart:**
- MCP server code changes (Python files in `mcp-servers/`)
- Changes to `.mcp.json` files
- Changes to `hooks/hooks.json`
- Changes to `plugin.json`
- Adding new MCP tools or modifying tool signatures
**What does NOT require a restart:**
- Command/skill markdown files (`.md`) - these are read on invocation
- Agent markdown files - read when agent is invoked
**Correct workflow after plugin changes:**
1. Make changes to source files
2. Run `./scripts/verify-hooks.sh` to validate
3. Inform user: "Please restart Claude Code for changes to take effect"
4. **Do NOT clear cache mid-session** - see "Cache Clearing" section
---
## After Fixing Issues
1. **Restart Claude Code** - Plugins are loaded at startup
2. **Verify fix works** - Run a simple command that uses the MCP
3. **Document the issue** - If it's a new failure mode, add to this checklist
---
## Cache Clearing: When It's Safe vs Destructive
**⚠️ CRITICAL: Never clear plugin cache mid-session.**
### Why Cache Clearing Breaks MCP Tools
When Claude Code starts a session:
1. MCP tools are loaded from the cache directory
2. Tool definitions include **absolute paths** to the venv (e.g., `~/.claude/plugins/cache/.../venv/`)
3. These paths are cached in the session memory
4. Deleting the cache removes the venv, but the session still references the old paths
5. Any MCP tool making HTTP requests fails with TLS certificate errors
### When Cache Clearing is SAFE
| Scenario | Safe? | Action |
|----------|-------|--------|
| Before starting Claude Code | ✅ Yes | Clear cache, then start session |
| Between sessions | ✅ Yes | Clear cache after `/exit`, before next session |
| During a session | ❌ NO | Never - will break MCP tools |
| After plugin source edits | ❌ NO | Restart session instead |
### Recovery: MCP Tools Broken Mid-Session
If you accidentally cleared cache during a session and MCP tools fail:
```
Error: Could not find a suitable TLS CA certificate bundle, invalid path:
/home/.../.claude/plugins/cache/.../certifi/cacert.pem
```
**Fix:**
1. Exit the current session (`/exit` or Ctrl+C)
2. Start a new Claude Code session
3. MCP tools will reload from the reinstalled cache
### Correct Workflow for Plugin Development
1. Make changes to plugin source files
2. Run `./scripts/verify-hooks.sh` (verifies hook types)
3. Tell user: "Please restart Claude Code for changes to take effect"
4. **Do NOT clear cache** - session restart handles reloading
---
## Automated Diagnostics
Use these commands for automated checking:
- `/debug report` - Run full diagnostics, create issue if problems found
- `/debug review` - Investigate existing diagnostic issues and propose fixes
---
## Related Documentation
- `CLAUDE.md` - Installation Paths and Troubleshooting sections
- `docs/CONFIGURATION.md` - Setup and configuration guide
- `docs/UPDATING.md` - Update procedures

docs/UPDATING.md
@@ -1,24 +1,71 @@
# Updating Leo Claude Marketplace
This guide covers how to update your local installation when new versions are released.
---
## ⚠️ CRITICAL: Run Setup in Installed Location
When Claude Code installs a marketplace, it copies files to `~/.claude/plugins/marketplaces/` but **does NOT create Python virtual environments**. You must run setup manually after installation or update.
**After installing or updating the marketplace:**
```bash
# 1. Pull latest changes
cd /path/to/support-claude-mktplace
cd ~/.claude/plugins/marketplaces/leo-claude-mktplace && ./scripts/setup.sh
```
This creates the required `.venv` directories for MCP servers. Without this step, **all MCP servers will fail to start**.
---
## Quick Update (Source Repository)
```bash
# 1. Pull latest changes to source
cd /path/to/leo-claude-mktplace
git pull origin main
# 2. Run post-update script
# 2. Run post-update script (updates source repo venvs)
./scripts/post-update.sh
# 3. CRITICAL: Run setup in installed marketplace location
cd ~/.claude/plugins/marketplaces/leo-claude-mktplace && ./scripts/setup.sh
```
**Then restart your Claude Code session** to load any changes.
---
## What the Post-Update Script Does
1. **Updates Python dependencies** for Gitea and Wiki.js MCP servers
1. **Updates Python dependencies** for MCP servers (gitea, netbox)
2. **Shows recent changelog entries** so you know what changed
3. **Validates your configuration** is still compatible
---
## After Updating: Re-run Setup if Needed
### When to Re-run `/initial-setup`
You typically **don't need** to re-run setup after updates. However, re-run if:
- Changelog mentions **new required environment variables**
- Changelog mentions **breaking changes** to configuration
- MCP tools stop working after update
### For Existing Projects
If an update requires new project-level configuration:
```
/project-init
```
This will detect existing settings and only add what's missing.
---
## Manual Steps After Update
Some updates may require manual configuration changes:
@@ -29,8 +76,7 @@ If the changelog mentions new environment variables:
1. Check the variable name and purpose in the changelog
2. Add it to the appropriate config file:
- Gitea variables → `~/.config/claude/gitea.env`
- Wiki.js variables → `~/.config/claude/wikijs.env`
- System variables → `~/.config/claude/gitea.env` or `netbox.env`
- Project variables → `.env` in your project root
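For example, if the changelog introduced a hypothetical `GITEA_NEW_VAR` (the name is illustrative; use the variable named in the changelog), it would be appended to the system-level file:
```bash
# Append the new variable to the system config (GITEA_NEW_VAR is a placeholder)
echo 'GITEA_NEW_VAR=value' >> ~/.config/claude/gitea.env
chmod 600 ~/.config/claude/gitea.env   # keep credential files private
```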
### New MCP Server Features
@@ -38,30 +84,56 @@ If the changelog mentions new environment variables:
If a new MCP server tool is added:
1. The post-update script handles dependency installation
2. Check `docs/references/MCP-*.md` for usage documentation
3. New tools are available immediately after update
2. Check plugin documentation for usage
3. New tools are available immediately after session restart
### Breaking Changes
Breaking changes will be clearly marked in CHANGELOG.md with migration instructions.
## Troubleshooting
### Setup Script and Configuration Workflow Changes
When updating, review whether the changes affect the setup workflow:
1. **Check for setup command changes:**
```bash
git diff HEAD~1 plugins/*/commands/initial-setup.md
git diff HEAD~1 plugins/*/commands/project-init.md
git diff HEAD~1 plugins/*/commands/project-sync.md
```
2. **Check for hook changes:**
```bash
git diff HEAD~1 plugins/*/hooks/hooks.json
```
3. **Check for configuration structure changes:**
```bash
git diff HEAD~1 docs/CONFIGURATION.md
```
**If setup commands changed:**
- Review what's new (new validation steps, new prompts, etc.)
- Consider re-running `/initial-setup` or `/project-init` to benefit from improvements
- Existing configurations remain valid unless changelog notes breaking changes
**If hooks changed:**
- Restart your Claude Code session to load new hooks
- New hooks (like SessionStart validation) activate automatically
**If configuration structure changed:**
- Check if new variables are required
- Run `/project-sync` if repository detection logic improved
---
## Troubleshooting Updates
### Dependencies fail to install
```bash
# Rebuild virtual environment
# Install missing dependencies (do NOT delete .venv)
cd mcp-servers/gitea
rm -rf .venv
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
deactivate
# Repeat for wikijs
cd ../wikijs
rm -rf .venv
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
deactivate
@@ -70,14 +142,47 @@ deactivate
### Configuration no longer works
1. Check CHANGELOG.md for breaking changes
2. Compare your config files with updated `.env.example` (if provided)
3. Run `./scripts/setup.sh` to validate configuration
2. Run `/initial-setup` to re-validate and fix configuration
3. Compare your config files with documentation in `docs/CONFIGURATION.md`
### MCP server won't start
### MCP server won't start after update
**Most common cause:** Virtual environments don't exist in the installed marketplace.
```bash
# Fix: Run setup in installed location
cd ~/.claude/plugins/marketplaces/leo-claude-mktplace && ./scripts/setup.sh
```
If that doesn't work:
1. Check Python version: `python3 --version` (requires 3.10+)
2. Verify venv exists: `ls mcp-servers/gitea/.venv`
3. Check logs for specific errors
2. Verify venv exists in INSTALLED location:
```bash
ls ~/.claude/plugins/marketplaces/leo-claude-mktplace/mcp-servers/gitea/.venv
ls ~/.claude/plugins/marketplaces/leo-claude-mktplace/mcp-servers/netbox/.venv
```
3. If missing, run setup.sh as shown above.
4. Restart Claude Code session
5. Check logs for specific errors
### "X MCP servers failed" on startup
This almost always means the venvs don't exist in the installed marketplace:
```bash
cd ~/.claude/plugins/marketplaces/leo-claude-mktplace && ./scripts/setup.sh
```
Then restart Claude Code.
### New commands not available
1. Restart your Claude Code session
2. Verify the plugin is still installed
3. Check if the command requires additional setup
---
## Version Pinning
@@ -88,14 +193,28 @@ To stay on a specific version:
git tag
# Checkout specific version
git checkout v1.0.0
git checkout v3.0.0
# Run post-update
./scripts/post-update.sh
```
---
## Checking Current Version
The version is displayed in the main README.md title and in `CHANGELOG.md`.
```bash
# Check version from changelog
head -20 CHANGELOG.md
```
---
## Getting Help
- Check `docs/references/` for component documentation
- Review CHANGELOG.md for recent changes
- Check `docs/CONFIGURATION.md` for setup guide
- Check `docs/COMMANDS-CHEATSHEET.md` for command reference
- Review `CHANGELOG.md` for recent changes
- Search existing issues in Gitea

View File

@@ -2,7 +2,7 @@
**Target File:** `docs/architecture/agent-workflow.drawio`
**Purpose:** Shows when Planner, Orchestrator, and Executor agents trigger during sprint lifecycle.
**Purpose:** Shows when Planner, Orchestrator, Executor, and Code Reviewer agents trigger during sprint lifecycle.
**Diagram Type:** Swimlane / Sequence Diagram
@@ -16,8 +16,8 @@
| planner-lane | Planner Agent | #4A90D9 | 2 |
| orchestrator-lane | Orchestrator Agent | #7CB342 | 3 |
| executor-lane | Executor Agent | #FF9800 | 4 |
| gitea-lane | Gitea | #9E9E9E | 5 |
| wikijs-lane | Wiki.js | #9E9E9E | 6 (rightmost) |
| reviewer-lane | Code Reviewer Agent | #9C27B0 | 5 |
| gitea-lane | Gitea (Issues + Wiki) | #9E9E9E | 6 (rightmost) |
---
@@ -30,7 +30,7 @@
| p1-start | /sprint-plan | rounded-rect | user-lane | 1 |
| p1-activate | Planner Activates | rectangle | planner-lane | 2 |
| p1-search-lessons | Search Lessons Learned | rectangle | planner-lane | 3 |
| p1-wikijs-query | Query Past Lessons | rectangle | wikijs-lane | 4 |
| p1-gitea-wiki-query | Query Past Lessons (Wiki) | rectangle | gitea-lane | 4 |
| p1-return-lessons | Return Relevant Lessons | rectangle | planner-lane | 5 |
| p1-clarify | Ask Clarifying Questions | diamond | planner-lane | 6 |
| p1-user-answers | Provide Answers | rectangle | user-lane | 7 |
@@ -44,8 +44,8 @@
|------|----|-------|-------|
| p1-start | p1-activate | invokes | solid |
| p1-activate | p1-search-lessons | | solid |
| p1-search-lessons | p1-wikijs-query | GraphQL search | solid |
| p1-wikijs-query | p1-return-lessons | lessons data | dashed |
| p1-search-lessons | p1-gitea-wiki-query | REST API (search_lessons) | solid |
| p1-gitea-wiki-query | p1-return-lessons | lessons data | dashed |
| p1-return-lessons | p1-clarify | | solid |
| p1-clarify | p1-user-answers | questions | solid |
| p1-user-answers | p1-clarify | answers | dashed |
@@ -65,7 +65,7 @@
| p2-orch-activate | Orchestrator Activates | rectangle | orchestrator-lane | 12 |
| p2-fetch-issues | Fetch Sprint Issues | rectangle | orchestrator-lane | 13 |
| p2-gitea-list | List Open Issues | rectangle | gitea-lane | 14 |
| p2-sequence | Sequence Work | rectangle | orchestrator-lane | 15 |
| p2-sequence | Sequence Work (Dependencies) | rectangle | orchestrator-lane | 15 |
| p2-dispatch | Dispatch Task | rectangle | orchestrator-lane | 16 |
| p2-exec-activate | Executor Activates | rectangle | executor-lane | 17 |
| p2-implement | Implement Task | rectangle | executor-lane | 18 |
@@ -83,7 +83,7 @@
| p2-orch-activate | p2-fetch-issues | | solid |
| p2-fetch-issues | p2-gitea-list | REST API | solid |
| p2-gitea-list | p2-sequence | issues data | dashed |
| p2-sequence | p2-dispatch | | solid |
| p2-sequence | p2-dispatch | parallel batching | solid |
| p2-dispatch | p2-exec-activate | execution prompt | solid |
| p2-exec-activate | p2-implement | | solid |
| p2-implement | p2-update-status | | solid |
@@ -95,23 +95,50 @@
---
## PHASE 2.5: CODE REVIEW (Pre-Close)
### Nodes
| ID | Label | Type | Lane | Sequence |
|----|-------|------|------|----------|
| p25-start | /review | rounded-rect | user-lane | 24 |
| p25-reviewer-activate | Code Reviewer Activates | rectangle | reviewer-lane | 25 |
| p25-scan-changes | Scan Recent Changes | rectangle | reviewer-lane | 26 |
| p25-check-quality | Check Code Quality | rectangle | reviewer-lane | 27 |
| p25-security-scan | Security Scan | rectangle | reviewer-lane | 28 |
| p25-report | Generate Review Report | rectangle | reviewer-lane | 29 |
| p25-complete | Review Complete | rounded-rect | reviewer-lane | 30 |
### Edges
| From | To | Label | Style |
|------|----|-------|-------|
| p25-start | p25-reviewer-activate | invokes | solid |
| p25-reviewer-activate | p25-scan-changes | | solid |
| p25-scan-changes | p25-check-quality | | solid |
| p25-check-quality | p25-security-scan | | solid |
| p25-security-scan | p25-report | | solid |
| p25-report | p25-complete | | solid |
---
## PHASE 3: SPRINT CLOSE
### Nodes
| ID | Label | Type | Lane | Sequence |
|----|-------|------|------|----------|
| p3-start | /sprint-close | rounded-rect | user-lane | 24 |
| p3-orch-activate | Orchestrator Activates | rectangle | orchestrator-lane | 25 |
| p3-review | Review Sprint | rectangle | orchestrator-lane | 26 |
| p3-gitea-status | Get Final Status | rectangle | gitea-lane | 27 |
| p3-capture | Capture Lessons Learned | rectangle | orchestrator-lane | 28 |
| p3-user-input | Confirm Lessons | diamond | user-lane | 29 |
| p3-create-wiki | Create Wiki Pages | rectangle | orchestrator-lane | 30 |
| p3-wikijs-create | Store Lessons | rectangle | wikijs-lane | 31 |
| p3-close-issues | Close Issues | rectangle | orchestrator-lane | 32 |
| p3-gitea-close | Mark Closed | rectangle | gitea-lane | 33 |
| p3-complete | Sprint Closed | rounded-rect | orchestrator-lane | 34 |
| p3-start | /sprint-close | rounded-rect | user-lane | 31 |
| p3-orch-activate | Orchestrator Activates | rectangle | orchestrator-lane | 32 |
| p3-review | Review Sprint | rectangle | orchestrator-lane | 33 |
| p3-gitea-status | Get Final Status | rectangle | gitea-lane | 34 |
| p3-capture | Capture Lessons Learned | rectangle | orchestrator-lane | 35 |
| p3-user-input | Confirm Lessons | diamond | user-lane | 36 |
| p3-create-wiki | Create Wiki Pages | rectangle | orchestrator-lane | 37 |
| p3-gitea-wiki-create | Store Lessons (Wiki) | rectangle | gitea-lane | 38 |
| p3-close-issues | Close Issues | rectangle | orchestrator-lane | 39 |
| p3-gitea-close | Mark Closed | rectangle | gitea-lane | 40 |
| p3-complete | Sprint Closed | rounded-rect | orchestrator-lane | 41 |
### Edges
@@ -123,8 +150,8 @@
| p3-gitea-status | p3-capture | status data | dashed |
| p3-capture | p3-user-input | proposed lessons | solid |
| p3-user-input | p3-create-wiki | confirmed | solid |
| p3-create-wiki | p3-wikijs-create | GraphQL mutation | solid |
| p3-wikijs-create | p3-close-issues | confirm | dashed |
| p3-create-wiki | p3-gitea-wiki-create | REST API (create_lesson) | solid |
| p3-gitea-wiki-create | p3-close-issues | confirm | dashed |
| p3-close-issues | p3-gitea-close | REST API | solid |
| p3-gitea-close | p3-complete | confirm | dashed |
@@ -133,59 +160,71 @@
## LAYOUT NOTES
```
+--------+------------+---------------+------------+--------+----------+
| User | Planner | Orchestrator | Executor | Gitea | Wiki.js |
+--------+------------+---------------+------------+--------+----------+
| | | | | | |
| PHASE 1: SPRINT PLANNING |
|---------------------------------------------------------------------+
| O | | | | | |
| | | | | | | |
| +---->| O | | | | |
| | | | | | | |
| | +----------|---------------|------------|------->| O |
| | |<---------|---------------|------------|--------+ | |
| | | | | | | |
| | O<> | | | | |
| O<--->+ | | | | | |
| | | | | | | |
| | +----------|---------------|----------->| O | |
| | O | | | | |
| | | | | | |
|---------------------------------------------------------------------+
| PHASE 2: SPRINT EXECUTION |
|---------------------------------------------------------------------+
| O | | | | | |
| | | | | | | |
| +-----|----------->| O | | | |
| | | | | | | |
| | | +-------------|----------->| O | |
| | | |<------------|------------+ | | |
| | | | | | | |
| | | +------------>| O | | |
| | | | | | | |
| | | | +--------->| O | |
| | | | |<---------+ | | |
| | | O<------------+ | | | |
| | | | | | | |
| | | O (loop) | | | |
| | | | | | |
|---------------------------------------------------------------------+
| PHASE 3: SPRINT CLOSE |
|---------------------------------------------------------------------+
| O | | | | | |
| | | | | | | |
| +-----|----------->| O | | | |
| | | +-------------|----------->| O | |
| | | |<------------|------------+ | | |
| | | | | | | |
| O<----|-----------<+ | | | | |
| +-----|----------->| | | | | |
| | | +-------------|------------|------->| O |
| | | |<------------|------------|--------+ | |
| | | +-------------|----------->| O | |
| | | O | | | |
+--------+------------+---------------+------------+--------+----------+
+--------+------------+---------------+------------+----------+------------------+
| User | Planner | Orchestrator | Executor | Reviewer | Gitea |
| | | | | | (Issues + Wiki) |
+--------+------------+---------------+------------+----------+------------------+
| | | | | | |
| PHASE 1: SPRINT PLANNING |
|-------------------------------------------------------------------------------|
| O | | | | | |
| | | | | | | |
| +---->| O | | | | |
| | | | | | | |
| | +----------|---------------|------------|--------->| O (Wiki Query) |
| | |<---------|---------------|------------|----------+ | |
| | | | | | | |
| | O<> | | | | |
| O<--->+ | | | | | |
| | | | | | | |
| | +----------|---------------|------------|--------->| O (Issues) |
| | O | | | | |
| | | | | | |
|-------------------------------------------------------------------------------|
| PHASE 2: SPRINT EXECUTION |
|-------------------------------------------------------------------------------|
| O | | | | | |
| | | | | | | |
| +-----|----------->| O | | | |
| | | | | | | |
| | | +-------------|------------|--------->| O (Issues) |
| | | |<------------|------------|----------+ | |
| | | | | | | |
| | | +------------>| O | | |
| | | | | | | |
| | | | +----------|--------->| O (Issues) |
| | | | |<---------|----------+ | |
| | | O<------------+ | | | |
| | | | | | | |
| | | O (loop) | | | |
| | | | | | |
|-------------------------------------------------------------------------------|
| PHASE 2.5: CODE REVIEW |
|-------------------------------------------------------------------------------|
| O | | | | | |
| | | | | | | |
| +-----|------------|---------------|----------->| O | |
| | | | | | | |
| | | | | O->O->O | |
| | | | | | | |
| | | | | O | |
| | | | | | |
|-------------------------------------------------------------------------------|
| PHASE 3: SPRINT CLOSE |
|-------------------------------------------------------------------------------|
| O | | | | | |
| | | | | | | |
| +-----|----------->| O | | | |
| | | +-------------|------------|--------->| O (Issues) |
| | | |<------------|------------|----------+ | |
| | | | | | | |
| O<----|-----------<+ | | | | |
| +-----|----------->| | | | | |
| | | +-------------|------------|--------->| O (Wiki Create) |
| | | |<------------|------------|----------+ | |
| | | +-------------|------------|--------->| O (Issues Close) |
| | | O | | | |
+--------+------------+---------------+------------+----------+------------------+
```
---
@@ -198,7 +237,8 @@
| Blue | #4A90D9 | Planner Agent |
| Green | #7CB342 | Orchestrator Agent |
| Orange | #FF9800 | Executor Agent |
| Gray | #9E9E9E | External Services |
| Purple | #9C27B0 | Code Reviewer Agent |
| Gray | #9E9E9E | External Services (Gitea) |
---
@@ -219,3 +259,13 @@
|-------|---------|
| Solid | Action/Request |
| Dashed | Response/Data return |
---
## ARCHITECTURE NOTES
- **Gitea provides BOTH issue tracking AND wiki** (no separate wiki service)
- All wiki operations use Gitea REST API via MCP tools
- Lessons learned stored in Gitea Wiki under `lessons-learned/sprints/`
- MCP tools: `search_lessons`, `create_lesson`, `list_wiki_pages`, `get_wiki_page`
- Four-agent model: Planner, Orchestrator, Executor, Code Reviewer

View File

@@ -13,22 +13,26 @@
| ID | Label | Type | Color | Position |
|----|-------|------|-------|----------|
| projman | projman | rectangle | #4A90D9 | top-center |
| projman-pmo | projman-pmo | rectangle | #4A90D9 | top-right |
| projman-pmo | projman-pmo (planned) | rectangle | #4A90D9 | top-right |
| project-hygiene | project-hygiene | rectangle | #4A90D9 | top-left |
| claude-config | claude-config-maintainer | rectangle | #4A90D9 | bottom-left |
| cmdb-assistant | cmdb-assistant | rectangle | #4A90D9 | bottom-right |
### MCP Servers (Green - #7CB342)
| ID | Label | Type | Color | Position |
|----|-------|------|-------|----------|
| gitea-mcp | Gitea MCP Server | rectangle | #7CB342 | middle-left |
| wikijs-mcp | Wiki.js MCP Server | rectangle | #7CB342 | middle-right |
MCP servers are **bundled inside each plugin** that needs them.
| ID | Label | Type | Color | Position | Bundled In |
|----|-------|------|-------|----------|------------|
| gitea-mcp | Gitea MCP Server | rectangle | #7CB342 | middle-left | projman |
| netbox-mcp | NetBox MCP Server | rectangle | #7CB342 | middle-right | cmdb-assistant |
### External Systems (Gray - #9E9E9E)
| ID | Label | Type | Color | Position |
|----|-------|------|-------|----------|
| gitea-instance | Gitea\ngitea.hotserv.cloud | cylinder | #9E9E9E | bottom-left |
| wikijs-instance | Wiki.js\nwikijs.hotserv.cloud | cylinder | #9E9E9E | bottom-right |
| gitea-instance | Gitea\n(Issues + Wiki) | cylinder | #9E9E9E | bottom-left |
| netbox-instance | NetBox | cylinder | #9E9E9E | bottom-right |
### Configuration (Orange - #FF9800)
@@ -45,10 +49,8 @@
| From | To | Label | Style | Arrow |
|------|----|-------|-------|-------|
| projman | gitea-mcp | uses | solid | forward |
| projman | wikijs-mcp | uses | solid | forward |
| projman-pmo | gitea-mcp | uses (company-wide) | solid | forward |
| projman-pmo | wikijs-mcp | uses (company-wide) | solid | forward |
| projman | gitea-mcp | bundled | solid | bidirectional |
| cmdb-assistant | netbox-mcp | bundled | solid | bidirectional |
### Plugin Dependencies
@@ -61,16 +63,16 @@
| From | To | Label | Style | Arrow |
|------|----|-------|-------|-------|
| gitea-mcp | gitea-instance | REST API | solid | forward |
| wikijs-mcp | wikijs-instance | GraphQL | solid | forward |
| netbox-mcp | netbox-instance | REST API | solid | forward |
### Configuration Connections
| From | To | Label | Style | Arrow |
|------|----|-------|-------|-------|
| system-config | gitea-mcp | credentials | dashed | forward |
| system-config | wikijs-mcp | credentials | dashed | forward |
| system-config | netbox-mcp | credentials | dashed | forward |
| project-config | gitea-mcp | repo context | dashed | forward |
| project-config | wikijs-mcp | project path | dashed | forward |
| project-config | netbox-mcp | site context | dashed | forward |
---
@@ -78,9 +80,8 @@
| ID | Label | Contains | Style |
|----|-------|----------|-------|
| plugins-group | Plugins | projman, projman-pmo, project-hygiene | light blue border |
| mcp-group | Shared MCP Servers | gitea-mcp, wikijs-mcp | light green border |
| external-group | External Services | gitea-instance, wikijs-instance | light gray border |
| plugins-group | Plugins | projman, projman-pmo, project-hygiene, claude-config, cmdb-assistant | light blue border |
| external-group | External Services | gitea-instance, netbox-instance | light gray border |
| config-group | Configuration | system-config, project-config | light orange border |
---
@@ -92,25 +93,21 @@
| PLUGINS GROUP |
| +----------------+ +----------------+ +-------------------+ |
| | project- | | projman | | projman-pmo | |
| | hygiene | | | | | |
| +----------------+ +-------+--------+ +--------+----------+ |
| | | |
| | hygiene | | [gitea-mcp] | | (planned) | |
| +----------------+ +-------+--------+ +-------------------+ |
| | |
| +----------------+ +-------------------+ |
| | claude-config | | cmdb-assistant | |
| | -maintainer | | [netbox-mcp] | |
| +----------------+ +--------+----------+ |
+------------------------------------------------------------------+
| |
v v
+------------------------------------------------------------------+
| MCP SERVERS GROUP |
| +-------------------+ +-------------------+ |
| | Gitea MCP Server | | Wiki.js MCP Server| |
| +--------+----------+ +---------+---------+ |
+------------------------------------------------------------------+
| |
v v
|
v
+------------------------------------------------------------------+
| EXTERNAL SERVICES GROUP |
| +-------------------+ +-------------------+ |
| | Gitea | | Wiki.js | |
| | gitea.hotserv.cloud | wikijs.hotserv.cloud |
| | Gitea | | NetBox | |
| | (Issues + Wiki) | | | |
| +-------------------+ +-------------------+ |
+------------------------------------------------------------------+
@@ -128,6 +125,15 @@ CONFIG GROUP (left side): CONFIG GROUP (right side):
| Color | Hex | Meaning |
|-------|-----|---------|
| Blue | #4A90D9 | Plugins |
| Green | #7CB342 | MCP Servers |
| Green | #7CB342 | MCP Servers (bundled in plugins) |
| Gray | #9E9E9E | External Systems |
| Orange | #FF9800 | Configuration |
---
## ARCHITECTURE NOTES
- MCP servers are **bundled inside plugins** (not shared at root)
- Gitea provides both issue tracking AND wiki (lessons learned)
- No separate Wiki.js - all wiki functionality uses Gitea Wiki
- Each plugin is self-contained for Claude Code caching

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,692 +0,0 @@
# Project Management Plugins - Project Summary
## Overview
This project builds two Claude Code plugins that transform a proven 15-sprint workflow into reusable, distributable tools for managing software development with Gitea, Wiki.js, and agile methodologies.
**Status:** Planning phase complete, ready for implementation
---
## The Two Plugins
### 1. projman (Single-Repository)
**Purpose:** Project management for individual repositories
**Users:** Developers, Team Leads
**Build Order:** Build FIRST
**Key Features:**
- Sprint planning with AI agents
- Issue creation with 43-label taxonomy
- Lessons learned capture in Wiki.js
- Branch-aware security model
- Hybrid configuration system
**Reference:** [PLUGIN-PROJMAN.md](./PLUGIN-PROJMAN.md)
### 2. projman-pmo (Multi-Project)
**Purpose:** PMO coordination across organization
**Users:** PMO Coordinators, Engineering Managers, CTOs
**Build Order:** Build SECOND (after projman validated)
**Key Features:**
- Cross-project status aggregation
- Dependency tracking and visualization
- Resource conflict detection
- Release coordination
- Company-wide lessons learned search
**Reference:** [PLUGIN-PMO.md](./PLUGIN-PMO.md)
---
## Core Architecture
### Shared MCP Servers
Both plugins share the same MCP server codebase at repository root (`mcp-servers/`):
**1. Gitea MCP Server**
- Issue management (CRUD operations)
- Label taxonomy system (43 labels)
- Mode detection (project vs company-wide)
**Reference:** [MCP-GITEA.md](./MCP-GITEA.md)
**2. Wiki.js MCP Server**
- Documentation management
- Lessons learned capture and search
- GraphQL API integration
- Company-wide knowledge base
**Reference:** [MCP-WIKIJS.md](./MCP-WIKIJS.md)
### Mode Detection
The MCP servers detect their operating mode based on environment variables:
**Project Mode (projman):**
- `GITEA_REPO` present → operates on single repository
- `WIKIJS_PROJECT` present → operates on single project path
**Company Mode (pmo):**
- No `GITEA_REPO` → operates on all repositories
- No `WIKIJS_PROJECT` → operates on entire company namespace
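A minimal sketch of what this detection could look like inside the servers (illustrative only; the real logic lives in each server's `mcp_server/` package):
```python
import os

def detect_mode() -> str:
    """Return "project" when project-scoped env vars are set, else "company" (sketch)."""
    if os.getenv("GITEA_REPO") or os.getenv("WIKIJS_PROJECT"):
        return "project"
    return "company"
```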
---
## Repository Structure
```
bandit/support-claude-mktplace/
├── mcp-servers/ # ← SHARED BY BOTH PLUGINS
│ ├── gitea/ # Gitea MCP Server
│ │ ├── .venv/
│ │ ├── requirements.txt
│ │ ├── mcp_server/
│ │ └── tests/
│ └── wikijs/ # Wiki.js MCP Server
│ ├── .venv/
│ ├── requirements.txt
│ ├── mcp_server/
│ └── tests/
├── projman/ # ← PROJECT PLUGIN
│ ├── .claude-plugin/
│ │ └── plugin.json
│ ├── .mcp.json # Points to ../mcp-servers/
│ ├── commands/
│ │ ├── sprint-plan.md
│ │ ├── sprint-start.md
│ │ ├── sprint-status.md
│ │ ├── sprint-close.md
│ │ └── labels-sync.md
│ ├── agents/
│ │ ├── planner.md
│ │ ├── orchestrator.md
│ │ └── executor.md
│ ├── skills/
│ │ └── label-taxonomy/
│ ├── README.md
│ └── CONFIGURATION.md
└── projman-pmo/ # ← PMO PLUGIN
├── .claude-plugin/
│ └── plugin.json
├── .mcp.json # Points to ../mcp-servers/
├── commands/
│ ├── pmo-status.md
│ ├── pmo-priorities.md
│ ├── pmo-dependencies.md
│ └── pmo-schedule.md
├── agents/
│ └── pmo-coordinator.md
└── README.md
```
---
## Configuration Architecture
### Hybrid Configuration System
The plugins use a hybrid configuration approach that balances security and flexibility:
**System-Level (Once per machine):**
- `~/.config/claude/gitea.env` - Gitea credentials
- `~/.config/claude/wikijs.env` - Wiki.js credentials
**Project-Level (Per repository):**
- `project-root/.env` - Repository and project paths
**Benefits:**
- Single token per service (update once)
- Project isolation
- Security (tokens never committed)
- Easy multi-project setup
### Configuration Example
**System-Level:**
```bash
# ~/.config/claude/gitea.env
GITEA_API_URL=https://gitea.example.com/api/v1
GITEA_API_TOKEN=your_token
GITEA_OWNER=bandit
# ~/.config/claude/wikijs.env
WIKIJS_API_URL=https://wiki.your-company.com/graphql
WIKIJS_API_TOKEN=your_token
WIKIJS_BASE_PATH=/your-org
```
**Project-Level:**
```bash
# project-root/.env (for projman)
GITEA_REPO=cuisineflow
WIKIJS_PROJECT=projects/cuisineflow
# No .env needed for pmo (company-wide mode)
```
---
## Key Architectural Decisions
### 1. Two MCP Servers (Shared)
**Decision:** Separate Gitea and Wiki.js servers at repository root
**Why:**
- Clear separation of concerns
- Independent configuration
- Better maintainability
- Professional architecture
### 2. Python Implementation
**Decision:** Python 3.10+ for MCP servers
**Why:**
- Modern async/await improvements
- Better type hints support
- Good balance of compatibility vs features
- Widely available (released Oct 2021)
- Most production servers have 3.10+ by now
### 3. Wiki.js for Lessons Learned
**Decision:** Use Wiki.js instead of Git-based Wiki
**Why:**
- Rich editor and search
- Built-in tag system
- Version history
- Web-based collaboration
- GraphQL API
- Company-wide accessibility
### 4. Hybrid Configuration
**Decision:** System-level + project-level configuration
**Why:**
- Single token per service (security)
- Per-project customization (flexibility)
- Easy multi-project setup
- Never commit tokens to git
### 5. Mode Detection in MCP Servers
**Decision:** Detect mode based on environment variables
**Why:**
- Same codebase for both plugins
- No code duplication
- Fix bugs once, both benefit
- Clear separation of concerns
### 6. Build Order: projman First
**Decision:** Build projman completely before starting pmo
**Why:**
- Validate core functionality
- Establish patterns
- Reduce risk
- PMO builds on projman foundation
---
## The Three-Agent Model
### Projman Agents
**Planner Agent:**
- Sprint planning and architecture analysis
- Asks clarifying questions
- Suggests appropriate labels
- Creates Gitea issues
- Searches relevant lessons learned
**Orchestrator Agent:**
- Sprint progress monitoring
- Task coordination
- Blocker identification
- Git operations
- Generates lean execution prompts
**Executor Agent:**
- Implementation guidance
- Code review suggestions
- Testing strategy
- Quality standards enforcement
- Documentation
### PMO Agent
**PMO Coordinator:**
- Strategic view across all projects
- Cross-project dependency tracking
- Resource conflict detection
- Release coordination
- Delegates to projman agents for details
---
## Wiki.js Structure
```
Wiki.js: https://wiki.your-company.com
└── /your-org/
├── projects/ # Project-specific
│ ├── project-a/
│ │ ├── lessons-learned/
│ │ │ ├── sprints/
│ │ │ ├── patterns/
│ │ │ └── INDEX.md
│ │ └── documentation/
│ ├── project-b/
│ ├── project-c/
│ └── company-site/
├── company/ # Company-wide
│ ├── processes/
│ ├── standards/
│ └── tools/
└── shared/ # Cross-project
├── architecture-patterns/
├── best-practices/
└── tech-stack/
```
**Reference:** [MCP-WIKIJS.md](./MCP-WIKIJS.md#wiki-js-structure)
---
## Label Taxonomy System
### Dynamic Label System (44 labels currently)
Labels are **fetched dynamically from Gitea** at runtime via the `/labels-sync` command:
**Organization Labels (28):**
- Agent/2
- Complexity/3
- Efforts/5
- Priority/4
- Risk/3
- Source/4
- Type/6 (Bug, Feature, Refactor, Documentation, Test, Chore)
**Repository Labels (16):**
- Component/9 (Backend, Frontend, API, Database, Auth, Deploy, Testing, Docs, Infra)
- Tech/7 (Python, JavaScript, Docker, PostgreSQL, Redis, Vue, FastAPI)
### Type/Refactor Label
**Organization-level label** for architectural work:
- Service extraction
- Architecture modifications
- Code restructuring
- Technical debt reduction
**Note:** Label count may change. Always sync from Gitea using `/labels-sync` command. When new labels are detected, the command will explain changes and update suggestion logic.
**Reference:** [PLUGIN-PROJMAN.md](./PLUGIN-PROJMAN.md#label-taxonomy-system)
---
## Build Order & Phases
### Build projman First (Phases 1-8)
**Phase 1:** Core Infrastructure (MCP servers)
**Phase 2:** Sprint Planning Commands
**Phase 3:** Agent System
**Phase 4:** Lessons Learned System
**Phase 5:** Testing & Validation
**Phase 6:** Documentation & Refinement
**Phase 7:** Marketplace Preparation
**Phase 8:** Production Hardening
**Reference:** [PLUGIN-PROJMAN.md](./PLUGIN-PROJMAN.md#implementation-phases)
### Build pmo Second (Phases 9-12)
**Phase 9:** PMO Plugin Foundation
**Phase 10:** PMO Commands & Workflows
**Phase 11:** PMO Testing & Integration
**Phase 12:** Production Deployment
**Reference:** [PLUGIN-PMO.md](./PLUGIN-PMO.md#implementation-phases)
---
## Quick Start Guide
### 1. System Configuration
```bash
# Create config directory
mkdir -p ~/.config/claude
# Gitea config
cat > ~/.config/claude/gitea.env << EOF
GITEA_API_URL=https://gitea.example.com/api/v1
GITEA_API_TOKEN=your_gitea_token
GITEA_OWNER=bandit
EOF
# Wiki.js config
cat > ~/.config/claude/wikijs.env << EOF
WIKIJS_API_URL=https://wiki.your-company.com/graphql
WIKIJS_API_TOKEN=your_wikijs_token
WIKIJS_BASE_PATH=/your-org
EOF
# Secure files
chmod 600 ~/.config/claude/*.env
```
### 2. Project Configuration
```bash
# In each project root (for projman)
cat > .env << EOF
GITEA_REPO=cuisineflow
WIKIJS_PROJECT=projects/cuisineflow
EOF
# Add to .gitignore
echo ".env" >> .gitignore
```
### 3. MCP Server Setup
```bash
# Gitea MCP Server
cd mcp-servers/gitea
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
# Wiki.js MCP Server
cd mcp-servers/wikijs
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```
### 4. Validate Setup
```bash
# Test MCP servers
python -m mcp_server.server --test # In each MCP directory
# Test plugin loading
claude plugin test projman
claude plugin test projman-pmo
```
---
## Document Organization
This documentation is organized into 4 focused files plus this summary:
### 1. Gitea MCP Server Reference
**File:** [MCP-GITEA.md](./MCP-GITEA.md)
**Contains:**
- Configuration setup
- Python implementation
- API client code
- Issue and label tools
- Testing strategies
- Mode detection
- Performance optimization
**Use when:** Implementing or troubleshooting Gitea integration
### 2. Wiki.js MCP Server Reference
**File:** [MCP-WIKIJS.md](./MCP-WIKIJS.md)
**Contains:**
- Configuration setup
- GraphQL client implementation
- Wiki.js structure
- Lessons learned system
- Documentation tools
- Company-wide patterns
- PMO multi-project methods
**Use when:** Implementing or troubleshooting Wiki.js integration
### 3. Projman Plugin Reference
**File:** [PLUGIN-PROJMAN.md](./PLUGIN-PROJMAN.md)
**Contains:**
- Plugin structure
- Commands (sprint-plan, sprint-start, sprint-status, sprint-close, labels-sync)
- Three agents (planner, orchestrator, executor)
- Sprint workflow
- Label taxonomy
- Branch-aware security
- Implementation phases 1-8
**Use when:** Building or using the projman plugin
### 4. PMO Plugin Reference
**File:** [PLUGIN-PMO.md](./PLUGIN-PMO.md)
**Contains:**
- PMO plugin structure
- Multi-project commands
- PMO coordinator agent
- Cross-project coordination
- Dependency tracking
- Resource conflict detection
- Implementation phases 9-12
**Use when:** Building or using the projman-pmo plugin
### 5. This Summary
**File:** PROJECT-SUMMARY.md (this document)
**Contains:**
- Project overview
- Architecture decisions
- Configuration approach
- Quick start guide
- References to detailed docs
**Use when:** Getting started or need high-level overview
---
## Key Success Metrics
### Technical Metrics
- Sprint planning time reduced by 40%
- Manual steps eliminated: 10+ per sprint
- Lessons learned capture rate: 100% (vs 0% before)
- Label accuracy on issues: 90%+
- Configuration setup time: < 5 minutes
### User Metrics
- User satisfaction: Better than current manual workflow
- Learning curve: < 1 hour to basic proficiency
- Error rate: < 5% incorrect operations
- Adoption rate: 100% team adoption within 1 month
### PMO Metrics
- Cross-project visibility: 100% (vs fragmented before)
- Dependency detection: Automated (vs manual tracking)
- Resource conflict identification: Proactive (vs reactive)
- Release coordination: Streamlined (vs ad-hoc)
---
## Critical Lessons from 15 Sprints
### Why Lessons Learned Is Critical
After 15 sprints without systematic lesson capture, repeated mistakes occurred:
- Claude Code infinite loops on similar issues: 2-3 times
- Same architectural mistakes: Multiple occurrences
- Forgotten optimizations: Re-discovered each time
**Solution:** Mandatory lessons learned capture at sprint close, searchable at sprint start
### Branch Detection Must Be 100% Reliable
Production accidents are unacceptable. Branch-aware security prevents:
- Accidental code changes on production branch
- Sprint planning on wrong branch
- Deployment mistakes
**Implementation:** Two layers - CLAUDE.md (file-level) + Plugin agents (tool-level)
### Configuration Complexity Is a Blocker
Previous attempts failed due to:
- Complex per-project setup
- Token management overhead
- Multiple configuration files
**Solution:** Hybrid approach - system-level tokens + simple project-level paths
---
## Next Steps
### Immediate Actions
1. **Set up system configuration** (Gitea + Wiki.js tokens)
2. **Create Wiki.js base structure** at `/your-org`
3. **Begin Phase 1.1a** - Gitea MCP Server implementation
4. **Begin Phase 1.1b** - Wiki.js MCP Server implementation
### Phase Execution
1. **Phases 1-4:** Build core projman functionality
2. **Phase 5:** Validate with real sprint (e.g., Intuit Engine extraction)
3. **Phases 6-8:** Polish, document, and harden projman
4. **Phases 9-12:** Build and validate pmo plugin
### Validation Points
- **After Phase 1:** MCP servers working and tested
- **After Phase 4:** Complete projman workflow end-to-end
- **After Phase 5:** Real sprint successfully managed
- **After Phase 8:** Production-ready projman
- **After Phase 11:** Multi-project coordination validated
- **After Phase 12:** Complete system operational
---
## Implementation Decisions (Pre-Development)
These decisions were finalized before development:
### 1. Python Version: 3.10+
- **Rationale:** Balance of modern features and wide availability
- **Benefits:** Modern async, good type hints, widely deployed
- **Minimum:** Python 3.10.0
### 2. Wiki.js Base Structure: Needs Creation
- **Status:** `/your-org` structure does NOT exist yet
- **Action:** Run `setup_wiki_structure.py` during Phase 1.1b
- **Script:** See MCP-WIKIJS.md for complete setup script
- **Post-setup:** Verify at https://wiki.your-company.com/your-org
### 3. Testing Strategy: Both Mocks and Real APIs
- **Unit tests:** Use mocks for fast feedback during development
- **Integration tests:** Use real Gitea/Wiki.js APIs for validation
- **CI/CD:** Run both test suites
- **Developers:** Can skip integration tests locally if needed
- **Markers:** Use pytest markers (`@pytest.mark.integration`)
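A sketch of what the split looks like with markers (test names and bodies are illustrative):
```python
import pytest

def test_label_suggestions_from_mock():
    """Unit test: runs against mocked API responses, no network."""
    ...

@pytest.mark.integration
def test_label_sync_against_real_gitea():
    """Integration test: hits the real Gitea API."""
    ...

# Local fast run, skipping integration tests:
#   pytest -m "not integration"
```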
### 4. Token Permissions: Confirmed
- **Gitea token:**
- `repo` (all) - Read/write repositories, issues, labels
- `read:org` - Organization information and labels
- `read:user` - User information
- **Wiki.js token:**
- Read/create/update pages
- Manage tags
- Search access
### 5. Label System: Dynamic (44 labels)
- **Current count:** 44 labels (28 org + 16 repo)
- **Approach:** Fetch dynamically via API, never hardcode
- **Sync:** `/labels-sync` command updates local reference and suggestion logic
- **New labels:** Command explains changes and asks for confirmation
### 6. Branch Detection: Defense in Depth
- **Layer 1:** MCP tools check branch and block operations
- **Layer 2:** Agent prompts check branch and warn users
- **Layer 3:** CLAUDE.md provides context (third layer)
- **Rationale:** Multiple layers prevent production accidents
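A minimal sketch of the Layer 1 check an MCP tool could perform before any write operation (protected branch names are assumed for illustration):
```python
import subprocess

PROTECTED_BRANCHES = {"main", "production"}  # assumed names; adjust per repository

def assert_safe_branch() -> None:
    """Raise before a write operation if the working copy is on a protected branch (sketch)."""
    branch = subprocess.run(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    if branch in PROTECTED_BRANCHES:
        raise RuntimeError(f"Refusing to run write operations on protected branch '{branch}'")
```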
---
## Important Reminders
1. **Build projman FIRST** - Don't start pmo until projman is validated
2. **MCP servers are SHARED** - Located at `mcp-servers/`, not inside plugins
3. **Lessons learned is critical** - Prevents repeated mistakes
4. **Test with real work** - Validate with actual sprints, not just unit tests
5. **Security first** - Branch detection must be 100% reliable
6. **Keep it simple** - Avoid over-engineering, focus on proven workflow
7. **Python 3.10+** - Minimum version requirement
8. **Wiki.js setup** - Must run setup script before projman works
---
## Getting Help
### Documentation Structure
**Need details on:**
- Gitea integration → [MCP-GITEA.md](./MCP-GITEA.md)
- Wiki.js integration → [MCP-WIKIJS.md](./MCP-WIKIJS.md)
- Projman usage → [PLUGIN-PROJMAN.md](./PLUGIN-PROJMAN.md)
- PMO usage → [PLUGIN-PMO.md](./PLUGIN-PMO.md)
- Overview → This document
### Quick Reference
| Question | Reference |
|----------|-----------|
| How do I set up configuration? | This document, "Quick Start Guide" |
| What's the repository structure? | This document, "Repository Structure" |
| How do Gitea tools work? | [MCP-GITEA.md](./MCP-GITEA.md) |
| How do Wiki.js tools work? | [MCP-WIKIJS.md](./MCP-WIKIJS.md) |
| How do I use sprint commands? | [PLUGIN-PROJMAN.md](./PLUGIN-PROJMAN.md#commands) |
| How do agents work? | [PLUGIN-PROJMAN.md](./PLUGIN-PROJMAN.md#three-agent-model) |
| How do I coordinate multiple projects? | [PLUGIN-PMO.md](./PLUGIN-PMO.md) |
| What's the build order? | This document, "Build Order & Phases" |
---
## Project Timeline
**Planning:** Complete ✅
**Phase 1-8 (projman):** 6-8 weeks estimated
**Phase 9-12 (pmo):** 2-4 weeks estimated
**Total:** 8-12 weeks from start to production
**Note:** No fixed deadlines - work at sustainable pace and validate thoroughly
---
## You're Ready!
You have everything you need to build the projman and projman-pmo plugins. All architectural decisions are finalized and documented.
**Start here:** [MCP-GITEA.md](./MCP-GITEA.md) - Set up Gitea MCP Server
Good luck with the build! 🚀

View File

@@ -0,0 +1,20 @@
2026-01-26T14:36:42 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T14:37:38 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T14:37:48 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T14:38:05 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T14:38:55 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T14:39:35 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T14:40:19 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T15:02:30 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/tests/test_parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T15:02:37 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/tests/test_parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T15:03:41 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/tests/test_report_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-02-02T10:56:19 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/validation_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-02-02T10:57:49 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/tests/test_validation_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-02-02T10:58:22 | skills | /home/lmiranda/claude-plugins-work/plugins/contract-validator/skills/mcp-tools-reference.md | README.md
2026-02-02T10:58:38 | skills | /home/lmiranda/claude-plugins-work/plugins/contract-validator/skills/validation-rules.md | README.md
2026-02-02T10:59:13 | .claude-plugin | /home/lmiranda/claude-plugins-work/.claude-plugin/marketplace.json | CLAUDE.md .claude-plugin/marketplace.json
2026-02-02T13:55:33 | skills | /home/lmiranda/claude-plugins-work/plugins/projman/skills/visual-output.md | README.md
2026-02-02T13:55:41 | agents | /home/lmiranda/claude-plugins-work/plugins/projman/agents/planner.md | README.md CLAUDE.md
2026-02-02T13:55:55 | agents | /home/lmiranda/claude-plugins-work/plugins/projman/agents/orchestrator.md | README.md CLAUDE.md
2026-02-02T13:56:14 | agents | /home/lmiranda/claude-plugins-work/plugins/projman/agents/executor.md | README.md CLAUDE.md
2026-02-02T13:56:34 | agents | /home/lmiranda/claude-plugins-work/plugins/projman/agents/code-reviewer.md | README.md CLAUDE.md

View File

@@ -0,0 +1,3 @@
"""Contract Validator MCP Server - Cross-plugin compatibility validation."""
__version__ = "1.0.0"

View File

@@ -0,0 +1,415 @@
"""
Parse tools for extracting interfaces from plugin documentation.
Provides structured extraction of:
- Plugin interfaces from README.md (commands, agents, tools)
- Agent definitions from CLAUDE.md (tool sequences, workflows)
"""
import re
import os
from pathlib import Path
from typing import Optional
from pydantic import BaseModel
class ToolInfo(BaseModel):
"""Information about a single tool"""
name: str
category: Optional[str] = None
description: Optional[str] = None
class CommandInfo(BaseModel):
"""Information about a plugin command"""
name: str
description: Optional[str] = None
class AgentInfo(BaseModel):
"""Information about a plugin agent"""
name: str
description: Optional[str] = None
tools: list[str] = []
class PluginInterface(BaseModel):
"""Structured plugin interface extracted from README"""
plugin_name: str
description: Optional[str] = None
commands: list[CommandInfo] = []
agents: list[AgentInfo] = []
tools: list[ToolInfo] = []
tool_categories: dict[str, list[str]] = {}
features: list[str] = []
class ClaudeMdAgent(BaseModel):
"""Agent definition extracted from CLAUDE.md"""
name: str
personality: Optional[str] = None
responsibilities: list[str] = []
tool_refs: list[str] = []
workflow_steps: list[str] = []
class ParseTools:
"""Tools for parsing plugin documentation"""
async def parse_plugin_interface(self, plugin_path: str) -> dict:
"""
Parse plugin README.md to extract interface declarations.
Args:
plugin_path: Path to plugin directory or README.md file
Returns:
Structured interface with commands, agents, tools, etc.
"""
# Resolve path to README
path = Path(plugin_path)
if path.is_dir():
readme_path = path / "README.md"
else:
readme_path = path
if not readme_path.exists():
return {
"error": f"README.md not found at {readme_path}",
"plugin_path": plugin_path
}
content = readme_path.read_text()
plugin_name = self._extract_plugin_name(content, path)
interface = PluginInterface(
plugin_name=plugin_name,
description=self._extract_description(content),
commands=self._extract_commands(content),
agents=self._extract_agents_from_readme(content),
tools=self._extract_tools(content),
tool_categories=self._extract_tool_categories(content),
features=self._extract_features(content)
)
return interface.model_dump()
async def parse_claude_md_agents(self, claude_md_path: str) -> dict:
"""
Parse CLAUDE.md to extract agent definitions and tool sequences.
Args:
claude_md_path: Path to CLAUDE.md file
Returns:
List of agents with their tool sequences
"""
path = Path(claude_md_path)
if not path.exists():
return {
"error": f"CLAUDE.md not found at {path}",
"claude_md_path": claude_md_path
}
content = path.read_text()
agents = self._extract_agents_from_claude_md(content)
return {
"file": str(path),
"agents": [a.model_dump() for a in agents],
"agent_count": len(agents)
}
def _extract_plugin_name(self, content: str, path: Path) -> str:
"""Extract plugin name from content or path"""
# Try to get from H1 header
match = re.search(r'^#\s+(.+?)(?:\s+Plugin|\s*$)', content, re.MULTILINE)
if match:
name = match.group(1).strip()
# Handle cases like "# data-platform Plugin"
name = re.sub(r'\s*Plugin\s*$', '', name, flags=re.IGNORECASE)
return name
# Fall back to directory name
if path.is_dir():
return path.name
return path.parent.name
def _extract_description(self, content: str) -> Optional[str]:
"""Extract plugin description from first paragraph after title"""
# Get content after H1, before first H2
match = re.search(r'^#\s+.+?\n\n(.+?)(?=\n##|\n\n##|\Z)', content, re.MULTILINE | re.DOTALL)
if match:
desc = match.group(1).strip()
# Take first paragraph only
desc = desc.split('\n\n')[0].strip()
return desc
return None
def _extract_commands(self, content: str) -> list[CommandInfo]:
"""Extract commands from Commands section"""
commands = []
# Find Commands section
commands_section = self._extract_section(content, "Commands")
if not commands_section:
return commands
# Parse table format: | Command | Description |
# Only match actual command names (start with / or alphanumeric)
table_pattern = r'\|\s*`?(/[a-z][-a-z0-9]*)`?\s*\|\s*([^|]+)\s*\|'
for match in re.finditer(table_pattern, commands_section):
cmd_name = match.group(1).strip()
desc = match.group(2).strip()
# Skip header row and separators
if cmd_name.lower() in ('command', 'commands') or cmd_name.startswith('-'):
continue
commands.append(CommandInfo(
name=cmd_name,
description=desc
))
# Also look for ### `/command-name` format (with backticks)
cmd_header_pattern = r'^###\s+`(/[a-z][-a-z0-9]*)`\s*\n(.+?)(?=\n###|\n##|\Z)'
for match in re.finditer(cmd_header_pattern, commands_section, re.MULTILINE | re.DOTALL):
cmd_name = match.group(1).strip()
desc_block = match.group(2).strip()
# Get first line or paragraph as description
desc = desc_block.split('\n')[0].strip()
# Don't duplicate if already found in table
if not any(c.name == cmd_name for c in commands):
commands.append(CommandInfo(name=cmd_name, description=desc))
# Also look for ### /command-name format (without backticks)
cmd_header_pattern2 = r'^###\s+(/[a-z][-a-z0-9]*)\s*\n(.+?)(?=\n###|\n##|\Z)'
for match in re.finditer(cmd_header_pattern2, commands_section, re.MULTILINE | re.DOTALL):
cmd_name = match.group(1).strip()
desc_block = match.group(2).strip()
# Get first line or paragraph as description
desc = desc_block.split('\n')[0].strip()
# Don't duplicate if already found in table
if not any(c.name == cmd_name for c in commands):
commands.append(CommandInfo(name=cmd_name, description=desc))
return commands
def _extract_agents_from_readme(self, content: str) -> list[AgentInfo]:
"""Extract agents from Agents section in README"""
agents = []
# Find Agents section
agents_section = self._extract_section(content, "Agents")
if not agents_section:
return agents
# Parse table format: | Agent | Description |
# Only match actual agent names (alphanumeric with dashes/underscores)
table_pattern = r'\|\s*`?([a-z][-a-z0-9_]*)`?\s*\|\s*([^|]+)\s*\|'
for match in re.finditer(table_pattern, agents_section):
agent_name = match.group(1).strip()
desc = match.group(2).strip()
# Skip header row and separators
if agent_name.lower() in ('agent', 'agents') or agent_name.startswith('-'):
continue
agents.append(AgentInfo(name=agent_name, description=desc))
return agents
def _extract_tools(self, content: str) -> list[ToolInfo]:
"""Extract tool list from Tools Summary or similar section"""
tools = []
# Find Tools Summary section
tools_section = self._extract_section(content, "Tools Summary")
if not tools_section:
tools_section = self._extract_section(content, "Tools")
if not tools_section:
tools_section = self._extract_section(content, "MCP Server Tools")
if not tools_section:
return tools
# Parse category headers: ### category (N tools)
category_pattern = r'###\s*(.+?)\s*(?:\((\d+)\s*tools?\))?\s*\n([^#]+)'
for match in re.finditer(category_pattern, tools_section):
category = match.group(1).strip()
tool_list_text = match.group(3).strip()
# Extract tool names from backtick lists
tool_names = re.findall(r'`([a-z_]+)`', tool_list_text)
for name in tool_names:
tools.append(ToolInfo(name=name, category=category))
# Also look for inline tool lists without categories
inline_pattern = r'`([a-z_]+)`'
all_tool_names = set(t.name for t in tools)
for match in re.finditer(inline_pattern, tools_section):
name = match.group(1)
if name not in all_tool_names:
tools.append(ToolInfo(name=name))
all_tool_names.add(name)
return tools
def _extract_tool_categories(self, content: str) -> dict[str, list[str]]:
"""Extract tool categories with their tool lists"""
categories = {}
tools_section = self._extract_section(content, "Tools Summary")
if not tools_section:
tools_section = self._extract_section(content, "Tools")
if not tools_section:
return categories
# Parse category headers: ### category (N tools)
category_pattern = r'###\s*(.+?)\s*(?:\((\d+)\s*tools?\))?\s*\n([^#]+)'
for match in re.finditer(category_pattern, tools_section):
category = match.group(1).strip()
tool_list_text = match.group(3).strip()
# Extract tool names from backtick lists
tool_names = re.findall(r'`([a-z_]+)`', tool_list_text)
if tool_names:
categories[category] = tool_names
return categories
def _extract_features(self, content: str) -> list[str]:
"""Extract features from Features section"""
features = []
features_section = self._extract_section(content, "Features")
if not features_section:
return features
# Parse bullet points
bullet_pattern = r'^[-*]\s+\*\*(.+?)\*\*'
for match in re.finditer(bullet_pattern, features_section, re.MULTILINE):
features.append(match.group(1).strip())
return features
def _extract_section(self, content: str, section_name: str) -> Optional[str]:
"""Extract content of a markdown section by header name"""
# Match ## Section Name - include all content until next ## (same level or higher)
pattern = rf'^##\s+{re.escape(section_name)}(?:\s*\([^)]*\))?\s*\n(.*?)(?=\n##[^#]|\Z)'
match = re.search(pattern, content, re.MULTILINE | re.DOTALL | re.IGNORECASE)
if match:
return match.group(1).strip()
# Try ### level - include content until next ## or ###
pattern = rf'^###\s+{re.escape(section_name)}(?:\s*\([^)]*\))?\s*\n(.*?)(?=\n##|\n###[^#]|\Z)'
match = re.search(pattern, content, re.MULTILINE | re.DOTALL | re.IGNORECASE)
if match:
return match.group(1).strip()
return None
def _extract_agents_from_claude_md(self, content: str) -> list[ClaudeMdAgent]:
"""Extract agent definitions from CLAUDE.md"""
agents = []
# Look for Four-Agent Model section specifically
# Match section headers like "### Four-Agent Model (projman)" or "## Four-Agent Model"
agent_model_match = re.search(
r'^##[#]?\s+Four-Agent Model.*?\n(.*?)(?=\n##[^#]|\Z)',
content, re.MULTILINE | re.DOTALL
)
agent_model_section = agent_model_match.group(1) if agent_model_match else None
if agent_model_section:
# Parse agent table within this section
# | **Planner** | Thoughtful, methodical | Sprint planning, ... |
# Match rows where first cell starts with ** (bold) and contains a capitalized word
agent_table_pattern = r'\|\s*\*\*([A-Z][a-zA-Z\s]+?)\*\*\s*\|\s*([^|]+)\s*\|\s*([^|]+)\s*\|'
for match in re.finditer(agent_table_pattern, agent_model_section):
agent_name = match.group(1).strip()
personality = match.group(2).strip()
responsibilities = match.group(3).strip()
# Skip header rows and separator rows
if agent_name.lower() in ('agent', 'agents', '---', '-', ''):
continue
if 'personality' in personality.lower() or '---' in personality:
continue
# Skip if personality looks like tool names (contains backticks)
if '`' in personality:
continue
# Extract tool references from responsibilities
tool_refs = re.findall(r'`([a-z_]+)`', responsibilities)
# Split responsibilities by comma
resp_list = [r.strip() for r in responsibilities.split(',')]
agents.append(ClaudeMdAgent(
name=agent_name,
personality=personality,
responsibilities=resp_list,
tool_refs=tool_refs
))
# Also look for agents table in ## Agents section
agents_section = self._extract_section(content, "Agents")
if agents_section:
# Parse table: | Agent | Description |
table_pattern = r'\|\s*`?([a-z][-a-z0-9_]+)`?\s*\|\s*([^|]+)\s*\|'
for match in re.finditer(table_pattern, agents_section):
agent_name = match.group(1).strip()
desc = match.group(2).strip()
# Skip header rows
if agent_name.lower() in ('agent', 'agents', '---', '-'):
continue
# Check if agent already exists
if not any(a.name.lower() == agent_name.lower() for a in agents):
agents.append(ClaudeMdAgent(
name=agent_name,
responsibilities=[desc] if desc else []
))
# Look for workflow sections to enrich agent data
workflow_section = self._extract_section(content, "Workflow")
if workflow_section:
# Parse numbered steps
step_pattern = r'^\d+\.\s+(.+?)$'
workflow_steps = re.findall(step_pattern, workflow_section, re.MULTILINE)
# Associate workflow steps with agents mentioned
for agent in agents:
for step in workflow_steps:
if agent.name.lower() in step.lower():
agent.workflow_steps.append(step)
# Extract any tool references in the step
step_tools = re.findall(r'`([a-z_]+)`', step)
agent.tool_refs.extend(t for t in step_tools if t not in agent.tool_refs)
# Look for agent-specific sections (### Planner Agent)
agent_section_pattern = r'^###?\s+([A-Z][a-z]+(?:\s+[A-Z][a-z]+)?)\s+Agent\s*\n(.*?)(?=\n##|\n###|\Z)'
for match in re.finditer(agent_section_pattern, content, re.MULTILINE | re.DOTALL):
agent_name = match.group(1).strip()
section_content = match.group(2).strip()
# Check if agent already exists
existing = next((a for a in agents if a.name.lower() == agent_name.lower()), None)
if existing:
# Add tool refs from this section
tool_refs = re.findall(r'`([a-z_]+)`', section_content)
existing.tool_refs.extend(t for t in tool_refs if t not in existing.tool_refs)
else:
tool_refs = re.findall(r'`([a-z_]+)`', section_content)
agents.append(ClaudeMdAgent(
name=agent_name,
tool_refs=tool_refs
))
return agents

View File

@@ -0,0 +1,337 @@
"""
Report tools for generating compatibility reports and listing issues.
Provides:
- generate_compatibility_report: Full marketplace validation report
- list_issues: Filtered issue listing
"""
from pathlib import Path
from datetime import datetime
from pydantic import BaseModel
from .parse_tools import ParseTools
from .validation_tools import ValidationTools
class ReportSummary(BaseModel):
"""Summary statistics for a report"""
total_plugins: int = 0
total_commands: int = 0
total_agents: int = 0
total_tools: int = 0
total_issues: int = 0
errors: int = 0
warnings: int = 0
info: int = 0
class ReportTools:
"""Tools for generating reports and listing issues"""
def __init__(self):
self.parse_tools = ParseTools()
self.validation_tools = ValidationTools()
async def generate_compatibility_report(
self,
marketplace_path: str,
format: str = "markdown"
) -> dict:
"""
Generate a comprehensive compatibility report for all plugins.
Args:
marketplace_path: Path to marketplace root directory
format: Output format ("markdown" or "json")
Returns:
Full compatibility report with all findings
"""
marketplace = Path(marketplace_path)
plugins_dir = marketplace / "plugins"
if not plugins_dir.exists():
return {
"error": f"Plugins directory not found at {plugins_dir}",
"marketplace_path": marketplace_path
}
# Discover all plugins
plugins = []
for item in plugins_dir.iterdir():
if item.is_dir() and (item / ".claude-plugin").exists():
plugins.append(item)
if not plugins:
return {
"error": "No plugins found in marketplace",
"marketplace_path": marketplace_path
}
# Parse all plugin interfaces
interfaces = {}
all_issues = []
summary = ReportSummary(total_plugins=len(plugins))
for plugin_path in plugins:
interface = await self.parse_tools.parse_plugin_interface(str(plugin_path))
if "error" not in interface:
interfaces[interface["plugin_name"]] = interface
summary.total_commands += len(interface.get("commands", []))
summary.total_agents += len(interface.get("agents", []))
summary.total_tools += len(interface.get("tools", []))
# Run pairwise compatibility checks
plugin_names = list(interfaces.keys())
compatibility_results = []
for i, name_a in enumerate(plugin_names):
for name_b in plugin_names[i+1:]:
path_a = plugins_dir / self._find_plugin_dir(plugins_dir, name_a)
path_b = plugins_dir / self._find_plugin_dir(plugins_dir, name_b)
result = await self.validation_tools.validate_compatibility(
str(path_a), str(path_b)
)
if "error" not in result:
compatibility_results.append(result)
all_issues.extend(result.get("issues", []))
# Parse CLAUDE.md if exists
claude_md = marketplace / "CLAUDE.md"
agents_from_claude = []
if claude_md.exists():
agents_result = await self.parse_tools.parse_claude_md_agents(str(claude_md))
if "error" not in agents_result:
agents_from_claude = agents_result.get("agents", [])
# Validate each agent
for agent in agents_from_claude:
agent_result = await self.validation_tools.validate_agent_refs(
agent["name"],
str(claude_md),
[str(p) for p in plugins]
)
if "error" not in agent_result:
all_issues.extend(agent_result.get("issues", []))
# Count issues by severity
for issue in all_issues:
severity = issue.get("severity", "info")
if isinstance(severity, str):
severity_str = severity.lower()
else:
severity_str = severity.value if hasattr(severity, 'value') else str(severity).lower()
if "error" in severity_str:
summary.errors += 1
elif "warning" in severity_str:
summary.warnings += 1
else:
summary.info += 1
summary.total_issues = len(all_issues)
# Generate report
if format == "json":
return {
"generated_at": datetime.now().isoformat(),
"marketplace_path": marketplace_path,
"summary": summary.model_dump(),
"plugins": interfaces,
"compatibility_checks": compatibility_results,
"claude_md_agents": agents_from_claude,
"all_issues": all_issues
}
else:
# Generate markdown report
report = self._generate_markdown_report(
marketplace_path,
summary,
interfaces,
compatibility_results,
agents_from_claude,
all_issues
)
return {
"generated_at": datetime.now().isoformat(),
"marketplace_path": marketplace_path,
"summary": summary.model_dump(),
"report": report
}
def _find_plugin_dir(self, plugins_dir: Path, plugin_name: str) -> str:
"""Find plugin directory by name (handles naming variations)"""
# Try exact match first
for item in plugins_dir.iterdir():
if item.is_dir():
if item.name.lower() == plugin_name.lower():
return item.name
# Check plugin.json for name
plugin_json = item / ".claude-plugin" / "plugin.json"
if plugin_json.exists():
import json
try:
data = json.loads(plugin_json.read_text())
if data.get("name", "").lower() == plugin_name.lower():
return item.name
except (json.JSONDecodeError, OSError):
pass
return plugin_name
def _generate_markdown_report(
self,
marketplace_path: str,
summary: ReportSummary,
interfaces: dict,
compatibility_results: list,
agents: list,
issues: list
) -> str:
"""Generate markdown formatted report"""
lines = [
"# Contract Validation Report",
"",
f"**Generated:** {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
f"**Marketplace:** `{marketplace_path}`",
"",
"## Summary",
"",
f"| Metric | Count |",
f"|--------|-------|",
f"| Plugins | {summary.total_plugins} |",
f"| Commands | {summary.total_commands} |",
f"| Agents | {summary.total_agents} |",
f"| Tools | {summary.total_tools} |",
f"| **Issues** | **{summary.total_issues}** |",
f"| - Errors | {summary.errors} |",
f"| - Warnings | {summary.warnings} |",
f"| - Info | {summary.info} |",
"",
]
# Plugin details
lines.extend([
"## Plugins",
"",
])
for name, interface in interfaces.items():
cmds = len(interface.get("commands", []))
agents_count = len(interface.get("agents", []))
tools = len(interface.get("tools", []))
lines.append(f"### {name}")
lines.append("")
lines.append(f"- Commands: {cmds}")
lines.append(f"- Agents: {agents_count}")
lines.append(f"- Tools: {tools}")
lines.append("")
# Compatibility results
if compatibility_results:
lines.extend([
"## Compatibility Checks",
"",
])
for result in compatibility_results:
status = "" if result.get("compatible", True) else ""
lines.append(f"### {result['plugin_a']}{result['plugin_b']} {status}")
lines.append("")
if result.get("shared_tools"):
lines.append(f"- Shared tools: `{', '.join(result['shared_tools'])}`")
if result.get("issues"):
for issue in result["issues"]:
sev = issue.get("severity", "info")
if hasattr(sev, 'value'):
sev = sev.value
lines.append(f"- [{sev.upper()}] {issue['message']}")
lines.append("")
# Issues section
if issues:
lines.extend([
"## All Issues",
"",
"| Severity | Type | Message |",
"|----------|------|---------|",
])
for issue in issues:
sev = issue.get("severity", "info")
itype = issue.get("issue_type", "unknown")
msg = issue.get("message", "")
if hasattr(sev, 'value'):
sev = sev.value
if hasattr(itype, 'value'):
itype = itype.value
# Truncate message for table
msg_short = msg[:60] + "..." if len(msg) > 60 else msg
lines.append(f"| {sev} | {itype} | {msg_short} |")
lines.append("")
return "\n".join(lines)
async def list_issues(
self,
marketplace_path: str,
severity: str = "all",
issue_type: str = "all"
) -> dict:
"""
List validation issues with optional filtering.
Args:
marketplace_path: Path to marketplace root directory
severity: Filter by severity ("error", "warning", "info", "all")
issue_type: Filter by type ("missing_tool", "interface_mismatch", etc., "all")
Returns:
Filtered list of issues
"""
# Generate full report first
report = await self.generate_compatibility_report(marketplace_path, format="json")
if "error" in report:
return report
all_issues = report.get("all_issues", [])
# Filter by severity
if severity != "all":
filtered = []
for issue in all_issues:
issue_sev = issue.get("severity", "info")
if hasattr(issue_sev, 'value'):
issue_sev = issue_sev.value
if isinstance(issue_sev, str) and severity.lower() in issue_sev.lower():
filtered.append(issue)
all_issues = filtered
# Filter by type
if issue_type != "all":
filtered = []
for issue in all_issues:
itype = issue.get("issue_type", "unknown")
if hasattr(itype, 'value'):
itype = itype.value
if isinstance(itype, str) and issue_type.lower() in itype.lower():
filtered.append(issue)
all_issues = filtered
return {
"marketplace_path": marketplace_path,
"filters": {
"severity": severity,
"issue_type": issue_type
},
"total_issues": len(all_issues),
"issues": all_issues
}

View File
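These report tools can also be exercised directly, outside the MCP transport; a minimal sketch, with an illustrative marketplace path:

```python
import asyncio
from mcp_server.report_tools import ReportTools

async def main() -> None:
    tools = ReportTools()
    # Full JSON-shaped report, then only the error-severity issues
    report = await tools.generate_compatibility_report("/path/to/marketplace", format="json")
    print(report.get("summary") or report.get("error"))
    errors = await tools.list_issues("/path/to/marketplace", severity="error")
    for issue in errors.get("issues", []):
        print("-", issue.get("message"))

asyncio.run(main())
```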

@@ -0,0 +1,309 @@
"""
MCP Server entry point for Contract Validator.
Provides cross-plugin compatibility validation and Claude.md agent verification
tools to Claude Code via JSON-RPC 2.0 over stdio.
"""
import asyncio
import logging
import json
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent
from .parse_tools import ParseTools
from .validation_tools import ValidationTools
from .report_tools import ReportTools
# Suppress noisy MCP validation warnings on stderr
logging.basicConfig(level=logging.INFO)
logging.getLogger("root").setLevel(logging.ERROR)
logging.getLogger("mcp").setLevel(logging.ERROR)
logger = logging.getLogger(__name__)
class ContractValidatorMCPServer:
"""MCP Server for cross-plugin compatibility validation"""
def __init__(self):
self.server = Server("contract-validator-mcp")
self.parse_tools = ParseTools()
self.validation_tools = ValidationTools()
self.report_tools = ReportTools()
async def initialize(self):
"""Initialize server."""
logger.info("Contract Validator MCP Server initialized")
def setup_tools(self):
"""Register all available tools with the MCP server"""
@self.server.list_tools()
async def list_tools() -> list[Tool]:
"""Return list of available tools"""
tools = [
# Parse tools (#186)
Tool(
name="parse_plugin_interface",
description="Parse plugin README.md to extract interface declarations (inputs, outputs, tools)",
inputSchema={
"type": "object",
"properties": {
"plugin_path": {
"type": "string",
"description": "Path to plugin directory or README.md"
}
},
"required": ["plugin_path"]
}
),
Tool(
name="parse_claude_md_agents",
description="Parse Claude.md to extract agent definitions and their tool sequences",
inputSchema={
"type": "object",
"properties": {
"claude_md_path": {
"type": "string",
"description": "Path to CLAUDE.md file"
}
},
"required": ["claude_md_path"]
}
),
# Validation tools (#187)
Tool(
name="validate_compatibility",
description="Validate compatibility between two plugin interfaces",
inputSchema={
"type": "object",
"properties": {
"plugin_a": {
"type": "string",
"description": "Path to first plugin"
},
"plugin_b": {
"type": "string",
"description": "Path to second plugin"
}
},
"required": ["plugin_a", "plugin_b"]
}
),
Tool(
name="validate_agent_refs",
description="Validate that all tool references in an agent definition exist",
inputSchema={
"type": "object",
"properties": {
"agent_name": {
"type": "string",
"description": "Name of agent to validate"
},
"claude_md_path": {
"type": "string",
"description": "Path to CLAUDE.md containing agent"
},
"plugin_paths": {
"type": "array",
"items": {"type": "string"},
"description": "Paths to available plugins"
}
},
"required": ["agent_name", "claude_md_path"]
}
),
Tool(
name="validate_data_flow",
description="Validate data flow through an agent's tool sequence",
inputSchema={
"type": "object",
"properties": {
"agent_name": {
"type": "string",
"description": "Name of agent to validate"
},
"claude_md_path": {
"type": "string",
"description": "Path to CLAUDE.md containing agent"
}
},
"required": ["agent_name", "claude_md_path"]
}
),
Tool(
name="validate_workflow_integration",
description="Validate that a domain plugin exposes the required advisory interfaces (gate command, review command, advisory agent) expected by projman's domain-consultation skill. Also checks gate contract version compatibility.",
inputSchema={
"type": "object",
"properties": {
"plugin_path": {
"type": "string",
"description": "Path to the domain plugin directory"
},
"domain_label": {
"type": "string",
"description": "The Domain/* label it claims to handle, e.g. Domain/Viz"
},
"expected_contract": {
"type": "string",
"description": "Expected contract version (e.g., 'v1'). If provided, validates the gate command's contract matches."
}
},
"required": ["plugin_path", "domain_label"]
}
),
# Report tools (#188)
Tool(
name="generate_compatibility_report",
description="Generate a comprehensive compatibility report for all plugins",
inputSchema={
"type": "object",
"properties": {
"marketplace_path": {
"type": "string",
"description": "Path to marketplace root directory"
},
"format": {
"type": "string",
"enum": ["markdown", "json"],
"default": "markdown",
"description": "Output format"
}
},
"required": ["marketplace_path"]
}
),
Tool(
name="list_issues",
description="List validation issues with optional filtering",
inputSchema={
"type": "object",
"properties": {
"marketplace_path": {
"type": "string",
"description": "Path to marketplace root directory"
},
"severity": {
"type": "string",
"enum": ["error", "warning", "info", "all"],
"default": "all",
"description": "Filter by severity"
},
"issue_type": {
"type": "string",
"enum": ["missing_tool", "interface_mismatch", "optional_dependency", "undeclared_output", "all"],
"default": "all",
"description": "Filter by issue type"
}
},
"required": ["marketplace_path"]
}
)
]
return tools
@self.server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
"""Handle tool invocation."""
try:
# Dispatch to the tool implementations below (issues #186, #187, #188)
if name == "parse_plugin_interface":
result = await self._parse_plugin_interface(**arguments)
elif name == "parse_claude_md_agents":
result = await self._parse_claude_md_agents(**arguments)
elif name == "validate_compatibility":
result = await self._validate_compatibility(**arguments)
elif name == "validate_agent_refs":
result = await self._validate_agent_refs(**arguments)
elif name == "validate_data_flow":
result = await self._validate_data_flow(**arguments)
elif name == "validate_workflow_integration":
result = await self._validate_workflow_integration(**arguments)
elif name == "generate_compatibility_report":
result = await self._generate_compatibility_report(**arguments)
elif name == "list_issues":
result = await self._list_issues(**arguments)
else:
raise ValueError(f"Unknown tool: {name}")
return [TextContent(
type="text",
text=json.dumps(result, indent=2, default=str)
)]
except Exception as e:
logger.error(f"Tool {name} failed: {e}")
return [TextContent(
type="text",
text=json.dumps({"error": str(e)}, indent=2)
)]
# Parse tool implementations (Issue #186)
async def _parse_plugin_interface(self, plugin_path: str) -> dict:
"""Parse plugin interface from README.md"""
return await self.parse_tools.parse_plugin_interface(plugin_path)
async def _parse_claude_md_agents(self, claude_md_path: str) -> dict:
"""Parse agents from CLAUDE.md"""
return await self.parse_tools.parse_claude_md_agents(claude_md_path)
# Validation tool implementations (Issue #187)
async def _validate_compatibility(self, plugin_a: str, plugin_b: str) -> dict:
"""Validate compatibility between plugins"""
return await self.validation_tools.validate_compatibility(plugin_a, plugin_b)
async def _validate_agent_refs(self, agent_name: str, claude_md_path: str, plugin_paths: list[str] | None = None) -> dict:
"""Validate agent tool references"""
return await self.validation_tools.validate_agent_refs(agent_name, claude_md_path, plugin_paths)
async def _validate_data_flow(self, agent_name: str, claude_md_path: str) -> dict:
"""Validate agent data flow"""
return await self.validation_tools.validate_data_flow(agent_name, claude_md_path)
async def _validate_workflow_integration(
self,
plugin_path: str,
domain_label: str,
expected_contract: str | None = None
) -> dict:
"""Validate domain plugin exposes required advisory interfaces"""
return await self.validation_tools.validate_workflow_integration(
plugin_path, domain_label, expected_contract
)
# Report tool implementations (Issue #188)
async def _generate_compatibility_report(self, marketplace_path: str, format: str = "markdown") -> dict:
"""Generate comprehensive compatibility report"""
return await self.report_tools.generate_compatibility_report(marketplace_path, format)
async def _list_issues(self, marketplace_path: str, severity: str = "all", issue_type: str = "all") -> dict:
"""List validation issues with filtering"""
return await self.report_tools.list_issues(marketplace_path, severity, issue_type)
async def run(self):
"""Run the MCP server"""
await self.initialize()
self.setup_tools()
async with stdio_server() as (read_stream, write_stream):
await self.server.run(
read_stream,
write_stream,
self.server.create_initialization_options()
)
async def main():
"""Main entry point"""
server = ContractValidatorMCPServer()
await server.run()
if __name__ == "__main__":
asyncio.run(main())

View File
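For an ad-hoc smoke test outside Claude Code, the server can be driven over stdio with the MCP SDK's client helpers; a rough sketch, assuming the installed SDK exposes `StdioServerParameters`, `stdio_client`, and `ClientSession` under these import paths, and that run.sh sits at the illustrative path shown:

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def smoke_test() -> None:
    # Hypothetical path to this plugin's launcher script
    params = StdioServerParameters(
        command="./mcp-servers/contract-validator/run.sh",
        args=[],
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])

asyncio.run(smoke_test())
```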

@@ -0,0 +1,493 @@
"""
Validation tools for checking cross-plugin compatibility and agent references.
Provides:
- validate_compatibility: Compare two plugin interfaces
- validate_agent_refs: Check agent tool references exist
- validate_data_flow: Verify data flow through agent sequences
"""
import re
from pathlib import Path
from typing import Optional
from pydantic import BaseModel
from enum import Enum
from .parse_tools import ParseTools, PluginInterface, ClaudeMdAgent
class IssueSeverity(str, Enum):
ERROR = "error"
WARNING = "warning"
INFO = "info"
class IssueType(str, Enum):
MISSING_TOOL = "missing_tool"
INTERFACE_MISMATCH = "interface_mismatch"
OPTIONAL_DEPENDENCY = "optional_dependency"
UNDECLARED_OUTPUT = "undeclared_output"
INVALID_SEQUENCE = "invalid_sequence"
MISSING_INTEGRATION = "missing_integration"
class ValidationIssue(BaseModel):
"""A single validation issue"""
severity: IssueSeverity
issue_type: IssueType
message: str
location: Optional[str] = None
suggestion: Optional[str] = None
class CompatibilityResult(BaseModel):
"""Result of compatibility check between two plugins"""
plugin_a: str
plugin_b: str
compatible: bool
shared_tools: list[str] = []
a_only_tools: list[str] = []
b_only_tools: list[str] = []
issues: list[ValidationIssue] = []
class AgentValidationResult(BaseModel):
"""Result of agent reference validation"""
agent_name: str
valid: bool
tool_refs_found: list[str] = []
tool_refs_missing: list[str] = []
issues: list[ValidationIssue] = []
class DataFlowResult(BaseModel):
"""Result of data flow validation"""
agent_name: str
valid: bool
flow_steps: list[str] = []
issues: list[ValidationIssue] = []
class WorkflowIntegrationResult(BaseModel):
"""Result of workflow integration validation for domain plugins"""
plugin_name: str
domain_label: str
valid: bool
gate_command_found: bool
gate_contract: Optional[str] = None # Contract version declared by gate command
review_command_found: bool
advisory_agent_found: bool
issues: list[ValidationIssue] = []
class ValidationTools:
"""Tools for validating plugin compatibility and agent references"""
def __init__(self):
self.parse_tools = ParseTools()
async def validate_compatibility(self, plugin_a: str, plugin_b: str) -> dict:
"""
Validate compatibility between two plugin interfaces.
Compares tools, commands, and agents to identify overlaps and gaps.
Args:
plugin_a: Path to first plugin directory
plugin_b: Path to second plugin directory
Returns:
Compatibility report with shared tools, unique tools, and issues
"""
# Parse both plugins
interface_a = await self.parse_tools.parse_plugin_interface(plugin_a)
interface_b = await self.parse_tools.parse_plugin_interface(plugin_b)
# Check for parse errors
if "error" in interface_a:
return {
"error": f"Failed to parse plugin A: {interface_a['error']}",
"plugin_a": plugin_a,
"plugin_b": plugin_b
}
if "error" in interface_b:
return {
"error": f"Failed to parse plugin B: {interface_b['error']}",
"plugin_a": plugin_a,
"plugin_b": plugin_b
}
# Extract tool names
tools_a = set(t["name"] for t in interface_a.get("tools", []))
tools_b = set(t["name"] for t in interface_b.get("tools", []))
# Find overlaps and differences
shared = tools_a & tools_b
a_only = tools_a - tools_b
b_only = tools_b - tools_a
issues = []
# Check for potential naming conflicts
if shared:
issues.append(ValidationIssue(
severity=IssueSeverity.WARNING,
issue_type=IssueType.INTERFACE_MISMATCH,
message=f"Both plugins define tools with same names: {list(shared)}",
location=f"{interface_a['plugin_name']} and {interface_b['plugin_name']}",
suggestion="Ensure tools with same names have compatible interfaces"
))
# Check command overlaps
cmds_a = set(c["name"] for c in interface_a.get("commands", []))
cmds_b = set(c["name"] for c in interface_b.get("commands", []))
shared_cmds = cmds_a & cmds_b
if shared_cmds:
issues.append(ValidationIssue(
severity=IssueSeverity.ERROR,
issue_type=IssueType.INTERFACE_MISMATCH,
message=f"Command name conflict: {list(shared_cmds)}",
location=f"{interface_a['plugin_name']} and {interface_b['plugin_name']}",
suggestion="Rename conflicting commands to avoid ambiguity"
))
result = CompatibilityResult(
plugin_a=interface_a["plugin_name"],
plugin_b=interface_b["plugin_name"],
compatible=len([i for i in issues if i.severity == IssueSeverity.ERROR]) == 0,
shared_tools=list(shared),
a_only_tools=list(a_only),
b_only_tools=list(b_only),
issues=issues
)
return result.model_dump()
async def validate_agent_refs(
self,
agent_name: str,
claude_md_path: str,
plugin_paths: Optional[list[str]] = None
) -> dict:
"""
Validate that all tool references in an agent definition exist.
Args:
agent_name: Name of the agent to validate
claude_md_path: Path to CLAUDE.md containing the agent
plugin_paths: Optional list of plugin paths to check for tools
Returns:
Validation result with found/missing tools and issues
"""
# Parse CLAUDE.md for agents
agents_result = await self.parse_tools.parse_claude_md_agents(claude_md_path)
if "error" in agents_result:
return {
"error": agents_result["error"],
"agent_name": agent_name
}
# Find the specific agent
agent = None
for a in agents_result.get("agents", []):
if a["name"].lower() == agent_name.lower():
agent = a
break
if not agent:
return {
"error": f"Agent '{agent_name}' not found in {claude_md_path}",
"agent_name": agent_name,
"available_agents": [a["name"] for a in agents_result.get("agents", [])]
}
# Collect all available tools from plugins
available_tools = set()
if plugin_paths:
for plugin_path in plugin_paths:
interface = await self.parse_tools.parse_plugin_interface(plugin_path)
if "error" not in interface:
for tool in interface.get("tools", []):
available_tools.add(tool["name"])
# Check agent tool references
tool_refs = set(agent.get("tool_refs", []))
found = tool_refs & available_tools if available_tools else tool_refs
missing = tool_refs - available_tools if available_tools else set()
issues = []
# Report missing tools
for tool in missing:
issues.append(ValidationIssue(
severity=IssueSeverity.ERROR,
issue_type=IssueType.MISSING_TOOL,
message=f"Agent '{agent_name}' references tool '{tool}' which is not found",
location=claude_md_path,
suggestion=f"Check if tool '{tool}' exists or fix the reference"
))
# Check if agent has no tool refs (might be incomplete)
if not tool_refs:
issues.append(ValidationIssue(
severity=IssueSeverity.INFO,
issue_type=IssueType.UNDECLARED_OUTPUT,
message=f"Agent '{agent_name}' has no documented tool references",
location=claude_md_path,
suggestion="Consider documenting which tools this agent uses"
))
result = AgentValidationResult(
agent_name=agent_name,
valid=len([i for i in issues if i.severity == IssueSeverity.ERROR]) == 0,
tool_refs_found=list(found),
tool_refs_missing=list(missing),
issues=issues
)
return result.model_dump()
async def validate_data_flow(self, agent_name: str, claude_md_path: str) -> dict:
"""
Validate data flow through an agent's tool sequence.
Checks that each step's expected output can be used by the next step.
Args:
agent_name: Name of the agent to validate
claude_md_path: Path to CLAUDE.md containing the agent
Returns:
Data flow validation result with steps and issues
"""
# Parse CLAUDE.md for agents
agents_result = await self.parse_tools.parse_claude_md_agents(claude_md_path)
if "error" in agents_result:
return {
"error": agents_result["error"],
"agent_name": agent_name
}
# Find the specific agent
agent = None
for a in agents_result.get("agents", []):
if a["name"].lower() == agent_name.lower():
agent = a
break
if not agent:
return {
"error": f"Agent '{agent_name}' not found in {claude_md_path}",
"agent_name": agent_name,
"available_agents": [a["name"] for a in agents_result.get("agents", [])]
}
issues = []
flow_steps = []
# Extract workflow steps
workflow_steps = agent.get("workflow_steps", [])
responsibilities = agent.get("responsibilities", [])
# Build flow from workflow steps or responsibilities
steps = workflow_steps if workflow_steps else responsibilities
for i, step in enumerate(steps):
flow_steps.append(f"Step {i+1}: {step}")
# Check for data flow patterns
tool_refs = agent.get("tool_refs", [])
# Known data flow patterns
# e.g., data-platform produces data_ref, viz-platform consumes it
known_producers = {
"read_csv": "data_ref",
"read_parquet": "data_ref",
"pg_query": "data_ref",
"filter": "data_ref",
"groupby": "data_ref",
}
known_consumers = {
"describe": "data_ref",
"head": "data_ref",
"tail": "data_ref",
"to_csv": "data_ref",
"to_parquet": "data_ref",
}
# Check if agent uses tools that require data_ref
has_producer = any(t in known_producers for t in tool_refs)
has_consumer = any(t in known_consumers for t in tool_refs)
if has_consumer and not has_producer:
issues.append(ValidationIssue(
severity=IssueSeverity.WARNING,
issue_type=IssueType.INTERFACE_MISMATCH,
message=f"Agent '{agent_name}' uses tools that consume data_ref but no producer found",
location=claude_md_path,
suggestion="Ensure a data loading tool (read_csv, pg_query, etc.) is used before data consumers"
))
# Check for empty workflow
if not steps and not tool_refs:
issues.append(ValidationIssue(
severity=IssueSeverity.INFO,
issue_type=IssueType.UNDECLARED_OUTPUT,
message=f"Agent '{agent_name}' has no documented workflow or tool sequence",
location=claude_md_path,
suggestion="Consider documenting the agent's workflow steps"
))
result = DataFlowResult(
agent_name=agent_name,
valid=len([i for i in issues if i.severity == IssueSeverity.ERROR]) == 0,
flow_steps=flow_steps,
issues=issues
)
return result.model_dump()
async def validate_workflow_integration(
self,
plugin_path: str,
domain_label: str,
expected_contract: Optional[str] = None
) -> dict:
"""
Validate that a domain plugin exposes required advisory interfaces.
Checks for:
- Gate command (e.g., /design-gate, /data-gate) - REQUIRED
- Gate contract version (gate_contract in frontmatter) - INFO if missing
- Review command (e.g., /design-review, /data-review) - recommended
- Advisory agent referencing the domain label - recommended
Args:
plugin_path: Path to the domain plugin directory
domain_label: The Domain/* label it claims to handle (e.g., Domain/Viz)
expected_contract: Expected contract version (e.g., 'v1'). If provided,
validates the gate command's contract matches.
Returns:
Validation result with found interfaces and issues
"""
plugin_path_obj = Path(plugin_path)
issues = []
# Extract plugin name from path
plugin_name = plugin_path_obj.name
if not plugin_path_obj.exists():
return {
"error": f"Plugin directory not found: {plugin_path}",
"plugin_path": plugin_path,
"domain_label": domain_label
}
# Extract domain short name from label (e.g., "Domain/Viz" -> "viz", "Domain/Data" -> "data")
domain_short = domain_label.split("/")[-1].lower() if "/" in domain_label else domain_label.lower()
# Check for gate command
commands_dir = plugin_path_obj / "commands"
gate_command_found = False
gate_contract = None
gate_patterns = ["pass", "fail", "PASS", "FAIL", "Binary pass/fail", "gate"]
if commands_dir.exists():
for cmd_file in commands_dir.glob("*.md"):
if "gate" in cmd_file.name.lower():
# Verify it's actually a gate command by checking content
content = cmd_file.read_text()
if any(pattern in content for pattern in gate_patterns):
gate_command_found = True
# Parse frontmatter for gate_contract
frontmatter_match = re.match(r'^---\n(.*?)\n---', content, re.DOTALL)
if frontmatter_match:
frontmatter = frontmatter_match.group(1)
contract_match = re.search(r'gate_contract:\s*(\S+)', frontmatter)
if contract_match:
gate_contract = contract_match.group(1)
break
if not gate_command_found:
issues.append(ValidationIssue(
severity=IssueSeverity.ERROR,
issue_type=IssueType.MISSING_INTEGRATION,
message=f"Plugin '{plugin_name}' lacks a gate command for domain '{domain_label}'",
location=str(commands_dir),
suggestion=f"Create commands/{domain_short}-gate.md with binary PASS/FAIL output"
))
# Check for review command
review_command_found = False
if commands_dir.exists():
for cmd_file in commands_dir.glob("*.md"):
if "review" in cmd_file.name.lower() and "gate" not in cmd_file.name.lower():
review_command_found = True
break
if not review_command_found:
issues.append(ValidationIssue(
severity=IssueSeverity.WARNING,
issue_type=IssueType.MISSING_INTEGRATION,
message=f"Plugin '{plugin_name}' lacks a review command for domain '{domain_label}'",
location=str(commands_dir),
suggestion=f"Create commands/{domain_short}-review.md for detailed audits"
))
# Check for advisory agent
agents_dir = plugin_path_obj / "agents"
advisory_agent_found = False
if agents_dir.exists():
for agent_file in agents_dir.glob("*.md"):
content = agent_file.read_text()
# Check if agent references the domain label or gate command
if domain_label in content or f"{domain_short}-gate" in content.lower() or "advisor" in agent_file.name.lower() or "reviewer" in agent_file.name.lower():
advisory_agent_found = True
break
if not advisory_agent_found:
issues.append(ValidationIssue(
severity=IssueSeverity.WARNING,
issue_type=IssueType.MISSING_INTEGRATION,
message=f"Plugin '{plugin_name}' lacks an advisory agent for domain '{domain_label}'",
location=str(agents_dir) if agents_dir.exists() else str(plugin_path_obj),
suggestion=f"Create agents/{domain_short}-advisor.md referencing '{domain_label}'"
))
# Check gate contract version
if gate_command_found:
if not gate_contract:
issues.append(ValidationIssue(
severity=IssueSeverity.INFO,
issue_type=IssueType.MISSING_INTEGRATION,
message=f"Gate command does not declare a contract version",
location=str(commands_dir),
suggestion="Consider adding `gate_contract: v1` to frontmatter for version tracking"
))
elif expected_contract and gate_contract != expected_contract:
issues.append(ValidationIssue(
severity=IssueSeverity.WARNING,
issue_type=IssueType.INTERFACE_MISMATCH,
message=f"Contract version mismatch: gate declares {gate_contract}, projman expects {expected_contract}",
location=str(commands_dir),
suggestion=f"Update domain-consultation.md Gate Command Reference table to {gate_contract}, or update gate command to {expected_contract}"
))
result = WorkflowIntegrationResult(
plugin_name=plugin_name,
domain_label=domain_label,
valid=gate_command_found, # Only gate is required for validity
gate_command_found=gate_command_found,
gate_contract=gate_contract,
review_command_found=review_command_found,
advisory_agent_found=advisory_agent_found,
issues=issues
)
return result.model_dump()

View File
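A similar sketch for the workflow-integration check on a single domain plugin; the plugin path, label, and contract version here are placeholders:

```python
import asyncio
from mcp_server.validation_tools import ValidationTools

async def main() -> None:
    result = await ValidationTools().validate_workflow_integration(
        "/path/to/plugins/viz-platform",  # illustrative plugin checkout
        "Domain/Viz",
        expected_contract="v1",
    )
    print("valid:", result.get("valid"))
    for issue in result.get("issues", []):
        sev = issue["severity"]
        sev = sev.value if hasattr(sev, "value") else sev
        print(f"[{sev}] {issue['message']}")

asyncio.run(main())
```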

@@ -0,0 +1,41 @@
[build-system]
requires = ["setuptools>=61.0", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "contract-validator-mcp"
version = "1.0.0"
description = "MCP Server for cross-plugin compatibility validation and agent verification"
readme = "README.md"
license = {text = "MIT"}
requires-python = ">=3.10"
authors = [
{name = "Leo Miranda"}
]
classifiers = [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
]
dependencies = [
"mcp>=0.9.0",
"pydantic>=2.5.0",
]
[project.optional-dependencies]
dev = [
"pytest>=7.4.3",
"pytest-asyncio>=0.23.0",
]
[tool.setuptools.packages.find]
where = ["."]
include = ["mcp_server*"]
[tool.pytest.ini_options]
asyncio_mode = "auto"
testpaths = ["tests"]

View File

@@ -0,0 +1,9 @@
# MCP SDK
mcp>=0.9.0
# Utilities
pydantic>=2.5.0
# Testing
pytest>=7.4.3
pytest-asyncio>=0.23.0

View File

@@ -0,0 +1,21 @@
#!/bin/bash
# Capture original working directory before any cd operations
# This should be the user's project directory when launched by Claude Code
export CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/contract-validator/.venv"
LOCAL_VENV="$SCRIPT_DIR/.venv"
if [[ -f "$CACHE_VENV/bin/python" ]]; then
PYTHON="$CACHE_VENV/bin/python"
elif [[ -f "$LOCAL_VENV/bin/python" ]]; then
PYTHON="$LOCAL_VENV/bin/python"
else
echo "ERROR: No venv found. Run: ./scripts/setup-venvs.sh" >&2
exit 1
fi
cd "$SCRIPT_DIR"
export PYTHONPATH="$SCRIPT_DIR"
exec "$PYTHON" -m mcp_server.server "$@"

View File

@@ -0,0 +1 @@
# Tests for contract-validator MCP server

View File

@@ -0,0 +1,193 @@
"""
Unit tests for parse tools.
"""
import pytest
from pathlib import Path
@pytest.fixture
def parse_tools():
"""Create ParseTools instance"""
from mcp_server.parse_tools import ParseTools
return ParseTools()
@pytest.fixture
def sample_readme(tmp_path):
"""Create a sample README.md for testing"""
readme = tmp_path / "README.md"
readme.write_text("""# Test Plugin
A test plugin for validation.
## Features
- **Feature One**: Does something
- **Feature Two**: Does something else
## Commands
| Command | Description |
|---------|-------------|
| `/test-cmd` | Test command |
| `/another-cmd` | Another test command |
## Agents
| Agent | Description |
|-------|-------------|
| `test-agent` | A test agent |
## Tools Summary
### Category A (3 tools)
`tool_a`, `tool_b`, `tool_c`
### Category B (2 tools)
`tool_d`, `tool_e`
""")
return str(tmp_path)
@pytest.fixture
def sample_claude_md(tmp_path):
"""Create a sample CLAUDE.md for testing"""
claude_md = tmp_path / "CLAUDE.md"
claude_md.write_text("""# CLAUDE.md
## Project Overview
### Four-Agent Model (test)
| Agent | Personality | Responsibilities |
|-------|-------------|------------------|
| **Planner** | Thoughtful | Planning via `create_issue`, `search_lessons` |
| **Executor** | Focused | Implementation via `write`, `edit` |
## Workflow
1. Planner creates issues
2. Executor implements code
""")
return str(claude_md)
@pytest.mark.asyncio
async def test_parse_plugin_interface_basic(parse_tools, sample_readme):
"""Test basic plugin interface parsing"""
result = await parse_tools.parse_plugin_interface(sample_readme)
assert "error" not in result
# Plugin name extraction strips "Plugin" suffix
assert result["plugin_name"] == "Test"
assert "A test plugin" in result["description"]
@pytest.mark.asyncio
async def test_parse_plugin_interface_commands(parse_tools, sample_readme):
"""Test command extraction from README"""
result = await parse_tools.parse_plugin_interface(sample_readme)
commands = result["commands"]
assert len(commands) == 2
assert commands[0]["name"] == "/test-cmd"
assert commands[1]["name"] == "/another-cmd"
@pytest.mark.asyncio
async def test_parse_plugin_interface_agents(parse_tools, sample_readme):
"""Test agent extraction from README"""
result = await parse_tools.parse_plugin_interface(sample_readme)
agents = result["agents"]
assert len(agents) == 1
assert agents[0]["name"] == "test-agent"
@pytest.mark.asyncio
async def test_parse_plugin_interface_tools(parse_tools, sample_readme):
"""Test tool extraction from README"""
result = await parse_tools.parse_plugin_interface(sample_readme)
tools = result["tools"]
tool_names = [t["name"] for t in tools]
assert "tool_a" in tool_names
assert "tool_b" in tool_names
assert "tool_e" in tool_names
assert len(tools) >= 5
@pytest.mark.asyncio
async def test_parse_plugin_interface_categories(parse_tools, sample_readme):
"""Test tool category extraction"""
result = await parse_tools.parse_plugin_interface(sample_readme)
categories = result["tool_categories"]
assert "Category A" in categories
assert "Category B" in categories
assert "tool_a" in categories["Category A"]
@pytest.mark.asyncio
async def test_parse_plugin_interface_features(parse_tools, sample_readme):
"""Test feature extraction"""
result = await parse_tools.parse_plugin_interface(sample_readme)
features = result["features"]
assert "Feature One" in features
assert "Feature Two" in features
@pytest.mark.asyncio
async def test_parse_plugin_interface_not_found(parse_tools, tmp_path):
"""Test error when README not found"""
result = await parse_tools.parse_plugin_interface(str(tmp_path / "nonexistent"))
assert "error" in result
assert "not found" in result["error"].lower()
@pytest.mark.asyncio
async def test_parse_claude_md_agents(parse_tools, sample_claude_md):
"""Test agent extraction from CLAUDE.md"""
result = await parse_tools.parse_claude_md_agents(sample_claude_md)
assert "error" not in result
assert result["agent_count"] == 2
agents = result["agents"]
agent_names = [a["name"] for a in agents]
assert "Planner" in agent_names
assert "Executor" in agent_names
@pytest.mark.asyncio
async def test_parse_claude_md_tool_refs(parse_tools, sample_claude_md):
"""Test tool reference extraction from agents"""
result = await parse_tools.parse_claude_md_agents(sample_claude_md)
agents = {a["name"]: a for a in result["agents"]}
planner = agents["Planner"]
assert "create_issue" in planner["tool_refs"]
assert "search_lessons" in planner["tool_refs"]
@pytest.mark.asyncio
async def test_parse_claude_md_not_found(parse_tools, tmp_path):
"""Test error when CLAUDE.md not found"""
result = await parse_tools.parse_claude_md_agents(str(tmp_path / "CLAUDE.md"))
assert "error" in result
assert "not found" in result["error"].lower()
@pytest.mark.asyncio
async def test_parse_plugin_with_direct_file(parse_tools, sample_readme):
"""Test parsing with direct file path instead of directory"""
readme_path = Path(sample_readme) / "README.md"
result = await parse_tools.parse_plugin_interface(str(readme_path))
assert "error" not in result
# Plugin name extraction strips "Plugin" suffix
assert result["plugin_name"] == "Test"

View File

@@ -0,0 +1,261 @@
"""
Unit tests for report tools.
"""
import pytest
from pathlib import Path
@pytest.fixture
def report_tools():
"""Create ReportTools instance"""
from mcp_server.report_tools import ReportTools
return ReportTools()
@pytest.fixture
def sample_marketplace(tmp_path):
"""Create a sample marketplace structure"""
import json
plugins_dir = tmp_path / "plugins"
plugins_dir.mkdir()
# Plugin 1
plugin1 = plugins_dir / "plugin-one"
plugin1.mkdir()
plugin1_meta = plugin1 / ".claude-plugin"
plugin1_meta.mkdir()
(plugin1_meta / "plugin.json").write_text(json.dumps({"name": "plugin-one"}))
(plugin1 / "README.md").write_text("""# plugin-one
First test plugin.
## Commands
| Command | Description |
|---------|-------------|
| `/cmd-one` | Command one |
## Tools Summary
### Tools (2 tools)
`tool_a`, `tool_b`
""")
# Plugin 2
plugin2 = plugins_dir / "plugin-two"
plugin2.mkdir()
plugin2_meta = plugin2 / ".claude-plugin"
plugin2_meta.mkdir()
(plugin2_meta / "plugin.json").write_text(json.dumps({"name": "plugin-two"}))
(plugin2 / "README.md").write_text("""# plugin-two
Second test plugin.
## Commands
| Command | Description |
|---------|-------------|
| `/cmd-two` | Command two |
## Tools Summary
### Tools (2 tools)
`tool_c`, `tool_d`
""")
# Plugin 3 (with conflict)
plugin3 = plugins_dir / "plugin-three"
plugin3.mkdir()
plugin3_meta = plugin3 / ".claude-plugin"
plugin3_meta.mkdir()
(plugin3_meta / "plugin.json").write_text(json.dumps({"name": "plugin-three"}))
(plugin3 / "README.md").write_text("""# plugin-three
Third test plugin with conflict.
## Commands
| Command | Description |
|---------|-------------|
| `/cmd-one` | Conflicting command |
## Tools Summary
### Tools (1 tool)
`tool_e`
""")
return str(tmp_path)
@pytest.fixture
def marketplace_no_plugins(tmp_path):
"""Create marketplace with no plugins"""
plugins_dir = tmp_path / "plugins"
plugins_dir.mkdir()
return str(tmp_path)
@pytest.fixture
def marketplace_no_dir(tmp_path):
"""Create path without plugins directory"""
return str(tmp_path)
@pytest.mark.asyncio
async def test_generate_report_json_format(report_tools, sample_marketplace):
"""Test JSON format report generation"""
result = await report_tools.generate_compatibility_report(
sample_marketplace, "json"
)
assert "error" not in result
assert "generated_at" in result
assert "summary" in result
assert "plugins" in result
assert result["summary"]["total_plugins"] == 3
@pytest.mark.asyncio
async def test_generate_report_markdown_format(report_tools, sample_marketplace):
"""Test markdown format report generation"""
result = await report_tools.generate_compatibility_report(
sample_marketplace, "markdown"
)
assert "error" not in result
assert "report" in result
assert "# Contract Validation Report" in result["report"]
assert "## Summary" in result["report"]
@pytest.mark.asyncio
async def test_generate_report_finds_conflicts(report_tools, sample_marketplace):
"""Test that report finds command conflicts"""
result = await report_tools.generate_compatibility_report(
sample_marketplace, "json"
)
assert "error" not in result
assert result["summary"]["errors"] > 0
assert result["summary"]["total_issues"] > 0
@pytest.mark.asyncio
async def test_generate_report_counts_correctly(report_tools, sample_marketplace):
"""Test summary counts are correct"""
result = await report_tools.generate_compatibility_report(
sample_marketplace, "json"
)
summary = result["summary"]
assert summary["total_plugins"] == 3
assert summary["total_commands"] == 3 # 3 commands total
assert summary["total_tools"] == 5 # a, b, c, d, e
@pytest.mark.asyncio
async def test_generate_report_no_plugins(report_tools, marketplace_no_plugins):
"""Test error when no plugins found"""
result = await report_tools.generate_compatibility_report(
marketplace_no_plugins, "json"
)
assert "error" in result
assert "no plugins" in result["error"].lower()
@pytest.mark.asyncio
async def test_generate_report_no_plugins_dir(report_tools, marketplace_no_dir):
"""Test error when plugins directory doesn't exist"""
result = await report_tools.generate_compatibility_report(
marketplace_no_dir, "json"
)
assert "error" in result
assert "not found" in result["error"].lower()
@pytest.mark.asyncio
async def test_list_issues_all(report_tools, sample_marketplace):
"""Test listing all issues"""
result = await report_tools.list_issues(sample_marketplace, "all", "all")
assert "error" not in result
assert "issues" in result
assert result["total_issues"] > 0
@pytest.mark.asyncio
async def test_list_issues_filter_by_severity(report_tools, sample_marketplace):
"""Test filtering issues by severity"""
all_result = await report_tools.list_issues(sample_marketplace, "all", "all")
error_result = await report_tools.list_issues(sample_marketplace, "error", "all")
# Error count should be less than or equal to all
assert error_result["total_issues"] <= all_result["total_issues"]
# All issues should have error severity
for issue in error_result["issues"]:
sev = issue.get("severity", "")
if hasattr(sev, 'value'):
sev = sev.value
assert "error" in str(sev).lower()
@pytest.mark.asyncio
async def test_list_issues_filter_by_type(report_tools, sample_marketplace):
"""Test filtering issues by type"""
result = await report_tools.list_issues(
sample_marketplace, "all", "interface_mismatch"
)
# All issues should have matching type
for issue in result["issues"]:
itype = issue.get("issue_type", "")
if hasattr(itype, 'value'):
itype = itype.value
assert "interface_mismatch" in str(itype).lower()
@pytest.mark.asyncio
async def test_list_issues_combined_filters(report_tools, sample_marketplace):
"""Test combined severity and type filters"""
result = await report_tools.list_issues(
sample_marketplace, "error", "interface_mismatch"
)
assert "error" not in result
# Should have command conflict errors
assert result["total_issues"] > 0
@pytest.mark.asyncio
async def test_report_markdown_has_all_sections(report_tools, sample_marketplace):
"""Test markdown report contains all expected sections"""
result = await report_tools.generate_compatibility_report(
sample_marketplace, "markdown"
)
report = result["report"]
assert "## Summary" in report
assert "## Plugins" in report
# Compatibility section only if there are checks
assert "Plugin One" in report or "plugin-one" in report.lower()
@pytest.mark.asyncio
async def test_report_includes_suggestions(report_tools, sample_marketplace):
"""Test that issues include suggestions"""
result = await report_tools.generate_compatibility_report(
sample_marketplace, "json"
)
issues = result.get("all_issues", [])
# Find an issue with a suggestion
issues_with_suggestions = [
i for i in issues
if i.get("suggestion")
]
assert len(issues_with_suggestions) > 0

View File

@@ -0,0 +1,514 @@
"""
Unit tests for validation tools.
"""
import pytest
from pathlib import Path
@pytest.fixture
def validation_tools():
"""Create ValidationTools instance"""
from mcp_server.validation_tools import ValidationTools
return ValidationTools()
@pytest.fixture
def plugin_a(tmp_path):
"""Create first test plugin"""
plugin_dir = tmp_path / "plugin-a"
plugin_dir.mkdir()
(plugin_dir / ".claude-plugin").mkdir()
readme = plugin_dir / "README.md"
readme.write_text("""# Plugin A
Test plugin A.
## Commands
| Command | Description |
|---------|-------------|
| `/setup-a` | Setup A |
| `/shared-cmd` | Shared command |
## Tools Summary
### Core (2 tools)
`tool_one`, `tool_two`
""")
return str(plugin_dir)
@pytest.fixture
def plugin_b(tmp_path):
"""Create second test plugin"""
plugin_dir = tmp_path / "plugin-b"
plugin_dir.mkdir()
(plugin_dir / ".claude-plugin").mkdir()
readme = plugin_dir / "README.md"
readme.write_text("""# Plugin B
Test plugin B.
## Commands
| Command | Description |
|---------|-------------|
| `/setup-b` | Setup B |
| `/shared-cmd` | Shared command (conflict!) |
## Tools Summary
### Core (2 tools)
`tool_two`, `tool_three`
""")
return str(plugin_dir)
@pytest.fixture
def plugin_no_conflict(tmp_path):
"""Create plugin with no conflicts"""
plugin_dir = tmp_path / "plugin-c"
plugin_dir.mkdir()
(plugin_dir / ".claude-plugin").mkdir()
readme = plugin_dir / "README.md"
readme.write_text("""# Plugin C
Test plugin C.
## Commands
| Command | Description |
|---------|-------------|
| `/unique-cmd` | Unique command |
## Tools Summary
### Core (1 tool)
`unique_tool`
""")
return str(plugin_dir)
@pytest.fixture
def claude_md_with_agents(tmp_path):
"""Create CLAUDE.md with agent definitions"""
claude_md = tmp_path / "CLAUDE.md"
claude_md.write_text("""# CLAUDE.md
### Four-Agent Model
| Agent | Personality | Responsibilities |
|-------|-------------|------------------|
| **TestAgent** | Careful | Uses `tool_one`, `tool_two`, `missing_tool` |
| **ValidAgent** | Thorough | Uses `tool_one` only |
| **EmptyAgent** | Unknown | General tasks |
""")
return str(claude_md)
@pytest.mark.asyncio
async def test_validate_compatibility_command_conflict(validation_tools, plugin_a, plugin_b):
"""Test detection of command name conflicts"""
result = await validation_tools.validate_compatibility(plugin_a, plugin_b)
assert "error" not in result
assert result["compatible"] is False
# Find the command conflict issue
error_issues = [i for i in result["issues"] if i["severity"].value == "error"]
assert len(error_issues) > 0
assert any("/shared-cmd" in str(i["message"]) for i in error_issues)
@pytest.mark.asyncio
async def test_validate_compatibility_tool_overlap(validation_tools, plugin_a, plugin_b):
"""Test detection of tool name overlaps"""
result = await validation_tools.validate_compatibility(plugin_a, plugin_b)
assert "tool_two" in result["shared_tools"]
@pytest.mark.asyncio
async def test_validate_compatibility_unique_tools(validation_tools, plugin_a, plugin_b):
"""Test identification of unique tools per plugin"""
result = await validation_tools.validate_compatibility(plugin_a, plugin_b)
assert "tool_one" in result["a_only_tools"]
assert "tool_three" in result["b_only_tools"]
@pytest.mark.asyncio
async def test_validate_compatibility_no_conflict(validation_tools, plugin_a, plugin_no_conflict):
"""Test compatible plugins"""
result = await validation_tools.validate_compatibility(plugin_a, plugin_no_conflict)
assert "error" not in result
assert result["compatible"] is True
@pytest.mark.asyncio
async def test_validate_compatibility_missing_plugin(validation_tools, plugin_a, tmp_path):
"""Test error when plugin not found"""
result = await validation_tools.validate_compatibility(
plugin_a,
str(tmp_path / "nonexistent")
)
assert "error" in result
@pytest.mark.asyncio
async def test_validate_agent_refs_with_missing_tools(validation_tools, claude_md_with_agents, plugin_a):
"""Test detection of missing tool references"""
result = await validation_tools.validate_agent_refs(
"TestAgent",
claude_md_with_agents,
[plugin_a]
)
assert "error" not in result
assert result["valid"] is False
assert "missing_tool" in result["tool_refs_missing"]
@pytest.mark.asyncio
async def test_validate_agent_refs_valid_agent(validation_tools, claude_md_with_agents, plugin_a):
"""Test valid agent with all tools found"""
result = await validation_tools.validate_agent_refs(
"ValidAgent",
claude_md_with_agents,
[plugin_a]
)
assert "error" not in result
assert result["valid"] is True
assert "tool_one" in result["tool_refs_found"]
@pytest.mark.asyncio
async def test_validate_agent_refs_empty_agent(validation_tools, claude_md_with_agents, plugin_a):
"""Test agent with no tool references"""
result = await validation_tools.validate_agent_refs(
"EmptyAgent",
claude_md_with_agents,
[plugin_a]
)
assert "error" not in result
# Should have info issue about undocumented references
info_issues = [i for i in result["issues"] if i["severity"].value == "info"]
assert len(info_issues) > 0
@pytest.mark.asyncio
async def test_validate_agent_refs_agent_not_found(validation_tools, claude_md_with_agents, plugin_a):
"""Test error when agent not found"""
result = await validation_tools.validate_agent_refs(
"NonexistentAgent",
claude_md_with_agents,
[plugin_a]
)
assert "error" in result
assert "not found" in result["error"].lower()
@pytest.mark.asyncio
async def test_validate_data_flow_valid(validation_tools, tmp_path):
"""Test data flow validation with valid flow"""
claude_md = tmp_path / "CLAUDE.md"
claude_md.write_text("""# CLAUDE.md
### Four-Agent Model
| Agent | Personality | Responsibilities |
|-------|-------------|------------------|
| **DataAgent** | Analytical | Load with `read_csv`, analyze with `describe`, export with `to_csv` |
""")
result = await validation_tools.validate_data_flow("DataAgent", str(claude_md))
assert "error" not in result
assert result["valid"] is True
@pytest.mark.asyncio
async def test_validate_data_flow_missing_producer(validation_tools, tmp_path):
"""Test data flow with consumer but no producer"""
claude_md = tmp_path / "CLAUDE.md"
claude_md.write_text("""# CLAUDE.md
### Four-Agent Model
| Agent | Personality | Responsibilities |
|-------|-------------|------------------|
| **BadAgent** | Careless | Just runs `describe`, `head`, `tail` without loading |
""")
result = await validation_tools.validate_data_flow("BadAgent", str(claude_md))
assert "error" not in result
# Should have warning about missing producer
warning_issues = [i for i in result["issues"] if i["severity"].value == "warning"]
assert len(warning_issues) > 0
# --- Workflow Integration Tests ---
@pytest.fixture
def domain_plugin_complete(tmp_path):
"""Create a complete domain plugin with gate, review, and advisory agent"""
plugin_dir = tmp_path / "viz-platform"
plugin_dir.mkdir()
(plugin_dir / ".claude-plugin").mkdir()
(plugin_dir / "commands").mkdir()
(plugin_dir / "agents").mkdir()
# Gate command with PASS/FAIL pattern
gate_cmd = plugin_dir / "commands" / "design-gate.md"
gate_cmd.write_text("""# /design-gate
Binary pass/fail validation gate for design system compliance.
## Output
- **PASS**: All design system checks passed
- **FAIL**: Design system violations detected
""")
# Review command
review_cmd = plugin_dir / "commands" / "design-review.md"
review_cmd.write_text("""# /design-review
Comprehensive design system audit.
""")
# Advisory agent
agent = plugin_dir / "agents" / "design-reviewer.md"
agent.write_text("""# design-reviewer
Design system compliance auditor.
Handles issues with `Domain/Viz` label.
""")
return str(plugin_dir)
@pytest.fixture
def domain_plugin_missing_gate(tmp_path):
"""Create domain plugin with review and agent but no gate command"""
plugin_dir = tmp_path / "data-platform"
plugin_dir.mkdir()
(plugin_dir / ".claude-plugin").mkdir()
(plugin_dir / "commands").mkdir()
(plugin_dir / "agents").mkdir()
# Review command (but no gate)
review_cmd = plugin_dir / "commands" / "data-review.md"
review_cmd.write_text("""# /data-review
Data integrity audit.
""")
# Advisory agent
agent = plugin_dir / "agents" / "data-advisor.md"
agent.write_text("""# data-advisor
Data integrity advisor for Domain/Data issues.
""")
return str(plugin_dir)
@pytest.fixture
def domain_plugin_minimal(tmp_path):
"""Create minimal plugin with no commands or agents"""
plugin_dir = tmp_path / "minimal-plugin"
plugin_dir.mkdir()
(plugin_dir / ".claude-plugin").mkdir()
readme = plugin_dir / "README.md"
readme.write_text("# Minimal Plugin\n\nNo commands or agents.")
return str(plugin_dir)
@pytest.mark.asyncio
async def test_validate_workflow_integration_complete(validation_tools, domain_plugin_complete):
"""Test complete domain plugin returns valid with all interfaces found"""
result = await validation_tools.validate_workflow_integration(
domain_plugin_complete,
"Domain/Viz"
)
assert "error" not in result
assert result["valid"] is True
assert result["gate_command_found"] is True
assert result["review_command_found"] is True
assert result["advisory_agent_found"] is True
# May have INFO issue about missing contract version (not an error/warning)
error_or_warning = [i for i in result["issues"]
if i["severity"].value in ("error", "warning")]
assert len(error_or_warning) == 0
@pytest.mark.asyncio
async def test_validate_workflow_integration_missing_gate(validation_tools, domain_plugin_missing_gate):
"""Test plugin missing gate command returns invalid with ERROR"""
result = await validation_tools.validate_workflow_integration(
domain_plugin_missing_gate,
"Domain/Data"
)
assert "error" not in result
assert result["valid"] is False
assert result["gate_command_found"] is False
assert result["review_command_found"] is True
assert result["advisory_agent_found"] is True
# Should have one ERROR for missing gate
error_issues = [i for i in result["issues"] if i["severity"].value == "error"]
assert len(error_issues) == 1
assert "gate" in error_issues[0]["message"].lower()
@pytest.mark.asyncio
async def test_validate_workflow_integration_minimal(validation_tools, domain_plugin_minimal):
"""Test minimal plugin returns invalid with multiple issues"""
result = await validation_tools.validate_workflow_integration(
domain_plugin_minimal,
"Domain/Test"
)
assert "error" not in result
assert result["valid"] is False
assert result["gate_command_found"] is False
assert result["review_command_found"] is False
assert result["advisory_agent_found"] is False
# Should have one ERROR (gate) and two WARNINGs (review, agent)
error_issues = [i for i in result["issues"] if i["severity"].value == "error"]
warning_issues = [i for i in result["issues"] if i["severity"].value == "warning"]
assert len(error_issues) == 1
assert len(warning_issues) == 2
@pytest.mark.asyncio
async def test_validate_workflow_integration_nonexistent_plugin(validation_tools, tmp_path):
"""Test error when plugin directory doesn't exist"""
result = await validation_tools.validate_workflow_integration(
str(tmp_path / "nonexistent"),
"Domain/Test"
)
assert "error" in result
assert "not found" in result["error"].lower()
# --- Gate Contract Version Tests ---
@pytest.fixture
def domain_plugin_with_contract(tmp_path):
"""Create domain plugin with gate_contract: v1 in frontmatter"""
plugin_dir = tmp_path / "viz-platform-versioned"
plugin_dir.mkdir()
(plugin_dir / ".claude-plugin").mkdir()
(plugin_dir / "commands").mkdir()
(plugin_dir / "agents").mkdir()
# Gate command with gate_contract in frontmatter
gate_cmd = plugin_dir / "commands" / "design-gate.md"
gate_cmd.write_text("""---
description: Design system compliance gate (pass/fail)
gate_contract: v1
---
# /design-gate
Binary pass/fail validation gate for design system compliance.
## Output
- **PASS**: All design system checks passed
- **FAIL**: Design system violations detected
""")
# Review command
review_cmd = plugin_dir / "commands" / "design-review.md"
review_cmd.write_text("""# /design-review
Comprehensive design system audit.
""")
# Advisory agent
agent = plugin_dir / "agents" / "design-reviewer.md"
agent.write_text("""# design-reviewer
Design system compliance auditor for Domain/Viz issues.
""")
return str(plugin_dir)
@pytest.mark.asyncio
async def test_validate_workflow_contract_match(validation_tools, domain_plugin_with_contract):
"""Test that matching expected_contract produces no warning"""
result = await validation_tools.validate_workflow_integration(
domain_plugin_with_contract,
"Domain/Viz",
expected_contract="v1"
)
assert "error" not in result
assert result["valid"] is True
assert result["gate_contract"] == "v1"
# Should have no warnings about contract mismatch
warning_issues = [i for i in result["issues"] if i["severity"].value == "warning"]
contract_warnings = [i for i in warning_issues if "contract" in i["message"].lower()]
assert len(contract_warnings) == 0
@pytest.mark.asyncio
async def test_validate_workflow_contract_mismatch(validation_tools, domain_plugin_with_contract):
"""Test that mismatched expected_contract produces WARNING"""
result = await validation_tools.validate_workflow_integration(
domain_plugin_with_contract,
"Domain/Viz",
expected_contract="v2" # Gate has v1
)
assert "error" not in result
assert result["valid"] is True # Contract mismatch doesn't affect validity
assert result["gate_contract"] == "v1"
# Should have warning about contract mismatch
warning_issues = [i for i in result["issues"] if i["severity"].value == "warning"]
contract_warnings = [i for i in warning_issues if "contract" in i["message"].lower()]
assert len(contract_warnings) == 1
assert "mismatch" in contract_warnings[0]["message"].lower()
assert "v1" in contract_warnings[0]["message"]
assert "v2" in contract_warnings[0]["message"]
@pytest.mark.asyncio
async def test_validate_workflow_no_contract(validation_tools, domain_plugin_complete):
"""Test that missing gate_contract produces INFO suggestion"""
result = await validation_tools.validate_workflow_integration(
domain_plugin_complete,
"Domain/Viz"
)
assert "error" not in result
assert result["valid"] is True
assert result["gate_contract"] is None
# Should have info issue about missing contract
info_issues = [i for i in result["issues"] if i["severity"].value == "info"]
contract_info = [i for i in info_issues if "contract" in i["message"].lower()]
assert len(contract_info) == 1
assert "does not declare" in contract_info[0]["message"].lower()

View File

@@ -0,0 +1,131 @@
# Data Platform MCP Server
MCP Server providing pandas, PostgreSQL/PostGIS, and dbt tools for Claude Code.
## Features
- **pandas Tools**: DataFrame operations with Arrow IPC data_ref persistence
- **PostgreSQL Tools**: Database queries with asyncpg connection pooling
- **PostGIS Tools**: Spatial data operations
- **dbt Tools**: Build tool wrapper with pre-execution validation
## Installation
```bash
cd mcp-servers/data-platform
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
pip install -r requirements.txt
```
## Configuration
### System-Level (PostgreSQL credentials)
Create `~/.config/claude/postgres.env`:
```env
POSTGRES_URL=postgresql://user:password@host:5432/database
```
### Project-Level (dbt paths)
Create `.env` in your project root:
```env
DBT_PROJECT_DIR=/path/to/dbt/project
DBT_PROFILES_DIR=/path/to/.dbt
DATA_PLATFORM_MAX_ROWS=100000
```
## Tools
### pandas Tools (14 tools)
| Tool | Description |
|------|-------------|
| `read_csv` | Load CSV file into DataFrame |
| `read_parquet` | Load Parquet file into DataFrame |
| `read_json` | Load JSON/JSONL file into DataFrame |
| `to_csv` | Export DataFrame to CSV file |
| `to_parquet` | Export DataFrame to Parquet file |
| `describe` | Get statistical summary of DataFrame |
| `head` | Get first N rows of DataFrame |
| `tail` | Get last N rows of DataFrame |
| `filter` | Filter DataFrame rows by condition |
| `select` | Select specific columns from DataFrame |
| `groupby` | Group DataFrame and aggregate |
| `join` | Join two DataFrames |
| `list_data` | List all stored DataFrames |
| `drop_data` | Remove a DataFrame from storage |
### PostgreSQL Tools (6 tools)
| Tool | Description |
|------|-------------|
| `pg_connect` | Test connection and return status |
| `pg_query` | Execute SELECT, return as data_ref |
| `pg_execute` | Execute INSERT/UPDATE/DELETE |
| `pg_tables` | List all tables in schema |
| `pg_columns` | Get column info for table |
| `pg_schemas` | List all schemas |
### PostGIS Tools (4 tools)
| Tool | Description |
|------|-------------|
| `st_tables` | List PostGIS-enabled tables |
| `st_geometry_type` | Get geometry type of column |
| `st_srid` | Get SRID of geometry column |
| `st_extent` | Get bounding box of geometries |
### dbt Tools (8 tools)
| Tool | Description |
|------|-------------|
| `dbt_parse` | Validate project (pre-execution) |
| `dbt_run` | Run models with selection |
| `dbt_test` | Run tests |
| `dbt_build` | Run + test |
| `dbt_compile` | Compile SQL without executing |
| `dbt_ls` | List resources |
| `dbt_docs_generate` | Generate documentation |
| `dbt_lineage` | Get model dependencies |
## data_ref System
All DataFrame operations use a `data_ref` system to persist data across tool calls:
1. **Load data**: Returns a `data_ref` string (e.g., `"df_a1b2c3d4"`)
2. **Use data_ref**: Pass to other tools (filter, join, export)
3. **List data**: Use `list_data` to see all stored DataFrames
4. **Clean up**: Use `drop_data` when done
### Example Flow
```
read_csv("data.csv") → {"data_ref": "sales_data", "rows": 1000}
filter("sales_data", "amount > 100") → {"data_ref": "sales_data_filtered"}
describe("sales_data_filtered") → {statistics}
to_parquet("sales_data_filtered", "output.parquet") → {success}
```
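For quick testing outside of Claude Code, the same flow can be driven in-process. This is a minimal sketch, assuming the package is importable as `mcp_server` and that `data.csv` (with an `amount` column) exists; in normal use these calls arrive as MCP tool invocations:
```python
import asyncio

from mcp_server.pandas_tools import PandasTools  # module path assumed


async def main():
    tools = PandasTools()
    loaded = await tools.read_csv("data.csv", name="sales_data")        # {"data_ref": "sales_data", ...}
    filtered = await tools.filter(loaded["data_ref"], "amount > 100")   # {"data_ref": "sales_data_filtered", ...}
    print(await tools.describe(filtered["data_ref"]))
    print(await tools.to_parquet(filtered["data_ref"], "output.parquet"))


asyncio.run(main())
```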
## Memory Management
- Default row limit: 100,000 rows per DataFrame
- Configure via `DATA_PLATFORM_MAX_ROWS` environment variable
- Use chunked processing for large files (`chunk_size` parameter); see the sketch below
- Monitor with `list_data` tool (shows memory usage)
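A minimal sketch of chunked loading and memory monitoring, again assuming the `mcp_server` package is importable and using `large.csv` as a placeholder path:
```python
import asyncio

from mcp_server.pandas_tools import PandasTools  # module path assumed


async def main():
    tools = PandasTools()
    # Each chunk is stored under its own data_ref ("events_0", "events_1", ...)
    result = await tools.read_csv("large.csv", name="events", chunk_size=50_000)
    print(result["total_chunks"], [c["ref"] for c in result["chunks"]])
    print(await tools.list_data())  # includes total_memory_mb across stored DataFrames


asyncio.run(main())
```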
## Running
```bash
python -m mcp_server.server
```
## Development
```bash
pip install -e ".[dev]"
pytest
```

View File

@@ -0,0 +1,7 @@
"""
Data Platform MCP Server.
Provides pandas, PostgreSQL/PostGIS, and dbt tools to Claude Code via MCP.
"""
__version__ = "1.0.0"

View File

@@ -0,0 +1,195 @@
"""
Configuration loader for Data Platform MCP Server.
Implements hybrid configuration system:
- System-level: ~/.config/claude/postgres.env (credentials)
- Project-level: .env (dbt project paths, overrides)
- Auto-detection: dbt_project.yml discovery
"""
from pathlib import Path
from dotenv import load_dotenv
import os
import logging
from typing import Any, Dict, Optional
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
class DataPlatformConfig:
"""Hybrid configuration loader for data platform tools"""
def __init__(self):
self.postgres_url: Optional[str] = None
self.dbt_project_dir: Optional[str] = None
self.dbt_profiles_dir: Optional[str] = None
self.max_rows: int = 100_000
def load(self) -> Dict[str, Optional[str]]:
"""
Load configuration from system and project levels.
Returns:
Dict containing postgres_url, dbt_project_dir, dbt_profiles_dir, max_rows
Note:
PostgreSQL credentials are optional - server can run in pandas-only mode.
"""
# Load system config (PostgreSQL credentials)
system_config = Path.home() / '.config' / 'claude' / 'postgres.env'
if system_config.exists():
load_dotenv(system_config)
logger.info(f"Loaded system configuration from {system_config}")
else:
logger.info(
f"System config not found: {system_config} - "
"PostgreSQL tools will be unavailable"
)
# Find project directory
project_dir = self._find_project_directory()
# Load project config (overrides system)
if project_dir:
project_config = project_dir / '.env'
if project_config.exists():
load_dotenv(project_config, override=True)
logger.info(f"Loaded project configuration from {project_config}")
# Extract values
self.postgres_url = os.getenv('POSTGRES_URL')
self.dbt_project_dir = os.getenv('DBT_PROJECT_DIR')
self.dbt_profiles_dir = os.getenv('DBT_PROFILES_DIR')
self.max_rows = int(os.getenv('DATA_PLATFORM_MAX_ROWS', '100000'))
# Auto-detect dbt project if not specified
if not self.dbt_project_dir and project_dir:
self.dbt_project_dir = self._find_dbt_project(project_dir)
if self.dbt_project_dir:
logger.info(f"Auto-detected dbt project: {self.dbt_project_dir}")
# Default dbt profiles dir to ~/.dbt
if not self.dbt_profiles_dir:
default_profiles = Path.home() / '.dbt'
if default_profiles.exists():
self.dbt_profiles_dir = str(default_profiles)
return {
'postgres_url': self.postgres_url,
'dbt_project_dir': self.dbt_project_dir,
'dbt_profiles_dir': self.dbt_profiles_dir,
'max_rows': self.max_rows,
'postgres_available': self.postgres_url is not None,
'dbt_available': self.dbt_project_dir is not None
}
def _find_project_directory(self) -> Optional[Path]:
"""
Find the user's project directory.
Returns:
Path to project directory, or None if not found
"""
# Strategy 1: Check CLAUDE_PROJECT_DIR environment variable
project_dir = os.getenv('CLAUDE_PROJECT_DIR')
if project_dir:
path = Path(project_dir)
if path.exists():
logger.info(f"Found project directory from CLAUDE_PROJECT_DIR: {path}")
return path
# Strategy 2: Check PWD
pwd = os.getenv('PWD')
if pwd:
path = Path(pwd)
if path.exists() and (
(path / '.git').exists() or
(path / '.env').exists() or
(path / 'dbt_project.yml').exists()
):
logger.info(f"Found project directory from PWD: {path}")
return path
# Strategy 3: Check current working directory
cwd = Path.cwd()
if (cwd / '.git').exists() or (cwd / '.env').exists() or (cwd / 'dbt_project.yml').exists():
logger.info(f"Found project directory from cwd: {cwd}")
return cwd
logger.debug("Could not determine project directory")
return None
def _find_dbt_project(self, start_dir: Path) -> Optional[str]:
"""
Find dbt_project.yml in the project or its subdirectories.
Args:
start_dir: Directory to start searching from
Returns:
Path to dbt project directory, or None if not found
"""
# Check root
if (start_dir / 'dbt_project.yml').exists():
return str(start_dir)
# Check common subdirectories
for subdir in ['dbt', 'transform', 'analytics', 'models']:
candidate = start_dir / subdir
if (candidate / 'dbt_project.yml').exists():
return str(candidate)
# Search one level deep
for item in start_dir.iterdir():
if item.is_dir() and not item.name.startswith('.'):
if (item / 'dbt_project.yml').exists():
return str(item)
return None
def load_config() -> Dict[str, Optional[str]]:
"""
Convenience function to load configuration.
Returns:
Configuration dictionary
"""
config = DataPlatformConfig()
return config.load()
def check_postgres_connection() -> Dict[str, Any]:
"""
Check PostgreSQL connection status for SessionStart hook.
Returns:
Dict with connection status and message
"""
import asyncio
config = load_config()
if not config.get('postgres_url'):
return {
'connected': False,
'message': 'PostgreSQL not configured (POSTGRES_URL not set)'
}
async def test_connection():
try:
import asyncpg
conn = await asyncpg.connect(config['postgres_url'], timeout=5)
version = await conn.fetchval('SELECT version()')
await conn.close()
return {
'connected': True,
'message': f'Connected to PostgreSQL',
'version': version.split(',')[0] if version else 'Unknown'
}
except Exception as e:
return {
'connected': False,
'message': f'PostgreSQL connection failed: {str(e)}'
}
return asyncio.run(test_connection())
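# Usage sketch (hypothetical; absolute module path assumed). The tool modules
# consume this loader via `from .config import load_config`:
#
#     from mcp_server.config import load_config
#
#     cfg = load_config()
#     if cfg['postgres_available']:
#         print('PostgreSQL configured')
#     print('dbt project:', cfg['dbt_project_dir'] or 'not detected')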

View File

@@ -0,0 +1,219 @@
"""
Arrow IPC DataFrame Registry.
Provides persistent storage for DataFrames across tool calls using Apache Arrow
for efficient memory management and serialization.
"""
import pyarrow as pa
import pandas as pd
import uuid
import logging
from typing import Dict, Optional, List, Union
from dataclasses import dataclass
from datetime import datetime
logger = logging.getLogger(__name__)
@dataclass
class DataFrameInfo:
"""Metadata about a stored DataFrame"""
ref: str
rows: int
columns: int
column_names: List[str]
dtypes: Dict[str, str]
memory_bytes: int
created_at: datetime
source: Optional[str] = None
class DataStore:
"""
Singleton registry for Arrow Tables (DataFrames).
Uses Arrow IPC format for efficient memory usage and supports
data_ref based retrieval across multiple tool calls.
"""
_instance = None
_dataframes: Dict[str, pa.Table] = {}
_metadata: Dict[str, DataFrameInfo] = {}
_max_rows: int = 100_000
def __new__(cls):
if cls._instance is None:
cls._instance = super().__new__(cls)
cls._dataframes = {}
cls._metadata = {}
return cls._instance
@classmethod
def get_instance(cls) -> 'DataStore':
"""Get the singleton instance"""
if cls._instance is None:
cls._instance = cls()
return cls._instance
@classmethod
def set_max_rows(cls, max_rows: int):
"""Set the maximum rows limit"""
cls._max_rows = max_rows
def store(
self,
data: Union[pa.Table, pd.DataFrame],
name: Optional[str] = None,
source: Optional[str] = None
) -> str:
"""
Store a DataFrame and return its reference.
Args:
data: Arrow Table or pandas DataFrame
name: Optional name for the reference (auto-generated if not provided)
source: Optional source description (e.g., file path, query)
Returns:
data_ref string to retrieve the DataFrame later
"""
# Convert pandas to Arrow if needed
if isinstance(data, pd.DataFrame):
table = pa.Table.from_pandas(data)
else:
table = data
# Generate reference
data_ref = name or f"df_{uuid.uuid4().hex[:8]}"
# Ensure unique reference
if data_ref in self._dataframes and name is None:
data_ref = f"{data_ref}_{uuid.uuid4().hex[:4]}"
# Store table
self._dataframes[data_ref] = table
# Store metadata
schema = table.schema
self._metadata[data_ref] = DataFrameInfo(
ref=data_ref,
rows=table.num_rows,
columns=table.num_columns,
column_names=[f.name for f in schema],
dtypes={f.name: str(f.type) for f in schema},
memory_bytes=table.nbytes,
created_at=datetime.now(),
source=source
)
logger.info(f"Stored DataFrame '{data_ref}': {table.num_rows} rows, {table.num_columns} cols")
return data_ref
def get(self, data_ref: str) -> Optional[pa.Table]:
"""
Retrieve an Arrow Table by reference.
Args:
data_ref: Reference string from store()
Returns:
Arrow Table or None if not found
"""
return self._dataframes.get(data_ref)
def get_pandas(self, data_ref: str) -> Optional[pd.DataFrame]:
"""
Retrieve a DataFrame as pandas.
Args:
data_ref: Reference string from store()
Returns:
pandas DataFrame or None if not found
"""
table = self.get(data_ref)
if table is not None:
return table.to_pandas()
return None
def get_info(self, data_ref: str) -> Optional[DataFrameInfo]:
"""
Get metadata about a stored DataFrame.
Args:
data_ref: Reference string
Returns:
DataFrameInfo or None if not found
"""
return self._metadata.get(data_ref)
def list_refs(self) -> List[Dict]:
"""
List all stored DataFrame references with metadata.
Returns:
List of dicts with ref, rows, columns, memory info
"""
result = []
for ref, info in self._metadata.items():
result.append({
'ref': ref,
'rows': info.rows,
'columns': info.columns,
'column_names': info.column_names,
'memory_mb': round(info.memory_bytes / (1024 * 1024), 2),
'source': info.source,
'created_at': info.created_at.isoformat()
})
return result
def drop(self, data_ref: str) -> bool:
"""
Remove a DataFrame from the store.
Args:
data_ref: Reference string
Returns:
True if removed, False if not found
"""
if data_ref in self._dataframes:
del self._dataframes[data_ref]
del self._metadata[data_ref]
logger.info(f"Dropped DataFrame '{data_ref}'")
return True
return False
def clear(self):
"""Remove all stored DataFrames"""
count = len(self._dataframes)
self._dataframes.clear()
self._metadata.clear()
logger.info(f"Cleared {count} DataFrames from store")
def total_memory_bytes(self) -> int:
"""Get total memory used by all stored DataFrames"""
return sum(info.memory_bytes for info in self._metadata.values())
def total_memory_mb(self) -> float:
"""Get total memory in MB"""
return round(self.total_memory_bytes() / (1024 * 1024), 2)
def check_row_limit(self, row_count: int) -> Dict:
"""
Check if row count exceeds limit.
Args:
row_count: Number of rows
Returns:
Dict with 'exceeded' bool and 'message' if exceeded
"""
if row_count > self._max_rows:
return {
'exceeded': True,
'message': f"Row count ({row_count:,}) exceeds limit ({self._max_rows:,})",
'suggestion': f"Use chunked processing or filter data first",
'limit': self._max_rows
}
return {'exceeded': False}
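# Usage sketch (hypothetical; absolute module path assumed). The registry is
# normally used through the tool classes rather than directly:
#
#     import pandas as pd
#     from mcp_server.data_store import DataStore
#
#     store = DataStore.get_instance()
#     ref = store.store(pd.DataFrame({"x": [1, 2, 3]}), name="demo")
#     print(store.get_info(ref).rows)   # 3
#     print(store.get_pandas(ref))      # round-trip back to pandas
#     store.drop(ref)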

View File

@@ -0,0 +1,387 @@
"""
dbt MCP Tools.
Provides dbt CLI wrapper with pre-execution validation.
"""
import subprocess
import json
import logging
import os
from pathlib import Path
from typing import Dict, List, Optional, Any
from .config import load_config
logger = logging.getLogger(__name__)
class DbtTools:
"""dbt CLI wrapper tools with pre-validation"""
def __init__(self):
self.config = load_config()
self.project_dir = self.config.get('dbt_project_dir')
self.profiles_dir = self.config.get('dbt_profiles_dir')
def _get_dbt_command(self, cmd: List[str]) -> List[str]:
"""Build dbt command with project and profiles directories"""
base = ['dbt']
if self.project_dir:
base.extend(['--project-dir', self.project_dir])
if self.profiles_dir:
base.extend(['--profiles-dir', self.profiles_dir])
base.extend(cmd)
return base
def _run_dbt(
self,
cmd: List[str],
timeout: int = 300,
capture_json: bool = False
) -> Dict:
"""
Run dbt command and return result.
Args:
cmd: dbt subcommand and arguments
timeout: Command timeout in seconds
capture_json: If True, parse JSON output
Returns:
Dict with command result
"""
if not self.project_dir:
return {
'error': 'dbt project not found',
'suggestion': 'Set DBT_PROJECT_DIR in project .env or ensure dbt_project.yml exists'
}
full_cmd = self._get_dbt_command(cmd)
logger.info(f"Running: {' '.join(full_cmd)}")
try:
env = os.environ.copy()
# Disable dbt analytics/tracking
env['DBT_SEND_ANONYMOUS_USAGE_STATS'] = 'false'
result = subprocess.run(
full_cmd,
capture_output=True,
text=True,
timeout=timeout,
cwd=self.project_dir,
env=env
)
output = {
'success': result.returncode == 0,
'command': ' '.join(cmd),
'stdout': result.stdout,
'stderr': result.stderr if result.returncode != 0 else None
}
if capture_json and result.returncode == 0:
try:
output['data'] = json.loads(result.stdout)
except json.JSONDecodeError:
pass
return output
except subprocess.TimeoutExpired:
return {
'error': f'Command timed out after {timeout}s',
'command': ' '.join(cmd)
}
except FileNotFoundError:
return {
'error': 'dbt not found in PATH',
'suggestion': 'Install dbt: pip install dbt-core dbt-postgres'
}
except Exception as e:
logger.error(f"dbt command failed: {e}")
return {'error': str(e)}
async def dbt_parse(self) -> Dict:
"""
Validate dbt project without executing (pre-flight check).
Returns:
Dict with validation result and any errors
"""
result = self._run_dbt(['parse'])
# Check if _run_dbt returned an error (e.g., project not found, timeout, dbt not installed)
if 'error' in result:
return result
if not result.get('success'):
# Extract useful error info from stderr
stderr = result.get('stderr', '') or result.get('stdout', '')
errors = []
# Look for common dbt 1.9+ deprecation warnings
if 'deprecated' in stderr.lower():
errors.append({
'type': 'deprecation',
'message': 'Deprecated syntax found - check dbt 1.9+ migration guide'
})
# Look for compilation errors
if 'compilation error' in stderr.lower():
errors.append({
'type': 'compilation',
'message': 'SQL compilation error - check model syntax'
})
return {
'valid': False,
'errors': errors,
'details': stderr[:2000] if stderr else None,
'suggestion': 'Fix issues before running dbt models'
}
return {
'valid': True,
'message': 'dbt project validation passed'
}
async def dbt_run(
self,
select: Optional[str] = None,
exclude: Optional[str] = None,
full_refresh: bool = False
) -> Dict:
"""
Run dbt models with pre-validation.
Args:
select: Model selection (e.g., "model_name", "+model_name", "tag:daily")
exclude: Models to exclude
full_refresh: If True, rebuild incremental models
Returns:
Dict with run result
"""
# ALWAYS validate first
parse_result = await self.dbt_parse()
if not parse_result.get('valid'):
return {
'error': 'Pre-validation failed',
**parse_result
}
cmd = ['run']
if select:
cmd.extend(['--select', select])
if exclude:
cmd.extend(['--exclude', exclude])
if full_refresh:
cmd.append('--full-refresh')
return self._run_dbt(cmd)
async def dbt_test(
self,
select: Optional[str] = None,
exclude: Optional[str] = None
) -> Dict:
"""
Run dbt tests.
Args:
select: Test selection
exclude: Tests to exclude
Returns:
Dict with test results
"""
cmd = ['test']
if select:
cmd.extend(['--select', select])
if exclude:
cmd.extend(['--exclude', exclude])
return self._run_dbt(cmd)
async def dbt_build(
self,
select: Optional[str] = None,
exclude: Optional[str] = None,
full_refresh: bool = False
) -> Dict:
"""
Run dbt build (run + test) with pre-validation.
Args:
select: Model/test selection
exclude: Resources to exclude
full_refresh: If True, rebuild incremental models
Returns:
Dict with build result
"""
# ALWAYS validate first
parse_result = await self.dbt_parse()
if not parse_result.get('valid'):
return {
'error': 'Pre-validation failed',
**parse_result
}
cmd = ['build']
if select:
cmd.extend(['--select', select])
if exclude:
cmd.extend(['--exclude', exclude])
if full_refresh:
cmd.append('--full-refresh')
return self._run_dbt(cmd)
async def dbt_compile(
self,
select: Optional[str] = None
) -> Dict:
"""
Compile dbt models to SQL without executing.
Args:
select: Model selection
Returns:
Dict with compiled SQL info
"""
cmd = ['compile']
if select:
cmd.extend(['--select', select])
return self._run_dbt(cmd)
async def dbt_ls(
self,
select: Optional[str] = None,
resource_type: Optional[str] = None,
output: str = 'name'
) -> Dict:
"""
List dbt resources.
Args:
select: Resource selection
resource_type: Filter by type (model, test, seed, snapshot, source)
output: Output format ('name', 'path', 'json')
Returns:
Dict with list of resources
"""
cmd = ['ls', '--output', output]
if select:
cmd.extend(['--select', select])
if resource_type:
cmd.extend(['--resource-type', resource_type])
result = self._run_dbt(cmd)
if result.get('success') and result.get('stdout'):
lines = [l.strip() for l in result['stdout'].split('\n') if l.strip()]
result['resources'] = lines
result['count'] = len(lines)
return result
async def dbt_docs_generate(self) -> Dict:
"""
Generate dbt documentation.
Returns:
Dict with generation result
"""
result = self._run_dbt(['docs', 'generate'])
if result.get('success') and self.project_dir:
# Check for generated catalog
catalog_path = Path(self.project_dir) / 'target' / 'catalog.json'
manifest_path = Path(self.project_dir) / 'target' / 'manifest.json'
result['catalog_generated'] = catalog_path.exists()
result['manifest_generated'] = manifest_path.exists()
return result
async def dbt_lineage(self, model: str) -> Dict:
"""
Get model dependencies and lineage.
Args:
model: Model name to analyze
Returns:
Dict with upstream and downstream dependencies
"""
if not self.project_dir:
return {'error': 'dbt project not found'}
manifest_path = Path(self.project_dir) / 'target' / 'manifest.json'
# Generate manifest if not exists
if not manifest_path.exists():
compile_result = await self.dbt_compile(select=model)
if not compile_result.get('success'):
return {
'error': 'Failed to compile manifest',
'details': compile_result
}
if not manifest_path.exists():
return {
'error': 'Manifest not found',
'suggestion': 'Run dbt compile first'
}
try:
with open(manifest_path) as f:
manifest = json.load(f)
# Find the model node
model_key = None
for key in manifest.get('nodes', {}):
if key.endswith(f'.{model}') or manifest['nodes'][key].get('name') == model:
model_key = key
break
if not model_key:
return {
'error': f'Model not found: {model}',
'available_models': [
n.get('name') for n in manifest.get('nodes', {}).values()
if n.get('resource_type') == 'model'
][:20]
}
node = manifest['nodes'][model_key]
# Get upstream (depends_on)
upstream = node.get('depends_on', {}).get('nodes', [])
# Get downstream (find nodes that depend on this one)
downstream = []
for key, other_node in manifest.get('nodes', {}).items():
deps = other_node.get('depends_on', {}).get('nodes', [])
if model_key in deps:
downstream.append(key)
return {
'model': model,
'unique_id': model_key,
'materialization': node.get('config', {}).get('materialized'),
'schema': node.get('schema'),
'database': node.get('database'),
'upstream': upstream,
'downstream': downstream,
'description': node.get('description'),
'tags': node.get('tags', [])
}
except Exception as e:
logger.error(f"dbt_lineage failed: {e}")
return {'error': str(e)}
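# Usage sketch (hypothetical; absolute module path assumed, normally driven via MCP tool calls):
#
#     import asyncio
#     from mcp_server.dbt_tools import DbtTools
#
#     async def demo():
#         tools = DbtTools()
#         parse = await tools.dbt_parse()          # pre-flight validation
#         if parse.get('valid'):
#             # dbt_run repeats the parse check internally before executing
#             print(await tools.dbt_run(select='tag:daily'))
#
#     asyncio.run(demo())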

View File

@@ -0,0 +1,500 @@
"""
pandas MCP Tools.
Provides DataFrame operations with Arrow IPC data_ref persistence.
"""
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
import json
import logging
from pathlib import Path
from typing import Dict, List, Optional, Any, Union
from .data_store import DataStore
from .config import load_config
logger = logging.getLogger(__name__)
class PandasTools:
"""pandas data manipulation tools with data_ref persistence"""
def __init__(self):
self.store = DataStore.get_instance()
config = load_config()
self.max_rows = config.get('max_rows', 100_000)
self.store.set_max_rows(self.max_rows)
def _check_and_store(
self,
df: pd.DataFrame,
name: Optional[str] = None,
source: Optional[str] = None
) -> Dict:
"""Check row limit and store DataFrame if within limits"""
check = self.store.check_row_limit(len(df))
if check['exceeded']:
return {
'error': 'row_limit_exceeded',
**check,
'preview': df.head(100).to_dict(orient='records')
}
data_ref = self.store.store(df, name=name, source=source)
return {
'data_ref': data_ref,
'rows': len(df),
'columns': list(df.columns),
'dtypes': {col: str(dtype) for col, dtype in df.dtypes.items()}
}
async def read_csv(
self,
file_path: str,
name: Optional[str] = None,
chunk_size: Optional[int] = None,
**kwargs
) -> Dict:
"""
Load CSV file into DataFrame.
Args:
file_path: Path to CSV file
name: Optional name for data_ref
chunk_size: If provided, process in chunks
**kwargs: Additional pandas read_csv arguments
Returns:
Dict with data_ref or error info
"""
path = Path(file_path)
if not path.exists():
return {'error': f'File not found: {file_path}'}
try:
if chunk_size:
# Chunked processing - return iterator info
chunks = []
for i, chunk in enumerate(pd.read_csv(path, chunksize=chunk_size, **kwargs)):
chunk_ref = self.store.store(chunk, name=f"{name or 'chunk'}_{i}", source=file_path)
chunks.append({'ref': chunk_ref, 'rows': len(chunk)})
return {
'chunked': True,
'chunks': chunks,
'total_chunks': len(chunks)
}
df = pd.read_csv(path, **kwargs)
return self._check_and_store(df, name=name, source=file_path)
except Exception as e:
logger.error(f"read_csv failed: {e}")
return {'error': str(e)}
async def read_parquet(
self,
file_path: str,
name: Optional[str] = None,
columns: Optional[List[str]] = None
) -> Dict:
"""
Load Parquet file into DataFrame.
Args:
file_path: Path to Parquet file
name: Optional name for data_ref
columns: Optional list of columns to load
Returns:
Dict with data_ref or error info
"""
path = Path(file_path)
if not path.exists():
return {'error': f'File not found: {file_path}'}
try:
table = pq.read_table(path, columns=columns)
df = table.to_pandas()
return self._check_and_store(df, name=name, source=file_path)
except Exception as e:
logger.error(f"read_parquet failed: {e}")
return {'error': str(e)}
async def read_json(
self,
file_path: str,
name: Optional[str] = None,
lines: bool = False,
**kwargs
) -> Dict:
"""
Load JSON/JSONL file into DataFrame.
Args:
file_path: Path to JSON file
name: Optional name for data_ref
lines: If True, read as JSON Lines format
**kwargs: Additional pandas read_json arguments
Returns:
Dict with data_ref or error info
"""
path = Path(file_path)
if not path.exists():
return {'error': f'File not found: {file_path}'}
try:
df = pd.read_json(path, lines=lines, **kwargs)
return self._check_and_store(df, name=name, source=file_path)
except Exception as e:
logger.error(f"read_json failed: {e}")
return {'error': str(e)}
async def to_csv(
self,
data_ref: str,
file_path: str,
index: bool = False,
**kwargs
) -> Dict:
"""
Export DataFrame to CSV file.
Args:
data_ref: Reference to stored DataFrame
file_path: Output file path
index: Whether to include index
**kwargs: Additional pandas to_csv arguments
Returns:
Dict with success status
"""
df = self.store.get_pandas(data_ref)
if df is None:
return {'error': f'DataFrame not found: {data_ref}'}
try:
df.to_csv(file_path, index=index, **kwargs)
return {
'success': True,
'file_path': file_path,
'rows': len(df),
'size_bytes': Path(file_path).stat().st_size
}
except Exception as e:
logger.error(f"to_csv failed: {e}")
return {'error': str(e)}
async def to_parquet(
self,
data_ref: str,
file_path: str,
compression: str = 'snappy'
) -> Dict:
"""
Export DataFrame to Parquet file.
Args:
data_ref: Reference to stored DataFrame
file_path: Output file path
compression: Compression codec
Returns:
Dict with success status
"""
table = self.store.get(data_ref)
if table is None:
return {'error': f'DataFrame not found: {data_ref}'}
try:
pq.write_table(table, file_path, compression=compression)
return {
'success': True,
'file_path': file_path,
'rows': table.num_rows,
'size_bytes': Path(file_path).stat().st_size
}
except Exception as e:
logger.error(f"to_parquet failed: {e}")
return {'error': str(e)}
async def describe(self, data_ref: str) -> Dict:
"""
Get statistical summary of DataFrame.
Args:
data_ref: Reference to stored DataFrame
Returns:
Dict with statistical summary
"""
df = self.store.get_pandas(data_ref)
if df is None:
return {'error': f'DataFrame not found: {data_ref}'}
try:
desc = df.describe(include='all')
info = self.store.get_info(data_ref)
return {
'data_ref': data_ref,
'shape': {'rows': len(df), 'columns': len(df.columns)},
'columns': list(df.columns),
'dtypes': {col: str(dtype) for col, dtype in df.dtypes.items()},
'memory_mb': info.memory_bytes / (1024 * 1024) if info else None,
'null_counts': df.isnull().sum().to_dict(),
'statistics': desc.to_dict()
}
except Exception as e:
logger.error(f"describe failed: {e}")
return {'error': str(e)}
async def head(self, data_ref: str, n: int = 10) -> Dict:
"""
Get first N rows of DataFrame.
Args:
data_ref: Reference to stored DataFrame
n: Number of rows
Returns:
Dict with rows as records
"""
df = self.store.get_pandas(data_ref)
if df is None:
return {'error': f'DataFrame not found: {data_ref}'}
try:
head_df = df.head(n)
return {
'data_ref': data_ref,
'total_rows': len(df),
'returned_rows': len(head_df),
'columns': list(df.columns),
'data': head_df.to_dict(orient='records')
}
except Exception as e:
logger.error(f"head failed: {e}")
return {'error': str(e)}
async def tail(self, data_ref: str, n: int = 10) -> Dict:
"""
Get last N rows of DataFrame.
Args:
data_ref: Reference to stored DataFrame
n: Number of rows
Returns:
Dict with rows as records
"""
df = self.store.get_pandas(data_ref)
if df is None:
return {'error': f'DataFrame not found: {data_ref}'}
try:
tail_df = df.tail(n)
return {
'data_ref': data_ref,
'total_rows': len(df),
'returned_rows': len(tail_df),
'columns': list(df.columns),
'data': tail_df.to_dict(orient='records')
}
except Exception as e:
logger.error(f"tail failed: {e}")
return {'error': str(e)}
async def filter(
self,
data_ref: str,
condition: str,
name: Optional[str] = None
) -> Dict:
"""
Filter DataFrame rows by condition.
Args:
data_ref: Reference to stored DataFrame
condition: pandas query string (e.g., "age > 30 and city == 'NYC'")
name: Optional name for result data_ref
Returns:
Dict with new data_ref for filtered result
"""
df = self.store.get_pandas(data_ref)
if df is None:
return {'error': f'DataFrame not found: {data_ref}'}
try:
filtered = df.query(condition).reset_index(drop=True)
result_name = name or f"{data_ref}_filtered"
return self._check_and_store(
filtered,
name=result_name,
source=f"filter({data_ref}, '{condition}')"
)
except Exception as e:
logger.error(f"filter failed: {e}")
return {'error': str(e)}
async def select(
self,
data_ref: str,
columns: List[str],
name: Optional[str] = None
) -> Dict:
"""
Select specific columns from DataFrame.
Args:
data_ref: Reference to stored DataFrame
columns: List of column names to select
name: Optional name for result data_ref
Returns:
Dict with new data_ref for selected columns
"""
df = self.store.get_pandas(data_ref)
if df is None:
return {'error': f'DataFrame not found: {data_ref}'}
try:
# Validate columns exist
missing = [c for c in columns if c not in df.columns]
if missing:
return {
'error': f'Columns not found: {missing}',
'available_columns': list(df.columns)
}
selected = df[columns]
result_name = name or f"{data_ref}_select"
return self._check_and_store(
selected,
name=result_name,
source=f"select({data_ref}, {columns})"
)
except Exception as e:
logger.error(f"select failed: {e}")
return {'error': str(e)}
async def groupby(
self,
data_ref: str,
by: Union[str, List[str]],
agg: Dict[str, Union[str, List[str]]],
name: Optional[str] = None
) -> Dict:
"""
Group DataFrame and aggregate.
Args:
data_ref: Reference to stored DataFrame
by: Column(s) to group by
agg: Aggregation dict (e.g., {"sales": "sum", "count": "mean"})
name: Optional name for result data_ref
Returns:
Dict with new data_ref for aggregated result
"""
df = self.store.get_pandas(data_ref)
if df is None:
return {'error': f'DataFrame not found: {data_ref}'}
try:
grouped = df.groupby(by).agg(agg).reset_index()
# Flatten column names if multi-level
if isinstance(grouped.columns, pd.MultiIndex):
grouped.columns = ['_'.join(col).strip('_') for col in grouped.columns]
result_name = name or f"{data_ref}_grouped"
return self._check_and_store(
grouped,
name=result_name,
source=f"groupby({data_ref}, by={by})"
)
except Exception as e:
logger.error(f"groupby failed: {e}")
return {'error': str(e)}
async def join(
self,
left_ref: str,
right_ref: str,
on: Optional[Union[str, List[str]]] = None,
left_on: Optional[Union[str, List[str]]] = None,
right_on: Optional[Union[str, List[str]]] = None,
how: str = 'inner',
name: Optional[str] = None
) -> Dict:
"""
Join two DataFrames.
Args:
left_ref: Reference to left DataFrame
right_ref: Reference to right DataFrame
on: Column(s) to join on (if same name in both)
left_on: Left join column(s)
right_on: Right join column(s)
how: Join type ('inner', 'left', 'right', 'outer')
name: Optional name for result data_ref
Returns:
Dict with new data_ref for joined result
"""
left_df = self.store.get_pandas(left_ref)
right_df = self.store.get_pandas(right_ref)
if left_df is None:
return {'error': f'DataFrame not found: {left_ref}'}
if right_df is None:
return {'error': f'DataFrame not found: {right_ref}'}
try:
joined = pd.merge(
left_df, right_df,
on=on, left_on=left_on, right_on=right_on,
how=how
)
result_name = name or f"{left_ref}_{right_ref}_joined"
return self._check_and_store(
joined,
name=result_name,
source=f"join({left_ref}, {right_ref}, how={how})"
)
except Exception as e:
logger.error(f"join failed: {e}")
return {'error': str(e)}
async def list_data(self) -> Dict:
"""
List all stored DataFrames.
Returns:
Dict with list of stored DataFrames and their info
"""
refs = self.store.list_refs()
return {
'count': len(refs),
'total_memory_mb': self.store.total_memory_mb(),
'max_rows_limit': self.max_rows,
'dataframes': refs
}
async def drop_data(self, data_ref: str) -> Dict:
"""
Remove a DataFrame from storage.
Args:
data_ref: Reference to drop
Returns:
Dict with success status
"""
if self.store.drop(data_ref):
return {'success': True, 'dropped': data_ref}
return {'error': f'DataFrame not found: {data_ref}'}

View File

@@ -0,0 +1,538 @@
"""
PostgreSQL/PostGIS MCP Tools.
Provides database operations with connection pooling and PostGIS support.
"""
import asyncio
import logging
from typing import Dict, List, Optional, Any
import json
from .data_store import DataStore
from .config import load_config
logger = logging.getLogger(__name__)
# Optional imports - gracefully handle missing dependencies
try:
import asyncpg
ASYNCPG_AVAILABLE = True
except ImportError:
ASYNCPG_AVAILABLE = False
logger.warning("asyncpg not available - PostgreSQL tools will be disabled")
try:
import pandas as pd
PANDAS_AVAILABLE = True
except ImportError:
PANDAS_AVAILABLE = False
class PostgresTools:
"""PostgreSQL/PostGIS database tools"""
def __init__(self):
self.store = DataStore.get_instance()
self.config = load_config()
self.pool: Optional[Any] = None
self.max_rows = self.config.get('max_rows', 100_000)
async def _get_pool(self):
"""Get or create connection pool"""
if not ASYNCPG_AVAILABLE:
raise RuntimeError("asyncpg not installed - run: pip install asyncpg")
if self.pool is None:
postgres_url = self.config.get('postgres_url')
if not postgres_url:
raise RuntimeError(
"PostgreSQL not configured. Set POSTGRES_URL in "
"~/.config/claude/postgres.env"
)
self.pool = await asyncpg.create_pool(postgres_url, min_size=1, max_size=5)
return self.pool
async def pg_connect(self) -> Dict:
"""
Test PostgreSQL connection and return status.
Returns:
Dict with connection status, version, and database info
"""
if not ASYNCPG_AVAILABLE:
return {
'connected': False,
'error': 'asyncpg not installed',
'suggestion': 'pip install asyncpg'
}
postgres_url = self.config.get('postgres_url')
if not postgres_url:
return {
'connected': False,
'error': 'POSTGRES_URL not configured',
'suggestion': 'Create ~/.config/claude/postgres.env with POSTGRES_URL=postgresql://...'
}
try:
pool = await self._get_pool()
async with pool.acquire() as conn:
version = await conn.fetchval('SELECT version()')
db_name = await conn.fetchval('SELECT current_database()')
user = await conn.fetchval('SELECT current_user')
# Check for PostGIS
postgis_version = None
try:
postgis_version = await conn.fetchval('SELECT PostGIS_Version()')
except Exception:
pass
return {
'connected': True,
'database': db_name,
'user': user,
'version': version.split(',')[0] if version else 'Unknown',
'postgis_version': postgis_version,
'postgis_available': postgis_version is not None
}
except Exception as e:
logger.error(f"pg_connect failed: {e}")
return {
'connected': False,
'error': str(e)
}
async def pg_query(
self,
query: str,
params: Optional[List] = None,
name: Optional[str] = None
) -> Dict:
"""
Execute SELECT query and return results as data_ref.
Args:
query: SQL SELECT query
params: Query parameters (positional, use $1, $2, etc.)
name: Optional name for result data_ref
Returns:
Dict with data_ref for results or error
"""
if not PANDAS_AVAILABLE:
return {'error': 'pandas not available'}
try:
pool = await self._get_pool()
async with pool.acquire() as conn:
if params:
rows = await conn.fetch(query, *params)
else:
rows = await conn.fetch(query)
if not rows:
return {
'data_ref': None,
'rows': 0,
'message': 'Query returned no results'
}
# Convert to DataFrame
df = pd.DataFrame([dict(r) for r in rows])
# Check row limit
check = self.store.check_row_limit(len(df))
if check['exceeded']:
return {
'error': 'row_limit_exceeded',
**check,
'preview': df.head(100).to_dict(orient='records')
}
# Store result
data_ref = self.store.store(df, name=name, source=f"pg_query: {query[:100]}...")
return {
'data_ref': data_ref,
'rows': len(df),
'columns': list(df.columns)
}
except Exception as e:
logger.error(f"pg_query failed: {e}")
return {'error': str(e)}
async def pg_execute(
self,
query: str,
params: Optional[List] = None
) -> Dict:
"""
Execute INSERT/UPDATE/DELETE query.
Args:
query: SQL DML query
params: Query parameters
Returns:
Dict with affected rows count
"""
try:
pool = await self._get_pool()
async with pool.acquire() as conn:
if params:
result = await conn.execute(query, *params)
else:
result = await conn.execute(query)
# Parse result (e.g., "INSERT 0 1" or "UPDATE 5")
parts = result.split()
affected = int(parts[-1]) if parts else 0
return {
'success': True,
'command': parts[0] if parts else 'UNKNOWN',
'affected_rows': affected
}
except Exception as e:
logger.error(f"pg_execute failed: {e}")
return {'error': str(e)}
async def pg_tables(self, schema: str = 'public') -> Dict:
"""
List all tables in schema.
Args:
schema: Schema name (default: public)
Returns:
Dict with list of tables
"""
query = """
SELECT
table_name,
table_type,
(SELECT count(*) FROM information_schema.columns c
WHERE c.table_schema = t.table_schema
AND c.table_name = t.table_name) as column_count
FROM information_schema.tables t
WHERE table_schema = $1
ORDER BY table_name
"""
try:
pool = await self._get_pool()
async with pool.acquire() as conn:
rows = await conn.fetch(query, schema)
tables = [
{
'name': r['table_name'],
'type': r['table_type'],
'columns': r['column_count']
}
for r in rows
]
return {
'schema': schema,
'count': len(tables),
'tables': tables
}
except Exception as e:
logger.error(f"pg_tables failed: {e}")
return {'error': str(e)}
async def pg_columns(self, table: str, schema: str = 'public') -> Dict:
"""
Get column information for a table.
Args:
table: Table name
schema: Schema name (default: public)
Returns:
Dict with column details
"""
query = """
SELECT
column_name,
data_type,
udt_name,
is_nullable,
column_default,
character_maximum_length,
numeric_precision
FROM information_schema.columns
WHERE table_schema = $1 AND table_name = $2
ORDER BY ordinal_position
"""
try:
pool = await self._get_pool()
async with pool.acquire() as conn:
rows = await conn.fetch(query, schema, table)
columns = [
{
'name': r['column_name'],
'type': r['data_type'],
'udt': r['udt_name'],
'nullable': r['is_nullable'] == 'YES',
'default': r['column_default'],
'max_length': r['character_maximum_length'],
'precision': r['numeric_precision']
}
for r in rows
]
return {
'table': f'{schema}.{table}',
'column_count': len(columns),
'columns': columns
}
except Exception as e:
logger.error(f"pg_columns failed: {e}")
return {'error': str(e)}
async def pg_schemas(self) -> Dict:
"""
List all schemas in database.
Returns:
Dict with list of schemas
"""
query = """
SELECT schema_name
FROM information_schema.schemata
WHERE schema_name NOT IN ('pg_catalog', 'information_schema', 'pg_toast')
ORDER BY schema_name
"""
try:
pool = await self._get_pool()
async with pool.acquire() as conn:
rows = await conn.fetch(query)
schemas = [r['schema_name'] for r in rows]
return {
'count': len(schemas),
'schemas': schemas
}
except Exception as e:
logger.error(f"pg_schemas failed: {e}")
return {'error': str(e)}
async def st_tables(self, schema: str = 'public') -> Dict:
"""
List PostGIS-enabled tables.
Args:
schema: Schema name (default: public)
Returns:
Dict with list of tables with geometry columns
"""
query = """
SELECT
f_table_name as table_name,
f_geometry_column as geometry_column,
type as geometry_type,
srid,
coord_dimension
FROM geometry_columns
WHERE f_table_schema = $1
ORDER BY f_table_name
"""
try:
pool = await self._get_pool()
async with pool.acquire() as conn:
rows = await conn.fetch(query, schema)
tables = [
{
'table': r['table_name'],
'geometry_column': r['geometry_column'],
'geometry_type': r['geometry_type'],
'srid': r['srid'],
'dimensions': r['coord_dimension']
}
for r in rows
]
return {
'schema': schema,
'count': len(tables),
'postgis_tables': tables
}
except Exception as e:
if 'geometry_columns' in str(e):
return {
'error': 'PostGIS not installed or extension not enabled',
'suggestion': 'Run: CREATE EXTENSION IF NOT EXISTS postgis;'
}
logger.error(f"st_tables failed: {e}")
return {'error': str(e)}
async def st_geometry_type(self, table: str, column: str, schema: str = 'public') -> Dict:
"""
Get geometry type of a column.
Args:
table: Table name
column: Geometry column name
schema: Schema name
Returns:
Dict with geometry type information
"""
query = f"""
SELECT DISTINCT ST_GeometryType({column}) as geom_type
FROM {schema}.{table}
WHERE {column} IS NOT NULL
LIMIT 10
"""
try:
pool = await self._get_pool()
async with pool.acquire() as conn:
rows = await conn.fetch(query)
types = [r['geom_type'] for r in rows]
return {
'table': f'{schema}.{table}',
'column': column,
'geometry_types': types
}
except Exception as e:
logger.error(f"st_geometry_type failed: {e}")
return {'error': str(e)}
async def st_srid(self, table: str, column: str, schema: str = 'public') -> Dict:
"""
Get SRID of geometry column.
Args:
table: Table name
column: Geometry column name
schema: Schema name
Returns:
Dict with SRID information
"""
query = f"""
SELECT DISTINCT ST_SRID({column}) as srid
FROM {schema}.{table}
WHERE {column} IS NOT NULL
LIMIT 1
"""
try:
pool = await self._get_pool()
async with pool.acquire() as conn:
row = await conn.fetchrow(query)
srid = row['srid'] if row else None
# Get SRID description
srid_info = None
if srid:
srid_query = """
SELECT srtext, proj4text
FROM spatial_ref_sys
WHERE srid = $1
"""
srid_row = await conn.fetchrow(srid_query, srid)
if srid_row:
srid_info = {
'description': srid_row['srtext'][:200] if srid_row['srtext'] else None,
'proj4': srid_row['proj4text']
}
return {
'table': f'{schema}.{table}',
'column': column,
'srid': srid,
'info': srid_info
}
except Exception as e:
logger.error(f"st_srid failed: {e}")
return {'error': str(e)}
async def st_extent(self, table: str, column: str, schema: str = 'public') -> Dict:
"""
Get bounding box of all geometries.
Args:
table: Table name
column: Geometry column name
schema: Schema name
Returns:
Dict with bounding box coordinates
"""
query = f"""
SELECT
ST_XMin(extent) as xmin,
ST_YMin(extent) as ymin,
ST_XMax(extent) as xmax,
ST_YMax(extent) as ymax
FROM (
SELECT ST_Extent({column}) as extent
FROM {schema}.{table}
) sub
"""
try:
pool = await self._get_pool()
async with pool.acquire() as conn:
row = await conn.fetchrow(query)
if row and row['xmin'] is not None:
return {
'table': f'{schema}.{table}',
'column': column,
'bbox': {
'xmin': float(row['xmin']),
'ymin': float(row['ymin']),
'xmax': float(row['xmax']),
'ymax': float(row['ymax'])
}
}
return {
'table': f'{schema}.{table}',
'column': column,
'bbox': None,
'message': 'No geometries found or all NULL'
}
except Exception as e:
logger.error(f"st_extent failed: {e}")
return {'error': str(e)}
async def close(self):
"""Close connection pool"""
if self.pool:
await self.pool.close()
self.pool = None
def check_connection() -> None:
"""
Check PostgreSQL connection for SessionStart hook.
Prints warning to stderr if connection fails.
"""
import sys
config = load_config()
if not config.get('postgres_url'):
print(
"[data-platform] PostgreSQL not configured (POSTGRES_URL not set)",
file=sys.stderr
)
return
async def test():
try:
if not ASYNCPG_AVAILABLE:
print(
"[data-platform] asyncpg not installed - PostgreSQL tools unavailable",
file=sys.stderr
)
return
conn = await asyncpg.connect(config['postgres_url'], timeout=5)
await conn.close()
print("[data-platform] PostgreSQL connection OK", file=sys.stderr)
except Exception as e:
print(
f"[data-platform] PostgreSQL connection failed: {e}",
file=sys.stderr
)
asyncio.run(test())
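# Usage sketch (hypothetical; absolute module path assumed, requires POSTGRES_URL
# to be configured and asyncpg installed; 'orders' is a placeholder table):
#
#     import asyncio
#     from mcp_server.postgres_tools import PostgresTools
#
#     async def demo():
#         tools = PostgresTools()
#         print(await tools.pg_connect())   # connection + PostGIS status
#         result = await tools.pg_query("SELECT * FROM orders LIMIT 10", name="orders_sample")
#         print(result)                     # {'data_ref': 'orders_sample', ...}
#         await tools.close()
#
#     asyncio.run(demo())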

View File

@@ -0,0 +1,795 @@
"""
MCP Server entry point for Data Platform integration.
Provides pandas, PostgreSQL/PostGIS, and dbt tools to Claude Code via JSON-RPC 2.0 over stdio.
"""
import asyncio
import logging
import json
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent
from .config import DataPlatformConfig
from .data_store import DataStore
from .pandas_tools import PandasTools
from .postgres_tools import PostgresTools
from .dbt_tools import DbtTools
# Suppress noisy MCP validation warnings on stderr
logging.basicConfig(level=logging.INFO)
logging.getLogger("root").setLevel(logging.ERROR)
logging.getLogger("mcp").setLevel(logging.ERROR)
logger = logging.getLogger(__name__)
class DataPlatformMCPServer:
"""MCP Server for data platform integration"""
def __init__(self):
self.server = Server("data-platform-mcp")
self.config = None
self.pandas_tools = None
self.postgres_tools = None
self.dbt_tools = None
async def initialize(self):
"""Initialize server and load configuration."""
try:
config_loader = DataPlatformConfig()
self.config = config_loader.load()
self.pandas_tools = PandasTools()
self.postgres_tools = PostgresTools()
self.dbt_tools = DbtTools()
# Log available capabilities
caps = []
caps.append("pandas")
if self.config.get('postgres_available'):
caps.append("PostgreSQL")
if self.config.get('dbt_available'):
caps.append("dbt")
logger.info(f"Data Platform MCP Server initialized with: {', '.join(caps)}")
except Exception as e:
logger.error(f"Failed to initialize: {e}")
raise
def setup_tools(self):
"""Register all available tools with the MCP server"""
@self.server.list_tools()
async def list_tools() -> list[Tool]:
"""Return list of available tools"""
tools = [
# pandas tools - always available
Tool(
name="read_csv",
description="Load CSV file into DataFrame",
inputSchema={
"type": "object",
"properties": {
"file_path": {
"type": "string",
"description": "Path to CSV file"
},
"name": {
"type": "string",
"description": "Optional name for data_ref"
},
"chunk_size": {
"type": "integer",
"description": "Process in chunks of this size"
}
},
"required": ["file_path"]
}
),
Tool(
name="read_parquet",
description="Load Parquet file into DataFrame",
inputSchema={
"type": "object",
"properties": {
"file_path": {
"type": "string",
"description": "Path to Parquet file"
},
"name": {
"type": "string",
"description": "Optional name for data_ref"
},
"columns": {
"type": "array",
"items": {"type": "string"},
"description": "Optional list of columns to load"
}
},
"required": ["file_path"]
}
),
Tool(
name="read_json",
description="Load JSON/JSONL file into DataFrame",
inputSchema={
"type": "object",
"properties": {
"file_path": {
"type": "string",
"description": "Path to JSON file"
},
"name": {
"type": "string",
"description": "Optional name for data_ref"
},
"lines": {
"type": "boolean",
"default": False,
"description": "Read as JSON Lines format"
}
},
"required": ["file_path"]
}
),
Tool(
name="to_csv",
description="Export DataFrame to CSV file",
inputSchema={
"type": "object",
"properties": {
"data_ref": {
"type": "string",
"description": "Reference to stored DataFrame"
},
"file_path": {
"type": "string",
"description": "Output file path"
},
"index": {
"type": "boolean",
"default": False,
"description": "Include index column"
}
},
"required": ["data_ref", "file_path"]
}
),
Tool(
name="to_parquet",
description="Export DataFrame to Parquet file",
inputSchema={
"type": "object",
"properties": {
"data_ref": {
"type": "string",
"description": "Reference to stored DataFrame"
},
"file_path": {
"type": "string",
"description": "Output file path"
},
"compression": {
"type": "string",
"default": "snappy",
"description": "Compression codec"
}
},
"required": ["data_ref", "file_path"]
}
),
Tool(
name="describe",
description="Get statistical summary of DataFrame",
inputSchema={
"type": "object",
"properties": {
"data_ref": {
"type": "string",
"description": "Reference to stored DataFrame"
}
},
"required": ["data_ref"]
}
),
Tool(
name="head",
description="Get first N rows of DataFrame",
inputSchema={
"type": "object",
"properties": {
"data_ref": {
"type": "string",
"description": "Reference to stored DataFrame"
},
"n": {
"type": "integer",
"default": 10,
"description": "Number of rows"
}
},
"required": ["data_ref"]
}
),
Tool(
name="tail",
description="Get last N rows of DataFrame",
inputSchema={
"type": "object",
"properties": {
"data_ref": {
"type": "string",
"description": "Reference to stored DataFrame"
},
"n": {
"type": "integer",
"default": 10,
"description": "Number of rows"
}
},
"required": ["data_ref"]
}
),
Tool(
name="filter",
description="Filter DataFrame rows by condition",
inputSchema={
"type": "object",
"properties": {
"data_ref": {
"type": "string",
"description": "Reference to stored DataFrame"
},
"condition": {
"type": "string",
"description": "pandas query string (e.g., 'age > 30 and city == \"NYC\"')"
},
"name": {
"type": "string",
"description": "Optional name for result data_ref"
}
},
"required": ["data_ref", "condition"]
}
),
Tool(
name="select",
description="Select specific columns from DataFrame",
inputSchema={
"type": "object",
"properties": {
"data_ref": {
"type": "string",
"description": "Reference to stored DataFrame"
},
"columns": {
"type": "array",
"items": {"type": "string"},
"description": "List of column names to select"
},
"name": {
"type": "string",
"description": "Optional name for result data_ref"
}
},
"required": ["data_ref", "columns"]
}
),
Tool(
name="groupby",
description="Group DataFrame and aggregate",
inputSchema={
"type": "object",
"properties": {
"data_ref": {
"type": "string",
"description": "Reference to stored DataFrame"
},
"by": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
],
"description": "Column(s) to group by"
},
"agg": {
"type": "object",
"description": "Aggregation dict (e.g., {\"sales\": \"sum\", \"count\": \"mean\"})"
},
"name": {
"type": "string",
"description": "Optional name for result data_ref"
}
},
"required": ["data_ref", "by", "agg"]
}
),
Tool(
name="join",
description="Join two DataFrames",
inputSchema={
"type": "object",
"properties": {
"left_ref": {
"type": "string",
"description": "Reference to left DataFrame"
},
"right_ref": {
"type": "string",
"description": "Reference to right DataFrame"
},
"on": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
],
"description": "Column(s) to join on (if same name in both)"
},
"left_on": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
],
"description": "Left join column(s)"
},
"right_on": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
],
"description": "Right join column(s)"
},
"how": {
"type": "string",
"enum": ["inner", "left", "right", "outer"],
"default": "inner",
"description": "Join type"
},
"name": {
"type": "string",
"description": "Optional name for result data_ref"
}
},
"required": ["left_ref", "right_ref"]
}
),
Tool(
name="list_data",
description="List all stored DataFrames",
inputSchema={
"type": "object",
"properties": {}
}
),
Tool(
name="drop_data",
description="Remove a DataFrame from storage",
inputSchema={
"type": "object",
"properties": {
"data_ref": {
"type": "string",
"description": "Reference to drop"
}
},
"required": ["data_ref"]
}
),
# PostgreSQL tools
Tool(
name="pg_connect",
description="Test PostgreSQL connection and return status",
inputSchema={
"type": "object",
"properties": {}
}
),
Tool(
name="pg_query",
description="Execute SELECT query and return results as data_ref",
inputSchema={
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "SQL SELECT query"
},
"params": {
"type": "array",
"items": {},
"description": "Query parameters (use $1, $2, etc.)"
},
"name": {
"type": "string",
"description": "Optional name for result data_ref"
}
},
"required": ["query"]
}
),
Tool(
name="pg_execute",
description="Execute INSERT/UPDATE/DELETE query",
inputSchema={
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "SQL DML query"
},
"params": {
"type": "array",
"items": {},
"description": "Query parameters"
}
},
"required": ["query"]
}
),
Tool(
name="pg_tables",
description="List all tables in schema",
inputSchema={
"type": "object",
"properties": {
"schema": {
"type": "string",
"default": "public",
"description": "Schema name"
}
}
}
),
Tool(
name="pg_columns",
description="Get column information for a table",
inputSchema={
"type": "object",
"properties": {
"table": {
"type": "string",
"description": "Table name"
},
"schema": {
"type": "string",
"default": "public",
"description": "Schema name"
}
},
"required": ["table"]
}
),
Tool(
name="pg_schemas",
description="List all schemas in database",
inputSchema={
"type": "object",
"properties": {}
}
),
# PostGIS tools
Tool(
name="st_tables",
description="List PostGIS-enabled tables",
inputSchema={
"type": "object",
"properties": {
"schema": {
"type": "string",
"default": "public",
"description": "Schema name"
}
}
}
),
Tool(
name="st_geometry_type",
description="Get geometry type of a column",
inputSchema={
"type": "object",
"properties": {
"table": {
"type": "string",
"description": "Table name"
},
"column": {
"type": "string",
"description": "Geometry column name"
},
"schema": {
"type": "string",
"default": "public",
"description": "Schema name"
}
},
"required": ["table", "column"]
}
),
Tool(
name="st_srid",
description="Get SRID of geometry column",
inputSchema={
"type": "object",
"properties": {
"table": {
"type": "string",
"description": "Table name"
},
"column": {
"type": "string",
"description": "Geometry column name"
},
"schema": {
"type": "string",
"default": "public",
"description": "Schema name"
}
},
"required": ["table", "column"]
}
),
Tool(
name="st_extent",
description="Get bounding box of all geometries",
inputSchema={
"type": "object",
"properties": {
"table": {
"type": "string",
"description": "Table name"
},
"column": {
"type": "string",
"description": "Geometry column name"
},
"schema": {
"type": "string",
"default": "public",
"description": "Schema name"
}
},
"required": ["table", "column"]
}
),
# dbt tools
Tool(
name="dbt_parse",
description="Validate dbt project (pre-flight check)",
inputSchema={
"type": "object",
"properties": {}
}
),
Tool(
name="dbt_run",
description="Run dbt models with pre-validation",
inputSchema={
"type": "object",
"properties": {
"select": {
"type": "string",
"description": "Model selection (e.g., 'model_name', '+model_name', 'tag:daily')"
},
"exclude": {
"type": "string",
"description": "Models to exclude"
},
"full_refresh": {
"type": "boolean",
"default": False,
"description": "Rebuild incremental models"
}
}
}
),
Tool(
name="dbt_test",
description="Run dbt tests",
inputSchema={
"type": "object",
"properties": {
"select": {
"type": "string",
"description": "Test selection"
},
"exclude": {
"type": "string",
"description": "Tests to exclude"
}
}
}
),
Tool(
name="dbt_build",
description="Run dbt build (run + test) with pre-validation",
inputSchema={
"type": "object",
"properties": {
"select": {
"type": "string",
"description": "Model/test selection"
},
"exclude": {
"type": "string",
"description": "Resources to exclude"
},
"full_refresh": {
"type": "boolean",
"default": False,
"description": "Rebuild incremental models"
}
}
}
),
Tool(
name="dbt_compile",
description="Compile dbt models to SQL without executing",
inputSchema={
"type": "object",
"properties": {
"select": {
"type": "string",
"description": "Model selection"
}
}
}
),
Tool(
name="dbt_ls",
description="List dbt resources",
inputSchema={
"type": "object",
"properties": {
"select": {
"type": "string",
"description": "Resource selection"
},
"resource_type": {
"type": "string",
"enum": ["model", "test", "seed", "snapshot", "source"],
"description": "Filter by type"
},
"output": {
"type": "string",
"enum": ["name", "path", "json"],
"default": "name",
"description": "Output format"
}
}
}
),
Tool(
name="dbt_docs_generate",
description="Generate dbt documentation",
inputSchema={
"type": "object",
"properties": {}
}
),
Tool(
name="dbt_lineage",
description="Get model dependencies and lineage",
inputSchema={
"type": "object",
"properties": {
"model": {
"type": "string",
"description": "Model name to analyze"
}
},
"required": ["model"]
}
)
]
return tools
@self.server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
"""Handle tool invocation."""
try:
# Route to appropriate tool handler
# pandas tools
if name == "read_csv":
result = await self.pandas_tools.read_csv(**arguments)
elif name == "read_parquet":
result = await self.pandas_tools.read_parquet(**arguments)
elif name == "read_json":
result = await self.pandas_tools.read_json(**arguments)
elif name == "to_csv":
result = await self.pandas_tools.to_csv(**arguments)
elif name == "to_parquet":
result = await self.pandas_tools.to_parquet(**arguments)
elif name == "describe":
result = await self.pandas_tools.describe(**arguments)
elif name == "head":
result = await self.pandas_tools.head(**arguments)
elif name == "tail":
result = await self.pandas_tools.tail(**arguments)
elif name == "filter":
result = await self.pandas_tools.filter(**arguments)
elif name == "select":
result = await self.pandas_tools.select(**arguments)
elif name == "groupby":
result = await self.pandas_tools.groupby(**arguments)
elif name == "join":
result = await self.pandas_tools.join(**arguments)
elif name == "list_data":
result = await self.pandas_tools.list_data()
elif name == "drop_data":
result = await self.pandas_tools.drop_data(**arguments)
# PostgreSQL tools
elif name == "pg_connect":
result = await self.postgres_tools.pg_connect()
elif name == "pg_query":
result = await self.postgres_tools.pg_query(**arguments)
elif name == "pg_execute":
result = await self.postgres_tools.pg_execute(**arguments)
elif name == "pg_tables":
result = await self.postgres_tools.pg_tables(**arguments)
elif name == "pg_columns":
result = await self.postgres_tools.pg_columns(**arguments)
elif name == "pg_schemas":
result = await self.postgres_tools.pg_schemas()
# PostGIS tools
elif name == "st_tables":
result = await self.postgres_tools.st_tables(**arguments)
elif name == "st_geometry_type":
result = await self.postgres_tools.st_geometry_type(**arguments)
elif name == "st_srid":
result = await self.postgres_tools.st_srid(**arguments)
elif name == "st_extent":
result = await self.postgres_tools.st_extent(**arguments)
# dbt tools
elif name == "dbt_parse":
result = await self.dbt_tools.dbt_parse()
elif name == "dbt_run":
result = await self.dbt_tools.dbt_run(**arguments)
elif name == "dbt_test":
result = await self.dbt_tools.dbt_test(**arguments)
elif name == "dbt_build":
result = await self.dbt_tools.dbt_build(**arguments)
elif name == "dbt_compile":
result = await self.dbt_tools.dbt_compile(**arguments)
elif name == "dbt_ls":
result = await self.dbt_tools.dbt_ls(**arguments)
elif name == "dbt_docs_generate":
result = await self.dbt_tools.dbt_docs_generate()
elif name == "dbt_lineage":
result = await self.dbt_tools.dbt_lineage(**arguments)
else:
raise ValueError(f"Unknown tool: {name}")
return [TextContent(
type="text",
text=json.dumps(result, indent=2, default=str)
)]
except Exception as e:
logger.error(f"Tool {name} failed: {e}")
return [TextContent(
type="text",
text=json.dumps({"error": str(e)}, indent=2)
)]
async def run(self):
"""Run the MCP server"""
await self.initialize()
self.setup_tools()
async with stdio_server() as (read_stream, write_stream):
await self.server.run(
read_stream,
write_stream,
self.server.create_initialization_options()
)
async def main():
"""Main entry point"""
server = DataPlatformMCPServer()
await server.run()
if __name__ == "__main__":
asyncio.run(main())
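A minimal client-side sketch of how these tools are reached over stdio, assuming the standard MCP Python client API; the "./run.sh" launch command is illustrative rather than taken from this diff, and list_data is used because it needs no arguments:
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def demo():
    # Launch the server through its wrapper script (hypothetical path) and invoke one tool.
    params = StdioServerParameters(command="./run.sh", args=[])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("list_data", {})
            print(result)

asyncio.run(demo())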

View File

@@ -0,0 +1,49 @@
[build-system]
requires = ["setuptools>=61.0", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "data-platform-mcp"
version = "1.0.0"
description = "MCP Server for data engineering with pandas, PostgreSQL/PostGIS, and dbt"
readme = "README.md"
license = {text = "MIT"}
requires-python = ">=3.10"
authors = [
{name = "Leo Miranda"}
]
classifiers = [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
]
dependencies = [
"mcp>=0.9.0",
"pandas>=2.0.0",
"pyarrow>=14.0.0",
"asyncpg>=0.29.0",
"geoalchemy2>=0.14.0",
"shapely>=2.0.0",
"dbt-core>=1.9.0",
"dbt-postgres>=1.9.0",
"python-dotenv>=1.0.0",
"pydantic>=2.5.0",
]
[project.optional-dependencies]
dev = [
"pytest>=7.4.3",
"pytest-asyncio>=0.23.0",
]
[tool.setuptools.packages.find]
where = ["."]
include = ["mcp_server*"]
[tool.pytest.ini_options]
asyncio_mode = "auto"
testpaths = ["tests"]

View File

@@ -0,0 +1,23 @@
# MCP SDK
mcp>=0.9.0
# Data Processing
pandas>=2.0.0
pyarrow>=14.0.0
# PostgreSQL/PostGIS
asyncpg>=0.29.0
geoalchemy2>=0.14.0
shapely>=2.0.0
# dbt
dbt-core>=1.9.0
dbt-postgres>=1.9.0
# Utilities
python-dotenv>=1.0.0
pydantic>=2.5.0
# Testing
pytest>=7.4.3
pytest-asyncio>=0.23.0

View File

@@ -0,0 +1,21 @@
#!/bin/bash
# Capture original working directory before any cd operations
# This should be the user's project directory when launched by Claude Code
export CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/data-platform/.venv"
LOCAL_VENV="$SCRIPT_DIR/.venv"
if [[ -f "$CACHE_VENV/bin/python" ]]; then
PYTHON="$CACHE_VENV/bin/python"
elif [[ -f "$LOCAL_VENV/bin/python" ]]; then
PYTHON="$LOCAL_VENV/bin/python"
else
echo "ERROR: No venv found. Run: ./scripts/setup-venvs.sh" >&2
exit 1
fi
cd "$SCRIPT_DIR"
export PYTHONPATH="$SCRIPT_DIR"
exec "$PYTHON" -m mcp_server.server "$@"

View File

@@ -0,0 +1,3 @@
"""
Tests for Data Platform MCP Server.
"""

View File

@@ -0,0 +1,239 @@
"""
Unit tests for configuration loader.
"""
import pytest
from pathlib import Path
import os
def test_load_system_config(tmp_path, monkeypatch):
"""Test loading system-level PostgreSQL configuration"""
# Import here to avoid import errors before setup
from mcp_server.config import DataPlatformConfig
# Mock home directory
config_dir = tmp_path / '.config' / 'claude'
config_dir.mkdir(parents=True)
config_file = config_dir / 'postgres.env'
config_file.write_text(
"POSTGRES_URL=postgresql://user:pass@localhost:5432/testdb\n"
)
monkeypatch.setenv('HOME', str(tmp_path))
monkeypatch.chdir(tmp_path)
config = DataPlatformConfig()
result = config.load()
assert result['postgres_url'] == 'postgresql://user:pass@localhost:5432/testdb'
assert result['postgres_available'] is True
def test_postgres_optional(tmp_path, monkeypatch):
"""Test that PostgreSQL configuration is optional"""
from mcp_server.config import DataPlatformConfig
# No postgres.env file
monkeypatch.setenv('HOME', str(tmp_path))
monkeypatch.chdir(tmp_path)
# Clear any existing env vars
monkeypatch.delenv('POSTGRES_URL', raising=False)
config = DataPlatformConfig()
result = config.load()
assert result['postgres_url'] is None
assert result['postgres_available'] is False
def test_project_config_override(tmp_path, monkeypatch):
"""Test that project config overrides system config"""
from mcp_server.config import DataPlatformConfig
# Set up system config
system_config_dir = tmp_path / '.config' / 'claude'
system_config_dir.mkdir(parents=True)
system_config = system_config_dir / 'postgres.env'
system_config.write_text(
"POSTGRES_URL=postgresql://system:pass@localhost:5432/systemdb\n"
)
# Set up project config
project_dir = tmp_path / 'project'
project_dir.mkdir()
project_config = project_dir / '.env'
project_config.write_text(
"POSTGRES_URL=postgresql://project:pass@localhost:5432/projectdb\n"
"DBT_PROJECT_DIR=/path/to/dbt\n"
)
monkeypatch.setenv('HOME', str(tmp_path))
monkeypatch.chdir(project_dir)
config = DataPlatformConfig()
result = config.load()
# Project config should override
assert result['postgres_url'] == 'postgresql://project:pass@localhost:5432/projectdb'
assert result['dbt_project_dir'] == '/path/to/dbt'
def test_max_rows_config(tmp_path, monkeypatch):
"""Test max rows configuration"""
from mcp_server.config import DataPlatformConfig
project_dir = tmp_path / 'project'
project_dir.mkdir()
project_config = project_dir / '.env'
project_config.write_text("DATA_PLATFORM_MAX_ROWS=50000\n")
monkeypatch.setenv('HOME', str(tmp_path))
monkeypatch.chdir(project_dir)
config = DataPlatformConfig()
result = config.load()
assert result['max_rows'] == 50000
def test_default_max_rows(tmp_path, monkeypatch):
"""Test default max rows value"""
from mcp_server.config import DataPlatformConfig
monkeypatch.setenv('HOME', str(tmp_path))
monkeypatch.chdir(tmp_path)
# Clear any existing env vars
monkeypatch.delenv('DATA_PLATFORM_MAX_ROWS', raising=False)
config = DataPlatformConfig()
result = config.load()
assert result['max_rows'] == 100_000 # Default value
def test_dbt_auto_detection(tmp_path, monkeypatch):
"""Test automatic dbt project detection"""
from mcp_server.config import DataPlatformConfig
# Create project with dbt_project.yml
project_dir = tmp_path / 'project'
project_dir.mkdir()
(project_dir / 'dbt_project.yml').write_text("name: test_project\n")
monkeypatch.setenv('HOME', str(tmp_path))
monkeypatch.chdir(project_dir)
# Clear PWD and DBT_PROJECT_DIR to ensure auto-detection
monkeypatch.delenv('PWD', raising=False)
monkeypatch.delenv('DBT_PROJECT_DIR', raising=False)
monkeypatch.delenv('CLAUDE_PROJECT_DIR', raising=False)
config = DataPlatformConfig()
result = config.load()
assert result['dbt_project_dir'] == str(project_dir)
assert result['dbt_available'] is True
def test_dbt_subdirectory_detection(tmp_path, monkeypatch):
"""Test dbt project detection in subdirectory"""
from mcp_server.config import DataPlatformConfig
# Create project with dbt in subdirectory
project_dir = tmp_path / 'project'
project_dir.mkdir()
# Need a marker file for _find_project_directory to find the project
(project_dir / '.git').mkdir()
dbt_dir = project_dir / 'transform'
dbt_dir.mkdir()
(dbt_dir / 'dbt_project.yml').write_text("name: test_project\n")
monkeypatch.setenv('HOME', str(tmp_path))
monkeypatch.chdir(project_dir)
# Clear env vars to ensure auto-detection
monkeypatch.delenv('PWD', raising=False)
monkeypatch.delenv('DBT_PROJECT_DIR', raising=False)
monkeypatch.delenv('CLAUDE_PROJECT_DIR', raising=False)
config = DataPlatformConfig()
result = config.load()
assert result['dbt_project_dir'] == str(dbt_dir)
assert result['dbt_available'] is True
def test_no_dbt_project(tmp_path, monkeypatch):
"""Test when no dbt project exists"""
from mcp_server.config import DataPlatformConfig
project_dir = tmp_path / 'project'
project_dir.mkdir()
monkeypatch.setenv('HOME', str(tmp_path))
monkeypatch.chdir(project_dir)
# Clear any existing env vars
monkeypatch.delenv('DBT_PROJECT_DIR', raising=False)
config = DataPlatformConfig()
result = config.load()
assert result['dbt_project_dir'] is None
assert result['dbt_available'] is False
def test_find_project_directory_from_env(tmp_path, monkeypatch):
"""Test finding project directory from CLAUDE_PROJECT_DIR env var"""
from mcp_server.config import DataPlatformConfig
project_dir = tmp_path / 'my-project'
project_dir.mkdir()
(project_dir / '.git').mkdir()
monkeypatch.setenv('CLAUDE_PROJECT_DIR', str(project_dir))
config = DataPlatformConfig()
result = config._find_project_directory()
assert result == project_dir
def test_find_project_directory_from_cwd(tmp_path, monkeypatch):
"""Test finding project directory from cwd with .env file"""
from mcp_server.config import DataPlatformConfig
project_dir = tmp_path / 'project'
project_dir.mkdir()
(project_dir / '.env').write_text("TEST=value")
monkeypatch.chdir(project_dir)
monkeypatch.delenv('CLAUDE_PROJECT_DIR', raising=False)
monkeypatch.delenv('PWD', raising=False)
config = DataPlatformConfig()
result = config._find_project_directory()
assert result == project_dir
def test_find_project_directory_none_when_no_markers(tmp_path, monkeypatch):
"""Test returns None when no project markers found"""
from mcp_server.config import DataPlatformConfig
empty_dir = tmp_path / 'empty'
empty_dir.mkdir()
monkeypatch.chdir(empty_dir)
monkeypatch.delenv('CLAUDE_PROJECT_DIR', raising=False)
monkeypatch.delenv('PWD', raising=False)
monkeypatch.delenv('DBT_PROJECT_DIR', raising=False)
config = DataPlatformConfig()
result = config._find_project_directory()
assert result is None
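The dbt tests above expect dbt_project.yml to be found either at the project root or one level down (e.g. a transform/ subdirectory). A rough sketch of that lookup, stated as an assumption about what config.py does rather than a copy of it:
from pathlib import Path
from typing import Optional

def find_dbt_project(project_dir: Path) -> Optional[Path]:
    # Root-level dbt_project.yml wins; otherwise scan immediate subdirectories.
    if (project_dir / "dbt_project.yml").exists():
        return project_dir
    for child in sorted(project_dir.iterdir()):
        if child.is_dir() and (child / "dbt_project.yml").exists():
            return child
    return None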

View File

@@ -0,0 +1,240 @@
"""
Unit tests for Arrow IPC DataFrame registry.
"""
import pytest
import pandas as pd
import pyarrow as pa
def test_store_pandas_dataframe():
"""Test storing pandas DataFrame"""
from mcp_server.data_store import DataStore
# Create fresh instance for test
store = DataStore()
store._dataframes = {}
store._metadata = {}
df = pd.DataFrame({'a': [1, 2, 3], 'b': ['x', 'y', 'z']})
data_ref = store.store(df, name='test_df')
assert data_ref == 'test_df'
assert 'test_df' in store._dataframes
assert store._metadata['test_df'].rows == 3
assert store._metadata['test_df'].columns == 2
def test_store_arrow_table():
"""Test storing Arrow Table directly"""
from mcp_server.data_store import DataStore
store = DataStore()
store._dataframes = {}
store._metadata = {}
table = pa.table({'x': [1, 2, 3], 'y': [4, 5, 6]})
data_ref = store.store(table, name='arrow_test')
assert data_ref == 'arrow_test'
assert store._dataframes['arrow_test'].num_rows == 3
def test_store_auto_name():
"""Test auto-generated data_ref names"""
from mcp_server.data_store import DataStore
store = DataStore()
store._dataframes = {}
store._metadata = {}
df = pd.DataFrame({'a': [1, 2]})
data_ref = store.store(df)
assert data_ref.startswith('df_')
assert len(data_ref) == 11 # df_ + 8 hex chars
def test_get_dataframe():
"""Test retrieving stored DataFrame"""
from mcp_server.data_store import DataStore
store = DataStore()
store._dataframes = {}
store._metadata = {}
df = pd.DataFrame({'a': [1, 2, 3]})
store.store(df, name='get_test')
result = store.get('get_test')
assert result is not None
assert result.num_rows == 3
def test_get_pandas():
"""Test retrieving as pandas DataFrame"""
from mcp_server.data_store import DataStore
store = DataStore()
store._dataframes = {}
store._metadata = {}
df = pd.DataFrame({'a': [1, 2, 3], 'b': ['x', 'y', 'z']})
store.store(df, name='pandas_test')
result = store.get_pandas('pandas_test')
assert isinstance(result, pd.DataFrame)
assert list(result.columns) == ['a', 'b']
assert len(result) == 3
def test_get_nonexistent():
"""Test getting nonexistent data_ref returns None"""
from mcp_server.data_store import DataStore
store = DataStore()
store._dataframes = {}
store._metadata = {}
assert store.get('nonexistent') is None
assert store.get_pandas('nonexistent') is None
def test_list_refs():
"""Test listing all stored DataFrames"""
from mcp_server.data_store import DataStore
store = DataStore()
store._dataframes = {}
store._metadata = {}
store.store(pd.DataFrame({'a': [1, 2]}), name='df1')
store.store(pd.DataFrame({'b': [3, 4, 5]}), name='df2')
refs = store.list_refs()
assert len(refs) == 2
ref_names = [r['ref'] for r in refs]
assert 'df1' in ref_names
assert 'df2' in ref_names
def test_drop_dataframe():
"""Test dropping a DataFrame"""
from mcp_server.data_store import DataStore
store = DataStore()
store._dataframes = {}
store._metadata = {}
store.store(pd.DataFrame({'a': [1]}), name='drop_test')
assert store.get('drop_test') is not None
result = store.drop('drop_test')
assert result is True
assert store.get('drop_test') is None
def test_drop_nonexistent():
"""Test dropping nonexistent data_ref"""
from mcp_server.data_store import DataStore
store = DataStore()
store._dataframes = {}
store._metadata = {}
result = store.drop('nonexistent')
assert result is False
def test_clear():
"""Test clearing all DataFrames"""
from mcp_server.data_store import DataStore
store = DataStore()
store._dataframes = {}
store._metadata = {}
store.store(pd.DataFrame({'a': [1]}), name='df1')
store.store(pd.DataFrame({'b': [2]}), name='df2')
store.clear()
assert len(store.list_refs()) == 0
def test_get_info():
"""Test getting DataFrame metadata"""
from mcp_server.data_store import DataStore
store = DataStore()
store._dataframes = {}
store._metadata = {}
df = pd.DataFrame({'a': [1, 2, 3], 'b': ['x', 'y', 'z']})
store.store(df, name='info_test', source='test source')
info = store.get_info('info_test')
assert info.ref == 'info_test'
assert info.rows == 3
assert info.columns == 2
assert info.column_names == ['a', 'b']
assert info.source == 'test source'
assert info.memory_bytes > 0
def test_total_memory():
"""Test total memory calculation"""
from mcp_server.data_store import DataStore
store = DataStore()
store._dataframes = {}
store._metadata = {}
store.store(pd.DataFrame({'a': range(100)}), name='df1')
store.store(pd.DataFrame({'b': range(200)}), name='df2')
total = store.total_memory_bytes()
assert total > 0
total_mb = store.total_memory_mb()
assert total_mb >= 0
def test_check_row_limit():
"""Test row limit checking"""
from mcp_server.data_store import DataStore
store = DataStore()
store._max_rows = 100
# Under limit
result = store.check_row_limit(50)
assert result['exceeded'] is False
# Over limit
result = store.check_row_limit(150)
assert result['exceeded'] is True
assert 'suggestion' in result
def test_metadata_dtypes():
"""Test that dtypes are correctly recorded"""
from mcp_server.data_store import DataStore
store = DataStore()
store._dataframes = {}
store._metadata = {}
df = pd.DataFrame({
'int_col': [1, 2, 3],
'float_col': [1.1, 2.2, 3.3],
'str_col': ['a', 'b', 'c']
})
store.store(df, name='dtype_test')
info = store.get_info('dtype_test')
assert 'int_col' in info.dtypes
assert 'float_col' in info.dtypes
assert 'str_col' in info.dtypes
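The registry stores everything as Arrow tables, which is why get() exposes num_rows while get_pandas() returns a DataFrame. The conversion pair involved is standard pyarrow, shown here for reference; the store internals themselves are assumptions:
import pandas as pd
import pyarrow as pa

df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})
table = pa.Table.from_pandas(df)   # what store() presumably does with pandas input
assert table.num_rows == 3
round_trip = table.to_pandas()     # what get_pandas() presumably does on the way out
assert list(round_trip.columns) == ["a", "b"]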

View File

@@ -0,0 +1,318 @@
"""
Unit tests for dbt MCP tools.
"""
import pytest
from unittest.mock import Mock, patch, MagicMock
import subprocess
import json
import tempfile
import os
@pytest.fixture
def mock_config(tmp_path):
"""Mock configuration with dbt project"""
dbt_dir = tmp_path / 'dbt_project'
dbt_dir.mkdir()
(dbt_dir / 'dbt_project.yml').write_text('name: test_project\n')
return {
'dbt_project_dir': str(dbt_dir),
'dbt_profiles_dir': str(tmp_path / '.dbt')
}
@pytest.fixture
def dbt_tools(mock_config):
"""Create DbtTools instance with mocked config"""
with patch('mcp_server.dbt_tools.load_config', return_value=mock_config):
from mcp_server.dbt_tools import DbtTools
tools = DbtTools()
tools.project_dir = mock_config['dbt_project_dir']
tools.profiles_dir = mock_config['dbt_profiles_dir']
return tools
@pytest.mark.asyncio
async def test_dbt_parse_success(dbt_tools):
"""Test successful dbt parse"""
mock_result = MagicMock()
mock_result.returncode = 0
mock_result.stdout = 'Parsed successfully'
mock_result.stderr = ''
with patch('subprocess.run', return_value=mock_result):
result = await dbt_tools.dbt_parse()
assert result['valid'] is True
@pytest.mark.asyncio
async def test_dbt_parse_failure(dbt_tools):
"""Test dbt parse with errors"""
mock_result = MagicMock()
mock_result.returncode = 1
mock_result.stdout = ''
mock_result.stderr = 'Compilation error: deprecated syntax'
with patch('subprocess.run', return_value=mock_result):
result = await dbt_tools.dbt_parse()
assert result['valid'] is False
assert 'deprecated' in str(result.get('details', '')).lower() or len(result.get('errors', [])) > 0
@pytest.mark.asyncio
async def test_dbt_run_with_prevalidation(dbt_tools):
"""Test dbt run includes pre-validation"""
# First call is parse, second is run
mock_parse = MagicMock()
mock_parse.returncode = 0
mock_parse.stdout = 'OK'
mock_parse.stderr = ''
mock_run = MagicMock()
mock_run.returncode = 0
mock_run.stdout = 'Completed successfully'
mock_run.stderr = ''
with patch('subprocess.run', side_effect=[mock_parse, mock_run]):
result = await dbt_tools.dbt_run()
assert result['success'] is True
@pytest.mark.asyncio
async def test_dbt_run_fails_validation(dbt_tools):
"""Test dbt run fails if validation fails"""
mock_parse = MagicMock()
mock_parse.returncode = 1
mock_parse.stdout = ''
mock_parse.stderr = 'Parse error'
with patch('subprocess.run', return_value=mock_parse):
result = await dbt_tools.dbt_run()
assert 'error' in result
assert 'Pre-validation failed' in result['error']
@pytest.mark.asyncio
async def test_dbt_run_with_selection(dbt_tools):
"""Test dbt run with model selection"""
mock_parse = MagicMock()
mock_parse.returncode = 0
mock_parse.stdout = 'OK'
mock_parse.stderr = ''
mock_run = MagicMock()
mock_run.returncode = 0
mock_run.stdout = 'Completed'
mock_run.stderr = ''
calls = []
def track_calls(*args, **kwargs):
calls.append(args[0] if args else kwargs.get('args', []))
if len(calls) == 1:
return mock_parse
return mock_run
with patch('subprocess.run', side_effect=track_calls):
result = await dbt_tools.dbt_run(select='dim_customers')
# Verify --select was passed
assert any('--select' in str(call) for call in calls)
@pytest.mark.asyncio
async def test_dbt_test(dbt_tools):
"""Test dbt test"""
mock_result = MagicMock()
mock_result.returncode = 0
mock_result.stdout = 'All tests passed'
mock_result.stderr = ''
with patch('subprocess.run', return_value=mock_result):
result = await dbt_tools.dbt_test()
assert result['success'] is True
@pytest.mark.asyncio
async def test_dbt_build(dbt_tools):
"""Test dbt build with pre-validation"""
mock_parse = MagicMock()
mock_parse.returncode = 0
mock_parse.stdout = 'OK'
mock_parse.stderr = ''
mock_build = MagicMock()
mock_build.returncode = 0
mock_build.stdout = 'Build complete'
mock_build.stderr = ''
with patch('subprocess.run', side_effect=[mock_parse, mock_build]):
result = await dbt_tools.dbt_build()
assert result['success'] is True
@pytest.mark.asyncio
async def test_dbt_compile(dbt_tools):
"""Test dbt compile"""
mock_result = MagicMock()
mock_result.returncode = 0
mock_result.stdout = 'Compiled'
mock_result.stderr = ''
with patch('subprocess.run', return_value=mock_result):
result = await dbt_tools.dbt_compile()
assert result['success'] is True
@pytest.mark.asyncio
async def test_dbt_ls(dbt_tools):
"""Test dbt ls"""
mock_result = MagicMock()
mock_result.returncode = 0
mock_result.stdout = 'dim_customers\ndim_products\nfct_orders\n'
mock_result.stderr = ''
with patch('subprocess.run', return_value=mock_result):
result = await dbt_tools.dbt_ls()
assert result['success'] is True
assert result['count'] == 3
assert 'dim_customers' in result['resources']
@pytest.mark.asyncio
async def test_dbt_docs_generate(dbt_tools, tmp_path):
"""Test dbt docs generate"""
mock_result = MagicMock()
mock_result.returncode = 0
mock_result.stdout = 'Done'
mock_result.stderr = ''
# Create fake target directory
target_dir = tmp_path / 'dbt_project' / 'target'
target_dir.mkdir(parents=True)
(target_dir / 'catalog.json').write_text('{}')
(target_dir / 'manifest.json').write_text('{}')
dbt_tools.project_dir = str(tmp_path / 'dbt_project')
with patch('subprocess.run', return_value=mock_result):
result = await dbt_tools.dbt_docs_generate()
assert result['success'] is True
assert result['catalog_generated'] is True
assert result['manifest_generated'] is True
@pytest.mark.asyncio
async def test_dbt_lineage(dbt_tools, tmp_path):
"""Test dbt lineage"""
# Create manifest
target_dir = tmp_path / 'dbt_project' / 'target'
target_dir.mkdir(parents=True)
manifest = {
'nodes': {
'model.test.dim_customers': {
'name': 'dim_customers',
'resource_type': 'model',
'schema': 'public',
'database': 'testdb',
'description': 'Customer dimension',
'tags': ['daily'],
'config': {'materialized': 'table'},
'depends_on': {
'nodes': ['model.test.stg_customers']
}
},
'model.test.stg_customers': {
'name': 'stg_customers',
'resource_type': 'model',
'depends_on': {'nodes': []}
},
'model.test.fct_orders': {
'name': 'fct_orders',
'resource_type': 'model',
'depends_on': {
'nodes': ['model.test.dim_customers']
}
}
}
}
(target_dir / 'manifest.json').write_text(json.dumps(manifest))
dbt_tools.project_dir = str(tmp_path / 'dbt_project')
result = await dbt_tools.dbt_lineage('dim_customers')
assert result['model'] == 'dim_customers'
assert 'model.test.stg_customers' in result['upstream']
assert 'model.test.fct_orders' in result['downstream']
@pytest.mark.asyncio
async def test_dbt_lineage_model_not_found(dbt_tools, tmp_path):
"""Test dbt lineage with nonexistent model"""
target_dir = tmp_path / 'dbt_project' / 'target'
target_dir.mkdir(parents=True)
manifest = {
'nodes': {
'model.test.dim_customers': {
'name': 'dim_customers',
'resource_type': 'model'
}
}
}
(target_dir / 'manifest.json').write_text(json.dumps(manifest))
dbt_tools.project_dir = str(tmp_path / 'dbt_project')
result = await dbt_tools.dbt_lineage('nonexistent_model')
assert 'error' in result
assert 'not found' in result['error'].lower()
@pytest.mark.asyncio
async def test_dbt_no_project():
"""Test dbt tools when no project configured"""
with patch('mcp_server.dbt_tools.load_config', return_value={'dbt_project_dir': None}):
from mcp_server.dbt_tools import DbtTools
tools = DbtTools()
tools.project_dir = None
result = await tools.dbt_run()
assert 'error' in result
assert 'not found' in result['error'].lower()
@pytest.mark.asyncio
async def test_dbt_timeout(dbt_tools):
"""Test dbt command timeout handling"""
with patch('subprocess.run', side_effect=subprocess.TimeoutExpired('dbt', 300)):
result = await dbt_tools.dbt_parse()
assert 'error' in result
assert 'timed out' in result['error'].lower()
@pytest.mark.asyncio
async def test_dbt_not_installed(dbt_tools):
"""Test handling when dbt is not installed"""
with patch('subprocess.run', side_effect=FileNotFoundError()):
result = await dbt_tools.dbt_parse()
assert 'error' in result
assert 'not found' in result['error'].lower()
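The mocks above assert that --select reaches the subprocess call and that parse runs before run/build. A hedged sketch of the command shape those tests imply; the flag names are real dbt CLI flags, the helper itself is hypothetical:
import subprocess
from typing import Optional

def run_dbt_command(verb: str, select: Optional[str] = None,
                    project_dir: str = ".", profiles_dir: str = "~/.dbt"):
    # e.g. verb="run" after a successful "dbt parse" pre-flight check
    cmd = ["dbt", verb, "--project-dir", project_dir, "--profiles-dir", profiles_dir]
    if select:
        cmd += ["--select", select]
    return subprocess.run(cmd, capture_output=True, text=True, timeout=300)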

View File

@@ -0,0 +1,301 @@
"""
Unit tests for pandas MCP tools.
"""
import pytest
import pandas as pd
import tempfile
import os
from pathlib import Path
@pytest.fixture
def temp_csv(tmp_path):
"""Create a temporary CSV file for testing"""
csv_path = tmp_path / 'test.csv'
df = pd.DataFrame({
'id': [1, 2, 3, 4, 5],
'name': ['Alice', 'Bob', 'Charlie', 'Diana', 'Eve'],
'value': [10.5, 20.0, 30.5, 40.0, 50.5]
})
df.to_csv(csv_path, index=False)
return str(csv_path)
@pytest.fixture
def temp_parquet(tmp_path):
"""Create a temporary Parquet file for testing"""
parquet_path = tmp_path / 'test.parquet'
df = pd.DataFrame({
'id': [1, 2, 3],
'data': ['a', 'b', 'c']
})
df.to_parquet(parquet_path)
return str(parquet_path)
@pytest.fixture
def temp_json(tmp_path):
"""Create a temporary JSON file for testing"""
json_path = tmp_path / 'test.json'
df = pd.DataFrame({
'x': [1, 2],
'y': [3, 4]
})
df.to_json(json_path, orient='records')
return str(json_path)
@pytest.fixture
def pandas_tools():
"""Create PandasTools instance with fresh store"""
from mcp_server.pandas_tools import PandasTools
from mcp_server.data_store import DataStore
# Reset store for test isolation
store = DataStore.get_instance()
store._dataframes = {}
store._metadata = {}
return PandasTools()
@pytest.mark.asyncio
async def test_read_csv(pandas_tools, temp_csv):
"""Test reading CSV file"""
result = await pandas_tools.read_csv(temp_csv, name='csv_test')
assert 'data_ref' in result
assert result['data_ref'] == 'csv_test'
assert result['rows'] == 5
assert 'id' in result['columns']
assert 'name' in result['columns']
@pytest.mark.asyncio
async def test_read_csv_nonexistent(pandas_tools):
"""Test reading nonexistent CSV file"""
result = await pandas_tools.read_csv('/nonexistent/path.csv')
assert 'error' in result
assert 'not found' in result['error'].lower()
@pytest.mark.asyncio
async def test_read_parquet(pandas_tools, temp_parquet):
"""Test reading Parquet file"""
result = await pandas_tools.read_parquet(temp_parquet, name='parquet_test')
assert 'data_ref' in result
assert result['rows'] == 3
@pytest.mark.asyncio
async def test_read_json(pandas_tools, temp_json):
"""Test reading JSON file"""
result = await pandas_tools.read_json(temp_json, name='json_test')
assert 'data_ref' in result
assert result['rows'] == 2
@pytest.mark.asyncio
async def test_to_csv(pandas_tools, temp_csv, tmp_path):
"""Test exporting to CSV"""
# First load some data
await pandas_tools.read_csv(temp_csv, name='export_test')
# Export to new file
output_path = str(tmp_path / 'output.csv')
result = await pandas_tools.to_csv('export_test', output_path)
assert result['success'] is True
assert os.path.exists(output_path)
@pytest.mark.asyncio
async def test_to_parquet(pandas_tools, temp_csv, tmp_path):
"""Test exporting to Parquet"""
await pandas_tools.read_csv(temp_csv, name='parquet_export')
output_path = str(tmp_path / 'output.parquet')
result = await pandas_tools.to_parquet('parquet_export', output_path)
assert result['success'] is True
assert os.path.exists(output_path)
@pytest.mark.asyncio
async def test_describe(pandas_tools, temp_csv):
"""Test describe statistics"""
await pandas_tools.read_csv(temp_csv, name='describe_test')
result = await pandas_tools.describe('describe_test')
assert 'data_ref' in result
assert 'shape' in result
assert result['shape']['rows'] == 5
assert 'statistics' in result
assert 'null_counts' in result
@pytest.mark.asyncio
async def test_head(pandas_tools, temp_csv):
"""Test getting first N rows"""
await pandas_tools.read_csv(temp_csv, name='head_test')
result = await pandas_tools.head('head_test', n=3)
assert result['returned_rows'] == 3
assert len(result['data']) == 3
@pytest.mark.asyncio
async def test_tail(pandas_tools, temp_csv):
"""Test getting last N rows"""
await pandas_tools.read_csv(temp_csv, name='tail_test')
result = await pandas_tools.tail('tail_test', n=2)
assert result['returned_rows'] == 2
@pytest.mark.asyncio
async def test_filter(pandas_tools, temp_csv):
"""Test filtering rows"""
await pandas_tools.read_csv(temp_csv, name='filter_test')
result = await pandas_tools.filter('filter_test', 'value > 25')
assert 'data_ref' in result
assert result['rows'] == 3 # 30.5, 40.0, 50.5
@pytest.mark.asyncio
async def test_filter_invalid_condition(pandas_tools, temp_csv):
"""Test filter with invalid condition"""
await pandas_tools.read_csv(temp_csv, name='filter_error')
result = await pandas_tools.filter('filter_error', 'invalid_column > 0')
assert 'error' in result
@pytest.mark.asyncio
async def test_select(pandas_tools, temp_csv):
"""Test selecting columns"""
await pandas_tools.read_csv(temp_csv, name='select_test')
result = await pandas_tools.select('select_test', ['id', 'name'])
assert 'data_ref' in result
assert result['columns'] == ['id', 'name']
@pytest.mark.asyncio
async def test_select_invalid_column(pandas_tools, temp_csv):
"""Test select with invalid column"""
await pandas_tools.read_csv(temp_csv, name='select_error')
result = await pandas_tools.select('select_error', ['id', 'nonexistent'])
assert 'error' in result
assert 'available_columns' in result
@pytest.mark.asyncio
async def test_groupby(pandas_tools, tmp_path):
"""Test groupby aggregation"""
# Create test data with groups
csv_path = tmp_path / 'groupby.csv'
df = pd.DataFrame({
'category': ['A', 'A', 'B', 'B'],
'value': [10, 20, 30, 40]
})
df.to_csv(csv_path, index=False)
await pandas_tools.read_csv(str(csv_path), name='groupby_test')
result = await pandas_tools.groupby(
'groupby_test',
by='category',
agg={'value': 'sum'}
)
assert 'data_ref' in result
assert result['rows'] == 2 # Two groups: A, B
@pytest.mark.asyncio
async def test_join(pandas_tools, tmp_path):
"""Test joining DataFrames"""
# Create left table
left_path = tmp_path / 'left.csv'
pd.DataFrame({
'id': [1, 2, 3],
'name': ['A', 'B', 'C']
}).to_csv(left_path, index=False)
# Create right table
right_path = tmp_path / 'right.csv'
pd.DataFrame({
'id': [1, 2, 4],
'value': [100, 200, 400]
}).to_csv(right_path, index=False)
await pandas_tools.read_csv(str(left_path), name='left')
await pandas_tools.read_csv(str(right_path), name='right')
result = await pandas_tools.join('left', 'right', on='id', how='inner')
assert 'data_ref' in result
assert result['rows'] == 2 # Only id 1 and 2 match
@pytest.mark.asyncio
async def test_list_data(pandas_tools, temp_csv):
"""Test listing all DataFrames"""
await pandas_tools.read_csv(temp_csv, name='list_test1')
await pandas_tools.read_csv(temp_csv, name='list_test2')
result = await pandas_tools.list_data()
assert result['count'] == 2
refs = [df['ref'] for df in result['dataframes']]
assert 'list_test1' in refs
assert 'list_test2' in refs
@pytest.mark.asyncio
async def test_drop_data(pandas_tools, temp_csv):
"""Test dropping DataFrame"""
await pandas_tools.read_csv(temp_csv, name='drop_test')
result = await pandas_tools.drop_data('drop_test')
assert result['success'] is True
# Verify it's gone
list_result = await pandas_tools.list_data()
refs = [df['ref'] for df in list_result['dataframes']]
assert 'drop_test' not in refs
@pytest.mark.asyncio
async def test_drop_nonexistent(pandas_tools):
"""Test dropping nonexistent DataFrame"""
result = await pandas_tools.drop_data('nonexistent')
assert 'error' in result
@pytest.mark.asyncio
async def test_operations_on_nonexistent(pandas_tools):
"""Test operations on nonexistent data_ref"""
result = await pandas_tools.describe('nonexistent')
assert 'error' in result
result = await pandas_tools.head('nonexistent')
assert 'error' in result
result = await pandas_tools.filter('nonexistent', 'x > 0')
assert 'error' in result
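The filter conditions in these tests ('value > 25', 'invalid_column > 0') look like pandas query expressions; if that assumption holds, the equivalent direct pandas call is:
import pandas as pd

df = pd.DataFrame({"value": [10.5, 20.0, 30.5, 40.0, 50.5]})
df.query("value > 25")   # -> the 3 rows with 30.5, 40.0 and 50.5, matching test_filter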

View File

@@ -0,0 +1,338 @@
"""
Unit tests for PostgreSQL MCP tools.
"""
import pytest
from unittest.mock import Mock, AsyncMock, patch, MagicMock
import pandas as pd
@pytest.fixture
def mock_config():
"""Mock configuration"""
return {
'postgres_url': 'postgresql://test:test@localhost:5432/testdb',
'max_rows': 100000
}
@pytest.fixture
def postgres_tools(mock_config):
"""Create PostgresTools instance with mocked config"""
with patch('mcp_server.postgres_tools.load_config', return_value=mock_config):
from mcp_server.postgres_tools import PostgresTools
from mcp_server.data_store import DataStore
# Reset store
store = DataStore.get_instance()
store._dataframes = {}
store._metadata = {}
tools = PostgresTools()
tools.config = mock_config
return tools
@pytest.mark.asyncio
async def test_pg_connect_no_config():
"""Test pg_connect when no PostgreSQL configured"""
with patch('mcp_server.postgres_tools.load_config', return_value={'postgres_url': None}):
from mcp_server.postgres_tools import PostgresTools
tools = PostgresTools()
tools.config = {'postgres_url': None}
result = await tools.pg_connect()
assert result['connected'] is False
assert 'not configured' in result['error'].lower()
@pytest.mark.asyncio
async def test_pg_connect_success(postgres_tools):
"""Test successful pg_connect"""
mock_conn = AsyncMock()
mock_conn.fetchval = AsyncMock(side_effect=[
'PostgreSQL 15.1', # version
'testdb', # database name
'testuser', # user
None # PostGIS check fails
])
mock_conn.close = AsyncMock()
# Create proper async context manager
mock_cm = AsyncMock()
mock_cm.__aenter__ = AsyncMock(return_value=mock_conn)
mock_cm.__aexit__ = AsyncMock(return_value=None)
mock_pool = MagicMock()
mock_pool.acquire = MagicMock(return_value=mock_cm)
# Use AsyncMock for create_pool since it's awaited
with patch('asyncpg.create_pool', new=AsyncMock(return_value=mock_pool)):
postgres_tools.pool = None
result = await postgres_tools.pg_connect()
assert result['connected'] is True
assert result['database'] == 'testdb'
@pytest.mark.asyncio
async def test_pg_query_success(postgres_tools):
"""Test successful pg_query"""
mock_rows = [
{'id': 1, 'name': 'Alice'},
{'id': 2, 'name': 'Bob'}
]
mock_conn = AsyncMock()
mock_conn.fetch = AsyncMock(return_value=mock_rows)
mock_pool = AsyncMock()
mock_pool.acquire = MagicMock(return_value=AsyncMock(
__aenter__=AsyncMock(return_value=mock_conn),
__aexit__=AsyncMock()
))
postgres_tools.pool = mock_pool
result = await postgres_tools.pg_query('SELECT * FROM users', name='users_data')
assert 'data_ref' in result
assert result['rows'] == 2
@pytest.mark.asyncio
async def test_pg_query_empty_result(postgres_tools):
"""Test pg_query with no results"""
mock_conn = AsyncMock()
mock_conn.fetch = AsyncMock(return_value=[])
mock_pool = AsyncMock()
mock_pool.acquire = MagicMock(return_value=AsyncMock(
__aenter__=AsyncMock(return_value=mock_conn),
__aexit__=AsyncMock()
))
postgres_tools.pool = mock_pool
result = await postgres_tools.pg_query('SELECT * FROM empty_table')
assert result['data_ref'] is None
assert result['rows'] == 0
@pytest.mark.asyncio
async def test_pg_execute_success(postgres_tools):
"""Test successful pg_execute"""
mock_conn = AsyncMock()
mock_conn.execute = AsyncMock(return_value='INSERT 0 3')
mock_pool = AsyncMock()
mock_pool.acquire = MagicMock(return_value=AsyncMock(
__aenter__=AsyncMock(return_value=mock_conn),
__aexit__=AsyncMock()
))
postgres_tools.pool = mock_pool
result = await postgres_tools.pg_execute('INSERT INTO users VALUES (1, 2, 3)')
assert result['success'] is True
assert result['affected_rows'] == 3
assert result['command'] == 'INSERT'
@pytest.mark.asyncio
async def test_pg_tables(postgres_tools):
"""Test listing tables"""
mock_rows = [
{'table_name': 'users', 'table_type': 'BASE TABLE', 'column_count': 5},
{'table_name': 'orders', 'table_type': 'BASE TABLE', 'column_count': 8}
]
mock_conn = AsyncMock()
mock_conn.fetch = AsyncMock(return_value=mock_rows)
mock_pool = AsyncMock()
mock_pool.acquire = MagicMock(return_value=AsyncMock(
__aenter__=AsyncMock(return_value=mock_conn),
__aexit__=AsyncMock()
))
postgres_tools.pool = mock_pool
result = await postgres_tools.pg_tables(schema='public')
assert result['schema'] == 'public'
assert result['count'] == 2
assert len(result['tables']) == 2
@pytest.mark.asyncio
async def test_pg_columns(postgres_tools):
"""Test getting column info"""
mock_rows = [
{
'column_name': 'id',
'data_type': 'integer',
'udt_name': 'int4',
'is_nullable': 'NO',
'column_default': "nextval('users_id_seq'::regclass)",
'character_maximum_length': None,
'numeric_precision': 32
},
{
'column_name': 'name',
'data_type': 'character varying',
'udt_name': 'varchar',
'is_nullable': 'YES',
'column_default': None,
'character_maximum_length': 255,
'numeric_precision': None
}
]
mock_conn = AsyncMock()
mock_conn.fetch = AsyncMock(return_value=mock_rows)
mock_pool = AsyncMock()
mock_pool.acquire = MagicMock(return_value=AsyncMock(
__aenter__=AsyncMock(return_value=mock_conn),
__aexit__=AsyncMock()
))
postgres_tools.pool = mock_pool
result = await postgres_tools.pg_columns(table='users')
assert result['table'] == 'public.users'
assert result['column_count'] == 2
assert result['columns'][0]['name'] == 'id'
assert result['columns'][0]['nullable'] is False
@pytest.mark.asyncio
async def test_pg_schemas(postgres_tools):
"""Test listing schemas"""
mock_rows = [
{'schema_name': 'public'},
{'schema_name': 'app'}
]
mock_conn = AsyncMock()
mock_conn.fetch = AsyncMock(return_value=mock_rows)
mock_pool = AsyncMock()
mock_pool.acquire = MagicMock(return_value=AsyncMock(
__aenter__=AsyncMock(return_value=mock_conn),
__aexit__=AsyncMock()
))
postgres_tools.pool = mock_pool
result = await postgres_tools.pg_schemas()
assert result['count'] == 2
assert 'public' in result['schemas']
@pytest.mark.asyncio
async def test_st_tables(postgres_tools):
"""Test listing PostGIS tables"""
mock_rows = [
{
'table_name': 'locations',
'geometry_column': 'geom',
'geometry_type': 'POINT',
'srid': 4326,
'coord_dimension': 2
}
]
mock_conn = AsyncMock()
mock_conn.fetch = AsyncMock(return_value=mock_rows)
mock_pool = AsyncMock()
mock_pool.acquire = MagicMock(return_value=AsyncMock(
__aenter__=AsyncMock(return_value=mock_conn),
__aexit__=AsyncMock()
))
postgres_tools.pool = mock_pool
result = await postgres_tools.st_tables()
assert result['count'] == 1
assert result['postgis_tables'][0]['table'] == 'locations'
assert result['postgis_tables'][0]['srid'] == 4326
@pytest.mark.asyncio
async def test_st_tables_no_postgis(postgres_tools):
"""Test st_tables when PostGIS not installed"""
mock_conn = AsyncMock()
mock_conn.fetch = AsyncMock(side_effect=Exception("relation \"geometry_columns\" does not exist"))
# Create proper async context manager
mock_cm = AsyncMock()
mock_cm.__aenter__ = AsyncMock(return_value=mock_conn)
mock_cm.__aexit__ = AsyncMock(return_value=None)
mock_pool = MagicMock()
mock_pool.acquire = MagicMock(return_value=mock_cm)
postgres_tools.pool = mock_pool
result = await postgres_tools.st_tables()
assert 'error' in result
assert 'PostGIS' in result['error']
@pytest.mark.asyncio
async def test_st_extent(postgres_tools):
"""Test getting geometry bounding box"""
mock_row = {
'xmin': -122.5,
'ymin': 37.5,
'xmax': -122.0,
'ymax': 38.0
}
mock_conn = AsyncMock()
mock_conn.fetchrow = AsyncMock(return_value=mock_row)
mock_pool = AsyncMock()
mock_pool.acquire = MagicMock(return_value=AsyncMock(
__aenter__=AsyncMock(return_value=mock_conn),
__aexit__=AsyncMock()
))
postgres_tools.pool = mock_pool
result = await postgres_tools.st_extent(table='locations', column='geom')
assert result['bbox']['xmin'] == -122.5
assert result['bbox']['ymax'] == 38.0
@pytest.mark.asyncio
async def test_error_handling(postgres_tools):
"""Test error handling for database errors"""
mock_conn = AsyncMock()
mock_conn.fetch = AsyncMock(side_effect=Exception("Connection refused"))
# Create proper async context manager
mock_cm = AsyncMock()
mock_cm.__aenter__ = AsyncMock(return_value=mock_conn)
mock_cm.__aexit__ = AsyncMock(return_value=None)
mock_pool = MagicMock()
mock_pool.acquire = MagicMock(return_value=mock_cm)
postgres_tools.pool = mock_pool
result = await postgres_tools.pg_query('SELECT 1')
assert 'error' in result
assert 'Connection refused' in result['error']
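These tests mock the pool, but the $1/$2 placeholder convention mentioned in the pg_query schema is native asyncpg syntax. A minimal direct example against a real database (connection string is illustrative):
import asyncio
import asyncpg

async def demo():
    conn = await asyncpg.connect("postgresql://test:test@localhost:5432/testdb")
    rows = await conn.fetch("SELECT * FROM users WHERE id = $1", 42)  # positional $n parameters
    await conn.close()
    return rows

# asyncio.run(demo())  # requires a reachable PostgreSQL instance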

View File

@@ -389,25 +389,24 @@ def list_issues(self, state='open', labels=None, repo=None):
## License
Part of the Bandit Labs Claude Code Plugins project.
MIT License - Part of the Leo Claude Marketplace project.
## Related Documentation
- **MCP Specification**: `docs/references/MCP-GITEA.md`
- **Project Summary**: `docs/references/PROJECT-SUMMARY.md`
- **Implementation Plan**: `docs/reference-material/projman-implementation-plan.md`
- **Projman Documentation**: `plugins/projman/README.md`
- **Configuration Guide**: `plugins/projman/CONFIGURATION.md`
- **Testing Guide**: `TESTING.md`
## Support
For issues or questions:
1. Check [TESTING.md](./TESTING.md) troubleshooting section
2. Review [MCP-GITEA.md](../../docs/references/MCP-GITEA.md) specification
2. Review [plugins/projman/README.md](../../README.md) for plugin documentation
3. Create an issue in the project repository
---
**Built for**: Bandit Labs Project Management Plugins
**Built for**: Leo Claude Marketplace - Project Management Plugins
**Phase**: 1 (Complete)
**Status**: ✅ Production Ready
**Last Updated**: 2025-01-06

View File

@@ -574,8 +574,8 @@ After completing testing:
- **MCP Documentation**: https://docs.anthropic.com/claude/docs/mcp
- **Gitea API Documentation**: https://docs.gitea.io/en-us/api-usage/
- **Project Documentation**: `docs/references/MCP-GITEA.md`
- **Implementation Plan**: `docs/references/PROJECT-SUMMARY.md`
- **Projman Documentation**: `plugins/projman/README.md`
- **Configuration Guide**: `plugins/projman/CONFIGURATION.md`
---

View File

@@ -0,0 +1,227 @@
"""
Configuration loader for Gitea MCP Server.
Implements hybrid configuration system:
- System-level: ~/.config/claude/gitea.env (credentials)
- Project-level: .env (repository specification)
- Auto-detection: Falls back to git remote URL parsing
"""
from pathlib import Path
from dotenv import load_dotenv
import os
import re
import subprocess
import logging
from typing import Dict, Optional
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
class GiteaConfig:
"""Hybrid configuration loader with mode detection"""
def __init__(self):
self.api_url: Optional[str] = None
self.api_token: Optional[str] = None
self.repo: Optional[str] = None
self.mode: str = 'project'
def load(self) -> Dict[str, Optional[str]]:
"""
Load configuration from system and project levels.
Project-level configuration overrides system-level.
Returns:
Dict containing api_url, api_token, repo, mode
Raises:
FileNotFoundError: If system config is missing
ValueError: If required configuration is missing
"""
# Load system config
system_config = Path.home() / '.config' / 'claude' / 'gitea.env'
if system_config.exists():
load_dotenv(system_config)
logger.info(f"Loaded system configuration from {system_config}")
else:
raise FileNotFoundError(
f"System config not found: {system_config}\n"
"Create it with: mkdir -p ~/.config/claude && "
"cat > ~/.config/claude/gitea.env"
)
# Find project directory (MCP server cwd is plugin dir, not project dir)
project_dir = self._find_project_directory()
# Load project config (overrides system)
if project_dir:
project_config = project_dir / '.env'
if project_config.exists():
load_dotenv(project_config, override=True)
logger.info(f"Loaded project configuration from {project_config}")
# Extract values
self.api_url = os.getenv('GITEA_API_URL')
self.api_token = os.getenv('GITEA_API_TOKEN')
self.repo = os.getenv('GITEA_REPO') # Optional, must be owner/repo format
# Auto-detect repo from git remote if not specified
if not self.repo and project_dir:
self.repo = self._detect_repo_from_git(project_dir)
if self.repo:
logger.info(f"Auto-detected repository from git remote: {self.repo}")
# Detect mode
if self.repo:
self.mode = 'project'
logger.info(f"Running in project mode: {self.repo}")
else:
self.mode = 'company'
logger.info("Running in company-wide mode (PMO)")
# Validate required variables
self._validate()
return {
'api_url': self.api_url,
'api_token': self.api_token,
'repo': self.repo,
'mode': self.mode
}
def _validate(self) -> None:
"""
Validate that required configuration is present.
Raises:
ValueError: If required configuration is missing
"""
required = {
'GITEA_API_URL': self.api_url,
'GITEA_API_TOKEN': self.api_token
}
missing = [key for key, value in required.items() if not value]
if missing:
raise ValueError(
f"Missing required configuration: {', '.join(missing)}\n"
"Check your ~/.config/claude/gitea.env file"
)
def _find_project_directory(self) -> Optional[Path]:
"""
Find the user's project directory.
The MCP server runs with cwd set to the plugin directory, not the
user's project. We need to find the actual project directory using
various heuristics.
Returns:
Path to project directory, or None if not found
"""
# Strategy 1: Check CLAUDE_PROJECT_DIR environment variable
project_dir = os.getenv('CLAUDE_PROJECT_DIR')
if project_dir:
path = Path(project_dir)
if path.exists():
logger.info(f"Found project directory from CLAUDE_PROJECT_DIR: {path}")
return path
# Strategy 2: Check PWD (original working directory before cwd override)
pwd = os.getenv('PWD')
if pwd:
path = Path(pwd)
# Verify it has .git or .env (indicates a project)
if path.exists() and ((path / '.git').exists() or (path / '.env').exists()):
logger.info(f"Found project directory from PWD: {path}")
return path
# Strategy 3: Check current working directory
# This handles test scenarios and cases where cwd is actually the project
cwd = Path.cwd()
if (cwd / '.git').exists() or (cwd / '.env').exists():
logger.info(f"Found project directory from cwd: {cwd}")
return cwd
# Strategy 4: Check if GITEA_REPO is already set (user configured it)
# If so, we don't need to find the project directory for git detection
if os.getenv('GITEA_REPO'):
logger.debug("GITEA_REPO already set, skipping project directory detection")
return None
logger.debug("Could not determine project directory")
return None
def _detect_repo_from_git(self, project_dir: Optional[Path] = None) -> Optional[str]:
"""
Auto-detect repository from git remote origin URL.
Args:
project_dir: Directory to run git command from (defaults to cwd)
Supports URL formats:
- SSH: ssh://git@host:port/owner/repo.git
- SSH short: git@host:owner/repo.git
- HTTPS: https://host/owner/repo.git
- HTTP: http://host/owner/repo.git
Returns:
Repository in 'owner/repo' format, or None if detection fails
"""
try:
result = subprocess.run(
['git', 'remote', 'get-url', 'origin'],
capture_output=True,
text=True,
timeout=5,
cwd=str(project_dir) if project_dir else None
)
if result.returncode != 0:
logger.debug("No git remote 'origin' found")
return None
url = result.stdout.strip()
return self._parse_git_url(url)
except subprocess.TimeoutExpired:
logger.warning("Git command timed out")
return None
except FileNotFoundError:
logger.debug("Git not available")
return None
except Exception as e:
logger.debug(f"Failed to detect repo from git: {e}")
return None
def _parse_git_url(self, url: str) -> Optional[str]:
"""
Parse git URL to extract owner/repo.
Args:
url: Git remote URL
Returns:
Repository in 'owner/repo' format, or None if parsing fails
"""
# Remove .git suffix if present
url = re.sub(r'\.git$', '', url)
# SSH format: ssh://git@host:port/owner/repo
ssh_match = re.match(r'ssh://[^/]+/(.+/.+)$', url)
if ssh_match:
return ssh_match.group(1)
# SSH short format: git@host:owner/repo
ssh_short_match = re.match(r'git@[^:]+:(.+/.+)$', url)
if ssh_short_match:
return ssh_short_match.group(1)
# HTTPS/HTTP format: https://host/owner/repo
http_match = re.match(r'https?://[^/]+/(.+/.+)$', url)
if http_match:
return http_match.group(1)
logger.warning(f"Could not parse git URL: {url}")
return None
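A short usage sketch of the loader above; the import path mirrors the data-platform layout and is an assumption, not confirmed by this diff:
from mcp_server.config import GiteaConfig   # hypothetical import path

cfg = GiteaConfig()
settings = cfg.load()   # raises FileNotFoundError if ~/.config/claude/gitea.env is missing
print(settings["mode"], settings["repo"])

# _parse_git_url reduces each supported remote form to 'owner/repo':
#   ssh://git@host:2222/owner/repo.git  -> owner/repo
#   git@host:owner/repo.git             -> owner/repo
#   https://host/owner/repo.git         -> owner/repo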

View File

@@ -45,7 +45,7 @@ class GiteaClient:
"""Parse owner/repo from input. Always requires 'owner/repo' format."""
target = repo or self.repo
if not target or '/' not in target:
raise ValueError("Use 'owner/repo' format (e.g. 'bandit/support-claude-mktplace')")
raise ValueError("Use 'owner/repo' format (e.g. 'org/repo-name')")
parts = target.split('/', 1)
return parts[0], parts[1]
@@ -53,6 +53,7 @@ class GiteaClient:
self,
state: str = 'open',
labels: Optional[List[str]] = None,
milestone: Optional[str] = None,
repo: Optional[str] = None
) -> List[Dict]:
"""
@@ -61,6 +62,7 @@ class GiteaClient:
Args:
state: Issue state (open, closed, all)
labels: Filter by labels
milestone: Filter by milestone title (exact match)
repo: Repository in 'owner/repo' format
Returns:
@@ -71,6 +73,8 @@ class GiteaClient:
params = {'state': state}
if labels:
params['labels'] = ','.join(labels)
if milestone:
params['milestones'] = milestone
logger.info(f"Listing issues from {owner}/{target_repo} with state={state}")
response = self.session.get(url, params=params)
response.raise_for_status()
@@ -110,8 +114,14 @@ class GiteaClient:
def _resolve_label_ids(self, label_names: List[str], owner: str, repo: str) -> List[int]:
"""Convert label names to label IDs."""
org_labels = self.get_org_labels(owner)
repo_labels = self.get_labels(f"{owner}/{repo}")
full_repo = f"{owner}/{repo}"
# Only fetch org labels if repo belongs to an organization
org_labels = []
if self.is_org_repo(full_repo):
org_labels = self.get_org_labels(owner)
repo_labels = self.get_labels(full_repo)
all_labels = org_labels + repo_labels
label_map = {label['name']: label['id'] for label in all_labels}
label_ids = []
@@ -129,9 +139,24 @@ class GiteaClient:
body: Optional[str] = None,
state: Optional[str] = None,
labels: Optional[List[str]] = None,
milestone: Optional[int] = None,
repo: Optional[str] = None
) -> Dict:
"""Update existing issue. Repo must be 'owner/repo' format."""
"""
Update existing issue.
Args:
issue_number: Issue number to update
title: New title (optional)
body: New body (optional)
state: New state - 'open' or 'closed' (optional)
labels: New labels (optional)
milestone: Milestone ID to assign (optional)
repo: Repository in 'owner/repo' format
Returns:
Updated issue dictionary
"""
owner, target_repo = self._parse_repo(repo)
url = f"{self.base_url}/repos/{owner}/{target_repo}/issues/{issue_number}"
data = {}
@@ -143,6 +168,8 @@ class GiteaClient:
data['state'] = state
if labels is not None:
data['labels'] = labels
if milestone is not None:
data['milestone'] = milestone
logger.info(f"Updating issue #{issue_number} in {owner}/{target_repo}")
response = self.session.patch(url, json=data)
response.raise_for_status()
@@ -233,8 +260,11 @@ class GiteaClient:
repo: Optional[str] = None
) -> Dict:
"""Get a specific wiki page by name."""
from urllib.parse import quote
owner, target_repo = self._parse_repo(repo)
url = f"{self.base_url}/repos/{owner}/{target_repo}/wiki/page/{page_name}"
# URL-encode the page_name to handle special characters like ':'
encoded_page_name = quote(page_name, safe='')
url = f"{self.base_url}/repos/{owner}/{target_repo}/wiki/page/{encoded_page_name}"
logger.info(f"Getting wiki page '{page_name}' from {owner}/{target_repo}")
response = self.session.get(url)
response.raise_for_status()
@@ -265,9 +295,13 @@ class GiteaClient:
repo: Optional[str] = None
) -> Dict:
"""Update an existing wiki page."""
from urllib.parse import quote
owner, target_repo = self._parse_repo(repo)
url = f"{self.base_url}/repos/{owner}/{target_repo}/wiki/page/{page_name}"
# URL-encode the page_name to handle special characters like ':'
encoded_page_name = quote(page_name, safe='')
url = f"{self.base_url}/repos/{owner}/{target_repo}/wiki/page/{encoded_page_name}"
data = {
'title': page_name, # CRITICAL: include title to preserve page name
'content_base64': self._encode_base64(content)
}
logger.info(f"Updating wiki page '{page_name}' in {owner}/{target_repo}")
@@ -281,8 +315,11 @@ class GiteaClient:
repo: Optional[str] = None
) -> bool:
"""Delete a wiki page."""
from urllib.parse import quote
owner, target_repo = self._parse_repo(repo)
url = f"{self.base_url}/repos/{owner}/{target_repo}/wiki/page/{page_name}"
# URL-encode the page_name to handle special characters like ':'
encoded_page_name = quote(page_name, safe='')
url = f"{self.base_url}/repos/{owner}/{target_repo}/wiki/page/{encoded_page_name}"
logger.info(f"Deleting wiki page '{page_name}' from {owner}/{target_repo}")
response = self.session.delete(url)
response.raise_for_status()
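For reference, the encoding the three wiki methods now apply to page names (the page title here is illustrative):
from urllib.parse import quote

quote("Lessons Learned: MCP Venvs", safe="")
# -> 'Lessons%20Learned%3A%20MCP%20Venvs'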
@@ -548,10 +585,33 @@ class GiteaClient:
return response.json()
def is_org_repo(self, repo: Optional[str] = None) -> bool:
"""Check if repository belongs to an organization (not a user)."""
info = self.get_repo_info(repo)
owner_type = info.get('owner', {}).get('type', '')
return owner_type.lower() == 'organization'
"""
Check if repository belongs to an organization (not a user).
Uses the /orgs/{owner} endpoint to reliably detect organizations,
as the owner.type field in repo info may be null in some Gitea versions.
"""
owner, _ = self._parse_repo(repo)
return self._is_organization(owner)
def _is_organization(self, owner: str) -> bool:
"""
Check if an owner is an organization by querying the orgs endpoint.
Args:
owner: The owner name to check
Returns:
True if owner is an organization, False if user or unknown
"""
url = f"{self.base_url}/orgs/{owner}"
try:
response = self.session.get(url)
# 200 = organization exists, 404 = not an organization (user account)
return response.status_code == 200
except Exception as e:
logger.warning(f"Failed to check if {owner} is organization: {e}")
return False
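A rough sketch of the endpoint behavior this relies on, assuming `client` is an initialized GiteaClient (owner names are illustrative):
client._is_organization("personal-projects")  # GET /api/v1/orgs/personal-projects -> 200 -> True
client._is_organization("some-user")          # GET /api/v1/orgs/some-user -> 404 -> False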
def get_branch_protection(
self,
@@ -591,3 +651,199 @@ class GiteaClient:
response = self.session.post(url, json=data)
response.raise_for_status()
return response.json()
def create_org_label(
self,
org: str,
name: str,
color: str,
description: Optional[str] = None
) -> Dict:
"""
Create a new label at the organization level.
Organization labels are shared across all repositories in the org.
Use this for workflow labels (Type, Priority, Complexity, Effort, etc.)
Args:
org: Organization name
name: Label name (e.g., 'Type/Bug', 'Priority/High')
color: Hex color code (with or without #)
description: Optional label description
Returns:
Created label dictionary
"""
url = f"{self.base_url}/orgs/{org}/labels"
data = {
'name': name,
'color': color.lstrip('#') # Remove # if present
}
if description:
data['description'] = description
logger.info(f"Creating organization label '{name}' in {org}")
response = self.session.post(url, json=data)
response.raise_for_status()
return response.json()
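A minimal usage sketch, with hypothetical org and label values:
client.create_org_label("my-org", "Priority/High", "#FF0000", "Needs attention soon")
# POSTs to /api/v1/orgs/my-org/labels; the leading '#' is stripped from the color before sending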
# ========================================
# PULL REQUEST OPERATIONS
# ========================================
def list_pull_requests(
self,
state: str = 'open',
sort: str = 'recentupdate',
labels: Optional[List[str]] = None,
repo: Optional[str] = None
) -> List[Dict]:
"""
List pull requests from Gitea repository.
Args:
state: PR state (open, closed, all)
sort: Sort order (oldest, recentupdate, leastupdate, mostcomment, leastcomment, priority)
labels: Filter by labels
repo: Repository in 'owner/repo' format
Returns:
List of pull request dictionaries
"""
owner, target_repo = self._parse_repo(repo)
url = f"{self.base_url}/repos/{owner}/{target_repo}/pulls"
params = {'state': state, 'sort': sort}
if labels:
params['labels'] = ','.join(labels)
logger.info(f"Listing PRs from {owner}/{target_repo} with state={state}")
response = self.session.get(url, params=params)
response.raise_for_status()
return response.json()
def get_pull_request(
self,
pr_number: int,
repo: Optional[str] = None
) -> Dict:
"""Get specific pull request details."""
owner, target_repo = self._parse_repo(repo)
url = f"{self.base_url}/repos/{owner}/{target_repo}/pulls/{pr_number}"
logger.info(f"Getting PR #{pr_number} from {owner}/{target_repo}")
response = self.session.get(url)
response.raise_for_status()
return response.json()
def get_pr_diff(
self,
pr_number: int,
repo: Optional[str] = None
) -> str:
"""Get the diff for a pull request."""
owner, target_repo = self._parse_repo(repo)
url = f"{self.base_url}/repos/{owner}/{target_repo}/pulls/{pr_number}.diff"
logger.info(f"Getting diff for PR #{pr_number} from {owner}/{target_repo}")
response = self.session.get(url)
response.raise_for_status()
return response.text
def get_pr_comments(
self,
pr_number: int,
repo: Optional[str] = None
) -> List[Dict]:
"""Get comments on a pull request (uses issue comments endpoint)."""
owner, target_repo = self._parse_repo(repo)
# PRs share comment endpoint with issues in Gitea
url = f"{self.base_url}/repos/{owner}/{target_repo}/issues/{pr_number}/comments"
logger.info(f"Getting comments for PR #{pr_number} from {owner}/{target_repo}")
response = self.session.get(url)
response.raise_for_status()
return response.json()
def create_pr_review(
self,
pr_number: int,
body: str,
event: str = 'COMMENT',
comments: Optional[List[Dict]] = None,
repo: Optional[str] = None
) -> Dict:
"""
Create a review on a pull request.
Args:
pr_number: Pull request number
body: Review body/summary
event: Review action (APPROVE, REQUEST_CHANGES, COMMENT)
comments: Optional list of inline comments with path, position, body
repo: Repository in 'owner/repo' format
Returns:
Created review dictionary
"""
owner, target_repo = self._parse_repo(repo)
url = f"{self.base_url}/repos/{owner}/{target_repo}/pulls/{pr_number}/reviews"
data = {
'body': body,
'event': event
}
if comments:
data['comments'] = comments
logger.info(f"Creating review on PR #{pr_number} in {owner}/{target_repo}")
response = self.session.post(url, json=data)
response.raise_for_status()
return response.json()
def add_pr_comment(
self,
pr_number: int,
body: str,
repo: Optional[str] = None
) -> Dict:
"""Add a general comment to a pull request (uses issue comment endpoint)."""
owner, target_repo = self._parse_repo(repo)
# PRs share comment endpoint with issues in Gitea
url = f"{self.base_url}/repos/{owner}/{target_repo}/issues/{pr_number}/comments"
data = {'body': body}
logger.info(f"Adding comment to PR #{pr_number} in {owner}/{target_repo}")
response = self.session.post(url, json=data)
response.raise_for_status()
return response.json()
def create_pull_request(
self,
title: str,
body: str,
head: str,
base: str,
labels: Optional[List[str]] = None,
repo: Optional[str] = None
) -> Dict:
"""
Create a new pull request.
Args:
title: PR title
body: PR description/body
head: Source branch name (the branch with changes)
base: Target branch name (the branch to merge into)
labels: Optional list of label names
repo: Repository in 'owner/repo' format
Returns:
Created pull request dictionary
"""
owner, target_repo = self._parse_repo(repo)
url = f"{self.base_url}/repos/{owner}/{target_repo}/pulls"
data = {
'title': title,
'body': body,
'head': head,
'base': base
}
if labels:
label_ids = self._resolve_label_ids(labels, owner, target_repo)
data['labels'] = label_ids
logger.info(f"Creating PR '{title}' in {owner}/{target_repo}: {head} -> {base}")
response = self.session.post(url, json=data)
response.raise_for_status()
return response.json()
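A minimal usage sketch (branch, title and repo values are hypothetical):
client.create_pull_request("feat(pr): add PR review tools", "Adds PR endpoints", head="feat/pr-tools", base="development")
# if labels are passed, they are resolved to IDs via _resolve_label_ids before the POST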


@@ -17,6 +17,7 @@ from .tools.labels import LabelTools
from .tools.wiki import WikiTools
from .tools.milestones import MilestoneTools
from .tools.dependencies import DependencyTools
from .tools.pull_requests import PullRequestTools
# Suppress noisy MCP validation warnings on stderr
logging.basicConfig(level=logging.INFO)
@@ -25,6 +26,44 @@ logging.getLogger("mcp").setLevel(logging.ERROR)
logger = logging.getLogger(__name__)
def _coerce_types(arguments: dict) -> dict:
"""
Coerce argument types to handle MCP serialization quirks.
MCP sometimes passes integers as strings and arrays as JSON strings.
This function normalizes them to the expected Python types.
"""
coerced = {}
for key, value in arguments.items():
if value is None:
coerced[key] = value
continue
# Coerce integer fields
int_fields = {'issue_number', 'milestone_id', 'pr_number', 'depends_on', 'milestone', 'limit'}
if key in int_fields and isinstance(value, str):
try:
coerced[key] = int(value)
continue
except ValueError:
pass
# Coerce array fields that might be JSON strings
array_fields = {'labels', 'tags', 'issue_numbers', 'comments'}
if key in array_fields and isinstance(value, str):
try:
parsed = json.loads(value)
if isinstance(parsed, list):
coerced[key] = parsed
continue
except json.JSONDecodeError:
pass
coerced[key] = value
return coerced
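For instance, given string-typed arguments as MCP sometimes delivers them, the coercion behaves roughly like this (values illustrative):
_coerce_types({'issue_number': '42', 'labels': '["Type/Bug", "Priority/High"]', 'title': 'Fix login'})
# -> {'issue_number': 42, 'labels': ['Type/Bug', 'Priority/High'], 'title': 'Fix login'}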
class GiteaMCPServer:
"""MCP Server for Gitea integration"""
@@ -37,6 +76,7 @@ class GiteaMCPServer:
self.wiki_tools = None
self.milestone_tools = None
self.dependency_tools = None
self.pr_tools = None
async def initialize(self):
"""
@@ -55,6 +95,7 @@ class GiteaMCPServer:
self.wiki_tools = WikiTools(self.client)
self.milestone_tools = MilestoneTools(self.client)
self.dependency_tools = DependencyTools(self.client)
self.pr_tools = PullRequestTools(self.client)
logger.info(f"Gitea MCP Server initialized in {self.config['mode']} mode")
except Exception as e:
@@ -85,6 +126,10 @@ class GiteaMCPServer:
"items": {"type": "string"},
"description": "Filter by labels"
},
"milestone": {
"type": "string",
"description": "Filter by milestone title (exact match)"
},
"repo": {
"type": "string",
"description": "Repository name (for PMO mode)"
@@ -99,7 +144,7 @@ class GiteaMCPServer:
"type": "object",
"properties": {
"issue_number": {
"type": "integer",
"type": ["integer", "string"],
"description": "Issue number"
},
"repo": {
@@ -144,7 +189,7 @@ class GiteaMCPServer:
"type": "object",
"properties": {
"issue_number": {
"type": "integer",
"type": ["integer", "string"],
"description": "Issue number"
},
"title": {
@@ -165,6 +210,10 @@ class GiteaMCPServer:
"items": {"type": "string"},
"description": "New labels"
},
"milestone": {
"type": ["integer", "string"],
"description": "Milestone ID to assign"
},
"repo": {
"type": "string",
"description": "Repository name (for PMO mode)"
@@ -180,7 +229,7 @@ class GiteaMCPServer:
"type": "object",
"properties": {
"issue_number": {
"type": "integer",
"type": ["integer", "string"],
"description": "Issue number"
},
"comment": {
@@ -217,6 +266,10 @@ class GiteaMCPServer:
"context": {
"type": "string",
"description": "Issue title + description or sprint context"
},
"repo": {
"type": "string",
"description": "Repository name (owner/repo format)"
}
},
"required": ["context"]
@@ -371,7 +424,7 @@ class GiteaMCPServer:
"description": "Tags to filter by (optional)"
},
"limit": {
"type": "integer",
"type": ["integer", "string"],
"default": 20,
"description": "Maximum results"
},
@@ -382,6 +435,19 @@ class GiteaMCPServer:
}
}
),
Tool(
name="allocate_rfc_number",
description="Allocate the next available RFC number by scanning existing RFC wiki pages",
inputSchema={
"type": "object",
"properties": {
"repo": {
"type": "string",
"description": "Repository name (owner/repo format)"
}
}
}
),
# Milestone Tools
Tool(
name="list_milestones",
@@ -409,7 +475,7 @@ class GiteaMCPServer:
"type": "object",
"properties": {
"milestone_id": {
"type": "integer",
"type": ["integer", "string"],
"description": "Milestone ID"
},
"repo": {
@@ -453,7 +519,7 @@ class GiteaMCPServer:
"type": "object",
"properties": {
"milestone_id": {
"type": "integer",
"type": ["integer", "string"],
"description": "Milestone ID"
},
"title": {
@@ -488,7 +554,7 @@ class GiteaMCPServer:
"type": "object",
"properties": {
"milestone_id": {
"type": "integer",
"type": ["integer", "string"],
"description": "Milestone ID"
},
"repo": {
@@ -507,7 +573,7 @@ class GiteaMCPServer:
"type": "object",
"properties": {
"issue_number": {
"type": "integer",
"type": ["integer", "string"],
"description": "Issue number"
},
"repo": {
@@ -525,11 +591,11 @@ class GiteaMCPServer:
"type": "object",
"properties": {
"issue_number": {
"type": "integer",
"type": ["integer", "string"],
"description": "Issue that will depend on another"
},
"depends_on": {
"type": "integer",
"type": ["integer", "string"],
"description": "Issue that blocks issue_number"
},
"repo": {
@@ -547,11 +613,11 @@ class GiteaMCPServer:
"type": "object",
"properties": {
"issue_number": {
"type": "integer",
"type": ["integer", "string"],
"description": "Issue that depends on another"
},
"depends_on": {
"type": "integer",
"type": ["integer", "string"],
"description": "Issue being depended on"
},
"repo": {
@@ -615,13 +681,13 @@ class GiteaMCPServer:
),
Tool(
name="create_label",
description="Create a new label in the repository",
description="Create a new label in the repository (for repo-specific labels like Component/*, Tech/*)",
inputSchema={
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "Label name"
"description": "Label name (e.g., 'Component/Backend', 'Tech/Python')"
},
"color": {
"type": "string",
@@ -638,6 +704,240 @@ class GiteaMCPServer:
},
"required": ["name", "color"]
}
),
Tool(
name="create_org_label",
description="Create a new label at organization level (for workflow labels like Type/*, Priority/*, Complexity/*, Effort/*)",
inputSchema={
"type": "object",
"properties": {
"org": {
"type": "string",
"description": "Organization name"
},
"name": {
"type": "string",
"description": "Label name (e.g., 'Type/Bug', 'Priority/High')"
},
"color": {
"type": "string",
"description": "Label color (hex code)"
},
"description": {
"type": "string",
"description": "Label description"
}
},
"required": ["org", "name", "color"]
}
),
Tool(
name="create_label_smart",
description="Create a label at the appropriate level (org or repo) based on category. Org: Type/*, Priority/*, Complexity/*, Effort/*, Risk/*, Source/*, Agent/*. Repo: Component/*, Tech/*",
inputSchema={
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "Label name (e.g., 'Type/Bug', 'Component/Backend')"
},
"color": {
"type": "string",
"description": "Label color (hex code)"
},
"description": {
"type": "string",
"description": "Label description"
},
"repo": {
"type": "string",
"description": "Repository name (owner/repo format)"
}
},
"required": ["name", "color"]
}
),
# Pull Request Tools
Tool(
name="list_pull_requests",
description="List pull requests from repository",
inputSchema={
"type": "object",
"properties": {
"state": {
"type": "string",
"enum": ["open", "closed", "all"],
"default": "open",
"description": "PR state filter"
},
"sort": {
"type": "string",
"enum": ["oldest", "recentupdate", "leastupdate", "mostcomment", "leastcomment", "priority"],
"default": "recentupdate",
"description": "Sort order"
},
"labels": {
"type": "array",
"items": {"type": "string"},
"description": "Filter by labels"
},
"repo": {
"type": "string",
"description": "Repository name (owner/repo format)"
}
}
}
),
Tool(
name="get_pull_request",
description="Get specific pull request details",
inputSchema={
"type": "object",
"properties": {
"pr_number": {
"type": ["integer", "string"],
"description": "Pull request number"
},
"repo": {
"type": "string",
"description": "Repository name (owner/repo format)"
}
},
"required": ["pr_number"]
}
),
Tool(
name="get_pr_diff",
description="Get the diff for a pull request",
inputSchema={
"type": "object",
"properties": {
"pr_number": {
"type": ["integer", "string"],
"description": "Pull request number"
},
"repo": {
"type": "string",
"description": "Repository name (owner/repo format)"
}
},
"required": ["pr_number"]
}
),
Tool(
name="get_pr_comments",
description="Get comments on a pull request",
inputSchema={
"type": "object",
"properties": {
"pr_number": {
"type": ["integer", "string"],
"description": "Pull request number"
},
"repo": {
"type": "string",
"description": "Repository name (owner/repo format)"
}
},
"required": ["pr_number"]
}
),
Tool(
name="create_pr_review",
description="Create a review on a pull request (approve, request changes, or comment)",
inputSchema={
"type": "object",
"properties": {
"pr_number": {
"type": ["integer", "string"],
"description": "Pull request number"
},
"body": {
"type": "string",
"description": "Review body/summary"
},
"event": {
"type": "string",
"enum": ["APPROVE", "REQUEST_CHANGES", "COMMENT"],
"default": "COMMENT",
"description": "Review action"
},
"comments": {
"type": "array",
"items": {
"type": "object",
"properties": {
"path": {"type": "string"},
"position": {"type": ["integer", "string"]},
"body": {"type": "string"}
}
},
"description": "Optional inline comments"
},
"repo": {
"type": "string",
"description": "Repository name (owner/repo format)"
}
},
"required": ["pr_number", "body"]
}
),
Tool(
name="add_pr_comment",
description="Add a general comment to a pull request",
inputSchema={
"type": "object",
"properties": {
"pr_number": {
"type": ["integer", "string"],
"description": "Pull request number"
},
"body": {
"type": "string",
"description": "Comment text"
},
"repo": {
"type": "string",
"description": "Repository name (owner/repo format)"
}
},
"required": ["pr_number", "body"]
}
),
Tool(
name="create_pull_request",
description="Create a new pull request",
inputSchema={
"type": "object",
"properties": {
"title": {
"type": "string",
"description": "PR title"
},
"body": {
"type": "string",
"description": "PR description/body"
},
"head": {
"type": "string",
"description": "Source branch name (the branch with changes)"
},
"base": {
"type": "string",
"description": "Target branch name (the branch to merge into)"
},
"labels": {
"type": "array",
"items": {"type": "string"},
"description": "Optional list of label names"
},
"repo": {
"type": "string",
"description": "Repository name (owner/repo format)"
}
},
"required": ["title", "body", "head", "base"]
}
)
]
@@ -654,6 +954,9 @@ class GiteaMCPServer:
List of TextContent with results
"""
try:
# Coerce types to handle MCP serialization quirks
arguments = _coerce_types(arguments)
# Route to appropriate tool handler
if name == "list_issues":
result = await self.issue_tools.list_issues(**arguments)
@@ -690,6 +993,10 @@ class GiteaMCPServer:
limit=arguments.get('limit', 20),
repo=arguments.get('repo')
)
elif name == "allocate_rfc_number":
result = await self.wiki_tools.allocate_rfc_number(
repo=arguments.get('repo')
)
# Milestone tools
elif name == "list_milestones":
result = await self.milestone_tools.list_milestones(**arguments)
@@ -726,6 +1033,35 @@ class GiteaMCPServer:
arguments.get('description'),
arguments.get('repo')
)
elif name == "create_org_label":
result = self.client.create_org_label(
arguments['org'],
arguments['name'],
arguments['color'],
arguments.get('description')
)
elif name == "create_label_smart":
result = await self.label_tools.create_label_smart(
arguments['name'],
arguments['color'],
arguments.get('description'),
arguments.get('repo')
)
# Pull Request tools
elif name == "list_pull_requests":
result = await self.pr_tools.list_pull_requests(**arguments)
elif name == "get_pull_request":
result = await self.pr_tools.get_pull_request(**arguments)
elif name == "get_pr_diff":
result = await self.pr_tools.get_pr_diff(**arguments)
elif name == "get_pr_comments":
result = await self.pr_tools.get_pr_comments(**arguments)
elif name == "create_pr_review":
result = await self.pr_tools.create_pr_review(**arguments)
elif name == "add_pr_comment":
result = await self.pr_tools.add_pr_comment(**arguments)
elif name == "create_pull_request":
result = await self.pr_tools.create_pull_request(**arguments)
else:
raise ValueError(f"Unknown tool: {name}")


@@ -4,4 +4,8 @@ MCP tools for Gitea integration.
This package provides MCP tool implementations for:
- Issue operations (issues.py)
- Label management (labels.py)
- Wiki operations (wiki.py)
- Milestone management (milestones.py)
- Issue dependencies (dependencies.py)
- Pull request operations (pull_requests.py)
"""


@@ -7,6 +7,7 @@ Provides async wrappers for issue CRUD operations with:
- Comprehensive error handling
"""
import asyncio
import os
import subprocess
import logging
from typing import List, Dict, Optional
@@ -27,19 +28,34 @@ class IssueTools:
"""
self.gitea = gitea_client
def _get_project_directory(self) -> Optional[str]:
"""
Get the user's project directory from environment.
Returns:
Project directory path or None if not set
"""
return os.environ.get('CLAUDE_PROJECT_DIR')
def _get_current_branch(self) -> str:
"""
Get current git branch.
Get current git branch from user's project directory.
Uses CLAUDE_PROJECT_DIR environment variable to determine the correct
directory for git operations, avoiding the bug where git runs from
the installed plugin directory instead of the user's project.
Returns:
Current branch name or 'unknown' if not in a git repo
"""
try:
project_dir = self._get_project_directory()
result = subprocess.run(
['git', 'rev-parse', '--abbrev-ref', 'HEAD'],
capture_output=True,
text=True,
check=True
check=True,
cwd=project_dir # Run git in project directory, not plugin directory
)
return result.stdout.strip()
except subprocess.CalledProcessError:
@@ -66,7 +82,13 @@ class IssueTools:
return operation in ['list_issues', 'get_issue', 'get_labels', 'create_issue']
# Development branches (full access)
if branch in ['development', 'develop'] or branch.startswith(('feat/', 'feature/', 'dev/')):
# Include all common feature/fix branch patterns
dev_prefixes = (
'feat/', 'feature/', 'dev/',
'fix/', 'bugfix/', 'hotfix/',
'chore/', 'refactor/', 'docs/', 'test/'
)
if branch in ['development', 'develop'] or branch.startswith(dev_prefixes):
return True
# Unknown branch - be restrictive
@@ -76,6 +98,7 @@ class IssueTools:
self,
state: str = 'open',
labels: Optional[List[str]] = None,
milestone: Optional[str] = None,
repo: Optional[str] = None
) -> List[Dict]:
"""
@@ -84,6 +107,7 @@ class IssueTools:
Args:
state: Issue state (open, closed, all)
labels: Filter by labels
milestone: Filter by milestone title (exact match)
repo: Override configured repo (for PMO multi-repo)
Returns:
@@ -102,7 +126,7 @@ class IssueTools:
loop = asyncio.get_event_loop()
return await loop.run_in_executor(
None,
lambda: self.gitea.list_issues(state, labels, repo)
lambda: self.gitea.list_issues(state, labels, milestone, repo)
)
async def get_issue(
@@ -178,6 +202,7 @@ class IssueTools:
body: Optional[str] = None,
state: Optional[str] = None,
labels: Optional[List[str]] = None,
milestone: Optional[int] = None,
repo: Optional[str] = None
) -> Dict:
"""
@@ -189,6 +214,7 @@ class IssueTools:
body: New body (optional)
state: New state - 'open' or 'closed' (optional)
labels: New labels (optional)
milestone: Milestone ID to assign (optional)
repo: Override configured repo (for PMO multi-repo)
Returns:
@@ -207,7 +233,7 @@ class IssueTools:
loop = asyncio.get_event_loop()
return await loop.run_in_executor(
None,
lambda: self.gitea.update_issue(issue_number, title, body, state, labels, repo)
lambda: self.gitea.update_issue(issue_number, title, body, state, labels, milestone, repo)
)
async def add_comment(


@@ -0,0 +1,377 @@
"""
Label management tools for MCP server.
Provides async wrappers for label operations with:
- Label taxonomy retrieval
- Intelligent label suggestion
- Dynamic label detection
"""
import asyncio
import logging
import re
from typing import List, Dict, Optional
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
class LabelTools:
"""Async wrappers for Gitea label operations"""
def __init__(self, gitea_client):
"""
Initialize label tools.
Args:
gitea_client: GiteaClient instance
"""
self.gitea = gitea_client
async def get_labels(self, repo: Optional[str] = None) -> Dict[str, List[Dict]]:
"""Get all labels (org + repo if org-owned, repo-only if user-owned)."""
loop = asyncio.get_event_loop()
target_repo = repo or self.gitea.repo
if not target_repo or '/' not in target_repo:
raise ValueError("Use 'owner/repo' format (e.g. 'org/repo-name')")
# Check if repo belongs to an organization or user
is_org = await loop.run_in_executor(
None,
lambda: self.gitea.is_org_repo(target_repo)
)
org_labels = []
if is_org:
org = target_repo.split('/')[0]
org_labels = await loop.run_in_executor(
None,
lambda: self.gitea.get_org_labels(org)
)
repo_labels = await loop.run_in_executor(
None,
lambda: self.gitea.get_labels(target_repo)
)
return {
'organization': org_labels,
'repository': repo_labels,
'total_count': len(org_labels) + len(repo_labels)
}
async def suggest_labels(self, context: str, repo: Optional[str] = None) -> List[str]:
"""
Analyze context and suggest appropriate labels from repository's actual labels.
This method fetches actual labels from the repository and matches them
dynamically, supporting any label naming convention (slash, colon-space, etc.).
Args:
context: Issue title + description or sprint context
repo: Repository in 'owner/repo' format (optional, uses default if not provided)
Returns:
List of suggested label names that exist in the repository
"""
# Fetch actual labels from repository
target_repo = repo or self.gitea.repo
if not target_repo:
logger.warning("No repository specified, returning empty suggestions")
return []
try:
labels_data = await self.get_labels(target_repo)
all_labels = labels_data.get('organization', []) + labels_data.get('repository', [])
label_names = [label['name'] for label in all_labels]
except Exception as e:
logger.warning(f"Failed to fetch labels: {e}. Using fallback suggestions.")
label_names = []
# Build label lookup for dynamic matching
label_lookup = self._build_label_lookup(label_names)
suggested = []
context_lower = context.lower()
# Type detection (exclusive - only one)
type_label = None
if any(word in context_lower for word in ['bug', 'error', 'fix', 'broken', 'crash', 'fail']):
type_label = self._find_label(label_lookup, 'type', 'bug')
elif any(word in context_lower for word in ['refactor', 'extract', 'restructure', 'architecture', 'service extraction']):
type_label = self._find_label(label_lookup, 'type', 'refactor')
elif any(word in context_lower for word in ['feature', 'add', 'implement', 'new', 'create']):
type_label = self._find_label(label_lookup, 'type', 'feature')
elif any(word in context_lower for word in ['docs', 'documentation', 'readme', 'guide']):
type_label = self._find_label(label_lookup, 'type', 'documentation')
elif any(word in context_lower for word in ['test', 'testing', 'spec', 'coverage']):
type_label = self._find_label(label_lookup, 'type', 'test')
elif any(word in context_lower for word in ['chore', 'maintenance', 'update', 'upgrade']):
type_label = self._find_label(label_lookup, 'type', 'chore')
if type_label:
suggested.append(type_label)
# Priority detection
priority_label = None
if any(word in context_lower for word in ['critical', 'urgent', 'blocker', 'blocking', 'emergency']):
priority_label = self._find_label(label_lookup, 'priority', 'critical')
elif any(word in context_lower for word in ['high', 'important', 'asap', 'soon']):
priority_label = self._find_label(label_lookup, 'priority', 'high')
elif any(word in context_lower for word in ['low', 'nice-to-have', 'optional', 'later']):
priority_label = self._find_label(label_lookup, 'priority', 'low')
else:
priority_label = self._find_label(label_lookup, 'priority', 'medium')
if priority_label:
suggested.append(priority_label)
# Complexity detection
complexity_label = None
if any(word in context_lower for word in ['simple', 'trivial', 'easy', 'quick']):
complexity_label = self._find_label(label_lookup, 'complexity', 'simple')
elif any(word in context_lower for word in ['complex', 'difficult', 'challenging', 'intricate']):
complexity_label = self._find_label(label_lookup, 'complexity', 'complex')
else:
complexity_label = self._find_label(label_lookup, 'complexity', 'medium')
if complexity_label:
suggested.append(complexity_label)
# Effort detection (supports both "Effort" and "Efforts" naming)
effort_label = None
if any(word in context_lower for word in ['xs', 'tiny', '1 hour', '2 hours']):
effort_label = self._find_label(label_lookup, 'effort', 'xs')
elif any(word in context_lower for word in ['small', 's ', '1 day', 'half day']):
effort_label = self._find_label(label_lookup, 'effort', 's')
elif any(word in context_lower for word in ['medium', 'm ', '2 days', '3 days']):
effort_label = self._find_label(label_lookup, 'effort', 'm')
elif any(word in context_lower for word in ['large', 'l ', '1 week', '5 days']):
effort_label = self._find_label(label_lookup, 'effort', 'l')
elif any(word in context_lower for word in ['xl', 'extra large', '2 weeks', 'sprint']):
effort_label = self._find_label(label_lookup, 'effort', 'xl')
if effort_label:
suggested.append(effort_label)
# Component detection (based on keywords)
component_mappings = {
'backend': ['backend', 'server', 'api', 'database', 'service'],
'frontend': ['frontend', 'ui', 'interface', 'react', 'vue', 'component'],
'api': ['api', 'endpoint', 'rest', 'graphql', 'route'],
'database': ['database', 'db', 'sql', 'migration', 'schema', 'postgres'],
'auth': ['auth', 'authentication', 'login', 'oauth', 'token', 'session'],
'deploy': ['deploy', 'deployment', 'docker', 'kubernetes', 'ci/cd'],
'testing': ['test', 'testing', 'spec', 'jest', 'pytest', 'coverage'],
'docs': ['docs', 'documentation', 'readme', 'guide', 'wiki']
}
for component, keywords in component_mappings.items():
if any(keyword in context_lower for keyword in keywords):
label = self._find_label(label_lookup, 'component', component)
if label and label not in suggested:
suggested.append(label)
# Tech stack detection
tech_mappings = {
'python': ['python', 'fastapi', 'django', 'flask', 'pytest'],
'javascript': ['javascript', 'js', 'node', 'npm', 'yarn'],
'docker': ['docker', 'dockerfile', 'container', 'compose'],
'postgresql': ['postgres', 'postgresql', 'psql', 'sql'],
'redis': ['redis', 'cache', 'session store'],
'vue': ['vue', 'vuejs', 'nuxt'],
'fastapi': ['fastapi', 'pydantic', 'starlette']
}
for tech, keywords in tech_mappings.items():
if any(keyword in context_lower for keyword in keywords):
label = self._find_label(label_lookup, 'tech', tech)
if label and label not in suggested:
suggested.append(label)
# Source detection (based on git branch or context)
source_label = None
if 'development' in context_lower or 'dev/' in context_lower:
source_label = self._find_label(label_lookup, 'source', 'development')
elif 'staging' in context_lower or 'stage/' in context_lower:
source_label = self._find_label(label_lookup, 'source', 'staging')
elif 'production' in context_lower or 'prod' in context_lower:
source_label = self._find_label(label_lookup, 'source', 'production')
if source_label:
suggested.append(source_label)
# Risk detection
risk_label = None
if any(word in context_lower for word in ['breaking', 'breaking change', 'major', 'risky']):
risk_label = self._find_label(label_lookup, 'risk', 'high')
elif any(word in context_lower for word in ['safe', 'low risk', 'minor']):
risk_label = self._find_label(label_lookup, 'risk', 'low')
if risk_label:
suggested.append(risk_label)
logger.info(f"Suggested {len(suggested)} labels based on context and {len(label_names)} available labels")
return suggested
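Roughly, for a repository whose labels use the slash format (mirroring the unit tests further down), a call inside an async context might look like this; the exact result depends on which labels exist:
suggestions = await label_tools.suggest_labels("Fix critical bug in login authentication")
# -> e.g. ['Type/Bug', 'Priority/Critical', 'Complexity/Medium', 'Component/Auth']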
def _build_label_lookup(self, label_names: List[str]) -> Dict[str, Dict[str, str]]:
"""
Build a lookup dictionary for label matching.
Supports various label formats:
- Slash format: Type/Bug, Priority/High
- Colon-space format: Type: Bug, Priority: High
- Colon format: Type:Bug
Args:
label_names: List of actual label names from repository
Returns:
Nested dict: {category: {value: actual_label_name}}
"""
lookup: Dict[str, Dict[str, str]] = {}
for label in label_names:
# Try different separator patterns
# Pattern: Category<separator>Value
# Separators: /, : , :
match = re.match(r'^([^/:]+)(?:/|:\s*|:)(.+)$', label)
if match:
category = match.group(1).lower().rstrip('s') # Normalize: "Efforts" -> "effort"
value = match.group(2).lower()
if category not in lookup:
lookup[category] = {}
lookup[category][value] = label
return lookup
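Concretely, with example label names covering both separators and the plural "Efforts" form:
tools._build_label_lookup(['Type/Bug', 'Priority: High', 'Efforts/XS'])
# -> {'type': {'bug': 'Type/Bug'}, 'priority': {'high': 'Priority: High'}, 'effort': {'xs': 'Efforts/XS'}}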
def _find_label(self, lookup: Dict[str, Dict[str, str]], category: str, value: str) -> Optional[str]:
"""
Find actual label name from lookup.
Args:
lookup: Label lookup dictionary
category: Category to search (e.g., 'type', 'priority')
value: Value to find (e.g., 'bug', 'high')
Returns:
Actual label name if found, None otherwise
"""
category_lower = category.lower().rstrip('s') # Normalize
value_lower = value.lower()
if category_lower in lookup and value_lower in lookup[category_lower]:
return lookup[category_lower][value_lower]
return None
# Organization-level label categories (workflow labels shared across repos)
ORG_LABEL_CATEGORIES = {'agent', 'complexity', 'effort', 'efforts', 'priority', 'risk', 'source', 'type'}
# Repository-level label categories (project-specific labels)
REPO_LABEL_CATEGORIES = {'component', 'tech'}
async def create_label_smart(
self,
name: str,
color: str,
description: Optional[str] = None,
repo: Optional[str] = None
) -> Dict:
"""
Create a label at the appropriate level (org or repo) based on category.
Skips if label already exists (checks both org and repo levels).
Organization labels: Agent, Complexity, Effort, Priority, Risk, Source, Type
Repository labels: Component, Tech
Args:
name: Label name (e.g., 'Type/Bug', 'Component/Backend')
color: Hex color code
description: Optional label description
repo: Repository in 'owner/repo' format
Returns:
Created label dictionary with 'level' key, or 'skipped' if already exists
"""
loop = asyncio.get_event_loop()
target_repo = repo or self.gitea.repo
if not target_repo or '/' not in target_repo:
raise ValueError("Use 'owner/repo' format (e.g. 'org/repo-name')")
owner = target_repo.split('/')[0]
is_org = await loop.run_in_executor(
None,
lambda: self.gitea.is_org_repo(target_repo)
)
# Fetch existing labels to check for duplicates
existing_labels = await self.get_labels(target_repo)
all_existing = existing_labels.get('organization', []) + existing_labels.get('repository', [])
existing_names = [label['name'].lower() for label in all_existing]
# Normalize the new label name for comparison
name_normalized = name.lower()
# Also check for format variations (Type/Bug vs Type: Bug)
name_variations = [name_normalized]
if '/' in name:
name_variations.append(name.replace('/', ': ').lower())
name_variations.append(name.replace('/', ':').lower())
elif ': ' in name:
name_variations.append(name.replace(': ', '/').lower())
elif ':' in name:
name_variations.append(name.replace(':', '/').lower())
# Check if label already exists in any format
for variation in name_variations:
if variation in existing_names:
logger.info(f"Label '{name}' already exists (found as '{variation}'), skipping")
return {
'name': name,
'skipped': True,
'reason': f"Label already exists",
'level': 'existing'
}
# Parse category from label name
category = None
if '/' in name:
category = name.split('/')[0].lower().rstrip('s')
elif ':' in name:
category = name.split(':')[0].strip().lower().rstrip('s')
# If it's an org repo and the category is an org-level category, create at org level
if is_org and category in self.ORG_LABEL_CATEGORIES:
result = await loop.run_in_executor(
None,
lambda: self.gitea.create_org_label(owner, name, color, description)
)
# Handle unexpected response types (API may return list or non-dict)
if not isinstance(result, dict):
logger.error(f"Unexpected API response type for org label: {type(result)} - {result}")
return {
'name': name,
'error': True,
'reason': f"API returned {type(result).__name__} instead of dict: {result}",
'level': 'organization'
}
result['level'] = 'organization'
result['skipped'] = False
logger.info(f"Created organization label '{name}' in {owner}")
else:
# Create at repo level
result = await loop.run_in_executor(
None,
lambda: self.gitea.create_label(name, color, description, target_repo)
)
# Handle unexpected response types (API may return list or non-dict)
if not isinstance(result, dict):
logger.error(f"Unexpected API response type for repo label: {type(result)} - {result}")
return {
'name': name,
'error': True,
'reason': f"API returned {type(result).__name__} instead of dict: {result}",
'level': 'repository'
}
result['level'] = 'repository'
result['skipped'] = False
logger.info(f"Created repository label '{name}' in {target_repo}")
return result
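Illustrative routing, assuming the target repo is owned by an organization (label names and colors are examples):
await tools.create_label_smart('Type/Bug', 'd73a4a')           # org-level: Type/* is a workflow category
await tools.create_label_smart('Component/Backend', '0052cc')  # repo-level: Component/* is project-specific
# either call returns a dict with 'skipped': True instead if the label already exists in any format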


@@ -0,0 +1,335 @@
"""
Pull request management tools for MCP server.
Provides async wrappers for PR operations with:
- Branch-aware security
- PMO multi-repo support
- Comprehensive error handling
"""
import asyncio
import os
import subprocess
import logging
from typing import List, Dict, Optional
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
class PullRequestTools:
"""Async wrappers for Gitea pull request operations with branch detection"""
def __init__(self, gitea_client):
"""
Initialize pull request tools.
Args:
gitea_client: GiteaClient instance
"""
self.gitea = gitea_client
def _get_project_directory(self) -> Optional[str]:
"""
Get the user's project directory from environment.
Returns:
Project directory path or None if not set
"""
return os.environ.get('CLAUDE_PROJECT_DIR')
def _get_current_branch(self) -> str:
"""
Get current git branch from user's project directory.
Uses CLAUDE_PROJECT_DIR environment variable to determine the correct
directory for git operations, avoiding the bug where git runs from
the installed plugin directory instead of the user's project.
Returns:
Current branch name or 'unknown' if not in a git repo
"""
try:
project_dir = self._get_project_directory()
result = subprocess.run(
['git', 'rev-parse', '--abbrev-ref', 'HEAD'],
capture_output=True,
text=True,
check=True,
cwd=project_dir # Run git in project directory, not plugin directory
)
return result.stdout.strip()
except subprocess.CalledProcessError:
return "unknown"
def _check_branch_permissions(self, operation: str) -> bool:
"""
Check if operation is allowed on current branch.
Args:
operation: Operation name (list_prs, create_review, etc.)
Returns:
True if operation is allowed, False otherwise
"""
branch = self._get_current_branch()
# Read-only operations allowed everywhere
read_ops = ['list_pull_requests', 'get_pull_request', 'get_pr_diff', 'get_pr_comments']
# Production branches (read-only)
if branch in ['main', 'master'] or branch.startswith('prod/'):
return operation in read_ops
# Staging branches (read-only for PRs, can comment)
if branch == 'staging' or branch.startswith('stage/'):
return operation in read_ops + ['add_pr_comment']
# Development branches (full access)
# Include all common feature/fix branch patterns
dev_prefixes = (
'feat/', 'feature/', 'dev/',
'fix/', 'bugfix/', 'hotfix/',
'chore/', 'refactor/', 'docs/', 'test/'
)
if branch in ['development', 'develop'] or branch.startswith(dev_prefixes):
return True
# Unknown branch - be restrictive
return operation in read_ops
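The intended outcomes, sketched for a few current-branch scenarios (branch names illustrative; the branch itself comes from git, not from an argument):
tools._check_branch_permissions('create_pr_review')     # False when on 'main' or 'prod/*' (read-only)
tools._check_branch_permissions('add_pr_comment')       # True when on 'staging', 'stage/*' or a dev branch
tools._check_branch_permissions('create_pull_request')  # True when on 'development', 'feat/*', 'fix/*', etc.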
async def list_pull_requests(
self,
state: str = 'open',
sort: str = 'recentupdate',
labels: Optional[List[str]] = None,
repo: Optional[str] = None
) -> List[Dict]:
"""
List pull requests from repository (async wrapper).
Args:
state: PR state (open, closed, all)
sort: Sort order
labels: Filter by labels
repo: Override configured repo (for PMO multi-repo)
Returns:
List of pull request dictionaries
Raises:
PermissionError: If operation not allowed on current branch
"""
if not self._check_branch_permissions('list_pull_requests'):
branch = self._get_current_branch()
raise PermissionError(
f"Cannot list PRs on branch '{branch}'. "
f"Switch to a development branch."
)
loop = asyncio.get_event_loop()
return await loop.run_in_executor(
None,
lambda: self.gitea.list_pull_requests(state, sort, labels, repo)
)
async def get_pull_request(
self,
pr_number: int,
repo: Optional[str] = None
) -> Dict:
"""
Get specific pull request details (async wrapper).
Args:
pr_number: Pull request number
repo: Override configured repo (for PMO multi-repo)
Returns:
Pull request dictionary
Raises:
PermissionError: If operation not allowed on current branch
"""
if not self._check_branch_permissions('get_pull_request'):
branch = self._get_current_branch()
raise PermissionError(
f"Cannot get PR on branch '{branch}'. "
f"Switch to a development branch."
)
loop = asyncio.get_event_loop()
return await loop.run_in_executor(
None,
lambda: self.gitea.get_pull_request(pr_number, repo)
)
async def get_pr_diff(
self,
pr_number: int,
repo: Optional[str] = None
) -> str:
"""
Get pull request diff (async wrapper).
Args:
pr_number: Pull request number
repo: Override configured repo (for PMO multi-repo)
Returns:
Diff as string
Raises:
PermissionError: If operation not allowed on current branch
"""
if not self._check_branch_permissions('get_pr_diff'):
branch = self._get_current_branch()
raise PermissionError(
f"Cannot get PR diff on branch '{branch}'. "
f"Switch to a development branch."
)
loop = asyncio.get_event_loop()
return await loop.run_in_executor(
None,
lambda: self.gitea.get_pr_diff(pr_number, repo)
)
async def get_pr_comments(
self,
pr_number: int,
repo: Optional[str] = None
) -> List[Dict]:
"""
Get comments on a pull request (async wrapper).
Args:
pr_number: Pull request number
repo: Override configured repo (for PMO multi-repo)
Returns:
List of comment dictionaries
Raises:
PermissionError: If operation not allowed on current branch
"""
if not self._check_branch_permissions('get_pr_comments'):
branch = self._get_current_branch()
raise PermissionError(
f"Cannot get PR comments on branch '{branch}'. "
f"Switch to a development branch."
)
loop = asyncio.get_event_loop()
return await loop.run_in_executor(
None,
lambda: self.gitea.get_pr_comments(pr_number, repo)
)
async def create_pr_review(
self,
pr_number: int,
body: str,
event: str = 'COMMENT',
comments: Optional[List[Dict]] = None,
repo: Optional[str] = None
) -> Dict:
"""
Create a review on a pull request (async wrapper with branch check).
Args:
pr_number: Pull request number
body: Review body/summary
event: Review action (APPROVE, REQUEST_CHANGES, COMMENT)
comments: Optional list of inline comments
repo: Override configured repo (for PMO multi-repo)
Returns:
Created review dictionary
Raises:
PermissionError: If operation not allowed on current branch
"""
if not self._check_branch_permissions('create_pr_review'):
branch = self._get_current_branch()
raise PermissionError(
f"Cannot create PR review on branch '{branch}'. "
f"Switch to a development branch to review PRs."
)
loop = asyncio.get_event_loop()
return await loop.run_in_executor(
None,
lambda: self.gitea.create_pr_review(pr_number, body, event, comments, repo)
)
async def add_pr_comment(
self,
pr_number: int,
body: str,
repo: Optional[str] = None
) -> Dict:
"""
Add a general comment to a pull request (async wrapper with branch check).
Args:
pr_number: Pull request number
body: Comment text
repo: Override configured repo (for PMO multi-repo)
Returns:
Created comment dictionary
Raises:
PermissionError: If operation not allowed on current branch
"""
if not self._check_branch_permissions('add_pr_comment'):
branch = self._get_current_branch()
raise PermissionError(
f"Cannot add PR comment on branch '{branch}'. "
f"Switch to a development or staging branch to comment on PRs."
)
loop = asyncio.get_event_loop()
return await loop.run_in_executor(
None,
lambda: self.gitea.add_pr_comment(pr_number, body, repo)
)
async def create_pull_request(
self,
title: str,
body: str,
head: str,
base: str,
labels: Optional[List[str]] = None,
repo: Optional[str] = None
) -> Dict:
"""
Create a new pull request (async wrapper with branch check).
Args:
title: PR title
body: PR description/body
head: Source branch name (the branch with changes)
base: Target branch name (the branch to merge into)
labels: Optional list of label names
repo: Override configured repo (for PMO multi-repo)
Returns:
Created pull request dictionary
Raises:
PermissionError: If operation not allowed on current branch
"""
if not self._check_branch_permissions('create_pull_request'):
branch = self._get_current_branch()
raise PermissionError(
f"Cannot create PR on branch '{branch}'. "
f"Switch to a development or feature branch to create PRs."
)
loop = asyncio.get_event_loop()
return await loop.run_in_executor(
None,
lambda: self.gitea.create_pull_request(title, body, head, base, labels, repo)
)


@@ -4,9 +4,11 @@ Wiki management tools for MCP server.
Provides async wrappers for wiki operations to support lessons learned:
- Page CRUD operations
- Lessons learned creation and search
- RFC number allocation
"""
import asyncio
import logging
import re
from typing import List, Dict, Optional
logging.basicConfig(level=logging.INFO)
@@ -147,3 +149,39 @@ class WikiTools:
lambda: self.gitea.search_lessons(query, tags, repo)
)
return results[:limit]
async def allocate_rfc_number(self, repo: Optional[str] = None) -> Dict:
"""
Allocate the next available RFC number.
Scans existing wiki pages for RFC-NNNN pattern and returns
the next sequential number.
Args:
repo: Repository in owner/repo format
Returns:
Dict with 'next_number' (int) and 'formatted' (str like 'RFC-0001')
"""
pages = await self.list_wiki_pages(repo)
# Extract RFC numbers from page titles
rfc_numbers = []
rfc_pattern = re.compile(r'^RFC-(\d{4})')
for page in pages:
title = page.get('title', '')
match = rfc_pattern.match(title)
if match:
rfc_numbers.append(int(match.group(1)))
# Calculate next number
if rfc_numbers:
next_num = max(rfc_numbers) + 1
else:
next_num = 1
return {
'next_number': next_num,
'formatted': f'RFC-{next_num:04d}'
}
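For example, if the wiki already contains pages titled 'RFC-0001-...' and 'RFC-0007-...' (hypothetical titles), the allocator skips the gap and continues from the highest number:
await wiki_tools.allocate_rfc_number()
# -> {'next_number': 8, 'formatted': 'RFC-0008'}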

mcp-servers/gitea/run.sh (new executable file, 21 lines)

@@ -0,0 +1,21 @@
#!/bin/bash
# Capture original working directory before any cd operations
# This should be the user's project directory when launched by Claude Code
export CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/gitea/.venv"
LOCAL_VENV="$SCRIPT_DIR/.venv"
if [[ -f "$CACHE_VENV/bin/python" ]]; then
PYTHON="$CACHE_VENV/bin/python"
elif [[ -f "$LOCAL_VENV/bin/python" ]]; then
PYTHON="$LOCAL_VENV/bin/python"
else
echo "ERROR: No venv found. Run: ./scripts/setup-venvs.sh" >&2
exit 1
fi
cd "$SCRIPT_DIR"
export PYTHONPATH="$SCRIPT_DIR"
exec "$PYTHON" -m mcp_server.server "$@"


@@ -149,3 +149,112 @@ def test_mode_detection_company(tmp_path, monkeypatch):
assert result['mode'] == 'company'
assert result['repo'] is None
# ========================================
# GIT URL PARSING TESTS
# ========================================
def test_parse_git_url_ssh_format():
"""Test parsing SSH format git URL"""
config = GiteaConfig()
# SSH with port: ssh://git@host:port/owner/repo.git
url = "ssh://git@hotserv.tailc9b278.ts.net:2222/personal-projects/personal-portfolio.git"
result = config._parse_git_url(url)
assert result == "personal-projects/personal-portfolio"
def test_parse_git_url_ssh_short_format():
"""Test parsing SSH short format git URL"""
config = GiteaConfig()
# SSH short: git@host:owner/repo.git
url = "git@github.com:owner/repo.git"
result = config._parse_git_url(url)
assert result == "owner/repo"
def test_parse_git_url_https_format():
"""Test parsing HTTPS format git URL"""
config = GiteaConfig()
# HTTPS: https://host/owner/repo.git
url = "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git"
result = config._parse_git_url(url)
assert result == "personal-projects/leo-claude-mktplace"
def test_parse_git_url_http_format():
"""Test parsing HTTP format git URL"""
config = GiteaConfig()
# HTTP: http://host/owner/repo.git
url = "http://gitea.hotserv.cloud/personal-projects/repo.git"
result = config._parse_git_url(url)
assert result == "personal-projects/repo"
def test_parse_git_url_without_git_suffix():
"""Test parsing git URL without .git suffix"""
config = GiteaConfig()
url = "https://github.com/owner/repo"
result = config._parse_git_url(url)
assert result == "owner/repo"
def test_parse_git_url_invalid_format():
"""Test parsing invalid git URL returns None"""
config = GiteaConfig()
url = "not-a-valid-url"
result = config._parse_git_url(url)
assert result is None
def test_find_project_directory_from_env(tmp_path, monkeypatch):
"""Test finding project directory from CLAUDE_PROJECT_DIR env var"""
project_dir = tmp_path / 'my-project'
project_dir.mkdir()
(project_dir / '.git').mkdir()
monkeypatch.setenv('CLAUDE_PROJECT_DIR', str(project_dir))
config = GiteaConfig()
result = config._find_project_directory()
assert result == project_dir
def test_find_project_directory_from_cwd(tmp_path, monkeypatch):
"""Test finding project directory from cwd with .env file"""
project_dir = tmp_path / 'project'
project_dir.mkdir()
(project_dir / '.env').write_text("GITEA_REPO=test/repo")
monkeypatch.chdir(project_dir)
# Clear env vars that might interfere
monkeypatch.delenv('CLAUDE_PROJECT_DIR', raising=False)
monkeypatch.delenv('PWD', raising=False)
config = GiteaConfig()
result = config._find_project_directory()
assert result == project_dir
def test_find_project_directory_none_when_no_markers(tmp_path, monkeypatch):
"""Test returns None when no project markers found"""
empty_dir = tmp_path / 'empty'
empty_dir.mkdir()
monkeypatch.chdir(empty_dir)
monkeypatch.delenv('CLAUDE_PROJECT_DIR', raising=False)
monkeypatch.delenv('PWD', raising=False)
monkeypatch.delenv('GITEA_REPO', raising=False)
config = GiteaConfig()
result = config._find_project_directory()
assert result is None


@@ -222,3 +222,47 @@ def test_no_repo_specified_error(gitea_client):
client.list_issues()
assert "Repository not specified" in str(exc_info.value)
# ========================================
# ORGANIZATION DETECTION TESTS
# ========================================
def test_is_organization_true(gitea_client):
"""Test _is_organization returns True for valid organization"""
mock_response = Mock()
mock_response.status_code = 200
with patch.object(gitea_client.session, 'get', return_value=mock_response):
result = gitea_client._is_organization('personal-projects')
assert result is True
gitea_client.session.get.assert_called_once_with(
'https://test.com/api/v1/orgs/personal-projects'
)
def test_is_organization_false(gitea_client):
"""Test _is_organization returns False for user account"""
mock_response = Mock()
mock_response.status_code = 404
with patch.object(gitea_client.session, 'get', return_value=mock_response):
result = gitea_client._is_organization('lmiranda')
assert result is False
def test_is_org_repo_uses_orgs_endpoint(gitea_client):
"""Test is_org_repo uses /orgs endpoint instead of owner.type"""
mock_response = Mock()
mock_response.status_code = 200
with patch.object(gitea_client.session, 'get', return_value=mock_response):
result = gitea_client.is_org_repo('personal-projects/repo')
assert result is True
# Should call /orgs/personal-projects, not /repos/.../
gitea_client.session.get.assert_called_once_with(
'https://test.com/api/v1/orgs/personal-projects'
)


@@ -0,0 +1,478 @@
"""
Unit tests for label tools with suggestion logic.
"""
import pytest
from unittest.mock import Mock, patch
from mcp_server.tools.labels import LabelTools
@pytest.fixture
def mock_gitea_client():
"""Fixture providing mocked Gitea client"""
client = Mock()
client.repo = 'test_org/test_repo'
client.is_org_repo = Mock(return_value=True)
return client
@pytest.fixture
def label_tools(mock_gitea_client):
"""Fixture providing LabelTools instance"""
return LabelTools(mock_gitea_client)
@pytest.mark.asyncio
async def test_get_labels(label_tools):
"""Test getting all labels (org + repo)"""
label_tools.gitea.get_org_labels = Mock(return_value=[
{'name': 'Type/Bug'},
{'name': 'Type/Feature'}
])
label_tools.gitea.get_labels = Mock(return_value=[
{'name': 'Component/Backend'},
{'name': 'Component/Frontend'}
])
result = await label_tools.get_labels()
assert len(result['organization']) == 2
assert len(result['repository']) == 2
assert result['total_count'] == 4
# ========================================
# LABEL LOOKUP TESTS (NEW)
# ========================================
def test_build_label_lookup_slash_format():
"""Test building label lookup with slash format labels"""
mock_client = Mock()
mock_client.repo = 'test/repo'
tools = LabelTools(mock_client)
labels = ['Type/Bug', 'Type/Feature', 'Priority/High', 'Priority/Low']
lookup = tools._build_label_lookup(labels)
assert 'type' in lookup
assert 'bug' in lookup['type']
assert lookup['type']['bug'] == 'Type/Bug'
assert lookup['type']['feature'] == 'Type/Feature'
assert 'priority' in lookup
assert lookup['priority']['high'] == 'Priority/High'
def test_build_label_lookup_colon_space_format():
"""Test building label lookup with colon-space format labels"""
mock_client = Mock()
mock_client.repo = 'test/repo'
tools = LabelTools(mock_client)
labels = ['Type: Bug', 'Type: Feature', 'Priority: High', 'Effort: M']
lookup = tools._build_label_lookup(labels)
assert 'type' in lookup
assert 'bug' in lookup['type']
assert lookup['type']['bug'] == 'Type: Bug'
assert lookup['type']['feature'] == 'Type: Feature'
assert 'priority' in lookup
assert lookup['priority']['high'] == 'Priority: High'
# Test singular "Effort" (not "Efforts")
assert 'effort' in lookup
assert lookup['effort']['m'] == 'Effort: M'
def test_build_label_lookup_efforts_normalization():
"""Test that 'Efforts' is normalized to 'effort' for matching"""
mock_client = Mock()
mock_client.repo = 'test/repo'
tools = LabelTools(mock_client)
labels = ['Efforts/XS', 'Efforts/S', 'Efforts/M']
lookup = tools._build_label_lookup(labels)
# 'Efforts' should be normalized to 'effort'
assert 'effort' in lookup
assert lookup['effort']['xs'] == 'Efforts/XS'
def test_find_label():
"""Test finding labels from lookup"""
mock_client = Mock()
mock_client.repo = 'test/repo'
tools = LabelTools(mock_client)
lookup = {
'type': {'bug': 'Type: Bug', 'feature': 'Type: Feature'},
'priority': {'high': 'Priority: High', 'low': 'Priority: Low'}
}
assert tools._find_label(lookup, 'type', 'bug') == 'Type: Bug'
assert tools._find_label(lookup, 'priority', 'high') == 'Priority: High'
assert tools._find_label(lookup, 'type', 'nonexistent') is None
assert tools._find_label(lookup, 'nonexistent', 'bug') is None
# ========================================
# SUGGEST LABELS WITH DYNAMIC FORMAT TESTS
# ========================================
def _create_tools_with_labels(labels):
"""Helper to create LabelTools with mocked labels"""
import asyncio
mock_client = Mock()
mock_client.repo = 'test/repo'
mock_client.is_org_repo = Mock(return_value=False)
mock_client.get_labels = Mock(return_value=[{'name': l} for l in labels])
return LabelTools(mock_client)
@pytest.mark.asyncio
async def test_suggest_labels_with_slash_format():
"""Test label suggestion with slash format labels"""
labels = [
'Type/Bug', 'Type/Feature', 'Type/Refactor',
'Priority/Critical', 'Priority/High', 'Priority/Medium', 'Priority/Low',
'Complexity/Simple', 'Complexity/Medium', 'Complexity/Complex',
'Component/Auth'
]
tools = _create_tools_with_labels(labels)
context = "Fix critical bug in login authentication"
suggestions = await tools.suggest_labels(context)
assert 'Type/Bug' in suggestions
assert 'Priority/Critical' in suggestions
assert 'Component/Auth' in suggestions
@pytest.mark.asyncio
async def test_suggest_labels_with_colon_space_format():
"""Test label suggestion with colon-space format labels"""
labels = [
'Type: Bug', 'Type: Feature', 'Type: Refactor',
'Priority: Critical', 'Priority: High', 'Priority: Medium', 'Priority: Low',
'Complexity: Simple', 'Complexity: Medium', 'Complexity: Complex',
'Effort: XS', 'Effort: S', 'Effort: M', 'Effort: L', 'Effort: XL'
]
tools = _create_tools_with_labels(labels)
context = "Fix critical bug for tiny 1 hour fix"
suggestions = await tools.suggest_labels(context)
# Should return colon-space format labels
assert 'Type: Bug' in suggestions
assert 'Priority: Critical' in suggestions
assert 'Effort: XS' in suggestions
@pytest.mark.asyncio
async def test_suggest_labels_bug():
"""Test label suggestion for bug context"""
labels = [
'Type/Bug', 'Type/Feature',
'Priority/Critical', 'Priority/High', 'Priority/Medium', 'Priority/Low',
'Complexity/Simple', 'Complexity/Medium', 'Complexity/Complex',
'Component/Auth'
]
tools = _create_tools_with_labels(labels)
context = "Fix critical bug in login authentication"
suggestions = await tools.suggest_labels(context)
assert 'Type/Bug' in suggestions
assert 'Priority/Critical' in suggestions
assert 'Component/Auth' in suggestions
@pytest.mark.asyncio
async def test_suggest_labels_feature():
"""Test label suggestion for feature context"""
labels = ['Type/Feature', 'Priority/Medium', 'Complexity/Medium']
tools = _create_tools_with_labels(labels)
context = "Add new feature to implement user dashboard"
suggestions = await tools.suggest_labels(context)
assert 'Type/Feature' in suggestions
assert any('Priority' in label for label in suggestions)
@pytest.mark.asyncio
async def test_suggest_labels_refactor():
"""Test label suggestion for refactor context"""
labels = ['Type/Refactor', 'Priority/Medium', 'Complexity/Medium', 'Component/Backend']
tools = _create_tools_with_labels(labels)
context = "Refactor architecture to extract service layer"
suggestions = await tools.suggest_labels(context)
assert 'Type/Refactor' in suggestions
assert 'Component/Backend' in suggestions
@pytest.mark.asyncio
async def test_suggest_labels_documentation():
"""Test label suggestion for documentation context"""
labels = ['Type/Documentation', 'Priority/Medium', 'Complexity/Medium', 'Component/API', 'Component/Docs']
tools = _create_tools_with_labels(labels)
context = "Update documentation for API endpoints"
suggestions = await tools.suggest_labels(context)
assert 'Type/Documentation' in suggestions
assert 'Component/API' in suggestions or 'Component/Docs' in suggestions
@pytest.mark.asyncio
async def test_suggest_labels_priority():
"""Test priority detection in suggestions"""
labels = ['Type/Feature', 'Priority/Critical', 'Priority/High', 'Priority/Medium', 'Priority/Low', 'Complexity/Medium']
tools = _create_tools_with_labels(labels)
# Critical priority
context = "Urgent blocker in production"
suggestions = await tools.suggest_labels(context)
assert 'Priority/Critical' in suggestions
# High priority
context = "Important feature needed asap"
suggestions = await tools.suggest_labels(context)
assert 'Priority/High' in suggestions
# Low priority
context = "Nice-to-have optional improvement"
suggestions = await tools.suggest_labels(context)
assert 'Priority/Low' in suggestions
@pytest.mark.asyncio
async def test_suggest_labels_complexity():
"""Test complexity detection in suggestions"""
labels = ['Type/Feature', 'Priority/Medium', 'Complexity/Simple', 'Complexity/Medium', 'Complexity/Complex']
tools = _create_tools_with_labels(labels)
# Simple complexity
context = "Simple quick fix for typo"
suggestions = await tools.suggest_labels(context)
assert 'Complexity/Simple' in suggestions
# Complex complexity
context = "Complex challenging architecture redesign"
suggestions = await tools.suggest_labels(context)
assert 'Complexity/Complex' in suggestions
@pytest.mark.asyncio
async def test_suggest_labels_efforts():
"""Test efforts detection in suggestions"""
labels = ['Type/Feature', 'Priority/Medium', 'Complexity/Medium', 'Efforts/XS', 'Efforts/S', 'Efforts/M', 'Efforts/L', 'Efforts/XL']
tools = _create_tools_with_labels(labels)
# XS effort
context = "Tiny fix that takes 1 hour"
suggestions = await tools.suggest_labels(context)
assert 'Efforts/XS' in suggestions
# L effort
context = "Large feature taking 1 week"
suggestions = await tools.suggest_labels(context)
assert 'Efforts/L' in suggestions
@pytest.mark.asyncio
async def test_suggest_labels_components():
"""Test component detection in suggestions"""
labels = ['Type/Feature', 'Priority/Medium', 'Complexity/Medium', 'Component/Backend', 'Component/Frontend', 'Component/API', 'Component/Database']
tools = _create_tools_with_labels(labels)
# Backend component
context = "Update backend API service"
suggestions = await tools.suggest_labels(context)
assert 'Component/Backend' in suggestions
assert 'Component/API' in suggestions
# Frontend component
context = "Fix frontend UI component"
suggestions = await tools.suggest_labels(context)
assert 'Component/Frontend' in suggestions
# Database component
context = "Add database migration for schema"
suggestions = await tools.suggest_labels(context)
assert 'Component/Database' in suggestions
@pytest.mark.asyncio
async def test_suggest_labels_tech_stack():
"""Test tech stack detection in suggestions"""
labels = ['Type/Feature', 'Priority/Medium', 'Complexity/Medium', 'Tech/Python', 'Tech/FastAPI', 'Tech/Docker', 'Tech/PostgreSQL']
tools = _create_tools_with_labels(labels)
# Python
context = "Update Python FastAPI endpoint"
suggestions = await tools.suggest_labels(context)
assert 'Tech/Python' in suggestions
assert 'Tech/FastAPI' in suggestions
# Docker
context = "Fix Dockerfile configuration"
suggestions = await tools.suggest_labels(context)
assert 'Tech/Docker' in suggestions
# PostgreSQL
context = "Optimize PostgreSQL query"
suggestions = await tools.suggest_labels(context)
assert 'Tech/PostgreSQL' in suggestions
@pytest.mark.asyncio
async def test_suggest_labels_source():
"""Test source detection in suggestions"""
labels = ['Type/Feature', 'Priority/Medium', 'Complexity/Medium', 'Source/Development', 'Source/Staging', 'Source/Production']
tools = _create_tools_with_labels(labels)
# Development
context = "Issue found in development environment"
suggestions = await tools.suggest_labels(context)
assert 'Source/Development' in suggestions
# Production
context = "Critical production issue"
suggestions = await tools.suggest_labels(context)
assert 'Source/Production' in suggestions
@pytest.mark.asyncio
async def test_suggest_labels_risk():
"""Test risk detection in suggestions"""
labels = ['Type/Feature', 'Priority/Medium', 'Complexity/Medium', 'Risk/High', 'Risk/Low']
tools = _create_tools_with_labels(labels)
# High risk
context = "Breaking change to major API"
suggestions = await tools.suggest_labels(context)
assert 'Risk/High' in suggestions
# Low risk
context = "Safe minor update with low risk"
suggestions = await tools.suggest_labels(context)
assert 'Risk/Low' in suggestions
@pytest.mark.asyncio
async def test_suggest_labels_multiple_categories():
"""Test that suggestions span multiple categories"""
labels = [
'Type/Bug', 'Type/Feature',
'Priority/Critical', 'Priority/Medium',
'Complexity/Complex', 'Complexity/Medium',
'Component/Backend', 'Component/API', 'Component/Auth',
'Tech/FastAPI', 'Tech/PostgreSQL',
'Source/Production'
]
tools = _create_tools_with_labels(labels)
context = """
Urgent critical bug in production backend API service.
Need to fix broken authentication endpoint.
This is a complex issue requiring FastAPI and PostgreSQL expertise.
"""
suggestions = await tools.suggest_labels(context)
# Should have Type
assert any('Type/' in label for label in suggestions)
# Should have Priority
assert any('Priority/' in label for label in suggestions)
# Should have Component
assert any('Component/' in label for label in suggestions)
# Should have Tech
assert any('Tech/' in label for label in suggestions)
# Should have Source
assert any('Source/' in label for label in suggestions)
@pytest.mark.asyncio
async def test_suggest_labels_empty_repo():
"""Test suggestions when no repo specified and no labels available"""
mock_client = Mock()
mock_client.repo = None
tools = LabelTools(mock_client)
context = "Fix a bug"
suggestions = await tools.suggest_labels(context)
# Should return empty list when no repo
assert suggestions == []
@pytest.mark.asyncio
async def test_suggest_labels_no_matching_labels():
"""Test suggestions return empty when no matching labels exist"""
labels = ['Custom/Label', 'Other/Thing'] # No standard labels
tools = _create_tools_with_labels(labels)
context = "Fix a bug"
suggestions = await tools.suggest_labels(context)
# Should return empty list since no Type/Bug or similar exists
assert len(suggestions) == 0
@pytest.mark.asyncio
async def test_get_labels_org_owned_repo():
"""Test getting labels for organization-owned repository"""
mock_client = Mock()
mock_client.repo = 'myorg/myrepo'
mock_client.is_org_repo = Mock(return_value=True)
mock_client.get_org_labels = Mock(return_value=[
{'name': 'Type/Bug', 'id': 1},
{'name': 'Type/Feature', 'id': 2}
])
mock_client.get_labels = Mock(return_value=[
{'name': 'Component/Backend', 'id': 3}
])
tools = LabelTools(mock_client)
result = await tools.get_labels()
# Should fetch both org and repo labels
mock_client.is_org_repo.assert_called_once_with('myorg/myrepo')
mock_client.get_org_labels.assert_called_once_with('myorg')
mock_client.get_labels.assert_called_once_with('myorg/myrepo')
assert len(result['organization']) == 2
assert len(result['repository']) == 1
assert result['total_count'] == 3
@pytest.mark.asyncio
async def test_get_labels_user_owned_repo():
"""Test getting labels for user-owned repository (no org labels)"""
mock_client = Mock()
mock_client.repo = 'lmiranda/personal-portfolio'
mock_client.is_org_repo = Mock(return_value=False)
mock_client.get_labels = Mock(return_value=[
{'name': 'bug', 'id': 1},
{'name': 'enhancement', 'id': 2}
])
tools = LabelTools(mock_client)
result = await tools.get_labels()
# Should check if org repo
mock_client.is_org_repo.assert_called_once_with('lmiranda/personal-portfolio')
# Should NOT call get_org_labels for user-owned repos
mock_client.get_org_labels.assert_not_called()
# Should still get repo labels
mock_client.get_labels.assert_called_once_with('lmiranda/personal-portfolio')
assert len(result['organization']) == 0
assert len(result['repository']) == 2
assert result['total_count'] == 2

View File

@@ -294,4 +294,4 @@ logging.basicConfig(level=logging.DEBUG)
## License
Part of the Bandit Labs Claude Code Plugins project (`support-claude-mktplace`).
MIT License - Part of the Leo Claude Marketplace.

View File

@@ -4,6 +4,7 @@ NetBox API client for interacting with NetBox REST API.
Provides a generic HTTP client with methods for all standard REST operations.
Individual tool modules use this client for their specific endpoints.
"""
import json
import requests
import logging
from typing import List, Dict, Optional, Any, Union
@@ -83,7 +84,20 @@ class NetBoxClient:
if response.status_code == 204 or not response.content:
return None
return response.json()
# Parse JSON with diagnostic error handling
try:
return response.json()
except json.JSONDecodeError as e:
logger.error(
f"JSON decode failed. Status: {response.status_code}, "
f"Content-Length: {len(response.content)}, "
f"Content preview: {response.content[:200]!r}"
)
raise ValueError(
f"Invalid JSON response from NetBox: {e}. "
f"Status code: {response.status_code}, "
f"Content length: {len(response.content)} bytes"
) from e
def list(
self,

View File

@@ -103,7 +103,19 @@ TOOL_DEFINITIONS = {
'properties': {
'id': {'type': 'integer', 'description': 'Site ID'},
'name': {'type': 'string', 'description': 'New name'},
'status': {'type': 'string', 'description': 'New status'}
'slug': {'type': 'string', 'description': 'New slug'},
'status': {'type': 'string', 'description': 'Status'},
'region': {'type': 'integer', 'description': 'Region ID'},
'group': {'type': 'integer', 'description': 'Site group ID'},
'tenant': {'type': 'integer', 'description': 'Tenant ID'},
'facility': {'type': 'string', 'description': 'Facility name'},
'time_zone': {'type': 'string', 'description': 'Time zone'},
'description': {'type': 'string', 'description': 'Description'},
'physical_address': {'type': 'string', 'description': 'Physical address'},
'shipping_address': {'type': 'string', 'description': 'Shipping address'},
'latitude': {'type': 'number', 'description': 'Latitude'},
'longitude': {'type': 'number', 'description': 'Longitude'},
'comments': {'type': 'string', 'description': 'Comments'}
},
'required': ['id']
},
@@ -136,7 +148,14 @@ TOOL_DEFINITIONS = {
},
'dcim_update_location': {
'description': 'Update an existing location',
'properties': {'id': {'type': 'integer', 'description': 'Location ID'}},
'properties': {
'id': {'type': 'integer', 'description': 'Location ID'},
'name': {'type': 'string', 'description': 'New name'},
'slug': {'type': 'string', 'description': 'New slug'},
'site': {'type': 'integer', 'description': 'Site ID'},
'parent': {'type': 'integer', 'description': 'Parent location ID'},
'description': {'type': 'string', 'description': 'Description'}
},
'required': ['id']
},
'dcim_delete_location': {
@@ -171,7 +190,18 @@ TOOL_DEFINITIONS = {
},
'dcim_update_rack': {
'description': 'Update an existing rack',
'properties': {'id': {'type': 'integer', 'description': 'Rack ID'}},
'properties': {
'id': {'type': 'integer', 'description': 'Rack ID'},
'name': {'type': 'string', 'description': 'New name'},
'site': {'type': 'integer', 'description': 'Site ID'},
'location': {'type': 'integer', 'description': 'Location ID'},
'status': {'type': 'string', 'description': 'Status'},
'role': {'type': 'integer', 'description': 'Role ID'},
'tenant': {'type': 'integer', 'description': 'Tenant ID'},
'u_height': {'type': 'integer', 'description': 'Rack height in U'},
'description': {'type': 'string', 'description': 'Description'},
'comments': {'type': 'string', 'description': 'Comments'}
},
'required': ['id']
},
'dcim_delete_rack': {
@@ -198,7 +228,12 @@ TOOL_DEFINITIONS = {
},
'dcim_update_manufacturer': {
'description': 'Update an existing manufacturer',
'properties': {'id': {'type': 'integer', 'description': 'Manufacturer ID'}},
'properties': {
'id': {'type': 'integer', 'description': 'Manufacturer ID'},
'name': {'type': 'string', 'description': 'New name'},
'slug': {'type': 'string', 'description': 'New slug'},
'description': {'type': 'string', 'description': 'Description'}
},
'required': ['id']
},
'dcim_delete_manufacturer': {
@@ -230,7 +265,16 @@ TOOL_DEFINITIONS = {
},
'dcim_update_device_type': {
'description': 'Update an existing device type',
'properties': {'id': {'type': 'integer', 'description': 'Device type ID'}},
'properties': {
'id': {'type': 'integer', 'description': 'Device type ID'},
'manufacturer': {'type': 'integer', 'description': 'Manufacturer ID'},
'model': {'type': 'string', 'description': 'Model name'},
'slug': {'type': 'string', 'description': 'New slug'},
'u_height': {'type': 'number', 'description': 'Height in rack units'},
'is_full_depth': {'type': 'boolean', 'description': 'Is full depth'},
'description': {'type': 'string', 'description': 'Description'},
'comments': {'type': 'string', 'description': 'Comments'}
},
'required': ['id']
},
'dcim_delete_device_type': {
@@ -259,7 +303,14 @@ TOOL_DEFINITIONS = {
},
'dcim_update_device_role': {
'description': 'Update an existing device role',
'properties': {'id': {'type': 'integer', 'description': 'Device role ID'}},
'properties': {
'id': {'type': 'integer', 'description': 'Device role ID'},
'name': {'type': 'string', 'description': 'New name'},
'slug': {'type': 'string', 'description': 'New slug'},
'color': {'type': 'string', 'description': 'Hex color code'},
'vm_role': {'type': 'boolean', 'description': 'Can be assigned to VMs'},
'description': {'type': 'string', 'description': 'Description'}
},
'required': ['id']
},
'dcim_delete_device_role': {
@@ -290,7 +341,13 @@ TOOL_DEFINITIONS = {
},
'dcim_update_platform': {
'description': 'Update an existing platform',
'properties': {'id': {'type': 'integer', 'description': 'Platform ID'}},
'properties': {
'id': {'type': 'integer', 'description': 'Platform ID'},
'name': {'type': 'string', 'description': 'New name'},
'slug': {'type': 'string', 'description': 'New slug'},
'manufacturer': {'type': 'integer', 'description': 'Manufacturer ID'},
'description': {'type': 'string', 'description': 'Description'}
},
'required': ['id']
},
'dcim_delete_platform': {
@@ -326,7 +383,13 @@ TOOL_DEFINITIONS = {
'status': {'type': 'string', 'description': 'Device status'},
'rack': {'type': 'integer', 'description': 'Rack ID'},
'position': {'type': 'number', 'description': 'Position in rack'},
'serial': {'type': 'string', 'description': 'Serial number'}
'serial': {'type': 'string', 'description': 'Serial number'},
'platform': {'type': 'integer', 'description': 'Platform ID'},
'primary_ip4': {'type': 'integer', 'description': 'Primary IPv4 address ID'},
'primary_ip6': {'type': 'integer', 'description': 'Primary IPv6 address ID'},
'asset_tag': {'type': 'string', 'description': 'Asset tag'},
'description': {'type': 'string', 'description': 'Description'},
'comments': {'type': 'string', 'description': 'Comments'}
},
'required': ['name', 'device_type', 'role', 'site']
},
@@ -335,7 +398,17 @@ TOOL_DEFINITIONS = {
'properties': {
'id': {'type': 'integer', 'description': 'Device ID'},
'name': {'type': 'string', 'description': 'New name'},
'status': {'type': 'string', 'description': 'New status'}
'status': {'type': 'string', 'description': 'New status'},
'platform': {'type': 'integer', 'description': 'Platform ID'},
'primary_ip4': {'type': 'integer', 'description': 'Primary IPv4 address ID'},
'primary_ip6': {'type': 'integer', 'description': 'Primary IPv6 address ID'},
'serial': {'type': 'string', 'description': 'Serial number'},
'asset_tag': {'type': 'string', 'description': 'Asset tag'},
'site': {'type': 'integer', 'description': 'Site ID'},
'rack': {'type': 'integer', 'description': 'Rack ID'},
'position': {'type': 'number', 'description': 'Position in rack'},
'description': {'type': 'string', 'description': 'Description'},
'comments': {'type': 'string', 'description': 'Comments'}
},
'required': ['id']
},
@@ -370,7 +443,18 @@ TOOL_DEFINITIONS = {
},
'dcim_update_interface': {
'description': 'Update an existing interface',
'properties': {'id': {'type': 'integer', 'description': 'Interface ID'}},
'properties': {
'id': {'type': 'integer', 'description': 'Interface ID'},
'name': {'type': 'string', 'description': 'New name'},
'type': {'type': 'string', 'description': 'Interface type'},
'enabled': {'type': 'boolean', 'description': 'Interface enabled'},
'mtu': {'type': 'integer', 'description': 'MTU'},
'mac_address': {'type': 'string', 'description': 'MAC address'},
'description': {'type': 'string', 'description': 'Description'},
'mode': {'type': 'string', 'description': 'VLAN mode'},
'untagged_vlan': {'type': 'integer', 'description': 'Untagged VLAN ID'},
'tagged_vlans': {'type': 'array', 'description': 'Tagged VLAN IDs'}
},
'required': ['id']
},
'dcim_delete_interface': {
@@ -404,7 +488,15 @@ TOOL_DEFINITIONS = {
},
'dcim_update_cable': {
'description': 'Update an existing cable',
'properties': {'id': {'type': 'integer', 'description': 'Cable ID'}},
'properties': {
'id': {'type': 'integer', 'description': 'Cable ID'},
'type': {'type': 'string', 'description': 'Cable type'},
'status': {'type': 'string', 'description': 'Cable status'},
'label': {'type': 'string', 'description': 'Cable label'},
'color': {'type': 'string', 'description': 'Cable color'},
'length': {'type': 'number', 'description': 'Cable length'},
'length_unit': {'type': 'string', 'description': 'Length unit'}
},
'required': ['id']
},
'dcim_delete_cable': {
@@ -492,7 +584,15 @@ TOOL_DEFINITIONS = {
},
'ipam_update_vrf': {
'description': 'Update an existing VRF',
'properties': {'id': {'type': 'integer', 'description': 'VRF ID'}},
'properties': {
'id': {'type': 'integer', 'description': 'VRF ID'},
'name': {'type': 'string', 'description': 'New name'},
'rd': {'type': 'string', 'description': 'Route distinguisher'},
'tenant': {'type': 'integer', 'description': 'Tenant ID'},
'enforce_unique': {'type': 'boolean', 'description': 'Enforce unique IPs'},
'description': {'type': 'string', 'description': 'Description'},
'comments': {'type': 'string', 'description': 'Comments'}
},
'required': ['id']
},
'ipam_delete_vrf': {
@@ -531,7 +631,19 @@ TOOL_DEFINITIONS = {
},
'ipam_update_prefix': {
'description': 'Update an existing prefix',
'properties': {'id': {'type': 'integer', 'description': 'Prefix ID'}},
'properties': {
'id': {'type': 'integer', 'description': 'Prefix ID'},
'prefix': {'type': 'string', 'description': 'Prefix in CIDR notation'},
'status': {'type': 'string', 'description': 'Status'},
'site': {'type': 'integer', 'description': 'Site ID'},
'vrf': {'type': 'integer', 'description': 'VRF ID'},
'vlan': {'type': 'integer', 'description': 'VLAN ID'},
'role': {'type': 'integer', 'description': 'Role ID'},
'tenant': {'type': 'integer', 'description': 'Tenant ID'},
'is_pool': {'type': 'boolean', 'description': 'Is a pool'},
'description': {'type': 'string', 'description': 'Description'},
'comments': {'type': 'string', 'description': 'Comments'}
},
'required': ['id']
},
'ipam_delete_prefix': {
@@ -582,7 +694,18 @@ TOOL_DEFINITIONS = {
},
'ipam_update_ip_address': {
'description': 'Update an existing IP address',
'properties': {'id': {'type': 'integer', 'description': 'IP address ID'}},
'properties': {
'id': {'type': 'integer', 'description': 'IP address ID'},
'address': {'type': 'string', 'description': 'IP address with prefix length'},
'status': {'type': 'string', 'description': 'Status'},
'vrf': {'type': 'integer', 'description': 'VRF ID'},
'tenant': {'type': 'integer', 'description': 'Tenant ID'},
'dns_name': {'type': 'string', 'description': 'DNS name'},
'description': {'type': 'string', 'description': 'Description'},
'comments': {'type': 'string', 'description': 'Comments'},
'assigned_object_type': {'type': 'string', 'description': 'Object type to assign to'},
'assigned_object_id': {'type': 'integer', 'description': 'Object ID to assign to'}
},
'required': ['id']
},
'ipam_delete_ip_address': {
@@ -647,7 +770,18 @@ TOOL_DEFINITIONS = {
},
'ipam_update_vlan': {
'description': 'Update an existing VLAN',
'properties': {'id': {'type': 'integer', 'description': 'VLAN ID'}},
'properties': {
'id': {'type': 'integer', 'description': 'VLAN ID'},
'vid': {'type': 'integer', 'description': 'VLAN ID number'},
'name': {'type': 'string', 'description': 'VLAN name'},
'status': {'type': 'string', 'description': 'Status'},
'site': {'type': 'integer', 'description': 'Site ID'},
'group': {'type': 'integer', 'description': 'VLAN group ID'},
'tenant': {'type': 'integer', 'description': 'Tenant ID'},
'role': {'type': 'integer', 'description': 'Role ID'},
'description': {'type': 'string', 'description': 'Description'},
'comments': {'type': 'string', 'description': 'Comments'}
},
'required': ['id']
},
'ipam_delete_vlan': {
@@ -757,16 +891,17 @@ TOOL_DEFINITIONS = {
'properties': {'id': {'type': 'integer', 'description': 'Provider ID'}},
'required': ['id']
},
'circuits_list_circuit_types': {
# NOTE: circuit_types tools shortened to meet 28-char limit
'circ_list_types': {
'description': 'List all circuit types in NetBox',
'properties': {'name': {'type': 'string', 'description': 'Filter by name'}}
},
'circuits_get_circuit_type': {
'circ_get_type': {
'description': 'Get a specific circuit type by ID',
'properties': {'id': {'type': 'integer', 'description': 'Circuit type ID'}},
'required': ['id']
},
'circuits_create_circuit_type': {
'circ_create_type': {
'description': 'Create a new circuit type',
'properties': {
'name': {'type': 'string', 'description': 'Type name'},
@@ -809,19 +944,20 @@ TOOL_DEFINITIONS = {
'properties': {'id': {'type': 'integer', 'description': 'Circuit ID'}},
'required': ['id']
},
'circuits_list_circuit_terminations': {
# NOTE: circuit_terminations tools shortened to meet 28-char limit
'circ_list_terminations': {
'description': 'List all circuit terminations in NetBox',
'properties': {
'circuit_id': {'type': 'integer', 'description': 'Filter by circuit ID'},
'site_id': {'type': 'integer', 'description': 'Filter by site ID'}
}
},
'circuits_get_circuit_termination': {
'circ_get_termination': {
'description': 'Get a specific circuit termination by ID',
'properties': {'id': {'type': 'integer', 'description': 'Termination ID'}},
'required': ['id']
},
'circuits_create_circuit_termination': {
'circ_create_termination': {
'description': 'Create a new circuit termination',
'properties': {
'circuit': {'type': 'integer', 'description': 'Circuit ID'},
@@ -832,16 +968,18 @@ TOOL_DEFINITIONS = {
},
# ==================== Virtualization Tools ====================
'virtualization_list_cluster_types': {
# NOTE: Tool names shortened from 'virtualization_' to 'virt_' to meet
# 28-char limit (Claude API 64-char limit minus 36-char prefix)
'virt_list_cluster_types': {
'description': 'List all cluster types in NetBox',
'properties': {'name': {'type': 'string', 'description': 'Filter by name'}}
},
'virtualization_get_cluster_type': {
'virt_get_cluster_type': {
'description': 'Get a specific cluster type by ID',
'properties': {'id': {'type': 'integer', 'description': 'Cluster type ID'}},
'required': ['id']
},
'virtualization_create_cluster_type': {
'virt_create_cluster_type': {
'description': 'Create a new cluster type',
'properties': {
'name': {'type': 'string', 'description': 'Type name'},
@@ -849,16 +987,16 @@ TOOL_DEFINITIONS = {
},
'required': ['name', 'slug']
},
'virtualization_list_cluster_groups': {
'virt_list_cluster_groups': {
'description': 'List all cluster groups in NetBox',
'properties': {'name': {'type': 'string', 'description': 'Filter by name'}}
},
'virtualization_get_cluster_group': {
'virt_get_cluster_group': {
'description': 'Get a specific cluster group by ID',
'properties': {'id': {'type': 'integer', 'description': 'Cluster group ID'}},
'required': ['id']
},
'virtualization_create_cluster_group': {
'virt_create_cluster_group': {
'description': 'Create a new cluster group',
'properties': {
'name': {'type': 'string', 'description': 'Group name'},
@@ -866,7 +1004,7 @@ TOOL_DEFINITIONS = {
},
'required': ['name', 'slug']
},
'virtualization_list_clusters': {
'virt_list_clusters': {
'description': 'List all clusters in NetBox',
'properties': {
'name': {'type': 'string', 'description': 'Filter by name'},
@@ -875,12 +1013,12 @@ TOOL_DEFINITIONS = {
'site_id': {'type': 'integer', 'description': 'Filter by site ID'}
}
},
'virtualization_get_cluster': {
'virt_get_cluster': {
'description': 'Get a specific cluster by ID',
'properties': {'id': {'type': 'integer', 'description': 'Cluster ID'}},
'required': ['id']
},
'virtualization_create_cluster': {
'virt_create_cluster': {
'description': 'Create a new cluster',
'properties': {
'name': {'type': 'string', 'description': 'Cluster name'},
@@ -891,17 +1029,27 @@ TOOL_DEFINITIONS = {
},
'required': ['name', 'type']
},
'virtualization_update_cluster': {
'virt_update_cluster': {
'description': 'Update an existing cluster',
'properties': {'id': {'type': 'integer', 'description': 'Cluster ID'}},
'properties': {
'id': {'type': 'integer', 'description': 'Cluster ID'},
'name': {'type': 'string', 'description': 'New name'},
'type': {'type': 'integer', 'description': 'Cluster type ID'},
'group': {'type': 'integer', 'description': 'Cluster group ID'},
'site': {'type': 'integer', 'description': 'Site ID'},
'status': {'type': 'string', 'description': 'Status'},
'tenant': {'type': 'integer', 'description': 'Tenant ID'},
'description': {'type': 'string', 'description': 'Description'},
'comments': {'type': 'string', 'description': 'Comments'}
},
'required': ['id']
},
'virtualization_delete_cluster': {
'virt_delete_cluster': {
'description': 'Delete a cluster',
'properties': {'id': {'type': 'integer', 'description': 'Cluster ID'}},
'required': ['id']
},
'virtualization_list_virtual_machines': {
'virt_list_vms': {
'description': 'List all virtual machines in NetBox',
'properties': {
'name': {'type': 'string', 'description': 'Filter by name'},
@@ -910,12 +1058,12 @@ TOOL_DEFINITIONS = {
'status': {'type': 'string', 'description': 'Filter by status'}
}
},
'virtualization_get_virtual_machine': {
'virt_get_vm': {
'description': 'Get a specific virtual machine by ID',
'properties': {'id': {'type': 'integer', 'description': 'VM ID'}},
'required': ['id']
},
'virtualization_create_virtual_machine': {
'virt_create_vm': {
'description': 'Create a new virtual machine',
'properties': {
'name': {'type': 'string', 'description': 'VM name'},
@@ -928,29 +1076,45 @@ TOOL_DEFINITIONS = {
},
'required': ['name']
},
'virtualization_update_virtual_machine': {
'virt_update_vm': {
'description': 'Update an existing virtual machine',
'properties': {'id': {'type': 'integer', 'description': 'VM ID'}},
'properties': {
'id': {'type': 'integer', 'description': 'VM ID'},
'name': {'type': 'string', 'description': 'New name'},
'status': {'type': 'string', 'description': 'Status'},
'cluster': {'type': 'integer', 'description': 'Cluster ID'},
'site': {'type': 'integer', 'description': 'Site ID'},
'role': {'type': 'integer', 'description': 'Role ID'},
'tenant': {'type': 'integer', 'description': 'Tenant ID'},
'platform': {'type': 'integer', 'description': 'Platform ID'},
'vcpus': {'type': 'number', 'description': 'Number of vCPUs'},
'memory': {'type': 'integer', 'description': 'Memory in MB'},
'disk': {'type': 'integer', 'description': 'Disk in GB'},
'primary_ip4': {'type': 'integer', 'description': 'Primary IPv4 address ID'},
'primary_ip6': {'type': 'integer', 'description': 'Primary IPv6 address ID'},
'description': {'type': 'string', 'description': 'Description'},
'comments': {'type': 'string', 'description': 'Comments'}
},
'required': ['id']
},
'virtualization_delete_virtual_machine': {
'virt_delete_vm': {
'description': 'Delete a virtual machine',
'properties': {'id': {'type': 'integer', 'description': 'VM ID'}},
'required': ['id']
},
'virtualization_list_vm_interfaces': {
'virt_list_vm_ifaces': {
'description': 'List all VM interfaces in NetBox',
'properties': {
'virtual_machine_id': {'type': 'integer', 'description': 'Filter by VM ID'},
'name': {'type': 'string', 'description': 'Filter by name'}
}
},
'virtualization_get_vm_interface': {
'virt_get_vm_iface': {
'description': 'Get a specific VM interface by ID',
'properties': {'id': {'type': 'integer', 'description': 'Interface ID'}},
'required': ['id']
},
'virtualization_create_vm_interface': {
'virt_create_vm_iface': {
'description': 'Create a new VM interface',
'properties': {
'virtual_machine': {'type': 'integer', 'description': 'VM ID'},
@@ -1088,16 +1252,18 @@ TOOL_DEFINITIONS = {
},
# ==================== Wireless Tools ====================
'wireless_list_wireless_lan_groups': {
# NOTE: Tool names shortened from 'wireless_' to 'wlan_' to meet
# 28-char limit (Claude API 64-char limit minus 36-char prefix)
'wlan_list_groups': {
'description': 'List all wireless LAN groups in NetBox',
'properties': {'name': {'type': 'string', 'description': 'Filter by name'}}
},
'wireless_get_wireless_lan_group': {
'wlan_get_group': {
'description': 'Get a specific wireless LAN group by ID',
'properties': {'id': {'type': 'integer', 'description': 'WLAN group ID'}},
'required': ['id']
},
'wireless_create_wireless_lan_group': {
'wlan_create_group': {
'description': 'Create a new wireless LAN group',
'properties': {
'name': {'type': 'string', 'description': 'Group name'},
@@ -1105,7 +1271,7 @@ TOOL_DEFINITIONS = {
},
'required': ['name', 'slug']
},
'wireless_list_wireless_lans': {
'wlan_list_lans': {
'description': 'List all wireless LANs in NetBox',
'properties': {
'ssid': {'type': 'string', 'description': 'Filter by SSID'},
@@ -1113,12 +1279,12 @@ TOOL_DEFINITIONS = {
'status': {'type': 'string', 'description': 'Filter by status'}
}
},
'wireless_get_wireless_lan': {
'wlan_get_lan': {
'description': 'Get a specific wireless LAN by ID',
'properties': {'id': {'type': 'integer', 'description': 'WLAN ID'}},
'required': ['id']
},
'wireless_create_wireless_lan': {
'wlan_create_lan': {
'description': 'Create a new wireless LAN',
'properties': {
'ssid': {'type': 'string', 'description': 'SSID'},
@@ -1128,14 +1294,14 @@ TOOL_DEFINITIONS = {
},
'required': ['ssid']
},
'wireless_list_wireless_links': {
'wlan_list_links': {
'description': 'List all wireless links in NetBox',
'properties': {
'ssid': {'type': 'string', 'description': 'Filter by SSID'},
'status': {'type': 'string', 'description': 'Filter by status'}
}
},
'wireless_get_wireless_link': {
'wlan_get_link': {
'description': 'Get a specific wireless link by ID',
'properties': {'id': {'type': 'integer', 'description': 'Link ID'}},
'required': ['id']
@@ -1241,6 +1407,52 @@ TOOL_DEFINITIONS = {
}
# Map shortened tool names to (category, method_name) for routing.
# This is necessary because tool names were shortened to meet the 28-character
# limit imposed by Claude API's 64-character tool name limit minus the
# 36-character prefix used by Claude Code for MCP tools.
TOOL_NAME_MAP = {
# Virtualization tools (virt_ -> virtualization category)
'virt_list_cluster_types': ('virtualization', 'list_cluster_types'),
'virt_get_cluster_type': ('virtualization', 'get_cluster_type'),
'virt_create_cluster_type': ('virtualization', 'create_cluster_type'),
'virt_list_cluster_groups': ('virtualization', 'list_cluster_groups'),
'virt_get_cluster_group': ('virtualization', 'get_cluster_group'),
'virt_create_cluster_group': ('virtualization', 'create_cluster_group'),
'virt_list_clusters': ('virtualization', 'list_clusters'),
'virt_get_cluster': ('virtualization', 'get_cluster'),
'virt_create_cluster': ('virtualization', 'create_cluster'),
'virt_update_cluster': ('virtualization', 'update_cluster'),
'virt_delete_cluster': ('virtualization', 'delete_cluster'),
'virt_list_vms': ('virtualization', 'list_virtual_machines'),
'virt_get_vm': ('virtualization', 'get_virtual_machine'),
'virt_create_vm': ('virtualization', 'create_virtual_machine'),
'virt_update_vm': ('virtualization', 'update_virtual_machine'),
'virt_delete_vm': ('virtualization', 'delete_virtual_machine'),
'virt_list_vm_ifaces': ('virtualization', 'list_vm_interfaces'),
'virt_get_vm_iface': ('virtualization', 'get_vm_interface'),
'virt_create_vm_iface': ('virtualization', 'create_vm_interface'),
# Circuits tools (circ_ -> circuits category, for shortened names only)
'circ_list_types': ('circuits', 'list_circuit_types'),
'circ_get_type': ('circuits', 'get_circuit_type'),
'circ_create_type': ('circuits', 'create_circuit_type'),
'circ_list_terminations': ('circuits', 'list_circuit_terminations'),
'circ_get_termination': ('circuits', 'get_circuit_termination'),
'circ_create_termination': ('circuits', 'create_circuit_termination'),
# Wireless tools (wlan_ -> wireless category)
'wlan_list_groups': ('wireless', 'list_wireless_lan_groups'),
'wlan_get_group': ('wireless', 'get_wireless_lan_group'),
'wlan_create_group': ('wireless', 'create_wireless_lan_group'),
'wlan_list_lans': ('wireless', 'list_wireless_lans'),
'wlan_get_lan': ('wireless', 'get_wireless_lan'),
'wlan_create_lan': ('wireless', 'create_wireless_lan'),
'wlan_list_links': ('wireless', 'list_wireless_links'),
'wlan_get_link': ('wireless', 'get_wireless_link'),
}
class NetBoxMCPServer:
"""MCP Server for NetBox integration"""
@@ -1314,12 +1526,21 @@ class NetBoxMCPServer:
)]
async def _route_tool(self, name: str, arguments: dict):
"""Route tool call to appropriate handler."""
parts = name.split('_', 1)
if len(parts) != 2:
raise ValueError(f"Invalid tool name format: {name}")
"""Route tool call to appropriate handler.
category, method_name = parts[0], parts[1]
Tool names may be shortened (e.g., 'virt_list_vms' instead of
'virtualization_list_virtual_machines') to meet the 28-character
limit. TOOL_NAME_MAP handles the translation to actual method names.
"""
# Check if this is a mapped short name
if name in TOOL_NAME_MAP:
category, method_name = TOOL_NAME_MAP[name]
else:
# Fall back to original logic for unchanged tools
parts = name.split('_', 1)
if len(parts) != 2:
raise ValueError(f"Invalid tool name format: {name}")
category, method_name = parts[0], parts[1]
# Map category to tool class
tool_map = {

mcp-servers/netbox/run.sh (new executable file, 21 lines)
View File

@@ -0,0 +1,21 @@
#!/bin/bash
# Capture original working directory before any cd operations
# This should be the user's project directory when launched by Claude Code
export CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/netbox/.venv"
LOCAL_VENV="$SCRIPT_DIR/.venv"
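# Prefer the shared cache venv if present; otherwise fall back to the local .venv next to this script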
if [[ -f "$CACHE_VENV/bin/python" ]]; then
PYTHON="$CACHE_VENV/bin/python"
elif [[ -f "$LOCAL_VENV/bin/python" ]]; then
PYTHON="$LOCAL_VENV/bin/python"
else
echo "ERROR: No venv found. Run: ./scripts/setup-venvs.sh" >&2
exit 1
fi
cd "$SCRIPT_DIR"
export PYTHONPATH="$SCRIPT_DIR"
exec "$PYTHON" -m mcp_server.server "$@"

View File

@@ -0,0 +1,5 @@
2026-01-26T11:40:11 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/viz-platform/registry/dmc_2_5.json | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T13:46:31 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/viz-platform/tests/test_chart_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T13:46:32 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/viz-platform/tests/test_theme_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T13:46:34 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/viz-platform/tests/test_theme_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T13:46:35 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/viz-platform/tests/test_theme_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md

View File

@@ -0,0 +1,115 @@
# viz-platform MCP Server
Model Context Protocol (MCP) server for Dash Mantine Components validation and visualization tools.
## Overview
This MCP server provides 21 tools for:
- **DMC Validation**: Version-locked component registry prevents Claude from hallucinating invalid props
- **Chart Creation**: Plotly-based visualization with theme integration
- **Layout Composition**: Dashboard layouts with responsive grids
- **Theme Management**: Design token-based theming system
- **Page Structure**: Multi-page Dash app generation
## Tools
### DMC Tools (3)
| Tool | Description |
|------|-------------|
| `list_components` | List available DMC components by category |
| `get_component_props` | Get valid props, types, and defaults for a component |
| `validate_component` | Validate component definition before use |
### Chart Tools (2)
| Tool | Description |
|------|-------------|
| `chart_create` | Create Plotly chart (line, bar, scatter, pie, histogram, area, heatmap) |
| `chart_configure_interaction` | Configure chart interactions (zoom, pan, hover) |
### Layout Tools (5)
| Tool | Description |
|------|-------------|
| `layout_create` | Create dashboard layout structure |
| `layout_add_filter` | Add filter components to layout |
| `layout_set_grid` | Configure responsive grid settings |
| `layout_get` | Retrieve layout configuration |
| `layout_add_section` | Add sections to layout |
### Theme Tools (6)
| Tool | Description |
|------|-------------|
| `theme_create` | Create new theme with design tokens |
| `theme_extend` | Extend existing theme with overrides |
| `theme_validate` | Validate theme completeness |
| `theme_export_css` | Export theme as CSS custom properties |
| `theme_list` | List available themes |
| `theme_activate` | Set active theme for visualizations |
### Page Tools (5)
| Tool | Description |
|------|-------------|
| `page_create` | Create new page structure |
| `page_add_navbar` | Add navigation bar to page |
| `page_set_auth` | Configure page authentication |
| `page_list` | List available pages |
| `page_get_app_config` | Get full app configuration |
## Configuration
### Environment Variables
| Variable | Required | Description |
|----------|----------|-------------|
| `DMC_VERSION` | No | Dash Mantine Components version (auto-detected if installed) |
| `VIZ_DEFAULT_THEME` | No | Default theme name |
| `CLAUDE_PROJECT_DIR` | No | Project directory for theme storage |
### Theme Storage
Themes can be stored at two levels:
- **User-level**: `~/.config/claude/themes/`
- **Project-level**: `{project}/.viz-platform/themes/`
Project-level themes take precedence.
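A minimal sketch of how that precedence could be resolved (the helper name, filename convention, and lookup order shown are illustrative assumptions, not the server's actual API):
```python
import os
from pathlib import Path
from typing import Optional

def resolve_theme_path(name: str) -> Optional[Path]:
    """Illustrative lookup: project-level themes shadow user-level themes."""
    project_dir = Path(os.environ.get("CLAUDE_PROJECT_DIR", "."))
    candidates = [
        project_dir / ".viz-platform" / "themes" / f"{name}.json",       # project-level (wins)
        Path.home() / ".config" / "claude" / "themes" / f"{name}.json",  # user-level fallback
    ]
    for path in candidates:
        if path.exists():
            return path
    return None
```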
## Component Registry
The server uses a static JSON registry for DMC component validation:
- Pre-generated from DMC source code
- Version-tagged (e.g., `dmc_2_5.json`)
- Prevents hallucination of non-existent props
- Fast, deterministic validation
Registry files are stored in `registry/` directory.
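As a sketch of how such a registry can be consulted (the JSON shape assumed here, a component name keyed to a `props` map, is illustrative rather than the actual file format):
```python
import json
from pathlib import Path

def prop_is_valid(component: str, prop: str, registry_file: Path) -> bool:
    """Return True if `prop` is listed for `component` in the version-tagged registry."""
    registry = json.loads(registry_file.read_text())
    props = registry.get(component, {}).get("props", {})
    return prop in props

# Hypothetical component/prop names:
# prop_is_valid("Button", "variant", Path("registry/dmc_2_5.json"))
```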
## Tests
94 tests, with coverage by module:
- `test_config.py`: 82% coverage
- `test_component_registry.py`: 92% coverage
- `test_dmc_tools.py`: 88% coverage
- `test_chart_tools.py`: 68% coverage
- `test_theme_tools.py`: 99% coverage
Run tests:
```bash
cd mcp-servers/viz-platform
source .venv/bin/activate
pytest tests/ -v
```
## Dependencies
- Python 3.10+
- FastMCP
- plotly
- dash-mantine-components (optional, for version detection)
## Usage
This MCP server is used by the `viz-platform` plugin. See the plugin's commands in `plugins/viz-platform/commands/` for usage.
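For orientation, a call to the chart tool looks roughly like this; the import path and sample data are illustrative assumptions, while the argument shape matches `chart_create` as implemented in this server:
```python
import asyncio
# Import path below is an assumption for illustration; adjust to the actual package layout.
from mcp_server.tools.chart_tools import ChartTools

async def main() -> None:
    tools = ChartTools()
    result = await tools.chart_create(
        chart_type="bar",
        data={"x": ["Q1", "Q2", "Q3"], "y": [10, 25, 17]},       # invented sample data
        options={"title": "Quarterly totals", "color": "blue"},  # optional styling
    )
    print(result["chart_type"], result["figure"]["layout"]["title"]["text"])

asyncio.run(main())
```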

View File

@@ -0,0 +1,7 @@
"""
viz-platform MCP Server package.
Provides Dash Mantine Components validation and visualization tools to Claude Code.
"""
__version__ = "1.0.0"

View File

@@ -0,0 +1,479 @@
"""
Accessibility validation tools for color blindness and WCAG compliance.
Provides tools for validating color palettes against color blindness
simulations and WCAG contrast requirements.
"""
import logging
import math
from typing import Dict, List, Optional, Any, Tuple
logger = logging.getLogger(__name__)
# Color-blind safe palettes
SAFE_PALETTES = {
"categorical": {
"name": "Paul Tol's Qualitative",
"colors": ["#4477AA", "#EE6677", "#228833", "#CCBB44", "#66CCEE", "#AA3377", "#BBBBBB"],
"description": "Distinguishable for all types of color blindness"
},
"ibm": {
"name": "IBM Design",
"colors": ["#648FFF", "#785EF0", "#DC267F", "#FE6100", "#FFB000"],
"description": "IBM's accessible color palette"
},
"okabe_ito": {
"name": "Okabe-Ito",
"colors": ["#E69F00", "#56B4E9", "#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7", "#000000"],
"description": "Optimized for all color vision deficiencies"
},
"tableau_colorblind": {
"name": "Tableau Colorblind 10",
"colors": ["#006BA4", "#FF800E", "#ABABAB", "#595959", "#5F9ED1",
"#C85200", "#898989", "#A2C8EC", "#FFBC79", "#CFCFCF"],
"description": "Industry-standard accessible palette"
}
}
# Simulation matrices for color blindness (LMS color space transformation)
# These approximate how colors appear to people with different types of color blindness
SIMULATION_MATRICES = {
"deuteranopia": {
# Green-blind (most common)
"severity": "common",
"population": "6% males, 0.4% females",
"description": "Difficulty distinguishing red from green (green-blind)",
"matrix": [
[0.625, 0.375, 0.0],
[0.700, 0.300, 0.0],
[0.0, 0.300, 0.700]
]
},
"protanopia": {
# Red-blind
"severity": "common",
"population": "2.5% males, 0.05% females",
"description": "Difficulty distinguishing red from green (red-blind)",
"matrix": [
[0.567, 0.433, 0.0],
[0.558, 0.442, 0.0],
[0.0, 0.242, 0.758]
]
},
"tritanopia": {
# Blue-blind (rare)
"severity": "rare",
"population": "0.01% total",
"description": "Difficulty distinguishing blue from yellow",
"matrix": [
[0.950, 0.050, 0.0],
[0.0, 0.433, 0.567],
[0.0, 0.475, 0.525]
]
}
}
class AccessibilityTools:
"""
Color accessibility validation tools.
Validates colors for WCAG compliance and color blindness accessibility.
"""
def __init__(self, theme_store=None):
"""
Initialize accessibility tools.
Args:
theme_store: Optional ThemeStore for theme color extraction
"""
self.theme_store = theme_store
def _hex_to_rgb(self, hex_color: str) -> Tuple[int, int, int]:
"""Convert hex color to RGB tuple."""
hex_color = hex_color.lstrip('#')
if len(hex_color) == 3:
hex_color = ''.join([c * 2 for c in hex_color])
return tuple(int(hex_color[i:i+2], 16) for i in (0, 2, 4))
def _rgb_to_hex(self, rgb: Tuple[int, int, int]) -> str:
"""Convert RGB tuple to hex color."""
return '#{:02x}{:02x}{:02x}'.format(
max(0, min(255, int(rgb[0]))),
max(0, min(255, int(rgb[1]))),
max(0, min(255, int(rgb[2])))
)
def _get_relative_luminance(self, rgb: Tuple[int, int, int]) -> float:
"""
Calculate relative luminance per WCAG 2.1.
https://www.w3.org/WAI/GL/wiki/Relative_luminance
"""
def channel_luminance(value: int) -> float:
v = value / 255
return v / 12.92 if v <= 0.03928 else ((v + 0.055) / 1.055) ** 2.4
r, g, b = rgb
return (
0.2126 * channel_luminance(r) +
0.7152 * channel_luminance(g) +
0.0722 * channel_luminance(b)
)
def _get_contrast_ratio(self, color1: str, color2: str) -> float:
"""
Calculate contrast ratio between two colors per WCAG 2.1.
Returns ratio between 1:1 and 21:1.
"""
rgb1 = self._hex_to_rgb(color1)
rgb2 = self._hex_to_rgb(color2)
l1 = self._get_relative_luminance(rgb1)
l2 = self._get_relative_luminance(rgb2)
lighter = max(l1, l2)
darker = min(l1, l2)
return (lighter + 0.05) / (darker + 0.05)
def _simulate_color_blindness(
self,
hex_color: str,
deficiency_type: str
) -> str:
"""
Simulate how a color appears with a specific color blindness type.
Uses linear RGB transformation approximation.
"""
if deficiency_type not in SIMULATION_MATRICES:
return hex_color
rgb = self._hex_to_rgb(hex_color)
matrix = SIMULATION_MATRICES[deficiency_type]["matrix"]
# Apply transformation matrix
r = rgb[0] * matrix[0][0] + rgb[1] * matrix[0][1] + rgb[2] * matrix[0][2]
g = rgb[0] * matrix[1][0] + rgb[1] * matrix[1][1] + rgb[2] * matrix[1][2]
b = rgb[0] * matrix[2][0] + rgb[1] * matrix[2][1] + rgb[2] * matrix[2][2]
return self._rgb_to_hex((r, g, b))
def _get_color_distance(self, color1: str, color2: str) -> float:
"""
Approximate perceptual color distance (Euclidean distance in RGB space).
Distances below the module's threshold of 30 are treated as hard to distinguish.
"""
rgb1 = self._hex_to_rgb(color1)
rgb2 = self._hex_to_rgb(color2)
# Simple Euclidean distance in RGB space (approximation)
# For production, should use CIEDE2000
return math.sqrt(
(rgb1[0] - rgb2[0]) ** 2 +
(rgb1[1] - rgb2[1]) ** 2 +
(rgb1[2] - rgb2[2]) ** 2
)
async def accessibility_validate_colors(
self,
colors: List[str],
check_types: Optional[List[str]] = None,
min_contrast_ratio: float = 4.5
) -> Dict[str, Any]:
"""
Validate a list of colors for accessibility.
Args:
colors: List of hex colors to validate
check_types: Color blindness types to check (default: all)
min_contrast_ratio: Minimum WCAG contrast ratio (default: 4.5 for AA)
Returns:
Dict with:
- issues: List of accessibility issues found
- simulations: How colors appear under each deficiency
- recommendations: Suggestions for improvement
- safe_palettes: Color-blind safe palette suggestions
"""
check_types = check_types or list(SIMULATION_MATRICES.keys())
issues = []
simulations = {}
# Normalize colors
normalized_colors = [c.upper() if c.startswith('#') else f'#{c.upper()}' for c in colors]
# Simulate each color blindness type
for deficiency in check_types:
if deficiency not in SIMULATION_MATRICES:
continue
simulated = [self._simulate_color_blindness(c, deficiency) for c in normalized_colors]
simulations[deficiency] = {
"original": normalized_colors,
"simulated": simulated,
"info": SIMULATION_MATRICES[deficiency]
}
# Check if any color pairs become indistinguishable
for i in range(len(normalized_colors)):
for j in range(i + 1, len(normalized_colors)):
distance = self._get_color_distance(simulated[i], simulated[j])
if distance < 30: # Threshold for distinguishability
issues.append({
"type": "distinguishability",
"severity": "warning" if distance > 15 else "error",
"colors": [normalized_colors[i], normalized_colors[j]],
"affected_by": [deficiency],
"simulated_colors": [simulated[i], simulated[j]],
"distance": round(distance, 1),
"message": f"Colors may be hard to distinguish for {deficiency} ({SIMULATION_MATRICES[deficiency]['description']})"
})
# Check contrast ratios against white and black backgrounds
for color in normalized_colors:
white_contrast = self._get_contrast_ratio(color, "#FFFFFF")
black_contrast = self._get_contrast_ratio(color, "#000000")
if white_contrast < min_contrast_ratio and black_contrast < min_contrast_ratio:
issues.append({
"type": "contrast_ratio",
"severity": "error",
"colors": [color],
"white_contrast": round(white_contrast, 2),
"black_contrast": round(black_contrast, 2),
"required": min_contrast_ratio,
"message": f"Insufficient contrast against both white ({white_contrast:.1f}:1) and black ({black_contrast:.1f}:1) backgrounds"
})
# Generate recommendations
recommendations = self._generate_recommendations(issues)
# Calculate overall score
error_count = sum(1 for i in issues if i["severity"] == "error")
warning_count = sum(1 for i in issues if i["severity"] == "warning")
if error_count == 0 and warning_count == 0:
score = "A"
elif error_count == 0 and warning_count <= 2:
score = "B"
elif error_count <= 2:
score = "C"
else:
score = "D"
return {
"colors_checked": normalized_colors,
"overall_score": score,
"issue_count": len(issues),
"issues": issues,
"simulations": simulations,
"recommendations": recommendations,
"safe_palettes": SAFE_PALETTES
}
async def accessibility_validate_theme(
self,
theme_name: str
) -> Dict[str, Any]:
"""
Validate a theme's colors for accessibility.
Args:
theme_name: Theme name to validate
Returns:
Dict with accessibility validation results
"""
if not self.theme_store:
return {
"error": "Theme store not configured",
"theme_name": theme_name
}
theme = self.theme_store.get_theme(theme_name)
if not theme:
available = self.theme_store.list_themes()
return {
"error": f"Theme '{theme_name}' not found. Available: {available}",
"theme_name": theme_name
}
# Extract colors from theme
colors = []
tokens = theme.get("tokens", {})
color_tokens = tokens.get("colors", {})
def extract_colors(obj, prefix=""):
"""Recursively extract color values."""
if isinstance(obj, str) and (obj.startswith('#') or len(obj) == 6):
colors.append(obj if obj.startswith('#') else f'#{obj}')
elif isinstance(obj, dict):
for key, value in obj.items():
extract_colors(value, f"{prefix}.{key}")
elif isinstance(obj, list):
for item in obj:
extract_colors(item, prefix)
extract_colors(color_tokens)
# Validate extracted colors
result = await self.accessibility_validate_colors(colors)
result["theme_name"] = theme_name
# Add theme-specific checks
primary = color_tokens.get("primary")
background = color_tokens.get("background", {})
text = color_tokens.get("text", {})
if primary and background:
bg_color = background.get("base") if isinstance(background, dict) else background
if bg_color:
contrast = self._get_contrast_ratio(primary, bg_color)
if contrast < 4.5:
result["issues"].append({
"type": "primary_contrast",
"severity": "error",
"colors": [primary, bg_color],
"ratio": round(contrast, 2),
"required": 4.5,
"message": f"Primary color has insufficient contrast ({contrast:.1f}:1) against background"
})
return result
async def accessibility_suggest_alternative(
self,
color: str,
deficiency_type: str
) -> Dict[str, Any]:
"""
Suggest accessible alternative colors.
Args:
color: Original hex color
deficiency_type: Type of color blindness to optimize for
Returns:
Dict with alternative color suggestions
"""
rgb = self._hex_to_rgb(color)
suggestions = []
# Suggest shifting hue while maintaining saturation and brightness
# For red-green deficiency, shift toward blue or yellow
if deficiency_type in ["deuteranopia", "protanopia"]:
# Shift toward blue
blue_shift = self._rgb_to_hex((
max(0, rgb[0] - 50),
max(0, rgb[1] - 30),
min(255, rgb[2] + 80)
))
suggestions.append({
"color": blue_shift,
"description": "Blue-shifted alternative",
"preserves": "approximate brightness"
})
# Shift toward yellow/orange
yellow_shift = self._rgb_to_hex((
min(255, rgb[0] + 50),
min(255, rgb[1] + 30),
max(0, rgb[2] - 80)
))
suggestions.append({
"color": yellow_shift,
"description": "Yellow-shifted alternative",
"preserves": "approximate brightness"
})
elif deficiency_type == "tritanopia":
# For blue-yellow deficiency, shift toward red or green
red_shift = self._rgb_to_hex((
min(255, rgb[0] + 60),
max(0, rgb[1] - 20),
max(0, rgb[2] - 40)
))
suggestions.append({
"color": red_shift,
"description": "Red-shifted alternative",
"preserves": "approximate brightness"
})
# Add safe palette suggestions
for palette_name, palette in SAFE_PALETTES.items():
# Find closest color in safe palette
min_distance = float('inf')
closest = None
for safe_color in palette["colors"]:
distance = self._get_color_distance(color, safe_color)
if distance < min_distance:
min_distance = distance
closest = safe_color
if closest:
suggestions.append({
"color": closest,
"description": f"From {palette['name']} palette",
"palette": palette_name
})
return {
"original_color": color,
"deficiency_type": deficiency_type,
"suggestions": suggestions[:5] # Limit to 5 suggestions
}
def _generate_recommendations(self, issues: List[Dict[str, Any]]) -> List[str]:
"""Generate actionable recommendations based on issues."""
recommendations = []
# Check for distinguishability issues
distinguishability_issues = [i for i in issues if i["type"] == "distinguishability"]
if distinguishability_issues:
affected_types = set()
for issue in distinguishability_issues:
affected_types.update(issue.get("affected_by", []))
if "deuteranopia" in affected_types or "protanopia" in affected_types:
recommendations.append(
"Avoid using red and green as the only differentiators - "
"add patterns, shapes, or labels"
)
recommendations.append(
"Consider using a color-blind safe palette like Okabe-Ito or IBM Design"
)
# Check for contrast issues
contrast_issues = [i for i in issues if i["type"] in ["contrast_ratio", "primary_contrast"]]
if contrast_issues:
recommendations.append(
"Increase contrast by darkening colors for light backgrounds "
"or lightening for dark backgrounds"
)
recommendations.append(
"Use WCAG contrast checker tools to verify text readability"
)
# General recommendations
if len(issues) > 0:
recommendations.append(
"Add secondary visual cues (icons, patterns, labels) "
"to not rely solely on color"
)
if not recommendations:
recommendations.append(
"Color palette appears accessible! Consider adding patterns "
"for additional distinguishability"
)
return recommendations
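A minimal usage sketch for the validator above (the import path is an assumption; the colors are arbitrary examples):
```python
import asyncio
# Import path below is an assumption for illustration; adjust to the actual package layout.
from mcp_server.tools.accessibility import AccessibilityTools

async def main() -> None:
    tools = AccessibilityTools()  # no theme store needed for raw color checks
    report = await tools.accessibility_validate_colors(["#EE6677", "#228833", "#4477AA"])
    print(report["overall_score"], report["issue_count"])
    for issue in report["issues"]:
        print(issue["severity"], issue["message"])

asyncio.run(main())
```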

View File

@@ -0,0 +1,533 @@
"""
Chart creation tools using Plotly.
Provides tools for creating data visualizations with automatic theme integration.
"""
import base64
import logging
import os
from typing import Dict, List, Optional, Any, Union
logger = logging.getLogger(__name__)
# Check for kaleido availability
KALEIDO_AVAILABLE = False
try:
import kaleido
KALEIDO_AVAILABLE = True
except ImportError:
logger.debug("kaleido not installed - chart export will be unavailable")
# Default color palette based on Mantine theme
DEFAULT_COLORS = [
"#228be6", # blue
"#40c057", # green
"#fa5252", # red
"#fab005", # yellow
"#7950f2", # violet
"#fd7e14", # orange
"#20c997", # teal
"#f783ac", # pink
"#868e96", # gray
"#15aabf", # cyan
]
class ChartTools:
"""
Plotly-based chart creation tools.
Creates charts that integrate with DMC theming system.
"""
def __init__(self, theme_store=None):
"""
Initialize chart tools.
Args:
theme_store: Optional ThemeStore for theme token resolution
"""
self.theme_store = theme_store
self._active_theme = None
def set_theme(self, theme: Dict[str, Any]) -> None:
"""Set the active theme for chart styling."""
self._active_theme = theme
def _get_color_palette(self) -> List[str]:
"""Get color palette from theme or defaults."""
if self._active_theme and 'colors' in self._active_theme:
colors = self._active_theme['colors']
# Extract primary colors from theme
palette = []
for key in ['primary', 'secondary', 'success', 'warning', 'error']:
if key in colors:
palette.append(colors[key])
if palette:
return palette + DEFAULT_COLORS[len(palette):]
return DEFAULT_COLORS
def _resolve_color(self, color: Optional[str]) -> str:
"""Resolve a color token to actual color value."""
if not color:
return self._get_color_palette()[0]
# Check if it's a theme token
if self._active_theme and 'colors' in self._active_theme:
colors = self._active_theme['colors']
if color in colors:
return colors[color]
# Check if it's already a valid color
if color.startswith('#') or color.startswith('rgb'):
return color
# Map common color names to palette
color_map = {
'blue': DEFAULT_COLORS[0],
'green': DEFAULT_COLORS[1],
'red': DEFAULT_COLORS[2],
'yellow': DEFAULT_COLORS[3],
'violet': DEFAULT_COLORS[4],
'orange': DEFAULT_COLORS[5],
'teal': DEFAULT_COLORS[6],
'pink': DEFAULT_COLORS[7],
'gray': DEFAULT_COLORS[8],
'cyan': DEFAULT_COLORS[9],
}
return color_map.get(color, color)
async def chart_create(
self,
chart_type: str,
data: Dict[str, Any],
options: Optional[Dict[str, Any]] = None
) -> Dict[str, Any]:
"""
Create a Plotly chart.
Args:
chart_type: Type of chart (line, bar, scatter, pie, heatmap, histogram, area)
data: Data specification with x, y values or labels/values for pie
options: Optional chart options (title, color, layout settings)
Returns:
Dict with:
- figure: Plotly figure JSON
- chart_type: Type of chart created
- error: Error message if creation failed
"""
options = options or {}
# Validate chart type
valid_types = ['line', 'bar', 'scatter', 'pie', 'heatmap', 'histogram', 'area']
if chart_type not in valid_types:
return {
"error": f"Invalid chart_type '{chart_type}'. Must be one of: {valid_types}",
"chart_type": chart_type,
"figure": None
}
try:
# Build trace based on chart type
trace = self._build_trace(chart_type, data, options)
if 'error' in trace:
return trace
# Build layout
layout = self._build_layout(options)
# Create figure structure
figure = {
"data": [trace],
"layout": layout
}
return {
"figure": figure,
"chart_type": chart_type,
"trace_count": 1
}
except Exception as e:
logger.error(f"Chart creation failed: {e}")
return {
"error": str(e),
"chart_type": chart_type,
"figure": None
}
def _build_trace(
self,
chart_type: str,
data: Dict[str, Any],
options: Dict[str, Any]
) -> Dict[str, Any]:
"""Build Plotly trace for the chart type."""
color = self._resolve_color(options.get('color'))
palette = self._get_color_palette()
# Common trace properties
trace: Dict[str, Any] = {}
if chart_type == 'line':
trace = {
"type": "scatter",
"mode": "lines+markers",
"x": data.get('x', []),
"y": data.get('y', []),
"line": {"color": color},
"marker": {"color": color}
}
if 'name' in data:
trace['name'] = data['name']
elif chart_type == 'bar':
trace = {
"type": "bar",
"x": data.get('x', []),
"y": data.get('y', []),
"marker": {"color": color}
}
if options.get('horizontal'):
trace['orientation'] = 'h'
trace['x'], trace['y'] = trace['y'], trace['x']
if 'name' in data:
trace['name'] = data['name']
elif chart_type == 'scatter':
trace = {
"type": "scatter",
"mode": "markers",
"x": data.get('x', []),
"y": data.get('y', []),
"marker": {
"color": color,
"size": options.get('marker_size', 10)
}
}
if 'size' in data:
trace['marker']['size'] = data['size']
if 'name' in data:
trace['name'] = data['name']
elif chart_type == 'pie':
labels = data.get('labels', data.get('x', []))
values = data.get('values', data.get('y', []))
trace = {
"type": "pie",
"labels": labels,
"values": values,
"marker": {"colors": palette[:len(labels)]}
}
if options.get('donut'):
trace['hole'] = options.get('hole', 0.4)
elif chart_type == 'heatmap':
trace = {
"type": "heatmap",
"z": data.get('z', data.get('values', [])),
"x": data.get('x', []),
"y": data.get('y', []),
"colorscale": options.get('colorscale', 'Blues')
}
elif chart_type == 'histogram':
trace = {
"type": "histogram",
"x": data.get('x', data.get('values', [])),
"marker": {"color": color}
}
if 'nbins' in options:
trace['nbinsx'] = options['nbins']
elif chart_type == 'area':
trace = {
"type": "scatter",
"mode": "lines",
"x": data.get('x', []),
"y": data.get('y', []),
"fill": "tozeroy",
"line": {"color": color},
"fillcolor": color.replace(')', ', 0.3)').replace('rgb', 'rgba') if color.startswith('rgb') else color + '4D'
}
if 'name' in data:
trace['name'] = data['name']
else:
return {"error": f"Unsupported chart type: {chart_type}"}
return trace
def _build_layout(self, options: Dict[str, Any]) -> Dict[str, Any]:
"""Build Plotly layout from options."""
layout: Dict[str, Any] = {
"autosize": True,
"margin": {"l": 50, "r": 30, "t": 50, "b": 50}
}
# Title
if 'title' in options:
layout['title'] = {
"text": options['title'],
"x": 0.5,
"xanchor": "center"
}
# Axis labels
if 'x_label' in options:
layout['xaxis'] = layout.get('xaxis', {})
layout['xaxis']['title'] = options['x_label']
if 'y_label' in options:
layout['yaxis'] = layout.get('yaxis', {})
layout['yaxis']['title'] = options['y_label']
# Theme-based styling
if self._active_theme:
colors = self._active_theme.get('colors', {})
bg = colors.get('background', {})
if isinstance(bg, dict):
layout['paper_bgcolor'] = bg.get('base', '#ffffff')
layout['plot_bgcolor'] = bg.get('subtle', '#f8f9fa')
elif isinstance(bg, str):
layout['paper_bgcolor'] = bg
layout['plot_bgcolor'] = bg
text_color = colors.get('text', {})
if isinstance(text_color, dict):
layout['font'] = {'color': text_color.get('primary', '#212529')}
elif isinstance(text_color, str):
layout['font'] = {'color': text_color}
# Additional layout options
if 'showlegend' in options:
layout['showlegend'] = options['showlegend']
if 'height' in options:
layout['height'] = options['height']
if 'width' in options:
layout['width'] = options['width']
return layout
async def chart_configure_interaction(
self,
figure: Dict[str, Any],
interactions: Dict[str, Any]
) -> Dict[str, Any]:
"""
Configure interactions for a chart.
Args:
figure: Plotly figure JSON to modify
interactions: Interaction configuration:
- hover_template: Custom hover text template
- click_data: Enable click data capture
- selection: Enable selection (box, lasso)
- zoom: Enable/disable zoom
Returns:
Dict with:
- figure: Updated figure JSON
- interactions_added: List of interactions configured
- error: Error message if configuration failed
"""
if not figure or 'data' not in figure:
return {
"error": "Invalid figure: must contain 'data' key",
"figure": figure,
"interactions_added": []
}
try:
interactions_added = []
# Process each trace
for i, trace in enumerate(figure['data']):
# Hover template
if 'hover_template' in interactions:
trace['hovertemplate'] = interactions['hover_template']
if i == 0:
interactions_added.append('hover_template')
# Custom hover info
if 'hover_info' in interactions:
trace['hoverinfo'] = interactions['hover_info']
if i == 0:
interactions_added.append('hover_info')
# Layout-level interactions
layout = figure.get('layout', {})
# Click data (Dash callback integration)
if interactions.get('click_data', False):
layout['clickmode'] = 'event+select'
interactions_added.append('click_data')
# Selection mode
if 'selection' in interactions:
sel_mode = interactions['selection']
if sel_mode in ['box', 'lasso', 'box+lasso']:
layout['dragmode'] = 'select' if sel_mode == 'box' else sel_mode
interactions_added.append(f'selection:{sel_mode}')
# Zoom configuration
if 'zoom' in interactions:
if not interactions['zoom']:
layout['xaxis'] = layout.get('xaxis', {})
layout['yaxis'] = layout.get('yaxis', {})
layout['xaxis']['fixedrange'] = True
layout['yaxis']['fixedrange'] = True
interactions_added.append('zoom:disabled')
else:
interactions_added.append('zoom:enabled')
# Modebar configuration
if 'modebar' in interactions:
layout['modebar'] = interactions['modebar']
interactions_added.append('modebar')
figure['layout'] = layout
return {
"figure": figure,
"interactions_added": interactions_added
}
except Exception as e:
logger.error(f"Interaction configuration failed: {e}")
return {
"error": str(e),
"figure": figure,
"interactions_added": []
}
async def chart_export(
self,
figure: Dict[str, Any],
format: str = "png",
width: Optional[int] = None,
height: Optional[int] = None,
scale: float = 2.0,
output_path: Optional[str] = None
) -> Dict[str, Any]:
"""
Export a Plotly chart to a static image format.
Args:
figure: Plotly figure JSON to export
format: Output format - png, svg, or pdf
width: Image width in pixels (default: from figure or 1200)
height: Image height in pixels (default: from figure or 800)
scale: Resolution scale factor (default: 2 for retina)
output_path: Optional file path to save the image
Returns:
Dict with:
- image_data: Base64-encoded image (if no output_path)
- file_path: Path to saved file (if output_path provided)
- format: Export format used
- dimensions: {width, height, scale}
- error: Error message if export failed
"""
# Validate format
valid_formats = ['png', 'svg', 'pdf']
format = format.lower()
if format not in valid_formats:
return {
"error": f"Invalid format '{format}'. Must be one of: {valid_formats}",
"format": format,
"image_data": None
}
# Check kaleido availability
if not KALEIDO_AVAILABLE:
return {
"error": "kaleido package not installed. Install with: pip install kaleido",
"format": format,
"image_data": None,
"install_hint": "pip install kaleido"
}
# Validate figure
if not figure or 'data' not in figure:
return {
"error": "Invalid figure: must contain 'data' key",
"format": format,
"image_data": None
}
try:
import plotly.graph_objects as go
import plotly.io as pio
# Create Plotly figure object
fig = go.Figure(figure)
# Determine dimensions
layout = figure.get('layout', {})
export_width = width or layout.get('width') or 1200
export_height = height or layout.get('height') or 800
# Export to bytes
image_bytes = pio.to_image(
fig,
format=format,
width=export_width,
height=export_height,
scale=scale
)
result = {
"format": format,
"dimensions": {
"width": export_width,
"height": export_height,
"scale": scale,
"effective_width": int(export_width * scale),
"effective_height": int(export_height * scale)
}
}
# Save to file or return base64
if output_path:
# Ensure directory exists
output_dir = os.path.dirname(output_path)
if output_dir and not os.path.exists(output_dir):
os.makedirs(output_dir, exist_ok=True)
# Add extension if missing
if not output_path.endswith(f'.{format}'):
output_path = f"{output_path}.{format}"
with open(output_path, 'wb') as f:
f.write(image_bytes)
result["file_path"] = output_path
result["file_size_bytes"] = len(image_bytes)
else:
# Return as base64
result["image_data"] = base64.b64encode(image_bytes).decode('utf-8')
result["data_uri"] = f"data:image/{format};base64,{result['image_data']}"
return result
except ImportError as e:
logger.error(f"Chart export failed - missing dependency: {e}")
return {
"error": f"Missing dependency for export: {e}",
"format": format,
"image_data": None,
"install_hint": "pip install plotly kaleido"
}
except Exception as e:
logger.error(f"Chart export failed: {e}")
return {
"error": str(e),
"format": format,
"image_data": None
}
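
# Hedged usage sketch (editorial addition, not part of the original module):
# builds a line chart with the tools above and, if plotly + kaleido are
# installed, exports it to PNG. The data, labels, and color are illustrative;
# if this module uses package-relative imports, run it via `python -m`.
if __name__ == "__main__":
    import asyncio

    async def _demo() -> None:
        tools = ChartTools()
        chart = await tools.chart_create(
            "line",
            {"x": [1, 2, 3], "y": [10, 20, 15]},
            {"title": "Demo", "x_label": "step", "y_label": "value", "color": "blue"},
        )
        print(chart["chart_type"], chart.get("error"))
        export = await tools.chart_export(chart["figure"], format="png")
        print(export.get("error") or export["dimensions"])

    asyncio.run(_demo())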

View File

@@ -0,0 +1,301 @@
"""
DMC Component Registry for viz-platform.
Provides version-locked component definitions to prevent Claude from
hallucinating invalid props. Uses static JSON registries pre-generated
from DMC source.
"""
import json
import logging
from pathlib import Path
from typing import Dict, List, Optional, Any
logger = logging.getLogger(__name__)
class ComponentRegistry:
"""
Version-locked registry of Dash Mantine Components.
Loads component definitions from static JSON files and provides
lookup methods for validation tools.
"""
def __init__(self, dmc_version: Optional[str] = None):
"""
Initialize the component registry.
Args:
dmc_version: Installed DMC version (e.g., "0.14.7").
If None, will try to detect or use fallback.
"""
self.dmc_version = dmc_version
self.registry_dir = Path(__file__).parent.parent / 'registry'
self.components: Dict[str, Dict[str, Any]] = {}
self.categories: Dict[str, List[str]] = {}
self.loaded_version: Optional[str] = None
def load(self) -> bool:
"""
Load the component registry for the configured DMC version.
Returns:
True if registry loaded successfully, False otherwise
"""
registry_file = self._find_registry_file()
if not registry_file:
logger.warning(
f"No registry found for DMC {self.dmc_version}. "
"Component validation will be limited."
)
return False
try:
with open(registry_file, 'r') as f:
data = json.load(f)
self.loaded_version = data.get('version')
self.components = data.get('components', {})
self.categories = data.get('categories', {})
logger.info(
f"Loaded component registry v{self.loaded_version} "
f"with {len(self.components)} components"
)
return True
except Exception as e:
logger.error(f"Failed to load registry: {e}")
return False
def _find_registry_file(self) -> Optional[Path]:
"""
Find the best matching registry file for the DMC version.
Strategy:
1. Exact major.minor match (e.g., dmc_0_14.json for 0.14.7)
2. Fallback to latest available registry
Returns:
Path to registry file, or None if not found
"""
if not self.registry_dir.exists():
logger.warning(f"Registry directory not found: {self.registry_dir}")
return None
# Try exact major.minor match
if self.dmc_version:
parts = self.dmc_version.split('.')
if len(parts) >= 2:
major_minor = f"{parts[0]}_{parts[1]}"
exact_match = self.registry_dir / f"dmc_{major_minor}.json"
if exact_match.exists():
return exact_match
# Fallback: find latest registry
registry_files = list(self.registry_dir.glob("dmc_*.json"))
if registry_files:
# Sort by version and return latest
registry_files.sort(reverse=True)
fallback = registry_files[0]
if self.dmc_version:
logger.warning(
f"No exact match for DMC {self.dmc_version}, "
f"using fallback: {fallback.name}"
)
return fallback
return None
def get_component(self, name: str) -> Optional[Dict[str, Any]]:
"""
Get component definition by name.
Args:
name: Component name (e.g., "Button", "TextInput")
Returns:
Component definition dict, or None if not found
"""
return self.components.get(name)
def get_component_props(self, name: str) -> Optional[Dict[str, Any]]:
"""
Get props schema for a component.
Args:
name: Component name
Returns:
Props dict with type info, or None if component not found
"""
component = self.get_component(name)
if component:
return component.get('props', {})
return None
def list_components(self, category: Optional[str] = None) -> Dict[str, List[str]]:
"""
List available components, optionally filtered by category.
Args:
category: Optional category filter (e.g., "inputs", "buttons")
Returns:
Dict of category -> component names
"""
if category:
if category in self.categories:
return {category: self.categories[category]}
return {}
return self.categories
def get_categories(self) -> List[str]:
"""
Get list of available component categories.
Returns:
List of category names
"""
return list(self.categories.keys())
def validate_prop(
self,
component: str,
prop_name: str,
prop_value: Any
) -> Dict[str, Any]:
"""
Validate a single prop value against the registry.
Args:
component: Component name
prop_name: Prop name
prop_value: Value to validate
Returns:
Dict with valid: bool, error: Optional[str]
"""
props = self.get_component_props(component)
if props is None:
return {
'valid': False,
'error': f"Unknown component: {component}"
}
if prop_name not in props:
# Check for similar prop names (typo detection)
similar = self._find_similar_props(prop_name, props.keys())
if similar:
return {
'valid': False,
'error': f"Unknown prop '{prop_name}' for {component}. Did you mean '{similar}'?"
}
return {
'valid': False,
'error': f"Unknown prop '{prop_name}' for {component}"
}
prop_schema = props[prop_name]
return self._validate_value(prop_value, prop_schema, prop_name)
def _validate_value(
self,
value: Any,
schema: Dict[str, Any],
prop_name: str
) -> Dict[str, Any]:
"""
Validate a value against a prop schema.
Args:
value: Value to validate
schema: Prop schema from registry
prop_name: Prop name (for error messages)
Returns:
Dict with valid: bool, error: Optional[str]
"""
prop_type = schema.get('type', 'any')
# Any type always valid
if prop_type == 'any':
return {'valid': True}
# Check enum values
if 'enum' in schema:
if value not in schema['enum']:
return {
'valid': False,
'error': f"Prop '{prop_name}' expects one of {schema['enum']}, got '{value}'"
}
return {'valid': True}
# Type checking
type_checks = {
'string': lambda v: isinstance(v, str),
'number': lambda v: isinstance(v, (int, float)),
'integer': lambda v: isinstance(v, int),
'boolean': lambda v: isinstance(v, bool),
'array': lambda v: isinstance(v, list),
'object': lambda v: isinstance(v, dict),
}
checker = type_checks.get(prop_type)
if checker and not checker(value):
return {
'valid': False,
'error': f"Prop '{prop_name}' expects type '{prop_type}', got '{type(value).__name__}'"
}
return {'valid': True}
def _find_similar_props(
self,
prop_name: str,
available_props: List[str]
) -> Optional[str]:
"""
Find a similar prop name for typo suggestions.
Uses simple edit distance heuristic.
Args:
prop_name: The (possibly misspelled) prop name
available_props: List of valid prop names
Returns:
Most similar prop name, or None if no close match
"""
prop_lower = prop_name.lower()
for prop in available_props:
# Exact match after lowercase
if prop.lower() == prop_lower:
return prop
# Common typos: extra/missing letter
if abs(len(prop) - len(prop_name)) == 1:
if prop_lower.startswith(prop.lower()[:3]):
return prop
return None
def is_loaded(self) -> bool:
"""Check if registry is loaded."""
return len(self.components) > 0
def load_registry(dmc_version: Optional[str] = None) -> ComponentRegistry:
"""
Convenience function to load and return a component registry.
Args:
dmc_version: Optional DMC version string
Returns:
Loaded ComponentRegistry instance
"""
registry = ComponentRegistry(dmc_version)
registry.load()
return registry
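
# Hedged usage sketch (editorial addition): loads the version-locked registry
# and runs a single prop validation. "0.14.7", "buttons", "Button", and
# "variant" are illustrative; output depends on which registry JSON is present.
if __name__ == "__main__":
    demo_registry = load_registry("0.14.7")
    print(demo_registry.list_components("buttons"))
    print(demo_registry.validate_prop("Button", "variant", "filled"))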

View File

@@ -0,0 +1,172 @@
"""
Configuration loader for viz-platform MCP Server.
Implements hybrid configuration system:
- System-level: ~/.config/claude/viz-platform.env (theme preferences)
- Project-level: .env (DMC version overrides)
- Auto-detection: DMC package version from installed package
"""
from pathlib import Path
from dotenv import load_dotenv
import os
import logging
from typing import Any, Dict, Optional
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
class VizPlatformConfig:
"""Hybrid configuration loader for viz-platform tools"""
def __init__(self):
self.dmc_version: Optional[str] = None
self.theme_dir_user: Path = Path.home() / '.config' / 'claude' / 'themes'
self.theme_dir_project: Optional[Path] = None
self.default_theme: Optional[str] = None
def load(self) -> Dict[str, Any]:
"""
Load configuration from system and project levels.
Returns:
Dict containing dmc_version, theme directories, and availability flags
"""
# Load system config
system_config = Path.home() / '.config' / 'claude' / 'viz-platform.env'
if system_config.exists():
load_dotenv(system_config)
logger.info(f"Loaded system configuration from {system_config}")
# Find project directory
project_dir = self._find_project_directory()
# Load project config (overrides system)
if project_dir:
project_config = project_dir / '.env'
if project_config.exists():
load_dotenv(project_config, override=True)
logger.info(f"Loaded project configuration from {project_config}")
# Set project theme directory
self.theme_dir_project = project_dir / '.viz-platform' / 'themes'
# Get DMC version (from env or auto-detect)
self.dmc_version = os.getenv('DMC_VERSION') or self._detect_dmc_version()
self.default_theme = os.getenv('VIZ_DEFAULT_THEME')
# Ensure user theme directory exists
self.theme_dir_user.mkdir(parents=True, exist_ok=True)
return {
'dmc_version': self.dmc_version,
'dmc_available': self.dmc_version is not None,
'theme_dir_user': str(self.theme_dir_user),
'theme_dir_project': str(self.theme_dir_project) if self.theme_dir_project else None,
'default_theme': self.default_theme,
'project_dir': str(project_dir) if project_dir else None
}
def _detect_dmc_version(self) -> Optional[str]:
"""
Auto-detect installed Dash Mantine Components version.
Returns:
Version string (e.g., "0.14.7") or None if not installed
"""
try:
from importlib.metadata import version
dmc_version = version('dash-mantine-components')
logger.info(f"Detected DMC version: {dmc_version}")
return dmc_version
except ImportError:
logger.warning("dash-mantine-components not installed - using registry fallback")
return None
except Exception as e:
logger.warning(f"Could not detect DMC version: {e}")
return None
def _find_project_directory(self) -> Optional[Path]:
"""
Find the user's project directory.
Returns:
Path to project directory, or None if not found
"""
# Strategy 1: Check CLAUDE_PROJECT_DIR environment variable
project_dir = os.getenv('CLAUDE_PROJECT_DIR')
if project_dir:
path = Path(project_dir)
if path.exists():
logger.info(f"Found project directory from CLAUDE_PROJECT_DIR: {path}")
return path
# Strategy 2: Check PWD
pwd = os.getenv('PWD')
if pwd:
path = Path(pwd)
if path.exists() and (
(path / '.git').exists() or
(path / '.env').exists() or
(path / '.viz-platform').exists()
):
logger.info(f"Found project directory from PWD: {path}")
return path
# Strategy 3: Check current working directory
cwd = Path.cwd()
if (cwd / '.git').exists() or (cwd / '.env').exists() or (cwd / '.viz-platform').exists():
logger.info(f"Found project directory from cwd: {cwd}")
return cwd
logger.debug("Could not determine project directory")
return None
def load_config() -> Dict[str, Any]:
"""
Convenience function to load configuration.
Returns:
Configuration dictionary
"""
config = VizPlatformConfig()
return config.load()
def check_dmc_version() -> Dict[str, Any]:
"""
Check DMC installation status for SessionStart hook.
Returns:
Dict with installation status and version info
"""
config = load_config()
if not config.get('dmc_available'):
return {
'installed': False,
'message': 'dash-mantine-components not installed. Run: pip install dash-mantine-components'
}
version = config.get('dmc_version', 'unknown')
# Check for registry compatibility
registry_path = Path(__file__).parent.parent / 'registry'
major_minor = '.'.join(version.split('.')[:2]) if version else None
registry_file = registry_path / f'dmc_{major_minor.replace(".", "_")}.json' if major_minor else None
if registry_file and registry_file.exists():
return {
'installed': True,
'version': version,
'registry_available': True,
'message': f'DMC {version} ready with component registry'
}
else:
return {
'installed': True,
'version': version,
'registry_available': False,
'message': f'DMC {version} installed but no matching registry. Validation may be limited.'
}
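
# Hedged usage sketch (editorial addition): exercises the hybrid config loader
# and the SessionStart-style DMC check defined above. Requires python-dotenv;
# the detected paths and versions depend on the local environment.
if __name__ == "__main__":
    print(load_config())
    print(check_dmc_version())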

View File

@@ -0,0 +1,306 @@
"""
DMC (Dash Mantine Components) validation tools.
Provides component constraint layer to prevent Claude from hallucinating invalid props.
"""
import logging
from typing import Dict, List, Optional, Any
from .component_registry import ComponentRegistry
logger = logging.getLogger(__name__)
class DMCTools:
"""
DMC component validation tools.
These tools provide the "constraint layer" that validates component usage
against a version-locked registry of DMC components.
"""
def __init__(self, registry: Optional[ComponentRegistry] = None):
"""
Initialize DMC tools with component registry.
Args:
registry: ComponentRegistry instance. If None, creates one.
"""
self.registry = registry
self._initialized = False
def initialize(self, dmc_version: Optional[str] = None) -> bool:
"""
Initialize the registry if not already provided.
Args:
dmc_version: DMC version to load registry for
Returns:
True if initialized successfully
"""
if self.registry is None:
self.registry = ComponentRegistry(dmc_version)
if not self.registry.is_loaded():
self.registry.load()
self._initialized = self.registry.is_loaded()
return self._initialized
async def list_components(
self,
category: Optional[str] = None
) -> Dict[str, Any]:
"""
List available DMC components, optionally filtered by category.
Args:
category: Optional category filter (e.g., "inputs", "buttons", "navigation")
Returns:
Dict with:
- components: Dict[category -> [component names]]
- categories: List of available categories
- version: Loaded DMC registry version
- total_count: Total number of components
"""
if not self._initialized:
return {
"error": "Registry not initialized",
"components": {},
"categories": [],
"version": None,
"total_count": 0
}
components = self.registry.list_components(category)
all_categories = self.registry.get_categories()
# Count total components
total = sum(len(comps) for comps in components.values())
return {
"components": components,
"categories": all_categories if not category else [category],
"version": self.registry.loaded_version,
"total_count": total
}
async def get_component_props(self, component: str) -> Dict[str, Any]:
"""
Get props schema for a specific component.
Args:
component: Component name (e.g., "Button", "TextInput")
Returns:
Dict with:
- component: Component name
- description: Component description
- props: Dict of prop name -> {type, default, enum, description}
- prop_count: Number of props
- required: List of required prop names
Or error dict if component not found
"""
if not self._initialized:
return {
"error": "Registry not initialized",
"component": component,
"props": {},
"prop_count": 0
}
comp_def = self.registry.get_component(component)
if not comp_def:
# Try to suggest similar component name
similar = self._find_similar_component(component)
error_msg = f"Component '{component}' not found in registry"
if similar:
error_msg += f". Did you mean '{similar}'?"
return {
"error": error_msg,
"component": component,
"props": {},
"prop_count": 0
}
props = comp_def.get('props', {})
# Extract required props
required = [
name for name, schema in props.items()
if schema.get('required', False)
]
return {
"component": component,
"description": comp_def.get('description', ''),
"props": props,
"prop_count": len(props),
"required": required,
"version": self.registry.loaded_version
}
async def validate_component(
self,
component: str,
props: Dict[str, Any]
) -> Dict[str, Any]:
"""
Validate component props against registry.
Args:
component: Component name
props: Props dict to validate
Returns:
Dict with:
- valid: bool - True if all props are valid
- errors: List of error messages
- warnings: List of warning messages
- validated_props: Number of props validated
- component: Component name for reference
"""
if not self._initialized:
return {
"valid": False,
"errors": ["Registry not initialized"],
"warnings": [],
"validated_props": 0,
"component": component
}
errors: List[str] = []
warnings: List[str] = []
# Check if component exists
comp_def = self.registry.get_component(component)
if not comp_def:
similar = self._find_similar_component(component)
error_msg = f"Unknown component: {component}"
if similar:
error_msg += f". Did you mean '{similar}'?"
errors.append(error_msg)
return {
"valid": False,
"errors": errors,
"warnings": warnings,
"validated_props": 0,
"component": component
}
comp_props = comp_def.get('props', {})
# Check for required props
for prop_name, prop_schema in comp_props.items():
if prop_schema.get('required', False) and prop_name not in props:
errors.append(f"Missing required prop: '{prop_name}'")
# Validate each provided prop
for prop_name, prop_value in props.items():
# Skip special props that are always allowed
if prop_name in ('id', 'children', 'className', 'style', 'key'):
continue
result = self.registry.validate_prop(component, prop_name, prop_value)
if not result.get('valid', True):
error = result.get('error', f"Invalid prop: {prop_name}")
# Distinguish between typos/unknown props and type errors
if "Unknown prop" in error:
errors.append(f"{error}")
elif "expects one of" in error:
errors.append(f"{error}")
elif "expects type" in error:
warnings.append(f"⚠️ {error}")
else:
errors.append(f"{error}")
# Check for props that exist but might have common mistakes
self._check_common_mistakes(component, props, warnings)
return {
"valid": len(errors) == 0,
"errors": errors,
"warnings": warnings,
"validated_props": len(props),
"component": component,
"version": self.registry.loaded_version
}
def _find_similar_component(self, component: str) -> Optional[str]:
"""
Find a similar component name for suggestions.
Args:
component: The (possibly misspelled) component name
Returns:
Similar component name, or None if no close match
"""
if not self.registry:
return None
comp_lower = component.lower()
all_components = []
for comps in self.registry.categories.values():
all_components.extend(comps)
for comp in all_components:
# Exact match after lowercase
if comp.lower() == comp_lower:
return comp
# Check if it's a prefix match
if comp.lower().startswith(comp_lower) or comp_lower.startswith(comp.lower()):
return comp
# Check for common typos
if abs(len(comp) - len(component)) <= 2:
if comp_lower[:4] == comp.lower()[:4]:
return comp
return None
def _check_common_mistakes(
self,
component: str,
props: Dict[str, Any],
warnings: List[str]
) -> None:
"""
Check for common prop usage mistakes and add warnings.
Args:
component: Component name
props: Props being used
warnings: List to append warnings to
"""
# Common mistake: using 'onclick' instead of callback pattern
if 'onclick' in [p.lower() for p in props.keys()]:
warnings.append(
"⚠️ Dash uses callback patterns, not inline event handlers. "
"Use 'n_clicks' prop with a callback instead."
)
# Common mistake: using 'class' instead of 'className'
if 'class' in props:
warnings.append(
"⚠️ Use 'className' instead of 'class' for CSS classes."
)
# Button-specific checks
if component == 'Button':
if 'href' in props and 'component' not in props:
warnings.append(
"⚠️ Button with 'href' should also set 'component=\"a\"' for proper anchor behavior."
)
# Input-specific checks
if 'Input' in component:
if 'value' in props and 'onChange' in props:
warnings.append(
"⚠️ Dash uses 'value' prop with callbacks, not 'onChange'. "
"The value updates automatically through Dash callbacks."
)
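
# Hedged usage sketch (editorial addition): validates an illustrative Button,
# demonstrating both an unknown-prop error ('class') and the common-mistake
# warning above. Because this module uses a package-relative import, run it
# with `python -m <package>.dmc_tools` (package name assumed).
if __name__ == "__main__":
    import asyncio
    tools = DMCTools()
    tools.initialize("0.14.7")  # illustrative DMC version
    print(asyncio.run(tools.validate_component(
        "Button", {"variant": "filled", "class": "primary"}
    )))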

View File

@@ -0,0 +1,553 @@
"""
Layout composition tools for dashboard building.
Provides tools for creating structured layouts with grids, filters, and sections.
"""
import logging
from typing import Dict, List, Optional, Any
from uuid import uuid4
logger = logging.getLogger(__name__)
# Standard responsive breakpoints (Mantine/Bootstrap-aligned)
DEFAULT_BREAKPOINTS = {
"xs": {
"min_width": "0px",
"max_width": "575px",
"cols": 1,
"spacing": "xs",
"description": "Extra small devices (phones, portrait)"
},
"sm": {
"min_width": "576px",
"max_width": "767px",
"cols": 2,
"spacing": "sm",
"description": "Small devices (phones, landscape)"
},
"md": {
"min_width": "768px",
"max_width": "991px",
"cols": 6,
"spacing": "md",
"description": "Medium devices (tablets)"
},
"lg": {
"min_width": "992px",
"max_width": "1199px",
"cols": 12,
"spacing": "md",
"description": "Large devices (desktops)"
},
"xl": {
"min_width": "1200px",
"max_width": None,
"cols": 12,
"spacing": "lg",
"description": "Extra large devices (large desktops)"
}
}
# Layout templates
TEMPLATES = {
"dashboard": {
"sections": ["header", "filters", "main", "footer"],
"default_grid": {"cols": 12, "spacing": "md"},
"description": "Standard dashboard with header, filters, main content, and footer"
},
"report": {
"sections": ["title", "summary", "content", "appendix"],
"default_grid": {"cols": 1, "spacing": "lg"},
"description": "Report layout with title, summary, and content sections"
},
"form": {
"sections": ["header", "fields", "actions"],
"default_grid": {"cols": 2, "spacing": "md"},
"description": "Form layout with header, fields, and action buttons"
},
"blank": {
"sections": ["main"],
"default_grid": {"cols": 12, "spacing": "md"},
"description": "Blank canvas for custom layouts"
}
}
# Filter type definitions
FILTER_TYPES = {
"dropdown": {
"component": "Select",
"props": ["label", "data", "placeholder", "clearable", "searchable", "value"]
},
"multi_select": {
"component": "MultiSelect",
"props": ["label", "data", "placeholder", "clearable", "searchable", "value"]
},
"date_range": {
"component": "DateRangePicker",
"props": ["label", "placeholder", "value", "minDate", "maxDate"]
},
"date": {
"component": "DatePicker",
"props": ["label", "placeholder", "value", "minDate", "maxDate"]
},
"search": {
"component": "TextInput",
"props": ["label", "placeholder", "value", "icon"]
},
"checkbox_group": {
"component": "CheckboxGroup",
"props": ["label", "children", "value"]
},
"radio_group": {
"component": "RadioGroup",
"props": ["label", "children", "value"]
},
"slider": {
"component": "Slider",
"props": ["label", "min", "max", "step", "value", "marks"]
},
"range_slider": {
"component": "RangeSlider",
"props": ["label", "min", "max", "step", "value", "marks"]
}
}
class LayoutTools:
"""
Dashboard layout composition tools.
Creates layouts that map to DMC Grid and AppShell components.
"""
def __init__(self):
"""Initialize layout tools."""
self._layouts: Dict[str, Dict[str, Any]] = {}
async def layout_create(
self,
name: str,
template: Optional[str] = None
) -> Dict[str, Any]:
"""
Create a new layout container.
Args:
name: Unique name for the layout
template: Optional template (dashboard, report, form, blank)
Returns:
Dict with:
- layout_ref: Reference to use in other tools
- template: Template used
- sections: Available sections
- grid: Default grid configuration
"""
# Validate template
template = template or "blank"
if template not in TEMPLATES:
return {
"error": f"Invalid template '{template}'. Must be one of: {list(TEMPLATES.keys())}",
"layout_ref": None
}
# Check for name collision
if name in self._layouts:
return {
"error": f"Layout '{name}' already exists. Use a different name or modify existing.",
"layout_ref": name
}
template_config = TEMPLATES[template]
# Create layout structure
layout = {
"id": str(uuid4()),
"name": name,
"template": template,
"sections": {section: {"items": []} for section in template_config["sections"]},
"grid": template_config["default_grid"].copy(),
"filters": [],
"metadata": {
"description": template_config["description"]
}
}
self._layouts[name] = layout
return {
"layout_ref": name,
"template": template,
"sections": template_config["sections"],
"grid": layout["grid"],
"description": template_config["description"]
}
async def layout_add_filter(
self,
layout_ref: str,
filter_type: str,
options: Dict[str, Any]
) -> Dict[str, Any]:
"""
Add a filter control to a layout.
Args:
layout_ref: Layout name to add filter to
filter_type: Type of filter (dropdown, date_range, search, checkbox_group, etc.)
options: Filter options (label, data for dropdown, placeholder, position)
Returns:
Dict with:
- filter_id: Unique ID for the filter
- component: DMC component that will be used
- props: Props that were set
- position: Where filter was placed
"""
# Validate layout exists
if layout_ref not in self._layouts:
return {
"error": f"Layout '{layout_ref}' not found. Create it first with layout_create.",
"filter_id": None
}
# Validate filter type
if filter_type not in FILTER_TYPES:
return {
"error": f"Invalid filter_type '{filter_type}'. Must be one of: {list(FILTER_TYPES.keys())}",
"filter_id": None
}
filter_config = FILTER_TYPES[filter_type]
layout = self._layouts[layout_ref]
# Generate filter ID
filter_id = f"filter_{filter_type}_{len(layout['filters'])}"
# Extract relevant props
props = {"id": filter_id}
for prop in filter_config["props"]:
if prop in options:
props[prop] = options[prop]
# Determine position
position = options.get("position", "filters")
if position not in layout["sections"]:
# Default to first available section
position = list(layout["sections"].keys())[0]
# Create filter definition
filter_def = {
"id": filter_id,
"type": filter_type,
"component": filter_config["component"],
"props": props,
"position": position
}
layout["filters"].append(filter_def)
layout["sections"][position]["items"].append({
"type": "filter",
"ref": filter_id
})
return {
"filter_id": filter_id,
"component": filter_config["component"],
"props": props,
"position": position,
"layout_ref": layout_ref
}
async def layout_set_grid(
self,
layout_ref: str,
grid: Dict[str, Any]
) -> Dict[str, Any]:
"""
Configure the grid system for a layout.
Args:
layout_ref: Layout name to configure
grid: Grid configuration:
- cols: Number of columns (default 12)
- spacing: Gap between items (xs, sm, md, lg, xl)
- breakpoints: Responsive breakpoints {xs: cols, sm: cols, ...}
- gutter: Gutter size
Returns:
Dict with:
- grid: Updated grid configuration
- layout_ref: Layout reference
"""
# Validate layout exists
if layout_ref not in self._layouts:
return {
"error": f"Layout '{layout_ref}' not found. Create it first with layout_create.",
"grid": None
}
layout = self._layouts[layout_ref]
# Validate spacing if provided
valid_spacing = ["xs", "sm", "md", "lg", "xl"]
if "spacing" in grid and grid["spacing"] not in valid_spacing:
return {
"error": f"Invalid spacing '{grid['spacing']}'. Must be one of: {valid_spacing}",
"grid": layout["grid"]
}
# Validate cols
if "cols" in grid:
cols = grid["cols"]
if not isinstance(cols, int) or cols < 1 or cols > 24:
return {
"error": f"Invalid cols '{cols}'. Must be integer between 1 and 24.",
"grid": layout["grid"]
}
# Update grid configuration
layout["grid"].update(grid)
# Process breakpoints if provided
if "breakpoints" in grid:
bp = grid["breakpoints"]
layout["grid"]["breakpoints"] = bp
return {
"grid": layout["grid"],
"layout_ref": layout_ref
}
async def layout_get(self, layout_ref: str) -> Dict[str, Any]:
"""
Get a layout's full configuration.
Args:
layout_ref: Layout name to retrieve
Returns:
Full layout configuration or error
"""
if layout_ref not in self._layouts:
return {
"error": f"Layout '{layout_ref}' not found.",
"layout": None
}
layout = self._layouts[layout_ref]
return {
"layout": layout,
"filter_count": len(layout["filters"]),
"sections": list(layout["sections"].keys())
}
async def layout_add_section(
self,
layout_ref: str,
section_name: str,
position: Optional[int] = None
) -> Dict[str, Any]:
"""
Add a custom section to a layout.
Args:
layout_ref: Layout name
section_name: Name for the new section
position: Optional position index (appends if not specified)
Returns:
Dict with sections list and the new section name
"""
if layout_ref not in self._layouts:
return {
"error": f"Layout '{layout_ref}' not found.",
"sections": []
}
layout = self._layouts[layout_ref]
if section_name in layout["sections"]:
return {
"error": f"Section '{section_name}' already exists.",
"sections": list(layout["sections"].keys())
}
# Add new section
layout["sections"][section_name] = {"items": []}
return {
"section_name": section_name,
"sections": list(layout["sections"].keys()),
"layout_ref": layout_ref
}
def get_available_templates(self) -> Dict[str, Any]:
"""Get list of available layout templates."""
return {
name: {
"sections": config["sections"],
"description": config["description"]
}
for name, config in TEMPLATES.items()
}
def get_available_filter_types(self) -> Dict[str, Any]:
"""Get list of available filter types."""
return {
name: {
"component": config["component"],
"props": config["props"]
}
for name, config in FILTER_TYPES.items()
}
async def layout_set_breakpoints(
self,
layout_ref: str,
breakpoints: Dict[str, Dict[str, Any]],
mobile_first: bool = True
) -> Dict[str, Any]:
"""
Configure responsive breakpoints for a layout.
Args:
layout_ref: Layout name to configure
breakpoints: Breakpoint configuration dict:
{
"xs": {"cols": 1, "spacing": "xs"},
"sm": {"cols": 2, "spacing": "sm"},
"md": {"cols": 6, "spacing": "md"},
"lg": {"cols": 12, "spacing": "md"},
"xl": {"cols": 12, "spacing": "lg"}
}
mobile_first: If True, use min-width media queries (default)
Returns:
Dict with:
- breakpoints: Complete breakpoint configuration
- css_media_queries: Generated CSS media queries
- mobile_first: Whether mobile-first approach is used
"""
# Validate layout exists
if layout_ref not in self._layouts:
return {
"error": f"Layout '{layout_ref}' not found. Create it first with layout_create.",
"breakpoints": None
}
layout = self._layouts[layout_ref]
# Validate breakpoint names
valid_breakpoints = ["xs", "sm", "md", "lg", "xl"]
for bp_name in breakpoints.keys():
if bp_name not in valid_breakpoints:
return {
"error": f"Invalid breakpoint '{bp_name}'. Must be one of: {valid_breakpoints}",
"breakpoints": layout.get("breakpoints")
}
# Merge with defaults
merged_breakpoints = {}
for bp_name in valid_breakpoints:
default = DEFAULT_BREAKPOINTS[bp_name].copy()
if bp_name in breakpoints:
default.update(breakpoints[bp_name])
merged_breakpoints[bp_name] = default
# Validate spacing values
valid_spacing = ["xs", "sm", "md", "lg", "xl"]
for bp_name, bp_config in merged_breakpoints.items():
if "spacing" in bp_config and bp_config["spacing"] not in valid_spacing:
return {
"error": f"Invalid spacing '{bp_config['spacing']}' for breakpoint '{bp_name}'. Must be one of: {valid_spacing}",
"breakpoints": layout.get("breakpoints")
}
# Validate column counts
for bp_name, bp_config in merged_breakpoints.items():
if "cols" in bp_config:
cols = bp_config["cols"]
if not isinstance(cols, int) or cols < 1 or cols > 24:
return {
"error": f"Invalid cols '{cols}' for breakpoint '{bp_name}'. Must be integer between 1 and 24.",
"breakpoints": layout.get("breakpoints")
}
# Generate CSS media queries
css_queries = self._generate_media_queries(merged_breakpoints, mobile_first)
# Store in layout
layout["breakpoints"] = merged_breakpoints
layout["mobile_first"] = mobile_first
layout["responsive_css"] = css_queries
return {
"layout_ref": layout_ref,
"breakpoints": merged_breakpoints,
"mobile_first": mobile_first,
"css_media_queries": css_queries
}
def _generate_media_queries(
self,
breakpoints: Dict[str, Dict[str, Any]],
mobile_first: bool
) -> List[str]:
"""Generate CSS media queries for breakpoints."""
queries = []
bp_order = ["xs", "sm", "md", "lg", "xl"]
if mobile_first:
# Use min-width queries (mobile-first)
for bp_name in bp_order[1:]: # Skip xs (base styles)
bp = breakpoints[bp_name]
min_width = bp.get("min_width", DEFAULT_BREAKPOINTS[bp_name]["min_width"])
if min_width and min_width != "0px":
queries.append(f"@media (min-width: {min_width}) {{ /* {bp_name} styles */ }}")
else:
# Use max-width queries (desktop-first)
for bp_name in reversed(bp_order[:-1]): # Skip xl (base styles)
bp = breakpoints[bp_name]
max_width = bp.get("max_width", DEFAULT_BREAKPOINTS[bp_name]["max_width"])
if max_width:
queries.append(f"@media (max-width: {max_width}) {{ /* {bp_name} styles */ }}")
return queries
async def layout_get_breakpoints(self, layout_ref: str) -> Dict[str, Any]:
"""
Get the breakpoint configuration for a layout.
Args:
layout_ref: Layout name
Returns:
Dict with breakpoint configuration
"""
if layout_ref not in self._layouts:
return {
"error": f"Layout '{layout_ref}' not found.",
"breakpoints": None
}
layout = self._layouts[layout_ref]
return {
"layout_ref": layout_ref,
"breakpoints": layout.get("breakpoints", DEFAULT_BREAKPOINTS.copy()),
"mobile_first": layout.get("mobile_first", True),
"css_media_queries": layout.get("responsive_css", [])
}
def get_default_breakpoints(self) -> Dict[str, Any]:
"""Get the default breakpoint configuration."""
return {
"breakpoints": DEFAULT_BREAKPOINTS.copy(),
"description": "Standard responsive breakpoints aligned with Mantine/Bootstrap",
"mobile_first": True
}
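
# Hedged usage sketch (editorial addition): composes a dashboard layout with a
# dropdown filter and mobile-first breakpoints. Layout name and filter data
# are illustrative.
if __name__ == "__main__":
    import asyncio

    async def _demo() -> None:
        tools = LayoutTools()
        await tools.layout_create("sales", template="dashboard")
        await tools.layout_add_filter(
            "sales", "dropdown",
            {"label": "Region", "data": ["EMEA", "APAC"], "position": "filters"},
        )
        result = await tools.layout_set_breakpoints("sales", {"md": {"cols": 8}})
        print(result["css_media_queries"])

    asyncio.run(_demo())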

View File

@@ -0,0 +1,366 @@
"""
Multi-page app tools for viz-platform.
Provides tools for building complete Dash applications with routing and navigation.
"""
import logging
from typing import Dict, List, Optional, Any
from uuid import uuid4
logger = logging.getLogger(__name__)
# Navigation position options
NAV_POSITIONS = ["top", "left", "right"]
# Auth types supported
AUTH_TYPES = ["none", "basic", "oauth", "custom"]
class PageTools:
"""
Multi-page Dash application tools.
Creates page definitions, navigation, and auth configuration.
"""
def __init__(self):
"""Initialize page tools."""
self._pages: Dict[str, Dict[str, Any]] = {}
self._navbars: Dict[str, Dict[str, Any]] = {}
self._app_config: Dict[str, Any] = {
"title": "Dash App",
"suppress_callback_exceptions": True
}
async def page_create(
self,
name: str,
path: str,
layout_ref: Optional[str] = None,
title: Optional[str] = None
) -> Dict[str, Any]:
"""
Create a new page definition.
Args:
name: Unique page name (used as identifier)
path: URL path for the page (e.g., "/", "/settings")
layout_ref: Optional layout reference to use for the page
title: Optional page title (defaults to name)
Returns:
Dict with:
- page_ref: Reference to use in other tools
- path: URL path
- registered: Whether page was registered
"""
# Validate path format
if not path.startswith('/'):
return {
"error": f"Path must start with '/'. Got: {path}",
"page_ref": None
}
# Check for name collision
if name in self._pages:
return {
"error": f"Page '{name}' already exists. Use a different name.",
"page_ref": name
}
# Check for path collision
for page_name, page_data in self._pages.items():
if page_data['path'] == path:
return {
"error": f"Path '{path}' already used by page '{page_name}'.",
"page_ref": None
}
# Create page definition
page = {
"id": str(uuid4()),
"name": name,
"path": path,
"title": title or name,
"layout_ref": layout_ref,
"auth": None,
"metadata": {}
}
self._pages[name] = page
return {
"page_ref": name,
"path": path,
"title": page["title"],
"layout_ref": layout_ref,
"registered": True
}
async def page_add_navbar(
self,
pages: List[str],
options: Optional[Dict[str, Any]] = None
) -> Dict[str, Any]:
"""
Generate a navigation component linking pages.
Args:
pages: List of page names to include in navigation
options: Navigation options:
- position: "top", "left", or "right"
- style: Style variant
- brand: Brand/logo text or config
- collapsible: Whether to collapse on mobile
Returns:
Dict with:
- navbar_id: Navigation ID
- pages: List of page links generated
- component: DMC component structure
"""
options = options or {}
# Validate pages exist
missing_pages = [p for p in pages if p not in self._pages]
if missing_pages:
return {
"error": f"Pages not found: {missing_pages}. Create them first.",
"navbar_id": None
}
# Validate position
position = options.get("position", "top")
if position not in NAV_POSITIONS:
return {
"error": f"Invalid position '{position}'. Must be one of: {NAV_POSITIONS}",
"navbar_id": None
}
# Generate navbar ID
navbar_id = f"navbar_{len(self._navbars)}"
# Build page links
page_links = []
for page_name in pages:
page = self._pages[page_name]
page_links.append({
"label": page["title"],
"href": page["path"],
"page_ref": page_name
})
# Build DMC component structure
if position == "top":
component = self._build_top_navbar(page_links, options)
else:
component = self._build_side_navbar(page_links, options, position)
# Store navbar config
self._navbars[navbar_id] = {
"id": navbar_id,
"position": position,
"pages": pages,
"options": options,
"component": component
}
return {
"navbar_id": navbar_id,
"position": position,
"pages": page_links,
"component": component
}
async def page_set_auth(
self,
page_ref: str,
auth_config: Dict[str, Any]
) -> Dict[str, Any]:
"""
Configure authentication for a page.
Args:
page_ref: Page name to configure
auth_config: Authentication configuration:
- type: "none", "basic", "oauth", "custom"
- required: Whether auth is required (default True)
- roles: List of required roles (optional)
- redirect: Redirect path for unauthenticated users
Returns:
Dict with:
- page_ref: Page reference
- auth_type: Type of auth configured
- protected: Whether page is protected
"""
# Validate page exists
if page_ref not in self._pages:
available = list(self._pages.keys())
return {
"error": f"Page '{page_ref}' not found. Available: {available}",
"page_ref": page_ref
}
# Validate auth type
auth_type = auth_config.get("type", "basic")
if auth_type not in AUTH_TYPES:
return {
"error": f"Invalid auth type '{auth_type}'. Must be one of: {AUTH_TYPES}",
"page_ref": page_ref
}
# Build auth config
auth = {
"type": auth_type,
"required": auth_config.get("required", True),
"roles": auth_config.get("roles", []),
"redirect": auth_config.get("redirect", "/login")
}
# Handle OAuth-specific config
if auth_type == "oauth":
auth["provider"] = auth_config.get("provider", "generic")
auth["scopes"] = auth_config.get("scopes", [])
# Update page
self._pages[page_ref]["auth"] = auth
return {
"page_ref": page_ref,
"auth_type": auth_type,
"protected": auth["required"],
"roles": auth["roles"],
"redirect": auth["redirect"]
}
async def page_list(self) -> Dict[str, Any]:
"""
List all registered pages.
Returns:
Dict with pages and their configurations
"""
pages_info = {}
for name, page in self._pages.items():
pages_info[name] = {
"path": page["path"],
"title": page["title"],
"layout_ref": page["layout_ref"],
"protected": page["auth"] is not None and page["auth"].get("required", False)
}
return {
"pages": pages_info,
"count": len(pages_info),
"navbars": list(self._navbars.keys())
}
async def page_get_app_config(self) -> Dict[str, Any]:
"""
Get the complete app configuration for Dash.
Returns:
Dict with app config including pages, navbars, and settings
"""
# Build pages config
pages_config = []
for name, page in self._pages.items():
pages_config.append({
"name": name,
"path": page["path"],
"title": page["title"],
"layout_ref": page["layout_ref"]
})
# Build routing config
routes = {page["path"]: name for name, page in self._pages.items()}
return {
"app": self._app_config,
"pages": pages_config,
"routes": routes,
"navbars": list(self._navbars.values()),
"page_count": len(self._pages)
}
def _build_top_navbar(
self,
page_links: List[Dict[str, str]],
options: Dict[str, Any]
) -> Dict[str, Any]:
"""Build a top navigation bar component."""
brand = options.get("brand", "App")
# Build nav links
nav_items = []
for link in page_links:
nav_items.append({
"component": "NavLink",
"props": {
"label": link["label"],
"href": link["href"]
}
})
return {
"component": "AppShell.Header",
"children": [
{
"component": "Group",
"props": {"justify": "space-between", "h": "100%", "px": "md"},
"children": [
{
"component": "Text",
"props": {"size": "lg", "fw": 700},
"children": brand
},
{
"component": "Group",
"props": {"gap": "sm"},
"children": nav_items
}
]
}
]
}
def _build_side_navbar(
self,
page_links: List[Dict[str, str]],
options: Dict[str, Any],
position: str
) -> Dict[str, Any]:
"""Build a side navigation bar component."""
brand = options.get("brand", "App")
# Build nav links
nav_items = []
for link in page_links:
nav_items.append({
"component": "NavLink",
"props": {
"label": link["label"],
"href": link["href"]
}
})
navbar_component = "AppShell.Navbar" if position == "left" else "AppShell.Aside"
return {
"component": navbar_component,
"props": {"p": "md"},
"children": [
{
"component": "Text",
"props": {"size": "lg", "fw": 700, "mb": "md"},
"children": brand
},
{
"component": "Stack",
"props": {"gap": "xs"},
"children": nav_items
}
]
}
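
# Hedged usage sketch (editorial addition): registers two pages, protects one
# with basic auth, and builds a top navbar. Page names and paths are
# illustrative.
if __name__ == "__main__":
    import asyncio

    async def _demo() -> None:
        tools = PageTools()
        await tools.page_create("home", "/", title="Home")
        await tools.page_create("settings", "/settings", title="Settings")
        await tools.page_set_auth("settings", {"type": "basic", "roles": ["admin"]})
        navbar = await tools.page_add_navbar(["home", "settings"], {"position": "top", "brand": "Demo"})
        print(navbar["navbar_id"])
        print(await tools.page_list())

    asyncio.run(_demo())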

View File

@@ -0,0 +1,928 @@
"""
MCP Server entry point for viz-platform integration.
Provides Dash Mantine Components validation, charting, layout, theming, and page tools
to Claude Code via JSON-RPC 2.0 over stdio.
"""
import asyncio
import logging
import json
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent
from .config import VizPlatformConfig
from .dmc_tools import DMCTools
from .chart_tools import ChartTools
from .layout_tools import LayoutTools
from .theme_tools import ThemeTools
from .page_tools import PageTools
from .accessibility_tools import AccessibilityTools
# Suppress noisy MCP validation warnings on stderr
logging.basicConfig(level=logging.INFO)
logging.getLogger("root").setLevel(logging.ERROR)
logging.getLogger("mcp").setLevel(logging.ERROR)
logger = logging.getLogger(__name__)
class VizPlatformMCPServer:
"""MCP Server for visualization platform integration"""
def __init__(self):
self.server = Server("viz-platform-mcp")
self.config = None
self.dmc_tools = DMCTools()
self.chart_tools = ChartTools()
self.layout_tools = LayoutTools()
self.theme_tools = ThemeTools()
self.page_tools = PageTools()
self.accessibility_tools = AccessibilityTools(theme_store=self.theme_tools.store)
async def initialize(self):
"""Initialize server and load configuration."""
try:
config_loader = VizPlatformConfig()
self.config = config_loader.load()
# Initialize DMC tools with detected version
dmc_version = self.config.get('dmc_version')
self.dmc_tools.initialize(dmc_version)
# Log available capabilities
caps = []
if self.config.get('dmc_available'):
caps.append(f"DMC {dmc_version}")
if self.dmc_tools._initialized:
caps.append(f"Registry loaded ({self.dmc_tools.registry.loaded_version})")
else:
caps.append("DMC (not installed)")
logger.info(f"viz-platform MCP Server initialized with: {', '.join(caps)}")
except Exception as e:
logger.error(f"Failed to initialize: {e}")
raise
def setup_tools(self):
"""Register all available tools with the MCP server"""
@self.server.list_tools()
async def list_tools() -> list[Tool]:
"""Return list of available tools"""
tools = []
# DMC validation tools (Issue #172)
tools.append(Tool(
name="list_components",
description=(
"List available Dash Mantine Components. "
"Returns components grouped by category with version info. "
"Use this to discover what components are available before building UI."
),
inputSchema={
"type": "object",
"properties": {
"category": {
"type": "string",
"description": (
"Optional category filter. Available categories: "
"buttons, inputs, navigation, feedback, overlays, "
"typography, layout, data_display, charts, dates"
)
}
},
"required": []
}
))
tools.append(Tool(
name="get_component_props",
description=(
"Get the props schema for a specific DMC component. "
"Returns all available props with types, defaults, and allowed values. "
"ALWAYS use this before creating a component to ensure valid props."
),
inputSchema={
"type": "object",
"properties": {
"component": {
"type": "string",
"description": "Component name (e.g., 'Button', 'TextInput', 'Select')"
}
},
"required": ["component"]
}
))
tools.append(Tool(
name="validate_component",
description=(
"Validate component props before use. "
"Checks for invalid props, type mismatches, and common mistakes. "
"Returns errors and warnings with suggestions for fixes."
),
inputSchema={
"type": "object",
"properties": {
"component": {
"type": "string",
"description": "Component name to validate"
},
"props": {
"type": "object",
"description": "Props object to validate"
}
},
"required": ["component", "props"]
}
))
# Chart tools (Issue #173)
tools.append(Tool(
name="chart_create",
description=(
"Create a Plotly chart for data visualization. "
"Supports line, bar, scatter, pie, heatmap, histogram, and area charts. "
"Automatically applies theme colors when a theme is active."
),
inputSchema={
"type": "object",
"properties": {
"chart_type": {
"type": "string",
"enum": ["line", "bar", "scatter", "pie", "heatmap", "histogram", "area"],
"description": "Type of chart to create"
},
"data": {
"type": "object",
"description": (
"Data for the chart. For most charts: {x: [], y: []}. "
"For pie: {labels: [], values: []}. "
"For heatmap: {x: [], y: [], z: [[]]}"
)
},
"options": {
"type": "object",
"description": (
"Optional settings: title, x_label, y_label, color, "
"showlegend, height, width, horizontal (for bar)"
)
}
},
"required": ["chart_type", "data"]
}
))
tools.append(Tool(
name="chart_configure_interaction",
description=(
"Configure interactions on an existing chart. "
"Add hover templates, enable click data capture, selection modes, "
"and zoom behavior for Dash callback integration."
),
inputSchema={
"type": "object",
"properties": {
"figure": {
"type": "object",
"description": "Plotly figure JSON to modify"
},
"interactions": {
"type": "object",
"description": (
"Interaction config: hover_template (string), "
"click_data (bool), selection ('box'|'lasso'), zoom (bool)"
)
}
},
"required": ["figure", "interactions"]
}
))
# Chart export tool (Issue #247)
tools.append(Tool(
name="chart_export",
description=(
"Export a Plotly chart to static image format (PNG, SVG, PDF). "
"Requires kaleido package. Returns base64 image data or saves to file."
),
inputSchema={
"type": "object",
"properties": {
"figure": {
"type": "object",
"description": "Plotly figure JSON to export"
},
"format": {
"type": "string",
"enum": ["png", "svg", "pdf"],
"description": "Output format (default: png)"
},
"width": {
"type": "integer",
"description": "Image width in pixels (default: 1200)"
},
"height": {
"type": "integer",
"description": "Image height in pixels (default: 800)"
},
"scale": {
"type": "number",
"description": "Resolution scale factor (default: 2 for retina)"
},
"output_path": {
"type": "string",
"description": "Optional file path to save image"
}
},
"required": ["figure"]
}
))
# Layout tools (Issue #174)
tools.append(Tool(
name="layout_create",
description=(
"Create a new dashboard layout container. "
"Templates available: dashboard, report, form, blank. "
"Returns layout reference for use with other layout tools."
),
inputSchema={
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "Unique name for the layout"
},
"template": {
"type": "string",
"enum": ["dashboard", "report", "form", "blank"],
"description": "Layout template to use (default: blank)"
}
},
"required": ["name"]
}
))
tools.append(Tool(
name="layout_add_filter",
description=(
"Add a filter control to a layout. "
"Filter types: dropdown, multi_select, date_range, date, search, "
"checkbox_group, radio_group, slider, range_slider."
),
inputSchema={
"type": "object",
"properties": {
"layout_ref": {
"type": "string",
"description": "Layout name to add filter to"
},
"filter_type": {
"type": "string",
"enum": ["dropdown", "multi_select", "date_range", "date",
"search", "checkbox_group", "radio_group", "slider", "range_slider"],
"description": "Type of filter control"
},
"options": {
"type": "object",
"description": (
"Filter options: label, data (for dropdown), placeholder, "
"position (section name), value, etc."
)
}
},
"required": ["layout_ref", "filter_type", "options"]
}
))
tools.append(Tool(
name="layout_set_grid",
description=(
"Configure the grid system for a layout. "
"Uses DMC Grid component patterns with 12 or 24 column system."
),
inputSchema={
"type": "object",
"properties": {
"layout_ref": {
"type": "string",
"description": "Layout name to configure"
},
"grid": {
"type": "object",
"description": (
"Grid config: cols (1-24), spacing (xs|sm|md|lg|xl), "
"breakpoints ({xs: cols, sm: cols, ...}), gutter"
)
}
},
"required": ["layout_ref", "grid"]
}
))
# Responsive breakpoints tool (Issue #249)
tools.append(Tool(
name="layout_set_breakpoints",
description=(
"Configure responsive breakpoints for a layout. "
"Supports xs, sm, md, lg, xl breakpoints with mobile-first approach. "
"Each breakpoint can define cols, spacing, and other grid properties."
),
inputSchema={
"type": "object",
"properties": {
"layout_ref": {
"type": "string",
"description": "Layout name to configure"
},
"breakpoints": {
"type": "object",
"description": (
"Breakpoint config: {xs: {cols, spacing}, sm: {...}, md: {...}, lg: {...}, xl: {...}}"
)
},
"mobile_first": {
"type": "boolean",
"description": "Use mobile-first (min-width) media queries (default: true)"
}
},
"required": ["layout_ref", "breakpoints"]
}
))
# Theme tools (Issue #175)
tools.append(Tool(
name="theme_create",
description=(
"Create a new design theme with tokens. "
"Tokens include colors, spacing, typography, radii. "
"Missing tokens are filled from defaults."
),
inputSchema={
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "Unique theme name"
},
"tokens": {
"type": "object",
"description": (
"Design tokens: colors (primary, background, text), "
"spacing (xs-xl), typography (fontFamily, fontSize), radii (sm-xl)"
)
}
},
"required": ["name", "tokens"]
}
))
tools.append(Tool(
name="theme_extend",
description=(
"Create a new theme by extending an existing one. "
"Only specify the tokens you want to override."
),
inputSchema={
"type": "object",
"properties": {
"base_theme": {
"type": "string",
"description": "Theme to extend (e.g., 'default')"
},
"overrides": {
"type": "object",
"description": "Token overrides to apply"
},
"new_name": {
"type": "string",
"description": "Name for the new theme (optional)"
}
},
"required": ["base_theme", "overrides"]
}
))
tools.append(Tool(
name="theme_validate",
description=(
"Validate a theme for completeness. "
"Checks for required tokens and common issues."
),
inputSchema={
"type": "object",
"properties": {
"theme_name": {
"type": "string",
"description": "Theme to validate"
}
},
"required": ["theme_name"]
}
))
tools.append(Tool(
name="theme_export_css",
description=(
"Export a theme as CSS custom properties. "
"Generates :root CSS variables for all tokens."
),
inputSchema={
"type": "object",
"properties": {
"theme_name": {
"type": "string",
"description": "Theme to export"
}
},
"required": ["theme_name"]
}
))
# Page tools (Issue #176)
tools.append(Tool(
name="page_create",
description=(
"Create a new page for a multi-page Dash application. "
"Defines page routing and can link to a layout."
),
inputSchema={
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "Unique page name (identifier)"
},
"path": {
"type": "string",
"description": "URL path (e.g., '/', '/settings')"
},
"layout_ref": {
"type": "string",
"description": "Optional layout reference to use"
},
"title": {
"type": "string",
"description": "Page title (defaults to name)"
}
},
"required": ["name", "path"]
}
))
tools.append(Tool(
name="page_add_navbar",
description=(
"Generate navigation component linking pages. "
"Creates top or side navigation with DMC components."
),
inputSchema={
"type": "object",
"properties": {
"pages": {
"type": "array",
"items": {"type": "string"},
"description": "List of page names to include"
},
"options": {
"type": "object",
"description": (
"Navigation options: position (top|left|right), "
"brand (app name), collapsible (bool)"
)
}
},
"required": ["pages"]
}
))
tools.append(Tool(
name="page_set_auth",
description=(
"Configure authentication for a page. "
"Sets auth requirements, roles, and redirect behavior."
),
inputSchema={
"type": "object",
"properties": {
"page_ref": {
"type": "string",
"description": "Page name to configure"
},
"auth_config": {
"type": "object",
"description": (
"Auth config: type (none|basic|oauth|custom), "
"required (bool), roles (array), redirect (path)"
)
}
},
"required": ["page_ref", "auth_config"]
}
))
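# Illustrative "auth_config" payload for page_set_auth, using the fields listed
# in its description (values are assumptions; enforcement is delegated to
# self.page_tools):
#   {"type": "oauth", "required": True, "roles": ["admin"], "redirect": "/login"}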
# Accessibility tools (Issue #248)
tools.append(Tool(
name="accessibility_validate_colors",
description=(
"Validate colors for color blind accessibility. "
"Checks contrast ratios for deuteranopia, protanopia, tritanopia. "
"Returns issues, simulations, and accessible palette suggestions."
),
inputSchema={
"type": "object",
"properties": {
"colors": {
"type": "array",
"items": {"type": "string"},
"description": "List of hex colors to validate (e.g., ['#228be6', '#40c057'])"
},
"check_types": {
"type": "array",
"items": {"type": "string"},
"description": "Color blindness types to check: deuteranopia, protanopia, tritanopia (default: all)"
},
"min_contrast_ratio": {
"type": "number",
"description": "Minimum WCAG contrast ratio (default: 4.5 for AA)"
}
},
"required": ["colors"]
}
))
tools.append(Tool(
name="accessibility_validate_theme",
description=(
"Validate a theme's colors for accessibility. "
"Extracts all colors from theme tokens and checks for color blind safety."
),
inputSchema={
"type": "object",
"properties": {
"theme_name": {
"type": "string",
"description": "Theme name to validate"
}
},
"required": ["theme_name"]
}
))
tools.append(Tool(
name="accessibility_suggest_alternative",
description=(
"Suggest accessible alternative colors for a given color. "
"Provides alternatives optimized for specific color blindness types."
),
inputSchema={
"type": "object",
"properties": {
"color": {
"type": "string",
"description": "Hex color to find alternatives for"
},
"deficiency_type": {
"type": "string",
"enum": ["deuteranopia", "protanopia", "tritanopia"],
"description": "Color blindness type to optimize for"
}
},
"required": ["color", "deficiency_type"]
}
))
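# Illustrative accessibility_validate_colors request (hex values taken from the
# description above; the remaining fields are assumptions):
#   {"colors": ["#228be6", "#40c057"], "check_types": ["deuteranopia"],
#    "min_contrast_ratio": 4.5}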
return tools
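# Dispatch pattern for the handler below: each branch validates its required
# arguments, delegates to the matching *_tools helper, and wraps the result in a
# single TextContent whose text is the JSON-serialized payload. Missing arguments
# and runtime failures are reported the same way, as an {"error": "..."} object.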
@self.server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
"""Handle tool invocation."""
try:
# DMC validation tools
if name == "list_components":
result = await self.dmc_tools.list_components(
category=arguments.get('category')
)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
elif name == "get_component_props":
component = arguments.get('component')
if not component:
return [TextContent(
type="text",
text=json.dumps({"error": "component is required"}, indent=2)
)]
result = await self.dmc_tools.get_component_props(component)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
elif name == "validate_component":
component = arguments.get('component')
props = arguments.get('props', {})
if not component:
return [TextContent(
type="text",
text=json.dumps({"error": "component is required"}, indent=2)
)]
result = await self.dmc_tools.validate_component(component, props)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
# Chart tools
elif name == "chart_create":
chart_type = arguments.get('chart_type')
data = arguments.get('data', {})
options = arguments.get('options', {})
if not chart_type:
return [TextContent(
type="text",
text=json.dumps({"error": "chart_type is required"}, indent=2)
)]
result = await self.chart_tools.chart_create(chart_type, data, options)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
elif name == "chart_configure_interaction":
figure = arguments.get('figure')
interactions = arguments.get('interactions', {})
if not figure:
return [TextContent(
type="text",
text=json.dumps({"error": "figure is required"}, indent=2)
)]
result = await self.chart_tools.chart_configure_interaction(figure, interactions)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
elif name == "chart_export":
figure = arguments.get('figure')
if not figure:
return [TextContent(
type="text",
text=json.dumps({"error": "figure is required"}, indent=2)
)]
result = await self.chart_tools.chart_export(
figure=figure,
format=arguments.get('format', 'png'),
width=arguments.get('width'),
height=arguments.get('height'),
scale=arguments.get('scale', 2.0),
output_path=arguments.get('output_path')
)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
# Layout tools
elif name == "layout_create":
layout_name = arguments.get('name')
template = arguments.get('template')
if not layout_name:
return [TextContent(
type="text",
text=json.dumps({"error": "name is required"}, indent=2)
)]
result = await self.layout_tools.layout_create(layout_name, template)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
elif name == "layout_add_filter":
layout_ref = arguments.get('layout_ref')
filter_type = arguments.get('filter_type')
options = arguments.get('options', {})
if not layout_ref or not filter_type:
return [TextContent(
type="text",
text=json.dumps({"error": "layout_ref and filter_type are required"}, indent=2)
)]
result = await self.layout_tools.layout_add_filter(layout_ref, filter_type, options)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
elif name == "layout_set_grid":
layout_ref = arguments.get('layout_ref')
grid = arguments.get('grid', {})
if not layout_ref:
return [TextContent(
type="text",
text=json.dumps({"error": "layout_ref is required"}, indent=2)
)]
result = await self.layout_tools.layout_set_grid(layout_ref, grid)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
elif name == "layout_set_breakpoints":
layout_ref = arguments.get('layout_ref')
breakpoints = arguments.get('breakpoints', {})
mobile_first = arguments.get('mobile_first', True)
if not layout_ref:
return [TextContent(
type="text",
text=json.dumps({"error": "layout_ref is required"}, indent=2)
)]
result = await self.layout_tools.layout_set_breakpoints(
layout_ref, breakpoints, mobile_first
)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
# Theme tools
elif name == "theme_create":
theme_name = arguments.get('name')
tokens = arguments.get('tokens', {})
if not theme_name:
return [TextContent(
type="text",
text=json.dumps({"error": "name is required"}, indent=2)
)]
result = await self.theme_tools.theme_create(theme_name, tokens)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
elif name == "theme_extend":
base_theme = arguments.get('base_theme')
overrides = arguments.get('overrides', {})
new_name = arguments.get('new_name')
if not base_theme:
return [TextContent(
type="text",
text=json.dumps({"error": "base_theme is required"}, indent=2)
)]
result = await self.theme_tools.theme_extend(base_theme, overrides, new_name)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
elif name == "theme_validate":
theme_name = arguments.get('theme_name')
if not theme_name:
return [TextContent(
type="text",
text=json.dumps({"error": "theme_name is required"}, indent=2)
)]
result = await self.theme_tools.theme_validate(theme_name)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
elif name == "theme_export_css":
theme_name = arguments.get('theme_name')
if not theme_name:
return [TextContent(
type="text",
text=json.dumps({"error": "theme_name is required"}, indent=2)
)]
result = await self.theme_tools.theme_export_css(theme_name)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
# Page tools
elif name == "page_create":
page_name = arguments.get('name')
path = arguments.get('path')
layout_ref = arguments.get('layout_ref')
title = arguments.get('title')
if not page_name or not path:
return [TextContent(
type="text",
text=json.dumps({"error": "name and path are required"}, indent=2)
)]
result = await self.page_tools.page_create(page_name, path, layout_ref, title)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
elif name == "page_add_navbar":
pages = arguments.get('pages', [])
options = arguments.get('options', {})
if not pages:
return [TextContent(
type="text",
text=json.dumps({"error": "pages list is required"}, indent=2)
)]
result = await self.page_tools.page_add_navbar(pages, options)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
elif name == "page_set_auth":
page_ref = arguments.get('page_ref')
auth_config = arguments.get('auth_config', {})
if not page_ref:
return [TextContent(
type="text",
text=json.dumps({"error": "page_ref is required"}, indent=2)
)]
result = await self.page_tools.page_set_auth(page_ref, auth_config)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
# Accessibility tools
elif name == "accessibility_validate_colors":
colors = arguments.get('colors')
if not colors:
return [TextContent(
type="text",
text=json.dumps({"error": "colors list is required"}, indent=2)
)]
result = await self.accessibility_tools.accessibility_validate_colors(
colors=colors,
check_types=arguments.get('check_types'),
min_contrast_ratio=arguments.get('min_contrast_ratio', 4.5)
)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
elif name == "accessibility_validate_theme":
theme_name = arguments.get('theme_name')
if not theme_name:
return [TextContent(
type="text",
text=json.dumps({"error": "theme_name is required"}, indent=2)
)]
result = await self.accessibility_tools.accessibility_validate_theme(theme_name)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
elif name == "accessibility_suggest_alternative":
color = arguments.get('color')
deficiency_type = arguments.get('deficiency_type')
if not color or not deficiency_type:
return [TextContent(
type="text",
text=json.dumps({"error": "color and deficiency_type are required"}, indent=2)
)]
result = await self.accessibility_tools.accessibility_suggest_alternative(
color, deficiency_type
)
return [TextContent(
type="text",
text=json.dumps(result, indent=2)
)]
raise ValueError(f"Unknown tool: {name}")
except Exception as e:
logger.error(f"Tool {name} failed: {e}")
return [TextContent(
type="text",
text=json.dumps({"error": str(e)}, indent=2)
)]
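# Note: the ValueError raised for an unknown tool name is caught by the except
# block above, so unrecognized tools are reported to the client as an
# {"error": ...} payload instead of terminating the server.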
async def run(self):
"""Run the MCP server"""
await self.initialize()
self.setup_tools()
async with stdio_server() as (read_stream, write_stream):
await self.server.run(
read_stream,
write_stream,
self.server.create_initialization_options()
)
async def main():
"""Main entry point"""
server = VizPlatformMCPServer()
await server.run()
if __name__ == "__main__":
asyncio.run(main())
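# Typical launch (assumed invocation; the exact wrapper script is not shown here):
#   python server.py
# The process then speaks MCP over stdio via stdio_server(), so it is normally
# started by an MCP client rather than run interactively.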
