1 Commits

249f8794ad feat(projman): add domain gate checks to orchestrator agent
Add domain-consultation skill and gate check responsibilities to the
orchestrator agent. Before marking issues complete, the orchestrator
now checks for Domain/* labels and invokes the appropriate gate command
(/design-gate for Domain/Viz, /data-gate for Domain/Data).

Includes graceful degradation when gate commands are unavailable.

Closes #362

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 18:11:58 -05:00
93 changed files with 598 additions and 5995 deletions

View File

@@ -6,7 +6,7 @@
},
"metadata": {
"description": "Project management plugins with Gitea and NetBox integrations",
-"version": "5.9.0"
+"version": "5.5.0"
},
"plugins": [
{
@@ -91,8 +91,8 @@
},
{
"name": "claude-config-maintainer",
-"version": "1.2.0",
-"description": "CLAUDE.md and settings.local.json optimization for Claude Code projects",
+"version": "1.1.0",
+"description": "CLAUDE.md optimization and maintenance for Claude Code projects",
"source": "./plugins/claude-config-maintainer",
"author": {
"name": "Leo Miranda",
@@ -155,7 +155,7 @@
},
{
"name": "data-platform",
-"version": "1.3.0",
+"version": "1.2.0",
"description": "Data engineering tools with pandas, PostgreSQL/PostGIS, and dbt integration",
"source": "./plugins/data-platform",
"author": {

View File

@@ -1,249 +0,0 @@
# CLAUDE.md
This file provides guidance to Claude Code when working with code in this repository.
## Project Overview
**Repository:** leo-claude-mktplace
**Version:** 3.0.1
**Status:** Production Ready
A plugin marketplace for Claude Code containing:
| Plugin | Description | Version |
|--------|-------------|---------|
| `projman` | Sprint planning and project management with Gitea integration | 3.0.0 |
| `git-flow` | Git workflow automation with smart commits and branch management | 1.0.0 |
| `pr-review` | Multi-agent PR review with confidence scoring | 1.0.0 |
| `clarity-assist` | Prompt optimization with ND-friendly accommodations | 1.0.0 |
| `doc-guardian` | Automatic documentation drift detection and synchronization | 1.0.0 |
| `code-sentinel` | Security scanning and code refactoring tools | 1.0.0 |
| `claude-config-maintainer` | CLAUDE.md optimization and maintenance | 1.0.0 |
| `cmdb-assistant` | NetBox CMDB integration for infrastructure management | 1.0.0 |
| `project-hygiene` | Post-task cleanup automation via hooks | 0.1.0 |
## Quick Start
```bash
# Validate marketplace compliance
./scripts/validate-marketplace.sh
# Setup commands (in a target project with plugin installed)
/initial-setup # First time: full setup wizard
/project-init # New project: quick config
/project-sync # After repo move: sync config
# Run projman commands
/sprint-plan # Start sprint planning
/sprint-status # Check progress
/review # Pre-close code quality review
/test-check # Verify tests before close
/sprint-close # Complete sprint
```
## Repository Structure
```
leo-claude-mktplace/
├── .claude-plugin/
│ └── marketplace.json # Marketplace manifest
├── mcp-servers/ # SHARED MCP servers (v3.0.0+)
│ ├── gitea/ # Gitea MCP (issues, PRs, wiki)
│ └── netbox/ # NetBox MCP (CMDB)
├── plugins/
│ ├── projman/ # Sprint management
│ │ ├── .claude-plugin/plugin.json
│ │ ├── .mcp.json
│ │ ├── mcp-servers/gitea -> ../../../mcp-servers/gitea # SYMLINK
│ │ ├── commands/ # 12 commands (incl. setup)
│ │ ├── hooks/ # SessionStart mismatch detection
│ │ ├── agents/ # 4 agents
│ │ └── skills/label-taxonomy/
│ ├── git-flow/ # Git workflow automation
│ │ ├── .claude-plugin/plugin.json
│ │ ├── commands/ # 8 commands
│ │ └── agents/
│ ├── pr-review/ # Multi-agent PR review
│ │ ├── .claude-plugin/plugin.json
│ │ ├── .mcp.json
│ │ ├── mcp-servers/gitea -> ../../../mcp-servers/gitea # SYMLINK
│ │ ├── commands/ # 6 commands (incl. setup)
│ │ ├── hooks/ # SessionStart mismatch detection
│ │ └── agents/ # 5 agents
│ ├── clarity-assist/ # Prompt optimization (NEW v3.0.0)
│ │ ├── .claude-plugin/plugin.json
│ │ ├── commands/ # 2 commands
│ │ └── agents/
│ ├── doc-guardian/ # Documentation drift detection
│ ├── code-sentinel/ # Security scanning & refactoring
│ ├── claude-config-maintainer/
│ ├── cmdb-assistant/
│ └── project-hygiene/
├── scripts/
│ ├── setup.sh, post-update.sh
│ └── validate-marketplace.sh # Marketplace compliance validation
└── docs/
├── CANONICAL-PATHS.md # Single source of truth for paths
└── CONFIGURATION.md # Centralized configuration guide
```
## CRITICAL: Rules You MUST Follow
### File Operations
- **NEVER** create files in repository root unless listed in "Allowed Root Files"
- **NEVER** modify `.gitignore` without explicit permission
- **ALWAYS** use `.scratch/` for temporary/exploratory work
- **ALWAYS** verify paths against `docs/CANONICAL-PATHS.md` before creating files
### Plugin Development
- **plugin.json MUST be in `.claude-plugin/` directory** (not plugin root)
- **Every plugin MUST be listed in marketplace.json**
- **MCP servers are SHARED at root** with symlinks from plugins
- **MCP server venv path**: `${CLAUDE_PLUGIN_ROOT}/mcp-servers/{name}/.venv/bin/python`
- **CLI tools forbidden** - Use MCP tools exclusively (never `tea`, `gh`, etc.)
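As a concrete illustration of the shared-server rule above (a sketch; `projman` and `gitea` are just the example pair used elsewhere in this file):
```bash
# The plugin ships only a symlink; the server code and venv live once at the repo root.
ls -l plugins/projman/mcp-servers/gitea
#   plugins/projman/mcp-servers/gitea -> ../../../mcp-servers/gitea
readlink -f plugins/projman/mcp-servers/gitea   # resolves to <marketplace-root>/mcp-servers/gitea

# So the interpreter referenced from .mcp.json expands to the shared venv:
#   ${CLAUDE_PLUGIN_ROOT}/mcp-servers/gitea/.venv/bin/python
```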
### Hooks (Valid Events Only)
`PreToolUse`, `PostToolUse`, `UserPromptSubmit`, `SessionStart`, `SessionEnd`, `Notification`, `Stop`, `SubagentStop`, `PreCompact`
**INVALID:** `task-completed`, `file-changed`, `git-commit-msg-needed`
### Allowed Root Files
`CLAUDE.md`, `README.md`, `LICENSE`, `CHANGELOG.md`, `.gitignore`, `.env.example`
### Allowed Root Directories
`.claude/`, `.claude-plugin/`, `.claude-plugins/`, `.scratch/`, `docs/`, `hooks/`, `mcp-servers/`, `plugins/`, `scripts/`
## Architecture
### Four-Agent Model (projman)
| Agent | Personality | Responsibilities |
|-------|-------------|------------------|
| **Planner** | Thoughtful, methodical | Sprint planning, architecture analysis, issue creation, lesson search |
| **Orchestrator** | Concise, action-oriented | Sprint execution, parallel batching, Git operations, lesson capture |
| **Executor** | Implementation-focused | Code implementation, branch management, MR creation |
| **Code Reviewer** | Thorough, practical | Pre-close quality review, security scan, test verification |
### MCP Server Tools (Gitea)
| Category | Tools |
|----------|-------|
| Issues | `list_issues`, `get_issue`, `create_issue`, `update_issue`, `add_comment` |
| Labels | `get_labels`, `suggest_labels`, `create_label` |
| Milestones | `list_milestones`, `get_milestone`, `create_milestone`, `update_milestone` |
| Dependencies | `list_issue_dependencies`, `create_issue_dependency`, `get_execution_order` |
| Wiki | `list_wiki_pages`, `get_wiki_page`, `create_wiki_page`, `create_lesson`, `search_lessons` |
| **Pull Requests** | `list_pull_requests`, `get_pull_request`, `get_pr_diff`, `get_pr_comments`, `create_pr_review`, `add_pr_comment` *(NEW v3.0.0)* |
| Validation | `validate_repo_org`, `get_branch_protection` |
### Hybrid Configuration
| Level | Location | Purpose |
|-------|----------|---------|
| System | `~/.config/claude/gitea.env` | Credentials (GITEA_API_URL, GITEA_API_TOKEN) |
| Project | `.env` in project root | Repository specification (GITEA_ORG, GITEA_REPO) |
**Note:** `GITEA_ORG` is at project level since different projects may belong to different organizations.
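A minimal sketch of the two layers; the values below are placeholders, not real credentials or required formats:
```bash
# System level: ~/.config/claude/gitea.env (credentials, shared across projects)
GITEA_API_URL="https://gitea.example.com/api/v1"   # placeholder URL
GITEA_API_TOKEN="replace-with-a-real-token"        # placeholder token

# Project level: .env in the project root (repository specification)
GITEA_ORG="personal-projects"                      # placeholder organization
GITEA_REPO="leo-claude-mktplace"                   # placeholder repository
```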
### Branch-Aware Security
| Branch Pattern | Mode | Capabilities |
|----------------|------|--------------|
| `development`, `feat/*` | Development | Full access |
| `staging` | Staging | Read-only code, can create issues |
| `main`, `master` | Production | Read-only, emergency only |
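A hedged sketch of how the table's branch-to-mode mapping could be computed; the fallback for branches not listed in the table is an assumption:
```bash
branch=$(git rev-parse --abbrev-ref HEAD)
case "$branch" in
  development|feat/*) mode="development" ;;   # full access
  staging)            mode="staging"     ;;   # read-only code, can create issues
  main|master)        mode="production"  ;;   # read-only, emergency only
  *)                  mode="development" ;;   # assumption: unlisted branches treated as development
esac
echo "Branch '$branch' -> $mode mode"
```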
## Label Taxonomy
43 labels total: 27 organization + 16 repository
**Organization:** Agent/2, Complexity/3, Efforts/5, Priority/4, Risk/3, Source/4, Type/6
**Repository:** Component/9, Tech/7
Sync with `/labels-sync` command.
## Lessons Learned System
Stored in Gitea Wiki under `lessons-learned/sprints/`.
**Workflow:**
1. Orchestrator captures at sprint close via MCP tools
2. Planner searches at sprint start using `search_lessons`
3. Tags enable cross-project discovery
## Common Operations
### Adding a New Plugin
1. Create `plugins/{name}/.claude-plugin/plugin.json`
2. Add entry to `.claude-plugin/marketplace.json` with category, tags, license
3. Create `README.md` and `claude-md-integration.md`
4. If using MCP server, create symlink: `ln -s ../../../mcp-servers/{server} plugins/{name}/mcp-servers/{server}`
5. Run `./scripts/validate-marketplace.sh`
6. Update `CHANGELOG.md`
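The same steps as a shell sketch, assuming a hypothetical plugin named `my-plugin` that reuses the shared `gitea` server:
```bash
# 1. Plugin manifest lives under .claude-plugin/, not the plugin root
mkdir -p plugins/my-plugin/.claude-plugin
$EDITOR plugins/my-plugin/.claude-plugin/plugin.json

# 2-3. Register it in the marketplace manifest and add the docs
$EDITOR .claude-plugin/marketplace.json
$EDITOR plugins/my-plugin/README.md plugins/my-plugin/claude-md-integration.md

# 4. Symlink the shared MCP server (only if the plugin needs one)
mkdir -p plugins/my-plugin/mcp-servers
ln -s ../../../mcp-servers/gitea plugins/my-plugin/mcp-servers/gitea

# 5-6. Validate and record the change
./scripts/validate-marketplace.sh
$EDITOR CHANGELOG.md
```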
### Adding a Command to projman
1. Create `plugins/projman/commands/{name}.md`
2. Update `plugins/projman/README.md`
3. Update marketplace description if significant
### Validation
```bash
./scripts/validate-marketplace.sh # Validates all manifests
```
## Path Verification Protocol
**Before creating any file:**
1. Read `docs/CANONICAL-PATHS.md`
2. List all paths to be created/modified
3. Verify each against canonical paths
4. If not in canonical paths, STOP and ask
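One way to script the verification step, assuming the canonical paths in `docs/CANONICAL-PATHS.md` can be matched with a plain-text search (an assumption about that file's format; the path below is hypothetical):
```bash
proposed="plugins/projman/commands/new-command.md"   # hypothetical file to create
if grep -qF "$(dirname "$proposed")/" docs/CANONICAL-PATHS.md; then
  echo "OK: $(dirname "$proposed")/ is listed as canonical"
else
  echo "STOP: $(dirname "$proposed")/ not found in docs/CANONICAL-PATHS.md - ask before creating" >&2
fi
```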
## Documentation Index
| Document | Purpose |
|----------|---------|
| `docs/CANONICAL-PATHS.md` | **Single source of truth** for paths |
| `docs/COMMANDS-CHEATSHEET.md` | All commands quick reference with workflow examples |
| `docs/CONFIGURATION.md` | Centralized setup guide |
| `docs/UPDATING.md` | Update guide for the marketplace |
| `plugins/projman/CONFIGURATION.md` | Quick reference (links to central) |
| `plugins/projman/README.md` | Projman full documentation |
## Versioning and Changelog Rules
### Version Display
**The marketplace version is displayed ONLY in the main `README.md` title.**
- Format: `# Leo Claude Marketplace - vX.Y.Z`
- Do NOT add version numbers to individual plugin documentation titles
- Do NOT add version numbers to configuration guides
- Do NOT add version numbers to CLAUDE.md or other docs
### Changelog Maintenance (MANDATORY)
**`CHANGELOG.md` is the authoritative source for version history.**
When releasing a new version:
1. Update main `README.md` title with new version
2. Update `CHANGELOG.md` with:
- Version number and date: `## [X.Y.Z] - YYYY-MM-DD`
- **Added**: New features, commands, files
- **Changed**: Modifications to existing functionality
- **Fixed**: Bug fixes
- **Removed**: Deleted features, files, deprecated items
3. Update `marketplace.json` metadata version
4. Update plugin `plugin.json` versions if plugin-specific changes
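The mechanical parts of steps 1 and 3 could look like this (a sketch; the version number is a placeholder and `jq` is assumed to be available):
```bash
NEW_VERSION="9.9.9"   # placeholder

# Step 1: bump the version in the main README title
sed -i -E "s/^# Leo Claude Marketplace - v[0-9]+\.[0-9]+\.[0-9]+/# Leo Claude Marketplace - v${NEW_VERSION}/" README.md

# Step 3: bump marketplace.json metadata version
jq --arg v "$NEW_VERSION" '.metadata.version = $v' .claude-plugin/marketplace.json \
  > .claude-plugin/marketplace.json.tmp \
  && mv .claude-plugin/marketplace.json.tmp .claude-plugin/marketplace.json

# Steps 2 and 4 (CHANGELOG entry, per-plugin plugin.json bumps) remain manual edits
```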
### Version Format
- Follow [Semantic Versioning](https://semver.org/): MAJOR.MINOR.PATCH
- MAJOR: Breaking changes
- MINOR: New features, backward compatible
- PATCH: Bug fixes, minor improvements
---
**Last Updated:** 2026-01-20

View File

@@ -1,27 +0,0 @@
# Doc Guardian Queue - cleared after sync on 2026-02-02
2026-02-02T11:41:00 | .claude-plugin | /home/lmiranda/claude-plugins-work/.claude-plugin/marketplace.json | CLAUDE.md .claude-plugin/marketplace.json
2026-02-02T13:35:48 | skills | /home/lmiranda/claude-plugins-work/plugins/projman/skills/sprint-approval.md | README.md
2026-02-02T13:36:03 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/sprint-start.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:36:16 | agents | /home/lmiranda/claude-plugins-work/plugins/projman/agents/orchestrator.md | README.md CLAUDE.md
2026-02-02T13:39:07 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/rfc.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:39:15 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/setup.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:39:32 | skills | /home/lmiranda/claude-plugins-work/plugins/projman/skills/rfc-workflow.md | README.md
2026-02-02T13:43:14 | skills | /home/lmiranda/claude-plugins-work/plugins/projman/skills/rfc-templates.md | README.md
2026-02-02T13:44:55 | skills | /home/lmiranda/claude-plugins-work/plugins/projman/skills/sprint-lifecycle.md | README.md
2026-02-02T13:45:04 | skills | /home/lmiranda/claude-plugins-work/plugins/projman/skills/label-taxonomy/labels-reference.md | README.md
2026-02-02T13:45:14 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/sprint-plan.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:45:48 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/review.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:46:07 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/sprint-close.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:46:21 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/sprint-status.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:46:38 | agents | /home/lmiranda/claude-plugins-work/plugins/projman/agents/planner.md | README.md CLAUDE.md
2026-02-02T13:46:57 | agents | /home/lmiranda/claude-plugins-work/plugins/projman/agents/code-reviewer.md | README.md CLAUDE.md
2026-02-02T13:49:13 | commands | /home/lmiranda/claude-plugins-work/plugins/viz-platform/commands/design-gate.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:49:24 | commands | /home/lmiranda/claude-plugins-work/plugins/data-platform/commands/data-gate.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-02-02T13:49:35 | skills | /home/lmiranda/claude-plugins-work/plugins/projman/skills/domain-consultation.md | README.md
2026-02-02T13:50:04 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/validation_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-02-02T13:50:59 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/server.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-02-02T13:51:32 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/tests/test_validation_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-02-02T13:51:49 | skills | /home/lmiranda/claude-plugins-work/plugins/contract-validator/skills/validation-rules.md | README.md
2026-02-02T13:52:07 | skills | /home/lmiranda/claude-plugins-work/plugins/contract-validator/skills/mcp-tools-reference.md | README.md
2026-02-02T13:59:09 | skills | /home/lmiranda/claude-plugins-work/plugins/projman/skills/progress-tracking.md | README.md
2026-02-02T14:01:34 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/test.md | docs/COMMANDS-CHEATSHEET.md README.md

.env
View File

@@ -1 +0,0 @@
GITEA_REPO=personal-projects/leo-claude-mktplace

View File

@@ -4,209 +4,6 @@ All notable changes to the Leo Claude Marketplace will be documented in this fil
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## [Unreleased]
---
## [5.9.0] - 2026-02-03
### Added
#### Plugin Installation Scripts
New scripts for installing marketplace plugins into consumer projects:
- **`scripts/install-plugin.sh`** — Install a plugin to a consumer project
- Adds MCP server entry to target's `.mcp.json` (if plugin has MCP server)
- Appends integration snippet to target's `CLAUDE.md`
- Idempotent: safe to run multiple times
- Validates plugin exists and target path is valid
- **`scripts/uninstall-plugin.sh`** — Remove a plugin from a consumer project
- Removes MCP server entry from `.mcp.json`
- Removes integration section from `CLAUDE.md`
- **`scripts/list-installed.sh`** — Show installed plugins in a project
- Lists fully installed, partially installed, and available plugins
- Shows plugin versions and descriptions
**Usage:**
```bash
./scripts/install-plugin.sh data-platform ~/projects/personal-portfolio
./scripts/list-installed.sh ~/projects/personal-portfolio
./scripts/uninstall-plugin.sh data-platform ~/projects/personal-portfolio
```
**Documentation:** `docs/CONFIGURATION.md` updated with "Installing Plugins to Consumer Projects" section.
### Fixed
#### Plugin Installation Scripts — MCP Mapping & Section Markers
**MCP Server Mapping:**
- Added `mcp_servers` field to plugin.json for plugins that use shared MCP servers
- `projman` and `pr-review` now correctly install `gitea` MCP server
- `cmdb-assistant` now correctly installs `netbox` MCP server
- Scripts read MCP server names from plugin.json instead of assuming plugin name = server name
**CLAUDE.md Section Markers:**
- Install script now wraps integration content with HTML comment markers:
`<!-- BEGIN marketplace-plugin: {name} -->` and `<!-- END marketplace-plugin: {name} -->`
- Uninstall script uses markers for precise section removal (no more code block false positives)
- Backward compatible: falls back to legacy header detection for pre-marker installations
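For illustration, marker-based removal can be as simple as the following sketch (an illustrative awk approach, not necessarily the script's actual implementation; the plugin name is a placeholder):
```bash
PLUGIN="data-platform"   # placeholder plugin name
# Drop everything between the BEGIN/END markers for this plugin, keep the rest
awk -v p="$PLUGIN" '
  $0 == ("<!-- BEGIN marketplace-plugin: " p " -->") {skip=1; next}
  $0 == ("<!-- END marketplace-plugin: " p " -->")   {skip=0; next}
  !skip
' CLAUDE.md > CLAUDE.md.tmp && mv CLAUDE.md.tmp CLAUDE.md
```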
**Plugins updated with `mcp_servers` field:**
- `projman``["gitea"]`
- `pr-review``["gitea"]`
- `cmdb-assistant``["netbox"]`
- `data-platform``["data-platform"]`
- `viz-platform``["viz-platform"]`
- `contract-validator``["contract-validator"]`
#### Agent Model Selection
Per-agent model selection using Claude Code's now-supported `model` frontmatter field.
- All 25 marketplace agents assigned appropriate model (`sonnet`, `haiku`, or `inherit`)
- Model assignment based on reasoning depth, tool complexity, and latency requirements
- Documentation added to `CLAUDE.md` and `docs/CONFIGURATION.md`
**Supported values:** `sonnet` (default), `opus`, `haiku`, `inherit`
**Model assignments:**
| Model | Agent Types |
|-------|-------------|
| sonnet | Planner, Orchestrator, Executor, Code Reviewer, Coordinator, Security Reviewers, Data Advisor, Design Reviewer, etc. |
| haiku | Maintainability Auditor, Test Validator, Component Check, Theme Setup, Git Assistant, Data Ingestion, Agent Check |
### Fixed
#### Agent Frontmatter Standardization
- Fixed viz-platform and data-platform agents using non-standard `agent:` field (now `name:`)
- Removed non-standard `triggers:` field from domain agents (trigger info already in agent body)
- Added missing frontmatter to 13 agents across pr-review, viz-platform, contract-validator, clarity-assist, git-flow, doc-guardian, code-sentinel, cmdb-assistant, and data-platform
- All 25 agents now have consistent `name`, `description`, and `model` fields
---
## [5.8.0] - 2026-02-02
### Added
#### claude-config-maintainer v1.2.0 - Settings Audit Feature
New commands for auditing and optimizing `settings.local.json` permission configurations:
- **`/config-audit-settings`** — Audit `settings.local.json` permissions with 100-point scoring across redundancy, coverage, safety alignment, and profile fit
- **`/config-optimize-settings`** — Apply permission optimizations with dry-run, named profiles (`conservative`, `reviewed`, `autonomous`), and consolidation modes
- **`/config-permissions-map`** — Generate Mermaid diagram of review layer coverage and permission gaps
- **`skills/settings-optimization.md`** — Comprehensive skill for permission pattern analysis, consolidation rules, review-layer-aware recommendations, and named profiles
**Key Features:**
- Settings Efficiency Score (100 points) alongside existing CLAUDE.md score
- Review layer verification — agent reads `hooks/hooks.json` from installed plugins before recommending auto-allow patterns
- Three named profiles: `conservative` (prompts for most writes), `reviewed` (for projects with ≥2 review layers), `autonomous` (sandboxed environments)
- Pattern consolidation detection: duplicates, subsets, merge candidates, stale entries, conflicts
#### Projman Hardening Sprint
Targeted improvements to safety gates, command structure, lifecycle tracking, and cross-plugin contracts.
**Sprint Lifecycle State Machine:**
- New `skills/sprint-lifecycle.md` - defines valid states and transitions via milestone metadata
- States: idle -> Sprint/Planning -> Sprint/Executing -> Sprint/Reviewing -> idle
- All sprint commands check and set lifecycle state on entry/exit
- Out-of-order calls produce warnings with guidance, `--force` override available
**Sprint Dispatch Log:**
- Orchestrator now maintains a structured dispatch log during execution
- Records task dispatch, completion, failure, gate checks, and resume events
- Enables timeline reconstruction after interrupted sessions
**Gate Contract Versioning:**
- Gate commands (`/design-gate`, `/data-gate`) declare `gate_contract: v1` in frontmatter
- `domain-consultation.md` Gate Command Reference includes expected contract version
- `validate_workflow_integration` now checks contract version compatibility
- Mismatch produces WARNING, missing contract produces INFO suggestion
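A sketch of the kind of contract check this implies, assuming the gate command file declares `gate_contract` in its YAML frontmatter (the file path and expected version are examples):
```bash
GATE_CMD="plugins/viz-platform/commands/design-gate.md"   # example gate command file
EXPECTED="v1"

# Extract gate_contract from the first frontmatter block (between the opening --- pair)
declared=$(awk '/^---$/{n++; next} n==1' "$GATE_CMD" | sed -n 's/^gate_contract:[[:space:]]*//p')

if [[ -z "$declared" ]]; then
  echo "INFO: $GATE_CMD declares no gate_contract (suggest adding one)"
elif [[ "$declared" != "$EXPECTED" ]]; then
  echo "WARNING: $GATE_CMD declares gate_contract '$declared', expected '$EXPECTED'"
else
  echo "OK: gate contract $declared"
fi
```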
**Shared Visual Output Skill:**
- New `skills/visual-output.md` - single source of truth for projman visual headers
- All 4 agent files reference the skill instead of inline templates
- Phase Registry maps agents to emoji and phase names
### Changed
**Sprint Approval Gate Hardened:**
- Approval is now a hard block, not a warning (was "recommended", now required)
- `--force` flag added to bypass in emergencies (logged to milestone)
- Consistent language across sprint-approval.md, sprint-start.md, and orchestrator.md
**RFC Commands Normalized:**
- 5 individual commands (`/rfc-create`, `/rfc-list`, `/rfc-review`, `/rfc-approve`, `/rfc-reject`) consolidated into `/rfc create|list|review|approve|reject`
- `/clear-cache` absorbed into `/setup --clear-cache`
- Command count reduced from 17 to 12
**`/test` Command Documentation Expanded:**
- Sprint integration section (pre-close verification workflow)
- Concrete usage examples for all modes
- Edge cases table
- DO NOT rules for both modes
### Removed
- `plugins/projman/commands/rfc-create.md` (replaced by `/rfc create`)
- `plugins/projman/commands/rfc-list.md` (replaced by `/rfc list`)
- `plugins/projman/commands/rfc-review.md` (replaced by `/rfc review`)
- `plugins/projman/commands/rfc-approve.md` (replaced by `/rfc approve`)
- `plugins/projman/commands/rfc-reject.md` (replaced by `/rfc reject`)
- `plugins/projman/commands/clear-cache.md` (replaced by `/setup --clear-cache`)
---
## [5.7.1] - 2026-02-02
### Added
- **contract-validator**: New `validate_workflow_integration` MCP tool — validates domain plugins expose required advisory interfaces (gate command, review command, advisory agent)
- **contract-validator**: New `MISSING_INTEGRATION` issue type for workflow integration validation
### Fixed
- `scripts/setup.sh` banner version updated from v5.1.0 to v5.7.1
### Reverted
- **marketplace.json**: Removed `integrates_with` field — Claude Code schema does not support custom plugin fields (causes marketplace load failure)
---
## [5.7.0] - 2026-02-02
### Added
- **data-platform**: New `data-advisor` agent for data integrity, schema, and dbt compliance validation
- **data-platform**: New `data-integrity-audit.md` skill defining audit rules, severity levels, and scanning strategies
- **data-platform**: New `/data-gate` command for binary pass/fail data integrity gates (projman integration)
- **data-platform**: New `/data-review` command for comprehensive data integrity audits
### Changed
- Domain Advisory Pattern now fully operational for both Viz and Data domains
- projman orchestrator `Domain/Data` gates now resolve to live `/data-gate` command (previously fell through to "gate unavailable" warning)
---
## [5.6.0] - 2026-02-01
### Added
- **Domain Advisory Pattern**: Cross-plugin integration enabling projman to consult domain-specific plugins during sprint lifecycle
- **projman**: New `domain-consultation.md` skill for domain detection and gate protocols
- **viz-platform**: New `design-reviewer` agent for design system compliance auditing
- **viz-platform**: New `design-system-audit.md` skill defining audit rules and severity levels
- **viz-platform**: New `/design-review` command for detailed design system audits
- **viz-platform**: New `/design-gate` command for binary pass/fail validation gates
- **Labels**: New `Domain/Viz` and `Domain/Data` labels for domain routing
### Changed
- **projman planner**: Now loads domain-consultation skill and performs domain detection during planning
- **projman orchestrator**: Now runs domain gates before marking Domain/* labeled issues as complete
---
## [5.5.0] - 2026-02-01
### Added

View File

@@ -146,7 +146,7 @@ When user says "fix the sprint-plan command", edit the SOURCE code.
## Project Overview
**Repository:** leo-claude-mktplace
-**Version:** 5.9.0
+**Version:** 5.4.0
**Status:** Production Ready
A plugin marketplace for Claude Code containing:
@@ -161,7 +161,7 @@ A plugin marketplace for Claude Code containing:
| `code-sentinel` | Security scanning and code refactoring tools | 1.0.1 |
| `claude-config-maintainer` | CLAUDE.md optimization and maintenance | 1.0.0 |
| `cmdb-assistant` | NetBox CMDB integration for infrastructure management | 1.2.0 |
-| `data-platform` | pandas, PostgreSQL, and dbt integration for data engineering | 1.3.0 |
+| `data-platform` | pandas, PostgreSQL, and dbt integration for data engineering | 1.1.0 |
| `viz-platform` | DMC validation, Plotly charts, and theming for dashboards | 1.1.0 |
| `contract-validator` | Cross-plugin compatibility validation and agent verification | 1.1.0 |
| `project-hygiene` | Post-task cleanup automation via hooks | 0.1.0 |
@@ -271,40 +271,6 @@ leo-claude-mktplace/
| **Executor** | Implementation-focused | Code implementation, branch management, MR creation |
| **Code Reviewer** | Thorough, practical | Pre-close quality review, security scan, test verification |
### Agent Model Selection
Agents specify their model in frontmatter using Claude Code's `model` field. Supported values: `sonnet` (default), `opus`, `haiku`, `inherit`.
| Plugin | Agent | Model | Rationale |
|--------|-------|-------|-----------|
| projman | Planner | sonnet | Architectural analysis, sprint planning |
| projman | Orchestrator | sonnet | Coordination and tool dispatch |
| projman | Executor | sonnet | Code generation and implementation |
| projman | Code Reviewer | sonnet | Quality gate, pattern detection |
| pr-review | Coordinator | sonnet | Orchestrates sub-agents, aggregates findings |
| pr-review | Security Reviewer | sonnet | Security analysis |
| pr-review | Performance Analyst | sonnet | Performance pattern detection |
| pr-review | Maintainability Auditor | haiku | Pattern matching (complexity, duplication) |
| pr-review | Test Validator | haiku | Coverage gap detection |
| data-platform | Data Advisor | sonnet | Schema validation, dbt orchestration |
| data-platform | Data Analysis | sonnet | Data exploration and profiling |
| data-platform | Data Ingestion | haiku | Data loading operations |
| viz-platform | Design Reviewer | sonnet | DMC validation + accessibility |
| viz-platform | Layout Builder | sonnet | Dashboard design guidance |
| viz-platform | Component Check | haiku | Quick component validation |
| viz-platform | Theme Setup | haiku | Theme configuration |
| contract-validator | Agent Check | haiku | Reference checking |
| contract-validator | Full Validation | sonnet | Marketplace sweep |
| code-sentinel | Security Reviewer | sonnet | Security analysis |
| code-sentinel | Refactor Advisor | sonnet | Code refactoring advice |
| doc-guardian | Doc Analyzer | sonnet | Documentation drift detection |
| clarity-assist | Clarity Coach | sonnet | Conversational coaching |
| git-flow | Git Assistant | haiku | Git operations |
| claude-config-maintainer | Maintainer | sonnet | CLAUDE.md optimization |
| cmdb-assistant | CMDB Assistant | sonnet | NetBox operations |
Override by editing the `model:` field in `plugins/{plugin}/agents/{agent}.md`.
### MCP Server Tools (Gitea)
| Category | Tools |
@@ -487,4 +453,4 @@ The script will:
---
-**Last Updated:** 2026-02-02
+**Last Updated:** 2026-01-30

View File

@@ -1,4 +1,4 @@
-# Leo Claude Marketplace - v5.9.0
+# Leo Claude Marketplace - v5.5.0
A collection of Claude Code plugins for project management, infrastructure automation, and development workflows.
@@ -19,7 +19,7 @@ AI-guided sprint planning with full Gitea integration. Transforms a proven 15-sp
- Branch-aware security (development/staging/production)
- Pre-sprint-close code quality review and test verification
-**Commands:** `/sprint-plan`, `/sprint-start`, `/sprint-status`, `/sprint-close`, `/labels-sync`, `/setup`, `/review`, `/test`, `/debug`, `/suggest-version`, `/proposal-status`, `/rfc`
+**Commands:** `/sprint-plan`, `/sprint-start`, `/sprint-status`, `/sprint-close`, `/labels-sync`, `/setup`, `/review`, `/test`, `/debug`, `/suggest-version`, `/proposal-status`, `/clear-cache`, `/rfc-create`, `/rfc-list`, `/rfc-review`, `/rfc-approve`, `/rfc-reject`
#### [git-flow](./plugins/git-flow) *NEW in v3.0.0*
**Git Workflow Automation**
@@ -47,11 +47,11 @@ Comprehensive pull request review using specialized agents.
**Commands:** `/pr-review`, `/pr-summary`, `/pr-findings`, `/pr-diff`, `/initial-setup`, `/project-init`, `/project-sync`
#### [claude-config-maintainer](./plugins/claude-config-maintainer)
-**CLAUDE.md and Settings Optimization**
-Analyze, optimize, and create CLAUDE.md configuration files. Audit and optimize settings.local.json permissions.
-**Commands:** `/analyze`, `/optimize`, `/init`, `/config-diff`, `/config-lint`, `/config-audit-settings`, `/config-optimize-settings`, `/config-permissions-map`
+**CLAUDE.md Optimization and Maintenance**
+Analyze, optimize, and create CLAUDE.md configuration files for Claude Code projects.
+**Commands:** `/config-analyze`, `/config-optimize`, `/config-init`, `/config-diff`, `/config-lint`
#### [contract-validator](./plugins/contract-validator) *NEW in v5.0.0*
**Cross-Plugin Compatibility Validation**
@@ -122,7 +122,7 @@ Comprehensive data engineering toolkit with persistent DataFrame storage.
- 100k row limit with chunking support
- Auto-detection of dbt projects
-**Commands:** `/ingest`, `/profile`, `/schema`, `/explain`, `/lineage`, `/lineage-viz`, `/run`, `/dbt-test`, `/data-quality`, `/data-review`, `/data-gate`, `/initial-setup`
+**Commands:** `/ingest`, `/profile`, `/schema`, `/explain`, `/lineage`, `/lineage-viz`, `/run`, `/dbt-test`, `/data-quality`, `/initial-setup`
### Visualization
@@ -138,22 +138,7 @@ Visualization toolkit with version-locked component validation and design token
- 5 Page tools for multi-page app structure
- Dual theme storage: user-level and project-level
-**Commands:** `/chart`, `/chart-export`, `/dashboard`, `/theme`, `/theme-new`, `/theme-css`, `/component`, `/accessibility-check`, `/breakpoints`, `/design-review`, `/design-gate`, `/initial-setup`
+**Commands:** `/chart`, `/chart-export`, `/dashboard`, `/theme`, `/theme-new`, `/theme-css`, `/component`, `/accessibility-check`, `/breakpoints`, `/initial-setup`
## Domain Advisory Pattern
The marketplace supports cross-plugin domain advisory integration:
- **Domain Detection**: projman automatically detects when issues involve specialized domains (frontend/viz, data engineering)
- **Acceptance Criteria**: Domain-specific acceptance criteria are added to issues during planning
- **Execution Gates**: Domain validation gates (`/design-gate`, `/data-gate`) run before issue completion
- **Extensible**: New domains can be added by creating advisory agents and gate commands
**Current Domains:**
| Domain | Plugin | Gate Command |
|--------|--------|--------------|
| Visualization | viz-platform | `/design-gate` |
| Data | data-platform | `/data-gate` |
## MCP Servers
@@ -215,7 +200,7 @@ Cross-plugin compatibility validation tools.
| Category | Tools |
|----------|-------|
| Parse | `parse_plugin_interface`, `parse_claude_md_agents` |
-| Validation | `validate_compatibility`, `validate_agent_refs`, `validate_data_flow`, `validate_workflow_integration` |
+| Validation | `validate_compatibility`, `validate_agent_refs`, `validate_data_flow` |
| Report | `generate_compatibility_report`, `list_issues` |
## Installation
@@ -312,7 +297,7 @@ After installing plugins, the `/plugin` command may show `(no content)` - this i
| clarity-assist | `/clarity-assist:clarify` |
| doc-guardian | `/doc-guardian:doc-audit` |
| code-sentinel | `/code-sentinel:security-scan` |
-| claude-config-maintainer | `/claude-config-maintainer:analyze` |
+| claude-config-maintainer | `/claude-config-maintainer:config-analyze` |
| cmdb-assistant | `/cmdb-assistant:cmdb-search` |
| data-platform | `/data-platform:ingest` |
| viz-platform | `/viz-platform:chart` |

View File

@@ -182,42 +182,10 @@ MCP servers are **shared at repository root** and configured in `.mcp.json`.
| MCP configuration | `.mcp.json` | `.mcp.json` (at repo root) |
| Shared MCP server | `mcp-servers/{server}/` | `mcp-servers/gitea/` |
| MCP server code | `mcp-servers/{server}/mcp_server/` | `mcp-servers/gitea/mcp_server/` |
-| MCP venv (local) | `mcp-servers/{server}/.venv/` | `mcp-servers/gitea/.venv/` |
+| MCP venv | `mcp-servers/{server}/.venv/` | `mcp-servers/gitea/.venv/` |
**Note:** Plugins do NOT have their own `mcp-servers/` directories. All MCP servers are shared at root and configured via `.mcp.json`.
### MCP Venv Paths - CRITICAL
**Venvs live in a CACHE directory that SURVIVES marketplace updates.**
When checking for venvs, ALWAYS check in this order:
| Priority | Path | Survives Updates? |
|----------|------|-------------------|
| 1 (CHECK FIRST) | `~/.cache/claude-mcp-venvs/leo-claude-mktplace/{server}/.venv/` | YES |
| 2 (fallback) | `{marketplace}/mcp-servers/{server}/.venv/` | NO |
**Why cache first?**
- Marketplace directory gets WIPED on every update/reinstall
- Cache directory SURVIVES updates
- False "venv missing" errors waste hours of debugging
**Pattern for hooks checking venvs:**
```bash
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/{server}/.venv/bin/python"
LOCAL_VENV="$MARKETPLACE_ROOT/mcp-servers/{server}/.venv/bin/python"
if [[ -f "$CACHE_VENV" ]]; then
VENV_PATH="$CACHE_VENV"
elif [[ -f "$LOCAL_VENV" ]]; then
VENV_PATH="$LOCAL_VENV"
else
echo "venv missing"
fi
```
**See lesson learned:** [Startup Hooks Must Check Venv Cache Path First](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/lessons/patterns/startup-hooks-must-check-venv-cache-path-first)
### Documentation Paths
| Type | Location | | Type | Location |

View File

@@ -9,18 +9,23 @@ Quick reference for all commands in the Leo Claude Marketplace.
| Plugin | Command | Auto | Manual | Description |
|--------|---------|:----:|:------:|-------------|
| **projman** | `/sprint-plan` | | X | Start sprint planning with AI-guided architecture analysis and issue creation |
-| **projman** | `/sprint-start` | | X | Begin sprint execution with dependency analysis and parallel task coordination (requires approval or `--force`) |
+| **projman** | `/sprint-start` | | X | Begin sprint execution with dependency analysis and parallel task coordination |
| **projman** | `/sprint-status` | | X | Check current sprint progress (add `--diagram` for Mermaid visualization) |
| **projman** | `/review` | | X | Pre-sprint-close code quality review (debug artifacts, security, error handling) |
| **projman** | `/test` | | X | Run tests (`/test run`) or generate tests (`/test gen <target>`) |
| **projman** | `/sprint-close` | | X | Complete sprint and capture lessons learned to Gitea Wiki |
| **projman** | `/labels-sync` | | X | Synchronize label taxonomy from Gitea |
-| **projman** | `/setup` | | X | Auto-detect mode or use `--full`, `--quick`, `--sync`, `--clear-cache` |
+| **projman** | `/setup` | | X | Auto-detect mode or use `--full`, `--quick`, `--sync` |
| **projman** | *SessionStart hook* | X | | Detects git remote vs .env mismatch, warns to run `/setup --sync` |
| **projman** | `/debug` | | X | Diagnostics (`/debug report`) or investigate (`/debug review`) |
| **projman** | `/suggest-version` | | X | Analyze CHANGELOG and recommend semantic version bump |
| **projman** | `/proposal-status` | | X | View proposal and implementation hierarchy with status |
-| **projman** | `/rfc` | | X | RFC lifecycle management (`/rfc create\|list\|review\|approve\|reject`) |
+| **projman** | `/clear-cache` | | X | Clear plugin cache to force fresh configuration reload |
+| **projman** | `/rfc-create` | | X | Create new RFC from conversation or clarified spec |
+| **projman** | `/rfc-list` | | X | List all RFCs grouped by status |
+| **projman** | `/rfc-review` | | X | Submit Draft RFC for review |
+| **projman** | `/rfc-approve` | | X | Approve RFC in Review status for sprint planning |
+| **projman** | `/rfc-reject` | | X | Reject RFC with documented reason |
| **git-flow** | `/commit` | | X | Create commit with auto-generated conventional message |
| **git-flow** | `/commit-push` | | X | Commit and push to remote in one operation |
| **git-flow** | `/commit-merge` | | X | Commit current changes, then merge into target branch |
@@ -54,9 +59,6 @@ Quick reference for all commands in the Leo Claude Marketplace.
| **claude-config-maintainer** | `/config-init` | | X | Initialize new CLAUDE.md for a project |
| **claude-config-maintainer** | `/config-diff` | | X | Track CLAUDE.md changes over time with behavioral impact |
| **claude-config-maintainer** | `/config-lint` | | X | Lint CLAUDE.md for anti-patterns and best practices |
-| **claude-config-maintainer** | `/config-audit-settings` | | X | Audit settings.local.json permissions (100-point score) |
-| **claude-config-maintainer** | `/config-optimize-settings` | | X | Optimize permissions (profiles, consolidation, dry-run) |
-| **claude-config-maintainer** | `/config-permissions-map` | | X | Visual review layer + permission coverage map |
| **cmdb-assistant** | `/initial-setup` | | X | Setup wizard for NetBox MCP server |
| **cmdb-assistant** | `/cmdb-search` | | X | Search NetBox for devices, IPs, sites |
| **cmdb-assistant** | `/cmdb-device` | | X | Manage network devices (create, view, update, delete) |
@@ -90,11 +92,7 @@ Quick reference for all commands in the Leo Claude Marketplace.
| **viz-platform** | `/chart-export` | | X | Export charts to PNG, SVG, PDF via kaleido |
| **viz-platform** | `/accessibility-check` | | X | Color blind validation (WCAG contrast ratios) |
| **viz-platform** | `/breakpoints` | | X | Configure responsive layout breakpoints |
-| **viz-platform** | `/design-review` | | X | Detailed design system audits |
-| **viz-platform** | `/design-gate` | | X | Binary pass/fail design system validation gates |
| **viz-platform** | *SessionStart hook* | X | | Checks DMC version (non-blocking warning) |
-| **data-platform** | `/data-review` | | X | Comprehensive data integrity audits |
-| **data-platform** | `/data-gate` | | X | Binary pass/fail data integrity gates |
| **contract-validator** | `/validate-contracts` | | X | Full marketplace compatibility validation |
| **contract-validator** | `/check-agent` | | X | Validate single agent definition |
| **contract-validator** | `/list-interfaces` | | X | Show all plugin interfaces |
@@ -142,11 +140,11 @@ Full workflow from idea to implementation using RFCs:
```
1. /clarify # Clarify the feature idea
-2. /rfc create # Create RFC from clarified spec
+2. /rfc-create # Create RFC from clarified spec
... refine RFC content ...
-3. /rfc review 0001 # Submit RFC for review
+3. /rfc-review 0001 # Submit RFC for review
... review discussion ...
-4. /rfc approve 0001 # Approve RFC for implementation
+4. /rfc-approve 0001 # Approve RFC for implementation
5. /sprint-plan # Select approved RFC for sprint
... implement feature ...
6. /sprint-close # Complete sprint, RFC marked Implemented
@@ -296,4 +294,4 @@ Ensure credentials are configured in `~/.config/claude/gitea.env`, `~/.config/cl
---
-*Last Updated: 2026-02-02*
+*Last Updated: 2026-01-30*

View File

@@ -398,7 +398,6 @@ PR_REVIEW_AUTO_SUBMIT=false
| **code-sentinel** | None | None | None needed |
| **project-hygiene** | None | None | None needed |
| **claude-config-maintainer** | None | None | None needed |
-| **contract-validator** | None | None | `/initial-setup` |
---
@@ -415,144 +414,6 @@ The command auto-detects that system config exists and runs quick project setup.
---
## Installing Plugins to Consumer Projects
The marketplace provides scripts to install plugins into consumer projects. This sets up the MCP server connections and adds CLAUDE.md integration snippets.
### Install a Plugin
```bash
cd /path/to/leo-claude-mktplace
./scripts/install-plugin.sh <plugin-name> <target-project-path>
```
**Examples:**
```bash
# Install data-platform to a portfolio project
./scripts/install-plugin.sh data-platform ~/projects/personal-portfolio
# Install multiple plugins
./scripts/install-plugin.sh viz-platform ~/projects/personal-portfolio
./scripts/install-plugin.sh projman ~/projects/personal-portfolio
```
**What it does:**
1. Validates the plugin exists in the marketplace
2. Adds MCP server entry to target's `.mcp.json` (if plugin has MCP server)
3. Appends integration snippet to target's `CLAUDE.md`
4. Reports changes and lists available commands
**After installation:** Restart your Claude Code session for MCP tools to become available.
### Uninstall a Plugin
```bash
./scripts/uninstall-plugin.sh <plugin-name> <target-project-path>
```
Removes the MCP server entry and CLAUDE.md integration section.
### List Installed Plugins
```bash
./scripts/list-installed.sh <target-project-path>
```
Shows which marketplace plugins are installed, partially installed, or available.
**Output example:**
```
✓ Fully Installed:
PLUGIN VERSION DESCRIPTION
------ ------- -----------
data-platform 1.3.0 pandas, PostgreSQL, and dbt integration...
viz-platform 1.1.0 DMC validation, Plotly charts, and theming...
○ Available (not installed):
projman 3.4.0 Sprint planning and project management...
```
### Plugins with MCP Servers
Not all plugins have MCP servers. The install script handles this automatically:
| Plugin | Has MCP Server | Notes |
|--------|---------------|-------|
| data-platform | ✓ | pandas, PostgreSQL, dbt tools |
| viz-platform | ✓ | DMC validation, chart, theme tools |
| contract-validator | ✓ | Plugin compatibility validation |
| cmdb-assistant | ✓ (via netbox) | NetBox CMDB tools |
| projman | ✓ (via gitea) | Issue, wiki, PR tools |
| pr-review | ✓ (via gitea) | PR review tools |
| git-flow | ✗ | Commands only |
| doc-guardian | ✗ | Commands and hooks only |
| code-sentinel | ✗ | Commands and hooks only |
| clarity-assist | ✗ | Commands only |
### Script Requirements
- **jq** must be installed (`sudo apt install jq`)
- Scripts are idempotent (safe to run multiple times)
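A hedged sketch of the kind of idempotent `.mcp.json` merge the install script performs (the server name, marketplace path, and `args` below are placeholders and assumptions, not the script's literal behavior):
```bash
SERVER="data-platform"                    # placeholder MCP server name
MARKETPLACE="$HOME/leo-claude-mktplace"   # placeholder marketplace checkout path
TARGET=".mcp.json"                        # consumer project's MCP config

# Add the entry only if it is not already present, so repeated runs are no-ops
if ! jq -e --arg s "$SERVER" '.mcpServers[$s]' "$TARGET" >/dev/null 2>&1; then
  jq --arg s "$SERVER" \
     --arg cmd "$MARKETPLACE/mcp-servers/$SERVER/.venv/bin/python" \
     '.mcpServers[$s] = {command: $cmd, args: ["-m", "mcp_server.server"]}' \
     "$TARGET" > "$TARGET.tmp" && mv "$TARGET.tmp" "$TARGET"   # args are an assumption about how the server is launched
fi
```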
---
## Agent Model Selection
Marketplace agents specify their preferred model using Claude Code's `model` frontmatter field. This allows cost/performance optimization per agent.
### Supported Values
| Value | Description |
|-------|-------------|
| `sonnet` | Default. Balanced performance and cost. |
| `opus` | Higher reasoning depth. Use for complex analysis. |
| `haiku` | Faster, lower cost. Use for mechanical tasks. |
| `inherit` | Use session's current model setting. |
### How It Works
Each agent in `plugins/{plugin}/agents/{agent}.md` has frontmatter like:
```yaml
---
name: planner
description: Sprint planning agent - thoughtful architecture analysis
model: sonnet
---
```
Claude Code reads this field when invoking the agent as a subagent.
### Model Assignments
Agents are assigned models based on their task complexity:
| Model | Agents | Rationale |
|-------|--------|-----------|
| **sonnet** | Planner, Orchestrator, Executor, Code Reviewer, Coordinator, Security Reviewers, Performance Analyst, Data Advisor, Data Analysis, Design Reviewer, Layout Builder, Full Validation, Doc Analyzer, Clarity Coach, Maintainer, CMDB Assistant, Refactor Advisor | Standard reasoning, tool orchestration, code generation |
| **haiku** | Maintainability Auditor, Test Validator, Component Check, Theme Setup, Agent Check, Data Ingestion, Git Assistant | Pattern matching, quick validation, mechanical tasks |
### Overriding Model Selection
**Per-agent override:** Edit the `model:` field in the agent file:
```bash
# Change executor to use opus for heavy implementation work
nano plugins/projman/agents/executor.md
# Change model: sonnet to model: opus
```
**Session-level:** users on an Opus subscription can set an agent's model to `inherit` so the agent uses whatever model the current session is running.
### Best Practices
1. **Default to sonnet** - Good balance for most tasks
2. **Use haiku for speed-sensitive agents** - Sub-agents dispatched in parallel, read-only tasks
3. **Reserve opus for heavy analysis** - Only when sonnet's reasoning isn't sufficient
4. **Use inherit sparingly** - Only when you want session-level control
---
## Automatic Validation Features
### API Validation

View File

@@ -1,20 +0,0 @@
2026-01-26T14:36:42 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T14:37:38 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T14:37:48 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T14:38:05 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T14:38:55 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T14:39:35 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T14:40:19 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T15:02:30 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/tests/test_parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T15:02:37 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/tests/test_parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T15:03:41 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/tests/test_report_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-02-02T10:56:19 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/validation_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-02-02T10:57:49 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/tests/test_validation_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-02-02T10:58:22 | skills | /home/lmiranda/claude-plugins-work/plugins/contract-validator/skills/mcp-tools-reference.md | README.md
2026-02-02T10:58:38 | skills | /home/lmiranda/claude-plugins-work/plugins/contract-validator/skills/validation-rules.md | README.md
2026-02-02T10:59:13 | .claude-plugin | /home/lmiranda/claude-plugins-work/.claude-plugin/marketplace.json | CLAUDE.md .claude-plugin/marketplace.json
2026-02-02T13:55:33 | skills | /home/lmiranda/claude-plugins-work/plugins/projman/skills/visual-output.md | README.md
2026-02-02T13:55:41 | agents | /home/lmiranda/claude-plugins-work/plugins/projman/agents/planner.md | README.md CLAUDE.md
2026-02-02T13:55:55 | agents | /home/lmiranda/claude-plugins-work/plugins/projman/agents/orchestrator.md | README.md CLAUDE.md
2026-02-02T13:56:14 | agents | /home/lmiranda/claude-plugins-work/plugins/projman/agents/executor.md | README.md CLAUDE.md
2026-02-02T13:56:34 | agents | /home/lmiranda/claude-plugins-work/plugins/projman/agents/code-reviewer.md | README.md CLAUDE.md

View File

@@ -131,28 +131,6 @@ class ContractValidatorMCPServer:
"required": ["agent_name", "claude_md_path"] "required": ["agent_name", "claude_md_path"]
} }
), ),
Tool(
name="validate_workflow_integration",
description="Validate that a domain plugin exposes the required advisory interfaces (gate command, review command, advisory agent) expected by projman's domain-consultation skill. Also checks gate contract version compatibility.",
inputSchema={
"type": "object",
"properties": {
"plugin_path": {
"type": "string",
"description": "Path to the domain plugin directory"
},
"domain_label": {
"type": "string",
"description": "The Domain/* label it claims to handle, e.g. Domain/Viz"
},
"expected_contract": {
"type": "string",
"description": "Expected contract version (e.g., 'v1'). If provided, validates the gate command's contract matches."
}
},
"required": ["plugin_path", "domain_label"]
}
),
# Report tools (to be implemented in #188)
Tool(
    name="generate_compatibility_report",
@@ -220,8 +198,6 @@ class ContractValidatorMCPServer:
    result = await self._validate_agent_refs(**arguments)
elif name == "validate_data_flow":
    result = await self._validate_data_flow(**arguments)
elif name == "validate_workflow_integration":
    result = await self._validate_workflow_integration(**arguments)
elif name == "generate_compatibility_report":
    result = await self._generate_compatibility_report(**arguments)
elif name == "list_issues":
@@ -265,17 +241,6 @@ class ContractValidatorMCPServer:
"""Validate agent data flow""" """Validate agent data flow"""
return await self.validation_tools.validate_data_flow(agent_name, claude_md_path) return await self.validation_tools.validate_data_flow(agent_name, claude_md_path)
async def _validate_workflow_integration(
self,
plugin_path: str,
domain_label: str,
expected_contract: str = None
) -> dict:
"""Validate domain plugin exposes required advisory interfaces"""
return await self.validation_tools.validate_workflow_integration(
plugin_path, domain_label, expected_contract
)
# Report tool implementations (Issue #188) # Report tool implementations (Issue #188)
async def _generate_compatibility_report(self, marketplace_path: str, format: str = "markdown") -> dict: async def _generate_compatibility_report(self, marketplace_path: str, format: str = "markdown") -> dict:

View File

@@ -26,7 +26,6 @@ class IssueType(str, Enum):
OPTIONAL_DEPENDENCY = "optional_dependency"
UNDECLARED_OUTPUT = "undeclared_output"
INVALID_SEQUENCE = "invalid_sequence"
MISSING_INTEGRATION = "missing_integration"
class ValidationIssue(BaseModel):
@@ -66,18 +65,6 @@ class DataFlowResult(BaseModel):
issues: list[ValidationIssue] = [] issues: list[ValidationIssue] = []
class WorkflowIntegrationResult(BaseModel):
"""Result of workflow integration validation for domain plugins"""
plugin_name: str
domain_label: str
valid: bool
gate_command_found: bool
gate_contract: Optional[str] = None # Contract version declared by gate command
review_command_found: bool
advisory_agent_found: bool
issues: list[ValidationIssue] = []
class ValidationTools:
"""Tools for validating plugin compatibility and agent references"""
@@ -349,145 +336,3 @@ class ValidationTools:
)
return result.model_dump()
async def validate_workflow_integration(
self,
plugin_path: str,
domain_label: str,
expected_contract: Optional[str] = None
) -> dict:
"""
Validate that a domain plugin exposes required advisory interfaces.
Checks for:
- Gate command (e.g., /design-gate, /data-gate) - REQUIRED
- Gate contract version (gate_contract in frontmatter) - INFO if missing
- Review command (e.g., /design-review, /data-review) - recommended
- Advisory agent referencing the domain label - recommended
Args:
plugin_path: Path to the domain plugin directory
domain_label: The Domain/* label it claims to handle (e.g., Domain/Viz)
expected_contract: Expected contract version (e.g., 'v1'). If provided,
validates the gate command's contract matches.
Returns:
Validation result with found interfaces and issues
"""
import re
plugin_path_obj = Path(plugin_path)
issues = []
# Extract plugin name from path
plugin_name = plugin_path_obj.name
if not plugin_path_obj.exists():
return {
"error": f"Plugin directory not found: {plugin_path}",
"plugin_path": plugin_path,
"domain_label": domain_label
}
# Extract domain short name from label (e.g., "Domain/Viz" -> "viz", "Domain/Data" -> "data")
domain_short = domain_label.split("/")[-1].lower() if "/" in domain_label else domain_label.lower()
# Check for gate command
commands_dir = plugin_path_obj / "commands"
gate_command_found = False
gate_contract = None
gate_patterns = ["pass", "fail", "PASS", "FAIL", "Binary pass/fail", "gate"]
if commands_dir.exists():
for cmd_file in commands_dir.glob("*.md"):
if "gate" in cmd_file.name.lower():
# Verify it's actually a gate command by checking content
content = cmd_file.read_text()
if any(pattern in content for pattern in gate_patterns):
gate_command_found = True
# Parse frontmatter for gate_contract
frontmatter_match = re.match(r'^---\n(.*?)\n---', content, re.DOTALL)
if frontmatter_match:
frontmatter = frontmatter_match.group(1)
contract_match = re.search(r'gate_contract:\s*(\S+)', frontmatter)
if contract_match:
gate_contract = contract_match.group(1)
break
if not gate_command_found:
issues.append(ValidationIssue(
severity=IssueSeverity.ERROR,
issue_type=IssueType.MISSING_INTEGRATION,
message=f"Plugin '{plugin_name}' lacks a gate command for domain '{domain_label}'",
location=str(commands_dir),
suggestion=f"Create commands/{domain_short}-gate.md with binary PASS/FAIL output"
))
# Check for review command
review_command_found = False
if commands_dir.exists():
for cmd_file in commands_dir.glob("*.md"):
if "review" in cmd_file.name.lower() and "gate" not in cmd_file.name.lower():
review_command_found = True
break
if not review_command_found:
issues.append(ValidationIssue(
severity=IssueSeverity.WARNING,
issue_type=IssueType.MISSING_INTEGRATION,
message=f"Plugin '{plugin_name}' lacks a review command for domain '{domain_label}'",
location=str(commands_dir),
suggestion=f"Create commands/{domain_short}-review.md for detailed audits"
))
# Check for advisory agent
agents_dir = plugin_path_obj / "agents"
advisory_agent_found = False
if agents_dir.exists():
for agent_file in agents_dir.glob("*.md"):
content = agent_file.read_text()
# Check if agent references the domain label or gate command
if domain_label in content or f"{domain_short}-gate" in content.lower() or "advisor" in agent_file.name.lower() or "reviewer" in agent_file.name.lower():
advisory_agent_found = True
break
if not advisory_agent_found:
issues.append(ValidationIssue(
severity=IssueSeverity.WARNING,
issue_type=IssueType.MISSING_INTEGRATION,
message=f"Plugin '{plugin_name}' lacks an advisory agent for domain '{domain_label}'",
location=str(agents_dir) if agents_dir.exists() else str(plugin_path_obj),
suggestion=f"Create agents/{domain_short}-advisor.md referencing '{domain_label}'"
))
# Check gate contract version
if gate_command_found:
if not gate_contract:
issues.append(ValidationIssue(
severity=IssueSeverity.INFO,
issue_type=IssueType.MISSING_INTEGRATION,
message=f"Gate command does not declare a contract version",
location=str(commands_dir),
suggestion="Consider adding `gate_contract: v1` to frontmatter for version tracking"
))
elif expected_contract and gate_contract != expected_contract:
issues.append(ValidationIssue(
severity=IssueSeverity.WARNING,
issue_type=IssueType.INTERFACE_MISMATCH,
message=f"Contract version mismatch: gate declares {gate_contract}, projman expects {expected_contract}",
location=str(commands_dir),
suggestion=f"Update domain-consultation.md Gate Command Reference table to {gate_contract}, or update gate command to {expected_contract}"
))
result = WorkflowIntegrationResult(
plugin_name=plugin_name,
domain_label=domain_label,
valid=gate_command_found, # Only gate is required for validity
gate_command_found=gate_command_found,
gate_contract=gate_contract,
review_command_found=review_command_found,
advisory_agent_found=advisory_agent_found,
issues=issues
)
return result.model_dump()

View File

@@ -254,261 +254,3 @@ async def test_validate_data_flow_missing_producer(validation_tools, tmp_path):
# Should have warning about missing producer
warning_issues = [i for i in result["issues"] if i["severity"].value == "warning"]
assert len(warning_issues) > 0
# --- Workflow Integration Tests ---
@pytest.fixture
def domain_plugin_complete(tmp_path):
"""Create a complete domain plugin with gate, review, and advisory agent"""
plugin_dir = tmp_path / "viz-platform"
plugin_dir.mkdir()
(plugin_dir / ".claude-plugin").mkdir()
(plugin_dir / "commands").mkdir()
(plugin_dir / "agents").mkdir()
# Gate command with PASS/FAIL pattern
gate_cmd = plugin_dir / "commands" / "design-gate.md"
gate_cmd.write_text("""# /design-gate
Binary pass/fail validation gate for design system compliance.
## Output
- **PASS**: All design system checks passed
- **FAIL**: Design system violations detected
""")
# Review command
review_cmd = plugin_dir / "commands" / "design-review.md"
review_cmd.write_text("""# /design-review
Comprehensive design system audit.
""")
# Advisory agent
agent = plugin_dir / "agents" / "design-reviewer.md"
agent.write_text("""# design-reviewer
Design system compliance auditor.
Handles issues with `Domain/Viz` label.
""")
return str(plugin_dir)
@pytest.fixture
def domain_plugin_missing_gate(tmp_path):
"""Create domain plugin with review and agent but no gate command"""
plugin_dir = tmp_path / "data-platform"
plugin_dir.mkdir()
(plugin_dir / ".claude-plugin").mkdir()
(plugin_dir / "commands").mkdir()
(plugin_dir / "agents").mkdir()
# Review command (but no gate)
review_cmd = plugin_dir / "commands" / "data-review.md"
review_cmd.write_text("""# /data-review
Data integrity audit.
""")
# Advisory agent
agent = plugin_dir / "agents" / "data-advisor.md"
agent.write_text("""# data-advisor
Data integrity advisor for Domain/Data issues.
""")
return str(plugin_dir)
@pytest.fixture
def domain_plugin_minimal(tmp_path):
"""Create minimal plugin with no commands or agents"""
plugin_dir = tmp_path / "minimal-plugin"
plugin_dir.mkdir()
(plugin_dir / ".claude-plugin").mkdir()
readme = plugin_dir / "README.md"
readme.write_text("# Minimal Plugin\n\nNo commands or agents.")
return str(plugin_dir)
@pytest.mark.asyncio
async def test_validate_workflow_integration_complete(validation_tools, domain_plugin_complete):
"""Test complete domain plugin returns valid with all interfaces found"""
result = await validation_tools.validate_workflow_integration(
domain_plugin_complete,
"Domain/Viz"
)
assert "error" not in result
assert result["valid"] is True
assert result["gate_command_found"] is True
assert result["review_command_found"] is True
assert result["advisory_agent_found"] is True
# May have INFO issue about missing contract version (not an error/warning)
error_or_warning = [i for i in result["issues"]
if i["severity"].value in ("error", "warning")]
assert len(error_or_warning) == 0
@pytest.mark.asyncio
async def test_validate_workflow_integration_missing_gate(validation_tools, domain_plugin_missing_gate):
"""Test plugin missing gate command returns invalid with ERROR"""
result = await validation_tools.validate_workflow_integration(
domain_plugin_missing_gate,
"Domain/Data"
)
assert "error" not in result
assert result["valid"] is False
assert result["gate_command_found"] is False
assert result["review_command_found"] is True
assert result["advisory_agent_found"] is True
# Should have one ERROR for missing gate
error_issues = [i for i in result["issues"] if i["severity"].value == "error"]
assert len(error_issues) == 1
assert "gate" in error_issues[0]["message"].lower()
@pytest.mark.asyncio
async def test_validate_workflow_integration_minimal(validation_tools, domain_plugin_minimal):
"""Test minimal plugin returns invalid with multiple issues"""
result = await validation_tools.validate_workflow_integration(
domain_plugin_minimal,
"Domain/Test"
)
assert "error" not in result
assert result["valid"] is False
assert result["gate_command_found"] is False
assert result["review_command_found"] is False
assert result["advisory_agent_found"] is False
# Should have one ERROR (gate) and two WARNINGs (review, agent)
error_issues = [i for i in result["issues"] if i["severity"].value == "error"]
warning_issues = [i for i in result["issues"] if i["severity"].value == "warning"]
assert len(error_issues) == 1
assert len(warning_issues) == 2
@pytest.mark.asyncio
async def test_validate_workflow_integration_nonexistent_plugin(validation_tools, tmp_path):
"""Test error when plugin directory doesn't exist"""
result = await validation_tools.validate_workflow_integration(
str(tmp_path / "nonexistent"),
"Domain/Test"
)
assert "error" in result
assert "not found" in result["error"].lower()
# --- Gate Contract Version Tests ---
@pytest.fixture
def domain_plugin_with_contract(tmp_path):
"""Create domain plugin with gate_contract: v1 in frontmatter"""
plugin_dir = tmp_path / "viz-platform-versioned"
plugin_dir.mkdir()
(plugin_dir / ".claude-plugin").mkdir()
(plugin_dir / "commands").mkdir()
(plugin_dir / "agents").mkdir()
# Gate command with gate_contract in frontmatter
gate_cmd = plugin_dir / "commands" / "design-gate.md"
gate_cmd.write_text("""---
description: Design system compliance gate (pass/fail)
gate_contract: v1
---
# /design-gate
Binary pass/fail validation gate for design system compliance.
## Output
- **PASS**: All design system checks passed
- **FAIL**: Design system violations detected
""")
# Review command
review_cmd = plugin_dir / "commands" / "design-review.md"
review_cmd.write_text("""# /design-review
Comprehensive design system audit.
""")
# Advisory agent
agent = plugin_dir / "agents" / "design-reviewer.md"
agent.write_text("""# design-reviewer
Design system compliance auditor for Domain/Viz issues.
""")
return str(plugin_dir)
@pytest.mark.asyncio
async def test_validate_workflow_contract_match(validation_tools, domain_plugin_with_contract):
"""Test that matching expected_contract produces no warning"""
result = await validation_tools.validate_workflow_integration(
domain_plugin_with_contract,
"Domain/Viz",
expected_contract="v1"
)
assert "error" not in result
assert result["valid"] is True
assert result["gate_contract"] == "v1"
# Should have no warnings about contract mismatch
warning_issues = [i for i in result["issues"] if i["severity"].value == "warning"]
contract_warnings = [i for i in warning_issues if "contract" in i["message"].lower()]
assert len(contract_warnings) == 0
@pytest.mark.asyncio
async def test_validate_workflow_contract_mismatch(validation_tools, domain_plugin_with_contract):
"""Test that mismatched expected_contract produces WARNING"""
result = await validation_tools.validate_workflow_integration(
domain_plugin_with_contract,
"Domain/Viz",
expected_contract="v2" # Gate has v1
)
assert "error" not in result
assert result["valid"] is True # Contract mismatch doesn't affect validity
assert result["gate_contract"] == "v1"
# Should have warning about contract mismatch
warning_issues = [i for i in result["issues"] if i["severity"].value == "warning"]
contract_warnings = [i for i in warning_issues if "contract" in i["message"].lower()]
assert len(contract_warnings) == 1
assert "mismatch" in contract_warnings[0]["message"].lower()
assert "v1" in contract_warnings[0]["message"]
assert "v2" in contract_warnings[0]["message"]
@pytest.mark.asyncio
async def test_validate_workflow_no_contract(validation_tools, domain_plugin_complete):
"""Test that missing gate_contract produces INFO suggestion"""
result = await validation_tools.validate_workflow_integration(
domain_plugin_complete,
"Domain/Viz"
)
assert "error" not in result
assert result["valid"] is True
assert result["gate_contract"] is None
# Should have info issue about missing contract
info_issues = [i for i in result["issues"] if i["severity"].value == "info"]
contract_info = [i for i in info_issues if "contract" in i["message"].lower()]
assert len(contract_info) == 1
assert "does not declare" in contract_info[0]["message"].lower()

View File

@@ -1,5 +0,0 @@
2026-01-26T11:40:11 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/viz-platform/registry/dmc_2_5.json | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T13:46:31 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/viz-platform/tests/test_chart_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T13:46:32 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/viz-platform/tests/test_theme_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T13:46:34 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/viz-platform/tests/test_theme_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T13:46:35 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/viz-platform/tests/test_theme_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md

View File

@@ -1,6 +1,6 @@
{
"name": "clarity-assist",
"version": "1.2.0", "version": "1.0.0",
"description": "Prompt optimization and requirement clarification with ND-friendly accommodations", "description": "Prompt optimization and requirement clarification with ND-friendly accommodations",
"author": { "author": {
"name": "Leo Miranda", "name": "Leo Miranda",

View File

@@ -1,9 +1,3 @@
---
name: clarity-coach
description: Patient, structured coach helping users articulate requirements clearly. Uses neurodivergent-friendly communication patterns.
model: sonnet
---
# Clarity Coach Agent
## Visual Output Requirements

View File

@@ -1,7 +1,7 @@
{
"name": "claude-config-maintainer",
"version": "1.2.0", "version": "1.0.0",
"description": "Maintains and optimizes CLAUDE.md and settings.local.json configuration files for Claude Code projects", "description": "Maintains and optimizes CLAUDE.md configuration files for Claude Code projects",
"author": { "author": {
"name": "Leo Miranda", "name": "Leo Miranda",
"email": "leobmiranda@gmail.com" "email": "leobmiranda@gmail.com"
@@ -14,9 +14,7 @@
"configuration", "configuration",
"optimization", "optimization",
"claude-md", "claude-md",
"developer-tools", "developer-tools"
"settings",
"permissions"
],
"commands": ["./commands/"]
}

View File

@@ -1,7 +1,6 @@
---
name: maintainer
description: CLAUDE.md optimization and maintenance agent
model: sonnet
---
# CLAUDE.md Maintainer Agent
@@ -115,54 +114,7 @@ Report plugin coverage percentage and offer to add missing integrations:
- Display the integration content that would be added
- Ask user for confirmation before modifying CLAUDE.md
### 2. Audit Settings Files ### 2. Optimize CLAUDE.md Structure
When auditing settings files, perform:
#### A. Permission Analysis
Read `.claude/settings.local.json` (primary) and check `.claude/settings.json` and `~/.claude.json` project entries (secondary).
Evaluate using `skills/settings-optimization.md`:
**Redundancy:**
- Duplicate entries in allow/deny arrays
- Subset patterns covered by broader patterns
- Patterns that could be merged
**Coverage:**
- Common safe tools missing from allow list
- MCP server tools not covered
- Directory scopes with no matching permission
**Safety Alignment:**
- Deny rules cover secrets and destructive commands
- Allow rules don't bypass active review layers
- No overly broad patterns without justification
**Profile Fit:**
- Compare against recommended profile for the project's review architecture
- Identify specific additions/removals to reach target profile
#### B. Review Layer Verification
Before recommending auto-allow patterns, verify active review layers:
1. Read `plugins/*/hooks/hooks.json` for each installed plugin
2. Map hook types (PreToolUse, PostToolUse) to tool matchers (Write, Edit, Bash)
3. Confirm plugins are listed in `.claude-plugin/marketplace.json`
4. Only recommend auto-allow for scopes covered by ≥2 verified review layers
#### C. Settings Efficiency Score (100 points)
| Category | Points |
|----------|--------|
| Redundancy | 25 |
| Coverage | 25 |
| Safety Alignment | 25 |
| Profile Fit | 25 |
### 3. Optimize CLAUDE.md Structure
**Recommended Structure:**
@@ -197,7 +149,7 @@ Common issues and solutions.
- Use headers that scan easily
- Include examples where they add clarity
### 4. Apply Best Practices ### 3. Apply Best Practices
**DO:**
- Use clear, direct language
@@ -214,7 +166,7 @@ Common issues and solutions.
- Add generic advice that applies to all projects
- Use emojis unless project requires them
### 5. Generate Improvement Reports ### 4. Generate Improvement Reports
After analyzing a CLAUDE.md, provide:
@@ -250,7 +202,7 @@ Suggested Actions:
Would you like me to implement these improvements?
```
### 6. Insert Plugin Integrations ### 5. Insert Plugin Integrations
When adding plugin integration content to CLAUDE.md:
@@ -285,7 +237,7 @@ Add this integration to CLAUDE.md?
- Allow users to skip specific plugins they don't want documented
- Preserve existing CLAUDE.md structure and content
### 7. Create New CLAUDE.md Files ### 6. Create New CLAUDE.md Files
When creating a new CLAUDE.md:

View File

@@ -1,6 +1,6 @@
## CLAUDE.md Maintenance (claude-config-maintainer)
This project uses the **claude-config-maintainer** plugin to analyze and optimize CLAUDE.md and settings.local.json configuration files. This project uses the **claude-config-maintainer** plugin to analyze and optimize CLAUDE.md configuration files.
### Available Commands
@@ -9,13 +9,8 @@ This project uses the **claude-config-maintainer** plugin to analyze and optimiz
| `/config-analyze` | Analyze CLAUDE.md for optimization opportunities with 100-point scoring |
| `/config-optimize` | Automatically optimize CLAUDE.md structure and content |
| `/config-init` | Initialize a new CLAUDE.md file for a project |
| `/config-diff` | Track CLAUDE.md changes over time with behavioral impact analysis |
| `/config-lint` | Lint CLAUDE.md for anti-patterns and best practices (31 rules) |
| `/config-audit-settings` | Audit settings.local.json permissions with 100-point scoring |
| `/config-optimize-settings` | Optimize permission patterns and apply named profiles |
| `/config-permissions-map` | Visual map of review layers and permission coverage |
### CLAUDE.md Scoring System ### Scoring System
The analysis uses a 100-point scoring system across four categories:
@@ -26,31 +21,10 @@ The analysis uses a 100-point scoring system across four categories:
| Completeness | 25 | Overview, quick start, critical rules, workflows |
| Conciseness | 25 | Efficiency, no repetition, appropriate length |
### Settings Scoring System
The settings audit uses a 100-point scoring system across four categories:
| Category | Points | What It Measures |
|----------|--------|------------------|
| Redundancy | 25 | No duplicates, no subset patterns, efficient rules |
| Coverage | 25 | Common tools allowed, MCP servers covered |
| Safety Alignment | 25 | Deny rules for secrets/destructive ops, review layers verified |
| Profile Fit | 25 | Alignment with recommended profile for review layer count |
### Permission Profiles
| Profile | Use Case |
|---------|----------|
| `conservative` | New users, minimal auto-allow, prompts for most writes |
| `reviewed` | Projects with 2+ review layers (code-sentinel, doc-guardian, PR review) |
| `autonomous` | Trusted CI/sandboxed environments only |
### Usage Guidelines
- Run `/config-analyze` periodically to assess CLAUDE.md quality
- Run `/config-audit-settings` to check permission efficiency
- Target a score of **70+/100** for effective Claude Code operation
- Address HIGH priority issues first when optimizing
- Use `/config-init` when setting up new projects to start with best practices
- Use `/config-permissions-map` to visualize review layer coverage
- Re-analyze after making changes to verify improvements

View File

@@ -1,204 +0,0 @@
---
name: config-audit-settings
description: Audit settings.local.json for permission optimization opportunities
---
# /config-audit-settings
Audit Claude Code `settings.local.json` permissions with 100-point scoring across redundancy, coverage, safety alignment, and profile fit.
## Skills to Load
Before executing, load:
- `skills/visual-header.md`
- `skills/settings-optimization.md`
## Visual Output
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - Settings Audit |
+-----------------------------------------------------------------+
```
## Usage
```
/config-audit-settings # Full audit with recommendations
/config-audit-settings --diagram # Include Mermaid diagram of review layer coverage
```
## Workflow
### Step 1: Locate Settings Files
Search in order:
1. `.claude/settings.local.json` (primary target)
2. `.claude/settings.json` (shared config)
3. `~/.claude.json` project entry (legacy)
Report which format is in use.
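A minimal sketch of this search order (paths are the three locations listed above; the legacy `~/.claude.json` case would still need its per-project entry resolved):
```python
from pathlib import Path

def locate_settings() -> tuple[str, Path] | None:
    """Return (format, path) for the first settings file found, or None."""
    candidates = [
        ("local", Path(".claude/settings.local.json")),
        ("shared", Path(".claude/settings.json")),
        ("legacy", Path.home() / ".claude.json"),  # per-project entry lives inside
    ]
    for fmt, path in candidates:
        if path.exists():
            return fmt, path
    return None
```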
### Step 2: Parse Permission Arrays
Extract and analyze:
- `permissions.allow` array
- `permissions.deny` array
- `permissions.ask` array (if present)
- Legacy `allowedTools` array (if legacy format)
### Step 3: Run Pattern Consolidation Analysis
Using `settings-optimization.md` Section 3, detect:
| Check | Description |
|-------|-------------|
| Duplicates | Exact same pattern appearing multiple times |
| Subsets | Narrower patterns covered by broader ones |
| Merge candidates | 4+ similar patterns that could be consolidated |
| Overly broad | Unscoped tool permissions (e.g., `Bash` without pattern) |
| Stale entries | Patterns referencing non-existent paths |
| Conflicts | Same pattern in both allow and deny |
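A rough sketch of the duplicate and subset checks above (heuristic only; the real pattern matcher belongs to Claude Code, and the `is_subset` rule here only handles `Tool(prefix/**)` globs):
```python
def find_redundancies(allow: list[str]) -> dict[str, list[str]]:
    """Detect exact duplicates and subset patterns in an allow list (sketch)."""
    seen: set[str] = set()
    duplicates = []
    for pattern in allow:
        if pattern in seen:
            duplicates.append(pattern)
        seen.add(pattern)

    def is_subset(narrow: str, broad: str) -> bool:
        # e.g. "Write(plugins/projman/*)" is covered by "Write(plugins/**)"
        if narrow == broad or "(" not in narrow or not broad.endswith("**)"):
            return False
        tool_n, scope_n = narrow.split("(", 1)
        tool_b, scope_b = broad.split("(", 1)
        return tool_n == tool_b and scope_n.startswith(scope_b[:-3])

    subsets = [n for n in allow if any(is_subset(n, b) for b in allow)]
    return {"duplicates": duplicates, "subsets": subsets}
```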
### Step 4: Detect Active Marketplace Hooks
Read `plugins/*/hooks/hooks.json` files:
```bash
# Check each plugin's hooks
plugins/code-sentinel/hooks/hooks.json # PreToolUse security
plugins/doc-guardian/hooks/hooks.json # PostToolUse drift detection
plugins/project-hygiene/hooks/hooks.json # PostToolUse cleanup
plugins/data-platform/hooks/hooks.json # PostToolUse schema diff
plugins/contract-validator/hooks/hooks.json # Plugin validation
```
Parse each to identify:
- Hook event type (PreToolUse, PostToolUse)
- Tool matchers (Write, Edit, MultiEdit, Bash)
- Whether hook is command type (reliable) or prompt type (unreliable)
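A sketch of the detection pass (assumes the hook layout shown in the settings-optimization skill, a top-level `hooks` array with `event`, `type`, and `tools` fields; adjust if a plugin uses a different schema):
```python
import json
from pathlib import Path

def detect_active_hooks(repo_root: str = ".") -> list[dict]:
    """Collect command-type hooks from each plugin's hooks.json (sketch)."""
    active = []
    for hooks_file in Path(repo_root).glob("plugins/*/hooks/hooks.json"):
        data = json.loads(hooks_file.read_text())
        for hook in data.get("hooks", []):
            if hook.get("type") != "command":   # prompt-type hooks are unreliable
                continue
            active.append({
                "plugin": hooks_file.parent.parent.name,
                "event": hook.get("event"),
                "tools": hook.get("tools", []),
            })
    return active
```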
### Step 5: Map Review Layers to Directory Scopes
For each directory scope in `settings-optimization.md` Section 4:
1. Count how many review layers are verified active
2. Determine if auto-allow is justified (≥2 layers required)
3. Note any scopes that lack coverage
### Step 6: Compare Against Recommended Profile
Based on review layer count:
- 0-1 layers: Recommend `conservative` profile
- 2+ layers: Recommend `reviewed` profile
- CI/sandboxed: May recommend `autonomous` profile
Calculate profile fit percentage.
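The recommendation rule reduces to a small decision (sketch; the profile fit percentage itself would be computed against the named profiles in `settings-optimization.md`):
```python
def recommended_profile(verified_layers: int, sandboxed: bool = False) -> str:
    """Map verified review layer count to a target profile (sketch)."""
    if sandboxed:
        return "autonomous"   # CI/containers only
    return "reviewed" if verified_layers >= 2 else "conservative"
```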
### Step 7: Generate Scored Report
Calculate scores using `settings-optimization.md` Section 6.
## Output Format
```
Settings Efficiency Score: XX/100
Redundancy: XX/25
Coverage: XX/25
Safety Alignment: XX/25
Profile Fit: XX/25
Current Profile: [closest match or "custom"]
Recommended Profile: [target based on review layers]
Issues Found:
🔴 CRITICAL: [description]
🟠 HIGH: [description]
🟡 MEDIUM: [description]
🔵 LOW: [description]
Active Review Layers Detected:
✓ code-sentinel (PreToolUse: Write|Edit|MultiEdit)
✓ doc-guardian (PostToolUse: Write|Edit|MultiEdit)
✓ project-hygiene (PostToolUse: Write|Edit)
✗ data-platform schema-diff (not detected)
Recommendations:
1. [specific action with pattern]
2. [specific action with pattern]
...
Follow-Up Actions:
1. Run /config-optimize-settings to apply recommendations
2. Run /config-optimize-settings --dry-run to preview first
3. Run /config-optimize-settings --profile=reviewed to apply profile
```
## Diagram Output (--diagram flag)
When `--diagram` is specified, generate a Mermaid flowchart showing:
**Before generating:** Read `/mnt/skills/user/mermaid-diagrams/SKILL.md` for diagram requirements.
**Diagram structure:**
- Left column: File operation types (Write, Edit, Bash)
- Middle: Review layers that intercept each operation
- Right column: Current permission status (auto-allowed, prompted, denied)
**Color coding:**
- PreToolUse hooks: Blue
- PostToolUse hooks: Green
- Sprint Approval: Amber
- PR Review: Purple
Example structure:
```mermaid
flowchart LR
subgraph Operations
W[Write]
E[Edit]
B[Bash]
end
subgraph Review Layers
CS[code-sentinel]
DG[doc-guardian]
PR[pr-review]
end
subgraph Permission
A[Auto-allowed]
P[Prompted]
D[Denied]
end
W --> CS
W --> DG
E --> CS
E --> DG
CS --> A
DG --> A
B --> P
classDef preHook fill:#e3f2fd
classDef postHook fill:#e8f5e9
classDef prReview fill:#f3e5f5
class CS preHook
class DG postHook
class PR prReview
```
## Issue Severity Levels
| Severity | Icon | Examples |
|----------|------|----------|
| CRITICAL | 🔴 | Unscoped `Bash` in allow, missing deny for secrets |
| HIGH | 🟠 | Overly broad patterns, missing MCP coverage |
| MEDIUM | 🟡 | Subset redundancy, merge candidates |
| LOW | 🔵 | Exact duplicates, minor optimizations |
## DO NOT
- Modify any files (this is audit only)
- Recommend `autonomous` profile unless explicitly sandboxed environment
- Recommend auto-allow for scopes with <2 verified review layers
- Skip hook verification before making recommendations

View File

@@ -1,243 +0,0 @@
---
name: config-optimize-settings
description: Optimize settings.local.json permissions based on audit recommendations
---
# /config-optimize-settings
Optimize Claude Code `settings.local.json` permission patterns and apply named profiles.
## Skills to Load
Before executing, load:
- `skills/visual-header.md`
- `skills/settings-optimization.md`
- `skills/pre-change-protocol.md`
## Visual Output
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - Settings Optimization |
+-----------------------------------------------------------------+
```
## Usage
```
/config-optimize-settings # Apply audit recommendations
/config-optimize-settings --dry-run # Preview only, no changes
/config-optimize-settings --profile=reviewed # Apply named profile
/config-optimize-settings --consolidate-only # Only merge/dedupe, no new rules
```
## Options
| Option | Description |
|--------|-------------|
| `--dry-run` | Preview changes without applying |
| `--profile=NAME` | Apply named profile (`conservative`, `reviewed`, `autonomous`) |
| `--consolidate-only` | Only deduplicate and merge patterns, don't add new rules |
| `--no-backup` | Skip backup (not recommended) |
## Workflow
### Step 1: Run Audit Analysis
Execute the same analysis as `/config-audit-settings`:
1. Locate settings file
2. Parse permission arrays
3. Detect issues (duplicates, subsets, merge candidates, etc.)
4. Verify active review layers
5. Calculate current score
### Step 2: Generate Optimization Plan
Based on audit results, create a change plan:
**For `--consolidate-only`:**
- Remove exact duplicates
- Remove subset patterns covered by broader patterns
- Merge similar patterns (4+ threshold)
- Remove stale patterns for non-existent paths
- Remove conflicting allow entries that are already denied
**For `--profile=NAME`:**
- Calculate diff between current permissions and target profile
- Show additions and removals
- Preserve any custom deny rules not in profile
**For default (full optimization):**
- Apply all consolidation changes
- Add recommended patterns based on verified review layers
- Suggest profile alignment if appropriate
### Step 3: Show Before/After Preview
**MANDATORY:** Always show preview before applying changes.
```
Current Settings:
allow: [12 patterns]
deny: [4 patterns]
Proposed Changes:
REMOVE from allow (redundant):
- Write(plugins/projman/*) [covered by Write(plugins/**)]
- Write(plugins/git-flow/*) [covered by Write(plugins/**)]
- Bash(git status) [covered by Bash(git *)]
ADD to allow (recommended):
+ Bash(npm *) [2 review layers active]
+ Bash(pytest *) [2 review layers active]
ADD to deny (security):
+ Bash(curl * | bash*) [missing safety rule]
After Optimization:
allow: [10 patterns]
deny: [5 patterns]
Score Impact: 67/100 → 85/100 (+18 points)
```
### Step 4: Request User Approval
Ask for confirmation before proceeding:
```
Apply these changes to .claude/settings.local.json?
[1] Yes, apply changes
[2] No, cancel
[3] Apply partial (select which changes)
```
### Step 5: Create Backup
**Before any write operation:**
```bash
# Backup location
.claude/backups/settings.local.json.{YYYYMMDD-HHMMSS}
```
Create the `.claude/backups/` directory if it doesn't exist.
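A minimal sketch of the backup step (filename format copied from above; `shutil.copy2` preserves timestamps):
```python
import shutil
from datetime import datetime
from pathlib import Path

def backup_settings(settings_path: str = ".claude/settings.local.json") -> Path:
    """Copy settings into .claude/backups/ with a timestamp suffix (sketch)."""
    backups = Path(".claude/backups")
    backups.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = backups / f"{Path(settings_path).name}.{stamp}"
    shutil.copy2(settings_path, dest)
    return dest
```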
### Step 6: Apply Changes
Write the optimized `settings.local.json` file.
### Step 7: Verify
Re-read the file and re-calculate the score to confirm improvement.
```
Optimization Complete!
Backup saved: .claude/backups/settings.local.json.20260202-143022
Settings Efficiency Score: 85/100 (+18 from 67)
Redundancy: 25/25 (+8)
Coverage: 22/25 (+5)
Safety Alignment: 23/25 (+3)
Profile Fit: 15/25 (+2)
Changes applied:
- Removed 3 redundant patterns
- Added 2 recommended patterns
- Added 1 safety deny rule
```
## Profile Application
When using `--profile=NAME`:
### `conservative`
```
Switching to conservative profile...
This profile:
- Allows: Read, Glob, Grep, LS, basic Bash commands
- Allows: Write/Edit only for docs/
- Denies: .env*, secrets/, rm -rf, sudo
All other Write/Edit operations will prompt for approval.
```
### `reviewed`
```
Switching to reviewed profile...
Prerequisites verified:
✓ code-sentinel hook active (PreToolUse)
✓ doc-guardian hook active (PostToolUse)
✓ 2+ review layers detected
This profile:
- Allows: All file operations (Edit, Write, MultiEdit)
- Allows: Scoped Bash commands (git, npm, python, etc.)
- Denies: .env*, secrets/, rm -rf, sudo, curl|bash
```
### `autonomous`
```
⚠️ WARNING: Autonomous profile requested
This profile allows unscoped Bash execution.
Only use in fully sandboxed environments (CI, containers).
Confirm this is a sandboxed environment?
[1] Yes, this is sandboxed - apply autonomous profile
[2] No, cancel
```
## Safety Rules
1. **ALWAYS backup before writing** (unless `--no-backup`)
2. **NEVER remove deny rules without explicit confirmation**
3. **NEVER add unscoped `Bash` to allow** — always use scoped patterns
4. **Preview is MANDATORY** before applying changes
5. **Verify review layers** before recommending broad permissions
## Output Format
### Dry Run Output
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - Settings Optimization |
+-----------------------------------------------------------------+
DRY RUN - No changes will be made
[... preview content ...]
To apply these changes, run:
/config-optimize-settings
```
### Applied Output
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - Settings Optimization |
+-----------------------------------------------------------------+
Optimization Applied Successfully
Backup: .claude/backups/settings.local.json.20260202-143022
[... summary of changes ...]
Score: 67/100 → 85/100
```
## DO NOT
- Apply changes without showing preview
- Remove deny rules silently
- Add unscoped `Bash` permission
- Skip backup without explicit `--no-backup` flag
- Apply `autonomous` profile without sandbox confirmation
- Recommend broad permissions without verifying review layers

View File

@@ -1,256 +0,0 @@
---
name: config-permissions-map
description: Generate visual map of review layers and permission coverage
---
# /config-permissions-map
Generate a Mermaid diagram showing the relationship between file operations, review layers, and permission status.
## Skills to Load
Before executing, load:
- `skills/visual-header.md`
- `skills/settings-optimization.md`
Also read: `/mnt/skills/user/mermaid-diagrams/SKILL.md` (for diagram requirements)
## Visual Output
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - Permissions Map |
+-----------------------------------------------------------------+
```
## Usage
```
/config-permissions-map # Generate and display diagram
/config-permissions-map --save # Save diagram to .mermaid file
```
## Workflow
### Step 1: Detect Active Hooks
Read all plugin hooks from the marketplace:
```
plugins/code-sentinel/hooks/hooks.json
plugins/doc-guardian/hooks/hooks.json
plugins/project-hygiene/hooks/hooks.json
plugins/data-platform/hooks/hooks.json
plugins/contract-validator/hooks/hooks.json
plugins/cmdb-assistant/hooks/hooks.json
```
For each hook, extract:
- Event type (PreToolUse, PostToolUse, SessionStart, etc.)
- Tool matchers (Write, Edit, MultiEdit, Bash patterns)
- Hook command/script
### Step 2: Map Hooks to File Scopes
Create a mapping of which review layers cover which operations:
| Operation | PreToolUse Hooks | PostToolUse Hooks | Other Gates |
|-----------|------------------|-------------------|-------------|
| Write | code-sentinel | doc-guardian, project-hygiene | PR review |
| Edit | code-sentinel | doc-guardian, project-hygiene | PR review |
| MultiEdit | code-sentinel | doc-guardian | PR review |
| Bash(git *) | git-flow | — | — |
### Step 3: Read Current Permissions
Load `.claude/settings.local.json` and parse:
- `allow` array → auto-allowed operations
- `deny` array → blocked operations
- `ask` array → always-prompted operations
### Step 4: Generate Mermaid Flowchart
**Diagram requirements (from mermaid-diagrams skill):**
- Use `classDef` for styling
- Maximum 3 colors (blue, green, amber/purple)
- Semantic arrow labels
- Left-to-right flow
**Structure:**
```mermaid
flowchart LR
subgraph ops[File Operations]
direction TB
W[Write]
E[Edit]
ME[MultiEdit]
BG[Bash git]
BN[Bash npm]
BO[Bash other]
end
subgraph pre[PreToolUse Hooks]
direction TB
CS[code-sentinel<br/>Security Scan]
GF[git-flow<br/>Branch Check]
end
subgraph post[PostToolUse Hooks]
direction TB
DG[doc-guardian<br/>Drift Detection]
PH[project-hygiene<br/>Cleanup]
DP[data-platform<br/>Schema Diff]
end
subgraph perm[Permission Status]
direction TB
AA[Auto-Allowed]
PR[Prompted]
DN[Denied]
end
W -->|intercepted| CS
W -->|tracked| DG
E -->|intercepted| CS
E -->|tracked| DG
BG -->|checked| GF
CS -->|passed| AA
DG -->|logged| AA
GF -->|valid| AA
BO -->|no hook| PR
classDef preHook fill:#e3f2fd,stroke:#1976d2
classDef postHook fill:#e8f5e9,stroke:#388e3c
classDef sprint fill:#fff3e0,stroke:#f57c00
classDef prReview fill:#f3e5f5,stroke:#7b1fa2
classDef allowed fill:#c8e6c9,stroke:#2e7d32
classDef prompted fill:#fff9c4,stroke:#f9a825
classDef denied fill:#ffcdd2,stroke:#c62828
class CS,GF preHook
class DG,PH,DP postHook
class AA allowed
class PR prompted
class DN denied
```
### Step 5: Generate Coverage Summary Table
```
Review Layer Coverage Summary
=============================
| Directory Scope | Layers | Status | Recommendation |
|--------------------------|--------|-----------------|----------------|
| plugins/*/commands/*.md | 3 | ✓ Auto-allowed | — |
| plugins/*/skills/*.md | 2 | ✓ Auto-allowed | — |
| mcp-servers/**/*.py | 3 | ✓ Auto-allowed | — |
| docs/** | 2 | ✓ Auto-allowed | — |
| scripts/*.sh | 2 | ⚠ Prompted | Consider auto-allow |
| .env* | 0 | ✗ Denied | Correct - secrets |
| Root directory | 1 | ⚠ Prompted | Keep prompted |
Legend:
✓ = Covered by ≥2 review layers, auto-allowed
⚠ = Fewer than 2 layers or not allowed
✗ = Explicitly denied
```
### Step 6: Identify Gaps
Report any gaps in coverage:
```
Coverage Gaps Detected:
1. Bash(npm *) — not in allow list, but npm operations are common
→ 2 review layers active, could be auto-allowed
2. mcp__data-platform__* — MCP server configured but tools not allowed
→ Add to allow list to avoid prompts
3. scripts/*.sh — 2 review layers but still prompted
→ Consider adding Write(scripts/**) to allow
```
### Step 7: Output Diagram
Display the Mermaid diagram inline.
If `--save` flag is used:
- Save to `.claude/permissions-map.mermaid`
- Report the file path
## Output Format
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - Permissions Map |
+-----------------------------------------------------------------+
Review Layer Status
===================
PreToolUse Hooks (intercept before operation):
✓ code-sentinel — Write, Edit, MultiEdit
✓ git-flow — Bash(git checkout *), Bash(git commit *)
PostToolUse Hooks (track after operation):
✓ doc-guardian — Write, Edit, MultiEdit
✓ project-hygiene — Write, Edit
✗ data-platform — not detected
Other Review Gates:
✓ Sprint Approval (projman milestone workflow)
✓ PR Review (pr-review multi-agent)
Permissions Flow Diagram
========================
```mermaid
[diagram here]
```
Coverage Summary
================
[table here]
Gaps & Recommendations
======================
[gaps list here]
```
## File Output (--save flag)
When `--save` is specified:
```
Diagram saved to: .claude/permissions-map.mermaid
To view:
- Open in VS Code with Mermaid extension
- Paste into https://mermaid.live
- Include in documentation with ```mermaid code fence
```
## Color Scheme
| Element | Color | Hex |
|---------|-------|-----|
| PreToolUse hooks | Blue | #e3f2fd |
| PostToolUse hooks | Green | #e8f5e9 |
| Sprint/Planning gates | Amber | #fff3e0 |
| PR Review | Purple | #f3e5f5 |
| Auto-allowed | Light green | #c8e6c9 |
| Prompted | Light yellow | #fff9c4 |
| Denied | Light red | #ffcdd2 |
## DO NOT
- Generate diagrams without reading the mermaid-diagrams skill
- Use more than 3 primary colors in the diagram
- Skip the coverage summary table
- Fail to identify coverage gaps

View File

@@ -1,377 +0,0 @@
# Settings Optimization Skill
This skill provides comprehensive knowledge for auditing and optimizing Claude Code `settings.local.json` permission configurations.
---
## Section 1: Settings File Locations & Format
Claude Code uses two configuration formats for permissions:
### Newer Format (Recommended)
**Primary target:** `.claude/settings.local.json` (project-local, gitignored)
**Secondary locations:**
- `.claude/settings.json` (shared, committed)
- `~/.claude.json` (legacy global config)
```json
{
"permissions": {
"allow": ["Edit", "Write(plugins/**)", "Bash(git *)"],
"deny": ["Read(.env*)", "Bash(rm *)"],
"ask": ["Bash(pip install *)"]
}
}
```
**Field meanings:**
- `allow`: Operations auto-approved without prompting
- `deny`: Operations blocked entirely
- `ask`: Operations that always prompt (overrides allow)
### Legacy Format
Found in `~/.claude.json` with per-project entries:
```json
{
"projects": {
"/path/to/project": {
"allowedTools": ["Read", "Write", "Bash(git *)"]
}
}
}
```
**Detection strategy:**
1. Check `.claude/settings.local.json` first (primary)
2. Check `.claude/settings.json` (shared)
3. Check `~/.claude.json` for project entry (legacy)
4. Report which format is in use
---
## Section 2: Permission Rule Syntax Reference
| Pattern | Meaning |
|---------|---------|
| `Tool` or `Tool(*)` | Allow all uses of that tool |
| `Bash(npm run build)` | Exact command match |
| `Bash(npm run test *)` | Prefix match (space+asterisk = word boundary) |
| `Bash(npm*)` | Prefix match without word boundary |
| `Write(plugins/**)` | Glob — all files recursively under `plugins/` |
| `Write(plugins/projman/*)` | Glob — direct children only |
| `Read(.env*)` | Pattern matching `.env`, `.env.local`, etc. |
| `mcp__gitea__*` | All tools from the gitea MCP server |
| `mcp__netbox__list_*` | Specific MCP tool pattern |
| `WebFetch(domain:github.com)` | Domain-restricted web fetch |
### Important Nuances
**Word boundary matching:**
- `Bash(ls *)` (with space) matches `ls -la` but NOT `lsof`
- `Bash(ls*)` (no space) matches both `ls -la` AND `lsof`
**Precedence rules:**
- `deny` rules take precedence over `allow` rules
- `ask` rules override both (always prompts even if allowed)
- More specific patterns do NOT override broader patterns
**Command operators:**
- Piped commands (`cmd1 | cmd2`) may not match individual command rules (known Claude Code limitation)
- Shell operators (`&&`, `||`) — Claude Code is aware of these and won't let prefix rules bypass them
- Commands with redirects (`>`, `>>`, `<`) are evaluated as complete strings
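A toy illustration of the word-boundary distinction above (this is not Claude Code's actual matcher, only a demonstration of the stated behaviour):
```python
def bash_prefix_matches(rule: str, command: str) -> bool:
    """Illustrate 'Bash(ls *)' vs 'Bash(ls*)' word-boundary behaviour (toy)."""
    pattern = rule[len("Bash("):-1]            # e.g. "ls *" or "ls*"
    if pattern.endswith(" *"):                 # space + asterisk = word boundary
        prefix = pattern[:-2]
        return command == prefix or command.startswith(prefix + " ")
    if pattern.endswith("*"):                  # no word boundary
        return command.startswith(pattern[:-1])
    return command == pattern                  # exact match

assert bash_prefix_matches("Bash(ls *)", "ls -la")
assert not bash_prefix_matches("Bash(ls *)", "lsof")
assert bash_prefix_matches("Bash(ls*)", "lsof")
```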
---
## Section 3: Pattern Consolidation Rules
The audit detects these optimization opportunities:
| Issue | Example | Recommendation |
|-------|---------|----------------|
| **Exact duplicates** | `Write(plugins/**)` listed twice | Remove duplicate |
| **Subset redundancy** | `Write(plugins/projman/*)` when `Write(plugins/**)` exists | Remove the narrower pattern — already covered |
| **Merge candidates** | `Write(plugins/projman/*)`, `Write(plugins/git-flow/*)`, `Write(plugins/pr-review/*)` ... (4+ similar patterns) | Merge to `Write(plugins/**)` |
| **Overly broad** | `Bash` (no specifier = allows ALL bash) | Flag as security concern, suggest scoped patterns |
| **Stale patterns** | `Write(plugins/old-plugin/**)` for a plugin that no longer exists | Remove stale entry |
| **Missing MCP permissions** | MCP servers in `.mcp.json` but no `mcp__servername__*` in allow | Suggest adding if server is trusted |
| **Conflicting rules** | Same pattern in both `allow` and `deny` | Flag conflict — deny wins, but allow is dead weight |
### Consolidation Algorithm
1. **Deduplicate:** Remove exact duplicates from each array
2. **Subset elimination:** For each pattern, check if a broader pattern exists
- `Write(plugins/projman/*)` is subset of `Write(plugins/**)`
- `Bash(git status)` is subset of `Bash(git *)`
3. **Merge detection:** If 4+ patterns share a common prefix, suggest merge
- Threshold: 4 patterns minimum before suggesting consolidation
4. **Stale detection:** Cross-reference file patterns against actual filesystem
5. **Conflict detection:** Check for patterns appearing in multiple arrays
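Step 3 (merge detection) can be sketched as grouping globs by their first path segment and flagging groups that reach the 4-pattern threshold (illustrative; the grouping key is an assumption, not the plugin's actual algorithm):
```python
from collections import defaultdict

def merge_candidates(allow: list[str], threshold: int = 4) -> dict[str, list[str]]:
    """Suggest broader globs for groups of 4+ similar patterns (sketch)."""
    groups: dict[str, list[str]] = defaultdict(list)
    for pattern in allow:
        if "(" not in pattern or "/" not in pattern:
            continue
        tool, scope = pattern.rstrip(")").split("(", 1)
        root = scope.split("/", 1)[0]                    # e.g. "plugins"
        groups[f"{tool}({root}/**)"].append(pattern)
    return {merged: pats for merged, pats in groups.items() if len(pats) >= threshold}
```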
---
## Section 4: Review-Layer-Aware Recommendations
This is the key section. Map upstream review processes to directory scopes:
| Directory Scope | Active Review Layers | Auto-Allow Recommendation |
|----------------|---------------------|---------------------------|
| `plugins/*/commands/*.md` | Sprint approval, PR review, doc-guardian PostToolUse | `Write(plugins/*/commands/**)` — 3 layers cover this |
| `plugins/*/skills/*.md` | Sprint approval, PR review | `Write(plugins/*/skills/**)` — 2 layers |
| `plugins/*/agents/*.md` | Sprint approval, PR review, contract-validator | `Write(plugins/*/agents/**)` — 3 layers |
| `mcp-servers/*/mcp_server/*.py` | Code-sentinel PreToolUse, sprint approval, PR review | `Write(mcp-servers/**)` + `Edit(mcp-servers/**)` — sentinel catches secrets |
| `docs/*.md` | Doc-guardian PostToolUse, PR review | `Write(docs/**)` + `Edit(docs/**)` |
| `.claude-plugin/*.json` | validate-marketplace.sh, PR review | `Write(.claude-plugin/**)` |
| `scripts/*.sh` | Code-sentinel, PR review | `Write(scripts/**)` — with caution flag |
| `CLAUDE.md`, `CHANGELOG.md`, `README.md` | Doc-guardian, PR review | `Write(CLAUDE.md)`, `Write(CHANGELOG.md)`, `Write(README.md)` |
### Critical Rule: Hook Verification
**Before recommending auto-allow for a scope, the agent MUST verify the hook is actually configured.**
Read the relevant `plugins/*/hooks/hooks.json` file:
- If code-sentinel's hook is missing or disabled, do NOT recommend auto-allowing `mcp-servers/**` writes
- If doc-guardian's hook is missing, do NOT recommend auto-allowing `docs/**` without caution
- Count the number of verified review layers before making recommendations
**Minimum threshold:** Recommend auto-allow only for scopes covered by ≥2 verified review layers.
---
## Section 5: Permission Profiles
Three named profiles for different project contexts:
### `conservative` (Default for New Users)
Minimal permissions, prompts for most write operations:
```json
{
"permissions": {
"allow": [
"Read",
"Glob",
"Grep",
"LS",
"Write(docs/**)",
"Edit(docs/**)",
"Bash(git status *)",
"Bash(git diff *)",
"Bash(git log *)",
"Bash(cat *)",
"Bash(ls *)",
"Bash(head *)",
"Bash(tail *)",
"Bash(wc *)",
"Bash(grep *)"
],
"deny": [
"Read(.env*)",
"Read(./secrets/**)",
"Bash(rm -rf *)",
"Bash(sudo *)"
]
}
}
```
### `reviewed` (Projects with ≥2 Upstream Review Layers)
This is the target profile for projects using the marketplace's multi-layer review architecture:
```json
{
"permissions": {
"allow": [
"Read",
"Glob",
"Grep",
"LS",
"Edit",
"Write",
"MultiEdit",
"Bash(git *)",
"Bash(python *)",
"Bash(pip install *)",
"Bash(cd *)",
"Bash(cat *)",
"Bash(ls *)",
"Bash(head *)",
"Bash(tail *)",
"Bash(wc *)",
"Bash(grep *)",
"Bash(find *)",
"Bash(mkdir *)",
"Bash(cp *)",
"Bash(mv *)",
"Bash(touch *)",
"Bash(chmod *)",
"Bash(source *)",
"Bash(echo *)",
"Bash(sed *)",
"Bash(awk *)",
"Bash(sort *)",
"Bash(uniq *)",
"Bash(diff *)",
"Bash(jq *)",
"Bash(npm *)",
"Bash(npx *)",
"Bash(node *)",
"Bash(pytest *)",
"Bash(python -m *)",
"Bash(./scripts/*)",
"WebFetch",
"WebSearch"
],
"deny": [
"Read(.env*)",
"Read(./secrets/**)",
"Bash(rm -rf *)",
"Bash(sudo *)",
"Bash(curl * | bash*)",
"Bash(wget * | bash*)"
]
}
}
```
### `autonomous` (Trusted CI/Sandboxed Environments Only)
Maximum permissions for automated environments:
```json
{
"permissions": {
"allow": [
"Read",
"Glob",
"Grep",
"LS",
"Edit",
"Write",
"MultiEdit",
"Bash",
"WebFetch",
"WebSearch"
],
"deny": [
"Read(.env*)",
"Read(./secrets/**)",
"Bash(rm -rf /)",
"Bash(sudo *)"
]
}
}
```
**Warning:** The `autonomous` profile allows unscoped `Bash` — only use in fully sandboxed environments.
---
## Section 6: Scoring Criteria (Settings Efficiency Score — 100 points)
| Category | Points | What It Measures |
|----------|--------|------------------|
| **Redundancy** | 25 | No duplicates, no subset patterns, merged where possible |
| **Coverage** | 25 | Common tools allowed, MCP servers covered, no unnecessary gaps |
| **Safety Alignment** | 25 | Deny rules cover secrets, destructive commands; review layers verified |
| **Profile Fit** | 25 | How close to recommended profile for the project's review layer count |
### Scoring Breakdown
**Redundancy (25 points):**
- 25: No duplicates, no subsets, patterns are consolidated
- 20: 1-2 minor redundancies
- 15: 3-5 redundancies or 1 merge candidate group
- 10: 6+ redundancies or 2+ merge candidate groups
- 5: Significant redundancy (10+ issues)
- 0: Severe redundancy (20+ issues)
**Coverage (25 points):**
- 25: All common tools allowed, MCP servers covered
- 20: Missing 1-2 common tool patterns
- 15: Missing 3-5 patterns or 1 MCP server
- 10: Missing 6+ patterns or 2+ MCP servers
- 5: Significant gaps causing frequent prompts
- 0: Minimal coverage (prompts on most operations)
**Safety Alignment (25 points):**
- 25: Deny rules cover secrets + destructive ops, review layers verified
- 20: Minor gaps (e.g., missing one secret pattern)
- 15: Overly broad allow without review layer coverage
- 10: Missing deny rules for secrets or destructive commands
- 5: Unsafe patterns without review layer justification
- 0: Security concerns (e.g., unscoped `Bash` without review layers)
**Profile Fit (25 points):**
- 25: Matches recommended profile exactly
- 20: Within 90% of recommended profile
- 15: Within 80% of recommended profile
- 10: Within 70% of recommended profile
- 5: Significant deviation from recommended profile
- 0: No alignment with any named profile
### Score Interpretation
| Score Range | Status | Meaning |
|-------------|--------|---------|
| 90-100 | Optimized | Minimal prompt interruptions, safety maintained |
| 70-89 | Good | Minor consolidation opportunities |
| 50-69 | Needs Work | Significant redundancy or missing permissions |
| Below 50 | Poor | Likely getting constant approval prompts unnecessarily |
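The Redundancy rubric above translates directly into a lookup (sketch; the other three categories follow the same shape with their own thresholds):
```python
def redundancy_score(redundancy_issues: int, merge_groups: int = 0) -> int:
    """Score the Redundancy category per the breakdown above (sketch)."""
    if redundancy_issues >= 20:
        return 0
    if redundancy_issues >= 10:
        return 5
    if redundancy_issues >= 6 or merge_groups >= 2:
        return 10
    if redundancy_issues >= 3 or merge_groups >= 1:
        return 15
    if redundancy_issues >= 1:
        return 20
    return 25
```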
---
## Section 7: Hook Detection Method
To verify which review layers are active, read these files:
| File | Hook Type | Tool Matcher | Purpose |
|------|-----------|--------------|---------|
| `plugins/code-sentinel/hooks/hooks.json` | PreToolUse | Write\|Edit\|MultiEdit | Blocks hardcoded secrets |
| `plugins/doc-guardian/hooks/hooks.json` | PostToolUse | Write\|Edit\|MultiEdit | Tracks documentation drift |
| `plugins/project-hygiene/hooks/hooks.json` | PostToolUse | Write\|Edit | Cleanup tracking |
| `plugins/data-platform/hooks/hooks.json` | PostToolUse | Edit\|Write | Schema diff detection |
| `plugins/cmdb-assistant/hooks/hooks.json` | PreToolUse | (if exists) | Input validation |
### Verification Process
1. **Read each hooks.json file**
2. **Parse the JSON to find hook configurations**
3. **Check the `type` field** — must be `"command"` (not `"prompt"`)
4. **Check the `event` field** — maps to when hook runs
5. **Check the `tools` array** — which operations are intercepted
6. **Verify plugin is in marketplace** — check `.claude-plugin/marketplace.json`
### Example Hook Structure
```json
{
"hooks": [
{
"event": "PreToolUse",
"type": "command",
"command": "./hooks/security-check.sh",
"tools": ["Write", "Edit", "MultiEdit"]
}
]
}
```
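Putting the verification steps together for one plugin (sketch; assumes the example structure above and that `marketplace.json` lists plugins under a `plugins` array with `name` fields):
```python
import json
from pathlib import Path

def verify_review_layer(plugin_dir: str, marketplace_json: str) -> bool:
    """Check a plugin has a command-type hook and is listed in the marketplace (sketch)."""
    hooks_file = Path(plugin_dir) / "hooks" / "hooks.json"
    if not hooks_file.exists():
        return False
    hooks = json.loads(hooks_file.read_text()).get("hooks", [])
    has_command_hook = any(h.get("type") == "command" for h in hooks)

    marketplace = json.loads(Path(marketplace_json).read_text())
    listed = {p.get("name") for p in marketplace.get("plugins", [])}
    return has_command_hook and Path(plugin_dir).name in listed
```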
### Review Layer Count
Count verified review layers for each scope:
| Layer | Verification |
|-------|-------------|
| Sprint approval | Check if projman plugin is installed (milestone workflow) |
| PR review | Check if pr-review plugin is installed |
| code-sentinel PreToolUse | hooks.json exists with PreToolUse on Write/Edit |
| doc-guardian PostToolUse | hooks.json exists with PostToolUse on Write/Edit |
| contract-validator | Plugin installed + hooks present |
**Recommendation threshold:** Only recommend auto-allow for scopes with ≥2 verified layers.

View File

@@ -47,27 +47,6 @@ This skill defines the standard visual header for claude-config-maintainer comma
+-----------------------------------------------------------------+
```
### /config-audit-settings
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - Settings Audit |
+-----------------------------------------------------------------+
```
### /config-optimize-settings
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - Settings Optimization |
+-----------------------------------------------------------------+
```
### /config-permissions-map
```
+-----------------------------------------------------------------+
| CONFIG-MAINTAINER - Permissions Map |
+-----------------------------------------------------------------+
```
## Usage
Display the header at the start of command execution, before any analysis or output.

View File

@@ -1,9 +1,3 @@
---
name: cmdb-assistant
description: Infrastructure management assistant specialized in NetBox CMDB operations. Use for device management, IP addressing, and infrastructure queries.
model: sonnet
---
# CMDB Assistant Agent
You are an infrastructure management assistant specialized in NetBox CMDB operations.

View File

@@ -1,7 +1,5 @@
---
name: refactor-advisor description: Code structure and refactoring specialist
description: Code structure and refactoring specialist. Use when analyzing code quality, design patterns, or planning refactoring work.
model: sonnet
---
# Refactor Advisor Agent

View File

@@ -1,7 +1,6 @@
---
name: security-reviewer
description: Security-focused code review agent
model: sonnet
---
# Security Reviewer Agent

View File

@@ -1,6 +1,6 @@
{
"name": "contract-validator",
"version": "1.2.0", "version": "1.1.0",
"description": "Cross-plugin compatibility validation and Claude.md agent verification", "description": "Cross-plugin compatibility validation and Claude.md agent verification",
"author": { "author": {
"name": "Leo Miranda", "name": "Leo Miranda",

View File

@@ -1,7 +1,6 @@
---
name: agent-check
description: Agent definition validator for quick verification
model: haiku
---
# Agent Check Agent

View File

@@ -1,9 +1,3 @@
---
name: full-validation
description: Contract validation specialist for comprehensive cross-plugin compatibility validation of the entire marketplace.
model: sonnet
---
# Full Validation Agent
You are a contract validation specialist. Your role is to perform comprehensive cross-plugin compatibility validation for the entire marketplace.

View File

@@ -30,7 +30,6 @@
- Use `validate_compatibility` for pairwise checks
- Use `validate_agent_refs` for CLAUDE.md agents
- Use `validate_data_flow` for data sequences
- Use `validate_workflow_integration` for domain plugin advisory interfaces
5. **Generate report**:
- Use `generate_compatibility_report` for full report

View File

@@ -16,7 +16,6 @@ Available MCP tools for contract-validator operations.
| `validate_compatibility` | Check two plugins for conflicts |
| `validate_agent_refs` | Check agent tool references exist |
| `validate_data_flow` | Verify data flow through agent sequence |
| `validate_workflow_integration` | Check domain plugin exposes required advisory interfaces and gate contract version |
### Report Tools ### Report Tools
| Tool | Description | | Tool | Description |
@@ -54,14 +53,6 @@ Available MCP tools for contract-validator operations.
3. Build Mermaid diagram from results 3. Build Mermaid diagram from results
``` ```
### Workflow Integration Check
```
1. validate_workflow_integration(plugin_path, domain_label) # Check single domain plugin
2. validate_workflow_integration(plugin_path, domain_label, expected_contract="v1") # With contract version check
3. For each domain in domain-consultation.md detection rules:
validate_workflow_integration(domain_plugin_path, domain_label, expected_contract)
```
## Error Handling ## Error Handling
If MCP tools fail: If MCP tools fail:

View File

@@ -30,15 +30,6 @@ Rules for validating plugin compatibility and agent definitions.
3. Check for orphaned data references 3. Check for orphaned data references
4. Ensure required data is available at each step 4. Ensure required data is available at each step
### Workflow Integration Checks
1. Gate command exists in plugin's commands/ directory
2. Gate command produces binary PASS/FAIL output
3. Review command exists (WARNING if missing, not ERROR)
4. Advisory agent exists referencing the domain label
5. Gate command declares `gate_contract` version in frontmatter
6. If expected version provided, gate contract version matches expected
- Severity: ERROR for missing gate, WARNING for missing review/agent or contract mismatch, INFO for missing contract
## Severity Levels ## Severity Levels
| Level | Meaning | Action | | Level | Meaning | Action |
@@ -55,8 +46,6 @@ Rules for validating plugin compatibility and agent definitions.
| Data flow gap | Producer not called before consumer | Reorder workflow steps | | Data flow gap | Producer not called before consumer | Reorder workflow steps |
| Name conflict | Two plugins use same command | Rename one command | | Name conflict | Two plugins use same command | Rename one command |
| Orphan reference | Data produced but never consumed | Remove or use the data | | Orphan reference | Data produced but never consumed | Remove or use the data |
| Missing gate command | Domain plugin lacks /X-gate command | Create commands/{domain}-gate.md |
| Missing advisory agent | Domain plugin has no reviewer agent | Create agents/{domain}-advisor.md |
## MCP Tools ## MCP Tools
@@ -65,5 +54,4 @@ Rules for validating plugin compatibility and agent definitions.
| `validate_compatibility` | Check two plugins for conflicts | | `validate_compatibility` | Check two plugins for conflicts |
| `validate_agent_refs` | Check agent tool references | | `validate_agent_refs` | Check agent tool references |
| `validate_data_flow` | Verify data flow sequences | | `validate_data_flow` | Verify data flow sequences |
| `validate_workflow_integration` | Check domain plugin advisory interfaces |
| `list_issues` | Filter issues by severity or type | | `list_issues` | Filter issues by severity or type |

View File

@@ -1,6 +1,6 @@
{ {
"name": "data-platform", "name": "data-platform",
"version": "1.3.0", "version": "1.1.0",
"description": "Data engineering tools with pandas, PostgreSQL/PostGIS, and dbt integration", "description": "Data engineering tools with pandas, PostgreSQL/PostGIS, and dbt integration",
"author": { "author": {
"name": "Leo Miranda", "name": "Leo Miranda",

View File

@@ -1,317 +0,0 @@
---
name: data-advisor
description: Reviews code for data integrity, schema validity, and dbt compliance using data-platform MCP tools. Use when validating database operations or data pipelines.
model: sonnet
---
# Data Advisor Agent
You are a strict data integrity auditor. Your role is to review code for proper schema usage, dbt compliance, lineage integrity, and data quality standards.
## Visual Output Requirements
**MANDATORY: Display header at start of every response.**
```
+----------------------------------------------------------------------+
| DATA-PLATFORM - Data Advisor |
| [Target Path] |
+----------------------------------------------------------------------+
```
## Trigger Conditions
Activate this agent when:
- User runs `/data-review <path>`
- User runs `/data-gate <path>`
- Projman orchestrator requests data domain gate check
- Code review includes database operations, dbt models, or data pipelines
## Skills to Load
- skills/data-integrity-audit.md
- skills/mcp-tools-reference.md
## Available MCP Tools
### PostgreSQL (Schema Validation)
| Tool | Purpose |
|------|---------|
| `pg_connect` | Verify database is reachable |
| `pg_tables` | List tables, verify existence |
| `pg_columns` | Get column details, verify types and constraints |
| `pg_schemas` | List available schemas |
| `pg_query` | Run diagnostic queries (SELECT only in review context) |
### PostGIS (Spatial Validation)
| Tool | Purpose |
|------|---------|
| `st_tables` | List tables with geometry columns |
| `st_geometry_type` | Verify geometry types |
| `st_srid` | Verify coordinate reference systems |
| `st_extent` | Verify spatial extent is reasonable |
### dbt (Project Validation)
| Tool | Purpose |
|------|---------|
| `dbt_parse` | Validate project structure (ALWAYS run first) |
| `dbt_compile` | Verify SQL renders correctly |
| `dbt_test` | Run data tests |
| `dbt_build` | Combined run + test |
| `dbt_ls` | List all resources (models, tests, sources) |
| `dbt_lineage` | Get model dependency graph |
| `dbt_docs_generate` | Generate documentation for inspection |
### pandas (Data Validation)
| Tool | Purpose |
|------|---------|
| `describe` | Statistical summary for data quality checks |
| `head` | Preview data for structural verification |
| `list_data` | Check for stale DataFrames |
## Operating Modes
### Review Mode (default)
Triggered by `/data-review <path>`
**Characteristics:**
- Produces detailed report with all findings
- Groups findings by severity (FAIL/WARN/INFO)
- Includes actionable recommendations with fixes
- Does NOT block - informational only
- Shows category compliance status
### Gate Mode
Triggered by `/data-gate <path>` or projman orchestrator domain gate
**Characteristics:**
- Binary PASS/FAIL output
- Only reports FAIL-level issues
- Returns exit status for automation integration
- Blocks completion on FAIL
- Compact output for CI/CD pipelines
## Audit Workflow
### 1. Receive Target Path
Accept file or directory path from command invocation.
### 2. Determine Scope
Analyze target to identify what type of data work is present:
| Pattern | Type | Checks to Run |
|---------|------|---------------|
| `dbt_project.yml` present | dbt project | Full dbt validation |
| `*.sql` files in dbt path | dbt models | Model compilation, lineage |
| `*.py` with `pg_query`/`pg_execute` | Database operations | Schema validation |
| `schema.yml` files | dbt schemas | Schema drift detection |
| Migration files (`*_migration.sql`) | Schema changes | Full PostgreSQL + dbt checks |
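The scope rules above can be read as a small classification function. A minimal sketch, assuming plain filename matching only; the category names are illustrative and not part of the agent contract:
```python
from pathlib import Path

def classify_scope(target: str) -> set:
    """Map a file or directory onto the check categories from the table above."""
    root = Path(target)
    files = [root] if root.is_file() else [p for p in root.rglob("*") if p.is_file()]
    checks = set()
    for f in files:
        if f.name == "dbt_project.yml":
            checks.add("dbt: full validation")
        elif f.name.endswith("_migration.sql"):
            checks.add("postgres + dbt: migration checks")
        elif f.suffix == ".sql":
            checks.add("dbt: model compilation and lineage")
        elif f.name == "schema.yml":
            checks.add("dbt: schema drift detection")
        elif f.suffix == ".py" and any(
            tok in f.read_text(errors="ignore") for tok in ("pg_query", "pg_execute")
        ):
            checks.add("postgres: schema validation")
    return checks
```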
### 3. Run Database Checks (if applicable)
```
1. pg_connect → verify database reachable
If fails: WARN, continue with file-based checks
2. pg_tables → verify expected tables exist
If missing: FAIL
3. pg_columns on affected tables → verify types
If mismatch: FAIL
```
### 4. Run dbt Checks (if applicable)
```
1. dbt_parse → validate project
If fails: FAIL immediately (project broken)
2. dbt_ls → catalog all resources
Record models, tests, sources
3. dbt_lineage on target models → check integrity
Orphaned refs: FAIL
4. dbt_compile on target models → verify SQL
Compilation errors: FAIL
5. dbt_test --select <targets> → run tests
Test failures: FAIL
6. Cross-reference tests → models without tests
Missing tests: WARN
```
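Outside the MCP tools, the same fail-fast ordering can be reproduced with the dbt CLI. A rough sketch, assuming `dbt` is on PATH and run from the project root; the agent itself uses `dbt_parse`/`dbt_compile`/`dbt_test`, not subprocess calls:
```python
import subprocess

def dbt_fail_fast(models: list) -> list:
    """Steps 1, 4 and 5 above: parse first, then compile and test the targets."""
    def dbt(*args):
        return subprocess.run(["dbt", *args], capture_output=True, text=True)

    if dbt("parse").returncode != 0:
        return ["FAIL: dbt parse failed - project is broken"]  # stop immediately
    if not models:
        return []
    selector = " ".join(models)
    failures = []
    if dbt("compile", "--select", selector).returncode != 0:
        failures.append(f"FAIL: compilation error in {selector}")
    if dbt("test", "--select", selector).returncode != 0:
        failures.append(f"FAIL: test failures in {selector}")
    return failures
```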
### 5. Run PostGIS Checks (if applicable)
```
1. st_tables → list spatial tables
If none found: skip PostGIS checks
2. st_srid → verify SRID correct
Unexpected SRID: FAIL
3. st_geometry_type → verify expected types
Wrong type: WARN
4. st_extent → sanity check bounding box
Unreasonable extent: FAIL
```
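The same checks can be expressed as plain PostGIS SQL. A sketch assuming a psycopg2 DSN and a single geometry column named `geom`; identifiers are quoted via `psycopg2.sql` rather than interpolated, in line with the query-safety rules below:
```python
from psycopg2 import connect, sql

def spatial_sanity(dsn: str, table: str, geom_col: str = "geom",
                   expected_srid: int = 4326,
                   lon_range: tuple = (-79.6, -79.1)) -> list:
    """Steps 2 and 4 above: SRID check plus a bounding-box sanity check."""
    query = sql.SQL(
        "SELECT min(ST_SRID({g})), ST_XMin(ST_Extent({g})), ST_XMax(ST_Extent({g})) FROM {t}"
    ).format(g=sql.Identifier(geom_col), t=sql.Identifier(table))
    with connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(query)
        srid, xmin, xmax = cur.fetchone()
    findings = []
    if srid is not None and srid != expected_srid:
        findings.append(f"FAIL: {table}.{geom_col} has SRID {srid}, expected {expected_srid}")
    if xmin is not None and not (lon_range[0] - 0.5 <= xmin <= xmax <= lon_range[1] + 0.5):
        findings.append(f"FAIL: {table} extent {xmin:.2f}..{xmax:.2f} outside expected longitude range")
    return findings
```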
### 6. Scan Python Code (manual patterns)
For Python files with database operations:
| Pattern | Issue | Severity |
|---------|-------|----------|
| `f"SELECT * FROM {table}"` | SQL injection risk | WARN |
| `f"INSERT INTO {table}"` | Unparameterized mutation | WARN |
| `pg_execute` without WHERE in DELETE/UPDATE | Dangerous mutation | WARN |
| Hardcoded connection strings | Credential exposure | WARN |
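For reference, the parameterized alternative to the first two patterns looks roughly like this (a sketch using `psycopg2.sql`; the cursor and table name are supplied by whatever code surrounds the query):
```python
from psycopg2 import sql

def load_year(cur, table: str, year: int):
    # Flagged (WARN) by the patterns above:
    #   cur.execute(f"SELECT * FROM {table} WHERE year = {year}")
    # Preferred: quote the identifier with psycopg2.sql, pass the value as a parameter.
    cur.execute(
        sql.SQL("SELECT * FROM {t} WHERE year = %s").format(t=sql.Identifier(table)),
        (year,),
    )
    return cur.fetchall()
```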
### 7. Generate Report
Output format depends on operating mode (see templates in `skills/data-integrity-audit.md`).
## Report Formats
### Gate Mode Output
**PASS:**
```
DATA GATE: PASS
No blocking data integrity violations found.
```
**FAIL:**
```
DATA GATE: FAIL
Blocking Issues (2):
1. dbt/models/staging/stg_census.sql - Compilation error: column 'census_yr' not found
Fix: Column was renamed to 'census_year' in source table. Update model.
2. portfolio_app/toronto/loaders/census.py:67 - References table 'census_raw' which does not exist
Fix: Table was renamed to 'census_demographics' in migration 003.
Run /data-review for full audit report.
```
### Review Mode Output
```
+----------------------------------------------------------------------+
| DATA-PLATFORM - Data Integrity Audit |
| /path/to/project |
+----------------------------------------------------------------------+
Target: /path/to/project
Scope: 12 files scanned, 8 models checked, 3 tables verified
FINDINGS
FAIL (2)
1. [dbt/models/staging/stg_census.sql] Compilation error
Error: column 'census_yr' does not exist
Fix: Column was renamed to 'census_year'. Update SELECT clause.
2. [portfolio_app/loaders/census.py:67] Missing table reference
Error: Table 'census_raw' does not exist
Fix: Table renamed to 'census_demographics' in migration 003.
WARN (3)
1. [dbt/models/marts/dim_neighbourhoods.sql] Missing dbt test
Issue: No unique test on neighbourhood_id
Suggestion: Add unique test to schema.yml
2. [portfolio_app/toronto/queries.py:45] Hardcoded SQL
Issue: f"SELECT * FROM {table_name}" without parameterization
Suggestion: Use parameterized queries
3. [dbt/models/staging/stg_legacy.sql] Orphaned model
Issue: No downstream consumers or exposures
Suggestion: Remove if unused or add to exposure
INFO (1)
1. [dbt/models/marts/fct_demographics.sql] Documentation gap
Note: Model description missing in schema.yml
Suggestion: Add description for discoverability
SUMMARY
Schema: 2 issues
Lineage: Intact
dbt: 1 failure
PostGIS: Not applicable
VERDICT: FAIL (2 blocking issues)
```
## Severity Definitions
| Level | Criteria | Action Required |
|-------|----------|-----------------|
| **FAIL** | dbt parse/compile fails, missing tables/columns, type mismatches, broken lineage, invalid SRID | Must fix before completion |
| **WARN** | Missing tests, hardcoded SQL, schema drift, orphaned models | Should fix |
| **INFO** | Documentation gaps, optimization opportunities | Consider for improvement |
## Error Handling
| Error | Response |
|-------|----------|
| Database not reachable | WARN: "PostgreSQL unavailable, skipping schema checks" - continue |
| No dbt_project.yml | Skip dbt checks silently - not an error |
| No PostGIS tables | Skip PostGIS checks silently - not an error |
| MCP tool fails | WARN: "Tool {name} failed: {error}" - continue with remaining |
| Empty path | PASS: "No data artifacts found in target path" |
| Invalid path | Error: "Path not found: {path}" |
## Integration with projman
When called as a domain gate by projman orchestrator:
1. Receive path from orchestrator (changed files for the issue)
2. Determine what type of data work changed
3. Run audit in gate mode
4. Return structured result:
```
Gate: data
Status: PASS | FAIL
Blocking: N issues
Summary: Brief description
```
5. Orchestrator decides whether to proceed based on gate status
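One plausible shape for the orchestrator's side of this handshake, sketched only to make the contract concrete; the real orchestrator logic lives in projman's skills, not in Python:
```python
import re

def handle_gate_result(gate_output: str) -> dict:
    """Parse the structured gate block above into a completion decision."""
    status = re.search(r"^Status:\s*(PASS|FAIL)", gate_output, re.MULTILINE)
    summary = re.search(r"^Summary:\s*(.+)$", gate_output, re.MULTILINE)
    passed = bool(status) and status.group(1) == "PASS"
    return {
        "complete_issue": passed,
        # On FAIL the issue stays open and a blocker comment is posted.
        "blocker_comment": None if passed
        else f"Data gate failed: {summary.group(1) if summary else 'see gate output'}",
    }
```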
## Example Interactions
**User**: `/data-review dbt/models/staging/`
**Agent**:
1. Scans all .sql files in staging/
2. Runs dbt_parse to validate project
3. Runs dbt_compile on each model
4. Checks lineage for orphaned refs
5. Cross-references test coverage
6. Returns detailed report
**User**: `/data-gate portfolio_app/toronto/`
**Agent**:
1. Scans for Python files with pg_query/pg_execute
2. Checks if referenced tables exist
3. Validates column types
4. Returns PASS if clean, FAIL with blocking issues if not
5. Compact output for automation
## Communication Style
Technical and precise. Report findings with exact locations, specific violations, and actionable fixes:
- "Table `census_demographics` column `population` is `varchar(50)` in PostgreSQL but referenced as `integer` in `stg_census.sql` line 14. This will cause a runtime cast error."
- "Model `dim_neighbourhoods` has no `unique` test on `neighbourhood_id`. Add to `schema.yml` to prevent duplicates."
- "Spatial extent for `toronto_boundaries` shows global coordinates (-180 to 180). Expected Toronto bbox (~-79.6 to -79.1 longitude). Likely missing ST_Transform or wrong SRID on import."

View File

@@ -1,7 +1,6 @@
--- ---
name: data-analysis name: data-analysis
description: Data analysis specialist for exploration and profiling description: Data analysis specialist for exploration and profiling
model: sonnet
--- ---
# Data Analysis Agent # Data Analysis Agent

View File

@@ -1,9 +1,3 @@
---
name: data-ingestion
description: Data ingestion specialist for loading, transforming, and preparing data for analysis.
model: haiku
---
# Data Ingestion Agent # Data Ingestion Agent
You are a data ingestion specialist. Your role is to help users load, transform, and prepare data for analysis. You are a data ingestion specialist. Your role is to help users load, transform, and prepare data for analysis.

View File

@@ -1,105 +0,0 @@
---
description: Data integrity compliance gate (pass/fail) for sprint execution
gate_contract: v1
arguments:
- name: path
description: File or directory to validate
required: true
---
# /data-gate
Binary pass/fail validation for data integrity compliance. Used by projman orchestrator during sprint execution to gate issue completion.
## Usage
```
/data-gate <path>
```
**Examples:**
```
/data-gate ./dbt/models/staging/
/data-gate ./portfolio_app/toronto/parsers/
/data-gate ./dbt/
```
## What It Does
1. **Activates** the `data-advisor` agent in gate mode
2. **Loads** the `skills/data-integrity-audit.md` skill
3. **Determines scope** from target path:
- dbt project directory: full dbt validation (parse, compile, test, lineage)
- Python files with database operations: schema validation
- SQL files: dbt model validation
- Mixed: all applicable checks
4. **Checks only FAIL-level violations:**
- dbt parse failures (project broken)
- dbt compilation errors (SQL invalid)
- Missing tables/columns referenced in code
- Data type mismatches that cause runtime errors
- Broken lineage (orphaned model references)
- PostGIS SRID mismatches
5. **Returns binary result:**
- `PASS` - No blocking violations found
- `FAIL` - One or more blocking violations
## Output
### On PASS
```
DATA GATE: PASS
No blocking data integrity violations found.
```
### On FAIL
```
DATA GATE: FAIL
Blocking Issues (2):
1. dbt/models/staging/stg_census.sql - Compilation error: column 'census_yr' not found
Fix: Column was renamed to 'census_year' in source table. Update model.
2. portfolio_app/toronto/loaders/census.py:67 - References table 'census_raw' which does not exist
Fix: Table was renamed to 'census_demographics' in migration 003.
Run /data-review for full audit report.
```
## Integration with projman
This command is automatically invoked by the projman orchestrator when:
1. An issue has the `Domain/Data` label
2. The orchestrator is about to mark the issue as complete
3. The orchestrator passes the path of changed files
**Gate behavior:**
- PASS: Issue can be marked complete
- FAIL: Issue stays open, blocker comment added with failure details
## Differences from /data-review
| Aspect | /data-gate | /data-review |
|--------|------------|--------------|
| Output | Binary PASS/FAIL | Detailed report with all severities |
| Severity | FAIL only | FAIL + WARN + INFO |
| Purpose | Automation gate | Human review |
| Verbosity | Minimal | Comprehensive |
| Speed | Skips INFO checks | Full scan |
## When to Use
- **Sprint execution**: Automatic quality gates via projman
- **CI/CD pipelines**: Automated data integrity checks
- **Quick validation**: Fast pass/fail without full report
- **Pre-merge checks**: Verify data changes before integration
For detailed findings including warnings and suggestions, use `/data-review` instead.
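For CI/CD use, a thin wrapper can translate the gate text into an exit status. A sketch that assumes the gate output has been captured on stdin; no such wrapper ships with the plugin:
```python
import re
import sys

def gate_exit_code(output: str) -> int:
    """'DATA GATE: PASS' -> 0, 'DATA GATE: FAIL' -> 1, anything else -> 2."""
    verdict = re.search(r"DATA GATE:\s*(PASS|FAIL)", output)
    if verdict is None:
        return 2  # unrecognized output should never pass silently
    return 0 if verdict.group(1) == "PASS" else 1

if __name__ == "__main__":
    sys.exit(gate_exit_code(sys.stdin.read()))
```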
## Requirements
- data-platform MCP server must be running
- For dbt checks: dbt project must be configured (auto-detected via `dbt_project.yml`)
- For PostgreSQL checks: connection configured in `~/.config/claude/postgres.env`
- If database or dbt unavailable: applicable checks skipped with warning (non-blocking degradation)

View File

@@ -1,149 +0,0 @@
---
description: Audit data integrity, schema validity, and dbt compliance
arguments:
- name: path
description: File, directory, or dbt project to audit
required: true
---
# /data-review
Comprehensive data integrity audit producing a detailed report with findings at all severity levels. For human review and standalone codebase auditing.
## Usage
```
/data-review <path>
```
**Examples:**
```
/data-review ./dbt/
/data-review ./portfolio_app/toronto/
/data-review ./dbt/models/marts/
```
## What It Does
1. **Activates** the `data-advisor` agent in review mode
2. **Scans target path** to determine scope:
- Identifies dbt project files (.sql models, schema.yml, sources.yml)
- Identifies Python files with database operations
- Identifies migration files
- Identifies PostGIS usage
3. **Runs all check categories:**
- Schema validity (PostgreSQL tables, columns, types)
- dbt project health (parse, compile, test, lineage)
- PostGIS compliance (SRID, geometry types, extent)
- Data type consistency
- Code patterns (unsafe SQL, hardcoded queries)
4. **Produces detailed report** with all severity levels (FAIL, WARN, INFO)
5. **Provides actionable recommendations** for each finding
## Output Format
```
+----------------------------------------------------------------------+
| DATA-PLATFORM - Data Integrity Audit |
| /path/to/project |
+----------------------------------------------------------------------+
Target: /path/to/project
Scope: N files scanned, N models checked, N tables verified
FINDINGS
FAIL (N)
1. [location] violation description
Fix: actionable fix
WARN (N)
1. [location] warning description
Suggestion: improvement suggestion
INFO (N)
1. [location] info description
Note: context
SUMMARY
Schema: Valid | N issues
Lineage: Intact | N orphaned
dbt: Passes | N failures
PostGIS: Valid | N issues | Not applicable
VERDICT: PASS | FAIL (N blocking issues)
```
## When to Use
### Before Sprint Planning
Audit data layer health to identify tech debt and inform sprint scope.
```
/data-review ./dbt/
```
### During Code Review
Get detailed data integrity findings alongside code review comments.
```
/data-review ./dbt/models/staging/stg_new_source.sql
```
### After Migrations
Verify schema changes didn't break anything downstream.
```
/data-review ./migrations/
```
### Periodic Health Checks
Regular data infrastructure audits for proactive maintenance.
```
/data-review ./data_pipeline/
```
### New Project Onboarding
Understand the current state of data architecture.
```
/data-review .
```
## Severity Levels
| Level | Meaning | Gate Impact |
|-------|---------|-------------|
| **FAIL** | Blocking issues that will cause runtime errors | Would block `/data-gate` |
| **WARN** | Quality issues that should be addressed | Does not block gate |
| **INFO** | Suggestions for improvement | Does not block gate |
## Differences from /data-gate
`/data-review` gives you the full picture. `/data-gate` gives the orchestrator a yes/no.
| Aspect | /data-gate | /data-review |
|--------|------------|--------------|
| Output | Binary PASS/FAIL | Detailed report |
| Severity | FAIL only | FAIL + WARN + INFO |
| Purpose | Automation | Human review |
| Verbosity | Minimal | Comprehensive |
| Speed | Fast (skips INFO) | Thorough |
Use `/data-review` when you want to understand.
Use `/data-gate` when you want to automate.
## Requirements
- data-platform MCP server must be running
- For dbt checks: dbt project must be configured (auto-detected via `dbt_project.yml`)
- For PostgreSQL checks: connection configured in `~/.config/claude/postgres.env`
**Graceful degradation:** If database or dbt unavailable, applicable checks are skipped with a note in the report rather than failing entirely.
## Skills Used
- `skills/data-integrity-audit.md` - Audit rules and patterns
- `skills/mcp-tools-reference.md` - MCP tool reference
## Related Commands
- `/data-gate` - Binary pass/fail for automation
- `/lineage` - Visualize dbt model dependencies
- `/schema` - Explore database schema

View File

@@ -5,26 +5,11 @@
PREFIX="[data-platform]" PREFIX="[data-platform]"
# Check if MCP venv exists - check cache first, then local # Check if MCP venv exists
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/data-platform/.venv/bin/python"
PLUGIN_ROOT="${CLAUDE_PLUGIN_ROOT:-$(dirname "$(dirname "$(realpath "$0")")")}" PLUGIN_ROOT="${CLAUDE_PLUGIN_ROOT:-$(dirname "$(dirname "$(realpath "$0")")")}"
MARKETPLACE_ROOT="$(dirname "$(dirname "$PLUGIN_ROOT")")" VENV_PATH="$PLUGIN_ROOT/mcp-servers/data-platform/.venv/bin/python"
LOCAL_VENV="$MARKETPLACE_ROOT/mcp-servers/data-platform/.venv/bin/python"
# Check cache first (preferred), then local symlink if [[ ! -f "$VENV_PATH" ]]; then
CACHE_VENV_DIR="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/data-platform/.venv"
LOCAL_VENV_DIR="$MARKETPLACE_ROOT/mcp-servers/data-platform/.venv"
if [[ -f "$CACHE_VENV" ]]; then
VENV_PATH="$CACHE_VENV"
# Auto-create symlink in installed marketplace if missing
if [[ ! -e "$LOCAL_VENV_DIR" && -d "$CACHE_VENV_DIR" ]]; then
mkdir -p "$(dirname "$LOCAL_VENV_DIR")" 2>/dev/null
ln -sf "$CACHE_VENV_DIR" "$LOCAL_VENV_DIR" 2>/dev/null
fi
elif [[ -f "$LOCAL_VENV" ]]; then
VENV_PATH="$LOCAL_VENV"
else
echo "$PREFIX MCP venv missing - run /initial-setup or setup.sh" echo "$PREFIX MCP venv missing - run /initial-setup or setup.sh"
exit 0 exit 0
fi fi

View File

@@ -1,307 +0,0 @@
---
name: data-integrity-audit
description: Rules and patterns for auditing data integrity, schema validity, and dbt compliance
---
# Data Integrity Audit
## Purpose
Defines what "data valid" means for the data-platform domain. This skill is loaded by the `data-advisor` agent for both review and gate modes during sprint execution and standalone audits.
---
## What to Check
| Check Category | What It Validates | MCP Tools Used |
|----------------|-------------------|----------------|
| **Schema Validity** | Tables exist, columns have correct types, constraints present, no orphaned columns | `pg_tables`, `pg_columns`, `pg_schemas` |
| **dbt Project Health** | Project parses without errors, models compile, tests defined for critical models | `dbt_parse`, `dbt_compile`, `dbt_test`, `dbt_ls` |
| **Lineage Integrity** | No orphaned models (referenced but missing), no circular dependencies, upstream sources exist | `dbt_lineage`, `dbt_ls` |
| **Data Type Consistency** | DataFrame dtypes match expected schema, no silent type coercion, date formats consistent | `describe`, `head`, `pg_columns` |
| **PostGIS Compliance** | Spatial tables have correct SRID, geometry types match expectations, extent is reasonable | `st_tables`, `st_geometry_type`, `st_srid`, `st_extent` |
| **Query Safety** | SELECT queries used for reads (not raw SQL for mutations), parameterized patterns | Code review - manual pattern check |
---
## Common Violations
### FAIL-Level Violations (Block Gate)
| Violation | Detection Method | Example |
|-----------|-----------------|---------|
| dbt parse failure | `dbt_parse` returns error | Project YAML invalid, missing ref targets |
| dbt compilation error | `dbt_compile` fails | SQL syntax error, undefined column reference |
| Missing table/column | `pg_tables`, `pg_columns` lookup | Code references `census_raw` but table doesn't exist |
| Type mismatch | Compare `pg_columns` vs dbt schema | Column is `varchar` in DB but model expects `integer` |
| Broken lineage | `dbt_lineage` shows orphaned refs | Model references `stg_old_format` which doesn't exist |
| PostGIS SRID mismatch | `st_srid` returns unexpected value | Geometry column has SRID 0 instead of 4326 |
| Unreasonable spatial extent | `st_extent` returns global bbox | Toronto data shows coordinates in China |
### WARN-Level Violations (Report, Don't Block)
| Violation | Detection Method | Example |
|-----------|-----------------|---------|
| Missing dbt tests | `dbt_ls` shows model without test | `dim_customers` has no `unique` test on `customer_id` |
| Undocumented columns | dbt schema.yml missing descriptions | Model columns have no documentation |
| Schema drift | `pg_columns` vs dbt schema.yml | Column exists in DB but not in dbt YAML |
| Hardcoded SQL | Scan Python for string concatenation | `f"SELECT * FROM {table}"` without parameterization |
| Orphaned model | `dbt_lineage` shows no downstream | `stg_legacy` has no consumers and no exposure |
### INFO-Level Violations (Suggestions Only)
| Violation | Detection Method | Example |
|-----------|-----------------|---------|
| Missing indexes | Query pattern suggests need | Frequent filter on non-indexed column |
| Documentation gaps | dbt docs incomplete | Missing model description |
| Unused models | `dbt_ls` vs actual queries | Model exists but never selected |
| Optimization opportunity | `describe` shows data patterns | Column has low cardinality, could be enum |
---
## Severity Classification
| Severity | When to Apply | Gate Behavior |
|----------|--------------|---------------|
| **FAIL** | Broken lineage, models that won't compile, missing tables/columns, data type mismatches that cause runtime errors, invalid SRID | Blocks issue completion |
| **WARN** | Missing dbt tests, undocumented columns, schema drift, hardcoded SQL, orphaned models | Does NOT block gate, included in review report |
| **INFO** | Optimization opportunities, documentation gaps, unused models | Review report only |
### Severity Decision Tree
```
Is the dbt project broken (parse/compile fails)?
YES -> FAIL
NO -> Does code reference non-existent tables/columns?
YES -> FAIL
NO -> Would this cause a runtime error?
YES -> FAIL
NO -> Does it violate data quality standards?
YES -> WARN
NO -> Is it an optimization/documentation suggestion?
YES -> INFO
NO -> Not a violation
```
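Read as code, the tree collapses to a short classifier. A sketch with boolean inputs standing in for the checks above:
```python
def classify_severity(parse_or_compile_fails: bool,
                      missing_table_or_column: bool,
                      causes_runtime_error: bool,
                      violates_quality_standard: bool,
                      is_suggestion_only: bool) -> str:
    """Mirror of the decision tree: FAIL beats WARN beats INFO."""
    if parse_or_compile_fails or missing_table_or_column or causes_runtime_error:
        return "FAIL"
    if violates_quality_standard:
        return "WARN"
    if is_suggestion_only:
        return "INFO"
    return "OK"  # not a violation
```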
---
## Scanning Strategy
### For dbt Projects
1. **Parse validation** (ALWAYS FIRST)
```
dbt_parse → if fails, immediate FAIL (project is broken)
```
2. **Catalog resources**
```
dbt_ls → list all models, tests, sources, exposures
```
3. **Lineage check**
```
dbt_lineage on changed models → check upstream/downstream integrity
```
4. **Compilation check**
```
dbt_compile on changed models → verify SQL renders correctly
```
5. **Test execution**
```
dbt_test --select <changed_models> → verify tests pass
```
6. **Test coverage audit**
```
Cross-reference dbt_ls tests against model list → flag models without tests (WARN)
```
### For PostgreSQL Schema Changes
1. **Table verification**
```
pg_tables → verify expected tables exist
```
2. **Column validation**
```
pg_columns on affected tables → verify types match expectations
```
3. **Schema comparison**
```
Compare pg_columns output against dbt schema.yml → flag drift
```
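The drift comparison itself is a set operation once both sides are in hand. A sketch assuming each side has been reduced to a `{column_name: data_type}` dict; the real inputs come from `pg_columns` and `schema.yml`, so a small adapter would be needed:
```python
def detect_schema_drift(db_columns: dict, dbt_columns: dict) -> list:
    """Compare database columns against the columns declared in dbt."""
    findings = []
    for col in sorted(set(db_columns) - set(dbt_columns)):
        findings.append(f"WARN: column '{col}' exists in database but not in schema.yml")
    for col in sorted(set(dbt_columns) - set(db_columns)):
        findings.append(f"FAIL: column '{col}' declared in dbt but missing from database")
    for col in sorted(set(db_columns) & set(dbt_columns)):
        if db_columns[col] != dbt_columns[col]:
            findings.append(
                f"FAIL: column '{col}' is {db_columns[col]} in database "
                f"but {dbt_columns[col]} in dbt"
            )
    return findings
```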
### For PostGIS/Spatial Data
1. **Spatial table scan**
```
st_tables → list tables with geometry columns
```
2. **SRID validation**
```
st_srid → verify SRID is correct for expected region
Expected: 4326 (WGS84) for GPS data, local projections for regional data
```
3. **Geometry type check**
```
st_geometry_type → verify expected types (Point, Polygon, etc.)
```
4. **Extent sanity check**
```
st_extent → verify bounding box is reasonable for expected region
Toronto data should be ~(-79.6 to -79.1, 43.6 to 43.9)
```
### For DataFrame/pandas Operations
1. **Data quality check**
```
describe → check for unexpected nulls, type issues, outliers
```
2. **Structure verification**
```
head → verify data structure matches expectations
```
3. **Memory management**
```
list_data → verify no stale DataFrames from previous failed runs
```
### For Python Code (Manual Scan)
1. **SQL injection patterns**
- Scan for f-strings with table/column names
- Check for string concatenation in queries
- Look for `.format()` calls with SQL
2. **Mutation safety**
- `pg_execute` usage should be intentional, not accidental
- Verify DELETE/UPDATE have WHERE clauses
3. **Credential exposure**
- No hardcoded connection strings
- No credentials in code (check for `.env` usage)
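These manual checks lend themselves to a small line-based pattern scan. A rough sketch; the regexes are illustrative, work one line at a time, and will miss constructions a human reviewer would catch:
```python
import re

UNSAFE_PATTERNS = [
    (re.compile(r'f["\'].*\b(SELECT|INSERT|UPDATE|DELETE)\b.*\{', re.IGNORECASE),
     "f-string SQL with interpolated values"),
    (re.compile(r'\b(SELECT|INSERT|UPDATE|DELETE)\b.*(["\']\s*\+|\.format\()', re.IGNORECASE),
     "SQL built by concatenation or str.format()"),
    (re.compile(r'\b(DELETE\s+FROM|UPDATE)\b(?!.*\bWHERE\b)', re.IGNORECASE),
     "DELETE/UPDATE with no WHERE clause on the same line"),
    (re.compile(r'postgres(ql)?://[^@\s]+:[^@\s]+@', re.IGNORECASE),
     "hardcoded connection string with credentials"),
]

def scan_python_source(source: str) -> list:
    """Return (line_number, issue) pairs for the patterns above."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, issue in UNSAFE_PATTERNS:
            if pattern.search(line):
                findings.append((lineno, issue))
    return findings
```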
---
## Report Templates
### Gate Mode (Compact)
```
DATA GATE: PASS
No blocking data integrity violations found.
```
or
```
DATA GATE: FAIL
Blocking Issues (N):
1. <location> - <violation description>
Fix: <actionable fix>
2. <location> - <violation description>
Fix: <actionable fix>
Run /data-review for full audit report.
```
### Review Mode (Detailed)
```
+----------------------------------------------------------------------+
| DATA-PLATFORM - Data Integrity Audit |
| [Target Path] |
+----------------------------------------------------------------------+
Target: <scanned path or project>
Scope: N files scanned, N models checked, N tables verified
FINDINGS
FAIL (N)
1. [location] violation description
Fix: actionable fix
2. [location] violation description
Fix: actionable fix
WARN (N)
1. [location] warning description
Suggestion: improvement suggestion
2. [location] warning description
Suggestion: improvement suggestion
INFO (N)
1. [location] info description
Note: context
SUMMARY
Schema: Valid | N issues
Lineage: Intact | N orphaned
dbt: Passes | N failures
PostGIS: Valid | N issues | Not applicable
VERDICT: PASS | FAIL (N blocking issues)
```
---
## Skip Patterns
Do not flag violations in:
- `**/tests/**` - Test files may have intentional violations
- `**/__pycache__/**` - Compiled files
- `**/fixtures/**` - Test fixtures
- `**/.scratch/**` - Temporary working files
- Files with `# noqa: data-audit` comment
- Migration files marked as historical
---
## Error Handling
| Scenario | Behavior |
|----------|----------|
| Database not reachable (`pg_connect` fails) | WARN, skip PostgreSQL checks, continue with file-based |
| dbt not configured (no `dbt_project.yml`) | Skip dbt checks entirely, not an error |
| No PostGIS tables found | Skip PostGIS checks, not an error |
| MCP tool call fails | Report as WARN with tool name, continue with remaining checks |
| No data files in scanned path | Report "No data artifacts found" - PASS (nothing to fail) |
| Empty directory | Report "No files found in path" - PASS |
---
## Integration Notes
### projman Orchestrator
When called as a domain gate:
1. Orchestrator detects `Domain/Data` label on issue
2. Orchestrator identifies changed files
3. Orchestrator invokes `/data-gate <path>`
4. Agent runs gate mode scan
5. Returns PASS/FAIL to orchestrator
6. Orchestrator decides whether to complete issue
### Standalone Usage
For manual audits:
1. User runs `/data-review <path>`
2. Agent runs full review mode scan
3. Returns detailed report with all severity levels
4. User decides on actions

View File

@@ -1,7 +1,7 @@
{ {
"name": "doc-guardian", "name": "doc-guardian",
"description": "Automatic documentation drift detection and synchronization", "description": "Automatic documentation drift detection and synchronization",
"version": "1.1.0", "version": "1.0.0",
"author": { "author": {
"name": "Leo Miranda", "name": "Leo Miranda",
"email": "leobmiranda@gmail.com" "email": "leobmiranda@gmail.com"

View File

@@ -1,7 +1,5 @@
--- ---
name: doc-analyzer description: Specialized agent for documentation analysis and drift detection
description: Specialized agent for documentation analysis and drift detection. Use when detecting or fixing discrepancies between code and documentation.
model: sonnet
--- ---
# Documentation Analyzer Agent # Documentation Analyzer Agent

View File

@@ -1,6 +1,6 @@
{ {
"name": "git-flow", "name": "git-flow",
"version": "1.2.0", "version": "1.0.0",
"description": "Git workflow automation with intelligent commit messages and branch management", "description": "Git workflow automation with intelligent commit messages and branch management",
"author": { "author": {
"name": "Leo Miranda", "name": "Leo Miranda",

View File

@@ -1,9 +1,3 @@
---
name: git-assistant
description: Git workflow assistant for complex git operations, conflict resolution, and repository history management.
model: haiku
---
# Git Assistant Agent # Git Assistant Agent
## Visual Output Requirements ## Visual Output Requirements

View File

@@ -1,9 +1,3 @@
---
name: coordinator
description: Review coordinator that orchestrates the multi-agent PR review process. Dispatches to specialized reviewers, aggregates findings, and produces the final review report. Use proactively after code changes.
model: sonnet
---
# Coordinator Agent # Coordinator Agent
## Visual Output Requirements ## Visual Output Requirements

View File

@@ -1,9 +1,3 @@
---
name: maintainability-auditor
description: Identifies code complexity, duplication, naming issues, and architecture concerns in PR changes.
model: haiku
---
# Maintainability Auditor Agent # Maintainability Auditor Agent
## Visual Output Requirements ## Visual Output Requirements

View File

@@ -1,9 +1,3 @@
---
name: performance-analyst
description: Performance-focused code reviewer that identifies performance issues, inefficiencies, and optimization opportunities.
model: sonnet
---
# Performance Analyst Agent # Performance Analyst Agent
## Visual Output Requirements ## Visual Output Requirements

View File

@@ -1,7 +1,6 @@
--- ---
name: security-reviewer name: security-reviewer
description: Security-focused code reviewer for PR analysis description: Security-focused code reviewer for PR analysis
model: sonnet
--- ---
# Security Reviewer Agent # Security Reviewer Agent

View File

@@ -1,9 +1,3 @@
---
name: test-validator
description: Test quality reviewer that validates test coverage, test quality, and testing practices in PR changes.
model: haiku
---
# Test Validator Agent # Test Validator Agent
## Visual Output Requirements ## Visual Output Requirements

View File

@@ -5,18 +5,11 @@
PREFIX="[pr-review]" PREFIX="[pr-review]"
# Check if MCP venv exists - check cache first, then local # Check if MCP venv exists
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/gitea/.venv/bin/python"
PLUGIN_ROOT="${CLAUDE_PLUGIN_ROOT:-$(dirname "$(dirname "$(realpath "$0")")")}" PLUGIN_ROOT="${CLAUDE_PLUGIN_ROOT:-$(dirname "$(dirname "$(realpath "$0")")")}"
MARKETPLACE_ROOT="$(dirname "$(dirname "$PLUGIN_ROOT")")" VENV_PATH="$PLUGIN_ROOT/mcp-servers/gitea/.venv/bin/python"
LOCAL_VENV="$MARKETPLACE_ROOT/mcp-servers/gitea/.venv/bin/python"
# Check cache first (preferred), then local if [[ ! -f "$VENV_PATH" ]]; then
if [[ -f "$CACHE_VENV" ]]; then
VENV_PATH="$CACHE_VENV"
elif [[ -f "$LOCAL_VENV" ]]; then
VENV_PATH="$LOCAL_VENV"
else
echo "$PREFIX MCP venvs missing - run setup.sh from installed marketplace" echo "$PREFIX MCP venvs missing - run setup.sh from installed marketplace"
exit 0 exit 0
fi fi

View File

@@ -1,6 +1,6 @@
{ {
"name": "projman", "name": "projman",
"version": "3.4.0", "version": "3.3.0",
"description": "Sprint planning and project management with Gitea integration", "description": "Sprint planning and project management with Gitea integration",
"author": { "author": {
"name": "Leo Miranda", "name": "Leo Miranda",

View File

@@ -1,5 +0,0 @@
2026-01-30T14:32:53 | skills | /home/lmiranda/claude-plugins-work/plugins/projman/skills/setup-workflows.md | README.md
2026-01-30T14:32:53 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/setup.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-01-30T14:32:54 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/debug.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-01-30T14:32:54 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/test.md | docs/COMMANDS-CHEATSHEET.md README.md
2026-01-30T14:33:13 | commands | /home/lmiranda/claude-plugins-work/plugins/projman/commands/sprint-status.md | docs/COMMANDS-CHEATSHEET.md README.md

View File

@@ -1,7 +1,6 @@
--- ---
name: code-reviewer name: code-reviewer
description: Pre-sprint code quality review agent description: Pre-sprint code quality review agent
model: sonnet
--- ---
# Code Reviewer Agent # Code Reviewer Agent
@@ -12,8 +11,6 @@ You are the **Code Reviewer Agent** - a thorough, practical reviewer who ensures
- skills/review-checklist.md - skills/review-checklist.md
- skills/test-standards.md - skills/test-standards.md
- skills/sprint-lifecycle.md
- skills/visual-output.md
## Your Personality ## Your Personality
@@ -31,10 +28,14 @@ You are the **Code Reviewer Agent** - a thorough, practical reviewer who ensures
## Visual Output ## Visual Output
See `skills/visual-output.md` for header templates. Use the **Code Reviewer** row from the Phase Registry: Display header at start of every response:
- Phase Emoji: Magnifier ```
- Phase Name: REVIEW ╔══════════════════════════════════════════════════════════════════╗
- Context: Sprint Name ║ 📋 PROJMAN ║
║ 🏁 CLOSING ║
║ Code Review ║
╚══════════════════════════════════════════════════════════════════╝
```
## Your Responsibilities ## Your Responsibilities

View File

@@ -1,7 +1,6 @@
--- ---
name: executor name: executor
description: Implementation executor agent - precise implementation guidance and code quality description: Implementation executor agent - precise implementation guidance and code quality
model: sonnet
--- ---
# Implementation Executor Agent # Implementation Executor Agent
@@ -16,7 +15,6 @@ You are the **Executor Agent** - an implementation-focused specialist who writes
- skills/progress-tracking.md - skills/progress-tracking.md
- skills/runaway-detection.md - skills/runaway-detection.md
- skills/lessons-learned.md - skills/lessons-learned.md
- skills/visual-output.md
## Your Personality ## Your Personality
@@ -34,10 +32,14 @@ You are the **Executor Agent** - an implementation-focused specialist who writes
## Visual Output ## Visual Output
See `skills/visual-output.md` for header templates. Use the **Executor** row from the Phase Registry: Display header at start of every response:
- Phase Emoji: Wrench ```
- Phase Name: IMPLEMENTING ╔══════════════════════════════════════════════════════════════════╗
- Context: Issue Title ║ 📋 PROJMAN ║
║ ⚡ EXECUTION ║
║ [Issue Title] ║
╚══════════════════════════════════════════════════════════════════╝
```
## Your Responsibilities ## Your Responsibilities

View File

@@ -1,7 +1,6 @@
--- ---
name: orchestrator name: orchestrator
description: Sprint orchestration agent - coordinates execution and tracks progress description: Sprint orchestration agent - coordinates execution and tracks progress
model: sonnet
--- ---
# Sprint Orchestration Agent # Sprint Orchestration Agent
@@ -20,8 +19,6 @@ You are the **Orchestrator Agent** - a concise, action-oriented coordinator who
- skills/runaway-detection.md - skills/runaway-detection.md
- skills/wiki-conventions.md - skills/wiki-conventions.md
- skills/domain-consultation.md - skills/domain-consultation.md
- skills/sprint-lifecycle.md
- skills/visual-output.md
## Your Personality ## Your Personality
@@ -39,17 +36,28 @@ You are the **Orchestrator Agent** - a concise, action-oriented coordinator who
## Visual Output ## Visual Output
See `skills/visual-output.md` for header templates. Use the **Orchestrator** row from the Phase Registry: Display header at start of every response:
- Phase Emoji: Lightning ```
- Phase Name: EXECUTION ╔══════════════════════════════════════════════════════════════════╗
- Context: Sprint Name ║ 📋 PROJMAN ║
║ ⚡ EXECUTION ║
║ [Sprint Name] ║
╚══════════════════════════════════════════════════════════════════╝
```
Also use the Progress Block format from `skills/visual-output.md` during sprint execution. Progress block format:
```
┌─ Sprint Progress ────────────────────────────────────────────────┐
│ [Sprint Name] │
│ ████████████░░░░░░░░░░░░░░░░░░ 40% complete │
│ ✅ Done: 4 ⏳ Active: 2 ⬚ Pending: 4 │
└──────────────────────────────────────────────────────────────────┘
```
## Your Responsibilities ## Your Responsibilities
### 1. Verify Approval (Sprint Start) ### 1. Verify Approval (Sprint Start)
Execute `skills/sprint-approval.md` - Check milestone for approval record. **STOP execution if approval is missing** unless user provided `--force` flag. Execute `skills/sprint-approval.md` - Check milestone for approval record.
### 2. Detect Checkpoints (Sprint Start) ### 2. Detect Checkpoints (Sprint Start)
Check for resume points from interrupted sessions. Check for resume points from interrupted sessions.
@@ -67,6 +75,7 @@ Execute `skills/dependency-management.md` - Check for file conflicts before para
Execute `skills/progress-tracking.md` - Manage status labels, parse progress comments. Execute `skills/progress-tracking.md` - Manage status labels, parse progress comments.
### 6.5. Domain Gate Checks ### 6.5. Domain Gate Checks
Execute `skills/domain-consultation.md` (Execution Gate Protocol section): Execute `skills/domain-consultation.md` (Execution Gate Protocol section):
1. **Before marking any issue as complete**, check for `Domain/*` labels 1. **Before marking any issue as complete**, check for `Domain/*` labels
@@ -97,13 +106,6 @@ Execute `skills/wiki-conventions.md` - Update implementation status.
### 10. Git Operations (Sprint Close) ### 10. Git Operations (Sprint Close)
Execute `skills/git-workflow.md` - Merge, tag, clean up branches. Execute `skills/git-workflow.md` - Merge, tag, clean up branches.
### 11. Maintain Dispatch Log
Execute `skills/progress-tracking.md` (Sprint Dispatch Log section):
- Create dispatch log header at sprint start
- Append row on every task dispatch, completion, failure, and domain gate check
- On sprint resume: add "Resumed" row with checkpoint context
- Log is posted as comments, one `add_comment` per event
## Critical Reminders ## Critical Reminders
1. **NEVER use CLI tools** - Use MCP tools exclusively (see `skills/mcp-tools-reference.md`) 1. **NEVER use CLI tools** - Use MCP tools exclusively (see `skills/mcp-tools-reference.md`)

View File

@@ -1,7 +1,6 @@
--- ---
name: planner name: planner
description: Sprint planning agent - thoughtful architecture analysis and issue creation description: Sprint planning agent - thoughtful architecture analysis and issue creation
model: sonnet
--- ---
# Sprint Planning Agent # Sprint Planning Agent
@@ -21,9 +20,6 @@ You are the **Planner Agent** - a methodical architect who thoroughly analyzes r
- skills/sprint-approval.md - skills/sprint-approval.md
- skills/planning-workflow.md - skills/planning-workflow.md
- skills/label-taxonomy/labels-reference.md - skills/label-taxonomy/labels-reference.md
- skills/domain-consultation.md
- skills/sprint-lifecycle.md
- skills/visual-output.md
## Your Personality ## Your Personality
@@ -41,10 +37,14 @@ You are the **Planner Agent** - a methodical architect who thoroughly analyzes r
## Visual Output ## Visual Output
See `skills/visual-output.md` for header templates. Use the **Planner** row from the Phase Registry: Display header at start of every response:
- Phase Emoji: Target ```
- Phase Name: PLANNING ╔══════════════════════════════════════════════════════════════════╗
- Context: Sprint Name or Goal ║ 📋 PROJMAN ║
║ 🎯 PLANNING ║
║ [Sprint Name or Goal] ║
╚══════════════════════════════════════════════════════════════════╝
```
## Your Responsibilities ## Your Responsibilities
@@ -66,25 +66,10 @@ Execute `skills/wiki-conventions.md` - Create proposal and implementation pages.
### 6. Task Sizing ### 6. Task Sizing
Execute `skills/task-sizing.md` - **REFUSE to create L/XL tasks without breakdown.** Execute `skills/task-sizing.md` - **REFUSE to create L/XL tasks without breakdown.**
### 7. Domain Consultation ### 7. Issue Creation
Execute `skills/domain-consultation.md` (Planning Protocol section):
1. **After drafting issues but BEFORE creating them in Gitea**
2. **Analyze each issue for domain signals:**
- Check planned labels for `Component/Frontend`, `Component/UI` -> Domain/Viz
- Check planned labels for `Component/Database`, `Component/Data` -> Domain/Data
- Scan issue description for domain keywords (see skill for full list)
3. **For detected domains, append acceptance criteria:**
- Domain/Viz: Design System Compliance checklist
- Domain/Data: Data Integrity checklist
4. **Add corresponding `Domain/*` label** to the issue's label set
5. **Document in planning summary** which issues have domain gates active
### 8. Issue Creation
Execute `skills/issue-conventions.md` - Use proper format with wiki references. Execute `skills/issue-conventions.md` - Use proper format with wiki references.
### 9. Request Approval ### 8. Request Approval
Execute `skills/sprint-approval.md` - Planning DOES NOT equal execution permission. Execute `skills/sprint-approval.md` - Planning DOES NOT equal execution permission.
## Critical Reminders ## Critical Reminders
@@ -96,7 +81,6 @@ Execute `skills/sprint-approval.md` - Planning DOES NOT equal execution permissi
5. **ALWAYS search lessons** - Past experience informs better planning 5. **ALWAYS search lessons** - Past experience informs better planning
6. **ALWAYS include wiki reference** - Every issue links to implementation wiki page 6. **ALWAYS include wiki reference** - Every issue links to implementation wiki page
7. **ALWAYS use proper title format** - `[Sprint XX] <type>: <description>` 7. **ALWAYS use proper title format** - `[Sprint XX] <type>: <description>`
8. **ALWAYS check domain signals** - Every issue gets checked for viz/data domain applicability before creation
## Your Mission ## Your Mission

View File

@@ -12,11 +12,11 @@ This project uses the **projman** plugin for sprint planning and project managem
| `/sprint-close` | Complete sprint and capture lessons learned to Gitea Wiki | | `/sprint-close` | Complete sprint and capture lessons learned to Gitea Wiki |
| `/labels-sync` | Synchronize label taxonomy from Gitea | | `/labels-sync` | Synchronize label taxonomy from Gitea |
| `/initial-setup` | Run initial setup for projman plugin | | `/initial-setup` | Run initial setup for projman plugin |
| `/rfc create` | Create new RFC from conversation or clarified spec | | `/rfc-create` | Create new RFC from conversation or clarified spec |
| `/rfc list` | List all RFCs grouped by status | | `/rfc-list` | List all RFCs grouped by status |
| `/rfc review` | Submit Draft RFC for review | | `/rfc-review` | Submit Draft RFC for review |
| `/rfc approve` | Approve RFC for sprint planning | | `/rfc-approve` | Approve RFC for sprint planning |
| `/rfc reject` | Reject RFC with documented reason | | `/rfc-reject` | Reject RFC with documented reason |
### MCP Tools Available ### MCP Tools Available

View File

@@ -0,0 +1,34 @@
---
description: Clear plugin cache to force fresh configuration reload after marketplace updates
---
# Clear Cache
## Purpose
Clear plugin cache to force fresh configuration reload after marketplace updates.
## When to Use
- After updating the marketplace (`git pull` or reinstall)
- When MCP servers show stale configuration
- When plugin changes don't take effect
## Workflow
Execute cache clear:
```bash
rm -rf ~/.claude/plugins/cache/leo-claude-mktplace/
```
Then inform user: "Cache cleared. Restart Claude Code for changes to take effect."
## Visual Output
```
╔══════════════════════════════════════════════════════════════════╗
║ 📋 PROJMAN ║
║ Clear Cache ║
╚══════════════════════════════════════════════════════════════════╝
```

View File

@@ -8,7 +8,6 @@ agent: code-reviewer
## Skills Required ## Skills Required
- skills/review-checklist.md - skills/review-checklist.md
- skills/sprint-lifecycle.md
## Purpose ## Purpose
@@ -20,7 +19,6 @@ Run `/review` before `/sprint-close` to catch issues.
## Workflow ## Workflow
0. **Check Lifecycle State** - Execute `skills/sprint-lifecycle.md` check protocol. Expect `Sprint/Executing`. Set `Sprint/Reviewing` after review begins. Warn if in wrong state (allow with `--force`).
1. **Determine Scope** - Sprint files or recent commits (`git diff --name-only HEAD~5`) 1. **Determine Scope** - Sprint files or recent commits (`git diff --name-only HEAD~5`)
2. **Read Files** - Use Read tool for each file in scope 2. **Read Files** - Use Read tool for each file in scope
3. **Scan for Patterns** - Check each category from review checklist 3. **Scan for Patterns** - Check each category from review checklist
@@ -44,7 +42,10 @@ See `skills/review-checklist.md` for complete patterns:
## Visual Output ## Visual Output
See `skills/visual-output.md`. This command invokes the **Code Reviewer** agent: ```
- Phase Emoji: 🔍 ╔══════════════════════════════════════════════════════════════════╗
- Phase Name: REVIEW ║ 📋 PROJMAN ║
- Context: Sprint Name ║ 🏁 CLOSING ║
║ Code Review ║
╚══════════════════════════════════════════════════════════════════╝
```

View File

@@ -0,0 +1,83 @@
---
description: Approve an RFC in Review status, making it ready for sprint planning
agent: planner
---
# Approve RFC
## Skills Required
- skills/mcp-tools-reference.md
- skills/rfc-workflow.md
- skills/rfc-templates.md
## Purpose
Transition an RFC from Review to Approved status, indicating the proposal has been accepted and is ready for implementation in an upcoming sprint.
## Invocation
Run `/rfc-approve <number>` where number is the RFC number:
- `/rfc-approve 0003`
- `/rfc-approve 3` (leading zeros optional)
## Workflow
1. **Validate RFC Number**
- Normalize input (add leading zeros if needed)
- Fetch RFC page: `RFC-NNNN: *`
2. **Check Current Status**
- Parse frontmatter to get current status
- **STOP** if status is not "Review"
- Error: "RFC-NNNN is in [status] status. Only RFCs in Review can be approved."
3. **Gather Decision Details**
- Prompt: "Please provide the approval rationale (why is this RFC being approved?):"
- This becomes the Decision section content
4. **Update RFC Page**
- Change status: Review → Approved
- Update "Updated" date
- Add/update Decision section:
```markdown
## Decision
**Decision:** Approved
**Date:** YYYY-MM-DD
**Decided By:** @[current user or maintainer]
**Rationale:**
[User-provided rationale]
```
5. **Update RFC-Index**
- Remove entry from "## In Review" section
- Add entry to "## Approved" section
6. **Confirm Approval**
- Display updated status
- Note that RFC is now available for `/sprint-plan`
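Steps 1 and 2 reduce to input normalization plus a status guard. A minimal sketch of that logic; the error strings echo the Validation Errors section below:
```python
import re

def normalize_rfc_number(raw: str) -> str:
    """'3' and '0003' both resolve to '0003'; anything else is rejected."""
    digits = raw.strip()
    if not re.fullmatch(r"\d{1,4}", digits):
        raise ValueError(f"Invalid RFC number: {raw!r}. Check the number with /rfc-list")
    return digits.zfill(4)

def assert_approvable(current_status: str, number: str) -> None:
    """Only RFCs in Review may move to Approved."""
    if current_status != "Review":
        raise ValueError(
            f"RFC-{number} is in {current_status} status. "
            "Only RFCs in Review can be approved."
        )
```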
## Visual Output
```
+----------------------------------------------------------------------+
| PROJMAN - RFC Approval |
+----------------------------------------------------------------------+
RFC-0003: Feature X has been approved!
Status: Review → Approved
Decision recorded in RFC page.
This RFC is now available for sprint planning.
Use /sprint-plan and select this RFC when prompted.
```
## Validation Errors
- **RFC not found**: "RFC-NNNN not found. Check the number with /rfc-list"
- **Wrong status**: "RFC-NNNN is [status]. Only RFCs in Review can be approved."
- **No rationale provided**: "Approval rationale is required. Please explain why this RFC is being approved."

View File

@@ -0,0 +1,84 @@
---
description: Create a new RFC (Request for Comments) from conversation or clarified specification
agent: planner
---
# Create RFC
## Skills Required
- skills/mcp-tools-reference.md
- skills/rfc-workflow.md
- skills/rfc-templates.md
## Purpose
Create a new RFC wiki page to track a feature idea, proposal, or enhancement through the review lifecycle. RFCs provide a structured way to document, discuss, and approve changes before implementation.
## Invocation
Run `/rfc-create` with optional context:
- After `/clarify` to convert clarified spec to RFC
- With description of feature idea
- From conversation context
## Workflow
1. **Gather Input**
- Check if conversation has clarified specification (from `/clarify`)
- If no context: prompt for Summary, Motivation, and initial Design
- Extract author from context or prompt
2. **Allocate RFC Number**
- Call `allocate_rfc_number` MCP tool
- Get next sequential 4-digit number
3. **Create RFC Page**
- Use template from `skills/rfc-templates.md`
- Fill in frontmatter (number, title, status=Draft, author, dates)
- Populate Summary, Motivation, Detailed Design sections
- Create wiki page: `RFC-NNNN: Title`
4. **Update RFC-Index**
- Fetch RFC-Index (create if doesn't exist)
- Add entry to "## Draft" section
- Update wiki page
5. **Confirm Creation**
- Display RFC number and wiki link
- Remind about next steps (refine → `/rfc-review`)
## Visual Output
```
+----------------------------------------------------------------------+
| PROJMAN - RFC Creation |
+----------------------------------------------------------------------+
RFC-0001: [Title] created successfully!
Status: Draft
Wiki: [link to RFC page]
Next steps:
- Refine the RFC with additional details
- When ready: /rfc-review 0001 to submit for review
```
## Input Mapping
When converting from `/clarify` output:
| Clarify Section | RFC Section |
|-----------------|-------------|
| Problem/Context | Motivation > Problem Statement |
| Goals/Outcomes | Motivation > Goals |
| Scope/Requirements | Detailed Design > Overview |
| Constraints | Non-Goals or Detailed Design |
| Success Criteria | Testing Strategy |
## Edge Cases
- **No RFC-Index exists**: Create it with empty sections
- **User provides minimal input**: Create minimal RFC template, note sections to fill
- **Duplicate title**: Proceed (RFC numbers are unique, titles don't need to be)

View File

@@ -0,0 +1,98 @@
---
description: List all RFCs grouped by status from RFC-Index wiki page
agent: planner
---
# List RFCs
## Skills Required
- skills/mcp-tools-reference.md
- skills/rfc-workflow.md
## Purpose
Display all RFCs grouped by their lifecycle status. Highlights "Approved" RFCs that are ready for sprint planning.
## Invocation
Run `/rfc-list` to see all RFCs.
Optional filters:
- `/rfc-list approved` - Show only approved RFCs
- `/rfc-list draft` - Show only draft RFCs
- `/rfc-list review` - Show only RFCs in review
## Workflow
1. **Fetch RFC-Index**
- Call `get_wiki_page` for "RFC-Index"
- Handle missing index gracefully
2. **Parse Sections**
- Extract tables from each status section
- Parse RFC number, title, and metadata from each row
3. **Display Results**
- Group by status
- Highlight Approved section (ready for planning)
- Show counts per status
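Parsing the index amounts to grouping table rows under their status headings. A sketch assuming the RFC-Index uses `## <Status>` sections with a markdown table per section, where the first two columns are the RFC number and title:
```python
import re
from collections import defaultdict

def parse_rfc_index(markdown: str) -> dict:
    """Group RFC rows by their '## <Status>' section in the RFC-Index page."""
    sections = defaultdict(list)
    current = None
    for line in markdown.splitlines():
        heading = re.match(r"^##\s+(.+?)\s*$", line)
        if heading:
            current = heading.group(1)
            continue
        row = re.match(r"^\|\s*(RFC-\d{4})\s*\|\s*([^|]+)\|", line)
        if row and current:
            sections[current].append({"rfc": row.group(1), "title": row.group(2).strip()})
    return dict(sections)
```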
## Visual Output
```
+----------------------------------------------------------------------+
| PROJMAN - RFC Index |
+----------------------------------------------------------------------+
## Approved (Ready for Sprint Planning)
| RFC | Title | Champion | Created |
|-----|-------|----------|---------|
| RFC-0003 | Feature X | @user | 2026-01-15 |
| RFC-0007 | Enhancement Y | @user | 2026-01-28 |
## In Review (2)
| RFC | Title | Author | Created |
|-----|-------|--------|---------|
| RFC-0004 | Feature Y | @user | 2026-01-20 |
| RFC-0008 | Idea Z | @user | 2026-01-29 |
## Draft (3)
| RFC | Title | Author | Created |
|-----|-------|--------|---------|
| RFC-0005 | Concept A | @user | 2026-01-22 |
| RFC-0009 | Proposal B | @user | 2026-01-30 |
| RFC-0010 | Sketch C | @user | 2026-01-30 |
## Implementing (1)
| RFC | Title | Sprint | Started |
|-----|-------|--------|---------|
| RFC-0002 | Feature W | Sprint 18 | 2026-01-22 |
## Implemented (1)
| RFC | Title | Completed | Release |
|-----|-------|-----------|---------|
| RFC-0001 | Initial Feature | 2026-01-10 | v5.0.0 |
## Rejected (0)
(none)
## Stale (0)
(none)
---
Total: 9 RFCs | 2 Approved | 1 Implementing
```
## Edge Cases
- **No RFC-Index**: Display message "No RFCs yet. Create one with /rfc-create"
- **Empty sections**: Show "(none)" for empty status categories
- **Filter applied**: Only show matching section, still show total counts

View File

@@ -0,0 +1,90 @@
---
description: Reject an RFC with documented reason, marking it as declined
agent: planner
---
# Reject RFC
## Skills Required
- skills/mcp-tools-reference.md
- skills/rfc-workflow.md
- skills/rfc-templates.md
## Purpose
Transition an RFC to Rejected status with a documented reason. Rejected RFCs remain in the wiki for historical reference but are marked as declined.
## Invocation
Run `/rfc-reject <number>` where number is the RFC number:
- `/rfc-reject 0006`
- `/rfc-reject 6` (leading zeros optional)
## Workflow
1. **Validate RFC Number**
- Normalize input (add leading zeros if needed)
- Fetch RFC page: `RFC-NNNN: *`
2. **Check Current Status**
- Parse frontmatter to get current status
- **STOP** if status is not "Draft" or "Review"
- Error: "RFC-NNNN is in [status] status. Only Draft or Review RFCs can be rejected."
3. **Require Rejection Reason**
- Prompt: "Please provide the rejection reason (required):"
- **STOP** if no reason provided
- Error: "Rejection reason is required to document why this RFC was declined."
4. **Update RFC Page**
- Change status: [current] → Rejected
- Update "Updated" date
- Add/update Decision section:
```markdown
## Decision
**Decision:** Rejected
**Date:** YYYY-MM-DD
**Decided By:** @[current user or maintainer]
**Reason:**
[User-provided rejection reason]
```
5. **Update RFC-Index**
- Remove entry from current section ("## Draft" or "## In Review")
- Add entry to "## Rejected" section with reason summary
6. **Confirm Rejection**
- Display updated status
- Note that RFC remains in wiki for reference
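Steps 1 through 3 reduce to a pair of small checks. A minimal sketch, assuming the status string comes from the RFC page frontmatter; the helper names are illustrative only:

```python
def normalize_rfc_number(raw: str) -> str:
    """'6', '0006', or 'RFC-6' -> 'RFC-0006'."""
    digits = raw.strip().upper().removeprefix("RFC-")
    if not digits.isdigit():
        raise ValueError(f"Invalid RFC number: {raw!r}")
    return f"RFC-{int(digits):04d}"

def check_rejectable(status: str, reason: str) -> None:
    """Enforce steps 2 and 3: status gate plus mandatory rejection reason."""
    if status not in ("Draft", "Review"):
        raise ValueError(f"Only Draft or Review RFCs can be rejected (found: {status}).")
    if not reason.strip():
        raise ValueError("Rejection reason is required to document why this RFC was declined.")
```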
## Visual Output
```
+----------------------------------------------------------------------+
| PROJMAN - RFC Rejection |
+----------------------------------------------------------------------+
RFC-0006: Proposed Feature has been rejected.
Status: Review → Rejected
Reason: Out of scope for current project direction
The RFC remains in the wiki for historical reference.
If circumstances change, a new RFC can be created.
```
## Validation Errors
- **RFC not found**: "RFC-NNNN not found. Check the number with /rfc-list"
- **Wrong status**: "RFC-NNNN is [status]. Only Draft or Review RFCs can be rejected."
- **No reason provided**: "Rejection reason is required. Please document why this RFC is being declined."
## Notes
- Rejected is a terminal state
- To reconsider, create a new RFC that references the rejected one
- Rejection reasons help future contributors understand project direction

View File

@@ -0,0 +1,81 @@
---
description: Submit a Draft RFC for review, transitioning status to Review
agent: planner
---
# Submit RFC for Review
## Skills Required
- skills/mcp-tools-reference.md
- skills/rfc-workflow.md
- skills/rfc-templates.md
## Purpose
Transition an RFC from Draft to Review status, indicating it's ready for maintainer evaluation. Optionally assign a champion to shepherd the RFC through review.
## Invocation
Run `/rfc-review <number>` where number is the RFC number:
- `/rfc-review 0001`
- `/rfc-review 1` (leading zeros optional)
## Workflow
1. **Validate RFC Number**
- Normalize input (add leading zeros if needed)
- Fetch RFC page: `RFC-NNNN: *`
2. **Check Current Status**
- Parse frontmatter to get current status
- **STOP** if status is not "Draft"
- Error: "RFC-NNNN is in [status] status. Only Draft RFCs can be submitted for review."
3. **Validate Minimum Content**
- Check for Summary section (required)
- Check for Motivation section (required)
- Check for Detailed Design > Overview (required)
- Warn if Alternatives Considered is empty
4. **Optional: Assign Champion**
- Ask: "Would you like to assign a champion? (Enter username or skip)"
- Champion is responsible for driving the RFC through review
5. **Update RFC Page**
- Change status: Draft → Review
- Update "Updated" date
- Set Champion if provided
- Add Review Notes section if not present
6. **Update RFC-Index**
- Remove entry from "## Draft" section
- Add entry to "## In Review" section
7. **Confirm Transition**
- Display updated status
- Note next steps (review discussion, then /rfc-approve or /rfc-reject)
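Step 3's content validation is a simple heading scan. A minimal sketch, assuming the required sections appear as the headings listed here (the exact heading level of "Overview" under Detailed Design is an assumption):

```python
REQUIRED_HEADINGS = ("## Summary", "## Motivation", "### Overview")  # Overview sits under Detailed Design

def missing_required_sections(rfc_markdown: str) -> list[str]:
    """Return the required headings absent from the RFC body (step 3)."""
    return [heading for heading in REQUIRED_HEADINGS if heading not in rfc_markdown]

# An empty result means the RFC can move from Draft to Review.
```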
## Visual Output
```
+----------------------------------------------------------------------+
| PROJMAN - RFC Review Submission |
+----------------------------------------------------------------------+
RFC-0005: Feature Idea submitted for review
Status: Draft → Review
Champion: @assigned_user (or: unassigned)
Updated: RFC-Index
Next steps:
- Discuss in RFC wiki page comments or meetings
- When decision reached: /rfc-approve 0005 or /rfc-reject 0005
```
## Validation Errors
- **RFC not found**: "RFC-NNNN not found. Check the number with /rfc-list"
- **Wrong status**: "RFC-NNNN is [status]. Only Draft RFCs can be reviewed."
- **Missing sections**: "RFC-NNNN is missing required sections: [list]. Please complete before review."

View File

@@ -1,144 +0,0 @@
---
description: RFC lifecycle management - create, list, review, approve, reject
agent: planner
---
# RFC Management
## Skills Required
- skills/mcp-tools-reference.md
- skills/rfc-workflow.md
- skills/rfc-templates.md
## Purpose
Manage the full RFC lifecycle through sub-commands. RFCs provide a structured way to document, discuss, and approve changes before implementation.
## Invocation
```
/rfc <sub-command> [arguments]
```
### Sub-Commands
| Sub-Command | Usage | Description |
|-------------|-------|-------------|
| `create` | `/rfc create` | Create new RFC from conversation or clarified spec |
| `list` | `/rfc list [filter]` | List all RFCs grouped by status |
| `review` | `/rfc review <number>` | Submit Draft RFC for review |
| `approve` | `/rfc approve <number>` | Approve RFC in Review status |
| `reject` | `/rfc reject <number>` | Reject RFC with documented reason |
---
## Sub-Command: create
Create a new RFC wiki page to track a feature idea through the review lifecycle.
**Workflow:**
1. Check if conversation has clarified specification (from `/clarify`)
2. If no context: prompt for Summary, Motivation, and initial Design
3. Call `allocate_rfc_number` MCP tool for next sequential number
4. Create RFC page using template from `skills/rfc-templates.md`
5. Update RFC-Index wiki page (create if doesn't exist)
6. Display RFC number, wiki link, and next steps
**Input Mapping (from /clarify):**
| Clarify Section | RFC Section |
|-----------------|-------------|
| Problem/Context | Motivation > Problem Statement |
| Goals/Outcomes | Motivation > Goals |
| Scope/Requirements | Detailed Design > Overview |
| Constraints | Non-Goals or Detailed Design |
| Success Criteria | Testing Strategy |
**Edge cases:**
- No RFC-Index exists: Create it with empty sections
- User provides minimal input: Create minimal RFC template, note sections to fill
- Duplicate title: Proceed (RFC numbers are unique, titles don't need to be)
---
## Sub-Command: list
Display all RFCs grouped by lifecycle status.
**Filters:** `/rfc list approved`, `/rfc list draft`, `/rfc list review`
**Workflow:**
1. Fetch RFC-Index wiki page via `get_wiki_page`
2. Parse tables from each status section
3. Display grouped by status, highlight Approved section
4. Show counts per status
**Edge cases:**
- No RFC-Index: "No RFCs yet. Create one with `/rfc create`"
- Empty sections: Show "(none)"
---
## Sub-Command: review
Submit a Draft RFC for review, transitioning status to Review.
**Usage:** `/rfc review <number>` (leading zeros optional)
**Workflow:**
1. Validate RFC number, fetch page
2. Check status is Draft - STOP if not
3. Validate minimum content (Summary, Motivation, Detailed Design > Overview required)
4. Optionally assign champion
5. Update RFC page: status Draft -> Review, update date
6. Update RFC-Index: move from Draft to In Review section
---
## Sub-Command: approve
Approve an RFC in Review status for sprint planning.
**Usage:** `/rfc approve <number>` (leading zeros optional)
**Workflow:**
1. Validate RFC number, fetch page
2. Check status is Review - STOP if not
3. Gather approval rationale (required)
4. Update RFC page: status Review -> Approved, add Decision section
5. Update RFC-Index: move from In Review to Approved section
---
## Sub-Command: reject
Reject an RFC with documented reason.
**Usage:** `/rfc reject <number>` (leading zeros optional)
**Workflow:**
1. Validate RFC number, fetch page
2. Check status is Draft or Review - STOP if not
3. Require rejection reason (mandatory)
4. Update RFC page: status -> Rejected, add Decision section
5. Update RFC-Index: move to Rejected section
---
## Visual Output
```
+----------------------------------------------------------------------+
| PROJMAN - RFC [Sub-Command] |
+----------------------------------------------------------------------+
```
---
## Validation Errors (All Sub-Commands)
- **RFC not found**: "RFC-NNNN not found. Check the number with `/rfc list`"
- **Wrong status**: "RFC-NNNN is in [status] status. [Specific allowed statuses for this action]."
- **Missing required input**: Specific message per sub-command
- **No sub-command provided**: Display sub-command reference table

View File

@@ -26,7 +26,6 @@ Unified setup command for all configuration needs.
/setup --full # Full wizard (MCP + system + project) /setup --full # Full wizard (MCP + system + project)
/setup --quick # Project-only setup /setup --quick # Project-only setup
/setup --sync # Update after repo move /setup --sync # Update after repo move
/setup --clear-cache # Clear plugin cache (between sessions only)
``` ```
## Mode Detection ## Mode Detection
@@ -80,21 +79,6 @@ Steps:
6. Update `.env` 6. Update `.env`
7. Confirm 7. Confirm
## Mode: Clear Cache (--clear-cache)
Clear plugin cache to force fresh configuration reload.
**WARNING:** Only run between sessions, never mid-session. Clearing cache mid-session destroys MCP tool venv paths and breaks all MCP operations.
Steps:
1. Execute: `rm -rf ~/.claude/plugins/cache/leo-claude-mktplace/`
2. Inform user: "Cache cleared. Restart Claude Code for changes to take effect."
When to use:
- After updating the marketplace (`git pull` or reinstall)
- When MCP servers show stale configuration
- When plugin changes don't take effect
## Visual Output ## Visual Output
``` ```

View File

@@ -13,7 +13,6 @@ agent: orchestrator
- skills/rfc-workflow.md - skills/rfc-workflow.md
- skills/progress-tracking.md - skills/progress-tracking.md
- skills/git-workflow.md - skills/git-workflow.md
- skills/sprint-lifecycle.md
## Purpose ## Purpose
@@ -27,7 +26,6 @@ Run `/sprint-close` when sprint work is complete.
Execute the sprint close workflow: Execute the sprint close workflow:
0. **Check Lifecycle State** - Execute `skills/sprint-lifecycle.md` check protocol. Expect `Sprint/Reviewing`. Clear all Sprint/* labels (return to idle) at the END of close workflow, after all other steps. Warn if in wrong state (allow with `--force`).
1. **Review Sprint Completion** - Verify issues closed or moved to backlog 1. **Review Sprint Completion** - Verify issues closed or moved to backlog
2. **Capture Lessons Learned** - Interview user about challenges and insights 2. **Capture Lessons Learned** - Interview user about challenges and insights
3. **Tag for Discoverability** - Apply technology, component, and pattern tags 3. **Tag for Discoverability** - Apply technology, component, and pattern tags

View File

@@ -18,7 +18,6 @@ agent: planner
- skills/sprint-approval.md - skills/sprint-approval.md
- skills/planning-workflow.md - skills/planning-workflow.md
- skills/label-taxonomy/labels-reference.md - skills/label-taxonomy/labels-reference.md
- skills/sprint-lifecycle.md
## Purpose ## Purpose
@@ -36,7 +35,6 @@ Provide sprint goals as natural language input, or prepare input via:
Execute the planning workflow as defined in `skills/planning-workflow.md`. Execute the planning workflow as defined in `skills/planning-workflow.md`.
**Key steps:** **Key steps:**
0. **Check Lifecycle State** - Execute `skills/sprint-lifecycle.md` check protocol. Expect idle state. Set `Sprint/Planning` after planning completes. Warn and stop if sprint is in another active state (unless `--force`).
1. Run pre-planning validations (branch, repo org, labels) 1. Run pre-planning validations (branch, repo org, labels)
2. Detect input source (file, wiki, or conversation) 2. Detect input source (file, wiki, or conversation)
3. Search relevant lessons learned 3. Search relevant lessons learned

View File

@@ -15,7 +15,6 @@ agent: orchestrator
- skills/git-workflow.md - skills/git-workflow.md
- skills/progress-tracking.md - skills/progress-tracking.md
- skills/runaway-detection.md - skills/runaway-detection.md
- skills/sprint-lifecycle.md
## Purpose ## Purpose
@@ -25,14 +24,11 @@ Initiate sprint execution. The orchestrator agent verifies approval, analyzes de
Run `/sprint-start` when ready to begin executing a planned sprint. Run `/sprint-start` when ready to begin executing a planned sprint.
**Flags:**
- `--force` — Bypass approval gate (emergency only, logged to milestone)
## Workflow ## Workflow
Execute the sprint start workflow: Execute the sprint start workflow:
1. **Verify Sprint Approval & Lifecycle State** (required) - Check milestone for approval record. STOP if missing unless `--force` flag provided. Also verify lifecycle state is `Sprint/Planning` per `skills/sprint-lifecycle.md`. Set `Sprint/Executing` after verification passes. 1. **Verify Sprint Approval** (recommended) - Check milestone for approval record
2. **Detect Checkpoints** - Check for resume points from interrupted sessions 2. **Detect Checkpoints** - Check for resume points from interrupted sessions
3. **Fetch Sprint Issues** - Get open issues from milestone 3. **Fetch Sprint Issues** - Get open issues from milestone
4. **Analyze Dependencies** - Use `get_execution_order` for parallel batches 4. **Analyze Dependencies** - Use `get_execution_order` for parallel batches

View File

@@ -9,7 +9,6 @@ description: Check current sprint progress, identify blockers, optionally genera
- skills/mcp-tools-reference.md - skills/mcp-tools-reference.md
- skills/progress-tracking.md - skills/progress-tracking.md
- skills/dependency-management.md - skills/dependency-management.md
- skills/sprint-lifecycle.md
## Purpose ## Purpose
@@ -24,7 +23,6 @@ Check current sprint progress, identify blockers, and show execution status. Opt
## Workflow ## Workflow
0. **Display Lifecycle State** - Read current Sprint/* state from milestone description per `skills/sprint-lifecycle.md` and display in output header.
1. **Fetch Sprint Issues** - Get all issues for current milestone 1. **Fetch Sprint Issues** - Get all issues for current milestone
2. **Calculate Progress** - Count completed vs total issues 2. **Calculate Progress** - Count completed vs total issues
3. **Identify Active Tasks** - Find issues with `Status/In-Progress` 3. **Identify Active Tasks** - Find issues with `Status/In-Progress`

View File

@@ -93,78 +93,6 @@ Generate comprehensive tests for specified code.
See `skills/test-standards.md` for test patterns and structure. See `skills/test-standards.md` for test patterns and structure.
### DO NOT (Generate Mode)
- Install dependencies without asking first
- Generate tests that import private/internal functions not meant for testing
- Overwrite existing test files without confirmation
- Generate tests with hardcoded values that should be environment-based
---
## Sprint Integration
The `/test` command plays a critical role in the sprint close workflow:
1. After `/review` identifies code quality issues
2. Before `/sprint-close` finalizes the sprint
3. The code reviewer and orchestrator reference test results when deciding if a sprint is ready to close
### Pre-Close Verification
When running `/test run` before sprint close:
1. **Identify sprint files** - Files changed in the current sprint (via git diff against development)
2. **Check test coverage** - Report which sprint files have tests and which don't
3. **Flag untested code** - Warn if new code has no corresponding tests
4. **Recommend action** - "READY FOR CLOSE" or "TESTS NEEDED: [list of untested files]"
---
## Examples
### Run all tests
```
/test run
```
Detects framework, runs full test suite, reports results.
### Run with coverage
```
/test run --coverage
```
Same as above plus coverage percentage per file.
### Generate tests for a specific file
```
/test gen src/auth/jwt_service.py
```
Analyzes the file, generates a test file at `tests/test_jwt_service.py`.
### Generate specific test type
```
/test gen src/api/routes/auth.py --type=integration
```
Generates integration tests (request/response patterns) instead of unit tests.
### Generate with specific framework
```
/test gen src/components/Card.jsx --framework=vitest
```
Uses Vitest instead of auto-detected framework.
---
## Edge Cases
| Scenario | Behavior |
|----------|----------|
| No test framework detected | List what was checked, ask user to specify test command |
| Tests fail | Report failures clearly, recommend "TESTS MUST PASS before sprint close" |
| No tests exist for sprint files | Warn with file list, offer to generate with `/test gen` |
| External services required | Ask for confirmation before running tests that need database/API |
| Mixed framework project | Detect all frameworks, ask which to run or run all |
--- ---
## Visual Output ## Visual Output

View File

@@ -29,19 +29,17 @@ if [[ -f ".env" ]]; then
if [[ -n "$GITEA_API_URL" && -n "$GITEA_API_TOKEN" && -n "$GITEA_REPO" ]]; then if [[ -n "$GITEA_API_URL" && -n "$GITEA_API_TOKEN" && -n "$GITEA_REPO" ]]; then
# Quick check for open issues without milestone (unplanned work) # Quick check for open issues without milestone (unplanned work)
# Note: grep -c returns 0 on no match but exits non-zero, causing || to also fire
# Use subshell to ensure single value
OPEN_ISSUES=$(curl -s -m 5 \ OPEN_ISSUES=$(curl -s -m 5 \
-H "Authorization: token $GITEA_API_TOKEN" \ -H "Authorization: token $GITEA_API_TOKEN" \
"${GITEA_API_URL}/repos/${GITEA_REPO}/issues?state=open&milestone=none&limit=1" 2>/dev/null | \ "${GITEA_API_URL}/repos/${GITEA_REPO}/issues?state=open&milestone=none&limit=1" 2>/dev/null | \
grep -c '"number"' 2>/dev/null) || OPEN_ISSUES=0 grep -c '"number"' || echo "0")
if [[ "$OPEN_ISSUES" -gt 0 ]]; then if [[ "$OPEN_ISSUES" -gt 0 ]]; then
# Count total unplanned issues # Count total unplanned issues
TOTAL_UNPLANNED=$(curl -s -m 5 \ TOTAL_UNPLANNED=$(curl -s -m 5 \
-H "Authorization: token $GITEA_API_TOKEN" \ -H "Authorization: token $GITEA_API_TOKEN" \
"${GITEA_API_URL}/repos/${GITEA_REPO}/issues?state=open&milestone=none" 2>/dev/null | \ "${GITEA_API_URL}/repos/${GITEA_REPO}/issues?state=open&milestone=none" 2>/dev/null | \
grep -c '"number"' 2>/dev/null) || TOTAL_UNPLANNED="?" grep -c '"number"' || echo "?")
echo "$PREFIX ${TOTAL_UNPLANNED} open issues without milestone - consider /sprint-plan" echo "$PREFIX ${TOTAL_UNPLANNED} open issues without milestone - consider /sprint-plan"
fi fi
fi fi

View File

@@ -1,165 +0,0 @@
---
name: domain-consultation
description: Cross-plugin domain consultation for specialized planning and validation
---
# Domain Consultation
## Purpose
Enables projman agents to detect domain-specific work and consult specialized plugins for expert validation during planning and execution phases. This skill is the backbone of the Domain Advisory Pattern.
---
## When to Use
| Agent | Phase | Action |
|-------|-------|--------|
| Planner | After task sizing, before issue creation | Detect domains, add acceptance criteria |
| Orchestrator | Before marking issue complete | Run domain gates, block if violations |
| Code Reviewer | During review | Include domain compliance in findings |
---
## Domain Detection Rules
| Signal Type | Detection Pattern | Domain Plugin | Action |
|-------------|-------------------|---------------|--------|
| Label-based | `Component/Frontend`, `Component/UI` | viz-platform | Add design system criteria, apply `Domain/Viz` |
| Content-based | Keywords: DMC, Dash, layout, theme, component, dashboard, chart, responsive, color, UI, frontend, Plotly | viz-platform | Same as above |
| Label-based | `Component/Database`, `Component/Data` | data-platform | Add data validation criteria, apply `Domain/Data` |
| Content-based | Keywords: schema, migration, pipeline, dbt, table, column, query, PostgreSQL, lineage, data model | data-platform | Same as above |
| Both signals | Frontend + Data signals present | Both plugins | Apply both sets of criteria |
---
## Planning Protocol
When creating issues, the planner MUST:
1. **Analyze each issue** for domain signals (check labels AND scan description for keywords)
2. **For Domain/Viz issues**, append this acceptance criteria block:
```markdown
## Design System Compliance
- [ ] All DMC components validated against registry
- [ ] Theme tokens used (no hardcoded colors/sizes)
- [ ] Accessibility check passed (WCAG contrast)
- [ ] Responsive breakpoints verified
```
3. **For Domain/Data issues**, append this acceptance criteria block:
```markdown
## Data Integrity
- [ ] Schema changes validated
- [ ] dbt tests pass
- [ ] Lineage intact (no orphaned models)
- [ ] Data types verified
```
4. **Apply the corresponding `Domain/*` label** to route the issue through gates
5. **Document in planning summary** which issues have domain gates active
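The detection rules above can be expressed as a small lookup. An illustrative sketch, assuming the planner already has the issue's labels and description text; the keyword sets mirror the Detection Rules table and the function name is hypothetical:

```python
import re

VIZ_KEYWORDS = {"dmc", "dash", "layout", "theme", "component", "dashboard",
                "chart", "responsive", "color", "ui", "frontend", "plotly"}
DATA_KEYWORDS = {"schema", "migration", "pipeline", "dbt", "table", "column",
                 "query", "postgresql", "lineage", "data model"}

def _has_keyword(text: str, keywords: set[str]) -> bool:
    # Whole-word matching avoids false positives like "ui" inside "guide"
    return any(re.search(rf"\b{re.escape(k)}\b", text) for k in keywords)

def detect_domains(labels: set[str], description: str) -> set[str]:
    """Return the Domain/* labels suggested by the detection rules table."""
    text = description.lower()
    domains = set()
    if labels & {"Component/Frontend", "Component/UI"} or _has_keyword(text, VIZ_KEYWORDS):
        domains.add("Domain/Viz")
    if labels & {"Component/Database", "Component/Data"} or _has_keyword(text, DATA_KEYWORDS):
        domains.add("Domain/Data")
    return domains
```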
---
## Execution Gate Protocol
Before marking any issue as complete, the orchestrator MUST:
1. **Check issue labels** for `Domain/*` labels
2. **If `Domain/Viz` label present:**
- Identify files changed by this issue
- Invoke `/design-gate <path-to-changed-files>`
- Gate PASS → proceed to mark issue complete
- Gate FAIL → add comment to issue with failure details, keep issue open
3. **If `Domain/Data` label present:**
- Identify files changed by this issue
- Invoke `/data-gate <path-to-changed-files>`
- Gate PASS → proceed to mark issue complete
- Gate FAIL → add comment to issue with failure details, keep issue open
4. **If gate command unavailable** (MCP server not running):
- Warn user: "Domain gate unavailable - proceeding without validation"
- Proceed with completion (non-blocking degradation)
- Do NOT silently skip - always inform user
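The gate protocol above, as pseudocode. A sketch only: `invoke_gate` stands in for whatever mechanism actually runs `/design-gate` or `/data-gate`, and returning `None` models the "gate command unavailable" degradation path:

```python
GATE_COMMANDS = {"Domain/Viz": "/design-gate", "Domain/Data": "/data-gate"}

def issue_may_close(issue_labels, changed_paths, invoke_gate) -> bool:
    """Run every applicable domain gate; True means the issue can be marked complete.

    invoke_gate(command, path) should return "PASS", "FAIL", or None when the
    gate command is unavailable.
    """
    for label, command in GATE_COMMANDS.items():
        if label not in issue_labels:
            continue
        for path in changed_paths:
            result = invoke_gate(command, path)
            if result is None:
                print(f"Warning: {command} unavailable - proceeding without validation")
            elif result == "FAIL":
                return False  # keep the issue open and comment with the failure details
    return True
```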
---
## Review Protocol
During code review, the code reviewer SHOULD:
1. After completing standard code quality and security checks, check for `Domain/*` labels
2. **If Domain/Viz:** Include "Design System Compliance" section in review report
- Reference `/design-review` findings if available
- Check for hardcoded colors, invalid props, accessibility issues
3. **If Domain/Data:** Include "Data Integrity" section in review report
- Reference `/data-gate` findings if available
- Check for schema validity, lineage integrity
---
## Extensibility
To add a new domain (e.g., `Domain/Infra` for cmdb-assistant):
1. **In domain plugin:** Create advisory agent + gate command
- Agent: `agents/infra-advisor.md`
- Gate command: `commands/infra-gate.md`
- Audit skill: `skills/infra-audit.md`
2. **In this skill:** Add detection rules to the Detection Rules table above
- Define label-based signals (e.g., `Component/Infrastructure`)
- Define content-based keywords (e.g., "server", "network", "NetBox")
3. **In label taxonomy:** Add `Domain/Infra` label with appropriate color
- Update `plugins/projman/skills/label-taxonomy/labels-reference.md`
4. **No changes needed** to planner.md or orchestrator.md agent files
- They read this skill dynamically
- Detection rules table is the single source of truth
This pattern ensures domain expertise stays in domain plugins while projman orchestrates when to ask.
---
## Domain Acceptance Criteria Templates
### Design System Compliance (Domain/Viz)
```markdown
## Design System Compliance
- [ ] All DMC components validated against registry
- [ ] Theme tokens used (no hardcoded colors/sizes)
- [ ] Accessibility check passed (WCAG contrast)
- [ ] Responsive breakpoints verified
```
### Data Integrity (Domain/Data)
```markdown
## Data Integrity
- [ ] Schema changes validated
- [ ] dbt tests pass
- [ ] Lineage intact (no orphaned models)
- [ ] Data types verified
```
---
## Gate Command Reference
| Domain | Gate Command | Contract | Review Command | Advisory Agent |
|--------|--------------|----------|----------------|----------------|
| Viz | `/design-gate <path>` | v1 | `/design-review <path>` | `design-reviewer` |
| Data | `/data-gate <path>` | v1 | `/data-review <path>` | `data-advisor` |
Gate commands return binary PASS/FAIL for automation.
Review commands return detailed reports for human review.
**Contract Version:** Gate commands declare `gate_contract: vN` in their frontmatter. The version in this table is what projman expects. If a gate command bumps its contract version, this table must be updated to match. The `contract-validator` plugin checks this automatically via `validate_workflow_integration`.
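The contract check itself amounts to comparing two strings. An illustrative sketch, assuming the gate command's frontmatter has already been parsed into a dict; `EXPECTED_CONTRACTS` mirrors the table above and the function name is hypothetical:

```python
EXPECTED_CONTRACTS = {"/design-gate": "v1", "/data-gate": "v1"}  # what projman expects

def gate_contract_matches(command: str, frontmatter: dict) -> bool:
    """True if the gate command declares the contract version projman expects."""
    return frontmatter.get("gate_contract") == EXPECTED_CONTRACTS.get(command)
```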

View File

@@ -13,7 +13,7 @@ description: Dynamic reference for Gitea label taxonomy (organization + reposito
This skill provides the current label taxonomy used for issue classification in Gitea. Labels are **fetched dynamically** from Gitea and should never be hardcoded. This skill provides the current label taxonomy used for issue classification in Gitea. Labels are **fetched dynamically** from Gitea and should never be hardcoded.
**Current Taxonomy:** 49 labels (31 organization + 18 repository) **Current Taxonomy:** 47 labels (31 organization + 16 repository)
## Organization Labels (31) ## Organization Labels (31)
@@ -66,7 +66,7 @@ Organization-level labels are shared across all repositories in your configured
- `Status/Failed` (#de350b) - Implementation attempted but failed, needs investigation - `Status/Failed` (#de350b) - Implementation attempted but failed, needs investigation
- `Status/Deferred` (#6554c0) - Moved to a future sprint or backlog - `Status/Deferred` (#6554c0) - Moved to a future sprint or backlog
## Repository Labels (18) ## Repository Labels (16)
Repository-level labels are specific to each project. Repository-level labels are specific to each project.
@@ -90,39 +90,6 @@ Repository-level labels are specific to each project.
- `Tech/Vue` (#42b883) - Vue.js frontend framework - `Tech/Vue` (#42b883) - Vue.js frontend framework
- `Tech/FastAPI` (#009688) - FastAPI backend framework - `Tech/FastAPI` (#009688) - FastAPI backend framework
### Sprint Lifecycle (Milestone Metadata)
These are tracked as milestone description metadata, not as Gitea issue labels. They are documented here for completeness.
| Label | Description |
|-------|-------------|
| `Sprint/Planning` | Sprint planning in progress |
| `Sprint/Executing` | Sprint execution in progress |
| `Sprint/Reviewing` | Code review in progress |
**Note:** Lifecycle state is stored in milestone description as `**Sprint State:** Sprint/Executing`. See `skills/sprint-lifecycle.md` for state machine rules.
### Domain (2 labels)
Cross-plugin integration labels for domain-specific validation gates.
| Label | Color | Description |
|-------|-------|-------------|
| `Domain/Viz` | `#7c4dff` | Issue involves visualization/frontend — triggers viz-platform design gates |
| `Domain/Data` | `#00bfa5` | Issue involves data engineering — triggers data-platform data gates |
**Detection Rules:**
**Domain/Viz:**
- Keywords: "dashboard", "chart", "theme", "DMC", "component", "layout", "responsive", "color", "UI", "frontend", "Dash", "Plotly"
- Also applied when: `Component/Frontend` or `Component/UI` label is present
- Example: "Create new neighbourhood comparison dashboard tab"
**Domain/Data:**
- Keywords: "schema", "migration", "pipeline", "dbt", "table", "column", "query", "PostgreSQL", "lineage", "data model"
- Also applied when: `Component/Database` or `Component/Data` label is present
- Example: "Add census demographic data pipeline"
## Label Suggestion Logic ## Label Suggestion Logic
When suggesting labels for issues, consider the following patterns: When suggesting labels for issues, consider the following patterns:

View File

@@ -138,49 +138,6 @@ For resume support, save checkpoints after major steps:
--- ---
## Sprint Dispatch Log
A single structured comment on the sprint milestone that records all task dispatches and completions. This is the first place to look when resuming an interrupted sprint.
### Format
Post as a comment on the milestone (via `add_comment` on a pinned tracking issue, or as milestone description appendix):
```markdown
## Sprint Dispatch Log
| Time | Issue | Action | Agent | Branch | Notes |
|------|-------|--------|-------|--------|-------|
| 14:30 | #45 | Dispatched | Executor | feat/45-jwt | Parallel batch 1 |
| 14:30 | #46 | Dispatched | Executor | feat/46-login | Parallel batch 1 |
| 14:45 | #45 | Complete | Executor | feat/45-jwt | 47 tool calls, merged |
| 14:52 | #46 | Failed | Executor | feat/46-login | Auth test failure |
| 14:53 | #46 | Re-dispatched | Executor | feat/46-login | After fix |
| 15:10 | #46 | Complete | Executor | feat/46-login | 62 tool calls, merged |
| 15:10 | #47 | Dispatched | Executor | feat/47-tests | Batch 2 (depended on #45, #46) |
```
### When to Log
| Event | Action Column | Required Fields |
|-------|---------------|-----------------|
| Task dispatched to executor | `Dispatched` | Time, Issue, Branch, Batch info |
| Task completed | `Complete` | Time, Issue, Tool call count |
| Task failed | `Failed` | Time, Issue, Error summary |
| Task re-dispatched | `Re-dispatched` | Time, Issue, Reason |
| Domain gate checked | `Gate: PASS` or `Gate: FAIL` | Time, Issue, Domain |
| Sprint resumed | `Resumed` | Time, Notes (from checkpoint) |
### Implementation
The orchestrator appends rows to this log via `add_comment` on the first issue in the milestone (or a dedicated tracking issue). Each append is a single `add_comment` call updating the table.
**On sprint start:** Create the dispatch log header.
**On each event:** Append a row.
**On sprint resume:** Add a "Resumed" row with checkpoint context.
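Row formatting is mechanical. A minimal sketch, assuming local time is acceptable for the Time column; the helper name is illustrative, and how the comment is created or edited is left to the `add_comment` call described above:

```python
from datetime import datetime

def dispatch_log_row(issue: int, action: str, agent: str, branch: str, notes: str = "") -> str:
    """Format one row of the Sprint Dispatch Log table."""
    return f"| {datetime.now():%H:%M} | #{issue} | {action} | {agent} | {branch} | {notes} |"

# Example: dispatch_log_row(46, "Re-dispatched", "Executor", "feat/46-login", "After fix")
# The caller appends this row to the existing log table and posts the result back.
```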
---
## Sprint Progress Display ## Sprint Progress Display
``` ```

View File

@@ -11,7 +11,7 @@ Provides templates for RFC wiki pages and defines the required/optional sections
## When to Use ## When to Use
- **Commands**: `/rfc create` when generating new RFC pages - **Commands**: `/rfc-create` when generating new RFC pages
- **Integration**: Referenced by `rfc-workflow.md` for page structure - **Integration**: Referenced by `rfc-workflow.md` for page structure
--- ---

View File

@@ -12,7 +12,7 @@ Defines the Request for Comments (RFC) system for capturing, reviewing, and trac
## When to Use ## When to Use
- **Planner agent**: When detecting approved RFCs for sprint planning - **Planner agent**: When detecting approved RFCs for sprint planning
- **Commands**: `/rfc create`, `/rfc list`, `/rfc review`, `/rfc approve`, `/rfc reject` - **Commands**: `/rfc-create`, `/rfc-list`, `/rfc-review`, `/rfc-approve`, `/rfc-reject`
- **Integration**: With `/sprint-plan` to select approved RFCs for implementation - **Integration**: With `/sprint-plan` to select approved RFCs for implementation
--- ---
@@ -298,11 +298,11 @@ When RFC status changes:
| Component | How It Uses RFC System | | Component | How It Uses RFC System |
|-----------|------------------------| |-----------|------------------------|
| `/rfc create` | Creates RFC page + updates RFC-Index | | `/rfc-create` | Creates RFC page + updates RFC-Index |
| `/rfc list` | Reads and displays RFC-Index | | `/rfc-list` | Reads and displays RFC-Index |
| `/rfc review` | Transitions Draft -> Review | | `/rfc-review` | Transitions Draft → Review |
| `/rfc approve` | Transitions Review -> Approved | | `/rfc-approve` | Transitions Review → Approved |
| `/rfc reject` | Transitions Review/Draft -> Rejected | | `/rfc-reject` | Transitions Review/Draft → Rejected |
| `/sprint-plan` | Detects Approved RFCs, transitions to Implementing | | `/sprint-plan` | Detects Approved RFCs, transitions to Implementing |
| `/sprint-close` | Transitions Implementing -> Implemented | | `/sprint-close` | Transitions Implementing → Implemented |
| `clarity-assist` | Suggests `/rfc create` for feature ideas | | `clarity-assist` | Suggests `/rfc-create` for feature ideas |

View File

@@ -84,17 +84,15 @@ get_milestone(repo="org/repo", milestone_id=17)
### If Approval Missing ### If Approval Missing
``` ```
🔴 SPRINT APPROVAL NOT FOUND — BLOCKED ⚠️ SPRINT APPROVAL NOT FOUND (Warning)
Sprint 17 milestone does not contain an approval record. Sprint 17 milestone does not contain an approval record.
Execution cannot proceed without approval.
Required: Run /sprint-plan first to: Recommended: Run /sprint-plan first to:
1. Review the sprint scope 1. Review the sprint scope
2. Get explicit approval for execution 2. Document the approved execution plan
To override (emergency only): /sprint-start --force Proceeding anyway - consider adding approval for audit trail.
This bypasses the approval gate and logs a warning to the milestone.
``` ```
### If Approval Found ### If Approval Found
@@ -136,16 +134,3 @@ Request re-approval when:
- Scope expansion needed (new files, new branches) - Scope expansion needed (new files, new branches)
- Dependencies change significantly - Dependencies change significantly
- Timeline changes require scope adjustment - Timeline changes require scope adjustment
---
## Force Override
The `--force` flag bypasses the approval gate for emergency situations.
When `--force` is used:
1. Log a warning comment on the milestone: "⚠️ Sprint started without approval via --force on [date]"
2. Proceed with execution
3. The sprint close will flag this as an audit concern
**Do NOT use --force** as standard practice. If you find yourself using it regularly, the planning workflow needs adjustment.

View File

@@ -1,104 +0,0 @@
---
name: sprint-lifecycle
description: Sprint lifecycle state machine using milestone labels
---
# Sprint Lifecycle
## Purpose
Defines the valid sprint lifecycle states and transitions, enforced via labels on the sprint milestone. Each projman command checks the current state before executing and updates it on completion.
## When to Use
- **All sprint commands**: Check state on entry, update on exit
- **Sprint status**: Display current lifecycle state
---
## State Machine
```
idle -> Sprint/Planning -> Sprint/Executing -> Sprint/Reviewing -> idle
(sprint-plan) (sprint-start) (review) (sprint-close)
```
## State Labels
| Label | Set By | Meaning |
|-------|--------|---------|
| *(no Sprint/* label)* | `/sprint-close` or initial state | Idle - no active sprint phase |
| `Sprint/Planning` | `/sprint-plan` | Planning in progress |
| `Sprint/Executing` | `/sprint-start` | Execution in progress |
| `Sprint/Reviewing` | `/review` | Code review in progress |
**Rule:** Only ONE `Sprint/*` label may exist on a milestone at a time. Setting a new one removes the previous one.
---
## State Transition Rules
| Command | Expected State | Sets State | On Wrong State |
|---------|---------------|------------|----------------|
| `/sprint-plan` | idle (no Sprint/* label) | `Sprint/Planning` | Warn: "Sprint is in [state]. Run `/sprint-close` first or use `--force` to re-plan." Allow with `--force`. |
| `/sprint-start` | `Sprint/Planning` | `Sprint/Executing` | Warn: "Expected Sprint/Planning state but found [state]. Run `/sprint-plan` first or use `--force`." Allow with `--force`. |
| `/review` | `Sprint/Executing` | `Sprint/Reviewing` | Warn: "Expected Sprint/Executing state but found [state]." Allow with `--force`. |
| `/sprint-close` | `Sprint/Reviewing` | Remove all Sprint/* labels (idle) | Warn: "Expected Sprint/Reviewing state but found [state]. Run `/review` first or use `--force`." Allow with `--force`. |
| `/sprint-status` | Any | No change (read-only) | Display current state in output. |
---
## Checking State (Protocol)
At command entry, before any other work:
1. Fetch the active milestone using `get_milestone`
2. Check milestone description for `**Sprint State:**` line
3. Compare against expected state for this command
4. If mismatch: display warning with guidance, STOP unless `--force` provided
5. If match: proceed and update state after command completes its primary work
**Implementation:** Use milestone description metadata. Add/update a line:
```
**Sprint State:** Sprint/Executing
```
This avoids needing actual Gitea labels on milestones (which may not be supported). Parse this line to check state, update it to set state.
---
## Setting State
After command completes successfully:
1. Fetch current milestone description
2. If `**Sprint State:**` line exists, replace it
3. If not, append it to the end of the description
4. Update milestone via `update_milestone`
**Format:** `**Sprint State:** <state>` where state is one of:
- `Sprint/Planning`
- `Sprint/Executing`
- `Sprint/Reviewing`
- (empty/removed for idle)
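Both the check and set protocols reduce to one regex over the milestone description. A minimal sketch, assuming the `**Sprint State:**` line format shown above; the function names are illustrative:

```python
import re
from typing import Optional

STATE_LINE = re.compile(r"^\*\*Sprint State:\*\*\s*(Sprint/\w+)\s*$", re.MULTILINE)

def read_sprint_state(milestone_description: str) -> str:
    """Return the current Sprint/* state, or 'idle' when no state line exists."""
    match = STATE_LINE.search(milestone_description or "")
    return match.group(1) if match else "idle"

def write_sprint_state(milestone_description: str, new_state: Optional[str]) -> str:
    """Replace or append the state line; pass None to clear back to idle."""
    body = STATE_LINE.sub("", milestone_description or "").rstrip()
    return f"{body}\n\n**Sprint State:** {new_state}" if new_state else body
```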
---
## Displaying State
In `/sprint-status` output, include:
```
Sprint Phase: Executing (since 2026-02-01)
```
Parse the milestone description for the `**Sprint State:**` line and display it prominently.
---
## Edge Cases
- **No active milestone**: State is implicitly `idle`
- **Multiple milestones**: Use the open/active milestone. If multiple open, use the most recent.
- **Milestone has no state line**: Treat as `idle`
- **`--force` used**: Log to milestone: "Warning: Lifecycle state override: [command] forced from [actual] state on [date]"

View File

@@ -1,101 +0,0 @@
---
name: visual-output
description: Standard visual formatting for projman commands and agents
---
# Visual Output Standards
## Purpose
Single source of truth for all projman visual headers, progress blocks, and verdict formats. All agents and commands reference this skill instead of defining their own templates.
---
## Plugin Header (Double-Line)
Projman uses the double-line box drawing header style with emoji phase indicators.
### Agent Headers
```
+----------------------------------------------------------------------+
| PROJMAN |
| [Phase Emoji] [PHASE NAME] |
| [Context Line] |
+----------------------------------------------------------------------+
```
### Phase Registry
| Agent | Phase Emoji | Phase Name | Context |
|-------|-------------|------------|---------|
| Planner | 🎯 Target | PLANNING | Sprint Name or Goal |
| Orchestrator | ⚡ Lightning | EXECUTION | Sprint Name |
| Executor | 🔧 Wrench | IMPLEMENTING | Issue Title |
| Code Reviewer | 🔍 Magnifier | REVIEW | Sprint Name |
### Command Headers (Non-Agent)
For commands that don't invoke a specific agent phase:
| Command | Phase Emoji | Phase Name |
|---------|-------------|------------|
| `/sprint-status` | 📊 Chart | STATUS |
| `/setup` | ⚙️ Gear | SETUP |
| `/debug` | 🐛 Bug | DEBUG |
| `/labels-sync` | 🏷️ Label | LABELS |
| `/suggest-version` | 📦 Package | VERSION |
| `/proposal-status` | 📋 Clipboard | PROPOSALS |
| `/test` | 🧪 Flask | TEST |
| `/rfc` | 📄 Document | RFC [Sub-Command] |
---
## Progress Block
Used by orchestrator during sprint execution:
```
+-- Sprint Progress -------------------------------------------------------+
| [Sprint Name] |
| [Progress bar] XX% complete |
| Done: X Active: X Pending: X |
+--------------------------------------------------------------------------+
```
---
## Sprint Close Summary
```
+----------------------------------------------------------------------+
| PROJMAN |
| Finish Flag CLOSING |
| [Sprint Name] |
+----------------------------------------------------------------------+
```
---
## Gate Verdict Format
For domain gate results displayed by orchestrator:
```
+-- Domain Gate: [Viz/Data] -----------------------------------------------+
| Status: PASS / FAIL |
| [Details if FAIL] |
+--------------------------------------------------------------------------+
```
---
## Status Indicators
| Indicator | Meaning |
|-----------|---------|
| Check | Complete / Pass |
| X | Failed / Blocked |
| Hourglass | In progress |
| Empty box | Pending / Not started |
| Warning | Warning |

View File

@@ -1,7 +1,6 @@
--- ---
name: component-check name: component-check
description: DMC component validation specialist description: DMC component validation specialist
model: haiku
--- ---
# Component Check Agent # Component Check Agent

View File

@@ -1,285 +0,0 @@
---
name: design-reviewer
description: Reviews code for design system compliance using viz-platform MCP tools. Use when validating DMC components, theme tokens, or accessibility standards.
model: sonnet
---
# Design Reviewer Agent
You are a strict design system compliance auditor. Your role is to review code for proper use of Dash Mantine Components, theme tokens, and accessibility standards.
## Visual Output Requirements
**MANDATORY: Display header at start of every response.**
```
+----------------------------------------------------------------------+
| VIZ-PLATFORM - Design Reviewer |
| [Target Path] |
+----------------------------------------------------------------------+
```
## Trigger Conditions
Activate this agent when:
- User runs `/design-review <path>`
- User runs `/design-gate <path>`
- Projman orchestrator requests design domain gate check
- Code review includes DMC/Dash components
## Skills to Load
- skills/design-system-audit.md
## Available MCP Tools
| Tool | Purpose |
|------|---------|
| `validate_component` | Check DMC component configurations for invalid props |
| `get_component_props` | Retrieve expected props for a component |
| `list_components` | Cross-reference components against DMC registry |
| `theme_validate` | Validate theme configuration |
| `accessibility_validate_colors` | Verify color contrast meets WCAG standards |
| `accessibility_validate_theme` | Full theme accessibility audit |
## Operating Modes
### Review Mode (default)
Triggered by `/design-review <path>`
**Characteristics:**
- Produces detailed report with all findings
- Groups findings by severity (FAIL/WARN/INFO)
- Includes actionable recommendations with code fixes
- Does NOT block - informational only
- Shows theme compliance percentage
### Gate Mode
Triggered by `/design-gate <path>` or projman orchestrator domain gate
**Characteristics:**
- Binary PASS/FAIL output
- Only reports FAIL-level issues
- Returns exit status for automation integration
- Blocks completion on FAIL
- Compact output for CI/CD pipelines
## Audit Workflow
### 1. Receive Target Path
Accept file or directory path from command invocation.
### 2. Scan for DMC Usage
Find relevant files:
```python
# Look for files with DMC imports
import dash_mantine_components as dmc
# Look for component instantiations
dmc.Button(...)
dmc.Card(...)
```
### 3. Component Validation
For each DMC component found:
1. Extract component name and props from code
2. Call `list_components` to verify component exists in registry
3. Call `get_component_props` to get valid prop schema
4. Compare used props against schema
5. **FAIL**: Invalid props, unknown components
6. **WARN**: Deprecated patterns, React-style props
### 4. Theme Compliance Check
Detect hardcoded values that should use theme tokens:
| Pattern | Issue | Recommendation |
|---------|-------|----------------|
| `color="#228be6"` | Hardcoded color | Use `color="blue"` or theme token |
| `color="rgb(34, 139, 230)"` | Hardcoded RGB | Use theme color reference |
| `style={"padding": "16px"}` | Hardcoded size | Use `p="md"` prop |
| `style={"fontSize": "14px"}` | Hardcoded font | Use `size="sm"` prop |
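The hardcoded-value checks in this table are simple regex scans. An illustrative sketch, assuming the code follows the `style={...}` / `color="#..."` forms shown; the pattern names and function are hypothetical and intentionally conservative:

```python
import re

HARDCODED_PATTERNS = {
    "hex color": re.compile(r"""["']#[0-9a-fA-F]{3,8}["']"""),
    "rgb color": re.compile(r"rgba?\("),
    "pixel font size": re.compile(r"""["']fontSize["']\s*:\s*["']?\d+"""),
    "pixel spacing": re.compile(r"""["'](padding|margin)["']\s*:\s*["']?\d+"""),
}

def find_hardcoded_values(source: str) -> list[tuple[int, str, str]]:
    """Return (line number, category, offending line) for each suspect usage."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for category, pattern in HARDCODED_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, category, line.strip()))
    return findings
```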
### 5. Accessibility Validation
Check accessibility compliance:
1. Call `accessibility_validate_colors` on detected color pairs
2. Check color contrast ratios (min 4.5:1 for AA)
3. Verify interactive components have accessible labels
4. Flag missing aria-labels on buttons/links
### 6. Generate Report
Output format depends on operating mode.
## Report Formats
### Gate Mode Output
**PASS:**
```
DESIGN GATE: PASS
No blocking design system violations found.
```
**FAIL:**
```
DESIGN GATE: FAIL
Blocking Issues (2):
1. app/pages/home.py:45 - Invalid prop 'onclick' on dmc.Button
Fix: Use 'n_clicks' for click handling
2. app/components/nav.py:12 - Component 'dmc.Navbar' not found
Fix: Use 'dmc.AppShell.Navbar' (DMC v0.14+)
Run /design-review for full audit report.
```
### Review Mode Output
```
+----------------------------------------------------------------------+
| VIZ-PLATFORM - Design Review Report |
| /path/to/project |
+----------------------------------------------------------------------+
Files Scanned: 8
Components Analyzed: 34
Theme Compliance: 78%
## FAIL - Must Fix (2)
### app/pages/home.py:45
**Invalid prop on dmc.Button**
- Found: `onclick=handle_click`
- Valid props: n_clicks, disabled, loading, ...
- Fix: `n_clicks` triggers callback, not `onclick`
### app/components/nav.py:12
**Component not in registry**
- Found: `dmc.Navbar`
- Status: Removed in DMC v0.14
- Fix: Use `dmc.AppShell.Navbar` instead
## WARN - Should Fix (3)
### app/pages/home.py:23
**Hardcoded color**
- Found: `color="#228be6"`
- Recommendation: Use `theme.colors.blue[6]` or `color="blue"`
### app/components/card.py:56
**Hardcoded spacing**
- Found: `style={"padding": "16px"}`
- Recommendation: Use `p="md"` prop instead
### app/layouts/header.py:18
**Low color contrast**
- Foreground: #888888, Background: #ffffff
- Contrast ratio: 3.5:1 (below AA standard of 4.5:1)
- Recommendation: Darken text to #757575 for 4.6:1 ratio
## INFO - Suggestions (2)
### app/components/card.py:8
**Consider more specific component**
- Found: `dmc.Box` used as card container
- Suggestion: `dmc.Paper` provides card semantics and shadow
### app/pages/dashboard.py:92
**Missing aria-label**
- Found: `dmc.ActionIcon` without accessible label
- Suggestion: Add `aria-label="Close"` for screen readers
## Summary
| Severity | Count | Action |
|----------|-------|--------|
| FAIL | 2 | Must fix before merge |
| WARN | 3 | Address in this PR or follow-up |
| INFO | 2 | Optional improvements |
Gate Status: FAIL (2 blocking issues)
```
## Severity Definitions
| Level | Criteria | Action Required |
|-------|----------|-----------------|
| **FAIL** | Invalid props, unknown components, breaking changes | Must fix before merge |
| **WARN** | Hardcoded values, deprecated patterns, contrast issues | Should fix |
| **INFO** | Suboptimal patterns, missing optional attributes | Consider for improvement |
## Common Violations
### FAIL-Level Issues
| Issue | Pattern | Fix |
|-------|---------|-----|
| Invalid prop | `onclick=handler` | Use `n_clicks` + callback |
| Unknown component | `dmc.Navbar` | Use `dmc.AppShell.Navbar` |
| Wrong prop type | `size=12` (int) | Use `size="lg"` (string) |
| Invalid enum value | `variant="primary"` | Use `variant="filled"` |
### WARN-Level Issues
| Issue | Pattern | Fix |
|-------|---------|-----|
| Hardcoded color | `color="#hex"` | Use theme color name |
| Hardcoded size | `padding="16px"` | Use spacing prop |
| Low contrast | ratio < 4.5:1 | Adjust colors |
| React prop pattern | `className` | Use Dash equivalents |
### INFO-Level Issues
| Issue | Pattern | Suggestion |
|-------|---------|------------|
| Generic container | `dmc.Box` for cards | Use `dmc.Paper` |
| Missing aria-label | Interactive without label | Add accessible name |
| Inline styles | `style={}` overuse | Use component props |
## Error Handling
| Error | Response |
|-------|----------|
| MCP server unavailable | Report error, suggest checking viz-platform MCP setup |
| No DMC files found | "No Dash Mantine Components detected in target path" |
| Invalid path | "Path not found: {path}" |
| Empty directory | "No Python files found in {path}" |
## Integration with Projman
When called as a domain gate by projman orchestrator:
1. Receive path from orchestrator
2. Run audit in gate mode
3. Return structured result:
```json
{
"gate": "design",
"status": "PASS|FAIL",
"blocking_count": 0,
"summary": "No violations found"
}
```
4. Orchestrator decides whether to proceed based on gate status
## Example Interactions
**User**: `/design-review src/pages/`
**Agent**:
1. Scans all .py files in src/pages/
2. Identifies DMC component usage
3. Validates each component with MCP tools
4. Checks theme token usage
5. Runs accessibility validation
6. Returns full review report
**User**: `/design-gate src/`
**Agent**:
1. Scans all .py files
2. Identifies FAIL-level issues only
3. Returns PASS if clean, FAIL with blocking issues if not
4. Compact output for pipeline integration

View File

@@ -1,9 +1,3 @@
---
name: layout-builder
description: Practical dashboard layout specialist for creating well-structured layouts with filtering, grid systems, and responsive design.
model: sonnet
---
# Layout Builder Agent # Layout Builder Agent
You are a practical dashboard layout specialist. Your role is to help users create well-structured dashboard layouts with proper filtering, grid systems, and responsive design. You are a practical dashboard layout specialist. Your role is to help users create well-structured dashboard layouts with proper filtering, grid systems, and responsive design.

View File

@@ -1,9 +1,3 @@
---
name: theme-setup
description: Design-focused theme setup specialist for creating consistent, brand-aligned themes for Dash Mantine Components applications.
model: haiku
---
# Theme Setup Agent # Theme Setup Agent
You are a design-focused theme setup specialist. Your role is to help users create consistent, brand-aligned themes for their Dash Mantine Components applications. You are a design-focused theme setup specialist. Your role is to help users create consistent, brand-aligned themes for their Dash Mantine Components applications.

View File

@@ -1,94 +0,0 @@
---
description: Design system compliance gate (pass/fail) for sprint execution
gate_contract: v1
arguments:
- name: path
description: File or directory to validate
required: true
---
# /design-gate
Binary pass/fail validation for design system compliance. Used by projman orchestrator during sprint execution to gate issue completion.
## Usage
```
/design-gate <path>
```
**Examples:**
```
/design-gate ./app/pages/dashboard.py
/design-gate ./app/components/
```
## What It Does
1. **Activates** the `design-reviewer` agent in gate mode
2. **Loads** the `skills/design-system-audit.md` skill
3. **Scans** target path for DMC usage
4. **Checks only for FAIL-level violations:**
- Invalid component props
- Non-existent components
- Missing required props
- Deprecated components
5. **Returns binary result:**
- `PASS` - No blocking violations found
- `FAIL` - One or more blocking violations
## Output
### On PASS
```
DESIGN GATE: PASS
No blocking design system violations found.
```
### On FAIL
```
DESIGN GATE: FAIL
Blocking Issues (2):
1. app/pages/home.py:45 - Invalid prop 'onclick' on dmc.Button
Fix: Use 'n_clicks' for click handling
2. app/components/nav.py:12 - Component 'dmc.Navbar' not found
Fix: Use 'dmc.AppShell.Navbar' (DMC v0.14+)
Run /design-review for full audit report.
```
## Integration with projman
This command is automatically invoked by the projman orchestrator when:
1. An issue has the `Domain/Viz` label
2. The orchestrator is about to mark the issue as complete
3. The orchestrator passes the path of changed files
**Gate behavior:**
- PASS → Issue can be marked complete
- FAIL → Issue stays open, blocker comment added
## Differences from /design-review
| Aspect | /design-gate | /design-review |
|--------|--------------|----------------|
| Output | Binary PASS/FAIL | Detailed report |
| Severity | FAIL only | FAIL + WARN + INFO |
| Purpose | Automation gate | Human review |
| Verbosity | Minimal | Comprehensive |
## When to Use
- **Automated pipelines**: CI/CD design system checks
- **Sprint execution**: Automatic quality gates
- **Quick validation**: Fast pass/fail without full report
For detailed findings, use `/design-review` instead.
## Requirements
- viz-platform MCP server must be running
- Target path must exist

View File

@@ -1,70 +0,0 @@
---
description: Audit codebase for design system compliance
arguments:
- name: path
description: File or directory to audit
required: true
---
# /design-review
Scans target path for Dash Mantine Components usage and validates against design system standards.
## Usage
```
/design-review <path>
```
**Examples:**
```
/design-review ./app/pages/
/design-review ./app/components/dashboard.py
/design-review .
```
## What It Does
1. **Activates** the `design-reviewer` agent in review mode
2. **Loads** the `skills/design-system-audit.md` skill
3. **Scans** target path for:
- Python files with DMC imports
- Component instantiations and their props
- Style dictionaries and color values
- Accessibility attributes
4. **Validates** against:
- DMC component registry (valid components and props)
- Theme token usage (no hardcoded colors/sizes)
- Accessibility standards (contrast, ARIA labels)
5. **Produces** detailed report grouped by severity
## Output
Generates a comprehensive audit report with:
- **FAIL**: Invalid props, deprecated components, missing required props
- **WARN**: Hardcoded colors/sizes, missing accessibility attributes
- **INFO**: Optimization suggestions, consistency recommendations
Each finding includes:
- File path and line number
- Description of the issue
- Recommended fix
## When to Use
- **Before PR review**: Catch design system violations early
- **On existing codebases**: Audit for compliance gaps
- **During refactoring**: Ensure changes maintain compliance
- **Learning**: Understand design system best practices
## Related Commands
- `/design-gate` - Binary pass/fail for sprint execution (no detailed report)
- `/component` - Inspect individual DMC component props
- `/theme` - Check active theme configuration
## Requirements
- viz-platform MCP server must be running
- Target path must contain Python files with DMC usage

View File

@@ -1,280 +0,0 @@
---
name: design-system-audit
description: Design system compliance rules and violation patterns for viz-platform audits
---
# Design System Audit
## Purpose
Defines what to check, how to classify violations, and common patterns for design system compliance auditing of Dash Mantine Components (DMC) code.
---
## What to Check
### 1. Component Prop Validity
| Check | Tool | Severity |
|-------|------|----------|
| Invalid prop names (typos) | `validate_component` | FAIL |
| Invalid prop values | `validate_component` | FAIL |
| Missing required props | `validate_component` | FAIL |
| Deprecated props | `get_component_props` | WARN |
| Unknown props | `validate_component` | WARN |
### 2. Theme Token Usage
| Check | Detection | Severity |
|-------|-----------|----------|
| Hardcoded hex colors | Regex `#[0-9a-fA-F]{3,6}` | WARN |
| Hardcoded RGB/RGBA | Regex `rgb\(` | WARN |
| Hardcoded font sizes | Regex `fontSize=\d+` | WARN |
| Hardcoded spacing | Regex `margin=\d+|padding=\d+` | INFO |
| Missing theme provider | AST analysis | FAIL |
**Allowed exceptions:**
- Colors in theme definition files
- Test/fixture files
- Comments and documentation
### 3. Accessibility Compliance
| Check | Tool | Severity |
|-------|------|----------|
| Color contrast ratio < 4.5:1 (AA) | `accessibility_validate_colors` | WARN |
| Color contrast ratio < 3:1 (large text) | `accessibility_validate_colors` | WARN |
| Missing aria-label on interactive | Manual scan | WARN |
| Color-only information | `accessibility_validate_theme` | WARN |
| Focus indicator missing | Manual scan | INFO |
### 4. Responsive Design
| Check | Detection | Severity |
|-------|-----------|----------|
| Fixed pixel widths > 600px | Regex `width=\d{3,}` | INFO |
| Missing breakpoint handling | No `visibleFrom`/`hiddenFrom` | INFO |
| Non-responsive layout | Fixed Grid columns | INFO |
---
## Common Violations
### FAIL-Level Violations
```python
# Invalid prop name (typo)
dmc.Button(colour="blue") # Should be 'color'
# Invalid enum value
dmc.Button(size="large") # Should be 'lg'
# Missing required prop
dmc.Select(data=[...]) # Missing 'id' for callbacks
# Invalid component name
dmc.Buttons(...) # Should be 'Button'
# Wrong case
dmc.Button(fullwidth=True) # Should be 'fullWidth'
# React patterns in Dash
dmc.Button(onClick=fn) # Should use 'id' + callback
```
### WARN-Level Violations
```python
# Hardcoded colors
dmc.Text(color="#ff0000") # Use theme token
dmc.Button(style={"color": "red"}) # Use theme token
# Hardcoded font size
dmc.Text(style={"fontSize": "14px"}) # Use 'size' prop
# Poor contrast
dmc.Text(color="gray") # Check contrast ratio
# Inline styles for colors
dmc.Container(style={"backgroundColor": "#f0f0f0"})
# Deprecated patterns
dmc.Button(variant="subtle") # Check if still supported
```
### INFO-Level Violations
```python
# Fixed widths
dmc.Container(w=800) # Consider responsive
# Missing responsive handling
dmc.Grid.Col(span=6) # Consider span={"base": 12, "md": 6}
# Optimization opportunity
dmc.Stack([dmc.Text(...) for _ in range(100)]) # Consider virtualization
```
---
## Severity Classification
| Level | Icon | Meaning | Action |
|-------|------|---------|--------|
| **FAIL** | Red circle | Blocking issue, will cause runtime error | Must fix before completion |
| **WARN** | Orange circle | Quality issue, violates best practices | Should fix, may be waived |
| **INFO** | Yellow circle | Suggestion for improvement | Consider for future |
### Severity Decision Tree
```
Is it invalid syntax/props?
YES -> FAIL
NO -> Does it violate accessibility standards?
YES -> WARN
NO -> Does it use hardcoded styles?
YES -> WARN
NO -> Is it a best practice suggestion?
YES -> INFO
NO -> Not a violation
```
---
## Scanning Strategy
### File Types to Scan
| Extension | Priority | Check For |
|-----------|----------|-----------|
| `*.py` | High | DMC component usage |
| `*.dash.py` | High | Layout definitions |
| `theme*.py` | High | Theme configuration |
| `layout*.py` | High | Layout structure |
| `components/*.py` | High | Custom components |
| `callbacks/*.py` | Medium | Component references |
### Scan Process
1. **Find relevant files**
```
glob: **/*.py
filter: Contains 'dmc.' or 'dash_mantine_components'
```
2. **Extract component usages** (a sketch covering steps 2-3 follows this list)
- Parse Python AST
- Find all `dmc.*` calls
- Extract component name and kwargs
3. **Validate each component**
- Call `validate_component(name, props)`
- Record violations with file:line reference
4. **Scan for patterns**
- Hardcoded colors (regex)
- Inline styles (AST)
- Fixed dimensions (regex)
5. **Run accessibility checks**
- Extract color combinations
- Call `accessibility_validate_colors`
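
Steps 2 and 3 could look roughly like the sketch below, using the standard `ast` module to pull out `dmc.*` calls. The helper name and return shape are assumptions, and `ast.unparse` requires Python 3.9+.

```python
import ast

def extract_dmc_calls(source: str) -> list[tuple[str, dict, int]]:
    """Find dmc.<Component>(...) calls; return (component name, keyword props, line)."""
    calls = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and isinstance(node.func.value, ast.Name)
                and node.func.value.id == "dmc"):
            props = {kw.arg: ast.unparse(kw.value)
                     for kw in node.keywords if kw.arg is not None}
            calls.append((node.func.attr, props, node.lineno))
    return calls
```

Each `(name, props, line)` tuple would then be passed to `validate_component`, with errors recorded as FAIL and warnings as WARN against the originating file and line.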
---
## Report Template
```
Design System Audit Report
==========================
Path: <scanned-path>
Files Scanned: N
Timestamp: YYYY-MM-DD HH:MM:SS
Summary
-------
FAIL: N blocking violations
WARN: N quality warnings
INFO: N suggestions
Findings
--------
FAIL Violations (must fix)
--------------------------
[file.py:42] Invalid prop 'colour' on Button
Found: colour="blue"
Expected: color="blue"
Docs: https://mantine.dev/core/button
[file.py:58] Invalid size value on Text
Found: size="huge"
Expected: One of ['xs', 'sm', 'md', 'lg', 'xl']
WARN Violations (should fix)
----------------------------
[theme.py:15] Hardcoded color detected
Found: color="#ff0000"
Suggestion: Use theme color token (e.g., color="red")
[layout.py:23] Low contrast ratio (3.2:1)
Found: Text on background
Required: 4.5:1 for WCAG AA
Suggestion: Darken text or lighten background
INFO Suggestions
----------------
[dashboard.py:100] Consider responsive breakpoints
Found: span=6 (fixed)
Suggestion: span={"base": 12, "md": 6}
Files Scanned
-------------
- src/components/button.py (3 components)
- src/layouts/main.py (8 components)
- src/theme.py (1 theme config)
Gate Result: PASS | FAIL
```
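
A condensed sketch of assembling the Summary block and gate line of this report from collected violation records; the record shape and function name are assumptions, and the remaining report sections are omitted.

```python
from collections import Counter

def render_summary(violations: list[dict]) -> str:
    """Build the Summary block and gate line from collected violation records."""
    counts = Counter(v["severity"] for v in violations)
    gate = "FAIL" if counts["FAIL"] else "PASS"  # any FAIL-level finding blocks the gate
    return "\n".join([
        "Summary",
        "-------",
        f"FAIL: {counts['FAIL']} blocking violations",
        f"WARN: {counts['WARN']} quality warnings",
        f"INFO: {counts['INFO']} suggestions",
        "",
        f"Gate Result: {gate}",
    ])
```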
---
## Integration with MCP Tools
### Required Tools
| Tool | Purpose | When Used |
|------|---------|-----------|
| `validate_component` | Check component props | Every component found |
| `get_component_props` | Get expected props | When suggesting fixes |
| `list_components` | Verify component exists | Unknown component names |
| `theme_validate` | Validate theme config | Theme files |
| `accessibility_validate_colors` | Check contrast | Color combinations |
| `accessibility_validate_theme` | Full a11y audit | Theme files |
### Tool Call Pattern
```
For each Python file:
For each dmc.Component(...) found:
result = validate_component(
component_name="Component",
props={extracted_props}
)
if result.errors:
record FAIL violation
if result.warnings:
record WARN violation
```
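
In Python terms, the same call pattern might look like the sketch below. Here `validate_component` stands in for the MCP tool call, and the `errors`/`warnings` attributes follow the pseudocode above rather than a confirmed client API; `extract_dmc_calls` refers to the earlier Scan Process sketch.

```python
def audit_file(path: str, source: str, validate_component) -> list[dict]:
    """Validate every dmc.* call in one file, mirroring the pattern above."""
    violations = []
    for name, props, lineno in extract_dmc_calls(source):
        result = validate_component(component_name=name, props=props)
        for message in getattr(result, "errors", []):
            violations.append({"file": path, "line": lineno,
                               "severity": "FAIL", "message": message})
        for message in getattr(result, "warnings", []):
            violations.append({"file": path, "line": lineno,
                               "severity": "WARN", "message": message})
    return violations
```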
---
## Skip Patterns
Do not flag violations in:
- `**/tests/**` - Test files may have intentional violations
- `**/__pycache__/**` - Compiled files
- `**/fixtures/**` - Test fixtures
- Files with `# noqa: design-audit` comment
- Theme definition files (colors are expected there)
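
A minimal sketch of how these rules could be applied, assuming directory-name matching and a plain substring check for the opt-out comment; theme definition files would additionally be exempted from the color checks only.

```python
from pathlib import PurePath

SKIP_DIRS = {"tests", "__pycache__", "fixtures"}

def should_skip(path: str, source: str) -> bool:
    """True when the file lives in a skipped directory or opts out via comment."""
    if SKIP_DIRS.intersection(PurePath(path).parts):
        return True
    return "# noqa: design-audit" in source
```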

View File

@@ -1,384 +0,0 @@
#!/usr/bin/env bash
# =============================================================================
# install-plugin.sh - Install marketplace plugin to a consumer project
# =============================================================================
#
# Usage: ./scripts/install-plugin.sh <plugin-name> <target-project-path>
#
# This script:
# 1. Validates plugin exists in the marketplace
# 2. Updates target project's .mcp.json with MCP server entries (if applicable)
# 3. Appends CLAUDE.md integration snippet to target project
# 4. Is idempotent (safe to run multiple times)
#
# Examples:
# ./scripts/install-plugin.sh data-platform ~/projects/personal-portfolio
# ./scripts/install-plugin.sh projman /home/user/my-project
#
# =============================================================================
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(dirname "$SCRIPT_DIR")"
# --- Color Definitions ---
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color
# --- Logging Functions ---
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[OK]${NC} $1"; }
log_skip() { echo -e "${YELLOW}[SKIP]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
log_warning() { echo -e "${YELLOW}[WARN]${NC} $1"; }
# --- Track Changes ---
CHANGES_MADE=()
SKIPPED=()
MCP_SERVERS_INSTALLED=()
# --- Usage ---
usage() {
echo "Usage: $0 <plugin-name> <target-project-path>"
echo ""
echo "Install a marketplace plugin to a consumer project."
echo ""
echo "Arguments:"
echo " plugin-name Name of the plugin (e.g., data-platform, viz-platform, projman)"
echo " target-project-path Path to the target project (absolute or relative)"
echo ""
echo "Available plugins:"
for dir in "$REPO_ROOT"/plugins/*/; do
if [[ -d "$dir" ]]; then
basename "$dir"
fi
done
echo ""
echo "Examples:"
echo " $0 data-platform ~/projects/personal-portfolio"
echo " $0 projman /home/user/my-project"
exit 1
}
# --- Prerequisite Check ---
check_prerequisites() {
if ! command -v jq &> /dev/null; then
log_error "jq is required but not installed."
echo "Install with: sudo apt install jq"
exit 1
fi
}
# --- Validate Plugin Exists ---
validate_plugin() {
local plugin_name="$1"
local plugin_dir="$REPO_ROOT/plugins/$plugin_name"
if [[ ! -d "$plugin_dir" ]]; then
log_error "Plugin '$plugin_name' not found in $REPO_ROOT/plugins/"
echo ""
echo "Available plugins:"
for dir in "$REPO_ROOT"/plugins/*/; do
if [[ -d "$dir" ]]; then
echo " - $(basename "$dir")"
fi
done
exit 1
fi
if [[ ! -f "$plugin_dir/.claude-plugin/plugin.json" ]]; then
log_error "Plugin '$plugin_name' missing .claude-plugin/plugin.json"
exit 1
fi
log_success "Plugin '$plugin_name' found"
}
# --- Validate Target Project ---
validate_target() {
local target_path="$1"
if [[ ! -d "$target_path" ]]; then
log_error "Target project path does not exist: $target_path"
exit 1
fi
log_success "Target project found: $target_path"
# Warn if no CLAUDE.md
if [[ ! -f "$target_path/CLAUDE.md" ]]; then
log_warning "Target project has no CLAUDE.md - will create one"
fi
}
# --- Get MCP Servers for Plugin ---
# Reads the mcp_servers array from plugin.json
# Returns newline-separated list of MCP server names, or empty if none
get_mcp_servers() {
local plugin_name="$1"
local plugin_json="$REPO_ROOT/plugins/$plugin_name/.claude-plugin/plugin.json"
if [[ ! -f "$plugin_json" ]]; then
return
fi
# Read mcp_servers array from plugin.json
# Returns empty if field doesn't exist or is empty
jq -r '.mcp_servers // [] | .[]' "$plugin_json" 2>/dev/null || true
}
# --- Check if plugin has any MCP servers ---
has_mcp_servers() {
local plugin_name="$1"
local servers
servers=$(get_mcp_servers "$plugin_name")
[[ -n "$servers" ]]
}
# --- Update .mcp.json ---
update_mcp_json() {
local plugin_name="$1"
local target_path="$2"
local mcp_json="$target_path/.mcp.json"
# Get MCP servers for this plugin
local mcp_servers
mcp_servers=$(get_mcp_servers "$plugin_name")
if [[ -z "$mcp_servers" ]]; then
log_skip "Plugin '$plugin_name' has no MCP servers - skipping .mcp.json update"
SKIPPED+=(".mcp.json: No MCP servers for $plugin_name")
return 0
fi
# Create .mcp.json if it doesn't exist
if [[ ! -f "$mcp_json" ]]; then
log_info "Creating new .mcp.json"
echo '{"mcpServers":{}}' > "$mcp_json"
CHANGES_MADE+=("Created .mcp.json")
fi
# Add each MCP server
local servers_added=0
while IFS= read -r server_name; do
[[ -z "$server_name" ]] && continue
local mcp_server_path="$REPO_ROOT/mcp-servers/$server_name/run.sh"
# Verify server exists
if [[ ! -f "$mcp_server_path" ]]; then
log_warning "MCP server '$server_name' not found at $mcp_server_path"
continue
fi
# Check if entry already exists
if jq -e ".mcpServers[\"$server_name\"]" "$mcp_json" > /dev/null 2>&1; then
log_skip "MCP server '$server_name' already in .mcp.json"
SKIPPED+=(".mcp.json: $server_name already present")
continue
fi
# Add MCP server entry
log_info "Adding MCP server '$server_name' to .mcp.json"
local tmp_file=$(mktemp)
jq ".mcpServers[\"$server_name\"] = {\"command\": \"$mcp_server_path\", \"args\": []}" "$mcp_json" > "$tmp_file"
mv "$tmp_file" "$mcp_json"
CHANGES_MADE+=("Added $server_name to .mcp.json")
MCP_SERVERS_INSTALLED+=("$server_name")
log_success "Added MCP server entry for '$server_name'"
((++servers_added))
done <<< "$mcp_servers"
}
# --- Update CLAUDE.md ---
update_claude_md() {
local plugin_name="$1"
local target_path="$2"
local target_claude_md="$target_path/CLAUDE.md"
local integration_file="$REPO_ROOT/plugins/$plugin_name/claude-md-integration.md"
# Check if integration file exists
if [[ ! -f "$integration_file" ]]; then
log_skip "No claude-md-integration.md for plugin '$plugin_name'"
SKIPPED+=("CLAUDE.md: No integration snippet for $plugin_name")
return 0
fi
# Create CLAUDE.md if it doesn't exist
if [[ ! -f "$target_claude_md" ]]; then
log_info "Creating new CLAUDE.md"
cat > "$target_claude_md" << 'EOF'
# CLAUDE.md
This file provides guidance to Claude Code when working with code in this repository.
EOF
CHANGES_MADE+=("Created CLAUDE.md")
fi
# Check if already integrated using HTML comment marker (preferred)
local begin_marker="<!-- BEGIN marketplace-plugin: $plugin_name -->"
if grep -qF "$begin_marker" "$target_claude_md" 2>/dev/null; then
log_skip "Plugin '$plugin_name' integration already in CLAUDE.md"
SKIPPED+=("CLAUDE.md: $plugin_name already present")
return 0
fi
# Fallback: check for legacy header format (backward compatibility)
if grep -qE "^# ${plugin_name}( Plugin)? -? ?CLAUDE\.md Integration" "$target_claude_md" 2>/dev/null; then
log_skip "Plugin '$plugin_name' integration already in CLAUDE.md (legacy format)"
SKIPPED+=("CLAUDE.md: $plugin_name already present")
return 0
fi
# Read integration content
local integration_content
integration_content=$(cat "$integration_file")
# Check for or create Marketplace Plugin Integration section
local section_header="## Marketplace Plugin Integration"
if ! grep -qF "$section_header" "$target_claude_md"; then
log_info "Creating '$section_header' section"
echo "" >> "$target_claude_md"
echo "$section_header" >> "$target_claude_md"
echo "" >> "$target_claude_md"
echo "The following plugins are installed from the leo-claude-mktplace:" >> "$target_claude_md"
echo "" >> "$target_claude_md"
fi
# Append integration content with HTML comment markers
log_info "Adding '$plugin_name' integration to CLAUDE.md"
local end_marker="<!-- END marketplace-plugin: $plugin_name -->"
echo "" >> "$target_claude_md"
echo "---" >> "$target_claude_md"
echo "" >> "$target_claude_md"
echo "$begin_marker" >> "$target_claude_md"
echo "" >> "$target_claude_md"
echo "$integration_content" >> "$target_claude_md"
echo "" >> "$target_claude_md"
echo "$end_marker" >> "$target_claude_md"
CHANGES_MADE+=("Added $plugin_name integration to CLAUDE.md")
log_success "Added CLAUDE.md integration for '$plugin_name'"
}
# --- Get Commands for Plugin ---
get_plugin_commands() {
local plugin_name="$1"
local commands_dir="$REPO_ROOT/plugins/$plugin_name/commands"
if [[ ! -d "$commands_dir" ]]; then
return
fi
for cmd_file in "$commands_dir"/*.md; do
if [[ -f "$cmd_file" ]]; then
local cmd_name
cmd_name=$(basename "$cmd_file" .md)
echo " /$cmd_name"
fi
done
}
# --- Print Summary ---
print_summary() {
local plugin_name="$1"
local target_path="$2"
echo ""
echo "=============================================="
echo -e "${GREEN}Installation Summary${NC}"
echo "=============================================="
echo ""
echo -e "${CYAN}Plugin:${NC} $plugin_name"
echo -e "${CYAN}Target:${NC} $target_path"
echo ""
if [[ ${#CHANGES_MADE[@]} -gt 0 ]]; then
echo -e "${GREEN}Changes Made:${NC}"
for change in "${CHANGES_MADE[@]}"; do
echo "$change"
done
echo ""
fi
if [[ ${#SKIPPED[@]} -gt 0 ]]; then
echo -e "${YELLOW}Skipped (already present or N/A):${NC}"
for skip in "${SKIPPED[@]}"; do
echo " - $skip"
done
echo ""
fi
# Show available commands
echo -e "${CYAN}Commands Now Available:${NC}"
local commands
commands=$(get_plugin_commands "$plugin_name")
if [[ -n "$commands" ]]; then
echo "$commands"
else
echo " (No commands - this plugin may be hooks-only)"
fi
echo ""
# MCP servers info
if [[ ${#MCP_SERVERS_INSTALLED[@]} -gt 0 ]]; then
echo -e "${CYAN}MCP Servers Installed:${NC}"
for server in "${MCP_SERVERS_INSTALLED[@]}"; do
echo " - $server"
done
echo ""
elif has_mcp_servers "$plugin_name"; then
echo -e "${CYAN}MCP Tools:${NC}"
echo " This plugin includes MCP server tools. Use ToolSearch to discover them."
echo ""
fi
# Important reminder
echo -e "${YELLOW}⚠️ IMPORTANT:${NC}"
echo " Restart your Claude Code session for changes to take effect."
echo " The .mcp.json changes require a session restart to load MCP servers."
echo ""
}
# =============================================================================
# Main Execution
# =============================================================================
# Check arguments
if [[ $# -lt 2 ]]; then
usage
fi
PLUGIN_NAME="$1"
TARGET_PATH="$2"
# Resolve target path to absolute
TARGET_PATH=$(cd "$TARGET_PATH" 2>/dev/null && pwd || echo "$TARGET_PATH")
echo ""
echo "=============================================="
echo -e "${BLUE}Installing Plugin: $PLUGIN_NAME${NC}"
echo "=============================================="
echo ""
# Run checks
check_prerequisites
validate_plugin "$PLUGIN_NAME"
validate_target "$TARGET_PATH"
echo ""
# Perform installation
update_mcp_json "$PLUGIN_NAME" "$TARGET_PATH"
update_claude_md "$PLUGIN_NAME" "$TARGET_PATH"
# Print summary
print_summary "$PLUGIN_NAME" "$TARGET_PATH"

View File

@@ -1,322 +0,0 @@
#!/usr/bin/env bash
# =============================================================================
# list-installed.sh - Show installed marketplace plugins in a project
# =============================================================================
#
# Usage: ./scripts/list-installed.sh <target-project-path>
#
# This script:
# 1. Checks .mcp.json for MCP server entries from this marketplace
# 2. Checks CLAUDE.md for plugin integration sections
# 3. Reports which plugins are installed
#
# Examples:
# ./scripts/list-installed.sh ~/projects/personal-portfolio
# ./scripts/list-installed.sh .
#
# =============================================================================
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(dirname "$SCRIPT_DIR")"
# --- Color Definitions ---
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color
# --- Logging Functions ---
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[OK]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
log_warning() { echo -e "${YELLOW}[WARN]${NC} $1"; }
# --- Usage ---
usage() {
echo "Usage: $0 <target-project-path>"
echo ""
echo "Show which marketplace plugins are installed in a project."
echo ""
echo "Arguments:"
echo " target-project-path Path to the target project (absolute or relative)"
echo ""
echo "Examples:"
echo " $0 ~/projects/personal-portfolio"
echo " $0 ."
exit 1
}
# --- Prerequisite Check ---
check_prerequisites() {
if ! command -v jq &> /dev/null; then
log_error "jq is required but not installed."
echo "Install with: sudo apt install jq"
exit 1
fi
}
# --- Get Available Plugins ---
get_available_plugins() {
for dir in "$REPO_ROOT"/plugins/*/; do
if [[ -d "$dir" ]]; then
basename "$dir"
fi
done
}
# --- Get MCP Servers for Plugin ---
# Reads the mcp_servers array from plugin.json
get_mcp_servers() {
local plugin_name="$1"
local plugin_json="$REPO_ROOT/plugins/$plugin_name/.claude-plugin/plugin.json"
if [[ ! -f "$plugin_json" ]]; then
return
fi
jq -r '.mcp_servers // [] | .[]' "$plugin_json" 2>/dev/null || true
}
# --- Check if plugin has any MCP servers defined ---
has_mcp_servers() {
local plugin_name="$1"
local servers
servers=$(get_mcp_servers "$plugin_name")
[[ -n "$servers" ]]
}
# --- Check MCP Installation ---
check_mcp_installed() {
local plugin_name="$1"
local target_path="$2"
local mcp_json="$target_path/.mcp.json"
if [[ ! -f "$mcp_json" ]]; then
return 1
fi
# Get MCP servers for this plugin from plugin.json
local mcp_servers
mcp_servers=$(get_mcp_servers "$plugin_name")
if [[ -z "$mcp_servers" ]]; then
# Plugin has no MCP servers defined, so MCP check passes
return 0
fi
# Check if ALL required MCP servers are present
while IFS= read -r server_name; do
[[ -z "$server_name" ]] && continue
if ! jq -e ".mcpServers[\"$server_name\"]" "$mcp_json" > /dev/null 2>&1; then
# Also check if any entry points to this marketplace's mcp-servers
if ! grep -q "mcp-servers/$server_name" "$mcp_json" 2>/dev/null; then
return 1
fi
fi
done <<< "$mcp_servers"
return 0
}
# --- Check CLAUDE.md Integration ---
check_claude_md_installed() {
local plugin_name="$1"
local target_path="$2"
local target_claude_md="$target_path/CLAUDE.md"
if [[ ! -f "$target_claude_md" ]]; then
return 1
fi
# Check for HTML comment marker (preferred, new format)
local begin_marker="<!-- BEGIN marketplace-plugin: $plugin_name -->"
if grep -qF "$begin_marker" "$target_claude_md" 2>/dev/null; then
return 0
fi
# Fallback: check for legacy header format
if grep -qE "^# ${plugin_name}( Plugin)? -? ?CLAUDE\.md Integration" "$target_claude_md" 2>/dev/null; then
return 0
fi
return 1
}
# --- Get Plugin Version ---
get_plugin_version() {
local plugin_name="$1"
local plugin_json="$REPO_ROOT/plugins/$plugin_name/.claude-plugin/plugin.json"
if [[ -f "$plugin_json" ]]; then
jq -r '.version // "unknown"' "$plugin_json"
else
echo "unknown"
fi
}
# --- Get Plugin Description ---
get_plugin_description() {
local plugin_name="$1"
local plugin_json="$REPO_ROOT/plugins/$plugin_name/.claude-plugin/plugin.json"
if [[ -f "$plugin_json" ]]; then
jq -r '.description // "No description"' "$plugin_json" | cut -c1-60
else
echo "No description"
fi
}
# =============================================================================
# Main Execution
# =============================================================================
# Check arguments
if [[ $# -lt 1 ]]; then
usage
fi
TARGET_PATH="$1"
# Resolve target path to absolute
if [[ -d "$TARGET_PATH" ]]; then
TARGET_PATH=$(cd "$TARGET_PATH" && pwd)
else
log_error "Target project path does not exist: $TARGET_PATH"
exit 1
fi
check_prerequisites
echo ""
echo "=============================================="
echo -e "${BLUE}Installed Plugins: $(basename "$TARGET_PATH")${NC}"
echo "=============================================="
echo -e "${CYAN}Target:${NC} $TARGET_PATH"
echo ""
# Collect results
declare -A INSTALLED_MCP
declare -A INSTALLED_CLAUDE_MD
INSTALLED_PLUGINS=()
PARTIAL_PLUGINS=()
NOT_INSTALLED=()
# Check each available plugin
for plugin in $(get_available_plugins); do
mcp_installed=false
claude_installed=false
needs_mcp=false
# Check if plugin has MCP servers defined
if has_mcp_servers "$plugin"; then
needs_mcp=true
fi
# Check MCP installation
if check_mcp_installed "$plugin" "$TARGET_PATH"; then
mcp_installed=true
INSTALLED_MCP[$plugin]=true
fi
# Check CLAUDE.md integration
if check_claude_md_installed "$plugin" "$TARGET_PATH"; then
claude_installed=true
INSTALLED_CLAUDE_MD[$plugin]=true
fi
# Categorize
if $claude_installed; then
if $needs_mcp; then
if $mcp_installed; then
INSTALLED_PLUGINS+=("$plugin")
else
PARTIAL_PLUGINS+=("$plugin")
fi
else
# Plugins without MCP servers just need CLAUDE.md
INSTALLED_PLUGINS+=("$plugin")
fi
elif $mcp_installed && $needs_mcp; then
# Has MCP but missing CLAUDE.md
PARTIAL_PLUGINS+=("$plugin")
else
NOT_INSTALLED+=("$plugin")
fi
done
# Print fully installed plugins
if [[ ${#INSTALLED_PLUGINS[@]} -gt 0 ]]; then
echo -e "${GREEN}✓ Fully Installed:${NC}"
echo ""
printf " %-24s %-10s %s\n" "PLUGIN" "VERSION" "DESCRIPTION"
printf " %-24s %-10s %s\n" "------" "-------" "-----------"
for plugin in "${INSTALLED_PLUGINS[@]}"; do
version=$(get_plugin_version "$plugin")
desc=$(get_plugin_description "$plugin")
printf " %-24s %-10s %s\n" "$plugin" "$version" "$desc"
done
echo ""
fi
# Print partially installed plugins
if [[ ${#PARTIAL_PLUGINS[@]} -gt 0 ]]; then
echo -e "${YELLOW}⚠ Partially Installed:${NC}"
echo ""
for plugin in "${PARTIAL_PLUGINS[@]}"; do
version=$(get_plugin_version "$plugin")
echo " $plugin (v$version)"
if [[ -v INSTALLED_MCP[$plugin] ]]; then
echo " ✓ MCP server configured in .mcp.json"
else
# Show which MCP servers are missing
mcp_servers=$(get_mcp_servers "$plugin")
if [[ -n "$mcp_servers" ]]; then
echo " ✗ MCP server(s) NOT in .mcp.json: $mcp_servers"
fi
fi
if [[ -v INSTALLED_CLAUDE_MD[$plugin] ]]; then
echo " ✓ Integration in CLAUDE.md"
else
echo " ✗ Integration NOT in CLAUDE.md"
fi
echo ""
done
echo " Run install-plugin.sh to complete installation."
echo ""
fi
# Print available but not installed
if [[ ${#NOT_INSTALLED[@]} -gt 0 ]]; then
echo -e "${BLUE}○ Available (not installed):${NC}"
echo ""
for plugin in "${NOT_INSTALLED[@]}"; do
version=$(get_plugin_version "$plugin")
desc=$(get_plugin_description "$plugin")
printf " %-24s %-10s %s\n" "$plugin" "$version" "$desc"
done
echo ""
fi
# Summary
echo "----------------------------------------------"
total_available=$(get_available_plugins | wc -l)
total_installed=${#INSTALLED_PLUGINS[@]}
total_partial=${#PARTIAL_PLUGINS[@]}
echo -e "Total: ${GREEN}$total_installed installed${NC}"
if [[ $total_partial -gt 0 ]]; then
echo -e " ${YELLOW}$total_partial partial${NC}"
fi
echo " $((total_available - total_installed - total_partial)) available"
echo ""
# Install hint
if [[ ${#NOT_INSTALLED[@]} -gt 0 ]]; then
echo "To install a plugin:"
echo " $SCRIPT_DIR/install-plugin.sh <plugin-name> $TARGET_PATH"
echo ""
fi

View File

@@ -278,7 +278,7 @@ print_report() {
 # --- Main ---
 main() {
     echo "=============================================="
-    echo " Leo Claude Marketplace Setup (v5.7.1)"
+    echo " Leo Claude Marketplace Setup (v5.1.0)"
     echo "=============================================="
     echo ""

View File

@@ -1,363 +0,0 @@
#!/usr/bin/env bash
# =============================================================================
# uninstall-plugin.sh - Remove marketplace plugin from a consumer project
# =============================================================================
#
# Usage: ./scripts/uninstall-plugin.sh <plugin-name> <target-project-path>
#
# This script:
# 1. Removes MCP server entries from target project's .mcp.json
# 2. Removes CLAUDE.md integration section for the plugin
# 3. Is idempotent (safe to run multiple times)
#
# Examples:
# ./scripts/uninstall-plugin.sh data-platform ~/projects/personal-portfolio
# ./scripts/uninstall-plugin.sh projman /home/user/my-project
#
# =============================================================================
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(dirname "$SCRIPT_DIR")"
# --- Color Definitions ---
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color
# --- Logging Functions ---
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[OK]${NC} $1"; }
log_skip() { echo -e "${YELLOW}[SKIP]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
log_warning() { echo -e "${YELLOW}[WARN]${NC} $1"; }
# --- Track Changes ---
CHANGES_MADE=()
SKIPPED=()
# --- Usage ---
usage() {
echo "Usage: $0 <plugin-name> <target-project-path>"
echo ""
echo "Remove a marketplace plugin from a consumer project."
echo ""
echo "Arguments:"
echo " plugin-name Name of the plugin (e.g., data-platform, viz-platform, projman)"
echo " target-project-path Path to the target project (absolute or relative)"
echo ""
echo "Examples:"
echo " $0 data-platform ~/projects/personal-portfolio"
echo " $0 projman /home/user/my-project"
exit 1
}
# --- Prerequisite Check ---
check_prerequisites() {
if ! command -v jq &> /dev/null; then
log_error "jq is required but not installed."
echo "Install with: sudo apt install jq"
exit 1
fi
}
# --- Validate Target Project ---
validate_target() {
local target_path="$1"
if [[ ! -d "$target_path" ]]; then
log_error "Target project path does not exist: $target_path"
exit 1
fi
log_success "Target project found: $target_path"
}
# --- Get MCP Servers for Plugin ---
# Reads the mcp_servers array from plugin.json
# Returns newline-separated list of MCP server names, or empty if none
get_mcp_servers() {
local plugin_name="$1"
local plugin_json="$REPO_ROOT/plugins/$plugin_name/.claude-plugin/plugin.json"
if [[ ! -f "$plugin_json" ]]; then
return
fi
# Read mcp_servers array from plugin.json
# Returns empty if field doesn't exist or is empty
jq -r '.mcp_servers // [] | .[]' "$plugin_json" 2>/dev/null || true
}
# --- Remove from .mcp.json ---
remove_from_mcp_json() {
local plugin_name="$1"
local target_path="$2"
local mcp_json="$target_path/.mcp.json"
# Check if .mcp.json exists
if [[ ! -f "$mcp_json" ]]; then
log_skip "No .mcp.json found - nothing to remove"
SKIPPED+=(".mcp.json: File does not exist")
return 0
fi
# Get MCP servers for this plugin
local mcp_servers
mcp_servers=$(get_mcp_servers "$plugin_name")
if [[ -z "$mcp_servers" ]]; then
# Fallback: try to remove entry with plugin name (backward compatibility)
if jq -e ".mcpServers[\"$plugin_name\"]" "$mcp_json" > /dev/null 2>&1; then
log_info "Removing MCP server '$plugin_name' from .mcp.json"
local tmp_file=$(mktemp)
jq "del(.mcpServers[\"$plugin_name\"])" "$mcp_json" > "$tmp_file"
mv "$tmp_file" "$mcp_json"
CHANGES_MADE+=("Removed $plugin_name from .mcp.json")
log_success "Removed MCP server entry for '$plugin_name'"
else
log_skip "Plugin '$plugin_name' has no MCP servers configured"
SKIPPED+=(".mcp.json: No MCP servers for $plugin_name")
fi
return 0
fi
# Remove each MCP server
local servers_removed=0
while IFS= read -r server_name; do
[[ -z "$server_name" ]] && continue
# Check if entry exists
if ! jq -e ".mcpServers[\"$server_name\"]" "$mcp_json" > /dev/null 2>&1; then
log_skip "MCP server '$server_name' not in .mcp.json"
SKIPPED+=(".mcp.json: $server_name not present")
continue
fi
# Remove MCP server entry
log_info "Removing MCP server '$server_name' from .mcp.json"
local tmp_file=$(mktemp)
jq "del(.mcpServers[\"$server_name\"])" "$mcp_json" > "$tmp_file"
mv "$tmp_file" "$mcp_json"
CHANGES_MADE+=("Removed $server_name from .mcp.json")
log_success "Removed MCP server entry for '$server_name'"
((++servers_removed))
done <<< "$mcp_servers"
}
# --- Remove from CLAUDE.md ---
remove_from_claude_md() {
local plugin_name="$1"
local target_path="$2"
local target_claude_md="$target_path/CLAUDE.md"
# Check if CLAUDE.md exists
if [[ ! -f "$target_claude_md" ]]; then
log_skip "No CLAUDE.md found - nothing to remove"
SKIPPED+=("CLAUDE.md: File does not exist")
return 0
fi
# Try HTML comment markers first (preferred method)
local begin_marker="<!-- BEGIN marketplace-plugin: $plugin_name -->"
local end_marker="<!-- END marketplace-plugin: $plugin_name -->"
if grep -qF "$begin_marker" "$target_claude_md" 2>/dev/null; then
log_info "Removing '$plugin_name' section from CLAUDE.md (using markers)"
# Remove everything between markers (inclusive) and preceding ---
local tmp_file=$(mktemp)
awk -v begin="$begin_marker" -v end="$end_marker" '
BEGIN { skip = 0; prev_hr = 0; buffer = "" }
{
is_hr = /^---[[:space:]]*$/
if ($0 == begin) {
skip = 1
# If previous line was ---, dont print it
if (prev_hr) {
buffer = ""
}
next
}
if (skip) {
if ($0 == end) {
skip = 0
}
next
}
# Print buffered content
if (buffer != "") {
print buffer
}
# Buffer current line (in case its --- before a marker)
buffer = $0
prev_hr = is_hr
}
END {
# Print final buffered content
if (buffer != "") {
print buffer
}
}
' "$target_claude_md" > "$tmp_file"
# Clean up multiple consecutive blank lines
awk 'NF{blank=0} !NF{blank++} blank<=2' "$tmp_file" > "${tmp_file}.clean"
mv "${tmp_file}.clean" "$target_claude_md"
rm -f "$tmp_file"
CHANGES_MADE+=("Removed $plugin_name section from CLAUDE.md")
log_success "Removed CLAUDE.md section for '$plugin_name'"
return 0
fi
# Fallback: try legacy header-based detection
local section_header
section_header=$(grep -E "^# ${plugin_name}( Plugin)? -? ?CLAUDE\.md Integration" "$target_claude_md" 2>/dev/null | head -1)
if [[ -z "$section_header" ]]; then
log_skip "Plugin '$plugin_name' section not found in CLAUDE.md"
SKIPPED+=("CLAUDE.md: $plugin_name section not found")
return 0
fi
log_info "Removing '$plugin_name' section from CLAUDE.md (legacy format)"
# Create temp file and use awk to remove section
local tmp_file=$(mktemp)
awk -v header="$section_header" '
BEGIN { skip = 0; found = 0; in_code_block = 0 }
{
# Track code blocks (``` markers)
if (/^```/) {
in_code_block = !in_code_block
}
# Check if this is the section header we want to remove
if ($0 == header) {
skip = 1
found = 1
next
}
# Check if this is a horizontal rule (---) - only count if not in code block
is_hr = /^---[[:space:]]*$/ && !in_code_block
# Check if this is a new plugin section header (only outside code blocks)
is_new_plugin_section = /^# [a-z-]+( Plugin)? -? ?CLAUDE\.md Integration/ && !in_code_block && $0 != header
# Check for HTML marker (new format)
is_begin_marker = /^<!-- BEGIN marketplace-plugin:/ && !in_code_block
if (skip) {
# Stop skipping when we hit --- or a new section
if (is_hr) {
skip = 0
next
}
if (is_new_plugin_section || is_begin_marker) {
skip = 0
print
}
next
}
print
}
END { if (!found) exit 1 }
' "$target_claude_md" > "$tmp_file" 2>/dev/null
if [[ $? -eq 0 ]]; then
# Clean up multiple consecutive blank lines
awk 'NF{blank=0} !NF{blank++} blank<=2' "$tmp_file" > "${tmp_file}.clean"
mv "${tmp_file}.clean" "$target_claude_md"
rm -f "$tmp_file"
CHANGES_MADE+=("Removed $plugin_name section from CLAUDE.md")
log_success "Removed CLAUDE.md section for '$plugin_name'"
else
rm -f "$tmp_file"
log_skip "Could not locate exact section boundaries in CLAUDE.md"
log_warning "You may need to manually remove the $plugin_name section"
SKIPPED+=("CLAUDE.md: Manual removal may be needed")
fi
}
# --- Print Summary ---
print_summary() {
local plugin_name="$1"
local target_path="$2"
echo ""
echo "=============================================="
echo -e "${GREEN}Uninstallation Summary${NC}"
echo "=============================================="
echo ""
echo -e "${CYAN}Plugin:${NC} $plugin_name"
echo -e "${CYAN}Target:${NC} $target_path"
echo ""
if [[ ${#CHANGES_MADE[@]} -gt 0 ]]; then
echo -e "${GREEN}Changes Made:${NC}"
for change in "${CHANGES_MADE[@]}"; do
echo "$change"
done
echo ""
fi
if [[ ${#SKIPPED[@]} -gt 0 ]]; then
echo -e "${YELLOW}Skipped (not present or N/A):${NC}"
for skip in "${SKIPPED[@]}"; do
echo " - $skip"
done
echo ""
fi
if [[ ${#CHANGES_MADE[@]} -gt 0 ]]; then
echo -e "${YELLOW}⚠️ IMPORTANT:${NC}"
echo " Restart your Claude Code session for changes to take effect."
echo ""
fi
}
# =============================================================================
# Main Execution
# =============================================================================
# Check arguments
if [[ $# -lt 2 ]]; then
usage
fi
PLUGIN_NAME="$1"
TARGET_PATH="$2"
# Resolve target path to absolute
TARGET_PATH=$(cd "$TARGET_PATH" 2>/dev/null && pwd || echo "$TARGET_PATH")
echo ""
echo "=============================================="
echo -e "${BLUE}Uninstalling Plugin: $PLUGIN_NAME${NC}"
echo "=============================================="
echo ""
# Run checks
check_prerequisites
validate_target "$TARGET_PATH"
echo ""
# Perform uninstallation
remove_from_mcp_json "$PLUGIN_NAME" "$TARGET_PATH"
remove_from_claude_md "$PLUGIN_NAME" "$TARGET_PATH"
# Print summary
print_summary "$PLUGIN_NAME" "$TARGET_PATH"