feat(marketplace): hook migration, projman commands, optimizations [BREAKING]

Remove all SessionStart and PostToolUse hooks across the marketplace,
retaining only PreToolUse safety hooks and UserPromptSubmit quality hooks.
Add /project and /adr command families, /hygiene check, /cv status.
Create 7 new projman skills for project lifecycle management.
Remove /pm-debug, /suggest-version, /proposal-status commands.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 12:28:06 -05:00
parent 442ed63b4c
commit 560746b150
60 changed files with 1061 additions and 1828 deletions

View File

@@ -6,7 +6,7 @@
},
"metadata": {
"description": "Project management plugins with Gitea and NetBox integrations",
"version": "8.0.0"
"version": "8.1.0"
},
"plugins": [
{
@@ -20,9 +20,6 @@
},
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/projman/README.md",
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"hooks": [
"./hooks/hooks.json"
],
"category": "development",
"tags": [
"sprint",
@@ -44,9 +41,6 @@
},
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/doc-guardian/README.md",
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"hooks": [
"./hooks/hooks.json"
],
"category": "productivity",
"tags": [
"documentation",
@@ -90,9 +84,6 @@
},
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/project-hygiene/README.md",
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"hooks": [
"./hooks/hooks.json"
],
"category": "productivity",
"tags": [
"cleanup",
@@ -139,9 +130,6 @@
},
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/claude-config-maintainer/README.md",
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"hooks": [
"./hooks/hooks.json"
],
"category": "development",
"tags": [
"claude-md",
@@ -210,9 +198,6 @@
},
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/pr-review/README.md",
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"hooks": [
"./hooks/hooks.json"
],
"category": "development",
"tags": [
"code-review",
@@ -234,9 +219,6 @@
},
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/data-platform/README.md",
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"hooks": [
"./hooks/hooks.json"
],
"category": "data",
"tags": [
"pandas",
@@ -260,9 +242,6 @@
},
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/viz-platform/README.md",
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"hooks": [
"./hooks/hooks.json"
],
"category": "visualization",
"tags": [
"dash",
@@ -287,9 +266,6 @@
},
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/contract-validator/README.md",
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"hooks": [
"./hooks/hooks.json"
],
"category": "development",
"tags": [
"validation",

View File

@@ -8,6 +8,44 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
---
## [8.1.0] - 2026-02-06
### BREAKING CHANGES
#### Hook Migration (v8.1.0)
All `SessionStart` and `PostToolUse` hooks removed. Only `PreToolUse` safety hooks and `UserPromptSubmit` quality hooks remain. Plugins that relied on automatic startup checks or post-write automation must use manual commands instead.
### Added
- **projman:** 7 new skills — `source-analysis`, `project-charter`, `adr-conventions`, `epic-conventions`, `wbs`, `risk-register`, `sprint-roadmap`
- **projman:** `/project` command family — `initiation`, `plan`, `status`, `close` for full project lifecycle management
- **projman:** `/adr` command family — `create`, `list`, `update`, `supersede` for Architecture Decision Records
- **projman:** Expanded `wiki-conventions.md` with dependency headers, R&D notes, page naming patterns
- **projman:** Epic/* labels (5) and RnD/* labels (4) added to label taxonomy
- **project-hygiene:** `/hygiene check` manual command replacing PostToolUse hook
- **contract-validator:** `/cv status` marketplace-wide health check command
### Changed
- `verify-hooks.sh` rewritten to validate post-migration hook inventory (4 plugins, 5 hooks)
- `config-permissions-map.md` updated to reflect reduced hook inventory
- `settings-optimization.md` updated for current hook landscape
- `sprint-plan.md` no longer loads `token-budget-report.md` skill
- `sprint-close.md` loads `rfc-workflow.md` conditionally; manual CHANGELOG review replaces `/suggest-version`
- `planner.md` and `orchestrator.md` no longer reference domain consultation or domain gates
- Label taxonomy updated from 43 to 58 labels (added Status/4, Domain/2, Epic/5, RnD/4)
### Removed
- **hooks:** 8 hooks.json files deleted (projman, pr-review, doc-guardian, project-hygiene, claude-config-maintainer, viz-platform, data-platform, contract-validator SessionStart/PostToolUse hooks)
- **hooks:** Orphaned shell scripts deleted (startup-check.sh, notify.sh, cleanup.sh, enforce-rules.sh, schema-diff-check.sh, auto-validate.sh, breaking-change-check.sh)
- **projman:** `/pm-debug`, `/suggest-version`, `/proposal-status` commands deleted
- **projman:** `domain-consultation.md` skill deleted
- **cmdb-assistant:** SessionStart hook removed (PreToolUse hook retained)
---
## [8.0.0] - 2026-02-06
### BREAKING CHANGES

View File

@@ -146,7 +146,7 @@ When user says "fix the sprint-plan command", edit the SOURCE code.
## Project Overview
**Repository:** leo-claude-mktplace
**Version:** 7.0.0
**Version:** 8.1.0
**Status:** Production Ready
A plugin marketplace for Claude Code containing:
@@ -164,7 +164,7 @@ A plugin marketplace for Claude Code containing:
| `data-platform` | pandas, PostgreSQL, and dbt integration for data engineering | 1.3.0 |
| `viz-platform` | DMC validation, Plotly charts, and theming for dashboards | 1.1.0 |
| `contract-validator` | Cross-plugin compatibility validation and agent verification | 1.1.0 |
| `project-hygiene` | Post-task cleanup automation via hooks | 0.1.0 |
| `project-hygiene` | Project file organization and cleanup checks | 0.1.0 |
## Quick Start
@@ -183,13 +183,14 @@ A plugin marketplace for Claude Code containing:
| **Setup** | `/pm-setup` (modes: `--full`, `--quick`, `--sync`) |
| **Sprint** | `/sprint-plan`, `/sprint-start`, `/sprint-status` (with `--diagram`), `/sprint-close` |
| **Quality** | `/pm-review`, `/pm-test` (modes: `run`, `gen`) |
| **Versioning** | `/suggest-version` |
| **Project** | `/project initiation\|plan\|status\|close` |
| **ADR** | `/adr create\|list\|update\|supersede` |
| **PR Review** | `/pr-review`, `/pr-summary`, `/pr-findings`, `/pr-diff` |
| **Docs** | `/doc-audit`, `/doc-sync`, `/changelog-gen`, `/doc-coverage`, `/stale-docs` |
| **Security** | `/security-scan`, `/refactor`, `/refactor-dry` |
| **Config** | `/config-analyze`, `/config-optimize`, `/config-diff`, `/config-lint` |
| **Validation** | `/validate-contracts`, `/check-agent`, `/list-interfaces`, `/dependency-graph` |
| **Debug** | `/pm-debug` (modes: `report`, `review`) |
| **Validation** | `/validate-contracts`, `/check-agent`, `/list-interfaces`, `/dependency-graph`, `/cv status` |
| **Maintenance** | `/hygiene check` |
### Plugin Commands - NOT RELEVANT to This Project
@@ -217,10 +218,9 @@ leo-claude-mktplace/
├── plugins/
│ ├── projman/ # Sprint management
│ │ ├── .claude-plugin/plugin.json
│ │ ├── commands/ # 12 commands
│ │ ├── hooks/ # SessionStart: mismatch detection
│ │ ├── commands/ # 19 commands
│ │ ├── agents/ # 4 agents
│ │ └── skills/ # 17 reusable skill files
│ │ └── skills/ # 23 reusable skill files
│ ├── git-flow/ # Git workflow automation
│ │ ├── .claude-plugin/plugin.json
│ │ ├── commands/ # 8 commands
@@ -228,7 +228,6 @@ leo-claude-mktplace/
│ ├── pr-review/ # Multi-agent PR review
│ │ ├── .claude-plugin/plugin.json
│ │ ├── commands/ # 6 commands
│ │ ├── hooks/ # SessionStart mismatch detection
│ │ └── agents/ # 5 agents
│ ├── clarity-assist/ # Prompt optimization
│ │ ├── .claude-plugin/plugin.json
@@ -237,12 +236,10 @@ leo-claude-mktplace/
│ ├── data-platform/ # Data engineering
│ │ ├── .claude-plugin/plugin.json
│ │ ├── commands/ # 7 commands
│ │ ├── hooks/ # SessionStart PostgreSQL check
│ │ └── agents/ # 2 agents
│ ├── viz-platform/ # Visualization
│ │ ├── .claude-plugin/plugin.json
│ │ ├── commands/ # 7 commands
│ │ ├── hooks/ # SessionStart DMC check
│ │ └── agents/ # 3 agents
│ ├── doc-guardian/ # Documentation drift detection
│ ├── code-sentinel/ # Security scanning & refactoring
@@ -371,10 +368,10 @@ Wiki-based Request for Comments system for tracking feature ideas from proposal
## Label Taxonomy
43 labels total: 27 organization + 16 repository
58 labels total: 31 organization + 27 repository
**Organization:** Agent/2, Complexity/3, Efforts/5, Priority/4, Risk/3, Source/4, Type/6
**Repository:** Component/9, Tech/7
**Organization:** Agent/2, Complexity/3, Efforts/5, Priority/4, Risk/3, Source/4, Status/4, Type/6
**Repository:** Component/9, Tech/7, Domain/2, Epic/5, RnD/4
Sync with `/labels-sync` command.
@@ -468,12 +465,12 @@ See `docs/DEBUGGING-CHECKLIST.md` for systematic troubleshooting.
| Symptom | Likely Cause | Fix |
|---------|--------------|-----|
| "X MCP servers failed" | Missing venv in installed path | `cd ~/.claude/plugins/marketplaces/leo-claude-mktplace && ./scripts/setup.sh` |
| MCP tools not available | Venv missing or .mcp.json misconfigured | Run `/pm-debug report` to diagnose |
| MCP tools not available | Venv missing or .mcp.json misconfigured | Run `/cv status` to diagnose |
| Changes not taking effect | Editing source, not installed | Reinstall plugin or edit installed path |
**Debug Commands:**
- `/pm-debug report` - Run full diagnostics, create issue if needed
- `/pm-debug review` - Investigate and propose fixes
**Diagnostic Commands:**
- `/cv status` - Marketplace-wide health check (installation, MCP, configuration)
- `/hygiene check` - Project file organization and cleanup check
## Versioning Workflow
@@ -527,4 +524,4 @@ The script will:
---
**Last Updated:** 2026-02-03
**Last Updated:** 2026-02-06

View File

@@ -1,4 +1,4 @@
# Leo Claude Marketplace - v8.0.0
# Leo Claude Marketplace - v8.1.0
A collection of Claude Code plugins for project management, infrastructure automation, and development workflows.

View File

@@ -16,11 +16,15 @@ Quick reference for all commands in the Leo Claude Marketplace.
| **projman** | `/sprint-close` | | X | Complete sprint and capture lessons learned to Gitea Wiki |
| **projman** | `/labels-sync` | | X | Synchronize label taxonomy from Gitea |
| **projman** | `/pm-setup` | | X | Auto-detect mode or use `--full`, `--quick`, `--sync`, `--clear-cache` |
| **projman** | *SessionStart hook* | X | | Detects git remote vs .env mismatch, warns to run `/pm-setup --sync` |
| **projman** | `/pm-debug` | | X | Diagnostics (`/pm-debug report`) or investigate (`/pm-debug review`) |
| **projman** | `/suggest-version` | | X | Analyze CHANGELOG and recommend semantic version bump |
| **projman** | `/proposal-status` | | X | View proposal and implementation hierarchy with status |
| **projman** | `/rfc` | | X | RFC lifecycle management (`/rfc create\|list\|review\|approve\|reject`) |
| **projman** | `/project initiation` | | X | Source analysis + project charter creation |
| **projman** | `/project plan` | | X | WBS, risk register, and sprint roadmap |
| **projman** | `/project status` | | X | Full project hierarchy status view |
| **projman** | `/project close` | | X | Retrospective, lessons learned, and archive |
| **projman** | `/adr create` | | X | Create Architecture Decision Record in wiki |
| **projman** | `/adr list` | | X | List ADRs by status (accepted, proposed, deprecated) |
| **projman** | `/adr update` | | X | Update ADR content or transition status |
| **projman** | `/adr supersede` | | X | Supersede an ADR with a new one |
| **git-flow** | `/git-commit` | | X | Create commit with auto-generated conventional message |
| **git-flow** | `/git-commit-push` | | X | Commit and push to remote in one operation |
| **git-flow** | `/git-commit-merge` | | X | Commit current changes, then merge into target branch |
@@ -32,7 +36,6 @@ Quick reference for all commands in the Leo Claude Marketplace.
| **pr-review** | `/pr-setup` | | X | Setup wizard for pr-review (shares Gitea MCP with projman) |
| **pr-review** | `/project-init` | | X | Quick project setup for PR reviews |
| **pr-review** | `/project-sync` | | X | Sync config with git remote after repo move/rename |
| **pr-review** | *SessionStart hook* | X | | Detects git remote vs .env mismatch |
| **pr-review** | `/pr-review` | | X | Full multi-agent PR review with confidence scoring |
| **pr-review** | `/pr-summary` | | X | Quick summary of PR changes |
| **pr-review** | `/pr-findings` | | X | List and filter review findings by category/severity |
@@ -44,7 +47,6 @@ Quick reference for all commands in the Leo Claude Marketplace.
| **doc-guardian** | `/changelog-gen` | | X | Generate changelog from conventional commits |
| **doc-guardian** | `/doc-coverage` | | X | Documentation coverage metrics by function/class |
| **doc-guardian** | `/stale-docs` | | X | Flag documentation behind code changes |
| **doc-guardian** | *PostToolUse hook* | X | | Silently detects doc drift on Write/Edit |
| **code-sentinel** | `/security-scan` | | X | Full security audit (SQL injection, XSS, secrets, etc.) |
| **code-sentinel** | `/refactor` | | X | Apply refactoring patterns to improve code |
| **code-sentinel** | `/refactor-dry` | | X | Preview refactoring without applying changes |
@@ -68,7 +70,7 @@ Quick reference for all commands in the Leo Claude Marketplace.
| **cmdb-assistant** | `/cmdb-topology` | | X | Infrastructure topology diagrams (rack, network, site views) |
| **cmdb-assistant** | `/change-audit` | | X | NetBox audit trail queries with filtering |
| **cmdb-assistant** | `/ip-conflicts` | | X | Detect IP conflicts and overlapping prefixes |
| **project-hygiene** | *PostToolUse hook* | X | | Removes temp files, warns about unexpected root files |
| **project-hygiene** | `/hygiene check` | | X | Project file organization and cleanup check |
| **data-platform** | `/data-ingest` | | X | Load data from CSV, Parquet, JSON into DataFrame |
| **data-platform** | `/data-profile` | | X | Generate data profiling report with statistics |
| **data-platform** | `/data-schema` | | X | Explore database schemas, tables, columns |
@@ -79,7 +81,6 @@ Quick reference for all commands in the Leo Claude Marketplace.
| **data-platform** | `/dbt-test` | | X | Formatted dbt test runner with summary and failure details |
| **data-platform** | `/data-quality` | | X | DataFrame quality checks (nulls, duplicates, types, outliers) |
| **data-platform** | `/data-setup` | | X | Setup wizard for data-platform MCP servers |
| **data-platform** | *SessionStart hook* | X | | Checks PostgreSQL connection (non-blocking warning) |
| **viz-platform** | `/viz-setup` | | X | Setup wizard for viz-platform MCP server |
| **viz-platform** | `/viz-chart` | | X | Create Plotly charts with theme integration |
| **viz-platform** | `/viz-dashboard` | | X | Create dashboard layouts with filters and grids |
@@ -92,7 +93,6 @@ Quick reference for all commands in the Leo Claude Marketplace.
| **viz-platform** | `/viz-breakpoints` | | X | Configure responsive layout breakpoints |
| **viz-platform** | `/design-review` | | X | Detailed design system audits |
| **viz-platform** | `/design-gate` | | X | Binary pass/fail design system validation gates |
| **viz-platform** | *SessionStart hook* | X | | Checks DMC version (non-blocking warning) |
| **data-platform** | `/data-review` | | X | Comprehensive data integrity audits |
| **data-platform** | `/data-gate` | | X | Binary pass/fail data integrity gates |
| **contract-validator** | `/validate-contracts` | | X | Full marketplace compatibility validation |
@@ -100,6 +100,7 @@ Quick reference for all commands in the Leo Claude Marketplace.
| **contract-validator** | `/list-interfaces` | | X | Show all plugin interfaces |
| **contract-validator** | `/dependency-graph` | | X | Mermaid visualization of plugin dependencies |
| **contract-validator** | `/cv-setup` | | X | Setup wizard for contract-validator MCP |
| **contract-validator** | `/cv status` | | X | Marketplace-wide health check (installation, MCP, configuration) |
---
@@ -116,7 +117,7 @@ Quick reference for all commands in the Leo Claude Marketplace.
| **Data Engineering** | data-platform | pandas, PostgreSQL, dbt operations |
| **Visualization** | viz-platform | DMC validation, Plotly charts, theming |
| **Validation** | contract-validator | Cross-plugin compatibility checks |
| **Maintenance** | project-hygiene | Automatic cleanup |
| **Maintenance** | project-hygiene | Manual cleanup via `/hygiene check` |
---
@@ -124,13 +125,10 @@ Quick reference for all commands in the Leo Claude Marketplace.
| Plugin | Hook Event | Behavior |
|--------|------------|----------|
| **projman** | SessionStart | Checks git remote vs .env; warns if mismatch detected; suggests sprint planning if issues exist |
| **pr-review** | SessionStart | Checks git remote vs .env; warns if mismatch detected |
| **doc-guardian** | PostToolUse (Write/Edit) | Tracks documentation drift; auto-updates dependent docs |
| **code-sentinel** | PreToolUse (Write/Edit) | Scans for security issues; blocks critical vulnerabilities |
| **project-hygiene** | PostToolUse (Write/Edit) | Cleans temp files, warns about misplaced files |
| **data-platform** | SessionStart | Checks PostgreSQL connection; non-blocking warning if unavailable |
| **viz-platform** | SessionStart | Checks DMC version; non-blocking warning if mismatch detected |
| **code-sentinel** | PreToolUse (Write/Edit/MultiEdit) | Scans code before writing; blocks critical security issues |
| **git-flow** | PreToolUse (Bash) | Validates branch naming and commit message conventions |
| **cmdb-assistant** | PreToolUse (MCP create/update) | Validates input data before NetBox writes |
| **clarity-assist** | UserPromptSubmit | Detects vague prompts and suggests clarification |
---
@@ -271,7 +269,7 @@ Adding a new project when system config exists:
## Quick Tips
- **Hooks run automatically** - doc-guardian and code-sentinel protect you without manual invocation
- **Hooks run automatically** - code-sentinel and git-flow protect you without manual invocation
- **Use `/git-commit` over `git commit`** - generates better commit messages following conventions
- **Run `/pm-review` before `/sprint-close`** - catches issues before closing the sprint
- **Use `/clarify` for vague requests** - especially helpful for complex requirements
@@ -296,4 +294,4 @@ Ensure credentials are configured in `~/.config/claude/gitea.env`, `~/.config/cl
---
*Last Updated: 2026-02-02*
*Last Updated: 2026-02-06*

View File

@@ -279,8 +279,8 @@ Error: Could not find a suitable TLS CA certificate bundle, invalid path:
Use these commands for automated checking:
- `/pm-debug report` - Run full diagnostics, create issue if problems found
- `/pm-debug review` - Investigate existing diagnostic issues and propose fixes
- `/cv status` - Marketplace-wide health check (installation, MCP, configuration)
- `/hygiene check` - Project file organization and cleanup check
---

View File

@@ -38,15 +38,13 @@ Read all plugin hooks from the marketplace:
```
plugins/code-sentinel/hooks/hooks.json
plugins/doc-guardian/hooks/hooks.json
plugins/project-hygiene/hooks/hooks.json
plugins/data-platform/hooks/hooks.json
plugins/contract-validator/hooks/hooks.json
plugins/git-flow/hooks/hooks.json
plugins/cmdb-assistant/hooks/hooks.json
plugins/clarity-assist/hooks/hooks.json
```
For each hook, extract:
- Event type (PreToolUse, PostToolUse, SessionStart, etc.)
- Event type (PreToolUse, UserPromptSubmit)
- Tool matchers (Write, Edit, MultiEdit, Bash patterns)
- Hook command/script
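A minimal extraction sketch, assuming `jq` is available, the marketplace root is the working directory, and the nested `matcher`/`hooks` layout used by the remaining hooks.json files (not part of the skill itself):
```
#!/usr/bin/env bash
# Sketch: list plugin, event type, matcher, and command for every remaining hook.
# Assumes the nested {matcher, hooks: [{type, command}]} layout; flat entries are skipped.
set -euo pipefail

for hooks_file in plugins/*/hooks/hooks.json; do
  plugin=$(basename "$(dirname "$(dirname "$hooks_file")")")
  jq -r --arg plugin "$plugin" '
    .hooks
    | to_entries[]
    | .key as $event
    | .value[]
    | select(has("hooks"))
    | [$plugin, $event, (.matcher // "(all)"), .hooks[0].command]
    | @tsv
  ' "$hooks_file"
done
```
The TSV output feeds directly into the coverage mapping in the next step.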
@@ -54,12 +52,13 @@ For each hook, extract:
Create a mapping of which review layers cover which operations:
| Operation | PreToolUse Hooks | PostToolUse Hooks | Other Gates |
|-----------|------------------|-------------------|-------------|
| Write | code-sentinel | doc-guardian, project-hygiene | PR review |
| Edit | code-sentinel | doc-guardian, project-hygiene | PR review |
| MultiEdit | code-sentinel | doc-guardian | PR review |
| Bash(git *) | git-flow | — | — |
| Operation | PreToolUse Hooks | Other Gates |
|-----------|------------------|-------------|
| Write | code-sentinel | PR review |
| Edit | code-sentinel | PR review |
| MultiEdit | code-sentinel | PR review |
| Bash(git *) | git-flow | — |
| MCP(netbox create/update) | cmdb-assistant | — |
### Step 3: Read Current Permissions
@@ -94,13 +93,7 @@ flowchart LR
direction TB
CS[code-sentinel<br/>Security Scan]
GF[git-flow<br/>Branch Check]
end
subgraph post[PostToolUse Hooks]
direction TB
DG[doc-guardian<br/>Drift Detection]
PH[project-hygiene<br/>Cleanup]
DP[data-platform<br/>Schema Diff]
CA[clarity-assist<br/>Prompt Quality]
end
subgraph perm[Permission Status]
@@ -111,26 +104,22 @@ flowchart LR
end
W -->|intercepted| CS
W -->|tracked| DG
E -->|intercepted| CS
E -->|tracked| DG
BG -->|checked| GF
CS -->|passed| AA
DG -->|logged| AA
GF -->|valid| AA
BO -->|no hook| PR
classDef preHook fill:#e3f2fd,stroke:#1976d2
classDef postHook fill:#e8f5e9,stroke:#388e3c
classDef sprint fill:#fff3e0,stroke:#f57c00
classDef quality fill:#fff3e0,stroke:#f57c00
classDef prReview fill:#f3e5f5,stroke:#7b1fa2
classDef allowed fill:#c8e6c9,stroke:#2e7d32
classDef prompted fill:#fff9c4,stroke:#f9a825
classDef denied fill:#ffcdd2,stroke:#c62828
class CS,GF preHook
class DG,PH,DP postHook
class CA quality
class AA allowed
class PR prompted
class DN denied
@@ -195,11 +184,10 @@ Review Layer Status
PreToolUse Hooks (intercept before operation):
✓ code-sentinel — Write, Edit, MultiEdit
✓ git-flow — Bash(git checkout *), Bash(git commit *)
✓ cmdb-assistant — MCP(netbox create/update)
PostToolUse Hooks (track after operation):
doc-guardian — Write, Edit, MultiEdit
✓ project-hygiene — Write, Edit
✗ data-platform — not detected
UserPromptSubmit Hooks (check prompt quality):
clarity-assist — vagueness detection
Other Review Gates:
✓ Sprint Approval (projman milestone workflow)
@@ -241,7 +229,6 @@ To view:
| Element | Color | Hex |
|---------|-------|-----|
| PreToolUse hooks | Blue | #e3f2fd |
| PostToolUse hooks | Green | #e8f5e9 |
| Sprint/Planning gates | Amber | #fff3e0 |
| PR Review | Purple | #f3e5f5 |
| Auto-allowed | Light green | #c8e6c9 |

View File

@@ -1,68 +0,0 @@
#!/bin/bash
# claude-config-maintainer: enforce mandatory behavior rules
# Checks if CLAUDE.md has the rules, adds them if missing
PREFIX="[claude-config-maintainer]"
# Find CLAUDE.md in current directory or parent
CLAUDE_MD=""
if [ -f "./CLAUDE.md" ]; then
CLAUDE_MD="./CLAUDE.md"
elif [ -f "../CLAUDE.md" ]; then
CLAUDE_MD="../CLAUDE.md"
fi
# If no CLAUDE.md found, exit silently
if [ -z "$CLAUDE_MD" ]; then
exit 0
fi
# Check if mandatory rules exist
if grep -q "MANDATORY BEHAVIOR RULES" "$CLAUDE_MD" 2>/dev/null; then
# Rules exist, all good
exit 0
fi
# Rules missing - add them
RULES='## ⛔ MANDATORY BEHAVIOR RULES - READ FIRST
**These rules are NON-NEGOTIABLE. Violating them wastes the user'\''s time and money.**
### 1. WHEN USER ASKS YOU TO CHECK SOMETHING - CHECK EVERYTHING
- Search ALL locations, not just where you think it is
- Check cache directories: `~/.claude/plugins/cache/`
- Check installed: `~/.claude/plugins/marketplaces/`
- Check source directories
- **NEVER say "no" or "that'\''s not the issue" without exhaustive verification**
### 2. WHEN USER SAYS SOMETHING IS WRONG - BELIEVE THEM
- The user knows their system better than you
- Investigate thoroughly before disagreeing
- **Your confidence is often wrong. User'\''s instincts are often right.**
### 3. NEVER SAY "DONE" WITHOUT VERIFICATION
- Run the actual command/script to verify
- Show the output to the user
- **"Done" means VERIFIED WORKING, not "I made changes"**
### 4. SHOW EXACTLY WHAT USER ASKS FOR
- If user asks for messages, show the MESSAGES
- If user asks for code, show the CODE
- **Do not interpret or summarize unless asked**
**FAILURE TO FOLLOW THESE RULES = WASTED USER TIME = UNACCEPTABLE**
---
'
# Create temp file with rules + existing content
{
head -1 "$CLAUDE_MD"
echo ""
echo "$RULES"
tail -n +2 "$CLAUDE_MD"
} > "${CLAUDE_MD}.tmp"
mv "${CLAUDE_MD}.tmp" "$CLAUDE_MD"
echo "$PREFIX Added mandatory behavior rules to CLAUDE.md"

View File

@@ -1,15 +0,0 @@
{
"hooks": {
"SessionStart": [
{
"matcher": "",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/enforce-rules.sh"
}
]
}
]
}
}

View File

@@ -119,14 +119,14 @@ This is the key section. Map upstream review processes to directory scopes:
| Directory Scope | Active Review Layers | Auto-Allow Recommendation |
|----------------|---------------------|---------------------------|
| `plugins/*/commands/*.md` | Sprint approval, PR review, doc-guardian PostToolUse | `Write(plugins/*/commands/**)` — 3 layers cover this |
| `plugins/*/commands/*.md` | Sprint approval, PR review | `Write(plugins/*/commands/**)` — 2 layers cover this |
| `plugins/*/skills/*.md` | Sprint approval, PR review | `Write(plugins/*/skills/**)` — 2 layers |
| `plugins/*/agents/*.md` | Sprint approval, PR review, contract-validator | `Write(plugins/*/agents/**)` — 3 layers |
| `plugins/*/agents/*.md` | Sprint approval, PR review | `Write(plugins/*/agents/**)` — 2 layers |
| `mcp-servers/*/mcp_server/*.py` | Code-sentinel PreToolUse, sprint approval, PR review | `Write(mcp-servers/**)` + `Edit(mcp-servers/**)` — sentinel catches secrets |
| `docs/*.md` | Doc-guardian PostToolUse, PR review | `Write(docs/**)` + `Edit(docs/**)` |
| `docs/*.md` | PR review | `Write(docs/**)` + `Edit(docs/**)` — with caution flag |
| `.claude-plugin/*.json` | validate-marketplace.sh, PR review | `Write(.claude-plugin/**)` |
| `scripts/*.sh` | Code-sentinel, PR review | `Write(scripts/**)` — with caution flag |
| `CLAUDE.md`, `CHANGELOG.md`, `README.md` | Doc-guardian, PR review | `Write(CLAUDE.md)`, `Write(CHANGELOG.md)`, `Write(README.md)` |
| `CLAUDE.md`, `CHANGELOG.md`, `README.md` | PR review | `Write(CLAUDE.md)`, `Write(CHANGELOG.md)`, `Write(README.md)` |
### Critical Rule: Hook Verification
@@ -134,10 +134,11 @@ This is the key section. Map upstream review processes to directory scopes:
Read the relevant `plugins/*/hooks/hooks.json` file:
- If code-sentinel's hook is missing or disabled, do NOT recommend auto-allowing `mcp-servers/**` writes
- If doc-guardian's hook is missing, do NOT recommend auto-allowing `docs/**` without caution
- If git-flow's hook is missing, do NOT recommend auto-allowing `Bash(git *)` operations
- If cmdb-assistant's hook is missing, do NOT recommend auto-allowing MCP netbox create/update operations
- Count the number of verified review layers before making recommendations
**Minimum threshold:** Recommend auto-allow only for scopes covered by ≥2 verified review layers.
**Minimum threshold:** Only recommend auto-allow for scopes with ≥2 verified review layers.
---
@@ -333,10 +334,9 @@ To verify which review layers are active, read these files:
| File | Hook Type | Tool Matcher | Purpose |
|------|-----------|--------------|---------|
| `plugins/code-sentinel/hooks/hooks.json` | PreToolUse | Write\|Edit\|MultiEdit | Blocks hardcoded secrets |
| `plugins/doc-guardian/hooks/hooks.json` | PostToolUse | Write\|Edit\|MultiEdit | Tracks documentation drift |
| `plugins/project-hygiene/hooks/hooks.json` | PostToolUse | Write\|Edit | Cleanup tracking |
| `plugins/data-platform/hooks/hooks.json` | PostToolUse | Edit\|Write | Schema diff detection |
| `plugins/cmdb-assistant/hooks/hooks.json` | PreToolUse | (if exists) | Input validation |
| `plugins/git-flow/hooks/hooks.json` | PreToolUse | Bash | Branch naming + commit format |
| `plugins/cmdb-assistant/hooks/hooks.json` | PreToolUse | MCP create/update | NetBox input validation |
| `plugins/clarity-assist/hooks/hooks.json` | UserPromptSubmit | (all prompts) | Vagueness detection |
### Verification Process
@@ -370,8 +370,8 @@ Count verified review layers for each scope:
|-------|-------------|
| Sprint approval | Check if projman plugin is installed (milestone workflow) |
| PR review | Check if pr-review plugin is installed |
| code-sentinel PreToolUse | hooks.json exists with PreToolUse on Write/Edit |
| doc-guardian PostToolUse | hooks.json exists with PostToolUse on Write/Edit |
| contract-validator | Plugin installed + hooks present |
| code-sentinel PreToolUse | hooks.json exists with PreToolUse on Write/Edit/MultiEdit |
| git-flow PreToolUse | hooks.json exists with PreToolUse on Bash |
| cmdb-assistant PreToolUse | hooks.json exists with PreToolUse on MCP create/update |
**Recommendation threshold:** Only recommend auto-allow for scopes with ≥2 verified layers.
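For illustration only, a counting sketch for one scope, assuming `jq` is available and the marketplace root is the working directory; the layer checks mirror the table above:
```
#!/usr/bin/env bash
# Sketch: count verified review layers for the mcp-servers/** scope before recommending auto-allow.
set -euo pipefail

layers=0

# Sprint approval: projman plugin installed (milestone workflow)
[[ -f plugins/projman/.claude-plugin/plugin.json ]] && layers=$((layers + 1))

# PR review plugin installed
[[ -f plugins/pr-review/.claude-plugin/plugin.json ]] && layers=$((layers + 1))

# code-sentinel PreToolUse hook present on Write/Edit/MultiEdit
if jq -e '.hooks.PreToolUse[]? | select(.matcher | test("Write"))' \
      plugins/code-sentinel/hooks/hooks.json >/dev/null 2>&1; then
  layers=$((layers + 1))
fi

if (( layers >= 2 )); then
  echo "mcp-servers/**: $layers verified layers, auto-allow may be recommended"
else
  echo "mcp-servers/**: only $layers verified layer(s), do not auto-allow"
fi
```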

View File

@@ -1,16 +1,5 @@
{
"hooks": {
"SessionStart": [
{
"matcher": "",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/startup-check.sh"
}
]
}
],
"PreToolUse": [
{
"matcher": "mcp__plugin_cmdb-assistant_netbox__virt_create|mcp__plugin_cmdb-assistant_netbox__virt_update|mcp__plugin_cmdb-assistant_netbox__dcim_create|mcp__plugin_cmdb-assistant_netbox__dcim_update",

View File

@@ -1,83 +0,0 @@
#!/bin/bash
# cmdb-assistant SessionStart hook
# Tests NetBox API connectivity and checks for data quality issues
# All output MUST have [cmdb-assistant] prefix
# Non-blocking: always exits 0
set -euo pipefail
PREFIX="[cmdb-assistant]"
# Load NetBox configuration
NETBOX_CONFIG="$HOME/.config/claude/netbox.env"
if [[ ! -f "$NETBOX_CONFIG" ]]; then
echo "$PREFIX NetBox not configured - run /cmdb-assistant:initial-setup"
exit 0
fi
# Source config
source "$NETBOX_CONFIG"
# Validate required variables
if [[ -z "${NETBOX_API_URL:-}" ]] || [[ -z "${NETBOX_API_TOKEN:-}" ]]; then
echo "$PREFIX Missing NETBOX_API_URL or NETBOX_API_TOKEN in config"
exit 0
fi
# Helper function to make authenticated API calls
# Token passed via curl config to avoid exposure in process listings
netbox_curl() {
local url="$1"
curl -s -K - <<EOF 2>/dev/null
-H "Authorization: Token ${NETBOX_API_TOKEN}"
-H "Accept: application/json"
url = "${url}"
EOF
}
# Quick API connectivity test (5s timeout)
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" -m 5 -K - <<EOF 2>/dev/null || echo "000"
-H "Authorization: Token ${NETBOX_API_TOKEN}"
-H "Accept: application/json"
url = "${NETBOX_API_URL}/"
EOF
)
if [[ "$HTTP_CODE" == "000" ]]; then
echo "$PREFIX NetBox API unreachable (timeout/connection error)"
exit 0
elif [[ "$HTTP_CODE" != "200" ]]; then
echo "$PREFIX NetBox API returned HTTP $HTTP_CODE - check credentials"
exit 0
fi
# Check for VMs without site assignment (data quality)
VMS_RESPONSE=$(curl -s -m 5 -K - <<EOF 2>/dev/null || echo '{"count":0}'
-H "Authorization: Token ${NETBOX_API_TOKEN}"
-H "Accept: application/json"
url = "${NETBOX_API_URL}/virtualization/virtual-machines/?site__isnull=true&limit=1"
EOF
)
VMS_NO_SITE=$(echo "$VMS_RESPONSE" | grep -o '"count":[0-9]*' | cut -d: -f2 || echo "0")
if [[ "$VMS_NO_SITE" -gt 0 ]]; then
echo "$PREFIX $VMS_NO_SITE VMs without site assignment - run /cmdb-audit for details"
fi
# Check for devices without platform
DEVICES_RESPONSE=$(curl -s -m 5 -K - <<EOF 2>/dev/null || echo '{"count":0}'
-H "Authorization: Token ${NETBOX_API_TOKEN}"
-H "Accept: application/json"
url = "${NETBOX_API_URL}/dcim/devices/?platform__isnull=true&limit=1"
EOF
)
DEVICES_NO_PLATFORM=$(echo "$DEVICES_RESPONSE" | grep -o '"count":[0-9]*' | cut -d: -f2 || echo "0")
if [[ "$DEVICES_NO_PLATFORM" -gt 0 ]]; then
echo "$PREFIX $DEVICES_NO_PLATFORM devices without platform - consider updating"
fi
exit 0

View File

@@ -0,0 +1,47 @@
---
description: Marketplace-wide health check across all installed plugins
---
# /cv status
## Purpose
Quick health check across all installed plugins. For each marketplace plugin, it reports installation state, MCP connectivity, and configuration status.
## Usage
```
/cv status # Full status table
/cv status --plugin projman # Single plugin check
```
## Workflow
### Step 1: Enumerate Plugins
Read `.claude-plugin/marketplace.json` to get the full plugin list.
### Step 2: Check Each Plugin
For each plugin, verify:
- **Installed:** `plugin.json` exists and is valid JSON
- **MCP Connected:** If plugin has MCP servers (check `metadata.json`), verify server is responding
- **Configured:** Required config files present
- **Version:** Read from `plugin.json`
- **Domain:** Read from `plugin.json`
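A minimal sketch of Steps 1 and 2 for the generic checks, assuming `jq`, the marketplace root as the working directory, and a `name` field on each marketplace entry; MCP and configuration probes are plugin-specific and omitted:
```
#!/usr/bin/env bash
# Sketch: enumerate plugins, verify each has a valid plugin.json, then print version and domain.
set -euo pipefail

jq -r '.plugins[].name' .claude-plugin/marketplace.json | while read -r name; do
  manifest="plugins/$name/.claude-plugin/plugin.json"
  if jq -e . "$manifest" >/dev/null 2>&1; then
    version=$(jq -r '.version // "?"' "$manifest")
    domain=$(jq -r '.domain // "-"' "$manifest")
    printf '%-26s %-6s %-8s Y\n' "$name" "$domain" "$version"
  else
    printf '%-26s %-6s %-8s N (missing or invalid plugin.json)\n' "$name" "-" "-"
  fi
done
```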
### Step 3: Display Results
```
| Plugin | Domain | Version | Installed | MCP | Configured |
|--------------------------|--------|---------|-----------|-----|------------|
| projman | core | 3.4.0 | Y | Y | Y |
| git-flow | core | 1.2.0 | Y | - | Y |
| cmdb-assistant | ops | 1.2.0 | Y | N | N |
Summary: 12/12 installed, 4/5 MCP connected, 11/12 configured
```
## Notes
- MCP column shows `-` for plugins without MCP servers
- `N` in MCP means the server is defined but not responding
- `N` in Configured means the setup check found issues

View File

@@ -1,195 +0,0 @@
#!/bin/bash
# contract-validator SessionStart auto-validate hook
# Validates plugin contracts only when plugin files have changed since last check
# All output MUST have [contract-validator] prefix
PREFIX="[contract-validator]"
# ============================================================================
# Configuration
# ============================================================================
# Enable/disable auto-check (default: true)
AUTO_CHECK="${CONTRACT_VALIDATOR_AUTO_CHECK:-true}"
# Cache location for storing last check hash
CACHE_DIR="$HOME/.cache/claude-plugins/contract-validator"
HASH_FILE="$CACHE_DIR/last-check.hash"
# Marketplace location (installed plugins)
MARKETPLACE_PATH="$HOME/.claude/plugins/marketplaces/leo-claude-mktplace"
# ============================================================================
# Early exit if disabled
# ============================================================================
if [[ "$AUTO_CHECK" != "true" ]]; then
exit 0
fi
# ============================================================================
# Smart mode: Check if plugin files have changed
# ============================================================================
# Function to compute hash of all plugin manifest files
compute_plugin_hash() {
local hash_input=""
if [[ -d "$MARKETPLACE_PATH/plugins" ]]; then
# Hash all plugin.json, hooks.json, and agent files
while IFS= read -r -d '' file; do
if [[ -f "$file" ]]; then
hash_input+="$(md5sum "$file" 2>/dev/null | cut -d' ' -f1)"
fi
done < <(find "$MARKETPLACE_PATH/plugins" \
\( -name "plugin.json" -o -name "hooks.json" -o -name "*.md" -path "*/agents/*" \) \
-print0 2>/dev/null | sort -z)
fi
# Also include marketplace.json
if [[ -f "$MARKETPLACE_PATH/.claude-plugin/marketplace.json" ]]; then
hash_input+="$(md5sum "$MARKETPLACE_PATH/.claude-plugin/marketplace.json" 2>/dev/null | cut -d' ' -f1)"
fi
# Compute final hash
echo "$hash_input" | md5sum | cut -d' ' -f1
}
# Ensure cache directory exists
mkdir -p "$CACHE_DIR" 2>/dev/null
# Compute current hash
CURRENT_HASH=$(compute_plugin_hash)
# Check if we have a previous hash
if [[ -f "$HASH_FILE" ]]; then
PREVIOUS_HASH=$(cat "$HASH_FILE" 2>/dev/null)
# If hashes match, no changes - skip validation
if [[ "$CURRENT_HASH" == "$PREVIOUS_HASH" ]]; then
exit 0
fi
fi
# ============================================================================
# Run validation (hashes differ or no cache)
# ============================================================================
ISSUES_FOUND=0
WARNINGS=""
# Function to add warning
add_warning() {
WARNINGS+=" - $1"$'\n'
((ISSUES_FOUND++))
}
# 1. Check all installed plugins have valid plugin.json
if [[ -d "$MARKETPLACE_PATH/plugins" ]]; then
for plugin_dir in "$MARKETPLACE_PATH/plugins"/*/; do
if [[ -d "$plugin_dir" ]]; then
plugin_name=$(basename "$plugin_dir")
plugin_json="$plugin_dir/.claude-plugin/plugin.json"
if [[ ! -f "$plugin_json" ]]; then
add_warning "$plugin_name: missing .claude-plugin/plugin.json"
continue
fi
# Basic JSON validation
if ! python3 -c "import json; json.load(open('$plugin_json'))" 2>/dev/null; then
add_warning "$plugin_name: invalid JSON in plugin.json"
continue
fi
# Check required fields
if ! python3 -c "
import json
with open('$plugin_json') as f:
data = json.load(f)
required = ['name', 'version', 'description']
missing = [r for r in required if r not in data]
if missing:
exit(1)
" 2>/dev/null; then
add_warning "$plugin_name: plugin.json missing required fields"
fi
fi
done
fi
# 2. Check hooks.json files are properly formatted
if [[ -d "$MARKETPLACE_PATH/plugins" ]]; then
while IFS= read -r -d '' hooks_file; do
plugin_name=$(basename "$(dirname "$(dirname "$hooks_file")")")
# Validate JSON
if ! python3 -c "import json; json.load(open('$hooks_file'))" 2>/dev/null; then
add_warning "$plugin_name: invalid JSON in hooks/hooks.json"
continue
fi
# Validate hook structure
if ! python3 -c "
import json
with open('$hooks_file') as f:
data = json.load(f)
if 'hooks' not in data:
exit(1)
valid_events = ['PreToolUse', 'PostToolUse', 'UserPromptSubmit', 'SessionStart', 'SessionEnd', 'Notification', 'Stop', 'SubagentStop', 'PreCompact']
for event in data['hooks']:
if event not in valid_events:
exit(1)
for hook in data['hooks'][event]:
# Support both flat structure (type at top) and nested structure (matcher + hooks array)
if 'type' in hook:
# Flat structure: {type: 'command', command: '...'}
pass
elif 'matcher' in hook and 'hooks' in hook:
# Nested structure: {matcher: '...', hooks: [{type: 'command', ...}]}
for nested_hook in hook['hooks']:
if 'type' not in nested_hook:
exit(1)
else:
exit(1)
" 2>/dev/null; then
add_warning "$plugin_name: hooks.json has invalid structure or events"
fi
done < <(find "$MARKETPLACE_PATH/plugins" -path "*/hooks/hooks.json" -print0 2>/dev/null)
fi
# 3. Check agent references are valid (agent files exist and are markdown)
if [[ -d "$MARKETPLACE_PATH/plugins" ]]; then
while IFS= read -r -d '' agent_file; do
plugin_name=$(basename "$(dirname "$(dirname "$agent_file")")")
agent_name=$(basename "$agent_file")
# Check file is not empty
if [[ ! -s "$agent_file" ]]; then
add_warning "$plugin_name: empty agent file $agent_name"
continue
fi
# Check file has markdown content (at least a header)
if ! grep -q '^#' "$agent_file" 2>/dev/null; then
add_warning "$plugin_name: agent $agent_name missing markdown header"
fi
done < <(find "$MARKETPLACE_PATH/plugins" -path "*/agents/*.md" -print0 2>/dev/null)
fi
# ============================================================================
# Store new hash and report results
# ============================================================================
# Always store the new hash (even if issues found - we don't want to recheck)
echo "$CURRENT_HASH" > "$HASH_FILE"
# Report any issues found (non-blocking warning)
if [[ $ISSUES_FOUND -gt 0 ]]; then
echo "$PREFIX Plugin contract validation found $ISSUES_FOUND issue(s):"
echo "$WARNINGS"
echo "$PREFIX Run /validate-contracts for full details"
fi
# Always exit 0 (non-blocking)
exit 0

View File

@@ -1,174 +0,0 @@
#!/bin/bash
# contract-validator breaking change detection hook
# Warns when plugin interface changes might break consumers
# This is a PostToolUse hook - non-blocking, warnings only
PREFIX="[contract-validator]"
# Check if warnings are enabled (default: true)
if [[ "${CONTRACT_VALIDATOR_BREAKING_WARN:-true}" != "true" ]]; then
exit 0
fi
# Read tool input from stdin
INPUT=$(cat)
# Extract file_path from JSON input
FILE_PATH=$(echo "$INPUT" | grep -o '"file_path"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1 | sed 's/.*"file_path"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/')
# If no file_path found, exit silently
if [ -z "$FILE_PATH" ]; then
exit 0
fi
# Check if file is a plugin interface file
is_interface_file() {
local file="$1"
case "$file" in
*/plugin.json) return 0 ;;
*/.claude-plugin/plugin.json) return 0 ;;
*/hooks.json) return 0 ;;
*/hooks/hooks.json) return 0 ;;
*/.mcp.json) return 0 ;;
*/agents/*.md) return 0 ;;
*/commands/*.md) return 0 ;;
*/skills/*.md) return 0 ;;
esac
return 1
}
# Exit if not an interface file
if ! is_interface_file "$FILE_PATH"; then
exit 0
fi
# Check if file exists and is in a git repo
if [[ ! -f "$FILE_PATH" ]]; then
exit 0
fi
# Get the directory containing the file
FILE_DIR=$(dirname "$FILE_PATH")
FILE_NAME=$(basename "$FILE_PATH")
# Try to get the previous version from git
cd "$FILE_DIR" 2>/dev/null || exit 0
# Check if we're in a git repo
if ! git rev-parse --git-dir > /dev/null 2>&1; then
exit 0
fi
# Get previous version (HEAD version before current changes)
PREV_CONTENT=$(git show HEAD:"$FILE_PATH" 2>/dev/null || echo "")
# If no previous version, this is a new file - no breaking changes possible
if [ -z "$PREV_CONTENT" ]; then
exit 0
fi
# Read current content
CURR_CONTENT=$(cat "$FILE_PATH" 2>/dev/null || echo "")
if [ -z "$CURR_CONTENT" ]; then
exit 0
fi
BREAKING_CHANGES=()
# Detect breaking changes based on file type
case "$FILE_PATH" in
*/plugin.json|*/.claude-plugin/plugin.json)
# Check for removed or renamed fields in plugin.json
# Check if name changed
PREV_NAME=$(echo "$PREV_CONTENT" | grep -o '"name"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1)
CURR_NAME=$(echo "$CURR_CONTENT" | grep -o '"name"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1)
if [ -n "$PREV_NAME" ] && [ "$PREV_NAME" != "$CURR_NAME" ]; then
BREAKING_CHANGES+=("Plugin name changed - consumers may need updates")
fi
# Check if version had major bump (semantic versioning)
PREV_VER=$(echo "$PREV_CONTENT" | grep -o '"version"[[:space:]]*:[[:space:]]*"[^"]*"' | sed 's/.*"\([0-9]*\)\..*/\1/')
CURR_VER=$(echo "$CURR_CONTENT" | grep -o '"version"[[:space:]]*:[[:space:]]*"[^"]*"' | sed 's/.*"\([0-9]*\)\..*/\1/')
if [ -n "$PREV_VER" ] && [ -n "$CURR_VER" ] && [ "$CURR_VER" -gt "$PREV_VER" ] 2>/dev/null; then
BREAKING_CHANGES+=("Major version bump detected - verify breaking changes documented")
fi
;;
*/hooks.json|*/hooks/hooks.json)
# Check for removed hook events
PREV_EVENTS=$(echo "$PREV_CONTENT" | grep -oE '"(PreToolUse|PostToolUse|UserPromptSubmit|SessionStart|SessionEnd|Notification|Stop|SubagentStop|PreCompact)"' | sort -u)
CURR_EVENTS=$(echo "$CURR_CONTENT" | grep -oE '"(PreToolUse|PostToolUse|UserPromptSubmit|SessionStart|SessionEnd|Notification|Stop|SubagentStop|PreCompact)"' | sort -u)
# Find removed events
REMOVED_EVENTS=$(comm -23 <(echo "$PREV_EVENTS") <(echo "$CURR_EVENTS") 2>/dev/null)
if [ -n "$REMOVED_EVENTS" ]; then
BREAKING_CHANGES+=("Hook events removed: $(echo $REMOVED_EVENTS | tr '\n' ' ')")
fi
# Check for changed matchers
PREV_MATCHERS=$(echo "$PREV_CONTENT" | grep -o '"matcher"[[:space:]]*:[[:space:]]*"[^"]*"' | sort -u)
CURR_MATCHERS=$(echo "$CURR_CONTENT" | grep -o '"matcher"[[:space:]]*:[[:space:]]*"[^"]*"' | sort -u)
if [ "$PREV_MATCHERS" != "$CURR_MATCHERS" ]; then
BREAKING_CHANGES+=("Hook matchers changed - verify tool coverage")
fi
;;
*/.mcp.json)
# Check for removed MCP servers
PREV_SERVERS=$(echo "$PREV_CONTENT" | grep -o '"[^"]*"[[:space:]]*:' | grep -v "mcpServers" | sort -u)
CURR_SERVERS=$(echo "$CURR_CONTENT" | grep -o '"[^"]*"[[:space:]]*:' | grep -v "mcpServers" | sort -u)
REMOVED_SERVERS=$(comm -23 <(echo "$PREV_SERVERS") <(echo "$CURR_SERVERS") 2>/dev/null)
if [ -n "$REMOVED_SERVERS" ]; then
BREAKING_CHANGES+=("MCP servers removed - tools may be unavailable")
fi
;;
*/agents/*.md)
# Check if agent file was significantly reduced (might indicate removal of capabilities)
PREV_LINES=$(echo "$PREV_CONTENT" | wc -l)
CURR_LINES=$(echo "$CURR_CONTENT" | wc -l)
# If more than 50% reduction, warn
if [ "$PREV_LINES" -gt 10 ] && [ "$CURR_LINES" -lt $((PREV_LINES / 2)) ]; then
BREAKING_CHANGES+=("Agent definition significantly reduced - capabilities may be removed")
fi
# Check if agent name/description changed in frontmatter
PREV_DESC=$(echo "$PREV_CONTENT" | head -20 | grep -i "description" | head -1)
CURR_DESC=$(echo "$CURR_CONTENT" | head -20 | grep -i "description" | head -1)
if [ -n "$PREV_DESC" ] && [ "$PREV_DESC" != "$CURR_DESC" ]; then
BREAKING_CHANGES+=("Agent description changed - verify consumer expectations")
fi
;;
*/commands/*.md|*/skills/*.md)
# Check if command/skill was significantly changed
PREV_LINES=$(echo "$PREV_CONTENT" | wc -l)
CURR_LINES=$(echo "$CURR_CONTENT" | wc -l)
if [ "$PREV_LINES" -gt 10 ] && [ "$CURR_LINES" -lt $((PREV_LINES / 2)) ]; then
BREAKING_CHANGES+=("Command/skill significantly reduced - behavior may change")
fi
;;
esac
# Output warnings if any breaking changes detected
if [[ ${#BREAKING_CHANGES[@]} -gt 0 ]]; then
echo ""
echo "$PREFIX WARNING: Potential breaking changes in $(basename "$FILE_PATH")"
echo "$PREFIX ============================================"
for change in "${BREAKING_CHANGES[@]}"; do
echo "$PREFIX - $change"
done
echo "$PREFIX ============================================"
echo "$PREFIX Consider updating CHANGELOG and notifying consumers"
echo ""
fi
# Always exit 0 - non-blocking
exit 0

View File

@@ -1,26 +0,0 @@
{
"hooks": {
"SessionStart": [
{
"matcher": "",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/auto-validate.sh"
}
]
}
],
"PostToolUse": [
{
"matcher": "Edit|Write",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/breaking-change-check.sh"
}
]
}
]
}
}

View File

@@ -1,26 +0,0 @@
{
"hooks": {
"SessionStart": [
{
"matcher": "",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/startup-check.sh"
}
]
}
],
"PostToolUse": [
{
"matcher": "Edit|Write",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/schema-diff-check.sh"
}
]
}
]
}
}

View File

@@ -1,138 +0,0 @@
#!/bin/bash
# data-platform schema diff detection hook
# Warns about potentially breaking schema changes
# This is a command hook - non-blocking, warnings only
PREFIX="[data-platform]"
# Check if warnings are enabled (default: true)
if [[ "${DATA_PLATFORM_SCHEMA_WARN:-true}" != "true" ]]; then
exit 0
fi
# Read tool input from stdin (JSON with file_path)
INPUT=$(cat)
# Extract file_path from JSON input
FILE_PATH=$(echo "$INPUT" | grep -o '"file_path"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1 | sed 's/.*"file_path"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/')
# If no file_path found, exit silently
if [ -z "$FILE_PATH" ]; then
exit 0
fi
# Check if file is a schema-related file
is_schema_file() {
local file="$1"
# Check file extension
case "$file" in
*.sql) return 0 ;;
*/migrations/*.py) return 0 ;;
*/migrations/*.sql) return 0 ;;
*/models/*.py) return 0 ;;
*/models/*.sql) return 0 ;;
*schema.prisma) return 0 ;;
*schema.graphql) return 0 ;;
*/dbt/models/*.sql) return 0 ;;
*/dbt/models/*.yml) return 0 ;;
*/alembic/versions/*.py) return 0 ;;
esac
# Check directory patterns
if echo "$file" | grep -qE "(migrations?|schemas?|models)/"; then
return 0
fi
return 1
}
# Exit if not a schema file
if ! is_schema_file "$FILE_PATH"; then
exit 0
fi
# Read the file content (if it exists and is readable)
if [[ ! -f "$FILE_PATH" ]]; then
exit 0
fi
FILE_CONTENT=$(cat "$FILE_PATH" 2>/dev/null || echo "")
if [[ -z "$FILE_CONTENT" ]]; then
exit 0
fi
# Detect breaking changes
BREAKING_CHANGES=()
# Check for DROP COLUMN
if echo "$FILE_CONTENT" | grep -qiE "DROP[[:space:]]+COLUMN"; then
BREAKING_CHANGES+=("DROP COLUMN detected - may break existing queries")
fi
# Check for DROP TABLE
if echo "$FILE_CONTENT" | grep -qiE "DROP[[:space:]]+TABLE"; then
BREAKING_CHANGES+=("DROP TABLE detected - data loss risk")
fi
# Check for DROP INDEX
if echo "$FILE_CONTENT" | grep -qiE "DROP[[:space:]]+INDEX"; then
BREAKING_CHANGES+=("DROP INDEX detected - may impact query performance")
fi
# Check for ALTER TYPE / MODIFY COLUMN type changes
if echo "$FILE_CONTENT" | grep -qiE "ALTER[[:space:]]+.*(TYPE|COLUMN.*TYPE)"; then
BREAKING_CHANGES+=("Column type change detected - may cause data truncation")
fi
if echo "$FILE_CONTENT" | grep -qiE "MODIFY[[:space:]]+COLUMN"; then
BREAKING_CHANGES+=("MODIFY COLUMN detected - verify data compatibility")
fi
# Check for adding NOT NULL to existing column
if echo "$FILE_CONTENT" | grep -qiE "ALTER[[:space:]]+.*SET[[:space:]]+NOT[[:space:]]+NULL"; then
BREAKING_CHANGES+=("Adding NOT NULL constraint - existing NULL values will fail")
fi
if echo "$FILE_CONTENT" | grep -qiE "ADD[[:space:]]+.*NOT[[:space:]]+NULL[^[:space:]]*[[:space:]]+DEFAULT"; then
# Adding NOT NULL with DEFAULT is usually safe - don't warn
:
elif echo "$FILE_CONTENT" | grep -qiE "ADD[[:space:]]+.*NOT[[:space:]]+NULL"; then
BREAKING_CHANGES+=("Adding NOT NULL column without DEFAULT - INSERT may fail")
fi
# Check for RENAME TABLE/COLUMN
if echo "$FILE_CONTENT" | grep -qiE "RENAME[[:space:]]+(TABLE|COLUMN|TO)"; then
BREAKING_CHANGES+=("RENAME detected - update all references")
fi
# Check for removing from Django/SQLAlchemy models (Python files)
if [[ "$FILE_PATH" == *.py ]]; then
if echo "$FILE_CONTENT" | grep -qE "^-[[:space:]]*[a-z_]+[[:space:]]*=.*Field\("; then
BREAKING_CHANGES+=("Model field removal detected in Python ORM")
fi
fi
# Check for Prisma schema changes
if [[ "$FILE_PATH" == *schema.prisma ]]; then
if echo "$FILE_CONTENT" | grep -qE "@relation.*onDelete.*Cascade"; then
BREAKING_CHANGES+=("Cascade delete detected - verify data safety")
fi
fi
# Output warnings if any breaking changes detected
if [[ ${#BREAKING_CHANGES[@]} -gt 0 ]]; then
echo ""
echo "$PREFIX WARNING: Potential breaking schema changes in $(basename "$FILE_PATH")"
echo "$PREFIX ============================================"
for change in "${BREAKING_CHANGES[@]}"; do
echo "$PREFIX - $change"
done
echo "$PREFIX ============================================"
echo "$PREFIX Review before deploying to production"
echo ""
fi
# Always exit 0 - non-blocking
exit 0

View File

@@ -1,69 +0,0 @@
#!/bin/bash
# data-platform startup check hook
# Checks for common issues at session start
# All output MUST have [data-platform] prefix
PREFIX="[data-platform]"
# Check if MCP venv exists - check cache first, then local
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/data-platform/.venv/bin/python"
PLUGIN_ROOT="${CLAUDE_PLUGIN_ROOT:-$(dirname "$(dirname "$(realpath "$0")")")}"
MARKETPLACE_ROOT="$(dirname "$(dirname "$PLUGIN_ROOT")")"
LOCAL_VENV="$MARKETPLACE_ROOT/mcp-servers/data-platform/.venv/bin/python"
# Check cache first (preferred), then local symlink
CACHE_VENV_DIR="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/data-platform/.venv"
LOCAL_VENV_DIR="$MARKETPLACE_ROOT/mcp-servers/data-platform/.venv"
if [[ -f "$CACHE_VENV" ]]; then
VENV_PATH="$CACHE_VENV"
# Auto-create symlink in installed marketplace if missing
if [[ ! -e "$LOCAL_VENV_DIR" && -d "$CACHE_VENV_DIR" ]]; then
mkdir -p "$(dirname "$LOCAL_VENV_DIR")" 2>/dev/null
ln -sf "$CACHE_VENV_DIR" "$LOCAL_VENV_DIR" 2>/dev/null
fi
elif [[ -f "$LOCAL_VENV" ]]; then
VENV_PATH="$LOCAL_VENV"
else
echo "$PREFIX MCP venv missing - run /initial-setup or setup.sh"
exit 0
fi
# Check PostgreSQL configuration (optional - just warn if configured but failing)
POSTGRES_CONFIG="$HOME/.config/claude/postgres.env"
if [[ -f "$POSTGRES_CONFIG" ]]; then
source "$POSTGRES_CONFIG"
if [[ -n "${POSTGRES_URL:-}" ]]; then
# Quick connection test (5 second timeout)
RESULT=$("$VENV_PATH" -c "
import asyncio
import sys
async def test():
try:
import asyncpg
conn = await asyncpg.connect('$POSTGRES_URL', timeout=5)
await conn.close()
return 'OK'
except Exception as e:
return f'FAIL: {e}'
print(asyncio.run(test()))
" 2>/dev/null || echo "FAIL: asyncpg not installed")
if [[ "$RESULT" == "OK" ]]; then
# PostgreSQL OK - say nothing
:
elif [[ "$RESULT" == *"FAIL"* ]]; then
echo "$PREFIX PostgreSQL connection failed - check POSTGRES_URL"
fi
fi
fi
# Check dbt project (if in a project with dbt_project.yml)
if [[ -f "dbt_project.yml" ]] || [[ -f "transform/dbt_project.yml" ]]; then
if ! command -v dbt &> /dev/null; then
echo "$PREFIX dbt CLI not found - dbt tools unavailable"
fi
fi
# All checks passed - say nothing
exit 0

View File

@@ -1,15 +0,0 @@
{
"hooks": {
"PostToolUse": [
{
"matcher": "Write|Edit|MultiEdit",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/notify.sh"
}
]
}
]
}
}

View File

@@ -1,74 +0,0 @@
#!/bin/bash
# doc-guardian notification hook
# Tracks documentation dependencies and queues updates
#
# SILENT BY DEFAULT - No output to avoid interrupting Claude's workflow.
# Changes are queued to .doc-guardian-queue for later processing.
# Run /doc-sync or /doc-audit to see and process pending updates.
#
# Set DOC_GUARDIAN_VERBOSE=1 to enable notification output.
# Read tool input from stdin (JSON with file_path)
INPUT=$(cat)
# Extract file_path from JSON input
FILE_PATH=$(echo "$INPUT" | grep -o '"file_path"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1 | sed 's/.*"file_path"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/')
# If no file_path found, exit silently
if [ -z "$FILE_PATH" ]; then
exit 0
fi
# Define documentation dependency mappings
# When these directories change, these docs need updating
declare -A DOC_DEPS
DOC_DEPS["commands"]="docs/COMMANDS-CHEATSHEET.md README.md"
DOC_DEPS["agents"]="README.md CLAUDE.md"
DOC_DEPS["hooks"]="docs/COMMANDS-CHEATSHEET.md README.md"
DOC_DEPS["skills"]="README.md"
DOC_DEPS[".claude-plugin"]="CLAUDE.md .claude-plugin/marketplace.json"
DOC_DEPS["mcp-servers"]="docs/COMMANDS-CHEATSHEET.md CLAUDE.md"
# Check which config directory was modified
MODIFIED_TYPE=""
for dir in commands agents hooks skills .claude-plugin mcp-servers; do
if echo "$FILE_PATH" | grep -qE "/${dir}/|^${dir}/"; then
MODIFIED_TYPE="$dir"
break
fi
done
# Exit silently if not a tracked config directory
if [ -z "$MODIFIED_TYPE" ]; then
exit 0
fi
# Get the dependent docs
DEPENDENT_DOCS="${DOC_DEPS[$MODIFIED_TYPE]}"
# Queue file for tracking pending updates
QUEUE_FILE="${CLAUDE_PROJECT_ROOT:-.}/.doc-guardian-queue"
# Add to queue (always, for deduplication we check file+type combo)
# Format: timestamp | type | file_path | dependent_docs
QUEUE_ENTRY="$(date +%Y-%m-%dT%H:%M:%S) | $MODIFIED_TYPE | $FILE_PATH | $DEPENDENT_DOCS"
# Check if this exact file+type combo already exists in queue (dedup)
if [ -f "$QUEUE_FILE" ]; then
if grep -qF "| $MODIFIED_TYPE | $FILE_PATH |" "$QUEUE_FILE" 2>/dev/null; then
# Already queued, skip silently
exit 0
fi
fi
# Add to queue
echo "$QUEUE_ENTRY" >> "$QUEUE_FILE" 2>/dev/null || true
# SILENT by default - only output if DOC_GUARDIAN_VERBOSE is set
# This prevents Claude from stopping to ask about documentation updates
if [ "${DOC_GUARDIAN_VERBOSE:-0}" = "1" ]; then
PENDING_COUNT=$(wc -l < "$QUEUE_FILE" 2>/dev/null | tr -d ' ' || echo "1")
echo "[doc-guardian] queued: $MODIFIED_TYPE ($PENDING_COUNT pending)"
fi
exit 0

View File

@@ -1,15 +0,0 @@
{
"hooks": {
"SessionStart": [
{
"matcher": "",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/startup-check.sh"
}
]
}
]
}
}

View File

@@ -1,37 +0,0 @@
#!/bin/bash
# pr-review startup check hook
# Checks for common issues at session start
# All output MUST have [pr-review] prefix
PREFIX="[pr-review]"
# Check if MCP venv exists - check cache first, then local
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/gitea/.venv/bin/python"
PLUGIN_ROOT="${CLAUDE_PLUGIN_ROOT:-$(dirname "$(dirname "$(realpath "$0")")")}"
MARKETPLACE_ROOT="$(dirname "$(dirname "$PLUGIN_ROOT")")"
LOCAL_VENV="$MARKETPLACE_ROOT/mcp-servers/gitea/.venv/bin/python"
# Check cache first (preferred), then local
if [[ -f "$CACHE_VENV" ]]; then
VENV_PATH="$CACHE_VENV"
elif [[ -f "$LOCAL_VENV" ]]; then
VENV_PATH="$LOCAL_VENV"
else
echo "$PREFIX MCP venvs missing - run setup.sh from installed marketplace"
exit 0
fi
# Check git remote vs .env config (only if .env exists)
if [[ -f ".env" ]]; then
CONFIGURED_REPO=$(grep -E "^GITEA_REPO=" .env 2>/dev/null | cut -d'=' -f2 | tr -d '"' || true)
if [[ -n "$CONFIGURED_REPO" ]]; then
CURRENT_REMOTE=$(git remote get-url origin 2>/dev/null | sed 's/.*[:/]\([^/]*\/[^.]*\).*/\1/' || true)
if [[ -n "$CURRENT_REMOTE" && "$CONFIGURED_REPO" != "$CURRENT_REMOTE" ]]; then
echo "$PREFIX Git remote mismatch - run /pr-review:project-sync"
exit 0
fi
fi
fi
# All checks passed - say nothing
exit 0

View File

@@ -16,18 +16,6 @@
"hooks",
"maintenance"
],
"hooks": {
"PostToolUse": [
{
"matcher": "Write|Edit",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/cleanup.sh"
}
]
}
]
},
"domain": "core"
"domain": "core",
"commands": ["./commands/"]
}

View File

@@ -0,0 +1,36 @@
---
description: Manual project hygiene check — validates file organization and cleanup
---
# /hygiene check
## Purpose
Manually run project hygiene checks that were previously automatic (PostToolUse hook removed per Decision #29).
## Checks Performed
1. **Temp file detection** — find files in project root that look temporary (*.tmp, *.bak, *.swp, *~)
2. **Misplaced files** — files outside their expected directories per project conventions
3. **Empty directories** — directories with no files
4. **Large files** — files exceeding reasonable size thresholds
5. **Debug artifacts** — leftover debug logs, console.log statements, print statements
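A minimal sketch of the temp-file detection (check 1), assuming a Python helper run from the project root; the patterns mirror the list above and the function name is illustrative. With `--fix` these files would be deleted, otherwise they are only reported.
```python
from pathlib import Path

# Temp-file patterns from the check list above
TEMP_PATTERNS = ["*.tmp", "*.bak", "*.swp", "*~"]

def find_temp_files(project_root: str = ".") -> list[Path]:
    """Return temp-looking files under the project root (check 1)."""
    root = Path(project_root)
    found: list[Path] = []
    for pattern in TEMP_PATTERNS:
        found.extend(p for p in root.rglob(pattern) if p.is_file())
    return sorted(found)
```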
## Usage
```
/hygiene check # Run all checks
/hygiene check --fix # Auto-fix safe issues (delete temp files, remove empty dirs)
```
## Output
```
Temp files: 0 found
Misplaced files: 0 found
Empty directories: 2 found
Large files: 0 found
Debug artifacts: 1 found
Fixable: 2 issues (run with --fix)
```

View File

@@ -1,42 +0,0 @@
#!/bin/bash
# project-hygiene cleanup hook
# Runs after file edits to clean up temp files
# All output MUST have [project-hygiene] prefix
set -euo pipefail
PREFIX="[project-hygiene]"
# Read tool input from stdin (discard - we don't need it for cleanup)
cat > /dev/null
# Cooldown - skip if ran in last 60 seconds
# Using $PPID ties cooldown to the Claude session
LAST_RUN_FILE="/tmp/project-hygiene-${PPID}-last-run"
if [[ -f "$LAST_RUN_FILE" ]]; then
LAST_RUN=$(cat "$LAST_RUN_FILE")
NOW=$(date +%s)
if (( NOW - LAST_RUN < 60 )); then
exit 0
fi
fi
PROJECT_ROOT="${PROJECT_ROOT:-.}"
DELETED_COUNT=0
# Silently delete temp files
for pattern in "*.tmp" "*.bak" "*.swp" "*~" ".DS_Store"; do
while IFS= read -r -d '' file; do
rm -f "$file" 2>/dev/null && ((DELETED_COUNT++)) || true
done < <(find "$PROJECT_ROOT" -name "$pattern" -type f -print0 2>/dev/null || true)
done
# Only output if we deleted something
if [[ $DELETED_COUNT -gt 0 ]]; then
echo "$PREFIX Cleaned $DELETED_COUNT temp files"
fi
# Record this run for cooldown
date +%s > "$LAST_RUN_FILE"
exit 0

View File

@@ -1,15 +0,0 @@
{
"hooks": {
"PostToolUse": [
{
"matcher": "Write|Edit",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/cleanup.sh"
}
]
}
]
}
}

View File

@@ -28,8 +28,6 @@ You are the **Orchestrator Agent** - a concise, action-oriented coordinator who
- skills/lessons-learned.md
- skills/git-workflow.md
- skills/progress-tracking.md
- skills/wiki-conventions.md
- skills/domain-consultation.md
**CRITICAL: Read each skill file exactly ONCE. Do NOT re-read skill files between MCP API calls. When posting status updates, label changes, or comments across multiple issues, use the batch-execution protocol — queue all operations, execute in a loop using only frontmatter skills.**
@@ -76,25 +74,6 @@ Execute `skills/dependency-management.md` - Check for file conflicts before para
### 6. Track Progress
Execute `skills/progress-tracking.md` - Manage status labels, parse progress comments.
### 6.5. Domain Gate Checks
Execute `skills/domain-consultation.md` (Execution Gate Protocol section):
1. **Before marking any issue as complete**, check for `Domain/*` labels
2. **If `Domain/Viz` label present:**
- Identify files changed by this issue
- Invoke `/design-gate <path-to-changed-files>`
- Gate PASS → proceed to mark issue complete
- Gate FAIL → add comment to issue with failure details, keep issue open
3. **If `Domain/Data` label present:**
- Identify files changed by this issue
- Invoke `/data-gate <path-to-changed-files>`
- Gate PASS → proceed to mark issue complete
- Gate FAIL → add comment to issue with failure details, keep issue open
4. **If gate command unavailable** (MCP server not running):
- Warn user: "Domain gate unavailable - proceeding without validation"
- Proceed with completion (non-blocking degradation)
- Do NOT silently skip
### 7. Monitor for Runaway Agents
Execute `skills/runaway-detection.md` - Intervene when agents are stuck.
@@ -110,7 +89,7 @@ Execute `skills/git-workflow.md` - Merge, tag, clean up branches.
### 11. Maintain Dispatch Log
Execute `skills/progress-tracking.md` (Sprint Dispatch Log section):
- Create dispatch log header at sprint start
- Append row on every task dispatch, completion, failure, and domain gate check
- Append row on every task dispatch, completion, and failure
- On sprint resume: add "Resumed" row with checkpoint context
- Log is posted as comments, one `add_comment` per event
@@ -122,7 +101,7 @@ Execute `skills/progress-tracking.md` (Sprint Dispatch Log section):
4. **ALWAYS monitor dispatched agents** - Intervene if stuck
5. **ALWAYS capture lessons** - Don't skip the interview at sprint close
6. **ALWAYS update milestone** - Close milestone when sprint complete
7. **ALWAYS run domain gates** - Issues with `Domain/*` labels must pass gates before completion
7. **ALWAYS use wiki-conventions** - Load `skills/wiki-conventions.md` when updating wiki pages at sprint close
## Your Mission

View File

@@ -30,7 +30,6 @@ You are the **Planner Agent** - a methodical architect who thoroughly analyzes r
- skills/issue-conventions.md
- skills/planning-workflow.md
- skills/label-taxonomy/labels-reference.md
- skills/domain-consultation.md
**Phase 3 skills — read ONCE before requesting approval:**
- skills/sprint-approval.md
@@ -78,25 +77,10 @@ Execute `skills/wiki-conventions.md` - Create proposal and implementation pages.
### 6. Task Sizing
Execute `skills/task-sizing.md` - **REFUSE to create L/XL tasks without breakdown.**
### 7. Domain Consultation
Execute `skills/domain-consultation.md` (Planning Protocol section):
1. **After drafting issues but BEFORE creating them in Gitea**
2. **Analyze each issue for domain signals:**
- Check planned labels for `Component/Frontend`, `Component/UI` -> Domain/Viz
- Check planned labels for `Component/Database`, `Component/Data` -> Domain/Data
- Scan issue description for domain keywords (see skill for full list)
3. **For detected domains, append acceptance criteria:**
- Domain/Viz: Design System Compliance checklist
- Domain/Data: Data Integrity checklist
4. **Add corresponding `Domain/*` label** to the issue's label set
5. **Document in planning summary** which issues have domain gates active
### 8. Issue Creation
### 7. Issue Creation
Execute `skills/issue-conventions.md` - Use proper format with wiki references.
### 9. Request Approval
### 8. Request Approval
Execute `skills/sprint-approval.md` - Planning DOES NOT equal execution permission.
## Critical Reminders
@@ -108,7 +92,7 @@ Execute `skills/sprint-approval.md` - Planning DOES NOT equal execution permissi
5. **ALWAYS search lessons** - Past experience informs better planning
6. **ALWAYS include wiki reference** - Every issue links to implementation wiki page
7. **ALWAYS use proper title format** - `[Sprint XX] <type>: <description>`
8. **ALWAYS check domain signals** - Every issue gets checked for viz/data domain applicability before creation
8. **ALWAYS use proper labels** - Apply relevant labels from the label taxonomy
## Your Mission

View File

@@ -0,0 +1,34 @@
---
description: Create a new Architecture Decision Record wiki page
agent: planner
---
# /adr create
## Skills Required
- skills/adr-conventions.md — ADR template and naming
- skills/wiki-conventions.md — page naming and dependency headers
- skills/visual-output.md — output formatting
## Workflow
### Step 1: Allocate ADR Number
Search existing wiki pages for `ADR-NNNN` pattern. Allocate next sequential number.
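A sketch of the allocation step, assuming the page titles have already been fetched (for example via `list_wiki_pages`); the regex and helper name are illustrative.
```python
import re

def next_adr_number(page_titles: list[str]) -> str:
    """Scan wiki page titles for ADR-NNNN and return the next sequential ID."""
    numbers = [int(m.group(1)) for title in page_titles
               for m in [re.match(r"ADR-(\d{4}):", title)] if m]
    return f"ADR-{max(numbers, default=0) + 1:04d}"

# e.g. ["ADR-0001: Use PostgreSQL with Alembic", "Project: X"] -> "ADR-0002"
```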
### Step 2: Gather Context
Ask user for:
- Title (short, decision-focused)
- Context (what prompted this decision)
- Options considered (at least 2)
- Recommended option
### Step 3: Create Wiki Page
Create `ADR-NNNN: {Title}` per `skills/adr-conventions.md` template.
Set status to `Proposed`.
### Step 4: Update ADR Index
Update or create `ADR-Index` wiki page, adding the new ADR under "Proposed".
### Step 5: Confirm
Display the created ADR and its URL.

View File

@@ -0,0 +1,23 @@
---
description: List all ADRs grouped by status
agent: planner
---
# /adr list
## Skills Required
- skills/adr-conventions.md — ADR lifecycle states
- skills/visual-output.md — output formatting
## Workflow
### Step 1: Read ADR Index
Load `ADR-Index` wiki page.
### Step 2: Display
Group ADRs by status (Accepted, Proposed, Superseded, Deprecated).
Show table with ID, Title, Date, Status.
### Optional Filter
`/adr list --status proposed` — show only proposed ADRs.

View File

@@ -0,0 +1,25 @@
---
description: Mark an ADR as superseded by a newer ADR
agent: planner
---
# /adr supersede
## Skills Required
- skills/adr-conventions.md — ADR lifecycle
- skills/wiki-conventions.md — page naming
## Workflow
### Step 1: Validate
Verify both ADRs exist. The superseding ADR should be in `Accepted` or `Proposed` state.
### Step 2: Update Old ADR
Set status to `Superseded`. Add note: "Superseded by ADR-MMMM: {Title}".
### Step 3: Update New ADR
Add note: "Supersedes ADR-NNNN: {Title}".
### Step 4: Update ADR Index
Move old ADR to "Superseded" section.

View File

@@ -0,0 +1,25 @@
---
description: Update an existing ADR's content or status
agent: planner
---
# /adr update
## Skills Required
- skills/adr-conventions.md — ADR template
- skills/wiki-conventions.md — page naming
## Workflow
### Step 1: Load ADR
Read `ADR-NNNN: {Title}` wiki page.
### Step 2: Apply Updates
Update content or status as specified. Valid status transitions:
- Proposed → Accepted
- Proposed → Deprecated
- Accepted → Deprecated
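A small sketch of how the transition rule above could be enforced; this is illustrative only, not part of the skill.
```python
# Allowed status transitions from the list above
VALID_TRANSITIONS = {
    "Proposed": {"Accepted", "Deprecated"},
    "Accepted": {"Deprecated"},
}

def can_transition(current: str, target: str) -> bool:
    """True if the requested status change is one of the valid transitions."""
    return target in VALID_TRANSITIONS.get(current, set())
```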
### Step 3: Update ADR Index
Move ADR to correct status section in `ADR-Index`.

View File

@@ -0,0 +1,23 @@
---
description: Architecture Decision Records management
---
# /adr
## Sub-commands
| Sub-command | Description |
|-------------|-------------|
| `/adr create` | Create a new ADR wiki page |
| `/adr list` | List all ADRs by status |
| `/adr update` | Update an existing ADR |
| `/adr supersede` | Mark an ADR as superseded by a new one |
## Usage
```
/adr create "<title>"
/adr list [--status proposed|accepted|superseded|deprecated]
/adr update <ADR-NNNN> [--status accepted|deprecated]
/adr supersede <ADR-NNNN> --by <ADR-MMMM>
```

View File

@@ -36,6 +36,8 @@ Run `/labels-sync` when setting up the plugin or after taxonomy updates.
| Priority/* | Low, Medium, High, Critical |
| Complexity/* | Simple, Medium, Complex |
| Efforts/* | XS, S, M, L, XL |
| Epic/* | Database, API, Frontend, Auth, Infrastructure |
| RnD/* | Friction, Gap, Pattern, Automation |
## DO NOT

View File

@@ -1,162 +0,0 @@
---
description: Diagnose issues and create reports, or investigate existing diagnostic issues
---
# PM Debug
## Skills Required
- skills/mcp-tools-reference.md
- skills/lessons-learned.md
- skills/git-workflow.md
## Purpose
Unified debugging command for diagnostics and issue investigation.
## Invocation
```
/pm-debug # Ask which mode
/pm-debug report # Run diagnostics, create issue
/pm-debug review # Investigate existing issues
```
## Mode Selection
If no subcommand provided, ask user:
1. **Report** - Run MCP tool diagnostics and create issue in marketplace
2. **Review** - Investigate existing diagnostic issues and propose fixes
---
## Mode: Report
Create structured issues in the marketplace repository.
### Prerequisites
Project `.env` must have:
```env
PROJMAN_MARKETPLACE_REPO=personal-projects/leo-claude-mktplace
```
### Workflow
#### Step 0: Select Report Type
- **Automated** - Run MCP tool diagnostics and report failures
- **User-Reported** - Gather structured feedback about a problem
#### For User-Reported (Step 0.1)
Gather via AskUserQuestion:
1. Which plugin/command was affected
2. What was the goal
3. What type of problem (error, missing feature, unexpected behavior, docs)
4. Problem description
5. Expected behavior
6. Workaround (optional)
#### Steps 1-2: Context Gathering
1. Gather project context (git remote, branch, pwd)
2. Detect sprint context (active milestone)
3. Read marketplace config
#### Steps 3-4: Diagnostic Suite (Automated Only)
Run MCP tools with explicit `repo` parameter:
- `validate_repo_org`
- `get_labels`
- `list_issues`
- `list_milestones`
- `suggest_labels`
Categorize: Parameter Format, Authentication, Not Found, Network, Logic
#### Steps 5-6: Generate Labels and Issue
**Automated:** `Type/Bug`, `Source/Diagnostic`, `Agent/Claude` + suggested
**User-Reported:** Map problem type to labels
#### Step 7: Create Issue
**Use curl (not MCP)** - avoids branch protection issues
#### Step 8: Report to User
Show summary and link to created issue
### DO NOT (Report Mode)
- Attempt to fix anything - only report
- Create issues if all automated tests pass (unless user-reported)
- Use MCP tools to create issues in marketplace - always use curl
---
## Mode: Review
Investigate diagnostic issues and propose fixes with human approval.
### Workflow with Approval Gates
#### Steps 1-8: Investigation
1. Detect repository (git remote)
2. Fetch diagnostic issues: `list_issues(labels=["Source: Diagnostic"])`
3. Display issue list
4. User selects issue (AskUserQuestion)
5. Fetch full details: `get_issue(issue_number=...)`
6. Parse diagnostic report (failed tools, errors, hypothesis)
7. Map errors to files
8. Read relevant files - **MANDATORY before proposing fix**
#### Step 9: Investigation Summary
Present analysis to user.
**APPROVAL GATE 1:** "Does this analysis match your understanding?"
- STOP and wait for user response
#### Step 9.5: Search Lessons Learned
Search for related past fixes using `search_lessons`.
#### Step 10: Propose Fix
Present specific fix approach with changes and rationale.
**APPROVAL GATE 2:** "Proceed with this fix?"
- STOP and wait for user response
#### Steps 11-12: Implement
1. Create feature branch (`fix/issue-N-description`)
2. Make code changes
3. Run tests
4. Show diff to user
**APPROVAL GATE 3:** "Create PR with these changes?"
- STOP and wait for user response
#### Steps 13-15: Finalize
13. Commit and push
14. Create PR
15. After user verifies fix: Close issue (REQUIRED) and capture lesson
### Error-to-File Quick Reference
| Error Pattern | Check Files |
|---------------|-------------|
| "owner/repo format" | config.py, gitea_client.py |
| "404" + "orgs" | gitea_client.py |
| "401", "403" | config.py (token) |
| "No repository" | Command .md file |
### DO NOT (Review Mode)
- Skip reading relevant files
- Proceed past approval gates without confirmation
- Close issues until user confirms fix works
- Commit directly to development/main
---
## Visual Output
```
╔══════════════════════════════════════════════════════════════════╗
║ 📋 PROJMAN ║
║ 🔧 DEBUG ║
║ [Mode: Report | Review] ║
╚══════════════════════════════════════════════════════════════════╝
```

View File

@@ -0,0 +1,41 @@
---
description: Close a completed project — retrospective, archive, lessons learned
agent: orchestrator
---
# /project close
## Purpose
Run project-level retrospective, capture lessons learned, and archive project artifacts.
## Prerequisites
- All sprints in roadmap are closed
- Project status is `Executing`
## Skills Required
- skills/project-charter.md — to update status
- skills/wiki-conventions.md — page naming
- skills/visual-output.md — output formatting
## Workflow
### Step 1: Verify Completion
Check that all sprints in `Roadmap: {Name}` are marked complete.
Check that all epic-labeled issues are closed.
### Step 2: Project Retrospective
Create a retrospective section in the project charter:
- What went well
- What didn't go well
- Key learnings
- Metrics: total sprints, total issues, duration
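One way to derive the metrics line, assuming the sprint milestones and issues have already been fetched; the field names on the milestone payload are assumptions.
```python
from datetime import date

def project_metrics(milestones: list[dict], issues: list[dict]) -> dict:
    """Aggregate closing metrics from already-fetched Gitea data (sketch)."""
    # Assumed fields: "state" on milestones, ISO "created_at"/"closed_at" timestamps
    closed = [m for m in milestones if m.get("state") == "closed"]
    starts = [date.fromisoformat(m["created_at"][:10]) for m in closed if m.get("created_at")]
    ends = [date.fromisoformat(m["closed_at"][:10]) for m in closed if m.get("closed_at")]
    duration = (max(ends) - min(starts)).days if starts and ends else None
    return {
        "total_sprints": len(milestones),
        "total_issues": len(issues),
        "duration_days": duration,
    }
```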
### Step 3: Archive
Update `Project: {Name}` status → `Closed`
Update `Risk-Register: {Name}` status → `Archived`
### Step 4: Final Report
Display project summary with key metrics.

View File

@@ -0,0 +1,52 @@
---
description: Discover, analyze, and charter a new project
agent: planner
---
# /project initiation
## Purpose
Analyze an existing codebase or system description, create a project charter, and decompose into epics.
## Skills Required
- skills/source-analysis.md — analysis framework
- skills/project-charter.md — charter template and naming
- skills/epic-conventions.md — epic decomposition rules
- skills/wiki-conventions.md — page naming and dependency headers
- skills/visual-output.md — output formatting
## Workflow
### Step 1: Source Analysis
If a source codebase path is provided, analyze it using `skills/source-analysis.md`:
- Identify tech stack, architecture, features, data model, quality state
- If no source provided (greenfield), skip to Step 2
### Step 2: Project Charter
Create wiki page `Project: {Name}` following `skills/project-charter.md`:
- Set status to `Initiating`
- Fill Vision, Scope (In/Out), Source Analysis Summary (if applicable)
- Leave Architecture Decisions, Epic Decomposition, Roadmap as placeholders
### Step 3: Epic Decomposition
Using analysis results, decompose project into epics per `skills/epic-conventions.md`:
- Create `Epic/*` labels if they don't exist (check with `list_labels`)
- Fill the Epic Decomposition table in the charter
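A sketch of the "create if missing" check, assuming the existing label names were fetched with `list_labels`; only the diffing is shown here, and actual creation goes through the Gitea tooling.
```python
# The five Epic/* labels from the label taxonomy
EPIC_LABELS = ["Epic/Database", "Epic/API", "Epic/Frontend", "Epic/Auth", "Epic/Infrastructure"]

def missing_epic_labels(existing_label_names: list[str]) -> list[str]:
    """Return the Epic/* labels that still need to be created."""
    existing = set(existing_label_names)
    return [label for label in EPIC_LABELS if label not in existing]
```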
### Step 4: Present and Confirm
Show the charter to the user. Wait for approval before proceeding to `/project plan`.
## Output
- Wiki page: `Project: {Name}`
- Labels: `Epic/*` labels created in Gitea
- State: `Initiating` — awaiting `/project plan`
## DO NOT
- Create sprint issues — that's `/sprint-plan`
- Create WBS or roadmap — that's `/project plan`
- Make architecture decisions — suggest ADRs via `/adr create`
- Skip user approval of the charter

View File

@@ -0,0 +1,52 @@
---
description: Create WBS, risk register, and sprint roadmap for an initiated project
agent: planner
---
# /project plan
## Purpose
Take an approved project charter and create the planning artifacts: WBS, risk register, and sprint roadmap.
## Prerequisites
- Project charter exists (`Project: {Name}` wiki page with status `Initiating` or `Planning`)
- Epic decomposition complete in charter
## Skills Required
- skills/wbs.md — work breakdown structure
- skills/risk-register.md — risk identification and scoring
- skills/sprint-roadmap.md — sprint sequencing
- skills/project-charter.md — to update charter status
- skills/wiki-conventions.md — page naming and dependency headers
- skills/visual-output.md — output formatting
## Workflow
### Step 1: Load Charter
Read `Project: {Name}` wiki page. Verify epic decomposition is present.
### Step 2: Create WBS
Create wiki page `WBS: {Name}` per `skills/wbs.md`.
### Step 3: Create Risk Register
Create wiki page `Risk-Register: {Name}` per `skills/risk-register.md`.
### Step 4: Create Sprint Roadmap
Create wiki page `Roadmap: {Name}` per `skills/sprint-roadmap.md`.
### Step 5: Update Charter
Update `Project: {Name}` wiki page:
- Fill Architecture Decisions links (ADRs created so far)
- Link to WBS, Risk Register, Roadmap
- Change status: `Initiating` → `Planning`
### Step 6: Present and Confirm
Show all artifacts to the user. On approval, the project is ready for its first `/sprint-plan`, which transitions the status to `Executing`.
## Output
- Wiki pages: `WBS: {Name}`, `Risk-Register: {Name}`, `Roadmap: {Name}`
- Updated: `Project: {Name}` status → Planning

View File

@@ -0,0 +1,47 @@
---
description: Full project hierarchy status view (absorbs /proposal-status)
agent: planner
---
# /project status
## Purpose
Display comprehensive project status including charter state, epic progress, risk summary, and sprint roadmap status. Absorbs the former `/proposal-status` functionality.
## Skills Required
- skills/project-charter.md — charter structure
- skills/visual-output.md — output formatting
## Workflow
### Step 1: Load Project Artifacts
Read wiki pages:
- `Project: {Name}` — charter and status
- `Roadmap: {Name}` — sprint sequence and completion
- `Risk-Register: {Name}` — open risks
- `WBS: {Name}` — work package completion
### Step 2: Aggregate Sprint Progress
For each sprint in the roadmap:
- Check milestone status (open/closed)
- Count issues by state (open/closed)
- Calculate completion percentage
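A sketch of the per-sprint aggregation, assuming the open/closed issue counts for a milestone have already been fetched; the names are illustrative.
```python
def sprint_progress(open_issues: int, closed_issues: int) -> dict:
    """Completion summary for one sprint milestone."""
    total = open_issues + closed_issues
    pct = round(100 * closed_issues / total) if total else 0
    return {"total": total, "closed": closed_issues, "complete_pct": pct}

# e.g. sprint_progress(3, 9) -> {"total": 12, "closed": 9, "complete_pct": 75}
```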
### Step 3: Display Status
```
Project: {Name}
Status: Executing
Epics: 3/6 complete
Sprints: 4/8 closed
Open Risks: 2 (highest: R1 score 6)
Next Sprint: Sprint 5
```
### Step 4: Detail Sections
- Epic progress bars
- Top risks by score
- Upcoming sprint scope from roadmap
- ADR summary (accepted/proposed counts)

View File

@@ -0,0 +1,23 @@
---
description: Project lifecycle management — concept to MVP
---
# /project
## Sub-commands
| Sub-command | Description |
|-------------|-------------|
| `/project initiation` | Analyze source, create charter, decompose into epics |
| `/project plan` | Create WBS, risk register, sprint roadmap |
| `/project status` | Full project hierarchy view |
| `/project close` | Retrospective, lessons learned, archive |
## Usage
```
/project initiation <source-path-or-description>
/project plan <project-name>
/project status <project-name>
/project close <project-name>
```

View File

@@ -1,49 +0,0 @@
---
description: View proposal and implementation hierarchy with status tracking
---
# Proposal Status
## Skills Required
- skills/mcp-tools-reference.md
- skills/wiki-conventions.md
## Purpose
View status of all change proposals and their implementations in Gitea Wiki.
## Invocation
```
/proposal-status
/proposal-status --version V04.1.0
/proposal-status --status "In Progress"
```
## Workflow
1. **Fetch Wiki Pages** - Use `list_wiki_pages()` to get all pages
2. **Filter Proposals** - Match `Change V*: Proposal*` pattern
3. **Parse Structure** - Group implementations under parent proposals
4. **Extract Status** - Read page metadata (In Progress, Implemented, Abandoned)
5. **Fetch Linked Artifacts** - Find issues and lessons referencing each implementation
6. **Display Tree View** - Show hierarchy with status and links
## Status Definitions
| Status | Meaning |
|--------|---------|
| Pending | Proposal created, no implementation started |
| In Progress | At least one implementation is active |
| Implemented | All planned implementations complete |
| Abandoned | Proposal cancelled or superseded |
## Visual Output
```
╔══════════════════════════════════════════════════════════════════╗
║ 📋 PROJMAN ║
║ Proposal Status ║
╚══════════════════════════════════════════════════════════════════╝
```

View File

@@ -10,11 +10,10 @@ agent: orchestrator
- skills/mcp-tools-reference.md
- skills/lessons-learned.md
- skills/wiki-conventions.md
- skills/rfc-workflow.md
- skills/rfc-workflow.md *(conditional — load only when sprint milestone metadata contains `**RFC:**` reference)*
- skills/progress-tracking.md
- skills/git-workflow.md
- skills/sprint-lifecycle.md
- skills/token-budget-report.md
## Purpose
@@ -35,10 +34,10 @@ Execute the sprint close workflow:
4. **Save to Gitea Wiki** - Use `create_lesson` with metadata and implementation link
5. **Update Wiki Implementation Page** - Change status to Implemented/Partial/Failed
6. **Update Wiki Proposal Page** - Update overall status if all implementations complete
7. **Update RFC Status (if applicable)** - See RFC Update section below
7. **Update RFC Status (if applicable)** - Skip if sprint is not RFC-linked. Only load `rfc-workflow.md` and execute this step if the milestone description contains `**RFC:**`.
8. **New Command Verification** - Remind user new commands require session restart
9. **Update CHANGELOG** (MANDATORY) - Add changes to `[Unreleased]` section
10. **Version Check** - Run `/suggest-version` to recommend version bump
10. **Version Review** - Review CHANGELOG.md for version bump recommendation (manual)
11. **Git Operations** - Commit, merge, tag, clean up branches
12. **Close Milestone** - Update milestone state to closed
@@ -84,11 +83,6 @@ If the sprint was linked to an RFC:
╚══════════════════════════════════════════════════════════════════╝
```
## Final Step: Token Budget Report
## Token Usage Note
After displaying the closing summary and completing all workflow steps, generate a Token Budget Report per `skills/token-budget-report.md`.
- Phase: CLOSING
- List all skills that were loaded during this closing session
- Use the orchestrator agent's model (sonnet) for agent overhead
- Display the formatted report
Token usage is captured as a `## Token Usage` section in the lessons learned wiki page — no standalone report.

View File

@@ -20,7 +20,6 @@ agent: planner
- skills/planning-workflow.md
- skills/label-taxonomy/labels-reference.md
- skills/sprint-lifecycle.md
- skills/token-budget-report.md
## Purpose
@@ -59,11 +58,3 @@ Execute the planning workflow as defined in `skills/planning-workflow.md`.
╚══════════════════════════════════════════════════════════════════╝
```
## Final Step: Token Budget Report
After displaying the planning summary and gaining sprint approval, generate a Token Budget Report per `skills/token-budget-report.md`.
- Phase: PLANNING
- List all skills that were loaded during this planning session
- Use the planner agent's model (sonnet) for agent overhead
- Display the formatted report

View File

@@ -1,47 +0,0 @@
---
description: Analyze CHANGELOG.md and suggest appropriate semantic version bump
---
# Suggest Version
## Purpose
Analyze CHANGELOG.md and suggest appropriate semantic version bump.
## Invocation
Run `/suggest-version` after updating CHANGELOG or before release.
## Workflow
1. **Read Current State**
- CHANGELOG.md for current version and [Unreleased] content
- marketplace.json for marketplace version
- Individual plugin versions
2. **Analyze [Unreleased] Section**
- Extract entries under Added, Changed, Fixed, Removed, Deprecated
- Categorize changes by impact
3. **Apply SemVer Rules**
| Change Type | Bump | Indicators |
|-------------|------|------------|
| MAJOR (X.0.0) | Breaking changes | Removed, "BREAKING:" in Changed |
| MINOR (x.Y.0) | New features | Added with new commands/plugins |
| PATCH (x.y.Z) | Bug fixes only | Fixed only |
4. **Output Recommendation**
- Current version
- Summary of changes
- Recommended bump with reason
- Release command
## Visual Output
```
╔══════════════════════════════════════════════════════════════════╗
║ 📋 PROJMAN ║
║ Version Analysis ║
╚══════════════════════════════════════════════════════════════════╝
```

View File

@@ -1,15 +0,0 @@
{
"hooks": {
"SessionStart": [
{
"matcher": "",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/startup-check.sh"
}
]
}
]
}
}

View File

@@ -1,85 +0,0 @@
#!/bin/bash
# projman startup check hook
# Checks for common issues AND suggests sprint planning proactively
# All output MUST have [projman] prefix
PREFIX="[projman]"
# Calculate paths
PLUGIN_ROOT="${CLAUDE_PLUGIN_ROOT:-$(dirname "$(dirname "$(realpath "$0")")")}"
# Check git remote vs .env config (only if .env exists)
if [[ -f ".env" ]]; then
CONFIGURED_REPO=$(grep -E "^GITEA_REPO=" .env 2>/dev/null | cut -d'=' -f2 | tr -d '"' || true)
if [[ -n "$CONFIGURED_REPO" ]]; then
CURRENT_REMOTE=$(git remote get-url origin 2>/dev/null | sed 's/.*[:/]\([^/]*\/[^.]*\).*/\1/' || true)
if [[ -n "$CURRENT_REMOTE" && "$CONFIGURED_REPO" != "$CURRENT_REMOTE" ]]; then
echo "$PREFIX Git remote mismatch - run /project-sync"
exit 0
fi
fi
fi
# Check for open issues (suggests sprint planning)
# Only if .env exists with valid GITEA config
if [[ -f ".env" ]]; then
GITEA_API_URL=$(grep -E "^GITEA_API_URL=" ~/.config/claude/gitea.env 2>/dev/null | cut -d'=' -f2 | tr -d '"' || true)
GITEA_API_TOKEN=$(grep -E "^GITEA_API_TOKEN=" ~/.config/claude/gitea.env 2>/dev/null | cut -d'=' -f2 | tr -d '"' || true)
GITEA_REPO=$(grep -E "^GITEA_REPO=" .env 2>/dev/null | cut -d'=' -f2 | tr -d '"' || true)
if [[ -n "$GITEA_API_URL" && -n "$GITEA_API_TOKEN" && -n "$GITEA_REPO" ]]; then
# Quick check for open issues without milestone (unplanned work)
# Note: grep -c returns 0 on no match but exits non-zero, causing || to also fire
# Use subshell to ensure single value
OPEN_ISSUES=$(curl -s -m 5 \
-H "Authorization: token $GITEA_API_TOKEN" \
"${GITEA_API_URL}/repos/${GITEA_REPO}/issues?state=open&milestone=none&limit=1" 2>/dev/null | \
grep -c '"number"' 2>/dev/null) || OPEN_ISSUES=0
if [[ "$OPEN_ISSUES" -gt 0 ]]; then
# Count total unplanned issues
TOTAL_UNPLANNED=$(curl -s -m 5 \
-H "Authorization: token $GITEA_API_TOKEN" \
"${GITEA_API_URL}/repos/${GITEA_REPO}/issues?state=open&milestone=none" 2>/dev/null | \
grep -c '"number"' 2>/dev/null) || TOTAL_UNPLANNED="?"
echo "$PREFIX ${TOTAL_UNPLANNED} open issues without milestone - consider /sprint-plan"
fi
fi
fi
# ============================================================================
# Check version consistency across files (early drift detection)
# ============================================================================
# Versions must stay in sync across: README.md, marketplace.json, CHANGELOG.md
# Drift here causes confusion and release issues
if [[ -f "README.md" && -f ".claude-plugin/marketplace.json" && -f "CHANGELOG.md" ]]; then
VERSION_README=$(grep -oE "^# .* - v[0-9]+\.[0-9]+\.[0-9]+" README.md 2>/dev/null | grep -oE "[0-9]+\.[0-9]+\.[0-9]+" || echo "")
# Extract metadata.version specifically (appears after "metadata" in marketplace.json)
VERSION_MARKETPLACE=$(sed -n '/"metadata"/,/}/p' .claude-plugin/marketplace.json 2>/dev/null | grep -oE '"version"[[:space:]]*:[[:space:]]*"[0-9]+\.[0-9]+\.[0-9]+"' | head -1 | grep -oE "[0-9]+\.[0-9]+\.[0-9]+" || echo "")
VERSION_CHANGELOG=$(grep -oE "^## \[[0-9]+\.[0-9]+\.[0-9]+\]" CHANGELOG.md 2>/dev/null | head -1 | grep -oE "[0-9]+\.[0-9]+\.[0-9]+" || echo "")
if [[ -n "$VERSION_README" && -n "$VERSION_MARKETPLACE" && -n "$VERSION_CHANGELOG" ]]; then
if [[ "$VERSION_README" != "$VERSION_MARKETPLACE" ]] || [[ "$VERSION_README" != "$VERSION_CHANGELOG" ]]; then
echo "$PREFIX Version mismatch detected:"
echo "$PREFIX README.md: v$VERSION_README"
echo "$PREFIX marketplace.json: v$VERSION_MARKETPLACE"
echo "$PREFIX CHANGELOG.md: v$VERSION_CHANGELOG"
echo "$PREFIX Run /suggest-version to analyze and fix"
fi
fi
fi
# Check for CHANGELOG.md [Unreleased] content (version management)
if [[ -f "CHANGELOG.md" ]]; then
# Check if there's content under [Unreleased] that hasn't been released
UNRELEASED_CONTENT=$(sed -n '/## \[Unreleased\]/,/## \[/p' CHANGELOG.md 2>/dev/null | grep -E '^### (Added|Changed|Fixed|Removed|Deprecated)' | head -1 || true)
if [[ -n "$UNRELEASED_CONTENT" ]]; then
UNRELEASED_LINES=$(sed -n '/## \[Unreleased\]/,/## \[/p' CHANGELOG.md 2>/dev/null | grep -E '^- ' | wc -l | tr -d ' ')
if [[ "$UNRELEASED_LINES" -gt 0 ]]; then
echo "$PREFIX ${UNRELEASED_LINES} unreleased changes in CHANGELOG - consider version bump"
fi
fi
fi
exit 0

View File

@@ -0,0 +1,49 @@
---
description: Architecture Decision Record conventions and wiki page template
---
# ADR Conventions
## Wiki Page Naming
Page name: `ADR-NNNN: {Title}` (e.g., `ADR-0001: Use PostgreSQL with Alembic`)
Sequential numbering. Allocate next available number.
## Dependency Header
```
> **Project:** {Name}
> **Sprint:** N/A or sprint where ADR was created
> **Issues:** Related issue numbers or N/A
> **Parent:** Project: {Name}
> **Created:** YYYY-MM-DD
> **Status:** Proposed | Accepted | Superseded | Deprecated
```
## ADR Template
1. **Context** — What is the issue motivating this decision?
2. **Decision** — What is the change being proposed/made?
3. **Consequences** — Positive, Negative, Neutral outcomes
4. **Alternatives Considered** — Table of options with pros, cons, verdict
5. **References** — Related ADRs, RFCs, external links
## Lifecycle
| State | Meaning |
|-------|---------|
| Proposed | Under discussion, not yet approved |
| Accepted | Approved and in effect |
| Superseded | Replaced by a newer ADR (link to replacement) |
| Deprecated | No longer relevant |
## ADR Index Page
Maintain an `ADR-Index` wiki page listing all ADRs grouped by status.
## Integration with Sprint Workflow
- ADRs created via `/adr create` during project initiation or planning
- ADR status checked during `/project status`
- Sprint planning can reference ADRs in issue descriptions

View File

@@ -1,165 +0,0 @@
---
name: domain-consultation
description: Cross-plugin domain consultation for specialized planning and validation
---
# Domain Consultation
## Purpose
Enables projman agents to detect domain-specific work and consult specialized plugins for expert validation during planning and execution phases. This skill is the backbone of the Domain Advisory Pattern.
---
## When to Use
| Agent | Phase | Action |
|-------|-------|--------|
| Planner | After task sizing, before issue creation | Detect domains, add acceptance criteria |
| Orchestrator | Before marking issue complete | Run domain gates, block if violations |
| Code Reviewer | During review | Include domain compliance in findings |
---
## Domain Detection Rules
| Signal Type | Detection Pattern | Domain Plugin | Action |
|-------------|-------------------|---------------|--------|
| Label-based | `Component/Frontend`, `Component/UI` | viz-platform | Add design system criteria, apply `Domain/Viz` |
| Content-based | Keywords: DMC, Dash, layout, theme, component, dashboard, chart, responsive, color, UI, frontend, Plotly | viz-platform | Same as above |
| Label-based | `Component/Database`, `Component/Data` | data-platform | Add data validation criteria, apply `Domain/Data` |
| Content-based | Keywords: schema, migration, pipeline, dbt, table, column, query, PostgreSQL, lineage, data model | data-platform | Same as above |
| Both signals | Frontend + Data signals present | Both plugins | Apply both sets of criteria |
---
## Planning Protocol
When creating issues, the planner MUST:
1. **Analyze each issue** for domain signals (check labels AND scan description for keywords)
2. **For Domain/Viz issues**, append this acceptance criteria block:
```markdown
## Design System Compliance
- [ ] All DMC components validated against registry
- [ ] Theme tokens used (no hardcoded colors/sizes)
- [ ] Accessibility check passed (WCAG contrast)
- [ ] Responsive breakpoints verified
```
3. **For Domain/Data issues**, append this acceptance criteria block:
```markdown
## Data Integrity
- [ ] Schema changes validated
- [ ] dbt tests pass
- [ ] Lineage intact (no orphaned models)
- [ ] Data types verified
```
4. **Apply the corresponding `Domain/*` label** to route the issue through gates
5. **Document in planning summary** which issues have domain gates active
---
## Execution Gate Protocol
Before marking any issue as complete, the orchestrator MUST:
1. **Check issue labels** for `Domain/*` labels
2. **If `Domain/Viz` label present:**
- Identify files changed by this issue
- Invoke `/design-gate <path-to-changed-files>`
- Gate PASS → proceed to mark issue complete
- Gate FAIL → add comment to issue with failure details, keep issue open
3. **If `Domain/Data` label present:**
- Identify files changed by this issue
- Invoke `/data-gate <path-to-changed-files>`
- Gate PASS → proceed to mark issue complete
- Gate FAIL → add comment to issue with failure details, keep issue open
4. **If gate command unavailable** (MCP server not running):
- Warn user: "Domain gate unavailable - proceeding without validation"
- Proceed with completion (non-blocking degradation)
- Do NOT silently skip - always inform user
---
## Review Protocol
During code review, the code reviewer SHOULD:
1. After completing standard code quality and security checks, check for `Domain/*` labels
2. **If Domain/Viz:** Include "Design System Compliance" section in review report
- Reference `/design-review` findings if available
- Check for hardcoded colors, invalid props, accessibility issues
3. **If Domain/Data:** Include "Data Integrity" section in review report
- Reference `/data-gate` findings if available
- Check for schema validity, lineage integrity
---
## Extensibility
To add a new domain (e.g., `Domain/Infra` for cmdb-assistant):
1. **In domain plugin:** Create advisory agent + gate command
- Agent: `agents/infra-advisor.md`
- Gate command: `commands/infra-gate.md`
- Audit skill: `skills/infra-audit.md`
2. **In this skill:** Add detection rules to the Detection Rules table above
- Define label-based signals (e.g., `Component/Infrastructure`)
- Define content-based keywords (e.g., "server", "network", "NetBox")
3. **In label taxonomy:** Add `Domain/Infra` label with appropriate color
- Update `plugins/projman/skills/label-taxonomy/labels-reference.md`
4. **No changes needed** to planner.md or orchestrator.md agent files
- They read this skill dynamically
- Detection rules table is the single source of truth
This pattern ensures domain expertise stays in domain plugins while projman orchestrates when to ask.
---
## Domain Acceptance Criteria Templates
### Design System Compliance (Domain/Viz)
```markdown
## Design System Compliance
- [ ] All DMC components validated against registry
- [ ] Theme tokens used (no hardcoded colors/sizes)
- [ ] Accessibility check passed (WCAG contrast)
- [ ] Responsive breakpoints verified
```
### Data Integrity (Domain/Data)
```markdown
## Data Integrity
- [ ] Schema changes validated
- [ ] dbt tests pass
- [ ] Lineage intact (no orphaned models)
- [ ] Data types verified
```
---
## Gate Command Reference
| Domain | Gate Command | Contract | Review Command | Advisory Agent |
|--------|--------------|----------|----------------|----------------|
| Viz | `/design-gate <path>` | v1 | `/design-review <path>` | `design-reviewer` |
| Data | `/data-gate <path>` | v1 | `/data-review <path>` | `data-advisor` |
Gate commands return binary PASS/FAIL for automation.
Review commands return detailed reports for human review.
**Contract Version:** Gate commands declare `gate_contract: vN` in their frontmatter. The version in this table is what projman expects. If a gate command bumps its contract version, this table must be updated to match. The `contract-validator` plugin checks this automatically via `validate_workflow_integration`.

View File

@@ -0,0 +1,49 @@
---
description: Epic decomposition and management conventions using Gitea labels and projects
---
# Epic Conventions
## What Is an Epic
An epic is a large body of work that spans multiple sprints. Epics are tracked as Gitea labels (`Epic/*`) and optionally as Gitea Project boards.
## Label Convention
Labels follow the pattern `Epic/{Name}`:
- `Epic/Database` — Schema design, migrations, seed data
- `Epic/API` — Backend endpoints, middleware, auth
- `Epic/Frontend` — UI components, routing, state management
- `Epic/Auth` — Authentication and authorization
- `Epic/Infrastructure` — CI/CD, deployment, monitoring
Epics are defined during `/project initiation` as part of the charter's Epic Decomposition table.
## Epic-to-Sprint Mapping
Each sprint focuses on one or more epics. The sprint milestone description references the active epics:
```
**Epics:** Epic/Database, Epic/API
**Project:** Driving School SaaS
```
## Wiki Cross-References
- Project Charter (`Project: {Name}`) contains the Epic Decomposition table
- Sprint Roadmap (`Roadmap: {Name}`) maps epics to sprint sequence
- WBS (`WBS: {Name}`) breaks epics into work packages
- Sprint Lessons (`Sprint-Lessons: Sprint-X`) reference which epics were active
## Issue-Epic Relationship
Every issue in an epic-aligned sprint gets the `Epic/*` label. This enables:
- Filtering all issues by epic across sprints
- Tracking epic completion percentage
- Epic velocity analysis in lessons learned
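A sketch of the completion-percentage idea, assuming the epic's issues (open and closed) were already fetched, for example via `list_issues` filtered on the epic label.
```python
def epic_completion(issues: list[dict]) -> float:
    """Percentage of an epic's issues that are closed (all issues share one Epic/* label)."""
    if not issues:
        return 0.0
    closed = sum(1 for issue in issues if issue.get("state") == "closed")
    return round(100 * closed / len(issues), 1)
```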
## DO NOT
- Create epics for single-sprint work — use regular labels
- Mix unrelated work under one epic label
- Skip epic labels during sprint planning — they're the traceability link

View File

@@ -13,7 +13,7 @@ description: Dynamic reference for Gitea label taxonomy (organization + reposito
This skill provides the current label taxonomy used for issue classification in Gitea. Labels are **fetched dynamically** from Gitea and should never be hardcoded.
**Current Taxonomy:** 49 labels (31 organization + 18 repository)
**Current Taxonomy:** 58 labels (31 organization + 27 repository)
## Organization Labels (31)
@@ -123,6 +123,29 @@ Cross-plugin integration labels for domain-specific validation gates.
- Also applied when: `Component/Database` or `Component/Data` label is present
- Example: "Add census demographic data pipeline"
### Epic (5 labels)
Project-level epic labels for multi-sprint work tracking.
| Label | Color | Description |
|-------|-------|-------------|
| `Epic/Database` | `#0E8A16` | Database schema, migrations, seed data |
| `Epic/API` | `#1D76DB` | Backend endpoints, middleware, auth |
| `Epic/Frontend` | `#E99695` | UI components, routing, state management |
| `Epic/Auth` | `#D93F0B` | Authentication and authorization |
| `Epic/Infrastructure` | `#BFD4F2` | CI/CD, deployment, monitoring |
### R&D (4 labels)
Research and development tracking labels for lessons learned.
| Label | Color | Description |
|-------|-------|-------------|
| `RnD/Friction` | `#FBCA04` | Workflow friction points |
| `RnD/Gap` | `#B60205` | Capability gaps discovered |
| `RnD/Pattern` | `#0075CA` | Reusable patterns identified |
| `RnD/Automation` | `#5319E7` | Automation opportunities |
## Label Suggestion Logic
When suggesting labels for issues, consider the following patterns:

View File

@@ -0,0 +1,46 @@
---
description: Template and conventions for project charter wiki pages
---
# Project Charter Conventions
## Wiki Page Naming
Page name: `Project: {Name}` (e.g., `Project: Driving School SaaS`)
## Dependency Header
```
> **Project:** {Name}
> **Sprint:** N/A (project-level)
> **Issues:** N/A (created during planning)
> **Parent:** N/A (top-level artifact)
> **Created:** YYYY-MM-DD
> **Status:** Initiating | Planning | Executing | Closing | Closed
```
## Charter Structure
The wiki page follows this structure:
1. **Vision** — One paragraph describing what this project achieves and why
2. **Scope** — In Scope (explicit list) and Out of Scope (prevents scope creep)
3. **Source Analysis Summary** — Key findings from `/project initiation` (if applicable)
4. **Architecture Decisions** — Links to ADR wiki pages
5. **Epic Decomposition** — Table of epics with description, priority, estimated sprints
6. **Sprint Roadmap** — Link to `Roadmap: {Name}` wiki page
7. **Risk Register** — Link to `Risk-Register: {Name}` wiki page
8. **Stakeholders** — Table of roles, persons, responsibilities
9. **Success Criteria** — Measurable outcomes that define "done"
## Lifecycle States
| State | Meaning | Transition |
|-------|---------|------------|
| Initiating | Discovery and chartering in progress | Planning (charter approved) |
| Planning | WBS, risk, roadmap being created | Executing (first sprint starts) |
| Executing | Sprints are running | Closing (all epics complete) |
| Closing | Retrospective and archival | Closed |
| Closed | Archived | Terminal |
State is tracked in the charter's `Status` field.

View File

@@ -0,0 +1,58 @@
---
description: Risk register template and management conventions for project planning
---
# Risk Register
## Wiki Page
Page name: `Risk-Register: {Name}` (e.g., `Risk-Register: Driving School SaaS`)
### Dependency Header
```
> **Project:** {Name}
> **Sprint:** N/A (project-level, reviewed per sprint)
> **Issues:** N/A
> **Parent:** Project: {Name}
> **Created:** YYYY-MM-DD
> **Status:** Active | Archived
```
## Risk Register Template
| ID | Risk | Probability | Impact | Score | Mitigation | Owner | Status |
|----|------|------------|--------|-------|------------|-------|--------|
| R1 | Example risk description | Medium | High | 6 | Mitigation strategy | Dev | Open |
## Scoring
| Probability | Value |
|-------------|-------|
| Low | 1 |
| Medium | 2 |
| High | 3 |
| Impact | Value |
|--------|-------|
| Low | 1 |
| Medium | 2 |
| High | 3 |
**Score** = Probability x Impact (1-9 range)
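Worked example: the R1 row above pairs Medium probability (2) with High impact (3), so its score is 2 × 3 = 6; the maximum possible score is High × High = 9.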
## Risk Lifecycle
| Status | Meaning |
|--------|---------|
| Open | Active risk, mitigation planned |
| Monitoring | Risk identified but not yet actionable |
| Mitigated | Mitigation applied, risk reduced |
| Occurred | Risk materialized — track resolution |
| Closed | No longer relevant |
## Integration
- `/project plan` creates initial risk register
- `/project status` summarizes open risk count and top-3 by score
- `/sprint-close` updates risk statuses in lessons learned

View File

@@ -0,0 +1,51 @@
---
description: Framework for analyzing existing codebases and systems as input to project initiation
---
# Source Analysis
## Purpose
Structured approach to analyzing an existing codebase or system before project planning. Used by `/project initiation` to understand what exists before defining what to build.
## Analysis Framework
### 1. Codebase Discovery
- Technology stack identification (languages, frameworks, databases, ORMs)
- Architecture pattern recognition (monolith, microservices, modular monolith)
- Dependency inventory (package managers, versions, lock files)
- Environment configuration (env files, config patterns, secrets management)
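An illustrative sketch of the stack-identification step; the manifest-to-stack map is a small assumed subset, not an exhaustive rule set.
```python
from pathlib import Path

# Assumed, non-exhaustive mapping of manifest files to stacks
MANIFEST_HINTS = {
    "package.json": "Node.js / JavaScript",
    "pyproject.toml": "Python",
    "requirements.txt": "Python",
    "go.mod": "Go",
    "Cargo.toml": "Rust",
    "pom.xml": "Java (Maven)",
}

def detect_stacks(repo_root: str) -> list[str]:
    """Guess technology stacks from manifest files present at the repo root."""
    root = Path(repo_root)
    return sorted({hint for name, hint in MANIFEST_HINTS.items() if (root / name).exists()})
```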
### 2. Feature Inventory
- User-facing features (pages, flows, API endpoints)
- Admin/internal features (dashboards, tools, scripts)
- Integration points (third-party APIs, external services)
- Background processes (cron jobs, workers, queues)
### 3. Data Model Analysis
- Database schemas and relationships
- Data migration history
- Seed data patterns
- Data validation rules
### 4. Quality Assessment
- Test coverage (types, frameworks, CI integration)
- Documentation state (README, inline, API docs)
- Code quality indicators (linting, formatting, type safety)
- Known technical debt (TODO/FIXME/HACK comments)
### 5. Deployment Assessment
- Current hosting/infrastructure
- CI/CD pipeline state
- Environment parity (dev/staging/prod)
- Monitoring and logging
## Output Format
The analysis produces a structured report stored as a wiki page (`Project: {Name}`) that feeds into the project charter and epic decomposition.
## DO NOT
- Make assumptions about missing components — document gaps explicitly
- Evaluate "good vs bad" — document facts for decision-making
- Propose solutions during analysis — that's the planning phase

View File

@@ -0,0 +1,41 @@
---
description: Sprint roadmap template for sequencing epics across multiple sprints
---
# Sprint Roadmap
## Wiki Page
Page name: `Roadmap: {Name}` (e.g., `Roadmap: Driving School SaaS`)
### Dependency Header
```
> **Project:** {Name}
> **Sprint:** N/A (project-level)
> **Issues:** N/A
> **Parent:** Project: {Name}
> **Created:** YYYY-MM-DD
> **Status:** Draft | Active | Complete
```
## Roadmap Template
| Sprint | Epics | Focus | Dependencies | Status |
|--------|-------|-------|-------------|--------|
| Sprint 1 | Epic/Database | Schema design, initial migrations | None | Planned |
| Sprint 2 | Epic/Database, Epic/API | Seed data, API scaffolding | Sprint 1 | Planned |
## Milestones
| Milestone | Target Sprint | Criteria |
|-----------|--------------|----------|
| Backend MVP | Sprint 3 | All core API endpoints passing tests |
| Integration Complete | Sprint 6 | End-to-end flows working |
## Integration
- `/project plan` creates the initial roadmap from epic decomposition + dependency analysis
- `/sprint-plan` references the roadmap to determine sprint scope
- `/sprint-close` updates sprint status in roadmap
- `/project status` shows roadmap progress

View File

@@ -42,12 +42,13 @@ For commands that don't invoke a specific agent phase:
|---------|-------------|------------|
| `/sprint-status` | 📊 Chart | STATUS |
| `/pm-setup` | ⚙️ Gear | SETUP |
| `/pm-debug` | 🐛 Bug | DEBUG |
| `/labels-sync` | 🏷️ Label | LABELS |
| `/suggest-version` | 📦 Package | VERSION |
| `/proposal-status` | 📋 Clipboard | PROPOSALS |
| `/pm-test` | 🧪 Flask | TEST |
| `/rfc` | 📄 Document | RFC [Sub-Command] |
| `/project` | 📋 Clipboard | PROJECT [Sub-Command] |
| `/adr` | 📐 Ruler | ADR [Sub-Command] |
| `/hygiene check` | 🧹 Broom | HYGIENE |
| `/cv status` | ✅ Check | CV STATUS |
---

View File

@@ -0,0 +1,45 @@
---
description: Work Breakdown Structure skill for decomposing projects and sprints into implementable work packages
---
# Work Breakdown Structure (WBS)
## Purpose
Bridges project-level epics and sprint-level issues. Used by `/project plan` to create the initial decomposition and by `/sprint-plan` to refine sprint scope.
## Wiki Page
Page name: `WBS: {Name}` (e.g., `WBS: Driving School SaaS`)
### Dependency Header
```
> **Project:** {Name}
> **Sprint:** N/A (project-level, refined per sprint)
> **Issues:** N/A
> **Parent:** Project: {Name}
> **Created:** YYYY-MM-DD
> **Status:** Draft | Active | Complete
```
## Decomposition Rules
1. **Level 1:** Epics (from project charter)
2. **Level 2:** Work packages (groupings within an epic — typically 1 sprint each)
3. **Level 3:** Tasks (become Gitea issues — must be S or M size per task-sizing.md)
## Sprint Refinement
During `/sprint-plan`, the planner:
1. Loads the WBS
2. Identifies the next unstarted work packages
3. Creates issues from Level 3 tasks
4. Marks consumed work packages as "Sprint-X" in the WBS
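A sketch of step 2 ("identifies the next unstarted work packages"), assuming the WBS table has been parsed into dicts with a `sprint` field that stays empty until a package is consumed; the field names are illustrative.
```python
def next_work_packages(wbs_rows: list[dict], capacity: int = 2) -> list[dict]:
    """Pick the next unconsumed work packages, in WBS order."""
    unstarted = [row for row in wbs_rows if not row.get("sprint")]
    return unstarted[:capacity]

# After planning, consumed rows would get row["sprint"] = "Sprint-X" (step 4 above).
```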
## Integration
- `/project plan` creates the initial WBS from epic decomposition
- `/sprint-plan` consumes WBS work packages to create sprint issues
- `/sprint-close` updates WBS with completion status
- `/project status` aggregates WBS progress for project-level view

View File

@@ -1,73 +1,98 @@
---
name: wiki-conventions
description: Proposal and implementation page format and naming conventions
description: Wiki page naming conventions and dependency headers (Decision #30)
---
# Wiki Conventions
## Purpose
Defines the naming and structure for wiki proposal and implementation pages.
Defines naming conventions, dependency headers, and structure for all wiki pages in the project management workflow.
## When to Use
- **Planner agent**: When creating wiki pages during planning
- **Orchestrator agent**: When updating status at sprint close
- **Commands**: `/sprint-plan`, `/sprint-close`, `/proposal-status`
- **Commands**: `/sprint-plan`, `/sprint-close`, `/project initiation`, `/project plan`, `/project status`, `/project close`, `/adr create`
---
## Page Naming
## Page Naming Pattern
| Page Type | Naming Convention |
|-----------|-------------------|
| Proposal | `Change VXX.X.X: Proposal` |
| Implementation | `Change VXX.X.X: Proposal (Implementation N)` |
All wiki pages follow: `{Type}-{ID}: {Title}` or `{Type}: {Title}`
**Examples:**
- `Change V4.1.0: Proposal`
- `Change V4.1.0: Proposal (Implementation 1)`
- `Change V4.1.0: Proposal (Implementation 2)`
| Type | ID Format | Example |
|------|-----------|---------|
| RFC | NNNN (sequential) | `RFC-0001: OAuth2 Provider Support` |
| ADR | NNNN (sequential) | `ADR-0001: Use PostgreSQL with Alembic` |
| Project | Name | `Project: Driving School SaaS` |
| WBS | Name | `WBS: Driving School SaaS` |
| Risk-Register | Name | `Risk-Register: Driving School SaaS` |
| Roadmap | Name | `Roadmap: Driving School SaaS` |
| Sprint-Lessons | Sprint ID | `Sprint-Lessons: Sprint-3` |
| ADR-Index | — | `ADR-Index` |
| Change Proposal | Version | `Change VXX.X.X: Proposal` |
| Implementation | Version + N | `Change VXX.X.X: Proposal (Implementation N)` |
## Dependency Header
Every wiki page MUST include this header block:
```markdown
> **Project:** [project name or N/A]
> **Sprint:** [sprint milestone or N/A]
> **Issues:** #12, #15, #18 [or N/A]
> **Parent:** [parent wiki page or N/A]
> **Created:** YYYY-MM-DD
> **Status:** [lifecycle state]
```
## Hierarchy
- `Project: {Name}` is the root
- `WBS: {Name}` (parent: Project)
- `Risk-Register: {Name}` (parent: Project)
- `Roadmap: {Name}` (parent: Project)
- `ADR-NNNN: {Title}` (parent: Project)
- `Sprint-Lessons: Sprint-X` (parent: Project)
---
## Proposal Page Template
## Change Proposal Pages (Legacy Format)
### Proposal Page Template
```markdown
> **Type:** Change Proposal
> **Version:** V04.1.0
> **Version:** VXX.X.X
> **Plugin:** projman
> **Status:** In Progress
> **Date:** 2026-01-26
> **Date:** YYYY-MM-DD
# Feature Title
[Content migrated from input source or created from discussion]
## Implementations
- [Implementation 1](link) - Sprint 17 - In Progress
- [Implementation 1](link) - Sprint N - In Progress
```
---
## Implementation Page Template
### Implementation Page Template
```markdown
> **Type:** Change Proposal Implementation
> **Version:** V04.1.0
> **Version:** VXX.X.X
> **Status:** In Progress
> **Date:** 2026-01-26
> **Date:** YYYY-MM-DD
> **Origin:** [Proposal](wiki-link)
> **Sprint:** Sprint 17
> **Sprint:** Sprint N
# Implementation Details
[Technical details, scope, approach]
## Issues
- #45: JWT token generation
- #46: Login endpoint
- #47: Auth tests
- #45: Issue description
- #46: Issue description
```
---
@@ -85,71 +110,26 @@ Defines the naming and structure for wiki proposal and implementation pages.
## Completion Update (Sprint Close)
On sprint close, update implementation page:
On sprint close, update the implementation page status to `Implemented` and add a `## Completion Summary` section with a link to the lessons learned page.
```markdown
> **Type:** Change Proposal Implementation
> **Version:** V04.1.0
> **Status:** Implemented ✅
> **Date:** 2026-01-26
> **Completed:** 2026-01-28
> **Origin:** [Proposal](wiki-link)
> **Sprint:** Sprint 17
# Implementation Details
[Original content...]
## Completion Summary
- All planned issues completed
- Lessons learned: [Link to lesson]
```
On proposal page, update implementation entries with completion status.
---
## Proposal Status Update
## R&D Notes Section
When all implementations complete, update proposal:
Lessons learned pages include a `## R&D Notes` section at the bottom for capturing:
```markdown
> **Type:** Change Proposal
> **Version:** V04.1.0
> **Status:** Implemented ✅
> **Date:** 2026-01-26
# Feature Title
[Original content...]
## Implementations
- [Implementation 1](link) - Sprint 17 - ✅ Completed
```
| Label | Description | Action |
|-------|-------------|--------|
| `RnD/Friction` | Workflow friction points | Consider improvements |
| `RnD/Gap` | Capability gaps discovered | Prioritize new tools |
| `RnD/Pattern` | Reusable patterns identified | Document for reuse |
| `RnD/Automation` | Automation opportunities | Add to backlog |
---
## Creating Pages
## Enforcement
**Create proposal:**
```python
create_wiki_page(
repo="org/repo",
title="Change V4.1.0: Proposal",
content="[proposal template content]"
)
```
**Create implementation:**
```python
create_wiki_page(
repo="org/repo",
title="Change V4.1.0: Proposal (Implementation 1)",
content="[implementation template content]"
)
```
**Update implementation on close:**
```python
update_wiki_page(
repo="org/repo",
page_name="Change-V4.1.0:-Proposal-(Implementation-1)",
content="[updated content with completion status]"
)
```
- Commands creating wiki pages use these templates from their respective skills
- Malformed pages are flagged, not auto-corrected

View File

@@ -1,3 +0,0 @@
{
"hooks": {}
}

View File

@@ -1,44 +1,101 @@
#!/bin/bash
# Verify all hooks are command type (not prompt)
# Run this after any plugin update
# verify-hooks.sh — Verify marketplace hook inventory
# Post-Decision #29: Only PreToolUse safety hooks and UserPromptSubmit quality hooks may exist
#
# Expected inventory:
# code-sentinel : PreToolUse → security-check.sh (Write|Edit|MultiEdit)
# git-flow : PreToolUse → branch-check.sh (Bash)
# git-flow : PreToolUse → commit-msg-check.sh (Bash)
# cmdb-assistant : PreToolUse → validate-input.sh (MCP create/update)
# clarity-assist : UserPromptSubmit → vagueness-check.sh (prompt quality)
#
# FAIL conditions:
# - Any SessionStart hook
# - Any PostToolUse hook
# - Any hook of type "prompt"
# - Any hooks.json outside the 4 expected plugins
# - Missing expected hooks
echo "=== HOOK VERIFICATION ==="
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ROOT_DIR="$(dirname "$SCRIPT_DIR")"
PLUGINS_DIR="$ROOT_DIR/plugins"
echo "=== HOOK VERIFICATION (Post-Decision #29) ==="
echo ""
FAILED=0
HOOK_COUNT=0
# Check ALL hooks.json files in .claude directory
for f in $(find ~/.claude -name "hooks.json" 2>/dev/null); do
if grep -q '"type": "prompt"' "$f" || grep -q '"type":"prompt"' "$f"; then
echo "❌ PROMPT HOOK FOUND: $f"
# Allowed plugins with hooks
ALLOWED_PLUGINS="code-sentinel git-flow cmdb-assistant clarity-assist"
# 1. Check for unexpected hooks.json files
while IFS= read -r -d '' hooks_file; do
plugin_name=$(basename "$(dirname "$(dirname "$hooks_file")")")
if ! echo "$ALLOWED_PLUGINS" | grep -qw "$plugin_name"; then
echo "FAIL: UNEXPECTED hooks.json in: $plugin_name"
echo " File: $hooks_file"
echo " Only code-sentinel, git-flow, cmdb-assistant, and clarity-assist may have hooks"
FAILED=1
fi
done
done < <(find "$PLUGINS_DIR" -path "*/hooks/hooks.json" -print0 2>/dev/null)
# Note about cache (informational only - do NOT clear mid-session)
if [ -d ~/.claude/plugins/cache/leo-claude-mktplace ]; then
echo " Cache exists: ~/.claude/plugins/cache/leo-claude-mktplace"
echo " (This is normal - do NOT clear mid-session or MCP tools will break)"
echo " To apply plugin changes: restart Claude Code session"
# 2. Check for forbidden hook types
while IFS= read -r -d '' hooks_file; do
plugin_name=$(basename "$(dirname "$(dirname "$hooks_file")")")
# Check for SessionStart (FORBIDDEN)
if jq -e '.hooks.SessionStart' "$hooks_file" > /dev/null 2>&1; then
echo "FAIL: SessionStart hook found in $plugin_name (FORBIDDEN post-Decision #29)"
FAILED=1
fi
# Verify installed hooks are command type
for plugin in doc-guardian code-sentinel projman pr-review project-hygiene data-platform cmdb-assistant; do
HOOK_FILE=~/.claude/plugins/marketplaces/leo-claude-mktplace/plugins/$plugin/hooks/hooks.json
if [ -f "$HOOK_FILE" ]; then
if grep -q '"type": "command"' "$HOOK_FILE" || grep -q '"type":"command"' "$HOOK_FILE"; then
echo "$plugin: command type"
# Check for PostToolUse (FORBIDDEN)
if jq -e '.hooks.PostToolUse' "$hooks_file" > /dev/null 2>&1; then
echo "FAIL: PostToolUse hook found in $plugin_name (FORBIDDEN post-Decision #29)"
FAILED=1
fi
# Check for prompt type (FORBIDDEN)
if grep -q '"type"[[:space:]]*:[[:space:]]*"prompt"' "$hooks_file"; then
echo "FAIL: Prompt-type hook found in $plugin_name (FORBIDDEN)"
FAILED=1
fi
# Count PreToolUse hooks
pre_count=$(jq '[.hooks.PreToolUse[]? | .hooks[]?] | length' "$hooks_file" 2>/dev/null || echo 0)
HOOK_COUNT=$((HOOK_COUNT + pre_count))
# Count UserPromptSubmit hooks (allowed for quality checks)
ups_count=$(jq '[.hooks.UserPromptSubmit[]? | .hooks[]?] | length' "$hooks_file" 2>/dev/null || echo 0)
HOOK_COUNT=$((HOOK_COUNT + ups_count))
done < <(find "$PLUGINS_DIR" -path "*/hooks/hooks.json" -print0 2>/dev/null)
# 3. Verify expected hooks exist
for expected in code-sentinel git-flow cmdb-assistant clarity-assist; do
if [[ ! -f "$PLUGINS_DIR/$expected/hooks/hooks.json" ]]; then
echo "FAIL: Missing expected hooks.json in $expected"
FAILED=1
else
echo " $plugin: NOT command type"
FAILED=1
fi
echo " $expected: hooks.json present"
fi
done
# 4. Summary
echo ""
echo "Total hooks: $HOOK_COUNT (expected: 5 — 4 PreToolUse + 1 UserPromptSubmit)"
if [[ "$HOOK_COUNT" -ne 5 ]]; then
echo "FAIL: Hook count mismatch"
FAILED=1
fi
echo ""
if [ $FAILED -eq 0 ]; then
echo "✓ All hooks verified OK"
if [[ $FAILED -eq 0 ]]; then
echo "✓ All hooks verified OK — 4 PreToolUse safety hooks + 1 UserPromptSubmit quality hook"
else
echo "❌ ISSUES FOUND - fix before using"
echo "FAIL: HOOK VERIFICATION FAILED"
exit 1
fi