Compare commits: v5.2.0...87676e0198 (1 commit)

| Author | SHA1 | Date |
|---|---|---|
| | 87676e0198 | |
@@ -6,7 +6,7 @@
   },
   "metadata": {
     "description": "Project management plugins with Gitea and NetBox integrations",
-    "version": "5.2.0"
+    "version": "5.1.0"
   },
   "plugins": [
     {
CHANGELOG.md
@@ -4,108 +4,10 @@ All notable changes to the Leo Claude Marketplace will be documented in this file.
 
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
 
-## [5.2.0] - 2026-01-28
+## [Unreleased]
 
 ### Added
 
-#### Sprint 5: Documentation (V5.2.0 Plugin Enhancements)
-Documentation and guides for the plugin enhancements initiative.
-
-**git-flow v1.2.0:**
-- **Branching Strategy Guide** (`docs/BRANCHING-STRATEGY.md`) - Complete documentation of `development -> staging -> main` promotion flow with Mermaid diagrams
-
-**clarity-assist v1.2.0:**
-- **ND Support Guide** (`docs/ND-SUPPORT.md`) - Documentation of neurodivergent accommodations, features, and usage examples
-
-**Gitea MCP Server:**
-- **`update_issue` milestone parameter** - Can now assign/change milestones programmatically
-
-**Sprint Completed:**
-- Milestone: Sprint 5 - Documentation (closed 2026-01-28)
-- Issues: #266, #267, #268, #269
-- Wiki: [Change V5.2.0: Plugin Enhancements (Sprint 5 Documentation)](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/Change-V5.2.0%3A-Plugin-Enhancements-%28Sprint-5-Documentation%29)
-
----
-
-#### Sprint 4: Commands (V5.2.0 Plugin Enhancements)
-Implementation of 18 new user-facing commands across 8 plugins.
-
-**projman v3.3.0:**
-- **`/sprint-diagram`** - Generate Mermaid diagram of sprint issues with dependencies and status
-
-**pr-review v1.1.0:**
-- **`/pr-diff`** - Formatted diff with inline review comments and annotations
-- **Confidence threshold config** - `PR_REVIEW_CONFIDENCE_THRESHOLD` env var (default: 0.7)
-
-**data-platform v1.2.0:**
-- **`/data-quality`** - DataFrame quality checks (nulls, duplicates, types, outliers) with pass/warn/fail scoring
-- **`/lineage-viz`** - dbt lineage visualization as Mermaid diagrams
-- **`/dbt-test`** - Formatted dbt test runner with summary and failure details
-
-**viz-platform v1.1.0:**
-- **`/chart-export`** - Export charts to PNG, SVG, PDF via kaleido
-- **`/accessibility-check`** - Color blind validation (WCAG contrast ratios)
-- **`/breakpoints`** - Responsive layout breakpoint configuration
-- **New MCP tools**: `chart_export`, `accessibility_validate_colors`, `accessibility_validate_theme`, `accessibility_suggest_alternative`, `layout_set_breakpoints`
-- **New dependency**: kaleido>=0.2.1 for chart rendering
-
-**contract-validator v1.2.0:**
-- **`/dependency-graph`** - Mermaid visualization of plugin dependencies with data flow
-
-**doc-guardian v1.1.0:**
-- **`/changelog-gen`** - Generate changelog from conventional commits
-- **`/doc-coverage`** - Documentation coverage metrics by function/class
-- **`/stale-docs`** - Flag documentation behind code changes
-
-**claude-config-maintainer v1.1.0:**
-- **`/config-diff`** - Track CLAUDE.md changes over time with behavioral impact analysis
-- **`/config-lint`** - 31 lint rules for CLAUDE.md (security, structure, content, format, best practices)
-
-**cmdb-assistant v1.2.0:**
-- **`/cmdb-topology`** - Infrastructure topology diagrams (rack, network, site views)
-- **`/change-audit`** - NetBox audit trail queries with filtering
-- **`/ip-conflicts`** - Detect IP conflicts and overlapping prefixes
-
-**Sprint Completed:**
-- Milestone: Sprint 4 - Commands (closed 2026-01-28)
-- Issues: #241-#258 (18/18 closed)
-- Wiki: [Change V5.2.0: Plugin Enhancements (Sprint 4 Commands)](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/Change-V5.2.0%3A-Plugin-Enhancements-%28Sprint-4-Commands%29)
-- Lessons: [Sprint 4 - Plugin Commands Implementation](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/lessons/sprints/sprint-4---plugin-commands-implementation)
-
-### Fixed
-- **MCP:** Project directory detection - all run.sh scripts now capture `CLAUDE_PROJECT_DIR` from PWD before changing directories
-- **Docs:** Added Gitea auto-close behavior and MCP session restart notes to DEBUGGING-CHECKLIST.md
-
----
-
-#### Sprint 3: Hooks (V5.2.0 Plugin Enhancements)
-Implementation of 6 foundational hooks across 4 plugins.
-
-**git-flow v1.1.0:**
-- **Commit message enforcement hook** - PreToolUse hook validates conventional commit format on all `git commit` commands (not just `/commit`). Blocks invalid commits with format guidance.
-- **Branch name validation hook** - PreToolUse hook validates branch naming on `git checkout -b` and `git switch -c`. Enforces `type/description` format, lowercase, max 50 chars.
-
-**clarity-assist v1.1.0:**
-- **Vagueness detection hook** - UserPromptSubmit hook detects vague prompts and suggests `/clarify` when ambiguity, missing context, or unclear scope detected.
-
-**data-platform v1.1.0:**
-- **Schema diff detection hook** - PostToolUse hook monitors edits to schema files (dbt models, SQL migrations). Warns on breaking changes (column removal, type narrowing, constraint addition).
-
-**contract-validator v1.1.0:**
-- **SessionStart auto-validate hook** - Smart validation that only runs when plugin files changed since last check. Detects interface compatibility issues at session start.
-- **Breaking change detection hook** - PostToolUse hook monitors plugin interface files (README.md, plugin.json). Warns when changes would break consumers.
-
-**Sprint Completed:**
-- Milestone: Sprint 3 - Hooks (closed 2026-01-28)
-- Issues: #225, #226, #227, #228, #229, #230
-- Wiki: [Change V5.2.0: Plugin Enhancements Proposal](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/Change-V5.2.0:-Plugin-Enhancements-Proposal)
-- Lessons: Background agent permissions, agent runaway detection, MCP branch detection bug
-
-### Known Issues
-- **MCP Bug #231:** Branch detection in Gitea MCP runs from installed plugin directory, not user's project directory. Workaround: close issues via Gitea web UI.
-
----
-
 #### Gitea MCP Server - create_pull_request Tool
 - **`create_pull_request`**: Create new pull requests via MCP
   - Parameters: title, body, head (source branch), base (target branch), labels
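The Sprint 3 entries above describe git-flow hooks that enforce conventional commit messages and `type/description` branch names. As a rough illustration only — the hook code itself is not part of this compare, so the accepted commit types and regexes below are assumptions — a validation along those lines might look like:

```python
import re

# Assumed commit types; the actual git-flow hook may accept a different set.
COMMIT_RE = re.compile(r"^(feat|fix|docs|style|refactor|test|chore)(\([\w.-]+\))?: .+")
# Assumed rule from the changelog entry: type/description, lowercase, max 50 chars.
BRANCH_RE = re.compile(r"^[a-z]+/[a-z0-9._-]+$")


def check_commit_message(message: str) -> bool:
    """Return True if the first line looks like a conventional commit."""
    first_line = (message.splitlines() or [""])[0]
    return bool(COMMIT_RE.match(first_line))


def check_branch_name(name: str) -> bool:
    """Return True if the branch name matches type/description, lowercase, <= 50 chars."""
    return len(name) <= 50 and bool(BRANCH_RE.match(name))


if __name__ == "__main__":
    print(check_commit_message("feat(git-flow): add branch validation hook"))  # True
    print(check_branch_name("feature/branch-validation"))                      # True
    print(check_branch_name("Feature/Branch Validation"))                      # False
```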
@@ -50,7 +50,7 @@ See `docs/DEBUGGING-CHECKLIST.md` for details on cache timing.
 ## Project Overview
 
 **Repository:** leo-claude-mktplace
-**Version:** 5.1.0
+**Version:** 5.0.0
 **Status:** Production Ready
 
 A plugin marketplace for Claude Code containing:
@@ -1,4 +1,4 @@
-# Leo Claude Marketplace - v5.2.0
+# Leo Claude Marketplace - v5.1.0
 
 A collection of Claude Code plugins for project management, infrastructure automation, and development workflows.
 

@@ -19,7 +19,7 @@ AI-guided sprint planning with full Gitea integration. Transforms a proven 15-sp
 - Branch-aware security (development/staging/production)
 - Pre-sprint-close code quality review and test verification
 
-**Commands:** `/sprint-plan`, `/sprint-start`, `/sprint-status`, `/sprint-close`, `/labels-sync`, `/initial-setup`, `/project-init`, `/project-sync`, `/review`, `/test-check`, `/test-gen`, `/debug-report`, `/debug-review`, `/suggest-version`, `/proposal-status`
+**Commands:** `/sprint-plan`, `/sprint-start`, `/sprint-status`, `/sprint-close`, `/labels-sync`, `/initial-setup`, `/project-init`, `/project-sync`, `/review`, `/test-check`, `/test-gen`, `/debug-report`, `/debug-review`
 
 #### [git-flow](./plugins/git-flow/README.md) *NEW in v3.0.0*
 **Git Workflow Automation**

@@ -107,7 +107,7 @@ Security vulnerability detection and code refactoring tools.
 
 Full CRUD operations for network infrastructure management directly from Claude Code.
 
-**Commands:** `/initial-setup`, `/cmdb-search`, `/cmdb-device`, `/cmdb-ip`, `/cmdb-site`, `/cmdb-audit`, `/cmdb-register`, `/cmdb-sync`
+**Commands:** `/initial-setup`, `/cmdb-search`, `/cmdb-device`, `/cmdb-ip`, `/cmdb-site`
 
 ### Data Engineering
 
@@ -2,7 +2,7 @@
 
 **This file defines ALL valid paths in this repository. No exceptions. No inference. No assumptions.**
 
-Last Updated: 2026-01-27 (v5.1.0)
+Last Updated: 2026-01-26 (v5.0.0)
 
 ---
 

@@ -165,11 +165,7 @@ leo-claude-mktplace/
 │   ├── setup.sh                  # Initial setup (create venvs, config templates)
 │   ├── post-update.sh            # Post-update (rebuild venvs, verify symlinks)
 │   ├── check-venv.sh             # Check if venvs exist (for hooks)
-│   ├── validate-marketplace.sh   # Marketplace compliance validation
-│   ├── verify-hooks.sh           # Verify all hooks use correct event types
-│   ├── setup-venvs.sh            # Setup/repair MCP server venvs
-│   ├── venv-repair.sh            # Repair broken venv symlinks
-│   └── release.sh                # Release automation with version bumping
+│   └── validate-marketplace.sh   # Marketplace compliance validation
 ├── CLAUDE.md
 ├── README.md
 ├── LICENSE
@@ -23,7 +23,6 @@ Quick reference for all commands in the Leo Claude Marketplace.
 | **projman** | `/debug-report` | | X | Run diagnostics and create structured issue in marketplace |
 | **projman** | `/debug-review` | | X | Investigate diagnostic issues and propose fixes with approval gates |
 | **projman** | `/suggest-version` | | X | Analyze CHANGELOG and recommend semantic version bump |
-| **projman** | `/proposal-status` | | X | View proposal and implementation hierarchy with status |
 | **git-flow** | `/commit` | | X | Create commit with auto-generated conventional message |
 | **git-flow** | `/commit-push` | | X | Commit and push to remote in one operation |
 | **git-flow** | `/commit-merge` | | X | Commit current changes, then merge into target branch |

@@ -56,9 +55,6 @@ Quick reference for all commands in the Leo Claude Marketplace.
 | **cmdb-assistant** | `/cmdb-device` | | X | Manage network devices (create, view, update, delete) |
 | **cmdb-assistant** | `/cmdb-ip` | | X | Manage IP addresses and prefixes |
 | **cmdb-assistant** | `/cmdb-site` | | X | Manage sites, locations, racks, and regions |
-| **cmdb-assistant** | `/cmdb-audit` | | X | Data quality analysis (VMs, devices, naming, roles) |
-| **cmdb-assistant** | `/cmdb-register` | | X | Register current machine into NetBox with running apps |
-| **cmdb-assistant** | `/cmdb-sync` | | X | Sync machine state with NetBox (detect drift, update) |
 | **project-hygiene** | *PostToolUse hook* | X | | Removes temp files, warns about unexpected root files |
 | **data-platform** | `/ingest` | | X | Load data from CSV, Parquet, JSON into DataFrame |
 | **data-platform** | `/profile` | | X | Generate data profiling report with statistics |
@@ -171,7 +171,8 @@ This marketplace uses a **hybrid configuration** approach:
 │ PROJECT-LEVEL (once per project)                                │
 │ <project-root>/.env                                             │
 ├─────────────────────────────────────────────────────────────────┤
-│ GITEA_REPO │ Repository as owner/repo format │
+│ GITEA_ORG │ Organization for this project │
+│ GITEA_REPO │ Repository name for this project │
 │ GIT_WORKFLOW_STYLE │ (optional) Override system default │
 │ PR_REVIEW_* │ (optional) PR review settings │
 └─────────────────────────────────────────────────────────────────┘

@@ -261,7 +262,8 @@ In each project root:
 
 ```bash
 cat > .env << 'EOF'
-GITEA_REPO=your-organization/your-repo-name
+GITEA_ORG=your-organization
+GITEA_REPO=your-repo-name
 EOF
 ```
 

@@ -305,7 +307,7 @@ GITEA_API_TOKEN=your_gitea_token_here
 | `GITEA_API_URL` | Gitea API endpoint (with `/api/v1`) | `https://gitea.example.com/api/v1` |
 | `GITEA_API_TOKEN` | Personal access token | `abc123...` |
 
-**Note:** `GITEA_REPO` is configured at the project level in `owner/repo` format since different projects may belong to different organizations.
+**Note:** `GITEA_ORG` is configured at the project level (see below) since different projects may belong to different organizations.
 
 **Generating a Gitea Token:**
 1. Log into Gitea → **User Icon** → **Settings**

@@ -360,8 +362,9 @@ GIT_CO_AUTHOR=true
 Create `.env` in each project root:
 
 ```bash
-# Required for projman, pr-review (use owner/repo format)
-GITEA_REPO=your-organization/your-repo-name
+# Required for projman, pr-review
+GITEA_ORG=your-organization
+GITEA_REPO=your-repo-name
 
 # Optional: Override git-flow defaults
 GIT_WORKFLOW_STYLE=pr-required

@@ -374,7 +377,8 @@ PR_REVIEW_AUTO_SUBMIT=false
 
 | Variable | Required | Description |
 |----------|----------|-------------|
-| `GITEA_REPO` | Yes | Repository in `owner/repo` format (e.g., `my-org/my-repo`) |
+| `GITEA_ORG` | Yes | Gitea organization for this project |
+| `GITEA_REPO` | Yes | Repository name (must match Gitea exactly) |
 | `GIT_WORKFLOW_STYLE` | No | Override system default |
 | `PR_REVIEW_*` | No | PR review settings |
 

@@ -384,8 +388,8 @@ PR_REVIEW_AUTO_SUBMIT=false
 
 | Plugin | System Config | Project Config | Setup Commands |
 |--------|---------------|----------------|----------------|
-| **projman** | gitea.env | .env (GITEA_REPO=owner/repo) | `/initial-setup`, `/project-init`, `/project-sync` |
-| **pr-review** | gitea.env | .env (GITEA_REPO=owner/repo) | `/initial-setup`, `/project-init`, `/project-sync` |
+| **projman** | gitea.env | .env (GITEA_ORG, GITEA_REPO) | `/initial-setup`, `/project-init`, `/project-sync` |
+| **pr-review** | gitea.env | .env (GITEA_ORG, GITEA_REPO) | `/initial-setup`, `/project-init`, `/project-sync` |
 | **git-flow** | git-flow.env (optional) | .env (optional) | None needed |
 | **clarity-assist** | None | None | None needed |
 | **cmdb-assistant** | netbox.env | None | `/initial-setup` |

@@ -437,7 +441,7 @@ This catches typos and permission issues before saving configuration.
 
 When you start a Claude Code session, a hook automatically:
 
-1. Reads `GITEA_REPO` (in `owner/repo` format) from `.env`
+1. Reads `GITEA_ORG` and `GITEA_REPO` from `.env`
 2. Compares with current `git remote get-url origin`
 3. **Warns** if mismatch detected: "Repository location mismatch. Run `/project-sync` to update."
 

@@ -516,8 +520,7 @@ deactivate
 # Check project .env
 cat .env
 
-# Verify GITEA_REPO is in owner/repo format and matches Gitea exactly
-# Example: GITEA_REPO=my-org/my-repo
+# Verify GITEA_REPO matches the Gitea repository name exactly
 ```
 
 ---
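The configuration hunks above describe a session-start hook that reads the project `.env` and compares it with `git remote get-url origin`. A minimal standalone sketch of that check, supporting both the `owner/repo` style and the `GITEA_ORG` + `GITEA_REPO` style, is shown below; the function names and regex are illustrative assumptions, not the plugin's actual hook code.

```python
import re
import subprocess
from pathlib import Path


def read_env(path: str = ".env") -> dict:
    """Parse simple KEY=value lines from a project .env file."""
    env = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env


def check_repo_match(env: dict) -> None:
    """Warn if the configured repo does not match the git remote."""
    origin = subprocess.run(
        ["git", "remote", "get-url", "origin"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    # Accept either style: GITEA_REPO=owner/repo, or GITEA_ORG + GITEA_REPO.
    repo = env.get("GITEA_REPO", "")
    expected = repo if "/" in repo else f"{env.get('GITEA_ORG', '')}/{repo}"
    # Normalise the remote URL (HTTPS or SSH) down to "owner/repo".
    match = re.search(r"[:/]([^/:]+/[^/]+?)(\.git)?$", origin)
    actual = match.group(1) if match else origin
    if expected and actual != expected:
        print("Repository location mismatch. Run /project-sync to update.")


if __name__ == "__main__":
    check_repo_match(read_env())
```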
@@ -2,7 +2,7 @@
 
 **Purpose:** Systematic approach to diagnose and fix plugin loading issues.
 
-Last Updated: 2026-01-28
+Last Updated: 2026-01-22
 
 ---
 

@@ -128,7 +128,7 @@ cat ~/.config/claude/netbox.env
 
 # Project-level config (in target project)
 cat /path/to/project/.env
-# Should contain: GITEA_REPO=owner/repo (e.g., my-org/my-repo)
+# Should contain: GITEA_ORG, GITEA_REPO
 ```
 
 ---

@@ -186,47 +186,6 @@ echo -e "\n=== Config Files ==="
 | Wrong path edits | Changes don't take effect | Edit installed path or reinstall after source changes |
 | Missing credentials | MCP connection errors | Create `~/.config/claude/gitea.env` with API credentials |
 | Invalid hook events | Hooks don't fire | Use only valid event names (see Step 7) |
-| Gitea issues not closing | Merged to non-default branch | Manually close issues (see below) |
-| MCP changes not taking effect | Session caching | Restart Claude Code session (see below) |
-
-### Gitea Auto-Close Behavior
-
-**Issue:** Using `Closes #XX` or `Fixes #XX` in commit/PR messages does NOT auto-close issues when merging to `development`.
-
-**Root Cause:** Gitea only auto-closes issues when merging to the **default branch** (typically `main` or `master`). Merging to `development`, `staging`, or any other branch will NOT trigger auto-close.
-
-**Workaround:**
-1. Use the Gitea MCP tool to manually close issues after merging to development:
-   ```
-   mcp__plugin_projman_gitea__update_issue(issue_number=XX, state="closed")
-   ```
-2. Or close issues via the Gitea web UI
-3. The auto-close keywords will still work when the changes are eventually merged to `main`
-
-**Recommendation:** Include the `Closes #XX` keywords in commits anyway - they'll work when the final merge to `main` happens.
-
-### MCP Session Restart Requirement
-
-**Issue:** Changes to MCP servers, hooks, or plugin configuration don't take effect immediately.
-
-**Root Cause:** Claude Code loads MCP tools and plugin configuration at session start. These are cached in session memory and not reloaded dynamically.
-
-**What requires a session restart:**
-- MCP server code changes (Python files in `mcp-servers/`)
-- Changes to `.mcp.json` files
-- Changes to `hooks/hooks.json`
-- Changes to `plugin.json`
-- Adding new MCP tools or modifying tool signatures
-
-**What does NOT require a restart:**
-- Command/skill markdown files (`.md`) - these are read on invocation
-- Agent markdown files - read when agent is invoked
-
-**Correct workflow after plugin changes:**
-1. Make changes to source files
-2. Run `./scripts/verify-hooks.sh` to validate
-3. Inform user: "Please restart Claude Code for changes to take effect"
-4. **Do NOT clear cache mid-session** - see "Cache Clearing" section
-
 ---
 
@@ -1,8 +1,4 @@
 #!/bin/bash
-# Capture original working directory before any cd operations
-# This should be the user's project directory when launched by Claude Code
-export CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"
-
 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/contract-validator/.venv"
 LOCAL_VENV="$SCRIPT_DIR/.venv"
@@ -1,8 +1,4 @@
 #!/bin/bash
-# Capture original working directory before any cd operations
-# This should be the user's project directory when launched by Claude Code
-export CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"
-
 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/data-platform/.venv"
 LOCAL_VENV="$SCRIPT_DIR/.venv"
@@ -135,24 +135,9 @@ class GiteaClient:
         body: Optional[str] = None,
         state: Optional[str] = None,
         labels: Optional[List[str]] = None,
-        milestone: Optional[int] = None,
         repo: Optional[str] = None
     ) -> Dict:
-        """
-        Update existing issue.
-
-        Args:
-            issue_number: Issue number to update
-            title: New title (optional)
-            body: New body (optional)
-            state: New state - 'open' or 'closed' (optional)
-            labels: New labels (optional)
-            milestone: Milestone ID to assign (optional)
-            repo: Repository in 'owner/repo' format
-
-        Returns:
-            Updated issue dictionary
-        """
+        """Update existing issue. Repo must be 'owner/repo' format."""
         owner, target_repo = self._parse_repo(repo)
         url = f"{self.base_url}/repos/{owner}/{target_repo}/issues/{issue_number}"
         data = {}

@@ -164,8 +149,6 @@ class GiteaClient:
             data['state'] = state
         if labels is not None:
             data['labels'] = labels
-        if milestone is not None:
-            data['milestone'] = milestone
         logger.info(f"Updating issue #{issue_number} in {owner}/{target_repo}")
         response = self.session.patch(url, json=data)
         response.raise_for_status()
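For reference, the `milestone` parameter removed in the hunks above was a plain pass-through to the Gitea issue PATCH endpoint. A hypothetical call against the client shown here might look like the following; the constructor arguments, issue number, and milestone ID are placeholders for illustration, not values taken from this repository.

```python
# Sketch only: assumes a configured GiteaClient instance; construction details are not shown in this diff.
client = GiteaClient()  # hypothetical initialisation

updated = client.update_issue(
    issue_number=42,                       # placeholder issue number
    state="closed",                        # close the issue
    milestone=7,                           # assign milestone by ID (parameter removed in this commit)
    repo="your-organization/your-repo",    # 'owner/repo' format per the old docstring
)
print(updated.get("state"))
```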
@@ -168,10 +168,6 @@ class GiteaMCPServer:
                     "items": {"type": "string"},
                     "description": "New labels"
                 },
-                "milestone": {
-                    "type": "integer",
-                    "description": "Milestone ID to assign"
-                },
                 "repo": {
                     "type": "string",
                     "description": "Repository name (for PMO mode)"
@@ -7,7 +7,6 @@ Provides async wrappers for issue CRUD operations with:
 - Comprehensive error handling
 """
 import asyncio
-import os
 import subprocess
 import logging
 from typing import List, Dict, Optional

@@ -28,34 +27,19 @@ class IssueTools:
         """
         self.gitea = gitea_client
 
-    def _get_project_directory(self) -> Optional[str]:
-        """
-        Get the user's project directory from environment.
-
-        Returns:
-            Project directory path or None if not set
-        """
-        return os.environ.get('CLAUDE_PROJECT_DIR')
-
     def _get_current_branch(self) -> str:
         """
-        Get current git branch from user's project directory.
+        Get current git branch.
 
-        Uses CLAUDE_PROJECT_DIR environment variable to determine the correct
-        directory for git operations, avoiding the bug where git runs from
-        the installed plugin directory instead of the user's project.
-
         Returns:
             Current branch name or 'unknown' if not in a git repo
         """
         try:
-            project_dir = self._get_project_directory()
             result = subprocess.run(
                 ['git', 'rev-parse', '--abbrev-ref', 'HEAD'],
                 capture_output=True,
                 text=True,
-                check=True,
-                cwd=project_dir  # Run git in project directory, not plugin directory
+                check=True
             )
             return result.stdout.strip()
         except subprocess.CalledProcessError:

@@ -200,7 +184,6 @@ class IssueTools:
         body: Optional[str] = None,
         state: Optional[str] = None,
         labels: Optional[List[str]] = None,
-        milestone: Optional[int] = None,
         repo: Optional[str] = None
     ) -> Dict:
         """

@@ -212,7 +195,6 @@ class IssueTools:
             body: New body (optional)
             state: New state - 'open' or 'closed' (optional)
             labels: New labels (optional)
-            milestone: Milestone ID to assign (optional)
             repo: Override configured repo (for PMO multi-repo)
 
         Returns:

@@ -231,7 +213,7 @@ class IssueTools:
         loop = asyncio.get_event_loop()
         return await loop.run_in_executor(
             None,
-            lambda: self.gitea.update_issue(issue_number, title, body, state, labels, milestone, repo)
+            lambda: self.gitea.update_issue(issue_number, title, body, state, labels, repo)
         )
 
     async def add_comment(
@@ -7,7 +7,6 @@ Provides async wrappers for PR operations with:
 - Comprehensive error handling
 """
 import asyncio
-import os
 import subprocess
 import logging
 from typing import List, Dict, Optional

@@ -28,34 +27,19 @@ class PullRequestTools:
         """
         self.gitea = gitea_client
 
-    def _get_project_directory(self) -> Optional[str]:
-        """
-        Get the user's project directory from environment.
-
-        Returns:
-            Project directory path or None if not set
-        """
-        return os.environ.get('CLAUDE_PROJECT_DIR')
-
     def _get_current_branch(self) -> str:
         """
-        Get current git branch from user's project directory.
+        Get current git branch.
 
-        Uses CLAUDE_PROJECT_DIR environment variable to determine the correct
-        directory for git operations, avoiding the bug where git runs from
-        the installed plugin directory instead of the user's project.
-
         Returns:
             Current branch name or 'unknown' if not in a git repo
         """
         try:
-            project_dir = self._get_project_directory()
             result = subprocess.run(
                 ['git', 'rev-parse', '--abbrev-ref', 'HEAD'],
                 capture_output=True,
                 text=True,
-                check=True,
-                cwd=project_dir  # Run git in project directory, not plugin directory
+                check=True
             )
             return result.stdout.strip()
         except subprocess.CalledProcessError:
@@ -1,8 +1,4 @@
 #!/bin/bash
-# Capture original working directory before any cd operations
-# This should be the user's project directory when launched by Claude Code
-export CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"
-
 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/gitea/.venv"
 LOCAL_VENV="$SCRIPT_DIR/.venv"
@@ -1,8 +1,4 @@
 #!/bin/bash
-# Capture original working directory before any cd operations
-# This should be the user's project directory when launched by Claude Code
-export CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"
-
 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/netbox/.venv"
 LOCAL_VENV="$SCRIPT_DIR/.venv"
@@ -1,479 +0,0 @@
-"""
-Accessibility validation tools for color blindness and WCAG compliance.
-
-Provides tools for validating color palettes against color blindness
-simulations and WCAG contrast requirements.
-"""
-import logging
-import math
-from typing import Dict, List, Optional, Any, Tuple
-
-logger = logging.getLogger(__name__)
-
-# Color-blind safe palettes
-SAFE_PALETTES = {
-    "categorical": {"name": "Paul Tol's Qualitative", "colors": ["#4477AA", "#EE6677", "#228833", "#CCBB44", "#66CCEE", "#AA3377", "#BBBBBB"], "description": "Distinguishable for all types of color blindness"},
-    "ibm": {"name": "IBM Design", "colors": ["#648FFF", "#785EF0", "#DC267F", "#FE6100", "#FFB000"], "description": "IBM's accessible color palette"},
-    "okabe_ito": {"name": "Okabe-Ito", "colors": ["#E69F00", "#56B4E9", "#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7", "#000000"], "description": "Optimized for all color vision deficiencies"},
-    "tableau_colorblind": {"name": "Tableau Colorblind 10", "colors": ["#006BA4", "#FF800E", "#ABABAB", "#595959", "#5F9ED1", "#C85200", "#898989", "#A2C8EC", "#FFBC79", "#CFCFCF"], "description": "Industry-standard accessible palette"},
-}
-
-# Simulation matrices for color blindness (LMS color space transformation)
-# These approximate how colors appear to people with different types of color blindness
-SIMULATION_MATRICES = {
-    "deuteranopia": {  # Green-blind (most common)
-        "severity": "common", "population": "6% males, 0.4% females",
-        "description": "Difficulty distinguishing red from green (green-blind)",
-        "matrix": [[0.625, 0.375, 0.0], [0.700, 0.300, 0.0], [0.0, 0.300, 0.700]],
-    },
-    "protanopia": {  # Red-blind
-        "severity": "common", "population": "2.5% males, 0.05% females",
-        "description": "Difficulty distinguishing red from green (red-blind)",
-        "matrix": [[0.567, 0.433, 0.0], [0.558, 0.442, 0.0], [0.0, 0.242, 0.758]],
-    },
-    "tritanopia": {  # Blue-blind (rare)
-        "severity": "rare", "population": "0.01% total",
-        "description": "Difficulty distinguishing blue from yellow",
-        "matrix": [[0.950, 0.050, 0.0], [0.0, 0.433, 0.567], [0.0, 0.475, 0.525]],
-    },
-}
-
-class AccessibilityTools:
-    """
-    Color accessibility validation tools.
-
-    Validates colors for WCAG compliance and color blindness accessibility.
-    """
-
-    def __init__(self, theme_store=None):
-        """Initialize accessibility tools; theme_store is an optional ThemeStore for theme color extraction."""
-        self.theme_store = theme_store
-
-    def _hex_to_rgb(self, hex_color: str) -> Tuple[int, int, int]:
-        """Convert hex color to RGB tuple."""
-        hex_color = hex_color.lstrip('#')
-        if len(hex_color) == 3:
-            hex_color = ''.join([c * 2 for c in hex_color])
-        return tuple(int(hex_color[i:i+2], 16) for i in (0, 2, 4))
-
-    def _rgb_to_hex(self, rgb: Tuple[int, int, int]) -> str:
-        """Convert RGB tuple to hex color."""
-        return '#{:02x}{:02x}{:02x}'.format(
-            max(0, min(255, int(rgb[0]))),
-            max(0, min(255, int(rgb[1]))),
-            max(0, min(255, int(rgb[2])))
-        )
-
-    def _get_relative_luminance(self, rgb: Tuple[int, int, int]) -> float:
-        """Calculate relative luminance per WCAG 2.1 (https://www.w3.org/WAI/GL/wiki/Relative_luminance)."""
-        def channel_luminance(value: int) -> float:
-            v = value / 255
-            return v / 12.92 if v <= 0.03928 else ((v + 0.055) / 1.055) ** 2.4
-        r, g, b = rgb
-        return 0.2126 * channel_luminance(r) + 0.7152 * channel_luminance(g) + 0.0722 * channel_luminance(b)
-
-    def _get_contrast_ratio(self, color1: str, color2: str) -> float:
-        """Calculate contrast ratio between two colors per WCAG 2.1 (between 1:1 and 21:1)."""
-        l1 = self._get_relative_luminance(self._hex_to_rgb(color1))
-        l2 = self._get_relative_luminance(self._hex_to_rgb(color2))
-        lighter, darker = max(l1, l2), min(l1, l2)
-        return (lighter + 0.05) / (darker + 0.05)
-
-    def _simulate_color_blindness(self, hex_color: str, deficiency_type: str) -> str:
-        """Simulate how a color appears with a specific color blindness type (linear RGB approximation)."""
-        if deficiency_type not in SIMULATION_MATRICES:
-            return hex_color
-        rgb = self._hex_to_rgb(hex_color)
-        matrix = SIMULATION_MATRICES[deficiency_type]["matrix"]
-        r = rgb[0] * matrix[0][0] + rgb[1] * matrix[0][1] + rgb[2] * matrix[0][2]
-        g = rgb[0] * matrix[1][0] + rgb[1] * matrix[1][1] + rgb[2] * matrix[1][2]
-        b = rgb[0] * matrix[2][0] + rgb[1] * matrix[2][1] + rgb[2] * matrix[2][2]
-        return self._rgb_to_hex((r, g, b))
-
-    def _get_color_distance(self, color1: str, color2: str) -> float:
-        """Perceptual color distance (CIE76 approximation); < 20 means colors may be hard to distinguish."""
-        rgb1, rgb2 = self._hex_to_rgb(color1), self._hex_to_rgb(color2)
-        # Simple Euclidean distance in RGB space (approximation); for production, should use CIEDE2000
-        return math.sqrt((rgb1[0] - rgb2[0]) ** 2 + (rgb1[1] - rgb2[1]) ** 2 + (rgb1[2] - rgb2[2]) ** 2)
-
-    async def accessibility_validate_colors(self, colors: List[str], check_types: Optional[List[str]] = None, min_contrast_ratio: float = 4.5) -> Dict[str, Any]:
-        """
-        Validate a list of hex colors for accessibility.
-
-        Simulates each requested color-blindness type (default: all), flags color pairs
-        whose simulated distance falls below a distinguishability threshold (error under
-        15, warning under 30), flags colors with insufficient contrast against both white
-        and black backgrounds (default minimum 4.5:1 for AA), grades the palette A-D from
-        the error/warning counts, and returns colors_checked, overall_score, issues,
-        simulations, recommendations, and the safe palettes above.
-        """
-        ...
-
-    async def accessibility_validate_theme(self, theme_name: str) -> Dict[str, Any]:
-        """
-        Validate a theme's colors for accessibility.
-
-        Requires a configured theme store; extracts color tokens recursively from the
-        theme, runs accessibility_validate_colors on them, and additionally checks the
-        primary color's contrast against the background (4.5:1), returning error dicts
-        when the store or theme is missing.
-        """
-        ...
-
-    async def accessibility_suggest_alternative(self, color: str, deficiency_type: str) -> Dict[str, Any]:
-        """
-        Suggest accessible alternative colors for the given deficiency type.
-
-        Produces hue-shifted variants (blue- and yellow-shifted for deuteranopia and
-        protanopia, red-shifted for tritanopia) plus the closest color from each safe
-        palette, limited to 5 suggestions.
-        """
-        ...
-
-    def _generate_recommendations(self, issues: List[Dict[str, Any]]) -> List[str]:
-        """Generate actionable recommendations (safe palettes, contrast fixes, secondary visual cues) based on the issues found."""
-        ...
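As a quick numerical check of the WCAG 2.1 formulas used by `_get_relative_luminance` and `_get_contrast_ratio` in the removed module above: pure white against pure black yields the maximum 21:1 ratio, and mid gray `#767676` sits near the 4.5:1 AA threshold for body text.

```python
# Worked example of the WCAG 2.1 contrast formula shown in the hunk above.
def channel_luminance(value: int) -> float:
    v = value / 255
    return v / 12.92 if v <= 0.03928 else ((v + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb) -> float:
    r, g, b = rgb
    return 0.2126 * channel_luminance(r) + 0.7152 * channel_luminance(g) + 0.0722 * channel_luminance(b)

def contrast(rgb1, rgb2) -> float:
    l1, l2 = relative_luminance(rgb1), relative_luminance(rgb2)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

print(contrast((255, 255, 255), (0, 0, 0)))        # 21.0 -- maximum possible ratio
print(contrast((255, 255, 255), (118, 118, 118)))  # ~4.5 -- roughly the WCAG AA threshold
```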
@@ -3,21 +3,11 @@ Chart creation tools using Plotly.
 
 Provides tools for creating data visualizations with automatic theme integration.
 """
-import base64
 import logging
-import os
 from typing import Dict, List, Optional, Any, Union
 
 logger = logging.getLogger(__name__)
 
-# Check for kaleido availability
-KALEIDO_AVAILABLE = False
-try:
-    import kaleido
-    KALEIDO_AVAILABLE = True
-except ImportError:
-    logger.debug("kaleido not installed - chart export will be unavailable")
-
 
 # Default color palette based on Mantine theme
 DEFAULT_COLORS = [
@@ -405,129 +395,3 @@ class ChartTools:
             "figure": figure,
             "interactions_added": []
         }
-
-    async def chart_export(
-        self,
-        figure: Dict[str, Any],
-        format: str = "png",
-        width: Optional[int] = None,
-        height: Optional[int] = None,
-        scale: float = 2.0,
-        output_path: Optional[str] = None
-    ) -> Dict[str, Any]:
-        """
-        Export a Plotly chart to a static image format.
-
-        Args:
-            figure: Plotly figure JSON to export
-            format: Output format - png, svg, or pdf
-            width: Image width in pixels (default: from figure or 1200)
-            height: Image height in pixels (default: from figure or 800)
-            scale: Resolution scale factor (default: 2 for retina)
-            output_path: Optional file path to save the image
-
-        Returns:
-            Dict with image_data (base64, if no output_path), file_path (if output_path
-            provided), format, dimensions {width, height, scale}, and error on failure.
-        """
-        # Validates format against ['png', 'svg', 'pdf'], checks KALEIDO_AVAILABLE
-        # (returning an "pip install kaleido" hint if missing), and requires a 'data'
-        # key in the figure. Builds a plotly.graph_objects.Figure, takes width/height
-        # from the arguments or the figure layout (defaulting to 1200x800), and renders
-        # via plotly.io.to_image(fig, format=format, width=..., height=..., scale=scale).
-        # The bytes are written to output_path (creating directories and appending the
-        # extension if needed, reporting file_path and file_size_bytes) or returned
-        # base64-encoded together with a data URI. ImportError and other exceptions are
-        # logged and returned as error dicts with install hints.
-        ...
@@ -10,46 +10,6 @@ from uuid import uuid4
|
|||||||
logger = logging.getLogger(__name__)
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
|
||||||
# Standard responsive breakpoints (Mantine/Bootstrap-aligned)
|
|
||||||
DEFAULT_BREAKPOINTS = {
|
|
||||||
"xs": {
|
|
||||||
"min_width": "0px",
|
|
||||||
"max_width": "575px",
|
|
||||||
"cols": 1,
|
|
||||||
"spacing": "xs",
|
|
||||||
"description": "Extra small devices (phones, portrait)"
|
|
||||||
},
|
|
||||||
"sm": {
|
|
||||||
"min_width": "576px",
|
|
||||||
"max_width": "767px",
|
|
||||||
"cols": 2,
|
|
||||||
"spacing": "sm",
|
|
||||||
"description": "Small devices (phones, landscape)"
|
|
||||||
},
|
|
||||||
"md": {
|
|
||||||
"min_width": "768px",
|
|
||||||
"max_width": "991px",
|
|
||||||
"cols": 6,
|
|
||||||
"spacing": "md",
|
|
||||||
"description": "Medium devices (tablets)"
|
|
||||||
},
|
|
||||||
"lg": {
|
|
||||||
"min_width": "992px",
|
|
||||||
"max_width": "1199px",
|
|
||||||
"cols": 12,
|
|
||||||
"spacing": "md",
|
|
||||||
"description": "Large devices (desktops)"
|
|
||||||
},
|
|
||||||
"xl": {
|
|
||||||
"min_width": "1200px",
|
|
||||||
"max_width": None,
|
|
||||||
"cols": 12,
|
|
||||||
"spacing": "lg",
|
|
||||||
"description": "Extra large devices (large desktops)"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
|
|
||||||
# Layout templates
|
# Layout templates
|
||||||
TEMPLATES = {
|
TEMPLATES = {
|
||||||
"dashboard": {
|
"dashboard": {
|
||||||
@@ -405,149 +365,3 @@ class LayoutTools:
|
|||||||
}
|
}
|
||||||
for name, config in FILTER_TYPES.items()
|
for name, config in FILTER_TYPES.items()
|
||||||
}
|
}
|
||||||
|
|
||||||
async def layout_set_breakpoints(
|
|
||||||
self,
|
|
||||||
layout_ref: str,
|
|
||||||
breakpoints: Dict[str, Dict[str, Any]],
|
|
||||||
mobile_first: bool = True
|
|
||||||
) -> Dict[str, Any]:
|
|
||||||
"""
|
|
||||||
Configure responsive breakpoints for a layout.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
layout_ref: Layout name to configure
|
|
||||||
breakpoints: Breakpoint configuration dict:
|
|
||||||
{
|
|
||||||
"xs": {"cols": 1, "spacing": "xs"},
|
|
||||||
"sm": {"cols": 2, "spacing": "sm"},
|
|
||||||
"md": {"cols": 6, "spacing": "md"},
|
|
||||||
"lg": {"cols": 12, "spacing": "md"},
|
|
||||||
"xl": {"cols": 12, "spacing": "lg"}
|
|
||||||
}
|
|
||||||
mobile_first: If True, use min-width media queries (default)
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
Dict with:
|
|
||||||
- breakpoints: Complete breakpoint configuration
|
|
||||||
- css_media_queries: Generated CSS media queries
|
|
||||||
- mobile_first: Whether mobile-first approach is used
|
|
||||||
"""
|
|
||||||
# Validate layout exists
|
|
||||||
if layout_ref not in self._layouts:
|
|
||||||
return {
|
|
||||||
"error": f"Layout '{layout_ref}' not found. Create it first with layout_create.",
|
|
||||||
"breakpoints": None
|
|
||||||
}
|
|
||||||
|
|
||||||
layout = self._layouts[layout_ref]
|
|
||||||
|
|
||||||
# Validate breakpoint names
|
|
||||||
valid_breakpoints = ["xs", "sm", "md", "lg", "xl"]
|
|
||||||
for bp_name in breakpoints.keys():
|
|
||||||
if bp_name not in valid_breakpoints:
|
|
||||||
return {
|
|
||||||
"error": f"Invalid breakpoint '{bp_name}'. Must be one of: {valid_breakpoints}",
|
|
||||||
"breakpoints": layout.get("breakpoints")
|
|
||||||
}
|
|
||||||
|
|
||||||
# Merge with defaults
|
|
||||||
merged_breakpoints = {}
|
|
||||||
for bp_name in valid_breakpoints:
|
|
||||||
default = DEFAULT_BREAKPOINTS[bp_name].copy()
|
|
||||||
if bp_name in breakpoints:
|
|
||||||
default.update(breakpoints[bp_name])
|
|
||||||
merged_breakpoints[bp_name] = default
|
|
||||||
|
|
||||||
# Validate spacing values
|
|
||||||
valid_spacing = ["xs", "sm", "md", "lg", "xl"]
|
|
||||||
for bp_name, bp_config in merged_breakpoints.items():
|
|
||||||
if "spacing" in bp_config and bp_config["spacing"] not in valid_spacing:
|
|
||||||
return {
|
|
||||||
"error": f"Invalid spacing '{bp_config['spacing']}' for breakpoint '{bp_name}'. Must be one of: {valid_spacing}",
|
|
||||||
"breakpoints": layout.get("breakpoints")
|
|
||||||
}
|
|
||||||
|
|
||||||
# Validate column counts
|
|
||||||
for bp_name, bp_config in merged_breakpoints.items():
|
|
||||||
if "cols" in bp_config:
|
|
||||||
cols = bp_config["cols"]
|
|
||||||
if not isinstance(cols, int) or cols < 1 or cols > 24:
|
|
||||||
return {
|
|
||||||
"error": f"Invalid cols '{cols}' for breakpoint '{bp_name}'. Must be integer between 1 and 24.",
|
|
||||||
"breakpoints": layout.get("breakpoints")
|
|
||||||
}
|
|
||||||
|
|
||||||
# Generate CSS media queries
|
|
||||||
css_queries = self._generate_media_queries(merged_breakpoints, mobile_first)
|
|
||||||
|
|
||||||
# Store in layout
|
|
||||||
layout["breakpoints"] = merged_breakpoints
|
|
||||||
layout["mobile_first"] = mobile_first
|
|
||||||
layout["responsive_css"] = css_queries
|
|
||||||
|
|
||||||
return {
|
|
||||||
"layout_ref": layout_ref,
|
|
||||||
"breakpoints": merged_breakpoints,
|
|
||||||
"mobile_first": mobile_first,
|
|
||||||
"css_media_queries": css_queries
|
|
||||||
}
|
|
||||||
|
|
||||||
def _generate_media_queries(
|
|
||||||
self,
|
|
||||||
breakpoints: Dict[str, Dict[str, Any]],
|
|
||||||
mobile_first: bool
|
|
||||||
) -> List[str]:
|
|
||||||
"""Generate CSS media queries for breakpoints."""
|
|
||||||
queries = []
|
|
||||||
bp_order = ["xs", "sm", "md", "lg", "xl"]
|
|
||||||
|
|
||||||
if mobile_first:
|
|
||||||
# Use min-width queries (mobile-first)
|
|
||||||
for bp_name in bp_order[1:]: # Skip xs (base styles)
|
|
||||||
bp = breakpoints[bp_name]
|
|
||||||
min_width = bp.get("min_width", DEFAULT_BREAKPOINTS[bp_name]["min_width"])
|
|
||||||
if min_width and min_width != "0px":
|
|
||||||
queries.append(f"@media (min-width: {min_width}) {{ /* {bp_name} styles */ }}")
|
|
||||||
else:
|
|
||||||
# Use max-width queries (desktop-first)
|
|
||||||
for bp_name in reversed(bp_order[:-1]): # Skip xl (base styles)
|
|
||||||
bp = breakpoints[bp_name]
|
|
||||||
max_width = bp.get("max_width", DEFAULT_BREAKPOINTS[bp_name]["max_width"])
|
|
||||||
if max_width:
|
|
||||||
queries.append(f"@media (max-width: {max_width}) {{ /* {bp_name} styles */ }}")
|
|
||||||
|
|
||||||
return queries
|
|
||||||
|
|
||||||
async def layout_get_breakpoints(self, layout_ref: str) -> Dict[str, Any]:
|
|
||||||
"""
|
|
||||||
Get the breakpoint configuration for a layout.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
layout_ref: Layout name
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
Dict with breakpoint configuration
|
|
||||||
"""
|
|
||||||
if layout_ref not in self._layouts:
|
|
||||||
return {
|
|
||||||
"error": f"Layout '{layout_ref}' not found.",
|
|
||||||
"breakpoints": None
|
|
||||||
}
|
|
||||||
|
|
||||||
layout = self._layouts[layout_ref]
|
|
||||||
|
|
||||||
return {
|
|
||||||
"layout_ref": layout_ref,
|
|
||||||
"breakpoints": layout.get("breakpoints", DEFAULT_BREAKPOINTS.copy()),
|
|
||||||
"mobile_first": layout.get("mobile_first", True),
|
|
||||||
"css_media_queries": layout.get("responsive_css", [])
|
|
||||||
}
|
|
||||||
|
|
||||||
def get_default_breakpoints(self) -> Dict[str, Any]:
|
|
||||||
"""Get the default breakpoint configuration."""
|
|
||||||
return {
|
|
||||||
"breakpoints": DEFAULT_BREAKPOINTS.copy(),
|
|
||||||
"description": "Standard responsive breakpoints aligned with Mantine/Bootstrap",
|
|
||||||
"mobile_first": True
|
|
||||||
}
|
|
||||||
|
|||||||
@@ -17,7 +17,6 @@ from .chart_tools import ChartTools
|
|||||||
from .layout_tools import LayoutTools
|
from .layout_tools import LayoutTools
|
||||||
from .theme_tools import ThemeTools
|
from .theme_tools import ThemeTools
|
||||||
from .page_tools import PageTools
|
from .page_tools import PageTools
|
||||||
from .accessibility_tools import AccessibilityTools
|
|
||||||
|
|
||||||
# Suppress noisy MCP validation warnings on stderr
|
# Suppress noisy MCP validation warnings on stderr
|
||||||
logging.basicConfig(level=logging.INFO)
|
logging.basicConfig(level=logging.INFO)
|
||||||
@@ -37,7 +36,6 @@ class VizPlatformMCPServer:
|
|||||||
self.layout_tools = LayoutTools()
|
self.layout_tools = LayoutTools()
|
||||||
self.theme_tools = ThemeTools()
|
self.theme_tools = ThemeTools()
|
||||||
self.page_tools = PageTools()
|
self.page_tools = PageTools()
|
||||||
self.accessibility_tools = AccessibilityTools(theme_store=self.theme_tools.store)
|
|
||||||
|
|
||||||
async def initialize(self):
|
async def initialize(self):
|
||||||
"""Initialize server and load configuration."""
|
"""Initialize server and load configuration."""
|
||||||
@@ -200,46 +198,6 @@ class VizPlatformMCPServer:
|
|||||||
}
|
}
|
||||||
))
|
))
|
||||||
|
|
||||||
# Chart export tool (Issue #247)
|
|
||||||
tools.append(Tool(
|
|
||||||
name="chart_export",
|
|
||||||
description=(
|
|
||||||
"Export a Plotly chart to static image format (PNG, SVG, PDF). "
|
|
||||||
"Requires kaleido package. Returns base64 image data or saves to file."
|
|
||||||
),
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"figure": {
|
|
||||||
"type": "object",
|
|
||||||
"description": "Plotly figure JSON to export"
|
|
||||||
},
|
|
||||||
"format": {
|
|
||||||
"type": "string",
|
|
||||||
"enum": ["png", "svg", "pdf"],
|
|
||||||
"description": "Output format (default: png)"
|
|
||||||
},
|
|
||||||
"width": {
|
|
||||||
"type": "integer",
|
|
||||||
"description": "Image width in pixels (default: 1200)"
|
|
||||||
},
|
|
||||||
"height": {
|
|
||||||
"type": "integer",
|
|
||||||
"description": "Image height in pixels (default: 800)"
|
|
||||||
},
|
|
||||||
"scale": {
|
|
||||||
"type": "number",
|
|
||||||
"description": "Resolution scale factor (default: 2 for retina)"
|
|
||||||
},
|
|
||||||
"output_path": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Optional file path to save image"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["figure"]
|
|
||||||
}
|
|
||||||
))
|
|
||||||
|
|
||||||
# Layout tools (Issue #174)
|
# Layout tools (Issue #174)
|
||||||
tools.append(Tool(
|
tools.append(Tool(
|
||||||
name="layout_create",
|
name="layout_create",
|
||||||
@@ -322,36 +280,6 @@ class VizPlatformMCPServer:
|
|||||||
}
|
}
|
||||||
))
|
))
|
||||||
|
|
||||||
# Responsive breakpoints tool (Issue #249)
|
|
||||||
tools.append(Tool(
|
|
||||||
name="layout_set_breakpoints",
|
|
||||||
description=(
|
|
||||||
"Configure responsive breakpoints for a layout. "
|
|
||||||
"Supports xs, sm, md, lg, xl breakpoints with mobile-first approach. "
|
|
||||||
"Each breakpoint can define cols, spacing, and other grid properties."
|
|
||||||
),
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"layout_ref": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Layout name to configure"
|
|
||||||
},
|
|
||||||
"breakpoints": {
|
|
||||||
"type": "object",
|
|
||||||
"description": (
|
|
||||||
"Breakpoint config: {xs: {cols, spacing}, sm: {...}, md: {...}, lg: {...}, xl: {...}}"
|
|
||||||
)
|
|
||||||
},
|
|
||||||
"mobile_first": {
|
|
||||||
"type": "boolean",
|
|
||||||
"description": "Use mobile-first (min-width) media queries (default: true)"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["layout_ref", "breakpoints"]
|
|
||||||
}
|
|
||||||
))
|
|
||||||
|
|
||||||
# Theme tools (Issue #175)
|
# Theme tools (Issue #175)
|
||||||
tools.append(Tool(
|
tools.append(Tool(
|
||||||
name="theme_create",
|
name="theme_create",
|
||||||
@@ -523,77 +451,6 @@ class VizPlatformMCPServer:
|
|||||||
}
|
}
|
||||||
))
|
))
|
||||||
|
|
||||||
# Accessibility tools (Issue #248)
|
|
||||||
tools.append(Tool(
|
|
||||||
name="accessibility_validate_colors",
|
|
||||||
description=(
|
|
||||||
"Validate colors for color blind accessibility. "
|
|
||||||
"Checks contrast ratios for deuteranopia, protanopia, tritanopia. "
|
|
||||||
"Returns issues, simulations, and accessible palette suggestions."
|
|
||||||
),
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"colors": {
|
|
||||||
"type": "array",
|
|
||||||
"items": {"type": "string"},
|
|
||||||
"description": "List of hex colors to validate (e.g., ['#228be6', '#40c057'])"
|
|
||||||
},
|
|
||||||
"check_types": {
|
|
||||||
"type": "array",
|
|
||||||
"items": {"type": "string"},
|
|
||||||
"description": "Color blindness types to check: deuteranopia, protanopia, tritanopia (default: all)"
|
|
||||||
},
|
|
||||||
"min_contrast_ratio": {
|
|
||||||
"type": "number",
|
|
||||||
"description": "Minimum WCAG contrast ratio (default: 4.5 for AA)"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["colors"]
|
|
||||||
}
|
|
||||||
))
|
|
||||||
|
|
||||||
tools.append(Tool(
|
|
||||||
name="accessibility_validate_theme",
|
|
||||||
description=(
|
|
||||||
"Validate a theme's colors for accessibility. "
|
|
||||||
"Extracts all colors from theme tokens and checks for color blind safety."
|
|
||||||
),
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"theme_name": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Theme name to validate"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["theme_name"]
|
|
||||||
}
|
|
||||||
))
|
|
||||||
|
|
||||||
tools.append(Tool(
|
|
||||||
name="accessibility_suggest_alternative",
|
|
||||||
description=(
|
|
||||||
"Suggest accessible alternative colors for a given color. "
|
|
||||||
"Provides alternatives optimized for specific color blindness types."
|
|
||||||
),
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"color": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Hex color to find alternatives for"
|
|
||||||
},
|
|
||||||
"deficiency_type": {
|
|
||||||
"type": "string",
|
|
||||||
"enum": ["deuteranopia", "protanopia", "tritanopia"],
|
|
||||||
"description": "Color blindness type to optimize for"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["color", "deficiency_type"]
|
|
||||||
}
|
|
||||||
))
|
|
||||||
|
|
||||||
return tools
|
return tools
|
||||||
|
|
||||||
@self.server.call_tool()
|
@self.server.call_tool()
|
||||||
@@ -667,26 +524,6 @@ class VizPlatformMCPServer:
|
|||||||
text=json.dumps(result, indent=2)
|
text=json.dumps(result, indent=2)
|
||||||
)]
|
)]
|
||||||
|
|
||||||
elif name == "chart_export":
|
|
||||||
figure = arguments.get('figure')
|
|
||||||
if not figure:
|
|
||||||
return [TextContent(
|
|
||||||
type="text",
|
|
||||||
text=json.dumps({"error": "figure is required"}, indent=2)
|
|
||||||
)]
|
|
||||||
result = await self.chart_tools.chart_export(
|
|
||||||
figure=figure,
|
|
||||||
format=arguments.get('format', 'png'),
|
|
||||||
width=arguments.get('width'),
|
|
||||||
height=arguments.get('height'),
|
|
||||||
scale=arguments.get('scale', 2.0),
|
|
||||||
output_path=arguments.get('output_path')
|
|
||||||
)
|
|
||||||
return [TextContent(
|
|
||||||
type="text",
|
|
||||||
text=json.dumps(result, indent=2)
|
|
||||||
)]
|
|
||||||
|
|
||||||
# Layout tools
|
# Layout tools
|
||||||
elif name == "layout_create":
|
elif name == "layout_create":
|
||||||
layout_name = arguments.get('name')
|
layout_name = arguments.get('name')
|
||||||
@@ -731,23 +568,6 @@ class VizPlatformMCPServer:
|
|||||||
text=json.dumps(result, indent=2)
|
text=json.dumps(result, indent=2)
|
||||||
)]
|
)]
|
||||||
|
|
||||||
elif name == "layout_set_breakpoints":
|
|
||||||
layout_ref = arguments.get('layout_ref')
|
|
||||||
breakpoints = arguments.get('breakpoints', {})
|
|
||||||
mobile_first = arguments.get('mobile_first', True)
|
|
||||||
if not layout_ref:
|
|
||||||
return [TextContent(
|
|
||||||
type="text",
|
|
||||||
text=json.dumps({"error": "layout_ref is required"}, indent=2)
|
|
||||||
)]
|
|
||||||
result = await self.layout_tools.layout_set_breakpoints(
|
|
||||||
layout_ref, breakpoints, mobile_first
|
|
||||||
)
|
|
||||||
return [TextContent(
|
|
||||||
type="text",
|
|
||||||
text=json.dumps(result, indent=2)
|
|
||||||
)]
|
|
||||||
|
|
||||||
# Theme tools
|
# Theme tools
|
||||||
elif name == "theme_create":
|
elif name == "theme_create":
|
||||||
theme_name = arguments.get('name')
|
theme_name = arguments.get('name')
|
||||||
@@ -849,53 +669,6 @@ class VizPlatformMCPServer:
|
|||||||
text=json.dumps(result, indent=2)
|
text=json.dumps(result, indent=2)
|
||||||
)]
|
)]
|
||||||
|
|
||||||
# Accessibility tools
|
|
||||||
elif name == "accessibility_validate_colors":
|
|
||||||
colors = arguments.get('colors')
|
|
||||||
if not colors:
|
|
||||||
return [TextContent(
|
|
||||||
type="text",
|
|
||||||
text=json.dumps({"error": "colors list is required"}, indent=2)
|
|
||||||
)]
|
|
||||||
result = await self.accessibility_tools.accessibility_validate_colors(
|
|
||||||
colors=colors,
|
|
||||||
check_types=arguments.get('check_types'),
|
|
||||||
min_contrast_ratio=arguments.get('min_contrast_ratio', 4.5)
|
|
||||||
)
|
|
||||||
return [TextContent(
|
|
||||||
type="text",
|
|
||||||
text=json.dumps(result, indent=2)
|
|
||||||
)]
|
|
||||||
|
|
||||||
elif name == "accessibility_validate_theme":
|
|
||||||
theme_name = arguments.get('theme_name')
|
|
||||||
if not theme_name:
|
|
||||||
return [TextContent(
|
|
||||||
type="text",
|
|
||||||
text=json.dumps({"error": "theme_name is required"}, indent=2)
|
|
||||||
)]
|
|
||||||
result = await self.accessibility_tools.accessibility_validate_theme(theme_name)
|
|
||||||
return [TextContent(
|
|
||||||
type="text",
|
|
||||||
text=json.dumps(result, indent=2)
|
|
||||||
)]
|
|
||||||
|
|
||||||
elif name == "accessibility_suggest_alternative":
|
|
||||||
color = arguments.get('color')
|
|
||||||
deficiency_type = arguments.get('deficiency_type')
|
|
||||||
if not color or not deficiency_type:
|
|
||||||
return [TextContent(
|
|
||||||
type="text",
|
|
||||||
text=json.dumps({"error": "color and deficiency_type are required"}, indent=2)
|
|
||||||
)]
|
|
||||||
result = await self.accessibility_tools.accessibility_suggest_alternative(
|
|
||||||
color, deficiency_type
|
|
||||||
)
|
|
||||||
return [TextContent(
|
|
||||||
type="text",
|
|
||||||
text=json.dumps(result, indent=2)
|
|
||||||
)]
|
|
||||||
|
|
||||||
raise ValueError(f"Unknown tool: {name}")
|
raise ValueError(f"Unknown tool: {name}")
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
|
|||||||
@@ -5,7 +5,6 @@ mcp>=0.9.0
|
|||||||
plotly>=5.18.0
|
plotly>=5.18.0
|
||||||
dash>=2.14.0
|
dash>=2.14.0
|
||||||
dash-mantine-components>=2.0.0
|
dash-mantine-components>=2.0.0
|
||||||
kaleido>=0.2.1 # For chart export (PNG, SVG, PDF)
|
|
||||||
|
|
||||||
# Utilities
|
# Utilities
|
||||||
python-dotenv>=1.0.0
|
python-dotenv>=1.0.0
|
||||||
|
|||||||
@@ -1,8 +1,4 @@
|
|||||||
#!/bin/bash
|
#!/bin/bash
|
||||||
# Capture original working directory before any cd operations
|
|
||||||
# This should be the user's project directory when launched by Claude Code
|
|
||||||
export CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"
|
|
||||||
|
|
||||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||||
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/viz-platform/.venv"
|
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/viz-platform/.venv"
|
||||||
LOCAL_VENV="$SCRIPT_DIR/.venv"
|
LOCAL_VENV="$SCRIPT_DIR/.venv"
|
||||||
|
|||||||
@@ -1,195 +0,0 @@
|
|||||||
"""
|
|
||||||
Tests for accessibility validation tools.
|
|
||||||
"""
|
|
||||||
import pytest
|
|
||||||
from mcp_server.accessibility_tools import AccessibilityTools
|
|
||||||
|
|
||||||
|
|
||||||
@pytest.fixture
|
|
||||||
def tools():
|
|
||||||
"""Create AccessibilityTools instance."""
|
|
||||||
return AccessibilityTools()
|
|
||||||
|
|
||||||
|
|
||||||
class TestHexToRgb:
|
|
||||||
"""Tests for _hex_to_rgb method."""
|
|
||||||
|
|
||||||
def test_hex_to_rgb_6_digit(self, tools):
|
|
||||||
"""Test 6-digit hex conversion."""
|
|
||||||
assert tools._hex_to_rgb("#FF0000") == (255, 0, 0)
|
|
||||||
assert tools._hex_to_rgb("#00FF00") == (0, 255, 0)
|
|
||||||
assert tools._hex_to_rgb("#0000FF") == (0, 0, 255)
|
|
||||||
|
|
||||||
def test_hex_to_rgb_3_digit(self, tools):
|
|
||||||
"""Test 3-digit hex conversion."""
|
|
||||||
assert tools._hex_to_rgb("#F00") == (255, 0, 0)
|
|
||||||
assert tools._hex_to_rgb("#0F0") == (0, 255, 0)
|
|
||||||
assert tools._hex_to_rgb("#00F") == (0, 0, 255)
|
|
||||||
|
|
||||||
def test_hex_to_rgb_lowercase(self, tools):
|
|
||||||
"""Test lowercase hex conversion."""
|
|
||||||
assert tools._hex_to_rgb("#ff0000") == (255, 0, 0)
|
|
||||||
|
|
||||||
|
|
||||||
class TestContrastRatio:
|
|
||||||
"""Tests for _get_contrast_ratio method."""
|
|
||||||
|
|
||||||
def test_black_white_contrast(self, tools):
|
|
||||||
"""Test black on white has maximum contrast."""
|
|
||||||
ratio = tools._get_contrast_ratio("#000000", "#FFFFFF")
|
|
||||||
assert ratio == pytest.approx(21.0, rel=0.01)
|
|
||||||
|
|
||||||
def test_same_color_contrast(self, tools):
|
|
||||||
"""Test same color has minimum contrast."""
|
|
||||||
ratio = tools._get_contrast_ratio("#FF0000", "#FF0000")
|
|
||||||
assert ratio == pytest.approx(1.0, rel=0.01)
|
|
||||||
|
|
||||||
def test_symmetric_contrast(self, tools):
|
|
||||||
"""Test contrast ratio is symmetric."""
|
|
||||||
ratio1 = tools._get_contrast_ratio("#228be6", "#FFFFFF")
|
|
||||||
ratio2 = tools._get_contrast_ratio("#FFFFFF", "#228be6")
|
|
||||||
assert ratio1 == pytest.approx(ratio2, rel=0.01)
|
|
||||||
|
|
||||||
|
|
||||||
class TestColorBlindnessSimulation:
|
|
||||||
"""Tests for _simulate_color_blindness method."""
|
|
||||||
|
|
||||||
def test_deuteranopia_simulation(self, tools):
|
|
||||||
"""Test deuteranopia (green-blind) simulation."""
|
|
||||||
# Red and green should appear more similar
|
|
||||||
original_red = "#FF0000"
|
|
||||||
original_green = "#00FF00"
|
|
||||||
|
|
||||||
simulated_red = tools._simulate_color_blindness(original_red, "deuteranopia")
|
|
||||||
simulated_green = tools._simulate_color_blindness(original_green, "deuteranopia")
|
|
||||||
|
|
||||||
# They should be different from originals
|
|
||||||
assert simulated_red != original_red or simulated_green != original_green
|
|
||||||
|
|
||||||
def test_protanopia_simulation(self, tools):
|
|
||||||
"""Test protanopia (red-blind) simulation."""
|
|
||||||
simulated = tools._simulate_color_blindness("#FF0000", "protanopia")
|
|
||||||
# Should return a modified color
|
|
||||||
assert simulated.startswith("#")
|
|
||||||
assert len(simulated) == 7
|
|
||||||
|
|
||||||
def test_tritanopia_simulation(self, tools):
|
|
||||||
"""Test tritanopia (blue-blind) simulation."""
|
|
||||||
simulated = tools._simulate_color_blindness("#0000FF", "tritanopia")
|
|
||||||
# Should return a modified color
|
|
||||||
assert simulated.startswith("#")
|
|
||||||
assert len(simulated) == 7
|
|
||||||
|
|
||||||
def test_unknown_deficiency_returns_original(self, tools):
|
|
||||||
"""Test unknown deficiency type returns original color."""
|
|
||||||
color = "#FF0000"
|
|
||||||
simulated = tools._simulate_color_blindness(color, "unknown")
|
|
||||||
assert simulated == color
|
|
||||||
|
|
||||||
|
|
||||||
class TestAccessibilityValidateColors:
|
|
||||||
"""Tests for accessibility_validate_colors method."""
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
|
||||||
async def test_validate_single_color(self, tools):
|
|
||||||
"""Test validating a single color."""
|
|
||||||
result = await tools.accessibility_validate_colors(["#228be6"])
|
|
||||||
assert "colors_checked" in result
|
|
||||||
assert "overall_score" in result
|
|
||||||
assert "issues" in result
|
|
||||||
assert "safe_palettes" in result
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
|
||||||
async def test_validate_problematic_colors(self, tools):
|
|
||||||
"""Test similar colors trigger warnings."""
|
|
||||||
# Use colors that are very close in hue, which should be harder to distinguish
|
|
||||||
result = await tools.accessibility_validate_colors(["#FF5555", "#FF6666"])
|
|
||||||
# Similar colors should trigger distinguishability warnings
|
|
||||||
assert "issues" in result
|
|
||||||
# The validation should at least run without errors
|
|
||||||
assert "colors_checked" in result
|
|
||||||
assert len(result["colors_checked"]) == 2
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
|
||||||
async def test_validate_contrast_issue(self, tools):
|
|
||||||
"""Test low contrast colors trigger contrast warnings."""
|
|
||||||
# Yellow on white has poor contrast
|
|
||||||
result = await tools.accessibility_validate_colors(["#FFFF00"])
|
|
||||||
# Check for contrast issues (yellow may have issues with both black and white)
|
|
||||||
assert "issues" in result
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
|
||||||
async def test_validate_with_specific_types(self, tools):
|
|
||||||
"""Test validating for specific color blindness types."""
|
|
||||||
result = await tools.accessibility_validate_colors(
|
|
||||||
["#FF0000", "#00FF00"],
|
|
||||||
check_types=["deuteranopia"]
|
|
||||||
)
|
|
||||||
assert "simulations" in result
|
|
||||||
assert "deuteranopia" in result["simulations"]
|
|
||||||
assert "protanopia" not in result["simulations"]
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
|
||||||
async def test_overall_score(self, tools):
|
|
||||||
"""Test overall score is calculated."""
|
|
||||||
result = await tools.accessibility_validate_colors(["#228be6", "#ffffff"])
|
|
||||||
assert result["overall_score"] in ["A", "B", "C", "D"]
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
|
||||||
async def test_recommendations_generated(self, tools):
|
|
||||||
"""Test recommendations are generated for issues."""
|
|
||||||
result = await tools.accessibility_validate_colors(["#FF0000", "#00FF00"])
|
|
||||||
assert "recommendations" in result
|
|
||||||
assert len(result["recommendations"]) > 0
|
|
||||||
|
|
||||||
|
|
||||||
class TestAccessibilitySuggestAlternative:
|
|
||||||
"""Tests for accessibility_suggest_alternative method."""
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
|
||||||
async def test_suggest_alternative_deuteranopia(self, tools):
|
|
||||||
"""Test suggesting alternatives for deuteranopia."""
|
|
||||||
result = await tools.accessibility_suggest_alternative("#FF0000", "deuteranopia")
|
|
||||||
assert "original_color" in result
|
|
||||||
assert result["deficiency_type"] == "deuteranopia"
|
|
||||||
assert "suggestions" in result
|
|
||||||
assert len(result["suggestions"]) > 0
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
|
||||||
async def test_suggest_alternative_tritanopia(self, tools):
|
|
||||||
"""Test suggesting alternatives for tritanopia."""
|
|
||||||
result = await tools.accessibility_suggest_alternative("#0000FF", "tritanopia")
|
|
||||||
assert "suggestions" in result
|
|
||||||
assert len(result["suggestions"]) > 0
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
|
||||||
async def test_suggestions_include_safe_palettes(self, tools):
|
|
||||||
"""Test suggestions include colors from safe palettes."""
|
|
||||||
result = await tools.accessibility_suggest_alternative("#FF0000", "deuteranopia")
|
|
||||||
palette_suggestions = [
|
|
||||||
s for s in result["suggestions"]
|
|
||||||
if "palette" in s
|
|
||||||
]
|
|
||||||
assert len(palette_suggestions) > 0
|
|
||||||
|
|
||||||
|
|
||||||
class TestSafePalettes:
|
|
||||||
"""Tests for safe palette constants."""
|
|
||||||
|
|
||||||
def test_safe_palettes_exist(self, tools):
|
|
||||||
"""Test that safe palettes are defined."""
|
|
||||||
from mcp_server.accessibility_tools import SAFE_PALETTES
|
|
||||||
assert "categorical" in SAFE_PALETTES
|
|
||||||
assert "ibm" in SAFE_PALETTES
|
|
||||||
assert "okabe_ito" in SAFE_PALETTES
|
|
||||||
assert "tableau_colorblind" in SAFE_PALETTES
|
|
||||||
|
|
||||||
def test_safe_palettes_have_colors(self, tools):
|
|
||||||
"""Test that safe palettes have color lists."""
|
|
||||||
from mcp_server.accessibility_tools import SAFE_PALETTES
|
|
||||||
for palette_name, palette in SAFE_PALETTES.items():
|
|
||||||
assert "colors" in palette
|
|
||||||
assert len(palette["colors"]) > 0
|
|
||||||
# All colors should be valid hex
|
|
||||||
for color in palette["colors"]:
|
|
||||||
assert color.startswith("#")
|
|
||||||
@@ -90,10 +90,6 @@ After clarification, you receive a structured specification:
|
|||||||
[List of assumptions]
|
[List of assumptions]
|
||||||
```
|
```
|
||||||
|
|
||||||
## Documentation
|
|
||||||
|
|
||||||
- [Neurodivergent Support Guide](docs/ND-SUPPORT.md) - How clarity-assist supports ND users with executive function challenges and cognitive differences
|
|
||||||
|
|
||||||
## Integration
|
## Integration
|
||||||
|
|
||||||
For CLAUDE.md integration instructions, see `claude-md-integration.md`.
|
For CLAUDE.md integration instructions, see `claude-md-integration.md`.
|
||||||
|
|||||||
@@ -1,328 +0,0 @@
|
|||||||
# Neurodivergent Support in clarity-assist
|
|
||||||
|
|
||||||
This document describes how clarity-assist is designed to support users with neurodivergent traits, including ADHD, autism, anxiety, and other conditions that affect executive function, sensory processing, or cognitive style.
|
|
||||||
|
|
||||||
## Overview
|
|
||||||
|
|
||||||
### Purpose
|
|
||||||
|
|
||||||
clarity-assist exists to help all users transform vague or incomplete requests into clear, actionable specifications. For neurodivergent users specifically, it addresses common challenges:
|
|
||||||
|
|
||||||
- **Executive function difficulties** - Breaking down complex tasks, getting started, managing scope
|
|
||||||
- **Working memory limitations** - Keeping track of context across long conversations
|
|
||||||
- **Decision fatigue** - Facing too many open-ended choices
|
|
||||||
- **Processing style differences** - Preferring structured, predictable interactions
|
|
||||||
- **Anxiety around uncertainty** - Needing clear expectations and explicit confirmation
|
|
||||||
|
|
||||||
### Philosophy
|
|
||||||
|
|
||||||
Our design philosophy centers on three principles:
|
|
||||||
|
|
||||||
1. **Reduce cognitive load** - Never force the user to hold too much in their head at once
|
|
||||||
2. **Provide structure** - Use consistent, predictable patterns for all interactions
|
|
||||||
3. **Respect different communication styles** - Accommodate rather than assume one "right" way to think
|
|
||||||
|
|
||||||
## Features for ND Users
|
|
||||||
|
|
||||||
### 1. Reduced Cognitive Load
|
|
||||||
|
|
||||||
**Prompt Simplification**
|
|
||||||
- The 4-D methodology (Deconstruct, Diagnose, Develop, Deliver) breaks down complex requests into manageable phases
|
|
||||||
- Users never need to specify everything upfront - clarification happens incrementally
|
|
||||||
|
|
||||||
**Task Breakdown**
|
|
||||||
- Large requests are decomposed into explicit components
|
|
||||||
- Dependencies and relationships are surfaced rather than left implicit
|
|
||||||
- Scope boundaries are clearly defined (in-scope vs. out-of-scope)
|
|
||||||
|
|
||||||
### 2. Structured Output
|
|
||||||
|
|
||||||
**Consistent Formatting**
|
|
||||||
- Every clarification session produces the same structured specification:
|
|
||||||
- Summary (1-2 sentences)
|
|
||||||
- Scope (In/Out)
|
|
||||||
- Requirements table (numbered, prioritized)
|
|
||||||
- Assumptions list
|
|
||||||
- This predictability reduces the mental effort of parsing responses
|
|
||||||
|
|
||||||
**Predictable Patterns**
|
|
||||||
- Questions always follow the same format
|
|
||||||
- Progress summaries appear at regular intervals
|
|
||||||
- Escalation (simple to complex) is always offered, never forced
|
|
||||||
|
|
||||||
**Bulleted Lists Over Prose**
|
|
||||||
- Requirements are presented as scannable lists, not paragraphs
|
|
||||||
- Options are numbered for easy reference
|
|
||||||
- Key information is highlighted with bold labels
|
|
||||||
|
|
||||||
### 3. Customizable Verbosity
|
|
||||||
|
|
||||||
**Detail Levels**
|
|
||||||
- `/clarify` - Full methodology for complex requests (more thorough, more questions)
|
|
||||||
- `/quick-clarify` - Rapid mode for simple disambiguation (fewer questions, faster)
|
|
||||||
|
|
||||||
**User Control**
|
|
||||||
- Users can always say "that's enough detail" to end questioning early
|
|
||||||
- The plugin offers to break sessions into smaller parts
|
|
||||||
- "Good enough for now" is explicitly validated as an acceptable outcome
|
|
||||||
|
|
||||||
### 4. Vagueness Detection
|
|
||||||
|
|
||||||
The `UserPromptSubmit` hook automatically detects prompts that might benefit from clarification and gently suggests using `/clarify`.
|
|
||||||
|
|
||||||
**Detection Signals**
|
|
||||||
- Short prompts (< 10 words) without specific technical terms
|
|
||||||
- Vague action phrases: "help me", "fix this", "make it better"
|
|
||||||
- Ambiguous scope words: "somehow", "something", "stuff", "etc."
|
|
||||||
- Open questions without context
|
|
||||||
|
|
||||||
**Non-Blocking Approach**
|
|
||||||
- The hook never prevents you from proceeding
|
|
||||||
- It provides a suggestion with a vagueness score (percentage)
|
|
||||||
- You can disable auto-suggestions entirely via environment variable
|
|
||||||
|
|
||||||
### 5. Focus Aids
|
|
||||||
|
|
||||||
**Task Prioritization**
|
|
||||||
- Requirements are tagged as Must/Should/Could/Won't (MoSCoW)
|
|
||||||
- Critical items are separated from nice-to-haves
|
|
||||||
- Scope creep is explicitly called out and deferred
|
|
||||||
|
|
||||||
**Context Switching Warnings**
|
|
||||||
- When questions touch multiple areas, the plugin acknowledges the complexity
|
|
||||||
- Offers to focus on one aspect at a time
|
|
||||||
- Summarizes frequently to rebuild context after interruptions
|
|
||||||
|
|
||||||
## How It Works
|
|
||||||
|
|
||||||
### The UserPromptSubmit Hook
|
|
||||||
|
|
||||||
When you submit a prompt, the vagueness detection hook (`hooks/vagueness-check.sh`) runs automatically:
|
|
||||||
|
|
||||||
```
|
|
||||||
User submits prompt
|
|
||||||
|
|
|
||||||
v
|
|
||||||
Hook reads prompt from stdin
|
|
||||||
|
|
|
||||||
v
|
|
||||||
Skip if: empty, starts with /, or contains file paths
|
|
||||||
|
|
|
||||||
v
|
|
||||||
Calculate vagueness score (0.0 - 1.0)
|
|
||||||
- Short prompts: +0.3
|
|
||||||
- Vague action phrases: +0.2
|
|
||||||
- Ambiguous scope words: +0.15
|
|
||||||
- Missing technical specifics: +0.2
|
|
||||||
- Short questions without context: +0.15
|
|
||||||
|
|
|
||||||
v
|
|
||||||
If score >= threshold (default 0.6):
|
|
||||||
- Output gentle suggestion with [clarity-assist] prefix
|
|
||||||
- Show vagueness percentage
|
|
||||||
|
|
|
||||||
v
|
|
||||||
Exit 0 (always non-blocking)
|
|
||||||
```
|
|
||||||
|
|
||||||
### Example Hook Output
|
|
||||||
|
|
||||||
```
|
|
||||||
[clarity-assist] Your prompt could benefit from more clarity.
|
|
||||||
[clarity-assist] Consider running /clarity-assist to refine your request.
|
|
||||||
[clarity-assist] (Vagueness score: 65% - this is a suggestion, not a block)
|
|
||||||
```
|
|
||||||
|
|
||||||
### The 4-D Methodology
|
|
||||||
|
|
||||||
| Phase | What Happens | ND Benefit |
|
|
||||||
|-------|--------------|------------|
|
|
||||||
| **Deconstruct** | Break request into components | Shows the full picture without overwhelming |
|
|
||||||
| **Diagnose** | Identify gaps and conflicts | Surfaces hidden assumptions |
|
|
||||||
| **Develop** | Ask structured questions | Option-based, chunked, contextual |
|
|
||||||
| **Deliver** | Produce specification | Consistent, scannable format |
|
|
||||||
|
|
||||||
## Usage Examples
|
|
||||||
|
|
||||||
### Before: Vague Request
|
|
||||||
|
|
||||||
```
|
|
||||||
User: Make the app faster
|
|
||||||
```
|
|
||||||
|
|
||||||
This triggers vagueness detection because:
|
|
||||||
- Short (4 words)
|
|
||||||
- Vague action ("make")
|
|
||||||
- No specific technical context
|
|
||||||
- No measurable outcome
|
|
||||||
|
|
||||||
### After: Clarified with /clarify
|
|
||||||
|
|
||||||
```
|
|
||||||
User: /clarify Make the app faster
|
|
||||||
|
|
||||||
Claude: Let me help clarify what kind of performance improvements you're looking for.
|
|
||||||
|
|
||||||
**What type of slowness are you experiencing?**
|
|
||||||
1. Initial page load takes too long
|
|
||||||
2. Interactions feel sluggish (clicking, typing)
|
|
||||||
3. API calls are slow
|
|
||||||
4. All of the above
|
|
||||||
5. Other
|
|
||||||
|
|
||||||
[I'm asking because each has very different solutions]
|
|
||||||
```
|
|
||||||
|
|
||||||
```
|
|
||||||
User: 1
|
|
||||||
|
|
||||||
Claude: Initial page load - got it.
|
|
||||||
|
|
||||||
**What's an acceptable load time for you?**
|
|
||||||
1. Under 1 second (aggressive)
|
|
||||||
2. Under 2 seconds (standard)
|
|
||||||
3. Under 3 seconds (relaxed)
|
|
||||||
4. I don't have a specific target
|
|
||||||
|
|
||||||
[This helps me know whether to suggest quick wins or a major optimization effort]
|
|
||||||
```
|
|
||||||
|
|
||||||
After 2-3 more questions, the user receives:
|
|
||||||
|
|
||||||
```markdown
|
|
||||||
## Clarified Request
|
|
||||||
|
|
||||||
### Summary
|
|
||||||
Optimize initial page load time to under 2 seconds by addressing bundle size and initial render blocking resources.
|
|
||||||
|
|
||||||
### Scope
|
|
||||||
**In Scope:**
|
|
||||||
- Bundle analysis and splitting
|
|
||||||
- Lazy loading non-critical components
|
|
||||||
- Image optimization
|
|
||||||
- CSS/JS minification check
|
|
||||||
|
|
||||||
**Out of Scope:**
|
|
||||||
- Backend API optimization
|
|
||||||
- Database query performance
|
|
||||||
- CDN setup
|
|
||||||
|
|
||||||
### Requirements
|
|
||||||
| # | Requirement | Priority | Notes |
|
|
||||||
|---|-------------|----------|-------|
|
|
||||||
| 1 | Page loads in < 2s on 4G | Must | Measured via Lighthouse |
|
|
||||||
| 2 | Largest contentful paint < 1.5s | Should | Core Web Vital |
|
|
||||||
| 3 | No layout shift after load | Should | CLS < 0.1 |
|
|
||||||
|
|
||||||
### Assumptions
|
|
||||||
- Current load time is > 3 seconds (based on "too long" description)
|
|
||||||
- Target browsers: modern evergreen (Chrome, Firefox, Safari)
|
|
||||||
- Build tooling already configured (webpack/vite)
|
|
||||||
```
|
|
||||||
|
|
||||||
### Common Patterns Detected
|
|
||||||
|
|
||||||
| Pattern | Example | What It Triggers |
|
|
||||||
|---------|---------|------------------|
|
|
||||||
| Missing context | "Fix the bug" | "Which bug? In which file or feature?" |
|
|
||||||
| Ambiguous scope | "Update the user stuff" | "What specifically needs updating?" |
|
|
||||||
| Unclear goals | "Make it better" | "Better in what way? Faster? Cleaner? More features?" |
|
|
||||||
| Implicit knowledge | "Do it like last time" | "I don't have context from previous sessions" |
|
|
||||||
| Conflicting requests | "Keep it simple but add everything" | "Which matters more: simplicity or completeness?" |
|
|
||||||
|
|
||||||
### Quick-Clarify Example
|
|
||||||
|
|
||||||
For simpler requests that just need minor disambiguation:
|
|
||||||
|
|
||||||
```
|
|
||||||
User: /quick-clarify Add a delete button to the user list
|
|
||||||
|
|
||||||
Claude: I'll add a delete button to each row in the user list.
|
|
||||||
|
|
||||||
**Should deletion require confirmation?**
|
|
||||||
1. Yes - show confirmation dialog (Recommended)
|
|
||||||
2. No - delete immediately
|
|
||||||
3. Soft delete - mark as deleted but keep data
|
|
||||||
|
|
||||||
[Asking because accidental deletions can be problematic]
|
|
||||||
```
|
|
||||||
|
|
||||||
```
|
|
||||||
User: 1
|
|
||||||
|
|
||||||
Claude: Got it. I'll implement a delete button with confirmation dialog. Starting now.
|
|
||||||
```
|
|
||||||
|
|
||||||
## Configuration Options
|
|
||||||
|
|
||||||
### Environment Variables
|
|
||||||
|
|
||||||
| Variable | Default | Description |
|
|
||||||
|----------|---------|-------------|
|
|
||||||
| `CLARITY_ASSIST_AUTO_SUGGEST` | `true` | Enable/disable automatic vagueness detection |
|
|
||||||
| `CLARITY_ASSIST_VAGUENESS_THRESHOLD` | `0.6` | Score threshold to trigger suggestion (0.0-1.0) |
|
|
||||||
|
|
||||||
### Disabling Auto-Suggestions
|
|
||||||
|
|
||||||
If you find the vagueness detection unhelpful, disable it in your shell profile or `.env`:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
export CLARITY_ASSIST_AUTO_SUGGEST=false
|
|
||||||
```
|
|
||||||
|
|
||||||
### Adjusting Sensitivity
|
|
||||||
|
|
||||||
To make detection more or less sensitive:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# More sensitive (suggests more often)
|
|
||||||
export CLARITY_ASSIST_VAGUENESS_THRESHOLD=0.4
|
|
||||||
|
|
||||||
# Less sensitive (only very vague prompts)
|
|
||||||
export CLARITY_ASSIST_VAGUENESS_THRESHOLD=0.8
|
|
||||||
```
|
|
||||||
|
|
||||||
## Tips for ND Users
|
|
||||||
|
|
||||||
### If You're Feeling Overwhelmed
|
|
||||||
|
|
||||||
- Use `/quick-clarify` instead of `/clarify` for faster interactions
|
|
||||||
- Say "let's focus on just one thing" to narrow scope
|
|
||||||
- Ask to "pause and summarize" at any point
|
|
||||||
- It's OK to say "I don't know" - the plugin will offer concrete alternatives
|
|
||||||
|
|
||||||
### If You Have Executive Function Challenges
|
|
||||||
|
|
||||||
- Start with `/clarify` even for tasks you think are simple - it helps with planning
|
|
||||||
- The structured specification can serve as a checklist
|
|
||||||
- Use the scope boundaries to prevent scope creep
|
|
||||||
|
|
||||||
### If You Prefer Detailed Structure
|
|
||||||
|
|
||||||
- The 4-D methodology provides a predictable framework
|
|
||||||
- All output follows consistent formatting
|
|
||||||
- Questions always offer numbered options
|
|
||||||
|
|
||||||
### If You Have Anxiety About Getting It Right
|
|
||||||
|
|
||||||
- The plugin validates "good enough for now" as acceptable
|
|
||||||
- You can always revisit and change earlier answers
|
|
||||||
- Assumptions are explicitly listed - nothing is hidden
|
|
||||||
|
|
||||||
## Accessibility Notes
|
|
||||||
|
|
||||||
- All output uses standard markdown that works with screen readers
|
|
||||||
- No time pressure - take as long as you need between responses
|
|
||||||
- Questions are designed to be answerable without deep context retrieval
|
|
||||||
- Visual patterns (bold, bullets, tables) create scannable structure
|
|
||||||
|
|
||||||
## Feedback
|
|
||||||
|
|
||||||
If you have suggestions for improving neurodivergent support in clarity-assist, please open an issue at:
|
|
||||||
|
|
||||||
https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/issues
|
|
||||||
|
|
||||||
Include the label `clarity-assist` and describe:
|
|
||||||
- What challenge you faced
|
|
||||||
- What would have helped
|
|
||||||
- Any specific accommodations you'd like to see
|
|
||||||
@@ -1,10 +0,0 @@
|
|||||||
{
|
|
||||||
"hooks": {
|
|
||||||
"UserPromptSubmit": [
|
|
||||||
{
|
|
||||||
"type": "command",
|
|
||||||
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/vagueness-check.sh"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
}
|
|
||||||
@@ -1,216 +0,0 @@
|
|||||||
#!/bin/bash
|
|
||||||
# clarity-assist vagueness detection hook
|
|
||||||
# Analyzes user prompts for vagueness and suggests /clarity-assist when beneficial
|
|
||||||
# All output MUST have [clarity-assist] prefix
|
|
||||||
# This is a NON-BLOCKING hook - always exits 0
|
|
||||||
|
|
||||||
PREFIX="[clarity-assist]"
|
|
||||||
|
|
||||||
# Check if auto-suggest is enabled (default: true)
|
|
||||||
AUTO_SUGGEST="${CLARITY_ASSIST_AUTO_SUGGEST:-true}"
|
|
||||||
if [[ "$AUTO_SUGGEST" != "true" ]]; then
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Threshold for vagueness score (default: 0.6)
|
|
||||||
THRESHOLD="${CLARITY_ASSIST_VAGUENESS_THRESHOLD:-0.6}"
|
|
||||||
|
|
||||||
# Read user prompt from stdin
|
|
||||||
PROMPT=""
|
|
||||||
if [[ -t 0 ]]; then
|
|
||||||
# No stdin available
|
|
||||||
exit 0
|
|
||||||
else
|
|
||||||
PROMPT=$(cat)
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Skip empty prompts
|
|
||||||
if [[ -z "$PROMPT" ]]; then
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Skip if prompt is a command (starts with /)
|
|
||||||
if [[ "$PROMPT" =~ ^[[:space:]]*/[a-zA-Z] ]]; then
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Skip if prompt mentions specific files or paths
|
|
||||||
if [[ "$PROMPT" =~ \.(py|js|ts|sh|md|json|yaml|yml|txt|css|html|go|rs|java|c|cpp|h)([[:space:]]|$|[^a-zA-Z]) ]] || \
|
|
||||||
[[ "$PROMPT" =~ [/\\][a-zA-Z0-9_-]+[/\\] ]] || \
|
|
||||||
[[ "$PROMPT" =~ (src|lib|test|docs|plugins|hooks|commands)/ ]]; then
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Initialize vagueness score
|
|
||||||
SCORE=0
|
|
||||||
|
|
||||||
# Count words in the prompt
|
|
||||||
WORD_COUNT=$(echo "$PROMPT" | wc -w | tr -d ' ')
|
|
||||||
|
|
||||||
# ============================================================================
|
|
||||||
# Vagueness Signal Detection
|
|
||||||
# ============================================================================
|
|
||||||
|
|
||||||
# Signal 1: Very short prompts (< 10 words) are often vague
|
|
||||||
if [[ "$WORD_COUNT" -lt 10 ]]; then
|
|
||||||
# But very short specific commands are OK
|
|
||||||
if [[ "$WORD_COUNT" -lt 3 ]]; then
|
|
||||||
# Extremely short - probably intentional or a command
|
|
||||||
:
|
|
||||||
else
|
|
||||||
SCORE=$(echo "$SCORE + 0.3" | bc)
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Signal 2: Vague action phrases (no specific outcome)
|
|
||||||
VAGUE_ACTIONS=(
|
|
||||||
"help me"
|
|
||||||
"help with"
|
|
||||||
"do something"
|
|
||||||
"work on"
|
|
||||||
"look at"
|
|
||||||
"check this"
|
|
||||||
"fix it"
|
|
||||||
"fix this"
|
|
||||||
"make it better"
|
|
||||||
"make this better"
|
|
||||||
"improve it"
|
|
||||||
"improve this"
|
|
||||||
"update this"
|
|
||||||
"update it"
|
|
||||||
"change it"
|
|
||||||
"change this"
|
|
||||||
"can you"
|
|
||||||
"could you"
|
|
||||||
"would you"
|
|
||||||
"please help"
|
|
||||||
)
|
|
||||||
|
|
||||||
PROMPT_LOWER=$(echo "$PROMPT" | tr '[:upper:]' '[:lower:]')
|
|
||||||
|
|
||||||
for phrase in "${VAGUE_ACTIONS[@]}"; do
|
|
||||||
if [[ "$PROMPT_LOWER" == *"$phrase"* ]]; then
|
|
||||||
SCORE=$(echo "$SCORE + 0.2" | bc)
|
|
||||||
break
|
|
||||||
fi
|
|
||||||
done
|
|
||||||
|
|
||||||
# Signal 3: Ambiguous scope indicators
|
|
||||||
AMBIGUOUS_SCOPE=(
|
|
||||||
"somehow"
|
|
||||||
"something"
|
|
||||||
"somewhere"
|
|
||||||
"anything"
|
|
||||||
"whatever"
|
|
||||||
"stuff"
|
|
||||||
"things"
|
|
||||||
"etc"
|
|
||||||
"and so on"
|
|
||||||
)
|
|
||||||
|
|
||||||
for word in "${AMBIGUOUS_SCOPE[@]}"; do
|
|
||||||
if [[ "$PROMPT_LOWER" == *"$word"* ]]; then
|
|
||||||
SCORE=$(echo "$SCORE + 0.15" | bc)
|
|
||||||
break
|
|
||||||
fi
|
|
||||||
done
|
|
||||||
|
|
||||||
# Signal 4: Missing context indicators (no reference to what/where)
|
|
||||||
# Check if prompt lacks specificity markers
|
|
||||||
HAS_SPECIFICS=false
|
|
||||||
|
|
||||||
# Specific technical terms suggest clarity
|
|
||||||
SPECIFIC_MARKERS=(
|
|
||||||
"function"
|
|
||||||
"class"
|
|
||||||
"method"
|
|
||||||
"variable"
|
|
||||||
"error"
|
|
||||||
"bug"
|
|
||||||
"test"
|
|
||||||
"api"
|
|
||||||
"endpoint"
|
|
||||||
"database"
|
|
||||||
"query"
|
|
||||||
"component"
|
|
||||||
"module"
|
|
||||||
"service"
|
|
||||||
"config"
|
|
||||||
"install"
|
|
||||||
"deploy"
|
|
||||||
"build"
|
|
||||||
"run"
|
|
||||||
"execute"
|
|
||||||
"create"
|
|
||||||
"delete"
|
|
||||||
"add"
|
|
||||||
"remove"
|
|
||||||
"implement"
|
|
||||||
"refactor"
|
|
||||||
"migrate"
|
|
||||||
"upgrade"
|
|
||||||
"debug"
|
|
||||||
"log"
|
|
||||||
"exception"
|
|
||||||
"stack"
|
|
||||||
"memory"
|
|
||||||
"performance"
|
|
||||||
"security"
|
|
||||||
"auth"
|
|
||||||
"token"
|
|
||||||
"session"
|
|
||||||
"route"
|
|
||||||
"controller"
|
|
||||||
"model"
|
|
||||||
"view"
|
|
||||||
"template"
|
|
||||||
"schema"
|
|
||||||
"migration"
|
|
||||||
"commit"
|
|
||||||
"branch"
|
|
||||||
"merge"
|
|
||||||
"pull"
|
|
||||||
"push"
|
|
||||||
)
|
|
||||||
|
|
||||||
for marker in "${SPECIFIC_MARKERS[@]}"; do
|
|
||||||
if [[ "$PROMPT_LOWER" == *"$marker"* ]]; then
|
|
||||||
HAS_SPECIFICS=true
|
|
||||||
break
|
|
||||||
fi
|
|
||||||
done
|
|
||||||
|
|
||||||
if [[ "$HAS_SPECIFICS" == false ]] && [[ "$WORD_COUNT" -gt 3 ]]; then
|
|
||||||
SCORE=$(echo "$SCORE + 0.2" | bc)
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Signal 5: Question without context
|
|
||||||
if [[ "$PROMPT" =~ \?$ ]] && [[ "$WORD_COUNT" -lt 8 ]]; then
|
|
||||||
# Short questions without specifics are often vague
|
|
||||||
if [[ "$HAS_SPECIFICS" == false ]]; then
|
|
||||||
SCORE=$(echo "$SCORE + 0.15" | bc)
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Cap score at 1.0
|
|
||||||
if (( $(echo "$SCORE > 1.0" | bc -l) )); then
|
|
||||||
SCORE="1.0"
|
|
||||||
fi
|
|
||||||
|
|
||||||
# ============================================================================
|
|
||||||
# Output suggestion if score exceeds threshold
|
|
||||||
# ============================================================================
|
|
||||||
|
|
||||||
# Compare score to threshold using bc
|
|
||||||
if (( $(echo "$SCORE >= $THRESHOLD" | bc -l) )); then
|
|
||||||
# Format score as percentage for display
|
|
||||||
SCORE_PCT=$(echo "$SCORE * 100" | bc | cut -d'.' -f1)
|
|
||||||
|
|
||||||
# Gentle, non-blocking suggestion
|
|
||||||
echo "$PREFIX Your prompt could benefit from more clarity."
|
|
||||||
echo "$PREFIX Consider running /clarity-assist to refine your request."
|
|
||||||
echo "$PREFIX (Vagueness score: ${SCORE_PCT}% - this is a suggestion, not a block)"
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Always exit 0 - this hook is non-blocking
|
|
||||||
exit 0
|
|
||||||
@@ -37,33 +37,6 @@ Create a new CLAUDE.md tailored to your project.
|
|||||||
/config-init
|
/config-init
|
||||||
```
|
```
|
||||||
|
|
||||||
### `/config-diff`
|
|
||||||
Show differences between current CLAUDE.md and previous versions.
|
|
||||||
|
|
||||||
```
|
|
||||||
/config-diff # Compare working copy vs last commit
|
|
||||||
/config-diff --commit=abc1234 # Compare against specific commit
|
|
||||||
/config-diff --from=v1.0 --to=v2.0 # Compare two commits
|
|
||||||
/config-diff --section="Critical Rules" # Focus on specific section
|
|
||||||
```
|
|
||||||
|
|
||||||
### `/config-lint`
|
|
||||||
Lint CLAUDE.md for common anti-patterns and best practices.
|
|
||||||
|
|
||||||
```
|
|
||||||
/config-lint # Run all lint checks
|
|
||||||
/config-lint --fix # Auto-fix fixable issues
|
|
||||||
/config-lint --rules=security # Check only security rules
|
|
||||||
/config-lint --severity=error # Show only errors
|
|
||||||
```
|
|
||||||
|
|
||||||
**Lint Rule Categories:**
|
|
||||||
- **Security (SEC)** - Hardcoded secrets, paths, credentials
|
|
||||||
- **Structure (STR)** - Header hierarchy, required sections
|
|
||||||
- **Content (CNT)** - Contradictions, duplicates, vague rules
|
|
||||||
- **Format (FMT)** - Consistency, code blocks, whitespace
|
|
||||||
- **Best Practice (BPR)** - Missing Quick Start, Critical Rules sections
|
|
||||||
|
|
||||||
## Best Practices
|
## Best Practices
|
||||||
|
|
||||||
A good CLAUDE.md should be:
|
A good CLAUDE.md should be:
|
||||||
|
|||||||
@@ -1,239 +0,0 @@
|
|||||||
---
|
|
||||||
description: Show diff between current CLAUDE.md and last commit
|
|
||||||
---
|
|
||||||
|
|
||||||
# Compare CLAUDE.md Changes
|
|
||||||
|
|
||||||
This command shows differences between your current CLAUDE.md file and previous versions, helping track configuration drift and review changes before committing.
|
|
||||||
|
|
||||||
## What This Command Does
|
|
||||||
|
|
||||||
1. **Detect CLAUDE.md Location** - Finds the project's CLAUDE.md file
|
|
||||||
2. **Compare Versions** - Shows diff against last commit or specified revision
|
|
||||||
3. **Highlight Sections** - Groups changes by affected sections
|
|
||||||
4. **Summarize Impact** - Explains what the changes mean for Claude's behavior
|
|
||||||
|
|
||||||
## Usage
|
|
||||||
|
|
||||||
```
|
|
||||||
/config-diff
|
|
||||||
```
|
|
||||||
|
|
||||||
Compare against a specific commit:
|
|
||||||
|
|
||||||
```
|
|
||||||
/config-diff --commit=abc1234
|
|
||||||
/config-diff --commit=HEAD~3
|
|
||||||
```
|
|
||||||
|
|
||||||
Compare two specific commits:
|
|
||||||
|
|
||||||
```
|
|
||||||
/config-diff --from=abc1234 --to=def5678
|
|
||||||
```
|
|
||||||
|
|
||||||
Show only specific sections:
|
|
||||||
|
|
||||||
```
|
|
||||||
/config-diff --section="Critical Rules"
|
|
||||||
/config-diff --section="Quick Start"
|
|
||||||
```
|
|
||||||
|
|
||||||
## Comparison Modes
|
|
||||||
|
|
||||||
### Default: Working vs Last Commit
|
|
||||||
Shows uncommitted changes to CLAUDE.md:
|
|
||||||
```
|
|
||||||
/config-diff
|
|
||||||
```
|
|
||||||
|
|
||||||
### Working vs Specific Commit
|
|
||||||
Shows changes since a specific point:
|
|
||||||
```
|
|
||||||
/config-diff --commit=v1.0.0
|
|
||||||
```
|
|
||||||
|
|
||||||
### Commit to Commit
|
|
||||||
Shows changes between two historical versions:
|
|
||||||
```
|
|
||||||
/config-diff --from=v1.0.0 --to=v2.0.0
|
|
||||||
```
|
|
||||||
|
|
||||||
### Branch Comparison
|
|
||||||
Shows CLAUDE.md differences between branches:
|
|
||||||
```
|
|
||||||
/config-diff --branch=main
|
|
||||||
/config-diff --from=feature-branch --to=main
|
|
||||||
```
|
|
||||||
|
|
||||||
## Expected Output
|
|
||||||
|
|
||||||
```
|
|
||||||
CLAUDE.md Diff Report
|
|
||||||
=====================
|
|
||||||
|
|
||||||
File: /path/to/project/CLAUDE.md
|
|
||||||
Comparing: Working copy vs HEAD (last commit)
|
|
||||||
Commit: abc1234 "Update build commands" (2 days ago)
|
|
||||||
|
|
||||||
Summary:
|
|
||||||
- Lines added: 12
|
|
||||||
- Lines removed: 5
|
|
||||||
- Net change: +7 lines
|
|
||||||
- Sections affected: 3
|
|
||||||
|
|
||||||
Section Changes:
|
|
||||||
----------------
|
|
||||||
|
|
||||||
## Quick Start [MODIFIED]
|
|
||||||
- Added new environment variable requirement
|
|
||||||
- Updated test command with coverage flag
|
|
||||||
|
|
||||||
## Critical Rules [ADDED CONTENT]
|
|
||||||
+ New rule: "Never modify database migrations directly"
|
|
||||||
|
|
||||||
## Architecture [UNCHANGED]
|
|
||||||
|
|
||||||
## Common Operations [MODIFIED]
|
|
||||||
- Removed deprecated deployment command
|
|
||||||
- Added new Docker workflow
|
|
||||||
|
|
||||||
Detailed Diff:
|
|
||||||
--------------
|
|
||||||
|
|
||||||
--- CLAUDE.md (HEAD)
|
|
||||||
+++ CLAUDE.md (working)
|
|
||||||
|
|
||||||
@@ -15,7 +15,10 @@
|
|
||||||
## Quick Start
|
|
||||||
|
|
||||||
```bash
|
|
||||||
+export DATABASE_URL=postgres://... # Required
|
|
||||||
pip install -r requirements.txt
|
|
||||||
-pytest
|
|
||||||
+pytest --cov=src # Run with coverage
|
|
||||||
uvicorn main:app --reload
|
|
||||||
```
|
|
||||||
|
|
||||||
@@ -45,6 +48,7 @@
|
|
||||||
## Critical Rules
|
|
||||||
|
|
||||||
- Never modify `.env` files directly
|
|
||||||
+- Never modify database migrations directly
|
|
||||||
- Always run tests before committing
|
|
||||||
|
|
||||||
Behavioral Impact:
|
|
||||||
------------------
|
|
||||||
|
|
||||||
These changes will affect Claude's behavior:
|
|
||||||
|
|
||||||
1. [NEW REQUIREMENT] Claude will now export DATABASE_URL before running
|
|
||||||
2. [MODIFIED] Test command now includes coverage reporting
|
|
||||||
3. [NEW RULE] Claude will avoid direct migration modifications
|
|
||||||
|
|
||||||
Review: Do these changes reflect your intended configuration?
|
|
||||||
```
|
|
||||||
|
|
||||||
## Section-Focused View
|
|
||||||
|
|
||||||
When using `--section`, output focuses on specific areas:
|
|
||||||
|
|
||||||
```
|
|
||||||
/config-diff --section="Critical Rules"
|
|
||||||
|
|
||||||
CLAUDE.md Section Diff: Critical Rules
|
|
||||||
======================================
|
|
||||||
|
|
||||||
--- HEAD
|
|
||||||
+++ Working
|
|
||||||
|
|
||||||
## Critical Rules
|
|
||||||
|
|
||||||
- Never modify `.env` files directly
|
|
||||||
+- Never modify database migrations directly
|
|
||||||
+- Always use type hints in Python code
|
|
||||||
- Always run tests before committing
|
|
||||||
-- Keep functions under 50 lines
|
|
||||||
|
|
||||||
Changes:
|
|
||||||
+ 2 rules added
|
|
||||||
- 1 rule removed
|
|
||||||
|
|
||||||
Impact: Claude will follow 2 new constraints and no longer enforce
|
|
||||||
the 50-line function limit.
|
|
||||||
```
|
|
||||||
|
|
||||||
## Options
|
|
||||||
|
|
||||||
| Option | Description |
|
|
||||||
|--------|-------------|
|
|
||||||
| `--commit=REF` | Compare working copy against specific commit/tag |
|
|
||||||
| `--from=REF` | Starting point for comparison |
|
|
||||||
| `--to=REF` | Ending point for comparison (default: HEAD) |
|
|
||||||
| `--branch=NAME` | Compare against branch tip |
|
|
||||||
| `--section=NAME` | Show only changes to specific section |
|
|
||||||
| `--stat` | Show only statistics, no detailed diff |
|
|
||||||
| `--no-color` | Disable colored output |
|
|
||||||
| `--context=N` | Lines of context around changes (default: 3) |
|
|
||||||
|
|
||||||
## Understanding the Output
|
|
||||||
|
|
||||||
### Change Indicators
|
|
||||||
|
|
||||||
| Symbol | Meaning |
|
|
||||||
|--------|---------|
|
|
||||||
| `+` | Line added |
|
|
||||||
| `-` | Line removed |
|
|
||||||
| `@@` | Location marker showing line numbers |
|
|
||||||
| `[MODIFIED]` | Section has changes |
|
|
||||||
| `[ADDED]` | New section created |
|
|
||||||
| `[REMOVED]` | Section deleted |
|
|
||||||
| `[UNCHANGED]` | No changes to section |
|
|
||||||
|
|
||||||
### Impact Categories
|
|
||||||
|
|
||||||
- **NEW REQUIREMENT** - Claude will now need to do something new
|
|
||||||
- **REMOVED REQUIREMENT** - Claude no longer needs to do something
|
|
||||||
- **MODIFIED** - Existing behavior changed
|
|
||||||
- **NEW RULE** - New constraint added
|
|
||||||
- **RELAXED RULE** - Constraint removed or softened
|
|
||||||
|
|
||||||
## When to Use
|
|
||||||
|
|
||||||
Run `/config-diff` when:
|
|
||||||
- Before committing CLAUDE.md changes
|
|
||||||
- Reviewing what changed after pulling updates
|
|
||||||
- Debugging unexpected Claude behavior
|
|
||||||
- Auditing configuration changes over time
|
|
||||||
- Comparing configurations across branches
|
|
||||||
|
|
||||||
## Integration with Other Commands
|
|
||||||
|
|
||||||
| Workflow | Commands |
|
|
||||||
|----------|----------|
|
|
||||||
| Review before commit | `/config-diff` then `git commit` |
|
|
||||||
| After optimization | `/config-optimize` then `/config-diff` |
|
|
||||||
| Audit history | `/config-diff --from=v1.0.0 --to=HEAD` |
|
|
||||||
| Branch comparison | `/config-diff --branch=main` |
|
|
||||||
|
|
||||||
## Tips
|
|
||||||
|
|
||||||
1. **Review before committing** - Always check what changed
|
|
||||||
2. **Track behavioral changes** - Focus on rules and requirements sections
|
|
||||||
3. **Use section filtering** - Large files benefit from focused diffs
|
|
||||||
4. **Compare across releases** - Use tags to track major changes
|
|
||||||
5. **Check after merges** - Ensure CLAUDE.md didn't get conflict artifacts
|
|
||||||
|
|
||||||
## Troubleshooting
|
|
||||||
|
|
||||||
### "No changes detected"
|
|
||||||
- CLAUDE.md matches the comparison target
|
|
||||||
- Check if you're comparing the right commits
|
|
||||||
|
|
||||||
### "File not found in commit"
|
|
||||||
- CLAUDE.md didn't exist at that commit
|
|
||||||
- Use `git log -- CLAUDE.md` to find when it was created
|
|
||||||
|
|
||||||
### "Not a git repository"
|
|
||||||
- This command requires git history
|
|
||||||
- Initialize git or use file backup comparison instead
|
|
||||||
@@ -1,334 +0,0 @@
|
|||||||
---
|
|
||||||
description: Lint CLAUDE.md for common anti-patterns and best practices
|
|
||||||
---
|
|
||||||
|
|
||||||
# Lint CLAUDE.md
|
|
||||||
|
|
||||||
This command checks your CLAUDE.md file against best practices and detects common anti-patterns that can cause issues with Claude Code.
|
|
||||||
|
|
||||||
## What This Command Does
|
|
||||||
|
|
||||||
1. **Parse Structure** - Validates markdown structure and hierarchy
|
|
||||||
2. **Check Security** - Detects hardcoded paths, secrets, and sensitive data
|
|
||||||
3. **Validate Content** - Identifies anti-patterns and problematic instructions
|
|
||||||
4. **Verify Format** - Ensures consistent formatting and style
|
|
||||||
5. **Generate Report** - Provides actionable findings with fix suggestions
|
|
||||||
|
|
||||||
## Usage
|
|
||||||
|
|
||||||
```
|
|
||||||
/config-lint
|
|
||||||
```
|
|
||||||
|
|
||||||
Lint with auto-fix:
|
|
||||||
|
|
||||||
```
|
|
||||||
/config-lint --fix
|
|
||||||
```
|
|
||||||
|
|
||||||
Check specific rules only:
|
|
||||||
|
|
||||||
```
|
|
||||||
/config-lint --rules=security,structure
|
|
||||||
```
|
|
||||||
|
|
||||||
## Linting Rules
|
|
||||||
|
|
||||||
### Security Rules (SEC)
|
|
||||||
|
|
||||||
| Rule | Description | Severity |
|
|
||||||
|------|-------------|----------|
|
|
||||||
| SEC001 | Hardcoded absolute paths | Warning |
|
|
||||||
| SEC002 | Potential secrets/API keys | Error |
|
|
||||||
| SEC003 | Hardcoded IP addresses | Warning |
|
|
||||||
| SEC004 | Exposed credentials patterns | Error |
|
|
||||||
| SEC005 | Hardcoded URLs with tokens | Error |
|
|
||||||
| SEC006 | Environment variable values (not names) | Warning |
|
|
||||||
|
|
||||||
### Structure Rules (STR)
|
|
||||||
|
|
||||||
| Rule | Description | Severity |
|
|
||||||
|------|-------------|----------|
|
|
||||||
| STR001 | Missing required sections | Error |
|
|
||||||
| STR002 | Invalid header hierarchy (h3 before h2) | Warning |
|
|
||||||
| STR003 | Orphaned content (text before first header) | Info |
|
|
||||||
| STR004 | Excessive nesting depth (>4 levels) | Warning |
|
|
||||||
| STR005 | Empty sections | Warning |
|
|
||||||
| STR006 | Missing section content | Warning |
|
|
||||||
|
|
||||||
### Content Rules (CNT)
|
|
||||||
|
|
||||||
| Rule | Description | Severity |
|
|
||||||
|------|-------------|----------|
|
|
||||||
| CNT001 | Contradictory instructions | Error |
|
|
||||||
| CNT002 | Vague or ambiguous rules | Warning |
|
|
||||||
| CNT003 | Overly long sections (>100 lines) | Info |
|
|
||||||
| CNT004 | Duplicate content | Warning |
|
|
||||||
| CNT005 | TODO/FIXME in production config | Warning |
|
|
||||||
| CNT006 | Outdated version references | Info |
|
|
||||||
| CNT007 | Broken internal links | Warning |
|
|
||||||
|
|
||||||
### Format Rules (FMT)
|
|
||||||
|
|
||||||
| Rule | Description | Severity |
|
|
||||||
|------|-------------|----------|
|
|
||||||
| FMT001 | Inconsistent header styles | Info |
|
|
||||||
| FMT002 | Inconsistent list markers | Info |
|
|
||||||
| FMT003 | Missing code block language | Info |
|
|
||||||
| FMT004 | Trailing whitespace | Info |
|
|
||||||
| FMT005 | Missing blank lines around headers | Info |
|
|
||||||
| FMT006 | Inconsistent indentation | Info |
|
|
||||||
|
|
||||||
### Best Practice Rules (BPR)
|
|
||||||
|
|
||||||
| Rule | Description | Severity |
|
|
||||||
|------|-------------|----------|
|
|
||||||
| BPR001 | No Quick Start section | Warning |
|
|
||||||
| BPR002 | No Critical Rules section | Warning |
|
|
||||||
| BPR003 | Instructions without examples | Info |
|
|
||||||
| BPR004 | Commands without explanation | Info |
|
|
||||||
| BPR005 | Rules without rationale | Info |
|
|
||||||
| BPR006 | Missing plugin integration docs | Info |
|
|
||||||
|
|
||||||
## Expected Output
|
|
||||||
|
|
||||||
```
|
|
||||||
CLAUDE.md Lint Report
|
|
||||||
=====================
|
|
||||||
|
|
||||||
File: /path/to/project/CLAUDE.md
|
|
||||||
Rules checked: 25
|
|
||||||
Time: 0.3s
|
|
||||||
|
|
||||||
Summary:
|
|
||||||
Errors: 2
|
|
||||||
Warnings: 5
|
|
||||||
Info: 3
|
|
||||||
|
|
||||||
Findings:
|
|
||||||
---------
|
|
||||||
|
|
||||||
[ERROR] SEC002: Potential secret detected (line 45)
|
|
||||||
│ api_key = "sk-1234567890abcdef"
|
|
||||||
│ ^^^^^^^^^^^^^^^^^^^^^^
|
|
||||||
└─ Hardcoded API key found. Use environment variable reference instead.
|
|
||||||
|
|
||||||
Suggested fix:
|
|
||||||
- api_key = "sk-1234567890abcdef"
|
|
||||||
+ api_key = $OPENAI_API_KEY # Set in environment
|
|
||||||
|
|
||||||
[ERROR] CNT001: Contradictory instructions (lines 23, 67)
|
|
||||||
│ Line 23: "Always run tests before committing"
|
|
||||||
│ Line 67: "Skip tests for documentation-only changes"
|
|
||||||
│
|
|
||||||
└─ These rules conflict. Clarify the exception explicitly.
|
|
||||||
|
|
||||||
Suggested fix:
|
|
||||||
+ "Always run tests before committing, except for documentation-only
|
|
||||||
+ changes (files in docs/ directory)"
|
|
||||||
|
|
||||||
[WARNING] SEC001: Hardcoded absolute path (line 12)
|
|
||||||
│ Database location: /home/user/data/myapp.db
|
|
||||||
│ ^^^^^^^^^^^^^^^^^^^^^^^^
|
|
||||||
└─ Absolute paths break portability. Use relative or variable.
|
|
||||||
|
|
||||||
Suggested fix:
|
|
||||||
- Database location: /home/user/data/myapp.db
|
|
||||||
+ Database location: ./data/myapp.db # Or $DATA_DIR/myapp.db
|
|
||||||
|
|
||||||
[WARNING] STR002: Invalid header hierarchy (line 34)
|
|
||||||
│ ### Subsection
|
|
||||||
│ (no preceding ## header)
|
|
||||||
│
|
|
||||||
└─ H3 header without parent H2. Add H2 or promote to H2.
|
|
||||||
|
|
||||||
[WARNING] CNT004: Duplicate content (lines 45-52, 89-96)
|
|
||||||
│ Same git workflow documented twice
|
|
||||||
│
|
|
||||||
└─ Remove duplicate or consolidate into single section.
|
|
||||||
|
|
||||||
[WARNING] STR005: Empty section (line 78)
|
|
||||||
│ ## Troubleshooting
|
|
||||||
│ (no content)
|
|
||||||
│
|
|
||||||
└─ Add content or remove empty section.
|
|
||||||
|
|
||||||
[WARNING] BPR002: No Critical Rules section
|
|
||||||
│ Missing "Critical Rules" or "Important Rules" section
|
|
||||||
│
|
|
||||||
└─ Add a section highlighting must-follow rules for Claude.
|
|
||||||
|
|
||||||
[INFO] FMT003: Missing code block language (line 56)
|
|
||||||
│ ```
|
|
||||||
│ npm install
|
|
||||||
│ ```
|
|
||||||
│
|
|
||||||
└─ Specify language for syntax highlighting: ```bash
|
|
||||||
|
|
||||||
[INFO] CNT003: Overly long section (lines 100-215)
|
|
||||||
│ "Architecture" section is 115 lines
|
|
||||||
│
|
|
||||||
└─ Consider breaking into subsections or condensing.
|
|
||||||
|
|
||||||
[INFO] FMT001: Inconsistent header styles
|
|
||||||
│ Line 10: "## Quick Start"
|
|
||||||
│ Line 25: "## Architecture:"
|
|
||||||
│ (colon suffix inconsistent)
|
|
||||||
│
|
|
||||||
└─ Standardize header format throughout document.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
Auto-fixable: 4 issues (run with --fix)
|
|
||||||
Manual review required: 6 issues
|
|
||||||
|
|
||||||
Run `/config-lint --fix` to apply automatic fixes.
|
|
||||||
```
|
|
||||||
|
|
||||||
## Options
|
|
||||||
|
|
||||||
| Option | Description |
|
|
||||||
|--------|-------------|
|
|
||||||
| `--fix` | Automatically fix auto-fixable issues |
|
|
||||||
| `--rules=LIST` | Check only specified rule categories |
|
|
||||||
| `--ignore=LIST` | Skip specified rules (e.g., `--ignore=FMT001,FMT002`) |
|
|
||||||
| `--severity=LEVEL` | Show only issues at or above level (error/warning/info) |
|
|
||||||
| `--format=FORMAT` | Output format: `text` (default), `json`, `sarif` |
|
|
||||||
| `--config=FILE` | Use custom lint configuration |
|
|
||||||
| `--strict` | Treat warnings as errors |
|
|
||||||
|
|
||||||
## Rule Categories
|
|
||||||
|
|
||||||
Use `--rules` to focus on specific areas:
|
|
||||||
|
|
||||||
```
|
|
||||||
/config-lint --rules=security # Only security checks
|
|
||||||
/config-lint --rules=structure # Only structure checks
|
|
||||||
/config-lint --rules=security,content # Multiple categories
|
|
||||||
```
|
|
||||||
|
|
||||||
Available categories:
|
|
||||||
- `security` - SEC rules
|
|
||||||
- `structure` - STR rules
|
|
||||||
- `content` - CNT rules
|
|
||||||
- `format` - FMT rules
|
|
||||||
- `bestpractice` - BPR rules
|
|
||||||
|
|
||||||
## Custom Configuration
|
|
||||||
|
|
||||||
Create `.claude-lint.json` in project root:
|
|
||||||
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"rules": {
|
|
||||||
"SEC001": "warning",
|
|
||||||
"FMT001": "off",
|
|
||||||
"CNT003": {
|
|
||||||
"severity": "warning",
|
|
||||||
"maxLines": 150
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"ignore": [
|
|
||||||
"FMT*"
|
|
||||||
],
|
|
||||||
"requiredSections": [
|
|
||||||
"Quick Start",
|
|
||||||
"Critical Rules",
|
|
||||||
"Project Overview"
|
|
||||||
]
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
## Anti-Pattern Examples
|
|
||||||
|
|
||||||
### Hardcoded Secrets (SEC002)
|
|
||||||
```markdown
|
|
||||||
# BAD
|
|
||||||
API_KEY=sk-1234567890abcdef
|
|
||||||
|
|
||||||
# GOOD
|
|
||||||
API_KEY=$OPENAI_API_KEY # Set via environment
|
|
||||||
```
|
|
||||||
|
|
||||||
### Hardcoded Paths (SEC001)
|
|
||||||
```markdown
|
|
||||||
# BAD
|
|
||||||
Config file: /home/john/projects/myapp/config.yml
|
|
||||||
|
|
||||||
# GOOD
|
|
||||||
Config file: ./config.yml
|
|
||||||
Config file: $PROJECT_ROOT/config.yml
|
|
||||||
```
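
How these two checks might be automated is sketched below with simple regexes; the patterns are illustrative only, and a real linter would carry a much broader ruleset:

```python
import re

SECRET_PATTERN = re.compile(r"""(?xi)
    (api[_-]?key|secret|token|password)\s*[:=]\s*["']?[A-Za-z0-9_\-]{12,}   # SEC002: inline credential
  | \bsk-[A-Za-z0-9]{16,}\b                                                 # SEC002: OpenAI-style key
""")
ABS_PATH_PATTERN = re.compile(r"(?<![\w.])/(?:home|Users)/\S+")              # SEC001: user-specific path

def lint_security(text):
    """Yield (line number, rule, excerpt) for basic SEC001/SEC002 findings."""
    for lineno, line in enumerate(text.splitlines(), 1):
        if SECRET_PATTERN.search(line):
            yield lineno, "SEC002", line.strip()
        if ABS_PATH_PATTERN.search(line):
            yield lineno, "SEC001", line.strip()

sample = 'api_key = "sk-1234567890abcdef"\nConfig file: /home/john/projects/myapp/config.yml\n'
for finding in lint_security(sample):
    print(finding)
```
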
### Contradictory Rules (CNT001)
|
|
||||||
```markdown
|
|
||||||
# BAD
|
|
||||||
- Always use TypeScript
|
|
||||||
- JavaScript files are acceptable for scripts
|
|
||||||
|
|
||||||
# GOOD
|
|
||||||
- Always use TypeScript for source code
|
|
||||||
- JavaScript (.js) is acceptable only for config files and scripts
|
|
||||||
```
|
|
||||||
|
|
||||||
### Vague Instructions (CNT002)
|
|
||||||
```markdown
|
|
||||||
# BAD
|
|
||||||
- Be careful with the database
|
|
||||||
|
|
||||||
# GOOD
|
|
||||||
- Never run DELETE without WHERE clause
|
|
||||||
- Always backup before migrations
|
|
||||||
```
|
|
||||||
|
|
||||||
### Invalid Hierarchy (STR002)
|
|
||||||
```markdown
|
|
||||||
# BAD
|
|
||||||
# Main Title
|
|
||||||
### Skipped Level
|
|
||||||
|
|
||||||
# GOOD
|
|
||||||
# Main Title
|
|
||||||
## Section
|
|
||||||
### Subsection
|
|
||||||
```
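
A hedged sketch of how the STR002 check itself could work, walking heading levels and flagging any jump of more than one level (the function name is illustrative):

```python
import re

def check_header_hierarchy(markdown_text):
    """Return (line number, message) pairs where a heading skips a level (STR002)."""
    issues, previous_level = [], 0
    for lineno, line in enumerate(markdown_text.splitlines(), 1):
        match = re.match(r"(#{1,6})\s", line)
        if not match:
            continue
        level = len(match.group(1))
        if previous_level and level > previous_level + 1:
            issues.append((lineno, f"H{level} follows H{previous_level}; expected H{previous_level + 1} or shallower"))
        previous_level = level
    return issues

bad = "# Main Title\n\n### Skipped Level\n"
print(check_header_hierarchy(bad))   # [(3, 'H3 follows H1; expected H2 or shallower')]
```
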
## When to Use
|
|
||||||
|
|
||||||
Run `/config-lint` when:
|
|
||||||
- Before committing CLAUDE.md changes
|
|
||||||
- During code review for CLAUDE.md modifications
|
|
||||||
- Setting up CI/CD checks for configuration files
|
|
||||||
- After major edits to catch introduced issues
|
|
||||||
- Periodically as maintenance check
|
|
||||||
|
|
||||||
## Integration with CI/CD
|
|
||||||
|
|
||||||
Add to your CI pipeline:
|
|
||||||
|
|
||||||
```yaml
|
|
||||||
# GitHub Actions example
|
|
||||||
- name: Lint CLAUDE.md
|
|
||||||
run: claude /config-lint --strict --format=sarif > lint-results.sarif
|
|
||||||
|
|
||||||
- name: Upload SARIF
|
|
||||||
uses: github/codeql-action/upload-sarif@v2
|
|
||||||
with:
|
|
||||||
sarif_file: lint-results.sarif
|
|
||||||
```
|
|
||||||
|
|
||||||
## Tips
|
|
||||||
|
|
||||||
1. **Start with errors** - Fix errors before warnings
|
|
||||||
2. **Use --fix carefully** - Review auto-fixes before committing
|
|
||||||
3. **Configure per-project** - Different projects have different needs
|
|
||||||
4. **Integrate in CI** - Catch issues before they reach main
|
|
||||||
5. **Review periodically** - Run lint check monthly as maintenance
|
|
||||||
|
|
||||||
## Related Commands
|
|
||||||
|
|
||||||
| Command | Relationship |
|
|
||||||
|---------|--------------|
|
|
||||||
| `/config-analyze` | Deeper content analysis (complements lint) |
|
|
||||||
| `/config-optimize` | Applies fixes and improvements |
|
|
||||||
| `/config-diff` | Shows what changed (run lint before commit) |
|
|
||||||
@@ -1,6 +1,6 @@
|
|||||||
{
|
{
|
||||||
"name": "cmdb-assistant",
|
"name": "cmdb-assistant",
|
||||||
"version": "1.2.0",
|
"version": "1.1.0",
|
||||||
"description": "NetBox CMDB integration with data quality validation - query, create, update, and manage network devices, IP addresses, sites, and more with best practices enforcement",
|
"description": "NetBox CMDB integration with data quality validation - query, create, update, and manage network devices, IP addresses, sites, and more with best practices enforcement",
|
||||||
"author": {
|
"author": {
|
||||||
"name": "Leo Miranda",
|
"name": "Leo Miranda",
|
||||||
|
|||||||
@@ -2,12 +2,6 @@
|
|||||||
|
|
||||||
A Claude Code plugin for NetBox CMDB integration - query, create, update, and manage your network infrastructure directly from Claude Code.
|
A Claude Code plugin for NetBox CMDB integration - query, create, update, and manage your network infrastructure directly from Claude Code.
|
||||||
|
|
||||||
## What's New in v1.2.0
|
|
||||||
|
|
||||||
- **`/cmdb-topology`**: Generate Mermaid diagrams showing infrastructure topology (rack view, network view, site overview)
|
|
||||||
- **`/change-audit`**: Query and analyze NetBox audit log for change tracking and compliance
|
|
||||||
- **`/ip-conflicts`**: Detect IP address conflicts and overlapping prefixes
|
|
||||||
|
|
||||||
## What's New in v1.1.0
|
## What's New in v1.1.0
|
||||||
|
|
||||||
- **Data Quality Validation**: Hooks for SessionStart and PreToolUse that check data quality and warn about missing fields
|
- **Data Quality Validation**: Hooks for SessionStart and PreToolUse that check data quality and warn about missing fields
|
||||||
@@ -65,9 +59,6 @@ Add to your Claude Code plugins or marketplace configuration.
|
|||||||
| `/cmdb-audit [scope]` | Data quality analysis (all, vms, devices, naming, roles) |
|
| `/cmdb-audit [scope]` | Data quality analysis (all, vms, devices, naming, roles) |
|
||||||
| `/cmdb-register` | Register current machine into NetBox with running apps |
|
| `/cmdb-register` | Register current machine into NetBox with running apps |
|
||||||
| `/cmdb-sync` | Sync machine state with NetBox (detect drift, update) |
|
| `/cmdb-sync` | Sync machine state with NetBox (detect drift, update) |
|
||||||
| `/cmdb-topology <view>` | Generate Mermaid diagrams (rack, network, site, full) |
|
|
||||||
| `/change-audit [filters]` | Query NetBox audit log for change tracking |
|
|
||||||
| `/ip-conflicts [scope]` | Detect IP conflicts and overlapping prefixes |
|
|
||||||
|
|
||||||
## Agent
|
## Agent
|
||||||
|
|
||||||
@@ -149,12 +140,9 @@ cmdb-assistant/
|
|||||||
│ ├── cmdb-device.md # Device management
|
│ ├── cmdb-device.md # Device management
|
||||||
│ ├── cmdb-ip.md # IP management
|
│ ├── cmdb-ip.md # IP management
|
||||||
│ ├── cmdb-site.md # Site management
|
│ ├── cmdb-site.md # Site management
|
||||||
│ ├── cmdb-audit.md # Data quality audit
|
│ ├── cmdb-audit.md # Data quality audit (NEW)
|
||||||
│ ├── cmdb-register.md # Machine registration
|
│ ├── cmdb-register.md # Machine registration (NEW)
|
||||||
│ ├── cmdb-sync.md # Machine sync
|
│ └── cmdb-sync.md # Machine sync (NEW)
|
||||||
│ ├── cmdb-topology.md # Topology visualization (NEW)
|
|
||||||
│ ├── change-audit.md # Change audit log (NEW)
|
|
||||||
│ └── ip-conflicts.md # IP conflict detection (NEW)
|
|
||||||
├── hooks/
|
├── hooks/
|
||||||
│ ├── hooks.json # Hook configuration
|
│ ├── hooks.json # Hook configuration
|
||||||
│ ├── startup-check.sh # SessionStart validation
|
│ ├── startup-check.sh # SessionStart validation
|
||||||
|
|||||||
@@ -1,163 +0,0 @@
|
|||||||
---
|
|
||||||
description: Audit NetBox changes with filtering by date, user, or object type
|
|
||||||
---
|
|
||||||
|
|
||||||
# CMDB Change Audit
|
|
||||||
|
|
||||||
Query and analyze the NetBox audit log for change tracking and compliance.
|
|
||||||
|
|
||||||
## Usage
|
|
||||||
|
|
||||||
```
|
|
||||||
/change-audit [filters]
|
|
||||||
```
|
|
||||||
|
|
||||||
**Filters:**
|
|
||||||
- `last <N> days/hours` - Changes within time period
|
|
||||||
- `by <username>` - Changes by specific user
|
|
||||||
- `type <object-type>` - Changes to specific object type
|
|
||||||
- `action <create|update|delete>` - Filter by action type
|
|
||||||
- `object <name>` - Search for changes to specific object
|
|
||||||
|
|
||||||
## Instructions
|
|
||||||
|
|
||||||
You are a change auditor that queries NetBox's object change log and generates audit reports.
|
|
||||||
|
|
||||||
### MCP Tools
|
|
||||||
|
|
||||||
Use these tools to query the audit log:
|
|
||||||
|
|
||||||
- `extras_list_object_changes` - List changes with filters:
|
|
||||||
- `user_id` - Filter by user ID
|
|
||||||
- `changed_object_type` - Filter by object type (e.g., "dcim.device", "ipam.ipaddress")
|
|
||||||
- `action` - Filter by action: "create", "update", "delete"
|
|
||||||
|
|
||||||
- `extras_get_object_change` - Get detailed change record by ID
|
|
||||||
|
|
||||||
### Common Object Types
|
|
||||||
|
|
||||||
| Category | Object Types |
|
|
||||||
|----------|--------------|
|
|
||||||
| DCIM | `dcim.device`, `dcim.interface`, `dcim.site`, `dcim.rack`, `dcim.cable` |
|
|
||||||
| IPAM | `ipam.ipaddress`, `ipam.prefix`, `ipam.vlan`, `ipam.vrf` |
|
|
||||||
| Virtualization | `virtualization.virtualmachine`, `virtualization.cluster` |
|
|
||||||
| Tenancy | `tenancy.tenant`, `tenancy.contact` |
|
|
||||||
|
|
||||||
### Workflow
|
|
||||||
|
|
||||||
1. **Parse user request** to determine filters
|
|
||||||
2. **Query object changes** using `extras_list_object_changes`
|
|
||||||
3. **Enrich data** by fetching detailed records if needed
|
|
||||||
4. **Analyze patterns** in the changes
|
|
||||||
5. **Generate report** in structured format
|
|
||||||
|
|
||||||
### Report Format
|
|
||||||
|
|
||||||
```markdown
|
|
||||||
## NetBox Change Audit Report
|
|
||||||
|
|
||||||
**Generated:** [timestamp]
|
|
||||||
**Period:** [date range or "All time"]
|
|
||||||
**Filters:** [applied filters]
|
|
||||||
|
|
||||||
### Summary
|
|
||||||
|
|
||||||
| Metric | Count |
|
|
||||||
|--------|-------|
|
|
||||||
| Total Changes | X |
|
|
||||||
| Creates | Y |
|
|
||||||
| Updates | Z |
|
|
||||||
| Deletes | W |
|
|
||||||
| Unique Users | N |
|
|
||||||
| Object Types | M |
|
|
||||||
|
|
||||||
### Changes by Action
|
|
||||||
|
|
||||||
#### Created Objects (Y)
|
|
||||||
|
|
||||||
| Time | User | Object Type | Object | Details |
|
|
||||||
|------|------|-------------|--------|---------|
|
|
||||||
| 2024-01-15 14:30 | admin | dcim.device | server-01 | Created device |
|
|
||||||
| ... | ... | ... | ... | ... |
|
|
||||||
|
|
||||||
#### Updated Objects (Z)
|
|
||||||
|
|
||||||
| Time | User | Object Type | Object | Changed Fields |
|
|
||||||
|------|------|-------------|--------|----------------|
|
|
||||||
| 2024-01-15 15:00 | john | ipam.ipaddress | 10.0.1.50/24 | status, description |
|
|
||||||
| ... | ... | ... | ... | ... |
|
|
||||||
|
|
||||||
#### Deleted Objects (W)
|
|
||||||
|
|
||||||
| Time | User | Object Type | Object | Details |
|
|
||||||
|------|------|-------------|--------|---------|
|
|
||||||
| 2024-01-14 09:00 | admin | dcim.interface | eth2 | Removed from server-01 |
|
|
||||||
| ... | ... | ... | ... | ... |
|
|
||||||
|
|
||||||
### Changes by User
|
|
||||||
|
|
||||||
| User | Creates | Updates | Deletes | Total |
|
|
||||||
|------|---------|---------|---------|-------|
|
|
||||||
| admin | 5 | 10 | 2 | 17 |
|
|
||||||
| john | 3 | 8 | 0 | 11 |
|
|
||||||
|
|
||||||
### Changes by Object Type
|
|
||||||
|
|
||||||
| Object Type | Creates | Updates | Deletes | Total |
|
|
||||||
|-------------|---------|---------|---------|-------|
|
|
||||||
| dcim.device | 2 | 5 | 0 | 7 |
|
|
||||||
| ipam.ipaddress | 4 | 3 | 1 | 8 |
|
|
||||||
|
|
||||||
### Timeline
|
|
||||||
|
|
||||||
```
|
|
||||||
2024-01-15: ████████ 8 changes
|
|
||||||
2024-01-14: ████ 4 changes
|
|
||||||
2024-01-13: ██ 2 changes
|
|
||||||
```
|
|
||||||
|
|
||||||
### Notable Patterns
|
|
||||||
|
|
||||||
- **Bulk operations:** [Identify if many changes happened in short time]
|
|
||||||
- **Unusual activity:** [Flag unexpected deletions or after-hours changes]
|
|
||||||
- **Missing audit trail:** [Note if expected changes are not logged]
|
|
||||||
|
|
||||||
### Recommendations
|
|
||||||
|
|
||||||
1. [Any security or process recommendations based on findings]
|
|
||||||
```
|
|
||||||
|
|
||||||
### Time Period Handling
|
|
||||||
|
|
||||||
When user specifies "last N days":
|
|
||||||
- The NetBox API may not have direct date filtering in `extras_list_object_changes`
|
|
||||||
- Fetch recent changes and filter client-side by the `time` field (see the sketch after this list)
|
|
||||||
- Note any limitations in the report
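
A minimal sketch of that client-side filter, assuming each change record carries an ISO 8601 `time` field as returned by the NetBox API (the record list itself would come from `extras_list_object_changes`):

```python
from datetime import datetime, timedelta, timezone

def filter_changes_by_age(changes, days):
    """Keep only change records whose 'time' field falls within the last `days` days."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    recent = []
    for change in changes:
        # NetBox timestamps look like '2024-01-15T14:30:00.000000Z'
        timestamp = datetime.fromisoformat(change["time"].replace("Z", "+00:00"))
        if timestamp >= cutoff:
            recent.append(change)
    return recent

now = datetime.now(timezone.utc)
sample = [
    {"id": 1, "time": (now - timedelta(days=2)).isoformat().replace("+00:00", "Z"), "action": "create"},
    {"id": 2, "time": (now - timedelta(days=90)).isoformat().replace("+00:00", "Z"), "action": "delete"},
]
print([c["id"] for c in filter_changes_by_age(sample, days=7)])   # [1]
```
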
### Enriching Change Details
|
|
||||||
|
|
||||||
For detailed audit, use `extras_get_object_change` with the change ID to see (a field-diff sketch follows this list):
|
|
||||||
- `prechange_data` - Object state before change
|
|
||||||
- `postchange_data` - Object state after change
|
|
||||||
- `request_id` - Links related changes in same request
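
A sketch of deriving the "Changed Fields" column from those two snapshots, assuming both are plain dicts as returned in the change record (`None` for the missing side of a create or delete):

```python
def changed_fields(prechange, postchange):
    """Return {field: (old, new)} for every field that differs between the two snapshots."""
    prechange, postchange = prechange or {}, postchange or {}
    diff = {}
    for field in sorted(set(prechange) | set(postchange)):
        old, new = prechange.get(field), postchange.get(field)
        if old != new:
            diff[field] = (old, new)
    return diff

before = {"status": "active", "description": ""}
after = {"status": "deprecated", "description": "migrated to 10.0.2.50"}
print(changed_fields(before, after))
# {'description': ('', 'migrated to 10.0.2.50'), 'status': ('active', 'deprecated')}
```
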
### Security Audit Mode
|
|
||||||
|
|
||||||
If user asks for "security audit" or "compliance report":
|
|
||||||
1. Focus on deletions and permission-sensitive changes
|
|
||||||
2. Highlight changes to critical objects (firewalls, VRFs, prefixes)
|
|
||||||
3. Flag changes outside business hours
|
|
||||||
4. Identify users with high change counts
|
|
||||||
|
|
||||||
## Examples
|
|
||||||
|
|
||||||
- `/change-audit` - Show recent changes (last 24 hours)
|
|
||||||
- `/change-audit last 7 days` - Changes in past week
|
|
||||||
- `/change-audit by admin` - All changes by admin user
|
|
||||||
- `/change-audit type dcim.device` - Device changes only
|
|
||||||
- `/change-audit action delete` - All deletions
|
|
||||||
- `/change-audit object server-01` - Changes to server-01
|
|
||||||
|
|
||||||
## User Request
|
|
||||||
|
|
||||||
$ARGUMENTS
|
|
||||||
@@ -1,182 +0,0 @@
|
|||||||
---
|
|
||||||
description: Generate infrastructure topology diagrams from NetBox data
|
|
||||||
---
|
|
||||||
|
|
||||||
# CMDB Topology Visualization
|
|
||||||
|
|
||||||
Generate Mermaid diagrams showing infrastructure topology from NetBox.
|
|
||||||
|
|
||||||
## Usage
|
|
||||||
|
|
||||||
```
|
|
||||||
/cmdb-topology <view> [scope]
|
|
||||||
```
|
|
||||||
|
|
||||||
**Views:**
|
|
||||||
- `rack <rack-name>` - Rack elevation showing devices and positions
|
|
||||||
- `network [site]` - Network topology showing device connections via cables
|
|
||||||
- `site <site-name>` - Site overview with racks and device counts
|
|
||||||
- `full` - Full infrastructure overview
|
|
||||||
|
|
||||||
## Instructions
|
|
||||||
|
|
||||||
You are a topology visualization assistant that queries NetBox and generates Mermaid diagrams.
|
|
||||||
|
|
||||||
### View: Rack Elevation
|
|
||||||
|
|
||||||
Generate a rack view showing devices and their positions.
|
|
||||||
|
|
||||||
**Data Collection:**
|
|
||||||
1. Use `dcim_list_racks` to find the rack by name
|
|
||||||
2. Use `dcim_list_devices` with `rack_id` filter to get devices in rack
|
|
||||||
3. For each device, note: `position`, `u_height`, `face`, `name`, `role`
|
|
||||||
|
|
||||||
**Mermaid Output:**
|
|
||||||
|
|
||||||
```mermaid
|
|
||||||
graph TB
|
|
||||||
subgraph rack["Rack: <rack-name> (U<height>)"]
|
|
||||||
direction TB
|
|
||||||
u42["U42: empty"]
|
|
||||||
u41["U41: empty"]
|
|
||||||
u40["U40: server-01 (Server)"]
|
|
||||||
u39["U39: server-01 (cont.)"]
|
|
||||||
u38["U38: switch-01 (Switch)"]
|
|
||||||
%% ... continue for all units
|
|
||||||
end
|
|
||||||
```
|
|
||||||
|
|
||||||
**For devices spanning multiple U:**
|
|
||||||
- Mark the top U with device name and role
|
|
||||||
- Mark subsequent Us as "(cont.)" for the same device
|
|
||||||
- Empty Us should show "empty" (see the labeling sketch after this list)
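
A sketch of building those per-U labels before emitting the Mermaid nodes; device dicts are assumed to carry `name`, `role`, `position` (lowest occupied unit), and `u_height` as returned by `dcim_list_devices`:

```python
def rack_unit_labels(rack_height, devices):
    """Return {unit: label} for every U in the rack, highest unit first."""
    labels = {u: "empty" for u in range(1, rack_height + 1)}
    for device in devices:
        top = device["position"] + device["u_height"] - 1
        labels[top] = f'{device["name"]} ({device["role"]})'          # top U gets name + role
        for unit in range(device["position"], top):
            labels[unit] = f'{device["name"]} (cont.)'                # remaining Us marked as continuation
    return dict(sorted(labels.items(), reverse=True))

devices = [
    {"name": "server-01", "role": "Server", "position": 39, "u_height": 2},
    {"name": "switch-01", "role": "Switch", "position": 38, "u_height": 1},
]
for unit, label in rack_unit_labels(42, devices).items():
    print(f'u{unit}["U{unit}: {label}"]')   # one Mermaid node per rack unit
```
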
### View: Network Topology
|
|
||||||
|
|
||||||
Generate a network diagram showing device connections.
|
|
||||||
|
|
||||||
**Data Collection:**
|
|
||||||
1. Use `dcim_list_sites` if no site specified (get all)
|
|
||||||
2. Use `dcim_list_devices` with optional `site_id` filter
|
|
||||||
3. Use `dcim_list_cables` to get all connections
|
|
||||||
4. Use `dcim_list_interfaces` for each device to understand port names
|
|
||||||
|
|
||||||
**Mermaid Output:**
|
|
||||||
|
|
||||||
```mermaid
|
|
||||||
graph TD
|
|
||||||
subgraph site1["Site: Home"]
|
|
||||||
router1[("core-router-01<br/>Router")]
|
|
||||||
switch1[["dist-switch-01<br/>Switch"]]
|
|
||||||
server1["web-server-01<br/>Server"]
|
|
||||||
server2["db-server-01<br/>Server"]
|
|
||||||
end
|
|
||||||
|
|
||||||
router1 -->|"eth0 - eth1"| switch1
|
|
||||||
switch1 -->|"gi0/1 - eth0"| server1
|
|
||||||
switch1 -->|"gi0/2 - eth0"| server2
|
|
||||||
```
|
|
||||||
|
|
||||||
**Node shapes by role:**
|
|
||||||
- Router: `[(" ")]` (cylinder/database shape)
|
|
||||||
- Switch: `[[ ]]` (double brackets)
|
|
||||||
- Server: `[ ]` (rectangle)
|
|
||||||
- Firewall: `{{ }}` (hexagon)
|
|
||||||
- Other: `[ ]` (rectangle)
|
|
||||||
|
|
||||||
**Edge labels:** Show interface names on both ends (A-side - B-side)
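
A sketch of emitting those nodes and edge labels from device and cable records; the shape mapping follows the list above, while the cable field names are assumptions standing in for NetBox cable terminations:

```python
SHAPES = {
    "Router":   '{id}[("{label}")]',      # cylinder
    "Switch":   '{id}[["{label}"]]',      # double brackets
    "Firewall": '{id}{{{{"{label}"}}}}',  # hexagon
}

def node(device):
    """Render one Mermaid node, defaulting to a plain rectangle for other roles."""
    template = SHAPES.get(device["role"], '{id}["{label}"]')
    return template.format(id=device["id"], label=f'{device["name"]}<br/>{device["role"]}')

def edge(cable):
    """Render one connection labelled with the interface names on both ends."""
    return (f'{cable["a_device_id"]} -->|"{cable["a_interface"]} - {cable["b_interface"]}"| '
            f'{cable["b_device_id"]}')

devices = [
    {"id": "router1", "name": "core-router-01", "role": "Router"},
    {"id": "switch1", "name": "dist-switch-01", "role": "Switch"},
]
cables = [{"a_device_id": "router1", "a_interface": "eth0",
           "b_device_id": "switch1", "b_interface": "eth1"}]
print("\n".join([node(d) for d in devices] + [edge(c) for c in cables]))
```
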
### View: Site Overview
|
|
||||||
|
|
||||||
Generate a site-level view showing racks and summary counts.
|
|
||||||
|
|
||||||
**Data Collection:**
|
|
||||||
1. Use `dcim_get_site` to get site details
|
|
||||||
2. Use `dcim_list_racks` with `site_id` filter
|
|
||||||
3. Use `dcim_list_devices` with `site_id` filter for counts per rack
|
|
||||||
|
|
||||||
**Mermaid Output:**
|
|
||||||
|
|
||||||
```mermaid
|
|
||||||
graph TB
|
|
||||||
subgraph site["Site: Headquarters"]
|
|
||||||
subgraph row1["Row 1"]
|
|
||||||
rack1["Rack A1<br/>12/42 U used<br/>5 devices"]
|
|
||||||
rack2["Rack A2<br/>20/42 U used<br/>8 devices"]
|
|
||||||
end
|
|
||||||
subgraph row2["Row 2"]
|
|
||||||
rack3["Rack B1<br/>8/42 U used<br/>3 devices"]
|
|
||||||
end
|
|
||||||
end
|
|
||||||
```
|
|
||||||
|
|
||||||
### View: Full Infrastructure
|
|
||||||
|
|
||||||
Generate a high-level view of all sites and their relationships.
|
|
||||||
|
|
||||||
**Data Collection:**
|
|
||||||
1. Use `dcim_list_regions` to get hierarchy
|
|
||||||
2. Use `dcim_list_sites` to get all sites
|
|
||||||
3. Use `dcim_list_devices` with status filter for counts
|
|
||||||
|
|
||||||
**Mermaid Output:**
|
|
||||||
|
|
||||||
```mermaid
|
|
||||||
graph TB
|
|
||||||
subgraph region1["Region: Americas"]
|
|
||||||
site1["Headquarters<br/>3 racks, 25 devices"]
|
|
||||||
site2["Branch Office<br/>1 rack, 5 devices"]
|
|
||||||
end
|
|
||||||
subgraph region2["Region: Europe"]
|
|
||||||
site3["EU Datacenter<br/>10 racks, 100 devices"]
|
|
||||||
end
|
|
||||||
|
|
||||||
site1 -.->|"WAN Link"| site3
|
|
||||||
```
|
|
||||||
|
|
||||||
### Output Format
|
|
||||||
|
|
||||||
Always provide:
|
|
||||||
|
|
||||||
1. **Summary** - Brief description of what the diagram shows
|
|
||||||
2. **Mermaid Code Block** - The diagram code in a fenced code block
|
|
||||||
3. **Legend** - Explanation of shapes and colors used
|
|
||||||
4. **Data Notes** - Any data quality issues (e.g., devices without position, missing cables)
|
|
||||||
|
|
||||||
**Example Output:**
|
|
||||||
|
|
||||||
```markdown
|
|
||||||
## Network Topology: Home Site
|
|
||||||
|
|
||||||
This diagram shows the network connections between 4 devices at the Home site.
|
|
||||||
|
|
||||||
```mermaid
|
|
||||||
graph TD
|
|
||||||
router1[("core-router<br/>Router")]
|
|
||||||
switch1[["main-switch<br/>Switch"]]
|
|
||||||
server1["homelab-01<br/>Server"]
|
|
||||||
|
|
||||||
router1 -->|"eth0 - gi0/24"| switch1
|
|
||||||
switch1 -->|"gi0/1 - eth0"| server1
|
|
||||||
```
|
|
||||||
|
|
||||||
**Legend:**
|
|
||||||
- Cylinder shape: Routers
|
|
||||||
- Double brackets: Switches
|
|
||||||
- Rectangle: Servers
|
|
||||||
|
|
||||||
**Data Notes:**
|
|
||||||
- 1 device (nas-01) has no cable connections documented
|
|
||||||
```
|
|
||||||
|
|
||||||
## Examples
|
|
||||||
|
|
||||||
- `/cmdb-topology rack server-rack-01` - Show devices in server-rack-01
|
|
||||||
- `/cmdb-topology network` - Show all network connections
|
|
||||||
- `/cmdb-topology network Home` - Show network topology for Home site only
|
|
||||||
- `/cmdb-topology site Headquarters` - Show rack overview for Headquarters
|
|
||||||
- `/cmdb-topology full` - Show full infrastructure overview
|
|
||||||
|
|
||||||
## User Request
|
|
||||||
|
|
||||||
$ARGUMENTS
|
|
||||||
@@ -1,226 +0,0 @@
|
|||||||
---
|
|
||||||
description: Detect IP address conflicts and overlapping prefixes in NetBox
|
|
||||||
---
|
|
||||||
|
|
||||||
# CMDB IP Conflict Detection
|
|
||||||
|
|
||||||
Scan NetBox IPAM data to identify IP address conflicts and overlapping prefixes.
|
|
||||||
|
|
||||||
## Usage
|
|
||||||
|
|
||||||
```
|
|
||||||
/ip-conflicts [scope]
|
|
||||||
```
|
|
||||||
|
|
||||||
**Scopes:**
|
|
||||||
- `all` (default) - Full scan of all IP data
|
|
||||||
- `addresses` - Check for duplicate IP addresses only
|
|
||||||
- `prefixes` - Check for overlapping prefixes only
|
|
||||||
- `vrf <name>` - Scan specific VRF only
|
|
||||||
- `prefix <cidr>` - Scan within specific prefix
|
|
||||||
|
|
||||||
## Instructions
|
|
||||||
|
|
||||||
You are an IP conflict detection specialist that analyzes NetBox IPAM data for conflicts and issues.
|
|
||||||
|
|
||||||
### Conflict Types to Detect
|
|
||||||
|
|
||||||
#### 1. Duplicate IP Addresses
|
|
||||||
|
|
||||||
Multiple IP address records with the same address (within same VRF).
|
|
||||||
|
|
||||||
**Detection:**
|
|
||||||
1. Use `ipam_list_ip_addresses` to get all addresses
|
|
||||||
2. Group by address + VRF combination
|
|
||||||
3. Flag groups with more than one record
|
|
||||||
|
|
||||||
**Exception:** Anycast addresses may legitimately appear multiple times - check the `role` field for "anycast".
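
A sketch of the grouping step, keyed on address plus VRF and skipping anycast records; the field names mirror the NetBox API response, and the exact nesting of `role` and `vrf` is an assumption:

```python
from collections import defaultdict

def find_duplicate_ips(ip_records):
    """Group IP records by (address, VRF) and return the groups with more than one entry."""
    groups = defaultdict(list)
    for record in ip_records:
        role = record.get("role")
        if isinstance(role, dict):
            role = role.get("value")
        if role == "anycast":
            continue  # anycast addresses may legitimately repeat
        vrf = record.get("vrf")
        vrf_name = vrf["name"] if isinstance(vrf, dict) else "Global"
        groups[(record["address"], vrf_name)].append(record)
    return {key: recs for key, recs in groups.items() if len(recs) > 1}

records = [
    {"address": "10.0.1.50/24", "vrf": None, "role": None, "assigned_object": "server-01:eth0"},
    {"address": "10.0.1.50/24", "vrf": None, "role": None, "assigned_object": "server-02:eth0"},
    {"address": "10.0.1.51/24", "vrf": None, "role": None, "assigned_object": "router-01:gi0/1"},
]
for (address, vrf_name), dupes in find_duplicate_ips(records).items():
    print(f"{address} in VRF {vrf_name}: {len(dupes)} records")   # 10.0.1.50/24 in VRF Global: 2 records
```
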
#### 2. Overlapping Prefixes
|
|
||||||
|
|
||||||
Prefixes that contain the same address space (within same VRF).
|
|
||||||
|
|
||||||
**Detection:**
|
|
||||||
1. Use `ipam_list_prefixes` to get all prefixes
|
|
||||||
2. For each prefix pair in the same VRF, check if one contains the other
|
|
||||||
3. Legitimate hierarchies should have proper parent-child relationships
|
|
||||||
|
|
||||||
**Legitimate Overlaps:**
|
|
||||||
- Parent/child prefix hierarchy (e.g., 10.0.0.0/8 contains 10.0.1.0/24)
|
|
||||||
- Different VRFs (isolated routing tables)
|
|
||||||
- Marked as "container" status
|
|
||||||
|
|
||||||
#### 3. IPs Outside Their Prefix
|
|
||||||
|
|
||||||
IP addresses that don't fall within any defined prefix.
|
|
||||||
|
|
||||||
**Detection:**
|
|
||||||
1. For each IP address, find the most specific prefix that contains it
|
|
||||||
2. Flag IPs with no matching prefix
|
|
||||||
|
|
||||||
#### 4. Prefix Overlap Across VRFs (Informational)
|
|
||||||
|
|
||||||
Same prefix appearing in multiple VRFs - not necessarily a conflict, but worth noting.
|
|
||||||
|
|
||||||
### MCP Tools
|
|
||||||
|
|
||||||
- `ipam_list_ip_addresses` - Get all IP addresses with filters:
|
|
||||||
- `address` - Filter by specific address
|
|
||||||
- `vrf_id` - Filter by VRF
|
|
||||||
- `parent` - Filter by parent prefix
|
|
||||||
- `status` - Filter by status
|
|
||||||
|
|
||||||
- `ipam_list_prefixes` - Get all prefixes with filters:
|
|
||||||
- `prefix` - Filter by prefix CIDR
|
|
||||||
- `vrf_id` - Filter by VRF
|
|
||||||
- `within` - Find prefixes within a parent
|
|
||||||
- `contains` - Find prefixes containing an address
|
|
||||||
|
|
||||||
- `ipam_list_vrfs` - List VRFs for context
|
|
||||||
- `ipam_get_ip_address` - Get detailed IP info including assigned device/interface
|
|
||||||
- `ipam_get_prefix` - Get detailed prefix info
|
|
||||||
|
|
||||||
### Workflow
|
|
||||||
|
|
||||||
1. **Data Collection**
|
|
||||||
- Fetch all IP addresses (or filtered set)
|
|
||||||
- Fetch all prefixes (or filtered set)
|
|
||||||
- Fetch VRFs for context
|
|
||||||
|
|
||||||
2. **Duplicate Detection**
|
|
||||||
- Build address map: `{address+vrf: [records]}`
|
|
||||||
- Filter for entries with >1 record
|
|
||||||
|
|
||||||
3. **Overlap Detection**
|
|
||||||
- For each VRF, compare prefixes pairwise
|
|
||||||
- Check using CIDR math: does prefix A contain prefix B or vice versa?
|
|
||||||
- Ignore legitimate hierarchies (status=container)
|
|
||||||
|
|
||||||
4. **Orphan IP Detection**
|
|
||||||
- For each IP, find containing prefix
|
|
||||||
- Flag IPs with no prefix match
|
|
||||||
|
|
||||||
5. **Generate Report**
|
|
||||||
|
|
||||||
### Report Format
|
|
||||||
|
|
||||||
```markdown
|
|
||||||
## IP Conflict Detection Report
|
|
||||||
|
|
||||||
**Generated:** [timestamp]
|
|
||||||
**Scope:** [scope parameter]
|
|
||||||
|
|
||||||
### Summary
|
|
||||||
|
|
||||||
| Check | Status | Count |
|
|
||||||
|-------|--------|-------|
|
|
||||||
| Duplicate IPs | [PASS/FAIL] | X |
|
|
||||||
| Overlapping Prefixes | [PASS/FAIL] | Y |
|
|
||||||
| Orphan IPs | [PASS/FAIL] | Z |
|
|
||||||
| Total Issues | - | N |
|
|
||||||
|
|
||||||
### Critical Issues
|
|
||||||
|
|
||||||
#### Duplicate IP Addresses
|
|
||||||
|
|
||||||
| Address | VRF | Count | Assigned To |
|
|
||||||
|---------|-----|-------|-------------|
|
|
||||||
| 10.0.1.50/24 | Global | 2 | server-01 (eth0), server-02 (eth0) |
|
|
||||||
| 192.168.1.100/24 | Global | 2 | router-01 (gi0/1), switch-01 (vlan10) |
|
|
||||||
|
|
||||||
**Impact:** IP conflicts cause network connectivity issues. Devices will have intermittent connectivity.
|
|
||||||
|
|
||||||
**Resolution:**
|
|
||||||
- Determine which device should have the IP
|
|
||||||
- Update or remove the duplicate assignment
|
|
||||||
- Consider IP reservation to prevent future conflicts
|
|
||||||
|
|
||||||
#### Overlapping Prefixes
|
|
||||||
|
|
||||||
| Prefix 1 | Prefix 2 | VRF | Type |
|
|
||||||
|----------|----------|-----|------|
|
|
||||||
| 10.0.0.0/24 | 10.0.0.0/25 | Global | Unstructured overlap |
|
|
||||||
| 192.168.0.0/16 | 192.168.1.0/24 | Production | Missing container flag |
|
|
||||||
|
|
||||||
**Impact:** Overlapping prefixes can cause routing ambiguity and IP management confusion.
|
|
||||||
|
|
||||||
**Resolution:**
|
|
||||||
- For legitimate hierarchies: Mark parent prefix as status="container"
|
|
||||||
- For accidental overlaps: Consolidate or re-address one prefix
|
|
||||||
|
|
||||||
### Warnings
|
|
||||||
|
|
||||||
#### IPs Without Prefix
|
|
||||||
|
|
||||||
| Address | VRF | Assigned To | Nearest Prefix |
|
|
||||||
|---------|-----|-------------|----------------|
|
|
||||||
| 172.16.5.10/24 | Global | server-03 (eth0) | None found |
|
|
||||||
|
|
||||||
**Impact:** IPs without a prefix bypass IPAM allocation controls.
|
|
||||||
|
|
||||||
**Resolution:**
|
|
||||||
- Create appropriate prefix to contain the IP
|
|
||||||
- Or update IP to correct address within existing prefix
|
|
||||||
|
|
||||||
### Informational
|
|
||||||
|
|
||||||
#### Same Prefix in Multiple VRFs
|
|
||||||
|
|
||||||
| Prefix | VRFs | Purpose |
|
|
||||||
|--------|------|---------|
|
|
||||||
| 10.0.0.0/24 | Global, DMZ, Internal | [Check if intentional] |
|
|
||||||
|
|
||||||
### Statistics
|
|
||||||
|
|
||||||
| Metric | Value |
|
|
||||||
|--------|-------|
|
|
||||||
| Total IP Addresses | X |
|
|
||||||
| Total Prefixes | Y |
|
|
||||||
| Total VRFs | Z |
|
|
||||||
| Utilization (IPs/Prefix space) | W% |
|
|
||||||
|
|
||||||
### Remediation Commands
|
|
||||||
|
|
||||||
```
|
|
||||||
# Remove duplicate IP (keep server-01's assignment)
|
|
||||||
ipam_delete_ip_address id=123
|
|
||||||
|
|
||||||
# Mark prefix as container
|
|
||||||
ipam_update_prefix id=456 status=container
|
|
||||||
|
|
||||||
# Create missing prefix for orphan IP
|
|
||||||
ipam_create_prefix prefix=172.16.5.0/24 status=active
|
|
||||||
```
|
|
||||||
```
|
|
||||||
|
|
||||||
### CIDR Math Reference
|
|
||||||
|
|
||||||
For overlap detection, use these rules (see the sketch after the example below):
|
|
||||||
- Prefix A **contains** Prefix B if: A.network <= B.network AND A.broadcast >= B.broadcast
|
|
||||||
- Two prefixes **overlap** if: A.network <= B.broadcast AND B.network <= A.broadcast
|
|
||||||
|
|
||||||
**Example:**
|
|
||||||
- 10.0.0.0/8 contains 10.0.1.0/24 (legitimate hierarchy)
|
|
||||||
- 10.0.0.0/24 and 10.0.0.128/25 overlap (10.0.0.128/25 is within 10.0.0.0/24)
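
The same rules expressed with Python's standard `ipaddress` module, which is one straightforward way to implement the check:

```python
import ipaddress

def contains(a, b):
    """True if prefix a fully contains prefix b (legitimate hierarchy when a is a container)."""
    a, b = ipaddress.ip_network(a), ipaddress.ip_network(b)
    return a.network_address <= b.network_address and a.broadcast_address >= b.broadcast_address

def overlaps(a, b):
    """True if the two prefixes share any address space."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

print(contains("10.0.0.0/8", "10.0.1.0/24"))       # True  (parent/child hierarchy)
print(overlaps("10.0.0.0/24", "10.0.0.128/25"))    # True  (10.0.0.128/25 is within 10.0.0.0/24)
print(overlaps("10.0.0.0/24", "10.0.1.0/24"))      # False (disjoint)
```
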
### Severity Levels
|
|
||||||
|
|
||||||
| Issue | Severity | Description |
|
|
||||||
|-------|----------|-------------|
|
|
||||||
| Duplicate IP (same interface type) | CRITICAL | Active conflict, causes outages |
|
|
||||||
| Duplicate IP (different roles) | HIGH | Potential conflict |
|
|
||||||
| Overlapping prefixes (same status) | HIGH | IPAM management issue |
|
|
||||||
| Overlapping prefixes (container ok) | LOW | May need status update |
|
|
||||||
| Orphan IP | MEDIUM | Bypasses IPAM controls |
|
|
||||||
|
|
||||||
## Examples
|
|
||||||
|
|
||||||
- `/ip-conflicts` - Full scan for all conflicts
|
|
||||||
- `/ip-conflicts addresses` - Check only for duplicate IPs
|
|
||||||
- `/ip-conflicts prefixes` - Check only for overlapping prefixes
|
|
||||||
- `/ip-conflicts vrf Production` - Scan only Production VRF
|
|
||||||
- `/ip-conflicts prefix 10.0.0.0/8` - Scan within specific prefix range
|
|
||||||
|
|
||||||
## User Request
|
|
||||||
|
|
||||||
$ARGUMENTS
|
|
||||||
@@ -19,7 +19,6 @@ Contract-validator solves these by parsing plugin interfaces and validating comp
|
|||||||
- **Agent Extraction**: Parse CLAUDE.md Four-Agent Model tables and Agents sections
|
- **Agent Extraction**: Parse CLAUDE.md Four-Agent Model tables and Agents sections
|
||||||
- **Compatibility Checks**: Pairwise validation between all plugins in a marketplace
|
- **Compatibility Checks**: Pairwise validation between all plugins in a marketplace
|
||||||
- **Data Flow Validation**: Verify agent tool sequences have valid data producers/consumers
|
- **Data Flow Validation**: Verify agent tool sequences have valid data producers/consumers
|
||||||
- **Dependency Visualization**: Generate Mermaid flowcharts showing plugin relationships
|
|
||||||
- **Comprehensive Reports**: Markdown or JSON reports with actionable suggestions
|
- **Comprehensive Reports**: Markdown or JSON reports with actionable suggestions
|
||||||
|
|
||||||
## Installation
|
## Installation
|
||||||
@@ -45,7 +44,6 @@ pip install -r requirements.txt
|
|||||||
| `/validate-contracts` | Full marketplace compatibility validation |
|
| `/validate-contracts` | Full marketplace compatibility validation |
|
||||||
| `/check-agent` | Validate single agent definition |
|
| `/check-agent` | Validate single agent definition |
|
||||||
| `/list-interfaces` | Show all plugin interfaces |
|
| `/list-interfaces` | Show all plugin interfaces |
|
||||||
| `/dependency-graph` | Generate Mermaid flowchart of plugin dependencies |
|
|
||||||
|
|
||||||
## Agents
|
## Agents
|
||||||
|
|
||||||
@@ -108,16 +106,6 @@ pip install -r requirements.txt
|
|||||||
# Data Flow: No issues detected
|
# Data Flow: No issues detected
|
||||||
```
|
```
|
||||||
|
|
||||||
```
|
|
||||||
/dependency-graph
|
|
||||||
|
|
||||||
# Output: Mermaid flowchart showing:
|
|
||||||
# - Plugins grouped by shared MCP servers
|
|
||||||
# - Data flow from data-platform to viz-platform
|
|
||||||
# - Required vs optional dependencies
|
|
||||||
# - Command counts per plugin
|
|
||||||
```
|
|
||||||
|
|
||||||
## Issue Types
|
## Issue Types
|
||||||
|
|
||||||
| Type | Severity | Description |
|
| Type | Severity | Description |
|
||||||
|
|||||||
@@ -1,251 +0,0 @@
|
|||||||
# /dependency-graph - Generate Dependency Visualization
|
|
||||||
|
|
||||||
Generate a Mermaid flowchart showing plugin dependencies, data flows, and tool relationships.
|
|
||||||
|
|
||||||
## Usage
|
|
||||||
|
|
||||||
```
|
|
||||||
/dependency-graph [marketplace_path] [--format <mermaid|text>] [--show-tools]
|
|
||||||
```
|
|
||||||
|
|
||||||
## Parameters
|
|
||||||
|
|
||||||
- `marketplace_path` (optional): Path to marketplace root. Defaults to current project root.
|
|
||||||
- `--format` (optional): Output format - `mermaid` (default) or `text`
|
|
||||||
- `--show-tools` (optional): Include individual tool nodes in the graph
|
|
||||||
|
|
||||||
## Workflow
|
|
||||||
|
|
||||||
1. **Discover plugins**:
|
|
||||||
- Scan plugins directory for all plugins with `.claude-plugin/` marker
|
|
||||||
- Parse each plugin's README.md to extract interface
|
|
||||||
- Parse CLAUDE.md for agent definitions and tool sequences
|
|
||||||
|
|
||||||
2. **Analyze dependencies**:
|
|
||||||
- Identify shared MCP servers (plugins using same server)
|
|
||||||
- Detect tool dependencies (which plugins produce vs consume data)
|
|
||||||
- Find agent tool references across plugins
|
|
||||||
- Categorize as required (ERROR if missing) or optional (WARNING if missing)
|
|
||||||
|
|
||||||
3. **Build dependency graph**:
|
|
||||||
- Create nodes for each plugin
|
|
||||||
- Create edges for:
|
|
||||||
- Shared MCP servers (bidirectional)
|
|
||||||
- Data producers -> consumers (directional)
|
|
||||||
- Agent tool dependencies (directional)
|
|
||||||
- Mark edges as optional or required
|
|
||||||
|
|
||||||
4. **Generate Mermaid output**:
|
|
||||||
- Create flowchart diagram syntax
|
|
||||||
- Style required dependencies with solid lines
|
|
||||||
- Style optional dependencies with dashed lines
|
|
||||||
- Group by MCP server or data flow
|
|
||||||
|
|
||||||
## Output Format
|
|
||||||
|
|
||||||
### Mermaid (default)
|
|
||||||
|
|
||||||
```mermaid
|
|
||||||
flowchart TD
|
|
||||||
subgraph mcp_gitea["MCP: gitea"]
|
|
||||||
projman["projman"]
|
|
||||||
pr-review["pr-review"]
|
|
||||||
end
|
|
||||||
|
|
||||||
subgraph mcp_data["MCP: data-platform"]
|
|
||||||
data-platform["data-platform"]
|
|
||||||
end
|
|
||||||
|
|
||||||
subgraph mcp_viz["MCP: viz-platform"]
|
|
||||||
viz-platform["viz-platform"]
|
|
||||||
end
|
|
||||||
|
|
||||||
%% Data flow dependencies
|
|
||||||
data-platform -->|"data_ref (required)"| viz-platform
|
|
||||||
|
|
||||||
%% Optional dependencies
|
|
||||||
projman -.->|"lessons (optional)"| pr-review
|
|
||||||
|
|
||||||
%% Styling
|
|
||||||
classDef required stroke:#e74c3c,stroke-width:2px
|
|
||||||
classDef optional stroke:#f39c12,stroke-dasharray:5 5
|
|
||||||
```
|
|
||||||
|
|
||||||
### Text Format
|
|
||||||
|
|
||||||
```
|
|
||||||
DEPENDENCY GRAPH
|
|
||||||
================
|
|
||||||
|
|
||||||
Plugins: 12
|
|
||||||
MCP Servers: 4
|
|
||||||
Dependencies: 8 (5 required, 3 optional)
|
|
||||||
|
|
||||||
MCP Server Groups:
|
|
||||||
gitea: projman, pr-review
|
|
||||||
data-platform: data-platform
|
|
||||||
viz-platform: viz-platform
|
|
||||||
netbox: cmdb-assistant
|
|
||||||
|
|
||||||
Data Flow Dependencies:
|
|
||||||
data-platform -> viz-platform (data_ref) [REQUIRED]
|
|
||||||
data-platform -> data-platform (data_ref) [INTERNAL]
|
|
||||||
|
|
||||||
Cross-Plugin Tool Usage:
|
|
||||||
projman.Planner uses: create_issue, search_lessons
|
|
||||||
pr-review.reviewer uses: get_pr_diff, create_pr_review
|
|
||||||
```
|
|
||||||
|
|
||||||
## Dependency Types
|
|
||||||
|
|
||||||
| Type | Line Style | Meaning |
|
|
||||||
|------|------------|---------|
|
|
||||||
| Required | Solid (`-->`) | Plugin cannot function without this dependency |
|
|
||||||
| Optional | Dashed (`-.->`) | Plugin works but with reduced functionality |
|
|
||||||
| Internal | Dotted (`...>`) | Self-dependency within same plugin |
|
|
||||||
| Shared MCP | Double (`==>`) | Plugins share same MCP server instance |
|
|
||||||
|
|
||||||
## Known Data Flow Patterns
|
|
||||||
|
|
||||||
The command recognizes these producer/consumer relationships:
|
|
||||||
|
|
||||||
### Data Producers
|
|
||||||
- `read_csv`, `read_parquet`, `read_json` - File loaders
|
|
||||||
- `pg_query`, `pg_execute` - Database queries
|
|
||||||
- `filter`, `select`, `groupby`, `join` - Transformations
|
|
||||||
|
|
||||||
### Data Consumers
|
|
||||||
- `describe`, `head`, `tail` - Data inspection
|
|
||||||
- `to_csv`, `to_parquet` - File writers
|
|
||||||
- `chart_create` - Visualization
|
|
||||||
|
|
||||||
### Cross-Plugin Flows
|
|
||||||
- `data-platform` produces `data_ref` -> `viz-platform` consumes for charts
|
|
||||||
- `projman` produces issues -> `pr-review` references in reviews
|
|
||||||
- `gitea` wiki -> `projman` lessons learned
|
|
||||||
|
|
||||||
## Examples
|
|
||||||
|
|
||||||
### Basic Usage
|
|
||||||
|
|
||||||
```
|
|
||||||
/dependency-graph
|
|
||||||
```
|
|
||||||
|
|
||||||
Generates Mermaid diagram for current marketplace.
|
|
||||||
|
|
||||||
### With Tool Details
|
|
||||||
|
|
||||||
```
|
|
||||||
/dependency-graph --show-tools
|
|
||||||
```
|
|
||||||
|
|
||||||
Includes individual tool nodes showing which tools each plugin provides.
|
|
||||||
|
|
||||||
### Text Summary
|
|
||||||
|
|
||||||
```
|
|
||||||
/dependency-graph --format text
|
|
||||||
```
|
|
||||||
|
|
||||||
Outputs text-based summary suitable for CLAUDE.md inclusion.
|
|
||||||
|
|
||||||
### Specific Path
|
|
||||||
|
|
||||||
```
|
|
||||||
/dependency-graph ~/claude-plugins-work
|
|
||||||
```
|
|
||||||
|
|
||||||
Analyze marketplace at specified path.
|
|
||||||
|
|
||||||
## Integration with Other Commands
|
|
||||||
|
|
||||||
Use with `/validate-contracts` to:
|
|
||||||
1. Run `/dependency-graph` to visualize relationships
|
|
||||||
2. Run `/validate-contracts` to find issues in those relationships
|
|
||||||
3. Fix issues and regenerate graph to verify
|
|
||||||
|
|
||||||
## Available Tools
|
|
||||||
|
|
||||||
Use these MCP tools:
|
|
||||||
- `parse_plugin_interface` - Extract interface from plugin README.md
|
|
||||||
- `parse_claude_md_agents` - Extract agents and their tool sequences
|
|
||||||
- `generate_compatibility_report` - Get full interface data (JSON format for analysis)
|
|
||||||
- `validate_data_flow` - Verify data producer/consumer relationships
|
|
||||||
|
|
||||||
## Implementation Notes
|
|
||||||
|
|
||||||
### Detecting Shared MCP Servers
|
|
||||||
|
|
||||||
Check each plugin's `.mcp.json` file for server definitions:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# List all .mcp.json files in plugins
|
|
||||||
find plugins/ -name ".mcp.json" -exec cat {} \;
|
|
||||||
```
|
|
||||||
|
|
||||||
Plugins with identical MCP server names share that server.
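
A sketch of that grouping step, scanning each plugin's `.mcp.json` and bucketing plugins by server name; the top-level `mcpServers` key is assumed, so adjust if the manifests use a different layout:

```python
import json
from collections import defaultdict
from pathlib import Path

def group_plugins_by_mcp_server(plugins_root="plugins"):
    """Return {mcp server name: [plugin names]} based on each plugin's .mcp.json."""
    groups = defaultdict(list)
    for mcp_file in Path(plugins_root).glob("*/.mcp.json"):
        try:
            servers = json.loads(mcp_file.read_text()).get("mcpServers", {})
        except json.JSONDecodeError:
            continue  # malformed manifest; contract validation flags it separately
        for server_name in servers:
            groups[server_name].append(mcp_file.parent.name)
    return groups

if __name__ == "__main__":
    for server, plugins in group_plugins_by_mcp_server().items():
        shared = " (shared)" if len(plugins) > 1 else ""
        print(f"{server}: {', '.join(sorted(plugins))}{shared}")
```
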
### Identifying Data Flows
|
|
||||||
|
|
||||||
1. Parse tool categories from README.md
|
|
||||||
2. Map known producer tools to their output types
|
|
||||||
3. Map known consumer tools to their input requirements
|
|
||||||
4. Create edges where outputs match inputs (see the sketch below)
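
A sketch of steps 2-4, using producer/consumer mappings like the known data flow patterns listed later in this document; the tool-to-type tables here are illustrative, not exhaustive:

```python
# Which tools emit which data types, and which tools need them (illustrative subset).
PRODUCES = {"read_csv": "data_ref", "pg_query": "data_ref", "filter": "data_ref"}
CONSUMES = {"chart_create": "data_ref", "describe": "data_ref", "to_parquet": "data_ref"}

def data_flow_edges(plugin_tools):
    """Given {plugin: [tool names]}, return (producer plugin, consumer plugin, data type) edges."""
    edges = set()
    for producer, p_tools in plugin_tools.items():
        produced = {PRODUCES[t] for t in p_tools if t in PRODUCES}
        for consumer, c_tools in plugin_tools.items():
            needed = {CONSUMES[t] for t in c_tools if t in CONSUMES}
            for data_type in produced & needed:
                edges.add((producer, consumer, data_type))
    return sorted(edges)

tools = {
    "data-platform": ["read_csv", "filter", "describe"],
    "viz-platform": ["chart_create"],
}
for producer, consumer, data_type in data_flow_edges(tools):
    label = "INTERNAL" if producer == consumer else "data flow"
    print(f"{producer} -> {consumer} ({data_type}) [{label}]")
```
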
### Optional vs Required
|
|
||||||
|
|
||||||
- **Required**: Consumer tool has no default/fallback behavior
|
|
||||||
- **Optional**: Consumer works without producer (e.g., lessons search returns empty)
|
|
||||||
|
|
||||||
Determination is based on:
|
|
||||||
- Issue severity from `validate_data_flow` (ERROR = required, WARNING = optional)
|
|
||||||
- Tool documentation stating "requires" vs "uses if available"
|
|
||||||
|
|
||||||
## Sample Output
|
|
||||||
|
|
||||||
For the leo-claude-mktplace:
|
|
||||||
|
|
||||||
```mermaid
|
|
||||||
flowchart TD
|
|
||||||
subgraph gitea_mcp["Shared MCP: gitea"]
|
|
||||||
projman["projman<br/>14 commands"]
|
|
||||||
pr-review["pr-review<br/>6 commands"]
|
|
||||||
end
|
|
||||||
|
|
||||||
subgraph netbox_mcp["Shared MCP: netbox"]
|
|
||||||
cmdb-assistant["cmdb-assistant<br/>3 commands"]
|
|
||||||
end
|
|
||||||
|
|
||||||
subgraph data_mcp["Shared MCP: data-platform"]
|
|
||||||
data-platform["data-platform<br/>7 commands"]
|
|
||||||
end
|
|
||||||
|
|
||||||
subgraph viz_mcp["Shared MCP: viz-platform"]
|
|
||||||
viz-platform["viz-platform<br/>7 commands"]
|
|
||||||
end
|
|
||||||
|
|
||||||
subgraph standalone["Standalone Plugins"]
|
|
||||||
doc-guardian["doc-guardian"]
|
|
||||||
code-sentinel["code-sentinel"]
|
|
||||||
clarity-assist["clarity-assist"]
|
|
||||||
git-flow["git-flow"]
|
|
||||||
claude-config-maintainer["claude-config-maintainer"]
|
|
||||||
contract-validator["contract-validator"]
|
|
||||||
end
|
|
||||||
|
|
||||||
%% Data flow: data-platform -> viz-platform
|
|
||||||
data-platform -->|"data_ref"| viz-platform
|
|
||||||
|
|
||||||
%% Cross-plugin: projman lessons -> pr-review context
|
|
||||||
projman -.->|"lessons"| pr-review
|
|
||||||
|
|
||||||
%% Styling
|
|
||||||
classDef mcpGroup fill:#e8f4fd,stroke:#2196f3
|
|
||||||
classDef standalone fill:#f5f5f5,stroke:#9e9e9e
|
|
||||||
classDef required stroke:#e74c3c,stroke-width:2px
|
|
||||||
classDef optional stroke:#f39c12,stroke-dasharray:5 5
|
|
||||||
|
|
||||||
class gitea_mcp,netbox_mcp,data_mcp,viz_mcp mcpGroup
|
|
||||||
class standalone standalone
|
|
||||||
```
|
|
||||||
@@ -1,195 +0,0 @@
|
|||||||
#!/bin/bash
|
|
||||||
# contract-validator SessionStart auto-validate hook
|
|
||||||
# Validates plugin contracts only when plugin files have changed since last check
|
|
||||||
# All output MUST have [contract-validator] prefix
|
|
||||||
|
|
||||||
PREFIX="[contract-validator]"
|
|
||||||
|
|
||||||
# ============================================================================
|
|
||||||
# Configuration
|
|
||||||
# ============================================================================
|
|
||||||
|
|
||||||
# Enable/disable auto-check (default: true)
|
|
||||||
AUTO_CHECK="${CONTRACT_VALIDATOR_AUTO_CHECK:-true}"
|
|
||||||
|
|
||||||
# Cache location for storing last check hash
|
|
||||||
CACHE_DIR="$HOME/.cache/claude-plugins/contract-validator"
|
|
||||||
HASH_FILE="$CACHE_DIR/last-check.hash"
|
|
||||||
|
|
||||||
# Marketplace location (installed plugins)
|
|
||||||
MARKETPLACE_PATH="$HOME/.claude/plugins/marketplaces/leo-claude-mktplace"
|
|
||||||
|
|
||||||
# ============================================================================
|
|
||||||
# Early exit if disabled
|
|
||||||
# ============================================================================
|
|
||||||
|
|
||||||
if [[ "$AUTO_CHECK" != "true" ]]; then
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
# ============================================================================
|
|
||||||
# Smart mode: Check if plugin files have changed
|
|
||||||
# ============================================================================
|
|
||||||
|
|
||||||
# Function to compute hash of all plugin manifest files
|
|
||||||
compute_plugin_hash() {
|
|
||||||
local hash_input=""
|
|
||||||
|
|
||||||
if [[ -d "$MARKETPLACE_PATH/plugins" ]]; then
|
|
||||||
# Hash all plugin.json, hooks.json, and agent files
|
|
||||||
while IFS= read -r -d '' file; do
|
|
||||||
if [[ -f "$file" ]]; then
|
|
||||||
hash_input+="$(md5sum "$file" 2>/dev/null | cut -d' ' -f1)"
|
|
||||||
fi
|
|
||||||
done < <(find "$MARKETPLACE_PATH/plugins" \
|
|
||||||
\( -name "plugin.json" -o -name "hooks.json" -o -name "*.md" -path "*/agents/*" \) \
|
|
||||||
-print0 2>/dev/null | sort -z)
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Also include marketplace.json
|
|
||||||
if [[ -f "$MARKETPLACE_PATH/.claude-plugin/marketplace.json" ]]; then
|
|
||||||
hash_input+="$(md5sum "$MARKETPLACE_PATH/.claude-plugin/marketplace.json" 2>/dev/null | cut -d' ' -f1)"
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Compute final hash
|
|
||||||
echo "$hash_input" | md5sum | cut -d' ' -f1
|
|
||||||
}
|
|
||||||
|
|
||||||
# Ensure cache directory exists
|
|
||||||
mkdir -p "$CACHE_DIR" 2>/dev/null
|
|
||||||
|
|
||||||
# Compute current hash
|
|
||||||
CURRENT_HASH=$(compute_plugin_hash)
|
|
||||||
|
|
||||||
# Check if we have a previous hash
|
|
||||||
if [[ -f "$HASH_FILE" ]]; then
|
|
||||||
PREVIOUS_HASH=$(cat "$HASH_FILE" 2>/dev/null)
|
|
||||||
|
|
||||||
# If hashes match, no changes - skip validation
|
|
||||||
if [[ "$CURRENT_HASH" == "$PREVIOUS_HASH" ]]; then
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
|
|
||||||
# ============================================================================
|
|
||||||
# Run validation (hashes differ or no cache)
|
|
||||||
# ============================================================================
|
|
||||||
|
|
||||||
ISSUES_FOUND=0
|
|
||||||
WARNINGS=""
|
|
||||||
|
|
||||||
# Function to add warning
|
|
||||||
add_warning() {
|
|
||||||
WARNINGS+=" - $1"$'\n'
|
|
||||||
((ISSUES_FOUND++))
|
|
||||||
}
|
|
||||||
|
|
||||||
# 1. Check all installed plugins have valid plugin.json
|
|
||||||
if [[ -d "$MARKETPLACE_PATH/plugins" ]]; then
|
|
||||||
for plugin_dir in "$MARKETPLACE_PATH/plugins"/*/; do
|
|
||||||
if [[ -d "$plugin_dir" ]]; then
|
|
||||||
plugin_name=$(basename "$plugin_dir")
|
|
||||||
plugin_json="$plugin_dir/.claude-plugin/plugin.json"
|
|
||||||
|
|
||||||
if [[ ! -f "$plugin_json" ]]; then
|
|
||||||
add_warning "$plugin_name: missing .claude-plugin/plugin.json"
|
|
||||||
continue
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Basic JSON validation
|
|
||||||
if ! python3 -c "import json; json.load(open('$plugin_json'))" 2>/dev/null; then
|
|
||||||
add_warning "$plugin_name: invalid JSON in plugin.json"
|
|
||||||
continue
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Check required fields
|
|
||||||
if ! python3 -c "
|
|
||||||
import json
|
|
||||||
with open('$plugin_json') as f:
|
|
||||||
data = json.load(f)
|
|
||||||
required = ['name', 'version', 'description']
|
|
||||||
missing = [r for r in required if r not in data]
|
|
||||||
if missing:
|
|
||||||
exit(1)
|
|
||||||
" 2>/dev/null; then
|
|
||||||
add_warning "$plugin_name: plugin.json missing required fields"
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
done
|
|
||||||
fi
|
|
||||||
|
|
||||||
# 2. Check hooks.json files are properly formatted
|
|
||||||
if [[ -d "$MARKETPLACE_PATH/plugins" ]]; then
|
|
||||||
while IFS= read -r -d '' hooks_file; do
|
|
||||||
plugin_name=$(basename "$(dirname "$(dirname "$hooks_file")")")
|
|
||||||
|
|
||||||
# Validate JSON
|
|
||||||
if ! python3 -c "import json; json.load(open('$hooks_file'))" 2>/dev/null; then
|
|
||||||
add_warning "$plugin_name: invalid JSON in hooks/hooks.json"
|
|
||||||
continue
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Validate hook structure
|
|
||||||
if ! python3 -c "
|
|
||||||
import json
|
|
||||||
with open('$hooks_file') as f:
|
|
||||||
data = json.load(f)
|
|
||||||
if 'hooks' not in data:
|
|
||||||
exit(1)
|
|
||||||
valid_events = ['PreToolUse', 'PostToolUse', 'UserPromptSubmit', 'SessionStart', 'SessionEnd', 'Notification', 'Stop', 'SubagentStop', 'PreCompact']
|
|
||||||
for event in data['hooks']:
|
|
||||||
if event not in valid_events:
|
|
||||||
exit(1)
|
|
||||||
for hook in data['hooks'][event]:
|
|
||||||
# Support both flat structure (type at top) and nested structure (matcher + hooks array)
|
|
||||||
if 'type' in hook:
|
|
||||||
# Flat structure: {type: 'command', command: '...'}
|
|
||||||
pass
|
|
||||||
elif 'matcher' in hook and 'hooks' in hook:
|
|
||||||
# Nested structure: {matcher: '...', hooks: [{type: 'command', ...}]}
|
|
||||||
for nested_hook in hook['hooks']:
|
|
||||||
if 'type' not in nested_hook:
|
|
||||||
exit(1)
|
|
||||||
else:
|
|
||||||
exit(1)
|
|
||||||
" 2>/dev/null; then
|
|
||||||
add_warning "$plugin_name: hooks.json has invalid structure or events"
|
|
||||||
fi
|
|
||||||
done < <(find "$MARKETPLACE_PATH/plugins" -path "*/hooks/hooks.json" -print0 2>/dev/null)
|
|
||||||
fi
|
|
||||||
|
|
||||||
# 3. Check agent references are valid (agent files exist and are markdown)
|
|
||||||
if [[ -d "$MARKETPLACE_PATH/plugins" ]]; then
|
|
||||||
while IFS= read -r -d '' agent_file; do
|
|
||||||
plugin_name=$(basename "$(dirname "$(dirname "$agent_file")")")
|
|
||||||
agent_name=$(basename "$agent_file")
|
|
||||||
|
|
||||||
# Check file is not empty
|
|
||||||
if [[ ! -s "$agent_file" ]]; then
|
|
||||||
add_warning "$plugin_name: empty agent file $agent_name"
|
|
||||||
continue
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Check file has markdown content (at least a header)
|
|
||||||
if ! grep -q '^#' "$agent_file" 2>/dev/null; then
|
|
||||||
add_warning "$plugin_name: agent $agent_name missing markdown header"
|
|
||||||
fi
|
|
||||||
done < <(find "$MARKETPLACE_PATH/plugins" -path "*/agents/*.md" -print0 2>/dev/null)
|
|
||||||
fi
|
|
||||||
|
|
||||||
# ============================================================================
|
|
||||||
# Store new hash and report results
|
|
||||||
# ============================================================================
|
|
||||||
|
|
||||||
# Always store the new hash (even if issues found - we don't want to recheck)
|
|
||||||
echo "$CURRENT_HASH" > "$HASH_FILE"
|
|
||||||
|
|
||||||
# Report any issues found (non-blocking warning)
|
|
||||||
if [[ $ISSUES_FOUND -gt 0 ]]; then
|
|
||||||
echo "$PREFIX Plugin contract validation found $ISSUES_FOUND issue(s):"
|
|
||||||
echo "$WARNINGS"
|
|
||||||
echo "$PREFIX Run /validate-contracts for full details"
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Always exit 0 (non-blocking)
|
|
||||||
exit 0
|
|
||||||
@@ -1,174 +0,0 @@
|
|||||||
#!/bin/bash
|
|
||||||
# contract-validator breaking change detection hook
|
|
||||||
# Warns when plugin interface changes might break consumers
|
|
||||||
# This is a PostToolUse hook - non-blocking, warnings only
|
|
||||||
|
|
||||||
PREFIX="[contract-validator]"
|
|
||||||
|
|
||||||
# Check if warnings are enabled (default: true)
|
|
||||||
if [[ "${CONTRACT_VALIDATOR_BREAKING_WARN:-true}" != "true" ]]; then
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Read tool input from stdin
|
|
||||||
INPUT=$(cat)
|
|
||||||
|
|
||||||
# Extract file_path from JSON input
|
|
||||||
FILE_PATH=$(echo "$INPUT" | grep -o '"file_path"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1 | sed 's/.*"file_path"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/')
|
|
||||||
|
|
||||||
# If no file_path found, exit silently
|
|
||||||
if [ -z "$FILE_PATH" ]; then
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Check if file is a plugin interface file
|
|
||||||
is_interface_file() {
|
|
||||||
local file="$1"
|
|
||||||
|
|
||||||
case "$file" in
|
|
||||||
*/plugin.json) return 0 ;;
|
|
||||||
*/.claude-plugin/plugin.json) return 0 ;;
|
|
||||||
*/hooks.json) return 0 ;;
|
|
||||||
*/hooks/hooks.json) return 0 ;;
|
|
||||||
*/.mcp.json) return 0 ;;
|
|
||||||
*/agents/*.md) return 0 ;;
|
|
||||||
*/commands/*.md) return 0 ;;
|
|
||||||
*/skills/*.md) return 0 ;;
|
|
||||||
esac
|
|
||||||
|
|
||||||
return 1
|
|
||||||
}
|
|
||||||
|
|
||||||
# Exit if not an interface file
|
|
||||||
if ! is_interface_file "$FILE_PATH"; then
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Check if file exists and is in a git repo
|
|
||||||
if [[ ! -f "$FILE_PATH" ]]; then
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Get the directory containing the file
|
|
||||||
FILE_DIR=$(dirname "$FILE_PATH")
|
|
||||||
FILE_NAME=$(basename "$FILE_PATH")
|
|
||||||
|
|
||||||
# Try to get the previous version from git
|
|
||||||
cd "$FILE_DIR" 2>/dev/null || exit 0
|
|
||||||
|
|
||||||
# Check if we're in a git repo
|
|
||||||
if ! git rev-parse --git-dir > /dev/null 2>&1; then
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Get previous version (HEAD version before current changes)
|
|
||||||
PREV_CONTENT=$(git show HEAD:"$FILE_PATH" 2>/dev/null || echo "")
|
|
||||||
|
|
||||||
# If no previous version, this is a new file - no breaking changes possible
|
|
||||||
if [ -z "$PREV_CONTENT" ]; then
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Read current content
|
|
||||||
CURR_CONTENT=$(cat "$FILE_PATH" 2>/dev/null || echo "")
|
|
||||||
|
|
||||||
if [ -z "$CURR_CONTENT" ]; then
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
BREAKING_CHANGES=()
|
|
||||||
|
|
||||||
# Detect breaking changes based on file type
|
|
||||||
case "$FILE_PATH" in
|
|
||||||
*/plugin.json|*/.claude-plugin/plugin.json)
|
|
||||||
# Check for removed or renamed fields in plugin.json
|
|
||||||
|
|
||||||
# Check if name changed
|
|
||||||
PREV_NAME=$(echo "$PREV_CONTENT" | grep -o '"name"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1)
|
|
||||||
CURR_NAME=$(echo "$CURR_CONTENT" | grep -o '"name"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1)
|
|
||||||
if [ -n "$PREV_NAME" ] && [ "$PREV_NAME" != "$CURR_NAME" ]; then
|
|
||||||
BREAKING_CHANGES+=("Plugin name changed - consumers may need updates")
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Check if version had major bump (semantic versioning)
|
|
||||||
PREV_VER=$(echo "$PREV_CONTENT" | grep -o '"version"[[:space:]]*:[[:space:]]*"[^"]*"' | sed 's/.*"\([0-9]*\)\..*/\1/')
|
|
||||||
CURR_VER=$(echo "$CURR_CONTENT" | grep -o '"version"[[:space:]]*:[[:space:]]*"[^"]*"' | sed 's/.*"\([0-9]*\)\..*/\1/')
|
|
||||||
if [ -n "$PREV_VER" ] && [ -n "$CURR_VER" ] && [ "$CURR_VER" -gt "$PREV_VER" ] 2>/dev/null; then
|
|
||||||
BREAKING_CHANGES+=("Major version bump detected - verify breaking changes documented")
|
|
||||||
fi
|
|
||||||
;;
|
|
||||||
|
|
||||||
*/hooks.json|*/hooks/hooks.json)
|
|
||||||
# Check for removed hook events
|
|
||||||
PREV_EVENTS=$(echo "$PREV_CONTENT" | grep -oE '"(PreToolUse|PostToolUse|UserPromptSubmit|SessionStart|SessionEnd|Notification|Stop|SubagentStop|PreCompact)"' | sort -u)
|
|
||||||
CURR_EVENTS=$(echo "$CURR_CONTENT" | grep -oE '"(PreToolUse|PostToolUse|UserPromptSubmit|SessionStart|SessionEnd|Notification|Stop|SubagentStop|PreCompact)"' | sort -u)
|
|
||||||
|
|
||||||
# Find removed events
|
|
||||||
REMOVED_EVENTS=$(comm -23 <(echo "$PREV_EVENTS") <(echo "$CURR_EVENTS") 2>/dev/null)
|
|
||||||
if [ -n "$REMOVED_EVENTS" ]; then
|
|
||||||
BREAKING_CHANGES+=("Hook events removed: $(echo $REMOVED_EVENTS | tr '\n' ' ')")
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Check for changed matchers
|
|
||||||
PREV_MATCHERS=$(echo "$PREV_CONTENT" | grep -o '"matcher"[[:space:]]*:[[:space:]]*"[^"]*"' | sort -u)
|
|
||||||
CURR_MATCHERS=$(echo "$CURR_CONTENT" | grep -o '"matcher"[[:space:]]*:[[:space:]]*"[^"]*"' | sort -u)
|
|
||||||
if [ "$PREV_MATCHERS" != "$CURR_MATCHERS" ]; then
|
|
||||||
BREAKING_CHANGES+=("Hook matchers changed - verify tool coverage")
|
|
||||||
fi
|
|
||||||
;;
|
|
||||||
|
|
||||||
*/.mcp.json)
|
|
||||||
# Check for removed MCP servers
|
|
||||||
PREV_SERVERS=$(echo "$PREV_CONTENT" | grep -o '"[^"]*"[[:space:]]*:' | grep -v "mcpServers" | sort -u)
|
|
||||||
CURR_SERVERS=$(echo "$CURR_CONTENT" | grep -o '"[^"]*"[[:space:]]*:' | grep -v "mcpServers" | sort -u)
|
|
||||||
|
|
||||||
REMOVED_SERVERS=$(comm -23 <(echo "$PREV_SERVERS") <(echo "$CURR_SERVERS") 2>/dev/null)
|
|
||||||
if [ -n "$REMOVED_SERVERS" ]; then
|
|
||||||
BREAKING_CHANGES+=("MCP servers removed - tools may be unavailable")
|
|
||||||
fi
|
|
||||||
;;
|
|
||||||
|
|
||||||
*/agents/*.md)
|
|
||||||
# Check if agent file was significantly reduced (might indicate removal of capabilities)
|
|
||||||
PREV_LINES=$(echo "$PREV_CONTENT" | wc -l)
|
|
||||||
CURR_LINES=$(echo "$CURR_CONTENT" | wc -l)
|
|
||||||
|
|
||||||
# If more than 50% reduction, warn
|
|
||||||
if [ "$PREV_LINES" -gt 10 ] && [ "$CURR_LINES" -lt $((PREV_LINES / 2)) ]; then
|
|
||||||
BREAKING_CHANGES+=("Agent definition significantly reduced - capabilities may be removed")
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Check if agent name/description changed in frontmatter
|
|
||||||
PREV_DESC=$(echo "$PREV_CONTENT" | head -20 | grep -i "description" | head -1)
|
|
||||||
CURR_DESC=$(echo "$CURR_CONTENT" | head -20 | grep -i "description" | head -1)
|
|
||||||
if [ -n "$PREV_DESC" ] && [ "$PREV_DESC" != "$CURR_DESC" ]; then
|
|
||||||
BREAKING_CHANGES+=("Agent description changed - verify consumer expectations")
|
|
||||||
fi
|
|
||||||
;;
|
|
||||||
|
|
||||||
*/commands/*.md|*/skills/*.md)
|
|
||||||
# Check if command/skill was significantly changed
|
|
||||||
PREV_LINES=$(echo "$PREV_CONTENT" | wc -l)
|
|
||||||
CURR_LINES=$(echo "$CURR_CONTENT" | wc -l)
|
|
||||||
|
|
||||||
if [ "$PREV_LINES" -gt 10 ] && [ "$CURR_LINES" -lt $((PREV_LINES / 2)) ]; then
|
|
||||||
BREAKING_CHANGES+=("Command/skill significantly reduced - behavior may change")
|
|
||||||
fi
|
|
||||||
;;
|
|
||||||
esac
|
|
||||||
|
|
||||||
# Output warnings if any breaking changes detected
|
|
||||||
if [[ ${#BREAKING_CHANGES[@]} -gt 0 ]]; then
|
|
||||||
echo ""
|
|
||||||
echo "$PREFIX WARNING: Potential breaking changes in $(basename "$FILE_PATH")"
|
|
||||||
echo "$PREFIX ============================================"
|
|
||||||
for change in "${BREAKING_CHANGES[@]}"; do
|
|
||||||
echo "$PREFIX - $change"
|
|
||||||
done
|
|
||||||
echo "$PREFIX ============================================"
|
|
||||||
echo "$PREFIX Consider updating CHANGELOG and notifying consumers"
|
|
||||||
echo ""
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Always exit 0 - non-blocking
|
|
||||||
exit 0
|
|
||||||
@@ -1,21 +0,0 @@
{
  "hooks": {
    "SessionStart": [
      {
        "type": "command",
        "command": "${CLAUDE_PLUGIN_ROOT}/hooks/auto-validate.sh"
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "${CLAUDE_PLUGIN_ROOT}/hooks/breaking-change-check.sh"
          }
        ]
      }
    ]
  }
}
@@ -49,13 +49,10 @@ DBT_PROFILES_DIR=~/.dbt
| `/initial-setup` | Interactive setup wizard for PostgreSQL and dbt configuration |
| `/ingest` | Load data from files or database |
| `/profile` | Generate data profile and statistics |
| `/data-quality` | Data quality assessment with pass/warn/fail scoring |
| `/schema` | Show database/DataFrame schema |
| `/explain` | Explain dbt model lineage |
| `/lineage` | Visualize data dependencies (ASCII) |
| `/lineage` | Visualize data dependencies |
| `/lineage-viz` | Generate Mermaid flowchart for dbt lineage |
| `/run` | Execute dbt models |
| `/dbt-test` | Run dbt tests with formatted results |

## Agents

@@ -1,103 +0,0 @@
|
|||||||
# /data-quality - Data Quality Assessment
|
|
||||||
|
|
||||||
Comprehensive data quality check for DataFrames with pass/warn/fail scoring.
|
|
||||||
|
|
||||||
## Usage
|
|
||||||
|
|
||||||
```
|
|
||||||
/data-quality <data_ref> [--strict]
|
|
||||||
```
|
|
||||||
|
|
||||||
## Workflow
|
|
||||||
|
|
||||||
1. **Get data reference**:
|
|
||||||
- If no data_ref provided, use `list_data` to show available options
|
|
||||||
- Validate the data_ref exists
|
|
||||||
|
|
||||||
2. **Null analysis**:
|
|
||||||
- Calculate null percentage per column
|
|
||||||
- **PASS**: < 5% nulls
|
|
||||||
- **WARN**: 5-20% nulls
|
|
||||||
- **FAIL**: > 20% nulls
|
|
||||||
|
|
||||||
3. **Duplicate detection**:
|
|
||||||
- Check for fully duplicated rows
|
|
||||||
- **PASS**: 0% duplicates
|
|
||||||
- **WARN**: < 1% duplicates
|
|
||||||
- **FAIL**: >= 1% duplicates
|
|
||||||
|
|
||||||
4. **Type consistency**:
|
|
||||||
- Identify mixed-type columns (object columns with mixed content)
|
|
||||||
- Flag columns that could be numeric but contain strings
|
|
||||||
- **PASS**: All columns have consistent types
|
|
||||||
- **FAIL**: Mixed types detected
|
|
||||||
|
|
||||||
5. **Outlier detection** (numeric columns):
|
|
||||||
- Use IQR method (values beyond 1.5 * IQR)
|
|
||||||
- Report percentage of outliers per column
|
|
||||||
- **PASS**: < 1% outliers
|
|
||||||
- **WARN**: 1-5% outliers
|
|
||||||
- **FAIL**: > 5% outliers
|
|
||||||
|
|
||||||
6. **Generate quality report**:
|
|
||||||
- Overall quality score (0-100)
|
|
||||||
- Per-column breakdown
|
|
||||||
- Recommendations for remediation
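
The checks in steps 2-5 above can be sketched in a few lines of pandas. This is an illustrative outline only, not the plugin's implementation; `df` stands for whatever DataFrame the `data_ref` resolves to, and the thresholds mirror the defaults listed above.

```python
import pandas as pd

def quality_checks(df: pd.DataFrame) -> dict:
    """Illustrative pass/warn/fail checks mirroring workflow steps 2-5."""
    report = {}

    # 2. Null analysis: percentage of missing values per column
    null_pct = df.isna().mean() * 100
    report["nulls"] = {
        col: "PASS" if p < 5 else "WARN" if p <= 20 else "FAIL"
        for col, p in null_pct.items()
    }

    # 3. Duplicate detection: fully duplicated rows
    dup_pct = df.duplicated().mean() * 100
    report["duplicates"] = "PASS" if dup_pct == 0 else "WARN" if dup_pct < 1 else "FAIL"

    # 4. Type consistency: object columns holding more than one Python type
    mixed = [
        col for col in df.select_dtypes(include="object")
        if df[col].dropna().map(type).nunique() > 1
    ]
    report["types"] = "PASS" if not mixed else "FAIL"

    # 5. Outlier detection (IQR): values beyond 1.5 * IQR from the quartiles
    outlier_pct = {}
    for col in df.select_dtypes(include="number"):
        q1, q3 = df[col].quantile([0.25, 0.75])
        iqr = q3 - q1
        mask = (df[col] < q1 - 1.5 * iqr) | (df[col] > q3 + 1.5 * iqr)
        outlier_pct[col] = mask.mean() * 100
    report["outliers"] = {
        col: "PASS" if p < 1 else "WARN" if p <= 5 else "FAIL"
        for col, p in outlier_pct.items()
    }
    return report
```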
|
|
||||||
|
|
||||||
## Report Format
|
|
||||||
|
|
||||||
```
|
|
||||||
=== Data Quality Report ===
|
|
||||||
Dataset: sales_data
|
|
||||||
Rows: 10,000 | Columns: 15
|
|
||||||
Overall Score: 82/100 [PASS]
|
|
||||||
|
|
||||||
--- Column Analysis ---
|
|
||||||
| Column | Nulls | Dups | Type | Outliers | Status |
|
|
||||||
|--------------|-------|------|----------|----------|--------|
|
|
||||||
| customer_id | 0.0% | - | int64 | 0.2% | PASS |
|
|
||||||
| email | 2.3% | - | object | - | PASS |
|
|
||||||
| amount | 15.2% | - | float64 | 3.1% | WARN |
|
|
||||||
| created_at | 0.0% | - | datetime | - | PASS |
|
|
||||||
|
|
||||||
--- Issues Found ---
|
|
||||||
[WARN] Column 'amount': 15.2% null values (threshold: 5%)
|
|
||||||
[WARN] Column 'amount': 3.1% outliers detected
|
|
||||||
[FAIL] 1.2% duplicate rows detected (120 rows)
|
|
||||||
|
|
||||||
--- Recommendations ---
|
|
||||||
1. Investigate null values in 'amount' column
|
|
||||||
2. Review outliers in 'amount' - may be data entry errors
|
|
||||||
3. Remove or deduplicate 120 duplicate rows
|
|
||||||
```
|
|
||||||
|
|
||||||
## Options
|
|
||||||
|
|
||||||
| Flag | Description |
|
|
||||||
|------|-------------|
|
|
||||||
| `--strict` | Use stricter thresholds (WARN at 1% nulls, FAIL at 5%) |
|
|
||||||
|
|
||||||
## Examples
|
|
||||||
|
|
||||||
```
|
|
||||||
/data-quality sales_data
|
|
||||||
/data-quality df_customers --strict
|
|
||||||
```
|
|
||||||
|
|
||||||
## Scoring
|
|
||||||
|
|
||||||
| Component | Weight | Scoring |
|
|
||||||
|-----------|--------|---------|
|
|
||||||
| Nulls | 30% | 100 - (avg_null_pct * 2) |
|
|
||||||
| Duplicates | 20% | 100 - (dup_pct * 50) |
|
|
||||||
| Type consistency | 25% | 100 if clean, 0 if mixed |
|
|
||||||
| Outliers | 25% | 100 - (avg_outlier_pct * 10) |
|
|
||||||
|
|
||||||
Final score: Weighted average, capped at 0-100
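
As a rough sketch of how these weights combine, assuming the per-component percentages have already been computed (the function and argument names below are illustrative):

```python
def overall_score(avg_null_pct: float, dup_pct: float,
                  types_clean: bool, avg_outlier_pct: float) -> float:
    """Weighted quality score per the table above, clamped to 0-100."""
    components = {
        "nulls":    (0.30, 100 - avg_null_pct * 2),
        "dups":     (0.20, 100 - dup_pct * 50),
        "types":    (0.25, 100 if types_clean else 0),
        "outliers": (0.25, 100 - avg_outlier_pct * 10),
    }
    score = sum(w * max(0, min(100, s)) for w, s in components.values())
    return round(score, 1)

# Example: 5% avg nulls, 0.5% duplicates, clean types, 2% avg outliers
print(overall_score(5, 0.5, True, 2))  # 0.3*90 + 0.2*75 + 0.25*100 + 0.25*80 = 87.0
```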
|
|
||||||
|
|
||||||
## Available Tools
|
|
||||||
|
|
||||||
Use these MCP tools:
|
|
||||||
- `describe` - Get statistical summary (for outlier detection)
|
|
||||||
- `head` - Preview data
|
|
||||||
- `list_data` - List available DataFrames
|
|
||||||
@@ -1,119 +0,0 @@
|
|||||||
# /dbt-test - Run dbt Tests
|
|
||||||
|
|
||||||
Execute dbt tests with formatted pass/fail results.
|
|
||||||
|
|
||||||
## Usage
|
|
||||||
|
|
||||||
```
|
|
||||||
/dbt-test [selection] [--warn-only]
|
|
||||||
```
|
|
||||||
|
|
||||||
## Workflow
|
|
||||||
|
|
||||||
1. **Pre-validation** (MANDATORY):
|
|
||||||
- Use `dbt_parse` to validate project first
|
|
||||||
- If validation fails, show errors and STOP
|
|
||||||
|
|
||||||
2. **Execute tests**:
|
|
||||||
- Use `dbt_test` with provided selection
|
|
||||||
- Capture all test results
|
|
||||||
|
|
||||||
3. **Format results**:
|
|
||||||
- Group by test type (schema vs. data)
|
|
||||||
- Show pass/fail status with counts
|
|
||||||
- Display failure details
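
For illustration, the summary counts could also be derived from dbt's `run_results.json` artifact (written under the project's `target/` directory after a test run). This is a sketch of that post-processing, not the plugin's actual mechanism - the command itself goes through the `dbt_test` MCP tool.

```python
import json
from collections import Counter
from pathlib import Path

def summarize_run_results(path: str = "target/run_results.json") -> None:
    """Illustrative summary of dbt test statuses from run_results.json."""
    results = json.loads(Path(path).read_text())["results"]
    counts = Counter(r["status"] for r in results)
    total = len(results)

    print(f"Total: {total} tests")
    for status in ("pass", "fail", "warn", "skipped"):
        n = counts.get(status, 0)
        pct = round(100 * n / total) if total else 0
        print(f"{status.upper():>5}: {n} ({pct}%)")

    # Failure details: node id and message for anything that failed
    for r in results:
        if r["status"] == "fail":
            print(f"\nTest: {r['unique_id']}")
            print(f"Message: {r.get('message', '')}")
```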
|
|
||||||
|
|
||||||
## Report Format
|
|
||||||
|
|
||||||
```
|
|
||||||
=== dbt Test Results ===
|
|
||||||
Project: my_project
|
|
||||||
Selection: tag:critical
|
|
||||||
|
|
||||||
--- Summary ---
|
|
||||||
Total: 24 tests
|
|
||||||
PASS: 22 (92%)
|
|
||||||
FAIL: 1 (4%)
|
|
||||||
WARN: 1 (4%)
|
|
||||||
SKIP: 0 (0%)
|
|
||||||
|
|
||||||
--- Schema Tests (18) ---
|
|
||||||
[PASS] unique_dim_customers_customer_id
|
|
||||||
[PASS] not_null_dim_customers_customer_id
|
|
||||||
[PASS] not_null_dim_customers_email
|
|
||||||
[PASS] accepted_values_dim_customers_status
|
|
||||||
[FAIL] relationships_fct_orders_customer_id
|
|
||||||
|
|
||||||
--- Data Tests (6) ---
|
|
||||||
[PASS] assert_positive_order_amounts
|
|
||||||
[PASS] assert_valid_dates
|
|
||||||
[WARN] assert_recent_orders (threshold: 7 days)
|
|
||||||
|
|
||||||
--- Failure Details ---
|
|
||||||
Test: relationships_fct_orders_customer_id
|
|
||||||
Type: schema (relationships)
|
|
||||||
Model: fct_orders
|
|
||||||
Message: 15 records failed referential integrity check
|
|
||||||
Query: SELECT * FROM fct_orders WHERE customer_id NOT IN (SELECT customer_id FROM dim_customers)
|
|
||||||
|
|
||||||
--- Warning Details ---
|
|
||||||
Test: assert_recent_orders
|
|
||||||
Type: data
|
|
||||||
Message: No orders in last 7 days (expected for dev environment)
|
|
||||||
Severity: warn
|
|
||||||
```
|
|
||||||
|
|
||||||
## Selection Syntax
|
|
||||||
|
|
||||||
| Pattern | Meaning |
|
|
||||||
|---------|---------|
|
|
||||||
| (none) | Run all tests |
|
|
||||||
| `model_name` | Tests for specific model |
|
|
||||||
| `+model_name` | Tests for model and upstream |
|
|
||||||
| `tag:critical` | Tests with tag |
|
|
||||||
| `test_type:schema` | Only schema tests |
|
|
||||||
| `test_type:data` | Only data tests |
|
|
||||||
|
|
||||||
## Options
|
|
||||||
|
|
||||||
| Flag | Description |
|
|
||||||
|------|-------------|
|
|
||||||
| `--warn-only` | Treat failures as warnings (don't fail CI) |
|
|
||||||
|
|
||||||
## Examples
|
|
||||||
|
|
||||||
```
|
|
||||||
/dbt-test # Run all tests
|
|
||||||
/dbt-test dim_customers # Tests for specific model
|
|
||||||
/dbt-test tag:critical # Run critical tests only
|
|
||||||
/dbt-test +fct_orders # Test model and its upstream
|
|
||||||
```
|
|
||||||
|
|
||||||
## Test Types
|
|
||||||
|
|
||||||
### Schema Tests
|
|
||||||
Built-in tests defined in `schema.yml`:
|
|
||||||
- `unique` - No duplicate values
|
|
||||||
- `not_null` - No null values
|
|
||||||
- `accepted_values` - Value in allowed list
|
|
||||||
- `relationships` - Foreign key integrity
|
|
||||||
|
|
||||||
### Data Tests
|
|
||||||
Custom SQL tests in `tests/` directory:
|
|
||||||
- Return rows that fail the assertion
|
|
||||||
- Zero rows = pass, any rows = fail
|
|
||||||
|
|
||||||
## Exit Codes
|
|
||||||
|
|
||||||
| Code | Meaning |
|
|
||||||
|------|---------|
|
|
||||||
| 0 | All tests passed |
|
|
||||||
| 1 | One or more tests failed |
|
|
||||||
| 2 | dbt error (parse failure, etc.) |
|
|
||||||
|
|
||||||
## Available Tools
|
|
||||||
|
|
||||||
Use these MCP tools:
|
|
||||||
- `dbt_parse` - Pre-validation (ALWAYS RUN FIRST)
|
|
||||||
- `dbt_test` - Execute tests (REQUIRED)
|
|
||||||
- `dbt_build` - Alternative: run + test together
|
|
||||||
@@ -1,125 +0,0 @@
|
|||||||
# /lineage-viz - Mermaid Lineage Visualization
|
|
||||||
|
|
||||||
Generate Mermaid flowchart syntax for dbt model lineage.
|
|
||||||
|
|
||||||
## Usage
|
|
||||||
|
|
||||||
```
|
|
||||||
/lineage-viz <model_name> [--direction TB|LR] [--depth N]
|
|
||||||
```
|
|
||||||
|
|
||||||
## Workflow
|
|
||||||
|
|
||||||
1. **Get lineage data**:
|
|
||||||
- Use `dbt_lineage` to fetch model dependencies
|
|
||||||
- Capture upstream sources and downstream consumers
|
|
||||||
|
|
||||||
2. **Build Mermaid graph**:
|
|
||||||
- Create nodes for each model/source
|
|
||||||
- Style nodes by materialization type
|
|
||||||
- Add directional arrows for dependencies
|
|
||||||
|
|
||||||
3. **Output**:
|
|
||||||
- Render Mermaid flowchart syntax
|
|
||||||
- Include copy-paste ready code block
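
A minimal sketch of step 2, assuming the lineage has already been fetched into a dict of `model -> upstream nodes` plus a materialization lookup (both shapes are hypothetical - the actual `dbt_lineage` output may differ):

```python
def shape(name: str, kind: str) -> str:
    """Mermaid node syntax per materialization (see the Node Styles table)."""
    if kind == "source":
        return f"{name}[({name})]"
    if kind in ("table", "incremental"):
        return name + "{{" + name + "}}"
    if kind == "ephemeral":
        return f"{name}[/{name}/]"
    return f"{name}[{name}]"  # view / unknown

def to_mermaid(deps: dict[str, list[str]], kinds: dict[str, str],
               target: str, direction: str = "LR") -> str:
    """Render an upstream-dependency dict as Mermaid flowchart syntax."""
    nodes = set(deps) | {u for ups in deps.values() for u in ups}
    lines = [f"flowchart {direction}"]
    lines += ["    " + shape(n, kinds.get(n, "view")) for n in sorted(nodes)]
    for model, upstream in deps.items():
        lines += [f"    {u} --> {model}" for u in upstream]
    lines.append(f"    style {target} fill:#f96,stroke:#333,stroke-width:2px")
    return "\n".join(lines)

print(to_mermaid(
    {"stg_orders": ["raw_orders"], "fct_orders": ["stg_orders", "dim_customers"]},
    {"raw_orders": "source", "stg_orders": "view",
     "dim_customers": "table", "fct_orders": "table"},
    target="fct_orders",
))
```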
|
|
||||||
|
|
||||||
## Output Format
|
|
||||||
|
|
||||||
```mermaid
|
|
||||||
flowchart LR
|
|
||||||
subgraph Sources
|
|
||||||
raw_customers[(raw_customers)]
|
|
||||||
raw_orders[(raw_orders)]
|
|
||||||
end
|
|
||||||
|
|
||||||
subgraph Staging
|
|
||||||
stg_customers[stg_customers]
|
|
||||||
stg_orders[stg_orders]
|
|
||||||
end
|
|
||||||
|
|
||||||
subgraph Marts
|
|
||||||
dim_customers{{dim_customers}}
|
|
||||||
fct_orders{{fct_orders}}
|
|
||||||
end
|
|
||||||
|
|
||||||
raw_customers --> stg_customers
|
|
||||||
raw_orders --> stg_orders
|
|
||||||
stg_customers --> dim_customers
|
|
||||||
stg_orders --> fct_orders
|
|
||||||
dim_customers --> fct_orders
|
|
||||||
```
|
|
||||||
|
|
||||||
## Node Styles
|
|
||||||
|
|
||||||
| Materialization | Mermaid Shape | Example |
|
|
||||||
|-----------------|---------------|---------|
|
|
||||||
| source | Cylinder `[( )]` | `raw_data[(raw_data)]` |
|
|
||||||
| view | Rectangle `[ ]` | `stg_model[stg_model]` |
|
|
||||||
| table | Double braces `{{ }}` | `dim_model{{dim_model}}` |
|
|
||||||
| incremental | Hexagon `{{ }}` | `fct_model{{fct_model}}` |
|
|
||||||
| ephemeral | Dashed `[/ /]` | `tmp_model[/tmp_model/]` |
|
|
||||||
|
|
||||||
## Options
|
|
||||||
|
|
||||||
| Flag | Description |
|
|
||||||
|------|-------------|
|
|
||||||
| `--direction TB` | Top-to-bottom layout (default: LR = left-to-right) |
|
|
||||||
| `--depth N` | Limit lineage depth (default: unlimited) |
|
|
||||||
|
|
||||||
## Examples
|
|
||||||
|
|
||||||
```
|
|
||||||
/lineage-viz dim_customers
|
|
||||||
/lineage-viz fct_orders --direction TB
|
|
||||||
/lineage-viz rpt_revenue --depth 2
|
|
||||||
```
|
|
||||||
|
|
||||||
## Usage Tips
|
|
||||||
|
|
||||||
1. **Paste in documentation**: Copy the output directly into README.md or docs
|
|
||||||
2. **GitHub/GitLab rendering**: Both platforms render Mermaid natively
|
|
||||||
3. **Mermaid Live Editor**: Paste at https://mermaid.live for interactive editing
|
|
||||||
|
|
||||||
## Example Output
|
|
||||||
|
|
||||||
For `/lineage-viz fct_orders`:
|
|
||||||
|
|
||||||
~~~markdown
|
|
||||||
```mermaid
|
|
||||||
flowchart LR
|
|
||||||
%% Sources
|
|
||||||
raw_customers[(raw_customers)]
|
|
||||||
raw_orders[(raw_orders)]
|
|
||||||
raw_products[(raw_products)]
|
|
||||||
|
|
||||||
%% Staging
|
|
||||||
stg_customers[stg_customers]
|
|
||||||
stg_orders[stg_orders]
|
|
||||||
stg_products[stg_products]
|
|
||||||
|
|
||||||
%% Marts
|
|
||||||
dim_customers{{dim_customers}}
|
|
||||||
dim_products{{dim_products}}
|
|
||||||
fct_orders{{fct_orders}}
|
|
||||||
|
|
||||||
%% Dependencies
|
|
||||||
raw_customers --> stg_customers
|
|
||||||
raw_orders --> stg_orders
|
|
||||||
raw_products --> stg_products
|
|
||||||
stg_customers --> dim_customers
|
|
||||||
stg_products --> dim_products
|
|
||||||
stg_orders --> fct_orders
|
|
||||||
dim_customers --> fct_orders
|
|
||||||
dim_products --> fct_orders
|
|
||||||
|
|
||||||
%% Highlight target model
|
|
||||||
style fct_orders fill:#f96,stroke:#333,stroke-width:2px
|
|
||||||
```
|
|
||||||
~~~
|
|
||||||
|
|
||||||
## Available Tools
|
|
||||||
|
|
||||||
Use these MCP tools:
|
|
||||||
- `dbt_lineage` - Get model dependencies (REQUIRED)
|
|
||||||
- `dbt_ls` - List dbt resources
|
|
||||||
- `dbt_docs_generate` - Generate full manifest if needed
|
|
||||||
@@ -5,17 +5,6 @@
        "type": "command",
        "command": "${CLAUDE_PLUGIN_ROOT}/hooks/startup-check.sh"
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "${CLAUDE_PLUGIN_ROOT}/hooks/schema-diff-check.sh"
          }
        ]
      }
    ]
  }
}
@@ -1,138 +0,0 @@
|
|||||||
#!/bin/bash
|
|
||||||
# data-platform schema diff detection hook
|
|
||||||
# Warns about potentially breaking schema changes
|
|
||||||
# This is a command hook - non-blocking, warnings only
|
|
||||||
|
|
||||||
PREFIX="[data-platform]"
|
|
||||||
|
|
||||||
# Check if warnings are enabled (default: true)
|
|
||||||
if [[ "${DATA_PLATFORM_SCHEMA_WARN:-true}" != "true" ]]; then
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Read tool input from stdin (JSON with file_path)
|
|
||||||
INPUT=$(cat)
|
|
||||||
|
|
||||||
# Extract file_path from JSON input
|
|
||||||
FILE_PATH=$(echo "$INPUT" | grep -o '"file_path"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1 | sed 's/.*"file_path"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/')
|
|
||||||
|
|
||||||
# If no file_path found, exit silently
|
|
||||||
if [ -z "$FILE_PATH" ]; then
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Check if file is a schema-related file
|
|
||||||
is_schema_file() {
|
|
||||||
local file="$1"
|
|
||||||
|
|
||||||
# Check file extension
|
|
||||||
case "$file" in
|
|
||||||
*.sql) return 0 ;;
|
|
||||||
*/migrations/*.py) return 0 ;;
|
|
||||||
*/migrations/*.sql) return 0 ;;
|
|
||||||
*/models/*.py) return 0 ;;
|
|
||||||
*/models/*.sql) return 0 ;;
|
|
||||||
*schema.prisma) return 0 ;;
|
|
||||||
*schema.graphql) return 0 ;;
|
|
||||||
*/dbt/models/*.sql) return 0 ;;
|
|
||||||
*/dbt/models/*.yml) return 0 ;;
|
|
||||||
*/alembic/versions/*.py) return 0 ;;
|
|
||||||
esac
|
|
||||||
|
|
||||||
# Check directory patterns
|
|
||||||
if echo "$file" | grep -qE "(migrations?|schemas?|models)/"; then
|
|
||||||
return 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
return 1
|
|
||||||
}
|
|
||||||
|
|
||||||
# Exit if not a schema file
|
|
||||||
if ! is_schema_file "$FILE_PATH"; then
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Read the file content (if it exists and is readable)
|
|
||||||
if [[ ! -f "$FILE_PATH" ]]; then
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
FILE_CONTENT=$(cat "$FILE_PATH" 2>/dev/null || echo "")
|
|
||||||
|
|
||||||
if [[ -z "$FILE_CONTENT" ]]; then
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Detect breaking changes
|
|
||||||
BREAKING_CHANGES=()
|
|
||||||
|
|
||||||
# Check for DROP COLUMN
|
|
||||||
if echo "$FILE_CONTENT" | grep -qiE "DROP[[:space:]]+COLUMN"; then
|
|
||||||
BREAKING_CHANGES+=("DROP COLUMN detected - may break existing queries")
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Check for DROP TABLE
|
|
||||||
if echo "$FILE_CONTENT" | grep -qiE "DROP[[:space:]]+TABLE"; then
|
|
||||||
BREAKING_CHANGES+=("DROP TABLE detected - data loss risk")
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Check for DROP INDEX
|
|
||||||
if echo "$FILE_CONTENT" | grep -qiE "DROP[[:space:]]+INDEX"; then
|
|
||||||
BREAKING_CHANGES+=("DROP INDEX detected - may impact query performance")
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Check for ALTER TYPE / MODIFY COLUMN type changes
|
|
||||||
if echo "$FILE_CONTENT" | grep -qiE "ALTER[[:space:]]+.*(TYPE|COLUMN.*TYPE)"; then
|
|
||||||
BREAKING_CHANGES+=("Column type change detected - may cause data truncation")
|
|
||||||
fi
|
|
||||||
|
|
||||||
if echo "$FILE_CONTENT" | grep -qiE "MODIFY[[:space:]]+COLUMN"; then
|
|
||||||
BREAKING_CHANGES+=("MODIFY COLUMN detected - verify data compatibility")
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Check for adding NOT NULL to existing column
|
|
||||||
if echo "$FILE_CONTENT" | grep -qiE "ALTER[[:space:]]+.*SET[[:space:]]+NOT[[:space:]]+NULL"; then
|
|
||||||
BREAKING_CHANGES+=("Adding NOT NULL constraint - existing NULL values will fail")
|
|
||||||
fi
|
|
||||||
|
|
||||||
if echo "$FILE_CONTENT" | grep -qiE "ADD[[:space:]]+.*NOT[[:space:]]+NULL[^[:space:]]*[[:space:]]+DEFAULT"; then
|
|
||||||
# Adding NOT NULL with DEFAULT is usually safe - don't warn
|
|
||||||
:
|
|
||||||
elif echo "$FILE_CONTENT" | grep -qiE "ADD[[:space:]]+.*NOT[[:space:]]+NULL"; then
|
|
||||||
BREAKING_CHANGES+=("Adding NOT NULL column without DEFAULT - INSERT may fail")
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Check for RENAME TABLE/COLUMN
|
|
||||||
if echo "$FILE_CONTENT" | grep -qiE "RENAME[[:space:]]+(TABLE|COLUMN|TO)"; then
|
|
||||||
BREAKING_CHANGES+=("RENAME detected - update all references")
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Check for removing from Django/SQLAlchemy models (Python files)
|
|
||||||
if [[ "$FILE_PATH" == *.py ]]; then
|
|
||||||
if echo "$FILE_CONTENT" | grep -qE "^-[[:space:]]*[a-z_]+[[:space:]]*=.*Field\("; then
|
|
||||||
BREAKING_CHANGES+=("Model field removal detected in Python ORM")
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Check for Prisma schema changes
|
|
||||||
if [[ "$FILE_PATH" == *schema.prisma ]]; then
|
|
||||||
if echo "$FILE_CONTENT" | grep -qE "@relation.*onDelete.*Cascade"; then
|
|
||||||
BREAKING_CHANGES+=("Cascade delete detected - verify data safety")
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Output warnings if any breaking changes detected
|
|
||||||
if [[ ${#BREAKING_CHANGES[@]} -gt 0 ]]; then
|
|
||||||
echo ""
|
|
||||||
echo "$PREFIX WARNING: Potential breaking schema changes in $(basename "$FILE_PATH")"
|
|
||||||
echo "$PREFIX ============================================"
|
|
||||||
for change in "${BREAKING_CHANGES[@]}"; do
|
|
||||||
echo "$PREFIX - $change"
|
|
||||||
done
|
|
||||||
echo "$PREFIX ============================================"
|
|
||||||
echo "$PREFIX Review before deploying to production"
|
|
||||||
echo ""
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Always exit 0 - non-blocking
|
|
||||||
exit 0
|
|
||||||
@@ -22,9 +22,6 @@ doc-guardian monitors your code changes via hooks:
|---------|-------------|
| `/doc-audit` | Full project scan - reports all drift without changing anything |
| `/doc-sync` | Apply all pending documentation updates in one commit |
| `/changelog-gen` | Generate changelog from conventional commits in Keep-a-Changelog format |
| `/doc-coverage` | Calculate documentation coverage percentage for functions and classes |
| `/stale-docs` | Detect documentation files that are stale relative to their associated code |

## Hooks

@@ -36,8 +33,6 @@ doc-guardian monitors your code changes via hooks:
- **Version Drift**: Python 3.9 in docs but 3.11 in pyproject.toml
- **Missing Docs**: Public functions without docstrings
- **Stale Examples**: CLI examples that no longer work
- **Low Coverage**: Undocumented functions and classes
- **Stale Files**: Documentation that hasn't been updated alongside code changes

## Installation

@@ -11,9 +11,6 @@ This project uses doc-guardian for automatic documentation synchronization.
- Pending updates are queued silently during work
- Run `/doc-sync` to apply all pending documentation updates
- Run `/doc-audit` for a full project documentation review
- Run `/changelog-gen` to generate changelog from conventional commits
- Run `/doc-coverage` to check documentation coverage metrics
- Run `/stale-docs` to find documentation that may be outdated

### Documentation Files Tracked
- README.md (root and subdirectories)
@@ -1,109 +0,0 @@
|
|||||||
---
|
|
||||||
description: Generate changelog from conventional commits in Keep-a-Changelog format
|
|
||||||
---
|
|
||||||
|
|
||||||
# Changelog Generation
|
|
||||||
|
|
||||||
Generate a changelog entry from conventional commits.
|
|
||||||
|
|
||||||
## Process
|
|
||||||
|
|
||||||
1. **Identify Commit Range**
|
|
||||||
- Default: commits since last tag
|
|
||||||
- Optional: specify range (e.g., `v1.0.0..HEAD`)
|
|
||||||
- Detect if this is first release (no previous tags)
|
|
||||||
|
|
||||||
2. **Parse Conventional Commits**
|
|
||||||
Extract from commit messages following the pattern:
|
|
||||||
```
|
|
||||||
<type>(<scope>): <description>
|
|
||||||
|
|
||||||
[optional body]
|
|
||||||
|
|
||||||
[optional footer(s)]
|
|
||||||
```
|
|
||||||
|
|
||||||
**Recognized Types:**
|
|
||||||
| Type | Changelog Section |
|
|
||||||
|------|------------------|
|
|
||||||
| `feat` | Added |
|
|
||||||
| `fix` | Fixed |
|
|
||||||
| `docs` | Documentation |
|
|
||||||
| `perf` | Performance |
|
|
||||||
| `refactor` | Changed |
|
|
||||||
| `style` | Changed |
|
|
||||||
| `test` | Testing |
|
|
||||||
| `build` | Build |
|
|
||||||
| `ci` | CI/CD |
|
|
||||||
| `chore` | Maintenance |
|
|
||||||
| `BREAKING CHANGE` | Breaking Changes |
|
|
||||||
|
|
||||||
3. **Group by Type**
|
|
||||||
Organize commits into Keep-a-Changelog sections:
|
|
||||||
- Breaking Changes (if any `!` suffix or `BREAKING CHANGE` footer)
|
|
||||||
- Added (feat)
|
|
||||||
- Changed (refactor, style, perf)
|
|
||||||
- Deprecated
|
|
||||||
- Removed
|
|
||||||
- Fixed (fix)
|
|
||||||
- Security
|
|
||||||
|
|
||||||
4. **Format Entries**
|
|
||||||
For each commit:
|
|
||||||
- Extract scope (if present) as prefix
|
|
||||||
- Use description as entry text
|
|
||||||
- Link to commit hash if repository URL available
|
|
||||||
- Include PR/issue references from footer
|
|
||||||
|
|
||||||
5. **Output Format**
|
|
||||||
```markdown
|
|
||||||
## [Unreleased]
|
|
||||||
|
|
||||||
### Breaking Changes
|
|
||||||
- **scope**: Description of breaking change
|
|
||||||
|
|
||||||
### Added
|
|
||||||
- **scope**: New feature description
|
|
||||||
- Another feature without scope
|
|
||||||
|
|
||||||
### Changed
|
|
||||||
- **scope**: Refactoring description
|
|
||||||
|
|
||||||
### Fixed
|
|
||||||
- **scope**: Bug fix description
|
|
||||||
|
|
||||||
### Documentation
|
|
||||||
- Updated README with new examples
|
|
||||||
```
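
A compact sketch of steps 2-3 (parsing and grouping), assuming commit subjects have already been collected, e.g. from `git log --pretty=%s`; the type-to-section mapping mirrors the table above, and `BREAKING CHANGE` footers are omitted for brevity.

```python
import re
from collections import defaultdict

SECTION = {
    "feat": "Added", "fix": "Fixed", "docs": "Documentation", "perf": "Performance",
    "refactor": "Changed", "style": "Changed", "test": "Testing",
    "build": "Build", "ci": "CI/CD", "chore": "Maintenance",
}
PATTERN = re.compile(r"^(?P<type>\w+)(?:\((?P<scope>[^)]+)\))?(?P<bang>!)?:\s*(?P<desc>.+)$")

def group_commits(subjects: list[str]) -> dict[str, list[str]]:
    """Group conventional commit subjects into Keep-a-Changelog sections."""
    sections = defaultdict(list)
    for subject in subjects:
        m = PATTERN.match(subject)
        if not m:
            sections["Other"].append(subject)  # flag for manual categorization
            continue
        entry = (f"**{m['scope']}**: " if m["scope"] else "") + m["desc"]
        if m["bang"]:                          # `!` suffix marks a breaking change
            sections["Breaking Changes"].append(entry)
        else:
            sections[SECTION.get(m["type"], "Other")].append(entry)
    return dict(sections)

print(group_commits([
    "feat(auth)!: drop legacy token endpoint",
    "fix(login): handle session timeout",
    "docs: update README examples",
]))
```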
|
|
||||||
|
|
||||||
## Options
|
|
||||||
|
|
||||||
| Flag | Description | Default |
|
|
||||||
|------|-------------|---------|
|
|
||||||
| `--from <tag>` | Start from specific tag | Latest tag |
|
|
||||||
| `--to <ref>` | End at specific ref | HEAD |
|
|
||||||
| `--version <ver>` | Set version header | [Unreleased] |
|
|
||||||
| `--include-merge` | Include merge commits | false |
|
|
||||||
| `--group-by-scope` | Group by scope within sections | false |
|
|
||||||
|
|
||||||
## Integration
|
|
||||||
|
|
||||||
The generated output is designed to be copied directly into CHANGELOG.md:
|
|
||||||
- Follows [Keep a Changelog](https://keepachangelog.com) format
|
|
||||||
- Compatible with semantic versioning
|
|
||||||
- Excludes non-user-facing commits (chore, ci, test by default)
|
|
||||||
|
|
||||||
## Example Usage
|
|
||||||
|
|
||||||
```
|
|
||||||
/changelog-gen
|
|
||||||
/changelog-gen --from v1.0.0 --version 1.1.0
|
|
||||||
/changelog-gen --include-merge --group-by-scope
|
|
||||||
```
|
|
||||||
|
|
||||||
## Non-Conventional Commits
|
|
||||||
|
|
||||||
Commits not following conventional format are:
|
|
||||||
- Listed under "Other" section
|
|
||||||
- Flagged for manual categorization
|
|
||||||
- Skipped if `--strict` flag is used
|
|
||||||
@@ -1,128 +0,0 @@
|
|||||||
---
|
|
||||||
description: Calculate documentation coverage percentage for functions and classes
|
|
||||||
---
|
|
||||||
|
|
||||||
# Documentation Coverage
|
|
||||||
|
|
||||||
Analyze codebase to calculate documentation coverage metrics.
|
|
||||||
|
|
||||||
## Process
|
|
||||||
|
|
||||||
1. **Scan Source Files**
|
|
||||||
Identify all documentable items:
|
|
||||||
|
|
||||||
**Python:**
|
|
||||||
- Functions (def)
|
|
||||||
- Classes
|
|
||||||
- Methods
|
|
||||||
- Module-level docstrings
|
|
||||||
|
|
||||||
**JavaScript/TypeScript:**
|
|
||||||
- Functions (function, arrow functions)
|
|
||||||
- Classes
|
|
||||||
- Methods
|
|
||||||
- JSDoc comments
|
|
||||||
|
|
||||||
**Other Languages:**
|
|
||||||
- Adapt patterns for Go, Rust, etc.
|
|
||||||
|
|
||||||
2. **Determine Documentation Status**
|
|
||||||
For each item, check:
|
|
||||||
- Has docstring/JSDoc comment
|
|
||||||
- Docstring is non-empty and meaningful (not just `pass` or `TODO`)
|
|
||||||
- Parameters are documented (for detailed mode)
|
|
||||||
- Return type is documented (for detailed mode)
|
|
||||||
|
|
||||||
3. **Calculate Metrics**
|
|
||||||
```
|
|
||||||
Coverage = (Documented Items / Total Items) * 100
|
|
||||||
```
|
|
||||||
|
|
||||||
**Levels:**
|
|
||||||
- Basic: Item has any docstring
|
|
||||||
- Standard: Docstring describes purpose
|
|
||||||
- Complete: All parameters and return documented
|
|
||||||
|
|
||||||
4. **Output Format**
|
|
||||||
```
|
|
||||||
## Documentation Coverage Report
|
|
||||||
|
|
||||||
### Summary
|
|
||||||
- Total documentable items: 156
|
|
||||||
- Documented: 142
|
|
||||||
- Coverage: 91.0%
|
|
||||||
|
|
||||||
### By Type
|
|
||||||
| Type | Total | Documented | Coverage |
|
|
||||||
|------|-------|------------|----------|
|
|
||||||
| Functions | 89 | 85 | 95.5% |
|
|
||||||
| Classes | 23 | 21 | 91.3% |
|
|
||||||
| Methods | 44 | 36 | 81.8% |
|
|
||||||
|
|
||||||
### By Directory
|
|
||||||
| Path | Total | Documented | Coverage |
|
|
||||||
|------|-------|------------|----------|
|
|
||||||
| src/api/ | 34 | 32 | 94.1% |
|
|
||||||
| src/utils/ | 28 | 28 | 100.0% |
|
|
||||||
| src/models/ | 45 | 38 | 84.4% |
|
|
||||||
| tests/ | 49 | 44 | 89.8% |
|
|
||||||
|
|
||||||
### Undocumented Items
|
|
||||||
- [ ] src/api/handlers.py:45 `create_order()`
|
|
||||||
- [ ] src/api/handlers.py:78 `update_order()`
|
|
||||||
- [ ] src/models/user.py:23 `UserModel.validate()`
|
|
||||||
```
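
For the Python case, the count behind the coverage formula can be sketched with the standard `ast` module. Treat this as an outline under those assumptions only; it ignores the `--detailed` parameter/return checks and the exclude patterns.

```python
import ast
from pathlib import Path

def docstring_coverage(root: str = ".") -> float:
    """Percentage of modules, functions, and classes with a meaningful docstring."""
    total = documented = 0
    for path in Path(root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except (SyntaxError, UnicodeDecodeError):
            continue  # skip files that cannot be parsed
        nodes = [tree] + [
            n for n in ast.walk(tree)
            if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
        ]
        for node in nodes:
            total += 1
            doc = ast.get_docstring(node)
            if doc and doc.strip().upper() not in {"TODO", "PASS"}:
                documented += 1
    return round(100 * documented / total, 1) if total else 100.0

print(f"Coverage: {docstring_coverage('src')}%")  # 'src' is a hypothetical path
```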
|
|
||||||
|
|
||||||
## Options
|
|
||||||
|
|
||||||
| Flag | Description | Default |
|
|
||||||
|------|-------------|---------|
|
|
||||||
| `--path <dir>` | Scan specific directory | Project root |
|
|
||||||
| `--exclude <glob>` | Exclude files matching pattern | `**/test_*,**/*_test.*` |
|
|
||||||
| `--include-private` | Include private members (_prefixed) | false |
|
|
||||||
| `--include-tests` | Include test files | false |
|
|
||||||
| `--min-coverage <pct>` | Fail if below threshold | none |
|
|
||||||
| `--format <fmt>` | Output format (table, json, markdown) | table |
|
|
||||||
| `--detailed` | Check parameter/return docs | false |
|
|
||||||
|
|
||||||
## Thresholds
|
|
||||||
|
|
||||||
Common coverage targets:
|
|
||||||
| Level | Coverage | Description |
|
|
||||||
|-------|----------|-------------|
|
|
||||||
| Minimal | 60% | Basic documentation exists |
|
|
||||||
| Good | 80% | Most public APIs documented |
|
|
||||||
| Excellent | 95% | Comprehensive documentation |
|
|
||||||
|
|
||||||
## CI Integration
|
|
||||||
|
|
||||||
Use `--min-coverage` to enforce standards:
|
|
||||||
```bash
|
|
||||||
# Fail if coverage drops below 80%
|
|
||||||
claude /doc-coverage --min-coverage 80
|
|
||||||
```
|
|
||||||
|
|
||||||
Exit codes:
|
|
||||||
- 0: Coverage meets threshold (or no threshold set)
|
|
||||||
- 1: Coverage below threshold
|
|
||||||
|
|
||||||
## Example Usage
|
|
||||||
|
|
||||||
```
|
|
||||||
/doc-coverage
|
|
||||||
/doc-coverage --path src/
|
|
||||||
/doc-coverage --min-coverage 85 --exclude "**/generated/**"
|
|
||||||
/doc-coverage --detailed --include-private
|
|
||||||
```
|
|
||||||
|
|
||||||
## Language Detection
|
|
||||||
|
|
||||||
File extensions mapped to documentation patterns:
|
|
||||||
| Extension | Language | Doc Format |
|
|
||||||
|-----------|----------|------------|
|
|
||||||
| .py | Python | Docstrings (""") |
|
|
||||||
| .js, .ts | JavaScript/TypeScript | JSDoc (/** */) |
|
|
||||||
| .go | Go | // comments above |
|
|
||||||
| .rs | Rust | /// doc comments |
|
|
||||||
| .rb | Ruby | # comments, YARD |
|
|
||||||
| .java | Java | Javadoc (/** */) |
|
|
||||||
@@ -1,143 +0,0 @@
|
|||||||
---
|
|
||||||
description: Detect documentation files that are stale relative to their associated code
|
|
||||||
---
|
|
||||||
|
|
||||||
# Stale Documentation Detection
|
|
||||||
|
|
||||||
Identify documentation files that may be outdated based on commit history.
|
|
||||||
|
|
||||||
## Process
|
|
||||||
|
|
||||||
1. **Map Documentation to Code**
|
|
||||||
Build relationships between docs and code:
|
|
||||||
|
|
||||||
| Doc File | Related Code |
|
|
||||||
|----------|--------------|
|
|
||||||
| README.md | All files in same directory |
|
|
||||||
| API.md | src/api/**/* |
|
|
||||||
| CLAUDE.md | Configuration files, scripts |
|
|
||||||
| docs/module.md | src/module/**/* |
|
|
||||||
| Component.md | Component.tsx, Component.css |
|
|
||||||
|
|
||||||
2. **Analyze Commit History**
|
|
||||||
For each doc file:
|
|
||||||
- Find last commit that modified the doc
|
|
||||||
- Find last commit that modified related code
|
|
||||||
- Count commits to code since doc was updated
|
|
||||||
|
|
||||||
3. **Calculate Staleness**
|
|
||||||
```
|
|
||||||
Commits Behind = Code Commits Since Doc Update
|
|
||||||
Days Behind = Days Since Doc Update - Days Since Code Update
|
|
||||||
```
|
|
||||||
|
|
||||||
4. **Apply Threshold**
|
|
||||||
Default: Flag if documentation is 10+ commits behind related code
|
|
||||||
|
|
||||||
**Staleness Levels:**
|
|
||||||
| Commits Behind | Level | Action |
|
|
||||||
|----------------|-------|--------|
|
|
||||||
| 0-5 | Fresh | No action needed |
|
|
||||||
| 6-10 | Aging | Review recommended |
|
|
||||||
| 11-20 | Stale | Update needed |
|
|
||||||
| 20+ | Critical | Immediate attention |
|
|
||||||
|
|
||||||
5. **Output Format**
|
|
||||||
```
|
|
||||||
## Stale Documentation Report
|
|
||||||
|
|
||||||
### Critical (20+ commits behind)
|
|
||||||
| File | Last Updated | Commits Behind | Related Code |
|
|
||||||
|------|--------------|----------------|--------------|
|
|
||||||
| docs/api.md | 2024-01-15 | 34 | src/api/**/* |
|
|
||||||
|
|
||||||
### Stale (11-20 commits behind)
|
|
||||||
| File | Last Updated | Commits Behind | Related Code |
|
|
||||||
|------|--------------|----------------|--------------|
|
|
||||||
| README.md | 2024-02-20 | 15 | package.json, src/index.ts |
|
|
||||||
|
|
||||||
### Aging (6-10 commits behind)
|
|
||||||
| File | Last Updated | Commits Behind | Related Code |
|
|
||||||
|------|--------------|----------------|--------------|
|
|
||||||
| CONTRIBUTING.md | 2024-03-01 | 8 | .github/*, scripts/* |
|
|
||||||
|
|
||||||
### Summary
|
|
||||||
- Critical: 1 file
|
|
||||||
- Stale: 1 file
|
|
||||||
- Aging: 1 file
|
|
||||||
- Fresh: 12 files
|
|
||||||
- Total documentation files: 15
|
|
||||||
```
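
The commits-behind calculation in step 3 can be approximated directly with `git log` and `git rev-list`. A minimal sketch, assuming it runs from the repository root and that the doc-to-code mapping has already been resolved to a list of paths:

```python
import subprocess

def commits_behind(doc_path: str, code_paths: list[str]) -> int:
    """Commits touching the related code since the doc file was last modified."""
    last_doc_commit = subprocess.run(
        ["git", "log", "-1", "--format=%H", "--", doc_path],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    if not last_doc_commit:
        return 0  # doc has never been committed
    behind = subprocess.run(
        ["git", "rev-list", "--count", f"{last_doc_commit}..HEAD", "--", *code_paths],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return int(behind or 0)

n = commits_behind("docs/api.md", ["src/api"])
level = "Fresh" if n <= 5 else "Aging" if n <= 10 else "Stale" if n <= 20 else "Critical"
print(f"docs/api.md is {n} commits behind ({level})")
```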
|
|
||||||
|
|
||||||
## Options
|
|
||||||
|
|
||||||
| Flag | Description | Default |
|
|
||||||
|------|-------------|---------|
|
|
||||||
| `--threshold <n>` | Commits behind to flag as stale | 10 |
|
|
||||||
| `--days` | Use days instead of commits | false |
|
|
||||||
| `--path <dir>` | Scan specific directory | Project root |
|
|
||||||
| `--doc-pattern <glob>` | Pattern for doc files | `**/*.md,**/README*` |
|
|
||||||
| `--ignore <glob>` | Ignore specific docs | `CHANGELOG.md,LICENSE` |
|
|
||||||
| `--show-fresh` | Include fresh docs in output | false |
|
|
||||||
| `--format <fmt>` | Output format (table, json) | table |
|
|
||||||
|
|
||||||
## Relationship Detection
|
|
||||||
|
|
||||||
How docs are mapped to code:
|
|
||||||
|
|
||||||
1. **Same Directory**
|
|
||||||
- `src/api/README.md` relates to `src/api/**/*`
|
|
||||||
|
|
||||||
2. **Name Matching**
|
|
||||||
- `docs/auth.md` relates to `**/auth.*`, `**/auth/**`
|
|
||||||
|
|
||||||
3. **Explicit Links**
|
|
||||||
- Parse `[link](path)` in docs to find related files
|
|
||||||
|
|
||||||
4. **Import Analysis**
|
|
||||||
- Track which modules are referenced in code examples
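
A sketch of the first two mapping rules (same directory and name matching); the glob shapes are illustrative, and the explicit-link and import-analysis rules are omitted.

```python
from pathlib import Path

def related_code(doc_path: str) -> list[str]:
    """Guess code globs related to a doc file via directory and name matching."""
    doc = Path(doc_path)
    stem = doc.stem.lower()
    if stem == "readme":
        return [f"{doc.parent}/**/*"]              # rule 1: same directory
    return [f"**/{stem}.*", f"**/{stem}/**"]       # rule 2: name matching

print(related_code("src/api/README.md"))  # ['src/api/**/*']
print(related_code("docs/auth.md"))       # ['**/auth.*', '**/auth/**']
```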
|
|
||||||
|
|
||||||
## Configuration
|
|
||||||
|
|
||||||
Create `.doc-guardian.yml` to customize mappings:
|
|
||||||
```yaml
|
|
||||||
stale-docs:
|
|
||||||
threshold: 10
|
|
||||||
mappings:
|
|
||||||
- doc: docs/deployment.md
|
|
||||||
code:
|
|
||||||
- Dockerfile
|
|
||||||
- docker-compose.yml
|
|
||||||
- .github/workflows/deploy.yml
|
|
||||||
- doc: ARCHITECTURE.md
|
|
||||||
code:
|
|
||||||
- src/**/*
|
|
||||||
ignore:
|
|
||||||
- CHANGELOG.md
|
|
||||||
- LICENSE
|
|
||||||
- vendor/**
|
|
||||||
```
|
|
||||||
|
|
||||||
## Example Usage
|
|
||||||
|
|
||||||
```
|
|
||||||
/stale-docs
|
|
||||||
/stale-docs --threshold 5
|
|
||||||
/stale-docs --days --threshold 30
|
|
||||||
/stale-docs --path docs/ --show-fresh
|
|
||||||
```
|
|
||||||
|
|
||||||
## Integration with doc-audit
|
|
||||||
|
|
||||||
`/stale-docs` focuses specifically on commit-based staleness, while `/doc-audit` checks content accuracy. Use both for comprehensive documentation health:
|
|
||||||
|
|
||||||
```
|
|
||||||
/doc-audit # Check for broken references and content drift
|
|
||||||
/stale-docs # Check for files that may need review
|
|
||||||
```
|
|
||||||
|
|
||||||
## Exit Codes
|
|
||||||
|
|
||||||
- 0: No critical or stale documentation
|
|
||||||
- 1: Stale documentation found (useful for CI)
|
|
||||||
- 2: Critical documentation found
|
|
||||||
@@ -119,10 +119,6 @@ The git-assistant agent helps resolve merge conflicts with analysis and recommen
→ Status: Clean, up-to-date
```

## Documentation

- [Branching Strategy Guide](docs/BRANCHING-STRATEGY.md) - Detailed documentation of the `development -> staging -> main` promotion flow

## Integration

For CLAUDE.md integration instructions, see `claude-md-integration.md`.
@@ -1,541 +0,0 @@
|
|||||||
# Branching Strategy
|
|
||||||
|
|
||||||
## Overview
|
|
||||||
|
|
||||||
This document defines the branching strategy used by git-flow to manage code changes across environments. The strategy ensures:
|
|
||||||
|
|
||||||
- **Stability**: Production code is always release-ready
|
|
||||||
- **Isolation**: Features are developed in isolation without affecting others
|
|
||||||
- **Traceability**: Clear history of what changed and when
|
|
||||||
- **Collaboration**: Multiple developers can work simultaneously without conflicts
|
|
||||||
|
|
||||||
The strategy follows a hierarchical promotion model: feature branches merge into development, which promotes to staging for testing, and finally to main for production release.
|
|
||||||
|
|
||||||
## Branch Types
|
|
||||||
|
|
||||||
### Main Branches
|
|
||||||
|
|
||||||
| Branch | Purpose | Stability | Direct Commits |
|
|
||||||
|--------|---------|-----------|----------------|
|
|
||||||
| `main` | Production releases | Highest | Never |
|
|
||||||
| `staging` | Pre-production testing | High | Never |
|
|
||||||
| `development` | Active development | Medium | Never |
|
|
||||||
|
|
||||||
### Feature Branches
|
|
||||||
|
|
||||||
Feature branches are short-lived branches created from `development` for implementing specific changes.
|
|
||||||
|
|
||||||
| Prefix | Purpose | Example |
|
|
||||||
|--------|---------|---------|
|
|
||||||
| `feat/` | New features | `feat/user-authentication` |
|
|
||||||
| `fix/` | Bug fixes | `fix/login-timeout-error` |
|
|
||||||
| `docs/` | Documentation only | `docs/api-reference` |
|
|
||||||
| `refactor/` | Code restructuring | `refactor/auth-module` |
|
|
||||||
| `test/` | Test additions/fixes | `test/auth-coverage` |
|
|
||||||
| `chore/` | Maintenance tasks | `chore/update-dependencies` |
|
|
||||||
|
|
||||||
### Special Branches
|
|
||||||
|
|
||||||
| Branch | Purpose | Created From | Merges Into |
|
|
||||||
|--------|---------|--------------|-------------|
|
|
||||||
| `hotfix/*` | Emergency production fixes | `main` | `main` AND `development` |
|
|
||||||
| `release/*` | Release preparation | `development` | `staging` then `main` |
|
|
||||||
|
|
||||||
## Branch Hierarchy
|
|
||||||
|
|
||||||
```
|
|
||||||
PRODUCTION
|
|
||||||
│
|
|
||||||
▼
|
|
||||||
┌───────────────[ main ]───────────────┐
|
|
||||||
│ (stable releases) │
|
|
||||||
│ ▲ │
|
|
||||||
│ │ PR │
|
|
||||||
│ │ │
|
|
||||||
│ ┌───────┴───────┐ │
|
|
||||||
│ │ staging │ │
|
|
||||||
│ │ (testing) │ │
|
|
||||||
│ └───────▲───────┘ │
|
|
||||||
│ │ PR │
|
|
||||||
│ │ │
|
|
||||||
│ ┌─────────┴─────────┐ │
|
|
||||||
│ │ development │ │
|
|
||||||
│ │ (integration) │ │
|
|
||||||
│ └─────────▲─────────┘ │
|
|
||||||
│ ╱ │ ╲ │
|
|
||||||
│ PR╱ │PR ╲PR │
|
|
||||||
│ ╱ │ ╲ │
|
|
||||||
│ ┌─────┴──┐ ┌──┴───┐ ┌┴─────┐ │
|
|
||||||
│ │ feat/* │ │ fix/*│ │docs/*│ │
|
|
||||||
│ └────────┘ └──────┘ └──────┘ │
|
|
||||||
│ FEATURE BRANCHES │
|
|
||||||
└───────────────────────────────────────┘
|
|
||||||
```
|
|
||||||
|
|
||||||
## Workflow
|
|
||||||
|
|
||||||
### Feature Development
|
|
||||||
|
|
||||||
The standard workflow for implementing a new feature or fix:
|
|
||||||
|
|
||||||
```mermaid
|
|
||||||
gitGraph
|
|
||||||
commit id: "initial"
|
|
||||||
branch development
|
|
||||||
checkout development
|
|
||||||
commit id: "dev-base"
|
|
||||||
branch feat/new-feature
|
|
||||||
checkout feat/new-feature
|
|
||||||
commit id: "implement"
|
|
||||||
commit id: "tests"
|
|
||||||
commit id: "polish"
|
|
||||||
checkout development
|
|
||||||
merge feat/new-feature id: "PR merged"
|
|
||||||
commit id: "other-work"
|
|
||||||
```
|
|
||||||
|
|
||||||
**Steps:**
|
|
||||||
|
|
||||||
1. **Create branch from development**
|
|
||||||
```bash
|
|
||||||
git checkout development
|
|
||||||
git pull origin development
|
|
||||||
git checkout -b feat/add-user-auth
|
|
||||||
```
|
|
||||||
Or use git-flow command:
|
|
||||||
```
|
|
||||||
/branch-start add user authentication
|
|
||||||
```
|
|
||||||
|
|
||||||
2. **Implement changes**
|
|
||||||
- Make commits following conventional commit format
|
|
||||||
- Keep commits atomic and focused
|
|
||||||
- Push regularly to remote
|
|
||||||
|
|
||||||
3. **Create Pull Request**
|
|
||||||
- Target: `development`
|
|
||||||
- Include description of changes
|
|
||||||
- Link related issues
|
|
||||||
|
|
||||||
4. **Review and merge**
|
|
||||||
- Address review feedback
|
|
||||||
- Squash or rebase as needed
|
|
||||||
- Merge when approved
|
|
||||||
|
|
||||||
5. **Cleanup**
|
|
||||||
- Delete feature branch after merge
|
|
||||||
```
|
|
||||||
/branch-cleanup
|
|
||||||
```
|
|
||||||
|
|
||||||
### Release Promotion
|
|
||||||
|
|
||||||
Promoting code from development to production:
|
|
||||||
|
|
||||||
```mermaid
|
|
||||||
gitGraph
|
|
||||||
commit id: "v1.0.0" tag: "v1.0.0"
|
|
||||||
branch development
|
|
||||||
checkout development
|
|
||||||
commit id: "feat-1"
|
|
||||||
commit id: "feat-2"
|
|
||||||
commit id: "fix-1"
|
|
||||||
branch staging
|
|
||||||
checkout staging
|
|
||||||
commit id: "staging-test"
|
|
||||||
checkout main
|
|
||||||
merge staging id: "v1.1.0" tag: "v1.1.0"
|
|
||||||
checkout development
|
|
||||||
merge main id: "sync"
|
|
||||||
```
|
|
||||||
|
|
||||||
**Steps:**
|
|
||||||
|
|
||||||
1. **Prepare release**
|
|
||||||
- Ensure development is stable
|
|
||||||
- Update version numbers
|
|
||||||
- Update CHANGELOG.md
|
|
||||||
|
|
||||||
2. **Promote to staging**
|
|
||||||
```bash
|
|
||||||
git checkout staging
|
|
||||||
git merge development
|
|
||||||
git push origin staging
|
|
||||||
```
|
|
||||||
|
|
||||||
3. **Test in staging**
|
|
||||||
- Run integration tests
|
|
||||||
- Perform QA validation
|
|
||||||
- Fix any issues (merge fixes to development first, then re-promote)
|
|
||||||
|
|
||||||
4. **Promote to main**
|
|
||||||
```bash
|
|
||||||
git checkout main
|
|
||||||
git merge staging
|
|
||||||
git tag -a v1.1.0 -m "Release v1.1.0"
|
|
||||||
git push origin main --tags
|
|
||||||
```
|
|
||||||
|
|
||||||
5. **Sync development**
|
|
||||||
```bash
|
|
||||||
git checkout development
|
|
||||||
git merge main
|
|
||||||
git push origin development
|
|
||||||
```
|
|
||||||
|
|
||||||
### Hotfix Handling
|
|
||||||
|
|
||||||
Emergency fixes for production issues:
|
|
||||||
|
|
||||||
```mermaid
|
|
||||||
gitGraph
|
|
||||||
commit id: "v1.0.0" tag: "v1.0.0"
|
|
||||||
branch development
|
|
||||||
checkout development
|
|
||||||
commit id: "ongoing-work"
|
|
||||||
checkout main
|
|
||||||
branch hotfix/critical-bug
|
|
||||||
checkout hotfix/critical-bug
|
|
||||||
commit id: "fix"
|
|
||||||
checkout main
|
|
||||||
merge hotfix/critical-bug id: "v1.0.1" tag: "v1.0.1"
|
|
||||||
checkout development
|
|
||||||
merge main id: "sync-fix"
|
|
||||||
```
|
|
||||||
|
|
||||||
**Steps:**
|
|
||||||
|
|
||||||
1. **Create hotfix branch from main**
|
|
||||||
```bash
|
|
||||||
git checkout main
|
|
||||||
git pull origin main
|
|
||||||
git checkout -b hotfix/critical-security-fix
|
|
||||||
```
|
|
||||||
|
|
||||||
2. **Implement fix**
|
|
||||||
- Minimal, focused changes only
|
|
||||||
- Include tests for the fix
|
|
||||||
- Follow conventional commit format
|
|
||||||
|
|
||||||
3. **Merge to main**
|
|
||||||
```bash
|
|
||||||
git checkout main
|
|
||||||
git merge hotfix/critical-security-fix
|
|
||||||
git tag -a v1.0.1 -m "Hotfix: critical security fix"
|
|
||||||
git push origin main --tags
|
|
||||||
```
|
|
||||||
|
|
||||||
4. **Merge to development** (critical step)
|
|
||||||
```bash
|
|
||||||
git checkout development
|
|
||||||
git merge main
|
|
||||||
git push origin development
|
|
||||||
```
|
|
||||||
|
|
||||||
5. **Delete hotfix branch**
|
|
||||||
```bash
|
|
||||||
git branch -d hotfix/critical-security-fix
|
|
||||||
git push origin --delete hotfix/critical-security-fix
|
|
||||||
```
|
|
||||||
|
|
||||||
## PR Requirements
|
|
||||||
|
|
||||||
### To development
|
|
||||||
|
|
||||||
| Requirement | Description |
|
|
||||||
|-------------|-------------|
|
|
||||||
| Passing tests | All CI tests must pass |
|
|
||||||
| Conventional commit | Message follows `type(scope): description` format |
|
|
||||||
| No conflicts | Branch must be rebased on latest development |
|
|
||||||
| Code review | At least one approval (recommended) |
|
|
||||||
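One way to satisfy the "no conflicts" requirement before opening the PR is to rebase onto the latest development; the branch name below is illustrative.

```bash
# Rebase the feature branch onto the latest development
git checkout feat/add-user-auth
git fetch origin
git rebase origin/development
# --force-with-lease is safer than a plain force push after a rebase
git push --force-with-lease origin feat/add-user-auth
```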
|
|
||||||
### To staging
|
|
||||||
|
|
||||||
| Requirement | Description |
|
|
||||||
|-------------|-------------|
|
|
||||||
| All checks pass | CI, linting, tests, security scans |
|
|
||||||
| From development | Only PRs from development branch |
|
|
||||||
| Clean history | Squashed or rebased commits |
|
|
||||||
| Release notes | CHANGELOG updated with version |
|
|
||||||
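For the "clean history" requirement, squashing is one option; whether you squash locally or rely on the forge's squash-merge setting is a team choice. A sketch with an illustrative branch name:

```bash
# Option A: squash-merge a feature so development carries one commit per change
git checkout development
git merge --squash feat/add-user-auth
git commit -m "feat(auth): add user authentication"

# Option B: interactively squash commits on the feature branch before merging
git checkout feat/add-user-auth
git rebase -i origin/development
```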
|
|
||||||
### To main (Protected)
|
|
||||||
|
|
||||||
| Requirement | Description |
|
|
||||||
|-------------|-------------|
|
|
||||||
| Source restriction | PRs only from `staging` or `development` |
|
|
||||||
| All checks pass | Complete CI pipeline success |
|
|
||||||
| Approvals | Minimum 1-2 reviewer approvals |
|
|
||||||
| No direct push | Force push disabled |
|
|
||||||
| Version tag | Must include version tag on merge |
|
|
||||||
|
|
||||||
## Branch Flow Diagram
|
|
||||||
|
|
||||||
### Complete Lifecycle
|
|
||||||
|
|
||||||
```mermaid
|
|
||||||
flowchart TD
|
|
||||||
subgraph Feature["Feature Development"]
|
|
||||||
A[Create feature branch] --> B[Implement changes]
|
|
||||||
B --> C[Push commits]
|
|
||||||
C --> D[Create PR to development]
|
|
||||||
D --> E{Review passed?}
|
|
||||||
E -->|No| B
|
|
||||||
E -->|Yes| F[Merge to development]
|
|
||||||
F --> G[Delete feature branch]
|
|
||||||
end
|
|
||||||
|
|
||||||
subgraph Release["Release Cycle"]
|
|
||||||
H[development stable] --> I[Create PR to staging]
|
|
||||||
I --> J[Test in staging]
|
|
||||||
J --> K{Tests passed?}
|
|
||||||
K -->|No| L[Fix in development]
|
|
||||||
L --> H
|
|
||||||
K -->|Yes| M[Create PR to main]
|
|
||||||
M --> N[Tag release]
|
|
||||||
N --> O[Sync development]
|
|
||||||
end
|
|
||||||
|
|
||||||
subgraph Hotfix["Hotfix Process"]
|
|
||||||
P[Critical bug in prod] --> Q[Create hotfix from main]
|
|
||||||
Q --> R[Implement fix]
|
|
||||||
R --> S[Merge to main + tag]
|
|
||||||
S --> T[Merge to development]
|
|
||||||
end
|
|
||||||
|
|
||||||
G --> H
|
|
||||||
O --> Feature
|
|
||||||
T --> Feature
|
|
||||||
```
|
|
||||||
|
|
||||||
### Release Cycle Timeline
|
|
||||||
|
|
||||||
```mermaid
|
|
||||||
gantt
|
|
||||||
title Release Cycle
|
|
||||||
dateFormat YYYY-MM-DD
|
|
||||||
section Feature Development
|
|
||||||
Feature A :a1, 2024-01-01, 5d
|
|
||||||
Feature B :a2, 2024-01-03, 4d
|
|
||||||
Bug Fix :a3, 2024-01-06, 2d
|
|
||||||
section Integration
|
|
||||||
Merge to development:b1, after a1, 1d
|
|
||||||
Integration testing :b2, after b1, 2d
|
|
||||||
section Release
|
|
||||||
Promote to staging :c1, after b2, 1d
|
|
||||||
QA testing :c2, after c1, 3d
|
|
||||||
Release to main :milestone, after c2, 0d
|
|
||||||
```
|
|
||||||
|
|
||||||
## Examples
|
|
||||||
|
|
||||||
### Example 1: Adding a New Feature
|
|
||||||
|
|
||||||
**Scenario:** Add user password reset functionality
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# 1. Start from development
|
|
||||||
git checkout development
|
|
||||||
git pull origin development
|
|
||||||
|
|
||||||
# 2. Create feature branch
|
|
||||||
git checkout -b feat/password-reset
|
|
||||||
|
|
||||||
# 3. Make changes and commit
|
|
||||||
git add src/auth/password-reset.ts
|
|
||||||
git commit -m "feat(auth): add password reset request handler"
|
|
||||||
|
|
||||||
git add src/email/templates/reset-email.html
|
|
||||||
git commit -m "feat(email): add password reset email template"
|
|
||||||
|
|
||||||
git add tests/auth/password-reset.test.ts
|
|
||||||
git commit -m "test(auth): add password reset tests"
|
|
||||||
|
|
||||||
# 4. Push and create PR
|
|
||||||
git push -u origin feat/password-reset
|
|
||||||
# Create PR targeting development
|
|
||||||
|
|
||||||
# 5. After merge, cleanup
|
|
||||||
git checkout development
|
|
||||||
git pull origin development
|
|
||||||
git branch -d feat/password-reset
|
|
||||||
```
|
|
||||||
|
|
||||||
### Example 2: Bug Fix
|
|
||||||
|
|
||||||
**Scenario:** Fix login timeout issue
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# 1. Create fix branch
|
|
||||||
git checkout development
|
|
||||||
git pull origin development
|
|
||||||
git checkout -b fix/login-timeout
|
|
||||||
|
|
||||||
# 2. Fix and commit
|
|
||||||
git add src/auth/session.ts
|
|
||||||
git commit -m "fix(auth): increase session timeout to 30 minutes
|
|
||||||
|
|
||||||
The default 5-minute timeout was causing frequent re-authentication.
|
|
||||||
Extended to 30 minutes based on user feedback.
|
|
||||||
|
|
||||||
Closes #456"
|
|
||||||
|
|
||||||
# 3. Push and create PR
|
|
||||||
git push -u origin fix/login-timeout
|
|
||||||
```
|
|
||||||
|
|
||||||
### Example 3: Documentation Update
|
|
||||||
|
|
||||||
**Scenario:** Update API documentation
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# 1. Create docs branch
|
|
||||||
git checkout development
|
|
||||||
git checkout -b docs/api-authentication
|
|
||||||
|
|
||||||
# 2. Update docs
|
|
||||||
git add docs/api/authentication.md
|
|
||||||
git commit -m "docs(api): document authentication endpoints
|
|
||||||
|
|
||||||
Add detailed documentation for:
|
|
||||||
- POST /auth/login
|
|
||||||
- POST /auth/logout
|
|
||||||
- POST /auth/refresh
|
|
||||||
- GET /auth/me"
|
|
||||||
|
|
||||||
# 3. Push and create PR
|
|
||||||
git push -u origin docs/api-authentication
|
|
||||||
```
|
|
||||||
|
|
||||||
### Example 4: Emergency Hotfix
|
|
||||||
|
|
||||||
**Scenario:** Critical security vulnerability in production
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# 1. Create hotfix from main
|
|
||||||
git checkout main
|
|
||||||
git pull origin main
|
|
||||||
git checkout -b hotfix/sql-injection
|
|
||||||
|
|
||||||
# 2. Fix the issue
|
|
||||||
git add src/db/queries.ts
|
|
||||||
git commit -m "fix(security): sanitize SQL query parameters
|
|
||||||
|
|
||||||
CRITICAL: Addresses SQL injection vulnerability in user search.
|
|
||||||
All user inputs are now properly parameterized.
|
|
||||||
|
|
||||||
CVE: CVE-2024-XXXXX"
|
|
||||||
|
|
||||||
# 3. Merge to main with tag
|
|
||||||
git checkout main
|
|
||||||
git merge hotfix/sql-injection
|
|
||||||
git tag -a v2.1.1 -m "Hotfix: SQL injection vulnerability"
|
|
||||||
git push origin main --tags
|
|
||||||
|
|
||||||
# 4. Sync to development (IMPORTANT!)
|
|
||||||
git checkout development
|
|
||||||
git merge main
|
|
||||||
git push origin development
|
|
||||||
|
|
||||||
# 5. Cleanup
|
|
||||||
git branch -d hotfix/sql-injection
|
|
||||||
```
|
|
||||||
|
|
||||||
### Example 5: Release Promotion
|
|
||||||
|
|
||||||
**Scenario:** Release version 2.2.0
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# 1. Ensure development is ready
|
|
||||||
git checkout development
|
|
||||||
git pull origin development
|
|
||||||
# Run full test suite
|
|
||||||
npm test
|
|
||||||
|
|
||||||
# 2. Update version and changelog
|
|
||||||
# Edit package.json, CHANGELOG.md
|
|
||||||
git add package.json CHANGELOG.md
|
|
||||||
git commit -m "chore(release): prepare v2.2.0"
|
|
||||||
|
|
||||||
# 3. Promote to staging
|
|
||||||
git checkout staging
|
|
||||||
git merge development
|
|
||||||
git push origin staging
|
|
||||||
|
|
||||||
# 4. After QA approval, promote to main
|
|
||||||
git checkout main
|
|
||||||
git merge staging
|
|
||||||
git tag -a v2.2.0 -m "Release v2.2.0"
|
|
||||||
git push origin main --tags
|
|
||||||
|
|
||||||
# 5. Sync development
|
|
||||||
git checkout development
|
|
||||||
git merge main
|
|
||||||
git push origin development
|
|
||||||
```
|
|
||||||
|
|
||||||
## Configuration
|
|
||||||
|
|
||||||
### Environment Variables
|
|
||||||
|
|
||||||
Configure the branching strategy via environment variables:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Default base branch for new features
|
|
||||||
GIT_DEFAULT_BASE=development
|
|
||||||
|
|
||||||
# Protected branches (comma-separated)
|
|
||||||
GIT_PROTECTED_BRANCHES=main,master,development,staging,production
|
|
||||||
|
|
||||||
# Workflow style
|
|
||||||
GIT_WORKFLOW_STYLE=feature-branch
|
|
||||||
|
|
||||||
# Auto-delete merged branches
|
|
||||||
GIT_AUTO_DELETE_MERGED=true
|
|
||||||
```
|
|
||||||
|
|
||||||
### Per-Project Configuration
|
|
||||||
|
|
||||||
Create `.git-flow.json` in project root:
|
|
||||||
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"defaultBase": "development",
|
|
||||||
"protectedBranches": ["main", "staging", "development"],
|
|
||||||
"branchPrefixes": ["feat", "fix", "docs", "refactor", "test", "chore"],
|
|
||||||
"requirePR": {
|
|
||||||
"main": true,
|
|
||||||
"staging": true,
|
|
||||||
"development": false
|
|
||||||
},
|
|
||||||
"squashOnMerge": {
|
|
||||||
"development": true,
|
|
||||||
"staging": false,
|
|
||||||
"main": false
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
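Scripts and hooks can read these settings with any JSON tool; a minimal sketch using jq (how git-flow itself loads the file may differ):

```bash
# Read per-project settings from .git-flow.json (assumes jq is installed)
DEFAULT_BASE=$(jq -r '.defaultBase' .git-flow.json)
PROTECTED=$(jq -r '.protectedBranches | join(",")' .git-flow.json)
echo "Base branch: $DEFAULT_BASE"
echo "Protected branches: $PROTECTED"
```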
|
|
||||||
## Best Practices
|
|
||||||
|
|
||||||
### Do
|
|
||||||
|
|
||||||
- Keep feature branches short-lived (< 1 week)
|
|
||||||
- Rebase feature branches on development regularly
|
|
||||||
- Write descriptive commit messages
|
|
||||||
- Delete branches after merging
|
|
||||||
- Tag all releases on main
|
|
||||||
- Always sync development after hotfixes
|
|
||||||
|
|
||||||
### Avoid
|
|
||||||
|
|
||||||
- Long-lived feature branches
|
|
||||||
- Direct commits to protected branches
|
|
||||||
- Force pushing to shared branches
|
|
||||||
- Merging untested code to staging
|
|
||||||
- Skipping the development sync after hotfixes
|
|
||||||
|
|
||||||
## Related Documentation
|
|
||||||
|
|
||||||
- [Branching Strategies Skill](/plugins/git-flow/skills/workflow-patterns/branching-strategies.md)
|
|
||||||
- [git-flow README](/plugins/git-flow/README.md)
|
|
||||||
- [Conventional Commits](https://www.conventionalcommits.org/)
|
|
||||||
@@ -1,102 +0,0 @@
|
|||||||
#!/bin/bash
|
|
||||||
# git-flow branch name validation hook
|
|
||||||
# Validates branch names follow the convention: <type>/<description>
|
|
||||||
# Command hook - guaranteed predictable behavior
|
|
||||||
|
|
||||||
# Read tool input from stdin (JSON format)
|
|
||||||
INPUT=$(cat)
|
|
||||||
|
|
||||||
# Extract command from JSON input
|
|
||||||
# The Bash tool sends {"command": "..."} format
|
|
||||||
COMMAND=$(echo "$INPUT" | grep -o '"command"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1 | sed 's/.*"command"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/')
|
|
||||||
|
|
||||||
# If no command found, exit silently (allow)
|
|
||||||
if [ -z "$COMMAND" ]; then
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Check if this is a branch creation command
|
|
||||||
# Patterns: git checkout -b, git branch (without -d/-D), git switch -c/-C
|
|
||||||
IS_BRANCH_CREATE=false
|
|
||||||
BRANCH_NAME=""
|
|
||||||
|
|
||||||
# git checkout -b/-B <branch>
|
|
||||||
if echo "$COMMAND" | grep -qE 'git\s+checkout\s+(-b|-B)\s+'; then
|
|
||||||
IS_BRANCH_CREATE=true
|
|
||||||
BRANCH_NAME=$(echo "$COMMAND" | sed -n 's/.*git\s\+checkout\s\+\(-b\|-B\)\s\+\([^ ]*\).*/\2/p')
|
|
||||||
fi
|
|
||||||
|
|
||||||
# git switch -c/-C <branch>
|
|
||||||
if echo "$COMMAND" | grep -qE 'git\s+switch\s+(-c|-C|--create|--force-create)\s+'; then
|
|
||||||
IS_BRANCH_CREATE=true
|
|
||||||
BRANCH_NAME=$(echo "$COMMAND" | sed -n 's/.*git\s\+switch\s\+\(-c\|-C\|--create\|--force-create\)\s\+\([^ ]*\).*/\2/p')
|
|
||||||
fi
|
|
||||||
|
|
||||||
# git branch <name> (without -d/-D/-m/-M which are delete/rename)
|
|
||||||
if echo "$COMMAND" | grep -qE 'git\s+branch\s+[^-]' && ! echo "$COMMAND" | grep -qE 'git\s+branch\s+(-d|-D|-m|-M|--delete|--move|--list|--show-current)'; then
|
|
||||||
IS_BRANCH_CREATE=true
|
|
||||||
BRANCH_NAME=$(echo "$COMMAND" | sed -n 's/.*git\s\+branch\s\+\([^ -][^ ]*\).*/\1/p')
|
|
||||||
fi
|
|
||||||
|
|
||||||
# If not a branch creation command, exit silently (allow)
|
|
||||||
if [ "$IS_BRANCH_CREATE" = false ]; then
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
# If we couldn't extract the branch name, exit silently (allow)
|
|
||||||
if [ -z "$BRANCH_NAME" ]; then
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Remove any quotes from branch name
|
|
||||||
BRANCH_NAME=$(echo "$BRANCH_NAME" | tr -d '"' | tr -d "'")
|
|
||||||
|
|
||||||
# Skip validation for special branches
|
|
||||||
case "$BRANCH_NAME" in
|
|
||||||
main|master|develop|development|staging|release|hotfix)
|
|
||||||
exit 0
|
|
||||||
;;
|
|
||||||
esac
|
|
||||||
|
|
||||||
# Allowed branch types
|
|
||||||
VALID_TYPES="feat|fix|chore|docs|refactor|test|perf|debug"
|
|
||||||
|
|
||||||
# Validate branch name format: <type>/<description>
|
|
||||||
# Description: lowercase letters, numbers, hyphens only, max 50 chars total
|
|
||||||
if ! echo "$BRANCH_NAME" | grep -qE "^($VALID_TYPES)/[a-z0-9][a-z0-9-]*$"; then
|
|
||||||
echo ""
|
|
||||||
echo "[git-flow] Branch name validation failed"
|
|
||||||
echo ""
|
|
||||||
echo "Branch: $BRANCH_NAME"
|
|
||||||
echo ""
|
|
||||||
echo "Expected format: <type>/<description>"
|
|
||||||
echo ""
|
|
||||||
echo "Valid types: feat, fix, chore, docs, refactor, test, perf, debug"
|
|
||||||
echo ""
|
|
||||||
echo "Description rules:"
|
|
||||||
echo " - Lowercase letters, numbers, and hyphens only"
|
|
||||||
echo " - Must start with letter or number"
|
|
||||||
echo " - No spaces or special characters"
|
|
||||||
echo ""
|
|
||||||
echo "Examples:"
|
|
||||||
echo " feat/add-user-auth"
|
|
||||||
echo " fix/login-timeout"
|
|
||||||
echo " chore/update-deps"
|
|
||||||
echo " docs/api-reference"
|
|
||||||
echo ""
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Check total length (max 50 chars)
|
|
||||||
if [ ${#BRANCH_NAME} -gt 50 ]; then
|
|
||||||
echo ""
|
|
||||||
echo "[git-flow] Branch name too long"
|
|
||||||
echo ""
|
|
||||||
echo "Branch: $BRANCH_NAME (${#BRANCH_NAME} chars)"
|
|
||||||
echo "Maximum: 50 characters"
|
|
||||||
echo ""
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Valid branch name
|
|
||||||
exit 0
|
|
||||||
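Because the hook reads the Bash tool's JSON payload from stdin, it can be exercised manually. A sketch, assuming the script is saved as `hooks/branch-check.sh`, is executable, and GNU grep/sed are available:

```bash
# Should pass (exit 0): valid <type>/<description> branch name
echo '{"command": "git checkout -b feat/add-user-auth"}' | ./hooks/branch-check.sh; echo "exit: $?"

# Should fail (exit 1): name does not match <type>/<description>
echo '{"command": "git checkout -b MyFeature_Branch"}' | ./hooks/branch-check.sh; echo "exit: $?"
```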
@@ -1,74 +0,0 @@
|
|||||||
#!/bin/bash
|
|
||||||
# git-flow commit message validation hook
|
|
||||||
# Validates git commit messages follow conventional commit format
|
|
||||||
# PreToolUse hook for Bash commands - type: command
|
|
||||||
|
|
||||||
# Read tool input from stdin
|
|
||||||
INPUT=$(cat)
|
|
||||||
|
|
||||||
# Use Python to properly parse JSON and extract the command
|
|
||||||
COMMAND=$(echo "$INPUT" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('command',''))" 2>/dev/null)
|
|
||||||
|
|
||||||
# If no command or python failed, allow through
|
|
||||||
if [ -z "$COMMAND" ]; then
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Check if it is a git commit command with -m flag
|
|
||||||
if ! echo "$COMMAND" | grep -qE 'git\s+commit.*-m'; then
|
|
||||||
# Not a git commit with -m, allow through
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Extract commit message - handle various quoting styles
|
|
||||||
# Try double quotes first
|
|
||||||
COMMIT_MSG=$(echo "$COMMAND" | sed -n 's/.*-m[[:space:]]*"\([^"]*\)".*/\1/p')
|
|
||||||
# If empty, try single quotes
|
|
||||||
if [ -z "$COMMIT_MSG" ]; then
|
|
||||||
COMMIT_MSG=$(echo "$COMMAND" | sed -n "s/.*-m[[:space:]]*'\\([^']*\\)'.*/\\1/p")
|
|
||||||
fi
|
|
||||||
# If still empty, try HEREDOC pattern
|
|
||||||
if [ -z "$COMMIT_MSG" ]; then
|
|
||||||
if echo "$COMMAND" | grep -qE -- '-m[[:space:]]+"\$\(cat <<'; then
|
|
||||||
# HEREDOC pattern - too complex to parse, allow through
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
|
|
||||||
# If no message extracted, allow through
|
|
||||||
if [ -z "$COMMIT_MSG" ]; then
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Validate conventional commit format
|
|
||||||
# Format: <type>(<scope>): <description>
|
|
||||||
# or: <type>: <description>
|
|
||||||
# Valid types: feat, fix, docs, style, refactor, perf, test, chore, build, ci
|
|
||||||
|
|
||||||
VALID_TYPES="feat|fix|docs|style|refactor|perf|test|chore|build|ci"
|
|
||||||
|
|
||||||
# Check if message matches conventional commit format
|
|
||||||
if echo "$COMMIT_MSG" | grep -qE "^($VALID_TYPES)(\([a-zA-Z0-9_-]+\))?:[[:space:]]+.+"; then
|
|
||||||
# Valid format
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Invalid format - output warning
|
|
||||||
echo "[git-flow] WARNING: Commit message does not follow conventional commit format"
|
|
||||||
echo ""
|
|
||||||
echo "Expected format: <type>(<scope>): <description>"
|
|
||||||
echo " or: <type>: <description>"
|
|
||||||
echo ""
|
|
||||||
echo "Valid types: feat, fix, docs, style, refactor, perf, test, chore, build, ci"
|
|
||||||
echo ""
|
|
||||||
echo "Examples:"
|
|
||||||
echo " feat(auth): add password reset functionality"
|
|
||||||
echo " fix: resolve login timeout issue"
|
|
||||||
echo " docs(readme): update installation instructions"
|
|
||||||
echo ""
|
|
||||||
echo "Your message: $COMMIT_MSG"
|
|
||||||
echo ""
|
|
||||||
echo "To proceed anyway, use /commit command which auto-generates valid messages."
|
|
||||||
|
|
||||||
# Exit with non-zero to block
|
|
||||||
exit 1
|
|
||||||
@@ -1,19 +0,0 @@
|
|||||||
{
|
|
||||||
"hooks": {
|
|
||||||
"PreToolUse": [
|
|
||||||
{
|
|
||||||
"matcher": "Bash",
|
|
||||||
"hooks": [
|
|
||||||
{
|
|
||||||
"type": "command",
|
|
||||||
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/branch-check.sh"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"type": "command",
|
|
||||||
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/commit-msg-check.sh"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
}
|
|
||||||
@@ -13,7 +13,6 @@ pr-review conducts comprehensive code reviews using specialized agents for secur
|
|||||||
| `/pr-review <pr#>` | Full multi-agent review |
|
| `/pr-review <pr#>` | Full multi-agent review |
|
||||||
| `/pr-summary <pr#>` | Quick summary without full review |
|
| `/pr-summary <pr#>` | Quick summary without full review |
|
||||||
| `/pr-findings <pr#>` | Filter findings by category/confidence |
|
| `/pr-findings <pr#>` | Filter findings by category/confidence |
|
||||||
| `/pr-diff <pr#>` | View diff with inline comment annotations |
|
|
||||||
| `/initial-setup` | Full interactive setup wizard |
|
| `/initial-setup` | Full interactive setup wizard |
|
||||||
| `/project-init` | Quick project setup (system already configured) |
|
| `/project-init` | Quick project setup (system already configured) |
|
||||||
| `/project-sync` | Sync configuration with current git remote |
|
| `/project-sync` | Sync configuration with current git remote |
|
||||||
@@ -52,38 +51,14 @@ Requires Gitea MCP server configuration.
|
|||||||
|
|
||||||
## Configuration
|
## Configuration
|
||||||
|
|
||||||
Environment variables can be set in your project's `.env` file or shell environment.
|
|
||||||
|
|
||||||
| Variable | Default | Description |
|
|
||||||
|----------|---------|-------------|
|
|
||||||
| `PR_REVIEW_CONFIDENCE_THRESHOLD` | `0.7` | Minimum confidence score (0.0-1.0) for reporting findings. Findings below this threshold are filtered out to reduce noise. |
|
|
||||||
| `PR_REVIEW_AUTO_SUBMIT` | `false` | Automatically submit review to Gitea without confirmation prompt |
|
|
||||||
|
|
||||||
### Example Configuration
|
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
# Project .env file
|
# Minimum confidence to report (default: 0.5)
|
||||||
|
PR_REVIEW_CONFIDENCE_THRESHOLD=0.5
|
||||||
# Only show high-confidence findings (MEDIUM and HIGH)
|
|
||||||
PR_REVIEW_CONFIDENCE_THRESHOLD=0.7
|
|
||||||
|
|
||||||
# Auto-submit review to Gitea (default: false)
|
# Auto-submit review to Gitea (default: false)
|
||||||
PR_REVIEW_AUTO_SUBMIT=false
|
PR_REVIEW_AUTO_SUBMIT=false
|
||||||
```
|
```
|
||||||
|
|
||||||
### Confidence Threshold Details
|
|
||||||
|
|
||||||
The confidence threshold filters which findings appear in review output:
|
|
||||||
|
|
||||||
| Threshold | Effect |
|
|
||||||
|-----------|--------|
|
|
||||||
| `0.9` | Only definite issues (HIGH confidence) |
|
|
||||||
| `0.7` | Likely issues and above (MEDIUM+HIGH) - **recommended** |
|
|
||||||
| `0.5` | Include possible concerns (LOW+MEDIUM+HIGH) |
|
|
||||||
| `0.3` | Include speculative findings |
|
|
||||||
|
|
||||||
Lower thresholds show more findings but may include false positives. Higher thresholds reduce noise but may miss some valid concerns.
|
|
||||||
|
|
||||||
## Usage Examples
|
## Usage Examples
|
||||||
|
|
||||||
### Full Review
|
### Full Review
|
||||||
|
|||||||
@@ -120,13 +120,10 @@ Focus on findings that:
|
|||||||
|
|
||||||
### Respect Confidence Thresholds
|
### Respect Confidence Thresholds
|
||||||
|
|
||||||
Filter findings based on `PR_REVIEW_CONFIDENCE_THRESHOLD` (default: 0.7). Be transparent about uncertainty:
|
Never report findings below 0.5 confidence. Be transparent about uncertainty:
|
||||||
- 0.9+ → "This is definitely an issue" (HIGH)
|
- 0.9+ → "This is definitely an issue"
|
||||||
- 0.7-0.89 → "This is likely an issue" (MEDIUM)
|
- 0.7-0.89 → "This is likely an issue"
|
||||||
- 0.5-0.69 → "This might be an issue" (LOW)
|
- 0.5-0.69 → "This might be an issue"
|
||||||
- < threshold → Filtered from output
|
|
||||||
|
|
||||||
With the default threshold of 0.7, only MEDIUM and HIGH confidence findings are reported.
|
|
||||||
|
|
||||||
### Avoid Noise
|
### Avoid Noise
|
||||||
|
|
||||||
|
|||||||
@@ -15,7 +15,6 @@ This project uses the pr-review plugin for automated code review.
|
|||||||
| `/pr-review <pr#>` | Full multi-agent review |
|
| `/pr-review <pr#>` | Full multi-agent review |
|
||||||
| `/pr-summary <pr#>` | Quick change summary |
|
| `/pr-summary <pr#>` | Quick change summary |
|
||||||
| `/pr-findings <pr#>` | Filter review findings |
|
| `/pr-findings <pr#>` | Filter review findings |
|
||||||
| `/pr-diff <pr#>` | View diff with inline comments |
|
|
||||||
|
|
||||||
### Review Categories
|
### Review Categories
|
||||||
|
|
||||||
@@ -27,16 +26,11 @@ Reviews analyze:
|
|||||||
|
|
||||||
### Confidence Threshold
|
### Confidence Threshold
|
||||||
|
|
||||||
Configure via `PR_REVIEW_CONFIDENCE_THRESHOLD` (default: 0.7).
|
Findings below 0.5 confidence are suppressed.
|
||||||
|
|
||||||
| Range | Label | Action |
|
- HIGH (0.9+): Definite issue
|
||||||
|-------|-------|--------|
|
- MEDIUM (0.7-0.89): Likely issue
|
||||||
| 0.9 - 1.0 | HIGH | Must address |
|
- LOW (0.5-0.69): Possible concern
|
||||||
| 0.7 - 0.89 | MEDIUM | Should address |
|
|
||||||
| 0.5 - 0.69 | LOW | Consider addressing |
|
|
||||||
| < threshold | (filtered) | Not reported |
|
|
||||||
|
|
||||||
With default threshold of 0.7, only MEDIUM and HIGH findings are shown.
|
|
||||||
|
|
||||||
### Verdict Rules
|
### Verdict Rules
|
||||||
|
|
||||||
|
|||||||
@@ -1,154 +0,0 @@
|
|||||||
# /pr-diff - Annotated PR Diff Viewer
|
|
||||||
|
|
||||||
## Purpose
|
|
||||||
|
|
||||||
Display the PR diff with inline annotations from review comments, making it easy to see what feedback has been given alongside the code changes.
|
|
||||||
|
|
||||||
## Usage
|
|
||||||
|
|
||||||
```
|
|
||||||
/pr-diff <pr-number> [--repo owner/repo] [--context <lines>]
|
|
||||||
```
|
|
||||||
|
|
||||||
### Options
|
|
||||||
|
|
||||||
```
|
|
||||||
--repo <owner/repo> Override repository (default: from .env)
|
|
||||||
--context <n> Lines of context around changes (default: 3)
|
|
||||||
--no-comments Show diff without comment annotations
|
|
||||||
--file <pattern> Filter to specific files (glob pattern)
|
|
||||||
```
|
|
||||||
|
|
||||||
## Behavior
|
|
||||||
|
|
||||||
### Step 1: Fetch PR Data
|
|
||||||
|
|
||||||
Using Gitea MCP tools:
|
|
||||||
1. `get_pr_diff` - Unified diff of all changes
|
|
||||||
2. `get_pr_comments` - All review comments on the PR
|
|
||||||
|
|
||||||
### Step 2: Parse and Annotate
|
|
||||||
|
|
||||||
Parse the diff and overlay comments at their respective file/line positions:
|
|
||||||
|
|
||||||
```
|
|
||||||
═══════════════════════════════════════════════════
|
|
||||||
PR #123 Diff - Add user authentication
|
|
||||||
═══════════════════════════════════════════════════
|
|
||||||
|
|
||||||
Branch: feat/user-auth → development
|
|
||||||
Files: 12 changed (+234 / -45)
|
|
||||||
|
|
||||||
───────────────────────────────────────────────────
|
|
||||||
src/api/users.ts (+85 / -12)
|
|
||||||
───────────────────────────────────────────────────
|
|
||||||
|
|
||||||
@@ -42,6 +42,15 @@ export async function getUser(id: string) {
|
|
||||||
42 │ const db = getDatabase();
|
|
||||||
43 │
|
|
||||||
44 │- const user = db.query("SELECT * FROM users WHERE id = " + id);
|
|
||||||
│ ┌─────────────────────────────────────────────────────────────
|
|
||||||
│ │ COMMENT by @reviewer (2h ago):
|
|
||||||
│ │ This is a SQL injection vulnerability. Use parameterized
|
|
||||||
│ │ queries instead: `db.query("SELECT * FROM users WHERE id = ?", [id])`
|
|
||||||
│ └─────────────────────────────────────────────────────────────
|
|
||||||
45 │+ const query = "SELECT * FROM users WHERE id = ?";
|
|
||||||
46 │+ const user = db.query(query, [id]);
|
|
||||||
47 │
|
|
||||||
48 │ if (!user) {
|
|
||||||
49 │ throw new NotFoundError("User not found");
|
|
||||||
50 │ }
|
|
||||||
|
|
||||||
@@ -78,3 +87,12 @@ export async function updateUser(id: string, data: UserInput) {
|
|
||||||
87 │+ // Validate input before update
|
|
||||||
88 │+ validateUserInput(data);
|
|
||||||
89 │+
|
|
||||||
90 │+ const result = db.query(
|
|
||||||
91 │+ "UPDATE users SET name = ?, email = ? WHERE id = ?",
|
|
||||||
92 │+ [data.name, data.email, id]
|
|
||||||
93 │+ );
|
|
||||||
│ ┌─────────────────────────────────────────────────────────────
|
|
||||||
│ │ COMMENT by @maintainer (1h ago):
|
|
||||||
│ │ Good use of parameterized query here!
|
|
||||||
│ │
|
|
||||||
│ │ REPLY by @author (30m ago):
|
|
||||||
│ │ Thanks! Applied the same pattern throughout.
|
|
||||||
│ └─────────────────────────────────────────────────────────────
|
|
||||||
|
|
||||||
───────────────────────────────────────────────────
|
|
||||||
src/components/LoginForm.tsx (+65 / -0) [NEW FILE]
|
|
||||||
───────────────────────────────────────────────────
|
|
||||||
|
|
||||||
@@ -0,0 +1,65 @@
|
|
||||||
1 │+import React, { useState } from 'react';
|
|
||||||
2 │+import { useAuth } from '../context/AuthContext';
|
|
||||||
3 │+
|
|
||||||
4 │+export function LoginForm() {
|
|
||||||
5 │+ const [email, setEmail] = useState('');
|
|
||||||
6 │+ const [password, setPassword] = useState('');
|
|
||||||
7 │+ const { login } = useAuth();
|
|
||||||
|
|
||||||
... (remaining diff content)
|
|
||||||
|
|
||||||
═══════════════════════════════════════════════════
|
|
||||||
Comment Summary: 5 comments, 2 resolved
|
|
||||||
═══════════════════════════════════════════════════
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 3: Filter by Confidence (Optional)
|
|
||||||
|
|
||||||
If `PR_REVIEW_CONFIDENCE_THRESHOLD` is set, also annotate with high-confidence findings from previous reviews:
|
|
||||||
|
|
||||||
```
|
|
||||||
44 │- const user = db.query("SELECT * FROM users WHERE id = " + id);
|
|
||||||
│ ┌─── REVIEW FINDING (0.95 HIGH) ─────────────────────────────
|
|
||||||
│ │ [SEC-001] SQL Injection Vulnerability
|
|
||||||
│ │ Use parameterized queries to prevent injection attacks.
|
|
||||||
│ └─────────────────────────────────────────────────────────────
|
|
||||||
│ ┌─── COMMENT by @reviewer ────────────────────────────────────
|
|
||||||
│ │ This is a SQL injection vulnerability...
|
|
||||||
│ └─────────────────────────────────────────────────────────────
|
|
||||||
```
|
|
||||||
|
|
||||||
## Output Formats
|
|
||||||
|
|
||||||
### Default (Annotated Diff)
|
|
||||||
|
|
||||||
Full diff with inline comments as shown above.
|
|
||||||
|
|
||||||
### Plain (--no-comments)
|
|
||||||
|
|
||||||
```
|
|
||||||
/pr-diff 123 --no-comments
|
|
||||||
|
|
||||||
# Standard unified diff output without annotations
|
|
||||||
```
|
|
||||||
|
|
||||||
### File Filter (--file)
|
|
||||||
|
|
||||||
```
|
|
||||||
/pr-diff 123 --file "src/api/*"
|
|
||||||
|
|
||||||
# Shows diff only for files matching pattern
|
|
||||||
```
|
|
||||||
|
|
||||||
## Use Cases
|
|
||||||
|
|
||||||
- **Review preparation**: See the full context of changes with existing feedback
|
|
||||||
- **Followup work**: Understand what was commented on and where
|
|
||||||
- **Discussion context**: View threaded conversations alongside the code
|
|
||||||
- **Progress tracking**: See which comments have been resolved
|
|
||||||
|
|
||||||
## Configuration
|
|
||||||
|
|
||||||
| Variable | Default | Description |
|
|
||||||
|----------|---------|-------------|
|
|
||||||
| `PR_REVIEW_CONFIDENCE_THRESHOLD` | `0.7` | Minimum confidence for showing review findings |
|
|
||||||
|
|
||||||
## Related Commands
|
|
||||||
|
|
||||||
| Command | Purpose |
|
|
||||||
|---------|---------|
|
|
||||||
| `/pr-summary` | Quick overview without diff |
|
|
||||||
| `/pr-review` | Full multi-agent review |
|
|
||||||
| `/pr-findings` | Filter review findings by category |
|
|
||||||
@@ -46,16 +46,14 @@ Collect findings from all agents, each with:
|
|||||||
|
|
||||||
### Step 4: Filter by Confidence
|
### Step 4: Filter by Confidence
|
||||||
|
|
||||||
Filter findings based on `PR_REVIEW_CONFIDENCE_THRESHOLD` (default: 0.7):
|
Only display findings with confidence >= 0.5:
|
||||||
|
|
||||||
| Confidence | Label | Description |
|
| Confidence | Label | Description |
|
||||||
|------------|-------|-------------|
|
|------------|-------|-------------|
|
||||||
| 0.9 - 1.0 | HIGH | Definite issue, must address |
|
| 0.9 - 1.0 | HIGH | Definite issue, must address |
|
||||||
| 0.7 - 0.89 | MEDIUM | Likely issue, should address |
|
| 0.7 - 0.89 | MEDIUM | Likely issue, should address |
|
||||||
| 0.5 - 0.69 | LOW | Possible concern, consider addressing |
|
| 0.5 - 0.69 | LOW | Possible concern, consider addressing |
|
||||||
| < threshold | (filtered) | Below configured threshold |
|
| < 0.5 | (suppressed) | Too uncertain to report |
|
||||||
|
|
||||||
**Note:** With the default threshold of 0.7, only MEDIUM and HIGH confidence findings are shown. Adjust `PR_REVIEW_CONFIDENCE_THRESHOLD` to include more or fewer findings.
|
|
||||||
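As an illustration only (the agents apply this filter internally, and the `findings.jsonl` file name is hypothetical), the threshold check amounts to:

```bash
# Keep only findings at or above the configured confidence threshold
threshold="${PR_REVIEW_CONFIDENCE_THRESHOLD:-0.7}"
jq -c --argjson t "$threshold" 'select(.confidence >= $t)' findings.jsonl
```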
|
|
||||||
### Step 5: Generate Report
|
### Step 5: Generate Report
|
||||||
|
|
||||||
@@ -137,5 +135,5 @@ Full review report with:
|
|||||||
|
|
||||||
| Variable | Default | Description |
|
| Variable | Default | Description |
|
||||||
|----------|---------|-------------|
|
|----------|---------|-------------|
|
||||||
| `PR_REVIEW_CONFIDENCE_THRESHOLD` | `0.7` | Minimum confidence to report (0.0-1.0) |
|
| `PR_REVIEW_CONFIDENCE_THRESHOLD` | `0.5` | Minimum confidence to report |
|
||||||
| `PR_REVIEW_AUTO_SUBMIT` | `false` | Auto-submit to Gitea |
|
| `PR_REVIEW_AUTO_SUBMIT` | `false` | Auto-submit to Gitea |
|
||||||
|
|||||||
@@ -73,12 +73,10 @@ Base confidence by pattern:
|
|||||||
|
|
||||||
## Threshold Configuration
|
## Threshold Configuration
|
||||||
|
|
||||||
The default threshold is 0.7 (showing MEDIUM and HIGH confidence findings). This can be adjusted:
|
The default threshold is 0.5. This can be adjusted:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
PR_REVIEW_CONFIDENCE_THRESHOLD=0.9 # Only definite issues (HIGH)
|
PR_REVIEW_CONFIDENCE_THRESHOLD=0.7 # Only high-confidence
|
||||||
PR_REVIEW_CONFIDENCE_THRESHOLD=0.7 # Likely issues and above (MEDIUM+HIGH) - default
|
|
||||||
PR_REVIEW_CONFIDENCE_THRESHOLD=0.5 # Include possible concerns (LOW+)
|
|
||||||
PR_REVIEW_CONFIDENCE_THRESHOLD=0.3 # Include more speculative
|
PR_REVIEW_CONFIDENCE_THRESHOLD=0.3 # Include more speculative
|
||||||
```
|
```
|
||||||
|
|
||||||
|
|||||||
@@ -278,17 +278,6 @@ Investigate diagnostic issues and propose fixes with human approval.
|
|||||||
|
|
||||||
**When to use:** In the marketplace repo, to investigate and fix issues reported by `/debug-report`.
|
**When to use:** In the marketplace repo, to investigate and fix issues reported by `/debug-report`.
|
||||||
|
|
||||||
### `/suggest-version`
|
|
||||||
Analyze CHANGELOG and recommend semantic version bump.
|
|
||||||
|
|
||||||
**What it does:**
|
|
||||||
- Reads CHANGELOG.md `[Unreleased]` section
|
|
||||||
- Analyzes changes to determine bump type (major/minor/patch)
|
|
||||||
- Applies SemVer rules: breaking changes → major, features → minor, fixes → patch
|
|
||||||
- Returns recommended version with rationale
|
|
||||||
|
|
||||||
**When to use:** Before creating a release to determine the appropriate version number.
|
|
||||||
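A rough sketch of those SemVer rules in shell (the command's actual analysis is richer; section headings follow Keep a Changelog):

```bash
# Extract the [Unreleased] section and apply simple SemVer heuristics
unreleased=$(awk '/^## \[Unreleased\]/{flag=1; next} /^## \[/{flag=0} flag' CHANGELOG.md)

if echo "$unreleased" | grep -qiE 'breaking change'; then
  echo "major"
elif echo "$unreleased" | grep -q '^### Added'; then
  echo "minor"
else
  echo "patch"
fi
```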
|
|
||||||
## Code Quality Commands
|
## Code Quality Commands
|
||||||
|
|
||||||
The `/review` and `/test-check` commands complement the Executor agent by catching issues before work is marked complete. Run both commands before `/sprint-close` for a complete quality check.
|
The `/review` and `/test-check` commands complement the Executor agent by catching issues before work is marked complete. Run both commands before `/sprint-close` for a complete quality check.
|
||||||
|
|||||||
@@ -108,127 +108,7 @@ git branch --show-current
|
|||||||
|
|
||||||
## Your Responsibilities
|
## Your Responsibilities
|
||||||
|
|
||||||
### 1. Status Reporting (Structured Progress)
|
### 1. Implement Features Following Specs
|
||||||
|
|
||||||
**CRITICAL: Post structured progress comments for visibility.**
|
|
||||||
|
|
||||||
**Standard Progress Comment Format:**
|
|
||||||
```markdown
|
|
||||||
## Progress Update
|
|
||||||
**Status:** In Progress | Blocked | Failed
|
|
||||||
**Phase:** [current phase name]
|
|
||||||
**Tool Calls:** X (budget: Y)
|
|
||||||
|
|
||||||
### Completed
|
|
||||||
- [x] Step 1
|
|
||||||
- [x] Step 2
|
|
||||||
|
|
||||||
### In Progress
|
|
||||||
- [ ] Current step (estimated: Z more calls)
|
|
||||||
|
|
||||||
### Blockers
|
|
||||||
- None | [blocker description]
|
|
||||||
|
|
||||||
### Next
|
|
||||||
- What happens after current step
|
|
||||||
```
|
|
||||||
|
|
||||||
**When to Post Progress Comments:**
|
|
||||||
- **Immediately on starting** - Post initial status
|
|
||||||
- **Every 20-30 tool calls** - Show progress
|
|
||||||
- **On phase transitions** - Moving from implementation to testing
|
|
||||||
- **When blocked or encountering errors**
|
|
||||||
- **Before budget limit** - If approaching turn limit
|
|
||||||
|
|
||||||
**Starting Work Example:**
|
|
||||||
```
|
|
||||||
add_comment(
|
|
||||||
issue_number=45,
|
|
||||||
body="""## Progress Update
|
|
||||||
**Status:** In Progress
|
|
||||||
**Phase:** Starting
|
|
||||||
**Tool Calls:** 5 (budget: 100)
|
|
||||||
|
|
||||||
### Completed
|
|
||||||
- [x] Read issue and acceptance criteria
|
|
||||||
- [x] Created feature branch feat/45-jwt-service
|
|
||||||
|
|
||||||
### In Progress
|
|
||||||
- [ ] Implementing JWT service
|
|
||||||
|
|
||||||
### Blockers
|
|
||||||
- None
|
|
||||||
|
|
||||||
### Next
|
|
||||||
- Create auth/jwt_service.py
|
|
||||||
- Implement core token functions
|
|
||||||
"""
|
|
||||||
)
|
|
||||||
```
|
|
||||||
|
|
||||||
**Blocked Example:**
|
|
||||||
```
|
|
||||||
add_comment(
|
|
||||||
issue_number=45,
|
|
||||||
body="""## Progress Update
|
|
||||||
**Status:** Blocked
|
|
||||||
**Phase:** Testing
|
|
||||||
**Tool Calls:** 67 (budget: 100)
|
|
||||||
|
|
||||||
### Completed
|
|
||||||
- [x] Implemented jwt_service.py
|
|
||||||
- [x] Wrote unit tests
|
|
||||||
|
|
||||||
### In Progress
|
|
||||||
- [ ] Running tests - BLOCKED
|
|
||||||
|
|
||||||
### Blockers
|
|
||||||
- Missing PyJWT dependency in requirements.txt
|
|
||||||
- Need orchestrator to add dependency
|
|
||||||
|
|
||||||
### Next
|
|
||||||
- Resume after blocker resolved
|
|
||||||
"""
|
|
||||||
)
|
|
||||||
```
|
|
||||||
|
|
||||||
**Failed Example:**
|
|
||||||
```
|
|
||||||
add_comment(
|
|
||||||
issue_number=45,
|
|
||||||
body="""## Progress Update
|
|
||||||
**Status:** Failed
|
|
||||||
**Phase:** Implementation
|
|
||||||
**Tool Calls:** 89 (budget: 100)
|
|
||||||
|
|
||||||
### Completed
|
|
||||||
- [x] Created jwt_service.py
|
|
||||||
- [x] Implemented generate_token()
|
|
||||||
|
|
||||||
### In Progress
|
|
||||||
- [ ] verify_token() - FAILED
|
|
||||||
|
|
||||||
### Blockers
|
|
||||||
- Critical: Cannot decode tokens - algorithm mismatch
|
|
||||||
- Attempted: HS256, HS384, RS256
|
|
||||||
- Error: InvalidSignatureError consistently
|
|
||||||
|
|
||||||
### Next
|
|
||||||
- Needs human investigation
|
|
||||||
- Possible issue with secret key encoding
|
|
||||||
"""
|
|
||||||
)
|
|
||||||
```
|
|
||||||
|
|
||||||
**NEVER report "completed" unless:**
|
|
||||||
- All acceptance criteria are met
|
|
||||||
- Tests pass
|
|
||||||
- Code is committed and pushed
|
|
||||||
- No unresolved errors
|
|
||||||
|
|
||||||
**If you cannot complete, report failure honestly.** The orchestrator needs accurate status to coordinate effectively.
|
|
||||||
|
|
||||||
### 2. Implement Features Following Specs
|
|
||||||
|
|
||||||
**You receive:**
|
**You receive:**
|
||||||
- Issue number and description
|
- Issue number and description
|
||||||
@@ -242,7 +122,7 @@ add_comment(
|
|||||||
- Proper error handling
|
- Proper error handling
|
||||||
- Edge case coverage
|
- Edge case coverage
|
||||||
|
|
||||||
### 3. Follow Best Practices
|
### 2. Follow Best Practices
|
||||||
|
|
||||||
**Code Quality Standards:**
|
**Code Quality Standards:**
|
||||||
|
|
||||||
@@ -270,7 +150,7 @@ add_comment(
|
|||||||
- Handle errors gracefully
|
- Handle errors gracefully
|
||||||
- Follow OWASP guidelines
|
- Follow OWASP guidelines
|
||||||
|
|
||||||
### 4. Handle Edge Cases
|
### 3. Handle Edge Cases
|
||||||
|
|
||||||
Always consider:
|
Always consider:
|
||||||
- What if input is None/null/undefined?
|
- What if input is None/null/undefined?
|
||||||
@@ -280,7 +160,7 @@ Always consider:
|
|||||||
- What if user doesn't have permission?
|
- What if user doesn't have permission?
|
||||||
- What if resource doesn't exist?
|
- What if resource doesn't exist?
|
||||||
|
|
||||||
### 5. Apply Lessons Learned
|
### 4. Apply Lessons Learned
|
||||||
|
|
||||||
Reference relevant lessons in your implementation:
|
Reference relevant lessons in your implementation:
|
||||||
|
|
||||||
@@ -299,7 +179,7 @@ def test_verify_expired_token(jwt_service):
|
|||||||
...
|
...
|
||||||
```
|
```
|
||||||
|
|
||||||
### 6. Create Merge Requests (When Branch Protected)
|
### 5. Create Merge Requests (When Branch Protected)
|
||||||
|
|
||||||
**MR Body Template - NO SUBTASKS:**
|
**MR Body Template - NO SUBTASKS:**
|
||||||
|
|
||||||
@@ -328,7 +208,7 @@ Closes #45
|
|||||||
|
|
||||||
The issue already tracks subtasks. MR body should be summary only.
|
The issue already tracks subtasks. MR body should be summary only.
|
||||||
|
|
||||||
### 7. Auto-Close Issues via Commit Messages
|
### 6. Auto-Close Issues via Commit Messages
|
||||||
|
|
||||||
**Always include closing keywords in commits:**
|
**Always include closing keywords in commits:**
|
||||||
|
|
||||||
@@ -349,7 +229,7 @@ Closes #45"
|
|||||||
|
|
||||||
This ensures issues auto-close when MR is merged.
|
This ensures issues auto-close when MR is merged.
|
||||||
|
|
||||||
### 8. Generate Completion Reports
|
### 7. Generate Completion Reports
|
||||||
|
|
||||||
After implementation, provide a concise completion report:
|
After implementation, provide a concise completion report:
|
||||||
|
|
||||||
@@ -424,185 +304,18 @@ As the executor, you interact with MCP tools for status updates:
|
|||||||
- Apply best practices
|
- Apply best practices
|
||||||
- Deliver quality work
|
- Deliver quality work
|
||||||
|
|
||||||
## Checkpointing (Save Progress for Resume)
|
|
||||||
|
|
||||||
**CRITICAL: Save checkpoints so work can be resumed if interrupted.**
|
|
||||||
|
|
||||||
**Checkpoint Comment Format:**
|
|
||||||
```markdown
|
|
||||||
## Checkpoint
|
|
||||||
**Branch:** feat/45-jwt-service
|
|
||||||
**Commit:** abc123 (or "uncommitted")
|
|
||||||
**Phase:** [current phase]
|
|
||||||
**Tool Calls:** 45
|
|
||||||
|
|
||||||
### Files Modified
|
|
||||||
- auth/jwt_service.py (created)
|
|
||||||
- tests/test_jwt.py (created)
|
|
||||||
|
|
||||||
### Completed Steps
|
|
||||||
- [x] Created jwt_service.py skeleton
|
|
||||||
- [x] Implemented generate_token()
|
|
||||||
- [x] Implemented verify_token()
|
|
||||||
|
|
||||||
### Pending Steps
|
|
||||||
- [ ] Write unit tests
|
|
||||||
- [ ] Add token refresh logic
|
|
||||||
- [ ] Commit and push
|
|
||||||
|
|
||||||
### State Notes
|
|
||||||
[Any important context for resumption]
|
|
||||||
```
|
|
||||||
|
|
||||||
**When to Save Checkpoints:**
|
|
||||||
- After completing each major step (every 20-30 tool calls)
|
|
||||||
- Before stopping due to budget limit
|
|
||||||
- When encountering a blocker
|
|
||||||
- After any commit
|
|
||||||
|
|
||||||
**Checkpoint Example:**
|
|
||||||
```
|
|
||||||
add_comment(
|
|
||||||
issue_number=45,
|
|
||||||
body="""## Checkpoint
|
|
||||||
**Branch:** feat/45-jwt-service
|
|
||||||
**Commit:** uncommitted (changes staged)
|
|
||||||
**Phase:** Testing
|
|
||||||
**Tool Calls:** 67
|
|
||||||
|
|
||||||
### Files Modified
|
|
||||||
- auth/jwt_service.py (created, 120 lines)
|
|
||||||
- auth/__init__.py (modified, added import)
|
|
||||||
- tests/test_jwt.py (created, 50 lines, incomplete)
|
|
||||||
|
|
||||||
### Completed Steps
|
|
||||||
- [x] Created auth/jwt_service.py
|
|
||||||
- [x] Implemented generate_token() with HS256
|
|
||||||
- [x] Implemented verify_token()
|
|
||||||
- [x] Updated auth/__init__.py exports
|
|
||||||
|
|
||||||
### Pending Steps
|
|
||||||
- [ ] Complete test_jwt.py (5 tests remaining)
|
|
||||||
- [ ] Add token refresh logic
|
|
||||||
- [ ] Commit changes
|
|
||||||
- [ ] Push to remote
|
|
||||||
|
|
||||||
### State Notes
|
|
||||||
- Using PyJWT 2.8.0
|
|
||||||
- Secret key from JWT_SECRET env var
|
|
||||||
- Tests use pytest fixtures in conftest.py
|
|
||||||
"""
|
|
||||||
)
|
|
||||||
```
|
|
||||||
|
|
||||||
**Checkpoint on Interruption:**
|
|
||||||
|
|
||||||
If you must stop (budget, failure, blocker), ALWAYS post a checkpoint FIRST.
|
|
||||||
|
|
||||||
## Runaway Detection (Self-Monitoring)
|
|
||||||
|
|
||||||
**CRITICAL: Monitor yourself to prevent infinite loops and wasted resources.**
|
|
||||||
|
|
||||||
**Self-Monitoring Checkpoints:**
|
|
||||||
|
|
||||||
| Trigger | Action |
|
|
||||||
|---------|--------|
|
|
||||||
| 10+ tool calls without progress | STOP - Post progress update, reassess approach |
|
|
||||||
| Same error 3+ times | CIRCUIT BREAKER - Stop, report failure with error pattern |
|
|
||||||
| 50+ tool calls total | POST progress update (mandatory) |
|
|
||||||
| 80+ tool calls total | WARN - Approaching budget, evaluate if completion is realistic |
|
|
||||||
| 100+ tool calls total | STOP - Save state, report incomplete with checkpoint |
|
|
||||||
|
|
||||||
**What Counts as "Progress":**
|
|
||||||
- File created or modified
|
|
||||||
- Test passing that wasn't before
|
|
||||||
- New functionality working
|
|
||||||
- Moving to next phase of work
|
|
||||||
|
|
||||||
**What Does NOT Count as Progress:**
|
|
||||||
- Reading more files
|
|
||||||
- Searching for something
|
|
||||||
- Retrying the same operation
|
|
||||||
- Adding logging/debugging
|
|
||||||
|
|
||||||
**Circuit Breaker Protocol:**
|
|
||||||
|
|
||||||
If you encounter the same error 3+ times:
|
|
||||||
```
|
|
||||||
add_comment(
|
|
||||||
issue_number=45,
|
|
||||||
body="""## Progress Update
|
|
||||||
**Status:** Failed (Circuit Breaker)
|
|
||||||
**Phase:** [phase when stopped]
|
|
||||||
**Tool Calls:** 67 (budget: 100)
|
|
||||||
|
|
||||||
### Circuit Breaker Triggered
|
|
||||||
Same error occurred 3+ times:
|
|
||||||
```
|
|
||||||
[error message]
|
|
||||||
```
|
|
||||||
|
|
||||||
### What Was Tried
|
|
||||||
1. [first attempt]
|
|
||||||
2. [second attempt]
|
|
||||||
3. [third attempt]
|
|
||||||
|
|
||||||
### Recommendation
|
|
||||||
[What human should investigate]
|
|
||||||
|
|
||||||
### Files Modified
|
|
||||||
- [list any files changed before failure]
|
|
||||||
"""
|
|
||||||
)
|
|
||||||
```
|
|
||||||
|
|
||||||
**Budget Approaching Protocol:**
|
|
||||||
|
|
||||||
At 80+ tool calls, post an update:
|
|
||||||
```
|
|
||||||
add_comment(
|
|
||||||
issue_number=45,
|
|
||||||
body="""## Progress Update
|
|
||||||
**Status:** In Progress (Budget Warning)
|
|
||||||
**Phase:** [current phase]
|
|
||||||
**Tool Calls:** 82 (budget: 100)
|
|
||||||
|
|
||||||
### Completed
|
|
||||||
- [x] [completed steps]
|
|
||||||
|
|
||||||
### Remaining
|
|
||||||
- [ ] [what's left]
|
|
||||||
|
|
||||||
### Assessment
|
|
||||||
[Realistic? Should I continue or stop and checkpoint?]
|
|
||||||
"""
|
|
||||||
)
|
|
||||||
```
|
|
||||||
|
|
||||||
**Hard Stop at 100 Calls:**
|
|
||||||
|
|
||||||
If you reach 100 tool calls:
|
|
||||||
1. STOP immediately
|
|
||||||
2. Save current state
|
|
||||||
3. Post checkpoint comment
|
|
||||||
4. Report as incomplete (not failed)
|
|
||||||
|
|
||||||
## Critical Reminders
|
## Critical Reminders
|
||||||
|
|
||||||
1. **Never use CLI tools** - Use MCP tools exclusively for Gitea
|
1. **Never use CLI tools** - Use MCP tools exclusively for Gitea
|
||||||
2. **Report status honestly** - In-Progress, Blocked, or Failed - never lie about completion
|
2. **Branch naming** - Always use `feat/`, `fix/`, or `debug/` prefix with issue number
|
||||||
3. **Blocked ≠ Failed** - Blocked means waiting for something; Failed means tried and couldn't complete
|
3. **Branch check FIRST** - Never implement on staging/production
|
||||||
4. **Self-monitor** - Watch for runaway patterns, trigger circuit breaker when stuck
|
4. **Follow specs precisely** - Respect architectural decisions
|
||||||
5. **Branch naming** - Always use `feat/`, `fix/`, or `debug/` prefix with issue number
|
5. **Apply lessons learned** - Reference in code and tests
|
||||||
6. **Branch check FIRST** - Never implement on staging/production
|
6. **Write tests** - Cover edge cases, not just happy path
|
||||||
7. **Follow specs precisely** - Respect architectural decisions
|
7. **Clean code** - Readable, maintainable, documented
|
||||||
8. **Apply lessons learned** - Reference in code and tests
|
8. **No MR subtasks** - MR body should NOT have checklists
|
||||||
9. **Write tests** - Cover edge cases, not just happy path
|
9. **Use closing keywords** - `Closes #XX` in commit messages
|
||||||
10. **Clean code** - Readable, maintainable, documented
|
10. **Report thoroughly** - Complete summary when done
|
||||||
11. **No MR subtasks** - MR body should NOT have checklists
|
|
||||||
12. **Use closing keywords** - `Closes #XX` in commit messages
|
|
||||||
13. **Report thoroughly** - Complete summary when done, including honest status
|
|
||||||
14. **Hard stop at 100 calls** - Save checkpoint and report incomplete
|
|
||||||
|
|
||||||
## Your Mission
|
## Your Mission
|
||||||
|
|
||||||
|
|||||||
@@ -57,42 +57,9 @@ curl -X POST "https://gitea.../api/..."
|
|||||||
- Coordinate Git operations (commit, merge, cleanup)
|
- Coordinate Git operations (commit, merge, cleanup)
|
||||||
- Keep sprint moving forward
|
- Keep sprint moving forward
|
||||||
|
|
||||||
## Critical: Approval Verification
|
|
||||||
|
|
||||||
**BEFORE EXECUTING**, verify sprint approval exists:
|
|
||||||
|
|
||||||
```
|
|
||||||
get_milestone(milestone_id=current_sprint)
|
|
||||||
→ Check description for "## Sprint Approval" section
|
|
||||||
```
|
|
||||||
|
|
||||||
**If No Approval:**
|
|
||||||
```
|
|
||||||
⚠️ SPRINT NOT APPROVED
|
|
||||||
|
|
||||||
This sprint has not been approved for execution.
|
|
||||||
Please run /sprint-plan to approve the sprint first.
|
|
||||||
```
|
|
||||||
|
|
||||||
**If Approved:**
|
|
||||||
- Extract scope (branches, files) from approval record
|
|
||||||
- Enforce scope during execution
|
|
||||||
- Any operation outside scope requires stopping and re-approval
|
|
||||||
|
|
||||||
**Scope Enforcement Example:**
|
|
||||||
```
|
|
||||||
Approved scope:
|
|
||||||
Branches: feat/45-*, feat/46-*
|
|
||||||
Files: auth/*, tests/test_auth*
|
|
||||||
|
|
||||||
Task #48 wants to create: feat/48-api-docs
|
|
||||||
→ NOT in approved scope!
|
|
||||||
→ STOP and ask user to approve expanded scope
|
|
||||||
```
|
|
||||||
|
|
||||||
## Critical: Branch Detection
|
## Critical: Branch Detection
|
||||||
|
|
||||||
**AFTER approval verification**, check the current git branch:
|
**BEFORE DOING ANYTHING**, check the current git branch:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
git branch --show-current
|
git branch --show-current
|
||||||
@@ -126,44 +93,7 @@ git branch --show-current
|
|||||||
|
|
||||||
**Workflow:**
|
**Workflow:**
|
||||||
|
|
||||||
**A. Fetch Sprint Issues and Detect Checkpoints**
|
**A. Fetch Sprint Issues**
|
||||||
```
|
|
||||||
list_issues(state="open", labels=["sprint-current"])
|
|
||||||
```
|
|
||||||
|
|
||||||
**For each open issue, check for checkpoint comments:**
|
|
||||||
```
|
|
||||||
get_issue(issue_number=45) # Comments included
|
|
||||||
→ Look for comments containing "## Checkpoint"
|
|
||||||
```
|
|
||||||
|
|
||||||
**If Checkpoint Found:**
|
|
||||||
```
|
|
||||||
Checkpoint Detected for #45
|
|
||||||
|
|
||||||
Found checkpoint from previous session:
|
|
||||||
Branch: feat/45-jwt-service
|
|
||||||
Phase: Testing
|
|
||||||
Tool Calls: 67
|
|
||||||
Files Modified: 3
|
|
||||||
Completed: 4/7 steps
|
|
||||||
|
|
||||||
Options:
|
|
||||||
1. Resume from checkpoint (recommended)
|
|
||||||
2. Start fresh (discard previous work)
|
|
||||||
3. Review checkpoint details first
|
|
||||||
|
|
||||||
Would you like to resume?
|
|
||||||
```
|
|
||||||
|
|
||||||
**Resume Protocol:**
|
|
||||||
1. Verify branch exists: `git branch -a | grep feat/45-jwt-service`
|
|
||||||
2. Switch to branch: `git checkout feat/45-jwt-service`
|
|
||||||
3. Verify files match checkpoint
|
|
||||||
4. Dispatch executor with checkpoint context
|
|
||||||
5. Executor continues from pending steps
|
|
||||||
|
|
||||||
**B. Fetch Sprint Issues (Standard)**
|
|
||||||
```
|
```
|
||||||
list_issues(state="open", labels=["sprint-current"])
|
list_issues(state="open", labels=["sprint-current"])
|
||||||
```
|
```
|
||||||
@@ -217,56 +147,11 @@ Relevant Lessons:
|
|||||||
Ready to start? I can dispatch multiple tasks in parallel.
|
Ready to start? I can dispatch multiple tasks in parallel.
|
||||||
```
|
```
|
||||||
|
|
||||||
### 2. File Conflict Prevention (Pre-Dispatch)
|
### 2. Parallel Task Dispatch
|
||||||
|
|
||||||
**BEFORE dispatching parallel agents, analyze file overlap.**
|
**When starting execution:**
|
||||||
|
|
||||||
**Conflict Detection Workflow:**
|
For independent tasks (same batch), spawn multiple Executor agents in parallel:
|
||||||
|
|
||||||
1. **Read each issue's checklist/body** to identify target files
|
|
||||||
2. **Build file map** for all tasks in the batch
|
|
||||||
3. **Check for overlap** - Same file in multiple tasks?
|
|
||||||
4. **Sequentialize conflicts** - Don't parallelize if same file
|
|
||||||
|
|
||||||
**Example Analysis:**
|
|
||||||
```
|
|
||||||
Analyzing Batch 1 for conflicts:
|
|
||||||
|
|
||||||
#45 - Implement JWT service
|
|
||||||
→ auth/jwt_service.py, auth/__init__.py, tests/test_jwt.py
|
|
||||||
|
|
||||||
#48 - Update API documentation
|
|
||||||
→ docs/api.md, README.md
|
|
||||||
|
|
||||||
Overlap check: NONE
|
|
||||||
Decision: Safe to parallelize ✅
|
|
||||||
```
|
|
||||||
|
|
||||||
**If Conflict Detected:**
|
|
||||||
```
|
|
||||||
Analyzing Batch 2 for conflicts:
|
|
||||||
|
|
||||||
#46 - Build login endpoint
|
|
||||||
→ api/routes/auth.py, auth/__init__.py
|
|
||||||
|
|
||||||
#49 - Add auth tests
|
|
||||||
→ tests/test_auth.py, auth/__init__.py
|
|
||||||
|
|
||||||
Overlap: auth/__init__.py ⚠️
|
|
||||||
Decision: Sequentialize - run #46 first, then #49
|
|
||||||
```
|
|
||||||
|
|
||||||
**Conflict Resolution:**
|
|
||||||
- Same file → MUST sequentialize
|
|
||||||
- Same directory → Usually safe, review file names
|
|
||||||
- Shared config → Sequentialize
|
|
||||||
- Shared test fixture → Assign different fixture files or sequentialize
|
|
||||||
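A minimal sketch of the overlap check itself (the file lists here are illustrative; in practice they come from each issue's checklist):

```bash
# Compare the target-file lists of two tasks; any common line means a conflict
printf '%s\n' auth/jwt_service.py auth/__init__.py tests/test_jwt.py | sort > /tmp/task-45-files
printf '%s\n' api/routes/auth.py auth/__init__.py | sort > /tmp/task-46-files

overlap=$(comm -12 /tmp/task-45-files /tmp/task-46-files)
if [ -n "$overlap" ]; then
  echo "Conflict on: $overlap -> run these tasks sequentially"
else
  echo "No overlap -> safe to parallelize"
fi
```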
|
|
||||||
### 3. Parallel Task Dispatch
|
|
||||||
|
|
||||||
**After conflict check passes, dispatch parallel agents:**
|
|
||||||
|
|
||||||
For independent tasks (same batch) WITH NO FILE CONFLICTS, spawn multiple Executor agents in parallel:
|
|
||||||
|
|
||||||
```
|
```
|
||||||
Dispatching Batch 1 (2 tasks in parallel):
|
Dispatching Batch 1 (2 tasks in parallel):
|
||||||
@@ -282,14 +167,6 @@ Task 2: #48 - Update API documentation
|
|||||||
Both tasks running in parallel. I'll monitor progress.
|
Both tasks running in parallel. I'll monitor progress.
|
||||||
```
|
```
|
||||||
|
|
||||||
**Branch Isolation:** Each task MUST have its own branch. Never have two agents work on the same branch.
|
|
||||||
|
|
||||||
**Sequential Merge Protocol:**
|
|
||||||
1. Wait for task to complete
|
|
||||||
2. Merge its branch to development
|
|
||||||
3. Then merge next completed task
|
|
||||||
4. Never merge simultaneously
|
|
||||||
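A sketch of this protocol in plain git (branch names are illustrative, and `--no-ff` is shown as one option rather than a requirement):

```bash
# Merge completed task branches into development one at a time
git checkout development
git pull origin development
git merge --no-ff feat/45-jwt-service
git push origin development

# Only after the first merge has been pushed:
git merge --no-ff feat/48-api-docs
git push origin development
```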
|
|
||||||
**Branch Naming Convention (MANDATORY):**
|
**Branch Naming Convention (MANDATORY):**
|
||||||
- Features: `feat/<issue-number>-<short-description>`
|
- Features: `feat/<issue-number>-<short-description>`
|
||||||
- Bug fixes: `fix/<issue-number>-<short-description>`
|
- Bug fixes: `fix/<issue-number>-<short-description>`
|
||||||
@@ -300,7 +177,7 @@ Both tasks running in parallel. I'll monitor progress.
|
|||||||
- `fix/46-login-timeout`
|
- `fix/46-login-timeout`
|
||||||
- `debug/47-investigate-memory-leak`
|
- `debug/47-investigate-memory-leak`
|
||||||
|
|
||||||
### 4. Generate Lean Execution Prompts
### 3. Generate Lean Execution Prompts

**NOT THIS (too verbose):**
```
@@ -345,127 +222,11 @@ Dependencies: None (can start immediately)
Ready to start? Say "yes" and I'll monitor progress.
```

### 5. Status Label Management
### 4. Progress Tracking

**CRITICAL: Use Status labels to communicate issue state accurately.**
**Monitor and Update:**

**When dispatching a task:**
**Add Progress Comments:**
```
update_issue(
    issue_number=45,
    labels=["Status/In-Progress", ...existing_labels]
)
```

**When task is blocked:**
```
update_issue(
    issue_number=46,
    labels=["Status/Blocked", ...existing_labels_without_in_progress]
)
add_comment(
    issue_number=46,
    body="🚫 BLOCKED: Waiting for #45 to complete (dependency)"
)
```

**When task fails:**
```
update_issue(
    issue_number=47,
    labels=["Status/Failed", ...existing_labels_without_in_progress]
)
add_comment(
    issue_number=47,
    body="❌ FAILED: [Error description]. Needs investigation."
)
```

**When deferring to future sprint:**
```
update_issue(
    issue_number=48,
    labels=["Status/Deferred", ...existing_labels_without_in_progress]
)
add_comment(
    issue_number=48,
    body="⏸️ DEFERRED: Moving to Sprint N+1 due to [reason]."
)
```

**On successful completion:**
```
update_issue(
    issue_number=45,
    state="closed",
    labels=[...existing_labels_without_status]  # Remove all Status/* labels
)
```

**Status Label Rules:**
- Only ONE Status label at a time (In-Progress, Blocked, Failed, or Deferred)
- Remove Status labels when closing successfully
- Always add comment explaining status changes

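Keeping the "one Status label at a time" rule is easier when every transition goes through one helper. A hedged sketch: `update_issue` and `add_comment` are the MCP tools shown above, but the exact way existing labels are passed in here is an assumption made for illustration.

```python
STATUS_PREFIX = "Status/"

def set_status(issue_number: int, current_labels: list[str],
               new_status: str | None, note: str | None = None) -> None:
    """Apply at most one Status/* label; None clears all status labels (e.g. on close)."""
    labels = [label for label in current_labels if not label.startswith(STATUS_PREFIX)]
    if new_status is not None:
        labels.append(f"{STATUS_PREFIX}{new_status}")
    update_issue(issue_number=issue_number, labels=labels)  # MCP tool
    if note:
        add_comment(issue_number=issue_number, body=note)   # MCP tool

# Matches the transitions above, e.g.:
# set_status(46, existing, "Blocked", "🚫 BLOCKED: Waiting for #45 to complete (dependency)")
# set_status(45, existing, None)  # successful close: remove all Status/* labels
```
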
### 6. Progress Tracking (Structured Comments)

**CRITICAL: Use structured progress comments for visibility.**

**Standard Progress Comment Format:**
```markdown
## Progress Update
**Status:** In Progress | Blocked | Failed
**Phase:** [current phase name]
**Tool Calls:** X (budget: Y)

### Completed
- [x] Step 1
- [x] Step 2

### In Progress
- [ ] Current step (estimated: Z more calls)

### Blockers
- None | [blocker description]

### Next
- What happens after current step
```

**Example Progress Comment:**
```
add_comment(
    issue_number=45,
    body="""## Progress Update
**Status:** In Progress
**Phase:** Implementation
**Tool Calls:** 45 (budget: 100)

### Completed
- [x] Created auth/jwt_service.py
- [x] Implemented generate_token()
- [x] Implemented verify_token()

### In Progress
- [ ] Writing unit tests (estimated: 20 more calls)

### Blockers
- None

### Next
- Run tests and fix any failures
- Commit and push
"""
)
```

**When to Post Progress Comments:**
- After completing each major phase (every 20-30 tool calls)
- When status changes (blocked, failed)
- When encountering unexpected issues
- Before approaching tool call budget limit

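Because the format is fixed, executors can render it rather than hand-write it. A small sketch under that assumption; only the comment body is produced here, posting still goes through the `add_comment` MCP tool.

```python
def progress_comment(status: str, phase: str, calls: int, budget: int,
                     done: list[str], doing: list[str],
                     blockers: list[str], next_steps: list[str]) -> str:
    """Render the standard '## Progress Update' comment body."""
    lines = [
        "## Progress Update",
        f"**Status:** {status}",
        f"**Phase:** {phase}",
        f"**Tool Calls:** {calls} (budget: {budget})",
        "", "### Completed", *[f"- [x] {s}" for s in done],
        "", "### In Progress", *[f"- [ ] {s}" for s in doing],
        "", "### Blockers", *[f"- {b}" for b in (blockers or ["None"])],
        "", "### Next", *[f"- {s}" for s in next_steps],
    ]
    return "\n".join(lines)

# add_comment(issue_number=45, body=progress_comment(
#     "In Progress", "Implementation", 45, 100,
#     ["Created auth/jwt_service.py"], ["Writing unit tests"], [], ["Commit and push"]))
```
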

**Simple progress updates (for minor milestones):**
```
add_comment(
    issue_number=45,
```
@@ -503,7 +264,7 @@ add_comment(
- Notify that new tasks are ready for execution
- Update the execution queue

### 7. Monitor Parallel Execution
### 5. Monitor Parallel Execution

**Track multiple running tasks:**
```
@@ -521,7 +282,7 @@ Batch 2 (now unblocked):
Starting #46 while #48 continues...
```

### 8. Branch Protection Detection
### 6. Branch Protection Detection

Before merging, check if development branch is protected:

@@ -551,7 +312,7 @@ Closes #45

**NEVER include subtask checklists in MR body.** The issue already has them.

### 9. Sprint Close - Capture Lessons Learned
### 7. Sprint Close - Capture Lessons Learned

**Invoked by:** `/sprint-close`

@@ -803,64 +564,6 @@ Would you like me to handle git operations?
- Document blockers promptly
- Never let tasks slip through

## Runaway Detection (Monitoring Dispatched Agents)

**Monitor dispatched agents for runaway behavior:**

**Warning Signs:**
- Agent running 30+ minutes with no progress comment
- Progress comment shows "same phase" for 20+ tool calls
- Error patterns repeating in progress comments

**Intervention Protocol:**

When you detect an agent may be stuck:

1. **Read latest progress comment** - Check tool call count and phase
2. **If no progress in 20+ calls** - Consider stopping the agent
3. **If same error 3+ times** - Stop and mark issue as Status/Failed

**Agent Timeout Guidelines:**

| Task Size | Expected Duration | Intervention Point |
|-----------|-------------------|--------------------|
| XS | ~5-10 min | 15 min no progress |
| S | ~10-20 min | 30 min no progress |
| M | ~20-40 min | 45 min no progress |

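The table translates directly into a watchdog check. A sketch only: the thresholds come from the table, while tracking the timestamp of the last progress comment is an assumption about how the orchestrator stores monitoring state.

```python
from datetime import datetime, timedelta

# Intervention points from the table above (minutes of no progress).
INTERVENTION = {"XS": 15, "S": 30, "M": 45}

def needs_intervention(size: str, last_progress_at: datetime,
                       now: datetime | None = None) -> bool:
    """True when a dispatched agent has been silent past its intervention point."""
    now = now or datetime.now()
    limit = INTERVENTION.get(size)
    if limit is None:          # L/XL should never have been dispatched as one task
        return True
    return now - last_progress_at > timedelta(minutes=limit)
```
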
**Recovery Actions:**

If agent appears stuck:
```
# Stop the agent
[Use TaskStop if available]

# Update issue status
update_issue(
    issue_number=45,
    labels=["Status/Failed", ...other_labels]
)

# Add explanation comment
add_comment(
    issue_number=45,
    body="""## Agent Intervention
**Reason:** No progress detected for [X] minutes / [Y] tool calls
**Last Status:** [from progress comment]
**Action:** Stopped agent, requires human review

### What Was Completed
[from progress comment]

### What Remains
[from progress comment]

### Recommendation
[Manual completion / Different approach / Break down further]
"""
)
```

## Critical Reminders

1. **Never use CLI tools** - Use MCP tools exclusively for Gitea
@@ -869,18 +572,14 @@ add_comment(
4. **Parallel dispatch** - Run independent tasks simultaneously
5. **Lean prompts** - Brief, actionable, not verbose documents
6. **Branch naming** - `feat/`, `fix/`, `debug/` prefixes required
7. **Status labels** - Apply Status/In-Progress, Status/Blocked, Status/Failed, Status/Deferred accurately
7. **No MR subtasks** - MR body should NOT have checklists
8. **One status at a time** - Remove old Status/* label before applying new one
8. **Auto-check subtasks** - Mark issue subtasks complete on close
9. **Remove status on close** - Successful completion removes all Status/* labels
9. **Track meticulously** - Update issues immediately, document blockers
10. **Monitor for runaways** - Intervene if agent shows no progress for extended period
10. **Capture lessons** - At sprint close, interview thoroughly
11. **No MR subtasks** - MR body should NOT have checklists
11. **Update wiki status** - At sprint close, update implementation and proposal pages
12. **Auto-check subtasks** - Mark issue subtasks complete on close
12. **Link lessons to wiki** - Include lesson links in implementation completion summary
13. **Track meticulously** - Update issues immediately, document blockers
13. **Update CHANGELOG** - MANDATORY at sprint close, never skip
14. **Capture lessons** - At sprint close, interview thoroughly
14. **Run suggest-version** - Check if release is needed after CHANGELOG update
15. **Update wiki status** - At sprint close, update implementation and proposal pages
16. **Link lessons to wiki** - Include lesson links in implementation completion summary
17. **Update CHANGELOG** - MANDATORY at sprint close, never skip
18. **Run suggest-version** - Check if release is needed after CHANGELOG update

## Your Mission

@@ -310,55 +310,14 @@ Think through the technical approach:
- `[Sprint 17] fix: Resolve login timeout issue`
- `[Sprint 18] refactor: Extract authentication module`

**Task Sizing Rules (MANDATORY):**
**Task Granularity Guidelines:**

| Size | Scope | Example |
|------|-------|---------|
| **Small** | 1-2 hours, single file/component | Add validation to one field |
| **Medium** | Half day, multiple files, one feature | Implement new API endpoint |
| **Large** | Should be broken down | Full authentication system |

**If a task is too large, break it down into smaller tasks.**

| Effort | Files | Checklist Items | Max Tool Calls | Agent Scope |
|--------|-------|-----------------|----------------|-------------|
| **XS** | 1 file | 0-2 items | ~30 | Single function/fix |
| **S** | 1 file | 2-4 items | ~50 | Single file feature |
| **M** | 2-3 files | 4-6 items | ~80 | Multi-file feature |
| **L** | MUST BREAK DOWN | - | - | Too large for one agent |
| **XL** | MUST BREAK DOWN | - | - | Way too large |

**CRITICAL: L and XL tasks MUST be broken into subtasks.**

**Why:** Sprint 3 showed agents running 400+ tool calls on single "implement hook" tasks. This causes:
- Long wait times (1+ hour per task)
- No progress visibility
- Resource exhaustion
- Difficult debugging

**Task Scoping Checklist:**
1. Can this be completed in one file? → XS or S
2. Does it touch 2-3 files? → M (max)
3. Does it touch 4+ files? → MUST break down
4. Does it require complex decision-making? → MUST break down
5. Would you estimate 50+ tool calls? → MUST break down

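The effort table reduces to a small sizing function. A sketch under the table's own thresholds; the `files` and `checklist_items` inputs are whatever the planner estimated for the issue, nothing here queries Gitea.

```python
def effort_label(files: int, checklist_items: int) -> str:
    """Map planner estimates onto the Efforts/* scale from the table above."""
    if files >= 4:
        return "BREAK DOWN"          # L/XL territory: split into S/M subtasks first
    if files >= 2:
        return "Efforts/M"           # 2-3 files, 4-6 checklist items, ~80 calls
    if checklist_items <= 2:
        return "Efforts/XS"          # single file, 0-2 items, ~30 calls
    return "Efforts/S"               # single file, 2-4 items, ~50 calls

print(effort_label(files=1, checklist_items=3))  # Efforts/S
print(effort_label(files=5, checklist_items=8))  # BREAK DOWN
```
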
**Breaking Down Large Tasks:**

**BAD (L/XL - too broad):**
```
[Sprint 3] feat: Implement git-flow branch validation hook
Labels: Efforts/L, ...
```

**GOOD (broken into S/M tasks):**
```
[Sprint 3] feat: Create branch validation hook skeleton
Labels: Efforts/S, ...

[Sprint 3] feat: Add prefix pattern validation (feat/, fix/, etc.)
Labels: Efforts/S, ...

[Sprint 3] feat: Add issue number extraction and validation
Labels: Efforts/S, ...

[Sprint 3] test: Add branch validation unit tests
Labels: Efforts/S, ...
```

**If a task is estimated L or XL, STOP and break it down before creating.**

**IMPORTANT: Include wiki implementation reference in issue body:**

@@ -520,9 +479,5 @@ Sprint 17 - User Authentication (Due: 2025-02-01)
11. **Always use suggest_labels** - Don't guess labels
12. **Always think through architecture** - Consider edge cases
13. **Always cleanup local files** - Delete after migrating to wiki
14. **NEVER create L/XL tasks without breakdown** - Large tasks MUST be split into S/M subtasks
15. **Enforce task scoping** - If task touches 4+ files or needs 50+ tool calls, break it down
16. **ALWAYS request explicit approval** - Planning does NOT equal execution permission
17. **Record approval in milestone** - Sprint-start verifies approval before executing

You are the thoughtful planner who ensures sprints are well-prepared, architecturally sound, and learn from past experiences. Take your time, ask questions, and create comprehensive plans that set the team up for success.

@@ -1,180 +0,0 @@
---
description: Generate Mermaid diagram of sprint issues with dependencies and status
---

# Sprint Diagram

This command generates a visual Mermaid diagram showing the current sprint's issues, their dependencies, and execution flow.

## What This Command Does

1. **Fetch Sprint Issues** - Gets all issues for the current sprint milestone
2. **Fetch Dependencies** - Retrieves dependency relationships between issues
3. **Generate Mermaid Syntax** - Creates flowchart showing issue flow
4. **Apply Status Styling** - Colors nodes based on issue state (open/closed/in-progress)
5. **Show Execution Order** - Visualizes parallel batches and critical path

## Usage

```
/sprint-diagram
/sprint-diagram --milestone "Sprint 4"
```

## MCP Tools Used

**Issue Tools:**
- `list_issues(state="all")` - Fetch all sprint issues
- `list_milestones()` - Find current sprint milestone

**Dependency Tools:**
- `list_issue_dependencies(issue_number)` - Get dependencies for each issue
- `get_execution_order(issue_numbers)` - Get parallel execution batches

## Implementation Steps

1. **Get Current Milestone:**
```
milestones = list_milestones(state="open")
current_sprint = milestones[0]  # Most recent open milestone
```

2. **Fetch Sprint Issues:**
```
issues = list_issues(state="all", milestone=current_sprint.title)
```

3. **Fetch Dependencies for Each Issue:**
```python
dependencies = {}
for issue in issues:
    deps = list_issue_dependencies(issue.number)
    dependencies[issue.number] = deps
```

4. **Generate Mermaid Diagram:**
```mermaid
flowchart TD
    subgraph Sprint["Sprint 4 - Commands"]
        241["#241: sprint-diagram"]
        242["#242: confidence threshold"]
        243["#243: pr-diff"]

        241 --> 242
        242 --> 243
    end

    classDef completed fill:#90EE90,stroke:#228B22
    classDef inProgress fill:#FFD700,stroke:#DAA520
    classDef open fill:#ADD8E6,stroke:#4682B4
    classDef blocked fill:#FFB6C1,stroke:#DC143C

    class 241 completed
    class 242 inProgress
    class 243 open
```

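Steps 3 and 4 can be connected with a small generator that turns the `dependencies` dict into Mermaid text. This is a sketch only; it assumes each issue object exposes `number`, `title`, and `state` the way the snippets above do.

```python
def build_mermaid(issues, dependencies) -> str:
    """Emit a flowchart where closed issues are styled as completed."""
    lines = ["flowchart TD"]
    for issue in issues:
        lines.append(f'    {issue.number}["#{issue.number}: {issue.title}"]')
    for issue_number, deps in dependencies.items():
        for dep in deps:
            lines.append(f"    {dep.number} --> {issue_number}")
    lines.append("    classDef completed fill:#90EE90,stroke:#228B22")
    closed = [str(i.number) for i in issues if i.state == "closed"]
    if closed:
        lines.append(f"    class {','.join(closed)} completed")
    return "\n".join(lines)
```
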
## Expected Output

```
Sprint Diagram: Sprint 4 - Commands
===================================

```mermaid
flowchart TD
    subgraph batch1["Batch 1 - No Dependencies"]
        241["#241: sprint-diagram<br/>projman"]
        242["#242: confidence threshold<br/>pr-review"]
        244["#244: data-quality<br/>data-platform"]
        247["#247: chart-export<br/>viz-platform"]
        250["#250: dependency-graph<br/>contract-validator"]
        251["#251: changelog-gen<br/>doc-guardian"]
        254["#254: config-diff<br/>config-maintainer"]
        256["#256: cmdb-topology<br/>cmdb-assistant"]
    end

    subgraph batch2["Batch 2 - After Batch 1"]
        243["#243: pr-diff<br/>pr-review"]
        245["#245: lineage-viz<br/>data-platform"]
        248["#248: color blind<br/>viz-platform"]
        252["#252: doc-coverage<br/>doc-guardian"]
        255["#255: linting<br/>config-maintainer"]
        257["#257: change-audit<br/>cmdb-assistant"]
    end

    subgraph batch3["Batch 3 - Final"]
        246["#246: dbt-test<br/>data-platform"]
        249["#249: responsive<br/>viz-platform"]
        253["#253: stale-docs<br/>doc-guardian"]
        258["#258: IP conflict<br/>cmdb-assistant"]
    end

    batch1 --> batch2
    batch2 --> batch3

    classDef completed fill:#90EE90,stroke:#228B22
    classDef inProgress fill:#FFD700,stroke:#DAA520
    classDef open fill:#ADD8E6,stroke:#4682B4

    class 241,242 completed
    class 243,244 inProgress
```

## Status Legend

| Status | Color | Description |
|--------|-------|-------------|
| Completed | Green | Issue closed |
| In Progress | Yellow | Currently being worked on |
| Open | Blue | Ready to start |
| Blocked | Red | Waiting on dependencies |

## Diagram Types

### Default: Dependency Flow
Shows how issues depend on each other with arrows indicating blockers.

### Batch View (--batch)
Groups issues by execution batch for parallel work visualization.

### Plugin View (--by-plugin)
Groups issues by plugin for component-level overview.

## When to Use

- **Sprint Planning**: Visualize scope and dependencies
- **Daily Standups**: Show progress at a glance
- **Documentation**: Include in wiki pages
- **Stakeholder Updates**: Visual progress reports

## Integration

The generated Mermaid diagram can be:
- Pasted into GitHub/Gitea issues
- Rendered in wiki pages
- Included in PRs for context
- Used in sprint retrospectives

## Example

```
User: /sprint-diagram

Generating sprint diagram...

Milestone: Sprint 4 - Commands (18 issues)
Fetching dependencies...
Building diagram...

```mermaid
flowchart LR
    241[sprint-diagram] --> |enables| 242[confidence]
    242 --> 243[pr-diff]

    style 241 fill:#90EE90
    style 242 fill:#ADD8E6
    style 243 fill:#ADD8E6
```

Open: 16 | In Progress: 1 | Completed: 1
```

@@ -136,58 +136,6 @@ The planner agent will:
|
|||||||
- Document dependency graph
|
- Document dependency graph
|
||||||
- Provide sprint overview with wiki links
|
- Provide sprint overview with wiki links
|
||||||
|
|
||||||
11. **Request Sprint Approval**
|
|
||||||
- Present approval request with scope summary
|
|
||||||
- Capture explicit user approval
|
|
||||||
- Record approval in milestone description
|
|
||||||
- Approval scopes what sprint-start can execute
|
|
||||||
|
|
||||||
## Sprint Approval (MANDATORY)
|
|
||||||
|
|
||||||
**Planning DOES NOT equal execution permission.**
|
|
||||||
|
|
||||||
After creating issues, the planner MUST request explicit approval:
|
|
||||||
|
|
||||||
```
|
|
||||||
Sprint 17 Planning Complete
|
|
||||||
===========================
|
|
||||||
|
|
||||||
Created Issues:
|
|
||||||
- #45: [Sprint 17] feat: JWT token generation
|
|
||||||
- #46: [Sprint 17] feat: Login endpoint
|
|
||||||
- #47: [Sprint 17] test: Auth tests
|
|
||||||
|
|
||||||
Execution Scope:
|
|
||||||
- Branches: feat/45-*, feat/46-*, feat/47-*
|
|
||||||
- Files: auth/*, api/routes/auth.py, tests/test_auth*
|
|
||||||
- Dependencies: PyJWT, python-jose
|
|
||||||
|
|
||||||
⚠️ APPROVAL REQUIRED
|
|
||||||
|
|
||||||
Do you approve this sprint for execution?
|
|
||||||
This grants permission for agents to:
|
|
||||||
- Create and modify files in the listed scope
|
|
||||||
- Create branches with the listed prefixes
|
|
||||||
- Install listed dependencies
|
|
||||||
|
|
||||||
Type "approve sprint 17" to authorize execution.
|
|
||||||
```
|
|
||||||
|
|
||||||
**On Approval:**
|
|
||||||
1. Record approval in milestone description
|
|
||||||
2. Note timestamp and scope
|
|
||||||
3. Sprint-start will verify approval exists
|
|
||||||
|
|
||||||
**Approval Record Format:**
|
|
||||||
```markdown
|
|
||||||
## Sprint Approval
|
|
||||||
**Approved:** 2026-01-28 14:30
|
|
||||||
**Approver:** User
|
|
||||||
**Scope:**
|
|
||||||
- Branches: feat/45-*, feat/46-*, feat/47-*
|
|
||||||
- Files: auth/*, api/routes/auth.py, tests/test_auth*
|
|
||||||
```
|
|
||||||
|
|
||||||
## Issue Title Format (MANDATORY)
|
## Issue Title Format (MANDATORY)
|
||||||
|
|
||||||
```
|
```
|
||||||
@@ -207,70 +155,15 @@ Type "approve sprint 17" to authorize execution.
|
|||||||
- `[Sprint 17] fix: Resolve login timeout issue`
|
- `[Sprint 17] fix: Resolve login timeout issue`
|
||||||
- `[Sprint 18] refactor: Extract authentication module`
|
- `[Sprint 18] refactor: Extract authentication module`
|
||||||
|
|
||||||
## Task Sizing Rules (MANDATORY)
|
## Task Granularity Guidelines
|
||||||
|
|
||||||
**CRITICAL: Tasks sized L or XL MUST be broken down into smaller tasks.**
|
| Size | Scope | Example |
|
||||||
|
|------|-------|---------|
|
||||||
|
| **Small** | 1-2 hours, single file/component | Add validation to one field |
|
||||||
|
| **Medium** | Half day, multiple files, one feature | Implement new API endpoint |
|
||||||
|
| **Large** | Should be broken down | Full authentication system |
|
||||||
|
|
||||||
| Effort | Files | Checklist Items | Max Tool Calls | Agent Scope |
|
**If a task is too large, break it down into smaller tasks.**
|
||||||
|--------|-------|-----------------|----------------|-------------|
|
|
||||||
| **XS** | 1 file | 0-2 items | ~30 | Single function/fix |
|
|
||||||
| **S** | 1 file | 2-4 items | ~50 | Single file feature |
|
|
||||||
| **M** | 2-3 files | 4-6 items | ~80 | Multi-file feature |
|
|
||||||
| **L** | MUST BREAK DOWN | - | - | Too large for one agent |
|
|
||||||
| **XL** | MUST BREAK DOWN | - | - | Way too large |
|
|
||||||
|
|
||||||
**Why This Matters:**
|
|
||||||
- Agents running 400+ tool calls take 1+ hour, with no visibility
|
|
||||||
- Large tasks lack clear completion criteria
|
|
||||||
- Debugging failures is extremely difficult
|
|
||||||
- Small tasks enable parallel execution
|
|
||||||
|
|
||||||
**Scoping Checklist:**
|
|
||||||
1. Can this be completed in one file? → XS or S
|
|
||||||
2. Does it touch 2-3 files? → M (maximum for single task)
|
|
||||||
3. Does it touch 4+ files? → MUST break down
|
|
||||||
4. Would you estimate 50+ tool calls? → MUST break down
|
|
||||||
5. Does it require complex decision-making mid-task? → MUST break down
|
|
||||||
|
|
||||||
**Example Breakdown:**
|
|
||||||
|
|
||||||
**BAD (L - too broad):**
|
|
||||||
```
|
|
||||||
[Sprint 3] feat: Implement schema diff detection hook
|
|
||||||
Labels: Efforts/L
|
|
||||||
- Hook skeleton
|
|
||||||
- Pattern detection for DROP/ALTER/RENAME
|
|
||||||
- Warning output formatting
|
|
||||||
- Integration with hooks.json
|
|
||||||
```
|
|
||||||
|
|
||||||
**GOOD (broken into S tasks):**
|
|
||||||
```
|
|
||||||
[Sprint 3] feat: Create schema-diff-check.sh hook skeleton
|
|
||||||
Labels: Efforts/S
|
|
||||||
- [ ] Create hook file with standard header
|
|
||||||
- [ ] Add file type detection for SQL/migrations
|
|
||||||
- [ ] Exit 0 (non-blocking)
|
|
||||||
|
|
||||||
[Sprint 3] feat: Add DROP/ALTER pattern detection
|
|
||||||
Labels: Efforts/S
|
|
||||||
- [ ] Detect DROP COLUMN/TABLE/INDEX
|
|
||||||
- [ ] Detect ALTER TYPE changes
|
|
||||||
- [ ] Detect RENAME operations
|
|
||||||
|
|
||||||
[Sprint 3] feat: Add warning output formatting
|
|
||||||
Labels: Efforts/S
|
|
||||||
- [ ] Format breaking change warnings
|
|
||||||
- [ ] Add hook prefix to output
|
|
||||||
- [ ] Test output visibility
|
|
||||||
|
|
||||||
[Sprint 3] chore: Register hook in hooks.json
|
|
||||||
Labels: Efforts/XS
|
|
||||||
- [ ] Add PostToolUse:Edit hook entry
|
|
||||||
- [ ] Test hook triggers on SQL edits
|
|
||||||
```
|
|
||||||
|
|
||||||
**The planner MUST refuse to create L/XL tasks without breakdown.**
|
|
||||||
|
|
||||||
## MCP Tools Available
|
## MCP Tools Available
|
||||||
|
|
||||||
|
|||||||
@@ -6,47 +6,6 @@ description: Begin sprint execution with relevant lessons learned from previous
|
|||||||
|
|
||||||
You are initiating sprint execution. The orchestrator agent will coordinate the work, analyze dependencies for parallel execution, search for relevant lessons learned, and guide you through the implementation process.
|
You are initiating sprint execution. The orchestrator agent will coordinate the work, analyze dependencies for parallel execution, search for relevant lessons learned, and guide you through the implementation process.
|
||||||
|
|
||||||
## Sprint Approval Verification
|
|
||||||
|
|
||||||
**CRITICAL: Sprint must be approved before execution.**
|
|
||||||
|
|
||||||
The orchestrator checks for approval in the milestone description:
|
|
||||||
|
|
||||||
```
|
|
||||||
get_milestone(milestone_id=17)
|
|
||||||
→ Check description for "## Sprint Approval" section
|
|
||||||
```
|
|
||||||
|
|
||||||
**If Approval Missing:**
|
|
||||||
```
|
|
||||||
⚠️ SPRINT NOT APPROVED
|
|
||||||
|
|
||||||
Sprint 17 has not been approved for execution.
|
|
||||||
The milestone description does not contain an approval record.
|
|
||||||
|
|
||||||
Please run /sprint-plan to:
|
|
||||||
1. Review the sprint scope
|
|
||||||
2. Approve the execution plan
|
|
||||||
|
|
||||||
Then run /sprint-start again.
|
|
||||||
```
|
|
||||||
|
|
||||||
**If Approval Found:**
|
|
||||||
```
|
|
||||||
✓ Sprint Approval Verified
|
|
||||||
Approved: 2026-01-28 14:30
|
|
||||||
Scope:
|
|
||||||
Branches: feat/45-*, feat/46-*, feat/47-*
|
|
||||||
Files: auth/*, api/routes/auth.py, tests/test_auth*
|
|
||||||
|
|
||||||
Proceeding with execution within approved scope...
|
|
||||||
```
|
|
||||||
|
|
||||||
**Scope Enforcement:**
|
|
||||||
- Agents can ONLY create branches matching approved patterns
|
|
||||||
- Agents can ONLY modify files within approved paths
|
|
||||||
- Operations outside scope require re-approval via `/sprint-plan`
|
|
||||||
|
|
||||||
## Branch Detection
|
## Branch Detection
|
||||||
|
|
||||||
**CRITICAL:** Before proceeding, check the current git branch:
|
**CRITICAL:** Before proceeding, check the current git branch:
|
||||||
@@ -66,18 +25,7 @@ If you are on a production or staging branch, you MUST stop and ask the user to
|
|||||||
|
|
||||||
The orchestrator agent will:
|
The orchestrator agent will:
|
||||||
|
|
||||||
1. **Verify Sprint Approval**
|
1. **Fetch Sprint Issues**
|
||||||
- Check milestone description for `## Sprint Approval` section
|
|
||||||
- If no approval found, STOP and direct user to `/sprint-plan`
|
|
||||||
- If approval found, extract scope (branches, files)
|
|
||||||
- Agents operate ONLY within approved scope
|
|
||||||
|
|
||||||
2. **Detect Checkpoints (Resume Support)**
|
|
||||||
- Check each open issue for `## Checkpoint` comments
|
|
||||||
- If checkpoint found, offer to resume from that point
|
|
||||||
- Resume preserves: branch, completed work, pending steps
|
|
||||||
|
|
||||||
3. **Fetch Sprint Issues**
|
|
||||||
- Use `list_issues` to fetch open issues for the sprint
|
- Use `list_issues` to fetch open issues for the sprint
|
||||||
- Identify priorities based on labels (Priority/Critical, Priority/High, etc.)
|
- Identify priorities based on labels (Priority/Critical, Priority/High, etc.)
|
||||||
|
|
||||||
@@ -124,67 +72,6 @@ Parallel Execution Batches:
|
|||||||
|
|
||||||
**Independent tasks in the same batch run in parallel.**
|
**Independent tasks in the same batch run in parallel.**
|
||||||
|
|
||||||
## File Conflict Prevention (MANDATORY)
|
|
||||||
|
|
||||||
**CRITICAL: Before dispatching parallel agents, check for file overlap.**
|
|
||||||
|
|
||||||
**Pre-Dispatch Conflict Check:**
|
|
||||||
|
|
||||||
1. **Identify target files** for each task in the batch
|
|
||||||
2. **Check for overlap** - Do any tasks modify the same file?
|
|
||||||
3. **If overlap detected** - Sequentialize those specific tasks
|
|
||||||
|
|
||||||
**Example Conflict Detection:**
|
|
||||||
```
|
|
||||||
Batch 1 Analysis:
|
|
||||||
#45 - Implement JWT service
|
|
||||||
Files: auth/jwt_service.py, auth/__init__.py, tests/test_jwt.py
|
|
||||||
|
|
||||||
#48 - Update API documentation
|
|
||||||
Files: docs/api.md, README.md
|
|
||||||
|
|
||||||
Overlap: NONE → Safe to parallelize
|
|
||||||
|
|
||||||
Batch 2 Analysis:
|
|
||||||
#46 - Build login endpoint
|
|
||||||
Files: api/routes/auth.py, auth/__init__.py
|
|
||||||
|
|
||||||
#49 - Add auth tests
|
|
||||||
Files: tests/test_auth.py, auth/__init__.py
|
|
||||||
|
|
||||||
Overlap: auth/__init__.py → CONFLICT!
|
|
||||||
Action: Sequentialize #46 and #49 (run #46 first)
|
|
||||||
```
|
|
||||||
|
|
||||||
**Conflict Resolution Rules:**
|
|
||||||
|
|
||||||
| Conflict Type | Action |
|
|
||||||
|---------------|--------|
|
|
||||||
| Same file in checklist | Sequentialize tasks |
|
|
||||||
| Same directory | Review if safe, usually OK |
|
|
||||||
| Shared test file | Sequentialize or assign different test files |
|
|
||||||
| Shared config | Sequentialize |
|
|
||||||
|
|
||||||
**Branch Isolation Protocol:**
|
|
||||||
|
|
||||||
Even for parallel tasks, each MUST run on its own branch:
|
|
||||||
```
|
|
||||||
Task #45 → feat/45-jwt-service (isolated)
|
|
||||||
Task #48 → feat/48-api-docs (isolated)
|
|
||||||
```
|
|
||||||
|
|
||||||
**Sequential Merge After Completion:**
|
|
||||||
```
|
|
||||||
1. Task #45 completes → merge feat/45-jwt-service to development
|
|
||||||
2. Task #48 completes → merge feat/48-api-docs to development
|
|
||||||
3. Never merge simultaneously - always sequential to detect conflicts
|
|
||||||
```
|
|
||||||
|
|
||||||
**If Merge Conflict Occurs:**
|
|
||||||
1. Stop second task
|
|
||||||
2. Resolve conflict manually or assign to human
|
|
||||||
3. Resume/restart second task with updated base
|
|
||||||
|
|
||||||
## Branch Naming Convention (MANDATORY)
|
## Branch Naming Convention (MANDATORY)
|
||||||
|
|
||||||
When creating branches for tasks:
|
When creating branches for tasks:
|
||||||
@@ -352,61 +239,6 @@ Batch 2 (now unblocked):
|
|||||||
Starting #46 while #48 continues...
|
Starting #46 while #48 continues...
|
||||||
```
|
```
|
||||||
|
|
||||||
## Checkpoint Resume Support
|
|
||||||
|
|
||||||
If a previous session was interrupted (agent stopped, failure, budget exhausted), checkpoints enable resumption.
|
|
||||||
|
|
||||||
**Checkpoint Detection:**
|
|
||||||
The orchestrator scans issue comments for `## Checkpoint` markers containing:
|
|
||||||
- Branch name
|
|
||||||
- Last commit hash
|
|
||||||
- Completed/pending steps
|
|
||||||
- Files modified
|
|
||||||
|
|
||||||
**Resume Flow:**
|
|
||||||
```
|
|
||||||
User: /sprint-start
|
|
||||||
|
|
||||||
Orchestrator: Checking for checkpoints...
|
|
||||||
|
|
||||||
Found checkpoint for #45 (JWT service):
|
|
||||||
Branch: feat/45-jwt-service
|
|
||||||
Last activity: 2 hours ago
|
|
||||||
Progress: 4/7 steps completed
|
|
||||||
Pending: Write tests, add refresh, commit
|
|
||||||
|
|
||||||
Options:
|
|
||||||
1. Resume from checkpoint (recommended)
|
|
||||||
2. Start fresh (lose previous work)
|
|
||||||
3. Review checkpoint details
|
|
||||||
|
|
||||||
User: 1
|
|
||||||
|
|
||||||
Orchestrator: Resuming #45 from checkpoint...
|
|
||||||
✓ Branch exists
|
|
||||||
✓ Files match checkpoint
|
|
||||||
✓ Dispatching executor with context
|
|
||||||
|
|
||||||
Executor continues from pending steps...
|
|
||||||
```
|
|
||||||
|
|
||||||
**Checkpoint Format:**
|
|
||||||
Executors save checkpoints after major steps:
|
|
||||||
```markdown
|
|
||||||
## Checkpoint
|
|
||||||
**Branch:** feat/45-jwt-service
|
|
||||||
**Commit:** abc123
|
|
||||||
**Phase:** Testing
|
|
||||||
|
|
||||||
### Completed Steps
|
|
||||||
- [x] Step 1
|
|
||||||
- [x] Step 2
|
|
||||||
|
|
||||||
### Pending Steps
|
|
||||||
- [ ] Step 3
|
|
||||||
- [ ] Step 4
|
|
||||||
```
|
|
||||||
|
|
||||||
## Getting Started
|
## Getting Started
|
||||||
|
|
||||||
Simply invoke `/sprint-start` and the orchestrator will:
|
Simply invoke `/sprint-start` and the orchestrator will:
|
||||||
|
|||||||
@@ -79,12 +79,7 @@ Completed Issues (3):

In Progress (2):
  #46: [Sprint 18] feat: Build login endpoint [Type/Feature, Priority/High]
       Status: In Progress | Phase: Implementation | Tool Calls: 45/100
       Progress: 3/5 steps | Current: Writing validation logic

  #49: [Sprint 18] test: Add auth tests [Type/Test, Priority/Medium]
       Status: In Progress | Phase: Testing | Tool Calls: 30/100
       Progress: 2/4 steps | Current: Testing edge cases

Ready to Start (2):
  #50: [Sprint 18] feat: Integrate OAuth providers [Type/Feature, Priority/Low]
@@ -142,53 +137,12 @@ Show only backend issues:
```
list_issues(labels=["Component/Backend"])
```

## Progress Comment Parsing

Agents post structured progress comments in this format:

```markdown
## Progress Update
**Status:** In Progress | Blocked | Failed
**Phase:** [current phase name]
**Tool Calls:** X (budget: Y)

### Completed
- [x] Step 1

### In Progress
- [ ] Current step

### Blockers
- None | [blocker description]
```

**To extract real-time progress:**
1. Fetch issue comments: `get_issue(number)` includes recent comments
2. Look for comments containing `## Progress Update`
3. Parse the **Status:** line for current state
4. Parse **Tool Calls:** for budget consumption
5. Extract blockers from `### Blockers` section

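A sketch of that extraction, assuming comments arrive as plain strings (the exact field names on the `get_issue` response are an assumption):

```python
import re

def parse_progress(comment: str) -> dict | None:
    """Pull status, phase, tool-call usage and blockers out of a '## Progress Update' comment."""
    if "## Progress Update" not in comment:
        return None
    status = re.search(r"\*\*Status:\*\*\s*(.+)", comment)
    phase = re.search(r"\*\*Phase:\*\*\s*(.+)", comment)
    calls = re.search(r"\*\*Tool Calls:\*\*\s*(\d+)\s*\(budget:\s*(\d+)\)", comment)
    blockers_text = comment.split("### Blockers", 1)[-1].split("###", 1)[0]
    blockers = [b.strip("- ").strip() for b in blockers_text.strip().splitlines() if b.strip()]
    return {
        "status": status.group(1).strip() if status else None,
        "phase": phase.group(1).strip() if phase else None,
        "tool_calls": int(calls.group(1)) if calls else None,
        "budget": int(calls.group(2)) if calls else None,
        "blockers": [b for b in blockers if b.lower() != "none"],
    }
```
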
**Progress Summary Display:**
```
In Progress Issues:
  #45: [Sprint 18] feat: JWT service
       Status: In Progress | Phase: Testing | Tool Calls: 67/100
       Completed: 4/6 steps | Current: Writing unit tests

  #46: [Sprint 18] feat: Login endpoint
       Status: Blocked | Phase: Implementation | Tool Calls: 23/100
       Blocker: Waiting for JWT service (#45)
```

## Blocker Detection

The command identifies blocked issues by:
1. **Progress Comments** - Parse `### Blockers` section from structured comments
1. **Dependency Analysis** - Uses `list_issue_dependencies` to find unmet dependencies
2. **Status Labels** - Check for `Status/Blocked` label on issue
2. **Comment Keywords** - Checks for "blocked", "blocker", "waiting for"
3. **Dependency Analysis** - Uses `list_issue_dependencies` to find unmet dependencies
3. **Stale Issues** - Issues with no recent activity (>7 days)
4. **Comment Keywords** - Checks for "blocked", "blocker", "waiting for"
5. **Stale Issues** - Issues with no recent activity (>7 days)

## When to Use

@@ -10,19 +10,6 @@ PLUGIN_ROOT="${CLAUDE_PLUGIN_ROOT:-$(dirname "$(dirname "$(realpath "$0")")")}"
# Marketplace root is 2 levels up from plugin root (plugins/projman -> .)
MARKETPLACE_ROOT="$(dirname "$(dirname "$PLUGIN_ROOT")")"
VENV_REPAIR_SCRIPT="$MARKETPLACE_ROOT/scripts/venv-repair.sh"
PLUGIN_CACHE="$HOME/.claude/plugins/cache/leo-claude-mktplace"

# ============================================================================
# Clear stale plugin cache (MUST run before MCP servers load)
# ============================================================================
# The cache at ~/.claude/plugins/cache/ holds versioned .mcp.json files.
# After marketplace updates, cached configs may point to old paths.
# Clearing forces Claude to read fresh configs from installed marketplace.

if [[ -d "$PLUGIN_CACHE" ]]; then
    rm -rf "$PLUGIN_CACHE"
    # Don't output anything - this should be silent and automatic
fi

# ============================================================================
# Auto-repair MCP venvs (runs before other checks)

@@ -13,9 +13,9 @@ description: Dynamic reference for Gitea label taxonomy (organization + reposito

This skill provides the current label taxonomy used for issue classification in Gitea. Labels are **fetched dynamically** from Gitea and should never be hardcoded.

**Current Taxonomy:** 47 labels (31 organization + 16 repository)
**Current Taxonomy:** 43 labels (27 organization + 16 repository)

## Organization Labels (31)
## Organization Labels (27)

Organization-level labels are shared across all repositories in your configured organization.

@@ -60,12 +60,6 @@ Organization-level labels are shared across all repositories in your configured
- `Type/Test` (#1d76db) - Testing-related work (unit, integration, e2e)
- `Type/Chore` (#fef2c0) - Maintenance, tooling, dependencies, build tasks

### Status (4)
- `Status/In-Progress` (#0052cc) - Work is actively being done on this issue
- `Status/Blocked` (#ff5630) - Blocked by external dependency or issue
- `Status/Failed` (#de350b) - Implementation attempted but failed, needs investigation
- `Status/Deferred` (#6554c0) - Moved to a future sprint or backlog

## Repository Labels (16)

Repository-level labels are specific to each project.

@@ -174,28 +168,6 @@ When suggesting labels for issues, consider the following patterns:
- Keywords: "deploy", "deployment", "docker", "infrastructure", "ci/cd", "production"
- Example: "Deploy authentication service to production"

### Status Detection

**Status/In-Progress:**
- Applied when: Agent starts working on an issue
- Remove when: Work completes, fails, or is blocked
- Example: Orchestrator applies when dispatching task to executor

**Status/Blocked:**
- Applied when: Issue cannot proceed due to external dependency
- Context: Waiting for another issue, external service, or decision
- Example: "Blocked by #45 - need JWT service first"

**Status/Failed:**
- Applied when: Implementation was attempted but failed
- Context: Errors, permission issues, technical blockers
- Example: Agent hit permission errors and couldn't complete

**Status/Deferred:**
- Applied when: Work is moved to a future sprint
- Context: Scope reduction, reprioritization
- Example: "Moving to Sprint 5 due to scope constraints"

### Tech Detection

**Tech/Python:**

@@ -51,10 +51,7 @@ DMC_VERSION=0.14.7
| `/initial-setup` | Interactive setup wizard for DMC and theme preferences |
| `/component {name}` | Inspect component props and validation |
| `/chart {type}` | Create a Plotly chart |
| `/chart-export {format}` | Export chart to PNG, SVG, or PDF |
| `/dashboard {template}` | Create a dashboard layout |
| `/breakpoints {layout}` | Configure responsive breakpoints |
| `/accessibility-check` | Validate colors for color blind users |
| `/theme {name}` | Apply an existing theme |
| `/theme-new {name}` | Create a new custom theme |
| `/theme-css {name}` | Export theme as CSS |
@@ -78,16 +75,15 @@ Prevent invalid component props before runtime.
| `get_component_props` | Get detailed prop specifications |
| `validate_component` | Validate a component configuration |

### Charts (3 tools)
### Charts (2 tools)
Create Plotly charts with theme integration.

| Tool | Description |
|------|-------------|
| `chart_create` | Create a chart (line, bar, scatter, pie, etc.) |
| `chart_configure_interaction` | Configure chart interactivity |
| `chart_export` | Export chart to PNG, SVG, or PDF |

### Layouts (6 tools)
### Layouts (5 tools)
Build dashboard structures with filters and grids.

| Tool | Description |
@@ -95,19 +91,9 @@ Build dashboard structures with filters and grids.
| `layout_create` | Create a layout structure |
| `layout_add_filter` | Add filter components |
| `layout_set_grid` | Configure responsive grid |
| `layout_set_breakpoints` | Configure responsive breakpoints (xs-xl) |
| `layout_add_section` | Add content sections |
| `layout_get` | Retrieve layout details |

### Accessibility (3 tools)
Validate colors for accessibility and color blindness.

| Tool | Description |
|------|-------------|
| `accessibility_validate_colors` | Check colors for color blind accessibility |
| `accessibility_validate_theme` | Validate a theme's color accessibility |
| `accessibility_suggest_alternative` | Suggest accessible color alternatives |

### Themes (6 tools)
Manage design tokens and styling.

@@ -202,37 +188,9 @@ viz-platform works seamlessly with data-platform:
| `tabs` | Multi-page dashboards |
| `split` | Comparisons, master-detail |

## Responsive Breakpoints

The plugin supports mobile-first responsive design with standard breakpoints:

| Breakpoint | Min Width | Description |
|------------|-----------|-------------|
| `xs` | 0px | Extra small (mobile portrait) |
| `sm` | 576px | Small (mobile landscape) |
| `md` | 768px | Medium (tablet) |
| `lg` | 992px | Large (desktop) |
| `xl` | 1200px | Extra large (large desktop) |

Example:
```
/breakpoints my-dashboard
# Configure cols, spacing per breakpoint
```

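Agent-driven configuration goes through the `layout_set_breakpoints` MCP tool listed above. A hedged sketch of the call shape (per-breakpoint column counts for a dashboard layout; the exact keyword arguments mirror the `/breakpoints` command documentation rather than a verified signature):

```python
layout_set_breakpoints(
    layout_ref="my-dashboard",
    breakpoints={
        "xs": {"cols": 1, "spacing": "xs"},   # 0px+, mobile portrait
        "md": {"cols": 6, "spacing": "md"},   # 768px+, tablet
        "xl": {"cols": 12, "spacing": "lg"},  # 1200px+, large desktop
    },
)
```
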

## Color Accessibility

The plugin validates colors for color blindness:
- **Deuteranopia** (green-blind) - 6% of males
- **Protanopia** (red-blind) - 2.5% of males
- **Tritanopia** (blue-blind) - 0.01% of population

Includes WCAG contrast ratio checking and accessible palette suggestions.

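The contrast check follows the standard WCAG 2.x formula. The sketch below reproduces that public formula for reference; it is not necessarily the plugin's internal implementation.

```python
def _luminance(hex_color: str) -> float:
    """Relative luminance of an sRGB color per WCAG 2.x."""
    channels = [int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4)]
    linear = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4 for c in channels]
    return 0.2126 * linear[0] + 0.7152 * linear[1] + 0.0722 * linear[2]

def contrast_ratio(fg: str, bg: str) -> float:
    lighter, darker = sorted((_luminance(fg), _luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Yellow on white is well below the 4.5:1 required for WCAG AA body text.
print(round(contrast_ratio("#fab005", "#ffffff"), 1))
```
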

## Requirements

- Python 3.10+
- dash-mantine-components >= 0.14.0
- plotly >= 5.18.0
- dash >= 2.14.0
- kaleido >= 0.2.1 (for chart export)

@@ -1,144 +0,0 @@
|
|||||||
---
|
|
||||||
description: Validate color accessibility for color blind users
|
|
||||||
---
|
|
||||||
|
|
||||||
# Accessibility Check
|
|
||||||
|
|
||||||
Validate theme or chart colors for color blind accessibility, checking contrast ratios and suggesting alternatives.
|
|
||||||
|
|
||||||
## Usage
|
|
||||||
|
|
||||||
```
|
|
||||||
/accessibility-check {target}
|
|
||||||
```
|
|
||||||
|
|
||||||
## Arguments
|
|
||||||
|
|
||||||
- `target` (optional): "theme" or "chart" - defaults to active theme
|
|
||||||
|
|
||||||
## Examples
|
|
||||||
|
|
||||||
```
|
|
||||||
/accessibility-check
|
|
||||||
/accessibility-check theme
|
|
||||||
/accessibility-check chart
|
|
||||||
```
|
|
||||||
|
|
||||||
## Tool Mapping
|
|
||||||
|
|
||||||
This command uses the `accessibility_validate_colors` MCP tool:
|
|
||||||
|
|
||||||
```python
|
|
||||||
accessibility_validate_colors(
|
|
||||||
colors=["#228be6", "#40c057", "#fa5252"], # Colors to check
|
|
||||||
check_types=["deuteranopia", "protanopia", "tritanopia"],
|
|
||||||
min_contrast_ratio=4.5 # WCAG AA standard
|
|
||||||
)
|
|
||||||
```
|
|
||||||
|
|
||||||
Or validate a full theme:
|
|
||||||
```python
|
|
||||||
accessibility_validate_theme(
|
|
||||||
theme_name="corporate"
|
|
||||||
)
|
|
||||||
```
|
|
||||||
|
|
||||||
## Workflow
|
|
||||||
|
|
||||||
1. **User invokes**: `/accessibility-check theme`
|
|
||||||
2. **Tool analyzes**: Theme color palette
|
|
||||||
3. **Tool simulates**: Color perception for each deficiency type
|
|
||||||
4. **Tool checks**: Contrast ratios between color pairs
|
|
||||||
5. **Tool returns**: Issues found and alternative suggestions
|
|
||||||
|
|
||||||
## Color Blindness Types
|
|
||||||
|
|
||||||
| Type | Affected Colors | Population |
|
|
||||||
|------|-----------------|------------|
|
|
||||||
| **Deuteranopia** | Red-Green (green-blind) | ~6% males, 0.4% females |
|
|
||||||
| **Protanopia** | Red-Green (red-blind) | ~2.5% males, 0.05% females |
|
|
||||||
| **Tritanopia** | Blue-Yellow | ~0.01% total |
|
|
||||||
|
|
||||||
## Output Example
|
|
||||||
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"theme_name": "corporate",
|
|
||||||
"overall_score": "B",
|
|
||||||
"issues": [
|
|
||||||
{
|
|
||||||
"type": "contrast",
|
|
||||||
"severity": "warning",
|
|
||||||
"colors": ["#fa5252", "#40c057"],
|
|
||||||
"affected_by": ["deuteranopia", "protanopia"],
|
|
||||||
"message": "Red and green may be indistinguishable for red-green color blind users",
|
|
||||||
"suggestion": "Use blue (#228be6) instead of green to differentiate from red"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"type": "contrast_ratio",
|
|
||||||
"severity": "error",
|
|
||||||
"colors": ["#fab005", "#ffffff"],
|
|
||||||
"ratio": 2.1,
|
|
||||||
"required": 4.5,
|
|
||||||
"message": "Insufficient contrast for WCAG AA compliance",
|
|
||||||
"suggestion": "Darken yellow to #e6a200 for ratio of 4.5+"
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"recommendations": [
|
|
||||||
"Add patterns or shapes to distinguish data series, not just color",
|
|
||||||
"Include labels directly on chart elements",
|
|
||||||
"Consider using a color-blind safe palette"
|
|
||||||
],
|
|
||||||
"safe_palettes": {
|
|
||||||
"categorical": ["#4477AA", "#EE6677", "#228833", "#CCBB44", "#66CCEE", "#AA3377", "#BBBBBB"],
|
|
||||||
"sequential": ["#FEE0D2", "#FC9272", "#DE2D26"],
|
|
||||||
"diverging": ["#4575B4", "#FFFFBF", "#D73027"]
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
## WCAG Contrast Standards
|
|
||||||
|
|
||||||
| Level | Ratio | Use Case |
|
|
||||||
|-------|-------|----------|
|
|
||||||
| AA (normal text) | 4.5:1 | Body text, labels |
|
|
||||||
| AA (large text) | 3:1 | Headings, 14pt+ bold |
|
|
||||||
| AAA (enhanced) | 7:1 | Highest accessibility |
|
|
||||||
|
|
||||||
## Color-Blind Safe Palettes
|
|
||||||
|
|
||||||
The tool can suggest complete color-blind safe palettes:
|
|
||||||
|
|
||||||
### IBM Design Colors
|
|
||||||
Designed for accessibility:
|
|
||||||
```
|
|
||||||
#648FFF #785EF0 #DC267F #FE6100 #FFB000
|
|
||||||
```
|
|
||||||
|
|
||||||
### Tableau Colorblind 10
|
|
||||||
Industry-standard accessible palette:
|
|
||||||
```
|
|
||||||
#006BA4 #FF800E #ABABAB #595959 #5F9ED1
|
|
||||||
#C85200 #898989 #A2C8EC #FFBC79 #CFCFCF
|
|
||||||
```
|
|
||||||
|
|
||||||
### Okabe-Ito
|
|
||||||
Optimized for all types of color blindness:
|
|
||||||
```
|
|
||||||
#E69F00 #56B4E9 #009E73 #F0E442 #0072B2
|
|
||||||
#D55E00 #CC79A7 #000000
|
|
||||||
```
|
|
||||||
|
|
||||||
## Related Commands
|
|
||||||
|
|
||||||
- `/theme-new {name}` - Create accessible theme from the start
|
|
||||||
- `/theme-validate {name}` - General theme validation
|
|
||||||
- `/chart {type}` - Create chart (check colors after)
|
|
||||||
|
|
||||||
## Best Practices
|
|
||||||
|
|
||||||
1. **Don't rely on color alone** - Use shapes, patterns, or labels
|
|
||||||
2. **Test with simulation** - View your visualizations through color blindness simulators
|
|
||||||
3. **Use sufficient contrast** - Minimum 4.5:1 for text, 3:1 for large elements
|
|
||||||
4. **Limit color count** - Fewer colors = easier to distinguish
|
|
||||||
5. **Use semantic colors** - Blue for information, red for errors (with icons)
|
|
||||||
@@ -1,193 +0,0 @@
|
|||||||
---
|
|
||||||
description: Configure responsive breakpoints for dashboard layouts
|
|
||||||
---
|
|
||||||
|
|
||||||
# Configure Breakpoints
|
|
||||||
|
|
||||||
Configure responsive breakpoints for a layout to support mobile-first design across different screen sizes.
|
|
||||||
|
|
||||||
## Usage
|
|
||||||
|
|
||||||
```
|
|
||||||
/breakpoints {layout_ref}
|
|
||||||
```
|
|
||||||
|
|
||||||
## Arguments
|
|
||||||
|
|
||||||
- `layout_ref` (required): Layout name to configure breakpoints for
|
|
||||||
|
|
||||||
## Examples
|
|
||||||
|
|
||||||
```
|
|
||||||
/breakpoints my-dashboard
|
|
||||||
/breakpoints sales-report
|
|
||||||
```
|
|
||||||
|
|
||||||
## Tool Mapping
|
|
||||||
|
|
||||||
This command uses the `layout_set_breakpoints` MCP tool:
|
|
||||||
|
|
||||||
```python
|
|
||||||
layout_set_breakpoints(
|
|
||||||
layout_ref="my-dashboard",
|
|
||||||
breakpoints={
|
|
||||||
"xs": {"cols": 1, "spacing": "xs"}, # < 576px (mobile)
|
|
||||||
"sm": {"cols": 2, "spacing": "sm"}, # >= 576px (large mobile)
|
|
||||||
"md": {"cols": 6, "spacing": "md"}, # >= 768px (tablet)
|
|
||||||
"lg": {"cols": 12, "spacing": "md"}, # >= 992px (desktop)
|
|
||||||
"xl": {"cols": 12, "spacing": "lg"} # >= 1200px (large desktop)
|
|
||||||
},
|
|
||||||
mobile_first=True
|
|
||||||
)
|
|
||||||
```
|
|
||||||
|
|
||||||
## Workflow
|
|
||||||
|
|
||||||
1. **User invokes**: `/breakpoints my-dashboard`
|
|
||||||
2. **Agent asks**: Which breakpoints to customize? (shows current settings)
|
|
||||||
3. **Agent asks**: Mobile column count? (xs, typically 1-2)
|
|
||||||
4. **Agent asks**: Tablet column count? (md, typically 4-6)
|
|
||||||
5. **Agent applies**: Breakpoint configuration
|
|
||||||
6. **Agent returns**: Complete responsive configuration
|
|
||||||
|
|
||||||
## Breakpoint Sizes
|
|
||||||
|
|
||||||
| Name | Min Width | Common Devices |
|
|
||||||
|------|-----------|----------------|
|
|
||||||
| `xs` | 0px | Small phones (portrait) |
|
|
||||||
| `sm` | 576px | Large phones, small tablets |
|
|
||||||
| `md` | 768px | Tablets (portrait) |
|
|
||||||
| `lg` | 992px | Tablets (landscape), laptops |
|
|
||||||
| `xl` | 1200px | Desktops, large screens |
|
|
||||||
|
|
||||||
## Mobile-First Approach
|
|
||||||
|
|
||||||
When `mobile_first=True` (default), styles cascade up:
|
|
||||||
- Define base styles for `xs` (mobile)
|
|
||||||
- Override only what changes at larger breakpoints
|
|
||||||
- Smaller CSS footprint, better performance
|
|
||||||
|
|
||||||
```python
|
|
||||||
# Mobile-first example
|
|
||||||
{
|
|
||||||
"xs": {"cols": 1}, # Stack everything on mobile
|
|
||||||
"md": {"cols": 6}, # Two-column on tablet
|
|
||||||
"lg": {"cols": 12} # Full grid on desktop
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
When `mobile_first=False`, styles cascade down:
|
|
||||||
- Define base styles for `xl` (desktop)
|
|
||||||
- Override for smaller screens
|
|
||||||
- Traditional "desktop-first" approach
|
|
||||||
|
|
||||||
## Grid Configuration per Breakpoint
|
|
||||||
|
|
||||||
Each breakpoint can configure:
|
|
||||||
|
|
||||||
| Property | Description | Values |
|
|
||||||
|----------|-------------|--------|
|
|
||||||
| `cols` | Grid column count | 1-24 |
|
|
||||||
| `spacing` | Gap between items | xs, sm, md, lg, xl |
|
|
||||||
| `gutter` | Outer padding | xs, sm, md, lg, xl |
|
|
||||||
| `grow` | Items grow to fill | true, false |
|
|
||||||

## Common Patterns

### Dashboard (Charts & Filters)
```python
{
    "xs": {"cols": 1, "spacing": "xs"},  # Full-width cards
    "sm": {"cols": 2, "spacing": "sm"},  # 2 cards per row
    "md": {"cols": 3, "spacing": "md"},  # 3 cards per row
    "lg": {"cols": 4, "spacing": "md"},  # 4 cards per row
    "xl": {"cols": 6, "spacing": "lg"}   # 6 cards per row
}
```

### Data Table
```python
{
    "xs": {"cols": 1, "scroll": True},   # Horizontal scroll
    "md": {"cols": 1, "scroll": False},  # Full table visible
    "lg": {"cols": 1}                    # Same as md
}
```

### Form Layout
```python
{
    "xs": {"cols": 1},  # Single column
    "md": {"cols": 2},  # Two columns
    "lg": {"cols": 3}   # Three columns
}
```

### Sidebar Layout
```python
{
    "xs": {"sidebar": "hidden"},     # No sidebar on mobile
    "md": {"sidebar": "collapsed"},  # Icon-only sidebar
    "lg": {"sidebar": "expanded"}    # Full sidebar
}
```

## Component Span

Control how many columns a component spans at each breakpoint:

```python
# A chart that spans full width on mobile, half on desktop
{
    "component": "sales-chart",
    "span": {
        "xs": 1,  # Full width (1/1)
        "md": 3,  # Half width (3/6)
        "lg": 6   # Half width (6/12)
    }
}
```

## DMC Grid Integration

This maps to Dash Mantine Components Grid:

```python
dmc.Grid(
    children=[
        dmc.GridCol(
            children=[chart],
            span={"base": 12, "sm": 6, "lg": 4}  # Responsive span
        )
    ],
    gutter="md"
)
```

## Output

```json
{
  "layout_ref": "my-dashboard",
  "breakpoints": {
    "xs": {"cols": 1, "spacing": "xs", "min_width": "0px"},
    "sm": {"cols": 2, "spacing": "sm", "min_width": "576px"},
    "md": {"cols": 6, "spacing": "md", "min_width": "768px"},
    "lg": {"cols": 12, "spacing": "md", "min_width": "992px"},
    "xl": {"cols": 12, "spacing": "lg", "min_width": "1200px"}
  },
  "mobile_first": true,
  "css_media_queries": [
    "@media (min-width: 576px) { ... }",
    "@media (min-width: 768px) { ... }",
    "@media (min-width: 992px) { ... }",
    "@media (min-width: 1200px) { ... }"
  ]
}
```

## Related Commands

- `/dashboard {template}` - Create layout with default breakpoints
- `/layout-set-grid` - Configure grid without responsive settings
- `/theme {name}` - Theme includes default spacing values

@@ -1,114 +0,0 @@
---
description: Export a Plotly chart to PNG, SVG, or PDF format
---

# Export Chart

Export a Plotly chart to static image formats for sharing, embedding, or printing.

## Usage

```
/chart-export {format}
```

## Arguments

- `format` (required): Output format - one of: png, svg, pdf

## Examples

```
/chart-export png
/chart-export svg
/chart-export pdf
```

## Tool Mapping

This command uses the `chart_export` MCP tool:

```python
chart_export(
    figure=figure_json,  # Plotly figure JSON from chart_create
    format="png",        # Output format: png, svg, pdf
    width=1200,          # Optional: image width in pixels
    height=800,          # Optional: image height in pixels
    scale=2,             # Optional: resolution scale factor
    output_path=None     # Optional: save to file path
)
```

## Workflow

1. **User invokes**: `/chart-export png`
2. **Agent asks**: Which chart to export? (if multiple charts in context)
3. **Agent asks**: Image dimensions? (optional, uses chart defaults)
4. **Agent exports**: Chart with `chart_export` tool
5. **Agent returns**: Base64 image data or file path

## Output Formats

| Format | Best For | File Size |
|--------|----------|-----------|
| `png` | Web, presentations, general use | Medium |
| `svg` | Scalable graphics, editing | Small |
| `pdf` | Print, documents, archival | Large |

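If more than one format is needed, the `chart_export` tool shown above can simply be called once per format; a minimal sketch, assuming a hypothetical `exports/` output directory:

```python
# Illustrative only: one export per supported format
for fmt in ("png", "svg", "pdf"):
    chart_export(figure, format=fmt, output_path=f"exports/chart.{fmt}")
```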

## Resolution Options

### Width & Height
Specify exact pixel dimensions:
```python
chart_export(figure, format="png", width=1920, height=1080)
```

### Scale Factor
Increase resolution for high-DPI displays:
```python
chart_export(figure, format="png", scale=3)  # 3x resolution
```

Common scale values:
- `1` - Standard resolution (72 DPI)
- `2` - Retina/HiDPI (144 DPI)
- `3` - Print quality (216 DPI)
- `4` - High-quality print (288 DPI)

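Assuming `scale` follows Plotly's own convention of multiplying the requested width and height (rather than replacing them), a 1200x800 export at `scale=2` renders a 2400x1600 px image:

```python
# Assumed behavior: scale multiplies the requested dimensions
# width=1200, height=800 at scale=2 -> 2400x1600 px output
chart_export(figure, format="png", width=1200, height=800, scale=2)
```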

## Output Options

### Return as Base64
Default behavior - returns base64-encoded image data:
```python
result = chart_export(figure, format="png")
# result["image_data"] contains base64 string
```

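A minimal sketch of writing that payload to disk, assuming `image_data` holds the raw base64 string (no data-URI prefix):

```python
import base64

# Decode the base64 payload returned by chart_export and save it locally
result = chart_export(figure, format="png")
with open("chart.png", "wb") as f:
    f.write(base64.b64decode(result["image_data"]))
```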

### Save to File
Specify output path to save directly:
```python
result = chart_export(figure, format="png", output_path="/path/to/chart.png")
# result["file_path"] contains the saved path
```

## Requirements

This tool requires the `kaleido` package for rendering:

```bash
pip install kaleido
```

Kaleido is a cross-platform library that renders Plotly figures without a browser.

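The same dependency powers Plotly's built-in static export, so a quick way to sanity-check a local kaleido install (outside the MCP tool) is:

```python
# Quick local check that kaleido can render (standard Plotly API)
import plotly.graph_objects as go

fig = go.Figure(go.Bar(x=["a", "b"], y=[1, 3]))
fig.write_image("kaleido-check.png", scale=2)  # requires kaleido
```
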
## Error Handling

Common issues:
- **Kaleido not installed**: Install with `pip install kaleido` (a defensive check is sketched below)
- **Invalid figure**: Ensure the figure is valid Plotly JSON
- **Permission denied**: Check write permissions for the output path

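For the first of these, one illustrative way for a caller to fail fast is a guarded import (a sketch, not the tool's actual implementation):

```python
# Illustrative guard: fail with a clear message if kaleido is missing
try:
    import kaleido  # noqa: F401  (only the import matters here)
except ImportError as exc:
    raise RuntimeError(
        "Chart export needs the 'kaleido' package: pip install kaleido"
    ) from exc
```
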
## Related Commands

- `/chart {type}` - Create a chart
- `/theme {name}` - Apply theme before export
- `/dashboard` - Create layout containing charts

@@ -324,7 +324,7 @@ print_report() {
 # --- Main ---
 main() {
     echo "=============================================="
-    echo " Leo Claude Marketplace Setup (v5.1.0)"
+    echo " Leo Claude Marketplace Setup (v5.0.0)"
     echo "=============================================="
     echo ""