refactor(projman): extract skills and consolidate commands

Major refactoring of projman plugin architecture:

Skills Extraction (17 new files):
- Extracted reusable knowledge from commands and agents into skills/
- branch-security, dependency-management, git-workflow, input-detection
- issue-conventions, lessons-learned, mcp-tools-reference, planning-workflow
- progress-tracking, repo-validation, review-checklist, runaway-detection
- setup-workflows, sprint-approval, task-sizing, test-standards, wiki-conventions

Command Consolidation (17 → 12 commands):
- /setup: consolidates initial-setup, project-init, project-sync (--full/--quick/--sync)
- /debug: consolidates debug-report, debug-review (report/review modes)
- /test: consolidates test-check, test-gen (run/gen modes)
- /sprint-status: absorbs sprint-diagram via --diagram flag

Architecture Cleanup:
- Remove plugin-level mcp-servers/ symlinks (6 plugins)
- Remove plugin README.md files (12 files, ~2000 lines)
- Update all documentation to reflect new command structure
- Fix documentation drift in CONFIGURATION.md, COMMANDS-CHEATSHEET.md

Commands are now thin dispatchers (~20-50 lines) that reference skills.
Agents reference skills for domain knowledge instead of inline content.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
commit 2e65b60725 (parent 8fe685037e)
Date: 2026-01-30 15:02:16 -05:00
70 changed files with 3450 additions and 8887 deletions


@@ -0,0 +1,98 @@
---
name: branch-security
description: Branch detection, protection rules, and branch-aware authorization
---
# Branch Security
## Purpose
Defines branch detection, classification, and branch-aware authorization rules.
## When to Use
- **Planner agent**: Before planning any sprint work
- **Orchestrator agent**: Before executing any sprint tasks
- **Executor agent**: Before modifying any files
- **Commands**: `/sprint-plan`, `/sprint-start`, `/sprint-close`
---
## Branch Detection
```bash
git branch --show-current
```
## Branch Classification
| Branch Pattern | Classification | Capabilities |
|----------------|----------------|--------------|
| `development`, `develop`, `feat/*`, `fix/*`, `dev/*` | Development | Full access |
| `staging`, `stage/*` | Staging | Read-only code, can create issues |
| `main`, `master`, `prod/*` | Production | READ-ONLY, no changes |
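The classification above can be sketched as a small matcher. This helper is illustrative, not part of the plugin: the patterns and tier names come straight from the table, while the fallback for unlisted branches is an assumption (the skill does not define a default tier).

```python
from fnmatch import fnmatch

# Pattern -> classification, mirroring the table above (first match wins).
RULES = [
    (("development", "develop", "feat/*", "fix/*", "dev/*"), "development"),
    (("staging", "stage/*"), "staging"),
    (("main", "master", "prod/*"), "production"),
]

def classify_branch(name: str) -> str:
    for patterns, tier in RULES:
        if any(fnmatch(name, pattern) for pattern in patterns):
            return tier
    return "unknown"  # assumption: unlisted branches need explicit handling
```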
---
## Behavior by Classification
### Development Branches
- Full planning and execution capabilities
- Can create/modify issues, wiki, lessons
- Can execute tasks and modify code
- Normal operation
### Staging Branches
- Can create issues to document bugs
- CANNOT modify application code
- Can modify `.env` files only
- Warn user about limitations
### Production Branches
- READ-ONLY mode enforced
- Cannot create issues or modify anything
- MUST stop immediately and instruct user to switch
---
## Stop Messages
### Production Branch
```
BRANCH SECURITY: Production branch detected
You are on branch: [branch-name]
Planning and execution are NOT allowed on production branches.
Please switch to a development branch:
git checkout development
Or create a feature branch:
git checkout -b feat/[issue-number]-[description]
```
### Staging Branch Warning
```
STAGING BRANCH: Limited capabilities
Available: Create issues to document bugs
Not available: Sprint planning, code modifications
Switch to development for full capabilities:
git checkout development
```
---
## Branch Naming Conventions
| Type | Pattern | Example |
|------|---------|---------|
| Features | `feat/<issue>-<desc>` | `feat/45-jwt-service` |
| Bug fixes | `fix/<issue>-<desc>` | `fix/46-login-timeout` |
| Debugging | `debug/<issue>-<desc>` | `debug/47-memory-leak` |
**Validation:**
- Issue number MUST be present
- Prefix MUST be `feat/`, `fix/`, or `debug/`
- Description: kebab-case (lowercase, hyphens)


@@ -0,0 +1,138 @@
---
name: dependency-management
description: Parallel execution planning, dependency graphs, and file conflict prevention
---
# Dependency Management
## Purpose
Defines how to analyze dependencies, plan parallel execution, and prevent file conflicts.
## When to Use
- **Orchestrator agent**: When starting sprint execution
- **Commands**: `/sprint-start`, `/sprint-diagram`
---
## Get Execution Order
```python
get_execution_order(repo="org/repo", issue_numbers=[45, 46, 47, 48, 49])
```
Returns batches that can run in parallel:
```json
{
"batches": [
[45, 48], // Batch 1: No dependencies
[46, 49], // Batch 2: Depends on batch 1
[47] // Batch 3: Depends on batch 2
]
}
```
**Independent tasks in the same batch can run in parallel.**
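For intuition, the batching that `get_execution_order` performs can be reproduced locally with a level-by-level topological sort. This sketch is illustrative only; the MCP tool remains the source of truth for ordering.

```python
def batch_by_dependencies(deps: dict[int, set[int]]) -> list[list[int]]:
    """deps maps each issue number to the set of issues it depends on."""
    remaining = dict(deps)
    done: set[int] = set()
    batches: list[list[int]] = []
    while remaining:
        # Every issue whose dependencies are all satisfied joins this batch.
        ready = sorted(n for n, d in remaining.items() if d <= done)
        if not ready:
            raise ValueError("dependency cycle detected")
        batches.append(ready)
        done.update(ready)
        for n in ready:
            del remaining[n]
    return batches
```

With the sprint above, `{45: set(), 48: set(), 46: {45}, 49: {45}, 47: {46}}` yields `[[45, 48], [46, 49], [47]]`.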
---
## Parallel Execution Display
```
Parallel Execution Batches:
┌─────────────────────────────────────────────────────────────┐
│ Batch 1 (can start immediately): │
│ • #45 [Sprint 18] feat: Implement JWT service │
│ • #48 [Sprint 18] docs: Update API documentation │
├─────────────────────────────────────────────────────────────┤
│ Batch 2 (after batch 1): │
│ • #46 [Sprint 18] feat: Build login endpoint (needs #45) │
│ • #49 [Sprint 18] test: Add auth tests (needs #45) │
├─────────────────────────────────────────────────────────────┤
│ Batch 3 (after batch 2): │
│ • #47 [Sprint 18] feat: Create login form (needs #46) │
└─────────────────────────────────────────────────────────────┘
```
---
## File Conflict Prevention (MANDATORY)
**CRITICAL: Before dispatching parallel agents, check for file overlap.**
### Pre-Dispatch Conflict Check
1. **Identify target files** for each task in the batch
2. **Check for overlap** - Do any tasks modify the same file?
3. **If overlap detected** - Sequentialize those specific tasks
### Example Analysis
```
Batch 1 Analysis:
#45 - Implement JWT service
Files: auth/jwt_service.py, auth/__init__.py
#48 - Update API documentation
Files: docs/api.md, README.md
Overlap: NONE → Safe to parallelize ✅
Batch 2 Analysis:
#46 - Build login endpoint
Files: api/routes/auth.py, auth/__init__.py
#49 - Add auth tests
Files: tests/test_auth.py, auth/__init__.py
Overlap: auth/__init__.py → CONFLICT! ⚠️
Action: Sequentialize #46 and #49 (run #46 first)
```
### Conflict Resolution Rules
| Conflict Type | Action |
|---------------|--------|
| Same file in checklist | Sequentialize tasks |
| Same directory | Review if safe, usually OK |
| Shared test file | Sequentialize or assign different test files |
| Shared config | Sequentialize |
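The pre-dispatch check reduces to a pairwise intersection of each task's target files. A minimal sketch, assuming the orchestrator has already collected the file lists per task:

```python
from itertools import combinations

def find_conflicts(task_files: dict[int, set[str]]) -> list[tuple[int, int, set[str]]]:
    """Return (task_a, task_b, shared_files) for every overlapping pair."""
    conflicts = []
    for (a, files_a), (b, files_b) in combinations(sorted(task_files.items()), 2):
        shared = files_a & files_b
        if shared:
            conflicts.append((a, b, shared))
    return conflicts
```

For the Batch 2 example above, tasks #46 and #49 both touch `auth/__init__.py`, so the pair is reported and must be sequentialized.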
---
## Branch Isolation Protocol
Each task MUST have its own branch:
```
Task #45 → feat/45-jwt-service (isolated)
Task #48 → feat/48-api-docs (isolated)
```
Never have two agents work on the same branch.
---
## Sequential Merge After Completion
```
1. Task #45 completes → merge feat/45-jwt-service to development
2. Task #48 completes → merge feat/48-api-docs to development
3. Never merge simultaneously - always sequential to detect conflicts
```
**If Merge Conflict Occurs:**
1. Stop second task
2. Resolve conflict manually or assign to human
3. Resume/restart second task with updated base
---
## Creating Dependencies
```python
# Issue 46 depends on issue 45
create_issue_dependency(
repo="org/repo",
issue_number=46,
depends_on=45
)
```
This ensures #46 won't be scheduled until #45 completes.


@@ -0,0 +1,163 @@
---
name: git-workflow
description: Branch naming, merge process, and git operations
---
# Git Workflow
## Purpose
Defines branch naming conventions, merge protocols, and git operations.
## When to Use
- **Orchestrator agent**: When coordinating git operations
- **Executor agent**: When creating branches and commits
- **Commands**: `/sprint-start`, `/sprint-close`
---
## Branch Naming Convention (MANDATORY)
| Type | Pattern | Example |
|------|---------|---------|
| Features | `feat/<issue>-<desc>` | `feat/45-jwt-service` |
| Bug fixes | `fix/<issue>-<desc>` | `fix/46-login-timeout` |
| Debugging | `debug/<issue>-<desc>` | `debug/47-memory-leak` |
### Validation Rules
- Issue number MUST be present
- Prefix MUST be `feat/`, `fix/`, or `debug/`
- Description: kebab-case (lowercase, hyphens)
- No spaces or special characters
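These rules translate directly into a single regular expression; the helper below is a sketch for illustration, not a shipped validator.

```python
import re

# <prefix>/<issue-number>-<kebab-case-description>
BRANCH_RE = re.compile(r"^(feat|fix|debug)/\d+-[a-z0-9]+(?:-[a-z0-9]+)*$")

def is_valid_branch(name: str) -> bool:
    return BRANCH_RE.fullmatch(name) is not None
```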
### Creating Feature Branch
```bash
git checkout development
git pull origin development
git checkout -b feat/45-jwt-service
```
---
## Branch Isolation
**Each task MUST have its own branch.**
```
Task #45 → feat/45-jwt-service (isolated)
Task #48 → feat/48-api-docs (isolated)
```
Never have two agents work on the same branch.
---
## Sequential Merge Protocol
After task completion, merge sequentially (never simultaneously):
```
1. Task #45 completes → merge feat/45-jwt-service to development
2. Task #48 completes → merge feat/48-api-docs to development
3. Never merge simultaneously - always sequential to detect conflicts
```
### Merge Steps
```bash
git checkout development
git pull origin development
git merge feat/45-jwt-service --no-ff
git push origin development
git branch -d feat/45-jwt-service
```
### If Merge Conflict Occurs
1. Stop second task
2. Resolve conflict manually or assign to human
3. Resume/restart second task with updated base
---
## Commit Message Format
```
<type>: <description>
[Optional body with details]
[Optional: Closes #XX]
```
### Types
| Type | Use For |
|------|---------|
| `feat` | New feature |
| `fix` | Bug fix |
| `refactor` | Code refactoring |
| `docs` | Documentation |
| `test` | Test additions |
| `chore` | Maintenance |
### Auto-Close Keywords
Use in commit message to auto-close issues:
- `Closes #XX`
- `Fixes #XX`
- `Resolves #XX`
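A commit message can be scanned for these keywords with a small regex; this helper is illustrative and assumes only the keyword set listed above.

```python
import re

CLOSE_RE = re.compile(r"\b(?:Closes|Fixes|Resolves)\s+#(\d+)", re.IGNORECASE)

def closed_issues(commit_message: str) -> list[int]:
    """Issue numbers that a commit message would auto-close."""
    return [int(n) for n in CLOSE_RE.findall(commit_message)]
```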
### Example
```
feat: implement JWT token generation
- Add generate_token(user_id, email) function
- Add verify_token(token) function
- Include refresh logic per Sprint 12 lesson
Closes #45
```
---
## Merge Request Template
When branch protection requires MR (check via `get_branch_protection`):
```markdown
## Summary
Brief description of changes made.
## Related Issues
Closes #45
## Testing
- Describe how changes were tested
- Include test commands if relevant
```
**NEVER include subtask checklists in MR body.** The issue already has them.
---
## Sprint Close Git Operations
Offer to handle:
1. Commit any remaining changes
2. Merge feature branches to development
3. Tag sprint completion (if release)
4. Clean up merged branches
```bash
# Tag sprint completion
git tag -a v0.18.0 -m "Sprint 18 release"
git push origin v0.18.0
# Clean up merged branches
git branch -d feat/45-jwt-service feat/46-login-endpoint
```


@@ -0,0 +1,115 @@
---
name: input-detection
description: Detect planning input source (file, wiki, or conversation)
---
# Input Source Detection
## Purpose
Defines how to detect where planning input is coming from and how to handle each source.
## When to Use
- **Planner agent**: At start of sprint planning
- **Commands**: `/sprint-plan`
---
## Detection Priority
| Priority | Source | Detection | Action |
|----------|--------|-----------|--------|
| 1 | Local file | `docs/changes/*.md` exists | Parse frontmatter, migrate to wiki, delete local |
| 2 | Existing wiki | `Change VXX.X.X: Proposal` exists | Use as-is, create implementation page |
| 3 | Conversation | Neither exists | Create wiki from discussion context |
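The priority cascade, combined with the ambiguity rule in Detection Steps, can be sketched as follows. The function name and return values are illustrative, not plugin API:

```python
def detect_input_source(local_files: list[str], wiki_pages: list[str], version: str) -> str:
    """Resolve the planning input source per the priority table."""
    proposal = f"Change V{version}: Proposal"
    has_file = bool(local_files)
    has_wiki = proposal in wiki_pages
    if has_file and has_wiki:
        return "ask-user"       # multiple valid sources: let the user choose
    if has_file:
        return "file"           # priority 1: migrate to wiki, delete local
    if has_wiki:
        return "wiki"           # priority 2: use as-is
    return "conversation"       # priority 3: create wiki from discussion
```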
---
## Local File Format
```yaml
---
version: "4.1.0" # or "sprint-17" for internal work
title: "Feature Name"
plugin: plugin-name # optional
type: feature # feature | bugfix | refactor | infra
---
# Feature Description
[Free-form content...]
```
---
## Detection Steps
1. **Check for local files:**
```bash
ls docs/changes/*.md
```
2. **Check for existing wiki proposal:**
```python
list_wiki_pages(repo="org/repo")
# Filter for "Change V" prefix matching version
```
3. **If neither found:** Use conversation context
4. **If multiple sources found:** Ask user which to use
---
## Report to User
```
Input source detected:
✓ Found: docs/changes/v4.1.0-wiki-planning.md
- Version: 4.1.0
- Title: Wiki-Based Planning Workflow
- Type: feature
I'll use this as the planning input. Proceed? (y/n)
```
---
## Migration Flow (Local File → Wiki)
When using local file as input:
1. **Parse frontmatter** to extract metadata
2. **Create wiki proposal page:** `Change V4.1.0: Proposal`
3. **Create implementation page:** `Change V4.1.0: Proposal (Implementation 1)`
4. **Delete local file** - wiki is now source of truth
```
Migration complete:
✓ Created: "Change V4.1.0: Proposal" (wiki)
✓ Created: "Change V4.1.0: Proposal (Implementation 1)" (wiki)
✓ Deleted: docs/changes/v4.1.0-wiki-planning.md (migrated)
```
---
## Ambiguous Input Handling
If multiple valid sources found:
```
Multiple input sources detected:
1. Local file: docs/changes/v4.1.0-feature.md
- Version: 4.1.0
- Title: New Feature
2. Wiki proposal: Change V4.1.0: Proposal
- Status: In Progress
- Date: 2026-01-20
Which should I use for planning?
[1] Local file (will migrate to wiki)
[2] Existing wiki proposal
[3] Start fresh from conversation
```


@@ -0,0 +1,130 @@
---
name: issue-conventions
description: Issue title format, wiki references, and creation standards
---
# Issue Conventions
## Purpose
Defines standard formats for issue titles, bodies, and wiki references.
## When to Use
- **Planner agent**: When creating issues during sprint planning
- **Commands**: `/sprint-plan`
---
## Title Format (MANDATORY)
```
[Sprint XX] <type>: <description>
```
### Types
| Type | Use For |
|------|---------|
| `feat` | New feature |
| `fix` | Bug fix |
| `refactor` | Code refactoring |
| `docs` | Documentation |
| `test` | Test additions/changes |
| `chore` | Maintenance tasks |
### Examples
- `[Sprint 17] feat: Add user email validation`
- `[Sprint 17] fix: Resolve login timeout issue`
- `[Sprint 18] refactor: Extract authentication module`
- `[Sprint 18] test: Add JWT token edge case tests`
- `[Sprint 19] docs: Update API documentation`
---
## Issue Body Structure
Every issue body MUST include:
```markdown
## Description
[Clear description of the task]
## Implementation
**Wiki:** [Change VXX.X.X (Impl N)](wiki-link)
## Acceptance Criteria
- [ ] Criteria 1
- [ ] Criteria 2
- [ ] Criteria 3
## Technical Notes
[Optional: Architecture decisions, constraints, considerations]
```
---
## Wiki Reference (MANDATORY)
Every issue MUST reference its implementation wiki page:
```markdown
## Implementation
**Wiki:** [Change V4.1.0 (Impl 1)](https://gitea.example.com/org/repo/wiki/Change-V4.1.0%3A-Proposal-(Implementation-1))
```
This enables:
- Traceability between issues and proposals
- Context for the broader feature being implemented
- Connection to lessons learned
---
## Issue Creation Example
```python
create_issue(
repo="org/repo",
title="[Sprint 17] feat: Implement JWT generation",
body="""## Description
Create a JWT token generation service for user authentication.
## Implementation
**Wiki:** [Change V1.2.0 (Impl 1)](wiki-link)
## Acceptance Criteria
- [ ] Generate tokens with user_id, email, expiration
- [ ] Use HS256 algorithm
- [ ] Include token refresh logic
- [ ] Unit tests cover all paths
## Technical Notes
- Token expiration: 24 hours
- Refresh window: last 4 hours of validity
- See Sprint 12 lesson on token refresh edge cases
""",
labels=["Type/Feature", "Priority/High", "Component/Auth", "Tech/Python", "Efforts/M"],
milestone=17
)
```
---
## Auto-Close Keywords
Use in commit messages to auto-close issues:
- `Closes #XX`
- `Fixes #XX`
- `Resolves #XX`
Example commit:
```
feat: implement JWT token generation
- Add generate_token(user_id, email) function
- Add verify_token(token) function
- Include refresh logic per Sprint 12 lesson
Closes #45
```


@@ -0,0 +1,139 @@
---
name: lessons-learned
description: Capture and search workflow for lessons learned system
---
# Lessons Learned System
## Purpose
Defines the workflow for capturing lessons at sprint close and searching them at sprint start/plan.
## When to Use
- **Planner agent**: Search lessons at sprint start
- **Orchestrator agent**: Capture lessons at sprint close
- **Commands**: `/sprint-plan`, `/sprint-start`, `/sprint-close`
---
## Searching Lessons (Sprint Start/Plan)
**ALWAYS search for past lessons before planning or executing.**
```python
search_lessons(
repo="org/repo",
query="relevant keywords",
tags=["technology", "component"],
limit=10
)
```
**Present findings:**
```
Relevant lessons from previous sprints:
📚 Sprint 12: "JWT Token Expiration Edge Cases"
Tags: auth, jwt, python
Key lesson: Handle token refresh explicitly
📚 Sprint 8: "Service Extraction Boundaries"
Tags: architecture, refactoring
Key lesson: Define API contracts BEFORE extracting
```
---
## Capturing Lessons (Sprint Close)
### Interview Questions
Ask these probing questions:
1. What challenges did you face this sprint?
2. What worked well and should be repeated?
3. Were there any preventable mistakes?
4. Did any technical decisions need adjustment?
5. What would you do differently?
### Lesson Structure
```markdown
# Sprint N - [Lesson Title]
## Metadata
- **Implementation:** [Change VXX.X.X (Impl N)](wiki-link)
- **Issues:** #XX, #XX
- **Sprint:** Sprint N
## Context
[What were you doing?]
## Problem
[What went wrong / insight / challenge?]
## Solution
[How did you solve it?]
## Prevention
[How to avoid in future?]
## Tags
technology, component, type, pattern
```
### Creating Lesson
```python
create_lesson(
repo="org/repo",
title="Sprint N - Lesson Title",
content="[structured content above]",
tags=["tag1", "tag2"],
category="sprints"
)
```
---
## Tagging Strategy
**By Technology:** python, javascript, docker, postgresql, redis, vue, fastapi
**By Component:** backend, frontend, api, database, auth, deploy, testing, docs
**By Type:** bug, feature, refactor, architecture, performance, security
**By Pattern:** infinite-loop, edge-case, integration, boundaries, dependencies
---
## Example Lessons
### Technical Gotcha
```markdown
# Sprint 16 - Claude Code Infinite Loop on Validation Errors
## Metadata
- **Implementation:** [Change V1.2.0 (Impl 1)](wiki-link)
- **Issues:** #45, #46
- **Sprint:** Sprint 16
## Context
Implementing input validation for authentication API.
## Problem
Claude Code entered infinite loop when pytest validation tests failed.
The loop occurred because error messages didn't change between attempts.
## Solution
Added more descriptive error messages specifying what value failed and why.
## Prevention
- Write validation test errors with specific values
- If Claude loops, check if errors provide unique information
- Add loop detection (fail after 3 identical errors)
## Tags
testing, claude-code, validation, python, pytest
```


@@ -0,0 +1,145 @@
---
name: mcp-tools-reference
description: Complete reference of available Gitea MCP tools with usage patterns
---
# MCP Tools Reference
## Purpose
Provides the complete reference of available MCP tools for Gitea operations. This skill ensures consistent tool usage across all commands and agents.
## When to Use
- **All agents**: When performing any Gitea operation
- **All commands**: That interact with issues, labels, milestones, wiki, or dependencies
## Critical Rules
### NEVER Use CLI Tools
**FORBIDDEN - Do not use:**
```bash
tea issue list
tea issue create
tea pr create
gh issue list
gh pr create
curl -X POST "https://gitea.../api/..."
```
**If you find yourself about to run a bash command for Gitea, STOP and use the MCP tool instead.**
### Required Parameter Format
All tools require the `repo` parameter in `owner/repo` format:
```python
# CORRECT
get_labels(repo="org/repo")
list_issues(repo="org/repo", state="open")
# INCORRECT - Will fail!
get_labels() # Missing repo parameter
```
---
## Issue Tools
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `list_issues` | Fetch issues | `repo`, `state`, `labels`, `milestone` |
| `get_issue` | Get issue details | `repo`, `number` |
| `create_issue` | Create new issue | `repo`, `title`, `body`, `labels`, `assignee`, `milestone` |
| `update_issue` | Update issue | `repo`, `number`, `state`, `labels`, `body` |
| `add_comment` | Add comment to issue | `repo`, `number`, `body` |
## Label Tools
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `get_labels` | Fetch all labels | `repo` |
| `suggest_labels` | Get intelligent suggestions | `repo`, `context` |
| `create_label` | Create repository label | `repo`, `name`, `color` |
| `create_label_smart` | Auto-detect org vs repo | `repo`, `name`, `color` |
## Milestone Tools
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `list_milestones` | List all milestones | `repo`, `state` |
| `get_milestone` | Get milestone details | `repo`, `milestone_id` |
| `create_milestone` | Create new milestone | `repo`, `title`, `description`, `due_on` |
| `update_milestone` | Update milestone | `repo`, `milestone_id`, `state` |
## Dependency Tools
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `list_issue_dependencies` | Get issue dependencies | `repo`, `issue_number` |
| `create_issue_dependency` | Create dependency | `repo`, `issue_number`, `depends_on` |
| `get_execution_order` | Get parallel batches | `repo`, `issue_numbers` |
## Wiki & Lessons Learned Tools
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `list_wiki_pages` | List all wiki pages | `repo` |
| `get_wiki_page` | Fetch page content | `repo`, `page_name` |
| `create_wiki_page` | Create new page | `repo`, `title`, `content` |
| `update_wiki_page` | Update page content | `repo`, `page_name`, `content` |
| `search_lessons` | Search lessons learned | `repo`, `query`, `tags`, `limit` |
| `create_lesson` | Create lesson entry | `repo`, `title`, `content`, `tags`, `category` |
## Validation Tools
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `validate_repo_org` | Check if repo is under org | `repo` |
| `get_branch_protection` | Check branch protection | `repo`, `branch` |
## Pull Request Tools
| Tool | Purpose | Key Parameters |
|------|---------|----------------|
| `list_pull_requests` | List PRs | `repo`, `state` |
| `get_pull_request` | Get PR details | `repo`, `number` |
| `create_pull_request` | Create PR | `repo`, `title`, `body`, `head`, `base` |
| `create_pr_review` | Create review | `repo`, `number`, `body`, `event` |
---
## Common Usage Examples
**Create sprint issue:**
```python
create_issue(
repo="org/repo",
title="[Sprint 17] feat: Implement JWT service",
body="## Description\n...\n## Implementation\n**Wiki:** [link]",
labels=["Type/Feature", "Priority/High"],
milestone=17
)
```
**Get parallel execution batches:**
```python
get_execution_order(repo="org/repo", issue_numbers=[45, 46, 47, 48])
# Returns: {"batches": [[45, 48], [46], [47]]}
```
**Search lessons learned:**
```python
search_lessons(repo="org/repo", tags=["auth", "python"], limit=10)
```
**Create lesson:**
```python
create_lesson(
repo="org/repo",
title="Sprint 18 - JWT Token Edge Cases",
content="# Sprint 18...\n## Context\n...",
tags=["auth", "jwt", "python"],
category="sprints"
)
```


@@ -0,0 +1,134 @@
---
name: planning-workflow
description: The complete sprint planning process steps
---
# Planning Workflow
## Purpose
Defines the complete 11-step planning workflow from validation through approval.
## When to Use
- **Planner agent**: When executing `/sprint-plan`
- **Commands**: `/sprint-plan`
---
## Workflow Steps
### 1. Understand Sprint Goals
Ask clarifying questions:
- What are the sprint objectives?
- What's the scope and priority?
- Are there any constraints (time, resources, dependencies)?
- What's the desired outcome?
**Never rush - take time to understand requirements fully.**
### 2. Run Pre-Planning Validations
Execute in order:
1. **Branch detection** - See `skills/branch-security.md`
2. **Repository org check** - See `skills/repo-validation.md`
3. **Label taxonomy validation** - See `skills/repo-validation.md`
**STOP if any validation fails.**
### 3. Detect Input Source
Follow `skills/input-detection.md`:
1. Check for `docs/changes/*.md` files
2. Check for existing wiki proposal
3. If neither: use conversation context
4. If ambiguous: ask user
### 4. Search Relevant Lessons Learned
Follow `skills/lessons-learned.md`:
```python
search_lessons(repo="org/repo", query="sprint keywords", tags=["relevant", "tags"])
```
Present findings to user before proceeding.
### 5. Create/Update Wiki Proposal
Follow `skills/wiki-conventions.md`:
- If local file: migrate content to wiki, create proposal page
- If conversation: create proposal from discussion
- If existing wiki: skip creation, use as-is
### 6. Create Wiki Implementation Page
Follow `skills/wiki-conventions.md`:
- Create `Change VXX.X.X: Proposal (Implementation N)`
- Update proposal page with link to implementation
### 7. Architecture Analysis
Think through:
- What components will be affected?
- What are the integration points?
- Are there edge cases to handle?
- What dependencies exist?
- What are potential risks?
### 8. Create Gitea Issues
Follow `skills/issue-conventions.md` and `skills/task-sizing.md`:
- Use proper title format: `[Sprint XX] <type>: <description>`
- Include wiki implementation reference
- Apply appropriate labels using `suggest_labels`
- **Refuse to create L/XL tasks without breakdown**
### 9. Set Up Dependencies
```python
create_issue_dependency(repo="org/repo", issue_number=46, depends_on=45)
```
### 10. Create or Select Milestone
```python
create_milestone(
repo="org/repo",
title="Sprint 17 - Feature Name",
description="Sprint description",
due_on="2025-02-01T00:00:00Z"
)
```
Assign issues to the milestone.
### 11. Request Sprint Approval
Follow `skills/sprint-approval.md`:
- Present approval request with scope summary
- Wait for explicit user approval
- Record approval in milestone description
---
## Cleanup After Planning
- Delete local input file (wiki is now source of truth)
- Summarize architectural decisions
- List created issues with labels
- Document dependency graph
- Provide sprint overview with wiki links
---
## Visual Output
Display header at start:
```
╔══════════════════════════════════════════════════════════════════╗
║ 📋 PROJMAN ║
║ 🎯 PLANNING ║
║ [Sprint Name] ║
╚══════════════════════════════════════════════════════════════════╝
```


@@ -0,0 +1,192 @@
---
name: progress-tracking
description: Structured progress comments and status label management
---
# Progress Tracking
## Purpose
Defines structured progress comment format and status label management.
## When to Use
- **Orchestrator agent**: When tracking sprint execution
- **Executor agent**: When posting progress updates
- **Commands**: `/sprint-start`, `/sprint-status`
---
## Status Labels
| Label | Meaning | When to Apply |
|-------|---------|---------------|
| `Status/In-Progress` | Work actively happening | When dispatching task |
| `Status/Blocked` | Cannot proceed | When dependency or blocker found |
| `Status/Failed` | Task failed | When task cannot complete |
| `Status/Deferred` | Moved to future | When deprioritized |
### Rules
- Only ONE Status label at a time
- Remove Status labels when closing successfully
- Always add comment explaining status changes
---
## Applying Status Labels
**When dispatching:**
```python
update_issue(
    repo="org/repo",
    number=45,
    labels=["Status/In-Progress", *existing_labels]
)
```
**When blocked:**
```python
update_issue(
    repo="org/repo",
    number=46,
    labels=["Status/Blocked", *labels_without_in_progress]
)
add_comment(repo="org/repo", number=46, body="🚫 BLOCKED: Waiting for #45")
```
**When failed:**
```python
update_issue(
    repo="org/repo",
    number=47,
    labels=["Status/Failed", *labels_without_in_progress]
)
add_comment(repo="org/repo", number=47, body="❌ FAILED: [Error description]")
```
**On successful close:**
```python
update_issue(
    repo="org/repo",
    number=45,
    state="closed",
    labels=[*labels_without_status]  # Remove all Status/* labels
)
```
---
## Structured Progress Comment Format
```markdown
## Progress Update
**Status:** In Progress | Blocked | Failed
**Phase:** [current phase name]
**Tool Calls:** X (budget: Y)
### Completed
- [x] Step 1
- [x] Step 2
### In Progress
- [ ] Current step (estimated: Z more calls)
### Blockers
- None | [blocker description]
### Next
- What happens after current step
```
---
## When to Post Progress Comments
- After completing each major phase (every 20-30 tool calls)
- When status changes (blocked, failed)
- When encountering unexpected issues
- Before approaching tool call budget limit
---
## Checkpoint Format (Resume Support)
For resume support, save checkpoints after major steps:
```markdown
## Checkpoint
**Branch:** feat/45-jwt-service
**Commit:** abc123
**Phase:** Testing
**Tool Calls:** 67
### Completed Steps
- [x] Created auth/jwt_service.py
- [x] Implemented generate_token()
- [x] Implemented verify_token()
### Pending Steps
- [ ] Write unit tests
- [ ] Add refresh logic
- [ ] Commit and push
### Files Modified
- auth/jwt_service.py (new)
- auth/__init__.py (modified)
```
---
## Sprint Progress Display
```
┌─ Sprint Progress ────────────────────────────────────────────────┐
│ Sprint 18 - User Authentication │
│ ████████████░░░░░░░░░░░░░░░░░░ 40% complete │
│ ✅ Done: 4 ⏳ Active: 2 ⬚ Pending: 4 │
│ Current: │
│ #45 ⏳ Implement JWT service │
│ #46 ⏳ Build login endpoint │
└──────────────────────────────────────────────────────────────────┘
```
### Progress Bar Calculation
- Width: 30 characters
- Filled: `█` (completed percentage)
- Empty: `░` (remaining percentage)
- Formula: `(closed_issues / total_issues) * 30`
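Rendered as code (rounding to the nearest cell is an assumption; the skill only gives the proportional formula):

```python
def progress_bar(closed: int, total: int, width: int = 30) -> str:
    filled = round(width * closed / total) if total else 0
    return "█" * filled + "░" * (width - filled)
```

For example, `progress_bar(4, 10)` fills 12 of 30 cells, matching the 40% display above.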
---
## Parallel Execution Status
```
Parallel Execution Status:
Batch 1:
✅ #45 - JWT service - COMPLETED (12:45)
🔄 #48 - API docs - IN PROGRESS (75%)
Batch 2 (now unblocked):
⏳ #46 - Login endpoint - READY TO START
⏳ #49 - Auth tests - READY TO START
#45 completed! #46 and #49 are now unblocked.
```
---
## Auto-Check Subtasks on Close
When closing an issue, update unchecked subtasks in body:
```python
# Change - [ ] to - [x] for completed items
update_issue(
    repo="org/repo",
    number=45,
    body="... - [x] Completed subtask ..."
)
```
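The checkbox flip itself can be done with one multiline substitution before calling `update_issue`; this regex sketch assumes Gitea-style `- [ ]` task syntax:

```python
import re

def check_all_subtasks(body: str) -> str:
    """Flip every unchecked '- [ ]' checkbox in an issue body to '- [x]'."""
    return re.sub(r"^(\s*)- \[ \]", r"\1- [x]", body, flags=re.MULTILINE)
```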


@@ -0,0 +1,119 @@
---
name: repo-validation
description: Repository organization check and label taxonomy validation
---
# Repository Validation
## Purpose
Validates that the repository belongs to an organization and has the required label taxonomy.
## When to Use
- **Planner agent**: At start of sprint planning
- **Commands**: `/sprint-plan`, `/labels-sync`, `/project-init`
---
## Step 1: Detect Repository from Git Remote
```bash
git remote get-url origin
```
Parse output to extract `owner/repo`:
- SSH: `git@host:owner/repo.git` → `owner/repo`
- SSH with port: `ssh://git@host:port/owner/repo.git` → `owner/repo`
- HTTPS: `https://host/owner/repo.git` → `owner/repo`
---
## Step 2: Validate Organization Ownership
```python
validate_repo_org(repo="owner/repo")
```
**If NOT an organization repository:**
```
REPOSITORY VALIDATION FAILED
This plugin requires the repository to belong to an organization, not a user.
Current repository appears to be a personal repository.
Please:
1. Create an organization in Gitea
2. Transfer or create the repository under that organization
3. Update your configuration
```
---
## Step 3: Validate Label Taxonomy
```python
get_labels(repo="owner/repo")
```
**Required label categories:**
| Category | Required Labels |
|----------|-----------------|
| Type/* | Bug, Feature, Refactor, Documentation, Test, Chore |
| Priority/* | Low, Medium, High, Critical |
| Complexity/* | Simple, Medium, Complex |
| Efforts/* | XS, S, M, L, XL |
**If labels are missing:**
- Use `create_label_smart()` to create them (auto-detects org vs repo level)
- Report which labels were created
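Checking the taxonomy is a set difference between the required `Category/Name` pairs and the labels returned by `get_labels`. A sketch, with the required set copied from the table above:

```python
REQUIRED = {
    "Type": ["Bug", "Feature", "Refactor", "Documentation", "Test", "Chore"],
    "Priority": ["Low", "Medium", "High", "Critical"],
    "Complexity": ["Simple", "Medium", "Complex"],
    "Efforts": ["XS", "S", "M", "L", "XL"],
}

def missing_labels(existing: set[str]) -> list[str]:
    """Return required 'Category/Name' labels absent from the repo."""
    return [f"{cat}/{name}"
            for cat, names in REQUIRED.items()
            for name in names
            if f"{cat}/{name}" not in existing]
```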
---
## Validation Report Format
```
Repository Validation
=====================
Git Remote: git@gitea.example.com:org/repo.git
Detected: org/repo
Organization Check:
✓ Repository belongs to organization "org"
Label Taxonomy:
✓ Type/* labels: 6/6 present
✓ Priority/* labels: 4/4 present
✓ Complexity/* labels: 3/3 present
✓ Efforts/* labels: 5/5 present
All validations passed. Ready for planning.
```
---
## Error Handling
### Repository Not Found (404)
```
Repository validation failed: Not found
The repository "owner/repo" does not exist or you don't have access.
Please verify:
1. Repository name is correct
2. Your token has repository access
3. Organization/owner name is correct
```
### Authentication Error (401/403)
```
Repository validation failed: Authentication error
Your Gitea token may be invalid or lack permissions.
Please verify:
1. Token is valid and not expired
2. Token has 'repo' scope
3. You have access to this repository
```


@@ -0,0 +1,149 @@
---
name: review-checklist
description: Code review criteria and severity classification
---
# Review Checklist
## Purpose
Defines code review criteria, severity classification, and output format.
## When to Use
- **Code Reviewer agent**: During pre-sprint-close review
- **Commands**: `/review`
---
## Severity Classification
### Critical (Must Fix Before Close)
Security issues, broken functionality, data loss risks:
- Hardcoded credentials or API keys
- SQL injection vulnerabilities
- Missing authentication/authorization checks
- Unhandled errors that could crash the application
- Data loss or corruption risks
- Broken core functionality
### Warning (Should Fix)
Technical debt that will cause problems soon:
- TODO/FIXME comments left unresolved
- Debug statements (console.log, print) in production code
- Functions over 50 lines (complexity smell)
- Deeply nested conditionals (>3 levels)
- Bare except/catch blocks
- Ignored errors
- Missing error handling
### Recommendation (Future Sprint)
Improvements that can wait:
- Missing docstrings on public functions
- Minor code duplication
- Commented-out code blocks
- Variable naming improvements
- Minor refactoring opportunities
---
## Review Patterns by Language
### Python
| Look For | Severity |
|----------|----------|
| Bare `except:` | Warning |
| `print()` statements | Warning |
| `# TODO` | Warning |
| Missing type hints on public APIs | Recommendation |
| `eval()`, `exec()` | Critical |
| SQL string formatting | Critical |
| `verify=False` in requests | Critical |
### JavaScript/TypeScript
| Look For | Severity |
|----------|----------|
| `console.log` | Warning |
| `// TODO` | Warning |
| `any` type abuse | Warning |
| Missing error boundaries | Warning |
| `eval()` | Critical |
| `innerHTML` with user input | Critical |
| Unescaped user input | Critical |
### Go
| Look For | Severity |
|----------|----------|
| `// TODO` | Warning |
| Ignored errors (`_`) | Warning |
| Missing error returns | Warning |
| SQL concatenation | Critical |
| Missing input validation | Warning |
### Rust
| Look For | Severity |
|----------|----------|
| `// TODO` | Warning |
| `unwrap()` chains | Warning |
| `unsafe` blocks without justification | Warning |
| Unchecked `unwrap()` on user input | Critical |
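A scanner applying a few of the Python rows above might look like this. The regexes are illustrative and deliberately loose; a real reviewer should still read each match in context:

```python
import re

# (regex, severity) pairs covering some Python rows from the table above
PYTHON_PATTERNS = [
    (re.compile(r"\bexcept\s*:"), "Warning"),         # bare except
    (re.compile(r"\bprint\("), "Warning"),            # debug print
    (re.compile(r"#\s*TODO"), "Warning"),             # unresolved TODO
    (re.compile(r"\beval\(|\bexec\("), "Critical"),   # dynamic execution
    (re.compile(r"verify\s*=\s*False"), "Critical"),  # disabled TLS verification
]

def scan_python(source: str) -> list[tuple[int, str, str]]:
    """Return (line_number, severity, matched_text) findings."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, severity in PYTHON_PATTERNS:
            m = pattern.search(line)
            if m:
                findings.append((lineno, severity, m.group(0)))
    return findings
```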
---
## What NOT to Review
- Style issues (assume formatters handle this)
- Architectural rewrites mid-sprint
- Issues in unchanged code (unless directly impacted)
- Bikeshedding on naming preferences
---
## Output Template
```
## Code Review Summary
**Scope**: [X files from sprint/last N commits]
**Verdict**: [READY FOR CLOSE / NEEDS ATTENTION / BLOCKED]
### Critical (Must Fix)
- `src/auth.py:45` - Hardcoded API key in source code
- `src/db.py:123` - SQL injection vulnerability
### Warnings (Should Fix)
- `src/utils.js:123` - console.log left in production code
- `src/handler.py:67` - Bare except block swallows all errors
### Recommendations (Future Sprint)
- `src/api.ts:89` - Function exceeds 50 lines, consider splitting
### Clean Files
- src/models.py
- src/tests/test_auth.py
```
---
## Verdict Criteria
| Verdict | Criteria |
|---------|----------|
| **READY FOR CLOSE** | No Critical, few/no Warnings |
| **NEEDS ATTENTION** | No Critical, has Warnings that should be addressed |
| **BLOCKED** | Has Critical issues that must be fixed |
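As a sketch, the verdict mapping reduces to a small function. The warning threshold is an assumption, since the table only says "few/no Warnings":

```python
def verdict(criticals: int, warnings: int, warning_threshold: int = 3) -> str:
    """Map finding counts to a review verdict (threshold is an assumption)."""
    if criticals > 0:
        return "BLOCKED"
    if warnings > warning_threshold:
        return "NEEDS ATTENTION"
    return "READY FOR CLOSE"
```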
---
## Integration with Sprint
When sprint context is available:
- Reference the sprint's issue list
- Create follow-up issues for non-critical findings
- Tag findings with appropriate labels from taxonomy


@@ -0,0 +1,156 @@
---
name: runaway-detection
description: Detecting and handling stuck agents
---
# Runaway Detection
## Purpose
Defines how to detect stuck agents and intervention protocols.
## When to Use
- **Orchestrator agent**: When monitoring dispatched agents
- **Executor agent**: Self-monitoring during execution
---
## Warning Signs
| Sign | Threshold | Action |
|------|-----------|--------|
| No progress comment | 30+ minutes | Investigate |
| Same phase repeated | 20+ tool calls | Consider stopping |
| Same error 3+ times | Immediately | Stop agent |
| Approaching budget | 80% of limit | Post checkpoint |
---
## Agent Timeout Guidelines
| Task Size | Expected Duration | Intervention Point |
|-----------|-------------------|-------------------|
| XS | ~5-10 min | 15 min no progress |
| S | ~10-20 min | 30 min no progress |
| M | ~20-40 min | 45 min no progress |
---
## Detection Protocol
1. **Read latest progress comment** - Check tool call count and phase
2. **Compare to previous** - Is progress happening?
3. **Check for error patterns** - Same error repeating?
4. **Evaluate time elapsed** - Beyond expected duration?
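The warning-sign thresholds can be combined into one assessment helper (a sketch; parameter names are hypothetical):

```python
def assess_agent(minutes_since_progress: float,
                 calls_in_same_phase: int,
                 repeated_error_count: int,
                 budget_used_fraction: float) -> str:
    """Classify agent state per the warning-sign thresholds above."""
    if repeated_error_count >= 3:
        return "STOP"            # same error 3+ times: stop immediately
    if minutes_since_progress >= 30 or calls_in_same_phase >= 20:
        return "LIKELY STUCK"    # investigate / consider stopping
    if budget_used_fraction >= 0.8:
        return "CHECKPOINT"      # post a progress checkpoint
    return "ON TRACK"
```

Note the ordering: a repeated error trumps everything else, matching the "Immediately" action in the table.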
---
## Intervention Protocol
When you detect an agent may be stuck:
### Step 1: Assess
```
Agent Status Check for #45:
- Last progress: 25 minutes ago
- Phase: "Testing" (same as 20 tool calls ago)
- Errors: "ModuleNotFoundError" (3 times)
- Assessment: LIKELY STUCK
```
### Step 2: Stop Agent
```python
# If TaskStop available
TaskStop(task_id="agent-id")
```
### Step 3: Update Issue Status
```python
update_issue(
repo="org/repo",
issue_number=45,
labels=["Status/Failed", ...other_labels]
)
```
### Step 4: Add Explanation Comment
```python
add_comment(
repo="org/repo",
number=45,
body="""## Agent Intervention
**Reason:** No progress detected for 25 minutes / repeated errors
**Last Status:** Testing phase, ModuleNotFoundError x3
**Action:** Stopped agent, requires human review
### What Was Completed
- [x] Created auth/jwt_service.py
- [x] Implemented generate_token()
### What Remains
- [ ] Fix import issue
- [ ] Write tests
- [ ] Commit
### Recommendation
- Check for missing dependency in requirements.txt
- May need manual intervention to resolve import
"""
)
```
---
## Self-Monitoring (Executor)
Executors should self-monitor:
### Circuit Breakers
- **Same error 3 times**: Stop and report
- **80% of tool call budget**: Post checkpoint
- **File not found 3 times**: Stop and ask for help
- **Test failing same way 5 times**: Stop and report
### Self-Check Template
```
Self-check at tool call 45/100:
- Progress: 4/7 steps completed
- Current phase: Testing
- Errors encountered: 1 (resolved)
- Remaining budget: 55 calls
- Status: ON TRACK
```
---
## Recovery Actions
After stopping a stuck agent:
1. **Preserve work** - Branch and commits remain
2. **Document state** - Checkpoint in issue comment
3. **Identify cause** - What caused the loop?
4. **Plan recovery**:
- Manual completion
- Different approach
- Break down further
- Assign to human
---
## Common Stuck Patterns
| Pattern | Cause | Solution |
|---------|-------|----------|
| Import loop | Missing dependency | Add to requirements |
| Test loop | Non-deterministic test | Fix test isolation |
| Validation loop | Error message not changing | Improve error specificity |
| File not found | Wrong path | Verify path exists |
| Permission denied | File ownership | Check permissions |


@@ -0,0 +1,223 @@
---
name: setup-workflows
description: Shared workflows for the /setup command modes
---
# Setup Workflows
Shared workflows for the `/setup` command modes.
## Mode Detection Logic
Determine setup mode automatically:
```
1. Check ~/.config/claude/gitea.env exists
- If missing → FULL mode needed
2. If gitea.env exists, check project .env
- If .env missing → QUICK mode (project setup)
3. If both exist, compare git remote with .env values
- If mismatch → SYNC mode needed
- If match → already configured, offer reconfigure option
```
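The decision tree above reduces to three boolean checks. A sketch, with the filesystem and git-remote checks abstracted into parameters:

```python
def detect_mode(system_config_exists: bool,
                project_env_exists: bool,
                remote_matches_env: bool) -> str:
    """Resolve the /setup mode from the three checks above."""
    if not system_config_exists:    # no ~/.config/claude/gitea.env
        return "FULL"
    if not project_env_exists:      # no project .env
        return "QUICK"
    if not remote_matches_env:      # git remote disagrees with .env
        return "SYNC"
    return "CONFIGURED"             # already set up; offer reconfigure
```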
## Full Setup Workflow
Complete first-time setup including MCP servers, credentials, and project.
### Phase 1: Environment Validation
```bash
# Check Python 3.10+
python3 --version # Should be 3.10+
```
If Python < 3.10, stop and ask user to install.
### Phase 2: MCP Server Setup
1. **Locate Plugin Installation**
```bash
ls ~/.claude/plugins/marketplaces/leo-claude-mktplace/mcp-servers/gitea/
```
2. **Check Virtual Environment**
```bash
ls ~/.claude/plugins/marketplaces/leo-claude-mktplace/mcp-servers/gitea/venv/
```
3. **Create venv if Missing**
```bash
cd ~/.claude/plugins/marketplaces/leo-claude-mktplace/mcp-servers/gitea/
python3 -m venv venv
./venv/bin/pip install -r requirements.txt
```
4. **Optional: NetBox MCP Setup**
Ask user if they need NetBox integration.
### Phase 3: System-Level Configuration
1. **Create Config Directory**
```bash
mkdir -p ~/.config/claude
```
2. **Create gitea.env Template**
```env
GITEA_API_URL=https://your-gitea-instance/api/v1
GITEA_API_TOKEN=
```
3. **Token Entry (SECURITY)**
- DO NOT ask for token in chat
- Instruct user to edit file manually
- Provide command: `nano ~/.config/claude/gitea.env`
4. **Validate Token**
```bash
curl -s -H "Authorization: token $TOKEN" "$GITEA_API_URL/user"
```
- 200 OK = valid
- 401 = invalid token
### Phase 4: Project-Level Configuration
See **Quick Setup Workflow** below.
### Phase 5: Final Validation
Display summary:
- MCP server: ✓/✗
- System config: ✓/✗
- Project config: ✓/✗
**Important:** Session restart required for MCP tools to load.
---
## Quick Setup Workflow
Project-level setup only (assumes system config exists).
### Step 1: Verify System Config
```bash
cat ~/.config/claude/gitea.env
```
If missing or empty token, redirect to FULL mode.
### Step 2: Verify Git Repository
```bash
git rev-parse --git-dir 2>/dev/null
```
If not a git repo, stop and inform user.
### Step 3: Check Existing Config
If `.env` exists:
- Show current GITEA_ORG and GITEA_REPO
- Ask: Keep current or reconfigure?
### Step 4: Detect Org/Repo
Parse git remote URL:
```bash
git remote get-url origin
# https://gitea.example.com/org-name/repo-name.git
# → org: org-name, repo: repo-name
```
### Step 5: Validate via API
```bash
curl -s -H "Authorization: token $TOKEN" \
"$GITEA_API_URL/repos/$ORG/$REPO"
```
- 200 OK = auto-fill without asking
- 404 = ask user to confirm/correct
### Step 6: Create .env
```env
GITEA_ORG=detected-org
GITEA_REPO=detected-repo
```
### Step 7: Check .gitignore
If `.env` not in `.gitignore`:
- Warn user about security risk
- Offer to add it
---
## Sync Workflow
Update project config when git remote changed.
### Step 1: Read Current Config
```bash
grep GITEA_ORG .env
grep GITEA_REPO .env
```
### Step 2: Detect Git Remote
Parse current remote URL (same as Quick Step 4).
### Step 3: Compare Values
| Current vs. Detected | Action |
|----------------------|--------|
| Match | "Already in sync" - exit |
| Differ | Show diff, ask to update |
### Step 4: Show Changes
```
Current: org/old-repo
Detected: org/new-repo
Update configuration? [y/n]
```
### Step 5: Validate New Values
API check on detected org/repo.
### Step 6: Update .env
Replace GITEA_ORG and GITEA_REPO values.
### Step 7: Confirm
```
✓ Project configuration updated
GITEA_ORG: new-org
GITEA_REPO: new-repo
```
---
## Visual Header
All setup modes use:
```
╔══════════════════════════════════════════════════════════════════╗
║ 📋 PROJMAN ║
║ ⚙️ SETUP ║
║ [Mode: Full | Quick | Sync] ║
╚══════════════════════════════════════════════════════════════════╝
```
## DO NOT
- Ask for tokens in chat (security risk)
- Skip venv creation for MCP servers
- Create .env without checking .gitignore
- Proceed if API validation fails


@@ -0,0 +1,136 @@
---
name: sprint-approval
description: Approval gate logic for sprint execution
---
# Sprint Approval
## Purpose
Defines the approval workflow that gates sprint execution.
## When to Use
- **Planner agent**: After creating issues, request approval
- **Orchestrator agent**: Before execution, verify approval exists
- **Commands**: `/sprint-plan`, `/sprint-start`
---
## Core Principle
**Planning DOES NOT equal execution permission.**
Sprint approval is a mandatory checkpoint between planning and execution.
---
## Requesting Approval (Planner)
After creating issues, present approval request:
```
Sprint 17 Planning Complete
===========================
Created Issues:
- #45: [Sprint 17] feat: JWT token generation
- #46: [Sprint 17] feat: Login endpoint
- #47: [Sprint 17] test: Auth tests
Execution Scope:
- Branches: feat/45-*, feat/46-*, feat/47-*
- Files: auth/*, api/routes/auth.py, tests/test_auth*
- Dependencies: PyJWT, python-jose
⚠️ APPROVAL REQUIRED
Do you approve this sprint for execution?
This grants permission for agents to:
- Create and modify files in the listed scope
- Create branches with the listed prefixes
- Install listed dependencies
Type "approve sprint 17" to authorize execution.
```
---
## Recording Approval
On user approval, update milestone description:
```markdown
## Sprint Approval
**Approved:** 2026-01-28 14:30
**Approver:** User
**Scope:**
- Branches: feat/45-*, feat/46-*, feat/47-*
- Files: auth/*, api/routes/auth.py, tests/test_auth*
- Dependencies: PyJWT, python-jose
```
---
## Verifying Approval (Orchestrator)
Before execution, check milestone for approval:
```python
get_milestone(repo="org/repo", milestone_id=17)
# Check description for "## Sprint Approval" section
```
### If Approval Missing
```
⚠️ SPRINT APPROVAL NOT FOUND (Warning)
Sprint 17 milestone does not contain an approval record.
Recommended: Run /sprint-plan first to:
1. Review the sprint scope
2. Document the approved execution plan
Proceeding anyway - consider adding approval for audit trail.
```
### If Approval Found
```
✓ Sprint Approval Verified
Approved: 2026-01-28 14:30
Scope:
Branches: feat/45-*, feat/46-*, feat/47-*
Files: auth/*, api/routes/auth.py, tests/test_auth*
Proceeding with execution within approved scope...
```
---
## Scope Enforcement
When approval exists, agents SHOULD operate within approved scope:
```
Approved scope:
Branches: feat/45-*, feat/46-*
Files: auth/*, tests/test_auth*
Task #48 wants to create: feat/48-api-docs
→ NOT in approved scope!
→ STOP and ask user to approve expanded scope
```
**Operations outside scope should trigger re-approval via `/sprint-plan`.**
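Since the approved branch scope is expressed as glob patterns, the check is a one-liner with `fnmatch`; a sketch:

```python
from fnmatch import fnmatch

def branch_in_scope(branch: str, approved_patterns: list[str]) -> bool:
    """Check a branch name against the approved glob patterns."""
    return any(fnmatch(branch, pattern) for pattern in approved_patterns)
```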
---
## Re-Approval Scenarios
Request re-approval when:
- New tasks discovered during execution
- Scope expansion needed (new files, new branches)
- Dependencies change significantly
- Timeline changes require scope adjustment


@@ -0,0 +1,104 @@
---
name: task-sizing
description: Task sizing rules and mandatory breakdown requirements
---
# Task Sizing Rules
## Purpose
Defines effort estimation rules and enforces task breakdown requirements.
## When to Use
- **Planner agent**: When creating issues during sprint planning
- **Orchestrator agent**: When reviewing task scope during sprint start
- **Code Reviewer agent**: When flagging oversized tasks
---
## Sizing Matrix
| Effort | Files | Checklist Items | Max Tool Calls | Agent Scope |
|--------|-------|-----------------|----------------|-------------|
| **XS** | 1 file | 0-2 items | ~30 | Single function/fix |
| **S** | 1 file | 2-4 items | ~50 | Single file feature |
| **M** | 2-3 files | 4-6 items | ~80 | Multi-file feature |
| **L** | MUST BREAK DOWN | - | - | Too large |
| **XL** | MUST BREAK DOWN | - | - | Way too large |
---
## CRITICAL: L/XL Tasks MUST Be Broken Down
**Why:**
- Agents running 400+ tool calls take 1+ hour with no visibility
- Large tasks lack clear completion criteria
- Debugging failures is extremely difficult
- Small tasks enable parallel execution
---
## Scoping Checklist
1. Can this be completed in one file? → XS or S
2. Does it touch 2-3 files? → M (maximum for single task)
3. Does it touch 4+ files? → MUST break down
4. Would you estimate 80+ tool calls? → MUST break down
5. Does it require complex decision-making mid-task? → MUST break down
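The checklist can be sketched as a sizing function. Thresholds are taken from the matrix above; the exact cutoffs are judgment calls, not hard rules:

```python
def estimate_effort(files: int, checklist_items: int, est_tool_calls: int) -> str:
    """Map task scope to an Efforts/* label per the sizing matrix."""
    if files >= 4 or est_tool_calls > 80:
        return "BREAK DOWN"          # L/XL territory: must be split
    if files >= 2:
        return "M"                   # multi-file feature (2-3 files)
    if checklist_items > 2 or est_tool_calls > 30:
        return "S"                   # single-file feature
    return "XS"                      # single function/fix
```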
---
## Breakdown Example
### BAD (L - too broad)
```
[Sprint 3] feat: Implement schema diff detection hook
Labels: Efforts/L
- Hook skeleton
- Pattern detection
- Warning output
- Integration
```
### GOOD (broken into S tasks)
```
[Sprint 3] feat: Create hook skeleton
Labels: Efforts/S
- [ ] Create hook file with standard header
- [ ] Add file type detection for SQL
- [ ] Exit 0 (non-blocking)
[Sprint 3] feat: Add DROP/ALTER pattern detection
Labels: Efforts/S
- [ ] Detect DROP COLUMN/TABLE/INDEX
- [ ] Detect ALTER TYPE changes
- [ ] Detect RENAME operations
[Sprint 3] feat: Add warning output formatting
Labels: Efforts/S
- [ ] Format breaking change warnings
- [ ] Add hook prefix to output
[Sprint 3] chore: Register hook in hooks.json
Labels: Efforts/XS
- [ ] Add PostToolUse:Edit hook entry
```
---
## Enforcement
**The planner MUST refuse to create L/XL tasks without breakdown.**
If user requests a large task:
```
This task appears to be L/XL sized (touches 4+ files, estimated 100+ tool calls).
L/XL tasks MUST be broken down into S/M subtasks because:
- Agents need clear, completable units of work
- Parallel execution requires smaller tasks
- Progress visibility requires frequent checkpoints
Let me break this down into smaller tasks...
```


@@ -0,0 +1,173 @@
---
name: test-standards
description: Testing requirements, framework detection, and patterns
---
# Test Standards
## Purpose
Defines testing requirements, framework detection, and test patterns.
## When to Use
- **Commands**: `/test` (run and gen modes)
- **Executor agent**: When writing tests during implementation
---
## Framework Detection
| Indicator | Framework | Command |
|-----------|-----------|---------|
| `pytest.ini`, `pyproject.toml` with pytest | pytest | `pytest` |
| `package.json` with jest | Jest | `npm test` or `npx jest` |
| `package.json` with vitest | Vitest | `npm test` or `npx vitest` |
| `go.mod` with `*_test.go` | Go test | `go test ./...` |
| `Cargo.toml` | Cargo test | `cargo test` |
| `Makefile` with test target | Make | `make test` |
---
## Coverage Commands
| Framework | Coverage Command |
|-----------|------------------|
| Python/pytest | `pytest --cov=src --cov-report=term-missing` |
| JavaScript/Jest | `npm test -- --coverage` |
| Go | `go test -cover ./...` |
| Rust | `cargo tarpaulin` or `cargo llvm-cov` |
---
## Test Structure Pattern
### Unit Tests
For each function:
- **Happy path**: Expected inputs → expected output
- **Edge cases**: Empty, null, boundary values
- **Error cases**: Invalid inputs → expected errors
- **Type variations**: If dynamic typing
### Example (Python/pytest)
```python
import pytest
from module import target_function
class TestTargetFunction:
"""Tests for target_function."""
def test_happy_path(self):
"""Standard input produces expected output."""
result = target_function(valid_input)
assert result == expected_output
def test_empty_input(self):
"""Empty input handled gracefully."""
result = target_function("")
assert result == default_value
def test_invalid_input_raises(self):
"""Invalid input raises ValueError."""
with pytest.raises(ValueError):
target_function(invalid_input)
@pytest.mark.parametrize("input,expected", [
(case1_in, case1_out),
(case2_in, case2_out),
])
def test_variations(self, input, expected):
"""Multiple input variations."""
assert target_function(input) == expected
```
---
## Test Strategy by Code Pattern
| Code Pattern | Test Approach |
|--------------|---------------|
| Pure function | Unit tests with varied inputs |
| Class with state | Setup/teardown, state transitions |
| External calls | Mocks/stubs for dependencies |
| Database ops | Integration tests with fixtures |
| API endpoints | Request/response tests |
| UI components | Snapshot + interaction tests |
---
## Test Check Output Format
```
## Test Check Summary
### Test Results
- Framework: pytest
- Status: PASS/FAIL
- Passed: 45 | Failed: 2 | Skipped: 3
- Duration: 12.5s
### Failed Tests
- test_auth.py::test_token_refresh: AssertionError (line 45)
- test_api.py::test_login_endpoint: TimeoutError (line 78)
### Coverage (if available)
- Overall: 78%
- Sprint files coverage:
- auth/jwt_service.py: 92%
- api/routes/auth.py: 65%
- models/user.py: NO TESTS
### Recommendation
TESTS MUST PASS / READY FOR CLOSE / COVERAGE GAPS TO ADDRESS
```
---
## Test Generation Output Format
```
## Tests Generated
### Target: src/orders.py:calculate_total
### File Created: tests/test_orders.py
### Tests (6 total)
- test_calculate_total_happy_path
- test_calculate_total_empty_items
- test_calculate_total_negative_price_raises
- test_calculate_total_with_discount
- test_calculate_total_with_tax
- test_calculate_total_parametrized_cases
### Run Tests
pytest tests/test_orders.py -v
```
---
## Do NOT
- Modify test files during `/test` run mode (only run and report)
- Skip failing tests to make the run pass
- Run tests in production environments
- Install dependencies without asking first
- Run tests requiring external services without confirmation
---
## Error Handling
**If tests fail:**
1. Report the failure clearly
2. List failed test names and error summaries
3. Recommend: "TESTS MUST PASS before sprint close"
4. Offer to help debug specific failures
**If framework not detected:**
1. List what was checked
2. Ask user to specify the test command
3. Offer common suggestions based on file types


@@ -0,0 +1,155 @@
---
name: wiki-conventions
description: Proposal and implementation page format and naming conventions
---
# Wiki Conventions
## Purpose
Defines the naming and structure for wiki proposal and implementation pages.
## When to Use
- **Planner agent**: When creating wiki pages during planning
- **Orchestrator agent**: When updating status at sprint close
- **Commands**: `/sprint-plan`, `/sprint-close`, `/proposal-status`
---
## Page Naming
| Page Type | Naming Convention |
|-----------|-------------------|
| Proposal | `Change VXX.X.X: Proposal` |
| Implementation | `Change VXX.X.X: Proposal (Implementation N)` |
**Examples:**
- `Change V4.1.0: Proposal`
- `Change V4.1.0: Proposal (Implementation 1)`
- `Change V4.1.0: Proposal (Implementation 2)`
---
## Proposal Page Template
```markdown
> **Type:** Change Proposal
> **Version:** V4.1.0
> **Plugin:** projman
> **Status:** In Progress
> **Date:** 2026-01-26
# Feature Title
[Content migrated from input source or created from discussion]
## Implementations
- [Implementation 1](link) - Sprint 17 - In Progress
```
---
## Implementation Page Template
```markdown
> **Type:** Change Proposal Implementation
> **Version:** V04.1.0
> **Status:** In Progress
> **Date:** 2026-01-26
> **Origin:** [Proposal](wiki-link)
> **Sprint:** Sprint 17
# Implementation Details
[Technical details, scope, approach]
## Issues
- #45: JWT token generation
- #46: Login endpoint
- #47: Auth tests
```
---
## Status Values
| Status | Meaning |
|--------|---------|
| `In Progress` | Active work |
| `Implemented` | Completed successfully |
| `Partial` | Partially completed, continued in next impl |
| `Failed` | Did not complete, abandoned |
---
## Completion Update (Sprint Close)
On sprint close, update implementation page:
```markdown
> **Type:** Change Proposal Implementation
> **Version:** V4.1.0
> **Status:** Implemented ✅
> **Date:** 2026-01-26
> **Completed:** 2026-01-28
> **Origin:** [Proposal](wiki-link)
> **Sprint:** Sprint 17
# Implementation Details
[Original content...]
## Completion Summary
- All planned issues completed
- Lessons learned: [Link to lesson]
```
---
## Proposal Status Update
When all implementations complete, update proposal:
```markdown
> **Type:** Change Proposal
> **Version:** V4.1.0
> **Status:** Implemented ✅
> **Date:** 2026-01-26
# Feature Title
[Original content...]
## Implementations
- [Implementation 1](link) - Sprint 17 - ✅ Completed
```
---
## Creating Pages
**Create proposal:**
```python
create_wiki_page(
repo="org/repo",
title="Change V4.1.0: Proposal",
content="[proposal template content]"
)
```
**Create implementation:**
```python
create_wiki_page(
repo="org/repo",
title="Change V4.1.0: Proposal (Implementation 1)",
content="[implementation template content]"
)
```
**Update implementation on close:**
```python
update_wiki_page(
repo="org/repo",
page_name="Change-V4.1.0:-Proposal-(Implementation-1)",
content="[updated content with completion status]"
)
```
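The `page_name` passed to `update_wiki_page` is the title with spaces replaced by dashes, as the example above shows. A sketch, assuming that mapping holds for all titles:

```python
def wiki_page_name(title: str) -> str:
    """Convert a wiki page title to the dash-joined page_name form."""
    return title.replace(" ", "-")
```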