refactor(projman): extract skills and consolidate commands
Major refactoring of projman plugin architecture:

Skills Extraction (17 new files):
- Extracted reusable knowledge from commands and agents into skills/
- branch-security, dependency-management, git-workflow, input-detection
- issue-conventions, lessons-learned, mcp-tools-reference, planning-workflow
- progress-tracking, repo-validation, review-checklist, runaway-detection
- setup-workflows, sprint-approval, task-sizing, test-standards, wiki-conventions

Command Consolidation (17 → 12 commands):
- /setup: consolidates initial-setup, project-init, project-sync (--full/--quick/--sync)
- /debug: consolidates debug-report, debug-review (report/review modes)
- /test: consolidates test-check, test-gen (run/gen modes)
- /sprint-status: absorbs sprint-diagram via --diagram flag

Architecture Cleanup:
- Remove plugin-level mcp-servers/ symlinks (6 plugins)
- Remove plugin README.md files (12 files, ~2000 lines)
- Update all documentation to reflect new command structure
- Fix documentation drift in CONFIGURATION.md, COMMANDS-CHEATSHEET.md

Commands are now thin dispatchers (~20-50 lines) that reference skills.
Agents reference skills for domain knowledge instead of inline content.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@@ -4,7 +4,9 @@ description: Clear plugin cache to force fresh configuration reload after market

# Clear Cache

Clear plugin cache to force fresh configuration reload. Run this after marketplace updates.
## Purpose

Clear plugin cache to force fresh configuration reload after marketplace updates.

## When to Use

@@ -12,17 +14,21 @@ Clear plugin cache to force fresh configuration reload. Run this after marketpla

- When MCP servers show stale configuration
- When plugin changes don't take effect

## What It Does
## Workflow

1. Clears `~/.claude/plugins/cache/leo-claude-mktplace/`
2. Forces Claude Code to re-read `.mcp.json` files on next session

## Instructions

Run this command, then **restart your Claude Code session** for changes to take effect.
Execute cache clear:

```bash
rm -rf ~/.claude/plugins/cache/leo-claude-mktplace/
```

After clearing, inform the user: "Cache cleared. Restart Claude Code for changes to take effect."
Then inform user: "Cache cleared. Restart Claude Code for changes to take effect."

## Visual Output

```
╔══════════════════════════════════════════════════════════════════╗
║  📋 PROJMAN                                                      ║
║  Clear Cache                                                     ║
╚══════════════════════════════════════════════════════════════════╝
```

@@ -1,701 +0,0 @@
---
description: Run diagnostics and create structured issue in marketplace repository
---

# Debug Report

Create structured issues in the marketplace repository - either from automated diagnostic tests OR from user-reported problems.

## Prerequisites

Your project `.env` must have:

```env
PROJMAN_MARKETPLACE_REPO=personal-projects/leo-claude-mktplace
```

If not configured, ask the user for the marketplace repository path.

## CRITICAL: Execution Steps

You MUST follow these steps in order. Do NOT skip any step.

### Step 0: Select Report Mode

Use AskUserQuestion to determine what the user wants to report:

```
What would you like to report?

[ ] Run automated diagnostics - Test MCP tools and report failures
[ ] Report an issue I experienced - Describe a problem with any plugin command
```

Store the selection as `REPORT_MODE`:
- "automated" → Continue to Step 1
- "user-reported" → Skip to Step 0.1

---

### Step 0.1: Gather User Feedback (User-Reported Mode Only)

If `REPORT_MODE` is "user-reported", gather structured feedback.

**Question 1: What were you trying to do?**

Use AskUserQuestion:
```
Which plugin/command were you using?

[ ] projman (sprint planning, issues, labels)
[ ] git-flow (commits, branches)
[ ] pr-review (pull request review)
[ ] cmdb-assistant (NetBox integration)
[ ] doc-guardian (documentation)
[ ] code-sentinel (security, refactoring)
[ ] Other - I'll describe it
```

Store as `AFFECTED_PLUGIN`.

Then ask for the specific command (free text):
```
What command or tool were you using? (e.g., /sprint-plan, virt_update_vm)
```

Store as `AFFECTED_COMMAND`.

**Question 2: What was your goal?**

```
Briefly describe what you were trying to accomplish:
```

Store as `USER_GOAL`.

**Question 3: What went wrong?**

Use AskUserQuestion:
```
What type of problem did you encounter?

[ ] Error message - Command failed with an error
[ ] Missing feature - Tool doesn't support what I need
[ ] Unexpected behavior - It worked but did the wrong thing
[ ] Documentation issue - Instructions were unclear or wrong
[ ] Other - I'll describe it
```

Store as `PROBLEM_TYPE`.

Then ask for details (free text):
```
Describe what happened. Include any error messages if applicable:
```

Store as `PROBLEM_DESCRIPTION`.

**Question 4: Expected vs Actual**

```
What did you expect to happen?
```

Store as `EXPECTED_BEHAVIOR`.

**Question 5: Workaround (optional)**

```
Did you find a workaround? If so, describe it (or skip):
```

Store as `WORKAROUND` (may be empty).

After gathering feedback, continue to Step 1 for context gathering, then skip to Step 5.1.

---

### Step 1: Gather Project Context

Run these Bash commands to capture project information:

```bash
# Get git remote URL
git remote get-url origin

# Get current branch
git branch --show-current

# Get working directory
pwd
```

Parse the git remote to extract `REPO_NAME` in `owner/repo` format.

Store all values:
- `PROJECT_REPO`: The detected owner/repo
- `GIT_REMOTE`: Full git remote URL
- `CURRENT_BRANCH`: Current branch name
- `WORKING_DIR`: Current working directory
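The parsing step above can be sketched in shell. This is a minimal sketch, not part of the command itself; the hostnames and repo paths below are example inputs standing in for the output of `git remote get-url origin`:

```bash
# Turn a git remote URL into "owner/repo" (handles SSH and HTTPS forms)
parse_repo() {
  # Strip the "git@host:" or "scheme://host/" prefix, then a trailing ".git"
  echo "$1" | sed -E 's#^(git@[^:]+:|[a-z]+://[^/]+/)##; s#\.git$##'
}

parse_repo "git@gitea.example.com:personal-projects/leo-claude-mktplace.git"
# → personal-projects/leo-claude-mktplace
parse_repo "https://gitea.example.com/personal-projects/leo-claude-mktplace"
# → personal-projects/leo-claude-mktplace
```
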

### Step 1.5: Detect Sprint Context

Determine if this debug issue should be associated with an active sprint.

**1. Check for active sprint milestone:**

```
mcp__plugin_projman_gitea__list_milestones(repo=PROJECT_REPO, state="open")
```

Store the first open milestone as `ACTIVE_SPRINT` (if any).

**2. Analyze branch context:**

| Branch Pattern | Context |
|----------------|---------|
| `feat/*`, `fix/*`, `issue-*` | Sprint work - likely related to current sprint |
| `main`, `master`, `development` | Production/standalone - not sprint-related |
| Other | Unknown - ask user |

**3. Determine sprint association:**

```
IF ACTIVE_SPRINT exists AND CURRENT_BRANCH matches sprint pattern (feat/*, fix/*, issue-*):
  → SPRINT_CONTEXT = "detected"
  → Ask user: "Active sprint detected: [SPRINT_NAME]. Is this bug related to sprint work?"
    Options:
    - Yes, add to sprint (will associate with milestone)
    - No, standalone fix (no milestone)
  → Store choice as ASSOCIATE_WITH_SPRINT (true/false)

ELSE IF ACTIVE_SPRINT exists AND CURRENT_BRANCH is main/development:
  → SPRINT_CONTEXT = "production"
  → ASSOCIATE_WITH_SPRINT = false (standalone fix, no question needed)

ELSE:
  → SPRINT_CONTEXT = "none"
  → ASSOCIATE_WITH_SPRINT = false
```
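The branch-pattern half of the decision logic above can be sketched with a shell `case` statement. This is an illustrative sketch (the branch names are made-up examples), not part of the command:

```bash
# Classify a branch name per the table above
classify_branch() {
  case "$1" in
    feat/*|fix/*|issue-*)    echo "sprint" ;;      # likely sprint work
    main|master|development) echo "production" ;;  # standalone fix
    *)                       echo "unknown" ;;     # ask the user
  esac
}

classify_branch "feat/42-extract-skills"   # sprint
classify_branch "main"                     # production
```
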

### Step 2: Read Marketplace Configuration

```bash
grep PROJMAN_MARKETPLACE_REPO .env
```

Store as `MARKETPLACE_REPO`. If not found, ask the user.
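A slightly more robust version of the grep above extracts just the value and detects the missing case. This sketch writes a temporary demo `.env` so it is self-contained; in the real command the file is the project's `.env`:

```bash
# Demo .env standing in for the project's real one
ENV_FILE=$(mktemp)
echo "PROJMAN_MARKETPLACE_REPO=personal-projects/leo-claude-mktplace" > "$ENV_FILE"

# Anchor the match and keep only the value after the first "="
MARKETPLACE_REPO=$(grep -E '^PROJMAN_MARKETPLACE_REPO=' "$ENV_FILE" | cut -d= -f2-)
if [ -z "$MARKETPLACE_REPO" ]; then
  echo "PROJMAN_MARKETPLACE_REPO not set - ask the user for the marketplace repo" >&2
fi
echo "$MARKETPLACE_REPO"
rm -f "$ENV_FILE"
```
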

### Step 3: Run Diagnostic Suite (Automated Mode Only)

**Skip this step if `REPORT_MODE` is "user-reported"** → Go to Step 5.1

Run each MCP tool with explicit `repo` parameter. Record success/failure and full response.

**Test 1: validate_repo_org**
```
mcp__plugin_projman_gitea__validate_repo_org(repo=PROJECT_REPO)
```
Expected: `{is_organization: true/false}`

**Test 2: get_labels**
```
mcp__plugin_projman_gitea__get_labels(repo=PROJECT_REPO)
```
Expected: `{organization: [...], repository: [...], total_count: N}`

**Test 3: list_issues**
```
mcp__plugin_projman_gitea__list_issues(repo=PROJECT_REPO, state="open")
```
Expected: Array of issues

**Test 4: list_milestones**
```
mcp__plugin_projman_gitea__list_milestones(repo=PROJECT_REPO)
```
Expected: Array of milestones

**Test 5: suggest_labels**
```
mcp__plugin_projman_gitea__suggest_labels(context="Test bug fix for authentication")
```
Expected: Array of label names matching repo's format

For each test, record:
- Tool name
- Exact parameters used
- Status: PASS or FAIL
- Response or error message

### Step 4: Analyze Results (Automated Mode Only)

**Skip this step if `REPORT_MODE` is "user-reported"** → Go to Step 5.1

Count failures and categorize errors:

| Category | Indicators |
|----------|------------|
| Parameter Format | "owner/repo format", "missing parameter" |
| Authentication | "401", "403", "unauthorized" |
| Not Found | "404", "not found" |
| Network | "connection", "timeout", "ECONNREFUSED" |
| Logic | Unexpected response format, wrong data |

For each failure, write a hypothesis about the likely cause.

### Step 5: Generate Smart Labels (Automated Mode Only)

**Skip this step if `REPORT_MODE` is "user-reported"** → Go to Step 5.1

Generate appropriate labels based on the diagnostic results.

**1. Build context string for label suggestion:**

```
LABEL_CONTEXT = "Bug fix: " + [summary of main failure] + ". " +
                "Failed tools: " + [list of failed tool names] + ". " +
                "Error category: " + [detected error category from Step 4]
```

**2. Get suggested labels:**

```
mcp__plugin_projman_gitea__suggest_labels(
    repo=PROJECT_REPO,
    context=LABEL_CONTEXT
)
```

**3. Merge with base labels:**

```
BASE_LABELS = ["Type: Bug", "Source: Diagnostic", "Agent: Claude"]
SUGGESTED_LABELS = [result from suggest_labels]

# Combine, avoiding duplicates
FINAL_LABELS = BASE_LABELS + [label for label in SUGGESTED_LABELS if label not in BASE_LABELS]
```
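The merge above can be sketched in shell with one label per line; `awk '!seen[$0]++'` drops duplicates while preserving order. The suggested labels below are made-up examples:

```bash
# Base labels always applied in automated mode
BASE_LABELS='Type: Bug
Source: Diagnostic
Agent: Claude'

# Example result from suggest_labels (note the duplicate "Type: Bug")
SUGGESTED_LABELS='Component: MCP Server
Type: Bug
Priority: High'

# Concatenate, then keep only the first occurrence of each line
FINAL_LABELS=$(printf '%s\n%s\n' "$BASE_LABELS" "$SUGGESTED_LABELS" | awk '!seen[$0]++')
echo "$FINAL_LABELS"
```
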

The final label set should include:
- **Always**: `Type: Bug`, `Source: Diagnostic`, `Agent: Claude`
- **If detected**: `Component: *`, `Complexity: *`, `Risk: *`, `Priority: *`

After generating labels, continue to Step 6.

---

### Step 5.1: Generate Labels (User-Reported Mode Only)

**Only execute this step if `REPORT_MODE` is "user-reported"**

**1. Map problem type to labels:**

| PROBLEM_TYPE | Labels |
|--------------|--------|
| Error message | `Type: Bug` |
| Missing feature | `Type: Enhancement` |
| Unexpected behavior | `Type: Bug` |
| Documentation issue | `Type: Documentation` |
| Other | `Type: Bug` (default) |

**2. Map plugin to component:**

| AFFECTED_PLUGIN | Component Label |
|-----------------|-----------------|
| projman | `Component: Commands` |
| git-flow | `Component: Commands` |
| pr-review | `Component: Commands` |
| cmdb-assistant | `Component: API` |
| doc-guardian | `Component: Commands` |
| code-sentinel | `Component: Commands` |
| Other | *(no component label)* |

**3. Build final labels:**

```
BASE_LABELS = ["Source: User-Reported", "Agent: Claude"]
TYPE_LABEL = [mapped from PROBLEM_TYPE]
COMPONENT_LABEL = [mapped from AFFECTED_PLUGIN, if any]

FINAL_LABELS = BASE_LABELS + TYPE_LABEL + COMPONENT_LABEL
```
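The PROBLEM_TYPE mapping table can be sketched as a shell function whose arms mirror the table rows. This is an illustrative sketch, not part of the command:

```bash
# Map a PROBLEM_TYPE answer to its Type label
type_label() {
  case "$1" in
    "Error message"|"Unexpected behavior") echo "Type: Bug" ;;
    "Missing feature")                     echo "Type: Enhancement" ;;
    "Documentation issue")                 echo "Type: Documentation" ;;
    *)                                     echo "Type: Bug" ;;  # default
  esac
}

type_label "Missing feature"       # Type: Enhancement
type_label "Documentation issue"   # Type: Documentation
```
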

After generating labels, continue to Step 6.1.

---

### Step 6: Generate Issue Content (Automated Mode Only)

**Skip this step if `REPORT_MODE` is "user-reported"** → Go to Step 6.1

Use this exact template:

```markdown
## Diagnostic Report

**Generated**: [ISO timestamp]
**Command Tested**: [What the user was trying to run, or "general diagnostic"]
**Reporter**: Claude Code via /debug-report

## Project Context

| Field | Value |
|-------|-------|
| Repository | `[PROJECT_REPO]` |
| Git Remote | `[GIT_REMOTE]` |
| Working Directory | `[WORKING_DIR]` |
| Current Branch | `[CURRENT_BRANCH]` |

## Diagnostic Results

### Test 1: validate_repo_org

**Call**: `validate_repo_org(repo="[PROJECT_REPO]")`
**Status**: [PASS/FAIL]
**Response**:
```json
[full response or error]
```

### Test 2: get_labels

**Call**: `get_labels(repo="[PROJECT_REPO]")`
**Status**: [PASS/FAIL]
**Response**:
```json
[full response or error - truncate if very long]
```

[... repeat for each test ...]

## Summary

- **Total Tests**: 5
- **Passed**: [N]
- **Failed**: [N]

### Failed Tools

[List each failed tool with its error]

### Error Category

[Check applicable categories]

### Hypothesis

[Your analysis of what's wrong and why]

### Suggested Investigation

1. [First file/function to check]
2. [Second file/function to check]
3. [etc.]

## Reproduction Steps

1. Navigate to `[WORKING_DIR]`
2. Run `[command that was being tested]`
3. Observe error at step [X]

---

*Generated by /debug-report (automated) - Labels: Type: Bug, Source: Diagnostic, Agent: Claude*
```

After generating content, continue to Step 7.

---

### Step 6.1: Generate Issue Content (User-Reported Mode Only)

**Only execute this step if `REPORT_MODE` is "user-reported"**

Use this template for user-reported issues:

```markdown
## User-Reported Issue

**Reported**: [ISO timestamp]
**Reporter**: Claude Code via /debug-report (user feedback)

## Context

| Field | Value |
|-------|-------|
| Plugin | `[AFFECTED_PLUGIN]` |
| Command/Tool | `[AFFECTED_COMMAND]` |
| Repository | `[PROJECT_REPO]` |
| Working Directory | `[WORKING_DIR]` |
| Branch | `[CURRENT_BRANCH]` |

## Problem Description

### Goal
[USER_GOAL]

### What Happened
**Problem Type**: [PROBLEM_TYPE]

[PROBLEM_DESCRIPTION]

### Expected Behavior
[EXPECTED_BEHAVIOR]

## Workaround
[WORKAROUND if provided, otherwise "None identified"]

## Investigation Hints

Based on the affected plugin/command, relevant files to check:

[Generate based on AFFECTED_PLUGIN:]

**projman:**
- `plugins/projman/commands/[AFFECTED_COMMAND].md`
- `mcp-servers/gitea/mcp_server/tools/*.py`

**git-flow:**
- `plugins/git-flow/commands/[AFFECTED_COMMAND].md`

**pr-review:**
- `plugins/pr-review/commands/[AFFECTED_COMMAND].md`
- `mcp-servers/gitea/mcp_server/tools/pull_requests.py`

**cmdb-assistant:**
- `plugins/cmdb-assistant/commands/[AFFECTED_COMMAND].md`
- `mcp-servers/netbox/mcp_server/tools/*.py`
- `mcp-servers/netbox/mcp_server/server.py` (tool schemas)

**doc-guardian / code-sentinel:**
- `plugins/[plugin]/commands/[AFFECTED_COMMAND].md`
- `plugins/[plugin]/hooks/*.md`

---

*Generated by /debug-report (user feedback) - Labels: [FINAL_LABELS]*
```

After generating content, continue to Step 7.

---

### Step 7: Create Issue in Marketplace

**IMPORTANT:** Always use curl to create issues in the marketplace repo. This avoids branch protection restrictions and MCP context issues that can block issue creation when working on protected branches.

**1. Load Gitea credentials:**

```bash
if [[ -f ~/.config/claude/gitea.env ]]; then
    source ~/.config/claude/gitea.env
    echo "Credentials loaded. API URL: $GITEA_API_URL"
else
    echo "ERROR: No credentials at ~/.config/claude/gitea.env"
fi
```

**2. Fetch label IDs from marketplace repo:**

Labels depend on `REPORT_MODE`:

**Automated mode:**
- `Source/Diagnostic` (always)
- `Type/Bug` (always)

**User-reported mode:**
- `Source/User-Reported` (always)
- Type label from Step 5.1 (Bug, Enhancement, or Documentation)
- Component label from Step 5.1 (if applicable)

```bash
# Fetch all labels from marketplace repo
LABELS_JSON=$(curl -s "${GITEA_API_URL}/repos/${MARKETPLACE_REPO}/labels" \
  -H "Authorization: token ${GITEA_API_TOKEN}")

# Extract label IDs based on FINAL_LABELS from Step 5 or 5.1
# Build LABEL_IDS array with IDs of labels that exist in the repo
# Example for automated mode:
SOURCE_ID=$(echo "$LABELS_JSON" | jq -r '.[] | select(.name == "Source/Diagnostic") | .id')
TYPE_ID=$(echo "$LABELS_JSON" | jq -r '.[] | select(.name == "Type/Bug") | .id')

# Example for user-reported mode (adjust based on FINAL_LABELS):
# SOURCE_ID=$(echo "$LABELS_JSON" | jq -r '.[] | select(.name == "Source/User-Reported") | .id')
# TYPE_ID=$(echo "$LABELS_JSON" | jq -r '.[] | select(.name == "[TYPE_LABEL]") | .id')

# Build label array from found IDs
LABEL_IDS="[$(echo "$SOURCE_ID,$TYPE_ID" | sed 's/,,*/,/g; s/^,//; s/,$//')]"
echo "Label IDs to apply: $LABEL_IDS"
```

**3. Create issue with labels via curl:**

**Title format depends on `REPORT_MODE`:**
- Automated: `[Diagnostic] [summary of main failure]`
- User-reported: `[AFFECTED_PLUGIN] [brief summary of PROBLEM_DESCRIPTION]`

```bash
# Create temp files with restrictive permissions
DIAG_TITLE=$(mktemp -t diag-title.XXXXXX)
DIAG_BODY=$(mktemp -t diag-body.XXXXXX)
DIAG_PAYLOAD=$(mktemp -t diag-payload.XXXXXX)

# Save title (format depends on REPORT_MODE)
# Automated: "[Diagnostic] [summary of main failure]"
# User-reported: "[AFFECTED_PLUGIN] [brief summary]"
echo "[Title based on REPORT_MODE]" > "$DIAG_TITLE"

# Save body (paste Step 6 or 6.1 content) - quoted heredoc delimiter prevents shell expansion
cat > "$DIAG_BODY" << 'DIAGNOSTIC_EOF'
[Paste the full issue content from Step 6 or 6.1 here]
DIAGNOSTIC_EOF

# Build JSON payload with labels using jq
jq -n \
  --rawfile title "$DIAG_TITLE" \
  --rawfile body "$DIAG_BODY" \
  --argjson labels "$LABEL_IDS" \
  '{title: ($title | rtrimstr("\n")), body: $body, labels: $labels}' > "$DIAG_PAYLOAD"

# Create issue using the JSON file
RESULT=$(curl -s -X POST "${GITEA_API_URL}/repos/${MARKETPLACE_REPO}/issues" \
  -H "Authorization: token ${GITEA_API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d @"$DIAG_PAYLOAD")

# Extract and display the issue URL
echo "$RESULT" | jq -r '.html_url // "Error: " + (.message // "Unknown error")'

# Secure cleanup
rm -f "$DIAG_TITLE" "$DIAG_BODY" "$DIAG_PAYLOAD"
```

**4. If no credentials found, save report locally:**

```bash
REPORT_FILE=$(mktemp -t diagnostic-report-XXXXXX.md)
cat > "$REPORT_FILE" << 'DIAGNOSTIC_EOF'
[Paste the full issue content from Step 6 here]
DIAGNOSTIC_EOF
echo "Report saved to: $REPORT_FILE"
```

Then inform the user:

```
No Gitea credentials found at ~/.config/claude/gitea.env.

Diagnostic report saved to: [REPORT_FILE]

To create the issue manually:
1. Configure credentials: See docs/CONFIGURATION.md
2. Or create issue directly at: http://gitea.hotserv.cloud/[MARKETPLACE_REPO]/issues/new
```

### Step 8: Report to User

Display summary based on `REPORT_MODE`:

**Automated Mode:**
```
Debug Report Complete
=====================

Project: [PROJECT_REPO]
Tests Run: 5
Passed: [N]
Failed: [N]

Failed Tools:
- [tool1]: [brief error]
- [tool2]: [brief error]

Issue Created: [issue URL]

Next Steps:
1. Switch to marketplace repo: cd [marketplace path]
2. Run: /debug-review
3. Select issue #[N] to investigate
```

**User-Reported Mode:**
```
Issue Report Complete
=====================

Plugin: [AFFECTED_PLUGIN]
Command: [AFFECTED_COMMAND]
Problem: [PROBLEM_TYPE]

Issue Created: [issue URL]

Your feedback has been captured. The development team will
investigate and may follow up with questions.

Next Steps:
1. Switch to marketplace repo: cd [marketplace path]
2. Run: /debug-review
3. Select issue #[N] to investigate
```

## DO NOT

- **DO NOT** attempt to fix anything - only report
- **DO NOT** create issues if all automated tests pass (unless in user-reported mode)
- **DO NOT** skip any diagnostic test in automated mode
- **DO NOT** call MCP tools without the `repo` parameter
- **DO NOT** skip user questions in user-reported mode - gather complete feedback
- **DO NOT** use MCP tools to create issues in the marketplace - always use curl (avoids branch restrictions)

## If All Tests Pass (Automated Mode Only)

If all 5 tests pass in automated mode, report success without creating an issue:

```
Debug Report Complete
=====================

Project: [PROJECT_REPO]
Tests Run: 5
Passed: 5
Failed: 0

All diagnostics passed. No issues to report.

If you're experiencing a specific problem, run /debug-report again
and select "Report an issue I experienced" to describe it.
```

## Troubleshooting

**PROJMAN_MARKETPLACE_REPO not configured**
- Ask user: "What is the marketplace repository? (e.g., personal-projects/leo-claude-mktplace)"
- Store for this session and remind user to add to .env

**Cannot detect project repository**
- Check if in a git repository: `git rev-parse --git-dir`
- If not a git repo, ask user for the repository path

**Gitea credentials not found**
- Credentials must be at `~/.config/claude/gitea.env`
- If missing, the report will be saved locally for manual submission
- See docs/CONFIGURATION.md for setup instructions

**Labels not applied to issue**
- Verify labels exist in the marketplace repo: `Source/Diagnostic`, `Type/Bug`
- Check the label fetch output in Step 7.2 for errors
- If labels don't exist, create them first with `/labels-sync` in the marketplace repo

## Visual Output

When executing this command, display the plugin header:

```
╔══════════════════════════════════════════════════════════════════╗
║  📋 PROJMAN                                                      ║
║  Debug Report                                                    ║
╚══════════════════════════════════════════════════════════════════╝
```

Then proceed with the diagnostic workflow.
@@ -1,599 +0,0 @@
|
||||
---
|
||||
description: Investigate diagnostic issues and propose fixes with human approval
|
||||
---
|
||||
|
||||
# Debug Review
|
||||
|
||||
Investigate diagnostic issues created by `/debug-report`, read relevant code, and propose fixes with human approval at each step.
|
||||
|
||||
## CRITICAL: This Command Requires Human Approval
|
||||
|
||||
This command has THREE mandatory approval gates. You MUST stop and wait for user confirmation at each gate before proceeding.
|
||||
|
||||
## Execution Steps
|
||||
|
||||
### Step 1: Detect Repository
|
||||
|
||||
Run Bash to get the current repository:
|
||||
|
||||
```bash
|
||||
git remote get-url origin
|
||||
```
|
||||
|
||||
Parse to extract `REPO_NAME` in `owner/repo` format.
|
||||
|
||||
### Step 2: Fetch Diagnostic Issues
|
||||
|
||||
```
|
||||
mcp__plugin_projman_gitea__list_issues(
|
||||
repo=REPO_NAME,
|
||||
state="open",
|
||||
labels=["Source: Diagnostic"]
|
||||
)
|
||||
```
|
||||
|
||||
If no issues with that label, try without label filter and look for issues with "[Diagnostic]" in title.
|
||||
|
||||
### Step 3: Display Issue List
|
||||
|
||||
Show the user available issues:
|
||||
|
||||
```
|
||||
Debug Review
|
||||
============
|
||||
|
||||
Open Diagnostic Issues:
|
||||
|
||||
#80 - [Diagnostic] get_labels fails without repo parameter
|
||||
Created: 2026-01-21 | Labels: Type: Bug, Source: Diagnostic
|
||||
|
||||
#77 - [Diagnostic] MCP tools require explicit repo parameter
|
||||
Created: 2026-01-21 | Labels: Type: Bug, Source: Diagnostic
|
||||
|
||||
No diagnostic issues? Showing recent bugs:
|
||||
|
||||
#75 - [Bug] Some other issue
|
||||
Created: 2026-01-20
|
||||
```
|
||||
|
||||
### Step 4: User Selects Issue
|
||||
|
||||
Use AskUserQuestion:
|
||||
|
||||
```
|
||||
Which issue would you like to investigate?
|
||||
Options: [List issue numbers]
|
||||
```
|
||||
|
||||
Wait for user selection.
|
||||
|
||||
### Step 5: Fetch Full Issue Details
|
||||
|
||||
```
|
||||
mcp__plugin_projman_gitea__get_issue(repo=REPO_NAME, issue_number=SELECTED)
|
||||
```
|
||||
|
||||
### Step 6: Parse Diagnostic Report
|
||||
|
||||
Extract from the issue body:
|
||||
|
||||
1. **Failed Tools**: Which MCP tools failed
|
||||
2. **Error Messages**: Exact error text
|
||||
3. **Hypothesis**: Reporter's analysis
|
||||
4. **Suggested Investigation**: Files to check
|
||||
5. **Project Context**: Repo, branch, cwd where error occurred
|
||||
|
||||
If the issue doesn't follow the diagnostic template, extract what information is available.
|
||||
|
||||
### Step 7: Map Errors to Code Files
|
||||
|
||||
Use this mapping to identify relevant files:
|
||||
|
||||
**By Tool Name:**
|
||||
| Tool | Primary Files |
|
||||
|------|---------------|
|
||||
| `validate_repo_org` | `mcp-servers/gitea/mcp_server/gitea_client.py` |
|
||||
| `get_labels` | `mcp-servers/gitea/mcp_server/tools/labels.py` |
|
||||
| `suggest_labels` | `mcp-servers/gitea/mcp_server/tools/labels.py` |
|
||||
| `list_issues` | `mcp-servers/gitea/mcp_server/tools/issues.py` |
|
||||
| `create_issue` | `mcp-servers/gitea/mcp_server/tools/issues.py` |
|
||||
| `list_milestones` | `mcp-servers/gitea/mcp_server/gitea_client.py` |
|
||||
|
||||
**By Error Pattern:**
|
||||
| Error Contains | Check Files |
|
||||
|----------------|-------------|
|
||||
| "owner/repo format" | `config.py`, `gitea_client.py` |
|
||||
| "404" + "orgs" | `gitea_client.py` (is_org_repo method) |
|
||||
| "401", "403" | `config.py` (token loading) |
|
||||
| "No repository" | Command `.md` file (repo detection step) |
|
||||
|
||||
**By Command:**
|
||||
| Command | Documentation File |
|
||||
|---------|-------------------|
|
||||
| `/labels-sync` | `plugins/projman/commands/labels-sync.md` |
|
||||
| `/sprint-plan` | `plugins/projman/commands/sprint-plan.md` |
|
||||
| `/sprint-start` | `plugins/projman/commands/sprint-start.md` |
|
||||
| `/debug-report` | `plugins/projman/commands/debug-report.md` |
|
||||
|
||||
### Step 8: Read Relevant Files (MANDATORY)
|
||||
|
||||
You MUST read the identified files before proposing any fix.
|
||||
|
||||
For each relevant file:
|
||||
1. Read the file using the Read tool
|
||||
2. Find the specific function/method mentioned in the error
|
||||
3. Understand the code path that leads to the error
|
||||
4. Note any related code that might be affected
|
||||
|
||||
Display snippets of relevant code to the user:
|
||||
|
||||
```
|
||||
Reading relevant files...
|
||||
|
||||
┌─ mcp-servers/gitea/mcp_server/tools/labels.py (lines 29-40) ────────┐
|
||||
│ │
|
||||
│ async def get_labels(self, repo: Optional[str] = None): │
|
||||
│ target_repo = repo or self.gitea.repo │
|
||||
│ if not target_repo or '/' not in target_repo: │
|
||||
│ raise ValueError("Use 'owner/repo' format...") │
|
||||
│ │
|
||||
└──────────────────────────────────────────────────────────────────────┘
|
||||
```

### Step 9: Present Investigation Summary

Summarize what you found:

```
Investigation Summary
=====================

ISSUE: #80 - get_labels fails without repo parameter

FAILED TOOLS:
• get_labels - "Use 'owner/repo' format"

CODE ANALYSIS:

1. labels.py:get_labels() requires repo parameter
   - Line 30: `target_repo = repo or self.gitea.repo`
   - Line 31-32: Raises ValueError if no repo

2. labels-sync.md documents Step 1 for repo detection
   - Lines 13-26: Instructs to run `git remote get-url origin`
   - This step may not be followed by the executing Claude

ROOT CAUSE HYPOTHESIS:

The command documentation (labels-sync.md) correctly instructs
repo detection, but the executing Claude may be skipping Step 1
and calling MCP tools directly.

Evidence:
• Error indicates repo parameter was not passed
• labels-sync.md has correct instructions
• MCP server cannot auto-detect (sandboxed environment)

LIKELY FIX:

Option A: Make Step 1 more prominent in labels-sync.md
Option B: Add validation that repo was detected before proceeding
Option C: [Other based on analysis]
```

## APPROVAL GATE 1

```
Does this analysis match your understanding of the problem?

[Y] Yes, proceed to propose fix
[N] No, let me clarify
[R] Read more files first
```

**STOP HERE AND WAIT FOR USER RESPONSE**

Do NOT proceed until user approves.

### Step 9.5: Search Lessons Learned

Before proposing a fix, search for relevant lessons from past fixes.

**1. Extract search tags from the issue:**

```
SEARCH_TAGS = []
# Add tool names
for each failed_tool in issue:
    SEARCH_TAGS.append(tool_name)  # e.g., "get_labels", "validate_repo_org"

# Add error category
SEARCH_TAGS.append(error_category)  # e.g., "parameter-format", "authentication"

# Add component if identifiable
if error relates to MCP server:
    SEARCH_TAGS.append("mcp")
if error relates to command:
    SEARCH_TAGS.append("command")
```

**2. Search lessons learned:**

```
mcp__plugin_projman_gitea__search_lessons(
    repo=REPO_NAME,
    tags=SEARCH_TAGS,
    limit=5
)
```

**3. Also search by error keywords:**

```
mcp__plugin_projman_gitea__search_lessons(
    repo=REPO_NAME,
    query=[key error message words],
    limit=5
)
```

**4. Display relevant lessons (if any):**

```
Related Lessons Learned
=======================

Found [N] relevant lessons from past fixes:

📚 Lesson: "Sprint 14 - Parameter validation in MCP tools"
   Tags: mcp, get_labels, parameter-format
   Summary: Always validate repo parameter format before API calls
   Prevention: Add format check at function entry

📚 Lesson: "Sprint 12 - Graceful fallback for missing config"
   Tags: configuration, fallback
   Summary: Commands should work even without .env
   Prevention: Check for env vars, use sensible defaults

These lessons may inform your fix approach.
```

If no lessons found, display:
```
No related lessons found. This may be a new type of issue.
```
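
The tag-extraction pseudocode above can be sketched as real Python (an illustrative helper; `failed_tools`, `error_category`, and `component` are hypothetical inputs, and no MCP call is made here):

```python
def build_search_tags(failed_tools, error_category, component=None):
    """Derive lesson-search tags from a diagnostic issue.

    failed_tools: list of MCP tool names, e.g. ["get_labels"]
    error_category: e.g. "parameter-format" or "authentication"
    component: "mcp", "command", or None if unknown
    """
    tags = list(failed_tools)     # tool names first
    tags.append(error_category)   # broad error class
    if component:                 # narrow by component when known
        tags.append(component)
    # De-duplicate while preserving order
    seen = set()
    return [t for t in tags if not (t in seen or seen.add(t))]

build_search_tags(["get_labels"], "parameter-format", "mcp")
# → ["get_labels", "parameter-format", "mcp"]
```

The resulting list feeds straight into the `tags=` argument of the `search_lessons` call above.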

### Step 10: Propose Fix Approach

Based on the analysis, propose a specific fix:

```
Proposed Fix
============

APPROACH: [A/B/C from above]

CHANGES NEEDED:

1. File: plugins/projman/commands/labels-sync.md
   Change: Add warning box after Step 1 emphasizing repo must be detected

2. File: [if applicable]
   Change: [description]

RATIONALE:

[Explain why this fix addresses the root cause]

RISKS:

[Any potential issues with this approach]
```

## APPROVAL GATE 2

```
Proceed with this fix approach?

[Y] Yes, implement it
[N] No, try different approach
[M] Modify the approach (tell me what to change)
```

**STOP HERE AND WAIT FOR USER RESPONSE**

Do NOT implement until user approves.

### Step 11: Implement Fix

Only after user approves:

1. Create feature branch:
   ```bash
   git checkout -b fix/issue-[NUMBER]-[brief-description]
   ```

2. Make the code changes using Edit tool

3. Run relevant tests if they exist:
   ```bash
   cd mcp-servers/gitea && .venv/bin/python -m pytest tests/ -v
   ```

4. Show the changes to user:
   ```bash
   git diff
   ```

### Step 12: Present Changes

```
Changes Implemented
===================

Branch: fix/issue-80-labels-sync-instructions

Files Modified:
• plugins/projman/commands/labels-sync.md (+15, -3)

Diff Summary:
[Show git diff output]

Test Results:
• 23 passed, 0 failed (or N/A if no tests)
```

## APPROVAL GATE 3

```
Create PR with these changes?

[Y] Yes, create PR
[N] No, I want to modify something
[D] Discard changes
```

**STOP HERE AND WAIT FOR USER RESPONSE**

Do NOT create PR until user approves.

### Step 13: Create PR

Only after user approves:

1. Commit changes:
   ```bash
   git add -A
   git commit -m "fix: [description]

   [Longer explanation]

   Fixes #[ISSUE_NUMBER]

   Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>"
   ```

2. Push branch:
   ```bash
   git push -u origin fix/issue-[NUMBER]-[brief-description]
   ```

3. Create PR via API or MCP tools

4. **Switch back to development branch** (required for MCP issue operations):
   ```bash
   git checkout development
   ```

**⚠️ IMPORTANT: Issue will NOT auto-close**

PRs merged to `development` do NOT trigger Gitea's auto-close feature.
Auto-close only works when merging to the default branch (`main`).

You MUST manually close the issue in Step 15 after the PR is merged.

5. Add comment to original issue:
   ```
   mcp__plugin_projman_gitea__add_comment(
       repo=REPO_NAME,
       issue_number=ISSUE_NUMBER,
       comment="Fix proposed in PR #[PR_NUMBER]\n\nChanges:\n- [summary]\n\nPlease test after merge and report back."
   )
   ```

### Step 14: Report Completion

```
Debug Review Complete
=====================

Issue: #80 - get_labels fails without repo parameter
Status: Fix Proposed

PR Created: #81 - fix: improve labels-sync repo detection instructions
URL: http://gitea.hotserv.cloud/.../pulls/81

Next Steps:
1. Review and merge PR #81
2. In test project, pull latest plugin version
3. Run /debug-report to verify fix
4. Come back and run Step 15 to close issue and capture lesson
```

### Step 15: Verify, Close, and Capture Lesson

**This step runs AFTER the user has verified the fix works.**

**⚠️ MANDATORY: You MUST manually close the issue.**

PRs merged to `development` do NOT auto-close issues (Gitea only auto-closes
on merges to the default branch `main`). Always close manually after merge.

When user returns and confirms the fix is working:

**1. Close the issue (REQUIRED - won't auto-close):**

```
mcp__plugin_projman_gitea__update_issue(
    repo=REPO_NAME,
    issue_number=ISSUE_NUMBER,
    state="closed"
)
```

**2. Ask about lesson capture:**

Use AskUserQuestion:

```
This fix addressed [ERROR_TYPE] in [COMPONENT].

Would you like to capture this as a lesson learned?

Options:
- Yes, capture lesson (helps avoid similar issues in future)
- No, skip (trivial fix or already documented)
```

**3. If user chooses Yes, auto-generate lesson content:**

```
LESSON_TITLE = "Sprint [N] - [Brief description of fix]"
# Example: "Sprint 17 - MCP parameter validation"

LESSON_CONTENT = """
## Context

[What was happening when the issue occurred]
- Command/tool being used: [FAILED_TOOL]
- Error encountered: [ERROR_MESSAGE]

## Problem

[Root cause identified during investigation]

## Solution

[What was changed to fix it]
- Files modified: [LIST]
- PR: #[PR_NUMBER]

## Prevention

[How to avoid this in the future]

## Related

- Issue: #[ISSUE_NUMBER]
- PR: #[PR_NUMBER]
"""

LESSON_TAGS = [
    tool_name,       # e.g., "get_labels"
    error_category,  # e.g., "parameter-format"
    component,       # e.g., "mcp", "command"
    "bug-fix"
]
```
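
The title/tag generation above can be mirrored in a small Python helper (a sketch; field names follow the template, all values are placeholders):

```python
def build_lesson_meta(sprint, summary, tool, category, component):
    """Assemble the title and tag list for a lessons-learned entry."""
    return {
        "title": f"Sprint {sprint} - {summary}",
        "tags": [tool, category, component, "bug-fix"],
    }

build_lesson_meta(17, "MCP parameter validation",
                  "get_labels", "parameter-format", "mcp")["title"]
# → "Sprint 17 - MCP parameter validation"
```

Generating the metadata this way keeps the tags consistent with the search tags used in Step 9.5, so future searches find the lesson.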

**4. Show lesson preview and ask for approval:**

```
Lesson Preview
==============

Title: [LESSON_TITLE]
Tags: [LESSON_TAGS]

Content:
[LESSON_CONTENT]

Save this lesson? (Y/N/Edit)
```

**5. If approved, create the lesson:**

```
mcp__plugin_projman_gitea__create_lesson(
    repo=REPO_NAME,
    title=LESSON_TITLE,
    content=LESSON_CONTENT,
    tags=LESSON_TAGS,
    category="sprints"
)
```

**6. Report completion:**

```
Issue Closed & Lesson Captured
==============================

Issue #[N]: CLOSED
Lesson: "[LESSON_TITLE]" saved to wiki

This lesson will be surfaced in future /debug-review
sessions when similar errors are encountered.
```

## DO NOT

- **DO NOT** skip reading relevant files - this is MANDATORY
- **DO NOT** proceed past approval gates without user confirmation
- **DO NOT** guess at fixes without evidence from code
- **DO NOT** close issues until user confirms fix works (Step 15)
- **DO NOT** commit directly to development or main branches
- **DO NOT** skip the lessons learned search - past fixes inform better solutions

## If Investigation Finds No Bug

Sometimes investigation reveals the issue is:
- User error (didn't follow documented steps)
- Configuration issue (missing .env vars)
- Already fixed in a newer version

In this case:

```
Investigation Summary
=====================

FINDING: This does not appear to be a code bug.

ANALYSIS:
[Explanation of what you found]

RECOMMENDATION:
[ ] Close issue as "not a bug" - user error
[ ] Close issue as "duplicate" of #[X]
[ ] Add documentation to prevent confusion
[ ] Other: [specify]

Would you like me to add a comment explaining this finding?
```

## Error-to-File Quick Reference

```
Error Message                  → File to Check
─────────────────────────────────────────────────────────────────
"Use 'owner/repo' format"      → config.py, gitea_client.py
"404 Client Error.*orgs"       → gitea_client.py (_is_organization)
"No repository specified"      → Command .md file (Step 1)
"401 Unauthorized"             → config.py (token loading)
"labels not found"             → labels.py, gitea_client.py
"create local.*file"           → Command .md file (DO NOT section)
```
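
The quick-reference table above can be driven mechanically; a minimal Python sketch (patterns and file lists are copied from the table, nothing more is implied):

```python
import re

# Error-message pattern → candidate files to read (from the table above)
ERROR_FILE_MAP = [
    (r"Use 'owner/repo' format", ["config.py", "gitea_client.py"]),
    (r"404 Client Error.*orgs", ["gitea_client.py"]),
    (r"No repository specified", ["Command .md file (Step 1)"]),
    (r"401 Unauthorized", ["config.py"]),
    (r"labels not found", ["labels.py", "gitea_client.py"]),
    (r"create local.*file", ["Command .md file (DO NOT section)"]),
]

def files_to_check(error_message):
    """Return candidate files for an error message, or [] if unmapped."""
    for pattern, files in ERROR_FILE_MAP:
        if re.search(pattern, error_message):
            return files
    return []

files_to_check("ValueError: Use 'owner/repo' format...")
# → ["config.py", "gitea_client.py"]
```

An empty result means the error is not in the table and the investigation has to fall back on reading the traceback.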

## Visual Output

When executing this command, display the plugin header:

```
╔══════════════════════════════════════════════════════════════════╗
║  📋 PROJMAN                                                      ║
║  Debug Review                                                    ║
╚══════════════════════════════════════════════════════════════════╝
```

Then proceed with the investigation workflow.

plugins/projman/commands/debug.md (new file, 162 lines)
@@ -0,0 +1,162 @@
---
description: Diagnose issues and create reports, or investigate existing diagnostic issues
---

# Debug

## Skills Required

- skills/mcp-tools-reference.md
- skills/lessons-learned.md
- skills/git-workflow.md

## Purpose

Unified debugging command for diagnostics and issue investigation.

## Invocation

```
/debug          # Ask which mode
/debug report   # Run diagnostics, create issue
/debug review   # Investigate existing issues
```

## Mode Selection

If no subcommand provided, ask user:

1. **Report** - Run MCP tool diagnostics and create issue in marketplace
2. **Review** - Investigate existing diagnostic issues and propose fixes

---

## Mode: Report

Create structured issues in the marketplace repository.

### Prerequisites

Project `.env` must have:
```env
PROJMAN_MARKETPLACE_REPO=personal-projects/leo-claude-mktplace
```

### Workflow

#### Step 0: Select Report Type
- **Automated** - Run MCP tool diagnostics and report failures
- **User-Reported** - Gather structured feedback about a problem

#### For User-Reported (Step 0.1)
Gather via AskUserQuestion:
1. Which plugin/command was affected
2. What was the goal
3. What type of problem (error, missing feature, unexpected behavior, docs)
4. Problem description
5. Expected behavior
6. Workaround (optional)

#### Steps 1-2: Context Gathering
1. Gather project context (git remote, branch, pwd)
2. Detect sprint context (active milestone)
3. Read marketplace config

#### Steps 3-4: Diagnostic Suite (Automated Only)
Run MCP tools with explicit `repo` parameter:
- `validate_repo_org`
- `get_labels`
- `list_issues`
- `list_milestones`
- `suggest_labels`

Categorize: Parameter Format, Authentication, Not Found, Network, Logic
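
The categorization above can be approximated with a keyword heuristic; this sketch is an illustration only, not the plugin's actual logic:

```python
def categorize_error(message):
    """Map a raw error message to one of the diagnostic categories."""
    rules = [
        ("Parameter Format", ["owner/repo", "format", "invalid parameter"]),
        ("Authentication", ["401", "403", "unauthorized", "token"]),
        ("Not Found", ["404", "not found"]),
        ("Network", ["timeout", "connection", "refused"]),
    ]
    lowered = message.lower()
    for category, needles in rules:
        if any(n in lowered for n in needles):
            return category
    return "Logic"  # fallback: anything unexplained by the above
```

Order matters: the first matching rule wins, and anything unmatched falls through to the catch-all Logic category.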

#### Steps 5-6: Generate Labels and Issue
**Automated:** `Type/Bug`, `Source/Diagnostic`, `Agent/Claude` + suggested
**User-Reported:** Map problem type to labels

#### Step 7: Create Issue
**Use curl (not MCP)** - avoids branch protection issues

#### Step 8: Report to User
Show summary and link to created issue

### DO NOT (Report Mode)
- Attempt to fix anything - only report
- Create issues if all automated tests pass (unless user-reported)
- Use MCP tools to create issues in marketplace - always use curl

---

## Mode: Review

Investigate diagnostic issues and propose fixes with human approval.

### Workflow with Approval Gates

#### Steps 1-8: Investigation
1. Detect repository (git remote)
2. Fetch diagnostic issues: `list_issues(labels=["Source: Diagnostic"])`
3. Display issue list
4. User selects issue (AskUserQuestion)
5. Fetch full details: `get_issue(issue_number=...)`
6. Parse diagnostic report (failed tools, errors, hypothesis)
7. Map errors to files
8. Read relevant files - **MANDATORY before proposing fix**

#### Step 9: Investigation Summary
Present analysis to user.

**APPROVAL GATE 1:** "Does this analysis match your understanding?"
- STOP and wait for user response

#### Step 9.5: Search Lessons Learned
Search for related past fixes using `search_lessons`.

#### Step 10: Propose Fix
Present specific fix approach with changes and rationale.

**APPROVAL GATE 2:** "Proceed with this fix?"
- STOP and wait for user response

#### Steps 11-12: Implement
1. Create feature branch (`fix/issue-N-description`)
2. Make code changes
3. Run tests
4. Show diff to user

**APPROVAL GATE 3:** "Create PR with these changes?"
- STOP and wait for user response

#### Steps 13-15: Finalize
13. Commit and push
14. Create PR
15. After user verifies fix: Close issue (REQUIRED) and capture lesson

### Error-to-File Quick Reference

| Error Pattern | Check Files |
|---------------|-------------|
| "owner/repo format" | config.py, gitea_client.py |
| "404" + "orgs" | gitea_client.py |
| "401", "403" | config.py (token) |
| "No repository" | Command .md file |

### DO NOT (Review Mode)
- Skip reading relevant files
- Proceed past approval gates without confirmation
- Close issues until user confirms fix works
- Commit directly to development/main

---

## Visual Output

```
╔══════════════════════════════════════════════════════════════════╗
║  📋 PROJMAN                                                      ║
║  🔧 DEBUG                                                        ║
║  [Mode: Report | Review]                                         ║
╚══════════════════════════════════════════════════════════════════╝
```
@@ -1,477 +0,0 @@
---
description: Interactive setup wizard for projman plugin - configures MCP servers, credentials, and project settings
---

# Initial Setup Wizard

This command guides the user through complete projman setup interactively. It handles both first-time marketplace setup and per-project configuration.

## Important Context

- **This command uses Bash, Read, Write, and AskUserQuestion tools** - NOT MCP tools
- **MCP tools won't work until after setup + session restart**
- **Tokens must be entered manually by the user** for security (not typed into chat)

---

## Quick Path Detection

**FIRST**, check if system-level setup is already complete:

```bash
cat ~/.config/claude/gitea.env 2>/dev/null | grep -v "^#" | grep -v "PASTE_YOUR" | grep -v "example.com" | grep "GITEA_API_TOKEN=" && echo "SYSTEM_CONFIGURED" || echo "SYSTEM_NEEDS_SETUP"
```

**If SYSTEM_CONFIGURED:**

The system-level configuration already exists. Offer the user a choice:

Use AskUserQuestion:
- Question: "System configuration found. What would you like to do?"
- Header: "Setup Mode"
- Options:
  - "Quick project setup only (Recommended for new projects)"
  - "Full setup (re-check everything)"

**If "Quick project setup":**
- Skip directly to **Phase 4: Project-Level Configuration**
- This is equivalent to running `/project-init`

**If "Full setup":**
- Continue with Phase 1 below

**If SYSTEM_NEEDS_SETUP:**
- Continue with Phase 1 (full setup required)

---

## Phase 1: Environment Validation

### Step 1.1: Check Python Version

Run this command to verify Python 3.10+ is available:

```bash
python3 --version
```

**If version is below 3.10:**
- Stop setup and inform user: "Python 3.10 or higher is required. Please install it and run /initial-setup again."
- Provide installation hints based on OS (apt, brew, etc.)

**If successful:** Continue to next step.

---

## Phase 2: MCP Server Setup

### Step 2.1: Locate the Plugin Installation

The plugin is installed somewhere on the system. Find the MCP server directory by resolving the path.

First, identify where this plugin is installed. The MCP server should be accessible via the symlink structure. Look for the gitea MCP server:

```bash
# Find the plugin's mcp-servers directory (search common locations)
find ~/.claude ~/.config/claude -name "mcp_server" -path "*gitea*" 2>/dev/null | head -5
```

If found, extract the parent path (the gitea MCP server root).

**Alternative:** If the user knows the marketplace location, ask them:

Use AskUserQuestion:
- Question: "Where is the Leo Claude Marketplace installed?"
- Options:
  - "~/.claude/plugins/leo-claude-mktplace" (default Claude plugins location)
  - "Let me find it automatically"
  - Other (user provides path)

### Step 2.2: Check if Virtual Environment Exists

Once you have the MCP server path (e.g., `/path/to/mcp-servers/gitea`), check for the venv:

```bash
ls -la /path/to/mcp-servers/gitea/.venv/bin/python 2>/dev/null && echo "VENV_EXISTS" || echo "VENV_MISSING"
```

### Step 2.3: Create Virtual Environment (if missing)

If venv doesn't exist:

```bash
cd /path/to/mcp-servers/gitea && python3 -m venv .venv
```

Then install dependencies:

```bash
cd /path/to/mcp-servers/gitea && source .venv/bin/activate && pip install --upgrade pip && pip install -r requirements.txt && deactivate
```

**If pip install fails:**
- Show the error to the user
- Suggest: "Check your internet connection and try again. You can also manually run: `cd /path/to/mcp-servers/gitea && source .venv/bin/activate && pip install -r requirements.txt`"

### Step 2.4: NetBox MCP Server (Optional)

Check if the user wants to set up the NetBox MCP server (for cmdb-assistant plugin):

Use AskUserQuestion:
- Question: "Do you want to set up the NetBox MCP server for infrastructure management?"
- Options:
  - "Yes, set up NetBox MCP"
  - "No, skip NetBox (Recommended if not using cmdb-assistant)"

If yes, repeat steps 2.2-2.3 for `/path/to/mcp-servers/netbox`.

---

## Phase 3: System-Level Configuration

System configuration is stored in `~/.config/claude/` and contains credentials that work across all projects.

### Step 3.1: Create Config Directory

```bash
mkdir -p ~/.config/claude
```

### Step 3.2: Check Gitea Configuration

```bash
cat ~/.config/claude/gitea.env 2>/dev/null || echo "FILE_NOT_FOUND"
```

**If file doesn't exist:** Go to Step 3.3 (Create Template)
**If file exists:** Read it and check if values are placeholders (contain "example.com" or "your_" or "token_here"). If placeholders, go to Step 3.4 (Update Existing).
**If file has real values:** Go to Step 3.5 (Validate).

### Step 3.3: Create Gitea Configuration Template

Gather the Gitea server URL from the user.

Use AskUserQuestion:
- Question: "What is your Gitea server URL? (e.g., https://gitea.company.com)"
- Header: "Gitea URL"
- Options:
  - "https://gitea.hotserv.cloud" (if this is Leo's setup)
  - "Other (I'll provide the URL)"

If "Other", ask the user to type the URL.

Now create the configuration file with a placeholder for the token:

```bash
cat > ~/.config/claude/gitea.env << 'EOF'
# Gitea API Configuration
# Generated by /initial-setup
# Note: GITEA_ORG is configured per-project in .env

GITEA_API_URL=<USER_PROVIDED_URL>
GITEA_API_TOKEN=PASTE_YOUR_TOKEN_HERE
EOF
chmod 600 ~/.config/claude/gitea.env
```

Replace `<USER_PROVIDED_URL>` with the actual value from the user.

### Step 3.4: Token Entry Instructions

**CRITICAL: Do not ask the user to type the token into chat.**

Display these instructions clearly:

---

**Action Required: Add Your Gitea API Token**

I've created the configuration file at `~/.config/claude/gitea.env` but you need to add your API token manually for security.

**Steps:**
1. Open the file in your editor:
   ```
   nano ~/.config/claude/gitea.env
   ```
   Or use any editor you prefer.

2. Generate a Gitea token (if you don't have one):
   - Go to your Gitea instance → User Icon → Settings
   - Click "Applications" tab
   - Under "Manage Access Tokens", click "Generate New Token"
   - Name it something like "claude-code"
   - Select permissions: `repo` (all), `read:org`, `read:user`
   - Click "Generate Token" and copy it immediately

3. Replace `PASTE_YOUR_TOKEN_HERE` with your actual token

4. Save the file

---

Use AskUserQuestion:
- Question: "Have you added your Gitea token to ~/.config/claude/gitea.env?"
- Header: "Token Added?"
- Options:
  - "Yes, I've added the token"
  - "I need help generating a token"
  - "Skip for now (I'll do it later)"

**If "I need help":** Provide detailed instructions for their specific Gitea instance.
**If "Skip":** Warn that MCP tools won't work until configured, but continue with project setup.

### Step 3.5: Validate Gitea Configuration

Read the config file and verify it has non-placeholder values:

```bash
source ~/.config/claude/gitea.env && echo "URL: $GITEA_API_URL" && echo "TOKEN_LENGTH: ${#GITEA_API_TOKEN}"
```

If TOKEN_LENGTH is less than 10, or the token still contains "PASTE" or "your_", the token hasn't been set properly.

**Note:** GITEA_ORG is configured per-project in `.env`, not in the system-level config.

**Test connectivity (optional but recommended):**

```bash
source ~/.config/claude/gitea.env && curl -s -o /dev/null -w "%{http_code}" -H "Authorization: token $GITEA_API_TOKEN" "$GITEA_API_URL/user"
```

- **200:** Success! Credentials are valid.
- **401:** Invalid token.
- **404/Connection error:** Invalid URL or network issue.

Report the result to the user.

### Step 3.6: Git-Flow Configuration (Optional)

Use AskUserQuestion:
- Question: "Do you want to configure git-flow defaults for smart commits?"
- Header: "Git-Flow"
- Options:
  - "Yes, use recommended defaults (Recommended)"
  - "Yes, let me customize"
  - "No, skip git-flow configuration"

If yes with defaults:
```bash
cat > ~/.config/claude/git-flow.env << 'EOF'
# Git-Flow Default Configuration
GIT_WORKFLOW_STYLE=feature-branch
GIT_DEFAULT_BASE=development
GIT_AUTO_DELETE_MERGED=true
GIT_AUTO_PUSH=false
GIT_PROTECTED_BRANCHES=main,master,development,staging,production
GIT_COMMIT_STYLE=conventional
GIT_CO_AUTHOR=true
EOF
chmod 600 ~/.config/claude/git-flow.env
```

If customize, use AskUserQuestion for each setting.

---

## Phase 4: Project-Level Configuration

Project configuration is stored in `.env` in the current project root.

### Step 4.1: Verify Current Directory

Confirm we're in the right place:

```bash
pwd && git rev-parse --show-toplevel 2>/dev/null || echo "NOT_A_GIT_REPO"
```

If not a git repo, ask the user:

Use AskUserQuestion:
- Question: "The current directory doesn't appear to be a git repository. Where should I create the project configuration?"
- Options:
  - "This directory is correct, continue anyway"
  - "Let me navigate to the right directory first"

### Step 4.2: Check Existing Project Configuration

```bash
cat .env 2>/dev/null || echo "FILE_NOT_FOUND"
```

If `.env` exists and has `GITEA_REPO=` set to a non-placeholder value, skip to Phase 5.

### Step 4.3: Infer Organization and Repository from Git Remote

Try to detect org and repo name automatically:

```bash
git remote get-url origin 2>/dev/null
```

Parse the output to extract both organization and repository:
- `git@gitea.example.com:org/repo-name.git` → org: `org`, repo: `repo-name`
- `https://gitea.example.com/org/repo-name.git` → org: `org`, repo: `repo-name`

Extract organization:
```bash
git remote get-url origin 2>/dev/null | sed 's/.*[:/]\([^/]*\)\/[^/]*$/\1/'
```

Extract repository:
```bash
git remote get-url origin 2>/dev/null | sed 's/.*[:/]\([^/]*\)\.git$/\1/' | sed 's/.*\/\([^/]*\)$/\1/'
```
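
For clarity, the same parsing can be sketched in Python (assumes only the two remote-URL shapes shown above; the sed commands remain the canonical method):

```python
def parse_remote(url):
    """Split a git remote URL into (org, repo).

    Handles the two shapes documented above:
      git@gitea.example.com:org/repo-name.git
      https://gitea.example.com/org/repo-name.git
    """
    # SSH remotes put the path after the colon; HTTPS uses the full URL path
    path = url.split(":", 1)[-1] if url.startswith("git@") else url
    path = path.rstrip("/")
    if path.endswith(".git"):
        path = path[:-4]
    org, _, repo = path.rpartition("/")
    return org.rpartition("/")[-1], repo

parse_remote("git@gitea.example.com:org/repo-name.git")
# → ("org", "repo-name")
```

Either extraction route should yield the same org/repo pair that the API validation in the next step checks.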

### Step 4.4: Validate Repository via Gitea API

**Before asking for confirmation**, verify the repository exists and is accessible:

```bash
source ~/.config/claude/gitea.env
curl -s -o /dev/null -w "%{http_code}" -H "Authorization: token $GITEA_API_TOKEN" "$GITEA_API_URL/repos/<detected-org>/<detected-repo>"
```

**Based on response:**

| HTTP Code | Meaning | Action |
|-----------|---------|--------|
| **200** | Repository exists and accessible | **Auto-fill without asking** - skip to Step 4.7 |
| **404** | Repository not found | Ask user to confirm/correct (Step 4.5) |
| **401/403** | Token issue or no access | Warn about permissions, ask to confirm |
| **Other** | Network/server issue | Warn, ask to confirm manually |

**If 200 OK:** Display confirmation message and skip to Step 4.7:
"Verified: Repository '<detected-org>/<detected-repo>' exists and is accessible."

### Step 4.5: Confirm Organization (only if API validation failed)

Use AskUserQuestion:
- Question: "Repository '<detected-org>/<detected-repo>' was not found. Is '<detected-org>' the correct organization?"
- Header: "Organization"
- Options:
  - "Yes, that's correct"
  - "No, let me specify the correct organization"

If no, ask user to provide the correct organization name.

### Step 4.6: Confirm Repository Name (only if API validation failed)

Use AskUserQuestion:
- Question: "Is '<detected-repo-name>' the correct Gitea repository name?"
- Header: "Repository"
- Options:
  - "Yes, that's correct"
  - "No, let me specify the correct name"

If no, ask user to provide the correct name.

**After user provides corrections, re-validate via API (Step 4.4)** to ensure the corrected values are valid.

### Step 4.7: Create Project Configuration

```bash
cat > .env << 'EOF'
# Project Configuration for projman
# Generated by /initial-setup

GITEA_ORG=<ORG_NAME>
GITEA_REPO=<REPO_NAME>
EOF
```

Replace `<ORG_NAME>` and `<REPO_NAME>` with the confirmed values.

**Important:** Check if `.env` is in `.gitignore`:

```bash
grep -q "^\.env$" .gitignore 2>/dev/null && echo "GITIGNORE_OK" || echo "GITIGNORE_MISSING"
```

If not in `.gitignore`, warn the user:
"Warning: `.env` is not in your `.gitignore`. Consider adding it to prevent accidentally committing credentials."
---

## Phase 5: Final Validation and Next Steps

### Step 5.1: Summary Report

Display a summary of what was configured:

```
╔══════════════════════════════════════════════════════════════╗
║                       SETUP COMPLETE                         ║
╠══════════════════════════════════════════════════════════════╣
║ MCP Server (Gitea):  ✓ Installed                             ║
║ System Config:       ✓ ~/.config/claude/gitea.env            ║
║ Project Config:      ✓ ./.env                                ║
║ Gitea Connection:    ✓ Verified (or ⚠ Not tested)            ║
╚══════════════════════════════════════════════════════════════╝
```
|
||||
|
||||
### Step 5.2: Session Restart Notice
|
||||
|
||||
**IMPORTANT:** Display this notice clearly:
|
||||
|
||||
---
|
||||
|
||||
**⚠️ Session Restart Required**
|
||||
|
||||
The MCP server has been configured but won't be available until you restart your Claude Code session.
|
||||
|
||||
**To complete setup:**
|
||||
1. Exit this Claude Code session (type `/exit` or close the terminal)
|
||||
2. Start a new Claude Code session in this project
|
||||
3. The Gitea MCP tools will now be available
|
||||
|
||||
**After restart, you can:**
|
||||
- Run `/labels-sync` to sync your label taxonomy
|
||||
- Run `/sprint-plan` to start planning
|
||||
- Use MCP tools like `list_issues`, `create_issue`, etc.
|
||||
|
||||
---
|
||||
|
||||
### Step 5.3: Troubleshooting Checklist

If something isn't working after restart, check:

1. **MCP server not found:** Verify the venv exists at the expected path
2. **Authentication failed:** Re-check the token in `~/.config/claude/gitea.env`
3. **Wrong repository:** Verify `GITEA_REPO` in `./.env` matches Gitea exactly
4. **Network error:** Ensure the Gitea URL is accessible from this machine

---
## Visual Output

When executing this command, display the plugin header:

```
╔══════════════════════════════════════════════════════════════════╗
║  📋 PROJMAN                                                       ║
║  ⚙️ SETUP                                                         ║
║  Initial Setup Wizard                                            ║
╚══════════════════════════════════════════════════════════════════╝
```

Then proceed with the setup workflow.
## Re-Running This Command

This command is safe to run multiple times:
- Existing venvs are skipped (not recreated)
- Existing config files are checked for validity
- Only missing or placeholder values are updated
- Project config can be regenerated for new projects

---
## Quick Reference: Files Created

| File | Purpose | Contains |
|------|---------|----------|
| `~/.config/claude/gitea.env` | System credentials | URL, token, org |
| `~/.config/claude/git-flow.env` | Git defaults | Workflow settings |
| `./.env` | Project settings | Repository name |
| `<mcp-server>/.venv/` | Python environment | Dependencies |
---
description: Fetch and validate label taxonomy from Gitea, create missing required labels
---

# Sync Label Taxonomy

## Skills Required

- skills/mcp-tools-reference.md
- skills/repo-validation.md
- skills/label-taxonomy/labels-reference.md

## Purpose

Fetch the current label taxonomy from Gitea (organization + repository labels), validate that required labels exist, and create any missing ones.

## Invocation

Run `/labels-sync` when setting up the plugin or after taxonomy updates.

## Workflow

1. **Detect Repository** - Parse `git remote get-url origin` to get `owner/repo` (SSH `ssh://git@host:port/owner/repo.git`, short SSH `git@host:owner/repo.git`, or HTTPS `https://host/owner/repo.git`); store it as `REPO_NAME` for all subsequent MCP calls
2. **Validate Repository** - Use `validate_repo_org(repo=REPO_NAME)` to check whether the owner is an organization or a user account
3. **Fetch Labels** - Use `get_labels(repo=REPO_NAME)` to get both organization labels (if org-owned) and repository labels
4. **Display Taxonomy** - Show label counts grouped by category (Type/*, Priority/*, etc.)
5. **Check Required Labels** - Verify Type/*, Priority/*, Complexity/*, Effort/* exist
6. **Create Missing** - Use `create_label_smart`, which auto-detects org vs repo level
7. **Report Results** - Summarize what was found and created
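The repository detection in step 1 can be sketched as a small shell helper covering the three remote URL formats; the function name is an assumption for illustration:

```shell
# Normalize a git remote URL to "owner/repo" (sketch).
# Handles: ssh://git@host:port/owner/repo.git, git@host:owner/repo.git,
# and https://host/owner/repo.git
parse_owner_repo() {
  # strip scheme, then "git@", then "host[:port]" plus separator, then ".git"
  echo "$1" | sed -E 's#^[a-z]+://##; s#^git@##; s#^[^/:]+(:[0-9]+)?[:/]##; s#\.git$##'
}
```

In use, `REPO_NAME` would be set from `parse_owner_repo "$(git remote get-url origin)"`.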
## Required Label Categories
| Category | Required Labels |
|----------|-----------------|
| Type/* | Bug, Feature, Refactor, Documentation, Test, Chore |
| Priority/* | Low, Medium, High, Critical |
| Complexity/* | Simple, Medium, Complex |
| Effort/* | XS, S, M, L, XL |

Note: the effort category may be named "Effort" or "Efforts" - match the repository's existing convention.

## Creating Missing Labels

Use `create_label_smart`, which automatically creates labels at the correct level:

- **Organization level**: Type/*, Priority/*, Complexity/*, Effort/*, Risk/*, Source/*, Agent/*
- **Repository level**: Component/*, Tech/*

```
mcp__plugin_projman_gitea__create_label_smart(repo=REPO_NAME, name="Type/Bug", color="d73a4a")
```

**Alternative (explicit control):**
- Org labels: `create_org_label(org="org-name", name="Type/Bug", color="d73a4a")`
- Repo labels: `create_label(repo=REPO_NAME, name="Component/Backend", color="5319e7")`

Use the label format that matches existing labels in the repo (slash `/` or colon-space `: `).

## DO NOT

- **DO NOT** call MCP tools without the `repo` parameter - they will fail
- **DO NOT** create any local files - this command only interacts with Gitea
- **DO NOT** ask the user questions - execute autonomously
- **DO NOT** create a "labels reference file" - labels are fetched dynamically from Gitea
## MCP Tools Used

All tools require the `repo` parameter in `owner/repo` format:

| Tool | Purpose |
|------|---------|
| `validate_repo_org(repo=...)` | Check if owner is organization or user |
| `get_labels(repo=...)` | Fetch all labels (org + repo) |
| `create_label_smart(repo=..., name=..., color=...)` | Create missing labels at the correct level |
| `create_label(repo=..., name=..., color=...)` | Create repository-level labels explicitly |
## Expected Output

```
Label Taxonomy Sync
===================

Detecting repository from git remote...
Repository: personal-projects/your-repo-name
Owner type: Organization

Fetching labels from Gitea...

Current Label Taxonomy:
- Organization Labels: 27
- Repository Labels: 16
- Total: 43 labels

Organization Labels by Category:
  Type/*: 6 labels
  Priority/*: 4 labels
  Complexity/*: 3 labels
  Effort/*: 5 labels
  ...

Repository Labels by Category:
  Component/*: 9 labels
  Tech/*: 7 labels

Required Labels Check:
  Type/*: 6/6 present ✓
  Priority/*: 4/4 present ✓
  Complexity/*: 3/3 present ✓
  Effort/*: 5/5 present ✓

All required labels present. Label taxonomy is ready for use.
```
## Label Format Detection

Labels may use different naming conventions:
- Slash format: `Type/Bug`, `Priority/High`
- Colon-space format: `Type: Bug`, `Priority: High`

When creating missing labels, match the format used by existing labels in the repository.
## Troubleshooting

**Error: Use 'owner/repo' format**
- You forgot to pass the `repo` parameter to the MCP tool
- Go back to Step 1 and detect the repo from the git remote

**Empty organization labels**
- If the owner is a user account (not an org), organization labels will be empty
- This is expected - user accounts only have repository-level labels

**Git remote not found**
- Ensure you're running in a directory with a git repository
- Check that the `origin` remote is configured
## Visual Output

When executing this command, display the plugin header:

```
╔══════════════════════════════════════════════════════════════════╗
║  📋 PROJMAN                                                       ║
║  Labels Sync                                                     ║
╚══════════════════════════════════════════════════════════════════╝
```

Then proceed with the sync workflow.
## When to Run

Run `/labels-sync` when:
- Setting up the plugin for the first time
- You notice missing labels in suggestions
- New labels are added to Gitea
- After major taxonomy updates
---
description: Quick project setup - configures only project-level settings (assumes system setup is complete)
---

# Project Initialization

Fast setup for a new project when system-level configuration is already complete.

**Use this when:**
- You've already run `/initial-setup` on this machine
- You're starting work on a new project/repository
- You just need to create the project `.env` file

**Use `/initial-setup` instead if:**
- This is your first time using the plugin
- MCP tools aren't working (might need system setup)

---
## Pre-Flight Check

### Step 1: Verify System Configuration Exists

Quickly verify system setup is complete:

```bash
cat ~/.config/claude/gitea.env 2>/dev/null | grep -v "^#" | grep -v "PASTE_YOUR" | grep "GITEA_API_TOKEN=" && echo "SYSTEM_OK" || echo "SYSTEM_MISSING"
```

**If SYSTEM_MISSING:**

Display this message and stop:

---

**System configuration not found or incomplete.**

It looks like the system-level setup hasn't been completed yet. Please run:

```
/initial-setup
```

This will configure both system credentials and this project.

---

**If SYSTEM_OK:** Continue to project setup.

---
## Project Setup

### Step 2: Verify Current Directory

```bash
pwd && git rev-parse --show-toplevel 2>/dev/null || echo "NOT_A_GIT_REPO"
```

If not a git repo, use AskUserQuestion:
- Question: "This doesn't appear to be a git repository. Continue anyway?"
- Header: "Directory"
- Options:
  - "Yes, continue here"
  - "No, I'll navigate to the correct directory"
### Step 3: Check for Existing Configuration

```bash
cat .env 2>/dev/null | grep "GITEA_REPO=" || echo "NOT_CONFIGURED"
```

**If already configured:**

Use AskUserQuestion:
- Question: "This project already has GITEA_ORG and GITEA_REPO configured. What would you like to do?"
- Header: "Existing"
- Options:
  - "Keep existing configuration"
  - "Reconfigure (replace current settings)"

**If "Keep":** End with a success message.
### Step 4: Detect Organization and Repository

Try to auto-detect from the git remote:

```bash
git remote get-url origin 2>/dev/null
```

Extract the organization:
```bash
git remote get-url origin 2>/dev/null | sed 's/.*[:/]\([^/]*\)\/[^/]*$/\1/'
```

Extract the repository:
```bash
git remote get-url origin 2>/dev/null | sed 's/.*[:/]\([^/]*\)\.git$/\1/' | sed 's/.*\/\([^/]*\)$/\1/'
```
### Step 5: Validate Repository via Gitea API

Verify the repository exists and is accessible:

```bash
source ~/.config/claude/gitea.env
curl -s -o /dev/null -w "%{http_code}" -H "Authorization: token $GITEA_API_TOKEN" "$GITEA_API_URL/repos/<detected-org>/<detected-repo>"
```

| HTTP Code | Action |
|-----------|--------|
| **200** | Auto-fill - display "Verified: <org>/<repo> exists" and skip to Step 8 |
| **404** | Repository not found - proceed to Step 6 |
| **401/403** | Permission issue - warn and proceed to Step 6 |
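The branching on the HTTP status from the curl probe can be sketched as below; the helper name is an assumption for illustration:

```shell
# Decide the next step from the repository-probe HTTP status code (sketch).
handle_repo_check() {
  case "$1" in
    200)     echo "verified"   ;;  # auto-fill and skip ahead
    404)     echo "not-found"  ;;  # fall through to manual confirmation
    401|403) echo "permission" ;;  # warn about token access, then confirm
    *)       echo "error"      ;;  # network or server problem
  esac
}
```

In use, the status captured from curl would be passed in: `handle_repo_check "$HTTP_CODE"`.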
### Step 6: Confirm Organization (only if API validation failed)

Use AskUserQuestion:
- Question: "Repository not found. Is '<detected-org>' the correct organization?"
- Header: "Organization"
- Options:
  - "Yes, that's correct"
  - "No, let me specify"

If "No", ask the user to type the correct organization name.

### Step 7: Confirm Repository Name (only if API validation failed)

Use AskUserQuestion:
- Question: "Is '<detected-repo-name>' the correct repository name?"
- Header: "Repository"
- Options:
  - "Yes, that's correct"
  - "No, let me specify"

If "No", ask the user to type the correct name.

**After corrections, re-validate via API (Step 5).**
### Step 8: Create/Update Project Configuration

**If `.env` exists:**

Append the new values, preserving any existing content:

```bash
echo "" >> .env
echo "# Added by /project-init" >> .env
echo "GITEA_ORG=<ORG_NAME>" >> .env
echo "GITEA_REPO=<REPO_NAME>" >> .env
```

**If `.env` doesn't exist:**

```bash
cat > .env << 'EOF'
# Project Configuration for projman
# Generated by /project-init

GITEA_ORG=<ORG_NAME>
GITEA_REPO=<REPO_NAME>
EOF
```
### Step 9: Check .gitignore

```bash
grep -q "^\.env$" .gitignore 2>/dev/null && echo "GITIGNORE_OK" || echo "GITIGNORE_MISSING"
```

**If GITIGNORE_MISSING:**

Use AskUserQuestion:
- Question: "`.env` is not in `.gitignore`. Add it to prevent committing secrets?"
- Header: "gitignore"
- Options:
  - "Yes, add .env to .gitignore (Recommended)"
  - "No, I'll handle it manually"

If yes:
```bash
echo ".env" >> .gitignore
```

---
## Complete

Display a success message:

```
╔══════════════════════════════════════════════════════════════╗
║                    PROJECT CONFIGURED                        ║
╠══════════════════════════════════════════════════════════════╣
║ Organization:  <ORG_NAME>                                    ║
║ Repository:    <REPO_NAME>                                   ║
║ Config file:   ./.env                                        ║
╚══════════════════════════════════════════════════════════════╝

You're ready to use projman commands:
  • /sprint-plan   - Start sprint planning
  • /sprint-status - Check progress
  • /labels-sync   - Sync label taxonomy
```

---
## Visual Output

When executing this command, display the plugin header:

```
╔══════════════════════════════════════════════════════════════════╗
║  📋 PROJMAN                                                       ║
║  ⚙️ SETUP                                                         ║
║  Project Initialization                                          ║
╚══════════════════════════════════════════════════════════════════╝
```

Then proceed with the project setup workflow.
## Troubleshooting

**MCP tools not working?**
- Run `/initial-setup` for full setup including the MCP server
- Restart your Claude Code session after setup

**Wrong repository configured?**
- Edit `.env` directly: `nano .env`
- Or run `/project-init` again and choose "Reconfigure"
---
description: Sync project configuration with current git remote - use after changing repository location or organization
---

# Project Sync

Updates project configuration when the git remote URL has changed (repository moved, renamed, or organization changed).

**Use this when:**
- You moved the repository to a different organization
- You renamed the repository
- You changed the git remote URL
- The SessionStart hook detected a mismatch

---
## Step 1: Verify System Configuration

```bash
cat ~/.config/claude/gitea.env 2>/dev/null | grep -v "^#" | grep -v "PASTE_YOUR" | grep "GITEA_API_TOKEN=" && echo "SYSTEM_OK" || echo "SYSTEM_MISSING"
```

**If SYSTEM_MISSING:** Stop and instruct the user to run `/initial-setup` first.

---
## Step 2: Read Current Configuration

Read the current `.env` values:

```bash
cat .env 2>/dev/null
```

Extract the current values:
- `CURRENT_ORG` from `GITEA_ORG=...`
- `CURRENT_REPO` from `GITEA_REPO=...`

**If `.env` doesn't exist or has no GITEA values:** Redirect to `/project-init`.

---
## Step 3: Detect Git Remote Values

Get the current git remote:

```bash
git remote get-url origin 2>/dev/null
```

Extract the organization:
```bash
git remote get-url origin 2>/dev/null | sed 's/.*[:/]\([^/]*\)\/[^/]*$/\1/'
```

Extract the repository:
```bash
git remote get-url origin 2>/dev/null | sed 's/.*[:/]\([^/]*\)\.git$/\1/' | sed 's/.*\/\([^/]*\)$/\1/'
```

---
## Step 4: Compare Values

Compare the current `.env` values with the detected git remote values:

| Scenario | Action |
|----------|--------|
| **Both match** | Display "Configuration is in sync" and exit |
| **Organization changed** | Proceed to Step 5 |
| **Repository changed** | Proceed to Step 5 |
| **Both changed** | Proceed to Step 5 |

**If already in sync:**

```
╔══════════════════════════════════════════════════════════════╗
║                  CONFIGURATION IN SYNC                       ║
╠══════════════════════════════════════════════════════════════╣
║ Organization:  <ORG_NAME>                                    ║
║ Repository:    <REPO_NAME>                                   ║
║ Git Remote:    matches .env                                  ║
╚══════════════════════════════════════════════════════════════╝
```

Exit here if in sync.

---
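The comparison in Step 4 reduces to a pair of string checks; a minimal sketch, with hypothetical function and parameter names:

```shell
# Report whether .env matches the detected git remote (sketch).
sync_status() {
  # $1=.env org, $2=.env repo, $3=remote org, $4=remote repo
  if [ "$1" = "$3" ] && [ "$2" = "$4" ]; then
    echo "in-sync"   # nothing to do, exit early
  else
    echo "changed"   # show the change and ask to update
  fi
}
```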
## Step 5: Show Detected Changes

Display the detected changes to the user:

```
╔══════════════════════════════════════════════════════════════╗
║                REPOSITORY CHANGE DETECTED                    ║
╠══════════════════════════════════════════════════════════════╣
║                Current .env    │  Git Remote                 ║
║ Organization:  <OLD_ORG>       │  <NEW_ORG>                  ║
║ Repository:    <OLD_REPO>      │  <NEW_REPO>                 ║
╚══════════════════════════════════════════════════════════════╝
```

---
## Step 6: Validate New Repository via Gitea API

Verify the new repository exists and is accessible:

```bash
source ~/.config/claude/gitea.env
curl -s -o /dev/null -w "%{http_code}" -H "Authorization: token $GITEA_API_TOKEN" "$GITEA_API_URL/repos/<NEW_ORG>/<NEW_REPO>"
```

| HTTP Code | Action |
|-----------|--------|
| **200** | Repository verified - proceed to Step 7 |
| **404** | Repository not found - ask user to confirm (Step 6a) |
| **401/403** | Permission issue - warn and ask to confirm |

### Step 6a: Confirm if API Validation Failed

Use AskUserQuestion:
- Question: "The new repository '<NEW_ORG>/<NEW_REPO>' was not found via API. Update configuration anyway?"
- Header: "Not Found"
- Options:
  - "Yes, update anyway (I'll fix the remote later)"
  - "No, let me fix the git remote first"
  - "Let me specify different values"

**If "specify different values":** Ask for the correct org and repo, then re-validate.

---
## Step 7: Confirm Update

Use AskUserQuestion:
- Question: "Update project configuration to match git remote?"
- Header: "Confirm"
- Options:
  - "Yes, update .env (Recommended)"
  - "No, keep current configuration"

**If "No":** Exit without changes.

---
## Step 8: Update Configuration

Update the `.env` file with the new values:

```bash
# Update GITEA_ORG
sed -i 's/^GITEA_ORG=.*/GITEA_ORG=<NEW_ORG>/' .env

# Update GITEA_REPO
sed -i 's/^GITEA_REPO=.*/GITEA_REPO=<NEW_REPO>/' .env
```

Alternatively, if sed doesn't work well, read the file, replace the values, and write it back.

---
## Step 9: Verify Update

Read the updated `.env` and display confirmation:

```
╔══════════════════════════════════════════════════════════════╗
║                  CONFIGURATION UPDATED                       ║
╠══════════════════════════════════════════════════════════════╣
║ Organization:  <NEW_ORG>                                     ║
║ Repository:    <NEW_REPO>                                    ║
║ Status:        In sync with git remote                       ║
╚══════════════════════════════════════════════════════════════╝

Your project configuration has been updated.
MCP tools will now use the new repository.
```

---
## Visual Output

When executing this command, display the plugin header:

```
╔══════════════════════════════════════════════════════════════════╗
║  📋 PROJMAN                                                       ║
║  ⚙️ SETUP                                                         ║
║  Project Sync                                                    ║
╚══════════════════════════════════════════════════════════════════╝
```

Then proceed with the sync workflow.
## Troubleshooting

**"Repository not found" but it exists:**
- Check that your Gitea token has access to the new organization
- Verify the repository name matches exactly (case-sensitive)
- Ensure your token has `repo` permissions

**Git remote URL is wrong:**
- Fix it first: `git remote set-url origin <correct-url>`
- Then run `/project-sync` again

**Want to revert the change:**
- Edit `.env` manually: `nano .env`
- Or run `/project-sync` after fixing the git remote
---
description: View proposal and implementation hierarchy with status tracking
---
# Proposal Status

## Skills Required

- skills/mcp-tools-reference.md
- skills/wiki-conventions.md

## Purpose

View the status of all change proposals (`Change VXX.X.X: Proposal`) and their implementations (`Change VXX.X.X: Proposal (Implementation N)`) in the Gitea Wiki, along with linked issues and lessons learned.

## Invocation

```
/proposal-status                          # Show all proposals
/proposal-status --version V04.1.0        # Show a specific version
/proposal-status --status "In Progress"   # Filter by status
```
## Workflow

1. **Fetch Wiki Pages** - Use `list_wiki_pages()` to get all pages
2. **Filter Proposals** - Match the `Change V*: Proposal*` pattern
3. **Parse Structure** - Group implementations under their parent proposals
4. **Extract Status** - Read page metadata (In Progress, Implemented, Abandoned)
5. **Fetch Linked Artifacts** - Find issues and lessons referencing each implementation
6. **Display Tree View** - Show the hierarchy with status and links

## MCP Tools Used

- `list_wiki_pages()` - List all wiki pages
- `get_wiki_page(page_name)` - Get page content for status extraction
- `list_issues(state, labels)` - Find linked issues
- `search_lessons(query, tags)` - Find linked lessons
## Status Definitions

| Status | Meaning |
|--------|---------|
| **Pending** | Proposal created but no implementation started |
| **In Progress** | At least one implementation is active |
| **Implemented** | All planned implementations complete |
| **Abandoned** | Proposal was cancelled or superseded |
## Example Output

```
Proposal Status Report
======================

Change V04.1.0: Wiki-Based Planning Workflow [In Progress]
├── Implementation 1 [In Progress] - Started: 2026-01-26
│   ├── Issues: #161 (closed), #162 (closed), #163 (closed), #164 (open)
│   └── Lessons: (pending - sprint not closed)
│
└── (No additional implementations planned)

Change V04.0.0: MCP Server Consolidation [Implemented]
├── Implementation 1 [Implemented] - 2026-01-15 to 2026-01-20
│   ├── Issues: #150 (closed), #151 (closed), #152 (closed)
│   └── Lessons:
│       • Sprint 15 - MCP Server Symlink Best Practices
│       • Sprint 15 - Venv Path Resolution in Plugins
│
└── (Complete)

Change V03.2.0: Label Taxonomy Sync [Implemented]
└── Implementation 1 [Implemented] - 2026-01-10 to 2026-01-12
    ├── Issues: #140 (closed), #141 (closed)
    └── Lessons:
        • Sprint 14 - Organization vs Repository Labels

Summary:
- Total Proposals: 3
- In Progress: 1
- Implemented: 2
- Pending: 0
```
## Implementation Notes

**Page Name Parsing:**
- Proposals: `Change VXX.X.X: Proposal` or `Change Sprint-NN: Proposal`
- Implementations: `Change VXX.X.X: Proposal (Implementation N)`

**Status Extraction:**
- Parse the `> **Status:**` line from the page metadata
- Default to "Unknown" if not found

**Issue Linking:**
- Search for issues containing the wiki link in the body
- Or search for issues with a `[Sprint XX]` prefix matching the implementation

**Lesson Linking:**
- Search lessons with the implementation link in their metadata
- Or search by version/sprint tags
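The status extraction described above can be sketched as a one-line filter over the page markdown; the function name is an assumption:

```shell
# Read page markdown on stdin; print the Status value, or "Unknown"
# when no "> **Status:** ..." line is present (sketch).
extract_status() {
  sed -n 's/^> \*\*Status:\*\* *//p' | head -n 1 | grep . || echo "Unknown"
}
```

In use: `get_wiki_page` output piped through `extract_status`.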
## Visual Output

When executing this command, display the plugin header:

```
╔══════════════════════════════════════════════════════════════════╗
║  📋 PROJMAN                                                       ║
║  Proposal Status                                                 ║
╚══════════════════════════════════════════════════════════════════╝
```

Then proceed with the status report.
---
name: review
description: Pre-sprint-close code quality review
agent: code-reviewer
---

# Code Review for Sprint Close

## Skills Required

- skills/review-checklist.md
## Purpose

Review recent code changes for quality issues before closing the sprint.

## Invocation

Run `/review` before `/sprint-close` to catch issues.
## Workflow

1. **Determine Scope** - Sprint files or recent commits (`git diff --name-only HEAD~5`)
2. **Read Files** - Use the Read tool for each file in scope
3. **Scan for Patterns** - Check each category from the review checklist
4. **Compile Findings** - Group by severity (Critical, Warning, Recommendation)
5. **Report Verdict** - READY / NEEDS ATTENTION / BLOCK

## Review Categories

See `skills/review-checklist.md` for complete patterns:
- Debug artifacts (TODO, console.log, commented code)
- Code quality (long functions, deep nesting, duplication)
- Security (hardcoded secrets, SQL injection, disabled SSL)
- Error handling (bare except, swallowed exceptions)

## Output Format

Provide a structured report:

```
## Sprint Review Summary

### Critical Issues (Block Sprint Close)
- [file:line] Description

### Warnings (Should Address)
- [file:line] Description

### Recommendations (Nice to Have)
- [file:line] Description

### Clean Files
- List of files with no issues found
```
## Scope
|
||||
|
||||
If sprint context is available from projman, limit review to files touched in current sprint.
|
||||
Otherwise, review staged changes or changes in the last 5 commits.
|
||||
|
||||
## How to Determine Scope
|
||||
|
||||
1. **Check for sprint context**: Look for `.projman/current-sprint.json` or similar
|
||||
2. **Fall back to git changes**: Use `git diff --name-only HEAD~5` or staged files
|
||||
3. **Filter by file type**: Focus on code files (.py, .js, .ts, .go, .rs, etc.)
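The fallback logic above can be sketched as a small helper. This is an illustrative sketch, not part of the command: the extension list and the shape of the two inputs (sprint-tracked files vs. `git diff --name-only HEAD~5` output) are assumptions.

```python
# Illustrative sketch of the scope-determination fallback described above.
# The extension set is an assumption; extend it as needed.
CODE_EXTENSIONS = (".py", ".js", ".ts", ".go", ".rs")

def determine_scope(sprint_files, git_changed_files):
    """Prefer sprint-tracked files; fall back to recent git changes.

    Either list would come from sprint context or from
    `git diff --name-only HEAD~5` respectively.
    """
    candidates = sprint_files if sprint_files else git_changed_files
    # Step 3 above: keep code files only.
    return [f for f in candidates if f.endswith(CODE_EXTENSIONS)]
```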

## Execution Steps

1. Determine scope (sprint files or recent commits)
2. For each file in scope:
   - Read the file content
   - Scan for patterns in each category
   - Record findings with file:line references
3. Compile findings into the structured report
4. Provide recommendation: READY / NEEDS ATTENTION / BLOCK
- Rewrite or refactor code automatically
- Make changes without explicit approval
- Review files outside sprint/change scope
- Spend excessive time on style issues

## Visual Output

When executing this command, display the plugin header:

```
╔══════════════════════════════════════════════════════════════════╗
║ 📋 PROJMAN ║
@@ -85,12 +49,3 @@ When executing this command, display the plugin header:
║ Code Review ║
╚══════════════════════════════════════════════════════════════════╝
```

Then proceed with the review workflow.

## Do NOT

- Rewrite or refactor code automatically
- Make changes without explicit approval
- Review files outside the sprint/change scope
- Spend excessive time on style issues (assume formatters handle this)

90
plugins/projman/commands/setup.md
Normal file
@@ -0,0 +1,90 @@
---
description: Configure projman - full setup, quick project init, or sync after repo move
---

# Setup

## Skills Required

- skills/mcp-tools-reference.md
- skills/repo-validation.md
- skills/setup-workflows.md

## Purpose

Unified setup command for all configuration needs.

**Important:**
- Uses Bash, Read, Write, AskUserQuestion - NOT MCP tools
- MCP tools won't work until after setup + session restart
- Tokens must be entered manually for security

## Invocation

```
/setup          # Auto-detect appropriate mode
/setup --full   # Full wizard (MCP + system + project)
/setup --quick  # Project-only setup
/setup --sync   # Update after repo move
```

## Mode Detection

If no argument provided, auto-detect:

1. Check `~/.config/claude/gitea.env`
   - Missing → **full** mode

2. Check project `.env`
   - Missing → **quick** mode

3. Compare `.env` with git remote
   - Mismatch → **sync** mode
   - Match → offer reconfigure or exit
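A minimal sketch of this detection order, with the file checks and the git-remote comparison stubbed out as booleans (the real command performs these checks via Bash); the function name and return strings are illustrative:

```python
# Sketch of the auto-detection order above. Each boolean stands in for
# a check the command actually performs on disk or against `git remote`.
def detect_setup_mode(system_config_exists, project_env_exists, env_matches_remote):
    if not system_config_exists:   # no ~/.config/claude/gitea.env
        return "full"
    if not project_env_exists:     # no project .env
        return "quick"
    if not env_matches_remote:     # .env disagrees with the git remote
        return "sync"
    return "configured"            # match: offer reconfigure or exit
```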

## Mode: Full

Execute `skills/setup-workflows.md` → Full Setup Workflow

Phases:
1. Environment validation (Python 3.10+)
2. MCP server setup (venv + requirements)
3. System-level config (`~/.config/claude/gitea.env`)
4. Project-level config (`.env`)
5. Final validation

## Mode: Quick

Execute `skills/setup-workflows.md` → Quick Setup Workflow

Steps:
1. Verify system config exists
2. Verify git repository
3. Check existing `.env`
4. Detect org/repo from git remote
5. Validate via API
6. Create/update `.env`
7. Check `.gitignore`

## Mode: Sync

Execute `skills/setup-workflows.md` → Sync Workflow

Steps:
1. Read current config
2. Detect git remote
3. Compare values
4. Show changes
5. Validate new values
6. Update `.env`
7. Confirm

## Visual Output

```
╔══════════════════════════════════════════════════════════════════╗
║ 📋 PROJMAN ║
║ ⚙️ SETUP ║
║ [Mode: Full | Quick | Sync] ║
╚══════════════════════════════════════════════════════════════════╝
```
@@ -1,293 +1,46 @@
---
description: Complete sprint and capture lessons learned to Gitea Wiki
agent: orchestrator
---

# Close Sprint and Capture Lessons Learned

This command completes the sprint and captures lessons learned to Gitea Wiki. **This is critical** - after 15 sprints without lesson capture, repeated mistakes occurred (e.g., Claude Code infinite loops 2-3 times on similar issues).
## Skills Required

## Why Lessons Learned Matter
- skills/mcp-tools-reference.md
- skills/lessons-learned.md
- skills/wiki-conventions.md
- skills/progress-tracking.md
- skills/git-workflow.md

**Problem:** Without systematic lesson capture, teams repeat the same mistakes:
- Claude Code infinite loops on similar issues (happened 2-3 times)
- Same architectural mistakes (multiple occurrences)
- Forgotten optimizations (re-discovered each time)
## Purpose

**Solution:** Mandatory lessons learned capture at sprint close, searchable at sprint start.
Complete the sprint, capture lessons learned to Gitea Wiki, and update documentation. This is critical for preventing repeated mistakes across sprints.

## Sprint Close Workflow
## Invocation

The orchestrator agent will guide you through:
Run `/sprint-close` when sprint work is complete.

1. **Review Sprint Completion**
   - Use `list_issues` to verify all issues are closed or moved to backlog
   - Check milestone completion status
   - Check for incomplete work needing carryover
   - Review overall sprint goals vs. actual completion
## Workflow

2. **Capture Lessons Learned**
   - What went wrong and why
   - What went right and should be repeated
   - Preventable repetitions to avoid in future sprints
   - Technical insights and gotchas discovered
Execute the sprint close workflow:

3. **Tag for Discoverability**
   - Apply relevant tags: technology, component, type of lesson
   - Ensure future sprints can find these lessons via search
   - Use consistent tagging for patterns
1. **Review Sprint Completion** - Verify issues closed or moved to backlog
2. **Capture Lessons Learned** - Interview user about challenges and insights
3. **Tag for Discoverability** - Apply technology, component, and pattern tags
4. **Save to Gitea Wiki** - Use `create_lesson` with metadata and implementation link
5. **Update Wiki Implementation Page** - Change status to Implemented/Partial/Failed
6. **Update Wiki Proposal Page** - Update overall status if all implementations complete
7. **New Command Verification** - Remind user new commands require session restart
8. **Update CHANGELOG** (MANDATORY) - Add changes to `[Unreleased]` section
9. **Version Check** - Run `/suggest-version` to recommend version bump
10. **Git Operations** - Commit, merge, tag, clean up branches
11. **Close Milestone** - Update milestone state to closed

4. **Save to Gitea Wiki**
   - Use `create_lesson` to save lessons to Gitea Wiki
   - Create lessons in project wiki under `lessons-learned/sprints/`
   - Make lessons searchable for future sprints

5. **Update Wiki Implementation Page**
   - Use `get_wiki_page` to fetch the current implementation page
   - Update status from "In Progress" to "Implemented" (or "Partial"/"Failed")
   - Add completion date
   - Link to lessons learned created in step 4
   - Use `update_wiki_page` to save changes

6. **Update Wiki Proposal Page**
   - Check if all implementations for this proposal are complete
   - If all complete: Update proposal status to "Implemented"
   - If partial: Keep status as "In Progress", note completed implementations
   - Add summary of what was accomplished

7. **New Command Verification** (if applicable)
   - Check if this sprint added new commands or skills
   - **IMPORTANT:** New commands are NOT discoverable until session restart
   - If new commands were added:
     - List them in sprint close notes
     - Remind user: "New commands require session restart to test"
     - Create verification task in next sprint or backlog
   - **WHY:** Claude Code discovers skills at session start; commands added during a session won't work until restart

8. **Update CHANGELOG** (MANDATORY)
   - Add all sprint changes to `[Unreleased]` section in CHANGELOG.md
   - Categorize: Added, Changed, Fixed, Removed, Deprecated
   - Include plugin prefix (e.g., `- **projman:** New feature`)

9. **Version Check**
   - Run `/suggest-version` to analyze changes and recommend version bump
   - If release warranted: run `./scripts/release.sh X.Y.Z`
   - Ensures version numbers stay in sync across files

10. **Git Operations**
    - Commit any remaining work (including CHANGELOG updates)
    - Merge feature branches if needed
    - Clean up merged branches
    - Tag sprint completion (if release created)

11. **Close Milestone**
    - Use `update_milestone` to close the sprint milestone
    - Document final completion status

## MCP Tools Available

**Gitea Tools:**
- `list_issues` - Review sprint issues (completed and incomplete)
- `get_issue` - Get detailed issue information for retrospective
- `update_issue` - Move incomplete issues to next sprint

**Milestone Tools:**
- `get_milestone` - Get milestone status
- `update_milestone` - Close milestone

**Lessons Learned Tools (Gitea Wiki):**
- `create_lesson` - Create lessons learned entry
- `search_lessons` - Check for similar existing lessons
- `list_wiki_pages` - Check existing lessons learned
- `get_wiki_page` - Read existing lessons or implementation pages
- `update_wiki_page` - Update implementation/proposal status

## Lesson Structure

Lessons should follow this structure:

```markdown
# Sprint X - [Lesson Title]

## Metadata
- **Implementation:** [Change VXX.X.X (Impl N)](wiki-link)
- **Issues:** #45, #46, #47
- **Sprint:** Sprint X

## Context
[What were you trying to do? What was the sprint goal?]

## Problem
[What went wrong? What insight emerged? What challenge did you face?]

## Solution
[How did you solve it? What approach worked?]

## Prevention
[How can this be avoided or optimized in the future? What should future sprints know?]

## Tags
[Comma-separated tags for search: technology, component, type]
```

**IMPORTANT:** Always include the Implementation link in the Metadata section. This enables bidirectional traceability between lessons and the work that generated them.
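As a sketch, a lesson page matching the template above could be assembled like this before being passed to `create_lesson`; the function name and argument shapes are illustrative, not part of the plugin:

```python
# Illustrative builder for the lesson template above.
def render_lesson(title, sprint, implementation_link, issues,
                  context, problem, solution, prevention, tags):
    """Assemble a lesson page following the documented structure."""
    issue_list = ", ".join(f"#{n}" for n in issues)
    return "\n".join([
        f"# {sprint} - {title}",
        "",
        "## Metadata",
        f"- **Implementation:** {implementation_link}",
        f"- **Issues:** {issue_list}",
        f"- **Sprint:** {sprint}",
        "",
        "## Context", context, "",
        "## Problem", problem, "",
        "## Solution", solution, "",
        "## Prevention", prevention, "",
        "## Tags",
        ", ".join(tags),
    ])
```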

## Example Lessons Learned

**Example 1: Technical Gotcha**
```markdown
# Sprint 16 - Claude Code Infinite Loop on Validation Errors

## Metadata
- **Implementation:** [Change V1.2.0 (Impl 1)](https://gitea.example.com/org/repo/wiki/Change-V1.2.0%3A-Proposal-(Implementation-1))
- **Issues:** #45, #46
- **Sprint:** Sprint 16

## Context
Implementing input validation for authentication API endpoints.

## Problem
Claude Code entered an infinite loop when pytest validation tests failed.
The loop occurred because the error message didn't change between attempts,
causing Claude to retry the same fix repeatedly.

## Solution
Added more descriptive error messages to validation tests that specify
exactly what value failed and why. This gave Claude clear feedback
to adjust the approach rather than retrying the same fix.

## Prevention
- Always write validation test errors with specific values and expectations
- If Claude loops, check if error messages provide unique information per failure
- Add a "loop detection" check in test output (fail after 3 identical errors)

## Tags
testing, claude-code, validation, python, pytest, debugging
```

**Example 2: Architectural Decision**
```markdown
# Sprint 14 - Extracting Services Too Early

## Metadata
- **Implementation:** [Change V2.0.0 (Impl 1)](https://gitea.example.com/org/repo/wiki/Change-V2.0.0%3A-Proposal-(Implementation-1))
- **Issues:** #32, #33, #34
- **Sprint:** Sprint 14

## Context
Planning to extract Intuit Engine service from monolith.

## Problem
Initial plan was to extract immediately without testing the API boundaries
first. This would have caused integration issues discovered late.

## Solution
Added a sprint phase to:
1. Define clear API contracts first
2. Add integration tests for the boundaries
3. THEN extract the service

Delayed extraction by one sprint but avoided major rework.

## Prevention
- Always define API contracts before service extraction
- Write integration tests FIRST, extraction SECOND
- Don't rush architectural changes - test boundaries first

## Tags
architecture, service-extraction, refactoring, api-design, planning
```

## Tagging Strategy

Use consistent tags for discoverability:

**By Technology:**
- `python`, `javascript`, `docker`, `postgresql`, `redis`, `vue`, `fastapi`

**By Component:**
- `backend`, `frontend`, `api`, `database`, `auth`, `deploy`, `testing`, `docs`

**By Type:**
- `bug`, `feature`, `refactor`, `architecture`, `performance`, `security`

**By Issue:**
- `claude-code`, `validation`, `testing`, `debugging`, `deployment`, `configuration`

**By Pattern:**
- `infinite-loop`, `edge-case`, `integration`, `boundaries`, `dependencies`

## Orchestrator's Role

The orchestrator agent will:
1. Ask probing questions about the sprint
2. Identify preventable mistakes vs. expected challenges
3. Structure lessons in the proper format
4. Suggest appropriate tags
5. Create lessons in Gitea Wiki using `create_lesson`
6. Ensure lessons are discoverable for future sprints
7. Handle git operations and milestone closure

## Example Sprint Close Session

```
User: /sprint-close
Orchestrator: Let's close the sprint and capture lessons learned.

First, let me review the sprint completion...

[Uses list_issues to check sprint status]

Sprint 18 Summary:
- 8 issues planned
- 7 completed (87.5%)
- 1 moved to backlog (#52: Deploy auth service - blocked by infra)

Great progress! Now, let's capture lessons learned.

What challenges did you face this sprint?

User: Claude Code got stuck in a loop when pytest validation tests failed

Orchestrator: That's a valuable lesson - this prevents future frustration.

Let me ask some clarifying questions:
1. What caused the loop? (Same error message repeated?)
2. How did you break out of it?
3. What would prevent this in future sprints?

[After gathering details]

I'll create this lesson with tags: testing, claude-code, validation, python, pytest, debugging

[Uses create_lesson to save to Gitea Wiki]

Lesson created: "Sprint 18 - Claude Code Infinite Loop on Validation Errors"
Saved to Gitea Wiki with appropriate tags.

Any other lessons from this sprint?

User: Yes, we discovered that JWT token refresh needs special handling

Orchestrator: Excellent - let's capture that too...

[Continues capturing lessons]

All lessons captured! They'll be searchable when planning future sprints.

Would you like me to handle git operations?
- Commit any remaining changes
- Merge feature branches to development
- Tag sprint completion (v0.18.0)
- Clean up merged branches
- Close milestone

[Y/n]
```
**Don't skip lessons learned!** Future sprints will benefit from captured insights.

## Visual Output

When executing this command, display the plugin header:

```
╔══════════════════════════════════════════════════════════════════╗
║ 📋 PROJMAN ║
@@ -295,18 +48,3 @@ When executing this command, display the plugin header:
║ [Sprint Name] ║
╚══════════════════════════════════════════════════════════════════╝
```

Replace `[Sprint Name]` with the actual sprint/milestone name.

Then proceed with the sprint close workflow.

## Getting Started

Simply run `/sprint-close` when your sprint is complete. The orchestrator will guide you through:
1. Sprint review
2. Lessons learned capture
3. Gitea Wiki updates
4. Git operations
5. Milestone closure

**Don't skip this step!** Future sprints will thank you for capturing these insights.

@@ -1,193 +0,0 @@
---
description: Generate Mermaid diagram of sprint issues with dependencies and status
---

# Sprint Diagram

This command generates a visual Mermaid diagram showing the current sprint's issues, their dependencies, and execution flow.

## What This Command Does

1. **Fetch Sprint Issues** - Gets all issues for the current sprint milestone
2. **Fetch Dependencies** - Retrieves dependency relationships between issues
3. **Generate Mermaid Syntax** - Creates flowchart showing issue flow
4. **Apply Status Styling** - Colors nodes based on issue state (open/closed/in-progress)
5. **Show Execution Order** - Visualizes parallel batches and critical path

## Usage

```
/sprint-diagram
/sprint-diagram --milestone "Sprint 4"
```

## MCP Tools Used

**Issue Tools:**
- `list_issues(state="all")` - Fetch all sprint issues
- `list_milestones()` - Find current sprint milestone

**Dependency Tools:**
- `list_issue_dependencies(issue_number)` - Get dependencies for each issue
- `get_execution_order(issue_numbers)` - Get parallel execution batches

## Implementation Steps

1. **Get Current Milestone:**
   ```
   milestones = list_milestones(state="open")
   current_sprint = milestones[0]  # Most recent open milestone
   ```

2. **Fetch Sprint Issues:**
   ```
   issues = list_issues(state="all", milestone=current_sprint.title)
   ```

3. **Fetch Dependencies for Each Issue:**
   ```python
   dependencies = {}
   for issue in issues:
       deps = list_issue_dependencies(issue.number)
       dependencies[issue.number] = deps
   ```

4. **Generate Mermaid Diagram:**
   ```mermaid
   flowchart TD
       subgraph Sprint["Sprint 4 - Commands"]
           241["#241: sprint-diagram"]
           242["#242: confidence threshold"]
           243["#243: pr-diff"]

           241 --> 242
           242 --> 243
       end

       classDef completed fill:#90EE90,stroke:#228B22
       classDef inProgress fill:#FFD700,stroke:#DAA520
       classDef open fill:#ADD8E6,stroke:#4682B4
       classDef blocked fill:#FFB6C1,stroke:#DC143C

       class 241 completed
       class 242 inProgress
       class 243 open
   ```
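The diagram-generation step can be sketched as a pure function over the data fetched in steps 1-3; node shapes follow the example above, while the function name and the dictionary shapes of its inputs are illustrative assumptions:

```python
# Illustrative sketch: build Mermaid flowchart text from fetched sprint data.
def build_sprint_diagram(milestone_title, issues, dependencies):
    """Render a Mermaid flowchart from issues and their blockers.

    issues: {number: title}; dependencies: {number: [blocking issue numbers]}.
    """
    lines = ["flowchart TD", f'    subgraph Sprint["{milestone_title}"]']
    for num, title in issues.items():
        lines.append(f'        {num}["#{num}: {title}"]')
    for num, blockers in dependencies.items():
        for blocker in blockers:
            # Arrow points from the blocker to the issue it unblocks.
            lines.append(f"        {blocker} --> {num}")
    lines.append("    end")
    return "\n".join(lines)
```

Status styling (the `classDef` lines above) would be appended in the same way from each issue's state.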

## Expected Output

```
Sprint Diagram: Sprint 4 - Commands
===================================

```mermaid
flowchart TD
    subgraph batch1["Batch 1 - No Dependencies"]
        241["#241: sprint-diagram<br/>projman"]
        242["#242: confidence threshold<br/>pr-review"]
        244["#244: data-quality<br/>data-platform"]
        247["#247: chart-export<br/>viz-platform"]
        250["#250: dependency-graph<br/>contract-validator"]
        251["#251: changelog-gen<br/>doc-guardian"]
        254["#254: config-diff<br/>config-maintainer"]
        256["#256: cmdb-topology<br/>cmdb-assistant"]
    end

    subgraph batch2["Batch 2 - After Batch 1"]
        243["#243: pr-diff<br/>pr-review"]
        245["#245: lineage-viz<br/>data-platform"]
        248["#248: color blind<br/>viz-platform"]
        252["#252: doc-coverage<br/>doc-guardian"]
        255["#255: linting<br/>config-maintainer"]
        257["#257: change-audit<br/>cmdb-assistant"]
    end

    subgraph batch3["Batch 3 - Final"]
        246["#246: dbt-test<br/>data-platform"]
        249["#249: responsive<br/>viz-platform"]
        253["#253: stale-docs<br/>doc-guardian"]
        258["#258: IP conflict<br/>cmdb-assistant"]
    end

    batch1 --> batch2
    batch2 --> batch3

    classDef completed fill:#90EE90,stroke:#228B22
    classDef inProgress fill:#FFD700,stroke:#DAA520
    classDef open fill:#ADD8E6,stroke:#4682B4

    class 241,242 completed
    class 243,244 inProgress
```

## Status Legend

| Status | Color | Description |
|--------|-------|-------------|
| Completed | Green | Issue closed |
| In Progress | Yellow | Currently being worked on |
| Open | Blue | Ready to start |
| Blocked | Red | Waiting on dependencies |

## Diagram Types

### Default: Dependency Flow
Shows how issues depend on each other with arrows indicating blockers.

### Batch View (--batch)
Groups issues by execution batch for parallel work visualization.

### Plugin View (--by-plugin)
Groups issues by plugin for component-level overview.

## When to Use

- **Sprint Planning**: Visualize scope and dependencies
- **Daily Standups**: Show progress at a glance
- **Documentation**: Include in wiki pages
- **Stakeholder Updates**: Visual progress reports

## Integration

The generated Mermaid diagram can be:
- Pasted into GitHub/Gitea issues
- Rendered in wiki pages
- Included in PRs for context
- Used in sprint retrospectives

## Example

```
User: /sprint-diagram

Generating sprint diagram...

Milestone: Sprint 4 - Commands (18 issues)
Fetching dependencies...
Building diagram...

```mermaid
flowchart LR
    241[sprint-diagram] --> |enables| 242[confidence]
    242 --> 243[pr-diff]

    style 241 fill:#90EE90
    style 242 fill:#ADD8E6
    style 243 fill:#ADD8E6
```

Open: 16 | In Progress: 1 | Completed: 1
```

## Visual Output

When executing this command, display the plugin header:

```
╔══════════════════════════════════════════════════════════════════╗
║ 📋 PROJMAN ║
║ Sprint Diagram ║
╚══════════════════════════════════════════════════════════════════╝
```

Then proceed to generate the diagram.
||||
@@ -1,413 +1,56 @@
|
||||
---
|
||||
description: Start sprint planning with AI-guided architecture analysis and issue creation
|
||||
agent: planner
|
||||
---
|
||||
|
||||
# Sprint Planning
|
||||
|
||||
You are initiating sprint planning. The planner agent will guide you through architecture analysis, ask clarifying questions, and help create well-structured Gitea issues with appropriate labels.
|
||||
|
||||
## CRITICAL: Pre-Planning Validations
|
||||
|
||||
**BEFORE PLANNING**, the planner agent performs mandatory checks:
|
||||
|
||||
### 1. Branch Detection
|
||||
|
||||
```bash
|
||||
git branch --show-current
|
||||
```
|
||||
|
||||
**Branch Requirements:**
|
||||
- **Development branches** (`development`, `develop`, `feat/*`, `dev/*`): Full planning capabilities
|
||||
- **Staging branches** (`staging`, `stage/*`): Can create issues to document needed changes, but cannot modify code
|
||||
- **Production branches** (`main`, `master`, `prod/*`): READ-ONLY - no planning allowed
|
||||
|
||||
If you are on a production or staging branch, you MUST stop and ask the user to switch to a development branch.
|
||||
|
||||
### 2. Repository Organization Check
|
||||
|
||||
Use `validate_repo_org` MCP tool to verify the repository belongs to an organization.
|
||||
|
||||
**If NOT an organization repository:**
|
||||
```
|
||||
REPOSITORY VALIDATION FAILED
|
||||
|
||||
This plugin requires the repository to belong to an organization, not a user.
|
||||
Please transfer or create the repository under that organization.
|
||||
```
|
||||
|
||||
### 3. Label Taxonomy Validation
|
||||
|
||||
Verify all required labels exist using `get_labels`:
|
||||
|
||||
**Required label categories:**
|
||||
- Type/* (Bug, Feature, Refactor, Documentation, Test, Chore)
|
||||
- Priority/* (Low, Medium, High, Critical)
|
||||
- Complexity/* (Simple, Medium, Complex)
|
||||
- Efforts/* (XS, S, M, L, XL)
|
||||
|
||||
**If labels are missing:** Use `create_label` to create them.
|
||||
|
||||
### 4. Input Source Detection
|
||||
|
||||
The planner supports flexible input sources for sprint planning:
|
||||
|
||||
| Source | Detection | Action |
|
||||
|--------|-----------|--------|
|
||||
| **Local file** | `docs/changes/*.md` exists | Parse frontmatter, migrate to wiki, delete local |
|
||||
| **Existing wiki** | `Change VXX.X.X: Proposal` exists | Use as-is, create new implementation page |
|
||||
| **Conversation** | Neither file nor wiki exists | Create wiki from discussion context |
|
||||
|
||||
**Input File Format** (if using local file):
|
||||
```yaml
|
||||
---
|
||||
version: "4.1.0" # or "sprint-17" for internal work
|
||||
title: "Feature Name"
|
||||
plugin: plugin-name # optional
|
||||
type: feature # feature | bugfix | refactor | infra
|
||||
---
|
||||
|
||||
# Feature Description
|
||||
[Free-form content...]
|
||||
```
|
||||
|
||||
**Detection Logic:**
|
||||
1. Check for `docs/changes/*.md` files
|
||||
2. Check for existing wiki proposal matching version
|
||||
3. If neither found, use conversation context
|
||||
4. If ambiguous, ask user which input to use
|
||||
|
||||
## Planning Workflow
|
||||
|
||||
The planner agent will:
|
||||
|
||||
1. **Understand Sprint Goals**
|
||||
- Ask clarifying questions about the sprint objectives
|
||||
- Understand scope, priorities, and constraints
|
||||
- Never rush - take time to understand requirements fully
|
||||
|
||||
2. **Detect Input Source**
|
||||
- Check for `docs/changes/*.md` files
|
||||
- Check for existing wiki proposal by version
|
||||
- If neither: use conversation context
|
||||
- Ask user if multiple sources found
|
||||
|
||||
3. **Search Relevant Lessons Learned**
|
||||
- Use the `search_lessons` MCP tool to find past experiences
|
||||
- Search by keywords and tags relevant to the sprint work
|
||||
- Review patterns and preventable mistakes from previous sprints
|
||||
|
||||
4. **Create/Update Wiki Proposal**
|
||||
- If local file: migrate content to wiki, create proposal page
|
||||
- If conversation: create proposal from discussion
|
||||
- If existing wiki: skip creation, use as-is
|
||||
- **Page naming:** `Change VXX.X.X: Proposal` or `Change Sprint-NN: Proposal`
|
||||
|
||||
5. **Create Wiki Implementation Page**
|
||||
- Create `Change VXX.X.X: Proposal (Implementation N)`
|
||||
- Include tags: Type, Version, Status=In Progress, Date, Origin
|
||||
- Update proposal page with link to this implementation
|
||||
- This page tracks THIS sprint's work on the proposal
|
||||
|
||||
6. **Architecture Analysis**
|
||||
- Think through technical approach and edge cases
|
||||
- Identify architectural decisions needed
|
||||
- Consider dependencies and integration points
|
||||
- Review existing codebase architecture
|
||||
|
||||
7. **Create Gitea Issues**
|
||||
- Use the `create_issue` MCP tool for each planned task
|
||||
- Apply appropriate labels using `suggest_labels` tool
|
||||
- **Issue Title Format (MANDATORY):** `[Sprint XX] <type>: <description>`
|
||||
- **Include wiki reference:** `Implementation: [Change VXX.X.X (Impl N)](wiki-link)`
|
||||
- Include acceptance criteria and technical notes
|
||||
|
||||
8. **Set Up Dependencies**
|
||||
- Use `create_issue_dependency` to establish task dependencies
|
||||
- This enables parallel execution planning
|
||||
|
||||
9. **Create or Select Milestone**
|
||||
- Use `create_milestone` to group sprint issues
|
||||
- Assign issues to the milestone
|
||||
|
||||
10. **Cleanup & Summary**
|
||||
- Delete local input file (wiki is now source of truth)
|
||||
- Summarize architectural decisions
|
||||
- List created issues with labels
|
||||
- Document dependency graph
|
||||
- Provide sprint overview with wiki links
|
||||
|
||||
11. **Request Sprint Approval**
|
||||
- Present approval request with scope summary
|
||||
- Capture explicit user approval
|
||||
- Record approval in milestone description
|
||||
- Approval scopes what sprint-start can execute
|
||||
|
||||
## Sprint Approval (MANDATORY)

**Planning DOES NOT equal execution permission.**

After creating issues, the planner MUST request explicit approval:

```
Sprint 17 Planning Complete
===========================

Created Issues:
- #45: [Sprint 17] feat: JWT token generation
- #46: [Sprint 17] feat: Login endpoint
- #47: [Sprint 17] test: Auth tests

Execution Scope:
- Branches: feat/45-*, feat/46-*, feat/47-*
- Files: auth/*, api/routes/auth.py, tests/test_auth*
- Dependencies: PyJWT, python-jose

⚠️ APPROVAL REQUIRED

Do you approve this sprint for execution?
This grants permission for agents to:
- Create and modify files in the listed scope
- Create branches with the listed prefixes
- Install listed dependencies

Type "approve sprint 17" to authorize execution.
```

**On Approval:**
1. Record approval in milestone description
2. Note timestamp and scope
3. Sprint-start will verify approval exists

**Approval Record Format:**
```markdown
## Sprint Approval
**Approved:** 2026-01-28 14:30
**Approver:** User
**Scope:**
- Branches: feat/45-*, feat/46-*, feat/47-*
- Files: auth/*, api/routes/auth.py, tests/test_auth*
```

## Issue Title Format (MANDATORY)

```
[Sprint XX] <type>: <description>
```

**Types:**
- `feat` - New feature
- `fix` - Bug fix
- `refactor` - Code refactoring
- `docs` - Documentation
- `test` - Test additions/changes
- `chore` - Maintenance tasks

**Examples:**
- `[Sprint 17] feat: Add user email validation`
- `[Sprint 17] fix: Resolve login timeout issue`
- `[Sprint 18] refactor: Extract authentication module`

## Task Sizing Rules (MANDATORY)

**CRITICAL: Tasks sized L or XL MUST be broken down into smaller tasks.**

| Effort | Files | Checklist Items | Max Tool Calls | Agent Scope |
|--------|-------|-----------------|----------------|-------------|
| **XS** | 1 file | 0-2 items | ~30 | Single function/fix |
| **S** | 1 file | 2-4 items | ~50 | Single file feature |
| **M** | 2-3 files | 4-6 items | ~80 | Multi-file feature |
| **L** | MUST BREAK DOWN | - | - | Too large for one agent |
| **XL** | MUST BREAK DOWN | - | - | Way too large |

**Why This Matters:**
- Agents running 400+ tool calls take 1+ hour with no visibility
- Large tasks lack clear completion criteria
- Debugging failures is extremely difficult
- Small tasks enable parallel execution

**Scoping Checklist:**
1. Can this be completed in one file? → XS or S
2. Does it touch 2-3 files? → M (maximum for single task)
3. Does it touch 4+ files? → MUST break down
4. Would you estimate more than ~80 tool calls (the M ceiling)? → MUST break down
5. Does it require complex decision-making mid-task? → MUST break down

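Applied mechanically, the sizing table and checklist reduce to a small classifier. This is a sketch of the table's thresholds only; real sizing also weighs complexity, and none of this code ships with the plugin:

```python
def classify_effort(file_count: int, checklist_items: int) -> str:
    """Map a task's file count and checklist length onto the effort table above."""
    if file_count >= 4:
        return "BREAK DOWN"   # L/XL territory: too large for one agent
    if file_count >= 2 or checklist_items > 4:
        return "M"            # multi-file feature, ~80 tool calls max
    if checklist_items > 2:
        return "S"            # single-file feature, ~50 tool calls
    return "XS"               # single function/fix, ~30 tool calls
```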
**Example Breakdown:**

**BAD (L - too broad):**
```
[Sprint 3] feat: Implement schema diff detection hook
Labels: Efforts/L
- Hook skeleton
- Pattern detection for DROP/ALTER/RENAME
- Warning output formatting
- Integration with hooks.json
```

**GOOD (broken into S tasks):**
```
[Sprint 3] feat: Create schema-diff-check.sh hook skeleton
Labels: Efforts/S
- [ ] Create hook file with standard header
- [ ] Add file type detection for SQL/migrations
- [ ] Exit 0 (non-blocking)

[Sprint 3] feat: Add DROP/ALTER pattern detection
Labels: Efforts/S
- [ ] Detect DROP COLUMN/TABLE/INDEX
- [ ] Detect ALTER TYPE changes
- [ ] Detect RENAME operations

[Sprint 3] feat: Add warning output formatting
Labels: Efforts/S
- [ ] Format breaking change warnings
- [ ] Add hook prefix to output
- [ ] Test output visibility

[Sprint 3] chore: Register hook in hooks.json
Labels: Efforts/XS
- [ ] Add PostToolUse:Edit hook entry
- [ ] Test hook triggers on SQL edits
```

**The planner MUST refuse to create L/XL tasks without breakdown.**

## MCP Tools Available

**Gitea Tools:**
- `list_issues` - Review existing issues
- `get_issue` - Get detailed issue information
- `create_issue` - Create new issue with labels
- `update_issue` - Update issue
- `get_labels` - Fetch current label taxonomy
- `suggest_labels` - Get intelligent label suggestions based on context
- `create_label` - Create missing labels
- `validate_repo_org` - Check if repo is under organization

**Milestone Tools:**
- `list_milestones` - List milestones
- `create_milestone` - Create milestone
- `update_milestone` - Update milestone

**Dependency Tools:**
- `create_issue_dependency` - Create dependency between issues
- `list_issue_dependencies` - List dependencies for an issue
- `get_execution_order` - Get parallel execution batches

**Lessons Learned Tools (Gitea Wiki):**
- `search_lessons` - Search lessons learned from previous sprints
- `list_wiki_pages` - List wiki pages
- `get_wiki_page` - Fetch specific documentation page
- `create_wiki_page` - Create new wiki page (proposals, implementations)
- `update_wiki_page` - Update wiki page content

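Conceptually, `get_execution_order` groups the dependency graph into topological layers: every issue in a batch depends only on issues in earlier batches. A sketch of that batching (assumed behavior; the actual MCP server implementation may differ):

```python
def execution_batches(issues: list[int], deps: dict[int, list[int]]) -> list[list[int]]:
    """Group issues into batches where each batch only depends on earlier batches.

    deps maps an issue number to the list of issues it is blocked by.
    """
    remaining = set(issues)
    done: set[int] = set()
    batches: list[list[int]] = []
    while remaining:
        # Everything whose blockers are already completed can run in parallel
        ready = sorted(i for i in remaining
                       if all(d in done for d in deps.get(i, [])))
        if not ready:
            raise ValueError("dependency cycle detected")
        batches.append(ready)
        done.update(ready)
        remaining.difference_update(ready)
    return batches
```

For example, with #46 and #49 blocked by #45, and #47 blocked by #46, issues #45 and #48 form the first batch, #46 and #49 the second, and #47 the third.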
## Label Taxonomy

The system uses a dynamic 44-label taxonomy (28 org + 16 repo). Always use the `suggest_labels` tool to get appropriate labels based on the issue context.

**Key Label Categories:**
- **Type/***: Bug, Feature, Refactor, Documentation, Test, Chore
- **Priority/***: Low, Medium, High, Critical
- **Complexity/***: Simple, Medium, Complex
- **Efforts/***: XS, S, M, L, XL
- **Component/***: Backend, Frontend, API, Database, Auth, Deploy, Testing, Docs, Infra
- **Tech/***: Python, JavaScript, Docker, PostgreSQL, Redis, Vue, FastAPI

## Planner Personality

The planner agent is thoughtful and methodical:
- Asks clarifying questions before making assumptions
- Thinks through edge cases and architectural implications
- Never rushes planning - quality over speed
- References lessons learned proactively
- Suggests appropriate labels based on context
- Creates well-structured, detailed issues

## Example Planning Session

```
User: I want to plan a sprint for user authentication
Planner: Great! Let me first run pre-planning validations...

[Checks branch, repo org, labels]
All validations passed.

[Checks for input sources]
Found: docs/changes/v1.2.0-user-authentication.md

I'll use this file as the planning input. Let me ask a few questions:

1. What authentication method are you planning? (JWT, OAuth, session-based?)
2. Are there any specific security requirements or compliance needs?
3. Should this integrate with existing user management?

[Uses search_lessons to find past authentication work]

Based on previous experience, I found these relevant lessons:
- Sprint 12: JWT token expiration handling edge cases
- Sprint 8: OAuth integration pitfalls with third-party providers

[Creates wiki proposal and implementation pages]
✓ Created: "Change V1.2.0: Proposal" (wiki)
✓ Created: "Change V1.2.0: Proposal (Implementation 1)" (wiki)
✓ Deleted: docs/changes/v1.2.0-user-authentication.md (migrated to wiki)

Now, let me analyze the architecture and create issues...

[Creates issues with wiki references]

Created 5 issues for the authentication sprint:
- Issue #45: [Sprint 17] feat: Implement JWT token generation
  Labels: Type/Feature, Priority/High, Component/Auth, Tech/Python
  Implementation: [Change V1.2.0 (Impl 1)](wiki-link)

- Issue #46: [Sprint 17] feat: Build user login endpoint
  Labels: Type/Feature, Priority/High, Component/API, Tech/FastAPI
  Implementation: [Change V1.2.0 (Impl 1)](wiki-link)

Dependency Graph:
#45 -> #46 -> #47
 |
 v
#48

Milestone: Sprint 17 - User Authentication (Due: 2025-02-01)
Wiki: https://gitea.example.com/org/repo/wiki/Change-V1.2.0%3A-Proposal
```

## Skills Required

- skills/mcp-tools-reference.md
- skills/branch-security.md
- skills/repo-validation.md
- skills/input-detection.md
- skills/lessons-learned.md
- skills/wiki-conventions.md
- skills/task-sizing.md
- skills/issue-conventions.md
- skills/sprint-approval.md
- skills/planning-workflow.md
- skills/label-taxonomy/labels-reference.md

## Purpose

Initiate sprint planning session. The planner agent validates prerequisites, gathers requirements, searches lessons learned, creates wiki pages, and creates well-structured Gitea issues with proper dependencies and labels.

## Invocation

Provide sprint goals as natural language input, or prepare input via:
- `docs/changes/*.md` file with frontmatter
- Existing wiki proposal page
- Direct conversation

## Workflow

Execute the planning workflow as defined in `skills/planning-workflow.md`.

**Key steps:**
1. Run pre-planning validations (branch, repo org, labels)
2. Detect input source (file, wiki, or conversation)
3. Search relevant lessons learned
4. Create/update wiki proposal and implementation pages
5. Perform architecture analysis
6. Create Gitea issues with wiki references (respecting task sizing rules)
7. Set up dependencies
8. Create or select milestone
9. Request explicit sprint approval

## Visual Output

When executing this command, display the plugin header:

```
╔══════════════════════════════════════════════════════════════════╗
║  📋 PROJMAN                                                      ║
║  🎯 PLANNING                                                     ║
║  Sprint Planning                                                 ║
║  [Sprint Name]                                                   ║
╚══════════════════════════════════════════════════════════════════╝
```

Then proceed with the command workflow.

## Getting Started

Invoke the planner agent by providing your sprint goals. The agent will guide you through the planning process.

**Input Options:**
1. Create `docs/changes/vX.Y.Z-feature-name.md` with frontmatter before running
2. Create wiki proposal page manually, then run `/sprint-plan`
3. Just start a conversation - the planner will capture context and create wiki pages

**Example:**
> "I want to plan a sprint for extracting the Intuit Engine service from the monolith"

The planner will then:
1. Run pre-planning validations
2. Detect input source (file, wiki, or conversation)
3. Ask clarifying questions
4. Search lessons learned
5. Create wiki proposal and implementation pages
6. Create issues with wiki references
7. Set up dependencies
8. Create milestone
9. Cleanup and generate planning summary

@@ -1,419 +1,48 @@
---
description: Begin sprint execution with relevant lessons learned from previous sprints
agent: orchestrator
---

# Start Sprint Execution

You are initiating sprint execution. The orchestrator agent will coordinate the work, analyze dependencies for parallel execution, search for relevant lessons learned, and guide you through the implementation process.

## Skills Required

## Sprint Approval Verification (Recommended)

- skills/mcp-tools-reference.md
- skills/branch-security.md
- skills/sprint-approval.md
- skills/dependency-management.md
- skills/lessons-learned.md
- skills/git-workflow.md
- skills/progress-tracking.md
- skills/runaway-detection.md

**RECOMMENDED: Sprint should be approved before execution.**

## Purpose

> **Note:** This is a recommended workflow practice, not code-enforced. The orchestrator
> SHOULD check for approval, but execution will proceed if approval is missing. For
> critical projects, consider making approval mandatory in your workflow.

Initiate sprint execution. The orchestrator agent verifies approval, analyzes dependencies for parallel execution, searches relevant lessons, and coordinates task dispatch.

The orchestrator checks for approval in the milestone description:

## Invocation

```
get_milestone(milestone_id=17)
→ Check description for "## Sprint Approval" section
```

Run `/sprint-start` when ready to begin executing a planned sprint.

**If Approval Missing:**
```
⚠️ SPRINT APPROVAL NOT FOUND (Warning)

## Workflow

Sprint 17 milestone does not contain an approval record.
Execute the sprint start workflow:

Recommended: Run /sprint-plan first to:
1. Review the sprint scope
2. Document the approved execution plan
1. **Verify Sprint Approval** (recommended) - Check milestone for approval record
2. **Detect Checkpoints** - Check for resume points from interrupted sessions
3. **Fetch Sprint Issues** - Get open issues from milestone
4. **Analyze Dependencies** - Use `get_execution_order` for parallel batches
5. **Search Relevant Lessons** - Find applicable past experiences
6. **Dispatch Tasks** - Parallel when safe, sequential when file conflicts exist

Proceeding anyway - consider adding approval for audit trail.
```

**File Conflict Prevention:** Before parallel dispatch, check target files for overlap. Sequentialize tasks that modify the same files.

**If Approval Found:**
```
✓ Sprint Approval Verified
Approved: 2026-01-28 14:30
Scope:
  Branches: feat/45-*, feat/46-*, feat/47-*
  Files: auth/*, api/routes/auth.py, tests/test_auth*
**Branch Isolation:** Each task runs on its own branch (`feat/<issue>-<desc>`).

Proceeding with execution within approved scope...
```

**Scope Enforcement (when approval exists):**
- Agents SHOULD only create branches matching approved patterns
- Agents SHOULD only modify files within approved paths
- Operations outside scope should trigger re-approval via `/sprint-plan`

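The scope check for file paths can be sketched with the standard `fnmatch` module against the approved glob patterns. Names are illustrative; the plugin relies on agent behavior rather than this code:

```python
from fnmatch import fnmatch

def in_scope(path: str, approved_patterns: list[str]) -> bool:
    """True when a file path matches at least one approved glob pattern."""
    return any(fnmatch(path, pattern) for pattern in approved_patterns)

def out_of_scope(paths: list[str], approved_patterns: list[str]) -> list[str]:
    """Paths an agent wants to touch that fall outside the approved scope."""
    return [p for p in paths if not in_scope(p, approved_patterns)]
```

Any non-empty result from `out_of_scope` would be the signal to pause and route the work back through `/sprint-plan` for re-approval.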
## Branch Detection

**CRITICAL:** Before proceeding, check the current git branch:

```bash
git branch --show-current
```

**Branch Requirements:**
- **Development branches** (`development`, `develop`, `feat/*`, `dev/*`): Full execution capabilities
- **Staging branches** (`staging`, `stage/*`): Can create issues to document bugs, but cannot modify code
- **Production branches** (`main`, `master`, `prod/*`): READ-ONLY - no execution allowed

If you are on a production or staging branch, you MUST stop and ask the user to switch to a development branch.

## Sprint Start Workflow

The orchestrator agent will:

1. **Verify Sprint Approval** (Recommended)
   - Check milestone description for `## Sprint Approval` section
   - If no approval found, WARN user and suggest `/sprint-plan` first
   - If approval found, extract scope (branches, files)
   - Agents SHOULD operate within approved scope when available

2. **Detect Checkpoints (Resume Support)**
   - Check each open issue for `## Checkpoint` comments
   - If checkpoint found, offer to resume from that point
   - Resume preserves: branch, completed work, pending steps

3. **Fetch Sprint Issues**
   - Use `list_issues` to fetch open issues for the sprint
   - Identify priorities based on labels (Priority/Critical, Priority/High, etc.)

4. **Analyze Dependencies and Plan Parallel Execution**
   - Use `get_execution_order` to build dependency graph
   - Identify batches that can be executed in parallel
   - Present parallel execution plan

5. **Search Relevant Lessons Learned**
   - Use `search_lessons` to find experiences from past sprints
   - Search by tags matching the current sprint's technology and components
   - Review patterns, gotchas, and preventable mistakes
   - Present relevant lessons before starting work

6. **Dispatch Tasks (Parallel When Possible)**
   - For independent tasks (same batch), spawn multiple Executor agents in parallel
   - For dependent tasks, execute sequentially
   - Create proper branch for each task

7. **Track Progress**
   - Update issue status as work progresses
   - Use `add_comment` to document progress and blockers
   - Monitor when dependencies are satisfied and new tasks become unblocked

## Parallel Execution Model

The orchestrator analyzes dependencies and groups issues into parallelizable batches:

```
Parallel Execution Batches:
+---------------------------------------------------------------+
| Batch 1 (can start immediately):                              |
|   #45 [Sprint 18] feat: Implement JWT service                 |
|   #48 [Sprint 18] docs: Update API documentation              |
+---------------------------------------------------------------+
| Batch 2 (after batch 1):                                      |
|   #46 [Sprint 18] feat: Build login endpoint (needs #45)      |
|   #49 [Sprint 18] test: Add auth tests (needs #45)            |
+---------------------------------------------------------------+
| Batch 3 (after batch 2):                                      |
|   #47 [Sprint 18] feat: Create login form (needs #46)         |
+---------------------------------------------------------------+
```

**Independent tasks in the same batch run in parallel.**

## File Conflict Prevention (MANDATORY)

**CRITICAL: Before dispatching parallel agents, check for file overlap.**

**Pre-Dispatch Conflict Check:**

1. **Identify target files** for each task in the batch
2. **Check for overlap** - Do any tasks modify the same file?
3. **If overlap detected** - Sequentialize those specific tasks

**Example Conflict Detection:**
```
Batch 1 Analysis:
  #45 - Implement JWT service
    Files: auth/jwt_service.py, auth/__init__.py, tests/test_jwt.py

  #48 - Update API documentation
    Files: docs/api.md, README.md

  Overlap: NONE → Safe to parallelize

Batch 2 Analysis:
  #46 - Build login endpoint
    Files: api/routes/auth.py, auth/__init__.py

  #49 - Add auth tests
    Files: tests/test_auth.py, auth/__init__.py

  Overlap: auth/__init__.py → CONFLICT!
  Action: Sequentialize #46 and #49 (run #46 first)
```

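The pre-dispatch check above is a pairwise set intersection over each task's target files. A minimal sketch; the function and parameter names are illustrative:

```python
from itertools import combinations

def find_conflicts(task_files: dict[int, set[str]]) -> list[tuple[int, int, set[str]]]:
    """Return (task_a, task_b, shared_files) for every pair of tasks with overlap."""
    conflicts = []
    for (a, files_a), (b, files_b) in combinations(sorted(task_files.items()), 2):
        shared = files_a & files_b
        if shared:
            conflicts.append((a, b, shared))
    return conflicts
```

An empty result means the batch is safe to parallelize; each reported pair must be sequentialized.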
**Conflict Resolution Rules:**

| Conflict Type | Action |
|---------------|--------|
| Same file in checklist | Sequentialize tasks |
| Same directory | Review if safe, usually OK |
| Shared test file | Sequentialize or assign different test files |
| Shared config | Sequentialize |

**Branch Isolation Protocol:**

Even for parallel tasks, each MUST run on its own branch:
```
Task #45 → feat/45-jwt-service (isolated)
Task #48 → feat/48-api-docs (isolated)
```

**Sequential Merge After Completion:**
```
1. Task #45 completes → merge feat/45-jwt-service to development
2. Task #48 completes → merge feat/48-api-docs to development
3. Never merge simultaneously - always sequential to detect conflicts
```

**If Merge Conflict Occurs:**
1. Stop second task
2. Resolve conflict manually or assign to human
3. Resume/restart second task with updated base

## Branch Naming Convention (MANDATORY)

When creating branches for tasks:

- Features: `feat/<issue-number>-<short-description>`
- Bug fixes: `fix/<issue-number>-<short-description>`
- Debugging: `debug/<issue-number>-<short-description>`

**Examples:**
```bash
git checkout -b feat/45-jwt-service
git checkout -b fix/46-login-timeout
git checkout -b debug/47-investigate-memory-leak
```

**Validation:**
- Issue number MUST be present
- Prefix MUST be `feat/`, `fix/`, or `debug/`
- Description should be kebab-case (lowercase, hyphens)

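These three rules translate directly into a regular expression. A sketch of the validation, not code the plugin ships:

```python
import re

# prefix / issue number / kebab-case description
BRANCH_RE = re.compile(r"^(feat|fix|debug)/(\d+)(-[a-z0-9]+)+$")

def valid_branch(name: str) -> bool:
    """Check prefix, issue number, and kebab-case description."""
    return BRANCH_RE.match(name) is not None
```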
## MCP Tools Available

**Gitea Tools:**
- `list_issues` - Fetch sprint issues (filter by state, labels, milestone)
- `get_issue` - Get detailed issue information
- `update_issue` - Update issue status, assignee, labels
- `add_comment` - Add progress updates or blocker notes

**Dependency Tools:**
- `list_issue_dependencies` - Get dependencies for an issue
- `get_execution_order` - Get parallel execution batches for sprint issues

**Milestone Tools:**
- `list_milestones` - List milestones
- `get_milestone` - Get milestone details

**Lessons Learned Tools (Gitea Wiki):**
- `search_lessons` - Find relevant lessons from past sprints
- `list_wiki_pages` - List project documentation
- `get_wiki_page` - Fetch specific documentation (e.g., architecture decisions)

## Orchestrator Personality

The orchestrator agent is concise and action-oriented:
- Generates lean execution prompts, not lengthy documents
- Tracks details meticulously (no task forgotten)
- Coordinates parallel execution based on dependencies
- Identifies blockers proactively
- Coordinates Git operations (commit, merge, cleanup)
- Updates documentation as work progresses

## Example Sprint Start Session

```
User: /sprint-start
Orchestrator: Starting sprint execution. Let me analyze the sprint...

[Uses list_issues to fetch sprint backlog]

Found 5 open issues for this sprint.

[Uses get_execution_order to analyze dependencies]

Parallel Execution Batches:
+-----------------------------------------------+
| Batch 1 (can start immediately):              |
|   #45 - Implement JWT service                 |
|   #48 - Update API documentation              |
+-----------------------------------------------+
| Batch 2 (after batch 1):                      |
|   #46 - Build login endpoint (needs #45)      |
|   #49 - Add auth tests (needs #45)            |
+-----------------------------------------------+
| Batch 3 (after batch 2):                      |
|   #47 - Create login form (needs #46)         |
+-----------------------------------------------+

[Uses search_lessons to find relevant past experiences]

Relevant lessons learned:
- Sprint 12: "JWT Token Expiration Edge Cases" - Remember to handle token refresh
- Sprint 8: "OAuth Integration Pitfalls" - Test error handling for auth providers

Ready to start? I can dispatch multiple tasks in parallel.

Dispatching Batch 1 (2 tasks in parallel):

Task 1: #45 - Implement JWT service
  Branch: feat/45-jwt-service
  Executor: Starting...

Task 2: #48 - Update API documentation
  Branch: feat/48-api-docs
  Executor: Starting...

Both tasks running in parallel. I'll monitor progress.
```

## Lean Execution Prompts

The orchestrator generates concise prompts (NOT verbose documents):

```
Next Task: #45 - [Sprint 18] feat: Implement JWT token generation

Priority: High | Effort: M (1 day) | Unblocked
Branch: feat/45-jwt-service

Quick Context:
- Create backend service for JWT tokens
- Use HS256 algorithm (decision from planning)
- Include user_id, email, expiration in payload

Key Actions:
1. Create auth/jwt_service.py
2. Implement generate_token(user_id, email)
3. Implement verify_token(token)
4. Add token refresh logic (Sprint 12 lesson!)
5. Write unit tests for generation/validation

Acceptance Criteria:
- Tokens generate successfully
- Token verification works
- Refresh prevents expiration issues
- Tests cover edge cases

Relevant Lessons:
Sprint 12: Handle token refresh explicitly to prevent mid-request expiration

Dependencies: None (can start immediately)
```

## Progress Tracking

As work progresses, the orchestrator updates Gitea:

**Add Progress Comment:**
```
add_comment(issue_number=45, body="JWT generation implemented. Running tests now.")
```

**Update Issue Status:**
```
update_issue(issue_number=45, state="closed")
```

**Document Blockers:**
```
add_comment(issue_number=46, body="BLOCKED: Waiting for #45 to complete (dependency)")
```

**Track Parallel Execution:**
```
Parallel Execution Status:

Batch 1:
  #45 - JWT service - COMPLETED (12:45)
  #48 - API docs - IN PROGRESS (75%)

Batch 2 (now unblocked):
  #46 - Login endpoint - READY TO START
  #49 - Auth tests - READY TO START

#45 completed! #46 and #49 are now unblocked.
Starting #46 while #48 continues...
```

## Checkpoint Resume Support

If a previous session was interrupted (agent stopped, failure, budget exhausted), checkpoints enable resumption.

**Checkpoint Detection:**
The orchestrator scans issue comments for `## Checkpoint` markers containing:
- Branch name
- Last commit hash
- Completed/pending steps
- Files modified

**Resume Flow:**
```
User: /sprint-start

Orchestrator: Checking for checkpoints...

Found checkpoint for #45 (JWT service):
  Branch: feat/45-jwt-service
  Last activity: 2 hours ago
  Progress: 4/7 steps completed
  Pending: Write tests, add refresh, commit

Options:
1. Resume from checkpoint (recommended)
2. Start fresh (lose previous work)
3. Review checkpoint details

User: 1

Orchestrator: Resuming #45 from checkpoint...
✓ Branch exists
✓ Files match checkpoint
✓ Dispatching executor with context

Executor continues from pending steps...
```

**Checkpoint Format:**
Executors save checkpoints after major steps:
```markdown
## Checkpoint
**Branch:** feat/45-jwt-service
**Commit:** abc123
**Phase:** Testing

### Completed Steps
- [x] Step 1
- [x] Step 2

### Pending Steps
- [ ] Step 3
- [ ] Step 4
```

**Sequential Merge:** After completion, merge branches sequentially to detect conflicts.

## Visual Output

When executing this command, display the plugin header:

```
╔══════════════════════════════════════════════════════════════════╗
║  📋 PROJMAN                                                      ║
@@ -421,18 +50,3 @@ When executing this command, display the plugin header:
║  [Sprint Name]                                                   ║
╚══════════════════════════════════════════════════════════════════╝
```

Replace `[Sprint Name]` with the actual sprint/milestone name from Gitea.

Then proceed with the command workflow.

## Getting Started

Simply invoke `/sprint-start` and the orchestrator will:
1. Review your sprint backlog
2. Analyze dependencies and plan parallel execution
3. Search for relevant lessons
4. Dispatch tasks (parallel when possible)
5. Track progress as you work

The orchestrator keeps you focused, maximizes parallelism, and ensures nothing is forgotten.

@@ -1,247 +1,78 @@
---
description: Check current sprint progress and identify blockers
description: Check current sprint progress, identify blockers, optionally generate dependency diagram
---

# Sprint Status Check
# Sprint Status

This command provides a quick overview of your current sprint progress, including open issues, completed work, dependency status, and potential blockers.

## Skills Required

## What This Command Does

- skills/mcp-tools-reference.md
- skills/progress-tracking.md
- skills/dependency-management.md

1. **Fetch Sprint Issues** - Lists all issues with current sprint labels/milestone
2. **Analyze Dependencies** - Shows dependency graph and blocked/unblocked tasks
3. **Categorize by Status** - Groups issues into: Open, In Progress, Blocked, Completed
4. **Identify Blockers** - Highlights issues with blocker comments or unmet dependencies
5. **Show Progress Summary** - Provides completion percentage and parallel execution status
6. **Highlight Priorities** - Shows critical and high-priority items needing attention

## Purpose

## Usage

Check current sprint progress, identify blockers, and show execution status. Optionally generate a visual dependency diagram.

Simply run `/sprint-status` to get a comprehensive sprint overview.

## MCP Tools Used

This command uses the following Gitea MCP tools:

**Issue Tools:**
- `list_issues(state="open")` - Fetch open issues
- `list_issues(state="closed")` - Fetch completed issues
- `get_issue(number)` - Get detailed issue information for blockers

**Dependency Tools:**
- `list_issue_dependencies(issue_number)` - Get dependencies for each issue
- `get_execution_order(issue_numbers)` - Get parallel execution batches

**Milestone Tools:**
- `get_milestone(milestone_id)` - Get milestone progress

## Expected Output
|
||||
## Invocation
|
||||
|
||||
```
|
||||
Sprint Status Report
|
||||
====================
|
||||
|
||||
Sprint: Sprint 18 - Authentication System
|
||||
Milestone: Due 2025-02-01 (5 days remaining)
|
||||
Date: 2025-01-18
|
||||
|
||||
Progress Summary:
|
||||
- Total Issues: 8
|
||||
- Completed: 3 (37.5%)
|
||||
- In Progress: 2 (25%)
|
||||
- Ready: 2 (25%)
|
||||
- Blocked: 1 (12.5%)
|
||||
|
||||
Dependency Graph:
|
||||
#45 -> #46 -> #47
|
||||
|
|
||||
v
|
||||
#49 -> #50
|
||||
|
||||
Parallel Execution Status:
|
||||
+-----------------------------------------------+
|
||||
| Batch 1 (COMPLETED): |
|
||||
| #45 - Implement JWT service |
|
||||
| #48 - Update API documentation |
|
||||
+-----------------------------------------------+
|
||||
| Batch 2 (IN PROGRESS): |
|
||||
| #46 - Build login endpoint (75%) |
|
||||
| #49 - Add auth tests (50%) |
|
||||
+-----------------------------------------------+
|
||||
| Batch 3 (BLOCKED): |
|
||||
| #47 - Create login form (waiting for #46) |
|
||||
+-----------------------------------------------+
|
||||
|
||||
Completed Issues (3):
|
||||
#45: [Sprint 18] feat: Implement JWT service [Type/Feature, Priority/High]
|
||||
#48: [Sprint 18] docs: Update API documentation [Type/Docs, Priority/Medium]
|
||||
#51: [Sprint 18] chore: Update dependencies [Type/Chore, Priority/Low]
|
||||
|
||||
In Progress (2):
|
||||
#46: [Sprint 18] feat: Build login endpoint [Type/Feature, Priority/High]
|
||||
Status: In Progress | Phase: Implementation | Tool Calls: 45/100
|
||||
Progress: 3/5 steps | Current: Writing validation logic
|
||||
|
||||
#49: [Sprint 18] test: Add auth tests [Type/Test, Priority/Medium]
|
||||
Status: In Progress | Phase: Testing | Tool Calls: 30/100
|
||||
Progress: 2/4 steps | Current: Testing edge cases
|
||||
|
||||
Ready to Start (2):
|
||||
#50: [Sprint 18] feat: Integrate OAuth providers [Type/Feature, Priority/Low]
|
||||
#52: [Sprint 18] feat: Add email verification [Type/Feature, Priority/Medium]
|
||||
|
||||
Blocked Issues (1):
|
||||
#47: [Sprint 18] feat: Create login form [Type/Feature, Priority/High]
|
||||
Blocked by: #46 (in progress)
|
||||
|
||||
Priority Alerts:
|
||||
1 high-priority item blocked: #47
|
||||
All critical items completed
|
||||
|
||||
Recommendations:
|
||||
1. Focus on completing #46 (Login endpoint) - unblocks #47
|
||||
2. Continue parallel work on #49 (Auth tests)
|
||||
3. #50 and #52 are ready - can start in parallel
|
||||
/sprint-status # Text-based status report
|
||||
/sprint-status --diagram # Include Mermaid dependency diagram
|
||||
```

## Workflow

1. **Fetch Sprint Issues** - Get all issues for current milestone
2. **Calculate Progress** - Count completed vs total issues
3. **Identify Active Tasks** - Find issues with `Status/In-Progress`
4. **Identify Blockers** - Find issues with `Status/Blocked`
5. **Show Dependency Status** - Which tasks are now unblocked
6. **Parse Progress Comments** - Extract real-time status from structured comments

### If --diagram flag:

7. **Fetch Dependencies** - Use `list_issue_dependencies` for each issue
8. **Get Execution Order** - Use `get_execution_order` for batch grouping
9. **Generate Mermaid Syntax** - Create flowchart with status colors

## Dependency Analysis

The status check analyzes dependencies to show:

**Blocked Issues:**
- Issues waiting for other issues to complete
- Shows which issue is blocking and its current status

**Unblocked Issues:**
- Issues with no pending dependencies
- Ready to be picked up immediately

**Parallel Opportunities:**
- Multiple unblocked issues that can run simultaneously
- Maximizes sprint velocity

## Output Format

See `skills/progress-tracking.md` for the progress display format.

### Diagram Format (--diagram)

```mermaid
flowchart TD
    subgraph batch1["Batch 1 - No Dependencies"]
        241["#241: sprint-diagram"]
        242["#242: confidence threshold"]
    end

    classDef completed fill:#90EE90,stroke:#228B22
    classDef inProgress fill:#FFD700,stroke:#DAA520
    classDef open fill:#ADD8E6,stroke:#4682B4
    classDef blocked fill:#FFB6C1,stroke:#CD5C5C

    class 241 completed
    class 242 inProgress
```

## Filtering Options

You can optionally filter the status check:

**By Label:**
```
Show only high-priority issues:
list_issues(labels=["Priority/High"])
```

**By Milestone:**
```
Show issues for a specific sprint:
list_issues(milestone="Sprint 18")
```

**By Component:**
```
Show only backend issues:
list_issues(labels=["Component/Backend"])
```
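Step 9's Mermaid generation amounts to string assembly over the issue list. A minimal sketch, assuming issues arrive as plain dicts (the `issues_to_mermaid` helper and the field names `number`, `title`, `status`, `blocked_by` are illustrative, not part of the plugin):

```python
# Sketch: build Mermaid flowchart lines from issue dicts.
STATUS_CLASS = {
    "closed": "completed",
    "in_progress": "inProgress",
    "blocked": "blocked",
    "open": "open",
}

def issues_to_mermaid(issues):
    """Render issues as a Mermaid flowchart with status color classes."""
    lines = ["flowchart TD"]
    for issue in issues:
        # Node declaration: 46["#46: Build login endpoint"]
        lines.append(f'    {issue["number"]}["#{issue["number"]}: {issue["title"]}"]')
        for dep in issue.get("blocked_by", []):
            lines.append(f'    {dep} --> {issue["number"]}')
    for issue in issues:
        lines.append(f'    class {issue["number"]} {STATUS_CLASS[issue["status"]]}')
    return "\n".join(lines)
```

The `classDef` color lines from the template above would be appended unchanged; only the node, edge, and `class` lines vary per sprint.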

## Progress Comment Parsing

Agents post structured progress comments in this format:

```markdown
## Progress Update
**Status:** In Progress | Blocked | Failed
**Phase:** [current phase name]
**Tool Calls:** X (budget: Y)

### Completed
- [x] Step 1

### In Progress
- [ ] Current step

### Blockers
- None | [blocker description]
```

**To extract real-time progress:**

1. Fetch issue comments: `get_issue(number)` includes recent comments
2. Look for comments containing `## Progress Update`
3. Parse the **Status:** line for current state
4. Parse **Tool Calls:** for budget consumption
5. Extract blockers from `### Blockers` section

**Progress Summary Display:**

```
In Progress Issues:
  #45: [Sprint 18] feat: JWT service
       Status: In Progress | Phase: Testing | Tool Calls: 67/100
       Completed: 4/6 steps | Current: Writing unit tests

  #46: [Sprint 18] feat: Login endpoint
       Status: Blocked | Phase: Implementation | Tool Calls: 23/100
       Blocker: Waiting for JWT service (#45)
```
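The extraction steps above reduce to a few line-oriented matches over the comment body. A minimal sketch, assuming the body is already fetched (`parse_progress` is an illustrative helper, not part of the plugin):

```python
import re

def parse_progress(comment):
    """Extract status and tool-call usage from a structured progress comment."""
    if "## Progress Update" not in comment:
        return None  # not a structured progress comment
    # **Status:** In Progress      -> "In Progress"
    status = re.search(r"\*\*Status:\*\*\s*(.+)", comment)
    # **Tool Calls:** 45 (budget: 100) -> (45, 100)
    calls = re.search(r"\*\*Tool Calls:\*\*\s*(\d+)\s*\(budget:\s*(\d+)\)", comment)
    return {
        "status": status.group(1).strip() if status else "unknown",
        "tool_calls": int(calls.group(1)) if calls else None,
        "budget": int(calls.group(2)) if calls else None,
    }
```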

## Blocker Detection

The command identifies blocked issues by:

1. **Progress Comments** - Parse `### Blockers` section from structured comments
2. **Status Labels** - Check for `Status/Blocked` label on issue
3. **Dependency Analysis** - Uses `list_issue_dependencies` to find unmet dependencies
4. **Comment Keywords** - Checks for "blocked", "blocker", "waiting for"
5. **Stale Issues** - Issues with no recent activity (>7 days)
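Checks 4 and 5 can be expressed directly; a sketch, where the 7-day threshold mirrors the list above and the helper names are illustrative:

```python
from datetime import datetime, timedelta, timezone

BLOCK_KEYWORDS = ("blocked", "blocker", "waiting for")

def looks_blocked(comment):
    """Check 4: case-insensitive keyword scan over a comment body."""
    text = comment.lower()
    return any(keyword in text for keyword in BLOCK_KEYWORDS)

def is_stale(last_activity, days=7):
    """Check 5: no activity within the staleness window."""
    return datetime.now(timezone.utc) - last_activity > timedelta(days=days)
```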

## When to Use

Run `/sprint-status` when you want to:

- Start your day and see what needs attention
- Prepare for standup meetings
- Check if the sprint is on track
- Identify bottlenecks or blockers
- Decide what to work on next
- See which tasks can run in parallel

## Integration with Other Commands

- Use `/sprint-start` to begin working on identified tasks
- Use `/sprint-close` when all issues are completed
- Use `/sprint-plan` to adjust scope if blocked items can't be unblocked

### Status Colors

| Status | Color | Hex |
|--------|-------|-----|
| Completed | Green | #90EE90 |
| In Progress | Yellow | #FFD700 |
| Open | Blue | #ADD8E6 |
| Blocked | Red | #FFB6C1 |

## Visual Output

When executing this command, display the plugin header followed by a progress block:

```
╔══════════════════════════════════════════════════════════════════╗
║ 📋 PROJMAN                                                       ║
║ ⚡ EXECUTION                                                      ║
║ 📊 STATUS                                                        ║
║ [Sprint Name]                                                    ║
╚══════════════════════════════════════════════════════════════════╝

┌─ Sprint Progress ────────────────────────────────────────────────┐
│ [Sprint Name]                                                    │
│ ████████████░░░░░░░░░░░░░░░░░░ 40% complete                      │
│ ✅ Done: 4  ⏳ Active: 2  ⬚ Pending: 4                           │
└──────────────────────────────────────────────────────────────────┘
```

Replace `[Sprint Name]` with the actual sprint/milestone name. Calculate the percentage from completed vs total issues.

Then proceed with the full status report.
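The 30-character bar in the progress block can be derived from the issue counts. A sketch using the same glyphs as the template (the `progress_bar` helper is illustrative):

```python
def progress_bar(done, total, width=30):
    """Render a filled/empty bar plus percentage for the sprint progress block."""
    pct = done / total if total else 0.0
    filled = round(pct * width)
    return "█" * filled + "░" * (width - filled) + f" {pct:.0%} complete"

print(progress_bar(4, 10))  # 12 filled cells, 18 empty, "40% complete"
```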

## Example Usage

```
User: /sprint-status

Sprint Status Report
====================

Sprint: Sprint 18 - Authentication System
Progress: 3/8 (37.5%)

Next Actions:
1. Complete #46 - it's blocking #47
2. Start #50 or #52 - both are unblocked

Would you like me to generate execution prompts for the unblocked tasks?
```
@@ -4,92 +4,44 @@ description: Analyze CHANGELOG.md and suggest appropriate semantic version bump

# Suggest Version

## Purpose

Analyze CHANGELOG.md and suggest the appropriate semantic version bump.

## Invocation

Run `/suggest-version` after updating CHANGELOG.md or before a release.

## Workflow

1. **Read Current State**
   - Read `CHANGELOG.md` for the current version and [Unreleased] content
   - Read `.claude-plugin/marketplace.json` for the current marketplace version
   - Check individual plugin versions in `plugins/*/.claude-plugin/plugin.json`

2. **Analyze [Unreleased] Section**
   - Extract all entries under `### Added`, `### Changed`, `### Fixed`, `### Removed`, `### Deprecated`
   - Categorize changes by impact

3. **Apply SemVer Rules**

   | Change Type | Version Bump | Indicators |
   |-------------|--------------|------------|
   | **MAJOR** (X.0.0) | Breaking changes | `### Removed`, `### Changed` with "BREAKING:", renamed/removed APIs |
   | **MINOR** (x.Y.0) | New features, backwards compatible | `### Added` with new commands/plugins/features |
   | **PATCH** (x.y.Z) | Bug fixes only | `### Fixed` only, `### Changed` for non-breaking tweaks |
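The SemVer mapping above is small enough to sketch as a helper; `suggest_bump` and the section-name keys are illustrative, not part of the plugin:

```python
def suggest_bump(unreleased):
    """Map Keep-a-Changelog [Unreleased] sections to a SemVer bump suggestion."""
    changed = unreleased.get("Changed", [])
    if unreleased.get("Removed") or any("BREAKING:" in entry for entry in changed):
        return "major"
    if unreleased.get("Added"):
        return "minor"
    if unreleased.get("Fixed") or changed:
        return "patch"
    return "none"  # empty [Unreleased]: nothing to release
```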

4. **Output Recommendation**

   ```
   ## Version Analysis

   **Current version:** X.Y.Z
   **[Unreleased] summary:**
   - Added: N entries (new features/plugins)
   - Changed: N entries (M breaking)
   - Fixed: N entries
   - Removed: N entries

   **Recommendation:** MINOR bump → X.(Y+1).0
   **Reason:** New features added without breaking changes

   **To release:** ./scripts/release.sh X.Y.Z
   ```

5. **Check Version Sync**
   - Compare the marketplace version with individual plugin versions
   - Warn if plugins are out of sync (e.g., marketplace 4.0.0 but projman 3.1.0)

## Examples

**Output when a MINOR bump is needed:**

```
## Version Analysis

**Current version:** 4.0.0
**[Unreleased] summary:**
- Added: 3 entries (new command, hook improvement, workflow example)
- Changed: 1 entry (0 breaking)
- Fixed: 2 entries

**Recommendation:** MINOR bump → 4.1.0
**Reason:** New features (Added section) without breaking changes

**To release:** ./scripts/release.sh 4.1.0
```

**Output when there is nothing to release:**

```
## Version Analysis

**Current version:** 4.0.0
**[Unreleased] summary:** Empty - no pending changes

**Recommendation:** No release needed
```

## Visual Output

When executing this command, display the plugin header:

```
╔══════════════════════════════════════════════════════════════════╗
║ 📋 PROJMAN                                                       ║
║ Version Analysis                                                 ║
╚══════════════════════════════════════════════════════════════════╝
```

Then proceed with the version analysis.

## Integration

This command helps maintain a proper versioning workflow:

- Run after completing a sprint to determine the version bump
- Run before `/sprint-close` to ensure the version is updated
- Integrates with `./scripts/release.sh` for actual release execution
@@ -1,177 +0,0 @@
---
name: test-check
description: Run tests and verify coverage before sprint close
---

# Test Check for Sprint Close

Verify test status and coverage before closing the sprint.

## Framework Detection

Detect the test framework by checking for:

| Indicator | Framework | Command |
|-----------|-----------|---------|
| `pytest.ini`, `pyproject.toml` with pytest, `tests/` with `test_*.py` | pytest | `pytest` |
| `package.json` with jest | Jest | `npm test` or `npx jest` |
| `package.json` with mocha | Mocha | `npm test` or `npx mocha` |
| `package.json` with vitest | Vitest | `npm test` or `npx vitest` |
| `go.mod` with `*_test.go` files | Go test | `go test ./...` |
| `Cargo.toml` with `tests/` or `#[test]` | Cargo test | `cargo test` |
| `Makefile` with test target | Make | `make test` |
| `tox.ini` | tox | `tox` |
| `setup.py` with test command | setuptools | `python setup.py test` |
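The detection order in the table can be sketched as a first-match scan; the ordering and the `detect_test_command` helper here are illustrative:

```python
from pathlib import Path

# Ordered (indicator file, command) pairs mirroring the table above;
# the first match wins, so more specific indicators come first.
FRAMEWORKS = [
    ("pytest.ini", "pytest"),
    ("tox.ini", "tox"),
    ("go.mod", "go test ./..."),
    ("Cargo.toml", "cargo test"),
    ("package.json", "npm test"),
    ("Makefile", "make test"),
]

def detect_test_command(root="."):
    """Return the first matching test command, or None if nothing is found."""
    for indicator, command in FRAMEWORKS:
        if Path(root, indicator).exists():
            return command
    return None
```

A full implementation would also inspect `package.json` contents to distinguish Jest, Mocha, and Vitest, as the table notes.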

## Execution Steps

### 1. Detect Framework

1. Check for framework indicators in the project root
2. If multiple are found, list them and ask which to run
3. If none are found, report "No test framework detected"

### 2. Run Tests

1. Execute the appropriate test command
2. Capture stdout/stderr
3. Parse results for pass/fail counts
4. Note: some frameworks may require dependencies to be installed first

### 3. Coverage Check (if available)

Coverage tools by framework:

- **Python**: `pytest --cov` or `coverage run`
- **JavaScript**: Jest has built-in coverage (`--coverage`)
- **Go**: `go test -cover`
- **Rust**: `cargo tarpaulin` or `cargo llvm-cov`

If coverage is configured:

- Report overall coverage percentage
- List files with 0% coverage that were changed in the sprint
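Step 2's pass/fail parsing can be sketched with a regex over the pytest summary line; the helper name and sample line are illustrative:

```python
import re

def parse_pytest_summary(output):
    """Pull counts like '3 passed, 1 failed' from pytest's summary line."""
    counts = {"passed": 0, "failed": 0, "skipped": 0}
    for number, status in re.findall(r"(\d+) (passed|failed|skipped)", output):
        counts[status] = int(number)
    return counts

parse_pytest_summary("=========== 3 passed, 1 failed in 0.52s ===========")
# -> {"passed": 3, "failed": 1, "skipped": 0}
```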

### 4. Sprint File Analysis

If sprint context is available:

- Identify which sprint files have tests
- Flag sprint files with no corresponding test coverage

## Output Format

```
## Test Check Summary

### Test Results
- Framework: {detected framework}
- Status: {PASS/FAIL}
- Passed: {n} | Failed: {n} | Skipped: {n}
- Duration: {time}

### Failed Tests
- test_name: error message (file:line)

### Coverage (if available)
- Overall: {n}%
- Sprint files coverage:
  - file.py: {n}%
  - file.py: NO TESTS

### Recommendation
{READY FOR CLOSE / TESTS MUST PASS / COVERAGE GAPS TO ADDRESS}
```

## Behavior Flags

The command accepts optional flags via natural language:

| Request | Behavior |
|---------|----------|
| "run tests with coverage" | Include coverage report |
| "run tests verbose" | Show full output |
| "just check, don't run" | Report framework detection only |
| "run specific tests for X" | Run tests matching pattern |

## Framework-Specific Notes

### Python (pytest)

```bash
# Basic run
pytest

# With coverage
pytest --cov=src --cov-report=term-missing

# Verbose
pytest -v

# Specific tests
pytest tests/test_specific.py -k "test_function_name"
```

### JavaScript (Jest/Vitest)

```bash
# Basic run
npm test

# With coverage
npm test -- --coverage

# Specific tests
npm test -- --testPathPattern="specific"
```

### Go

```bash
# Basic run
go test ./...

# With coverage
go test -cover ./...

# Verbose
go test -v ./...
```

### Rust

```bash
# Basic run
cargo test

# Verbose
cargo test -- --nocapture
```

## Do NOT

- Modify test files
- Skip failing tests to make the run pass
- Run tests in production environments (check for .env indicators)
- Install dependencies without asking first
- Run tests that require external services without confirmation

## Error Handling

If tests fail:

1. Report the failure clearly
2. List failed test names and error summaries
3. Recommend: "TESTS MUST PASS before sprint close"
4. Offer to help debug specific failures

If no framework is detected:

1. List what was checked
2. Ask the user to specify the test command
3. Offer common suggestions based on the file types found

## Visual Output

When executing this command, display the plugin header:

```
╔══════════════════════════════════════════════════════════════════╗
║ 📋 PROJMAN                                                       ║
║ 🏁 CLOSING                                                       ║
║ Test Verification                                                ║
╚══════════════════════════════════════════════════════════════════╝
```

Then proceed with the test check workflow.
@@ -1,131 +0,0 @@
---
description: Generate tests for specified code - creates unit, integration, or e2e tests
---

# Test Generation

Generate comprehensive tests for specified code.

## Usage

```
/test-gen <target> [--type=<type>] [--framework=<framework>]
```

**Target:** File path, function name, class name, or module
**Type:** unit (default), integration, e2e, snapshot
**Framework:** Auto-detected or specify (pytest, jest, vitest, go test, etc.)

## Process

1. **Analyze Target Code**
   - Parse function/class signatures
   - Identify dependencies and side effects
   - Map input types and return types
   - Find edge cases from logic branches

2. **Determine Test Strategy**

   | Code Pattern | Test Approach |
   |--------------|---------------|
   | Pure function | Unit tests with varied inputs |
   | Class with state | Setup/teardown, state transitions |
   | External calls | Mocks/stubs for dependencies |
   | Database ops | Integration tests with fixtures |
   | API endpoints | Request/response tests |
   | UI components | Snapshot + interaction tests |

3. **Generate Tests**

   For each target function/method:
   - Happy path test (expected inputs → expected output)
   - Edge cases (empty, null, boundary values)
   - Error cases (invalid inputs → expected errors)
   - Type variations (if dynamic typing)

4. **Test Structure**

   ```python
   # Example output for Python/pytest

   import pytest
   from module import target_function


   class TestTargetFunction:
       """Tests for target_function."""

       def test_happy_path(self):
           """Standard input produces expected output."""
           result = target_function(valid_input)
           assert result == expected_output

       def test_empty_input(self):
           """Empty input handled gracefully."""
           result = target_function("")
           assert result == default_value

       def test_invalid_input_raises(self):
           """Invalid input raises ValueError."""
           with pytest.raises(ValueError):
               target_function(invalid_input)

       @pytest.mark.parametrize("input,expected", [
           (case1_in, case1_out),
           (case2_in, case2_out),
       ])
       def test_variations(self, input, expected):
           """Multiple input variations."""
           assert target_function(input) == expected
   ```

5. **Output**

   ```
   ## Tests Generated

   ### Target: src/orders.py:calculate_total

   ### File Created: tests/test_orders.py

   ### Tests (6 total)
   - test_calculate_total_happy_path
   - test_calculate_total_empty_items
   - test_calculate_total_negative_price_raises
   - test_calculate_total_with_discount
   - test_calculate_total_with_tax
   - test_calculate_total_parametrized_cases

   ### Coverage Estimate
   - Line coverage: ~85%
   - Branch coverage: ~70%

   ### Run Tests
   pytest tests/test_orders.py -v
   ```

## Framework Detection

| Files Present | Framework Used |
|---------------|----------------|
| pytest.ini, conftest.py | pytest |
| jest.config.* | jest |
| vitest.config.* | vitest |
| *_test.go | go test |
| Cargo.toml | cargo test |
| mix.exs | ExUnit |

## Visual Output

When executing this command, display the plugin header:

```
╔══════════════════════════════════════════════════════════════════╗
║ 📋 PROJMAN                                                       ║
║ Test Generation                                                  ║
╚══════════════════════════════════════════════════════════════════╝
```

Then proceed with the test generation workflow.

## Integration with /test-check

- `/test-gen` creates new tests
- `/test-check` verifies tests pass
- Typical flow: `/test-gen src/new_module.py` → `/test-check`
106
plugins/projman/commands/test.md
Normal file
@@ -0,0 +1,106 @@
---
description: Run tests with coverage or generate tests for specified code
---

# Test

## Skills Required

- skills/test-standards.md

## Purpose

Unified testing command for running tests and generating new tests.

## Invocation

```
/test                                # Default: run tests
/test run                            # Run tests, check coverage
/test run --coverage                 # Run with coverage report
/test run --verbose                  # Verbose output
/test gen <target>                   # Generate tests for target
/test gen <target> --type=unit       # Specific test type
/test gen <target> --framework=jest  # Specific framework
```

## Mode Selection

- No args or `run` → **Run Mode**
- `gen <target>` → **Generate Mode**
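The dispatch rule is simple enough to express directly; `parse_mode` is an illustrative helper, not part of the plugin:

```python
def parse_mode(args):
    """Return (mode, target) for a /test invocation's argument list."""
    positional = [a for a in args if not a.startswith("--")]
    if not positional or positional[0] == "run":
        return ("run", None)
    if positional[0] == "gen" and len(positional) > 1:
        return ("gen", positional[1])  # target: file, function, class, or module
    raise ValueError(f"unrecognized /test arguments: {args}")
```

Flags like `--coverage` or `--type=unit` pass through untouched; only the leading positional words select the mode.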

---

## Mode: Run

Run tests and verify coverage before sprint close.

### Workflow

1. **Detect Framework**
   Check for: pytest.ini, package.json, go.mod, Cargo.toml, etc.

2. **Run Tests**
   Execute the appropriate test command for the detected framework.

3. **Check Coverage** (if --coverage or available)
   Report the coverage percentage.

4. **Sprint File Analysis**
   Identify sprint files without tests.

See `skills/test-standards.md` for framework detection and commands.

### DO NOT (Run Mode)

- Modify test files
- Skip failing tests to make the run pass
- Run tests in production environments
- Install dependencies without asking

---

## Mode: Generate

Generate comprehensive tests for specified code.

### Arguments

- **Target:** File path, function name, class name, or module
- **--type:** unit (default), integration, e2e, snapshot
- **--framework:** Auto-detected or specified (pytest, jest, vitest, go test)

### Workflow

1. **Analyze Target Code**
   Parse signatures, identify dependencies, map types.

2. **Determine Test Strategy**
   Based on code pattern:
   - Pure function → unit tests with multiple inputs
   - Class → instance lifecycle tests
   - API endpoint → request/response tests
   - Component → render and interaction tests

3. **Generate Tests**
   - Happy path cases
   - Edge cases (empty, null, boundary)
   - Error cases (invalid input, exceptions)
   - Type variations (if applicable)

4. **Output File**
   Create a test file with proper structure and naming.

See `skills/test-standards.md` for test patterns and structure.

---

## Visual Output

```
╔══════════════════════════════════════════════════════════════════╗
║ 📋 PROJMAN                                                       ║
║ 🧪 TEST                                                          ║
║ [Mode: Run | Generate]                                           ║
╚══════════════════════════════════════════════════════════════════╝
```