refactor: consolidate plugins into plugins/ directory

- Created plugins/ directory to centralize all plugins
- Moved projman and project-hygiene into plugins/
- Updated .mcp.json paths from ../mcp-servers/ to ../../mcp-servers/
- Updated CLAUDE.md governance section with new directory structure
- Updated README.md with new repository structure diagram
- Fixed all path references in documentation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-11 23:26:38 -05:00
parent 8bb69d3556
commit e9e425933b
21 changed files with 52 additions and 51 deletions


@@ -0,0 +1,35 @@
---
name: initial-setup
description: Run initial setup for support-claude-mktplace
arguments: []
---
# Initial Setup
Run the installation script to set up the toolkit.
## What This Does
1. Creates Python virtual environments for MCP servers
2. Installs all dependencies
3. Creates configuration file templates
4. Validates existing configuration
5. Reports remaining manual steps
## Execution
```bash
cd ${PROJECT_ROOT}
./scripts/setup.sh
```
## After Running
Review the output for any manual steps required:
- Configure API credentials in `~/.config/claude/`
- Run `/labels-sync` to sync Gitea labels
- Verify Wiki.js directory structure
## Re-Running
This command is safe to run multiple times. It will skip already-completed steps.


@@ -0,0 +1,218 @@
---
name: labels-sync
description: Synchronize label taxonomy from Gitea and update suggestion logic
---
# Sync Label Taxonomy from Gitea
This command synchronizes the label taxonomy from Gitea (organization + repository labels) and updates the local reference file used by the label suggestion logic.
## Why Label Sync Matters
The label taxonomy is **dynamic** - new labels may be added to Gitea over time:
- Organization-level labels (shared across all repos)
- Repository-specific labels (unique to this project)
**Dynamic approach:** Never hardcode labels. Always fetch from Gitea and adapt suggestions accordingly.
## What This Command Does
1. **Fetch Current Labels** - Uses `get_labels` MCP tool to fetch all labels (org + repo)
2. **Compare with Local Reference** - Checks against `skills/label-taxonomy/labels-reference.md`
3. **Detect Changes** - Identifies new, removed, or modified labels
4. **Explain Changes** - Shows what changed and why it matters
5. **Update Reference** - Updates the local labels-reference.md file
6. **Confirm Update** - Asks for user confirmation before updating
## MCP Tools Used
**Gitea Tools:**
- `get_labels` - Fetch all labels (organization + repository)
The command parses the response and categorizes labels by namespace and color.
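For reference, a minimal sketch of the compare step, assuming `get_labels` returns a list of objects with `name`, `color`, and `description` fields and that the local reference has been parsed into the same shape:
```python
# Minimal sketch of the compare step. The response shape and the parsed
# local reference format are assumptions; the real MCP server may differ.
from collections import defaultdict

def categorize(labels):
    """Group labels by namespace, e.g. 'Type/Bug' falls under 'Type'."""
    by_namespace = defaultdict(list)
    for label in labels:  # assumed shape: {"name", "color", "description"}
        namespace = label["name"].split("/", 1)[0]
        by_namespace[namespace].append(label)
    return by_namespace

def diff_taxonomy(remote, local):
    """Return label names that are new, removed, or changed (color/description)."""
    remote_by_name = {label["name"]: label for label in remote}
    local_by_name = {label["name"]: label for label in local}
    new = [name for name in remote_by_name if name not in local_by_name]
    removed = [name for name in local_by_name if name not in remote_by_name]
    modified = [
        name for name in remote_by_name
        if name in local_by_name and remote_by_name[name] != local_by_name[name]
    ]
    return new, removed, modified
```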
## Expected Output
```
Label Taxonomy Sync
===================
Fetching labels from Gitea...
Current Label Taxonomy:
- Organization Labels: 29
- Repository Labels: 16
- Total: 45 labels
Comparing with local reference...
Changes Detected:
✨ NEW: Type/Performance (org-level)
Description: Performance optimization tasks
Color: #FF6B6B
Suggestion: Add to suggestion logic for performance-related work
✨ NEW: Tech/Redis (repo-level)
Description: Redis-related technology
Color: #DC143C
Suggestion: Add to suggestion logic for caching and data store work
📝 MODIFIED: Priority/Critical
Change: Color updated from #D73A4A to #FF0000
Impact: Visual only, no logic change needed
❌ REMOVED: Component/Legacy
Reason: Component deprecated and removed from codebase
Impact: Remove from suggestion logic
Summary:
- 2 new labels added
- 1 label modified (color only)
- 1 label removed
- Total labels: 44 → 45
Label Suggestion Logic Updates:
- Type/Performance: Suggest for keywords "optimize", "performance", "slow", "speed"
- Tech/Redis: Suggest for keywords "cache", "redis", "session", "pubsub"
- Component/Legacy: Remove from all suggestion contexts
Update local reference file?
[Y/n]
```
## Label Taxonomy Structure
Labels are organized by namespace:
**Organization Labels (28):**
- `Agent/*` (2): Agent/Human, Agent/Claude
- `Complexity/*` (3): Simple, Medium, Complex
- `Efforts/*` (5): XS, S, M, L, XL
- `Priority/*` (4): Low, Medium, High, Critical
- `Risk/*` (3): Low, Medium, High
- `Source/*` (4): Development, Staging, Production, Customer
- `Type/*` (6): Bug, Feature, Refactor, Documentation, Test, Chore
**Repository Labels (16):**
- `Component/*` (9): Backend, Frontend, API, Database, Auth, Deploy, Testing, Docs, Infra
- `Tech/*` (7): Python, JavaScript, Docker, PostgreSQL, Redis, Vue, FastAPI
## Local Reference File
The command updates `skills/label-taxonomy/labels-reference.md` with:
```markdown
# Label Taxonomy Reference
Last synced: 2025-01-18 14:30 UTC
Source: Gitea (bandit/your-repo-name)
## Organization Labels (28)
### Agent (2)
- Agent/Human - Work performed by human developers
- Agent/Claude - Work performed by Claude Code
### Type (6)
- Type/Bug - Bug fixes and error corrections
- Type/Feature - New features and enhancements
- Type/Refactor - Code restructuring and architectural changes
- Type/Documentation - Documentation updates
- Type/Test - Testing-related work
- Type/Chore - Maintenance and tooling tasks
...
## Repository Labels (16)
### Component (9)
- Component/Backend - Backend service code
- Component/Frontend - User interface code
- Component/API - API endpoints and contracts
...
## Suggestion Logic
When suggesting labels, consider:
**Type Detection:**
- Keywords "bug", "fix", "error" → Type/Bug
- Keywords "feature", "add", "implement" → Type/Feature
- Keywords "refactor", "extract", "restructure" → Type/Refactor
...
```
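The Suggestion Logic section of the reference file drives keyword-based suggestions. A minimal sketch of that mapping, assuming issue text is matched case-insensitively; the keyword table is illustrative, not the `suggest_labels` tool's actual implementation:
```python
# Illustrative keyword-to-label mapping; the real suggestion logic lives in
# the suggest_labels MCP tool and may use different keywords.
KEYWORD_LABELS = {
    "Type/Bug": ["bug", "fix", "error"],
    "Type/Feature": ["feature", "add", "implement"],
    "Type/Refactor": ["refactor", "extract", "restructure"],
    "Type/Performance": ["optimize", "performance", "slow", "speed"],
    "Tech/Redis": ["cache", "redis", "session", "pubsub"],
}

def suggest_labels_from_text(text: str) -> list[str]:
    """Return labels whose keywords appear in the issue title or description."""
    lowered = text.lower()
    return [
        label
        for label, keywords in KEYWORD_LABELS.items()
        if any(keyword in lowered for keyword in keywords)
    ]
```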
## When to Run
Run `/labels-sync` when:
- Setting up the plugin for the first time
- You notice missing labels in suggestions
- New labels are added to Gitea (announced by team)
- Quarterly maintenance (check for changes)
- After major taxonomy updates
## Integration with Other Commands
The updated taxonomy is used by:
- `/sprint-plan` - Planner agent uses `suggest_labels` with current taxonomy
- All commands that create or update issues
## Example Usage
```
User: /labels-sync
Fetching labels from Gitea...
Current Label Taxonomy:
- Organization Labels: 28
- Repository Labels: 16
- Total: 44 labels
Comparing with local reference...
✅ No changes detected. Label taxonomy is up to date.
Last synced: 2025-01-18 14:30 UTC
User: /labels-sync
Fetching labels from Gitea...
Changes Detected:
✨ NEW: Type/Performance
✨ NEW: Tech/Redis
Update local reference file? [Y/n] y
✅ Label taxonomy updated successfully!
✅ Suggestion logic updated with new labels
New labels available for use:
- Type/Performance
- Tech/Redis
```
## Troubleshooting
**Error: Cannot fetch labels from Gitea**
- Check your Gitea configuration in `~/.config/claude/gitea.env`
- Verify your API token has `read:org` and `repo` permissions
- Ensure you're connected to the network
**Error: Permission denied to update reference file**
- Check file permissions on `skills/label-taxonomy/labels-reference.md`
- Ensure you have write access to the plugin directory
**No changes detected but labels seem wrong**
- The reference file may be manually edited - review it
- Try forcing a re-sync by deleting the reference file first
- Check if you're comparing against the correct repository
## Best Practices
1. **Sync regularly** - Run monthly or when notified of label changes
2. **Review changes** - Always review what changed before confirming
3. **Update planning** - After sync, consider if new labels affect current sprint
4. **Communicate changes** - Let team know when new labels are available
5. **Keep skill updated** - The label-taxonomy skill should match the reference file


@@ -0,0 +1,231 @@
---
name: sprint-close
description: Complete sprint and capture lessons learned to Wiki.js
agent: orchestrator
---
# Close Sprint and Capture Lessons Learned
This command completes the sprint and captures lessons learned to Wiki.js. **This is critical**: after 15 sprints without systematic lesson capture, the same mistakes recurred (e.g., Claude Code hit infinite loops 2-3 times on similar issues).
## Why Lessons Learned Matter
**Problem:** Without systematic lesson capture, teams repeat the same mistakes:
- Claude Code infinite loops on similar issues (happened 2-3 times)
- Same architectural mistakes (multiple occurrences)
- Forgotten optimizations (re-discovered each time)
**Solution:** Mandatory lessons learned capture at sprint close, searchable at sprint start.
## Sprint Close Workflow
The orchestrator agent will guide you through:
1. **Review Sprint Completion**
- Verify all issues are closed or moved to backlog
- Check for incomplete work needing carryover
- Review overall sprint goals vs. actual completion
2. **Capture Lessons Learned**
- What went wrong and why
- What went right and should be repeated
- Preventable repetitions to avoid in future sprints
- Technical insights and gotchas discovered
3. **Tag for Discoverability**
- Apply relevant tags: technology, component, type of lesson
- Ensure future sprints can find these lessons via search
- Use consistent tagging for patterns
4. **Update Wiki.js**
- Use `create_lesson` to save lessons to Wiki.js
- Create lessons in `/projects/{project}/lessons-learned/sprints/`
- Update INDEX.md automatically
- Make lessons searchable for future sprints
5. **Git Operations**
- Commit any remaining work
- Merge feature branches if needed
- Clean up merged branches
- Tag sprint completion
## MCP Tools Available
**Gitea Tools:**
- `list_issues` - Review sprint issues (completed and incomplete)
- `get_issue` - Get detailed issue information for retrospective
- `update_issue` - Move incomplete issues to next sprint
**Wiki.js Tools:**
- `create_lesson` - Create lessons learned entry
- `tag_lesson` - Add/update tags on lessons
- `list_pages` - Check existing lessons learned
- `update_page` - Update INDEX.md if needed
## Lesson Structure
Lessons should follow this structure:
```markdown
# Sprint X - [Lesson Title]
## Context
[What were you trying to do? What was the sprint goal?]
## Problem
[What went wrong? What insight emerged? What challenge did you face?]
## Solution
[How did you solve it? What approach worked?]
## Prevention
[How can this be avoided or optimized in the future? What should future sprints know?]
## Tags
[Comma-separated tags for search: technology, component, type]
```
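A minimal sketch of turning these fields into a Wiki.js page; the `create_lesson` parameters shown in the trailing comments are assumptions, not the tool's confirmed signature:
```python
# Hypothetical helper: assemble the lesson body from the template fields.
# The create_lesson call in the trailing comment uses assumed parameter names.
LESSON_TEMPLATE = """# Sprint {sprint} - {title}
## Context
{context}
## Problem
{problem}
## Solution
{solution}
## Prevention
{prevention}
## Tags
{tags}
"""

def build_lesson(project, sprint, title, context, problem, solution, prevention, tags):
    body = LESSON_TEMPLATE.format(
        sprint=sprint, title=title, context=context, problem=problem,
        solution=solution, prevention=prevention, tags=", ".join(tags),
    )
    slug = title.lower().replace(" ", "-")
    path = f"/projects/{project}/lessons-learned/sprints/sprint-{sprint}-{slug}.md"
    return path, body

# path, body = build_lesson("cuisineflow", 16, "Claude Code Infinite Loop on Validation Errors", ...)
# create_lesson(path=path, title=title, content=body, tags=tags)   # assumed signature
```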
## Example Lessons Learned
**Example 1: Technical Gotcha**
```markdown
# Sprint 16 - Claude Code Infinite Loop on Validation Errors
## Context
Implementing input validation for authentication API endpoints.
## Problem
Claude Code entered an infinite loop when pytest validation tests failed.
The loop occurred because the error message didn't change between attempts,
causing Claude to retry the same fix repeatedly.
## Solution
Added more descriptive error messages to validation tests that specify
exactly what value failed and why. This gave Claude clear feedback
to adjust the approach rather than retrying the same fix.
## Prevention
- Always write validation test errors with specific values and expectations
- If Claude loops, check if error messages provide unique information per failure
- Add a "loop detection" check in test output (fail after 3 identical errors)
## Tags
testing, claude-code, validation, python, pytest, debugging
```
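The "loop detection" idea from the Prevention section can be sketched as a small guard; this assumes the agent harness records each failing test message between attempts:
```python
# Hypothetical loop guard: abort after N identical failure messages so the
# agent gets a different signal instead of retrying the same fix.
from collections import Counter

class LoopGuard:
    def __init__(self, max_identical: int = 3):
        self.max_identical = max_identical
        self.seen = Counter()

    def record_failure(self, message: str) -> None:
        self.seen[message] += 1
        if self.seen[message] >= self.max_identical:
            raise RuntimeError(
                f"Identical failure seen {self.seen[message]} times; "
                "stop retrying and change approach:\n" + message
            )
```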
**Example 2: Architectural Decision**
```markdown
# Sprint 14 - Extracting Services Too Early
## Context
Planning to extract Intuit Engine service from monolith.
## Problem
Initial plan was to extract immediately without testing the API boundaries
first. This would have caused integration issues discovered late.
## Solution
Added a sprint phase to:
1. Define clear API contracts first
2. Add integration tests for the boundaries
3. THEN extract the service
Delayed extraction by one sprint but avoided major rework.
## Prevention
- Always define API contracts before service extraction
- Write integration tests FIRST, extraction SECOND
- Don't rush architectural changes - test boundaries first
## Tags
architecture, service-extraction, refactoring, api-design, planning
```
## Tagging Strategy
Use consistent tags for discoverability; a mapping sketch follows these lists:
**By Technology:**
- `python`, `javascript`, `docker`, `postgresql`, `redis`, `vue`, `fastapi`
**By Component:**
- `backend`, `frontend`, `api`, `database`, `auth`, `deploy`, `testing`, `docs`
**By Type:**
- `bug`, `feature`, `refactor`, `architecture`, `performance`, `security`
**By Issue:**
- `claude-code`, `validation`, `testing`, `debugging`, `deployment`, `configuration`
**By Pattern:**
- `infinite-loop`, `edge-case`, `integration`, `boundaries`, `dependencies`
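Because the tag vocabulary mirrors the Gitea label taxonomy, many tags can be derived mechanically from an issue's labels. A minimal sketch (the namespace mapping is illustrative):
```python
# Illustrative: derive lesson tags from Gitea labels such as "Tech/Python",
# "Component/Backend", or "Type/Bug".
def tags_from_labels(labels: list[str]) -> list[str]:
    tags = []
    for label in labels:
        namespace, _, value = label.partition("/")
        if namespace in {"Tech", "Component", "Type"} and value:
            tags.append(value.lower())
    return tags

# tags_from_labels(["Type/Bug", "Tech/Python", "Component/Auth"])
# -> ["bug", "python", "auth"]
```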
## Orchestrator's Role
The orchestrator agent will:
1. Ask probing questions about the sprint
2. Identify preventable mistakes vs. expected challenges
3. Structure lessons in the proper format
4. Suggest appropriate tags
5. Create lessons in Wiki.js using `create_lesson`
6. Ensure lessons are discoverable for future sprints
## Example Sprint Close Session
```
User: /sprint-close
Orchestrator: Let's close the sprint and capture lessons learned.
First, let me review the sprint completion...
[Uses list_issues to check sprint status]
Sprint 16 Summary:
- 8 issues planned
- 7 completed (87.5%)
- 1 moved to backlog (#52: Deploy auth service - blocked by infra)
Great progress! Now, let's capture lessons learned.
What went wrong or what challenges did you face this sprint?
User: Claude Code got stuck in a loop when pytest validation tests failed
Orchestrator: That's a valuable lesson - this prevents future frustration.
Let me ask some clarifying questions:
1. What caused the loop? (Same error message repeated?)
2. How did you break out of it?
3. What would prevent this in future sprints?
[After gathering details]
I'll create this lesson with tags: testing, claude-code, validation, python, pytest, debugging
[Uses create_lesson to save to Wiki.js]
Lesson created: "Sprint 16 - Claude Code Infinite Loop on Validation Errors"
Path: /projects/cuisineflow/lessons-learned/sprints/sprint-16-validation-loop.md
Any other lessons from this sprint?
User: Yes, we discovered that JWT token refresh needs special handling
Orchestrator: Excellent - let's capture that too...
[Continues capturing lessons]
All lessons captured! They'll be searchable when planning future sprints.
Would you like me to handle git operations (merge branches, tag release)?
```
## Getting Started
Simply run `/sprint-close` when your sprint is complete. The orchestrator will guide you through:
1. Sprint review
2. Lessons learned capture
3. Wiki.js updates
4. Git operations
**Don't skip this step!** Future sprints will thank you for capturing these insights.


@@ -0,0 +1,129 @@
---
name: sprint-plan
description: Start sprint planning with AI-guided architecture analysis and issue creation
agent: planner
---
# Sprint Planning
You are initiating sprint planning. The planner agent will guide you through architecture analysis, ask clarifying questions, and help create well-structured Gitea issues with appropriate labels.
## Branch Detection
**CRITICAL:** Before proceeding, check the current git branch:
```bash
git branch --show-current
```
**Branch Requirements:**
- ✅ **Development branches** (`development`, `develop`, `feat/*`, `dev/*`): Full planning capabilities
- ⚠️ **Staging branches** (`staging`, `stage/*`): Can create issues to document needed changes, but cannot modify code
- 🚫 **Production branches** (`main`, `master`, `prod/*`): READ-ONLY - no planning allowed
If you are on a production or staging branch, you MUST stop and ask the user to switch to a development branch.
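A minimal sketch of this gate, using the branch prefixes listed above; adjust the prefixes to your repository's conventions:
```python
# Sketch of the branch gate described above; prefixes follow the lists in
# this section and may need adjusting for your repository.
import subprocess

DEV = ("development", "develop", "feat/", "dev/")
STAGING = ("staging", "stage/")
PRODUCTION = ("main", "master", "prod/")

def current_branch() -> str:
    return subprocess.run(
        ["git", "branch", "--show-current"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

def planning_mode(branch: str) -> str:
    if branch.startswith(PRODUCTION):
        raise SystemExit(f"'{branch}' is read-only; switch to a development branch.")
    if branch.startswith(STAGING):
        return "issues-only"   # may document changes, must not modify code
    if branch.startswith(DEV):
        return "full"
    raise SystemExit(f"'{branch}' is not a recognized planning branch.")
```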
## Planning Workflow
The planner agent will:
1. **Understand Sprint Goals**
- Ask clarifying questions about the sprint objectives
- Understand scope, priorities, and constraints
- Never rush - take time to understand requirements fully
2. **Search Relevant Lessons Learned**
- Use the `search_lessons` MCP tool to find past experiences
- Search by keywords and tags relevant to the sprint work
- Review patterns and preventable mistakes from previous sprints
3. **Architecture Analysis**
- Think through technical approach and edge cases
- Identify architectural decisions needed
- Consider dependencies and integration points
- Review existing codebase architecture
4. **Create Gitea Issues**
- Use the `create_issue` MCP tool for each planned task
   - Apply appropriate labels using the `suggest_labels` tool (see the sketch after this list)
- Structure issues with clear titles and descriptions
- Include acceptance criteria and technical notes
5. **Generate Planning Document**
- Summarize architectural decisions
- List created issues with labels
- Document assumptions and open questions
- Provide sprint overview
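A minimal sketch of the issue-creation step: the payload builder is plain Python, while the `suggest_labels` and `create_issue` calls in the trailing comments use assumed parameter names:
```python
# Hypothetical payload builder for the issue-creation step; the actual
# suggest_labels / create_issue MCP tool signatures are assumptions.
def build_issue(title: str, body: str, acceptance: list[str]) -> dict:
    criteria = "\n".join(f"- [ ] {item}" for item in acceptance)
    return {
        "title": title,
        "body": f"{body}\n\n## Acceptance Criteria\n{criteria}",
    }

issue = build_issue(
    "Implement JWT token generation",
    "Create a JWT utility in backend/auth with refresh handling.",
    ["Tokens include user_id, email, expiration", "Unit tests cover refresh"],
)
# labels = suggest_labels(title=issue["title"], body=issue["body"])   # assumed signature
# create_issue(title=issue["title"], body=issue["body"], labels=labels)
```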
## MCP Tools Available
**Gitea Tools:**
- `list_issues` - Review existing issues
- `get_labels` - Fetch current label taxonomy
- `suggest_labels` - Get intelligent label suggestions based on context
- `create_issue` - Create new issue with labels
**Wiki.js Tools:**
- `search_lessons` - Search lessons learned from previous sprints
- `search_pages` - Search project documentation
- `get_page` - Fetch specific documentation page
## Label Taxonomy
The system uses a dynamic 44-label taxonomy (28 org + 16 repo). Always use the `suggest_labels` tool to get appropriate labels based on the issue context.
**Key Label Categories:**
- **Type/***: Bug, Feature, Refactor, Documentation, Test, Chore
- **Priority/***: Low, Medium, High, Critical
- **Complexity/***: Simple, Medium, Complex
- **Efforts/***: XS, S, M, L, XL
- **Component/***: Backend, Frontend, API, Database, Auth, Deploy, Testing, Docs, Infra
- **Tech/***: Python, JavaScript, Docker, PostgreSQL, Redis, Vue, FastAPI
## Planner Personality
The planner agent is thoughtful and methodical:
- Asks clarifying questions before making assumptions
- Thinks through edge cases and architectural implications
- Never rushes planning - quality over speed
- References lessons learned proactively
- Suggests appropriate labels based on context
- Creates well-structured, detailed issues
## Example Planning Session
```
User: I want to plan a sprint for user authentication
Planner: Great! Let me ask a few questions to understand the scope:
1. What authentication method are you planning? (JWT, OAuth, session-based?)
2. Are there any specific security requirements or compliance needs?
3. Should this integrate with existing user management?
4. What's the priority level for this sprint?
Let me also search for relevant lessons learned about authentication...
[Uses search_lessons to find past authentication work]
Based on previous experience, I found these relevant lessons:
- Sprint 12: JWT token expiration handling edge cases
- Sprint 8: OAuth integration pitfalls with third-party providers
Now, let me analyze the architecture...
[Creates issues with appropriate labels]
Created 5 issues for the authentication sprint:
- Issue #45: Implement JWT token generation [Type/Feature, Priority/High, Component/Auth, Tech/Python]
- Issue #46: Build user login endpoint [Type/Feature, Priority/High, Component/API, Tech/FastAPI]
...
```
## Getting Started
Invoke the planner agent by providing your sprint goals. The agent will guide you through the planning process.
**Example:**
> "I want to plan a sprint for extracting the Intuit Engine service from the monolith"
The planner will then ask clarifying questions and guide you through the complete planning workflow.


@@ -0,0 +1,162 @@
---
name: sprint-start
description: Begin sprint execution with relevant lessons learned from previous sprints
agent: orchestrator
---
# Start Sprint Execution
You are initiating sprint execution. The orchestrator agent will coordinate the work, search for relevant lessons learned, and guide you through the implementation process.
## Branch Detection
**CRITICAL:** Before proceeding, check the current git branch:
```bash
git branch --show-current
```
**Branch Requirements:**
- ✅ **Development branches** (`development`, `develop`, `feat/*`, `dev/*`): Full execution capabilities
- ⚠️ **Staging branches** (`staging`, `stage/*`): Can create issues to document bugs, but cannot modify code
- 🚫 **Production branches** (`main`, `master`, `prod/*`): READ-ONLY - no execution allowed
If you are on a production or staging branch, you MUST stop and ask the user to switch to a development branch.
## Sprint Start Workflow
The orchestrator agent will:
1. **Review Sprint Issues**
- Use `list_issues` to fetch open issues for the sprint
- Identify priorities based on labels (Priority/Critical, Priority/High, etc.)
- Understand dependencies between issues
2. **Search Relevant Lessons Learned**
- Use `search_lessons` to find experiences from past sprints
- Search by tags matching the current sprint's technology and components
- Review patterns, gotchas, and preventable mistakes
- Present relevant lessons before starting work
3. **Identify Next Task**
   - Select the highest-priority task that is unblocked (see the sketch after this list)
- Review task details and acceptance criteria
- Check for dependencies
4. **Generate Lean Execution Prompt**
- Create concise implementation guidance (NOT full planning docs)
- Reference architectural decisions from planning phase
- Highlight relevant lessons learned
- Provide clear acceptance criteria
5. **Track Progress**
- Update issue status as work progresses
- Use `add_comment` to document progress and blockers
- Identify when tasks are blocked and need attention
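A minimal sketch of the task-selection step; the issue dictionary shape is an assumption about what `list_issues` returns:
```python
# Sketch of next-task selection over list_issues output; the issue dict
# shape ({"number", "title", "labels", "blocked"}) is an assumption.
PRIORITY_ORDER = ["Priority/Critical", "Priority/High", "Priority/Medium", "Priority/Low"]

def priority_rank(issue: dict) -> int:
    for rank, label in enumerate(PRIORITY_ORDER):
        if label in issue["labels"]:
            return rank
    return len(PRIORITY_ORDER)

def next_task(issues: list[dict]) -> dict | None:
    """Return the highest-priority issue that is not blocked, if any."""
    candidates = [issue for issue in issues if not issue.get("blocked")]
    return min(candidates, key=priority_rank) if candidates else None

# issues = list_issues(state="open")   # mark blocked issues before selecting
```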
## MCP Tools Available
**Gitea Tools:**
- `list_issues` - Fetch sprint issues (filter by state, labels, milestone)
- `get_issue` - Get detailed issue information
- `update_issue` - Update issue status, assignee, labels
- `add_comment` - Add progress updates or blocker notes
**Wiki.js Tools:**
- `search_lessons` - Find relevant lessons from past sprints
- `search_pages` - Search project documentation for references
- `get_page` - Fetch specific documentation (e.g., architecture decisions)
## Orchestrator Personality
The orchestrator agent is concise and action-oriented:
- Generates lean execution prompts, not lengthy documents
- Tracks details meticulously (no task forgotten)
- Identifies blockers proactively
- Coordinates Git operations (commit, merge, cleanup)
- Manages task dependencies
- Updates documentation as work progresses
## Example Sprint Start Session
```
User: /sprint-start
Orchestrator: Starting sprint execution. Let me review open issues and relevant lessons...
[Uses list_issues to fetch sprint backlog]
Found 8 open issues for this sprint:
- Issue #45: Implement JWT token generation [Priority/High, Component/Auth]
- Issue #46: Build user login endpoint [Priority/High, Component/API]
- Issue #47: Create user registration form [Priority/Medium, Component/Frontend]
...
[Uses search_lessons to find relevant past experiences]
Relevant lessons learned:
- Sprint 12: "JWT Token Expiration Edge Cases" - Remember to handle token refresh
- Sprint 8: "OAuth Integration Pitfalls" - Test error handling for auth providers
[Identifies next task based on priority and dependencies]
Next task: Issue #45 - Implement JWT token generation
This is unblocked and high priority.
Execution prompt:
- Create JWT token generation utility in backend/auth/jwt.py
- Use HS256 algorithm with secret from environment variable
- Include user_id, email, and expiration in payload
- Add token refresh logic (remember lesson from Sprint 12!)
- Write unit tests for token generation and validation
Would you like me to invoke the executor agent for implementation guidance?
```
## Lessons Learned Integration
The orchestrator actively searches for and presents relevant lessons before starting work:
**Search by Technology:**
```
search_lessons(tags="python,fastapi,jwt")
```
**Search by Component:**
```
search_lessons(tags="authentication,api,backend")
```
**Search by Keywords:**
```
search_lessons(query="token expiration edge cases")
```
## Progress Tracking
As work progresses, the orchestrator updates Gitea:
**Add Progress Comment:**
```
add_comment(issue_number=45, body="JWT generation implemented. Running tests now.")
```
**Update Issue Status:**
```
update_issue(issue_number=45, state="closed")
```
**Document Blockers:**
```
add_comment(issue_number=46, body="Blocked: Waiting for auth database schema migration")
```
## Getting Started
Simply invoke `/sprint-start` and the orchestrator will:
1. Review your sprint backlog
2. Search for relevant lessons
3. Identify the next task to work on
4. Provide lean execution guidance
5. Track progress as you work
The orchestrator keeps you focused and ensures nothing is forgotten.


@@ -0,0 +1,120 @@
---
name: sprint-status
description: Check current sprint progress and identify blockers
---
# Sprint Status Check
This command provides a quick overview of your current sprint progress, including open issues, completed work, and potential blockers.
## What This Command Does
1. **Fetch Sprint Issues** - Lists all issues with current sprint labels/milestone
2. **Categorize by Status** - Groups issues into: Open, In Progress, Blocked, Completed
3. **Identify Blockers** - Highlights issues with blocker comments or dependencies
4. **Show Progress Summary** - Provides completion percentage and velocity insights
5. **Highlight Priorities** - Shows critical and high-priority items needing attention
## Usage
Simply run `/sprint-status` to get a comprehensive sprint overview.
## MCP Tools Used
This command uses the following Gitea MCP tools:
- `list_issues(state="open")` - Fetch open issues
- `list_issues(state="closed")` - Fetch completed issues
- `get_issue(number)` - Get detailed issue information for blockers
## Expected Output
```
Sprint Status Report
====================
Sprint: Sprint 16 - Authentication System
Date: 2025-01-18
Progress Summary:
- Total Issues: 8
- Completed: 3 (37.5%)
- In Progress: 2 (25%)
- Open: 2 (25%)
- Blocked: 1 (12.5%)
Completed Issues (3):
✅ #45: Implement JWT token generation [Type/Feature, Priority/High]
✅ #46: Build user login endpoint [Type/Feature, Priority/High]
✅ #48: Write authentication tests [Type/Test, Priority/Medium]
In Progress (2):
🔄 #47: Create user registration form [Type/Feature, Priority/Medium]
🔄 #49: Add password reset flow [Type/Feature, Priority/Low]
Open Issues (2):
📋 #50: Integrate OAuth providers [Type/Feature, Priority/Low]
📋 #51: Add email verification [Type/Feature, Priority/Medium]
Blocked Issues (1):
🚫 #52: Deploy auth service [Type/Deploy, Priority/High]
Blocker: Waiting for database migration approval
Priority Alerts:
⚠️ 1 high-priority item blocked: #52
✅ All critical items completed
Recommendations:
1. Focus on unblocking #52 (Deploy auth service)
2. Continue work on #47 (User registration form)
3. Consider starting #51 (Email verification) next
```
## Filtering Options
You can optionally filter the status check:
**By Label:**
```
Show only high-priority issues:
list_issues(labels=["Priority/High"])
```
**By Milestone:**
```
Show issues for specific sprint:
list_issues(milestone="Sprint 16")
```
**By Component:**
```
Show only backend issues:
list_issues(labels=["Component/Backend"])
```
## Blocker Detection
The command identifies blocked issues with the following checks (a sketch follows this list):
1. Checking issue comments for keywords: "blocked", "blocker", "waiting for", "dependency"
2. Looking for issues with no recent activity (>7 days)
3. Identifying issues with unresolved dependencies
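A minimal sketch of these heuristics; how comments and timestamps are obtained from `get_issue` is an assumption:
```python
# Sketch of the blocker heuristics above; comment text and the last-activity
# timestamp are assumed to come from get_issue, whose response may differ.
from datetime import datetime, timedelta, timezone

BLOCKER_KEYWORDS = ("blocked", "blocker", "waiting for", "dependency")

def looks_blocked(comments: list[str], last_activity: datetime) -> bool:
    has_keyword = any(
        keyword in comment.lower()
        for comment in comments
        for keyword in BLOCKER_KEYWORDS
    )
    stale = datetime.now(timezone.utc) - last_activity > timedelta(days=7)
    return has_keyword or stale
```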
## When to Use
Run `/sprint-status` when you want to:
- Start your day and see what needs attention
- Prepare for standup meetings
- Check if the sprint is on track
- Identify bottlenecks or blockers
- Decide what to work on next
## Integration with Other Commands
- Use `/sprint-start` to begin working on identified tasks
- Use `/sprint-close` when all issues are completed
- Use `/sprint-plan` to adjust scope if blocked items can't be unblocked
## Example Usage
```
User: /sprint-status