---
name: lessons-learned
description: Capture and search workflow for lessons learned system
---
# Lessons Learned System

## Purpose
Defines the workflow for capturing lessons at sprint close and searching them at sprint start/plan.
## When to Use
- Planner agent: Search lessons at sprint start
- Orchestrator agent: Capture lessons at sprint close
- Commands: `/sprint-plan`, `/sprint-start`, `/sprint-close`
## Searching Lessons (Sprint Start/Plan)
ALWAYS search for past lessons before planning or executing.
```
search_lessons(
  repo="org/repo",
  query="relevant keywords",
  tags=["technology", "component"],
  limit=10
)
```
Present findings:
```
Relevant lessons from previous sprints:

📚 Sprint 12: "JWT Token Expiration Edge Cases"
   Tags: auth, jwt, python
   Key lesson: Handle token refresh explicitly

📚 Sprint 8: "Service Extraction Boundaries"
   Tags: architecture, refactoring
   Key lesson: Define API contracts BEFORE extracting
```
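As a rough sketch only, here is one way an agent-side helper could turn `search_lessons` results into the presentation above. The list-of-dicts return shape and the `sprint`, `title`, `tags`, and `key_lesson` field names are assumptions for illustration, not a documented contract of the MCP tool.

```python
# Sketch: render search_lessons results in the presentation format above.
# The field names below are assumed, not guaranteed by the MCP server.
def format_lessons(lessons: list[dict]) -> str:
    if not lessons:
        return "No relevant lessons found in previous sprints."
    lines = ["Relevant lessons from previous sprints:", ""]
    for lesson in lessons:
        lines.append(f'📚 Sprint {lesson["sprint"]}: "{lesson["title"]}"')
        lines.append(f'   Tags: {", ".join(lesson["tags"])}')
        lines.append(f'   Key lesson: {lesson["key_lesson"]}')
        lines.append("")
    return "\n".join(lines).rstrip()

print(format_lessons([{
    "sprint": 12,
    "title": "JWT Token Expiration Edge Cases",
    "tags": ["auth", "jwt", "python"],
    "key_lesson": "Handle token refresh explicitly",
}]))
```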
## Capturing Lessons (Sprint Close)

### Interview Questions
Ask these probing questions:
- What challenges did you face this sprint?
- What worked well and should be repeated?
- Were there any preventable mistakes?
- Did any technical decisions need adjustment?
- What would you do differently?
### Lesson Structure
```markdown
# Sprint N - [Lesson Title]

## Metadata
- **Implementation:** [Change VXX.X.X (Impl N)](wiki-link)
- **Issues:** #XX, #XX
- **Sprint:** Sprint N

## Context
[What were you doing?]

## Problem
[What went wrong / insight / challenge?]

## Solution
[How did you solve it?]

## Prevention
[How to avoid in future?]

## Tags
technology, component, type, pattern
```
### Creating the Lesson
```
create_lesson(
  repo="org/repo",
  title="Sprint N - Lesson Title",
  content="[structured content above]",
  tags=["tag1", "tag2"],
  category="sprints"
)
```
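For illustration, here is a minimal sketch of assembling the structured content that gets passed as `content=`. The `build_lesson_content` helper and its parameters are hypothetical; only the `create_lesson` call above comes from this skill.

```python
# Hypothetical helper: fill the Lesson Structure template from interview answers.
def build_lesson_content(sprint: int, title: str, impl_link: str, issues: list[str],
                         context: str, problem: str, solution: str,
                         prevention: list[str], tags: list[str]) -> str:
    prevention_lines = "\n".join(f"- {item}" for item in prevention)
    return f"""# Sprint {sprint} - {title}

## Metadata
- **Implementation:** {impl_link}
- **Issues:** {', '.join(issues)}
- **Sprint:** Sprint {sprint}

## Context
{context}

## Problem
{problem}

## Solution
{solution}

## Prevention
{prevention_lines}

## Tags
{', '.join(tags)}
"""
```

The returned string is what you would pass as `content=`, with `title` and `tags` repeated in the call's own parameters.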
## Tagging Strategy
- **By Technology:** python, javascript, docker, postgresql, redis, vue, fastapi
- **By Component:** backend, frontend, api, database, auth, deploy, testing, docs
- **By Type:** bug, feature, refactor, architecture, performance, security
- **By Pattern:** infinite-loop, edge-case, integration, boundaries, dependencies
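As a soft check only (the example lesson below adds tags such as `claude-code` that are not in these lists), the sketch below flags tags that fall outside the categories above. `TAG_CATEGORIES` and `report_unlisted_tags` are hypothetical names for illustration.

```python
# Hypothetical vocabulary mirroring the tagging strategy above.
TAG_CATEGORIES = {
    "technology": {"python", "javascript", "docker", "postgresql", "redis", "vue", "fastapi"},
    "component": {"backend", "frontend", "api", "database", "auth", "deploy", "testing", "docs"},
    "type": {"bug", "feature", "refactor", "architecture", "performance", "security"},
    "pattern": {"infinite-loop", "edge-case", "integration", "boundaries", "dependencies"},
}

def report_unlisted_tags(tags: list[str]) -> list[str]:
    """Return tags not in any listed category (new tags are allowed, just surfaced)."""
    known = set().union(*TAG_CATEGORIES.values())
    return [tag for tag in tags if tag not in known]

print(report_unlisted_tags(["python", "auth", "infinite-loop", "claude-code"]))
# ['claude-code'] -- a new tag, which is fine, but worth noticing.
```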
## Example Lessons

### Technical Gotcha
```markdown
# Sprint 16 - Claude Code Infinite Loop on Validation Errors

## Metadata
- **Implementation:** [Change V1.2.0 (Impl 1)](wiki-link)
- **Issues:** #45, #46
- **Sprint:** Sprint 16

## Context
Implementing input validation for the authentication API.

## Problem
Claude Code entered an infinite loop when pytest validation tests failed.
The loop occurred because error messages didn't change between attempts.

## Solution
Added more descriptive error messages specifying which value failed and why.

## Prevention
- Write validation test errors with specific values
- If Claude loops, check whether errors provide unique information
- Add loop detection (fail after 3 identical errors)

## Tags
testing, claude-code, validation, python, pytest
```
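Picking up the last Prevention bullet, here is a minimal sketch of "fail after 3 identical errors". The `LoopDetector` class is purely illustrative; nothing like it ships with this plugin.

```python
# Illustrative loop detector: abort when the same error message keeps repeating.
class LoopDetector:
    def __init__(self, max_repeats: int = 3):
        self.max_repeats = max_repeats
        self.last_error: str | None = None
        self.count = 0

    def record(self, error_message: str) -> None:
        """Raise after max_repeats identical error messages in a row."""
        if error_message == self.last_error:
            self.count += 1
        else:
            self.last_error = error_message
            self.count = 1
        if self.count >= self.max_repeats:
            raise RuntimeError(
                f"Same error seen {self.count} times in a row; "
                "stopping to avoid an infinite retry loop."
            )

detector = LoopDetector()
detector.record("ValidationError: value must be positive")
detector.record("ValidationError: value must be positive")
# A third identical record() call raises RuntimeError.
```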