
---
name: lessons-learned
description: Capture and search workflow for lessons learned system
---

# Lessons Learned System

## Purpose

Defines the workflow for capturing lessons at sprint close and searching them at sprint start/plan.

## When to Use

- **Planner agent:** search lessons at sprint start
- **Orchestrator agent:** capture lessons at sprint close
- **Commands:** `/sprint plan`, `/sprint start`, `/sprint close`

## Searching Lessons (Sprint Start/Plan)

**ALWAYS** search for past lessons before planning or executing.

```python
search_lessons(
    repo="org/repo",
    query="relevant keywords",
    tags=["technology", "component"],
    limit=10
)
```

Present findings:

```
Relevant lessons from previous sprints:

📚 Sprint 12: "JWT Token Expiration Edge Cases"
   Tags: auth, jwt, python
   Key lesson: Handle token refresh explicitly

📚 Sprint 8: "Service Extraction Boundaries"
   Tags: architecture, refactoring
   Key lesson: Define API contracts BEFORE extracting
```
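For intuition, the tag-and-keyword filtering implied by a `search_lessons`-style call can be sketched in plain Python. This is an in-memory stand-in, not the actual tool: the function signature and the lesson dict shape (`title`, `content`, `tags`) are assumptions for illustration only.

```python
def search_lessons_local(lessons, query, tags=None, limit=10):
    """Return up to `limit` lessons whose title/content contain a query
    keyword, or whose tags overlap the requested tags (hypothetical sketch)."""
    keywords = {word.lower() for word in query.split()}
    wanted = set(tags or [])
    hits = []
    for lesson in lessons:
        text = (lesson["title"] + " " + lesson["content"]).lower()
        keyword_hit = any(keyword in text for keyword in keywords)
        tag_hit = bool(wanted & set(lesson.get("tags", [])))
        if keyword_hit or tag_hit:
            hits.append(lesson)
    return hits[:limit]
```

The sketch matches on either dimension (keywords or tags) so a lesson tagged `auth` surfaces even when its title uses different wording than the query.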

## Capturing Lessons (Sprint Close)

### Interview Questions

Ask these probing questions:

  1. What challenges did you face this sprint?
  2. What worked well and should be repeated?
  3. Were there any preventable mistakes?
  4. Did any technical decisions need adjustment?
  5. What would you do differently?

### Lesson Structure

```markdown
# Sprint N - [Lesson Title]

## Metadata
- **Implementation:** [Change VXX.X.X (Impl N)](wiki-link)
- **Issues:** #XX, #XX
- **Sprint:** Sprint N

## Context
[What were you doing?]

## Problem
[What went wrong / insight / challenge?]

## Solution
[How did you solve it?]

## Prevention
[How to avoid in future?]

## Tags
technology, component, type, pattern
```

### Creating Lesson

```python
create_lesson(
    repo="org/repo",
    title="Sprint N - Lesson Title",
    content="[structured content above]",
    tags=["tag1", "tag2"],
    category="sprints"
)
```
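The `content` argument can be assembled from the template fields. A minimal sketch of such a helper follows; the function name and parameters are hypothetical, not part of the `create_lesson` tool API.

```python
def format_lesson(sprint, title, context, problem, solution, prevention,
                  tags, implementation="", issues=""):
    """Assemble lesson markdown following the Lesson Structure template
    (hypothetical helper, illustrative only)."""
    return "\n".join([
        f"# Sprint {sprint} - {title}",
        "",
        "## Metadata",
        f"- **Implementation:** {implementation}",
        f"- **Issues:** {issues}",
        f"- **Sprint:** Sprint {sprint}",
        "",
        "## Context",
        context,
        "",
        "## Problem",
        problem,
        "",
        "## Solution",
        solution,
        "",
        "## Prevention",
        prevention,
        "",
        "## Tags",
        ", ".join(tags),
    ])
```

Keeping the assembly in one place makes it harder for captured lessons to drift from the template, which in turn keeps them searchable by section.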

## Tagging Strategy

**By Technology:** python, javascript, docker, postgresql, redis, vue, fastapi

**By Component:** backend, frontend, api, database, auth, deploy, testing, docs

**By Type:** bug, feature, refactor, architecture, performance, security

**By Pattern:** infinite-loop, edge-case, integration, boundaries, dependencies
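As a sketch, tags could be checked against this vocabulary before calling `create_lesson`, flagging anything outside the four categories for review. The dict below just mirrors the lists above; `classify_tags` itself is a hypothetical helper, not part of the system.

```python
# Hypothetical vocabulary mirroring the four tag categories above.
KNOWN_TAGS = {
    "technology": {"python", "javascript", "docker", "postgresql",
                   "redis", "vue", "fastapi"},
    "component": {"backend", "frontend", "api", "database", "auth",
                  "deploy", "testing", "docs"},
    "type": {"bug", "feature", "refactor", "architecture",
             "performance", "security"},
    "pattern": {"infinite-loop", "edge-case", "integration",
                "boundaries", "dependencies"},
}

def classify_tags(tags):
    """Bucket tags by category; unrecognized tags land in 'unknown'."""
    result = {category: [] for category in KNOWN_TAGS}
    result["unknown"] = []
    for tag in tags:
        for category, vocab in KNOWN_TAGS.items():
            if tag in vocab:
                result[category].append(tag)
                break
        else:
            result["unknown"].append(tag)
    return result
```

Unknown tags are not necessarily wrong (the example lesson below uses `claude-code`, which is outside these lists); the point is to surface them deliberately rather than let the vocabulary fragment silently.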


## Example Lessons

### Technical Gotcha

```markdown
# Sprint 16 - Claude Code Infinite Loop on Validation Errors

## Metadata
- **Implementation:** [Change V1.2.0 (Impl 1)](wiki-link)
- **Issues:** #45, #46
- **Sprint:** Sprint 16

## Context
Implementing input validation for authentication API.

## Problem
Claude Code entered infinite loop when pytest validation tests failed.
The loop occurred because error messages didn't change between attempts.

## Solution
Added more descriptive error messages specifying what value failed and why.

## Prevention
- Write validation test errors with specific values
- If Claude loops, check if errors provide unique information
- Add loop detection (fail after 3 identical errors)

## Tags
testing, claude-code, validation, python, pytest
```
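The loop-detection mitigation under Prevention can be sketched as a small guard that trips after N identical consecutive error messages. This is a hypothetical starting point, not plugin code.

```python
from collections import deque

class LoopDetector:
    """Flag a loop after `max_repeats` identical consecutive errors
    (hypothetical sketch of the Prevention idea above)."""

    def __init__(self, max_repeats=3):
        self.max_repeats = max_repeats
        # Sliding window of the most recent error messages.
        self.recent = deque(maxlen=max_repeats)

    def observe(self, error_message):
        """Record an error; return True once the window holds
        max_repeats identical messages."""
        self.recent.append(error_message)
        return (len(self.recent) == self.max_repeats
                and len(set(self.recent)) == 1)
```

Comparing whole messages is deliberately strict: any new detail in the error (a different value, a different line) resets the detection, which matches the lesson's fix of making errors carry unique information.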