Merge pull request 'feat: enhance debug commands with sprint awareness and lessons learned' (#107) from feat/debug-commands-enhancements into development

Reviewed-on: #107
This commit was merged in pull request #107.
2026-01-22 23:03:34 +00:00
4 changed files with 279 additions and 10 deletions

View File

@@ -6,7 +6,7 @@
"hooks": [ "hooks": [
{ {
"type": "prompt", "type": "prompt",
"prompt": "[code-sentinel] SECURITY CHECK - Before writing this code, scan for these patterns:\n\n**Critical (BLOCK if found):**\n- eval(), exec() with user input\n- SQL string concatenation (SQL injection)\n- shell=True with user input (command injection)\n- Hardcoded secrets (API keys, passwords, tokens)\n- Pickle/marshal deserialization of untrusted data\n- innerHTML/dangerouslySetInnerHTML with user content (XSS)\n\n**Warning (WARN but allow):**\n- subprocess without input validation\n- File operations without path sanitization\n- HTTP requests without timeout\n- Broad exception catches (except:)\n- Debug/print statements with sensitive data\n\n**Response:**\n- If CRITICAL found: STOP with '[code-sentinel] BLOCKED:', explain the issue, suggest safe alternative\n- If WARNING found: Note briefly with '[code-sentinel] WARNING:', proceed with suggestion\n- If clean: Proceed silently (say nothing)\n\nDo NOT announce clean scans. Only speak if issues found." "prompt": "[code-sentinel] SECURITY CHECK for CODE files only:\n\nSKIP entirely for: *.md, *.json, *.yml, *.yaml, *.txt, README, CHANGELOG, LICENSE, docs/*\n\nFor CODE files (.py, .js, .ts, .sh, etc.), check for:\n- eval()/exec() with unsanitized user input\n- SQL string concatenation with user data\n- shell=True with user input\n- Hardcoded real secrets (actual keys/passwords, not placeholders)\n\nResponse:\n- Security issue in code: '[code-sentinel] BLOCKED: {10 words max}'\n- Otherwise: proceed silently (say nothing)\n\nNEVER block docs/config. NEVER do lengthy analysis."
} }
] ]
} }

View File

@@ -6,7 +6,7 @@
"hooks": [ "hooks": [
{ {
"type": "prompt", "type": "prompt",
"prompt": "[doc-guardian] QUICK drift check (DO NOT block workflow):\n\n1. ONLY check if the modified file is referenced in README.md, CLAUDE.md, or API docs in the SAME directory\n2. Do NOT read files or perform deep analysis - just note potential drift based on file name/path\n3. If potential drift: output a single line like '[doc-guardian] Note: {filename} changed - may affect {doc}. Run /doc-sync to verify.'\n4. If no obvious drift: say nothing\n\nIMPORTANT: This is notification-only. Do NOT read documentation files, do NOT make changes, do NOT use any tools. Just a quick mental check based on the file path." "prompt": "[doc-guardian] NON-BLOCKING drift note:\n\nLook at the file path. If it might affect documentation, output ONLY:\n'[doc-guardian] {filename} modified - consider /doc-sync later'\n\nOtherwise say nothing. Do NOT analyze, do NOT read files, do NOT stop working. Maximum 15 words, then continue immediately."
} }
] ]
} }

View File

@@ -43,6 +43,46 @@ Store all values:
- `CURRENT_BRANCH`: Current branch name
- `WORKING_DIR`: Current working directory
### Step 1.5: Detect Sprint Context
Determine if this debug issue should be associated with an active sprint.
**1. Check for active sprint milestone:**
```
mcp__plugin_projman_gitea__list_milestones(repo=PROJECT_REPO, state="open")
```
Store the first open milestone as `ACTIVE_SPRINT` (if any).
**2. Analyze branch context:**
| Branch Pattern | Context |
|----------------|---------|
| `feat/*`, `fix/*`, `issue-*` | Sprint work - likely related to current sprint |
| `main`, `master`, `development` | Production/standalone - not sprint-related |
| Other | Unknown - ask user |
**3. Determine sprint association:**
```
IF ACTIVE_SPRINT exists AND CURRENT_BRANCH matches sprint pattern (feat/*, fix/*, issue-*):
→ SPRINT_CONTEXT = "detected"
→ Ask user: "Active sprint detected: [SPRINT_NAME]. Is this bug related to sprint work?"
Options:
- Yes, add to sprint (will associate with milestone)
- No, standalone fix (no milestone)
→ Store choice as ASSOCIATE_WITH_SPRINT (true/false)
ELSE IF ACTIVE_SPRINT exists AND CURRENT_BRANCH is main/development:
→ SPRINT_CONTEXT = "production"
→ ASSOCIATE_WITH_SPRINT = false (standalone fix, no question needed)
ELSE:
→ SPRINT_CONTEXT = "none"
→ ASSOCIATE_WITH_SPRINT = false
```
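Expressed as code, the decision logic above might look like the following sketch (illustrative Python; `fnmatch` handles the branch globs, and the function and variable names are assumptions, not part of the plugin):
```python
import fnmatch

SPRINT_PATTERNS = ["feat/*", "fix/*", "issue-*"]          # branches that imply sprint work
PRODUCTION_BRANCHES = {"main", "master", "development"}   # standalone-fix branches

def detect_sprint_context(current_branch, open_milestones):
    """Return (SPRINT_CONTEXT, should_ask_user) per the decision table above."""
    active_sprint = open_milestones[0] if open_milestones else None
    on_sprint_branch = any(fnmatch.fnmatch(current_branch, p) for p in SPRINT_PATTERNS)
    if active_sprint and on_sprint_branch:
        return "detected", True        # ask whether to associate with ACTIVE_SPRINT
    if active_sprint and current_branch in PRODUCTION_BRANCHES:
        return "production", False     # standalone fix, no question needed
    return "none", False               # no active sprint or unrecognized branch
```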
### Step 2: Read Marketplace Configuration
```bash
@@ -105,7 +145,42 @@ Count failures and categorize errors:
For each failure, write a hypothesis about the likely cause.
-### Step 5: Generate Issue Content
+### Step 5: Generate Smart Labels
Generate appropriate labels based on the diagnostic results.
**1. Build context string for label suggestion:**
```
LABEL_CONTEXT = "Bug fix: " + [summary of main failure] + ". " +
"Failed tools: " + [list of failed tool names] + ". " +
"Error category: " + [detected error category from Step 4]
```
**2. Get suggested labels:**
```
mcp__plugin_projman_gitea__suggest_labels(
repo=PROJECT_REPO,
context=LABEL_CONTEXT
)
```
**3. Merge with base labels:**
```
BASE_LABELS = ["Type: Bug", "Source: Diagnostic", "Agent: Claude"]
SUGGESTED_LABELS = [result from suggest_labels]
# Combine, avoiding duplicates
FINAL_LABELS = BASE_LABELS + [label for label in SUGGESTED_LABELS if label not in BASE_LABELS]
```
The final label set should include:
- **Always**: `Type: Bug`, `Source: Diagnostic`, `Agent: Claude`
- **If detected**: `Component: *`, `Complexity: *`, `Risk: *`, `Priority: *`
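A minimal sketch of the duplicate-free merge (plain Python; the suggested labels in the example are invented for illustration):
```python
def merge_labels(base_labels, suggested_labels):
    """Combine base and suggested labels, preserving order and dropping duplicates."""
    merged = list(base_labels)
    for label in suggested_labels:
        if label not in merged:
            merged.append(label)
    return merged

# Example:
# merge_labels(["Type: Bug", "Source: Diagnostic", "Agent: Claude"],
#              ["Type: Bug", "Component: MCP", "Priority: High"])
# -> ["Type: Bug", "Source: Diagnostic", "Agent: Claude", "Component: MCP", "Priority: High"]
```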
### Step 6: Generate Issue Content
Use this exact template:
@@ -182,22 +257,36 @@ Use this exact template:
*Generated by /debug-report - Labels: Type: Bug, Source: Diagnostic, Agent: Claude*
```
-### Step 6: Create Issue in Marketplace
+### Step 7: Create Issue in Marketplace
**First, check if MCP tools are available.** Attempt to use an MCP tool. If you receive "tool not found", "not in function list", or similar error, the MCP server is not accessible in this session - use the curl fallback.
#### Option A: MCP Available (preferred)
**If ASSOCIATE_WITH_SPRINT is true:**
```
mcp__plugin_projman_gitea__create_issue(
repo=MARKETPLACE_REPO,
title="[Diagnostic] [summary of main failure]",
-body=[generated content from Step 5],
-labels=["Type: Bug", "Source: Diagnostic", "Agent: Claude"]
+body=[generated content from Step 6],
+labels=FINAL_LABELS,
milestone=ACTIVE_SPRINT.id
)
```
-If labels don't exist, create issue without labels.
+**If ASSOCIATE_WITH_SPRINT is false (standalone fix):**
```
mcp__plugin_projman_gitea__create_issue(
repo=MARKETPLACE_REPO,
title="[Diagnostic] [summary of main failure]",
body=[generated content from Step 6],
labels=FINAL_LABELS
)
```
If some labels don't exist, create issue with available labels only.
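The two calls differ only in the `milestone` argument, so the arguments can be assembled once; a hedged sketch (the helper and the dict-shaped milestone are assumptions, not the plugin API):
```python
def build_create_issue_args(repo, title, body, labels, associate_with_sprint, active_sprint):
    """Assemble create_issue arguments; include milestone only for sprint-associated fixes."""
    args = {"repo": repo, "title": title, "body": body, "labels": labels}
    if associate_with_sprint and active_sprint is not None:
        args["milestone"] = active_sprint["id"]   # attach to the active sprint milestone
    return args
```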
#### Option B: MCP Unavailable - Use curl Fallback
@@ -274,7 +363,7 @@ To create the issue manually:
2. Or create issue directly at: http://gitea.hotserv.cloud/[MARKETPLACE_REPO]/issues/new
```
-### Step 7: Report to User
+### Step 8: Report to User
Display summary:

View File

@@ -195,6 +195,74 @@ Does this analysis match your understanding of the problem?
Do NOT proceed until user approves.
### Step 9.5: Search Lessons Learned
Before proposing a fix, search for relevant lessons from past fixes.
**1. Extract search tags from the issue:**
```
SEARCH_TAGS = []
# Add tool names
for each failed_tool in issue:
SEARCH_TAGS.append(tool_name) # e.g., "get_labels", "validate_repo_org"
# Add error category
SEARCH_TAGS.append(error_category) # e.g., "parameter-format", "authentication"
# Add component if identifiable
if error relates to MCP server:
SEARCH_TAGS.append("mcp")
if error relates to command:
SEARCH_TAGS.append("command")
```
**2. Search lessons learned:**
```
mcp__plugin_projman_gitea__search_lessons(
repo=REPO_NAME,
tags=SEARCH_TAGS,
limit=5
)
```
**3. Also search by error keywords:**
```
mcp__plugin_projman_gitea__search_lessons(
repo=REPO_NAME,
query=[key error message words],
limit=5
)
```
**4. Display relevant lessons (if any):**
```
Related Lessons Learned
=======================
Found [N] relevant lessons from past fixes:
📚 Lesson: "Sprint 14 - Parameter validation in MCP tools"
Tags: mcp, get_labels, parameter-format
Summary: Always validate repo parameter format before API calls
Prevention: Add format check at function entry
📚 Lesson: "Sprint 12 - Graceful fallback for missing config"
Tags: configuration, fallback
Summary: Commands should work even without .env
Prevention: Check for env vars, use sensible defaults
These lessons may inform your fix approach.
```
If no lessons found, display:
```
No related lessons found. This may be a new type of issue.
```
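Since both searches can return overlapping results, a small merge step keeps the display list short; an illustrative sketch (the two search callables stand in for the MCP calls above, and the `title` result field is an assumption):
```python
def gather_lessons(search_by_tags, search_by_query, tags, query_terms, limit=5):
    """Run both lesson searches and merge results, dropping duplicates by title."""
    results = list(search_by_tags(tags=tags, limit=limit))
    results += list(search_by_query(query=query_terms, limit=limit))
    seen, merged = set(), []
    for lesson in results:
        title = lesson.get("title")
        if title not in seen:
            seen.add(title)
            merged.append(lesson)
    return merged[:limit]
```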
### Step 10: Propose Fix Approach
Based on the analysis, propose a specific fix:
@@ -342,7 +410,118 @@ Next Steps:
1. Review and merge PR #81
2. In test project, pull latest plugin version
3. Run /debug-report to verify fix
-4. If passing, close issue #80
+4. Come back and run Step 15 to close issue and capture lesson
```
### Step 15: Verify, Close, and Capture Lesson
**This step runs AFTER the user has verified the fix works.**
When user returns and confirms the fix is working:
**1. Close the issue:**
```
mcp__plugin_projman_gitea__update_issue(
repo=REPO_NAME,
issue_number=ISSUE_NUMBER,
state="closed"
)
```
**2. Ask about lesson capture:**
Use AskUserQuestion:
```
This fix addressed [ERROR_TYPE] in [COMPONENT].
Would you like to capture this as a lesson learned?
Options:
- Yes, capture lesson (helps avoid similar issues in future)
- No, skip (trivial fix or already documented)
```
**3. If user chooses Yes, auto-generate lesson content:**
```
LESSON_TITLE = "Sprint [N] - [Brief description of fix]"
# Example: "Sprint 17 - MCP parameter validation"
LESSON_CONTENT = """
## Context
[What was happening when the issue occurred]
- Command/tool being used: [FAILED_TOOL]
- Error encountered: [ERROR_MESSAGE]
## Problem
[Root cause identified during investigation]
## Solution
[What was changed to fix it]
- Files modified: [LIST]
- PR: #[PR_NUMBER]
## Prevention
[How to avoid this in the future]
## Related
- Issue: #[ISSUE_NUMBER]
- PR: #[PR_NUMBER]
"""
LESSON_TAGS = [
tool_name, # e.g., "get_labels"
error_category, # e.g., "parameter-format"
component, # e.g., "mcp", "command"
"bug-fix"
]
```
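The title and tags can be derived mechanically from the diagnostic context; a sketch under the same assumptions (all inputs come from earlier steps, and empty values are dropped):
```python
def build_lesson_metadata(sprint_number, fix_summary, failed_tool, error_category, component):
    """Derive LESSON_TITLE and LESSON_TAGS from the diagnostic context."""
    title = f"Sprint {sprint_number} - {fix_summary}"          # e.g. "Sprint 17 - MCP parameter validation"
    tags = [failed_tool, error_category, component, "bug-fix"]
    return title, [tag for tag in tags if tag]                 # drop missing/empty values
```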
**4. Show lesson preview and ask for approval:**
```
Lesson Preview
==============
Title: [LESSON_TITLE]
Tags: [LESSON_TAGS]
Content:
[LESSON_CONTENT]
Save this lesson? (Y/N/Edit)
```
**5. If approved, create the lesson:**
```
mcp__plugin_projman_gitea__create_lesson(
repo=REPO_NAME,
title=LESSON_TITLE,
content=LESSON_CONTENT,
tags=LESSON_TAGS,
category="sprints"
)
```
**6. Report completion:**
```
Issue Closed & Lesson Captured
==============================
Issue #[N]: CLOSED
Lesson: "[LESSON_TITLE]" saved to wiki
This lesson will be surfaced in future /debug-review
sessions when similar errors are encountered.
```
## DO NOT
@@ -350,8 +529,9 @@ Next Steps:
- **DO NOT** skip reading relevant files - this is MANDATORY
- **DO NOT** proceed past approval gates without user confirmation
- **DO NOT** guess at fixes without evidence from code
-- **DO NOT** close issues - let user verify fix works first
+- **DO NOT** close issues until user confirms fix works (Step 15)
- **DO NOT** commit directly to development or main branches
- **DO NOT** skip the lessons learned search - past fixes inform better solutions
## If Investigation Finds No Bug