development #240

Merged
lmiranda merged 29 commits from development into main 2026-01-28 16:02:47 +00:00
20 changed files with 2062 additions and 60 deletions

View File

@@ -8,6 +8,34 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
### Added
#### Sprint 3: Hooks (V5.2.0 Plugin Enhancements)
Implementation of 6 foundational hooks across 4 plugins.
**git-flow v1.1.0:**
- **Commit message enforcement hook** - PreToolUse hook validates conventional commit format on all `git commit` commands (not just `/commit`). Blocks invalid commits with format guidance.
- **Branch name validation hook** - PreToolUse hook validates branch naming on `git checkout -b` and `git switch -c`. Enforces `type/description` format, lowercase, max 50 chars.
**clarity-assist v1.1.0:**
- **Vagueness detection hook** - UserPromptSubmit hook detects vague prompts and suggests `/clarify` when ambiguity, missing context, or unclear scope is detected.
**data-platform v1.1.0:**
- **Schema diff detection hook** - PostToolUse hook monitors edits to schema files (dbt models, SQL migrations). Warns on breaking changes (column removal, type narrowing, constraint addition).
**contract-validator v1.1.0:**
- **SessionStart auto-validate hook** - Smart validation that only runs when plugin files changed since last check. Detects interface compatibility issues at session start.
- **Breaking change detection hook** - PostToolUse hook monitors plugin interface files (README.md, plugin.json). Warns when changes would break consumers.
**Sprint Completed:**
- Milestone: Sprint 3 - Hooks (closed 2026-01-28)
- Issues: #225, #226, #227, #228, #229, #230
- Wiki: [Change V5.2.0: Plugin Enhancements Proposal](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/Change-V5.2.0:-Plugin-Enhancements-Proposal)
- Lessons: Background agent permissions, agent runaway detection, MCP branch detection bug
### Known Issues
- **MCP Bug #231:** Branch detection in Gitea MCP runs from the installed plugin directory, not the user's project directory. Workaround: close issues via Gitea web UI.
---
#### Gitea MCP Server - create_pull_request Tool
- **`create_pull_request`**: Create new pull requests via MCP
- Parameters: title, body, head (source branch), base (target branch), labels

View File

@@ -7,6 +7,7 @@ Provides async wrappers for issue CRUD operations with:
- Comprehensive error handling
"""
import asyncio
import os
import subprocess
import logging
from typing import List, Dict, Optional
@@ -27,19 +28,34 @@ class IssueTools:
"""
self.gitea = gitea_client
def _get_project_directory(self) -> Optional[str]:
"""
Get the user's project directory from environment.
Returns:
Project directory path or None if not set
"""
return os.environ.get('CLAUDE_PROJECT_DIR')
def _get_current_branch(self) -> str:
"""
Get current git branch.
Get current git branch from user's project directory.
Uses CLAUDE_PROJECT_DIR environment variable to determine the correct
directory for git operations, avoiding the bug where git runs from
the installed plugin directory instead of the user's project.
Returns:
Current branch name or 'unknown' if not in a git repo
"""
try:
project_dir = self._get_project_directory()
result = subprocess.run(
['git', 'rev-parse', '--abbrev-ref', 'HEAD'],
capture_output=True,
text=True,
check=True
check=True,
cwd=project_dir # Run git in project directory, not plugin directory
)
return result.stdout.strip()
except subprocess.CalledProcessError:
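The same `cwd` fix is applied to `pr_tools.py` below. As a quick sanity check of the behavior the added `cwd=project_dir` relies on, `git -C <dir>` is equivalent to running git with that working directory (the project path here is hypothetical):

```bash
# Resolve the branch of the user's project, not the plugin install directory
export CLAUDE_PROJECT_DIR=/path/to/your/project
git -C "$CLAUDE_PROJECT_DIR" rev-parse --abbrev-ref HEAD
```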

View File

@@ -7,6 +7,7 @@ Provides async wrappers for PR operations with:
- Comprehensive error handling
"""
import asyncio
import os
import subprocess
import logging
from typing import List, Dict, Optional
@@ -27,19 +28,34 @@ class PullRequestTools:
"""
self.gitea = gitea_client
def _get_project_directory(self) -> Optional[str]:
"""
Get the user's project directory from environment.
Returns:
Project directory path or None if not set
"""
return os.environ.get('CLAUDE_PROJECT_DIR')
def _get_current_branch(self) -> str:
"""
Get current git branch.
Get current git branch from user's project directory.
Uses CLAUDE_PROJECT_DIR environment variable to determine the correct
directory for git operations, avoiding the bug where git runs from
the installed plugin directory instead of the user's project.
Returns:
Current branch name or 'unknown' if not in a git repo
"""
try:
project_dir = self._get_project_directory()
result = subprocess.run(
['git', 'rev-parse', '--abbrev-ref', 'HEAD'],
capture_output=True,
text=True,
check=True
check=True,
cwd=project_dir # Run git in project directory, not plugin directory
)
return result.stdout.strip()
except subprocess.CalledProcessError:

View File

@@ -0,0 +1,10 @@
{
"hooks": {
"UserPromptSubmit": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/vagueness-check.sh"
}
]
}
}

View File

@@ -0,0 +1,216 @@
#!/bin/bash
# clarity-assist vagueness detection hook
# Analyzes user prompts for vagueness and suggests /clarity-assist when beneficial
# All output MUST have [clarity-assist] prefix
# This is a NON-BLOCKING hook - always exits 0
PREFIX="[clarity-assist]"
# Check if auto-suggest is enabled (default: true)
AUTO_SUGGEST="${CLARITY_ASSIST_AUTO_SUGGEST:-true}"
if [[ "$AUTO_SUGGEST" != "true" ]]; then
exit 0
fi
# Threshold for vagueness score (default: 0.6)
THRESHOLD="${CLARITY_ASSIST_VAGUENESS_THRESHOLD:-0.6}"
# Read user prompt from stdin
PROMPT=""
if [[ -t 0 ]]; then
# No stdin available
exit 0
else
PROMPT=$(cat)
fi
# Skip empty prompts
if [[ -z "$PROMPT" ]]; then
exit 0
fi
# Skip if prompt is a command (starts with /)
if [[ "$PROMPT" =~ ^[[:space:]]*/[a-zA-Z] ]]; then
exit 0
fi
# Skip if prompt mentions specific files or paths
if [[ "$PROMPT" =~ \.(py|js|ts|sh|md|json|yaml|yml|txt|css|html|go|rs|java|c|cpp|h)([[:space:]]|$|[^a-zA-Z]) ]] || \
[[ "$PROMPT" =~ [/\\][a-zA-Z0-9_-]+[/\\] ]] || \
[[ "$PROMPT" =~ (src|lib|test|docs|plugins|hooks|commands)/ ]]; then
exit 0
fi
# Initialize vagueness score
SCORE=0
# Count words in the prompt
WORD_COUNT=$(echo "$PROMPT" | wc -w | tr -d ' ')
# ============================================================================
# Vagueness Signal Detection
# ============================================================================
# Signal 1: Very short prompts (< 10 words) are often vague
if [[ "$WORD_COUNT" -lt 10 ]]; then
# But very short specific commands are OK
if [[ "$WORD_COUNT" -lt 3 ]]; then
# Extremely short - probably intentional or a command
:
else
SCORE=$(echo "$SCORE + 0.3" | bc)
fi
fi
# Signal 2: Vague action phrases (no specific outcome)
VAGUE_ACTIONS=(
"help me"
"help with"
"do something"
"work on"
"look at"
"check this"
"fix it"
"fix this"
"make it better"
"make this better"
"improve it"
"improve this"
"update this"
"update it"
"change it"
"change this"
"can you"
"could you"
"would you"
"please help"
)
PROMPT_LOWER=$(echo "$PROMPT" | tr '[:upper:]' '[:lower:]')
for phrase in "${VAGUE_ACTIONS[@]}"; do
if [[ "$PROMPT_LOWER" == *"$phrase"* ]]; then
SCORE=$(echo "$SCORE + 0.2" | bc)
break
fi
done
# Signal 3: Ambiguous scope indicators
AMBIGUOUS_SCOPE=(
"somehow"
"something"
"somewhere"
"anything"
"whatever"
"stuff"
"things"
"etc"
"and so on"
)
for word in "${AMBIGUOUS_SCOPE[@]}"; do
if [[ "$PROMPT_LOWER" == *"$word"* ]]; then
SCORE=$(echo "$SCORE + 0.15" | bc)
break
fi
done
# Signal 4: Missing context indicators (no reference to what/where)
# Check if prompt lacks specificity markers
HAS_SPECIFICS=false
# Specific technical terms suggest clarity
SPECIFIC_MARKERS=(
"function"
"class"
"method"
"variable"
"error"
"bug"
"test"
"api"
"endpoint"
"database"
"query"
"component"
"module"
"service"
"config"
"install"
"deploy"
"build"
"run"
"execute"
"create"
"delete"
"add"
"remove"
"implement"
"refactor"
"migrate"
"upgrade"
"debug"
"log"
"exception"
"stack"
"memory"
"performance"
"security"
"auth"
"token"
"session"
"route"
"controller"
"model"
"view"
"template"
"schema"
"migration"
"commit"
"branch"
"merge"
"pull"
"push"
)
for marker in "${SPECIFIC_MARKERS[@]}"; do
if [[ "$PROMPT_LOWER" == *"$marker"* ]]; then
HAS_SPECIFICS=true
break
fi
done
if [[ "$HAS_SPECIFICS" == false ]] && [[ "$WORD_COUNT" -gt 3 ]]; then
SCORE=$(echo "$SCORE + 0.2" | bc)
fi
# Signal 5: Question without context
if [[ "$PROMPT" =~ \?$ ]] && [[ "$WORD_COUNT" -lt 8 ]]; then
# Short questions without specifics are often vague
if [[ "$HAS_SPECIFICS" == false ]]; then
SCORE=$(echo "$SCORE + 0.15" | bc)
fi
fi
# Cap score at 1.0
if (( $(echo "$SCORE > 1.0" | bc -l) )); then
SCORE="1.0"
fi
# ============================================================================
# Output suggestion if score exceeds threshold
# ============================================================================
# Compare score to threshold using bc
if (( $(echo "$SCORE >= $THRESHOLD" | bc -l) )); then
# Format score as percentage for display
SCORE_PCT=$(echo "$SCORE * 100" | bc | cut -d'.' -f1)
# Gentle, non-blocking suggestion
echo "$PREFIX Your prompt could benefit from more clarity."
echo "$PREFIX Consider running /clarity-assist to refine your request."
echo "$PREFIX (Vagueness score: ${SCORE_PCT}% - this is a suggestion, not a block)"
fi
# Always exit 0 - this hook is non-blocking
exit 0
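A minimal manual check of this hook, run from the plugin directory with a prompt piped on stdin the way the hook receives it (prompts are illustrative; the threshold can be tuned via `CLARITY_ASSIST_VAGUENESS_THRESHOLD` as defined above):

```bash
# Vague prompt: short, vague action phrase, no specific markers -> suggestion should print, exit 0
echo "can you help me make this better" | ./hooks/vagueness-check.sh

# Specific prompt: references a file path, so the hook exits silently
echo "fix the failing test in tests/test_jwt.py" | ./hooks/vagueness-check.sh
```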

View File

@@ -0,0 +1,195 @@
#!/bin/bash
# contract-validator SessionStart auto-validate hook
# Validates plugin contracts only when plugin files have changed since last check
# All output MUST have [contract-validator] prefix
PREFIX="[contract-validator]"
# ============================================================================
# Configuration
# ============================================================================
# Enable/disable auto-check (default: true)
AUTO_CHECK="${CONTRACT_VALIDATOR_AUTO_CHECK:-true}"
# Cache location for storing last check hash
CACHE_DIR="$HOME/.cache/claude-plugins/contract-validator"
HASH_FILE="$CACHE_DIR/last-check.hash"
# Marketplace location (installed plugins)
MARKETPLACE_PATH="$HOME/.claude/plugins/marketplaces/leo-claude-mktplace"
# ============================================================================
# Early exit if disabled
# ============================================================================
if [[ "$AUTO_CHECK" != "true" ]]; then
exit 0
fi
# ============================================================================
# Smart mode: Check if plugin files have changed
# ============================================================================
# Function to compute hash of all plugin manifest files
compute_plugin_hash() {
local hash_input=""
if [[ -d "$MARKETPLACE_PATH/plugins" ]]; then
# Hash all plugin.json, hooks.json, and agent files
while IFS= read -r -d '' file; do
if [[ -f "$file" ]]; then
hash_input+="$(md5sum "$file" 2>/dev/null | cut -d' ' -f1)"
fi
done < <(find "$MARKETPLACE_PATH/plugins" \
\( -name "plugin.json" -o -name "hooks.json" -o -name "*.md" -path "*/agents/*" \) \
-print0 2>/dev/null | sort -z)
fi
# Also include marketplace.json
if [[ -f "$MARKETPLACE_PATH/.claude-plugin/marketplace.json" ]]; then
hash_input+="$(md5sum "$MARKETPLACE_PATH/.claude-plugin/marketplace.json" 2>/dev/null | cut -d' ' -f1)"
fi
# Compute final hash
echo "$hash_input" | md5sum | cut -d' ' -f1
}
# Ensure cache directory exists
mkdir -p "$CACHE_DIR" 2>/dev/null
# Compute current hash
CURRENT_HASH=$(compute_plugin_hash)
# Check if we have a previous hash
if [[ -f "$HASH_FILE" ]]; then
PREVIOUS_HASH=$(cat "$HASH_FILE" 2>/dev/null)
# If hashes match, no changes - skip validation
if [[ "$CURRENT_HASH" == "$PREVIOUS_HASH" ]]; then
exit 0
fi
fi
# ============================================================================
# Run validation (hashes differ or no cache)
# ============================================================================
ISSUES_FOUND=0
WARNINGS=""
# Function to add warning
add_warning() {
WARNINGS+=" - $1"$'\n'
((ISSUES_FOUND++))
}
# 1. Check all installed plugins have valid plugin.json
if [[ -d "$MARKETPLACE_PATH/plugins" ]]; then
for plugin_dir in "$MARKETPLACE_PATH/plugins"/*/; do
if [[ -d "$plugin_dir" ]]; then
plugin_name=$(basename "$plugin_dir")
plugin_json="$plugin_dir/.claude-plugin/plugin.json"
if [[ ! -f "$plugin_json" ]]; then
add_warning "$plugin_name: missing .claude-plugin/plugin.json"
continue
fi
# Basic JSON validation
if ! python3 -c "import json; json.load(open('$plugin_json'))" 2>/dev/null; then
add_warning "$plugin_name: invalid JSON in plugin.json"
continue
fi
# Check required fields
if ! python3 -c "
import json
with open('$plugin_json') as f:
data = json.load(f)
required = ['name', 'version', 'description']
missing = [r for r in required if r not in data]
if missing:
exit(1)
" 2>/dev/null; then
add_warning "$plugin_name: plugin.json missing required fields"
fi
fi
done
fi
# 2. Check hooks.json files are properly formatted
if [[ -d "$MARKETPLACE_PATH/plugins" ]]; then
while IFS= read -r -d '' hooks_file; do
plugin_name=$(basename "$(dirname "$(dirname "$hooks_file")")")
# Validate JSON
if ! python3 -c "import json; json.load(open('$hooks_file'))" 2>/dev/null; then
add_warning "$plugin_name: invalid JSON in hooks/hooks.json"
continue
fi
# Validate hook structure
if ! python3 -c "
import json
with open('$hooks_file') as f:
data = json.load(f)
if 'hooks' not in data:
exit(1)
valid_events = ['PreToolUse', 'PostToolUse', 'UserPromptSubmit', 'SessionStart', 'SessionEnd', 'Notification', 'Stop', 'SubagentStop', 'PreCompact']
for event in data['hooks']:
if event not in valid_events:
exit(1)
for hook in data['hooks'][event]:
# Support both flat structure (type at top) and nested structure (matcher + hooks array)
if 'type' in hook:
# Flat structure: {type: 'command', command: '...'}
pass
elif 'matcher' in hook and 'hooks' in hook:
# Nested structure: {matcher: '...', hooks: [{type: 'command', ...}]}
for nested_hook in hook['hooks']:
if 'type' not in nested_hook:
exit(1)
else:
exit(1)
" 2>/dev/null; then
add_warning "$plugin_name: hooks.json has invalid structure or events"
fi
done < <(find "$MARKETPLACE_PATH/plugins" -path "*/hooks/hooks.json" -print0 2>/dev/null)
fi
# 3. Check agent references are valid (agent files exist and are markdown)
if [[ -d "$MARKETPLACE_PATH/plugins" ]]; then
while IFS= read -r -d '' agent_file; do
plugin_name=$(basename "$(dirname "$(dirname "$agent_file")")")
agent_name=$(basename "$agent_file")
# Check file is not empty
if [[ ! -s "$agent_file" ]]; then
add_warning "$plugin_name: empty agent file $agent_name"
continue
fi
# Check file has markdown content (at least a header)
if ! grep -q '^#' "$agent_file" 2>/dev/null; then
add_warning "$plugin_name: agent $agent_name missing markdown header"
fi
done < <(find "$MARKETPLACE_PATH/plugins" -path "*/agents/*.md" -print0 2>/dev/null)
fi
# ============================================================================
# Store new hash and report results
# ============================================================================
# Always store the new hash (even if issues found - we don't want to recheck)
echo "$CURRENT_HASH" > "$HASH_FILE"
# Report any issues found (non-blocking warning)
if [[ $ISSUES_FOUND -gt 0 ]]; then
echo "$PREFIX Plugin contract validation found $ISSUES_FOUND issue(s):"
echo "$WARNINGS"
echo "$PREFIX Run /validate-contracts for full details"
fi
# Always exit 0 (non-blocking)
exit 0
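To force a full re-validation (for example after fixing a reported issue), clear the cached hash and re-run the hook. The cache path matches `CACHE_DIR` above; the installed hook path is assumed from the marketplace layout:

```bash
# Drop the cached manifest hash so the next run re-validates everything
rm -f "$HOME/.cache/claude-plugins/contract-validator/last-check.hash"
"$HOME/.claude/plugins/marketplaces/leo-claude-mktplace/plugins/contract-validator/hooks/auto-validate.sh"
```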

View File

@@ -0,0 +1,174 @@
#!/bin/bash
# contract-validator breaking change detection hook
# Warns when plugin interface changes might break consumers
# This is a PostToolUse hook - non-blocking, warnings only
PREFIX="[contract-validator]"
# Check if warnings are enabled (default: true)
if [[ "${CONTRACT_VALIDATOR_BREAKING_WARN:-true}" != "true" ]]; then
exit 0
fi
# Read tool input from stdin
INPUT=$(cat)
# Extract file_path from JSON input
FILE_PATH=$(echo "$INPUT" | grep -o '"file_path"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1 | sed 's/.*"file_path"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/')
# If no file_path found, exit silently
if [ -z "$FILE_PATH" ]; then
exit 0
fi
# Check if file is a plugin interface file
is_interface_file() {
local file="$1"
case "$file" in
*/plugin.json) return 0 ;;
*/.claude-plugin/plugin.json) return 0 ;;
*/hooks.json) return 0 ;;
*/hooks/hooks.json) return 0 ;;
*/.mcp.json) return 0 ;;
*/agents/*.md) return 0 ;;
*/commands/*.md) return 0 ;;
*/skills/*.md) return 0 ;;
esac
return 1
}
# Exit if not an interface file
if ! is_interface_file "$FILE_PATH"; then
exit 0
fi
# Check if file exists and is in a git repo
if [[ ! -f "$FILE_PATH" ]]; then
exit 0
fi
# Get the directory containing the file
FILE_DIR=$(dirname "$FILE_PATH")
FILE_NAME=$(basename "$FILE_PATH")
# Try to get the previous version from git
cd "$FILE_DIR" 2>/dev/null || exit 0
# Check if we're in a git repo
if ! git rev-parse --git-dir > /dev/null 2>&1; then
exit 0
fi
# Get previous version (HEAD version before current changes)
PREV_CONTENT=$(git show HEAD:"./$FILE_NAME" 2>/dev/null || echo "")  # path relative to the cwd set above; git show HEAD:<absolute path> fails silently
# If no previous version, this is a new file - no breaking changes possible
if [ -z "$PREV_CONTENT" ]; then
exit 0
fi
# Read current content
CURR_CONTENT=$(cat "$FILE_PATH" 2>/dev/null || echo "")
if [ -z "$CURR_CONTENT" ]; then
exit 0
fi
BREAKING_CHANGES=()
# Detect breaking changes based on file type
case "$FILE_PATH" in
*/plugin.json|*/.claude-plugin/plugin.json)
# Check for removed or renamed fields in plugin.json
# Check if name changed
PREV_NAME=$(echo "$PREV_CONTENT" | grep -o '"name"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1)
CURR_NAME=$(echo "$CURR_CONTENT" | grep -o '"name"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1)
if [ -n "$PREV_NAME" ] && [ "$PREV_NAME" != "$CURR_NAME" ]; then
BREAKING_CHANGES+=("Plugin name changed - consumers may need updates")
fi
# Check if version had major bump (semantic versioning)
PREV_VER=$(echo "$PREV_CONTENT" | grep -o '"version"[[:space:]]*:[[:space:]]*"[^"]*"' | sed 's/.*"\([0-9]*\)\..*/\1/')
CURR_VER=$(echo "$CURR_CONTENT" | grep -o '"version"[[:space:]]*:[[:space:]]*"[^"]*"' | sed 's/.*"\([0-9]*\)\..*/\1/')
if [ -n "$PREV_VER" ] && [ -n "$CURR_VER" ] && [ "$CURR_VER" -gt "$PREV_VER" ] 2>/dev/null; then
BREAKING_CHANGES+=("Major version bump detected - verify breaking changes documented")
fi
;;
*/hooks.json|*/hooks/hooks.json)
# Check for removed hook events
PREV_EVENTS=$(echo "$PREV_CONTENT" | grep -oE '"(PreToolUse|PostToolUse|UserPromptSubmit|SessionStart|SessionEnd|Notification|Stop|SubagentStop|PreCompact)"' | sort -u)
CURR_EVENTS=$(echo "$CURR_CONTENT" | grep -oE '"(PreToolUse|PostToolUse|UserPromptSubmit|SessionStart|SessionEnd|Notification|Stop|SubagentStop|PreCompact)"' | sort -u)
# Find removed events
REMOVED_EVENTS=$(comm -23 <(echo "$PREV_EVENTS") <(echo "$CURR_EVENTS") 2>/dev/null)
if [ -n "$REMOVED_EVENTS" ]; then
BREAKING_CHANGES+=("Hook events removed: $(echo $REMOVED_EVENTS | tr '\n' ' ')")
fi
# Check for changed matchers
PREV_MATCHERS=$(echo "$PREV_CONTENT" | grep -o '"matcher"[[:space:]]*:[[:space:]]*"[^"]*"' | sort -u)
CURR_MATCHERS=$(echo "$CURR_CONTENT" | grep -o '"matcher"[[:space:]]*:[[:space:]]*"[^"]*"' | sort -u)
if [ "$PREV_MATCHERS" != "$CURR_MATCHERS" ]; then
BREAKING_CHANGES+=("Hook matchers changed - verify tool coverage")
fi
;;
*/.mcp.json)
# Check for removed MCP servers
PREV_SERVERS=$(echo "$PREV_CONTENT" | grep -o '"[^"]*"[[:space:]]*:' | grep -v "mcpServers" | sort -u)
CURR_SERVERS=$(echo "$CURR_CONTENT" | grep -o '"[^"]*"[[:space:]]*:' | grep -v "mcpServers" | sort -u)
REMOVED_SERVERS=$(comm -23 <(echo "$PREV_SERVERS") <(echo "$CURR_SERVERS") 2>/dev/null)
if [ -n "$REMOVED_SERVERS" ]; then
BREAKING_CHANGES+=("MCP servers removed - tools may be unavailable")
fi
;;
*/agents/*.md)
# Check if agent file was significantly reduced (might indicate removal of capabilities)
PREV_LINES=$(echo "$PREV_CONTENT" | wc -l)
CURR_LINES=$(echo "$CURR_CONTENT" | wc -l)
# If more than 50% reduction, warn
if [ "$PREV_LINES" -gt 10 ] && [ "$CURR_LINES" -lt $((PREV_LINES / 2)) ]; then
BREAKING_CHANGES+=("Agent definition significantly reduced - capabilities may be removed")
fi
# Check if agent name/description changed in frontmatter
PREV_DESC=$(echo "$PREV_CONTENT" | head -20 | grep -i "description" | head -1)
CURR_DESC=$(echo "$CURR_CONTENT" | head -20 | grep -i "description" | head -1)
if [ -n "$PREV_DESC" ] && [ "$PREV_DESC" != "$CURR_DESC" ]; then
BREAKING_CHANGES+=("Agent description changed - verify consumer expectations")
fi
;;
*/commands/*.md|*/skills/*.md)
# Check if command/skill was significantly changed
PREV_LINES=$(echo "$PREV_CONTENT" | wc -l)
CURR_LINES=$(echo "$CURR_CONTENT" | wc -l)
if [ "$PREV_LINES" -gt 10 ] && [ "$CURR_LINES" -lt $((PREV_LINES / 2)) ]; then
BREAKING_CHANGES+=("Command/skill significantly reduced - behavior may change")
fi
;;
esac
# Output warnings if any breaking changes detected
if [[ ${#BREAKING_CHANGES[@]} -gt 0 ]]; then
echo ""
echo "$PREFIX WARNING: Potential breaking changes in $(basename "$FILE_PATH")"
echo "$PREFIX ============================================"
for change in "${BREAKING_CHANGES[@]}"; do
echo "$PREFIX - $change"
done
echo "$PREFIX ============================================"
echo "$PREFIX Consider updating CHANGELOG and notifying consumers"
echo ""
fi
# Always exit 0 - non-blocking
exit 0

View File

@@ -0,0 +1,21 @@
{
"hooks": {
"SessionStart": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/auto-validate.sh"
}
],
"PostToolUse": [
{
"matcher": "Edit|Write",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/breaking-change-check.sh"
}
]
}
]
}
}

View File

@@ -5,6 +5,17 @@
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/startup-check.sh"
}
],
"PostToolUse": [
{
"matcher": "Edit|Write",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/schema-diff-check.sh"
}
]
}
]
}
}

View File

@@ -0,0 +1,138 @@
#!/bin/bash
# data-platform schema diff detection hook
# Warns about potentially breaking schema changes
# This is a command hook - non-blocking, warnings only
PREFIX="[data-platform]"
# Check if warnings are enabled (default: true)
if [[ "${DATA_PLATFORM_SCHEMA_WARN:-true}" != "true" ]]; then
exit 0
fi
# Read tool input from stdin (JSON with file_path)
INPUT=$(cat)
# Extract file_path from JSON input
FILE_PATH=$(echo "$INPUT" | grep -o '"file_path"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1 | sed 's/.*"file_path"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/')
# If no file_path found, exit silently
if [ -z "$FILE_PATH" ]; then
exit 0
fi
# Check if file is a schema-related file
is_schema_file() {
local file="$1"
# Check file extension
case "$file" in
*.sql) return 0 ;;
*/migrations/*.py) return 0 ;;
*/migrations/*.sql) return 0 ;;
*/models/*.py) return 0 ;;
*/models/*.sql) return 0 ;;
*schema.prisma) return 0 ;;
*schema.graphql) return 0 ;;
*/dbt/models/*.sql) return 0 ;;
*/dbt/models/*.yml) return 0 ;;
*/alembic/versions/*.py) return 0 ;;
esac
# Check directory patterns
if echo "$file" | grep -qE "(migrations?|schemas?|models)/"; then
return 0
fi
return 1
}
# Exit if not a schema file
if ! is_schema_file "$FILE_PATH"; then
exit 0
fi
# Read the file content (if it exists and is readable)
if [[ ! -f "$FILE_PATH" ]]; then
exit 0
fi
FILE_CONTENT=$(cat "$FILE_PATH" 2>/dev/null || echo "")
if [[ -z "$FILE_CONTENT" ]]; then
exit 0
fi
# Detect breaking changes
BREAKING_CHANGES=()
# Check for DROP COLUMN
if echo "$FILE_CONTENT" | grep -qiE "DROP[[:space:]]+COLUMN"; then
BREAKING_CHANGES+=("DROP COLUMN detected - may break existing queries")
fi
# Check for DROP TABLE
if echo "$FILE_CONTENT" | grep -qiE "DROP[[:space:]]+TABLE"; then
BREAKING_CHANGES+=("DROP TABLE detected - data loss risk")
fi
# Check for DROP INDEX
if echo "$FILE_CONTENT" | grep -qiE "DROP[[:space:]]+INDEX"; then
BREAKING_CHANGES+=("DROP INDEX detected - may impact query performance")
fi
# Check for ALTER TYPE / MODIFY COLUMN type changes
if echo "$FILE_CONTENT" | grep -qiE "ALTER[[:space:]]+.*(TYPE|COLUMN.*TYPE)"; then
BREAKING_CHANGES+=("Column type change detected - may cause data truncation")
fi
if echo "$FILE_CONTENT" | grep -qiE "MODIFY[[:space:]]+COLUMN"; then
BREAKING_CHANGES+=("MODIFY COLUMN detected - verify data compatibility")
fi
# Check for adding NOT NULL to existing column
if echo "$FILE_CONTENT" | grep -qiE "ALTER[[:space:]]+.*SET[[:space:]]+NOT[[:space:]]+NULL"; then
BREAKING_CHANGES+=("Adding NOT NULL constraint - existing NULL values will fail")
fi
if echo "$FILE_CONTENT" | grep -qiE "ADD[[:space:]]+.*NOT[[:space:]]+NULL[^[:space:]]*[[:space:]]+DEFAULT"; then
# Adding NOT NULL with DEFAULT is usually safe - don't warn
:
elif echo "$FILE_CONTENT" | grep -qiE "ADD[[:space:]]+.*NOT[[:space:]]+NULL"; then
BREAKING_CHANGES+=("Adding NOT NULL column without DEFAULT - INSERT may fail")
fi
# Check for RENAME TABLE/COLUMN
if echo "$FILE_CONTENT" | grep -qiE "RENAME[[:space:]]+(TABLE|COLUMN|TO)"; then
BREAKING_CHANGES+=("RENAME detected - update all references")
fi
# Check for removing from Django/SQLAlchemy models (Python files)
if [[ "$FILE_PATH" == *.py ]]; then
if echo "$FILE_CONTENT" | grep -qE "^-[[:space:]]*[a-z_]+[[:space:]]*=.*Field\("; then
BREAKING_CHANGES+=("Model field removal detected in Python ORM")
fi
fi
# Check for Prisma schema changes
if [[ "$FILE_PATH" == *schema.prisma ]]; then
if echo "$FILE_CONTENT" | grep -qE "@relation.*onDelete.*Cascade"; then
BREAKING_CHANGES+=("Cascade delete detected - verify data safety")
fi
fi
# Output warnings if any breaking changes detected
if [[ ${#BREAKING_CHANGES[@]} -gt 0 ]]; then
echo ""
echo "$PREFIX WARNING: Potential breaking schema changes in $(basename "$FILE_PATH")"
echo "$PREFIX ============================================"
for change in "${BREAKING_CHANGES[@]}"; do
echo "$PREFIX - $change"
done
echo "$PREFIX ============================================"
echo "$PREFIX Review before deploying to production"
echo ""
fi
# Always exit 0 - non-blocking
exit 0
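A minimal manual exercise of this hook, run from the plugin directory; the throwaway migration file and its contents are illustrative:

```bash
# Create a throwaway migration containing a breaking change
mkdir -p /tmp/migrations
echo 'ALTER TABLE users DROP COLUMN email;' > /tmp/migrations/001_drop_email.sql

# Feed the hook the same JSON shape it reads from stdin
echo '{"file_path": "/tmp/migrations/001_drop_email.sql"}' | ./hooks/schema-diff-check.sh
# Expected: a [data-platform] warning mentioning "DROP COLUMN detected"
```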

View File

@@ -0,0 +1,102 @@
#!/bin/bash
# git-flow branch name validation hook
# Validates branch names follow the convention: <type>/<description>
# Command hook - guaranteed predictable behavior
# Read tool input from stdin (JSON format)
INPUT=$(cat)
# Extract command from JSON input
# The Bash tool sends {"command": "..."} format
COMMAND=$(echo "$INPUT" | grep -o '"command"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1 | sed 's/.*"command"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/')
# If no command found, exit silently (allow)
if [ -z "$COMMAND" ]; then
exit 0
fi
# Check if this is a branch creation command
# Patterns: git checkout -b, git branch (without -d/-D), git switch -c/-C
IS_BRANCH_CREATE=false
BRANCH_NAME=""
# git checkout -b <branch>
if echo "$COMMAND" | grep -qE 'git\s+checkout\s+(-b|--branch)\s+'; then
IS_BRANCH_CREATE=true
BRANCH_NAME=$(echo "$COMMAND" | sed -n 's/.*git\s\+checkout\s\+\(-b\|--branch\)\s\+\([^ ]*\).*/\2/p')
fi
# git switch -c/-C <branch>
if echo "$COMMAND" | grep -qE 'git\s+switch\s+(-c|-C|--create|--force-create)\s+'; then
IS_BRANCH_CREATE=true
BRANCH_NAME=$(echo "$COMMAND" | sed -n 's/.*git\s\+switch\s\+\(-c\|-C\|--create\|--force-create\)\s\+\([^ ]*\).*/\2/p')
fi
# git branch <name> (without -d/-D/-m/-M which are delete/rename)
if echo "$COMMAND" | grep -qE 'git\s+branch\s+[^-]' && ! echo "$COMMAND" | grep -qE 'git\s+branch\s+(-d|-D|-m|-M|--delete|--move|--list|--show-current)'; then
IS_BRANCH_CREATE=true
BRANCH_NAME=$(echo "$COMMAND" | sed -n 's/.*git\s\+branch\s\+\([^ -][^ ]*\).*/\1/p')
fi
# If not a branch creation command, exit silently (allow)
if [ "$IS_BRANCH_CREATE" = false ]; then
exit 0
fi
# If we couldn't extract the branch name, exit silently (allow)
if [ -z "$BRANCH_NAME" ]; then
exit 0
fi
# Remove any quotes from branch name
BRANCH_NAME=$(echo "$BRANCH_NAME" | tr -d '"' | tr -d "'")
# Skip validation for special branches
case "$BRANCH_NAME" in
main|master|develop|development|staging|release|hotfix)
exit 0
;;
esac
# Allowed branch types
VALID_TYPES="feat|fix|chore|docs|refactor|test|perf|debug"
# Validate branch name format: <type>/<description>
# Description: lowercase letters, numbers, hyphens only, max 50 chars total
if ! echo "$BRANCH_NAME" | grep -qE "^($VALID_TYPES)/[a-z0-9][a-z0-9-]*$"; then
echo ""
echo "[git-flow] Branch name validation failed"
echo ""
echo "Branch: $BRANCH_NAME"
echo ""
echo "Expected format: <type>/<description>"
echo ""
echo "Valid types: feat, fix, chore, docs, refactor, test, perf, debug"
echo ""
echo "Description rules:"
echo " - Lowercase letters, numbers, and hyphens only"
echo " - Must start with letter or number"
echo " - No spaces or special characters"
echo ""
echo "Examples:"
echo " feat/add-user-auth"
echo " fix/login-timeout"
echo " chore/update-deps"
echo " docs/api-reference"
echo ""
exit 1
fi
# Check total length (max 50 chars)
if [ ${#BRANCH_NAME} -gt 50 ]; then
echo ""
echo "[git-flow] Branch name too long"
echo ""
echo "Branch: $BRANCH_NAME (${#BRANCH_NAME} chars)"
echo "Maximum: 50 characters"
echo ""
exit 1
fi
# Valid branch name
exit 0
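A quick manual check of both paths, feeding the hook the `{"command": ...}` shape described in the header comments (branch names are illustrative):

```bash
# Valid name -> silent, exit 0
echo '{"command": "git checkout -b feat/add-user-auth"}' | ./hooks/branch-check.sh; echo "exit: $?"

# Invalid name (uppercase, no type prefix) -> guidance printed, exit 1
echo '{"command": "git checkout -b MyNewBranch"}' | ./hooks/branch-check.sh; echo "exit: $?"
```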

View File

@@ -0,0 +1,74 @@
#!/bin/bash
# git-flow commit message validation hook
# Validates git commit messages follow conventional commit format
# PreToolUse hook for Bash commands - type: command
# Read tool input from stdin
INPUT=$(cat)
# Use Python to properly parse JSON and extract the command
COMMAND=$(echo "$INPUT" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('command',''))" 2>/dev/null)
# If no command or python failed, allow through
if [ -z "$COMMAND" ]; then
exit 0
fi
# Check if it is a git commit command with -m flag
if ! echo "$COMMAND" | grep -qE 'git\s+commit.*-m'; then
# Not a git commit with -m, allow through
exit 0
fi
# Extract commit message - handle various quoting styles
# Try double quotes first
COMMIT_MSG=$(echo "$COMMAND" | sed -n 's/.*-m[[:space:]]*"\([^"]*\)".*/\1/p')
# If empty, try single quotes
if [ -z "$COMMIT_MSG" ]; then
COMMIT_MSG=$(echo "$COMMAND" | sed -n "s/.*-m[[:space:]]*'\\([^']*\\)'.*/\\1/p")
fi
# If still empty, try HEREDOC pattern
if [ -z "$COMMIT_MSG" ]; then
if echo "$COMMAND" | grep -qE -- '-m[[:space:]]+"\$\(cat <<'; then
# HEREDOC pattern - too complex to parse, allow through
exit 0
fi
fi
# If no message extracted, allow through
if [ -z "$COMMIT_MSG" ]; then
exit 0
fi
# Validate conventional commit format
# Format: <type>(<scope>): <description>
# or: <type>: <description>
# Valid types: feat, fix, docs, style, refactor, perf, test, chore, build, ci
VALID_TYPES="feat|fix|docs|style|refactor|perf|test|chore|build|ci"
# Check if message matches conventional commit format
if echo "$COMMIT_MSG" | grep -qE "^($VALID_TYPES)(\([a-zA-Z0-9_-]+\))?:[[:space:]]+.+"; then
# Valid format
exit 0
fi
# Invalid format - output warning
echo "[git-flow] WARNING: Commit message does not follow conventional commit format"
echo ""
echo "Expected format: <type>(<scope>): <description>"
echo " or: <type>: <description>"
echo ""
echo "Valid types: feat, fix, docs, style, refactor, perf, test, chore, build, ci"
echo ""
echo "Examples:"
echo " feat(auth): add password reset functionality"
echo " fix: resolve login timeout issue"
echo " docs(readme): update installation instructions"
echo ""
echo "Your message: $COMMIT_MSG"
echo ""
echo "To proceed anyway, use /commit command which auto-generates valid messages."
# Exit with non-zero to block
exit 1
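A quick manual check, assuming the same flat `{"command": ...}` input shape (messages are illustrative):

```bash
# Conventional message -> allowed, exit 0
echo '{"command": "git commit -m \"feat(auth): add password reset\""}' | ./hooks/commit-msg-check.sh; echo "exit: $?"

# Non-conventional message -> warning printed, exit 1
echo '{"command": "git commit -m \"update stuff\""}' | ./hooks/commit-msg-check.sh; echo "exit: $?"
```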

View File

@@ -0,0 +1,19 @@
{
"hooks": {
"PreToolUse": [
{
"matcher": "Bash",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/branch-check.sh"
},
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/commit-msg-check.sh"
}
]
}
]
}
}

View File

@@ -108,7 +108,127 @@ git branch --show-current
## Your Responsibilities
### 1. Implement Features Following Specs
### 1. Status Reporting (Structured Progress)
**CRITICAL: Post structured progress comments for visibility.**
**Standard Progress Comment Format:**
```markdown
## Progress Update
**Status:** In Progress | Blocked | Failed
**Phase:** [current phase name]
**Tool Calls:** X (budget: Y)
### Completed
- [x] Step 1
- [x] Step 2
### In Progress
- [ ] Current step (estimated: Z more calls)
### Blockers
- None | [blocker description]
### Next
- What happens after current step
```
**When to Post Progress Comments:**
- **Immediately on starting** - Post initial status
- **Every 20-30 tool calls** - Show progress
- **On phase transitions** - Moving from implementation to testing
- **When blocked or encountering errors**
- **Before budget limit** - If approaching turn limit
**Starting Work Example:**
```
add_comment(
issue_number=45,
body="""## Progress Update
**Status:** In Progress
**Phase:** Starting
**Tool Calls:** 5 (budget: 100)
### Completed
- [x] Read issue and acceptance criteria
- [x] Created feature branch feat/45-jwt-service
### In Progress
- [ ] Implementing JWT service
### Blockers
- None
### Next
- Create auth/jwt_service.py
- Implement core token functions
"""
)
```
**Blocked Example:**
```
add_comment(
issue_number=45,
body="""## Progress Update
**Status:** Blocked
**Phase:** Testing
**Tool Calls:** 67 (budget: 100)
### Completed
- [x] Implemented jwt_service.py
- [x] Wrote unit tests
### In Progress
- [ ] Running tests - BLOCKED
### Blockers
- Missing PyJWT dependency in requirements.txt
- Need orchestrator to add dependency
### Next
- Resume after blocker resolved
"""
)
```
**Failed Example:**
```
add_comment(
issue_number=45,
body="""## Progress Update
**Status:** Failed
**Phase:** Implementation
**Tool Calls:** 89 (budget: 100)
### Completed
- [x] Created jwt_service.py
- [x] Implemented generate_token()
### In Progress
- [ ] verify_token() - FAILED
### Blockers
- Critical: Cannot decode tokens - algorithm mismatch
- Attempted: HS256, HS384, RS256
- Error: InvalidSignatureError consistently
### Next
- Needs human investigation
- Possible issue with secret key encoding
"""
)
```
**NEVER report "completed" unless:**
- All acceptance criteria are met
- Tests pass
- Code is committed and pushed
- No unresolved errors
**If you cannot complete, report failure honestly.** The orchestrator needs accurate status to coordinate effectively.
### 2. Implement Features Following Specs
**You receive:**
- Issue number and description
@@ -122,7 +242,7 @@ git branch --show-current
- Proper error handling
- Edge case coverage
### 2. Follow Best Practices
### 3. Follow Best Practices
**Code Quality Standards:**
@@ -150,7 +270,7 @@ git branch --show-current
- Handle errors gracefully
- Follow OWASP guidelines
### 3. Handle Edge Cases
### 4. Handle Edge Cases
Always consider:
- What if input is None/null/undefined?
@@ -160,7 +280,7 @@ Always consider:
- What if user doesn't have permission?
- What if resource doesn't exist?
### 4. Apply Lessons Learned
### 5. Apply Lessons Learned
Reference relevant lessons in your implementation:
@@ -179,7 +299,7 @@ def test_verify_expired_token(jwt_service):
...
```
### 5. Create Merge Requests (When Branch Protected)
### 6. Create Merge Requests (When Branch Protected)
**MR Body Template - NO SUBTASKS:**
@@ -208,7 +328,7 @@ Closes #45
The issue already tracks subtasks. MR body should be summary only.
### 6. Auto-Close Issues via Commit Messages
### 7. Auto-Close Issues via Commit Messages
**Always include closing keywords in commits:**
@@ -229,7 +349,7 @@ Closes #45"
This ensures issues auto-close when MR is merged.
### 7. Generate Completion Reports
### 8. Generate Completion Reports
After implementation, provide a concise completion report:
@@ -304,18 +424,185 @@ As the executor, you interact with MCP tools for status updates:
- Apply best practices
- Deliver quality work
## Checkpointing (Save Progress for Resume)
**CRITICAL: Save checkpoints so work can be resumed if interrupted.**
**Checkpoint Comment Format:**
```markdown
## Checkpoint
**Branch:** feat/45-jwt-service
**Commit:** abc123 (or "uncommitted")
**Phase:** [current phase]
**Tool Calls:** 45
### Files Modified
- auth/jwt_service.py (created)
- tests/test_jwt.py (created)
### Completed Steps
- [x] Created jwt_service.py skeleton
- [x] Implemented generate_token()
- [x] Implemented verify_token()
### Pending Steps
- [ ] Write unit tests
- [ ] Add token refresh logic
- [ ] Commit and push
### State Notes
[Any important context for resumption]
```
**When to Save Checkpoints:**
- After completing each major step (every 20-30 tool calls)
- Before stopping due to budget limit
- When encountering a blocker
- After any commit
**Checkpoint Example:**
```
add_comment(
issue_number=45,
body="""## Checkpoint
**Branch:** feat/45-jwt-service
**Commit:** uncommitted (changes staged)
**Phase:** Testing
**Tool Calls:** 67
### Files Modified
- auth/jwt_service.py (created, 120 lines)
- auth/__init__.py (modified, added import)
- tests/test_jwt.py (created, 50 lines, incomplete)
### Completed Steps
- [x] Created auth/jwt_service.py
- [x] Implemented generate_token() with HS256
- [x] Implemented verify_token()
- [x] Updated auth/__init__.py exports
### Pending Steps
- [ ] Complete test_jwt.py (5 tests remaining)
- [ ] Add token refresh logic
- [ ] Commit changes
- [ ] Push to remote
### State Notes
- Using PyJWT 2.8.0
- Secret key from JWT_SECRET env var
- Tests use pytest fixtures in conftest.py
"""
)
```
**Checkpoint on Interruption:**
If you must stop (budget, failure, blocker), ALWAYS post a checkpoint FIRST.
## Runaway Detection (Self-Monitoring)
**CRITICAL: Monitor yourself to prevent infinite loops and wasted resources.**
**Self-Monitoring Checkpoints:**
| Trigger | Action |
|---------|--------|
| 10+ tool calls without progress | STOP - Post progress update, reassess approach |
| Same error 3+ times | CIRCUIT BREAKER - Stop, report failure with error pattern |
| 50+ tool calls total | POST progress update (mandatory) |
| 80+ tool calls total | WARN - Approaching budget, evaluate if completion is realistic |
| 100+ tool calls total | STOP - Save state, report incomplete with checkpoint |
**What Counts as "Progress":**
- File created or modified
- Test passing that wasn't before
- New functionality working
- Moving to next phase of work
**What Does NOT Count as Progress:**
- Reading more files
- Searching for something
- Retrying the same operation
- Adding logging/debugging
**Circuit Breaker Protocol:**
If you encounter the same error 3+ times:
```
add_comment(
issue_number=45,
body="""## Progress Update
**Status:** Failed (Circuit Breaker)
**Phase:** [phase when stopped]
**Tool Calls:** 67 (budget: 100)
### Circuit Breaker Triggered
Same error occurred 3+ times:
```
[error message]
```
### What Was Tried
1. [first attempt]
2. [second attempt]
3. [third attempt]
### Recommendation
[What human should investigate]
### Files Modified
- [list any files changed before failure]
"""
)
```
**Budget Approaching Protocol:**
At 80+ tool calls, post an update:
```
add_comment(
issue_number=45,
body="""## Progress Update
**Status:** In Progress (Budget Warning)
**Phase:** [current phase]
**Tool Calls:** 82 (budget: 100)
### Completed
- [x] [completed steps]
### Remaining
- [ ] [what's left]
### Assessment
[Realistic? Should I continue or stop and checkpoint?]
"""
)
```
**Hard Stop at 100 Calls:**
If you reach 100 tool calls:
1. STOP immediately
2. Save current state
3. Post checkpoint comment
4. Report as incomplete (not failed)
## Critical Reminders
1. **Never use CLI tools** - Use MCP tools exclusively for Gitea
2. **Branch naming** - Always use `feat/`, `fix/`, or `debug/` prefix with issue number
3. **Branch check FIRST** - Never implement on staging/production
4. **Follow specs precisely** - Respect architectural decisions
5. **Apply lessons learned** - Reference in code and tests
6. **Write tests** - Cover edge cases, not just happy path
7. **Clean code** - Readable, maintainable, documented
8. **No MR subtasks** - MR body should NOT have checklists
9. **Use closing keywords** - `Closes #XX` in commit messages
10. **Report thoroughly** - Complete summary when done
2. **Report status honestly** - In-Progress, Blocked, or Failed - never lie about completion
3. **Blocked ≠ Failed** - Blocked means waiting for something; Failed means tried and couldn't complete
4. **Self-monitor** - Watch for runaway patterns, trigger circuit breaker when stuck
5. **Branch naming** - Always use `feat/`, `fix/`, or `debug/` prefix with issue number
6. **Branch check FIRST** - Never implement on staging/production
7. **Follow specs precisely** - Respect architectural decisions
8. **Apply lessons learned** - Reference in code and tests
9. **Write tests** - Cover edge cases, not just happy path
10. **Clean code** - Readable, maintainable, documented
11. **No MR subtasks** - MR body should NOT have checklists
12. **Use closing keywords** - `Closes #XX` in commit messages
13. **Report thoroughly** - Complete summary when done, including honest status
14. **Hard stop at 100 calls** - Save checkpoint and report incomplete
## Your Mission

View File

@@ -57,9 +57,42 @@ curl -X POST "https://gitea.../api/..."
- Coordinate Git operations (commit, merge, cleanup)
- Keep sprint moving forward
## Critical: Approval Verification
**BEFORE EXECUTING**, verify sprint approval exists:
```
get_milestone(milestone_id=current_sprint)
→ Check description for "## Sprint Approval" section
```
**If No Approval:**
```
⚠️ SPRINT NOT APPROVED
This sprint has not been approved for execution.
Please run /sprint-plan to approve the sprint first.
```
**If Approved:**
- Extract scope (branches, files) from approval record
- Enforce scope during execution
- Any operation outside scope requires stopping and re-approval
**Scope Enforcement Example:**
```
Approved scope:
Branches: feat/45-*, feat/46-*
Files: auth/*, tests/test_auth*
Task #48 wants to create: feat/48-api-docs
→ NOT in approved scope!
→ STOP and ask user to approve expanded scope
```
## Critical: Branch Detection
**BEFORE DOING ANYTHING**, check the current git branch:
**AFTER approval verification**, check the current git branch:
```bash
git branch --show-current
@@ -93,7 +126,44 @@ git branch --show-current
**Workflow:**
**A. Fetch Sprint Issues**
**A. Fetch Sprint Issues and Detect Checkpoints**
```
list_issues(state="open", labels=["sprint-current"])
```
**For each open issue, check for checkpoint comments:**
```
get_issue(issue_number=45) # Comments included
→ Look for comments containing "## Checkpoint"
```
**If Checkpoint Found:**
```
Checkpoint Detected for #45
Found checkpoint from previous session:
Branch: feat/45-jwt-service
Phase: Testing
Tool Calls: 67
Files Modified: 3
Completed: 4/7 steps
Options:
1. Resume from checkpoint (recommended)
2. Start fresh (discard previous work)
3. Review checkpoint details first
Would you like to resume?
```
**Resume Protocol:**
1. Verify branch exists: `git branch -a | grep feat/45-jwt-service`
2. Switch to branch: `git checkout feat/45-jwt-service`
3. Verify files match checkpoint
4. Dispatch executor with checkpoint context
5. Executor continues from pending steps
**B. Fetch Sprint Issues (Standard)**
```
list_issues(state="open", labels=["sprint-current"])
```
@@ -147,11 +217,56 @@ Relevant Lessons:
Ready to start? I can dispatch multiple tasks in parallel.
```
### 2. Parallel Task Dispatch
### 2. File Conflict Prevention (Pre-Dispatch)
**When starting execution:**
**BEFORE dispatching parallel agents, analyze file overlap.**
For independent tasks (same batch), spawn multiple Executor agents in parallel:
**Conflict Detection Workflow:**
1. **Read each issue's checklist/body** to identify target files
2. **Build file map** for all tasks in the batch
3. **Check for overlap** - Same file in multiple tasks?
4. **Sequentialize conflicts** - Don't parallelize if same file
**Example Analysis:**
```
Analyzing Batch 1 for conflicts:
#45 - Implement JWT service
→ auth/jwt_service.py, auth/__init__.py, tests/test_jwt.py
#48 - Update API documentation
→ docs/api.md, README.md
Overlap check: NONE
Decision: Safe to parallelize ✅
```
**If Conflict Detected:**
```
Analyzing Batch 2 for conflicts:
#46 - Build login endpoint
→ api/routes/auth.py, auth/__init__.py
#49 - Add auth tests
→ tests/test_auth.py, auth/__init__.py
Overlap: auth/__init__.py ⚠️
Decision: Sequentialize - run #46 first, then #49
```
**Conflict Resolution:**
- Same file → MUST sequentialize
- Same directory → Usually safe, review file names
- Shared config → Sequentialize
- Shared test fixture → Assign different fixture files or sequentialize
### 3. Parallel Task Dispatch
**After conflict check passes, dispatch parallel agents:**
For independent tasks (same batch) WITH NO FILE CONFLICTS, spawn multiple Executor agents in parallel:
```
Dispatching Batch 1 (2 tasks in parallel):
@@ -167,6 +282,14 @@ Task 2: #48 - Update API documentation
Both tasks running in parallel. I'll monitor progress.
```
**Branch Isolation:** Each task MUST have its own branch. Never have two agents work on the same branch.
**Sequential Merge Protocol:**
1. Wait for task to complete
2. Merge its branch to development
3. Then merge next completed task
4. Never merge simultaneously
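A minimal sketch of that sequence, assuming local merges into `development` and the branch names from the dispatch example (adapt if merges go through MRs on a protected branch):

```bash
# Merge the first completed task
git checkout development
git pull origin development
git merge --no-ff feat/45-jwt-service
git push origin development

# Only after that merge lands, merge the next completed task
git merge --no-ff feat/48-api-docs
git push origin development
```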
**Branch Naming Convention (MANDATORY):**
- Features: `feat/<issue-number>-<short-description>`
- Bug fixes: `fix/<issue-number>-<short-description>`
@@ -177,7 +300,7 @@ Both tasks running in parallel. I'll monitor progress.
- `fix/46-login-timeout`
- `debug/47-investigate-memory-leak`
### 3. Generate Lean Execution Prompts
### 4. Generate Lean Execution Prompts
**NOT THIS (too verbose):**
```
@@ -222,11 +345,127 @@ Dependencies: None (can start immediately)
Ready to start? Say "yes" and I'll monitor progress.
```
### 4. Progress Tracking
### 5. Status Label Management
**Monitor and Update:**
**CRITICAL: Use Status labels to communicate issue state accurately.**
**Add Progress Comments:**
**When dispatching a task:**
```
update_issue(
issue_number=45,
labels=["Status/In-Progress", ...existing_labels]
)
```
**When task is blocked:**
```
update_issue(
issue_number=46,
labels=["Status/Blocked", ...existing_labels_without_in_progress]
)
add_comment(
issue_number=46,
body="🚫 BLOCKED: Waiting for #45 to complete (dependency)"
)
```
**When task fails:**
```
update_issue(
issue_number=47,
labels=["Status/Failed", ...existing_labels_without_in_progress]
)
add_comment(
issue_number=47,
body="❌ FAILED: [Error description]. Needs investigation."
)
```
**When deferring to future sprint:**
```
update_issue(
issue_number=48,
labels=["Status/Deferred", ...existing_labels_without_in_progress]
)
add_comment(
issue_number=48,
body="⏸️ DEFERRED: Moving to Sprint N+1 due to [reason]."
)
```
**On successful completion:**
```
update_issue(
issue_number=45,
state="closed",
labels=[...existing_labels_without_status] # Remove all Status/* labels
)
```
**Status Label Rules:**
- Only ONE Status label at a time (In-Progress, Blocked, Failed, or Deferred)
- Remove Status labels when closing successfully
- Always add comment explaining status changes
### 6. Progress Tracking (Structured Comments)
**CRITICAL: Use structured progress comments for visibility.**
**Standard Progress Comment Format:**
```markdown
## Progress Update
**Status:** In Progress | Blocked | Failed
**Phase:** [current phase name]
**Tool Calls:** X (budget: Y)
### Completed
- [x] Step 1
- [x] Step 2
### In Progress
- [ ] Current step (estimated: Z more calls)
### Blockers
- None | [blocker description]
### Next
- What happens after current step
```
**Example Progress Comment:**
```
add_comment(
issue_number=45,
body="""## Progress Update
**Status:** In Progress
**Phase:** Implementation
**Tool Calls:** 45 (budget: 100)
### Completed
- [x] Created auth/jwt_service.py
- [x] Implemented generate_token()
- [x] Implemented verify_token()
### In Progress
- [ ] Writing unit tests (estimated: 20 more calls)
### Blockers
- None
### Next
- Run tests and fix any failures
- Commit and push
"""
)
```
**When to Post Progress Comments:**
- After completing each major phase (every 20-30 tool calls)
- When status changes (blocked, failed)
- When encountering unexpected issues
- Before approaching tool call budget limit
**Simple progress updates (for minor milestones):**
```
add_comment(
issue_number=45,
@@ -264,7 +503,7 @@ add_comment(
- Notify that new tasks are ready for execution
- Update the execution queue
### 5. Monitor Parallel Execution
### 7. Monitor Parallel Execution
**Track multiple running tasks:**
```
@@ -282,7 +521,7 @@ Batch 2 (now unblocked):
Starting #46 while #48 continues...
```
### 6. Branch Protection Detection
### 8. Branch Protection Detection
Before merging, check if development branch is protected:
@@ -312,7 +551,7 @@ Closes #45
**NEVER include subtask checklists in MR body.** The issue already has them.
### 7. Sprint Close - Capture Lessons Learned
### 9. Sprint Close - Capture Lessons Learned
**Invoked by:** `/sprint-close`
@@ -564,6 +803,64 @@ Would you like me to handle git operations?
- Document blockers promptly
- Never let tasks slip through
## Runaway Detection (Monitoring Dispatched Agents)
**Monitor dispatched agents for runaway behavior:**
**Warning Signs:**
- Agent running 30+ minutes with no progress comment
- Progress comment shows "same phase" for 20+ tool calls
- Error patterns repeating in progress comments
**Intervention Protocol:**
When you detect an agent may be stuck:
1. **Read latest progress comment** - Check tool call count and phase
2. **If no progress in 20+ calls** - Consider stopping the agent
3. **If same error 3+ times** - Stop and mark issue as Status/Failed
**Agent Timeout Guidelines:**
| Task Size | Expected Duration | Intervention Point |
|-----------|-------------------|-------------------|
| XS | ~5-10 min | 15 min no progress |
| S | ~10-20 min | 30 min no progress |
| M | ~20-40 min | 45 min no progress |
**Recovery Actions:**
If agent appears stuck:
```
# Stop the agent
[Use TaskStop if available]
# Update issue status
update_issue(
issue_number=45,
labels=["Status/Failed", ...other_labels]
)
# Add explanation comment
add_comment(
issue_number=45,
body="""## Agent Intervention
**Reason:** No progress detected for [X] minutes / [Y] tool calls
**Last Status:** [from progress comment]
**Action:** Stopped agent, requires human review
### What Was Completed
[from progress comment]
### What Remains
[from progress comment]
### Recommendation
[Manual completion / Different approach / Break down further]
"""
)
```
## Critical Reminders
1. **Never use CLI tools** - Use MCP tools exclusively for Gitea
@@ -572,14 +869,18 @@ Would you like me to handle git operations?
4. **Parallel dispatch** - Run independent tasks simultaneously
5. **Lean prompts** - Brief, actionable, not verbose documents
6. **Branch naming** - `feat/`, `fix/`, `debug/` prefixes required
7. **No MR subtasks** - MR body should NOT have checklists
8. **Auto-check subtasks** - Mark issue subtasks complete on close
9. **Track meticulously** - Update issues immediately, document blockers
10. **Capture lessons** - At sprint close, interview thoroughly
11. **Update wiki status** - At sprint close, update implementation and proposal pages
12. **Link lessons to wiki** - Include lesson links in implementation completion summary
13. **Update CHANGELOG** - MANDATORY at sprint close, never skip
14. **Run suggest-version** - Check if release is needed after CHANGELOG update
7. **Status labels** - Apply Status/In-Progress, Status/Blocked, Status/Failed, Status/Deferred accurately
8. **One status at a time** - Remove old Status/* label before applying new one
9. **Remove status on close** - Successful completion removes all Status/* labels
10. **Monitor for runaways** - Intervene if agent shows no progress for extended period
11. **No MR subtasks** - MR body should NOT have checklists
12. **Auto-check subtasks** - Mark issue subtasks complete on close
13. **Track meticulously** - Update issues immediately, document blockers
14. **Capture lessons** - At sprint close, interview thoroughly
15. **Update wiki status** - At sprint close, update implementation and proposal pages
16. **Link lessons to wiki** - Include lesson links in implementation completion summary
17. **Update CHANGELOG** - MANDATORY at sprint close, never skip
18. **Run suggest-version** - Check if release is needed after CHANGELOG update
## Your Mission

View File

@@ -310,14 +310,55 @@ Think through the technical approach:
- `[Sprint 17] fix: Resolve login timeout issue`
- `[Sprint 18] refactor: Extract authentication module`
**Task Granularity Guidelines:**
| Size | Scope | Example |
|------|-------|---------|
| **Small** | 1-2 hours, single file/component | Add validation to one field |
| **Medium** | Half day, multiple files, one feature | Implement new API endpoint |
| **Large** | Should be broken down | Full authentication system |
**Task Sizing Rules (MANDATORY):**
**If a task is too large, break it down into smaller tasks.**
| Effort | Files | Checklist Items | Max Tool Calls | Agent Scope |
|--------|-------|-----------------|----------------|-------------|
| **XS** | 1 file | 0-2 items | ~30 | Single function/fix |
| **S** | 1 file | 2-4 items | ~50 | Single file feature |
| **M** | 2-3 files | 4-6 items | ~80 | Multi-file feature |
| **L** | MUST BREAK DOWN | - | - | Too large for one agent |
| **XL** | MUST BREAK DOWN | - | - | Way too large |
**CRITICAL: L and XL tasks MUST be broken into subtasks.**
**Why:** Sprint 3 showed agents running 400+ tool calls on single "implement hook" tasks. This causes:
- Long wait times (1+ hour per task)
- No progress visibility
- Resource exhaustion
- Difficult debugging
**Task Scoping Checklist:**
1. Can this be completed in one file? → XS or S
2. Does it touch 2-3 files? → M (max)
3. Does it touch 4+ files? → MUST break down
4. Does it require complex decision-making? → MUST break down
5. Would you estimate 50+ tool calls? → MUST break down
**Breaking Down Large Tasks:**
**BAD (L/XL - too broad):**
```
[Sprint 3] feat: Implement git-flow branch validation hook
Labels: Efforts/L, ...
```
**GOOD (broken into S/M tasks):**
```
[Sprint 3] feat: Create branch validation hook skeleton
Labels: Efforts/S, ...
[Sprint 3] feat: Add prefix pattern validation (feat/, fix/, etc.)
Labels: Efforts/S, ...
[Sprint 3] feat: Add issue number extraction and validation
Labels: Efforts/S, ...
[Sprint 3] test: Add branch validation unit tests
Labels: Efforts/S, ...
```
**If a task is estimated L or XL, STOP and break it down before creating.**
**IMPORTANT: Include wiki implementation reference in issue body:**
@@ -479,5 +520,9 @@ Sprint 17 - User Authentication (Due: 2025-02-01)
11. **Always use suggest_labels** - Don't guess labels
12. **Always think through architecture** - Consider edge cases
13. **Always cleanup local files** - Delete after migrating to wiki
14. **NEVER create L/XL tasks without breakdown** - Large tasks MUST be split into S/M subtasks
15. **Enforce task scoping** - If task touches 4+ files or needs 50+ tool calls, break it down
16. **ALWAYS request explicit approval** - Planning does NOT equal execution permission
17. **Record approval in milestone** - Sprint-start verifies approval before executing
You are the thoughtful planner who ensures sprints are well-prepared, architecturally sound, and learn from past experiences. Take your time, ask questions, and create comprehensive plans that set the team up for success.

View File

@@ -136,6 +136,58 @@ The planner agent will:
- Document dependency graph
- Provide sprint overview with wiki links
11. **Request Sprint Approval**
- Present approval request with scope summary
- Capture explicit user approval
- Record approval in milestone description
- Approval scopes what sprint-start can execute
## Sprint Approval (MANDATORY)
**Planning DOES NOT equal execution permission.**
After creating issues, the planner MUST request explicit approval:
```
Sprint 17 Planning Complete
===========================
Created Issues:
- #45: [Sprint 17] feat: JWT token generation
- #46: [Sprint 17] feat: Login endpoint
- #47: [Sprint 17] test: Auth tests
Execution Scope:
- Branches: feat/45-*, feat/46-*, feat/47-*
- Files: auth/*, api/routes/auth.py, tests/test_auth*
- Dependencies: PyJWT, python-jose
⚠️ APPROVAL REQUIRED
Do you approve this sprint for execution?
This grants permission for agents to:
- Create and modify files in the listed scope
- Create branches with the listed prefixes
- Install listed dependencies
Type "approve sprint 17" to authorize execution.
```
**On Approval:**
1. Record approval in milestone description
2. Note timestamp and scope
3. Sprint-start will verify approval exists
**Approval Record Format:**
```markdown
## Sprint Approval
**Approved:** 2026-01-28 14:30
**Approver:** User
**Scope:**
- Branches: feat/45-*, feat/46-*, feat/47-*
- Files: auth/*, api/routes/auth.py, tests/test_auth*
```
## Issue Title Format (MANDATORY)
```
@@ -155,15 +207,70 @@ The planner agent will:
- `[Sprint 17] fix: Resolve login timeout issue`
- `[Sprint 18] refactor: Extract authentication module`
## Task Granularity Guidelines
## Task Sizing Rules (MANDATORY)
| Size | Scope | Example |
|------|-------|---------|
| **Small** | 1-2 hours, single file/component | Add validation to one field |
| **Medium** | Half day, multiple files, one feature | Implement new API endpoint |
| **Large** | Should be broken down | Full authentication system |
**CRITICAL: Tasks sized L or XL MUST be broken down into smaller tasks.**
**If a task is too large, break it down into smaller tasks.**
| Effort | Files | Checklist Items | Max Tool Calls | Agent Scope |
|--------|-------|-----------------|----------------|-------------|
| **XS** | 1 file | 0-2 items | ~30 | Single function/fix |
| **S** | 1 file | 2-4 items | ~50 | Single file feature |
| **M** | 2-3 files | 4-6 items | ~80 | Multi-file feature |
| **L** | MUST BREAK DOWN | - | - | Too large for one agent |
| **XL** | MUST BREAK DOWN | - | - | Way too large |
**Why This Matters:**
- Agents running 400+ tool calls can take over an hour, with no visibility into progress
- Large tasks lack clear completion criteria
- Debugging failures is extremely difficult
- Small tasks enable parallel execution
**Scoping Checklist:**
1. Can this be completed in one file? → XS or S
2. Does it touch 2-3 files? → M (maximum for single task)
3. Does it touch 4+ files? → MUST break down
4. Would you estimate 50+ tool calls? → MUST break down
5. Does it require complex decision-making mid-task? → MUST break down
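As a rough illustration, the sizing table can be applied mechanically; the thresholds below follow the table, and the function name is hypothetical:

```python
def estimate_effort(files_touched: int, checklist_items: int,
                    estimated_tool_calls: int) -> str:
    """Suggest an Efforts/* label from the sizing table (illustrative only)."""
    if files_touched >= 4 or checklist_items > 6 or estimated_tool_calls > 80:
        return "BREAK DOWN"   # L/XL territory — the planner must refuse
    if files_touched >= 2 or checklist_items > 4 or estimated_tool_calls > 50:
        return "Efforts/M"    # 2-3 files, 4-6 items, ~80 tool calls
    if checklist_items > 2 or estimated_tool_calls > 30:
        return "Efforts/S"    # single-file feature, ~50 tool calls
    return "Efforts/XS"       # single function or fix, ~30 tool calls
```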
**Example Breakdown:**
**BAD (L - too broad):**
```
[Sprint 3] feat: Implement schema diff detection hook
Labels: Efforts/L
- Hook skeleton
- Pattern detection for DROP/ALTER/RENAME
- Warning output formatting
- Integration with hooks.json
```
**GOOD (broken into S tasks):**
```
[Sprint 3] feat: Create schema-diff-check.sh hook skeleton
Labels: Efforts/S
- [ ] Create hook file with standard header
- [ ] Add file type detection for SQL/migrations
- [ ] Exit 0 (non-blocking)
[Sprint 3] feat: Add DROP/ALTER pattern detection
Labels: Efforts/S
- [ ] Detect DROP COLUMN/TABLE/INDEX
- [ ] Detect ALTER TYPE changes
- [ ] Detect RENAME operations
[Sprint 3] feat: Add warning output formatting
Labels: Efforts/S
- [ ] Format breaking change warnings
- [ ] Add hook prefix to output
- [ ] Test output visibility
[Sprint 3] chore: Register hook in hooks.json
Labels: Efforts/XS
- [ ] Add PostToolUse:Edit hook entry
- [ ] Test hook triggers on SQL edits
```
**The planner MUST refuse to create L/XL tasks without breakdown.**
## MCP Tools Available

View File

@@ -6,6 +6,47 @@ description: Begin sprint execution with relevant lessons learned from previous
You are initiating sprint execution. The orchestrator agent will coordinate the work, analyze dependencies for parallel execution, search for relevant lessons learned, and guide you through the implementation process.
## Sprint Approval Verification
**CRITICAL: Sprint must be approved before execution.**
The orchestrator checks for approval in the milestone description:
```
get_milestone(milestone_id=17)
→ Check description for "## Sprint Approval" section
```
**If Approval Missing:**
```
⚠️ SPRINT NOT APPROVED
Sprint 17 has not been approved for execution.
The milestone description does not contain an approval record.
Please run /sprint-plan to:
1. Review the sprint scope
2. Approve the execution plan
Then run /sprint-start again.
```
**If Approval Found:**
```
✓ Sprint Approval Verified
Approved: 2026-01-28 14:30
Scope:
Branches: feat/45-*, feat/46-*, feat/47-*
Files: auth/*, api/routes/auth.py, tests/test_auth*
Proceeding with execution within approved scope...
```
**Scope Enforcement:**
- Agents can ONLY create branches matching approved patterns
- Agents can ONLY modify files within approved paths
- Operations outside scope require re-approval via `/sprint-plan`
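A hedged sketch of how the orchestrator might extract and enforce this scope, assuming the approval record format produced by `/sprint-plan` and shell-style glob patterns (`feat/45-*`, `auth/*`); the helpers are illustrative:

```python
import fnmatch
import re

def extract_scope(milestone_description: str) -> dict:
    """Pull branch and file patterns out of the '## Sprint Approval' section."""
    scope = {"branches": [], "files": []}
    section = re.search(r"## Sprint Approval(.*)", milestone_description or "", re.S)
    if not section:
        return scope  # empty scope → sprint not approved
    for line in section.group(1).splitlines():
        stripped = line.strip()
        if stripped.startswith("- Branches:"):
            scope["branches"] = [p.strip() for p in stripped.split(":", 1)[1].split(",")]
        elif stripped.startswith("- Files:"):
            scope["files"] = [p.strip() for p in stripped.split(":", 1)[1].split(",")]
    return scope

def branch_allowed(branch: str, scope: dict) -> bool:
    return any(fnmatch.fnmatch(branch, pattern) for pattern in scope["branches"])

def file_allowed(path: str, scope: dict) -> bool:
    return any(fnmatch.fnmatch(path, pattern) for pattern in scope["files"])
```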
## Branch Detection
**CRITICAL:** Before proceeding, check the current git branch:
@@ -25,7 +66,18 @@ If you are on a production or staging branch, you MUST stop and ask the user to
The orchestrator agent will:
1. **Fetch Sprint Issues**
1. **Verify Sprint Approval**
- Check milestone description for `## Sprint Approval` section
- If no approval found, STOP and direct user to `/sprint-plan`
- If approval found, extract scope (branches, files)
- Agents operate ONLY within approved scope
2. **Detect Checkpoints (Resume Support)**
- Check each open issue for `## Checkpoint` comments
- If checkpoint found, offer to resume from that point
- Resume preserves: branch, completed work, pending steps
3. **Fetch Sprint Issues**
- Use `list_issues` to fetch open issues for the sprint
- Identify priorities based on labels (Priority/Critical, Priority/High, etc.)
@@ -72,6 +124,67 @@ Parallel Execution Batches:
**Independent tasks in the same batch run in parallel.**
## File Conflict Prevention (MANDATORY)
**CRITICAL: Before dispatching parallel agents, check for file overlap.**
**Pre-Dispatch Conflict Check:**
1. **Identify target files** for each task in the batch
2. **Check for overlap** - Do any tasks modify the same file?
3. **If overlap detected** - Sequentialize those specific tasks
**Example Conflict Detection:**
```
Batch 1 Analysis:
#45 - Implement JWT service
Files: auth/jwt_service.py, auth/__init__.py, tests/test_jwt.py
#48 - Update API documentation
Files: docs/api.md, README.md
Overlap: NONE → Safe to parallelize
Batch 2 Analysis:
#46 - Build login endpoint
Files: api/routes/auth.py, auth/__init__.py
#49 - Add auth tests
Files: tests/test_auth.py, auth/__init__.py
Overlap: auth/__init__.py → CONFLICT!
Action: Sequentialize #46 and #49 (run #46 first)
```
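The overlap check itself is small; a sketch under the assumption that each task's target files are known up front (for example, from its checklist):

```python
from itertools import combinations

def find_conflicts(batch: dict) -> list:
    """Return (task_a, task_b, overlapping_files) for every conflicting pair.

    `batch` maps an issue reference to the set of files that task will touch,
    e.g. {"#46": {"api/routes/auth.py", "auth/__init__.py"}, ...}.
    """
    conflicts = []
    for (a, files_a), (b, files_b) in combinations(batch.items(), 2):
        overlap = files_a & files_b
        if overlap:
            conflicts.append((a, b, overlap))
    return conflicts

batch_2 = {
    "#46": {"api/routes/auth.py", "auth/__init__.py"},
    "#49": {"tests/test_auth.py", "auth/__init__.py"},
}
print(find_conflicts(batch_2))  # [('#46', '#49', {'auth/__init__.py'})] → sequentialize
```

Any pair reported here is pulled out of the parallel batch and run sequentially, per the rules below.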
**Conflict Resolution Rules:**
| Conflict Type | Action |
|---------------|--------|
| Same file in checklist | Sequentialize tasks |
| Same directory | Review if safe, usually OK |
| Shared test file | Sequentialize or assign different test files |
| Shared config | Sequentialize |
**Branch Isolation Protocol:**
Even for parallel tasks, each MUST run on its own branch:
```
Task #45 → feat/45-jwt-service (isolated)
Task #48 → feat/48-api-docs (isolated)
```
**Sequential Merge After Completion:**
```
1. Task #45 completes → merge feat/45-jwt-service to development
2. Task #48 completes → merge feat/48-api-docs to development
3. Never merge simultaneously - always sequential to detect conflicts
```
**If Merge Conflict Occurs:**
1. Stop second task
2. Resolve conflict manually or assign to human
3. Resume/restart second task with updated base
## Branch Naming Convention (MANDATORY)
When creating branches for tasks:
@@ -239,6 +352,61 @@ Batch 2 (now unblocked):
Starting #46 while #48 continues...
```
## Checkpoint Resume Support
If a previous session was interrupted (agent stopped, failure, budget exhausted), checkpoints enable resumption.
**Checkpoint Detection:**
The orchestrator scans issue comments for `## Checkpoint` markers containing:
- Branch name
- Last commit hash
- Completed/pending steps
- Files modified
**Resume Flow:**
```
User: /sprint-start
Orchestrator: Checking for checkpoints...
Found checkpoint for #45 (JWT service):
Branch: feat/45-jwt-service
Last activity: 2 hours ago
Progress: 4/7 steps completed
Pending: Write tests, add refresh, commit
Options:
1. Resume from checkpoint (recommended)
2. Start fresh (lose previous work)
3. Review checkpoint details
User: 1
Orchestrator: Resuming #45 from checkpoint...
✓ Branch exists
✓ Files match checkpoint
✓ Dispatching executor with context
Executor continues from pending steps...
```
**Checkpoint Format:**
Executors save checkpoints after major steps:
```markdown
## Checkpoint
**Branch:** feat/45-jwt-service
**Commit:** abc123
**Phase:** Testing
### Completed Steps
- [x] Step 1
- [x] Step 2
### Pending Steps
- [ ] Step 3
- [ ] Step 4
```
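A rough sketch of how the orchestrator could parse such a comment into resume context; the regexes assume the exact checkpoint format above, and the helper name is illustrative:

```python
import re

def parse_checkpoint(comment_body: str):
    """Extract branch, commit, phase and step lists from a checkpoint comment."""
    if "## Checkpoint" not in comment_body:
        return None

    def field(name):
        match = re.search(rf"\*\*{name}:\*\*\s*(.+)", comment_body)
        return match.group(1).strip() if match else ""

    return {
        "branch": field("Branch"),
        "commit": field("Commit"),
        "phase": field("Phase"),
        "completed": re.findall(r"- \[x\] (.+)", comment_body),
        "pending": re.findall(r"- \[ \] (.+)", comment_body),
    }
```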
## Getting Started
Simply invoke `/sprint-start` and the orchestrator will:

View File

@@ -79,7 +79,12 @@ Completed Issues (3):
In Progress (2):
#46: [Sprint 18] feat: Build login endpoint [Type/Feature, Priority/High]
Status: In Progress | Phase: Implementation | Tool Calls: 45/100
Progress: 3/5 steps | Current: Writing validation logic
#49: [Sprint 18] test: Add auth tests [Type/Test, Priority/Medium]
Status: In Progress | Phase: Testing | Tool Calls: 30/100
Progress: 2/4 steps | Current: Testing edge cases
Ready to Start (2):
#50: [Sprint 18] feat: Integrate OAuth providers [Type/Feature, Priority/Low]
@@ -137,12 +142,53 @@ Show only backend issues:
list_issues(labels=["Component/Backend"])
```
## Progress Comment Parsing
Agents post structured progress comments in this format:
```markdown
## Progress Update
**Status:** In Progress | Blocked | Failed
**Phase:** [current phase name]
**Tool Calls:** X (budget: Y)
### Completed
- [x] Step 1
### In Progress
- [ ] Current step
### Blockers
- None | [blocker description]
```
**To extract real-time progress:**
1. Fetch issue comments: `get_issue(number)` includes recent comments
2. Look for comments containing `## Progress Update`
3. Parse the **Status:** line for current state
4. Parse **Tool Calls:** for budget consumption
5. Extract blockers from `### Blockers` section
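A minimal parsing sketch, assuming the exact comment structure above (the function name and return shape are illustrative):

```python
import re

def parse_progress(comment_body: str):
    """Turn a '## Progress Update' comment into a small status summary."""
    if "## Progress Update" not in comment_body:
        return None
    status = re.search(r"\*\*Status:\*\*\s*([^\n|]+)", comment_body)
    calls = re.search(r"\*\*Tool Calls:\*\*\s*(\d+)\s*\(budget:\s*(\d+)\)", comment_body)
    blockers = []
    section = re.search(r"### Blockers\n(.*)", comment_body, re.S)
    if section:
        blockers = [line.lstrip("- ").strip()
                    for line in section.group(1).splitlines()
                    if line.strip().startswith("-") and line.strip() != "- None"]
    return {
        "status": status.group(1).strip() if status else "Unknown",
        "tool_calls": int(calls.group(1)) if calls else None,
        "budget": int(calls.group(2)) if calls else None,
        "blockers": blockers,
    }
```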
**Progress Summary Display:**
```
In Progress Issues:
#45: [Sprint 18] feat: JWT service
Status: In Progress | Phase: Testing | Tool Calls: 67/100
Completed: 4/6 steps | Current: Writing unit tests
#46: [Sprint 18] feat: Login endpoint
Status: Blocked | Phase: Implementation | Tool Calls: 23/100
Blocker: Waiting for JWT service (#45)
```
## Blocker Detection
The command identifies blocked issues by:
1. **Dependency Analysis** - Uses `list_issue_dependencies` to find unmet dependencies
2. **Comment Keywords** - Checks for "blocked", "blocker", "waiting for"
3. **Stale Issues** - Issues with no recent activity (>7 days)
1. **Progress Comments** - Parse `### Blockers` section from structured comments
2. **Status Labels** - Check for `Status/Blocked` label on issue
3. **Dependency Analysis** - Uses `list_issue_dependencies` to find unmet dependencies
4. **Comment Keywords** - Checks for "blocked", "blocker", "waiting for"
5. **Stale Issues** - Issues with no recent activity (>7 days)
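Combined, these signals reduce to a simple check; a sketch that assumes the issue payload carries labels, recent comment bodies, and an ISO `updated_at` timestamp (dependency analysis via `list_issue_dependencies` would run separately):

```python
from datetime import datetime, timedelta, timezone

BLOCKER_KEYWORDS = ("blocked", "blocker", "waiting for")

def looks_blocked(labels, recent_comments, updated_at, progress=None):
    """Heuristic blocker check mirroring detection rules 1, 2, 4 and 5 above."""
    if progress and progress.get("blockers"):
        return True                                   # rule 1: progress comment blockers
    if "Status/Blocked" in labels:
        return True                                   # rule 2: explicit status label
    text = " ".join(recent_comments[-3:]).lower()
    if any(keyword in text for keyword in BLOCKER_KEYWORDS):
        return True                                   # rule 4: comment keywords
    updated = datetime.fromisoformat(updated_at.replace("Z", "+00:00"))
    return datetime.now(timezone.utc) - updated > timedelta(days=7)   # rule 5: stale
```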
## When to Use

View File

@@ -13,9 +13,9 @@ description: Dynamic reference for Gitea label taxonomy (organization + reposito
This skill provides the current label taxonomy used for issue classification in Gitea. Labels are **fetched dynamically** from Gitea and should never be hardcoded.
**Current Taxonomy:** 43 labels (27 organization + 16 repository)
**Current Taxonomy:** 47 labels (31 organization + 16 repository)
## Organization Labels (27)
## Organization Labels (31)
Organization-level labels are shared across all repositories in your configured organization.
@@ -60,6 +60,12 @@ Organization-level labels are shared across all repositories in your configured
- `Type/Test` (#1d76db) - Testing-related work (unit, integration, e2e)
- `Type/Chore` (#fef2c0) - Maintenance, tooling, dependencies, build tasks
### Status (4)
- `Status/In-Progress` (#0052cc) - Work is actively being done on this issue
- `Status/Blocked` (#ff5630) - Blocked by external dependency or issue
- `Status/Failed` (#de350b) - Implementation attempted but failed, needs investigation
- `Status/Deferred` (#6554c0) - Moved to a future sprint or backlog
## Repository Labels (16)
Repository-level labels are specific to each project.
@@ -168,6 +174,28 @@ When suggesting labels for issues, consider the following patterns:
- Keywords: "deploy", "deployment", "docker", "infrastructure", "ci/cd", "production"
- Example: "Deploy authentication service to production"
### Status Detection
**Status/In-Progress:**
- Applied when: Agent starts working on an issue
- Removed when: Work completes, fails, or is blocked
- Example: Orchestrator applies when dispatching task to executor
**Status/Blocked:**
- Applied when: Issue cannot proceed due to external dependency
- Context: Waiting for another issue, external service, or decision
- Example: "Blocked by #45 - need JWT service first"
**Status/Failed:**
- Applied when: Implementation was attempted but failed
- Context: Errors, permission issues, technical blockers
- Example: Agent hit permission errors and couldn't complete
**Status/Deferred:**
- Applied when: Work is moved to a future sprint
- Context: Scope reduction, reprioritization
- Example: "Moving to Sprint 5 due to scope constraints"
### Tech Detection
**Tech/Python:**