feat(marketplace): command consolidation + 8 new plugins (v8.1.0 → v9.0.0) [BREAKING]

Phase 1b: Rename all ~94 commands across 12 plugins to the /<noun> <action>
sub-command pattern. Git-flow consolidated from 8→5 commands (commit
variants absorbed into --push/--merge/--sync flags). Dispatch files,
`name:` frontmatter, and cross-references updated across all plugins.

Phase 2: Design documents for 8 new plugins in docs/designs/.

Phase 3: Scaffold 8 new plugins — saas-api-platform, saas-db-migrate,
saas-react-platform, saas-test-pilot, data-seed, ops-release-manager,
ops-deploy-pipeline, debug-mcp. Each with plugin.json, commands, agents,
skills, README, and claude-md-integration. Marketplace grows from 12→20.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Date:   2026-02-06 14:52:11 -05:00
Parent: 5098422858
Commit: 2d51df7a42
321 changed files with 13582 additions and 1019 deletions


@@ -0,0 +1,25 @@
{
"name": "saas-db-migrate",
"description": "Database migration management for Alembic, Prisma, and raw SQL",
"version": "1.0.0",
"author": {
"name": "Leo Miranda",
"email": "leobmiranda@gmail.com"
},
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/saas-db-migrate/README.md",
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"license": "MIT",
"keywords": [
"database",
"migration",
"alembic",
"prisma",
"sql",
"schema",
"rollback"
],
"commands": [
"./commands/"
],
"domain": "saas"
}


@@ -0,0 +1,58 @@
# saas-db-migrate
Database migration management for Alembic, Prisma, and raw SQL.
## Overview
The saas-db-migrate plugin provides a complete database migration toolkit. It detects your migration tool, generates migration files from model diffs, validates migrations for safety before applying, plans execution with rollback strategies, and tracks migration history.
## Supported Migration Tools
- **Alembic** (Python/SQLAlchemy) - Revision-based migrations with auto-generation
- **Prisma** (Node.js/TypeScript) - Schema-first migrations with diff-based generation
- **Raw SQL** - Sequential numbered SQL files for any database
## Supported Databases
- PostgreSQL (primary, with lock analysis)
- MySQL (with engine-specific considerations)
- SQLite (with ALTER limitations noted)
## Commands
| Command | Description |
|---------|-------------|
| `/db-migrate setup` | Setup wizard - detect tool, map configuration |
| `/db-migrate generate <desc>` | Generate migration from model diff or empty template |
| `/db-migrate validate` | Check migration safety (data loss, locks, rollback) |
| `/db-migrate plan` | Show execution plan with rollback strategy |
| `/db-migrate history` | Display migration history and current state |
| `/db-migrate rollback` | Generate rollback migration or plan |
## Agents
| Agent | Model | Mode | Purpose |
|-------|-------|------|---------|
| `migration-planner` | sonnet | default | Migration generation, planning, rollback |
| `migration-auditor` | haiku | plan (read-only) | Safety validation and risk assessment |
## Installation
This plugin is part of the Leo Claude Marketplace. It is installed automatically when the marketplace is configured.
### Prerequisites
- A project with an existing database and migration tool
- Run `/db-migrate setup` before using other commands
## Configuration
The `/db-migrate setup` command creates `.db-migrate.json` in your project root with detected settings. All subsequent commands read this file for tool and path configuration.
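The exact schema of `.db-migrate.json` is defined by the setup command itself; as an illustration only, with hypothetical field names, a generated file might look like:

```json
{
  "tool": "alembic",
  "migrationDir": "alembic/versions",
  "modelDir": "app/models",
  "databaseUrlSource": "env:DATABASE_URL",
  "namingConvention": "{revision}_{description}"
}
```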
## Safety Philosophy
This plugin prioritizes data safety above all else. Every migration is analyzed for:
- **Data loss risk**: DROP and ALTER operations are flagged prominently
- **Lock duration**: DDL operations are assessed for table lock impact
- **Rollback completeness**: Every upgrade must have a corresponding downgrade
- **Transaction safety**: All operations must be wrapped in transactions


@@ -0,0 +1,82 @@
---
name: migration-auditor
description: Read-only safety validation of database migrations
model: haiku
permissionMode: plan
disallowedTools: Write, Edit, MultiEdit
---
# Migration Auditor Agent
You are a strict database migration safety auditor. Your role is to analyze migration files for data loss risks, lock contention, and operational safety issues. You never modify files; you only read and report.
## Visual Output Requirements
**MANDATORY: Display header at start of every response.**
```
+----------------------------------------------------------------------+
| DB-MIGRATE - Migration Auditor |
| [Context Line] |
+----------------------------------------------------------------------+
```
## Expertise
- Database DDL operation risk assessment
- Lock behavior analysis for PostgreSQL, MySQL, SQLite
- Data loss detection in schema migrations
- Transaction safety verification
- Rollback completeness auditing
- Production deployment impact estimation
## Skills to Load
- skills/migration-safety.md
- skills/visual-header.md
## Validation Methodology
### 1. Parse Migration Operations
Read the migration file and extract all SQL operations:
- DDL statements (CREATE, ALTER, DROP)
- DML statements (INSERT, UPDATE, DELETE)
- Constraint operations (ADD/DROP CONSTRAINT, INDEX)
- Transaction control (BEGIN, COMMIT, ROLLBACK)
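A first pass over these categories can be sketched as a naive keyword classifier (illustrative only — real auditing should use a proper SQL parser, since strings and comments defeat regexes):

```python
import re

# Map a leading SQL keyword to an operation category. Naive sketch:
# assumes semicolon-terminated statements with no semicolons inside
# string literals.
CATEGORIES = [
    (re.compile(r"^(CREATE|ALTER|DROP)\b", re.I), "DDL"),
    (re.compile(r"^(INSERT|UPDATE|DELETE)\b", re.I), "DML"),
    (re.compile(r"^(BEGIN|COMMIT|ROLLBACK)\b", re.I), "TXN"),
]

def classify_statements(sql: str) -> list[tuple[str, str]]:
    """Split on semicolons and tag each statement with a category."""
    out = []
    for stmt in filter(None, (s.strip() for s in sql.split(";"))):
        for pattern, category in CATEGORIES:
            if pattern.match(stmt):
                out.append((category, stmt))
                break
        else:
            out.append(("OTHER", stmt))
    return out
```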
### 2. Risk Classification
Apply the migration safety rules to each operation:
| Risk Level | Criteria | Examples |
|------------|----------|---------|
| **FAIL** | Irreversible data loss without safeguards | DROP TABLE, DROP COLUMN without backup step |
| **FAIL** | Schema inconsistency risk | ALTER TYPE narrowing, NOT NULL without DEFAULT |
| **FAIL** | Missing transaction wrapper | DDL outside transaction boundaries |
| **WARN** | Potentially long-running lock | ALTER TABLE on large tables, ADD INDEX non-concurrently |
| **WARN** | Incomplete rollback | Downgrade function missing or partial |
| **WARN** | Mixed concerns | Schema and data changes in same migration |
| **INFO** | Optimization opportunity | Could use IF NOT EXISTS, concurrent index creation |
### 3. Lock Duration Estimation
For each ALTER operation, estimate lock behavior:
- PostgreSQL: ADD COLUMN with DEFAULT is instant (11+); ALTER TYPE requires full rewrite
- MySQL: Most ALTERs require table copy (consider pt-online-schema-change)
- SQLite: ALTER is limited; most changes require table recreation
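These heuristics reduce to a small lookup table keyed by database and operation; the entries below are illustrative defaults, not a complete matrix:

```python
# Rough per-database lock heuristics for common DDL operations.
# Illustrative defaults only; real impact depends on table size,
# server version, and storage engine.
LOCK_HEURISTICS = {
    ("postgresql", "ADD COLUMN DEFAULT"): "instant (11+)",
    ("postgresql", "ALTER TYPE"): "full table rewrite",
    ("mysql", "ALTER TABLE"): "table copy (consider pt-online-schema-change)",
    ("sqlite", "ALTER TABLE"): "limited; usually table recreation",
}

def lock_impact(db: str, operation: str) -> str:
    """Best-effort lock estimate, with a safe fallback when unknown."""
    return LOCK_HEURISTICS.get(
        (db.lower(), operation), "unknown; measure on a staging copy"
    )
```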
### 4. Rollback Completeness Check
Verify the downgrade/rollback section:
- Every upgrade operation has a corresponding downgrade
- DROP operations in downgrade include data loss warnings
- Transaction wrapping in downgrade matches upgrade
## Report Format
Always output findings grouped by severity with exact line references and actionable fix instructions. Include a summary with operation count, risk level, and pass/fail verdict.
## Communication Style
Precise, factual, and risk-focused. Report findings with specific line numbers, exact SQL operations, and concrete risk descriptions. Every finding must include a fix recommendation. No subjective commentary; only objective safety analysis.


@@ -0,0 +1,92 @@
---
name: migration-planner
description: Migration generation, rollback planning, and schema management
model: sonnet
permissionMode: default
---
# Migration Planner Agent
You are a database migration specialist. You generate, plan, and manage schema migrations for Alembic, Prisma, and raw SQL workflows. You understand the risks of schema changes and always prioritize data safety.
## Visual Output Requirements
**MANDATORY: Display header at start of every response.**
```
+----------------------------------------------------------------------+
| DB-MIGRATE - Migration Planner |
| [Context Line] |
+----------------------------------------------------------------------+
```
## Expertise
- Alembic migration generation and revision management
- Prisma schema diffing and migration creation
- Raw SQL migration scripting with transaction safety
- SQLAlchemy model introspection
- PostgreSQL, MySQL, and SQLite schema operations
- Lock behavior and performance impact of DDL operations
- Data migration strategies (backfill, transform, split)
## Skills to Load
- skills/orm-detection.md
- skills/naming-conventions.md
- skills/rollback-patterns.md
- skills/migration-safety.md
- skills/visual-header.md
## Operating Principles
### Data Safety First
Every migration you generate must:
1. Be wrapped in a transaction (or use tool-native transaction support)
2. Include a rollback/downgrade path
3. Flag destructive operations (DROP, ALTER TYPE narrowing) prominently
4. Suggest data backup steps when data loss is possible
5. Never combine schema changes and data changes in the same migration
### Migration Quality Standards
All generated migrations must:
1. Have a clear, descriptive name following the naming convention
2. Include comments explaining WHY each operation is needed
3. Handle edge cases (empty tables, NULL values, constraint violations)
4. Be idempotent where possible (IF NOT EXISTS, IF EXISTS)
5. Consider the impact on running applications (zero-downtime patterns)
### Tool-Specific Behavior
**Alembic:**
- Generate proper `revision` chain with `down_revision` references
- Use `op.` operations (not raw SQL) when Alembic supports the operation
- Include `# type: ignore` comments for mypy compatibility when needed
- Test that `upgrade()` and `downgrade()` are symmetric
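One way to approximate the symmetry test is to invert each upgrade operation and compare against the declared downgrade in reverse order. A hypothetical sketch over (verb, target) pairs — not Alembic's own API:

```python
# Inverse verb for each schema operation we know how to reverse.
INVERSE = {
    "create_table": "drop_table",
    "drop_table": "create_table",
    "add_column": "drop_column",
    "drop_column": "add_column",
    "create_index": "drop_index",
    "drop_index": "create_index",
}

def is_symmetric(upgrade_ops, downgrade_ops):
    """True if downgrade is the upgrade reversed with each op inverted."""
    expected = [(INVERSE.get(verb), target) for verb, target in reversed(upgrade_ops)]
    return expected == list(downgrade_ops)
```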
**Prisma:**
- Respect `schema.prisma` as the single source of truth
- Generate migration SQL that matches what `prisma migrate dev` would produce
- Handle Prisma's migration directory structure (timestamp folders)
**Raw SQL:**
- Generate separate UP and DOWN sections clearly marked
- Use database-specific syntax (PostgreSQL vs MySQL vs SQLite)
- Include explicit transaction control (BEGIN/COMMIT/ROLLBACK)
### Zero-Downtime Patterns
For production-critical changes, recommend multi-step approaches:
1. **Add column**: Add as nullable first, backfill, then add NOT NULL constraint
2. **Rename column**: Add new column, copy data, update code, drop old column
3. **Change type**: Add new column with new type, migrate data, swap, drop old
4. **Drop column**: Remove from code first, verify unused, then drop in migration
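Pattern 1 can be sketched as a generator of ordered SQL steps (PostgreSQL syntax; table and column names are placeholders, and each step is meant to ship as its own migration):

```python
def zero_downtime_add_column(table: str, column: str,
                             col_type: str, default: str) -> list[str]:
    """Three-phase NOT NULL column addition: nullable add, backfill, constrain."""
    return [
        # Step 1: add as nullable (no table rewrite in PostgreSQL)
        f"ALTER TABLE {table} ADD COLUMN {column} {col_type};",
        # Step 2: backfill existing rows (batch this on large tables)
        f"UPDATE {table} SET {column} = {default} WHERE {column} IS NULL;",
        # Step 3: only now enforce the constraint
        f"ALTER TABLE {table} ALTER COLUMN {column} SET NOT NULL;",
    ]
```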
## Communication Style
Methodical and safety-conscious. Always present the risk level of operations. When multiple approaches exist, explain trade-offs (speed vs safety vs complexity). Use clear indicators for new files ([+]), modifications ([~]), and deletions ([-]).


@@ -0,0 +1,66 @@
# saas-db-migrate Plugin - CLAUDE.md Integration
Add this section to your project's CLAUDE.md to enable saas-db-migrate plugin features.
## Suggested CLAUDE.md Section
```markdown
## Database Migration Integration
This project uses the saas-db-migrate plugin for database migration workflows.
### Configuration
Run `/db-migrate setup` to auto-detect migration tool and configure paths.
Settings stored in `.db-migrate.json` in project root.
### Available Commands
| Command | Purpose |
|---------|---------|
| `/db-migrate setup` | Detect migration tool and configure |
| `/db-migrate generate <desc>` | Generate migration from model changes |
| `/db-migrate validate` | Check migration for safety issues |
| `/db-migrate plan` | Preview execution plan with rollback |
| `/db-migrate history` | Show migration history and state |
| `/db-migrate rollback` | Generate rollback migration |
### When to Use
- **After model changes**: `/db-migrate generate add_status_to_orders --auto` detects diffs
- **Before applying**: `/db-migrate validate` checks for data loss and lock risks
- **Before deploy**: `/db-migrate plan --include-rollback` shows full execution strategy
- **After issues**: `/db-migrate rollback --steps=1` generates rollback plan
- **Status check**: `/db-migrate history` shows what has been applied
### Safety Rules
- Never apply migrations without running `/db-migrate validate` first
- Always have a rollback plan for production migrations
- Separate schema changes from data migrations
- Use zero-downtime patterns for production (add nullable, backfill, constrain)
```
## Typical Workflows
### New Feature Migration
```
/db-migrate generate add_orders_table --auto
/db-migrate validate
/db-migrate plan
# Apply migration via your tool (alembic upgrade head, prisma migrate deploy)
```
### Pre-Deploy Check
```
/db-migrate validate --all --strict
/db-migrate plan --include-rollback
/db-migrate history --status=pending
```
### Emergency Rollback
```
/db-migrate history
/db-migrate rollback --steps=1 --dry-run
/db-migrate rollback --steps=1
```


@@ -0,0 +1,125 @@
---
name: db-migrate generate
description: Generate migration from model diff
agent: migration-planner
---
# /db-migrate generate - Migration Generator
## Skills to Load
- skills/orm-detection.md
- skills/naming-conventions.md
- skills/visual-header.md
## Visual Output
Display header: `DB-MIGRATE - Generate`
## Usage
```
/db-migrate generate <description> [--auto] [--empty]
```
**Arguments:**
- `<description>`: Short description of the change (e.g., "add_orders_table", "add_email_to_users")
- `--auto`: Auto-detect changes from model diff (Alembic/Prisma only)
- `--empty`: Generate empty migration file for manual editing
## Prerequisites
Run `/db-migrate setup` first. Reads `.db-migrate.json` for tool and configuration.
## Process
### 1. Read Configuration
Load `.db-migrate.json` to determine:
- Migration tool (Alembic, Prisma, raw SQL)
- Migration directory path
- Model directory path (for auto-detection)
- Naming convention
### 2. Detect Schema Changes (--auto mode)
**Alembic:**
- Compare current SQLAlchemy models against database schema
- Identify new tables, dropped tables, added/removed columns, type changes
- Detect index additions/removals, constraint changes
- Generate `upgrade()` and `downgrade()` functions
**Prisma:**
- Run `prisma migrate diff` to compare schema.prisma against database
- Identify model additions, field changes, relation updates
- Generate migration SQL and Prisma migration directory
**Raw SQL:**
- Auto-detection not available; create empty template
- Include commented sections for UP and DOWN operations
### 3. Generate Migration File
Create migration file following the naming convention:
| Tool | Format | Example |
|------|--------|---------|
| Alembic | `{revision}_{description}.py` | `a1b2c3d4_add_orders_table.py` |
| Prisma | `{timestamp}_{description}/migration.sql` | `20240115120000_add_orders_table/migration.sql` |
| Raw SQL | `{sequence}_{description}.sql` | `003_add_orders_table.sql` |
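For the raw-SQL case, the next sequence number can be derived from the files already in the migration directory; a sketch (padding width is an assumption):

```python
import re
from pathlib import Path

def next_sql_migration_name(migrations_dir: str, description: str) -> str:
    """Next zero-padded sequence number based on existing NNN_*.sql files."""
    pattern = re.compile(r"^(\d+)_.*\.sql$")
    numbers = [
        int(m.group(1))
        for p in Path(migrations_dir).glob("*.sql")
        if (m := pattern.match(p.name))
    ]
    seq = max(numbers, default=0) + 1
    # Slugify the description: lowercase, non-alphanumerics to underscores
    slug = re.sub(r"[^a-z0-9]+", "_", description.lower()).strip("_")
    return f"{seq:03d}_{slug}.sql"
```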
### 4. Include Safety Checks
Every generated migration includes:
- Transaction wrapping (BEGIN/COMMIT or framework equivalent)
- Data preservation warnings for destructive operations
- Rollback function/section (downgrade in Alembic, DOWN in raw SQL)
- Comments explaining each operation
### 5. Validate Generated Migration
Run safety checks from `skills/migration-safety.md` on the generated file before presenting to user.
## Output Format
```
+----------------------------------------------------------------------+
| DB-MIGRATE - Generate |
+----------------------------------------------------------------------+
Tool: Alembic
Mode: auto-detect
Description: add_orders_table
Changes Detected:
[+] Table: orders (5 columns)
[+] Column: orders.user_id (FK -> users.id)
[+] Index: ix_orders_user_id
Files Created:
[+] alembic/versions/a1b2c3d4_add_orders_table.py
Migration Preview:
upgrade():
- CREATE TABLE orders (id, user_id, total, status, created_at)
- CREATE INDEX ix_orders_user_id ON orders(user_id)
- ADD FOREIGN KEY orders.user_id -> users.id
downgrade():
- DROP INDEX ix_orders_user_id
- DROP TABLE orders
Safety Check: PASS (no destructive operations)
Next Steps:
- Review generated migration file
- Run /db-migrate validate for safety analysis
- Run /db-migrate plan to see execution plan
```
## Important Notes
- Auto-detection works best with Alembic and Prisma
- Always review generated migrations before applying
- Destructive operations (DROP, ALTER TYPE) are flagged with warnings
- The `--empty` flag is useful for data migrations that cannot be auto-detected


@@ -0,0 +1,122 @@
---
name: db-migrate history
description: Display migration history and current state
agent: migration-planner
---
# /db-migrate history - Migration History
## Skills to Load
- skills/orm-detection.md
- skills/visual-header.md
## Visual Output
Display header: `DB-MIGRATE - History`
## Usage
```
/db-migrate history [--limit=<n>] [--status=applied|pending|all] [--verbose]
```
**Arguments:**
- `--limit`: Number of migrations to show (default: 20)
- `--status`: Filter by status (default: all)
- `--verbose`: Show full migration details including SQL operations
## Prerequisites
Run `/db-migrate setup` first. Reads `.db-migrate.json` for tool and configuration.
## Process
### 1. Read Migration Source
Depending on the detected tool:
**Alembic:**
- Read `alembic_version` table for applied migrations
- Scan `alembic/versions/` directory for all migration files
- Cross-reference to determine pending migrations
**Prisma:**
- Read `_prisma_migrations` table for applied migrations
- Scan `prisma/migrations/` directory for all migration directories
- Cross-reference applied vs available
**Raw SQL:**
- Read migration tracking table (if exists) for applied migrations
- Scan migration directory for numbered SQL files
- Determine state from sequence numbers
### 2. Build Timeline
For each migration, determine:
- Migration identifier (revision hash, timestamp, sequence number)
- Description (extracted from filename or metadata)
- Status: Applied, Pending, or Failed
- Applied timestamp (if available from tracking table)
- Author (if available from migration metadata)
### 3. Detect Anomalies
Flag unusual states:
- Out-of-order migrations (gap in sequence)
- Failed migrations that need manual intervention
- Migration files present in directory but not in tracking table
- Entries in tracking table without corresponding files (deleted migrations)
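The cross-reference between the tracking table and the filesystem reduces to set operations; a sketch:

```python
def cross_reference(applied: set[str], on_disk: set[str]) -> dict[str, set[str]]:
    """Partition migration identifiers into applied/pending/orphaned.

    'orphaned' = tracking-table entries whose file was deleted.
    """
    return {
        "applied": applied & on_disk,
        "pending": on_disk - applied,
        "orphaned": applied - on_disk,
    }
```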
### 4. Display History
Present chronological list with status indicators.
## Output Format
```
+----------------------------------------------------------------------+
| DB-MIGRATE - History |
+----------------------------------------------------------------------+
Tool: Alembic
Database: PostgreSQL (myapp_production)
Total Migrations: 8 (6 applied, 2 pending)
MIGRATION HISTORY
# Status Timestamp Description
-- -------- -------------------- ----------------------------------------
1 [applied] 2024-01-05 10:30:00 initial_schema
2 [applied] 2024-01-12 14:15:00 add_users_table
3 [applied] 2024-01-20 09:45:00 add_products_table
4 [applied] 2024-02-01 11:00:00 add_orders_table
5 [applied] 2024-02-15 16:30:00 add_user_roles
6 [applied] 2024-03-01 08:20:00 add_order_status_column
7 [pending] -- add_order_items_table
8 [pending] -- add_payment_tracking
Current Head: migration_006_add_order_status_column
Pending Count: 2
No anomalies detected.
```
### Verbose Mode
With `--verbose`, each migration expands to show:
```
4 [applied] 2024-02-01 11:00:00 add_orders_table
Operations:
[+] CREATE TABLE orders (id, user_id, total, status, created_at)
[+] CREATE INDEX ix_orders_user_id
[+] ADD FOREIGN KEY orders.user_id -> users.id
Rollback: Available (DROP TABLE orders)
```
## Important Notes
- History reads from both the database tracking table and the filesystem
- If database is unreachable, only filesystem state is shown (no applied timestamps)
- Anomalies like missing files or orphaned tracking entries should be resolved manually


@@ -0,0 +1,136 @@
---
name: db-migrate plan
description: Show execution plan with rollback strategy
agent: migration-planner
---
# /db-migrate plan - Migration Execution Plan
## Skills to Load
- skills/rollback-patterns.md
- skills/migration-safety.md
- skills/visual-header.md
## Visual Output
Display header: `DB-MIGRATE - Plan`
## Usage
```
/db-migrate plan [--target=<migration>] [--include-rollback]
```
**Arguments:**
- `--target`: Plan up to specific migration (default: all pending)
- `--include-rollback`: Show rollback plan alongside forward plan
## Prerequisites
Run `/db-migrate setup` first. Pending migrations must exist.
## Process
### 1. Determine Current State
Query the migration history to find:
- Latest applied migration
- All pending (unapplied) migrations in order
- Any out-of-order migrations (applied but not contiguous)
### 2. Build Forward Plan
For each pending migration, document:
- Migration identifier and description
- SQL operations that will execute (summarized)
- Estimated lock duration for ALTER operations
- Dependencies on previous migrations
- Expected outcome (tables/columns affected)
### 3. Build Rollback Plan (if --include-rollback)
For each migration in reverse order, document:
- Rollback/downgrade operations
- Data recovery strategy (if destructive operations present)
- Point-of-no-return warnings (migrations that cannot be fully rolled back)
- Recommended backup steps before applying
### 4. Risk Assessment
Evaluate the complete plan:
- Total number of operations
- Presence of destructive operations (DROP, ALTER TYPE)
- Estimated total lock time
- Data migration volume (if data changes included)
- Recommended maintenance window duration
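Folding per-step risks into a plan-level verdict can be as simple as a max over an ordered scale; the escalation rule below is an assumption for illustration, not the plugin's actual policy:

```python
# Ordered scale: later entries are worse.
RISK_ORDER = ["LOW", "MEDIUM", "HIGH"]

def overall_risk(step_risks: list[str], destructive_count: int) -> str:
    """Highest per-step risk, escalated to HIGH if anything is destructive.

    Assumes every entry in step_risks appears in RISK_ORDER.
    """
    if destructive_count > 0:
        return "HIGH"
    if not step_risks:
        return "LOW"
    return max(step_risks, key=RISK_ORDER.index)
```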
### 5. Present Plan
Display ordered execution plan with risk indicators.
## Output Format
```
+----------------------------------------------------------------------+
| DB-MIGRATE - Plan |
+----------------------------------------------------------------------+
Current State: migration_005_add_user_roles (applied)
Pending: 3 migrations
FORWARD PLAN
Step 1: migration_006_add_orders_table
Operations:
[+] CREATE TABLE orders (5 columns)
[+] CREATE INDEX ix_orders_user_id
[+] ADD FOREIGN KEY orders.user_id -> users.id
Lock Impact: None (new table)
Risk: LOW
Step 2: migration_007_add_order_items
Operations:
[+] CREATE TABLE order_items (4 columns)
[+] CREATE INDEX ix_order_items_order_id
Lock Impact: None (new table)
Risk: LOW
Step 3: migration_008_add_status_to_orders
Operations:
[~] ADD COLUMN orders.status VARCHAR(20) DEFAULT 'pending'
[~] ADD CHECK CONSTRAINT valid_status
Lock Impact: ~2s (instant ADD COLUMN with DEFAULT in PostgreSQL 11+)
Risk: LOW
ROLLBACK PLAN
Step 3 (reverse): Undo migration_008
[~] DROP CONSTRAINT valid_status
[~] DROP COLUMN orders.status
Reversible: YES
Step 2 (reverse): Undo migration_007
[-] DROP TABLE order_items
Reversible: YES (but data is lost)
Step 1 (reverse): Undo migration_006
[-] DROP TABLE orders
Reversible: YES (but data is lost)
RISK SUMMARY
Total Operations: 9
Destructive: 0
Lock Time: ~2 seconds
Risk Level: LOW
Maintenance Window: Not required
RECOMMENDATION: Safe to apply without maintenance window.
```
## Important Notes
- The plan is informational; it does not apply any migrations
- Lock time estimates are approximate and depend on table size
- Always back up the database before applying destructive migrations
- Out-of-order migrations are flagged as warnings


@@ -0,0 +1,130 @@
---
name: db-migrate rollback
description: Generate rollback migration for a previously applied migration
agent: migration-planner
---
# /db-migrate rollback - Rollback Generator
## Skills to Load
- skills/rollback-patterns.md
- skills/visual-header.md
## Visual Output
Display header: `DB-MIGRATE - Rollback`
## Usage
```
/db-migrate rollback [<migration>] [--steps=<n>] [--dry-run]
```
**Arguments:**
- `<migration>`: Specific migration to roll back to (exclusive — rolls back everything after it)
- `--steps`: Number of migrations to roll back from current head (default: 1)
- `--dry-run`: Show what would be rolled back without generating files
## Prerequisites
Run `/db-migrate setup` first. Target migrations must have rollback/downgrade operations defined.
## Process
### 1. Identify Rollback Target
Determine which migrations to reverse:
- If `<migration>` specified: roll back all migrations applied after it
- If `--steps=N`: roll back the last N applied migrations
- Default: roll back the single most recent migration
### 2. Check Rollback Feasibility
For each migration to roll back, verify:
| Check | Result | Action |
|-------|--------|--------|
| Downgrade function exists | Yes | Proceed |
| Downgrade function exists | No | FAIL: Cannot auto-rollback; manual intervention needed |
| Migration contains DROP TABLE | N/A | WARN: Data cannot be restored by rollback |
| Migration contains data changes | N/A | WARN: DML changes may not be fully reversible |
| Later migrations depend on this | Yes | Must roll back dependents first |
### 3. Generate Rollback
Depending on the tool:
**Alembic:**
- Generate `alembic downgrade <target_revision>` command
- Show the downgrade SQL that will execute
- If downgrade function is incomplete, generate supplementary migration
**Prisma:**
- Generate rollback migration SQL based on diff
- Create new migration directory with rollback operations
**Raw SQL:**
- Generate new numbered migration file with reverse operations
- Include transaction wrapping and safety checks
### 4. Data Recovery Plan
If rolled-back migrations included destructive operations:
- Recommend backup restoration for lost data
- Suggest data export before rollback
- Identify tables/columns that will be recreated empty
### 5. Present Rollback Plan
Show the complete rollback strategy with warnings.
## Output Format
```
+----------------------------------------------------------------------+
| DB-MIGRATE - Rollback |
+----------------------------------------------------------------------+
Mode: Roll back 2 steps
Tool: Alembic
ROLLBACK PLAN
Step 1: Undo migration_008_add_status_to_orders
Operations:
[-] DROP CONSTRAINT valid_status
[-] DROP COLUMN orders.status
Data Impact: Column data will be lost (12,450 rows)
Reversible: Partially (column recreated empty on re-apply)
Step 2: Undo migration_007_add_order_items
Operations:
[-] DROP TABLE order_items
Data Impact: Table and all data will be lost (3,200 rows)
Reversible: Partially (table recreated empty on re-apply)
WARNINGS
[!] 2 operations will cause data loss
[!] Back up affected tables before proceeding
COMMANDS TO EXECUTE
alembic downgrade -2
# Or: alembic downgrade migration_006_add_orders_table
Generated Files:
(No new files — Alembic uses existing downgrade functions)
RECOMMENDED PRE-ROLLBACK STEPS
1. pg_dump --table=order_items myapp > order_items_backup.sql
2. pg_dump --table=orders --column-inserts myapp > orders_status_backup.sql
3. Review downgrade SQL with /db-migrate plan --include-rollback
```
## Important Notes
- Rollback does NOT execute migrations; it generates the plan and/or files
- Always back up data before rolling back destructive migrations
- Some migrations are irreversible (data-only changes without backup)
- Use `--dry-run` to preview without creating any files
- After rollback, verify application compatibility with the older schema


@@ -0,0 +1,92 @@
---
name: db-migrate setup
description: Setup wizard for migration tool detection and configuration
agent: migration-planner
---
# /db-migrate setup - Migration Platform Setup Wizard
## Skills to Load
- skills/orm-detection.md
- skills/visual-header.md
## Visual Output
Display header: `DB-MIGRATE - Setup Wizard`
## Usage
```
/db-migrate setup
```
## Workflow
### Phase 1: Migration Tool Detection
Scan the project for migration tool indicators:
| File / Pattern | Tool | Confidence |
|----------------|------|------------|
| `alembic.ini` in project root | Alembic | High |
| `alembic/` directory | Alembic | High |
| `sqlalchemy` in requirements | Alembic (likely) | Medium |
| `prisma/schema.prisma` | Prisma | High |
| `@prisma/client` in package.json | Prisma | High |
| `migrations/` with numbered `.sql` files | Raw SQL | Medium |
| `flyway.conf` | Flyway (raw SQL) | High |
| `knexfile.js` or `knexfile.ts` | Knex | High |
If no tool detected, ask user to select one.
### Phase 2: Configuration Mapping
Identify existing migration configuration:
- **Migration directory**: Where migration files live
- **Model directory**: Where ORM models are defined (for auto-generation)
- **Database URL**: Connection string location (env var, config file)
- **Naming convention**: How migration files are named
- **Current state**: Latest applied migration
### Phase 3: Database Connection Test
Attempt to verify database connectivity:
- Read connection string from detected location
- Test connection (read-only)
- Report database type (PostgreSQL, MySQL, SQLite)
- Report current schema version if detectable
### Phase 4: Validation
Display detected configuration summary and ask for confirmation.
## Output Format
```
+----------------------------------------------------------------------+
| DB-MIGRATE - Setup Wizard |
+----------------------------------------------------------------------+
Migration Tool: Alembic 1.13.1
ORM: SQLAlchemy 2.0.25
Database: PostgreSQL 16.1
Migration Dir: ./alembic/versions/
Model Dir: ./app/models/
DB URL Source: DATABASE_URL env var
Current State:
Latest Migration: 2024_01_15_add_orders_table (applied)
Pending: 0 migrations
Configuration saved to .db-migrate.json
```
## Important Notes
- This command does NOT run any migrations; it only detects and configures
- Database connection test is read-only (SELECT 1)
- If `.db-migrate.json` already exists, offer to update or keep
- All subsequent commands rely on this configuration


@@ -0,0 +1,127 @@
---
name: db-migrate validate
description: Check migration safety before applying
agent: migration-auditor
---
# /db-migrate validate - Migration Safety Validator
## Skills to Load
- skills/migration-safety.md
- skills/visual-header.md
## Visual Output
Display header: `DB-MIGRATE - Validate`
## Usage
```
/db-migrate validate [<migration-file>] [--all] [--strict]
```
**Arguments:**
- `<migration-file>`: Specific migration to validate (default: latest unapplied)
- `--all`: Validate all unapplied migrations
- `--strict`: Treat warnings as errors
## Prerequisites
Run `/db-migrate setup` first. Migration files must exist in the configured directory.
## Process
### 1. Identify Target Migrations
Determine which migrations to validate:
- Specific file if provided
- All unapplied migrations if `--all`
- Latest unapplied migration by default
### 2. Parse Migration Operations
Read each migration file and extract SQL operations:
- Table creation/deletion
- Column additions, modifications, removals
- Index operations
- Constraint changes
- Data manipulation (INSERT, UPDATE, DELETE)
- Custom SQL blocks
### 3. Safety Analysis
Apply safety rules from `skills/migration-safety.md`:
| Check | Severity | Description |
|-------|----------|-------------|
| DROP TABLE | FAIL | Permanent data loss; requires explicit acknowledgment |
| DROP COLUMN | FAIL | Data loss; must confirm column is unused |
| ALTER COLUMN type (narrowing) | FAIL | Data truncation risk (e.g., VARCHAR(255) to VARCHAR(50)) |
| ALTER COLUMN type (widening) | WARN | Safe but verify application handles new type |
| ALTER COLUMN NOT NULL (existing data) | FAIL | May fail if NULLs exist; needs DEFAULT or backfill |
| RENAME TABLE/COLUMN | WARN | Application code must be updated simultaneously |
| Large table ALTER | WARN | May lock table for extended time; consider batching |
| Missing transaction wrapper | FAIL | Partial migrations leave inconsistent state |
| Missing rollback/downgrade | WARN | Cannot undo if problems occur |
| Data migration in schema migration | WARN | Should be separate migration |
| No-op migration | INFO | Migration has no effect |
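A rule table like this maps naturally onto (pattern, severity, message) triples; an illustrative subset, with the same regex caveats as any text-based SQL check:

```python
import re

# A subset of the safety table above as (regex, severity, finding).
RULES = [
    (re.compile(r"\bDROP\s+TABLE\b", re.I), "FAIL", "DROP TABLE: permanent data loss"),
    (re.compile(r"\bDROP\s+COLUMN\b", re.I), "FAIL", "DROP COLUMN: data loss"),
    (re.compile(r"\bRENAME\s+(TABLE|COLUMN)\b", re.I), "WARN",
     "RENAME: update application code simultaneously"),
]

def validate_sql(sql: str) -> list[tuple[str, str]]:
    """Return (severity, message) findings; empty list means no hits."""
    return [(sev, msg) for rx, sev, msg in RULES if rx.search(sql)]

def verdict(findings: list[tuple[str, str]], strict: bool = False) -> str:
    """--strict promotes WARN to FAIL, matching the command's flag."""
    sevs = {"FAIL" if strict and s == "WARN" else s for s, _ in findings}
    return "FAIL" if "FAIL" in sevs else ("WARN" if "WARN" in sevs else "PASS")
```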
### 4. Lock Duration Estimation
For ALTER operations on existing tables, estimate lock impact:
- Table size (if database connection available)
- Operation type (ADD COLUMN is instant in PostgreSQL, ALTER TYPE is not)
- Concurrent operation risk
### 5. Generate Report
Group findings by severity with actionable recommendations.
## Output Format
```
+----------------------------------------------------------------------+
| DB-MIGRATE - Validate |
+----------------------------------------------------------------------+
Target: alembic/versions/b3c4d5e6_drop_legacy_columns.py
Tool: Alembic
FINDINGS
FAIL (2)
1. DROP COLUMN users.legacy_email
Risk: Permanent data loss for 12,450 rows with values
Fix: Verify column is unused, add data backup step, or
rename column first and drop in a future migration
2. ALTER COLUMN orders.total VARCHAR(10) -> VARCHAR(5)
Risk: Data truncation for values longer than 5 characters
Fix: Check max actual length: SELECT MAX(LENGTH(total)) FROM orders
If safe, document in migration comment
WARN (1)
1. Missing downgrade for DROP COLUMN
Risk: Cannot rollback this migration
Fix: Add downgrade() that re-creates column (data will be lost)
INFO (1)
1. Migration includes both schema and data changes
Suggestion: Separate into two migrations for cleaner rollback
SUMMARY
Operations: 4 (2 DDL, 2 DML)
FAIL: 2 (must fix before applying)
WARN: 1 (should fix)
INFO: 1 (improve)
VERDICT: FAIL (2 blocking issues)
```
## Exit Guidance
- FAIL: Do not apply migration until issues are resolved
- WARN: Review carefully; proceed with caution
- INFO: Suggestions for improvement; safe to proceed
- `--strict`: All WARN become FAIL
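The guidance above reduces to a small verdict function; this sketch (names assumed) shows how `--strict` promotes WARN findings to FAIL before the verdict is computed.

```python
def compute_verdict(findings: list[str], strict: bool = False) -> str:
    """Map finding severities ("FAIL", "WARN", "INFO") to a verdict.

    With strict=True (the --strict flag), every WARN becomes a FAIL.
    """
    if strict:
        findings = ["FAIL" if f == "WARN" else f for f in findings]
    if "FAIL" in findings:
        return "FAIL"
    if "WARN" in findings:
        return "WARN"
    return "PASS"
```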


@@ -0,0 +1,19 @@
---
name: db-migrate
description: Database migration toolkit — generate, validate, and manage schema migrations
---
# /db-migrate
Database migration management for Alembic, Prisma, and raw SQL.
## Sub-commands
| Sub-command | Description |
|-------------|-------------|
| `/db-migrate setup` | Setup wizard for migration tool detection |
| `/db-migrate generate` | Generate migration from model diff |
| `/db-migrate validate` | Check migration safety |
| `/db-migrate plan` | Show execution plan with rollback strategy |
| `/db-migrate history` | Display migration history |
| `/db-migrate rollback` | Generate rollback migration |


@@ -0,0 +1,119 @@
---
name: migration-safety
description: Rules for detecting destructive operations, data loss risks, and long-running locks
---
# Migration Safety
## Purpose
Defines safety rules for analyzing database migrations. This skill is loaded by both the `migration-planner` (during generation) and `migration-auditor` (during validation) agents to ensure migrations do not cause data loss or operational issues.
---
## Destructive Operations
### FAIL-Level (Block Migration)
| Operation | Risk | Detection Pattern |
|-----------|------|-------------------|
| `DROP TABLE` | Complete data loss | `DROP TABLE` without preceding backup/export |
| `DROP COLUMN` | Column data loss | `DROP COLUMN` without verification step |
| `ALTER COLUMN` type narrowing | Data truncation | VARCHAR(N) to smaller N, INTEGER to SMALLINT |
| `ALTER COLUMN` SET NOT NULL | Failure if NULLs exist | `SET NOT NULL` without DEFAULT or backfill |
| `TRUNCATE TABLE` | All rows deleted | `TRUNCATE` in migration file |
| `DELETE FROM` without WHERE | All rows deleted | `DELETE FROM table` without WHERE clause |
| Missing transaction | Partial migration risk | DDL statements outside BEGIN/COMMIT |
### WARN-Level (Report, Continue)
| Operation | Risk | Detection Pattern |
|-----------|------|-------------------|
| `RENAME TABLE` | App code must update | `ALTER TABLE ... RENAME TO` |
| `RENAME COLUMN` | App code must update | `ALTER TABLE ... RENAME COLUMN` |
| `ALTER COLUMN` type widening | Usually safe but verify | INTEGER to BIGINT, VARCHAR to TEXT |
| `CREATE INDEX` (non-concurrent) | Table lock during build | `CREATE INDEX` without `CONCURRENTLY` |
| Large table ALTER | Extended lock time | Any ALTER on tables with 100K+ rows |
| Mixed schema + data migration | Complex rollback | DML and DDL in same migration file |
| Missing downgrade/rollback | Cannot undo | No downgrade function or DOWN section |
### INFO-Level (Suggestions)
| Operation | Suggestion | Detection Pattern |
|-----------|-----------|-------------------|
| No-op migration | Remove or document why | Empty upgrade function |
| Missing IF EXISTS/IF NOT EXISTS | Add for idempotency | `CREATE TABLE` without `IF NOT EXISTS` |
| Non-concurrent index on PostgreSQL | Use CONCURRENTLY | `CREATE INDEX` could be `CREATE INDEX CONCURRENTLY` |
---
## Lock Duration Rules
### PostgreSQL
| Operation | Lock Type | Duration |
|-----------|-----------|----------|
| ADD COLUMN (no default) | ACCESS EXCLUSIVE | Instant (metadata only) |
| ADD COLUMN with DEFAULT | ACCESS EXCLUSIVE | Instant (PG 11+, non-volatile defaults only) |
| ALTER COLUMN TYPE | ACCESS EXCLUSIVE | Full table rewrite |
| DROP COLUMN | ACCESS EXCLUSIVE | Instant (metadata only) |
| CREATE INDEX | SHARE | Proportional to table size |
| CREATE INDEX CONCURRENTLY | SHARE UPDATE EXCLUSIVE | Longer but non-blocking |
| ADD CONSTRAINT (CHECK) | ACCESS EXCLUSIVE | Scans entire table |
| ADD CONSTRAINT NOT VALID, then VALIDATE | SHARE UPDATE EXCLUSIVE (validation) | Instant add; non-blocking validation (recommended for large tables) |
### MySQL
| Operation | Lock Type | Duration |
|-----------|-----------|----------|
| Most ALTER TABLE | COPY or INPLACE (version- and operation-dependent) | Proportional to table size |
| ADD COLUMN | INSTANT (8.0+, with restrictions) | Metadata only when supported |
| CREATE INDEX | INPLACE (InnoDB online DDL) or table copy | Proportional to table size |
---
## Recommended Patterns
### Safe Column Addition
```sql
-- Good: nullable column, brief metadata-only lock
ALTER TABLE users ADD COLUMN middle_name VARCHAR(100);
-- Then backfill (separate migration; batch by id range on large tables):
UPDATE users SET middle_name = '' WHERE middle_name IS NULL;
-- Then add constraint (separate migration):
ALTER TABLE users ALTER COLUMN middle_name SET NOT NULL;
```
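On large tables, the backfill step above is better run in primary-key batches so each statement holds row locks only briefly. A hypothetical generator (integer `id` key and all parameter names are assumptions) might emit the batched statements like this:

```python
def batched_backfill_sql(table: str, set_clause: str, predicate: str,
                         max_id: int, batch_size: int = 10_000) -> list[str]:
    """Generate id-range batched UPDATE statements (illustrative sketch).

    Assumes an integer primary key `id`. A real runner should also commit
    between batches and throttle if replication lag grows.
    """
    statements = []
    for start in range(1, max_id + 1, batch_size):
        end = min(start + batch_size - 1, max_id)
        statements.append(
            f"UPDATE {table} SET {set_clause} "
            f"WHERE id BETWEEN {start} AND {end} AND {predicate};"
        )
    return statements
```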
### Safe Column Removal
```sql
-- Step 1: Remove from application code first
-- Step 2: Verify column is unused (no queries reference it)
-- Step 3: Drop in migration
ALTER TABLE users DROP COLUMN IF EXISTS legacy_field;
```
### Safe Type Change
```sql
-- Step 1: Add new column
ALTER TABLE orders ADD COLUMN amount_new NUMERIC(10,2);
-- Step 2: Backfill (separate migration)
UPDATE orders SET amount_new = amount::NUMERIC(10,2);
-- Step 3: Swap columns (separate migration)
ALTER TABLE orders DROP COLUMN amount;
ALTER TABLE orders RENAME COLUMN amount_new TO amount;
```
---
## Pre-Migration Checklist
Before applying any migration in production:
1. Database backup completed and verified
2. Migration validated with `/db-migrate validate`
3. Execution plan reviewed with `/db-migrate plan`
4. Rollback strategy documented and tested
5. Maintenance window scheduled (if required by lock analysis)
6. Application deployment coordinated (if schema change affects code)


@@ -0,0 +1,104 @@
---
name: naming-conventions
description: Migration file naming rules — timestamp prefixes, descriptive suffixes, ordering
---
# Naming Conventions
## Purpose
Defines standard naming patterns for migration files across all supported tools. This skill ensures consistent, descriptive, and correctly ordered migration files.
---
## General Rules
1. **Use lowercase with underscores** for descriptions (snake_case)
2. **Be descriptive but concise**: Describe WHAT changes, not WHY
3. **Use verb prefixes**: `add_`, `drop_`, `rename_`, `alter_`, `create_`, `remove_`
4. **Include the table name** when the migration affects a single table
5. **Never use generic names** like `migration_1`, `update`, `fix`
---
## Tool-Specific Patterns
### Alembic
**Format:** `{revision_hash}_{description}.py`
The revision hash is auto-generated by Alembic. The description is provided by the user.
| Example | Description |
|---------|-------------|
| `a1b2c3d4_create_users_table.py` | Initial table creation |
| `e5f6a7b8_add_email_to_users.py` | Add column |
| `c9d0e1f2_drop_legacy_users_columns.py` | Remove columns |
| `a3b4c5d6_rename_orders_total_to_amount.py` | Rename column |
| `e7f8a9b0_add_index_on_orders_user_id.py` | Add index |
**Description rules for Alembic:**
- Max 60 characters for the description portion
- No spaces (use underscores)
- Alembic auto-generates the revision hash
### Prisma
**Format:** `{YYYYMMDDHHMMSS}_{description}/migration.sql`
| Example | Description |
|---------|-------------|
| `20240115120000_create_users_table/migration.sql` | Initial table |
| `20240120093000_add_email_to_users/migration.sql` | Add column |
| `20240201110000_add_orders_table/migration.sql` | New table |
**Description rules for Prisma:**
- Prisma generates the timestamp automatically with `prisma migrate dev`
- Description is the `--name` argument
- Use snake_case, no spaces
### Raw SQL
**Format:** `{NNN}_{description}.sql`
Sequential numbering with zero-padded prefix:
| Example | Description |
|---------|-------------|
| `001_create_users_table.sql` | First migration |
| `002_add_email_to_users.sql` | Second migration |
| `003_create_orders_table.sql` | Third migration |
**Numbering rules:**
- Zero-pad to 3 digits minimum (001, 002, ..., 999)
- If project exceeds 999, use 4 digits (0001, 0002, ...)
- Never reuse numbers, even for deleted migrations
- Gaps in sequence are acceptable (001, 002, 005 is fine)
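The numbering rules above can be automated. This sketch (function name assumed, `NNN_description.sql` layout assumed) finds the highest existing number, tolerates gaps, and widens to four digits past 999:

```python
import re
from pathlib import Path

def next_migration_name(migrations_dir: str, description: str) -> str:
    """Compute the next zero-padded migration filename (sketch)."""
    pattern = re.compile(r"^(\d+)_")
    numbers = [
        int(m.group(1))
        for p in Path(migrations_dir).glob("*.sql")
        if (m := pattern.match(p.name))
    ]
    next_n = max(numbers, default=0) + 1
    width = 4 if next_n > 999 else 3  # zero-pad to 3 digits, 4 past 999
    return f"{next_n:0{width}d}_{description}.sql"
```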
---
## Description Verb Prefixes
| Prefix | Use When |
|--------|----------|
| `create_` | New table |
| `add_` | New column, index, or constraint to existing table |
| `drop_` | Remove table |
| `remove_` | Remove column, index, or constraint |
| `rename_` | Rename table or column |
| `alter_` | Change column type or constraint |
| `backfill_` | Data-only migration (populate column values) |
| `merge_` | Combine tables or columns |
| `split_` | Separate table or column into multiple |
---
## Anti-Patterns
| Bad Name | Problem | Better Name |
|----------|---------|-------------|
| `migration_1.py` | Not descriptive | `create_users_table.py` |
| `update.sql` | Too vague | `add_status_to_orders.sql` |
| `fix_bug.py` | Describes why, not what | `alter_email_column_length.py` |
| `changes.sql` | Not descriptive | `add_index_on_users_email.sql` |
| `final_migration.py` | Nothing is ever final | `remove_deprecated_columns.py` |
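The naming rules and anti-patterns above lend themselves to a lint check. A minimal sketch (all names are assumptions; the generic-name list is illustrative, not exhaustive):

```python
import re

VERB_PREFIXES = ("create_", "add_", "drop_", "remove_", "rename_",
                 "alter_", "backfill_", "merge_", "split_")
# Illustrative subset of the anti-pattern names above
GENERIC_NAMES = re.compile(r"^(migration_\d+|update|fix\w*|changes|final_\w+)$")

def check_migration_name(description: str) -> list[str]:
    """Return a list of naming problems (empty list means the name is OK)."""
    problems = []
    if not re.fullmatch(r"[a-z0-9_]+", description):
        problems.append("use lowercase snake_case only")
    if GENERIC_NAMES.match(description):
        problems.append("name is generic; describe WHAT changes")
    if not description.startswith(VERB_PREFIXES):
        problems.append("start with a verb prefix (add_, drop_, rename_, ...)")
    return problems
```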


@@ -0,0 +1,94 @@
---
name: orm-detection
description: Detect Alembic, Prisma, or raw SQL migration tools and locate configuration files
---
# ORM Detection
## Purpose
Identify the database migration tool in use and map its configuration. This skill is loaded by the `migration-planner` agent during setup and migration generation to ensure tool-appropriate output.
---
## Detection Rules
### Alembic Detection
| Indicator | Location | Confidence |
|-----------|----------|------------|
| `alembic.ini` file | Project root | High |
| `alembic/` directory with `env.py` | Project root | High |
| `alembic/versions/` directory | Within alembic dir | High |
| `sqlalchemy` + `alembic` in requirements | `requirements.txt`, `pyproject.toml` | Medium |
| `from alembic import op` in Python files | `*.py` in versions dir | High |
**Alembic Configuration Files:**
- `alembic.ini` — Main config (database URL, migration directory)
- `alembic/env.py` — Migration environment (model imports, target metadata)
- `alembic/versions/` — Migration files directory
**Model Location:**
- Look for `Base = declarative_base()` or `class Base(DeclarativeBase)` in Python files
- Check `target_metadata` in `env.py` to find the models module
- Common locations: `app/models/`, `models/`, `src/models/`
### Prisma Detection
| Indicator | Location | Confidence |
|-----------|----------|------------|
| `prisma/schema.prisma` file | Project root | High |
| `@prisma/client` in package.json | `package.json` | High |
| `prisma/migrations/` directory | Within prisma dir | High |
| `npx prisma` in scripts | `package.json` scripts | Medium |
**Prisma Configuration Files:**
- `prisma/schema.prisma` — Schema definition (models, datasource, generator)
- `prisma/migrations/` — Migration directories (timestamp-named)
- `.env``DATABASE_URL` connection string
### Raw SQL Detection
| Indicator | Location | Confidence |
|-----------|----------|------------|
| `migrations/` dir with numbered `.sql` files | Project root | Medium |
| `flyway.conf` | Project root | High (Flyway) |
| `knexfile.js` or `knexfile.ts` | Project root | High (Knex) |
| `db/migrate/` directory | Project root | Medium (Rails-style) |
**Raw SQL Configuration:**
- Migration directory location
- Naming pattern (sequential numbers, timestamps)
- Tracking table name (if database-tracked)
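The high-confidence indicators above can be checked in priority order. A minimal detection sketch (function name and return labels are assumptions; only filesystem indicators are checked, not package manifests):

```python
from pathlib import Path

def detect_migration_tool(project_root: str) -> str:
    """Check high-confidence filesystem indicators, in priority order."""
    root = Path(project_root)
    if (root / "alembic.ini").exists() or (root / "alembic" / "env.py").exists():
        return "alembic"
    if (root / "prisma" / "schema.prisma").exists():
        return "prisma"
    if (root / "flyway.conf").exists() or any(root.glob("knexfile.*")):
        return "raw-sql"
    migrations = root / "migrations"
    if migrations.is_dir() and any(migrations.glob("[0-9]*.sql")):
        return "raw-sql"
    return "unknown"
```

In an ambiguous project (several indicators firing at once), the result here would only be a starting point for the user prompt described below.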
---
## Database Connection Detection
Look for connection strings in this order:
1. `DATABASE_URL` environment variable
2. `.env` file in project root
3. `alembic.ini` `sqlalchemy.url` setting
4. `prisma/schema.prisma` `datasource` block
5. Application config files (`config.py`, `config.js`, `settings.py`)
**Database Type Detection:**
- `postgresql://` or `postgres://` — PostgreSQL
- `mysql://` — MySQL
- `sqlite:///` — SQLite
- `mongodb://` — MongoDB (not supported for SQL migrations)
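The scheme mapping above is a one-liner in practice; stripping any `+driver` suffix (e.g. `postgresql+psycopg2`) also handles SQLAlchemy-style URLs. Sketch (names assumed):

```python
def database_type(url: str) -> str:
    """Map a connection-string scheme to a database type (sketch)."""
    # Strip driver suffix: postgresql+psycopg2://... -> postgresql
    scheme = url.split("://", 1)[0].split("+", 1)[0]
    mapping = {
        "postgresql": "postgresql",
        "postgres": "postgresql",
        "mysql": "mysql",
        "sqlite": "sqlite",
        "mongodb": "unsupported",  # not supported for SQL migrations
    }
    return mapping.get(scheme, "unknown")
```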
---
## Version Detection
**Alembic**: Parse from `pip show alembic` or `requirements.txt` pin
**Prisma**: Parse from `package.json` `@prisma/client` version
**SQLAlchemy**: Parse from requirements; important for feature compatibility (1.4 vs 2.0 API)
---
## Ambiguous Projects
If multiple migration tools are detected (e.g., Alembic for backend + Prisma for a separate service), ask the user which one to target. Store selection in `.db-migrate.json`.


@@ -0,0 +1,157 @@
---
name: rollback-patterns
description: Standard rollback generation patterns, reverse operations, and data backup strategies
---
# Rollback Patterns
## Purpose
Defines patterns for generating safe rollback migrations. This skill is loaded by the `migration-planner` agent when generating migrations (to include downgrade sections) and when creating explicit rollback plans.
---
## Reverse Operation Map
| Forward Operation | Reverse Operation | Data Preserved |
|-------------------|-------------------|----------------|
| CREATE TABLE | DROP TABLE | No (all data lost) |
| DROP TABLE | CREATE TABLE (empty) | No (must restore from backup) |
| ADD COLUMN | DROP COLUMN | No (column data lost) |
| DROP COLUMN | ADD COLUMN (nullable) | No (must restore from backup) |
| RENAME TABLE | RENAME TABLE (back) | Yes |
| RENAME COLUMN | RENAME COLUMN (back) | Yes |
| ADD INDEX | DROP INDEX | Yes (data unaffected) |
| DROP INDEX | CREATE INDEX | Yes (data unaffected) |
| ADD CONSTRAINT | DROP CONSTRAINT | Yes |
| DROP CONSTRAINT | ADD CONSTRAINT | Yes (if data still valid) |
| ALTER COLUMN TYPE | ALTER COLUMN TYPE (back) | Depends on conversion |
| INSERT rows | DELETE matching rows | Yes (if identifiable) |
| UPDATE rows | UPDATE with original values | Only if originals saved |
| DELETE rows | INSERT saved rows | Only if backed up |
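The map above can be encoded as data, which also yields the "data preserved" verdict for a whole migration: reverse each operation in opposite order, and the rollback preserves data only if every step does. A sketch (operation names and structure are assumptions):

```python
# Hypothetical encoding of the reverse-operation map above:
# forward op -> (reverse op, data fully preserved by the reversal)
REVERSE_OPS = {
    "create_table":    ("drop_table", False),
    "drop_table":      ("create_table", False),  # structure only; data from backup
    "add_column":      ("drop_column", False),
    "drop_column":     ("add_column", False),
    "rename_table":    ("rename_table", True),
    "rename_column":   ("rename_column", True),
    "add_index":       ("drop_index", True),
    "drop_index":      ("create_index", True),
    "add_constraint":  ("drop_constraint", True),
    "drop_constraint": ("add_constraint", True),
}

def rollback_plan(operations: list[str]) -> tuple[list[str], bool]:
    """Reverse each op in opposite order; second value = data preserved."""
    reversed_ops, preserved = [], True
    for op in reversed(operations):
        reverse, keeps_data = REVERSE_OPS.get(op, ("manual_review", False))
        reversed_ops.append(reverse)
        preserved = preserved and keeps_data
    return reversed_ops, preserved
```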
---
## Rollback Classification
### Fully Reversible (Green)
Operations that can be undone with no data loss:
- RENAME operations (table, column)
- ADD/DROP INDEX
- ADD/DROP CONSTRAINT (when data satisfies constraint)
- ADD COLUMN (drop it in rollback)
### Partially Reversible (Yellow)
Operations where structure is restored but data is lost:
- CREATE TABLE (rollback = DROP TABLE; data lost)
- DROP COLUMN (rollback = ADD COLUMN; column data gone)
- ALTER COLUMN TYPE narrowing then widening (precision lost)
### Irreversible (Red)
Operations that cannot be meaningfully undone:
- DROP TABLE without backup (data permanently gone)
- TRUNCATE TABLE without backup
- DELETE without WHERE without backup
- Data transformation that loses information (e.g., hash, round)
---
## Backup Strategies
### Pre-Migration Table Backup
For migrations that will cause data loss, generate backup commands:
**PostgreSQL:**
```sql
-- Full table backup
CREATE TABLE _backup_users_20240115 AS SELECT * FROM users;
-- Column-only backup
CREATE TABLE _backup_users_email_20240115 AS
SELECT id, legacy_email FROM users;
```
**Export to file:**
```bash
pg_dump --table=users --column-inserts dbname > users_backup_20240115.sql
```
### Restoration Commands
Include restoration commands in rollback section:
```sql
-- Restore from backup table
INSERT INTO users (id, legacy_email)
SELECT id, legacy_email FROM _backup_users_email_20240115;
-- Clean up backup
DROP TABLE IF EXISTS _backup_users_email_20240115;
```
---
## Alembic Downgrade Patterns
```python
def downgrade():
# Reverse of upgrade, in opposite order
op.drop_index('ix_orders_user_id', table_name='orders')
op.drop_table('orders')
```
For complex downgrades with data restoration:
```python
def downgrade():
# Re-create dropped column
op.add_column('users', sa.Column('legacy_email', sa.String(255), nullable=True))
# Note: Data cannot be restored automatically
# Restore from backup: _backup_users_email_YYYYMMDD
```
---
## Prisma Rollback Patterns
Prisma does not have native downgrade support. Generate a new migration that reverses the operations:
```sql
-- Rollback: undo add_orders_table
DROP TABLE IF EXISTS "order_items";
DROP TABLE IF EXISTS "orders";
```
---
## Raw SQL Rollback Patterns
Always include DOWN section in migration files:
```sql
-- UP
CREATE TABLE orders (
id SERIAL PRIMARY KEY,
user_id INTEGER REFERENCES users(id),
total DECIMAL(10,2) NOT NULL
);
-- DOWN
DROP TABLE IF EXISTS orders;
```
---
## Point-of-No-Return Identification
Flag migrations that cross the point of no return:
1. **Data deletion without backup step**: Mark as irreversible
2. **Type narrowing that truncates data**: Data is permanently altered
3. **Hash/encrypt transformations**: Original values unrecoverable
4. **Aggregate/merge operations**: Individual records lost
When a migration includes irreversible operations, the rollback section must clearly state: "This migration cannot be fully rolled back. Data backup is required before applying."


@@ -0,0 +1,56 @@
---
name: visual-header
description: Standard header format for db-migrate commands and agents
---
# Visual Header
## Standard Format
Display at the start of every command execution:
```
+----------------------------------------------------------------------+
| DB-MIGRATE - [Command Name] |
+----------------------------------------------------------------------+
```
## Command Headers
| Command | Header Text |
|---------|-------------|
| db-migrate-setup | Setup Wizard |
| db-migrate-generate | Generate |
| db-migrate-validate | Validate |
| db-migrate-plan | Plan |
| db-migrate-history | History |
| db-migrate-rollback | Rollback |
## Summary Box Format
For completion summaries:
```
+============================================================+
| DB-MIGRATE [OPERATION] COMPLETE |
+============================================================+
| Component: [Status] |
| Component: [Status] |
+============================================================+
```
## Status Indicators
- Success: `[check]` or `Ready`
- Warning: `[!]` or `Partial`
- Failure: `[X]` or `Failed`
- New file: `[+]`
- Modified file: `[~]`
- Deleted file: `[-]`
## Risk Level Indicators
- LOW: Safe operation, no data loss risk
- MEDIUM: Reversible but requires attention
- HIGH: Potential data loss, backup required
- CRITICAL: Irreversible data loss, explicit approval required