refactor: extract skills from commands across 8 plugins

Refactored commands to extract reusable skills following the
Commands → Skills separation pattern. Each command is now <50 lines
and references skill files for detailed knowledge.

Plugins refactored:
- claude-config-maintainer: 5 commands → 7 skills
- code-sentinel: 3 commands → 2 skills
- contract-validator: 5 commands → 6 skills
- data-platform: 10 commands → 6 skills
- doc-guardian: 5 commands → 6 skills (replaced nested dir)
- git-flow: 8 commands → 7 skills

Skills contain: workflows, validation rules, conventions,
reference data, tool documentation

Commands now contain: YAML frontmatter, agent assignment,
skills list, brief workflow steps, parameters

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
commit 7c8a20c804
parent aad02ef2d9
2026-01-30 17:32:24 -05:00

71 changed files with 3896 additions and 3690 deletions

@@ -1,243 +1,49 @@
---
description: Interactive setup wizard for data-platform plugin - configures MCP server and optional PostgreSQL/dbt
---
# /initial-setup - Data Platform Setup Wizard

## Visual Output

When executing this command, display the plugin header:

```
┌──────────────────────────────────────────────────────────────────┐
│ 📊 DATA-PLATFORM · Setup Wizard                                  │
└──────────────────────────────────────────────────────────────────┘
```

Then proceed with the setup.

## Usage

```
/initial-setup
```

This command sets up the data-platform plugin with pandas, PostgreSQL, and dbt integration.

## Important Context

- **This command uses Bash, Read, Write, and AskUserQuestion tools** - NOT MCP tools
- **MCP tools won't work until after setup + session restart**
- **PostgreSQL and dbt are optional** - pandas tools work without them

---

## Phase 1: Environment Validation

### Step 1.1: Check Python Version

```bash
python3 --version
```

Requires Python 3.10+. If below, stop setup and inform user.

### Step 1.2: Report the Version Programmatically
```bash
python3 -c "import sys; print(f'Python {sys.version_info.major}.{sys.version_info.minor}')"
```
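
To make the "stop if below 3.10" rule mechanical rather than a visual comparison, a small gate like the following could back either step (a sketch, not part of the original command):

```bash
# Sketch (not in the original command): gate on the 3.10 minimum explicitly
python3 -c 'import sys; raise SystemExit(0 if sys.version_info >= (3, 10) else 1)' \
  && echo "PYTHON_OK" || echo "PYTHON_TOO_OLD"
```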
---
## Phase 2: MCP Server Setup
### Step 2.1: Locate Data Platform MCP Server
The MCP server should be at the marketplace root:
```bash
# If running from installed marketplace
ls -la ~/.claude/plugins/marketplaces/leo-claude-mktplace/mcp-servers/data-platform/ 2>/dev/null || echo "NOT_FOUND_INSTALLED"
# If running from source
ls -la ~/claude-plugins-work/mcp-servers/data-platform/ 2>/dev/null || echo "NOT_FOUND_SOURCE"
```
Determine the correct path based on which exists.
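Resolving whichever location exists into a variable lets the later phases reuse it; a minimal sketch over the two paths above (`MCP_DIR` is a name introduced here for illustration):
```bash
# Resolve whichever location exists into MCP_DIR (hypothetical variable name)
MCP_DIR=""
for d in ~/.claude/plugins/marketplaces/leo-claude-mktplace/mcp-servers/data-platform \
         ~/claude-plugins-work/mcp-servers/data-platform; do
  [ -d "$d" ] && { MCP_DIR="$d"; break; }
done
echo "${MCP_DIR:-NOT_FOUND}"
```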
### Step 2.2: Check Virtual Environment
```bash
ls -la /path/to/mcp-servers/data-platform/.venv/bin/python 2>/dev/null && echo "VENV_EXISTS" || echo "VENV_MISSING"
```
### Step 2.3: Create Virtual Environment (if missing)
```bash
cd /path/to/mcp-servers/data-platform \
  && python3 -m venv .venv \
  && source .venv/bin/activate \
  && pip install --upgrade pip \
  && pip install -r requirements.txt \
  && deactivate
```
**Note:** This may take a few minutes due to pandas, pyarrow, and dbt dependencies.
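Once installation finishes, a quick import against the fresh environment confirms the heavyweight packages landed (assuming pandas and pyarrow are in `requirements.txt`, per the note above):
```bash
# Sanity-check the fresh venv (assumes pandas/pyarrow are listed in requirements.txt)
/path/to/mcp-servers/data-platform/.venv/bin/python -c "import pandas, pyarrow; print('deps OK')"
```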
---
## Phase 3: PostgreSQL Configuration (Optional)
### Step 3.1: Ask About PostgreSQL
Use AskUserQuestion:
- Question: "Do you want to configure PostgreSQL database access?"
- Header: "PostgreSQL"
- Options:
- "Yes, I have a PostgreSQL database"
- "No, I'll only use pandas/dbt tools"
**If user chooses "No":** Skip to Phase 4.
### Step 3.2: Create Config Directory
```bash
mkdir -p ~/.config/claude
```
### Step 3.3: Check PostgreSQL Configuration
```bash
cat ~/.config/claude/postgres.env 2>/dev/null || echo "FILE_NOT_FOUND"
```
**If file exists with valid URL:** Skip to Step 3.6.
**If missing or has placeholders:** Continue.
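One way to detect the placeholder case mechanically (a sketch keyed to the `<USER_PROVIDED_URL>` marker that Step 3.5 writes):
```bash
# Treat the file as unconfigured if the template marker is still present
grep -q '<USER_PROVIDED_URL>' ~/.config/claude/postgres.env 2>/dev/null && echo "PLACEHOLDER_PRESENT"
```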
### Step 3.4: Gather PostgreSQL Information
Use AskUserQuestion:
- Question: "What is your PostgreSQL connection URL format?"
- Header: "DB Format"
- Options:
- "Standard: postgresql://user:pass@host:5432/db"
- "PostGIS: postgresql://user:pass@host:5432/db (with PostGIS extension)"
- "Other (I'll provide the full URL)"
Ask user to provide the connection URL.
### Step 3.5: Create Configuration File
```bash
cat > ~/.config/claude/postgres.env << 'EOF'
# PostgreSQL Configuration
# Generated by data-platform /initial-setup
POSTGRES_URL=<USER_PROVIDED_URL>
EOF
chmod 600 ~/.config/claude/postgres.env
```
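For reference, a filled-in file would look like this (all values hypothetical):
```bash
# ~/.config/claude/postgres.env (hypothetical example values)
# PostgreSQL Configuration
# Generated by data-platform /initial-setup
POSTGRES_URL=postgresql://analyst:s3cret@db.internal:5432/warehouse
```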
### Step 3.6: Test PostgreSQL Connection (if configured)
```bash
source ~/.config/claude/postgres.env && python3 -c "
import asyncio
import asyncpg

async def test():
    try:
        conn = await asyncpg.connect('$POSTGRES_URL', timeout=5)
        ver = await conn.fetchval('SELECT version()')
        await conn.close()
        print(f'SUCCESS: {ver.split(\",\")[0]}')
    except Exception as e:
        print(f'FAILED: {e}')

asyncio.run(test())
"
```
Report result:
- SUCCESS: Connection works
- FAILED: Show error and suggest fixes
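Typical first-line diagnostics for a FAILED result (a sketch; `pg_isready` assumes the PostgreSQL client tools are installed, and the host/port are hypothetical):
```bash
# Is the server reachable at all?
pg_isready -h localhost -p 5432 || echo "server not reachable - check host, port, firewall"
```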
---
## Phase 4: dbt Configuration (Optional)
### Step 4.1: Ask About dbt
Use AskUserQuestion:
- Question: "Do you use dbt for data transformations in your projects?"
- Header: "dbt"
- Options:
- "Yes, I have dbt projects"
- "No, I don't use dbt"
**If user chooses "No":** Skip to Phase 5.
### Step 4.2: dbt Discovery
dbt configuration is **project-level** (not system-level). The plugin auto-detects dbt projects by looking for `dbt_project.yml`.
Inform user:
```
dbt projects are detected automatically when you work in a directory
containing dbt_project.yml.
If your dbt project is in a subdirectory, you can set DBT_PROJECT_DIR
in your project's .env file:
DBT_PROJECT_DIR=./transform
DBT_PROFILES_DIR=~/.dbt
```
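In shell terms, the auto-detection amounts to checking for that file (a sketch; the actual discovery logic lives in the MCP server):
```bash
# Detect a dbt project the way the plugin does, conceptually
if [ -f dbt_project.yml ] || [ -f "${DBT_PROJECT_DIR:-.}/dbt_project.yml" ]; then
  echo "DBT_PROJECT_DETECTED"
fi
```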
### Step 4.3: Check dbt Installation
```bash
dbt --version 2>/dev/null || echo "DBT_NOT_FOUND"
```
**If not found:** Inform user that dbt CLI tools require dbt-core to be installed globally or in the project.
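If the user wants it installed, dbt-core is available via pip (the adapter shown is an assumption; pick the one matching the warehouse):
```bash
# dbt-core plus a database adapter (dbt-postgres chosen here as an example)
pip install dbt-core dbt-postgres
dbt --version
```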
---
## Phase 5: Validation
### Step 5.1: Verify MCP Server
```bash
cd /path/to/mcp-servers/data-platform && .venv/bin/python -c "from mcp_server.server import DataPlatformMCPServer; print('MCP Server OK')"
```
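If that import fails, re-running with the traceback visible narrows the cause quickly (a sketch):
```bash
# Surface the traceback instead of a bare failure
cd /path/to/mcp-servers/data-platform \
  && .venv/bin/python -c "from mcp_server.server import DataPlatformMCPServer" 2>&1 | tail -5
```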
### Step 5.2: Summary
```
╔════════════════════════════════════════════════════════════╗
║ DATA-PLATFORM SETUP COMPLETE ║
╠════════════════════════════════════════════════════════════╣
║ MCP Server: ✓ Ready ║
║ pandas Tools: ✓ Available (14 tools) ║
║ PostgreSQL Tools: [✓/✗] [Status based on config] ║
║ PostGIS Tools: [✓/✗] [Status based on PostGIS] ║
║ dbt Tools: [✓/✗] [Status based on discovery] ║
╚════════════════════════════════════════════════════════════╝
```
### Step 5.3: Session Restart Notice
---
**⚠️ Session Restart Required**
Restart your Claude Code session for MCP tools to become available.
**After restart, you can:**
- Run `/ingest` to load data from files or database
- Run `/profile` to analyze DataFrame statistics
- Run `/schema` to explore database/DataFrame schema
- Run `/run` to execute dbt models (if configured)
- Run `/lineage` to view dbt model dependencies
---
## Memory Limits
The data-platform plugin has a default row limit of 100,000 rows per DataFrame. For larger datasets:
- Use chunked processing (`chunk_size` parameter)
- Filter data before loading
- Store to Parquet for efficient re-loading
You can override the limit by setting this in your project's `.env`:
```
DATA_PLATFORM_MAX_ROWS=500000
```
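As a rough illustration of the chunked-processing idea in plain pandas (file name hypothetical; the plugin's own tools expose this through their `chunk_size` parameter):
```bash
python3 -c "
import pandas as pd

total = 0
# Stream the file 100k rows at a time instead of loading it whole
for chunk in pd.read_csv('big_events.csv', chunksize=100_000):
    total += len(chunk)
print(f'rows processed: {total}')
"
```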
---
description: Interactive setup wizard for data-platform plugin - configures MCP server and optional PostgreSQL/dbt
---

# Data Platform Setup Wizard

## Skills to Load

- skills/setup-workflow.md
- skills/visual-header.md

## Visual Output

Display header: `DATA-PLATFORM - Setup Wizard`

## Workflow

Execute `skills/setup-workflow.md` phases in order:

### Phase 1: Environment Validation
- Check Python 3.10+ installed
- Stop if version too old

### Phase 2: MCP Server Setup
- Locate MCP server (installed or source path)
- Check/create virtual environment
- Install dependencies if needed

### Phase 3: PostgreSQL Configuration (Optional)
- Ask user if they want PostgreSQL access
- If yes: create `~/.config/claude/postgres.env`
- Test connection and report status

### Phase 4: dbt Configuration (Optional)
- Ask user if they use dbt
- If yes: explain auto-detection via `dbt_project.yml`
- Check dbt CLI installation

### Phase 5: Validation
- Verify MCP server can be imported
- Display summary with component status
- Inform user to restart session

## Important Notes

- Uses Bash, Read, Write, AskUserQuestion tools (NOT MCP tools)
- MCP tools unavailable until session restart
- PostgreSQL and dbt are optional - pandas works without them