leo-claude-mktplace/plugins/data-platform/skills/setup-workflow.md
lmiranda 7c8a20c804 refactor: extract skills from commands across 8 plugins
Refactored commands to extract reusable skills following the
Commands → Skills separation pattern. Each command is now <50 lines
and references skill files for detailed knowledge.

Plugins refactored:
- claude-config-maintainer: 5 commands → 7 skills
- code-sentinel: 3 commands → 2 skills
- contract-validator: 5 commands → 6 skills
- data-platform: 10 commands → 6 skills
- doc-guardian: 5 commands → 6 skills (replaced nested dir)
- git-flow: 8 commands → 7 skills

Skills contain: workflows, validation rules, conventions,
reference data, tool documentation

Commands now contain: YAML frontmatter, agent assignment,
skills list, brief workflow steps, parameters

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 17:32:24 -05:00


# Setup Workflow
## Important Context
- **This workflow uses Bash, Read, Write, AskUserQuestion tools** - NOT MCP tools
- **MCP tools won't work until after setup + session restart**
- **PostgreSQL and dbt are optional** - pandas tools work without them
## Phase 1: Environment Validation
### Check Python Version
```bash
python3 --version
```
Requires Python 3.10+. If the version is lower, stop and inform the user.
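The gate can also be scripted so setup halts automatically; a minimal sketch (the messages are illustrative, not fixed output of any tool):
```bash
# Exit status of the inline check decides whether setup may continue.
if python3 -c 'import sys; sys.exit(0 if sys.version_info >= (3, 10) else 1)'; then
  echo "Python OK: $(python3 --version)"
else
  echo "Python too old - stop setup and inform the user"
fi
```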
## Phase 2: MCP Server Setup
### Locate MCP Server
Check both paths:
```bash
# Installed marketplace
ls -la ~/.claude/plugins/marketplaces/leo-claude-mktplace/mcp-servers/data-platform/
# Source
ls -la ~/claude-plugins-work/mcp-servers/data-platform/
```
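The two checks above can be collapsed into one resolver that picks whichever path exists; `SERVER_DIR` is a hypothetical variable name, not something the MCP server reads:
```bash
# Take the first candidate directory that exists; empty means neither was found.
SERVER_DIR=""
for dir in \
  ~/.claude/plugins/marketplaces/leo-claude-mktplace/mcp-servers/data-platform \
  ~/claude-plugins-work/mcp-servers/data-platform
do
  if [ -d "$dir" ]; then
    SERVER_DIR="$dir"
    break
  fi
done
echo "server dir: ${SERVER_DIR:-not found}"
```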
### Check/Create Virtual Environment
```bash
# Check
ls -la /path/to/mcp-servers/data-platform/.venv/bin/python
# Create if missing
cd /path/to/mcp-servers/data-platform
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
deactivate
```
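The check-then-create steps above can be made idempotent in one guard; a sketch, with the path still a placeholder as in the original. Calling the venv's own `pip` binary avoids the activate/deactivate pair:
```bash
# Only build the venv when its interpreter is missing; safe to re-run.
VENV=/path/to/mcp-servers/data-platform/.venv
if [ ! -x "$VENV/bin/python" ]; then
  python3 -m venv "$VENV"
  "$VENV/bin/pip" install --upgrade pip
  "$VENV/bin/pip" install -r "$(dirname "$VENV")/requirements.txt"
fi
```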
## Phase 3: PostgreSQL Configuration (Optional)
### Config Location
`~/.config/claude/postgres.env`
### Config Format
```bash
# PostgreSQL Configuration
POSTGRES_URL=postgresql://user:pass@host:5432/db
```
Set permissions: `chmod 600 ~/.config/claude/postgres.env`
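Since the file holds credentials, the permission can be verified rather than assumed; a hedged sketch (`stat -c` is GNU coreutils, the `-f '%Lp'` fallback covers macOS/BSD):
```bash
# Warn when the credentials file is readable by group or others.
CONF="$HOME/.config/claude/postgres.env"
if [ -f "$CONF" ]; then
  PERMS=$(stat -c '%a' "$CONF" 2>/dev/null || stat -f '%Lp' "$CONF")
  if [ "$PERMS" = "600" ]; then
    echo "permissions OK"
  else
    echo "insecure permissions ($PERMS) - run: chmod 600 $CONF"
  fi
fi
```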
### Test Connection
```bash
source ~/.config/claude/postgres.env && python3 -c "
import asyncio, asyncpg
async def test():
    conn = await asyncpg.connect('$POSTGRES_URL', timeout=5)
    ver = await conn.fetchval('SELECT version()')
    await conn.close()
    print(f'SUCCESS: {ver.split(\",\")[0]}')
asyncio.run(test())
"
```
## Phase 4: dbt Configuration (Optional)
dbt configuration is **project-level**: it is auto-detected via `dbt_project.yml` in the project root.
If the dbt project lives in a subdirectory, set these variables in the project's `.env`:
```
DBT_PROJECT_DIR=./transform
DBT_PROFILES_DIR=~/.dbt
```
### Check dbt Installation
```bash
dbt --version
```
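Because dbt is optional, a missing binary should mean "skip Phase 4" rather than an error; one way to express that:
```bash
# command -v exits non-zero when the binary is absent from PATH.
if command -v dbt >/dev/null 2>&1; then
  dbt --version
else
  echo "dbt not installed - skipping dbt configuration"
fi
```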
## Phase 5: Validation
### Verify MCP Server
```bash
cd /path/to/mcp-servers/data-platform
.venv/bin/python -c "from mcp_server.server import DataPlatformMCPServer; print('OK')"
```
## Memory Limits
Default: 100,000 rows per DataFrame
Override in project `.env`:
```
DATA_PLATFORM_MAX_ROWS=500000
```
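The override implies a default-with-fallback read; a sketch of that pattern in shell (the server itself reads the variable on its own side):
```bash
# Use the override when set and non-empty, else the 100,000-row default.
MAX_ROWS="${DATA_PLATFORM_MAX_ROWS:-100000}"
echo "row limit: $MAX_ROWS"
```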
For larger datasets:
- Use chunked processing (`chunk_size` parameter)
- Filter data before loading
- Store to Parquet for efficient re-loading
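The chunked option above can be illustrated with the stdlib; this only shows the pattern, the real `chunk_size` parameter belongs to the MCP tools:
```bash
python3 - <<'EOF'
import csv, io, itertools

def chunks(reader, size):
    """Yield lists of at most `size` rows from an iterator of rows."""
    while True:
        block = list(itertools.islice(reader, size))
        if not block:
            return
        yield block

# Ten synthetic data rows under a header; process them four at a time.
data = io.StringIO("a,b\n" + "\n".join(f"{i},{i * 2}" for i in range(10)))
reader = csv.reader(data)
next(reader)  # skip the header row
for n, block in enumerate(chunks(reader, 4)):
    print(f"chunk {n}: {len(block)} rows")
EOF
```
Ten rows in chunks of four yield blocks of 4, 4, and 2 rows, so memory use is bounded by the chunk size rather than the dataset size.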
## Session Restart
After setup, restart Claude Code session for MCP tools to become available.