
# data-platform Plugin - CLAUDE.md Integration

Add this section to your project's CLAUDE.md to enable data-platform plugin features.

## Suggested CLAUDE.md Section

## Data Platform Integration

This project uses the data-platform plugin for data engineering workflows.

### Configuration

- **PostgreSQL**: Credentials in `~/.config/claude/postgres.env`
- **dbt**: Project path auto-detected from `dbt_project.yml`
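As a rough illustration, and assuming the credentials file simply defines the `POSTGRES_URL` that the database tools below read (the exact variable set is an assumption, and the credentials are placeholders):

```shell
# ~/.config/claude/postgres.env — illustrative contents only.
# POSTGRES_URL is the connection string the PostgreSQL tools expect.
POSTGRES_URL=postgresql://analyst:secret@localhost:5432/warehouse
```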

### Available Commands

| Command | Purpose |
|---------|---------|
| `/data ingest` | Load data from files or database |
| `/data profile` | Generate statistical profile |
| `/data schema` | Show schema information |
| `/data explain` | Explain dbt model |
| `/data lineage` | Show data lineage |
| `/data run` | Execute dbt models |

### data_ref Convention

DataFrames are stored and retrieved by reference (`data_ref`). Use meaningful names:
- `raw_*` for source data
- `stg_*` for staged/cleaned data
- `dim_*` for dimension tables
- `fct_*` for fact tables
- `rpt_*` for reports
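The convention can be checked mechanically. A minimal sketch (the `is_valid_data_ref` helper is hypothetical, not something the plugin ships):

```python
import re

# Hypothetical helper illustrating the data_ref naming convention;
# the plugin does not provide this function.
LAYER_PREFIXES = ("raw_", "stg_", "dim_", "fct_", "rpt_")

def is_valid_data_ref(ref: str) -> bool:
    """True if ref starts with a known layer prefix and is snake_case."""
    return ref.startswith(LAYER_PREFIXES) and re.fullmatch(r"[a-z0-9_]+", ref) is not None

print(is_valid_data_ref("stg_orders"))   # True
print(is_valid_data_ref("OrdersTemp"))   # False
```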

### dbt Workflow

1. Always validate before running: `/data run` includes an automatic `dbt_parse` step
2. For dbt 1.9+, check for deprecated syntax before committing
3. Use `/data lineage` to understand the impact of changes

### Database Access

PostgreSQL tools require `POSTGRES_URL` to be configured:
- Read-only queries: `pg_query`
- Write operations: `pg_execute`
- Schema exploration: `pg_tables`, `pg_columns`
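For reference, a `POSTGRES_URL`-style connection string breaks down as follows. A minimal stdlib sketch (the `parse_postgres_url` helper and example credentials are illustrative, not plugin code):

```python
from urllib.parse import urlparse

# Illustrative only: split a POSTGRES_URL-style connection string
# (postgresql://user:password@host:port/dbname) into its parts.
def parse_postgres_url(url: str) -> dict:
    p = urlparse(url)
    return {
        "user": p.username,
        "password": p.password,
        "host": p.hostname,
        "port": p.port or 5432,          # default PostgreSQL port
        "dbname": p.path.lstrip("/"),
    }

cfg = parse_postgres_url("postgresql://analyst:secret@localhost:5433/warehouse")
print(cfg["host"], cfg["port"], cfg["dbname"])  # localhost 5433 warehouse
```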

PostGIS spatial data:
- List spatial tables: `st_tables`
- Check geometry: `st_geometry_type`, `st_srid`, `st_extent`
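Spatial discovery typically goes through the standard PostGIS `geometry_columns` view. The SQL below is an assumption about what a tool like `st_tables` might run, not the plugin's actual query:

```python
# Assumed catalog query; geometry_columns is a standard PostGIS view.
LIST_SPATIAL_TABLES_SQL = """
SELECT f_table_schema, f_table_name, f_geometry_column, srid, type
FROM geometry_columns
ORDER BY f_table_schema, f_table_name;
"""

print(LIST_SPATIAL_TABLES_SQL.strip().startswith("SELECT"))  # True
```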

## Environment Variables

Add to the project's `.env` if needed:

```bash
# dbt configuration
DBT_PROJECT_DIR=./transform
DBT_PROFILES_DIR=~/.dbt

# Memory limits
DATA_PLATFORM_MAX_ROWS=100000
```
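A sketch of how a cap like `DATA_PLATFORM_MAX_ROWS` might be enforced at ingest time — only the variable name and default come from this doc; the `truncate_rows` helper is illustrative:

```python
import os

# Read the row cap, falling back to the documented default of 100000.
MAX_ROWS = int(os.environ.get("DATA_PLATFORM_MAX_ROWS", "100000"))

def truncate_rows(rows, limit=MAX_ROWS):
    """Keep at most `limit` rows to bound memory use."""
    return rows[:limit]

print(len(truncate_rows(list(range(250_000)), limit=100_000)))  # 100000
```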

## Typical Workflows

### Data Exploration

```
/data ingest data/raw_customers.csv
/data profile raw_customers
/data schema
```
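The profile step amounts to computing per-column summary statistics. A minimal stdlib sketch of one column's profile (the real `/data profile` output is richer; `profile_column` is hypothetical):

```python
import statistics

# Hypothetical single-column profile; nulls are counted but excluded
# from the numeric statistics.
def profile_column(values):
    present = [v for v in values if v is not None]
    return {
        "count": len(values),
        "nulls": len(values) - len(present),
        "min": min(present),
        "max": max(present),
        "mean": statistics.fmean(present),
    }

print(profile_column([3, 1, None, 4]))
```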

### ETL Development

```
/data schema orders              # Understand source
/data explain stg_orders         # Understand transformation
/data run stg_orders             # Test the model
/data lineage fct_orders         # Check downstream impact
```
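The lineage check is essentially a reachability query over the dbt dependency graph. A toy sketch of finding everything downstream of a model (the graph and `impacted` helper are illustrative):

```python
# Toy dependency graph: model -> models that depend directly on it.
downstream = {
    "stg_orders": ["fct_orders"],
    "fct_orders": ["rpt_revenue"],
    "rpt_revenue": [],
}

def impacted(model: str) -> set:
    """All models transitively downstream of `model`."""
    seen = set()
    stack = [model]
    while stack:
        for child in downstream.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

print(sorted(impacted("stg_orders")))  # ['fct_orders', 'rpt_revenue']
```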

### Database Analysis

```
/data schema                     # List all tables
pg_columns orders                # Detailed schema
st_tables                        # Find spatial data
```