
# data-platform Plugin - CLAUDE.md Integration

Add this section to your project's CLAUDE.md to enable data-platform plugin features.

## Suggested CLAUDE.md Section

```markdown
## Data Platform Integration

This project uses the data-platform plugin for data engineering workflows.

### Configuration

- **PostgreSQL**: credentials in `~/.config/claude/postgres.env`
- **dbt**: project path auto-detected from `dbt_project.yml`

### Available Commands

| Command | Purpose |
|---------|---------|
| `/ingest` | Load data from files or database |
| `/profile` | Generate statistical profile |
| `/schema` | Show schema information |
| `/explain` | Explain dbt model |
| `/lineage` | Show data lineage |
| `/run` | Execute dbt models |

### data_ref Convention

DataFrames persist across tool calls as named `data_ref` handles. Use meaningful names:
- `raw_*` for source data
- `stg_*` for staged/cleaned data
- `dim_*` for dimension tables
- `fct_*` for fact tables
- `rpt_*` for reports

### dbt Workflow

1. Always validate before running: `/run` includes automatic `dbt_parse`
2. For dbt 1.9+, check for deprecated syntax before committing
3. Use `/lineage` to understand impact of changes

### Database Access

PostgreSQL tools require `POSTGRES_URL` configuration:
- Read-only queries: `pg_query`
- Write operations: `pg_execute`
- Schema exploration: `pg_tables`, `pg_columns`

PostGIS spatial data:
- List spatial tables: `st_tables`
- Check geometry: `st_geometry_type`, `st_srid`, `st_extent`
```
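For intuition about the `data_ref` convention above, here is a minimal sketch of how a store along these lines could work: one Arrow IPC file per named DataFrame, reloadable by name on a later tool call. It assumes `pandas` and `pyarrow`; the `DataStore` class and the `.data_refs/` directory are hypothetical illustrations, not the plugin's actual implementation.

```python
from pathlib import Path

import pandas as pd
import pyarrow.feather as feather  # Feather v2 is the Arrow IPC file format


class DataStore:
    """Hypothetical data_ref store: one Arrow IPC file per named DataFrame."""

    def __init__(self, root: str = ".data_refs") -> None:
        self.root = Path(root)
        self.root.mkdir(exist_ok=True)

    def put(self, data_ref: str, df: pd.DataFrame) -> str:
        # Persisting to disk is what lets the frame survive across tool calls.
        feather.write_feather(df, str(self.root / f"{data_ref}.arrow"))
        return data_ref

    def get(self, data_ref: str) -> pd.DataFrame:
        return feather.read_feather(str(self.root / f"{data_ref}.arrow"))

    def list_refs(self) -> list[str]:
        return sorted(p.stem for p in self.root.glob("*.arrow"))


store = DataStore()
store.put("raw_customers", pd.read_csv("data/raw_customers.csv"))
df = store.get("raw_customers")  # a later call reloads the same frame by name
```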

## Environment Variables

Add to the project's `.env` if needed:

```bash
# dbt configuration
DBT_PROJECT_DIR=./transform
DBT_PROFILES_DIR=~/.dbt

# Memory limits
DATA_PLATFORM_MAX_ROWS=100000
```
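How the two configuration sources combine is not spelled out above, so here is a stdlib-only sketch of one plausible lookup order: the system-level `~/.config/claude/postgres.env` is read first, then the project `.env` on top. The `load_env_file` helper and the project-overrides-system precedence are assumptions for illustration, not the plugin's actual loader.

```python
import os
from pathlib import Path


def load_env_file(path: Path) -> dict[str, str]:
    """Hypothetical KEY=VALUE parser standing in for a dotenv loader."""
    env: dict[str, str] = {}
    if path.exists():
        for raw in path.read_text().splitlines():
            line = raw.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                env[key.strip()] = value.strip()
    return env


# Assumed precedence: system-level credentials first, project .env on top.
config = load_env_file(Path.home() / ".config" / "claude" / "postgres.env")
config.update(load_env_file(Path(".env")))

postgres_url = config.get("POSTGRES_URL", os.environ.get("POSTGRES_URL", ""))
max_rows = int(config.get("DATA_PLATFORM_MAX_ROWS", "100000"))
```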

## Typical Workflows

### Data Exploration

```
/ingest data/raw_customers.csv
/profile raw_customers
/schema
```
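`/profile` is the plugin's command, but the kind of summary it implies is easy to picture in plain pandas. A rough sketch of an equivalent hand-rolled profile, not the command's actual output format:

```python
import pandas as pd

df = pd.read_csv("data/raw_customers.csv")

profile = {
    "shape": df.shape,
    "dtypes": df.dtypes.astype(str).to_dict(),
    "null_counts": df.isna().sum().to_dict(),
    "numeric_summary": df.describe().to_dict(),  # count, mean, std, quartiles
}
print(profile)
```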

### ETL Development

```
/schema orders              # Understand source
/explain stg_orders         # Understand transformation
/run stg_orders             # Test the model
/lineage fct_orders         # Check downstream impact
```
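The validate-then-run behavior behind `/run` can be reproduced with the plain dbt CLI; a Python sketch using `subprocess`, with `stg_orders` reused from the example above:

```python
import subprocess
import sys

MODEL = "stg_orders"  # example model from the workflow above

# Validate first: `dbt parse` fails fast on broken or deprecated syntax.
parse = subprocess.run(["dbt", "parse"], capture_output=True, text=True)
if parse.returncode != 0:
    sys.exit(f"dbt parse failed, skipping run:\n{parse.stderr or parse.stdout}")

# Only execute the model once the project parses cleanly.
subprocess.run(["dbt", "run", "--select", MODEL], check=True)
```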

### Database Analysis

```
/schema                     # List all tables
pg_columns orders           # Detailed schema
st_tables                   # Find spatial data
```
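The spatial tools correspond to standard PostGIS catalog queries you can also run directly. A sketch with `psycopg2`, assuming `POSTGRES_URL` is set; the `orders_locations` table and its `geom` column are hypothetical examples:

```python
import os

import psycopg2  # assumes POSTGRES_URL points at a PostGIS-enabled database

conn = psycopg2.connect(os.environ["POSTGRES_URL"])
with conn, conn.cursor() as cur:
    # Standard PostGIS catalog: every registered geometry column with its SRID.
    cur.execute(
        "SELECT f_table_schema, f_table_name, f_geometry_column, srid, type "
        "FROM geometry_columns"
    )
    for row in cur.fetchall():
        print(row)

    # Bounding box of one table's geometries (table and column are examples).
    cur.execute("SELECT ST_Extent(geom) FROM orders_locations")
    print(cur.fetchone()[0])
conn.close()
```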