# data-platform Plugin

Data engineering tools with pandas, PostgreSQL/PostGIS, and dbt integration for Claude Code.

## Features

- **pandas Operations**: Load, transform, and export DataFrames with a persistent `data_ref` system
- **PostgreSQL/PostGIS**: Database queries with connection pooling and spatial data support
- **dbt Integration**: Build-tool wrapper with pre-execution validation via `dbt parse`

## Installation

This plugin is part of the `leo-claude-mktplace` marketplace. Install it via:

```bash
# From the marketplace
claude plugins install leo-claude-mktplace/data-platform

# Set up the MCP server virtual environment
cd ~/.claude/plugins/marketplaces/leo-claude-mktplace/mcp-servers/data-platform
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

## Configuration

### PostgreSQL (Optional)

Configuration is hybrid: system-level defaults live in `~/.config/claude/postgres.env` and can be combined with a project `.env`. Create the system file:

```env
POSTGRES_URL=postgresql://user:password@host:5432/database
```

### dbt (Optional)

Add to project `.env`:

```env
DBT_PROJECT_DIR=/path/to/dbt/project
DBT_PROFILES_DIR=~/.dbt
```
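
How the two files combine is easiest to see in code. The sketch below is illustrative, not the plugin's actual implementation: it assumes project values override the system file and uses `dotenv_values` from the `python-dotenv` package.

```python
# Illustrative hybrid-config loader (not the plugin's actual code).
# Assumption: project .env values take precedence over the system file.
from pathlib import Path
from dotenv import dotenv_values  # pip install python-dotenv

def load_config(project_dir: Path) -> dict:
    system_env = dotenv_values(Path.home() / ".config/claude/postgres.env")
    project_env = dotenv_values(project_dir / ".env")
    # Later dicts win, so project settings override system defaults.
    return {**system_env, **project_env}

config = load_config(Path.cwd())
postgres_url = config.get("POSTGRES_URL")  # None if PostgreSQL is not configured
```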

## Commands

| Command | Description |
|---------|-------------|
| `/initial-setup` | Interactive setup wizard for PostgreSQL and dbt configuration |
| `/ingest` | Load data from files or database |
| `/profile` | Generate data profile and statistics |
| `/schema` | Show database/DataFrame schema |
| `/explain` | Explain dbt model lineage |
| `/lineage` | Visualize data dependencies |
| `/run` | Execute dbt models |

## Agents

| Agent | Description |
|-------|-------------|
| `data-ingestion` | Data loading and transformation specialist |
| `data-analysis` | Exploration and profiling specialist |

## data_ref System

All DataFrame operations use a `data_ref` system (backed by Arrow IPC) so results persist across tool calls:

```
# Load returns a reference
read_csv("data.csv") → {"data_ref": "sales_data"}

# Use the reference in subsequent operations
filter("sales_data", "amount > 100") → {"data_ref": "sales_data_filtered"}
describe("sales_data_filtered") → {statistics}
```
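
Conceptually, the store behind `data_ref` round-trips DataFrames through Arrow IPC files so they survive across tool calls. The following is a minimal sketch under that assumption; `DataStore` and the spill directory are illustrative names, not the plugin's API.

```python
# Illustrative data_ref store backed by Arrow IPC (not the plugin's actual code).
from pathlib import Path

import pandas as pd
import pyarrow as pa
import pyarrow.ipc as ipc

class DataStore:
    def __init__(self, spill_dir: Path):
        self.spill_dir = spill_dir
        self.spill_dir.mkdir(parents=True, exist_ok=True)

    def put(self, data_ref: str, df: pd.DataFrame) -> str:
        # Persist the DataFrame as an Arrow IPC file keyed by data_ref.
        table = pa.Table.from_pandas(df)
        with ipc.new_file(str(self.spill_dir / f"{data_ref}.arrow"), table.schema) as writer:
            writer.write_table(table)
        return data_ref

    def get(self, data_ref: str) -> pd.DataFrame:
        # Re-hydrate the DataFrame from disk on a later tool call.
        reader = ipc.open_file(str(self.spill_dir / f"{data_ref}.arrow"))
        return reader.read_all().to_pandas()
```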

## Example Workflow

```
/ingest data/sales.csv
# → Loaded 50,000 rows as "sales_data"

/profile sales_data
# → Statistical summary, null counts, quality assessment

/schema orders
# → Column names, types, constraints

/lineage fct_orders
# → Dependency graph showing upstream/downstream models

/run dim_customers
# → Pre-validates then executes dbt model
```
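
The pre-validation behind `/run` amounts to a parse-then-run wrapper. A minimal sketch, assuming the tool shells out to the dbt CLI (`dbt parse` and `dbt run --select` are real commands; the wrapper function itself is illustrative):

```python
# Illustrative parse-then-run wrapper (the plugin's actual tool may differ).
import subprocess

def run_model(model: str, project_dir: str) -> subprocess.CompletedProcess:
    # `dbt parse` surfaces syntax and deprecation errors before anything executes.
    parse = subprocess.run(
        ["dbt", "parse", "--project-dir", project_dir],
        capture_output=True, text=True,
    )
    if parse.returncode != 0:
        raise RuntimeError(f"dbt parse failed, aborting run:\n{parse.stdout}{parse.stderr}")
    # Only run the model once the project parses cleanly.
    return subprocess.run(
        ["dbt", "run", "--select", model, "--project-dir", project_dir],
        capture_output=True, text=True,
    )
```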

## Tools Summary

### pandas (14 tools)

`read_csv`, `read_parquet`, `read_json`, `to_csv`, `to_parquet`, `describe`, `head`, `tail`, `filter`, `select`, `groupby`, `join`, `list_data`, `drop_data`

### PostgreSQL (6 tools)

`pg_connect`, `pg_query`, `pg_execute`, `pg_tables`, `pg_columns`, `pg_schemas`

### PostGIS (4 tools)

`st_tables`, `st_geometry_type`, `st_srid`, `st_extent`

### dbt (8 tools)

`dbt_parse`, `dbt_run`, `dbt_test`, `dbt_build`, `dbt_compile`, `dbt_ls`, `dbt_docs_generate`, `dbt_lineage`

## Memory Management

- Default limit: 100,000 rows per DataFrame
- Configure via the `DATA_PLATFORM_MAX_ROWS` environment variable
- Use the `chunk_size` parameter for large files (see the sketch below)
- Monitor loaded data with the `list_data` tool
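
A minimal sketch of cap-aware chunked ingestion; `read_csv_capped` is an illustrative name, not the plugin's tool, and the cap logic is an assumption about how the row limit is enforced:

```python
# Illustrative chunked load that honors the row cap (not the plugin's actual code).
import os
import pandas as pd

MAX_ROWS = int(os.environ.get("DATA_PLATFORM_MAX_ROWS", 100_000))

def read_csv_capped(path: str, chunk_size: int = 10_000) -> pd.DataFrame:
    chunks, total = [], 0
    # Stream the file in chunks so a huge CSV never fully loads before the cap check.
    for chunk in pd.read_csv(path, chunksize=chunk_size):
        remaining = MAX_ROWS - total
        if remaining <= 0:
            break
        chunks.append(chunk.head(remaining))
        total += len(chunks[-1])
    return pd.concat(chunks, ignore_index=True) if chunks else pd.DataFrame()
```
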
## SessionStart Hook

On session start, the plugin checks PostgreSQL connectivity and displays a warning if the database is unavailable. The check is non-blocking: pandas and dbt tools remain available either way.
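
A minimal sketch of such a check, assuming the hook runs a small Python script using `psycopg2`; the SessionStart wiring is not shown and the script is illustrative:

```python
# Illustrative non-blocking connectivity check (hook wiring is assumed, not shown).
import os
import sys

import psycopg2

def check_postgres() -> None:
    url = os.environ.get("POSTGRES_URL")
    if not url:
        return  # PostgreSQL not configured; nothing to check
    try:
        psycopg2.connect(url, connect_timeout=3).close()
    except psycopg2.OperationalError as exc:
        # Warn on stderr but exit 0 so the session still starts.
        print(f"warning: PostgreSQL unreachable ({exc}); pg_* tools will fail "
              "until the database is available", file=sys.stderr)

if __name__ == "__main__":
    check_postgres()
```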