4 Commits

6e90064160 fix(mcp): capture CLAUDE_PROJECT_DIR from PWD before cd
All MCP server run.sh scripts now capture the original working
directory as CLAUDE_PROJECT_DIR before changing to the script
directory. This fixes the branch detection issue where MCP tools
detected the plugin repo's branch instead of the user's project branch.

This is a follow-up to the fix in #231: the original fix relied on
CLAUDE_PROJECT_DIR being set by Claude Code, but Claude Code does not
set it. Now the scripts capture it themselves from PWD at startup.
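
A minimal sketch of the pattern, assuming a typical run.sh layout
(this is illustrative, not the scripts' verbatim code):

```sh
#!/usr/bin/env bash
# Capture the caller's working directory before any cd changes PWD,
# keeping a value already set in the environment if one exists.
export CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"

# Only now is it safe to move into the script's own directory.
cd "$(dirname "$0")"
```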

Closes #231 (proper fix)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 11:43:07 -05:00
e7050e2ad8 fix(mcp): use wrapper scripts instead of venv symlinks
Replace the direct Python interpreter path in .mcp.json with run.sh
wrapper scripts that automatically locate the venv in either the cache
or the local directory.

Problem: .venv symlinks are gitignored, so they are wiped on every git
operation. The SessionStart hook should recreate them, but this was
unreliable, leading to repeated MCP server failures.

Solution: run.sh scripts (sketched after this list) that:
- First check ~/.cache/claude-mcp-venvs/leo-claude-mktplace/{server}/.venv
- Fall back to the local .venv if it exists
- Exit with a helpful error if neither is found
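
A sketch of that lookup order, assuming a server.py entry point (the
server name and entry point below are placeholders):

```sh
#!/usr/bin/env bash
set -euo pipefail

server="pandas-tools"  # placeholder; each run.sh hardcodes its own server name
cache_venv="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/$server/.venv"
local_venv="$(dirname "$0")/.venv"

# Prefer the cached venv, fall back to the local one, else fail loudly.
if [ -x "$cache_venv/bin/python" ]; then
    python="$cache_venv/bin/python"
elif [ -x "$local_venv/bin/python" ]; then
    python="$local_venv/bin/python"
else
    echo "run.sh: no venv found for $server" >&2
    echo "run.sh: checked $cache_venv and $local_venv; re-run plugin setup" >&2
    exit 1
fi

exec "$python" server.py "$@"  # server.py is illustrative
```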

This eliminates the dependency on symlinks entirely: the scripts are
tracked in git and will always be present after a clone, pull, or update.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 14:36:05 -05:00
4ed3ed7e14 fix(data-platform): reset index after filter to prevent extra column
The filter tool was adding an __index_level_0__ column to results
because pandas query() preserves the original index, which gets
materialized as a column when the DataFrame is converted to Arrow for
storage.

Added .reset_index(drop=True) after query() to drop the preserved
index and create a clean sequential index.
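
The behavior reproduces in plain pandas/pyarrow: converting a query()
result to an Arrow table materializes the non-default index as an
__index_level_0__ column, and reset_index(drop=True) removes it:

```python
import pandas as pd
import pyarrow as pa

df = pd.DataFrame({"a": [1, 2, 3], "b": [10, 20, 30]})

filtered = df.query("a > 1")            # index is now [1, 2], not a default RangeIndex
table = pa.Table.from_pandas(filtered)  # non-default index becomes a column
print(table.column_names)               # ['a', 'b', '__index_level_0__']

clean = filtered.reset_index(drop=True)  # the fix: drop the preserved index
print(pa.Table.from_pandas(clean).column_names)  # ['a', 'b']
```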

Fixes #203

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 17:25:37 -05:00
89f0354ccc feat: add data-platform plugin (v4.0.0)
Add new data-platform plugin for data engineering workflows with:

MCP Server (32 tools):
- pandas operations (14 tools): read_csv, read_parquet, read_json,
  to_csv, to_parquet, describe, head, tail, filter, select, groupby,
  join, list_data, drop_data
- PostgreSQL/PostGIS (10 tools): pg_connect, pg_query, pg_execute,
  pg_tables, pg_columns, pg_schemas, st_tables, st_geometry_type,
  st_srid, st_extent
- dbt integration (8 tools): dbt_parse, dbt_run, dbt_test, dbt_build,
  dbt_compile, dbt_ls, dbt_docs_generate, dbt_lineage

Plugin Features:
- Arrow IPC data_ref system for DataFrame persistence across tool calls
  (see the sketch after this list)
- Pre-execution validation for dbt with `dbt parse`
- SessionStart hook for PostgreSQL connectivity check (non-blocking)
- Hybrid configuration (system ~/.config/claude/postgres.env + project .env)
- Memory management with 100k row limit and chunking support
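
A hypothetical sketch of the data_ref idea only; the function names,
ref format, and storage path below are illustrative, not the plugin's
actual API:

```python
import uuid
from pathlib import Path

import pandas as pd
import pyarrow.feather as feather  # Feather v2 files are Arrow IPC format

STORE = Path.home() / ".cache" / "claude-data-refs"  # assumed location
STORE.mkdir(parents=True, exist_ok=True)

def store_df(df: pd.DataFrame) -> str:
    """Persist a DataFrame as an Arrow IPC file and return an opaque data_ref."""
    ref = uuid.uuid4().hex[:12]
    feather.write_feather(df.reset_index(drop=True), STORE / f"{ref}.arrow")
    return ref

def load_df(ref: str) -> pd.DataFrame:
    """Rehydrate the DataFrame from its data_ref in a later tool call."""
    return feather.read_feather(STORE / f"{ref}.arrow")
```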

Commands: /initial-setup, /ingest, /profile, /schema, /explain, /lineage, /run
Agents: data-ingestion, data-analysis

Test suite: 71 tests covering the config, data store, pandas, postgres, and dbt tools

Addresses data workflow issues from the personal-portfolio project:
- Lost data after multiple interactions (solved by Arrow IPC data_ref)
- dbt 1.9+ syntax deprecation (solved by pre-execution validation)
- Ungraceful PostgreSQL error handling (solved by SessionStart hook)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 14:24:03 -05:00