Compare commits
385 commits: 74531e06d0...feat/skill
| SHA1 | Author | Date | |
|---|---|---|---|
| 38a6fc38d3 | |||
| faedf68318 | |||
| 035cd00f4b | |||
| 9daa0cc5a1 | |||
| a3516ae0c1 | |||
| becf9b97b2 | |||
| 8129be5ef3 | |||
| 826f0ec674 | |||
| 5bbcee06ac | |||
| 93d93fcf09 | |||
| 30e424306c | |||
| f2bc1fc5d4 | |||
| f87fc2537d | |||
| 271e0210a5 | |||
| bbce9ed321 | |||
| 16bf1b969d | |||
| 1fe19b85e3 | |||
| e824b18e44 | |||
| f4e62dd1fb | |||
| 97362576dc | |||
| a169399da7 | |||
| 47d2ef2e07 | |||
| 78429a709f | |||
| 382a3a3af8 | |||
| 2d51df7a42 | |||
| 5098422858 | |||
| 9ba2e660d3 | |||
| 442ed63b4c | |||
| d13c310e67 | |||
| faafbd56f5 | |||
| 7380b07312 | |||
| 4cbcc7d391 | |||
| 4baad6c2b5 | |||
| 398fc50099 | |||
| 7b127d8e5c | |||
| ba7350e130 | |||
| 4b7c40e1f5 | |||
| 9673c0c994 | |||
| e1d1ffc555 | |||
| 318d027bfa | |||
| 34a5e4a5a3 | |||
| 9710d296e4 | |||
| 59b5545a9b | |||
| b3c388b732 | |||
| a4fb5b6feb | |||
| de4126bf68 | |||
| 5a8c3b041f | |||
| 33a7c91f4f | |||
| 8781179fd0 | |||
| 7d0b6050f9 | |||
| 6209ab9597 | |||
| 2c41ca338d | |||
| 34fc1d842c | |||
| 9eece4daa3 | |||
| 871d1bff58 | |||
| d90a8d05af | |||
| 044c49ba95 | |||
| ab3847c656 | |||
| 76105e98e0 | |||
| da628a3774 | |||
| 2d6fce9285 | |||
| a04a3c7a60 | |||
| 4ba38eb620 | |||
| 71f1f3239a | |||
| 49891c1e0c | |||
| c6182a3fda | |||
| 0e70156e26 | |||
| 01c225540b | |||
| 52c5be32c4 | |||
| 46e83bc711 | |||
| c0443a7f36 | |||
| c4dd4ee25d | |||
| 184ab48933 | |||
| a741ec3f88 | |||
| f1732f07c1 | |||
| f9df3b57ea | |||
| b0e6d738fa | |||
| 9044fe28ec | |||
| c37107fc42 | |||
| 841ce67dae | |||
| da0be51946 | |||
| d9d80d77cb | |||
| 3557f17177 | |||
| a005610a37 | |||
| 19ba80191f | |||
| 8f9ba64688 | |||
| e35e22cffb | |||
| 61907b78db | |||
| 0ea30e0d75 | |||
| c4037f505c | |||
| dbf3fa7e0d | |||
| c530f568ed | |||
| 6d093e83b6 | |||
| 13de992638 | |||
| ef28f172d6 | |||
| 027ae660c4 | |||
| 39556dbb59 | |||
| c9e054e013 | |||
| 5e20c6b6ef | |||
| db8fec42f2 | |||
| ba1dee4553 | |||
| 5e20a4a229 | |||
| 01e184b68f | |||
| c0d62f4957 | |||
| 56c9a38813 | |||
| 5b1dde694c | |||
| eafcfe5bd1 | |||
| 67d769e9e5 | |||
| dc113d8b09 | |||
| aca5c6e5b1 | |||
| 3d2f14b0ab | |||
| 120f00ece6 | |||
| 3012a7af68 | |||
| b76e53c215 | |||
| d12d9b4962 | |||
| b302a4237d | |||
| 3d96f6b505 | |||
| a034c12eb6 | |||
| 636bd0af59 | |||
| bf5029d6dc | |||
| e939af0689 | |||
| eb6ce62a76 | |||
| f7bcd48fc0 | |||
| 72b3436a24 | |||
| bea46d7689 | |||
| f2fddafca3 | |||
| fdcb5d9874 | |||
| edee44088c | |||
| 45510b44c0 | |||
| 79468f5d9e | |||
| 938ddd7b69 | |||
| 9148e21bc5 | |||
| 2925ec4229 | |||
| aa68883f87 | |||
| 4340873f4e | |||
| 98fd4e45e2 | |||
| dd36a79bcb | |||
| 793565aaaa | |||
| 27f3603f52 | |||
| 2872092554 | |||
| 4c857dde47 | |||
| efe3808bd2 | |||
| 61c315a605 | |||
| 56d3307cd3 | |||
| ea9d501a5d | |||
| cd27b92b28 | |||
| 61654057b8 | |||
| 2c998dff1d | |||
| 091e3d25f3 | |||
| 192f808f48 | |||
| 65574a03fb | |||
| 571c713697 | |||
| 5429f193b3 | |||
| a866e0d43d | |||
| 2f5161675c | |||
| f95b7eb650 | |||
| 8095f16cd6 | |||
| 7d2705e3bf | |||
| 0d04c8703a | |||
| 407dd1b93b | |||
| 4614726350 | |||
| 69358a78ba | |||
| 557bf6115b | |||
| 6eeb4a4e9a | |||
| b605a2de5e | |||
| fde61f5f73 | |||
| addfc9c1d5 | |||
| 4aa0baa2a6 | |||
| 733b8679c9 | |||
| 5becd3d41c | |||
| 305c534402 | |||
| 7b7e7dce16 | |||
| e3db084195 | |||
| b9eaae5317 | |||
| 16acc0609e | |||
| 2a92211b28 | |||
| f0f4369eac | |||
| 89e841d448 | |||
| de6cba5f31 | |||
| b6b09c1754 | |||
| 94d8f03cf9 | |||
| 31dcf0338c | |||
| f23e047842 | |||
| dc59cb3713 | |||
| 569dc9a8f2 | |||
| a78bde2e42 | |||
| f082b78c0b | |||
| 7217790143 | |||
| f684b47161 | |||
| 7c8a20c804 | |||
| aad02ef2d9 | |||
| db8ffa23ec | |||
| 5bf1271347 | |||
| 747a2b15e5 | |||
| 5152cda161 | |||
| 80b9e919c7 | |||
| 97159274c7 | |||
| 3437ece76e | |||
| 2e65b60725 | |||
| 5cf4b4a78c | |||
| 8fe685037e | |||
| c9276d983b | |||
| 63327ecf65 | |||
| 96a612a1f4 | |||
| 3839575272 | |||
| 11d77ebe84 | |||
| 47a3a8b48a | |||
| e190eb8b28 | |||
| c81c4a9981 | |||
| 1b75b10fec | |||
| 45af713366 | |||
| 9550a85f4d | |||
| e925f80252 | |||
| e1d7ec46ae | |||
| c8b91f6a87 | |||
| b1070aac52 | |||
| ce106ace8a | |||
| afd4c44d11 | |||
| d2b6560fba | |||
| 7c3a2ac31c | |||
| e0ab4c2ddf | |||
| 4b1c561bb6 | |||
| 5f82f8ebbd | |||
| b492a13702 | |||
| 786d3c0013 | |||
| 5aaab4cb9a | |||
| 3c3b3b4575 | |||
| 59cc67f857 | |||
| 6e9b703151 | |||
| b603743811 | |||
| a63ccc079d | |||
| d4481ec09f | |||
| 50951378f7 | |||
| b3975c2f4f | |||
| 8a95e061ad | |||
| 4983cc9feb | |||
| cf4d1b595c | |||
| 5aff53972e | |||
| d429319392 | |||
| 6613ef1d67 | |||
| 5b1b0f609c | |||
| 0acd42ea65 | |||
| 6619d0a2fb | |||
| 8c1890c258 | |||
| e44d97edc2 | |||
| dc96207da7 | |||
| c998c0a2dc | |||
| 14633736aa | |||
| af46046bc8 | |||
| f669479122 | |||
| dc08ce1439 | |||
| d457e458a8 | |||
| 7c4959fb77 | |||
| ba4db941ab | |||
| 1dad393eaf | |||
| 2173f3389a | |||
| d8971efafe | |||
| e3a8ebd4da | |||
| fab1345bcb | |||
| 193908721c | |||
| 7e9f70d0a7 | |||
| 86413c4801 | |||
| b5d36865ee | |||
| 79ee93ea88 | |||
| 3561025dfc | |||
| 36e6ac2dd0 | |||
| d27c440631 | |||
| e56d685a68 | |||
| 3e0e779803 | |||
| 5638891d01 | |||
| 611b50b150 | |||
| 74198743ab | |||
| cde5c67134 | |||
| baad41da98 | |||
| d57bff184e | |||
| f6d9fcaae2 | |||
| 8d94bb606c | |||
| b175d4d890 | |||
| 6973f657d7 | |||
| a0d1b38c6e | |||
| c5f68256c5 | |||
| 9698e8724d | |||
| f3e1f42413 | |||
| 8a957b1b69 | |||
| 6e90064160 | |||
| 5c4e97a3f6 | |||
| 351be5a40d | |||
| 67944a7e1c | |||
| e37653f956 | |||
| 235e72d3d7 | |||
| ba8e86e31c | |||
| 67f330be6c | |||
| 445b744196 | |||
| ad73c526b7 | |||
| 26310d05f0 | |||
| 459550e7d3 | |||
| a69a4d19d0 | |||
| f2a62627d0 | |||
| 0abf510ec0 | |||
| 008187a0a4 | |||
| 4bd15e5deb | |||
| 8234683bc3 | |||
| 5b3da8da85 | |||
| 894e015c01 | |||
| a66a2bc519 | |||
| b8851a0ae3 | |||
| aee199e6cf | |||
| 223a2d626a | |||
| b7fce0fafd | |||
| 551c60fb45 | |||
| af6a42b2ac | |||
| 7cae21f7c9 | |||
| 8048fba931 | |||
| 1b36ca77ab | |||
| eb85ea31bb | |||
| 8627d9e968 | |||
| 3da9adf44e | |||
| bcb24ae641 | |||
| c8ede3c30b | |||
| fb1c664309 | |||
| 90f19dfc0f | |||
| 75492b0d38 | |||
| 54bb347ee1 | |||
| 51bcc26ea9 | |||
| d813147ca7 | |||
| dbb6d46fa4 | |||
| e7050e2ad8 | |||
| 206f1c378e | |||
| 35380594b4 | |||
| 0055c9ecf2 | |||
| a74a048898 | |||
| 37676d4645 | |||
| 7492cfad66 | |||
| 59db9ea0b0 | |||
| 9234cf1add | |||
| a21199d3db | |||
| 1abda1ca0f | |||
| 0118bc7b9b | |||
| bbb822db16 | |||
| 08e1dcb1f5 | |||
| ec7141a5aa | |||
| 1b029d97b8 | |||
| 4ed3ed7e14 | |||
| c5232bd7bf | |||
| f9e23fd6eb | |||
| 457ed9c9ff | |||
| dadb4d3576 | |||
| ba771f100f | |||
| 2b9cb5defd | |||
| 34227126c2 | |||
| ef94602eba | |||
| 155e7be399 | |||
| 1fb9e6cece | |||
| f2cf082ba8 | |||
| d580464f4a | |||
| fe6b354ee2 | |||
| ec965dc8ee | |||
| cb07a382ea | |||
| 8f450c0e7b | |||
| fdee539371 | |||
| 0e9187c5a9 | |||
| 46af00019c | |||
| 2b041cb771 | |||
| 0fc40d0fda | |||
| 68f50fed55 | |||
| 9d5615409c | |||
| 48ce693bb5 | |||
| bc0282b5f8 | |||
| db67d3cc76 | |||
| a933edeef1 | |||
| 1df9573f7a | |||
| 35d5f14003 | |||
| 5321b2929e | |||
| 05aa50d409 | |||
| 54e8e694b1 | |||
| 8ec7cbb1e9 | |||
| 3519a96d06 | |||
| f809c672b5 | |||
| 5d205c9c13 | |||
| efb83e0f28 | |||
| 7bedfa2c65 | |||
| 42ab4f13cf | |||
| 77dc122079 | |||
| ce774bcc6f | |||
| b3abe863af |
.claude-plugin/marketplace-full.json (new file, 205 lines)
@@ -0,0 +1,205 @@
```json
{
  "name": "leo-claude-mktplace",
  "owner": {
    "name": "Leo Miranda",
    "email": "leobmiranda@gmail.com"
  },
  "metadata": {
    "description": "Project management plugins with Gitea and NetBox integrations",
    "version": "7.1.0"
  },
  "plugins": [
    {
      "name": "projman",
      "version": "7.1.0",
      "description": "Sprint planning and project management with Gitea integration",
      "source": "./plugins/projman",
      "author": {
        "name": "Leo Miranda",
        "email": "leobmiranda@gmail.com"
      },
      "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/projman/README.md",
      "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
      "hooks": ["./hooks/hooks.json"],
      "category": "development",
      "tags": ["sprint", "agile", "gitea", "project-management"],
      "license": "MIT"
    },
    {
      "name": "doc-guardian",
      "version": "7.1.0",
      "description": "Automatic documentation drift detection and synchronization",
      "source": "./plugins/doc-guardian",
      "author": {
        "name": "Leo Miranda",
        "email": "leobmiranda@gmail.com"
      },
      "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/doc-guardian/README.md",
      "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
      "hooks": ["./hooks/hooks.json"],
      "category": "productivity",
      "tags": ["documentation", "drift-detection", "sync"],
      "license": "MIT"
    },
    {
      "name": "code-sentinel",
      "version": "7.1.0",
      "description": "Security scanning and code refactoring tools",
      "source": "./plugins/code-sentinel",
      "author": {
        "name": "Leo Miranda",
        "email": "leobmiranda@gmail.com"
      },
      "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/code-sentinel/README.md",
      "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
      "hooks": ["./hooks/hooks.json"],
      "category": "security",
      "tags": ["security-scan", "refactoring", "vulnerabilities"],
      "license": "MIT"
    },
    {
      "name": "project-hygiene",
      "version": "7.1.0",
      "description": "Post-task cleanup hook that removes temp files and manages orphaned files",
      "source": "./plugins/project-hygiene",
      "author": {
        "name": "Leo Miranda",
        "email": "leobmiranda@gmail.com"
      },
      "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/project-hygiene/README.md",
      "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
      "hooks": ["./hooks/hooks.json"],
      "category": "productivity",
      "tags": ["cleanup", "automation", "hygiene"],
      "license": "MIT"
    },
    {
      "name": "cmdb-assistant",
      "version": "7.1.0",
      "description": "NetBox CMDB integration with data quality validation and machine registration",
      "source": "./plugins/cmdb-assistant",
      "author": {
        "name": "Leo Miranda",
        "email": "leobmiranda@gmail.com"
      },
      "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/cmdb-assistant/README.md",
      "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
      "hooks": ["./hooks/hooks.json"],
      "category": "infrastructure",
      "tags": ["cmdb", "netbox", "dcim", "ipam", "data-quality", "validation"],
      "license": "MIT"
    },
    {
      "name": "claude-config-maintainer",
      "version": "7.1.0",
      "description": "CLAUDE.md and settings.local.json optimization for Claude Code projects",
      "source": "./plugins/claude-config-maintainer",
      "author": {
        "name": "Leo Miranda",
        "email": "leobmiranda@gmail.com"
      },
      "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/claude-config-maintainer/README.md",
      "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
      "hooks": ["./hooks/hooks.json"],
      "category": "development",
      "tags": ["claude-md", "configuration", "optimization"],
      "license": "MIT"
    },
    {
      "name": "clarity-assist",
      "version": "7.1.0",
      "description": "Prompt optimization and requirement clarification with ND-friendly accommodations",
      "source": "./plugins/clarity-assist",
      "author": {
        "name": "Leo Miranda",
        "email": "leobmiranda@gmail.com"
      },
      "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/clarity-assist/README.md",
      "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
      "hooks": ["./hooks/hooks.json"],
      "category": "productivity",
      "tags": ["prompts", "requirements", "clarification", "nd-friendly"],
      "license": "MIT"
    },
    {
      "name": "git-flow",
      "version": "7.1.0",
      "description": "Git workflow automation with intelligent commit messages and branch management",
      "source": "./plugins/git-flow",
      "author": {
        "name": "Leo Miranda",
        "email": "leobmiranda@gmail.com"
      },
      "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/git-flow/README.md",
      "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
      "hooks": ["./hooks/hooks.json"],
      "category": "development",
      "tags": ["git", "workflow", "commits", "branching"],
      "license": "MIT"
    },
    {
      "name": "pr-review",
      "version": "7.1.0",
      "description": "Multi-agent pull request review with confidence scoring and actionable feedback",
      "source": "./plugins/pr-review",
      "author": {
        "name": "Leo Miranda",
        "email": "leobmiranda@gmail.com"
      },
      "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/pr-review/README.md",
      "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
      "hooks": ["./hooks/hooks.json"],
      "category": "development",
      "tags": ["code-review", "pull-requests", "security", "quality"],
      "license": "MIT"
    },
    {
      "name": "data-platform",
      "version": "7.1.0",
      "description": "Data engineering tools with pandas, PostgreSQL/PostGIS, and dbt integration",
      "source": "./plugins/data-platform",
      "author": {
        "name": "Leo Miranda",
        "email": "leobmiranda@gmail.com"
      },
      "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/data-platform/README.md",
      "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
      "hooks": ["./hooks/hooks.json"],
      "category": "data",
      "tags": ["pandas", "postgresql", "postgis", "dbt", "data-engineering", "etl"],
      "license": "MIT"
    },
    {
      "name": "viz-platform",
      "version": "7.1.0",
      "description": "Visualization tools with Dash Mantine Components validation, Plotly charts, and theming",
      "source": "./plugins/viz-platform",
      "author": {
        "name": "Leo Miranda",
        "email": "leobmiranda@gmail.com"
      },
      "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/viz-platform/README.md",
      "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
      "hooks": ["./hooks/hooks.json"],
      "category": "visualization",
      "tags": ["dash", "plotly", "mantine", "charts", "dashboards", "theming", "dmc"],
      "license": "MIT"
    },
    {
      "name": "contract-validator",
      "version": "7.1.0",
      "description": "Cross-plugin compatibility validation and Claude.md agent verification",
      "source": "./plugins/contract-validator",
      "author": {
        "name": "Leo Miranda",
        "email": "leobmiranda@gmail.com"
      },
      "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/contract-validator/README.md",
      "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
      "hooks": ["./hooks/hooks.json"],
      "category": "development",
      "tags": ["validation", "contracts", "compatibility", "agents", "interfaces", "cross-plugin"],
      "license": "MIT"
    }
  ]
}
```
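Every entry in marketplace-full.json carries the same required fields, and each plugin version must track the marketplace metadata version. A quick consistency check can catch drift before release; the following is a minimal sketch (the `validate` helper and `REQUIRED_KEYS` set are illustrative, not part of the repository, and only two entries are inlined for brevity):

```python
# Sketch: sanity-check a marketplace manifest for missing fields and
# version drift between metadata.version and each plugin's version.
manifest = {
    "name": "leo-claude-mktplace",
    "metadata": {"version": "7.1.0"},
    "plugins": [
        {"name": "projman", "version": "7.1.0", "source": "./plugins/projman",
         "category": "development", "license": "MIT"},
        {"name": "doc-guardian", "version": "7.1.0", "source": "./plugins/doc-guardian",
         "category": "productivity", "license": "MIT"},
    ],
}

REQUIRED_KEYS = {"name", "version", "source", "category", "license"}

def validate(m):
    """Return a list of human-readable problems found in the manifest."""
    problems = []
    meta_version = m["metadata"]["version"]
    for plugin in m["plugins"]:
        missing = REQUIRED_KEYS - plugin.keys()
        if missing:
            problems.append(f"{plugin.get('name', '?')}: missing {sorted(missing)}")
        if plugin.get("version") != meta_version:
            problems.append(f"{plugin.get('name', '?')}: version "
                            f"{plugin.get('version')} != {meta_version}")
    return problems

print(validate(manifest))  # → []
```

In a real pre-commit hook the manifest would be loaded with `json.load` from `.claude-plugin/marketplace-full.json` instead of being inlined.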
.claude-plugin/marketplace-lean.json (new file, 109 lines)
@@ -0,0 +1,109 @@
```json
{
  "name": "leo-claude-mktplace",
  "owner": {
    "name": "Leo Miranda",
    "email": "leobmiranda@gmail.com"
  },
  "metadata": {
    "description": "Project management plugins with Gitea and NetBox integrations",
    "version": "7.1.0"
  },
  "plugins": [
    {
      "name": "projman",
      "version": "7.1.0",
      "description": "Sprint planning and project management with Gitea integration",
      "source": "./plugins/projman",
      "author": {
        "name": "Leo Miranda",
        "email": "leobmiranda@gmail.com"
      },
      "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/projman/README.md",
      "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
      "hooks": ["./hooks/hooks.json"],
      "category": "development",
      "tags": ["sprint", "agile", "gitea", "project-management"],
      "license": "MIT"
    },
    {
      "name": "git-flow",
      "version": "7.1.0",
      "description": "Git workflow automation with intelligent commit messages and branch management",
      "source": "./plugins/git-flow",
      "author": {
        "name": "Leo Miranda",
        "email": "leobmiranda@gmail.com"
      },
      "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/git-flow/README.md",
      "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
      "hooks": ["./hooks/hooks.json"],
      "category": "development",
      "tags": ["git", "workflow", "commits", "branching"],
      "license": "MIT"
    },
    {
      "name": "pr-review",
      "version": "7.1.0",
      "description": "Multi-agent pull request review with confidence scoring and actionable feedback",
      "source": "./plugins/pr-review",
      "author": {
        "name": "Leo Miranda",
        "email": "leobmiranda@gmail.com"
      },
      "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/pr-review/README.md",
      "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
      "hooks": ["./hooks/hooks.json"],
      "category": "development",
      "tags": ["code-review", "pull-requests", "security", "quality"],
      "license": "MIT"
    },
    {
      "name": "clarity-assist",
      "version": "7.1.0",
      "description": "Prompt optimization and requirement clarification with ND-friendly accommodations",
      "source": "./plugins/clarity-assist",
      "author": {
        "name": "Leo Miranda",
        "email": "leobmiranda@gmail.com"
      },
      "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/clarity-assist/README.md",
      "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
      "hooks": ["./hooks/hooks.json"],
      "category": "productivity",
      "tags": ["prompts", "requirements", "clarification", "nd-friendly"],
      "license": "MIT"
    },
    {
      "name": "code-sentinel",
      "version": "7.1.0",
      "description": "Security scanning and code refactoring tools",
      "source": "./plugins/code-sentinel",
      "author": {
        "name": "Leo Miranda",
        "email": "leobmiranda@gmail.com"
      },
      "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/code-sentinel/README.md",
      "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
      "hooks": ["./hooks/hooks.json"],
      "category": "security",
      "tags": ["security-scan", "refactoring", "vulnerabilities"],
      "license": "MIT"
    },
    {
      "name": "doc-guardian",
      "version": "7.1.0",
      "description": "Automatic documentation drift detection and synchronization",
      "source": "./plugins/doc-guardian",
      "author": {
        "name": "Leo Miranda",
        "email": "leobmiranda@gmail.com"
      },
      "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/doc-guardian/README.md",
      "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
      "hooks": ["./hooks/hooks.json"],
      "category": "productivity",
      "tags": ["documentation", "drift-detection", "sync"],
      "license": "MIT"
    }
  ]
}
```
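The lean manifest ships a strict subset of the full catalogue's plugins. The relationship can be checked mechanically; a minimal sketch, with the plugin names taken from the two manifests in this change:

```python
# Plugin names from marketplace-full.json and marketplace-lean.json.
full = {"projman", "doc-guardian", "code-sentinel", "project-hygiene",
        "cmdb-assistant", "claude-config-maintainer", "clarity-assist",
        "git-flow", "pr-review", "data-platform", "viz-platform",
        "contract-validator"}
lean = {"projman", "git-flow", "pr-review", "clarity-assist",
        "code-sentinel", "doc-guardian"}

# Every lean plugin must also appear in the full catalogue.
assert lean <= full

# Plugins only available in the full marketplace:
print(sorted(full - lean))
# → ['claude-config-maintainer', 'cmdb-assistant', 'contract-validator',
#    'data-platform', 'project-hygiene', 'viz-platform']
```

In practice the two name sets would be extracted with `json.load` rather than hard-coded, so the check stays valid as plugins are added.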
```diff
@@ -6,12 +6,12 @@
   },
   "metadata": {
     "description": "Project management plugins with Gitea and NetBox integrations",
-    "version": "4.1.0"
+    "version": "9.1.2"
   },
   "plugins": [
     {
       "name": "projman",
-      "version": "3.2.0",
+      "version": "9.0.1",
       "description": "Sprint planning and project management with Gitea integration",
       "source": "./plugins/projman",
       "author": {
@@ -20,14 +20,18 @@
       },
       "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/projman/README.md",
       "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
-      "mcpServers": ["./.mcp.json"],
       "category": "development",
-      "tags": ["sprint", "agile", "gitea", "project-management"],
+      "tags": [
+        "sprint",
+        "agile",
+        "gitea",
+        "project-management"
+      ],
       "license": "MIT"
     },
     {
       "name": "doc-guardian",
-      "version": "1.0.0",
+      "version": "9.0.1",
       "description": "Automatic documentation drift detection and synchronization",
       "source": "./plugins/doc-guardian",
       "author": {
@@ -36,14 +40,17 @@
       },
       "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/doc-guardian/README.md",
       "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
-      "hooks": ["./hooks/hooks.json"],
       "category": "productivity",
-      "tags": ["documentation", "drift-detection", "sync"],
+      "tags": [
+        "documentation",
+        "drift-detection",
+        "sync"
+      ],
       "license": "MIT"
     },
     {
       "name": "code-sentinel",
-      "version": "1.0.0",
+      "version": "9.0.1",
       "description": "Security scanning and code refactoring tools",
       "source": "./plugins/code-sentinel",
       "author": {
@@ -52,15 +59,21 @@
       },
       "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/code-sentinel/README.md",
       "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
-      "hooks": ["./hooks/hooks.json"],
+      "hooks": [
+        "./hooks/hooks.json"
+      ],
       "category": "security",
-      "tags": ["security-scan", "refactoring", "vulnerabilities"],
+      "tags": [
+        "security-scan",
+        "refactoring",
+        "vulnerabilities"
+      ],
       "license": "MIT"
     },
     {
       "name": "project-hygiene",
-      "version": "0.1.0",
+      "version": "9.0.1",
-      "description": "Post-task cleanup hook that removes temp files and manages orphaned files",
+      "description": "Manual project hygiene checks — temp files, misplaced files, empty dirs, debug artifacts",
       "source": "./plugins/project-hygiene",
       "author": {
         "name": "Leo Miranda",
@@ -68,15 +81,19 @@
       },
       "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/project-hygiene/README.md",
       "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
-      "hooks": ["./hooks/hooks.json"],
       "category": "productivity",
-      "tags": ["cleanup", "automation", "hygiene"],
+      "tags": [
+        "cleanup",
+        "hygiene",
+        "maintenance",
+        "manual-check"
+      ],
       "license": "MIT"
     },
     {
       "name": "cmdb-assistant",
-      "version": "1.0.0",
+      "version": "9.0.1",
-      "description": "NetBox CMDB integration for infrastructure management",
+      "description": "NetBox CMDB integration with data quality validation and machine registration",
       "source": "./plugins/cmdb-assistant",
       "author": {
         "name": "Leo Miranda",
@@ -84,15 +101,24 @@
       },
       "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/cmdb-assistant/README.md",
       "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
-      "mcpServers": ["./.mcp.json"],
+      "hooks": [
+        "./hooks/hooks.json"
+      ],
       "category": "infrastructure",
-      "tags": ["cmdb", "netbox", "dcim", "ipam"],
+      "tags": [
+        "cmdb",
+        "netbox",
+        "dcim",
+        "ipam",
+        "data-quality",
+        "validation"
+      ],
       "license": "MIT"
     },
     {
       "name": "claude-config-maintainer",
-      "version": "1.0.0",
+      "version": "9.0.1",
-      "description": "CLAUDE.md optimization and maintenance for Claude Code projects",
+      "description": "CLAUDE.md and settings.local.json optimization for Claude Code projects",
       "source": "./plugins/claude-config-maintainer",
       "author": {
         "name": "Leo Miranda",
@@ -101,12 +127,16 @@
       "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/claude-config-maintainer/README.md",
       "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
       "category": "development",
-      "tags": ["claude-md", "configuration", "optimization"],
+      "tags": [
+        "claude-md",
+        "configuration",
+        "optimization"
+      ],
       "license": "MIT"
     },
     {
       "name": "clarity-assist",
-      "version": "1.0.0",
+      "version": "9.0.1",
       "description": "Prompt optimization and requirement clarification with ND-friendly accommodations",
       "source": "./plugins/clarity-assist",
       "author": {
@@ -115,13 +145,21 @@
       },
       "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/clarity-assist/README.md",
       "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
+      "hooks": [
+        "./hooks/hooks.json"
+      ],
       "category": "productivity",
-      "tags": ["prompts", "requirements", "clarification", "nd-friendly"],
+      "tags": [
+        "prompts",
+        "requirements",
+        "clarification",
+        "nd-friendly"
+      ],
       "license": "MIT"
     },
     {
       "name": "git-flow",
-      "version": "1.0.0",
+      "version": "9.0.1",
       "description": "Git workflow automation with intelligent commit messages and branch management",
       "source": "./plugins/git-flow",
       "author": {
@@ -130,13 +168,21 @@
       },
       "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/git-flow/README.md",
       "repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
+      "hooks": [
+        "./hooks/hooks.json"
+      ],
       "category": "development",
-      "tags": ["git", "workflow", "commits", "branching"],
+      "tags": [
+        "git",
+        "workflow",
+        "commits",
+        "branching"
+      ],
       "license": "MIT"
     },
     {
       "name": "pr-review",
-      "version": "1.0.0",
+      "version": "9.0.1",
       "description": "Multi-agent pull request review with confidence scoring and actionable feedback",
       "source": "./plugins/pr-review",
       "author": {
@@ -145,14 +191,18 @@
       },
       "homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/pr-review/README.md",
```
|
||||||
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
|
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
|
||||||
"mcpServers": ["./.mcp.json"],
|
|
||||||
"category": "development",
|
"category": "development",
|
||||||
"tags": ["code-review", "pull-requests", "security", "quality"],
|
"tags": [
|
||||||
|
"code-review",
|
||||||
|
"pull-requests",
|
||||||
|
"security",
|
||||||
|
"quality"
|
||||||
|
],
|
||||||
"license": "MIT"
|
"license": "MIT"
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"name": "data-platform",
|
"name": "data-platform",
|
||||||
"version": "1.0.0",
|
"version": "9.0.1",
|
||||||
"description": "Data engineering tools with pandas, PostgreSQL/PostGIS, and dbt integration",
|
"description": "Data engineering tools with pandas, PostgreSQL/PostGIS, and dbt integration",
|
||||||
"source": "./plugins/data-platform",
|
"source": "./plugins/data-platform",
|
||||||
"author": {
|
"author": {
|
||||||
@@ -161,14 +211,20 @@
|
|||||||
},
|
},
|
||||||
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/data-platform/README.md",
|
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/data-platform/README.md",
|
||||||
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
|
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
|
||||||
"mcpServers": ["./.mcp.json"],
|
|
||||||
"category": "data",
|
"category": "data",
|
||||||
"tags": ["pandas", "postgresql", "postgis", "dbt", "data-engineering", "etl"],
|
"tags": [
|
||||||
|
"pandas",
|
||||||
|
"postgresql",
|
||||||
|
"postgis",
|
||||||
|
"dbt",
|
||||||
|
"data-engineering",
|
||||||
|
"etl"
|
||||||
|
],
|
||||||
"license": "MIT"
|
"license": "MIT"
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"name": "viz-platform",
|
"name": "viz-platform",
|
||||||
"version": "1.0.0",
|
"version": "9.0.1",
|
||||||
"description": "Visualization tools with Dash Mantine Components validation, Plotly charts, and theming",
|
"description": "Visualization tools with Dash Mantine Components validation, Plotly charts, and theming",
|
||||||
"source": "./plugins/viz-platform",
|
"source": "./plugins/viz-platform",
|
||||||
"author": {
|
"author": {
|
||||||
@@ -177,9 +233,210 @@
|
|||||||
},
|
},
|
||||||
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/viz-platform/README.md",
|
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/viz-platform/README.md",
|
||||||
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
|
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
|
||||||
"mcpServers": ["./.mcp.json"],
|
|
||||||
"category": "visualization",
|
"category": "visualization",
|
||||||
"tags": ["dash", "plotly", "mantine", "charts", "dashboards", "theming", "dmc"],
|
"tags": [
|
||||||
|
"dash",
|
||||||
|
"plotly",
|
||||||
|
"mantine",
|
||||||
|
"charts",
|
||||||
|
"dashboards",
|
||||||
|
"theming",
|
||||||
|
"dmc"
|
||||||
|
],
|
||||||
|
"license": "MIT"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "contract-validator",
|
||||||
|
"version": "9.0.1",
|
||||||
|
"description": "Cross-plugin compatibility validation and Claude.md agent verification",
|
||||||
|
"source": "./plugins/contract-validator",
|
||||||
|
"author": {
|
||||||
|
"name": "Leo Miranda",
|
||||||
|
"email": "leobmiranda@gmail.com"
|
||||||
|
},
|
||||||
|
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/contract-validator/README.md",
|
||||||
|
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
|
||||||
|
"category": "development",
|
||||||
|
"tags": [
|
||||||
|
"validation",
|
||||||
|
"contracts",
|
||||||
|
"compatibility",
|
||||||
|
"agents",
|
||||||
|
"interfaces",
|
||||||
|
"cross-plugin"
|
||||||
|
],
|
||||||
|
"license": "MIT"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "saas-api-platform",
|
||||||
|
"version": "0.1.0",
|
||||||
|
"description": "REST and GraphQL API scaffolding for FastAPI and Express projects",
|
||||||
|
"source": "./plugins/saas-api-platform",
|
||||||
|
"author": {
|
||||||
|
"name": "Leo Miranda",
|
||||||
|
"email": "leobmiranda@gmail.com"
|
||||||
|
},
|
||||||
|
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/saas-api-platform/README.md",
|
||||||
|
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
|
||||||
|
"category": "development",
|
||||||
|
"tags": [
|
||||||
|
"api",
|
||||||
|
"rest",
|
||||||
|
"graphql",
|
||||||
|
"fastapi",
|
||||||
|
"express",
|
||||||
|
"openapi"
|
||||||
|
],
|
||||||
|
"license": "MIT"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "saas-db-migrate",
|
||||||
|
"version": "0.1.0",
|
||||||
|
"description": "Database migration management for Alembic, Prisma, and raw SQL",
|
||||||
|
"source": "./plugins/saas-db-migrate",
|
||||||
|
"author": {
|
||||||
|
"name": "Leo Miranda",
|
||||||
|
"email": "leobmiranda@gmail.com"
|
||||||
|
},
|
||||||
|
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/saas-db-migrate/README.md",
|
||||||
|
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
|
||||||
|
"category": "development",
|
||||||
|
"tags": [
|
||||||
|
"database",
|
||||||
|
"migrations",
|
||||||
|
"alembic",
|
||||||
|
"prisma",
|
||||||
|
"sql",
|
||||||
|
"schema"
|
||||||
|
],
|
||||||
|
"license": "MIT"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "saas-react-platform",
|
||||||
|
"version": "0.1.0",
|
||||||
|
"description": "React frontend development toolkit for Next.js and Vite projects",
|
||||||
|
"source": "./plugins/saas-react-platform",
|
||||||
|
"author": {
|
||||||
|
"name": "Leo Miranda",
|
||||||
|
"email": "leobmiranda@gmail.com"
|
||||||
|
},
|
||||||
|
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/saas-react-platform/README.md",
|
||||||
|
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
|
||||||
|
"category": "development",
|
||||||
|
"tags": [
|
||||||
|
"react",
|
||||||
|
"nextjs",
|
||||||
|
"vite",
|
||||||
|
"typescript",
|
||||||
|
"frontend",
|
||||||
|
"components"
|
||||||
|
],
|
||||||
|
"license": "MIT"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "saas-test-pilot",
|
||||||
|
"version": "0.1.0",
|
||||||
|
"description": "Test automation toolkit for pytest, Jest, Vitest, and Playwright",
|
||||||
|
"source": "./plugins/saas-test-pilot",
|
||||||
|
"author": {
|
||||||
|
"name": "Leo Miranda",
|
||||||
|
"email": "leobmiranda@gmail.com"
|
||||||
|
},
|
||||||
|
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/saas-test-pilot/README.md",
|
||||||
|
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
|
||||||
|
"category": "development",
|
||||||
|
"tags": [
|
||||||
|
"testing",
|
||||||
|
"pytest",
|
||||||
|
"jest",
|
||||||
|
"vitest",
|
||||||
|
"playwright",
|
||||||
|
"coverage"
|
||||||
|
],
|
||||||
|
"license": "MIT"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "data-seed",
|
||||||
|
"version": "0.1.0",
|
||||||
|
"description": "Test data generation and database seeding with relationship-aware profiles",
|
||||||
|
"source": "./plugins/data-seed",
|
||||||
|
"author": {
|
||||||
|
"name": "Leo Miranda",
|
||||||
|
"email": "leobmiranda@gmail.com"
|
||||||
|
},
|
||||||
|
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/data-seed/README.md",
|
||||||
|
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
|
||||||
|
"category": "data",
|
||||||
|
"tags": [
|
||||||
|
"seed-data",
|
||||||
|
"test-data",
|
||||||
|
"faker",
|
||||||
|
"fixtures",
|
||||||
|
"database"
|
||||||
|
],
|
||||||
|
"license": "MIT"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "ops-release-manager",
|
||||||
|
"version": "0.1.0",
|
||||||
|
"description": "Release management with semantic versioning, changelogs, and tag automation",
|
||||||
|
"source": "./plugins/ops-release-manager",
|
||||||
|
"author": {
|
||||||
|
"name": "Leo Miranda",
|
||||||
|
"email": "leobmiranda@gmail.com"
|
||||||
|
},
|
||||||
|
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/ops-release-manager/README.md",
|
||||||
|
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
|
||||||
|
"category": "development",
|
||||||
|
"tags": [
|
||||||
|
"release",
|
||||||
|
"semver",
|
||||||
|
"changelog",
|
||||||
|
"versioning",
|
||||||
|
"tags"
|
||||||
|
],
|
||||||
|
"license": "MIT"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "ops-deploy-pipeline",
|
||||||
|
"version": "0.1.0",
|
||||||
|
"description": "CI/CD deployment pipeline management for Docker Compose and systemd services",
|
||||||
|
"source": "./plugins/ops-deploy-pipeline",
|
||||||
|
"author": {
|
||||||
|
"name": "Leo Miranda",
|
||||||
|
"email": "leobmiranda@gmail.com"
|
||||||
|
},
|
||||||
|
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/ops-deploy-pipeline/README.md",
|
||||||
|
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
|
||||||
|
"category": "infrastructure",
|
||||||
|
"tags": [
|
||||||
|
"deploy",
|
||||||
|
"docker-compose",
|
||||||
|
"systemd",
|
||||||
|
"caddy",
|
||||||
|
"cicd"
|
||||||
|
],
|
||||||
|
"license": "MIT"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "debug-mcp",
|
||||||
|
"version": "0.1.0",
|
||||||
|
"description": "MCP server debugging, inspection, and development toolkit",
|
||||||
|
"source": "./plugins/debug-mcp",
|
||||||
|
"author": {
|
||||||
|
"name": "Leo Miranda",
|
||||||
|
"email": "leobmiranda@gmail.com"
|
||||||
|
},
|
||||||
|
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/debug-mcp/README.md",
|
||||||
|
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
|
||||||
|
"category": "development",
|
||||||
|
"tags": [
|
||||||
|
"mcp",
|
||||||
|
"debugging",
|
||||||
|
"diagnostics",
|
||||||
|
"server",
|
||||||
|
"development"
|
||||||
|
],
|
||||||
"license": "MIT"
|
"license": "MIT"
|
||||||
}
|
}
|
||||||
]
|
]
|
||||||
|
|||||||
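Every plugin entry in the marketplace.json diff above carries the same field set (name, version, description, source, author, homepage, repository, category, tags, license). A minimal sanity-check sketch in Python: the required-key list is read off the diff, but the checker itself is an illustration, not Claude Code's actual schema validator (which, per the v9.1.2 changelog entry below, is stricter and also rejects unrecognized keys).

```python
import json

# Keys every plugin entry in the diff above carries (assumption: this set is
# what a marketplace consumer would treat as required).
REQUIRED_KEYS = {
    "name", "version", "description", "source", "author",
    "homepage", "repository", "category", "tags", "license",
}

def check_entries(entries):
    """Return (plugin-name, problem) pairs for malformed entries."""
    problems = []
    for entry in entries:
        name = entry.get("name", "<unnamed>")
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            problems.append((name, "missing keys: " + ", ".join(sorted(missing))))
        if not isinstance(entry.get("tags"), list):
            problems.append((name, "tags must be a JSON array"))
    return problems

# A well-formed entry (values abridged, URLs hypothetical) yields no problems.
entry = json.loads("""{
  "name": "clarity-assist", "version": "9.0.1",
  "description": "Prompt optimization", "source": "./plugins/clarity-assist",
  "author": {"name": "Leo Miranda"},
  "homepage": "https://example.invalid/README.md",
  "repository": "https://example.invalid/repo.git",
  "category": "productivity", "tags": ["prompts"], "license": "MIT"
}""")
print(check_entries([entry]))          # [] for a complete entry
print(check_entries([{"name": "x"}]))  # reports the missing keys
```

Checking cross-file consistency (plugin.json vs. marketplace.json versions) would follow the same pattern, comparing the two parsed documents entry by entry.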
.claude/settings.json (new file, 3 lines)
@@ -0,0 +1,3 @@
+{
+  "model": "opusplan"
+}
.gitignore (vendored, 10 changed lines)
@@ -31,6 +31,8 @@ venv/
 ENV/
 env/
 .venv/
+.venv
+**/.venv
 
 # PyCharm
 .idea/
@@ -82,6 +84,13 @@ Thumbs.db
 # Claude Code
 .claude/settings.local.json
 .claude/history/
+.claude/backups/
 
+# Doc Guardian transient files
+.doc-guardian-queue
+
+# Development convenience links
+.marketplaces-link
+
 # Logs
 logs/
@@ -123,4 +132,5 @@ site/
 *credentials*
 *secret*
 *token*
+!**/token-budget-report.md
 !.gitkeep
.mcp-full.json (new file, 24 lines)
@@ -0,0 +1,24 @@
+{
+  "mcpServers": {
+    "gitea": {
+      "command": "./mcp-servers/gitea/run.sh",
+      "args": []
+    },
+    "netbox": {
+      "command": "./mcp-servers/netbox/run.sh",
+      "args": []
+    },
+    "viz-platform": {
+      "command": "./mcp-servers/viz-platform/run.sh",
+      "args": []
+    },
+    "data-platform": {
+      "command": "./mcp-servers/data-platform/run.sh",
+      "args": []
+    },
+    "contract-validator": {
+      "command": "./mcp-servers/contract-validator/run.sh",
+      "args": []
+    }
+  }
+}
.mcp-lean.json (new file, 8 lines)
@@ -0,0 +1,8 @@
+{
+  "mcpServers": {
+    "gitea": {
+      "command": "./mcp-servers/gitea/run.sh",
+      "args": []
+    }
+  }
+}
.mcp.json (new file, 24 lines)
@@ -0,0 +1,24 @@
+{
+  "mcpServers": {
+    "gitea": {
+      "command": "./mcp-servers/gitea/run.sh",
+      "args": []
+    },
+    "netbox": {
+      "command": "./mcp-servers/netbox/run.sh",
+      "args": []
+    },
+    "viz-platform": {
+      "command": "./mcp-servers/viz-platform/run.sh",
+      "args": []
+    },
+    "data-platform": {
+      "command": "./mcp-servers/data-platform/run.sh",
+      "args": []
+    },
+    "contract-validator": {
+      "command": "./mcp-servers/contract-validator/run.sh",
+      "args": []
+    }
+  }
+}
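The three .mcp*.json variants above share one shape: a `mcpServers` map whose entries name a launch `command` and an `args` array. A hedged pre-flight sketch that the referenced run.sh scripts exist and are executable; the assumption that `command` resolves relative to the repository root is mine, inferred from the `./mcp-servers/...` paths, not documented launcher behavior.

```python
import json
import os

def missing_commands(config, root="."):
    """Names of servers whose command path is absent or not executable under root."""
    bad = []
    for name, spec in config["mcpServers"].items():
        path = os.path.join(root, spec["command"])
        if not (os.path.isfile(path) and os.access(path, os.X_OK)):
            bad.append(name)
    return bad

# Same shape as .mcp-lean.json above.
config = json.loads("""{
  "mcpServers": {
    "gitea": {"command": "./mcp-servers/gitea/run.sh", "args": []}
  }
}""")
# Run outside the repository, the gitea run.sh is reported as missing.
print(missing_commands(config))
```

The same check applies unchanged to the full profile, since .mcp-full.json and .mcp.json only add more entries to the same map.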
923
CHANGELOG.md
923
CHANGELOG.md
@@ -6,6 +6,923 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
|
|||||||
|
|
||||||
## [Unreleased]
|
## [Unreleased]
|
||||||
|
|
||||||
|
### Changed — BREAKING
|
||||||
|
|
||||||
|
#### NetBox MCP Server: Gutted to 37 Tools (from 182)
|
||||||
|
|
||||||
|
Removed all tools not needed for tracking applications, services, databases, and VPS/servers.
|
||||||
|
|
||||||
|
**Deleted modules (entire files removed):**
|
||||||
|
- `circuits.py` — providers, circuits, terminations
|
||||||
|
- `tenancy.py` — tenants, contacts
|
||||||
|
- `vpn.py` — tunnels, IKE/IPSec, L2VPN
|
||||||
|
- `wireless.py` — WLANs, links, groups
|
||||||
|
|
||||||
|
**Deleted tool categories within remaining modules:**
|
||||||
|
- DCIM: regions, locations, racks, rack roles, manufacturers, device types, platforms, cables, console ports, front/rear ports, module bays, power panels, power feeds, virtual chassis, inventory items
|
||||||
|
- IPAM: VLANs, VLAN groups, VRFs, ASNs, RIRs, aggregates, available IPs, available prefixes
|
||||||
|
- Virtualization: cluster types, cluster groups, all delete operations
|
||||||
|
- Extras: custom fields, webhooks, config contexts, update/delete for tags
|
||||||
|
|
||||||
|
**Deleted infrastructure:**
|
||||||
|
- `NETBOX_ENABLED_MODULES` env var and all module filtering code
|
||||||
|
- `ALL_MODULES` constant, `PREFIX_TO_MODULE` dict, `_get_tool_module()` function
|
||||||
|
- Conditional module instantiation in server.py
|
||||||
|
|
||||||
|
**Token impact:** ~19,810 → ~3,700 tokens (~81% reduction)
|
||||||
|
|
||||||
|
**Remaining tools (37):**
|
||||||
|
- DCIM: sites (4), devices (4), interfaces (3) = 11
|
||||||
|
- IPAM: IPs (4), prefixes (3), services (3) = 10
|
||||||
|
- Virtualization: clusters (3), VMs (4), VM interfaces (3) = 10
|
||||||
|
- Extras: tags (3), journal entries (3) = 6
|
||||||
|
|
||||||
|
### Added
|
||||||
|
|
||||||
|
- **All plugins:** Dispatch files now active command handlers — bare `/noun` shows available sub-commands and prompts for selection instead of doing nothing
|
||||||
|
- **git-flow:** New `/gitflow setup` command — auto-detects Gitea system config, configures workflow settings, injects CLAUDE.md git-flow section automatically
|
||||||
|
- **git-flow:** `claude-md-integration.md` moved to `skills/` for programmatic use by setup command
|
||||||
|
- **project-hygiene:** New `hygiene.md` dispatch file for bare `/hygiene` invocation
|
||||||
|
|
||||||
|
### Changed
|
||||||
|
|
||||||
|
- **gitea MCP:** Switched from local source (`mcp_server/`) to published `gitea-mcp>=1.0.0` package from Gitea PyPI registry
|
||||||
|
- **gitea MCP:** Module namespace changed: `mcp_server.server` → `gitea_mcp.server`
|
||||||
|
- **mcp-servers/gitea/:** Thinned to venv wrapper — source code removed, package installed from registry
|
||||||
|
- **All dispatch files:** Renamed `## Sub-commands` to `## Available Commands`, added `## Workflow` section
|
||||||
|
|
||||||
|
### Fixed
|
||||||
|
|
||||||
|
- **All plugins:** Bare `/noun` commands no longer dead-end — they display sub-command menus and prompt for selection
|
||||||
|
|
||||||
|
## [9.1.2] - 2026-02-07
|
||||||
|
|
||||||
|
### Fixed
|
||||||
|
|
||||||
|
- **BREAKING FIX:** Removed `"domain"` field from all `marketplace.json` and `plugin.json` files — Claude Code's strict schema validator rejects unrecognized keys, causing `Failed to load marketplace` error
|
||||||
|
- Domain metadata moved to `metadata.json` per plugin (same pattern as `mcp_servers`)
|
||||||
|
- `validate-marketplace.sh` updated to read domain from `metadata.json` instead of `marketplace.json`/`plugin.json`
|
||||||
|
- All documentation updated to reference `metadata.json` as domain source
|
||||||
|
|
||||||
|
## [9.1.1] - 2026-02-07
|
||||||
|
|
||||||
|
### Changed
|
||||||
|
|
||||||
|
- README.md fully rewritten — clean structure with plugins grouped by domain, accurate structure tree, all 10 scripts, all 7 docs
|
||||||
|
- CLAUDE.md structure tree updated to match README (was showing only 12 plugins, 3 scripts, 2 docs)
|
||||||
|
- doc-guardian `/doc sync` and `sync-workflow.md` updated to remove stale `.doc-guardian-queue` references (queue file deleted in v8.1.0)
|
||||||
|
|
||||||
|
### Removed
|
||||||
|
|
||||||
|
- `scripts/check-venv.sh` — dead code designed for SessionStart hooks that were never implemented; functionality covered by `setup-venvs.sh`
|
||||||
|
|
||||||
|
## [9.1.0] - 2026-02-07
|
||||||
|
|
||||||
|
### Added
|
||||||
|
|
||||||
|
- `docs/ARCHITECTURE.md` — Consolidated architecture document covering all 20 plugins, 5 MCP servers, hook inventory, agent model, launch profiles, and per-plugin command reference
|
||||||
|
|
||||||
|
### Changed
|
||||||
|
|
||||||
|
- All 12 original plugin versions bumped to 9.0.1 in both `plugin.json` and `marketplace.json` (were at various pre-9.x versions)
|
||||||
|
- `project-hygiene` description updated from "Post-task cleanup hook" to "Manual project hygiene checks" in both manifests; removed "hooks" and "automation" keywords
|
||||||
|
- `CANONICAL-PATHS.md` refreshed for v9.1.0: added Phase 3 plugins, added ARCHITECTURE.md and MIGRATION-v9.md, removed stale hooks/ dirs, updated Domain table
|
||||||
|
- `UPDATING.md` updated with all 5 MCP servers
|
||||||
|
- `COMMANDS-CHEATSHEET.md` expanded /rfc, /project, /adr to individual rows per sub-command
|
||||||
|
- `README.md` documentation table and structure tree updated; command rows normalized; project-hygiene description corrected
|
||||||
|
- `CLAUDE.md` documentation index updated with ARCHITECTURE.md and MIGRATION-v9.md; plugin version table updated; /rfc, /project, /adr commands expanded
|
||||||
|
|
||||||
|
### Removed
|
||||||
|
|
||||||
|
- `.doc-guardian-queue` — orphan file from deleted PostToolUse hook
|
||||||
|
- `.claude/backups/CLAUDE.md.2026-01-22_132037` — v3.0.1 backup, superseded by git history
|
||||||
|
- `scripts/switch-profile.sh` — deprecated in favor of `claude-launch.sh`
|
||||||
|
- `docs/architecture/` — stale pre-v3.0.0 Draw.io specs (replaced by `docs/ARCHITECTURE.md`)
|
||||||
|
- `docs/designs/` — Phase 3 design specs (implemented as plugin scaffolds, now redundant)
|
||||||
|
- `docs/prompts/` — moved to Gitea Wiki
|
||||||
|
|
||||||
|
## [9.0.1] - 2026-02-06
|
||||||
|
|
||||||
|
### Fixed
|
||||||
|
|
||||||
|
- **claude-config-maintainer:** `claude-config-audit-settings.md` Step 4 referenced deleted hooks.json files (doc-guardian, project-hygiene, data-platform, contract-validator) — updated to current hook inventory (code-sentinel, git-flow, cmdb-assistant, clarity-assist)
|
||||||
|
- **claude-config-maintainer:** `maintainer.md` agent referenced project-hygiene PostToolUse hooks — updated to current hook types
|
||||||
|
- **claude-config-maintainer:** `claude-config-audit-settings.md` output format referenced doc-guardian review layer — updated to git-flow, cmdb-assistant, clarity-assist
|
||||||
|
- **claude-config-maintainer:** `claude-config-audit-settings.md` Mermaid diagram referenced doc-guardian — updated to git-flow
|
||||||
|
- **claude-config-maintainer:** `claude-config-optimize-settings.md` reviewed profile prerequisites referenced doc-guardian PostToolUse — updated to git-flow PreToolUse
|
||||||
|
- **project-hygiene:** `claude-md-integration.md` described PostToolUse hook behavior that was removed in v8.1.0 — rewritten for manual `/hygiene check` command
|
||||||
|
- **doc-guardian:** `doc-sync.md` referenced doc-guardian hooks — updated to reference `/doc audit`
|
||||||
|
- **doc-guardian:** `sync-workflow.md` referenced PostToolUse hook — updated to note removal per Decision #29
|
||||||
|
- **projman:** `task-sizing.md` example referenced PostToolUse — updated to PreToolUse
|
||||||
|
- **docs:** `MIGRATION-v9.md` listed `/pm-debug`, `/suggest-version`, `/proposal-status` as renamed to `/projman` sub-commands — corrected to show as **Removed** (these were deleted in v8.1.0, not renamed in v9.0.0)
|
||||||
|
- **docs:** `CONFIGURATION.md` listed doc-guardian as "Commands and hooks only" — corrected to "Commands only"
|
||||||
|
- **scripts:** `setup.sh` referenced old `/labels-sync` command — updated to `/labels sync`
|
||||||
|
|
||||||
|
## [9.0.0] - 2026-02-06
|
||||||
|
|
||||||
|
### Added
|
||||||
|
|
||||||
|
- **Phase 3: 8 new plugin scaffolds**
|
||||||
|
- `saas-api-platform` (domain: saas) — REST/GraphQL API scaffolding for FastAPI and Express. 6 commands, 2 agents, 5 skills
|
||||||
|
- `saas-db-migrate` (domain: saas) — Database migration management for Alembic, Prisma, and raw SQL. 6 commands, 2 agents, 5 skills
|
||||||
|
- `saas-react-platform` (domain: saas) — React frontend toolkit for Next.js and Vite projects. 6 commands, 2 agents, 6 skills
|
||||||
|
- `saas-test-pilot` (domain: saas) — Test automation for pytest, Jest, Vitest, and Playwright. 6 commands, 2 agents, 6 skills
|
||||||
|
- `data-seed` (domain: data) — Test data generation and database seeding. 5 commands, 2 agents, 5 skills
|
||||||
|
- `ops-release-manager` (domain: ops) — Release management with SemVer, changelogs, and tag automation. 6 commands, 2 agents, 5 skills
|
||||||
|
- `ops-deploy-pipeline` (domain: ops) — CI/CD deployment pipeline for Docker Compose and systemd. 6 commands, 2 agents, 6 skills
|
||||||
|
- `debug-mcp` (domain: debug) — MCP server debugging, inspection, and development toolkit. 5 commands, 1 agent, 5 skills
|
||||||
|
- 8 design documents in `docs/designs/` for all new plugins
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## [9.0.0] - 2026-02-06
|
||||||
|
|
||||||
|
### BREAKING CHANGES
|
||||||
|
|
||||||
|
#### Command Consolidation (v9.0.0)
|
||||||
|
|
||||||
|
All commands renamed to `/<noun> <action>` sub-command pattern. Every command across all 12 plugins now follows this convention. See [MIGRATION-v9.md](./docs/MIGRATION-v9.md) for the complete old-to-new mapping.
|
||||||
|
|
||||||
|
**Key changes:**
|
||||||
|
- **projman:** `/sprint-plan` → `/sprint plan`, `/pm-setup` → `/projman setup`, `/pm-review` → `/sprint review`, `/pm-test` → `/sprint test`, `/labels-sync` → `/labels sync`
|
||||||
|
- **git-flow:** 8→5 commands. `/git-commit` → `/gitflow commit`. Three commit variants (`-push`, `-merge`, `-sync`) consolidated into `--push`/`--merge`/`--sync` flags. `/branch-start` → `/gitflow branch-start`, `/git-status` → `/gitflow status`, `/git-config` → `/gitflow config`
|
||||||
|
- **pr-review:** `/pr-review` → `/pr review`, `/project-init` → `/pr init`, `/project-sync` → `/pr sync`
|
||||||
|
- **clarity-assist:** `/clarify` → `/clarity clarify`, `/quick-clarify` → `/clarity quick-clarify`
|
||||||
|
- **doc-guardian:** `/doc-audit` → `/doc audit`, `/changelog-gen` → `/doc changelog-gen`, `/stale-docs` → `/doc stale-docs`
|
||||||
|
- **code-sentinel:** `/security-scan` → `/sentinel scan`, `/refactor` → `/sentinel refactor`
|
||||||
|
- **claude-config-maintainer:** `/config-analyze` → `/claude-config analyze` (all 8 commands prefixed)
|
||||||
|
- **contract-validator:** `/validate-contracts` → `/cv validate`, `/check-agent` → `/cv check-agent`
|
||||||
|
- **cmdb-assistant:** `/cmdb-search` → `/cmdb search`, `/change-audit` → `/cmdb change-audit`, `/ip-conflicts` → `/cmdb ip-conflicts`
|
||||||
|
- **data-platform:** `/data-ingest` → `/data ingest`, `/dbt-test` → `/data dbt-test`, `/lineage-viz` → `/data lineage-viz`
|
||||||
|
- **viz-platform:** `/accessibility-check` → `/viz accessibility-check`, `/design-gate` → `/viz design-gate`, `/design-review` → `/viz design-review`
|
||||||
|
|
||||||
|
### Added
|
||||||
|
|
||||||
|
- Dispatch files for all 12 plugins — each plugin now has a `<noun>.md` routing table listing all sub-commands
|
||||||
|
- `name:` frontmatter field added to all command files for sub-command resolution
|
||||||
|
- `docs/MIGRATION-v9.md` — Complete old-to-new command mapping for consumer migration
|
||||||
|
- `docs/COMMANDS-CHEATSHEET.md` — Full rewrite with v9.0.0 command names
|
||||||
|
|
||||||
|
### Changed
|
||||||
|
|
||||||
|
- All documentation updated with new command names: CLAUDE.md, README.md, CONFIGURATION.md, UPDATING.md, agent-workflow.spec.md, netbox/README.md
|
||||||
|
- All cross-plugin references updated (skills, agents, integration files)
|
||||||
|
- `marketplace.json` version bumped to 9.0.0
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## [8.1.0] - 2026-02-06
|
||||||
|
|
||||||
|
### BREAKING CHANGES
|
||||||
|
|
||||||
|
#### Hook Migration (v8.1.0)
|
||||||
|
|
||||||
|
All `SessionStart` and `PostToolUse` hooks removed. Only `PreToolUse` safety hooks and `UserPromptSubmit` quality hooks remain. Plugins that relied on automatic startup checks or post-write automation must use manual commands instead.
|
||||||
|
|
||||||
|
### Added
|
||||||
|
|
||||||
|
- **projman:** 7 new skills — `source-analysis`, `project-charter`, `adr-conventions`, `epic-conventions`, `wbs`, `risk-register`, `sprint-roadmap`
|
||||||
|
- **projman:** `/project` command family — `initiation`, `plan`, `status`, `close` for full project lifecycle management
|
||||||
|
- **projman:** `/adr` command family — `create`, `list`, `update`, `supersede` for Architecture Decision Records
|
||||||
|
- **projman:** Expanded `wiki-conventions.md` with dependency headers, R&D notes, page naming patterns
|
||||||
|
- **projman:** Epic/* labels (5) and RnD/* labels (4) added to label taxonomy
|
||||||
|
- **project-hygiene:** `/hygiene check` manual command replacing PostToolUse hook
|
||||||
|
- **contract-validator:** `/cv status` marketplace-wide health check command
|
||||||
|
|
||||||
|
### Changed
|
||||||
|
|
||||||
|
- `verify-hooks.sh` rewritten to validate post-migration hook inventory (4 plugins, 5 hooks)
|
||||||
|
- `config-permissions-map.md` updated to reflect reduced hook inventory
|
||||||
|
- `settings-optimization.md` updated for current hook landscape
|
||||||
|
- `sprint-plan.md` no longer loads `token-budget-report.md` skill
|
||||||
|
- `sprint-close.md` loads `rfc-workflow.md` conditionally; manual CHANGELOG review replaces `/suggest-version`
|
||||||
|
- `planner.md` and `orchestrator.md` no longer reference domain consultation or domain gates
|
||||||
|
- Label taxonomy updated from 43 to 58 labels (added Status/4, Domain/2, Epic/5, RnD/4)
|
||||||
|
|
||||||
|
### Removed
|
||||||
|
|
||||||
|
- **hooks:** 8 hooks.json files deleted (projman, pr-review, doc-guardian, project-hygiene, claude-config-maintainer, viz-platform, data-platform, contract-validator SessionStart/PostToolUse hooks)
|
||||||
|
- **hooks:** Orphaned shell scripts deleted (startup-check.sh, notify.sh, cleanup.sh, enforce-rules.sh, schema-diff-check.sh, auto-validate.sh, breaking-change-check.sh)
|
||||||
|
- **projman:** `/pm-debug`, `/suggest-version`, `/proposal-status` commands deleted
|
||||||
|
- **projman:** `domain-consultation.md` skill deleted
|
||||||
|
- **cmdb-assistant:** SessionStart hook removed (PreToolUse hook retained)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## [8.0.0] - 2026-02-06
|
||||||
|
|
||||||
|
### BREAKING CHANGES
|
||||||
|
|
||||||
|
#### Domain Metadata Required (v8.0.0)
|
||||||
|
|
||||||
|
All plugin manifests now require a `domain` field. `validate-marketplace.sh` rejects plugins without it.

### Added

- **marketplace:** `domain` field added to all 12 `plugin.json` files and all `marketplace.json` entries
- **marketplace:** Domain validation in `validate-marketplace.sh` — validates presence, allowed values, and cross-file consistency
- **marketplace:** New launch profiles: `saas`, `ops`, `debug` in `claude-launch.sh`
- **marketplace:** `data-seed` added to `data` launch profile (forward-looking)
- **docs:** Domain metadata conventions in `CANONICAL-PATHS.md`
- **docs:** Domain field requirements in `CLAUDE.md` "Adding a New Plugin" section

### Changed

- `validate-marketplace.sh` now requires `domain` in both `plugin.json` and `marketplace.json` (breaking change for validation pipeline)
- `claude-launch.sh` profiles expanded: sprint, data, saas, ops, review, debug, full

### Deprecated

- `infra` launch profile — use `ops` instead (auto-redirects with warning)

### Fixed

- Confirmed projman `metadata.json` exists with gitea MCP mapping
- Synced `marketplace-full.json` and `marketplace-lean.json` to current version (were stale)
- Added `metadata.json` validation to `validate-marketplace.sh` — rejects `mcp_servers` in `plugin.json`, verifies MCP server references
- Updated `CANONICAL-PATHS.md` to current version
- Deprecated `switch-profile.sh` in favor of `claude-launch.sh`

---

## [7.1.0] - 2026-02-04

### Added

- **marketplace:** Task-specific launcher script for token optimization
  - New script: `scripts/claude-launch.sh` loads only needed plugins via `--plugin-dir`
  - Profiles: sprint (default), review, data, infra, full
  - Reduces token overhead from ~22K to ~4-6K tokens
  - Enables `ENABLE_TOOL_SEARCH=true` for MCP lazy loading
- **marketplace:** Lean/full profile config files for manual switching (superseded by `claude-launch.sh`)
  - Files: `.mcp-lean.json`, `.mcp-full.json`, `marketplace-lean.json`, `marketplace-full.json`
  - Script `scripts/switch-profile.sh` available but `claude-launch.sh` is the recommended approach
  - Full profile remains the default baseline; launcher handles selective loading
- **projman:** Token usage estimation reporting at sprint workflow boundaries
  - New skill: `token-budget-report.md` with MCP overhead and skill loading estimation model
  - Token report displayed at end of `/sprint-plan` and `/sprint-close`
  - On-demand via `/sprint-status --tokens`
  - Helps identify which phases and components consume the most context budget

### Changed

- **projman:** `/sprint-status` now uses conditional skill loading for reduced token overhead
  - Only loads `mcp-tools-reference.md` by default (~1.5k tokens vs ~5k)
  - `--diagram` flag loads `dependency-management.md` and `progress-tracking.md`
  - `--tokens` flag loads `token-budget-report.md`
  - Estimated savings: ~3.5k tokens per status check

### Fixed

- **docs:** Stale command references in data-platform visual-header.md and viz-platform claude-md-integration.md updated to v7.0.0 namespaced names
- **docs:** git-flow visual-header.md and git-status.md quick actions updated to namespaced commands
- **docs:** projman/CONFIGURATION.md and docs/DEBUGGING-CHECKLIST.md updated with correct command names

---

## [7.0.0] - 2026-02-03

### BREAKING CHANGES

#### Command Namespace Rename

All generic command names are now prefixed with their plugin's namespace to eliminate collisions across the marketplace. This is a **breaking change** for consuming projects — update your CLAUDE.md integration snippets.

**Full Rename Map:**

| Plugin | Old | New |
|--------|-----|-----|
| projman | `/setup` | `/pm-setup` |
| projman | `/review` | `/pm-review` |
| projman | `/test` | `/pm-test` |
| projman | `/debug` | `/pm-debug` |
| git-flow | `/commit` | `/git-commit` |
| git-flow | `/commit-push` | `/git-commit-push` |
| git-flow | `/commit-merge` | `/git-commit-merge` |
| git-flow | `/commit-sync` | `/git-commit-sync` |
| pr-review | `/initial-setup` | `/pr-setup` |
| cmdb-assistant | `/initial-setup` | `/cmdb-setup` |
| data-platform | `/initial-setup` | `/data-setup` |
| data-platform | `/run` | `/data-run` |
| data-platform | `/ingest` | `/data-ingest` |
| data-platform | `/profile` | `/data-profile` |
| data-platform | `/schema` | `/data-schema` |
| data-platform | `/explain` | `/data-explain` |
| data-platform | `/lineage` | `/data-lineage` |
| viz-platform | `/initial-setup` | `/viz-setup` |
| viz-platform | `/theme` | `/viz-theme` |
| viz-platform | `/theme-new` | `/viz-theme-new` |
| viz-platform | `/theme-css` | `/viz-theme-css` |
| viz-platform | `/chart` | `/viz-chart` |
| viz-platform | `/chart-export` | `/viz-chart-export` |
| viz-platform | `/dashboard` | `/viz-dashboard` |
| viz-platform | `/component` | `/viz-component` |
| viz-platform | `/breakpoints` | `/viz-breakpoints` |
| contract-validator | `/initial-setup` | `/cv-setup` |

**Migration:** Update your project's CLAUDE.md integration snippets to use the new command names. Run `/plugin list` to verify installed plugins are using v7.0.0+.

**Unchanged:** Commands already using plugin-namespaced prefixes (`/sprint-*`, `/cmdb-*`, `/labels-sync`, `/branch-*`, `/git-status`, `/git-config`, `/pr-review`, `/pr-summary`, `/pr-findings`, `/pr-diff`, `/project-init`, `/project-sync`, `/config-*`, `/design-*`, `/data-quality`, `/data-review`, `/data-gate`, `/lineage-viz`, `/dbt-test`, `/accessibility-check`, `/validate-contracts`, `/check-agent`, `/list-interfaces`, `/dependency-graph`, `/doc-audit`, `/doc-sync`, `/security-scan`, `/refactor`, `/refactor-dry`, `/clarify`, `/suggest-version`, `/proposal-status`, `/rfc`, `/change-audit`, `/ip-conflicts`) are **not affected**.

### Added

#### Plan-Then-Batch Skill Optimization (projman)

New execution pattern that separates cognitive work from mechanical API operations, reducing skill-related token consumption by ~76-83% during sprint workflows.

- **`skills/batch-execution.md`** — New skill defining the plan-then-batch protocol:
  - Phase 1: Cognitive work with all skills loaded
  - Phase 2: Execution manifest (structured plan of all API operations)
  - Phase 3: Batch execute API calls using only frontmatter skills
  - Phase 4: Batch report with success/failure summary
  - Error handling: continue on individual failures, report at end

- **Frontmatter skill promotion:**
  - Planner agent: `mcp-tools-reference` and `batch-execution` promoted to frontmatter (auto-injected, zero re-read cost)
  - Orchestrator agent: same promotion
  - Eliminates per-operation skill file re-reads during API execution loops

- **Phase-based skill loading:**
  - Planner: 3 phases (validation → analysis → approval) with explicit "read once" instructions
  - Orchestrator: 2 phases (startup → dispatch) with same pattern
  - New `## Skill Loading Protocol` section replaces flat `## Skills to Load` in agent files

### Changed

- **`planning-workflow.md`** — Steps 8-10 restructured:
  - Step 8: "Draft Issue Specifications" (no API calls — resolve all parameters first)
  - Step 8a: "Batch Execute Issue Creation" (tight API loop, frontmatter skills only)
  - Step 9: Merged into Step 8a (dependencies created in batch)
  - Step 10: Milestone creation moved before batch (must exist for assignment)

- **Agent matrix updated:**
  - Planner: `body text (14)` → `frontmatter (2) + body text (12)`
  - Orchestrator: `body text (12)` → `frontmatter (2) + body text (10)`

- **`docs/CONFIGURATION.md`** — New "Phase-Based Skill Loading" subsection documenting the pattern

### Token Impact

| Scenario | Before | After | Savings |
|----------|--------|-------|---------|
| 6-issue sprint (planning) | ~23,800 lines | ~5,600 lines | ~76% |
| 10-issue sprint (planning) | ~35,000 lines | ~7,000 lines | ~80% |
| 8-issue status updates (orchestrator) | ~9,600 lines | ~1,600 lines | ~83% |

---

## [5.10.0] - 2026-02-03

### Added

#### NetBox MCP Server: Module-Based Tool Filtering

Environment-variable-driven module filtering to reduce token consumption:

- **New config option**: `NETBOX_ENABLED_MODULES` in `~/.config/claude/netbox.env`
- **Token savings**: ~15,000 tokens (from ~19,810 to ~4,500) with recommended config
- **Default behavior**: All modules enabled if env var unset (backward compatible)
- **Startup logging**: Shows enabled modules and tool count on initialization
- **Routing guard**: Clear error message when calling disabled module's tools

**Recommended configuration for cmdb-assistant users:**
```bash
NETBOX_ENABLED_MODULES=dcim,ipam,virtualization,extras
```

This enables ~43 tools covering all cmdb-assistant commands while staying well below the 25K token warning threshold.
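
For illustration, the env-file contract above could be read like this (a sketch under the documented defaults; the parsing is hypothetical, not the server's actual startup code):

```shell
# Sketch: return the configured module list, or "all" when the variable
# is unset, matching the documented backward-compatible default.
netbox_enabled_modules() {
  local cfg="${1:-$HOME/.config/claude/netbox.env}"
  local line
  line=$(grep '^NETBOX_ENABLED_MODULES=' "$cfg" 2>/dev/null | tail -n 1)
  if [ -n "$line" ]; then
    printf '%s\n' "${line#NETBOX_ENABLED_MODULES=}"
  else
    printf 'all\n'
  fi
}
```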

### Fixed

#### cmdb-assistant Documentation: Incorrect Tool Names

Fixed documentation referencing non-existent `virtualization_*` tool names:

| File | Wrong | Correct |
|------|-------|---------|
| `claude-md-integration.md` | `virtualization_list_virtual_machines` | `virt_list_vms` |
| `claude-md-integration.md` | `virtualization_create_virtual_machine` | `virt_create_vm` |
| `cmdb-search.md` | `virtualization_list_virtual_machines` | `virt_list_vms` |

Also fixed NetBox README.md tool name references for virtualization, wireless, and circuits modules.

#### Gitea MCP Server: Standardized Build Backend

Changed `mcp-servers/gitea/pyproject.toml` from hatchling to setuptools:
- Matches all other MCP servers (contract-validator, viz-platform, data-platform)
- Updated license format to PEP 639 compliance
- Added pytest configuration for consistency

---

## [5.9.0] - 2026-02-03

### Added

#### Plugin Installation Scripts
New scripts for installing marketplace plugins into consumer projects:

- **`scripts/install-plugin.sh`** — Install a plugin to a consumer project
  - Adds MCP server entry to target's `.mcp.json` (if plugin has MCP server)
  - Appends integration snippet to target's `CLAUDE.md`
  - Idempotent: safe to run multiple times
  - Validates plugin exists and target path is valid

- **`scripts/uninstall-plugin.sh`** — Remove a plugin from a consumer project
  - Removes MCP server entry from `.mcp.json`
  - Removes integration section from `CLAUDE.md`

- **`scripts/list-installed.sh`** — Show installed plugins in a project
  - Lists fully installed, partially installed, and available plugins
  - Shows plugin versions and descriptions

**Usage:**
```bash
./scripts/install-plugin.sh data-platform ~/projects/personal-portfolio
./scripts/list-installed.sh ~/projects/personal-portfolio
./scripts/uninstall-plugin.sh data-platform ~/projects/personal-portfolio
```

**Documentation:** `docs/CONFIGURATION.md` updated with "Installing Plugins to Consumer Projects" section.

### Fixed

#### Plugin Installation Scripts — MCP Mapping & Section Markers

**MCP Server Mapping:**
- Added `mcp_servers` field to plugin.json for plugins that use shared MCP servers
- `projman` and `pr-review` now correctly install `gitea` MCP server
- `cmdb-assistant` now correctly installs `netbox` MCP server
- Scripts read MCP server names from plugin.json instead of assuming plugin name = server name

**CLAUDE.md Section Markers:**
- Install script now wraps integration content with HTML comment markers:
  `<!-- BEGIN marketplace-plugin: {name} -->` and `<!-- END marketplace-plugin: {name} -->`
- Uninstall script uses markers for precise section removal (no more code block false positives)
- Backward compatible: falls back to legacy header detection for pre-marker installations
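
The marker-based removal can be sketched as a single `sed` range delete (the function name is illustrative and GNU `sed -i` is assumed; the real uninstall script also handles the legacy-header fallback described above):

```shell
# Sketch: delete a plugin's marked section from CLAUDE.md, markers included.
remove_plugin_section() {
  local name="$1" file="$2"
  sed -i "/<!-- BEGIN marketplace-plugin: ${name} -->/,/<!-- END marketplace-plugin: ${name} -->/d" "$file"
}
```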

**Plugins updated with `mcp_servers` field:**
- `projman` → `["gitea"]`
- `pr-review` → `["gitea"]`
- `cmdb-assistant` → `["netbox"]`
- `data-platform` → `["data-platform"]`
- `viz-platform` → `["viz-platform"]`
- `contract-validator` → `["contract-validator"]`

#### Agent Model Selection

Per-agent model selection using Claude Code's now-supported `model` frontmatter field.

- All 25 marketplace agents assigned appropriate model (`sonnet`, `haiku`, or `inherit`)
- Model assignment based on reasoning depth, tool complexity, and latency requirements
- Documentation added to `CLAUDE.md` and `docs/CONFIGURATION.md`

**Supported values:** `sonnet` (default), `opus`, `haiku`, `inherit`

**Model assignments:**
| Model | Agent Types |
|-------|-------------|
| sonnet | Planner, Orchestrator, Executor, Code Reviewer, Coordinator, Security Reviewers, Data Advisor, Design Reviewer, etc. |
| haiku | Maintainability Auditor, Test Validator, Component Check, Theme Setup, Git Assistant, Data Ingestion, Agent Check |

#### Agent Frontmatter Standardization

- Fixed viz-platform and data-platform agents using non-standard `agent:` field (now `name:`)
- Removed non-standard `triggers:` field from domain agents (trigger info already in agent body)
- Added missing frontmatter to 13 agents across pr-review, viz-platform, contract-validator, clarity-assist, git-flow, doc-guardian, code-sentinel, cmdb-assistant, and data-platform
- All 25 agents now have consistent `name`, `description`, and `model` fields

### Changed

#### Agent Frontmatter Hardening v3

Comprehensive agent-level configuration using Claude Code's supported frontmatter fields.

**permissionMode added to all 25 agents:**
- `bypassPermissions` (1): Executor — full autonomy with code-sentinel + Code Reviewer safety nets
- `acceptEdits` (7): Orchestrator, Data Ingestion, Theme Setup, Refactor Advisor, Doc Analyzer, Git Assistant, Maintainer
- `default` (7): Planner, Code Reviewer, Data Advisor, Layout Builder, Full Validation, Clarity Coach, CMDB Assistant
- `plan` (10): All pr-review agents (5), Data Analysis, Design Reviewer, Component Check, Agent Check, Security Reviewer (code-sentinel)

**disallowedTools added to 12 agents:**
- All `plan`-mode agents (10) + Code Reviewer + Clarity Coach receive `disallowedTools: Write, Edit, MultiEdit`
- Enforces read-only contracts at platform level (defense-in-depth with `permissionMode`)

**Model promotions:**
- Planner: `sonnet` → `opus` (architectural reasoning benefits from deeper analysis)
- Code Reviewer: `sonnet` → `opus` (quality gate benefits from thorough review)

**skills frontmatter on 3 agents:**
- Executor: 7 safety-critical skills auto-injected (branch-security, runaway-detection, etc.)
- Code Reviewer: 4 review skills auto-injected
- Maintainer: 2 config skills auto-injected
- Body text `## Skills to Load` removed for these agents to avoid duplication

**Documentation:**
- `CLAUDE.md` and `docs/CONFIGURATION.md` updated with complete agent configuration matrix
- New subsections: permissionMode Guide, disallowedTools Guide, skills Frontmatter Guide

---

## [5.8.0] - 2026-02-02

### Added

#### claude-config-maintainer v1.2.0 - Settings Audit Feature

New commands for auditing and optimizing `settings.local.json` permission configurations:

- **`/config-audit-settings`** — Audit `settings.local.json` permissions with 100-point scoring across redundancy, coverage, safety alignment, and profile fit
- **`/config-optimize-settings`** — Apply permission optimizations with dry-run, named profiles (`conservative`, `reviewed`, `autonomous`), and consolidation modes
- **`/config-permissions-map`** — Generate Mermaid diagram of review layer coverage and permission gaps
- **`skills/settings-optimization.md`** — Comprehensive skill for permission pattern analysis, consolidation rules, review-layer-aware recommendations, and named profiles

**Key Features:**
- Settings Efficiency Score (100 points) alongside existing CLAUDE.md score
- Review layer verification — agent reads `hooks/hooks.json` from installed plugins before recommending auto-allow patterns
- Three named profiles: `conservative` (prompts for most writes), `reviewed` (for projects with ≥2 review layers), `autonomous` (sandboxed environments)
- Pattern consolidation detection: duplicates, subsets, merge candidates, stale entries, conflicts

#### Projman Hardening Sprint
Targeted improvements to safety gates, command structure, lifecycle tracking, and cross-plugin contracts.

**Sprint Lifecycle State Machine:**
- New `skills/sprint-lifecycle.md` - defines valid states and transitions via milestone metadata
- States: idle → Sprint/Planning → Sprint/Executing → Sprint/Reviewing → idle
- All sprint commands check and set lifecycle state on entry/exit
- Out-of-order calls produce warnings with guidance; a `--force` override is available

**Sprint Dispatch Log:**
- Orchestrator now maintains a structured dispatch log during execution
- Records task dispatch, completion, failure, gate checks, and resume events
- Enables timeline reconstruction after interrupted sessions

**Gate Contract Versioning:**
- Gate commands (`/design-gate`, `/data-gate`) declare `gate_contract: v1` in frontmatter
- `domain-consultation.md` Gate Command Reference includes expected contract version
- `validate_workflow_integration` now checks contract version compatibility
- Mismatch produces a WARNING; a missing contract produces an INFO suggestion

**Shared Visual Output Skill:**
- New `skills/visual-output.md` - single source of truth for projman visual headers
- All 4 agent files reference the skill instead of inline templates
- Phase Registry maps agents to emoji and phase names

### Changed

**Sprint Approval Gate Hardened:**
- Approval is now a hard block, not a warning (was "recommended", now required)
- `--force` flag added to bypass in emergencies (logged to milestone)
- Consistent language across sprint-approval.md, sprint-start.md, and orchestrator.md

**RFC Commands Normalized:**
- 5 individual commands (`/rfc-create`, `/rfc-list`, `/rfc-review`, `/rfc-approve`, `/rfc-reject`) consolidated into `/rfc create|list|review|approve|reject`
- `/clear-cache` absorbed into `/setup --clear-cache`
- Command count reduced from 17 to 12

**`/test` Command Documentation Expanded:**
- Sprint integration section (pre-close verification workflow)
- Concrete usage examples for all modes
- Edge cases table
- DO NOT rules for both modes

### Removed

- `plugins/projman/commands/rfc-create.md` (replaced by `/rfc create`)
- `plugins/projman/commands/rfc-list.md` (replaced by `/rfc list`)
- `plugins/projman/commands/rfc-review.md` (replaced by `/rfc review`)
- `plugins/projman/commands/rfc-approve.md` (replaced by `/rfc approve`)
- `plugins/projman/commands/rfc-reject.md` (replaced by `/rfc reject`)
- `plugins/projman/commands/clear-cache.md` (replaced by `/setup --clear-cache`)

---

## [5.7.1] - 2026-02-02

### Added
- **contract-validator**: New `validate_workflow_integration` MCP tool — validates that domain plugins expose the required advisory interfaces (gate command, review command, advisory agent)
- **contract-validator**: New `MISSING_INTEGRATION` issue type for workflow integration validation

### Fixed
- `scripts/setup.sh` banner version updated from v5.1.0 to v5.7.1

### Reverted
- **marketplace.json**: Removed `integrates_with` field — Claude Code schema does not support custom plugin fields (causes marketplace load failure)

---

## [5.7.0] - 2026-02-02

### Added
- **data-platform**: New `data-advisor` agent for data integrity, schema, and dbt compliance validation
- **data-platform**: New `data-integrity-audit.md` skill defining audit rules, severity levels, and scanning strategies
- **data-platform**: New `/data-gate` command for binary pass/fail data integrity gates (projman integration)
- **data-platform**: New `/data-review` command for comprehensive data integrity audits

### Changed
- Domain Advisory Pattern now fully operational for both Viz and Data domains
- projman orchestrator `Domain/Data` gates now resolve to live `/data-gate` command (previously fell through to "gate unavailable" warning)

---

## [5.6.0] - 2026-02-01

### Added
- **Domain Advisory Pattern**: Cross-plugin integration enabling projman to consult domain-specific plugins during sprint lifecycle
- **projman**: New `domain-consultation.md` skill for domain detection and gate protocols
- **viz-platform**: New `design-reviewer` agent for design system compliance auditing
- **viz-platform**: New `design-system-audit.md` skill defining audit rules and severity levels
- **viz-platform**: New `/design-review` command for detailed design system audits
- **viz-platform**: New `/design-gate` command for binary pass/fail validation gates
- **Labels**: New `Domain/Viz` and `Domain/Data` labels for domain routing

### Changed
- **projman planner**: Now loads domain-consultation skill and performs domain detection during planning
- **projman orchestrator**: Now runs domain gates before marking Domain/* labeled issues as complete

---

## [5.5.0] - 2026-02-01

### Added

#### RFC System for Feature Tracking
Wiki-based Request for Comments (RFC) system for capturing, reviewing, and tracking feature ideas through their lifecycle.

**New Commands (projman):**
- `/rfc-create` - Create new RFC from conversation or clarified specification
- `/rfc-list` - List all RFCs grouped by status (Draft, Review, Approved, Implementing, Implemented, Rejected, Stale)
- `/rfc-review` - Submit Draft RFC for maintainer review
- `/rfc-approve` - Approve RFC, making it available for sprint planning
- `/rfc-reject` - Reject RFC with documented reason

**RFC Lifecycle:**
- Draft → Review → Approved → Implementing → Implemented
- Terminal states: Rejected, Superseded
- Stale: Drafts with no activity >90 days

**Sprint Integration:**
- `/sprint-plan` now detects approved RFCs and offers selection
- `/sprint-close` updates RFC status to Implemented on completion
- RFC-Index wiki page auto-maintained with status sections

**Clarity-Assist Integration:**
- Vagueness hook now detects feature request patterns
- Suggests `/rfc-create` for feature ideas
- `/clarify` offers RFC creation after delivering clarified spec

**New MCP Tool:**
- `allocate_rfc_number` - Allocates next sequential RFC number

**New Skills:**
- `skills/rfc-workflow.md` - RFC lifecycle and state transitions
- `skills/rfc-templates.md` - RFC page template specifications

### Changed

#### Sprint 8: Hook Efficiency Quick Wins
Performance optimizations for plugin hooks to reduce overhead on every command.

**Changes:**
- **viz-platform:** Remove SessionStart hook that only echoed "loaded" (zero value)
- **git-flow:** Add early exit to `branch-check.sh` for non-git commands (skip JSON parsing)
- **git-flow:** Add early exit to `commit-msg-check.sh` for non-git commands (skip Python spawn)
- **project-hygiene:** Add 60-second cooldown to `cleanup.sh` (reduce find operations)

**Impact:** Hooks now exit immediately for 90%+ of Bash commands that don't need processing.
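
The early-exit pattern can be illustrated with a small guard (a hypothetical helper; the real hooks read a JSON payload from Claude Code, which this sketch reduces to a plain command string):

```shell
# Sketch: succeed only for git invocations, so a hook can bail out with
# `is_git_command "$cmd" || exit 0` before any JSON parsing or Python spawn.
is_git_command() {
  case "$1" in
    git|git\ *) return 0 ;;
    *) return 1 ;;
  esac
}
```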

**Issues:** #321, #322, #323, #324
**PR:** #334

---

## [5.4.1] - 2026-01-30

### Removed

#### Multi-Model Agent Support (REVERTED)

**Reason:** Claude Code does not support `defaultModel` in plugin.json or `model` in agent frontmatter. The schema validation rejects these as "Unrecognized key".

**Removed:**
- `defaultModel` field from all plugin.json files (6 plugins)
- `model` field references from agent frontmatter
- `docs/MODEL-RECOMMENDATIONS.md` - Deleted entirely
- Model configuration sections from `docs/CONFIGURATION.md` and `CLAUDE.md`

**Lesson:** Do not implement features without verifying they are supported by Claude Code's plugin schema.

---

## [5.4.0] - 2026-01-28 [REVERTED]

### Added (NOW REMOVED - See 5.4.1)

#### Sprint 7: Multi-Model Agent Support
~~Configurable model selection for agents with inheritance chain.~~

**This feature was reverted in 5.4.1 - Claude Code does not support these fields.**

Original sprint work:
- Issues: #302, #303, #304, #305, #306
- PRs: #307, #308

---

## [5.3.0] - 2026-01-28

### Added

#### Sprint 6: Visual Branding Overhaul
Consistent visual headers and progress tracking across all plugins.

**Visual Output Headers (109 files):**
- **Projman**: Double-line headers (╔═╗) with phase indicators (🎯 PLANNING, ⚡ EXECUTION, 🏁 CLOSING)
- **Other Plugins**: Single-line headers (┌─┐) with plugin icons
- **All 23 agents** updated with Visual Output Requirements section
- **All 86 commands** updated with Visual Output section and header templates

**Plugin Icon Registry:**
| Plugin | Icon |
|--------|------|
| projman | 📋 |
| code-sentinel | 🔒 |
| doc-guardian | 📝 |
| pr-review | 🔍 |
| clarity-assist | 💬 |
| git-flow | 🔀 |
| cmdb-assistant | 🖥️ |
| data-platform | 📊 |
| viz-platform | 🎨 |
| contract-validator | ✅ |
| claude-config-maintainer | ⚙️ |

**Wiki Branding Specification (4 pages):**
- [branding/visual-spec](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/branding%2Fvisual-spec) - Central specification
- [branding/plugin-registry](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/branding%2Fplugin-registry) - Icons and styles
- [branding/header-templates](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/branding%2Fheader-templates) - Copy-paste templates
- [branding/progress-templates](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/branding%2Fprogress-templates) - Sprint progress blocks

### Fixed
- **Docs:** Version sync - CLAUDE.md, marketplace.json, README.md now consistent
- **Docs:** Added 18 missing commands from Sprint 4 & 5 to README.md and COMMANDS-CHEATSHEET.md
- **MCP:** Registered `/sprint-diagram` as invokable skill

**Sprint Completed:**
- Milestone: Sprint 6 - Visual Branding Overhaul (closed 2026-01-28)
- Issues: #272, #273, #274, #275, #276, #277, #278
- PRs: #284, #285
- Wiki: [Sprint 6 Lessons](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/lessons/sprints/sprint-6---visual-branding-and-documentation-maintenance)

---

## [5.2.0] - 2026-01-28

### Added

#### Sprint 5: Documentation (V5.2.0 Plugin Enhancements)
Documentation and guides for the plugin enhancements initiative.

**git-flow v1.2.0:**
- **Branching Strategy Guide** (`docs/BRANCHING-STRATEGY.md`) - Complete documentation of `development -> staging -> main` promotion flow with Mermaid diagrams

**clarity-assist v1.2.0:**
- **ND Support Guide** (`docs/ND-SUPPORT.md`) - Documentation of neurodivergent accommodations, features, and usage examples

**Gitea MCP Server:**
- **`update_issue` milestone parameter** - Can now assign/change milestones programmatically

**Sprint Completed:**
- Milestone: Sprint 5 - Documentation (closed 2026-01-28)
- Issues: #266, #267, #268, #269
- Wiki: [Change V5.2.0: Plugin Enhancements (Sprint 5 Documentation)](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/Change-V5.2.0%3A-Plugin-Enhancements-%28Sprint-5-Documentation%29)

---
|
||||||
|
|
||||||
|
#### Sprint 4: Commands (V5.2.0 Plugin Enhancements)

Implementation of 18 new user-facing commands across 8 plugins.

**projman v3.3.0:**

- **`/sprint-diagram`** - Generate a Mermaid diagram of sprint issues with dependencies and status

**pr-review v1.1.0:**

- **`/pr-diff`** - Formatted diff with inline review comments and annotations
- **Confidence threshold config** - `PR_REVIEW_CONFIDENCE_THRESHOLD` env var (default: 0.7)
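As a sketch of how such a threshold might be consumed, assuming the env var name from the entry above (the helper names are illustrative, not the plugin's actual API):

```python
import os

def get_confidence_threshold(default: float = 0.7) -> float:
    """Read PR_REVIEW_CONFIDENCE_THRESHOLD, falling back to the default."""
    raw = os.environ.get("PR_REVIEW_CONFIDENCE_THRESHOLD")
    if raw is None:
        return default
    value = float(raw)
    if not 0.0 <= value <= 1.0:
        raise ValueError(f"threshold must be in [0, 1], got {value}")
    return value

def should_report(finding_confidence: float) -> bool:
    # Findings below the threshold are suppressed from the review output.
    return finding_confidence >= get_confidence_threshold()
```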
**data-platform v1.2.0:**

- **`/data-quality`** - DataFrame quality checks (nulls, duplicates, types, outliers) with pass/warn/fail scoring
- **`/lineage-viz`** - dbt lineage visualization as Mermaid diagrams
- **`/dbt-test`** - Formatted dbt test runner with summary and failure details
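A minimal sketch of the kind of null/duplicate checks and pass/warn/fail scoring that `/data-quality` describes (the thresholds and function name are illustrative assumptions, not the plugin's implementation):

```python
import pandas as pd

def quality_report(df: pd.DataFrame,
                   warn_at: float = 0.05, fail_at: float = 0.20) -> dict:
    """Score a DataFrame on null rows and duplicate rows."""
    n = len(df)
    null_ratio = df.isna().any(axis=1).sum() / n if n else 0.0
    dup_ratio = df.duplicated().sum() / n if n else 0.0
    worst = max(null_ratio, dup_ratio)
    # pass below the warn threshold, fail at or above the fail threshold
    status = "pass" if worst < warn_at else ("warn" if worst < fail_at else "fail")
    return {"rows": n,
            "null_row_ratio": round(float(null_ratio), 3),
            "duplicate_ratio": round(float(dup_ratio), 3),
            "status": status}
```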
**viz-platform v1.1.0:**

- **`/chart-export`** - Export charts to PNG, SVG, or PDF via kaleido
- **`/accessibility-check`** - Color-blind validation (WCAG contrast ratios)
- **`/breakpoints`** - Responsive layout breakpoint configuration
- **New MCP tools**: `chart_export`, `accessibility_validate_colors`, `accessibility_validate_theme`, `accessibility_suggest_alternative`, `layout_set_breakpoints`
- **New dependency**: `kaleido>=0.2.1` for chart rendering
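The contrast-ratio check above rests on the WCAG 2.x relative-luminance formula; a self-contained sketch (function names are illustrative, the math follows the WCAG definition):

```python
def _channel(c: int) -> float:
    # Linearize one sRGB channel per the WCAG relative-luminance definition.
    s = c / 255.0
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple) -> float:
    r, g, b = rgb
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def passes_aa(fg: tuple, bg: tuple, large_text: bool = False) -> bool:
    # WCAG AA requires 4.5:1 for normal text, 3:1 for large text.
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)
```

Black on white yields the maximum ratio of 21:1; two close grays fail AA.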
**contract-validator v1.2.0:**

- **`/dependency-graph`** - Mermaid visualization of plugin dependencies with data flow

**doc-guardian v1.1.0:**

- **`/changelog-gen`** - Generate a changelog from conventional commits
- **`/doc-coverage`** - Documentation coverage metrics by function/class
- **`/stale-docs`** - Flag documentation that has fallen behind code changes
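The core idea behind `/changelog-gen` can be sketched as grouping conventional-commit subjects into Keep-a-Changelog sections; the section mapping below is an illustrative assumption, not the plugin's actual rules:

```python
import re
from collections import defaultdict

SECTION = {"feat": "Added", "fix": "Fixed", "docs": "Documentation"}
SUBJECT_RE = re.compile(r"^(?P<type>\w+)(\([^)]*\))?!?: (?P<desc>.+)$")

def group_commits(subjects):
    """Bucket conventional-commit subjects into changelog sections."""
    sections = defaultdict(list)
    for s in subjects:
        m = SUBJECT_RE.match(s)
        # Types without a mapped section (chore, ci, ...) are skipped.
        if m and m.group("type") in SECTION:
            sections[SECTION[m.group("type")]].append(m.group("desc"))
    return dict(sections)
```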
**claude-config-maintainer v1.1.0:**

- **`/config-diff`** - Track CLAUDE.md changes over time with behavioral impact analysis
- **`/config-lint`** - 31 lint rules for CLAUDE.md (security, structure, content, format, best practices)

**cmdb-assistant v1.2.0:**

- **`/cmdb-topology`** - Infrastructure topology diagrams (rack, network, site views)
- **`/change-audit`** - NetBox audit trail queries with filtering
- **`/ip-conflicts`** - Detect IP conflicts and overlapping prefixes

**Sprint Completed:**

- Milestone: Sprint 4 - Commands (closed 2026-01-28)
- Issues: #241-#258 (18/18 closed)
- Wiki: [Change V5.2.0: Plugin Enhancements (Sprint 4 Commands)](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/Change-V5.2.0%3A-Plugin-Enhancements-%28Sprint-4-Commands%29)
- Lessons: [Sprint 4 - Plugin Commands Implementation](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/lessons/sprints/sprint-4---plugin-commands-implementation)
### Fixed

- **MCP:** Project directory detection - all run.sh scripts now capture `CLAUDE_PROJECT_DIR` from `PWD` before changing directories
- **Docs:** Added Gitea auto-close behavior and MCP session restart notes to DEBUGGING-CHECKLIST.md

---
#### Sprint 3: Hooks (V5.2.0 Plugin Enhancements)

Implementation of 6 foundational hooks across 4 plugins.

**git-flow v1.1.0:**

- **Commit message enforcement hook** - PreToolUse hook validates conventional commit format on all `git commit` commands (not just `/commit`). Blocks invalid commits with format guidance.
- **Branch name validation hook** - PreToolUse hook validates branch naming on `git checkout -b` and `git switch -c`. Enforces `type/description` format, lowercase, max 50 chars.
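The two validations above can be sketched with regexes; the exact type list is an assumption based on the Conventional Commits convention, while the branch constraints (type/description, lowercase, max 50 chars) come from the entry itself:

```python
import re

COMMIT_RE = re.compile(
    r"^(feat|fix|docs|style|refactor|perf|test|build|ci|chore|revert)"
    r"(\([a-z0-9-]+\))?!?: .+"
)
BRANCH_RE = re.compile(r"^[a-z]+/[a-z0-9][a-z0-9-]*$")

def valid_commit_message(msg: str) -> bool:
    """Check only the subject line against the conventional format."""
    return bool(COMMIT_RE.match(msg.splitlines()[0]))

def valid_branch_name(name: str) -> bool:
    # type/description, all lowercase, at most 50 characters
    return len(name) <= 50 and bool(BRANCH_RE.match(name))
```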
**clarity-assist v1.1.0:**

- **Vagueness detection hook** - UserPromptSubmit hook detects vague prompts and suggests `/clarify` when ambiguity, missing context, or unclear scope is detected.

**data-platform v1.1.0:**

- **Schema diff detection hook** - PostToolUse hook monitors edits to schema files (dbt models, SQL migrations). Warns on breaking changes (column removal, type narrowing, constraint addition).
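Two of the breaking-change categories the hook watches for (column removal and type narrowing) can be sketched over a simple `{column: type}` model; the column/type representation and the narrowing pairs are illustrative assumptions:

```python
# Pairs (old_type, new_type) treated as a narrowing conversion.
NARROWING = {("bigint", "int"), ("text", "varchar"), ("float", "int")}

def breaking_changes(old: dict, new: dict) -> list:
    """Compare {column: type} mappings and list breaking changes."""
    problems = []
    for col, old_type in old.items():
        if col not in new:
            problems.append(f"column removed: {col}")
        elif (old_type, new[col]) in NARROWING:
            problems.append(f"type narrowed: {col} {old_type} -> {new[col]}")
    return problems
```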
**contract-validator v1.1.0:**

- **SessionStart auto-validate hook** - Smart validation that only runs when plugin files have changed since the last check. Detects interface compatibility issues at session start.
- **Breaking change detection hook** - PostToolUse hook monitors plugin interface files (README.md, plugin.json). Warns when changes would break consumers.

**Sprint Completed:**

- Milestone: Sprint 3 - Hooks (closed 2026-01-28)
- Issues: #225, #226, #227, #228, #229, #230
- Wiki: [Change V5.2.0: Plugin Enhancements Proposal](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/Change-V5.2.0:-Plugin-Enhancements-Proposal)
- Lessons: Background agent permissions, agent runaway detection, MCP branch detection bug
### Known Issues

- **MCP Bug #231:** Branch detection in the Gitea MCP runs from the installed plugin directory, not the user's project directory. Workaround: close issues via the Gitea web UI.

---
#### Gitea MCP Server - create_pull_request Tool

- **`create_pull_request`**: Create new pull requests via MCP
  - Parameters: title, body, head (source branch), base (target branch), labels
  - Branch-aware security: only allowed on development/feature branches
  - Completes the PR lifecycle (previously only list/get/review/comment were available)
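The branch-aware guard described above might look like the following sketch; the policy details (which branches count as feature branches, the protected set) are assumptions drawn from this repo's own rules, not the server's actual code:

```python
PROTECTED = {"main", "master", "staging"}

def pr_creation_allowed(head_branch: str) -> bool:
    """Allow PR creation only from development or type/description branches."""
    if head_branch in PROTECTED:
        return False
    return head_branch == "development" or "/" in head_branch

def build_pr_params(title, body, head, base, labels=()):
    # Refuse to even assemble the tool call from a disallowed branch.
    if not pr_creation_allowed(head):
        raise PermissionError(f"create_pull_request not allowed from {head!r}")
    return {"title": title, "body": body, "head": head,
            "base": base, "labels": list(labels)}
```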
#### cmdb-assistant v1.1.0 - Data Quality Validation

- **SessionStart Hook**: Tests NetBox API connectivity at session start
  - Warns if VMs exist without site assignment
  - Warns if devices exist without platform
  - Non-blocking: displays warning, doesn't prevent work
- **PreToolUse Hook**: Validates input parameters before VM/device operations
  - Warns about missing site, tenant, platform
  - Non-blocking: suggests best practices without blocking
- **`/cmdb-audit` Command**: Comprehensive data quality analysis
  - Scopes: all, vms, devices, naming, roles
  - Identifies Critical/High/Medium/Low issues
  - Provides prioritized remediation recommendations
- **`/cmdb-register` Command**: Register current machine into NetBox
  - Discovers system info: hostname, platform, hardware, network interfaces
  - Discovers running apps: Docker containers, systemd services
  - Creates device with interfaces and IPs, and sets the primary IP
  - Creates cluster and VMs for Docker containers
- **`/cmdb-sync` Command**: Sync machine state with NetBox
  - Compares current state with the NetBox record
  - Shows diff of changes (interfaces, IPs, containers)
  - Updates with user confirmation
  - Supports `--full` and `--dry-run` flags
- **NetBox Best Practices Skill**: Reference documentation
  - Dependency order for object creation
  - Naming conventions (`{role}-{site}-{number}`, `{env}-{app}-{number}`)
  - Role consolidation guidance
  - Site/tenant/platform assignment requirements
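The device naming convention above can be checked mechanically; a minimal sketch, assuming hyphen-separated lowercase components and a zero-padded number (the exact role/site vocabularies are not specified here):

```python
import re

# {role}-{site}-{number}, e.g. "sw-dc1-01"
DEVICE_RE = re.compile(r"^(?P<role>[a-z]+)-(?P<site>[a-z0-9]+)-(?P<number>\d{2,})$")

def parse_device_name(name: str):
    """Return the name's components, or None if it violates the convention."""
    m = DEVICE_RE.match(name)
    return m.groupdict() if m else None
```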
- **Agent Enhancement**: Updated cmdb-assistant agent with validation requirements
  - Proactive suggestions for missing fields
  - Naming convention checks
  - Dependency order enforcement
  - Duplicate prevention

---
## [5.0.0] - 2026-01-26

### Added

#### Sprint 1: viz-platform Plugin ✅ Completed
- Static JSON registry approach for fast, deterministic validation
- **Chart Tools** (2 tools): `chart_create`, `chart_configure_interaction`
  - Plotly-based visualization with theme token support
- **Layout Tools** (5 tools): `layout_create`, `layout_add_filter`, `layout_set_grid`, `layout_get`, `layout_add_section`
  - Dashboard composition with responsive grid system
- **Theme Tools** (6 tools): `theme_create`, `theme_extend`, `theme_validate`, `theme_export_css`, `theme_list`, `theme_activate`
  - Design token-based theming system
  - Dual storage: user-level (`~/.config/claude/themes/`) and project-level
- **Page Tools** (5 tools): `page_create`, `page_add_navbar`, `page_set_auth`, `page_list`, `page_get_app_config`
  - Multi-page Dash app structure generation
- **Commands**: `/chart`, `/dashboard`, `/theme`, `/theme-new`, `/theme-css`, `/component`, `/initial-setup`
- **Agents**: `theme-setup`, `layout-builder`, `component-check`
---

# CLAUDE.md

This file provides guidance to Claude Code when working with code in this repository.

## ⛔ MANDATORY BEHAVIOR RULES - READ FIRST

**These rules are NON-NEGOTIABLE. Violating them wastes the user's time and money.**
- Search ALL locations, not just where you think it is
- Check cache directories: `~/.claude/plugins/cache/`
- Check installed: `~/.claude/plugins/marketplaces/`
- Check source: `~/claude-plugins-work/`
- **NEVER say "no" or "that's not the issue" without exhaustive verification**

### 2. WHEN USER SAYS SOMETHING IS WRONG - BELIEVE THEM
- The user knows their system better than you
- Investigate thoroughly before disagreeing
- If user suspects cache, CHECK THE CACHE
- If user suspects a file, READ THE FILE
- **Your confidence is often wrong. User's instincts are often right.**

### 3. NEVER SAY "DONE" WITHOUT VERIFICATION
- Run the actual command/script to verify
- Show the output to the user
- Check ALL affected locations
- **"Done" means VERIFIED WORKING, not "I made changes"**

### 4. SHOW EXACTLY WHAT USER ASKS FOR
- If user asks for messages, show the MESSAGES
- If user asks for code, show the CODE
- If user asks for output, show the OUTPUT
- **Don't interpret or summarize unless asked**

### 5. AFTER PLUGIN UPDATES - VERIFY AND RESTART

**⚠️ DO NOT clear cache mid-session** - this breaks MCP tools that are already loaded.

1. Run `./scripts/verify-hooks.sh` to check hook types
2. If changes affect MCP servers or hooks, inform the user:
   > "Plugin changes require a session restart to take effect. Please restart Claude Code."
3. Cache clearing is ONLY safe **before** starting a new session (not during)

See `docs/DEBUGGING-CHECKLIST.md` for details on cache timing.

**FAILURE TO FOLLOW THESE RULES = WASTED USER TIME = UNACCEPTABLE**

---
## ⛔ RULES - READ FIRST

### Behavioral Rules

| Rule | Summary |
|------|---------|
| **Check everything** | Search cache (`~/.claude/plugins/cache/`), installed (`~/.claude/plugins/marketplaces/`), and source (`~/claude-plugins-work/`) |
| **Believe the user** | User knows their system. Investigate before disagreeing. |
| **Verify before "done"** | Run commands, show output, check all locations. "Done" = verified working. |
| **Show what's asked** | Don't interpret or summarize unless asked. |

### After Plugin Updates

Run `./scripts/verify-hooks.sh`. If changes affect MCP servers or hooks, inform the user to restart the session.

**DO NOT clear cache mid-session** - it breaks loaded MCP tools.

### NEVER USE CLI TOOLS FOR EXTERNAL SERVICES

- **FORBIDDEN:** `gh`, `tea`, `curl` to APIs, any CLI that talks to Gitea/GitHub/external services
- **REQUIRED:** Use MCP tools exclusively (`mcp__plugin_projman_gitea__*`, `mcp__plugin_pr-review_gitea__*`)
- **NO EXCEPTIONS.** Don't try CLI first. Don't fall back to CLI. MCP ONLY.

### NEVER PUSH DIRECTLY TO PROTECTED BRANCHES

- **FORBIDDEN:** `git push origin development`, `git push origin main`, `git push origin master`
- **REQUIRED:** Create feature branch → push feature branch → create PR via MCP
- If you accidentally commit to a protected branch locally: `git checkout -b fix/branch-name`, then reset the protected branch

### Repository Rules

| Rule | Details |
|------|---------|
| **File creation** | Only in allowed paths. Use `.scratch/` for temp work. Verify against `docs/CANONICAL-PATHS.md` |
| **plugin.json location** | Must be in `.claude-plugin/` directory |
| **Hooks** | Use `hooks/hooks.json` (auto-discovered). Never inline in plugin.json |
| **MCP servers** | Defined in root `.mcp.json`. Use MCP tools, never CLI (`tea`, `gh`) |
| **Allowed root files** | `CLAUDE.md`, `README.md`, `LICENSE`, `CHANGELOG.md`, `.gitignore`, `.env.example` |

**Valid hook events:** `PreToolUse`, `PostToolUse`, `UserPromptSubmit`, `SessionStart`, `SessionEnd`, `Notification`, `Stop`, `SubagentStop`, `PreCompact`

### ⛔ MANDATORY: Before Any Code Change

**Claude MUST show this checklist BEFORE editing any file:**

#### 1. Impact Search Results

Run and show the output of:

```bash
grep -rn "PATTERN" --include="*.sh" --include="*.md" --include="*.json" --include="*.py" | grep -v ".git"
```

#### 2. Files That Will Be Affected

Numbered list of every file to be modified, with the specific change for each.

#### 3. Files Searched But Not Changed (and why)

Proof that related files were checked and determined unchanged.

#### 4. Documentation That References This

List of docs that mention this feature/script/function.

**User verifies this list before Claude proceeds. If Claude skips this, STOP IMMEDIATELY.**

#### After Changes

Run the same grep and show results proving no references remain unaddressed.

---
## ⚠️ Development Context: We Build AND Use These Plugins

**This is a self-referential project.** We are:

1. **BUILDING** a plugin marketplace (source code in `plugins/`)
2. **USING** the installed marketplace to build it (dogfooding)

### Plugins ACTIVELY USED in This Project

These plugins are installed and should be used during development:

| Plugin | Used For |
|--------|----------|
| **projman** | Sprint planning, issue management, lessons learned |
| **git-flow** | Commits, branch management |
| **pr-review** | Pull request reviews |
| **doc-guardian** | Documentation drift detection |
| **code-sentinel** | Security scanning, refactoring |
| **clarity-assist** | Prompt clarification |
| **claude-config-maintainer** | CLAUDE.md optimization |
| **contract-validator** | Cross-plugin compatibility |

### Plugins NOT Used Here (Development Only)

These plugins exist in source but are **NOT relevant** to this project's workflow:

| Plugin | Why Not Used |
|--------|--------------|
| **data-platform** | For data engineering projects (pandas, PostgreSQL, dbt) |
| **viz-platform** | For dashboard projects (Dash, Plotly) |
| **cmdb-assistant** | For infrastructure projects (NetBox) |
| **saas-api-platform** | For REST/GraphQL API projects (FastAPI, Express) |
| **saas-db-migrate** | For database migration projects (Alembic, Prisma) |
| **saas-react-platform** | For React frontend projects (Next.js, Vite) |
| **saas-test-pilot** | For test automation projects (pytest, Jest, Playwright) |
| **data-seed** | For test data generation and seeding |
| **ops-release-manager** | For release management workflows |
| **ops-deploy-pipeline** | For deployment pipeline management |
| **debug-mcp** | For MCP server debugging and development |

**Do NOT suggest** `/data ingest`, `/data profile`, `/viz chart`, `/cmdb *`, `/api *`, `/db-migrate *`, `/react *`, `/test *`, `/seed *`, `/release *`, `/deploy *`, `/debug-mcp *` commands - they don't apply here.

### Key Distinction

| Context | Path | What To Do |
|---------|------|------------|
| **Editing plugin source** | `~/claude-plugins-work/plugins/` | Modify code, add features |
| **Using installed plugins** | `~/.claude/plugins/marketplaces/` | Run commands like `/sprint plan` |

When the user says "run /sprint plan", use the INSTALLED plugin.
When the user says "fix the sprint plan command", edit the SOURCE code.

---
## Project Overview

**Repository:** leo-claude-mktplace
**Version:** 9.1.2
**Status:** Production Ready

A plugin marketplace for Claude Code containing:

| Plugin | Description | Version |
|--------|-------------|---------|
| `projman` | Sprint planning and project management with Gitea integration | 9.0.1 |
| `git-flow` | Git workflow automation with smart commits and branch management | 9.0.1 |
| `pr-review` | Multi-agent PR review with confidence scoring | 9.0.1 |
| `clarity-assist` | Prompt optimization with ND-friendly accommodations | 9.0.1 |
| `doc-guardian` | Automatic documentation drift detection and synchronization | 9.0.1 |
| `code-sentinel` | Security scanning and code refactoring tools | 9.0.1 |
| `claude-config-maintainer` | CLAUDE.md optimization and maintenance | 9.0.1 |
| `cmdb-assistant` | NetBox CMDB integration for infrastructure management | 9.0.1 |
| `data-platform` | pandas, PostgreSQL, and dbt integration for data engineering | 9.0.1 |
| `viz-platform` | DMC validation, Plotly charts, and theming for dashboards | 9.0.1 |
| `contract-validator` | Cross-plugin compatibility validation and agent verification | 9.0.1 |
| `project-hygiene` | Manual project hygiene checks | 9.0.1 |
| `saas-api-platform` | REST/GraphQL API scaffolding for FastAPI and Express | 0.1.0 |
| `saas-db-migrate` | Database migration management for Alembic, Prisma, raw SQL | 0.1.0 |
| `saas-react-platform` | React frontend toolkit for Next.js and Vite | 0.1.0 |
| `saas-test-pilot` | Test automation for pytest, Jest, Vitest, Playwright | 0.1.0 |
| `data-seed` | Test data generation and database seeding | 0.1.0 |
| `ops-release-manager` | Release management with SemVer and changelog automation | 0.1.0 |
| `ops-deploy-pipeline` | Deployment pipeline for Docker Compose and systemd | 0.1.0 |
| `debug-mcp` | MCP server debugging and development toolkit | 0.1.0 |
## Quick Start

```bash
./scripts/validate-marketplace.sh

# After updates
./scripts/post-update.sh    # Rebuild venvs
```
### Plugin Commands - USE THESE in This Project

| Category | Commands |
|----------|----------|
| **Setup** | `/projman setup` (modes: `--full`, `--quick`, `--sync`) |
| **Sprint** | `/sprint plan`, `/sprint start`, `/sprint status` (with `--diagram`), `/sprint close` |
| **Quality** | `/sprint review`, `/sprint test` (modes: `run`, `gen`) |
| **Project** | `/project initiation`, `/project plan`, `/project status`, `/project close` |
| **ADR** | `/adr create`, `/adr list`, `/adr update`, `/adr supersede` |
| **RFC** | `/rfc create`, `/rfc list`, `/rfc review`, `/rfc approve`, `/rfc reject` |
| **PR Review** | `/pr review`, `/pr summary`, `/pr findings`, `/pr diff` |
| **Docs** | `/doc audit`, `/doc sync`, `/doc changelog-gen`, `/doc coverage`, `/doc stale-docs` |
| **Security** | `/sentinel scan`, `/sentinel refactor`, `/sentinel refactor-dry` |
| **Config** | `/claude-config analyze`, `/claude-config optimize`, `/claude-config diff`, `/claude-config lint` |
| **Validation** | `/cv validate`, `/cv check-agent`, `/cv list-interfaces`, `/cv dependency-graph`, `/cv status` |
| **Maintenance** | `/hygiene check` |

### Plugin Commands - NOT RELEVANT to This Project

These commands are being developed but don't apply to this project's workflow:

| Category | Commands | For Projects Using |
|----------|----------|-------------------|
| **Data** | `/data ingest`, `/data profile`, `/data schema`, `/data lineage`, `/data dbt-test` | pandas, PostgreSQL, dbt |
| **Visualization** | `/viz component`, `/viz chart`, `/viz dashboard`, `/viz theme` | Dash, Plotly dashboards |
| **CMDB** | `/cmdb search`, `/cmdb device`, `/cmdb sync` | NetBox infrastructure |
| **API** | `/api scaffold`, `/api validate`, `/api docs`, `/api middleware` | FastAPI, Express |
| **DB Migrate** | `/db-migrate generate`, `/db-migrate validate`, `/db-migrate plan` | Alembic, Prisma |
| **React** | `/react component`, `/react route`, `/react state`, `/react hook` | Next.js, Vite |
| **Testing** | `/test generate`, `/test coverage`, `/test fixtures`, `/test e2e` | pytest, Jest, Playwright |
| **Seeding** | `/seed generate`, `/seed profile`, `/seed apply` | Faker, test data |
| **Release** | `/release prepare`, `/release validate`, `/release tag` | SemVer releases |
| **Deploy** | `/deploy generate`, `/deploy validate`, `/deploy check` | Docker Compose, systemd |
| **Debug MCP** | `/debug-mcp status`, `/debug-mcp test`, `/debug-mcp logs` | MCP server development |
## Repository Structure

```
leo-claude-mktplace/
├── .claude-plugin/                  # Marketplace manifest
│   ├── marketplace.json
│   ├── marketplace-lean.json        # Lean profile (6 core plugins)
│   └── marketplace-full.json        # Full profile (all plugins)
├── .mcp.json                        # MCP server configuration (all servers)
├── mcp-servers/                     # SHARED MCP servers
│   ├── gitea/                       # Gitea (issues, PRs, wiki)
│   ├── netbox/                      # NetBox (DCIM, IPAM)
│   ├── data-platform/               # pandas, PostgreSQL, dbt
│   ├── viz-platform/                # DMC, Plotly, theming
│   └── contract-validator/          # Plugin compatibility validation
├── plugins/                         # All plugins (20 total)
│   ├── projman/                     # [core] Sprint management
│   │   ├── .claude-plugin/plugin.json
│   │   ├── commands/                # 19 commands
│   │   ├── agents/                  # 4 agents
│   │   └── skills/                  # 23 reusable skill files
│   ├── git-flow/                    # [core] Git workflow automation
│   ├── pr-review/                   # [core] PR review
│   ├── clarity-assist/              # [core] Prompt optimization
│   ├── doc-guardian/                # [core] Documentation drift detection
│   ├── code-sentinel/               # [core] Security scanning
│   ├── claude-config-maintainer/    # [core] CLAUDE.md optimization
│   ├── contract-validator/          # [core] Cross-plugin validation
│   ├── project-hygiene/             # [core] Manual cleanup checks
│   ├── cmdb-assistant/              # [ops] NetBox CMDB integration
│   ├── data-platform/               # [data] Data engineering
│   ├── viz-platform/                # [data] Visualization
│   ├── data-seed/                   # [data] Test data generation (scaffold)
│   ├── saas-api-platform/           # [saas] API scaffolding (scaffold)
│   ├── saas-db-migrate/             # [saas] DB migrations (scaffold)
│   ├── saas-react-platform/         # [saas] React toolkit (scaffold)
│   ├── saas-test-pilot/             # [saas] Test automation (scaffold)
│   ├── ops-release-manager/         # [ops] Release management (scaffold)
│   ├── ops-deploy-pipeline/         # [ops] Deployment pipeline (scaffold)
│   └── debug-mcp/                   # [debug] MCP debugging (scaffold)
├── scripts/                         # Setup and maintenance
│   ├── setup.sh                     # Initial setup (create venvs, config)
│   ├── post-update.sh               # Post-update (clear cache, changelog)
│   ├── setup-venvs.sh               # MCP server venv management (cache-based)
│   ├── validate-marketplace.sh      # Marketplace compliance validation
│   ├── verify-hooks.sh              # Hook inventory verification
│   ├── release.sh                   # Release automation with version bumping
│   ├── claude-launch.sh             # Profile-based launcher
│   ├── install-plugin.sh            # Install plugin to consumer project
│   ├── list-installed.sh            # Show installed plugins in a project
│   └── uninstall-plugin.sh          # Remove plugin from consumer project
├── docs/                            # Documentation
│   ├── ARCHITECTURE.md              # System architecture & plugin reference
│   ├── CANONICAL-PATHS.md           # Authoritative path reference
│   ├── COMMANDS-CHEATSHEET.md       # All commands quick reference
│   ├── CONFIGURATION.md             # Centralized setup guide
│   ├── DEBUGGING-CHECKLIST.md       # Systematic troubleshooting guide
│   ├── MIGRATION-v9.md              # v8.x to v9.0.0 migration guide
│   └── UPDATING.md                  # Update guide
├── CLAUDE.md                        # Project instructions for Claude Code
├── README.md
├── CHANGELOG.md
├── LICENSE
└── .gitignore
```
## CRITICAL: Rules You MUST Follow

### File Operations

- **NEVER** create files in repository root unless listed in "Allowed Root Files"
- **NEVER** modify `.gitignore` without explicit permission
- **ALWAYS** use `.scratch/` for temporary/exploratory work
- **ALWAYS** verify paths against `docs/CANONICAL-PATHS.md` before creating files

### Plugin Development

- **plugin.json MUST be in `.claude-plugin/` directory** (not plugin root)
- **Every plugin MUST be listed in marketplace.json**
- **MCP servers are SHARED at root** with symlinks from plugins
- **MCP server venv path**: `${CLAUDE_PLUGIN_ROOT}/mcp-servers/{name}/.venv/bin/python`
- **CLI tools forbidden** - Use MCP tools exclusively (never `tea`, `gh`, etc.)

#### ⚠️ plugin.json Format Rules (CRITICAL)

- **Hooks in separate file** - Use `hooks/hooks.json` (auto-discovered), NOT inline in plugin.json
- **NEVER reference hooks** - Don't add a `"hooks": "..."` field to plugin.json at all
- **Agents auto-discover** - NEVER add `"agents": ["./agents/"]`; `.md` files are found automatically
- **Always validate** - Run `./scripts/validate-marketplace.sh` before committing
- **Working examples:** projman, pr-review, and claude-config-maintainer all use `hooks/hooks.json`
- See lesson: `lessons/patterns/plugin-manifest-validation---hooks-and-agents-format-requirements`

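For orientation, a minimal auto-discovered `hooks/hooks.json` might look like this. The structure follows Claude Code's hook configuration schema; the script path is an illustrative placeholder, not a file in this repository:

```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "${CLAUDE_PLUGIN_ROOT}/hooks/session-start.sh"
          }
        ]
      }
    ]
  }
}
```

Because this file is auto-discovered, nothing in plugin.json needs to point at it.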
### Hooks (Valid Events Only)

`PreToolUse`, `PostToolUse`, `UserPromptSubmit`, `SessionStart`, `SessionEnd`, `Notification`, `Stop`, `SubagentStop`, `PreCompact`

**INVALID:** `task-completed`, `file-changed`, `git-commit-msg-needed`

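A quick way to enforce this whitelist in a validation script might look like the following sketch. It is not an existing script in this repository; the valid-event set mirrors the list above:

```python
# Sketch: reject hooks.json documents that use unsupported event names.
import json

VALID_HOOK_EVENTS = {
    "PreToolUse", "PostToolUse", "UserPromptSubmit", "SessionStart",
    "SessionEnd", "Notification", "Stop", "SubagentStop", "PreCompact",
}

def invalid_hook_events(hooks_json_text: str) -> list[str]:
    """Return event names in a hooks.json document that Claude Code does not support."""
    config = json.loads(hooks_json_text)
    events = config.get("hooks", {})
    return sorted(e for e in events if e not in VALID_HOOK_EVENTS)
```

For example, `invalid_hook_events('{"hooks": {"task-completed": []}}')` flags `task-completed`.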
### Allowed Root Files

`CLAUDE.md`, `README.md`, `LICENSE`, `CHANGELOG.md`, `.gitignore`, `.env.example`

### Allowed Root Directories

`.claude/`, `.claude-plugin/`, `.claude-plugins/`, `.scratch/`, `docs/`, `hooks/`, `mcp-servers/`, `plugins/`, `scripts/`

## Architecture

### Four-Agent Model (projman)

| **Executor** | Implementation-focused | Code implementation, branch management, MR creation |
| **Code Reviewer** | Thorough, practical | Pre-close quality review, security scan, test verification |

### Agent Frontmatter Configuration

Agents specify their configuration in frontmatter using Claude Code's supported fields. Reference: https://code.claude.com/docs/en/sub-agents

**Supported frontmatter fields:**

| Field | Required | Default | Description |
|-------|----------|---------|-------------|
| `name` | Yes | — | Unique identifier, lowercase + hyphens |
| `description` | Yes | — | When Claude should delegate to this subagent |
| `model` | No | `inherit` | `sonnet`, `opus`, `haiku`, or `inherit` |
| `permissionMode` | No | `default` | Controls permission prompts: `default`, `acceptEdits`, `dontAsk`, `bypassPermissions`, `plan` |
| `disallowedTools` | No | none | Comma-separated tools to remove from the agent's toolset |
| `skills` | No | none | Comma-separated skills auto-injected into context at startup |
| `hooks` | No | none | Lifecycle hooks scoped to this subagent |

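Putting these fields together, the top of a hypothetical agent file could look like this (agent name, description, and field values are illustrative, not an agent from this marketplace):

```markdown
---
name: migration-reviewer
description: Reviews database migration scripts before they are applied
model: haiku
permissionMode: plan
disallowedTools: Write, Edit, MultiEdit
---

The agent's system prompt follows the frontmatter block.
```
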
**Complete agent matrix:**

| Plugin | Agent | `model` | `permissionMode` | `disallowedTools` | `skills` |
|--------|-------|---------|-------------------|--------------------|----------|
| projman | planner | opus | default | — | frontmatter (2) + body text (12) |
| projman | orchestrator | sonnet | acceptEdits | — | frontmatter (2) + body text (10) |
| projman | executor | sonnet | bypassPermissions | — | frontmatter (7) |
| projman | code-reviewer | opus | default | Write, Edit, MultiEdit | frontmatter (4) |
| pr-review | coordinator | sonnet | plan | Write, Edit, MultiEdit | — |
| pr-review | security-reviewer | sonnet | plan | Write, Edit, MultiEdit | — |
| pr-review | performance-analyst | sonnet | plan | Write, Edit, MultiEdit | — |
| pr-review | maintainability-auditor | haiku | plan | Write, Edit, MultiEdit | — |
| pr-review | test-validator | haiku | plan | Write, Edit, MultiEdit | — |
| data-platform | data-advisor | sonnet | default | — | — |
| data-platform | data-analysis | sonnet | plan | Write, Edit, MultiEdit | — |
| data-platform | data-ingestion | haiku | acceptEdits | — | — |
| viz-platform | design-reviewer | sonnet | plan | Write, Edit, MultiEdit | — |
| viz-platform | layout-builder | sonnet | default | — | — |
| viz-platform | component-check | haiku | plan | Write, Edit, MultiEdit | — |
| viz-platform | theme-setup | haiku | acceptEdits | — | — |
| contract-validator | full-validation | sonnet | default | — | — |
| contract-validator | agent-check | haiku | plan | Write, Edit, MultiEdit | — |
| code-sentinel | security-reviewer | sonnet | plan | Write, Edit, MultiEdit | — |
| code-sentinel | refactor-advisor | sonnet | acceptEdits | — | — |
| doc-guardian | doc-analyzer | sonnet | acceptEdits | — | — |
| clarity-assist | clarity-coach | sonnet | default | Write, Edit, MultiEdit | — |
| git-flow | git-assistant | haiku | acceptEdits | — | — |
| claude-config-maintainer | maintainer | sonnet | acceptEdits | — | frontmatter (2) |
| cmdb-assistant | cmdb-assistant | sonnet | default | — | — |

**Design principles:**

- `bypassPermissions` is granted to exactly ONE agent (Executor), which has the code-sentinel PreToolUse hook plus the Code Reviewer downstream as safety nets.
- `plan` mode is assigned to all pure analysis agents (pr-review, read-only validators).
- `disallowedTools: Write, Edit, MultiEdit` provides defense-in-depth on agents that should never write files.
- `skills` frontmatter is used for agents with ≤7 skills where guaranteed loading is safety-critical. Agents with 8+ skills use a body-text `## Skills to Load` section for selective loading.
- `hooks` (agent-scoped) is reserved for future use (v6.0+).

Override any field by editing the agent's `.md` file in `plugins/{plugin}/agents/`.

### MCP Server Tools (Gitea)

| Category | Tools |
|----------|-------|
| Labels | `get_labels`, `suggest_labels`, `create_label`, `create_label_smart` |
| Milestones | `list_milestones`, `get_milestone`, `create_milestone`, `update_milestone`, `delete_milestone` |
| Dependencies | `list_issue_dependencies`, `create_issue_dependency`, `remove_issue_dependency`, `get_execution_order` |
| Wiki | `list_wiki_pages`, `get_wiki_page`, `create_wiki_page`, `update_wiki_page`, `create_lesson`, `search_lessons`, `allocate_rfc_number` |
| **Pull Requests** | `list_pull_requests`, `get_pull_request`, `get_pr_diff`, `get_pr_comments`, `create_pr_review`, `add_pr_comment` |
| Validation | `validate_repo_org`, `get_branch_protection` |

| `staging` | Staging | Read-only code, can create issues |
| `main`, `master` | Production | Read-only, emergency only |

### RFC System

Wiki-based Request for Comments system for tracking feature ideas from proposal through implementation.

**RFC Wiki Naming:**
- RFC pages: `RFC-NNNN: Short Title` (4-digit zero-padded)
- Index page: `RFC-Index` (auto-maintained)

**Lifecycle:** Draft → Review → Approved → Implementing → Implemented

**Integration with Sprint Planning:**
- `/sprint plan` detects approved RFCs and offers selection
- `/sprint close` updates RFC status on completion

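The zero-padded naming rule is easy to get wrong by hand; a trivial helper (hypothetical, not part of any plugin) makes the convention concrete:

```python
def rfc_page_title(number: int, title: str) -> str:
    """Format a wiki page name per the RFC-NNNN convention (4-digit zero-padded)."""
    return f"RFC-{number:04d}: {title}"

# rfc_page_title(7, "Skill Loading") -> "RFC-0007: Skill Loading"
```
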
## Label Taxonomy

58 labels total: 31 organization + 27 repository

**Organization:** Agent/2, Complexity/3, Efforts/5, Priority/4, Risk/3, Source/4, Status/4, Type/6
**Repository:** Component/9, Tech/7, Domain/2, Epic/5, RnD/4

Sync with `/labels sync` command.

## Lessons Learned System

Stored in Gitea Wiki under `lessons-learned/sprints/`.

### Adding a New Plugin

1. Create `plugins/{name}/.claude-plugin/plugin.json` (standard schema fields only — no custom fields)
2. Create `plugins/{name}/.claude-plugin/metadata.json` — must include a `"domain"` field (`core`, `data`, `saas`, `ops`, or `debug`)
3. Add entry to `.claude-plugin/marketplace.json` with category, tags, license (no custom fields — the Claude Code schema is strict)
4. Create `claude-md-integration.md`
5. If using a new MCP server, add it to the root `mcp-servers/` and update `.mcp.json`
6. Run `./scripts/validate-marketplace.sh` — it rejects plugins without a valid `domain` field
7. Update `CHANGELOG.md`

**Domain field is required in metadata.json (v8.0.0+, moved from plugin.json in v9.1.2):**

```json
{
  "domain": "core"
}
```

**Naming convention:** New plugins use a domain prefix (`saas-*`, `ops-*`, `data-*`, `debug-*`). Core plugins have no prefix.

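The manifest and domain checks in steps 2 and 6 can be sketched as follows. This is an illustration of the documented rules, not the actual validator (which is a shell script); the function name and messages are assumptions:

```python
# Sketch of the per-plugin checks validate-marketplace.sh is documented to perform:
# plugin.json present, metadata.json present, and a valid "domain" value.
import json
from pathlib import Path

VALID_DOMAINS = {"core", "data", "saas", "ops", "debug"}

def check_plugin(plugin_dir: Path) -> list[str]:
    """Return a list of problems found in one plugin directory (empty if compliant)."""
    problems = []
    manifest = plugin_dir / ".claude-plugin" / "plugin.json"
    metadata = plugin_dir / ".claude-plugin" / "metadata.json"
    if not manifest.is_file():
        problems.append(f"{plugin_dir.name}: missing .claude-plugin/plugin.json")
    if not metadata.is_file():
        problems.append(f"{plugin_dir.name}: missing .claude-plugin/metadata.json")
    else:
        domain = json.loads(metadata.read_text()).get("domain")
        if domain not in VALID_DOMAINS:
            problems.append(f"{plugin_dir.name}: invalid domain {domain!r}")
    return problems
```

Running such a check over every directory under `plugins/` mirrors what the validation step enforces before a commit.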
### Domain Assignments

| Domain | Plugins |
|--------|---------|
| `core` | projman, git-flow, pr-review, code-sentinel, doc-guardian, clarity-assist, contract-validator, claude-config-maintainer, project-hygiene |
| `data` | data-platform, viz-platform, data-seed |
| `saas` | saas-api-platform, saas-db-migrate, saas-react-platform, saas-test-pilot |
| `ops` | cmdb-assistant, ops-release-manager, ops-deploy-pipeline |
| `debug` | debug-mcp |

### Adding a Command to projman

1. Create `plugins/projman/commands/{name}.md`
2. Update marketplace description if significant

### Validation

| Document | Purpose |
|----------|---------|
| `docs/ARCHITECTURE.md` | System architecture and plugin reference |
| `docs/CANONICAL-PATHS.md` | **Single source of truth** for paths |
| `docs/COMMANDS-CHEATSHEET.md` | All commands quick reference |
| `docs/CONFIGURATION.md` | Centralized setup guide |
| `docs/DEBUGGING-CHECKLIST.md` | Systematic troubleshooting guide |
| `docs/MIGRATION-v9.md` | v8.x to v9.0.0 migration guide |
| `docs/UPDATING.md` | Update guide for the marketplace |
| `plugins/projman/CONFIGURATION.md` | Projman quick reference (links to central) |

## Installation Paths

See `docs/DEBUGGING-CHECKLIST.md` for systematic troubleshooting.

| Symptom | Likely Cause | Fix |
|---------|--------------|-----|
| "X MCP servers failed" | Missing venv in installed path | `cd ~/.claude/plugins/marketplaces/leo-claude-mktplace && ./scripts/setup.sh` |
| MCP tools not available | Venv missing or .mcp.json misconfigured | Run `/cv status` to diagnose |
| Changes not taking effect | Editing source, not installed | Reinstall plugin or edit installed path |

**Diagnostic Commands:**
- `/cv status` - Marketplace-wide health check (installation, MCP, configuration)
- `/hygiene check` - Project file organization and cleanup check

## Versioning Workflow

---

**Last Updated:** 2026-02-07

README.md

# Leo Claude Marketplace — v9.1.2

A plugin marketplace for Claude Code providing sprint management, code review, security scanning, infrastructure automation, and development workflow tools. 20 plugins across 5 domains, backed by 5 shared MCP servers.

## Plugins

### Core (9 plugins — v9.0.1)

| Plugin | Description |
|--------|-------------|
| `projman` | Sprint planning and project management with Gitea integration |
| `git-flow` | Git workflow automation with intelligent commit messages and branch management |
| `pr-review` | Multi-agent pull request review with confidence scoring |
| `code-sentinel` | Security scanning and code refactoring tools |
| `doc-guardian` | Documentation drift detection and synchronization |
| `clarity-assist` | Prompt optimization with ND-friendly accommodations |
| `contract-validator` | Cross-plugin compatibility validation and agent verification |
| `claude-config-maintainer` | CLAUDE.md and settings.local.json optimization |
| `project-hygiene` | Manual project file cleanup checks |

### Data (3 plugins)

| Plugin | Version | Description |
|--------|---------|-------------|
| `data-platform` | 9.0.1 | pandas, PostgreSQL/PostGIS, and dbt integration |
| `viz-platform` | 9.0.1 | Dash Mantine Components validation, Plotly charts, and theming |
| `data-seed` | 0.1.0 — scaffold | Test data generation and database seeding |

### Ops (3 plugins)

| Plugin | Version | Description |
|--------|---------|-------------|
| `cmdb-assistant` | 9.0.1 | NetBox CMDB integration with data quality validation |
| `ops-release-manager` | 0.1.0 — scaffold | Release management with SemVer and changelog automation |
| `ops-deploy-pipeline` | 0.1.0 — scaffold | Deployment pipeline for Docker Compose and systemd |

### SaaS (4 plugins — v0.1.0 scaffolds)

| Plugin | Description |
|--------|-------------|
| `saas-api-platform` | REST/GraphQL API scaffolding for FastAPI and Express |
| `saas-db-migrate` | Database migration management for Alembic, Prisma, raw SQL |
| `saas-react-platform` | React frontend toolkit for Next.js and Vite |
| `saas-test-pilot` | Test automation for pytest, Jest, Vitest, Playwright |

### Debug (1 plugin — v0.1.0 scaffold)

| Plugin | Description |
|--------|-------------|
| `debug-mcp` | MCP server debugging, inspection, and development toolkit |

## Quick Start

### Launch with profiles

```bash
./scripts/claude-launch.sh [profile] [extra-args...]
```

| Profile | Plugins Loaded | Use Case |
|---------|----------------|----------|
| `sprint` | projman, git-flow, pr-review, code-sentinel, doc-guardian, clarity-assist | Default. Sprint planning and development |
| `review` | pr-review, code-sentinel | Lightweight code review |
| `data` | data-platform, viz-platform | Data engineering and visualization |
| `infra` | cmdb-assistant | Infrastructure/CMDB management |
| `full` | All 20 plugins | When you need everything |

```bash
./scripts/claude-launch.sh                    # Default sprint profile
./scripts/claude-launch.sh data --model opus  # Data profile with Opus
./scripts/claude-launch.sh full               # Load all plugins
```

### Common commands

```bash
/sprint plan            # Plan a sprint with architecture analysis
/sprint start           # Begin sprint execution
/gitflow commit --push  # Commit with auto-generated message and push
/pr review              # Full multi-agent PR review
/sentinel scan          # Security audit
/doc audit              # Check for documentation drift
/cv status              # Marketplace health check
```

## Repository Structure

```
leo-claude-mktplace/
├── .claude-plugin/               # Marketplace manifest
│   ├── marketplace.json
│   ├── marketplace-lean.json     # Lean profile (6 core plugins)
│   └── marketplace-full.json     # Full profile (all plugins)
├── mcp-servers/                  # Shared MCP servers
│   ├── gitea/                    # Gitea (issues, PRs, wiki)
│   ├── netbox/                   # NetBox (DCIM, IPAM)
│   ├── data-platform/            # pandas, PostgreSQL, dbt
│   ├── viz-platform/             # DMC, Plotly, theming
│   └── contract-validator/       # Plugin compatibility validation
├── plugins/                      # All plugins (20 total)
│   ├── projman/                  # [core] Sprint management
│   ├── git-flow/                 # [core] Git workflow automation
│   ├── pr-review/                # [core] PR review
│   ├── clarity-assist/           # [core] Prompt optimization
│   ├── doc-guardian/             # [core] Documentation drift detection
│   ├── code-sentinel/            # [core] Security scanning
│   ├── claude-config-maintainer/ # [core] CLAUDE.md optimization
│   ├── contract-validator/       # [core] Cross-plugin validation
│   ├── project-hygiene/          # [core] Manual cleanup checks
│   ├── cmdb-assistant/           # [ops] NetBox CMDB integration
│   ├── data-platform/            # [data] Data engineering
│   ├── viz-platform/             # [data] Visualization
│   ├── data-seed/                # [data] Test data generation (scaffold)
│   ├── saas-api-platform/        # [saas] API scaffolding (scaffold)
│   ├── saas-db-migrate/          # [saas] DB migrations (scaffold)
│   ├── saas-react-platform/      # [saas] React toolkit (scaffold)
│   ├── saas-test-pilot/          # [saas] Test automation (scaffold)
│   ├── ops-release-manager/      # [ops] Release management (scaffold)
│   ├── ops-deploy-pipeline/      # [ops] Deployment pipeline (scaffold)
│   └── debug-mcp/                # [debug] MCP debugging (scaffold)
├── scripts/                      # Setup and maintenance
│   ├── setup.sh                  # Initial setup (create venvs, config)
│   ├── post-update.sh            # Post-update (clear cache, changelog)
│   ├── setup-venvs.sh            # MCP server venv management (cache-based)
│   ├── validate-marketplace.sh   # Marketplace compliance validation
│   ├── verify-hooks.sh           # Hook inventory verification
│   ├── release.sh                # Release automation with version bumping
│   ├── claude-launch.sh          # Profile-based launcher
│   ├── install-plugin.sh         # Install plugin to consumer project
│   ├── list-installed.sh         # Show installed plugins in a project
│   └── uninstall-plugin.sh       # Remove plugin from consumer project
├── docs/                         # Documentation
│   ├── ARCHITECTURE.md           # System architecture & plugin reference
│   ├── CANONICAL-PATHS.md        # Authoritative path reference
│   ├── COMMANDS-CHEATSHEET.md    # All commands quick reference
│   ├── CONFIGURATION.md          # Centralized setup guide
│   ├── DEBUGGING-CHECKLIST.md    # Systematic troubleshooting guide
│   ├── MIGRATION-v9.md           # v8.x to v9.0.0 migration guide
│   └── UPDATING.md               # Update guide
├── CLAUDE.md                     # Project instructions for Claude Code
├── README.md
├── CHANGELOG.md
├── LICENSE
└── .gitignore
```

## MCP Servers

All MCP servers are shared at repository root and configured in `.mcp.json`.

| Server | Used By | External System |
|--------|---------|-----------------|
| gitea | projman, pr-review | Gitea (issues, PRs, wiki, milestones) |
| netbox | cmdb-assistant | NetBox (DCIM, IPAM) |
| data-platform | data-platform | PostgreSQL, dbt |
| viz-platform | viz-platform | DMC component registry |
| contract-validator | contract-validator | Internal validation |

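For orientation, an entry in `.mcp.json` generally takes the following shape. The server name matches the table above, but the `command` path and `args` here are illustrative placeholders, not the repository's actual configuration:

```json
{
  "mcpServers": {
    "gitea": {
      "command": "mcp-servers/gitea/.venv/bin/python",
      "args": ["-m", "gitea_mcp_server"]
    }
  }
}
```
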
## Installation
|
## Installation
|
||||||
|
|
||||||
@@ -160,16 +164,14 @@ pandas, PostgreSQL/PostGIS, and dbt integration for data engineering.
|
|||||||
- Python 3.10+
- Access to target services (Gitea, NetBox as needed)

### Add marketplace to Claude Code

```bash
/plugin marketplace add https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git
```

Or add to `.claude/settings.json`:

```json
{
  "extraKnownMarketplaces": {
    ...
  }
}
```
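For team distribution, the marketplace entry can be spelled out in full. A minimal sketch only: the nested `source` object shape is an assumption, so check [docs/CONFIGURATION.md](./docs/CONFIGURATION.md) for the authoritative form.

```json
{
  "extraKnownMarketplaces": {
    "leo-claude-mktplace": {
      "source": {
        "source": "git",
        "url": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git"
      }
    }
  }
}
```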

### Setup MCP servers

After installing, create Python venvs for MCP servers:

```bash
cd ~/.claude/plugins/marketplaces/leo-claude-mktplace && ./scripts/setup.sh
```

Then restart Claude Code and run the interactive setup:

```
/projman setup
```

See [CONFIGURATION.md](./docs/CONFIGURATION.md) for manual setup and advanced options.

### Install to consumer projects

```bash
./scripts/install-plugin.sh <plugin-name> /path/to/project
./scripts/list-installed.sh /path/to/project
./scripts/uninstall-plugin.sh <plugin-name> /path/to/project
```

## Documentation

| Document | Description |
|----------|-------------|
| [CLAUDE.md](./CLAUDE.md) | Project instructions for Claude Code |
| [ARCHITECTURE.md](./docs/ARCHITECTURE.md) | System architecture and plugin reference |
| [COMMANDS-CHEATSHEET.md](./docs/COMMANDS-CHEATSHEET.md) | All commands quick reference |
| [CONFIGURATION.md](./docs/CONFIGURATION.md) | Centralized setup guide |
| [DEBUGGING-CHECKLIST.md](./docs/DEBUGGING-CHECKLIST.md) | Systematic troubleshooting guide |
| [UPDATING.md](./docs/UPDATING.md) | Update guide for the marketplace |
| [MIGRATION-v9.md](./docs/MIGRATION-v9.md) | v8.x to v9.0.0 migration guide |
| [CANONICAL-PATHS.md](./docs/CANONICAL-PATHS.md) | Authoritative path reference |
| [CHANGELOG.md](./CHANGELOG.md) | Version history |

## Validation

```bash
./scripts/validate-marketplace.sh   # Marketplace compliance (manifests, domains, paths)
./scripts/verify-hooks.sh           # Hook inventory (4 PreToolUse + 1 UserPromptSubmit)
```

## License

MIT License

## Support

- **Repository**: https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git

---

**New file: `docs/ARCHITECTURE.md`** (184 lines)

# Architecture — Leo Claude Marketplace v9.1.0

## Overview

Plugin marketplace for Claude Code: 20 plugins across 5 domains, 5 shared MCP servers,
4 PreToolUse safety hooks + 1 UserPromptSubmit quality hook.

## System Architecture

### Plugin Domains

| Domain | Purpose | Plugins |
|--------|---------|---------|
| core | Development workflow | projman, git-flow, pr-review, code-sentinel, doc-guardian, clarity-assist, contract-validator, claude-config-maintainer, project-hygiene |
| data | Data engineering | data-platform, viz-platform, data-seed |
| saas | SaaS development | saas-api-platform, saas-db-migrate, saas-react-platform, saas-test-pilot |
| ops | Operations | cmdb-assistant, ops-release-manager, ops-deploy-pipeline |
| debug | Diagnostics | debug-mcp |

### MCP Servers (Shared at Root)

| Server | Plugins Using It | External System |
|--------|------------------|-----------------|
| gitea | projman, pr-review | Gitea (issues, PRs, wiki) — uses published `gitea-mcp` package |
| netbox | cmdb-assistant | NetBox (DCIM, IPAM) |
| data-platform | data-platform | PostgreSQL, dbt |
| viz-platform | viz-platform | DMC registry |
| contract-validator | contract-validator | (internal validation) |

### Hook Architecture

| Plugin | Event | Trigger | Script |
|--------|-------|---------|--------|
| code-sentinel | PreToolUse | Write\|Edit\|MultiEdit | security-check.sh |
| git-flow | PreToolUse | Bash (branch naming) | branch-check.sh |
| git-flow | PreToolUse | Bash (git commit) | commit-msg-check.sh |
| cmdb-assistant | PreToolUse | MCP create/update | validate-input.sh |
| clarity-assist | UserPromptSubmit | All prompts | vagueness-check.sh |

No other hook types are permitted. All workflow automation is via explicit commands.
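
Each hook in the table is registered through its plugin's hooks configuration. A minimal sketch of one PreToolUse entry, assuming the standard Claude Code hooks settings shape (the script path is illustrative):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Write|Edit|MultiEdit",
        "hooks": [
          { "type": "command", "command": "${CLAUDE_PLUGIN_ROOT}/hooks/security-check.sh" }
        ]
      }
    ]
  }
}
```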

### Agent Model (projman)

| Agent | Model | Permission Mode | Role |
|-------|-------|-----------------|------|
| Planner | opus | default | Sprint planning, architecture analysis, issue creation |
| Orchestrator | sonnet | acceptEdits | Sprint execution, parallel batching, lesson capture |
| Executor | sonnet | bypassPermissions | Code implementation, branch management |
| Code Reviewer | opus | default | Pre-close quality review, security, tests |

### Config Hierarchy

| Level | Location | Contains |
|-------|----------|----------|
| System | `~/.config/claude/{service}.env` | Credentials |
| Project | `.env` in project root | Repo-specific config |
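
The hierarchy resolves by sourcing the system file first and letting the project `.env` override it. A sketch only: the helper and variable names are illustrative, not part of the marketplace scripts.

```shell
# load_claude_config: source system-level credentials first, then the
# project .env, so repo-specific values override system defaults.
# Illustrative helper; not an actual marketplace script.
load_claude_config() {
  sys="$1"    # e.g. ~/.config/claude/gitea.env
  proj="$2"   # e.g. ./.env
  [ -f "$sys" ] && . "$sys"
  [ -f "$proj" ] && . "$proj"   # later source wins on conflicting keys
  return 0
}
```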

### Branch Security

| Pattern | Access |
|---------|--------|
| development, feat/*, fix/* | Full |
| staging, stage/* | Read-only code, can create issues |
| main, master, prod/* | READ-ONLY. Emergency only. |
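
A hook enforcing these patterns might classify the current branch with a simple case match. A sketch of the table above; the function is hypothetical and git-flow's actual branch-check.sh may differ.

```shell
# branch_access: map a branch name to the access level in the table above.
# Hypothetical helper; the real branch-check.sh logic may differ.
branch_access() {
  case "$1" in
    development|feat/*|fix/*) echo "full" ;;
    staging|stage/*)          echo "read-only-code" ;;
    main|master|prod/*)       echo "read-only" ;;
    *)                        echo "unknown" ;;
  esac
}
```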

### Launch Profiles

| Profile | Plugins |
|---------|---------|
| sprint | projman, git-flow, pr-review, code-sentinel, doc-guardian, clarity-assist |
| data | data-platform, viz-platform, data-seed |
| saas | saas-api-platform, saas-react-platform, saas-db-migrate, saas-test-pilot |
| ops | cmdb-assistant, ops-release-manager, ops-deploy-pipeline |
| review | pr-review, code-sentinel |
| debug | debug-mcp |
| full | all plugins |

---

## Plugin Reference

### Core Domain

#### projman (v9.0.1)

Sprint planning and project management with Gitea integration.

- **Commands:** /sprint (plan|start|status|close|review|test), /project (initiation|plan|status|close), /adr (create|list|update|supersede), /rfc (create|list|review|approve|reject), /labels sync, /projman setup
- **Agents:** planner, orchestrator, executor, code-reviewer
- **MCP:** gitea

#### git-flow (v9.0.1)

Git workflow automation with smart commits and branch management.

- **Commands:** /gitflow (commit|branch-start|branch-cleanup|status|config)
- **Commit flags:** --push, --merge, --sync
- **Agents:** git-assistant
- **Hooks:** PreToolUse (branch-check.sh, commit-msg-check.sh)

#### pr-review (v9.0.1)

Multi-agent PR review with confidence scoring.

- **Commands:** /pr (review|summary|findings|diff|setup|init|sync)
- **Agents:** coordinator, security-reviewer, performance-analyst, maintainability-auditor, test-validator
- **MCP:** gitea

#### code-sentinel (v9.0.1)

Security scanning and code refactoring.

- **Commands:** /sentinel (scan|refactor|refactor-dry)
- **Agents:** security-reviewer, refactor-advisor
- **Hooks:** PreToolUse (security-check.sh)

#### doc-guardian (v9.0.1)

Documentation drift detection and synchronization.

- **Commands:** /doc (audit|sync|changelog-gen|coverage|stale-docs)
- **Agents:** doc-analyzer

#### clarity-assist (v9.0.1)

Prompt optimization with ND-friendly accommodations.

- **Commands:** /clarity (clarify|quick-clarify)
- **Agents:** clarity-coach
- **Hooks:** UserPromptSubmit (vagueness-check.sh)

#### contract-validator (v9.0.1)

Cross-plugin compatibility validation.

- **Commands:** /cv (validate|check-agent|list-interfaces|dependency-graph|setup|status)
- **Agents:** full-validation, agent-check
- **MCP:** contract-validator

#### claude-config-maintainer (v9.0.1)

CLAUDE.md and settings optimization.

- **Commands:** /claude-config (analyze|optimize|init|diff|lint|audit-settings|optimize-settings|permissions-map)
- **Agents:** maintainer

#### project-hygiene (v9.0.1)

Manual project file cleanup checks.

- **Commands:** /hygiene check (--fix flag for auto-fix)

### Data Domain

#### data-platform (v9.0.1)

pandas, PostgreSQL, and dbt integration.

- **Commands:** /data (ingest|profile|schema|explain|lineage|lineage-viz|run|dbt-test|quality|review|gate|setup)
- **Agents:** data-advisor, data-analysis, data-ingestion
- **MCP:** data-platform

#### viz-platform (v9.0.1)

DMC validation, Plotly charts, and theming.

- **Commands:** /viz (setup|chart|chart-export|dashboard|theme|theme-new|theme-css|component|accessibility-check|breakpoints|design-review|design-gate)
- **Agents:** design-reviewer, layout-builder, component-check, theme-setup
- **MCP:** viz-platform

#### data-seed (v0.1.0)

Test data generation and database seeding. *Scaffold — not yet implemented.*

### SaaS Domain

#### saas-api-platform (v0.1.0)

REST/GraphQL API scaffolding for FastAPI and Express. *Scaffold.*

#### saas-db-migrate (v0.1.0)

Database migration management for Alembic, Prisma, raw SQL. *Scaffold.*

#### saas-react-platform (v0.1.0)

React frontend toolkit for Next.js and Vite. *Scaffold.*

#### saas-test-pilot (v0.1.0)

Test automation for pytest, Jest, Vitest, Playwright. *Scaffold.*

### Ops Domain

#### cmdb-assistant (v9.0.1)

NetBox CMDB integration for infrastructure management.

- **Commands:** /cmdb (search|device|ip|site|audit|register|sync|topology|change-audit|ip-conflicts|setup)
- **Agents:** cmdb-assistant
- **MCP:** netbox
- **Hooks:** PreToolUse (validate-input.sh)

#### ops-release-manager (v0.1.0)

Release management with SemVer and changelog automation. *Scaffold.*

#### ops-deploy-pipeline (v0.1.0)

Deployment pipeline for Docker Compose and systemd. *Scaffold.*

### Debug Domain

#### debug-mcp (v0.1.0)

MCP server debugging and diagnostics. *Scaffold.*

---

**Updated file: `docs/CANONICAL-PATHS.md`**

**This file defines ALL valid paths in this repository. No exceptions. No inference. No assumptions.**

Last Updated: 2026-02-07 (v9.1.0)

---

```
leo-claude-mktplace/
├── .claude/                      # Claude Code local settings
├── .claude-plugin/               # Marketplace manifest
│   ├── marketplace.json
│   ├── marketplace-lean.json     # Lean profile (6 core plugins)
│   └── marketplace-full.json     # Full profile (all plugins)
├── .mcp-lean.json                # Lean profile MCP config (gitea only)
├── .mcp-full.json                # Full profile MCP config (all servers)
├── .scratch/                     # Transient work (auto-cleaned)
├── docs/                         # All documentation
│   ├── ARCHITECTURE.md           # System architecture and plugin reference
│   ├── CANONICAL-PATHS.md        # This file - single source of truth
│   ├── COMMANDS-CHEATSHEET.md    # All commands quick reference
│   ├── CONFIGURATION.md          # Centralized configuration guide
│   ├── DEBUGGING-CHECKLIST.md    # Systematic troubleshooting guide
│   ├── MIGRATION-v9.md           # v8.x → v9.0.0 migration guide
│   └── UPDATING.md               # Update guide
├── mcp-servers/                  # SHARED MCP servers (v3.0.0+)
│   ├── gitea/                    # Gitea MCP server
│   │   ├── mcp_server/
│   │   │   └── pull_requests.py  # NEW in v3.0.0
│   │   ├── requirements.txt
│   │   └── .venv/
│   ├── netbox/                   # NetBox MCP server
│   │   ├── mcp_server/
│   │   ├── requirements.txt
│   │   └── .venv/
│   ├── data-platform/            # Data engineering MCP (NEW v4.0.0)
│   │   ├── mcp_server/
│   │   │   ├── server.py
│   │   │   ├── pandas_tools.py
│   │   │   ├── postgres_tools.py
│   │   │   └── dbt_tools.py
│   │   ├── requirements.txt
│   │   └── .venv/
│   ├── contract-validator/       # Contract validation MCP (NEW v5.0.0)
│   │   ├── mcp_server/
│   │   │   ├── server.py
│   │   │   ├── parse_tools.py
│   │   │   ├── validation_tools.py
│   │   │   └── report_tools.py
│   │   ├── tests/
│   │   ├── requirements.txt
│   │   └── .venv/
│   └── viz-platform/             # Visualization MCP (NEW v4.1.0)
│       ├── mcp_server/
│       │   ├── server.py
│       │   ├── config.py
│       │   ├── component_registry.py
│       │   ├── dmc_tools.py
│       │   ├── chart_tools.py
│       │   ├── layout_tools.py
│       │   ├── theme_tools.py
│       │   ├── theme_store.py
│       │   └── page_tools.py
│       ├── registry/             # DMC component JSON registries
│       ├── tests/                # 94 tests
│       ├── requirements.txt
│       └── .venv/
├── plugins/                      # ALL plugins
│   ├── projman/                  # Sprint management
│   │   ├── .claude-plugin/
│   │   ├── commands/
│   │   ├── agents/
│   │   ├── skills/
│   │   └── claude-md-integration.md
│   ├── doc-guardian/             # Documentation drift detection
│   │   ├── .claude-plugin/
│   │   ├── commands/
│   │   ├── agents/
│   │   ├── skills/
│   │   └── claude-md-integration.md
│   ├── cmdb-assistant/           # NetBox CMDB integration
│   │   ├── .claude-plugin/
│   │   ├── commands/
│   │   ├── agents/
│   │   └── claude-md-integration.md
│   ├── project-hygiene/
│   │   ├── .claude-plugin/
│   │   ├── commands/
│   │   └── claude-md-integration.md
│   ├── clarity-assist/
│   │   ├── .claude-plugin/
│   │   ├── commands/
│   │   ├── agents/
│   │   ├── skills/
│   │   └── claude-md-integration.md
│   ├── git-flow/
│   │   ├── .claude-plugin/
│   │   ├── commands/
│   │   ├── agents/
│   │   ├── skills/
│   │   └── claude-md-integration.md
│   ├── pr-review/
│   │   ├── .claude-plugin/
│   │   ├── commands/
│   │   ├── agents/
│   │   ├── skills/
│   │   └── claude-md-integration.md
│   ├── data-platform/
│   │   ├── .claude-plugin/
│   │   ├── commands/
│   │   ├── agents/
│   │   └── claude-md-integration.md
│   ├── contract-validator/
│   │   ├── .claude-plugin/
│   │   ├── commands/
│   │   ├── agents/
│   │   └── claude-md-integration.md
│   ├── viz-platform/
│   │   ├── .claude-plugin/
│   │   ├── commands/
│   │   ├── agents/
│   │   └── claude-md-integration.md
│   ├── saas-api-platform/        # REST/GraphQL API scaffolding (scaffold)
│   │   ├── .claude-plugin/
│   │   ├── commands/
│   │   ├── agents/
│   │   ├── skills/
│   │   └── claude-md-integration.md
│   ├── saas-db-migrate/          # Database migration management (scaffold)
│   │   ├── .claude-plugin/
│   │   ├── commands/
│   │   ├── agents/
│   │   ├── skills/
│   │   └── claude-md-integration.md
│   ├── saas-react-platform/      # React frontend toolkit (scaffold)
│   │   ├── .claude-plugin/
│   │   ├── commands/
│   │   ├── agents/
│   │   ├── skills/
│   │   └── claude-md-integration.md
│   ├── saas-test-pilot/          # Test automation (scaffold)
│   │   ├── .claude-plugin/
│   │   ├── commands/
│   │   ├── agents/
│   │   ├── skills/
│   │   └── claude-md-integration.md
│   ├── data-seed/                # Test data generation (scaffold)
│   │   ├── .claude-plugin/
│   │   ├── commands/
│   │   ├── agents/
│   │   ├── skills/
│   │   └── claude-md-integration.md
│   ├── ops-release-manager/      # Release management (scaffold)
│   │   ├── .claude-plugin/
│   │   ├── commands/
│   │   ├── agents/
│   │   ├── skills/
│   │   └── claude-md-integration.md
│   ├── ops-deploy-pipeline/      # Deployment pipeline (scaffold)
│   │   ├── .claude-plugin/
│   │   ├── commands/
│   │   ├── agents/
│   │   ├── skills/
│   │   └── claude-md-integration.md
│   └── debug-mcp/                # MCP debugging toolkit (scaffold)
│       ├── .claude-plugin/
│       ├── commands/
│       ├── agents/
│       ├── skills/
│       └── claude-md-integration.md
├── scripts/                      # Setup and maintenance scripts
│   ├── setup.sh                  # Initial setup (create venvs, config templates)
│   ├── post-update.sh            # Post-update (clear cache, show changelog)
│   ├── setup-venvs.sh            # Setup MCP server venvs (create only, never delete)
│   ├── validate-marketplace.sh   # Marketplace compliance validation
│   ├── verify-hooks.sh           # Verify all hooks use correct event types
│   ├── release.sh                # Release automation with version bumping
│   ├── claude-launch.sh          # Task-specific launcher with profile selection
│   ├── install-plugin.sh         # Install plugin to consumer project
│   ├── list-installed.sh         # Show installed plugins in a project
│   └── uninstall-plugin.sh       # Remove plugin from consumer project
├── CLAUDE.md
├── README.md
├── LICENSE
```

## Path Patterns (MANDATORY)

### Phase 1a Paths (v8.1.0)

New files added in v8.1.0:

```
plugins/projman/commands/project.md
plugins/projman/commands/project-initiation.md
plugins/projman/commands/project-plan.md
plugins/projman/commands/project-status.md
plugins/projman/commands/project-close.md
plugins/projman/commands/adr.md
plugins/projman/commands/adr-create.md
plugins/projman/commands/adr-list.md
plugins/projman/commands/adr-update.md
plugins/projman/commands/adr-supersede.md
plugins/projman/skills/source-analysis.md
plugins/projman/skills/project-charter.md
plugins/projman/skills/adr-conventions.md
plugins/projman/skills/epic-conventions.md
plugins/projman/skills/wbs.md
plugins/projman/skills/risk-register.md
plugins/projman/skills/sprint-roadmap.md
plugins/projman/skills/wiki-conventions.md
plugins/project-hygiene/commands/hygiene-check.md
plugins/contract-validator/commands/cv-status.md
```

### Plugin Paths

| Context | Pattern | Example |
|---------|---------|---------|
| Plugin location | `plugins/{plugin-name}/` | `plugins/projman/` |
| Plugin manifest | `plugins/{plugin-name}/.claude-plugin/plugin.json` | `plugins/projman/.claude-plugin/plugin.json` |
| Plugin MCP mapping (optional) | `plugins/{plugin-name}/.claude-plugin/metadata.json` | `plugins/projman/.claude-plugin/metadata.json` |
| Plugin commands | `plugins/{plugin-name}/commands/` | `plugins/projman/commands/` |
| Plugin agents | `plugins/{plugin-name}/agents/` | `plugins/projman/agents/` |
| Plugin skills | `plugins/{plugin-name}/skills/` | `plugins/projman/skills/` |
| Plugin integration snippet | `plugins/{plugin-name}/claude-md-integration.md` | `plugins/projman/claude-md-integration.md` |
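
For reference, a plugin manifest at the path above might look like this minimal sketch; the field set is an assumption, so check an existing `plugin.json` for the authoritative schema:

```json
{
  "name": "projman",
  "version": "9.0.1",
  "description": "Sprint planning and project management with Gitea integration"
}
```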

### MCP Server Paths

MCP servers are **shared at repository root** and configured in `.mcp.json`.

| Context | Pattern | Example |
|---------|---------|---------|
| MCP configuration | `.mcp.json` | `.mcp.json` (at repo root) |
| Shared MCP server | `mcp-servers/{server}/` | `mcp-servers/gitea/` |
| MCP server code | `mcp-servers/{server}/mcp_server/` | `mcp-servers/netbox/mcp_server/` |
| MCP venv (local) | `mcp-servers/{server}/.venv/` | `mcp-servers/gitea/.venv/` |

**Note:** `mcp-servers/gitea/` is a thin wrapper — source code is in the published `gitea-mcp` package (Gitea PyPI). Other MCP servers still have local source code.

**Note:** Plugins do NOT have their own `mcp-servers/` directories. All MCP servers are shared at root and configured via `.mcp.json`.
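
A minimal sketch of one server entry in `.mcp.json`, assuming the common `mcpServers`/`command`/`args` shape; the interpreter path and module name here are illustrative, not taken from the actual config:

```json
{
  "mcpServers": {
    "netbox": {
      "command": "mcp-servers/netbox/.venv/bin/python",
      "args": ["-m", "mcp_server"]
    }
  }
}
```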
|
|
||||||
### MCP Venv Paths - CRITICAL

**Venvs live in a CACHE directory that SURVIVES marketplace updates.**

When checking for venvs, ALWAYS check in this order:

| Priority | Path | Survives Updates? |
|----------|------|-------------------|
| 1 (CHECK FIRST) | `~/.cache/claude-mcp-venvs/leo-claude-mktplace/{server}/.venv/` | YES |
| 2 (fallback) | `{marketplace}/mcp-servers/{server}/.venv/` | NO |

**Why cache first?**

- Marketplace directory gets WIPED on every update/reinstall
- Cache directory SURVIVES updates
- False "venv missing" errors waste hours of debugging

**Pattern for hooks checking venvs:**

```bash
# Check the update-surviving cache first, then the local fallback
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/{server}/.venv/bin/python"
LOCAL_VENV="$MARKETPLACE_ROOT/mcp-servers/{server}/.venv/bin/python"

if [[ -f "$CACHE_VENV" ]]; then
  VENV_PATH="$CACHE_VENV"
elif [[ -f "$LOCAL_VENV" ]]; then
  VENV_PATH="$LOCAL_VENV"
else
  echo "venv missing" >&2
  exit 1
fi
```
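As a runnable sketch of the same cache-first lookup, the check can be wrapped in a function and exercised against throwaway fixture directories (the server names and roots below are illustrative, not the real installed paths):

```shell
#!/usr/bin/env bash
# Hypothetical helper: resolve a server's venv python, cache path first.
# CACHE_ROOT / MARKETPLACE_ROOT stand in for the real cache and marketplace dirs.
set -euo pipefail

resolve_venv() {
  local server=$1
  local cache="$CACHE_ROOT/$server/.venv/bin/python"
  local local_venv="$MARKETPLACE_ROOT/mcp-servers/$server/.venv/bin/python"
  if [[ -f "$cache" ]]; then
    echo "$cache"          # priority 1: survives updates
  elif [[ -f "$local_venv" ]]; then
    echo "$local_venv"     # priority 2: wiped on reinstall
  else
    return 1               # genuinely missing
  fi
}

# Fixtures: gitea has a cached venv, netbox only a local one
CACHE_ROOT=$(mktemp -d)
MARKETPLACE_ROOT=$(mktemp -d)
mkdir -p "$CACHE_ROOT/gitea/.venv/bin"
touch "$CACHE_ROOT/gitea/.venv/bin/python"
mkdir -p "$MARKETPLACE_ROOT/mcp-servers/netbox/.venv/bin"
touch "$MARKETPLACE_ROOT/mcp-servers/netbox/.venv/bin/python"

resolve_venv gitea    # resolves to the cache path
resolve_venv netbox   # falls back to the local path
```

Only a server with neither path makes the helper fail, which is the only case a startup hook should report as "venv missing".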
**See lesson learned:** [Startup Hooks Must Check Venv Cache Path First](https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/wiki/lessons/patterns/startup-hooks-must-check-venv-cache-path-first)
### Documentation Paths

| Type | Location |
|------|----------|
| Architecture & plugin reference | `docs/ARCHITECTURE.md` |
| This file | `docs/CANONICAL-PATHS.md` |
| Update guide | `docs/UPDATING.md` |
| Configuration guide | `docs/CONFIGURATION.md` |
| Commands cheat sheet | `docs/COMMANDS-CHEATSHEET.md` |
| Debugging checklist | `docs/DEBUGGING-CHECKLIST.md` |
| Migration guide (v8→v9) | `docs/MIGRATION-v9.md` |

---
2. Verify each path against patterns in this file
3. Show verification to user before proceeding
### Relative Path Calculation

From `.mcp.json` (at root) to `mcp-servers/gitea/`:

```
.mcp.json (at repository root)
  → Uses absolute installed path: ~/.claude/plugins/marketplaces/.../mcp-servers/gitea/run.sh
```
From `.claude-plugin/marketplace.json` to `plugins/projman/`:

```
Result: ./plugins/projman
```
| Wrong | Why | Correct |
|-------|-----|---------|
| `projman/` at root | Plugins go in `plugins/` | `plugins/projman/` |
| `mcp-servers/` inside plugins | MCP servers are shared at root | Use root `mcp-servers/` |
| Plugin-level `.mcp.json` | MCP config is at root | Use root `.mcp.json` |
| Hardcoding absolute paths in source | Breaks portability | Use relative paths or `${CLAUDE_PLUGIN_ROOT}` |

---
## Architecture Note

MCP servers are **shared at repository root** and configured in a single `.mcp.json` file.

**Benefits:**

- Single source of truth for each MCP server
- Updates apply to all plugins automatically
- No duplication - clean plugin structure
- Simple configuration in one place

**Configuration:**

All MCP servers are defined in `.mcp.json` at repository root:

```json
{
  "mcpServers": {
    "gitea": { "command": ".../mcp-servers/gitea/run.sh" },
    "netbox": { "command": ".../mcp-servers/netbox/run.sh" },
    "data-platform": { "command": ".../mcp-servers/data-platform/run.sh" },
    "viz-platform": { "command": ".../mcp-servers/viz-platform/run.sh" },
    "contract-validator": { "command": ".../mcp-servers/contract-validator/run.sh" }
  }
}
```
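Each configured command can be spot-checked with `jq`. This is a sketch against a throwaway fixture (the fixture paths are illustrative; in practice you would point it at the real root-level `.mcp.json`, whose commands reference installed `run.sh` scripts):

```shell
#!/usr/bin/env bash
# Sketch: verify every MCP server command in .mcp.json exists and is executable.
set -euo pipefail

tmp=$(mktemp -d)
cat > "$tmp/.mcp.json" <<'EOF'
{
  "mcpServers": {
    "gitea": { "command": "/bin/sh" },
    "broken": { "command": "/no/such/run.sh" }
  }
}
EOF

# Emit "name command" pairs, then test each command path
jq -r '.mcpServers | to_entries[] | "\(.key) \(.value.command)"' "$tmp/.mcp.json" |
while read -r name cmd; do
  if [[ -x "$cmd" ]]; then
    echo "ok: $name"
  else
    echo "missing: $name ($cmd)"
  fi
done > "$tmp/report.txt"

cat "$tmp/report.txt"
```

A non-empty set of `missing:` lines is a quick signal that an MCP server was configured but never installed.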
---
## Domain Metadata

### Domain Field Location

Domain metadata is stored in `metadata.json` (v9.1.2+, moved from plugin.json/marketplace.json for Claude Code schema compliance):

| Location | Field | Example |
|----------|-------|---------|
| `plugins/{name}/.claude-plugin/metadata.json` | `"domain": "core"` | `plugins/projman/.claude-plugin/metadata.json` |
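A minimal `metadata.json` carrying the domain field might look like this (illustrative; any other fields a plugin's metadata defines are omitted here):

```json
{
  "domain": "core"
}
```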
### Allowed Domain Values

| Domain | Purpose | Existing Plugins |
|--------|---------|-----------------|
| `core` | Development workflow plugins | projman, git-flow, pr-review, code-sentinel, doc-guardian, clarity-assist, contract-validator, claude-config-maintainer, project-hygiene |
| `data` | Data engineering and visualization | data-platform, viz-platform, data-seed |
| `ops` | Operations and infrastructure | cmdb-assistant, ops-release-manager, ops-deploy-pipeline |
| `saas` | SaaS application development | saas-api-platform, saas-db-migrate, saas-react-platform, saas-test-pilot |
| `debug` | Debugging and diagnostics | debug-mcp |

### Plugin Naming Convention

- **Core plugins:** No prefix (existing names never change)
- **New plugins:** Domain prefix: `saas-*`, `ops-*`, `data-*`, `debug-*`
- Domain is always in metadata; the prefix is a naming convention, not a requirement

### Domain Query Examples

```bash
# List all plugins in a domain
for p in plugins/*; do
  d=$(jq -r '.domain // empty' "$p/.claude-plugin/metadata.json" 2>/dev/null)
  [[ "$d" == "saas" ]] && basename "$p"
done

# Count plugins per domain
for p in plugins/*; do
  jq -r '.domain // empty' "$p/.claude-plugin/metadata.json" 2>/dev/null
done | sort | uniq -c | sort -rn
```

---
| Date | Change | By |
|------|--------|-----|
| 2026-02-07 | v9.1.2: Moved domain field from plugin.json/marketplace.json to metadata.json for Claude Code schema compliance | Claude Code |
| 2026-02-07 | v9.1.0: Removed deleted dirs (architecture/, prompts/, project-lessons-learned/), added Phase 3 plugins, added ARCHITECTURE.md, MIGRATION-v9.md, updated Domain table, removed stale hooks/ dirs | Claude Code |
| 2026-02-06 | v8.0.0: Added domain metadata section, Phase 1a paths, future plugin paths | Claude Code |
| 2026-02-04 | v7.1.0: Added profile configs, prompts/, project-lessons-learned/, metadata.json, deprecated switch-profile.sh | Claude Code |
| 2026-01-30 | v5.5.0: Removed plugin-level mcp-servers symlinks - all MCP config now in root .mcp.json | Claude Code |
| 2026-01-26 | v5.0.0: Added contract-validator plugin and MCP server | Claude Code |
| 2026-01-26 | v4.1.0: Added viz-platform plugin and MCP server | Claude Code |
| 2026-01-25 | v4.0.0: Added data-platform plugin and MCP server | Claude Code |
| 2026-01-20 | v3.0.0: MCP servers moved to root with symlinks | Claude Code |
| 2026-01-20 | v3.0.0: Added clarity-assist, git-flow, pr-review plugins | Claude Code |
| 2026-01-20 | v3.0.0: Added docs/CONFIGURATION.md | Claude Code |
# Plugin Commands Cheat Sheet

Quick reference for all commands in the Leo Claude Marketplace (v9.0.0+).

All commands follow the `/<noun> <action>` sub-command pattern.

## Invocation

Commands can be invoked in two ways:

1. **Via dispatch file:** `/doc audit` (routes through a dispatch file to invoke `/doc-guardian:doc-audit`)
2. **Direct plugin-prefixed:** `/doc-guardian:doc-audit` (invokes the command directly)

Both methods work identically. The dispatch file provides a user-friendly interface with `$ARGUMENTS` parsing, while the direct format bypasses the dispatcher.

If dispatch routing fails, use the direct plugin-prefixed format: `/<plugin-name>:<command-name>`.

**Examples:**

- `/sprint plan` → routes to `/projman:sprint-plan`
- `/doc audit` → routes to `/doc-guardian:doc-audit`
- `/pr review` → routes to `/pr-review:pr-review`
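Conceptually, dispatch is a table lookup from `<noun> <action>` to a plugin-prefixed command. A hypothetical sketch (this mapping covers only the three examples above; the real dispatch files are maintained per plugin):

```shell
#!/usr/bin/env bash
# Hypothetical dispatch lookup: "noun action" -> plugin-prefixed command.
set -euo pipefail

declare -A DISPATCH=(
  ["sprint plan"]="/projman:sprint-plan"
  ["doc audit"]="/doc-guardian:doc-audit"
  ["pr review"]="/pr-review:pr-review"
)

route() {
  local key="$1 $2"
  # Unknown noun/action pairs fall back to "unknown" rather than failing
  echo "${DISPATCH[$key]:-unknown}"
}

route sprint plan   # -> /projman:sprint-plan
route doc audit     # -> /doc-guardian:doc-audit
```

When the lookup yields `unknown`, the fallback described above applies: invoke the direct `/<plugin-name>:<command-name>` form yourself.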
---
| Plugin | Command | Auto | Manual | Description |
|--------|---------|:----:|:------:|-------------|
| **projman** | `/sprint plan` | | X | Start sprint planning with AI-guided architecture analysis and issue creation |
| **projman** | `/sprint start` | | X | Begin sprint execution with dependency analysis and parallel task coordination (requires approval or `--force`) |
| **projman** | `/sprint status` | | X | Check current sprint progress (add `--diagram` for Mermaid visualization) |
| **projman** | `/sprint review` | | X | Pre-sprint-close code quality review (debug artifacts, security, error handling) |
| **projman** | `/sprint test` | | X | Run tests (`/sprint test run`) or generate tests (`/sprint test gen <target>`) |
| **projman** | `/sprint close` | | X | Complete sprint and capture lessons learned to Gitea Wiki |
| **projman** | `/labels sync` | | X | Synchronize label taxonomy from Gitea |
| **projman** | `/projman setup` | | X | Auto-detect mode or use `--full`, `--quick`, `--sync`, `--clear-cache` |
| **projman** | `/rfc create` | | X | Create new RFC from conversation or spec |
| **projman** | `/rfc list` | | X | List all RFCs grouped by status |
| **projman** | `/rfc review` | | X | Submit RFC for maintainer review |
| **projman** | `/rfc approve` | | X | Approve RFC for sprint planning |
| **projman** | `/rfc reject` | | X | Reject RFC with documented reason |
| **projman** | `/project initiation` | | X | Discovery, source analysis, project charter |
| **projman** | `/project plan` | | X | WBS, risk register, sprint roadmap |
| **projman** | `/project status` | | X | Project health check across all sprints |
| **projman** | `/project close` | | X | Final retrospective and archival |
| **projman** | `/adr create` | | X | Create new Architecture Decision Record |
| **projman** | `/adr list` | | X | List all ADRs with status |
| **projman** | `/adr update` | | X | Update existing ADR |
| **projman** | `/adr supersede` | | X | Supersede ADR with new decision |
| **git-flow** | `/gitflow commit` | | X | Create commit with auto-generated conventional message. Flags: `--push`, `--merge`, `--sync` |
| **git-flow** | `/gitflow branch-start` | | X | Create new feature/fix/chore branch with naming conventions |
| **git-flow** | `/gitflow branch-cleanup` | | X | Remove merged branches locally and optionally on remote |
| **git-flow** | `/gitflow status` | | X | Enhanced git status with recommendations |
| **git-flow** | `/gitflow config` | | X | Configure git-flow settings for the project |
| **pr-review** | `/pr setup` | | X | Setup wizard for pr-review (shares Gitea MCP with projman) |
| **pr-review** | `/pr init` | | X | Quick project setup for PR reviews |
| **pr-review** | `/pr sync` | | X | Sync config with git remote after repo move/rename |
| **pr-review** | `/pr review` | | X | Full multi-agent PR review with confidence scoring |
| **pr-review** | `/pr summary` | | X | Quick summary of PR changes |
| **pr-review** | `/pr findings` | | X | List and filter review findings by category/severity |
| **pr-review** | `/pr diff` | | X | Formatted diff with inline review comments and annotations |
| **clarity-assist** | `/clarity clarify` | | X | Full 4-D prompt optimization with ND accommodations |
| **clarity-assist** | `/clarity quick-clarify` | | X | Rapid single-pass clarification for simple requests |
| **doc-guardian** | `/doc audit` | | X | Full documentation audit - scans for doc drift |
| **doc-guardian** | `/doc sync` | | X | Synchronize pending documentation updates |
| **doc-guardian** | `/doc changelog-gen` | | X | Generate changelog from conventional commits |
| **doc-guardian** | `/doc coverage` | | X | Documentation coverage metrics by function/class |
| **doc-guardian** | `/doc stale-docs` | | X | Flag documentation behind code changes |
| **code-sentinel** | `/sentinel scan` | | X | Full security audit (SQL injection, XSS, secrets, etc.) |
| **code-sentinel** | `/sentinel refactor` | | X | Apply refactoring patterns to improve code |
| **code-sentinel** | `/sentinel refactor-dry` | | X | Preview refactoring without applying changes |
| **code-sentinel** | *PreToolUse hook* | X | | Scans code before writing; blocks critical issues |
| **claude-config-maintainer** | `/claude-config analyze` | | X | Analyze CLAUDE.md for optimization opportunities |
| **claude-config-maintainer** | `/claude-config optimize` | | X | Optimize CLAUDE.md structure with preview/backup |
| **claude-config-maintainer** | `/claude-config init` | | X | Initialize new CLAUDE.md for a project |
| **claude-config-maintainer** | `/claude-config diff` | | X | Track CLAUDE.md changes over time with behavioral impact |
| **claude-config-maintainer** | `/claude-config lint` | | X | Lint CLAUDE.md for anti-patterns and best practices |
| **claude-config-maintainer** | `/claude-config audit-settings` | | X | Audit settings.local.json permissions (100-point score) |
| **claude-config-maintainer** | `/claude-config optimize-settings` | | X | Optimize permissions (profiles, consolidation, dry-run) |
| **claude-config-maintainer** | `/claude-config permissions-map` | | X | Visual review layer + permission coverage map |
| **cmdb-assistant** | `/cmdb setup` | | X | Setup wizard for NetBox MCP server |
| **cmdb-assistant** | `/cmdb search` | | X | Search NetBox for devices, IPs, sites |
| **cmdb-assistant** | `/cmdb device` | | X | Manage network devices (create, view, update, delete) |
| **cmdb-assistant** | `/cmdb ip` | | X | Manage IP addresses and prefixes |
| **cmdb-assistant** | `/cmdb site` | | X | Manage sites, locations, racks, and regions |
| **cmdb-assistant** | `/cmdb audit` | | X | Data quality analysis (VMs, devices, naming, roles) |
| **cmdb-assistant** | `/cmdb register` | | X | Register current machine into NetBox with running apps |
| **cmdb-assistant** | `/cmdb sync` | | X | Sync machine state with NetBox (detect drift, update) |
| **cmdb-assistant** | `/cmdb topology` | | X | Infrastructure topology diagrams (rack, network, site views) |
| **cmdb-assistant** | `/cmdb change-audit` | | X | NetBox audit trail queries with filtering |
| **cmdb-assistant** | `/cmdb ip-conflicts` | | X | Detect IP conflicts and overlapping prefixes |
| **project-hygiene** | `/hygiene check` | | X | Project file organization and cleanup check |
| **data-platform** | `/data ingest` | | X | Load data from CSV, Parquet, JSON into DataFrame |
| **data-platform** | `/data profile` | | X | Generate data profiling report with statistics |
| **data-platform** | `/data schema` | | X | Explore database schemas, tables, columns |
| **data-platform** | `/data explain` | | X | Explain query execution plan |
| **data-platform** | `/data lineage` | | X | Show dbt model lineage and dependencies |
| **data-platform** | `/data run` | | X | Run dbt models with validation |
| **data-platform** | `/data lineage-viz` | | X | dbt lineage visualization as Mermaid diagrams |
| **data-platform** | `/data dbt-test` | | X | Formatted dbt test runner with summary and failure details |
| **data-platform** | `/data quality` | | X | DataFrame quality checks (nulls, duplicates, types, outliers) |
| **data-platform** | `/data review` | | X | Comprehensive data integrity audits |
| **data-platform** | `/data gate` | | X | Binary pass/fail data integrity gates |
| **data-platform** | `/data setup` | | X | Setup wizard for data-platform MCP servers |
| **viz-platform** | `/viz setup` | | X | Setup wizard for viz-platform MCP server |
| **viz-platform** | `/viz chart` | | X | Create Plotly charts with theme integration |
| **viz-platform** | `/viz chart-export` | | X | Export charts to PNG, SVG, PDF via kaleido |
| **viz-platform** | `/viz dashboard` | | X | Create dashboard layouts with filters and grids |
| **viz-platform** | `/viz theme` | | X | Apply existing theme to visualizations |
| **viz-platform** | `/viz theme-new` | | X | Create new custom theme with design tokens |
| **viz-platform** | `/viz theme-css` | | X | Export theme as CSS custom properties |
| **viz-platform** | `/viz component` | | X | Inspect DMC component props and validation |
| **viz-platform** | `/viz accessibility-check` | | X | Color blind validation (WCAG contrast ratios) |
| **viz-platform** | `/viz breakpoints` | | X | Configure responsive layout breakpoints |
| **viz-platform** | `/viz design-review` | | X | Detailed design system audits |
| **viz-platform** | `/viz design-gate` | | X | Binary pass/fail design system validation gates |
| **contract-validator** | `/cv validate` | | X | Full marketplace compatibility validation |
| **contract-validator** | `/cv check-agent` | | X | Validate single agent definition |
| **contract-validator** | `/cv list-interfaces` | | X | Show all plugin interfaces |
| **contract-validator** | `/cv dependency-graph` | | X | Mermaid visualization of plugin dependencies |
| **contract-validator** | `/cv setup` | | X | Setup wizard for contract-validator MCP |
| **contract-validator** | `/cv status` | | X | Marketplace-wide health check (installation, MCP, configuration) |

---

## Migration from v8.x

All commands were renamed in v9.0.0 to follow the `/<noun> <action>` pattern. See [MIGRATION-v9.md](./MIGRATION-v9.md) for the complete old-to-new mapping.

---
| Category | Plugins | Primary Use |
|----------|---------|-------------|
| **Setup** | projman, pr-review, cmdb-assistant, data-platform, viz-platform, contract-validator | `/projman setup`, `/pr setup`, `/cmdb setup`, `/data setup`, `/viz setup`, `/cv setup` |
| **Task Planning** | projman, clarity-assist | Sprint management, requirement clarification |
| **Code Quality** | code-sentinel, pr-review | Security scanning, PR reviews |
| **Documentation** | doc-guardian, claude-config-maintainer | Doc sync, CLAUDE.md maintenance |
| **Git Operations** | git-flow | Commits, branches, workflow automation |
| **Infrastructure** | cmdb-assistant | NetBox CMDB management |
| **Data Engineering** | data-platform | pandas, PostgreSQL, dbt operations |
| **Visualization** | viz-platform | DMC validation, Plotly charts, theming |
| **Validation** | contract-validator | Cross-plugin compatibility checks |
| **Maintenance** | project-hygiene | Manual cleanup via `/hygiene check` |

---
| Plugin | Hook Event | Behavior |
|--------|------------|----------|
| **code-sentinel** | PreToolUse (Write/Edit/MultiEdit) | Scans code before writing; blocks critical security issues |
| **git-flow** | PreToolUse (Bash) | Validates branch naming and commit message conventions |
| **cmdb-assistant** | PreToolUse (MCP create/update) | Validates input data before NetBox writes |
| **clarity-assist** | UserPromptSubmit | Detects vague prompts and suggests clarification |

---
|
|
||||||
## Dev Workflow Examples
|
## Dev Workflow Examples
|
||||||
|
|
||||||
|
### Example 0: RFC-Driven Feature Development
|
||||||
|
|
||||||
|
Full workflow from idea to implementation using RFCs:
|
||||||
|
|
||||||
|
```
|
||||||
|
1. /clarity clarify # Clarify the feature idea
|
||||||
|
2. /rfc create # Create RFC from clarified spec
|
||||||
|
... refine RFC content ...
|
||||||
|
3. /rfc review 0001 # Submit RFC for review
|
||||||
|
... review discussion ...
|
||||||
|
4. /rfc approve 0001 # Approve RFC for implementation
|
||||||
|
5. /sprint plan # Select approved RFC for sprint
|
||||||
|
... implement feature ...
|
||||||
|
6. /sprint close # Complete sprint, RFC marked Implemented
|
||||||
|
```
|
||||||
|
|
||||||
### Example 1: Starting a New Feature Sprint

A typical workflow for planning and executing a feature sprint:

```
1. /clarity clarify                # Clarify requirements if vague
2. /sprint plan                    # Plan the sprint with architecture analysis
3. /labels sync                    # Ensure labels are up-to-date
4. /sprint start                   # Begin execution with dependency ordering
5. /gitflow branch-start feat/...  # Create feature branch
   ... implement features ...
6. /gitflow commit                 # Commit with conventional message
7. /sprint status --diagram        # Check progress with visualization
8. /sprint review                  # Pre-close quality review
9. /sprint test run                # Verify test coverage
10. /sprint close                  # Capture lessons learned
```
### Example 2: Daily Development Cycle

Quick daily workflow with git-flow:

```
1. /gitflow status                # Check current state
2. /gitflow branch-start fix/...  # Start bugfix branch
   ... make changes ...
3. /gitflow commit                # Auto-generate commit message
4. /gitflow commit --push         # Commit and push to remote
5. /gitflow branch-cleanup        # Clean merged branches
```
### Example 3: Pull Request Review Workflow

Reviewing a PR before merge:

```
1. /pr summary     # Quick overview of changes
2. /pr review      # Full multi-agent review
3. /pr findings    # Filter findings by severity
4. /sentinel scan  # Deep security audit if needed
```
### Example 4: Documentation Maintenance

Keeping docs in sync:

```
1. /doc audit               # Scan for documentation drift
2. /doc sync                # Apply pending updates
3. /claude-config analyze   # Check CLAUDE.md health
4. /claude-config optimize  # Optimize if needed
```
### Example 5: Code Refactoring Session

Safe refactoring with preview:

```
1. /sentinel refactor-dry  # Preview opportunities
2. /sentinel scan          # Baseline security check
3. /sentinel refactor      # Apply improvements
4. /sprint test run        # Verify nothing broke
5. /gitflow commit         # Commit with descriptive message
```
### Example 6: Infrastructure Documentation

Managing infrastructure with CMDB:

```
1. /cmdb search "server"  # Find existing devices
2. /cmdb device view X    # Check device details
3. /cmdb ip list          # List available IPs
4. /cmdb site view Y      # Check site info
```
### Example 6b: Data Engineering Workflow

Working with data pipelines:

```
1. /data ingest file.csv       # Load data into DataFrame
2. /data profile               # Generate data profiling report
3. /data schema                # Explore database schemas
4. /data lineage model_name    # View dbt model dependencies
5. /data run model_name        # Execute dbt models
6. /data explain "SELECT ..."  # Analyze query execution plan
```
### Example 7: First-Time Setup (New Machine)

Setting up the marketplace for the first time:

```
1. /projman setup --full   # Full setup: MCP + system config + project
   # → Follow prompts for Gitea URL, org
   # → Add token manually when prompted
   # → Confirm repository name
2. # Restart Claude Code session
3. /labels sync            # Sync Gitea labels
4. /sprint plan            # Plan first sprint
```
### Example 8: New Project Setup (System Already Configured)

Adding a new project when system config exists:

```
1. /projman setup --quick  # Quick project setup (auto-detected)
   # → Confirms detected repo name
   # → Creates .env
2. /labels sync            # Sync Gitea labels
3. /sprint plan            # Plan first sprint
```
---

## Quick Tips
- **Hooks run automatically** - code-sentinel and git-flow protect you without manual invocation
- **Use `/gitflow commit` over `git commit`** - generates better commit messages following conventions
- **Run `/sprint review` before `/sprint close`** - catches issues before closing the sprint
- **Use `/clarity clarify` for vague requests** - especially helpful for complex requirements
- **`/sentinel refactor-dry` is safe** - always preview before applying refactoring changes
- **`/gitflow commit --push`** replaces the old `/git-commit-push` - fewer commands to remember
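For reference, the conventional format that `/gitflow commit` targets follows the Conventional Commits shape. The message below is purely illustrative (not generated output):

```
feat(auth): add token refresh endpoint

Rotate refresh tokens on use to limit the replay window.

Closes #42
```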
---
Some plugins require MCP server connectivity:

| pr-review | Gitea | PR operations and reviews |
| cmdb-assistant | NetBox | Infrastructure CMDB |
| data-platform | pandas, PostgreSQL, dbt | DataFrames, database queries, dbt builds |
| viz-platform | viz-platform | DMC validation, charts, layouts, themes, pages |
| contract-validator | contract-validator | Plugin interface parsing, compatibility validation |
Ensure credentials are configured in `~/.config/claude/gitea.env`, `~/.config/claude/netbox.env`, or `~/.config/claude/postgres.env`.
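As a quick sanity check before starting a session, you can verify that a credentials file defines the variables a plugin expects. This helper is illustrative only (not part of the marketplace scripts); the variable names come from the configuration tables later in this document:

```shell
# Illustrative helper: report any variables missing from a credentials file.
check_env_file() {
  local file="$1"; shift
  local missing=0 var
  for var in "$@"; do
    grep -q "^${var}=" "$file" 2>/dev/null || { echo "missing: $var"; missing=1; }
  done
  return "$missing"
}

check_env_file ~/.config/claude/gitea.env GITEA_API_URL GITEA_API_TOKEN \
  || echo "fix ~/.config/claude/gitea.env before starting a session"
```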
---

*Last Updated: 2026-02-06*

---
**After installing the marketplace and plugins via Claude Code:**

```
/projman setup
```

The interactive wizard auto-detects what's needed and handles everything except manually adding your API tokens.

---
└─────────────────────────────────────────────────────────────────────────────┘
                                      │
                                      ▼
                           /projman setup --full
                     (or /projman setup auto-detects)
                                      │
       ┌──────────────────────────────┼──────────────────────────────┐
       ▼                              ▼                              ▼
                    │
    ┌───────────────┴───────────────┐
    ▼                               ▼
/projman setup --quick        /projman setup
   (explicit mode)          (auto-detects mode)
    │                               │
    │                    ┌──────────┴──────────┐
    │                    ▼                     ▼
## What Runs Automatically vs User Interaction

### `/projman setup --full` - Full Setup

| Phase | Type | What Happens |
|-------|------|--------------|
| **6. Project Config** | Automated | Creates `.env` file, checks `.gitignore` |
| **7. Validation** | Automated | Tests API connectivity, shows summary |

### `/projman setup --quick` - Quick Project Setup

| Phase | Type | What Happens |
|-------|------|--------------|
---

## One Command, Three Modes

| Mode | When to Use | What It Does |
|------|-------------|--------------|
| `/projman setup` | Any time | Auto-detects: runs full, quick, or sync as needed |
| `/projman setup --full` | First time on a machine | Full setup: MCP server + system config + project config |
| `/projman setup --quick` | Starting a new project | Quick setup: project config only (assumes system is ready) |
| `/projman setup --sync` | After repo move/rename | Updates `.env` to match current git remote |

**Auto-detection logic:**
1. No system config → **full** mode
2. System config exists, no project config → **quick** mode
3. Both exist, git remote differs → **sync** mode
4. Both exist, match → already configured, offer to reconfigure

**Typical workflow:**
1. Install plugin → run `/projman setup` (auto-runs full mode)
2. Start new project → run `/projman setup` (auto-runs quick mode)
3. Repository moved? → run `/projman setup` (auto-runs sync mode)
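The auto-detection decision can be sketched as a small shell function. This is illustrative only: the real command inspects `~/.config/claude/gitea.env`, the project `.env`, and the git remote, whereas the sketch takes yes/no flags in place of those probes:

```shell
# Illustrative sketch of /projman setup's mode decision; the three inputs
# stand in for real filesystem/git checks.
detect_mode() {
  local has_system_cfg="$1" has_project_cfg="$2" remote_matches="$3"
  if   [ "$has_system_cfg" != "yes" ];  then echo "full"
  elif [ "$has_project_cfg" != "yes" ]; then echo "quick"
  elif [ "$remote_matches" != "yes" ];  then echo "sync"
  else echo "configured"
  fi
}

detect_mode no yes yes    # fresh machine → full mode
detect_mode yes no yes    # new project on a configured machine → quick mode
```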
---
This marketplace uses a **hybrid configuration** approach:

┌─────────────────────────────────────────────────────────────────┐
│ PROJECT-LEVEL (once per project)                                │
│ <project-root>/.env                                             │
├─────────────────────────────────────────────────────────────────┤
│ GITEA_REPO          │ Repository as owner/repo format           │
│ GIT_WORKFLOW_STYLE  │ (optional) Override system default        │
│ PR_REVIEW_*         │ (optional) PR review settings             │
└─────────────────────────────────────────────────────────────────┘
**Benefits:**
- Single token per service (update once, use everywhere)
- Easy multi-project setup (just run `/projman setup` in each project)
- Security (tokens never committed to git, never typed into AI chat)
- Project isolation (each project can override defaults)

---

## Prerequisites

Before running `/projman setup`:

1. **Python 3.10+** installed
Run the setup wizard in Claude Code:

```
/projman setup
```

The wizard will guide you through each step interactively and auto-detect the appropriate mode.

**Note:** After first-time setup, you'll need to restart your Claude Code session for MCP tools to become available.
In each project root:

```bash
cat > .env << 'EOF'
GITEA_REPO=your-organization/your-repo-name
EOF
```
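If you prefer to fill `GITEA_REPO` from the existing git remote, a helper along these lines works for common URL shapes. It is illustrative only; the plugin's own sync mode does the equivalent internally:

```shell
# Illustrative: derive "owner/repo" from a git remote URL (SSH or HTTPS).
repo_from_remote() {
  local url="${1%.git}"                  # strip trailing .git if present
  case "$url" in
    git@*)  echo "${url#*:}" ;;          # git@host:owner/repo
    http*)  echo "$url" | awk -F/ '{print $(NF-1) "/" $NF}' ;;
    *)      return 1 ;;
  esac
}

repo_from_remote "git@gitea.example.com:my-org/my-repo.git"   # my-org/my-repo
```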
| `GITEA_API_URL` | Gitea API endpoint (with `/api/v1`) | `https://gitea.example.com/api/v1` |
| `GITEA_API_TOKEN` | Personal access token | `abc123...` |

**Note:** `GITEA_REPO` is configured at the project level in `owner/repo` format since different projects may belong to different organizations.

**Generating a Gitea Token:**
1. Log into Gitea → **User Icon** → **Settings**
Create `.env` in each project root:

```bash
# Required for projman, pr-review (use owner/repo format)
GITEA_REPO=your-organization/your-repo-name

# Optional: Override git-flow defaults
GIT_WORKFLOW_STYLE=pr-required
PR_REVIEW_AUTO_SUBMIT=false
```
| Variable | Required | Description |
|----------|----------|-------------|
| `GITEA_REPO` | Yes | Repository in `owner/repo` format (e.g., `my-org/my-repo`) |
| `GIT_WORKFLOW_STYLE` | No | Override system default |
| `PR_REVIEW_*` | No | PR review settings |
## Plugin Configuration Summary

| Plugin | System Config | Project Config | Setup Command |
|--------|---------------|----------------|---------------|
| **projman** | gitea.env | .env (GITEA_REPO=owner/repo) | `/projman setup` |
| **pr-review** | gitea.env | .env (GITEA_REPO=owner/repo) | `/pr setup` |
| **git-flow** | git-flow.env (optional) | .env (optional) | None needed |
| **clarity-assist** | None | None | None needed |
| **cmdb-assistant** | netbox.env | None | `/cmdb setup` |
| **data-platform** | postgres.env | .env (optional) | `/data setup` |
| **viz-platform** | None | .env (optional DMC_VERSION) | `/viz setup` |
| **doc-guardian** | None | None | None needed |
| **code-sentinel** | None | None | None needed |
| **project-hygiene** | None | None | None needed |
| **claude-config-maintainer** | None | None | None needed |
| **contract-validator** | None | None | `/cv setup` |

---
Once system-level config is set up, adding new projects is simple:

```
cd ~/projects/new-project
/projman setup
```

The command auto-detects that system config exists and runs quick project setup.

---
## Installing Plugins to Consumer Projects

The marketplace provides scripts to install plugins into consumer projects. This sets up the MCP server connections and adds CLAUDE.md integration snippets.

### Install a Plugin

```bash
cd /path/to/leo-claude-mktplace
./scripts/install-plugin.sh <plugin-name> <target-project-path>
```

**Examples:**

```bash
# Install data-platform to a portfolio project
./scripts/install-plugin.sh data-platform ~/projects/personal-portfolio

# Install multiple plugins
./scripts/install-plugin.sh viz-platform ~/projects/personal-portfolio
./scripts/install-plugin.sh projman ~/projects/personal-portfolio
```
**What it does:**
1. Validates the plugin exists in the marketplace
2. Adds MCP server entry to target's `.mcp.json` (if plugin has MCP server)
3. Appends integration snippet to target's `CLAUDE.md`
4. Reports changes and lists available commands

**After installation:** Restart your Claude Code session for MCP tools to become available.
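For step 2, the entry added to the target's `.mcp.json` has this general shape. The server name and command path below are illustrative; the actual values depend on the plugin being installed:

```json
{
  "mcpServers": {
    "data-platform": {
      "command": "python",
      "args": ["/path/to/leo-claude-mktplace/plugins/data-platform/mcp/server.py"]
    }
  }
}
```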
### Uninstall a Plugin

```bash
./scripts/uninstall-plugin.sh <plugin-name> <target-project-path>
```

Removes the MCP server entry and CLAUDE.md integration section.

### List Installed Plugins

```bash
./scripts/list-installed.sh <target-project-path>
```

Shows which marketplace plugins are installed, partially installed, or available.
**Output example:**

```
✓ Fully Installed:
  PLUGIN          VERSION  DESCRIPTION
  ------          -------  -----------
  data-platform   1.3.0    pandas, PostgreSQL, and dbt integration...
  viz-platform    1.1.0    DMC validation, Plotly charts, and theming...

○ Available (not installed):
  projman         3.4.0    Sprint planning and project management...
```
### Plugins with MCP Servers

Not all plugins have MCP servers. The install script handles this automatically:

| Plugin | Has MCP Server | Notes |
|--------|----------------|-------|
| data-platform | ✓ | pandas, PostgreSQL, dbt tools |
| viz-platform | ✓ | DMC validation, chart, theme tools |
| contract-validator | ✓ | Plugin compatibility validation |
| cmdb-assistant | ✓ (via netbox) | NetBox CMDB tools |
| projman | ✓ (via gitea) | Issue, wiki, PR tools |
| pr-review | ✓ (via gitea) | PR review tools |
| git-flow | ✗ | Commands only |
| doc-guardian | ✗ | Commands only |
| code-sentinel | ✗ | Commands and hooks only |
| clarity-assist | ✗ | Commands only |
|
||||||
|
|
||||||
|
- **jq** must be installed (`sudo apt install jq`)
|
||||||
|
- Scripts are idempotent (safe to run multiple times)
|
||||||
|
|
||||||
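Idempotency for the `CLAUDE.md` snippet can be achieved with a marker guard along these lines (an illustrative sketch, not the scripts' actual code):

```shell
# Illustrative: append a snippet only if its marker is absent, so repeated
# installs never duplicate the CLAUDE.md section.
append_once() {
  local file="$1" marker="$2" snippet="$3"
  grep -qF "$marker" "$file" 2>/dev/null && return 0   # already present
  printf '%s\n%s\n' "$marker" "$snippet" >> "$file"
}
```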
---
## Agent Frontmatter Configuration

Agents specify their configuration in frontmatter using Claude Code's supported fields. Reference: https://code.claude.com/docs/en/sub-agents

### Supported Frontmatter Fields

| Field | Required | Default | Description |
|-------|----------|---------|-------------|
| `name` | Yes | — | Unique identifier, lowercase + hyphens |
| `description` | Yes | — | When Claude should delegate to this subagent |
| `model` | No | `inherit` | `sonnet`, `opus`, `haiku`, or `inherit` |
| `permissionMode` | No | `default` | Controls permission prompts: `default`, `acceptEdits`, `dontAsk`, `bypassPermissions`, `plan` |
| `disallowedTools` | No | none | Comma-separated tools to remove from agent's toolset |
| `skills` | No | none | Comma-separated skills auto-injected into context at startup |
| `hooks` | No | none | Lifecycle hooks scoped to this subagent |
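Put together, a minimal agent file using these fields might look like the following. All names here are illustrative, not taken from a real marketplace agent:

```markdown
---
name: example-reviewer
description: Read-only review of staged changes; delegate for quick style checks.
model: haiku
permissionMode: plan
disallowedTools: Write, Edit, MultiEdit
skills: branch-security
---

You are a read-only reviewer. Analyze the changes and report findings;
do not modify any files.
```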
### Complete Agent Matrix

| Plugin | Agent | `model` | `permissionMode` | `disallowedTools` | `skills` |
|--------|-------|---------|------------------|-------------------|----------|
| projman | planner | opus | default | — | frontmatter (2) + body text (12) |
| projman | orchestrator | sonnet | acceptEdits | — | frontmatter (2) + body text (10) |
| projman | executor | sonnet | bypassPermissions | — | frontmatter (7) |
| projman | code-reviewer | opus | default | Write, Edit, MultiEdit | frontmatter (4) |
| pr-review | coordinator | sonnet | plan | Write, Edit, MultiEdit | — |
| pr-review | security-reviewer | sonnet | plan | Write, Edit, MultiEdit | — |
| pr-review | performance-analyst | sonnet | plan | Write, Edit, MultiEdit | — |
| pr-review | maintainability-auditor | haiku | plan | Write, Edit, MultiEdit | — |
| pr-review | test-validator | haiku | plan | Write, Edit, MultiEdit | — |
| data-platform | data-advisor | sonnet | default | — | — |
| data-platform | data-analysis | sonnet | plan | Write, Edit, MultiEdit | — |
| data-platform | data-ingestion | haiku | acceptEdits | — | — |
| viz-platform | design-reviewer | sonnet | plan | Write, Edit, MultiEdit | — |
| viz-platform | layout-builder | sonnet | default | — | — |
| viz-platform | component-check | haiku | plan | Write, Edit, MultiEdit | — |
| viz-platform | theme-setup | haiku | acceptEdits | — | — |
| contract-validator | full-validation | sonnet | default | — | — |
| contract-validator | agent-check | haiku | plan | Write, Edit, MultiEdit | — |
| code-sentinel | security-reviewer | sonnet | plan | Write, Edit, MultiEdit | — |
| code-sentinel | refactor-advisor | sonnet | acceptEdits | — | — |
| doc-guardian | doc-analyzer | sonnet | acceptEdits | — | — |
| clarity-assist | clarity-coach | sonnet | default | Write, Edit, MultiEdit | — |
| git-flow | git-assistant | haiku | acceptEdits | — | — |
| claude-config-maintainer | maintainer | sonnet | acceptEdits | — | frontmatter (2) |
| cmdb-assistant | cmdb-assistant | sonnet | default | — | — |
### Design Principles

- `bypassPermissions` is granted to exactly ONE agent (Executor), which has the code-sentinel PreToolUse hook plus the Code Reviewer downstream as safety nets.
- `plan` mode is assigned to all pure analysis agents (pr-review, read-only validators).
- `disallowedTools: Write, Edit, MultiEdit` provides defense-in-depth on agents that should never write files.
- `skills` frontmatter is used for agents with ≤7 skills where guaranteed loading is safety-critical. Agents with 8+ skills use body text `## Skills to Load` for selective loading.
- `hooks` (agent-scoped) is reserved for future use (v6.0+).

Override any field by editing the agent's `.md` file in `plugins/{plugin}/agents/`.
### permissionMode Guide

| Value | Prompts for file ops? | Prompts for Bash? | Prompts for MCP? | Use when |
|-------|-----------------------|-------------------|------------------|----------|
| `default` | Yes | Yes | No (MCP bypasses permissions) | You want full visibility |
| `acceptEdits` | No | Yes | No | Core job is file read/write, Bash visibility useful |
| `dontAsk` | No | No (most) | No | Even Bash prompts are friction |
| `bypassPermissions` | No | No | No | Agent has downstream safety layers |
| `plan` | N/A (read-only) | N/A (read-only) | No | Pure analysis, no modifications |
### disallowedTools Guide

Use `disallowedTools` to remove specific tools from an agent's toolset. This is a blacklist — the agent inherits all tools from the main thread, then the listed tools are removed.

Prefer `disallowedTools` over `tools` (whitelist) because:
- New MCP servers are automatically available without updating every agent.
- There is less configuration to maintain.
- It is easier to audit — you only list what's blocked.

Common patterns:
- `disallowedTools: Write, Edit, MultiEdit` — read-only agent, cannot modify files.
- `disallowedTools: Bash` — no shell access (rare; most agents need at least read-only Bash).
### skills Frontmatter Guide

The `skills` field auto-injects skill file contents into the agent's context window at startup. The agent does NOT need to read the files — they are already present.

**When to use frontmatter `skills`:**
- Agent has ≤7 skills.
- Skills are safety-critical (e.g., `branch-security`, `runaway-detection`).
- You need guaranteed loading — no risk of the agent skipping a skill.

**When to keep body text `## Skills to Load`:**
- Agent has 8+ skills (context window cost too high for full injection).
- Skills are situational — not all needed for every invocation.
- Agent benefits from selective loading based on the specific task.

Skill names in frontmatter are resolved relative to the plugin's `skills/` directory. Use the filename without the `.md` extension.
### Phase-Based Skill Loading (Body Text)

For agents with 8+ skills, use **phase-based loading** in the agent body text. This structures skill reads into logical phases, with explicit instructions to read each skill exactly once.

**Pattern:**

```markdown
## Skill Loading Protocol

**Frontmatter skills (auto-injected, always available — DO NOT re-read these):**
- `skill-a` — description
- `skill-b` — description

**Phase 1 skills — read ONCE at session start:**
- skills/validation-skill.md
- skills/safety-skill.md

**Phase 2 skills — read ONCE when entering main work:**
- skills/workflow-skill.md
- skills/domain-skill.md

**CRITICAL: Read each skill file exactly ONCE. Do NOT re-read skill files between MCP API calls.**
```
**Benefits:**

- Frontmatter skills (always needed) are auto-injected — zero file read cost
- Phase skills are read once at the appropriate time — not re-read per API call
- `batch-execution` skill provides protocol for API-heavy phases
- ~76-83% reduction in skill-related token consumption for typical sprints

**Currently applied to:**

- Planner agent: 2 frontmatter + 12 body text (3 phases)
- Orchestrator agent: 2 frontmatter + 10 body text (2 phases)

---
@@ -426,12 +631,12 @@ Both approaches work. Use `/project-init` when you know the system is already co
### API Validation

When running `/projman setup`, the command:

1. **Detects** organization and repository from git remote URL
2. **Validates** via Gitea API: `GET /api/v1/repos/{org}/{repo}`
3. **Auto-fills** if repository exists and is accessible (no confirmation needed)
4. **Asks for confirmation** only if validation fails (404 or permission error)

This catches typos and permission issues before saving configuration.
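The validation call in step 2 can be sketched in plain shell. The URL and repo values below are hypothetical placeholders; real values come from your Gitea configuration files:

```shell
# Hypothetical values -- in real use these come from ~/.config/claude/gitea.env and .env
GITEA_API_URL="https://gitea.example.com/api/v1"
GITEA_REPO="my-org/my-repo"

# The setup command issues a request of this shape:
echo "GET $GITEA_API_URL/repos/$GITEA_REPO"

# To run the real check yourself (200 = OK, 404 = typo or missing permission):
# curl -s -o /dev/null -w "%{http_code}" \
#   -H "Authorization: token $GITEA_API_TOKEN" "$GITEA_API_URL/repos/$GITEA_REPO"
```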
@@ -439,9 +644,9 @@ This catches typos and permission issues before saving configuration.
When you start a Claude Code session, a hook automatically:

1. Reads `GITEA_REPO` (in `owner/repo` format) from `.env`
2. Compares with current `git remote get-url origin`
3. **Warns** if mismatch detected: "Repository location mismatch. Run `/projman setup --sync` to update."

This helps when you:
- Move a repository to a different organization
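The comparison in step 2 can be sketched as follows. The remote-URL parsing is an assumption for illustration only; the actual hook may differ:

```shell
GITEA_REPO="my-org/my-repo"   # hypothetical value read from .env

# Hypothetical output of: git remote get-url origin
remote_url="ssh://git@gitea.example.com/other-org/my-repo.git"

# Extract owner/repo from the remote URL and compare
remote_repo=$(echo "$remote_url" | sed -E 's#.*[:/]([^/]+/[^/]+)\.git$#\1#')
if [ "$remote_repo" != "$GITEA_REPO" ]; then
  echo "Repository location mismatch. Run /projman setup --sync to update."
fi
```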
@@ -463,7 +668,7 @@ curl -H "Authorization: token $GITEA_API_TOKEN" "$GITEA_API_URL/user"
In Claude Code, after restarting your session:
```
/labels sync
```

If this works, your setup is complete.
@@ -503,9 +708,8 @@ If you get 401, regenerate your token in Gitea.
# Check venv exists
ls /path/to/mcp-servers/gitea/.venv

# If missing, create venv (do NOT delete existing venvs)
cd /path/to/mcp-servers/gitea
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
@@ -518,7 +722,8 @@ deactivate
# Check project .env
cat .env

# Verify GITEA_REPO is in owner/repo format and matches Gitea exactly
# Example: GITEA_REPO=my-org/my-repo
```

---
@@ -536,7 +741,7 @@ cat .env
3. **Never type tokens into AI chat**
   - Always edit config files directly in your editor
   - The `/projman setup` wizard respects this

4. **Rotate tokens periodically**
   - Every 6-12 months
@@ -2,7 +2,7 @@
**Purpose:** Systematic approach to diagnose and fix plugin loading issues.

Last Updated: 2026-02-08

---
@@ -73,25 +73,19 @@ cd $RUNTIME && ./scripts/setup.sh
---

## Step 4: Verify MCP Configuration

Check that `.mcp.json` at the marketplace root is correctly configured:

```bash
RUNTIME=~/.claude/plugins/marketplaces/leo-claude-mktplace

# Check .mcp.json exists and has valid content
cat $RUNTIME/.mcp.json | jq '.mcpServers | keys'

# Should list: gitea, netbox, data-platform, viz-platform, contract-validator
```

---

## Step 5: Test MCP Server Startup
@@ -101,9 +95,9 @@ Manually test if the MCP server can start:
```bash
RUNTIME=~/.claude/plugins/marketplaces/leo-claude-mktplace

# Test Gitea MCP (uses gitea-mcp package from registry)
cd $RUNTIME/mcp-servers/gitea
.venv/bin/python -c "from gitea_mcp.server import main; print('OK')"

# Test NetBox MCP
cd $RUNTIME/mcp-servers/netbox
@@ -128,7 +122,7 @@ cat ~/.config/claude/netbox.env
# Project-level config (in target project)
cat /path/to/project/.env
# Should contain: GITEA_REPO=owner/repo (e.g., my-org/my-repo)
```

---
@@ -165,10 +159,8 @@ echo -e "\n=== Virtual Environments ==="
|
|||||||
[ -f "$RUNTIME/mcp-servers/gitea/.venv/bin/python" ] && echo "Gitea venv: OK" || echo "Gitea venv: MISSING"
|
[ -f "$RUNTIME/mcp-servers/gitea/.venv/bin/python" ] && echo "Gitea venv: OK" || echo "Gitea venv: MISSING"
|
||||||
[ -f "$RUNTIME/mcp-servers/netbox/.venv/bin/python" ] && echo "NetBox venv: OK" || echo "NetBox venv: MISSING"
|
[ -f "$RUNTIME/mcp-servers/netbox/.venv/bin/python" ] && echo "NetBox venv: OK" || echo "NetBox venv: MISSING"
|
||||||
|
|
||||||
echo -e "\n=== Symlinks ==="
|
echo -e "\n=== MCP Configuration ==="
|
||||||
[ -L "$RUNTIME/plugins/projman/mcp-servers/gitea" ] && echo "projman->gitea: OK" || echo "projman->gitea: MISSING"
|
[ -f "$RUNTIME/.mcp.json" ] && echo ".mcp.json: OK" || echo ".mcp.json: MISSING"
|
||||||
[ -L "$RUNTIME/plugins/pr-review/mcp-servers/gitea" ] && echo "pr-review->gitea: OK" || echo "pr-review->gitea: MISSING"
|
|
||||||
[ -L "$RUNTIME/plugins/cmdb-assistant/mcp-servers/netbox" ] && echo "cmdb-assistant->netbox: OK" || echo "cmdb-assistant->netbox: MISSING"
|
|
||||||
|
|
||||||
echo -e "\n=== Config Files ==="
|
echo -e "\n=== Config Files ==="
|
||||||
[ -f ~/.config/claude/gitea.env ] && echo "gitea.env: OK" || echo "gitea.env: MISSING"
|
[ -f ~/.config/claude/gitea.env ] && echo "gitea.env: OK" || echo "gitea.env: MISSING"
|
||||||
@@ -182,10 +174,51 @@ echo -e "\n=== Config Files ==="
| Issue | Symptom | Fix |
|-------|---------|-----|
| Missing venvs | "X MCP servers failed" | `cd ~/.claude/plugins/marketplaces/leo-claude-mktplace && ./scripts/setup.sh` |
| Missing .mcp.json | MCP tools not available | Check `.mcp.json` exists at marketplace root |
| Wrong path edits | Changes don't take effect | Edit installed path or reinstall after source changes |
| Missing credentials | MCP connection errors | Create `~/.config/claude/gitea.env` with API credentials |
| Invalid hook events | Hooks don't fire | Use only valid event names (see Step 7) |
| Gitea issues not closing | Merged to non-default branch | Manually close issues (see below) |
| MCP changes not taking effect | Session caching | Restart Claude Code session (see below) |
### Gitea Auto-Close Behavior

**Issue:** Using `Closes #XX` or `Fixes #XX` in commit/PR messages does NOT auto-close issues when merging to `development`.

**Root Cause:** Gitea only auto-closes issues when merging to the **default branch** (typically `main` or `master`). Merging to `development`, `staging`, or any other branch will NOT trigger auto-close.

**Workaround:**

1. Use the Gitea MCP tool to manually close issues after merging to development:
   ```
   mcp__plugin_projman_gitea__update_issue(issue_number=XX, state="closed")
   ```
2. Or close issues via the Gitea web UI
3. The auto-close keywords will still work when the changes are eventually merged to `main`

**Recommendation:** Include the `Closes #XX` keywords in commits anyway; they'll work when the final merge to `main` happens.
### MCP Session Restart Requirement

**Issue:** Changes to MCP servers, hooks, or plugin configuration don't take effect immediately.

**Root Cause:** Claude Code loads MCP tools and plugin configuration at session start. These are cached in session memory and not reloaded dynamically.

**What requires a session restart:**

- MCP server code changes (Python files in `mcp-servers/`)
- Changes to `.mcp.json` files
- Changes to `hooks/hooks.json`
- Changes to `plugin.json`
- Adding new MCP tools or modifying tool signatures

**What does NOT require a restart:**

- Command/skill markdown files (`.md`): these are read on invocation
- Agent markdown files: read when the agent is invoked

**Correct workflow after plugin changes:**

1. Make changes to source files
2. Run `./scripts/verify-hooks.sh` to validate
3. Inform the user: "Please restart Claude Code for changes to take effect"
4. **Do NOT clear cache mid-session**: see the "Cache Clearing" section

---
@@ -246,8 +279,8 @@ Error: Could not find a suitable TLS CA certificate bundle, invalid path:
Use these commands for automated checking:

- `/cv status` - Marketplace-wide health check (installation, MCP, configuration)
- `/hygiene check` - Project file organization and cleanup check

---
docs/MIGRATION-v9.md (new file, 249 lines)
@@ -0,0 +1,249 @@
# Migration Guide: v8.x → v9.0.0

## Overview

v9.0.0 standardizes all commands to the `/<noun> <action>` sub-command pattern. Every command in the marketplace now follows this convention.

**Breaking change:** All old command names are removed. Update your workflows, scripts, and CLAUDE.md references.

---
## Command Invocation

The `/<noun> <action>` pattern is a **display convention** for user-friendly command invocation. Under the hood, Claude Code resolves commands by filename using hyphens.

### How It Works

| You Type | What Happens | Actual Command Loaded |
|----------|--------------|----------------------|
| `/doc audit` | Dispatch file `doc.md` receives `$ARGUMENTS="audit"` | Routes to `/doc-guardian:doc-audit` (file: `doc-audit.md`) |
| `/sprint plan` | Dispatch file `sprint.md` receives `$ARGUMENTS="plan"` | Routes to `/projman:sprint-plan` (file: `sprint-plan.md`) |
| `/doc-guardian:doc-audit` | Direct invocation (bypasses dispatch) | Loads `doc-audit.md` directly |

### Two Invocation Methods

1. **User-friendly (via dispatch):** `/doc audit` — space-separated, routes through dispatch file
2. **Direct (plugin-prefixed):** `/doc-guardian:doc-audit` — bypasses dispatch, invokes command directly

Both methods work identically. The dispatch file provides `$ARGUMENTS` parsing and a menu interface when invoked without arguments.
### Command Name Mapping

**Pattern:** Spaces in display names become hyphens in filenames.

| Display Name | Filename | Plugin-Prefixed |
|--------------|----------|-----------------|
| `/doc audit` | `doc-audit.md` | `/doc-guardian:doc-audit` |
| `/sprint plan` | `sprint-plan.md` | `/projman:sprint-plan` |
| `/pr review` | `pr-review.md` | `/pr-review:pr-review` |
| `/gitflow commit` | `gitflow-commit.md` | `/git-flow:gitflow-commit` |

If dispatch routing fails, use the direct plugin-prefixed format.
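The naming convention above can be sketched as a small helper. This is purely illustrative; Claude Code performs its own resolution internally:

```shell
# Map a display invocation like '/doc audit' to its command filename
display_to_filename() {
  name=$(printf '%s' "${1#/}" | tr ' ' '-')   # strip leading slash, spaces -> hyphens
  printf '%s.md\n' "$name"
}

display_to_filename "/doc audit"     # doc-audit.md
display_to_filename "/sprint plan"   # sprint-plan.md
```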
---

## Complete Command Mapping

### projman

| Old (v8.x) | New (v9.0.0) | Notes |
|-------------|--------------|-------|
| `/sprint-plan` | `/sprint plan` | |
| `/sprint-start` | `/sprint start` | |
| `/sprint-status` | `/sprint status` | |
| `/sprint-close` | `/sprint close` | |
| `/pm-review` | `/sprint review` | Moved under `/sprint` |
| `/pm-test` | `/sprint test` | Moved under `/sprint` |
| `/pm-setup` | `/projman setup` | Moved under `/projman` |
| `/pm-debug` | **Removed** | Deleted in v8.1.0 — migrated to `debug-mcp` plugin (Decision #11) |
| `/labels-sync` | `/labels sync` | |
| `/suggest-version` | **Removed** | Deleted in v8.1.0 — migrated to `ops-release-manager` plugin (Decision #18) |
| `/proposal-status` | **Removed** | Deleted in v8.1.0 — absorbed into `/project status` (Decision #19) |
| `/rfc <sub>` | `/rfc <sub>` | Unchanged |
| `/project <sub>` | `/project <sub>` | Unchanged |
| `/adr <sub>` | `/adr <sub>` | Unchanged |
### git-flow

| Old (v8.x) | New (v9.0.0) | Notes |
|-------------|--------------|-------|
| `/git-commit` | `/gitflow commit` | |
| `/git-commit-push` | `/gitflow commit --push` | **Consolidated** into flag |
| `/git-commit-merge` | `/gitflow commit --merge` | **Consolidated** into flag |
| `/git-commit-sync` | `/gitflow commit --sync` | **Consolidated** into flag |
| `/branch-start` | `/gitflow branch-start` | |
| `/branch-cleanup` | `/gitflow branch-cleanup` | |
| `/git-status` | `/gitflow status` | |
| `/git-config` | `/gitflow config` | |

**Note:** The three commit variants (`-push`, `-merge`, `-sync`) are now flags on `/gitflow commit`. This reduces 8 commands to 5.
### pr-review

| Old (v8.x) | New (v9.0.0) | Notes |
|-------------|--------------|-------|
| `/pr-review` | `/pr review` | |
| `/pr-summary` | `/pr summary` | |
| `/pr-findings` | `/pr findings` | |
| `/pr-diff` | `/pr diff` | |
| `/pr-setup` | `/pr setup` | |
| `/project-init` | `/pr init` | Renamed |
| `/project-sync` | `/pr sync` | Renamed |
### clarity-assist

| Old (v8.x) | New (v9.0.0) | Notes |
|-------------|--------------|-------|
| `/clarify` | `/clarity clarify` | |
| `/quick-clarify` | `/clarity quick-clarify` | |
### doc-guardian

| Old (v8.x) | New (v9.0.0) | Notes |
|-------------|--------------|-------|
| `/doc-audit` | `/doc audit` | |
| `/doc-sync` | `/doc sync` | |
| `/changelog-gen` | `/doc changelog-gen` | Moved under `/doc` |
| `/doc-coverage` | `/doc coverage` | |
| `/stale-docs` | `/doc stale-docs` | Moved under `/doc` |
### code-sentinel

| Old (v8.x) | New (v9.0.0) | Notes |
|-------------|--------------|-------|
| `/security-scan` | `/sentinel scan` | |
| `/refactor` | `/sentinel refactor` | |
| `/refactor-dry` | `/sentinel refactor-dry` | |
### claude-config-maintainer

| Old (v8.x) | New (v9.0.0) | Notes |
|-------------|--------------|-------|
| `/config-analyze` (or `/analyze`) | `/claude-config analyze` | |
| `/config-optimize` (or `/optimize`) | `/claude-config optimize` | |
| `/config-init` (or `/init`) | `/claude-config init` | |
| `/config-diff` | `/claude-config diff` | |
| `/config-lint` | `/claude-config lint` | |
| `/config-audit-settings` | `/claude-config audit-settings` | |
| `/config-optimize-settings` | `/claude-config optimize-settings` | |
| `/config-permissions-map` | `/claude-config permissions-map` | |
### contract-validator

| Old (v8.x) | New (v9.0.0) | Notes |
|-------------|--------------|-------|
| `/validate-contracts` | `/cv validate` | |
| `/check-agent` | `/cv check-agent` | |
| `/list-interfaces` | `/cv list-interfaces` | |
| `/dependency-graph` | `/cv dependency-graph` | |
| `/cv-setup` | `/cv setup` | |
| `/cv status` | `/cv status` | Unchanged |
### cmdb-assistant

| Old (v8.x) | New (v9.0.0) | Notes |
|-------------|--------------|-------|
| `/cmdb-setup` | `/cmdb setup` | |
| `/cmdb-search` | `/cmdb search` | |
| `/cmdb-device` | `/cmdb device` | |
| `/cmdb-ip` | `/cmdb ip` | |
| `/cmdb-site` | `/cmdb site` | |
| `/cmdb-audit` | `/cmdb audit` | |
| `/cmdb-register` | `/cmdb register` | |
| `/cmdb-sync` | `/cmdb sync` | |
| `/cmdb-topology` | `/cmdb topology` | |
| `/change-audit` | `/cmdb change-audit` | Moved under `/cmdb` |
| `/ip-conflicts` | `/cmdb ip-conflicts` | Moved under `/cmdb` |
### data-platform

| Old (v8.x) | New (v9.0.0) | Notes |
|-------------|--------------|-------|
| `/data-ingest` | `/data ingest` | |
| `/data-profile` | `/data profile` | |
| `/data-schema` | `/data schema` | |
| `/data-explain` | `/data explain` | |
| `/data-lineage` | `/data lineage` | |
| `/data-run` | `/data run` | |
| `/lineage-viz` | `/data lineage-viz` | Moved under `/data` |
| `/dbt-test` | `/data dbt-test` | Moved under `/data` |
| `/data-quality` | `/data quality` | |
| `/data-review` | `/data review` | |
| `/data-gate` | `/data gate` | |
| `/data-setup` | `/data setup` | |
### viz-platform

| Old (v8.x) | New (v9.0.0) | Notes |
|-------------|--------------|-------|
| `/viz-setup` | `/viz setup` | |
| `/viz-chart` | `/viz chart` | |
| `/viz-chart-export` | `/viz chart-export` | |
| `/viz-dashboard` | `/viz dashboard` | |
| `/viz-theme` | `/viz theme` | |
| `/viz-theme-new` | `/viz theme-new` | |
| `/viz-theme-css` | `/viz theme-css` | |
| `/viz-component` | `/viz component` | |
| `/accessibility-check` | `/viz accessibility-check` | Moved under `/viz` |
| `/viz-breakpoints` | `/viz breakpoints` | |
| `/design-review` | `/viz design-review` | Moved under `/viz` |
| `/design-gate` | `/viz design-gate` | Moved under `/viz` |
### project-hygiene

No changes — already used `/<noun> <action>` pattern.

| Command | Status |
|---------|--------|
| `/hygiene check` | Unchanged |

---
## Verifying Plugin Installation (v9.0.0)

Test commands use the new format:

| Plugin | Test Command |
|--------|--------------|
| git-flow | `/git-flow:gitflow-status` |
| projman | `/projman:sprint-status` |
| pr-review | `/pr-review:pr-summary` |
| clarity-assist | `/clarity-assist:clarity-clarify` |
| doc-guardian | `/doc-guardian:doc-audit` |
| code-sentinel | `/code-sentinel:sentinel-scan` |
| claude-config-maintainer | `/claude-config-maintainer:claude-config-analyze` |
| cmdb-assistant | `/cmdb-assistant:cmdb-search` |
| data-platform | `/data-platform:data-ingest` |
| viz-platform | `/viz-platform:viz-chart` |
| contract-validator | `/contract-validator:cv-validate` |

---
## CLAUDE.md Updates

If your project's CLAUDE.md references old command names, update them:

**Find old references:**

```bash
grep -rn '/sprint-plan\|/pm-setup\|/git-commit\|/pr-review\|/security-scan\|/config-analyze\|/validate-contracts\|/cmdb-search\|/data-ingest\|/viz-chart\b\|/clarify\b\|/doc-audit' CLAUDE.md
```

**Key patterns to search and replace:**

- `/sprint-plan` → `/sprint plan`
- `/pm-setup` → `/projman setup`
- `/pm-review` → `/sprint review`
- `/git-commit` → `/gitflow commit`
- `/pr-review` → `/pr review`
- `/security-scan` → `/sentinel scan`
- `/refactor` → `/sentinel refactor`
- `/config-analyze` → `/claude-config analyze`
- `/validate-contracts` → `/cv validate`
- `/clarify` → `/clarity clarify`
- `/doc-audit` → `/doc audit`
- `/cmdb-search` → `/cmdb search`
- `/data-ingest` → `/data ingest`
- `/viz-chart` → `/viz chart`
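One way to apply a single mapping mechanically is with `sed`. This is a sketch on a scratch copy; review the resulting diff before committing, since short patterns like `/clarify` can also match inside longer command names:

```shell
# Demo on a scratch file; in practice run against your project's CLAUDE.md
printf 'Run /sprint-plan before coding\n' > /tmp/claude-md-demo

# Replace one old command name (uses '#' as the sed delimiter; .bak keeps a backup)
sed -i.bak 's#/sprint-plan#/sprint plan#g' /tmp/claude-md-demo

cat /tmp/claude-md-demo
```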
---

*Last Updated: 2026-02-06*
@@ -38,7 +38,7 @@ cd ~/.claude/plugins/marketplaces/leo-claude-mktplace && ./scripts/setup.sh
## What the Post-Update Script Does

1. **Updates Python dependencies** for all 5 MCP servers (gitea, netbox, data-platform, viz-platform, contract-validator)
2. **Shows recent changelog entries** so you know what changed
3. **Validates your configuration** is still compatible
@@ -46,9 +46,9 @@ cd ~/.claude/plugins/marketplaces/leo-claude-mktplace && ./scripts/setup.sh
## After Updating: Re-run Setup if Needed

### When to Re-run Setup

You typically **don't need** to re-run setup after updates. However, re-run your plugin's setup command (e.g., `/projman setup`, `/pr setup`, `/cmdb setup`) if:

- Changelog mentions **new required environment variables**
- Changelog mentions **breaking changes** to configuration
@@ -59,7 +59,7 @@ You typically **don't need** to re-run setup after updates. However, re-run if:
If an update requires new project-level configuration:

```
/pr init
```

This will detect existing settings and only add what's missing.
@@ -97,9 +97,9 @@ When updating, review if changes affect the setup workflow:
1. **Check for setup command changes:**
   ```bash
   git diff HEAD~1 plugins/*/commands/*-setup.md
   git diff HEAD~1 plugins/*/commands/pr-init.md
   git diff HEAD~1 plugins/*/commands/pr-sync.md
   ```

2. **Check for hook changes:**
@@ -114,7 +114,7 @@ When updating, review if changes affect the setup workflow:
**If setup commands changed:**
- Review what's new (new validation steps, new prompts, etc.)
- Consider re-running your plugin's setup command or `/pr init` to benefit from improvements
- Existing configurations remain valid unless changelog notes breaking changes

**If hooks changed:**
@@ -123,7 +123,7 @@ When updating, review if changes affect the setup workflow:
**If configuration structure changed:**
- Check if new variables are required
- Run `/pr sync` if repository detection logic improved

---
@@ -132,10 +132,8 @@ When updating, review if changes affect the setup workflow:
### Dependencies fail to install

```bash
# Install missing dependencies (do NOT delete .venv)
cd mcp-servers/gitea
source .venv/bin/activate
pip install -r requirements.txt
deactivate
@@ -144,7 +142,7 @@ deactivate
### Configuration no longer works

1. Check CHANGELOG.md for breaking changes
2. Run your plugin's setup command (e.g., `/projman setup`) to re-validate and fix configuration
3. Compare your config files with documentation in `docs/CONFIGURATION.md`

### MCP server won't start after update
@@ -159,12 +157,13 @@ cd ~/.claude/plugins/marketplaces/leo-claude-mktplace && ./scripts/setup.sh
If that doesn't work:

1. Check Python version: `python3 --version` (requires 3.10+)
2. Verify venvs exist in INSTALLED location:

```bash
for server in gitea netbox data-platform viz-platform contract-validator; do
  ls ~/.claude/plugins/marketplaces/leo-claude-mktplace/mcp-servers/$server/.venv && echo "$server: OK" || echo "$server: MISSING"
done
```

3. If missing, run setup.sh as shown above.
4. Restart Claude Code session
5. Check logs for specific errors
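If any venv is missing, the recovery commands from the earlier section can be applied across every bundled server in one pass. A minimal sketch (the server list and steps come from this guide; the function name and `--quiet` flag are ours):

```shell
rebuild_missing_venvs() {
  # $1: directory containing one subdirectory per MCP server
  local base="$1" server venv
  for server in gitea netbox data-platform viz-platform contract-validator; do
    venv="$base/$server/.venv"
    if [ -d "$venv" ]; then
      echo "$server: OK"
    else
      # Same steps as the manual recovery: create venv, install requirements
      python3 -m venv "$venv" &&
        "$venv/bin/pip" install --quiet -r "$base/$server/requirements.txt" &&
        echo "$server: rebuilt"
    fi
  done
}

# Typical invocation against the installed marketplace:
# rebuild_missing_venvs ~/.claude/plugins/marketplaces/leo-claude-mktplace/mcp-servers
```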
@@ -1,271 +0,0 @@
# Agent Workflow - Draw.io Specification

**Target File:** `docs/architecture/agent-workflow.drawio`

**Purpose:** Shows when the Planner, Orchestrator, Executor, and Code Reviewer agents trigger during the sprint lifecycle.

**Diagram Type:** Swimlane / Sequence Diagram

---

## SWIMLANES

| ID | Label | Color | Position |
|----|-------|-------|----------|
| user-lane | User | #E3F2FD | 1 (leftmost) |
| planner-lane | Planner Agent | #4A90D9 | 2 |
| orchestrator-lane | Orchestrator Agent | #7CB342 | 3 |
| executor-lane | Executor Agent | #FF9800 | 4 |
| reviewer-lane | Code Reviewer Agent | #9C27B0 | 5 |
| gitea-lane | Gitea (Issues + Wiki) | #9E9E9E | 6 (rightmost) |

---

## PHASE 1: SPRINT PLANNING

### Nodes

| ID | Label | Type | Lane | Sequence |
|----|-------|------|------|----------|
| p1-start | /sprint-plan | rounded-rect | user-lane | 1 |
| p1-activate | Planner Activates | rectangle | planner-lane | 2 |
| p1-search-lessons | Search Lessons Learned | rectangle | planner-lane | 3 |
| p1-gitea-wiki-query | Query Past Lessons (Wiki) | rectangle | gitea-lane | 4 |
| p1-return-lessons | Return Relevant Lessons | rectangle | planner-lane | 5 |
| p1-clarify | Ask Clarifying Questions | diamond | planner-lane | 6 |
| p1-user-answers | Provide Answers | rectangle | user-lane | 7 |
| p1-create-issues | Create Issues with Labels | rectangle | planner-lane | 8 |
| p1-gitea-create | Store Issues | rectangle | gitea-lane | 9 |
| p1-plan-complete | Planning Complete | rounded-rect | planner-lane | 10 |

### Edges

| From | To | Label | Style |
|------|----|-------|-------|
| p1-start | p1-activate | invokes | solid |
| p1-activate | p1-search-lessons | | solid |
| p1-search-lessons | p1-gitea-wiki-query | REST API (search_lessons) | solid |
| p1-gitea-wiki-query | p1-return-lessons | lessons data | dashed |
| p1-return-lessons | p1-clarify | | solid |
| p1-clarify | p1-user-answers | questions | solid |
| p1-user-answers | p1-clarify | answers | dashed |
| p1-clarify | p1-create-issues | | solid |
| p1-create-issues | p1-gitea-create | REST API | solid |
| p1-gitea-create | p1-plan-complete | confirm | dashed |

---

## PHASE 2: SPRINT EXECUTION

### Nodes

| ID | Label | Type | Lane | Sequence |
|----|-------|------|------|----------|
| p2-start | /sprint-start | rounded-rect | user-lane | 11 |
| p2-orch-activate | Orchestrator Activates | rectangle | orchestrator-lane | 12 |
| p2-fetch-issues | Fetch Sprint Issues | rectangle | orchestrator-lane | 13 |
| p2-gitea-list | List Open Issues | rectangle | gitea-lane | 14 |
| p2-sequence | Sequence Work (Dependencies) | rectangle | orchestrator-lane | 15 |
| p2-dispatch | Dispatch Task | rectangle | orchestrator-lane | 16 |
| p2-exec-activate | Executor Activates | rectangle | executor-lane | 17 |
| p2-implement | Implement Task | rectangle | executor-lane | 18 |
| p2-update-status | Update Issue Status | rectangle | executor-lane | 19 |
| p2-gitea-update | Update Issue | rectangle | gitea-lane | 20 |
| p2-report | Report Completion | rectangle | executor-lane | 21 |
| p2-loop | More Tasks? | diamond | orchestrator-lane | 22 |
| p2-exec-complete | Execution Complete | rounded-rect | orchestrator-lane | 23 |

### Edges

| From | To | Label | Style |
|------|----|-------|-------|
| p2-start | p2-orch-activate | invokes | solid |
| p2-orch-activate | p2-fetch-issues | | solid |
| p2-fetch-issues | p2-gitea-list | REST API | solid |
| p2-gitea-list | p2-sequence | issues data | dashed |
| p2-sequence | p2-dispatch | parallel batching | solid |
| p2-dispatch | p2-exec-activate | execution prompt | solid |
| p2-exec-activate | p2-implement | | solid |
| p2-implement | p2-update-status | | solid |
| p2-update-status | p2-gitea-update | REST API | solid |
| p2-gitea-update | p2-report | confirm | dashed |
| p2-report | p2-loop | | solid |
| p2-loop | p2-dispatch | yes | solid |
| p2-loop | p2-exec-complete | no | solid |

---

## PHASE 2.5: CODE REVIEW (Pre-Close)

### Nodes

| ID | Label | Type | Lane | Sequence |
|----|-------|------|------|----------|
| p25-start | /review | rounded-rect | user-lane | 24 |
| p25-reviewer-activate | Code Reviewer Activates | rectangle | reviewer-lane | 25 |
| p25-scan-changes | Scan Recent Changes | rectangle | reviewer-lane | 26 |
| p25-check-quality | Check Code Quality | rectangle | reviewer-lane | 27 |
| p25-security-scan | Security Scan | rectangle | reviewer-lane | 28 |
| p25-report | Generate Review Report | rectangle | reviewer-lane | 29 |
| p25-complete | Review Complete | rounded-rect | reviewer-lane | 30 |

### Edges

| From | To | Label | Style |
|------|----|-------|-------|
| p25-start | p25-reviewer-activate | invokes | solid |
| p25-reviewer-activate | p25-scan-changes | | solid |
| p25-scan-changes | p25-check-quality | | solid |
| p25-check-quality | p25-security-scan | | solid |
| p25-security-scan | p25-report | | solid |
| p25-report | p25-complete | | solid |

---

## PHASE 3: SPRINT CLOSE

### Nodes

| ID | Label | Type | Lane | Sequence |
|----|-------|------|------|----------|
| p3-start | /sprint-close | rounded-rect | user-lane | 31 |
| p3-orch-activate | Orchestrator Activates | rectangle | orchestrator-lane | 32 |
| p3-review | Review Sprint | rectangle | orchestrator-lane | 33 |
| p3-gitea-status | Get Final Status | rectangle | gitea-lane | 34 |
| p3-capture | Capture Lessons Learned | rectangle | orchestrator-lane | 35 |
| p3-user-input | Confirm Lessons | diamond | user-lane | 36 |
| p3-create-wiki | Create Wiki Pages | rectangle | orchestrator-lane | 37 |
| p3-gitea-wiki-create | Store Lessons (Wiki) | rectangle | gitea-lane | 38 |
| p3-close-issues | Close Issues | rectangle | orchestrator-lane | 39 |
| p3-gitea-close | Mark Closed | rectangle | gitea-lane | 40 |
| p3-complete | Sprint Closed | rounded-rect | orchestrator-lane | 41 |

### Edges

| From | To | Label | Style |
|------|----|-------|-------|
| p3-start | p3-orch-activate | invokes | solid |
| p3-orch-activate | p3-review | | solid |
| p3-review | p3-gitea-status | REST API | solid |
| p3-gitea-status | p3-capture | status data | dashed |
| p3-capture | p3-user-input | proposed lessons | solid |
| p3-user-input | p3-create-wiki | confirmed | solid |
| p3-create-wiki | p3-gitea-wiki-create | REST API (create_lesson) | solid |
| p3-gitea-wiki-create | p3-close-issues | confirm | dashed |
| p3-close-issues | p3-gitea-close | REST API | solid |
| p3-gitea-close | p3-complete | confirm | dashed |

---

## LAYOUT NOTES

```
+--------+------------+---------------+------------+----------+------------------+
| User   | Planner    | Orchestrator  | Executor   | Reviewer | Gitea            |
|        |            |               |            |          | (Issues + Wiki)  |
+--------+------------+---------------+------------+----------+------------------+
|        |            |               |            |          |                  |
| PHASE 1: SPRINT PLANNING                                                       |
|--------------------------------------------------------------------------------|
| O      |            |               |            |          |                  |
| |      |            |               |            |          |                  |
| +---->| O           |               |            |          |                  |
|        | |          |               |            |          |                  |
|        | +----------|---------------|------------|--------->| O (Wiki Query)   |
|        | |<---------|---------------|------------|----------+ |                |
|        | |          |               |            |          |                  |
|        | O<>        |               |            |          |                  |
| O<--->+ |           |               |            |          |                  |
|        | |          |               |            |          |                  |
|        | +----------|---------------|------------|--------->| O (Issues)       |
|        | O          |               |            |          |                  |
|        |            |               |            |          |                  |
|--------------------------------------------------------------------------------|
| PHASE 2: SPRINT EXECUTION                                                      |
|--------------------------------------------------------------------------------|
| O      |            |               |            |          |                  |
| |      |            |               |            |          |                  |
| +-----|----------->| O             |            |          |                  |
|        |            | |             |            |          |                  |
|        |            | +-------------|------------|--------->| O (Issues)       |
|        |            | |<------------|------------|----------+ |                |
|        |            | |             |            |          |                  |
|        |            | +------------>| O          |          |                  |
|        |            |               | |          |          |                  |
|        |            |               | +----------|--------->| O (Issues)       |
|        |            |               | |<---------|----------+ |                |
|        |            | O<------------+ |          |          |                  |
|        |            | |             |            |          |                  |
|        |            | O (loop)      |            |          |                  |
|        |            |               |            |          |                  |
|--------------------------------------------------------------------------------|
| PHASE 2.5: CODE REVIEW                                                         |
|--------------------------------------------------------------------------------|
| O      |            |               |            |          |                  |
| |      |            |               |            |          |                  |
| +-----|------------|---------------|----------->| O        |                  |
|        |            |               |            | |        |                  |
|        |            |               |            | O->O->O  |                  |
|        |            |               |            | |        |                  |
|        |            |               |            | O        |                  |
|        |            |               |            |          |                  |
|--------------------------------------------------------------------------------|
| PHASE 3: SPRINT CLOSE                                                          |
|--------------------------------------------------------------------------------|
| O      |            |               |            |          |                  |
| |      |            |               |            |          |                  |
| +-----|----------->| O             |            |          |                  |
|        |            | +-------------|------------|--------->| O (Issues)       |
|        |            | |<------------|------------|----------+ |                |
|        |            | |             |            |          |                  |
| O<----|-----------<+ |             |            |          |                  |
| +-----|----------->| |             |            |          |                  |
|        |            | +-------------|------------|--------->| O (Wiki Create)  |
|        |            | |<------------|------------|----------+ |                |
|        |            | +-------------|------------|--------->| O (Issues Close) |
|        |            | O             |            |          |                  |
+--------+------------+---------------+------------+----------+------------------+
```

---

## COLOR LEGEND

| Color | Hex | Meaning |
|-------|-----|---------|
| Light Blue | #E3F2FD | User actions |
| Blue | #4A90D9 | Planner Agent |
| Green | #7CB342 | Orchestrator Agent |
| Orange | #FF9800 | Executor Agent |
| Purple | #9C27B0 | Code Reviewer Agent |
| Gray | #9E9E9E | External Services (Gitea) |

---

## SHAPE LEGEND

| Shape | Meaning |
|-------|---------|
| Rounded Rectangle | Start/End points (commands) |
| Rectangle | Process/Action |
| Diamond | Decision point |
| Cylinder | Data store (in component map) |

---

## ARROW LEGEND

| Style | Meaning |
|-------|---------|
| Solid | Action/Request |
| Dashed | Response/Data return |

---

## ARCHITECTURE NOTES

- **Gitea provides BOTH issue tracking AND wiki** (no separate wiki service)
- All wiki operations use the Gitea REST API via MCP tools
- Lessons learned are stored in the Gitea Wiki under `lessons-learned/sprints/`
- MCP tools: `search_lessons`, `create_lesson`, `list_wiki_pages`, `get_wiki_page`
- Four-agent model: Planner, Orchestrator, Executor, Code Reviewer
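The node tables above translate mechanically into draw.io's mxGraph XML. As an illustrative sketch only (not part of the spec: the style strings are plausible draw.io styles and `node_to_mxcell` is our own name, but the IDs, labels, and colors come from the Phase 1 table):

```python
import xml.etree.ElementTree as ET

def node_to_mxcell(node_id: str, label: str, shape: str, fill: str, lane: str) -> ET.Element:
    """Render one row of a Nodes table as a draw.io mxCell vertex."""
    style = {
        "rounded-rect": "rounded=1;whiteSpace=wrap;",
        "rectangle": "rounded=0;whiteSpace=wrap;",
        "diamond": "rhombus;whiteSpace=wrap;",
    }[shape] + f"fillColor={fill};"
    cell = ET.Element("mxCell", id=node_id, value=label, style=style,
                      vertex="1", parent=lane)
    geom = ET.SubElement(cell, "mxGeometry", width="160", height="40")
    geom.set("as", "geometry")  # "as" is a Python keyword, so set it explicitly
    return cell

# Example: the p1-start node from the Phase 1 table.
cell = node_to_mxcell("p1-start", "/sprint-plan", "rounded-rect", "#E3F2FD", "user-lane")
print(ET.tostring(cell, encoding="unicode"))
```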
@@ -1,139 +0,0 @@
# Component Map - Draw.io Specification

**Target File:** `docs/architecture/component-map.drawio`

**Purpose:** Shows all plugins, MCP servers, hooks, and their relationships.

---

## NODES

### Plugins (Blue - #4A90D9)

| ID | Label | Type | Color | Position |
|----|-------|------|-------|----------|
| projman | projman | rectangle | #4A90D9 | top-center |
| projman-pmo | projman-pmo (planned) | rectangle | #4A90D9 | top-right |
| project-hygiene | project-hygiene | rectangle | #4A90D9 | top-left |
| claude-config | claude-config-maintainer | rectangle | #4A90D9 | bottom-left |
| cmdb-assistant | cmdb-assistant | rectangle | #4A90D9 | bottom-right |

### MCP Servers (Green - #7CB342)

MCP servers are **bundled inside each plugin** that needs them.

| ID | Label | Type | Color | Position | Bundled In |
|----|-------|------|-------|----------|------------|
| gitea-mcp | Gitea MCP Server | rectangle | #7CB342 | middle-left | projman |
| netbox-mcp | NetBox MCP Server | rectangle | #7CB342 | middle-right | cmdb-assistant |

### External Systems (Gray - #9E9E9E)

| ID | Label | Type | Color | Position |
|----|-------|------|-------|----------|
| gitea-instance | Gitea\n(Issues + Wiki) | cylinder | #9E9E9E | bottom-left |
| netbox-instance | NetBox | cylinder | #9E9E9E | bottom-right |

### Configuration (Orange - #FF9800)

| ID | Label | Type | Color | Position |
|----|-------|------|-------|----------|
| system-config | System Config\n~/.config/claude/ | rectangle | #FF9800 | far-left |
| project-config | Project Config\n.env | rectangle | #FF9800 | far-right |

---

## EDGES

### Plugin to MCP Server Connections

| From | To | Label | Style | Arrow |
|------|----|-------|-------|-------|
| projman | gitea-mcp | bundled | solid | bidirectional |
| cmdb-assistant | netbox-mcp | bundled | solid | bidirectional |

### Plugin Dependencies

| From | To | Label | Style | Arrow |
|------|----|-------|-------|-------|
| projman-pmo | projman | depends on | dashed | forward |

### MCP Server to External System Connections

| From | To | Label | Style | Arrow |
|------|----|-------|-------|-------|
| gitea-mcp | gitea-instance | REST API | solid | forward |
| netbox-mcp | netbox-instance | REST API | solid | forward |

### Configuration Connections

| From | To | Label | Style | Arrow |
|------|----|-------|-------|-------|
| system-config | gitea-mcp | credentials | dashed | forward |
| system-config | netbox-mcp | credentials | dashed | forward |
| project-config | gitea-mcp | repo context | dashed | forward |
| project-config | netbox-mcp | site context | dashed | forward |
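The configuration edges above split machine-level credentials from per-project context. A purely hypothetical illustration of the two files (every key name here is invented for the sketch; the authoritative keys live in `docs/CONFIGURATION.md`):

```
# ~/.config/claude/ (system config): credentials shared across projects
GITEA_API_TOKEN=<token>
NETBOX_API_TOKEN=<token>

# .env (project config): per-repo / per-site context
GITEA_REPO=<owner>/<repo>
NETBOX_SITE=<site-slug>
```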
---

## GROUPS

| ID | Label | Contains | Style |
|----|-------|----------|-------|
| plugins-group | Plugins | projman, projman-pmo, project-hygiene, claude-config, cmdb-assistant | light blue border |
| external-group | External Services | gitea-instance, netbox-instance | light gray border |
| config-group | Configuration | system-config, project-config | light orange border |

---

## LAYOUT NOTES

```
+------------------------------------------------------------------+
|                          PLUGINS GROUP                           |
| +----------------+  +----------------+  +-------------------+   |
| | project-       |  | projman        |  | projman-pmo       |   |
| | hygiene        |  | [gitea-mcp]    |  | (planned)         |   |
| +----------------+  +-------+--------+  +-------------------+   |
|                             |                                    |
| +----------------+         +-------------------+                 |
| | claude-config  |         | cmdb-assistant    |                 |
| | -maintainer    |         | [netbox-mcp]      |                 |
| +----------------+         +--------+----------+                 |
+------------------------------------------------------------------+
                             |
                             v
+------------------------------------------------------------------+
|                      EXTERNAL SERVICES GROUP                     |
| +-------------------+       +-------------------+                |
| | Gitea             |       | NetBox            |                |
| | (Issues + Wiki)   |       |                   |                |
| +-------------------+       +-------------------+                |
+------------------------------------------------------------------+

CONFIG GROUP (left side):          CONFIG GROUP (right side):
+-------------------+              +-------------------+
| System Config     |              | Project Config    |
| ~/.config/claude/ |              | .env              |
+-------------------+              +-------------------+
```

---

## COLOR LEGEND

| Color | Hex | Meaning |
|-------|-----|---------|
| Blue | #4A90D9 | Plugins |
| Green | #7CB342 | MCP Servers (bundled in plugins) |
| Gray | #9E9E9E | External Systems |
| Orange | #FF9800 | Configuration |

---

## ARCHITECTURE NOTES

- MCP servers are **bundled inside plugins** (not shared at root)
- Gitea provides both issue tracking AND wiki (lessons learned)
- No separate Wiki.js - all wiki functionality uses Gitea Wiki
- Each plugin is self-contained for Claude Code caching
20
mcp-servers/contract-validator/.doc-guardian-queue
Normal file
@@ -0,0 +1,20 @@
2026-01-26T14:36:42 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T14:37:38 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T14:37:48 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T14:38:05 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T14:38:55 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T14:39:35 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T14:40:19 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T15:02:30 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/tests/test_parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T15:02:37 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/tests/test_parse_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T15:03:41 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/tests/test_report_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-02-02T10:56:19 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/mcp_server/validation_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-02-02T10:57:49 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/contract-validator/tests/test_validation_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-02-02T10:58:22 | skills | /home/lmiranda/claude-plugins-work/plugins/contract-validator/skills/mcp-tools-reference.md | README.md
2026-02-02T10:58:38 | skills | /home/lmiranda/claude-plugins-work/plugins/contract-validator/skills/validation-rules.md | README.md
2026-02-02T10:59:13 | .claude-plugin | /home/lmiranda/claude-plugins-work/.claude-plugin/marketplace.json | CLAUDE.md .claude-plugin/marketplace.json
2026-02-02T13:55:33 | skills | /home/lmiranda/claude-plugins-work/plugins/projman/skills/visual-output.md | README.md
2026-02-02T13:55:41 | agents | /home/lmiranda/claude-plugins-work/plugins/projman/agents/planner.md | README.md CLAUDE.md
2026-02-02T13:55:55 | agents | /home/lmiranda/claude-plugins-work/plugins/projman/agents/orchestrator.md | README.md CLAUDE.md
2026-02-02T13:56:14 | agents | /home/lmiranda/claude-plugins-work/plugins/projman/agents/executor.md | README.md CLAUDE.md
2026-02-02T13:56:34 | agents | /home/lmiranda/claude-plugins-work/plugins/projman/agents/code-reviewer.md | README.md CLAUDE.md
3
mcp-servers/contract-validator/mcp_server/__init__.py
Normal file
@@ -0,0 +1,3 @@
"""Contract Validator MCP Server - Cross-plugin compatibility validation."""

__version__ = "1.0.0"
415
mcp-servers/contract-validator/mcp_server/parse_tools.py
Normal file
@@ -0,0 +1,415 @@
"""
Parse tools for extracting interfaces from plugin documentation.

Provides structured extraction of:
- Plugin interfaces from README.md (commands, agents, tools)
- Agent definitions from CLAUDE.md (tool sequences, workflows)
"""
import re
import os
from pathlib import Path
from typing import Optional
from pydantic import BaseModel


class ToolInfo(BaseModel):
    """Information about a single tool"""
    name: str
    category: Optional[str] = None
    description: Optional[str] = None


class CommandInfo(BaseModel):
    """Information about a plugin command"""
    name: str
    description: Optional[str] = None


class AgentInfo(BaseModel):
    """Information about a plugin agent"""
    name: str
    description: Optional[str] = None
    tools: list[str] = []


class PluginInterface(BaseModel):
    """Structured plugin interface extracted from README"""
    plugin_name: str
    description: Optional[str] = None
    commands: list[CommandInfo] = []
    agents: list[AgentInfo] = []
    tools: list[ToolInfo] = []
    tool_categories: dict[str, list[str]] = {}
    features: list[str] = []


class ClaudeMdAgent(BaseModel):
    """Agent definition extracted from CLAUDE.md"""
    name: str
    personality: Optional[str] = None
    responsibilities: list[str] = []
    tool_refs: list[str] = []
    workflow_steps: list[str] = []


class ParseTools:
    """Tools for parsing plugin documentation"""

    async def parse_plugin_interface(self, plugin_path: str) -> dict:
        """
        Parse plugin README.md to extract interface declarations.

        Args:
            plugin_path: Path to plugin directory or README.md file

        Returns:
            Structured interface with commands, agents, tools, etc.
        """
        # Resolve path to README
        path = Path(plugin_path)
        if path.is_dir():
            readme_path = path / "README.md"
        else:
            readme_path = path

        if not readme_path.exists():
            return {
                "error": f"README.md not found at {readme_path}",
                "plugin_path": plugin_path
            }

        content = readme_path.read_text()
        plugin_name = self._extract_plugin_name(content, path)

        interface = PluginInterface(
            plugin_name=plugin_name,
            description=self._extract_description(content),
            commands=self._extract_commands(content),
            agents=self._extract_agents_from_readme(content),
            tools=self._extract_tools(content),
            tool_categories=self._extract_tool_categories(content),
            features=self._extract_features(content)
        )

        return interface.model_dump()

    async def parse_claude_md_agents(self, claude_md_path: str) -> dict:
        """
        Parse CLAUDE.md to extract agent definitions and tool sequences.

        Args:
            claude_md_path: Path to CLAUDE.md file

        Returns:
            List of agents with their tool sequences
        """
        path = Path(claude_md_path)

        if not path.exists():
            return {
                "error": f"CLAUDE.md not found at {path}",
                "claude_md_path": claude_md_path
            }

        content = path.read_text()
        agents = self._extract_agents_from_claude_md(content)

        return {
            "file": str(path),
            "agents": [a.model_dump() for a in agents],
            "agent_count": len(agents)
        }

    def _extract_plugin_name(self, content: str, path: Path) -> str:
        """Extract plugin name from content or path"""
        # Try to get from H1 header
        match = re.search(r'^#\s+(.+?)(?:\s+Plugin|\s*$)', content, re.MULTILINE)
        if match:
            name = match.group(1).strip()
            # Handle cases like "# data-platform Plugin"
            name = re.sub(r'\s*Plugin\s*$', '', name, flags=re.IGNORECASE)
            return name

        # Fall back to directory name
        if path.is_dir():
            return path.name
        return path.parent.name

    def _extract_description(self, content: str) -> Optional[str]:
        """Extract plugin description from first paragraph after title"""
        # Get content after H1, before first H2
        match = re.search(r'^#\s+.+?\n\n(.+?)(?=\n##|\n\n##|\Z)', content, re.MULTILINE | re.DOTALL)
        if match:
            desc = match.group(1).strip()
            # Take first paragraph only
            desc = desc.split('\n\n')[0].strip()
            return desc
        return None

    def _extract_commands(self, content: str) -> list[CommandInfo]:
        """Extract commands from Commands section"""
        commands = []

        # Find Commands section
        commands_section = self._extract_section(content, "Commands")
        if not commands_section:
            return commands

        # Parse table format: | Command | Description |
        # Only match actual command names (start with / or alphanumeric)
        table_pattern = r'\|\s*`?(/[a-z][-a-z0-9]*)`?\s*\|\s*([^|]+)\s*\|'
        for match in re.finditer(table_pattern, commands_section):
            cmd_name = match.group(1).strip()
            desc = match.group(2).strip()

            # Skip header row and separators
            if cmd_name.lower() in ('command', 'commands') or cmd_name.startswith('-'):
                continue

            commands.append(CommandInfo(
                name=cmd_name,
                description=desc
            ))

        # Also look for ### `/command-name` format (with backticks)
        cmd_header_pattern = r'^###\s+`(/[a-z][-a-z0-9]*)`\s*\n(.+?)(?=\n###|\n##|\Z)'
        for match in re.finditer(cmd_header_pattern, commands_section, re.MULTILINE | re.DOTALL):
            cmd_name = match.group(1).strip()
            desc_block = match.group(2).strip()
            # Get first line or paragraph as description
            desc = desc_block.split('\n')[0].strip()

            # Don't duplicate if already found in table
            if not any(c.name == cmd_name for c in commands):
                commands.append(CommandInfo(name=cmd_name, description=desc))

        # Also look for ### /command-name format (without backticks)
        cmd_header_pattern2 = r'^###\s+(/[a-z][-a-z0-9]*)\s*\n(.+?)(?=\n###|\n##|\Z)'
        for match in re.finditer(cmd_header_pattern2, commands_section, re.MULTILINE | re.DOTALL):
            cmd_name = match.group(1).strip()
            desc_block = match.group(2).strip()
            # Get first line or paragraph as description
            desc = desc_block.split('\n')[0].strip()

            # Don't duplicate if already found in table
            if not any(c.name == cmd_name for c in commands):
                commands.append(CommandInfo(name=cmd_name, description=desc))

        return commands

    def _extract_agents_from_readme(self, content: str) -> list[AgentInfo]:
        """Extract agents from Agents section in README"""
        agents = []

        # Find Agents section
        agents_section = self._extract_section(content, "Agents")
        if not agents_section:
            return agents

        # Parse table format: | Agent | Description |
        # Only match actual agent names (alphanumeric with dashes/underscores)
        table_pattern = r'\|\s*`?([a-z][-a-z0-9_]*)`?\s*\|\s*([^|]+)\s*\|'
        for match in re.finditer(table_pattern, agents_section):
            agent_name = match.group(1).strip()
            desc = match.group(2).strip()

            # Skip header row and separators
            if agent_name.lower() in ('agent', 'agents') or agent_name.startswith('-'):
                continue

            agents.append(AgentInfo(name=agent_name, description=desc))

        return agents

    def _extract_tools(self, content: str) -> list[ToolInfo]:
        """Extract tool list from Tools Summary or similar section"""
        tools = []

        # Find Tools Summary section
        tools_section = self._extract_section(content, "Tools Summary")
        if not tools_section:
            tools_section = self._extract_section(content, "Tools")
        if not tools_section:
            tools_section = self._extract_section(content, "MCP Server Tools")

        if not tools_section:
            return tools

        # Parse category headers: ### category (N tools)
        category_pattern = r'###\s*(.+?)\s*(?:\((\d+)\s*tools?\))?\s*\n([^#]+)'
        for match in re.finditer(category_pattern, tools_section):
            category = match.group(1).strip()
            tool_list_text = match.group(3).strip()

            # Extract tool names from backtick lists
            tool_names = re.findall(r'`([a-z_]+)`', tool_list_text)
            for name in tool_names:
                tools.append(ToolInfo(name=name, category=category))

        # Also look for inline tool lists without categories
        inline_pattern = r'`([a-z_]+)`'
        all_tool_names = set(t.name for t in tools)
        for match in re.finditer(inline_pattern, tools_section):
            name = match.group(1)
            if name not in all_tool_names:
                tools.append(ToolInfo(name=name))
                all_tool_names.add(name)

        return tools

    def _extract_tool_categories(self, content: str) -> dict[str, list[str]]:
        """Extract tool categories with their tool lists"""
        categories = {}

        tools_section = self._extract_section(content, "Tools Summary")
        if not tools_section:
            tools_section = self._extract_section(content, "Tools")
        if not tools_section:
            return categories

        # Parse category headers: ### category (N tools)
        category_pattern = r'###\s*(.+?)\s*(?:\((\d+)\s*tools?\))?\s*\n([^#]+)'
        for match in re.finditer(category_pattern, tools_section):
|
||||||
|
category = match.group(1).strip()
|
||||||
|
tool_list_text = match.group(3).strip()
|
||||||
|
|
||||||
|
# Extract tool names from backtick lists
|
||||||
|
tool_names = re.findall(r'`([a-z_]+)`', tool_list_text)
|
||||||
|
if tool_names:
|
||||||
|
categories[category] = tool_names
|
||||||
|
|
||||||
|
return categories
|
||||||
|
|
||||||
|
def _extract_features(self, content: str) -> list[str]:
|
||||||
|
"""Extract features from Features section"""
|
||||||
|
features = []
|
||||||
|
|
||||||
|
features_section = self._extract_section(content, "Features")
|
||||||
|
if not features_section:
|
||||||
|
return features
|
||||||
|
|
||||||
|
# Parse bullet points
|
||||||
|
bullet_pattern = r'^[-*]\s+\*\*(.+?)\*\*'
|
||||||
|
for match in re.finditer(bullet_pattern, features_section, re.MULTILINE):
|
||||||
|
features.append(match.group(1).strip())
|
||||||
|
|
||||||
|
return features
|
||||||
|
|
||||||
|
def _extract_section(self, content: str, section_name: str) -> Optional[str]:
|
||||||
|
"""Extract content of a markdown section by header name"""
|
||||||
|
# Match ## Section Name - include all content until next ## (same level or higher)
|
||||||
|
pattern = rf'^##\s+{re.escape(section_name)}(?:\s*\([^)]*\))?\s*\n(.*?)(?=\n##[^#]|\Z)'
|
||||||
|
match = re.search(pattern, content, re.MULTILINE | re.DOTALL | re.IGNORECASE)
|
||||||
|
if match:
|
||||||
|
return match.group(1).strip()
|
||||||
|
|
||||||
|
# Try ### level - include content until next ## or ###
|
||||||
|
pattern = rf'^###\s+{re.escape(section_name)}(?:\s*\([^)]*\))?\s*\n(.*?)(?=\n##|\n###[^#]|\Z)'
|
||||||
|
match = re.search(pattern, content, re.MULTILINE | re.DOTALL | re.IGNORECASE)
|
||||||
|
if match:
|
||||||
|
return match.group(1).strip()
|
||||||
|
|
||||||
|
return None
|
||||||
|
|
||||||
|
def _extract_agents_from_claude_md(self, content: str) -> list[ClaudeMdAgent]:
|
||||||
|
"""Extract agent definitions from CLAUDE.md"""
|
||||||
|
agents = []
|
||||||
|
|
||||||
|
# Look for Four-Agent Model section specifically
|
||||||
|
# Match section headers like "### Four-Agent Model (projman)" or "## Four-Agent Model"
|
||||||
|
agent_model_match = re.search(
|
||||||
|
r'^##[#]?\s+Four-Agent Model.*?\n(.*?)(?=\n##[^#]|\Z)',
|
||||||
|
content, re.MULTILINE | re.DOTALL
|
||||||
|
)
|
||||||
|
agent_model_section = agent_model_match.group(1) if agent_model_match else None
|
||||||
|
|
||||||
|
if agent_model_section:
|
||||||
|
# Parse agent table within this section
|
||||||
|
# | **Planner** | Thoughtful, methodical | Sprint planning, ... |
|
||||||
|
# Match rows where first cell starts with ** (bold) and contains a capitalized word
|
||||||
|
agent_table_pattern = r'\|\s*\*\*([A-Z][a-zA-Z\s]+?)\*\*\s*\|\s*([^|]+)\s*\|\s*([^|]+)\s*\|'
|
||||||
|
|
||||||
|
for match in re.finditer(agent_table_pattern, agent_model_section):
|
||||||
|
agent_name = match.group(1).strip()
|
||||||
|
personality = match.group(2).strip()
|
||||||
|
responsibilities = match.group(3).strip()
|
||||||
|
|
||||||
|
# Skip header rows and separator rows
|
||||||
|
if agent_name.lower() in ('agent', 'agents', '---', '-', ''):
|
||||||
|
continue
|
||||||
|
if 'personality' in personality.lower() or '---' in personality:
|
||||||
|
continue
|
||||||
|
|
||||||
|
# Skip if personality looks like tool names (contains backticks)
|
||||||
|
if '`' in personality:
|
||||||
|
continue
|
||||||
|
|
||||||
|
# Extract tool references from responsibilities
|
||||||
|
tool_refs = re.findall(r'`([a-z_]+)`', responsibilities)
|
||||||
|
|
||||||
|
# Split responsibilities by comma
|
||||||
|
resp_list = [r.strip() for r in responsibilities.split(',')]
|
||||||
|
|
||||||
|
agents.append(ClaudeMdAgent(
|
||||||
|
name=agent_name,
|
||||||
|
personality=personality,
|
||||||
|
responsibilities=resp_list,
|
||||||
|
tool_refs=tool_refs
|
||||||
|
))
|
||||||
|
|
||||||
|
# Also look for agents table in ## Agents section
|
||||||
|
agents_section = self._extract_section(content, "Agents")
|
||||||
|
if agents_section:
|
||||||
|
# Parse table: | Agent | Description |
|
||||||
|
table_pattern = r'\|\s*`?([a-z][-a-z0-9_]+)`?\s*\|\s*([^|]+)\s*\|'
|
||||||
|
for match in re.finditer(table_pattern, agents_section):
|
||||||
|
agent_name = match.group(1).strip()
|
||||||
|
desc = match.group(2).strip()
|
||||||
|
|
||||||
|
# Skip header rows
|
||||||
|
if agent_name.lower() in ('agent', 'agents', '---', '-'):
|
||||||
|
continue
|
||||||
|
|
||||||
|
# Check if agent already exists
|
||||||
|
if not any(a.name.lower() == agent_name.lower() for a in agents):
|
||||||
|
agents.append(ClaudeMdAgent(
|
||||||
|
name=agent_name,
|
||||||
|
responsibilities=[desc] if desc else []
|
||||||
|
))
|
||||||
|
|
||||||
|
# Look for workflow sections to enrich agent data
|
||||||
|
workflow_section = self._extract_section(content, "Workflow")
|
||||||
|
if workflow_section:
|
||||||
|
# Parse numbered steps
|
||||||
|
step_pattern = r'^\d+\.\s+(.+?)$'
|
||||||
|
workflow_steps = re.findall(step_pattern, workflow_section, re.MULTILINE)
|
||||||
|
|
||||||
|
# Associate workflow steps with agents mentioned
|
||||||
|
for agent in agents:
|
||||||
|
for step in workflow_steps:
|
||||||
|
if agent.name.lower() in step.lower():
|
||||||
|
agent.workflow_steps.append(step)
|
||||||
|
# Extract any tool references in the step
|
||||||
|
step_tools = re.findall(r'`([a-z_]+)`', step)
|
||||||
|
agent.tool_refs.extend(t for t in step_tools if t not in agent.tool_refs)
|
||||||
|
|
||||||
|
# Look for agent-specific sections (### Planner Agent)
|
||||||
|
agent_section_pattern = r'^###?\s+([A-Z][a-z]+(?:\s+[A-Z][a-z]+)?)\s+Agent\s*\n(.*?)(?=\n##|\n###|\Z)'
|
||||||
|
for match in re.finditer(agent_section_pattern, content, re.MULTILINE | re.DOTALL):
|
||||||
|
agent_name = match.group(1).strip()
|
||||||
|
section_content = match.group(2).strip()
|
||||||
|
|
||||||
|
# Check if agent already exists
|
||||||
|
existing = next((a for a in agents if a.name.lower() == agent_name.lower()), None)
|
||||||
|
if existing:
|
||||||
|
# Add tool refs from this section
|
||||||
|
tool_refs = re.findall(r'`([a-z_]+)`', section_content)
|
||||||
|
existing.tool_refs.extend(t for t in tool_refs if t not in existing.tool_refs)
|
||||||
|
else:
|
||||||
|
tool_refs = re.findall(r'`([a-z_]+)`', section_content)
|
||||||
|
agents.append(ClaudeMdAgent(
|
||||||
|
name=agent_name,
|
||||||
|
tool_refs=tool_refs
|
||||||
|
))
|
||||||
|
|
||||||
|
return agents
|
||||||
337
mcp-servers/contract-validator/mcp_server/report_tools.py
Normal file
@@ -0,0 +1,337 @@
"""
Report tools for generating compatibility reports and listing issues.

Provides:
- generate_compatibility_report: Full marketplace validation report
- list_issues: Filtered issue listing
"""
import json
from pathlib import Path
from datetime import datetime
from pydantic import BaseModel

from .parse_tools import ParseTools
from .validation_tools import ValidationTools


class ReportSummary(BaseModel):
    """Summary statistics for a report"""
    total_plugins: int = 0
    total_commands: int = 0
    total_agents: int = 0
    total_tools: int = 0
    total_issues: int = 0
    errors: int = 0
    warnings: int = 0
    info: int = 0


class ReportTools:
    """Tools for generating reports and listing issues"""

    def __init__(self):
        self.parse_tools = ParseTools()
        self.validation_tools = ValidationTools()

    async def generate_compatibility_report(
        self,
        marketplace_path: str,
        format: str = "markdown"
    ) -> dict:
        """
        Generate a comprehensive compatibility report for all plugins.

        Args:
            marketplace_path: Path to marketplace root directory
            format: Output format ("markdown" or "json")

        Returns:
            Full compatibility report with all findings
        """
        marketplace = Path(marketplace_path)
        plugins_dir = marketplace / "plugins"

        if not plugins_dir.exists():
            return {
                "error": f"Plugins directory not found at {plugins_dir}",
                "marketplace_path": marketplace_path
            }

        # Discover all plugins
        plugins = []
        for item in plugins_dir.iterdir():
            if item.is_dir() and (item / ".claude-plugin").exists():
                plugins.append(item)

        if not plugins:
            return {
                "error": "No plugins found in marketplace",
                "marketplace_path": marketplace_path
            }

        # Parse all plugin interfaces
        interfaces = {}
        all_issues = []
        summary = ReportSummary(total_plugins=len(plugins))

        for plugin_path in plugins:
            interface = await self.parse_tools.parse_plugin_interface(str(plugin_path))
            if "error" not in interface:
                interfaces[interface["plugin_name"]] = interface
                summary.total_commands += len(interface.get("commands", []))
                summary.total_agents += len(interface.get("agents", []))
                summary.total_tools += len(interface.get("tools", []))

        # Run pairwise compatibility checks
        plugin_names = list(interfaces.keys())
        compatibility_results = []

        for i, name_a in enumerate(plugin_names):
            for name_b in plugin_names[i + 1:]:
                path_a = plugins_dir / self._find_plugin_dir(plugins_dir, name_a)
                path_b = plugins_dir / self._find_plugin_dir(plugins_dir, name_b)

                result = await self.validation_tools.validate_compatibility(
                    str(path_a), str(path_b)
                )

                if "error" not in result:
                    compatibility_results.append(result)
                    all_issues.extend(result.get("issues", []))

        # Parse CLAUDE.md if it exists
        claude_md = marketplace / "CLAUDE.md"
        agents_from_claude = []
        if claude_md.exists():
            agents_result = await self.parse_tools.parse_claude_md_agents(str(claude_md))
            if "error" not in agents_result:
                agents_from_claude = agents_result.get("agents", [])

                # Validate each agent
                for agent in agents_from_claude:
                    agent_result = await self.validation_tools.validate_agent_refs(
                        agent["name"],
                        str(claude_md),
                        [str(p) for p in plugins]
                    )
                    if "error" not in agent_result:
                        all_issues.extend(agent_result.get("issues", []))

        # Count issues by severity
        for issue in all_issues:
            severity = issue.get("severity", "info")
            if isinstance(severity, str):
                severity_str = severity.lower()
            else:
                severity_str = severity.value if hasattr(severity, 'value') else str(severity).lower()

            if "error" in severity_str:
                summary.errors += 1
            elif "warning" in severity_str:
                summary.warnings += 1
            else:
                summary.info += 1

        summary.total_issues = len(all_issues)

        # Generate report
        if format == "json":
            return {
                "generated_at": datetime.now().isoformat(),
                "marketplace_path": marketplace_path,
                "summary": summary.model_dump(),
                "plugins": interfaces,
                "compatibility_checks": compatibility_results,
                "claude_md_agents": agents_from_claude,
                "all_issues": all_issues
            }
        else:
            # Generate markdown report
            report = self._generate_markdown_report(
                marketplace_path,
                summary,
                interfaces,
                compatibility_results,
                agents_from_claude,
                all_issues
            )
            return {
                "generated_at": datetime.now().isoformat(),
                "marketplace_path": marketplace_path,
                "summary": summary.model_dump(),
                "report": report
            }

    def _find_plugin_dir(self, plugins_dir: Path, plugin_name: str) -> str:
        """Find plugin directory by name (handles naming variations)"""
        # Try exact match first
        for item in plugins_dir.iterdir():
            if item.is_dir():
                if item.name.lower() == plugin_name.lower():
                    return item.name
                # Check plugin.json for the declared name
                plugin_json = item / ".claude-plugin" / "plugin.json"
                if plugin_json.exists():
                    try:
                        data = json.loads(plugin_json.read_text())
                        if data.get("name", "").lower() == plugin_name.lower():
                            return item.name
                    except (OSError, json.JSONDecodeError):
                        # Unreadable or malformed plugin.json; try the next candidate
                        pass
        return plugin_name

    def _generate_markdown_report(
        self,
        marketplace_path: str,
        summary: ReportSummary,
        interfaces: dict,
        compatibility_results: list,
        agents: list,
        issues: list
    ) -> str:
        """Generate a markdown-formatted report"""
        lines = [
            "# Contract Validation Report",
            "",
            f"**Generated:** {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
            f"**Marketplace:** `{marketplace_path}`",
            "",
            "## Summary",
            "",
            "| Metric | Count |",
            "|--------|-------|",
            f"| Plugins | {summary.total_plugins} |",
            f"| Commands | {summary.total_commands} |",
            f"| Agents | {summary.total_agents} |",
            f"| Tools | {summary.total_tools} |",
            f"| **Issues** | **{summary.total_issues}** |",
            f"| - Errors | {summary.errors} |",
            f"| - Warnings | {summary.warnings} |",
            f"| - Info | {summary.info} |",
            "",
        ]

        # Plugin details
        lines.extend([
            "## Plugins",
            "",
        ])

        for name, interface in interfaces.items():
            cmds = len(interface.get("commands", []))
            agents_count = len(interface.get("agents", []))
            tools = len(interface.get("tools", []))
            lines.append(f"### {name}")
            lines.append("")
            lines.append(f"- Commands: {cmds}")
            lines.append(f"- Agents: {agents_count}")
            lines.append(f"- Tools: {tools}")
            lines.append("")

        # Compatibility results
        if compatibility_results:
            lines.extend([
                "## Compatibility Checks",
                "",
            ])

            for result in compatibility_results:
                status = "✓" if result.get("compatible", True) else "✗"
                lines.append(f"### {result['plugin_a']} ↔ {result['plugin_b']} {status}")
                lines.append("")

                if result.get("shared_tools"):
                    lines.append(f"- Shared tools: `{', '.join(result['shared_tools'])}`")
                if result.get("issues"):
                    for issue in result["issues"]:
                        sev = issue.get("severity", "info")
                        if hasattr(sev, 'value'):
                            sev = sev.value
                        lines.append(f"- [{sev.upper()}] {issue['message']}")
                lines.append("")

        # Issues section
        if issues:
            lines.extend([
                "## All Issues",
                "",
                "| Severity | Type | Message |",
                "|----------|------|---------|",
            ])

            for issue in issues:
                sev = issue.get("severity", "info")
                itype = issue.get("issue_type", "unknown")
                msg = issue.get("message", "")

                if hasattr(sev, 'value'):
                    sev = sev.value
                if hasattr(itype, 'value'):
                    itype = itype.value

                # Truncate long messages so the table stays readable
                msg_short = msg[:60] + "..." if len(msg) > 60 else msg
                lines.append(f"| {sev} | {itype} | {msg_short} |")

            lines.append("")

        return "\n".join(lines)

    async def list_issues(
        self,
        marketplace_path: str,
        severity: str = "all",
        issue_type: str = "all"
    ) -> dict:
        """
        List validation issues with optional filtering.

        Args:
            marketplace_path: Path to marketplace root directory
            severity: Filter by severity ("error", "warning", "info", "all")
            issue_type: Filter by type ("missing_tool", "interface_mismatch", etc., or "all")

        Returns:
            Filtered list of issues
        """
        # Generate the full report first
        report = await self.generate_compatibility_report(marketplace_path, format="json")

        if "error" in report:
            return report

        all_issues = report.get("all_issues", [])

        # Filter by severity
        if severity != "all":
            filtered = []
            for issue in all_issues:
                issue_sev = issue.get("severity", "info")
                if hasattr(issue_sev, 'value'):
                    issue_sev = issue_sev.value
                if isinstance(issue_sev, str) and severity.lower() in issue_sev.lower():
                    filtered.append(issue)
            all_issues = filtered

        # Filter by type
        if issue_type != "all":
            filtered = []
            for issue in all_issues:
                itype = issue.get("issue_type", "unknown")
                if hasattr(itype, 'value'):
                    itype = itype.value
                if isinstance(itype, str) and issue_type.lower() in itype.lower():
                    filtered.append(issue)
            all_issues = filtered

        return {
            "marketplace_path": marketplace_path,
            "filters": {
                "severity": severity,
                "issue_type": issue_type
            },
            "total_issues": len(all_issues),
            "issues": all_issues
        }
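Note that the severity filter in `list_issues` is a substring match, not equality: `severity="error"` keeps any issue whose severity string contains "error", and enum members are unwrapped via `.value` first. A minimal standalone sketch of that filtering logic, with invented issue dicts:

```python
def filter_issues(issues: list[dict], severity: str = "all") -> list[dict]:
    """Mirror of list_issues' severity filter: enum-aware substring match."""
    if severity == "all":
        return issues
    filtered = []
    for issue in issues:
        sev = issue.get("severity", "info")
        if hasattr(sev, 'value'):  # unwrap an IssueSeverity enum member
            sev = sev.value
        if isinstance(sev, str) and severity.lower() in sev.lower():
            filtered.append(issue)
    return filtered

# Invented sample issues for illustration
issues = [
    {"severity": "error", "message": "missing tool reference"},
    {"severity": "warning", "message": "optional dependency"},
    {"severity": "info", "message": "shared tool namespace"},
]

errors = filter_issues(issues, "error")
everything = filter_issues(issues, "all")
```

The substring semantics mean a partial filter like `"warn"` also matches `"warning"`, which is forgiving for callers but worth knowing when filter values overlap.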
309
mcp-servers/contract-validator/mcp_server/server.py
Normal file
@@ -0,0 +1,309 @@
"""
MCP Server entry point for Contract Validator.

Provides cross-plugin compatibility validation and CLAUDE.md agent verification
tools to Claude Code via JSON-RPC 2.0 over stdio.
"""
import asyncio
import logging
import json

from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent

from .parse_tools import ParseTools
from .validation_tools import ValidationTools
from .report_tools import ReportTools

# Suppress noisy MCP validation warnings on stderr
logging.basicConfig(level=logging.INFO)
logging.getLogger("root").setLevel(logging.ERROR)
logging.getLogger("mcp").setLevel(logging.ERROR)
logger = logging.getLogger(__name__)


class ContractValidatorMCPServer:
    """MCP server for cross-plugin compatibility validation"""

    def __init__(self):
        self.server = Server("contract-validator-mcp")
        self.parse_tools = ParseTools()
        self.validation_tools = ValidationTools()
        self.report_tools = ReportTools()

    async def initialize(self):
        """Initialize the server."""
        logger.info("Contract Validator MCP Server initialized")

    def setup_tools(self):
        """Register all available tools with the MCP server"""

        @self.server.list_tools()
        async def list_tools() -> list[Tool]:
            """Return the list of available tools"""
            tools = [
                # Parse tools (issue #186)
                Tool(
                    name="parse_plugin_interface",
                    description="Parse plugin README.md to extract interface declarations (inputs, outputs, tools)",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "plugin_path": {
                                "type": "string",
                                "description": "Path to plugin directory or README.md"
                            }
                        },
                        "required": ["plugin_path"]
                    }
                ),
                Tool(
                    name="parse_claude_md_agents",
                    description="Parse CLAUDE.md to extract agent definitions and their tool sequences",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "claude_md_path": {
                                "type": "string",
                                "description": "Path to CLAUDE.md file"
                            }
                        },
                        "required": ["claude_md_path"]
                    }
                ),
                # Validation tools (issue #187)
                Tool(
                    name="validate_compatibility",
                    description="Validate compatibility between two plugin interfaces",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "plugin_a": {
                                "type": "string",
                                "description": "Path to first plugin"
                            },
                            "plugin_b": {
                                "type": "string",
                                "description": "Path to second plugin"
                            }
                        },
                        "required": ["plugin_a", "plugin_b"]
                    }
                ),
                Tool(
                    name="validate_agent_refs",
                    description="Validate that all tool references in an agent definition exist",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "agent_name": {
                                "type": "string",
                                "description": "Name of agent to validate"
                            },
                            "claude_md_path": {
                                "type": "string",
                                "description": "Path to CLAUDE.md containing the agent"
                            },
                            "plugin_paths": {
                                "type": "array",
                                "items": {"type": "string"},
                                "description": "Paths to available plugins"
                            }
                        },
                        "required": ["agent_name", "claude_md_path"]
                    }
                ),
                Tool(
                    name="validate_data_flow",
                    description="Validate data flow through an agent's tool sequence",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "agent_name": {
                                "type": "string",
                                "description": "Name of agent to validate"
                            },
                            "claude_md_path": {
                                "type": "string",
                                "description": "Path to CLAUDE.md containing the agent"
                            }
                        },
                        "required": ["agent_name", "claude_md_path"]
                    }
                ),
                Tool(
                    name="validate_workflow_integration",
                    description="Validate that a domain plugin exposes the required advisory interfaces (gate command, review command, advisory agent) expected by projman's domain-consultation skill. Also checks gate contract version compatibility.",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "plugin_path": {
                                "type": "string",
                                "description": "Path to the domain plugin directory"
                            },
                            "domain_label": {
                                "type": "string",
                                "description": "The Domain/* label it claims to handle, e.g. Domain/Viz"
                            },
                            "expected_contract": {
                                "type": "string",
                                "description": "Expected contract version (e.g., 'v1'). If provided, validates the gate command's contract matches."
                            }
                        },
                        "required": ["plugin_path", "domain_label"]
                    }
                ),
                # Report tools (issue #188)
                Tool(
                    name="generate_compatibility_report",
                    description="Generate a comprehensive compatibility report for all plugins",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "marketplace_path": {
                                "type": "string",
                                "description": "Path to marketplace root directory"
                            },
                            "format": {
                                "type": "string",
                                "enum": ["markdown", "json"],
                                "default": "markdown",
                                "description": "Output format"
                            }
                        },
                        "required": ["marketplace_path"]
                    }
                ),
                Tool(
                    name="list_issues",
                    description="List validation issues with optional filtering",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "marketplace_path": {
                                "type": "string",
                                "description": "Path to marketplace root directory"
                            },
                            "severity": {
                                "type": "string",
                                "enum": ["error", "warning", "info", "all"],
                                "default": "all",
                                "description": "Filter by severity"
                            },
                            "issue_type": {
                                "type": "string",
                                "enum": ["missing_tool", "interface_mismatch", "optional_dependency", "undeclared_output", "all"],
                                "default": "all",
                                "description": "Filter by issue type"
                            }
                        },
                        "required": ["marketplace_path"]
                    }
                )
            ]
            return tools

        @self.server.call_tool()
        async def call_tool(name: str, arguments: dict) -> list[TextContent]:
            """Handle tool invocation."""
            try:
                # Dispatch to the matching tool implementation
                if name == "parse_plugin_interface":
                    result = await self._parse_plugin_interface(**arguments)
                elif name == "parse_claude_md_agents":
                    result = await self._parse_claude_md_agents(**arguments)
                elif name == "validate_compatibility":
                    result = await self._validate_compatibility(**arguments)
                elif name == "validate_agent_refs":
                    result = await self._validate_agent_refs(**arguments)
                elif name == "validate_data_flow":
                    result = await self._validate_data_flow(**arguments)
                elif name == "validate_workflow_integration":
                    result = await self._validate_workflow_integration(**arguments)
                elif name == "generate_compatibility_report":
                    result = await self._generate_compatibility_report(**arguments)
                elif name == "list_issues":
                    result = await self._list_issues(**arguments)
                else:
                    raise ValueError(f"Unknown tool: {name}")

                return [TextContent(
                    type="text",
                    text=json.dumps(result, indent=2, default=str)
                )]

            except Exception as e:
                logger.error(f"Tool {name} failed: {e}")
                return [TextContent(
                    type="text",
                    text=json.dumps({"error": str(e)}, indent=2)
                )]

    # Parse tool implementations (issue #186)

    async def _parse_plugin_interface(self, plugin_path: str) -> dict:
        """Parse plugin interface from README.md"""
        return await self.parse_tools.parse_plugin_interface(plugin_path)

    async def _parse_claude_md_agents(self, claude_md_path: str) -> dict:
        """Parse agents from CLAUDE.md"""
        return await self.parse_tools.parse_claude_md_agents(claude_md_path)

    # Validation tool implementations (issue #187)

    async def _validate_compatibility(self, plugin_a: str, plugin_b: str) -> dict:
        """Validate compatibility between plugins"""
        return await self.validation_tools.validate_compatibility(plugin_a, plugin_b)

    async def _validate_agent_refs(self, agent_name: str, claude_md_path: str, plugin_paths: list | None = None) -> dict:
        """Validate agent tool references"""
        return await self.validation_tools.validate_agent_refs(agent_name, claude_md_path, plugin_paths)

    async def _validate_data_flow(self, agent_name: str, claude_md_path: str) -> dict:
        """Validate agent data flow"""
        return await self.validation_tools.validate_data_flow(agent_name, claude_md_path)

    async def _validate_workflow_integration(
        self,
        plugin_path: str,
        domain_label: str,
        expected_contract: str | None = None
    ) -> dict:
        """Validate that a domain plugin exposes the required advisory interfaces"""
        return await self.validation_tools.validate_workflow_integration(
            plugin_path, domain_label, expected_contract
        )

    # Report tool implementations (issue #188)

    async def _generate_compatibility_report(self, marketplace_path: str, format: str = "markdown") -> dict:
        """Generate a comprehensive compatibility report"""
        return await self.report_tools.generate_compatibility_report(marketplace_path, format)

    async def _list_issues(self, marketplace_path: str, severity: str = "all", issue_type: str = "all") -> dict:
        """List validation issues with filtering"""
        return await self.report_tools.list_issues(marketplace_path, severity, issue_type)

    async def run(self):
        """Run the MCP server"""
        await self.initialize()
        self.setup_tools()

        async with stdio_server() as (read_stream, write_stream):
            await self.server.run(
                read_stream,
                write_stream,
                self.server.create_initialization_options()
            )


async def main():
    """Main entry point"""
    server = ContractValidatorMCPServer()
    await server.run()


if __name__ == "__main__":
    asyncio.run(main())
mcp-servers/contract-validator/mcp_server/validation_tools.py (Normal file, 493 lines)
@@ -0,0 +1,493 @@
"""
Validation tools for checking cross-plugin compatibility and agent references.

Provides:
- validate_compatibility: Compare two plugin interfaces
- validate_agent_refs: Check agent tool references exist
- validate_data_flow: Verify data flow through agent sequences
"""
from pathlib import Path
from typing import Optional
from pydantic import BaseModel
from enum import Enum

from .parse_tools import ParseTools, PluginInterface, ClaudeMdAgent


class IssueSeverity(str, Enum):
    ERROR = "error"
    WARNING = "warning"
    INFO = "info"


class IssueType(str, Enum):
    MISSING_TOOL = "missing_tool"
    INTERFACE_MISMATCH = "interface_mismatch"
    OPTIONAL_DEPENDENCY = "optional_dependency"
    UNDECLARED_OUTPUT = "undeclared_output"
    INVALID_SEQUENCE = "invalid_sequence"
    MISSING_INTEGRATION = "missing_integration"


class ValidationIssue(BaseModel):
    """A single validation issue"""
    severity: IssueSeverity
    issue_type: IssueType
    message: str
    location: Optional[str] = None
    suggestion: Optional[str] = None


class CompatibilityResult(BaseModel):
    """Result of compatibility check between two plugins"""
    plugin_a: str
    plugin_b: str
    compatible: bool
    shared_tools: list[str] = []
    a_only_tools: list[str] = []
    b_only_tools: list[str] = []
    issues: list[ValidationIssue] = []


class AgentValidationResult(BaseModel):
    """Result of agent reference validation"""
    agent_name: str
    valid: bool
    tool_refs_found: list[str] = []
    tool_refs_missing: list[str] = []
    issues: list[ValidationIssue] = []


class DataFlowResult(BaseModel):
    """Result of data flow validation"""
    agent_name: str
    valid: bool
    flow_steps: list[str] = []
    issues: list[ValidationIssue] = []


class WorkflowIntegrationResult(BaseModel):
    """Result of workflow integration validation for domain plugins"""
    plugin_name: str
    domain_label: str
    valid: bool
    gate_command_found: bool
    gate_contract: Optional[str] = None  # Contract version declared by gate command
    review_command_found: bool
    advisory_agent_found: bool
    issues: list[ValidationIssue] = []


class ValidationTools:
    """Tools for validating plugin compatibility and agent references"""

    def __init__(self):
        self.parse_tools = ParseTools()

    async def validate_compatibility(self, plugin_a: str, plugin_b: str) -> dict:
        """
        Validate compatibility between two plugin interfaces.

        Compares tools, commands, and agents to identify overlaps and gaps.

        Args:
            plugin_a: Path to first plugin directory
            plugin_b: Path to second plugin directory

        Returns:
            Compatibility report with shared tools, unique tools, and issues
        """
        # Parse both plugins
        interface_a = await self.parse_tools.parse_plugin_interface(plugin_a)
        interface_b = await self.parse_tools.parse_plugin_interface(plugin_b)

        # Check for parse errors
        if "error" in interface_a:
            return {
                "error": f"Failed to parse plugin A: {interface_a['error']}",
                "plugin_a": plugin_a,
                "plugin_b": plugin_b
            }
        if "error" in interface_b:
            return {
                "error": f"Failed to parse plugin B: {interface_b['error']}",
                "plugin_a": plugin_a,
                "plugin_b": plugin_b
            }

        # Extract tool names
        tools_a = set(t["name"] for t in interface_a.get("tools", []))
        tools_b = set(t["name"] for t in interface_b.get("tools", []))

        # Find overlaps and differences
        shared = tools_a & tools_b
        a_only = tools_a - tools_b
        b_only = tools_b - tools_a

        issues = []

        # Check for potential naming conflicts
        if shared:
            issues.append(ValidationIssue(
                severity=IssueSeverity.WARNING,
                issue_type=IssueType.INTERFACE_MISMATCH,
                message=f"Both plugins define tools with same names: {list(shared)}",
                location=f"{interface_a['plugin_name']} and {interface_b['plugin_name']}",
                suggestion="Ensure tools with same names have compatible interfaces"
            ))

        # Check command overlaps
        cmds_a = set(c["name"] for c in interface_a.get("commands", []))
        cmds_b = set(c["name"] for c in interface_b.get("commands", []))
        shared_cmds = cmds_a & cmds_b

        if shared_cmds:
            issues.append(ValidationIssue(
                severity=IssueSeverity.ERROR,
                issue_type=IssueType.INTERFACE_MISMATCH,
                message=f"Command name conflict: {list(shared_cmds)}",
                location=f"{interface_a['plugin_name']} and {interface_b['plugin_name']}",
                suggestion="Rename conflicting commands to avoid ambiguity"
            ))

        result = CompatibilityResult(
            plugin_a=interface_a["plugin_name"],
            plugin_b=interface_b["plugin_name"],
            compatible=len([i for i in issues if i.severity == IssueSeverity.ERROR]) == 0,
            shared_tools=list(shared),
            a_only_tools=list(a_only),
            b_only_tools=list(b_only),
            issues=issues
        )

        return result.model_dump()

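The core of validate_compatibility is plain set arithmetic on tool names; it can be sketched standalone like this (the input sets and tool names here are illustrative, not taken from any real plugin):

```python
# Minimal sketch of the set arithmetic behind validate_compatibility.
# In the real tool the sets come from parse_plugin_interface(); here
# they are passed in directly. Tool names are made up for illustration.
def compare_tool_sets(tools_a: set[str], tools_b: set[str]) -> dict:
    shared = tools_a & tools_b  # defined by both plugins -> WARNING only
    return {
        "shared": sorted(shared),
        "a_only": sorted(tools_a - tools_b),
        "b_only": sorted(tools_b - tools_a),
        # Shared tool names alone are a warning, not an error, so the
        # pair still counts as compatible; errors come from command clashes.
        "compatible": True,
    }

print(compare_tool_sets({"read_csv", "filter"}, {"filter", "plot"}))
```

Note the asymmetry in severity: duplicate tool names only warn, while duplicate command names are errors that flip `compatible` to False.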
    async def validate_agent_refs(
        self,
        agent_name: str,
        claude_md_path: str,
        plugin_paths: Optional[list[str]] = None
    ) -> dict:
        """
        Validate that all tool references in an agent definition exist.

        Args:
            agent_name: Name of the agent to validate
            claude_md_path: Path to CLAUDE.md containing the agent
            plugin_paths: Optional list of plugin paths to check for tools

        Returns:
            Validation result with found/missing tools and issues
        """
        # Parse CLAUDE.md for agents
        agents_result = await self.parse_tools.parse_claude_md_agents(claude_md_path)

        if "error" in agents_result:
            return {
                "error": agents_result["error"],
                "agent_name": agent_name
            }

        # Find the specific agent
        agent = None
        for a in agents_result.get("agents", []):
            if a["name"].lower() == agent_name.lower():
                agent = a
                break

        if not agent:
            return {
                "error": f"Agent '{agent_name}' not found in {claude_md_path}",
                "agent_name": agent_name,
                "available_agents": [a["name"] for a in agents_result.get("agents", [])]
            }

        # Collect all available tools from plugins
        available_tools = set()
        if plugin_paths:
            for plugin_path in plugin_paths:
                interface = await self.parse_tools.parse_plugin_interface(plugin_path)
                if "error" not in interface:
                    for tool in interface.get("tools", []):
                        available_tools.add(tool["name"])

        # Check agent tool references
        tool_refs = set(agent.get("tool_refs", []))
        found = tool_refs & available_tools if available_tools else tool_refs
        missing = tool_refs - available_tools if available_tools else set()

        issues = []

        # Report missing tools
        for tool in missing:
            issues.append(ValidationIssue(
                severity=IssueSeverity.ERROR,
                issue_type=IssueType.MISSING_TOOL,
                message=f"Agent '{agent_name}' references tool '{tool}' which is not found",
                location=claude_md_path,
                suggestion=f"Check if tool '{tool}' exists or fix the reference"
            ))

        # Check if agent has no tool refs (might be incomplete)
        if not tool_refs:
            issues.append(ValidationIssue(
                severity=IssueSeverity.INFO,
                issue_type=IssueType.UNDECLARED_OUTPUT,
                message=f"Agent '{agent_name}' has no documented tool references",
                location=claude_md_path,
                suggestion="Consider documenting which tools this agent uses"
            ))

        result = AgentValidationResult(
            agent_name=agent_name,
            valid=len([i for i in issues if i.severity == IssueSeverity.ERROR]) == 0,
            tool_refs_found=list(found),
            tool_refs_missing=list(missing),
            issues=issues
        )

        return result.model_dump()

    async def validate_data_flow(self, agent_name: str, claude_md_path: str) -> dict:
        """
        Validate data flow through an agent's tool sequence.

        Checks that each step's expected output can be used by the next step.

        Args:
            agent_name: Name of the agent to validate
            claude_md_path: Path to CLAUDE.md containing the agent

        Returns:
            Data flow validation result with steps and issues
        """
        # Parse CLAUDE.md for agents
        agents_result = await self.parse_tools.parse_claude_md_agents(claude_md_path)

        if "error" in agents_result:
            return {
                "error": agents_result["error"],
                "agent_name": agent_name
            }

        # Find the specific agent
        agent = None
        for a in agents_result.get("agents", []):
            if a["name"].lower() == agent_name.lower():
                agent = a
                break

        if not agent:
            return {
                "error": f"Agent '{agent_name}' not found in {claude_md_path}",
                "agent_name": agent_name,
                "available_agents": [a["name"] for a in agents_result.get("agents", [])]
            }

        issues = []
        flow_steps = []

        # Extract workflow steps
        workflow_steps = agent.get("workflow_steps", [])
        responsibilities = agent.get("responsibilities", [])

        # Build flow from workflow steps or responsibilities
        steps = workflow_steps if workflow_steps else responsibilities

        for i, step in enumerate(steps):
            flow_steps.append(f"Step {i+1}: {step}")

        # Check for data flow patterns
        tool_refs = agent.get("tool_refs", [])

        # Known data flow patterns
        # e.g., data-platform produces data_ref, viz-platform consumes it
        known_producers = {
            "read_csv": "data_ref",
            "read_parquet": "data_ref",
            "pg_query": "data_ref",
            "filter": "data_ref",
            "groupby": "data_ref",
        }

        known_consumers = {
            "describe": "data_ref",
            "head": "data_ref",
            "tail": "data_ref",
            "to_csv": "data_ref",
            "to_parquet": "data_ref",
        }

        # Check if agent uses tools that require data_ref
        has_producer = any(t in known_producers for t in tool_refs)
        has_consumer = any(t in known_consumers for t in tool_refs)

        if has_consumer and not has_producer:
            issues.append(ValidationIssue(
                severity=IssueSeverity.WARNING,
                issue_type=IssueType.INTERFACE_MISMATCH,
                message=f"Agent '{agent_name}' uses tools that consume data_ref but no producer found",
                location=claude_md_path,
                suggestion="Ensure a data loading tool (read_csv, pg_query, etc.) is used before data consumers"
            ))

        # Check for empty workflow
        if not steps and not tool_refs:
            issues.append(ValidationIssue(
                severity=IssueSeverity.INFO,
                issue_type=IssueType.UNDECLARED_OUTPUT,
                message=f"Agent '{agent_name}' has no documented workflow or tool sequence",
                location=claude_md_path,
                suggestion="Consider documenting the agent's workflow steps"
            ))

        result = DataFlowResult(
            agent_name=agent_name,
            valid=len([i for i in issues if i.severity == IssueSeverity.ERROR]) == 0,
            flow_steps=flow_steps,
            issues=issues
        )

        return result.model_dump()

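The producer/consumer check in validate_data_flow is a small membership heuristic; here it is as a standalone sketch, with the tool tables mirroring the known_producers/known_consumers mappings above:

```python
# Standalone sketch of the data_ref producer/consumer heuristic used by
# validate_data_flow. Tool names mirror the tables in the method; only
# membership matters here, so plain sets suffice.
KNOWN_PRODUCERS = {"read_csv", "read_parquet", "pg_query", "filter", "groupby"}
KNOWN_CONSUMERS = {"describe", "head", "tail", "to_csv", "to_parquet"}

def data_flow_warning(tool_refs: list[str]) -> bool:
    """True when the agent consumes data_ref without any producer present."""
    has_producer = any(t in KNOWN_PRODUCERS for t in tool_refs)
    has_consumer = any(t in KNOWN_CONSUMERS for t in tool_refs)
    return has_consumer and not has_producer

print(data_flow_warning(["describe", "head"]))      # consumer with no producer
print(data_flow_warning(["read_csv", "describe"]))  # producer present, no warning
```

The heuristic deliberately ignores ordering: it only warns when no producer exists at all, so a consumer listed before a producer does not trigger a false positive.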
    async def validate_workflow_integration(
        self,
        plugin_path: str,
        domain_label: str,
        expected_contract: Optional[str] = None
    ) -> dict:
        """
        Validate that a domain plugin exposes required advisory interfaces.

        Checks for:
        - Gate command (e.g., /design-gate, /data-gate) - REQUIRED
        - Gate contract version (gate_contract in frontmatter) - INFO if missing
        - Review command (e.g., /design-review, /data-review) - recommended
        - Advisory agent referencing the domain label - recommended

        Args:
            plugin_path: Path to the domain plugin directory
            domain_label: The Domain/* label it claims to handle (e.g., Domain/Viz)
            expected_contract: Expected contract version (e.g., 'v1'). If provided,
                validates the gate command's contract matches.

        Returns:
            Validation result with found interfaces and issues
        """
        import re

        plugin_path_obj = Path(plugin_path)
        issues = []

        # Extract plugin name from path
        plugin_name = plugin_path_obj.name
        if not plugin_path_obj.exists():
            return {
                "error": f"Plugin directory not found: {plugin_path}",
                "plugin_path": plugin_path,
                "domain_label": domain_label
            }

        # Extract domain short name from label (e.g., "Domain/Viz" -> "viz", "Domain/Data" -> "data")
        domain_short = domain_label.split("/")[-1].lower() if "/" in domain_label else domain_label.lower()

        # Check for gate command
        commands_dir = plugin_path_obj / "commands"
        gate_command_found = False
        gate_contract = None
        gate_patterns = ["pass", "fail", "PASS", "FAIL", "Binary pass/fail", "gate"]

        if commands_dir.exists():
            for cmd_file in commands_dir.glob("*.md"):
                if "gate" in cmd_file.name.lower():
                    # Verify it's actually a gate command by checking content
                    content = cmd_file.read_text()
                    if any(pattern in content for pattern in gate_patterns):
                        gate_command_found = True
                        # Parse frontmatter for gate_contract
                        frontmatter_match = re.match(r'^---\n(.*?)\n---', content, re.DOTALL)
                        if frontmatter_match:
                            frontmatter = frontmatter_match.group(1)
                            contract_match = re.search(r'gate_contract:\s*(\S+)', frontmatter)
                            if contract_match:
                                gate_contract = contract_match.group(1)
                        break

        if not gate_command_found:
            issues.append(ValidationIssue(
                severity=IssueSeverity.ERROR,
                issue_type=IssueType.MISSING_INTEGRATION,
                message=f"Plugin '{plugin_name}' lacks a gate command for domain '{domain_label}'",
                location=str(commands_dir),
                suggestion=f"Create commands/{domain_short}-gate.md with binary PASS/FAIL output"
            ))

        # Check for review command
        review_command_found = False
        if commands_dir.exists():
            for cmd_file in commands_dir.glob("*.md"):
                if "review" in cmd_file.name.lower() and "gate" not in cmd_file.name.lower():
                    review_command_found = True
                    break

        if not review_command_found:
            issues.append(ValidationIssue(
                severity=IssueSeverity.WARNING,
                issue_type=IssueType.MISSING_INTEGRATION,
                message=f"Plugin '{plugin_name}' lacks a review command for domain '{domain_label}'",
                location=str(commands_dir),
                suggestion=f"Create commands/{domain_short}-review.md for detailed audits"
            ))

        # Check for advisory agent
        agents_dir = plugin_path_obj / "agents"
        advisory_agent_found = False

        if agents_dir.exists():
            for agent_file in agents_dir.glob("*.md"):
                content = agent_file.read_text()
                # Check if agent references the domain label or gate command
                if (domain_label in content
                        or f"{domain_short}-gate" in content.lower()
                        or "advisor" in agent_file.name.lower()
                        or "reviewer" in agent_file.name.lower()):
                    advisory_agent_found = True
                    break

        if not advisory_agent_found:
            issues.append(ValidationIssue(
                severity=IssueSeverity.WARNING,
                issue_type=IssueType.MISSING_INTEGRATION,
                message=f"Plugin '{plugin_name}' lacks an advisory agent for domain '{domain_label}'",
                location=str(agents_dir) if agents_dir.exists() else str(plugin_path_obj),
                suggestion=f"Create agents/{domain_short}-advisor.md referencing '{domain_label}'"
            ))

        # Check gate contract version
        if gate_command_found:
            if not gate_contract:
                issues.append(ValidationIssue(
                    severity=IssueSeverity.INFO,
                    issue_type=IssueType.MISSING_INTEGRATION,
                    message="Gate command does not declare a contract version",
                    location=str(commands_dir),
                    suggestion="Consider adding `gate_contract: v1` to frontmatter for version tracking"
                ))
            elif expected_contract and gate_contract != expected_contract:
                issues.append(ValidationIssue(
                    severity=IssueSeverity.WARNING,
                    issue_type=IssueType.INTERFACE_MISMATCH,
                    message=f"Contract version mismatch: gate declares {gate_contract}, projman expects {expected_contract}",
                    location=str(commands_dir),
                    suggestion=f"Update domain-consultation.md Gate Command Reference table to {gate_contract}, or update gate command to {expected_contract}"
                ))

        result = WorkflowIntegrationResult(
            plugin_name=plugin_name,
            domain_label=domain_label,
            valid=gate_command_found,  # Only gate is required for validity
            gate_command_found=gate_command_found,
            gate_contract=gate_contract,
            review_command_found=review_command_found,
            advisory_agent_found=advisory_agent_found,
            issues=issues
        )

        return result.model_dump()
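The frontmatter scan in validate_workflow_integration comes down to two regexes: one anchored match to isolate the `---` block, one search for the `gate_contract:` key. A standalone sketch (the sample command text is made up for illustration):

```python
import re
from typing import Optional

# Standalone sketch of the gate_contract extraction used by
# validate_workflow_integration: isolate the frontmatter block,
# then search it for a gate_contract key.
def extract_gate_contract(content: str) -> Optional[str]:
    frontmatter_match = re.match(r'^---\n(.*?)\n---', content, re.DOTALL)
    if not frontmatter_match:
        return None
    contract_match = re.search(r'gate_contract:\s*(\S+)', frontmatter_match.group(1))
    return contract_match.group(1) if contract_match else None

# Illustrative gate command file, not a real one from the marketplace.
sample = "---\nname: design-gate\ngate_contract: v1\n---\n# /design-gate\nBinary PASS/FAIL.\n"
print(extract_gate_contract(sample))
```

Because `re.match` is anchored at the start of the string, frontmatter is only recognized when the file begins with `---`; a stray leading blank line would make the function return None.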
mcp-servers/contract-validator/pyproject.toml (Normal file, 41 lines)
@@ -0,0 +1,41 @@
[build-system]
requires = ["setuptools>=61.0", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name = "contract-validator-mcp"
version = "1.0.0"
description = "MCP Server for cross-plugin compatibility validation and agent verification"
readme = "README.md"
license = {text = "MIT"}
requires-python = ">=3.10"
authors = [
    {name = "Leo Miranda"}
]
classifiers = [
    "Development Status :: 4 - Beta",
    "Intended Audience :: Developers",
    "License :: OSI Approved :: MIT License",
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: 3.10",
    "Programming Language :: Python :: 3.11",
    "Programming Language :: Python :: 3.12",
]
dependencies = [
    "mcp>=0.9.0",
    "pydantic>=2.5.0",
]

[project.optional-dependencies]
dev = [
    "pytest>=7.4.3",
    "pytest-asyncio>=0.23.0",
]

[tool.setuptools.packages.find]
where = ["."]
include = ["mcp_server*"]

[tool.pytest.ini_options]
asyncio_mode = "auto"
testpaths = ["tests"]
mcp-servers/contract-validator/requirements.txt (Normal file, 9 lines)
@@ -0,0 +1,9 @@
# MCP SDK
mcp>=0.9.0

# Utilities
pydantic>=2.5.0

# Testing
pytest>=7.4.3
pytest-asyncio>=0.23.0
mcp-servers/contract-validator/run.sh (Executable file, 21 lines)
@@ -0,0 +1,21 @@
#!/bin/bash
# Capture original working directory before any cd operations
# This should be the user's project directory when launched by Claude Code
export CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/contract-validator/.venv"
LOCAL_VENV="$SCRIPT_DIR/.venv"

if [[ -f "$CACHE_VENV/bin/python" ]]; then
    PYTHON="$CACHE_VENV/bin/python"
elif [[ -f "$LOCAL_VENV/bin/python" ]]; then
    PYTHON="$LOCAL_VENV/bin/python"
else
    echo "ERROR: No venv found. Run: ./scripts/setup-venvs.sh" >&2
    exit 1
fi

cd "$SCRIPT_DIR"
export PYTHONPATH="$SCRIPT_DIR"
exec "$PYTHON" -m mcp_server.server "$@"
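run.sh resolves the interpreter by preferring a shared cache venv and falling back to a plugin-local `.venv`. The same lookup order can be mirrored in Python for tooling that needs to find the venv the launcher would pick (a sketch only; the cache path is the one hard-coded in the script above):

```python
from pathlib import Path
from typing import Optional

# Sketch of run.sh's interpreter lookup: prefer the shared cache venv,
# then fall back to a plugin-local .venv; None mirrors the script's
# "No venv found" error exit.
def resolve_python(script_dir: Path, home: Path) -> Optional[Path]:
    cache_venv = home / ".cache/claude-mcp-venvs/leo-claude-mktplace/contract-validator/.venv"
    for venv in (cache_venv, script_dir / ".venv"):
        python = venv / "bin" / "python"
        if python.is_file():
            return python
    return None
```

As in the shell version, the cache venv wins whenever both exist, so stale local venvs are ignored once the shared one is set up.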
mcp-servers/contract-validator/tests/__init__.py (Normal file, 1 line)
@@ -0,0 +1 @@
# Tests for contract-validator MCP server
mcp-servers/contract-validator/tests/test_parse_tools.py (Normal file, 193 lines)
@@ -0,0 +1,193 @@
"""
Unit tests for parse tools.
"""
import pytest
from pathlib import Path


@pytest.fixture
def parse_tools():
    """Create ParseTools instance"""
    from mcp_server.parse_tools import ParseTools
    return ParseTools()


@pytest.fixture
def sample_readme(tmp_path):
    """Create a sample README.md for testing"""
    readme = tmp_path / "README.md"
    readme.write_text("""# Test Plugin

A test plugin for validation.

## Features

- **Feature One**: Does something
- **Feature Two**: Does something else

## Commands

| Command | Description |
|---------|-------------|
| `/test-cmd` | Test command |
| `/another-cmd` | Another test command |

## Agents

| Agent | Description |
|-------|-------------|
| `test-agent` | A test agent |

## Tools Summary

### Category A (3 tools)
`tool_a`, `tool_b`, `tool_c`

### Category B (2 tools)
`tool_d`, `tool_e`
""")
    return str(tmp_path)


@pytest.fixture
def sample_claude_md(tmp_path):
    """Create a sample CLAUDE.md for testing"""
    claude_md = tmp_path / "CLAUDE.md"
    claude_md.write_text("""# CLAUDE.md

## Project Overview

### Four-Agent Model (test)

| Agent | Personality | Responsibilities |
|-------|-------------|------------------|
| **Planner** | Thoughtful | Planning via `create_issue`, `search_lessons` |
| **Executor** | Focused | Implementation via `write`, `edit` |

## Workflow

1. Planner creates issues
2. Executor implements code
""")
    return str(claude_md)


@pytest.mark.asyncio
async def test_parse_plugin_interface_basic(parse_tools, sample_readme):
    """Test basic plugin interface parsing"""
    result = await parse_tools.parse_plugin_interface(sample_readme)

    assert "error" not in result
    # Plugin name extraction strips "Plugin" suffix
    assert result["plugin_name"] == "Test"
    assert "A test plugin" in result["description"]


@pytest.mark.asyncio
async def test_parse_plugin_interface_commands(parse_tools, sample_readme):
    """Test command extraction from README"""
    result = await parse_tools.parse_plugin_interface(sample_readme)

    commands = result["commands"]
    assert len(commands) == 2
    assert commands[0]["name"] == "/test-cmd"
    assert commands[1]["name"] == "/another-cmd"


@pytest.mark.asyncio
async def test_parse_plugin_interface_agents(parse_tools, sample_readme):
    """Test agent extraction from README"""
    result = await parse_tools.parse_plugin_interface(sample_readme)

    agents = result["agents"]
    assert len(agents) == 1
    assert agents[0]["name"] == "test-agent"


@pytest.mark.asyncio
async def test_parse_plugin_interface_tools(parse_tools, sample_readme):
    """Test tool extraction from README"""
    result = await parse_tools.parse_plugin_interface(sample_readme)

    tools = result["tools"]
    tool_names = [t["name"] for t in tools]
    assert "tool_a" in tool_names
    assert "tool_b" in tool_names
    assert "tool_e" in tool_names
    assert len(tools) >= 5


@pytest.mark.asyncio
async def test_parse_plugin_interface_categories(parse_tools, sample_readme):
    """Test tool category extraction"""
    result = await parse_tools.parse_plugin_interface(sample_readme)

    categories = result["tool_categories"]
    assert "Category A" in categories
    assert "Category B" in categories
    assert "tool_a" in categories["Category A"]


@pytest.mark.asyncio
async def test_parse_plugin_interface_features(parse_tools, sample_readme):
    """Test feature extraction"""
    result = await parse_tools.parse_plugin_interface(sample_readme)

    features = result["features"]
    assert "Feature One" in features
    assert "Feature Two" in features


@pytest.mark.asyncio
async def test_parse_plugin_interface_not_found(parse_tools, tmp_path):
    """Test error when README not found"""
    result = await parse_tools.parse_plugin_interface(str(tmp_path / "nonexistent"))

    assert "error" in result
    assert "not found" in result["error"].lower()


@pytest.mark.asyncio
async def test_parse_claude_md_agents(parse_tools, sample_claude_md):
    """Test agent extraction from CLAUDE.md"""
    result = await parse_tools.parse_claude_md_agents(sample_claude_md)

    assert "error" not in result
    assert result["agent_count"] == 2

    agents = result["agents"]
    agent_names = [a["name"] for a in agents]
    assert "Planner" in agent_names
    assert "Executor" in agent_names


@pytest.mark.asyncio
async def test_parse_claude_md_tool_refs(parse_tools, sample_claude_md):
    """Test tool reference extraction from agents"""
    result = await parse_tools.parse_claude_md_agents(sample_claude_md)

    agents = {a["name"]: a for a in result["agents"]}
    planner = agents["Planner"]

    assert "create_issue" in planner["tool_refs"]
    assert "search_lessons" in planner["tool_refs"]


@pytest.mark.asyncio
async def test_parse_claude_md_not_found(parse_tools, tmp_path):
    """Test error when CLAUDE.md not found"""
    result = await parse_tools.parse_claude_md_agents(str(tmp_path / "CLAUDE.md"))

    assert "error" in result
    assert "not found" in result["error"].lower()


@pytest.mark.asyncio
async def test_parse_plugin_with_direct_file(parse_tools, sample_readme):
    """Test parsing with direct file path instead of directory"""
    readme_path = Path(sample_readme) / "README.md"
    result = await parse_tools.parse_plugin_interface(str(readme_path))

    assert "error" not in result
    # Plugin name extraction strips "Plugin" suffix
    assert result["plugin_name"] == "Test"
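Several tests above rely on parse_plugin_interface stripping a trailing "Plugin" from the README title ("# Test Plugin" yields plugin_name "Test"). The parse_tools implementation is not shown in this diff hunk, but the normalization the tests assume might look like this (a hypothetical sketch, not the actual parse_tools code):

```python
import re

# Hypothetical sketch of the title normalization the tests depend on:
# drop the markdown heading marker, then strip a trailing "Plugin" word.
# This is NOT the actual parse_tools implementation.
def plugin_name_from_title(title_line: str) -> str:
    name = title_line.lstrip("#").strip()
    return re.sub(r'\s+Plugin$', '', name)

print(plugin_name_from_title("# Test Plugin"))
```

Anchoring the pattern with `$` means only a final "Plugin" is removed, so a name like "Plugin Manager" would pass through unchanged.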
mcp-servers/contract-validator/tests/test_report_tools.py (Normal file, 261 lines)
@@ -0,0 +1,261 @@
"""
Unit tests for report tools.
"""
import pytest
from pathlib import Path


@pytest.fixture
def report_tools():
    """Create ReportTools instance"""
    from mcp_server.report_tools import ReportTools
    return ReportTools()


@pytest.fixture
def sample_marketplace(tmp_path):
    """Create a sample marketplace structure"""
    import json

    plugins_dir = tmp_path / "plugins"
    plugins_dir.mkdir()

    # Plugin 1
    plugin1 = plugins_dir / "plugin-one"
    plugin1.mkdir()
    plugin1_meta = plugin1 / ".claude-plugin"
    plugin1_meta.mkdir()
    (plugin1_meta / "plugin.json").write_text(json.dumps({"name": "plugin-one"}))
    (plugin1 / "README.md").write_text("""# plugin-one

First test plugin.

## Commands

| Command | Description |
|---------|-------------|
| `/cmd-one` | Command one |

## Tools Summary

### Tools (2 tools)
`tool_a`, `tool_b`
""")

    # Plugin 2
    plugin2 = plugins_dir / "plugin-two"
    plugin2.mkdir()
    plugin2_meta = plugin2 / ".claude-plugin"
    plugin2_meta.mkdir()
    (plugin2_meta / "plugin.json").write_text(json.dumps({"name": "plugin-two"}))
    (plugin2 / "README.md").write_text("""# plugin-two

Second test plugin.

## Commands

| Command | Description |
|---------|-------------|
| `/cmd-two` | Command two |

## Tools Summary

### Tools (2 tools)
`tool_c`, `tool_d`
""")

    # Plugin 3 (with conflict)
    plugin3 = plugins_dir / "plugin-three"
    plugin3.mkdir()
    plugin3_meta = plugin3 / ".claude-plugin"
    plugin3_meta.mkdir()
    (plugin3_meta / "plugin.json").write_text(json.dumps({"name": "plugin-three"}))
    (plugin3 / "README.md").write_text("""# plugin-three

Third test plugin with conflict.

## Commands

| Command | Description |
|---------|-------------|
| `/cmd-one` | Conflicting command |

## Tools Summary

### Tools (1 tool)
`tool_e`
""")

    return str(tmp_path)


@pytest.fixture
def marketplace_no_plugins(tmp_path):
    """Create marketplace with no plugins"""
    plugins_dir = tmp_path / "plugins"
    plugins_dir.mkdir()
    return str(tmp_path)


@pytest.fixture
def marketplace_no_dir(tmp_path):
    """Create path without plugins directory"""
    return str(tmp_path)

@pytest.mark.asyncio
async def test_generate_report_json_format(report_tools, sample_marketplace):
    """Test JSON format report generation"""
    result = await report_tools.generate_compatibility_report(
        sample_marketplace, "json"
    )

    assert "error" not in result
    assert "generated_at" in result
    assert "summary" in result
    assert "plugins" in result
    assert result["summary"]["total_plugins"] == 3


@pytest.mark.asyncio
async def test_generate_report_markdown_format(report_tools, sample_marketplace):
    """Test markdown format report generation"""
    result = await report_tools.generate_compatibility_report(
        sample_marketplace, "markdown"
    )

    assert "error" not in result
    assert "report" in result
    assert "# Contract Validation Report" in result["report"]
    assert "## Summary" in result["report"]


@pytest.mark.asyncio
async def test_generate_report_finds_conflicts(report_tools, sample_marketplace):
    """Test that report finds command conflicts"""
    result = await report_tools.generate_compatibility_report(
        sample_marketplace, "json"
    )

    assert "error" not in result
    assert result["summary"]["errors"] > 0
    assert result["summary"]["total_issues"] > 0


@pytest.mark.asyncio
async def test_generate_report_counts_correctly(report_tools, sample_marketplace):
    """Test summary counts are correct"""
    result = await report_tools.generate_compatibility_report(
        sample_marketplace, "json"
    )

    summary = result["summary"]
    assert summary["total_plugins"] == 3
    assert summary["total_commands"] == 3  # 3 commands total
    assert summary["total_tools"] == 5  # a, b, c, d, e


@pytest.mark.asyncio
async def test_generate_report_no_plugins(report_tools, marketplace_no_plugins):
    """Test error when no plugins found"""
    result = await report_tools.generate_compatibility_report(
        marketplace_no_plugins, "json"
    )

    assert "error" in result
    assert "no plugins" in result["error"].lower()


@pytest.mark.asyncio
async def test_generate_report_no_plugins_dir(report_tools, marketplace_no_dir):
    """Test error when plugins directory doesn't exist"""
    result = await report_tools.generate_compatibility_report(
        marketplace_no_dir, "json"
    )

    assert "error" in result
    assert "not found" in result["error"].lower()


@pytest.mark.asyncio
async def test_list_issues_all(report_tools, sample_marketplace):
    """Test listing all issues"""
    result = await report_tools.list_issues(sample_marketplace, "all", "all")

    assert "error" not in result
    assert "issues" in result
    assert result["total_issues"] > 0


@pytest.mark.asyncio
async def test_list_issues_filter_by_severity(report_tools, sample_marketplace):
    """Test filtering issues by severity"""
    all_result = await report_tools.list_issues(sample_marketplace, "all", "all")
    error_result = await report_tools.list_issues(sample_marketplace, "error", "all")

    # Error count should be less than or equal to all
    assert error_result["total_issues"] <= all_result["total_issues"]

    # All issues should have error severity
    for issue in error_result["issues"]:
        sev = issue.get("severity", "")
        if hasattr(sev, 'value'):
            sev = sev.value
        assert "error" in str(sev).lower()


@pytest.mark.asyncio
async def test_list_issues_filter_by_type(report_tools, sample_marketplace):
    """Test filtering issues by type"""
    result = await report_tools.list_issues(
        sample_marketplace, "all", "interface_mismatch"
    )

    # All issues should have matching type
    for issue in result["issues"]:
        itype = issue.get("issue_type", "")
        if hasattr(itype, 'value'):
            itype = itype.value
        assert "interface_mismatch" in str(itype).lower()


@pytest.mark.asyncio
async def test_list_issues_combined_filters(report_tools, sample_marketplace):
    """Test combined severity and type filters"""
    result = await report_tools.list_issues(
        sample_marketplace, "error", "interface_mismatch"
    )

    assert "error" not in result
    # Should have command conflict errors
    assert result["total_issues"] > 0


@pytest.mark.asyncio
async def test_report_markdown_has_all_sections(report_tools, sample_marketplace):
    """Test markdown report contains all expected sections"""
    result = await report_tools.generate_compatibility_report(
        sample_marketplace, "markdown"
    )

    report = result["report"]
    assert "## Summary" in report
    assert "## Plugins" in report
    # Compatibility section only if there are checks
    assert "Plugin One" in report or "plugin-one" in report.lower()


@pytest.mark.asyncio
async def test_report_includes_suggestions(report_tools, sample_marketplace):
    """Test that issues include suggestions"""
    result = await report_tools.generate_compatibility_report(
        sample_marketplace, "json"
    )

    issues = result.get("all_issues", [])
    # Find an issue with a suggestion
    issues_with_suggestions = [
        i for i in issues
        if i.get("suggestion")
    ]
    assert len(issues_with_suggestions) > 0
514
mcp-servers/contract-validator/tests/test_validation_tools.py
Normal file
@@ -0,0 +1,514 @@
"""
Unit tests for validation tools.
"""
import pytest
from pathlib import Path


@pytest.fixture
def validation_tools():
    """Create ValidationTools instance"""
    from mcp_server.validation_tools import ValidationTools
    return ValidationTools()


@pytest.fixture
def plugin_a(tmp_path):
    """Create first test plugin"""
    plugin_dir = tmp_path / "plugin-a"
    plugin_dir.mkdir()
    (plugin_dir / ".claude-plugin").mkdir()

    readme = plugin_dir / "README.md"
    readme.write_text("""# Plugin A

Test plugin A.

## Commands

| Command | Description |
|---------|-------------|
| `/setup-a` | Setup A |
| `/shared-cmd` | Shared command |

## Tools Summary

### Core (2 tools)
`tool_one`, `tool_two`
""")
    return str(plugin_dir)


@pytest.fixture
def plugin_b(tmp_path):
    """Create second test plugin"""
    plugin_dir = tmp_path / "plugin-b"
    plugin_dir.mkdir()
    (plugin_dir / ".claude-plugin").mkdir()

    readme = plugin_dir / "README.md"
    readme.write_text("""# Plugin B

Test plugin B.

## Commands

| Command | Description |
|---------|-------------|
| `/setup-b` | Setup B |
| `/shared-cmd` | Shared command (conflict!) |

## Tools Summary

### Core (2 tools)
`tool_two`, `tool_three`
""")
    return str(plugin_dir)


@pytest.fixture
def plugin_no_conflict(tmp_path):
    """Create plugin with no conflicts"""
    plugin_dir = tmp_path / "plugin-c"
    plugin_dir.mkdir()
    (plugin_dir / ".claude-plugin").mkdir()

    readme = plugin_dir / "README.md"
    readme.write_text("""# Plugin C

Test plugin C.

## Commands

| Command | Description |
|---------|-------------|
| `/unique-cmd` | Unique command |

## Tools Summary

### Core (1 tool)
`unique_tool`
""")
    return str(plugin_dir)

@pytest.fixture
def claude_md_with_agents(tmp_path):
    """Create CLAUDE.md with agent definitions"""
    claude_md = tmp_path / "CLAUDE.md"
    claude_md.write_text("""# CLAUDE.md

### Four-Agent Model

| Agent | Personality | Responsibilities |
|-------|-------------|------------------|
| **TestAgent** | Careful | Uses `tool_one`, `tool_two`, `missing_tool` |
| **ValidAgent** | Thorough | Uses `tool_one` only |
| **EmptyAgent** | Unknown | General tasks |
""")
    return str(claude_md)


@pytest.mark.asyncio
async def test_validate_compatibility_command_conflict(validation_tools, plugin_a, plugin_b):
    """Test detection of command name conflicts"""
    result = await validation_tools.validate_compatibility(plugin_a, plugin_b)

    assert "error" not in result
    assert result["compatible"] is False

    # Find the command conflict issue
    error_issues = [i for i in result["issues"] if i["severity"].value == "error"]
    assert len(error_issues) > 0
    assert any("/shared-cmd" in str(i["message"]) for i in error_issues)


@pytest.mark.asyncio
async def test_validate_compatibility_tool_overlap(validation_tools, plugin_a, plugin_b):
    """Test detection of tool name overlaps"""
    result = await validation_tools.validate_compatibility(plugin_a, plugin_b)

    assert "tool_two" in result["shared_tools"]


@pytest.mark.asyncio
async def test_validate_compatibility_unique_tools(validation_tools, plugin_a, plugin_b):
    """Test identification of unique tools per plugin"""
    result = await validation_tools.validate_compatibility(plugin_a, plugin_b)

    assert "tool_one" in result["a_only_tools"]
    assert "tool_three" in result["b_only_tools"]


@pytest.mark.asyncio
async def test_validate_compatibility_no_conflict(validation_tools, plugin_a, plugin_no_conflict):
    """Test compatible plugins"""
    result = await validation_tools.validate_compatibility(plugin_a, plugin_no_conflict)

    assert "error" not in result
    assert result["compatible"] is True


@pytest.mark.asyncio
async def test_validate_compatibility_missing_plugin(validation_tools, plugin_a, tmp_path):
    """Test error when plugin not found"""
    result = await validation_tools.validate_compatibility(
        plugin_a,
        str(tmp_path / "nonexistent")
    )

    assert "error" in result


@pytest.mark.asyncio
async def test_validate_agent_refs_with_missing_tools(validation_tools, claude_md_with_agents, plugin_a):
    """Test detection of missing tool references"""
    result = await validation_tools.validate_agent_refs(
        "TestAgent",
        claude_md_with_agents,
        [plugin_a]
    )

    assert "error" not in result
    assert result["valid"] is False
    assert "missing_tool" in result["tool_refs_missing"]


@pytest.mark.asyncio
async def test_validate_agent_refs_valid_agent(validation_tools, claude_md_with_agents, plugin_a):
    """Test valid agent with all tools found"""
    result = await validation_tools.validate_agent_refs(
        "ValidAgent",
        claude_md_with_agents,
        [plugin_a]
    )

    assert "error" not in result
    assert result["valid"] is True
    assert "tool_one" in result["tool_refs_found"]


@pytest.mark.asyncio
async def test_validate_agent_refs_empty_agent(validation_tools, claude_md_with_agents, plugin_a):
    """Test agent with no tool references"""
    result = await validation_tools.validate_agent_refs(
        "EmptyAgent",
        claude_md_with_agents,
        [plugin_a]
    )

    assert "error" not in result
    # Should have info issue about undocumented references
    info_issues = [i for i in result["issues"] if i["severity"].value == "info"]
    assert len(info_issues) > 0


@pytest.mark.asyncio
async def test_validate_agent_refs_agent_not_found(validation_tools, claude_md_with_agents, plugin_a):
    """Test error when agent not found"""
    result = await validation_tools.validate_agent_refs(
        "NonexistentAgent",
        claude_md_with_agents,
        [plugin_a]
    )

    assert "error" in result
    assert "not found" in result["error"].lower()


@pytest.mark.asyncio
async def test_validate_data_flow_valid(validation_tools, tmp_path):
    """Test data flow validation with valid flow"""
    claude_md = tmp_path / "CLAUDE.md"
    claude_md.write_text("""# CLAUDE.md

### Four-Agent Model

| Agent | Personality | Responsibilities |
|-------|-------------|------------------|
| **DataAgent** | Analytical | Load with `read_csv`, analyze with `describe`, export with `to_csv` |
""")

    result = await validation_tools.validate_data_flow("DataAgent", str(claude_md))

    assert "error" not in result
    assert result["valid"] is True


@pytest.mark.asyncio
async def test_validate_data_flow_missing_producer(validation_tools, tmp_path):
    """Test data flow with consumer but no producer"""
    claude_md = tmp_path / "CLAUDE.md"
    claude_md.write_text("""# CLAUDE.md

### Four-Agent Model

| Agent | Personality | Responsibilities |
|-------|-------------|------------------|
| **BadAgent** | Careless | Just runs `describe`, `head`, `tail` without loading |
""")

    result = await validation_tools.validate_data_flow("BadAgent", str(claude_md))

    assert "error" not in result
    # Should have warning about missing producer
    warning_issues = [i for i in result["issues"] if i["severity"].value == "warning"]
    assert len(warning_issues) > 0

# --- Workflow Integration Tests ---

@pytest.fixture
def domain_plugin_complete(tmp_path):
    """Create a complete domain plugin with gate, review, and advisory agent"""
    plugin_dir = tmp_path / "viz-platform"
    plugin_dir.mkdir()
    (plugin_dir / ".claude-plugin").mkdir()
    (plugin_dir / "commands").mkdir()
    (plugin_dir / "agents").mkdir()

    # Gate command with PASS/FAIL pattern
    gate_cmd = plugin_dir / "commands" / "design-gate.md"
    gate_cmd.write_text("""# /design-gate

Binary pass/fail validation gate for design system compliance.

## Output

- **PASS**: All design system checks passed
- **FAIL**: Design system violations detected
""")

    # Review command
    review_cmd = plugin_dir / "commands" / "design-review.md"
    review_cmd.write_text("""# /design-review

Comprehensive design system audit.
""")

    # Advisory agent
    agent = plugin_dir / "agents" / "design-reviewer.md"
    agent.write_text("""# design-reviewer

Design system compliance auditor.

Handles issues with `Domain/Viz` label.
""")

    return str(plugin_dir)


@pytest.fixture
def domain_plugin_missing_gate(tmp_path):
    """Create domain plugin with review and agent but no gate command"""
    plugin_dir = tmp_path / "data-platform"
    plugin_dir.mkdir()
    (plugin_dir / ".claude-plugin").mkdir()
    (plugin_dir / "commands").mkdir()
    (plugin_dir / "agents").mkdir()

    # Review command (but no gate)
    review_cmd = plugin_dir / "commands" / "data-review.md"
    review_cmd.write_text("""# /data-review

Data integrity audit.
""")

    # Advisory agent
    agent = plugin_dir / "agents" / "data-advisor.md"
    agent.write_text("""# data-advisor

Data integrity advisor for Domain/Data issues.
""")

    return str(plugin_dir)


@pytest.fixture
def domain_plugin_minimal(tmp_path):
    """Create minimal plugin with no commands or agents"""
    plugin_dir = tmp_path / "minimal-plugin"
    plugin_dir.mkdir()
    (plugin_dir / ".claude-plugin").mkdir()

    readme = plugin_dir / "README.md"
    readme.write_text("# Minimal Plugin\n\nNo commands or agents.")

    return str(plugin_dir)


@pytest.mark.asyncio
async def test_validate_workflow_integration_complete(validation_tools, domain_plugin_complete):
    """Test complete domain plugin returns valid with all interfaces found"""
    result = await validation_tools.validate_workflow_integration(
        domain_plugin_complete,
        "Domain/Viz"
    )

    assert "error" not in result
    assert result["valid"] is True
    assert result["gate_command_found"] is True
    assert result["review_command_found"] is True
    assert result["advisory_agent_found"] is True
    # May have INFO issue about missing contract version (not an error/warning)
    error_or_warning = [i for i in result["issues"]
                        if i["severity"].value in ("error", "warning")]
    assert len(error_or_warning) == 0


@pytest.mark.asyncio
async def test_validate_workflow_integration_missing_gate(validation_tools, domain_plugin_missing_gate):
    """Test plugin missing gate command returns invalid with ERROR"""
    result = await validation_tools.validate_workflow_integration(
        domain_plugin_missing_gate,
        "Domain/Data"
    )

    assert "error" not in result
    assert result["valid"] is False
    assert result["gate_command_found"] is False
    assert result["review_command_found"] is True
    assert result["advisory_agent_found"] is True

    # Should have one ERROR for missing gate
    error_issues = [i for i in result["issues"] if i["severity"].value == "error"]
    assert len(error_issues) == 1
    assert "gate" in error_issues[0]["message"].lower()


@pytest.mark.asyncio
async def test_validate_workflow_integration_minimal(validation_tools, domain_plugin_minimal):
    """Test minimal plugin returns invalid with multiple issues"""
    result = await validation_tools.validate_workflow_integration(
        domain_plugin_minimal,
        "Domain/Test"
    )

    assert "error" not in result
    assert result["valid"] is False
    assert result["gate_command_found"] is False
    assert result["review_command_found"] is False
    assert result["advisory_agent_found"] is False

    # Should have one ERROR (gate) and two WARNINGs (review, agent)
    error_issues = [i for i in result["issues"] if i["severity"].value == "error"]
    warning_issues = [i for i in result["issues"] if i["severity"].value == "warning"]
    assert len(error_issues) == 1
    assert len(warning_issues) == 2


@pytest.mark.asyncio
async def test_validate_workflow_integration_nonexistent_plugin(validation_tools, tmp_path):
    """Test error when plugin directory doesn't exist"""
    result = await validation_tools.validate_workflow_integration(
        str(tmp_path / "nonexistent"),
        "Domain/Test"
    )

    assert "error" in result
    assert "not found" in result["error"].lower()

# --- Gate Contract Version Tests ---

@pytest.fixture
def domain_plugin_with_contract(tmp_path):
    """Create domain plugin with gate_contract: v1 in frontmatter"""
    plugin_dir = tmp_path / "viz-platform-versioned"
    plugin_dir.mkdir()
    (plugin_dir / ".claude-plugin").mkdir()
    (plugin_dir / "commands").mkdir()
    (plugin_dir / "agents").mkdir()

    # Gate command with gate_contract in frontmatter
    gate_cmd = plugin_dir / "commands" / "design-gate.md"
    gate_cmd.write_text("""---
description: Design system compliance gate (pass/fail)
gate_contract: v1
---

# /design-gate

Binary pass/fail validation gate for design system compliance.

## Output

- **PASS**: All design system checks passed
- **FAIL**: Design system violations detected
""")

    # Review command
    review_cmd = plugin_dir / "commands" / "design-review.md"
    review_cmd.write_text("""# /design-review

Comprehensive design system audit.
""")

    # Advisory agent
    agent = plugin_dir / "agents" / "design-reviewer.md"
    agent.write_text("""# design-reviewer

Design system compliance auditor for Domain/Viz issues.
""")

    return str(plugin_dir)


@pytest.mark.asyncio
async def test_validate_workflow_contract_match(validation_tools, domain_plugin_with_contract):
    """Test that matching expected_contract produces no warning"""
    result = await validation_tools.validate_workflow_integration(
        domain_plugin_with_contract,
        "Domain/Viz",
        expected_contract="v1"
    )

    assert "error" not in result
    assert result["valid"] is True
    assert result["gate_contract"] == "v1"

    # Should have no warnings about contract mismatch
    warning_issues = [i for i in result["issues"] if i["severity"].value == "warning"]
    contract_warnings = [i for i in warning_issues if "contract" in i["message"].lower()]
    assert len(contract_warnings) == 0


@pytest.mark.asyncio
async def test_validate_workflow_contract_mismatch(validation_tools, domain_plugin_with_contract):
    """Test that mismatched expected_contract produces WARNING"""
    result = await validation_tools.validate_workflow_integration(
        domain_plugin_with_contract,
        "Domain/Viz",
        expected_contract="v2"  # Gate has v1
    )

    assert "error" not in result
    assert result["valid"] is True  # Contract mismatch doesn't affect validity
    assert result["gate_contract"] == "v1"

    # Should have warning about contract mismatch
    warning_issues = [i for i in result["issues"] if i["severity"].value == "warning"]
    contract_warnings = [i for i in warning_issues if "contract" in i["message"].lower()]
    assert len(contract_warnings) == 1
    assert "mismatch" in contract_warnings[0]["message"].lower()
    assert "v1" in contract_warnings[0]["message"]
    assert "v2" in contract_warnings[0]["message"]


@pytest.mark.asyncio
async def test_validate_workflow_no_contract(validation_tools, domain_plugin_complete):
    """Test that missing gate_contract produces INFO suggestion"""
    result = await validation_tools.validate_workflow_integration(
        domain_plugin_complete,
        "Domain/Viz"
    )

    assert "error" not in result
    assert result["valid"] is True
    assert result["gate_contract"] is None

    # Should have info issue about missing contract
    info_issues = [i for i in result["issues"] if i["severity"].value == "info"]
    contract_info = [i for i in info_issues if "contract" in i["message"].lower()]
    assert len(contract_info) == 1
    assert "does not declare" in contract_info[0]["message"].lower()
@@ -330,7 +330,7 @@ class PandasTools:
             return {'error': f'DataFrame not found: {data_ref}'}

         try:
-            filtered = df.query(condition)
+            filtered = df.query(condition).reset_index(drop=True)
             result_name = name or f"{data_ref}_filtered"
             return self._check_and_store(
                 filtered,
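The only functional change in the hunk above is the added `reset_index(drop=True)` on the `query()` result. A minimal standalone pandas sketch of what that buys (plain DataFrames, not the `PandasTools` class; the column name `x` is illustrative):

```python
import pandas as pd

df = pd.DataFrame({"x": [1, 5, 3, 7]})

# Without reset_index, query() preserves the original row labels,
# so the filtered frame keeps gaps in its index.
kept = df.query("x > 2")
assert list(kept.index) == [1, 2, 3]

# reset_index(drop=True) renumbers rows 0..n-1 and discards the old
# labels, so positional label access (.loc[0]) works on the result.
clean = df.query("x > 2").reset_index(drop=True)
assert list(clean.index) == [0, 1, 2]
assert clean.loc[0, "x"] == 5
```

This makes stored filtered frames behave like fresh DataFrames for any downstream code that assumes a contiguous zero-based index.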
21
mcp-servers/data-platform/run.sh
Executable file
@@ -0,0 +1,21 @@
#!/bin/bash
# Capture original working directory before any cd operations
# This should be the user's project directory when launched by Claude Code
export CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/data-platform/.venv"
LOCAL_VENV="$SCRIPT_DIR/.venv"

if [[ -f "$CACHE_VENV/bin/python" ]]; then
    PYTHON="$CACHE_VENV/bin/python"
elif [[ -f "$LOCAL_VENV/bin/python" ]]; then
    PYTHON="$LOCAL_VENV/bin/python"
else
    echo "ERROR: No venv found. Run: ./scripts/setup-venvs.sh" >&2
    exit 1
fi

cd "$SCRIPT_DIR"
export PYTHONPATH="$SCRIPT_DIR"
exec "$PYTHON" -m mcp_server.server "$@"
@@ -1,412 +1,47 @@
-# Gitea MCP Server
+# Gitea MCP Server (Marketplace Wrapper)

-Model Context Protocol (MCP) server for Gitea integration with Claude Code.
+This directory provides the virtual environment for the `gitea-mcp` package.

-## Overview
+## Package

-The Gitea MCP Server provides Claude Code with direct access to Gitea for issue management, label operations, and repository tracking. It supports both single-repository (project mode) and multi-repository (company/PMO mode) operations.
+**Source:** [gitea-mcp](https://gitea.hotserv.cloud/personal-projects/gitea-mcp)
+**Registry:** Gitea PyPI at gitea.hotserv.cloud
+**Version:** >=1.0.0

-**Status**: ✅ Phase 1 Complete - Fully functional and tested
+## Setup

-## Features
-
-### Core Functionality
-
-- **Issue Management**: CRUD operations for Gitea issues
-- **Label Taxonomy**: Dynamic 44-label system with intelligent suggestions
-- **Mode Detection**: Automatic project vs company-wide mode detection
-- **Branch-Aware Security**: Prevents accidental changes on production branches
-- **Hybrid Configuration**: System-level credentials + project-level paths
-- **PMO Support**: Multi-repository aggregation for organization-wide views
-
-### Tools Provided
-
-| Tool | Description | Mode |
-|------|-------------|------|
-| `list_issues` | List issues from repository | Both |
-| `get_issue` | Get specific issue details | Both |
-| `create_issue` | Create new issue with labels | Both |
-| `update_issue` | Update existing issue | Both |
-| `add_comment` | Add comment to issue | Both |
-| `get_labels` | Get all labels (org + repo) | Both |
-| `suggest_labels` | Intelligent label suggestion | Both |
-| `aggregate_issues` | Cross-repository issue aggregation | PMO Only |
-
-## Architecture
-
-### Directory Structure
-
-```
-mcp-servers/gitea/
-├── .venv/                 # Python virtual environment
-├── requirements.txt       # Python dependencies
-├── mcp_server/
-│   ├── __init__.py
-│   ├── server.py          # MCP server entry point
-│   ├── config.py          # Configuration loader
-│   ├── gitea_client.py    # Gitea API client
-│   └── tools/
-│       ├── __init__.py
-│       ├── issues.py      # Issue tools
-│       └── labels.py      # Label tools
-├── tests/
-│   ├── __init__.py
-│   ├── test_config.py
-│   ├── test_gitea_client.py
-│   ├── test_issues.py
-│   └── test_labels.py
-├── README.md              # This file
-└── TESTING.md             # Testing instructions
-```
-
-### Mode Detection
-
-The server operates in two modes based on environment variables:
-
-**Project Mode** (Single Repository):
-- When `GITEA_REPO` is set
-- Operates on single repository
-- Used by `projman` plugin
|
|
||||||
|
|
||||||
**Company Mode** (Multi-Repository / PMO):
|
|
||||||
- When `GITEA_REPO` is NOT set
|
|
||||||
- Operates on all repositories in organization
|
|
||||||
- Used by `projman-pmo` plugin
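The distinction above comes down to a single environment check. A minimal sketch (the server's real loader lives in `config.py`; the `detect_mode` helper name here is illustrative):

```python
import os

def detect_mode(env=os.environ):
    # Project mode when GITEA_REPO is set; company/PMO mode otherwise
    return "project" if env.get("GITEA_REPO") else "company"
```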
|
|
||||||
|
|
||||||
### Branch-Aware Security
|
|
||||||
|
|
||||||
Operations are restricted based on the current Git branch:
|
|
||||||
|
|
||||||
| Branch | Read | Create Issue | Update/Comment |
|
|
||||||
|--------|------|--------------|----------------|
|
|
||||||
| `main`, `master`, `prod/*` | ✅ | ❌ | ❌ |
|
|
||||||
| `staging`, `stage/*` | ✅ | ✅ | ❌ |
|
|
||||||
| `development`, `develop`, `feat/*`, `dev/*` | ✅ | ✅ | ✅ |
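The table above can be encoded as glob patterns. A sketch, assuming glob-style matching of branch names (the server's actual enforcement lives in the issue tools, and the read-only fallback for unlisted branches is an assumption):

```python
from fnmatch import fnmatch

# Branch patterns from the permissions table; access widens as protection decreases
POLICY = [
    (("main", "master", "prod/*"), {"read"}),
    (("staging", "stage/*"), {"read", "create"}),
    (("development", "develop", "feat/*", "dev/*"), {"read", "create", "update"}),
]

def allowed_ops(branch):
    for patterns, ops in POLICY:
        if any(fnmatch(branch, pattern) for pattern in patterns):
            return ops
    return {"read"}  # assumed conservative default for unknown branches
```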
|
|
||||||
|
|
||||||
## Installation
|
|
||||||
|
|
||||||
### Prerequisites
|
|
||||||
|
|
||||||
- Python 3.10 or higher
|
|
||||||
- Git repository (for branch detection)
|
|
||||||
- Access to Gitea instance with API token
|
|
||||||
|
|
||||||
-### Step 1: Install Dependencies
-
-```bash
-cd mcp-servers/gitea
-python3 -m venv .venv
-source .venv/bin/activate  # Linux/Mac
-# or .venv\Scripts\activate  # Windows
-pip install -r requirements.txt
-```
+```bash
+python3 -m venv .venv
+source .venv/bin/activate
+pip install -r requirements.txt
+```
-### Step 2: Configure System-Level Settings
-
-Create `~/.config/claude/gitea.env`:
-
-```bash
-mkdir -p ~/.config/claude
-
-cat > ~/.config/claude/gitea.env << EOF
-GITEA_API_URL=https://gitea.example.com/api/v1
-GITEA_API_TOKEN=your_gitea_token_here
-GITEA_OWNER=bandit
-EOF
-
-chmod 600 ~/.config/claude/gitea.env
-```
+Or use the marketplace setup script:
+
+```bash
+./scripts/setup-venvs.sh gitea
+```
### Step 3: Configure Project-Level Settings (Optional)
|
|
||||||
|
|
||||||
For project mode, create `.env` in your project root:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
echo "GITEA_REPO=your-repo-name" > .env
|
|
||||||
echo ".env" >> .gitignore
|
|
||||||
```
|
|
||||||
|
|
||||||
For company/PMO mode, omit the `.env` file or don't set `GITEA_REPO`.
|
|
||||||
|
|
||||||
-## Configuration
-
-### System-Level Configuration
-
-**File**: `~/.config/claude/gitea.env`
+## Configuration
+
+See `~/.config/claude/gitea.env` for system-level config (API URL, token).
+See project `.env` for project-level config (GITEA_ORG, GITEA_REPO).
+
+## Updating
**Required Variables**:
|
|
||||||
- `GITEA_API_URL` - Gitea API endpoint (e.g., `https://gitea.example.com/api/v1`)
|
|
||||||
- `GITEA_API_TOKEN` - Personal access token with repo permissions
|
|
||||||
- `GITEA_OWNER` - Organization or user name (e.g., `bandit`)
|
|
||||||
|
|
||||||
### Project-Level Configuration
|
|
||||||
|
|
||||||
**File**: `<project-root>/.env`
|
|
||||||
|
|
||||||
**Optional Variables**:
|
|
||||||
- `GITEA_REPO` - Repository name (enables project mode)
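The two layers merge with project values overriding system values. A minimal string-based sketch of that merge (the server actually loads these files with python-dotenv; `parse_env` and `load_hybrid` are illustrative names):

```python
def parse_env(text):
    # Minimal KEY=VALUE parser for .env-style content; ignores blanks and '#' comments
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

def load_hybrid(system_text, project_text):
    # System-level credentials first; project-level keys override on conflict
    config = parse_env(system_text)
    config.update(parse_env(project_text))
    return config
```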
|
|
||||||
|
|
||||||
### Generating Gitea API Token
|
|
||||||
|
|
||||||
1. Log into Gitea: https://gitea.example.com
|
|
||||||
2. Navigate to: **Settings** → **Applications** → **Manage Access Tokens**
|
|
||||||
3. Click **Generate New Token**
|
|
||||||
4. Configure token:
|
|
||||||
- **Token Name**: `claude-code-mcp`
|
|
||||||
- **Permissions**:
|
|
||||||
- ✅ `repo` (all) - Read/write repositories, issues, labels
|
|
||||||
- ✅ `read:org` - Read organization information and labels
|
|
||||||
- ✅ `read:user` - Read user information
|
|
||||||
5. Click **Generate Token**
|
|
||||||
6. Copy token immediately (shown only once)
|
|
||||||
7. Add to `~/.config/claude/gitea.env`
|
|
||||||
|
|
||||||
## Usage
|
|
||||||
|
|
||||||
### Running the MCP Server
|
|
||||||
|
|
||||||
-```bash
-cd mcp-servers/gitea
-source .venv/bin/activate
-python -m mcp_server.server
-```
+```bash
+source .venv/bin/activate
+pip install --upgrade gitea-mcp \
+  --extra-index-url https://gitea.hotserv.cloud/api/packages/personal-projects/pypi/simple
+```
-The server communicates via JSON-RPC 2.0 over stdio.
-
-### Integration with Claude Code Plugins
-
-The MCP server is designed to be used by Claude Code plugins via `.mcp.json` configuration:
-
-```json
-{
-  "mcpServers": {
-    "gitea": {
-      "command": "python",
-      "args": ["-m", "mcp_server.server"],
-      "cwd": "${CLAUDE_PLUGIN_ROOT}/../mcp-servers/gitea",
-      "env": {
-        "PYTHONPATH": "${CLAUDE_PLUGIN_ROOT}/../mcp-servers/gitea"
-      }
-    }
-  }
-}
-```
+## Features
+
+The `gitea-mcp` package provides MCP tools for:
+
+- **Issues**: CRUD, comments, dependencies, execution order
+- **Labels**: Get, suggest, create (org + repo level)
+- **Milestones**: CRUD operations
+- **Pull Requests**: List, get, diff, comments, reviews, create
+- **Wiki**: Pages, lessons learned, RFC allocation
+- **Validation**: Repository org check, branch protection
|
|
||||||
|
|
||||||
### Example Tool Calls
|
|
||||||
|
|
||||||
**List Issues**:
|
|
||||||
```python
|
|
||||||
from mcp_server.tools.issues import IssueTools
|
|
||||||
from mcp_server.gitea_client import GiteaClient
|
|
||||||
|
|
||||||
client = GiteaClient()
|
|
||||||
issue_tools = IssueTools(client)
|
|
||||||
|
|
||||||
issues = await issue_tools.list_issues(state='open', labels=['Type/Bug'])
|
|
||||||
```
|
|
||||||
|
|
||||||
**Suggest Labels**:
|
|
||||||
```python
|
|
||||||
from mcp_server.tools.labels import LabelTools
|
|
||||||
|
|
||||||
label_tools = LabelTools(client)
|
|
||||||
|
|
||||||
context = "Fix critical authentication bug in production API"
|
|
||||||
suggestions = await label_tools.suggest_labels(context)
|
|
||||||
# Returns: ['Type/Bug', 'Priority/Critical', 'Component/Auth', 'Component/API', ...]
|
|
||||||
```
|
|
||||||
|
|
||||||
## Testing
|
|
||||||
|
|
||||||
### Unit Tests
|
|
||||||
|
|
||||||
Run all 42 unit tests with mocks:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
pytest tests/ -v
|
|
||||||
```
|
|
||||||
|
|
||||||
Expected: `42 passed in 0.57s`
|
|
||||||
|
|
||||||
### Integration Tests
|
|
||||||
|
|
||||||
Test with real Gitea instance:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
python -c "
|
|
||||||
from mcp_server.gitea_client import GiteaClient
|
|
||||||
|
|
||||||
client = GiteaClient()
|
|
||||||
issues = client.list_issues(state='open')
|
|
||||||
print(f'Found {len(issues)} open issues')
|
|
||||||
"
|
|
||||||
```
|
|
||||||
|
|
||||||
### Full Testing Guide
|
|
||||||
|
|
||||||
See [TESTING.md](./TESTING.md) for comprehensive testing instructions.
|
|
||||||
|
|
||||||
## Label Taxonomy System
|
|
||||||
|
|
||||||
The system supports a dynamic 44-label taxonomy (28 org + 16 repo):
|
|
||||||
|
|
||||||
**Organization Labels (28)**:
|
|
||||||
- `Agent/*` (2) - Agent/Human, Agent/Claude
|
|
||||||
- `Complexity/*` (3) - Simple, Medium, Complex
|
|
||||||
- `Efforts/*` (5) - XS, S, M, L, XL
|
|
||||||
- `Priority/*` (4) - Low, Medium, High, Critical
|
|
||||||
- `Risk/*` (3) - Low, Medium, High
|
|
||||||
- `Source/*` (4) - Development, Staging, Production, Customer
|
|
||||||
- `Type/*` (6) - Bug, Feature, Refactor, Documentation, Test, Chore
|
|
||||||
|
|
||||||
**Repository Labels (16)**:
|
|
||||||
- `Component/*` (9) - Backend, Frontend, API, Database, Auth, Deploy, Testing, Docs, Infra
|
|
||||||
- `Tech/*` (7) - Python, JavaScript, Docker, PostgreSQL, Redis, Vue, FastAPI
|
|
||||||
|
|
||||||
Labels are fetched dynamically from Gitea and suggestions adapt to the current taxonomy.
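As an illustration, adaptive suggestion can be as simple as keyword matching filtered through the live taxonomy. This is a hypothetical sketch (the keyword map and function shape are assumptions; the real matcher lives in `tools/labels.py`):

```python
# Hypothetical keyword -> label map for illustration only
KEYWORDS = {
    "fix": "Type/Bug",
    "bug": "Type/Bug",
    "critical": "Priority/Critical",
    "auth": "Component/Auth",
    "api": "Component/API",
}

def suggest_labels(context, taxonomy):
    words = context.lower().split()
    hits = {label for word in words
            for keyword, label in KEYWORDS.items() if keyword in word}
    # Only suggest labels that exist in the taxonomy fetched from Gitea
    return sorted(hits & set(taxonomy))
```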
|
|
||||||
|
|
||||||
## Security
|
|
||||||
|
|
||||||
### Token Storage
|
|
||||||
|
|
||||||
- Store tokens in `~/.config/claude/gitea.env`
|
|
||||||
- Set file permissions to `600` (read/write owner only)
|
|
||||||
- Never commit tokens to Git
|
|
||||||
- Use separate tokens for development and production
|
|
||||||
|
|
||||||
### Branch Detection
|
|
||||||
|
|
||||||
The MCP server implements defense-in-depth branch detection:
|
|
||||||
|
|
||||||
1. **MCP Tools**: Check branch before operations
|
|
||||||
2. **Agent Prompts**: Warn users about branch restrictions
|
|
||||||
3. **CLAUDE.md**: Provides additional context
|
|
||||||
|
|
||||||
### Input Validation
|
|
||||||
|
|
||||||
- All user input is validated before API calls
|
|
||||||
- Issue titles and descriptions are sanitized
|
|
||||||
- Label names are checked against taxonomy
|
|
||||||
- Repository names are validated
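Repository-name validation, for example, can be a strict allowlist pattern. A sketch under the assumption that names use letters, digits, `.`, `-`, and `_` (the exact rule the server applies is not shown here):

```python
import re

# Assumed pattern: must start with a letter or digit, then word chars, '.', '-' or '_'
REPO_NAME = re.compile(r"^[A-Za-z0-9][A-Za-z0-9._-]*$")

def validate_repo_name(name: str) -> str:
    # Rejects path traversal, slashes, spaces, and other unexpected characters
    if not REPO_NAME.fullmatch(name):
        raise ValueError(f"Invalid repository name: {name!r}")
    return name
```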
|
|
||||||
|
|
||||||
## Troubleshooting
|
|
||||||
|
|
||||||
### Common Issues
|
|
||||||
|
|
||||||
**Module not found**:
|
|
||||||
```bash
|
|
||||||
cd mcp-servers/gitea
|
|
||||||
source .venv/bin/activate
|
|
||||||
```
|
|
||||||
|
|
||||||
**Configuration not found**:
|
|
||||||
```bash
|
|
||||||
ls -la ~/.config/claude/gitea.env
|
|
||||||
# If missing, create it following installation steps
|
|
||||||
```
|
|
||||||
|
|
||||||
**Authentication failed**:
|
|
||||||
```bash
|
|
||||||
# Test token manually
|
|
||||||
curl -H "Authorization: token YOUR_TOKEN" \
|
|
||||||
https://gitea.example.com/api/v1/user
|
|
||||||
```
|
|
||||||
|
|
||||||
**Permission denied on branch**:
|
|
||||||
```bash
|
|
||||||
# Check current branch
|
|
||||||
git branch --show-current
|
|
||||||
|
|
||||||
# Switch to development branch
|
|
||||||
git checkout development
|
|
||||||
```
|
|
||||||
|
|
||||||
See [TESTING.md](./TESTING.md#troubleshooting) for more details.
|
|
||||||
|
|
||||||
## Development
|
|
||||||
|
|
||||||
### Project Structure
|
|
||||||
|
|
||||||
- `config.py` - Hybrid configuration loader with mode detection
|
|
||||||
- `gitea_client.py` - Synchronous Gitea API client using requests
|
|
||||||
- `tools/issues.py` - Async wrappers with branch detection
|
|
||||||
- `tools/labels.py` - Label management and suggestion
|
|
||||||
- `server.py` - MCP server with JSON-RPC 2.0 over stdio
|
|
||||||
|
|
||||||
### Adding New Tools
|
|
||||||
|
|
||||||
1. Add method to `GiteaClient` (sync)
|
|
||||||
2. Add async wrapper to appropriate tool class
|
|
||||||
3. Register tool in `server.py` `setup_tools()`
|
|
||||||
4. Add unit tests
|
|
||||||
5. Update documentation
|
|
||||||
|
|
||||||
### Testing Philosophy
|
|
||||||
|
|
||||||
- **Unit tests**: Use mocks for fast feedback
|
|
||||||
- **Integration tests**: Use real Gitea API for validation
|
|
||||||
- **Branch detection**: Test all branch types
|
|
||||||
- **Mode detection**: Test both project and company modes
|
|
||||||
|
|
||||||
## Performance
|
|
||||||
|
|
||||||
### Caching
|
|
||||||
|
|
||||||
Labels are cached to reduce API calls:
|
|
||||||
|
|
||||||
```python
|
|
||||||
from functools import lru_cache
|
|
||||||
|
|
||||||
@lru_cache(maxsize=128)
|
|
||||||
def get_labels_cached(self, repo: str):
|
|
||||||
return self.get_labels(repo)
|
|
||||||
```
|
|
||||||
|
|
||||||
### Retry Logic
|
|
||||||
|
|
||||||
API calls include automatic retry with exponential backoff:
|
|
||||||
|
|
||||||
```python
|
|
||||||
@retry_on_failure(max_retries=3, delay=1)
|
|
||||||
def list_issues(self, state='open', labels=None, repo=None):
|
|
||||||
# Implementation
|
|
||||||
```
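The `retry_on_failure` decorator itself is not shown above; a minimal sketch consistent with its signature and the exponential-backoff description:

```python
import functools
import time

def retry_on_failure(max_retries=3, delay=1):
    # Retry the wrapped call, doubling the wait after each failure (delay, 2*delay, ...)
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == max_retries - 1:
                        raise  # out of retries: surface the original error
                    time.sleep(delay * (2 ** attempt))
        return wrapper
    return decorator
```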
|
|
||||||
|
|
||||||
## Changelog
|
|
||||||
|
|
||||||
### v1.0.0 (2025-01-06) - Phase 1 Complete
|
|
||||||
|
|
||||||
✅ Initial implementation:
|
|
||||||
- Configuration management (hybrid system + project)
|
|
||||||
- Gitea API client with all CRUD operations
|
|
||||||
- MCP server with 8 tools
|
|
||||||
- Issue tools with branch detection
|
|
||||||
- Label tools with intelligent suggestions
|
|
||||||
- Mode detection (project vs company)
|
|
||||||
- Branch-aware security model
|
|
||||||
- 42 unit tests (100% passing)
|
|
||||||
- Comprehensive documentation
|
|
||||||
|
|
||||||
## License
|
|
||||||
|
|
||||||
MIT License - Part of the Leo Claude Marketplace project.
|
|
||||||
|
|
||||||
## Related Documentation
|
|
||||||
|
|
||||||
- **Projman Documentation**: `plugins/projman/README.md`
|
|
||||||
- **Configuration Guide**: `plugins/projman/CONFIGURATION.md`
|
|
||||||
- **Testing Guide**: `TESTING.md`
|
|
||||||
|
|
||||||
## Support
|
|
||||||
|
|
||||||
For issues or questions:
|
|
||||||
1. Check [TESTING.md](./TESTING.md) troubleshooting section
|
|
||||||
2. Review [plugins/projman/README.md](../../README.md) for plugin documentation
|
|
||||||
3. Create an issue in the project repository
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Built for**: Leo Claude Marketplace - Project Management Plugins
|
|
||||||
**Phase**: 1 (Complete)
|
|
||||||
**Status**: ✅ Production Ready
|
|
||||||
**Last Updated**: 2025-01-06
|
|
||||||
|
|||||||
@@ -1,582 +0,0 @@
|
|||||||
# Gitea MCP Server - Testing Guide
|
|
||||||
|
|
||||||
This document provides comprehensive testing instructions for the Gitea MCP Server implementation.
|
|
||||||
|
|
||||||
## Table of Contents
|
|
||||||
|
|
||||||
1. [Unit Tests](#unit-tests)
|
|
||||||
2. [Manual MCP Server Testing](#manual-mcp-server-testing)
|
|
||||||
3. [Integration Testing](#integration-testing)
|
|
||||||
4. [Configuration Setup for Testing](#configuration-setup-for-testing)
|
|
||||||
5. [Troubleshooting](#troubleshooting)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Unit Tests
|
|
||||||
|
|
||||||
Unit tests use mocks to test all modules without requiring a real Gitea instance.
|
|
||||||
|
|
||||||
### Prerequisites
|
|
||||||
|
|
||||||
Ensure the virtual environment is activated and dependencies are installed:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
cd mcp-servers/gitea
|
|
||||||
source .venv/bin/activate # Linux/Mac
|
|
||||||
# or .venv\Scripts\activate # Windows
|
|
||||||
```
|
|
||||||
|
|
||||||
### Running All Tests
|
|
||||||
|
|
||||||
Run all 42 unit tests:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
pytest tests/ -v
|
|
||||||
```
|
|
||||||
|
|
||||||
Expected output:
|
|
||||||
```
|
|
||||||
============================== 42 passed in 0.57s ==============================
|
|
||||||
```
|
|
||||||
|
|
||||||
### Running Specific Test Files
|
|
||||||
|
|
||||||
Run tests for a specific module:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Configuration tests
|
|
||||||
pytest tests/test_config.py -v
|
|
||||||
|
|
||||||
# Gitea client tests
|
|
||||||
pytest tests/test_gitea_client.py -v
|
|
||||||
|
|
||||||
# Issue tools tests
|
|
||||||
pytest tests/test_issues.py -v
|
|
||||||
|
|
||||||
# Label tools tests
|
|
||||||
pytest tests/test_labels.py -v
|
|
||||||
```
|
|
||||||
|
|
||||||
### Running Specific Tests
|
|
||||||
|
|
||||||
Run a single test:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
pytest tests/test_config.py::test_load_system_config -v
|
|
||||||
```
|
|
||||||
|
|
||||||
### Test Coverage
|
|
||||||
|
|
||||||
Generate coverage report:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
pytest --cov=mcp_server --cov-report=html tests/
|
|
||||||
|
|
||||||
# View coverage report
|
|
||||||
# Open htmlcov/index.html in your browser
|
|
||||||
```
|
|
||||||
|
|
||||||
Expected coverage: >80% for all modules
|
|
||||||
|
|
||||||
### Test Organization
|
|
||||||
|
|
||||||
**Configuration Tests** (`test_config.py`):
|
|
||||||
- System-level configuration loading
|
|
||||||
- Project-level configuration override
|
|
||||||
- Mode detection (project vs company)
|
|
||||||
- Missing configuration handling
|
|
||||||
|
|
||||||
**Gitea Client Tests** (`test_gitea_client.py`):
|
|
||||||
- API client initialization
|
|
||||||
- Issue CRUD operations
|
|
||||||
- Label retrieval
|
|
||||||
- PMO multi-repo operations
|
|
||||||
|
|
||||||
**Issue Tools Tests** (`test_issues.py`):
|
|
||||||
- Branch-aware security checks
|
|
||||||
- Async wrappers for sync client
|
|
||||||
- Permission enforcement
|
|
||||||
- PMO aggregation mode
|
|
||||||
|
|
||||||
**Label Tools Tests** (`test_labels.py`):
|
|
||||||
- Label retrieval (org + repo)
|
|
||||||
- Intelligent label suggestion
|
|
||||||
- Multi-category detection
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Manual MCP Server Testing
|
|
||||||
|
|
||||||
Test the MCP server manually using stdio communication.
|
|
||||||
|
|
||||||
### Step 1: Start the MCP Server
|
|
||||||
|
|
||||||
```bash
|
|
||||||
cd mcp-servers/gitea
|
|
||||||
source .venv/bin/activate
|
|
||||||
python -m mcp_server.server
|
|
||||||
```
|
|
||||||
|
|
||||||
The server will start and wait for JSON-RPC 2.0 messages on stdin.
|
|
||||||
|
|
||||||
### Step 2: Test Tool Listing
|
|
||||||
|
|
||||||
In another terminal, send a tool listing request:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
echo '{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}' | python -m mcp_server.server
|
|
||||||
```
|
|
||||||
|
|
||||||
Expected response:
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"jsonrpc": "2.0",
|
|
||||||
"id": 1,
|
|
||||||
"result": {
|
|
||||||
"tools": [
|
|
||||||
{"name": "list_issues", "description": "List issues from Gitea repository", ...},
|
|
||||||
{"name": "get_issue", "description": "Get specific issue details", ...},
|
|
||||||
{"name": "create_issue", "description": "Create a new issue in Gitea", ...},
|
|
||||||
...
|
|
||||||
]
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 3: Test Tool Invocation
|
|
||||||
|
|
||||||
**Note:** Manual tool invocation requires proper configuration. See [Configuration Setup](#configuration-setup-for-testing).
|
|
||||||
|
|
||||||
Example: List issues
|
|
||||||
```bash
|
|
||||||
echo '{
|
|
||||||
"jsonrpc": "2.0",
|
|
||||||
"id": 2,
|
|
||||||
"method": "tools/call",
|
|
||||||
"params": {
|
|
||||||
"name": "list_issues",
|
|
||||||
"arguments": {
|
|
||||||
"state": "open"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}' | python -m mcp_server.server
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Integration Testing
|
|
||||||
|
|
||||||
Test the MCP server with a real Gitea instance.
|
|
||||||
|
|
||||||
### Prerequisites
|
|
||||||
|
|
||||||
1. **Gitea Instance**: Access to https://gitea.example.com (or your Gitea instance)
|
|
||||||
2. **API Token**: Personal access token with required permissions
|
|
||||||
3. **Configuration**: Properly configured system and project configs
|
|
||||||
|
|
||||||
### Step 1: Configuration Setup
|
|
||||||
|
|
||||||
Create system-level configuration:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
mkdir -p ~/.config/claude
|
|
||||||
|
|
||||||
cat > ~/.config/claude/gitea.env << EOF
|
|
||||||
GITEA_API_URL=https://gitea.example.com/api/v1
|
|
||||||
GITEA_API_TOKEN=your_gitea_token_here
|
|
||||||
GITEA_OWNER=bandit
|
|
||||||
EOF
|
|
||||||
|
|
||||||
chmod 600 ~/.config/claude/gitea.env
|
|
||||||
```
|
|
||||||
|
|
||||||
Create project-level configuration (for project mode testing):
|
|
||||||
|
|
||||||
```bash
|
|
||||||
cd /path/to/test/project
|
|
||||||
|
|
||||||
cat > .env << EOF
|
|
||||||
GITEA_REPO=test-repo
|
|
||||||
EOF
|
|
||||||
|
|
||||||
# Add to .gitignore
|
|
||||||
echo ".env" >> .gitignore
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 2: Generate Gitea API Token
|
|
||||||
|
|
||||||
1. Log into Gitea: https://gitea.example.com
|
|
||||||
2. Navigate to: **Settings** → **Applications** → **Manage Access Tokens**
|
|
||||||
3. Click **Generate New Token**
|
|
||||||
4. Token configuration:
|
|
||||||
- **Token Name:** `mcp-integration-test`
|
|
||||||
- **Required Permissions:**
|
|
||||||
- ✅ `repo` (all) - Read/write access to repositories, issues, labels
|
|
||||||
- ✅ `read:org` - Read organization information and labels
|
|
||||||
- ✅ `read:user` - Read user information
|
|
||||||
5. Click **Generate Token**
|
|
||||||
6. Copy the token immediately (shown only once)
|
|
||||||
7. Add to `~/.config/claude/gitea.env`
|
|
||||||
|
|
||||||
### Step 3: Verify Configuration
|
|
||||||
|
|
||||||
Test configuration loading:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
cd mcp-servers/gitea
|
|
||||||
source .venv/bin/activate
|
|
||||||
python -c "
|
|
||||||
from mcp_server.config import GiteaConfig
|
|
||||||
config = GiteaConfig()
|
|
||||||
result = config.load()
|
|
||||||
print(f'API URL: {result[\"api_url\"]}')
|
|
||||||
print(f'Owner: {result[\"owner\"]}')
|
|
||||||
print(f'Repo: {result[\"repo\"]}')
|
|
||||||
print(f'Mode: {result[\"mode\"]}')
|
|
||||||
"
|
|
||||||
```
|
|
||||||
|
|
||||||
Expected output:
|
|
||||||
```
|
|
||||||
API URL: https://gitea.example.com/api/v1
|
|
||||||
Owner: bandit
|
|
||||||
Repo: test-repo (or None for company mode)
|
|
||||||
Mode: project (or company)
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 4: Test Gitea Client
|
|
||||||
|
|
||||||
Test basic Gitea API operations:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
python -c "
|
|
||||||
from mcp_server.gitea_client import GiteaClient
|
|
||||||
|
|
||||||
client = GiteaClient()
|
|
||||||
|
|
||||||
# Test listing issues
|
|
||||||
print('Testing list_issues...')
|
|
||||||
issues = client.list_issues(state='open')
|
|
||||||
print(f'Found {len(issues)} open issues')
|
|
||||||
|
|
||||||
# Test getting labels
|
|
||||||
print('\\nTesting get_labels...')
|
|
||||||
labels = client.get_labels()
|
|
||||||
print(f'Found {len(labels)} repository labels')
|
|
||||||
|
|
||||||
# Test getting org labels
|
|
||||||
print('\\nTesting get_org_labels...')
|
|
||||||
org_labels = client.get_org_labels()
|
|
||||||
print(f'Found {len(org_labels)} organization labels')
|
|
||||||
|
|
||||||
print('\\n✅ All integration tests passed!')
|
|
||||||
"
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 5: Test Issue Creation (Optional)
|
|
||||||
|
|
||||||
**Warning:** This creates a real issue in Gitea. Use a test repository.
|
|
||||||
|
|
||||||
```bash
|
|
||||||
python -c "
|
|
||||||
from mcp_server.gitea_client import GiteaClient
|
|
||||||
|
|
||||||
client = GiteaClient()
|
|
||||||
|
|
||||||
# Create test issue
|
|
||||||
print('Creating test issue...')
|
|
||||||
issue = client.create_issue(
|
|
||||||
title='[TEST] MCP Server Integration Test',
|
|
||||||
body='This is a test issue created by the Gitea MCP Server integration tests.',
|
|
||||||
labels=['Type/Test']
|
|
||||||
)
|
|
||||||
print(f'Created issue #{issue[\"number\"]}: {issue[\"title\"]}')
|
|
||||||
|
|
||||||
# Clean up: Close the issue
|
|
||||||
print('\\nClosing test issue...')
|
|
||||||
client.update_issue(issue['number'], state='closed')
|
|
||||||
print('✅ Test issue closed')
|
|
||||||
"
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 6: Test MCP Server with Real API
|
|
||||||
|
|
||||||
Start the MCP server and test with real Gitea API:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
cd mcp-servers/gitea
|
|
||||||
source .venv/bin/activate
|
|
||||||
|
|
||||||
# Run server with test script
|
|
||||||
python << 'EOF'
|
|
||||||
import asyncio
|
|
||||||
import json
|
|
||||||
from mcp_server.server import GiteaMCPServer
|
|
||||||
|
|
||||||
async def test_server():
|
|
||||||
server = GiteaMCPServer()
|
|
||||||
await server.initialize()
|
|
||||||
|
|
||||||
# Test list_issues
|
|
||||||
result = await server.issue_tools.list_issues(state='open')
|
|
||||||
print(f'Found {len(result)} open issues')
|
|
||||||
|
|
||||||
# Test get_labels
|
|
||||||
labels = await server.label_tools.get_labels()
|
|
||||||
print(f'Found {labels["total_count"]} total labels')
|
|
||||||
|
|
||||||
# Test suggest_labels
|
|
||||||
suggestions = await server.label_tools.suggest_labels(
|
|
||||||
"Fix critical bug in authentication"
|
|
||||||
)
|
|
||||||
print(f'Suggested labels: {", ".join(suggestions)}')
|
|
||||||
|
|
||||||
print('✅ All MCP server integration tests passed!')
|
|
||||||
|
|
||||||
asyncio.run(test_server())
|
|
||||||
EOF
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 7: Test PMO Mode (Optional)
|
|
||||||
|
|
||||||
Test company-wide mode (no GITEA_REPO):
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Temporarily remove GITEA_REPO
|
|
||||||
unset GITEA_REPO
|
|
||||||
|
|
||||||
python -c "
|
|
||||||
from mcp_server.gitea_client import GiteaClient
|
|
||||||
|
|
||||||
client = GiteaClient()
|
|
||||||
|
|
||||||
print(f'Running in {client.mode} mode')
|
|
||||||
|
|
||||||
# Test list_repos
|
|
||||||
print('\\nTesting list_repos...')
|
|
||||||
repos = client.list_repos()
|
|
||||||
print(f'Found {len(repos)} repositories')
|
|
||||||
|
|
||||||
# Test aggregate_issues
|
|
||||||
print('\\nTesting aggregate_issues...')
|
|
||||||
aggregated = client.aggregate_issues(state='open')
|
|
||||||
for repo_name, issues in aggregated.items():
|
|
||||||
print(f' {repo_name}: {len(issues)} open issues')
|
|
||||||
|
|
||||||
print('\\n✅ PMO mode tests passed!')
|
|
||||||
"
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Configuration Setup for Testing
|
|
||||||
|
|
||||||
### Minimal Configuration
|
|
||||||
|
|
||||||
**System-level** (`~/.config/claude/gitea.env`):
|
|
||||||
```bash
|
|
||||||
GITEA_API_URL=https://gitea.example.com/api/v1
|
|
||||||
GITEA_API_TOKEN=your_token_here
|
|
||||||
GITEA_OWNER=bandit
|
|
||||||
```
|
|
||||||
|
|
||||||
**Project-level** (`.env` in project root):
|
|
||||||
```bash
|
|
||||||
# For project mode
|
|
||||||
GITEA_REPO=test-repo
|
|
||||||
|
|
||||||
# For company mode (PMO), omit GITEA_REPO
|
|
||||||
```
|
|
||||||
|
|
||||||
### Verification
|
|
||||||
|
|
||||||
Verify configuration is correct:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Check system config exists
|
|
||||||
ls -la ~/.config/claude/gitea.env
|
|
||||||
|
|
||||||
# Check permissions (should be 600)
|
|
||||||
stat -c "%a %n" ~/.config/claude/gitea.env
|
|
||||||
|
|
||||||
# Check content (without exposing token)
|
|
||||||
grep -v TOKEN ~/.config/claude/gitea.env
|
|
||||||
|
|
||||||
# Check project config (if using project mode)
|
|
||||||
cat .env
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Troubleshooting
|
|
||||||
|
|
||||||
### Common Issues
|
|
||||||
|
|
||||||
#### 1. Import Errors
|
|
||||||
|
|
||||||
**Error:**
|
|
||||||
```
|
|
||||||
ModuleNotFoundError: No module named 'mcp_server'
|
|
||||||
```
|
|
||||||
|
|
||||||
**Solution:**
|
|
||||||
```bash
|
|
||||||
# Ensure you're in the correct directory
|
|
||||||
cd mcp-servers/gitea
|
|
||||||
|
|
||||||
# Activate virtual environment
|
|
||||||
source .venv/bin/activate
|
|
||||||
|
|
||||||
# Verify installation
|
|
||||||
pip list | grep mcp
|
|
||||||
```
|
|
||||||
|
|
||||||
#### 2. Configuration Not Found
|
|
||||||
|
|
||||||
**Error:**
|
|
||||||
```
|
|
||||||
FileNotFoundError: System config not found: /home/user/.config/claude/gitea.env
|
|
||||||
```
|
|
||||||
|
|
||||||
**Solution:**
|
|
||||||
```bash
|
|
||||||
# Create system config
|
|
||||||
mkdir -p ~/.config/claude
|
|
||||||
cat > ~/.config/claude/gitea.env << EOF
|
|
||||||
GITEA_API_URL=https://gitea.example.com/api/v1
|
|
||||||
GITEA_API_TOKEN=your_token_here
|
|
||||||
GITEA_OWNER=bandit
|
|
||||||
EOF
|
|
||||||
|
|
||||||
chmod 600 ~/.config/claude/gitea.env
|
|
||||||
```
|
|
||||||
|
|
||||||
#### 3. Missing Required Configuration
|
|
||||||
|
|
||||||
**Error:**
|
|
||||||
```
|
|
||||||
ValueError: Missing required configuration: GITEA_API_TOKEN, GITEA_OWNER
|
|
||||||
```
|
|
||||||
|
|
||||||
**Solution:**
|
|
||||||
```bash
|
|
||||||
# Check configuration file
|
|
||||||
cat ~/.config/claude/gitea.env
|
|
||||||
|
|
||||||
# Ensure all required variables are present:
|
|
||||||
# - GITEA_API_URL
|
|
||||||
# - GITEA_API_TOKEN
|
|
||||||
# - GITEA_OWNER
|
|
||||||
```
|
|
||||||
|
|
||||||
#### 4. API Authentication Failed
|
|
||||||
|
|
||||||
**Error:**
|
|
||||||
```
|
|
||||||
requests.exceptions.HTTPError: 401 Client Error: Unauthorized
|
|
||||||
```
|
|
||||||
|
|
||||||
**Solution:**
|
|
||||||
```bash
|
|
||||||
# Test token manually
|
|
||||||
curl -H "Authorization: token YOUR_TOKEN" \
|
|
||||||
https://gitea.example.com/api/v1/user
|
|
||||||
|
|
||||||
# If fails, regenerate token in Gitea settings
|
|
||||||
```
|
|
||||||
|
|
||||||
#### 5. Permission Errors (Branch Detection)
|
|
||||||
|
|
||||||
**Error:**
|
|
||||||
```
|
|
||||||
PermissionError: Cannot create issues on branch 'main'
|
|
||||||
```
|
|
||||||
|
|
||||||
**Solution:**
|
|
||||||
```bash
|
|
||||||
# Check current branch
|
|
||||||
git branch --show-current
|
|
||||||
|
|
||||||
# Switch to development branch
|
|
||||||
git checkout development
|
|
||||||
# or
|
|
||||||
git checkout -b feat/test-feature
|
|
||||||
```
|
|
||||||
|
|
||||||
#### 6. Repository Not Specified
|
|
||||||
|
|
||||||
**Error:**
|
|
||||||
```
|
|
||||||
ValueError: Repository not specified
|
|
||||||
```
|
|
||||||
|
|
||||||
**Solution:**
|
|
||||||
```bash
|
|
||||||
# Add GITEA_REPO to project config
|
|
||||||
echo "GITEA_REPO=your-repo-name" >> .env
|
|
||||||
|
|
||||||
# Or specify repo in tool call
|
|
||||||
# (for PMO mode multi-repo operations)
|
|
||||||
```
|
|
||||||
|
|
||||||
### Debug Mode
|
|
||||||
|
|
||||||
Enable debug logging:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
export LOG_LEVEL=DEBUG
|
|
||||||
python -m mcp_server.server
|
|
||||||
```
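One way the `LOG_LEVEL` variable could be honored inside the server is sketched below; this mapping is an assumption for illustration, since the excerpted `config.py` simply calls `logging.basicConfig(level=logging.INFO)`:

```python
import logging
import os

# Map the LOG_LEVEL env var onto the logging module, defaulting to INFO
level_name = os.environ.get("LOG_LEVEL", "INFO").upper()
logging.basicConfig(level=getattr(logging, level_name, logging.INFO))
logger = logging.getLogger("mcp_server")
logger.debug("debug logging enabled")
```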

### Test Summary

After completing all tests, verify:

- ✅ All 42 unit tests pass
- ✅ MCP server starts without errors
- ✅ Configuration loads correctly
- ✅ Gitea API client connects successfully
- ✅ Issues can be listed from Gitea
- ✅ Labels can be retrieved
- ✅ Label suggestions work correctly
- ✅ Branch detection blocks writes on main/staging
- ✅ Mode detection works (project vs company)

---

## Success Criteria

Phase 1 is complete when:

1. **All unit tests pass** (42/42)
2. **MCP server starts without errors**
3. **Can list issues from Gitea**
4. **Can create issues with labels** (in development mode)
5. **Mode detection works** (project vs company)
6. **Branch detection prevents writes on main/staging**
7. **Configuration properly merges** system + project levels

---

## Next Steps

After completing testing:

1. **Document any issues** found during testing
2. **Create integration with projman plugin** (Phase 2)
3. **Test in real project workflow** (Phase 5)
4. **Performance optimization** (if needed)
5. **Production hardening** (Phase 8)

---

## Additional Resources

- **MCP Documentation**: https://docs.anthropic.com/claude/docs/mcp
- **Gitea API Documentation**: https://docs.gitea.io/en-us/api-usage/
- **Projman Documentation**: `plugins/projman/README.md`
- **Configuration Guide**: `plugins/projman/CONFIGURATION.md`

---

**Last Updated**: 2025-01-06 (Phase 1 Implementation)
@@ -1,227 +0,0 @@
"""
Configuration loader for Gitea MCP Server.

Implements hybrid configuration system:
- System-level: ~/.config/claude/gitea.env (credentials)
- Project-level: .env (repository specification)
- Auto-detection: Falls back to git remote URL parsing
"""
from pathlib import Path
from dotenv import load_dotenv
import os
import re
import subprocess
import logging
from typing import Dict, Optional

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


class GiteaConfig:
    """Hybrid configuration loader with mode detection"""

    def __init__(self):
        self.api_url: Optional[str] = None
        self.api_token: Optional[str] = None
        self.repo: Optional[str] = None
        self.mode: str = 'project'

    def load(self) -> Dict[str, Optional[str]]:
        """
        Load configuration from system and project levels.
        Project-level configuration overrides system-level.

        Returns:
            Dict containing api_url, api_token, repo, mode

        Raises:
            FileNotFoundError: If system config is missing
            ValueError: If required configuration is missing
        """
        # Load system config
        system_config = Path.home() / '.config' / 'claude' / 'gitea.env'
        if system_config.exists():
            load_dotenv(system_config)
            logger.info(f"Loaded system configuration from {system_config}")
        else:
            raise FileNotFoundError(
                f"System config not found: {system_config}\n"
                "Create it with: mkdir -p ~/.config/claude && "
                "cat > ~/.config/claude/gitea.env"
            )

        # Find project directory (MCP server cwd is plugin dir, not project dir)
        project_dir = self._find_project_directory()

        # Load project config (overrides system)
        if project_dir:
            project_config = project_dir / '.env'
            if project_config.exists():
                load_dotenv(project_config, override=True)
                logger.info(f"Loaded project configuration from {project_config}")

        # Extract values
        self.api_url = os.getenv('GITEA_API_URL')
        self.api_token = os.getenv('GITEA_API_TOKEN')
        self.repo = os.getenv('GITEA_REPO')  # Optional, must be owner/repo format

        # Auto-detect repo from git remote if not specified
        if not self.repo and project_dir:
            self.repo = self._detect_repo_from_git(project_dir)
            if self.repo:
                logger.info(f"Auto-detected repository from git remote: {self.repo}")

        # Detect mode
        if self.repo:
            self.mode = 'project'
            logger.info(f"Running in project mode: {self.repo}")
        else:
            self.mode = 'company'
            logger.info("Running in company-wide mode (PMO)")

        # Validate required variables
        self._validate()

        return {
            'api_url': self.api_url,
            'api_token': self.api_token,
            'repo': self.repo,
            'mode': self.mode
        }

    def _validate(self) -> None:
        """
        Validate that required configuration is present.

        Raises:
            ValueError: If required configuration is missing
        """
        required = {
            'GITEA_API_URL': self.api_url,
            'GITEA_API_TOKEN': self.api_token
        }

        missing = [key for key, value in required.items() if not value]

        if missing:
            raise ValueError(
                f"Missing required configuration: {', '.join(missing)}\n"
                "Check your ~/.config/claude/gitea.env file"
            )

    def _find_project_directory(self) -> Optional[Path]:
        """
        Find the user's project directory.

        The MCP server runs with cwd set to the plugin directory, not the
        user's project. We need to find the actual project directory using
        various heuristics.

        Returns:
            Path to project directory, or None if not found
        """
        # Strategy 1: Check CLAUDE_PROJECT_DIR environment variable
        project_dir = os.getenv('CLAUDE_PROJECT_DIR')
        if project_dir:
            path = Path(project_dir)
            if path.exists():
                logger.info(f"Found project directory from CLAUDE_PROJECT_DIR: {path}")
                return path

        # Strategy 2: Check PWD (original working directory before cwd override)
        pwd = os.getenv('PWD')
        if pwd:
            path = Path(pwd)
            # Verify it has .git or .env (indicates a project)
            if path.exists() and ((path / '.git').exists() or (path / '.env').exists()):
                logger.info(f"Found project directory from PWD: {path}")
                return path

        # Strategy 3: Check current working directory
        # This handles test scenarios and cases where cwd is actually the project
        cwd = Path.cwd()
        if (cwd / '.git').exists() or (cwd / '.env').exists():
            logger.info(f"Found project directory from cwd: {cwd}")
            return cwd

        # Strategy 4: Check if GITEA_REPO is already set (user configured it)
        # If so, we don't need to find the project directory for git detection
        if os.getenv('GITEA_REPO'):
            logger.debug("GITEA_REPO already set, skipping project directory detection")
            return None

        logger.debug("Could not determine project directory")
        return None

    def _detect_repo_from_git(self, project_dir: Optional[Path] = None) -> Optional[str]:
        """
        Auto-detect repository from git remote origin URL.

        Args:
            project_dir: Directory to run git command from (defaults to cwd)

        Supports URL formats:
        - SSH: ssh://git@host:port/owner/repo.git
        - SSH short: git@host:owner/repo.git
        - HTTPS: https://host/owner/repo.git
        - HTTP: http://host/owner/repo.git

        Returns:
            Repository in 'owner/repo' format, or None if detection fails
        """
        try:
            result = subprocess.run(
                ['git', 'remote', 'get-url', 'origin'],
                capture_output=True,
                text=True,
                timeout=5,
                cwd=str(project_dir) if project_dir else None
            )
            if result.returncode != 0:
                logger.debug("No git remote 'origin' found")
                return None

            url = result.stdout.strip()
            return self._parse_git_url(url)

        except subprocess.TimeoutExpired:
            logger.warning("Git command timed out")
            return None
        except FileNotFoundError:
            logger.debug("Git not available")
            return None
        except Exception as e:
            logger.debug(f"Failed to detect repo from git: {e}")
            return None

    def _parse_git_url(self, url: str) -> Optional[str]:
        """
        Parse git URL to extract owner/repo.

        Args:
            url: Git remote URL

        Returns:
            Repository in 'owner/repo' format, or None if parsing fails
        """
        # Remove .git suffix if present
        url = re.sub(r'\.git$', '', url)

        # SSH format: ssh://git@host:port/owner/repo
        ssh_match = re.match(r'ssh://[^/]+/(.+/.+)$', url)
        if ssh_match:
            return ssh_match.group(1)

        # SSH short format: git@host:owner/repo
        ssh_short_match = re.match(r'git@[^:]+:(.+/.+)$', url)
        if ssh_short_match:
            return ssh_short_match.group(1)

        # HTTPS/HTTP format: https://host/owner/repo
        http_match = re.match(r'https?://[^/]+/(.+/.+)$', url)
        if http_match:
            return http_match.group(1)

        logger.warning(f"Could not parse git URL: {url}")
        return None
@@ -1,789 +0,0 @@
"""
Gitea API client for interacting with Gitea API.

Provides synchronous methods for:
- Issue CRUD operations
- Label management
- Repository operations
- PMO multi-repo aggregation
- Wiki operations (lessons learned)
- Milestone management
- Issue dependencies
"""
import requests
import logging
import re
from typing import List, Dict, Optional
from .config import GiteaConfig

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


class GiteaClient:
    """Client for interacting with Gitea API"""

    def __init__(self):
        """Initialize Gitea client with configuration"""
        config = GiteaConfig()
        config_dict = config.load()

        self.base_url = config_dict['api_url']
        self.token = config_dict['api_token']
        self.repo = config_dict.get('repo')  # Optional default repo in owner/repo format
        self.mode = config_dict['mode']

        self.session = requests.Session()
        self.session.headers.update({
            'Authorization': f'token {self.token}',
            'Content-Type': 'application/json'
        })

        logger.info(f"Gitea client initialized in {self.mode} mode")

    def _parse_repo(self, repo: Optional[str] = None) -> tuple:
        """Parse owner/repo from input. Always requires 'owner/repo' format."""
        target = repo or self.repo
        if not target or '/' not in target:
            raise ValueError("Use 'owner/repo' format (e.g. 'org/repo-name')")
        parts = target.split('/', 1)
        return parts[0], parts[1]

    def list_issues(
        self,
        state: str = 'open',
        labels: Optional[List[str]] = None,
        repo: Optional[str] = None
    ) -> List[Dict]:
        """
        List issues from Gitea repository.

        Args:
            state: Issue state (open, closed, all)
            labels: Filter by labels
            repo: Repository in 'owner/repo' format

        Returns:
            List of issue dictionaries
        """
        owner, target_repo = self._parse_repo(repo)
        url = f"{self.base_url}/repos/{owner}/{target_repo}/issues"
        params = {'state': state}
        if labels:
            params['labels'] = ','.join(labels)
        logger.info(f"Listing issues from {owner}/{target_repo} with state={state}")
        response = self.session.get(url, params=params)
        response.raise_for_status()
        return response.json()

    def get_issue(
        self,
        issue_number: int,
        repo: Optional[str] = None
    ) -> Dict:
        """Get specific issue details."""
        owner, target_repo = self._parse_repo(repo)
        url = f"{self.base_url}/repos/{owner}/{target_repo}/issues/{issue_number}"
        logger.info(f"Getting issue #{issue_number} from {owner}/{target_repo}")
        response = self.session.get(url)
        response.raise_for_status()
        return response.json()

    def create_issue(
        self,
        title: str,
        body: str,
        labels: Optional[List[str]] = None,
        repo: Optional[str] = None
    ) -> Dict:
        """Create a new issue in Gitea."""
        owner, target_repo = self._parse_repo(repo)
        url = f"{self.base_url}/repos/{owner}/{target_repo}/issues"
        data = {'title': title, 'body': body}
        if labels:
            label_ids = self._resolve_label_ids(labels, owner, target_repo)
            data['labels'] = label_ids
        logger.info(f"Creating issue in {owner}/{target_repo}: {title}")
        response = self.session.post(url, json=data)
        response.raise_for_status()
        return response.json()

    def _resolve_label_ids(self, label_names: List[str], owner: str, repo: str) -> List[int]:
        """Convert label names to label IDs."""
        full_repo = f"{owner}/{repo}"

        # Only fetch org labels if repo belongs to an organization
        org_labels = []
        if self.is_org_repo(full_repo):
            org_labels = self.get_org_labels(owner)

        repo_labels = self.get_labels(full_repo)
        all_labels = org_labels + repo_labels
        label_map = {label['name']: label['id'] for label in all_labels}
        label_ids = []
        for name in label_names:
            if name in label_map:
                label_ids.append(label_map[name])
            else:
                logger.warning(f"Label '{name}' not found, skipping")
        return label_ids

    def update_issue(
        self,
        issue_number: int,
        title: Optional[str] = None,
        body: Optional[str] = None,
        state: Optional[str] = None,
        labels: Optional[List[str]] = None,
        repo: Optional[str] = None
    ) -> Dict:
        """Update existing issue. Repo must be 'owner/repo' format."""
        owner, target_repo = self._parse_repo(repo)
        url = f"{self.base_url}/repos/{owner}/{target_repo}/issues/{issue_number}"
        data = {}
        if title is not None:
            data['title'] = title
        if body is not None:
            data['body'] = body
        if state is not None:
            data['state'] = state
        if labels is not None:
            data['labels'] = labels
        logger.info(f"Updating issue #{issue_number} in {owner}/{target_repo}")
        response = self.session.patch(url, json=data)
        response.raise_for_status()
        return response.json()

    def add_comment(
        self,
        issue_number: int,
        comment: str,
        repo: Optional[str] = None
    ) -> Dict:
        """Add comment to issue. Repo must be 'owner/repo' format."""
        owner, target_repo = self._parse_repo(repo)
        url = f"{self.base_url}/repos/{owner}/{target_repo}/issues/{issue_number}/comments"
        data = {'body': comment}
        logger.info(f"Adding comment to issue #{issue_number} in {owner}/{target_repo}")
        response = self.session.post(url, json=data)
        response.raise_for_status()
        return response.json()

    def get_labels(self, repo: Optional[str] = None) -> List[Dict]:
        """Get all labels from repository. Repo must be 'owner/repo' format."""
        owner, target_repo = self._parse_repo(repo)
        url = f"{self.base_url}/repos/{owner}/{target_repo}/labels"
        logger.info(f"Getting labels from {owner}/{target_repo}")
        response = self.session.get(url)
        response.raise_for_status()
        return response.json()

    def get_org_labels(self, org: str) -> List[Dict]:
        """Get organization-level labels. Org is the organization name."""
        url = f"{self.base_url}/orgs/{org}/labels"
        logger.info(f"Getting organization labels for {org}")
        response = self.session.get(url)
        response.raise_for_status()
        return response.json()

    def list_repos(self, org: str) -> List[Dict]:
        """List all repositories in organization. Org is the organization name."""
        url = f"{self.base_url}/orgs/{org}/repos"
        logger.info(f"Listing all repositories for organization {org}")
        response = self.session.get(url)
        response.raise_for_status()
        return response.json()

    def aggregate_issues(
        self,
        org: str,
        state: str = 'open',
        labels: Optional[List[str]] = None
    ) -> Dict[str, List[Dict]]:
        """Fetch issues across all repositories in org."""
        repos = self.list_repos(org)
        aggregated = {}
        logger.info(f"Aggregating issues across {len(repos)} repositories")
        for repo in repos:
            repo_name = repo['name']
            try:
                issues = self.list_issues(
                    state=state,
                    labels=labels,
                    repo=f"{org}/{repo_name}"
                )
                if issues:
                    aggregated[repo_name] = issues
                    logger.info(f"Found {len(issues)} issues in {repo_name}")
            except Exception as e:
                logger.error(f"Error fetching issues from {repo_name}: {e}")

        return aggregated

    # ========================================
    # WIKI OPERATIONS (Lessons Learned)
    # ========================================

    def list_wiki_pages(self, repo: Optional[str] = None) -> List[Dict]:
        """List all wiki pages in repository."""
        owner, target_repo = self._parse_repo(repo)
        url = f"{self.base_url}/repos/{owner}/{target_repo}/wiki/pages"
        logger.info(f"Listing wiki pages from {owner}/{target_repo}")
        response = self.session.get(url)
        response.raise_for_status()
        return response.json()

    def get_wiki_page(
        self,
        page_name: str,
        repo: Optional[str] = None
    ) -> Dict:
        """Get a specific wiki page by name."""
        from urllib.parse import quote
        owner, target_repo = self._parse_repo(repo)
        # URL-encode the page_name to handle special characters like ':'
        encoded_page_name = quote(page_name, safe='')
        url = f"{self.base_url}/repos/{owner}/{target_repo}/wiki/page/{encoded_page_name}"
        logger.info(f"Getting wiki page '{page_name}' from {owner}/{target_repo}")
        response = self.session.get(url)
        response.raise_for_status()
        return response.json()

    def create_wiki_page(
        self,
        title: str,
        content: str,
        repo: Optional[str] = None
    ) -> Dict:
        """Create a new wiki page."""
        owner, target_repo = self._parse_repo(repo)
        url = f"{self.base_url}/repos/{owner}/{target_repo}/wiki/new"
        data = {
            'title': title,
            'content_base64': self._encode_base64(content)
        }
        logger.info(f"Creating wiki page '{title}' in {owner}/{target_repo}")
        response = self.session.post(url, json=data)
        response.raise_for_status()
        return response.json()

    def update_wiki_page(
        self,
        page_name: str,
        content: str,
        repo: Optional[str] = None
    ) -> Dict:
        """Update an existing wiki page."""
        from urllib.parse import quote
        owner, target_repo = self._parse_repo(repo)
        # URL-encode the page_name to handle special characters like ':'
        encoded_page_name = quote(page_name, safe='')
        url = f"{self.base_url}/repos/{owner}/{target_repo}/wiki/page/{encoded_page_name}"
        data = {
            'title': page_name,  # CRITICAL: include title to preserve page name
            'content_base64': self._encode_base64(content)
        }
        logger.info(f"Updating wiki page '{page_name}' in {owner}/{target_repo}")
        response = self.session.patch(url, json=data)
        response.raise_for_status()
        return response.json()

    def delete_wiki_page(
        self,
        page_name: str,
        repo: Optional[str] = None
    ) -> bool:
        """Delete a wiki page."""
        from urllib.parse import quote
        owner, target_repo = self._parse_repo(repo)
        # URL-encode the page_name to handle special characters like ':'
        encoded_page_name = quote(page_name, safe='')
        url = f"{self.base_url}/repos/{owner}/{target_repo}/wiki/page/{encoded_page_name}"
        logger.info(f"Deleting wiki page '{page_name}' from {owner}/{target_repo}")
        response = self.session.delete(url)
        response.raise_for_status()
        return True

    def _encode_base64(self, content: str) -> str:
        """Encode content to base64 for wiki API."""
        import base64
        return base64.b64encode(content.encode('utf-8')).decode('utf-8')

    def _decode_base64(self, content: str) -> str:
        """Decode base64 content from wiki API."""
        import base64
        return base64.b64decode(content.encode('utf-8')).decode('utf-8')

    def search_wiki_pages(
        self,
        query: str,
        repo: Optional[str] = None
    ) -> List[Dict]:
        """Search wiki pages by title (client-side filtering)."""
        pages = self.list_wiki_pages(repo)
        results = []
        query_lower = query.lower()
        for page in pages:
            if query_lower in page.get('title', '').lower():
                results.append(page)
        return results

    def create_lesson(
        self,
        title: str,
        content: str,
        tags: List[str],
        category: str = "sprints",
        repo: Optional[str] = None
    ) -> Dict:
        """Create a lessons learned entry in the wiki."""
        # Sanitize title for wiki page name
        page_name = f"lessons/{category}/{self._sanitize_page_name(title)}"

        # Add tags as metadata at the end of content
        full_content = f"{content}\n\n---\n**Tags:** {', '.join(tags)}"

        return self.create_wiki_page(page_name, full_content, repo)

    def search_lessons(
        self,
        query: Optional[str] = None,
        tags: Optional[List[str]] = None,
        repo: Optional[str] = None
    ) -> List[Dict]:
        """Search lessons learned by query and/or tags."""
        pages = self.list_wiki_pages(repo)
        results = []

        for page in pages:
            title = page.get('title', '')
            # Filter to only lessons (pages starting with lessons/)
            if not title.startswith('lessons/'):
                continue

            # If query provided, check if it matches title
            if query:
                if query.lower() not in title.lower():
                    continue

            # Get full page content for tag matching if tags provided
            if tags:
                try:
                    full_page = self.get_wiki_page(title, repo)
                    content = self._decode_base64(full_page.get('content_base64', ''))
                    # Check if any tag is in the content
                    if not any(tag.lower() in content.lower() for tag in tags):
                        continue
                except Exception:
                    continue

            results.append(page)

        return results

    def _sanitize_page_name(self, title: str) -> str:
        """Convert title to valid wiki page name."""
        # Replace spaces with hyphens, remove special chars
        name = re.sub(r'[^\w\s-]', '', title)
        name = re.sub(r'[\s]+', '-', name)
        return name.lower()

    # ========================================
    # MILESTONE OPERATIONS
    # ========================================

    def list_milestones(
        self,
        state: str = 'open',
        repo: Optional[str] = None
    ) -> List[Dict]:
        """List all milestones in repository."""
        owner, target_repo = self._parse_repo(repo)
        url = f"{self.base_url}/repos/{owner}/{target_repo}/milestones"
        params = {'state': state}
        logger.info(f"Listing milestones from {owner}/{target_repo}")
        response = self.session.get(url, params=params)
        response.raise_for_status()
        return response.json()

    def get_milestone(
        self,
        milestone_id: int,
        repo: Optional[str] = None
    ) -> Dict:
        """Get a specific milestone by ID."""
        owner, target_repo = self._parse_repo(repo)
        url = f"{self.base_url}/repos/{owner}/{target_repo}/milestones/{milestone_id}"
        logger.info(f"Getting milestone #{milestone_id} from {owner}/{target_repo}")
        response = self.session.get(url)
        response.raise_for_status()
        return response.json()

    def create_milestone(
        self,
        title: str,
        description: Optional[str] = None,
        due_on: Optional[str] = None,
        repo: Optional[str] = None
    ) -> Dict:
        """Create a new milestone."""
        owner, target_repo = self._parse_repo(repo)
        url = f"{self.base_url}/repos/{owner}/{target_repo}/milestones"
        data = {'title': title}
        if description:
            data['description'] = description
        if due_on:
            data['due_on'] = due_on
        logger.info(f"Creating milestone '{title}' in {owner}/{target_repo}")
        response = self.session.post(url, json=data)
        response.raise_for_status()
        return response.json()

    def update_milestone(
        self,
        milestone_id: int,
        title: Optional[str] = None,
        description: Optional[str] = None,
        state: Optional[str] = None,
        due_on: Optional[str] = None,
        repo: Optional[str] = None
    ) -> Dict:
        """Update an existing milestone."""
        owner, target_repo = self._parse_repo(repo)
        url = f"{self.base_url}/repos/{owner}/{target_repo}/milestones/{milestone_id}"
        data = {}
        if title is not None:
            data['title'] = title
        if description is not None:
            data['description'] = description
        if state is not None:
            data['state'] = state
        if due_on is not None:
            data['due_on'] = due_on
        logger.info(f"Updating milestone #{milestone_id} in {owner}/{target_repo}")
        response = self.session.patch(url, json=data)
        response.raise_for_status()
        return response.json()

    def delete_milestone(
        self,
        milestone_id: int,
        repo: Optional[str] = None
    ) -> bool:
        """Delete a milestone."""
        owner, target_repo = self._parse_repo(repo)
        url = f"{self.base_url}/repos/{owner}/{target_repo}/milestones/{milestone_id}"
        logger.info(f"Deleting milestone #{milestone_id} from {owner}/{target_repo}")
        response = self.session.delete(url)
        response.raise_for_status()
        return True

    # ========================================
    # ISSUE DEPENDENCY OPERATIONS
    # ========================================

    def list_issue_dependencies(
        self,
        issue_number: int,
        repo: Optional[str] = None
    ) -> List[Dict]:
        """List all dependencies for an issue (issues that block this one)."""
        owner, target_repo = self._parse_repo(repo)
        url = f"{self.base_url}/repos/{owner}/{target_repo}/issues/{issue_number}/dependencies"
        logger.info(f"Listing dependencies for issue #{issue_number} in {owner}/{target_repo}")
        response = self.session.get(url)
        response.raise_for_status()
        return response.json()

    def create_issue_dependency(
        self,
        issue_number: int,
        depends_on: int,
        repo: Optional[str] = None
    ) -> Dict:
        """Create a dependency (issue_number depends on depends_on)."""
        owner, target_repo = self._parse_repo(repo)
        url = f"{self.base_url}/repos/{owner}/{target_repo}/issues/{issue_number}/dependencies"
        data = {
            'dependentIssue': {
                'owner': owner,
                'repo': target_repo,
                'index': depends_on
            }
        }
        logger.info(f"Creating dependency: #{issue_number} depends on #{depends_on} in {owner}/{target_repo}")
        response = self.session.post(url, json=data)
        response.raise_for_status()
        return response.json()

    def remove_issue_dependency(
        self,
        issue_number: int,
        depends_on: int,
        repo: Optional[str] = None
    ) -> bool:
        """Remove a dependency between issues."""
        owner, target_repo = self._parse_repo(repo)
        url = f"{self.base_url}/repos/{owner}/{target_repo}/issues/{issue_number}/dependencies"
        data = {
            'dependentIssue': {
                'owner': owner,
                'repo': target_repo,
                'index': depends_on
            }
        }
        logger.info(f"Removing dependency: #{issue_number} no longer depends on #{depends_on}")
        response = self.session.delete(url, json=data)
        response.raise_for_status()
        return True

    def list_issue_blocks(
        self,
        issue_number: int,
        repo: Optional[str] = None
    ) -> List[Dict]:
        """List all issues that this issue blocks."""
        owner, target_repo = self._parse_repo(repo)
        url = f"{self.base_url}/repos/{owner}/{target_repo}/issues/{issue_number}/blocks"
        logger.info(f"Listing issues blocked by #{issue_number} in {owner}/{target_repo}")
        response = self.session.get(url)
        response.raise_for_status()
        return response.json()

    # ========================================
    # REPOSITORY VALIDATION
    # ========================================

    def get_repo_info(self, repo: Optional[str] = None) -> Dict:
        """Get repository information including owner type."""
        owner, target_repo = self._parse_repo(repo)
        url = f"{self.base_url}/repos/{owner}/{target_repo}"
        logger.info(f"Getting repo info for {owner}/{target_repo}")
        response = self.session.get(url)
        response.raise_for_status()
        return response.json()

    def is_org_repo(self, repo: Optional[str] = None) -> bool:
        """
        Check if repository belongs to an organization (not a user).

        Uses the /orgs/{owner} endpoint to reliably detect organizations,
        as the owner.type field in repo info may be null in some Gitea versions.
        """
        owner, _ = self._parse_repo(repo)
        return self._is_organization(owner)

    def _is_organization(self, owner: str) -> bool:
        """
        Check if an owner is an organization by querying the orgs endpoint.

        Args:
            owner: The owner name to check

        Returns:
            True if owner is an organization, False if user or unknown
        """
        url = f"{self.base_url}/orgs/{owner}"
        try:
            response = self.session.get(url)
            # 200 = organization exists, 404 = not an organization (user account)
            return response.status_code == 200
        except Exception as e:
            logger.warning(f"Failed to check if {owner} is organization: {e}")
            return False

    def get_branch_protection(
        self,
        branch: str,
        repo: Optional[str] = None
|
|
||||||
) -> Optional[Dict]:
|
|
||||||
"""Get branch protection rules for a branch."""
|
|
||||||
owner, target_repo = self._parse_repo(repo)
|
|
||||||
url = f"{self.base_url}/repos/{owner}/{target_repo}/branch_protections/{branch}"
|
|
||||||
logger.info(f"Getting branch protection for {branch} in {owner}/{target_repo}")
|
|
||||||
try:
|
|
||||||
response = self.session.get(url)
|
|
||||||
response.raise_for_status()
|
|
||||||
return response.json()
|
|
||||||
except requests.exceptions.HTTPError as e:
|
|
||||||
if e.response.status_code == 404:
|
|
||||||
return None # No protection rules
|
|
||||||
raise
|
|
||||||
|
|
||||||
def create_label(
|
|
||||||
self,
|
|
||||||
name: str,
|
|
||||||
color: str,
|
|
||||||
description: Optional[str] = None,
|
|
||||||
repo: Optional[str] = None
|
|
||||||
) -> Dict:
|
|
||||||
"""Create a new label in the repository."""
|
|
||||||
owner, target_repo = self._parse_repo(repo)
|
|
||||||
url = f"{self.base_url}/repos/{owner}/{target_repo}/labels"
|
|
||||||
data = {
|
|
||||||
'name': name,
|
|
||||||
'color': color.lstrip('#') # Remove # if present
|
|
||||||
}
|
|
||||||
if description:
|
|
||||||
data['description'] = description
|
|
||||||
logger.info(f"Creating label '{name}' in {owner}/{target_repo}")
|
|
||||||
response = self.session.post(url, json=data)
|
|
||||||
response.raise_for_status()
|
|
||||||
return response.json()
|
|
||||||
|
|
||||||
def create_org_label(
|
|
||||||
self,
|
|
||||||
org: str,
|
|
||||||
name: str,
|
|
||||||
color: str,
|
|
||||||
description: Optional[str] = None
|
|
||||||
) -> Dict:
|
|
||||||
"""
|
|
||||||
Create a new label at the organization level.
|
|
||||||
|
|
||||||
Organization labels are shared across all repositories in the org.
|
|
||||||
Use this for workflow labels (Type, Priority, Complexity, Effort, etc.)
|
|
||||||
|
|
||||||
Args:
|
|
||||||
org: Organization name
|
|
||||||
name: Label name (e.g., 'Type/Bug', 'Priority/High')
|
|
||||||
color: Hex color code (with or without #)
|
|
||||||
description: Optional label description
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
Created label dictionary
|
|
||||||
"""
|
|
||||||
url = f"{self.base_url}/orgs/{org}/labels"
|
|
||||||
data = {
|
|
||||||
'name': name,
|
|
||||||
'color': color.lstrip('#') # Remove # if present
|
|
||||||
}
|
|
||||||
if description:
|
|
||||||
data['description'] = description
|
|
||||||
logger.info(f"Creating organization label '{name}' in {org}")
|
|
||||||
response = self.session.post(url, json=data)
|
|
||||||
response.raise_for_status()
|
|
||||||
return response.json()
|
|
||||||
|
|
||||||
# ========================================
|
|
||||||
# PULL REQUEST OPERATIONS
|
|
||||||
# ========================================
|
|
||||||
|
|
||||||
def list_pull_requests(
|
|
||||||
self,
|
|
||||||
state: str = 'open',
|
|
||||||
sort: str = 'recentupdate',
|
|
||||||
labels: Optional[List[str]] = None,
|
|
||||||
repo: Optional[str] = None
|
|
||||||
) -> List[Dict]:
|
|
||||||
"""
|
|
||||||
List pull requests from Gitea repository.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
state: PR state (open, closed, all)
|
|
||||||
sort: Sort order (oldest, recentupdate, leastupdate, mostcomment, leastcomment, priority)
|
|
||||||
labels: Filter by labels
|
|
||||||
repo: Repository in 'owner/repo' format
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
List of pull request dictionaries
|
|
||||||
"""
|
|
||||||
owner, target_repo = self._parse_repo(repo)
|
|
||||||
url = f"{self.base_url}/repos/{owner}/{target_repo}/pulls"
|
|
||||||
params = {'state': state, 'sort': sort}
|
|
||||||
if labels:
|
|
||||||
params['labels'] = ','.join(labels)
|
|
||||||
logger.info(f"Listing PRs from {owner}/{target_repo} with state={state}")
|
|
||||||
response = self.session.get(url, params=params)
|
|
||||||
response.raise_for_status()
|
|
||||||
return response.json()
|
|
||||||
|
|
||||||
def get_pull_request(
|
|
||||||
self,
|
|
||||||
pr_number: int,
|
|
||||||
repo: Optional[str] = None
|
|
||||||
) -> Dict:
|
|
||||||
"""Get specific pull request details."""
|
|
||||||
owner, target_repo = self._parse_repo(repo)
|
|
||||||
url = f"{self.base_url}/repos/{owner}/{target_repo}/pulls/{pr_number}"
|
|
||||||
logger.info(f"Getting PR #{pr_number} from {owner}/{target_repo}")
|
|
||||||
response = self.session.get(url)
|
|
||||||
response.raise_for_status()
|
|
||||||
return response.json()
|
|
||||||
|
|
||||||
def get_pr_diff(
|
|
||||||
self,
|
|
||||||
pr_number: int,
|
|
||||||
repo: Optional[str] = None
|
|
||||||
) -> str:
|
|
||||||
"""Get the diff for a pull request."""
|
|
||||||
owner, target_repo = self._parse_repo(repo)
|
|
||||||
url = f"{self.base_url}/repos/{owner}/{target_repo}/pulls/{pr_number}.diff"
|
|
||||||
logger.info(f"Getting diff for PR #{pr_number} from {owner}/{target_repo}")
|
|
||||||
response = self.session.get(url)
|
|
||||||
response.raise_for_status()
|
|
||||||
return response.text
|
|
||||||
|
|
||||||
def get_pr_comments(
|
|
||||||
self,
|
|
||||||
pr_number: int,
|
|
||||||
repo: Optional[str] = None
|
|
||||||
) -> List[Dict]:
|
|
||||||
"""Get comments on a pull request (uses issue comments endpoint)."""
|
|
||||||
owner, target_repo = self._parse_repo(repo)
|
|
||||||
# PRs share comment endpoint with issues in Gitea
|
|
||||||
url = f"{self.base_url}/repos/{owner}/{target_repo}/issues/{pr_number}/comments"
|
|
||||||
logger.info(f"Getting comments for PR #{pr_number} from {owner}/{target_repo}")
|
|
||||||
response = self.session.get(url)
|
|
||||||
response.raise_for_status()
|
|
||||||
return response.json()
|
|
||||||
|
|
||||||
def create_pr_review(
|
|
||||||
self,
|
|
||||||
pr_number: int,
|
|
||||||
body: str,
|
|
||||||
event: str = 'COMMENT',
|
|
||||||
comments: Optional[List[Dict]] = None,
|
|
||||||
repo: Optional[str] = None
|
|
||||||
) -> Dict:
|
|
||||||
"""
|
|
||||||
Create a review on a pull request.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
pr_number: Pull request number
|
|
||||||
body: Review body/summary
|
|
||||||
event: Review action (APPROVE, REQUEST_CHANGES, COMMENT)
|
|
||||||
comments: Optional list of inline comments with path, position, body
|
|
||||||
repo: Repository in 'owner/repo' format
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
Created review dictionary
|
|
||||||
"""
|
|
||||||
owner, target_repo = self._parse_repo(repo)
|
|
||||||
url = f"{self.base_url}/repos/{owner}/{target_repo}/pulls/{pr_number}/reviews"
|
|
||||||
data = {
|
|
||||||
'body': body,
|
|
||||||
'event': event
|
|
||||||
}
|
|
||||||
if comments:
|
|
||||||
data['comments'] = comments
|
|
||||||
logger.info(f"Creating review on PR #{pr_number} in {owner}/{target_repo}")
|
|
||||||
response = self.session.post(url, json=data)
|
|
||||||
response.raise_for_status()
|
|
||||||
return response.json()
|
|
||||||
|
|
||||||
def add_pr_comment(
|
|
||||||
self,
|
|
||||||
pr_number: int,
|
|
||||||
body: str,
|
|
||||||
repo: Optional[str] = None
|
|
||||||
) -> Dict:
|
|
||||||
"""Add a general comment to a pull request (uses issue comment endpoint)."""
|
|
||||||
owner, target_repo = self._parse_repo(repo)
|
|
||||||
# PRs share comment endpoint with issues in Gitea
|
|
||||||
url = f"{self.base_url}/repos/{owner}/{target_repo}/issues/{pr_number}/comments"
|
|
||||||
data = {'body': body}
|
|
||||||
logger.info(f"Adding comment to PR #{pr_number} in {owner}/{target_repo}")
|
|
||||||
response = self.session.post(url, json=data)
|
|
||||||
response.raise_for_status()
|
|
||||||
return response.json()
|
|
||||||
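The dependency methods above all send the same `dependentIssue` payload to `/repos/{owner}/{repo}/issues/{issue_number}/dependencies` (POST to create, DELETE to remove). A minimal standalone sketch of that request shape — the `build_dependency_request` helper and the example owner/repo values are illustrative, not part of the client:

```python
# Hypothetical helper mirroring the URL and JSON body that
# create_issue_dependency / remove_issue_dependency build above.
def build_dependency_request(base_url: str, owner: str, repo: str,
                             issue_number: int, depends_on: int) -> tuple:
    """Return the (url, json_body) pair sent to the Gitea dependencies endpoint."""
    url = f"{base_url}/repos/{owner}/{repo}/issues/{issue_number}/dependencies"
    body = {
        'dependentIssue': {
            'owner': owner,
            'repo': repo,
            'index': depends_on,  # the issue that blocks issue_number
        }
    }
    return url, body


url, body = build_dependency_request(
    "https://gitea.example.com/api/v1", "acme", "backend", 42, 17)
print(url)
# → https://gitea.example.com/api/v1/repos/acme/backend/issues/42/dependencies
```

The same pair is reused for removal; only the HTTP verb changes, which is why the two methods differ solely in `session.post` vs `session.delete`.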
@@ -1,997 +0,0 @@
"""
|
|
||||||
MCP Server entry point for Gitea integration.
|
|
||||||
|
|
||||||
Provides Gitea tools to Claude Code via JSON-RPC 2.0 over stdio.
|
|
||||||
"""
|
|
||||||
import asyncio
|
|
||||||
import logging
|
|
||||||
import json
|
|
||||||
from mcp.server import Server
|
|
||||||
from mcp.server.stdio import stdio_server
|
|
||||||
from mcp.types import Tool, TextContent
|
|
||||||
|
|
||||||
from .config import GiteaConfig
|
|
||||||
from .gitea_client import GiteaClient
|
|
||||||
from .tools.issues import IssueTools
|
|
||||||
from .tools.labels import LabelTools
|
|
||||||
from .tools.wiki import WikiTools
|
|
||||||
from .tools.milestones import MilestoneTools
|
|
||||||
from .tools.dependencies import DependencyTools
|
|
||||||
from .tools.pull_requests import PullRequestTools
|
|
||||||
|
|
||||||
# Suppress noisy MCP validation warnings on stderr
|
|
||||||
logging.basicConfig(level=logging.INFO)
|
|
||||||
logging.getLogger("root").setLevel(logging.ERROR)
|
|
||||||
logging.getLogger("mcp").setLevel(logging.ERROR)
|
|
||||||
logger = logging.getLogger(__name__)
|
|
||||||
|
|
||||||
|
|
||||||
class GiteaMCPServer:
|
|
||||||
"""MCP Server for Gitea integration"""
|
|
||||||
|
|
||||||
def __init__(self):
|
|
||||||
self.server = Server("gitea-mcp")
|
|
||||||
self.config = None
|
|
||||||
self.client = None
|
|
||||||
self.issue_tools = None
|
|
||||||
self.label_tools = None
|
|
||||||
self.wiki_tools = None
|
|
||||||
self.milestone_tools = None
|
|
||||||
self.dependency_tools = None
|
|
||||||
self.pr_tools = None
|
|
||||||
|
|
||||||
async def initialize(self):
|
|
||||||
"""
|
|
||||||
Initialize server and load configuration.
|
|
||||||
|
|
||||||
Raises:
|
|
||||||
Exception: If initialization fails
|
|
||||||
"""
|
|
||||||
try:
|
|
||||||
config_loader = GiteaConfig()
|
|
||||||
self.config = config_loader.load()
|
|
||||||
|
|
||||||
self.client = GiteaClient()
|
|
||||||
self.issue_tools = IssueTools(self.client)
|
|
||||||
self.label_tools = LabelTools(self.client)
|
|
||||||
self.wiki_tools = WikiTools(self.client)
|
|
||||||
self.milestone_tools = MilestoneTools(self.client)
|
|
||||||
self.dependency_tools = DependencyTools(self.client)
|
|
||||||
self.pr_tools = PullRequestTools(self.client)
|
|
||||||
|
|
||||||
logger.info(f"Gitea MCP Server initialized in {self.config['mode']} mode")
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Failed to initialize: {e}")
|
|
||||||
raise
|
|
||||||
|
|
||||||
def setup_tools(self):
|
|
||||||
"""Register all available tools with the MCP server"""
|
|
||||||
|
|
||||||
@self.server.list_tools()
|
|
||||||
async def list_tools() -> list[Tool]:
|
|
||||||
"""Return list of available tools"""
|
|
||||||
return [
|
|
||||||
Tool(
|
|
||||||
name="list_issues",
|
|
||||||
description="List issues from Gitea repository",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"state": {
|
|
||||||
"type": "string",
|
|
||||||
"enum": ["open", "closed", "all"],
|
|
||||||
"default": "open",
|
|
||||||
"description": "Issue state filter"
|
|
||||||
},
|
|
||||||
"labels": {
|
|
||||||
"type": "array",
|
|
||||||
"items": {"type": "string"},
|
|
||||||
"description": "Filter by labels"
|
|
||||||
},
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (for PMO mode)"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="get_issue",
|
|
||||||
description="Get specific issue details",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"issue_number": {
|
|
||||||
"type": "integer",
|
|
||||||
"description": "Issue number"
|
|
||||||
},
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (for PMO mode)"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["issue_number"]
|
|
||||||
}
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="create_issue",
|
|
||||||
description="Create a new issue in Gitea",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"title": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Issue title"
|
|
||||||
},
|
|
||||||
"body": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Issue description"
|
|
||||||
},
|
|
||||||
"labels": {
|
|
||||||
"type": "array",
|
|
||||||
"items": {"type": "string"},
|
|
||||||
"description": "List of label names"
|
|
||||||
},
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (for PMO mode)"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["title", "body"]
|
|
||||||
}
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="update_issue",
|
|
||||||
description="Update existing issue",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"issue_number": {
|
|
||||||
"type": "integer",
|
|
||||||
"description": "Issue number"
|
|
||||||
},
|
|
||||||
"title": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "New title"
|
|
||||||
},
|
|
||||||
"body": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "New body"
|
|
||||||
},
|
|
||||||
"state": {
|
|
||||||
"type": "string",
|
|
||||||
"enum": ["open", "closed"],
|
|
||||||
"description": "New state"
|
|
||||||
},
|
|
||||||
"labels": {
|
|
||||||
"type": "array",
|
|
||||||
"items": {"type": "string"},
|
|
||||||
"description": "New labels"
|
|
||||||
},
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (for PMO mode)"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["issue_number"]
|
|
||||||
}
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="add_comment",
|
|
||||||
description="Add comment to issue",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"issue_number": {
|
|
||||||
"type": "integer",
|
|
||||||
"description": "Issue number"
|
|
||||||
},
|
|
||||||
"comment": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Comment text"
|
|
||||||
},
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (for PMO mode)"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["issue_number", "comment"]
|
|
||||||
}
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="get_labels",
|
|
||||||
description="Get all available labels (org + repo)",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (for PMO mode)"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="suggest_labels",
|
|
||||||
description="Analyze context and suggest appropriate labels",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"context": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Issue title + description or sprint context"
|
|
||||||
},
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (owner/repo format)"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["context"]
|
|
||||||
}
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="aggregate_issues",
|
|
||||||
description="Fetch issues across all repositories (PMO mode)",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"org": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Organization name (e.g. 'bandit')"
|
|
||||||
},
|
|
||||||
"state": {
|
|
||||||
"type": "string",
|
|
||||||
"enum": ["open", "closed", "all"],
|
|
||||||
"default": "open",
|
|
||||||
"description": "Issue state filter"
|
|
||||||
},
|
|
||||||
"labels": {
|
|
||||||
"type": "array",
|
|
||||||
"items": {"type": "string"},
|
|
||||||
"description": "Filter by labels"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["org"]
|
|
||||||
}
|
|
||||||
),
|
|
||||||
# Wiki Tools (Lessons Learned)
|
|
||||||
Tool(
|
|
||||||
name="list_wiki_pages",
|
|
||||||
description="List all wiki pages in repository",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (owner/repo format)"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="get_wiki_page",
|
|
||||||
description="Get a specific wiki page by name",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"page_name": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Wiki page name/path"
|
|
||||||
},
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (owner/repo format)"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["page_name"]
|
|
||||||
}
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="create_wiki_page",
|
|
||||||
description="Create a new wiki page",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"title": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Page title/name"
|
|
||||||
},
|
|
||||||
"content": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Page content (markdown)"
|
|
||||||
},
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (owner/repo format)"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["title", "content"]
|
|
||||||
}
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="update_wiki_page",
|
|
||||||
description="Update an existing wiki page",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"page_name": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Wiki page name/path"
|
|
||||||
},
|
|
||||||
"content": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "New page content (markdown)"
|
|
||||||
},
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (owner/repo format)"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["page_name", "content"]
|
|
||||||
}
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="create_lesson",
|
|
||||||
description="Create a lessons learned entry in the wiki",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"title": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Lesson title (e.g., 'Sprint 16 - Prevent Infinite Loops')"
|
|
||||||
},
|
|
||||||
"content": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Lesson content (markdown with context, problem, solution, prevention)"
|
|
||||||
},
|
|
||||||
"tags": {
|
|
||||||
"type": "array",
|
|
||||||
"items": {"type": "string"},
|
|
||||||
"description": "Tags for categorization"
|
|
||||||
},
|
|
||||||
"category": {
|
|
||||||
"type": "string",
|
|
||||||
"default": "sprints",
|
|
||||||
"description": "Category (sprints, patterns, architecture, etc.)"
|
|
||||||
},
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (owner/repo format)"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["title", "content", "tags"]
|
|
||||||
}
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="search_lessons",
|
|
||||||
description="Search lessons learned from previous sprints",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"query": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Search query (optional)"
|
|
||||||
},
|
|
||||||
"tags": {
|
|
||||||
"type": "array",
|
|
||||||
"items": {"type": "string"},
|
|
||||||
"description": "Tags to filter by (optional)"
|
|
||||||
},
|
|
||||||
"limit": {
|
|
||||||
"type": "integer",
|
|
||||||
"default": 20,
|
|
||||||
"description": "Maximum results"
|
|
||||||
},
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (owner/repo format)"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
),
|
|
||||||
# Milestone Tools
|
|
||||||
Tool(
|
|
||||||
name="list_milestones",
|
|
||||||
description="List all milestones in repository",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"state": {
|
|
||||||
"type": "string",
|
|
||||||
"enum": ["open", "closed", "all"],
|
|
||||||
"default": "open",
|
|
||||||
"description": "Milestone state filter"
|
|
||||||
},
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (owner/repo format)"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="get_milestone",
|
|
||||||
description="Get a specific milestone by ID",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"milestone_id": {
|
|
||||||
"type": "integer",
|
|
||||||
"description": "Milestone ID"
|
|
||||||
},
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (owner/repo format)"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["milestone_id"]
|
|
||||||
}
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="create_milestone",
|
|
||||||
description="Create a new milestone",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"title": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Milestone title"
|
|
||||||
},
|
|
||||||
"description": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Milestone description"
|
|
||||||
},
|
|
||||||
"due_on": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Due date (ISO 8601 format)"
|
|
||||||
},
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (owner/repo format)"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["title"]
|
|
||||||
}
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="update_milestone",
|
|
||||||
description="Update an existing milestone",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"milestone_id": {
|
|
||||||
"type": "integer",
|
|
||||||
"description": "Milestone ID"
|
|
||||||
},
|
|
||||||
"title": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "New title"
|
|
||||||
},
|
|
||||||
"description": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "New description"
|
|
||||||
},
|
|
||||||
"state": {
|
|
||||||
"type": "string",
|
|
||||||
"enum": ["open", "closed"],
|
|
||||||
"description": "New state"
|
|
||||||
},
|
|
||||||
"due_on": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "New due date"
|
|
||||||
},
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (owner/repo format)"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["milestone_id"]
|
|
||||||
}
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="delete_milestone",
|
|
||||||
description="Delete a milestone",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"milestone_id": {
|
|
||||||
"type": "integer",
|
|
||||||
"description": "Milestone ID"
|
|
||||||
},
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (owner/repo format)"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["milestone_id"]
|
|
||||||
}
|
|
||||||
),
|
|
||||||
# Dependency Tools
|
|
||||||
Tool(
|
|
||||||
name="list_issue_dependencies",
|
|
||||||
description="List all dependencies for an issue (issues that block this one)",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"issue_number": {
|
|
||||||
"type": "integer",
|
|
||||||
"description": "Issue number"
|
|
||||||
},
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (owner/repo format)"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["issue_number"]
|
|
||||||
}
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="create_issue_dependency",
|
|
||||||
description="Create a dependency (issue depends on another issue)",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"issue_number": {
|
|
||||||
"type": "integer",
|
|
||||||
"description": "Issue that will depend on another"
|
|
||||||
},
|
|
||||||
"depends_on": {
|
|
||||||
"type": "integer",
|
|
||||||
"description": "Issue that blocks issue_number"
|
|
||||||
},
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (owner/repo format)"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["issue_number", "depends_on"]
|
|
||||||
}
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="remove_issue_dependency",
|
|
||||||
description="Remove a dependency between issues",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"issue_number": {
|
|
||||||
"type": "integer",
|
|
||||||
"description": "Issue that depends on another"
|
|
||||||
},
|
|
||||||
"depends_on": {
|
|
||||||
"type": "integer",
|
|
||||||
"description": "Issue being depended on"
|
|
||||||
},
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (owner/repo format)"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["issue_number", "depends_on"]
|
|
||||||
}
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="get_execution_order",
|
|
||||||
description="Get parallelizable execution order for issues based on dependencies",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"issue_numbers": {
|
|
||||||
"type": "array",
|
|
||||||
"items": {"type": "integer"},
|
|
||||||
"description": "List of issue numbers to analyze"
|
|
||||||
},
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (owner/repo format)"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["issue_numbers"]
|
|
||||||
}
|
|
||||||
),
|
|
||||||
# Validation Tools
|
|
||||||
Tool(
|
|
||||||
name="validate_repo_org",
|
|
||||||
description="Check if repository belongs to an organization",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (owner/repo format)"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="get_branch_protection",
|
|
||||||
description="Get branch protection rules",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"branch": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Branch name"
|
|
||||||
},
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (owner/repo format)"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["branch"]
|
|
||||||
}
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="create_label",
|
|
||||||
description="Create a new label in the repository (for repo-specific labels like Component/*, Tech/*)",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"name": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Label name (e.g., 'Component/Backend', 'Tech/Python')"
|
|
||||||
},
|
|
||||||
"color": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Label color (hex code)"
|
|
||||||
},
|
|
||||||
"description": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Label description"
|
|
||||||
},
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (owner/repo format)"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["name", "color"]
|
|
||||||
}
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="create_org_label",
|
|
||||||
description="Create a new label at organization level (for workflow labels like Type/*, Priority/*, Complexity/*, Effort/*)",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"org": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Organization name"
|
|
||||||
},
|
|
||||||
"name": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Label name (e.g., 'Type/Bug', 'Priority/High')"
|
|
||||||
},
|
|
||||||
"color": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Label color (hex code)"
|
|
||||||
},
|
|
||||||
"description": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Label description"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["org", "name", "color"]
|
|
||||||
}
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="create_label_smart",
|
|
||||||
description="Create a label at the appropriate level (org or repo) based on category. Org: Type/*, Priority/*, Complexity/*, Effort/*, Risk/*, Source/*, Agent/*. Repo: Component/*, Tech/*",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"name": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Label name (e.g., 'Type/Bug', 'Component/Backend')"
|
|
||||||
},
|
|
||||||
"color": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Label color (hex code)"
|
|
||||||
},
|
|
||||||
"description": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Label description"
|
|
||||||
},
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (owner/repo format)"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["name", "color"]
|
|
||||||
}
|
|
||||||
),
|
|
||||||
# Pull Request Tools
|
|
||||||
Tool(
|
|
||||||
name="list_pull_requests",
|
|
||||||
description="List pull requests from repository",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"state": {
|
|
||||||
"type": "string",
|
|
||||||
"enum": ["open", "closed", "all"],
|
|
||||||
"default": "open",
|
|
||||||
"description": "PR state filter"
|
|
||||||
},
|
|
||||||
"sort": {
|
|
||||||
"type": "string",
|
|
||||||
"enum": ["oldest", "recentupdate", "leastupdate", "mostcomment", "leastcomment", "priority"],
|
|
||||||
"default": "recentupdate",
|
|
||||||
"description": "Sort order"
|
|
||||||
},
|
|
||||||
"labels": {
|
|
||||||
"type": "array",
|
|
||||||
"items": {"type": "string"},
|
|
||||||
"description": "Filter by labels"
|
|
||||||
},
|
|
||||||
"repo": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Repository name (owner/repo format)"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
),
|
|
||||||
                Tool(
                    name="get_pull_request",
                    description="Get specific pull request details",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "pr_number": {
                                "type": "integer",
                                "description": "Pull request number"
                            },
                            "repo": {
                                "type": "string",
                                "description": "Repository name (owner/repo format)"
                            }
                        },
                        "required": ["pr_number"]
                    }
                ),
                Tool(
                    name="get_pr_diff",
                    description="Get the diff for a pull request",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "pr_number": {
                                "type": "integer",
                                "description": "Pull request number"
                            },
                            "repo": {
                                "type": "string",
                                "description": "Repository name (owner/repo format)"
                            }
                        },
                        "required": ["pr_number"]
                    }
                ),
                Tool(
                    name="get_pr_comments",
                    description="Get comments on a pull request",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "pr_number": {
                                "type": "integer",
                                "description": "Pull request number"
                            },
                            "repo": {
                                "type": "string",
                                "description": "Repository name (owner/repo format)"
                            }
                        },
                        "required": ["pr_number"]
                    }
                ),
                Tool(
                    name="create_pr_review",
                    description="Create a review on a pull request (approve, request changes, or comment)",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "pr_number": {
                                "type": "integer",
                                "description": "Pull request number"
                            },
                            "body": {
                                "type": "string",
                                "description": "Review body/summary"
                            },
                            "event": {
                                "type": "string",
                                "enum": ["APPROVE", "REQUEST_CHANGES", "COMMENT"],
                                "default": "COMMENT",
                                "description": "Review action"
                            },
                            "comments": {
                                "type": "array",
                                "items": {
                                    "type": "object",
                                    "properties": {
                                        "path": {"type": "string"},
                                        "position": {"type": "integer"},
                                        "body": {"type": "string"}
                                    }
                                },
                                "description": "Optional inline comments"
                            },
                            "repo": {
                                "type": "string",
                                "description": "Repository name (owner/repo format)"
                            }
                        },
                        "required": ["pr_number", "body"]
                    }
                ),
                Tool(
                    name="add_pr_comment",
                    description="Add a general comment to a pull request",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "pr_number": {
                                "type": "integer",
                                "description": "Pull request number"
                            },
                            "body": {
                                "type": "string",
                                "description": "Comment text"
                            },
                            "repo": {
                                "type": "string",
                                "description": "Repository name (owner/repo format)"
                            }
                        },
                        "required": ["pr_number", "body"]
                    }
                )
            ]

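Each tool's `inputSchema` above is plain JSON Schema data; nothing in this file validates arguments against it before dispatch. A minimal sketch of the kind of required-field and type check a caller could run against the `add_pr_comment` schema (a simplified check under stated assumptions, not a full JSON Schema validator):

```python
# Simplified required-field/type check against a tool's inputSchema.
# SCHEMA mirrors the add_pr_comment schema above; `check` is hypothetical.
SCHEMA = {
    "type": "object",
    "properties": {
        "pr_number": {"type": "integer"},
        "body": {"type": "string"},
        "repo": {"type": "string"},
    },
    "required": ["pr_number", "body"],
}

PY_TYPES = {"integer": int, "string": str, "object": dict}

def check(args: dict, schema: dict) -> list:
    """Return a list of validation errors (empty means the payload passes)."""
    errors = []
    for field in schema.get("required", []):
        if field not in args:
            errors.append(f"missing required field: {field}")
    for field, value in args.items():
        spec = schema["properties"].get(field)
        if spec and not isinstance(value, PY_TYPES[spec["type"]]):
            errors.append(f"{field}: expected {spec['type']}")
    return errors

print(check({"pr_number": 7, "body": "LGTM"}, SCHEMA))  # → []
print(check({"body": 5}, SCHEMA))
# → ['missing required field: pr_number', 'body: expected string']
```

A production server would use a real JSON Schema library instead; this only illustrates what the `required` and `type` keys in the declarations above mean.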
        @self.server.call_tool()
        async def call_tool(name: str, arguments: dict) -> list[TextContent]:
            """
            Handle tool invocation.

            Args:
                name: Tool name
                arguments: Tool arguments

            Returns:
                List of TextContent with results
            """
            try:
                # Route to appropriate tool handler
                if name == "list_issues":
                    result = await self.issue_tools.list_issues(**arguments)
                elif name == "get_issue":
                    result = await self.issue_tools.get_issue(**arguments)
                elif name == "create_issue":
                    result = await self.issue_tools.create_issue(**arguments)
                elif name == "update_issue":
                    result = await self.issue_tools.update_issue(**arguments)
                elif name == "add_comment":
                    result = await self.issue_tools.add_comment(**arguments)
                elif name == "get_labels":
                    result = await self.label_tools.get_labels(**arguments)
                elif name == "suggest_labels":
                    result = await self.label_tools.suggest_labels(**arguments)
                elif name == "aggregate_issues":
                    result = await self.issue_tools.aggregate_issues(**arguments)
                # Wiki tools
                elif name == "list_wiki_pages":
                    result = await self.wiki_tools.list_wiki_pages(**arguments)
                elif name == "get_wiki_page":
                    result = await self.wiki_tools.get_wiki_page(**arguments)
                elif name == "create_wiki_page":
                    result = await self.wiki_tools.create_wiki_page(**arguments)
                elif name == "update_wiki_page":
                    result = await self.wiki_tools.update_wiki_page(**arguments)
                elif name == "create_lesson":
                    result = await self.wiki_tools.create_lesson(**arguments)
                elif name == "search_lessons":
                    tags = arguments.get('tags')
                    result = await self.wiki_tools.search_lessons(
                        query=arguments.get('query'),
                        tags=tags,
                        limit=arguments.get('limit', 20),
                        repo=arguments.get('repo')
                    )
                # Milestone tools
                elif name == "list_milestones":
                    result = await self.milestone_tools.list_milestones(**arguments)
                elif name == "get_milestone":
                    result = await self.milestone_tools.get_milestone(**arguments)
                elif name == "create_milestone":
                    result = await self.milestone_tools.create_milestone(**arguments)
                elif name == "update_milestone":
                    result = await self.milestone_tools.update_milestone(**arguments)
                elif name == "delete_milestone":
                    result = await self.milestone_tools.delete_milestone(**arguments)
                # Dependency tools
                elif name == "list_issue_dependencies":
                    result = await self.dependency_tools.list_issue_dependencies(**arguments)
                elif name == "create_issue_dependency":
                    result = await self.dependency_tools.create_issue_dependency(**arguments)
                elif name == "remove_issue_dependency":
                    result = await self.dependency_tools.remove_issue_dependency(**arguments)
                elif name == "get_execution_order":
                    result = await self.dependency_tools.get_execution_order(**arguments)
                # Validation tools
                elif name == "validate_repo_org":
                    is_org = self.client.is_org_repo(arguments.get('repo'))
                    result = {'is_organization': is_org}
                elif name == "get_branch_protection":
                    result = self.client.get_branch_protection(
                        arguments['branch'],
                        arguments.get('repo')
                    )
                elif name == "create_label":
                    result = self.client.create_label(
                        arguments['name'],
                        arguments['color'],
                        arguments.get('description'),
                        arguments.get('repo')
                    )
                elif name == "create_org_label":
                    result = self.client.create_org_label(
                        arguments['org'],
                        arguments['name'],
                        arguments['color'],
                        arguments.get('description')
                    )
                elif name == "create_label_smart":
                    result = await self.label_tools.create_label_smart(
                        arguments['name'],
                        arguments['color'],
                        arguments.get('description'),
                        arguments.get('repo')
                    )
                # Pull Request tools
                elif name == "list_pull_requests":
                    result = await self.pr_tools.list_pull_requests(**arguments)
                elif name == "get_pull_request":
                    result = await self.pr_tools.get_pull_request(**arguments)
                elif name == "get_pr_diff":
                    result = await self.pr_tools.get_pr_diff(**arguments)
                elif name == "get_pr_comments":
                    result = await self.pr_tools.get_pr_comments(**arguments)
                elif name == "create_pr_review":
                    result = await self.pr_tools.create_pr_review(**arguments)
                elif name == "add_pr_comment":
                    result = await self.pr_tools.add_pr_comment(**arguments)
                else:
                    raise ValueError(f"Unknown tool: {name}")

                return [TextContent(
                    type="text",
                    text=json.dumps(result, indent=2)
                )]

            except Exception as e:
                logger.error(f"Tool {name} failed: {e}")
                return [TextContent(
                    type="text",
                    text=f"Error: {str(e)}"
                )]

    async def run(self):
        """Run the MCP server"""
        await self.initialize()
        self.setup_tools()

        async with stdio_server() as (read_stream, write_stream):
            await self.server.run(
                read_stream,
                write_stream,
                self.server.create_initialization_options()
            )


async def main():
    """Main entry point"""
    server = GiteaMCPServer()
    await server.run()


if __name__ == "__main__":
    asyncio.run(main())
@@ -1,11 +0,0 @@
"""
MCP tools for Gitea integration.

This package provides MCP tool implementations for:
- Issue operations (issues.py)
- Label management (labels.py)
- Wiki operations (wiki.py)
- Milestone management (milestones.py)
- Issue dependencies (dependencies.py)
- Pull request operations (pull_requests.py)
"""
@@ -1,216 +0,0 @@
"""
Issue dependency management tools for MCP server.

Provides async wrappers for issue dependency operations:
- List/create/remove dependencies
- Build dependency graphs for parallel execution
"""
import asyncio
import logging
from typing import List, Dict, Optional, Set, Tuple

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


class DependencyTools:
    """Async wrappers for Gitea issue dependency operations"""

    def __init__(self, gitea_client):
        """
        Initialize dependency tools.

        Args:
            gitea_client: GiteaClient instance
        """
        self.gitea = gitea_client

    async def list_issue_dependencies(
        self,
        issue_number: int,
        repo: Optional[str] = None
    ) -> List[Dict]:
        """
        List all dependencies for an issue (issues that block this one).

        Args:
            issue_number: Issue number
            repo: Repository in owner/repo format

        Returns:
            List of issues that this issue depends on
        """
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.list_issue_dependencies(issue_number, repo)
        )

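Every wrapper in this class follows the same shape: grab the event loop and push the synchronous Gitea client call onto the default thread-pool executor so it does not block the loop. A self-contained sketch of that pattern, with a stand-in for the blocking HTTP call (`asyncio.get_running_loop()` is the modern equivalent of `get_event_loop()` inside a coroutine):

```python
import asyncio

def blocking_fetch(issue_number: int) -> dict:
    # stand-in for a synchronous Gitea HTTP call
    return {"number": issue_number, "title": f"Issue {issue_number}"}

async def fetch_issue(issue_number: int) -> dict:
    loop = asyncio.get_running_loop()
    # run the blocking call in the default thread-pool executor
    return await loop.run_in_executor(None, lambda: blocking_fetch(issue_number))

print(asyncio.run(fetch_issue(42)))  # → {'number': 42, 'title': 'Issue 42'}
```

The lambda captures the call's arguments, since `run_in_executor` only accepts positional arguments for the callable itself.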
    async def create_issue_dependency(
        self,
        issue_number: int,
        depends_on: int,
        repo: Optional[str] = None
    ) -> Dict:
        """
        Create a dependency between issues.

        Args:
            issue_number: The issue that will depend on another
            depends_on: The issue that blocks issue_number
            repo: Repository in owner/repo format

        Returns:
            Created dependency information
        """
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.create_issue_dependency(issue_number, depends_on, repo)
        )

    async def remove_issue_dependency(
        self,
        issue_number: int,
        depends_on: int,
        repo: Optional[str] = None
    ) -> bool:
        """
        Remove a dependency between issues.

        Args:
            issue_number: The issue that currently depends on another
            depends_on: The issue being depended on
            repo: Repository in owner/repo format

        Returns:
            True if removed successfully
        """
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.remove_issue_dependency(issue_number, depends_on, repo)
        )

    async def list_issue_blocks(
        self,
        issue_number: int,
        repo: Optional[str] = None
    ) -> List[Dict]:
        """
        List all issues that this issue blocks.

        Args:
            issue_number: Issue number
            repo: Repository in owner/repo format

        Returns:
            List of issues blocked by this issue
        """
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.list_issue_blocks(issue_number, repo)
        )

    async def build_dependency_graph(
        self,
        issue_numbers: List[int],
        repo: Optional[str] = None
    ) -> Dict[int, List[int]]:
        """
        Build a dependency graph for a list of issues.

        Args:
            issue_numbers: List of issue numbers to analyze
            repo: Repository in owner/repo format

        Returns:
            Dictionary mapping issue_number -> list of issues it depends on
        """
        graph = {}
        for issue_num in issue_numbers:
            try:
                deps = await self.list_issue_dependencies(issue_num, repo)
                graph[issue_num] = [
                    d.get('number') or d.get('index')
                    for d in deps
                    if (d.get('number') or d.get('index')) in issue_numbers
                ]
            except Exception as e:
                logger.warning(f"Could not fetch dependencies for #{issue_num}: {e}")
                graph[issue_num] = []
        return graph

    async def get_ready_tasks(
        self,
        issue_numbers: List[int],
        completed: Set[int],
        repo: Optional[str] = None
    ) -> List[int]:
        """
        Get tasks that are ready to execute (no unresolved dependencies).

        Args:
            issue_numbers: List of all issue numbers in sprint
            completed: Set of already completed issue numbers
            repo: Repository in owner/repo format

        Returns:
            List of issue numbers that can be executed now
        """
        graph = await self.build_dependency_graph(issue_numbers, repo)
        ready = []

        for issue_num in issue_numbers:
            if issue_num in completed:
                continue

            deps = graph.get(issue_num, [])
            # Task is ready if all its dependencies are completed
            if all(dep in completed for dep in deps):
                ready.append(issue_num)

        return ready

    async def get_execution_order(
        self,
        issue_numbers: List[int],
        repo: Optional[str] = None
    ) -> List[List[int]]:
        """
        Get a parallelizable execution order for issues.

        Returns batches of issues that can be executed in parallel.
        Each batch contains issues with no unresolved dependencies.

        Args:
            issue_numbers: List of all issue numbers
            repo: Repository in owner/repo format

        Returns:
            List of batches, where each batch can be executed in parallel
        """
        graph = await self.build_dependency_graph(issue_numbers, repo)
        completed: Set[int] = set()
        remaining = set(issue_numbers)
        batches = []

        while remaining:
            # Find all tasks with no unresolved dependencies
            batch = []
            for issue_num in remaining:
                deps = graph.get(issue_num, [])
                if all(dep in completed for dep in deps):
                    batch.append(issue_num)

            if not batch:
                # Circular dependency detected
                logger.error(f"Circular dependency detected! Remaining: {remaining}")
                batch = list(remaining)  # Force include remaining to avoid infinite loop

            batches.append(batch)
            completed.update(batch)
            remaining -= set(batch)

        return batches
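The batching loop in `get_execution_order` can be exercised without a Gitea client. A minimal standalone restatement, where `graph` maps each issue to the issues it depends on (the same shape `build_dependency_graph` returns); `sorted` is added only to make output deterministic:

```python
from typing import Dict, List, Set

def execution_batches(graph: Dict[int, List[int]]) -> List[List[int]]:
    """Group issues into batches; each batch has no unresolved dependencies."""
    completed: Set[int] = set()
    remaining = set(graph)
    batches: List[List[int]] = []
    while remaining:
        batch = sorted(
            n for n in remaining
            if all(dep in completed for dep in graph.get(n, []))
        )
        if not batch:
            # circular dependency: force-include the rest to avoid an infinite loop
            batch = sorted(remaining)
        batches.append(batch)
        completed.update(batch)
        remaining -= set(batch)
    return batches

# 3 depends on 1 and 2; 4 depends on 3
print(execution_batches({1: [], 2: [], 3: [1, 2], 4: [3]}))
# → [[1, 2], [3], [4]]
```

A cycle such as `{1: [2], 2: [1]}` collapses into a single forced batch rather than looping forever, matching the error-handling branch above.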
@@ -1,261 +0,0 @@
"""
Issue management tools for MCP server.

Provides async wrappers for issue CRUD operations with:
- Branch-aware security
- PMO multi-repo support
- Comprehensive error handling
"""
import asyncio
import subprocess
import logging
from typing import List, Dict, Optional

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


class IssueTools:
    """Async wrappers for Gitea issue operations with branch detection"""

    def __init__(self, gitea_client):
        """
        Initialize issue tools.

        Args:
            gitea_client: GiteaClient instance
        """
        self.gitea = gitea_client

    def _get_current_branch(self) -> str:
        """
        Get current git branch.

        Returns:
            Current branch name or 'unknown' if not in a git repo
        """
        try:
            result = subprocess.run(
                ['git', 'rev-parse', '--abbrev-ref', 'HEAD'],
                capture_output=True,
                text=True,
                check=True
            )
            return result.stdout.strip()
        except subprocess.CalledProcessError:
            return "unknown"

    def _check_branch_permissions(self, operation: str) -> bool:
        """
        Check if operation is allowed on current branch.

        Args:
            operation: Operation name (list_issues, create_issue, etc.)

        Returns:
            True if operation is allowed, False otherwise
        """
        branch = self._get_current_branch()

        # Production branches (read-only except incidents)
        if branch in ['main', 'master'] or branch.startswith('prod/'):
            return operation in ['list_issues', 'get_issue', 'get_labels']

        # Staging branches (read-only for code)
        if branch == 'staging' or branch.startswith('stage/'):
            return operation in ['list_issues', 'get_issue', 'get_labels', 'create_issue']

        # Development branches (full access)
        if branch in ['development', 'develop'] or branch.startswith(('feat/', 'feature/', 'dev/')):
            return True

        # Unknown branch - be restrictive
        return False

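`_check_branch_permissions` encodes a three-tier policy: production branches are read-only, staging additionally allows issue creation, and development branches get full access. A standalone restatement as a pure function of `(branch, operation)`, convenient for table-driven tests (the function name `allowed` is hypothetical; the rules are the ones above):

```python
READ_ONLY = {'list_issues', 'get_issue', 'get_labels'}

def allowed(branch: str, operation: str) -> bool:
    """Pure restatement of the branch policy in _check_branch_permissions."""
    if branch in ('main', 'master') or branch.startswith('prod/'):
        return operation in READ_ONLY
    if branch == 'staging' or branch.startswith('stage/'):
        return operation in READ_ONLY | {'create_issue'}
    if branch in ('development', 'develop') or branch.startswith(('feat/', 'feature/', 'dev/')):
        return True
    return False  # unknown branch: restrictive by default

print(allowed('main', 'create_issue'))        # False
print(allowed('staging', 'create_issue'))     # True
print(allowed('feat/login', 'update_issue'))  # True
```

Separating the policy from the `git rev-parse` lookup makes the rules testable without a repository checkout.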
    async def list_issues(
        self,
        state: str = 'open',
        labels: Optional[List[str]] = None,
        repo: Optional[str] = None
    ) -> List[Dict]:
        """
        List issues from repository (async wrapper).

        Args:
            state: Issue state (open, closed, all)
            labels: Filter by labels
            repo: Override configured repo (for PMO multi-repo)

        Returns:
            List of issue dictionaries

        Raises:
            PermissionError: If operation not allowed on current branch
        """
        if not self._check_branch_permissions('list_issues'):
            branch = self._get_current_branch()
            raise PermissionError(
                f"Cannot list issues on branch '{branch}'. "
                f"Switch to a development branch."
            )

        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.list_issues(state, labels, repo)
        )

    async def get_issue(
        self,
        issue_number: int,
        repo: Optional[str] = None
    ) -> Dict:
        """
        Get specific issue details (async wrapper).

        Args:
            issue_number: Issue number
            repo: Override configured repo (for PMO multi-repo)

        Returns:
            Issue dictionary

        Raises:
            PermissionError: If operation not allowed on current branch
        """
        if not self._check_branch_permissions('get_issue'):
            branch = self._get_current_branch()
            raise PermissionError(
                f"Cannot get issue on branch '{branch}'. "
                f"Switch to a development branch."
            )

        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.get_issue(issue_number, repo)
        )

    async def create_issue(
        self,
        title: str,
        body: str,
        labels: Optional[List[str]] = None,
        repo: Optional[str] = None
    ) -> Dict:
        """
        Create new issue (async wrapper with branch check).

        Args:
            title: Issue title
            body: Issue description
            labels: List of label names
            repo: Override configured repo (for PMO multi-repo)

        Returns:
            Created issue dictionary

        Raises:
            PermissionError: If operation not allowed on current branch
        """
        if not self._check_branch_permissions('create_issue'):
            branch = self._get_current_branch()
            raise PermissionError(
                f"Cannot create issues on branch '{branch}'. "
                f"Switch to a development branch to create issues."
            )

        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.create_issue(title, body, labels, repo)
        )

    async def update_issue(
        self,
        issue_number: int,
        title: Optional[str] = None,
        body: Optional[str] = None,
        state: Optional[str] = None,
        labels: Optional[List[str]] = None,
        repo: Optional[str] = None
    ) -> Dict:
        """
        Update existing issue (async wrapper with branch check).

        Args:
            issue_number: Issue number
            title: New title (optional)
            body: New body (optional)
            state: New state - 'open' or 'closed' (optional)
            labels: New labels (optional)
            repo: Override configured repo (for PMO multi-repo)

        Returns:
            Updated issue dictionary

        Raises:
            PermissionError: If operation not allowed on current branch
        """
        if not self._check_branch_permissions('update_issue'):
            branch = self._get_current_branch()
            raise PermissionError(
                f"Cannot update issues on branch '{branch}'. "
                f"Switch to a development branch to update issues."
            )

        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.update_issue(issue_number, title, body, state, labels, repo)
        )

    async def add_comment(
        self,
        issue_number: int,
        comment: str,
        repo: Optional[str] = None
    ) -> Dict:
        """
        Add comment to issue (async wrapper with branch check).

        Args:
            issue_number: Issue number
            comment: Comment text
            repo: Override configured repo (for PMO multi-repo)

        Returns:
            Created comment dictionary

        Raises:
            PermissionError: If operation not allowed on current branch
        """
        if not self._check_branch_permissions('add_comment'):
            branch = self._get_current_branch()
            raise PermissionError(
                f"Cannot add comments on branch '{branch}'. "
                f"Switch to a development branch to add comments."
            )

        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.add_comment(issue_number, comment, repo)
        )

    async def aggregate_issues(
        self,
        org: str,
        state: str = 'open',
        labels: Optional[List[str]] = None
    ) -> Dict[str, List[Dict]]:
        """Aggregate issues across all repositories in org."""
        if not self._check_branch_permissions('aggregate_issues'):
            branch = self._get_current_branch()
            raise PermissionError(f"Cannot aggregate issues on branch '{branch}'.")

        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.aggregate_issues(org, state, labels)
        )
@@ -1,377 +0,0 @@
"""
Label management tools for MCP server.

Provides async wrappers for label operations with:
- Label taxonomy retrieval
- Intelligent label suggestion
- Dynamic label detection
"""
import asyncio
import logging
import re
from typing import List, Dict, Optional

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


class LabelTools:
    """Async wrappers for Gitea label operations"""

    def __init__(self, gitea_client):
        """
        Initialize label tools.

        Args:
            gitea_client: GiteaClient instance
        """
        self.gitea = gitea_client

    async def get_labels(self, repo: Optional[str] = None) -> Dict[str, List[Dict]]:
        """Get all labels (org + repo if org-owned, repo-only if user-owned)."""
        loop = asyncio.get_event_loop()

        target_repo = repo or self.gitea.repo
        if not target_repo or '/' not in target_repo:
            raise ValueError("Use 'owner/repo' format (e.g. 'org/repo-name')")

        # Check if repo belongs to an organization or user
        is_org = await loop.run_in_executor(
            None,
            lambda: self.gitea.is_org_repo(target_repo)
        )

        org_labels = []
        if is_org:
            org = target_repo.split('/')[0]
            org_labels = await loop.run_in_executor(
                None,
                lambda: self.gitea.get_org_labels(org)
            )

        repo_labels = await loop.run_in_executor(
            None,
            lambda: self.gitea.get_labels(target_repo)
        )

        return {
            'organization': org_labels,
            'repository': repo_labels,
            'total_count': len(org_labels) + len(repo_labels)
        }

    async def suggest_labels(self, context: str, repo: Optional[str] = None) -> List[str]:
        """
        Analyze context and suggest appropriate labels from repository's actual labels.

        This method fetches actual labels from the repository and matches them
        dynamically, supporting any label naming convention (slash, colon-space, etc.).

        Args:
            context: Issue title + description or sprint context
            repo: Repository in 'owner/repo' format (optional, uses default if not provided)

        Returns:
            List of suggested label names that exist in the repository
        """
        # Fetch actual labels from repository
        target_repo = repo or self.gitea.repo
        if not target_repo:
            logger.warning("No repository specified, returning empty suggestions")
            return []

        try:
            labels_data = await self.get_labels(target_repo)
            all_labels = labels_data.get('organization', []) + labels_data.get('repository', [])
            label_names = [label['name'] for label in all_labels]
        except Exception as e:
            logger.warning(f"Failed to fetch labels: {e}. Using fallback suggestions.")
            label_names = []

        # Build label lookup for dynamic matching
        label_lookup = self._build_label_lookup(label_names)

        suggested = []
        context_lower = context.lower()

        # Type detection (exclusive - only one)
        type_label = None
        if any(word in context_lower for word in ['bug', 'error', 'fix', 'broken', 'crash', 'fail']):
            type_label = self._find_label(label_lookup, 'type', 'bug')
        elif any(word in context_lower for word in ['refactor', 'extract', 'restructure', 'architecture', 'service extraction']):
            type_label = self._find_label(label_lookup, 'type', 'refactor')
        elif any(word in context_lower for word in ['feature', 'add', 'implement', 'new', 'create']):
            type_label = self._find_label(label_lookup, 'type', 'feature')
        elif any(word in context_lower for word in ['docs', 'documentation', 'readme', 'guide']):
            type_label = self._find_label(label_lookup, 'type', 'documentation')
        elif any(word in context_lower for word in ['test', 'testing', 'spec', 'coverage']):
            type_label = self._find_label(label_lookup, 'type', 'test')
        elif any(word in context_lower for word in ['chore', 'maintenance', 'update', 'upgrade']):
            type_label = self._find_label(label_lookup, 'type', 'chore')
        if type_label:
            suggested.append(type_label)

        # Priority detection
        priority_label = None
        if any(word in context_lower for word in ['critical', 'urgent', 'blocker', 'blocking', 'emergency']):
            priority_label = self._find_label(label_lookup, 'priority', 'critical')
        elif any(word in context_lower for word in ['high', 'important', 'asap', 'soon']):
            priority_label = self._find_label(label_lookup, 'priority', 'high')
        elif any(word in context_lower for word in ['low', 'nice-to-have', 'optional', 'later']):
            priority_label = self._find_label(label_lookup, 'priority', 'low')
        else:
            priority_label = self._find_label(label_lookup, 'priority', 'medium')
        if priority_label:
            suggested.append(priority_label)

        # Complexity detection
        complexity_label = None
        if any(word in context_lower for word in ['simple', 'trivial', 'easy', 'quick']):
            complexity_label = self._find_label(label_lookup, 'complexity', 'simple')
        elif any(word in context_lower for word in ['complex', 'difficult', 'challenging', 'intricate']):
            complexity_label = self._find_label(label_lookup, 'complexity', 'complex')
        else:
            complexity_label = self._find_label(label_lookup, 'complexity', 'medium')
        if complexity_label:
            suggested.append(complexity_label)

        # Effort detection (supports both "Effort" and "Efforts" naming)
        effort_label = None
        if any(word in context_lower for word in ['xs', 'tiny', '1 hour', '2 hours']):
            effort_label = self._find_label(label_lookup, 'effort', 'xs')
        elif any(word in context_lower for word in ['small', 's ', '1 day', 'half day']):
            effort_label = self._find_label(label_lookup, 'effort', 's')
        elif any(word in context_lower for word in ['medium', 'm ', '2 days', '3 days']):
            effort_label = self._find_label(label_lookup, 'effort', 'm')
        elif any(word in context_lower for word in ['large', 'l ', '1 week', '5 days']):
            effort_label = self._find_label(label_lookup, 'effort', 'l')
        elif any(word in context_lower for word in ['xl', 'extra large', '2 weeks', 'sprint']):
            effort_label = self._find_label(label_lookup, 'effort', 'xl')
        if effort_label:
            suggested.append(effort_label)

        # Component detection (based on keywords)
        component_mappings = {
            'backend': ['backend', 'server', 'api', 'database', 'service'],
            'frontend': ['frontend', 'ui', 'interface', 'react', 'vue', 'component'],
            'api': ['api', 'endpoint', 'rest', 'graphql', 'route'],
            'database': ['database', 'db', 'sql', 'migration', 'schema', 'postgres'],
            'auth': ['auth', 'authentication', 'login', 'oauth', 'token', 'session'],
            'deploy': ['deploy', 'deployment', 'docker', 'kubernetes', 'ci/cd'],
            'testing': ['test', 'testing', 'spec', 'jest', 'pytest', 'coverage'],
|
|
||||||
'docs': ['docs', 'documentation', 'readme', 'guide', 'wiki']
|
|
||||||
}
|
|
||||||
|
|
||||||
for component, keywords in component_mappings.items():
|
|
||||||
if any(keyword in context_lower for keyword in keywords):
|
|
||||||
label = self._find_label(label_lookup, 'component', component)
|
|
||||||
if label and label not in suggested:
|
|
||||||
suggested.append(label)
|
|
||||||
|
|
||||||
# Tech stack detection
|
|
||||||
tech_mappings = {
|
|
||||||
'python': ['python', 'fastapi', 'django', 'flask', 'pytest'],
|
|
||||||
'javascript': ['javascript', 'js', 'node', 'npm', 'yarn'],
|
|
||||||
'docker': ['docker', 'dockerfile', 'container', 'compose'],
|
|
||||||
'postgresql': ['postgres', 'postgresql', 'psql', 'sql'],
|
|
||||||
'redis': ['redis', 'cache', 'session store'],
|
|
||||||
'vue': ['vue', 'vuejs', 'nuxt'],
|
|
||||||
'fastapi': ['fastapi', 'pydantic', 'starlette']
|
|
||||||
}
|
|
||||||
|
|
||||||
for tech, keywords in tech_mappings.items():
|
|
||||||
if any(keyword in context_lower for keyword in keywords):
|
|
||||||
label = self._find_label(label_lookup, 'tech', tech)
|
|
||||||
if label and label not in suggested:
|
|
||||||
suggested.append(label)
|
|
||||||
|
|
||||||
# Source detection (based on git branch or context)
|
|
||||||
source_label = None
|
|
||||||
if 'development' in context_lower or 'dev/' in context_lower:
|
|
||||||
source_label = self._find_label(label_lookup, 'source', 'development')
|
|
||||||
elif 'staging' in context_lower or 'stage/' in context_lower:
|
|
||||||
source_label = self._find_label(label_lookup, 'source', 'staging')
|
|
||||||
elif 'production' in context_lower or 'prod' in context_lower:
|
|
||||||
source_label = self._find_label(label_lookup, 'source', 'production')
|
|
||||||
if source_label:
|
|
||||||
suggested.append(source_label)
|
|
||||||
|
|
||||||
# Risk detection
|
|
||||||
risk_label = None
|
|
||||||
if any(word in context_lower for word in ['breaking', 'breaking change', 'major', 'risky']):
|
|
||||||
risk_label = self._find_label(label_lookup, 'risk', 'high')
|
|
||||||
elif any(word in context_lower for word in ['safe', 'low risk', 'minor']):
|
|
||||||
risk_label = self._find_label(label_lookup, 'risk', 'low')
|
|
||||||
if risk_label:
|
|
||||||
suggested.append(risk_label)
|
|
||||||
|
|
||||||
logger.info(f"Suggested {len(suggested)} labels based on context and {len(label_names)} available labels")
|
|
||||||
return suggested
|
|
||||||
|
|
||||||
    def _build_label_lookup(self, label_names: List[str]) -> Dict[str, Dict[str, str]]:
        """
        Build a lookup dictionary for label matching.

        Supports various label formats:
        - Slash format: Type/Bug, Priority/High
        - Colon-space format: Type: Bug, Priority: High
        - Colon format: Type:Bug

        Args:
            label_names: List of actual label names from repository

        Returns:
            Nested dict: {category: {value: actual_label_name}}
        """
        lookup: Dict[str, Dict[str, str]] = {}

        for label in label_names:
            # Try different separator patterns
            # Pattern: Category<separator>Value
            # Separators: "/", ": ", ":"
            match = re.match(r'^([^/:]+)(?:/|:\s*|:)(.+)$', label)
            if match:
                category = match.group(1).lower().rstrip('s')  # Normalize: "Efforts" -> "effort"
                value = match.group(2).lower()

                if category not in lookup:
                    lookup[category] = {}
                lookup[category][value] = label

        return lookup
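The separator regex in `_build_label_lookup` normalizes all three label formats into `(category, value)` pairs. A minimal demonstration with sample label names (the names themselves are illustrative):

```python
import re

# The same separator regex used in _build_label_lookup.
LABEL_RE = re.compile(r'^([^/:]+)(?:/|:\s*|:)(.+)$')

for label in ['Type/Bug', 'Priority: High', 'Efforts:XL']:
    m = LABEL_RE.match(label)
    category = m.group(1).lower().rstrip('s')  # "Efforts" -> "effort"
    value = m.group(2).lower()
    print(category, value)
```

The `[^/:]+` group stops at the first separator, and the `:\s*` alternative absorbs the optional space, so `Type/Bug`, `Priority: High`, and `Efforts:XL` all normalize cleanly.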
    def _find_label(self, lookup: Dict[str, Dict[str, str]], category: str, value: str) -> Optional[str]:
        """
        Find actual label name from lookup.

        Args:
            lookup: Label lookup dictionary
            category: Category to search (e.g., 'type', 'priority')
            value: Value to find (e.g., 'bug', 'high')

        Returns:
            Actual label name if found, None otherwise
        """
        category_lower = category.lower().rstrip('s')  # Normalize
        value_lower = value.lower()

        if category_lower in lookup and value_lower in lookup[category_lower]:
            return lookup[category_lower][value_lower]

        return None
    # Organization-level label categories (workflow labels shared across repos)
    ORG_LABEL_CATEGORIES = {'agent', 'complexity', 'effort', 'efforts', 'priority', 'risk', 'source', 'type'}

    # Repository-level label categories (project-specific labels)
    REPO_LABEL_CATEGORIES = {'component', 'tech'}
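The org/repo split above can be sketched as a pure routing function: parse the category prefix from a label name, then check it against the organization-level set. This mirrors the category parsing in `create_label_smart` (the function name here is illustrative):

```python
ORG_LABEL_CATEGORIES = {'agent', 'complexity', 'effort', 'efforts', 'priority', 'risk', 'source', 'type'}

def label_level(name: str, is_org_repo: bool) -> str:
    """Return 'organization' or 'repository' for a label name like 'Type/Bug'."""
    category = None
    if '/' in name:
        category = name.split('/')[0].lower().rstrip('s')
    elif ':' in name:
        category = name.split(':')[0].strip().lower().rstrip('s')
    # Org-level creation only applies when the repo belongs to an organization.
    if is_org_repo and category in ORG_LABEL_CATEGORIES:
        return 'organization'
    return 'repository'
```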
    async def create_label_smart(
        self,
        name: str,
        color: str,
        description: Optional[str] = None,
        repo: Optional[str] = None
    ) -> Dict:
        """
        Create a label at the appropriate level (org or repo) based on category.
        Skips if label already exists (checks both org and repo levels).

        Organization labels: Agent, Complexity, Effort, Priority, Risk, Source, Type
        Repository labels: Component, Tech

        Args:
            name: Label name (e.g., 'Type/Bug', 'Component/Backend')
            color: Hex color code
            description: Optional label description
            repo: Repository in 'owner/repo' format

        Returns:
            Created label dictionary with 'level' key, or 'skipped' if already exists
        """
        loop = asyncio.get_event_loop()

        target_repo = repo or self.gitea.repo
        if not target_repo or '/' not in target_repo:
            raise ValueError("Use 'owner/repo' format (e.g. 'org/repo-name')")

        owner = target_repo.split('/')[0]
        is_org = await loop.run_in_executor(
            None,
            lambda: self.gitea.is_org_repo(target_repo)
        )

        # Fetch existing labels to check for duplicates
        existing_labels = await self.get_labels(target_repo)
        all_existing = existing_labels.get('organization', []) + existing_labels.get('repository', [])
        existing_names = [label['name'].lower() for label in all_existing]

        # Normalize the new label name for comparison
        name_normalized = name.lower()

        # Also check for format variations (Type/Bug vs Type: Bug)
        name_variations = [name_normalized]
        if '/' in name:
            name_variations.append(name.replace('/', ': ').lower())
            name_variations.append(name.replace('/', ':').lower())
        elif ': ' in name:
            name_variations.append(name.replace(': ', '/').lower())
        elif ':' in name:
            name_variations.append(name.replace(':', '/').lower())
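The variation list above makes duplicate detection format-insensitive: a new label counts as existing if any spelling variant is already present. A standalone sketch of that logic (sample names only):

```python
def name_variations(name: str) -> list[str]:
    """Spelling variants of a label name, as in create_label_smart."""
    variations = [name.lower()]
    if '/' in name:
        variations.append(name.replace('/', ': ').lower())
        variations.append(name.replace('/', ':').lower())
    elif ': ' in name:
        variations.append(name.replace(': ', '/').lower())
    elif ':' in name:
        variations.append(name.replace(':', '/').lower())
    return variations

# 'Type/Bug' is treated as a duplicate of an existing 'Type: Bug'.
existing = {'type: bug'}
is_duplicate = any(v in existing for v in name_variations('Type/Bug'))
```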
        # Check if label already exists in any format
        for variation in name_variations:
            if variation in existing_names:
                logger.info(f"Label '{name}' already exists (found as '{variation}'), skipping")
                return {
                    'name': name,
                    'skipped': True,
                    'reason': f"Label already exists",
                    'level': 'existing'
                }

        # Parse category from label name
        category = None
        if '/' in name:
            category = name.split('/')[0].lower().rstrip('s')
        elif ':' in name:
            category = name.split(':')[0].strip().lower().rstrip('s')

        # If it's an org repo and the category is an org-level category, create at org level
        if is_org and category in self.ORG_LABEL_CATEGORIES:
            result = await loop.run_in_executor(
                None,
                lambda: self.gitea.create_org_label(owner, name, color, description)
            )
            # Handle unexpected response types (API may return list or non-dict)
            if not isinstance(result, dict):
                logger.error(f"Unexpected API response type for org label: {type(result)} - {result}")
                return {
                    'name': name,
                    'error': True,
                    'reason': f"API returned {type(result).__name__} instead of dict: {result}",
                    'level': 'organization'
                }
            result['level'] = 'organization'
            result['skipped'] = False
            logger.info(f"Created organization label '{name}' in {owner}")
        else:
            # Create at repo level
            result = await loop.run_in_executor(
                None,
                lambda: self.gitea.create_label(name, color, description, target_repo)
            )
            # Handle unexpected response types (API may return list or non-dict)
            if not isinstance(result, dict):
                logger.error(f"Unexpected API response type for repo label: {type(result)} - {result}")
                return {
                    'name': name,
                    'error': True,
                    'reason': f"API returned {type(result).__name__} instead of dict: {result}",
                    'level': 'repository'
                }
            result['level'] = 'repository'
            result['skipped'] = False
            logger.info(f"Created repository label '{name}' in {target_repo}")

        return result
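`create_label_smart` returns one of three dict shapes: created (with `level`), skipped, or error. A hedged sketch of caller-side handling, with a made-up result value:

```python
# Illustrative result as documented above; the values here are invented.
result = {'name': 'Type/Bug', 'skipped': True, 'reason': 'Label already exists', 'level': 'existing'}

if result.get('error'):
    status = f"failed: {result['reason']}"
elif result.get('skipped'):
    status = f"skipped ({result['reason']})"
else:
    status = f"created at {result['level']} level"
```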
@@ -1,145 +0,0 @@
"""
Milestone management tools for MCP server.

Provides async wrappers for milestone operations:
- CRUD operations for milestones
- Milestone-sprint relationship tracking
"""
import asyncio
import logging
from typing import List, Dict, Optional

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


class MilestoneTools:
    """Async wrappers for Gitea milestone operations"""

    def __init__(self, gitea_client):
        """
        Initialize milestone tools.

        Args:
            gitea_client: GiteaClient instance
        """
        self.gitea = gitea_client

    async def list_milestones(
        self,
        state: str = 'open',
        repo: Optional[str] = None
    ) -> List[Dict]:
        """
        List all milestones in repository.

        Args:
            state: Milestone state (open, closed, all)
            repo: Repository in owner/repo format

        Returns:
            List of milestone dictionaries
        """
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.list_milestones(state, repo)
        )

    async def get_milestone(
        self,
        milestone_id: int,
        repo: Optional[str] = None
    ) -> Dict:
        """
        Get a specific milestone by ID.

        Args:
            milestone_id: Milestone ID
            repo: Repository in owner/repo format

        Returns:
            Milestone dictionary
        """
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.get_milestone(milestone_id, repo)
        )

    async def create_milestone(
        self,
        title: str,
        description: Optional[str] = None,
        due_on: Optional[str] = None,
        repo: Optional[str] = None
    ) -> Dict:
        """
        Create a new milestone.

        Args:
            title: Milestone title (e.g., "v2.0 Release", "Sprint 17")
            description: Milestone description
            due_on: Due date in ISO 8601 format (e.g., "2025-02-01T00:00:00Z")
            repo: Repository in owner/repo format

        Returns:
            Created milestone dictionary
        """
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.create_milestone(title, description, due_on, repo)
        )

    async def update_milestone(
        self,
        milestone_id: int,
        title: Optional[str] = None,
        description: Optional[str] = None,
        state: Optional[str] = None,
        due_on: Optional[str] = None,
        repo: Optional[str] = None
    ) -> Dict:
        """
        Update an existing milestone.

        Args:
            milestone_id: Milestone ID
            title: New title (optional)
            description: New description (optional)
            state: New state - 'open' or 'closed' (optional)
            due_on: New due date (optional)
            repo: Repository in owner/repo format

        Returns:
            Updated milestone dictionary
        """
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.update_milestone(
                milestone_id, title, description, state, due_on, repo
            )
        )

    async def delete_milestone(
        self,
        milestone_id: int,
        repo: Optional[str] = None
    ) -> bool:
        """
        Delete a milestone.

        Args:
            milestone_id: Milestone ID
            repo: Repository in owner/repo format

        Returns:
            True if deleted successfully
        """
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.delete_milestone(milestone_id, repo)
        )
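Every wrapper in these tool classes follows the same shape: a blocking client call is pushed to the default thread pool with `run_in_executor` so the MCP server's event loop stays responsive. A minimal runnable sketch of the pattern, with `StubClient` standing in for `GiteaClient`:

```python
import asyncio

class StubClient:
    """Stand-in for GiteaClient, for illustration only."""
    def list_milestones(self, state, repo):
        # A real client would make a blocking HTTP request here.
        return [{'id': 1, 'title': 'Sprint 17', 'state': state}]

async def list_milestones(client, state='open', repo=None):
    # get_running_loop() is the modern equivalent of get_event_loop()
    # inside a coroutine; the offloading pattern is otherwise identical.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(
        None,
        lambda: client.list_milestones(state, repo)
    )

milestones = asyncio.run(list_milestones(StubClient()))
```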
@@ -1,274 +0,0 @@
"""
Pull request management tools for MCP server.

Provides async wrappers for PR operations with:
- Branch-aware security
- PMO multi-repo support
- Comprehensive error handling
"""
import asyncio
import subprocess
import logging
from typing import List, Dict, Optional

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


class PullRequestTools:
    """Async wrappers for Gitea pull request operations with branch detection"""

    def __init__(self, gitea_client):
        """
        Initialize pull request tools.

        Args:
            gitea_client: GiteaClient instance
        """
        self.gitea = gitea_client

    def _get_current_branch(self) -> str:
        """
        Get current git branch.

        Returns:
            Current branch name or 'unknown' if not in a git repo
        """
        try:
            result = subprocess.run(
                ['git', 'rev-parse', '--abbrev-ref', 'HEAD'],
                capture_output=True,
                text=True,
                check=True
            )
            return result.stdout.strip()
        except subprocess.CalledProcessError:
            return "unknown"
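A standalone sketch of the branch probe above, with one caveat worth noting: if `git` is not installed at all, `subprocess.run` raises `FileNotFoundError`, which catching `CalledProcessError` alone does not cover. This variant (function name illustrative) catches both:

```python
import subprocess

def current_branch() -> str:
    """Return the current git branch, or 'unknown' if it cannot be determined."""
    try:
        result = subprocess.run(
            ['git', 'rev-parse', '--abbrev-ref', 'HEAD'],
            capture_output=True,
            text=True,
            check=True,
        )
        return result.stdout.strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        # Not a git repo, or git is missing entirely.
        return "unknown"
```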
    def _check_branch_permissions(self, operation: str) -> bool:
        """
        Check if operation is allowed on current branch.

        Args:
            operation: Operation name (list_prs, create_review, etc.)

        Returns:
            True if operation is allowed, False otherwise
        """
        branch = self._get_current_branch()

        # Read-only operations allowed everywhere
        read_ops = ['list_pull_requests', 'get_pull_request', 'get_pr_diff', 'get_pr_comments']

        # Production branches (read-only)
        if branch in ['main', 'master'] or branch.startswith('prod/'):
            return operation in read_ops

        # Staging branches (read-only for PRs, can comment)
        if branch == 'staging' or branch.startswith('stage/'):
            return operation in read_ops + ['add_pr_comment']

        # Development branches (full access)
        if branch in ['development', 'develop'] or branch.startswith(('feat/', 'feature/', 'dev/')):
            return True

        # Unknown branch - be restrictive
        return operation in read_ops
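The branch policy above can be factored into a pure function of `(operation, branch)`, which makes it easy to exercise without a git checkout (the function name is illustrative):

```python
READ_OPS = ['list_pull_requests', 'get_pull_request', 'get_pr_diff', 'get_pr_comments']

def allowed(operation: str, branch: str) -> bool:
    """Same rules as _check_branch_permissions, with the branch passed in."""
    if branch in ['main', 'master'] or branch.startswith('prod/'):
        return operation in READ_OPS
    if branch == 'staging' or branch.startswith('stage/'):
        return operation in READ_OPS + ['add_pr_comment']
    if branch in ['development', 'develop'] or branch.startswith(('feat/', 'feature/', 'dev/')):
        return True
    # Unknown branches fall back to read-only.
    return operation in READ_OPS
```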
    async def list_pull_requests(
        self,
        state: str = 'open',
        sort: str = 'recentupdate',
        labels: Optional[List[str]] = None,
        repo: Optional[str] = None
    ) -> List[Dict]:
        """
        List pull requests from repository (async wrapper).

        Args:
            state: PR state (open, closed, all)
            sort: Sort order
            labels: Filter by labels
            repo: Override configured repo (for PMO multi-repo)

        Returns:
            List of pull request dictionaries

        Raises:
            PermissionError: If operation not allowed on current branch
        """
        if not self._check_branch_permissions('list_pull_requests'):
            branch = self._get_current_branch()
            raise PermissionError(
                f"Cannot list PRs on branch '{branch}'. "
                f"Switch to a development branch."
            )

        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.list_pull_requests(state, sort, labels, repo)
        )

    async def get_pull_request(
        self,
        pr_number: int,
        repo: Optional[str] = None
    ) -> Dict:
        """
        Get specific pull request details (async wrapper).

        Args:
            pr_number: Pull request number
            repo: Override configured repo (for PMO multi-repo)

        Returns:
            Pull request dictionary

        Raises:
            PermissionError: If operation not allowed on current branch
        """
        if not self._check_branch_permissions('get_pull_request'):
            branch = self._get_current_branch()
            raise PermissionError(
                f"Cannot get PR on branch '{branch}'. "
                f"Switch to a development branch."
            )

        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.get_pull_request(pr_number, repo)
        )

    async def get_pr_diff(
        self,
        pr_number: int,
        repo: Optional[str] = None
    ) -> str:
        """
        Get pull request diff (async wrapper).

        Args:
            pr_number: Pull request number
            repo: Override configured repo (for PMO multi-repo)

        Returns:
            Diff as string

        Raises:
            PermissionError: If operation not allowed on current branch
        """
        if not self._check_branch_permissions('get_pr_diff'):
            branch = self._get_current_branch()
            raise PermissionError(
                f"Cannot get PR diff on branch '{branch}'. "
                f"Switch to a development branch."
            )

        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.get_pr_diff(pr_number, repo)
        )

    async def get_pr_comments(
        self,
        pr_number: int,
        repo: Optional[str] = None
    ) -> List[Dict]:
        """
        Get comments on a pull request (async wrapper).

        Args:
            pr_number: Pull request number
            repo: Override configured repo (for PMO multi-repo)

        Returns:
            List of comment dictionaries

        Raises:
            PermissionError: If operation not allowed on current branch
        """
        if not self._check_branch_permissions('get_pr_comments'):
            branch = self._get_current_branch()
            raise PermissionError(
                f"Cannot get PR comments on branch '{branch}'. "
                f"Switch to a development branch."
            )

        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.get_pr_comments(pr_number, repo)
        )

    async def create_pr_review(
        self,
        pr_number: int,
        body: str,
        event: str = 'COMMENT',
        comments: Optional[List[Dict]] = None,
        repo: Optional[str] = None
    ) -> Dict:
        """
        Create a review on a pull request (async wrapper with branch check).

        Args:
            pr_number: Pull request number
            body: Review body/summary
            event: Review action (APPROVE, REQUEST_CHANGES, COMMENT)
            comments: Optional list of inline comments
            repo: Override configured repo (for PMO multi-repo)

        Returns:
            Created review dictionary

        Raises:
            PermissionError: If operation not allowed on current branch
        """
        if not self._check_branch_permissions('create_pr_review'):
            branch = self._get_current_branch()
            raise PermissionError(
                f"Cannot create PR review on branch '{branch}'. "
                f"Switch to a development branch to review PRs."
            )

        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.create_pr_review(pr_number, body, event, comments, repo)
        )

    async def add_pr_comment(
        self,
        pr_number: int,
        body: str,
        repo: Optional[str] = None
    ) -> Dict:
        """
        Add a general comment to a pull request (async wrapper with branch check).

        Args:
            pr_number: Pull request number
            body: Comment text
            repo: Override configured repo (for PMO multi-repo)

        Returns:
            Created comment dictionary

        Raises:
            PermissionError: If operation not allowed on current branch
        """
        if not self._check_branch_permissions('add_pr_comment'):
            branch = self._get_current_branch()
            raise PermissionError(
                f"Cannot add PR comment on branch '{branch}'. "
                f"Switch to a development or staging branch to comment on PRs."
            )

        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.add_pr_comment(pr_number, body, repo)
        )
@@ -1,149 +0,0 @@
"""
Wiki management tools for MCP server.

Provides async wrappers for wiki operations to support lessons learned:
- Page CRUD operations
- Lessons learned creation and search
"""
import asyncio
import logging
from typing import List, Dict, Optional

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


class WikiTools:
    """Async wrappers for Gitea wiki operations"""

    def __init__(self, gitea_client):
        """
        Initialize wiki tools.

        Args:
            gitea_client: GiteaClient instance
        """
        self.gitea = gitea_client

    async def list_wiki_pages(self, repo: Optional[str] = None) -> List[Dict]:
        """List all wiki pages in repository."""
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.list_wiki_pages(repo)
        )

    async def get_wiki_page(
        self,
        page_name: str,
        repo: Optional[str] = None
    ) -> Dict:
        """Get a specific wiki page by name."""
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.get_wiki_page(page_name, repo)
        )

    async def create_wiki_page(
        self,
        title: str,
        content: str,
        repo: Optional[str] = None
    ) -> Dict:
        """Create a new wiki page."""
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.create_wiki_page(title, content, repo)
        )

    async def update_wiki_page(
        self,
        page_name: str,
        content: str,
        repo: Optional[str] = None
    ) -> Dict:
        """Update an existing wiki page."""
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.update_wiki_page(page_name, content, repo)
        )

    async def delete_wiki_page(
        self,
        page_name: str,
        repo: Optional[str] = None
    ) -> bool:
        """Delete a wiki page."""
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.delete_wiki_page(page_name, repo)
        )

    async def search_wiki_pages(
        self,
        query: str,
        repo: Optional[str] = None
    ) -> List[Dict]:
        """Search wiki pages by title."""
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.search_wiki_pages(query, repo)
        )

    async def create_lesson(
        self,
        title: str,
        content: str,
        tags: List[str],
        category: str = "sprints",
        repo: Optional[str] = None
    ) -> Dict:
        """
        Create a lessons learned entry in the wiki.

        Args:
            title: Lesson title (e.g., "Sprint 16 - Prevent Infinite Loops")
            content: Lesson content in markdown
            tags: List of tags for categorization
            category: Category (sprints, patterns, architecture, etc.)
            repo: Repository in owner/repo format

        Returns:
            Created wiki page
        """
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None,
            lambda: self.gitea.create_lesson(title, content, tags, category, repo)
        )

    async def search_lessons(
        self,
        query: Optional[str] = None,
        tags: Optional[List[str]] = None,
        limit: int = 20,
        repo: Optional[str] = None
    ) -> List[Dict]:
        """
        Search lessons learned from previous sprints.

        Args:
            query: Search query (optional)
            tags: Tags to filter by (optional)
            limit: Maximum results (default 20)
            repo: Repository in owner/repo format

        Returns:
            List of matching lessons
        """
        loop = asyncio.get_event_loop()
        results = await loop.run_in_executor(
            None,
            lambda: self.gitea.search_lessons(query, tags, repo)
        )
        return results[:limit]
@@ -1,6 +1,2 @@
-mcp>=0.9.0 # MCP SDK from Anthropic
-python-dotenv>=1.0.0 # Environment variable loading
-requests>=2.31.0 # HTTP client for Gitea API
-pydantic>=2.5.0 # Data validation
-pytest>=7.4.3 # Testing framework
-pytest-asyncio>=0.23.0 # Async testing support
+--extra-index-url https://gitea.hotserv.cloud/api/packages/personal-projects/pypi/simple
+gitea-mcp>=1.0.0
mcp-servers/gitea/run.sh (new executable file, 20 lines)
@@ -0,0 +1,20 @@
#!/bin/bash
# Capture original working directory before any cd operations
# This should be the user's project directory when launched by Claude Code
export CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/gitea/.venv"
LOCAL_VENV="$SCRIPT_DIR/.venv"

if [[ -f "$CACHE_VENV/bin/python" ]]; then
    PYTHON="$CACHE_VENV/bin/python"
elif [[ -f "$LOCAL_VENV/bin/python" ]]; then
    PYTHON="$LOCAL_VENV/bin/python"
else
    echo "ERROR: No venv found. Run: ./scripts/setup-venvs.sh" >&2
    exit 1
fi

cd "$SCRIPT_DIR"
exec "$PYTHON" -m gitea_mcp.server "$@"
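The first `export` relies on the `${VAR:-default}` parameter expansion: it keeps an explicitly set `CLAUDE_PROJECT_DIR` but falls back to the launch directory otherwise. A minimal demonstration of the expansion (variable name reused purely for illustration):

```shell
#!/bin/sh
# ${VAR:-default}: use VAR if set and non-empty, otherwise the default.
unset CLAUDE_PROJECT_DIR
CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"
echo "defaulted to: $CLAUDE_PROJECT_DIR"

CLAUDE_PROJECT_DIR="/tmp/myproject"
CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"
echo "preserved: $CLAUDE_PROJECT_DIR"
```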
@@ -1,260 +0,0 @@
"""
Unit tests for configuration loader.
"""
import pytest
from pathlib import Path
import os
from mcp_server.config import GiteaConfig


def test_load_system_config(tmp_path, monkeypatch):
    """Test loading system-level configuration"""
    # Mock home directory
    config_dir = tmp_path / '.config' / 'claude'
    config_dir.mkdir(parents=True)

    config_file = config_dir / 'gitea.env'
    config_file.write_text(
        "GITEA_API_URL=https://test.com/api/v1\n"
        "GITEA_API_TOKEN=test_token\n"
        "GITEA_OWNER=test_owner\n"
    )

    monkeypatch.setenv('HOME', str(tmp_path))
    monkeypatch.chdir(tmp_path)

    config = GiteaConfig()
    result = config.load()

    assert result['api_url'] == 'https://test.com/api/v1'
    assert result['api_token'] == 'test_token'
    assert result['owner'] == 'test_owner'
    assert result['mode'] == 'company'  # No repo specified
    assert result['repo'] is None


def test_project_config_override(tmp_path, monkeypatch):
    """Test that project config overrides system config"""
    # Set up system config
    system_config_dir = tmp_path / '.config' / 'claude'
    system_config_dir.mkdir(parents=True)

    system_config = system_config_dir / 'gitea.env'
    system_config.write_text(
        "GITEA_API_URL=https://test.com/api/v1\n"
        "GITEA_API_TOKEN=test_token\n"
        "GITEA_OWNER=test_owner\n"
    )

    # Set up project config
    project_dir = tmp_path / 'project'
    project_dir.mkdir()

    project_config = project_dir / '.env'
    project_config.write_text("GITEA_REPO=test_repo\n")

    monkeypatch.setenv('HOME', str(tmp_path))
    monkeypatch.chdir(project_dir)

    config = GiteaConfig()
    result = config.load()

    assert result['repo'] == 'test_repo'
    assert result['mode'] == 'project'


def test_missing_system_config(tmp_path, monkeypatch):
    """Test error handling for missing system configuration"""
    monkeypatch.setenv('HOME', str(tmp_path))
    monkeypatch.chdir(tmp_path)

    with pytest.raises(FileNotFoundError) as exc_info:
        config = GiteaConfig()
        config.load()

    assert "System config not found" in str(exc_info.value)


def test_missing_required_config(tmp_path, monkeypatch):
    """Test error handling for missing required variables"""
    # Clear environment variables
    for var in ['GITEA_API_URL', 'GITEA_API_TOKEN', 'GITEA_OWNER', 'GITEA_REPO']:
        monkeypatch.delenv(var, raising=False)

    # Create incomplete config
    config_dir = tmp_path / '.config' / 'claude'
    config_dir.mkdir(parents=True)

    config_file = config_dir / 'gitea.env'
    config_file.write_text(
        "GITEA_API_URL=https://test.com/api/v1\n"
        # Missing GITEA_API_TOKEN and GITEA_OWNER
    )

    monkeypatch.setenv('HOME', str(tmp_path))
    monkeypatch.chdir(tmp_path)

    with pytest.raises(ValueError) as exc_info:
        config = GiteaConfig()
        config.load()

    assert "Missing required configuration" in str(exc_info.value)


def test_mode_detection_project(tmp_path, monkeypatch):
    """Test mode detection for project mode"""
    config_dir = tmp_path / '.config' / 'claude'
    config_dir.mkdir(parents=True)

    config_file = config_dir / 'gitea.env'
    config_file.write_text(
        "GITEA_API_URL=https://test.com/api/v1\n"
        "GITEA_API_TOKEN=test_token\n"
        "GITEA_OWNER=test_owner\n"
        "GITEA_REPO=test_repo\n"
    )

    monkeypatch.setenv('HOME', str(tmp_path))
    monkeypatch.chdir(tmp_path)

    config = GiteaConfig()
    result = config.load()

    assert result['mode'] == 'project'
    assert result['repo'] == 'test_repo'


def test_mode_detection_company(tmp_path, monkeypatch):
    """Test mode detection for company mode (PMO)"""
    # Clear environment variables, especially GITEA_REPO
    for var in ['GITEA_API_URL', 'GITEA_API_TOKEN', 'GITEA_OWNER', 'GITEA_REPO']:
        monkeypatch.delenv(var, raising=False)

    config_dir = tmp_path / '.config' / 'claude'
    config_dir.mkdir(parents=True)

    config_file = config_dir / 'gitea.env'
    config_file.write_text(
        "GITEA_API_URL=https://test.com/api/v1\n"
        "GITEA_API_TOKEN=test_token\n"
        "GITEA_OWNER=test_owner\n"
        # No GITEA_REPO
    )

    monkeypatch.setenv('HOME', str(tmp_path))
    monkeypatch.chdir(tmp_path)

    config = GiteaConfig()
    result = config.load()

    assert result['mode'] == 'company'
    assert result['repo'] is None


# ========================================
# GIT URL PARSING TESTS
# ========================================

def test_parse_git_url_ssh_format():
    """Test parsing SSH format git URL"""
    config = GiteaConfig()

    # SSH with port: ssh://git@host:port/owner/repo.git
    url = "ssh://git@hotserv.tailc9b278.ts.net:2222/personal-projects/personal-portfolio.git"
    result = config._parse_git_url(url)
    assert result == "personal-projects/personal-portfolio"


def test_parse_git_url_ssh_short_format():
    """Test parsing SSH short format git URL"""
    config = GiteaConfig()

    # SSH short: git@host:owner/repo.git
    url = "git@github.com:owner/repo.git"
    result = config._parse_git_url(url)
    assert result == "owner/repo"


def test_parse_git_url_https_format():
    """Test parsing HTTPS format git URL"""
    config = GiteaConfig()

    # HTTPS: https://host/owner/repo.git
    url = "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git"
    result = config._parse_git_url(url)
    assert result == "personal-projects/leo-claude-mktplace"


def test_parse_git_url_http_format():
    """Test parsing HTTP format git URL"""
    config = GiteaConfig()

    # HTTP: http://host/owner/repo.git
    url = "http://gitea.hotserv.cloud/personal-projects/repo.git"
    result = config._parse_git_url(url)
    assert result == "personal-projects/repo"


def test_parse_git_url_without_git_suffix():
    """Test parsing git URL without .git suffix"""
    config = GiteaConfig()

    url = "https://github.com/owner/repo"
    result = config._parse_git_url(url)
    assert result == "owner/repo"


def test_parse_git_url_invalid_format():
    """Test parsing invalid git URL returns None"""
    config = GiteaConfig()

    url = "not-a-valid-url"
    result = config._parse_git_url(url)
    assert result is None


def test_find_project_directory_from_env(tmp_path, monkeypatch):
    """Test finding project directory from CLAUDE_PROJECT_DIR env var"""
    project_dir = tmp_path / 'my-project'
    project_dir.mkdir()
    (project_dir / '.git').mkdir()

    monkeypatch.setenv('CLAUDE_PROJECT_DIR', str(project_dir))

    config = GiteaConfig()
    result = config._find_project_directory()

    assert result == project_dir


def test_find_project_directory_from_cwd(tmp_path, monkeypatch):
    """Test finding project directory from cwd with .env file"""
    project_dir = tmp_path / 'project'
    project_dir.mkdir()
    (project_dir / '.env').write_text("GITEA_REPO=test/repo")

    monkeypatch.chdir(project_dir)
    # Clear env vars that might interfere
    monkeypatch.delenv('CLAUDE_PROJECT_DIR', raising=False)
    monkeypatch.delenv('PWD', raising=False)

    config = GiteaConfig()
    result = config._find_project_directory()

    assert result == project_dir


def test_find_project_directory_none_when_no_markers(tmp_path, monkeypatch):
    """Test returns None when no project markers found"""
    empty_dir = tmp_path / 'empty'
    empty_dir.mkdir()

    monkeypatch.chdir(empty_dir)
    monkeypatch.delenv('CLAUDE_PROJECT_DIR', raising=False)
    monkeypatch.delenv('PWD', raising=False)
    monkeypatch.delenv('GITEA_REPO', raising=False)

    config = GiteaConfig()
    result = config._find_project_directory()

    assert result is None
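The git URL parsing tests above pin down the URL shapes `_parse_git_url` must accept. A minimal standalone sketch that satisfies those cases (the function name and regexes here are illustrative, not the repository's actual implementation):

```python
import re
from typing import Optional

def parse_git_url(url: str) -> Optional[str]:
    """Extract 'owner/repo' from common git remote URL formats.

    Handles:
      - ssh://git@host:port/owner/repo.git
      - git@host:owner/repo.git   (scp-style short form)
      - http(s)://host/owner/repo[.git]
    Returns None when the URL matches none of these shapes.
    """
    # Scheme-style URLs: ssh://, https://, http://
    m = re.match(r'^(?:ssh|https?)://[^/]+/(.+?)(?:\.git)?/?$', url)
    if m:
        path = m.group(1)
    else:
        # scp-style short form: git@host:owner/repo.git
        m = re.match(r'^[^@]+@[^:]+:(.+?)(?:\.git)?$', url)
        if not m:
            return None
        path = m.group(1)
    parts = path.split('/')
    if len(parts) < 2:
        return None
    # Keep the last two path components: owner/repo
    return '/'.join(parts[-2:])
```

The lazy `(.+?)` plus optional `(?:\.git)?` suffix lets one pattern cover both `.git`-suffixed and bare URLs.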
@@ -1,268 +0,0 @@
"""
Unit tests for Gitea API client.
"""
import pytest
from unittest.mock import Mock, patch, MagicMock
from mcp_server.gitea_client import GiteaClient


@pytest.fixture
def mock_config():
    """Fixture providing mocked configuration"""
    with patch('mcp_server.gitea_client.GiteaConfig') as mock_cfg:
        mock_instance = mock_cfg.return_value
        mock_instance.load.return_value = {
            'api_url': 'https://test.com/api/v1',
            'api_token': 'test_token',
            'owner': 'test_owner',
            'repo': 'test_repo',
            'mode': 'project'
        }
        yield mock_cfg


@pytest.fixture
def gitea_client(mock_config):
    """Fixture providing GiteaClient instance with mocked config"""
    return GiteaClient()


def test_client_initialization(gitea_client):
    """Test client initializes with correct configuration"""
    assert gitea_client.base_url == 'https://test.com/api/v1'
    assert gitea_client.token == 'test_token'
    assert gitea_client.owner == 'test_owner'
    assert gitea_client.repo == 'test_repo'
    assert gitea_client.mode == 'project'
    assert 'Authorization' in gitea_client.session.headers
    assert gitea_client.session.headers['Authorization'] == 'token test_token'


def test_list_issues(gitea_client):
    """Test listing issues"""
    mock_response = Mock()
    mock_response.json.return_value = [
        {'number': 1, 'title': 'Test Issue 1'},
        {'number': 2, 'title': 'Test Issue 2'}
    ]
    mock_response.raise_for_status = Mock()

    with patch.object(gitea_client.session, 'get', return_value=mock_response):
        issues = gitea_client.list_issues(state='open')

        assert len(issues) == 2
        assert issues[0]['title'] == 'Test Issue 1'
        gitea_client.session.get.assert_called_once()


def test_list_issues_with_labels(gitea_client):
    """Test listing issues with label filter"""
    mock_response = Mock()
    mock_response.json.return_value = [{'number': 1, 'title': 'Bug Issue'}]
    mock_response.raise_for_status = Mock()

    with patch.object(gitea_client.session, 'get', return_value=mock_response):
        issues = gitea_client.list_issues(state='open', labels=['Type/Bug'])

        gitea_client.session.get.assert_called_once()
        call_args = gitea_client.session.get.call_args
        assert call_args[1]['params']['labels'] == 'Type/Bug'


def test_get_issue(gitea_client):
    """Test getting specific issue"""
    mock_response = Mock()
    mock_response.json.return_value = {'number': 1, 'title': 'Test Issue'}
    mock_response.raise_for_status = Mock()

    with patch.object(gitea_client.session, 'get', return_value=mock_response):
        issue = gitea_client.get_issue(1)

        assert issue['number'] == 1
        assert issue['title'] == 'Test Issue'


def test_create_issue(gitea_client):
    """Test creating new issue"""
    mock_response = Mock()
    mock_response.json.return_value = {
        'number': 1,
        'title': 'New Issue',
        'body': 'Issue body'
    }
    mock_response.raise_for_status = Mock()

    with patch.object(gitea_client.session, 'post', return_value=mock_response):
        issue = gitea_client.create_issue(
            title='New Issue',
            body='Issue body',
            labels=['Type/Bug']
        )

        assert issue['title'] == 'New Issue'
        gitea_client.session.post.assert_called_once()


def test_update_issue(gitea_client):
    """Test updating existing issue"""
    mock_response = Mock()
    mock_response.json.return_value = {
        'number': 1,
        'title': 'Updated Issue'
    }
    mock_response.raise_for_status = Mock()

    with patch.object(gitea_client.session, 'patch', return_value=mock_response):
        issue = gitea_client.update_issue(
            issue_number=1,
            title='Updated Issue'
        )

        assert issue['title'] == 'Updated Issue'
        gitea_client.session.patch.assert_called_once()


def test_add_comment(gitea_client):
    """Test adding comment to issue"""
    mock_response = Mock()
    mock_response.json.return_value = {'body': 'Test comment'}
    mock_response.raise_for_status = Mock()

    with patch.object(gitea_client.session, 'post', return_value=mock_response):
        comment = gitea_client.add_comment(1, 'Test comment')

        assert comment['body'] == 'Test comment'
        gitea_client.session.post.assert_called_once()


def test_get_labels(gitea_client):
    """Test getting repository labels"""
    mock_response = Mock()
    mock_response.json.return_value = [
        {'name': 'Type/Bug'},
        {'name': 'Priority/High'}
    ]
    mock_response.raise_for_status = Mock()

    with patch.object(gitea_client.session, 'get', return_value=mock_response):
        labels = gitea_client.get_labels()

        assert len(labels) == 2
        assert labels[0]['name'] == 'Type/Bug'


def test_get_org_labels(gitea_client):
    """Test getting organization labels"""
    mock_response = Mock()
    mock_response.json.return_value = [
        {'name': 'Type/Bug'},
        {'name': 'Type/Feature'}
    ]
    mock_response.raise_for_status = Mock()

    with patch.object(gitea_client.session, 'get', return_value=mock_response):
        labels = gitea_client.get_org_labels()

        assert len(labels) == 2


def test_list_repos(gitea_client):
    """Test listing organization repositories (PMO mode)"""
    mock_response = Mock()
    mock_response.json.return_value = [
        {'name': 'repo1'},
        {'name': 'repo2'}
    ]
    mock_response.raise_for_status = Mock()

    with patch.object(gitea_client.session, 'get', return_value=mock_response):
        repos = gitea_client.list_repos()

        assert len(repos) == 2
        assert repos[0]['name'] == 'repo1'


def test_aggregate_issues(gitea_client):
    """Test aggregating issues across repositories (PMO mode)"""
    # Mock list_repos
    gitea_client.list_repos = Mock(return_value=[
        {'name': 'repo1'},
        {'name': 'repo2'}
    ])

    # Mock list_issues
    gitea_client.list_issues = Mock(side_effect=[
        [{'number': 1, 'title': 'Issue 1'}],  # repo1
        [{'number': 2, 'title': 'Issue 2'}]   # repo2
    ])

    aggregated = gitea_client.aggregate_issues(state='open')

    assert 'repo1' in aggregated
    assert 'repo2' in aggregated
    assert len(aggregated['repo1']) == 1
    assert len(aggregated['repo2']) == 1


def test_no_repo_specified_error(gitea_client):
    """Test error when repository not specified"""
    # Create client without repo
    with patch('mcp_server.gitea_client.GiteaConfig') as mock_cfg:
        mock_instance = mock_cfg.return_value
        mock_instance.load.return_value = {
            'api_url': 'https://test.com/api/v1',
            'api_token': 'test_token',
            'owner': 'test_owner',
            'repo': None,  # No repo
            'mode': 'company'
        }
        client = GiteaClient()

    with pytest.raises(ValueError) as exc_info:
        client.list_issues()

    assert "Repository not specified" in str(exc_info.value)


# ========================================
# ORGANIZATION DETECTION TESTS
# ========================================

def test_is_organization_true(gitea_client):
    """Test _is_organization returns True for valid organization"""
    mock_response = Mock()
    mock_response.status_code = 200

    with patch.object(gitea_client.session, 'get', return_value=mock_response):
        result = gitea_client._is_organization('personal-projects')

        assert result is True
        gitea_client.session.get.assert_called_once_with(
            'https://test.com/api/v1/orgs/personal-projects'
        )


def test_is_organization_false(gitea_client):
    """Test _is_organization returns False for user account"""
    mock_response = Mock()
    mock_response.status_code = 404

    with patch.object(gitea_client.session, 'get', return_value=mock_response):
        result = gitea_client._is_organization('lmiranda')

        assert result is False


def test_is_org_repo_uses_orgs_endpoint(gitea_client):
    """Test is_org_repo uses /orgs endpoint instead of owner.type"""
    mock_response = Mock()
    mock_response.status_code = 200

    with patch.object(gitea_client.session, 'get', return_value=mock_response):
        result = gitea_client.is_org_repo('personal-projects/repo')

        assert result is True
        # Should call /orgs/personal-projects, not /repos/.../
        gitea_client.session.get.assert_called_once_with(
            'https://test.com/api/v1/orgs/personal-projects'
        )
@@ -1,159 +0,0 @@
"""
Unit tests for issue tools with branch detection.
"""
import pytest
from unittest.mock import Mock, patch, AsyncMock
from mcp_server.tools.issues import IssueTools


@pytest.fixture
def mock_gitea_client():
    """Fixture providing mocked Gitea client"""
    client = Mock()
    client.mode = 'project'
    return client


@pytest.fixture
def issue_tools(mock_gitea_client):
    """Fixture providing IssueTools instance"""
    return IssueTools(mock_gitea_client)


@pytest.mark.asyncio
async def test_list_issues_development_branch(issue_tools):
    """Test listing issues on development branch (allowed)"""
    with patch.object(issue_tools, '_get_current_branch', return_value='feat/test-feature'):
        issue_tools.gitea.list_issues = Mock(return_value=[{'number': 1}])

        issues = await issue_tools.list_issues(state='open')

        assert len(issues) == 1
        issue_tools.gitea.list_issues.assert_called_once()


@pytest.mark.asyncio
async def test_create_issue_development_branch(issue_tools):
    """Test creating issue on development branch (allowed)"""
    with patch.object(issue_tools, '_get_current_branch', return_value='development'):
        issue_tools.gitea.create_issue = Mock(return_value={'number': 1})

        issue = await issue_tools.create_issue('Test', 'Body')

        assert issue['number'] == 1
        issue_tools.gitea.create_issue.assert_called_once()


@pytest.mark.asyncio
async def test_create_issue_main_branch_blocked(issue_tools):
    """Test creating issue on main branch (blocked)"""
    with patch.object(issue_tools, '_get_current_branch', return_value='main'):
        with pytest.raises(PermissionError) as exc_info:
            await issue_tools.create_issue('Test', 'Body')

        assert "Cannot create issues on branch 'main'" in str(exc_info.value)


@pytest.mark.asyncio
async def test_create_issue_staging_branch_allowed(issue_tools):
    """Test creating issue on staging branch (allowed for documentation)"""
    with patch.object(issue_tools, '_get_current_branch', return_value='staging'):
        issue_tools.gitea.create_issue = Mock(return_value={'number': 1})

        issue = await issue_tools.create_issue('Test', 'Body')

        assert issue['number'] == 1


@pytest.mark.asyncio
async def test_update_issue_main_branch_blocked(issue_tools):
    """Test updating issue on main branch (blocked)"""
    with patch.object(issue_tools, '_get_current_branch', return_value='main'):
        with pytest.raises(PermissionError) as exc_info:
            await issue_tools.update_issue(1, title='Updated')

        assert "Cannot update issues on branch 'main'" in str(exc_info.value)


@pytest.mark.asyncio
async def test_list_issues_main_branch_allowed(issue_tools):
    """Test listing issues on main branch (allowed - read-only)"""
    with patch.object(issue_tools, '_get_current_branch', return_value='main'):
        issue_tools.gitea.list_issues = Mock(return_value=[{'number': 1}])

        issues = await issue_tools.list_issues(state='open')

        assert len(issues) == 1


@pytest.mark.asyncio
async def test_get_issue(issue_tools):
    """Test getting specific issue"""
    with patch.object(issue_tools, '_get_current_branch', return_value='development'):
        issue_tools.gitea.get_issue = Mock(return_value={'number': 1, 'title': 'Test'})

        issue = await issue_tools.get_issue(1)

        assert issue['number'] == 1


@pytest.mark.asyncio
async def test_add_comment(issue_tools):
    """Test adding comment to issue"""
    with patch.object(issue_tools, '_get_current_branch', return_value='development'):
        issue_tools.gitea.add_comment = Mock(return_value={'body': 'Test comment'})

        comment = await issue_tools.add_comment(1, 'Test comment')

        assert comment['body'] == 'Test comment'


@pytest.mark.asyncio
async def test_aggregate_issues_company_mode(issue_tools):
    """Test aggregating issues in company mode"""
    issue_tools.gitea.mode = 'company'

    with patch.object(issue_tools, '_get_current_branch', return_value='development'):
        issue_tools.gitea.aggregate_issues = Mock(return_value={
            'repo1': [{'number': 1}],
            'repo2': [{'number': 2}]
        })

        aggregated = await issue_tools.aggregate_issues()

        assert 'repo1' in aggregated
        assert 'repo2' in aggregated


@pytest.mark.asyncio
async def test_aggregate_issues_project_mode_error(issue_tools):
    """Test that aggregate_issues fails in project mode"""
    issue_tools.gitea.mode = 'project'

    with patch.object(issue_tools, '_get_current_branch', return_value='development'):
        with pytest.raises(ValueError) as exc_info:
            await issue_tools.aggregate_issues()

        assert "only available in company mode" in str(exc_info.value)


def test_branch_detection():
    """Test branch detection logic"""
    tools = IssueTools(Mock())

    # Test development branches
    with patch.object(tools, '_get_current_branch', return_value='development'):
        assert tools._check_branch_permissions('create_issue') is True

    with patch.object(tools, '_get_current_branch', return_value='feat/new-feature'):
        assert tools._check_branch_permissions('create_issue') is True

    # Test production branches
    with patch.object(tools, '_get_current_branch', return_value='main'):
        assert tools._check_branch_permissions('create_issue') is False
        assert tools._check_branch_permissions('list_issues') is True

    # Test staging branches
    with patch.object(tools, '_get_current_branch', return_value='staging'):
        assert tools._check_branch_permissions('create_issue') is True
        assert tools._check_branch_permissions('update_issue') is False
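The issue-tool tests above imply a branch-permission matrix: reads are always allowed, main blocks all writes, staging allows creating issues but not updating them, and development/feature branches allow everything. A hypothetical standalone sketch of that matrix (names and exact rules are assumptions inferred from the tests, not the repository's code):

```python
# Read operations are safe on any branch; writes depend on the branch.
READ_OPS = {'list_issues', 'get_issue', 'aggregate_issues'}

def check_branch_permissions(operation: str, branch: str) -> bool:
    """Return True when `operation` is allowed on `branch`."""
    if operation in READ_OPS:
        return True  # read-only operations are always permitted
    if branch == 'main':
        return False  # production branch: no writes at all
    if branch == 'staging':
        # staging allows creating issues (e.g. documentation) but not updates
        return operation == 'create_issue'
    return True  # development and feature branches: full access
```

Binding the branch at call time (rather than at construction) mirrors how the tests patch `_get_current_branch` per scenario.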
@@ -1,478 +0,0 @@
|
|||||||
"""
|
|
||||||
Unit tests for label tools with suggestion logic.
|
|
||||||
"""
|
|
||||||
import pytest
|
|
||||||
from unittest.mock import Mock, patch
|
|
||||||
from mcp_server.tools.labels import LabelTools
|
|
||||||
|
|
||||||
|
|
||||||
@pytest.fixture
|
|
||||||
def mock_gitea_client():
|
|
||||||
"""Fixture providing mocked Gitea client"""
|
|
||||||
client = Mock()
|
|
||||||
client.repo = 'test_org/test_repo'
|
|
||||||
client.is_org_repo = Mock(return_value=True)
|
|
||||||
return client
|
|
||||||
|
|
||||||
|
|
||||||
@pytest.fixture
|
|
||||||
def label_tools(mock_gitea_client):
|
|
||||||
"""Fixture providing LabelTools instance"""
|
|
||||||
return LabelTools(mock_gitea_client)
|
|
||||||
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
|
||||||
async def test_get_labels(label_tools):
|
|
||||||
"""Test getting all labels (org + repo)"""
|
|
||||||
label_tools.gitea.get_org_labels = Mock(return_value=[
|
|
||||||
{'name': 'Type/Bug'},
|
|
||||||
{'name': 'Type/Feature'}
|
|
||||||
])
|
|
||||||
label_tools.gitea.get_labels = Mock(return_value=[
|
|
||||||
{'name': 'Component/Backend'},
|
|
||||||
{'name': 'Component/Frontend'}
|
|
||||||
])
|
|
||||||
|
|
||||||
result = await label_tools.get_labels()
|
|
||||||
|
|
||||||
assert len(result['organization']) == 2
|
|
||||||
assert len(result['repository']) == 2
|
|
||||||
assert result['total_count'] == 4
|
|
||||||
|
|
||||||
|
|
||||||
# ========================================
|
|
||||||
# LABEL LOOKUP TESTS (NEW)
|
|
||||||
# ========================================
|
|
||||||
|
|
||||||
def test_build_label_lookup_slash_format():
|
|
||||||
"""Test building label lookup with slash format labels"""
|
|
||||||
mock_client = Mock()
|
|
||||||
mock_client.repo = 'test/repo'
|
|
||||||
tools = LabelTools(mock_client)
|
|
||||||
|
|
||||||
labels = ['Type/Bug', 'Type/Feature', 'Priority/High', 'Priority/Low']
|
|
||||||
lookup = tools._build_label_lookup(labels)
|
|
||||||
|
|
||||||
assert 'type' in lookup
|
|
||||||
assert 'bug' in lookup['type']
|
|
||||||
assert lookup['type']['bug'] == 'Type/Bug'
|
|
||||||
assert lookup['type']['feature'] == 'Type/Feature'
|
|
||||||
assert 'priority' in lookup
|
|
||||||
assert lookup['priority']['high'] == 'Priority/High'
|
|
||||||
|
|
||||||
|
|
||||||
def test_build_label_lookup_colon_space_format():
|
|
||||||
"""Test building label lookup with colon-space format labels"""
|
|
||||||
mock_client = Mock()
|
|
||||||
mock_client.repo = 'test/repo'
|
|
||||||
tools = LabelTools(mock_client)
|
|
||||||
|
|
||||||
labels = ['Type: Bug', 'Type: Feature', 'Priority: High', 'Effort: M']
|
|
||||||
lookup = tools._build_label_lookup(labels)
|
|
||||||
|
|
||||||
assert 'type' in lookup
|
|
||||||
assert 'bug' in lookup['type']
|
|
||||||
assert lookup['type']['bug'] == 'Type: Bug'
|
|
||||||
assert lookup['type']['feature'] == 'Type: Feature'
|
|
||||||
assert 'priority' in lookup
|
|
||||||
assert lookup['priority']['high'] == 'Priority: High'
|
|
||||||
# Test singular "Effort" (not "Efforts")
|
|
||||||
assert 'effort' in lookup
|
|
||||||
assert lookup['effort']['m'] == 'Effort: M'
|
|
||||||
|
|
||||||
|
|
||||||
def test_build_label_lookup_efforts_normalization():
|
|
||||||
"""Test that 'Efforts' is normalized to 'effort' for matching"""
|
|
||||||
mock_client = Mock()
|
|
||||||
mock_client.repo = 'test/repo'
|
|
||||||
tools = LabelTools(mock_client)
|
|
||||||
|
|
||||||
labels = ['Efforts/XS', 'Efforts/S', 'Efforts/M']
|
|
||||||
lookup = tools._build_label_lookup(labels)
|
|
||||||
|
|
||||||
# 'Efforts' should be normalized to 'effort'
|
|
||||||
assert 'effort' in lookup
|
|
||||||
assert lookup['effort']['xs'] == 'Efforts/XS'
|
|
||||||
|
|
||||||
|
|
||||||
def test_find_label():
|
|
||||||
"""Test finding labels from lookup"""
|
|
||||||
mock_client = Mock()
|
|
||||||
mock_client.repo = 'test/repo'
|
|
||||||
tools = LabelTools(mock_client)
|
|
||||||
|
|
||||||
lookup = {
|
|
||||||
'type': {'bug': 'Type: Bug', 'feature': 'Type: Feature'},
|
|
||||||
'priority': {'high': 'Priority: High', 'low': 'Priority: Low'}
|
|
||||||
}
|
|
||||||
|
|
||||||
assert tools._find_label(lookup, 'type', 'bug') == 'Type: Bug'
|
|
||||||
assert tools._find_label(lookup, 'priority', 'high') == 'Priority: High'
|
|
||||||
assert tools._find_label(lookup, 'type', 'nonexistent') is None
|
|
||||||
assert tools._find_label(lookup, 'nonexistent', 'bug') is None
|
|
||||||
|
|
||||||
|
|
||||||
# ========================================
|
|
||||||
# SUGGEST LABELS WITH DYNAMIC FORMAT TESTS
|
|
||||||
# ========================================
|
|
||||||
|
|
||||||
def _create_tools_with_labels(labels):
|
|
||||||
"""Helper to create LabelTools with mocked labels"""
|
|
||||||
import asyncio
|
|
||||||
mock_client = Mock()
|
|
||||||
mock_client.repo = 'test/repo'
|
|
||||||
mock_client.is_org_repo = Mock(return_value=False)
|
|
||||||
mock_client.get_labels = Mock(return_value=[{'name': l} for l in labels])
|
|
||||||
return LabelTools(mock_client)
|
|
||||||
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
async def test_suggest_labels_with_slash_format():
    """Test label suggestion with slash format labels"""
    labels = [
        'Type/Bug', 'Type/Feature', 'Type/Refactor',
        'Priority/Critical', 'Priority/High', 'Priority/Medium', 'Priority/Low',
        'Complexity/Simple', 'Complexity/Medium', 'Complexity/Complex',
        'Component/Auth'
    ]
    tools = _create_tools_with_labels(labels)

    context = "Fix critical bug in login authentication"
    suggestions = await tools.suggest_labels(context)

    assert 'Type/Bug' in suggestions
    assert 'Priority/Critical' in suggestions
    assert 'Component/Auth' in suggestions


@pytest.mark.asyncio
async def test_suggest_labels_with_colon_space_format():
    """Test label suggestion with colon-space format labels"""
    labels = [
        'Type: Bug', 'Type: Feature', 'Type: Refactor',
        'Priority: Critical', 'Priority: High', 'Priority: Medium', 'Priority: Low',
        'Complexity: Simple', 'Complexity: Medium', 'Complexity: Complex',
        'Effort: XS', 'Effort: S', 'Effort: M', 'Effort: L', 'Effort: XL'
    ]
    tools = _create_tools_with_labels(labels)

    context = "Fix critical bug for tiny 1 hour fix"
    suggestions = await tools.suggest_labels(context)

    # Should return colon-space format labels
    assert 'Type: Bug' in suggestions
    assert 'Priority: Critical' in suggestions
    assert 'Effort: XS' in suggestions


@pytest.mark.asyncio
async def test_suggest_labels_bug():
    """Test label suggestion for bug context"""
    labels = [
        'Type/Bug', 'Type/Feature',
        'Priority/Critical', 'Priority/High', 'Priority/Medium', 'Priority/Low',
        'Complexity/Simple', 'Complexity/Medium', 'Complexity/Complex',
        'Component/Auth'
    ]
    tools = _create_tools_with_labels(labels)

    context = "Fix critical bug in login authentication"
    suggestions = await tools.suggest_labels(context)

    assert 'Type/Bug' in suggestions
    assert 'Priority/Critical' in suggestions
    assert 'Component/Auth' in suggestions


@pytest.mark.asyncio
async def test_suggest_labels_feature():
    """Test label suggestion for feature context"""
    labels = ['Type/Feature', 'Priority/Medium', 'Complexity/Medium']
    tools = _create_tools_with_labels(labels)

    context = "Add new feature to implement user dashboard"
    suggestions = await tools.suggest_labels(context)

    assert 'Type/Feature' in suggestions
    assert any('Priority' in label for label in suggestions)


@pytest.mark.asyncio
async def test_suggest_labels_refactor():
    """Test label suggestion for refactor context"""
    labels = ['Type/Refactor', 'Priority/Medium', 'Complexity/Medium', 'Component/Backend']
    tools = _create_tools_with_labels(labels)

    context = "Refactor architecture to extract service layer"
    suggestions = await tools.suggest_labels(context)

    assert 'Type/Refactor' in suggestions
    assert 'Component/Backend' in suggestions


@pytest.mark.asyncio
async def test_suggest_labels_documentation():
    """Test label suggestion for documentation context"""
    labels = ['Type/Documentation', 'Priority/Medium', 'Complexity/Medium', 'Component/API', 'Component/Docs']
    tools = _create_tools_with_labels(labels)

    context = "Update documentation for API endpoints"
    suggestions = await tools.suggest_labels(context)

    assert 'Type/Documentation' in suggestions
    assert 'Component/API' in suggestions or 'Component/Docs' in suggestions


@pytest.mark.asyncio
async def test_suggest_labels_priority():
    """Test priority detection in suggestions"""
    labels = ['Type/Feature', 'Priority/Critical', 'Priority/High', 'Priority/Medium', 'Priority/Low', 'Complexity/Medium']
    tools = _create_tools_with_labels(labels)

    # Critical priority
    context = "Urgent blocker in production"
    suggestions = await tools.suggest_labels(context)
    assert 'Priority/Critical' in suggestions

    # High priority
    context = "Important feature needed asap"
    suggestions = await tools.suggest_labels(context)
    assert 'Priority/High' in suggestions

    # Low priority
    context = "Nice-to-have optional improvement"
    suggestions = await tools.suggest_labels(context)
    assert 'Priority/Low' in suggestions


@pytest.mark.asyncio
async def test_suggest_labels_complexity():
    """Test complexity detection in suggestions"""
    labels = ['Type/Feature', 'Priority/Medium', 'Complexity/Simple', 'Complexity/Medium', 'Complexity/Complex']
    tools = _create_tools_with_labels(labels)

    # Simple complexity
    context = "Simple quick fix for typo"
    suggestions = await tools.suggest_labels(context)
    assert 'Complexity/Simple' in suggestions

    # Complex complexity
    context = "Complex challenging architecture redesign"
    suggestions = await tools.suggest_labels(context)
    assert 'Complexity/Complex' in suggestions


@pytest.mark.asyncio
async def test_suggest_labels_efforts():
    """Test efforts detection in suggestions"""
    labels = ['Type/Feature', 'Priority/Medium', 'Complexity/Medium', 'Efforts/XS', 'Efforts/S', 'Efforts/M', 'Efforts/L', 'Efforts/XL']
    tools = _create_tools_with_labels(labels)

    # XS effort
    context = "Tiny fix that takes 1 hour"
    suggestions = await tools.suggest_labels(context)
    assert 'Efforts/XS' in suggestions

    # L effort
    context = "Large feature taking 1 week"
    suggestions = await tools.suggest_labels(context)
    assert 'Efforts/L' in suggestions


@pytest.mark.asyncio
async def test_suggest_labels_components():
    """Test component detection in suggestions"""
    labels = ['Type/Feature', 'Priority/Medium', 'Complexity/Medium', 'Component/Backend', 'Component/Frontend', 'Component/API', 'Component/Database']
    tools = _create_tools_with_labels(labels)

    # Backend component
    context = "Update backend API service"
    suggestions = await tools.suggest_labels(context)
    assert 'Component/Backend' in suggestions
    assert 'Component/API' in suggestions

    # Frontend component
    context = "Fix frontend UI component"
    suggestions = await tools.suggest_labels(context)
    assert 'Component/Frontend' in suggestions

    # Database component
    context = "Add database migration for schema"
    suggestions = await tools.suggest_labels(context)
    assert 'Component/Database' in suggestions


@pytest.mark.asyncio
async def test_suggest_labels_tech_stack():
    """Test tech stack detection in suggestions"""
    labels = ['Type/Feature', 'Priority/Medium', 'Complexity/Medium', 'Tech/Python', 'Tech/FastAPI', 'Tech/Docker', 'Tech/PostgreSQL']
    tools = _create_tools_with_labels(labels)

    # Python
    context = "Update Python FastAPI endpoint"
    suggestions = await tools.suggest_labels(context)
    assert 'Tech/Python' in suggestions
    assert 'Tech/FastAPI' in suggestions

    # Docker
    context = "Fix Dockerfile configuration"
    suggestions = await tools.suggest_labels(context)
    assert 'Tech/Docker' in suggestions

    # PostgreSQL
    context = "Optimize PostgreSQL query"
    suggestions = await tools.suggest_labels(context)
    assert 'Tech/PostgreSQL' in suggestions


@pytest.mark.asyncio
async def test_suggest_labels_source():
    """Test source detection in suggestions"""
    labels = ['Type/Feature', 'Priority/Medium', 'Complexity/Medium', 'Source/Development', 'Source/Staging', 'Source/Production']
    tools = _create_tools_with_labels(labels)

    # Development
    context = "Issue found in development environment"
    suggestions = await tools.suggest_labels(context)
    assert 'Source/Development' in suggestions

    # Production
    context = "Critical production issue"
    suggestions = await tools.suggest_labels(context)
    assert 'Source/Production' in suggestions


@pytest.mark.asyncio
async def test_suggest_labels_risk():
    """Test risk detection in suggestions"""
    labels = ['Type/Feature', 'Priority/Medium', 'Complexity/Medium', 'Risk/High', 'Risk/Low']
    tools = _create_tools_with_labels(labels)

    # High risk
    context = "Breaking change to major API"
    suggestions = await tools.suggest_labels(context)
    assert 'Risk/High' in suggestions

    # Low risk
    context = "Safe minor update with low risk"
    suggestions = await tools.suggest_labels(context)
    assert 'Risk/Low' in suggestions


@pytest.mark.asyncio
async def test_suggest_labels_multiple_categories():
    """Test that suggestions span multiple categories"""
    labels = [
        'Type/Bug', 'Type/Feature',
        'Priority/Critical', 'Priority/Medium',
        'Complexity/Complex', 'Complexity/Medium',
        'Component/Backend', 'Component/API', 'Component/Auth',
        'Tech/FastAPI', 'Tech/PostgreSQL',
        'Source/Production'
    ]
    tools = _create_tools_with_labels(labels)

    context = """
    Urgent critical bug in production backend API service.
    Need to fix broken authentication endpoint.
    This is a complex issue requiring FastAPI and PostgreSQL expertise.
    """

    suggestions = await tools.suggest_labels(context)

    # Should have Type
    assert any('Type/' in label for label in suggestions)

    # Should have Priority
    assert any('Priority/' in label for label in suggestions)

    # Should have Component
    assert any('Component/' in label for label in suggestions)

    # Should have Tech
    assert any('Tech/' in label for label in suggestions)

    # Should have Source
    assert any('Source/' in label for label in suggestions)


@pytest.mark.asyncio
async def test_suggest_labels_empty_repo():
    """Test suggestions when no repo specified and no labels available"""
    mock_client = Mock()
    mock_client.repo = None
    tools = LabelTools(mock_client)

    context = "Fix a bug"
    suggestions = await tools.suggest_labels(context)

    # Should return empty list when no repo
    assert suggestions == []


@pytest.mark.asyncio
async def test_suggest_labels_no_matching_labels():
    """Test suggestions return empty when no matching labels exist"""
    labels = ['Custom/Label', 'Other/Thing']  # No standard labels
    tools = _create_tools_with_labels(labels)

    context = "Fix a bug"
    suggestions = await tools.suggest_labels(context)

    # Should return empty list since no Type/Bug or similar exists
    assert len(suggestions) == 0


@pytest.mark.asyncio
async def test_get_labels_org_owned_repo():
    """Test getting labels for organization-owned repository"""
    mock_client = Mock()
    mock_client.repo = 'myorg/myrepo'
    mock_client.is_org_repo = Mock(return_value=True)
    mock_client.get_org_labels = Mock(return_value=[
        {'name': 'Type/Bug', 'id': 1},
        {'name': 'Type/Feature', 'id': 2}
    ])
    mock_client.get_labels = Mock(return_value=[
        {'name': 'Component/Backend', 'id': 3}
    ])

    tools = LabelTools(mock_client)
    result = await tools.get_labels()

    # Should fetch both org and repo labels
    mock_client.is_org_repo.assert_called_once_with('myorg/myrepo')
    mock_client.get_org_labels.assert_called_once_with('myorg')
    mock_client.get_labels.assert_called_once_with('myorg/myrepo')

    assert len(result['organization']) == 2
    assert len(result['repository']) == 1
    assert result['total_count'] == 3


@pytest.mark.asyncio
async def test_get_labels_user_owned_repo():
    """Test getting labels for user-owned repository (no org labels)"""
    mock_client = Mock()
    mock_client.repo = 'lmiranda/personal-portfolio'
    mock_client.is_org_repo = Mock(return_value=False)
    mock_client.get_labels = Mock(return_value=[
        {'name': 'bug', 'id': 1},
        {'name': 'enhancement', 'id': 2}
    ])

    tools = LabelTools(mock_client)
    result = await tools.get_labels()

    # Should check if org repo
    mock_client.is_org_repo.assert_called_once_with('lmiranda/personal-portfolio')

    # Should NOT call get_org_labels for user-owned repos
    mock_client.get_org_labels.assert_not_called()

    # Should still get repo labels
    mock_client.get_labels.assert_called_once_with('lmiranda/personal-portfolio')

    assert len(result['organization']) == 0
    assert len(result['repository']) == 2
    assert result['total_count'] == 2

@@ -1,19 +1,17 @@
# NetBox MCP Server

MCP (Model Context Protocol) server for essential NetBox API integration with Claude Code.

## Overview

This MCP server provides Claude Code with focused access to the NetBox REST API for tracking **servers, services, IP addresses, and databases**. It has been optimized to include only essential tools:

- **DCIM** - Sites, Devices (servers/VPS), Interfaces
- **IPAM** - IP Addresses, Prefixes, Services (applications/databases)
- **Virtualization** - Clusters, Virtual Machines, VM Interfaces
- **Extras** - Tags, Journal Entries (audit/notes)

**Total:** 37 tools (~3,700 tokens) — down from 182 tools (~19,810 tokens).

## Installation
@@ -49,249 +47,227 @@ EOF

### 3. Register with Claude Code

Add to your Claude Code MCP configuration (`.claude/mcp.json` or project-level `.mcp.json`):

```json
{
  "mcpServers": {
    "netbox": {
      "command": "/path/to/mcp-servers/netbox/.venv/bin/python",
      "args": ["-m", "mcp_server"],
      "cwd": "/path/to/mcp-servers/netbox"
    }
  }
}
```

**Windows:**

```json
{
  "mcpServers": {
    "netbox": {
      "command": "C:\\path\\to\\mcp-servers\\netbox\\.venv\\Scripts\\python.exe",
      "args": ["-m", "mcp_server"],
      "cwd": "C:\\path\\to\\mcp-servers\\netbox"
    }
  }
}
```

## Available Tools (37 Total)

### DCIM: Sites, Devices, Interfaces (11 tools)

**Sites (4):**

| Tool | Description | Parameters |
|------|-------------|-----------|
| `dcim_list_sites` | List sites | `name`, `status` |
| `dcim_get_site` | Get site by ID | `id` (required) |
| `dcim_create_site` | Create site | `name`, `slug` (required), `status` |
| `dcim_update_site` | Update site | `id` (required), fields to update |

**Devices (4):**

| Tool | Description | Parameters |
|------|-------------|-----------|
| `dcim_list_devices` | List devices (servers/VPS) | `name`, `site_id`, `status`, `role_id` |
| `dcim_get_device` | Get device by ID | `id` (required) |
| `dcim_create_device` | Create device | `name`, `device_type`, `role`, `site` (required) |
| `dcim_update_device` | Update device | `id` (required), fields to update |

**Interfaces (3):**

| Tool | Description | Parameters |
|------|-------------|-----------|
| `dcim_list_interfaces` | List device interfaces | `device_id`, `name`, `type` |
| `dcim_get_interface` | Get interface by ID | `id` (required) |
| `dcim_create_interface` | Create interface | `device`, `name`, `type` (required) |

### IPAM: IPs, Prefixes, Services (10 tools)

**IP Addresses (4):**

| Tool | Description | Parameters |
|------|-------------|-----------|
| `ipam_list_ip_addresses` | List IP addresses | `address`, `device_id`, `status` |
| `ipam_get_ip_address` | Get IP by ID | `id` (required) |
| `ipam_create_ip_address` | Create IP address | `address` (required), `status`, `assigned_object_type` |
| `ipam_update_ip_address` | Update IP address | `id` (required), fields to update |

**Prefixes (3):**

| Tool | Description | Parameters |
|------|-------------|-----------|
| `ipam_list_prefixes` | List prefixes | `prefix`, `site_id`, `status` |
| `ipam_get_prefix` | Get prefix by ID | `id` (required) |
| `ipam_create_prefix` | Create prefix | `prefix` (required), `status`, `site` |

**Services (3):**

| Tool | Description | Parameters |
|------|-------------|-----------|
| `ipam_list_services` | List services (apps/databases) | `device_id`, `virtual_machine_id`, `name` |
| `ipam_get_service` | Get service by ID | `id` (required) |
| `ipam_create_service` | Create service | `name`, `ports`, `protocol` (required), `device`, `virtual_machine` |

### Virtualization: Clusters, VMs, VM Interfaces (10 tools)

**Clusters (3):**

| Tool | Description | Parameters |
|------|-------------|-----------|
| `virt_list_clusters` | List virtualization clusters | `name`, `site_id` |
| `virt_get_cluster` | Get cluster by ID | `id` (required) |
| `virt_create_cluster` | Create cluster | `name`, `type` (required), `site` |

**Virtual Machines (4):**

| Tool | Description | Parameters |
|------|-------------|-----------|
| `virt_list_vms` | List VMs | `name`, `cluster_id`, `site_id`, `status` |
| `virt_get_vm` | Get VM by ID | `id` (required) |
| `virt_create_vm` | Create VM | `name`, `cluster` (required), `vcpus`, `memory`, `disk` |
| `virt_update_vm` | Update VM | `id` (required), fields to update |

**VM Interfaces (3):**

| Tool | Description | Parameters |
|------|-------------|-----------|
| `virt_list_vm_ifaces` | List VM interfaces | `virtual_machine_id` |
| `virt_get_vm_iface` | Get VM interface by ID | `id` (required) |
| `virt_create_vm_iface` | Create VM interface | `virtual_machine`, `name` (required) |

### Extras: Tags, Journal Entries (6 tools)

**Tags (3):**

| Tool | Description | Parameters |
|------|-------------|-----------|
| `extras_list_tags` | List tags | `name` |
| `extras_get_tag` | Get tag by ID | `id` (required) |
| `extras_create_tag` | Create tag | `name`, `slug` (required), `color` |

**Journal Entries (3):**

| Tool | Description | Parameters |
|------|-------------|-----------|
| `extras_list_journal_entries` | List journal entries | `assigned_object_type`, `assigned_object_id` |
| `extras_get_journal_entry` | Get journal entry by ID | `id` (required) |
| `extras_create_journal_entry` | Create journal entry | `assigned_object_type`, `assigned_object_id`, `comments` (required), `kind` |
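Each tool in the tables above corresponds to one entry in the server's `TOOL_DEFINITIONS`. As an illustration only (the exact fields in this repo are an assumption), an entry plausibly follows the standard MCP tool schema:

```python
# Hypothetical sketch of one TOOL_DEFINITIONS entry, assuming the
# standard MCP tool schema (name, description, JSON Schema input).
dcim_get_site_tool = {
    "name": "dcim_get_site",
    "description": "Get site by ID",
    "inputSchema": {
        "type": "object",
        "properties": {"id": {"type": "integer", "description": "Site ID"}},
        "required": ["id"],
    },
}
```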

## Configuration

All configuration is done via environment variables in `~/.config/claude/netbox.env`:

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `NETBOX_API_URL` | Yes | — | NetBox API URL (e.g., `https://netbox.example.com/api`) |
| `NETBOX_API_TOKEN` | Yes | — | NetBox API token |
| `NETBOX_VERIFY_SSL` | No | `true` | Verify SSL certificates |
| `NETBOX_TIMEOUT` | No | `30` | Request timeout in seconds |
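As a convenience, the file can be created in one step; the URL and token below are placeholders, not working values:

```shell
mkdir -p ~/.config/claude
cat > ~/.config/claude/netbox.env <<'EOF'
NETBOX_API_URL=https://netbox.example.com/api
NETBOX_API_TOKEN=replace-with-your-token
NETBOX_VERIFY_SSL=true
NETBOX_TIMEOUT=30
EOF
chmod 600 ~/.config/claude/netbox.env  # keep the token readable only by you
```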

## Architecture

### Hybrid Configuration

- **System-level:** `~/.config/claude/netbox.env` (credentials)
- **Project-level:** `.env` (optional overrides)

### Tool Routing

Tool names follow the pattern `{module}_{action}_{resource}`:

- `dcim_list_sites` → `DCIMTools.list_sites()`
- `ipam_create_service` → `IPAMTools.create_service()`
- `virt_list_vms` → `VirtualizationTools.list_virtual_machines()`

Shortened names (`virt_*`) are mapped via `TOOL_NAME_MAP` to meet the 28-character MCP limit.
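The expansion step can be sketched as follows. `TOOL_NAME_MAP` is named in this README, but the mapping contents and the `route` helper here are illustrative assumptions, not the actual implementation:

```python
# Illustrative only: expand a shortened tool name, then split
# "{module}_{action}_{resource}" into (module, method).
TOOL_NAME_MAP = {
    "virt_list_vms": "virtualization_list_virtual_machines",
}


def route(tool_name: str) -> tuple:
    full_name = TOOL_NAME_MAP.get(tool_name, tool_name)
    module, _, method = full_name.partition("_")
    return module, method
```

For example, `route("virt_list_vms")` yields `("virtualization", "list_virtual_machines")`, while unshortened names like `dcim_list_sites` pass through unchanged.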

### Error Handling

All tools return JSON responses. Errors are caught and returned as:

```json
{
  "error": "Error message",
  "status_code": 404
}
```
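A minimal sketch of producing that envelope (the helper name is an assumption; the README specifies only the JSON shape):

```python
import json


def to_error_response(message: str, status_code: int) -> str:
    # Serialize a failure into the documented error envelope.
    return json.dumps({"error": message, "status_code": status_code})
```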

## Development

### Testing

```bash
# Test import
python -c "from mcp_server.server import NetBoxMCPServer; print('OK')"

# Test tool count
python -c "from mcp_server.server import TOOL_DEFINITIONS; print(f'{len(TOOL_DEFINITIONS)} tools')"
```

### File Structure

```
netbox/
├── mcp_server/
│   ├── __init__.py
│   ├── server.py             # Main MCP server (37 TOOL_DEFINITIONS)
│   ├── config.py             # Configuration loader
│   ├── netbox_client.py      # HTTP client wrapper
│   └── tools/
│       ├── __init__.py
│       ├── dcim.py           # Sites, Devices, Interfaces
│       ├── ipam.py           # IPs, Prefixes, Services
│       ├── virtualization.py # Clusters, VMs, VM Interfaces
│       └── extras.py         # Tags, Journal Entries
├── .venv/                    # Python virtual environment
├── requirements.txt
└── README.md
```
|
|
||||||
## API Coverage

This MCP server provides comprehensive coverage of the NetBox REST API v4.x:

- Full CRUD operations for all major models
- Filtering and search capabilities
- Special endpoints (available prefixes, available IPs)
- Automatic pagination handling
- Error handling with detailed messages

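The automatic pagination mentioned above can be sketched as follows, assuming NetBox's standard response envelope (`count`/`next`/`results`); the two-page fake API here is purely illustrative, standing in for real HTTP calls:

```python
from typing import Callable, Dict, List, Optional

def fetch_all(get_page: Callable[[str], Dict], first_url: str) -> List[Dict]:
    """Follow 'next' links until exhausted, concatenating each page's 'results'."""
    results: List[Dict] = []
    url: Optional[str] = first_url
    while url:
        page = get_page(url)
        results.extend(page["results"])
        url = page["next"]
    return results

# Fake two-page API standing in for real HTTP requests:
pages = {
    "/api/dcim/sites/?limit=2": {
        "next": "/api/dcim/sites/?limit=2&offset=2",
        "results": [{"id": 1}, {"id": 2}],
    },
    "/api/dcim/sites/?limit=2&offset=2": {"next": None, "results": [{"id": 3}]},
}
print(len(fetch_all(pages.get, "/api/dcim/sites/?limit=2")))  # 3
```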
## Error Handling

The server returns detailed error messages from the NetBox API, including:

- Validation errors
- Authentication failures
- Not found errors
- Permission errors

## Security Notes

- Keep API tokens secure; never commit them to version control
- Use environment variables or the system config file for credentials
- SSL verification is enabled by default
- Consider read-only tokens for query-only workflows

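Reading credentials from the environment at startup can look like this sketch; `NETBOX_API_TOKEN` is an assumed variable name (only `NETBOX_API_URL` and `NETBOX_VERIFY_SSL` appear elsewhere in this README):

```python
import os

def load_netbox_config() -> dict:
    # Read connection settings from the environment; never hard-code the token.
    return {
        "url": os.environ.get("NETBOX_API_URL", ""),
        "token": os.environ.get("NETBOX_API_TOKEN", ""),  # assumed variable name
        "verify_ssl": os.environ.get("NETBOX_VERIFY_SSL", "true").lower() != "false",
    }

print(load_netbox_config()["verify_ssl"])  # True unless NETBOX_VERIFY_SSL=false
```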
## Troubleshooting

### MCP Server Won't Start

**Check configuration:**

```bash
cat ~/.config/claude/netbox.env
```

**Test credentials:**

```bash
curl -H "Authorization: Token YOUR_TOKEN" https://netbox.example.com/api/
```

### Tools Not Appearing in Claude

**Verify MCP registration:**

```bash
cat ~/.claude/mcp.json  # or project-level .mcp.json
```

**Check MCP server logs:**

Claude Code will show MCP server stderr in the UI.

### Connection Errors

- Verify `NETBOX_API_URL` ends with `/api`
- Check firewall/network connectivity to the NetBox instance
- Ensure the API token has the required permissions

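The URL check above can be done mechanically; a trivial sketch:

```python
def looks_like_api_url(url: str) -> bool:
    # The server expects the base URL to end with /api (trailing slash tolerated).
    return url.rstrip("/").endswith("/api")

print(looks_like_api_url("https://netbox.example.com/api"))  # True
print(looks_like_api_url("https://netbox.example.com"))      # False
```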
## License

MIT License - See LICENSE file for details.

## Contributing

This MCP server is part of the leo-claude-mktplace project. For issues or contributions, refer to the main repository.

File diff suppressed because it is too large
@@ -1,20 +1,12 @@
 """NetBox MCP tools package."""
 from .dcim import DCIMTools
 from .ipam import IPAMTools
-from .circuits import CircuitsTools
 from .virtualization import VirtualizationTools
-from .tenancy import TenancyTools
-from .vpn import VPNTools
-from .wireless import WirelessTools
 from .extras import ExtrasTools
 
 __all__ = [
     'DCIMTools',
     'IPAMTools',
-    'CircuitsTools',
     'VirtualizationTools',
-    'TenancyTools',
-    'VPNTools',
-    'WirelessTools',
     'ExtrasTools',
 ]
@@ -1,373 +0,0 @@
-"""
-Circuits tools for NetBox MCP Server.
-
-Covers: Providers, Circuits, Circuit Types, Circuit Terminations, and related models.
-"""
-import logging
-from typing import List, Dict, Optional, Any
-from ..netbox_client import NetBoxClient
-
-logger = logging.getLogger(__name__)
-
-
-class CircuitsTools:
-    """Tools for Circuits operations in NetBox"""
-
-    def __init__(self, client: NetBoxClient):
-        self.client = client
-        self.base_endpoint = 'circuits'
-
-    # ==================== Providers ====================
-
-    async def list_providers(
-        self,
-        name: Optional[str] = None,
-        slug: Optional[str] = None,
-        **kwargs
-    ) -> List[Dict]:
-        """List all circuit providers."""
-        params = {k: v for k, v in {'name': name, 'slug': slug, **kwargs}.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/providers', params=params)
-
-    async def get_provider(self, id: int) -> Dict:
-        """Get a specific provider by ID."""
-        return self.client.get(f'{self.base_endpoint}/providers', id)
-
-    async def create_provider(
-        self,
-        name: str,
-        slug: str,
-        asns: Optional[List[int]] = None,
-        description: Optional[str] = None,
-        **kwargs
-    ) -> Dict:
-        """Create a new provider."""
-        data = {'name': name, 'slug': slug, **kwargs}
-        if asns:
-            data['asns'] = asns
-        if description:
-            data['description'] = description
-        return self.client.create(f'{self.base_endpoint}/providers', data)
-
-    async def update_provider(self, id: int, **kwargs) -> Dict:
-        """Update a provider."""
-        return self.client.patch(f'{self.base_endpoint}/providers', id, kwargs)
-
-    async def delete_provider(self, id: int) -> None:
-        """Delete a provider."""
-        self.client.delete(f'{self.base_endpoint}/providers', id)
-
-    # ==================== Provider Accounts ====================
-
-    async def list_provider_accounts(
-        self,
-        provider_id: Optional[int] = None,
-        name: Optional[str] = None,
-        account: Optional[str] = None,
-        **kwargs
-    ) -> List[Dict]:
-        """List all provider accounts."""
-        params = {k: v for k, v in {
-            'provider_id': provider_id, 'name': name, 'account': account, **kwargs
-        }.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/provider-accounts', params=params)
-
-    async def get_provider_account(self, id: int) -> Dict:
-        """Get a specific provider account by ID."""
-        return self.client.get(f'{self.base_endpoint}/provider-accounts', id)
-
-    async def create_provider_account(
-        self,
-        provider: int,
-        account: str,
-        name: Optional[str] = None,
-        description: Optional[str] = None,
-        **kwargs
-    ) -> Dict:
-        """Create a new provider account."""
-        data = {'provider': provider, 'account': account, **kwargs}
-        if name:
-            data['name'] = name
-        if description:
-            data['description'] = description
-        return self.client.create(f'{self.base_endpoint}/provider-accounts', data)
-
-    async def update_provider_account(self, id: int, **kwargs) -> Dict:
-        """Update a provider account."""
-        return self.client.patch(f'{self.base_endpoint}/provider-accounts', id, kwargs)
-
-    async def delete_provider_account(self, id: int) -> None:
-        """Delete a provider account."""
-        self.client.delete(f'{self.base_endpoint}/provider-accounts', id)
-
-    # ==================== Provider Networks ====================
-
-    async def list_provider_networks(
-        self,
-        provider_id: Optional[int] = None,
-        name: Optional[str] = None,
-        **kwargs
-    ) -> List[Dict]:
-        """List all provider networks."""
-        params = {k: v for k, v in {
-            'provider_id': provider_id, 'name': name, **kwargs
-        }.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/provider-networks', params=params)
-
-    async def get_provider_network(self, id: int) -> Dict:
-        """Get a specific provider network by ID."""
-        return self.client.get(f'{self.base_endpoint}/provider-networks', id)
-
-    async def create_provider_network(
-        self,
-        provider: int,
-        name: str,
-        service_id: Optional[str] = None,
-        description: Optional[str] = None,
-        **kwargs
-    ) -> Dict:
-        """Create a new provider network."""
-        data = {'provider': provider, 'name': name, **kwargs}
-        if service_id:
-            data['service_id'] = service_id
-        if description:
-            data['description'] = description
-        return self.client.create(f'{self.base_endpoint}/provider-networks', data)
-
-    async def update_provider_network(self, id: int, **kwargs) -> Dict:
-        """Update a provider network."""
-        return self.client.patch(f'{self.base_endpoint}/provider-networks', id, kwargs)
-
-    async def delete_provider_network(self, id: int) -> None:
-        """Delete a provider network."""
-        self.client.delete(f'{self.base_endpoint}/provider-networks', id)
-
-    # ==================== Circuit Types ====================
-
-    async def list_circuit_types(
-        self,
-        name: Optional[str] = None,
-        slug: Optional[str] = None,
-        **kwargs
-    ) -> List[Dict]:
-        """List all circuit types."""
-        params = {k: v for k, v in {'name': name, 'slug': slug, **kwargs}.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/circuit-types', params=params)
-
-    async def get_circuit_type(self, id: int) -> Dict:
-        """Get a specific circuit type by ID."""
-        return self.client.get(f'{self.base_endpoint}/circuit-types', id)
-
-    async def create_circuit_type(
-        self,
-        name: str,
-        slug: str,
-        color: Optional[str] = None,
-        description: Optional[str] = None,
-        **kwargs
-    ) -> Dict:
-        """Create a new circuit type."""
-        data = {'name': name, 'slug': slug, **kwargs}
-        if color:
-            data['color'] = color
-        if description:
-            data['description'] = description
-        return self.client.create(f'{self.base_endpoint}/circuit-types', data)
-
-    async def update_circuit_type(self, id: int, **kwargs) -> Dict:
-        """Update a circuit type."""
-        return self.client.patch(f'{self.base_endpoint}/circuit-types', id, kwargs)
-
-    async def delete_circuit_type(self, id: int) -> None:
-        """Delete a circuit type."""
-        self.client.delete(f'{self.base_endpoint}/circuit-types', id)
-
-    # ==================== Circuit Groups ====================
-
-    async def list_circuit_groups(
-        self,
-        name: Optional[str] = None,
-        slug: Optional[str] = None,
-        **kwargs
-    ) -> List[Dict]:
-        """List all circuit groups."""
-        params = {k: v for k, v in {'name': name, 'slug': slug, **kwargs}.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/circuit-groups', params=params)
-
-    async def get_circuit_group(self, id: int) -> Dict:
-        """Get a specific circuit group by ID."""
-        return self.client.get(f'{self.base_endpoint}/circuit-groups', id)
-
-    async def create_circuit_group(
-        self,
-        name: str,
-        slug: str,
-        description: Optional[str] = None,
-        **kwargs
-    ) -> Dict:
-        """Create a new circuit group."""
-        data = {'name': name, 'slug': slug, **kwargs}
-        if description:
-            data['description'] = description
-        return self.client.create(f'{self.base_endpoint}/circuit-groups', data)
-
-    async def update_circuit_group(self, id: int, **kwargs) -> Dict:
-        """Update a circuit group."""
-        return self.client.patch(f'{self.base_endpoint}/circuit-groups', id, kwargs)
-
-    async def delete_circuit_group(self, id: int) -> None:
-        """Delete a circuit group."""
-        self.client.delete(f'{self.base_endpoint}/circuit-groups', id)
-
-    # ==================== Circuit Group Assignments ====================
-
-    async def list_circuit_group_assignments(
-        self,
-        group_id: Optional[int] = None,
-        circuit_id: Optional[int] = None,
-        **kwargs
-    ) -> List[Dict]:
-        """List all circuit group assignments."""
-        params = {k: v for k, v in {
-            'group_id': group_id, 'circuit_id': circuit_id, **kwargs
-        }.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/circuit-group-assignments', params=params)
-
-    async def get_circuit_group_assignment(self, id: int) -> Dict:
-        """Get a specific circuit group assignment by ID."""
-        return self.client.get(f'{self.base_endpoint}/circuit-group-assignments', id)
-
-    async def create_circuit_group_assignment(
-        self,
-        group: int,
-        circuit: int,
-        priority: Optional[str] = None,
-        **kwargs
-    ) -> Dict:
-        """Create a new circuit group assignment."""
-        data = {'group': group, 'circuit': circuit, **kwargs}
-        if priority:
-            data['priority'] = priority
-        return self.client.create(f'{self.base_endpoint}/circuit-group-assignments', data)
-
-    async def update_circuit_group_assignment(self, id: int, **kwargs) -> Dict:
-        """Update a circuit group assignment."""
-        return self.client.patch(f'{self.base_endpoint}/circuit-group-assignments', id, kwargs)
-
-    async def delete_circuit_group_assignment(self, id: int) -> None:
-        """Delete a circuit group assignment."""
-        self.client.delete(f'{self.base_endpoint}/circuit-group-assignments', id)
-
-    # ==================== Circuits ====================
-
-    async def list_circuits(
-        self,
-        cid: Optional[str] = None,
-        provider_id: Optional[int] = None,
-        provider_account_id: Optional[int] = None,
-        type_id: Optional[int] = None,
-        status: Optional[str] = None,
-        tenant_id: Optional[int] = None,
-        site_id: Optional[int] = None,
-        **kwargs
-    ) -> List[Dict]:
-        """List all circuits with optional filtering."""
-        params = {k: v for k, v in {
-            'cid': cid, 'provider_id': provider_id, 'provider_account_id': provider_account_id,
-            'type_id': type_id, 'status': status, 'tenant_id': tenant_id, 'site_id': site_id, **kwargs
-        }.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/circuits', params=params)
-
-    async def get_circuit(self, id: int) -> Dict:
-        """Get a specific circuit by ID."""
-        return self.client.get(f'{self.base_endpoint}/circuits', id)
-
-    async def create_circuit(
-        self,
-        cid: str,
-        provider: int,
-        type: int,
-        status: str = 'active',
-        provider_account: Optional[int] = None,
-        tenant: Optional[int] = None,
-        install_date: Optional[str] = None,
-        termination_date: Optional[str] = None,
-        commit_rate: Optional[int] = None,
-        description: Optional[str] = None,
-        **kwargs
-    ) -> Dict:
-        """Create a new circuit."""
-        data = {'cid': cid, 'provider': provider, 'type': type, 'status': status, **kwargs}
-        for key, val in [
-            ('provider_account', provider_account), ('tenant', tenant),
-            ('install_date', install_date), ('termination_date', termination_date),
-            ('commit_rate', commit_rate), ('description', description)
-        ]:
-            if val is not None:
-                data[key] = val
-        return self.client.create(f'{self.base_endpoint}/circuits', data)
-
-    async def update_circuit(self, id: int, **kwargs) -> Dict:
-        """Update a circuit."""
-        return self.client.patch(f'{self.base_endpoint}/circuits', id, kwargs)
-
-    async def delete_circuit(self, id: int) -> None:
-        """Delete a circuit."""
-        self.client.delete(f'{self.base_endpoint}/circuits', id)
-
-    # ==================== Circuit Terminations ====================
-
-    async def list_circuit_terminations(
-        self,
-        circuit_id: Optional[int] = None,
-        site_id: Optional[int] = None,
-        provider_network_id: Optional[int] = None,
-        term_side: Optional[str] = None,
-        **kwargs
-    ) -> List[Dict]:
-        """List all circuit terminations."""
-        params = {k: v for k, v in {
-            'circuit_id': circuit_id, 'site_id': site_id,
-            'provider_network_id': provider_network_id, 'term_side': term_side, **kwargs
-        }.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/circuit-terminations', params=params)
-
-    async def get_circuit_termination(self, id: int) -> Dict:
-        """Get a specific circuit termination by ID."""
-        return self.client.get(f'{self.base_endpoint}/circuit-terminations', id)
-
-    async def create_circuit_termination(
-        self,
-        circuit: int,
-        term_side: str,
-        site: Optional[int] = None,
-        provider_network: Optional[int] = None,
-        port_speed: Optional[int] = None,
-        upstream_speed: Optional[int] = None,
-        xconnect_id: Optional[str] = None,
-        pp_info: Optional[str] = None,
-        description: Optional[str] = None,
-        **kwargs
-    ) -> Dict:
-        """Create a new circuit termination."""
-        data = {'circuit': circuit, 'term_side': term_side, **kwargs}
-        for key, val in [
-            ('site', site), ('provider_network', provider_network),
-            ('port_speed', port_speed), ('upstream_speed', upstream_speed),
-            ('xconnect_id', xconnect_id), ('pp_info', pp_info), ('description', description)
-        ]:
-            if val is not None:
-                data[key] = val
-        return self.client.create(f'{self.base_endpoint}/circuit-terminations', data)
-
-    async def update_circuit_termination(self, id: int, **kwargs) -> Dict:
-        """Update a circuit termination."""
-        return self.client.patch(f'{self.base_endpoint}/circuit-terminations', id, kwargs)
-
-    async def delete_circuit_termination(self, id: int) -> None:
-        """Delete a circuit termination."""
-        self.client.delete(f'{self.base_endpoint}/circuit-terminations', id)
-
-    async def get_circuit_termination_paths(self, id: int) -> Dict:
-        """Get cable paths for a circuit termination."""
-        return self.client.get(f'{self.base_endpoint}/circuit-terminations', f'{id}/paths')
@@ -1,7 +1,7 @@
 """
 DCIM (Data Center Infrastructure Management) tools for NetBox MCP Server.
 
-Covers: Sites, Locations, Racks, Devices, Cables, Interfaces, and related models.
+Covers: Sites, Devices, and Interfaces only.
 """
 import logging
 from typing import List, Dict, Optional, Any
@@ -17,74 +17,6 @@ class DCIMTools:
         self.client = client
         self.base_endpoint = 'dcim'
 
-    # ==================== Regions ====================
-
-    async def list_regions(
-        self,
-        name: Optional[str] = None,
-        slug: Optional[str] = None,
-        parent_id: Optional[int] = None,
-        **kwargs
-    ) -> List[Dict]:
-        """List all regions with optional filtering."""
-        params = {k: v for k, v in {
-            'name': name, 'slug': slug, 'parent_id': parent_id, **kwargs
-        }.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/regions', params=params)
-
-    async def get_region(self, id: int) -> Dict:
-        """Get a specific region by ID."""
-        return self.client.get(f'{self.base_endpoint}/regions', id)
-
-    async def create_region(self, name: str, slug: str, parent: Optional[int] = None, **kwargs) -> Dict:
-        """Create a new region."""
-        data = {'name': name, 'slug': slug, **kwargs}
-        if parent:
-            data['parent'] = parent
-        return self.client.create(f'{self.base_endpoint}/regions', data)
-
-    async def update_region(self, id: int, **kwargs) -> Dict:
-        """Update a region."""
-        return self.client.patch(f'{self.base_endpoint}/regions', id, kwargs)
-
-    async def delete_region(self, id: int) -> None:
-        """Delete a region."""
-        self.client.delete(f'{self.base_endpoint}/regions', id)
-
-    # ==================== Site Groups ====================
-
-    async def list_site_groups(
-        self,
-        name: Optional[str] = None,
-        slug: Optional[str] = None,
-        parent_id: Optional[int] = None,
-        **kwargs
-    ) -> List[Dict]:
-        """List all site groups with optional filtering."""
-        params = {k: v for k, v in {
-            'name': name, 'slug': slug, 'parent_id': parent_id, **kwargs
-        }.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/site-groups', params=params)
-
-    async def get_site_group(self, id: int) -> Dict:
-        """Get a specific site group by ID."""
-        return self.client.get(f'{self.base_endpoint}/site-groups', id)
-
-    async def create_site_group(self, name: str, slug: str, parent: Optional[int] = None, **kwargs) -> Dict:
-        """Create a new site group."""
-        data = {'name': name, 'slug': slug, **kwargs}
-        if parent:
-            data['parent'] = parent
-        return self.client.create(f'{self.base_endpoint}/site-groups', data)
-
-    async def update_site_group(self, id: int, **kwargs) -> Dict:
-        """Update a site group."""
-        return self.client.patch(f'{self.base_endpoint}/site-groups', id, kwargs)
-
-    async def delete_site_group(self, id: int) -> None:
-        """Delete a site group."""
-        self.client.delete(f'{self.base_endpoint}/site-groups', id)
-
     # ==================== Sites ====================
 
     async def list_sites(
@@ -142,359 +74,6 @@ class DCIMTools:
|
|||||||
"""Update a site."""
|
"""Update a site."""
|
||||||
return self.client.patch(f'{self.base_endpoint}/sites', id, kwargs)
|
return self.client.patch(f'{self.base_endpoint}/sites', id, kwargs)
|
||||||
|
|
||||||
async def delete_site(self, id: int) -> None:
|
|
||||||
"""Delete a site."""
|
|
||||||
self.client.delete(f'{self.base_endpoint}/sites', id)
|
|
||||||
|
|
||||||
# ==================== Locations ====================
|
|
||||||
|
|
||||||
async def list_locations(
|
|
||||||
self,
|
|
||||||
name: Optional[str] = None,
|
|
||||||
slug: Optional[str] = None,
|
|
||||||
site_id: Optional[int] = None,
|
|
||||||
parent_id: Optional[int] = None,
|
|
||||||
**kwargs
|
|
||||||
) -> List[Dict]:
|
|
||||||
"""List all locations with optional filtering."""
|
|
||||||
params = {k: v for k, v in {
|
|
||||||
'name': name, 'slug': slug, 'site_id': site_id, 'parent_id': parent_id, **kwargs
|
|
||||||
}.items() if v is not None}
|
|
||||||
return self.client.list(f'{self.base_endpoint}/locations', params=params)
|
|
||||||
|
|
||||||
async def get_location(self, id: int) -> Dict:
|
|
||||||
"""Get a specific location by ID."""
|
|
||||||
return self.client.get(f'{self.base_endpoint}/locations', id)
|
|
||||||
|
|
||||||
async def create_location(
|
|
||||||
self,
|
|
||||||
name: str,
|
|
||||||
slug: str,
|
|
||||||
site: int,
|
|
||||||
parent: Optional[int] = None,
|
|
||||||
**kwargs
|
|
||||||
) -> Dict:
|
|
||||||
"""Create a new location."""
|
|
||||||
data = {'name': name, 'slug': slug, 'site': site, **kwargs}
|
|
||||||
if parent:
|
|
||||||
data['parent'] = parent
|
|
||||||
return self.client.create(f'{self.base_endpoint}/locations', data)
|
|
||||||
|
|
||||||
async def update_location(self, id: int, **kwargs) -> Dict:
|
|
||||||
"""Update a location."""
|
|
||||||
return self.client.patch(f'{self.base_endpoint}/locations', id, kwargs)
|
|
||||||
|
|
||||||
async def delete_location(self, id: int) -> None:
|
|
||||||
"""Delete a location."""
|
|
||||||
self.client.delete(f'{self.base_endpoint}/locations', id)
|
|
||||||
|
|
||||||
# ==================== Rack Roles ====================
|
|
||||||
|
|
||||||
async def list_rack_roles(self, name: Optional[str] = None, **kwargs) -> List[Dict]:
|
|
||||||
"""List all rack roles."""
|
|
||||||
params = {k: v for k, v in {'name': name, **kwargs}.items() if v is not None}
|
|
||||||
return self.client.list(f'{self.base_endpoint}/rack-roles', params=params)
|
|
||||||
|
|
||||||
async def get_rack_role(self, id: int) -> Dict:
|
|
||||||
"""Get a specific rack role by ID."""
|
|
||||||
return self.client.get(f'{self.base_endpoint}/rack-roles', id)
|
|
||||||
|
|
||||||
async def create_rack_role(self, name: str, slug: str, color: str = '9e9e9e', **kwargs) -> Dict:
|
|
||||||
"""Create a new rack role."""
|
|
||||||
data = {'name': name, 'slug': slug, 'color': color, **kwargs}
|
|
||||||
return self.client.create(f'{self.base_endpoint}/rack-roles', data)
|
|
||||||
|
|
||||||
async def update_rack_role(self, id: int, **kwargs) -> Dict:
|
|
||||||
"""Update a rack role."""
|
|
||||||
return self.client.patch(f'{self.base_endpoint}/rack-roles', id, kwargs)
|
|
||||||
|
|
||||||
async def delete_rack_role(self, id: int) -> None:
|
|
||||||
"""Delete a rack role."""
|
|
||||||
self.client.delete(f'{self.base_endpoint}/rack-roles', id)
|
|
||||||
|
|
||||||
# ==================== Rack Types ====================
|
|
||||||
|
|
||||||
async def list_rack_types(self, manufacturer_id: Optional[int] = None, **kwargs) -> List[Dict]:
|
|
||||||
"""List all rack types."""
|
|
||||||
params = {k: v for k, v in {'manufacturer_id': manufacturer_id, **kwargs}.items() if v is not None}
|
|
||||||
return self.client.list(f'{self.base_endpoint}/rack-types', params=params)
|
|
||||||
|
|
||||||
async def get_rack_type(self, id: int) -> Dict:
|
|
||||||
"""Get a specific rack type by ID."""
|
|
||||||
return self.client.get(f'{self.base_endpoint}/rack-types', id)
|
|
||||||
|
|
||||||
async def create_rack_type(
|
|
||||||
self,
|
|
||||||
manufacturer: int,
|
|
||||||
model: str,
|
|
||||||
slug: str,
|
|
||||||
form_factor: str = '4-post-frame',
|
|
||||||
width: int = 19,
|
|
||||||
u_height: int = 42,
|
|
||||||
**kwargs
|
|
||||||
) -> Dict:
|
|
||||||
"""Create a new rack type."""
|
|
||||||
data = {
|
|
||||||
'manufacturer': manufacturer, 'model': model, 'slug': slug,
|
|
||||||
'form_factor': form_factor, 'width': width, 'u_height': u_height, **kwargs
|
|
||||||
}
|
|
||||||
return self.client.create(f'{self.base_endpoint}/rack-types', data)
|
|
||||||
|
|
||||||
async def update_rack_type(self, id: int, **kwargs) -> Dict:
|
|
||||||
"""Update a rack type."""
|
|
||||||
return self.client.patch(f'{self.base_endpoint}/rack-types', id, kwargs)
|
|
||||||
|
|
||||||
async def delete_rack_type(self, id: int) -> None:
|
|
||||||
"""Delete a rack type."""
|
|
||||||
self.client.delete(f'{self.base_endpoint}/rack-types', id)
|
|
||||||
|
|
||||||
# ==================== Racks ====================
|
|
||||||
|
|
||||||
async def list_racks(
|
|
||||||
self,
|
|
||||||
name: Optional[str] = None,
|
|
||||||
site_id: Optional[int] = None,
|
|
||||||
location_id: Optional[int] = None,
|
|
||||||
status: Optional[str] = None,
|
|
||||||
role_id: Optional[int] = None,
|
|
||||||
tenant_id: Optional[int] = None,
|
|
||||||
**kwargs
|
|
||||||
) -> List[Dict]:
|
|
||||||
"""List all racks with optional filtering."""
|
|
||||||
params = {k: v for k, v in {
|
|
||||||
'name': name, 'site_id': site_id, 'location_id': location_id,
|
|
||||||
'status': status, 'role_id': role_id, 'tenant_id': tenant_id, **kwargs
|
|
||||||
}.items() if v is not None}
|
|
||||||
return self.client.list(f'{self.base_endpoint}/racks', params=params)
|
|
||||||
|
|
||||||
async def get_rack(self, id: int) -> Dict:
|
|
||||||
"""Get a specific rack by ID."""
|
|
||||||
return self.client.get(f'{self.base_endpoint}/racks', id)
|
|
||||||
|
|
||||||
async def create_rack(
|
|
||||||
self,
|
|
||||||
name: str,
|
|
||||||
site: int,
|
|
||||||
status: str = 'active',
|
|
||||||
location: Optional[int] = None,
|
|
||||||
role: Optional[int] = None,
|
|
||||||
tenant: Optional[int] = None,
|
|
||||||
rack_type: Optional[int] = None,
|
|
||||||
width: int = 19,
|
|
||||||
u_height: int = 42,
|
|
||||||
**kwargs
|
|
||||||
) -> Dict:
|
|
||||||
"""Create a new rack."""
|
|
||||||
data = {'name': name, 'site': site, 'status': status, 'width': width, 'u_height': u_height, **kwargs}
|
|
||||||
for key, val in [('location', location), ('role', role), ('tenant', tenant), ('rack_type', rack_type)]:
|
|
||||||
if val is not None:
|
|
||||||
data[key] = val
|
|
||||||
return self.client.create(f'{self.base_endpoint}/racks', data)
|
|
||||||
|
|
||||||
async def update_rack(self, id: int, **kwargs) -> Dict:
|
|
||||||
"""Update a rack."""
|
|
||||||
return self.client.patch(f'{self.base_endpoint}/racks', id, kwargs)
|
|
||||||
|
|
||||||
async def delete_rack(self, id: int) -> None:
|
|
||||||
"""Delete a rack."""
|
|
||||||
self.client.delete(f'{self.base_endpoint}/racks', id)
|
|
||||||
|
|
||||||
-    # ==================== Rack Reservations ====================
-
-    async def list_rack_reservations(
-        self,
-        rack_id: Optional[int] = None,
-        site_id: Optional[int] = None,
-        tenant_id: Optional[int] = None,
-        **kwargs
-    ) -> List[Dict]:
-        """List all rack reservations."""
-        params = {k: v for k, v in {
-            'rack_id': rack_id, 'site_id': site_id, 'tenant_id': tenant_id, **kwargs
-        }.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/rack-reservations', params=params)
-
-    async def get_rack_reservation(self, id: int) -> Dict:
-        """Get a specific rack reservation by ID."""
-        return self.client.get(f'{self.base_endpoint}/rack-reservations', id)
-
-    async def create_rack_reservation(
-        self,
-        rack: int,
-        units: List[int],
-        user: int,
-        description: str,
-        tenant: Optional[int] = None,
-        **kwargs
-    ) -> Dict:
-        """Create a new rack reservation."""
-        data = {'rack': rack, 'units': units, 'user': user, 'description': description, **kwargs}
-        if tenant:
-            data['tenant'] = tenant
-        return self.client.create(f'{self.base_endpoint}/rack-reservations', data)
-
-    async def update_rack_reservation(self, id: int, **kwargs) -> Dict:
-        """Update a rack reservation."""
-        return self.client.patch(f'{self.base_endpoint}/rack-reservations', id, kwargs)
-
-    async def delete_rack_reservation(self, id: int) -> None:
-        """Delete a rack reservation."""
-        self.client.delete(f'{self.base_endpoint}/rack-reservations', id)
-
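`create_rack_reservation` takes `units: List[int]`, i.e. the explicit rack-unit numbers to reserve, so reserving a contiguous block means expanding the range yourself. A hypothetical payload (IDs made up for illustration):

```python
# Reserve units 10-14 inclusive; range() is half-open, so stop is 15.
units = list(range(10, 15))

# Shape of the body create_rack_reservation would POST (hypothetical IDs).
data = {'rack': 7, 'units': units, 'user': 1, 'description': 'staging block'}
print(data['units'])  # → [10, 11, 12, 13, 14]
```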
-    # ==================== Manufacturers ====================
-
-    async def list_manufacturers(self, name: Optional[str] = None, **kwargs) -> List[Dict]:
-        """List all manufacturers."""
-        params = {k: v for k, v in {'name': name, **kwargs}.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/manufacturers', params=params)
-
-    async def get_manufacturer(self, id: int) -> Dict:
-        """Get a specific manufacturer by ID."""
-        return self.client.get(f'{self.base_endpoint}/manufacturers', id)
-
-    async def create_manufacturer(self, name: str, slug: str, **kwargs) -> Dict:
-        """Create a new manufacturer."""
-        data = {'name': name, 'slug': slug, **kwargs}
-        return self.client.create(f'{self.base_endpoint}/manufacturers', data)
-
-    async def update_manufacturer(self, id: int, **kwargs) -> Dict:
-        """Update a manufacturer."""
-        return self.client.patch(f'{self.base_endpoint}/manufacturers', id, kwargs)
-
-    async def delete_manufacturer(self, id: int) -> None:
-        """Delete a manufacturer."""
-        self.client.delete(f'{self.base_endpoint}/manufacturers', id)
-
-    # ==================== Device Types ====================
-
-    async def list_device_types(
-        self,
-        manufacturer_id: Optional[int] = None,
-        model: Optional[str] = None,
-        slug: Optional[str] = None,
-        **kwargs
-    ) -> List[Dict]:
-        """List all device types."""
-        params = {k: v for k, v in {
-            'manufacturer_id': manufacturer_id, 'model': model, 'slug': slug, **kwargs
-        }.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/device-types', params=params)
-
-    async def get_device_type(self, id: int) -> Dict:
-        """Get a specific device type by ID."""
-        return self.client.get(f'{self.base_endpoint}/device-types', id)
-
-    async def create_device_type(
-        self,
-        manufacturer: int,
-        model: str,
-        slug: str,
-        u_height: float = 1.0,
-        is_full_depth: bool = True,
-        **kwargs
-    ) -> Dict:
-        """Create a new device type."""
-        data = {
-            'manufacturer': manufacturer, 'model': model, 'slug': slug,
-            'u_height': u_height, 'is_full_depth': is_full_depth, **kwargs
-        }
-        return self.client.create(f'{self.base_endpoint}/device-types', data)
-
-    async def update_device_type(self, id: int, **kwargs) -> Dict:
-        """Update a device type."""
-        return self.client.patch(f'{self.base_endpoint}/device-types', id, kwargs)
-
-    async def delete_device_type(self, id: int) -> None:
-        """Delete a device type."""
-        self.client.delete(f'{self.base_endpoint}/device-types', id)
-
-    # ==================== Module Types ====================
-
-    async def list_module_types(self, manufacturer_id: Optional[int] = None, **kwargs) -> List[Dict]:
-        """List all module types."""
-        params = {k: v for k, v in {'manufacturer_id': manufacturer_id, **kwargs}.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/module-types', params=params)
-
-    async def get_module_type(self, id: int) -> Dict:
-        """Get a specific module type by ID."""
-        return self.client.get(f'{self.base_endpoint}/module-types', id)
-
-    async def create_module_type(self, manufacturer: int, model: str, **kwargs) -> Dict:
-        """Create a new module type."""
-        data = {'manufacturer': manufacturer, 'model': model, **kwargs}
-        return self.client.create(f'{self.base_endpoint}/module-types', data)
-
-    async def update_module_type(self, id: int, **kwargs) -> Dict:
-        """Update a module type."""
-        return self.client.patch(f'{self.base_endpoint}/module-types', id, kwargs)
-
-    async def delete_module_type(self, id: int) -> None:
-        """Delete a module type."""
-        self.client.delete(f'{self.base_endpoint}/module-types', id)
-
-    # ==================== Device Roles ====================
-
-    async def list_device_roles(self, name: Optional[str] = None, **kwargs) -> List[Dict]:
-        """List all device roles."""
-        params = {k: v for k, v in {'name': name, **kwargs}.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/device-roles', params=params)
-
-    async def get_device_role(self, id: int) -> Dict:
-        """Get a specific device role by ID."""
-        return self.client.get(f'{self.base_endpoint}/device-roles', id)
-
-    async def create_device_role(
-        self,
-        name: str,
-        slug: str,
-        color: str = '9e9e9e',
-        vm_role: bool = False,
-        **kwargs
-    ) -> Dict:
-        """Create a new device role."""
-        data = {'name': name, 'slug': slug, 'color': color, 'vm_role': vm_role, **kwargs}
-        return self.client.create(f'{self.base_endpoint}/device-roles', data)
-
-    async def update_device_role(self, id: int, **kwargs) -> Dict:
-        """Update a device role."""
-        return self.client.patch(f'{self.base_endpoint}/device-roles', id, kwargs)
-
-    async def delete_device_role(self, id: int) -> None:
-        """Delete a device role."""
-        self.client.delete(f'{self.base_endpoint}/device-roles', id)
-
-    # ==================== Platforms ====================
-
-    async def list_platforms(self, name: Optional[str] = None, manufacturer_id: Optional[int] = None, **kwargs) -> List[Dict]:
-        """List all platforms."""
-        params = {k: v for k, v in {'name': name, 'manufacturer_id': manufacturer_id, **kwargs}.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/platforms', params=params)
-
-    async def get_platform(self, id: int) -> Dict:
-        """Get a specific platform by ID."""
-        return self.client.get(f'{self.base_endpoint}/platforms', id)
-
-    async def create_platform(
-        self,
-        name: str,
-        slug: str,
-        manufacturer: Optional[int] = None,
-        **kwargs
-    ) -> Dict:
-        """Create a new platform."""
-        data = {'name': name, 'slug': slug, **kwargs}
-        if manufacturer:
-            data['manufacturer'] = manufacturer
-        return self.client.create(f'{self.base_endpoint}/platforms', data)
-
-    async def update_platform(self, id: int, **kwargs) -> Dict:
-        """Update a platform."""
-        return self.client.patch(f'{self.base_endpoint}/platforms', id, kwargs)
-
-    async def delete_platform(self, id: int) -> None:
-        """Delete a platform."""
-        self.client.delete(f'{self.base_endpoint}/platforms', id)
-
     # ==================== Devices ====================
 
     async def list_devices(
@@ -565,34 +144,6 @@ class DCIMTools:
         """Update a device."""
         return self.client.patch(f'{self.base_endpoint}/devices', id, kwargs)
 
-    async def delete_device(self, id: int) -> None:
-        """Delete a device."""
-        self.client.delete(f'{self.base_endpoint}/devices', id)
-
-    # ==================== Modules ====================
-
-    async def list_modules(self, device_id: Optional[int] = None, **kwargs) -> List[Dict]:
-        """List all modules."""
-        params = {k: v for k, v in {'device_id': device_id, **kwargs}.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/modules', params=params)
-
-    async def get_module(self, id: int) -> Dict:
-        """Get a specific module by ID."""
-        return self.client.get(f'{self.base_endpoint}/modules', id)
-
-    async def create_module(self, device: int, module_bay: int, module_type: int, **kwargs) -> Dict:
-        """Create a new module."""
-        data = {'device': device, 'module_bay': module_bay, 'module_type': module_type, **kwargs}
-        return self.client.create(f'{self.base_endpoint}/modules', data)
-
-    async def update_module(self, id: int, **kwargs) -> Dict:
-        """Update a module."""
-        return self.client.patch(f'{self.base_endpoint}/modules', id, kwargs)
-
-    async def delete_module(self, id: int) -> None:
-        """Delete a module."""
-        self.client.delete(f'{self.base_endpoint}/modules', id)
-
     # ==================== Interfaces ====================
 
     async def list_interfaces(
@@ -636,300 +187,3 @@ class DCIMTools:
             if val is not None:
                 data[key] = val
         return self.client.create(f'{self.base_endpoint}/interfaces', data)
-
-    async def update_interface(self, id: int, **kwargs) -> Dict:
-        """Update an interface."""
-        return self.client.patch(f'{self.base_endpoint}/interfaces', id, kwargs)
-
-    async def delete_interface(self, id: int) -> None:
-        """Delete an interface."""
-        self.client.delete(f'{self.base_endpoint}/interfaces', id)
-
-    # ==================== Console Ports ====================
-
-    async def list_console_ports(self, device_id: Optional[int] = None, **kwargs) -> List[Dict]:
-        """List all console ports."""
-        params = {k: v for k, v in {'device_id': device_id, **kwargs}.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/console-ports', params=params)
-
-    async def get_console_port(self, id: int) -> Dict:
-        """Get a specific console port by ID."""
-        return self.client.get(f'{self.base_endpoint}/console-ports', id)
-
-    async def create_console_port(self, device: int, name: str, **kwargs) -> Dict:
-        """Create a new console port."""
-        data = {'device': device, 'name': name, **kwargs}
-        return self.client.create(f'{self.base_endpoint}/console-ports', data)
-
-    async def update_console_port(self, id: int, **kwargs) -> Dict:
-        """Update a console port."""
-        return self.client.patch(f'{self.base_endpoint}/console-ports', id, kwargs)
-
-    async def delete_console_port(self, id: int) -> None:
-        """Delete a console port."""
-        self.client.delete(f'{self.base_endpoint}/console-ports', id)
-
-    # ==================== Console Server Ports ====================
-
-    async def list_console_server_ports(self, device_id: Optional[int] = None, **kwargs) -> List[Dict]:
-        """List all console server ports."""
-        params = {k: v for k, v in {'device_id': device_id, **kwargs}.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/console-server-ports', params=params)
-
-    async def get_console_server_port(self, id: int) -> Dict:
-        """Get a specific console server port by ID."""
-        return self.client.get(f'{self.base_endpoint}/console-server-ports', id)
-
-    async def create_console_server_port(self, device: int, name: str, **kwargs) -> Dict:
-        """Create a new console server port."""
-        data = {'device': device, 'name': name, **kwargs}
-        return self.client.create(f'{self.base_endpoint}/console-server-ports', data)
-
-    async def update_console_server_port(self, id: int, **kwargs) -> Dict:
-        """Update a console server port."""
-        return self.client.patch(f'{self.base_endpoint}/console-server-ports', id, kwargs)
-
-    async def delete_console_server_port(self, id: int) -> None:
-        """Delete a console server port."""
-        self.client.delete(f'{self.base_endpoint}/console-server-ports', id)
-
-    # ==================== Power Ports ====================
-
-    async def list_power_ports(self, device_id: Optional[int] = None, **kwargs) -> List[Dict]:
-        """List all power ports."""
-        params = {k: v for k, v in {'device_id': device_id, **kwargs}.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/power-ports', params=params)
-
-    async def get_power_port(self, id: int) -> Dict:
-        """Get a specific power port by ID."""
-        return self.client.get(f'{self.base_endpoint}/power-ports', id)
-
-    async def create_power_port(self, device: int, name: str, **kwargs) -> Dict:
-        """Create a new power port."""
-        data = {'device': device, 'name': name, **kwargs}
-        return self.client.create(f'{self.base_endpoint}/power-ports', data)
-
-    async def update_power_port(self, id: int, **kwargs) -> Dict:
-        """Update a power port."""
-        return self.client.patch(f'{self.base_endpoint}/power-ports', id, kwargs)
-
-    async def delete_power_port(self, id: int) -> None:
-        """Delete a power port."""
-        self.client.delete(f'{self.base_endpoint}/power-ports', id)
-
-    # ==================== Power Outlets ====================
-
-    async def list_power_outlets(self, device_id: Optional[int] = None, **kwargs) -> List[Dict]:
-        """List all power outlets."""
-        params = {k: v for k, v in {'device_id': device_id, **kwargs}.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/power-outlets', params=params)
-
-    async def get_power_outlet(self, id: int) -> Dict:
-        """Get a specific power outlet by ID."""
-        return self.client.get(f'{self.base_endpoint}/power-outlets', id)
-
-    async def create_power_outlet(self, device: int, name: str, **kwargs) -> Dict:
-        """Create a new power outlet."""
-        data = {'device': device, 'name': name, **kwargs}
-        return self.client.create(f'{self.base_endpoint}/power-outlets', data)
-
-    async def update_power_outlet(self, id: int, **kwargs) -> Dict:
-        """Update a power outlet."""
-        return self.client.patch(f'{self.base_endpoint}/power-outlets', id, kwargs)
-
-    async def delete_power_outlet(self, id: int) -> None:
-        """Delete a power outlet."""
-        self.client.delete(f'{self.base_endpoint}/power-outlets', id)
-
-    # ==================== Power Panels ====================
-
-    async def list_power_panels(self, site_id: Optional[int] = None, **kwargs) -> List[Dict]:
-        """List all power panels."""
-        params = {k: v for k, v in {'site_id': site_id, **kwargs}.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/power-panels', params=params)
-
-    async def get_power_panel(self, id: int) -> Dict:
-        """Get a specific power panel by ID."""
-        return self.client.get(f'{self.base_endpoint}/power-panels', id)
-
-    async def create_power_panel(self, site: int, name: str, location: Optional[int] = None, **kwargs) -> Dict:
-        """Create a new power panel."""
-        data = {'site': site, 'name': name, **kwargs}
-        if location:
-            data['location'] = location
-        return self.client.create(f'{self.base_endpoint}/power-panels', data)
-
-    async def update_power_panel(self, id: int, **kwargs) -> Dict:
-        """Update a power panel."""
-        return self.client.patch(f'{self.base_endpoint}/power-panels', id, kwargs)
-
-    async def delete_power_panel(self, id: int) -> None:
-        """Delete a power panel."""
-        self.client.delete(f'{self.base_endpoint}/power-panels', id)
-
-    # ==================== Power Feeds ====================
-
-    async def list_power_feeds(self, power_panel_id: Optional[int] = None, **kwargs) -> List[Dict]:
-        """List all power feeds."""
-        params = {k: v for k, v in {'power_panel_id': power_panel_id, **kwargs}.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/power-feeds', params=params)
-
-    async def get_power_feed(self, id: int) -> Dict:
-        """Get a specific power feed by ID."""
-        return self.client.get(f'{self.base_endpoint}/power-feeds', id)
-
-    async def create_power_feed(
-        self,
-        power_panel: int,
-        name: str,
-        status: str = 'active',
-        type: str = 'primary',
-        supply: str = 'ac',
-        phase: str = 'single-phase',
-        voltage: int = 120,
-        amperage: int = 20,
-        **kwargs
-    ) -> Dict:
-        """Create a new power feed."""
-        data = {
-            'power_panel': power_panel, 'name': name, 'status': status,
-            'type': type, 'supply': supply, 'phase': phase,
-            'voltage': voltage, 'amperage': amperage, **kwargs
-        }
-        return self.client.create(f'{self.base_endpoint}/power-feeds', data)
-
-    async def update_power_feed(self, id: int, **kwargs) -> Dict:
-        """Update a power feed."""
-        return self.client.patch(f'{self.base_endpoint}/power-feeds', id, kwargs)
-
-    async def delete_power_feed(self, id: int) -> None:
-        """Delete a power feed."""
-        self.client.delete(f'{self.base_endpoint}/power-feeds', id)
-
-    # ==================== Cables ====================
-
-    async def list_cables(
-        self,
-        site_id: Optional[int] = None,
-        device_id: Optional[int] = None,
-        rack_id: Optional[int] = None,
-        type: Optional[str] = None,
-        status: Optional[str] = None,
-        **kwargs
-    ) -> List[Dict]:
-        """List all cables."""
-        params = {k: v for k, v in {
-            'site_id': site_id, 'device_id': device_id, 'rack_id': rack_id,
-            'type': type, 'status': status, **kwargs
-        }.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/cables', params=params)
-
-    async def get_cable(self, id: int) -> Dict:
-        """Get a specific cable by ID."""
-        return self.client.get(f'{self.base_endpoint}/cables', id)
-
-    async def create_cable(
-        self,
-        a_terminations: List[Dict],
-        b_terminations: List[Dict],
-        type: Optional[str] = None,
-        status: str = 'connected',
-        label: Optional[str] = None,
-        color: Optional[str] = None,
-        length: Optional[float] = None,
-        length_unit: Optional[str] = None,
-        **kwargs
-    ) -> Dict:
-        """
-        Create a new cable.
-
-        a_terminations and b_terminations are lists of dicts with:
-        - object_type: e.g., 'dcim.interface'
-        - object_id: ID of the object
-        """
-        data = {
-            'a_terminations': a_terminations,
-            'b_terminations': b_terminations,
-            'status': status,
-            **kwargs
-        }
-        for key, val in [
-            ('type', type), ('label', label), ('color', color),
-            ('length', length), ('length_unit', length_unit)
-        ]:
-            if val is not None:
-                data[key] = val
-        return self.client.create(f'{self.base_endpoint}/cables', data)
-
-    async def update_cable(self, id: int, **kwargs) -> Dict:
-        """Update a cable."""
-        return self.client.patch(f'{self.base_endpoint}/cables', id, kwargs)
-
-    async def delete_cable(self, id: int) -> None:
-        """Delete a cable."""
-        self.client.delete(f'{self.base_endpoint}/cables', id)
-
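The `create_cable` docstring specifies each termination as a dict with `object_type` and `object_id`, which lets a cable join any two endpoint kinds (interface, front port, and so on). A hypothetical sketch of assembling that payload, with made-up IDs:

```python
def cable_payload(a_terminations, b_terminations, status='connected', **extra):
    # Mirrors the body create_cable POSTs: each end is a list of generic
    # object references rather than a single typed field.
    return {'a_terminations': a_terminations,
            'b_terminations': b_terminations,
            'status': status, **extra}

payload = cable_payload(
    [{'object_type': 'dcim.interface', 'object_id': 101}],   # hypothetical IDs
    [{'object_type': 'dcim.frontport', 'object_id': 202}],
    type='cat6', label='uplink-1',
)
print(payload['status'])  # → connected
```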
-    # ==================== Virtual Chassis ====================
-
-    async def list_virtual_chassis(self, **kwargs) -> List[Dict]:
-        """List all virtual chassis."""
-        return self.client.list(f'{self.base_endpoint}/virtual-chassis', params=kwargs)
-
-    async def get_virtual_chassis(self, id: int) -> Dict:
-        """Get a specific virtual chassis by ID."""
-        return self.client.get(f'{self.base_endpoint}/virtual-chassis', id)
-
-    async def create_virtual_chassis(self, name: str, domain: Optional[str] = None, **kwargs) -> Dict:
-        """Create a new virtual chassis."""
-        data = {'name': name, **kwargs}
-        if domain:
-            data['domain'] = domain
-        return self.client.create(f'{self.base_endpoint}/virtual-chassis', data)
-
-    async def update_virtual_chassis(self, id: int, **kwargs) -> Dict:
-        """Update a virtual chassis."""
-        return self.client.patch(f'{self.base_endpoint}/virtual-chassis', id, kwargs)
-
-    async def delete_virtual_chassis(self, id: int) -> None:
-        """Delete a virtual chassis."""
-        self.client.delete(f'{self.base_endpoint}/virtual-chassis', id)
-
-    # ==================== Inventory Items ====================
-
-    async def list_inventory_items(self, device_id: Optional[int] = None, **kwargs) -> List[Dict]:
-        """List all inventory items."""
-        params = {k: v for k, v in {'device_id': device_id, **kwargs}.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/inventory-items', params=params)
-
-    async def get_inventory_item(self, id: int) -> Dict:
-        """Get a specific inventory item by ID."""
-        return self.client.get(f'{self.base_endpoint}/inventory-items', id)
-
-    async def create_inventory_item(
-        self,
-        device: int,
-        name: str,
-        parent: Optional[int] = None,
-        manufacturer: Optional[int] = None,
-        part_id: Optional[str] = None,
-        serial: Optional[str] = None,
-        asset_tag: Optional[str] = None,
-        **kwargs
-    ) -> Dict:
-        """Create a new inventory item."""
-        data = {'device': device, 'name': name, **kwargs}
-        for key, val in [
-            ('parent', parent), ('manufacturer', manufacturer),
-            ('part_id', part_id), ('serial', serial), ('asset_tag', asset_tag)
-        ]:
-            if val is not None:
-                data[key] = val
-        return self.client.create(f'{self.base_endpoint}/inventory-items', data)
-
-    async def update_inventory_item(self, id: int, **kwargs) -> Dict:
-        """Update an inventory item."""
-        return self.client.patch(f'{self.base_endpoint}/inventory-items', id, kwargs)
-
-    async def delete_inventory_item(self, id: int) -> None:
-        """Delete an inventory item."""
-        self.client.delete(f'{self.base_endpoint}/inventory-items', id)
@@ -1,7 +1,7 @@
 """
 Extras tools for NetBox MCP Server.
 
-Covers: Tags, Custom Fields, Custom Links, Webhooks, Journal Entries, and more.
+Covers: Tags and Journal Entries only.
 """
 import logging
 from typing import List, Dict, Optional, Any
@@ -50,209 +50,6 @@ class ExtrasTools:
         data['description'] = description
         return self.client.create(f'{self.base_endpoint}/tags', data)
 
-    async def update_tag(self, id: int, **kwargs) -> Dict:
-        """Update a tag."""
-        return self.client.patch(f'{self.base_endpoint}/tags', id, kwargs)
-
-    async def delete_tag(self, id: int) -> None:
-        """Delete a tag."""
-        self.client.delete(f'{self.base_endpoint}/tags', id)
-
-    # ==================== Custom Fields ====================
-
-    async def list_custom_fields(
-        self,
-        name: Optional[str] = None,
-        type: Optional[str] = None,
-        content_types: Optional[str] = None,
-        **kwargs
-    ) -> List[Dict]:
-        """List all custom fields."""
-        params = {k: v for k, v in {
-            'name': name, 'type': type, 'content_types': content_types, **kwargs
-        }.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/custom-fields', params=params)
-
-    async def get_custom_field(self, id: int) -> Dict:
-        """Get a specific custom field by ID."""
-        return self.client.get(f'{self.base_endpoint}/custom-fields', id)
-
-    async def create_custom_field(
-        self,
-        name: str,
-        content_types: List[str],
-        type: str = 'text',
-        label: Optional[str] = None,
-        description: Optional[str] = None,
-        required: bool = False,
-        filter_logic: str = 'loose',
-        default: Optional[Any] = None,
-        weight: int = 100,
-        validation_minimum: Optional[int] = None,
-        validation_maximum: Optional[int] = None,
-        validation_regex: Optional[str] = None,
-        choice_set: Optional[int] = None,
-        **kwargs
-    ) -> Dict:
-        """Create a new custom field."""
-        data = {
-            'name': name, 'content_types': content_types, 'type': type,
-            'required': required, 'filter_logic': filter_logic, 'weight': weight, **kwargs
-        }
-        for key, val in [
-            ('label', label), ('description', description), ('default', default),
-            ('validation_minimum', validation_minimum), ('validation_maximum', validation_maximum),
-            ('validation_regex', validation_regex), ('choice_set', choice_set)
-        ]:
-            if val is not None:
-                data[key] = val
-        return self.client.create(f'{self.base_endpoint}/custom-fields', data)
-
-    async def update_custom_field(self, id: int, **kwargs) -> Dict:
-        """Update a custom field."""
-        return self.client.patch(f'{self.base_endpoint}/custom-fields', id, kwargs)
-
-    async def delete_custom_field(self, id: int) -> None:
-        """Delete a custom field."""
-        self.client.delete(f'{self.base_endpoint}/custom-fields', id)
-
-    # ==================== Custom Field Choice Sets ====================
-
-    async def list_custom_field_choice_sets(self, name: Optional[str] = None, **kwargs) -> List[Dict]:
-        """List all custom field choice sets."""
-        params = {k: v for k, v in {'name': name, **kwargs}.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/custom-field-choice-sets', params=params)
-
-    async def get_custom_field_choice_set(self, id: int) -> Dict:
-        """Get a specific custom field choice set by ID."""
-        return self.client.get(f'{self.base_endpoint}/custom-field-choice-sets', id)
-
-    async def create_custom_field_choice_set(
-        self,
-        name: str,
-        extra_choices: List[List[str]],
-        description: Optional[str] = None,
-        **kwargs
-    ) -> Dict:
-        """Create a new custom field choice set."""
-        data = {'name': name, 'extra_choices': extra_choices, **kwargs}
-        if description:
-            data['description'] = description
-        return self.client.create(f'{self.base_endpoint}/custom-field-choice-sets', data)
-
-    async def update_custom_field_choice_set(self, id: int, **kwargs) -> Dict:
-        """Update a custom field choice set."""
-        return self.client.patch(f'{self.base_endpoint}/custom-field-choice-sets', id, kwargs)
-
-    async def delete_custom_field_choice_set(self, id: int) -> None:
-        """Delete a custom field choice set."""
-        self.client.delete(f'{self.base_endpoint}/custom-field-choice-sets', id)
-
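The `extra_choices: List[List[str]]` annotation on `create_custom_field_choice_set` means each inner list is a `[value, label]` pair, matching the choice-set format NetBox expects. An illustrative sketch (the choices themselves are made up):

```python
# Each entry is a [value, label] pair, as List[List[str]] implies:
# the value is stored on the object, the label is what the UI displays.
extra_choices = [
    ['sfp-28', 'SFP28 (25GE)'],
    ['qsfp-28', 'QSFP28 (100GE)'],
]

values = [value for value, label in extra_choices]
print(values)  # → ['sfp-28', 'qsfp-28']
```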
-    # ==================== Custom Links ====================
-
-    async def list_custom_links(
-        self,
-        name: Optional[str] = None,
-        content_types: Optional[str] = None,
-        **kwargs
-    ) -> List[Dict]:
-        """List all custom links."""
-        params = {k: v for k, v in {
-            'name': name, 'content_types': content_types, **kwargs
-        }.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/custom-links', params=params)
-
-    async def get_custom_link(self, id: int) -> Dict:
-        """Get a specific custom link by ID."""
-        return self.client.get(f'{self.base_endpoint}/custom-links', id)
-
-    async def create_custom_link(
-        self,
-        name: str,
-        content_types: List[str],
-        link_text: str,
-        link_url: str,
-        enabled: bool = True,
-        new_window: bool = False,
-        weight: int = 100,
-        group_name: Optional[str] = None,
-        button_class: str = 'outline-dark',
-        **kwargs
-    ) -> Dict:
-        """Create a new custom link."""
-        data = {
-            'name': name, 'content_types': content_types,
-            'link_text': link_text, 'link_url': link_url,
-            'enabled': enabled, 'new_window': new_window,
-            'weight': weight, 'button_class': button_class, **kwargs
-        }
-        if group_name:
-            data['group_name'] = group_name
-        return self.client.create(f'{self.base_endpoint}/custom-links', data)
-
-    async def update_custom_link(self, id: int, **kwargs) -> Dict:
-        """Update a custom link."""
-        return self.client.patch(f'{self.base_endpoint}/custom-links', id, kwargs)
-
-    async def delete_custom_link(self, id: int) -> None:
-        """Delete a custom link."""
-        self.client.delete(f'{self.base_endpoint}/custom-links', id)
-
# ==================== Webhooks ====================
|
|
||||||
|
|
||||||
async def list_webhooks(self, name: Optional[str] = None, **kwargs) -> List[Dict]:
|
|
||||||
"""List all webhooks."""
|
|
||||||
params = {k: v for k, v in {'name': name, **kwargs}.items() if v is not None}
|
|
||||||
return self.client.list(f'{self.base_endpoint}/webhooks', params=params)
|
|
||||||
|
|
||||||
async def get_webhook(self, id: int) -> Dict:
|
|
||||||
"""Get a specific webhook by ID."""
|
|
||||||
return self.client.get(f'{self.base_endpoint}/webhooks', id)
|
|
||||||
|
|
||||||
async def create_webhook(
|
|
||||||
self,
|
|
||||||
name: str,
|
|
||||||
payload_url: str,
|
|
||||||
content_types: List[str],
|
|
||||||
type_create: bool = True,
|
|
||||||
type_update: bool = True,
|
|
||||||
type_delete: bool = True,
|
|
||||||
type_job_start: bool = False,
|
|
||||||
type_job_end: bool = False,
|
|
||||||
enabled: bool = True,
|
|
||||||
http_method: str = 'POST',
|
|
||||||
http_content_type: str = 'application/json',
|
|
||||||
additional_headers: Optional[str] = None,
|
|
||||||
body_template: Optional[str] = None,
|
|
||||||
secret: Optional[str] = None,
|
|
||||||
ssl_verification: bool = True,
|
|
||||||
ca_file_path: Optional[str] = None,
|
|
||||||
**kwargs
|
|
||||||
) -> Dict:
|
|
||||||
"""Create a new webhook."""
|
|
||||||
data = {
|
|
||||||
'name': name, 'payload_url': payload_url, 'content_types': content_types,
|
|
||||||
'type_create': type_create, 'type_update': type_update, 'type_delete': type_delete,
|
|
||||||
'type_job_start': type_job_start, 'type_job_end': type_job_end,
|
|
||||||
'enabled': enabled, 'http_method': http_method,
|
|
||||||
'http_content_type': http_content_type, 'ssl_verification': ssl_verification, **kwargs
|
|
||||||
}
|
|
||||||
for key, val in [
|
|
||||||
('additional_headers', additional_headers), ('body_template', body_template),
|
|
||||||
('secret', secret), ('ca_file_path', ca_file_path)
|
|
||||||
]:
|
|
||||||
if val is not None:
|
|
||||||
data[key] = val
|
|
||||||
return self.client.create(f'{self.base_endpoint}/webhooks', data)
|
|
||||||
|
|
||||||
async def update_webhook(self, id: int, **kwargs) -> Dict:
|
|
||||||
"""Update a webhook."""
|
|
||||||
return self.client.patch(f'{self.base_endpoint}/webhooks', id, kwargs)
|
|
||||||
|
|
||||||
async def delete_webhook(self, id: int) -> None:
|
|
||||||
"""Delete a webhook."""
|
|
||||||
self.client.delete(f'{self.base_endpoint}/webhooks', id)
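`create_webhook` follows the pattern used throughout this module: required fields and defaults are assembled up front, and optional fields are copied into the payload only when they are not `None`. A standalone sketch of that assembly (`build_webhook_payload` is an illustrative helper, not part of this codebase):

```python
from typing import Any, Dict, List, Optional

def build_webhook_payload(
    name: str,
    payload_url: str,
    content_types: List[str],
    http_method: str = 'POST',
    secret: Optional[str] = None,
    body_template: Optional[str] = None,
) -> Dict[str, Any]:
    """Mirror of create_webhook's payload assembly: defaults are always
    present; optional fields are included only when explicitly set."""
    data: Dict[str, Any] = {
        'name': name, 'payload_url': payload_url,
        'content_types': content_types, 'http_method': http_method,
    }
    for key, val in [('secret', secret), ('body_template', body_template)]:
        if val is not None:
            data[key] = val
    return data

payload = build_webhook_payload(
    'device-alerts', 'https://hooks.example.com/netbox',
    ['dcim.device'], secret='s3cret',
)
# 'body_template' never appears in the payload because it was left as None
```

This keeps the server-side defaults authoritative for anything the caller never mentioned, since absent keys are simply omitted from the request body.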

    # ==================== Journal Entries ====================

    async def list_journal_entries(
@@ -288,273 +85,3 @@ class ExtrasTools:
            'comments': comments, 'kind': kind, **kwargs
        }
        return self.client.create(f'{self.base_endpoint}/journal-entries', data)

    async def update_journal_entry(self, id: int, **kwargs) -> Dict:
        """Update a journal entry."""
        return self.client.patch(f'{self.base_endpoint}/journal-entries', id, kwargs)

    async def delete_journal_entry(self, id: int) -> None:
        """Delete a journal entry."""
        self.client.delete(f'{self.base_endpoint}/journal-entries', id)

    # ==================== Config Contexts ====================

    async def list_config_contexts(self, name: Optional[str] = None, **kwargs) -> List[Dict]:
        """List all config contexts."""
        params = {k: v for k, v in {'name': name, **kwargs}.items() if v is not None}
        return self.client.list(f'{self.base_endpoint}/config-contexts', params=params)

    async def get_config_context(self, id: int) -> Dict:
        """Get a specific config context by ID."""
        return self.client.get(f'{self.base_endpoint}/config-contexts', id)

    async def create_config_context(
        self,
        name: str,
        data: Dict[str, Any],
        weight: int = 1000,
        description: Optional[str] = None,
        is_active: bool = True,
        regions: Optional[List[int]] = None,
        site_groups: Optional[List[int]] = None,
        sites: Optional[List[int]] = None,
        locations: Optional[List[int]] = None,
        device_types: Optional[List[int]] = None,
        roles: Optional[List[int]] = None,
        platforms: Optional[List[int]] = None,
        cluster_types: Optional[List[int]] = None,
        cluster_groups: Optional[List[int]] = None,
        clusters: Optional[List[int]] = None,
        tenant_groups: Optional[List[int]] = None,
        tenants: Optional[List[int]] = None,
        tags: Optional[List[str]] = None,
        **kwargs
    ) -> Dict:
        """Create a new config context."""
        context_data = {
            'name': name, 'data': data, 'weight': weight, 'is_active': is_active, **kwargs
        }
        for key, val in [
            ('description', description), ('regions', regions),
            ('site_groups', site_groups), ('sites', sites),
            ('locations', locations), ('device_types', device_types),
            ('roles', roles), ('platforms', platforms),
            ('cluster_types', cluster_types), ('cluster_groups', cluster_groups),
            ('clusters', clusters), ('tenant_groups', tenant_groups),
            ('tenants', tenants), ('tags', tags)
        ]:
            if val is not None:
                context_data[key] = val
        return self.client.create(f'{self.base_endpoint}/config-contexts', context_data)

    async def update_config_context(self, id: int, **kwargs) -> Dict:
        """Update a config context."""
        return self.client.patch(f'{self.base_endpoint}/config-contexts', id, kwargs)

    async def delete_config_context(self, id: int) -> None:
        """Delete a config context."""
        self.client.delete(f'{self.base_endpoint}/config-contexts', id)
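`create_config_context` funnels its many optional scope filters through a single loop so that unset filters never reach the API. The same idea in miniature (`build_config_context` is a hypothetical stand-in for illustration, not the project's API):

```python
from typing import Any, Dict

def build_config_context(name: str, data: Dict[str, Any],
                         weight: int = 1000, is_active: bool = True,
                         **scopes: Any) -> Dict[str, Any]:
    """Assemble a config-context body; scope filters (sites, tenants, ...)
    are forwarded only when the caller actually supplied a value."""
    body: Dict[str, Any] = {
        'name': name, 'data': data, 'weight': weight, 'is_active': is_active,
    }
    # Unset (None) scope filters are dropped rather than sent as null
    body.update({k: v for k, v in scopes.items() if v is not None})
    return body

body = build_config_context(
    'ntp-servers', {'ntp': ['10.0.0.1']},
    sites=[1, 2], tenants=None,
)
```

Omitting the unset filters matters because sending `"tenants": null` and omitting `tenants` entirely can mean different things to a REST API; the loop guarantees only deliberate choices are transmitted.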

    # ==================== Config Templates ====================

    async def list_config_templates(self, name: Optional[str] = None, **kwargs) -> List[Dict]:
        """List all config templates."""
        params = {k: v for k, v in {'name': name, **kwargs}.items() if v is not None}
        return self.client.list(f'{self.base_endpoint}/config-templates', params=params)

    async def get_config_template(self, id: int) -> Dict:
        """Get a specific config template by ID."""
        return self.client.get(f'{self.base_endpoint}/config-templates', id)

    async def create_config_template(
        self,
        name: str,
        template_code: str,
        description: Optional[str] = None,
        environment_params: Optional[Dict[str, Any]] = None,
        **kwargs
    ) -> Dict:
        """Create a new config template."""
        data = {'name': name, 'template_code': template_code, **kwargs}
        if description:
            data['description'] = description
        if environment_params:
            data['environment_params'] = environment_params
        return self.client.create(f'{self.base_endpoint}/config-templates', data)

    async def update_config_template(self, id: int, **kwargs) -> Dict:
        """Update a config template."""
        return self.client.patch(f'{self.base_endpoint}/config-templates', id, kwargs)

    async def delete_config_template(self, id: int) -> None:
        """Delete a config template."""
        self.client.delete(f'{self.base_endpoint}/config-templates', id)

    # ==================== Export Templates ====================

    async def list_export_templates(
        self,
        name: Optional[str] = None,
        content_types: Optional[str] = None,
        **kwargs
    ) -> List[Dict]:
        """List all export templates."""
        params = {k: v for k, v in {
            'name': name, 'content_types': content_types, **kwargs
        }.items() if v is not None}
        return self.client.list(f'{self.base_endpoint}/export-templates', params=params)

    async def get_export_template(self, id: int) -> Dict:
        """Get a specific export template by ID."""
        return self.client.get(f'{self.base_endpoint}/export-templates', id)

    async def create_export_template(
        self,
        name: str,
        content_types: List[str],
        template_code: str,
        description: Optional[str] = None,
        mime_type: str = 'text/plain',
        file_extension: Optional[str] = None,
        as_attachment: bool = True,
        **kwargs
    ) -> Dict:
        """Create a new export template."""
        data = {
            'name': name, 'content_types': content_types,
            'template_code': template_code, 'mime_type': mime_type,
            'as_attachment': as_attachment, **kwargs
        }
        if description:
            data['description'] = description
        if file_extension:
            data['file_extension'] = file_extension
        return self.client.create(f'{self.base_endpoint}/export-templates', data)

    async def update_export_template(self, id: int, **kwargs) -> Dict:
        """Update an export template."""
        return self.client.patch(f'{self.base_endpoint}/export-templates', id, kwargs)

    async def delete_export_template(self, id: int) -> None:
        """Delete an export template."""
        self.client.delete(f'{self.base_endpoint}/export-templates', id)

    # ==================== Saved Filters ====================

    async def list_saved_filters(
        self,
        name: Optional[str] = None,
        content_types: Optional[str] = None,
        **kwargs
    ) -> List[Dict]:
        """List all saved filters."""
        params = {k: v for k, v in {
            'name': name, 'content_types': content_types, **kwargs
        }.items() if v is not None}
        return self.client.list(f'{self.base_endpoint}/saved-filters', params=params)

    async def get_saved_filter(self, id: int) -> Dict:
        """Get a specific saved filter by ID."""
        return self.client.get(f'{self.base_endpoint}/saved-filters', id)

    async def create_saved_filter(
        self,
        name: str,
        slug: str,
        content_types: List[str],
        parameters: Dict[str, Any],
        description: Optional[str] = None,
        weight: int = 100,
        enabled: bool = True,
        shared: bool = True,
        **kwargs
    ) -> Dict:
        """Create a new saved filter."""
        data = {
            'name': name, 'slug': slug, 'content_types': content_types,
            'parameters': parameters, 'weight': weight,
            'enabled': enabled, 'shared': shared, **kwargs
        }
        if description:
            data['description'] = description
        return self.client.create(f'{self.base_endpoint}/saved-filters', data)

    async def update_saved_filter(self, id: int, **kwargs) -> Dict:
        """Update a saved filter."""
        return self.client.patch(f'{self.base_endpoint}/saved-filters', id, kwargs)

    async def delete_saved_filter(self, id: int) -> None:
        """Delete a saved filter."""
        self.client.delete(f'{self.base_endpoint}/saved-filters', id)
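Every `list_*` helper above shares one idiom: build a dict of candidate query parameters, then drop the entries whose value is `None` so they never appear in the query string. Isolated, the idiom looks like this (`clean_params` is an illustrative name, not something defined in this module):

```python
from typing import Any, Dict

def clean_params(**raw: Any) -> Dict[str, Any]:
    """Drop filters the caller never set, keeping only real values.

    Note that falsy-but-set values (0, '', False) survive the filter;
    only None is treated as "not provided".
    """
    return {k: v for k, v in raw.items() if v is not None}

params = clean_params(name='prod', content_types=None, limit=50)
```

The explicit `is not None` test (rather than plain truthiness) is what lets legitimate falsy filters like `enabled=False` pass through.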

    # ==================== Image Attachments ====================

    async def list_image_attachments(
        self,
        object_type: Optional[str] = None,
        object_id: Optional[int] = None,
        **kwargs
    ) -> List[Dict]:
        """List all image attachments."""
        params = {k: v for k, v in {
            'object_type': object_type, 'object_id': object_id, **kwargs
        }.items() if v is not None}
        return self.client.list(f'{self.base_endpoint}/image-attachments', params=params)

    async def get_image_attachment(self, id: int) -> Dict:
        """Get a specific image attachment by ID."""
        return self.client.get(f'{self.base_endpoint}/image-attachments', id)

    async def delete_image_attachment(self, id: int) -> None:
        """Delete an image attachment."""
        self.client.delete(f'{self.base_endpoint}/image-attachments', id)

    # ==================== Object Changes (Audit Log) ====================

    async def list_object_changes(
        self,
        user_id: Optional[int] = None,
        changed_object_type: Optional[str] = None,
        changed_object_id: Optional[int] = None,
        action: Optional[str] = None,
        **kwargs
    ) -> List[Dict]:
        """List all object changes (audit log)."""
        params = {k: v for k, v in {
            'user_id': user_id, 'changed_object_type': changed_object_type,
            'changed_object_id': changed_object_id, 'action': action, **kwargs
        }.items() if v is not None}
        return self.client.list(f'{self.base_endpoint}/object-changes', params=params)

    async def get_object_change(self, id: int) -> Dict:
        """Get a specific object change by ID."""
        return self.client.get(f'{self.base_endpoint}/object-changes', id)

    # ==================== Scripts ====================

    async def list_scripts(self, **kwargs) -> List[Dict]:
        """List all available scripts."""
        return self.client.list(f'{self.base_endpoint}/scripts', params=kwargs)

    async def get_script(self, id: str) -> Dict:
        """Get a specific script by ID."""
        return self.client.get(f'{self.base_endpoint}/scripts', id)

    async def run_script(self, id: str, data: Dict[str, Any], commit: bool = True) -> Dict:
        """Run a script with the provided data."""
        payload = {'data': data, 'commit': commit}
        return self.client.create(f'{self.base_endpoint}/scripts/{id}', payload)
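`run_script` wraps the script's input and the commit flag into a single POST body. In NetBox custom scripts, `commit=False` is generally a dry run: the script executes but its database changes are rolled back. A sketch of the payload shape (`build_script_payload` is a hypothetical helper introduced here for illustration):

```python
from typing import Any, Dict

def build_script_payload(data: Dict[str, Any], commit: bool = True) -> Dict[str, Any]:
    """Shape of the body POSTed to extras/scripts/{id}/.

    commit=False asks NetBox to execute the script but discard its
    database changes, which is useful for validating inputs first.
    """
    return {'data': data, 'commit': commit}

# Dry-run a hypothetical provisioning script before committing for real
payload = build_script_payload({'site_name': 'dc1'}, commit=False)
```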

    # ==================== Reports ====================

    async def list_reports(self, **kwargs) -> List[Dict]:
        """List all available reports."""
        return self.client.list(f'{self.base_endpoint}/reports', params=kwargs)

    async def get_report(self, id: str) -> Dict:
        """Get a specific report by ID."""
        return self.client.get(f'{self.base_endpoint}/reports', id)

    async def run_report(self, id: str) -> Dict:
        """Run a report."""
        return self.client.create(f'{self.base_endpoint}/reports/{id}', {})

@@ -1,7 +1,7 @@
 """
 IPAM (IP Address Management) tools for NetBox MCP Server.

-Covers: IP Addresses, Prefixes, VLANs, VRFs, ASNs, and related models.
+Covers: IP Addresses, Prefixes, and Services only.
 """
 import logging
 from typing import List, Dict, Optional, Any
@@ -17,164 +17,6 @@ class IPAMTools:
        self.client = client
        self.base_endpoint = 'ipam'

    # ==================== ASN Ranges ====================

    async def list_asn_ranges(self, name: Optional[str] = None, **kwargs) -> List[Dict]:
        """List all ASN ranges."""
        params = {k: v for k, v in {'name': name, **kwargs}.items() if v is not None}
        return self.client.list(f'{self.base_endpoint}/asn-ranges', params=params)

    async def get_asn_range(self, id: int) -> Dict:
        """Get a specific ASN range by ID."""
        return self.client.get(f'{self.base_endpoint}/asn-ranges', id)

    async def create_asn_range(self, name: str, slug: str, rir: int, start: int, end: int, **kwargs) -> Dict:
        """Create a new ASN range."""
        data = {'name': name, 'slug': slug, 'rir': rir, 'start': start, 'end': end, **kwargs}
        return self.client.create(f'{self.base_endpoint}/asn-ranges', data)

    async def update_asn_range(self, id: int, **kwargs) -> Dict:
        """Update an ASN range."""
        return self.client.patch(f'{self.base_endpoint}/asn-ranges', id, kwargs)

    async def delete_asn_range(self, id: int) -> None:
        """Delete an ASN range."""
        self.client.delete(f'{self.base_endpoint}/asn-ranges', id)

    # ==================== ASNs ====================

    async def list_asns(
        self,
        asn: Optional[int] = None,
        rir_id: Optional[int] = None,
        tenant_id: Optional[int] = None,
        **kwargs
    ) -> List[Dict]:
        """List all ASNs."""
        params = {k: v for k, v in {
            'asn': asn, 'rir_id': rir_id, 'tenant_id': tenant_id, **kwargs
        }.items() if v is not None}
        return self.client.list(f'{self.base_endpoint}/asns', params=params)

    async def get_asn(self, id: int) -> Dict:
        """Get a specific ASN by ID."""
        return self.client.get(f'{self.base_endpoint}/asns', id)

    async def create_asn(
        self,
        asn: int,
        rir: int,
        tenant: Optional[int] = None,
        description: Optional[str] = None,
        **kwargs
    ) -> Dict:
        """Create a new ASN."""
        data = {'asn': asn, 'rir': rir, **kwargs}
        if tenant:
            data['tenant'] = tenant
        if description:
            data['description'] = description
        return self.client.create(f'{self.base_endpoint}/asns', data)

    async def update_asn(self, id: int, **kwargs) -> Dict:
        """Update an ASN."""
        return self.client.patch(f'{self.base_endpoint}/asns', id, kwargs)

    async def delete_asn(self, id: int) -> None:
        """Delete an ASN."""
        self.client.delete(f'{self.base_endpoint}/asns', id)

    # ==================== RIRs ====================

    async def list_rirs(self, name: Optional[str] = None, **kwargs) -> List[Dict]:
        """List all RIRs (Regional Internet Registries)."""
        params = {k: v for k, v in {'name': name, **kwargs}.items() if v is not None}
        return self.client.list(f'{self.base_endpoint}/rirs', params=params)

    async def get_rir(self, id: int) -> Dict:
        """Get a specific RIR by ID."""
        return self.client.get(f'{self.base_endpoint}/rirs', id)

    async def create_rir(self, name: str, slug: str, is_private: bool = False, **kwargs) -> Dict:
        """Create a new RIR."""
        data = {'name': name, 'slug': slug, 'is_private': is_private, **kwargs}
        return self.client.create(f'{self.base_endpoint}/rirs', data)

    async def update_rir(self, id: int, **kwargs) -> Dict:
        """Update a RIR."""
        return self.client.patch(f'{self.base_endpoint}/rirs', id, kwargs)

    async def delete_rir(self, id: int) -> None:
        """Delete a RIR."""
        self.client.delete(f'{self.base_endpoint}/rirs', id)

    # ==================== Aggregates ====================

    async def list_aggregates(
        self,
        prefix: Optional[str] = None,
        rir_id: Optional[int] = None,
        tenant_id: Optional[int] = None,
        **kwargs
    ) -> List[Dict]:
        """List all aggregates."""
        params = {k: v for k, v in {
            'prefix': prefix, 'rir_id': rir_id, 'tenant_id': tenant_id, **kwargs
        }.items() if v is not None}
        return self.client.list(f'{self.base_endpoint}/aggregates', params=params)

    async def get_aggregate(self, id: int) -> Dict:
        """Get a specific aggregate by ID."""
        return self.client.get(f'{self.base_endpoint}/aggregates', id)

    async def create_aggregate(
        self,
        prefix: str,
        rir: int,
        tenant: Optional[int] = None,
        description: Optional[str] = None,
        **kwargs
    ) -> Dict:
        """Create a new aggregate."""
        data = {'prefix': prefix, 'rir': rir, **kwargs}
        if tenant:
            data['tenant'] = tenant
        if description:
            data['description'] = description
        return self.client.create(f'{self.base_endpoint}/aggregates', data)

    async def update_aggregate(self, id: int, **kwargs) -> Dict:
        """Update an aggregate."""
        return self.client.patch(f'{self.base_endpoint}/aggregates', id, kwargs)

    async def delete_aggregate(self, id: int) -> None:
        """Delete an aggregate."""
        self.client.delete(f'{self.base_endpoint}/aggregates', id)

    # ==================== Roles ====================

    async def list_roles(self, name: Optional[str] = None, **kwargs) -> List[Dict]:
        """List all IPAM roles."""
        params = {k: v for k, v in {'name': name, **kwargs}.items() if v is not None}
        return self.client.list(f'{self.base_endpoint}/roles', params=params)

    async def get_role(self, id: int) -> Dict:
        """Get a specific role by ID."""
        return self.client.get(f'{self.base_endpoint}/roles', id)

    async def create_role(self, name: str, slug: str, weight: int = 1000, **kwargs) -> Dict:
        """Create a new IPAM role."""
        data = {'name': name, 'slug': slug, 'weight': weight, **kwargs}
        return self.client.create(f'{self.base_endpoint}/roles', data)

    async def update_role(self, id: int, **kwargs) -> Dict:
        """Update a role."""
        return self.client.patch(f'{self.base_endpoint}/roles', id, kwargs)

    async def delete_role(self, id: int) -> None:
        """Delete a role."""
        self.client.delete(f'{self.base_endpoint}/roles', id)

    # ==================== Prefixes ====================

    async def list_prefixes(
@@ -230,83 +72,6 @@ class IPAMTools:
            data[key] = val
        return self.client.create(f'{self.base_endpoint}/prefixes', data)

    async def update_prefix(self, id: int, **kwargs) -> Dict:
        """Update a prefix."""
        return self.client.patch(f'{self.base_endpoint}/prefixes', id, kwargs)

    async def delete_prefix(self, id: int) -> None:
        """Delete a prefix."""
        self.client.delete(f'{self.base_endpoint}/prefixes', id)

    async def list_available_prefixes(self, id: int) -> List[Dict]:
        """List available child prefixes within a prefix."""
        return self.client.list(f'{self.base_endpoint}/prefixes/{id}/available-prefixes', paginate=False)

    async def create_available_prefix(self, id: int, prefix_length: int, **kwargs) -> Dict:
        """Create a new prefix from available space."""
        data = {'prefix_length': prefix_length, **kwargs}
        return self.client.create(f'{self.base_endpoint}/prefixes/{id}/available-prefixes', data)

    async def list_available_ips(self, id: int) -> List[Dict]:
        """List available IP addresses within a prefix."""
        return self.client.list(f'{self.base_endpoint}/prefixes/{id}/available-ips', paginate=False)

    async def create_available_ip(self, id: int, **kwargs) -> Dict:
        """Create a new IP address from available space in prefix."""
        return self.client.create(f'{self.base_endpoint}/prefixes/{id}/available-ips', kwargs)
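`list_available_ips` and `create_available_ip` together support an allocate-next-free workflow: inspect the free addresses in a prefix, then POST to the same endpoint to claim one. A self-contained sketch against a fake client (`FakePrefixAPI` and its canned data are invented for illustration; the real client's behavior may differ):

```python
class FakePrefixAPI:
    """Stand-in for the HTTP client, returning canned available-IP data."""

    def __init__(self):
        self.available = [{'address': '10.0.0.2/24'}, {'address': '10.0.0.3/24'}]

    def list(self, endpoint, paginate=False):
        # GET .../available-ips returns the unallocated addresses
        return list(self.available)

    def create(self, endpoint, data):
        # POST .../available-ips allocates the next free address and
        # merges in any caller-supplied fields (description, status, ...)
        allocated = self.available.pop(0)
        return {**allocated, **data}

client = FakePrefixAPI()
free = client.list('ipam/prefixes/7/available-ips', paginate=False)
ip = client.create('ipam/prefixes/7/available-ips', {'description': 'loopback'})
```

Letting the server pick the address (rather than computing it client-side) avoids the race where two callers both observe the same free address and try to claim it.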

    # ==================== IP Ranges ====================

    async def list_ip_ranges(
        self,
        start_address: Optional[str] = None,
        end_address: Optional[str] = None,
        vrf_id: Optional[int] = None,
        tenant_id: Optional[int] = None,
        status: Optional[str] = None,
        **kwargs
    ) -> List[Dict]:
        """List all IP ranges."""
        params = {k: v for k, v in {
            'start_address': start_address, 'end_address': end_address,
            'vrf_id': vrf_id, 'tenant_id': tenant_id, 'status': status, **kwargs
        }.items() if v is not None}
        return self.client.list(f'{self.base_endpoint}/ip-ranges', params=params)

    async def get_ip_range(self, id: int) -> Dict:
        """Get a specific IP range by ID."""
        return self.client.get(f'{self.base_endpoint}/ip-ranges', id)

    async def create_ip_range(
        self,
        start_address: str,
        end_address: str,
        status: str = 'active',
        vrf: Optional[int] = None,
        tenant: Optional[int] = None,
        role: Optional[int] = None,
        description: Optional[str] = None,
        **kwargs
    ) -> Dict:
        """Create a new IP range."""
        data = {'start_address': start_address, 'end_address': end_address, 'status': status, **kwargs}
        for key, val in [('vrf', vrf), ('tenant', tenant), ('role', role), ('description', description)]:
            if val is not None:
                data[key] = val
        return self.client.create(f'{self.base_endpoint}/ip-ranges', data)

    async def update_ip_range(self, id: int, **kwargs) -> Dict:
        """Update an IP range."""
        return self.client.patch(f'{self.base_endpoint}/ip-ranges', id, kwargs)

    async def delete_ip_range(self, id: int) -> None:
        """Delete an IP range."""
        self.client.delete(f'{self.base_endpoint}/ip-ranges', id)

    async def list_available_ips_in_range(self, id: int) -> List[Dict]:
        """List available IP addresses within an IP range."""
        return self.client.list(f'{self.base_endpoint}/ip-ranges/{id}/available-ips', paginate=False)

    # ==================== IP Addresses ====================

    async def list_ip_addresses(
@@ -368,271 +133,6 @@ class IPAMTools:
        """Update an IP address."""
        return self.client.patch(f'{self.base_endpoint}/ip-addresses', id, kwargs)

    async def delete_ip_address(self, id: int) -> None:
        """Delete an IP address."""
        self.client.delete(f'{self.base_endpoint}/ip-addresses', id)

    # ==================== FHRP Groups ====================

    async def list_fhrp_groups(
        self,
        protocol: Optional[str] = None,
        group_id: Optional[int] = None,
        **kwargs
    ) -> List[Dict]:
        """List all FHRP groups."""
        params = {k: v for k, v in {'protocol': protocol, 'group_id': group_id, **kwargs}.items() if v is not None}
        return self.client.list(f'{self.base_endpoint}/fhrp-groups', params=params)

    async def get_fhrp_group(self, id: int) -> Dict:
        """Get a specific FHRP group by ID."""
        return self.client.get(f'{self.base_endpoint}/fhrp-groups', id)

    async def create_fhrp_group(
        self,
        protocol: str,
        group_id: int,
        auth_type: Optional[str] = None,
        auth_key: Optional[str] = None,
        **kwargs
    ) -> Dict:
        """Create a new FHRP group."""
        data = {'protocol': protocol, 'group_id': group_id, **kwargs}
        if auth_type:
            data['auth_type'] = auth_type
        if auth_key:
            data['auth_key'] = auth_key
        return self.client.create(f'{self.base_endpoint}/fhrp-groups', data)

    async def update_fhrp_group(self, id: int, **kwargs) -> Dict:
        """Update an FHRP group."""
        return self.client.patch(f'{self.base_endpoint}/fhrp-groups', id, kwargs)

    async def delete_fhrp_group(self, id: int) -> None:
        """Delete an FHRP group."""
        self.client.delete(f'{self.base_endpoint}/fhrp-groups', id)

    # ==================== FHRP Group Assignments ====================

    async def list_fhrp_group_assignments(self, group_id: Optional[int] = None, **kwargs) -> List[Dict]:
        """List all FHRP group assignments."""
        params = {k: v for k, v in {'group_id': group_id, **kwargs}.items() if v is not None}
        return self.client.list(f'{self.base_endpoint}/fhrp-group-assignments', params=params)

    async def get_fhrp_group_assignment(self, id: int) -> Dict:
        """Get a specific FHRP group assignment by ID."""
        return self.client.get(f'{self.base_endpoint}/fhrp-group-assignments', id)

    async def create_fhrp_group_assignment(
        self,
        group: int,
        interface_type: str,
        interface_id: int,
        priority: int = 100,
        **kwargs
    ) -> Dict:
        """Create a new FHRP group assignment."""
        data = {
            'group': group, 'interface_type': interface_type,
            'interface_id': interface_id, 'priority': priority, **kwargs
        }
        return self.client.create(f'{self.base_endpoint}/fhrp-group-assignments', data)

    async def update_fhrp_group_assignment(self, id: int, **kwargs) -> Dict:
        """Update an FHRP group assignment."""
        return self.client.patch(f'{self.base_endpoint}/fhrp-group-assignments', id, kwargs)

    async def delete_fhrp_group_assignment(self, id: int) -> None:
        """Delete an FHRP group assignment."""
        self.client.delete(f'{self.base_endpoint}/fhrp-group-assignments', id)

    # ==================== VLAN Groups ====================

    async def list_vlan_groups(
        self,
        name: Optional[str] = None,
        site_id: Optional[int] = None,
        **kwargs
    ) -> List[Dict]:
        """List all VLAN groups."""
        params = {k: v for k, v in {'name': name, 'site_id': site_id, **kwargs}.items() if v is not None}
        return self.client.list(f'{self.base_endpoint}/vlan-groups', params=params)

    async def get_vlan_group(self, id: int) -> Dict:
        """Get a specific VLAN group by ID."""
        return self.client.get(f'{self.base_endpoint}/vlan-groups', id)

    async def create_vlan_group(
        self,
        name: str,
        slug: str,
        scope_type: Optional[str] = None,
        scope_id: Optional[int] = None,
        min_vid: int = 1,
        max_vid: int = 4094,
        **kwargs
    ) -> Dict:
        """Create a new VLAN group."""
        data = {'name': name, 'slug': slug, 'min_vid': min_vid, 'max_vid': max_vid, **kwargs}
        if scope_type:
            data['scope_type'] = scope_type
        if scope_id:
            data['scope_id'] = scope_id
        return self.client.create(f'{self.base_endpoint}/vlan-groups', data)

    async def update_vlan_group(self, id: int, **kwargs) -> Dict:
        """Update a VLAN group."""
        return self.client.patch(f'{self.base_endpoint}/vlan-groups', id, kwargs)

    async def delete_vlan_group(self, id: int) -> None:
        """Delete a VLAN group."""
        self.client.delete(f'{self.base_endpoint}/vlan-groups', id)

    async def list_available_vlans(self, id: int) -> List[Dict]:
        """List available VLANs in a VLAN group."""
|
|
||||||
return self.client.list(f'{self.base_endpoint}/vlan-groups/{id}/available-vlans', paginate=False)
|
|
||||||
|
|
||||||
# ==================== VLANs ====================
|
|
||||||
|
|
||||||
async def list_vlans(
|
|
||||||
self,
|
|
||||||
vid: Optional[int] = None,
|
|
||||||
name: Optional[str] = None,
|
|
||||||
site_id: Optional[int] = None,
|
|
||||||
group_id: Optional[int] = None,
|
|
||||||
role_id: Optional[int] = None,
|
|
||||||
tenant_id: Optional[int] = None,
|
|
||||||
status: Optional[str] = None,
|
|
||||||
**kwargs
|
|
||||||
) -> List[Dict]:
|
|
||||||
"""List all VLANs with optional filtering."""
|
|
||||||
params = {k: v for k, v in {
|
|
||||||
'vid': vid, 'name': name, 'site_id': site_id, 'group_id': group_id,
|
|
||||||
'role_id': role_id, 'tenant_id': tenant_id, 'status': status, **kwargs
|
|
||||||
}.items() if v is not None}
|
|
||||||
return self.client.list(f'{self.base_endpoint}/vlans', params=params)
|
|
||||||
|
|
||||||
async def get_vlan(self, id: int) -> Dict:
|
|
||||||
"""Get a specific VLAN by ID."""
|
|
||||||
return self.client.get(f'{self.base_endpoint}/vlans', id)
|
|
||||||
|
|
||||||
async def create_vlan(
|
|
||||||
self,
|
|
||||||
vid: int,
|
|
||||||
name: str,
|
|
||||||
status: str = 'active',
|
|
||||||
site: Optional[int] = None,
|
|
||||||
group: Optional[int] = None,
|
|
||||||
role: Optional[int] = None,
|
|
||||||
tenant: Optional[int] = None,
|
|
||||||
description: Optional[str] = None,
|
|
||||||
**kwargs
|
|
||||||
) -> Dict:
|
|
||||||
"""Create a new VLAN."""
|
|
||||||
data = {'vid': vid, 'name': name, 'status': status, **kwargs}
|
|
||||||
for key, val in [
|
|
||||||
('site', site), ('group', group), ('role', role),
|
|
||||||
('tenant', tenant), ('description', description)
|
|
||||||
]:
|
|
||||||
if val is not None:
|
|
||||||
data[key] = val
|
|
||||||
return self.client.create(f'{self.base_endpoint}/vlans', data)
|
|
||||||
|
|
||||||
async def update_vlan(self, id: int, **kwargs) -> Dict:
|
|
||||||
"""Update a VLAN."""
|
|
||||||
return self.client.patch(f'{self.base_endpoint}/vlans', id, kwargs)
|
|
||||||
|
|
||||||
async def delete_vlan(self, id: int) -> None:
|
|
||||||
"""Delete a VLAN."""
|
|
||||||
self.client.delete(f'{self.base_endpoint}/vlans', id)
|
|
||||||
|
|
||||||
# ==================== VRFs ====================
|
|
||||||
|
|
||||||
async def list_vrfs(
|
|
||||||
self,
|
|
||||||
name: Optional[str] = None,
|
|
||||||
rd: Optional[str] = None,
|
|
||||||
tenant_id: Optional[int] = None,
|
|
||||||
**kwargs
|
|
||||||
) -> List[Dict]:
|
|
||||||
"""List all VRFs with optional filtering."""
|
|
||||||
params = {k: v for k, v in {
|
|
||||||
'name': name, 'rd': rd, 'tenant_id': tenant_id, **kwargs
|
|
||||||
}.items() if v is not None}
|
|
||||||
return self.client.list(f'{self.base_endpoint}/vrfs', params=params)
|
|
||||||
|
|
||||||
async def get_vrf(self, id: int) -> Dict:
|
|
||||||
"""Get a specific VRF by ID."""
|
|
||||||
return self.client.get(f'{self.base_endpoint}/vrfs', id)
|
|
||||||
|
|
||||||
async def create_vrf(
|
|
||||||
self,
|
|
||||||
name: str,
|
|
||||||
rd: Optional[str] = None,
|
|
||||||
tenant: Optional[int] = None,
|
|
||||||
enforce_unique: bool = True,
|
|
||||||
description: Optional[str] = None,
|
|
||||||
import_targets: Optional[List[int]] = None,
|
|
||||||
export_targets: Optional[List[int]] = None,
|
|
||||||
**kwargs
|
|
||||||
) -> Dict:
|
|
||||||
"""Create a new VRF."""
|
|
||||||
data = {'name': name, 'enforce_unique': enforce_unique, **kwargs}
|
|
||||||
for key, val in [
|
|
||||||
('rd', rd), ('tenant', tenant), ('description', description),
|
|
||||||
('import_targets', import_targets), ('export_targets', export_targets)
|
|
||||||
]:
|
|
||||||
if val is not None:
|
|
||||||
data[key] = val
|
|
||||||
return self.client.create(f'{self.base_endpoint}/vrfs', data)
|
|
||||||
|
|
||||||
async def update_vrf(self, id: int, **kwargs) -> Dict:
|
|
||||||
"""Update a VRF."""
|
|
||||||
return self.client.patch(f'{self.base_endpoint}/vrfs', id, kwargs)
|
|
||||||
|
|
||||||
async def delete_vrf(self, id: int) -> None:
|
|
||||||
"""Delete a VRF."""
|
|
||||||
self.client.delete(f'{self.base_endpoint}/vrfs', id)
|
|
||||||
|
|
||||||
# ==================== Route Targets ====================
|
|
||||||
|
|
||||||
async def list_route_targets(
|
|
||||||
self,
|
|
||||||
name: Optional[str] = None,
|
|
||||||
tenant_id: Optional[int] = None,
|
|
||||||
**kwargs
|
|
||||||
) -> List[Dict]:
|
|
||||||
"""List all route targets."""
|
|
||||||
params = {k: v for k, v in {'name': name, 'tenant_id': tenant_id, **kwargs}.items() if v is not None}
|
|
||||||
return self.client.list(f'{self.base_endpoint}/route-targets', params=params)
|
|
||||||
|
|
||||||
async def get_route_target(self, id: int) -> Dict:
|
|
||||||
"""Get a specific route target by ID."""
|
|
||||||
return self.client.get(f'{self.base_endpoint}/route-targets', id)
|
|
||||||
|
|
||||||
async def create_route_target(
|
|
||||||
self,
|
|
||||||
name: str,
|
|
||||||
tenant: Optional[int] = None,
|
|
||||||
description: Optional[str] = None,
|
|
||||||
**kwargs
|
|
||||||
) -> Dict:
|
|
||||||
"""Create a new route target."""
|
|
||||||
data = {'name': name, **kwargs}
|
|
||||||
if tenant:
|
|
||||||
data['tenant'] = tenant
|
|
||||||
if description:
|
|
||||||
data['description'] = description
|
|
||||||
return self.client.create(f'{self.base_endpoint}/route-targets', data)
|
|
||||||
|
|
||||||
async def update_route_target(self, id: int, **kwargs) -> Dict:
|
|
||||||
"""Update a route target."""
|
|
||||||
return self.client.patch(f'{self.base_endpoint}/route-targets', id, kwargs)
|
|
||||||
|
|
||||||
async def delete_route_target(self, id: int) -> None:
|
|
||||||
"""Delete a route target."""
|
|
||||||
self.client.delete(f'{self.base_endpoint}/route-targets', id)
|
|
||||||
|
|
||||||

     # ==================== Services ====================

     async def list_services(
@@ -675,44 +175,3 @@ class IPAMTools:
             if val is not None:
                 data[key] = val
         return self.client.create(f'{self.base_endpoint}/services', data)
-
-    async def update_service(self, id: int, **kwargs) -> Dict:
-        """Update a service."""
-        return self.client.patch(f'{self.base_endpoint}/services', id, kwargs)
-
-    async def delete_service(self, id: int) -> None:
-        """Delete a service."""
-        self.client.delete(f'{self.base_endpoint}/services', id)
-
-    # ==================== Service Templates ====================
-
-    async def list_service_templates(self, name: Optional[str] = None, **kwargs) -> List[Dict]:
-        """List all service templates."""
-        params = {k: v for k, v in {'name': name, **kwargs}.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/service-templates', params=params)
-
-    async def get_service_template(self, id: int) -> Dict:
-        """Get a specific service template by ID."""
-        return self.client.get(f'{self.base_endpoint}/service-templates', id)
-
-    async def create_service_template(
-        self,
-        name: str,
-        ports: List[int],
-        protocol: str,
-        description: Optional[str] = None,
-        **kwargs
-    ) -> Dict:
-        """Create a new service template."""
-        data = {'name': name, 'ports': ports, 'protocol': protocol, **kwargs}
-        if description:
-            data['description'] = description
-        return self.client.create(f'{self.base_endpoint}/service-templates', data)
-
-    async def update_service_template(self, id: int, **kwargs) -> Dict:
-        """Update a service template."""
-        return self.client.patch(f'{self.base_endpoint}/service-templates', id, kwargs)
-
-    async def delete_service_template(self, id: int) -> None:
-        """Delete a service template."""
-        self.client.delete(f'{self.base_endpoint}/service-templates', id)
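Every `list_*` wrapper in the removed IPAM code uses the same idiom: collect the named filters into a dict, drop the `None`-valued entries, and pass the rest as query parameters. A minimal, self-contained sketch of that pattern (`FakeClient` and `MiniIPAMTools` are illustrative stand-ins, not names from the repository; the real methods are `async` and take a `NetBoxClient`):

```python
from typing import Dict, List, Optional


class FakeClient:
    """Stand-in for the real NetBoxClient: records what would be requested."""

    def list(self, endpoint: str, params: Optional[Dict] = None) -> List[Dict]:
        return [{"endpoint": endpoint, "params": params or {}}]


class MiniIPAMTools:
    """Minimal slice of the IPAMTools list-method pattern shown in the diff."""

    def __init__(self, client: FakeClient):
        self.client = client
        self.base_endpoint = 'ipam'

    def list_vlans(self, vid: Optional[int] = None, name: Optional[str] = None, **kwargs) -> List[Dict]:
        # Only non-None filters become query parameters.
        params = {k: v for k, v in {'vid': vid, 'name': name, **kwargs}.items() if v is not None}
        return self.client.list(f'{self.base_endpoint}/vlans', params=params)


result = MiniIPAMTools(FakeClient()).list_vlans(vid=100, status='active')
print(result)
```

The `is not None` test (rather than plain truthiness) matters here: it keeps legitimate falsy filter values such as `vid=0` or an empty-string match, while omitting parameters the caller never supplied.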
@@ -1,281 +0,0 @@
-"""
-Tenancy tools for NetBox MCP Server.
-
-Covers: Tenants, Tenant Groups, Contacts, Contact Groups, and Contact Roles.
-"""
-import logging
-from typing import List, Dict, Optional, Any
-from ..netbox_client import NetBoxClient
-
-logger = logging.getLogger(__name__)
-
-
-class TenancyTools:
-    """Tools for Tenancy operations in NetBox"""
-
-    def __init__(self, client: NetBoxClient):
-        self.client = client
-        self.base_endpoint = 'tenancy'
-
-    # ==================== Tenant Groups ====================
-
-    async def list_tenant_groups(
-        self,
-        name: Optional[str] = None,
-        slug: Optional[str] = None,
-        parent_id: Optional[int] = None,
-        **kwargs
-    ) -> List[Dict]:
-        """List all tenant groups."""
-        params = {k: v for k, v in {
-            'name': name, 'slug': slug, 'parent_id': parent_id, **kwargs
-        }.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/tenant-groups', params=params)
-
-    async def get_tenant_group(self, id: int) -> Dict:
-        """Get a specific tenant group by ID."""
-        return self.client.get(f'{self.base_endpoint}/tenant-groups', id)
-
-    async def create_tenant_group(
-        self,
-        name: str,
-        slug: str,
-        parent: Optional[int] = None,
-        description: Optional[str] = None,
-        **kwargs
-    ) -> Dict:
-        """Create a new tenant group."""
-        data = {'name': name, 'slug': slug, **kwargs}
-        if parent:
-            data['parent'] = parent
-        if description:
-            data['description'] = description
-        return self.client.create(f'{self.base_endpoint}/tenant-groups', data)
-
-    async def update_tenant_group(self, id: int, **kwargs) -> Dict:
-        """Update a tenant group."""
-        return self.client.patch(f'{self.base_endpoint}/tenant-groups', id, kwargs)
-
-    async def delete_tenant_group(self, id: int) -> None:
-        """Delete a tenant group."""
-        self.client.delete(f'{self.base_endpoint}/tenant-groups', id)
-
-    # ==================== Tenants ====================
-
-    async def list_tenants(
-        self,
-        name: Optional[str] = None,
-        slug: Optional[str] = None,
-        group_id: Optional[int] = None,
-        **kwargs
-    ) -> List[Dict]:
-        """List all tenants with optional filtering."""
-        params = {k: v for k, v in {
-            'name': name, 'slug': slug, 'group_id': group_id, **kwargs
-        }.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/tenants', params=params)
-
-    async def get_tenant(self, id: int) -> Dict:
-        """Get a specific tenant by ID."""
-        return self.client.get(f'{self.base_endpoint}/tenants', id)
-
-    async def create_tenant(
-        self,
-        name: str,
-        slug: str,
-        group: Optional[int] = None,
-        description: Optional[str] = None,
-        **kwargs
-    ) -> Dict:
-        """Create a new tenant."""
-        data = {'name': name, 'slug': slug, **kwargs}
-        if group:
-            data['group'] = group
-        if description:
-            data['description'] = description
-        return self.client.create(f'{self.base_endpoint}/tenants', data)
-
-    async def update_tenant(self, id: int, **kwargs) -> Dict:
-        """Update a tenant."""
-        return self.client.patch(f'{self.base_endpoint}/tenants', id, kwargs)
-
-    async def delete_tenant(self, id: int) -> None:
-        """Delete a tenant."""
-        self.client.delete(f'{self.base_endpoint}/tenants', id)
-
-    # ==================== Contact Groups ====================
-
-    async def list_contact_groups(
-        self,
-        name: Optional[str] = None,
-        slug: Optional[str] = None,
-        parent_id: Optional[int] = None,
-        **kwargs
-    ) -> List[Dict]:
-        """List all contact groups."""
-        params = {k: v for k, v in {
-            'name': name, 'slug': slug, 'parent_id': parent_id, **kwargs
-        }.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/contact-groups', params=params)
-
-    async def get_contact_group(self, id: int) -> Dict:
-        """Get a specific contact group by ID."""
-        return self.client.get(f'{self.base_endpoint}/contact-groups', id)
-
-    async def create_contact_group(
-        self,
-        name: str,
-        slug: str,
-        parent: Optional[int] = None,
-        description: Optional[str] = None,
-        **kwargs
-    ) -> Dict:
-        """Create a new contact group."""
-        data = {'name': name, 'slug': slug, **kwargs}
-        if parent:
-            data['parent'] = parent
-        if description:
-            data['description'] = description
-        return self.client.create(f'{self.base_endpoint}/contact-groups', data)
-
-    async def update_contact_group(self, id: int, **kwargs) -> Dict:
-        """Update a contact group."""
-        return self.client.patch(f'{self.base_endpoint}/contact-groups', id, kwargs)
-
-    async def delete_contact_group(self, id: int) -> None:
-        """Delete a contact group."""
-        self.client.delete(f'{self.base_endpoint}/contact-groups', id)
-
-    # ==================== Contact Roles ====================
-
-    async def list_contact_roles(
-        self,
-        name: Optional[str] = None,
-        slug: Optional[str] = None,
-        **kwargs
-    ) -> List[Dict]:
-        """List all contact roles."""
-        params = {k: v for k, v in {'name': name, 'slug': slug, **kwargs}.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/contact-roles', params=params)
-
-    async def get_contact_role(self, id: int) -> Dict:
-        """Get a specific contact role by ID."""
-        return self.client.get(f'{self.base_endpoint}/contact-roles', id)
-
-    async def create_contact_role(
-        self,
-        name: str,
-        slug: str,
-        description: Optional[str] = None,
-        **kwargs
-    ) -> Dict:
-        """Create a new contact role."""
-        data = {'name': name, 'slug': slug, **kwargs}
-        if description:
-            data['description'] = description
-        return self.client.create(f'{self.base_endpoint}/contact-roles', data)
-
-    async def update_contact_role(self, id: int, **kwargs) -> Dict:
-        """Update a contact role."""
-        return self.client.patch(f'{self.base_endpoint}/contact-roles', id, kwargs)
-
-    async def delete_contact_role(self, id: int) -> None:
-        """Delete a contact role."""
-        self.client.delete(f'{self.base_endpoint}/contact-roles', id)
-
-    # ==================== Contacts ====================
-
-    async def list_contacts(
-        self,
-        name: Optional[str] = None,
-        group_id: Optional[int] = None,
-        email: Optional[str] = None,
-        **kwargs
-    ) -> List[Dict]:
-        """List all contacts with optional filtering."""
-        params = {k: v for k, v in {
-            'name': name, 'group_id': group_id, 'email': email, **kwargs
-        }.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/contacts', params=params)
-
-    async def get_contact(self, id: int) -> Dict:
-        """Get a specific contact by ID."""
-        return self.client.get(f'{self.base_endpoint}/contacts', id)
-
-    async def create_contact(
-        self,
-        name: str,
-        group: Optional[int] = None,
-        title: Optional[str] = None,
-        phone: Optional[str] = None,
-        email: Optional[str] = None,
-        address: Optional[str] = None,
-        link: Optional[str] = None,
-        description: Optional[str] = None,
-        **kwargs
-    ) -> Dict:
-        """Create a new contact."""
-        data = {'name': name, **kwargs}
-        for key, val in [
-            ('group', group), ('title', title), ('phone', phone),
-            ('email', email), ('address', address), ('link', link),
-            ('description', description)
-        ]:
-            if val is not None:
-                data[key] = val
-        return self.client.create(f'{self.base_endpoint}/contacts', data)
-
-    async def update_contact(self, id: int, **kwargs) -> Dict:
-        """Update a contact."""
-        return self.client.patch(f'{self.base_endpoint}/contacts', id, kwargs)
-
-    async def delete_contact(self, id: int) -> None:
-        """Delete a contact."""
-        self.client.delete(f'{self.base_endpoint}/contacts', id)
-
-    # ==================== Contact Assignments ====================
-
-    async def list_contact_assignments(
-        self,
-        contact_id: Optional[int] = None,
-        role_id: Optional[int] = None,
-        object_type: Optional[str] = None,
-        object_id: Optional[int] = None,
-        **kwargs
-    ) -> List[Dict]:
-        """List all contact assignments."""
-        params = {k: v for k, v in {
-            'contact_id': contact_id, 'role_id': role_id,
-            'object_type': object_type, 'object_id': object_id, **kwargs
-        }.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/contact-assignments', params=params)
-
-    async def get_contact_assignment(self, id: int) -> Dict:
-        """Get a specific contact assignment by ID."""
-        return self.client.get(f'{self.base_endpoint}/contact-assignments', id)
-
-    async def create_contact_assignment(
-        self,
-        contact: int,
-        role: int,
-        object_type: str,
-        object_id: int,
-        priority: Optional[str] = None,
-        **kwargs
-    ) -> Dict:
-        """Create a new contact assignment."""
-        data = {
-            'contact': contact, 'role': role,
-            'object_type': object_type, 'object_id': object_id, **kwargs
-        }
-        if priority:
-            data['priority'] = priority
-        return self.client.create(f'{self.base_endpoint}/contact-assignments', data)
-
-    async def update_contact_assignment(self, id: int, **kwargs) -> Dict:
-        """Update a contact assignment."""
-        return self.client.patch(f'{self.base_endpoint}/contact-assignments', id, kwargs)
-
-    async def delete_contact_assignment(self, id: int) -> None:
-        """Delete a contact assignment."""
-        self.client.delete(f'{self.base_endpoint}/contact-assignments', id)
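The removed `create_*` methods all build their POST payload the same way: required fields go in unconditionally, optional fields are added behind an `if` guard. A minimal sketch of that pattern as a standalone function (`build_tenant_payload` is an illustrative name, not a helper from the repository):

```python
from typing import Dict, Optional


def build_tenant_payload(
    name: str,
    slug: str,
    group: Optional[int] = None,
    description: Optional[str] = None,
    **kwargs,
) -> Dict:
    """Mirror of the create_tenant payload construction in the deleted module."""
    # Required fields first; optional fields only when the caller supplied them.
    data = {'name': name, 'slug': slug, **kwargs}
    if group:
        data['group'] = group
    if description:
        data['description'] = description
    return data


print(build_tenant_payload('Acme', 'acme', group=3))
```

One design wrinkle worth noting: unlike the `is not None` checks used elsewhere, the truthiness guards here (`if group:`) would also skip falsy-but-valid values such as a group id of `0` or an empty string description.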
@@ -1,7 +1,7 @@
 """
 Virtualization tools for NetBox MCP Server.

-Covers: Clusters, Virtual Machines, VM Interfaces, and related models.
+Covers: Clusters, Virtual Machines, and VM Interfaces only.
 """
 import logging
 from typing import List, Dict, Optional, Any
@@ -17,80 +17,6 @@ class VirtualizationTools:
         self.client = client
         self.base_endpoint = 'virtualization'

-    # ==================== Cluster Types ====================
-
-    async def list_cluster_types(
-        self,
-        name: Optional[str] = None,
-        slug: Optional[str] = None,
-        **kwargs
-    ) -> List[Dict]:
-        """List all cluster types."""
-        params = {k: v for k, v in {'name': name, 'slug': slug, **kwargs}.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/cluster-types', params=params)
-
-    async def get_cluster_type(self, id: int) -> Dict:
-        """Get a specific cluster type by ID."""
-        return self.client.get(f'{self.base_endpoint}/cluster-types', id)
-
-    async def create_cluster_type(
-        self,
-        name: str,
-        slug: str,
-        description: Optional[str] = None,
-        **kwargs
-    ) -> Dict:
-        """Create a new cluster type."""
-        data = {'name': name, 'slug': slug, **kwargs}
-        if description:
-            data['description'] = description
-        return self.client.create(f'{self.base_endpoint}/cluster-types', data)
-
-    async def update_cluster_type(self, id: int, **kwargs) -> Dict:
-        """Update a cluster type."""
-        return self.client.patch(f'{self.base_endpoint}/cluster-types', id, kwargs)
-
-    async def delete_cluster_type(self, id: int) -> None:
-        """Delete a cluster type."""
-        self.client.delete(f'{self.base_endpoint}/cluster-types', id)
-
-    # ==================== Cluster Groups ====================
-
-    async def list_cluster_groups(
-        self,
-        name: Optional[str] = None,
-        slug: Optional[str] = None,
-        **kwargs
-    ) -> List[Dict]:
-        """List all cluster groups."""
-        params = {k: v for k, v in {'name': name, 'slug': slug, **kwargs}.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/cluster-groups', params=params)
-
-    async def get_cluster_group(self, id: int) -> Dict:
-        """Get a specific cluster group by ID."""
-        return self.client.get(f'{self.base_endpoint}/cluster-groups', id)
-
-    async def create_cluster_group(
-        self,
-        name: str,
-        slug: str,
-        description: Optional[str] = None,
-        **kwargs
-    ) -> Dict:
-        """Create a new cluster group."""
-        data = {'name': name, 'slug': slug, **kwargs}
-        if description:
-            data['description'] = description
-        return self.client.create(f'{self.base_endpoint}/cluster-groups', data)
-
-    async def update_cluster_group(self, id: int, **kwargs) -> Dict:
-        """Update a cluster group."""
-        return self.client.patch(f'{self.base_endpoint}/cluster-groups', id, kwargs)
-
-    async def delete_cluster_group(self, id: int) -> None:
-        """Delete a cluster group."""
-        self.client.delete(f'{self.base_endpoint}/cluster-groups', id)
-
     # ==================== Clusters ====================

     async def list_clusters(
@@ -134,14 +60,6 @@ class VirtualizationTools:
                 data[key] = val
         return self.client.create(f'{self.base_endpoint}/clusters', data)
-
-    async def update_cluster(self, id: int, **kwargs) -> Dict:
-        """Update a cluster."""
-        return self.client.patch(f'{self.base_endpoint}/clusters', id, kwargs)
-
-    async def delete_cluster(self, id: int) -> None:
-        """Delete a cluster."""
-        self.client.delete(f'{self.base_endpoint}/clusters', id)
-
     # ==================== Virtual Machines ====================

     async def list_virtual_machines(
@@ -201,10 +119,6 @@ class VirtualizationTools:
         """Update a virtual machine."""
         return self.client.patch(f'{self.base_endpoint}/virtual-machines', id, kwargs)
-
-    async def delete_virtual_machine(self, id: int) -> None:
-        """Delete a virtual machine."""
-        self.client.delete(f'{self.base_endpoint}/virtual-machines', id)
-
     # ==================== VM Interfaces ====================

     async def list_vm_interfaces(
@@ -246,51 +160,3 @@ class VirtualizationTools:
             if val is not None:
                 data[key] = val
         return self.client.create(f'{self.base_endpoint}/interfaces', data)
-
-    async def update_vm_interface(self, id: int, **kwargs) -> Dict:
-        """Update a VM interface."""
-        return self.client.patch(f'{self.base_endpoint}/interfaces', id, kwargs)
-
-    async def delete_vm_interface(self, id: int) -> None:
-        """Delete a VM interface."""
-        self.client.delete(f'{self.base_endpoint}/interfaces', id)
-
-    # ==================== Virtual Disks ====================
-
-    async def list_virtual_disks(
-        self,
-        virtual_machine_id: Optional[int] = None,
-        name: Optional[str] = None,
-        **kwargs
-    ) -> List[Dict]:
-        """List all virtual disks."""
-        params = {k: v for k, v in {
-            'virtual_machine_id': virtual_machine_id, 'name': name, **kwargs
-        }.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/virtual-disks', params=params)
-
-    async def get_virtual_disk(self, id: int) -> Dict:
-        """Get a specific virtual disk by ID."""
-        return self.client.get(f'{self.base_endpoint}/virtual-disks', id)
-
-    async def create_virtual_disk(
-        self,
-        virtual_machine: int,
-        name: str,
-        size: int,
-        description: Optional[str] = None,
-        **kwargs
-    ) -> Dict:
-        """Create a new virtual disk."""
-        data = {'virtual_machine': virtual_machine, 'name': name, 'size': size, **kwargs}
-        if description:
-            data['description'] = description
-        return self.client.create(f'{self.base_endpoint}/virtual-disks', data)
-
-    async def update_virtual_disk(self, id: int, **kwargs) -> Dict:
-        """Update a virtual disk."""
-        return self.client.patch(f'{self.base_endpoint}/virtual-disks', id, kwargs)
-
-    async def delete_virtual_disk(self, id: int) -> None:
-        """Delete a virtual disk."""
-        self.client.delete(f'{self.base_endpoint}/virtual-disks', id)
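The multi-field `create_*` methods in these diffs (`create_vlan`, `create_contact`, `create_tunnel` below) share a second idiom: loop over `(key, value)` pairs and copy only the ones that were actually supplied. A minimal sketch of that loop as a standalone helper (`merge_optional` is an illustrative name, not a helper defined in the repository):

```python
from typing import Any, Dict, List, Tuple


def merge_optional(data: Dict, pairs: List[Tuple[str, Any]]) -> Dict:
    """Copy only the keys whose value was actually supplied (not None)."""
    for key, val in pairs:
        if val is not None:
            data[key] = val
    return data


# Mirrors the payload construction in create_tunnel: required fields up
# front, optional relations merged in only when given.
payload = merge_optional(
    {'name': 'tun0', 'status': 'active', 'encapsulation': 'ipsec-tunnel'},
    [('group', None), ('tenant', 7), ('description', None)],
)
print(payload)
```

Compared with a chain of `if` guards, the loop keeps the method bodies short as the number of optional fields grows, at the cost of losing per-field type information.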
@@ -1,428 +0,0 @@
-"""
-VPN tools for NetBox MCP Server.
-
-Covers: Tunnels, Tunnel Groups, Tunnel Terminations, IKE/IPSec Policies, and L2VPN.
-"""
-import logging
-from typing import List, Dict, Optional, Any
-from ..netbox_client import NetBoxClient
-
-logger = logging.getLogger(__name__)
-
-
-class VPNTools:
-    """Tools for VPN operations in NetBox"""
-
-    def __init__(self, client: NetBoxClient):
-        self.client = client
-        self.base_endpoint = 'vpn'
-
-    # ==================== Tunnel Groups ====================
-
-    async def list_tunnel_groups(
-        self,
-        name: Optional[str] = None,
-        slug: Optional[str] = None,
-        **kwargs
-    ) -> List[Dict]:
-        """List all tunnel groups."""
-        params = {k: v for k, v in {'name': name, 'slug': slug, **kwargs}.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/tunnel-groups', params=params)
-
-    async def get_tunnel_group(self, id: int) -> Dict:
-        """Get a specific tunnel group by ID."""
-        return self.client.get(f'{self.base_endpoint}/tunnel-groups', id)
-
-    async def create_tunnel_group(
-        self,
-        name: str,
-        slug: str,
-        description: Optional[str] = None,
-        **kwargs
-    ) -> Dict:
-        """Create a new tunnel group."""
-        data = {'name': name, 'slug': slug, **kwargs}
-        if description:
-            data['description'] = description
-        return self.client.create(f'{self.base_endpoint}/tunnel-groups', data)
-
-    async def update_tunnel_group(self, id: int, **kwargs) -> Dict:
-        """Update a tunnel group."""
-        return self.client.patch(f'{self.base_endpoint}/tunnel-groups', id, kwargs)
-
-    async def delete_tunnel_group(self, id: int) -> None:
-        """Delete a tunnel group."""
-        self.client.delete(f'{self.base_endpoint}/tunnel-groups', id)
-
-    # ==================== Tunnels ====================
-
-    async def list_tunnels(
-        self,
-        name: Optional[str] = None,
-        status: Optional[str] = None,
-        group_id: Optional[int] = None,
-        encapsulation: Optional[str] = None,
-        tenant_id: Optional[int] = None,
-        **kwargs
-    ) -> List[Dict]:
-        """List all tunnels with optional filtering."""
-        params = {k: v for k, v in {
-            'name': name, 'status': status, 'group_id': group_id,
-            'encapsulation': encapsulation, 'tenant_id': tenant_id, **kwargs
-        }.items() if v is not None}
-        return self.client.list(f'{self.base_endpoint}/tunnels', params=params)
-
-    async def get_tunnel(self, id: int) -> Dict:
-        """Get a specific tunnel by ID."""
-        return self.client.get(f'{self.base_endpoint}/tunnels', id)
-
-    async def create_tunnel(
-        self,
-        name: str,
-        status: str = 'active',
-        encapsulation: str = 'ipsec-tunnel',
-        group: Optional[int] = None,
-        ipsec_profile: Optional[int] = None,
-        tenant: Optional[int] = None,
-        tunnel_id: Optional[int] = None,
-        description: Optional[str] = None,
-        **kwargs
-    ) -> Dict:
-        """Create a new tunnel."""
-        data = {'name': name, 'status': status, 'encapsulation': encapsulation, **kwargs}
-        for key, val in [
-            ('group', group), ('ipsec_profile', ipsec_profile),
-            ('tenant', tenant), ('tunnel_id', tunnel_id), ('description', description)
-        ]:
-            if val is not None:
-                data[key] = val
-        return self.client.create(f'{self.base_endpoint}/tunnels', data)
-
-    async def update_tunnel(self, id: int, **kwargs) -> Dict:
-        """Update a tunnel."""
-        return self.client.patch(f'{self.base_endpoint}/tunnels', id, kwargs)
-
-    async def delete_tunnel(self, id: int) -> None:
|
|
||||||
"""Delete a tunnel."""
|
|
||||||
self.client.delete(f'{self.base_endpoint}/tunnels', id)
|
|
||||||
|
|
||||||
# ==================== Tunnel Terminations ====================
|
|
||||||
|
|
||||||
async def list_tunnel_terminations(
|
|
||||||
self,
|
|
||||||
tunnel_id: Optional[int] = None,
|
|
||||||
role: Optional[str] = None,
|
|
||||||
**kwargs
|
|
||||||
) -> List[Dict]:
|
|
||||||
"""List all tunnel terminations."""
|
|
||||||
params = {k: v for k, v in {
|
|
||||||
'tunnel_id': tunnel_id, 'role': role, **kwargs
|
|
||||||
}.items() if v is not None}
|
|
||||||
return self.client.list(f'{self.base_endpoint}/tunnel-terminations', params=params)
|
|
||||||
|
|
||||||
async def get_tunnel_termination(self, id: int) -> Dict:
|
|
||||||
"""Get a specific tunnel termination by ID."""
|
|
||||||
return self.client.get(f'{self.base_endpoint}/tunnel-terminations', id)
|
|
||||||
|
|
||||||
async def create_tunnel_termination(
|
|
||||||
self,
|
|
||||||
tunnel: int,
|
|
||||||
role: str,
|
|
||||||
termination_type: str,
|
|
||||||
termination_id: int,
|
|
||||||
outside_ip: Optional[int] = None,
|
|
||||||
**kwargs
|
|
||||||
) -> Dict:
|
|
||||||
"""Create a new tunnel termination."""
|
|
||||||
data = {
|
|
||||||
'tunnel': tunnel, 'role': role,
|
|
||||||
'termination_type': termination_type, 'termination_id': termination_id, **kwargs
|
|
||||||
}
|
|
||||||
if outside_ip:
|
|
||||||
data['outside_ip'] = outside_ip
|
|
||||||
return self.client.create(f'{self.base_endpoint}/tunnel-terminations', data)
|
|
||||||
|
|
||||||
async def update_tunnel_termination(self, id: int, **kwargs) -> Dict:
|
|
||||||
"""Update a tunnel termination."""
|
|
||||||
return self.client.patch(f'{self.base_endpoint}/tunnel-terminations', id, kwargs)
|
|
||||||
|
|
||||||
async def delete_tunnel_termination(self, id: int) -> None:
|
|
||||||
"""Delete a tunnel termination."""
|
|
||||||
self.client.delete(f'{self.base_endpoint}/tunnel-terminations', id)
|
|
||||||
|
|
||||||
# ==================== IKE Proposals ====================
|
|
||||||
|
|
||||||
async def list_ike_proposals(self, name: Optional[str] = None, **kwargs) -> List[Dict]:
|
|
||||||
"""List all IKE proposals."""
|
|
||||||
params = {k: v for k, v in {'name': name, **kwargs}.items() if v is not None}
|
|
||||||
return self.client.list(f'{self.base_endpoint}/ike-proposals', params=params)
|
|
||||||
|
|
||||||
async def get_ike_proposal(self, id: int) -> Dict:
|
|
||||||
"""Get a specific IKE proposal by ID."""
|
|
||||||
return self.client.get(f'{self.base_endpoint}/ike-proposals', id)
|
|
||||||
|
|
||||||
async def create_ike_proposal(
|
|
||||||
self,
|
|
||||||
name: str,
|
|
||||||
authentication_method: str,
|
|
||||||
encryption_algorithm: str,
|
|
||||||
authentication_algorithm: str,
|
|
||||||
group: int,
|
|
||||||
sa_lifetime: Optional[int] = None,
|
|
||||||
description: Optional[str] = None,
|
|
||||||
**kwargs
|
|
||||||
) -> Dict:
|
|
||||||
"""Create a new IKE proposal."""
|
|
||||||
data = {
|
|
||||||
'name': name, 'authentication_method': authentication_method,
|
|
||||||
'encryption_algorithm': encryption_algorithm,
|
|
||||||
'authentication_algorithm': authentication_algorithm, 'group': group, **kwargs
|
|
||||||
}
|
|
||||||
if sa_lifetime:
|
|
||||||
data['sa_lifetime'] = sa_lifetime
|
|
||||||
if description:
|
|
||||||
data['description'] = description
|
|
||||||
return self.client.create(f'{self.base_endpoint}/ike-proposals', data)
|
|
||||||
|
|
||||||
async def update_ike_proposal(self, id: int, **kwargs) -> Dict:
|
|
||||||
"""Update an IKE proposal."""
|
|
||||||
return self.client.patch(f'{self.base_endpoint}/ike-proposals', id, kwargs)
|
|
||||||
|
|
||||||
async def delete_ike_proposal(self, id: int) -> None:
|
|
||||||
"""Delete an IKE proposal."""
|
|
||||||
self.client.delete(f'{self.base_endpoint}/ike-proposals', id)
|
|
||||||
|
|
||||||
# ==================== IKE Policies ====================
|
|
||||||
|
|
||||||
async def list_ike_policies(self, name: Optional[str] = None, **kwargs) -> List[Dict]:
|
|
||||||
"""List all IKE policies."""
|
|
||||||
params = {k: v for k, v in {'name': name, **kwargs}.items() if v is not None}
|
|
||||||
return self.client.list(f'{self.base_endpoint}/ike-policies', params=params)
|
|
||||||
|
|
||||||
async def get_ike_policy(self, id: int) -> Dict:
|
|
||||||
"""Get a specific IKE policy by ID."""
|
|
||||||
return self.client.get(f'{self.base_endpoint}/ike-policies', id)
|
|
||||||
|
|
||||||
async def create_ike_policy(
|
|
||||||
self,
|
|
||||||
name: str,
|
|
||||||
version: int,
|
|
||||||
mode: str,
|
|
||||||
proposals: List[int],
|
|
||||||
preshared_key: Optional[str] = None,
|
|
||||||
description: Optional[str] = None,
|
|
||||||
**kwargs
|
|
||||||
) -> Dict:
|
|
||||||
"""Create a new IKE policy."""
|
|
||||||
data = {'name': name, 'version': version, 'mode': mode, 'proposals': proposals, **kwargs}
|
|
||||||
if preshared_key:
|
|
||||||
data['preshared_key'] = preshared_key
|
|
||||||
if description:
|
|
||||||
data['description'] = description
|
|
||||||
return self.client.create(f'{self.base_endpoint}/ike-policies', data)
|
|
||||||
|
|
||||||
async def update_ike_policy(self, id: int, **kwargs) -> Dict:
|
|
||||||
"""Update an IKE policy."""
|
|
||||||
return self.client.patch(f'{self.base_endpoint}/ike-policies', id, kwargs)
|
|
||||||
|
|
||||||
async def delete_ike_policy(self, id: int) -> None:
|
|
||||||
"""Delete an IKE policy."""
|
|
||||||
self.client.delete(f'{self.base_endpoint}/ike-policies', id)
|
|
||||||
|
|
||||||
# ==================== IPSec Proposals ====================
|
|
||||||
|
|
||||||
async def list_ipsec_proposals(self, name: Optional[str] = None, **kwargs) -> List[Dict]:
|
|
||||||
"""List all IPSec proposals."""
|
|
||||||
params = {k: v for k, v in {'name': name, **kwargs}.items() if v is not None}
|
|
||||||
return self.client.list(f'{self.base_endpoint}/ipsec-proposals', params=params)
|
|
||||||
|
|
||||||
async def get_ipsec_proposal(self, id: int) -> Dict:
|
|
||||||
"""Get a specific IPSec proposal by ID."""
|
|
||||||
return self.client.get(f'{self.base_endpoint}/ipsec-proposals', id)
|
|
||||||
|
|
||||||
async def create_ipsec_proposal(
|
|
||||||
self,
|
|
||||||
name: str,
|
|
||||||
encryption_algorithm: str,
|
|
||||||
authentication_algorithm: str,
|
|
||||||
sa_lifetime_seconds: Optional[int] = None,
|
|
||||||
sa_lifetime_data: Optional[int] = None,
|
|
||||||
description: Optional[str] = None,
|
|
||||||
**kwargs
|
|
||||||
) -> Dict:
|
|
||||||
"""Create a new IPSec proposal."""
|
|
||||||
data = {
|
|
||||||
'name': name, 'encryption_algorithm': encryption_algorithm,
|
|
||||||
'authentication_algorithm': authentication_algorithm, **kwargs
|
|
||||||
}
|
|
||||||
for key, val in [
|
|
||||||
('sa_lifetime_seconds', sa_lifetime_seconds),
|
|
||||||
('sa_lifetime_data', sa_lifetime_data), ('description', description)
|
|
||||||
]:
|
|
||||||
if val is not None:
|
|
||||||
data[key] = val
|
|
||||||
return self.client.create(f'{self.base_endpoint}/ipsec-proposals', data)
|
|
||||||
|
|
||||||
async def update_ipsec_proposal(self, id: int, **kwargs) -> Dict:
|
|
||||||
"""Update an IPSec proposal."""
|
|
||||||
return self.client.patch(f'{self.base_endpoint}/ipsec-proposals', id, kwargs)
|
|
||||||
|
|
||||||
async def delete_ipsec_proposal(self, id: int) -> None:
|
|
||||||
"""Delete an IPSec proposal."""
|
|
||||||
self.client.delete(f'{self.base_endpoint}/ipsec-proposals', id)
|
|
||||||
|
|
||||||
# ==================== IPSec Policies ====================
|
|
||||||
|
|
||||||
async def list_ipsec_policies(self, name: Optional[str] = None, **kwargs) -> List[Dict]:
|
|
||||||
"""List all IPSec policies."""
|
|
||||||
params = {k: v for k, v in {'name': name, **kwargs}.items() if v is not None}
|
|
||||||
return self.client.list(f'{self.base_endpoint}/ipsec-policies', params=params)
|
|
||||||
|
|
||||||
async def get_ipsec_policy(self, id: int) -> Dict:
|
|
||||||
"""Get a specific IPSec policy by ID."""
|
|
||||||
return self.client.get(f'{self.base_endpoint}/ipsec-policies', id)
|
|
||||||
|
|
||||||
async def create_ipsec_policy(
|
|
||||||
self,
|
|
||||||
name: str,
|
|
||||||
proposals: List[int],
|
|
||||||
pfs_group: Optional[int] = None,
|
|
||||||
description: Optional[str] = None,
|
|
||||||
**kwargs
|
|
||||||
) -> Dict:
|
|
||||||
"""Create a new IPSec policy."""
|
|
||||||
data = {'name': name, 'proposals': proposals, **kwargs}
|
|
||||||
if pfs_group:
|
|
||||||
data['pfs_group'] = pfs_group
|
|
||||||
if description:
|
|
||||||
data['description'] = description
|
|
||||||
return self.client.create(f'{self.base_endpoint}/ipsec-policies', data)
|
|
||||||
|
|
||||||
async def update_ipsec_policy(self, id: int, **kwargs) -> Dict:
|
|
||||||
"""Update an IPSec policy."""
|
|
||||||
return self.client.patch(f'{self.base_endpoint}/ipsec-policies', id, kwargs)
|
|
||||||
|
|
||||||
async def delete_ipsec_policy(self, id: int) -> None:
|
|
||||||
"""Delete an IPSec policy."""
|
|
||||||
self.client.delete(f'{self.base_endpoint}/ipsec-policies', id)
|
|
||||||
|
|
||||||
# ==================== IPSec Profiles ====================
|
|
||||||
|
|
||||||
async def list_ipsec_profiles(self, name: Optional[str] = None, **kwargs) -> List[Dict]:
|
|
||||||
"""List all IPSec profiles."""
|
|
||||||
params = {k: v for k, v in {'name': name, **kwargs}.items() if v is not None}
|
|
||||||
return self.client.list(f'{self.base_endpoint}/ipsec-profiles', params=params)
|
|
||||||
|
|
||||||
async def get_ipsec_profile(self, id: int) -> Dict:
|
|
||||||
"""Get a specific IPSec profile by ID."""
|
|
||||||
return self.client.get(f'{self.base_endpoint}/ipsec-profiles', id)
|
|
||||||
|
|
||||||
async def create_ipsec_profile(
|
|
||||||
self,
|
|
||||||
name: str,
|
|
||||||
mode: str,
|
|
||||||
ike_policy: int,
|
|
||||||
ipsec_policy: int,
|
|
||||||
description: Optional[str] = None,
|
|
||||||
**kwargs
|
|
||||||
) -> Dict:
|
|
||||||
"""Create a new IPSec profile."""
|
|
||||||
data = {'name': name, 'mode': mode, 'ike_policy': ike_policy, 'ipsec_policy': ipsec_policy, **kwargs}
|
|
||||||
if description:
|
|
||||||
data['description'] = description
|
|
||||||
return self.client.create(f'{self.base_endpoint}/ipsec-profiles', data)
|
|
||||||
|
|
||||||
async def update_ipsec_profile(self, id: int, **kwargs) -> Dict:
|
|
||||||
"""Update an IPSec profile."""
|
|
||||||
return self.client.patch(f'{self.base_endpoint}/ipsec-profiles', id, kwargs)
|
|
||||||
|
|
||||||
async def delete_ipsec_profile(self, id: int) -> None:
|
|
||||||
"""Delete an IPSec profile."""
|
|
||||||
self.client.delete(f'{self.base_endpoint}/ipsec-profiles', id)
|
|
||||||
|
|
||||||
# ==================== L2VPN ====================
|
|
||||||
|
|
||||||
async def list_l2vpns(
|
|
||||||
self,
|
|
||||||
name: Optional[str] = None,
|
|
||||||
slug: Optional[str] = None,
|
|
||||||
type: Optional[str] = None,
|
|
||||||
tenant_id: Optional[int] = None,
|
|
||||||
**kwargs
|
|
||||||
) -> List[Dict]:
|
|
||||||
"""List all L2VPNs with optional filtering."""
|
|
||||||
params = {k: v for k, v in {
|
|
||||||
'name': name, 'slug': slug, 'type': type, 'tenant_id': tenant_id, **kwargs
|
|
||||||
}.items() if v is not None}
|
|
||||||
return self.client.list(f'{self.base_endpoint}/l2vpns', params=params)
|
|
||||||
|
|
||||||
async def get_l2vpn(self, id: int) -> Dict:
|
|
||||||
"""Get a specific L2VPN by ID."""
|
|
||||||
return self.client.get(f'{self.base_endpoint}/l2vpns', id)
|
|
||||||
|
|
||||||
async def create_l2vpn(
|
|
||||||
self,
|
|
||||||
name: str,
|
|
||||||
slug: str,
|
|
||||||
type: str,
|
|
||||||
identifier: Optional[int] = None,
|
|
||||||
tenant: Optional[int] = None,
|
|
||||||
description: Optional[str] = None,
|
|
||||||
import_targets: Optional[List[int]] = None,
|
|
||||||
export_targets: Optional[List[int]] = None,
|
|
||||||
**kwargs
|
|
||||||
) -> Dict:
|
|
||||||
"""Create a new L2VPN."""
|
|
||||||
data = {'name': name, 'slug': slug, 'type': type, **kwargs}
|
|
||||||
for key, val in [
|
|
||||||
('identifier', identifier), ('tenant', tenant), ('description', description),
|
|
||||||
('import_targets', import_targets), ('export_targets', export_targets)
|
|
||||||
]:
|
|
||||||
if val is not None:
|
|
||||||
data[key] = val
|
|
||||||
return self.client.create(f'{self.base_endpoint}/l2vpns', data)
|
|
||||||
|
|
||||||
async def update_l2vpn(self, id: int, **kwargs) -> Dict:
|
|
||||||
"""Update an L2VPN."""
|
|
||||||
return self.client.patch(f'{self.base_endpoint}/l2vpns', id, kwargs)
|
|
||||||
|
|
||||||
async def delete_l2vpn(self, id: int) -> None:
|
|
||||||
"""Delete an L2VPN."""
|
|
||||||
self.client.delete(f'{self.base_endpoint}/l2vpns', id)
|
|
||||||
|
|
||||||
# ==================== L2VPN Terminations ====================
|
|
||||||
|
|
||||||
async def list_l2vpn_terminations(
|
|
||||||
self,
|
|
||||||
l2vpn_id: Optional[int] = None,
|
|
||||||
**kwargs
|
|
||||||
) -> List[Dict]:
|
|
||||||
"""List all L2VPN terminations."""
|
|
||||||
params = {k: v for k, v in {'l2vpn_id': l2vpn_id, **kwargs}.items() if v is not None}
|
|
||||||
return self.client.list(f'{self.base_endpoint}/l2vpn-terminations', params=params)
|
|
||||||
|
|
||||||
async def get_l2vpn_termination(self, id: int) -> Dict:
|
|
||||||
"""Get a specific L2VPN termination by ID."""
|
|
||||||
return self.client.get(f'{self.base_endpoint}/l2vpn-terminations', id)
|
|
||||||
|
|
||||||
async def create_l2vpn_termination(
|
|
||||||
self,
|
|
||||||
l2vpn: int,
|
|
||||||
assigned_object_type: str,
|
|
||||||
assigned_object_id: int,
|
|
||||||
**kwargs
|
|
||||||
) -> Dict:
|
|
||||||
"""Create a new L2VPN termination."""
|
|
||||||
data = {
|
|
||||||
'l2vpn': l2vpn, 'assigned_object_type': assigned_object_type,
|
|
||||||
'assigned_object_id': assigned_object_id, **kwargs
|
|
||||||
}
|
|
||||||
return self.client.create(f'{self.base_endpoint}/l2vpn-terminations', data)
|
|
||||||
|
|
||||||
async def update_l2vpn_termination(self, id: int, **kwargs) -> Dict:
|
|
||||||
"""Update an L2VPN termination."""
|
|
||||||
return self.client.patch(f'{self.base_endpoint}/l2vpn-terminations', id, kwargs)
|
|
||||||
|
|
||||||
async def delete_l2vpn_termination(self, id: int) -> None:
|
|
||||||
"""Delete an L2VPN termination."""
|
|
||||||
self.client.delete(f'{self.base_endpoint}/l2vpn-terminations', id)
|
|
||||||
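Every tool above follows one pattern: build a params or data dict, drop `None` values, and delegate to the client with an endpoint path built from `base_endpoint`. A minimal self-contained sketch of that pattern (`StubClient` and `VPNToolsSketch` are illustrative stand-ins, not part of this codebase):

```python
import asyncio

# Illustrative stub standing in for NetBoxClient: records calls so we can
# observe the endpoint path and the None-filtered params.
class StubClient:
    def __init__(self):
        self.calls = []

    def list(self, endpoint, params=None):
        self.calls.append(("list", endpoint, params))
        return []

class VPNToolsSketch:
    """Trimmed copy of the pattern used throughout VPNTools."""
    def __init__(self, client):
        self.client = client
        self.base_endpoint = 'vpn'

    async def list_tunnels(self, name=None, status=None, **kwargs):
        # None-valued filters are dropped before they reach the API.
        params = {k: v for k, v in {'name': name, 'status': status, **kwargs}.items() if v is not None}
        return self.client.list(f'{self.base_endpoint}/tunnels', params=params)

client = StubClient()
tools = VPNToolsSketch(client)
asyncio.run(tools.list_tunnels(status='active'))
```

After the call, the stub has recorded a single `list` against `vpn/tunnels` with only the non-`None` filter `{'status': 'active'}`.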
@@ -1,166 +0,0 @@
"""
Wireless tools for NetBox MCP Server.

Covers: Wireless LANs, Wireless LAN Groups, and Wireless Links.
"""
import logging
from typing import List, Dict, Optional, Any
from ..netbox_client import NetBoxClient

logger = logging.getLogger(__name__)


class WirelessTools:
    """Tools for Wireless operations in NetBox"""

    def __init__(self, client: NetBoxClient):
        self.client = client
        self.base_endpoint = 'wireless'

    # ==================== Wireless LAN Groups ====================

    async def list_wireless_lan_groups(
        self,
        name: Optional[str] = None,
        slug: Optional[str] = None,
        parent_id: Optional[int] = None,
        **kwargs
    ) -> List[Dict]:
        """List all wireless LAN groups."""
        params = {k: v for k, v in {
            'name': name, 'slug': slug, 'parent_id': parent_id, **kwargs
        }.items() if v is not None}
        return self.client.list(f'{self.base_endpoint}/wireless-lan-groups', params=params)

    async def get_wireless_lan_group(self, id: int) -> Dict:
        """Get a specific wireless LAN group by ID."""
        return self.client.get(f'{self.base_endpoint}/wireless-lan-groups', id)

    async def create_wireless_lan_group(
        self,
        name: str,
        slug: str,
        parent: Optional[int] = None,
        description: Optional[str] = None,
        **kwargs
    ) -> Dict:
        """Create a new wireless LAN group."""
        data = {'name': name, 'slug': slug, **kwargs}
        if parent:
            data['parent'] = parent
        if description:
            data['description'] = description
        return self.client.create(f'{self.base_endpoint}/wireless-lan-groups', data)

    async def update_wireless_lan_group(self, id: int, **kwargs) -> Dict:
        """Update a wireless LAN group."""
        return self.client.patch(f'{self.base_endpoint}/wireless-lan-groups', id, kwargs)

    async def delete_wireless_lan_group(self, id: int) -> None:
        """Delete a wireless LAN group."""
        self.client.delete(f'{self.base_endpoint}/wireless-lan-groups', id)

    # ==================== Wireless LANs ====================

    async def list_wireless_lans(
        self,
        ssid: Optional[str] = None,
        group_id: Optional[int] = None,
        vlan_id: Optional[int] = None,
        tenant_id: Optional[int] = None,
        status: Optional[str] = None,
        auth_type: Optional[str] = None,
        **kwargs
    ) -> List[Dict]:
        """List all wireless LANs with optional filtering."""
        params = {k: v for k, v in {
            'ssid': ssid, 'group_id': group_id, 'vlan_id': vlan_id,
            'tenant_id': tenant_id, 'status': status, 'auth_type': auth_type, **kwargs
        }.items() if v is not None}
        return self.client.list(f'{self.base_endpoint}/wireless-lans', params=params)

    async def get_wireless_lan(self, id: int) -> Dict:
        """Get a specific wireless LAN by ID."""
        return self.client.get(f'{self.base_endpoint}/wireless-lans', id)

    async def create_wireless_lan(
        self,
        ssid: str,
        status: str = 'active',
        group: Optional[int] = None,
        vlan: Optional[int] = None,
        tenant: Optional[int] = None,
        auth_type: Optional[str] = None,
        auth_cipher: Optional[str] = None,
        auth_psk: Optional[str] = None,
        description: Optional[str] = None,
        **kwargs
    ) -> Dict:
        """Create a new wireless LAN."""
        data = {'ssid': ssid, 'status': status, **kwargs}
        for key, val in [
            ('group', group), ('vlan', vlan), ('tenant', tenant),
            ('auth_type', auth_type), ('auth_cipher', auth_cipher),
            ('auth_psk', auth_psk), ('description', description)
        ]:
            if val is not None:
                data[key] = val
        return self.client.create(f'{self.base_endpoint}/wireless-lans', data)

    async def update_wireless_lan(self, id: int, **kwargs) -> Dict:
        """Update a wireless LAN."""
        return self.client.patch(f'{self.base_endpoint}/wireless-lans', id, kwargs)

    async def delete_wireless_lan(self, id: int) -> None:
        """Delete a wireless LAN."""
        self.client.delete(f'{self.base_endpoint}/wireless-lans', id)

    # ==================== Wireless Links ====================

    async def list_wireless_links(
        self,
        ssid: Optional[str] = None,
        status: Optional[str] = None,
        tenant_id: Optional[int] = None,
        **kwargs
    ) -> List[Dict]:
        """List all wireless links with optional filtering."""
        params = {k: v for k, v in {
            'ssid': ssid, 'status': status, 'tenant_id': tenant_id, **kwargs
        }.items() if v is not None}
        return self.client.list(f'{self.base_endpoint}/wireless-links', params=params)

    async def get_wireless_link(self, id: int) -> Dict:
        """Get a specific wireless link by ID."""
        return self.client.get(f'{self.base_endpoint}/wireless-links', id)

    async def create_wireless_link(
        self,
        interface_a: int,
        interface_b: int,
        ssid: Optional[str] = None,
        status: str = 'connected',
        tenant: Optional[int] = None,
        auth_type: Optional[str] = None,
        auth_cipher: Optional[str] = None,
        auth_psk: Optional[str] = None,
        description: Optional[str] = None,
        **kwargs
    ) -> Dict:
        """Create a new wireless link."""
        data = {'interface_a': interface_a, 'interface_b': interface_b, 'status': status, **kwargs}
        for key, val in [
            ('ssid', ssid), ('tenant', tenant), ('auth_type', auth_type),
            ('auth_cipher', auth_cipher), ('auth_psk', auth_psk), ('description', description)
        ]:
            if val is not None:
                data[key] = val
        return self.client.create(f'{self.base_endpoint}/wireless-links', data)

    async def update_wireless_link(self, id: int, **kwargs) -> Dict:
        """Update a wireless link."""
        return self.client.patch(f'{self.base_endpoint}/wireless-links', id, kwargs)

    async def delete_wireless_link(self, id: int) -> None:
        """Delete a wireless link."""
        self.client.delete(f'{self.base_endpoint}/wireless-links', id)
21
mcp-servers/netbox/run.sh
Executable file
@@ -0,0 +1,21 @@
#!/bin/bash
# Capture original working directory before any cd operations
# This should be the user's project directory when launched by Claude Code
export CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/netbox/.venv"
LOCAL_VENV="$SCRIPT_DIR/.venv"

if [[ -f "$CACHE_VENV/bin/python" ]]; then
    PYTHON="$CACHE_VENV/bin/python"
elif [[ -f "$LOCAL_VENV/bin/python" ]]; then
    PYTHON="$LOCAL_VENV/bin/python"
else
    echo "ERROR: No venv found. Run: ./scripts/setup-venvs.sh" >&2
    exit 1
fi

cd "$SCRIPT_DIR"
export PYTHONPATH="$SCRIPT_DIR"
exec "$PYTHON" -m mcp_server.server "$@"
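The heart of run.sh is a first-match fallback over candidate interpreters: prefer the shared cached venv, fall back to the repo-local one, and fail loudly otherwise. That pattern can be isolated as a reusable function; a sketch (`pick_python` is illustrative, not part of the repo):

```shell
#!/bin/bash
# Return (print) the first existing file among the given candidates;
# fail with the same setup hint run.sh uses if none exist.
pick_python() {
    local candidate
    for candidate in "$@"; do
        if [[ -f "$candidate" ]]; then
            printf '%s\n' "$candidate"
            return 0
        fi
    done
    echo "ERROR: No venv found. Run: ./scripts/setup-venvs.sh" >&2
    return 1
}

# Usage mirroring run.sh:
#   PYTHON="$(pick_python "$CACHE_VENV/bin/python" "$LOCAL_VENV/bin/python")" || exit 1
```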
5
mcp-servers/viz-platform/.doc-guardian-queue
Normal file
@@ -0,0 +1,5 @@
2026-01-26T11:40:11 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/viz-platform/registry/dmc_2_5.json | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T13:46:31 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/viz-platform/tests/test_chart_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T13:46:32 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/viz-platform/tests/test_theme_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T13:46:34 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/viz-platform/tests/test_theme_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
2026-01-26T13:46:35 | mcp-servers | /home/lmiranda/claude-plugins-work/mcp-servers/viz-platform/tests/test_theme_tools.py | docs/COMMANDS-CHEATSHEET.md CLAUDE.md
115
mcp-servers/viz-platform/README.md
Normal file
@@ -0,0 +1,115 @@
# viz-platform MCP Server

Model Context Protocol (MCP) server for Dash Mantine Components validation and visualization tools.

## Overview

This MCP server provides 21 tools for:
- **DMC Validation**: Version-locked component registry prevents Claude from hallucinating invalid props
- **Chart Creation**: Plotly-based visualization with theme integration
- **Layout Composition**: Dashboard layouts with responsive grids
- **Theme Management**: Design token-based theming system
- **Page Structure**: Multi-page Dash app generation

## Tools

### DMC Tools (3)

| Tool | Description |
|------|-------------|
| `list_components` | List available DMC components by category |
| `get_component_props` | Get valid props, types, and defaults for a component |
| `validate_component` | Validate component definition before use |

### Chart Tools (2)

| Tool | Description |
|------|-------------|
| `chart_create` | Create Plotly chart (line, bar, scatter, pie, histogram, area, heatmap) |
| `chart_configure_interaction` | Configure chart interactions (zoom, pan, hover) |

### Layout Tools (5)

| Tool | Description |
|------|-------------|
| `layout_create` | Create dashboard layout structure |
| `layout_add_filter` | Add filter components to layout |
| `layout_set_grid` | Configure responsive grid settings |
| `layout_get` | Retrieve layout configuration |
| `layout_add_section` | Add sections to layout |

### Theme Tools (6)

| Tool | Description |
|------|-------------|
| `theme_create` | Create new theme with design tokens |
| `theme_extend` | Extend existing theme with overrides |
| `theme_validate` | Validate theme completeness |
| `theme_export_css` | Export theme as CSS custom properties |
| `theme_list` | List available themes |
| `theme_activate` | Set active theme for visualizations |

### Page Tools (5)

| Tool | Description |
|------|-------------|
| `page_create` | Create new page structure |
| `page_add_navbar` | Add navigation bar to page |
| `page_set_auth` | Configure page authentication |
| `page_list` | List available pages |
| `page_get_app_config` | Get full app configuration |

## Configuration

### Environment Variables

| Variable | Required | Description |
|----------|----------|-------------|
| `DMC_VERSION` | No | Dash Mantine Components version (auto-detected if installed) |
| `VIZ_DEFAULT_THEME` | No | Default theme name |
| `CLAUDE_PROJECT_DIR` | No | Project directory for theme storage |

### Theme Storage

Themes can be stored at two levels:
- **User-level**: `~/.config/claude/themes/`
- **Project-level**: `{project}/.viz-platform/themes/`

Project-level themes take precedence.
## Component Registry

The server uses a static JSON registry for DMC component validation:
- Pre-generated from DMC source code
- Version-tagged (e.g., `dmc_2_5.json`)
- Prevents hallucination of non-existent props
- Fast, deterministic validation

Registry files are stored in the `registry/` directory.
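Registry-backed validation of this kind reduces to set membership on a component's declared props. A hedged sketch (the registry schema shown is an assumption based on this README, not the actual `dmc_2_5.json` format):

```python
# Minimal sketch: a registry maps component names to their valid props;
# validation returns a list of error strings, empty meaning "valid".
def validate_props(registry: dict, component: str, props: dict) -> list:
    if component not in registry:
        return [f"unknown component: {component}"]
    valid = set(registry[component]["props"])
    return [f"invalid prop '{p}' on {component}" for p in props if p not in valid]

# Assumed toy registry entry, for illustration only.
registry = {"Button": {"props": ["children", "color", "variant", "size"]}}
assert validate_props(registry, "Button", {"color": "blue"}) == []
assert validate_props(registry, "Button", {"onClick": None}) == ["invalid prop 'onClick' on Button"]
```

Because the registry is a static file generated per DMC version, the check is deterministic and needs no DMC import at validation time.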
## Tests

94 tests with coverage:
- `test_config.py`: 82% coverage
- `test_component_registry.py`: 92% coverage
- `test_dmc_tools.py`: 88% coverage
- `test_chart_tools.py`: 68% coverage
- `test_theme_tools.py`: 99% coverage

Run tests:
```bash
cd mcp-servers/viz-platform
source .venv/bin/activate
pytest tests/ -v
```

## Dependencies

- Python 3.10+
- FastMCP
- plotly
- dash-mantine-components (optional, for version detection)

## Usage

This MCP server is used by the `viz-platform` plugin. See the plugin's commands in `plugins/viz-platform/commands/` for usage.
479
mcp-servers/viz-platform/mcp_server/accessibility_tools.py
Normal file
@@ -0,0 +1,479 @@
"""
Accessibility validation tools for color blindness and WCAG compliance.

Provides tools for validating color palettes against color blindness
simulations and WCAG contrast requirements.
"""
import logging
import math
from typing import Dict, List, Optional, Any, Tuple

logger = logging.getLogger(__name__)


# Color-blind safe palettes
SAFE_PALETTES = {
    "categorical": {
        "name": "Paul Tol's Qualitative",
        "colors": ["#4477AA", "#EE6677", "#228833", "#CCBB44", "#66CCEE", "#AA3377", "#BBBBBB"],
        "description": "Distinguishable for all types of color blindness"
    },
    "ibm": {
        "name": "IBM Design",
        "colors": ["#648FFF", "#785EF0", "#DC267F", "#FE6100", "#FFB000"],
        "description": "IBM's accessible color palette"
    },
    "okabe_ito": {
        "name": "Okabe-Ito",
        "colors": ["#E69F00", "#56B4E9", "#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7", "#000000"],
        "description": "Optimized for all color vision deficiencies"
    },
    "tableau_colorblind": {
        "name": "Tableau Colorblind 10",
        "colors": ["#006BA4", "#FF800E", "#ABABAB", "#595959", "#5F9ED1",
                   "#C85200", "#898989", "#A2C8EC", "#FFBC79", "#CFCFCF"],
        "description": "Industry-standard accessible palette"
    }
}


# Simulation matrices for color blindness, applied as linear RGB transforms
# (an approximation of the LMS color space method).
# These approximate how colors appear to people with different types of color blindness.
SIMULATION_MATRICES = {
    "deuteranopia": {
        # Green-blind (most common)
        "severity": "common",
        "population": "6% males, 0.4% females",
        "description": "Difficulty distinguishing red from green (green-blind)",
        "matrix": [
            [0.625, 0.375, 0.0],
            [0.700, 0.300, 0.0],
            [0.0, 0.300, 0.700]
        ]
    },
    "protanopia": {
        # Red-blind
        "severity": "common",
        "population": "2.5% males, 0.05% females",
        "description": "Difficulty distinguishing red from green (red-blind)",
        "matrix": [
            [0.567, 0.433, 0.0],
            [0.558, 0.442, 0.0],
            [0.0, 0.242, 0.758]
        ]
    },
    "tritanopia": {
        # Blue-blind (rare)
        "severity": "rare",
        "population": "0.01% total",
        "description": "Difficulty distinguishing blue from yellow",
        "matrix": [
            [0.950, 0.050, 0.0],
            [0.0, 0.433, 0.567],
            [0.0, 0.475, 0.525]
        ]
    }
}
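As a concrete check of the matrices above, applying the deuteranopia transform to pure red and pure green shows how much the two collapse toward similar muted hues (a minimal standalone sketch; the matrix values are copied from `SIMULATION_MATRICES`):

```python
# Deuteranopia matrix from SIMULATION_MATRICES, applied row-by-row to an RGB triple.
DEUTERANOPIA = [
    [0.625, 0.375, 0.0],
    [0.700, 0.300, 0.0],
    [0.0, 0.300, 0.700],
]

def simulate(rgb, matrix):
    """Apply a 3x3 color-blindness matrix to an (r, g, b) tuple."""
    return tuple(
        int(rgb[0] * row[0] + rgb[1] * row[1] + rgb[2] * row[2])
        for row in matrix
    )

print(simulate((255, 0, 0), DEUTERANOPIA))  # (159, 178, 0) - pure red
print(simulate((0, 255, 0), DEUTERANOPIA))  # (95, 76, 76)  - pure green
```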

class AccessibilityTools:
    """
    Color accessibility validation tools.

    Validates colors for WCAG compliance and color blindness accessibility.
    """

    def __init__(self, theme_store=None):
        """
        Initialize accessibility tools.

        Args:
            theme_store: Optional ThemeStore for theme color extraction
        """
        self.theme_store = theme_store

    def _hex_to_rgb(self, hex_color: str) -> Tuple[int, int, int]:
        """Convert a hex color to an RGB tuple."""
        hex_color = hex_color.lstrip('#')
        if len(hex_color) == 3:
            hex_color = ''.join([c * 2 for c in hex_color])
        return tuple(int(hex_color[i:i + 2], 16) for i in (0, 2, 4))

    def _rgb_to_hex(self, rgb: Tuple[int, int, int]) -> str:
        """Convert an RGB tuple to a hex color, clamping each channel to 0-255."""
        return '#{:02x}{:02x}{:02x}'.format(
            max(0, min(255, int(rgb[0]))),
            max(0, min(255, int(rgb[1]))),
            max(0, min(255, int(rgb[2])))
        )

    def _get_relative_luminance(self, rgb: Tuple[int, int, int]) -> float:
        """
        Calculate relative luminance per WCAG 2.1.

        https://www.w3.org/WAI/GL/wiki/Relative_luminance
        """
        def channel_luminance(value: int) -> float:
            v = value / 255
            return v / 12.92 if v <= 0.03928 else ((v + 0.055) / 1.055) ** 2.4

        r, g, b = rgb
        return (
            0.2126 * channel_luminance(r) +
            0.7152 * channel_luminance(g) +
            0.0722 * channel_luminance(b)
        )

    def _get_contrast_ratio(self, color1: str, color2: str) -> float:
        """
        Calculate the contrast ratio between two colors per WCAG 2.1.

        Returns a ratio between 1:1 and 21:1.
        """
        rgb1 = self._hex_to_rgb(color1)
        rgb2 = self._hex_to_rgb(color2)

        l1 = self._get_relative_luminance(rgb1)
        l2 = self._get_relative_luminance(rgb2)

        lighter = max(l1, l2)
        darker = min(l1, l2)

        return (lighter + 0.05) / (darker + 0.05)
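The luminance and contrast formulas can be checked against the WCAG endpoint: black on white should give exactly 21:1. A standalone sketch of the same math:

```python
def luminance(rgb):
    """WCAG 2.1 relative luminance of an (r, g, b) triple."""
    def channel(value):
        v = value / 255
        return v / 12.92 if v <= 0.03928 else ((v + 0.055) / 1.055) ** 2.4
    r, g, b = rgb
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast(rgb1, rgb2):
    """WCAG contrast ratio; always between 1 and 21."""
    l1, l2 = luminance(rgb1), luminance(rgb2)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

print(contrast((0, 0, 0), (255, 255, 255)))  # ~21.0, the maximum
```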

    def _simulate_color_blindness(
        self,
        hex_color: str,
        deficiency_type: str
    ) -> str:
        """
        Simulate how a color appears with a specific color blindness type.

        Uses a linear RGB transformation approximation.
        """
        if deficiency_type not in SIMULATION_MATRICES:
            return hex_color

        rgb = self._hex_to_rgb(hex_color)
        matrix = SIMULATION_MATRICES[deficiency_type]["matrix"]

        # Apply transformation matrix
        r = rgb[0] * matrix[0][0] + rgb[1] * matrix[0][1] + rgb[2] * matrix[0][2]
        g = rgb[0] * matrix[1][0] + rgb[1] * matrix[1][1] + rgb[2] * matrix[1][2]
        b = rgb[0] * matrix[2][0] + rgb[1] * matrix[2][1] + rgb[2] * matrix[2][2]

        return self._rgb_to_hex((r, g, b))

    def _get_color_distance(self, color1: str, color2: str) -> float:
        """
        Calculate perceptual color distance (CIE76 approximation).

        Returns a value where < 20 means colors may be hard to distinguish.
        """
        rgb1 = self._hex_to_rgb(color1)
        rgb2 = self._hex_to_rgb(color2)

        # Simple Euclidean distance in RGB space (approximation).
        # For production, CIEDE2000 should be used instead.
        return math.sqrt(
            (rgb1[0] - rgb2[0]) ** 2 +
            (rgb1[1] - rgb2[1]) ** 2 +
            (rgb1[2] - rgb2[2]) ** 2
        )
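For scale, this RGB Euclidean distance ranges from 0 (identical colors) to about 441.7 (black vs. white), so a threshold around 30 flags only very close pairs. A standalone sketch:

```python
import math

def rgb_distance(rgb1, rgb2):
    """Euclidean distance in RGB space, the same approximation as above."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(rgb1, rgb2)))

# Black vs. white is the maximum possible distance: 255 * sqrt(3).
print(rgb_distance((0, 0, 0), (255, 255, 255)))  # ~441.67

# Two near-identical grays fall well under the distinguishability threshold.
print(rgb_distance((120, 120, 120), (130, 125, 118)) < 30)  # True
```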

    async def accessibility_validate_colors(
        self,
        colors: List[str],
        check_types: Optional[List[str]] = None,
        min_contrast_ratio: float = 4.5
    ) -> Dict[str, Any]:
        """
        Validate a list of colors for accessibility.

        Args:
            colors: List of hex colors to validate
            check_types: Color blindness types to check (default: all)
            min_contrast_ratio: Minimum WCAG contrast ratio (default: 4.5 for AA)

        Returns:
            Dict with:
            - issues: List of accessibility issues found
            - simulations: How colors appear under each deficiency
            - recommendations: Suggestions for improvement
            - safe_palettes: Color-blind safe palette suggestions
        """
        check_types = check_types or list(SIMULATION_MATRICES.keys())
        issues = []
        simulations = {}

        # Normalize colors
        normalized_colors = [c.upper() if c.startswith('#') else f'#{c.upper()}' for c in colors]

        # Simulate each color blindness type
        for deficiency in check_types:
            if deficiency not in SIMULATION_MATRICES:
                continue

            simulated = [self._simulate_color_blindness(c, deficiency) for c in normalized_colors]
            simulations[deficiency] = {
                "original": normalized_colors,
                "simulated": simulated,
                "info": SIMULATION_MATRICES[deficiency]
            }

            # Check if any color pairs become indistinguishable
            for i in range(len(normalized_colors)):
                for j in range(i + 1, len(normalized_colors)):
                    distance = self._get_color_distance(simulated[i], simulated[j])
                    if distance < 30:  # Threshold for distinguishability
                        issues.append({
                            "type": "distinguishability",
                            "severity": "warning" if distance > 15 else "error",
                            "colors": [normalized_colors[i], normalized_colors[j]],
                            "affected_by": [deficiency],
                            "simulated_colors": [simulated[i], simulated[j]],
                            "distance": round(distance, 1),
                            "message": f"Colors may be hard to distinguish for {deficiency} ({SIMULATION_MATRICES[deficiency]['description']})"
                        })

        # Check contrast ratios against white and black backgrounds
        for color in normalized_colors:
            white_contrast = self._get_contrast_ratio(color, "#FFFFFF")
            black_contrast = self._get_contrast_ratio(color, "#000000")

            if white_contrast < min_contrast_ratio and black_contrast < min_contrast_ratio:
                issues.append({
                    "type": "contrast_ratio",
                    "severity": "error",
                    "colors": [color],
                    "white_contrast": round(white_contrast, 2),
                    "black_contrast": round(black_contrast, 2),
                    "required": min_contrast_ratio,
                    "message": f"Insufficient contrast against both white ({white_contrast:.1f}:1) and black ({black_contrast:.1f}:1) backgrounds"
                })

        # Generate recommendations
        recommendations = self._generate_recommendations(issues)

        # Calculate overall score
        error_count = sum(1 for i in issues if i["severity"] == "error")
        warning_count = sum(1 for i in issues if i["severity"] == "warning")

        if error_count == 0 and warning_count == 0:
            score = "A"
        elif error_count == 0 and warning_count <= 2:
            score = "B"
        elif error_count <= 2:
            score = "C"
        else:
            score = "D"

        return {
            "colors_checked": normalized_colors,
            "overall_score": score,
            "issue_count": len(issues),
            "issues": issues,
            "simulations": simulations,
            "recommendations": recommendations,
            "safe_palettes": SAFE_PALETTES
        }
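The grading ladder at the end of the method collapses issue counts into a letter grade. As a standalone sketch of that logic:

```python
def grade(error_count, warning_count):
    """Letter grade, mirroring the ladder in accessibility_validate_colors."""
    if error_count == 0 and warning_count == 0:
        return "A"
    if error_count == 0 and warning_count <= 2:
        return "B"
    if error_count <= 2:
        return "C"
    return "D"

print(grade(0, 0), grade(0, 2), grade(2, 5), grade(3, 0))  # A B C D
```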

    async def accessibility_validate_theme(
        self,
        theme_name: str
    ) -> Dict[str, Any]:
        """
        Validate a theme's colors for accessibility.

        Args:
            theme_name: Theme name to validate

        Returns:
            Dict with accessibility validation results
        """
        if not self.theme_store:
            return {
                "error": "Theme store not configured",
                "theme_name": theme_name
            }

        theme = self.theme_store.get_theme(theme_name)
        if not theme:
            available = self.theme_store.list_themes()
            return {
                "error": f"Theme '{theme_name}' not found. Available: {available}",
                "theme_name": theme_name
            }

        # Extract colors from theme
        colors = []
        tokens = theme.get("tokens", {})
        color_tokens = tokens.get("colors", {})

        def extract_colors(obj, prefix=""):
            """Recursively extract color values."""
            if isinstance(obj, str) and (obj.startswith('#') or len(obj) == 6):
                colors.append(obj if obj.startswith('#') else f'#{obj}')
            elif isinstance(obj, dict):
                for key, value in obj.items():
                    extract_colors(value, f"{prefix}.{key}")
            elif isinstance(obj, list):
                for item in obj:
                    extract_colors(item, prefix)

        extract_colors(color_tokens)

        # Validate extracted colors
        result = await self.accessibility_validate_colors(colors)
        result["theme_name"] = theme_name

        # Add theme-specific checks
        primary = color_tokens.get("primary")
        background = color_tokens.get("background", {})
        text = color_tokens.get("text", {})

        if primary and background:
            bg_color = background.get("base") if isinstance(background, dict) else background
            if bg_color:
                contrast = self._get_contrast_ratio(primary, bg_color)
                if contrast < 4.5:
                    result["issues"].append({
                        "type": "primary_contrast",
                        "severity": "error",
                        "colors": [primary, bg_color],
                        "ratio": round(contrast, 2),
                        "required": 4.5,
                        "message": f"Primary color has insufficient contrast ({contrast:.1f}:1) against background"
                    })

        return result

    async def accessibility_suggest_alternative(
        self,
        color: str,
        deficiency_type: str
    ) -> Dict[str, Any]:
        """
        Suggest accessible alternative colors.

        Args:
            color: Original hex color
            deficiency_type: Type of color blindness to optimize for

        Returns:
            Dict with alternative color suggestions
        """
        rgb = self._hex_to_rgb(color)

        suggestions = []

        # Suggest shifting hue while maintaining saturation and brightness.
        # For red-green deficiency, shift toward blue or yellow.
        if deficiency_type in ["deuteranopia", "protanopia"]:
            # Shift toward blue
            blue_shift = self._rgb_to_hex((
                max(0, rgb[0] - 50),
                max(0, rgb[1] - 30),
                min(255, rgb[2] + 80)
            ))
            suggestions.append({
                "color": blue_shift,
                "description": "Blue-shifted alternative",
                "preserves": "approximate brightness"
            })

            # Shift toward yellow/orange
            yellow_shift = self._rgb_to_hex((
                min(255, rgb[0] + 50),
                min(255, rgb[1] + 30),
                max(0, rgb[2] - 80)
            ))
            suggestions.append({
                "color": yellow_shift,
                "description": "Yellow-shifted alternative",
                "preserves": "approximate brightness"
            })

        elif deficiency_type == "tritanopia":
            # For blue-yellow deficiency, shift toward red or green
            red_shift = self._rgb_to_hex((
                min(255, rgb[0] + 60),
                max(0, rgb[1] - 20),
                max(0, rgb[2] - 40)
            ))
            suggestions.append({
                "color": red_shift,
                "description": "Red-shifted alternative",
                "preserves": "approximate brightness"
            })

        # Add safe palette suggestions
        for palette_name, palette in SAFE_PALETTES.items():
            # Find closest color in safe palette
            min_distance = float('inf')
            closest = None
            for safe_color in palette["colors"]:
                distance = self._get_color_distance(color, safe_color)
                if distance < min_distance:
                    min_distance = distance
                    closest = safe_color

            if closest:
                suggestions.append({
                    "color": closest,
                    "description": f"From {palette['name']} palette",
                    "palette": palette_name
                })

        return {
            "original_color": color,
            "deficiency_type": deficiency_type,
            "suggestions": suggestions[:5]  # Limit to 5 suggestions
        }
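The closest-safe-color step is a plain nearest-neighbor scan by RGB distance. A standalone sketch against the Okabe-Ito palette (colors copied from `SAFE_PALETTES`):

```python
import math

OKABE_ITO = ["#E69F00", "#56B4E9", "#009E73", "#F0E442",
             "#0072B2", "#D55E00", "#CC79A7", "#000000"]

def hex_to_rgb(hex_color):
    h = hex_color.lstrip('#')
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def closest_safe(color):
    """Nearest palette entry by RGB Euclidean distance."""
    rgb = hex_to_rgb(color)
    return min(OKABE_ITO, key=lambda c: math.dist(rgb, hex_to_rgb(c)))

print(closest_safe("#FF0000"))  # #D55E00 (vermilion) is the nearest to pure red
```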

    def _generate_recommendations(self, issues: List[Dict[str, Any]]) -> List[str]:
        """Generate actionable recommendations based on issues."""
        recommendations = []

        # Check for distinguishability issues
        distinguishability_issues = [i for i in issues if i["type"] == "distinguishability"]
        if distinguishability_issues:
            affected_types = set()
            for issue in distinguishability_issues:
                affected_types.update(issue.get("affected_by", []))

            if "deuteranopia" in affected_types or "protanopia" in affected_types:
                recommendations.append(
                    "Avoid using red and green as the only differentiators - "
                    "add patterns, shapes, or labels"
                )

            recommendations.append(
                "Consider using a color-blind safe palette like Okabe-Ito or IBM Design"
            )

        # Check for contrast issues
        contrast_issues = [i for i in issues if i["type"] in ["contrast_ratio", "primary_contrast"]]
        if contrast_issues:
            recommendations.append(
                "Increase contrast by darkening colors for light backgrounds "
                "or lightening them for dark backgrounds"
            )
            recommendations.append(
                "Use WCAG contrast checker tools to verify text readability"
            )

        # General recommendations
        if len(issues) > 0:
            recommendations.append(
                "Add secondary visual cues (icons, patterns, labels) "
                "so charts do not rely solely on color"
            )

        if not recommendations:
            recommendations.append(
                "Color palette appears accessible! Consider adding patterns "
                "for additional distinguishability"
            )

        return recommendations
@@ -3,11 +3,21 @@ Chart creation tools using Plotly.

Provides tools for creating data visualizations with automatic theme integration.
"""
import base64
import logging
import os
from typing import Dict, List, Optional, Any, Union

logger = logging.getLogger(__name__)


# Check for kaleido availability
KALEIDO_AVAILABLE = False
try:
    import kaleido
    KALEIDO_AVAILABLE = True
except ImportError:
    logger.debug("kaleido not installed - chart export will be unavailable")


# Default color palette based on Mantine theme
DEFAULT_COLORS = [
@@ -395,3 +405,129 @@ class ChartTools:

            "figure": figure,
            "interactions_added": []
        }

    async def chart_export(
        self,
        figure: Dict[str, Any],
        format: str = "png",
        width: Optional[int] = None,
        height: Optional[int] = None,
        scale: float = 2.0,
        output_path: Optional[str] = None
    ) -> Dict[str, Any]:
        """
        Export a Plotly chart to a static image format.

        Args:
            figure: Plotly figure JSON to export
            format: Output format - png, svg, or pdf
            width: Image width in pixels (default: from figure or 1200)
            height: Image height in pixels (default: from figure or 800)
            scale: Resolution scale factor (default: 2 for retina)
            output_path: Optional file path to save the image

        Returns:
            Dict with:
            - image_data: Base64-encoded image (if no output_path)
            - file_path: Path to saved file (if output_path provided)
            - format: Export format used
            - dimensions: {width, height, scale}
            - error: Error message if export failed
        """
        # Validate format
        valid_formats = ['png', 'svg', 'pdf']
        format = format.lower()
        if format not in valid_formats:
            return {
                "error": f"Invalid format '{format}'. Must be one of: {valid_formats}",
                "format": format,
                "image_data": None
            }

        # Check kaleido availability
        if not KALEIDO_AVAILABLE:
            return {
                "error": "kaleido package not installed. Install with: pip install kaleido",
                "format": format,
                "image_data": None,
                "install_hint": "pip install kaleido"
            }

        # Validate figure
        if not figure or 'data' not in figure:
            return {
                "error": "Invalid figure: must contain 'data' key",
                "format": format,
                "image_data": None
            }

        try:
            import plotly.graph_objects as go
            import plotly.io as pio

            # Create Plotly figure object
            fig = go.Figure(figure)

            # Determine dimensions
            layout = figure.get('layout', {})
            export_width = width or layout.get('width') or 1200
            export_height = height or layout.get('height') or 800

            # Export to bytes
            image_bytes = pio.to_image(
                fig,
                format=format,
                width=export_width,
                height=export_height,
                scale=scale
            )

            result = {
                "format": format,
                "dimensions": {
                    "width": export_width,
                    "height": export_height,
                    "scale": scale,
                    "effective_width": int(export_width * scale),
                    "effective_height": int(export_height * scale)
                }
            }

            # Save to file or return base64
            if output_path:
                # Ensure directory exists
                output_dir = os.path.dirname(output_path)
                if output_dir and not os.path.exists(output_dir):
                    os.makedirs(output_dir, exist_ok=True)

                # Add extension if missing
                if not output_path.endswith(f'.{format}'):
                    output_path = f"{output_path}.{format}"

                with open(output_path, 'wb') as f:
                    f.write(image_bytes)

                result["file_path"] = output_path
                result["file_size_bytes"] = len(image_bytes)
            else:
                # Return as base64
                result["image_data"] = base64.b64encode(image_bytes).decode('utf-8')
                result["data_uri"] = f"data:image/{format};base64,{result['image_data']}"

            return result

        except ImportError as e:
            logger.error(f"Chart export failed - missing dependency: {e}")
            return {
                "error": f"Missing dependency for export: {e}",
                "format": format,
                "image_data": None,
                "install_hint": "pip install plotly kaleido"
            }
        except Exception as e:
            logger.error(f"Chart export failed: {e}")
            return {
                "error": str(e),
                "format": format,
                "image_data": None
            }
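The no-file branch wraps the raw image bytes as a base64 data URI, while the file branch appends a missing extension. A standalone sketch of that result packaging (no plotly/kaleido needed):

```python
import base64

def package_image(image_bytes, fmt, output_path=None):
    """Mirror of chart_export's result packaging: file path or data URI."""
    if output_path:
        # Append the extension when it is missing, as chart_export does.
        if not output_path.endswith(f'.{fmt}'):
            output_path = f"{output_path}.{fmt}"
        return {"file_path": output_path, "file_size_bytes": len(image_bytes)}
    encoded = base64.b64encode(image_bytes).decode('utf-8')
    return {"image_data": encoded, "data_uri": f"data:image/{fmt};base64,{encoded}"}

print(package_image(b"\x89PNG", "png")["data_uri"])        # data:image/png;base64,iVBORw==
print(package_image(b"...", "svg", "chart")["file_path"])  # chart.svg
```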
@@ -10,6 +10,46 @@ from uuid import uuid4

logger = logging.getLogger(__name__)


# Standard responsive breakpoints (Mantine/Bootstrap-aligned)
DEFAULT_BREAKPOINTS = {
    "xs": {
        "min_width": "0px",
        "max_width": "575px",
        "cols": 1,
        "spacing": "xs",
        "description": "Extra small devices (phones, portrait)"
    },
    "sm": {
        "min_width": "576px",
        "max_width": "767px",
        "cols": 2,
        "spacing": "sm",
        "description": "Small devices (phones, landscape)"
    },
    "md": {
        "min_width": "768px",
        "max_width": "991px",
        "cols": 6,
        "spacing": "md",
        "description": "Medium devices (tablets)"
    },
    "lg": {
        "min_width": "992px",
        "max_width": "1199px",
        "cols": 12,
        "spacing": "md",
        "description": "Large devices (desktops)"
    },
    "xl": {
        "min_width": "1200px",
        "max_width": None,
        "cols": 12,
        "spacing": "lg",
        "description": "Extra large devices (large desktops)"
    }
}


# Layout templates
TEMPLATES = {
    "dashboard": {
@@ -365,3 +405,149 @@ class LayoutTools:

            }
            for name, config in FILTER_TYPES.items()
        }

    async def layout_set_breakpoints(
        self,
        layout_ref: str,
        breakpoints: Dict[str, Dict[str, Any]],
        mobile_first: bool = True
    ) -> Dict[str, Any]:
        """
        Configure responsive breakpoints for a layout.

        Args:
            layout_ref: Layout name to configure
            breakpoints: Breakpoint configuration dict:
                {
                    "xs": {"cols": 1, "spacing": "xs"},
                    "sm": {"cols": 2, "spacing": "sm"},
                    "md": {"cols": 6, "spacing": "md"},
                    "lg": {"cols": 12, "spacing": "md"},
                    "xl": {"cols": 12, "spacing": "lg"}
                }
            mobile_first: If True, use min-width media queries (default)

        Returns:
            Dict with:
            - breakpoints: Complete breakpoint configuration
            - css_media_queries: Generated CSS media queries
            - mobile_first: Whether the mobile-first approach is used
        """
        # Validate layout exists
        if layout_ref not in self._layouts:
            return {
                "error": f"Layout '{layout_ref}' not found. Create it first with layout_create.",
                "breakpoints": None
            }

        layout = self._layouts[layout_ref]

        # Validate breakpoint names
        valid_breakpoints = ["xs", "sm", "md", "lg", "xl"]
        for bp_name in breakpoints.keys():
            if bp_name not in valid_breakpoints:
                return {
                    "error": f"Invalid breakpoint '{bp_name}'. Must be one of: {valid_breakpoints}",
                    "breakpoints": layout.get("breakpoints")
                }

        # Merge with defaults
        merged_breakpoints = {}
        for bp_name in valid_breakpoints:
            default = DEFAULT_BREAKPOINTS[bp_name].copy()
            if bp_name in breakpoints:
                default.update(breakpoints[bp_name])
            merged_breakpoints[bp_name] = default

        # Validate spacing values
        valid_spacing = ["xs", "sm", "md", "lg", "xl"]
        for bp_name, bp_config in merged_breakpoints.items():
            if "spacing" in bp_config and bp_config["spacing"] not in valid_spacing:
                return {
                    "error": f"Invalid spacing '{bp_config['spacing']}' for breakpoint '{bp_name}'. Must be one of: {valid_spacing}",
                    "breakpoints": layout.get("breakpoints")
                }

        # Validate column counts
        for bp_name, bp_config in merged_breakpoints.items():
            if "cols" in bp_config:
                cols = bp_config["cols"]
                if not isinstance(cols, int) or cols < 1 or cols > 24:
                    return {
                        "error": f"Invalid cols '{cols}' for breakpoint '{bp_name}'. Must be an integer between 1 and 24.",
                        "breakpoints": layout.get("breakpoints")
                    }

        # Generate CSS media queries
        css_queries = self._generate_media_queries(merged_breakpoints, mobile_first)

        # Store in layout
        layout["breakpoints"] = merged_breakpoints
        layout["mobile_first"] = mobile_first
        layout["responsive_css"] = css_queries

        return {
            "layout_ref": layout_ref,
            "breakpoints": merged_breakpoints,
            "mobile_first": mobile_first,
            "css_media_queries": css_queries
        }

    def _generate_media_queries(
        self,
        breakpoints: Dict[str, Dict[str, Any]],
        mobile_first: bool
    ) -> List[str]:
        """Generate CSS media queries for breakpoints."""
        queries = []
        bp_order = ["xs", "sm", "md", "lg", "xl"]

        if mobile_first:
            # Use min-width queries (mobile-first)
            for bp_name in bp_order[1:]:  # Skip xs (base styles)
                bp = breakpoints[bp_name]
                min_width = bp.get("min_width", DEFAULT_BREAKPOINTS[bp_name]["min_width"])
                if min_width and min_width != "0px":
                    queries.append(f"@media (min-width: {min_width}) {{ /* {bp_name} styles */ }}")
        else:
            # Use max-width queries (desktop-first)
            for bp_name in reversed(bp_order[:-1]):  # Skip xl (base styles)
                bp = breakpoints[bp_name]
                max_width = bp.get("max_width", DEFAULT_BREAKPOINTS[bp_name]["max_width"])
                if max_width:
                    queries.append(f"@media (max-width: {max_width}) {{ /* {bp_name} styles */ }}")

        return queries
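With the default breakpoint widths, the mobile-first branch yields one min-width query per breakpoint above the xs base. A standalone sketch of that generation:

```python
# Default min-widths for the breakpoints above the xs base.
WIDTHS = {"sm": "576px", "md": "768px", "lg": "992px", "xl": "1200px"}

def media_queries(widths):
    """Mobile-first media queries, one per breakpoint above xs."""
    return [
        f"@media (min-width: {w}) {{ /* {name} styles */ }}"
        for name, w in widths.items()
    ]

for q in media_queries(WIDTHS):
    print(q)
# First line: @media (min-width: 576px) { /* sm styles */ }
```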

    async def layout_get_breakpoints(self, layout_ref: str) -> Dict[str, Any]:
        """
        Get the breakpoint configuration for a layout.

        Args:
            layout_ref: Layout name

        Returns:
            Dict with breakpoint configuration
        """
        if layout_ref not in self._layouts:
            return {
                "error": f"Layout '{layout_ref}' not found.",
                "breakpoints": None
            }

        layout = self._layouts[layout_ref]

        return {
            "layout_ref": layout_ref,
            "breakpoints": layout.get("breakpoints", DEFAULT_BREAKPOINTS.copy()),
            "mobile_first": layout.get("mobile_first", True),
            "css_media_queries": layout.get("responsive_css", [])
        }

    def get_default_breakpoints(self) -> Dict[str, Any]:
        """Get the default breakpoint configuration."""
        return {
            "breakpoints": DEFAULT_BREAKPOINTS.copy(),
            "description": "Standard responsive breakpoints aligned with Mantine/Bootstrap",
            "mobile_first": True
        }
@@ -17,6 +17,7 @@ from .chart_tools import ChartTools

from .layout_tools import LayoutTools
from .theme_tools import ThemeTools
from .page_tools import PageTools
from .accessibility_tools import AccessibilityTools

# Suppress noisy MCP validation warnings on stderr
logging.basicConfig(level=logging.INFO)

@@ -36,6 +37,7 @@ class VizPlatformMCPServer:

        self.layout_tools = LayoutTools()
        self.theme_tools = ThemeTools()
        self.page_tools = PageTools()
        self.accessibility_tools = AccessibilityTools(theme_store=self.theme_tools.store)

    async def initialize(self):
        """Initialize server and load configuration."""
@@ -198,6 +200,46 @@ class VizPlatformMCPServer:

            }
        ))

        # Chart export tool (Issue #247)
        tools.append(Tool(
            name="chart_export",
            description=(
                "Export a Plotly chart to static image format (PNG, SVG, PDF). "
                "Requires kaleido package. Returns base64 image data or saves to file."
            ),
            inputSchema={
                "type": "object",
                "properties": {
                    "figure": {
                        "type": "object",
                        "description": "Plotly figure JSON to export"
                    },
                    "format": {
                        "type": "string",
                        "enum": ["png", "svg", "pdf"],
                        "description": "Output format (default: png)"
                    },
                    "width": {
                        "type": "integer",
                        "description": "Image width in pixels (default: 1200)"
                    },
                    "height": {
                        "type": "integer",
                        "description": "Image height in pixels (default: 800)"
                    },
                    "scale": {
                        "type": "number",
                        "description": "Resolution scale factor (default: 2 for retina)"
                    },
                    "output_path": {
                        "type": "string",
                        "description": "Optional file path to save image"
                    }
                },
                "required": ["figure"]
            }
        ))

        # Layout tools (Issue #174)
        tools.append(Tool(
            name="layout_create",
@@ -280,6 +322,36 @@ class VizPlatformMCPServer:
             }
         ))

+        # Responsive breakpoints tool (Issue #249)
+        tools.append(Tool(
+            name="layout_set_breakpoints",
+            description=(
+                "Configure responsive breakpoints for a layout. "
+                "Supports xs, sm, md, lg, xl breakpoints with mobile-first approach. "
+                "Each breakpoint can define cols, spacing, and other grid properties."
+            ),
+            inputSchema={
+                "type": "object",
+                "properties": {
+                    "layout_ref": {
+                        "type": "string",
+                        "description": "Layout name to configure"
+                    },
+                    "breakpoints": {
+                        "type": "object",
+                        "description": (
+                            "Breakpoint config: {xs: {cols, spacing}, sm: {...}, md: {...}, lg: {...}, xl: {...}}"
+                        )
+                    },
+                    "mobile_first": {
+                        "type": "boolean",
+                        "description": "Use mobile-first (min-width) media queries (default: true)"
+                    }
+                },
+                "required": ["layout_ref", "breakpoints"]
+            }
+        ))
+
         # Theme tools (Issue #175)
         tools.append(Tool(
             name="theme_create",
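As a sanity check on the `layout_set_breakpoints` schema above, here is a hypothetical, self-contained sketch of how a mobile-first breakpoint config could be rendered as CSS media queries. The pixel widths and the `.grid` rule body are illustrative assumptions, not taken from the plugin's actual implementation:

```python
# Assumed min-widths per breakpoint name (illustrative; the real values
# used by layout_set_breakpoints are not shown in this diff).
BREAKPOINT_MIN_WIDTHS = {"xs": 0, "sm": 576, "md": 768, "lg": 992, "xl": 1200}


def to_media_queries(breakpoints: dict) -> str:
    """Render {name: {cols: ...}} as mobile-first (min-width) media queries."""
    rules = []
    for name, cfg in sorted(breakpoints.items(),
                            key=lambda kv: BREAKPOINT_MIN_WIDTHS[kv[0]]):
        cols = cfg.get("cols", 12)
        body = f".grid {{ grid-template-columns: repeat({cols}, 1fr); }}"
        min_w = BREAKPOINT_MIN_WIDTHS[name]
        if min_w == 0:
            # xs is the unqualified base rule in a mobile-first scheme
            rules.append(body)
        else:
            rules.append(f"@media (min-width: {min_w}px) {{ {body} }}")
    return "\n".join(rules)
```

Mobile-first here means smaller breakpoints are emitted first as the base, and larger ones override via `min-width` queries.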
@@ -451,6 +523,77 @@ class VizPlatformMCPServer:
             }
         ))

+        # Accessibility tools (Issue #248)
+        tools.append(Tool(
+            name="accessibility_validate_colors",
+            description=(
+                "Validate colors for color blind accessibility. "
+                "Checks contrast ratios for deuteranopia, protanopia, tritanopia. "
+                "Returns issues, simulations, and accessible palette suggestions."
+            ),
+            inputSchema={
+                "type": "object",
+                "properties": {
+                    "colors": {
+                        "type": "array",
+                        "items": {"type": "string"},
+                        "description": "List of hex colors to validate (e.g., ['#228be6', '#40c057'])"
+                    },
+                    "check_types": {
+                        "type": "array",
+                        "items": {"type": "string"},
+                        "description": "Color blindness types to check: deuteranopia, protanopia, tritanopia (default: all)"
+                    },
+                    "min_contrast_ratio": {
+                        "type": "number",
+                        "description": "Minimum WCAG contrast ratio (default: 4.5 for AA)"
+                    }
+                },
+                "required": ["colors"]
+            }
+        ))
+
+        tools.append(Tool(
+            name="accessibility_validate_theme",
+            description=(
+                "Validate a theme's colors for accessibility. "
+                "Extracts all colors from theme tokens and checks for color blind safety."
+            ),
+            inputSchema={
+                "type": "object",
+                "properties": {
+                    "theme_name": {
+                        "type": "string",
+                        "description": "Theme name to validate"
+                    }
+                },
+                "required": ["theme_name"]
+            }
+        ))
+
+        tools.append(Tool(
+            name="accessibility_suggest_alternative",
+            description=(
+                "Suggest accessible alternative colors for a given color. "
+                "Provides alternatives optimized for specific color blindness types."
+            ),
+            inputSchema={
+                "type": "object",
+                "properties": {
+                    "color": {
+                        "type": "string",
+                        "description": "Hex color to find alternatives for"
+                    },
+                    "deficiency_type": {
+                        "type": "string",
+                        "enum": ["deuteranopia", "protanopia", "tritanopia"],
+                        "description": "Color blindness type to optimize for"
+                    }
+                },
+                "required": ["color", "deficiency_type"]
+            }
+        ))
+
         return tools

         @self.server.call_tool()
@@ -524,6 +667,26 @@ class VizPlatformMCPServer:
                     text=json.dumps(result, indent=2)
                 )]

+            elif name == "chart_export":
+                figure = arguments.get('figure')
+                if not figure:
+                    return [TextContent(
+                        type="text",
+                        text=json.dumps({"error": "figure is required"}, indent=2)
+                    )]
+                result = await self.chart_tools.chart_export(
+                    figure=figure,
+                    format=arguments.get('format', 'png'),
+                    width=arguments.get('width'),
+                    height=arguments.get('height'),
+                    scale=arguments.get('scale', 2.0),
+                    output_path=arguments.get('output_path')
+                )
+                return [TextContent(
+                    type="text",
+                    text=json.dumps(result, indent=2)
+                )]
+
             # Layout tools
             elif name == "layout_create":
                 layout_name = arguments.get('name')
@@ -568,6 +731,23 @@ class VizPlatformMCPServer:
                     text=json.dumps(result, indent=2)
                 )]

+            elif name == "layout_set_breakpoints":
+                layout_ref = arguments.get('layout_ref')
+                breakpoints = arguments.get('breakpoints', {})
+                mobile_first = arguments.get('mobile_first', True)
+                if not layout_ref:
+                    return [TextContent(
+                        type="text",
+                        text=json.dumps({"error": "layout_ref is required"}, indent=2)
+                    )]
+                result = await self.layout_tools.layout_set_breakpoints(
+                    layout_ref, breakpoints, mobile_first
+                )
+                return [TextContent(
+                    type="text",
+                    text=json.dumps(result, indent=2)
+                )]
+
             # Theme tools
             elif name == "theme_create":
                 theme_name = arguments.get('name')
@@ -669,6 +849,53 @@ class VizPlatformMCPServer:
                     text=json.dumps(result, indent=2)
                 )]

+            # Accessibility tools
+            elif name == "accessibility_validate_colors":
+                colors = arguments.get('colors')
+                if not colors:
+                    return [TextContent(
+                        type="text",
+                        text=json.dumps({"error": "colors list is required"}, indent=2)
+                    )]
+                result = await self.accessibility_tools.accessibility_validate_colors(
+                    colors=colors,
+                    check_types=arguments.get('check_types'),
+                    min_contrast_ratio=arguments.get('min_contrast_ratio', 4.5)
+                )
+                return [TextContent(
+                    type="text",
+                    text=json.dumps(result, indent=2)
+                )]
+
+            elif name == "accessibility_validate_theme":
+                theme_name = arguments.get('theme_name')
+                if not theme_name:
+                    return [TextContent(
+                        type="text",
+                        text=json.dumps({"error": "theme_name is required"}, indent=2)
+                    )]
+                result = await self.accessibility_tools.accessibility_validate_theme(theme_name)
+                return [TextContent(
+                    type="text",
+                    text=json.dumps(result, indent=2)
+                )]
+
+            elif name == "accessibility_suggest_alternative":
+                color = arguments.get('color')
+                deficiency_type = arguments.get('deficiency_type')
+                if not color or not deficiency_type:
+                    return [TextContent(
+                        type="text",
+                        text=json.dumps({"error": "color and deficiency_type are required"}, indent=2)
+                    )]
+                result = await self.accessibility_tools.accessibility_suggest_alternative(
+                    color, deficiency_type
+                )
+                return [TextContent(
+                    type="text",
+                    text=json.dumps(result, indent=2)
+                )]
+
             raise ValueError(f"Unknown tool: {name}")

         except Exception as e:
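The handlers above repeat one shape: validate required arguments, and on failure return a single text content item whose body is a JSON-encoded `{"error": ...}` payload. A hypothetical, dependency-free distillation of that pattern (a plain dict stands in for `TextContent`; names here are illustrative, not the server's actual helpers):

```python
import json


def text_result(payload: dict) -> list:
    """One text content item carrying a JSON-encoded payload (TextContent stand-in)."""
    return [{"type": "text", "text": json.dumps(payload, indent=2)}]


def validate_required(arguments: dict, required: list):
    """Return an error result in the handlers' format if fields are missing, else None."""
    missing = [k for k in required if not arguments.get(k)]
    if not missing:
        return None
    suffix = " is required" if len(missing) == 1 else " are required"
    return text_result({"error": " and ".join(missing) + suffix})
```

For example, the `accessibility_suggest_alternative` guard corresponds to `validate_required(arguments, ["color", "deficiency_type"])`.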
@@ -5,6 +5,7 @@ mcp>=0.9.0
 plotly>=5.18.0
 dash>=2.14.0
 dash-mantine-components>=2.0.0
+kaleido>=0.2.1  # For chart export (PNG, SVG, PDF)

 # Utilities
 python-dotenv>=1.0.0
mcp-servers/viz-platform/run.sh (new executable file, 21 lines)
@@ -0,0 +1,21 @@
+#!/bin/bash
+# Capture original working directory before any cd operations
+# This should be the user's project directory when launched by Claude Code
+export CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"
+
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+CACHE_VENV="$HOME/.cache/claude-mcp-venvs/leo-claude-mktplace/viz-platform/.venv"
+LOCAL_VENV="$SCRIPT_DIR/.venv"
+
+if [[ -f "$CACHE_VENV/bin/python" ]]; then
+    PYTHON="$CACHE_VENV/bin/python"
+elif [[ -f "$LOCAL_VENV/bin/python" ]]; then
+    PYTHON="$LOCAL_VENV/bin/python"
+else
+    echo "ERROR: No venv found. Run: ./scripts/setup-venvs.sh" >&2
+    exit 1
+fi
+
+cd "$SCRIPT_DIR"
+export PYTHONPATH="$SCRIPT_DIR"
+exec "$PYTHON" -m mcp_server.server "$@"
mcp-servers/viz-platform/tests/test_accessibility_tools.py (new file, 195 lines)
@@ -0,0 +1,195 @@
+"""
+Tests for accessibility validation tools.
+"""
+import pytest
+from mcp_server.accessibility_tools import AccessibilityTools
+
+
+@pytest.fixture
+def tools():
+    """Create AccessibilityTools instance."""
+    return AccessibilityTools()
+
+
+class TestHexToRgb:
+    """Tests for _hex_to_rgb method."""
+
+    def test_hex_to_rgb_6_digit(self, tools):
+        """Test 6-digit hex conversion."""
+        assert tools._hex_to_rgb("#FF0000") == (255, 0, 0)
+        assert tools._hex_to_rgb("#00FF00") == (0, 255, 0)
+        assert tools._hex_to_rgb("#0000FF") == (0, 0, 255)
+
+    def test_hex_to_rgb_3_digit(self, tools):
+        """Test 3-digit hex conversion."""
+        assert tools._hex_to_rgb("#F00") == (255, 0, 0)
+        assert tools._hex_to_rgb("#0F0") == (0, 255, 0)
+        assert tools._hex_to_rgb("#00F") == (0, 0, 255)
+
+    def test_hex_to_rgb_lowercase(self, tools):
+        """Test lowercase hex conversion."""
+        assert tools._hex_to_rgb("#ff0000") == (255, 0, 0)
+
+
+class TestContrastRatio:
+    """Tests for _get_contrast_ratio method."""
+
+    def test_black_white_contrast(self, tools):
+        """Test black on white has maximum contrast."""
+        ratio = tools._get_contrast_ratio("#000000", "#FFFFFF")
+        assert ratio == pytest.approx(21.0, rel=0.01)
+
+    def test_same_color_contrast(self, tools):
+        """Test same color has minimum contrast."""
+        ratio = tools._get_contrast_ratio("#FF0000", "#FF0000")
+        assert ratio == pytest.approx(1.0, rel=0.01)
+
+    def test_symmetric_contrast(self, tools):
+        """Test contrast ratio is symmetric."""
+        ratio1 = tools._get_contrast_ratio("#228be6", "#FFFFFF")
+        ratio2 = tools._get_contrast_ratio("#FFFFFF", "#228be6")
+        assert ratio1 == pytest.approx(ratio2, rel=0.01)
+
+
+class TestColorBlindnessSimulation:
+    """Tests for _simulate_color_blindness method."""
+
+    def test_deuteranopia_simulation(self, tools):
+        """Test deuteranopia (green-blind) simulation."""
+        # Red and green should appear more similar
+        original_red = "#FF0000"
+        original_green = "#00FF00"
+
+        simulated_red = tools._simulate_color_blindness(original_red, "deuteranopia")
+        simulated_green = tools._simulate_color_blindness(original_green, "deuteranopia")
+
+        # They should be different from originals
+        assert simulated_red != original_red or simulated_green != original_green
+
+    def test_protanopia_simulation(self, tools):
+        """Test protanopia (red-blind) simulation."""
+        simulated = tools._simulate_color_blindness("#FF0000", "protanopia")
+        # Should return a modified color
+        assert simulated.startswith("#")
+        assert len(simulated) == 7
+
+    def test_tritanopia_simulation(self, tools):
+        """Test tritanopia (blue-blind) simulation."""
+        simulated = tools._simulate_color_blindness("#0000FF", "tritanopia")
+        # Should return a modified color
+        assert simulated.startswith("#")
+        assert len(simulated) == 7
+
+    def test_unknown_deficiency_returns_original(self, tools):
+        """Test unknown deficiency type returns original color."""
+        color = "#FF0000"
+        simulated = tools._simulate_color_blindness(color, "unknown")
+        assert simulated == color
+
+
+class TestAccessibilityValidateColors:
+    """Tests for accessibility_validate_colors method."""
+
+    @pytest.mark.asyncio
+    async def test_validate_single_color(self, tools):
+        """Test validating a single color."""
+        result = await tools.accessibility_validate_colors(["#228be6"])
+        assert "colors_checked" in result
+        assert "overall_score" in result
+        assert "issues" in result
+        assert "safe_palettes" in result
+
+    @pytest.mark.asyncio
+    async def test_validate_problematic_colors(self, tools):
+        """Test similar colors trigger warnings."""
+        # Use colors that are very close in hue, which should be harder to distinguish
+        result = await tools.accessibility_validate_colors(["#FF5555", "#FF6666"])
+        # Similar colors should trigger distinguishability warnings
+        assert "issues" in result
+        # The validation should at least run without errors
+        assert "colors_checked" in result
+        assert len(result["colors_checked"]) == 2
+
+    @pytest.mark.asyncio
+    async def test_validate_contrast_issue(self, tools):
+        """Test low contrast colors trigger contrast warnings."""
+        # Yellow on white has poor contrast
+        result = await tools.accessibility_validate_colors(["#FFFF00"])
+        # Check for contrast issues (yellow may have issues with both black and white)
+        assert "issues" in result
+
+    @pytest.mark.asyncio
+    async def test_validate_with_specific_types(self, tools):
+        """Test validating for specific color blindness types."""
+        result = await tools.accessibility_validate_colors(
+            ["#FF0000", "#00FF00"],
+            check_types=["deuteranopia"]
+        )
+        assert "simulations" in result
+        assert "deuteranopia" in result["simulations"]
+        assert "protanopia" not in result["simulations"]
+
+    @pytest.mark.asyncio
+    async def test_overall_score(self, tools):
+        """Test overall score is calculated."""
+        result = await tools.accessibility_validate_colors(["#228be6", "#ffffff"])
+        assert result["overall_score"] in ["A", "B", "C", "D"]
+
+    @pytest.mark.asyncio
+    async def test_recommendations_generated(self, tools):
+        """Test recommendations are generated for issues."""
+        result = await tools.accessibility_validate_colors(["#FF0000", "#00FF00"])
+        assert "recommendations" in result
+        assert len(result["recommendations"]) > 0
+
+
+class TestAccessibilitySuggestAlternative:
+    """Tests for accessibility_suggest_alternative method."""
+
+    @pytest.mark.asyncio
+    async def test_suggest_alternative_deuteranopia(self, tools):
+        """Test suggesting alternatives for deuteranopia."""
+        result = await tools.accessibility_suggest_alternative("#FF0000", "deuteranopia")
+        assert "original_color" in result
+        assert result["deficiency_type"] == "deuteranopia"
+        assert "suggestions" in result
+        assert len(result["suggestions"]) > 0
+
+    @pytest.mark.asyncio
+    async def test_suggest_alternative_tritanopia(self, tools):
+        """Test suggesting alternatives for tritanopia."""
+        result = await tools.accessibility_suggest_alternative("#0000FF", "tritanopia")
+        assert "suggestions" in result
+        assert len(result["suggestions"]) > 0
+
+    @pytest.mark.asyncio
+    async def test_suggestions_include_safe_palettes(self, tools):
+        """Test suggestions include colors from safe palettes."""
+        result = await tools.accessibility_suggest_alternative("#FF0000", "deuteranopia")
+        palette_suggestions = [
+            s for s in result["suggestions"]
+            if "palette" in s
+        ]
+        assert len(palette_suggestions) > 0
+
+
+class TestSafePalettes:
+    """Tests for safe palette constants."""
+
+    def test_safe_palettes_exist(self, tools):
+        """Test that safe palettes are defined."""
+        from mcp_server.accessibility_tools import SAFE_PALETTES
+        assert "categorical" in SAFE_PALETTES
+        assert "ibm" in SAFE_PALETTES
+        assert "okabe_ito" in SAFE_PALETTES
+        assert "tableau_colorblind" in SAFE_PALETTES
+
+    def test_safe_palettes_have_colors(self, tools):
+        """Test that safe palettes have color lists."""
+        from mcp_server.accessibility_tools import SAFE_PALETTES
+        for palette_name, palette in SAFE_PALETTES.items():
+            assert "colors" in palette
+            assert len(palette["colors"]) > 0
+            # All colors should be valid hex
+            for color in palette["colors"]:
+                assert color.startswith("#")
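The `_hex_to_rgb` and `_get_contrast_ratio` implementations are not shown in this diff, but the tests above pin down their behavior, and the contrast figures (21:1 for black on white, 1:1 for identical colors, symmetry) match the WCAG 2.x definition. A minimal sketch consistent with those tests, using the standard WCAG relative-luminance formula (function names here are standalone stand-ins, not the class's actual methods):

```python
def hex_to_rgb(hex_color: str) -> tuple:
    """Convert '#RGB' or '#RRGGBB' (any case) to an (r, g, b) tuple of 0-255 ints."""
    h = hex_color.lstrip("#")
    if len(h) == 3:
        h = "".join(c * 2 for c in h)  # expand '#F00' -> 'FF0000'
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))


def relative_luminance(rgb: tuple) -> float:
    """WCAG 2.x relative luminance of an sRGB color (0.0 for black, 1.0 for white)."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b


def contrast_ratio(color1: str, color2: str) -> float:
    """WCAG contrast ratio, from 1:1 (identical colors) to 21:1 (black on white)."""
    l1 = relative_luminance(hex_to_rgb(color1))
    l2 = relative_luminance(hex_to_rgb(color2))
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)
```

The `+ 0.05` terms make the ratio symmetric and bounded, which is why `contrast_ratio("#000000", "#FFFFFF")` evaluates to exactly 21.0 and the AA threshold of 4.5 in the tool schema is a meaningful cutoff.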
plugins/clarity-assist/.claude-plugin/metadata.json (new file, 3 lines)
@@ -0,0 +1,3 @@
+{
+  "domain": "core"
+}
@@ -1,6 +1,6 @@
 {
   "name": "clarity-assist",
-  "version": "1.0.0",
+  "version": "9.0.1",
   "description": "Prompt optimization and requirement clarification with ND-friendly accommodations",
   "author": {
     "name": "Leo Miranda",
@@ -16,5 +16,7 @@
     "requirements",
     "methodology"
   ],
-  "commands": ["./commands/"]
+  "commands": [
+    "./commands/"
+  ]
 }
@@ -1,99 +0,0 @@
-# clarity-assist
-
-Prompt optimization and requirement clarification plugin with neurodivergent-friendly accommodations.
-
-## Overview
-
-clarity-assist helps transform vague, incomplete, or ambiguous requests into clear, actionable specifications. It uses a structured 4-D methodology (Deconstruct, Diagnose, Develop, Deliver) and ND-friendly communication patterns.
-
-## Commands
-
-| Command | Description |
-|---------|-------------|
-| `/clarify` | Full 4-D prompt optimization for complex requests |
-| `/quick-clarify` | Rapid single-pass clarification for simple requests |
-
-## Features
-
-### 4-D Methodology
-
-1. **Deconstruct** - Break down the request into components
-2. **Diagnose** - Analyze gaps and potential issues
-3. **Develop** - Gather clarifications through structured questions
-4. **Deliver** - Produce refined specification
-
-### ND-Friendly Design
-
-- **Option-based questioning** - Always provide 2-4 concrete choices
-- **Chunked questions** - Ask 1-2 questions at a time
-- **Context for questions** - Explain why you're asking
-- **Conflict detection** - Check previous answers before new questions
-- **Progress acknowledgment** - Summarize frequently
-
-### Escalation Protocol
-
-When requests are complex or users seem overwhelmed:
-- Acknowledge complexity
-- Offer to focus on one aspect at a time
-- Build incrementally
-
-## Installation
-
-Add to your project's `.claude/settings.json`:
-
-```json
-{
-  "plugins": ["clarity-assist"]
-}
-```
-
-## Usage
-
-### Full Clarification
-
-```
-/clarify
-
-[Your vague or complex request here]
-```
-
-### Quick Clarification
-
-```
-/quick-clarify
-
-[Your mostly-clear request here]
-```
-
-## Configuration
-
-No configuration required. The plugin uses sensible defaults.
-
-## Output Format
-
-After clarification, you receive a structured specification:
-
-```markdown
-## Clarified Request
-
-### Summary
-[Description of what will be built]
-
-### Scope
-**In Scope:** [items]
-**Out of Scope:** [items]
-
-### Requirements
-[Prioritized table]
-
-### Assumptions
-[List of assumptions]
-```
-
-## Integration
-
-For CLAUDE.md integration instructions, see `claude-md-integration.md`.
-
-## License
-
-MIT
@@ -1,5 +1,23 @@
+---
+name: clarity-coach
+description: Patient, structured coach helping users articulate requirements clearly. Uses neurodivergent-friendly communication patterns.
+model: sonnet
+permissionMode: default
+disallowedTools: Write, Edit, MultiEdit
+---
+
 # Clarity Coach Agent

+## Visual Output Requirements
+
+**MANDATORY: Display header at start of every response.**
+
+```
+┌──────────────────────────────────────────────────────────────────┐
+│ 💬 CLARITY-ASSIST · Clarity Coach │
+└──────────────────────────────────────────────────────────────────┘
+```
+
 ## Role

 You are a patient, structured coach specializing in helping users articulate their requirements clearly. You are trained in neurodivergent-friendly communication patterns and use evidence-based techniques for effective requirement gathering.
@@ -101,7 +119,7 @@ Track gathered information in a mental model:

 ### After Clarification

-Produce a clear specification (see /clarify command for format).
+Produce a clear specification (see /clarity clarify command for format).

 ## Example Session
@@ -18,8 +18,8 @@ This project uses the clarity-assist plugin for requirement gathering.

 | Command | Use Case |
 |---------|----------|
-| `/clarify` | Full 4-D methodology for complex requests |
+| `/clarity clarify` | Full 4-D methodology for complex requests |
-| `/quick-clarify` | Rapid mode for simple disambiguation |
+| `/clarity quick-clarify` | Rapid mode for simple disambiguation |

 ### Communication Style
@@ -1,137 +0,0 @@
-# /clarify - Full Prompt Optimization
-
-## Purpose
-
-Transform vague, incomplete, or ambiguous requests into clear, actionable specifications using the 4-D methodology with neurodivergent-friendly accommodations.
-
-## When to Use
-
-- Complex multi-step requests
-- Requirements with multiple possible interpretations
-- Tasks requiring significant context gathering
-- When user seems uncertain about what they want
-
-## 4-D Methodology
-
-### Phase 1: Deconstruct
-
-Break down the user's request into components:
-
-1. **Extract explicit requirements** - What was directly stated
-2. **Identify implicit assumptions** - What seems assumed but not stated
-3. **Note ambiguities** - Points that could go multiple ways
-4. **List dependencies** - External factors that might affect implementation
-
-### Phase 2: Diagnose
-
-Analyze gaps and potential issues:
-
-1. **Missing information** - What do we need to know?
-2. **Conflicting requirements** - Do any stated goals contradict?
-3. **Scope boundaries** - What's in/out of scope?
-4. **Technical constraints** - Platform, language, architecture limits
-
-### Phase 3: Develop
-
-Gather clarifications through structured questioning:
-
-**ND-Friendly Question Rules:**
-- Present 2-4 concrete options (never open-ended alone)
-- Include "Other" for custom responses
-- Ask 1-2 questions at a time maximum
-- Provide brief context for why you're asking
-- Check for conflicts with previous answers
-
-**Example Format:**
-```
-To help me understand the scope better:
-
-**How should errors be handled?**
-1. Silent logging (user sees nothing)
-2. Toast notifications (brief, dismissible)
-3. Modal dialogs (requires user action)
-4. Other
-
-[Context: This affects both UX and how much error-handling code we need]
-```
-
-### Phase 4: Deliver
-
-Produce the refined specification:
-
-```markdown
-## Clarified Request
-
-### Summary
-[1-2 sentence description of what will be built]
-
-### Scope
-**In Scope:**
-- [Item 1]
-- [Item 2]
-
-**Out of Scope:**
-- [Item 1]
-
-### Requirements
-
-| # | Requirement | Priority | Notes |
-|---|-------------|----------|-------|
-| 1 | ... | Must | ... |
-| 2 | ... | Should | ... |
-
-### Assumptions
-- [Assumption made based on conversation]
-
-### Open Questions
-- [Any remaining ambiguities, if any]
-```
-
-## Conflict Detection
-
-Before each question, review previous answers for conflicts:
-
-```
-[Internal check]
-Previous: User said "keep it simple"
-Current question about: Adding configuration options
-Potential conflict: More options = more complexity
-→ Acknowledge: "Earlier you mentioned keeping it simple. With that in mind..."
-```
-
-## Escalation Protocol
-
-If the request is particularly messy or the user seems overwhelmed:
-
-1. Acknowledge the complexity
-2. Offer to start with just ONE aspect
-3. Build incrementally
-
-```
-"This touches a lot of areas. Rather than tackle everything at once,
-let's start with [most critical piece]. Once that's clear, we can
-add the other parts. Sound good?"
-```
-
-## Example Interaction
-
-**User:** Make the app faster
-
-**Clarify Response:**
-
-```
-Let me help clarify what kind of performance improvements you're looking for.
-
-**What type of slowness are you experiencing?**
-1. Initial page load takes too long
-2. Interactions feel sluggish (clicking, typing)
-3. API calls are slow
-4. All of the above
-5. Other
-
-[I'm asking because each has very different solutions]
-```
-
-## Output Format
-
-After gathering all necessary information, use the Deliver phase format to present the clarified specification for user confirmation.
plugins/clarity-assist/commands/clarity-clarify.md (new file, 68 lines)
@@ -0,0 +1,68 @@
+---
+name: clarity clarify
+---
+
+# /clarity clarify - Full Prompt Optimization
+
+## Visual Output
+
+```
++----------------------------------------------------------------------+
+| CLARITY-ASSIST - Prompt Optimization |
++----------------------------------------------------------------------+
+```
+
+## Purpose
+
+Transform vague, incomplete, or ambiguous requests into clear, actionable specifications using the 4-D methodology with neurodivergent-friendly accommodations.
+
+## When to Use
+
+- Complex multi-step requests
+- Requirements with multiple possible interpretations
+- Tasks requiring significant context gathering
+- When user seems uncertain about what they want
+
+## Skills to Load
+
+Load these skills before proceeding:
+
+- `skills/4d-methodology.md` - Core 4-phase process
+- `skills/nd-accommodations.md` - ND-friendly question patterns
+- `skills/clarification-techniques.md` - Anti-patterns and templates
+- `skills/escalation-patterns.md` - When to adjust approach
+
+## Workflow
+
+1. **Deconstruct** - Break down request into components
+2. **Diagnose** - Identify gaps and conflicts
+3. **Develop** - Gather clarifications via structured questions
+4. **Deliver** - Present refined specification
+5. **Offer RFC Creation** - For feature work, offer to save as RFC
+
+## Output Format
+
+Use the Deliver phase template from `skills/4d-methodology.md` to present the clarified specification for user confirmation.
+
+## RFC Creation Offer (Step 5)
+
+After presenting the clarified specification, if the request appears to be a feature or enhancement:
+
+```
+---
+
+Would you like to save this as an RFC for formal tracking?
+
+An RFC (Request for Comments) provides:
+- Structured documentation of the proposal
+- Review workflow before implementation
+- Integration with sprint planning
+
+[1] Yes, create RFC from this specification
+[2] No, proceed with implementation directly
+```
+
+If user selects [1]:
+- Pass clarified specification to `/rfc-create`
+- The Summary, Motivation, and Design sections will be populated from the clarified spec
+- User can then refine the RFC and submit for review
plugins/clarity-assist/commands/clarity-quick-clarify.md (new file, 49 lines)
@@ -0,0 +1,49 @@
---
name: clarity quick-clarify
---

# /clarity quick-clarify - Rapid Clarification Mode

## Visual Output

```
+----------------------------------------------------------------------+
| CLARITY-ASSIST - Quick Clarify |
+----------------------------------------------------------------------+
```

## Purpose

Single-pass clarification for requests that are mostly clear but need minor disambiguation.

## When to Use

- Request is fairly clear, just one or two ambiguities
- User is in a hurry
- Follow-up to an already-clarified request
- Simple feature additions or bug fixes

## Skills to Load

- `skills/nd-accommodations.md` - ND-friendly question patterns
- `skills/clarification-techniques.md` - Echo and micro-summary techniques
- `skills/escalation-patterns.md` - When to escalate to full `/clarity clarify`

## Workflow

1. **Echo Understanding** - Restate in a single sentence
2. **Quick Disambiguation** - Ask ONE multiple-choice question if needed
3. **Proceed or Confirm** - Start work or offer micro-summary

## Output Format

No formal specification document needed. Proceed after brief confirmation, documenting assumptions inline with the work.

## Escalation

If complexity emerges, offer to switch to full `/clarity clarify`:

```
"This is more involved than it first appeared. Want me to switch
to a more thorough clarification process?"
```
plugins/clarity-assist/commands/clarity.md (new file, 31 lines)
@@ -0,0 +1,31 @@
---
name: clarity
description: Prompt optimization and requirement clarification — type /clarity <action> for commands
---

# /clarity

Prompt optimization and requirement clarification with ND-friendly accommodations.

When invoked without a sub-command, display available actions and ask which to run.

## Available Commands

| Action | Command to Invoke | Description |
|--------|-------------------|-------------|
| `clarify` | `/clarity-assist:clarity-clarify` | Full 4-D methodology for complex requests |
| `quick-clarify` | `/clarity-assist:clarity-quick-clarify` | Rapid mode for simple disambiguation |

## Routing

If `$ARGUMENTS` is provided (e.g., user typed `/clarity clarify`):

1. Match the first word of `$ARGUMENTS` against the **Action** column above
2. **Invoke the corresponding command** from the "Command to Invoke" column using the Skill tool
3. Pass any remaining arguments to the invoked command

If no arguments provided:

1. Display the Available Commands table
2. Ask: "Which action would you like to run?"
3. When the user responds, invoke the matching command using the Skill tool

**Note:** Commands can also be invoked directly using their plugin-prefixed names (e.g., `/clarity-assist:clarity-clarify`)
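The routing rule amounts to a first-word dispatch over `$ARGUMENTS`. A minimal shell sketch of that rule (hypothetical and illustrative only; in the plugin the dispatch is performed by Claude via the Skill tool, not by a script):

```shell
#!/bin/bash
# Hypothetical sketch: match the first word of $ARGUMENTS against the Action
# column, then pass the remaining arguments through to the matched command.
ARGUMENTS="clarify make the app faster"

ACTION="${ARGUMENTS%% *}"   # first word, e.g. "clarify"
REST="${ARGUMENTS#* }"      # everything after the first word

case "$ACTION" in
  clarify)       echo "/clarity-assist:clarity-clarify $REST" ;;
  quick-clarify) echo "/clarity-assist:clarity-quick-clarify $REST" ;;
  *)             echo "Display the Available Commands table and ask" ;;
esac
```

With the example input, the sketch prints `/clarity-assist:clarity-clarify make the app faster`, mirroring steps 1-3 above.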
@@ -1,84 +0,0 @@

# /quick-clarify - Rapid Clarification Mode

## Purpose

Single-pass clarification for requests that are mostly clear but need minor disambiguation.

## When to Use

- Request is fairly clear, just one or two ambiguities
- User is in a hurry
- Follow-up to an already-clarified request
- Simple feature additions or bug fixes

## Process

### Step 1: Echo Understanding

Restate what you understood in a single sentence:

```
"I understand you want [X] that does [Y]."
```

### Step 2: Quick Disambiguation

If needed, ask ONE multiple-choice question:

```
**Just to confirm:** Should this [A] or [B]?
1. [Option A with brief description]
2. [Option B with brief description]
3. Both
4. Neither / Other approach
```

### Step 3: Proceed or Confirm

If confident (>90%), proceed with a brief confirmation:

```
"Got it. I'll implement [X with chosen option]. Starting now."
```

If less confident, offer a micro-summary:

```
"Quick summary before I start:
- [Key point 1]
- [Key point 2]
- [Assumption made]

Proceed? (Or clarify anything)"
```

## Example Interaction

**User:** Add a delete button to the user list

**Quick-Clarify Response:**

```
I'll add a delete button to each row in the user list.

**Should deletion require confirmation?**
1. Yes - show confirmation dialog (Recommended)
2. No - delete immediately
3. Soft delete - mark as deleted but keep data

[Asking because accidental deletions can be problematic]
```

## Escalation to Full /clarify

If quick-clarify reveals complexity:

```
"This is more involved than it first appeared - there are
several decisions to make. Want me to switch to a more
thorough clarification process? (Just say 'yes' or 'clarify')"
```

## Output Format

For quick-clarify, no formal specification document is needed. Just proceed with the task after brief confirmation, documenting assumptions inline with the work.
plugins/clarity-assist/docs/ND-SUPPORT.md (new file, 328 lines)
@@ -0,0 +1,328 @@
# Neurodivergent Support in clarity-assist

This document describes how clarity-assist is designed to support users with neurodivergent traits, including ADHD, autism, anxiety, and other conditions that affect executive function, sensory processing, or cognitive style.

## Overview

### Purpose

clarity-assist exists to help all users transform vague or incomplete requests into clear, actionable specifications. For neurodivergent users specifically, it addresses common challenges:

- **Executive function difficulties** - Breaking down complex tasks, getting started, managing scope
- **Working memory limitations** - Keeping track of context across long conversations
- **Decision fatigue** - Facing too many open-ended choices
- **Processing style differences** - Preferring structured, predictable interactions
- **Anxiety around uncertainty** - Needing clear expectations and explicit confirmation

### Philosophy

Our design philosophy centers on three principles:

1. **Reduce cognitive load** - Never force the user to hold too much in their head at once
2. **Provide structure** - Use consistent, predictable patterns for all interactions
3. **Respect different communication styles** - Accommodate rather than assume one "right" way to think

## Features for ND Users

### 1. Reduced Cognitive Load

**Prompt Simplification**
- The 4-D methodology (Deconstruct, Diagnose, Develop, Deliver) breaks down complex requests into manageable phases
- Users never need to specify everything upfront - clarification happens incrementally

**Task Breakdown**
- Large requests are decomposed into explicit components
- Dependencies and relationships are surfaced rather than left implicit
- Scope boundaries are clearly defined (in-scope vs. out-of-scope)

### 2. Structured Output

**Consistent Formatting**
- Every clarification session produces the same structured specification:
  - Summary (1-2 sentences)
  - Scope (In/Out)
  - Requirements table (numbered, prioritized)
  - Assumptions list
- This predictability reduces the mental effort of parsing responses

**Predictable Patterns**
- Questions always follow the same format
- Progress summaries appear at regular intervals
- Escalation (simple to complex) is always offered, never forced

**Bulleted Lists Over Prose**
- Requirements are presented as scannable lists, not paragraphs
- Options are numbered for easy reference
- Key information is highlighted with bold labels

### 3. Customizable Verbosity

**Detail Levels**
- `/clarity clarify` - Full methodology for complex requests (more thorough, more questions)
- `/clarity quick-clarify` - Rapid mode for simple disambiguation (fewer questions, faster)

**User Control**
- Users can always say "that's enough detail" to end questioning early
- The plugin offers to break sessions into smaller parts
- "Good enough for now" is explicitly validated as an acceptable outcome

### 4. Vagueness Detection

The `UserPromptSubmit` hook automatically detects prompts that might benefit from clarification and gently suggests using `/clarity clarify`.

**Detection Signals**
- Short prompts (< 10 words) without specific technical terms
- Vague action phrases: "help me", "fix this", "make it better"
- Ambiguous scope words: "somehow", "something", "stuff", "etc."
- Open questions without context

**Non-Blocking Approach**
- The hook never prevents you from proceeding
- It provides a suggestion with a vagueness score (percentage)
- You can disable auto-suggestions entirely via environment variable

### 5. Focus Aids

**Task Prioritization**
- Requirements are tagged as Must/Should/Could/Won't (MoSCoW)
- Critical items are separated from nice-to-haves
- Scope creep is explicitly called out and deferred

**Context Switching Warnings**
- When questions touch multiple areas, the plugin acknowledges the complexity
- Offers to focus on one aspect at a time
- Summarizes frequently to rebuild context after interruptions

## How It Works

### The UserPromptSubmit Hook

When you submit a prompt, the vagueness detection hook (`hooks/vagueness-check.sh`) runs automatically:

```
User submits prompt
        |
        v
Hook reads prompt from stdin
        |
        v
Skip if: empty, starts with /, or contains file paths
        |
        v
Calculate vagueness score (0.0 - 1.0)
  - Short prompts: +0.3
  - Vague action phrases: +0.2
  - Ambiguous scope words: +0.15
  - Missing technical specifics: +0.2
  - Short questions without context: +0.15
        |
        v
If score >= threshold (default 0.6):
  - Output gentle suggestion with [clarity-assist] prefix
  - Show vagueness percentage
        |
        v
Exit 0 (always non-blocking)
```
### Example Hook Output
|
||||||
|
|
||||||
|
```
|
||||||
|
[clarity-assist] Your prompt could benefit from more clarity.
|
||||||
|
[clarity-assist] Consider running /clarity-assist to refine your request.
|
||||||
|
[clarity-assist] (Vagueness score: 65% - this is a suggestion, not a block)
|
||||||
|
```
|
||||||
|
|
||||||
|
### The 4-D Methodology
|
||||||
|
|
||||||
|
| Phase | What Happens | ND Benefit |
|
||||||
|
|-------|--------------|------------|
|
||||||
|
| **Deconstruct** | Break request into components | Shows the full picture without overwhelming |
|
||||||
|
| **Diagnose** | Identify gaps and conflicts | Surfaces hidden assumptions |
|
||||||
|
| **Develop** | Ask structured questions | Option-based, chunked, contextual |
|
||||||
|
| **Deliver** | Produce specification | Consistent, scannable format |
|
||||||
|
|
||||||
|
## Usage Examples
|
||||||
|
|
||||||
|
### Before: Vague Request
|
||||||
|
|
||||||
|
```
|
||||||
|
User: Make the app faster
|
||||||
|
```
|
||||||
|
|
||||||
|
This triggers vagueness detection because:
|
||||||
|
- Short (4 words)
|
||||||
|
- Vague action ("make")
|
||||||
|
- No specific technical context
|
||||||
|
- No measurable outcome
|
||||||
|
|
||||||
|
### After: Clarified with /clarity clarify
|
||||||
|
|
||||||
|
```
|
||||||
|
User: /clarity clarify Make the app faster
|
||||||
|
|
||||||
|
Claude: Let me help clarify what kind of performance improvements you're looking for.
|
||||||
|
|
||||||
|
**What type of slowness are you experiencing?**
|
||||||
|
1. Initial page load takes too long
|
||||||
|
2. Interactions feel sluggish (clicking, typing)
|
||||||
|
3. API calls are slow
|
||||||
|
4. All of the above
|
||||||
|
5. Other
|
||||||
|
|
||||||
|
[I'm asking because each has very different solutions]
|
||||||
|
```
|
||||||
|
|
||||||
|
```
|
||||||
|
User: 1
|
||||||
|
|
||||||
|
Claude: Initial page load - got it.
|
||||||
|
|
||||||
|
**What's an acceptable load time for you?**
|
||||||
|
1. Under 1 second (aggressive)
|
||||||
|
2. Under 2 seconds (standard)
|
||||||
|
3. Under 3 seconds (relaxed)
|
||||||
|
4. I don't have a specific target
|
||||||
|
|
||||||
|
[This helps me know whether to suggest quick wins or a major optimization effort]
|
||||||
|
```
|
||||||
|
|
||||||
|
After 2-3 more questions, the user receives:
|
||||||
|
|
||||||
|
```markdown
|
||||||
|
## Clarified Request
|
||||||
|
|
||||||
|
### Summary
|
||||||
|
Optimize initial page load time to under 2 seconds by addressing bundle size and initial render blocking resources.
|
||||||
|
|
||||||
|
### Scope
|
||||||
|
**In Scope:**
|
||||||
|
- Bundle analysis and splitting
|
||||||
|
- Lazy loading non-critical components
|
||||||
|
- Image optimization
|
||||||
|
- CSS/JS minification check
|
||||||
|
|
||||||
|
**Out of Scope:**
|
||||||
|
- Backend API optimization
|
||||||
|
- Database query performance
|
||||||
|
- CDN setup
|
||||||
|
|
||||||
|
### Requirements
|
||||||
|
| # | Requirement | Priority | Notes |
|
||||||
|
|---|-------------|----------|-------|
|
||||||
|
| 1 | Page loads in < 2s on 4G | Must | Measured via Lighthouse |
|
||||||
|
| 2 | Largest contentful paint < 1.5s | Should | Core Web Vital |
|
||||||
|
| 3 | No layout shift after load | Should | CLS < 0.1 |
|
||||||
|
|
||||||
|
### Assumptions
|
||||||
|
- Current load time is > 3 seconds (based on "too long" description)
|
||||||
|
- Target browsers: modern evergreen (Chrome, Firefox, Safari)
|
||||||
|
- Build tooling already configured (webpack/vite)
|
||||||
|
```
|
||||||
|
|
||||||
|
### Common Patterns Detected
|
||||||
|
|
||||||
|
| Pattern | Example | What It Triggers |
|
||||||
|
|---------|---------|------------------|
|
||||||
|
| Missing context | "Fix the bug" | "Which bug? In which file or feature?" |
|
||||||
|
| Ambiguous scope | "Update the user stuff" | "What specifically needs updating?" |
|
||||||
|
| Unclear goals | "Make it better" | "Better in what way? Faster? Cleaner? More features?" |
|
||||||
|
| Implicit knowledge | "Do it like last time" | "I don't have context from previous sessions" |
|
||||||
|
| Conflicting requests | "Keep it simple but add everything" | "Which matters more: simplicity or completeness?" |
|
||||||
|
|
||||||
|
### Quick-Clarify Example
|
||||||
|
|
||||||
|
For simpler requests that just need minor disambiguation:
|
||||||
|
|
||||||
|
```
|
||||||
|
User: /clarity quick-clarify Add a delete button to the user list
|
||||||
|
|
||||||
|
Claude: I'll add a delete button to each row in the user list.
|
||||||
|
|
||||||
|
**Should deletion require confirmation?**
|
||||||
|
1. Yes - show confirmation dialog (Recommended)
|
||||||
|
2. No - delete immediately
|
||||||
|
3. Soft delete - mark as deleted but keep data
|
||||||
|
|
||||||
|
[Asking because accidental deletions can be problematic]
|
||||||
|
```
|
||||||
|
|
||||||
|
```
|
||||||
|
User: 1
|
||||||
|
|
||||||
|
Claude: Got it. I'll implement a delete button with confirmation dialog. Starting now.
|
||||||
|
```
|
||||||
|
|
||||||
|
## Configuration Options
|
||||||
|
|
||||||
|
### Environment Variables
|
||||||
|
|
||||||
|
| Variable | Default | Description |
|
||||||
|
|----------|---------|-------------|
|
||||||
|
| `CLARITY_ASSIST_AUTO_SUGGEST` | `true` | Enable/disable automatic vagueness detection |
|
||||||
|
| `CLARITY_ASSIST_VAGUENESS_THRESHOLD` | `0.6` | Score threshold to trigger suggestion (0.0-1.0) |
|
||||||
|
|
||||||
|
### Disabling Auto-Suggestions
|
||||||
|
|
||||||
|
If you find the vagueness detection unhelpful, disable it in your shell profile or `.env`:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
export CLARITY_ASSIST_AUTO_SUGGEST=false
|
||||||
|
```
|
||||||
|
|
||||||
|
### Adjusting Sensitivity
|
||||||
|
|
||||||
|
To make detection more or less sensitive:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# More sensitive (suggests more often)
|
||||||
|
export CLARITY_ASSIST_VAGUENESS_THRESHOLD=0.4
|
||||||
|
|
||||||
|
# Less sensitive (only very vague prompts)
|
||||||
|
export CLARITY_ASSIST_VAGUENESS_THRESHOLD=0.8
|
||||||
|
```
|
||||||
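Both variables are read by the hook with bash default-value expansion, so leaving one unset means the documented default applies. A quick sketch of that behavior (the variable name matches the hook; the demo value `0.4` is arbitrary):

```shell
#!/bin/bash
# Demonstrates the ${VAR:-default} expansion the hook uses to read its config.
unset CLARITY_ASSIST_VAGUENESS_THRESHOLD
THRESHOLD="${CLARITY_ASSIST_VAGUENESS_THRESHOLD:-0.6}"
echo "unset -> $THRESHOLD"    # falls back to the documented default, 0.6

export CLARITY_ASSIST_VAGUENESS_THRESHOLD=0.4
THRESHOLD="${CLARITY_ASSIST_VAGUENESS_THRESHOLD:-0.6}"
echo "set   -> $THRESHOLD"    # the user's override wins
```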
## Tips for ND Users

### If You're Feeling Overwhelmed

- Use `/clarity quick-clarify` instead of `/clarity clarify` for faster interactions
- Say "let's focus on just one thing" to narrow scope
- Ask to "pause and summarize" at any point
- It's OK to say "I don't know" - the plugin will offer concrete alternatives

### If You Have Executive Function Challenges

- Start with `/clarity clarify` even for tasks you think are simple - it helps with planning
- The structured specification can serve as a checklist
- Use the scope boundaries to prevent scope creep

### If You Prefer Detailed Structure

- The 4-D methodology provides a predictable framework
- All output follows consistent formatting
- Questions always offer numbered options

### If You Have Anxiety About Getting It Right

- The plugin validates "good enough for now" as acceptable
- You can always revisit and change earlier answers
- Assumptions are explicitly listed - nothing is hidden

## Accessibility Notes

- All output uses standard markdown that works with screen readers
- No time pressure - take as long as you need between responses
- Questions are designed to be answerable without deep context retrieval
- Visual patterns (bold, bullets, tables) create scannable structure

## Feedback

If you have suggestions for improving neurodivergent support in clarity-assist, please open an issue at:

https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/issues

Include the label `clarity-assist` and describe:
- What challenge you faced
- What would have helped
- Any specific accommodations you'd like to see
@@ -1,12 +1,12 @@
 {
   "hooks": {
-    "PostToolUse": [
+    "UserPromptSubmit": [
       {
-        "matcher": "Write|Edit",
+        "matcher": "",
         "hooks": [
           {
             "type": "command",
-            "command": "${CLAUDE_PLUGIN_ROOT}/hooks/cleanup.sh"
+            "command": "${CLAUDE_PLUGIN_ROOT}/hooks/vagueness-check.sh"
           }
         ]
       }
plugins/clarity-assist/hooks/vagueness-check.sh (new executable file, 256 lines)
@@ -0,0 +1,256 @@
#!/bin/bash
# clarity-assist vagueness detection hook
# Analyzes user prompts for vagueness and suggests /clarity clarify when beneficial
# All output MUST have [clarity-assist] prefix
# This is a NON-BLOCKING hook - always exits 0

PREFIX="[clarity-assist]"

# Check if auto-suggest is enabled (default: true)
AUTO_SUGGEST="${CLARITY_ASSIST_AUTO_SUGGEST:-true}"
if [[ "$AUTO_SUGGEST" != "true" ]]; then
    exit 0
fi

# Threshold for vagueness score (default: 0.6)
THRESHOLD="${CLARITY_ASSIST_VAGUENESS_THRESHOLD:-0.6}"

# Read user prompt from stdin
PROMPT=""
if [[ -t 0 ]]; then
    # No stdin available
    exit 0
else
    PROMPT=$(cat)
fi

# Skip empty prompts
if [[ -z "$PROMPT" ]]; then
    exit 0
fi

# Skip if prompt is a command (starts with /)
if [[ "$PROMPT" =~ ^[[:space:]]*/[a-zA-Z] ]]; then
    exit 0
fi

# Skip if prompt mentions specific files or paths
if [[ "$PROMPT" =~ \.(py|js|ts|sh|md|json|yaml|yml|txt|css|html|go|rs|java|c|cpp|h)([[:space:]]|$|[^a-zA-Z]) ]] || \
   [[ "$PROMPT" =~ [/\\][a-zA-Z0-9_-]+[/\\] ]] || \
   [[ "$PROMPT" =~ (src|lib|test|docs|plugins|hooks|commands)/ ]]; then
    exit 0
fi

# Initialize vagueness score
SCORE=0

# Count words in the prompt
WORD_COUNT=$(echo "$PROMPT" | wc -w | tr -d ' ')

# ============================================================================
# Vagueness Signal Detection
# ============================================================================

# Signal 1: Very short prompts (< 10 words) are often vague
if [[ "$WORD_COUNT" -lt 10 ]]; then
    # But very short specific commands are OK
    if [[ "$WORD_COUNT" -lt 3 ]]; then
        # Extremely short - probably intentional or a command
        :
    else
        SCORE=$(echo "$SCORE + 0.3" | bc)
    fi
fi

# Signal 2: Vague action phrases (no specific outcome)
VAGUE_ACTIONS=(
    "help me"
    "help with"
    "do something"
    "work on"
    "look at"
    "check this"
    "fix it"
    "fix this"
    "make it better"
    "make this better"
    "improve it"
    "improve this"
    "update this"
    "update it"
    "change it"
    "change this"
    "can you"
    "could you"
    "would you"
    "please help"
)

PROMPT_LOWER=$(echo "$PROMPT" | tr '[:upper:]' '[:lower:]')

for phrase in "${VAGUE_ACTIONS[@]}"; do
    if [[ "$PROMPT_LOWER" == *"$phrase"* ]]; then
        SCORE=$(echo "$SCORE + 0.2" | bc)
        break
    fi
done

# Signal 3: Ambiguous scope indicators
AMBIGUOUS_SCOPE=(
    "somehow"
    "something"
    "somewhere"
    "anything"
    "whatever"
    "stuff"
    "things"
    "etc"
    "and so on"
)

for word in "${AMBIGUOUS_SCOPE[@]}"; do
    if [[ "$PROMPT_LOWER" == *"$word"* ]]; then
        SCORE=$(echo "$SCORE + 0.15" | bc)
        break
    fi
done

# Signal 4: Missing context indicators (no reference to what/where)
# Check if prompt lacks specificity markers
HAS_SPECIFICS=false

# Specific technical terms suggest clarity
SPECIFIC_MARKERS=(
    "function" "class" "method" "variable" "error" "bug" "test" "api"
    "endpoint" "database" "query" "component" "module" "service" "config"
    "install" "deploy" "build" "run" "execute" "create" "delete" "add"
    "remove" "implement" "refactor" "migrate" "upgrade" "debug" "log"
    "exception" "stack" "memory" "performance" "security" "auth" "token"
    "session" "route" "controller" "model" "view" "template" "schema"
    "migration" "commit" "branch" "merge" "pull" "push"
)

for marker in "${SPECIFIC_MARKERS[@]}"; do
    if [[ "$PROMPT_LOWER" == *"$marker"* ]]; then
        HAS_SPECIFICS=true
        break
    fi
done

if [[ "$HAS_SPECIFICS" == false ]] && [[ "$WORD_COUNT" -gt 3 ]]; then
    SCORE=$(echo "$SCORE + 0.2" | bc)
fi

# Signal 5: Question without context
if [[ "$PROMPT" =~ \?$ ]] && [[ "$WORD_COUNT" -lt 8 ]]; then
    # Short questions without specifics are often vague
    if [[ "$HAS_SPECIFICS" == false ]]; then
        SCORE=$(echo "$SCORE + 0.15" | bc)
    fi
fi

# Cap score at 1.0
if (( $(echo "$SCORE > 1.0" | bc -l) )); then
    SCORE="1.0"
fi

# ============================================================================
# Feature Request Detection (for RFC suggestion)
# ============================================================================

FEATURE_REQUEST=false

# Feature request phrases
FEATURE_PHRASES=(
    "we should"
    "it would be nice"
    "feature request"
    "idea:"
    "suggestion:"
    "what if we"
    "wouldn't it be great"
    "i think we need"
    "we need to add"
    "we could add"
    "how about adding"
    "can we add"
    "new feature"
    "enhancement"
    "proposal"
)

for phrase in "${FEATURE_PHRASES[@]}"; do
    if [[ "$PROMPT_LOWER" == *"$phrase"* ]]; then
        FEATURE_REQUEST=true
        break
    fi
done

# ============================================================================
# Output suggestion if score exceeds threshold
# ============================================================================

# Compare score to threshold using bc
if (( $(echo "$SCORE >= $THRESHOLD" | bc -l) )); then
    # Format score as percentage for display
    SCORE_PCT=$(echo "$SCORE * 100" | bc | cut -d'.' -f1)

    # Gentle, non-blocking suggestion
    echo "$PREFIX Your prompt could benefit from more clarity."
    echo "$PREFIX Consider running /clarity clarify to refine your request."
    echo "$PREFIX (Vagueness score: ${SCORE_PCT}% - this is a suggestion, not a block)"

    # Additional RFC suggestion if feature request detected
    if [[ "$FEATURE_REQUEST" == true ]]; then
        echo "$PREFIX This looks like a feature idea. Consider /rfc-create to track it formally."
    fi
elif [[ "$FEATURE_REQUEST" == true ]]; then
    # Feature request detected but not vague - still suggest RFC
    echo "$PREFIX This looks like a feature idea. Consider /rfc-create to track it formally."
fi

# Always exit 0 - this hook is non-blocking
exit 0
plugins/clarity-assist/skills/4d-methodology.md (new file, 76 lines)
@@ -0,0 +1,76 @@
# 4-D Methodology for Prompt Clarification

The 4-D methodology transforms vague requests into actionable specifications.

## Phase 1: Deconstruct

Break down the user's request into components:

1. **Extract explicit requirements** - What was directly stated
2. **Identify implicit assumptions** - What seems assumed but not stated
3. **Note ambiguities** - Points that could go multiple ways
4. **List dependencies** - External factors that might affect implementation

## Phase 2: Diagnose

Analyze gaps and potential issues:

1. **Missing information** - What do we need to know?
2. **Conflicting requirements** - Do any stated goals contradict?
3. **Scope boundaries** - What is in/out of scope?
4. **Technical constraints** - Platform, language, architecture limits

## Phase 3: Develop

Gather clarifications through structured questioning:

- Present 2-4 concrete options (never open-ended alone)
- Include "Other" for custom responses
- Ask 1-2 questions at a time maximum
- Provide brief context for why you are asking
- Check for conflicts with previous answers

**Example Format:**
```
To help me understand the scope better:

**How should errors be handled?**
1. Silent logging (user sees nothing)
2. Toast notifications (brief, dismissible)
3. Modal dialogs (requires user action)
4. Other

[Context: This affects both UX and how much error-handling code we need]
```

## Phase 4: Deliver

Produce the refined specification:

```markdown
## Clarified Request

### Summary
[1-2 sentence description of what will be built]

### Scope
**In Scope:**
- [Item 1]
- [Item 2]

**Out of Scope:**
- [Item 1]

### Requirements

| # | Requirement | Priority | Notes |
|---|-------------|----------|-------|
| 1 | ... | Must | ... |
| 2 | ... | Should | ... |

### Assumptions
- [Assumption made based on conversation]

### Open Questions
- [Any remaining ambiguities, if any]
```
plugins/clarity-assist/skills/clarification-techniques.md (Normal file, 86 lines)
@@ -0,0 +1,86 @@
# Clarification Techniques

Structured approaches for disambiguating user requests.

## Anti-Patterns to Detect

### Vague Requests
**Triggers:** "improve", "fix", "update", "change", "better", "faster", "cleaner"

**Response:** Ask for specific metrics or outcomes
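Since the companion hook is written in bash, trigger detection along these lines can be sketched with a case-insensitive whole-word grep. The `PROMPT` value and variable names are illustrative, not the hook's actual implementation.

```shell
# Hypothetical sketch: count vague-trigger words (from the list above) in a
# prompt; the trigger list mirrors this section, everything else is made up.
PROMPT="please improve the parser and fix the flaky tests"
TRIGGERS="improve|fix|update|change|better|faster|cleaner"
# -w matches whole words only, -i ignores case, -o emits one match per line
MATCHES=$(printf '%s\n' "$PROMPT" | grep -oiwE "($TRIGGERS)" | wc -l | tr -d ' ')
echo "vague triggers: $MATCHES"
```

Whole-word matching (`-w`) avoids false positives such as "fixture" tripping the "fix" trigger.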
### Scope Creep Signals
**Triggers:** "while you're at it", "also", "might as well", "and another thing"

**Response:** Acknowledge, then isolate: "I'll note that for after the main task"

### Assumption Gaps
**Triggers:** References to "the" thing (which thing?), "it" (what?), "there" (where?)

**Response:** Echo back specific understanding

### Conflicting Requirements
**Triggers:** "Simple but comprehensive", "Fast but thorough", "Minimal but complete"

**Response:** Prioritize: "Which matters more: simplicity or completeness?"

## Question Templates

### For Unclear Purpose
```
**What problem does this solve?**
1. [Specific problem A]
2. [Specific problem B]
3. Combination
4. Different problem: ____
```

### For Missing Scope
```
**What should this include?**
- [ ] Feature A
- [ ] Feature B
- [ ] Feature C
- [ ] Other: ____
```

### For Ambiguous Behavior
```
**When [trigger event], what should happen?**
1. [Behavior option A]
2. [Behavior option B]
3. Nothing (ignore)
4. Depends on: ____
```

### For Technical Decisions
```
**Implementation approach:**
1. [Approach A] - pros: X, cons: Y
2. [Approach B] - pros: X, cons: Y
3. Let me decide based on codebase
4. Need more info about: ____
```

## Echo Understanding Technique

Before diving into questions, restate understanding:

```
"I understand you want [X] that does [Y]."
```

This validates comprehension and gives the user a chance to correct early.

## Micro-Summary Technique

For quick confirmations before proceeding:

```
"Quick summary before I start:
- [Key point 1]
- [Key point 2]
- [Assumption made]

Proceed? (Or clarify anything)"
```
plugins/clarity-assist/skills/escalation-patterns.md (Normal file, 57 lines)
@@ -0,0 +1,57 @@
# Escalation Patterns

Guidelines for when to escalate between clarification modes.

## Quick-Clarify to Full Clarify

Escalate when quick-clarify reveals unexpected complexity:

```
"This is more involved than it first appeared - there are
several decisions to make. Want me to switch to a more
thorough clarification process? (Just say 'yes' or 'clarify')"
```

### Triggers for Escalation

- Multiple ambiguities discovered during quick pass
- User's answer reveals hidden dependencies
- Scope expands beyond original understanding
- Technical constraints emerge that need discussion
- Conflicting requirements surface

## Full Clarify to Incremental

When the user is overwhelmed by the full 4-D process:

```
"This touches a lot of areas. Rather than tackle everything at once,
let's start with [most critical piece]. Once that's clear, we can
add the other parts. Sound good?"
```

### Signs of Overwhelm

- Long pauses or hesitation
- "I don't know" responses
- Requesting breaks
- Contradicting earlier answers
- Expressing frustration

## Choosing Initial Mode

### Use /clarity quick-clarify When

- Request is fairly clear, with just one or two ambiguities
- User is in a hurry
- Follow-up to an already-clarified request
- Simple feature additions or bug fixes
- Confidence is high (>90%)

### Use /clarity clarify When

- Complex multi-step requests
- Requirements with multiple possible interpretations
- Tasks requiring significant context gathering
- User seems uncertain about what they want
- First time working on this feature/area
plugins/clarity-assist/skills/nd-accommodations.md (Normal file, 74 lines)
@@ -0,0 +1,74 @@
# Neurodivergent-Friendly Accommodations

Guidelines for making clarification interactions accessible and comfortable for neurodivergent users.

## Core Principles

### Reduce Cognitive Load
- Maximum 4 options per question
- Always include an "Other" escape hatch
- Provide examples, not just descriptions
- Use numbered lists for easy reference

### Support Working Memory
- Summarize frequently
- Reference earlier decisions explicitly
- Do not assume the user remembers context from many turns ago
- Echo back understanding before proceeding

### Allow Processing Time
- Do not rapid-fire questions
- Validate answers before moving on
- Offer to revisit or change earlier answers
- One question block at a time

### Manage Overwhelm
- Offer to break into smaller sessions
- Prioritize must-haves vs nice-to-haves
- Provide "good enough for now" options
- Acknowledge complexity openly

## Question Formatting Rules

**Always do:**
```
**How should errors be handled?**
1. Silent logging (user sees nothing)
2. Toast notifications (brief, dismissible)
3. Modal dialogs (requires user action)
4. Other

[Context: This affects both UX and error-handling complexity]
```

**Never do:**
```
How do you want to handle errors? There are many approaches...
```

## Conflict Acknowledgment

Before asking about something that might conflict with a previous answer:

```
[Internal check]
Previous: User said "keep it simple"
Current question about: Adding configuration options
Potential conflict: More options = more complexity
```

Then acknowledge: "Earlier you mentioned keeping it simple. With that in mind..."

## Escalation for Overwhelm

If the request is particularly complex or the user seems overwhelmed:

1. Acknowledge the complexity openly
2. Offer to start with just ONE aspect
3. Build incrementally

```
"This touches a lot of areas. Rather than tackle everything at once,
let's start with [most critical piece]. Once that's clear, we can
add the other parts. Sound good?"
```
@@ -0,0 +1,3 @@
{
  "domain": "core"
}
@@ -1,7 +1,7 @@
 {
   "name": "claude-config-maintainer",
-  "version": "1.0.0",
+  "version": "9.0.1",
-  "description": "Maintains and optimizes CLAUDE.md configuration files for Claude Code projects",
+  "description": "Maintains and optimizes CLAUDE.md and settings.local.json configuration files for Claude Code projects",
   "author": {
     "name": "Leo Miranda",
     "email": "leobmiranda@gmail.com"
@@ -14,7 +14,11 @@
     "configuration",
     "optimization",
     "claude-md",
-    "developer-tools"
+    "developer-tools",
+    "settings",
+    "permissions"
   ],
-  "commands": ["./commands/"]
+  "commands": [
+    "./commands/"
+  ]
 }
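After an edit like this, the manifest can be sanity-checked for valid JSON and the new array-style `commands` field. The scratch file and the reduced field set below are illustrative, not the plugin's full manifest.

```shell
# Hypothetical sanity check: write the new manifest shape shown above to a
# scratch directory, then confirm it parses and "commands" is an array.
cd "$(mktemp -d)"
cat > plugin.json <<'EOF'
{
  "name": "claude-config-maintainer",
  "version": "9.0.1",
  "commands": [
    "./commands/"
  ]
}
EOF
python3 -c 'import json; m = json.load(open("plugin.json")); assert isinstance(m["commands"], list); print(m["version"])'
```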
@@ -1,99 +0,0 @@
# Claude Config Maintainer

A Claude Code plugin for creating and maintaining optimized CLAUDE.md configuration files.

## Overview

CLAUDE.md files provide instructions to Claude Code when working with a project. This plugin helps you:

- **Analyze** existing CLAUDE.md files for improvement opportunities
- **Optimize** structure, clarity, and conciseness
- **Initialize** new CLAUDE.md files with project-specific content

## Installation

This plugin is part of the Leo Claude Marketplace. Install the marketplace and the plugin will be available.

## Commands

### `/config-analyze`
Analyze your CLAUDE.md and get a detailed report with scores and recommendations.

```
/config-analyze
```

### `/config-optimize`
Automatically optimize your CLAUDE.md based on best practices.

```
/config-optimize
```

### `/config-init`
Create a new CLAUDE.md tailored to your project.

```
/config-init
```

## Best Practices

A good CLAUDE.md should be:

- **Clear** - Easy to understand at a glance
- **Concise** - No unnecessary content
- **Complete** - All essential information included
- **Current** - Up to date with the project

### Recommended Structure

```markdown
# CLAUDE.md

## Project Overview
What does this project do?

## Quick Start
Essential build/test/run commands.

## Critical Rules
What must Claude NEVER do?

## Architecture (optional)
Key technical decisions.

## Common Operations (optional)
Frequent tasks and workflows.
```

### Length Guidelines

| Project Size | Recommended Lines |
|--------------|-------------------|
| Small | 50-100 |
| Medium | 100-200 |
| Large | 200-400 |

## Scoring System

The analyzer scores CLAUDE.md files on:

- **Structure** (25 pts) - Organization and navigation
- **Clarity** (25 pts) - Clear, unambiguous instructions
- **Completeness** (25 pts) - Essential sections present
- **Conciseness** (25 pts) - Efficient information density

Target score: **70+** for effective Claude Code usage.

## Tips

1. Run `/config-analyze` periodically to maintain quality
2. Update CLAUDE.md when adding major features
3. Keep critical rules prominent and clear
4. Include examples where they add clarity
5. Remove generic advice that applies to all projects

## Contributing

This plugin is part of the personal-projects/leo-claude-mktplace repository.
@@ -1,12 +1,25 @@
 ---
 name: maintainer
 description: CLAUDE.md optimization and maintenance agent
+model: sonnet
+permissionMode: acceptEdits
+skills: visual-header, settings-optimization
 ---

 # CLAUDE.md Maintainer Agent

 You are the **Maintainer Agent** - a specialist in creating and optimizing CLAUDE.md configuration files for Claude Code projects. Your role is to ensure CLAUDE.md files are clear, concise, well-structured, and follow best practices.

+## Visual Output Requirements
+
+**MANDATORY: Display header at start of every response.**
+
+```
+┌──────────────────────────────────────────────────────────────────┐
+│ ⚙️ CONFIG-MAINTAINER · CLAUDE.md Optimization                    │
+└──────────────────────────────────────────────────────────────────┘
+```
+
 ## Your Personality

 **Optimization-Focused:**
@@ -83,13 +96,13 @@ Use this mapping to identify active plugins:
 | `gitea` | projman |
 | `netbox` | cmdb-assistant |

-Also check for hook-based plugins (project-hygiene uses `PostToolUse` hooks).
+Also check for hook-based plugins (code-sentinel, git-flow, cmdb-assistant use `PreToolUse` safety hooks; clarity-assist uses `UserPromptSubmit` quality hook).

 **Step 2: Check CLAUDE.md for Plugin References**

 For each detected plugin, search CLAUDE.md for:
 - Plugin name mention (e.g., "projman", "cmdb-assistant")
-- Command references (e.g., `/sprint-plan`, `/cmdb-search`)
+- Command references (e.g., `/sprint plan`, `/cmdb search`)
 - MCP tool mentions (e.g., `list_issues`, `dcim_list_devices`)

 **Step 3: Load Integration Snippets**
@@ -104,7 +117,54 @@ Report plugin coverage percentage and offer to add missing integrations:
 - Display the integration content that would be added
 - Ask user for confirmation before modifying CLAUDE.md

-### 2. Optimize CLAUDE.md Structure
+### 2. Audit Settings Files
+
+When auditing settings files, perform:
+
+#### A. Permission Analysis
+
+Read `.claude/settings.local.json` (primary) and check `.claude/settings.json` and `~/.claude.json` project entries (secondary).
+
+Evaluate using `skills/settings-optimization.md`:
+
+**Redundancy:**
+- Duplicate entries in allow/deny arrays
+- Subset patterns covered by broader patterns
+- Patterns that could be merged
+
+**Coverage:**
+- Common safe tools missing from allow list
+- MCP server tools not covered
+- Directory scopes with no matching permission
+
+**Safety Alignment:**
+- Deny rules cover secrets and destructive commands
+- Allow rules don't bypass active review layers
+- No overly broad patterns without justification
+
+**Profile Fit:**
+- Compare against recommended profile for the project's review architecture
+- Identify specific additions/removals to reach target profile
+
+#### B. Review Layer Verification
+
+Before recommending auto-allow patterns, verify active review layers:
+
+1. Read `plugins/*/hooks/hooks.json` for each installed plugin
+2. Map hook types (PreToolUse, UserPromptSubmit) to tool matchers (Write, Edit, MultiEdit, Bash, MCP patterns)
+3. Confirm plugins are listed in `.claude-plugin/marketplace.json`
+4. Only recommend auto-allow for scopes covered by ≥2 verified review layers
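The first two verification steps can be sketched as a small shell loop. The demo plugin layout created here is hypothetical and stands in for a real marketplace checkout; a real audit would read the actual `plugins/*/hooks/hooks.json` files.

```shell
# Hypothetical sketch of verification steps 1-2: list the hook types each
# installed plugin declares in hooks.json. A throwaway demo layout is
# created first so the sketch runs anywhere.
cd "$(mktemp -d)"
mkdir -p plugins/demo-plugin/hooks
printf '{"hooks": {"UserPromptSubmit": []}}\n' > plugins/demo-plugin/hooks/hooks.json

# Read each plugin's hooks.json and report the hook types it declares
for f in plugins/*/hooks/hooks.json; do
  [ -f "$f" ] || continue
  echo "$f:"
  grep -oE '"(PreToolUse|PostToolUse|UserPromptSubmit)"' "$f" | sort -u
done
```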
+#### C. Settings Efficiency Score (100 points)
+
+| Category | Points |
+|----------|--------|
+| Redundancy | 25 |
+| Coverage | 25 |
+| Safety Alignment | 25 |
+| Profile Fit | 25 |
+
+### 3. Optimize CLAUDE.md Structure

 **Recommended Structure:**
@@ -139,7 +199,7 @@ Common issues and solutions.
 - Use headers that scan easily
 - Include examples where they add clarity

-### 3. Apply Best Practices
+### 4. Apply Best Practices

 **DO:**
 - Use clear, direct language
@@ -156,7 +216,7 @@ Common issues and solutions.
 - Add generic advice that applies to all projects
 - Use emojis unless project requires them

-### 4. Generate Improvement Reports
+### 5. Generate Improvement Reports

 After analyzing a CLAUDE.md, provide:
@@ -192,7 +252,7 @@ Suggested Actions:
 Would you like me to implement these improvements?
 ```

-### 5. Insert Plugin Integrations
+### 6. Insert Plugin Integrations

 When adding plugin integration content to CLAUDE.md:
@@ -227,7 +287,7 @@ Add this integration to CLAUDE.md?
 - Allow users to skip specific plugins they don't want documented
 - Preserve existing CLAUDE.md structure and content

-### 6. Create New CLAUDE.md Files
+### 7. Create New CLAUDE.md Files

 When creating a new CLAUDE.md:
@@ -267,6 +327,39 @@ Every CLAUDE.md should have:
 1. **Project Overview** - What is this?
 2. **Quick Start** - How do I build/test/run?
 3. **Important Rules** - What must I NOT do?
+4. **Pre-Change Protocol** - Mandatory dependency check before code changes
+
+### Pre-Change Protocol Section (MANDATORY)
+
+**This section is REQUIRED in every CLAUDE.md.** It ensures Claude performs comprehensive dependency analysis before making any code changes.
+
+```markdown
+## ⛔ MANDATORY: Before Any Code Change
+
+**Claude MUST show this checklist BEFORE editing any file:**
+
+### 1. Impact Search Results
+Run and show output of:
+```bash
+grep -rn "PATTERN" --include="*.sh" --include="*.md" --include="*.json" --include="*.py" | grep -v ".git"
+```
+
+### 2. Files That Will Be Affected
+Numbered list of every file to be modified, with the specific change for each.
+
+### 3. Files Searched But Not Changed (and why)
+Proof that related files were checked and determined unchanged.
+
+### 4. Documentation That References This
+List of docs that mention this feature/script/function.
+
+**User verifies this list before Claude proceeds. If Claude skips this, stop immediately.**
+
+### After Changes
+Run the same grep and show results proving no references remain unaddressed.
+```
+
+**When analyzing a CLAUDE.md, flag as HIGH priority issue if this section is missing.**
+
 ### Optional Sections (as needed)
@@ -1,16 +1,21 @@
 ## CLAUDE.md Maintenance (claude-config-maintainer)

-This project uses the **claude-config-maintainer** plugin to analyze and optimize CLAUDE.md configuration files.
+This project uses the **claude-config-maintainer** plugin to analyze and optimize CLAUDE.md and settings.local.json configuration files.

 ### Available Commands

 | Command | Description |
 |---------|-------------|
-| `/config-analyze` | Analyze CLAUDE.md for optimization opportunities with 100-point scoring |
+| `/claude-config analyze` | Analyze CLAUDE.md for optimization opportunities with 100-point scoring |
-| `/config-optimize` | Automatically optimize CLAUDE.md structure and content |
+| `/claude-config optimize` | Automatically optimize CLAUDE.md structure and content |
-| `/config-init` | Initialize a new CLAUDE.md file for a project |
+| `/claude-config init` | Initialize a new CLAUDE.md file for a project |
+| `/claude-config diff` | Track CLAUDE.md changes over time with behavioral impact analysis |
+| `/claude-config lint` | Lint CLAUDE.md for anti-patterns and best practices (31 rules) |
+| `/claude-config audit-settings` | Audit settings.local.json permissions with 100-point scoring |
+| `/claude-config optimize-settings` | Optimize permission patterns and apply named profiles |
+| `/claude-config permissions-map` | Visual map of review layers and permission coverage |

-### Scoring System
+### CLAUDE.md Scoring System

 The analysis uses a 100-point scoring system across four categories:
@@ -21,10 +26,31 @@ The analysis uses a 100-point scoring system across four categories:
 | Completeness | 25 | Overview, quick start, critical rules, workflows |
 | Conciseness | 25 | Efficiency, no repetition, appropriate length |

+### Settings Scoring System
+
+The settings audit uses a 100-point scoring system across four categories:
+
+| Category | Points | What It Measures |
+|----------|--------|------------------|
+| Redundancy | 25 | No duplicates, no subset patterns, efficient rules |
+| Coverage | 25 | Common tools allowed, MCP servers covered |
+| Safety Alignment | 25 | Deny rules for secrets/destructive ops, review layers verified |
+| Profile Fit | 25 | Alignment with recommended profile for review layer count |
+
+### Permission Profiles
+
+| Profile | Use Case |
+|---------|----------|
+| `conservative` | New users, minimal auto-allow, prompts for most writes |
+| `reviewed` | Projects with 2+ review layers (code-sentinel, doc-guardian, PR review) |
+| `autonomous` | Trusted CI/sandboxed environments only |
+
 ### Usage Guidelines

-- Run `/config-analyze` periodically to assess CLAUDE.md quality
+- Run `/claude-config analyze` periodically to assess CLAUDE.md quality
+- Run `/claude-config audit-settings` to check permission efficiency
 - Target a score of **70+/100** for effective Claude Code operation
 - Address HIGH priority issues first when optimizing
-- Use `/config-init` when setting up new projects to start with best practices
+- Use `/claude-config init` when setting up new projects to start with best practices
+- Use `/claude-config permissions-map` to visualize review layer coverage
 - Re-analyze after making changes to verify improvements
@@ -1,186 +0,0 @@
---
description: Analyze CLAUDE.md for optimization opportunities and plugin integration
---

# Analyze CLAUDE.md

This command analyzes your project's CLAUDE.md file and provides a detailed report on optimization opportunities and plugin integration status.

## What This Command Does

1. **Read CLAUDE.md** - Locates and reads the project's CLAUDE.md file
2. **Analyze Structure** - Evaluates organization, headers, and flow
3. **Check Content** - Reviews clarity, completeness, and conciseness
4. **Identify Issues** - Finds redundancy, verbosity, and missing sections
5. **Detect Active Plugins** - Identifies marketplace plugins enabled in the project
6. **Check Plugin Integration** - Verifies CLAUDE.md references active plugins
7. **Generate Report** - Provides scored assessment with recommendations

## Usage

```
/config-analyze
```

Or invoke the maintainer agent directly:

```
Analyze the CLAUDE.md file in this project
```

## Analysis Criteria

### Structure (25 points)
- Logical section ordering
- Clear header hierarchy
- Easy navigation
- Appropriate grouping

### Clarity (25 points)
- Clear instructions
- Good examples
- Unambiguous language
- Appropriate detail level

### Completeness (25 points)
- Project overview present
- Quick start commands documented
- Critical rules highlighted
- Key workflows covered

### Conciseness (25 points)
- No unnecessary repetition
- Efficient information density
- Appropriate length for project size
- No generic filler content

## Plugin Integration Analysis

After the content analysis, the command detects and analyzes marketplace plugin integration:

### Detection Method

1. **Read `.claude/settings.local.json`** - Check for enabled MCP servers
2. **Map MCP servers to plugins** - Use marketplace registry to identify active plugins:
   - `gitea` → projman
   - `netbox` → cmdb-assistant
3. **Check for hooks** - Identify hook-based plugins (project-hygiene)
4. **Scan CLAUDE.md** - Look for plugin integration content

### Plugin Coverage Scoring

For each detected plugin, verify CLAUDE.md contains:
- Plugin section header or mention
- Available commands documentation
- MCP tools reference (if applicable)
- Usage guidelines

Coverage is reported as a percentage: `(plugins referenced / plugins detected) * 100`
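The coverage formula can be checked with plain shell integer arithmetic; the counts below are illustrative, matching the 1-of-3 case in the sample output.

```shell
# Worked example of the coverage formula: 1 of 3 detected plugins is
# referenced in CLAUDE.md (counts are illustrative).
REFERENCED=1
DETECTED=3
# Integer division floors the result, so 1/3 reports as 33%
COVERAGE=$(( REFERENCED * 100 / DETECTED ))
echo "Plugin Coverage: ${COVERAGE}% (${REFERENCED}/${DETECTED} plugins referenced)"
```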
## Expected Output

```
CLAUDE.md Analysis Report
=========================

File: /path/to/project/CLAUDE.md
Lines: 245
Last Modified: 2025-01-18

Overall Score: 72/100

Category Scores:
- Structure: 20/25 (Good)
- Clarity: 18/25 (Good)
- Completeness: 22/25 (Excellent)
- Conciseness: 12/25 (Needs Work)

Strengths:
+ Clear project overview with good context
+ Critical rules prominently displayed
+ Comprehensive coverage of workflows

Issues Found:

1. [HIGH] Verbose explanations (lines 45-78)
   Section "Running Tests" has 34 lines that could be 8 lines.
   Impact: Harder to scan, important info buried

2. [MEDIUM] Duplicate content (lines 102-115, 189-200)
   Same git workflow documented twice.
   Impact: Maintenance burden, inconsistency risk

3. [MEDIUM] Missing Quick Start section
   No clear "how to get started" instructions.
   Impact: Slower onboarding for Claude

4. [LOW] Inconsistent header formatting
   Mix of "## Title" and "## Title:" styles.
   Impact: Minor readability issue

Recommendations:
1. Add Quick Start section at top (priority: high)
2. Condense Testing section to essentials (priority: high)
3. Remove duplicate git workflow (priority: medium)
4. Standardize header formatting (priority: low)

Estimated improvement: 15-20 points after changes

---

Plugin Integration Analysis
===========================

Detected Active Plugins:
✓ projman (via gitea MCP server)
✓ cmdb-assistant (via netbox MCP server)
✓ project-hygiene (via PostToolUse hook)

Plugin Coverage: 33% (1/3 plugins referenced)

✓ projman - Referenced in CLAUDE.md
✗ cmdb-assistant - NOT referenced
✗ project-hygiene - NOT referenced

Missing Integration Content:

1. cmdb-assistant
   Add infrastructure management commands and NetBox MCP tools reference.

2. project-hygiene
   Add cleanup hook documentation and configuration options.

---

Would you like me to:
[1] Implement all content recommendations
[2] Add missing plugin integrations to CLAUDE.md
[3] Do both (recommended)
[4] Show preview of changes first
```
## When to Use

Run `/config-analyze` when:

- Setting up a new project with an existing CLAUDE.md
- CLAUDE.md feels too long or hard to use
- Claude seems to miss instructions
- Before major project changes
- As periodic maintenance (quarterly)
- After installing new marketplace plugins
- When Claude doesn't seem to use available plugin tools
## Follow-Up Actions

After analysis, you can:

- Run `/config-optimize` to automatically improve the file
- Manually address specific issues
- Request detailed recommendations for any section
- Compare with best-practice templates
## Tips

- Run analysis after significant project changes
- Address HIGH priority issues first
- Keep scores above 70/100 for best results
- Re-analyze after making changes to verify improvement