feat(marketplace): command consolidation + 8 new plugins (v8.1.0 → v9.0.0) [BREAKING]
Phase 1b: Rename all ~94 commands across 12 plugins to the /<noun> <action> sub-command pattern. Git-flow is consolidated from 8→5 commands (commit variants absorbed into --push/--merge/--sync flags), with dispatch files, name: frontmatter, and cross-reference updates for all plugins.

Phase 2: Design documents for 8 new plugins in docs/designs/.

Phase 3: Scaffold 8 new plugins — saas-api-platform, saas-db-migrate, saas-react-platform, saas-test-pilot, data-seed, ops-release-manager, ops-deploy-pipeline, debug-mcp. Each ships with plugin.json, commands, agents, skills, README, and claude-md-integration. The marketplace grows from 12 to 20 plugins.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
plugins/saas-test-pilot/commands/test-coverage.md (new file, 83 lines)
---
name: test coverage
description: Analyze test coverage, identify untested paths, and prioritize gaps by risk
---

# /test coverage

Analyze test coverage and identify gaps prioritized by risk.

## Visual Output

```
+----------------------------------------------------------------------+
| TEST-PILOT - Coverage Analysis                                       |
+----------------------------------------------------------------------+
```

## Usage

```
/test coverage [<target>] [--threshold=80] [--format=summary|detailed]
```

**Target:** File, directory, or module to analyze (defaults to entire project)
**Threshold:** Minimum acceptable coverage percentage
**Format:** Output detail level

## Skills to Load

- skills/coverage-analysis.md

## Process

1. **Discover Coverage Data**
   - Look for existing coverage reports: `.coverage`, `coverage.xml`, `lcov.info`, `coverage/`
   - If no report exists, attempt to run coverage: `pytest --cov`, `npx vitest run --coverage`
   - Parse coverage data into a structured format (see the sketch after this list)

2. **Analyze Gaps**
   - Identify uncovered lines, branches, and functions
   - Classify gaps by type:
     - Error handling paths (catch/except blocks)
     - Conditional branches (if/else, switch/case)
     - Edge case logic (boundary checks, null guards)
     - Integration points (API calls, database queries)

3. **Risk Assessment**
   - Score each gap by:
     - Complexity of uncovered code (cyclomatic complexity)
     - Criticality of the module (auth, payments, data persistence)
     - Frequency of changes (git log analysis)
     - Proximity to user input (trust boundary distance)

4. **Generate Report**
   - Overall coverage metrics
   - Per-file breakdown
   - Prioritized gap list with risk scores
   - Suggested test cases for top gaps

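A minimal sketch of steps 1 and 3 combined, assuming a Cobertura-style `coverage.xml` (the format `pytest --cov --cov-report=xml` emits); the criticality keywords and risk cutoffs here are illustrative assumptions, not part of the command spec:

```python
# Parse Cobertura-style coverage.xml and flag files below a threshold.
import xml.etree.ElementTree as ET

CRITICAL_HINTS = ("auth", "payment", "persist", "db")  # assumed criticality markers

def files_below_threshold(report_path: str, threshold: float = 80.0):
    tree = ET.parse(report_path)
    rows = []
    for cls in tree.iter("class"):  # Cobertura emits one <class> per source file
        filename = cls.get("filename", "")
        line_rate = float(cls.get("line-rate", 0)) * 100
        branch_rate = float(cls.get("branch-rate", 0)) * 100
        if line_rate < threshold:
            # Crude risk score: low coverage on critical-looking modules ranks higher.
            critical = any(hint in filename for hint in CRITICAL_HINTS)
            risk = "HIGH" if critical else "MEDIUM" if line_rate < threshold - 15 else "LOW"
            rows.append((filename, line_rate, branch_rate, risk))
    return sorted(rows, key=lambda row: row[1])  # worst coverage first

if __name__ == "__main__":
    for name, lines, branches, risk in files_below_threshold("coverage.xml"):
        print(f"{name}: {lines:.0f}% lines | {branches:.0f}% branches | {risk}")
```
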
## Output Format

```
## Coverage Report

### Overall: 74% lines | 61% branches

### Files Below Threshold (80%)
| File | Lines | Branches | Risk |
|------|-------|----------|------|
| src/auth/login.py | 52% | 38% | HIGH |
| src/api/handlers.py | 67% | 55% | MEDIUM |

### Top 5 Coverage Gaps (by risk)
1. **src/auth/login.py:45-62** — OAuth error handling
   Risk: HIGH | Uncovered: 18 lines | Suggestion: test invalid token flow
2. **src/api/handlers.py:89-104** — Rate limit branch
   Risk: MEDIUM | Uncovered: 16 lines | Suggestion: test 429 response

### Recommendations
- Focus on auth module — highest risk, lowest coverage
- Add branch coverage to CI threshold
- 12 new test cases would bring coverage to 85%
```
plugins/saas-test-pilot/commands/test-e2e.md (new file, 86 lines)
---
name: test e2e
description: Generate end-to-end test scenarios with page object models and user flows
---

# /test e2e

Generate end-to-end test scenarios for web applications or API workflows.

## Visual Output

```
+----------------------------------------------------------------------+
| TEST-PILOT - E2E Tests                                               |
+----------------------------------------------------------------------+
```

## Usage

```
/test e2e <target> [--framework=playwright|cypress] [--flow=<user-flow>]
```

**Target:** Application area, URL path, or feature name
**Framework:** E2E framework (auto-detected if not specified)
**Flow:** Specific user flow to test (e.g., "login", "checkout", "signup")

## Skills to Load

- skills/test-patterns.md

## Process

1. **Analyze Application**
   - Detect E2E framework from config files
   - Identify routes/pages from router configuration
   - Map user-facing features and critical paths
   - Detect authentication requirements

2. **Design Test Scenarios**
   - Map user journeys (happy path first)
   - Identify critical business flows:
     - Authentication (login, logout, password reset)
     - Data creation (forms, uploads, submissions)
     - Navigation (routing, deep links, breadcrumbs)
     - Error states (404, network failures, validation)
   - Define preconditions and test data needs

3. **Generate Page Objects**
   - Create page object classes for each page/component (see the sketch after this list)
   - Encapsulate selectors and interactions
   - Keep assertions in test files, not page objects
   - Use data-testid attributes where possible

4. **Write Test Files**
   - One test file per user flow or feature area
   - Include setup (authentication, test data) and teardown (cleanup)
   - Use descriptive test names that read as user stories
   - Add retry logic for flaky network operations
   - Include screenshot capture on failure

5. **Verify**
   - Check selectors reference valid elements
   - Confirm test data setup is complete
   - Validate timeout values are reasonable

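A minimal page object sketch using Playwright's Python sync API (the Output Format below shows the TypeScript equivalents); the `data-testid` values, the `/login` route, and the `page` fixture (provided by pytest-playwright) are assumptions for illustration:

```python
# Page object: selectors and interactions only; assertions stay in the test.
from playwright.sync_api import Page

class LoginPage:
    def __init__(self, page: Page):
        self.page = page

    def open(self, base_url: str) -> None:
        self.page.goto(f"{base_url}/login")  # assumed route

    def login(self, email: str, password: str) -> None:
        self.page.get_by_test_id("login-email").fill(email)
        self.page.get_by_test_id("login-password").fill(password)
        self.page.get_by_test_id("login-submit").click()

def test_invalid_credentials_shows_error(page: Page):
    login_page = LoginPage(page)
    login_page.open("http://localhost:3000")
    login_page.login("user@example.com", "wrong-password")
    # Assertion lives in the test file, per step 3.
    assert page.get_by_test_id("login-error").is_visible()
```
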
## Output Format

```
## E2E Tests: Login Flow

### Page Objects Created
- pages/LoginPage.ts — login form interactions
- pages/DashboardPage.ts — post-login verification

### Test Scenarios (5)
1. test_successful_login_redirects_to_dashboard
2. test_invalid_credentials_shows_error
3. test_empty_form_shows_validation
4. test_remember_me_persists_session
5. test_locked_account_shows_message

### Test Data Requirements
- Valid user credentials (use test seed)
- Locked account fixture
```
plugins/saas-test-pilot/commands/test-fixtures.md (new file, 87 lines)
---
name: test fixtures
description: Generate or manage test fixtures, factories, and mock data
---

# /test fixtures

Generate and organize test fixtures, factories, and mock data.

## Visual Output

```
+----------------------------------------------------------------------+
| TEST-PILOT - Fixtures                                                |
+----------------------------------------------------------------------+
```

## Usage

```
/test fixtures <action> [<target>]
```

**Actions:**
- `generate <model/schema>` — Create fixture/factory for a data model
- `list` — Show existing fixtures and their usage
- `audit` — Find unused or duplicate fixtures
- `organize` — Restructure fixtures into standard layout

## Skills to Load

- skills/fixture-management.md
- skills/mock-patterns.md

## Process

### Generate

1. **Analyze Target Model**
   - Read model/schema definition (ORM model, Pydantic, TypeScript interface)
   - Map field types, constraints, and relationships
   - Identify required vs optional fields

2. **Create Fixture**
   - Python: generate conftest.py fixture or factory_boy factory (see the sketch after this section)
   - JavaScript: generate factory function or test helper
   - Include realistic sample data (not just "test123")
   - Handle relationships (foreign keys, nested objects)
   - Create variants (minimal, full, edge-case)

3. **Place Fixture**
   - Follow project conventions for fixture location
   - Add to appropriate conftest.py or fixtures directory
   - Import from shared location, not duplicated per test

### List

1. Scan test directories for fixture definitions
2. Map each fixture to its consumers (which tests use it)
3. Display fixture tree with usage counts

### Audit

1. Find fixtures with zero consumers
2. Detect duplicate/near-duplicate fixtures
3. Identify fixtures with hardcoded data that should be parameterized

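A minimal sketch of the Generate path using factory_boy and its Faker integration, assuming a hypothetical `models.User` with email/name/role/is_admin fields (mirroring the UserFactory example below):

```python
# Generated factory for models.User, with variants for common cases.
import factory
import pytest

from myapp import models  # assumed project module


class UserFactory(factory.Factory):
    class Meta:
        model = models.User

    email = factory.Faker("email")  # realistic sample data, not "test123"
    name = factory.Faker("name")
    role = "viewer"


class AdminFactory(UserFactory):
    is_admin = True  # assumes the model carries an is_admin flag


# tests/conftest.py: expose the factory from one shared location
@pytest.fixture
def user_factory():
    return UserFactory
```
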
## Output Format

```
## Fixture: UserFactory

### Generated for: models.User
### Location: tests/conftest.py

### Variants
- user_factory() — standard user with defaults
- admin_factory() — user with is_admin=True
- minimal_user() — only required fields

### Fields
| Field | Type | Default | Notes |
|-------|------|---------|-------|
| email | str | faker.email() | unique |
| name | str | faker.name() | — |
| role | enum | "viewer" | — |
```
plugins/saas-test-pilot/commands/test-generate.md (new file, 84 lines)
---
name: test generate
description: Generate test cases for functions, classes, or modules with appropriate patterns
---

# /test generate

Generate comprehensive test cases for specified code targets.

## Visual Output

```
+----------------------------------------------------------------------+
| TEST-PILOT - Generate Tests                                          |
+----------------------------------------------------------------------+
```

## Usage

```
/test generate <target> [--type=unit|integration] [--style=aaa|bdd]
```

**Target:** File path, class name, function name, or module path
**Type:** Test type — defaults to unit
**Style:** Test style — defaults to arrange-act-assert (aaa)

## Skills to Load

- skills/test-patterns.md
- skills/mock-patterns.md
- skills/framework-detection.md

## Process

1. **Analyze Target**
   - Read the target source code
   - Identify public functions, methods, and classes
   - Map input types, return types, and exceptions
   - Detect dependencies that need mocking

2. **Determine Test Strategy**
   - Pure functions: direct input/output tests
   - Functions with side effects: mock external calls
   - Class methods: test through public interface
   - Integration points: setup/teardown with real or fake dependencies

3. **Generate Test Cases**
   - Happy path: standard inputs produce expected outputs
   - Edge cases: empty inputs, None/null, boundary values
   - Error paths: invalid inputs, exceptions, error conditions
   - Type variations: different valid types if applicable

4. **Write Test File**
   - Follow project conventions for test file location
   - Use detected framework syntax (pytest/Jest/Vitest)
   - Include docstrings explaining each test case
   - Group related tests in classes or describe blocks (see the sketch after this list)

5. **Verify**
   - Check test file compiles/parses
   - Verify imports are correct
   - Confirm mock targets match actual module paths

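A minimal sketch of what a generated pytest file might look like in arrange-act-assert style, for a hypothetical `module.load_user(user_id)` that calls a `module.database` dependency; all names here are placeholders, not a real API:

```python
# tests/unit/test_module.py (generated sketch)
from unittest.mock import patch

import pytest

import module  # assumed target module


class TestLoadUser:
    def test_returns_expected_for_valid_input(self):
        """Happy path: a known ID returns the stored user."""
        # Arrange: patch the dependency at its import site in `module`,
        # matching the "mock targets match actual module paths" check.
        with patch("module.database") as db:
            db.fetch_user.return_value = {"id": 1, "name": "Ada"}
            # Act
            result = module.load_user(1)
        # Assert
        assert result["name"] == "Ada"

    def test_raises_on_invalid_type(self):
        """Error path: non-integer IDs are rejected."""
        with pytest.raises(TypeError):
            module.load_user("not-an-id")
```
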
## Output Format

```
## Generated Tests for `module.function_name`

### Test File: tests/unit/test_module.py

### Test Cases (7 total)
1. test_function_returns_expected_for_valid_input
2. test_function_handles_empty_input
3. test_function_raises_on_invalid_type
4. test_function_boundary_values
5. test_function_none_input
6. test_function_large_input
7. test_function_concurrent_calls (if applicable)

### Dependencies Mocked
- database.connection (unittest.mock.patch)
- external_api.client (fixture)
```
plugins/saas-test-pilot/commands/test-run.md (new file, 90 lines)
---
name: test run
description: Run tests with formatted output, filtering, and failure analysis
---

# /test run

Execute tests with structured output and intelligent failure analysis.

## Visual Output

```
+----------------------------------------------------------------------+
| TEST-PILOT - Run Tests                                               |
+----------------------------------------------------------------------+
```

## Usage

```
/test run [<target>] [--type=unit|integration|e2e|all] [--verbose] [--failfast]
```

**Target:** File, directory, test name pattern, or marker/tag
**Type:** Test category to run (defaults to unit)
**Verbose:** Show full output including passing tests
**Failfast:** Stop on first failure

## Skills to Load

- skills/framework-detection.md

## Process

1. **Detect Test Runner**
   - Identify framework from project configuration
   - Build appropriate command (see the sketch after this list):
     - pytest: `pytest <target> -v --tb=short`
     - Jest: `npx jest <target> --verbose`
     - Vitest: `npx vitest run <target>`
   - Apply type filter if specified (markers, tags, directories)

2. **Execute Tests**
   - Run the test command
   - Capture stdout, stderr, and exit code
   - Parse test results into structured data

3. **Format Results**
   - Group by status: passed, failed, skipped, errors
   - Show failure details with:
     - Test name and location
     - Assertion message
     - Relevant code snippet
     - Suggested fix if pattern is recognizable

4. **Analyze Failures**
   - Common patterns:
     - Import errors: missing dependency or wrong path
     - Assertion errors: expected vs actual mismatch
     - Timeout errors: slow operation or missing mock
     - Setup errors: missing fixture or database state
   - Suggest corrective action for each failure type

5. **Summary**
   - Total/passed/failed/skipped counts
   - Duration
   - Coverage delta if coverage is enabled

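A minimal sketch of steps 1 and 2: map common config files to a runner command, then execute it and capture output. The config file names cover the defaults only; real projects may use variants (e.g., pytest configured inside pyproject.toml) that a fuller check would parse:

```python
# Detect the test runner from config files and run it, capturing output.
import subprocess
from pathlib import Path

RUNNER_CONFIGS = [
    ("pytest.ini", ["pytest", "-v", "--tb=short"]),
    ("conftest.py", ["pytest", "-v", "--tb=short"]),
    ("jest.config.js", ["npx", "jest", "--verbose"]),
    ("vitest.config.ts", ["npx", "vitest", "run"]),
]

def build_command(root: str, target: str | None = None) -> list[str]:
    base = Path(root)
    for config, cmd in RUNNER_CONFIGS:
        if (base / config).exists():
            return cmd + ([target] if target else [])
    raise RuntimeError("no known test runner configuration found")

def run_tests(root: str = ".", target: str | None = None):
    result = subprocess.run(build_command(root, target), capture_output=True, text=True)
    # Exit code and captured streams feed the formatting/analysis steps.
    return result.returncode, result.stdout, result.stderr
```
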
## Output Format

```
## Test Results

### Summary: 45 passed, 2 failed, 1 skipped (12.3s)

### Failures

1. FAIL test_user_login_with_expired_token (tests/test_auth.py:67)
   AssertionError: Expected 401, got 200
   Cause: Token expiry check not applied before validation
   Fix: Verify token_service.is_expired() is called in login handler

2. FAIL test_export_csv_large_dataset (tests/test_export.py:134)
   TimeoutError: Operation timed out after 30s
   Cause: No pagination in export query
   Fix: Add batch processing or mock the database call

### Skipped
- test_redis_cache_eviction — requires Redis (marker: @needs_redis)
```
plugins/saas-test-pilot/commands/test-setup.md (new file, 70 lines)
---
name: test setup
description: Detect test framework, configure test runner, and initialize test structure
---

# /test setup

Setup wizard for test automation. Detects existing frameworks or helps choose one.

## Visual Output

```
+----------------------------------------------------------------------+
| TEST-PILOT - Setup                                                   |
+----------------------------------------------------------------------+
```

## Skills to Load

- skills/framework-detection.md

## Process

1. **Project Detection**
   - Scan for existing test directories (`tests/`, `test/`, `__tests__/`, `spec/`)
   - Detect language from file extensions and config files
   - Identify existing test framework configuration

2. **Framework Detection** (see the sketch after this list)
   - Python: check for pytest.ini, setup.cfg `[tool:pytest]`, pyproject.toml `[tool.pytest.ini_options]`, conftest.py, unittest patterns
   - JavaScript/TypeScript: check for jest.config.js/ts, vitest.config.ts, .mocharc.yml, karma.conf.js
   - E2E: check for playwright.config.ts, cypress.config.js, selenium configs

3. **Configuration Review**
   - Show detected framework and version
   - Show test directory structure
   - Show coverage configuration if present
   - Show CI/CD test integration if found

4. **Recommendations**
   - If no framework detected: recommend based on language and project type
   - If framework found but no coverage: suggest coverage setup
   - If no test directory structure: propose standard layout
   - If missing conftest/setup files: offer to create them

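A minimal sketch of step 2, probing for the config files named above; the marker list is illustrative, and a setup.cfg or pyproject.toml hit would still need its pytest section confirmed:

```python
# Probe the project root for known framework config files.
from pathlib import Path

FRAMEWORK_MARKERS = {
    "pytest": ["pytest.ini", "conftest.py", "setup.cfg", "pyproject.toml"],
    "jest": ["jest.config.js", "jest.config.ts"],
    "vitest": ["vitest.config.ts", "vitest.config.js"],
    "mocha": [".mocharc.yml"],
    "playwright": ["playwright.config.ts"],
    "cypress": ["cypress.config.js", "cypress.config.ts"],
}

def detect_frameworks(root: str = ".") -> list[str]:
    base = Path(root)
    # setup.cfg/pyproject.toml only imply pytest if the [tool:pytest] or
    # [tool.pytest.ini_options] section exists; this sketch stops at file presence.
    return [
        framework
        for framework, markers in FRAMEWORK_MARKERS.items()
        if any((base / marker).exists() for marker in markers)
    ]
```
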
## Output Format

```
## Test Environment

### Detected Framework
- Language: Python 3.x
- Framework: pytest 8.x
- Config: pyproject.toml [tool.pytest.ini_options]

### Test Structure
tests/
  conftest.py
  unit/
  integration/

### Coverage
- Tool: pytest-cov
- Current: 72% line coverage

### Recommendations
- [ ] Add conftest.py fixtures for database connection
- [ ] Configure pytest-xdist for parallel execution
- [ ] Add coverage threshold to CI pipeline
```
plugins/saas-test-pilot/commands/test.md (new file, 18 lines)
---
description: Test automation — generate tests, analyze coverage, manage fixtures
---

# /test

Test automation toolkit for unit, integration, and end-to-end testing.

## Sub-commands

| Sub-command | Description |
|-------------|-------------|
| `/test setup` | Setup wizard — detect framework, configure test runner |
| `/test generate` | Generate test cases for functions, classes, or modules |
| `/test coverage` | Analyze coverage and identify untested paths |
| `/test fixtures` | Generate or manage test fixtures and mocks |
| `/test e2e` | Generate end-to-end test scenarios |
| `/test run` | Run tests with formatted output |