feat(marketplace): command consolidation + 8 new plugins (v8.1.0 → v9.0.0) [BREAKING]
Phase 1b: Rename all ~94 commands across 12 plugins to /<noun> <action> sub-command pattern. Git-flow consolidated from 8→5 commands (commit variants absorbed into --push/--merge/--sync flags). Dispatch files, name: frontmatter, and cross-reference updates for all plugins. Phase 2: Design documents for 8 new plugins in docs/designs/. Phase 3: Scaffold 8 new plugins — saas-api-platform, saas-db-migrate, saas-react-platform, saas-test-pilot, data-seed, ops-release-manager, ops-deploy-pipeline, debug-mcp. Each with plugin.json, commands, agents, skills, README, and claude-md-integration. Marketplace grows from 12→20. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
63
plugins/saas-test-pilot/skills/coverage-analysis.md
Normal file
@@ -0,0 +1,63 @@
---
description: Coverage gap detection, risk scoring, and prioritization
---

# Coverage Analysis Skill

## Overview

Systematic approach to identifying, scoring, and prioritizing test coverage gaps. Coverage data is a tool for finding untested behavior, not a target to maximize blindly.

## Coverage Types

| Type | Measures | Tool Support |
|------|----------|--------------|
| **Line** | Which lines executed | All tools |
| **Branch** | Which conditional paths taken | pytest-cov, istanbul, c8 |
| **Function** | Which functions called | istanbul, c8 |
| **Statement** | Which statements executed | istanbul |

Branch coverage is the minimum useful metric. Line coverage alone hides untested else-branches and short-circuit evaluations.

## Gap Classification

### By Code Pattern

| Pattern | Risk Level | Guidance |
|---------|------------|----------|
| Exception handlers (catch/except) | HIGH | Test both the trigger and the handling |
| Auth/permission checks | CRITICAL | Must test both allowed and denied |
| Input validation branches | HIGH | Test valid, invalid, and boundary |
| Default/fallback cases | MEDIUM | Often untested but triggered in production |
| Configuration variations | MEDIUM | Test with different config values |
| Logging/metrics code | LOW | Usually not worth dedicated tests |

### By Module Criticality

Score modules 1-5 based on:

- **Data integrity** — Does it write to database/files? (+2)
- **Security boundary** — Does it handle auth/authz? (+2)
- **User-facing** — Does failure affect users directly? (+1)
- **Frequency of change** — Changed often in git log? (+1)
- **Dependency count** — Many callers depend on it? (+1)

## Prioritization Formula

```
Priority = (Module Criticality * 2) + (Gap Risk Level) - (Test Complexity)
```

where Test Complexity is scored:

- 1: Simple unit test, no mocks needed
- 2: Requires basic mocking
- 3: Requires complex setup (database, fixtures)
- 4: Requires infrastructure (message queue, external service)
- 5: Requires E2E or manual testing
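The formula above can be sketched as a small scoring helper. The numeric mapping of risk labels (CRITICAL=4 down to LOW=1) is an illustrative assumption, not part of this skill:

```python
# Illustrative sketch of the prioritization formula; the risk-level
# numbers are an assumption, chosen so CRITICAL outranks HIGH, etc.
RISK_LEVELS = {"CRITICAL": 4, "HIGH": 3, "MEDIUM": 2, "LOW": 1}

def gap_priority(module_criticality: int, gap_risk: str, test_complexity: int) -> int:
    """Priority = (Module Criticality * 2) + Gap Risk Level - Test Complexity."""
    return module_criticality * 2 + RISK_LEVELS[gap_risk] - test_complexity

# An auth gap in a criticality-5 module needing only a simple unit test
# outranks a logging branch in a criticality-2 module.
auth_gap = gap_priority(5, "CRITICAL", 1)    # 5*2 + 4 - 1 = 13
logging_gap = gap_priority(2, "LOW", 2)      # 2*2 + 1 - 2 = 3
```

The subtraction of Test Complexity means a cheap-to-test critical gap always sorts above an expensive-to-test one of equal risk.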

## Reporting Guidelines

- Always show current coverage alongside the target
- Group gaps by module, sorted by priority
- For each gap: file, line range, description, suggested test
- Estimate coverage improvement if the top-N gaps are addressed
- Never recommend deleting code to improve coverage
88
plugins/saas-test-pilot/skills/fixture-management.md
Normal file
@@ -0,0 +1,88 @@
---
description: Fixture organization, factories, shared test data, and conftest patterns
---

# Fixture Management Skill

## Overview

Patterns for organizing test fixtures, factories, and shared test data. Well-structured fixtures reduce test maintenance and improve readability.

## Python Fixtures (pytest)

### conftest.py Hierarchy

```
tests/
  conftest.py           # Shared across all tests (db connection, auth)
  unit/
    conftest.py         # Unit-specific fixtures (mocked services)
  integration/
    conftest.py         # Integration-specific (real db, test server)
```

Fixtures in a parent conftest.py are available to all child directories. Keep fixtures at the narrowest scope possible.

### Fixture Scopes

| Scope | Lifetime | Use For |
|-------|----------|---------|
| `function` | Each test | Default. Mutable data, unique state |
| `class` | Each test class | Shared setup within a class |
| `module` | Each test file | Expensive setup shared across a file |
| `session` | Entire test run | Database connection, compiled assets |

### Factory Pattern (factory_boy)

Use factories for complex model creation:

- Define a factory per model with sensible defaults
- Override only what the specific test needs
- Use `SubFactory` for relationships
- Use `LazyAttribute` for computed fields
- Use `Sequence` for unique values
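The same idea can be sketched without the factory_boy dependency. The field names below are made up for illustration; `itertools.count` plays the role of `Sequence`:

```python
import itertools

# Plain-Python sketch of the factory pattern: sensible defaults,
# per-test overrides, and a Sequence-style unique counter.
_user_ids = itertools.count(1)

def user_factory(**overrides) -> dict:
    user = {
        "id": next(_user_ids),        # like factory.Sequence: unique per call
        "name": "Test User",
        "email": "user@example.com",
        "is_admin": False,
    }
    user.update(overrides)            # override only what the test needs
    return user

admin = user_factory(is_admin=True)   # defaults plus a single override
```

Each call yields a fresh dict with a unique `id`, so tests never share mutable state.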

## JavaScript Fixtures

### Factory Functions

```js
function createUser(overrides = {}) {
  return {
    id: generateId(),
    name: "Test User",
    email: "test@example.com",
    ...overrides
  };
}
```

### Shared Test Data

- Place in `__tests__/fixtures/` or `test/fixtures/`
- Export factory functions, not static objects (avoid mutation between tests)
- Use a builder pattern for complex objects with many optional fields

## Database Fixtures

### Seeding Strategies

| Strategy | Speed | Isolation | Complexity |
|----------|-------|-----------|------------|
| Transaction rollback | Fast | Good | Medium |
| Truncate + re-seed | Medium | Perfect | Low |
| Separate test database | Fast | Perfect | High |
| In-memory database | Fastest | Perfect | Medium |
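Two rows of the table combine nicely: an in-memory SQLite database with rollback between tests. A minimal sketch, assuming the schema and table names are illustrative:

```python
import sqlite3

# Seed once, commit the baseline, then roll each "test" back to it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('seed-user')")
conn.commit()                      # baseline: the committed seed data

def run_in_rollback(test_fn):
    """Run test_fn against the shared connection, then discard its writes."""
    try:
        test_fn(conn)
    finally:
        conn.rollback()            # back to the committed seed

run_in_rollback(lambda c: c.execute("INSERT INTO users (name) VALUES ('temp')"))
count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]   # seed row only
```

The rollback undoes the test's insert without re-seeding, which is why the strategy is fast but only gives "Good" isolation: tests share one connection.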

### API Response Fixtures

- Store in `tests/fixtures/responses/` as JSON files
- Name by endpoint and scenario: `get_user_200.json`, `get_user_404.json`
- Update fixtures when API contracts change
- Use fixture-loading helpers to avoid hardcoded paths

## Anti-Patterns

- Global mutable fixtures shared between tests
- Fixtures that depend on other fixtures in unpredictable order
- Overly specific fixtures that break when models change
- Fixtures with magic values whose meaning is unclear
56
plugins/saas-test-pilot/skills/framework-detection.md
Normal file
@@ -0,0 +1,56 @@
---
description: Detect test frameworks, locate config files, and identify test runner
---

# Framework Detection Skill

## Overview

Detect the test framework and runner used by the current project based on configuration files, dependencies, and directory structure.

## Detection Matrix

### Python

| Indicator | Framework | Confidence |
|-----------|-----------|------------|
| `pytest.ini` | pytest | HIGH |
| `pyproject.toml` with `[tool.pytest.ini_options]` | pytest | HIGH |
| `setup.cfg` with `[tool:pytest]` | pytest | HIGH |
| `conftest.py` in project root or tests/ | pytest | HIGH |
| `tests/test_*.py` with `import unittest` | unittest | MEDIUM |
| `tox.ini` with pytest commands | pytest | MEDIUM |

### JavaScript / TypeScript

| Indicator | Framework | Confidence |
|-----------|-----------|------------|
| `jest.config.js` or `jest.config.ts` | Jest | HIGH |
| `package.json` with `"jest"` config | Jest | HIGH |
| `vitest.config.ts` or `vitest.config.js` | Vitest | HIGH |
| `.mocharc.yml` or `.mocharc.json` | Mocha | HIGH |
| `karma.conf.js` | Karma | MEDIUM |

### E2E Frameworks

| Indicator | Framework | Confidence |
|-----------|-----------|------------|
| `playwright.config.ts` | Playwright | HIGH |
| `cypress.config.js` or `cypress.config.ts` | Cypress | HIGH |
| `cypress/` directory | Cypress | MEDIUM |

## Config File Locations

Search order for each framework:

1. Project root
2. `tests/` or `test/` directory
3. Inside `pyproject.toml`, `package.json`, or `setup.cfg` (inline config)
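The Python rows of the matrix reduce to a few filesystem checks. A hedged sketch (the function name is ours, and only the highest-confidence indicators are covered):

```python
import tempfile
from pathlib import Path

def detect_python_framework(root):
    """Return 'pytest' if a high-confidence indicator is present, else None."""
    path = Path(root)
    if (path / "pytest.ini").exists():
        return "pytest"
    pyproject = path / "pyproject.toml"
    # "[tool.pytest" also matches the full "[tool.pytest.ini_options]" header
    if pyproject.exists() and "[tool.pytest" in pyproject.read_text():
        return "pytest"
    if (path / "conftest.py").exists() or (path / "tests" / "conftest.py").exists():
        return "pytest"
    return None

# Demo against a throwaway project directory containing only pytest.ini
with tempfile.TemporaryDirectory() as project:
    Path(project, "pytest.ini").touch()
    detected = detect_python_framework(project)
```

Checking indicators in confidence order and stopping at the first hit keeps the detector cheap and its answer explainable.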

## Output

When detection completes, report:

- Detected framework name and version (from lock file or dependency list)
- Config file path
- Test directory path
- Coverage tool if configured (pytest-cov, istanbul, c8)
- CI integration if found (.github/workflows, .gitlab-ci.yml, Jenkinsfile)
83
plugins/saas-test-pilot/skills/mock-patterns.md
Normal file
@@ -0,0 +1,83 @@
---
description: Mocking, stubbing, and dependency injection strategies for tests
---

# Mock Patterns Skill

## Overview

Mocking strategies and best practices for isolating code under test from external dependencies.

## When to Mock

| Situation | Mock? | Reason |
|-----------|-------|--------|
| External API calls | Yes | Unreliable, slow, costs money |
| Database queries | Depends | Mock for unit, real for integration |
| File system | Depends | Mock for unit, tmpdir for integration |
| Time/date functions | Yes | Deterministic tests |
| Random/UUID generation | Yes | Reproducible tests |
| Pure utility functions | No | Fast, deterministic, no side effects |
| Internal business logic | No | Test the real thing |

## Python Mocking

### unittest.mock / pytest-mock

```python
from unittest.mock import MagicMock, patch

patch("module.path.to.dependency")      # Replaces at the import location
patch.object(MyClass, "method")         # Replaces on a specific class
MagicMock(return_value=expected)        # Creates a callable mock
MagicMock(side_effect=Exception("e"))   # Raises on call
```

**Critical rule:** Patch where the dependency is USED, not where it is DEFINED.

- If `views.py` imports `from services import send_email`, patch `views.send_email`, NOT `services.send_email`.

### pytest-mock (preferred)

Use the `mocker` fixture for cleaner syntax:

- `mocker.patch("module.function")` — auto-cleanup after the test
- `mocker.spy(obj, "method")` — record calls without replacing them

## JavaScript Mocking

### Jest

```js
jest.mock("./module")              // Auto-mock the entire module
jest.spyOn(object, "method")       // Spy without replacing
jest.fn().mockReturnValue(value)   // Create a mock function
```

### Vitest

```js
vi.mock("./module")                // Same API as Jest
vi.spyOn(object, "method")
vi.fn().mockReturnValue(value)
```

## Mock vs Stub vs Spy

| Type | Behavior | Use When |
|------|----------|----------|
| **Mock** | Replace entirely, return fake data | Isolating from an external service |
| **Stub** | Provide canned responses | Controlling specific return values |
| **Spy** | Record calls, keep real behavior | Verifying interactions without changing behavior |
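The stub and spy rows can be shown with `unittest.mock` alone. The `PriceService` class is a made-up example for illustration:

```python
from unittest.mock import MagicMock

class PriceService:
    def lookup(self, sku):
        return {"A1": 100}.get(sku, 0)

svc = PriceService()

# Stub: canned response, no real behavior behind it.
stub = MagicMock()
stub.lookup.return_value = 100

# Spy: wraps= keeps the real behavior while recording calls.
spy = MagicMock(wraps=svc)
price = spy.lookup("A1")                 # real lookup logic runs
spy.lookup.assert_called_once_with("A1") # and the call was recorded
```

`wraps=` is the stdlib's closest analog to `mocker.spy` / `jest.spyOn`: interactions are verifiable without changing what the code returns.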

## Dependency Injection Patterns

Prefer DI over mocking when possible:

- Constructor injection: pass dependencies as constructor args
- Function parameters: accept collaborators as arguments with defaults
- Context managers: swap implementations via context

DI makes tests simpler and avoids brittle mock paths.
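Constructor injection in miniature, using a clock dependency (the `TokenIssuer` class is illustrative): the test passes a fixed clock instead of patching `datetime`, so there is no mock path to get wrong.

```python
from datetime import datetime, timezone

class TokenIssuer:
    def __init__(self, clock=lambda: datetime.now(timezone.utc)):
        self._clock = clock            # the dependency is injected, not imported

    def issue(self, user: str) -> dict:
        return {"user": user, "issued_at": self._clock().isoformat()}

# In a test: inject a deterministic clock instead of patching datetime.
fixed_clock = lambda: datetime(2024, 1, 1, tzinfo=timezone.utc)
token = TokenIssuer(clock=fixed_clock).issue("alice")
```

Production code uses the default real clock; the test never touches `unittest.mock`.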

## Anti-Patterns

- Mocking too deep (mock chains: `mock.return_value.method.return_value`)
- Asserting on mock call counts instead of outcomes
- Mocking the system under test
- Not resetting mocks between tests (use autouse fixtures or afterEach)
83
plugins/saas-test-pilot/skills/test-patterns.md
Normal file
@@ -0,0 +1,83 @@
---
description: Test design patterns for unit, integration, and e2e tests
---

# Test Patterns Skill

## Overview

Standard test design patterns organized by test type. Use these as templates when generating tests.

## Unit Test Patterns

### Arrange-Act-Assert (AAA)

The default pattern for unit tests:

```
Arrange: Set up test data and dependencies
Act:     Call the function under test
Assert:  Verify the result matches expectations
```

- Keep Arrange minimal — only what this specific test needs
- Act should be a single function call
- Assert one logical concept (multiple assertions allowed if same concept)
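A concrete AAA example, with a made-up function under test:

```python
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_reduces_price():
    # Arrange: only the data this test needs
    price, percent = 200.0, 25.0
    # Act: a single call to the function under test
    result = apply_discount(price, percent)
    # Assert: one logical concept (the discounted price)
    assert result == 150.0

test_apply_discount_reduces_price()
```

Each section stays one idea long, so a failure points directly at the behavior that broke.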

### Parameterized Tests

Use when testing the same logic with different inputs:

- pytest: `@pytest.mark.parametrize("input,expected", [...])`
- Jest: `test.each([...])("description %s", (input, expected) => {...})`

Best for: validation functions, parsers, formatters, math operations.
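Where pytest is unavailable, the stdlib offers the same table-driven shape via `unittest.subTest` (the `is_palindrome` function is an illustrative example):

```python
import unittest

def is_palindrome(text: str) -> bool:
    return text == text[::-1]

class TestIsPalindrome(unittest.TestCase):
    def test_cases(self):
        # One test method, per-case failure reporting via subTest
        for text, expected in [("level", True), ("abc", False), ("", True)]:
            with self.subTest(text=text):
                self.assertEqual(is_palindrome(text), expected)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestIsPalindrome)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Unlike a plain loop of asserts, a failing case is reported with its parameters and the remaining cases still run.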

### Exception Testing

Verify error conditions explicitly:

- pytest: `with pytest.raises(ValueError, match="expected message")`
- Jest: `expect(() => fn()).toThrow("expected message")`

Always assert the exception type AND the message content.

## Integration Test Patterns

### Setup/Teardown

Use fixtures or beforeEach/afterEach for:

- Database connections and seeded data
- Temporary files and directories
- Mock server instances
- Environment variable overrides

### Transaction Rollback

For database integration tests, wrap each test in a transaction that rolls back:

- Ensures test isolation without slow re-seeding
- pytest: `@pytest.fixture(autouse=True)` with a session-scoped DB and a function-scoped transaction

## E2E Test Patterns

### Page Object Model

Encapsulate page interactions in reusable classes:

- One class per page or significant component
- Methods return page objects for chaining
- Selectors defined as class properties
- No assertions inside page objects
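A toy sketch of those four rules against a fake driver (no real browser; the page, selectors, and driver API are all illustrative assumptions):

```python
class FakeDriver:
    """Stand-in for a browser driver; records interactions."""
    def __init__(self):
        self.events = []
    def fill(self, selector, value):
        self.events.append(("fill", selector, value))
    def click(self, selector):
        self.events.append(("click", selector))

class DashboardPage:
    def __init__(self, driver):
        self.driver = driver

class LoginPage:
    # Selectors as class properties
    USERNAME_FIELD = "#username"
    PASSWORD_FIELD = "#password"
    SUBMIT_BUTTON = "#submit"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.fill(self.USERNAME_FIELD, user)
        self.driver.fill(self.PASSWORD_FIELD, password)
        self.driver.click(self.SUBMIT_BUTTON)
        return DashboardPage(self.driver)   # return the next page for chaining

driver = FakeDriver()
dashboard = LoginPage(driver).login("alice", "secret")  # no assertions in here
```

Assertions live in the test that calls `login`, not in the page object, so the same class serves both happy-path and failure-path tests.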

### User Flow Pattern

Structure E2E tests as user stories:

1. Setup — authenticate, navigate to the starting point
2. Action — perform the user's workflow steps
3. Verification — check the final state
4. Cleanup — reset any created data

## Anti-Patterns to Avoid

- Testing implementation details instead of behavior
- Mocking the thing you are testing
- Tests that depend on execution order
- Assertions on exact error messages from third-party libraries
- Sleeping instead of waiting for conditions
27
plugins/saas-test-pilot/skills/visual-header.md
Normal file
@@ -0,0 +1,27 @@
# Visual Header Skill

Standard visual header for saas-test-pilot commands.

## Header Template

```
+----------------------------------------------------------------------+
| TEST-PILOT - [Context]                                               |
+----------------------------------------------------------------------+
```

## Context Values by Command

| Command | Context |
|---------|---------|
| `/test setup` | Setup |
| `/test generate` | Generate Tests |
| `/test coverage` | Coverage Analysis |
| `/test fixtures` | Fixtures |
| `/test e2e` | E2E Tests |
| `/test run` | Run Tests |
| Agent mode | Test Automation |

## Usage

Display the header at the start of every command response before proceeding with the operation.