feat(marketplace): command consolidation + 8 new plugins (v8.1.0 → v9.0.0) [BREAKING]

Phase 1b: Rename all ~94 commands across 12 plugins to the /<noun> <action>
sub-command pattern. Git-flow consolidated from 8→5 commands (commit
variants absorbed into --push/--merge/--sync flags). Dispatch files,
name: frontmatter, and cross-references updated for all plugins.

Phase 2: Design documents for 8 new plugins in docs/designs/.

Phase 3: Scaffold 8 new plugins — saas-api-platform, saas-db-migrate,
saas-react-platform, saas-test-pilot, data-seed, ops-release-manager,
ops-deploy-pipeline, debug-mcp. Each with plugin.json, commands, agents,
skills, README, and claude-md-integration. Marketplace grows from 12→20.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 14:52:11 -05:00
parent 5098422858
commit 2d51df7a42
321 changed files with 13582 additions and 1019 deletions

View File

@@ -0,0 +1,26 @@
{
"name": "saas-test-pilot",
"version": "1.0.0",
"description": "Test automation toolkit for unit, integration, and end-to-end testing",
"author": {
"name": "Leo Miranda",
"email": "leobmiranda@gmail.com"
},
"homepage": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace/src/branch/main/plugins/saas-test-pilot/README.md",
"repository": "https://gitea.hotserv.cloud/personal-projects/leo-claude-mktplace.git",
"license": "MIT",
"keywords": [
"testing",
"unit-tests",
"integration-tests",
"e2e",
"coverage",
"fixtures",
"mocks",
"test-generation"
],
"commands": [
"./commands/"
],
"domain": "saas"
}

View File

@@ -0,0 +1,58 @@
# saas-test-pilot
Test automation toolkit for unit, integration, and end-to-end testing.
## Overview
saas-test-pilot provides intelligent test generation, coverage analysis, fixture management, and E2E scenario creation. It detects your project's test framework automatically and generates tests following best practices for pytest, Jest, Vitest, Playwright, and Cypress.
## Commands
| Command | Description |
|---------|-------------|
| `/test setup` | Detect framework, configure test runner, initialize test structure |
| `/test generate` | Generate test cases for functions, classes, or modules |
| `/test coverage` | Analyze coverage and identify untested paths by risk |
| `/test fixtures` | Generate or manage test fixtures, factories, and mocks |
| `/test e2e` | Generate end-to-end test scenarios with page objects |
| `/test run` | Run tests with formatted output and failure analysis |
## Agents
| Agent | Model | Mode | Role |
|-------|-------|------|------|
| test-architect | sonnet | acceptEdits | Test generation, fixtures, E2E design |
| coverage-analyst | haiku | plan (read-only) | Coverage analysis and gap detection |
## Skills
| Skill | Purpose |
|-------|---------|
| framework-detection | Auto-detect pytest/Jest/Vitest/Playwright and config files |
| test-patterns | AAA, BDD, page object model, and other test design patterns |
| mock-patterns | Mocking strategies: mock vs stub vs spy, DI patterns |
| coverage-analysis | Gap detection, risk scoring, prioritization |
| fixture-management | conftest.py patterns, factory_boy, shared fixtures |
| visual-header | Consistent command output headers |
## Supported Frameworks
### Unit / Integration
- **Python:** pytest, unittest
- **JavaScript/TypeScript:** Jest, Vitest, Mocha
### End-to-End
- **Playwright** (recommended)
- **Cypress**
### Coverage
- **Python:** pytest-cov (coverage.py)
- **JavaScript:** istanbul/nyc, c8, vitest built-in
## Installation
This plugin is part of the Leo Claude Marketplace. It is installed automatically when the marketplace is configured.
## License
MIT

View File

@@ -0,0 +1,60 @@
---
name: coverage-analyst
description: Read-only test coverage analysis and gap detection
model: haiku
permissionMode: plan
disallowedTools: Write, Edit, MultiEdit
---
# Coverage Analyst Agent
You are a test coverage specialist focused on identifying untested code paths and prioritizing test gaps by risk.
## Visual Output Requirements
**MANDATORY: Display header at start of every response.**
```
+----------------------------------------------------------------------+
| TEST-PILOT - Coverage Analysis |
+----------------------------------------------------------------------+
```
## Core Principles
1. **Coverage is a metric, not a goal** — 100% coverage does not mean correct code. Focus on meaningful coverage of critical paths.
2. **Risk-based prioritization** — Not all uncovered code is equally important. Auth, payments, and data persistence gaps matter more than formatting helpers.
3. **Branch coverage over line coverage** — Line coverage hides untested conditional branches. Always report branch coverage when available.
4. **Actionable recommendations** — Every gap reported must include a concrete suggestion for what test to write.
## Analysis Approach
When analyzing coverage:
1. **Parse coverage data** — Read `.coverage`, `coverage.xml`, `lcov.info`, or equivalent reports. Extract per-file and per-function metrics.
2. **Identify gap categories:**
- Uncovered error handlers (catch/except blocks)
- Untested conditional branches
- Dead code (unreachable paths)
- Missing integration test coverage
- Untested configuration variations
3. **Risk-score each gap:**
- **Critical (5):** Authentication, authorization, data mutation, payment processing
- **High (4):** API endpoints, input validation, data transformation
- **Medium (3):** Business logic, workflow transitions
- **Low (2):** Logging, formatting, display helpers
- **Informational (1):** Comments, documentation generation
4. **Report with context** — Show the uncovered code, explain why it matters, and suggest the test to write.
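A minimal sketch of step 1, assuming a Cobertura-style `coverage.xml` such as pytest-cov's `--cov-report=xml` output (other report formats need their own parsers):
```
from xml.etree import ElementTree


def parse_cobertura(path="coverage.xml"):
    """Extract per-file line/branch coverage and uncovered line numbers."""
    root = ElementTree.parse(path).getroot()
    files = {}
    for cls in root.iter("class"):  # one <class> element per source file
        files[cls.get("filename")] = {
            "line_pct": float(cls.get("line-rate", 0)) * 100,
            "branch_pct": float(cls.get("branch-rate", 0)) * 100,
            # Lines with hits="0" are the candidates to classify into gap categories.
            "uncovered": [
                int(line.get("number"))
                for line in cls.iter("line")
                if line.get("hits") == "0"
            ],
        }
    return files
```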
## Output Style
- Present findings as a prioritized table
- Include file paths and line numbers
- Quantify the coverage impact of suggested tests
- Never suggest deleting code just to improve coverage numbers

View File

@@ -0,0 +1,62 @@
---
name: test-architect
description: Test generation, fixture creation, and e2e scenario design
model: sonnet
permissionMode: acceptEdits
---
# Test Architect Agent
You are a senior test engineer specializing in test design, generation, and automation across Python and JavaScript/TypeScript ecosystems.
## Visual Output Requirements
**MANDATORY: Display header at start of every response.**
```
+----------------------------------------------------------------------+
| TEST-PILOT - [Command Context] |
+----------------------------------------------------------------------+
```
## Core Principles
1. **Tests are documentation** — Every test should clearly communicate what behavior it verifies and why that behavior matters.
2. **Isolation first** — Tests must not depend on execution order, shared mutable state, or external services unless explicitly testing integration.
3. **Realistic data** — Use representative data that exercises real code paths. Avoid trivial values like "test" or "foo" that miss edge cases.
4. **One assertion per concept** — Each test should verify a single logical behavior. Multiple assertions are fine when they validate the same concept.
## Expertise
- **Python:** pytest, unittest, pytest-mock, factory_boy, hypothesis, pytest-asyncio
- **JavaScript/TypeScript:** Jest, Vitest, Testing Library, Playwright, Cypress
- **Patterns:** Arrange-Act-Assert, Given-When-Then, Page Object Model, Test Data Builder
- **Coverage:** Branch coverage analysis, mutation testing concepts, risk-based prioritization
## Test Generation Approach
When generating tests:
1. **Read the source code thoroughly** — Understand all branches, error paths, and edge cases before writing any test.
2. **Map the dependency graph** — Identify what needs mocking vs what can be tested directly. Prefer real implementations when feasible.
3. **Start with the happy path** — Establish the baseline behavior before testing error conditions.
4. **Cover boundaries systematically:**
- Empty/null/undefined inputs
- Type boundaries (int max, string length limits)
- Collection boundaries (empty, single, many)
- Temporal boundaries (expired, concurrent, sequential)
5. **Name tests descriptively:** prefer `test_login_with_expired_token_returns_401` over `test_login_3`.
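A short pytest sketch of points 4 and 5; `parse_limit` is a hypothetical helper defined inline so the example stands alone:
```
import pytest


def parse_limit(value, maximum=1000):
    """Hypothetical helper standing in for real code under test."""
    if value is None or not str(value).isdigit():
        raise ValueError(f"invalid limit: {value!r}")
    return min(int(value), maximum)


def test_parse_limit_returns_int_for_valid_string():
    assert parse_limit("25") == 25


@pytest.mark.parametrize("value", ["", None, "abc", "-1"])
def test_parse_limit_rejects_empty_none_and_non_numeric_input(value):
    with pytest.raises(ValueError):
        parse_limit(value)


def test_parse_limit_above_cap_clamps_to_maximum():
    # Boundary case: values past the cap clamp rather than error.
    assert parse_limit("10000") == 1000
```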
## Output Style
- Show generated code with clear comments
- Explain non-obvious mock choices
- Note any assumptions about the code under test
- Flag areas where manual review is recommended

View File

@@ -0,0 +1,36 @@
# Test Pilot Integration
Add to your project's CLAUDE.md:
## Test Automation
This project uses saas-test-pilot for test generation and coverage analysis.
### Commands
- `/test setup` - Detect framework and configure test environment
- `/test generate <target>` - Generate tests for a function, class, or module
- `/test coverage` - Analyze coverage gaps prioritized by risk
- `/test fixtures generate <model>` - Create fixtures and factories
- `/test e2e <feature>` - Generate E2E test scenarios
- `/test run` - Execute tests with formatted output
### Supported Frameworks
- Python: pytest, unittest
- JavaScript/TypeScript: Jest, Vitest
- E2E: Playwright, Cypress
### Test Organization
Tests follow the standard structure:
```
tests/
  conftest.py        # Shared fixtures
  unit/              # Unit tests (fast, isolated)
  integration/       # Integration tests (database, APIs)
  e2e/               # End-to-end tests (browser, full stack)
  fixtures/          # Shared test data and response mocks
```
### Coverage Targets
- coverage-analyst provides risk-based gap analysis
- Focus on branch coverage, not just line coverage
- Critical modules (auth, payments) require higher thresholds

View File

@@ -0,0 +1,83 @@
---
name: test coverage
description: Analyze test coverage, identify untested paths, and prioritize gaps by risk
---
# /test coverage
Analyze test coverage and identify gaps prioritized by risk.
## Visual Output
```
+----------------------------------------------------------------------+
| TEST-PILOT - Coverage Analysis |
+----------------------------------------------------------------------+
```
## Usage
```
/test coverage [<target>] [--threshold=80] [--format=summary|detailed]
```
**Target:** File, directory, or module to analyze (defaults to entire project)
**Threshold:** Minimum acceptable coverage percentage
**Format:** Output detail level
## Skills to Load
- skills/coverage-analysis.md
## Process
1. **Discover Coverage Data**
- Look for existing coverage reports: `.coverage`, `coverage.xml`, `lcov.info`, `coverage/`
- If no report exists, attempt to run coverage: `pytest --cov`, `npx vitest run --coverage`
- Parse coverage data into structured format
2. **Analyze Gaps**
- Identify uncovered lines, branches, and functions
- Classify gaps by type:
- Error handling paths (catch/except blocks)
- Conditional branches (if/else, switch/case)
- Edge case logic (boundary checks, null guards)
- Integration points (API calls, database queries)
3. **Risk Assessment**
- Score each gap by:
- Complexity of uncovered code (cyclomatic complexity)
- Criticality of the module (auth, payments, data persistence)
- Frequency of changes (git log analysis)
- Proximity to user input (trust boundary distance)
4. **Generate Report**
- Overall coverage metrics
- Per-file breakdown
- Prioritized gap list with risk scores
- Suggested test cases for top gaps
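A minimal sketch of the fallback in step 1 for a pytest project (the flags are pytest-cov's; other runners need their own branch):
```
import subprocess
from pathlib import Path


def ensure_coverage_report(target="."):
    """Run pytest with coverage if no XML report exists; return the report path."""
    report = Path("coverage.xml")
    if not report.exists():
        # --cov-report=xml writes a Cobertura-style coverage.xml in the project root;
        # check=False because a failing test run can still produce a usable report.
        subprocess.run(
            ["pytest", target, "--cov", "--cov-branch", "--cov-report=xml", "-q"],
            check=False,
        )
    if not report.exists():
        raise FileNotFoundError("coverage run produced no coverage.xml")
    return report
```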
## Output Format
```
## Coverage Report
### Overall: 74% lines | 61% branches
### Files Below Threshold (80%)
| File | Lines | Branches | Risk |
|------|-------|----------|------|
| src/auth/login.py | 52% | 38% | HIGH |
| src/api/handlers.py | 67% | 55% | MEDIUM |
### Top 5 Coverage Gaps (by risk)
1. **src/auth/login.py:45-62** — OAuth error handling
Risk: HIGH | Uncovered: 18 lines | Suggestion: test invalid token flow
2. **src/api/handlers.py:89-104** — Rate limit branch
Risk: MEDIUM | Uncovered: 16 lines | Suggestion: test 429 response
### Recommendations
- Focus on auth module — highest risk, lowest coverage
- Add branch coverage to CI threshold
- 12 new test cases would bring coverage to 85%
```

View File

@@ -0,0 +1,86 @@
---
name: test e2e
description: Generate end-to-end test scenarios with page object models and user flows
---
# /test e2e
Generate end-to-end test scenarios for web applications or API workflows.
## Visual Output
```
+----------------------------------------------------------------------+
| TEST-PILOT - E2E Tests |
+----------------------------------------------------------------------+
```
## Usage
```
/test e2e <target> [--framework=playwright|cypress] [--flow=<user-flow>]
```
**Target:** Application area, URL path, or feature name
**Framework:** E2E framework (auto-detected if not specified)
**Flow:** Specific user flow to test (e.g., "login", "checkout", "signup")
## Skills to Load
- skills/test-patterns.md
## Process
1. **Analyze Application**
- Detect E2E framework from config files
- Identify routes/pages from router configuration
- Map user-facing features and critical paths
- Detect authentication requirements
2. **Design Test Scenarios**
- Map user journeys (happy path first)
- Identify critical business flows:
- Authentication (login, logout, password reset)
- Data creation (forms, uploads, submissions)
- Navigation (routing, deep links, breadcrumbs)
- Error states (404, network failures, validation)
- Define preconditions and test data needs
3. **Generate Page Objects**
- Create page object classes for each page/component
- Encapsulate selectors and interactions
- Keep assertions in test files, not page objects
- Use data-testid attributes where possible
4. **Write Test Files**
- One test file per user flow or feature area
- Include setup (authentication, test data) and teardown (cleanup)
- Use descriptive test names that read as user stories
- Add retry logic for flaky network operations
- Include screenshot capture on failure
5. **Verify**
- Check selectors reference valid elements
- Confirm test data setup is complete
- Validate timeout values are reasonable
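A minimal page-object sketch using Playwright's Python API; the selectors, routes, and credentials are placeholders, and the `page` fixture comes from the pytest-playwright plugin:
```
import re

from playwright.sync_api import Page, expect


class LoginPage:
    """Selectors and interactions live here; assertions stay in the tests."""

    def __init__(self, page: Page):
        self.page = page
        self.email = page.get_by_test_id("login-email")
        self.password = page.get_by_test_id("login-password")
        self.submit = page.get_by_test_id("login-submit")

    def goto(self):
        self.page.goto("/login")  # relative path resolved against baseURL config
        return self

    def login(self, email: str, password: str):
        self.email.fill(email)
        self.password.fill(password)
        self.submit.click()
        return self


def test_successful_login_redirects_to_dashboard(page: Page):
    LoginPage(page).goto().login("seeded.user@example.com", "correct-horse")
    expect(page).to_have_url(re.compile(r"/dashboard$"))
```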
## Output Format
```
## E2E Tests: Login Flow
### Page Objects Created
- pages/LoginPage.ts — login form interactions
- pages/DashboardPage.ts — post-login verification
### Test Scenarios (5)
1. test_successful_login_redirects_to_dashboard
2. test_invalid_credentials_shows_error
3. test_empty_form_shows_validation
4. test_remember_me_persists_session
5. test_locked_account_shows_message
### Test Data Requirements
- Valid user credentials (use test seed)
- Locked account fixture
```

View File

@@ -0,0 +1,87 @@
---
name: test fixtures
description: Generate or manage test fixtures, factories, and mock data
---
# /test fixtures
Generate and organize test fixtures, factories, and mock data.
## Visual Output
```
+----------------------------------------------------------------------+
| TEST-PILOT - Fixtures |
+----------------------------------------------------------------------+
```
## Usage
```
/test fixtures <action> [<target>]
```
**Actions:**
- `generate <model/schema>` — Create fixture/factory for a data model
- `list` — Show existing fixtures and their usage
- `audit` — Find unused or duplicate fixtures
- `organize` — Restructure fixtures into standard layout
## Skills to Load
- skills/fixture-management.md
- skills/mock-patterns.md
## Process
### Generate
1. **Analyze Target Model**
- Read model/schema definition (ORM model, Pydantic, TypeScript interface)
- Map field types, constraints, and relationships
- Identify required vs optional fields
2. **Create Fixture**
- Python: generate conftest.py fixture or factory_boy factory
- JavaScript: generate factory function or test helper
- Include realistic sample data (not just "test123")
- Handle relationships (foreign keys, nested objects)
- Create variants (minimal, full, edge-case)
3. **Place Fixture**
- Follow project conventions for fixture location
- Add to appropriate conftest.py or fixtures directory
- Import from shared location, not duplicated per test
### List
1. Scan test directories for fixture definitions
2. Map each fixture to its consumers (which tests use it)
3. Display fixture tree with usage counts
### Audit
1. Find fixtures with zero consumers
2. Detect duplicate/near-duplicate fixtures
3. Identify fixtures with hardcoded data that should be parameterized
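A conftest.py sketch of the kind of output this command aims for; the `User` dataclass stands in for the project's real model:
```
# tests/conftest.py (sketch)
from dataclasses import dataclass

import pytest


@dataclass
class User:  # stand-in for the project's real models.User
    email: str
    name: str
    role: str = "viewer"
    is_admin: bool = False


@pytest.fixture
def user_factory():
    """Standard user with realistic defaults; override per test as needed."""
    def make(**overrides):
        defaults = {"email": "dana.reyes@example.com", "name": "Dana Reyes"}
        return User(**{**defaults, **overrides})
    return make


@pytest.fixture
def admin_factory(user_factory):
    """User with is_admin=True, built on top of the standard factory."""
    def make(**overrides):
        return user_factory(role="admin", is_admin=True, **overrides)
    return make


@pytest.fixture
def minimal_user(user_factory):
    """Only required fields; optional attributes keep their defaults."""
    return user_factory()
```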
## Output Format
```
## Fixture: UserFactory
### Generated for: models.User
### Location: tests/conftest.py
### Variants
- user_factory() — standard user with defaults
- admin_factory() — user with is_admin=True
- minimal_user() — only required fields
### Fields
| Field | Type | Default | Notes |
|-------|------|---------|-------|
| email | str | faker.email() | unique |
| name | str | faker.name() | — |
| role | enum | "viewer" | — |
```

View File

@@ -0,0 +1,84 @@
---
name: test generate
description: Generate test cases for functions, classes, or modules with appropriate patterns
---
# /test generate
Generate comprehensive test cases for specified code targets.
## Visual Output
```
+----------------------------------------------------------------------+
| TEST-PILOT - Generate Tests |
+----------------------------------------------------------------------+
```
## Usage
```
/test generate <target> [--type=unit|integration] [--style=aaa|bdd]
```
**Target:** File path, class name, function name, or module path
**Type:** Test type — defaults to unit
**Style:** Test style — defaults to arrange-act-assert (aaa)
## Skills to Load
- skills/test-patterns.md
- skills/mock-patterns.md
- skills/framework-detection.md
## Process
1. **Analyze Target**
- Read the target source code
- Identify public functions, methods, and classes
- Map input types, return types, and exceptions
- Detect dependencies that need mocking
2. **Determine Test Strategy**
- Pure functions: direct input/output tests
- Functions with side effects: mock external calls
- Class methods: test through public interface
- Integration points: setup/teardown with real or fake dependencies
3. **Generate Test Cases**
- Happy path: standard inputs produce expected outputs
- Edge cases: empty inputs, None/null, boundary values
- Error paths: invalid inputs, exceptions, error conditions
- Type variations: different valid types if applicable
4. **Write Test File**
- Follow project conventions for test file location
- Use detected framework syntax (pytest/Jest/Vitest)
- Include docstrings explaining each test case
- Group related tests in classes or describe blocks
5. **Verify**
- Check test file compiles/parses
- Verify imports are correct
- Confirm mock targets match actual module paths
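A sketch of what a generated file might look like for a hypothetical `module.get_user(user_id)` that reads from `module.database.connection`; the names are illustrative, so the file shows structure, naming, and mock placement rather than runnable tests:
```
from unittest import mock

import pytest

import module  # hypothetical package containing get_user and a database client


class TestGetUser:
    def test_get_user_returns_record_for_valid_id(self):
        """Happy path: an existing id returns the stored record."""
        with mock.patch("module.database.connection") as conn:
            conn.fetch_one.return_value = {"id": 7, "name": "Ada"}
            assert module.get_user(7)["name"] == "Ada"

    def test_get_user_returns_none_when_missing(self):
        """Edge case: an unknown id yields None rather than raising."""
        with mock.patch("module.database.connection") as conn:
            conn.fetch_one.return_value = None
            assert module.get_user(999) is None

    def test_get_user_raises_on_negative_id(self):
        """Error path: invalid ids are rejected before any query is made."""
        with pytest.raises(ValueError):
            module.get_user(-1)
```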
## Output Format
```
## Generated Tests for `module.function_name`
### Test File: tests/unit/test_module.py
### Test Cases (7 total)
1. test_function_returns_expected_for_valid_input
2. test_function_handles_empty_input
3. test_function_raises_on_invalid_type
4. test_function_boundary_values
5. test_function_none_input
6. test_function_large_input
7. test_function_concurrent_calls (if applicable)
### Dependencies Mocked
- database.connection (unittest.mock.patch)
- external_api.client (fixture)
```

View File

@@ -0,0 +1,90 @@
---
name: test run
description: Run tests with formatted output, filtering, and failure analysis
---
# /test run
Execute tests with structured output and intelligent failure analysis.
## Visual Output
```
+----------------------------------------------------------------------+
| TEST-PILOT - Run Tests |
+----------------------------------------------------------------------+
```
## Usage
```
/test run [<target>] [--type=unit|integration|e2e|all] [--verbose] [--failfast]
```
**Target:** File, directory, test name pattern, or marker/tag
**Type:** Test category to run (defaults to unit)
**Verbose:** Show full output including passing tests
**Failfast:** Stop on first failure
## Skills to Load
- skills/framework-detection.md
## Process
1. **Detect Test Runner**
- Identify framework from project configuration
- Build appropriate command:
- pytest: `pytest <target> -v --tb=short`
- Jest: `npx jest <target> --verbose`
- Vitest: `npx vitest run <target>`
- Apply type filter if specified (markers, tags, directories)
2. **Execute Tests**
- Run the test command
- Capture stdout, stderr, and exit code
- Parse test results into structured data
3. **Format Results**
- Group by status: passed, failed, skipped, errors
- Show failure details with:
- Test name and location
- Assertion message
- Relevant code snippet
- Suggested fix if pattern is recognizable
4. **Analyze Failures**
- Common patterns:
- Import errors: missing dependency or wrong path
- Assertion errors: expected vs actual mismatch
- Timeout errors: slow operation or missing mock
- Setup errors: missing fixture or database state
- Suggest corrective action for each failure type
5. **Summary**
- Total/passed/failed/skipped counts
- Duration
- Coverage delta if coverage is enabled
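A minimal sketch of steps 1 and 2 for pytest, leaning on its JUnit XML output instead of parsing free-form text (the report path is arbitrary):
```
import subprocess
from xml.etree import ElementTree


def run_pytest(target="tests", report="test-results.xml"):
    """Run pytest and summarize results from its JUnit XML report."""
    proc = subprocess.run(
        ["pytest", target, "-v", "--tb=short", f"--junitxml={report}"],
        capture_output=True,
        text=True,
    )
    root = ElementTree.parse(report).getroot()
    suite = root if root.tag == "testsuite" else root.find("testsuite")
    return {
        "total": int(suite.get("tests", 0)),
        "failed": int(suite.get("failures", 0)) + int(suite.get("errors", 0)),
        "skipped": int(suite.get("skipped", 0)),
        "duration_s": float(suite.get("time", 0)),
        "exit_code": proc.returncode,
        "stdout": proc.stdout,  # keep raw output for failure analysis in step 4
    }
```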
## Output Format
```
## Test Results
### Summary: 45 passed, 2 failed, 1 skipped (12.3s)
### Failures
1. FAIL test_user_login_with_expired_token (tests/test_auth.py:67)
AssertionError: Expected 401, got 200
Cause: Token expiry check not applied before validation
Fix: Verify token_service.is_expired() is called in login handler
2. FAIL test_export_csv_large_dataset (tests/test_export.py:134)
TimeoutError: Operation timed out after 30s
Cause: No pagination in export query
Fix: Add batch processing or mock the database call
### Skipped
- test_redis_cache_eviction — requires Redis (marker: @needs_redis)
```

View File

@@ -0,0 +1,70 @@
---
name: test setup
description: Detect test framework, configure test runner, and initialize test structure
---
# /test setup
Setup wizard for test automation. Detects existing frameworks or helps choose one.
## Visual Output
```
+----------------------------------------------------------------------+
| TEST-PILOT - Setup |
+----------------------------------------------------------------------+
```
## Skills to Load
- skills/framework-detection.md
## Process
1. **Project Detection**
- Scan for existing test directories (`tests/`, `test/`, `__tests__/`, `spec/`)
- Detect language from file extensions and config files
- Identify existing test framework configuration
2. **Framework Detection**
- Python: check for pytest.ini, setup.cfg [tool:pytest], pyproject.toml [tool.pytest.ini_options], conftest.py, unittest patterns
- JavaScript/TypeScript: check for jest.config.js/ts, vitest.config.ts, .mocharc.yml, karma.conf.js
- E2E: check for playwright.config.ts, cypress.config.js, selenium configs
3. **Configuration Review**
- Show detected framework and version
- Show test directory structure
- Show coverage configuration if present
- Show CI/CD test integration if found
4. **Recommendations**
- If no framework detected: recommend based on language and project type
- If framework found but no coverage: suggest coverage setup
- If no test directory structure: propose standard layout
- If missing conftest/setup files: offer to create them
## Output Format
```
## Test Environment
### Detected Framework
- Language: Python 3.x
- Framework: pytest 8.x
- Config: pyproject.toml [tool.pytest.ini_options]
### Test Structure
tests/
  conftest.py
  unit/
  integration/
### Coverage
- Tool: pytest-cov
- Current: 72% line coverage
### Recommendations
- [ ] Add conftest.py fixtures for database connection
- [ ] Configure pytest-xdist for parallel execution
- [ ] Add coverage threshold to CI pipeline
```

View File

@@ -0,0 +1,18 @@
---
description: Test automation — generate tests, analyze coverage, manage fixtures
---
# /test
Test automation toolkit for unit, integration, and end-to-end testing.
## Sub-commands
| Sub-command | Description |
|-------------|-------------|
| `/test setup` | Setup wizard — detect framework, configure test runner |
| `/test generate` | Generate test cases for functions, classes, or modules |
| `/test coverage` | Analyze coverage and identify untested paths |
| `/test fixtures` | Generate or manage test fixtures and mocks |
| `/test e2e` | Generate end-to-end test scenarios |
| `/test run` | Run tests with formatted output |

View File

@@ -0,0 +1,63 @@
---
description: Coverage gap detection, risk scoring, and prioritization
---
# Coverage Analysis Skill
## Overview
Systematic approach to identifying, scoring, and prioritizing test coverage gaps. Coverage data is a tool for finding untested behavior, not a target to maximize blindly.
## Coverage Types
| Type | Measures | Tool Support |
|------|----------|-------------|
| **Line** | Which lines executed | All tools |
| **Branch** | Which conditional paths taken | pytest-cov, istanbul, c8 |
| **Function** | Which functions called | istanbul, c8 |
| **Statement** | Which statements executed | istanbul |
Branch coverage is the minimum useful metric. Line coverage alone hides untested else-branches and short-circuit evaluations.
## Gap Classification
### By Code Pattern
| Pattern | Risk Level | Testing Guidance |
|---------|------------|----------|
| Exception handlers (catch/except) | HIGH | Test both the trigger and the handling |
| Auth/permission checks | CRITICAL | Must test both allowed and denied |
| Input validation branches | HIGH | Test valid, invalid, and boundary |
| Default/fallback cases | MEDIUM | Often untested but triggered in production |
| Configuration variations | MEDIUM | Test with different config values |
| Logging/metrics code | LOW | Usually not worth dedicated tests |
### By Module Criticality
Score modules 1-5 based on:
- **Data integrity** — Does it write to database/files? (+2)
- **Security boundary** — Does it handle auth/authz? (+2)
- **User-facing** — Does failure affect users directly? (+1)
- **Frequency of change** — Changed often in git log? (+1)
- **Dependency count** — Many callers depend on it? (+1)
## Prioritization Formula
```
Priority = (Module Criticality * 2) + (Gap Risk Level) - (Test Complexity)
```
Where Test Complexity:
- 1: Simple unit test, no mocks needed
- 2: Requires basic mocking
- 3: Requires complex setup (database, fixtures)
- 4: Requires infrastructure (message queue, external service)
- 5: Requires E2E or manual testing
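The same formula as a small sketch, with scores drawn from the scales above:
```
def gap_priority(module_criticality, gap_risk, test_complexity):
    """Priority = (Module Criticality * 2) + Gap Risk Level - Test Complexity."""
    return module_criticality * 2 + gap_risk - test_complexity


# An auth error handler (criticality 5, risk 5, basic mocking) outranks
# a display-helper gap (criticality 2, risk 2, trivial test).
assert gap_priority(5, 5, 2) == 13
assert gap_priority(2, 2, 1) == 5
```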
## Reporting Guidelines
- Always show current coverage alongside target
- Group gaps by module, sorted by priority
- For each gap: file, line range, description, suggested test
- Estimate coverage improvement if top-N gaps are addressed
- Never recommend deleting code to improve coverage

View File

@@ -0,0 +1,88 @@
---
description: Fixture organization, factories, shared test data, and conftest patterns
---
# Fixture Management Skill
## Overview
Patterns for organizing test fixtures, factories, and shared test data. Well-structured fixtures reduce test maintenance and improve readability.
## Python Fixtures (pytest)
### conftest.py Hierarchy
```
tests/
  conftest.py          # Shared across all tests (db connection, auth)
  unit/
    conftest.py        # Unit-specific fixtures (mocked services)
  integration/
    conftest.py        # Integration-specific (real db, test server)
```
Fixtures in parent conftest.py are available to all child directories. Keep fixtures at the narrowest scope possible.
### Fixture Scopes
| Scope | Lifetime | Use For |
|-------|----------|---------|
| `function` | Each test | Default. Mutable data, unique state |
| `class` | Each test class | Shared setup within a class |
| `module` | Each test file | Expensive setup shared across file |
| `session` | Entire test run | Database connection, compiled assets |
### Factory Pattern (factory_boy)
Use factories for complex model creation:
- Define a factory per model with sensible defaults
- Override only what the specific test needs
- Use `SubFactory` for relationships
- Use `LazyAttribute` for computed fields
- Use `Sequence` for unique values
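A small sketch of those conventions; the dataclasses stand in for real ORM models so the example stays self-contained:
```
from dataclasses import dataclass

import factory


@dataclass
class Author:  # stand-ins for real ORM models
    name: str
    email: str


@dataclass
class Book:
    title: str
    author: Author
    slug: str = ""


class AuthorFactory(factory.Factory):
    class Meta:
        model = Author

    name = factory.Faker("name")
    email = factory.Sequence(lambda n: f"author{n}@example.com")  # unique per build


class BookFactory(factory.Factory):
    class Meta:
        model = Book

    title = factory.Faker("sentence", nb_words=3)
    author = factory.SubFactory(AuthorFactory)  # relationship via SubFactory
    slug = factory.LazyAttribute(lambda b: b.title.lower().rstrip(".").replace(" ", "-"))
```
A test then overrides only what it needs, e.g. `BookFactory(title="Specific Title")`.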
## JavaScript Fixtures
### Factory Functions
```
function createUser(overrides = {}) {
  return {
    id: generateId(),        // project-specific unique-id helper (assumed to exist)
    name: "Test User",
    email: "test@example.com",
    ...overrides
  };
}
```
### Shared Test Data
- Place in `__tests__/fixtures/` or `test/fixtures/`
- Export factory functions, not static objects (avoid mutation between tests)
- Use builder pattern for complex objects with many optional fields
## Database Fixtures
### Seeding Strategies
| Strategy | Speed | Isolation | Complexity |
|----------|-------|-----------|------------|
| Transaction rollback | Fast | Good | Medium |
| Truncate + re-seed | Medium | Perfect | Low |
| Separate test database | Fast | Perfect | High |
| In-memory database | Fastest | Perfect | Medium |
### API Response Fixtures
- Store in `tests/fixtures/responses/` as JSON files
- Name by endpoint and scenario: `get_user_200.json`, `get_user_404.json`
- Update fixtures when API contracts change
- Use fixture loading helpers to avoid hardcoded paths
## Anti-Patterns
- Global mutable fixtures shared between tests
- Fixtures that depend on other fixtures in unpredictable order
- Overly specific fixtures that break when models change
- Fixtures with magic values whose meaning is unclear

View File

@@ -0,0 +1,56 @@
---
description: Detect test frameworks, locate config files, and identify test runner
---
# Framework Detection Skill
## Overview
Detect the test framework and runner used by the current project based on configuration files, dependencies, and directory structure.
## Detection Matrix
### Python
| Indicator | Framework | Confidence |
|-----------|-----------|------------|
| `pytest.ini` | pytest | HIGH |
| `pyproject.toml` with `[tool.pytest.ini_options]` | pytest | HIGH |
| `setup.cfg` with `[tool:pytest]` | pytest | HIGH |
| `conftest.py` in project root or tests/ | pytest | HIGH |
| `tests/test_*.py` with `import unittest` | unittest | MEDIUM |
| `tox.ini` with pytest commands | pytest | MEDIUM |
### JavaScript / TypeScript
| Indicator | Framework | Confidence |
|-----------|-----------|------------|
| `jest.config.js` or `jest.config.ts` | Jest | HIGH |
| `package.json` with `"jest"` config | Jest | HIGH |
| `vitest.config.ts` or `vitest.config.js` | Vitest | HIGH |
| `.mocharc.yml` or `.mocharc.json` | Mocha | HIGH |
| `karma.conf.js` | Karma | MEDIUM |
### E2E Frameworks
| Indicator | Framework | Confidence |
|-----------|-----------|------------|
| `playwright.config.ts` | Playwright | HIGH |
| `cypress.config.js` or `cypress.config.ts` | Cypress | HIGH |
| `cypress/` directory | Cypress | MEDIUM |
## Config File Locations
Search order for each framework:
1. Project root
2. `tests/` or `test/` directory
3. Inside `pyproject.toml`, `package.json`, or `setup.cfg` (inline config)
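A minimal sketch of the Python column of this matrix, checked roughly in the confidence order above (paths and return labels are informal):
```
from pathlib import Path


def detect_python_framework(root="."):
    """Return (framework, indicator_path) for the strongest match, else (None, None)."""
    root = Path(root)
    for path in (root / "pytest.ini", root / "conftest.py", root / "tests" / "conftest.py"):
        if path.exists():
            return "pytest", str(path)
    pyproject = root / "pyproject.toml"
    if pyproject.exists() and "[tool.pytest" in pyproject.read_text(encoding="utf-8"):
        return "pytest", str(pyproject)
    setup_cfg = root / "setup.cfg"
    if setup_cfg.exists() and "[tool:pytest]" in setup_cfg.read_text(encoding="utf-8"):
        return "pytest", str(setup_cfg)
    # unittest has no config file; fall back to scanning test modules for the import.
    for test_file in root.glob("tests/test_*.py"):
        if "import unittest" in test_file.read_text(encoding="utf-8"):
            return "unittest", str(test_file)
    return None, None
```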
## Output
When detection completes, report:
- Detected framework name and version (from lock file or dependency list)
- Config file path
- Test directory path
- Coverage tool if configured (pytest-cov, istanbul, c8)
- CI integration if found (.github/workflows, .gitlab-ci.yml, Jenkinsfile)

View File

@@ -0,0 +1,83 @@
---
description: Mocking, stubbing, and dependency injection strategies for tests
---
# Mock Patterns Skill
## Overview
Mocking strategies and best practices for isolating code under test from external dependencies.
## When to Mock
| Situation | Mock? | Reason |
|-----------|-------|--------|
| External API calls | Yes | Unreliable, slow, costs money |
| Database queries | Depends | Mock for unit, real for integration |
| File system | Depends | Mock for unit, tmpdir for integration |
| Time/date functions | Yes | Deterministic tests |
| Random/UUID generation | Yes | Reproducible tests |
| Pure utility functions | No | Fast, deterministic, no side effects |
| Internal business logic | No | Test the real thing |
## Python Mocking
### unittest.mock / pytest-mock
```
patch("module.path.to.dependency") # Replaces at import location
patch.object(MyClass, "method") # Replaces on specific class
MagicMock(return_value=expected) # Creates callable mock
MagicMock(side_effect=Exception("e")) # Raises on call
```
**Critical rule:** Patch where the dependency is USED, not where it is DEFINED.
- If `views.py` imports `from services import send_email`, patch `views.send_email`, NOT `services.send_email`.
### pytest-mock (preferred)
Use the `mocker` fixture for cleaner syntax:
- `mocker.patch("module.function")` — auto-cleanup after test
- `mocker.spy(obj, "method")` — record calls without replacing
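A short sketch of the rule above using pytest-mock; `views.signup` and its email call are illustrative:
```
import views  # hypothetical module that does `from services import send_email`


def test_signup_sends_welcome_email(mocker):
    send_email = mocker.patch("views.send_email")  # correct: patch where it is used
    # mocker.patch("services.send_email")          # wrong: views keeps its own reference

    views.signup("new.user@example.com")
    send_email.assert_called_once_with("new.user@example.com", template="welcome")
```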
## JavaScript Mocking
### Jest
```
jest.mock("./module") // Auto-mock entire module
jest.spyOn(object, "method") // Spy without replacing
jest.fn().mockReturnValue(value) // Create mock function
```
### Vitest
```
vi.mock("./module") // Same API as Jest
vi.spyOn(object, "method")
vi.fn().mockReturnValue(value)
```
## Mock vs Stub vs Spy
| Type | Behavior | Use When |
|------|----------|----------|
| **Mock** | Replace entirely, return fake data | Isolating from external service |
| **Stub** | Provide canned responses | Controlling specific return values |
| **Spy** | Record calls, keep real behavior | Verifying interactions without changing behavior |
## Dependency Injection Patterns
Prefer DI over mocking when possible:
- Constructor injection: pass dependencies as constructor args
- Function parameters: accept collaborators as arguments with defaults
- Context managers: swap implementations via context
DI makes tests simpler and avoids brittle mock paths.
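A small sketch of constructor injection, where the test swaps in a deterministic fake without any patching (the classes are illustrative):
```
import datetime


class InvoiceService:
    def __init__(self, clock=datetime.datetime.utcnow):
        # Constructor injection: the time source is a parameter with a sane default.
        self._clock = clock

    def is_overdue(self, due_date):
        return self._clock() > due_date


def test_invoice_is_overdue_when_past_due_date():
    def frozen_clock():
        return datetime.datetime(2026, 1, 1)  # deterministic fake, no patching

    service = InvoiceService(clock=frozen_clock)
    assert service.is_overdue(datetime.datetime(2025, 12, 31))
```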
## Anti-Patterns
- Mocking too deep (mock chains: `mock.return_value.method.return_value`)
- Asserting on mock call counts instead of outcomes
- Mocking the system under test
- Not resetting mocks between tests (use autouse fixtures or afterEach)

View File

@@ -0,0 +1,83 @@
---
description: Test design patterns for unit, integration, and e2e tests
---
# Test Patterns Skill
## Overview
Standard test design patterns organized by test type. Use these as templates when generating tests.
## Unit Test Patterns
### Arrange-Act-Assert (AAA)
The default pattern for unit tests:
```
Arrange: Set up test data and dependencies
Act: Call the function under test
Assert: Verify the result matches expectations
```
- Keep Arrange minimal — only what this specific test needs
- Act should be a single function call
- Assert one logical concept (multiple assertions allowed if same concept)
### Parameterized Tests
Use when testing the same logic with different inputs:
- pytest: `@pytest.mark.parametrize("input,expected", [...])`
- Jest: `test.each([...])("description %s", (input, expected) => {...})`
Best for: validation functions, parsers, formatters, math operations.
### Exception Testing
Verify error conditions explicitly:
- pytest: `with pytest.raises(ValueError, match="expected message")`
- Jest: `expect(() => fn()).toThrow("expected message")`
Always assert the exception type AND message content.
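Both patterns in one short pytest sketch; `parse_port` is a stand-in validator defined inline so the tests run as-is:
```
import pytest


def parse_port(value):
    """Hypothetical validator standing in for real code under test."""
    port = int(value)
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port


@pytest.mark.parametrize("value,expected", [("80", 80), ("443", 443), ("65535", 65535)])
def test_parse_port_accepts_valid_values(value, expected):
    assert parse_port(value) == expected


@pytest.mark.parametrize("value", ["0", "70000", "-5"])
def test_parse_port_rejects_out_of_range_values(value):
    # Assert the exception type and the message content, not just "something raised".
    with pytest.raises(ValueError, match="out of range"):
        parse_port(value)
```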
## Integration Test Patterns
### Setup/Teardown
Use fixtures or beforeEach/afterEach for:
- Database connections and seeded data
- Temporary files and directories
- Mock server instances
- Environment variable overrides
### Transaction Rollback
For database integration tests, wrap each test in a transaction that rolls back:
- Ensures test isolation without slow re-seeding
- pytest: `@pytest.fixture(autouse=True)` with session-scoped DB and function-scoped transaction
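A minimal sketch of that fixture pair with SQLAlchemy (the engine URL is an assumed test database; adapt to the project's ORM):
```
import pytest
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker


@pytest.fixture(scope="session")
def engine():
    return create_engine("postgresql://localhost/test_db")  # assumed test database


@pytest.fixture(autouse=True)
def db_session(engine):
    """Each test runs inside a transaction that is rolled back afterwards."""
    connection = engine.connect()
    transaction = connection.begin()
    session = sessionmaker(bind=connection)()
    yield session
    session.close()
    transaction.rollback()
    connection.close()
```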
## E2E Test Patterns
### Page Object Model
Encapsulate page interactions in reusable classes:
- One class per page or significant component
- Methods return page objects for chaining
- Selectors defined as class properties
- No assertions inside page objects
### User Flow Pattern
Structure E2E tests as user stories:
1. Setup — authenticate, navigate to starting point
2. Action — perform the user's workflow steps
3. Verification — check the final state
4. Cleanup — reset any created data
## Anti-Patterns to Avoid
- Testing implementation details instead of behavior
- Mocking the thing you are testing
- Tests that depend on execution order
- Assertions on exact error messages from third-party libraries
- Sleeping instead of waiting for conditions

View File

@@ -0,0 +1,27 @@
# Visual Header Skill
Standard visual header for saas-test-pilot commands.
## Header Template
```
+----------------------------------------------------------------------+
| TEST-PILOT - [Context] |
+----------------------------------------------------------------------+
```
## Context Values by Command
| Command | Context |
|---------|---------|
| `/test setup` | Setup |
| `/test generate` | Generate Tests |
| `/test coverage` | Coverage Analysis |
| `/test fixtures` | Fixtures |
| `/test e2e` | E2E Tests |
| `/test run` | Run Tests |
| Agent mode | Test Automation |
## Usage
Display header at the start of every command response before proceeding with the operation.