# /dbt-test - Run dbt Tests

Execute dbt tests with formatted pass/fail results.

## Usage

```
/dbt-test [selection] [--warn-only]
```
## Workflow

1. **Pre-validation (MANDATORY):**
   - Use `dbt_parse` to validate the project first
   - If validation fails, show errors and STOP
2. **Execute tests:**
   - Use `dbt_test` with the provided selection
   - Capture all test results
3. **Format results:**
   - Group by test type (schema vs. data)
   - Show pass/fail status with counts
   - Display failure details
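For illustration, the capture-and-summarize part of this workflow can be sketched as post-processing of dbt's `run_results.json` artifact. The `results[].status` field is part of dbt's documented artifact schema; the default path and the summary layout here are assumptions modeled on the report format below:

```python
import json
from collections import Counter


def summarize_run_results(path: str = "target/run_results.json") -> Counter:
    """Tally test statuses ("pass", "fail", "warn", "error", "skipped")
    from dbt's run_results.json artifact."""
    with open(path) as f:
        results = json.load(f)["results"]
    return Counter(r["status"] for r in results)


def format_summary(counts: Counter) -> str:
    """Render a PASS/FAIL/WARN/SKIP summary block with percentages."""
    total = sum(counts.values())
    lines = [f"Total: {total} tests"]
    labels = {"pass": "PASS", "fail": "FAIL", "warn": "WARN", "skipped": "SKIP"}
    for status, label in labels.items():
        n = counts.get(status, 0)
        pct = round(100 * n / total) if total else 0
        lines.append(f"{label}: {n} ({pct}%)")
    return "\n".join(lines)
```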
## Report Format

```
=== dbt Test Results ===
Project: my_project
Selection: tag:critical

--- Summary ---
Total: 24 tests
PASS: 22 (92%)
FAIL: 1 (4%)
WARN: 1 (4%)
SKIP: 0 (0%)

--- Schema Tests (18) ---
[PASS] unique_dim_customers_customer_id
[PASS] not_null_dim_customers_customer_id
[PASS] not_null_dim_customers_email
[PASS] accepted_values_dim_customers_status
[FAIL] relationships_fct_orders_customer_id

--- Data Tests (6) ---
[PASS] assert_positive_order_amounts
[PASS] assert_valid_dates
[WARN] assert_recent_orders (threshold: 7 days)

--- Failure Details ---
Test: relationships_fct_orders_customer_id
Type: schema (relationships)
Model: fct_orders
Message: 15 records failed referential integrity check
Query: SELECT * FROM fct_orders WHERE customer_id NOT IN (SELECT customer_id FROM dim_customers)

--- Warning Details ---
Test: assert_recent_orders
Type: data
Message: No orders in last 7 days (expected for dev environment)
Severity: warn
```
## Selection Syntax

| Pattern | Meaning |
|---|---|
| (none) | Run all tests |
| `model_name` | Tests for specific model |
| `+model_name` | Tests for model and upstream |
| `tag:critical` | Tests with tag |
| `test_type:schema` | Only schema tests |
| `test_type:data` | Only data tests |
## Options

| Flag | Description |
|---|---|
| `--warn-only` | Treat failures as warnings (don't fail CI) |
## Examples

```
/dbt-test                  # Run all tests
/dbt-test dim_customers    # Tests for specific model
/dbt-test tag:critical     # Run critical tests only
/dbt-test +fct_orders      # Test model and its upstream
```
## Test Types

### Schema Tests

Built-in tests defined in `schema.yml`:

- `unique` - No duplicate values
- `not_null` - No null values
- `accepted_values` - Value in allowed list
- `relationships` - Foreign key integrity
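As a minimal sketch, these built-ins are declared per column in `schema.yml`; the model and column names below match the sample report, while the accepted value list is an assumption:

```yaml
version: 2

models:
  - name: dim_customers
    columns:
      - name: customer_id
        tests:
          - unique
          - not_null
      - name: status
        tests:
          - accepted_values:
              values: ['active', 'inactive']  # assumed value list
  - name: fct_orders
    columns:
      - name: customer_id
        tests:
          - relationships:
              to: ref('dim_customers')
              field: customer_id
```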
### Data Tests

Custom SQL tests in the `tests/` directory:

- Return rows that fail the assertion
- Zero rows = pass, any rows = fail
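For instance, the `assert_positive_order_amounts` test from the sample report could live in `tests/assert_positive_order_amounts.sql`; the `order_amount` column name is an assumption:

```sql
-- Returns the rows that violate the assertion; zero rows means the test passes
select *
from {{ ref('fct_orders') }}
where order_amount <= 0
```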
## Exit Codes

| Code | Meaning |
|---|---|
| 0 | All tests passed |
| 1 | One or more tests failed |
| 2 | dbt error (parse failure, etc.) |
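A sketch of how these exit codes and the `--warn-only` flag could interact. The status names follow dbt's run_results schema; the mapping itself is an assumption based on the table above:

```python
def exit_code(status_counts: dict, warn_only: bool = False) -> int:
    """Map dbt test outcomes to the documented exit codes.

    With warn_only, failures are downgraded to warnings so CI does not fail.
    """
    if status_counts.get("error", 0) > 0:
        return 2  # dbt error (parse failure, runtime error, etc.)
    if status_counts.get("fail", 0) > 0 and not warn_only:
        return 1  # one or more tests failed
    return 0  # all tests passed (or failures downgraded by --warn-only)
```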
## Available Tools

Use these MCP tools:

- `dbt_parse` - Pre-validation (ALWAYS RUN FIRST)
- `dbt_test` - Execute tests (REQUIRED)
- `dbt_build` - Alternative: run + test together