feat(marketplace): command consolidation + 8 new plugins (v8.1.0 → v9.0.0) [BREAKING]

Phase 1b: Rename all ~94 commands across 12 plugins to the /<noun> <action>
sub-command pattern. Git-flow consolidated from 8→5 commands (commit
variants absorbed into --push/--merge/--sync flags). Dispatch files,
name: frontmatter, and cross-references updated for all plugins.

Phase 2: Design documents for 8 new plugins in docs/designs/.

Phase 3: Scaffold 8 new plugins — saas-api-platform, saas-db-migrate,
saas-react-platform, saas-test-pilot, data-seed, ops-release-manager,
ops-deploy-pipeline, debug-mcp. Each with plugin.json, commands, agents,
skills, README, and claude-md-integration. Marketplace grows from 12→20.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 14:52:11 -05:00
parent 5098422858
commit 2d51df7a42
321 changed files with 13582 additions and 1019 deletions

@@ -0,0 +1,126 @@
# Caddy Conventions Skill
Caddyfile patterns for reverse proxy configuration in self-hosted environments.
## Subdomain Routing
Each service gets a subdomain of the server hostname:
```caddyfile
myapp.hotport {
reverse_proxy app:8080
}
```
For services listening on non-standard internal ports, proxy to the container's port:
```caddyfile
myapp.hotport {
reverse_proxy app:3000
}
```
## Reverse Proxy Directives
### Basic Reverse Proxy
```caddyfile
subdomain.hostname {
reverse_proxy container_name:port
}
```
### With Health Checks
```caddyfile
subdomain.hostname {
reverse_proxy container_name:port {
health_uri /health
health_interval 30s
health_timeout 10s
}
}
```
### Load Balancing (Multiple Instances)
```caddyfile
subdomain.hostname {
reverse_proxy app1:8080 app2:8080 {
lb_policy round_robin
}
}
```
## Security Headers
Apply to all sites:
```caddyfile
(security_headers) {
header {
X-Content-Type-Options nosniff
X-Frame-Options SAMEORIGIN
Referrer-Policy strict-origin-when-cross-origin
-Server
}
}
```
Import in site blocks: `import security_headers`
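For example, a site block that pulls in the snippet (reusing the `myapp.hotport` site from above):
```caddyfile
myapp.hotport {
    import security_headers
    reverse_proxy app:8080
}
```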
## Rate Limiting
For API endpoints. The `rate_limit` directive requires a Caddy build that includes the third-party `caddy-ratelimit` module; it is not part of the standard distribution:
```caddyfile
subdomain.hostname {
rate_limit {
zone api_zone {
key {remote_host}
events 100
window 1m
}
}
reverse_proxy app:8080
}
```
## Docker Network Integration
Caddy must be on the same Docker network as the target service to use container DNS names. The Caddy container needs:
```yaml
networks:
- caddy-network
- app-network # Join each app's network
```
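A fuller sketch of the Caddy service definition (image tag, volume paths, and network names are assumptions; the external network must match the name the app stack actually creates, so give that network an explicit `name:` in the app's compose file):
```yaml
services:
  caddy:
    image: caddy:2-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy-data:/data            # persists issued certificates
    networks:
      - caddy-network
      - app-network

networks:
  caddy-network:
    driver: bridge
  app-network:
    external: true                  # created by the app's own compose stack

volumes:
  caddy-data:
```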
## CORS Configuration
```caddyfile
subdomain.hostname {
header Access-Control-Allow-Origin "*"
header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS"
header Access-Control-Allow-Headers "Content-Type, Authorization"
@options method OPTIONS
respond @options 204
reverse_proxy app:8080
}
```
## Automatic HTTPS
- Caddy provides automatic HTTPS for public domains
- For local `.hotport` subdomains, use HTTP only (no publicly trusted certificate can be issued for a private TLD)
- For Tailscale access, consider `tls internal`, which issues certificates from Caddy's local CA (see the sketch below)
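A minimal sketch of a private-network site served with a locally issued certificate (the hostname is illustrative):
```caddyfile
myapp.hotport {
    tls internal
    reverse_proxy app:8080
}
```
Clients must trust Caddy's internal root CA, or they will see certificate warnings.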
## File Server (Static Assets)
```caddyfile
files.hotport {
root * /srv/files
file_server browse
}
```

@@ -0,0 +1,127 @@
# Docker Compose Patterns Skill
Best practices and patterns for Docker Compose service definitions targeting self-hosted infrastructure.
## Service Naming
- Use lowercase with hyphens: `my-service`
- Prefix with stack name for multi-project hosts: `myapp-db`, `myapp-redis`
- Container name should match service name: `container_name: myapp-db`
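A brief illustration of these conventions (names are placeholders):
```yaml
services:
  myapp-db:
    image: postgres:16-alpine
    container_name: myapp-db   # matches the service name, prefixed with the stack name
```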
## Network Isolation
Every stack should define its own bridge network:
```yaml
networks:
app-network:
driver: bridge
```
Services join the stack network. Only the reverse proxy entry point should be exposed to the host.
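For instance, an app service joins the stack network but publishes no host ports, leaving ingress to the reverse proxy (a sketch):
```yaml
services:
  app:
    image: myapp:1.0.0
    networks:
      - app-network
    # no `ports:` entry - reachable only from containers on app-network
```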
## Volume Management
- Use **named volumes** for data persistence (databases, uploads)
- Use **bind mounts** for configuration files only
- Set explicit permissions with `:ro` for read-only mounts
- Label volumes with `labels` for identification
```yaml
volumes:
db-data:
labels:
com.project: myapp
com.service: database
```
## Healthchecks
Every service MUST have a healthcheck:
```yaml
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
```
Common healthcheck patterns:
- HTTP: `curl -f http://localhost:<port>/health`
- PostgreSQL: `pg_isready -U <user>`
- Redis: `redis-cli ping`
- MySQL: `mysqladmin ping -h localhost`
## Restart Policies
| Environment | Policy |
|-------------|--------|
| Development | `restart: "no"` |
| Production | `restart: unless-stopped` |
| Critical services | `restart: always` |
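Applied in a service definition, e.g. for a production service:
```yaml
services:
  app:
    image: myapp:1.0.0
    restart: unless-stopped
```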
## Resource Limits
For Raspberry Pi (8GB RAM):
```yaml
deploy:
resources:
limits:
memory: 256M
cpus: '1.0'
reservations:
memory: 128M
```
## Dependency Ordering
Use healthcheck-aware dependencies:
```yaml
depends_on:
db:
condition: service_healthy
redis:
condition: service_started
```
## Environment Variables
Never inline secrets. Use `env_file`:
```yaml
env_file:
- .env
- .env.${DEPLOY_ENV:-development}
```
## Multi-Service Patterns
### Web App + Database + Cache
```yaml
services:
app:
image: myapp:1.0.0
env_file: [.env]
depends_on:
db: { condition: service_healthy }
redis: { condition: service_healthy }
networks: [app-network]
  db:
    image: postgres:16-alpine
    env_file: [.env]  # must supply POSTGRES_PASSWORD, or the postgres container will not start
    volumes: [db-data:/var/lib/postgresql/data]
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "postgres"]
    networks: [app-network]
  redis:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
    networks: [app-network]

networks:
  app-network:
    driver: bridge

volumes:
  db-data:
```

@@ -0,0 +1,92 @@
# Environment Management Skill
Patterns for managing environment variables across deployment stages.
## File Naming Convention
| File | Purpose | Git Tracked |
|------|---------|-------------|
| `.env.example` | Template with placeholder values | Yes |
| `.env` | Local development defaults | No |
| `.env.development` | Development-specific overrides | No |
| `.env.staging` | Staging environment values | No |
| `.env.production` | Production secrets and config | No |
## .env.example Format
Document every variable with comments:
```bash
# Application Settings
APP_NAME=myapp
APP_PORT=8080
APP_DEBUG=false
# Database Configuration
# PostgreSQL connection string
DATABASE_URL=postgresql://user:password@db:5432/myapp
DATABASE_POOL_SIZE=5
# Redis Configuration
REDIS_URL=redis://redis:6379/0
# External Services
# Generate at: https://example.com/api-keys
API_KEY=your-api-key-here
API_SECRET=your-secret-here
```
## Secret Handling Rules
1. **Never commit secrets** to version control
2. `.env.production` and `.env.staging` MUST be in `.gitignore` (see the snippet after this list)
3. Use placeholder values in `.env.example`: `your-api-key-here`, `changeme`, `<required>`
4. For shared team secrets, use a secrets manager or encrypted vault
5. Document where to obtain each secret in comments
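One common way to express rule 2 in `.gitignore` (a sketch that ignores every env file except the tracked template):
```gitignore
.env
.env.*
!.env.example
```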
## Docker Compose Integration
### Single env_file
```yaml
env_file:
- .env
```
### Multi-environment
```yaml
env_file:
- .env
- .env.${DEPLOY_ENV:-development}
```
### Variable Interpolation
Docker Compose supports `${VAR:-default}` syntax:
```yaml
services:
app:
image: myapp:${APP_VERSION:-latest}
ports:
- "${APP_PORT:-8080}:8080"
```
## Environment Diff Checking
When comparing environments, check for the following (a quick diff sketch follows the list):
1. **Missing variables** - Present in .env.example but absent in target
2. **Extra variables** - Present in target but not in .env.example (may be stale)
3. **Placeholder values** - Production still has `changeme` or `your-*-here`
4. **Identical secrets** - Same password used in dev and prod (security risk)
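A minimal sketch for the first two checks, comparing variable names only (file names are examples; substitute the target environment):
```bash
# Print variable names, ignoring comments and blank lines
vars() { grep -Ev '^[[:space:]]*(#|$)' "$1" | cut -d= -f1 | sort -u; }

# "<" lines are missing from production, ">" lines are extra and possibly stale
diff <(vars .env.example) <(vars .env.production)
```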
## Validation Checklist
- [ ] All docker-compose `${VAR}` references have corresponding entries
- [ ] No secrets in `.env.example`
- [ ] `.gitignore` excludes `.env.production` and `.env.staging`
- [ ] Production variables have real values (no placeholders)
- [ ] Database URLs point to correct hosts per environment
- [ ] Debug flags are `false` in production

@@ -0,0 +1,114 @@
# Health Checks Skill
Pre-deployment system health checks for self-hosted infrastructure.
## Disk Space Checks
```bash
# Check root filesystem
df -h / | awk 'NR==2 {print $5}'
# Check Docker data directory
df -h /var/lib/docker | awk 'NR==2 {print $5}'
# Docker-specific disk usage
docker system df
```
| Threshold | Status |
|-----------|--------|
| <70% used | OK |
| 70-85% used | Warning - consider pruning |
| >85% used | Critical - prune before deploying |
Pruning commands:
```bash
docker system prune -f # Remove stopped containers, unused networks
docker image prune -a -f # Remove unused images
docker volume prune -f # Remove unused volumes (CAUTION: data loss)
```
## Memory Checks
```bash
free -m | awk 'NR==2 {printf "Total: %sMB, Used: %sMB, Available: %sMB\n", $2, $3, $7}'
```
| Available Memory | Status |
|-----------------|--------|
| >512MB | OK |
| 256-512MB | Warning - may be tight |
| <256MB | Critical - deployment may OOM |
## Port Availability
```bash
# Check if a specific port is in use
ss -tlnp | grep :<PORT>
# List all listening ports
ss -tlnp
```
If a port is occupied:
1. Identify the process: `ss -tlnp | grep :<port>` shows PID
2. Check if it is the same service being updated (expected)
3. If it is a different service, flag as Critical conflict
## DNS Resolution
```bash
# Check if subdomain resolves
nslookup <subdomain>.<hostname>
# Check /etc/hosts for local resolution
grep <subdomain> /etc/hosts
```
For `.hotport` subdomains, DNS is resolved via the router's hosts file or the local `/etc/hosts`.
## Docker Daemon Status
```bash
# Check Docker is running
docker info > /dev/null 2>&1 && echo "OK" || echo "FAIL"
# Check Docker version
docker version --format '{{.Server.Version}}'
# Check Docker Compose
docker compose version
```
## Image Pull Verification
```bash
# Check if image exists for target architecture
docker manifest inspect <image>:<tag> 2>/dev/null
# Check available architectures
docker manifest inspect <image>:<tag> | grep architecture
```
For a Raspberry Pi, the required architecture is `arm64` (listed in manifests as `arm64` with variant `v8`).
## SSL Certificate Checks
```bash
# Check certificate expiry (for HTTPS services)
echo | openssl s_client -servername <domain> -connect <domain>:443 2>/dev/null | openssl x509 -noout -dates
```
## Temperature (Raspberry Pi)
```bash
vcgencmd measure_temp
# or
cat /sys/class/thermal/thermal_zone0/temp # Divide by 1000 for Celsius
```
| Temperature | Status |
|-------------|--------|
| <60°C | OK |
| 60-70°C | Warning - fan should be active |
| >70°C | Critical - may throttle |

@@ -0,0 +1,136 @@
# Rollback Patterns Skill
Strategies for reverting deployments safely with minimal data loss and downtime.
## Strategy: Recreate (Default)
Simple stop-and-restart with previous configuration.
### Steps
1. **Backup current state**
```bash
cp docker-compose.yml docker-compose.yml.bak
cp .env .env.bak
docker compose images > current-images.txt
```
2. **Backup volumes with data**
```bash
docker run --rm -v <volume_name>:/data -v $(pwd)/backups:/backup \
alpine tar czf /backup/<volume_name>-$(date +%Y%m%d%H%M).tar.gz /data
```
3. **Stop current deployment**
```bash
docker compose down
```
4. **Restore previous config**
```bash
git checkout <previous_commit> -- docker-compose.yml .env
# If .env is not tracked in git (see the environment management skill), restore the previous .env from a backup instead
```
5. **Start previous version**
```bash
docker compose pull
docker compose up -d
```
6. **Verify health**
```bash
docker compose ps
docker compose logs --tail=20
```
### Estimated Downtime
- Small stack (1-3 services): 10-30 seconds
- Medium stack (4-8 services): 30-60 seconds
- Large stack with DB: 1-3 minutes (depends on DB startup)
## Strategy: Blue-Green
Zero-downtime rollback by running both versions simultaneously.
### Prerequisites
- Available ports for the alternate deployment
- Reverse proxy that can switch upstream targets
- No port conflicts between blue and green instances
### Steps
1. **Start previous version on alternate ports**
- Modify docker-compose to use different host ports (an override-file sketch follows these steps)
- Start with `docker compose -p <stack>-green up -d`
2. **Verify previous version health**
- Hit health endpoints on alternate ports
- Confirm service functionality
3. **Switch reverse proxy**
- Update Caddyfile to point to green deployment
- Reload Caddy: `docker exec caddy caddy reload --config /etc/caddy/Caddyfile`
4. **Stop current (blue) version**
```bash
docker compose -p <stack>-blue down
```
5. **Rename green to primary**
- Restore original ports in docker-compose
- Recreate with standard project name
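A sketch of an override file for the green instance (the file name, host port, and image tag are assumptions):
```yaml
# docker-compose.green.yml - shift host ports so blue and green can coexist
services:
  app:
    image: myapp:1.0.0       # the known-good previous version
    ports:
      - "18080:8080"         # alternate host port for health checks and the proxy switch
```
Start it with `docker compose -p <stack>-green -f docker-compose.yml -f docker-compose.green.yml up -d`. Note that Compose concatenates `ports` entries from override files, so if the base file already publishes a host port, edit that entry directly instead.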
### Estimated Downtime
- Near zero: Only the Caddy reload (sub-second)
## Database Rollback Considerations
### Safe (Reversible)
- Data inserts only (can delete new rows)
- No schema changes
- Configuration changes in env vars
### Dangerous (May Cause Data Loss)
- Schema migrations that drop columns
- Data transformations (one-way)
- Index changes on large tables
### Mitigation
1. Always backup database volume before rollback
2. Check for migration files between versions (see the sketch below)
3. If schema changed, may need to restore from backup rather than rollback
4. Document migration reversibility in deploy notes
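A sketch for mitigation step 2, assuming migrations live in a `migrations/` directory and the rollback target is a git ref:
```bash
# List migration files added or changed between the rollback target and the current version
git diff --name-status <previous_commit>..HEAD -- migrations/
```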
## Volume Backup and Restore
### Backup
```bash
docker run --rm \
-v <volume>:/data:ro \
-v $(pwd)/backups:/backup \
alpine tar czf /backup/<volume>.tar.gz -C /data .
```
### Restore
Stop the stack first (`docker compose down`) so nothing writes to the volume while it is being overwritten:
```bash
docker run --rm \
-v <volume>:/data \
-v $(pwd)/backups:/backup \
alpine sh -c "rm -rf /data/* && tar xzf /backup/<volume>.tar.gz -C /data"
```
## Post-Rollback Verification
1. All containers running: `docker compose ps`
2. Health checks passing: `docker compose ps --format json | grep -c healthy`
3. Logs clean: `docker compose logs --tail=50 --no-color`
4. Application responding: `curl -s http://localhost:<port>/health`
5. Data integrity: Spot-check recent records in database
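The checks above can be bundled into a quick sweep (a sketch; the port and health path are placeholders):
```bash
#!/usr/bin/env bash
# Post-rollback verification sweep; APP_PORT and the /health path are placeholders.
set -u

docker compose ps                                           # 1. all containers running?
docker compose ps --format json | grep -c healthy           # 2. number reporting healthy
docker compose logs --tail=50 --no-color | grep -iE 'error|fatal' \
  || echo "logs clean"                                      # 3. recent errors, if any
curl -fsS "http://localhost:${APP_PORT:-8080}/health" \
  && echo "app responding" || echo "WARNING: health endpoint not responding"   # 4.
```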

@@ -0,0 +1,27 @@
# Visual Header Skill
Standard visual header for ops-deploy-pipeline commands.
## Header Template
```
+----------------------------------------------------------------------+
| DEPLOY-PIPELINE - [Context] |
+----------------------------------------------------------------------+
```
## Context Values by Command
| Command | Context |
|---------|---------|
| `/deploy setup` | Setup Wizard |
| `/deploy generate` | Config Generation |
| `/deploy validate` | Config Validation |
| `/deploy env` | Environment Management |
| `/deploy check` | Pre-Deployment Check |
| `/deploy rollback` | Rollback Planning |
| Agent mode | Deployment Management |
## Usage
Display the header at the start of every command response, before proceeding with the operation.
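For example, the header rendered for `/deploy check` (padded to the template width):
```
+----------------------------------------------------------------------+
| DEPLOY-PIPELINE - Pre-Deployment Check                                |
+----------------------------------------------------------------------+
```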