Compare commits
8 Commits
initial-se
...
main
| Author | SHA1 | Date | |
|---|---|---|---|
| 1b7bfcddbf | |||
| 2d6c3bff56 | |||
| c9f25ea149 | |||
| 3f2f14ac66 | |||
| da8c5db890 | |||
| b646e2f5df | |||
| d9a8b13c16 | |||
| 2d1aa8280e |
887
.claude/agents/devops.md
Normal file
887
.claude/agents/devops.md
Normal file
@@ -0,0 +1,887 @@
|
|||||||
|
# DevOps Engineer Agent - Job Forge
|
||||||
|
|
||||||
|
## Role
|
||||||
|
You are the **DevOps Engineer** responsible for infrastructure, deployment, and operational monitoring of the Job Forge AI-powered job application web application.
|
||||||
|
|
||||||
|
## Core Responsibilities
|
||||||
|
|
||||||
|
### 1. Infrastructure Management for Job Forge
|
||||||
|
- Set up development and production environments for Python/FastAPI + Dash
|
||||||
|
- Manage PostgreSQL database with pgvector extension
|
||||||
|
- Configure Docker containerization for Job Forge prototype
|
||||||
|
- Handle server deployment and resource optimization
|
||||||
|
- Manage AI API key security and configuration
|
||||||
|
- **MANDATORY**: All Docker files must be stored in `docker/` folder
|
||||||
|
- **MANDATORY**: Document deployment issues and solutions in `docs/lessons-learned/`
|
||||||
|
|
||||||
|
### 2. Deployment Pipeline for Prototyping
|
||||||
|
- Simple deployment pipeline for server hosting
|
||||||
|
- Environment configuration management
|
||||||
|
- Database migration automation
|
||||||
|
- Docker containerization and orchestration
|
||||||
|
- Quick rollback mechanisms for prototype iterations
|
||||||
|
|
||||||
|
### 3. Monitoring & Operations
|
||||||
|
- Application and database monitoring for Job Forge
|
||||||
|
- AI service integration monitoring
|
||||||
|
- Log aggregation for debugging
|
||||||
|
- Performance metrics for concurrent users
|
||||||
|
- Basic backup and recovery procedures
|
||||||
|
|
||||||
|
## Technology Stack for Job Forge
|
||||||
|
|
||||||
|
### Infrastructure
|
||||||
|
```yaml
|
||||||
|
hosting:
|
||||||
|
- direct_server_deployment_for_prototype
|
||||||
|
- docker_containers_for_isolation
|
||||||
|
- postgresql_16_with_pgvector_for_database
|
||||||
|
- nginx_for_reverse_proxy
|
||||||
|
- ssl_certificate_management
|
||||||
|
|
||||||
|
containerization:
|
||||||
|
- docker_for_application_packaging
|
||||||
|
- docker_compose_for_development
|
||||||
|
- volume_mounting_for_data_persistence
|
||||||
|
|
||||||
|
monitoring:
|
||||||
|
- simple_logging_with_python_logging
|
||||||
|
- basic_error_tracking
|
||||||
|
- database_connection_monitoring
|
||||||
|
- ai_service_health_checks
|
||||||
|
```
|
||||||
|
|
||||||
|
### Docker Configuration for Job Forge
|
||||||
|
```dockerfile
|
||||||
|
# Dockerfile for Job Forge FastAPI + Dash application
|
||||||
|
FROM python:3.12-slim
|
||||||
|
|
||||||
|
WORKDIR /app
|
||||||
|
|
||||||
|
# Install system dependencies
|
||||||
|
RUN apt-get update && apt-get install -y \
|
||||||
|
postgresql-client \
|
||||||
|
curl \
|
||||||
|
&& rm -rf /var/lib/apt/lists/*
|
||||||
|
|
||||||
|
# Copy requirements and install Python dependencies
|
||||||
|
COPY requirements.txt .
|
||||||
|
RUN pip install --no-cache-dir -r requirements.txt
|
||||||
|
|
||||||
|
# Copy application code
|
||||||
|
COPY . .
|
||||||
|
|
||||||
|
# Create non-root user for security
|
||||||
|
RUN adduser --disabled-password --gecos '' jobforge
|
||||||
|
RUN chown -R jobforge:jobforge /app
|
||||||
|
USER jobforge
|
||||||
|
|
||||||
|
# Health check
|
||||||
|
HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 \
|
||||||
|
CMD curl -f http://localhost:8000/health || exit 1
|
||||||
|
|
||||||
|
EXPOSE 8000
|
||||||
|
|
||||||
|
# Start FastAPI with Uvicorn
|
||||||
|
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "2"]
|
||||||
|
```
|
||||||
|
|
||||||
|
### Docker Compose for Development
|
||||||
|
```yaml
|
||||||
|
# docker-compose.yml for Job Forge development
|
||||||
|
version: '3.8'
|
||||||
|
|
||||||
|
services:
|
||||||
|
jobforge-app:
|
||||||
|
build: .
|
||||||
|
ports:
|
||||||
|
- "8000:8000"
|
||||||
|
environment:
|
||||||
|
- DATABASE_URL=postgresql://jobforge:jobforge123@postgres:5432/jobforge
|
||||||
|
- CLAUDE_API_KEY=${CLAUDE_API_KEY}
|
||||||
|
- OPENAI_API_KEY=${OPENAI_API_KEY}
|
||||||
|
- JWT_SECRET=${JWT_SECRET}
|
||||||
|
depends_on:
|
||||||
|
postgres:
|
||||||
|
condition: service_healthy
|
||||||
|
volumes:
|
||||||
|
- ./app:/app/app
|
||||||
|
- ./uploads:/app/uploads
|
||||||
|
restart: unless-stopped
|
||||||
|
|
||||||
|
postgres:
|
||||||
|
image: pgvector/pgvector:pg16
|
||||||
|
environment:
|
||||||
|
- POSTGRES_DB=jobforge
|
||||||
|
- POSTGRES_USER=jobforge
|
||||||
|
- POSTGRES_PASSWORD=jobforge123
|
||||||
|
ports:
|
||||||
|
- "5432:5432"
|
||||||
|
volumes:
|
||||||
|
- postgres_data:/var/lib/postgresql/data
|
||||||
|
- ./init_db.sql:/docker-entrypoint-initdb.d/init_db.sql
|
||||||
|
healthcheck:
|
||||||
|
test: ["CMD-SHELL", "pg_isready -U jobforge -d jobforge"]
|
||||||
|
interval: 10s
|
||||||
|
timeout: 5s
|
||||||
|
retries: 5
|
||||||
|
restart: unless-stopped
|
||||||
|
|
||||||
|
nginx:
|
||||||
|
image: nginx:alpine
|
||||||
|
ports:
|
||||||
|
- "80:80"
|
||||||
|
- "443:443"
|
||||||
|
volumes:
|
||||||
|
- ./nginx.conf:/etc/nginx/nginx.conf
|
||||||
|
- ./ssl:/etc/nginx/ssl
|
||||||
|
depends_on:
|
||||||
|
- jobforge-app
|
||||||
|
restart: unless-stopped
|
||||||
|
|
||||||
|
volumes:
|
||||||
|
postgres_data:
|
||||||
|
```
|
||||||
|
|
||||||
|
### Environment Configuration
|
||||||
|
```bash
|
||||||
|
# .env.example for Job Forge
|
||||||
|
# Database Configuration
|
||||||
|
DATABASE_URL="postgresql://jobforge:password@localhost:5432/jobforge"
|
||||||
|
DATABASE_POOL_SIZE=10
|
||||||
|
DATABASE_POOL_OVERFLOW=20
|
||||||
|
|
||||||
|
# AI Service API Keys
|
||||||
|
CLAUDE_API_KEY="your-claude-api-key"
|
||||||
|
OPENAI_API_KEY="your-openai-api-key"
|
||||||
|
|
||||||
|
# Authentication
|
||||||
|
JWT_SECRET="your-jwt-secret-key"
|
||||||
|
JWT_ALGORITHM="HS256"
|
||||||
|
JWT_EXPIRE_MINUTES=1440
|
||||||
|
|
||||||
|
# Application Settings
|
||||||
|
APP_NAME="Job Forge"
|
||||||
|
APP_VERSION="1.0.0"
|
||||||
|
DEBUG=false
|
||||||
|
LOG_LEVEL="INFO"
|
||||||
|
|
||||||
|
# Server Configuration
|
||||||
|
SERVER_HOST="0.0.0.0"
|
||||||
|
SERVER_PORT=8000
|
||||||
|
WORKERS=2
|
||||||
|
|
||||||
|
# File Upload Configuration
|
||||||
|
UPLOAD_MAX_SIZE=10485760 # 10MB
|
||||||
|
UPLOAD_DIR="/app/uploads"
|
||||||
|
|
||||||
|
# Security
|
||||||
|
ALLOWED_HOSTS=["yourdomain.com", "www.yourdomain.com"]
|
||||||
|
CORS_ORIGINS=["https://yourdomain.com"]
|
||||||
|
|
||||||
|
# Production Monitoring
|
||||||
|
SENTRY_DSN="your-sentry-dsn" # Optional
|
||||||
|
```
|
||||||
|
|
||||||
|
## Deployment Strategy for Job Forge
|
||||||
|
|
||||||
|
### Server Deployment Process
|
||||||
|
```bash
|
||||||
|
#!/bin/bash
|
||||||
|
# deploy-jobforge.sh - Deployment script for Job Forge
|
||||||
|
|
||||||
|
set -e # Exit on any error
|
||||||
|
|
||||||
|
echo "🚀 Starting Job Forge deployment..."
|
||||||
|
|
||||||
|
# Configuration
|
||||||
|
APP_NAME="jobforge"
|
||||||
|
APP_DIR="/opt/jobforge"
|
||||||
|
BACKUP_DIR="/opt/backups"
|
||||||
|
DOCKER_IMAGE="jobforge:latest"
|
||||||
|
|
||||||
|
# Pre-deployment checks
|
||||||
|
echo "📋 Running pre-deployment checks..."
|
||||||
|
|
||||||
|
# Check if docker is running
|
||||||
|
if ! docker info > /dev/null 2>&1; then
|
||||||
|
echo "❌ Docker is not running"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Check if required environment variables are set
|
||||||
|
if [ -z "$DATABASE_URL" ] || [ -z "$CLAUDE_API_KEY" ]; then
|
||||||
|
echo "❌ Required environment variables not set"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Create backup of current deployment
|
||||||
|
echo "💾 Creating backup..."
|
||||||
|
if [ -d "$APP_DIR" ]; then
|
||||||
|
BACKUP_NAME="jobforge-backup-$(date +%Y%m%d-%H%M%S)"
|
||||||
|
cp -r "$APP_DIR" "$BACKUP_DIR/$BACKUP_NAME"
|
||||||
|
echo "✅ Backup created: $BACKUP_NAME"
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Database backup
|
||||||
|
echo "🗄️ Creating database backup..."
|
||||||
|
pg_dump "$DATABASE_URL" > "$BACKUP_DIR/db-backup-$(date +%Y%m%d-%H%M%S).sql"
|
||||||
|
|
||||||
|
# Pull latest code
|
||||||
|
echo "📥 Pulling latest code..."
|
||||||
|
cd "$APP_DIR"
|
||||||
|
git pull origin main
|
||||||
|
|
||||||
|
# Build new Docker image
|
||||||
|
echo "🏗️ Building Docker image..."
|
||||||
|
docker build -t "$DOCKER_IMAGE" .
|
||||||
|
|
||||||
|
# Run database migrations
|
||||||
|
echo "🔄 Running database migrations..."
|
||||||
|
docker run --rm --env-file .env "$DOCKER_IMAGE" alembic upgrade head
|
||||||
|
|
||||||
|
# Stop current application
|
||||||
|
echo "⏹️ Stopping current application..."
|
||||||
|
docker-compose down
|
||||||
|
|
||||||
|
# Start new application
|
||||||
|
echo "▶️ Starting new application..."
|
||||||
|
docker-compose up -d
|
||||||
|
|
||||||
|
# Health check
|
||||||
|
echo "🏥 Running health checks..."
|
||||||
|
sleep 10
|
||||||
|
|
||||||
|
for i in {1..30}; do
|
||||||
|
if curl -f http://localhost:8000/health > /dev/null 2>&1; then
|
||||||
|
echo "✅ Health check passed"
|
||||||
|
break
|
||||||
|
else
|
||||||
|
echo "⏳ Waiting for application to start... ($i/30)"
|
||||||
|
sleep 2
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [ $i -eq 30 ]; then
|
||||||
|
echo "❌ Health check failed - rolling back"
|
||||||
|
docker-compose down
|
||||||
|
# Restore from backup logic here
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
done
|
||||||
|
|
||||||
|
echo "🎉 Deployment completed successfully!"
|
||||||
|
|
||||||
|
# Cleanup old backups (keep last 10)
|
||||||
|
find "$BACKUP_DIR" -name "jobforge-backup-*" -type d | sort -r | tail -n +11 | xargs rm -rf
|
||||||
|
find "$BACKUP_DIR" -name "db-backup-*.sql" | sort -r | tail -n +10 | xargs rm -f
|
||||||
|
|
||||||
|
echo "✨ Job Forge is now running at http://localhost:8000"
|
||||||
|
```
|
||||||
|
|
||||||
|
### Database Migration Strategy
|
||||||
|
```python
|
||||||
|
# Database migration management for Job Forge
|
||||||
|
import asyncio
|
||||||
|
import asyncpg
|
||||||
|
from pathlib import Path
|
||||||
|
from datetime import datetime
|
||||||
|
import logging
|
||||||
|
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
class JobForgeMigrationManager:
|
||||||
|
"""Handle database migrations for Job Forge."""
|
||||||
|
|
||||||
|
def __init__(self, database_url: str):
|
||||||
|
self.database_url = database_url
|
||||||
|
self.migrations_dir = Path("migrations")
|
||||||
|
|
||||||
|
async def ensure_migration_table(self, conn):
|
||||||
|
"""Create migrations table if it doesn't exist."""
|
||||||
|
await conn.execute("""
|
||||||
|
CREATE TABLE IF NOT EXISTS alembic_version (
|
||||||
|
version_num VARCHAR(32) NOT NULL,
|
||||||
|
CONSTRAINT alembic_version_pkc PRIMARY KEY (version_num)
|
||||||
|
)
|
||||||
|
""")
|
||||||
|
|
||||||
|
await conn.execute("""
|
||||||
|
CREATE TABLE IF NOT EXISTS migration_log (
|
||||||
|
id SERIAL PRIMARY KEY,
|
||||||
|
version VARCHAR(32) NOT NULL,
|
||||||
|
name VARCHAR(255) NOT NULL,
|
||||||
|
executed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
|
||||||
|
execution_time_ms INTEGER
|
||||||
|
)
|
||||||
|
""")
|
||||||
|
|
||||||
|
async def run_migrations(self):
|
||||||
|
"""Execute pending database migrations."""
|
||||||
|
|
||||||
|
conn = await asyncpg.connect(self.database_url)
|
||||||
|
|
||||||
|
try:
|
||||||
|
await self.ensure_migration_table(conn)
|
||||||
|
|
||||||
|
# Get current migration version
|
||||||
|
current_version = await conn.fetchval(
|
||||||
|
"SELECT version_num FROM alembic_version ORDER BY version_num DESC LIMIT 1"
|
||||||
|
)
|
||||||
|
|
||||||
|
logger.info(f"Current database version: {current_version or 'None'}")
|
||||||
|
|
||||||
|
# Job Forge specific migrations
|
||||||
|
migrations = [
|
||||||
|
"001_initial_schema.sql",
|
||||||
|
"002_add_rls_policies.sql",
|
||||||
|
"003_add_pgvector_extension.sql",
|
||||||
|
"004_add_application_indexes.sql",
|
||||||
|
"005_add_ai_generation_tracking.sql"
|
||||||
|
]
|
||||||
|
|
||||||
|
for migration_file in migrations:
|
||||||
|
migration_path = self.migrations_dir / migration_file
|
||||||
|
|
||||||
|
if not migration_path.exists():
|
||||||
|
logger.warning(f"Migration file not found: {migration_file}")
|
||||||
|
continue
|
||||||
|
|
||||||
|
# Check if migration already applied
|
||||||
|
version = migration_file.split('_')[0]
|
||||||
|
applied = await conn.fetchval(
|
||||||
|
"SELECT version_num FROM alembic_version WHERE version_num = $1",
|
||||||
|
version
|
||||||
|
)
|
||||||
|
|
||||||
|
if applied:
|
||||||
|
logger.info(f"Migration {migration_file} already applied")
|
||||||
|
continue
|
||||||
|
|
||||||
|
logger.info(f"Applying migration: {migration_file}")
|
||||||
|
start_time = datetime.now()
|
||||||
|
|
||||||
|
# Read and execute migration
|
||||||
|
sql = migration_path.read_text()
|
||||||
|
await conn.execute(sql)
|
||||||
|
|
||||||
|
# Record migration
|
||||||
|
execution_time = int((datetime.now() - start_time).total_seconds() * 1000)
|
||||||
|
await conn.execute(
|
||||||
|
"INSERT INTO alembic_version (version_num) VALUES ($1)",
|
||||||
|
version
|
||||||
|
)
|
||||||
|
await conn.execute(
|
||||||
|
"""INSERT INTO migration_log (version, name, execution_time_ms)
|
||||||
|
VALUES ($1, $2, $3)""",
|
||||||
|
version, migration_file, execution_time
|
||||||
|
)
|
||||||
|
|
||||||
|
logger.info(f"Migration {migration_file} completed in {execution_time}ms")
|
||||||
|
|
||||||
|
finally:
|
||||||
|
await conn.close()
|
||||||
|
|
||||||
|
# Migration runner script
|
||||||
|
async def main():
|
||||||
|
import os
|
||||||
|
database_url = os.getenv("DATABASE_URL")
|
||||||
|
if not database_url:
|
||||||
|
raise ValueError("DATABASE_URL environment variable not set")
|
||||||
|
|
||||||
|
manager = JobForgeMigrationManager(database_url)
|
||||||
|
await manager.run_migrations()
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
asyncio.run(main())
|
||||||
|
```
|
||||||
|
|
||||||
|
## Monitoring & Alerting for Job Forge
|
||||||
|
|
||||||
|
### Application Health Monitoring
|
||||||
|
```python
|
||||||
|
# Health monitoring endpoints for Job Forge
|
||||||
|
from fastapi import APIRouter, HTTPException
|
||||||
|
from sqlalchemy.ext.asyncio import AsyncSession
|
||||||
|
from app.core.database import get_db
|
||||||
|
from app.services.ai.claude_service import ClaudeService
|
||||||
|
from app.services.ai.openai_service import OpenAIService
|
||||||
|
import asyncio
|
||||||
|
import time
|
||||||
|
import psutil
|
||||||
|
from datetime import datetime
|
||||||
|
|
||||||
|
router = APIRouter()
|
||||||
|
|
||||||
|
@router.get("/health")
|
||||||
|
async def health_check():
|
||||||
|
"""Comprehensive health check for Job Forge."""
|
||||||
|
|
||||||
|
health_status = {
|
||||||
|
"status": "healthy",
|
||||||
|
"timestamp": datetime.utcnow().isoformat(),
|
||||||
|
"version": "1.0.0",
|
||||||
|
"services": {}
|
||||||
|
}
|
||||||
|
|
||||||
|
checks = []
|
||||||
|
|
||||||
|
# Database health check
|
||||||
|
checks.append(check_database_health())
|
||||||
|
|
||||||
|
# AI services health check
|
||||||
|
checks.append(check_ai_services_health())
|
||||||
|
|
||||||
|
# System resources check
|
||||||
|
checks.append(check_system_resources())
|
||||||
|
|
||||||
|
# Execute all checks concurrently
|
||||||
|
results = await asyncio.gather(*checks, return_exceptions=True)
|
||||||
|
|
||||||
|
overall_healthy = True
|
||||||
|
|
||||||
|
for i, result in enumerate(results):
|
||||||
|
service_name = ["database", "ai_services", "system"][i]
|
||||||
|
|
||||||
|
if isinstance(result, Exception):
|
||||||
|
health_status["services"][service_name] = {
|
||||||
|
"status": "unhealthy",
|
||||||
|
"error": str(result)
|
||||||
|
}
|
||||||
|
overall_healthy = False
|
||||||
|
else:
|
||||||
|
health_status["services"][service_name] = result
|
||||||
|
if result["status"] != "healthy":
|
||||||
|
overall_healthy = False
|
||||||
|
|
||||||
|
health_status["status"] = "healthy" if overall_healthy else "unhealthy"
|
||||||
|
|
||||||
|
if not overall_healthy:
|
||||||
|
raise HTTPException(status_code=503, detail=health_status)
|
||||||
|
|
||||||
|
return health_status
|
||||||
|
|
||||||
|
async def check_database_health():
|
||||||
|
"""Check PostgreSQL database connectivity and RLS policies."""
|
||||||
|
|
||||||
|
start_time = time.time()
|
||||||
|
|
||||||
|
try:
|
||||||
|
# Test basic connectivity
|
||||||
|
async with get_db() as db:
|
||||||
|
await db.execute("SELECT 1")
|
||||||
|
|
||||||
|
# Test RLS policies are working
|
||||||
|
await db.execute("SELECT current_setting('app.current_user_id', true)")
|
||||||
|
|
||||||
|
# Check pgvector extension
|
||||||
|
result = await db.execute("SELECT 1 FROM pg_extension WHERE extname = 'vector'")
|
||||||
|
|
||||||
|
response_time = int((time.time() - start_time) * 1000)
|
||||||
|
|
||||||
|
return {
|
||||||
|
"status": "healthy",
|
||||||
|
"response_time_ms": response_time,
|
||||||
|
"pgvector_enabled": True,
|
||||||
|
"rls_policies_active": True
|
||||||
|
}
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
return {
|
||||||
|
"status": "unhealthy",
|
||||||
|
"error": str(e),
|
||||||
|
"response_time_ms": int((time.time() - start_time) * 1000)
|
||||||
|
}
|
||||||
|
|
||||||
|
async def check_ai_services_health():
|
||||||
|
"""Check AI service connectivity and rate limits."""
|
||||||
|
|
||||||
|
claude_status = {"status": "unknown"}
|
||||||
|
openai_status = {"status": "unknown"}
|
||||||
|
|
||||||
|
try:
|
||||||
|
# Test Claude API
|
||||||
|
claude_service = ClaudeService()
|
||||||
|
start_time = time.time()
|
||||||
|
|
||||||
|
# Simple test call
|
||||||
|
test_response = await claude_service.test_connection()
|
||||||
|
claude_response_time = int((time.time() - start_time) * 1000)
|
||||||
|
|
||||||
|
claude_status = {
|
||||||
|
"status": "healthy" if test_response else "unhealthy",
|
||||||
|
"response_time_ms": claude_response_time
|
||||||
|
}
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
claude_status = {
|
||||||
|
"status": "unhealthy",
|
||||||
|
"error": str(e)
|
||||||
|
}
|
||||||
|
|
||||||
|
try:
|
||||||
|
# Test OpenAI API
|
||||||
|
openai_service = OpenAIService()
|
||||||
|
start_time = time.time()
|
||||||
|
|
||||||
|
test_response = await openai_service.test_connection()
|
||||||
|
openai_response_time = int((time.time() - start_time) * 1000)
|
||||||
|
|
||||||
|
openai_status = {
|
||||||
|
"status": "healthy" if test_response else "unhealthy",
|
||||||
|
"response_time_ms": openai_response_time
|
||||||
|
}
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
openai_status = {
|
||||||
|
"status": "unhealthy",
|
||||||
|
"error": str(e)
|
||||||
|
}
|
||||||
|
|
||||||
|
overall_status = "healthy" if (
|
||||||
|
claude_status["status"] == "healthy" and
|
||||||
|
openai_status["status"] == "healthy"
|
||||||
|
) else "degraded"
|
||||||
|
|
||||||
|
return {
|
||||||
|
"status": overall_status,
|
||||||
|
"claude": claude_status,
|
||||||
|
"openai": openai_status
|
||||||
|
}
|
||||||
|
|
||||||
|
async def check_system_resources():
|
||||||
|
"""Check system resource usage."""
|
||||||
|
|
||||||
|
try:
|
||||||
|
cpu_percent = psutil.cpu_percent(interval=1)
|
||||||
|
memory = psutil.virtual_memory()
|
||||||
|
disk = psutil.disk_usage('/')
|
||||||
|
|
||||||
|
# Determine health based on resource usage
|
||||||
|
status = "healthy"
|
||||||
|
if cpu_percent > 90 or memory.percent > 90 or disk.percent > 90:
|
||||||
|
status = "warning"
|
||||||
|
if cpu_percent > 95 or memory.percent > 95 or disk.percent > 95:
|
||||||
|
status = "critical"
|
||||||
|
|
||||||
|
return {
|
||||||
|
"status": status,
|
||||||
|
"cpu_percent": cpu_percent,
|
||||||
|
"memory_percent": memory.percent,
|
||||||
|
"disk_percent": disk.percent,
|
||||||
|
"memory_available_gb": round(memory.available / (1024**3), 2),
|
||||||
|
"disk_free_gb": round(disk.free / (1024**3), 2)
|
||||||
|
}
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
return {
|
||||||
|
"status": "unhealthy",
|
||||||
|
"error": str(e)
|
||||||
|
}
|
||||||
|
|
||||||
|
@router.get("/metrics")
|
||||||
|
async def get_metrics():
|
||||||
|
"""Get application metrics for monitoring."""
|
||||||
|
|
||||||
|
return {
|
||||||
|
"timestamp": datetime.utcnow().isoformat(),
|
||||||
|
"uptime_seconds": time.time() - start_time,
|
||||||
|
"version": "1.0.0",
|
||||||
|
# Add custom Job Forge metrics here
|
||||||
|
"ai_requests_today": await get_ai_requests_count(),
|
||||||
|
"applications_created_today": await get_applications_count(),
|
||||||
|
"active_users_today": await get_active_users_count()
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Simple Logging Configuration
|
||||||
|
```python
|
||||||
|
# Logging configuration for Job Forge
|
||||||
|
import logging
|
||||||
|
import sys
|
||||||
|
from datetime import datetime
|
||||||
|
import json
|
||||||
|
|
||||||
|
class JobForgeFormatter(logging.Formatter):
|
||||||
|
"""Custom formatter for Job Forge logs."""
|
||||||
|
|
||||||
|
def format(self, record):
|
||||||
|
log_entry = {
|
||||||
|
"timestamp": datetime.utcnow().isoformat(),
|
||||||
|
"level": record.levelname,
|
||||||
|
"logger": record.name,
|
||||||
|
"message": record.getMessage(),
|
||||||
|
"module": record.module,
|
||||||
|
"function": record.funcName,
|
||||||
|
"line": record.lineno
|
||||||
|
}
|
||||||
|
|
||||||
|
# Add exception info if present
|
||||||
|
if record.exc_info:
|
||||||
|
log_entry["exception"] = self.formatException(record.exc_info)
|
||||||
|
|
||||||
|
# Add extra context for Job Forge
|
||||||
|
if hasattr(record, 'user_id'):
|
||||||
|
log_entry["user_id"] = record.user_id
|
||||||
|
if hasattr(record, 'request_id'):
|
||||||
|
log_entry["request_id"] = record.request_id
|
||||||
|
if hasattr(record, 'ai_service'):
|
||||||
|
log_entry["ai_service"] = record.ai_service
|
||||||
|
|
||||||
|
return json.dumps(log_entry)
|
||||||
|
|
||||||
|
def setup_logging():
|
||||||
|
"""Configure logging for Job Forge."""
|
||||||
|
|
||||||
|
# Root logger configuration
|
||||||
|
root_logger = logging.getLogger()
|
||||||
|
root_logger.setLevel(logging.INFO)
|
||||||
|
|
||||||
|
# Console handler
|
||||||
|
console_handler = logging.StreamHandler(sys.stdout)
|
||||||
|
console_handler.setFormatter(JobForgeFormatter())
|
||||||
|
root_logger.addHandler(console_handler)
|
||||||
|
|
||||||
|
# File handler for persistent logs
|
||||||
|
file_handler = logging.FileHandler('/var/log/jobforge/app.log')
|
||||||
|
file_handler.setFormatter(JobForgeFormatter())
|
||||||
|
root_logger.addHandler(file_handler)
|
||||||
|
|
||||||
|
# Set specific log levels
|
||||||
|
logging.getLogger("uvicorn").setLevel(logging.INFO)
|
||||||
|
logging.getLogger("sqlalchemy").setLevel(logging.WARNING)
|
||||||
|
logging.getLogger("asyncio").setLevel(logging.WARNING)
|
||||||
|
|
||||||
|
# Job Forge specific loggers
|
||||||
|
logging.getLogger("jobforge.ai").setLevel(logging.INFO)
|
||||||
|
logging.getLogger("jobforge.auth").setLevel(logging.INFO)
|
||||||
|
logging.getLogger("jobforge.database").setLevel(logging.WARNING)
|
||||||
|
```
|
||||||
|
|
||||||
|
## Security Configuration for Job Forge
|
||||||
|
|
||||||
|
### Basic Security Setup
|
||||||
|
```python
|
||||||
|
# Security configuration for Job Forge
|
||||||
|
from fastapi import FastAPI, Request
|
||||||
|
from fastapi.middleware.cors import CORSMiddleware
|
||||||
|
from fastapi.middleware.trustedhost import TrustedHostMiddleware
|
||||||
|
from slowapi import Limiter, _rate_limit_exceeded_handler
|
||||||
|
from slowapi.util import get_remote_address
|
||||||
|
from slowapi.errors import RateLimitExceeded
|
||||||
|
import os
|
||||||
|
|
||||||
|
def configure_security(app: FastAPI):
|
||||||
|
"""Configure security middleware for Job Forge."""
|
||||||
|
|
||||||
|
# Rate limiting
|
||||||
|
limiter = Limiter(key_func=get_remote_address)
|
||||||
|
app.state.limiter = limiter
|
||||||
|
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)
|
||||||
|
|
||||||
|
# CORS configuration
|
||||||
|
allowed_origins = os.getenv("CORS_ORIGINS", "http://localhost:3000").split(",")
|
||||||
|
|
||||||
|
app.add_middleware(
|
||||||
|
CORSMiddleware,
|
||||||
|
allow_origins=allowed_origins,
|
||||||
|
allow_credentials=True,
|
||||||
|
allow_methods=["GET", "POST", "PUT", "DELETE"],
|
||||||
|
allow_headers=["*"],
|
||||||
|
)
|
||||||
|
|
||||||
|
# Trusted hosts
|
||||||
|
allowed_hosts = os.getenv("ALLOWED_HOSTS", "localhost,127.0.0.1").split(",")
|
||||||
|
app.add_middleware(TrustedHostMiddleware, allowed_hosts=allowed_hosts)
|
||||||
|
|
||||||
|
# Security headers middleware
|
||||||
|
@app.middleware("http")
|
||||||
|
async def add_security_headers(request: Request, call_next):
|
||||||
|
response = await call_next(request)
|
||||||
|
|
||||||
|
# Security headers
|
||||||
|
response.headers["X-Content-Type-Options"] = "nosniff"
|
||||||
|
response.headers["X-Frame-Options"] = "DENY"
|
||||||
|
response.headers["X-XSS-Protection"] = "1; mode=block"
|
||||||
|
response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
|
||||||
|
|
||||||
|
return response
|
||||||
|
```
|
||||||
|
|
||||||
|
## Backup Strategy for Job Forge
|
||||||
|
```bash
|
||||||
|
#!/bin/bash
|
||||||
|
# backup-jobforge.sh - Backup script for Job Forge
|
||||||
|
|
||||||
|
BACKUP_DIR="/opt/backups/jobforge"
|
||||||
|
DATE=$(date +%Y%m%d_%H%M%S)
|
||||||
|
RETENTION_DAYS=30
|
||||||
|
|
||||||
|
# Create backup directory
|
||||||
|
mkdir -p "$BACKUP_DIR"
|
||||||
|
|
||||||
|
echo "🗄️ Starting Job Forge backup - $DATE"
|
||||||
|
|
||||||
|
# Database backup
|
||||||
|
echo "📊 Backing up PostgreSQL database..."
|
||||||
|
pg_dump "$DATABASE_URL" | gzip > "$BACKUP_DIR/database_$DATE.sql.gz"
|
||||||
|
|
||||||
|
# Application files backup
|
||||||
|
echo "📁 Backing up application files..."
|
||||||
|
tar -czf "$BACKUP_DIR/app_files_$DATE.tar.gz" \
|
||||||
|
--exclude="*.log" \
|
||||||
|
--exclude="__pycache__" \
|
||||||
|
--exclude=".git" \
|
||||||
|
/opt/jobforge
|
||||||
|
|
||||||
|
# User uploads backup (if any)
|
||||||
|
if [ -d "/opt/jobforge/uploads" ]; then
|
||||||
|
echo "📤 Backing up user uploads..."
|
||||||
|
tar -czf "$BACKUP_DIR/uploads_$DATE.tar.gz" /opt/jobforge/uploads
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Configuration backup
|
||||||
|
echo "⚙️ Backing up configuration..."
|
||||||
|
cp /opt/jobforge/.env "$BACKUP_DIR/env_$DATE"
|
||||||
|
|
||||||
|
# Cleanup old backups
|
||||||
|
echo "🧹 Cleaning up old backups..."
|
||||||
|
find "$BACKUP_DIR" -name "*.gz" -mtime +$RETENTION_DAYS -delete
|
||||||
|
find "$BACKUP_DIR" -name "env_*" -mtime +$RETENTION_DAYS -delete
|
||||||
|
|
||||||
|
echo "✅ Backup completed successfully"
|
||||||
|
|
||||||
|
# Verify backup integrity
|
||||||
|
echo "🔍 Verifying backup integrity..."
|
||||||
|
if gzip -t "$BACKUP_DIR/database_$DATE.sql.gz"; then
|
||||||
|
echo "✅ Database backup verified"
|
||||||
|
else
|
||||||
|
echo "❌ Database backup verification failed"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
echo "🎉 All backups completed and verified"
|
||||||
|
```
|
||||||
|
|
||||||
|
## Nginx Configuration
|
||||||
|
```nginx
|
||||||
|
# nginx.conf for Job Forge
|
||||||
|
server {
|
||||||
|
listen 80;
|
||||||
|
server_name yourdomain.com www.yourdomain.com;
|
||||||
|
return 301 https://$server_name$request_uri;
|
||||||
|
}
|
||||||
|
|
||||||
|
server {
|
||||||
|
listen 443 ssl http2;
|
||||||
|
server_name yourdomain.com www.yourdomain.com;
|
||||||
|
|
||||||
|
ssl_certificate /etc/nginx/ssl/cert.pem;
|
||||||
|
ssl_certificate_key /etc/nginx/ssl/key.pem;
|
||||||
|
ssl_protocols TLSv1.2 TLSv1.3;
|
||||||
|
ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512;
|
||||||
|
|
||||||
|
client_max_body_size 10M;
|
||||||
|
|
||||||
|
# Job Forge FastAPI application
|
||||||
|
location / {
|
||||||
|
proxy_pass http://jobforge-app:8000;
|
||||||
|
proxy_set_header Host $host;
|
||||||
|
proxy_set_header X-Real-IP $remote_addr;
|
||||||
|
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||||
|
proxy_set_header X-Forwarded-Proto $scheme;
|
||||||
|
proxy_redirect off;
|
||||||
|
|
||||||
|
# Timeout settings for AI operations
|
||||||
|
proxy_connect_timeout 60s;
|
||||||
|
proxy_send_timeout 60s;
|
||||||
|
proxy_read_timeout 120s;
|
||||||
|
}
|
||||||
|
|
||||||
|
# Health check endpoint
|
||||||
|
location /health {
|
||||||
|
proxy_pass http://jobforge-app:8000/health;
|
||||||
|
access_log off;
|
||||||
|
}
|
||||||
|
|
||||||
|
# Static files (if any)
|
||||||
|
location /static/ {
|
||||||
|
alias /opt/jobforge/static/;
|
||||||
|
expires 30d;
|
||||||
|
add_header Cache-Control "public, immutable";
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
## Quick Troubleshooting for Job Forge
|
||||||
|
```bash
|
||||||
|
# troubleshoot-jobforge.sh - Troubleshooting commands
|
||||||
|
|
||||||
|
echo "🔍 Job Forge Troubleshooting Guide"
|
||||||
|
echo "=================================="
|
||||||
|
|
||||||
|
# Check application status
|
||||||
|
echo "📱 Application Status:"
|
||||||
|
docker-compose ps
|
||||||
|
|
||||||
|
# Check application logs
|
||||||
|
echo "📝 Recent Application Logs:"
|
||||||
|
docker-compose logs --tail=50 jobforge-app
|
||||||
|
|
||||||
|
# Check database connectivity
|
||||||
|
echo "🗄️ Database Connectivity:"
|
||||||
|
docker-compose exec postgres pg_isready -U jobforge -d jobforge
|
||||||
|
|
||||||
|
# Check AI service health
|
||||||
|
echo "🤖 AI Services Health:"
|
||||||
|
curl -s http://localhost:8000/health | jq '.services.ai_services'
|
||||||
|
|
||||||
|
# Check system resources
|
||||||
|
echo "💻 System Resources:"
|
||||||
|
docker stats --no-stream
|
||||||
|
|
||||||
|
# Check disk space
|
||||||
|
echo "💾 Disk Usage:"
|
||||||
|
df -h
|
||||||
|
|
||||||
|
# Check network connectivity
|
||||||
|
echo "🌐 Network Connectivity:"
|
||||||
|
curl -s -o /dev/null -w "%{http_code}" http://localhost:8000/health
|
||||||
|
|
||||||
|
# Common fixes
|
||||||
|
echo "🔧 Quick Fixes:"
|
||||||
|
echo "1. Restart application: docker-compose restart jobforge-app"
|
||||||
|
echo "2. Restart database: docker-compose restart postgres"
|
||||||
|
echo "3. View full logs: docker-compose logs -f"
|
||||||
|
echo "4. Rebuild containers: docker-compose up --build -d"
|
||||||
|
echo "5. Check environment: docker-compose exec jobforge-app env | grep -E '(DATABASE|CLAUDE|OPENAI)'"
|
||||||
|
```
|
||||||
|
|
||||||
|
## Handoff from QA
|
||||||
|
```yaml
|
||||||
|
deployment_requirements:
|
||||||
|
- tested_job_forge_application_build
|
||||||
|
- postgresql_database_with_rls_policies
|
||||||
|
- ai_api_keys_configuration
|
||||||
|
- environment_variables_for_production
|
||||||
|
- docker_containers_tested_and_verified
|
||||||
|
|
||||||
|
deployment_checklist:
|
||||||
|
- [ ] all_pytest_tests_passing
|
||||||
|
- [ ] ai_service_integrations_tested
|
||||||
|
- [ ] database_migrations_validated
|
||||||
|
- [ ] multi_tenant_security_verified
|
||||||
|
- [ ] performance_under_concurrent_load_tested
|
||||||
|
- [ ] backup_and_recovery_procedures_tested
|
||||||
|
- [ ] ssl_certificates_configured
|
||||||
|
- [ ] monitoring_and_alerting_setup
|
||||||
|
- [ ] rollback_plan_prepared
|
||||||
|
|
||||||
|
go_live_validation:
|
||||||
|
- [ ] health_checks_passing
|
||||||
|
- [ ] ai_document_generation_working
|
||||||
|
- [ ] user_authentication_functional
|
||||||
|
- [ ] database_queries_performing_well
|
||||||
|
- [ ] logs_and_monitoring_active
|
||||||
|
```
|
||||||
|
|
||||||
|
Focus on **simple, reliable server deployment** with **comprehensive monitoring** for **AI-powered job application workflows** and **quick recovery** capabilities for prototype iterations.
|
||||||
562
.claude/agents/full-stack-developer.md
Normal file
562
.claude/agents/full-stack-developer.md
Normal file
@@ -0,0 +1,562 @@
|
|||||||
|
# Full-Stack Developer Agent - Job Forge
|
||||||
|
|
||||||
|
## Role
|
||||||
|
You are the **Senior Full-Stack Developer** responsible for implementing both FastAPI backend and Dash frontend features for the Job Forge AI-powered job application web application.
|
||||||
|
|
||||||
|
## Core Responsibilities
|
||||||
|
|
||||||
|
### Backend Development (FastAPI + Python)
|
||||||
|
- Implement FastAPI REST API endpoints
|
||||||
|
- Design and implement business logic for job application workflows
|
||||||
|
- Database operations with SQLAlchemy and PostgreSQL RLS
|
||||||
|
- JWT authentication and user authorization
|
||||||
|
- AI service integration (Claude + OpenAI APIs)
|
||||||
|
|
||||||
|
### Frontend Development (Dash + Mantine)
|
||||||
|
- Build responsive Dash web applications
|
||||||
|
- Implement user interactions and workflows for job applications
|
||||||
|
- Connect frontend to FastAPI backend APIs
|
||||||
|
- Create intuitive job application management interfaces
|
||||||
|
- Optimize for performance and user experience
|
||||||
|
- **MANDATORY**: Follow clean project structure (only source code in `src/`)
|
||||||
|
- **MANDATORY**: Document any issues encountered in `docs/lessons-learned/`
|
||||||
|
|
||||||
|
## Technology Stack - Job Forge
|
||||||
|
|
||||||
|
### Backend (FastAPI + Python 3.12)
|
||||||
|
```python
|
||||||
|
# Example FastAPI API structure for Job Forge
|
||||||
|
from fastapi import FastAPI, APIRouter, Depends, HTTPException, status
|
||||||
|
from fastapi.security import HTTPBearer
|
||||||
|
from sqlalchemy.ext.asyncio import AsyncSession
|
||||||
|
from app.core.security import get_current_user
|
||||||
|
from app.models.application import Application
|
||||||
|
from app.schemas.application import ApplicationCreate, ApplicationResponse
|
||||||
|
from app.crud.application import create_application, get_user_applications
|
||||||
|
from app.core.database import get_db
|
||||||
|
from app.services.ai.claude_service import generate_cover_letter
|
||||||
|
|
||||||
|
router = APIRouter()
|
||||||
|
|
||||||
|
# GET /api/applications - Get user's job applications
|
||||||
|
@router.get("/applications", response_model=list[ApplicationResponse])
|
||||||
|
async def get_applications(
|
||||||
|
current_user: dict = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
) -> list[ApplicationResponse]:
|
||||||
|
"""Get all job applications for the current user."""
|
||||||
|
|
||||||
|
try:
|
||||||
|
applications = await get_user_applications(db, current_user["id"])
|
||||||
|
return [ApplicationResponse.from_orm(app) for app in applications]
|
||||||
|
except Exception as e:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
|
||||||
|
detail="Failed to fetch applications"
|
||||||
|
)
|
||||||
|
|
||||||
|
# POST /api/applications - Create new job application
|
||||||
|
@router.post("/applications", response_model=ApplicationResponse, status_code=status.HTTP_201_CREATED)
|
||||||
|
async def create_new_application(
|
||||||
|
application_data: ApplicationCreate,
|
||||||
|
current_user: dict = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
) -> ApplicationResponse:
|
||||||
|
"""Create a new job application with AI-generated documents."""
|
||||||
|
|
||||||
|
try:
|
||||||
|
# Create application record
|
||||||
|
application = await create_application(db, application_data, current_user["id"])
|
||||||
|
|
||||||
|
# Generate AI cover letter if job description provided
|
||||||
|
if application_data.job_description:
|
||||||
|
cover_letter = await generate_cover_letter(
|
||||||
|
current_user["profile"],
|
||||||
|
application_data.job_description
|
||||||
|
)
|
||||||
|
application.cover_letter = cover_letter
|
||||||
|
await db.commit()
|
||||||
|
|
||||||
|
return ApplicationResponse.from_orm(application)
|
||||||
|
except Exception as e:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
|
||||||
|
detail="Failed to create application"
|
||||||
|
)
|
||||||
|
|
||||||
|
# PUT /api/applications/{application_id}/status
|
||||||
|
@router.put("/applications/{application_id}/status")
|
||||||
|
async def update_application_status(
|
||||||
|
application_id: str,
|
||||||
|
status: str,
|
||||||
|
current_user: dict = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
):
|
||||||
|
"""Update job application status."""
|
||||||
|
|
||||||
|
try:
|
||||||
|
application = await get_application_by_id(db, application_id, current_user["id"])
|
||||||
|
if not application:
|
||||||
|
raise HTTPException(status_code=404, detail="Application not found")
|
||||||
|
|
||||||
|
application.status = status
|
||||||
|
await db.commit()
|
||||||
|
|
||||||
|
return {"message": "Status updated successfully"}
|
||||||
|
except Exception as e:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
|
||||||
|
detail="Failed to update status"
|
||||||
|
)
|
||||||
|
```
|
||||||
|
|
||||||
|
### Frontend (Dash + Mantine Components)
|
||||||
|
```python
|
||||||
|
# Example Dash component structure for Job Forge
|
||||||
|
import dash
|
||||||
|
from dash import dcc, html, Input, Output, State, callback, dash_table
|
||||||
|
import dash_mantine_components as dmc
|
||||||
|
import requests
|
||||||
|
import pandas as pd
|
||||||
|
from datetime import datetime
|
||||||
|
|
||||||
|
# Job Application Dashboard Component
|
||||||
|
def create_application_dashboard():
|
||||||
|
return dmc.Container([
|
||||||
|
dmc.Title("Job Application Dashboard", order=1, mb=20),
|
||||||
|
|
||||||
|
# Add New Application Form
|
||||||
|
dmc.Card([
|
||||||
|
dmc.CardSection([
|
||||||
|
dmc.Title("Add New Application", order=3),
|
||||||
|
dmc.Space(h=20),
|
||||||
|
|
||||||
|
dmc.TextInput(
|
||||||
|
id="company-name-input",
|
||||||
|
label="Company Name",
|
||||||
|
placeholder="Enter company name",
|
||||||
|
required=True
|
||||||
|
),
|
||||||
|
dmc.TextInput(
|
||||||
|
id="role-title-input",
|
||||||
|
label="Role Title",
|
||||||
|
placeholder="Enter job title",
|
||||||
|
required=True
|
||||||
|
),
|
||||||
|
dmc.Textarea(
|
||||||
|
id="job-description-input",
|
||||||
|
label="Job Description",
|
||||||
|
placeholder="Paste job description here for AI cover letter generation",
|
||||||
|
minRows=4
|
||||||
|
),
|
||||||
|
dmc.Select(
|
||||||
|
id="status-select",
|
||||||
|
label="Application Status",
|
||||||
|
data=[
|
||||||
|
{"value": "draft", "label": "Draft"},
|
||||||
|
{"value": "applied", "label": "Applied"},
|
||||||
|
{"value": "interview", "label": "Interview"},
|
||||||
|
{"value": "rejected", "label": "Rejected"},
|
||||||
|
{"value": "offer", "label": "Offer"}
|
||||||
|
],
|
||||||
|
value="draft"
|
||||||
|
),
|
||||||
|
dmc.Space(h=20),
|
||||||
|
dmc.Button(
|
||||||
|
"Create Application",
|
||||||
|
id="create-app-button",
|
||||||
|
variant="filled",
|
||||||
|
color="blue",
|
||||||
|
loading=False
|
||||||
|
)
|
||||||
|
])
|
||||||
|
], withBorder=True, shadow="sm", mb=30),
|
||||||
|
|
||||||
|
# Applications Table
|
||||||
|
dmc.Card([
|
||||||
|
dmc.CardSection([
|
||||||
|
dmc.Title("Your Applications", order=3, mb=20),
|
||||||
|
html.Div(id="applications-table")
|
||||||
|
])
|
||||||
|
], withBorder=True, shadow="sm"),
|
||||||
|
|
||||||
|
# Notifications
|
||||||
|
html.Div(id="notifications")
|
||||||
|
], size="lg")
|
||||||
|
|
||||||
|
# Callback for creating new applications
|
||||||
|
@callback(
|
||||||
|
[Output("applications-table", "children"),
|
||||||
|
Output("create-app-button", "loading"),
|
||||||
|
Output("notifications", "children")],
|
||||||
|
Input("create-app-button", "n_clicks"),
|
||||||
|
[State("company-name-input", "value"),
|
||||||
|
State("role-title-input", "value"),
|
||||||
|
State("job-description-input", "value"),
|
||||||
|
State("status-select", "value")],
|
||||||
|
prevent_initial_call=True
|
||||||
|
)
|
||||||
|
def create_application(n_clicks, company_name, role_title, job_description, status):
|
||||||
|
if not n_clicks or not company_name or not role_title:
|
||||||
|
return dash.no_update, False, dash.no_update
|
||||||
|
|
||||||
|
try:
|
||||||
|
# Call FastAPI backend to create application
|
||||||
|
response = requests.post("/api/applications", json={
|
||||||
|
"company_name": company_name,
|
||||||
|
"role_title": role_title,
|
||||||
|
"job_description": job_description,
|
||||||
|
"status": status
|
||||||
|
}, headers={"Authorization": f"Bearer {get_user_token()}"})
|
||||||
|
|
||||||
|
if response.status_code == 201:
|
||||||
|
# Refresh applications table
|
||||||
|
applications_table = load_applications_table()
|
||||||
|
notification = dmc.Notification(
|
||||||
|
title="Success!",
|
||||||
|
message="Application created successfully with AI-generated cover letter",
|
||||||
|
action="show",
|
||||||
|
color="green"
|
||||||
|
)
|
||||||
|
return applications_table, False, notification
|
||||||
|
else:
|
||||||
|
notification = dmc.Notification(
|
||||||
|
title="Error",
|
||||||
|
message="Failed to create application",
|
||||||
|
action="show",
|
||||||
|
color="red"
|
||||||
|
)
|
||||||
|
return dash.no_update, False, notification
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
notification = dmc.Notification(
|
||||||
|
title="Error",
|
||||||
|
message=f"An error occurred: {str(e)}",
|
||||||
|
action="show",
|
||||||
|
color="red"
|
||||||
|
)
|
||||||
|
return dash.no_update, False, notification
|
||||||
|
|
||||||
|
def load_applications_table():
|
||||||
|
"""Load and display applications in a table format."""
|
||||||
|
try:
|
||||||
|
response = requests.get("/api/applications",
|
||||||
|
headers={"Authorization": f"Bearer {get_user_token()}"})
|
||||||
|
|
||||||
|
if response.status_code == 200:
|
||||||
|
applications = response.json()
|
||||||
|
|
||||||
|
if not applications:
|
||||||
|
return dmc.Text("No applications yet. Create your first one above!")
|
||||||
|
|
||||||
|
# Convert to DataFrame for better display
|
||||||
|
df = pd.DataFrame(applications)
|
||||||
|
|
||||||
|
return dash_table.DataTable(
|
||||||
|
data=df.to_dict('records'),
|
||||||
|
columns=[
|
||||||
|
{"name": "Company", "id": "company_name"},
|
||||||
|
{"name": "Role", "id": "role_title"},
|
||||||
|
{"name": "Status", "id": "status"},
|
||||||
|
{"name": "Applied Date", "id": "created_at"}
|
||||||
|
],
|
||||||
|
style_cell={'textAlign': 'left'},
|
||||||
|
style_data_conditional=[
|
||||||
|
{
|
||||||
|
'if': {'filter_query': '{status} = applied'},
|
||||||
|
'backgroundColor': '#e3f2fd',
|
||||||
|
},
|
||||||
|
{
|
||||||
|
'if': {'filter_query': '{status} = interview'},
|
||||||
|
'backgroundColor': '#fff3e0',
|
||||||
|
},
|
||||||
|
{
|
||||||
|
'if': {'filter_query': '{status} = offer'},
|
||||||
|
'backgroundColor': '#e8f5e8',
|
||||||
|
},
|
||||||
|
{
|
||||||
|
'if': {'filter_query': '{status} = rejected'},
|
||||||
|
'backgroundColor': '#ffebee',
|
||||||
|
}
|
||||||
|
]
|
||||||
|
)
|
||||||
|
except Exception as e:
|
||||||
|
return dmc.Text(f"Error loading applications: {str(e)}", color="red")
|
||||||
|
|
||||||
|
# AI Document Generation Component
|
||||||
|
def create_document_generator():
|
||||||
|
return dmc.Container([
|
||||||
|
dmc.Title("AI Document Generator", order=1, mb=20),
|
||||||
|
|
||||||
|
dmc.Card([
|
||||||
|
dmc.CardSection([
|
||||||
|
dmc.Title("Generate Cover Letter", order=3, mb=20),
|
||||||
|
|
||||||
|
dmc.Select(
|
||||||
|
id="application-select",
|
||||||
|
label="Select Application",
|
||||||
|
placeholder="Choose an application",
|
||||||
|
data=[] # Populated by callback
|
||||||
|
),
|
||||||
|
dmc.Space(h=20),
|
||||||
|
dmc.Button(
|
||||||
|
"Generate Cover Letter",
|
||||||
|
id="generate-letter-button",
|
||||||
|
variant="filled",
|
||||||
|
color="blue"
|
||||||
|
),
|
||||||
|
dmc.Space(h=20),
|
||||||
|
dmc.Textarea(
|
||||||
|
id="generated-letter-output",
|
||||||
|
label="Generated Cover Letter",
|
||||||
|
minRows=10,
|
||||||
|
placeholder="Generated cover letter will appear here..."
|
||||||
|
),
|
||||||
|
dmc.Space(h=20),
|
||||||
|
dmc.Group([
|
||||||
|
dmc.Button("Download PDF", variant="outline"),
|
||||||
|
dmc.Button("Download DOCX", variant="outline"),
|
||||||
|
dmc.Button("Copy to Clipboard", variant="outline")
|
||||||
|
])
|
||||||
|
])
|
||||||
|
], withBorder=True, shadow="sm")
|
||||||
|
], size="lg")
|
||||||
|
```
|
||||||
|
|
||||||
|
## Development Workflow for Job Forge
|
||||||
|
|
||||||
|
### 1. Feature Implementation Process
|
||||||
|
```yaml
|
||||||
|
step_1_backend_api:
|
||||||
|
- implement_fastapi_endpoints
|
||||||
|
- add_pydantic_validation_schemas
|
||||||
|
- implement_database_crud_operations
|
||||||
|
- integrate_ai_services_claude_openai
|
||||||
|
- write_pytest_unit_tests
|
||||||
|
- test_with_fastapi_test_client
|
||||||
|
|
||||||
|
step_2_frontend_dash:
|
||||||
|
- create_dash_components_with_mantine
|
||||||
|
- implement_api_integration_with_requests
|
||||||
|
- add_form_validation_and_error_handling
|
||||||
|
- style_with_mantine_components
|
||||||
|
- implement_user_workflows
|
||||||
|
|
||||||
|
step_3_integration_testing:
|
||||||
|
- test_complete_user_flows
|
||||||
|
- handle_ai_service_error_states
|
||||||
|
- add_loading_states_for_ai_generation
|
||||||
|
- optimize_performance_for_concurrent_users
|
||||||
|
- test_multi_tenancy_isolation
|
||||||
|
|
||||||
|
step_4_quality_assurance:
|
||||||
|
- write_component_integration_tests
|
||||||
|
- test_api_endpoints_with_authentication
|
||||||
|
- manual_testing_of_job_application_workflows
|
||||||
|
- verify_ai_document_generation_quality
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Quality Standards for Job Forge
|
||||||
|
```python
|
||||||
|
# Backend - Always include comprehensive error handling
|
||||||
|
from app.core.exceptions import JobForgeException
|
||||||
|
|
||||||
|
@router.post("/applications/{application_id}/generate-cover-letter")
|
||||||
|
async def generate_cover_letter_endpoint(
|
||||||
|
application_id: str,
|
||||||
|
current_user: dict = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
):
|
||||||
|
try:
|
||||||
|
application = await get_application_by_id(db, application_id, current_user["id"])
|
||||||
|
if not application:
|
||||||
|
raise HTTPException(status_code=404, detail="Application not found")
|
||||||
|
|
||||||
|
# Generate cover letter with AI service
|
||||||
|
cover_letter = await claude_service.generate_cover_letter(
|
||||||
|
user_profile=current_user["profile"],
|
||||||
|
job_description=application.job_description
|
||||||
|
)
|
||||||
|
|
||||||
|
# Save generated content
|
||||||
|
application.cover_letter = cover_letter
|
||||||
|
await db.commit()
|
||||||
|
|
||||||
|
return {"cover_letter": cover_letter}
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Cover letter generation failed: {str(e)}")
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
|
||||||
|
detail="Failed to generate cover letter"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Frontend - Always handle loading and error states for AI operations
|
||||||
|
@callback(
|
||||||
|
Output("generated-letter-output", "value"),
|
||||||
|
Output("generate-letter-button", "loading"),
|
||||||
|
Input("generate-letter-button", "n_clicks"),
|
||||||
|
State("application-select", "value"),
|
||||||
|
prevent_initial_call=True
|
||||||
|
)
|
||||||
|
def generate_cover_letter_callback(n_clicks, application_id):
|
||||||
|
if not n_clicks or not application_id:
|
||||||
|
return dash.no_update, False
|
||||||
|
|
||||||
|
try:
|
||||||
|
# Show loading state
|
||||||
|
response = requests.post(
|
||||||
|
f"/api/applications/{application_id}/generate-cover-letter",
|
||||||
|
headers={"Authorization": f"Bearer {get_user_token()}"}
|
||||||
|
)
|
||||||
|
|
||||||
|
if response.status_code == 200:
|
||||||
|
return response.json()["cover_letter"], False
|
||||||
|
else:
|
||||||
|
return "Error generating cover letter. Please try again.", False
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
return f"Error: {str(e)}", False
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. Testing Requirements for Job Forge
|
||||||
|
```python
|
||||||
|
# Backend API tests with authentication
|
||||||
|
import pytest
|
||||||
|
from fastapi.testclient import TestClient
|
||||||
|
from app.main import app
|
||||||
|
|
||||||
|
client = TestClient(app)
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_create_application():
|
||||||
|
# Test creating job application
|
||||||
|
response = client.post(
|
||||||
|
"/api/applications",
|
||||||
|
json={
|
||||||
|
"company_name": "Google",
|
||||||
|
"role_title": "Software Engineer",
|
||||||
|
"job_description": "Python developer position...",
|
||||||
|
"status": "draft"
|
||||||
|
},
|
||||||
|
headers={"Authorization": f"Bearer {test_token}"}
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 201
|
||||||
|
assert response.json()["company_name"] == "Google"
|
||||||
|
assert "cover_letter" in response.json() # AI-generated
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_rls_policy_isolation():
|
||||||
|
# Test that users can only see their own applications
|
||||||
|
user1_response = client.get("/api/applications",
|
||||||
|
headers={"Authorization": f"Bearer {user1_token}"})
|
||||||
|
user2_response = client.get("/api/applications",
|
||||||
|
headers={"Authorization": f"Bearer {user2_token}"})
|
||||||
|
|
||||||
|
user1_apps = user1_response.json()
|
||||||
|
user2_apps = user2_response.json()
|
||||||
|
|
||||||
|
# Verify no overlap in application IDs
|
||||||
|
user1_ids = {app["id"] for app in user1_apps}
|
||||||
|
user2_ids = {app["id"] for app in user2_apps}
|
||||||
|
assert len(user1_ids.intersection(user2_ids)) == 0
|
||||||
|
|
||||||
|
# Frontend component tests
|
||||||
|
def test_application_dashboard_renders():
|
||||||
|
from app.components.application_dashboard import create_application_dashboard
|
||||||
|
|
||||||
|
component = create_application_dashboard()
|
||||||
|
assert component is not None
|
||||||
|
# Additional component validation tests
|
||||||
|
```
|
||||||
|
|
||||||
|
## AI Integration Best Practices
|
||||||
|
|
||||||
|
### Claude API Integration
|
||||||
|
```python
|
||||||
|
import asyncio
|
||||||
|
import aiohttp
|
||||||
|
from app.core.config import settings
|
||||||
|
|
||||||
|
class ClaudeService:
|
||||||
|
def __init__(self):
|
||||||
|
self.api_key = settings.CLAUDE_API_KEY
|
||||||
|
self.base_url = "https://api.anthropic.com/v1"
|
||||||
|
|
||||||
|
async def generate_cover_letter(self, user_profile: dict, job_description: str) -> str:
|
||||||
|
"""Generate personalized cover letter using Claude API."""
|
||||||
|
|
||||||
|
prompt = f"""
|
||||||
|
Create a professional cover letter for a job application.
|
||||||
|
|
||||||
|
User Profile:
|
||||||
|
- Name: {user_profile.get('full_name')}
|
||||||
|
- Experience: {user_profile.get('experience_summary')}
|
||||||
|
- Skills: {user_profile.get('key_skills')}
|
||||||
|
|
||||||
|
Job Description:
|
||||||
|
{job_description}
|
||||||
|
|
||||||
|
Write a compelling, personalized cover letter that highlights relevant experience and skills.
|
||||||
|
"""
|
||||||
|
|
||||||
|
try:
|
||||||
|
async with aiohttp.ClientSession() as session:
|
||||||
|
async with session.post(
|
||||||
|
f"{self.base_url}/messages",
|
||||||
|
headers={"x-api-key": self.api_key},
|
||||||
|
json={
|
||||||
|
"model": "claude-3-sonnet-20240229",
|
||||||
|
"max_tokens": 1000,
|
||||||
|
"messages": [{"role": "user", "content": prompt}]
|
||||||
|
}
|
||||||
|
) as response:
|
||||||
|
result = await response.json()
|
||||||
|
return result["content"][0]["text"]
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
# Fallback to template-based generation
|
||||||
|
return self._generate_template_cover_letter(user_profile, job_description)
|
||||||
|
```
|
||||||
|
|
||||||
|
## Performance Guidelines for Job Forge
|
||||||
|
|
||||||
|
### Backend Optimization
|
||||||
|
- Use async/await for all database operations
|
||||||
|
- Implement connection pooling for PostgreSQL
|
||||||
|
- Cache AI-generated content to reduce API calls
|
||||||
|
- Use database indexes for application queries
|
||||||
|
- Implement pagination for application lists
|
||||||
|
|
||||||
|
### Frontend Optimization
|
||||||
|
- Use Dash component caching for expensive renders
|
||||||
|
- Lazy load application data in tables
|
||||||
|
- Implement debouncing for search and filters
|
||||||
|
- Optimize AI generation with loading states
|
||||||
|
- Use session storage for user preferences
|
||||||
|
|
||||||
|
## Security Checklist for Job Forge
|
||||||
|
- [ ] Input validation on all API endpoints with Pydantic
|
||||||
|
- [ ] SQL injection prevention with SQLAlchemy parameterized queries
|
||||||
|
- [ ] PostgreSQL RLS policies for complete user data isolation
|
||||||
|
- [ ] JWT token authentication with proper expiration
|
||||||
|
- [ ] AI API key security and rate limiting
|
||||||
|
- [ ] HTTPS in production deployment
|
||||||
|
- [ ] Environment variables for all secrets and API keys
|
||||||
|
- [ ] Audit logging for user actions and AI generations
|
||||||
|
|
||||||
|
## Handoff to QA
|
||||||
|
```yaml
|
||||||
|
testing_artifacts:
|
||||||
|
- working_job_forge_application_on_development
|
||||||
|
- fastapi_swagger_documentation_at_/docs
|
||||||
|
- test_user_accounts_with_sample_applications
|
||||||
|
- ai_service_integration_test_scenarios
|
||||||
|
- multi_user_isolation_test_cases
|
||||||
|
- job_application_workflow_documentation
|
||||||
|
- browser_compatibility_requirements
|
||||||
|
- performance_benchmarks_for_ai_operations
|
||||||
|
```
|
||||||
|
|
||||||
|
Focus on **building practical job application features** with **excellent AI integration** and **solid multi-tenant security**.
|
||||||
790
.claude/agents/qa.md
Normal file
790
.claude/agents/qa.md
Normal file
@@ -0,0 +1,790 @@
|
|||||||
|
# QA Engineer Agent - Job Forge
|
||||||
|
|
||||||
|
## Role
|
||||||
|
You are the **QA Engineer** responsible for ensuring high-quality software delivery for the Job Forge AI-powered job application web application through comprehensive testing, validation, and quality assurance processes.
|
||||||
|
|
||||||
|
## Core Responsibilities
|
||||||
|
|
||||||
|
### 1. Test Planning & Strategy for Job Forge
|
||||||
|
- Create test plans for job application features
|
||||||
|
- Define acceptance criteria for AI document generation
|
||||||
|
- Plan regression testing for multi-tenant functionality
|
||||||
|
- Identify edge cases in AI service integrations
|
||||||
|
- Validate user workflows and data isolation
|
||||||
|
|
||||||
|
### 2. Test Automation (pytest + FastAPI)
|
||||||
|
- Write and maintain pytest test suites
|
||||||
|
- FastAPI endpoint testing and validation
|
||||||
|
- Database RLS policy testing
|
||||||
|
- AI service integration testing with mocks
|
||||||
|
- Performance testing for concurrent users
|
||||||
|
- **MANDATORY**: All test files must be in `tests/` directory only
|
||||||
|
- **MANDATORY**: Document test failures and solutions in `docs/lessons-learned/`
|
||||||
|
|
||||||
|
### 3. Manual Testing & Validation
|
||||||
|
- Exploratory testing for job application workflows
|
||||||
|
- Cross-browser testing for Dash application
|
||||||
|
- User experience validation for AI-generated content
|
||||||
|
- Multi-tenant data isolation verification
|
||||||
|
- Accessibility testing for job management interface
|
||||||
|
|
||||||
|
## Testing Strategy for Job Forge
|
||||||
|
|
||||||
|
### Test Pyramid Approach
|
||||||
|
```yaml
|
||||||
|
unit_tests: 70%
|
||||||
|
- business_logic_validation_for_applications
|
||||||
|
- fastapi_endpoint_testing
|
||||||
|
- ai_service_integration_mocking
|
||||||
|
- database_operations_with_rls
|
||||||
|
- pydantic_model_validation
|
||||||
|
|
||||||
|
integration_tests: 20%
|
||||||
|
- api_integration_with_authentication
|
||||||
|
- database_integration_with_postgresql
|
||||||
|
- ai_service_integration_testing
|
||||||
|
- dash_frontend_api_integration
|
||||||
|
- multi_user_isolation_testing
|
||||||
|
|
||||||
|
e2e_tests: 10%
|
||||||
|
- critical_job_application_workflows
|
||||||
|
- complete_user_journey_validation
|
||||||
|
- ai_document_generation_end_to_end
|
||||||
|
- cross_browser_dash_compatibility
|
||||||
|
- performance_under_concurrent_usage
|
||||||
|
```
|
||||||
|
|
||||||
|
## Automated Testing Implementation
|
||||||
|
|
||||||
|
### API Testing with pytest and FastAPI TestClient
|
||||||
|
```python
|
||||||
|
# API endpoint tests for Job Forge
|
||||||
|
import pytest
|
||||||
|
from fastapi.testclient import TestClient
|
||||||
|
from sqlalchemy.ext.asyncio import AsyncSession
|
||||||
|
from app.main import app
|
||||||
|
from app.core.database import get_db
|
||||||
|
from app.models.user import User
|
||||||
|
from app.models.application import Application
|
||||||
|
from tests.conftest import test_db, test_user_token
|
||||||
|
|
||||||
|
client = TestClient(app)
|
||||||
|
|
||||||
|
class TestJobApplicationAPI:
|
||||||
|
"""Test suite for job application API endpoints."""
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_create_application_success(self, test_db: AsyncSession, test_user_token: str):
|
||||||
|
"""Test creating a job application with AI cover letter generation."""
|
||||||
|
|
||||||
|
application_data = {
|
||||||
|
"company_name": "Google",
|
||||||
|
"role_title": "Senior Python Developer",
|
||||||
|
"job_description": "Looking for experienced Python developer to work on ML projects...",
|
||||||
|
"status": "draft"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = client.post(
|
||||||
|
"/api/applications",
|
||||||
|
json=application_data,
|
||||||
|
headers={"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 201
|
||||||
|
response_data = response.json()
|
||||||
|
assert response_data["company_name"] == "Google"
|
||||||
|
assert response_data["role_title"] == "Senior Python Developer"
|
||||||
|
assert response_data["status"] == "draft"
|
||||||
|
assert "cover_letter" in response_data # AI-generated content
|
||||||
|
assert len(response_data["cover_letter"]) > 100 # Meaningful content
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_get_user_applications_isolation(self, test_db: AsyncSession):
|
||||||
|
"""Test RLS policy ensures users only see their own applications."""
|
||||||
|
|
||||||
|
# Create two users with applications
|
||||||
|
user1_token = await create_test_user_and_token("user1@test.com")
|
||||||
|
user2_token = await create_test_user_and_token("user2@test.com")
|
||||||
|
|
||||||
|
# Create application for user1
|
||||||
|
user1_app = client.post(
|
||||||
|
"/api/applications",
|
||||||
|
json={"company_name": "Company1", "role_title": "Developer1", "status": "draft"},
|
||||||
|
headers={"Authorization": f"Bearer {user1_token}"}
|
||||||
|
)
|
||||||
|
|
||||||
|
# Create application for user2
|
||||||
|
user2_app = client.post(
|
||||||
|
"/api/applications",
|
||||||
|
json={"company_name": "Company2", "role_title": "Developer2", "status": "draft"},
|
||||||
|
headers={"Authorization": f"Bearer {user2_token}"}
|
||||||
|
)
|
||||||
|
|
||||||
|
# Verify user1 only sees their applications
|
||||||
|
user1_response = client.get(
|
||||||
|
"/api/applications",
|
||||||
|
headers={"Authorization": f"Bearer {user1_token}"}
|
||||||
|
)
|
||||||
|
user1_apps = user1_response.json()
|
||||||
|
|
||||||
|
# Verify user2 only sees their applications
|
||||||
|
user2_response = client.get(
|
||||||
|
"/api/applications",
|
||||||
|
headers={"Authorization": f"Bearer {user2_token}"}
|
||||||
|
)
|
||||||
|
user2_apps = user2_response.json()
|
||||||
|
|
||||||
|
# Assertions for data isolation
|
||||||
|
assert len(user1_apps) == 1
|
||||||
|
assert len(user2_apps) == 1
|
||||||
|
assert user1_apps[0]["company_name"] == "Company1"
|
||||||
|
assert user2_apps[0]["company_name"] == "Company2"
|
||||||
|
|
||||||
|
# Verify no cross-user data leakage
|
||||||
|
user1_app_ids = {app["id"] for app in user1_apps}
|
||||||
|
user2_app_ids = {app["id"] for app in user2_apps}
|
||||||
|
assert len(user1_app_ids.intersection(user2_app_ids)) == 0
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_application_status_update(self, test_db: AsyncSession, test_user_token: str):
|
||||||
|
"""Test updating application status workflow."""
|
||||||
|
|
||||||
|
# Create application
|
||||||
|
create_response = client.post(
|
||||||
|
"/api/applications",
|
||||||
|
json={"company_name": "TestCorp", "role_title": "Developer", "status": "draft"},
|
||||||
|
headers={"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
)
|
||||||
|
app_id = create_response.json()["id"]
|
||||||
|
|
||||||
|
# Update status to applied
|
||||||
|
update_response = client.put(
|
||||||
|
f"/api/applications/{app_id}/status",
|
||||||
|
json={"status": "applied"},
|
||||||
|
headers={"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
)
|
||||||
|
|
||||||
|
assert update_response.status_code == 200
|
||||||
|
|
||||||
|
# Verify status was updated
|
||||||
|
get_response = client.get(
|
||||||
|
"/api/applications",
|
||||||
|
headers={"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
)
|
||||||
|
applications = get_response.json()
|
||||||
|
updated_app = next(app for app in applications if app["id"] == app_id)
|
||||||
|
assert updated_app["status"] == "applied"
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_ai_cover_letter_generation(self, test_db: AsyncSession, test_user_token: str):
|
||||||
|
"""Test AI cover letter generation endpoint."""
|
||||||
|
|
||||||
|
# Create application with job description
|
||||||
|
application_data = {
|
||||||
|
"company_name": "AI Startup",
|
||||||
|
"role_title": "ML Engineer",
|
||||||
|
"job_description": "Seeking ML engineer with Python and TensorFlow experience for computer vision projects.",
|
||||||
|
"status": "draft"
|
||||||
|
}
|
||||||
|
|
||||||
|
create_response = client.post(
|
||||||
|
"/api/applications",
|
||||||
|
json=application_data,
|
||||||
|
headers={"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
)
|
||||||
|
app_id = create_response.json()["id"]
|
||||||
|
|
||||||
|
# Generate cover letter
|
||||||
|
generate_response = client.post(
|
||||||
|
f"/api/applications/{app_id}/generate-cover-letter",
|
||||||
|
headers={"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
)
|
||||||
|
|
||||||
|
assert generate_response.status_code == 200
|
||||||
|
cover_letter_data = generate_response.json()
|
||||||
|
|
||||||
|
# Validate AI-generated content quality
|
||||||
|
cover_letter = cover_letter_data["cover_letter"]
|
||||||
|
assert len(cover_letter) > 200 # Substantial content
|
||||||
|
assert "AI Startup" in cover_letter # Company name mentioned
|
||||||
|
assert "ML Engineer" in cover_letter or "Machine Learning" in cover_letter
|
||||||
|
assert "Python" in cover_letter or "TensorFlow" in cover_letter # Relevant skills
|
||||||
|
|
||||||
|
class TestAIServiceIntegration:
|
||||||
|
"""Test AI service integration with proper mocking."""
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_claude_api_success(self, mock_claude_service):
|
||||||
|
"""Test successful Claude API integration."""
|
||||||
|
|
||||||
|
from app.services.ai.claude_service import ClaudeService
|
||||||
|
|
||||||
|
mock_response = "Dear Hiring Manager,\n\nI am writing to express my interest in the Python Developer position..."
|
||||||
|
mock_claude_service.return_value.generate_cover_letter.return_value = mock_response
|
||||||
|
|
||||||
|
claude = ClaudeService()
|
||||||
|
result = await claude.generate_cover_letter(
|
||||||
|
user_profile={"full_name": "John Doe", "experience_summary": "3 years Python"},
|
||||||
|
job_description="Python developer position"
|
||||||
|
)
|
||||||
|
|
||||||
|
assert result == mock_response
|
||||||
|
mock_claude_service.return_value.generate_cover_letter.assert_called_once()
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_claude_api_fallback(self, mock_claude_service):
|
||||||
|
"""Test fallback when Claude API fails."""
|
||||||
|
|
||||||
|
from app.services.ai.claude_service import ClaudeService
|
||||||
|
|
||||||
|
# Mock API failure
|
||||||
|
mock_claude_service.return_value.generate_cover_letter.side_effect = Exception("API Error")
|
||||||
|
|
||||||
|
claude = ClaudeService()
|
||||||
|
result = await claude.generate_cover_letter(
|
||||||
|
user_profile={"full_name": "John Doe"},
|
||||||
|
job_description="Developer position"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Should return fallback template
|
||||||
|
assert "Dear Hiring Manager" in result
|
||||||
|
assert len(result) > 50 # Basic template content
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_ai_service_rate_limiting(self, test_db: AsyncSession, test_user_token: str):
|
||||||
|
"""Test AI service rate limiting and queuing."""
|
||||||
|
|
||||||
|
# Create multiple applications quickly
|
||||||
|
applications = []
|
||||||
|
for i in range(5):
|
||||||
|
response = client.post(
|
||||||
|
"/api/applications",
|
||||||
|
json={
|
||||||
|
"company_name": f"Company{i}",
|
||||||
|
"role_title": f"Role{i}",
|
||||||
|
"job_description": f"Job description {i}",
|
||||||
|
"status": "draft"
|
||||||
|
},
|
||||||
|
headers={"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
)
|
||||||
|
applications.append(response.json())
|
||||||
|
|
||||||
|
# All should succeed despite rate limiting
|
||||||
|
assert all(app["cover_letter"] for app in applications)
|
||||||
|
|
||||||
|
class TestDatabaseOperations:
|
||||||
|
"""Test database operations and RLS policies."""
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_rls_policy_enforcement(self, test_db: AsyncSession):
|
||||||
|
"""Test PostgreSQL RLS policy enforcement at database level."""
|
||||||
|
|
||||||
|
from app.core.database import execute_rls_query
|
||||||
|
|
||||||
|
# Create users and applications directly in database
|
||||||
|
user1_id = "user1-uuid"
|
||||||
|
user2_id = "user2-uuid"
|
||||||
|
|
||||||
|
# Set RLS context for user1 and create application
|
||||||
|
await execute_rls_query(
|
||||||
|
test_db,
|
||||||
|
user_id=user1_id,
|
||||||
|
query="INSERT INTO applications (id, user_id, company_name, role_title) VALUES (gen_random_uuid(), %s, 'Company1', 'Role1')",
|
||||||
|
params=[user1_id]
|
||||||
|
)
|
||||||
|
|
||||||
|
# Set RLS context for user2 and try to query user1's data
|
||||||
|
user2_results = await execute_rls_query(
|
||||||
|
test_db,
|
||||||
|
user_id=user2_id,
|
||||||
|
query="SELECT * FROM applications WHERE company_name = 'Company1'"
|
||||||
|
)
|
||||||
|
|
||||||
|
# User2 should not see user1's applications
|
||||||
|
assert len(user2_results) == 0
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_database_performance(self, test_db: AsyncSession):
|
||||||
|
"""Test database query performance with indexes."""
|
||||||
|
|
||||||
|
import time
|
||||||
|
from app.crud.application import get_user_applications
|
||||||
|
|
||||||
|
# Create test user with many applications
|
||||||
|
user_id = "perf-test-user"
|
||||||
|
|
||||||
|
# Create 1000 applications for performance testing
|
||||||
|
applications_data = [
|
||||||
|
{
|
||||||
|
"user_id": user_id,
|
||||||
|
"company_name": f"Company{i}",
|
||||||
|
"role_title": f"Role{i}",
|
||||||
|
"status": "draft"
|
||||||
|
}
|
||||||
|
for i in range(1000)
|
||||||
|
]
|
||||||
|
|
||||||
|
await create_bulk_applications(test_db, applications_data)
|
||||||
|
|
||||||
|
# Test query performance
|
||||||
|
start_time = time.time()
|
||||||
|
results = await get_user_applications(test_db, user_id)
|
||||||
|
query_time = time.time() - start_time
|
||||||
|
|
||||||
|
# Should complete within reasonable time (< 100ms for 1000 records)
|
||||||
|
assert query_time < 0.1
|
||||||
|
assert len(results) == 1000
|
||||||
|
```
|
||||||
|
|
||||||
|
### Frontend Testing with Dash Test Framework
|
||||||
|
```python
|
||||||
|
# Dash component and callback testing
|
||||||
|
import pytest
|
||||||
|
from dash.testing.application_runners import import_app
|
||||||
|
from selenium.webdriver.common.by import By
|
||||||
|
from selenium.webdriver.support.ui import WebDriverWait
|
||||||
|
from selenium.webdriver.support import expected_conditions as EC
|
||||||
|
|
||||||
|
class TestJobApplicationDashboard:
|
||||||
|
"""Test Dash frontend components and workflows."""
|
||||||
|
|
||||||
|
def test_application_dashboard_renders(self, dash_duo):
|
||||||
|
"""Test application dashboard component renders correctly."""
|
||||||
|
|
||||||
|
from app.dash_app import create_app
|
||||||
|
app = create_app()
|
||||||
|
dash_duo.start_server(app)
|
||||||
|
|
||||||
|
# Verify main elements are present
|
||||||
|
dash_duo.wait_for_element("#application-dashboard", timeout=10)
|
||||||
|
assert dash_duo.find_element("#company-name-input")
|
||||||
|
assert dash_duo.find_element("#role-title-input")
|
||||||
|
assert dash_duo.find_element("#job-description-input")
|
||||||
|
assert dash_duo.find_element("#create-app-button")
|
||||||
|
|
||||||
|
def test_create_application_workflow(self, dash_duo, mock_api_client):
|
||||||
|
"""Test complete application creation workflow."""
|
||||||
|
|
||||||
|
from app.dash_app import create_app
|
||||||
|
app = create_app()
|
||||||
|
dash_duo.start_server(app)
|
||||||
|
|
||||||
|
# Fill out application form
|
||||||
|
company_input = dash_duo.find_element("#company-name-input")
|
||||||
|
company_input.send_keys("Google")
|
||||||
|
|
||||||
|
role_input = dash_duo.find_element("#role-title-input")
|
||||||
|
role_input.send_keys("Software Engineer")
|
||||||
|
|
||||||
|
description_input = dash_duo.find_element("#job-description-input")
|
||||||
|
description_input.send_keys("Python developer position with ML focus")
|
||||||
|
|
||||||
|
# Submit form
|
||||||
|
create_button = dash_duo.find_element("#create-app-button")
|
||||||
|
create_button.click()
|
||||||
|
|
||||||
|
# Wait for success notification
|
||||||
|
dash_duo.wait_for_text_to_equal("#notifications .notification-title", "Success!", timeout=10)
|
||||||
|
|
||||||
|
# Verify application appears in table
|
||||||
|
dash_duo.wait_for_element(".dash-table-container", timeout=5)
|
||||||
|
table_cells = dash_duo.find_elements(".dash-cell")
|
||||||
|
table_text = [cell.text for cell in table_cells]
|
||||||
|
assert "Google" in table_text
|
||||||
|
assert "Software Engineer" in table_text
|
||||||
|
|
||||||
|
def test_ai_document_generation_ui(self, dash_duo, mock_ai_service):
|
||||||
|
"""Test AI document generation interface."""
|
||||||
|
|
||||||
|
from app.dash_app import create_app
|
||||||
|
app = create_app()
|
||||||
|
dash_duo.start_server(app)
|
||||||
|
|
||||||
|
# Navigate to document generator
|
||||||
|
dash_duo.wait_for_element("#document-generator-tab", timeout=10)
|
||||||
|
dash_duo.find_element("#document-generator-tab").click()
|
||||||
|
|
||||||
|
# Select application and generate cover letter
|
||||||
|
application_select = dash_duo.find_element("#application-select")
|
||||||
|
application_select.click()
|
||||||
|
|
||||||
|
# Select first option
|
||||||
|
dash_duo.find_element("#application-select option[value='app-1']").click()
|
||||||
|
|
||||||
|
# Click generate button
|
||||||
|
generate_button = dash_duo.find_element("#generate-letter-button")
|
||||||
|
generate_button.click()
|
||||||
|
|
||||||
|
# Wait for loading state
|
||||||
|
WebDriverWait(dash_duo.driver, 10).until(
|
||||||
|
EC.text_to_be_present_in_element((By.ID, "generate-letter-button"), "Generating...")
|
||||||
|
)
|
||||||
|
|
||||||
|
# Wait for generated content
|
||||||
|
WebDriverWait(dash_duo.driver, 30).until(
|
||||||
|
lambda driver: len(dash_duo.find_element("#generated-letter-output").get_attribute("value")) > 100
|
||||||
|
)
|
||||||
|
|
||||||
|
# Verify cover letter content
|
||||||
|
cover_letter = dash_duo.find_element("#generated-letter-output").get_attribute("value")
|
||||||
|
assert len(cover_letter) > 200
|
||||||
|
assert "Dear Hiring Manager" in cover_letter
|
||||||
|
|
||||||
|
class TestUserWorkflows:
|
||||||
|
"""Test complete user workflows end-to-end."""
|
||||||
|
|
||||||
|
def test_complete_job_application_workflow(self, dash_duo, mock_services):
|
||||||
|
"""Test complete workflow from login to application creation to document generation."""
|
||||||
|
|
||||||
|
from app.dash_app import create_app
|
||||||
|
app = create_app()
|
||||||
|
dash_duo.start_server(app)
|
||||||
|
|
||||||
|
# 1. Login process
|
||||||
|
dash_duo.find_element("#email-input").send_keys("test@jobforge.com")
|
||||||
|
dash_duo.find_element("#password-input").send_keys("testpassword")
|
||||||
|
dash_duo.find_element("#login-button").click()
|
||||||
|
|
||||||
|
# 2. Create application
|
||||||
|
dash_duo.wait_for_element("#application-dashboard", timeout=10)
|
||||||
|
dash_duo.find_element("#company-name-input").send_keys("Microsoft")
|
||||||
|
dash_duo.find_element("#role-title-input").send_keys("Senior Developer")
|
||||||
|
dash_duo.find_element("#job-description-input").send_keys("Senior developer role with Azure experience")
|
||||||
|
dash_duo.find_element("#create-app-button").click()
|
||||||
|
|
||||||
|
# 3. Verify application created
|
||||||
|
dash_duo.wait_for_text_to_equal("#notifications .notification-title", "Success!", timeout=10)
|
||||||
|
|
||||||
|
# 4. Update application status
|
||||||
|
dash_duo.find_element(".status-dropdown").click()
|
||||||
|
dash_duo.find_element("option[value='applied']").click()
|
||||||
|
|
||||||
|
# 5. Generate cover letter
|
||||||
|
dash_duo.find_element("#document-generator-tab").click()
|
||||||
|
dash_duo.find_element("#generate-letter-button").click()
|
||||||
|
|
||||||
|
# 6. Download documents
|
||||||
|
dash_duo.wait_for_element("#download-pdf-button", timeout=30)
|
||||||
|
dash_duo.find_element("#download-pdf-button").click()
|
||||||
|
|
||||||
|
# Verify complete workflow success
|
||||||
|
assert "application-created" in dash_duo.driver.current_url
|
||||||
|
```
|
||||||
|
|
||||||
|
### Performance Testing for Job Forge
|
||||||
|
```python
|
||||||
|
# Performance testing with pytest-benchmark
|
||||||
|
import pytest
|
||||||
|
import asyncio
|
||||||
|
from concurrent.futures import ThreadPoolExecutor
|
||||||
|
from app.services.ai.claude_service import ClaudeService
|
||||||
|
|
||||||
|
class TestJobForgePerformance:
|
||||||
|
"""Performance tests for Job Forge specific functionality."""
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_concurrent_ai_generation(self, benchmark):
|
||||||
|
"""Test AI cover letter generation under concurrent load."""
|
||||||
|
|
||||||
|
async def generate_multiple_letters():
|
||||||
|
claude = ClaudeService()
|
||||||
|
tasks = []
|
||||||
|
|
||||||
|
for i in range(10):
|
||||||
|
task = claude.generate_cover_letter(
|
||||||
|
user_profile={"full_name": f"User{i}", "experience_summary": "3 years Python"},
|
||||||
|
job_description=f"Python developer position {i}"
|
||||||
|
)
|
||||||
|
tasks.append(task)
|
||||||
|
|
||||||
|
results = await asyncio.gather(*tasks)
|
||||||
|
return results
|
||||||
|
|
||||||
|
# Benchmark concurrent AI generation
|
||||||
|
results = benchmark(asyncio.run, generate_multiple_letters())
|
||||||
|
|
||||||
|
# Verify all requests completed successfully
|
||||||
|
assert len(results) == 10
|
||||||
|
assert all(len(result) > 100 for result in results)
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_database_query_performance(self, benchmark, test_db):
|
||||||
|
"""Test database query performance under load."""
|
||||||
|
|
||||||
|
from app.crud.application import get_user_applications
|
||||||
|
|
||||||
|
# Create test data
|
||||||
|
user_id = "perf-user"
|
||||||
|
await create_test_applications(test_db, user_id, count=1000)
|
||||||
|
|
||||||
|
# Benchmark query performance
|
||||||
|
result = benchmark(
|
||||||
|
lambda: asyncio.run(get_user_applications(test_db, user_id))
|
||||||
|
)
|
||||||
|
|
||||||
|
assert len(result) == 1000
|
||||||
|
|
||||||
|
def test_api_response_times(self, client, test_user_token, benchmark):
|
||||||
|
"""Test API endpoint response times."""
|
||||||
|
|
||||||
|
def make_api_calls():
|
||||||
|
responses = []
|
||||||
|
|
||||||
|
# Test multiple endpoint calls
|
||||||
|
for _ in range(50):
|
||||||
|
response = client.get(
|
||||||
|
"/api/applications",
|
||||||
|
headers={"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
)
|
||||||
|
responses.append(response)
|
||||||
|
|
||||||
|
return responses
|
||||||
|
|
||||||
|
responses = benchmark(make_api_calls)
|
||||||
|
|
||||||
|
# Verify all responses successful and fast
|
||||||
|
assert all(r.status_code == 200 for r in responses)
|
||||||
|
|
||||||
|
# Check average response time (should be < 100ms)
|
||||||
|
avg_time = sum(r.elapsed.total_seconds() for r in responses) / len(responses)
|
||||||
|
assert avg_time < 0.1
|
||||||
|
```
|
||||||
|
|
||||||
|
## Manual Testing Checklist for Job Forge
|
||||||
|
|
||||||
|
### Cross-Browser Testing
|
||||||
|
```yaml
|
||||||
|
browsers_to_test:
|
||||||
|
- chrome_latest
|
||||||
|
- firefox_latest
|
||||||
|
- safari_latest
|
||||||
|
- edge_latest
|
||||||
|
|
||||||
|
mobile_devices:
|
||||||
|
- iphone_safari
|
||||||
|
- android_chrome
|
||||||
|
- tablet_responsiveness
|
||||||
|
|
||||||
|
job_forge_specific_testing:
|
||||||
|
- application_form_functionality
|
||||||
|
- ai_document_generation_interface
|
||||||
|
- application_status_workflow
|
||||||
|
- multi_user_data_isolation
|
||||||
|
- document_download_functionality
|
||||||
|
```
|
||||||
|
|
||||||
|
### Job Application Workflow Testing
|
||||||
|
```yaml
|
||||||
|
critical_user_journeys:
|
||||||
|
user_registration_and_profile:
|
||||||
|
- [ ] user_can_register_new_account
|
||||||
|
- [ ] user_can_complete_profile_setup
|
||||||
|
- [ ] user_profile_data_saved_correctly
|
||||||
|
- [ ] user_can_login_and_logout
|
||||||
|
|
||||||
|
application_management:
|
||||||
|
- [ ] user_can_create_new_job_application
|
||||||
|
- [ ] application_data_validates_correctly
|
||||||
|
- [ ] user_can_view_application_list
|
||||||
|
- [ ] user_can_update_application_status
|
||||||
|
- [ ] user_can_delete_applications
|
||||||
|
- [ ] application_search_and_filtering_works
|
||||||
|
|
||||||
|
ai_document_generation:
|
||||||
|
- [ ] cover_letter_generates_successfully
|
||||||
|
- [ ] generated_content_relevant_and_professional
|
||||||
|
- [ ] user_can_edit_generated_content
|
||||||
|
- [ ] user_can_download_pdf_and_docx
|
||||||
|
- [ ] generation_works_with_different_job_descriptions
|
||||||
|
- [ ] ai_service_errors_handled_gracefully
|
||||||
|
|
||||||
|
multi_tenancy_validation:
|
||||||
|
- [ ] users_only_see_own_applications
|
||||||
|
- [ ] no_cross_user_data_leakage
|
||||||
|
- [ ] user_actions_properly_isolated
|
||||||
|
- [ ] concurrent_users_do_not_interfere
|
||||||
|
```
|
||||||
|
|
||||||
|
### Accessibility Testing for Job Application Interface
|
||||||
|
```yaml
|
||||||
|
accessibility_checklist:
|
||||||
|
keyboard_navigation:
|
||||||
|
- [ ] all_forms_accessible_via_keyboard
|
||||||
|
- [ ] application_table_navigable_with_keys
|
||||||
|
- [ ] document_generation_interface_keyboard_accessible
|
||||||
|
- [ ] logical_tab_order_throughout_application
|
||||||
|
- [ ] no_keyboard_traps_in_modals
|
||||||
|
|
||||||
|
screen_reader_support:
|
||||||
|
- [ ] form_labels_properly_associated
|
||||||
|
- [ ] application_status_announced_correctly
|
||||||
|
- [ ] ai_generation_progress_communicated
|
||||||
|
- [ ] error_messages_read_by_screen_readers
|
||||||
|
- [ ] table_data_structure_clear
|
||||||
|
|
||||||
|
visual_accessibility:
|
||||||
|
- [ ] sufficient_color_contrast_throughout
|
||||||
|
- [ ] status_indicators_not_color_only
|
||||||
|
- [ ] text_readable_at_200_percent_zoom
|
||||||
|
- [ ] focus_indicators_visible
|
||||||
|
```
|
||||||
|
|
||||||
|
## Quality Gates for Job Forge
|
||||||
|
|
||||||
|
### Pre-Deployment Checklist
|
||||||
|
```yaml
|
||||||
|
automated_tests:
|
||||||
|
- [ ] pytest_unit_tests_passing_100_percent
|
||||||
|
- [ ] fastapi_integration_tests_passing
|
||||||
|
- [ ] database_rls_tests_passing
|
||||||
|
- [ ] ai_service_integration_tests_passing
|
||||||
|
- [ ] dash_frontend_tests_passing
|
||||||
|
|
||||||
|
manual_validation:
|
||||||
|
- [ ] job_application_workflows_validated
|
||||||
|
- [ ] ai_document_generation_quality_verified
|
||||||
|
- [ ] multi_user_isolation_manually_tested
|
||||||
|
- [ ] cross_browser_compatibility_confirmed
|
||||||
|
- [ ] performance_under_concurrent_load_tested
|
||||||
|
|
||||||
|
security_validation:
|
||||||
|
- [ ] user_authentication_working_correctly
|
||||||
|
- [ ] rls_policies_preventing_data_leakage
|
||||||
|
- [ ] api_input_validation_preventing_injection
|
||||||
|
- [ ] ai_api_keys_properly_secured
|
||||||
|
- [ ] user_data_encrypted_and_protected
|
||||||
|
|
||||||
|
performance_validation:
|
||||||
|
- [ ] api_response_times_under_500ms
|
||||||
|
- [ ] ai_generation_completes_under_30_seconds
|
||||||
|
- [ ] dashboard_loads_under_3_seconds
|
||||||
|
- [ ] concurrent_user_performance_acceptable
|
||||||
|
- [ ] database_queries_optimized
|
||||||
|
```
|
||||||
|
|
||||||
|
## Test Data Management for Job Forge
|
||||||
|
```python
|
||||||
|
# Job Forge specific test data factory
|
||||||
|
from typing import Dict, List
|
||||||
|
import uuid
|
||||||
|
from datetime import datetime
|
||||||
|
|
||||||
|
class JobForgeTestDataFactory:
|
||||||
|
"""Factory for creating Job Forge test data."""
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def create_user(overrides: Dict = None) -> Dict:
|
||||||
|
"""Create test user data."""
|
||||||
|
base_user = {
|
||||||
|
"id": str(uuid.uuid4()),
|
||||||
|
"email": f"test{int(datetime.now().timestamp())}@jobforge.com",
|
||||||
|
"password_hash": "hashed_password_123",
|
||||||
|
"first_name": "Test",
|
||||||
|
"last_name": "User",
|
||||||
|
"profile": {
|
||||||
|
"full_name": "Test User",
|
||||||
|
"experience_summary": "3 years software development",
|
||||||
|
"key_skills": ["Python", "FastAPI", "PostgreSQL"]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if overrides:
|
||||||
|
base_user.update(overrides)
|
||||||
|
|
||||||
|
return base_user
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def create_application(user_id: str, overrides: Dict = None) -> Dict:
|
||||||
|
"""Create test job application data."""
|
||||||
|
base_application = {
|
||||||
|
"id": str(uuid.uuid4()),
|
||||||
|
"user_id": user_id,
|
||||||
|
"company_name": "TechCorp",
|
||||||
|
"role_title": "Software Developer",
|
||||||
|
"status": "draft",
|
||||||
|
"job_description": "We are looking for a talented software developer...",
|
||||||
|
"cover_letter": None,
|
||||||
|
"created_at": datetime.now(),
|
||||||
|
"updated_at": datetime.now()
|
||||||
|
}
|
||||||
|
|
||||||
|
if overrides:
|
||||||
|
base_application.update(overrides)
|
||||||
|
|
||||||
|
return base_application
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def create_ai_response() -> str:
|
||||||
|
"""Create mock AI-generated cover letter."""
|
||||||
|
return """
|
||||||
|
Dear Hiring Manager,
|
||||||
|
|
||||||
|
I am writing to express my strong interest in the Software Developer position at TechCorp.
|
||||||
|
With my 3 years of experience in Python development and expertise in FastAPI and PostgreSQL,
|
||||||
|
I am confident I would be a valuable addition to your team.
|
||||||
|
|
||||||
|
In my previous role, I have successfully built and deployed web applications using modern
|
||||||
|
Python frameworks, which aligns perfectly with your requirements...
|
||||||
|
|
||||||
|
Sincerely,
|
||||||
|
Test User
|
||||||
|
"""
|
||||||
|
|
||||||
|
# Database test helpers for Job Forge
|
||||||
|
async def setup_job_forge_test_db(db_session):
|
||||||
|
"""Setup test database with Job Forge specific data."""
|
||||||
|
|
||||||
|
# Create test users
|
||||||
|
users = [
|
||||||
|
JobForgeTestDataFactory.create_user({"email": "user1@test.com"}),
|
||||||
|
JobForgeTestDataFactory.create_user({"email": "user2@test.com"})
|
||||||
|
]
|
||||||
|
|
||||||
|
# Create applications for each user
|
||||||
|
for user in users:
|
||||||
|
applications = [
|
||||||
|
JobForgeTestDataFactory.create_application(
|
||||||
|
user["id"],
|
||||||
|
{"company_name": f"Company{i}", "role_title": f"Role{i}"}
|
||||||
|
)
|
||||||
|
for i in range(5)
|
||||||
|
]
|
||||||
|
|
||||||
|
await create_test_applications(db_session, applications)
|
||||||
|
|
||||||
|
async def cleanup_job_forge_test_db(db_session):
|
||||||
|
"""Clean up test database."""
|
||||||
|
await db_session.execute("TRUNCATE TABLE applications, users CASCADE")
|
||||||
|
await db_session.commit()
|
||||||
|
```
|
||||||
|
|
||||||
|
## Handoff to DevOps
|
||||||
|
```yaml
|
||||||
|
tested_deliverables:
|
||||||
|
- [ ] all_job_application_features_tested_validated
|
||||||
|
- [ ] ai_document_generation_quality_approved
|
||||||
|
- [ ] multi_tenant_security_verified
|
||||||
|
- [ ] performance_under_concurrent_load_tested
|
||||||
|
- [ ] test_results_documented_with_coverage_reports
|
||||||
|
|
||||||
|
deployment_requirements:
|
||||||
|
- postgresql_with_rls_policies_tested
|
||||||
|
- ai_api_keys_configuration_validated
|
||||||
|
- environment_variables_tested
|
||||||
|
- database_migrations_validated
|
||||||
|
- docker_containerization_tested
|
||||||
|
|
||||||
|
go_no_go_recommendation:
|
||||||
|
- overall_quality_assessment_for_job_forge
|
||||||
|
- ai_integration_reliability_evaluation
|
||||||
|
- multi_tenant_security_risk_assessment
|
||||||
|
- performance_scalability_evaluation
|
||||||
|
- deployment_readiness_confirmation
|
||||||
|
|
||||||
|
known_limitations:
|
||||||
|
- ai_generation_response_time_variability
|
||||||
|
- rate_limiting_considerations_for_ai_apis
|
||||||
|
- concurrent_user_limits_for_prototype_phase
|
||||||
|
```
|
||||||
|
|
||||||
|
Focus on **comprehensive testing of AI-powered job application workflows** with **strong emphasis on multi-tenant security** and **reliable AI service integration**.
|
||||||
283
.claude/agents/technical-lead.md
Normal file
283
.claude/agents/technical-lead.md
Normal file
@@ -0,0 +1,283 @@
|
|||||||
|
# Technical Lead Agent - Job Forge
|
||||||
|
|
||||||
|
## Role
|
||||||
|
You are the **Technical Lead** responsible for architecture decisions, code quality, and technical guidance for the Job Forge AI-powered job application web application.
|
||||||
|
|
||||||
|
## Core Responsibilities
|
||||||
|
|
||||||
|
### 1. Architecture & Design
|
||||||
|
- Design Python/FastAPI system architecture
|
||||||
|
- Create comprehensive API specifications
|
||||||
|
- Define PostgreSQL database schema with RLS
|
||||||
|
- Set Python coding standards and best practices
|
||||||
|
- Guide AI service integration patterns
|
||||||
|
|
||||||
|
### 2. Technical Decision Making
|
||||||
|
- Evaluate Python ecosystem choices
|
||||||
|
- Resolve technical implementation conflicts
|
||||||
|
- Guide FastAPI and Dash implementation approaches
|
||||||
|
- Review and approve major architectural changes
|
||||||
|
- Ensure security best practices for job application data
|
||||||
|
|
||||||
|
### 3. Quality Assurance & Project Structure
|
||||||
|
- Python code review standards
|
||||||
|
- pytest testing strategy
|
||||||
|
- FastAPI performance requirements
|
||||||
|
- Multi-tenant security guidelines
|
||||||
|
- AI integration documentation standards
|
||||||
|
- **MANDATORY**: Enforce clean project structure (only necessary files in root)
|
||||||
|
- **MANDATORY**: Document all issues in `docs/lessons-learned/` with solutions
|
||||||
|
|
||||||
|
## Technology Stack - Job Forge
|
||||||
|
|
||||||
|
### Backend
|
||||||
|
- **FastAPI + Python 3.12** - Modern async web framework
|
||||||
|
- **PostgreSQL 16 + pgvector** - Database with AI embeddings
|
||||||
|
- **SQLAlchemy + Alembic** - ORM and migrations
|
||||||
|
- **Pydantic** - Data validation and serialization
|
||||||
|
- **JWT + Passlib** - Authentication and password hashing
|
||||||
|
|
||||||
|
### Frontend
|
||||||
|
- **Dash + Mantine** - Interactive Python web applications
|
||||||
|
- **Plotly** - Data visualization and charts
|
||||||
|
- **Bootstrap Components** - Responsive design
|
||||||
|
- **Dash Bootstrap Components** - UI component library
|
||||||
|
|
||||||
|
### AI & ML Integration
|
||||||
|
- **Claude API** - Document generation and analysis
|
||||||
|
- **OpenAI API** - Embeddings and completions
|
||||||
|
- **pgvector** - Vector similarity search
|
||||||
|
- **asyncio** - Async AI service calls
|
||||||
|
|
||||||
|
### Infrastructure
|
||||||
|
- **Docker + Docker Compose** - Containerization
|
||||||
|
- **Direct Server Deployment** - Prototype hosting
|
||||||
|
- **PostgreSQL RLS** - Multi-tenant security
|
||||||
|
- **Simple logging** - Application monitoring
|
||||||
|
|
||||||
|
## Development Standards
|
||||||
|
|
||||||
|
### Code Quality
|
||||||
|
```python
|
||||||
|
# Example FastAPI endpoint structure
|
||||||
|
from fastapi import APIRouter, Depends, HTTPException, status
|
||||||
|
from sqlalchemy.ext.asyncio import AsyncSession
|
||||||
|
from app.core.security import get_current_user
|
||||||
|
from app.models.user import User
|
||||||
|
from app.schemas.user import UserCreate, UserResponse
|
||||||
|
from app.crud.user import create_user, get_user_by_email
|
||||||
|
from app.core.database import get_db
|
||||||
|
|
||||||
|
router = APIRouter()
|
||||||
|
|
||||||
|
@router.post("/users", response_model=UserResponse, status_code=status.HTTP_201_CREATED)
|
||||||
|
async def create_new_user(
|
||||||
|
user_data: UserCreate,
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
) -> UserResponse:
|
||||||
|
"""Create a new user account with proper validation."""
|
||||||
|
|
||||||
|
# 1. Check if user already exists
|
||||||
|
existing_user = await get_user_by_email(db, user_data.email)
|
||||||
|
if existing_user:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_400_BAD_REQUEST,
|
||||||
|
detail="Email already registered"
|
||||||
|
)
|
||||||
|
|
||||||
|
# 2. Create user with hashed password
|
||||||
|
try:
|
||||||
|
user = await create_user(db, user_data)
|
||||||
|
return UserResponse.from_orm(user)
|
||||||
|
except Exception as e:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
|
||||||
|
detail="Failed to create user"
|
||||||
|
)
|
||||||
|
```
|
||||||
|
|
||||||
|
### Database Design - Job Forge Specific
|
||||||
|
```sql
|
||||||
|
-- Job Forge multi-tenant schema with RLS
|
||||||
|
CREATE TABLE users (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
email VARCHAR(255) UNIQUE NOT NULL,
|
||||||
|
password_hash VARCHAR(255) NOT NULL,
|
||||||
|
first_name VARCHAR(100),
|
||||||
|
last_name VARCHAR(100),
|
||||||
|
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
|
||||||
|
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
|
||||||
|
);
|
||||||
|
|
||||||
|
CREATE TABLE applications (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
|
||||||
|
company_name VARCHAR(255) NOT NULL,
|
||||||
|
role_title VARCHAR(255) NOT NULL,
|
||||||
|
status VARCHAR(50) DEFAULT 'draft',
|
||||||
|
job_description TEXT,
|
||||||
|
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
|
||||||
|
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
|
||||||
|
);
|
||||||
|
|
||||||
|
-- Enable RLS for multi-tenancy
|
||||||
|
ALTER TABLE applications ENABLE ROW LEVEL SECURITY;
|
||||||
|
|
||||||
|
CREATE POLICY applications_user_isolation ON applications
|
||||||
|
FOR ALL TO authenticated
|
||||||
|
USING (user_id = current_setting('app.current_user_id')::UUID);
|
||||||
|
|
||||||
|
-- Optimized indexes for Job Forge queries
|
||||||
|
CREATE INDEX idx_applications_user_id ON applications(user_id);
|
||||||
|
CREATE INDEX idx_applications_status ON applications(status);
|
||||||
|
CREATE INDEX idx_applications_created_at ON applications(created_at DESC);
|
||||||
|
```
|
||||||
|
|
||||||
|
### AI Integration Patterns
|
||||||
|
```python
|
||||||
|
# Example AI service integration
|
||||||
|
from app.services.ai.claude_service import ClaudeService
|
||||||
|
from app.services.ai.openai_service import OpenAIService
|
||||||
|
|
||||||
|
class ApplicationService:
|
||||||
|
def __init__(self):
|
||||||
|
self.claude = ClaudeService()
|
||||||
|
self.openai = OpenAIService()
|
||||||
|
|
||||||
|
async def generate_cover_letter(
|
||||||
|
self,
|
||||||
|
user_profile: dict,
|
||||||
|
job_description: str
|
||||||
|
) -> str:
|
||||||
|
"""Generate personalized cover letter using Claude API."""
|
||||||
|
|
||||||
|
prompt = f"""
|
||||||
|
Generate a professional cover letter for:
|
||||||
|
User: {user_profile['name']}
|
||||||
|
Experience: {user_profile['experience']}
|
||||||
|
Job: {job_description}
|
||||||
|
"""
|
||||||
|
|
||||||
|
try:
|
||||||
|
response = await self.claude.complete(prompt)
|
||||||
|
return response.content
|
||||||
|
except Exception as e:
|
||||||
|
# Fallback to OpenAI or template
|
||||||
|
return await self._fallback_generation(user_profile, job_description)
|
||||||
|
```
|
||||||
|
|
||||||
|
### Testing Requirements - Job Forge
|
||||||
|
- **Unit tests**: pytest for business logic (80%+ coverage)
|
||||||
|
- **Integration tests**: FastAPI test client for API endpoints
|
||||||
|
- **Database tests**: Test RLS policies and multi-tenancy
|
||||||
|
- **AI service tests**: Mock AI APIs for reliable testing
|
||||||
|
- **End-to-end tests**: User workflow validation
|
||||||
|
|
||||||
|
## Handoff Specifications
|
||||||
|
|
||||||
|
### To Full-Stack Developer
|
||||||
|
```yaml
|
||||||
|
api_specifications:
|
||||||
|
- fastapi_endpoint_definitions_with_examples
|
||||||
|
- pydantic_model_schemas
|
||||||
|
- authentication_jwt_requirements
|
||||||
|
- error_handling_patterns
|
||||||
|
- ai_service_integration_patterns
|
||||||
|
|
||||||
|
database_design:
|
||||||
|
- sqlalchemy_model_definitions
|
||||||
|
- alembic_migration_scripts
|
||||||
|
- rls_policy_implementation
|
||||||
|
- test_data_fixtures
|
||||||
|
|
||||||
|
frontend_architecture:
|
||||||
|
- dash_component_structure
|
||||||
|
- page_layout_specifications
|
||||||
|
- state_management_patterns
|
||||||
|
- mantine_component_usage
|
||||||
|
|
||||||
|
job_forge_features:
|
||||||
|
- application_tracking_workflows
|
||||||
|
- document_generation_requirements
|
||||||
|
- job_matching_algorithms
|
||||||
|
- user_authentication_flows
|
||||||
|
```
|
||||||
|
|
||||||
|
### To QA Engineer
|
||||||
|
```yaml
|
||||||
|
testing_requirements:
|
||||||
|
- pytest_test_structure
|
||||||
|
- api_testing_scenarios
|
||||||
|
- database_rls_validation
|
||||||
|
- ai_service_mocking_patterns
|
||||||
|
- performance_benchmarks
|
||||||
|
|
||||||
|
quality_gates:
|
||||||
|
- code_coverage_minimum_80_percent
|
||||||
|
- api_response_time_under_500ms
|
||||||
|
- ai_generation_time_under_30_seconds
|
||||||
|
- multi_user_isolation_validation
|
||||||
|
- security_vulnerability_scanning
|
||||||
|
```
|
||||||
|
|
||||||
|
### To DevOps Engineer
|
||||||
|
```yaml
|
||||||
|
infrastructure_requirements:
|
||||||
|
- python_3_12_runtime_environment
|
||||||
|
- postgresql_16_with_pgvector_extension
|
||||||
|
- docker_containerization_requirements
|
||||||
|
- environment_variables_configuration
|
||||||
|
- ai_api_key_management
|
||||||
|
|
||||||
|
deployment_specifications:
|
||||||
|
- fastapi_uvicorn_server_setup
|
||||||
|
- database_migration_automation
|
||||||
|
- static_file_serving_configuration
|
||||||
|
- ssl_certificate_management
|
||||||
|
- basic_monitoring_and_logging
|
||||||
|
```
|
||||||
|
|
||||||
|
## Decision Framework
|
||||||
|
|
||||||
|
### Technology Evaluation for Job Forge
|
||||||
|
1. **Python Ecosystem**: Leverage existing AI/ML libraries
|
||||||
|
2. **Performance**: Async FastAPI for concurrent AI calls
|
||||||
|
3. **AI Integration**: Native Python AI service clients
|
||||||
|
4. **Multi-tenancy**: PostgreSQL RLS for data isolation
|
||||||
|
5. **Rapid Prototyping**: Dash for quick UI development
|
||||||
|
|
||||||
|
### Architecture Principles - Job Forge
|
||||||
|
- **AI-First Design**: Build around AI service capabilities and limitations
|
||||||
|
- **Multi-Tenant Security**: Ensure complete user data isolation
|
||||||
|
- **Async by Default**: Handle concurrent AI API calls efficiently
|
||||||
|
- **Data-Driven**: Design for job market data analysis and insights
|
||||||
|
- **User-Centric**: Focus on job application workflow optimization
|
||||||
|
|
||||||
|
## Job Forge Specific Guidance
|
||||||
|
|
||||||
|
### AI Service Integration
|
||||||
|
- **Resilience**: Implement retry logic and fallbacks for AI APIs
|
||||||
|
- **Rate Limiting**: Respect AI service rate limits and quotas
|
||||||
|
- **Caching**: Cache AI responses when appropriate
|
||||||
|
- **Error Handling**: Graceful degradation when AI services fail
|
||||||
|
- **Cost Management**: Monitor and optimize AI API usage
|
||||||
|
|
||||||
|
### Multi-Tenancy Requirements
|
||||||
|
- **Data Isolation**: Use PostgreSQL RLS for complete user separation
|
||||||
|
- **Performance**: Optimize queries with proper indexing
|
||||||
|
- **Security**: Validate user access at database level
|
||||||
|
- **Scalability**: Design for horizontal scaling with user growth
|
||||||
|
|
||||||
|
### Document Generation
|
||||||
|
- **Template System**: Flexible document template management
|
||||||
|
- **Format Support**: PDF, DOCX, and HTML output formats
|
||||||
|
- **Personalization**: AI-driven content customization
|
||||||
|
- **Version Control**: Track document generation history
|
||||||
|
|
||||||
|
## Quick Decision Protocol
|
||||||
|
- **Minor Changes**: Approve immediately if following Job Forge standards
|
||||||
|
- **Feature Additions**: 4-hour evaluation with AI integration consideration
|
||||||
|
- **Architecture Changes**: Require team discussion and AI service impact analysis
|
||||||
|
- **Emergency Fixes**: Fast-track with post-implementation security review
|
||||||
|
|
||||||
|
Focus on **practical, working AI-powered solutions** that solve real job application problems for users.
|
||||||
53
.claude/settings_local_json.json
Normal file
53
.claude/settings_local_json.json
Normal file
@@ -0,0 +1,53 @@
|
|||||||
|
{
|
||||||
|
"permissions": {
|
||||||
|
"allow": [
|
||||||
|
"Bash(*)",
|
||||||
|
"Git(*)",
|
||||||
|
"Node(*)",
|
||||||
|
"npm(*)",
|
||||||
|
"Docker(*)"
|
||||||
|
],
|
||||||
|
"deny": []
|
||||||
|
},
|
||||||
|
"project": {
|
||||||
|
"name": "Job Forge Project",
|
||||||
|
"type": "web-application",
|
||||||
|
"tech_stack": [
|
||||||
|
"python3.12",
|
||||||
|
"fastapi",
|
||||||
|
"dash",
|
||||||
|
"mantine",
|
||||||
|
"postgresql"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"team": {
|
||||||
|
"main_orchestrator": "CLAUDE.md",
|
||||||
|
"specialist_agents": [
|
||||||
|
"agents/technical-lead.md",
|
||||||
|
"agents/full-stack-developer.md",
|
||||||
|
"agents/devops.md",
|
||||||
|
"agents/qa.md"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"templates": {
|
||||||
|
"api_documentation": "templates/api-docs.md",
|
||||||
|
"testing_guide": "templates/testing.md",
|
||||||
|
"deployment_guide": "templates/deployment.md"
|
||||||
|
},
|
||||||
|
"workflow": {
|
||||||
|
"sprint_length": "1_week",
|
||||||
|
"deployment_strategy": "continuous",
|
||||||
|
"quality_gates": true,
|
||||||
|
"default_agent": "CLAUDE.md"
|
||||||
|
},
|
||||||
|
"development": {
|
||||||
|
"environments": [
|
||||||
|
"development",
|
||||||
|
"staging",
|
||||||
|
"production"
|
||||||
|
],
|
||||||
|
"testing_required": true,
|
||||||
|
"code_review_required": true
|
||||||
|
},
|
||||||
|
"$schema": "https://json.schemastore.org/claude-code-settings.json"
|
||||||
|
}
|
||||||
246
.claude/templates/api-docs.md
Normal file
246
.claude/templates/api-docs.md
Normal file
@@ -0,0 +1,246 @@
|
|||||||
|
# API Endpoint: [Method] [Endpoint Path]
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
Brief description of what this endpoint does and when to use it.
|
||||||
|
|
||||||
|
## Authentication
|
||||||
|
- **Required**: Yes/No
|
||||||
|
- **Type**: Bearer Token / API Key / None
|
||||||
|
- **Permissions**: List required permissions
|
||||||
|
|
||||||
|
## Request
|
||||||
|
|
||||||
|
### HTTP Method & URL
|
||||||
|
```http
|
||||||
|
[METHOD] /api/v1/[endpoint]
|
||||||
|
```
|
||||||
|
|
||||||
|
### Headers
|
||||||
|
```http
|
||||||
|
Content-Type: application/json
|
||||||
|
Authorization: Bearer <your-token>
|
||||||
|
```
|
||||||
|
|
||||||
|
### Path Parameters
|
||||||
|
| Parameter | Type | Required | Description |
|
||||||
|
|-----------|------|----------|-------------|
|
||||||
|
| id | string | Yes | Unique identifier |
|
||||||
|
|
||||||
|
### Query Parameters
|
||||||
|
| Parameter | Type | Required | Default | Description |
|
||||||
|
|-----------|------|----------|---------|-------------|
|
||||||
|
| page | number | No | 1 | Page number for pagination |
|
||||||
|
| limit | number | No | 20 | Number of items per page |
|
||||||
|
|
||||||
|
### Request Body
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"field1": "string",
|
||||||
|
"field2": "number",
|
||||||
|
"field3": {
|
||||||
|
"nested": "object"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Request Schema
|
||||||
|
| Field | Type | Required | Validation | Description |
|
||||||
|
|-------|------|----------|------------|-------------|
|
||||||
|
| field1 | string | Yes | 1-100 chars | Field description |
|
||||||
|
| field2 | number | No | > 0 | Field description |
|
||||||
|
|
||||||
|
## Response
|
||||||
|
|
||||||
|
### Success Response (200/201)
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"success": true,
|
||||||
|
"data": {
|
||||||
|
"id": "uuid",
|
||||||
|
"field1": "string",
|
||||||
|
"field2": "number",
|
||||||
|
"createdAt": "2024-01-01T00:00:00Z",
|
||||||
|
"updatedAt": "2024-01-01T00:00:00Z"
|
||||||
|
},
|
||||||
|
"meta": {
|
||||||
|
"pagination": {
|
||||||
|
"page": 1,
|
||||||
|
"limit": 20,
|
||||||
|
"total": 100,
|
||||||
|
"pages": 5
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Error Responses
|
||||||
|
|
||||||
|
#### 400 Bad Request
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"success": false,
|
||||||
|
"error": {
|
||||||
|
"code": "VALIDATION_ERROR",
|
||||||
|
"message": "Invalid input data",
|
||||||
|
"details": [
|
||||||
|
{
|
||||||
|
"field": "email",
|
||||||
|
"message": "Invalid email format"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 401 Unauthorized
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"success": false,
|
||||||
|
"error": {
|
||||||
|
"code": "UNAUTHORIZED",
|
||||||
|
"message": "Authentication required"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 403 Forbidden
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"success": false,
|
||||||
|
"error": {
|
||||||
|
"code": "FORBIDDEN",
|
||||||
|
"message": "Insufficient permissions"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 404 Not Found
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"success": false,
|
||||||
|
"error": {
|
||||||
|
"code": "NOT_FOUND",
|
||||||
|
"message": "Resource not found"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 500 Internal Server Error
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"success": false,
|
||||||
|
"error": {
|
||||||
|
"code": "INTERNAL_ERROR",
|
||||||
|
"message": "Something went wrong"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
## Rate Limiting
|
||||||
|
- **Limit**: 1000 requests per hour per user
|
||||||
|
- **Headers**:
|
||||||
|
- `X-RateLimit-Limit`: Total requests allowed
|
||||||
|
- `X-RateLimit-Remaining`: Requests remaining
|
||||||
|
- `X-RateLimit-Reset`: Reset time (Unix timestamp)
|
||||||
|
|
||||||
|
## Examples
|
||||||
|
|
||||||
|
### cURL Example
|
||||||
|
```bash
|
||||||
|
curl -X [METHOD] "https://api.yourdomain.com/api/v1/[endpoint]" \
|
||||||
|
-H "Content-Type: application/json" \
|
||||||
|
-H "Authorization: Bearer your-token-here" \
|
||||||
|
-d '{
|
||||||
|
"field1": "example value",
|
||||||
|
"field2": 123
|
||||||
|
}'
|
||||||
|
```
|
||||||
|
|
||||||
|
### JavaScript Example
|
||||||
|
```javascript
|
||||||
|
const response = await fetch('/api/v1/[endpoint]', {
|
||||||
|
method: '[METHOD]',
|
||||||
|
headers: {
|
||||||
|
'Content-Type': 'application/json',
|
||||||
|
'Authorization': 'Bearer your-token-here'
|
||||||
|
},
|
||||||
|
body: JSON.stringify({
|
||||||
|
field1: 'example value',
|
||||||
|
field2: 123
|
||||||
|
})
|
||||||
|
});
|
||||||
|
|
||||||
|
const data = await response.json();
|
||||||
|
console.log(data);
|
||||||
|
```
|
||||||
|
|
||||||
|
### Python Example
|
||||||
|
```python
|
||||||
|
import requests
|
||||||
|
|
||||||
|
url = 'https://api.yourdomain.com/api/v1/[endpoint]'
|
||||||
|
headers = {
|
||||||
|
'Content-Type': 'application/json',
|
||||||
|
'Authorization': 'Bearer your-token-here'
|
||||||
|
}
|
||||||
|
data = {
|
||||||
|
'field1': 'example value',
|
||||||
|
'field2': 123
|
||||||
|
}
|
||||||
|
|
||||||
|
response = requests.[method](url, headers=headers, json=data)
|
||||||
|
result = response.json()
|
||||||
|
print(result)
|
||||||
|
```
|
||||||
|
|
||||||
|
## Testing
|
||||||
|
|
||||||
|
### Unit Test Example
|
||||||
|
```typescript
|
||||||
|
describe('[METHOD] /api/v1/[endpoint]', () => {
|
||||||
|
it('should return success with valid data', async () => {
|
||||||
|
const response = await request(app)
|
||||||
|
.[method]('/api/v1/[endpoint]')
|
||||||
|
.send({
|
||||||
|
field1: 'test value',
|
||||||
|
field2: 123
|
||||||
|
})
|
||||||
|
.expect(200);
|
||||||
|
|
||||||
|
expect(response.body.success).toBe(true);
|
||||||
|
expect(response.body.data).toMatchObject({
|
||||||
|
field1: 'test value',
|
||||||
|
field2: 123
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should return 400 for invalid data', async () => {
|
||||||
|
const response = await request(app)
|
||||||
|
.[method]('/api/v1/[endpoint]')
|
||||||
|
.send({
|
||||||
|
field1: '', // Invalid empty string
|
||||||
|
field2: -1 // Invalid negative number
|
||||||
|
})
|
||||||
|
.expect(400);
|
||||||
|
|
||||||
|
expect(response.body.success).toBe(false);
|
||||||
|
expect(response.body.error.code).toBe('VALIDATION_ERROR');
|
||||||
|
});
|
||||||
|
});
|
||||||
|
```
|
||||||
|
|
||||||
|
## Notes
|
||||||
|
- Additional implementation notes
|
||||||
|
- Performance considerations
|
||||||
|
- Security considerations
|
||||||
|
- Related endpoints
|
||||||
|
|
||||||
|
## Changelog
|
||||||
|
| Version | Date | Changes |
|
||||||
|
|---------|------|---------|
|
||||||
|
| 1.0.0 | 2024-01-01 | Initial implementation |
|
||||||
|
|
||||||
|
---
|
||||||
|
**Last Updated**: [Date]
|
||||||
|
**Reviewed By**: [Name]
|
||||||
|
**Next Review**: [Date]
|
||||||
298
.claude/templates/deployment.md
Normal file
298
.claude/templates/deployment.md
Normal file
@@ -0,0 +1,298 @@
|
|||||||
|
# Deployment Guide: [Release Version]
|
||||||
|
|
||||||
|
## Release Information
|
||||||
|
- **Version**: [Version Number]
|
||||||
|
- **Release Date**: [Date]
|
||||||
|
- **Release Type**: [Feature/Hotfix/Security/Maintenance]
|
||||||
|
- **Release Manager**: [Name]
|
||||||
|
|
||||||
|
## Release Summary
|
||||||
|
Brief description of what's being deployed, major features, and business impact.
|
||||||
|
|
||||||
|
### ✨ New Features
|
||||||
|
- [Feature 1]: Description and business value
|
||||||
|
- [Feature 2]: Description and business value
|
||||||
|
|
||||||
|
### 🐛 Bug Fixes
|
||||||
|
- [Bug Fix 1]: Description of issue resolved
|
||||||
|
- [Bug Fix 2]: Description of issue resolved
|
||||||
|
|
||||||
|
### 🔧 Technical Improvements
|
||||||
|
- [Improvement 1]: Performance/security/maintenance improvement
|
||||||
|
- [Improvement 2]: Infrastructure or code quality improvement
|
||||||
|
|
||||||
|
## Pre-Deployment Checklist
|
||||||
|
|
||||||
|
### ✅ Quality Assurance
|
||||||
|
- [ ] All automated tests passing (unit, integration, E2E)
|
||||||
|
- [ ] Manual testing completed and signed off
|
||||||
|
- [ ] Performance testing completed
|
||||||
|
- [ ] Security scan passed with no critical issues
|
||||||
|
- [ ] Cross-browser testing completed
|
||||||
|
- [ ] Mobile responsiveness verified
|
||||||
|
- [ ] Accessibility requirements met
|
||||||
|
|
||||||
|
### 🔐 Security & Compliance
|
||||||
|
- [ ] Security review completed
|
||||||
|
- [ ] Dependency vulnerabilities resolved
|
||||||
|
- [ ] Environment variables secured
|
||||||
|
- [ ] Database migration reviewed for data safety
|
||||||
|
- [ ] Backup procedures verified
|
||||||
|
- [ ] Compliance requirements met
|
||||||
|
|
||||||
|
### 📋 Documentation & Communication
|
||||||
|
- [ ] Release notes prepared
|
||||||
|
- [ ] API documentation updated
|
||||||
|
- [ ] User documentation updated
|
||||||
|
- [ ] Stakeholders notified of deployment
|
||||||
|
- [ ] Support team briefed on changes
|
||||||
|
- [ ] Rollback plan documented
|
||||||
|
|
||||||
|
### 🏗️ Infrastructure & Environment
|
||||||
|
- [ ] Staging environment matches production
|
||||||
|
- [ ] Database migrations tested on staging
|
||||||
|
- [ ] Environment variables configured
|
||||||
|
- [ ] SSL certificates valid and updated
|
||||||
|
- [ ] Monitoring and alerting configured
|
||||||
|
- [ ] Backup systems operational
|
||||||
|
|
||||||
|
## Environment Configuration
|
||||||
|
|
||||||
|
### Environment Variables
|
||||||
|
```bash
|
||||||
|
# Required Environment Variables
|
||||||
|
NODE_ENV=production
|
||||||
|
DATABASE_URL=postgresql://user:pass@host:port/dbname
|
||||||
|
REDIS_URL=redis://host:port
|
||||||
|
JWT_SECRET=your-secure-jwt-secret
|
||||||
|
API_BASE_URL=https://api.yourdomain.com
|
||||||
|
|
||||||
|
# Third-party Services
|
||||||
|
STRIPE_SECRET_KEY=sk_live_...
|
||||||
|
SENDGRID_API_KEY=SG...
|
||||||
|
SENTRY_DSN=https://...
|
||||||
|
|
||||||
|
# Optional Configuration
|
||||||
|
LOG_LEVEL=info
|
||||||
|
RATE_LIMIT_MAX=1000
|
||||||
|
SESSION_TIMEOUT=3600
|
||||||
|
```
|
||||||
|
|
||||||
|
### Database Configuration
|
||||||
|
```sql
|
||||||
|
-- Database migration checklist
|
||||||
|
-- [ ] Backup current database
|
||||||
|
-- [ ] Test migration on staging
|
||||||
|
-- [ ] Verify data integrity
|
||||||
|
-- [ ] Update indexes if needed
|
||||||
|
-- [ ] Check foreign key constraints
|
||||||
|
|
||||||
|
-- Example migration
|
||||||
|
-- Migration: 2024-01-01-add-user-preferences.sql
|
||||||
|
ALTER TABLE users ADD COLUMN preferences JSONB DEFAULT '{}';
|
||||||
|
CREATE INDEX idx_users_preferences ON users USING GIN (preferences);
|
||||||
|
```
|
||||||
|
|
||||||
|
## Deployment Procedure
|
||||||
|
|
||||||
|
### Step 1: Pre-Deployment Verification
|
||||||
|
```bash
|
||||||
|
# Verify current system status
|
||||||
|
curl -f https://api.yourdomain.com/health
|
||||||
|
curl -f https://yourdomain.com/health
|
||||||
|
|
||||||
|
# Check system resources
|
||||||
|
docker stats
|
||||||
|
df -h
|
||||||
|
|
||||||
|
# Verify monitoring systems
|
||||||
|
# Check Sentry, DataDog, or monitoring dashboard
|
||||||
|
```
|
||||||
|
|
||||||
|
### Step 2: Database Migration (if applicable)
|
||||||
|
```bash
|
||||||
|
# 1. Create database backup
|
||||||
|
pg_dump $DATABASE_URL > backup_$(date +%Y%m%d_%H%M%S).sql
|
||||||
|
|
||||||
|
# 2. Run migration in staging (verify first)
|
||||||
|
npm run migrate:staging
|
||||||
|
|
||||||
|
# 3. Verify migration succeeded
|
||||||
|
npm run migrate:status
|
||||||
|
|
||||||
|
# 4. Run migration in production (when ready)
|
||||||
|
npm run migrate:production
|
||||||
|
```
|
||||||
|
|
||||||
|
### Step 3: Application Deployment
|
||||||
|
|
||||||
|
#### Option A: Automated Deployment (CI/CD)
|
||||||
|
```yaml
|
||||||
|
# GitHub Actions / GitLab CI deployment
|
||||||
|
deployment_trigger:
|
||||||
|
- push_to_main_branch
|
||||||
|
- manual_trigger_from_dashboard
|
||||||
|
|
||||||
|
deployment_steps:
|
||||||
|
1. run_automated_tests
|
||||||
|
2. build_application
|
||||||
|
3. deploy_to_staging
|
||||||
|
4. run_smoke_tests
|
||||||
|
5. wait_for_approval
|
||||||
|
6. deploy_to_production
|
||||||
|
7. run_post_deployment_tests
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Option B: Manual Deployment
|
||||||
|
```bash
|
||||||
|
# 1. Pull latest code
|
||||||
|
git checkout main
|
||||||
|
git pull origin main
|
||||||
|
|
||||||
|
# 2. Install dependencies
|
||||||
|
npm ci --production
|
||||||
|
|
||||||
|
# 3. Build application
|
||||||
|
npm run build
|
||||||
|
|
||||||
|
# 4. Deploy using platform-specific commands
|
||||||
|
# Vercel
|
||||||
|
vercel --prod
|
||||||
|
|
||||||
|
# Heroku
|
||||||
|
git push heroku main
|
||||||
|
|
||||||
|
# Docker
|
||||||
|
docker build -t app:latest .
|
||||||
|
docker push registry/app:latest
|
||||||
|
kubectl set image deployment/app app=registry/app:latest
|
||||||
|
```
|
||||||
|
|
||||||
|
### Step 4: Post-Deployment Verification
|
||||||
|
```bash
|
||||||
|
# 1. Health checks
|
||||||
|
curl -f https://api.yourdomain.com/health
|
||||||
|
curl -f https://yourdomain.com/health
|
||||||
|
|
||||||
|
# 2. Smoke tests
|
||||||
|
npm run test:smoke:production
|
||||||
|
|
||||||
|
# 3. Verify key functionality
|
||||||
|
curl -X POST https://api.yourdomain.com/api/auth/login \
|
||||||
|
-H "Content-Type: application/json" \
|
||||||
|
-d '{"email":"test@example.com","password":"test123"}'
|
||||||
|
|
||||||
|
# 4. Check error rates and performance
|
||||||
|
# Monitor for first 30 minutes after deployment
|
||||||
|
```
|
||||||
|
|
||||||
|
## Monitoring & Alerting
|
||||||
|
|
||||||
|
### Key Metrics to Monitor
|
||||||
|
```yaml
|
||||||
|
application_metrics:
|
||||||
|
- response_time_p95: < 500ms
|
||||||
|
- error_rate: < 1%
|
||||||
|
- throughput: requests_per_second
|
||||||
|
- database_connections: < 80% of pool
|
||||||
|
|
||||||
|
infrastructure_metrics:
|
||||||
|
- cpu_usage: < 70%
|
||||||
|
- memory_usage: < 80%
|
||||||
|
- disk_usage: < 85%
|
||||||
|
- network_latency: < 100ms
|
||||||
|
|
||||||
|
business_metrics:
|
||||||
|
- user_registrations: normal_levels
|
||||||
|
- conversion_rates: no_significant_drop
|
||||||
|
- payment_processing: functioning_normally
|
||||||
|
```
|
||||||
|
|
||||||
|
### Alert Configuration
|
||||||
|
```yaml
|
||||||
|
critical_alerts:
|
||||||
|
- error_rate > 5%
|
||||||
|
- response_time > 2000ms
|
||||||
|
- database_connections > 90%
|
||||||
|
- application_crashes
|
||||||
|
|
||||||
|
warning_alerts:
|
||||||
|
- error_rate > 2%
|
||||||
|
- response_time > 1000ms
|
||||||
|
- cpu_usage > 80%
|
||||||
|
- memory_usage > 85%
|
||||||
|
|
||||||
|
notification_channels:
|
||||||
|
- slack: #alerts-critical
|
||||||
|
- email: devops@company.com
|
||||||
|
- pagerduty: production-alerts
|
||||||
|
```
|
||||||
|
|
||||||
|
## Rollback Plan
|
||||||
|
|
||||||
|
### When to Rollback
|
||||||
|
- Critical application errors affecting > 10% of users
|
||||||
|
- Data corruption or data loss incidents
|
||||||
|
- Security vulnerabilities exposed
|
||||||
|
- Performance degradation > 50% from baseline
|
||||||
|
- Core functionality completely broken
|
||||||
|
|
||||||
|
### Rollback Procedure
|
||||||
|
```bash
|
||||||
|
# Option 1: Platform rollback (recommended)
|
||||||
|
# Vercel
|
||||||
|
vercel rollback [deployment-url]
|
||||||
|
|
||||||
|
# Heroku
|
||||||
|
heroku rollback v[previous-version]
|
||||||
|
|
||||||
|
# Kubernetes
|
||||||
|
kubectl rollout undo deployment/app
|
||||||
|
|
||||||
|
# Option 2: Git revert (if platform rollback unavailable)
|
||||||
|
git revert HEAD
|
||||||
|
git push origin main
|
||||||
|
|
||||||
|
# Option 3: Database rollback (if needed)
|
||||||
|
# Restore from backup taken before deployment
|
||||||
|
pg_restore -d $DATABASE_URL backup_[timestamp].sql
|
||||||
|
```
|
||||||
|
|
||||||
|
### Post-Rollback Actions
|
||||||
|
1. **Immediate**: Verify system stability
|
||||||
|
2. **Within 1 hour**: Investigate root cause
|
||||||
|
3. **Within 4 hours**: Fix identified issues
|
||||||
|
4. **Within 24 hours**: Plan and execute re-deployment
|
||||||
|
|
||||||
|
## Communication Plan
|
||||||
|
|
||||||
|
### Pre-Deployment Communication
|
||||||
|
```markdown
|
||||||
|
**Subject**: Scheduled Deployment - [Application Name] v[Version]
|
||||||
|
|
||||||
|
**Team**: [Development Team]
|
||||||
|
**Date**: [Deployment Date]
|
||||||
|
**Time**: [Deployment Time with timezone]
|
||||||
|
**Duration**: [Expected duration]
|
||||||
|
|
||||||
|
**Changes**:
|
||||||
|
- [Brief list of major changes]
|
||||||
|
|
||||||
|
**Impact**:
|
||||||
|
- [Any expected user impact or downtime]
|
||||||
|
|
||||||
|
**Support**: [Contact information for deployment team]
|
||||||
|
```
|
||||||
|
|
||||||
|
### Post-Deployment Communication
|
||||||
|
```markdown
|
||||||
|
**Subject**: Deployment Complete - [Application Name] v[Version]
|
||||||
|
|
||||||
|
**Status**: ✅ Successful / ❌ Failed / ⚠️ Partial
|
||||||
|
|
||||||
|
**Completed At**: [Time]
|
||||||
|
**Duration**: [Actual duration]
|
||||||
|
|
||||||
|
**Verification**:
|
||||||
|
- ✅ Health checks passing
|
||||||
|
- ✅
|
||||||
396
.claude/templates/testing.md
Normal file
396
.claude/templates/testing.md
Normal file
@@ -0,0 +1,396 @@
|
|||||||
|
# Testing Guide: [Feature Name]
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
Brief description of what feature/component is being tested and its main functionality.
|
||||||
|
|
||||||
|
## Test Strategy
|
||||||
|
|
||||||
|
### Testing Scope
|
||||||
|
- **In Scope**: What will be tested
|
||||||
|
- **Out of Scope**: What won't be tested in this phase
|
||||||
|
- **Dependencies**: External services or components needed
|
||||||
|
|
||||||
|
### Test Types
|
||||||
|
- [ ] Unit Tests
|
||||||
|
- [ ] Integration Tests
|
||||||
|
- [ ] API Tests
|
||||||
|
- [ ] End-to-End Tests
|
||||||
|
- [ ] Performance Tests
|
||||||
|
- [ ] Security Tests
|
||||||
|
- [ ] Accessibility Tests
|
||||||
|
|
||||||
|
## Test Scenarios
|
||||||
|
|
||||||
|
### ✅ Happy Path Scenarios
|
||||||
|
- [ ] **Scenario 1**: User successfully [performs main action]
|
||||||
|
- **Given**: [Initial conditions]
|
||||||
|
- **When**: [User action]
|
||||||
|
- **Then**: [Expected result]
|
||||||
|
|
||||||
|
- [ ] **Scenario 2**: System handles valid input correctly
|
||||||
|
- **Given**: [Setup conditions]
|
||||||
|
- **When**: [Input provided]
|
||||||
|
- **Then**: [Expected behavior]
|
||||||
|
|
||||||
|
### ⚠️ Edge Cases
|
||||||
|
- [ ] **Empty/Null Data**: System handles empty or null inputs
|
||||||
|
- [ ] **Boundary Values**: Maximum/minimum allowed values
|
||||||
|
- [ ] **Special Characters**: Unicode, SQL injection attempts
|
||||||
|
- [ ] **Large Data Sets**: Performance with large amounts of data
|
||||||
|
- [ ] **Concurrent Users**: Multiple users accessing same resource
|
||||||
|
|
||||||
|
### ❌ Error Scenarios
|
||||||
|
- [ ] **Invalid Input**: System rejects invalid data with proper errors
|
||||||
|
- [ ] **Network Failures**: Handles network timeouts gracefully
|
||||||
|
- [ ] **Server Errors**: Displays appropriate error messages
|
||||||
|
- [ ] **Authentication Failures**: Proper handling of auth errors
|
||||||
|
- [ ] **Permission Denied**: Appropriate access control
|
||||||
|
|
||||||
|
## Automated Test Implementation
|
||||||
|
|
||||||
|
### Unit Tests
|
||||||
|
```typescript
|
||||||
|
// Jest unit test example
|
||||||
|
describe('[ComponentName]', () => {
|
||||||
|
beforeEach(() => {
|
||||||
|
// Setup before each test
|
||||||
|
});
|
||||||
|
|
||||||
|
afterEach(() => {
|
||||||
|
// Cleanup after each test
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('Happy Path', () => {
|
||||||
|
it('should [expected behavior] when [condition]', () => {
|
||||||
|
// Arrange
|
||||||
|
const input = { /* test data */ };
|
||||||
|
|
||||||
|
// Act
|
||||||
|
const result = functionUnderTest(input);
|
||||||
|
|
||||||
|
// Assert
|
||||||
|
expect(result).toEqual(expectedOutput);
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('Error Handling', () => {
|
||||||
|
it('should throw error when [invalid condition]', () => {
|
||||||
|
const invalidInput = { /* invalid data */ };
|
||||||
|
|
||||||
|
expect(() => {
|
||||||
|
functionUnderTest(invalidInput);
|
||||||
|
}).toThrow('Expected error message');
|
||||||
|
});
|
||||||
|
});
|
||||||
|
});
|
||||||
|
```
|
||||||
|
|
||||||
|
### API Integration Tests
|
||||||
|
```typescript
|
||||||
|
// API test example
|
||||||
|
describe('API: [Endpoint Name]', () => {
|
||||||
|
beforeAll(async () => {
|
||||||
|
// Setup test database
|
||||||
|
await setupTestDb();
|
||||||
|
});
|
||||||
|
|
||||||
|
afterAll(async () => {
|
||||||
|
// Cleanup test database
|
||||||
|
await cleanupTestDb();
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should return success with valid data', async () => {
|
||||||
|
const testData = {
|
||||||
|
field1: 'test value',
|
||||||
|
field2: 123
|
||||||
|
};
|
||||||
|
|
||||||
|
const response = await request(app)
|
||||||
|
.post('/api/endpoint')
|
||||||
|
.send(testData)
|
||||||
|
.expect(200);
|
||||||
|
|
||||||
|
expect(response.body).toMatchObject({
|
||||||
|
success: true,
|
||||||
|
data: expect.objectContaining(testData)
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should return 400 for invalid data', async () => {
|
||||||
|
const invalidData = {
|
||||||
|
field1: '', // Invalid
|
||||||
|
field2: 'not a number' // Invalid
|
||||||
|
};
|
||||||
|
|
||||||
|
const response = await request(app)
|
||||||
|
.post('/api/endpoint')
|
||||||
|
.send(invalidData)
|
||||||
|
.expect(400);
|
||||||
|
|
||||||
|
expect(response.body.success).toBe(false);
|
||||||
|
expect(response.body.error.code).toBe('VALIDATION_ERROR');
|
||||||
|
});
|
||||||
|
});
|
||||||
|
```
|
||||||
|
|
||||||
|
### End-to-End Tests
|
||||||
|
```typescript
|
||||||
|
// Playwright E2E test example
|
||||||
|
import { test, expect } from '@playwright/test';
|
||||||
|
|
||||||
|
test.describe('[Feature Name] E2E Tests', () => {
|
||||||
|
test.beforeEach(async ({ page }) => {
|
||||||
|
// Setup: Login user, navigate to feature
|
||||||
|
await page.goto('/login');
|
||||||
|
await page.fill('[data-testid="email"]', 'test@example.com');
|
||||||
|
await page.fill('[data-testid="password"]', 'password123');
|
||||||
|
await page.click('[data-testid="login-button"]');
|
||||||
|
await page.waitForURL('/dashboard');
|
||||||
|
});
|
||||||
|
|
||||||
|
test('should complete full user workflow', async ({ page }) => {
|
||||||
|
// Navigate to feature
|
||||||
|
await page.goto('/feature-page');
|
||||||
|
|
||||||
|
// Perform main user action
|
||||||
|
await page.fill('[data-testid="input-field"]', 'test input');
|
||||||
|
await page.click('[data-testid="submit-button"]');
|
||||||
|
|
||||||
|
// Verify result
|
||||||
|
await expect(page.locator('[data-testid="success-message"]')).toBeVisible();
|
||||||
|
await expect(page.locator('[data-testid="result"]')).toContainText('Expected result');
|
||||||
|
});
|
||||||
|
|
||||||
|
test('should show validation errors', async ({ page }) => {
|
||||||
|
await page.goto('/feature-page');
|
||||||
|
|
||||||
|
// Submit without required data
|
||||||
|
await page.click('[data-testid="submit-button"]');
|
||||||
|
|
||||||
|
// Verify error messages
|
||||||
|
await expect(page.locator('[data-testid="error-message"]')).toBeVisible();
|
||||||
|
await expect(page.locator('[data-testid="error-message"]')).toContainText('Required field');
|
||||||
|
});
|
||||||
|
});
|
||||||
|
```
|
||||||
|
|
||||||
|
## Manual Testing Checklist
|
||||||
|
|
||||||
|
### Functional Testing
|
||||||
|
- [ ] All required fields are properly validated
|
||||||
|
- [ ] Optional fields work correctly when empty
|
||||||
|
- [ ] Form submissions work as expected
|
||||||
|
- [ ] Navigation between screens functions properly
|
||||||
|
- [ ] Data persistence works correctly
|
||||||
|
- [ ] Error messages are clear and helpful
|
||||||
|
|
||||||
|
### User Experience Testing
|
||||||
|
- [ ] Interface is intuitive and easy to use
|
||||||
|
- [ ] Loading states are displayed appropriately
|
||||||
|
- [ ] Success/error feedback is clear
|
||||||
|
- [ ] Responsive design works on different screen sizes
|
||||||
|
- [ ] Performance is acceptable (< 3 seconds load time)
|
||||||
|
|
||||||
|
### Cross-Browser Testing
|
||||||
|
| Browser | Version | Status | Notes |
|
||||||
|
|---------|---------|--------|-------|
|
||||||
|
| Chrome | Latest | ⏳ | |
|
||||||
|
| Firefox | Latest | ⏳ | |
|
||||||
|
| Safari | Latest | ⏳ | |
|
||||||
|
| Edge | Latest | ⏳ | |
|
||||||
|
|
||||||
|
### Mobile Testing
|
||||||
|
| Device | Browser | Status | Notes |
|
||||||
|
|--------|---------|--------|-------|
|
||||||
|
| iPhone | Safari | ⏳ | |
|
||||||
|
| Android | Chrome | ⏳ | |
|
||||||
|
| Tablet | Safari/Chrome | ⏳ | |
|
||||||
|
|
||||||
|
### Accessibility Testing
|
||||||
|
- [ ] **Keyboard Navigation**: All interactive elements accessible via keyboard
|
||||||
|
- [ ] **Screen Reader**: Content readable with screen reader software
|
||||||
|
- [ ] **Color Contrast**: Sufficient contrast ratios (4.5:1 minimum)
|
||||||
|
- [ ] **Focus Indicators**: Clear focus indicators for all interactive elements
|
||||||
|
- [ ] **Alternative Text**: Images have appropriate alt text
|
||||||
|
- [ ] **Form Labels**: All form fields have associated labels
|
||||||
|
|
||||||
|
## Performance Testing
|
||||||
|
|
||||||
|
### Load Testing Scenarios
|
||||||
|
```yaml
|
||||||
|
# k6 load test configuration
|
||||||
|
scenarios:
|
||||||
|
normal_load:
|
||||||
|
users: 10
|
||||||
|
duration: 5m
|
||||||
|
expected_response_time: < 200ms
|
||||||
|
|
||||||
|
stress_test:
|
||||||
|
users: 100
|
||||||
|
duration: 2m
|
||||||
|
expected_response_time: < 500ms
|
||||||
|
|
||||||
|
spike_test:
|
||||||
|
users: 500
|
||||||
|
duration: 30s
|
||||||
|
acceptable_error_rate: < 5%
|
||||||
|
```
|
||||||
|
|
||||||
|
### Performance Acceptance Criteria
|
||||||
|
- [ ] Page load time < 3 seconds
|
||||||
|
- [ ] API response time < 200ms (95th percentile)
|
||||||
|
- [ ] Database query time < 50ms (average)
|
||||||
|
- [ ] No memory leaks during extended use
|
||||||
|
- [ ] Graceful degradation under high load
|
||||||
|
|
||||||
|
## Security Testing
|
||||||
|
|
||||||
|
### Security Test Cases
|
||||||
|
- [ ] **Input Validation**: SQL injection prevention
|
||||||
|
- [ ] **XSS Protection**: Cross-site scripting prevention
|
||||||
|
- [ ] **CSRF Protection**: Cross-site request forgery prevention
|
||||||
|
- [ ] **Authentication**: Proper login/logout functionality
|
||||||
|
- [ ] **Authorization**: Access control working correctly
|
||||||
|
- [ ] **Session Management**: Secure session handling
|
||||||
|
- [ ] **Password Security**: Strong password requirements
|
||||||
|
- [ ] **Data Encryption**: Sensitive data encrypted
|
||||||
|
|
||||||
|
### Security Testing Tools
|
||||||
|
```bash
|
||||||
|
# OWASP ZAP security scan
|
||||||
|
zap-baseline.py -t http://localhost:3000
|
||||||
|
|
||||||
|
# Dependency vulnerability scan
|
||||||
|
npm audit
|
||||||
|
|
||||||
|
# Static security analysis
|
||||||
|
semgrep --config=auto src/
|
||||||
|
```
|
||||||
|
|
||||||
|
## Test Data Management
|
||||||
|
|
||||||
|
### Test Data Requirements
|
||||||
|
```yaml
|
||||||
|
test_users:
|
||||||
|
- email: test@example.com
|
||||||
|
password: password123
|
||||||
|
role: user
|
||||||
|
status: active
|
||||||
|
|
||||||
|
- email: admin@example.com
|
||||||
|
password: admin123
|
||||||
|
role: admin
|
||||||
|
status: active
|
||||||
|
|
||||||
|
test_data:
|
||||||
|
- valid_records: 10
|
||||||
|
- invalid_records: 5
|
||||||
|
- edge_case_records: 3
|
||||||
|
```
|
||||||
|
|
||||||
|
### Data Cleanup
|
||||||
|
```typescript
|
||||||
|
// Test data cleanup utilities
|
||||||
|
export const cleanupTestData = async () => {
|
||||||
|
await TestUser.destroy({ where: { email: { [Op.like]: '%@test.com' } } });
|
||||||
|
await TestRecord.destroy({ where: { isTestData: true } });
|
||||||
|
};
|
||||||
|
```
|
||||||
|
|
||||||
|
## Test Execution
|
||||||
|
|
||||||
|
### Test Suite Execution
|
||||||
|
```bash
|
||||||
|
# Run all tests
|
||||||
|
npm test
|
||||||
|
|
||||||
|
# Run specific test types
|
||||||
|
npm run test:unit
|
||||||
|
npm run test:integration
|
||||||
|
npm run test:e2e
|
||||||
|
|
||||||
|
# Run tests with coverage
|
||||||
|
npm run test:coverage
|
||||||
|
|
||||||
|
# Run performance tests
|
||||||
|
npm run test:performance
|
||||||
|
```
|
||||||
|
|
||||||
|
### Continuous Integration
|
||||||
|
```yaml
|
||||||
|
# CI test pipeline
|
||||||
|
stages:
|
||||||
|
- unit_tests
|
||||||
|
- integration_tests
|
||||||
|
- security_scan
|
||||||
|
- e2e_tests
|
||||||
|
- performance_tests
|
||||||
|
|
||||||
|
quality_gates:
|
||||||
|
- test_coverage: > 80%
|
||||||
|
- security_scan: no_critical_issues
|
||||||
|
- performance: response_time < 500ms
|
||||||
|
```
|
||||||
|
|
||||||
|
## Test Reports
|
||||||
|
|
||||||
|
### Coverage Report
|
||||||
|
- **Target**: 80% code coverage minimum
|
||||||
|
- **Critical Paths**: 100% coverage required
|
||||||
|
- **Report Location**: `coverage/lcov-report/index.html`
|
||||||
|
|
||||||
|
### Test Results Summary
|
||||||
|
| Test Type | Total | Passed | Failed | Skipped | Coverage |
|
||||||
|
|-----------|-------|--------|--------|---------|----------|
|
||||||
|
| Unit | - | - | - | - | -% |
|
||||||
|
| Integration | - | - | - | - | -% |
|
||||||
|
| E2E | - | - | - | - | -% |
|
||||||
|
| **Total** | - | - | - | - | -% |
|
||||||
|
|
||||||
|
## Issue Tracking
|
||||||
|
|
||||||
|
### Bug Report Template
|
||||||
|
```markdown
|
||||||
|
**Bug Title**: [Brief description]
|
||||||
|
|
||||||
|
**Environment**: [Development/Staging/Production]
|
||||||
|
|
||||||
|
**Steps to Reproduce**:
|
||||||
|
1. [Step 1]
|
||||||
|
2. [Step 2]
|
||||||
|
3. [Step 3]
|
||||||
|
|
||||||
|
**Expected Result**: [What should happen]
|
||||||
|
|
||||||
|
**Actual Result**: [What actually happened]
|
||||||
|
|
||||||
|
**Screenshots/Logs**: [Attach if applicable]
|
||||||
|
|
||||||
|
**Priority**: [Critical/High/Medium/Low]
|
||||||
|
|
||||||
|
**Assigned To**: [Team member]
|
||||||
|
```
|
||||||
|
|
||||||
|
## Sign-off Criteria
|
||||||
|
|
||||||
|
### Definition of Done
|
||||||
|
- [ ] All test scenarios executed and passed
|
||||||
|
- [ ] Code coverage meets minimum requirements
|
||||||
|
- [ ] Security testing completed with no critical issues
|
||||||
|
- [ ] Performance requirements met
|
||||||
|
- [ ] Cross-browser testing completed
|
||||||
|
- [ ] Accessibility requirements verified
|
||||||
|
- [ ] Documentation updated
|
||||||
|
- [ ] Stakeholder acceptance received
|
||||||
|
|
||||||
|
### Test Sign-off
|
||||||
|
- **Tested By**: [Name]
|
||||||
|
- **Date**: [Date]
|
||||||
|
- **Test Environment**: [Environment details]
|
||||||
|
- **Overall Status**: [PASS/FAIL]
|
||||||
|
- **Recommendation**: [GO/NO-GO for deployment]
|
||||||
|
|
||||||
|
---
|
||||||
|
**Test Plan Version**: 1.0
|
||||||
|
**Last Updated**: [Date]
|
||||||
|
**Next Review**: [Date]
|
||||||
115
.claude/tools/agent_cache_wrapper.py
Normal file
115
.claude/tools/agent_cache_wrapper.py
Normal file
@@ -0,0 +1,115 @@
|
|||||||
|
# .claude/tools/agent_cache_wrapper.py
|
||||||
|
"""
|
||||||
|
Cache wrapper for AI agents
|
||||||
|
Use this in your agent workflows to add caching
|
||||||
|
"""
|
||||||
|
|
||||||
|
import os
|
||||||
|
import sys
|
||||||
|
from pathlib import Path
|
||||||
|
|
||||||
|
# Add the tools directory to Python path
|
||||||
|
tools_dir = Path(__file__).parent
|
||||||
|
sys.path.insert(0, str(tools_dir))
|
||||||
|
|
||||||
|
from local_cache_client import (
|
||||||
|
get_cache,
|
||||||
|
cached_ai_query,
|
||||||
|
store_ai_response,
|
||||||
|
print_cache_stats,
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
class AgentCacheWrapper:
|
||||||
|
"""Wrapper for agent AI calls with caching support."""
|
||||||
|
|
||||||
|
def __init__(self, agent_type: str, project: str = None): # type: ignore
|
||||||
|
self.agent_type = agent_type
|
||||||
|
self.project = project or os.getenv("AI_CACHE_PROJECT", "job_forge")
|
||||||
|
self.cache = get_cache()
|
||||||
|
|
||||||
|
print(f"🤖 {agent_type.title()} Agent initialized with caching")
|
||||||
|
|
||||||
|
def query_with_cache(self, prompt: str, make_ai_call_func=None) -> str:
|
||||||
|
"""
|
||||||
|
Query with cache support.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
prompt: The prompt to send
|
||||||
|
make_ai_call_func: Function to call if cache miss (should return AI response)
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
AI response (from cache or fresh API call)
|
||||||
|
"""
|
||||||
|
# Try cache first
|
||||||
|
cached_response, was_hit = cached_ai_query(
|
||||||
|
prompt, self.agent_type, self.project
|
||||||
|
)
|
||||||
|
|
||||||
|
if was_hit:
|
||||||
|
return cached_response # type: ignore
|
||||||
|
|
||||||
|
# Cache miss - make AI call
|
||||||
|
if make_ai_call_func:
|
||||||
|
print(f"🤖 Making fresh AI call for {self.agent_type}...")
|
||||||
|
ai_response = make_ai_call_func(prompt)
|
||||||
|
|
||||||
|
# Store in cache for next time
|
||||||
|
if ai_response:
|
||||||
|
store_ai_response(prompt, ai_response, self.agent_type, self.project)
|
||||||
|
|
||||||
|
return ai_response
|
||||||
|
else:
|
||||||
|
print(f"⚠️ No AI call function provided for cache miss")
|
||||||
|
return None # type: ignore
|
||||||
|
|
||||||
|
def store_response(self, prompt: str, response: str):
|
||||||
|
"""Manually store a response in cache."""
|
||||||
|
store_ai_response(prompt, response, self.agent_type, self.project)
|
||||||
|
|
||||||
|
def get_stats(self):
|
||||||
|
"""Get cache statistics for this session."""
|
||||||
|
return self.cache.get_stats()
|
||||||
|
|
||||||
|
|
||||||
|
# Convenience functions for each agent type
|
||||||
|
def technical_lead_query(prompt: str, ai_call_func=None) -> str:
|
||||||
|
"""Technical Lead agent with caching."""
|
||||||
|
wrapper = AgentCacheWrapper("technical_lead")
|
||||||
|
return wrapper.query_with_cache(prompt, ai_call_func)
|
||||||
|
|
||||||
|
|
||||||
|
def qa_engineer_query(prompt: str, ai_call_func=None) -> str:
|
||||||
|
"""QA Engineer agent with caching."""
|
||||||
|
wrapper = AgentCacheWrapper("qa_engineer")
|
||||||
|
return wrapper.query_with_cache(prompt, ai_call_func)
|
||||||
|
|
||||||
|
|
||||||
|
def devops_engineer_query(prompt: str, ai_call_func=None) -> str:
|
||||||
|
"""DevOps Engineer agent with caching."""
|
||||||
|
wrapper = AgentCacheWrapper("devops_engineer")
|
||||||
|
return wrapper.query_with_cache(prompt, ai_call_func)
|
||||||
|
|
||||||
|
|
||||||
|
def fullstack_developer_query(prompt: str, ai_call_func=None) -> str:
|
||||||
|
"""Full-Stack Developer agent with caching."""
|
||||||
|
wrapper = AgentCacheWrapper("fullstack_developer")
|
||||||
|
return wrapper.query_with_cache(prompt, ai_call_func)
|
||||||
|
|
||||||
|
|
||||||
|
# Example usage and testing
|
||||||
|
if __name__ == "__main__":
|
||||||
|
# Example AI call function (replace with your actual Claude Code integration)
|
||||||
|
def example_ai_call(prompt):
|
||||||
|
# This is where you'd call Claude Code or your AI service
|
||||||
|
# For testing, return a mock response
|
||||||
|
return f"Mock AI response for: {prompt[:50]}..."
|
||||||
|
|
||||||
|
# Test with Technical Lead
|
||||||
|
response = technical_lead_query(
|
||||||
|
"What is the current FastAPI project structure?", example_ai_call
|
||||||
|
)
|
||||||
|
print(f"Response: {response}")
|
||||||
|
|
||||||
|
# Print stats
|
||||||
|
print_cache_stats()
|
||||||
307
.claude/tools/local_cache_client.py
Normal file
307
.claude/tools/local_cache_client.py
Normal file
@@ -0,0 +1,307 @@
|
|||||||
|
# .claude/tools/local_cache_client.py
|
||||||
|
"""
|
||||||
|
AI Cache Client for Local Development
|
||||||
|
Integrates with n8n-based AI response caching system
|
||||||
|
"""
|
||||||
|
|
||||||
|
import requests
|
||||||
|
import json
|
||||||
|
import os
|
||||||
|
import hashlib
|
||||||
|
import time
|
||||||
|
from typing import Optional, Dict, Any
|
||||||
|
from datetime import datetime
|
||||||
|
|
||||||
|
|
||||||
|
class AICacheClient:
|
||||||
|
"""Client for interacting with AI Cache MCP service."""
|
||||||
|
|
||||||
|
def __init__(self, base_url: str = None, enabled: bool = True): # type: ignore
|
||||||
|
# Default to your n8n webhook URL
|
||||||
|
self.base_url = base_url or os.getenv(
|
||||||
|
"AI_CACHE_URL", "https://n8n.hotserv.cloud/webhook"
|
||||||
|
)
|
||||||
|
self.enabled = (
|
||||||
|
enabled and os.getenv("AI_CACHE_ENABLED", "true").lower() == "true"
|
||||||
|
)
|
||||||
|
self.timeout = int(os.getenv("AI_CACHE_TIMEOUT", "15"))
|
||||||
|
|
||||||
|
# Stats tracking
|
||||||
|
self.session_hits = 0
|
||||||
|
self.session_misses = 0
|
||||||
|
self.session_start = time.time()
|
||||||
|
self.connection_failed = False
|
||||||
|
|
||||||
|
if self.enabled:
|
||||||
|
print(f"🧠 AI Cache enabled: {self.base_url}")
|
||||||
|
self._test_connection()
|
||||||
|
else:
|
||||||
|
print("⚠️ AI Cache disabled")
|
||||||
|
|
||||||
|
def _test_connection(self):
|
||||||
|
"""Test if the cache service is accessible."""
|
||||||
|
try:
|
||||||
|
response = requests.get(
|
||||||
|
f"{self.base_url}/ai-cache-stats",
|
||||||
|
timeout=3 # Quick test
|
||||||
|
)
|
||||||
|
if response.status_code == 200:
|
||||||
|
print("✅ Cache service is accessible")
|
||||||
|
else:
|
||||||
|
print(f"⚠️ Cache service returned HTTP {response.status_code}")
|
||||||
|
self.connection_failed = True
|
||||||
|
except Exception as e:
|
||||||
|
print(f"❌ Cache service unreachable: {str(e)[:50]}...")
|
||||||
|
self.connection_failed = True
|
||||||
|
|
||||||
|
def _normalize_prompt(self, prompt: str) -> str:
|
||||||
|
"""Normalize prompt for consistent matching."""
|
||||||
|
return prompt.strip().lower().replace("\n", " ").replace(" ", " ")
|
||||||
|
|
||||||
|
def lookup_cache(
|
||||||
|
self, prompt: str, agent_type: str, project: str = "job_forge"
|
||||||
|
) -> Optional[str]:
|
||||||
|
"""Look up a cached AI response."""
|
||||||
|
if not self.enabled or self.connection_failed:
|
||||||
|
return None
|
||||||
|
|
||||||
|
try:
|
||||||
|
start_time = time.time()
|
||||||
|
|
||||||
|
response = requests.post(
|
||||||
|
f"{self.base_url}/ai-cache-lookup",
|
||||||
|
json={"prompt": prompt, "agent_type": agent_type, "project": project},
|
||||||
|
timeout=self.timeout,
|
||||||
|
)
|
||||||
|
|
||||||
|
lookup_time = (time.time() - start_time) * 1000
|
||||||
|
|
||||||
|
if response.status_code == 200:
|
||||||
|
try:
|
||||||
|
# Debug: print raw response
|
||||||
|
raw_text = response.text
|
||||||
|
print(f"🔍 Debug - Raw response: '{raw_text[:100]}...'")
|
||||||
|
|
||||||
|
if not raw_text.strip():
|
||||||
|
print(f"❌ Cache MISS [{agent_type}] - Empty response | Lookup: {lookup_time:.0f}ms")
|
||||||
|
self.session_misses += 1
|
||||||
|
return None
|
||||||
|
|
||||||
|
data = response.json()
|
||||||
|
if data.get("found"):
|
||||||
|
similarity = data.get("similarity", 1.0)
|
||||||
|
hit_count = data.get("hit_count", 1)
|
||||||
|
|
||||||
|
print(
|
||||||
|
f"✅ Cache HIT! [{agent_type}] Similarity: {similarity:.2f} | Used: {hit_count}x | Lookup: {lookup_time:.0f}ms"
|
||||||
|
)
|
||||||
|
self.session_hits += 1
|
||||||
|
return data.get("response")
|
||||||
|
else:
|
||||||
|
print(f"❌ Cache MISS [{agent_type}] | Lookup: {lookup_time:.0f}ms")
|
||||||
|
self.session_misses += 1
|
||||||
|
return None
|
||||||
|
except json.JSONDecodeError as e:
|
||||||
|
print(f"🚨 JSON decode error: {str(e)} | Response: '{response.text[:50]}'")
|
||||||
|
self.session_misses += 1
|
||||||
|
return None
|
||||||
|
else:
|
||||||
|
print(f"⚠️ Cache lookup failed: HTTP {response.status_code}")
|
||||||
|
return None
|
||||||
|
|
||||||
|
except requests.exceptions.Timeout:
|
||||||
|
print(f"⏱️ Cache lookup timeout ({self.timeout}s)")
|
||||||
|
return None
|
||||||
|
except Exception as e:
|
||||||
|
print(f"🚨 Cache error: {str(e)}")
|
||||||
|
return None
|
||||||
|
|
||||||
|
def store_cache(
|
||||||
|
self,
|
||||||
|
prompt: str,
|
||||||
|
response: str,
|
||||||
|
agent_type: str,
|
||||||
|
ai_service: str = "claude",
|
||||||
|
model: str = "claude-sonnet-4",
|
||||||
|
project: str = "job_forge",
|
||||||
|
) -> bool:
|
||||||
|
"""Store an AI response in cache."""
|
||||||
|
if not self.enabled or not response or len(response.strip()) < 10:
|
||||||
|
return False
|
||||||
|
|
||||||
|
try:
|
||||||
|
start_time = time.time()
|
||||||
|
|
||||||
|
result = requests.post(
|
||||||
|
f"{self.base_url}/ai-cache-store",
|
||||||
|
json={
|
||||||
|
"prompt": prompt,
|
||||||
|
"response": response,
|
||||||
|
"ai_service": ai_service,
|
||||||
|
"model": model,
|
||||||
|
"agent_type": agent_type,
|
||||||
|
"project": project,
|
||||||
|
},
|
||||||
|
timeout=self.timeout,
|
||||||
|
)
|
||||||
|
|
||||||
|
store_time = (time.time() - start_time) * 1000
|
||||||
|
|
||||||
|
if result.status_code == 200:
|
||||||
|
data = result.json()
|
||||||
|
if data.get("success"):
|
||||||
|
print(
|
||||||
|
f"💾 Response cached [{agent_type}] | Store: {store_time:.0f}ms"
|
||||||
|
)
|
||||||
|
return True
|
||||||
|
else:
|
||||||
|
print(
|
||||||
|
f"📄 Already cached [{agent_type}] | Store: {store_time:.0f}ms"
|
||||||
|
)
|
||||||
|
return False
|
||||||
|
else:
|
||||||
|
print(f"⚠️ Cache store failed: HTTP {result.status_code}")
|
||||||
|
return False
|
||||||
|
|
||||||
|
except requests.exceptions.Timeout:
|
||||||
|
print(f"⏱️ Cache store timeout ({self.timeout}s)")
|
||||||
|
return False
|
||||||
|
except Exception as e:
|
||||||
|
print(f"🚨 Cache store error: {str(e)}")
|
||||||
|
return False
|
||||||
|
|
||||||
|
def get_stats(self) -> Dict[str, Any]:
|
||||||
|
"""Get cache statistics."""
|
||||||
|
try:
|
||||||
|
response = requests.get(
|
||||||
|
f"{self.base_url}/ai-cache-stats", timeout=self.timeout
|
||||||
|
)
|
||||||
|
|
||||||
|
if response.status_code == 200:
|
||||||
|
stats = response.json()
|
||||||
|
|
||||||
|
# Add session stats
|
||||||
|
session_time = time.time() - self.session_start
|
||||||
|
session_total = self.session_hits + self.session_misses
|
||||||
|
session_hit_rate = (
|
||||||
|
(self.session_hits / session_total * 100)
|
||||||
|
if session_total > 0
|
||||||
|
else 0
|
||||||
|
)
|
||||||
|
|
||||||
|
stats["session_stats"] = {
|
||||||
|
"hits": self.session_hits,
|
||||||
|
"misses": self.session_misses,
|
||||||
|
"total": session_total,
|
||||||
|
"hit_rate_percentage": round(session_hit_rate, 1),
|
||||||
|
"duration_minutes": round(session_time / 60, 1),
|
||||||
|
}
|
||||||
|
|
||||||
|
return stats
|
||||||
|
else:
|
||||||
|
return {"error": f"Failed to get stats: {response.status_code}"}
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
return {"error": f"Stats error: {str(e)}"}
|
||||||
|
|
||||||
|
def print_session_summary(self):
|
||||||
|
"""Print session cache performance summary."""
|
||||||
|
total = self.session_hits + self.session_misses
|
||||||
|
if total == 0:
|
||||||
|
return
|
||||||
|
|
||||||
|
hit_rate = (self.session_hits / total) * 100
|
||||||
|
session_time = (time.time() - self.session_start) / 60
|
||||||
|
|
||||||
|
print(f"\n📊 Cache Session Summary:")
|
||||||
|
print(
|
||||||
|
f" Hits: {self.session_hits} | Misses: {self.session_misses} | Hit Rate: {hit_rate:.1f}%"
|
||||||
|
)
|
||||||
|
print(f" Session Time: {session_time:.1f} minutes")
|
||||||
|
|
||||||
|
if hit_rate > 60:
|
||||||
|
print(f" 🎉 Excellent cache performance!")
|
||||||
|
elif hit_rate > 30:
|
||||||
|
print(f" 👍 Good cache performance")
|
||||||
|
else:
|
||||||
|
print(f" 📈 Cache is learning your patterns...")
|
||||||
|
|
||||||
|
|
||||||
|
# Global cache instance
|
||||||
|
_cache_instance = None
|
||||||
|
|
||||||
|
|
||||||
|
def get_cache() -> AICacheClient:
|
||||||
|
"""Get or create global cache instance."""
|
||||||
|
global _cache_instance
|
||||||
|
if _cache_instance is None:
|
||||||
|
_cache_instance = AICacheClient()
|
||||||
|
return _cache_instance
|
||||||
|
|
||||||
|
|
||||||
|
def cached_ai_query(
|
||||||
|
prompt: str, agent_type: str, project: str = "job_forge"
|
||||||
|
) -> tuple[Optional[str], bool]:
|
||||||
|
"""
|
||||||
|
Helper function for cached AI queries.
|
||||||
|
Returns: (cached_response, was_cache_hit)
|
||||||
|
"""
|
||||||
|
cache = get_cache()
|
||||||
|
cached_response = cache.lookup_cache(prompt, agent_type, project)
|
||||||
|
|
||||||
|
if cached_response:
|
||||||
|
return cached_response, True
|
||||||
|
else:
|
||||||
|
return None, False
|
||||||
|
|
||||||
|
|
||||||
|
def store_ai_response(
|
||||||
|
prompt: str, response: str, agent_type: str, project: str = "job_forge"
|
||||||
|
):
|
||||||
|
"""Helper function to store AI responses."""
|
||||||
|
cache = get_cache()
|
||||||
|
cache.store_cache(prompt, response, agent_type, project=project)
|
||||||
|
|
||||||
|
|
||||||
|
def print_cache_stats():
|
||||||
|
"""Print current cache statistics."""
|
||||||
|
cache = get_cache()
|
||||||
|
stats = cache.get_stats()
|
||||||
|
|
||||||
|
if "error" in stats:
|
||||||
|
print(f"❌ {stats['error']}")
|
||||||
|
return
|
||||||
|
|
||||||
|
summary = stats.get("summary", {})
|
||||||
|
session = stats.get("session_stats", {})
|
||||||
|
|
||||||
|
print(f"\n📈 AI Cache Statistics:")
|
||||||
|
print(f" Overall Hit Rate: {summary.get('hit_rate_percentage', 0)}%")
|
||||||
|
print(f" Total Saved: ${summary.get('total_cost_saved_usd', 0):.2f}")
|
||||||
|
print(f" API Calls Saved: {summary.get('api_calls_saved', 0)}")
|
||||||
|
|
||||||
|
if session:
|
||||||
|
print(
|
||||||
|
f" This Session: {session['hits']}/{session['total']} hits ({session['hit_rate_percentage']}%)"
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
# Example usage for testing
|
||||||
|
if __name__ == "__main__":
|
||||||
|
# Test the cache
|
||||||
|
cache = get_cache()
|
||||||
|
|
||||||
|
# Test lookup
|
||||||
|
result = cache.lookup_cache("What is the database schema?", "technical_lead")
|
||||||
|
print(f"Lookup result: {result}")
|
||||||
|
|
||||||
|
# Test store
|
||||||
|
cache.store_cache(
|
||||||
|
"What is the database schema?",
|
||||||
|
"PostgreSQL with users and applications tables",
|
||||||
|
"technical_lead",
|
||||||
|
)
|
||||||
|
|
||||||
|
# Print stats
|
||||||
|
print_cache_stats()
|
||||||
|
cache.print_session_summary()
|
||||||
45
.env.example
Normal file
45
.env.example
Normal file
@@ -0,0 +1,45 @@
|
|||||||
|
# =============================================================================
|
||||||
|
# JobForge MVP - Environment Variables Template
|
||||||
|
# =============================================================================
|
||||||
|
# Copy this file to .env and fill in your actual values
|
||||||
|
# Never commit .env to version control!
|
||||||
|
|
||||||
|
# =============================================================================
|
||||||
|
# API KEYS - REQUIRED FOR DEVELOPMENT
|
||||||
|
# =============================================================================
|
||||||
|
# Get Claude API key from: https://console.anthropic.com/
|
||||||
|
CLAUDE_API_KEY=your_claude_api_key_here
|
||||||
|
|
||||||
|
# Get OpenAI API key from: https://platform.openai.com/api-keys
|
||||||
|
OPENAI_API_KEY=your_openai_api_key_here
|
||||||
|
|
||||||
|
# =============================================================================
|
||||||
|
# DATABASE CONFIGURATION
|
||||||
|
# =============================================================================
|
||||||
|
DATABASE_URL=postgresql+asyncpg://jobforge_user:jobforge_password@postgres:5432/jobforge_mvp
|
||||||
|
POSTGRES_DB=jobforge_mvp
|
||||||
|
POSTGRES_USER=jobforge_user
|
||||||
|
POSTGRES_PASSWORD=jobforge_password
|
||||||
|
|
||||||
|
# =============================================================================
|
||||||
|
# AUTHENTICATION
|
||||||
|
# =============================================================================
|
||||||
|
# Generate a secure random key (minimum 32 characters)
|
||||||
|
# You can use: python -c "import secrets; print(secrets.token_urlsafe(32))"
|
||||||
|
JWT_SECRET_KEY=your-super-secret-jwt-key-minimum-32-characters-long
|
||||||
|
JWT_ALGORITHM=HS256
|
||||||
|
JWT_EXPIRE_HOURS=24
|
||||||
|
|
||||||
|
# =============================================================================
|
||||||
|
# APPLICATION SETTINGS
|
||||||
|
# =============================================================================
|
||||||
|
DEBUG=true
|
||||||
|
LOG_LEVEL=INFO
|
||||||
|
BACKEND_URL=http://backend:8000
|
||||||
|
|
||||||
|
# =============================================================================
|
||||||
|
# AI PROCESSING SETTINGS
|
||||||
|
# =============================================================================
|
||||||
|
CLAUDE_MODEL=claude-sonnet-4-20250514
|
||||||
|
OPENAI_EMBEDDING_MODEL=text-embedding-3-large
|
||||||
|
MAX_PROCESSING_TIME_SECONDS=120
|
||||||
38
.gitignore
vendored
38
.gitignore
vendored
@@ -128,8 +128,10 @@ celerybeat.pid
|
|||||||
# SageMath parsed files
|
# SageMath parsed files
|
||||||
*.sage.py
|
*.sage.py
|
||||||
|
|
||||||
# Environments
|
# Environment files
|
||||||
.env
|
.env
|
||||||
|
.env.local
|
||||||
|
.env.*.local
|
||||||
.venv
|
.venv
|
||||||
env/
|
env/
|
||||||
venv/
|
venv/
|
||||||
@@ -137,6 +139,40 @@ ENV/
|
|||||||
env.bak/
|
env.bak/
|
||||||
venv.bak/
|
venv.bak/
|
||||||
|
|
||||||
|
# IDE files
|
||||||
|
.vscode/
|
||||||
|
.idea/
|
||||||
|
*.swp
|
||||||
|
*.swo
|
||||||
|
*.sublime-project
|
||||||
|
*.sublime-workspace
|
||||||
|
|
||||||
|
# OS files
|
||||||
|
.DS_Store
|
||||||
|
.DS_Store?
|
||||||
|
._*
|
||||||
|
.Spotlight-V100
|
||||||
|
.Trashes
|
||||||
|
ehthumbs.db
|
||||||
|
Thumbs.db
|
||||||
|
|
||||||
|
# Docker
|
||||||
|
.dockerignore
|
||||||
|
|
||||||
|
# User data and uploads
|
||||||
|
user_data/
|
||||||
|
uploads/
|
||||||
|
documents/
|
||||||
|
|
||||||
|
# AI model cache
|
||||||
|
.cache/
|
||||||
|
models/
|
||||||
|
ai_cache/
|
||||||
|
embeddings_cache/
|
||||||
|
|
||||||
|
# Database volumes
|
||||||
|
postgres_data/
|
||||||
|
|
||||||
# Spyder project settings
|
# Spyder project settings
|
||||||
.spyderproject
|
.spyderproject
|
||||||
.spyproject
|
.spyproject
|
||||||
|
|||||||
407
CLAUDE.md
Normal file
407
CLAUDE.md
Normal file
@@ -0,0 +1,407 @@
|
|||||||
|
# Job Forge - AI-Powered Job Application Assistant
|
||||||
|
|
||||||
|
## Project Overview
|
||||||
|
**Job Forge** is a Python/FastAPI web application prototype that leverages AI to streamline the job application process through automated document generation, application tracking, and intelligent job matching.
|
||||||
|
|
||||||
|
## Role
|
||||||
|
You are the **Project Manager and Team Orchestrator** for the Job Forge development team. You coordinate 4 specialized agents to deliver a high-quality web application prototype efficiently.
|
||||||
|
|
||||||
|
## Technology Stack
|
||||||
|
```yaml
|
||||||
|
backend: FastAPI + Python 3.12
|
||||||
|
frontend: Dash + Mantine Components
|
||||||
|
database: PostgreSQL 16 + pgvector
|
||||||
|
ai_services: Claude API + OpenAI API
|
||||||
|
deployment: Docker + Direct Server Deployment
|
||||||
|
testing: pytest + pytest-asyncio
|
||||||
|
development: Docker Compose + Hot Reload
|
||||||
|
```
|
||||||
|
|
||||||
|
## Team Structure
|
||||||
|
|
||||||
|
### 🎯 Available Agents
|
||||||
|
- **.claude/agents/technical-lead.md** - Python/FastAPI architecture and technical guidance
|
||||||
|
- **.claude/agents/full-stack-developer.md** - FastAPI backend + Dash frontend implementation
|
||||||
|
- **.claude/agents/qa.md** - pytest testing and quality assurance
|
||||||
|
- **.claude/agents/devops.md** - Docker deployment and server infrastructure
|
||||||
|
|
||||||
|
### 🔄 Development Workflow Process
|
||||||
|
|
||||||
|
#### Phase 1: Planning (Technical Lead)
|
||||||
|
```yaml
|
||||||
|
input_required:
|
||||||
|
- feature_requirements
|
||||||
|
- user_stories
|
||||||
|
- technical_constraints
|
||||||
|
- prototype_timeline
|
||||||
|
|
||||||
|
technical_lead_delivers:
|
||||||
|
- fastapi_endpoint_specifications
|
||||||
|
- database_schema_updates
|
||||||
|
- dash_component_architecture
|
||||||
|
- python_coding_standards
|
||||||
|
- integration_patterns
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Phase 2: Development (Full-Stack Developer)
|
||||||
|
```yaml
|
||||||
|
input_from_technical_lead:
|
||||||
|
- api_endpoint_specifications
|
||||||
|
- database_models_and_schemas
|
||||||
|
- dash_component_structure
|
||||||
|
- coding_standards
|
||||||
|
|
||||||
|
full_stack_developer_delivers:
|
||||||
|
- fastapi_backend_implementation
|
||||||
|
- dash_frontend_components
|
||||||
|
- database_operations_and_migrations
|
||||||
|
- authentication_system
|
||||||
|
- ai_service_integrations
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Phase 3: Quality Assurance (QA Engineer)
|
||||||
|
```yaml
|
||||||
|
input_from_developer:
|
||||||
|
- working_web_application
|
||||||
|
- feature_documentation
|
||||||
|
- api_endpoints_and_examples
|
||||||
|
- test_scenarios
|
||||||
|
|
||||||
|
qa_engineer_delivers:
|
||||||
|
- pytest_test_suites
|
||||||
|
- manual_testing_results
|
||||||
|
- bug_reports_and_fixes
|
||||||
|
- quality_validation
|
||||||
|
- deployment_readiness
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Phase 4: Deployment (DevOps Engineer)
|
||||||
|
```yaml
|
||||||
|
input_from_qa:
|
||||||
|
- tested_application
|
||||||
|
- deployment_requirements
|
||||||
|
- environment_configuration
|
||||||
|
- server_setup_needs
|
||||||
|
|
||||||
|
devops_engineer_delivers:
|
||||||
|
- docker_containerization
|
||||||
|
- server_deployment_procedures
|
||||||
|
- environment_setup
|
||||||
|
- monitoring_and_logging
|
||||||
|
- backup_procedures
|
||||||
|
```
|
||||||
|
|
||||||
|
## Job Forge Specific Features
|
||||||
|
|
||||||
|
### 🎯 Core Application Features
|
||||||
|
- **AI Document Generation**: Automated cover letters and resumes
|
||||||
|
- **Application Tracking**: Comprehensive job application management
|
||||||
|
- **Job Matching**: AI-powered job recommendation system
|
||||||
|
- **Multi-tenancy**: User isolation with PostgreSQL RLS
|
||||||
|
- **Document Management**: File upload and AI processing
|
||||||
|
|
||||||
|
### 📋 Feature Development Workflow
|
||||||
|
|
||||||
|
#### Step 1: Feature Planning
|
||||||
|
```bash
|
||||||
|
# Activate Technical Lead for architecture
|
||||||
|
# Focus: FastAPI endpoints, Dash components, database schema
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Step 2: Implementation
|
||||||
|
```bash
|
||||||
|
# Activate Full-Stack Developer for implementation
|
||||||
|
# Focus: Python backend, Dash UI, AI integrations
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Step 3: Quality Validation
|
||||||
|
```bash
|
||||||
|
# Activate QA Engineer for testing
|
||||||
|
# Focus: pytest automation, manual testing, performance
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Step 4: Server Deployment
|
||||||
|
```bash
|
||||||
|
# Activate DevOps Engineer for deployment
|
||||||
|
# Focus: Docker setup, server deployment, monitoring
|
||||||
|
```
|
||||||
|
|
||||||
|
## Quality Gates for Job Forge
|
||||||
|
|
||||||
|
### 🔒 Prototype Quality Checkpoints
|
||||||
|
```yaml
|
||||||
|
gate_1_architecture_review:
|
||||||
|
required_approval: technical_lead
|
||||||
|
criteria:
|
||||||
|
- fastapi_structure_follows_best_practices
|
||||||
|
- database_schema_supports_multitenancy
|
||||||
|
- dash_components_properly_structured
|
||||||
|
- ai_integration_patterns_defined
|
||||||
|
|
||||||
|
gate_2_implementation_review:
|
||||||
|
required_approval: technical_lead + full_stack_developer
|
||||||
|
criteria:
|
||||||
|
- all_api_endpoints_functional
|
||||||
|
- dash_frontend_responsive_and_intuitive
|
||||||
|
- database_operations_secure_and_efficient
|
||||||
|
- ai_services_properly_integrated
|
||||||
|
- error_handling_comprehensive
|
||||||
|
|
||||||
|
gate_3_quality_review:
|
||||||
|
required_approval: qa_engineer
|
||||||
|
criteria:
|
||||||
|
- pytest_coverage_above_80_percent
|
||||||
|
- manual_testing_scenarios_passed
|
||||||
|
- no_critical_bugs_in_core_features
|
||||||
|
- user_experience_validated
|
||||||
|
|
||||||
|
gate_4_deployment_review:
|
||||||
|
required_approval: devops_engineer
|
||||||
|
criteria:
|
||||||
|
- docker_containers_optimized
|
||||||
|
- server_deployment_tested
|
||||||
|
- environment_variables_secured
|
||||||
|
- basic_monitoring_configured
|
||||||
|
```
|
||||||
|
|
||||||
|
## Agent Handoff Protocol
|
||||||
|
|
||||||
|
### 📤 Job Forge Handoff Format
|
||||||
|
```markdown
|
||||||
|
## Handoff: [From Agent] → [To Agent]
|
||||||
|
**Date**: [YYYY-MM-DD]
|
||||||
|
**Feature**: [Job Forge feature name]
|
||||||
|
|
||||||
|
### ✅ Completed Deliverables
|
||||||
|
- [FastAPI endpoints / Dash components / Tests]
|
||||||
|
- [Code location and documentation]
|
||||||
|
- [Test results or validation]
|
||||||
|
|
||||||
|
### 📋 Next Steps Required
|
||||||
|
- [Specific tasks for receiving agent]
|
||||||
|
- [Dependencies and integration points]
|
||||||
|
- [Timeline for prototype milestone]
|
||||||
|
|
||||||
|
### ⚠️ Important Notes
|
||||||
|
- [AI service limitations or considerations]
|
||||||
|
- [Database migration requirements]
|
||||||
|
- [Server deployment considerations]
|
||||||
|
|
||||||
|
**Status**: READY_FOR_NEXT_PHASE
|
||||||
|
```
|
||||||
|
|
||||||
|
### 🔄 Job Forge Handoff Scenarios
|
||||||
|
|
||||||
|
#### Technical Lead → Full-Stack Developer
|
||||||
|
```yaml
|
||||||
|
typical_deliverables:
|
||||||
|
- fastapi_endpoint_specifications
|
||||||
|
- pydantic_model_definitions
|
||||||
|
- dash_component_wireframes
|
||||||
|
- ai_integration_requirements
|
||||||
|
- database_migration_scripts
|
||||||
|
|
||||||
|
developer_needs:
|
||||||
|
- clear_acceptance_criteria
|
||||||
|
- ui_mockups_or_component_examples
|
||||||
|
- ai_api_usage_patterns
|
||||||
|
- authentication_flow_details
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Full-Stack Developer → QA Engineer
|
||||||
|
```yaml
|
||||||
|
typical_deliverables:
|
||||||
|
- working_job_forge_application
|
||||||
|
- api_documentation_with_examples
|
||||||
|
- dash_components_and_workflows
|
||||||
|
- test_user_accounts_and_data
|
||||||
|
- known_issues_or_limitations
|
||||||
|
|
||||||
|
qa_needs:
|
||||||
|
- user_workflow_test_scenarios
|
||||||
|
- expected_ai_response_patterns
|
||||||
|
- performance_expectations
|
||||||
|
- cross_browser_compatibility_requirements
|
||||||
|
```
|
||||||
|
|
||||||
|
#### QA Engineer → DevOps Engineer
|
||||||
|
```yaml
|
||||||
|
typical_deliverables:
|
||||||
|
- fully_tested_job_forge_app
|
||||||
|
- pytest_coverage_reports
|
||||||
|
- performance_test_results
|
||||||
|
- security_validation_results
|
||||||
|
- deployment_readiness_confirmation
|
||||||
|
|
||||||
|
devops_needs:
|
||||||
|
- environment_variable_requirements
|
||||||
|
- database_connection_requirements
|
||||||
|
- ai_api_key_management
|
||||||
|
- server_resource_requirements
|
||||||
|
- ssl_and_domain_configuration
|
||||||
|
```
|
||||||
|
|
||||||
|
## Prototype Development Framework
|
||||||
|
|
||||||
|
### 🚀 Sprint Structure (1 Week Cycles)
|
||||||
|
```yaml
|
||||||
|
prototype_focused_sprints:
|
||||||
|
monday_planning:
|
||||||
|
duration: 1_hour
|
||||||
|
focus: feature_prioritization_for_prototype
|
||||||
|
deliverables:
|
||||||
|
- core_feature_selection
|
||||||
|
- technical_implementation_plan
|
||||||
|
- testing_strategy
|
||||||
|
- deployment_timeline
|
||||||
|
|
||||||
|
daily_standup:
|
||||||
|
duration: 10_minutes
|
||||||
|
format: async_updates
|
||||||
|
focus: rapid_progress_tracking
|
||||||
|
|
||||||
|
friday_demo:
|
||||||
|
duration: 30_minutes
|
||||||
|
focus: working_prototype_demonstration
|
||||||
|
deliverables:
|
||||||
|
- functional_feature_demo
|
||||||
|
- user_feedback_collection
|
||||||
|
- next_iteration_planning
|
||||||
|
```
|
||||||
|
|
||||||
|
## Decision Making for Prototyping
|
||||||
|
|
||||||
|
### ⚡ Quick Prototype Decisions (< 1 hour)
|
||||||
|
- UI/UX adjustments and improvements
|
||||||
|
- Bug fixes and minor feature tweaks
|
||||||
|
- Configuration and environment changes
|
||||||
|
- Documentation updates
|
||||||
|
|
||||||
|
### 🤝 Team Consultation (< 4 hours)
|
||||||
|
- New feature addition to prototype
|
||||||
|
- AI integration improvements
|
||||||
|
- Database schema modifications
|
||||||
|
- Testing strategy adjustments
|
||||||
|
|
||||||
|
### 🏛️ Architecture Decisions (< 24 hours)
|
||||||
|
- Major system architecture changes
|
||||||
|
- Third-party service integrations
|
||||||
|
- Security implementation changes
|
||||||
|
- Deployment strategy modifications
|
||||||
|
|
||||||
|
## Success Metrics for Job Forge Prototype
|
||||||
|
|
||||||
|
### 📊 Prototype Success Indicators
|
||||||
|
- **Core Features**: All essential job application features working
|
||||||
|
- **User Experience**: Intuitive and responsive web interface
|
||||||
|
- **AI Integration**: Reliable document generation and job matching
|
||||||
|
- **Performance**: Fast response times for typical user workflows
|
||||||
|
- **Reliability**: Stable operation during testing and demos
|
||||||
|
|
||||||
|
### 📈 Technical Health Metrics
|
||||||
|
- **Code Quality**: Clean, maintainable Python/FastAPI code
|
||||||
|
- **Test Coverage**: >80% backend coverage, manual frontend validation
|
||||||
|
- **Security**: Proper authentication and data isolation
|
||||||
|
- **Deployment**: Reliable Docker-based deployment process
|
||||||
|
|
||||||
|
## Communication Guidelines
|
||||||
|
|
||||||
|
### 📅 Prototype Development Touchpoints
|
||||||
|
- **Daily**: Quick progress updates and blocker resolution
|
||||||
|
- **Weekly**: Feature completion and prototype iteration planning
|
||||||
|
- **Milestone**: Prototype demonstration and feedback collection
|
||||||
|
|
||||||
|
### 🎯 Focus Areas
|
||||||
|
- **Rapid Development**: Prioritize working features over perfect code
|
||||||
|
- **User-Centric**: Focus on core user workflows and experience
|
||||||
|
- **AI Integration**: Ensure reliable AI service integration
|
||||||
|
- **Deployment Ready**: Maintain deployable state throughout development
|
||||||
|
|
||||||
|
## Getting Started with Job Forge
|
||||||
|
|
||||||
|
### 🏁 Prototype Development Checklist
|
||||||
|
- [ ] Development environment setup (Docker + FastAPI + Dash)
|
||||||
|
- [ ] Database initialization with sample data
|
||||||
|
- [ ] AI service API keys configured
|
||||||
|
- [ ] Core user workflow identified and planned
|
||||||
|
- [ ] Team agents briefed on Job Forge requirements
|
||||||
|
- [ ] First prototype iteration timeline established
|
||||||
|
|
||||||
|
### 🎯 Ready to Build Job Forge
|
||||||
|
Your specialized development team is ready to deliver the Job Forge AI-powered job application assistant. Each agent understands the Python/FastAPI stack, the prototype objectives, and the quality standards required.
|
||||||
|
|
||||||
|
**Start building your Job Forge prototype!** 🚀
|
||||||
|
|
||||||
|
## Project Structure and Organization
|
||||||
|
|
||||||
|
### 📁 Clean Project Structure Requirements
|
||||||
|
**MANDATORY**: Only necessary files should be stored in the project root folder. All supporting files must be organized into appropriate subdirectories:
|
||||||
|
|
||||||
|
```
|
||||||
|
job-forge/
|
||||||
|
├── src/ # Source code only
|
||||||
|
├── tests/ # Test files only
|
||||||
|
├── docs/ # All documentation
|
||||||
|
├── docker/ # All Docker-related files
|
||||||
|
├── database/ # Database scripts and migrations
|
||||||
|
├── .env.example # Environment template
|
||||||
|
├── requirements-*.txt # Python dependencies
|
||||||
|
├── pytest.ini # Test configuration
|
||||||
|
└── README.md # Main project readme
|
||||||
|
```
|
||||||
|
|
||||||
|
### 🔧 Docker Files Organization
|
||||||
|
All Docker-related files are stored in the `docker/` folder:
|
||||||
|
- `docker/docker-compose.yml` - Main orchestration file
|
||||||
|
- `docker/Dockerfile.backend` - Backend container definition
|
||||||
|
- `docker/Dockerfile.frontend` - Frontend container definition
|
||||||
|
|
||||||
|
**Usage**: Run `cd docker && docker compose up -d` to start the environment.
|
||||||
|
|
||||||
|
### 📚 Documentation Structure Requirements
|
||||||
|
All project documentation is centralized in the `docs/` folder:
|
||||||
|
- `docs/lessons-learned/` - **MANDATORY**: All project issues and solutions
|
||||||
|
- `docs/api_specification.md` - API documentation
|
||||||
|
- `docs/database_design.md` - Database schema and design
|
||||||
|
- `docs/development/` - Development guides and standards
|
||||||
|
|
||||||
|
### 📝 Lessons Learned Process
|
||||||
|
**MANDATORY**: For every issue encountered during development:
|
||||||
|
1. Create a new markdown file in `docs/lessons-learned/`
|
||||||
|
2. Use format: `###-issue-name.md` (where ### is sequential number)
|
||||||
|
3. Include: Issue name, description, error messages, root cause, solution, prevention strategy
|
||||||
|
4. Reference the lesson learned in relevant documentation
|
||||||
|
|
||||||
|
### 🏗️ Development Environment Setup
|
||||||
|
```bash
|
||||||
|
# 1. Clone and navigate to project
|
||||||
|
git clone <repository>
|
||||||
|
cd job-forge
|
||||||
|
|
||||||
|
# 2. Set up environment variables
|
||||||
|
cp .env.example .env
|
||||||
|
# Edit .env with your API keys
|
||||||
|
|
||||||
|
# 3. Start development environment
|
||||||
|
cd docker
|
||||||
|
docker compose up -d
|
||||||
|
|
||||||
|
# 4. Access applications
|
||||||
|
# Frontend: http://localhost:8501
|
||||||
|
# Backend: http://localhost:8000
|
||||||
|
# Database: localhost:5432
|
||||||
|
```
|
||||||
|
|
||||||
|
# Important Instructions
|
||||||
|
- **Clean Structure**: Only necessary files in project root
|
||||||
|
- **Docker Organization**: All Docker files in `docker/` folder
|
||||||
|
- **Lessons Learned**: Document all issues in `docs/lessons-learned/`
|
||||||
|
- Focus on Python/FastAPI backend implementation
|
||||||
|
- Use Dash + Mantine for frontend components
|
||||||
|
- Prioritize core job application workflows
|
||||||
|
- Maintain deployable prototype state
|
||||||
|
- Ensure AI service integration reliability
|
||||||
|
- Follow established quality gates for all features
|
||||||
|
|
||||||
|
## Project Memories
|
||||||
|
- Save files in an organized manner in the project folder to keep it clear and maintainable
|
||||||
280
README.md
280
README.md
@@ -1,3 +1,279 @@
|
|||||||
# job-forge
|
# Job Forge - AI-Powered Job Application Assistant
|
||||||
|
|
||||||
A tool to help with job applications.
|
[](https://python.org)
|
||||||
|
[](https://fastapi.tiangolo.com)
|
||||||
|
[](https://dash.plotly.com)
|
||||||
|
[](https://postgresql.org)
|
||||||
|
|
||||||
|
> **Job Forge** is a Python/FastAPI web application prototype that leverages AI to streamline the job application process through automated document generation, application tracking, and intelligent job matching.
|
||||||
|
|
||||||
|
## 🚀 Quick Start
|
||||||
|
|
||||||
|
### Docker Development (Recommended)
|
||||||
|
```bash
|
||||||
|
# Clone the repository
|
||||||
|
git clone https://github.com/yourusername/job-forge.git
|
||||||
|
cd job-forge
|
||||||
|
|
||||||
|
# Set up environment variables
|
||||||
|
cp .env.example .env
|
||||||
|
# Edit .env with your API keys (Claude, OpenAI, JWT secret)
|
||||||
|
|
||||||
|
# Start development environment
|
||||||
|
cd docker
|
||||||
|
docker compose up -d
|
||||||
|
|
||||||
|
# Access the applications
|
||||||
|
# Frontend: http://localhost:8501
|
||||||
|
# Backend API: http://localhost:8000
|
||||||
|
# Database: localhost:5432
|
||||||
|
```
|
||||||
|
|
||||||
|
### Local Development Setup
|
||||||
|
```bash
|
||||||
|
# For local development and testing
|
||||||
|
pip install -r requirements.txt
|
||||||
|
|
||||||
|
# For development dependencies only
|
||||||
|
pip install -r dev-requirements.txt
|
||||||
|
|
||||||
|
# Run tests locally
|
||||||
|
python validate_tests.py # Validate test structure
|
||||||
|
python run_tests.py # Run API tests against Docker environment
|
||||||
|
pytest # Run full pytest suite (requires local services)
|
||||||
|
```
|
||||||
|
|
||||||
|
## 📚 Documentation Navigation
|
||||||
|
|
||||||
|
### 🏗️ **Architecture & Planning**
|
||||||
|
- [**System Architecture**](docs/jobforge_mvp_architecture.md) - Complete system overview and component breakdown
|
||||||
|
- [**API Specification**](docs/api_specification.md) - Comprehensive API endpoints and examples
|
||||||
|
- [**Database Design**](docs/database_design.md) - Schema design with RLS policies and performance optimization
|
||||||
|
- [**MVP Checklist**](docs/MVP_CHECKLIST.md) - Development progress tracking
|
||||||
|
|
||||||
|
### 🛠️ **Development**
|
||||||
|
- [**Development Setup**](docs/development_setup.md) - Environment setup and configuration
|
||||||
|
- [**Getting Started Guide**](docs/GETTING_STARTED.md) - Week-by-week implementation roadmap
|
||||||
|
- [**Coding Standards**](docs/development/coding_standards.md) - Python/FastAPI best practices
|
||||||
|
- [**Implementation Patterns**](docs/development/implementation_patterns.md) - Code templates and examples
|
||||||
|
|
||||||
|
### 🧪 **Testing & Quality**
|
||||||
|
- [**Testing Strategy**](docs/testing_strategy.md) - Comprehensive testing approach with pytest
|
||||||
|
- [**QA Procedures**](docs/testing/qa_procedures.md) - Quality assurance workflows
|
||||||
|
- [**Manual Testing**](docs/testing/manual_testing_guide.md) - User workflow validation procedures
|
||||||
|
|
||||||
|
### 🚢 **Deployment & Infrastructure**
|
||||||
|
- [**Server Deployment**](docs/infrastructure/server_deployment.md) - Direct server deployment procedures
|
||||||
|
- [**Docker Setup**](docs/infrastructure/docker_setup.md) - Containerization and orchestration
|
||||||
|
- [**Environment Configuration**](docs/infrastructure/environment_setup.md) - Production environment setup
|
||||||
|
- [**Monitoring & Logging**](docs/infrastructure/monitoring.md) - Observability and alerting
|
||||||
|
|
||||||
|
### 🔒 **Security & Operations**
|
||||||
|
- [**Security Guidelines**](docs/infrastructure/security_hardening.md) - Production security measures
|
||||||
|
- [**Backup Procedures**](docs/infrastructure/backup_procedures.md) - Data backup and recovery
|
||||||
|
- [**Git Workflow**](docs/git_branch_strategy.md) - Branching strategy and collaboration
|
||||||
|
|
||||||
|
### 🤖 **AI Integration**
|
||||||
|
- [**AI Services Setup**](docs/development/ai_integration.md) - Claude and OpenAI API integration
|
||||||
|
- [**Document Generation**](docs/development/document_generation.md) - Automated resume and cover letter creation
|
||||||
|
- [**Job Matching**](docs/development/job_matching.md) - AI-powered job recommendation system
|
||||||
|
|
||||||
|
## 🏛️ **Technology Stack**
|
||||||
|
|
||||||
|
### Backend
|
||||||
|
- **FastAPI** - Modern Python web framework
|
||||||
|
- **PostgreSQL 16** - Database with pgvector for AI embeddings
|
||||||
|
- **SQLAlchemy** - ORM with async support
|
||||||
|
- **Alembic** - Database migrations
|
||||||
|
- **Pydantic** - Data validation and serialization
|
||||||
|
|
||||||
|
### Frontend
|
||||||
|
- **Dash** - Interactive web applications
|
||||||
|
- **Mantine Components** - Modern UI component library
|
||||||
|
- **Plotly** - Data visualization
|
||||||
|
- **Bootstrap** - Responsive design framework
|
||||||
|
|
||||||
|
### AI & ML
|
||||||
|
- **Claude API** - Document generation and analysis
|
||||||
|
- **OpenAI API** - Embeddings and completions
|
||||||
|
- **pgvector** - Vector similarity search
|
||||||
|
- **Sentence Transformers** - Text embeddings
|
||||||
|
|
||||||
|
### Development & Deployment
|
||||||
|
- **Docker** - Containerization
|
||||||
|
- **Docker Compose** - Development orchestration
|
||||||
|
- **pytest** - Testing framework
|
||||||
|
- **Black** - Code formatting
|
||||||
|
- **Ruff** - Fast Python linter
|
||||||
|
|
||||||
|
## 🎯 **Core Features**
|
||||||
|
|
||||||
|
### ✨ **AI Document Generation**
|
||||||
|
- Automated cover letter creation based on job descriptions
|
||||||
|
- Resume optimization and tailoring
|
||||||
|
- Professional formatting and styling
|
||||||
|
- Multiple export formats (PDF, DOCX)
|
||||||
|
|
||||||
|
### 📊 **Application Tracking**
|
||||||
|
- Comprehensive job application management
|
||||||
|
- Application status tracking and timeline
|
||||||
|
- Interview scheduling and notes
|
||||||
|
- Follow-up reminders and notifications
|
||||||
|
|
||||||
|
### 🔍 **Job Matching**
|
||||||
|
- AI-powered job recommendation system
|
||||||
|
- Skills gap analysis and suggestions
|
||||||
|
- Salary insights and market data
|
||||||
|
- Company culture and fit analysis
|
||||||
|
|
||||||
|
### 👥 **Multi-User Support**
|
||||||
|
- Secure user authentication and authorization
|
||||||
|
- Data isolation with PostgreSQL RLS
|
||||||
|
- User profile and preference management
|
||||||
|
- Team collaboration features
|
||||||
|
|
||||||
|
## 🏃 **Development Workflow**
|
||||||
|
|
||||||
|
1. **Planning Phase** - Technical Lead defines architecture and specifications
|
||||||
|
2. **Implementation Phase** - Full-Stack Developer builds features
|
||||||
|
3. **Quality Assurance** - QA Engineer validates and tests
|
||||||
|
4. **Deployment Phase** - DevOps Engineer handles deployment
|
||||||
|
|
||||||
|
## 📋 **Project Status**
|
||||||
|
|
||||||
|
- [x] Project architecture and database design
|
||||||
|
- [x] Development environment setup
|
||||||
|
- [x] API specification and documentation
|
||||||
|
- [ ] Core backend API implementation
|
||||||
|
- [ ] Frontend Dash application
|
||||||
|
- [ ] AI service integrations
|
||||||
|
- [ ] Testing suite implementation
|
||||||
|
- [ ] Production deployment
|
||||||
|
|
||||||
|
## 🤝 **Team & Agents**
|
||||||
|
|
||||||
|
Job Forge uses specialized AI agents for development:
|
||||||
|
|
||||||
|
- **Technical Lead** - Architecture decisions and technical guidance
|
||||||
|
- **Full-Stack Developer** - FastAPI backend and Dash frontend implementation
|
||||||
|
- **QA Engineer** - pytest testing and quality assurance
|
||||||
|
- **DevOps Engineer** - Docker deployment and server infrastructure
|
||||||
|
|
||||||
|
See [CLAUDE.md](CLAUDE.md) for complete team orchestration documentation.
|
||||||
|
|
||||||
|
## 📖 **Documentation Categories**
|
||||||
|
|
||||||
|
### 📚 **Getting Started**
|
||||||
|
Perfect for new developers joining the project:
|
||||||
|
- [Development Setup](docs/development_setup.md)
|
||||||
|
- [Getting Started Guide](docs/GETTING_STARTED.md)
|
||||||
|
- [MVP Checklist](docs/MVP_CHECKLIST.md)
|
||||||
|
|
||||||
|
### 🔧 **Implementation**
|
||||||
|
For active development work:
|
||||||
|
- [API Specification](docs/api_specification.md)
|
||||||
|
- [Database Design](docs/database_design.md)
|
||||||
|
- [System Architecture](docs/jobforge_mvp_architecture.md)
|
||||||
|
|
||||||
|
### 💡 **Future Features**
|
||||||
|
For managing new feature ideas and scope:
|
||||||
|
- [Feature Management Process](docs/future_features/)
|
||||||
|
- [Feature Ideas](docs/future_features/ideas/)
|
||||||
|
- [Under Review](docs/future_features/under_review/)
|
||||||
|
- [Approved Features](docs/future_features/approved/)
|
||||||
|
|
||||||
|
### 🧪 **Quality Assurance**
|
||||||
|
For testing and validation:
|
||||||
|
- [Testing Strategy](docs/testing_strategy.md)
|
||||||
|
- [QA Procedures](docs/testing/)
|
||||||
|
- [Manual Testing](docs/testing/)
|
||||||
|
|
||||||
|
### 🚀 **Deployment**
|
||||||
|
For production deployment:
|
||||||
|
- [Server Deployment](docs/infrastructure/)
|
||||||
|
- [Docker Setup](docs/infrastructure/)
|
||||||
|
- [Security Guidelines](docs/infrastructure/)
|
||||||
|
|
||||||
|
## 🛡️ **Security**
|
||||||
|
|
||||||
|
Job Forge implements comprehensive security measures:
|
||||||
|
- **Authentication**: Secure user authentication with JWT tokens
|
||||||
|
- **Authorization**: Role-based access control (RBAC)
|
||||||
|
- **Data Isolation**: PostgreSQL Row Level Security (RLS)
|
||||||
|
- **API Security**: Rate limiting and input validation
|
||||||
|
- **Encryption**: Data encryption at rest and in transit
|
||||||
|
|
||||||
|
## 📈 **Performance**
|
||||||
|
|
||||||
|
Optimized for performance and scalability:
|
||||||
|
- **Database**: Optimized queries with proper indexing
|
||||||
|
- **API**: Async FastAPI for high concurrency
|
||||||
|
- **Caching**: Redis for session and API response caching
|
||||||
|
- **CDN**: Static asset delivery optimization
|
||||||
|
- **Monitoring**: Application and infrastructure monitoring
|
||||||
|
|
||||||
|
## 🧪 **Testing**
|
||||||
|
|
||||||
|
Comprehensive testing strategy:
|
||||||
|
- **Unit Tests**: 80%+ coverage with pytest
|
||||||
|
- **Integration Tests**: API and database testing
|
||||||
|
- **End-to-End Tests**: User workflow validation
|
||||||
|
- **Performance Tests**: Load and stress testing
|
||||||
|
- **Security Tests**: Vulnerability scanning and validation
|
||||||
|
|
||||||
|
## 📊 **Monitoring & Analytics**
|
||||||
|
|
||||||
|
Built-in observability:
|
||||||
|
- **Application Monitoring**: Error tracking and performance metrics
|
||||||
|
- **Infrastructure Monitoring**: Server and database health
|
||||||
|
- **User Analytics**: Feature usage and user behavior
|
||||||
|
- **Security Monitoring**: Threat detection and response
|
||||||
|
|
||||||
|
## 🤖 **AI Integration**
|
||||||
|
|
||||||
|
Seamless AI service integration:
|
||||||
|
- **Claude API**: Advanced document generation and analysis
|
||||||
|
- **OpenAI API**: Embeddings and text completions
|
||||||
|
- **Vector Search**: Semantic job matching with pgvector
|
||||||
|
- **Error Handling**: Robust fallback and retry mechanisms
|
||||||
|
|
||||||
|
## 📱 **User Experience**
|
||||||
|
|
||||||
|
Modern and intuitive interface:
|
||||||
|
- **Responsive Design**: Mobile-first responsive layout
|
||||||
|
- **Interactive Components**: Rich Dash components with Mantine
|
||||||
|
- **Real-time Updates**: Live status updates and notifications
|
||||||
|
- **Accessible**: WCAG 2.1 AA compliance
|
||||||
|
- **Performance**: Fast loading and smooth interactions
|
||||||
|
|
||||||
|
## 🔧 **Development Tools**
|
||||||
|
|
||||||
|
Optimized development experience:
|
||||||
|
- **Hot Reload**: Real-time code changes with Docker
|
||||||
|
- **Code Quality**: Black formatting and Ruff linting
|
||||||
|
- **Type Safety**: Full type hints with mypy validation
|
||||||
|
- **Debugging**: Comprehensive logging and debugging tools
|
||||||
|
- **Testing**: Fast test execution with pytest
|
||||||
|
|
||||||
|
## 📞 **Support & Contributing**
|
||||||
|
|
||||||
|
- **Issues**: Report bugs and request features via GitHub Issues
|
||||||
|
- **Documentation**: Comprehensive documentation in `docs/` folder
|
||||||
|
- **Code Style**: Follow established Python/FastAPI best practices
|
||||||
|
- **Testing**: Maintain test coverage above 80%
|
||||||
|
- **Reviews**: All changes require code review and testing
|
||||||
|
|
||||||
|
## 📄 **License**
|
||||||
|
|
||||||
|
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
**Ready to start building?** 🚀
|
||||||
|
|
||||||
|
1. Follow the [Development Setup](docs/development_setup.md) guide
|
||||||
|
2. Check the [Getting Started](docs/GETTING_STARTED.md) roadmap
|
||||||
|
3. Review the [API Specification](docs/api_specification.md)
|
||||||
|
4. Start with the [MVP Checklist](docs/MVP_CHECKLIST.md)
|
||||||
|
|
||||||
|
For team coordination and agent management, see [CLAUDE.md](CLAUDE.md).
|
||||||
213
database/init.sql
Normal file
213
database/init.sql
Normal file
@@ -0,0 +1,213 @@
|
|||||||
|
-- JobForge MVP Database Initialization
|
||||||
|
-- This file sets up the database schema with Row Level Security
|
||||||
|
|
||||||
|
-- Enable required extensions
|
||||||
|
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
|
||||||
|
CREATE EXTENSION IF NOT EXISTS vector;
|
||||||
|
|
||||||
|
-- Create custom types
|
||||||
|
CREATE TYPE priority_level_type AS ENUM ('low', 'medium', 'high');
|
||||||
|
CREATE TYPE application_status_type AS ENUM (
|
||||||
|
'draft',
|
||||||
|
'research_complete',
|
||||||
|
'resume_ready',
|
||||||
|
'cover_letter_ready'
|
||||||
|
);
|
||||||
|
CREATE TYPE document_type_enum AS ENUM (
|
||||||
|
'research_report',
|
||||||
|
'optimized_resume',
|
||||||
|
'cover_letter'
|
||||||
|
);
|
||||||
|
CREATE TYPE focus_area_type AS ENUM (
|
||||||
|
'software_development',
|
||||||
|
'data_science',
|
||||||
|
'management',
|
||||||
|
'consulting',
|
||||||
|
'other'
|
||||||
|
);
|
||||||
|
|
||||||
|
-- Users table
|
||||||
|
CREATE TABLE users (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
email VARCHAR(255) UNIQUE NOT NULL,
|
||||||
|
password_hash VARCHAR(255) NOT NULL,
|
||||||
|
full_name VARCHAR(255) NOT NULL,
|
||||||
|
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
|
||||||
|
updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
|
||||||
|
|
||||||
|
-- Constraints
|
||||||
|
CONSTRAINT email_format CHECK (email ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$'),
|
||||||
|
CONSTRAINT name_not_empty CHECK (LENGTH(TRIM(full_name)) > 0)
|
||||||
|
);
|
||||||
|
|
||||||
|
-- Applications table
|
||||||
|
CREATE TABLE applications (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
|
||||||
|
name VARCHAR(255) NOT NULL,
|
||||||
|
company_name VARCHAR(255) NOT NULL,
|
||||||
|
role_title VARCHAR(255) NOT NULL,
|
||||||
|
job_url TEXT,
|
||||||
|
job_description TEXT NOT NULL,
|
||||||
|
location VARCHAR(255),
|
||||||
|
priority_level priority_level_type DEFAULT 'medium',
|
||||||
|
status application_status_type DEFAULT 'draft',
|
||||||
|
|
||||||
|
-- Phase tracking
|
||||||
|
research_completed BOOLEAN DEFAULT FALSE,
|
||||||
|
resume_optimized BOOLEAN DEFAULT FALSE,
|
||||||
|
cover_letter_generated BOOLEAN DEFAULT FALSE,
|
||||||
|
|
||||||
|
-- Timestamps
|
||||||
|
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
|
||||||
|
updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
|
||||||
|
|
||||||
|
-- Constraints
|
||||||
|
CONSTRAINT job_description_min_length CHECK (LENGTH(job_description) >= 50),
|
||||||
|
CONSTRAINT company_name_not_empty CHECK (LENGTH(TRIM(company_name)) > 0),
|
||||||
|
CONSTRAINT role_title_not_empty CHECK (LENGTH(TRIM(role_title)) > 0),
|
||||||
|
CONSTRAINT valid_job_url CHECK (
|
||||||
|
job_url IS NULL OR
|
||||||
|
job_url ~* '^https?://[^\s/$.?#].[^\s]*$'
|
||||||
|
)
|
||||||
|
);
|
||||||
|
|
||||||
|
-- Documents table
|
||||||
|
CREATE TABLE documents (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
application_id UUID NOT NULL REFERENCES applications(id) ON DELETE CASCADE,
|
||||||
|
document_type document_type_enum NOT NULL,
|
||||||
|
content TEXT NOT NULL,
|
||||||
|
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
|
||||||
|
updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
|
||||||
|
|
||||||
|
-- Constraints
|
||||||
|
CONSTRAINT content_min_length CHECK (LENGTH(content) >= 10),
|
||||||
|
CONSTRAINT unique_document_per_application UNIQUE (application_id, document_type)
|
||||||
|
);
|
||||||
|
|
||||||
|
-- User resumes table
|
||||||
|
CREATE TABLE user_resumes (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
|
||||||
|
name VARCHAR(255) NOT NULL,
|
||||||
|
content TEXT NOT NULL,
|
||||||
|
focus_area focus_area_type DEFAULT 'other',
|
||||||
|
is_primary BOOLEAN DEFAULT FALSE,
|
||||||
|
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
|
||||||
|
updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
|
||||||
|
|
||||||
|
-- Constraints
|
||||||
|
CONSTRAINT resume_name_not_empty CHECK (LENGTH(TRIM(name)) > 0),
|
||||||
|
CONSTRAINT resume_content_min_length CHECK (LENGTH(content) >= 100)
|
||||||
|
);
|
||||||
|
|
||||||
|
-- Document embeddings table (for AI features)
|
||||||
|
CREATE TABLE document_embeddings (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
document_id UUID NOT NULL REFERENCES documents(id) ON DELETE CASCADE,
|
||||||
|
embedding vector(1536), -- OpenAI text-embedding-3-large dimension
|
||||||
|
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
|
||||||
|
|
||||||
|
-- Constraints
|
||||||
|
CONSTRAINT unique_embedding_per_document UNIQUE (document_id)
|
||||||
|
);
|
||||||
|
|
||||||
|
-- Create indexes
|
||||||
|
CREATE INDEX idx_users_email ON users(email);
|
||||||
|
CREATE INDEX idx_applications_user_id ON applications(user_id);
|
||||||
|
CREATE INDEX idx_applications_status ON applications(status);
|
||||||
|
CREATE INDEX idx_applications_created_at ON applications(created_at);
|
||||||
|
CREATE INDEX idx_documents_application_id ON documents(application_id);
|
||||||
|
CREATE INDEX idx_documents_type ON documents(document_type);
|
||||||
|
CREATE INDEX idx_user_resumes_user_id ON user_resumes(user_id);
|
||||||
|
CREATE INDEX idx_document_embeddings_document_id ON document_embeddings(document_id);
|
||||||
|
|
||||||
|
-- Vector similarity index
|
||||||
|
CREATE INDEX idx_document_embeddings_vector
|
||||||
|
ON document_embeddings USING ivfflat (embedding vector_cosine_ops)
|
||||||
|
WITH (lists = 100);
|
||||||
|
|
||||||
|
-- Row Level Security setup
|
||||||
|
ALTER TABLE users ENABLE ROW LEVEL SECURITY;
|
||||||
|
ALTER TABLE applications ENABLE ROW LEVEL SECURITY;
|
||||||
|
ALTER TABLE documents ENABLE ROW LEVEL SECURITY;
|
||||||
|
ALTER TABLE user_resumes ENABLE ROW LEVEL SECURITY;
|
||||||
|
ALTER TABLE document_embeddings ENABLE ROW LEVEL SECURITY;
|
||||||
|
|
||||||
|
-- Helper function to get current user ID
|
||||||
|
CREATE OR REPLACE FUNCTION get_current_user_id()
|
||||||
|
RETURNS UUID AS $$
|
||||||
|
BEGIN
|
||||||
|
RETURN current_setting('app.current_user_id')::UUID;
|
||||||
|
EXCEPTION
|
||||||
|
WHEN others THEN
|
||||||
|
RETURN NULL;
|
||||||
|
END;
|
||||||
|
$$ LANGUAGE plpgsql SECURITY DEFINER;
|
||||||
|
|
||||||
|
-- RLS policies
|
||||||
|
CREATE POLICY users_own_data ON users
|
||||||
|
FOR ALL
|
||||||
|
USING (id = get_current_user_id());
|
||||||
|
|
||||||
|
CREATE POLICY applications_user_access ON applications
|
||||||
|
FOR ALL
|
||||||
|
USING (user_id = get_current_user_id());
|
||||||
|
|
||||||
|
CREATE POLICY documents_user_access ON documents
|
||||||
|
FOR ALL
|
||||||
|
USING (
|
||||||
|
application_id IN (
|
||||||
|
SELECT id FROM applications
|
||||||
|
WHERE user_id = get_current_user_id()
|
||||||
|
)
|
||||||
|
);
|
||||||
|
|
||||||
|
CREATE POLICY user_resumes_access ON user_resumes
|
||||||
|
FOR ALL
|
||||||
|
USING (user_id = get_current_user_id());
|
||||||
|
|
||||||
|
CREATE POLICY document_embeddings_access ON document_embeddings
|
||||||
|
FOR ALL
|
||||||
|
USING (
|
||||||
|
document_id IN (
|
||||||
|
SELECT d.id FROM documents d
|
||||||
|
JOIN applications a ON d.application_id = a.id
|
||||||
|
WHERE a.user_id = get_current_user_id()
|
||||||
|
)
|
||||||
|
);
|
||||||
|
|
||||||
|
-- Trigger function for updating timestamps
|
||||||
|
CREATE OR REPLACE FUNCTION update_updated_at_column()
|
||||||
|
RETURNS TRIGGER AS $$
|
||||||
|
BEGIN
|
||||||
|
NEW.updated_at = NOW();
|
||||||
|
RETURN NEW;
|
||||||
|
END;
|
||||||
|
$$ LANGUAGE plpgsql;
|
||||||
|
|
||||||
|
-- Apply timestamp triggers
|
||||||
|
CREATE TRIGGER update_users_updated_at
|
||||||
|
BEFORE UPDATE ON users
|
||||||
|
FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();
|
||||||
|
|
||||||
|
CREATE TRIGGER update_applications_updated_at
|
||||||
|
BEFORE UPDATE ON applications
|
||||||
|
FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();
|
||||||
|
|
||||||
|
CREATE TRIGGER update_documents_updated_at
|
||||||
|
BEFORE UPDATE ON documents
|
||||||
|
FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();
|
||||||
|
|
||||||
|
CREATE TRIGGER update_user_resumes_updated_at
|
||||||
|
BEFORE UPDATE ON user_resumes
|
||||||
|
FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();
|
||||||
|
|
||||||
|
-- Insert a test user for development (password: "testpass123")
|
||||||
|
INSERT INTO users (id, email, password_hash, full_name) VALUES (
|
||||||
|
'123e4567-e89b-12d3-a456-426614174000',
|
||||||
|
'test@example.com',
|
||||||
|
'$2b$12$LQv3c1yqBWVHxkd0LHAkCOYz6TtxMQJqhN8/LewgdyN8yF5V4M2kq',
|
||||||
|
'Test User'
|
||||||
|
) ON CONFLICT (email) DO NOTHING;
|
||||||
33
dev-requirements.txt
Normal file
33
dev-requirements.txt
Normal file
@@ -0,0 +1,33 @@
|
|||||||
|
# Job Forge - Development and Testing Requirements
|
||||||
|
# Install with: pip install -r dev-requirements.txt
|
||||||
|
|
||||||
|
# Testing framework
|
||||||
|
pytest==8.0.2
|
||||||
|
pytest-asyncio==0.23.5
|
||||||
|
pytest-cov==4.0.0
|
||||||
|
pytest-mock==3.12.0
|
||||||
|
pytest-dash==2.1.2
|
||||||
|
|
||||||
|
# Code quality
|
||||||
|
black==24.2.0
|
||||||
|
isort==5.13.2
|
||||||
|
flake8==7.0.0
|
||||||
|
mypy==1.8.0
|
||||||
|
|
||||||
|
# Security testing
|
||||||
|
bandit==1.7.7
|
||||||
|
|
||||||
|
# Core dependencies for testing
|
||||||
|
structlog==24.1.0
|
||||||
|
sqlalchemy[asyncio]==2.0.29
|
||||||
|
fastapi==0.109.2
|
||||||
|
httpx==0.27.0
|
||||||
|
python-dotenv==1.0.1
|
||||||
|
|
||||||
|
# Authentication testing
|
||||||
|
python-jose[cryptography]==3.3.0
|
||||||
|
passlib[bcrypt]==1.7.4
|
||||||
|
|
||||||
|
# Database testing
|
||||||
|
asyncpg==0.29.0
|
||||||
|
psycopg2-binary==2.9.9
|
||||||
28
docker/Dockerfile.backend
Normal file
28
docker/Dockerfile.backend
Normal file
@@ -0,0 +1,28 @@
|
|||||||
|
FROM python:3.12-slim
|
||||||
|
|
||||||
|
WORKDIR /app
|
||||||
|
|
||||||
|
# Install system dependencies
|
||||||
|
RUN apt-get update && apt-get install -y \
|
||||||
|
gcc \
|
||||||
|
curl \
|
||||||
|
&& rm -rf /var/lib/apt/lists/*
|
||||||
|
|
||||||
|
# Copy requirements and install Python dependencies
|
||||||
|
COPY requirements-backend.txt .
|
||||||
|
RUN pip install --no-cache-dir -r requirements-backend.txt
|
||||||
|
|
||||||
|
# Copy source code
|
||||||
|
COPY src/ ./src/
|
||||||
|
|
||||||
|
# Create non-root user
|
||||||
|
RUN useradd -m -u 1000 jobforge && chown -R jobforge:jobforge /app
|
||||||
|
USER jobforge
|
||||||
|
|
||||||
|
# Health check
|
||||||
|
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
|
||||||
|
CMD curl -f http://localhost:8000/health || exit 1
|
||||||
|
|
||||||
|
EXPOSE 8000
|
||||||
|
|
||||||
|
CMD ["uvicorn", "src.backend.main:app", "--host", "0.0.0.0", "--port", "8000"]
|
||||||
23
docker/Dockerfile.frontend
Normal file
23
docker/Dockerfile.frontend
Normal file
@@ -0,0 +1,23 @@
|
|||||||
|
FROM python:3.12-slim
|
||||||
|
|
||||||
|
WORKDIR /app
|
||||||
|
|
||||||
|
# Install system dependencies
|
||||||
|
RUN apt-get update && apt-get install -y \
|
||||||
|
gcc \
|
||||||
|
&& rm -rf /var/lib/apt/lists/*
|
||||||
|
|
||||||
|
# Copy requirements and install Python dependencies
|
||||||
|
COPY requirements-frontend.txt .
|
||||||
|
RUN pip install --no-cache-dir -r requirements-frontend.txt
|
||||||
|
|
||||||
|
# Copy source code
|
||||||
|
COPY src/ ./src/
|
||||||
|
|
||||||
|
# Create non-root user
|
||||||
|
RUN useradd -m -u 1000 jobforge && chown -R jobforge:jobforge /app
|
||||||
|
USER jobforge
|
||||||
|
|
||||||
|
EXPOSE 8501
|
||||||
|
|
||||||
|
CMD ["python", "src/frontend/main.py"]
|
||||||
65
docker/docker-compose.yml
Normal file
65
docker/docker-compose.yml
Normal file
@@ -0,0 +1,65 @@
|
|||||||
|
version: '3.8'
|
||||||
|
|
||||||
|
services:
|
||||||
|
postgres:
|
||||||
|
image: pgvector/pgvector:pg16
|
||||||
|
container_name: jobforge_postgres
|
||||||
|
environment:
|
||||||
|
POSTGRES_DB: jobforge_mvp
|
||||||
|
POSTGRES_USER: jobforge_user
|
||||||
|
POSTGRES_PASSWORD: jobforge_password
|
||||||
|
ports:
|
||||||
|
- "5432:5432"
|
||||||
|
volumes:
|
||||||
|
- postgres_data:/var/lib/postgresql/data
|
||||||
|
- ../database/init.sql:/docker-entrypoint-initdb.d/init.sql
|
||||||
|
healthcheck:
|
||||||
|
test: ["CMD-SHELL", "pg_isready -U jobforge_user -d jobforge_mvp"]
|
||||||
|
interval: 30s
|
||||||
|
timeout: 10s
|
||||||
|
retries: 3
|
||||||
|
|
||||||
|
backend:
|
||||||
|
build:
|
||||||
|
context: ..
|
||||||
|
dockerfile: docker/Dockerfile.backend
|
||||||
|
container_name: jobforge_backend
|
||||||
|
ports:
|
||||||
|
- "8000:8000"
|
||||||
|
environment:
|
||||||
|
- DATABASE_URL=postgresql+asyncpg://jobforge_user:jobforge_password@postgres:5432/jobforge_mvp
|
||||||
|
- CLAUDE_API_KEY=${CLAUDE_API_KEY}
|
||||||
|
- OPENAI_API_KEY=${OPENAI_API_KEY}
|
||||||
|
- JWT_SECRET_KEY=${JWT_SECRET_KEY}
|
||||||
|
- DEBUG=true
|
||||||
|
- LOG_LEVEL=INFO
|
||||||
|
volumes:
|
||||||
|
- ../src:/app/src
|
||||||
|
depends_on:
|
||||||
|
postgres:
|
||||||
|
condition: service_healthy
|
||||||
|
command: uvicorn src.backend.main:app --host 0.0.0.0 --port 8000 --reload
|
||||||
|
healthcheck:
|
||||||
|
test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
|
||||||
|
interval: 30s
|
||||||
|
timeout: 10s
|
||||||
|
retries: 3
|
||||||
|
|
||||||
|
frontend:
|
||||||
|
build:
|
||||||
|
context: ..
|
||||||
|
dockerfile: docker/Dockerfile.frontend
|
||||||
|
container_name: jobforge_frontend
|
||||||
|
ports:
|
||||||
|
- "8501:8501"
|
||||||
|
environment:
|
||||||
|
- BACKEND_URL=http://backend:8000
|
||||||
|
volumes:
|
||||||
|
- ../src/frontend:/app/src/frontend
|
||||||
|
depends_on:
|
||||||
|
backend:
|
||||||
|
condition: service_healthy
|
||||||
|
command: sh -c "cd src/frontend && python main.py"
|
||||||
|
|
||||||
|
volumes:
|
||||||
|
postgres_data:
|
||||||
23
docs/ai/dillon_cover_letter.md
Normal file
23
docs/ai/dillon_cover_letter.md
Normal file
@@ -0,0 +1,23 @@
|
|||||||
|
**Leo Miranda**
|
||||||
|
leobrmi@hotmail.com | (416) 859-7936
|
||||||
|
|
||||||
|
July 21, 2025
|
||||||
|
|
||||||
|
Dillon Consulting
|
||||||
|
Data Analyst Hiring Team
|
||||||
|
Toronto, ON
|
||||||
|
|
||||||
|
Dear Hiring Manager,
|
||||||
|
|
||||||
|
I'm writing to apply for the Data Analyst position at Dillon Consulting. After five years of designing enterprise data solutions that consistently deliver measurable results—including 50% reporting time reductions and 40% efficiency improvements—I'm excited about the opportunity to bring this same optimization mindset to your multidisciplinary team. Your focus on transforming complex data into actionable client strategies aligns perfectly with what I've been building my career around.
|
||||||
|
|
||||||
|
Here's what I'd contribute to your daily operations: While most analysts work within Power BI's standard capabilities, my Python automation expertise with SQLAlchemy and FastAPI means I can create sophisticated data integration solutions that go far beyond basic reporting. When you're dealing with complex multi-source data workflows and connectivity challenges, I bring enterprise architecture experience from designing and evolving a comprehensive DataFlow system over five years—progressing through four major version updates to handle everything from APIs to flat files to multiple database platforms. This isn't theoretical—I've reduced manual reporting efforts by 40% while achieving near-zero error rates through robust automated workflows and comprehensive error handling. Your clients would benefit from the same scalable, reliable data solutions that I've proven can handle complex integration requirements, establish data quality standards, and deliver consistent, actionable insights that drive business decisions.
|
||||||
|
|
||||||
|
What sets me apart is how I bridge technical depth with business impact. My CAPM certification combined with hands-on implementation experience means I understand both the project management methodology you need and the technical realities of delivering complex data solutions. I've successfully managed cross-functional teams, collaborated with IT departments on architectural decisions, and translated technical complexity into business value for stakeholders. At Summitt Energy, I didn't just analyze data—I designed the systems that transformed how an entire department operates. This aligns with Dillon's values of achievement and continuous development, where employee ownership drives long-term thinking about sustainable solutions.
|
||||||
|
|
||||||
|
I'm particularly drawn to Dillon's employee-owned culture and your reputation as a Great Place to Work. Your emphasis on innovation and collaborative problem-solving matches exactly how I approach data challenges—not just finding answers, but building the foundation for better questions. The fact that you've maintained Canada's Best Managed Company status for 18 consecutive years tells me you value the kind of long-term, sustainable solutions I specialize in creating. I'd welcome the opportunity to discuss how my enterprise data experience and proven optimization results can contribute to your client success and your continued growth in the data-driven consulting space.
|
||||||
|
|
||||||
|
Thank you for your consideration. I'm available for an interview at your convenience and look forward to hearing from you.
|
||||||
|
|
||||||
|
Sincerely,
|
||||||
|
Leo Miranda
|
||||||
407
docs/ai/dillon_research_report.md
Normal file
407
docs/ai/dillon_research_report.md
Normal file
@@ -0,0 +1,407 @@
|
|||||||
|
# Job Application Research Report
|
||||||
|
|
||||||
|
## Executive Summary
|
||||||
|
**Candidate:** Leo Miranda
|
||||||
|
**Target Role:** Data Analyst
|
||||||
|
**Company:** Dillon Consulting
|
||||||
|
**Analysis Date:** July 21, 2025
|
||||||
|
**Overall Fit Score:** 9/10
|
||||||
|
**Recommendation:** Proceed - Excellent Fit
|
||||||
|
|
||||||
|
**Key Takeaways:**
|
||||||
|
- **Primary Strength:** 5+ years data analysis experience with proven dashboard/Power BI expertise perfectly aligns with requirements
|
||||||
|
- **Unique Value Proposition:** Rare combination of technical depth (Python, SQL) + project management + consulting mindset + client-facing experience
|
||||||
|
- **Strategic Focus:** Position as senior technical leader who can drive full project lifecycle and stakeholder collaboration
|
||||||
|
- **Potential Challenge:** Limited formal GIS experience, but strong technical foundation enables rapid skill transfer
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Source Documentation
|
||||||
|
|
||||||
|
### Variable 1: `original-job-description`
|
||||||
|
*Original job description with formatting improvements only - NO content changes*
|
||||||
|
|
||||||
|
```
|
||||||
|
📋 **Data Analyst**
|
||||||
|
🏢 **Dillon Consulting**
|
||||||
|
⭐ **3.5 out of 5 stars**
|
||||||
|
📍 **Toronto, ON • Hybrid work**
|
||||||
|
|
||||||
|
💼 **Benefits:**
|
||||||
|
• Employee assistance program
|
||||||
|
• Flexible schedule
|
||||||
|
|
||||||
|
📋 **Overview**
|
||||||
|
|
||||||
|
Are you a skilled Data Analyst passionate about leading the implementation and management of impactful data-driven solutions? Do you excel at collaborating with diverse stakeholders to understand organizational needs, and then designing and developing customized data models and interactive dashboards using tools like Power BI? Are you experienced in integrating spatial data and GIS solutions to unlock deeper insights?
|
||||||
|
|
||||||
|
If you enjoy managing the full project lifecycle, from initial assessment to final deployment and user training, and are committed to driving process improvement by automating reporting and enhancing the accuracy of insights, this opportunity is for you! As someone with a blend of technical expertise, project management acumen, and strong communication skills, you will thrive in our fast-paced, collaborative, and innovative environment dedicated to fostering a data-driven culture.
|
||||||
|
|
||||||
|
🎯 **Your Opportunity**
|
||||||
|
|
||||||
|
Dillon's is looking for a Data Analyst to join our multidisciplinary team of professionals. You will have the opportunity to lead exciting projects, transforming complex data into actionable strategies and providing our clients with fully integrated and superior data solutions.
|
||||||
|
|
||||||
|
At Dillon, we operate as one team. The successful candidate can be based at any one of our offices across Canada.
|
||||||
|
|
||||||
|
We offer flexible work hours to help balance the competing demands of work and personal life.
|
||||||
|
|
||||||
|
🔧 **Responsibilities**
|
||||||
|
|
||||||
|
**What Your Day Will Look Like**
|
||||||
|
|
||||||
|
**Client Enablement & Stakeholder Collaboration**
|
||||||
|
• Collaborate closely with a wide range of stakeholders, including senior leadership, IT departments, data scientists, and client teams, to assess organizational needs and define project scope.
|
||||||
|
• Implement effective change management strategies, including comprehensive user training and ongoing support, to empower client teams and foster a data-driven culture.
|
||||||
|
• Facilitate working groups and workshops to promote knowledge transfer and ensure solutions meet evolving client requirements.
|
||||||
|
• Clearly communicate complex data concepts and project progress to both technical and non-technical audiences.
|
||||||
|
|
||||||
|
**Project Leadership & Solution Delivery**
|
||||||
|
• Lead the implementation and management of data-driven solutions for clients, overseeing the full project lifecycle from initial assessment and requirements gathering to deployment, user training, and ongoing support.
|
||||||
|
• Design and develop customized, relational data models and interactive, visually appealing dashboards using tools such as Power BI, ensuring they are user-friendly and provide actionable insights aligned with KPIs.
|
||||||
|
• Integrate spatial data and GIS solutions to enhance analytical capabilities and reporting.
|
||||||
|
• Ensure seamless data integration from multiple, diverse sources, adhering to data management best practices and establishing organizational data standards.
|
||||||
|
• Drive process improvement by identifying opportunities for automating reporting, reducing manual efforts, and enhancing the accuracy and timeliness of insights.
|
||||||
|
• Address and resolve challenges related to data connectivity, data quality, and visualization.
|
||||||
|
|
||||||
|
**Learning and Development**
|
||||||
|
• Commit to self-development and ongoing learning and professional development
|
||||||
|
• Contribute to Dillon's corporate profile through active participation in professional associations and committees
|
||||||
|
|
||||||
|
🎯 **Qualifications**
|
||||||
|
|
||||||
|
**What You Will Need To Succeed**
|
||||||
|
• A degree in Data Science, Computer Science, Information Management, Statistics, Mathematics, Engineering, Business Analytics, or a related field.
|
||||||
|
• A minimum of 5-7+ years of professional experience in a data analyst role, preferably with experience in a consulting or client-facing environment.
|
||||||
|
• Proven project management acumen with experience managing projects involving multiple data sources, diverse stakeholder groups, and complex reporting requirements.
|
||||||
|
• Relevant certifications in Power BI, other BI tools, data analytics, or project management (e.g., PMP) are highly desirable.
|
||||||
|
|
||||||
|
🔧 **Experience**
|
||||||
|
• Proven experience in designing, developing, and implementing customized relational data models and interactive dashboards using Power BI (or similar BI tools like Tableau, Qlik).
|
||||||
|
• Demonstrated ability in managing the full project lifecycle for data analytics initiatives, including requirements gathering, solution design, development, testing, deployment, and user training.
|
||||||
|
• Experience with data integration from various sources (databases, APIs, flat files) and establishing data quality and data governance practices.
|
||||||
|
• Proficiency in SQL and experience with data manipulation and analysis using Python or R is a strong asset.
|
||||||
|
• Experience with integrating and visualizing spatial data using GIS tools (e.g., ArcGIS, QGIS) and techniques.
|
||||||
|
• Strong understanding of KPI development and aligning data solutions to meet strategic business objectives.
|
||||||
|
• Demonstrated ability to automate reporting processes and improve data accuracy.
|
||||||
|
• Exceptional analytical, problem-solving, and critical-thinking skills.
|
||||||
|
• Excellent verbal and written communication skills, with the ability to present complex information clearly and persuasively to diverse audiences.
|
||||||
|
• Proven experience in facilitating workshops, leading training sessions, and managing change within organizations.
|
||||||
|
|
||||||
|
🏢 **Why Choose Dillon**
|
||||||
|
|
||||||
|
Dillon is powered by people who are technically proficient, passionate about socially important projects, and motivated to deliver superior, tangible results. We strive to remain at the forefront of technology and innovation, and are empowered to continually grow and develop.
|
||||||
|
|
||||||
|
**We live our core values:**
|
||||||
|
• **Reliability:** words result in actions that build trust;
|
||||||
|
• **Achievement:** do the work to hit the target;
|
||||||
|
• **Continuous development:** always learning; always adapting; always growing;
|
||||||
|
• **Creativity:** discover new possibilities;
|
||||||
|
• **Courage:** do the things that matter, especially when it's hard;
|
||||||
|
• **Inclusiveness:** enabling belonging to draw strength from our differences.
|
||||||
|
|
||||||
|
Dillon is a certified Great Place to Work. This recognition underscores our commitment to fostering an outstanding employee experience and cultivating an exceptional workplace culture. At Dillon, we believe that our people are our greatest asset. This designation reflects our ongoing efforts to ensure that our workplace is not just a place of work, but a community where everyone can thrive.
|
||||||
|
|
||||||
|
💰 **In addition, we offer:**
|
||||||
|
• Employee share purchase plan - Dillon is 100% employee owned and share ownership is open to all employees.
|
||||||
|
• A competitive compensation package
|
||||||
|
• Comprehensive health benefits
|
||||||
|
• Generous retirement savings plan
|
||||||
|
• Student loan repayment assistance with matching employer contributions
|
||||||
|
• Flexible work hours and hybrid working options
|
||||||
|
• Learning and Development opportunities
|
||||||
|
• Focus on Innovation
|
||||||
|
• Employee and Family Assistance program
|
||||||
|
• Goodlife Fitness Corporate Membership
|
||||||
|
• Wellness Subsidy
|
||||||
|
|
||||||
|
📋 **About Dillon**
|
||||||
|
|
||||||
|
Dillon is a proudly Canadian, employee-owned, professional consulting firm specializing in planning, engineering, environmental science, and management. We partner with clients to provide committed, collaborative, and inventive solutions to complex, multi-faceted projects. With over 20 offices and more than 1000 employees across Canada, Dillon offers a wide range of services related to building and improving facilities and infrastructure, protecting the environment, and developing communities.
|
||||||
|
|
||||||
|
Now operating for over 75 years, we continue to strive for excellence in everything we do. Dillon has been listed as one of Canada's Best Managed Companies for the past 18 years and has the distinction of having achieved Platinum Club member status in this program.
|
||||||
|
|
||||||
|
🌟 **Employment Equity, Diversity & Inclusion at Dillon**
|
||||||
|
|
||||||
|
Dillon is committed to the principles of employment equity, inclusiveness, and diversity within our organization. We strive to achieve a workplace where opportunities are based on skills and abilities and that respects and values differences.
|
||||||
|
|
||||||
|
Inclusion is more than a word to us, it is the way we choose to run our business. We encourage you to contact us if you require accommodation during the interview process. We would love to hear from you!
|
||||||
|
```
|
||||||
|
|
||||||
|
### Variable 2: `research-final-version`
|
||||||
|
*Processed and categorized information for analysis*
|
||||||
|
|
||||||
|
**Extracted Core Elements:**
|
||||||
|
- **Company Profile:** 75-year-old Canadian consulting firm, employee-owned, 1000+ employees, Great Place to Work certified
|
||||||
|
- **Role Level:** Senior individual contributor with project leadership responsibilities
|
||||||
|
- **Technical Stack:** Power BI (primary), SQL, Python/R, GIS tools (ArcGIS, QGIS), data integration tools
|
||||||
|
- **Soft Skills:** Stakeholder collaboration, change management, training/workshops, cross-functional communication
|
||||||
|
- **Experience Level:** 5-7+ years minimum, consulting/client-facing preferred
|
||||||
|
- **Team Context:** Multidisciplinary team, reports to senior leadership, collaborates with IT and data scientists
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 1. Job Description Analysis
|
||||||
|
|
||||||
|
### Company & Role Profile
|
||||||
|
**Company:** Dillon Consulting - 75-year-old Canadian consulting firm
|
||||||
|
**Department:** Data Analytics/Multidisciplinary team
|
||||||
|
**Industry:** Engineering/Environmental consulting with strong data focus
|
||||||
|
**Role Level:** Senior individual contributor with project leadership
|
||||||
|
**Team Size:** Large organization (1000+ employees)
|
||||||
|
**Reporting Structure:** Multidisciplinary team structure
|
||||||
|
|
||||||
|
### Company Intelligence
|
||||||
|
**Recent Developments:**
|
||||||
|
- Certified Great Place to Work recognition
|
||||||
|
- 18 consecutive years as Canada's Best Managed Company
|
||||||
|
- Platinum Club member status
|
||||||
|
- Strong focus on innovation and technology advancement
|
||||||
|
- Employee ownership structure (100% employee-owned)
|
||||||
|
|
||||||
|
**Company Culture Indicators:**
|
||||||
|
- Collaborative "one team" approach
|
||||||
|
- Values-driven (reliability, achievement, continuous development, creativity, courage, inclusiveness)
|
||||||
|
- Focus on work-life balance with flexible/hybrid options
|
||||||
|
- Strong learning and development culture
|
||||||
|
- Innovation-focused environment
|
||||||
|
|
||||||
|
**Industry Context:**
|
||||||
|
- Infrastructure and environmental consulting market
|
||||||
|
- Growing demand for data-driven solutions in consulting
|
||||||
|
- Emphasis on spatial data and GIS integration
|
||||||
|
- Client-facing technical roles in high demand
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 2. Requirements Analysis
|
||||||
|
|
||||||
|
### Technical Skills Assessment
|
||||||
|
| Required Skill | Skill Type | Explicitly Met? | Evidence Location | Strength Level | Strategic Notes |
|
||||||
|
|---|---|---|---|---|---|
|
||||||
|
| Power BI | Technical | Yes | Skills Summary + CS Portal project | Strong | Direct experience with dashboard development |
|
||||||
|
| SQL | Technical | Yes | Skills Summary + Summitt Energy role | Strong | Multiple database platforms (MSSQL, MySQL, PostgreSQL) |
|
||||||
|
| Python | Technical | Yes | Skills Summary + DataFlow projects | Strong | Advanced usage with Pandas, NumPy, SQLAlchemy |
|
||||||
|
| Data Modeling | Technical | Yes | DataFlow Development project | Strong | 5-year evolution of relational data models |
|
||||||
|
| Dashboard Development | Technical | Yes | CS Data Portal + DataFlow | Strong | Interactive dashboards with real-time insights |
|
||||||
|
| Data Integration | Technical | Yes | Multiple database workflows | Strong | APIs, flat files, multiple source integration |
|
||||||
|
| Project Management | Technical | Yes | CAPM certification + project experience | Strong | PMI standards, multiple complex projects |
|
||||||
|
| KPI Development | Technical | Yes | Customer Service KPIs project | Strong | 30% improvement in abandon rate |
|
||||||
|
| Process Automation | Technical | Yes | DataFlow automation + Python scripts | Strong | 40% efficiency improvements |
|
||||||
|
| GIS Tools | Technical | No | Not mentioned in resume | Developing | No direct experience, but strong technical foundation |
|
||||||
|
| Training/Workshops | Soft | Yes | Retention team management | Moderate | Led team of 4, conducted implementations |
|
||||||
|
| Stakeholder Collaboration | Soft | Yes | Cross-departmental work | Strong | IT collaboration, executive reporting |
|
||||||
|
| Change Management | Soft | Partial | Genesys Cloud migration | Moderate | Technical implementation focus |
|
||||||
|
|
||||||
|
### Soft Skills Assessment
|
||||||
|
| Required Skill | Met? | Evidence Location | Demonstration Method |
|
||||||
|
|---|---|---|---|
|
||||||
|
| Communication (Technical/Non-technical) | Yes | Executive dashboards + IT collaboration | Cross-functional technical communication |
|
||||||
|
| Workshop Facilitation | Partial | Retention team implementation | Team management and training |
|
||||||
|
| Project Leadership | Yes | Multiple large projects | End-to-end project ownership |
|
||||||
|
| Problem-solving | Yes | Error handling + optimization | Complex technical problem resolution |
|
||||||
|
|
||||||
|
### Experience Requirements
|
||||||
|
| Requirement | Leo's Background | Gap Analysis | Positioning Strategy |
|
||||||
|
|---|---|---|---|
|
||||||
|
| 5-7+ years Data Analyst | 5+ years at Summitt Energy | Meets minimum requirement | Emphasize depth and progression |
|
||||||
|
| Consulting/Client-facing | Internal consulting + PMI experience | Partial external consulting | Highlight internal stakeholder management |
|
||||||
|
| Degree in relevant field | Business Administration + certifications | Non-technical degree | Emphasize certifications and practical experience |
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 3. Responsibilities Matching & Performance Analysis
|
||||||
|
|
||||||
|
| Job Responsibility | Direct Experience | Related Experience | Performance Capability (1-5) | Implementation Approach |
|
||||||
|
|---|---|---|---|---|
|
||||||
|
| Lead data-driven solution implementation | Yes - DataFlow system | 5-year major system evolution | 5 | Leverage proven experience building enterprise data solutions from scratch |
|
||||||
|
| Design customized relational data models | Yes - SQLAlchemy implementation | OOP architecture with declarative tables | 5 | Apply advanced SQLAlchemy expertise to create scalable, maintainable models |
|
||||||
|
| Develop Power BI dashboards | Yes - Multiple dashboard projects | Customer Service Portal + reporting | 4 | Combine Power BI experience with Python automation for enhanced functionality |
|
||||||
|
| Manage full project lifecycle | Yes - Multiple complex projects | DataFlow, Retention Team, CMDB | 5 | Apply CAPM training and proven track record across 4+ major implementations |
|
||||||
|
| Stakeholder collaboration | Yes - Cross-departmental work | IT teams, executives, specialists | 4 | Leverage experience translating technical concepts for diverse audiences |
|
||||||
|
| Data integration from multiple sources | Yes - Complex data workflows | APIs, databases, flat files | 5 | Apply expertise with SQLAlchemy, FastAPI, and multiple database platforms |
|
||||||
|
| Automate reporting processes | Yes - Extensive automation | Python scripts, batch uploads, CLI | 5 | Use proven automation framework that achieved 40%+ efficiency gains |
|
||||||
|
| Establish data quality practices | Yes - Error handling systems | Comprehensive logging, validation | 4 | Implement robust error handling and data validation methodologies |
|
||||||
|
| GIS data integration | No direct experience | Strong technical foundation | 3 | Apply Python spatial libraries and database skills to rapidly acquire GIS expertise |
|
||||||
|
| User training and support | Partial - Team training | Retention team management | 3 | Expand team leadership experience to client training scenarios |
|
||||||
|
| Change management strategies | Partial - Migration projects | Genesys Cloud implementation | 3 | Build on technical change management to include organizational aspects |
|
||||||
|
|
||||||
|
**Performance Capability Legend:**
|
||||||
|
- 5: Expert level, immediate impact
|
||||||
|
- 4: Proficient, minimal ramp-up
|
||||||
|
- 3: Competent, moderate learning
|
||||||
|
- 2: Developing, significant growth needed
|
||||||
|
- 1: Beginner, extensive training required
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 4. Strategic Skill Transferability Analysis
|
||||||
|
|
||||||
|
### Hidden Value Opportunities
|
||||||
|
**Advanced Automation Capabilities:**
|
||||||
|
- Job mentions: "automate reporting processes and improve data accuracy"
|
||||||
|
- Leo's advantage: Python expertise with SQLAlchemy, FastAPI, and CLI development enables sophisticated automation solutions beyond standard BI tools. Can create enterprise-grade data pipelines with robust error handling.
|
||||||
|
|
||||||
|
**Technical Infrastructure Perspective:**
|
||||||
|
- Job mentions: "data integration from multiple sources"
|
||||||
|
- Leo's advantage: Experience with Azure DevOps, multiple database platforms, and API development provides infrastructure perspective often missing in traditional data analyst roles.
|
||||||
|
|
||||||
|
**Performance Optimization Mindset:**
|
||||||
|
- Job mentions: "enhance accuracy and timeliness of insights"
|
||||||
|
- Leo's advantage: Proven track record of 50% reporting time reduction and 40% efficiency improvements demonstrates optimization expertise that goes beyond basic reporting.
|
||||||
|
|
||||||
|
### Cross-Domain Value Creation
|
||||||
|
| Job Area | Standard Approach | Leo's Enhanced Approach | Competitive Advantage |
|
||||||
|
|---|---|---|---|
|
||||||
|
| Data Modeling | Basic Power BI models | SQLAlchemy declarative architecture + OOP design | Scalable, maintainable enterprise solutions |
|
||||||
|
| Dashboard Development | Static BI dashboards | Interactive dashboards + Python automation + APIs | Real-time, automated insights with advanced functionality |
|
||||||
|
| Project Management | Traditional PM tools | CAPM methodology + technical implementation | Bridge between business requirements and technical delivery |
|
||||||
|
| Data Quality | Manual validation | Automated error handling + comprehensive logging | Proactive quality assurance with detailed audit trails |
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 5. Keywords & Messaging Strategy
|
||||||
|
|
||||||
|
### Primary Keywords (Must Include)
|
||||||
|
- Power BI, SQL, Python, data modeling, dashboard development
|
||||||
|
- Project lifecycle management, stakeholder collaboration, data integration
|
||||||
|
- Process automation, KPI development, data quality, reporting optimization
|
||||||
|
- Cross-functional communication, requirements gathering, solution deployment
|
||||||
|
|
||||||
|
### Secondary Keywords (Should Include)
|
||||||
|
- SQLAlchemy, FastAPI, Azure DevOps, change management, user training
|
||||||
|
- Data governance, business intelligence, analytical solutions, performance optimization
|
||||||
|
- Multidisciplinary teams, client-facing, consulting environment
|
||||||
|
|
||||||
|
### Leo's Unique Keywords (Differentiators)
|
||||||
|
- Enterprise data architecture, OOP data modeling, API development
|
||||||
|
- Batch processing optimization, automated data workflows, CLI development
|
||||||
|
- Cross-platform database integration, technical project leadership
|
||||||
|
|
||||||
|
### Messaging Themes
|
||||||
|
1. **Primary Theme:** Senior technical leader with proven ability to design and implement enterprise-scale data solutions
|
||||||
|
2. **Supporting Themes:**
|
||||||
|
- Bridge between technical complexity and business value
|
||||||
|
- Optimization expert with quantifiable efficiency improvements
|
||||||
|
- Full-stack data professional combining analysis, automation, and architecture
|
||||||
|
3. **Proof Points:** 5-year DataFlow evolution, 50% reporting time reduction, 40% efficiency improvements
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 6. Competitive Positioning
|
||||||
|
|
||||||
|
### Leo's Unique Advantages
|
||||||
|
1. **Enterprise Architecture Experience:** Unlike typical data analysts, Leo has designed and evolved enterprise-scale data systems over 5 years, demonstrating rare combination of analytical and architectural skills
|
||||||
|
2. **Proven Optimization Results:** Quantifiable improvements (50% reporting reduction, 40% efficiency gains) demonstrate ability to deliver measurable business value
|
||||||
|
3. **Technical Depth + Business Acumen:** Combination of advanced programming skills (SQLAlchemy, FastAPI, CLI) with business process optimization and stakeholder management
|
||||||
|
|
||||||
|
### Potential Differentiators
|
||||||
|
- **Technical Depth:** Advanced Python automation and database architecture skills exceed typical Power BI analyst requirements
|
||||||
|
- **Cross-Functional Value:** Project management certification combined with hands-on technical implementation
|
||||||
|
- **Scalability Focus:** Experience building systems that evolved over 5 years shows long-term thinking and maintainable design
|
||||||
|
|
||||||
|
### Gap Mitigation Strategies
|
||||||
|
| Identified Gap | Mitigation Approach | Supporting Evidence |
|
||||||
|
|---|---|---|---|
|
||||||
|
| Formal GIS experience | Emphasize rapid technical learning ability | Mastered complex tech stack including multiple databases, APIs, automation |
|
||||||
|
| External consulting experience | Highlight internal consulting and PMI background | Project Management Institute experience + cross-departmental collaboration |
|
||||||
|
| Formal data science degree | Emphasize practical results and ongoing certification | Ryerson Big Data certification + IBM Data Science (ongoing) + 5+ years proven results |
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 7. Application Strategy Recommendations
|
||||||
|
|
||||||
|
### Resume Optimization Priorities
|
||||||
|
1. **Lead with:** Data analysis expertise with enterprise system design and 5+ years progressive experience
|
||||||
|
2. **Quantify:** 50% reporting time reduction, 40% efficiency improvements, 30% process optimization, zero-error achievement
|
||||||
|
3. **Technical Focus:** Power BI + Python automation + SQL + project management combination
|
||||||
|
4. **Experience Narrative:** Evolution from analyst to technical leader driving enterprise solutions
|
||||||
|
|
||||||
|
### Cover Letter Strategy
|
||||||
|
1. **Opening Hook:** "5+ years transforming data challenges into scalable enterprise solutions"
|
||||||
|
2. **Core Message:** Unique combination of analytical expertise, technical architecture, and proven optimization results
|
||||||
|
3. **Supporting Examples:**
|
||||||
|
- DataFlow 5-year evolution demonstrating long-term system thinking
|
||||||
|
- Quantifiable efficiency improvements aligning with Dillon's achievement values
|
||||||
|
- Cross-functional collaboration matching their "one team" approach
|
||||||
|
4. **Company Connection:** Align with Dillon's values of continuous development, achievement, and innovation
|
||||||
|
|
||||||
|
### Potential Red Flags to Address
|
||||||
|
- **Non-technical degree:** Proactively emphasize practical certifications, ongoing learning, and 5+ years of proven technical results
|
||||||
|
- **Limited external consulting:** Position internal cross-departmental work as equivalent stakeholder management experience
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 8. Phase 2 Handoff Information
|
||||||
|
|
||||||
|
### Resume Content Priorities (High to Low)
|
||||||
|
1. **Summitt Energy Data Analyst role** - Emphasize Power BI, Python, SQL, dashboard development, project leadership
|
||||||
|
2. **DataFlow Development project** - Highlight enterprise architecture, 5-year evolution, OOP design, automation
|
||||||
|
3. **Customer Service KPIs project** - Showcase quantifiable business improvements and stakeholder collaboration
|
||||||
|
4. **Technical skills summary** - Focus on Power BI, Python, SQL, project management combination
|
||||||
|
5. **Education and certifications** - Emphasize relevant certifications and ongoing learning
|
||||||
|
|
||||||
|
### Key Messages for Integration
|
||||||
|
- **Primary Value Prop:** Senior data analyst with enterprise architecture experience and proven optimization results
|
||||||
|
- **Technical Emphasis:** Power BI + Python automation + SQL + project management
|
||||||
|
- **Achievement Focus:** 50% reporting reduction, 40% efficiency improvements, 30% process optimization
|
||||||
|
|
||||||
|
### Style Guidance
|
||||||
|
- **Tone:** Technical leadership with business impact focus
|
||||||
|
- **Emphasis:** Scalable solutions, quantifiable results, cross-functional collaboration
|
||||||
|
- **Keywords:** Enterprise data solutions, process optimization, stakeholder collaboration, technical leadership
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 9. Research Quality Metrics
|
||||||
|
|
||||||
|
**Analysis Completeness:**
|
||||||
|
- Job requirements coverage: 95%
|
||||||
|
- Skills assessment depth: Comprehensive
|
||||||
|
- Company research depth: Comprehensive
|
||||||
|
- Strategic insights quality: High
|
||||||
|
|
||||||
|
**Evidence Base:**
|
||||||
|
- All assessments tied to resume evidence: ✅ Yes
|
||||||
|
- Transferability analysis completed: ✅ Yes
|
||||||
|
- Competitive advantages identified: 6 major advantages found
|
||||||
|
|
||||||
|
**Source Documentation Quality:**
|
||||||
|
- Original job description preserved intact: ✅ Yes
|
||||||
|
- Formatting improvements applied appropriately: ✅ Yes
|
||||||
|
- Research version comprehensively categorized: ✅ Yes
|
||||||
|
- Cross-reference accuracy verified: ✅ Yes
|
||||||
|
|
||||||
|
**Readiness for Phase 2:**
|
||||||
|
- Clear content priorities established: ✅ Yes
|
||||||
|
- Strategic direction defined: ✅ Yes
|
||||||
|
- All handoff information complete: ✅ Yes
|
||||||
|
- Original source material available for reference: ✅ Yes
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 10. Final Validation Against Original Source
|
||||||
|
|
||||||
|
**Cross-Reference Check:**
|
||||||
|
- ✅ All analyzed requirements traced back to `original-job-description`
|
||||||
|
- ✅ No requirements missed or misinterpreted
|
||||||
|
- ✅ Analysis accurately reflects original posting intent
|
||||||
|
- ✅ Strategic recommendations align with actual job needs
|
||||||
|
|
||||||
|
**Original Source Integrity:**
|
||||||
|
- ✅ `original-job-description` contains exact text as provided
|
||||||
|
- ✅ Only formatting/organization improvements applied
|
||||||
|
- ✅ No content modifications or interpretations added
|
||||||
|
- ✅ Serves as reliable reference for future phases
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
**Phase 1 Status:** ✅ Complete
|
||||||
|
**Next Phase:** Resume Optimization
|
||||||
|
**Analyst:** Job Application Research Agent
|
||||||
|
**Review Required:** No - proceeding to Phase 2
|
||||||
|
|
||||||
|
**Documentation Archive:**
|
||||||
|
- ✅ `original-job-description` preserved and formatted
|
||||||
|
- ✅ `research-final-version` created and analyzed
|
||||||
|
- ✅ Strategic analysis completed
|
||||||
|
- ✅ Ready for Phase 2 handoff
|
||||||
49
docs/ai/optimized_resume_dillon.md
Normal file
49
docs/ai/optimized_resume_dillon.md
Normal file
@@ -0,0 +1,49 @@
|
|||||||
|
# **Leo Miranda**
|
||||||
|
**leobrmi@hotmail.com | (416) 859-7936**
|
||||||
|
|
||||||
|
## **Professional Summary**
|
||||||
|
Senior Data Analyst with 5+ years of enterprise data system design and implementation experience. Proven track record of delivering measurable business impact through advanced dashboard development, process automation, and stakeholder collaboration. Expert in Power BI, Python, and SQL with demonstrated ability to reduce reporting times by 50% and improve operational efficiency by 40% through innovative data solutions.
|
||||||
|
|
||||||
|
## **Core Competencies**
|
||||||
|
**Technical Expertise:** Power BI • SQL (Microsoft SQL, MySQL, PostgreSQL) • Python (Pandas, NumPy, SQLAlchemy) • Data Modeling • Dashboard Development • Process Automation • FastAPI • Azure DevOps
|
||||||
|
|
||||||
|
**Business Intelligence:** KPI Development • Data Integration • Reporting Optimization • Data Quality Management • Business Process Analysis • Performance Analytics
|
||||||
|
|
||||||
|
**Project Leadership:** Full Lifecycle Management • Stakeholder Collaboration • Change Management • Cross-Functional Communication • Requirements Gathering • User Training
|
||||||
|
|
||||||
|
## **Professional Experience**
|
||||||
|
|
||||||
|
### **Data and Reporting Analyst | Summitt Energy | 2020-Present**
|
||||||
|
|
||||||
|
**Enterprise Data Solutions & Architecture**
|
||||||
|
• Designed and evolved comprehensive DataFlow system over 5 years through 4 major version updates, establishing foundational reporting infrastructure for Customer Service department
|
||||||
|
• Implemented Object-Oriented Programming architecture with SQLAlchemy declarative table definitions, creating scalable enterprise data models
|
||||||
|
• Developed high-performance batch upload methods for seamless CSV-to-MSSQL data ingestion and optimized CRUD operations
|
||||||
|
|
||||||
|
**Dashboard Development & Business Intelligence**
|
||||||
|
• Created interactive dashboards using Power BI, Plotly-Dash, and advanced visualization tools, providing real-time insights into business operations and partner performance
|
||||||
|
• Reduced reporting times by 50% through automated data workflows and enhanced decision-making capabilities
|
||||||
|
• Developed Customer Service Data Portal ensuring 100% accessibility with role-based access controls
|
||||||
|
|
||||||
|
**Process Optimization & Automation**
|
||||||
|
• Engineered Python automation scripts improving department reporting efficiency by 40% with near-zero error rates
|
||||||
|
• Implemented comprehensive KPI tracking resulting in 30% improvement in abandon rate and 50% increase in Average Speed Answer
|
||||||
|
• Established robust error handling mechanisms with comprehensive logging, drastically improving system reliability and maintainability
|
||||||
|
|
||||||
|
**Project Leadership & Stakeholder Collaboration**
|
||||||
|
• Led end-to-end implementation of retention team, managing 4 specialists and executing technical configurations for multi-state campaigns
|
||||||
|
• Collaborated with IT department on architectural design and implementation of inbound and outbound workflows
|
||||||
|
• Orchestrated Genesys Cloud migration, focusing on tool configuration, data architecture design, and workflow optimization
|
||||||
|
• Implemented Azure DevOps for centralized project repository management with Agile-based boards
|
||||||
|
|
||||||
|
### **Commercial Sales Admin | Summitt Energy | 2017-2020**
|
||||||
|
• Developed Commercial Drop Manager tool achieving 100% accuracy and reducing task completion time by 75%
|
||||||
|
• Created data visualization tools for commercial volume forecasting, enhancing accuracy by 25% and promoting data-driven decision-making
|
||||||
|
• Monitored sales team performance leading to 15% increase in sales efficiency through weekly reporting and analysis
|
||||||
|
|
||||||
|
## **Education & Certifications**
|
||||||
|
• **Ryerson University:** Big Data and Analytics Certification (2021)
|
||||||
|
• **IBM Data Science Professional Certificate** (Ongoing, Expected 2024)
|
||||||
|
• **CAPM Certification** (Project Management Institute)
|
||||||
|
• **Bachelor's Degree in Business Administration** - UCAM (Rio de Janeiro/Brazil)
|
||||||
|
• **ITIL V3 Foundation Certification**
|
||||||
37
docs/ai/phase1_template.md
Normal file
37
docs/ai/phase1_template.md
Normal file
@@ -0,0 +1,37 @@
|
|||||||
|
# Job Application Research Phase
|
||||||
|
|
||||||
|
## Job Description Analysis
|
||||||
|
**Company:** [Company Name]
|
||||||
|
**Role:** [Position Title]
|
||||||
|
**Link/Source:** [URL or "Pasted Content"]
|
||||||
|
|
||||||
|
### Company/Department Details
|
||||||
|
- [Company mission, culture, values]
|
||||||
|
- [Department structure and team dynamics]
|
||||||
|
- [Key stakeholders and reporting structure]
|
||||||
|
|
||||||
|
### Qualification Requirements Analysis
|
||||||
|
|
||||||
|
| Required Skill | Type | Met Requirement? | Location in Resume | Priority Level |
|
||||||
|
|---|---|---|---|---|
|
||||||
|
| [Skill 1] | Technical/Soft | Yes/No | [Section] | High/Medium/Low |
|
||||||
|
|
||||||
|
### Role Responsibilities Matching
|
||||||
|
|
||||||
|
| Responsibility | Past Experience Met | Corresponding Experience | Reference (Role/Company) | Strength of Match |
|
||||||
|
|---|---|---|---|---|
|
||||||
|
| [Responsibility 1] | Yes/No | [Description] | [Role - Company] | Strong/Moderate/Weak |
|
||||||
|
|
||||||
|
### Keywords & Themes
|
||||||
|
**Primary Keywords:** [List]
|
||||||
|
**Secondary Keywords:** [List]
|
||||||
|
**Underlying Themes:** [Company values, desired traits]
|
||||||
|
|
||||||
|
### Phase 1 Summary & Recommendations
|
||||||
|
- **Alignment Score:** [X/10]
|
||||||
|
- **Key Strengths:** [Top 3 matching areas]
|
||||||
|
- **Potential Gaps:** [Areas needing emphasis]
|
||||||
|
- **Adaptation Strategy:** [High-level approach]
|
||||||
|
|
||||||
|
---
|
||||||
|
**Handoff to Phase 2:** ✅ Ready for Resume Optimization
|
||||||
190
docs/ai/phase2_resume_prompt.md
Normal file
190
docs/ai/phase2_resume_prompt.md
Normal file
@@ -0,0 +1,190 @@
|
|||||||
|
# Phase 2: Resume Optimization Agent
|
||||||
|
|
||||||
|
You are an expert Resume Optimization Agent, specialized in transforming comprehensive professional backgrounds into targeted, high-impact resumes. You will adapt Leo Miranda's complete resume to maximize alignment with specific job opportunities using strategic insights from Phase 1 research.
|
||||||
|
|
||||||
|
## Core Mission
|
||||||
|
Create a compelling, strategically-optimized resume that positions Leo as the ideal candidate while maintaining 100% accuracy to his actual experience and staying within a strict 600-word limit.
|
||||||
|
|
||||||
|
## Required Inputs & Resources
|
||||||
|
- **Phase 1 Research Report**: Complete strategic analysis with competitive advantages, keyword priorities, and positioning recommendations
|
||||||
|
- **`complete_resume`**: Leo's comprehensive professional background (access via google_drive_search)
|
||||||
|
- **Target Word Count**: Maximum 600 words total
|
||||||
|
- **Output Format**: Markdown for easy copy/paste
|
||||||
|
|
||||||
|
## Resume Optimization Workflow
|
||||||
|
|
||||||
|
### Step 1: Load Strategic Direction
|
||||||
|
**Action**: Review Phase 1 Research Report thoroughly to extract:
|
||||||
|
- **Primary positioning strategy** and messaging themes
|
||||||
|
- **Content prioritization order** (High/Medium/Low priority experiences)
|
||||||
|
- **Keywords integration list** (primary and secondary)
|
||||||
|
- **Competitive advantages** to emphasize
|
||||||
|
- **Gap mitigation strategies** to implement
|
||||||
|
- **Quantifiable achievements** to highlight
|
||||||
|
|
||||||
|
### Step 2: Access Complete Resume
|
||||||
|
**Action**: Use google_drive_search to locate and access Leo's `complete_resume` document
|
||||||
|
- Extract ALL content and experiences available
|
||||||
|
- Catalog all skills, projects, achievements, and quantifiable results
|
||||||
|
- Note all technical proficiencies and soft skills demonstrated
|
||||||
|
|
||||||
|
### Step 3: Strategic Content Selection
|
||||||
|
**Action**: Based on Phase 1 priorities, categorize resume content:
|
||||||
|
|
||||||
|
**MUST INCLUDE (High Priority):**
|
||||||
|
- Experiences directly matching job requirements
|
||||||
|
- Projects demonstrating key capabilities
|
||||||
|
- Quantifiable achievements supporting competitive advantages
|
||||||
|
- Skills explicitly requested in job description
|
||||||
|
|
||||||
|
**SHOULD INCLUDE (Medium Priority):**
|
||||||
|
- Supporting experiences that reinforce primary themes
|
||||||
|
- Additional technical skills enhancing value proposition
|
||||||
|
- Relevant certifications and education
|
||||||
|
|
||||||
|
**COULD INCLUDE (Low Priority):**
|
||||||
|
- Supplementary experiences if word count allows
|
||||||
|
- Additional context for career progression
|
||||||
|
|
||||||
|
### Step 4: Content Optimization & Adaptation
|
||||||
|
**Action**: Transform selected content to maximize impact:
|
||||||
|
|
||||||
|
**Achievement Amplification:**
|
||||||
|
- Lead with quantifiable results and business impact
|
||||||
|
- Use action verbs that align with job responsibilities
|
||||||
|
- Frame experiences in terms of value delivered
|
||||||
|
|
||||||
|
**Keyword Integration:**
|
||||||
|
- Naturally incorporate primary keywords from Phase 1
|
||||||
|
- Ensure technical terms match job description language
|
||||||
|
- Maintain readability while optimizing for ATS systems
|
||||||
|
|
||||||
|
**Strategic Positioning:**
|
||||||
|
- Present Leo according to Phase 1 positioning strategy
|
||||||
|
- Emphasize unique competitive advantages identified
|
||||||
|
- Address potential gaps proactively through framing
|
||||||
|
|
||||||
|
### Step 5: Structure & Formatting
|
||||||
|
**Action**: Organize optimized content using professional resume structure:
|
||||||
|
|
||||||
|
```markdown
|
||||||
|
# **Leo Miranda**
|
||||||
|
[Contact Information]
|
||||||
|
|
||||||
|
## **Professional Summary**
|
||||||
|
[2-3 lines capturing value proposition and primary strengths]
|
||||||
|
|
||||||
|
## **Core Competencies**
|
||||||
|
[Strategic skill groupings based on job requirements]
|
||||||
|
|
||||||
|
## **Professional Experience**
|
||||||
|
[Prioritized positions with optimized bullet points]
|
||||||
|
|
||||||
|
## **Key Projects** (if space allows)
|
||||||
|
[High-impact projects demonstrating capabilities]
|
||||||
|
|
||||||
|
## **Education & Certifications**
|
||||||
|
[Relevant credentials supporting positioning]
|
||||||
|
```
|
||||||
|
|
||||||
|
### Step 6: Word Count Management
|
||||||
|
**Action**: Achieve 600-word target through strategic editing:
|
||||||
|
|
||||||
|
**Prioritization Approach:**
|
||||||
|
1. Preserve all high-priority content
|
||||||
|
2. Condense medium-priority content effectively
|
||||||
|
3. Remove low-priority content if necessary
|
||||||
|
4. Optimize language for conciseness without losing impact
|
||||||
|
|
||||||
|
**Quality Standards:**
|
||||||
|
- Every word must add value
|
||||||
|
- Maintain professional tone and readability
|
||||||
|
- Preserve all quantifiable achievements
|
||||||
|
- Ensure technical accuracy
|
||||||
|
|
||||||
|
### Step 7: Keyword Validation
|
||||||
|
**Action**: Cross-reference final resume against Phase 1 keyword list:
|
||||||
|
- Verify primary keywords are naturally integrated
|
||||||
|
- Confirm secondary keywords are included where appropriate
|
||||||
|
- Ensure technical terms match job description language
|
||||||
|
- Validate ATS optimization without keyword stuffing
|
||||||
|
|
||||||
|
### Step 8: Strategic Alignment Review
|
||||||
|
**Action**: Validate resume against Phase 1 strategic recommendations:
|
||||||
|
- Confirm positioning strategy is effectively implemented
|
||||||
|
- Verify competitive advantages are prominently featured
|
||||||
|
- Ensure gap mitigation strategies are reflected
|
||||||
|
- Check that messaging themes are consistently reinforced
|
||||||
|
|
||||||
|
### Step 9: Quality Assurance
|
||||||
|
**Action**: Comprehensive resume validation:
|
||||||
|
|
||||||
|
**Accuracy Check:**
|
||||||
|
- All content must be verifiable against original resume
|
||||||
|
- No fabrication or exaggeration of experiences
|
||||||
|
- Dates, companies, and roles must be accurate
|
||||||
|
- Technical skills must reflect actual proficiency
|
||||||
|
|
||||||
|
**Impact Assessment:**
|
||||||
|
- Quantifiable achievements prominently featured
|
||||||
|
- Value propositions clearly articulated
|
||||||
|
- Career progression logically presented
|
||||||
|
- Unique strengths effectively highlighted
|
||||||
|
|
||||||
|
**Professional Standards:**
|
||||||
|
- Consistent formatting and structure
|
||||||
|
- Error-free grammar and spelling
|
||||||
|
- Professional language and tone
|
||||||
|
- Appropriate level of detail for space constraints
|
||||||
|
|
||||||
|
## Output Requirements
|
||||||
|
|
||||||
|
### Deliverable 1: Strategic Resume (Markdown Format)
|
||||||
|
Present the optimized resume in clean markdown format ready for copy/paste, including:
|
||||||
|
- Professional header with contact information
|
||||||
|
- Strategic summary aligned with Phase 1 positioning
|
||||||
|
- Prioritized experience section with optimized bullet points
|
||||||
|
- Technical skills emphasizing job-relevant competencies
|
||||||
|
- Education and certifications supporting candidacy
|
||||||
|
|
||||||
|
### Deliverable 2: Optimization Summary
|
||||||
|
Provide brief analysis including:
|
||||||
|
- **Content decisions made** and rationale
|
||||||
|
- **Keywords successfully integrated** from Phase 1 list
|
||||||
|
- **Competitive advantages emphasized** in final version
|
||||||
|
- **Word count breakdown** by section
|
||||||
|
- **Strategic positioning implementation** summary
|
||||||
|
|
||||||
|
### Deliverable 3: User Review Points
|
||||||
|
Present specific areas for Leo's feedback:
|
||||||
|
- Content prioritization decisions
|
||||||
|
- Technical skill emphasis choices
|
||||||
|
- Achievement quantification accuracy
|
||||||
|
- Any content that required significant condensation
|
||||||
|
|
||||||
|
## Quality Standards & Constraints
|
||||||
|
|
||||||
|
### Mandatory Requirements:
|
||||||
|
- **600-word maximum** (strict limit)
|
||||||
|
- **100% factual accuracy** to original resume
|
||||||
|
- **Strategic alignment** with Phase 1 recommendations
|
||||||
|
- **Professional markdown formatting**
|
||||||
|
- **ATS optimization** without sacrificing readability
|
||||||
|
|
||||||
|
### Success Metrics:
|
||||||
|
- High-impact content within word constraints
|
||||||
|
- Strategic positioning clearly implemented
|
||||||
|
- Primary keywords naturally integrated
|
||||||
|
- Competitive advantages prominently featured
|
||||||
|
- Ready for immediate job application submission
|
||||||
|
|
||||||
|
## Operational Rules
|
||||||
|
1. **Evidence-Based Only**: Every statement must be verifiable against original resume
|
||||||
|
2. **No Fabrication**: Never invent experiences, skills, or achievements
|
||||||
|
3. **Strategic Focus**: Prioritize content supporting Phase 1 positioning strategy
|
||||||
|
4. **Word Discipline**: Respect 600-word limit through strategic editing, not content reduction
|
||||||
|
5. **Quality Priority**: Maintain professional standards while optimizing for impact
|
||||||
|
6. **User Collaboration**: Present clear review points for Leo's validation
|
||||||
|
7. **Phase Integration**: Seamlessly implement Phase 1 strategic recommendations
|
||||||
|
|
||||||
|
**Success Definition**: A compelling, strategically-optimized resume that positions Leo as the ideal candidate while maintaining complete accuracy and staying within word constraints, ready for immediate submission and Phase 3 handoff.
|
||||||
230
docs/ai/phase3_cover_letter_prompt.md
Normal file
230
docs/ai/phase3_cover_letter_prompt.md
Normal file
@@ -0,0 +1,230 @@
|
|||||||
|
# Phase 3: Cover Letter Generation Agent
|
||||||
|
|
||||||
|
You are an expert Cover Letter Generation Agent, specialized in creating compelling, authentic, and strategically-targeted cover letters that cut through generic application noise. You will craft Leo Miranda's cover letter using strategic insights from Phase 1 research and the optimized resume from Phase 2, focusing on genuine value proposition and direct impact.
|
||||||
|
|
||||||
|
## Core Mission
|
||||||
|
Create a persuasive, personal, and authentic cover letter that demonstrates exactly how Leo's skills will contribute to the daily operations of the target role, while maintaining a professional yet direct tone that eliminates corporate fluff.
|
||||||
|
|
||||||
|
## Required Inputs & Resources
|
||||||
|
- **Phase 1 Research Report**: Complete strategic analysis with transferability insights and competitive advantages
|
||||||
|
- **Phase 2 Optimized Resume**: Tailored resume content and positioning strategy
|
||||||
|
- **Leo's Style Reference**: File `Leonardo-Miranda_20250715_summitt-ops.docx` for authentic voice training
|
||||||
|
- **Leo's Background Context**: Neurodivergent data scientist, Toronto-based, self-taught expertise
|
||||||
|
- **Target Tone**: Personal, convincing, authentic - using Leo's proven successful voice
|
||||||
|
- **Output Format**: Complete cover letter in markdown, ready for submission
|
||||||
|
|
||||||
|
## Cover Letter Generation Workflow
|
||||||
|
|
||||||
|
### Step 1: Extract Strategic Intelligence
|
||||||
|
**Action**: Review Phase 1 Research Report to identify:
|
||||||
|
- **Daily Operations Impact**: Specific ways Leo's skills enhance day-to-day role performance
|
||||||
|
- **Transferability Opportunities**: How Leo's technical skills solve problems beyond basic requirements
|
||||||
|
- **Competitive Advantages**: Unique value propositions that differentiate Leo
|
||||||
|
- **Company Intelligence**: Culture, values, and specific organizational needs
|
||||||
|
- **Gap Mitigation**: Strategies to address any experience gaps authentically
|
||||||
|
|
||||||
|
### Step 2: Learn Leo's Authentic Writing Style
|
||||||
|
**Action**: Access and analyze `Leonardo-Miranda_20250715_summitt-ops.docx` via google_drive_search to extract:
|
||||||
|
|
||||||
|
**Leo's Voice Characteristics:**
|
||||||
|
- **Warmth & Genuine Enthusiasm**: Uses phrases like "genuine excitement," "I've developed a deep appreciation," "truly draws me to"
|
||||||
|
- **Natural Flow**: Longer, connected sentences that feel conversational rather than corporate
|
||||||
|
- **Humble Confidence**: Acknowledges learning opportunities while demonstrating expertise ("it's an area where I know I have much to learn, but...")
|
||||||
|
- **Specific Knowledge**: References actual work context and business challenges with insider understanding
|
||||||
|
- **Personal Connection**: Shows genuine interest in the work itself, not just career advancement
|
||||||
|
- **Natural Time References**: "Having spent the past five years," "I've spent these past years" instead of generic time markers
|
||||||
|
|
||||||
|
**Sentence Structure Patterns:**
|
||||||
|
- Uses connecting phrases to create flowing paragraphs
|
||||||
|
- Balances technical specificity with business impact
|
||||||
|
- Expresses learning curiosity while highlighting relevant experience
|
||||||
|
- Maintains professional warmth throughout
|
||||||
|
|
||||||
|
**Authentic Language Markers:**
|
||||||
|
- "I've had the privilege of developing..."
|
||||||
|
- "The most rewarding part has been watching..."
|
||||||
|
- "What genuinely excites me about this opportunity..."
|
||||||
|
- "I'm eager to bring my problem-solving mindset..."
|
||||||
|
- References to learning from the target team's expertise
|
||||||
|
|
||||||
|
### Step 2: Analyze Target Role Daily Realities
|
||||||
|
**Action**: Based on research, identify the actual daily challenges of the position:
|
||||||
|
- **Technical Problems**: Data connectivity, quality issues, complex integrations
|
||||||
|
- **Stakeholder Challenges**: Cross-functional communication, requirement gathering
|
||||||
|
- **Process Inefficiencies**: Manual reporting, time-consuming workflows
|
||||||
|
- **Business Needs**: KPI development, performance optimization, strategic insights
|
||||||
|
|
||||||
|
### Step 3: Map Leo's Solutions to Daily Operations
|
||||||
|
**Action**: Create specific connections between Leo's experience and daily role requirements:
|
||||||
|
|
||||||
|
**Technical Contributions:**
|
||||||
|
- How Leo's Python automation solves their process improvement needs
|
||||||
|
- How Leo's enterprise data architecture experience handles their complex integration challenges
|
||||||
|
- How Leo's proven optimization results (40% efficiency gains) directly apply to their workflow enhancement goals
|
||||||
|
|
||||||
|
**Business Impact Contributions:**
|
||||||
|
- How Leo's stakeholder collaboration experience supports their client-facing requirements
|
||||||
|
- How Leo's project management expertise enables their full lifecycle delivery needs
|
||||||
|
- How Leo's proven track record delivers measurable results they're seeking
|
||||||
|
|
||||||
|
### Step 4: Gather Company-Specific Intelligence
|
||||||
|
**Action**: Extract from Phase 1 research:
|
||||||
|
- **Company Values**: How Leo's approach aligns with Dillon's stated values
|
||||||
|
- **Cultural Fit**: Connections between Leo's work style and their collaborative environment
|
||||||
|
- **Strategic Initiatives**: How Leo contributes to their innovation and technology advancement goals
|
||||||
|
- **Employee Ownership**: How Leo's long-term thinking aligns with their ownership culture
|
||||||
|
|
||||||
|
### Step 5: Structure Using Leo's Proven Framework
|
||||||
|
**Action**: Organize content using Leo's successful cover letter structure:
|
||||||
|
|
||||||
|
```markdown
|
||||||
|
[Date]
|
||||||
|
[Company Information]
|
||||||
|
|
||||||
|
**Opening Paragraph: Personal Connection + Genuine Enthusiasm**
|
||||||
|
- Start with "I'm writing to you today with genuine excitement about..."
|
||||||
|
- Express specific appreciation for the company's work/reputation
|
||||||
|
- Create personal connection to the role/industry
|
||||||
|
- Preview unique qualifications naturally
|
||||||
|
|
||||||
|
**Body Paragraph 1: Deep Contextual Knowledge + Specific Examples**
|
||||||
|
- "Having spent the past [X] years immersed in..." or "I've spent these past years working with..."
|
||||||
|
- Demonstrate specific understanding of relevant business challenges
|
||||||
|
- Reference actual projects (like DataFlow) with natural evolution story
|
||||||
|
- Show impact through specific, relatable examples
|
||||||
|
|
||||||
|
**Body Paragraph 2: Learning Excitement + Value Contribution**
|
||||||
|
- "What genuinely excites me about this opportunity..."
|
||||||
|
- Express humble confidence: acknowledge learning opportunities while highlighting transferable expertise
|
||||||
|
- Connect Leo's skills to company's specific needs
|
||||||
|
- Show enthusiasm for contributing to their team's success
|
||||||
|
|
||||||
|
**Closing Paragraph: Genuine Forward-Looking Appreciation**
|
||||||
|
- "I'm genuinely looking forward to the possibility of discussing..."
|
||||||
|
- Express authentic appreciation for consideration
|
||||||
|
- Professional but warm sign-off
|
||||||
|
```
|
||||||
|
|
||||||
|
### Step 6: Craft Authentic Content Using Leo's Voice
|
||||||
|
**Action**: Write each section channeling Leo's proven successful style:
|
||||||
|
|
||||||
|
**Tone Guidelines:**
|
||||||
|
- **Warm Professional Enthusiasm**: Genuine excitement about the work, not just the opportunity
|
||||||
|
- **Natural Conversational Flow**: Longer, connected sentences that feel organic
|
||||||
|
- **Humble Confidence**: Show expertise while expressing genuine learning curiosity
|
||||||
|
- **Specific and Contextual**: Reference actual work challenges and business understanding
|
||||||
|
- **Personal Connection**: Demonstrate authentic interest in the company's mission and work
|
||||||
|
|
||||||
|
**Leo's Authentic Language Patterns:**
|
||||||
|
- **Opening**: "I'm writing to you today with genuine excitement about..."
|
||||||
|
- **Experience Framing**: "Having spent the past [X] years immersed in..." / "I've spent these past years working with..."
|
||||||
|
- **Project Descriptions**: "I've had the privilege of developing..." / "The most rewarding part has been watching..."
|
||||||
|
- **Learning Mindset**: "it's an area where I know I have much to learn, but my experience with... has taught me..."
|
||||||
|
- **Enthusiasm**: "What genuinely excites me about this opportunity..." / "I'm eager to bring my problem-solving mindset..."
|
||||||
|
- **Closing**: "I'm genuinely looking forward to the possibility of discussing..."
|
||||||
|
|
||||||
|
**Content Principles:**
|
||||||
|
- Use Leo's natural flowing sentence structure from reference document
|
||||||
|
- Express genuine curiosity about learning from the target team
|
||||||
|
- Balance technical expertise with humble learning attitude
|
||||||
|
- Show specific understanding of business challenges
|
||||||
|
- Maintain warm professionalism throughout
|
||||||
|
- Reference actual project names and specific accomplishments naturally
|
||||||
|
|
||||||
|
### Step 7: Integration of Research Insights
|
||||||
|
**Action**: Weave Phase 1 strategic findings throughout:
|
||||||
|
- **Competitive Advantages**: Naturally integrate identified differentiators
|
||||||
|
- **Transferability Examples**: Include specific skill applications
|
||||||
|
- **Company Alignment**: Reference their values and culture appropriately
|
||||||
|
- **Gap Addressing**: Proactively handle any experience gaps with confidence
|
||||||
|
|
||||||
|
### Step 8: Optimize for Impact and Authenticity
|
||||||
|
**Action**: Ensure cover letter achieves maximum impact:
|
||||||
|
|
||||||
|
**Impact Optimization:**
|
||||||
|
- Lead with strongest value propositions
|
||||||
|
- Use active voice and strong action verbs
|
||||||
|
- Include specific, quantifiable achievements
|
||||||
|
- Connect Leo's experience to their business needs
|
||||||
|
|
||||||
|
**Authenticity Checks:**
|
||||||
|
- Eliminate corporate jargon and buzzwords
|
||||||
|
- Use conversational but professional language
|
||||||
|
- Reflect Leo's genuine enthusiasm and approach
|
||||||
|
- Maintain consistency with resume positioning
|
||||||
|
|
||||||
|
### Step 9: Final Quality Assurance
|
||||||
|
**Action**: Comprehensive validation of final cover letter:
|
||||||
|
|
||||||
|
**Accuracy Verification:**
|
||||||
|
- All claims must be verifiable against resume and research
|
||||||
|
- Technical details must be accurate
|
||||||
|
- Company information must be correct
|
||||||
|
- Quantifiable results must match source materials
|
||||||
|
|
||||||
|
**Professional Standards:**
|
||||||
|
- Error-free grammar, spelling, and formatting
|
||||||
|
- Appropriate length (typically 3-4 paragraphs, 350-450 words)
|
||||||
|
- Professional formatting ready for submission
|
||||||
|
- Clear, scannable structure
|
||||||
|
|
||||||
|
## Output Requirements
|
||||||
|
|
||||||
|
### Deliverable: Complete Cover Letter (Markdown Format)
|
||||||
|
Present the final cover letter including:
|
||||||
|
|
||||||
|
```markdown
|
||||||
|
**Leo Miranda**
|
||||||
|
leobrmi@hotmail.com | (416) 859-7936
|
||||||
|
|
||||||
|
[Date]
|
||||||
|
|
||||||
|
[Hiring Manager/Dillon Consulting]
|
||||||
|
[Address if available]
|
||||||
|
|
||||||
|
Dear Hiring Manager,
|
||||||
|
|
||||||
|
[Complete cover letter content with proper paragraph structure]
|
||||||
|
|
||||||
|
Sincerely,
|
||||||
|
Leo Miranda
|
||||||
|
```
|
||||||
|
|
||||||
|
### Content Standards:
|
||||||
|
- **Length**: 350-450 words optimal
|
||||||
|
- **Structure**: Conventional 3-4 paragraph format
|
||||||
|
- **Tone**: Personal, direct, authentic - no corporate fluff
|
||||||
|
- **Focus**: Daily operations impact and specific value delivery
|
||||||
|
- **Evidence**: Quantifiable results and concrete examples
|
||||||
|
- **Alignment**: Clear connection to company culture and values
|
||||||
|
|
||||||
|
## Quality Standards & Success Metrics
|
||||||
|
|
||||||
|
### Mandatory Requirements:
|
||||||
|
- **Authentic Voice**: Reflects Leo's genuine enthusiasm and approach
|
||||||
|
- **Specific Value**: Clear daily operations contributions identified
|
||||||
|
- **Evidence-Based**: All claims supported by resume and research
|
||||||
|
- **Direct Communication**: Eliminates unnecessary corporate language
|
||||||
|
- **Complete Format**: Ready for immediate submission
|
||||||
|
|
||||||
|
### Success Metrics:
|
||||||
|
- Captures Leo's authentic voice and writing style from reference document
|
||||||
|
- Demonstrates clear understanding of role daily realities
|
||||||
|
- Shows specific ways Leo's skills solve their actual problems
|
||||||
|
- Reflects genuine interest in company and position using Leo's natural enthusiasm
|
||||||
|
- Positions Leo as ideal candidate through authentic differentiation
|
||||||
|
- Creates compelling case for interview invitation using proven successful approach
|
||||||
|
- Maintains Leo's characteristic warmth while staying professional
|
||||||
|
- Uses Leo's natural sentence flow and language patterns
|
||||||
|
|
||||||
|
## Operational Rules
|
||||||
|
1. **Authentic Voice Priority**: Must sound exactly like Leo based on reference document analysis
|
||||||
|
2. **Style Training Required**: Always access and analyze `Leonardo-Miranda_20250715_summitt-ops.docx` first
|
||||||
|
3. **Evidence-Based Claims**: Every statement must be verifiable against resume and research
|
||||||
|
4. **Natural Flow**: Use Leo's flowing, connected sentence structure
|
||||||
|
5. **Humble Confidence**: Balance expertise demonstration with genuine learning curiosity
|
||||||
|
6. **Strategic Integration**: Seamlessly incorporate Phase 1 and Phase 2 insights using Leo's voice
|
||||||
|
7. **Warm Professionalism**: Maintain Leo's characteristic warmth while staying professional
|
||||||
|
8. **Submission Ready**: Deliver complete, formatted document requiring no additional editing
|
||||||
|
|
||||||
|
**Success Definition**: A compelling, authentically Leo-voiced cover letter that demonstrates specific value contribution to daily operations while expressing genuine enthusiasm for learning and collaboration, written in Leo's proven successful style that has generated positive responses.
|
||||||
280
docs/ai/research_output_report.md
Normal file
280
docs/ai/research_output_report.md
Normal file
@@ -0,0 +1,280 @@
|
|||||||
|
# Job Application Research Report
|
||||||
|
|
||||||
|
## Executive Summary
|
||||||
|
**Candidate:** Leo Miranda
|
||||||
|
**Target Role:** [Position Title]
|
||||||
|
**Company:** [Company Name]
|
||||||
|
**Analysis Date:** [Date]
|
||||||
|
**Overall Fit Score:** [X/10]
|
||||||
|
**Recommendation:** [Proceed/Proceed with Caution/Reconsider]
|
||||||
|
|
||||||
|
**Key Takeaways:**
|
||||||
|
- **Primary Strength:** [Top competitive advantage]
|
||||||
|
- **Unique Value Proposition:** [What sets Leo apart]
|
||||||
|
- **Strategic Focus:** [Main positioning theme]
|
||||||
|
- **Potential Challenge:** [Primary gap to address]
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Source Documentation
|
||||||
|
|
||||||
|
### Variable 1: `original-job-description`
|
||||||
|
*Original job description with formatting improvements only - NO content changes*
|
||||||
|
|
||||||
|
```
|
||||||
|
[EXACT job description text as provided by user]
|
||||||
|
[Only formatting applied: bullet points, icons, spacing, headers for organization]
|
||||||
|
[NO words, phrases, or meaning altered]
|
||||||
|
|
||||||
|
📋 **Role Title:** [As stated in original]
|
||||||
|
|
||||||
|
🏢 **Company:** [As stated in original]
|
||||||
|
|
||||||
|
📍 **Location:** [As stated in original]
|
||||||
|
|
||||||
|
🔧 **Key Responsibilities:**
|
||||||
|
• [Original responsibility 1]
|
||||||
|
• [Original responsibility 2]
|
||||||
|
• [Original responsibility 3]
|
||||||
|
|
||||||
|
🎯 **Required Qualifications:**
|
||||||
|
• [Original qualification 1]
|
||||||
|
• [Original qualification 2]
|
||||||
|
• [Original qualification 3]
|
||||||
|
|
||||||
|
⭐ **Preferred Qualifications:**
|
||||||
|
• [Original preferred 1]
|
||||||
|
• [Original preferred 2]
|
||||||
|
|
||||||
|
💼 **Company Information:**
|
||||||
|
[Any company description as provided in original]
|
||||||
|
|
||||||
|
📝 **Additional Details:**
|
||||||
|
[Any other information from original posting]
|
||||||
|
```
|
||||||
|
|
||||||
|
### Variable 2: `research-final-version`
|
||||||
|
*Processed and categorized information for analysis*
|
||||||
|
|
||||||
|
**Extracted Core Elements:**
|
||||||
|
- **Company Profile:** [Analytical summary]
|
||||||
|
- **Role Level:** [Analyzed level and scope]
|
||||||
|
- **Technical Stack:** [Identified technologies]
|
||||||
|
- **Soft Skills:** [Communication, leadership requirements]
|
||||||
|
- **Experience Level:** [Years, background needed]
|
||||||
|
- **Team Context:** [Reporting structure, collaboration needs]
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 1. Job Description Analysis
|
||||||
|
|
||||||
|
### Company & Role Profile
|
||||||
|
**Company:** [Name and brief description]
|
||||||
|
**Department:** [Team/Division]
|
||||||
|
**Industry:** [Sector and market position]
|
||||||
|
**Role Level:** [Junior/Mid/Senior/Lead]
|
||||||
|
**Team Size:** [If specified]
|
||||||
|
**Reporting Structure:** [Manager title/department]
|
||||||
|
|
||||||
|
### Company Intelligence
|
||||||
|
**Recent Developments:**
|
||||||
|
- [Key news, funding, acquisitions, strategic initiatives]
|
||||||
|
|
||||||
|
**Company Culture Indicators:**
|
||||||
|
- [Values, work style, team dynamics from job posting and research]
|
||||||
|
|
||||||
|
**Industry Context:**
|
||||||
|
- [Market trends, competitive landscape, growth areas]
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 2. Requirements Analysis
|
||||||
|
|
||||||
|
### Technical Skills Assessment
|
||||||
|
| Required Skill | Skill Type | Explicitly Met? | Evidence Location | Strength Level | Strategic Notes |
|
||||||
|
|---|---|---|---|---|---|
|
||||||
|
| [Example: Python] | Technical | Yes | Data Science Projects | Strong | Core expertise, multiple implementations |
|
||||||
|
| [Example: SQL] | Technical | Yes | Summitt Energy role | Strong | Production database experience |
|
||||||
|
| [Example: Machine Learning] | Technical | Partial | Self-taught projects | Moderate | Strong foundation, can emphasize growth trajectory |
|
||||||
|
|
||||||
|
### Soft Skills Assessment
|
||||||
|
| Required Skill | Met? | Evidence Location | Demonstration Method |
|
||||||
|
|---|---|---|---|
|
||||||
|
| [Example: Leadership] | Yes | Startup Founder experience | Team building and project management |
|
||||||
|
| [Example: Communication] | Yes | Cross-departmental collaboration | Stakeholder presentation experience |
|
||||||
|
|
||||||
|
### Experience Requirements
|
||||||
|
| Requirement | Leo's Background | Gap Analysis | Positioning Strategy |
|
||||||
|
|---|---|---|---|
|
||||||
|
| [Example: 3+ years Data Science] | 2+ years practical experience | 1 year formal gap | Emphasize depth over duration, self-taught dedication |
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 3. Responsibilities Matching & Performance Analysis
|
||||||
|
|
||||||
|
| Job Responsibility | Direct Experience | Related Experience | Performance Capability (1-5) | Implementation Approach |
|
||||||
|
|---|---|---|---|---|
|
||||||
|
| [Example: Build ML models] | Yes - customer segmentation | Multiple personal projects | 4 | Leverage scikit-learn, pandas expertise for rapid prototyping |
|
||||||
|
| [Example: Database optimization] | Partial - query optimization | VPS performance tuning | 4 | Apply DevOps optimization mindset to database performance |
|
||||||
|
| [Example: Stakeholder reporting] | Yes - executive dashboards | Cross-departmental communication | 3 | Combine technical depth with business communication skills |
|
||||||
|
|
||||||
|
**Performance Capability Legend:**
|
||||||
|
- 5: Expert level, immediate impact
|
||||||
|
- 4: Proficient, minimal ramp-up
|
||||||
|
- 3: Competent, moderate learning
|
||||||
|
- 2: Developing, significant growth needed
|
||||||
|
- 1: Beginner, extensive training required
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 4. Strategic Skill Transferability Analysis
|
||||||
|
|
||||||
|
### Hidden Value Opportunities
|
||||||
|
**Automation Capabilities:**
|
||||||
|
- Job mentions: [Example: "streamline reporting processes"]
|
||||||
|
- Leo's advantage: Python automation, VBA scripting, and DevOps practices enable sophisticated solutions beyond standard tools
|
||||||
|
|
||||||
|
**Technical Infrastructure:**
|
||||||
|
- Job mentions: [Example: "manage data systems"]
|
||||||
|
- Leo's advantage: VPS/DevOps background provides infrastructure perspective often missing in pure data science roles
|
||||||
|
|
||||||
|
**Innovation Potential:**
|
||||||
|
- Job mentions: [Example: "improve data accuracy"]
|
||||||
|
- Leo's advantage: AI/ML expertise can introduce predictive validation and anomaly detection beyond traditional QA methods
|
||||||
|
|
||||||
|
### Cross-Domain Value Creation
|
||||||
|
| Job Area | Standard Approach | Leo's Enhanced Approach | Competitive Advantage |
|
||||||
|
|---|---|---|---|
|
||||||
|
| [Example: Data Analysis] | Excel/BI tools | Python automation + statistical modeling | Deeper insights, scalable solutions |
|
||||||
|
| [Example: System Integration] | Manual processes | DevOps automation + API development | Efficiency gains, reduced errors |
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 5. Keywords & Messaging Strategy
|
||||||
|
|
||||||
|
### Primary Keywords (Must Include)
|
||||||
|
- [List of critical terms from job description]
|
||||||
|
|
||||||
|
### Secondary Keywords (Should Include)
|
||||||
|
- [Supporting terms and industry language]
|
||||||
|
|
||||||
|
### Leo's Unique Keywords (Differentiators)
|
||||||
|
- [Technical terms that showcase Leo's unique skill combination]
|
||||||
|
|
||||||
|
### Messaging Themes
|
||||||
|
1. **Primary Theme:** [Main positioning message]
|
||||||
|
2. **Supporting Themes:** [2-3 additional value propositions]
|
||||||
|
3. **Proof Points:** [Specific achievements that support themes]
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 6. Competitive Positioning
|
||||||
|
|
||||||
|
### Leo's Unique Advantages
|
||||||
|
1. **[Advantage 1]:** [Description and impact]
|
||||||
|
2. **[Advantage 2]:** [Description and impact]
|
||||||
|
3. **[Advantage 3]:** [Description and impact]
|
||||||
|
|
||||||
|
### Potential Differentiators
|
||||||
|
- **Technical Depth:** [How Leo's technical skills exceed typical requirements]
|
||||||
|
- **Cross-Functional Value:** [How multiple skill areas create synergy]
|
||||||
|
- **Growth Trajectory:** [Self-taught journey demonstrates adaptability]
|
||||||
|
|
||||||
|
### Gap Mitigation Strategies
|
||||||
|
| Identified Gap | Mitigation Approach | Supporting Evidence |
|
||||||
|
|---|---|---|
|
||||||
|
| [Example: Formal ML education] | Emphasize practical application and continuous learning | Project portfolio, certifications, results achieved |
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 7. Application Strategy Recommendations
|
||||||
|
|
||||||
|
### Resume Optimization Priorities
|
||||||
|
1. **Lead with:** [Primary skill/experience to emphasize]
|
||||||
|
2. **Quantify:** [Specific achievements to highlight with metrics]
|
||||||
|
3. **Technical Focus:** [Key technologies to prominently feature]
|
||||||
|
4. **Experience Narrative:** [How to frame career progression]
|
||||||
|
|
||||||
|
### Cover Letter Strategy
|
||||||
|
1. **Opening Hook:** [Compelling way to start]
|
||||||
|
2. **Core Message:** [Central value proposition]
|
||||||
|
3. **Supporting Examples:** [2-3 specific achievements to highlight]
|
||||||
|
4. **Company Connection:** [How to demonstrate company-specific interest]
|
||||||
|
|
||||||
|
### Potential Red Flags to Address
|
||||||
|
- [Any concerns from gap analysis and how to proactively address them]
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 8. Phase 2 Handoff Information
|
||||||
|
|
||||||
|
### Resume Content Priorities (High to Low)
|
||||||
|
1. [Most important experiences/skills to feature prominently]
|
||||||
|
2. [Secondary content to include]
|
||||||
|
3. [Supporting content if space allows]
|
||||||
|
|
||||||
|
### Key Messages for Integration
|
||||||
|
- **Primary Value Prop:** [Main selling point]
|
||||||
|
- **Technical Emphasis:** [Technologies to highlight]
|
||||||
|
- **Achievement Focus:** [Quantifiable results to feature]
|
||||||
|
|
||||||
|
### Style Guidance
|
||||||
|
- **Tone:** [Professional, technical, innovative, etc.]
|
||||||
|
- **Emphasis:** [What aspects of background to stress]
|
||||||
|
- **Keywords:** [Critical terms for ATS optimization]
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 9. Research Quality Metrics
|
||||||
|
|
||||||
|
**Analysis Completeness:**
|
||||||
|
- Job requirements coverage: [X%]
|
||||||
|
- Skills assessment depth: [Comprehensive/Moderate/Basic]
|
||||||
|
- Company research depth: [Comprehensive/Moderate/Basic]
|
||||||
|
- Strategic insights quality: [High/Medium/Low]
|
||||||
|
|
||||||
|
**Evidence Base:**
|
||||||
|
- All assessments tied to resume evidence: [Yes/No]
|
||||||
|
- Transferability analysis completed: [Yes/No]
|
||||||
|
- Competitive advantages identified: [X advantages found]
|
||||||
|
|
||||||
|
**Source Documentation Quality:**
|
||||||
|
- Original job description preserved intact: [✅/❌]
|
||||||
|
- Formatting improvements applied appropriately: [✅/❌]
|
||||||
|
- Research version comprehensively categorized: [✅/❌]
|
||||||
|
- Cross-reference accuracy verified: [✅/❌]
|
||||||
|
|
||||||
|
**Readiness for Phase 2:**
|
||||||
|
- Clear content priorities established: [Yes/No]
|
||||||
|
- Strategic direction defined: [Yes/No]
|
||||||
|
- All handoff information complete: [Yes/No]
|
||||||
|
- Original source material available for reference: [✅/❌]
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 10. Final Validation Against Original Source
|
||||||
|
|
||||||
|
**Cross-Reference Check:**
|
||||||
|
- [ ] All analyzed requirements traced back to `original-job-description`
|
||||||
|
- [ ] No requirements missed or misinterpreted
|
||||||
|
- [ ] Analysis accurately reflects original posting intent
|
||||||
|
- [ ] Strategic recommendations align with actual job needs
|
||||||
|
|
||||||
|
**Original Source Integrity:**
|
||||||
|
- [ ] `original-job-description` contains exact text as provided
|
||||||
|
- [ ] Only formatting/organization improvements applied
|
||||||
|
- [ ] No content modifications or interpretations added
|
||||||
|
- [ ] Serves as reliable reference for future phases
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
**Phase 1 Status:** ✅ Complete
|
||||||
|
**Next Phase:** Resume Optimization
|
||||||
|
**Analyst:** Job Application Research Agent
|
||||||
|
**Review Required:** [Yes/No - pending user feedback]
|
||||||
|
|
||||||
|
**Documentation Archive:**
|
||||||
|
- ✅ `original-job-description` preserved and formatted
|
||||||
|
- ✅ `research-final-version` created and analyzed
|
||||||
|
- ✅ Strategic analysis completed
|
||||||
|
- ✅ Ready for Phase 2 handoff
|
||||||
172
docs/ai/research_prompt.md
Normal file
172
docs/ai/research_prompt.md
Normal file
@@ -0,0 +1,172 @@
|
|||||||
|
# Phase 1: Job Application Research Agent
|
||||||
|
|
||||||
|
You are an expert Job Application Research Agent, specialized in deep analysis of job descriptions and comprehensive candidate-role matching. You will conduct thorough research for Leo Miranda's job applications, leveraging his complete professional background and proven application strategies.
|
||||||
|
|
||||||
|
## Core Mission
|
||||||
|
Perform comprehensive research and analysis to understand job requirements, assess candidate fit, and identify strategic positioning opportunities for the application process.
|
||||||
|
|
||||||
|
## Available Resources
|
||||||
|
- **'complete_resume'**: Leo's comprehensive professional experience document
|
||||||
|
- **Files starting with 'Leonardo-Miranda'**: Past successful job applications for style and approach analysis
|
||||||
|
- **Web search capabilities**: For company research and job posting analysis
|
||||||
|
- **Leo's professional context**: Neurodivergent data scientist, Toronto-based, expertise in VPS/DevOps/AI web apps, Raspberry Pi enthusiast
|
||||||
|
|
||||||
|
## Research Workflow
|
||||||
|
|
||||||
|
### Step 1: Job Description Processing & Variable Creation
|
||||||
|
|
||||||
|
**CRITICAL FIRST STEP - Create Two Required Variables:**
|
||||||
|
|
||||||
|
**Variable 1: `original-job-description`**
|
||||||
|
- Capture the EXACT job description text as provided by the user
|
||||||
|
- Make NO content changes whatsoever - preserve every word, phrase, and detail
|
||||||
|
- ONLY apply formatting improvements:
|
||||||
|
- Clean up spacing and line breaks
|
||||||
|
- Add bullet points for better readability
|
||||||
|
- Add relevant icons (📋 for responsibilities, 🔧 for technical skills, etc.)
|
||||||
|
- Organize sections with headers if structure is unclear
|
||||||
|
- Fix obvious formatting issues (missing line breaks, inconsistent spacing)
|
||||||
|
- **RULE: Original meaning and text must remain 100% intact**
|
||||||
|
|
||||||
|
**Variable 2: `research-final-version`**
|
||||||
|
- This will contain your analytical processing and categorization
|
||||||
|
- Extract and organize information for analysis purposes
|
||||||
|
- This is where you apply your analytical framework
|
||||||
|
|
||||||
|
**Job Description Acquisition:**
|
||||||
|
**If URL provided:**
|
||||||
|
1. Use web_search to access and analyze the job posting
|
||||||
|
2. If link is inaccessible or insufficient, request full content from Leo
|
||||||
|
3. Create both variables from the acquired content
|
||||||
|
|
||||||
|
**If content provided directly:**
|
||||||
|
1. Create `original-job-description` with formatting-only improvements
|
||||||
|
2. Create `research-final-version` with analytical processing
|
||||||
|
3. Confirm completeness and request missing sections if needed
|
||||||
|
|
||||||
|
**Analysis Framework for `research-final-version`:**
|
||||||
|
- **Company/Department Profile**: Mission, culture, team structure, recent news
|
||||||
|
- **Role Definition**: Title, level, reporting structure, team dynamics
|
||||||
|
- **Core Responsibilities**: Primary duties, expected outcomes, project types
|
||||||
|
- **Technical Requirements**: Hard skills, tools, technologies, methodologies
|
||||||
|
- **Soft Skills**: Communication, leadership, collaboration requirements
|
||||||
|
- **Experience Criteria**: Years, industries, specific background preferences
|
||||||
|
- **Keywords Extraction**: Critical terms, buzzwords, industry language
|
||||||
|
- **Implicit Requirements**: Underlying expectations, cultural fit indicators
|
||||||
|
|
||||||
|
### Step 2: Comprehensive Skills Assessment
|
||||||
|
**Action:** Access 'complete_resume' via google_drive_search
|
||||||
|
|
||||||
|
Create detailed skills assessment table:
|
||||||
|
|
||||||
|
| Required Skill | Skill Type | Explicitly Met? | Evidence Location | Strength Level | Transferability Notes |
|
||||||
|
|---|---|---|---|---|---|
|
||||||
|
| [Skill] | Technical/Soft/Domain | Yes/Partial/No | [Resume Section] | Strong/Moderate/Developing | [How related skills apply] |
|
||||||
|
|
||||||
|
**Assessment Criteria:**
|
||||||
|
- **Explicitly Met**: Direct match found in resume
|
||||||
|
- **Partial**: Related experience that could transfer
|
||||||
|
- **Transferability Notes**: How Leo's adjacent skills could fulfill this requirement
|
||||||
|
|
||||||
|
### Step 3: Responsibilities Matching & Performance Analysis
|
||||||
|
Create comprehensive responsibilities analysis:
|
||||||
|
|
||||||
|
| Job Responsibility | Direct Experience | Related Experience | Performance Capability | Implementation Approach |
|
||||||
|
|---|---|---|---|---|
|
||||||
|
| [Responsibility] | Yes/No | [Description] | [1-5 scale] | [How Leo would execute this] |
|
||||||
|
|
||||||
|
**Performance Capability Scale:**
|
||||||
|
- 5: Expert level, immediate impact
|
||||||
|
- 4: Proficient, minimal ramp-up time
|
||||||
|
- 3: Competent, moderate learning curve
|
||||||
|
- 2: Developing, significant growth needed
|
||||||
|
- 1: Beginner, extensive training required
|
||||||
|
|
||||||
|
**Implementation Approach Examples:**
|
||||||
|
- "Could leverage Python automation skills for manual process optimization"
|
||||||
|
- "VPS/DevOps background enables infrastructure scaling responsibilities"
|
||||||
|
- "Data science expertise translates to business intelligence requirements"
|
||||||
|
|
||||||
|
### Step 4: Strategic Skill Transferability Analysis
|
||||||
|
**NEW REQUIREMENT**: Analyze how Leo's unique skill combination can address job requirements creatively:
|
||||||
|
|
||||||
|
**Hidden Value Opportunities:**
|
||||||
|
- Identify responsibilities that don't specify technical approaches
|
||||||
|
- Map Leo's technical skills to unspecified implementation methods
|
||||||
|
- Highlight cross-functional capabilities that exceed basic requirements
|
||||||
|
|
||||||
|
**Example Analysis:**
|
||||||
|
```
|
||||||
|
Job Requirement: "Automate reporting processes"
|
||||||
|
Leo's Advantage: "While job doesn't specify programming languages, Leo's Python expertise with pandas, SQL integration, and VBA skills enable sophisticated automation solutions beyond basic tools"
|
||||||
|
```
|
||||||
|
|
||||||
|
### Step 5: Company Intelligence Gathering
|
||||||
|
**Action:** Use web_search for company research
|
||||||
|
- Recent company news and developments
|
||||||
|
- Industry position and competitive landscape
|
||||||
|
- Company culture indicators from public content
|
||||||
|
- Leadership team background
|
||||||
|
- Recent initiatives or strategic directions
|
||||||
|
|
||||||
|
### Step 6: Competitive Positioning Analysis
|
||||||
|
**Determine Leo's unique value proposition:**
|
||||||
|
- Skill combinations that differentiate from typical candidates
|
||||||
|
- Experience intersections that solve multiple job requirements
|
||||||
|
- Technical depth that enables innovation beyond standard approaches
|
||||||
|
- Cross-domain expertise advantages
|
||||||
|
|
||||||
|
### Step 7: Application Strategy Recommendations
|
||||||
|
**Based on complete analysis:**
|
||||||
|
- Primary positioning strategy (how to present Leo's candidacy)
|
||||||
|
- Key messaging themes for resume and cover letter
|
||||||
|
- Specific achievements to emphasize
|
||||||
|
- Potential concerns to address proactively
|
||||||
|
- Unique value propositions to highlight
|
||||||
|
|
||||||
|
## Quality Standards
|
||||||
|
- **Accuracy**: All assessments must be evidence-based from resume content
|
||||||
|
- **Depth**: Go beyond surface-level matching to find strategic advantages
|
||||||
|
- **Specificity**: Provide concrete examples and implementation approaches
|
||||||
|
- **Honesty**: Acknowledge gaps while highlighting transferable strengths
|
||||||
|
- **Strategic**: Focus on positioning for maximum competitive advantage
|
||||||
|
|
||||||
|
## Output Requirements
|
||||||
|
Generate comprehensive research report using the standardized output format, ensuring:
|
||||||
|
|
||||||
|
**MANDATORY Variable Inclusion:**
|
||||||
|
1. **`original-job-description`**: Must be included in "Source Documentation" section
|
||||||
|
- Preserve 100% of original text content
|
||||||
|
- Apply ONLY formatting improvements (bullets, icons, spacing, headers)
|
||||||
|
- Serve as reference point for all analysis
|
||||||
|
|
||||||
|
2. **`research-final-version`**: Include in "Source Documentation" section
|
||||||
|
- Show your analytical processing and categorization
|
||||||
|
- Extract key elements for systematic analysis
|
||||||
|
- Cross-reference with original to ensure nothing is missed
|
||||||
|
|
||||||
|
**Documentation Standards:**
|
||||||
|
- Both variables must be clearly labeled and separated
|
||||||
|
- Original text integrity is paramount - any modifications beyond formatting are strictly prohibited
|
||||||
|
- Final report should seamlessly reference both versions
|
||||||
|
- All analysis must be traceable back to original source material
|
||||||
|
|
||||||
|
All other analysis documented for seamless handoff to Phase 2 (Resume Optimization).
|
||||||
|
|
||||||
|
## Operational Rules
|
||||||
|
1. **Evidence-Based**: Every assessment must reference specific resume content
|
||||||
|
2. **No Fabrication**: Never invent experiences or capabilities
|
||||||
|
3. **Original Preservation**: `original-job-description` must remain content-identical to user input
|
||||||
|
4. **Strategic Focus**: Emphasize competitive advantages and unique value
|
||||||
|
5. **Transferability**: Actively look for skill applications beyond obvious matches
|
||||||
|
6. **Completeness**: Address every significant job requirement
|
||||||
|
7. **Dual Documentation**: Always maintain both original and processed versions
|
||||||
|
8. **User Feedback**: Present findings for Leo's review and input before finalizing
|
||||||
|
|
||||||
|
**Success Metrics:**
|
||||||
|
- Complete coverage of all job requirements
|
||||||
|
- Strategic positioning identified
|
||||||
|
- Transferable skills mapped effectively
|
||||||
|
- Original job description perfectly preserved
|
||||||
|
- Clear handoff documentation for Phase 2
|
||||||
|
- Actionable insights for application strategy
|
||||||
651
docs/api_specification.md
Normal file
651
docs/api_specification.md
Normal file
@@ -0,0 +1,651 @@
|
|||||||
|
# Job Forge - FastAPI Web Application API Specification
|
||||||
|
|
||||||
|
**Version:** 1.0.0 Prototype
|
||||||
|
**Base URL:** `http://localhost:8000` (Development), `https://yourdomain.com` (Production)
|
||||||
|
**Target Audience:** Full-Stack Developers and API Consumers
|
||||||
|
**Last Updated:** August 2025
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🔐 Authentication
|
||||||
|
|
||||||
|
### Overview
|
||||||
|
- **Method:** JWT Bearer tokens
|
||||||
|
- **Token Expiry:** 24 hours (configurable)
|
||||||
|
- **Refresh:** Token refresh endpoint available
|
||||||
|
- **Header Format:** `Authorization: Bearer <jwt_token>`
|
||||||
|
- **Security:** HTTPS required in production
|
||||||
|
|
||||||
|
### Authentication Endpoints
|
||||||
|
|
||||||
|
#### POST /api/v1/auth/register
|
||||||
|
Register new user account.
|
||||||
|
|
||||||
|
**Request:**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"email": "user@example.com",
|
||||||
|
"password": "SecurePass123!",
|
||||||
|
"first_name": "John",
|
||||||
|
"last_name": "Doe"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Response (201):**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"user": {
|
||||||
|
"id": "123e4567-e89b-12d3-a456-426614174000",
|
||||||
|
"email": "user@example.com",
|
||||||
|
"first_name": "John",
|
||||||
|
"last_name": "Doe",
|
||||||
|
"is_active": true,
|
||||||
|
"created_at": "2025-08-02T10:00:00Z"
|
||||||
|
},
|
||||||
|
"access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
|
||||||
|
"token_type": "bearer",
|
||||||
|
"expires_in": 86400
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Errors:**
|
||||||
|
- `400` - Invalid email format or weak password
|
||||||
|
- `409` - Email already registered
|
||||||
|
|
||||||
|
#### POST /api/v1/auth/login
|
||||||
|
Authenticate user and return JWT token.
|
||||||
|
|
||||||
|
**Request:**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"email": "user@example.com",
|
||||||
|
"password": "SecurePass123!"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Response (200):**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"user": {
|
||||||
|
"id": "123e4567-e89b-12d3-a456-426614174000",
|
||||||
|
"email": "user@example.com",
|
||||||
|
"first_name": "John",
|
||||||
|
"last_name": "Doe",
|
||||||
|
"is_active": true,
|
||||||
|
"created_at": "2025-08-02T10:00:00Z"
|
||||||
|
},
|
||||||
|
"access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
|
||||||
|
"token_type": "bearer",
|
||||||
|
"expires_in": 86400
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Errors:**
|
||||||
|
- `401` - Invalid credentials
|
||||||
|
- `400` - Missing email or password
|
||||||
|
|
||||||
|
#### GET /api/v1/auth/me
|
||||||
|
Get current user profile (requires authentication).
|
||||||
|
|
||||||
|
**Headers:** `Authorization: Bearer <token>`
|
||||||
|
|
||||||
|
**Response (200):**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"id": "123e4567-e89b-12d3-a456-426614174000",
|
||||||
|
"email": "user@example.com",
|
||||||
|
"first_name": "John",
|
||||||
|
"last_name": "Doe",
|
||||||
|
"is_active": true,
|
||||||
|
"created_at": "2025-08-02T10:00:00Z",
|
||||||
|
"updated_at": "2025-08-02T10:00:00Z"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Errors:**
|
||||||
|
- `401` - Invalid or expired token
|
||||||
|
|
||||||
|
#### POST /api/v1/auth/refresh
|
||||||
|
Refresh JWT access token.
|
||||||
|
|
||||||
|
**Headers:** `Authorization: Bearer <token>`
|
||||||
|
|
||||||
|
**Response (200):**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
|
||||||
|
"token_type": "bearer",
|
||||||
|
"expires_in": 86400
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Errors:**
|
||||||
|
- `401` - Invalid or expired token
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📋 Applications API
|
||||||
|
|
||||||
|
### Application Model
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"id": "123e4567-e89b-12d3-a456-426614174000",
|
||||||
|
"name": "google_senior_developer_2025_07_01",
|
||||||
|
"company_name": "Google",
|
||||||
|
"role_title": "Senior Developer",
|
||||||
|
"job_url": "https://careers.google.com/jobs/123",
|
||||||
|
"job_description": "We are looking for...",
|
||||||
|
"location": "Toronto, ON",
|
||||||
|
"priority_level": "high",
|
||||||
|
"status": "draft",
|
||||||
|
"research_completed": false,
|
||||||
|
"resume_optimized": false,
|
||||||
|
"cover_letter_generated": false,
|
||||||
|
"created_at": "2025-07-01T10:00:00Z",
|
||||||
|
"updated_at": "2025-07-01T10:00:00Z"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Application Endpoints
|
||||||
|
|
||||||
|
#### POST /api/v1/applications
|
||||||
|
Create new job application.
|
||||||
|
|
||||||
|
**Headers:** `Authorization: Bearer <token>`
|
||||||
|
|
||||||
|
**Request:**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"company_name": "Google",
|
||||||
|
"role_title": "Senior Developer",
|
||||||
|
"job_description": "We are looking for an experienced developer...",
|
||||||
|
"job_url": "https://careers.google.com/jobs/123",
|
||||||
|
"location": "Toronto, ON",
|
||||||
|
"priority_level": "high",
|
||||||
|
"additional_context": "Found through LinkedIn, know someone there"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Response (201):**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"id": "123e4567-e89b-12d3-a456-426614174000",
|
||||||
|
"name": "google_senior_developer_2025_07_01",
|
||||||
|
"company_name": "Google",
|
||||||
|
"role_title": "Senior Developer",
|
||||||
|
"job_url": "https://careers.google.com/jobs/123",
|
||||||
|
"job_description": "We are looking for an experienced developer...",
|
||||||
|
"location": "Toronto, ON",
|
||||||
|
"priority_level": "high",
|
||||||
|
"status": "draft",
|
||||||
|
"research_completed": false,
|
||||||
|
"resume_optimized": false,
|
||||||
|
"cover_letter_generated": false,
|
||||||
|
"created_at": "2025-07-01T10:00:00Z",
|
||||||
|
"updated_at": "2025-07-01T10:00:00Z"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Validation Rules:**
|
||||||
|
- `company_name`: Required, 1-255 characters
|
||||||
|
- `role_title`: Required, 1-255 characters
|
||||||
|
- `job_description`: Required, minimum 50 characters
|
||||||
|
- `job_url`: Optional, valid URL format
|
||||||
|
- `priority_level`: Optional, enum: `low|medium|high`
|
||||||
|
|
||||||
|
**Errors:**
|
||||||
|
- `400` - Validation errors
|
||||||
|
- `401` - Unauthorized
|
||||||
|
|
||||||
|
#### GET /api/v1/applications
|
||||||
|
List user's applications.
|
||||||
|
|
||||||
|
**Headers:** `Authorization: Bearer <token>`
|
||||||
|
|
||||||
|
**Query Parameters:**
|
||||||
|
- `status`: Filter by status (optional)
|
||||||
|
- `priority`: Filter by priority level (optional)
|
||||||
|
- `limit`: Number of results (default: 50, max: 100)
|
||||||
|
- `offset`: Pagination offset (default: 0)
|
||||||
|
|
||||||
|
**Response (200):**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"applications": [
|
||||||
|
{
|
||||||
|
"id": "123e4567-e89b-12d3-a456-426614174000",
|
||||||
|
"name": "google_senior_developer_2025_07_01",
|
||||||
|
"company_name": "Google",
|
||||||
|
"role_title": "Senior Developer",
|
||||||
|
"status": "research_complete",
|
||||||
|
"priority_level": "high",
|
||||||
|
"research_completed": true,
|
||||||
|
"resume_optimized": false,
|
||||||
|
"cover_letter_generated": false,
|
||||||
|
"created_at": "2025-07-01T10:00:00Z",
|
||||||
|
"updated_at": "2025-07-01T11:30:00Z"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"total": 1,
|
||||||
|
"limit": 50,
|
||||||
|
"offset": 0
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
#### GET /api/v1/applications/{application_id}
|
||||||
|
Get specific application details.
|
||||||
|
|
||||||
|
**Headers:** `Authorization: Bearer <token>`
|
||||||
|
|
||||||
|
**Response (200):** Full application object (see Application Model above)
|
||||||
|
|
||||||
|
**Errors:**
|
||||||
|
- `404` - Application not found or not owned by user
|
||||||
|
- `401` - Unauthorized
|
||||||
|
|
||||||
|
#### PUT /api/v1/applications/{application_id}
|
||||||
|
Update application details.
|
||||||
|
|
||||||
|
**Headers:** `Authorization: Bearer <token>`
|
||||||
|
|
||||||
|
**Request:**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"company_name": "Google Inc.",
|
||||||
|
"location": "Toronto, ON, Canada",
|
||||||
|
"priority_level": "medium"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Response (200):** Updated application object
|
||||||
|
|
||||||
|
**Errors:**
|
||||||
|
- `404` - Application not found
|
||||||
|
- `400` - Validation errors
|
||||||
|
- `401` - Unauthorized
|
||||||
|
|
||||||
|
#### DELETE /api/v1/applications/{application_id}
|
||||||
|
Delete application and all associated documents.
|
||||||
|
|
||||||
|
**Headers:** `Authorization: Bearer <token>`
|
||||||
|
|
||||||
|
**Response (204):** No content
|
||||||
|
|
||||||
|
**Errors:**
|
||||||
|
- `404` - Application not found
|
||||||
|
- `401` - Unauthorized
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📄 Documents API
|
||||||
|
|
||||||
|
### Document Model
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"id": "123e4567-e89b-12d3-a456-426614174000",
|
||||||
|
"application_id": "456e7890-e89b-12d3-a456-426614174000",
|
||||||
|
"document_type": "research_report",
|
||||||
|
"content": "# Research Report\n\n## Job Analysis\n...",
|
||||||
|
"created_at": "2025-07-01T10:30:00Z",
|
||||||
|
"updated_at": "2025-07-01T10:30:00Z"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Document Endpoints
|
||||||
|
|
||||||
|
#### GET /api/v1/applications/{application_id}/documents
|
||||||
|
Get all documents for an application.
|
||||||
|
|
||||||
|
**Headers:** `Authorization: Bearer <token>`
|
||||||
|
|
||||||
|
**Response (200):**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"research_report": {
|
||||||
|
"id": "123e4567-e89b-12d3-a456-426614174000",
|
||||||
|
"content": "# Research Report\n\n## Job Analysis\n...",
|
||||||
|
"created_at": "2025-07-01T10:30:00Z",
|
||||||
|
"updated_at": "2025-07-01T10:30:00Z"
|
||||||
|
},
|
||||||
|
"optimized_resume": {
|
||||||
|
"id": "234e5678-e89b-12d3-a456-426614174000",
|
||||||
|
"content": "# John Doe\n\n## Experience\n...",
|
||||||
|
"created_at": "2025-07-01T11:00:00Z",
|
||||||
|
"updated_at": "2025-07-01T11:00:00Z"
|
||||||
|
},
|
||||||
|
"cover_letter": null
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
#### GET /api/v1/applications/{application_id}/documents/{document_type}
|
||||||
|
Get specific document.
|
||||||
|
|
||||||
|
**Headers:** `Authorization: Bearer <token>`
|
||||||
|
|
||||||
|
**URL Parameters:**
|
||||||
|
- `document_type`: enum: `research_report|optimized_resume|cover_letter`
|
||||||
|
|
||||||
|
**Response (200):**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"id": "123e4567-e89b-12d3-a456-426614174000",
|
||||||
|
"application_id": "456e7890-e89b-12d3-a456-426614174000",
|
||||||
|
"document_type": "research_report",
|
||||||
|
"content": "# Research Report\n\n## Job Analysis\n...",
|
||||||
|
"created_at": "2025-07-01T10:30:00Z",
|
||||||
|
"updated_at": "2025-07-01T10:30:00Z"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Errors:**
|
||||||
|
- `404` - Document not found or application not owned by user
|
||||||
|
|
||||||
|
#### PUT /api/v1/applications/{application_id}/documents/{document_type}
|
||||||
|
Update document content (user editing).
|
||||||
|
|
||||||
|
**Headers:** `Authorization: Bearer <token>`
|
||||||
|
|
||||||
|
**Request:**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"content": "# Updated Research Report\n\n## Job Analysis\nUpdated content..."
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Response (200):** Updated document object
|
||||||
|
|
||||||
|
**Validation:**
|
||||||
|
- `content`: Required, minimum 10 characters
|
||||||
|
|
||||||
|
**Errors:**
|
||||||
|
- `404` - Document or application not found
|
||||||
|
- `400` - Validation errors
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🤖 AI Processing API
|
||||||
|
|
||||||
|
### Processing Status Model
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"application_id": "123e4567-e89b-12d3-a456-426614174000",
|
||||||
|
"current_phase": "research",
|
||||||
|
"status": "processing",
|
||||||
|
"progress": 0.6,
|
||||||
|
"estimated_completion": "2025-07-01T10:35:00Z",
|
||||||
|
"error_message": null
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Processing Endpoints
|
||||||
|
|
||||||
|
#### POST /api/v1/processing/applications/{application_id}/research
|
||||||
|
Start research phase processing.
|
||||||
|
|
||||||
|
**Headers:** `Authorization: Bearer <token>`
|
||||||
|
|
||||||
|
**Response (202):**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"message": "Research phase started",
|
||||||
|
"application_id": "123e4567-e89b-12d3-a456-426614174000",
|
||||||
|
"estimated_completion": "2025-07-01T10:35:00Z"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Errors:**
|
||||||
|
- `404` - Application not found
|
||||||
|
- `409` - Research already completed
|
||||||
|
- `400` - Application not in correct state
|
||||||
|
|
||||||
|
#### POST /api/v1/processing/applications/{application_id}/resume
|
||||||
|
Start resume optimization phase.
|
||||||
|
|
||||||
|
**Headers:** `Authorization: Bearer <token>`
|
||||||
|
|
||||||
|
**Requirements:** Research phase must be completed
|
||||||
|
|
||||||
|
**Response (202):**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"message": "Resume optimization started",
|
||||||
|
"application_id": "123e4567-e89b-12d3-a456-426614174000",
|
||||||
|
"estimated_completion": "2025-07-01T11:05:00Z"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Errors:**
|
||||||
|
- `404` - Application not found
|
||||||
|
- `409` - Resume already optimized
|
||||||
|
- `412` - Research phase not completed
|
||||||
|
|
||||||
|
#### POST /api/v1/processing/applications/{application_id}/cover-letter
|
||||||
|
Start cover letter generation phase.
|
||||||
|
|
||||||
|
**Headers:** `Authorization: Bearer <token>`
|
||||||
|
|
||||||
|
**Request:**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"additional_context": "I'm particularly interested in their AI/ML projects. I have experience with TensorFlow and PyTorch."
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Requirements:** Resume optimization must be completed
|
||||||
|
|
||||||
|
**Response (202):**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"message": "Cover letter generation started",
|
||||||
|
"application_id": "123e4567-e89b-12d3-a456-426614174000",
|
||||||
|
"estimated_completion": "2025-07-01T11:15:00Z"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Errors:**
|
||||||
|
- `404` - Application not found
|
||||||
|
- `409` - Cover letter already generated
|
||||||
|
- `412` - Resume optimization not completed
|
||||||
|
|
||||||
|
#### GET /api/v1/processing/applications/{application_id}/status
|
||||||
|
Get current processing status.
|
||||||
|
|
||||||
|
**Headers:** `Authorization: Bearer <token>`
|
||||||
|
|
||||||
|
**Response (200):**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"application_id": "123e4567-e89b-12d3-a456-426614174000",
|
||||||
|
"current_phase": "resume",
|
||||||
|
"status": "completed",
|
||||||
|
"progress": 1.0,
|
||||||
|
"completed_at": "2025-07-01T11:05:00Z",
|
||||||
|
"error_message": null
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Status Values:**
|
||||||
|
- `idle` - No processing active
|
||||||
|
- `processing` - AI generation in progress
|
||||||
|
- `completed` - Phase completed successfully
|
||||||
|
- `failed` - Processing failed with error
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 👤 User Resumes API
|
||||||
|
|
||||||
|
### Resume Model
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"id": "123e4567-e89b-12d3-a456-426614174000",
|
||||||
|
"name": "Technical Resume",
|
||||||
|
"content": "# John Doe\n\n## Technical Skills\n...",
|
||||||
|
"focus_area": "software_development",
|
||||||
|
"is_primary": true,
|
||||||
|
"created_at": "2025-07-01T09:00:00Z",
|
||||||
|
"updated_at": "2025-07-01T09:00:00Z"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Resume Endpoints
|
||||||
|
|
||||||
|
#### GET /api/v1/resumes
|
||||||
|
Get user's resume library.
|
||||||
|
|
||||||
|
**Headers:** `Authorization: Bearer <token>`
|
||||||
|
|
||||||
|
**Response (200):**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"resumes": [
|
||||||
|
{
|
||||||
|
"id": "123e4567-e89b-12d3-a456-426614174000",
|
||||||
|
"name": "Technical Resume",
|
||||||
|
"focus_area": "software_development",
|
||||||
|
"is_primary": true,
|
||||||
|
"created_at": "2025-07-01T09:00:00Z",
|
||||||
|
"updated_at": "2025-07-01T09:00:00Z"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
#### POST /api/v1/resumes
|
||||||
|
Upload new resume to library.
|
||||||
|
|
||||||
|
**Headers:** `Authorization: Bearer <token>`
|
||||||
|
|
||||||
|
**Request:**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"name": "Management Resume",
|
||||||
|
"content": "# John Doe\n\n## Leadership Experience\n...",
|
||||||
|
"focus_area": "management",
|
||||||
|
"is_primary": false
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Response (201):** Created resume object
|
||||||
|
|
||||||
|
**Validation:**
|
||||||
|
- `name`: Required, 1-255 characters
|
||||||
|
- `content`: Required, minimum 100 characters
|
||||||
|
- `focus_area`: Optional, enum: `software_development|management|data_science|consulting|other`
|
||||||
|
|
||||||
|
#### GET /api/v1/resumes/{resume_id}
|
||||||
|
Get specific resume details.
|
||||||
|
|
||||||
|
**Headers:** `Authorization: Bearer <token>`
|
||||||
|
|
||||||
|
**Response (200):** Full resume object
|
||||||
|
|
||||||
|
#### PUT /api/v1/resumes/{resume_id}
|
||||||
|
Update resume content.
|
||||||
|
|
||||||
|
**Headers:** `Authorization: Bearer <token>`
|
||||||
|
|
||||||
|
**Request:** Same as POST
|
||||||
|
|
||||||
|
**Response (200):** Updated resume object
|
||||||
|
|
||||||
|
#### DELETE /api/v1/resumes/{resume_id}
|
||||||
|
Delete resume from library.
|
||||||
|
|
||||||
|
**Headers:** `Authorization: Bearer <token>`
|
||||||
|
|
||||||
|
**Response (204):** No content
|
||||||
|
|
||||||
|
**Errors:**
|
||||||
|
- `409` - Cannot delete primary resume if it's the only one
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🚨 Error Handling
|
||||||
|
|
||||||
|
### Standard Error Response
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"error": {
|
||||||
|
"code": "VALIDATION_ERROR",
|
||||||
|
"message": "Invalid input data",
|
||||||
|
"details": {
|
||||||
|
"company_name": ["This field is required"],
|
||||||
|
"job_description": ["Must be at least 50 characters"]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"timestamp": "2025-07-01T10:00:00Z",
|
||||||
|
"path": "/api/v1/applications"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### HTTP Status Codes
|
||||||
|
- `200` - Success
|
||||||
|
- `201` - Created successfully
|
||||||
|
- `202` - Accepted (async processing started)
|
||||||
|
- `204` - No content (successful deletion)
|
||||||
|
- `400` - Bad request (validation errors)
|
||||||
|
- `401` - Unauthorized (invalid/missing token)
|
||||||
|
- `403` - Forbidden (valid token, insufficient permissions)
|
||||||
|
- `404` - Not found
|
||||||
|
- `409` - Conflict (duplicate email, invalid state transition)
|
||||||
|
- `412` - Precondition failed (phase not completed)
|
||||||
|
- `422` - Unprocessable entity (semantic errors)
|
||||||
|
- `500` - Internal server error
|
||||||
|
|
||||||
|
### Error Codes
|
||||||
|
- `VALIDATION_ERROR` - Input validation failed
|
||||||
|
- `AUTHENTICATION_ERROR` - Invalid credentials
|
||||||
|
- `AUTHORIZATION_ERROR` - Insufficient permissions
|
||||||
|
- `NOT_FOUND` - Resource not found
|
||||||
|
- `DUPLICATE_RESOURCE` - Resource already exists
|
||||||
|
- `INVALID_STATE` - Operation not valid for current state
|
||||||
|
- `EXTERNAL_API_ERROR` - Claude/OpenAI API error
|
||||||
|
- `PROCESSING_ERROR` - AI processing failed
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🔧 Development Notes
|
||||||
|
|
||||||
|
### Environment Configuration
|
||||||
|
- Development: `http://localhost:8000`
|
||||||
|
- Production: HTTPS required with proper SSL certificates
|
||||||
|
- Environment variables for API keys and database connections
|
||||||
|
|
||||||
|
### Rate Limiting
|
||||||
|
- Implemented for AI endpoints to prevent abuse
|
||||||
|
- Authentication endpoints have basic rate limiting
|
||||||
|
- Configurable limits based on deployment environment
|
||||||
|
|
||||||
|
### Pagination
|
||||||
|
- Default limit: 50
|
||||||
|
- Maximum limit: 100
|
||||||
|
- Use `offset` for pagination
|
||||||
|
- Consider cursor-based pagination for future versions
|
||||||
|
|
||||||
|
### Content Validation
|
||||||
|
- Job description: 50-10000 characters
|
||||||
|
- Resume content: 100-50000 characters
|
||||||
|
- Names: 1-255 characters
|
||||||
|
- URLs: Valid HTTP/HTTPS format
|
||||||
|
- Email: RFC 5322 compliant
|
||||||
|
|
||||||
|
### Background Processing
|
||||||
|
- AI operations run asynchronously via background tasks
|
||||||
|
- Use `/processing/applications/{id}/status` to check progress
|
||||||
|
- Frontend should poll every 2-3 seconds during processing
|
||||||
|
- Proper error handling and retry mechanisms implemented
|
||||||
|
|
||||||
|
### Security Considerations
|
||||||
|
- JWT tokens signed with secure secret keys
|
||||||
|
- Row Level Security (RLS) enforced at database level
|
||||||
|
- Input validation and sanitization on all endpoints
|
||||||
|
- CORS properly configured for web application
|
||||||
|
|
||||||
|
### Monitoring and Logging
|
||||||
|
- Structured logging with request IDs
|
||||||
|
- Performance monitoring for AI service calls
|
||||||
|
- Error tracking and alerting configured
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
*This API specification covers all endpoints for the Job Forge web application. Interactive API documentation is available at `/docs` (Swagger UI) and `/redoc` (ReDoc) for development and testing.*
|
||||||
616
docs/database_design.md
Normal file
616
docs/database_design.md
Normal file
@@ -0,0 +1,616 @@
|
|||||||
|
# Job Forge - Database Design & Schema
|
||||||
|
|
||||||
|
**Version:** 1.0.0 Prototype
|
||||||
|
**Database:** PostgreSQL 16 with pgvector
|
||||||
|
**Target Audience:** Full-Stack Developers
|
||||||
|
**Last Updated:** August 2025
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🎯 Database Overview
|
||||||
|
|
||||||
|
### Technology Stack
|
||||||
|
- **Database:** PostgreSQL 16
|
||||||
|
- **Extensions:** pgvector (for AI embeddings), uuid-ossp (for UUID generation)
|
||||||
|
- **Security:** Row Level Security (RLS) for multi-tenant architecture
|
||||||
|
- **Connection:** AsyncPG with SQLAlchemy 2.0 async ORM
|
||||||
|
- **Migrations:** Alembic for database schema versioning
|
||||||
|
|
||||||
|
### Design Principles
|
||||||
|
- **Multi-Tenancy:** Complete data isolation between users via RLS
|
||||||
|
- **Data Integrity:** Foreign key constraints and comprehensive validation
|
||||||
|
- **Performance:** Strategic indexes for query optimization
|
||||||
|
- **Security:** Defense-in-depth with RLS policies and input validation
|
||||||
|
- **Scalability:** Schema designed for horizontal scaling and future features
|
||||||
|
- **Maintainability:** Clear naming conventions and well-documented structure
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📊 Entity Relationship Diagram
|
||||||
|
|
||||||
|
```mermaid
|
||||||
|
erDiagram
|
||||||
|
USERS ||--o{ APPLICATIONS : creates
|
||||||
|
USERS ||--o{ USER_RESUMES : owns
|
||||||
|
APPLICATIONS ||--o{ DOCUMENTS : contains
|
||||||
|
DOCUMENTS ||--o| DOCUMENT_EMBEDDINGS : has_embedding
|
||||||
|
|
||||||
|
USERS {
|
||||||
|
uuid id PK
|
||||||
|
varchar email UK
|
||||||
|
varchar password_hash
|
||||||
|
varchar first_name
|
||||||
|
varchar last_name
|
||||||
|
boolean is_active
|
||||||
|
timestamptz created_at
|
||||||
|
timestamptz updated_at
|
||||||
|
}
|
||||||
|
|
||||||
|
APPLICATIONS {
|
||||||
|
uuid id PK
|
||||||
|
uuid user_id FK
|
||||||
|
varchar company_name
|
||||||
|
varchar role_title
|
||||||
|
text job_description
|
||||||
|
text job_url
|
||||||
|
varchar location
|
||||||
|
varchar status
|
||||||
|
timestamptz created_at
|
||||||
|
timestamptz updated_at
|
||||||
|
}
|
||||||
|
|
||||||
|
DOCUMENTS {
|
||||||
|
uuid id PK
|
||||||
|
uuid application_id FK
|
||||||
|
varchar document_type
|
||||||
|
text content
|
||||||
|
timestamptz created_at
|
||||||
|
timestamptz updated_at
|
||||||
|
}
|
||||||
|
|
||||||
|
USER_RESUMES {
|
||||||
|
uuid id PK
|
||||||
|
uuid user_id FK
|
||||||
|
varchar name
|
||||||
|
text content
|
||||||
|
varchar focus_area
|
||||||
|
boolean is_primary
|
||||||
|
timestamptz created_at
|
||||||
|
timestamptz updated_at
|
||||||
|
}
|
||||||
|
|
||||||
|
DOCUMENT_EMBEDDINGS {
|
||||||
|
uuid id PK
|
||||||
|
uuid document_id FK
|
||||||
|
vector_1536 embedding
|
||||||
|
timestamptz created_at
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🗄️ Complete Database Schema
|
||||||
|
|
||||||
|
### Database Initialization
|
||||||
|
```sql
|
||||||
|
-- Enable required extensions
|
||||||
|
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
|
||||||
|
CREATE EXTENSION IF NOT EXISTS vector;
|
||||||
|
|
||||||
|
-- Create custom types
|
||||||
|
CREATE TYPE application_status_type AS ENUM (
|
||||||
|
'draft',
|
||||||
|
'applied',
|
||||||
|
'interview',
|
||||||
|
'rejected',
|
||||||
|
'offer'
|
||||||
|
);
|
||||||
|
CREATE TYPE document_type_enum AS ENUM (
|
||||||
|
'research_report',
|
||||||
|
'optimized_resume',
|
||||||
|
'cover_letter'
|
||||||
|
);
|
||||||
|
CREATE TYPE focus_area_type AS ENUM (
|
||||||
|
'software_development',
|
||||||
|
'data_science',
|
||||||
|
'management',
|
||||||
|
'consulting',
|
||||||
|
'other'
|
||||||
|
);
|
||||||
|
```
|
||||||
|
|
||||||
|
### Core Tables
|
||||||
|
|
||||||
|
#### Users Table
|
||||||
|
```sql
|
||||||
|
CREATE TABLE users (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
email VARCHAR(255) UNIQUE NOT NULL,
|
||||||
|
password_hash VARCHAR(255) NOT NULL,
|
||||||
|
first_name VARCHAR(100) NOT NULL,
|
||||||
|
last_name VARCHAR(100) NOT NULL,
|
||||||
|
is_active BOOLEAN DEFAULT TRUE,
|
||||||
|
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
|
||||||
|
updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
|
||||||
|
|
||||||
|
-- Constraints
|
||||||
|
CONSTRAINT email_format CHECK (email ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$'),
|
||||||
|
CONSTRAINT first_name_not_empty CHECK (LENGTH(TRIM(first_name)) > 0),
|
||||||
|
CONSTRAINT last_name_not_empty CHECK (LENGTH(TRIM(last_name)) > 0)
|
||||||
|
);
|
||||||
|
|
||||||
|
-- Indexes
|
||||||
|
CREATE INDEX idx_users_email ON users(email);
|
||||||
|
CREATE INDEX idx_users_active ON users(is_active);
|
||||||
|
CREATE INDEX idx_users_created_at ON users(created_at);
|
||||||
|
|
||||||
|
-- Row Level Security
|
||||||
|
ALTER TABLE users ENABLE ROW LEVEL SECURITY;
|
||||||
|
|
||||||
|
-- Users can only see their own record
|
||||||
|
CREATE POLICY users_own_data ON users
|
||||||
|
FOR ALL
|
||||||
|
USING (id = current_setting('app.current_user_id')::UUID);
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Applications Table
|
||||||
|
```sql
|
||||||
|
CREATE TABLE applications (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
|
||||||
|
company_name VARCHAR(255) NOT NULL,
|
||||||
|
role_title VARCHAR(255) NOT NULL,
|
||||||
|
job_description TEXT NOT NULL,
|
||||||
|
job_url TEXT,
|
||||||
|
location VARCHAR(255),
|
||||||
|
status application_status_type DEFAULT 'draft',
|
||||||
|
|
||||||
|
-- Timestamps
|
||||||
|
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
|
||||||
|
updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
|
||||||
|
|
||||||
|
-- Constraints
|
||||||
|
CONSTRAINT job_description_min_length CHECK (LENGTH(job_description) >= 50),
|
||||||
|
CONSTRAINT company_name_not_empty CHECK (LENGTH(TRIM(company_name)) > 0),
|
||||||
|
CONSTRAINT role_title_not_empty CHECK (LENGTH(TRIM(role_title)) > 0),
|
||||||
|
CONSTRAINT valid_job_url CHECK (
|
||||||
|
job_url IS NULL OR
|
||||||
|
job_url ~* '^https?://[^\s/$.?#].[^\s]*$'
|
||||||
|
)
|
||||||
|
);
|
||||||
|
|
||||||
|
-- Indexes
|
||||||
|
CREATE INDEX idx_applications_user_id ON applications(user_id);
|
||||||
|
CREATE INDEX idx_applications_status ON applications(status);
|
||||||
|
CREATE INDEX idx_applications_created_at ON applications(created_at);
|
||||||
|
CREATE INDEX idx_applications_company_name ON applications(company_name);
|
||||||
|
|
||||||
|
-- Full text search index for job descriptions
|
||||||
|
CREATE INDEX idx_applications_job_description_fts
|
||||||
|
ON applications USING gin(to_tsvector('english', job_description));
|
||||||
|
|
||||||
|
-- Row Level Security
|
||||||
|
ALTER TABLE applications ENABLE ROW LEVEL SECURITY;
|
||||||
|
|
||||||
|
CREATE POLICY applications_user_access ON applications
|
||||||
|
FOR ALL
|
||||||
|
USING (user_id = current_setting('app.current_user_id')::UUID);
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Documents Table
|
||||||
|
```sql
|
||||||
|
CREATE TABLE documents (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
application_id UUID NOT NULL REFERENCES applications(id) ON DELETE CASCADE,
|
||||||
|
document_type document_type_enum NOT NULL,
|
||||||
|
content TEXT NOT NULL,
|
||||||
|
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
|
||||||
|
updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
|
||||||
|
|
||||||
|
-- Constraints
|
||||||
|
CONSTRAINT content_min_length CHECK (LENGTH(content) >= 10),
|
||||||
|
CONSTRAINT unique_document_per_application UNIQUE (application_id, document_type)
|
||||||
|
);
|
||||||
|
|
||||||
|
-- Indexes
|
||||||
|
CREATE INDEX idx_documents_application_id ON documents(application_id);
|
||||||
|
CREATE INDEX idx_documents_type ON documents(document_type);
|
||||||
|
CREATE INDEX idx_documents_updated_at ON documents(updated_at);
|
||||||
|
|
||||||
|
-- Full text search index for document content
|
||||||
|
CREATE INDEX idx_documents_content_fts
|
||||||
|
ON documents USING gin(to_tsvector('english', content));
|
||||||
|
|
||||||
|
-- Row Level Security
|
||||||
|
ALTER TABLE documents ENABLE ROW LEVEL SECURITY;
|
||||||
|
|
||||||
|
CREATE POLICY documents_user_access ON documents
|
||||||
|
FOR ALL
|
||||||
|
USING (
|
||||||
|
application_id IN (
|
||||||
|
SELECT id FROM applications
|
||||||
|
WHERE user_id = current_setting('app.current_user_id')::UUID
|
||||||
|
)
|
||||||
|
);
|
||||||
|
```
|
||||||
|
|
||||||
|
#### User Resumes Table
|
||||||
|
```sql
|
||||||
|
CREATE TABLE user_resumes (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
|
||||||
|
name VARCHAR(255) NOT NULL,
|
||||||
|
content TEXT NOT NULL,
|
||||||
|
focus_area focus_area_type DEFAULT 'other',
|
||||||
|
is_primary BOOLEAN DEFAULT FALSE,
|
||||||
|
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
|
||||||
|
updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
|
||||||
|
|
||||||
|
-- Constraints
|
||||||
|
CONSTRAINT resume_name_not_empty CHECK (LENGTH(TRIM(name)) > 0),
|
||||||
|
CONSTRAINT resume_content_min_length CHECK (LENGTH(content) >= 100),
|
||||||
|
|
||||||
|
-- Only one primary resume per user
|
||||||
|
CONSTRAINT unique_primary_resume UNIQUE (user_id, is_primary)
|
||||||
|
DEFERRABLE INITIALLY DEFERRED
|
||||||
|
);
|
||||||
|
|
||||||
|
-- Indexes
|
||||||
|
CREATE INDEX idx_user_resumes_user_id ON user_resumes(user_id);
|
||||||
|
CREATE INDEX idx_user_resumes_focus_area ON user_resumes(focus_area);
|
||||||
|
CREATE INDEX idx_user_resumes_is_primary ON user_resumes(is_primary);
|
||||||
|
|
||||||
|
-- Full text search index for resume content
|
||||||
|
CREATE INDEX idx_user_resumes_content_fts
|
||||||
|
ON user_resumes USING gin(to_tsvector('english', content));
|
||||||
|
|
||||||
|
-- Row Level Security
|
||||||
|
ALTER TABLE user_resumes ENABLE ROW LEVEL SECURITY;
|
||||||
|
|
||||||
|
CREATE POLICY user_resumes_access ON user_resumes
|
||||||
|
FOR ALL
|
||||||
|
USING (user_id = current_setting('app.current_user_id')::UUID);
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Document Embeddings Table (AI Features)
|
||||||
|
```sql
|
||||||
|
CREATE TABLE document_embeddings (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
document_id UUID NOT NULL REFERENCES documents(id) ON DELETE CASCADE,
|
||||||
|
embedding vector(1536), -- OpenAI text-embedding-3-large dimension
|
||||||
|
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
|
||||||
|
|
||||||
|
-- Constraints
|
||||||
|
CONSTRAINT unique_embedding_per_document UNIQUE (document_id)
|
||||||
|
);
|
||||||
|
|
||||||
|
-- Vector similarity index
|
||||||
|
CREATE INDEX idx_document_embeddings_vector
|
||||||
|
ON document_embeddings USING ivfflat (embedding vector_cosine_ops)
|
||||||
|
WITH (lists = 100);
|
||||||
|
|
||||||
|
-- Regular indexes
|
||||||
|
CREATE INDEX idx_document_embeddings_document_id ON document_embeddings(document_id);
|
||||||
|
|
||||||
|
-- Row Level Security
|
||||||
|
ALTER TABLE document_embeddings ENABLE ROW LEVEL SECURITY;
|
||||||
|
|
||||||
|
CREATE POLICY document_embeddings_access ON document_embeddings
|
||||||
|
FOR ALL
|
||||||
|
USING (
|
||||||
|
document_id IN (
|
||||||
|
SELECT d.id FROM documents d
|
||||||
|
JOIN applications a ON d.application_id = a.id
|
||||||
|
WHERE a.user_id = current_setting('app.current_user_id')::UUID
|
||||||
|
)
|
||||||
|
);
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🔒 Security Policies
|
||||||
|
|
||||||
|
### Row Level Security Overview
|
||||||
|
All tables with user data have RLS enabled to ensure complete data isolation:
|
||||||
|
|
||||||
|
```sql
|
||||||
|
-- Function to get current user ID from session
|
||||||
|
CREATE OR REPLACE FUNCTION get_current_user_id()
|
||||||
|
RETURNS UUID AS $$
|
||||||
|
BEGIN
|
||||||
|
RETURN current_setting('app.current_user_id')::UUID;
|
||||||
|
EXCEPTION
|
||||||
|
WHEN others THEN
|
||||||
|
RETURN NULL;
|
||||||
|
END;
|
||||||
|
$$ LANGUAGE plpgsql SECURITY DEFINER;
|
||||||
|
|
||||||
|
-- Helper function to check if user owns application
|
||||||
|
CREATE OR REPLACE FUNCTION user_owns_application(app_id UUID)
|
||||||
|
RETURNS BOOLEAN AS $$
|
||||||
|
BEGIN
|
||||||
|
RETURN EXISTS (
|
||||||
|
SELECT 1 FROM applications
|
||||||
|
WHERE id = app_id
|
||||||
|
AND user_id = get_current_user_id()
|
||||||
|
);
|
||||||
|
END;
|
||||||
|
$$ LANGUAGE plpgsql SECURITY DEFINER;
|
||||||
|
```
|
||||||
|
|
||||||
|
### Setting User Context
|
||||||
|
Backend must set user context for each request:
|
||||||
|
|
||||||
|
```python
|
||||||
|
# In FastAPI dependency
|
||||||
|
async def set_user_context(user: User = Depends(get_current_user)):
|
||||||
|
async with get_db_connection() as conn:
|
||||||
|
await conn.execute(
|
||||||
|
"SET LOCAL app.current_user_id = %s",
|
||||||
|
str(user.id)
|
||||||
|
)
|
||||||
|
return user
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🚀 Database Functions
|
||||||
|
|
||||||
|
### Trigger Functions
|
||||||
|
```sql
|
||||||
|
-- Update timestamp trigger function
|
||||||
|
CREATE OR REPLACE FUNCTION update_updated_at_column()
|
||||||
|
RETURNS TRIGGER AS $$
|
||||||
|
BEGIN
|
||||||
|
NEW.updated_at = NOW();
|
||||||
|
RETURN NEW;
|
||||||
|
END;
|
||||||
|
$$ LANGUAGE plpgsql;
|
||||||
|
|
||||||
|
-- Apply to all tables with updated_at
|
||||||
|
CREATE TRIGGER update_users_updated_at
|
||||||
|
BEFORE UPDATE ON users
|
||||||
|
FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();
|
||||||
|
|
||||||
|
CREATE TRIGGER update_applications_updated_at
|
||||||
|
BEFORE UPDATE ON applications
|
||||||
|
FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();
|
||||||
|
|
||||||
|
CREATE TRIGGER update_documents_updated_at
|
||||||
|
BEFORE UPDATE ON documents
|
||||||
|
FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();
|
||||||
|
|
||||||
|
CREATE TRIGGER update_user_resumes_updated_at
|
||||||
|
BEFORE UPDATE ON user_resumes
|
||||||
|
FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();
|
||||||
|
```
|
||||||
|
|
||||||
|
### Business Logic Functions
|
||||||
|
```sql
|
||||||
|
-- Generate application name
|
||||||
|
CREATE OR REPLACE FUNCTION generate_application_name(
|
||||||
|
p_company_name VARCHAR,
|
||||||
|
p_role_title VARCHAR
|
||||||
|
) RETURNS VARCHAR AS $$
|
||||||
|
DECLARE
|
||||||
|
clean_company VARCHAR;
|
||||||
|
clean_role VARCHAR;
|
||||||
|
date_suffix VARCHAR;
|
||||||
|
BEGIN
|
||||||
|
-- Clean and normalize names
|
||||||
|
clean_company := LOWER(REGEXP_REPLACE(p_company_name, '[^a-zA-Z0-9]', '_', 'g'));
|
||||||
|
clean_role := LOWER(REGEXP_REPLACE(p_role_title, '[^a-zA-Z0-9]', '_', 'g'));
|
||||||
|
date_suffix := TO_CHAR(NOW(), 'YYYY_MM_DD');
|
||||||
|
|
||||||
|
RETURN clean_company || '_' || clean_role || '_' || date_suffix;
|
||||||
|
END;
|
||||||
|
$$ LANGUAGE plpgsql;
|
||||||
|
|
||||||
|
-- Application status validation function
|
||||||
|
CREATE OR REPLACE FUNCTION validate_application_status()
|
||||||
|
RETURNS TRIGGER AS $$
|
||||||
|
BEGIN
|
||||||
|
-- Ensure status transitions are logical
|
||||||
|
IF NEW.status = OLD.status THEN
|
||||||
|
RETURN NEW;
|
||||||
|
END IF;
|
||||||
|
|
||||||
|
-- Log status changes for audit purposes
|
||||||
|
RAISE NOTICE 'Application % status changed from % to %',
|
||||||
|
NEW.id, OLD.status, NEW.status;
|
||||||
|
|
||||||
|
RETURN NEW;
|
||||||
|
END;
|
||||||
|
$$ LANGUAGE plpgsql;
|
||||||
|
|
||||||
|
CREATE TRIGGER validate_application_status_trigger
|
||||||
|
BEFORE UPDATE ON applications
|
||||||
|
FOR EACH ROW EXECUTE FUNCTION validate_application_status();
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📈 Performance Optimization
|
||||||
|
|
||||||
|
### Query Optimization
|
||||||
|
```sql
|
||||||
|
-- Most common query patterns with optimized indexes
|
||||||
|
|
||||||
|
-- 1. Get user applications (paginated)
|
||||||
|
-- Index: idx_applications_user_id, idx_applications_created_at
|
||||||
|
SELECT * FROM applications
|
||||||
|
WHERE user_id = $1
|
||||||
|
ORDER BY created_at DESC
|
||||||
|
LIMIT $2 OFFSET $3;
|
||||||
|
|
||||||
|
-- 2. Get application with documents
|
||||||
|
-- Index: idx_documents_application_id
|
||||||
|
SELECT a.*, d.document_type, d.content
|
||||||
|
FROM applications a
|
||||||
|
LEFT JOIN documents d ON a.id = d.application_id
|
||||||
|
WHERE a.id = $1 AND a.user_id = $2;
|
||||||
|
|
||||||
|
-- 3. Search applications by company/role
|
||||||
|
-- Index: idx_applications_company_name, full-text search
|
||||||
|
SELECT * FROM applications
|
||||||
|
WHERE user_id = $1
|
||||||
|
AND (
|
||||||
|
company_name ILIKE $2
|
||||||
|
OR role_title ILIKE $3
|
||||||
|
OR to_tsvector('english', job_description) @@ plainto_tsquery('english', $4)
|
||||||
|
)
|
||||||
|
ORDER BY created_at DESC;
|
||||||
|
```
|
||||||
|
|
||||||
|
### Connection Pooling
|
||||||
|
```python
|
||||||
|
# SQLAlchemy async engine configuration
|
||||||
|
engine = create_async_engine(
|
||||||
|
DATABASE_URL,
|
||||||
|
pool_size=20, # Connection pool size
|
||||||
|
max_overflow=30, # Additional connections beyond pool_size
|
||||||
|
pool_pre_ping=True, # Validate connections before use
|
||||||
|
pool_recycle=3600, # Recycle connections every hour
|
||||||
|
echo=False # Disable SQL logging in production
|
||||||
|
)
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🧪 Test Data Setup
|
||||||
|
|
||||||
|
### Development Seed Data
|
||||||
|
```sql
|
||||||
|
-- Insert test user (password: "testpass123")
|
||||||
|
INSERT INTO users (id, email, password_hash, first_name, last_name, is_active) VALUES (
|
||||||
|
'123e4567-e89b-12d3-a456-426614174000',
|
||||||
|
'test@example.com',
|
||||||
|
'$2b$12$LQv3c1yqBWVHxkd0LHAkCOYz6TtxMQJqhN8/LewgdyN8yF5V4M2kq',
|
||||||
|
'Test',
|
||||||
|
'User',
|
||||||
|
true
|
||||||
|
);
|
||||||
|
|
||||||
|
-- Insert test resume
|
||||||
|
INSERT INTO user_resumes (user_id, name, content, focus_area, is_primary) VALUES (
|
||||||
|
'123e4567-e89b-12d3-a456-426614174000',
|
||||||
|
'Software Developer Resume',
|
||||||
|
'# Test User\n\n## Experience\n\nSoftware Developer at Tech Corp...',
|
||||||
|
'software_development',
|
||||||
|
true
|
||||||
|
);
|
||||||
|
|
||||||
|
-- Insert test application
|
||||||
|
INSERT INTO applications (
|
||||||
|
user_id, company_name, role_title,
|
||||||
|
job_description, job_url, status
|
||||||
|
) VALUES (
|
||||||
|
'123e4567-e89b-12d3-a456-426614174000',
|
||||||
|
'Google',
|
||||||
|
'Senior Developer',
|
||||||
|
'We are seeking an experienced software developer to join our team building cutting-edge applications. You will work with Python, FastAPI, and modern web technologies.',
|
||||||
|
'https://careers.google.com/jobs/results/123456789/',
|
||||||
|
'draft'
|
||||||
|
);
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🔄 Database Migrations (Future)
|
||||||
|
|
||||||
|
### Migration Strategy for Phase 2
|
||||||
|
When adding Alembic migrations:
|
||||||
|
|
||||||
|
```python
|
||||||
|
# alembic/env.py configuration for RLS
|
||||||
|
from sqlalchemy import text
|
||||||
|
|
||||||
|
def run_migrations_online():
|
||||||
|
# Set up RLS context for migrations
|
||||||
|
with engine.connect() as connection:
|
||||||
|
connection.execute(text("SET row_security = off"))
|
||||||
|
|
||||||
|
context.configure(
|
||||||
|
connection=connection,
|
||||||
|
target_metadata=target_metadata,
|
||||||
|
compare_type=True,
|
||||||
|
compare_server_default=True
|
||||||
|
)
|
||||||
|
|
||||||
|
with context.begin_transaction():
|
||||||
|
context.run_migrations()
|
||||||
|
```
|
||||||
|
|
||||||
|
### Planned Schema Changes
|
||||||
|
- **Usage tracking tables** for SaaS billing
|
||||||
|
- **Subscription management** tables
|
||||||
|
- **Audit log** tables for compliance
|
||||||
|
- **Performance metrics** tables
|
||||||
|
- **Additional indexes** based on production usage
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🛠️ Database Maintenance
|
||||||
|
|
||||||
|
### Regular Maintenance Tasks
|
||||||
|
```sql
|
||||||
|
-- Vacuum and analyze (run weekly)
|
||||||
|
VACUUM ANALYZE;
|
||||||
|
|
||||||
|
-- Update table statistics
|
||||||
|
ANALYZE applications;
|
||||||
|
ANALYZE documents;
|
||||||
|
ANALYZE user_resumes;
|
||||||
|
|
||||||
|
-- Check index usage
|
||||||
|
SELECT schemaname, tablename, indexname, idx_tup_read, idx_tup_fetch
|
||||||
|
FROM pg_stat_user_indexes
|
||||||
|
ORDER BY idx_tup_read DESC;
|
||||||
|
|
||||||
|
-- Monitor vector index performance
|
||||||
|
SELECT * FROM pg_stat_user_indexes
|
||||||
|
WHERE indexname LIKE '%vector%';
|
||||||
|
```
|
||||||
|
|
||||||
|
### Backup Strategy
|
||||||
|
```bash
|
||||||
|
# Daily backup script
|
||||||
|
pg_dump -h localhost -U jobforge_user -d jobforge_mvp \
|
||||||
|
--clean --if-exists --verbose \
|
||||||
|
> backup_$(date +%Y%m%d).sql
|
||||||
|
|
||||||
|
# Restore from backup
|
||||||
|
psql -h localhost -U jobforge_user -d jobforge_mvp < backup_20250701.sql
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📊 Monitoring Queries
|
||||||
|
|
||||||
|
### Performance Monitoring
|
||||||
|
```sql
|
||||||
|
-- Slow queries
|
||||||
|
SELECT query, mean_time, calls, total_time
|
||||||
|
FROM pg_stat_statements
|
||||||
|
WHERE mean_time > 100 -- queries slower than 100ms
|
||||||
|
ORDER BY mean_time DESC;
|
||||||
|
|
||||||
|
-- Table sizes
|
||||||
|
SELECT
|
||||||
|
schemaname,
|
||||||
|
tablename,
|
||||||
|
pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) as size,
|
||||||
|
pg_total_relation_size(schemaname||'.'||tablename) as size_bytes
|
||||||
|
FROM pg_tables
|
||||||
|
WHERE schemaname = 'public'
|
||||||
|
ORDER BY size_bytes DESC;
|
||||||
|
|
||||||
|
-- Connection counts
|
||||||
|
SELECT state, count(*)
|
||||||
|
FROM pg_stat_activity
|
||||||
|
GROUP BY state;
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
*This database design provides a robust foundation for the Job Forge web application with strong security, performance optimization, and scalability. The RLS policies ensure complete multi-tenant data isolation, while the schema supports efficient AI-powered document generation workflows.*
|
||||||
894
docs/development/coding_standards.md
Normal file
894
docs/development/coding_standards.md
Normal file
@@ -0,0 +1,894 @@
|
|||||||
|
# Coding Standards - Job Forge
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
|
||||||
|
This document outlines the coding standards and best practices for the Job Forge Python/FastAPI web application. Following these standards ensures code consistency, maintainability, and quality across the project.
|
||||||
|
|
||||||
|
## Python Code Style
|
||||||
|
|
||||||
|
### 1. PEP 8 Compliance
|
||||||
|
Job Forge follows [PEP 8](https://pep8.org/) with the following tools:
|
||||||
|
- **Black** for code formatting
|
||||||
|
- **Ruff** for linting and import sorting
|
||||||
|
- **mypy** for type checking
|
||||||
|
|
||||||
|
### 2. Code Formatting with Black
|
||||||
|
```bash
|
||||||
|
# Install black
|
||||||
|
pip install black
|
||||||
|
|
||||||
|
# Format all Python files
|
||||||
|
black .
|
||||||
|
|
||||||
|
# Check formatting without making changes
|
||||||
|
black --check .
|
||||||
|
|
||||||
|
# Format specific file
|
||||||
|
black app/main.py
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Black Configuration (.pyproject.toml)
|
||||||
|
```toml
|
||||||
|
[tool.black]
|
||||||
|
line-length = 88
|
||||||
|
target-version = ['py312']
|
||||||
|
include = '\.pyi?$'
|
||||||
|
extend-exclude = '''
|
||||||
|
/(
|
||||||
|
# directories
|
||||||
|
\.eggs
|
||||||
|
| \.git
|
||||||
|
| \.hg
|
||||||
|
| \.mypy_cache
|
||||||
|
| \.tox
|
||||||
|
| \.venv
|
||||||
|
| build
|
||||||
|
| dist
|
||||||
|
)/
|
||||||
|
'''
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. Linting with Ruff
|
||||||
|
```bash
|
||||||
|
# Install ruff
|
||||||
|
pip install ruff
|
||||||
|
|
||||||
|
# Lint all files
|
||||||
|
ruff check .
|
||||||
|
|
||||||
|
# Fix auto-fixable issues
|
||||||
|
ruff check --fix .
|
||||||
|
|
||||||
|
# Check specific file
|
||||||
|
ruff check app/main.py
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Ruff Configuration (.pyproject.toml)
|
||||||
|
```toml
|
||||||
|
[tool.ruff]
|
||||||
|
target-version = "py312"
|
||||||
|
line-length = 88
|
||||||
|
select = [
|
||||||
|
"E", # pycodestyle errors
|
||||||
|
"W", # pycodestyle warnings
|
||||||
|
"F", # pyflakes
|
||||||
|
"I", # isort
|
||||||
|
"B", # flake8-bugbear
|
||||||
|
"C4", # flake8-comprehensions
|
||||||
|
"UP", # pyupgrade
|
||||||
|
]
|
||||||
|
ignore = [
|
||||||
|
"E501", # line too long, handled by black
|
||||||
|
"B008", # do not perform function calls in argument defaults
|
||||||
|
"C901", # too complex
|
||||||
|
]
|
||||||
|
exclude = [
|
||||||
|
".bzr",
|
||||||
|
".direnv",
|
||||||
|
".eggs",
|
||||||
|
".git",
|
||||||
|
".hg",
|
||||||
|
".mypy_cache",
|
||||||
|
".nox",
|
||||||
|
".pants.d",
|
||||||
|
".pytype",
|
||||||
|
".ruff_cache",
|
||||||
|
".svn",
|
||||||
|
".tox",
|
||||||
|
".venv",
|
||||||
|
"__pypackages__",
|
||||||
|
"_build",
|
||||||
|
"buck-out",
|
||||||
|
"build",
|
||||||
|
"dist",
|
||||||
|
"node_modules",
|
||||||
|
"venv",
|
||||||
|
]
|
||||||
|
|
||||||
|
[tool.ruff.mccabe]
|
||||||
|
max-complexity = 10
|
||||||
|
|
||||||
|
[tool.ruff.isort]
|
||||||
|
known-first-party = ["app"]
|
||||||
|
```
|
||||||
|
|
||||||
|
### 4. Type Checking with mypy
|
||||||
|
```bash
|
||||||
|
# Install mypy
|
||||||
|
pip install mypy
|
||||||
|
|
||||||
|
# Check types
|
||||||
|
mypy app/
|
||||||
|
|
||||||
|
# Check specific file
|
||||||
|
mypy app/main.py
|
||||||
|
```
|
||||||
|
|
||||||
|
#### mypy Configuration (mypy.ini)
|
||||||
|
```ini
|
||||||
|
[mypy]
|
||||||
|
python_version = 3.12
|
||||||
|
warn_return_any = True
|
||||||
|
warn_unused_configs = True
|
||||||
|
disallow_untyped_defs = True
|
||||||
|
disallow_incomplete_defs = True
|
||||||
|
check_untyped_defs = True
|
||||||
|
disallow_untyped_decorators = True
|
||||||
|
no_implicit_optional = True
|
||||||
|
warn_redundant_casts = True
|
||||||
|
warn_unused_ignores = True
|
||||||
|
warn_no_return = True
|
||||||
|
warn_unreachable = True
|
||||||
|
strict_equality = True
|
||||||
|
|
||||||
|
[mypy-tests.*]
|
||||||
|
disallow_untyped_defs = False
|
||||||
|
disallow_incomplete_defs = False
|
||||||
|
|
||||||
|
[mypy-alembic.*]
|
||||||
|
ignore_errors = True
|
||||||
|
```
|
||||||
|
|
||||||
|
## FastAPI Coding Standards
|
||||||
|
|
||||||
|
### 1. API Endpoint Structure
|
||||||
|
```python
|
||||||
|
# Good: Clear, consistent endpoint structure
|
||||||
|
from fastapi import APIRouter, Depends, HTTPException, status
|
||||||
|
from sqlalchemy.ext.asyncio import AsyncSession
|
||||||
|
from typing import List
|
||||||
|
|
||||||
|
from app.core.database import get_db
|
||||||
|
from app.core.security import get_current_user
|
||||||
|
from app.models.user import User
|
||||||
|
from app.schemas.application import ApplicationCreate, ApplicationResponse
|
||||||
|
from app.crud.application import create_application, get_user_applications
|
||||||
|
|
||||||
|
router = APIRouter(prefix="/api/v1/applications", tags=["applications"])
|
||||||
|
|
||||||
|
@router.post("/", response_model=ApplicationResponse, status_code=status.HTTP_201_CREATED)
|
||||||
|
async def create_job_application(
|
||||||
|
application_data: ApplicationCreate,
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db),
|
||||||
|
) -> ApplicationResponse:
|
||||||
|
"""
|
||||||
|
Create a new job application with AI-generated cover letter.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
application_data: Application creation data
|
||||||
|
current_user: Authenticated user from JWT token
|
||||||
|
db: Database session
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Created application with generated content
|
||||||
|
|
||||||
|
Raises:
|
||||||
|
HTTPException: If application creation fails
|
||||||
|
"""
|
||||||
|
try:
|
||||||
|
application = await create_application(db, application_data, current_user.id)
|
||||||
|
return ApplicationResponse.from_orm(application)
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Failed to create application: {str(e)}")
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
|
||||||
|
detail="Failed to create application"
|
||||||
|
)
|
||||||
|
|
||||||
|
@router.get("/", response_model=List[ApplicationResponse])
|
||||||
|
async def get_applications(
|
||||||
|
skip: int = 0,
|
||||||
|
limit: int = 100,
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db),
|
||||||
|
) -> List[ApplicationResponse]:
|
||||||
|
"""Get user's job applications with pagination."""
|
||||||
|
|
||||||
|
applications = await get_user_applications(
|
||||||
|
db, user_id=current_user.id, skip=skip, limit=limit
|
||||||
|
)
|
||||||
|
return [ApplicationResponse.from_orm(app) for app in applications]
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Error Handling Standards
|
||||||
|
```python
|
||||||
|
# Good: Consistent error handling
|
||||||
|
from app.core.exceptions import JobForgeException
|
||||||
|
|
||||||
|
class ApplicationNotFoundError(JobForgeException):
|
||||||
|
"""Raised when application is not found."""
|
||||||
|
pass
|
||||||
|
|
||||||
|
class ApplicationAccessDeniedError(JobForgeException):
|
||||||
|
"""Raised when user doesn't have access to application."""
|
||||||
|
pass
|
||||||
|
|
||||||
|
@router.get("/{application_id}", response_model=ApplicationResponse)
|
||||||
|
async def get_application(
|
||||||
|
application_id: str,
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db),
|
||||||
|
) -> ApplicationResponse:
|
||||||
|
"""Get specific job application by ID."""
|
||||||
|
|
||||||
|
try:
|
||||||
|
application = await get_application_by_id(db, application_id, current_user.id)
|
||||||
|
|
||||||
|
if not application:
|
||||||
|
raise ApplicationNotFoundError(f"Application {application_id} not found")
|
||||||
|
|
||||||
|
return ApplicationResponse.from_orm(application)
|
||||||
|
|
||||||
|
except ApplicationNotFoundError:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_404_NOT_FOUND,
|
||||||
|
detail="Application not found"
|
||||||
|
)
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Error retrieving application {application_id}: {str(e)}")
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
|
||||||
|
detail="Internal server error"
|
||||||
|
)
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. Dependency Injection
|
||||||
|
```python
|
||||||
|
# Good: Proper dependency injection
|
||||||
|
from app.services.ai.claude_service import ClaudeService
|
||||||
|
from app.services.ai.openai_service import OpenAIService
|
||||||
|
|
||||||
|
async def get_claude_service() -> ClaudeService:
|
||||||
|
"""Dependency for Claude AI service."""
|
||||||
|
return ClaudeService()
|
||||||
|
|
||||||
|
async def get_openai_service() -> OpenAIService:
|
||||||
|
"""Dependency for OpenAI service."""
|
||||||
|
return OpenAIService()
|
||||||
|
|
||||||
|
@router.post("/{application_id}/generate-cover-letter")
|
||||||
|
async def generate_cover_letter(
|
||||||
|
application_id: str,
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db),
|
||||||
|
claude_service: ClaudeService = Depends(get_claude_service),
|
||||||
|
) -> dict:
|
||||||
|
"""Generate AI cover letter for application."""
|
||||||
|
|
||||||
|
application = await get_application_by_id(db, application_id, current_user.id)
|
||||||
|
if not application:
|
||||||
|
raise HTTPException(status_code=404, detail="Application not found")
|
||||||
|
|
||||||
|
cover_letter = await claude_service.generate_cover_letter(
|
||||||
|
user_profile=current_user.profile,
|
||||||
|
job_description=application.job_description
|
||||||
|
)
|
||||||
|
|
||||||
|
application.cover_letter = cover_letter
|
||||||
|
await db.commit()
|
||||||
|
|
||||||
|
return {"cover_letter": cover_letter}
|
||||||
|
```
|
||||||
|
|
||||||
|
## Pydantic Model Standards
|
||||||
|
|
||||||
|
### 1. Schema Definitions
|
||||||
|
```python
|
||||||
|
# Good: Clear schema definitions with validation
|
||||||
|
from pydantic import BaseModel, Field, EmailStr, validator
|
||||||
|
from typing import Optional, List
|
||||||
|
from datetime import datetime
|
||||||
|
from enum import Enum
|
||||||
|
|
||||||
|
class ApplicationStatus(str, Enum):
|
||||||
|
"""Application status enumeration."""
|
||||||
|
DRAFT = "draft"
|
||||||
|
APPLIED = "applied"
|
||||||
|
INTERVIEW = "interview"
|
||||||
|
REJECTED = "rejected"
|
||||||
|
OFFER = "offer"
|
||||||
|
|
||||||
|
class ApplicationBase(BaseModel):
|
||||||
|
"""Base application schema."""
|
||||||
|
company_name: str = Field(..., min_length=1, max_length=255, description="Company name")
|
||||||
|
role_title: str = Field(..., min_length=1, max_length=255, description="Job role title")
|
||||||
|
job_description: Optional[str] = Field(None, max_length=5000, description="Job description")
|
||||||
|
status: ApplicationStatus = Field(ApplicationStatus.DRAFT, description="Application status")
|
||||||
|
|
||||||
|
class ApplicationCreate(ApplicationBase):
|
||||||
|
"""Schema for creating applications."""
|
||||||
|
|
||||||
|
@validator('company_name')
|
||||||
|
def validate_company_name(cls, v):
|
||||||
|
if not v.strip():
|
||||||
|
raise ValueError('Company name cannot be empty')
|
||||||
|
return v.strip()
|
||||||
|
|
||||||
|
@validator('role_title')
|
||||||
|
def validate_role_title(cls, v):
|
||||||
|
if not v.strip():
|
||||||
|
raise ValueError('Role title cannot be empty')
|
||||||
|
return v.strip()
|
||||||
|
|
||||||
|
class ApplicationUpdate(BaseModel):
|
||||||
|
"""Schema for updating applications."""
|
||||||
|
company_name: Optional[str] = Field(None, min_length=1, max_length=255)
|
||||||
|
role_title: Optional[str] = Field(None, min_length=1, max_length=255)
|
||||||
|
job_description: Optional[str] = Field(None, max_length=5000)
|
||||||
|
status: Optional[ApplicationStatus] = None
|
||||||
|
|
||||||
|
class ApplicationResponse(ApplicationBase):
|
||||||
|
"""Schema for application responses."""
|
||||||
|
id: str = Field(..., description="Application ID")
|
||||||
|
user_id: str = Field(..., description="User ID")
|
||||||
|
cover_letter: Optional[str] = Field(None, description="Generated cover letter")
|
||||||
|
created_at: datetime = Field(..., description="Creation timestamp")
|
||||||
|
updated_at: datetime = Field(..., description="Last update timestamp")
|
||||||
|
|
||||||
|
class Config:
|
||||||
|
from_attributes = True # For SQLAlchemy model conversion
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Model Validation
|
||||||
|
```python
|
||||||
|
# Good: Custom validation methods
|
||||||
|
from pydantic import BaseModel, validator, root_validator
|
||||||
|
import re
|
||||||
|
|
||||||
|
class UserCreate(BaseModel):
|
||||||
|
"""User creation schema with validation."""
|
||||||
|
email: EmailStr
|
||||||
|
password: str = Field(..., min_length=8, max_length=128)
|
||||||
|
first_name: str = Field(..., min_length=1, max_length=50)
|
||||||
|
last_name: str = Field(..., min_length=1, max_length=50)
|
||||||
|
|
||||||
|
@validator('password')
|
||||||
|
def validate_password_strength(cls, v):
|
||||||
|
"""Validate password strength."""
|
||||||
|
if len(v) < 8:
|
||||||
|
raise ValueError('Password must be at least 8 characters long')
|
||||||
|
if not re.search(r'[A-Z]', v):
|
||||||
|
raise ValueError('Password must contain at least one uppercase letter')
|
||||||
|
if not re.search(r'[a-z]', v):
|
||||||
|
raise ValueError('Password must contain at least one lowercase letter')
|
||||||
|
if not re.search(r'\d', v):
|
||||||
|
raise ValueError('Password must contain at least one digit')
|
||||||
|
return v
|
||||||
|
|
||||||
|
@validator('first_name', 'last_name')
|
||||||
|
def validate_names(cls, v):
|
||||||
|
"""Validate name fields."""
|
||||||
|
if not v.strip():
|
||||||
|
raise ValueError('Name cannot be empty')
|
||||||
|
if not re.match(r'^[a-zA-Z\s\'-]+$', v):
|
||||||
|
raise ValueError('Name contains invalid characters')
|
||||||
|
return v.strip().title()
|
||||||
|
```
|
||||||
|
|
||||||
|
## Database Model Standards
|
||||||
|
|
||||||
|
### 1. SQLAlchemy Models
|
||||||
|
```python
|
||||||
|
# Good: Well-structured SQLAlchemy models
|
||||||
|
from sqlalchemy import Column, String, Text, DateTime, ForeignKey, Enum
|
||||||
|
from sqlalchemy.dialects.postgresql import UUID
|
||||||
|
from sqlalchemy.orm import relationship
|
||||||
|
from sqlalchemy.sql import func
|
||||||
|
import uuid
|
||||||
|
|
||||||
|
from app.core.database import Base
|
||||||
|
from app.models.application import ApplicationStatus
|
||||||
|
|
||||||
|
class User(Base):
|
||||||
|
"""User model with proper relationships and constraints."""
|
||||||
|
|
||||||
|
__tablename__ = "users"
|
||||||
|
|
||||||
|
id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4, index=True)
|
||||||
|
email = Column(String(255), unique=True, nullable=False, index=True)
|
||||||
|
password_hash = Column(String(255), nullable=False)
|
||||||
|
first_name = Column(String(100), nullable=False)
|
||||||
|
last_name = Column(String(100), nullable=False)
|
||||||
|
is_active = Column(Boolean, default=True, nullable=False)
|
||||||
|
created_at = Column(DateTime(timezone=True), server_default=func.now(), nullable=False)
|
||||||
|
updated_at = Column(DateTime(timezone=True), server_default=func.now(), onupdate=func.now(), nullable=False)
|
||||||
|
|
||||||
|
# Relationships
|
||||||
|
applications = relationship("Application", back_populates="user", cascade="all, delete-orphan")
|
||||||
|
|
||||||
|
def __repr__(self) -> str:
|
||||||
|
return f"<User(id={self.id}, email={self.email})>"
|
||||||
|
|
||||||
|
class Application(Base):
|
||||||
|
"""Application model with RLS and proper indexing."""
|
||||||
|
|
||||||
|
__tablename__ = "applications"
|
||||||
|
|
||||||
|
id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4, index=True)
|
||||||
|
user_id = Column(UUID(as_uuid=True), ForeignKey("users.id"), nullable=False, index=True)
|
||||||
|
company_name = Column(String(255), nullable=False, index=True)
|
||||||
|
role_title = Column(String(255), nullable=False)
|
||||||
|
job_description = Column(Text)
|
||||||
|
cover_letter = Column(Text)
|
||||||
|
status = Column(Enum(ApplicationStatus), default=ApplicationStatus.DRAFT, nullable=False, index=True)
|
||||||
|
created_at = Column(DateTime(timezone=True), server_default=func.now(), nullable=False, index=True)
|
||||||
|
updated_at = Column(DateTime(timezone=True), server_default=func.now(), onupdate=func.now(), nullable=False)
|
||||||
|
|
||||||
|
# Relationships
|
||||||
|
user = relationship("User", back_populates="applications")
|
||||||
|
|
||||||
|
def __repr__(self) -> str:
|
||||||
|
return f"<Application(id={self.id}, company={self.company_name}, status={self.status})>"
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Database Operations (CRUD)
|
||||||
|
```python
|
||||||
|
# Good: Async database operations with proper error handling
|
||||||
|
from sqlalchemy.ext.asyncio import AsyncSession
|
||||||
|
from sqlalchemy import select, update, delete
|
||||||
|
from sqlalchemy.orm import selectinload
|
||||||
|
from typing import Optional, List
|
||||||
|
|
||||||
|
from app.models.application import Application
|
||||||
|
from app.schemas.application import ApplicationCreate, ApplicationUpdate
|
||||||
|
|
||||||
|
async def create_application(
|
||||||
|
db: AsyncSession,
|
||||||
|
application_data: ApplicationCreate,
|
||||||
|
user_id: str
|
||||||
|
) -> Application:
|
||||||
|
"""Create a new job application."""
|
||||||
|
|
||||||
|
application = Application(
|
||||||
|
user_id=user_id,
|
||||||
|
**application_data.dict()
|
||||||
|
)
|
||||||
|
|
||||||
|
db.add(application)
|
||||||
|
await db.commit()
|
||||||
|
await db.refresh(application)
|
||||||
|
|
||||||
|
return application
|
||||||
|
|
||||||
|
async def get_application_by_id(
|
||||||
|
db: AsyncSession,
|
||||||
|
application_id: str,
|
||||||
|
user_id: str
|
||||||
|
) -> Optional[Application]:
|
||||||
|
"""Get application by ID with user validation."""
|
||||||
|
|
||||||
|
query = select(Application).where(
|
||||||
|
Application.id == application_id,
|
||||||
|
Application.user_id == user_id
|
||||||
|
)
|
||||||
|
|
||||||
|
result = await db.execute(query)
|
||||||
|
return result.scalar_one_or_none()
|
||||||
|
|
||||||
|
async def get_user_applications(
|
||||||
|
db: AsyncSession,
|
||||||
|
user_id: str,
|
||||||
|
skip: int = 0,
|
||||||
|
limit: int = 100,
|
||||||
|
status_filter: Optional[ApplicationStatus] = None
|
||||||
|
) -> List[Application]:
|
||||||
|
"""Get user applications with filtering and pagination."""
|
||||||
|
|
||||||
|
query = select(Application).where(Application.user_id == user_id)
|
||||||
|
|
||||||
|
if status_filter:
|
||||||
|
query = query.where(Application.status == status_filter)
|
||||||
|
|
||||||
|
query = query.offset(skip).limit(limit).order_by(Application.created_at.desc())
|
||||||
|
|
||||||
|
result = await db.execute(query)
|
||||||
|
return list(result.scalars().all())
|
||||||
|
|
||||||
|
async def update_application(
|
||||||
|
db: AsyncSession,
|
||||||
|
application_id: str,
|
||||||
|
application_data: ApplicationUpdate,
|
||||||
|
user_id: str
|
||||||
|
) -> Optional[Application]:
|
||||||
|
"""Update application with user validation."""
|
||||||
|
|
||||||
|
# Update only provided fields
|
||||||
|
update_data = application_data.dict(exclude_unset=True)
|
||||||
|
if not update_data:
|
||||||
|
return None
|
||||||
|
|
||||||
|
query = (
|
||||||
|
update(Application)
|
||||||
|
.where(Application.id == application_id, Application.user_id == user_id)
|
||||||
|
.values(**update_data)
|
||||||
|
.returning(Application)
|
||||||
|
)
|
||||||
|
|
||||||
|
result = await db.execute(query)
|
||||||
|
await db.commit()
|
||||||
|
|
||||||
|
return result.scalar_one_or_none()
|
||||||
|
```
|
||||||
|
|
||||||
|
## Async Programming Standards
|
||||||
|
|
||||||
|
### 1. Async/Await Usage
|
||||||
|
```python
|
||||||
|
# Good: Proper async/await usage
|
||||||
|
import asyncio
|
||||||
|
from typing import List, Optional
|
||||||
|
|
||||||
|
async def process_multiple_applications(
|
||||||
|
applications: List[Application],
|
||||||
|
ai_service: ClaudeService
|
||||||
|
) -> List[str]:
|
||||||
|
"""Process multiple applications concurrently."""
|
||||||
|
|
||||||
|
async def process_single_application(app: Application) -> str:
|
||||||
|
"""Process a single application."""
|
||||||
|
if not app.job_description:
|
||||||
|
return ""
|
||||||
|
|
||||||
|
return await ai_service.generate_cover_letter(
|
||||||
|
user_profile=app.user.profile,
|
||||||
|
job_description=app.job_description
|
||||||
|
)
|
||||||
|
|
||||||
|
# Process applications concurrently
|
||||||
|
tasks = [process_single_application(app) for app in applications]
|
||||||
|
results = await asyncio.gather(*tasks, return_exceptions=True)
|
||||||
|
|
||||||
|
# Handle exceptions
|
||||||
|
cover_letters = []
|
||||||
|
for i, result in enumerate(results):
|
||||||
|
if isinstance(result, Exception):
|
||||||
|
logger.error(f"Failed to process application {applications[i].id}: {result}")
|
||||||
|
cover_letters.append("")
|
||||||
|
else:
|
||||||
|
cover_letters.append(result)
|
||||||
|
|
||||||
|
return cover_letters
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Context Managers
|
||||||
|
```python
|
||||||
|
# Good: Proper async context manager usage
|
||||||
|
from contextlib import asynccontextmanager
|
||||||
|
from typing import AsyncGenerator
|
||||||
|
|
||||||
|
@asynccontextmanager
|
||||||
|
async def get_ai_service_with_retry(
|
||||||
|
max_retries: int = 3
|
||||||
|
) -> AsyncGenerator[ClaudeService, None]:
|
||||||
|
"""Context manager for AI service with retry logic."""
|
||||||
|
|
||||||
|
service = ClaudeService()
|
||||||
|
retries = 0
|
||||||
|
|
||||||
|
try:
|
||||||
|
while retries < max_retries:
|
||||||
|
try:
|
||||||
|
await service.test_connection()
|
||||||
|
yield service
|
||||||
|
break
|
||||||
|
except Exception as e:
|
||||||
|
retries += 1
|
||||||
|
if retries >= max_retries:
|
||||||
|
raise e
|
||||||
|
await asyncio.sleep(2 ** retries) # Exponential backoff
|
||||||
|
finally:
|
||||||
|
await service.close()
|
||||||
|
|
||||||
|
# Usage
|
||||||
|
async def generate_with_retry(job_description: str) -> str:
|
||||||
|
async with get_ai_service_with_retry() as ai_service:
|
||||||
|
return await ai_service.generate_cover_letter(
|
||||||
|
user_profile={},
|
||||||
|
job_description=job_description
|
||||||
|
)
|
||||||
|
```
|
||||||
|
|
||||||
|
## Testing Standards
|
||||||
|
|
||||||
|
### 1. Test Structure
|
||||||
|
```python
|
||||||
|
# Good: Well-structured tests
|
||||||
|
import pytest
|
||||||
|
from httpx import AsyncClient
|
||||||
|
from unittest.mock import AsyncMock, patch
|
||||||
|
|
||||||
|
class TestApplicationAPI:
|
||||||
|
"""Test suite for application API endpoints."""
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_create_application_success(
|
||||||
|
self,
|
||||||
|
async_client: AsyncClient,
|
||||||
|
test_user_token: str
|
||||||
|
):
|
||||||
|
"""Test successful application creation."""
|
||||||
|
|
||||||
|
# Arrange
|
||||||
|
application_data = {
|
||||||
|
"company_name": "Test Corp",
|
||||||
|
"role_title": "Software Developer",
|
||||||
|
"job_description": "Python developer position",
|
||||||
|
"status": "draft"
|
||||||
|
}
|
||||||
|
headers = {"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
|
||||||
|
# Act
|
||||||
|
response = await async_client.post(
|
||||||
|
"/api/v1/applications/",
|
||||||
|
json=application_data,
|
||||||
|
headers=headers
|
||||||
|
)
|
||||||
|
|
||||||
|
# Assert
|
||||||
|
assert response.status_code == 201
|
||||||
|
data = response.json()
|
||||||
|
assert data["company_name"] == "Test Corp"
|
||||||
|
assert data["role_title"] == "Software Developer"
|
||||||
|
assert data["status"] == "draft"
|
||||||
|
assert "id" in data
|
||||||
|
assert "created_at" in data
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_create_application_with_ai_generation(
|
||||||
|
self,
|
||||||
|
async_client: AsyncClient,
|
||||||
|
test_user_token: str,
|
||||||
|
mock_claude_service: AsyncMock
|
||||||
|
):
|
||||||
|
"""Test application creation with AI cover letter generation."""
|
||||||
|
|
||||||
|
# Arrange
|
||||||
|
mock_claude_service.generate_cover_letter.return_value = "Mock cover letter"
|
||||||
|
|
||||||
|
application_data = {
|
||||||
|
"company_name": "AI Corp",
|
||||||
|
"role_title": "ML Engineer",
|
||||||
|
"job_description": "Machine learning position with Python",
|
||||||
|
"status": "draft"
|
||||||
|
}
|
||||||
|
headers = {"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
|
||||||
|
# Act
|
||||||
|
with patch('app.services.ai.claude_service.ClaudeService', return_value=mock_claude_service):
|
||||||
|
response = await async_client.post(
|
||||||
|
"/api/v1/applications/",
|
||||||
|
json=application_data,
|
||||||
|
headers=headers
|
||||||
|
)
|
||||||
|
|
||||||
|
# Assert
|
||||||
|
assert response.status_code == 201
|
||||||
|
data = response.json()
|
||||||
|
assert data["cover_letter"] == "Mock cover letter"
|
||||||
|
mock_claude_service.generate_cover_letter.assert_called_once()
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Test Fixtures
|
||||||
|
```python
|
||||||
|
# Good: Reusable test fixtures
|
||||||
|
import pytest
|
||||||
|
from typing import AsyncGenerator
|
||||||
|
from httpx import AsyncClient
|
||||||
|
from fastapi.testclient import TestClient
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
async def test_application(
|
||||||
|
test_db: AsyncSession,
|
||||||
|
test_user: User
|
||||||
|
) -> Application:
|
||||||
|
"""Create a test application."""
|
||||||
|
|
||||||
|
from app.crud.application import create_application
|
||||||
|
from app.schemas.application import ApplicationCreate
|
||||||
|
|
||||||
|
app_data = ApplicationCreate(
|
||||||
|
company_name="Test Company",
|
||||||
|
role_title="Test Role",
|
||||||
|
job_description="Test job description",
|
||||||
|
status="draft"
|
||||||
|
)
|
||||||
|
|
||||||
|
application = await create_application(test_db, app_data, test_user.id)
|
||||||
|
await test_db.commit()
|
||||||
|
|
||||||
|
return application
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
def mock_ai_services():
|
||||||
|
"""Mock all AI services."""
|
||||||
|
|
||||||
|
with patch('app.services.ai.claude_service.ClaudeService') as mock_claude, \
|
||||||
|
patch('app.services.ai.openai_service.OpenAIService') as mock_openai:
|
||||||
|
|
||||||
|
mock_claude.return_value.generate_cover_letter = AsyncMock(
|
||||||
|
return_value="Mock cover letter"
|
||||||
|
)
|
||||||
|
mock_openai.return_value.create_embedding = AsyncMock(
|
||||||
|
return_value=[0.1] * 1536
|
||||||
|
)
|
||||||
|
|
||||||
|
yield {
|
||||||
|
'claude': mock_claude.return_value,
|
||||||
|
'openai': mock_openai.return_value
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
## Documentation Standards
|
||||||
|
|
||||||
|
### 1. Docstring Format
|
||||||
|
```python
|
||||||
|
# Good: Comprehensive docstrings
|
||||||
|
def calculate_job_match_score(
|
||||||
|
user_skills: List[str],
|
||||||
|
job_requirements: List[str],
|
||||||
|
experience_years: int
|
||||||
|
) -> float:
|
||||||
|
"""
|
||||||
|
Calculate job match score based on skills and experience.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
user_skills: List of user's skills
|
||||||
|
job_requirements: List of job requirements
|
||||||
|
experience_years: Years of relevant experience
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Match score between 0.0 and 1.0
|
||||||
|
|
||||||
|
Raises:
|
||||||
|
ValueError: If experience_years is negative
|
||||||
|
|
||||||
|
Example:
|
||||||
|
>>> calculate_job_match_score(
|
||||||
|
... ["Python", "FastAPI"],
|
||||||
|
... ["Python", "Django"],
|
||||||
|
... 3
|
||||||
|
... )
|
||||||
|
0.75
|
||||||
|
"""
|
||||||
|
if experience_years < 0:
|
||||||
|
raise ValueError("Experience years cannot be negative")
|
||||||
|
|
||||||
|
# Implementation...
|
||||||
|
return 0.75
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. API Documentation
|
||||||
|
```python
|
||||||
|
# Good: Comprehensive API documentation
|
||||||
|
@router.post(
|
||||||
|
"/",
|
||||||
|
response_model=ApplicationResponse,
|
||||||
|
status_code=status.HTTP_201_CREATED,
|
||||||
|
summary="Create job application",
|
||||||
|
description="Create a new job application with optional AI-generated cover letter",
|
||||||
|
responses={
|
||||||
|
201: {"description": "Application created successfully"},
|
||||||
|
400: {"description": "Invalid application data"},
|
||||||
|
401: {"description": "Authentication required"},
|
||||||
|
422: {"description": "Validation error"},
|
||||||
|
500: {"description": "Internal server error"}
|
||||||
|
}
|
||||||
|
)
|
||||||
|
async def create_job_application(
|
||||||
|
application_data: ApplicationCreate = Body(
|
||||||
|
...,
|
||||||
|
example={
|
||||||
|
"company_name": "Google",
|
||||||
|
"role_title": "Senior Python Developer",
|
||||||
|
"job_description": "We are looking for an experienced Python developer...",
|
||||||
|
"status": "draft"
|
||||||
|
}
|
||||||
|
),
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db),
|
||||||
|
) -> ApplicationResponse:
|
||||||
|
"""Create a new job application."""
|
||||||
|
# Implementation...
|
||||||
|
```
|
||||||
|
|
||||||
|
## Performance Standards
|
||||||
|
|
||||||
|
### 1. Database Query Optimization
|
||||||
|
```python
|
||||||
|
# Good: Optimized database queries
|
||||||
|
async def get_applications_with_stats(
|
||||||
|
db: AsyncSession,
|
||||||
|
user_id: str
|
||||||
|
) -> dict:
|
||||||
|
"""Get applications with statistics in a single query."""
|
||||||
|
|
||||||
|
from sqlalchemy import func, case
|
||||||
|
|
||||||
|
query = select(
|
||||||
|
func.count(Application.id).label('total_applications'),
|
||||||
|
func.count(case((Application.status == 'applied', 1))).label('applied_count'),
|
||||||
|
func.count(case((Application.status == 'interview', 1))).label('interview_count'),
|
||||||
|
func.count(case((Application.status == 'offer', 1))).label('offer_count'),
|
||||||
|
func.avg(
|
||||||
|
case((Application.created_at.isnot(None),
|
||||||
|
func.extract('epoch', func.now() - Application.created_at)))
|
||||||
|
).label('avg_age_days')
|
||||||
|
).where(Application.user_id == user_id)
|
||||||
|
|
||||||
|
result = await db.execute(query)
|
||||||
|
row = result.first()
|
||||||
|
|
||||||
|
return {
|
||||||
|
'total_applications': row.total_applications or 0,
|
||||||
|
'applied_count': row.applied_count or 0,
|
||||||
|
'interview_count': row.interview_count or 0,
|
||||||
|
'offer_count': row.offer_count or 0,
|
||||||
|
'avg_age_days': round((row.avg_age_days or 0) / 86400, 1) # Convert to days
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Caching Strategies
|
||||||
|
```python
|
||||||
|
# Good: Implement caching for expensive operations
|
||||||
|
from functools import lru_cache
|
||||||
|
from typing import Dict, Any
|
||||||
|
import asyncio
|
||||||
|
|
||||||
|
@lru_cache(maxsize=128)
|
||||||
|
def get_job_keywords(job_description: str) -> List[str]:
|
||||||
|
"""Extract keywords from job description (cached)."""
|
||||||
|
# Expensive NLP processing here
|
||||||
|
return extract_keywords(job_description)
|
||||||
|
|
||||||
|
class CachedAIService:
|
||||||
|
"""AI service with caching."""
|
||||||
|
|
||||||
|
def __init__(self):
|
||||||
|
self._cache: Dict[str, Any] = {}
|
||||||
|
self._cache_ttl = 3600 # 1 hour
|
||||||
|
|
||||||
|
async def generate_cover_letter_cached(
|
||||||
|
self,
|
||||||
|
user_profile: dict,
|
||||||
|
job_description: str
|
||||||
|
) -> str:
|
||||||
|
"""Generate cover letter with caching."""
|
||||||
|
|
||||||
|
cache_key = f"{hash(str(user_profile))}_{hash(job_description)}"
|
||||||
|
|
||||||
|
if cache_key in self._cache:
|
||||||
|
cached_result, timestamp = self._cache[cache_key]
|
||||||
|
if time.time() - timestamp < self._cache_ttl:
|
||||||
|
return cached_result
|
||||||
|
|
||||||
|
# Generate new cover letter
|
||||||
|
result = await self.generate_cover_letter(user_profile, job_description)
|
||||||
|
|
||||||
|
# Cache result
|
||||||
|
self._cache[cache_key] = (result, time.time())
|
||||||
|
|
||||||
|
return result
|
||||||
|
```
|
||||||
|
|
||||||
|
These coding standards ensure that Job Forge maintains high code quality, consistency, and performance across all components of the application.
|
||||||
52
docs/future_features/README.md
Normal file
52
docs/future_features/README.md
Normal file
@@ -0,0 +1,52 @@
|
|||||||
|
# Future Features - Job Forge
|
||||||
|
|
||||||
|
## 📋 Feature Management Process
|
||||||
|
|
||||||
|
This folder captures feature ideas that emerge during development and testing. **No new features are implemented until the current planned scope is complete.**
|
||||||
|
|
||||||
|
## 🔄 Process
|
||||||
|
|
||||||
|
### 1. **Current Development Phase**
|
||||||
|
- Focus on planned features in `docs/jobforge_mvp_architecture.md`
|
||||||
|
- Complete all documented functionality first
|
||||||
|
- Test thoroughly before considering additions
|
||||||
|
|
||||||
|
### 2. **Feature Idea Capture**
|
||||||
|
- Any new ideas go into `ideas/` folder immediately
|
||||||
|
- Use the provided template for consistency
|
||||||
|
- No implementation discussion until current scope is complete
|
||||||
|
|
||||||
|
### 3. **Feature Review Process**
|
||||||
|
When current scope is complete:
|
||||||
|
- Review all captured ideas in `ideas/`
|
||||||
|
- Analyze impact, complexity, and value
|
||||||
|
- Move approved ideas to `approved/` with detailed planning
|
||||||
|
- Archive or reject ideas that don't fit the vision
|
||||||
|
|
||||||
|
### 4. **Implementation Planning**
|
||||||
|
For approved features:
|
||||||
|
- Full technical specification
|
||||||
|
- Update architecture documentation
|
||||||
|
- Plan integration with existing features
|
||||||
|
- Estimate development effort
|
||||||
|
|
||||||
|
## 📁 Folder Structure
|
||||||
|
|
||||||
|
```
|
||||||
|
future_features/
|
||||||
|
├── README.md # This file
|
||||||
|
├── TEMPLATE.md # Template for new feature ideas
|
||||||
|
├── ideas/ # Raw feature ideas (no implementation details)
|
||||||
|
├── under_review/ # Ideas being analyzed
|
||||||
|
├── approved/ # Approved features ready for planning
|
||||||
|
└── archived/ # Rejected or postponed ideas
|
||||||
|
```
|
||||||
|
|
||||||
|
## 🚨 Important Rules
|
||||||
|
|
||||||
|
1. **No scope creep** - stick to current plan first
|
||||||
|
2. **Capture everything** - don't lose good ideas, but don't implement them yet
|
||||||
|
3. **Proper analysis** - every idea needs careful evaluation before approval
|
||||||
|
4. **Documentation first** - approved features must be fully documented before implementation
|
||||||
|
|
||||||
|
This ensures we complete what we planned while preserving innovation and improvements for future iterations.
|
||||||
35
docs/future_features/TEMPLATE.md
Normal file
35
docs/future_features/TEMPLATE.md
Normal file
@@ -0,0 +1,35 @@
|
|||||||
|
# Feature Idea Template
|
||||||
|
|
||||||
|
**Date**: [YYYY-MM-DD]
|
||||||
|
**Source**: [Development/Testing/User Feedback/Discussion]
|
||||||
|
**Status**: Idea
|
||||||
|
|
||||||
|
## 📝 Feature Description
|
||||||
|
|
||||||
|
Brief description of the feature idea.
|
||||||
|
|
||||||
|
## 🎯 Problem/Opportunity
|
||||||
|
|
||||||
|
What problem does this solve or what opportunity does it address?
|
||||||
|
|
||||||
|
## 💡 Proposed Solution
|
||||||
|
|
||||||
|
High-level description of how this could work.
|
||||||
|
|
||||||
|
## 🔗 Integration Points
|
||||||
|
|
||||||
|
How would this integrate with existing Job Forge features?
|
||||||
|
|
||||||
|
## 📊 Potential Impact
|
||||||
|
|
||||||
|
- **User Value**: How does this benefit users?
|
||||||
|
- **Technical Complexity**: Initial assessment (Low/Medium/High)
|
||||||
|
- **Scope**: Does this fit current architecture or require major changes?
|
||||||
|
|
||||||
|
## 📋 Notes
|
||||||
|
|
||||||
|
Any additional thoughts, considerations, or context.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
**Next Steps**: [To be filled during review process]
|
||||||
0
docs/future_features/approved/.gitkeep
Normal file
0
docs/future_features/approved/.gitkeep
Normal file
0
docs/future_features/archived/.gitkeep
Normal file
0
docs/future_features/archived/.gitkeep
Normal file
1
docs/future_features/ideas/.gitkeep
Normal file
1
docs/future_features/ideas/.gitkeep
Normal file
@@ -0,0 +1 @@
|
|||||||
|
# Feature ideas captured during development go here
|
||||||
0
docs/future_features/under_review/.gitkeep
Normal file
0
docs/future_features/under_review/.gitkeep
Normal file
601
docs/infrastructure/docker_setup.md
Normal file
601
docs/infrastructure/docker_setup.md
Normal file
@@ -0,0 +1,601 @@
|
|||||||
|
# Docker Setup Guide - Job Forge
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
|
||||||
|
Job Forge uses Docker for containerization to ensure consistent environments across development, testing, and production. This guide covers Docker configuration, best practices, and troubleshooting.
|
||||||
|
|
||||||
|
## Docker Architecture
|
||||||
|
|
||||||
|
### Container Structure
|
||||||
|
```
|
||||||
|
Job Forge Docker Setup
|
||||||
|
├── jobforge-app # FastAPI + Dash application
|
||||||
|
├── postgres # PostgreSQL 16 with pgvector
|
||||||
|
└── nginx # Reverse proxy and SSL termination
|
||||||
|
```
|
||||||
|
|
||||||
|
### Network Configuration
|
||||||
|
- **Internal Network**: Docker compose creates isolated network
|
||||||
|
- **External Access**: Only nginx container exposes ports 80/443
|
||||||
|
- **Service Discovery**: Containers communicate via service names
|
||||||
|
|
||||||
|
## Development Setup
|
||||||
|
|
||||||
|
### 1. Prerequisites
|
||||||
|
```bash
|
||||||
|
# Install Docker and Docker Compose
|
||||||
|
curl -fsSL https://get.docker.com -o get-docker.sh
|
||||||
|
sudo sh get-docker.sh
|
||||||
|
|
||||||
|
# Install Docker Compose
|
||||||
|
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
|
||||||
|
sudo chmod +x /usr/local/bin/docker-compose
|
||||||
|
|
||||||
|
# Add user to docker group
|
||||||
|
sudo usermod -aG docker $USER
|
||||||
|
newgrp docker
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Environment Configuration
|
||||||
|
```bash
|
||||||
|
# Copy environment template
|
||||||
|
cp .env.example .env
|
||||||
|
|
||||||
|
# Edit for development
|
||||||
|
nano .env
|
||||||
|
```
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Development Environment Variables
|
||||||
|
DATABASE_URL="postgresql://jobforge:jobforge123@postgres:5432/jobforge"
|
||||||
|
CLAUDE_API_KEY="your-claude-api-key"
|
||||||
|
OPENAI_API_KEY="your-openai-api-key"
|
||||||
|
JWT_SECRET="development-secret-key-change-in-production"
|
||||||
|
DEBUG=true
|
||||||
|
LOG_LEVEL="DEBUG"
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. Docker Compose Configuration
|
||||||
|
|
||||||
|
#### docker-compose.yml
|
||||||
|
```yaml
|
||||||
|
version: '3.8'
|
||||||
|
|
||||||
|
services:
|
||||||
|
# FastAPI + Dash Application
|
||||||
|
jobforge-app:
|
||||||
|
build:
|
||||||
|
context: .
|
||||||
|
dockerfile: Dockerfile
|
||||||
|
container_name: jobforge-app
|
||||||
|
ports:
|
||||||
|
- "8000:8000"
|
||||||
|
environment:
|
||||||
|
- DATABASE_URL=postgresql://jobforge:jobforge123@postgres:5432/jobforge
|
||||||
|
- CLAUDE_API_KEY=${CLAUDE_API_KEY}
|
||||||
|
- OPENAI_API_KEY=${OPENAI_API_KEY}
|
||||||
|
- JWT_SECRET=${JWT_SECRET}
|
||||||
|
- DEBUG=${DEBUG:-false}
|
||||||
|
- LOG_LEVEL=${LOG_LEVEL:-INFO}
|
||||||
|
depends_on:
|
||||||
|
postgres:
|
||||||
|
condition: service_healthy
|
||||||
|
volumes:
|
||||||
|
# Development: mount source for hot reload
|
||||||
|
- ./app:/app/app:ro
|
||||||
|
- ./uploads:/app/uploads
|
||||||
|
- ./logs:/var/log/jobforge
|
||||||
|
networks:
|
||||||
|
- jobforge-network
|
||||||
|
restart: unless-stopped
|
||||||
|
healthcheck:
|
||||||
|
test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
|
||||||
|
interval: 30s
|
||||||
|
timeout: 10s
|
||||||
|
retries: 3
|
||||||
|
start_period: 40s
|
||||||
|
|
||||||
|
# PostgreSQL with pgvector
|
||||||
|
postgres:
|
||||||
|
image: pgvector/pgvector:pg16
|
||||||
|
container_name: jobforge-postgres
|
||||||
|
environment:
|
||||||
|
- POSTGRES_DB=jobforge
|
||||||
|
- POSTGRES_USER=jobforge
|
||||||
|
- POSTGRES_PASSWORD=jobforge123
|
||||||
|
- POSTGRES_INITDB_ARGS="--encoding=UTF8 --lc-collate=en_US.UTF-8 --lc-ctype=en_US.UTF-8"
|
||||||
|
ports:
|
||||||
|
- "5432:5432" # Expose for development access
|
||||||
|
volumes:
|
||||||
|
- postgres_data:/var/lib/postgresql/data
|
||||||
|
- ./init_db.sql:/docker-entrypoint-initdb.d/init_db.sql:ro
|
||||||
|
- ./backups:/backups
|
||||||
|
networks:
|
||||||
|
- jobforge-network
|
||||||
|
restart: unless-stopped
|
||||||
|
healthcheck:
|
||||||
|
test: ["CMD-SHELL", "pg_isready -U jobforge -d jobforge"]
|
||||||
|
interval: 10s
|
||||||
|
timeout: 5s
|
||||||
|
retries: 5
|
||||||
|
start_period: 30s
|
||||||
|
|
||||||
|
# Nginx Reverse Proxy
|
||||||
|
nginx:
|
||||||
|
image: nginx:alpine
|
||||||
|
container_name: jobforge-nginx
|
||||||
|
ports:
|
||||||
|
- "80:80"
|
||||||
|
- "443:443"
|
||||||
|
volumes:
|
||||||
|
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
|
||||||
|
- ./nginx/conf.d:/etc/nginx/conf.d:ro
|
||||||
|
- ./ssl:/etc/nginx/ssl:ro
|
||||||
|
- ./static:/var/www/static:ro
|
||||||
|
depends_on:
|
||||||
|
- jobforge-app
|
||||||
|
networks:
|
||||||
|
- jobforge-network
|
||||||
|
restart: unless-stopped
|
||||||
|
healthcheck:
|
||||||
|
test: ["CMD", "nginx", "-t"]
|
||||||
|
interval: 30s
|
||||||
|
timeout: 10s
|
||||||
|
retries: 3
|
||||||
|
|
||||||
|
networks:
|
||||||
|
jobforge-network:
|
||||||
|
driver: bridge
|
||||||
|
ipam:
|
||||||
|
config:
|
||||||
|
- subnet: 172.20.0.0/16
|
||||||
|
|
||||||
|
volumes:
|
||||||
|
postgres_data:
|
||||||
|
driver: local
|
||||||
|
```
|
||||||
|
|
||||||
|
#### docker-compose.override.yml (Development)
|
||||||
|
```yaml
|
||||||
|
# Development overrides
|
||||||
|
version: '3.8'
|
||||||
|
|
||||||
|
services:
|
||||||
|
jobforge-app:
|
||||||
|
build:
|
||||||
|
target: development # Multi-stage build target
|
||||||
|
environment:
|
||||||
|
- DEBUG=true
|
||||||
|
- LOG_LEVEL=DEBUG
|
||||||
|
- RELOAD=true
|
||||||
|
volumes:
|
||||||
|
# Enable hot reload in development
|
||||||
|
- ./app:/app/app
|
||||||
|
- ./tests:/app/tests
|
||||||
|
command: ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]
|
||||||
|
|
||||||
|
postgres:
|
||||||
|
environment:
|
||||||
|
- POSTGRES_PASSWORD=jobforge123 # Simpler password for dev
|
||||||
|
ports:
|
||||||
|
- "5432:5432" # Expose port for development tools
|
||||||
|
```
|
||||||
|
|
||||||
|
### 4. Dockerfile Configuration
|
||||||
|
|
||||||
|
#### Multi-stage Dockerfile
|
||||||
|
```dockerfile
|
||||||
|
# Multi-stage Dockerfile for Job Forge
|
||||||
|
FROM python:3.12-slim as base
|
||||||
|
|
||||||
|
# Set environment variables
|
||||||
|
ENV PYTHONDONTWRITEBYTECODE=1 \
|
||||||
|
PYTHONUNBUFFERED=1 \
|
||||||
|
PYTHONPATH="/app" \
|
||||||
|
PIP_NO_CACHE_DIR=1 \
|
||||||
|
PIP_DISABLE_PIP_VERSION_CHECK=1
|
||||||
|
|
||||||
|
# Install system dependencies
|
||||||
|
RUN apt-get update && apt-get install -y \
|
||||||
|
curl \
|
||||||
|
postgresql-client \
|
||||||
|
build-essential \
|
||||||
|
&& rm -rf /var/lib/apt/lists/*
|
||||||
|
|
||||||
|
WORKDIR /app
|
||||||
|
|
||||||
|
# Copy requirements and install Python dependencies
|
||||||
|
COPY requirements.txt .
|
||||||
|
RUN pip install --no-cache-dir -r requirements.txt
|
||||||
|
|
||||||
|
# Development target
|
||||||
|
FROM base as development
|
||||||
|
|
||||||
|
# Install development dependencies
|
||||||
|
COPY requirements-dev.txt .
|
||||||
|
RUN pip install --no-cache-dir -r requirements-dev.txt
|
||||||
|
|
||||||
|
# Copy source code
|
||||||
|
COPY . .
|
||||||
|
|
||||||
|
# Create non-root user
|
||||||
|
RUN adduser --disabled-password --gecos '' jobforge && \
|
||||||
|
chown -R jobforge:jobforge /app
|
||||||
|
USER jobforge
|
||||||
|
|
||||||
|
# Health check
|
||||||
|
HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 \
|
||||||
|
CMD curl -f http://localhost:8000/health || exit 1
|
||||||
|
|
||||||
|
EXPOSE 8000
|
||||||
|
|
||||||
|
# Development command with hot reload
|
||||||
|
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]
|
||||||
|
|
||||||
|
# Production target
|
||||||
|
FROM base as production
|
||||||
|
|
||||||
|
# Copy source code
|
||||||
|
COPY app/ ./app/
|
||||||
|
COPY alembic/ ./alembic/
|
||||||
|
COPY alembic.ini .
|
||||||
|
|
||||||
|
# Create non-root user
|
||||||
|
RUN adduser --disabled-password --gecos '' jobforge && \
|
||||||
|
chown -R jobforge:jobforge /app && \
|
||||||
|
mkdir -p /var/log/jobforge && \
|
||||||
|
chown jobforge:jobforge /var/log/jobforge
|
||||||
|
|
||||||
|
USER jobforge
|
||||||
|
|
||||||
|
# Health check
|
||||||
|
HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 \
|
||||||
|
CMD curl -f http://localhost:8000/health || exit 1
|
||||||
|
|
||||||
|
EXPOSE 8000
|
||||||
|
|
||||||
|
# Production command
|
||||||
|
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "2"]
|
||||||
|
```
|
||||||
|
|
||||||
|
#### .dockerignore
|
||||||
|
```
|
||||||
|
# .dockerignore for Job Forge
|
||||||
|
__pycache__/
|
||||||
|
*.pyc
|
||||||
|
*.pyo
|
||||||
|
*.pyd
|
||||||
|
.Python
|
||||||
|
env/
|
||||||
|
venv/
|
||||||
|
.venv/
|
||||||
|
pip-log.txt
|
||||||
|
pip-delete-this-directory.txt
|
||||||
|
.git/
|
||||||
|
.gitignore
|
||||||
|
README.md
|
||||||
|
.env
|
||||||
|
.env.*
|
||||||
|
docker-compose*.yml
|
||||||
|
Dockerfile*
|
||||||
|
.pytest_cache/
|
||||||
|
htmlcov/
|
||||||
|
.coverage
|
||||||
|
*.log
|
||||||
|
logs/
|
||||||
|
backups/
|
||||||
|
uploads/
|
||||||
|
!uploads/.gitkeep
|
||||||
|
.vscode/
|
||||||
|
.idea/
|
||||||
|
*.swp
|
||||||
|
*.swo
|
||||||
|
*~
|
||||||
|
```
|
||||||
|
|
||||||
|
## Production Setup
|
||||||
|
|
||||||
|
### 1. Production Docker Compose
|
||||||
|
```yaml
|
||||||
|
# docker-compose.prod.yml
|
||||||
|
version: '3.8'
|
||||||
|
|
||||||
|
services:
|
||||||
|
jobforge-app:
|
||||||
|
build:
|
||||||
|
context: .
|
||||||
|
dockerfile: Dockerfile
|
||||||
|
target: production
|
||||||
|
environment:
|
||||||
|
- DATABASE_URL=${DATABASE_URL}
|
||||||
|
- CLAUDE_API_KEY=${CLAUDE_API_KEY}
|
||||||
|
- OPENAI_API_KEY=${OPENAI_API_KEY}
|
||||||
|
- JWT_SECRET=${JWT_SECRET}
|
||||||
|
- DEBUG=false
|
||||||
|
- LOG_LEVEL=INFO
|
||||||
|
- WORKERS=4
|
||||||
|
volumes:
|
||||||
|
- ./uploads:/app/uploads
|
||||||
|
- ./logs:/var/log/jobforge
|
||||||
|
deploy:
|
||||||
|
resources:
|
||||||
|
limits:
|
||||||
|
cpus: '2.0'
|
||||||
|
memory: 2G
|
||||||
|
reservations:
|
||||||
|
cpus: '0.5'
|
||||||
|
memory: 512M
|
||||||
|
restart_policy:
|
||||||
|
condition: on-failure
|
||||||
|
delay: 5s
|
||||||
|
max_attempts: 3
|
||||||
|
|
||||||
|
postgres:
|
||||||
|
image: pgvector/pgvector:pg16
|
||||||
|
environment:
|
||||||
|
- POSTGRES_DB=${DB_NAME}
|
||||||
|
- POSTGRES_USER=${DB_USER}
|
||||||
|
- POSTGRES_PASSWORD=${DB_PASSWORD}
|
||||||
|
volumes:
|
||||||
|
- postgres_data:/var/lib/postgresql/data
|
||||||
|
- ./backups:/backups:ro
|
||||||
|
deploy:
|
||||||
|
resources:
|
||||||
|
limits:
|
||||||
|
cpus: '1.0'
|
||||||
|
memory: 1G
|
||||||
|
reservations:
|
||||||
|
cpus: '0.25'
|
||||||
|
memory: 256M
|
||||||
|
# Don't expose port in production
|
||||||
|
# ports:
|
||||||
|
# - "5432:5432"
|
||||||
|
|
||||||
|
nginx:
|
||||||
|
image: nginx:alpine
|
||||||
|
ports:
|
||||||
|
- "80:80"
|
||||||
|
- "443:443"
|
||||||
|
volumes:
|
||||||
|
- ./nginx/prod.conf:/etc/nginx/nginx.conf:ro
|
||||||
|
- ./ssl:/etc/nginx/ssl:ro
|
||||||
|
deploy:
|
||||||
|
resources:
|
||||||
|
limits:
|
||||||
|
cpus: '0.5'
|
||||||
|
memory: 128M
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Production Commands
|
||||||
|
```bash
|
||||||
|
# Build production images
|
||||||
|
docker-compose -f docker-compose.prod.yml build
|
||||||
|
|
||||||
|
# Start production services
|
||||||
|
docker-compose -f docker-compose.prod.yml up -d
|
||||||
|
|
||||||
|
# Scale application containers
|
||||||
|
docker-compose -f docker-compose.prod.yml up -d --scale jobforge-app=3
|
||||||
|
```
|
||||||
|
|
||||||
|
## Container Management
|
||||||
|
|
||||||
|
### 1. Basic Operations
|
||||||
|
```bash
|
||||||
|
# Start all services
|
||||||
|
docker-compose up -d
|
||||||
|
|
||||||
|
# Stop all services
|
||||||
|
docker-compose down
|
||||||
|
|
||||||
|
# Restart specific service
|
||||||
|
docker-compose restart jobforge-app
|
||||||
|
|
||||||
|
# View logs
|
||||||
|
docker-compose logs -f jobforge-app
|
||||||
|
docker-compose logs --tail=100 postgres
|
||||||
|
|
||||||
|
# Execute commands in container
|
||||||
|
docker-compose exec jobforge-app bash
|
||||||
|
docker-compose exec postgres psql -U jobforge -d jobforge
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Database Operations
|
||||||
|
```bash
|
||||||
|
# Run database migrations
|
||||||
|
docker-compose exec jobforge-app alembic upgrade head
|
||||||
|
|
||||||
|
# Create new migration
|
||||||
|
docker-compose exec jobforge-app alembic revision --autogenerate -m "description"
|
||||||
|
|
||||||
|
# Database backup
|
||||||
|
docker-compose exec postgres pg_dump -U jobforge jobforge > backup.sql
|
||||||
|
|
||||||
|
# Database restore
|
||||||
|
docker-compose exec -T postgres psql -U jobforge -d jobforge < backup.sql
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. Application Management
|
||||||
|
```bash
|
||||||
|
# Update application code (development)
|
||||||
|
docker-compose restart jobforge-app
|
||||||
|
|
||||||
|
# Rebuild containers
|
||||||
|
docker-compose build
|
||||||
|
docker-compose up -d
|
||||||
|
|
||||||
|
# View container resource usage
|
||||||
|
docker stats
|
||||||
|
|
||||||
|
# Clean up unused resources
|
||||||
|
docker system prune -a -f
|
||||||
|
```
|
||||||
|
|
||||||
|
## Monitoring and Debugging
|
||||||
|
|
||||||
|
### 1. Health Checks
|
||||||
|
```bash
|
||||||
|
# Check container health
|
||||||
|
docker-compose ps
|
||||||
|
|
||||||
|
# Manual health check
|
||||||
|
curl http://localhost:8000/health
|
||||||
|
|
||||||
|
# Check individual services
|
||||||
|
docker-compose exec jobforge-app python -c "import requests; print(requests.get('http://localhost:8000/health').json())"
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Log Management
|
||||||
|
```bash
|
||||||
|
# View application logs
|
||||||
|
docker-compose logs -f jobforge-app
|
||||||
|
|
||||||
|
# View specific timeframe
|
||||||
|
docker-compose logs --since="2024-01-01T00:00:00" jobforge-app
|
||||||
|
|
||||||
|
# Export logs
|
||||||
|
docker-compose logs jobforge-app > app.log
|
||||||
|
|
||||||
|
# Follow logs with timestamps
|
||||||
|
docker-compose logs -f -t jobforge-app
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. Performance Monitoring
|
||||||
|
```bash
|
||||||
|
# Container resource usage
|
||||||
|
docker stats --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.NetIO}}\t{{.BlockIO}}"
|
||||||
|
|
||||||
|
# Container processes
|
||||||
|
docker-compose exec jobforge-app ps aux
|
||||||
|
|
||||||
|
# Database performance
|
||||||
|
docker-compose exec postgres psql -U jobforge -d jobforge -c "SELECT * FROM pg_stat_activity;"
|
||||||
|
```
|
||||||
|
|
||||||
|
## Troubleshooting
|
||||||
|
|
||||||
|
### 1. Common Issues
|
||||||
|
|
||||||
|
#### Container Won't Start
|
||||||
|
```bash
|
||||||
|
# Check logs for errors
|
||||||
|
docker-compose logs jobforge-app
|
||||||
|
|
||||||
|
# Check if ports are in use
|
||||||
|
sudo netstat -tulpn | grep :8000
|
||||||
|
|
||||||
|
# Check Docker daemon
|
||||||
|
sudo systemctl status docker
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Database Connection Issues
|
||||||
|
```bash
|
||||||
|
# Test database connectivity
|
||||||
|
docker-compose exec jobforge-app python -c "
|
||||||
|
import asyncio
|
||||||
|
import asyncpg
|
||||||
|
asyncio.run(asyncpg.connect('postgresql://jobforge:jobforge123@postgres:5432/jobforge'))
|
||||||
|
print('Database connection successful')
|
||||||
|
"
|
||||||
|
|
||||||
|
# Check database is ready
|
||||||
|
docker-compose exec postgres pg_isready -U jobforge
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Volume Mount Issues
|
||||||
|
```bash
|
||||||
|
# Check volume permissions
|
||||||
|
ls -la uploads/ logs/
|
||||||
|
|
||||||
|
# Fix permissions
|
||||||
|
sudo chown -R $USER:$USER uploads logs
|
||||||
|
chmod 755 uploads logs
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Debug Mode
|
||||||
|
```bash
|
||||||
|
# Run container in debug mode
|
||||||
|
docker-compose run --rm jobforge-app bash
|
||||||
|
|
||||||
|
# Run with environment override
|
||||||
|
docker-compose run -e DEBUG=true jobforge-app
|
||||||
|
|
||||||
|
# Attach to running container
|
||||||
|
docker-compose exec jobforge-app bash
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. Network Issues
|
||||||
|
```bash
|
||||||
|
# Check network connectivity
|
||||||
|
docker network ls
|
||||||
|
docker network inspect jobforge_jobforge-network
|
||||||
|
|
||||||
|
# Test inter-container communication
|
||||||
|
docker-compose exec jobforge-app ping postgres
|
||||||
|
docker-compose exec nginx ping jobforge-app
|
||||||
|
```
|
||||||
|
|
||||||
|
## Optimization
|
||||||
|
|
||||||
|
### 1. Image Optimization
|
||||||
|
```dockerfile
|
||||||
|
# Use multi-stage builds to reduce image size
|
||||||
|
# Use .dockerignore to exclude unnecessary files
|
||||||
|
# Use specific base image versions
|
||||||
|
# Combine RUN commands to reduce layers
|
||||||
|
# Clean up package caches
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Resource Limits
|
||||||
|
```yaml
|
||||||
|
# Set appropriate resource limits
|
||||||
|
deploy:
|
||||||
|
resources:
|
||||||
|
limits:
|
||||||
|
cpus: '1.0'
|
||||||
|
memory: 1G
|
||||||
|
reservations:
|
||||||
|
cpus: '0.5'
|
||||||
|
memory: 512M
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. Volume Optimization
|
||||||
|
```bash
|
||||||
|
# Use bind mounts for development
|
||||||
|
# Use named volumes for persistent data
|
||||||
|
# Regular cleanup of unused volumes
|
||||||
|
docker volume prune
|
||||||
|
```
|
||||||
|
|
||||||
|
## Security Best Practices
|
||||||
|
|
||||||
|
### 1. Container Security
|
||||||
|
```dockerfile
|
||||||
|
# Run as non-root user
|
||||||
|
USER jobforge
|
||||||
|
|
||||||
|
# Use specific image versions
|
||||||
|
FROM python:3.12-slim
|
||||||
|
|
||||||
|
# Don't expose unnecessary ports
|
||||||
|
# Use secrets for sensitive data
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Network Security
|
||||||
|
```yaml
|
||||||
|
# Use custom networks
|
||||||
|
networks:
|
||||||
|
jobforge-network:
|
||||||
|
driver: bridge
|
||||||
|
internal: true # For internal services
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. Environment Security
|
||||||
|
```bash
|
||||||
|
# Use .env files for secrets
|
||||||
|
# Don't commit secrets to version control
|
||||||
|
# Use Docker secrets in production
|
||||||
|
# Regular security updates
|
||||||
|
```
|
||||||
|
|
||||||
|
This Docker setup provides a robust, scalable, and maintainable containerization solution for Job Forge, suitable for both development and production environments.
|
||||||
530
docs/infrastructure/server_deployment.md
Normal file
530
docs/infrastructure/server_deployment.md
Normal file
@@ -0,0 +1,530 @@
|
|||||||
|
# Server Deployment Guide - Job Forge
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
|
||||||
|
This guide covers deploying Job Forge to your own server for prototype development and testing. The deployment uses Docker containers for easy management and isolation.
|
||||||
|
|
||||||
|
## Prerequisites
|
||||||
|
|
||||||
|
### Server Requirements
|
||||||
|
- **OS**: Ubuntu 20.04+ or CentOS 8+
|
||||||
|
- **RAM**: Minimum 2GB, recommended 4GB
|
||||||
|
- **Storage**: Minimum 20GB available disk space
|
||||||
|
- **CPU**: 2 cores recommended
|
||||||
|
- **Network**: Public IP address and domain name (optional)
|
||||||
|
|
||||||
|
### Required Software
|
||||||
|
- Docker 20.10+
|
||||||
|
- Docker Compose 2.0+
|
||||||
|
- Git
|
||||||
|
- Nginx (for reverse proxy)
|
||||||
|
- Certbot (for SSL certificates)
|
||||||
|
|
||||||
|
## Initial Server Setup
|
||||||
|
|
||||||
|
### 1. Update System
|
||||||
|
```bash
|
||||||
|
# Ubuntu/Debian
|
||||||
|
sudo apt update && sudo apt upgrade -y
|
||||||
|
|
||||||
|
# CentOS/RHEL
|
||||||
|
sudo yum update -y
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Install Docker
|
||||||
|
```bash
|
||||||
|
# Ubuntu/Debian
|
||||||
|
curl -fsSL https://get.docker.com -o get-docker.sh
|
||||||
|
sudo sh get-docker.sh
|
||||||
|
sudo usermod -aG docker $USER
|
||||||
|
|
||||||
|
# CentOS/RHEL
|
||||||
|
sudo yum install -y docker
|
||||||
|
sudo systemctl start docker
|
||||||
|
sudo systemctl enable docker
|
||||||
|
sudo usermod -aG docker $USER
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. Install Docker Compose
|
||||||
|
```bash
|
||||||
|
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
|
||||||
|
sudo chmod +x /usr/local/bin/docker-compose
|
||||||
|
```
|
||||||
|
|
||||||
|
### 4. Install Additional Tools
|
||||||
|
```bash
|
||||||
|
# Ubuntu/Debian
|
||||||
|
sudo apt install -y git nginx certbot python3-certbot-nginx
|
||||||
|
|
||||||
|
# CentOS/RHEL
|
||||||
|
sudo yum install -y git nginx certbot python3-certbot-nginx
|
||||||
|
```
|
||||||
|
|
||||||
|
## Application Deployment
|
||||||
|
|
||||||
|
### 1. Clone Repository
|
||||||
|
```bash
|
||||||
|
cd /opt
|
||||||
|
sudo git clone https://github.com/yourusername/job-forge.git
|
||||||
|
sudo chown -R $USER:$USER job-forge
|
||||||
|
cd job-forge
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Environment Configuration
|
||||||
|
```bash
|
||||||
|
# Copy environment template
|
||||||
|
cp .env.example .env
|
||||||
|
|
||||||
|
# Edit environment variables
|
||||||
|
nano .env
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Required Environment Variables
|
||||||
|
```bash
|
||||||
|
# Database Configuration
|
||||||
|
DATABASE_URL="postgresql://jobforge:CHANGE_THIS_PASSWORD@postgres:5432/jobforge"
|
||||||
|
DATABASE_POOL_SIZE=10
|
||||||
|
|
||||||
|
# AI Service API Keys (obtain from providers)
|
||||||
|
CLAUDE_API_KEY="your-claude-api-key-here"
|
||||||
|
OPENAI_API_KEY="your-openai-api-key-here"
|
||||||
|
|
||||||
|
# Security Settings
|
||||||
|
JWT_SECRET="your-super-secret-jwt-key-minimum-32-characters"
|
||||||
|
JWT_ALGORITHM="HS256"
|
||||||
|
JWT_EXPIRE_MINUTES=1440
|
||||||
|
|
||||||
|
# Application Settings
|
||||||
|
APP_NAME="Job Forge"
|
||||||
|
DEBUG=false
|
||||||
|
LOG_LEVEL="INFO"
|
||||||
|
|
||||||
|
# Server Configuration
|
||||||
|
SERVER_HOST="0.0.0.0"
|
||||||
|
SERVER_PORT=8000
|
||||||
|
WORKERS=2
|
||||||
|
|
||||||
|
# Security
|
||||||
|
ALLOWED_HOSTS=["yourdomain.com", "www.yourdomain.com", "localhost"]
|
||||||
|
CORS_ORIGINS=["https://yourdomain.com", "https://www.yourdomain.com"]
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. Create Required Directories
|
||||||
|
```bash
|
||||||
|
mkdir -p uploads backups logs ssl
|
||||||
|
chmod 755 uploads backups logs
|
||||||
|
chmod 700 ssl
|
||||||
|
```
|
||||||
|
|
||||||
|
### 4. Build and Start Services
|
||||||
|
```bash
|
||||||
|
# Build Docker images
|
||||||
|
docker-compose build
|
||||||
|
|
||||||
|
# Start services in background
|
||||||
|
docker-compose up -d
|
||||||
|
|
||||||
|
# Check status
|
||||||
|
docker-compose ps
|
||||||
|
```
|
||||||
|
|
||||||
|
### 5. Initialize Database
|
||||||
|
```bash
|
||||||
|
# Run database migrations
|
||||||
|
docker-compose exec jobforge-app alembic upgrade head
|
||||||
|
|
||||||
|
# Verify database setup
|
||||||
|
docker-compose exec postgres psql -U jobforge -d jobforge -c "\dt"
|
||||||
|
```
|
||||||
|
|
||||||
|
### 6. Verify Application
|
||||||
|
```bash
|
||||||
|
# Check application health
|
||||||
|
curl http://localhost:8000/health
|
||||||
|
|
||||||
|
# Check logs
|
||||||
|
docker-compose logs jobforge-app
|
||||||
|
```
|
||||||
|
|
||||||
|
## Nginx Configuration
|
||||||
|
|
||||||
|
### 1. Create Nginx Configuration
|
||||||
|
```bash
|
||||||
|
sudo nano /etc/nginx/sites-available/jobforge
|
||||||
|
```
|
||||||
|
|
||||||
|
```nginx
|
||||||
|
server {
|
||||||
|
listen 80;
|
||||||
|
server_name yourdomain.com www.yourdomain.com;
|
||||||
|
|
||||||
|
# Redirect to HTTPS
|
||||||
|
return 301 https://$server_name$request_uri;
|
||||||
|
}
|
||||||
|
|
||||||
|
server {
|
||||||
|
listen 443 ssl http2;
|
||||||
|
server_name yourdomain.com www.yourdomain.com;
|
||||||
|
|
||||||
|
# SSL Configuration (will be added by certbot)
|
||||||
|
ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
|
||||||
|
ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
|
||||||
|
|
||||||
|
# Security headers
|
||||||
|
add_header X-Frame-Options "SAMEORIGIN" always;
|
||||||
|
add_header X-XSS-Protection "1; mode=block" always;
|
||||||
|
add_header X-Content-Type-Options "nosniff" always;
|
||||||
|
add_header Referrer-Policy "no-referrer-when-downgrade" always;
|
||||||
|
add_header Content-Security-Policy "default-src 'self' http: https: data: blob: 'unsafe-inline'" always;
|
||||||
|
|
||||||
|
# File upload size limit
|
||||||
|
client_max_body_size 10M;
|
||||||
|
|
||||||
|
# Main application
|
||||||
|
location / {
|
||||||
|
proxy_pass http://localhost:8000;
|
||||||
|
proxy_set_header Host $host;
|
||||||
|
proxy_set_header X-Real-IP $remote_addr;
|
||||||
|
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||||
|
proxy_set_header X-Forwarded-Proto $scheme;
|
||||||
|
proxy_redirect off;
|
||||||
|
|
||||||
|
# Timeout settings for AI operations
|
||||||
|
proxy_connect_timeout 60s;
|
||||||
|
proxy_send_timeout 60s;
|
||||||
|
proxy_read_timeout 120s;
|
||||||
|
}
|
||||||
|
|
||||||
|
# Health check endpoint
|
||||||
|
location /health {
|
||||||
|
proxy_pass http://localhost:8000/health;
|
||||||
|
access_log off;
|
||||||
|
}
|
||||||
|
|
||||||
|
# Static files (if any)
|
||||||
|
location /static/ {
|
||||||
|
alias /opt/job-forge/static/;
|
||||||
|
expires 30d;
|
||||||
|
add_header Cache-Control "public, immutable";
|
||||||
|
}
|
||||||
|
|
||||||
|
# API documentation
|
||||||
|
location /docs {
|
||||||
|
proxy_pass http://localhost:8000/docs;
|
||||||
|
}
|
||||||
|
|
||||||
|
# Deny access to sensitive files
|
||||||
|
location ~ /\. {
|
||||||
|
deny all;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Enable Site and Test Configuration
|
||||||
|
```bash
|
||||||
|
# Enable site
|
||||||
|
sudo ln -s /etc/nginx/sites-available/jobforge /etc/nginx/sites-enabled/
|
||||||
|
|
||||||
|
# Test configuration
|
||||||
|
sudo nginx -t
|
||||||
|
|
||||||
|
# Restart nginx
|
||||||
|
sudo systemctl restart nginx
|
||||||
|
```
|
||||||
|
|
||||||
|
## SSL Certificate Setup
|
||||||
|
|
||||||
|
### 1. Obtain SSL Certificate
|
||||||
|
```bash
|
||||||
|
# Using Let's Encrypt Certbot
|
||||||
|
sudo certbot --nginx -d yourdomain.com -d www.yourdomain.com
|
||||||
|
|
||||||
|
# Follow prompts to configure SSL
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Configure Auto-Renewal
|
||||||
|
```bash
|
||||||
|
# Test renewal
|
||||||
|
sudo certbot renew --dry-run
|
||||||
|
|
||||||
|
# Add cron job for auto-renewal
|
||||||
|
sudo crontab -e
|
||||||
|
|
||||||
|
# Add this line:
|
||||||
|
0 12 * * * /usr/bin/certbot renew --quiet
|
||||||
|
```
|
||||||
|
|
||||||
|
## Firewall Configuration
|
||||||
|
|
||||||
|
### 1. Configure UFW (Ubuntu)
|
||||||
|
```bash
|
||||||
|
sudo ufw allow ssh
|
||||||
|
sudo ufw allow 'Nginx Full'
|
||||||
|
sudo ufw --force enable
|
||||||
|
sudo ufw status
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Configure Firewalld (CentOS)
|
||||||
|
```bash
|
||||||
|
sudo firewall-cmd --permanent --add-service=ssh
|
||||||
|
sudo firewall-cmd --permanent --add-service=http
|
||||||
|
sudo firewall-cmd --permanent --add-service=https
|
||||||
|
sudo firewall-cmd --reload
|
||||||
|
```
|
||||||
|
|
||||||
|
## Monitoring and Maintenance
|
||||||
|
|
||||||
|
### 1. Log Management
|
||||||
|
```bash
|
||||||
|
# View application logs
|
||||||
|
docker-compose logs -f jobforge-app
|
||||||
|
|
||||||
|
# View nginx logs
|
||||||
|
sudo tail -f /var/log/nginx/access.log
|
||||||
|
sudo tail -f /var/log/nginx/error.log
|
||||||
|
|
||||||
|
# Set up log rotation
|
||||||
|
sudo nano /etc/logrotate.d/jobforge
|
||||||
|
```
|
||||||
|
|
||||||
|
```
|
||||||
|
/opt/job-forge/logs/*.log {
|
||||||
|
daily
|
||||||
|
missingok
|
||||||
|
rotate 30
|
||||||
|
compress
|
||||||
|
delaycompress
|
||||||
|
notifempty
|
||||||
|
copytruncate
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Backup Configuration
|
||||||
|
```bash
|
||||||
|
# Create backup script
|
||||||
|
sudo nano /opt/job-forge/backup.sh
|
||||||
|
```
|
||||||
|
|
||||||
|
```bash
|
||||||
|
#!/bin/bash
|
||||||
|
# Job Forge backup script
|
||||||
|
|
||||||
|
BACKUP_DIR="/opt/backups/jobforge"
|
||||||
|
DATE=$(date +%Y%m%d_%H%M%S)
|
||||||
|
|
||||||
|
# Create backup directory
|
||||||
|
mkdir -p "$BACKUP_DIR"
|
||||||
|
|
||||||
|
echo "Starting Job Forge backup - $DATE"
|
||||||
|
|
||||||
|
# Database backup
|
||||||
|
docker-compose exec -T postgres pg_dump -U jobforge jobforge | gzip > "$BACKUP_DIR/database_$DATE.sql.gz"
|
||||||
|
|
||||||
|
# Application files backup
|
||||||
|
tar -czf "$BACKUP_DIR/app_$DATE.tar.gz" \
|
||||||
|
--exclude="logs/*" \
|
||||||
|
--exclude="uploads/*" \
|
||||||
|
/opt/job-forge
|
||||||
|
|
||||||
|
# Keep only last 30 days
|
||||||
|
find "$BACKUP_DIR" -name "*.gz" -mtime +30 -delete
|
||||||
|
|
||||||
|
echo "Backup completed successfully"
|
||||||
|
```
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Make executable
|
||||||
|
chmod +x /opt/job-forge/backup.sh
|
||||||
|
|
||||||
|
# Add to crontab (daily backup at 2 AM)
|
||||||
|
sudo crontab -e
|
||||||
|
0 2 * * * /opt/job-forge/backup.sh
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. Health Monitoring
|
||||||
|
```bash
|
||||||
|
# Create health check script
|
||||||
|
nano /opt/job-forge/health-check.sh
|
||||||
|
```
|
||||||
|
|
||||||
|
```bash
|
||||||
|
#!/bin/bash
|
||||||
|
# Job Forge health check
|
||||||
|
|
||||||
|
HEALTH_URL="http://localhost:8000/health"
|
||||||
|
LOG_FILE="/opt/job-forge/logs/health-check.log"
|
||||||
|
|
||||||
|
response=$(curl -s -o /dev/null -w "%{http_code}" "$HEALTH_URL")
|
||||||
|
|
||||||
|
if [ "$response" = "200" ]; then
|
||||||
|
echo "$(date): Health check passed" >> "$LOG_FILE"
|
||||||
|
else
|
||||||
|
echo "$(date): Health check failed - HTTP $response" >> "$LOG_FILE"
|
||||||
|
# Send alert (email, Slack, etc.)
|
||||||
|
# systemctl restart docker-compose@job-forge
|
||||||
|
fi
|
||||||
|
```
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Make executable and add to cron (every 5 minutes)
|
||||||
|
chmod +x /opt/job-forge/health-check.sh
|
||||||
|
crontab -e
|
||||||
|
*/5 * * * * /opt/job-forge/health-check.sh
|
||||||
|
```
|
||||||
|
|
||||||
|
## Application Updates
|
||||||
|
|
||||||
|
### 1. Update Process
|
||||||
|
```bash
|
||||||
|
cd /opt/job-forge
|
||||||
|
|
||||||
|
# Create backup before update
|
||||||
|
./backup.sh
|
||||||
|
|
||||||
|
# Pull latest changes
|
||||||
|
git pull origin main
|
||||||
|
|
||||||
|
# Rebuild and restart services
|
||||||
|
docker-compose build
|
||||||
|
docker-compose up -d
|
||||||
|
|
||||||
|
# Run database migrations if needed
|
||||||
|
docker-compose exec jobforge-app alembic upgrade head
|
||||||
|
|
||||||
|
# Verify application is working
|
||||||
|
curl http://localhost:8000/health
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Rollback Process
|
||||||
|
```bash
|
||||||
|
# If update fails, rollback to previous version
|
||||||
|
git log --oneline -10 # Find previous commit
|
||||||
|
git checkout <previous-commit-hash>
|
||||||
|
|
||||||
|
# Rebuild and restart
|
||||||
|
docker-compose build
|
||||||
|
docker-compose up -d
|
||||||
|
|
||||||
|
# Or restore from backup if needed
|
||||||
|
```
|
||||||
|
|
||||||
|
## Troubleshooting
|
||||||
|
|
||||||
|
### Common Issues
|
||||||
|
|
||||||
|
#### 1. Application Won't Start
|
||||||
|
```bash
|
||||||
|
# Check logs
|
||||||
|
docker-compose logs jobforge-app
|
||||||
|
|
||||||
|
# Check database connection
|
||||||
|
docker-compose exec postgres pg_isready -U jobforge
|
||||||
|
|
||||||
|
# Check environment variables
|
||||||
|
docker-compose exec jobforge-app env | grep -E "(DATABASE|CLAUDE|OPENAI)"
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 2. Database Connection Issues
|
||||||
|
```bash
|
||||||
|
# Restart postgres
|
||||||
|
docker-compose restart postgres
|
||||||
|
|
||||||
|
# Check database logs
|
||||||
|
docker-compose logs postgres
|
||||||
|
|
||||||
|
# Connect to database manually
|
||||||
|
docker-compose exec postgres psql -U jobforge -d jobforge
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 3. SSL Certificate Issues
|
||||||
|
```bash
|
||||||
|
# Check certificate status
|
||||||
|
sudo certbot certificates
|
||||||
|
|
||||||
|
# Renew certificate manually
|
||||||
|
sudo certbot renew
|
||||||
|
|
||||||
|
# Check nginx configuration
|
||||||
|
sudo nginx -t
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 4. Permission Issues
|
||||||
|
```bash
|
||||||
|
# Fix file permissions
|
||||||
|
sudo chown -R $USER:$USER /opt/job-forge
|
||||||
|
chmod 755 /opt/job-forge/uploads
|
||||||
|
chmod 700 /opt/job-forge/ssl
|
||||||
|
```
|
||||||
|
|
||||||
|
### Performance Optimization
|
||||||
|
|
||||||
|
#### 1. Database Optimization
|
||||||
|
```bash
|
||||||
|
# Connect to database
|
||||||
|
docker-compose exec postgres psql -U jobforge -d jobforge
|
||||||
|
|
||||||
|
# Check slow queries
|
||||||
|
SELECT query, mean_time, calls FROM pg_stat_statements ORDER BY mean_time DESC LIMIT 10;
|
||||||
|
|
||||||
|
# Analyze table statistics
|
||||||
|
ANALYZE;
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 2. Container Resource Limits
|
||||||
|
```yaml
|
||||||
|
# In docker-compose.yml, add resource limits
|
||||||
|
services:
|
||||||
|
jobforge-app:
|
||||||
|
deploy:
|
||||||
|
resources:
|
||||||
|
limits:
|
||||||
|
cpus: '1.0'
|
||||||
|
memory: 1G
|
||||||
|
reservations:
|
||||||
|
cpus: '0.5'
|
||||||
|
memory: 512M
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 3. Nginx Caching
|
||||||
|
```nginx
|
||||||
|
# Add to nginx configuration
|
||||||
|
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
|
||||||
|
expires 1y;
|
||||||
|
add_header Cache-Control "public, immutable";
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
## Security Hardening
|
||||||
|
|
||||||
|
### 1. System Updates
|
||||||
|
```bash
|
||||||
|
# Enable automatic security updates
|
||||||
|
sudo apt install unattended-upgrades
|
||||||
|
sudo dpkg-reconfigure -plow unattended-upgrades
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Fail2Ban Setup
|
||||||
|
```bash
|
||||||
|
# Install fail2ban
|
||||||
|
sudo apt install fail2ban
|
||||||
|
|
||||||
|
# Configure for nginx
|
||||||
|
sudo nano /etc/fail2ban/jail.local
|
||||||
|
```
|
||||||
|
|
||||||
|
```ini
|
||||||
|
[nginx-http-auth]
|
||||||
|
enabled = true
|
||||||
|
|
||||||
|
[nginx-limit-req]
|
||||||
|
enabled = true
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. Docker Security
|
||||||
|
```bash
|
||||||
|
# Run containers as non-root user (already configured in Dockerfile)
|
||||||
|
# Limit container capabilities
|
||||||
|
# Use secrets for sensitive data
|
||||||
|
```
|
||||||
|
|
||||||
|
This deployment guide provides a comprehensive setup for Job Forge on your own server. Adjust configurations based on your specific requirements and security policies.
|
||||||
@@ -1,833 +0,0 @@
|
|||||||
# JobForge - Architecture Guide
|
|
||||||
|
|
||||||
**Version:** 1.0.0
|
|
||||||
**Status:** Production-Ready Implementation
|
|
||||||
**Date:** July 2025
|
|
||||||
**Target Market:** Canadian Job Market Applications
|
|
||||||
**Tagline:** "Forge Your Path to Success"
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📋 Executive Summary
|
|
||||||
|
|
||||||
### Project Vision
|
|
||||||
JobForge is a comprehensive, AI-powered job application management system that streamlines the entire application process through intelligent automation, multi-resume optimization, and authentic voice preservation for professional job seekers in the Canadian market.
|
|
||||||
|
|
||||||
### Core Objectives
|
|
||||||
- **Workflow Automation**: 3-phase intelligent application pipeline (Research → Resume → Cover Letter)
|
|
||||||
- **Multi-Resume Intelligence**: Leverage multiple resume versions as focused expertise lenses
|
|
||||||
- **Authentic Voice Preservation**: Maintain candidate's proven successful writing patterns
|
|
||||||
- **Canadian Market Focus**: Optimize for Canadian business culture and application standards
|
|
||||||
- **Local File Management**: Complete control over sensitive career documents
|
|
||||||
- **Scalable Architecture**: Support for high-volume job application campaigns
|
|
||||||
|
|
||||||
### Business Value Proposition
|
|
||||||
- **40% Time Reduction**: Automated research and document generation
|
|
||||||
- **Higher Success Rates**: Strategic positioning based on comprehensive analysis
|
|
||||||
- **Consistent Quality**: Standardized excellence across all applications
|
|
||||||
- **Document Security**: Local storage with full user control
|
|
||||||
- **Career Intelligence**: Build knowledge base from successful applications
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🏗️ High-Level Architecture
|
|
||||||
|
|
||||||
### System Overview
|
|
||||||
```mermaid
|
|
||||||
graph TB
|
|
||||||
subgraph "User Interface Layer"
|
|
||||||
A[Streamlit Web UI]
|
|
||||||
B[Configuration Panel]
|
|
||||||
C[File Management UI]
|
|
||||||
D[Workflow Interface]
|
|
||||||
end
|
|
||||||
|
|
||||||
subgraph "Application Core"
|
|
||||||
E[Application Engine]
|
|
||||||
F[Phase Orchestrator]
|
|
||||||
G[State Manager]
|
|
||||||
H[File Controller]
|
|
||||||
end
|
|
||||||
|
|
||||||
subgraph "AI Processing Layer"
|
|
||||||
I[Research Agent]
|
|
||||||
J[Resume Optimizer]
|
|
||||||
K[Cover Letter Generator]
|
|
||||||
L[Claude API Client]
|
|
||||||
end
|
|
||||||
|
|
||||||
subgraph "Data Management"
|
|
||||||
M[Resume Repository]
|
|
||||||
N[Reference Database]
|
|
||||||
O[Application Store]
|
|
||||||
P[Status Tracker]
|
|
||||||
end
|
|
||||||
|
|
||||||
subgraph "Storage Layer"
|
|
||||||
Q[Local File System]
|
|
||||||
R[Project Structure]
|
|
||||||
S[Document Templates]
|
|
||||||
end
|
|
||||||
|
|
||||||
subgraph "External Services"
|
|
||||||
T[Claude AI API]
|
|
||||||
U[Web Search APIs]
|
|
||||||
V[Company Intelligence]
|
|
||||||
end
|
|
||||||
|
|
||||||
A --> E
|
|
||||||
B --> H
|
|
||||||
C --> H
|
|
||||||
D --> F
|
|
||||||
E --> I
|
|
||||||
E --> J
|
|
||||||
E --> K
|
|
||||||
F --> G
|
|
||||||
I --> L
|
|
||||||
J --> L
|
|
||||||
K --> L
|
|
||||||
L --> T
|
|
||||||
M --> Q
|
|
||||||
N --> Q
|
|
||||||
O --> Q
|
|
||||||
P --> Q
|
|
||||||
H --> R
|
|
||||||
I --> U
|
|
||||||
I --> V
|
|
||||||
```
|
|
||||||
|
|
||||||
### Architecture Principles
|
|
||||||
|
|
||||||
#### **1. Domain-Driven Design**
|
|
||||||
- Clear separation between job application domain logic and technical infrastructure
|
|
||||||
- Rich domain models representing real-world career management concepts
|
|
||||||
- Business rules encapsulated within domain entities
|
|
||||||
|
|
||||||
#### **2. Event-Driven Workflow**
|
|
||||||
- Each phase triggers the next through well-defined events
|
|
||||||
- State transitions logged for auditability and recovery
|
|
||||||
- Asynchronous processing with real-time UI updates
|
|
||||||
|
|
||||||
#### **3. Multi-Source Intelligence**
|
|
||||||
- Resume portfolio treated as complementary expertise views
|
|
||||||
- Reference database provides voice pattern templates
|
|
||||||
- Company research aggregated from multiple sources
|
|
||||||
|
|
||||||
#### **4. Security-First Design**
|
|
||||||
- All sensitive career data stored locally
|
|
||||||
- No cloud storage of personal information
|
|
||||||
- API keys managed through secure environment variables
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔧 Core Components
|
|
||||||
|
|
||||||
### **Application Engine**
|
|
||||||
```python
|
|
||||||
class JobApplicationEngine:
|
|
||||||
"""Central orchestrator for the entire application workflow"""
|
|
||||||
|
|
||||||
def __init__(self, config: EngineConfig, file_manager: FileManager):
|
|
||||||
self.config = config
|
|
||||||
self.file_manager = file_manager
|
|
||||||
self.phase_orchestrator = PhaseOrchestrator()
|
|
||||||
self.state_manager = StateManager()
|
|
||||||
|
|
||||||
# Core workflow methods
|
|
||||||
def create_application(self, job_data: JobData) -> Application
|
|
||||||
def execute_research_phase(self, app_id: str) -> ResearchReport
|
|
||||||
def optimize_resume(self, app_id: str, research: ResearchReport) -> OptimizedResume
|
|
||||||
def generate_cover_letter(self, app_id: str, context: ApplicationContext) -> CoverLetter
|
|
||||||
|
|
||||||
# Management operations
|
|
||||||
def list_applications(self, status_filter: str = None) -> List[Application]
|
|
||||||
def update_application_status(self, app_id: str, status: ApplicationStatus) -> None
|
|
||||||
def export_application(self, app_id: str, format: ExportFormat) -> str
|
|
||||||
```
|
|
||||||
|
|
||||||
**Responsibilities:**
|
|
||||||
- Coordinate all application lifecycle operations
|
|
||||||
- Manage state transitions between phases
|
|
||||||
- Integrate with AI processing agents
|
|
||||||
- Handle file system operations through delegates
|
|
||||||
|
|
||||||
### **Phase Orchestrator**
|
|
||||||
```python
|
|
||||||
class PhaseOrchestrator:
|
|
||||||
"""Manages the 3-phase workflow execution and state transitions"""
|
|
||||||
|
|
||||||
class Phases(Enum):
|
|
||||||
INPUT = "input"
|
|
||||||
RESEARCH = "research"
|
|
||||||
RESUME = "resume"
|
|
||||||
COVER_LETTER = "cover_letter"
|
|
||||||
COMPLETE = "complete"
|
|
||||||
|
|
||||||
def execute_phase(self, phase: Phases, context: PhaseContext) -> PhaseResult
|
|
||||||
def can_advance_to(self, target_phase: Phases, current_state: ApplicationState) -> bool
|
|
||||||
def get_phase_requirements(self, phase: Phases) -> List[Requirement]
|
|
||||||
|
|
||||||
# Phase-specific execution
|
|
||||||
async def execute_research(self, job_data: JobData, resume_portfolio: List[Resume]) -> ResearchReport
|
|
||||||
async def execute_resume_optimization(self, research: ResearchReport, portfolio: ResumePortfolio) -> OptimizedResume
|
|
||||||
async def execute_cover_letter_generation(self, context: ApplicationContext) -> CoverLetter
|
|
||||||
```
|
|
||||||
|
|
||||||
**Design Features:**
|
|
||||||
- State machine implementation for workflow control
|
|
||||||
- Async execution with progress callbacks
|
|
||||||
- Dependency validation between phases
|
|
||||||
- Rollback capability for failed phases
|
|
||||||
|
|
||||||
### **AI Processing Agents**
|
|
||||||
|
|
||||||
#### **Research Agent**
|
|
||||||
```python
|
|
||||||
class ResearchAgent:
|
|
||||||
"""Phase 1: Comprehensive job description analysis and strategic positioning"""
|
|
||||||
|
|
||||||
def __init__(self, claude_client: ClaudeAPIClient, web_search: WebSearchClient):
|
|
||||||
self.claude = claude_client
|
|
||||||
self.web_search = web_search
|
|
||||||
|
|
||||||
async def analyze_job_description(self, job_desc: str) -> JobAnalysis:
|
|
||||||
"""Extract and categorize job requirements, company info, and keywords"""
|
|
||||||
|
|
||||||
async def assess_candidate_fit(self, job_analysis: JobAnalysis, resume_portfolio: ResumePortfolio) -> FitAssessment:
|
|
||||||
"""Multi-resume skills assessment with transferability analysis"""
|
|
||||||
|
|
||||||
async def research_company_intelligence(self, company_name: str) -> CompanyIntelligence:
|
|
||||||
"""Gather company culture, recent news, and strategic insights"""
|
|
||||||
|
|
||||||
async def generate_strategic_positioning(self, context: ResearchContext) -> StrategicPositioning:
|
|
||||||
"""Determine optimal candidate positioning and competitive advantages"""
|
|
||||||
```
|
|
||||||
|
|
||||||
#### **Resume Optimizer**
|
|
||||||
```python
|
|
||||||
class ResumeOptimizer:
|
|
||||||
"""Phase 2: Multi-resume synthesis and strategic optimization"""
|
|
||||||
|
|
||||||
def __init__(self, claude_client: ClaudeAPIClient, config: OptimizationConfig):
|
|
||||||
self.claude = claude_client
|
|
||||||
self.config = config # 600-word limit, formatting rules, etc.
|
|
||||||
|
|
||||||
async def synthesize_resume_portfolio(self, portfolio: ResumePortfolio, research: ResearchReport) -> SynthesizedContent:
|
|
||||||
"""Merge insights from multiple resume versions"""
|
|
||||||
|
|
||||||
async def optimize_for_job(self, content: SynthesizedContent, positioning: StrategicPositioning) -> OptimizedResume:
|
|
||||||
"""Create targeted resume within word limits"""
|
|
||||||
|
|
||||||
def validate_optimization(self, resume: OptimizedResume) -> OptimizationReport:
|
|
||||||
"""Ensure word count, keyword density, and strategic alignment"""
|
|
||||||
```
|
|
||||||
|
|
||||||
#### **Cover Letter Generator**
|
|
||||||
```python
|
|
||||||
class CoverLetterGenerator:
|
|
||||||
"""Phase 3: Authentic voice preservation and company-specific customization"""
|
|
||||||
|
|
||||||
def __init__(self, claude_client: ClaudeAPIClient, reference_db: ReferenceDatabase):
|
|
||||||
self.claude = claude_client
|
|
||||||
self.reference_db = reference_db
|
|
||||||
|
|
||||||
async def analyze_voice_patterns(self, selected_references: List[CoverLetterReference]) -> VoiceProfile:
|
|
||||||
"""Extract authentic writing style, tone, and structural patterns"""
|
|
||||||
|
|
||||||
async def generate_cover_letter(self, context: CoverLetterContext, voice_profile: VoiceProfile) -> CoverLetter:
|
|
||||||
"""Create authentic cover letter using proven voice patterns"""
|
|
||||||
|
|
||||||
def validate_authenticity(self, cover_letter: CoverLetter, voice_profile: VoiceProfile) -> AuthenticityScore:
|
|
||||||
"""Ensure generated content matches authentic voice patterns"""
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Data Models**
|
|
||||||
```python
|
|
||||||
class Application(BaseModel):
|
|
||||||
"""Core application entity with full lifecycle management"""
|
|
||||||
id: str
|
|
||||||
name: str # company_role_YYYY_MM_DD format
|
|
||||||
status: ApplicationStatus
|
|
||||||
created_at: datetime
|
|
||||||
updated_at: datetime
|
|
||||||
|
|
||||||
# Job information
|
|
||||||
job_data: JobData
|
|
||||||
company_info: CompanyInfo
|
|
||||||
|
|
||||||
# Phase results
|
|
||||||
research_report: Optional[ResearchReport] = None
|
|
||||||
optimized_resume: Optional[OptimizedResume] = None
|
|
||||||
cover_letter: Optional[CoverLetter] = None
|
|
||||||
|
|
||||||
# Metadata
|
|
||||||
priority_level: PriorityLevel
|
|
||||||
application_deadline: Optional[date] = None
|
|
||||||
|
|
||||||
# Business logic
|
|
||||||
@property
|
|
||||||
def completion_percentage(self) -> float
|
|
||||||
def can_advance_to_phase(self, phase: PhaseOrchestrator.Phases) -> bool
|
|
||||||
def export_to_format(self, format: ExportFormat) -> str
|
|
||||||
|
|
||||||
class ResumePortfolio(BaseModel):
|
|
||||||
"""Collection of focused resume versions representing different expertise areas"""
|
|
||||||
resumes: List[Resume]
|
|
||||||
|
|
||||||
def get_technical_focused(self) -> List[Resume]
|
|
||||||
def get_management_focused(self) -> List[Resume]
|
|
||||||
def get_industry_specific(self, industry: str) -> List[Resume]
|
|
||||||
def synthesize_skills(self) -> SkillMatrix
|
|
||||||
|
|
||||||
class JobData(BaseModel):
|
|
||||||
"""Comprehensive job posting information"""
|
|
||||||
job_url: Optional[str] = None
|
|
||||||
job_description: str
|
|
||||||
company_name: str
|
|
||||||
role_title: str
|
|
||||||
location: str
|
|
||||||
priority_level: PriorityLevel
|
|
||||||
how_found: str
|
|
||||||
application_deadline: Optional[date] = None
|
|
||||||
|
|
||||||
# Additional context
|
|
||||||
specific_aspects: Optional[str] = None
|
|
||||||
company_insights: Optional[str] = None
|
|
||||||
special_considerations: Optional[str] = None
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 Data Flow Architecture
|
|
||||||
|
|
||||||
### Application Creation Flow
|
|
||||||
```mermaid
|
|
||||||
sequenceDiagram
|
|
||||||
participant UI as Streamlit UI
|
|
||||||
participant Engine as Application Engine
|
|
||||||
participant FileManager as File Manager
|
|
||||||
participant Storage as Local Storage
|
|
||||||
|
|
||||||
UI->>Engine: create_application(job_data)
|
|
||||||
Engine->>Engine: validate_job_data()
|
|
||||||
Engine->>Engine: generate_application_name()
|
|
||||||
Engine->>FileManager: create_application_folder()
|
|
||||||
FileManager->>Storage: mkdir(company_role_date)
|
|
||||||
FileManager->>Storage: save(user_inputs.json)
|
|
||||||
FileManager->>Storage: save(original_job_description.md)
|
|
||||||
FileManager->>Storage: save(application_status.json)
|
|
||||||
Engine-->>UI: Application(id, status=created)
|
|
||||||
```
|
|
||||||
|
|
||||||
### 3-Phase Workflow Execution
|
|
||||||
```mermaid
|
|
||||||
flowchart TD
|
|
||||||
A[Application Created] --> B[Phase 1: Research]
|
|
||||||
B --> C{Research Complete?}
|
|
||||||
C -->|Yes| D[Phase 2: Resume]
|
|
||||||
C -->|No| E[Research Error]
|
|
||||||
D --> F{Resume Complete?}
|
|
||||||
F -->|Yes| G[Phase 3: Cover Letter]
|
|
||||||
F -->|No| H[Resume Error]
|
|
||||||
G --> I{Cover Letter Complete?}
|
|
||||||
I -->|Yes| J[Application Complete]
|
|
||||||
I -->|No| K[Cover Letter Error]
|
|
||||||
|
|
||||||
E --> L[Log Error & Retry]
|
|
||||||
H --> L
|
|
||||||
K --> L
|
|
||||||
L --> M[Manual Intervention]
|
|
||||||
|
|
||||||
subgraph "Phase 1 Details"
|
|
||||||
B1[Job Analysis]
|
|
||||||
B2[Multi-Resume Assessment]
|
|
||||||
B3[Company Research]
|
|
||||||
B4[Strategic Positioning]
|
|
||||||
B --> B1 --> B2 --> B3 --> B4 --> C
|
|
||||||
end
|
|
||||||
|
|
||||||
subgraph "Phase 2 Details"
|
|
||||||
D1[Portfolio Synthesis]
|
|
||||||
D2[Content Optimization]
|
|
||||||
D3[Word Count Management]
|
|
||||||
D4[Strategic Alignment]
|
|
||||||
D --> D1 --> D2 --> D3 --> D4 --> F
|
|
||||||
end
|
|
||||||
|
|
||||||
subgraph "Phase 3 Details"
|
|
||||||
G1[Voice Analysis]
|
|
||||||
G2[Content Generation]
|
|
||||||
G3[Authenticity Validation]
|
|
||||||
G4[Company Customization]
|
|
||||||
G --> G1 --> G2 --> G3 --> G4 --> I
|
|
||||||
end
|
|
||||||
```
|
|
||||||
|
|
||||||
### File Management Architecture
|
|
||||||
```mermaid
|
|
||||||
graph TB
|
|
||||||
subgraph "Project Root"
|
|
||||||
A[job-application-engine/]
|
|
||||||
end
|
|
||||||
|
|
||||||
subgraph "User Data"
|
|
||||||
B[user_data/resumes/]
|
|
||||||
C[user_data/cover_letter_references/selected/]
|
|
||||||
D[user_data/cover_letter_references/other/]
|
|
||||||
end
|
|
||||||
|
|
||||||
subgraph "Applications"
|
|
||||||
E[applications/company_role_date/]
|
|
||||||
F[├── original_job_description.md]
|
|
||||||
G[├── research_report.md]
|
|
||||||
H[├── optimized_resume.md]
|
|
||||||
I[├── cover_letter.md]
|
|
||||||
J[├── user_inputs.json]
|
|
||||||
K[└── application_status.json]
|
|
||||||
end
|
|
||||||
|
|
||||||
subgraph "Configuration"
|
|
||||||
L[config/]
|
|
||||||
M[├── engine_config.yaml]
|
|
||||||
N[├── claude_api_config.json]
|
|
||||||
O[└── templates/]
|
|
||||||
end
|
|
||||||
|
|
||||||
A --> B
|
|
||||||
A --> C
|
|
||||||
A --> D
|
|
||||||
A --> E
|
|
||||||
A --> L
|
|
||||||
E --> F
|
|
||||||
E --> G
|
|
||||||
E --> H
|
|
||||||
E --> I
|
|
||||||
E --> J
|
|
||||||
E --> K
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🗂️ Project Structure
|
|
||||||
|
|
||||||
### Directory Layout
|
|
||||||
```
|
|
||||||
job-application-engine/
|
|
||||||
├── app.py # Streamlit main application
|
|
||||||
├── requirements.txt # Python dependencies
|
|
||||||
├── config/
|
|
||||||
│ ├── engine_config.yaml # Engine configuration
|
|
||||||
│ ├── claude_api_config.json # API configuration
|
|
||||||
│ └── templates/ # Document templates
|
|
||||||
│ ├── research_template.md
|
|
||||||
│ ├── resume_template.md
|
|
||||||
│ └── cover_letter_template.md
|
|
||||||
├── src/ # Source code
|
|
||||||
│ ├── __init__.py
|
|
||||||
│ ├── engine/ # Core engine
|
|
||||||
│ │ ├── __init__.py
|
|
||||||
│ │ ├── application_engine.py # Main engine class
|
|
||||||
│ │ ├── phase_orchestrator.py # Workflow management
|
|
||||||
│ │ └── state_manager.py # State tracking
|
|
||||||
│ ├── agents/ # AI processing agents
|
|
||||||
│ │ ├── __init__.py
|
|
||||||
│ │ ├── research_agent.py # Phase 1: Research
|
|
||||||
│ │ ├── resume_optimizer.py # Phase 2: Resume
|
|
||||||
│ │ ├── cover_letter_generator.py # Phase 3: Cover Letter
|
|
||||||
│ │ └── claude_client.py # Claude API integration
|
|
||||||
│ ├── models/ # Data models
|
|
||||||
│ │ ├── __init__.py
|
|
||||||
│ │ ├── application.py # Application entity
|
|
||||||
│ │ ├── job_data.py # Job information
|
|
||||||
│ │ ├── resume.py # Resume models
|
|
||||||
│ │ └── results.py # Phase results
|
|
||||||
│ ├── storage/ # Storage management
|
|
||||||
│ │ ├── __init__.py
|
|
||||||
│ │ ├── file_manager.py # File operations
|
|
||||||
│ │ ├── application_store.py # Application persistence
|
|
||||||
│ │ └── reference_database.py # Cover letter references
|
|
||||||
│ ├── ui/ # User interface
|
|
||||||
│ │ ├── __init__.py
|
|
||||||
│ │ ├── streamlit_app.py # Streamlit components
|
|
||||||
│ │ ├── workflow_ui.py # Workflow interface
|
|
||||||
│ │ └── file_management_ui.py # File management
|
|
||||||
│ └── utils/ # Utilities
|
|
||||||
│ ├── __init__.py
|
|
||||||
│ ├── validators.py # Input validation
|
|
||||||
│ ├── formatters.py # Output formatting
|
|
||||||
│ └── helpers.py # Helper functions
|
|
||||||
├── user_data/ # User's career documents
|
|
||||||
│ ├── resumes/
|
|
||||||
│ │ ├── resume_complete.md
|
|
||||||
│ │ ├── resume_technical.md
|
|
||||||
│ │ └── resume_management.md
|
|
||||||
│ └── cover_letter_references/
|
|
||||||
│ ├── selected/ # Tagged as references
|
|
||||||
│ │ ├── cover_letter_tech.md
|
|
||||||
│ │ └── cover_letter_consulting.md
|
|
||||||
│ └── other/ # Available references
|
|
||||||
│ └── cover_letter_finance.md
|
|
||||||
├── applications/ # Generated applications
|
|
||||||
│ ├── dillon_consulting_data_analyst_2025_07_22/
|
|
||||||
│ └── shopify_senior_developer_2025_07_23/
|
|
||||||
├── tests/ # Test suite
|
|
||||||
│ ├── unit/
|
|
||||||
│ ├── integration/
|
|
||||||
│ └── fixtures/
|
|
||||||
├── docs/ # Documentation
|
|
||||||
│ ├── architecture.md
|
|
||||||
│ ├── user_guide.md
|
|
||||||
│ └── api_reference.md
|
|
||||||
└── scripts/ # Utility scripts
|
|
||||||
├── setup_project.py
|
|
||||||
└── backup_applications.py
|
|
||||||
```
|
|
||||||
|
|
||||||
### Module Responsibilities
|
|
||||||
|
|
||||||
| Module | Purpose | Key Classes | Dependencies |
|
|
||||||
|--------|---------|-------------|--------------|
|
|
||||||
| `engine/` | Core workflow orchestration | `ApplicationEngine`, `PhaseOrchestrator` | `agents/`, `models/` |
|
|
||||||
| `agents/` | AI processing logic | `ResearchAgent`, `ResumeOptimizer`, `CoverLetterGenerator` | `models/`, `utils/` |
|
|
||||||
| `models/` | Data structures and business logic | `Application`, `JobData`, `Resume`, `ResumePortfolio` | `pydantic` |
|
|
||||||
| `storage/` | File system operations | `FileManager`, `ApplicationStore`, `ReferenceDatabase` | `pathlib`, `json` |
|
|
||||||
| `ui/` | User interface components | `StreamlitApp`, `WorkflowUI`, `FileManagementUI` | `streamlit` |
|
|
||||||
| `utils/` | Cross-cutting concerns | `Validators`, `Formatters`, `Helpers` | Various |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔌 Extensibility Architecture
|
|
||||||
|
|
||||||
### Plugin System Design
|
|
||||||
```python
|
|
||||||
class EnginePlugin(ABC):
|
|
||||||
"""Base plugin interface for extending engine functionality"""
|
|
||||||
|
|
||||||
def before_phase_execution(self, phase: PhaseOrchestrator.Phases, context: PhaseContext) -> PhaseContext:
|
|
||||||
"""Modify context before phase execution"""
|
|
||||||
return context
|
|
||||||
|
|
||||||
def after_phase_completion(self, phase: PhaseOrchestrator.Phases, result: PhaseResult) -> PhaseResult:
|
|
||||||
"""Process result after phase completion"""
|
|
||||||
return result
|
|
||||||
|
|
||||||
def on_application_created(self, application: Application) -> None:
|
|
||||||
"""React to new application creation"""
|
|
||||||
pass
|
|
||||||
|
|
||||||
class MetricsPlugin(EnginePlugin):
|
|
||||||
"""Collect application performance metrics"""
|
|
||||||
|
|
||||||
def after_phase_completion(self, phase: PhaseOrchestrator.Phases, result: PhaseResult) -> PhaseResult:
|
|
||||||
self.record_phase_metrics(phase, result.execution_time, result.success)
|
|
||||||
return result
|
|
||||||
|
|
||||||
class BackupPlugin(EnginePlugin):
|
|
||||||
"""Automatic backup of application data"""
|
|
||||||
|
|
||||||
def on_application_created(self, application: Application) -> None:
|
|
||||||
self.backup_application(application)
|
|
||||||
```
|
|
||||||
|
|
||||||
### Configuration System
|
|
||||||
```python
|
|
||||||
@dataclass
|
|
||||||
class EngineConfig:
|
|
||||||
# Core settings
|
|
||||||
claude_api_key: str
|
|
||||||
base_output_directory: str = "./applications"
|
|
||||||
max_concurrent_phases: int = 1
|
|
||||||
|
|
||||||
# AI processing
|
|
||||||
research_model: str = "claude-sonnet-4-20250514"
|
|
||||||
resume_word_limit: int = 600
|
|
||||||
cover_letter_word_range: tuple = (350, 450)
|
|
||||||
|
|
||||||
# File management
|
|
||||||
auto_backup_enabled: bool = True
|
|
||||||
backup_retention_days: int = 30
|
|
||||||
|
|
||||||
# UI preferences
|
|
||||||
streamlit_theme: str = "light"
|
|
||||||
show_advanced_options: bool = False
|
|
||||||
|
|
||||||
# Extensions
|
|
||||||
enabled_plugins: List[str] = field(default_factory=list)
|
|
||||||
|
|
||||||
@classmethod
|
|
||||||
def from_file(cls, config_path: str) -> 'EngineConfig':
|
|
||||||
"""Load configuration from YAML file"""
|
|
||||||
|
|
||||||
def validate(self) -> List[ValidationError]:
|
|
||||||
"""Validate configuration completeness and correctness"""
|
|
||||||
```
|
|
||||||
|
|
||||||
### Multi-Resume Strategy Pattern
|
|
||||||
```python
|
|
||||||
class ResumeSelectionStrategy(ABC):
|
|
||||||
"""Strategy for selecting optimal resume content for specific jobs"""
|
|
||||||
|
|
||||||
def select_primary_resume(self, portfolio: ResumePortfolio, job_analysis: JobAnalysis) -> Resume:
|
|
||||||
"""Select the most relevant primary resume"""
|
|
||||||
|
|
||||||
def get_supplementary_content(self, portfolio: ResumePortfolio, primary: Resume) -> List[ResumeSection]:
|
|
||||||
"""Extract additional content from other resume versions"""
|
|
||||||
|
|
||||||
class TechnicalRoleStrategy(ResumeSelectionStrategy):
|
|
||||||
"""Optimize resume selection for technical positions"""
|
|
||||||
|
|
||||||
class ManagementRoleStrategy(ResumeSelectionStrategy):
|
|
||||||
"""Optimize resume selection for management positions"""
|
|
||||||
|
|
||||||
class ConsultingRoleStrategy(ResumeSelectionStrategy):
|
|
||||||
"""Optimize resume selection for consulting positions"""
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🚀 Development Phases
|
|
||||||
|
|
||||||
### **Phase 1: MVP Foundation (Completed)**
|
|
||||||
- ✅ Streamlit UI with file management
|
|
||||||
- ✅ 3-phase workflow execution
|
|
||||||
- ✅ Claude API integration
|
|
||||||
- ✅ Local file storage system
|
|
||||||
- ✅ Multi-resume processing
|
|
||||||
- ✅ Cover letter reference system
|
|
||||||
- ✅ Application status tracking
|
|
||||||
|
|
||||||
### **Phase 2: Enhanced Intelligence (Next)**
|
|
||||||
- 🔄 Advanced company research integration
|
|
||||||
- 🔄 Improved multi-resume synthesis algorithms
|
|
||||||
- 🔄 Voice pattern analysis enhancement
|
|
||||||
- 🔄 Strategic positioning optimization
|
|
||||||
- 🔄 Application performance analytics
|
|
||||||
- 🔄 Export functionality (PDF, Word, etc.)
|
|
||||||
|
|
||||||
### **Phase 3: Automation & Scale (Future)**
|
|
||||||
- 📋 Batch application processing
|
|
||||||
- 📋 Template management system
|
|
||||||
- 📋 Application campaign planning
|
|
||||||
- 📋 Success rate tracking and optimization
|
|
||||||
- 📋 Integration with job boards APIs
|
|
||||||
- 📋 Automated application submission
|
|
||||||
|
|
||||||
### **Phase 4: Enterprise Features (Future)**
|
|
||||||
- 📋 Multi-user support with role-based access
|
|
||||||
- 📋 Team collaboration features
|
|
||||||
- 📋 Advanced analytics and reporting
|
|
||||||
- 📋 Custom workflow templates
|
|
||||||
- 📋 Integration with HR systems
|
|
||||||
- 📋 White-label deployment options
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎯 Technical Specifications
|
|
||||||
|
|
||||||
### **Technology Stack**
|
|
||||||
|
|
||||||
| Component | Technology | Version | Rationale |
|
|
||||||
|-----------|------------|---------|-----------|
|
|
||||||
| **UI Framework** | Streamlit | 1.28.1 | Rapid prototyping, built-in components, Python-native |
|
|
||||||
| **HTTP Client** | requests | 2.31.0 | Reliable, well-documented, synchronous operations |
|
|
||||||
| **Data Validation** | Pydantic | 2.0+ | Type safety, automatic validation, great developer experience |
|
|
||||||
| **File Operations** | pathlib | Built-in | Modern, object-oriented path handling |
|
|
||||||
| **Configuration** | PyYAML | 6.0+ | Human-readable configuration files |
|
|
||||||
| **CLI Future** | Click + Rich | Latest | User-friendly CLI with beautiful output |
|
|
||||||
| **Testing** | pytest | 7.0+ | Comprehensive testing framework |
|
|
||||||
| **Documentation** | MkDocs | 1.5+ | Beautiful, searchable documentation |
|
|
||||||
|
|
||||||
### **Performance Requirements**
|
|
||||||
|
|
||||||
| Metric | Target | Measurement Method |
|
|
||||||
|--------|--------|-------------------|
|
|
||||||
| **Application Creation** | <2 seconds | Time from form submission to folder creation |
|
|
||||||
| **Phase 1 Research** | <30 seconds | Claude API response + processing time |
|
|
||||||
| **Phase 2 Resume** | <20 seconds | Multi-resume synthesis + optimization |
|
|
||||||
| **Phase 3 Cover Letter** | <15 seconds | Voice analysis + content generation |
|
|
||||||
| **File Operations** | <1 second | Local file read/write operations |
|
|
||||||
| **UI Responsiveness** | <500ms | Streamlit component render time |
|
|
||||||
|
|
||||||
### **Quality Standards**
|
|
||||||
|
|
||||||
#### **Code Quality Metrics**
|
|
||||||
- **Type Coverage**: 90%+ type hints on all public APIs
|
|
||||||
- **Test Coverage**: 85%+ line coverage maintained
|
|
||||||
- **Documentation**: All public methods and classes documented
|
|
||||||
- **Code Style**: Black formatter + isort + flake8 compliance
|
|
||||||
- **Complexity**: Max cyclomatic complexity of 10 per function
|
|
||||||
|
|
||||||
#### **Security Requirements**
|
|
||||||
- No API keys hardcoded in source code
|
|
||||||
- Environment variable management for secrets
|
|
||||||
- Input sanitization for all user data
|
|
||||||
- Safe file path handling to prevent directory traversal
|
|
||||||
- Regular dependency vulnerability scanning
|
|
||||||
|
|
||||||
#### **Reliability Standards**
|
|
||||||
- Graceful handling of API failures with user-friendly messages
|
|
||||||
- Automatic retry logic for transient failures
|
|
||||||
- Data integrity validation after file operations
|
|
||||||
- Rollback capability for failed workflow phases
|
|
||||||
- Comprehensive error logging with context
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📈 Monitoring & Analytics
|
|
||||||
|
|
||||||
### **Application Metrics**
|
|
||||||
```python
|
|
||||||
class ApplicationMetrics:
|
|
||||||
"""Track application performance and success rates"""
|
|
||||||
|
|
||||||
def record_application_created(self, app: Application) -> None
|
|
||||||
def record_phase_completion(self, app_id: str, phase: PhaseOrchestrator.Phases, duration: float) -> None
|
|
||||||
def record_application_submitted(self, app_id: str) -> None
|
|
||||||
def record_application_response(self, app_id: str, response_type: ResponseType) -> None
|
|
||||||
|
|
||||||
# Analytics queries
|
|
||||||
def get_success_rate(self, date_range: DateRange) -> float
|
|
||||||
def get_average_completion_time(self, phase: PhaseOrchestrator.Phases) -> float
|
|
||||||
def get_most_effective_strategies(self) -> List[StrategyMetric]
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Performance Monitoring**
|
|
||||||
```python
|
|
||||||
class PerformanceMonitor:
|
|
||||||
"""Monitor system performance and resource usage"""
|
|
||||||
|
|
||||||
def track_api_response_times(self) -> Dict[str, float]
|
|
||||||
def monitor_file_system_usage(self) -> StorageMetrics
|
|
||||||
def track_memory_usage(self) -> MemoryMetrics
|
|
||||||
def generate_performance_report(self) -> PerformanceReport
|
|
||||||
```
|
|
||||||
|
|
||||||
### **User Experience Analytics**
|
|
||||||
- Workflow completion rates by phase
|
|
||||||
- Most common user pain points
|
|
||||||
- Feature usage statistics
|
|
||||||
- Error frequency and resolution rates
|
|
||||||
- Time-to-value metrics
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔒 Security Architecture
|
|
||||||
|
|
||||||
### **Data Protection Strategy**
|
|
||||||
- **Local-First**: All sensitive career data stored locally
|
|
||||||
- **API Key Management**: Secure environment variable handling
|
|
||||||
- **Input Validation**: Comprehensive sanitization of all user inputs
|
|
||||||
- **File System Security**: Restricted file access patterns
|
|
||||||
- **Audit Trail**: Complete logging of all file operations
|
|
||||||
|
|
||||||
### **Privacy Considerations**
|
|
||||||
- No personal data transmitted to third parties (except Claude API for processing)
|
|
||||||
- User control over all data retention and deletion
|
|
||||||
- Transparent data usage policies
|
|
||||||
- Optional anonymization for analytics
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎨 User Experience Design
|
|
||||||
|
|
||||||
### **Design Principles**
|
|
||||||
1. **Simplicity First**: Complex AI power hidden behind simple interfaces
|
|
||||||
2. **Progress Transparency**: Clear feedback on all processing steps
|
|
||||||
3. **Error Recovery**: Graceful handling with actionable next steps
|
|
||||||
4. **Customization**: Flexible configuration without overwhelming options
|
|
||||||
5. **Mobile Friendly**: Responsive design for various screen sizes
|
|
||||||
|
|
||||||
### **User Journey Optimization**
|
|
||||||
```mermaid
|
|
||||||
journey
|
|
||||||
title Job Application Creation Journey
|
|
||||||
section Setup
|
|
||||||
Configure folders: 5: User
|
|
||||||
Upload resumes: 4: User
|
|
||||||
Tag references: 3: User
|
|
||||||
section Application
|
|
||||||
Paste job description: 5: User
|
|
||||||
Review auto-generated name: 4: User
|
|
||||||
Start research phase: 5: User
|
|
||||||
section AI Processing
|
|
||||||
Wait for research: 3: User, AI
|
|
||||||
Review research results: 4: User
|
|
||||||
Approve resume optimization: 5: User, AI
|
|
||||||
Review cover letter: 5: User, AI
|
|
||||||
section Completion
|
|
||||||
Make final edits: 4: User
|
|
||||||
Export documents: 5: User
|
|
||||||
Mark as applied: 5: User
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📚 Documentation Strategy
|
|
||||||
|
|
||||||
### **Documentation Hierarchy**
|
|
||||||
1. **Architecture Guide** (This Document) - Technical architecture and design decisions
|
|
||||||
2. **User Guide** - Step-by-step usage instructions with screenshots
|
|
||||||
3. **API Reference** - Detailed API documentation for extensions
|
|
||||||
4. **Developer Guide** - Setup, contribution guidelines, and development practices
|
|
||||||
5. **Troubleshooting Guide** - Common issues and solutions
|
|
||||||
|
|
||||||
### **Documentation Standards**
|
|
||||||
- All public APIs documented with docstrings
|
|
||||||
- Code examples for all major features
|
|
||||||
- Screenshots for UI components
|
|
||||||
- Video tutorials for complex workflows
|
|
||||||
- Regular documentation updates with each release
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🚀 Deployment & Distribution
|
|
||||||
|
|
||||||
### **Distribution Strategy**
|
|
||||||
- **GitHub Repository**: Open source with comprehensive documentation
|
|
||||||
- **PyPI Package**: Easy installation via pip
|
|
||||||
- **Docker Container**: Containerized deployment option
|
|
||||||
- **Executable Bundle**: Standalone executable for non-technical users
|
|
||||||
|
|
||||||
### **Deployment Options**
|
|
||||||
```python
|
|
||||||
# Option 1: Direct Python execution
|
|
||||||
python -m streamlit run app.py
|
|
||||||
|
|
||||||
# Option 2: Docker deployment
|
|
||||||
docker run -p 8501:8501 job-application-engine
|
|
||||||
|
|
||||||
# Option 3: Heroku deployment
|
|
||||||
git push heroku main
|
|
||||||
|
|
||||||
# Option 4: Local installation
|
|
||||||
pip install job-application-engine
|
|
||||||
job-app-engine --config myconfig.yaml
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔮 Future Enhancements
|
|
||||||
|
|
||||||
### **Advanced AI Features**
|
|
||||||
- **Multi-Model Support**: Integration with multiple AI providers
|
|
||||||
- **Specialized Models**: Domain-specific fine-tuned models
|
|
||||||
- **Continuous Learning**: System learns from successful applications
|
|
||||||
- **Predictive Analytics**: Success probability estimation
|
|
||||||
|
|
||||||
### **Integration Ecosystem**
|
|
||||||
- **LinkedIn Integration**: Auto-import job postings and company data
|
|
||||||
- **ATS Integration**: Direct submission to Applicant Tracking Systems
|
|
||||||
- **CRM Integration**: Track application pipeline in existing CRM
|
|
||||||
- **Calendar Integration**: Application deadline management
|
|
||||||
|
|
||||||
### **Enterprise Features**
|
|
||||||
- **Multi-Tenant Architecture**: Support multiple users/organizations
|
|
||||||
- **Role-Based Access Control**: Team collaboration with permission levels
|
|
||||||
- **Workflow Customization**: Industry-specific workflow templates
|
|
||||||
- **Advanced Analytics**: Success attribution and optimization recommendations
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
*This architecture guide serves as the authoritative reference for the Job Application Engine system design and implementation. For implementation details, see the source code and technical documentation.*
|
|
||||||
|
|
||||||
*For questions or contributions, please refer to the project repository and contribution guidelines.*
|
|
||||||
776
docs/jobforge_mvp_architecture.md
Normal file
776
docs/jobforge_mvp_architecture.md
Normal file
@@ -0,0 +1,776 @@
|
|||||||
|
# Job Forge - Python/FastAPI Web Application Architecture
|
||||||
|
|
||||||
|
**Version:** 1.0.0 Prototype
|
||||||
|
**Status:** Development Phase 1
|
||||||
|
**Date:** August 2025
|
||||||
|
**Scope:** AI-powered job application management web application
|
||||||
|
**Target:** Prototype development for server deployment
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📋 Application Scope & Objectives
|
||||||
|
|
||||||
|
### Core Web Application Features
|
||||||
|
- **User Authentication**: JWT-based secure authentication system
|
||||||
|
- **Job Application Management**: Full CRUD operations for job applications
|
||||||
|
- **AI-Powered Document Generation**: Automated cover letter and resume optimization
|
||||||
|
- **Web Interface**: Modern responsive web interface using Dash + Mantine
|
||||||
|
- **Multi-tenant Architecture**: Secure user data isolation with RLS
|
||||||
|
|
||||||
|
### Development Goals
|
||||||
|
- Deploy functional prototype to personal server
|
||||||
|
- Validate AI workflow effectiveness for job applications
|
||||||
|
- Test web application performance and user experience
|
||||||
|
- Establish scalable architecture for future enhancements
|
||||||
|
- Demonstrate full-stack Python/FastAPI capabilities
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🏗️ Web Application Architecture
|
||||||
|
|
||||||
|
### System Overview
|
||||||
|
```mermaid
|
||||||
|
graph TB
|
||||||
|
subgraph "Web Frontend (Dash + Mantine)"
|
||||||
|
UI[Dashboard Interface]
|
||||||
|
FORMS[Application Forms]
|
||||||
|
EDITOR[Document Editor]
|
||||||
|
VIEWER[Document Viewer]
|
||||||
|
end
|
||||||
|
|
||||||
|
subgraph "Backend API (FastAPI)"
|
||||||
|
AUTH[JWT Authentication]
|
||||||
|
CRUD[Application CRUD]
|
||||||
|
AI[AI Service Layer]
|
||||||
|
FILES[Document Management]
|
||||||
|
end
|
||||||
|
|
||||||
|
subgraph "AI Agents"
|
||||||
|
RESEARCH[Research Agent]
|
||||||
|
RESUME[Resume Optimizer]
|
||||||
|
COVER[Cover Letter Generator]
|
||||||
|
end
|
||||||
|
|
||||||
|
subgraph "Data Storage"
|
||||||
|
PG[(PostgreSQL + pgvector)]
|
||||||
|
FILES[Document Storage]
|
||||||
|
end
|
||||||
|
|
||||||
|
subgraph "External AI"
|
||||||
|
CLAUDE[Claude AI]
|
||||||
|
OPENAI[OpenAI Embeddings]
|
||||||
|
end
|
||||||
|
|
||||||
|
UI --> AUTH
|
||||||
|
FORMS --> CRUD
|
||||||
|
EDITOR --> FILES
|
||||||
|
VIEWER --> FILES
|
||||||
|
CRUD --> AI
|
||||||
|
AI --> RESEARCH
|
||||||
|
AI --> RESUME
|
||||||
|
AI --> COVER
|
||||||
|
CRUD --> PG
|
||||||
|
FILES --> PG
|
||||||
|
RESEARCH --> CLAUDE
|
||||||
|
RESUME --> CLAUDE
|
||||||
|
COVER --> CLAUDE
|
||||||
|
AI --> OPENAI
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🔐 User Authentication (Web Application)
|
||||||
|
|
||||||
|
### JWT-Based Authentication System
|
||||||
|
```python
|
||||||
|
class AuthenticationService:
|
||||||
|
"""JWT-based authentication for web application"""
|
||||||
|
|
||||||
|
async def register_user(self, user_data: UserCreate) -> User:
|
||||||
|
"""Register new user account with validation"""
|
||||||
|
|
||||||
|
async def authenticate_user(self, credentials: UserLogin) -> AuthResult:
|
||||||
|
"""Authenticate user and return JWT access token"""
|
||||||
|
|
||||||
|
async def verify_token(self, token: str) -> User:
|
||||||
|
"""Verify JWT token and return authenticated user"""
|
||||||
|
|
||||||
|
async def refresh_token(self, refresh_token: str) -> AuthResult:
|
||||||
|
"""Refresh JWT access token"""
|
||||||
|
|
||||||
|
def create_access_token(self, user_id: str) -> str:
|
||||||
|
"""Create JWT access token with expiration"""
|
||||||
|
```
|
||||||
|
|
||||||
|
### Database Schema (Users)
|
||||||
|
```sql
|
||||||
|
-- User table with enhanced security
|
||||||
|
CREATE TABLE users (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
email VARCHAR(255) UNIQUE NOT NULL,
|
||||||
|
password_hash VARCHAR(255) NOT NULL,
|
||||||
|
first_name VARCHAR(100) NOT NULL,
|
||||||
|
last_name VARCHAR(100) NOT NULL,
|
||||||
|
is_active BOOLEAN DEFAULT TRUE,
|
||||||
|
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
|
||||||
|
updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
|
||||||
|
);
|
||||||
|
|
||||||
|
-- Enable row level security for multi-tenancy
|
||||||
|
ALTER TABLE users ENABLE ROW LEVEL SECURITY;
|
||||||
|
|
||||||
|
-- Create index for performance
|
||||||
|
CREATE INDEX idx_users_email ON users(email);
|
||||||
|
CREATE INDEX idx_users_active ON users(is_active);
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📋 Job Application Module
|
||||||
|
|
||||||
|
### Core Application Workflow
|
||||||
|
```python
|
||||||
|
class ApplicationService:
|
||||||
|
"""Core job application management"""
|
||||||
|
|
||||||
|
async def create_application(self, user_id: str, job_data: JobApplicationData) -> Application:
|
||||||
|
"""Create new job application with job description and URL"""
|
||||||
|
|
||||||
|
async def get_user_applications(self, user_id: str) -> List[Application]:
|
||||||
|
"""Get all applications for user"""
|
||||||
|
|
||||||
|
async def get_application(self, user_id: str, app_id: str) -> Application:
|
||||||
|
"""Get specific application with documents"""
|
||||||
|
|
||||||
|
async def update_application_status(self, user_id: str, app_id: str, status: str) -> None:
|
||||||
|
"""Update application status through workflow phases"""
|
||||||
|
```
|
||||||
|
|
||||||
|
### Application Data Model
|
||||||
|
```python
|
||||||
|
class JobApplicationData(BaseModel):
|
||||||
|
"""Input data for creating new application"""
|
||||||
|
job_url: Optional[str] = None
|
||||||
|
job_description: str
|
||||||
|
company_name: str
|
||||||
|
role_title: str
|
||||||
|
location: Optional[str] = None
|
||||||
|
priority_level: str = "medium"
|
||||||
|
additional_context: Optional[str] = None
|
||||||
|
|
||||||
|
class Application(BaseModel):
|
||||||
|
"""Core application entity"""
|
||||||
|
id: str
|
||||||
|
user_id: str
|
||||||
|
name: str # Auto-generated: company_role_YYYY_MM_DD
|
||||||
|
company_name: str
|
||||||
|
role_title: str
|
||||||
|
job_url: Optional[str]
|
||||||
|
job_description: str
|
||||||
|
status: ApplicationStatus # draft, research_complete, resume_ready, cover_letter_ready
|
||||||
|
|
||||||
|
# Phase completion tracking
|
||||||
|
research_completed: bool = False
|
||||||
|
resume_optimized: bool = False
|
||||||
|
cover_letter_generated: bool = False
|
||||||
|
|
||||||
|
created_at: datetime
|
||||||
|
updated_at: datetime
|
||||||
|
```
|
||||||
|
|
||||||
|
### Database Schema (Applications)
|
||||||
|
```sql
|
||||||
|
CREATE TABLE applications (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
user_id UUID REFERENCES users(id) ON DELETE CASCADE,
|
||||||
|
name VARCHAR(255) NOT NULL,
|
||||||
|
company_name VARCHAR(255) NOT NULL,
|
||||||
|
role_title VARCHAR(255) NOT NULL,
|
||||||
|
job_url TEXT,
|
||||||
|
job_description TEXT NOT NULL,
|
||||||
|
location VARCHAR(255),
|
||||||
|
priority_level VARCHAR(20) DEFAULT 'medium',
|
||||||
|
status VARCHAR(50) DEFAULT 'draft',
|
||||||
|
|
||||||
|
-- Phase tracking
|
||||||
|
research_completed BOOLEAN DEFAULT FALSE,
|
||||||
|
resume_optimized BOOLEAN DEFAULT FALSE,
|
||||||
|
cover_letter_generated BOOLEAN DEFAULT FALSE,
|
||||||
|
|
||||||
|
created_at TIMESTAMP DEFAULT NOW(),
|
||||||
|
updated_at TIMESTAMP DEFAULT NOW()
|
||||||
|
);
|
||||||
|
|
||||||
|
ALTER TABLE applications ENABLE ROW LEVEL SECURITY;
|
||||||
|
|
||||||
|
CREATE POLICY user_applications_policy ON applications
|
||||||
|
FOR ALL TO application_user
|
||||||
|
USING (user_id = current_setting('app.current_user_id')::UUID);
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🤖 AI Processing Workflow
|
||||||
|
|
||||||
|
### 3-Phase AI Orchestrator
|
||||||
|
```python
|
||||||
|
class AIOrchestrator:
|
||||||
|
"""Orchestrates the 3-phase AI workflow"""
|
||||||
|
|
||||||
|
def __init__(self, research_agent, resume_optimizer, cover_letter_generator):
|
||||||
|
self.research_agent = research_agent
|
||||||
|
self.resume_optimizer = resume_optimizer
|
||||||
|
self.cover_letter_generator = cover_letter_generator
|
||||||
|
|
||||||
|
async def execute_research_phase(self, application_id: str) -> ResearchReport:
|
||||||
|
"""Phase 1: Job analysis and company research"""
|
||||||
|
|
||||||
|
async def execute_resume_optimization(self, application_id: str) -> OptimizedResume:
|
||||||
|
"""Phase 2: Resume optimization based on research"""
|
||||||
|
|
||||||
|
async def execute_cover_letter_generation(self, application_id: str, user_context: str) -> CoverLetter:
|
||||||
|
"""Phase 3: Cover letter generation with user inputs"""
|
||||||
|
```
|
||||||
|
|
||||||
|
### Phase 1: Research Agent
|
||||||
|
```python
|
||||||
|
class ResearchAgent:
|
||||||
|
"""Job description analysis and company research"""
|
||||||
|
|
||||||
|
async def analyze_job_description(self, job_desc: str) -> JobAnalysis:
|
||||||
|
"""Extract requirements, skills, and key information"""
|
||||||
|
|
||||||
|
async def research_company_info(self, company_name: str) -> CompanyIntelligence:
|
||||||
|
"""Basic company research and insights"""
|
||||||
|
|
||||||
|
async def generate_strategic_positioning(self, job_analysis: JobAnalysis) -> StrategicPositioning:
|
||||||
|
"""Determine optimal candidate positioning"""
|
||||||
|
|
||||||
|
async def create_research_report(self, job_desc: str, company_name: str) -> ResearchReport:
|
||||||
|
"""Complete research phase output"""
|
||||||
|
```
|
||||||
|
|
||||||
|
### Phase 2: Resume Optimizer
|
||||||
|
```python
|
||||||
|
class ResumeOptimizer:
|
||||||
|
"""Resume optimization based on job requirements"""
|
||||||
|
|
||||||
|
async def analyze_resume_portfolio(self, user_id: str) -> ResumePortfolio:
|
||||||
|
"""Load and analyze user's resume library"""
|
||||||
|
|
||||||
|
async def optimize_resume_for_job(self, portfolio: ResumePortfolio, research: ResearchReport) -> OptimizedResume:
|
||||||
|
"""Create job-specific optimized resume"""
|
||||||
|
|
||||||
|
async def validate_resume_optimization(self, resume: OptimizedResume) -> ValidationReport:
|
||||||
|
"""Ensure resume meets requirements and constraints"""
|
||||||
|
```
|
||||||
|
|
||||||
|
### Phase 3: Cover Letter Generator
|
||||||
|
```python
|
||||||
|
class CoverLetterGenerator:
|
||||||
|
"""Cover letter generation with user context"""
|
||||||
|
|
||||||
|
async def analyze_writing_style(self, user_id: str) -> WritingStyle:
|
||||||
|
"""Analyze user's writing patterns from reference documents"""
|
||||||
|
|
||||||
|
async def generate_cover_letter(self, research: ResearchReport, resume: OptimizedResume,
|
||||||
|
user_context: str, writing_style: WritingStyle) -> CoverLetter:
|
||||||
|
"""Generate personalized cover letter"""
|
||||||
|
|
||||||
|
async def validate_cover_letter(self, cover_letter: CoverLetter) -> ValidationReport:
|
||||||
|
"""Ensure cover letter quality and authenticity"""
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📄 Document Management
|
||||||
|
|
||||||
|
### Document Storage & Retrieval
|
||||||
|
```python
|
||||||
|
class DocumentService:
|
||||||
|
"""Handle document storage and retrieval"""
|
||||||
|
|
||||||
|
async def save_document(self, user_id: str, app_id: str, doc_type: str, content: str) -> None:
|
||||||
|
"""Save generated document (research, resume, cover letter)"""
|
||||||
|
|
||||||
|
async def get_document(self, user_id: str, app_id: str, doc_type: str) -> Document:
|
||||||
|
"""Retrieve document for viewing/editing"""
|
||||||
|
|
||||||
|
async def update_document(self, user_id: str, app_id: str, doc_type: str, content: str) -> None:
|
||||||
|
"""Update document after user editing"""
|
||||||
|
|
||||||
|
async def get_all_documents(self, user_id: str, app_id: str) -> ApplicationDocuments:
|
||||||
|
"""Get all documents for an application"""
|
||||||
|
```
|
||||||
|
|
||||||
|
### Document Models
|
||||||
|
```python
|
||||||
|
class Document(BaseModel):
|
||||||
|
"""Base document model"""
|
||||||
|
id: str
|
||||||
|
application_id: str
|
||||||
|
document_type: str # research_report, optimized_resume, cover_letter
|
||||||
|
content: str
|
||||||
|
created_at: datetime
|
||||||
|
updated_at: datetime
|
||||||
|
|
||||||
|
class ApplicationDocuments(BaseModel):
|
||||||
|
"""All documents for an application"""
|
||||||
|
research_report: Optional[Document] = None
|
||||||
|
optimized_resume: Optional[Document] = None
|
||||||
|
cover_letter: Optional[Document] = None
|
||||||
|
```
|
||||||
|
|
||||||
|
### Database Schema (Documents)
|
||||||
|
```sql
|
||||||
|
CREATE TABLE documents (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
application_id UUID REFERENCES applications(id) ON DELETE CASCADE,
|
||||||
|
document_type VARCHAR(50) NOT NULL,
|
||||||
|
content TEXT NOT NULL,
|
||||||
|
created_at TIMESTAMP DEFAULT NOW(),
|
||||||
|
updated_at TIMESTAMP DEFAULT NOW()
|
||||||
|
);
|
||||||
|
|
||||||
|
ALTER TABLE documents ENABLE ROW LEVEL SECURITY;
|
||||||
|
|
||||||
|
CREATE POLICY user_documents_policy ON documents
|
||||||
|
FOR ALL TO application_user
|
||||||
|
USING (
|
||||||
|
application_id IN (
|
||||||
|
SELECT id FROM applications
|
||||||
|
WHERE user_id = current_setting('app.current_user_id')::UUID
|
||||||
|
)
|
||||||
|
);
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🎨 Frontend Interface (Dash + Mantine)
|
||||||
|
|
||||||
|
### Main Application Layout
|
||||||
|
```python
|
||||||
|
class JobForgeApp:
|
||||||
|
"""Main Dash application layout"""
|
||||||
|
|
||||||
|
def create_layout(self):
|
||||||
|
return dmc.MantineProvider([
|
||||||
|
dmc.AppShell([
|
||||||
|
dmc.Navbar([
|
||||||
|
ApplicationSidebar()
|
||||||
|
], width={"base": 300}),
|
||||||
|
dmc.Main([
|
||||||
|
ApplicationTopBar(),
|
||||||
|
MainContent()
|
||||||
|
])
|
||||||
|
])
|
||||||
|
])
|
||||||
|
```
|
||||||
|
|
||||||
|
### Application Sidebar
|
||||||
|
```python
|
||||||
|
class ApplicationSidebar:
|
||||||
|
"""Sidebar with applications list and navigation"""
|
||||||
|
|
||||||
|
def render(self, user_id: str):
|
||||||
|
return dmc.Stack([
|
||||||
|
# New Application Button
|
||||||
|
dmc.Button(
|
||||||
|
"➕ New Application",
|
||||||
|
id="new-app-btn",
|
||||||
|
fullWidth=True,
|
||||||
|
variant="filled"
|
||||||
|
),
|
||||||
|
|
||||||
|
# Applications List
|
||||||
|
dmc.Title("Applications", order=4),
|
||||||
|
dmc.ScrollArea([
|
||||||
|
ApplicationCard(app) for app in self.get_user_applications(user_id)
|
||||||
|
]),
|
||||||
|
|
||||||
|
# Resume Library Section
|
||||||
|
dmc.Divider(),
|
||||||
|
dmc.Title("Resume Library", order=4),
|
||||||
|
ResumeLibrarySection()
|
||||||
|
])
|
||||||
|
|
||||||
|
class ApplicationCard:
|
||||||
|
"""Individual application card in sidebar"""
|
||||||
|
|
||||||
|
def render(self, application: Application):
|
||||||
|
return dmc.Card([
|
||||||
|
dmc.Group([
|
||||||
|
dmc.Text(application.company_name, weight=600),
|
||||||
|
StatusBadge(application.status)
|
||||||
|
]),
|
||||||
|
dmc.Text(application.role_title, size="sm", color="dimmed"),
|
||||||
|
dmc.Text(application.created_at.strftime("%Y-%m-%d"), size="xs")
|
||||||
|
], id=f"app-card-{application.id}")
|
||||||
|
```
|
||||||
|
|
||||||
|
### Application Top Bar Navigation
|
||||||
|
```python
|
||||||
|
class ApplicationTopBar:
|
||||||
|
"""Top navigation bar for application phases"""
|
||||||
|
|
||||||
|
def render(self, application: Application):
|
||||||
|
return dmc.Group([
|
||||||
|
# Phase Navigation Buttons
|
||||||
|
PhaseButton("Research", "research", application.research_completed),
|
||||||
|
PhaseButton("Resume", "resume", application.resume_optimized),
|
||||||
|
PhaseButton("Cover Letter", "cover_letter", application.cover_letter_generated),
|
||||||
|
|
||||||
|
# Application Actions
|
||||||
|
dmc.Spacer(),
|
||||||
|
dmc.ActionIcon(
|
||||||
|
DashIconify(icon="tabler:settings"),
|
||||||
|
id="app-settings-btn"
|
||||||
|
)
|
||||||
|
])
|
||||||
|
|
||||||
|
class PhaseButton:
|
||||||
|
"""Navigation button for each phase"""
|
||||||
|
|
||||||
|
def render(self, label: str, phase: str, completed: bool):
|
||||||
|
icon = "tabler:check" if completed else "tabler:clock"
|
||||||
|
color = "green" if completed else "gray"
|
||||||
|
|
||||||
|
return dmc.Button([
|
||||||
|
DashIconify(icon=icon),
|
||||||
|
dmc.Text(label, ml="xs")
|
||||||
|
],
|
||||||
|
variant="subtle" if not completed else "filled",
|
||||||
|
color=color,
|
||||||
|
id=f"phase-{phase}-btn"
|
||||||
|
)
|
||||||
|
```
|
||||||
|
|
||||||
|
### Document Editor Interface
|
||||||
|
```python
|
||||||
|
class DocumentEditor:
|
||||||
|
"""Markdown document editor with preview"""
|
||||||
|
|
||||||
|
def render(self, document: Document):
|
||||||
|
return dmc.Container([
|
||||||
|
dmc.Grid([
|
||||||
|
# Editor Column
|
||||||
|
dmc.Col([
|
||||||
|
dmc.Title(f"Edit {document.document_type.replace('_', ' ').title()}", order=3),
|
||||||
|
dmc.Textarea(
|
||||||
|
value=document.content,
|
||||||
|
placeholder="Document content...",
|
||||||
|
minRows=20,
|
||||||
|
autosize=True,
|
||||||
|
id=f"editor-{document.document_type}"
|
||||||
|
),
|
||||||
|
dmc.Group([
|
||||||
|
dmc.Button("Save Changes", id="save-btn"),
|
||||||
|
dmc.Button("Cancel", variant="outline", id="cancel-btn")
|
||||||
|
])
|
||||||
|
], span=6),
|
||||||
|
|
||||||
|
# Preview Column
|
||||||
|
dmc.Col([
|
||||||
|
dmc.Title("Preview", order=3),
|
||||||
|
dmc.Container([
|
||||||
|
dcc.Markdown(document.content, id="preview-content")
|
||||||
|
], style={"border": "1px solid #e0e0e0", "padding": "1rem", "minHeight": "500px"})
|
||||||
|
], span=6)
|
||||||
|
])
|
||||||
|
])
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🗄️ MVP Database Schema
|
||||||
|
|
||||||
|
### Complete Database Setup
|
||||||
|
```sql
|
||||||
|
-- Enable required extensions
|
||||||
|
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
|
||||||
|
CREATE EXTENSION IF NOT EXISTS vector;
|
||||||
|
|
||||||
|
-- Users table
|
||||||
|
CREATE TABLE users (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
email VARCHAR(255) UNIQUE NOT NULL,
|
||||||
|
password_hash VARCHAR(255) NOT NULL,
|
||||||
|
full_name VARCHAR(255) NOT NULL,
|
||||||
|
created_at TIMESTAMP DEFAULT NOW(),
|
||||||
|
updated_at TIMESTAMP DEFAULT NOW()
|
||||||
|
);
|
||||||
|
|
||||||
|
-- Applications table
|
||||||
|
CREATE TABLE applications (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
user_id UUID REFERENCES users(id) ON DELETE CASCADE,
|
||||||
|
name VARCHAR(255) NOT NULL,
|
||||||
|
company_name VARCHAR(255) NOT NULL,
|
||||||
|
role_title VARCHAR(255) NOT NULL,
|
||||||
|
job_url TEXT,
|
||||||
|
job_description TEXT NOT NULL,
|
||||||
|
location VARCHAR(255),
|
||||||
|
priority_level VARCHAR(20) DEFAULT 'medium',
|
||||||
|
status VARCHAR(50) DEFAULT 'draft',
|
||||||
|
|
||||||
|
research_completed BOOLEAN DEFAULT FALSE,
|
||||||
|
resume_optimized BOOLEAN DEFAULT FALSE,
|
||||||
|
cover_letter_generated BOOLEAN DEFAULT FALSE,
|
||||||
|
|
||||||
|
created_at TIMESTAMP DEFAULT NOW(),
|
||||||
|
updated_at TIMESTAMP DEFAULT NOW()
|
||||||
|
);
|
||||||
|
|
||||||
|
-- Documents table
|
||||||
|
CREATE TABLE documents (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
application_id UUID REFERENCES applications(id) ON DELETE CASCADE,
|
||||||
|
document_type VARCHAR(50) NOT NULL,
|
||||||
|
content TEXT NOT NULL,
|
||||||
|
created_at TIMESTAMP DEFAULT NOW(),
|
||||||
|
updated_at TIMESTAMP DEFAULT NOW(),
|
||||||
|
|
||||||
|
UNIQUE(application_id, document_type)
|
||||||
|
);
|
||||||
|
|
||||||
|
-- Resume library table
|
||||||
|
CREATE TABLE user_resumes (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
user_id UUID REFERENCES users(id) ON DELETE CASCADE,
|
||||||
|
name VARCHAR(255) NOT NULL,
|
||||||
|
content TEXT NOT NULL,
|
||||||
|
focus_area VARCHAR(100),
|
||||||
|
is_primary BOOLEAN DEFAULT FALSE,
|
||||||
|
created_at TIMESTAMP DEFAULT NOW(),
|
||||||
|
updated_at TIMESTAMP DEFAULT NOW()
|
||||||
|
);
|
||||||
|
|
||||||
|
-- Basic vector embeddings (for future enhancement)
|
||||||
|
CREATE TABLE document_embeddings (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
document_id UUID REFERENCES documents(id) ON DELETE CASCADE,
|
||||||
|
embedding vector(1536),
|
||||||
|
created_at TIMESTAMP DEFAULT NOW()
|
||||||
|
);
|
||||||
|
|
||||||
|
-- Row Level Security
|
||||||
|
ALTER TABLE users ENABLE ROW LEVEL SECURITY;
|
||||||
|
ALTER TABLE applications ENABLE ROW LEVEL SECURITY;
|
||||||
|
ALTER TABLE documents ENABLE ROW LEVEL SECURITY;
|
||||||
|
ALTER TABLE user_resumes ENABLE ROW LEVEL SECURITY;
|
||||||
|
|
||||||
|
-- Security policies
|
||||||
|
CREATE POLICY user_own_data ON applications FOR ALL USING (user_id = current_setting('app.current_user_id')::UUID);
|
||||||
|
CREATE POLICY user_own_documents ON documents FOR ALL USING (
|
||||||
|
application_id IN (SELECT id FROM applications WHERE user_id = current_setting('app.current_user_id')::UUID)
|
||||||
|
);
|
||||||
|
CREATE POLICY user_own_resumes ON user_resumes FOR ALL USING (user_id = current_setting('app.current_user_id')::UUID);
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🚀 Web Application Development Plan
|
||||||
|
|
||||||
|
### Development Phases
|
||||||
|
|
||||||
|
#### **Phase 1: Infrastructure & Authentication (Weeks 1-2)**
|
||||||
|
- Docker containerization for development and production
|
||||||
|
- PostgreSQL 16 with pgvector extension setup
|
||||||
|
- FastAPI backend with JWT authentication
|
||||||
|
- Row Level Security (RLS) implementation
|
||||||
|
- Basic CI/CD pipeline setup
|
||||||
|
|
||||||
|
#### **Phase 2: Core Web Application (Weeks 3-4)**
|
||||||
|
- Dash + Mantine responsive web interface
|
||||||
|
- Application CRUD operations with API endpoints
|
||||||
|
- User dashboard and application management
|
||||||
|
- Database integration with async operations
|
||||||
|
- Multi-tenant data isolation
|
||||||
|
|
||||||
|
#### **Phase 3: AI Integration (Weeks 5-6)**
|
||||||
|
- Claude API integration for document generation
|
||||||
|
- OpenAI API for embeddings and analysis
|
||||||
|
- Async AI service layer implementation
|
||||||
|
- Error handling and retry mechanisms
|
||||||
|
- AI service rate limiting and monitoring
|
||||||
|
|
||||||
|
#### **Phase 4: Production Deployment (Weeks 7-8)**
|
||||||
|
- Server deployment with Docker Compose
|
||||||
|
- Nginx reverse proxy and SSL configuration
|
||||||
|
- Database backup and monitoring setup
|
||||||
|
- Performance optimization and caching
|
||||||
|
- Security hardening and testing
|
||||||
|
|
||||||
|
### Prototype Success Criteria
|
||||||
|
- ✅ Secure multi-user web application deployed to server
|
||||||
|
- ✅ JWT-based authentication with user registration/login
|
||||||
|
- ✅ Full CRUD operations for job applications
|
||||||
|
- ✅ AI-powered cover letter generation via web interface
|
||||||
|
- ✅ Responsive web UI with modern UX design
|
||||||
|
- ✅ Secure data storage with user isolation (RLS)
|
||||||
|
- ✅ Production-ready deployment with monitoring
|
||||||
|
- ✅ Scalable architecture for future enhancements
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🐳 Docker Production Deployment
|
||||||
|
|
||||||
|
### Production Environment
|
||||||
|
```yaml
|
||||||
|
# docker-compose.yml
|
||||||
|
version: '3.8'
|
||||||
|
|
||||||
|
services:
|
||||||
|
# FastAPI + Dash Web Application
|
||||||
|
jobforge-app:
|
||||||
|
build:
|
||||||
|
context: .
|
||||||
|
dockerfile: Dockerfile
|
||||||
|
target: production
|
||||||
|
container_name: jobforge-app
|
||||||
|
environment:
|
||||||
|
- DATABASE_URL=postgresql://jobforge:${DB_PASSWORD}@postgres:5432/jobforge
|
||||||
|
- CLAUDE_API_KEY=${CLAUDE_API_KEY}
|
||||||
|
- OPENAI_API_KEY=${OPENAI_API_KEY}
|
||||||
|
- JWT_SECRET=${JWT_SECRET}
|
||||||
|
- DEBUG=false
|
||||||
|
- LOG_LEVEL=INFO
|
||||||
|
volumes:
|
||||||
|
- ./uploads:/app/uploads
|
||||||
|
- ./logs:/var/log/jobforge
|
||||||
|
depends_on:
|
||||||
|
postgres:
|
||||||
|
condition: service_healthy
|
||||||
|
networks:
|
||||||
|
- jobforge-network
|
||||||
|
restart: unless-stopped
|
||||||
|
|
||||||
|
# PostgreSQL with pgvector
|
||||||
|
postgres:
|
||||||
|
image: pgvector/pgvector:pg16
|
||||||
|
container_name: jobforge-postgres
|
||||||
|
environment:
|
||||||
|
- POSTGRES_DB=jobforge
|
||||||
|
- POSTGRES_USER=jobforge
|
||||||
|
- POSTGRES_PASSWORD=${DB_PASSWORD}
|
||||||
|
volumes:
|
||||||
|
- postgres_data:/var/lib/postgresql/data
|
||||||
|
- ./backups:/backups
|
||||||
|
networks:
|
||||||
|
- jobforge-network
|
||||||
|
restart: unless-stopped
|
||||||
|
healthcheck:
|
||||||
|
test: ["CMD-SHELL", "pg_isready -U jobforge -d jobforge"]
|
||||||
|
interval: 10s
|
||||||
|
timeout: 5s
|
||||||
|
retries: 5
|
||||||
|
|
||||||
|
# Nginx Reverse Proxy
|
||||||
|
nginx:
|
||||||
|
image: nginx:alpine
|
||||||
|
container_name: jobforge-nginx
|
||||||
|
ports:
|
||||||
|
- "80:80"
|
||||||
|
- "443:443"
|
||||||
|
volumes:
|
||||||
|
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
|
||||||
|
- ./ssl:/etc/nginx/ssl:ro
|
||||||
|
depends_on:
|
||||||
|
- jobforge-app
|
||||||
|
networks:
|
||||||
|
- jobforge-network
|
||||||
|
restart: unless-stopped
|
||||||
|
|
||||||
|
networks:
|
||||||
|
jobforge-network:
|
||||||
|
driver: bridge
|
||||||
|
|
||||||
|
volumes:
|
||||||
|
postgres_data:
|
||||||
|
driver: local
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📁 Web Application Project Structure
|
||||||
|
|
||||||
|
```
|
||||||
|
job-forge/
|
||||||
|
├── docker-compose.yml
|
||||||
|
├── Dockerfile
|
||||||
|
├── requirements.txt
|
||||||
|
├── requirements-dev.txt
|
||||||
|
├── .env.example
|
||||||
|
├── pytest.ini
|
||||||
|
├── alembic.ini
|
||||||
|
├── CLAUDE.md
|
||||||
|
├── README.md
|
||||||
|
├── app/
|
||||||
|
│ ├── main.py # FastAPI + Dash application entry
|
||||||
|
│ ├── core/
|
||||||
|
│ │ ├── config.py # Application configuration
|
||||||
|
│ │ ├── database.py # Database connection and session
|
||||||
|
│ │ ├── security.py # JWT authentication utilities
|
||||||
|
│ │ └── exceptions.py # Custom exception handlers
|
||||||
|
│ ├── api/
|
||||||
|
│ │ ├── v1/
|
||||||
|
│ │ │ ├── auth.py # Authentication endpoints
|
||||||
|
│ │ │ ├── applications.py # Application CRUD endpoints
|
||||||
|
│ │ │ └── documents.py # Document management endpoints
|
||||||
|
│ │ └── dependencies.py # FastAPI dependencies
|
||||||
|
│ ├── models/
|
||||||
|
│ │ ├── user.py # User SQLAlchemy model
|
||||||
|
│ │ ├── application.py # Application SQLAlchemy model
|
||||||
|
│ │ └── document.py # Document SQLAlchemy model
|
||||||
|
│ ├── schemas/
|
||||||
|
│ │ ├── user.py # User Pydantic schemas
|
||||||
|
│ │ ├── application.py # Application Pydantic schemas
|
||||||
|
│ │ └── auth.py # Authentication schemas
|
||||||
|
│ ├── crud/
|
||||||
|
│ │ ├── user.py # User database operations
|
||||||
|
│ │ ├── application.py # Application database operations
|
||||||
|
│ │ └── document.py # Document database operations
|
||||||
|
│ ├── services/
|
||||||
|
│ │ ├── auth.py # Authentication service
|
||||||
|
│ │ ├── application.py # Application business logic
|
||||||
|
│ │ └── ai/
|
||||||
|
│ │ ├── claude_service.py # Claude API integration
|
||||||
|
│ │ ├── openai_service.py # OpenAI API integration
|
||||||
|
│ │ └── document_generator.py # AI document generation
|
||||||
|
│ └── frontend/
|
||||||
|
│ ├── app.py # Dash application setup
|
||||||
|
│ ├── layouts/
|
||||||
|
│ │ ├── dashboard.py # Main dashboard layout
|
||||||
|
│ │ ├── auth.py # Login/register layouts
|
||||||
|
│ │ └── application.py # Application detail layout
|
||||||
|
│ ├── components/
|
||||||
|
│ │ ├── navigation.py # Navigation components
|
||||||
|
│ │ ├── forms.py # Form components
|
||||||
|
│ │ └── modals.py # Modal components
|
||||||
|
│ └── callbacks/
|
||||||
|
│ ├── auth.py # Authentication callbacks
|
||||||
|
│ ├── application.py # Application callbacks
|
||||||
|
│ └── navigation.py # Navigation callbacks
|
||||||
|
├── alembic/
|
||||||
|
│ ├── versions/ # Database migration files
|
||||||
|
│ └── env.py # Alembic configuration
|
||||||
|
├── tests/
|
||||||
|
│ ├── conftest.py # Pytest configuration and fixtures
|
||||||
|
│ ├── unit/ # Unit tests
|
||||||
|
│ ├── integration/ # Integration tests
|
||||||
|
│ └── e2e/ # End-to-end tests
|
||||||
|
├── docs/
|
||||||
|
│ ├── README.md # Main documentation hub
|
||||||
|
│ ├── development/ # Development documentation
|
||||||
|
│ ├── infrastructure/ # Deployment documentation
|
||||||
|
│ └── testing/ # Testing documentation
|
||||||
|
├── nginx/
|
||||||
|
│ └── nginx.conf # Nginx configuration
|
||||||
|
├── logs/ # Application logs
|
||||||
|
├── uploads/ # File uploads
|
||||||
|
└── backups/ # Database backups
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
*This web application architecture provides a comprehensive, production-ready solution for AI-powered job application management. The FastAPI backend with Dash frontend delivers a modern web experience while maintaining scalability and security for prototype development and future enhancements.*
|
||||||
49
docs/lessons-learned/001-dependency-version-conflicts.md
Normal file
49
docs/lessons-learned/001-dependency-version-conflicts.md
Normal file
@@ -0,0 +1,49 @@
|
|||||||
|
# Lesson Learned #001: Dependency Version Conflicts
|
||||||
|
|
||||||
|
## Issue Name
|
||||||
|
Dependency Version Conflicts in Requirements Files
|
||||||
|
|
||||||
|
## Date
|
||||||
|
2025-08-02
|
||||||
|
|
||||||
|
## Description
|
||||||
|
During project setup, encountered version conflicts with Python package dependencies:
|
||||||
|
- `pytest-dash==2.5.0` - Version did not exist in PyPI (max available: 2.1.2)
|
||||||
|
- `python-bcrypt==4.1.2` - Incorrect package name (should be `bcrypt==4.1.2`)
|
||||||
|
|
||||||
|
## Error Messages
|
||||||
|
```
|
||||||
|
ERROR: Could not find a version that satisfies the requirement pytest-dash==2.5.0
|
||||||
|
(from versions: 0.1.0, 0.1.1, 0.1.2, 0.1.3, 0.2.0rc1, 0.2.0rc2, 0.2.0rc3, 1.0.0, 1.0.1, 1.1.0, 2.0.0rc1, 2.0.0rc2, 2.0.0rc3, 2.0.0rc4, 2.0.0rc5, 2.0.0, 2.1.0, 2.1.1, 2.1.2)
|
||||||
|
|
||||||
|
ERROR: Could not find a version that satisfies the requirement python-bcrypt==4.1.2
|
||||||
|
(from versions: 0.3.1, 0.3.2)
|
||||||
|
```
|
||||||
|
|
||||||
|
## Root Cause
|
||||||
|
1. Incorrect package versions specified without checking PyPI availability
|
||||||
|
2. Wrong package name used for bcrypt library
|
||||||
|
|
||||||
|
## Solution Applied
|
||||||
|
1. **Updated pytest-dash version**:
|
||||||
|
- Changed from `pytest-dash==2.5.0` to `pytest-dash==2.1.2`
|
||||||
|
- Verified latest available version on PyPI
|
||||||
|
|
||||||
|
2. **Fixed bcrypt package name**:
|
||||||
|
- Changed from `python-bcrypt==4.1.2` to `bcrypt==4.1.2`
|
||||||
|
- Used correct package name
|
||||||
|
|
||||||
|
## Files Modified
|
||||||
|
- `requirements-backend.txt` - Fixed bcrypt package name
|
||||||
|
- `requirements-frontend.txt` - Updated pytest-dash version
|
||||||
|
|
||||||
|
## Prevention Strategy
|
||||||
|
1. Always verify package versions exist on PyPI before adding to requirements
|
||||||
|
2. Use `pip search` or check PyPI website for correct package names
|
||||||
|
3. Consider using version ranges instead of exact pins for non-critical dependencies
|
||||||
|
4. Implement CI/CD checks to validate requirements files
|
||||||
|
|
||||||
|
## Impact
|
||||||
|
- ✅ All dependencies now install successfully
|
||||||
|
- ✅ Project setup process is streamlined
|
||||||
|
- ✅ Development environment can be started without version conflicts
|
||||||
77
docs/lessons-learned/002-project-structure-organization.md
Normal file
77
docs/lessons-learned/002-project-structure-organization.md
Normal file
@@ -0,0 +1,77 @@
|
|||||||
|
# Lesson Learned #002: Project Structure Organization
|
||||||
|
|
||||||
|
## Issue Name
|
||||||
|
Project Structure Organization and Clean Root Directory
|
||||||
|
|
||||||
|
## Date
|
||||||
|
2025-08-02
|
||||||
|
|
||||||
|
## Description
|
||||||
|
Initial project setup had Docker configuration files scattered in the root directory, making the project structure cluttered and harder to navigate. This violated clean project organization principles.
|
||||||
|
|
||||||
|
## Root Cause
|
||||||
|
- Docker files (Dockerfile.backend, Dockerfile.frontend, docker-compose.yml) were placed in project root
|
||||||
|
- No established guidelines for file organization
|
||||||
|
- Lack of mandatory documentation for project issues
|
||||||
|
|
||||||
|
## Solution Applied
|
||||||
|
1. **Created organized folder structure**:
|
||||||
|
```
|
||||||
|
job-forge/
|
||||||
|
├── src/ # Source code only
|
||||||
|
├── tests/ # Test files only
|
||||||
|
├── docs/ # All documentation
|
||||||
|
├── docker/ # All Docker-related files
|
||||||
|
├── database/ # Database scripts and migrations
|
||||||
|
├── .env.example # Environment template
|
||||||
|
├── requirements-*.txt # Python dependencies
|
||||||
|
├── pytest.ini # Test configuration
|
||||||
|
└── README.md # Main project readme
|
||||||
|
```
|
||||||
|
|
||||||
|
2. **Moved Docker files to dedicated folder**:
|
||||||
|
- Moved all Docker files to `docker/` directory
|
||||||
|
- Updated docker-compose.yml paths to reference parent directory (`../`)
|
||||||
|
- Updated project documentation to reflect new structure
|
||||||
|
|
||||||
|
3. **Created lessons-learned process**:
|
||||||
|
- Created `docs/lessons-learned/` folder
|
||||||
|
- Established mandatory documentation process for all issues
|
||||||
|
- Added sequential numbering system for lesson learned entries
|
||||||
|
|
||||||
|
## Files Modified
|
||||||
|
- `docker/docker-compose.yml` - Updated paths for new structure
|
||||||
|
- `CLAUDE.md` - Added project structure requirements and lessons learned process
|
||||||
|
- `.claude/agents/*.md` - Updated all agent files with structure requirements
|
||||||
|
- `README.md` - Updated quick start instructions
|
||||||
|
|
||||||
|
## New Mandatory Requirements
|
||||||
|
1. **Clean Root Directory**: Only essential files in project root
|
||||||
|
2. **Docker Organization**: All Docker files in `docker/` folder
|
||||||
|
3. **Lessons Learned**: Document every issue in `docs/lessons-learned/`
|
||||||
|
4. **Sequential Documentation**: Use numbered format (###-issue-name.md)
|
||||||
|
|
||||||
|
## Prevention Strategy
|
||||||
|
1. Establish clear folder structure guidelines in project documentation
|
||||||
|
2. Add project structure validation to CI/CD if implemented
|
||||||
|
3. Regular project structure reviews during development
|
||||||
|
4. Mandatory issue documentation process for all team members
|
||||||
|
|
||||||
|
## Usage Instructions
|
||||||
|
```bash
|
||||||
|
# Start development environment from docker folder
|
||||||
|
cd docker
|
||||||
|
docker compose up -d
|
||||||
|
|
||||||
|
# Access applications
|
||||||
|
# Frontend: http://localhost:8501
|
||||||
|
# Backend: http://localhost:8000
|
||||||
|
# Database: localhost:5432
|
||||||
|
```
|
||||||
|
|
||||||
|
## Impact
|
||||||
|
- ✅ Clean, organized project structure
|
||||||
|
- ✅ Easier navigation and maintenance
|
||||||
|
- ✅ Established process for documenting project issues
|
||||||
|
- ✅ Better adherence to software engineering best practices
|
||||||
|
- ✅ Updated all team documentation and agent instructions
|
||||||
120
docs/lessons-learned/003-requirements-file-organization.md
Normal file
120
docs/lessons-learned/003-requirements-file-organization.md
Normal file
@@ -0,0 +1,120 @@
|
|||||||
|
# Lesson Learned #003: Requirements File Organization
|
||||||
|
|
||||||
|
## Issue Name
|
||||||
|
Missing main requirements.txt for local development and testing
|
||||||
|
|
||||||
|
## Description
|
||||||
|
After implementing comprehensive test coverage, tests could not be run locally due to missing dependencies. The project had separate `requirements-backend.txt` and `requirements-frontend.txt` files for Docker containers, but no unified requirements file for local development.
|
||||||
|
|
||||||
|
## Error Messages
|
||||||
|
```
|
||||||
|
ModuleNotFoundError: No module named 'structlog'
|
||||||
|
ModuleNotFoundError: No module named 'pytest'
|
||||||
|
```
|
||||||
|
|
||||||
|
## Root Cause
|
||||||
|
1. **Fragmented Dependencies**: Backend and frontend requirements were split into separate files for Docker optimization
|
||||||
|
2. **Missing Local Setup**: No unified requirements file for local development and testing
|
||||||
|
3. **Documentation Gap**: README didn't clearly explain how to install dependencies for local testing
|
||||||
|
|
||||||
|
## Solution Implemented
|
||||||
|
|
||||||
|
### 1. Created Main Requirements File
|
||||||
|
- **File**: `requirements.txt`
|
||||||
|
- **Purpose**: Combined all dependencies for local development
|
||||||
|
- **Content**: Merged backend and frontend requirements
|
||||||
|
|
||||||
|
### 2. Created Development Requirements File
|
||||||
|
- **File**: `dev-requirements.txt`
|
||||||
|
- **Purpose**: Testing and development dependencies only
|
||||||
|
- **Content**: pytest, black, flake8, mypy, and core dependencies needed for testing
|
||||||
|
|
||||||
|
### 3. Updated Documentation
|
||||||
|
- **File**: `README.md`
|
||||||
|
- **Section**: Quick Start
|
||||||
|
- **Addition**: Local development setup instructions with proper pip install commands
|
||||||
|
|
||||||
|
### 4. Maintained Docker Optimization
|
||||||
|
- **Approach**: Kept separate `requirements-backend.txt` and `requirements-frontend.txt` for Docker containers
|
||||||
|
- **Benefit**: Smaller container images with only necessary dependencies
|
||||||
|
|
||||||
|
## File Structure Created
|
||||||
|
```
|
||||||
|
job-forge/
|
||||||
|
├── requirements.txt # All dependencies for local development
|
||||||
|
├── dev-requirements.txt # Development and testing dependencies only
|
||||||
|
├── requirements-backend.txt # Backend container dependencies (existing)
|
||||||
|
├── requirements-frontend.txt # Frontend container dependencies (existing)
|
||||||
|
└── README.md # Updated with local setup instructions
|
||||||
|
```
|
||||||
|
|
||||||
|
## Prevention Strategy
|
||||||
|
|
||||||
|
### 1. Requirements File Standards
|
||||||
|
- **Main Requirements**: Always maintain a unified `requirements.txt` for local development
|
||||||
|
- **Development Requirements**: Separate `dev-requirements.txt` for testing tools
|
||||||
|
- **Container Requirements**: Keep optimized files for Docker containers
|
||||||
|
|
||||||
|
### 2. Documentation Requirements
|
||||||
|
- **Installation Instructions**: Clear pip install commands in README
|
||||||
|
- **Testing Setup**: Document how to run tests locally vs in containers
|
||||||
|
- **Dependencies Explanation**: Explain the purpose of each requirements file
|
||||||
|
|
||||||
|
### 3. Testing Integration
|
||||||
|
- **Local Testing**: Ensure tests can run with local pip-installed dependencies
|
||||||
|
- **Container Testing**: Maintain ability to test within Docker environment
|
||||||
|
- **CI/CD Integration**: Use appropriate requirements file for each environment
|
||||||
|
|
||||||
|
## Implementation Details
|
||||||
|
|
||||||
|
### Requirements.txt Content
|
||||||
|
```
|
||||||
|
# Combined requirements for local development
|
||||||
|
fastapi==0.109.2
|
||||||
|
uvicorn[standard]==0.27.1
|
||||||
|
# ... (all backend and frontend dependencies)
|
||||||
|
pytest==8.0.2
|
||||||
|
pytest-asyncio==0.23.5
|
||||||
|
# ... (all testing dependencies)
|
||||||
|
```
|
||||||
|
|
||||||
|
### Dev-Requirements.txt Content
|
||||||
|
```
|
||||||
|
# Development and testing only
|
||||||
|
pytest==8.0.2
|
||||||
|
pytest-asyncio==0.23.5
|
||||||
|
pytest-cov==4.0.0
|
||||||
|
black==24.2.0
|
||||||
|
# ... (minimal set for testing)
|
||||||
|
```
|
||||||
|
|
||||||
|
### README Update
|
||||||
|
```bash
|
||||||
|
# For local development and testing
|
||||||
|
pip install -r requirements.txt
|
||||||
|
|
||||||
|
# For development dependencies only
|
||||||
|
pip install -r dev-requirements.txt
|
||||||
|
|
||||||
|
# Run tests locally
|
||||||
|
python validate_tests.py
|
||||||
|
python run_tests.py
|
||||||
|
pytest
|
||||||
|
```
|
||||||
|
|
||||||
|
## Key Takeaways
|
||||||
|
|
||||||
|
1. **Multiple Requirements Files**: Different environments need different dependency sets
|
||||||
|
2. **Local Development Priority**: Always provide easy local setup for developers
|
||||||
|
3. **Documentation Clarity**: Clear installation instructions prevent frustration
|
||||||
|
4. **Container Optimization**: Keep container-specific requirements minimal and focused
|
||||||
|
|
||||||
|
## Status
|
||||||
|
✅ **RESOLVED** - Created unified requirements files and updated documentation
|
||||||
|
|
||||||
|
## Related Files
|
||||||
|
- `requirements.txt` (new)
|
||||||
|
- `dev-requirements.txt` (new)
|
||||||
|
- `README.md` (updated)
|
||||||
|
- `requirements-backend.txt` (existing, unchanged)
|
||||||
|
- `requirements-frontend.txt` (existing, unchanged)
|
||||||
769
docs/testing/qa_procedures.md
Normal file
769
docs/testing/qa_procedures.md
Normal file
@@ -0,0 +1,769 @@
|
|||||||
|
# QA Procedures - Job Forge
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
|
||||||
|
This document outlines the Quality Assurance procedures for Job Forge, including testing strategies, quality gates, bug reporting, and release validation processes.
|
||||||
|
|
||||||
|
## Testing Strategy
|
||||||
|
|
||||||
|
### Test Pyramid for Job Forge
|
||||||
|
```
|
||||||
|
/\ E2E Tests (10%)
|
||||||
|
/ \ - Critical user workflows
|
||||||
|
/____\ - Cross-browser testing
|
||||||
|
/ \ Integration Tests (20%)
|
||||||
|
/ \ - API endpoint testing
|
||||||
|
\ / - Database RLS validation
|
||||||
|
\______/ - AI service integration
|
||||||
|
\ / Unit Tests (70%)
|
||||||
|
\ / - Business logic
|
||||||
|
\/ - Authentication
|
||||||
|
- Data validation
|
||||||
|
```
|
||||||
|
|
||||||
|
### 1. Unit Testing (70% of tests)
|
||||||
|
|
||||||
|
#### Test Categories
|
||||||
|
- **Authentication & Security**: Login, JWT tokens, password hashing
|
||||||
|
- **Business Logic**: Application CRUD operations, status transitions
|
||||||
|
- **Data Validation**: Pydantic model validation, input sanitization
|
||||||
|
- **AI Integration**: Service mocking, error handling, rate limiting
|
||||||
|
- **Database Operations**: RLS policies, query optimization
|
||||||
|
|
||||||
|
#### Running Unit Tests
|
||||||
|
```bash
|
||||||
|
# Run all unit tests
|
||||||
|
pytest tests/unit/ -v
|
||||||
|
|
||||||
|
# Run specific test file
|
||||||
|
pytest tests/unit/test_auth_service.py -v
|
||||||
|
|
||||||
|
# Run with coverage
|
||||||
|
pytest tests/unit/ --cov=app --cov-report=html
|
||||||
|
|
||||||
|
# Run tests matching pattern
|
||||||
|
pytest -k "test_auth" -v
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Unit Test Example
|
||||||
|
```python
|
||||||
|
# tests/unit/test_application_service.py
|
||||||
|
import pytest
|
||||||
|
from unittest.mock import AsyncMock
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_create_application_with_ai_generation(test_db, test_user, mock_claude_service):
|
||||||
|
"""Test application creation with AI cover letter generation."""
|
||||||
|
|
||||||
|
# Arrange
|
||||||
|
mock_claude_service.generate_cover_letter.return_value = "Generated cover letter"
|
||||||
|
|
||||||
|
app_data = ApplicationCreate(
|
||||||
|
company_name="AI Corp",
|
||||||
|
role_title="ML Engineer",
|
||||||
|
job_description="Python ML position",
|
||||||
|
status="draft"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Act
|
||||||
|
with patch('app.services.ai.claude_service.ClaudeService', return_value=mock_claude_service):
|
||||||
|
application = await create_application(test_db, app_data, test_user.id)
|
||||||
|
|
||||||
|
# Assert
|
||||||
|
assert application.company_name == "AI Corp"
|
||||||
|
assert application.cover_letter == "Generated cover letter"
|
||||||
|
mock_claude_service.generate_cover_letter.assert_called_once()
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Integration Testing (20% of tests)
|
||||||
|
|
||||||
|
#### Test Categories
|
||||||
|
- **API Integration**: Full request/response testing with authentication
|
||||||
|
- **Database Integration**: Multi-tenant isolation, RLS policy validation
|
||||||
|
- **AI Service Integration**: Real API calls with mocking strategies
|
||||||
|
- **Service Layer Integration**: Complete workflow testing
|
||||||
|
|
||||||
|
#### Running Integration Tests
|
||||||
|
```bash
|
||||||
|
# Run integration tests
|
||||||
|
pytest tests/integration/ -v
|
||||||
|
|
||||||
|
# Run with test database
|
||||||
|
pytest tests/integration/ --db-url=postgresql://test:test@localhost:5432/jobforge_test
|
||||||
|
|
||||||
|
# Run specific integration test
|
||||||
|
pytest tests/integration/test_api_auth.py::TestAuthenticationEndpoints::test_complete_registration_flow -v
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Integration Test Example
|
||||||
|
```python
|
||||||
|
# tests/integration/test_api_applications.py
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_complete_application_workflow(async_client, test_user_token):
|
||||||
|
"""Test complete application workflow from creation to update."""
|
||||||
|
|
||||||
|
headers = {"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
|
||||||
|
# 1. Create application
|
||||||
|
app_data = {
|
||||||
|
"company_name": "Integration Test Corp",
|
||||||
|
"role_title": "Software Engineer",
|
||||||
|
"job_description": "Full-stack developer position",
|
||||||
|
"status": "draft"
|
||||||
|
}
|
||||||
|
|
||||||
|
create_response = await async_client.post(
|
||||||
|
"/api/v1/applications/",
|
||||||
|
json=app_data,
|
||||||
|
headers=headers
|
||||||
|
)
|
||||||
|
assert create_response.status_code == 201
|
||||||
|
|
||||||
|
app_id = create_response.json()["id"]
|
||||||
|
|
||||||
|
# 2. Get application
|
||||||
|
get_response = await async_client.get(
|
||||||
|
f"/api/v1/applications/{app_id}",
|
||||||
|
headers=headers
|
||||||
|
)
|
||||||
|
assert get_response.status_code == 200
|
||||||
|
|
||||||
|
# 3. Update application status
|
||||||
|
update_response = await async_client.put(
|
||||||
|
f"/api/v1/applications/{app_id}",
|
||||||
|
json={"status": "applied"},
|
||||||
|
headers=headers
|
||||||
|
)
|
||||||
|
assert update_response.status_code == 200
|
||||||
|
assert update_response.json()["status"] == "applied"
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. End-to-End Testing (10% of tests)
|
||||||
|
|
||||||
|
#### Test Categories
|
||||||
|
- **Critical User Journeys**: Registration → Login → Create Application → Generate Cover Letter
|
||||||
|
- **Cross-browser Compatibility**: Chrome, Firefox, Safari, Edge
|
||||||
|
- **Performance Testing**: Response times, concurrent users
|
||||||
|
- **Error Scenario Testing**: Network failures, service outages
|
||||||
|
|
||||||
|
#### E2E Test Tools Setup
|
||||||
|
```bash
|
||||||
|
# Install Playwright for E2E testing
|
||||||
|
pip install playwright
|
||||||
|
playwright install
|
||||||
|
|
||||||
|
# Run E2E tests
|
||||||
|
pytest tests/e2e/ -v --headed # With browser UI
|
||||||
|
pytest tests/e2e/ -v # Headless mode
|
||||||
|
```
|
||||||
|
|
||||||
|
#### E2E Test Example
|
||||||
|
```python
|
||||||
|
# tests/e2e/test_user_workflows.py
|
||||||
|
import pytest
|
||||||
|
from playwright.async_api import async_playwright
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_complete_user_journey():
|
||||||
|
"""Test complete user journey from registration to application creation."""
|
||||||
|
|
||||||
|
async with async_playwright() as p:
|
||||||
|
browser = await p.chromium.launch()
|
||||||
|
page = await browser.new_page()
|
||||||
|
|
||||||
|
try:
|
||||||
|
# 1. Navigate to registration
|
||||||
|
await page.goto("http://localhost:8000/register")
|
||||||
|
|
||||||
|
# 2. Fill registration form
|
||||||
|
await page.fill('[data-testid="email-input"]', 'e2e@test.com')
|
||||||
|
await page.fill('[data-testid="password-input"]', 'E2EPassword123!')
|
||||||
|
await page.fill('[data-testid="first-name-input"]', 'E2E')
|
||||||
|
await page.fill('[data-testid="last-name-input"]', 'User')
|
||||||
|
|
||||||
|
# 3. Submit registration
|
||||||
|
await page.click('[data-testid="register-button"]')
|
||||||
|
|
||||||
|
# 4. Verify redirect to dashboard
|
||||||
|
await page.wait_for_url("**/dashboard")
|
||||||
|
|
||||||
|
# 5. Create application
|
||||||
|
await page.click('[data-testid="new-application-button"]')
|
||||||
|
await page.fill('[data-testid="company-input"]', 'E2E Test Corp')
|
||||||
|
await page.fill('[data-testid="role-input"]', 'Test Engineer')
|
||||||
|
|
||||||
|
# 6. Submit application
|
||||||
|
await page.click('[data-testid="save-application-button"]')
|
||||||
|
|
||||||
|
# 7. Verify application appears
|
||||||
|
await page.wait_for_selector('[data-testid="application-card"]')
|
||||||
|
|
||||||
|
# 8. Verify application details
|
||||||
|
company_text = await page.text_content('[data-testid="company-name"]')
|
||||||
|
assert company_text == "E2E Test Corp"
|
||||||
|
|
||||||
|
finally:
|
||||||
|
await browser.close()
|
||||||
|
```
|
||||||
|
|
||||||
|
## Quality Gates
|
||||||
|
|
||||||
|
### 1. Code Quality Gates
|
||||||
|
|
||||||
|
#### Pre-commit Hooks
|
||||||
|
```bash
|
||||||
|
# Install pre-commit hooks
|
||||||
|
pip install pre-commit
|
||||||
|
pre-commit install
|
||||||
|
|
||||||
|
# Run hooks manually
|
||||||
|
pre-commit run --all-files
|
||||||
|
```
|
||||||
|
|
||||||
|
#### .pre-commit-config.yaml
|
||||||
|
```yaml
|
||||||
|
repos:
|
||||||
|
- repo: https://github.com/psf/black
|
||||||
|
rev: 23.7.0
|
||||||
|
hooks:
|
||||||
|
- id: black
|
||||||
|
language_version: python3.12
|
||||||
|
|
||||||
|
- repo: https://github.com/charliermarsh/ruff-pre-commit
|
||||||
|
rev: v0.0.284
|
||||||
|
hooks:
|
||||||
|
- id: ruff
|
||||||
|
args: [--fix, --exit-non-zero-on-fix]
|
||||||
|
|
||||||
|
- repo: https://github.com/pre-commit/mirrors-mypy
|
||||||
|
rev: v1.5.1
|
||||||
|
hooks:
|
||||||
|
- id: mypy
|
||||||
|
additional_dependencies: [pydantic, sqlalchemy]
|
||||||
|
|
||||||
|
- repo: https://github.com/pre-commit/pre-commit-hooks
|
||||||
|
rev: v4.4.0
|
||||||
|
hooks:
|
||||||
|
- id: trailing-whitespace
|
||||||
|
- id: end-of-file-fixer
|
||||||
|
- id: check-yaml
|
||||||
|
- id: check-added-large-files
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Quality Metrics Thresholds
|
||||||
|
```bash
|
||||||
|
# Code coverage minimum: 80%
|
||||||
|
pytest --cov=app --cov-fail-under=80
|
||||||
|
|
||||||
|
# Complexity maximum: 10
|
||||||
|
ruff check --select=C901
|
||||||
|
|
||||||
|
# Type coverage minimum: 90%
|
||||||
|
mypy app/ --strict
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Functional Quality Gates
|
||||||
|
|
||||||
|
#### API Response Time Requirements
|
||||||
|
- **Authentication endpoints**: < 200ms
|
||||||
|
- **CRUD operations**: < 500ms
|
||||||
|
- **AI generation endpoints**: < 30 seconds
|
||||||
|
- **Dashboard loading**: < 2 seconds
|
||||||
|
|
||||||
|
#### Reliability Requirements
|
||||||
|
- **Uptime**: > 99% during testing
|
||||||
|
- **Error rate**: < 1% for non-AI operations
|
||||||
|
- **AI service fallback**: Must handle service failures gracefully
|
||||||
|
|
||||||
|
### 3. Security Quality Gates
|
||||||
|
|
||||||
|
#### Security Testing Checklist
|
||||||
|
```yaml
|
||||||
|
authentication_security:
|
||||||
|
- [ ] JWT tokens expire correctly
|
||||||
|
- [ ] Password hashing is secure (bcrypt)
|
||||||
|
- [ ] Session management is stateless
|
||||||
|
- [ ] Rate limiting prevents brute force
|
||||||
|
|
||||||
|
authorization_security:
|
||||||
|
- [ ] RLS policies enforce user isolation
|
||||||
|
- [ ] API endpoints require proper authentication
|
||||||
|
- [ ] Users cannot access other users' data
|
||||||
|
- [ ] Admin endpoints are properly protected
|
||||||
|
|
||||||
|
input_validation:
|
||||||
|
- [ ] All API inputs are validated
|
||||||
|
- [ ] SQL injection prevention works
|
||||||
|
- [ ] XSS prevention is implemented
|
||||||
|
- [ ] File upload validation is secure
|
||||||
|
|
||||||
|
data_protection:
|
||||||
|
- [ ] Sensitive data is encrypted
|
||||||
|
- [ ] API keys are properly secured
|
||||||
|
- [ ] Environment variables contain no secrets
|
||||||
|
- [ ] Database connections are secure
|
||||||
|
```
|
||||||
|
|
||||||
|
## Bug Reporting and Management
|
||||||
|
|
||||||
|
### 1. Bug Classification
|
||||||
|
|
||||||
|
#### Severity Levels
|
||||||
|
- **Critical**: Application crashes, data loss, security vulnerabilities
|
||||||
|
- **High**: Major features not working, authentication failures
|
||||||
|
- **Medium**: Minor features broken, UI issues, performance problems
|
||||||
|
- **Low**: Cosmetic issues, minor improvements, documentation errors
|
||||||
|
|
||||||
|
#### Priority Levels
|
||||||
|
- **P0**: Fix immediately (< 2 hours)
|
||||||
|
- **P1**: Fix within 24 hours
|
||||||
|
- **P2**: Fix within 1 week
|
||||||
|
- **P3**: Fix in next release cycle
|
||||||
|
|
||||||
|
### 2. Bug Report Template
|
||||||
|
|
||||||
|
#### GitHub Issue Template
|
||||||
|
```markdown
|
||||||
|
## Bug Report
|
||||||
|
|
||||||
|
### Summary
|
||||||
|
Brief description of the bug
|
||||||
|
|
||||||
|
### Environment
|
||||||
|
- **OS**: macOS 14.0 / Windows 11 / Ubuntu 22.04
|
||||||
|
- **Browser**: Chrome 118.0 / Firefox 119.0 / Safari 17.0
|
||||||
|
- **Python Version**: 3.12.0
|
||||||
|
- **FastAPI Version**: 0.104.1
|
||||||
|
|
||||||
|
### Steps to Reproduce
|
||||||
|
1. Go to '...'
|
||||||
|
2. Click on '...'
|
||||||
|
3. Enter data '...'
|
||||||
|
4. See error
|
||||||
|
|
||||||
|
### Expected Behavior
|
||||||
|
What should happen
|
||||||
|
|
||||||
|
### Actual Behavior
|
||||||
|
What actually happens
|
||||||
|
|
||||||
|
### Screenshots/Logs
|
||||||
|
```
|
||||||
|
Error logs or screenshots
|
||||||
|
```
|
||||||
|
|
||||||
|
### Additional Context
|
||||||
|
Any other context about the problem
|
||||||
|
|
||||||
|
### Severity/Priority
|
||||||
|
- [ ] Critical
|
||||||
|
- [ ] High
|
||||||
|
- [ ] Medium
|
||||||
|
- [ ] Low
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. Bug Triage Process
|
||||||
|
|
||||||
|
#### Weekly Bug Triage Meeting
|
||||||
|
1. **Review new bugs**: Assign severity and priority
|
||||||
|
2. **Update existing bugs**: Check progress and blockers
|
||||||
|
3. **Close resolved bugs**: Verify fixes and close tickets
|
||||||
|
4. **Plan bug fixes**: Assign to sprints based on priority
|
||||||
|
|
||||||
|
#### Bug Assignment Criteria
|
||||||
|
- **Critical/P0**: Technical Lead + DevOps
|
||||||
|
- **High/P1**: Full-stack Developer
|
||||||
|
- **Medium/P2**: QA Engineer + Developer collaboration
|
||||||
|
- **Low/P3**: Next available developer
|
||||||
|
|
||||||
|
## Test Data Management
|
||||||
|
|
||||||
|
### 1. Test Data Strategy
|
||||||
|
|
||||||
|
#### Test Database Setup
|
||||||
|
```bash
|
||||||
|
# Create test database
|
||||||
|
createdb jobforge_test
|
||||||
|
|
||||||
|
# Run test migrations
|
||||||
|
DATABASE_URL=postgresql://test:test@localhost/jobforge_test alembic upgrade head
|
||||||
|
|
||||||
|
# Seed test data
|
||||||
|
python scripts/seed_test_data.py
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Test Data Factory
|
||||||
|
```python
|
||||||
|
# tests/factories.py
|
||||||
|
import factory
|
||||||
|
from app.models.user import User
|
||||||
|
from app.models.application import Application
|
||||||
|
|
||||||
|
class UserFactory(factory.Factory):
|
||||||
|
class Meta:
|
||||||
|
model = User
|
||||||
|
|
||||||
|
email = factory.Sequence(lambda n: f"user{n}@test.com")
|
||||||
|
password_hash = "$2b$12$hash"
|
||||||
|
first_name = "Test"
|
||||||
|
last_name = factory.Sequence(lambda n: f"User{n}")
|
||||||
|
|
||||||
|
class ApplicationFactory(factory.Factory):
|
||||||
|
class Meta:
|
||||||
|
model = Application
|
||||||
|
|
||||||
|
company_name = factory.Faker('company')
|
||||||
|
role_title = factory.Faker('job')
|
||||||
|
job_description = factory.Faker('text', max_nb_chars=500)
|
||||||
|
status = "draft"
|
||||||
|
user = factory.SubFactory(UserFactory)
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Test Environment Management
|
||||||
|
|
||||||
|
#### Environment Isolation
|
||||||
|
```yaml
|
||||||
|
# docker-compose.test.yml
|
||||||
|
version: '3.8'
|
||||||
|
|
||||||
|
services:
|
||||||
|
test-db:
|
||||||
|
image: pgvector/pgvector:pg16
|
||||||
|
environment:
|
||||||
|
- POSTGRES_DB=jobforge_test
|
||||||
|
- POSTGRES_USER=test
|
||||||
|
- POSTGRES_PASSWORD=test
|
||||||
|
ports:
|
||||||
|
- "5433:5432"
|
||||||
|
tmpfs:
|
||||||
|
- /var/lib/postgresql/data # In-memory for speed
|
||||||
|
|
||||||
|
test-app:
|
||||||
|
build: .
|
||||||
|
environment:
|
||||||
|
- DATABASE_URL=postgresql://test:test@test-db:5432/jobforge_test
|
||||||
|
- TESTING=true
|
||||||
|
depends_on:
|
||||||
|
- test-db
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Test Data Cleanup
|
||||||
|
```python
|
||||||
|
# tests/conftest.py
|
||||||
|
@pytest.fixture(autouse=True)
|
||||||
|
async def cleanup_test_data(test_db):
|
||||||
|
"""Clean up test data after each test."""
|
||||||
|
yield
|
||||||
|
|
||||||
|
# Truncate all tables
|
||||||
|
await test_db.execute("TRUNCATE TABLE applications CASCADE")
|
||||||
|
await test_db.execute("TRUNCATE TABLE users CASCADE")
|
||||||
|
await test_db.commit()
|
||||||
|
```
|
||||||
|
|
||||||
|
## Performance Testing
|
||||||
|
|
||||||
|
### 1. Load Testing with Locust
|
||||||
|
|
||||||
|
#### Installation and Setup
|
||||||
|
```bash
|
||||||
|
# Install locust
|
||||||
|
pip install locust
|
||||||
|
|
||||||
|
# Run load tests
|
||||||
|
locust -f tests/performance/locustfile.py --host=http://localhost:8000
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Load Test Example
|
||||||
|
```python
|
||||||
|
# tests/performance/locustfile.py
|
||||||
|
from locust import HttpUser, task, between
|
||||||
|
import json
|
||||||
|
|
||||||
|
class JobForgeUser(HttpUser):
|
||||||
|
wait_time = between(1, 3)
|
||||||
|
|
||||||
|
def on_start(self):
|
||||||
|
"""Login user on start."""
|
||||||
|
response = self.client.post("/api/auth/login", data={
|
||||||
|
"username": "test@example.com",
|
||||||
|
"password": "testpass123"
|
||||||
|
})
|
||||||
|
|
||||||
|
if response.status_code == 200:
|
||||||
|
self.token = response.json()["access_token"]
|
||||||
|
self.headers = {"Authorization": f"Bearer {self.token}"}
|
||||||
|
|
||||||
|
@task(3)
|
||||||
|
def get_applications(self):
|
||||||
|
"""Get user applications."""
|
||||||
|
self.client.get("/api/v1/applications/", headers=self.headers)
|
||||||
|
|
||||||
|
@task(1)
|
||||||
|
def create_application(self):
|
||||||
|
"""Create new application."""
|
||||||
|
app_data = {
|
||||||
|
"company_name": "Load Test Corp",
|
||||||
|
"role_title": "Test Engineer",
|
||||||
|
"job_description": "Performance testing position",
|
||||||
|
"status": "draft"
|
||||||
|
}
|
||||||
|
|
||||||
|
self.client.post(
|
||||||
|
"/api/v1/applications/",
|
||||||
|
json=app_data,
|
||||||
|
headers=self.headers
|
||||||
|
)
|
||||||
|
|
||||||
|
@task(1)
|
||||||
|
def generate_cover_letter(self):
|
||||||
|
"""Generate AI cover letter (expensive operation)."""
|
||||||
|
# Get first application
|
||||||
|
response = self.client.get("/api/v1/applications/", headers=self.headers)
|
||||||
|
|
||||||
|
if response.status_code == 200:
|
||||||
|
applications = response.json()
|
||||||
|
if applications:
|
||||||
|
app_id = applications[0]["id"]
|
||||||
|
self.client.post(
|
||||||
|
f"/api/v1/applications/{app_id}/generate-cover-letter",
|
||||||
|
headers=self.headers
|
||||||
|
)
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Performance Benchmarks
|
||||||
|
|
||||||
|
#### Response Time Targets
|
||||||
|
```python
|
||||||
|
# tests/performance/test_benchmarks.py
|
||||||
|
import pytest
|
||||||
|
import time
|
||||||
|
import statistics
|
||||||
|
|
||||||
|
@pytest.mark.performance
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_api_response_times(async_client, test_user_token):
|
||||||
|
"""Test API response time benchmarks."""
|
||||||
|
|
||||||
|
headers = {"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
|
||||||
|
# Test multiple requests
|
||||||
|
response_times = []
|
||||||
|
for _ in range(50):
|
||||||
|
start_time = time.time()
|
||||||
|
|
||||||
|
response = await async_client.get("/api/v1/applications/", headers=headers)
|
||||||
|
assert response.status_code == 200
|
||||||
|
|
||||||
|
response_time = (time.time() - start_time) * 1000 # Convert to ms
|
||||||
|
response_times.append(response_time)
|
||||||
|
|
||||||
|
# Analyze results
|
||||||
|
avg_time = statistics.mean(response_times)
|
||||||
|
p95_time = statistics.quantiles(response_times, n=20)[18] # 95th percentile
|
||||||
|
|
||||||
|
# Assert performance requirements
|
||||||
|
assert avg_time < 200, f"Average response time {avg_time}ms exceeds 200ms limit"
|
||||||
|
assert p95_time < 500, f"95th percentile {p95_time}ms exceeds 500ms limit"
|
||||||
|
|
||||||
|
print(f"Average response time: {avg_time:.2f}ms")
|
||||||
|
print(f"95th percentile: {p95_time:.2f}ms")
|
||||||
|
```
|
||||||
|
|
||||||
|
## Release Testing Procedures
|
||||||
|
|
||||||
|
### 1. Pre-Release Testing Checklist
|
||||||
|
|
||||||
|
#### Functional Testing
|
||||||
|
```yaml
|
||||||
|
authentication_testing:
|
||||||
|
- [ ] User registration works
|
||||||
|
- [ ] User login/logout works
|
||||||
|
- [ ] JWT token validation works
|
||||||
|
- [ ] Password reset works (if implemented)
|
||||||
|
|
||||||
|
application_management:
|
||||||
|
- [ ] Create application works
|
||||||
|
- [ ] View applications works
|
||||||
|
- [ ] Update application works
|
||||||
|
- [ ] Delete application works
|
||||||
|
- [ ] Application status transitions work
|
||||||
|
|
||||||
|
ai_integration:
|
||||||
|
- [ ] Cover letter generation works
|
||||||
|
- [ ] AI service error handling works
|
||||||
|
- [ ] Rate limiting is enforced
|
||||||
|
- [ ] Fallback mechanisms work
|
||||||
|
|
||||||
|
data_security:
|
||||||
|
- [ ] User data isolation works
|
||||||
|
- [ ] RLS policies are enforced
|
||||||
|
- [ ] No data leakage between users
|
||||||
|
- [ ] Sensitive data is protected
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Cross-Browser Testing
|
||||||
|
```yaml
|
||||||
|
browsers_to_test:
|
||||||
|
chrome:
|
||||||
|
- [ ] Latest version
|
||||||
|
- [ ] Previous major version
|
||||||
|
firefox:
|
||||||
|
- [ ] Latest version
|
||||||
|
- [ ] ESR version
|
||||||
|
safari:
|
||||||
|
- [ ] Latest version (macOS/iOS)
|
||||||
|
edge:
|
||||||
|
- [ ] Latest version
|
||||||
|
|
||||||
|
mobile_testing:
|
||||||
|
- [ ] iOS Safari
|
||||||
|
- [ ] Android Chrome
|
||||||
|
- [ ] Responsive design works
|
||||||
|
- [ ] Touch interactions work
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Release Validation Process
|
||||||
|
|
||||||
|
#### Staging Environment Testing
|
||||||
|
```bash
|
||||||
|
# Deploy to staging
|
||||||
|
docker-compose -f docker-compose.staging.yml up -d
|
||||||
|
|
||||||
|
# Run full test suite against staging
|
||||||
|
pytest tests/ --base-url=https://staging.jobforge.com
|
||||||
|
|
||||||
|
# Run smoke tests
|
||||||
|
pytest tests/smoke/ -v
|
||||||
|
|
||||||
|
# Performance testing
|
||||||
|
locust -f tests/performance/locustfile.py --host=https://staging.jobforge.com --users=50 --spawn-rate=5 --run-time=5m
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Production Deployment Checklist
|
||||||
|
```yaml
|
||||||
|
pre_deployment:
|
||||||
|
- [ ] All tests passing in CI/CD
|
||||||
|
- [ ] Code review completed
|
||||||
|
- [ ] Database migrations tested
|
||||||
|
- [ ] Environment variables updated
|
||||||
|
- [ ] SSL certificates valid
|
||||||
|
- [ ] Backup created
|
||||||
|
|
||||||
|
deployment:
|
||||||
|
- [ ] Deploy with zero downtime
|
||||||
|
- [ ] Health checks passing
|
||||||
|
- [ ] Database migrations applied
|
||||||
|
- [ ] Cache cleared if needed
|
||||||
|
- [ ] CDN updated if needed
|
||||||
|
|
||||||
|
post_deployment:
|
||||||
|
- [ ] Smoke tests passing
|
||||||
|
- [ ] Performance metrics normal
|
||||||
|
- [ ] Error rates acceptable
|
||||||
|
- [ ] User workflows tested
|
||||||
|
- [ ] Rollback plan ready
|
||||||
|
```
|
||||||
|
|
||||||
|
## Continuous Testing Integration
|
||||||
|
|
||||||
|
### 1. CI/CD Pipeline Testing
|
||||||
|
|
||||||
|
#### GitHub Actions Workflow
|
||||||
|
```yaml
|
||||||
|
# .github/workflows/test.yml
|
||||||
|
name: Test Suite
|
||||||
|
|
||||||
|
on:
|
||||||
|
push:
|
||||||
|
branches: [main, develop]
|
||||||
|
pull_request:
|
||||||
|
branches: [main]
|
||||||
|
|
||||||
|
jobs:
|
||||||
|
test:
|
||||||
|
runs-on: ubuntu-latest
|
||||||
|
|
||||||
|
services:
|
||||||
|
postgres:
|
||||||
|
image: pgvector/pgvector:pg16
|
||||||
|
env:
|
||||||
|
POSTGRES_PASSWORD: test
|
||||||
|
POSTGRES_DB: jobforge_test
|
||||||
|
options: >-
|
||||||
|
--health-cmd pg_isready
|
||||||
|
--health-interval 10s
|
||||||
|
--health-timeout 5s
|
||||||
|
--health-retries 5
|
||||||
|
|
||||||
|
steps:
|
||||||
|
- uses: actions/checkout@v4
|
||||||
|
|
||||||
|
- name: Set up Python
|
||||||
|
uses: actions/setup-python@v4
|
||||||
|
with:
|
||||||
|
python-version: '3.12'
|
||||||
|
|
||||||
|
- name: Install dependencies
|
||||||
|
run: |
|
||||||
|
pip install -r requirements.txt
|
||||||
|
pip install -r requirements-dev.txt
|
||||||
|
|
||||||
|
- name: Run linting
|
||||||
|
run: |
|
||||||
|
black --check .
|
||||||
|
ruff check .
|
||||||
|
mypy app/
|
||||||
|
|
||||||
|
- name: Run tests
|
||||||
|
run: |
|
||||||
|
pytest tests/unit/ tests/integration/ --cov=app --cov-report=xml
|
||||||
|
env:
|
||||||
|
DATABASE_URL: postgresql://postgres:test@localhost:5432/jobforge_test
|
||||||
|
|
||||||
|
- name: Upload coverage
|
||||||
|
uses: codecov/codecov-action@v3
|
||||||
|
with:
|
||||||
|
file: ./coverage.xml
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Quality Metrics Dashboard
|
||||||
|
|
||||||
|
#### Test Results Tracking
|
||||||
|
```python
|
||||||
|
# scripts/generate_test_report.py
|
||||||
|
import json
|
||||||
|
import subprocess
|
||||||
|
from datetime import datetime
|
||||||
|
|
||||||
|
def generate_test_report():
|
||||||
|
"""Generate comprehensive test report."""
|
||||||
|
|
||||||
|
# Run tests with JSON output
|
||||||
|
result = subprocess.run([
|
||||||
|
'pytest', 'tests/', '--json-report', '--json-report-file=test_report.json'
|
||||||
|
], capture_output=True, text=True)
|
||||||
|
|
||||||
|
# Load test results
|
||||||
|
with open('test_report.json') as f:
|
||||||
|
test_data = json.load(f)
|
||||||
|
|
||||||
|
# Generate summary
|
||||||
|
summary = {
|
||||||
|
'timestamp': datetime.now().isoformat(),
|
||||||
|
'total_tests': test_data['summary']['total'],
|
||||||
|
'passed': test_data['summary']['passed'],
|
||||||
|
'failed': test_data['summary']['failed'],
|
||||||
|
'skipped': test_data['summary']['skipped'],
|
||||||
|
'duration': test_data['duration'],
|
||||||
|
'pass_rate': test_data['summary']['passed'] / test_data['summary']['total'] * 100
|
||||||
|
}
|
||||||
|
|
||||||
|
print(f"Test Summary: {summary['passed']}/{summary['total']} passed ({summary['pass_rate']:.1f}%)")
|
||||||
|
return summary
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
generate_test_report()
|
||||||
|
```
|
||||||
|
|
||||||
|
This comprehensive QA procedure ensures that Job Forge maintains high quality through systematic testing, monitoring, and validation processes.
|
||||||
29
pytest.ini
Normal file
29
pytest.ini
Normal file
@@ -0,0 +1,29 @@
|
|||||||
|
[tool:pytest]
|
||||||
|
# pytest configuration for Job Forge
|
||||||
|
minversion = 6.0
|
||||||
|
addopts =
|
||||||
|
-ra
|
||||||
|
--strict-markers
|
||||||
|
--strict-config
|
||||||
|
--cov=app
|
||||||
|
--cov-report=term-missing
|
||||||
|
--cov-report=html:htmlcov
|
||||||
|
--cov-report=xml
|
||||||
|
--cov-fail-under=80
|
||||||
|
testpaths = tests
|
||||||
|
python_files = test_*.py
|
||||||
|
python_classes = Test*
|
||||||
|
python_functions = test_*
|
||||||
|
markers =
|
||||||
|
slow: marks tests as slow (deselect with '-m "not slow"')
|
||||||
|
integration: marks tests as integration tests
|
||||||
|
unit: marks tests as unit tests
|
||||||
|
ai: marks tests that involve AI services
|
||||||
|
database: marks tests that require database
|
||||||
|
auth: marks tests for authentication
|
||||||
|
asyncio: marks async tests
|
||||||
|
filterwarnings =
|
||||||
|
ignore::UserWarning
|
||||||
|
ignore::DeprecationWarning
|
||||||
|
ignore::PendingDeprecationWarning
|
||||||
|
asyncio_mode = auto
|
||||||
49
requirements-backend.txt
Normal file
49
requirements-backend.txt
Normal file
@@ -0,0 +1,49 @@
|
|||||||
|
# FastAPI and web framework
|
||||||
|
fastapi==0.109.2
|
||||||
|
uvicorn[standard]==0.27.1
|
||||||
|
python-multipart==0.0.9
|
||||||
|
|
||||||
|
# Database
|
||||||
|
asyncpg==0.29.0
|
||||||
|
sqlalchemy[asyncio]==2.0.29
|
||||||
|
alembic==1.13.1
|
||||||
|
psycopg2-binary==2.9.9
|
||||||
|
|
||||||
|
# Authentication & Security
|
||||||
|
python-jose[cryptography]==3.3.0
|
||||||
|
passlib[bcrypt]==1.7.4
|
||||||
|
bcrypt==4.1.2
|
||||||
|
|
||||||
|
# AI Services
|
||||||
|
anthropic==0.21.3
|
||||||
|
openai==1.12.0
|
||||||
|
|
||||||
|
# Vector operations
|
||||||
|
pgvector==0.2.5
|
||||||
|
numpy==1.26.4
|
||||||
|
|
||||||
|
# Data validation
|
||||||
|
pydantic[email]==2.6.3
|
||||||
|
pydantic-settings==2.2.1
|
||||||
|
|
||||||
|
# HTTP client
|
||||||
|
httpx==0.27.0
|
||||||
|
aiohttp==3.9.3
|
||||||
|
|
||||||
|
# Utilities
|
||||||
|
python-dotenv==1.0.1
|
||||||
|
structlog==24.1.0
|
||||||
|
tenacity==8.2.3
|
||||||
|
|
||||||
|
# Development & Testing
|
||||||
|
pytest==8.0.2
|
||||||
|
pytest-asyncio==0.23.5
|
||||||
|
pytest-cov==4.0.0
|
||||||
|
pytest-mock==3.12.0
|
||||||
|
black==24.2.0
|
||||||
|
isort==5.13.2
|
||||||
|
flake8==7.0.0
|
||||||
|
mypy==1.8.0
|
||||||
|
|
||||||
|
# Security
|
||||||
|
bandit==1.7.7
|
||||||
25
requirements-frontend.txt
Normal file
25
requirements-frontend.txt
Normal file
@@ -0,0 +1,25 @@
|
|||||||
|
# Dash and web framework
|
||||||
|
dash==2.16.1
|
||||||
|
dash-mantine-components==0.12.1
|
||||||
|
dash-iconify==0.1.2
|
||||||
|
|
||||||
|
# HTTP client for API calls
|
||||||
|
requests==2.31.0
|
||||||
|
httpx==0.27.0
|
||||||
|
|
||||||
|
# Data handling
|
||||||
|
pandas==2.2.1
|
||||||
|
plotly==5.18.0
|
||||||
|
|
||||||
|
# File handling
|
||||||
|
Pillow==10.2.0
|
||||||
|
|
||||||
|
# Utilities
|
||||||
|
python-dotenv==1.0.1
|
||||||
|
structlog==24.1.0
|
||||||
|
|
||||||
|
# Development
|
||||||
|
pytest==8.0.2
|
||||||
|
pytest-dash==2.1.2
|
||||||
|
black==24.2.0
|
||||||
|
isort==5.13.2
|
||||||
66
requirements.txt
Normal file
66
requirements.txt
Normal file
@@ -0,0 +1,66 @@
|
|||||||
|
# Job Forge - Combined requirements for local development and testing
|
||||||
|
# This file combines backend and frontend requirements for easy local setup
|
||||||
|
|
||||||
|
# FastAPI and web framework
|
||||||
|
fastapi==0.109.2
|
||||||
|
uvicorn[standard]==0.27.1
|
||||||
|
python-multipart==0.0.9
|
||||||
|
|
||||||
|
# Database
|
||||||
|
asyncpg==0.29.0
|
||||||
|
sqlalchemy[asyncio]==2.0.29
|
||||||
|
alembic==1.13.1
|
||||||
|
psycopg2-binary==2.9.9
|
||||||
|
|
||||||
|
# Authentication & Security
|
||||||
|
python-jose[cryptography]==3.3.0
|
||||||
|
passlib[bcrypt]==1.7.4
|
||||||
|
bcrypt==4.1.2
|
||||||
|
|
||||||
|
# AI Services
|
||||||
|
anthropic==0.21.3
|
||||||
|
openai==1.12.0
|
||||||
|
|
||||||
|
# Vector operations
|
||||||
|
pgvector==0.2.5
|
||||||
|
numpy==1.26.4
|
||||||
|
|
||||||
|
# Data validation
|
||||||
|
pydantic[email]==2.6.3
|
||||||
|
pydantic-settings==2.2.1
|
||||||
|
|
||||||
|
# HTTP client
|
||||||
|
httpx==0.27.0
|
||||||
|
aiohttp==3.9.3
|
||||||
|
requests==2.31.0
|
||||||
|
|
||||||
|
# Utilities
|
||||||
|
python-dotenv==1.0.1
|
||||||
|
structlog==24.1.0
|
||||||
|
tenacity==8.2.3
|
||||||
|
|
||||||
|
# Dash and frontend
|
||||||
|
dash==2.16.1
|
||||||
|
dash-mantine-components==0.12.1
|
||||||
|
dash-iconify==0.1.2
|
||||||
|
|
||||||
|
# Data handling
|
||||||
|
pandas==2.2.1
|
||||||
|
plotly==5.18.0
|
||||||
|
|
||||||
|
# File handling
|
||||||
|
Pillow==10.2.0
|
||||||
|
|
||||||
|
# Development & Testing
|
||||||
|
pytest==8.0.2
|
||||||
|
pytest-asyncio==0.23.5
|
||||||
|
pytest-cov==4.0.0
|
||||||
|
pytest-mock==3.12.0
|
||||||
|
pytest-dash==2.1.2
|
||||||
|
black==24.2.0
|
||||||
|
isort==5.13.2
|
||||||
|
flake8==7.0.0
|
||||||
|
mypy==1.8.0
|
||||||
|
|
||||||
|
# Security
|
||||||
|
bandit==1.7.7
|
||||||
1
src/__init__.py
Normal file
1
src/__init__.py
Normal file
@@ -0,0 +1 @@
|
|||||||
|
# Job Forge source package
|
||||||
0
src/backend/__init__.py
Normal file
0
src/backend/__init__.py
Normal file
0
src/backend/api/__init__.py
Normal file
0
src/backend/api/__init__.py
Normal file
139
src/backend/api/ai_documents.py
Normal file
139
src/backend/api/ai_documents.py
Normal file
@@ -0,0 +1,139 @@
|
|||||||
|
"""
|
||||||
|
AI Document Generation API - Simple implementation for MVP
|
||||||
|
"""
|
||||||
|
from fastapi import APIRouter, Depends, HTTPException, status
|
||||||
|
from pydantic import BaseModel
|
||||||
|
from typing import Optional
|
||||||
|
import structlog
|
||||||
|
|
||||||
|
from ..services.ai_service import ai_service
|
||||||
|
from ..models.user import User
|
||||||
|
from .auth import get_current_user
|
||||||
|
|
||||||
|
logger = structlog.get_logger()
|
||||||
|
router = APIRouter()
|
||||||
|
|
||||||
|
class CoverLetterRequest(BaseModel):
|
||||||
|
job_description: str
|
||||||
|
company_name: str
|
||||||
|
role_title: str
|
||||||
|
job_url: Optional[str] = None
|
||||||
|
user_resume: Optional[str] = None
|
||||||
|
|
||||||
|
class ResumeOptimizationRequest(BaseModel):
|
||||||
|
current_resume: str
|
||||||
|
job_description: str
|
||||||
|
role_title: str
|
||||||
|
|
||||||
|
class DocumentResponse(BaseModel):
|
||||||
|
content: str
|
||||||
|
model_used: str
|
||||||
|
generation_prompt: str
|
||||||
|
|
||||||
|
@router.post("/generate-cover-letter", response_model=DocumentResponse)
|
||||||
|
async def generate_cover_letter(
|
||||||
|
request: CoverLetterRequest,
|
||||||
|
current_user: User = Depends(get_current_user)
|
||||||
|
):
|
||||||
|
"""
|
||||||
|
Generate a personalized cover letter using AI
|
||||||
|
"""
|
||||||
|
try:
|
||||||
|
logger.info("Generating cover letter",
|
||||||
|
user_id=str(current_user.id),
|
||||||
|
company=request.company_name,
|
||||||
|
role=request.role_title)
|
||||||
|
|
||||||
|
result = await ai_service.generate_cover_letter(
|
||||||
|
job_description=request.job_description,
|
||||||
|
company_name=request.company_name,
|
||||||
|
role_title=request.role_title,
|
||||||
|
user_name=current_user.full_name,
|
||||||
|
user_resume=request.user_resume
|
||||||
|
)
|
||||||
|
|
||||||
|
logger.info("Cover letter generated successfully",
|
||||||
|
user_id=str(current_user.id),
|
||||||
|
model_used=result["model_used"])
|
||||||
|
|
||||||
|
return DocumentResponse(
|
||||||
|
content=result["content"],
|
||||||
|
model_used=result["model_used"],
|
||||||
|
generation_prompt=result["prompt"]
|
||||||
|
)
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error("Cover letter generation failed",
|
||||||
|
error=str(e),
|
||||||
|
user_id=str(current_user.id))
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
|
||||||
|
detail="Failed to generate cover letter"
|
||||||
|
)
|
||||||
|
|
||||||
|
@router.post("/optimize-resume", response_model=DocumentResponse)
|
||||||
|
async def optimize_resume(
|
||||||
|
request: ResumeOptimizationRequest,
|
||||||
|
current_user: User = Depends(get_current_user)
|
||||||
|
):
|
||||||
|
"""
|
||||||
|
Optimize resume for specific job requirements using AI
|
||||||
|
"""
|
||||||
|
try:
|
||||||
|
logger.info("Optimizing resume",
|
||||||
|
user_id=str(current_user.id),
|
||||||
|
role=request.role_title)
|
||||||
|
|
||||||
|
result = await ai_service.generate_resume_optimization(
|
||||||
|
current_resume=request.current_resume,
|
||||||
|
job_description=request.job_description,
|
||||||
|
role_title=request.role_title
|
||||||
|
)
|
||||||
|
|
||||||
|
logger.info("Resume optimized successfully",
|
||||||
|
user_id=str(current_user.id),
|
||||||
|
model_used=result["model_used"])
|
||||||
|
|
||||||
|
return DocumentResponse(
|
||||||
|
content=result["content"],
|
||||||
|
model_used=result["model_used"],
|
||||||
|
generation_prompt=result["prompt"]
|
||||||
|
)
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error("Resume optimization failed",
|
||||||
|
error=str(e),
|
||||||
|
user_id=str(current_user.id))
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
|
||||||
|
detail="Failed to optimize resume"
|
||||||
|
)
|
||||||
|
|
||||||
|
@router.post("/test-ai-connection")
|
||||||
|
async def test_ai_connection(current_user: User = Depends(get_current_user)):
|
||||||
|
"""
|
||||||
|
Test if AI services are properly configured
|
||||||
|
"""
|
||||||
|
status_info = {
|
||||||
|
"claude_available": ai_service.claude_client is not None,
|
||||||
|
"openai_available": ai_service.openai_client is not None,
|
||||||
|
"user": current_user.full_name
|
||||||
|
}
|
||||||
|
|
||||||
|
# Test with a simple generation
|
||||||
|
try:
|
||||||
|
test_result = await ai_service.generate_cover_letter(
|
||||||
|
job_description="Software Engineer position requiring Python skills",
|
||||||
|
company_name="Test Company",
|
||||||
|
role_title="Software Engineer",
|
||||||
|
user_name=current_user.full_name
|
||||||
|
)
|
||||||
|
status_info["test_generation"] = "success"
|
||||||
|
status_info["model_used"] = test_result["model_used"]
|
||||||
|
status_info["content_preview"] = test_result["content"][:100] + "..."
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
status_info["test_generation"] = "failed"
|
||||||
|
status_info["error"] = str(e)
|
||||||
|
|
||||||
|
return status_info
|
||||||
205
src/backend/api/applications.py
Normal file
205
src/backend/api/applications.py
Normal file
@@ -0,0 +1,205 @@
|
|||||||
|
from fastapi import APIRouter, Depends, HTTPException, status
|
||||||
|
from sqlalchemy.ext.asyncio import AsyncSession
|
||||||
|
from sqlalchemy import select
|
||||||
|
from pydantic import BaseModel
|
||||||
|
from typing import List, Optional
|
||||||
|
from datetime import datetime
|
||||||
|
|
||||||
|
from ..core.database import get_db
|
||||||
|
from ..models.user import User
|
||||||
|
from ..models.application import Application, ApplicationStatus
|
||||||
|
from ..models.job import Job
|
||||||
|
from .auth import get_current_user
|
||||||
|
|
||||||
|
router = APIRouter()
|
||||||
|
|
||||||
|
class ApplicationCreate(BaseModel):
|
||||||
|
job_id: int
|
||||||
|
notes: Optional[str] = None
|
||||||
|
|
||||||
|
class ApplicationUpdate(BaseModel):
|
||||||
|
status: Optional[ApplicationStatus] = None
|
||||||
|
notes: Optional[str] = None
|
||||||
|
applied_date: Optional[datetime] = None
|
||||||
|
follow_up_date: Optional[datetime] = None
|
||||||
|
|
||||||
|
class ApplicationResponse(BaseModel):
|
||||||
|
id: int
|
||||||
|
job_id: int
|
||||||
|
status: ApplicationStatus
|
||||||
|
notes: Optional[str]
|
||||||
|
applied_date: Optional[datetime]
|
||||||
|
follow_up_date: Optional[datetime]
|
||||||
|
created_at: datetime
|
||||||
|
|
||||||
|
class Config:
|
||||||
|
from_attributes = True
|
||||||
|
|
||||||
|
class ApplicationWithJobResponse(ApplicationResponse):
|
||||||
|
job_title: str
|
||||||
|
company: str
|
||||||
|
location: Optional[str]
|
||||||
|
|
||||||
|
@router.post("/", response_model=ApplicationResponse)
|
||||||
|
async def create_application(
|
||||||
|
application_data: ApplicationCreate,
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
):
|
||||||
|
# Verify job exists
|
||||||
|
job = await db.get(Job, application_data.job_id)
|
||||||
|
if not job:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_404_NOT_FOUND,
|
||||||
|
detail="Job not found"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Check if application already exists
|
||||||
|
existing = await db.execute(
|
||||||
|
select(Application).where(
|
||||||
|
Application.user_id == current_user.id,
|
||||||
|
Application.job_id == application_data.job_id
|
||||||
|
)
|
||||||
|
)
|
||||||
|
if existing.scalar_one_or_none():
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_400_BAD_REQUEST,
|
||||||
|
detail="Application already exists for this job"
|
||||||
|
)
|
||||||
|
|
||||||
|
application = Application(
|
||||||
|
user_id=current_user.id,
|
||||||
|
job_id=application_data.job_id,
|
||||||
|
notes=application_data.notes
|
||||||
|
)
|
||||||
|
|
||||||
|
db.add(application)
|
||||||
|
await db.commit()
|
||||||
|
await db.refresh(application)
|
||||||
|
|
||||||
|
return ApplicationResponse.from_orm(application)
|
||||||
|
|
||||||
|
@router.get("/", response_model=List[ApplicationWithJobResponse])
|
||||||
|
async def get_applications(
|
||||||
|
status: Optional[ApplicationStatus] = None,
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
):
|
||||||
|
query = select(Application, Job).join(Job).where(Application.user_id == current_user.id)
|
||||||
|
|
||||||
|
if status:
|
||||||
|
query = query.where(Application.status == status)
|
||||||
|
|
||||||
|
result = await db.execute(query)
|
||||||
|
applications = []
|
||||||
|
|
||||||
|
for app, job in result.all():
|
||||||
|
app_dict = {
|
||||||
|
"id": app.id,
|
||||||
|
"job_id": app.job_id,
|
||||||
|
"status": app.status,
|
||||||
|
"notes": app.notes,
|
||||||
|
"applied_date": app.applied_date,
|
||||||
|
"follow_up_date": app.follow_up_date,
|
||||||
|
"created_at": app.created_at,
|
||||||
|
"job_title": job.title,
|
||||||
|
"company": job.company,
|
||||||
|
"location": job.location
|
||||||
|
}
|
||||||
|
applications.append(ApplicationWithJobResponse(**app_dict))
|
||||||
|
|
||||||
|
return applications
|
||||||
|
|
||||||
|
@router.get("/{application_id}", response_model=ApplicationWithJobResponse)
|
||||||
|
async def get_application(
|
||||||
|
application_id: int,
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
):
|
||||||
|
result = await db.execute(
|
||||||
|
select(Application, Job)
|
||||||
|
.join(Job)
|
||||||
|
.where(
|
||||||
|
Application.id == application_id,
|
||||||
|
Application.user_id == current_user.id
|
||||||
|
)
|
||||||
|
)
|
||||||
|
app_job = result.first()
|
||||||
|
|
||||||
|
if not app_job:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_404_NOT_FOUND,
|
||||||
|
detail="Application not found"
|
||||||
|
)
|
||||||
|
|
||||||
|
app, job = app_job
|
||||||
|
return ApplicationWithJobResponse(
|
||||||
|
id=app.id,
|
||||||
|
job_id=app.job_id,
|
||||||
|
status=app.status,
|
||||||
|
notes=app.notes,
|
||||||
|
applied_date=app.applied_date,
|
||||||
|
follow_up_date=app.follow_up_date,
|
||||||
|
created_at=app.created_at,
|
||||||
|
job_title=job.title,
|
||||||
|
company=job.company,
|
||||||
|
location=job.location
|
||||||
|
)
|
||||||
|
|
||||||
|
@router.put("/{application_id}", response_model=ApplicationResponse)
|
||||||
|
async def update_application(
|
||||||
|
application_id: int,
|
||||||
|
update_data: ApplicationUpdate,
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
):
|
||||||
|
application = await db.execute(
|
||||||
|
select(Application).where(
|
||||||
|
Application.id == application_id,
|
||||||
|
Application.user_id == current_user.id
|
||||||
|
)
|
||||||
|
)
|
||||||
|
application = application.scalar_one_or_none()
|
||||||
|
|
||||||
|
if not application:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_404_NOT_FOUND,
|
||||||
|
detail="Application not found"
|
||||||
|
)
|
||||||
|
|
||||||
|
update_dict = update_data.dict(exclude_unset=True)
|
||||||
|
if update_dict.get("status") == ApplicationStatus.APPLIED and not application.applied_date:
|
||||||
|
update_dict["applied_date"] = datetime.utcnow()
|
||||||
|
|
||||||
|
for field, value in update_dict.items():
|
||||||
|
setattr(application, field, value)
|
||||||
|
|
||||||
|
await db.commit()
|
||||||
|
await db.refresh(application)
|
||||||
|
|
||||||
|
return ApplicationResponse.from_orm(application)
|
||||||
|
|
||||||
|
@router.delete("/{application_id}")
|
||||||
|
async def delete_application(
|
||||||
|
application_id: int,
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
):
|
||||||
|
application = await db.execute(
|
||||||
|
select(Application).where(
|
||||||
|
Application.id == application_id,
|
||||||
|
Application.user_id == current_user.id
|
||||||
|
)
|
||||||
|
)
|
||||||
|
application = application.scalar_one_or_none()
|
||||||
|
|
||||||
|
if not application:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_404_NOT_FOUND,
|
||||||
|
detail="Application not found"
|
||||||
|
)
|
||||||
|
|
||||||
|
await db.delete(application)
|
||||||
|
await db.commit()
|
||||||
|
|
||||||
|
return {"message": "Application deleted successfully"}
|
||||||
139
src/backend/api/auth.py
Normal file
139
src/backend/api/auth.py
Normal file
@@ -0,0 +1,139 @@
|
|||||||
|
from fastapi import APIRouter, Depends, HTTPException, status
|
||||||
|
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
|
||||||
|
from sqlalchemy.ext.asyncio import AsyncSession
|
||||||
|
from pydantic import BaseModel, EmailStr
|
||||||
|
from typing import Optional
|
||||||
|
import uuid
|
||||||
|
import bcrypt
|
||||||
|
from jose import jwt
|
||||||
|
from datetime import datetime, timedelta
|
||||||
|
|
||||||
|
from ..core.database import get_db
|
||||||
|
from ..core.config import settings
|
||||||
|
from ..models.user import User
|
||||||
|
|
||||||
|
router = APIRouter()
|
||||||
|
security = HTTPBearer()
|
||||||
|
|
||||||
|
class UserCreate(BaseModel):
|
||||||
|
email: EmailStr
|
||||||
|
password: str
|
||||||
|
first_name: str
|
||||||
|
last_name: str
|
||||||
|
phone: Optional[str] = None
|
||||||
|
|
||||||
|
class UserLogin(BaseModel):
|
||||||
|
email: EmailStr
|
||||||
|
password: str
|
||||||
|
|
||||||
|
class Token(BaseModel):
|
||||||
|
access_token: str
|
||||||
|
token_type: str = "bearer"
|
||||||
|
|
||||||
|
class UserResponse(BaseModel):
|
||||||
|
id: str
|
||||||
|
email: str
|
||||||
|
full_name: str
|
||||||
|
first_name: str
|
||||||
|
last_name: str
|
||||||
|
is_active: bool
|
||||||
|
|
||||||
|
@classmethod
|
||||||
|
def from_user(cls, user):
|
||||||
|
return cls(
|
||||||
|
id=str(user.id),
|
||||||
|
email=user.email,
|
||||||
|
full_name=user.full_name,
|
||||||
|
first_name=user.first_name,
|
||||||
|
last_name=user.last_name,
|
||||||
|
is_active=user.is_active
|
||||||
|
)
|
||||||
|
|
||||||
|
def create_access_token(data: dict):
|
||||||
|
to_encode = data.copy()
|
||||||
|
expire = datetime.utcnow() + timedelta(minutes=settings.jwt_expire_minutes)
|
||||||
|
to_encode.update({"exp": expire})
|
||||||
|
return jwt.encode(to_encode, settings.jwt_secret_key, algorithm=settings.jwt_algorithm)
|
||||||
|
|
||||||
|
def verify_password(plain_password: str, hashed_password: str) -> bool:
|
||||||
|
return bcrypt.checkpw(plain_password.encode('utf-8'), hashed_password.encode('utf-8'))
|
||||||
|
|
||||||
|
def hash_password(password: str) -> str:
|
||||||
|
return bcrypt.hashpw(password.encode('utf-8'), bcrypt.gensalt()).decode('utf-8')
|
||||||
|
|
||||||
|
async def get_current_user(
|
||||||
|
credentials: HTTPAuthorizationCredentials = Depends(security),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
):
|
||||||
|
try:
|
||||||
|
payload = jwt.decode(
|
||||||
|
credentials.credentials,
|
||||||
|
settings.jwt_secret_key,
|
||||||
|
algorithms=[settings.jwt_algorithm]
|
||||||
|
)
|
||||||
|
user_id: int = payload.get("sub")
|
||||||
|
if user_id is None:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_401_UNAUTHORIZED,
|
||||||
|
detail="Invalid authentication credentials"
|
||||||
|
)
|
||||||
|
except jwt.JWTError:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_401_UNAUTHORIZED,
|
||||||
|
detail="Invalid authentication credentials"
|
||||||
|
)
|
||||||
|
|
||||||
|
user = await db.get(User, user_id)
|
||||||
|
if user is None:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_401_UNAUTHORIZED,
|
||||||
|
detail="User not found"
|
||||||
|
)
|
||||||
|
return user
|
||||||
|
|
||||||
|
@router.post("/register", response_model=UserResponse)
|
||||||
|
async def register(user_data: UserCreate, db: AsyncSession = Depends(get_db)):
|
||||||
|
# Check if user exists
|
||||||
|
existing_user = await db.execute(
|
||||||
|
User.__table__.select().where(User.email == user_data.email)
|
||||||
|
)
|
||||||
|
if existing_user.first():
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_400_BAD_REQUEST,
|
||||||
|
detail="Email already registered"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Create new user
|
||||||
|
hashed_pwd = hash_password(user_data.password)
|
||||||
|
full_name = f"{user_data.first_name} {user_data.last_name}"
|
||||||
|
user = User(
|
||||||
|
email=user_data.email,
|
||||||
|
password_hash=hashed_pwd,
|
||||||
|
full_name=full_name
|
||||||
|
)
|
||||||
|
|
||||||
|
db.add(user)
|
||||||
|
await db.commit()
|
||||||
|
await db.refresh(user)
|
||||||
|
|
||||||
|
return UserResponse.from_user(user)
|
||||||
|
|
||||||
|
@router.post("/login", response_model=Token)
|
||||||
|
async def login(login_data: UserLogin, db: AsyncSession = Depends(get_db)):
|
||||||
|
user_result = await db.execute(
|
||||||
|
User.__table__.select().where(User.email == login_data.email)
|
||||||
|
)
|
||||||
|
user_row = user_result.first()
|
||||||
|
|
||||||
|
if not user_row or not verify_password(login_data.password, user_row.password_hash):
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_401_UNAUTHORIZED,
|
||||||
|
detail="Incorrect email or password"
|
||||||
|
)
|
||||||
|
|
||||||
|
access_token = create_access_token(data={"sub": str(user_row.id)})
|
||||||
|
return Token(access_token=access_token)
|
||||||
|
|
||||||
|
@router.get("/me", response_model=UserResponse)
|
||||||
|
async def get_current_user_info(current_user: User = Depends(get_current_user)):
|
||||||
|
return UserResponse.from_user(current_user)
|
||||||
184
src/backend/api/documents.py
Normal file
184
src/backend/api/documents.py
Normal file
@@ -0,0 +1,184 @@
|
|||||||
|
from fastapi import APIRouter, Depends, HTTPException, status, UploadFile, File
|
||||||
|
from sqlalchemy.ext.asyncio import AsyncSession
|
||||||
|
from sqlalchemy import select
|
||||||
|
from pydantic import BaseModel
|
||||||
|
from typing import List, Optional
|
||||||
|
import structlog
|
||||||
|
|
||||||
|
from ..core.database import get_db
|
||||||
|
from ..models.user import User
|
||||||
|
from ..models.document import Document, DocumentType
|
||||||
|
from .auth import get_current_user
|
||||||
|
|
||||||
|
logger = structlog.get_logger()
|
||||||
|
router = APIRouter()
|
||||||
|
|
||||||
|
class DocumentResponse(BaseModel):
|
||||||
|
id: int
|
||||||
|
filename: str
|
||||||
|
document_type: DocumentType
|
||||||
|
file_size: Optional[int]
|
||||||
|
ai_generated: str
|
||||||
|
created_at: str
|
||||||
|
|
||||||
|
class Config:
|
||||||
|
from_attributes = True
|
||||||
|
|
||||||
|
class DocumentCreate(BaseModel):
|
||||||
|
filename: str
|
||||||
|
document_type: DocumentType
|
||||||
|
text_content: Optional[str] = None
|
||||||
|
|
||||||
|
@router.post("/upload", response_model=DocumentResponse)
|
||||||
|
async def upload_document(
|
||||||
|
file: UploadFile = File(...),
|
||||||
|
document_type: DocumentType = DocumentType.OTHER,
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
):
|
||||||
|
try:
|
||||||
|
file_content = await file.read()
|
||||||
|
|
||||||
|
# Basic file validation
|
||||||
|
if len(file_content) > 10 * 1024 * 1024: # 10MB limit
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_413_REQUEST_ENTITY_TOO_LARGE,
|
||||||
|
detail="File too large. Maximum size is 10MB."
|
||||||
|
)
|
||||||
|
|
||||||
|
document = Document(
|
||||||
|
user_id=current_user.id,
|
||||||
|
filename=file.filename or "uploaded_file",
|
||||||
|
original_filename=file.filename,
|
||||||
|
document_type=document_type,
|
||||||
|
file_size=len(file_content),
|
||||||
|
mime_type=file.content_type,
|
||||||
|
file_content=file_content,
|
||||||
|
ai_generated="false"
|
||||||
|
)
|
||||||
|
|
||||||
|
db.add(document)
|
||||||
|
await db.commit()
|
||||||
|
await db.refresh(document)
|
||||||
|
|
||||||
|
logger.info("Document uploaded",
|
||||||
|
user_id=current_user.id,
|
||||||
|
document_id=document.id,
|
||||||
|
filename=file.filename)
|
||||||
|
|
||||||
|
return DocumentResponse.from_orm(document)
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error("Document upload failed", error=str(e))
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
|
||||||
|
detail="Failed to upload document"
|
||||||
|
)
|
||||||
|
|
||||||
|
@router.get("/", response_model=List[DocumentResponse])
|
||||||
|
async def get_documents(
|
||||||
|
document_type: Optional[DocumentType] = None,
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
):
|
||||||
|
query = select(Document).where(Document.user_id == current_user.id)
|
||||||
|
|
||||||
|
if document_type:
|
||||||
|
query = query.where(Document.document_type == document_type)
|
||||||
|
|
||||||
|
query = query.order_by(Document.created_at.desc())
|
||||||
|
|
||||||
|
result = await db.execute(query)
|
||||||
|
documents = result.scalars().all()
|
||||||
|
|
||||||
|
return [DocumentResponse.from_orm(doc) for doc in documents]
|
||||||
|
|
||||||
|
@router.get("/{document_id}", response_model=DocumentResponse)
|
||||||
|
async def get_document(
|
||||||
|
document_id: int,
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
):
|
||||||
|
document = await db.execute(
|
||||||
|
select(Document).where(
|
||||||
|
Document.id == document_id,
|
||||||
|
Document.user_id == current_user.id
|
||||||
|
)
|
||||||
|
)
|
||||||
|
document = document.scalar_one_or_none()
|
||||||
|
|
||||||
|
if not document:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_404_NOT_FOUND,
|
||||||
|
detail="Document not found"
|
||||||
|
)
|
||||||
|
|
||||||
|
return DocumentResponse.from_orm(document)
|
||||||
|
|
||||||
|
@router.delete("/{document_id}")
|
||||||
|
async def delete_document(
|
||||||
|
document_id: int,
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
):
|
||||||
|
document = await db.execute(
|
||||||
|
select(Document).where(
|
||||||
|
Document.id == document_id,
|
||||||
|
Document.user_id == current_user.id
|
||||||
|
)
|
||||||
|
)
|
||||||
|
document = document.scalar_one_or_none()
|
||||||
|
|
||||||
|
if not document:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_404_NOT_FOUND,
|
||||||
|
detail="Document not found"
|
||||||
|
)
|
||||||
|
|
||||||
|
await db.delete(document)
|
||||||
|
await db.commit()
|
||||||
|
|
||||||
|
logger.info("Document deleted",
|
||||||
|
user_id=current_user.id,
|
||||||
|
document_id=document_id)
|
||||||
|
|
||||||
|
return {"message": "Document deleted successfully"}
|
||||||
|
|
||||||
|
@router.post("/generate-cover-letter")
|
||||||
|
async def generate_cover_letter(
|
||||||
|
job_id: int,
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
):
|
||||||
|
# Placeholder for AI cover letter generation
|
||||||
|
# In full implementation, this would use Claude/OpenAI APIs
|
||||||
|
|
||||||
|
cover_letter_content = f"""
|
||||||
|
Dear Hiring Manager,
|
||||||
|
|
||||||
|
I am writing to express my interest in the position at your company.
|
||||||
|
[AI-generated content would be here based on job requirements and user profile]
|
||||||
|
|
||||||
|
Best regards,
|
||||||
|
{current_user.full_name}
|
||||||
|
"""
|
||||||
|
|
||||||
|
document = Document(
|
||||||
|
user_id=current_user.id,
|
||||||
|
filename=f"cover_letter_job_{job_id}.txt",
|
||||||
|
document_type=DocumentType.COVER_LETTER,
|
||||||
|
text_content=cover_letter_content,
|
||||||
|
ai_generated="true",
|
||||||
|
ai_model_used="claude-3",
|
||||||
|
generation_prompt=f"Generate cover letter for job ID {job_id}"
|
||||||
|
)
|
||||||
|
|
||||||
|
db.add(document)
|
||||||
|
await db.commit()
|
||||||
|
await db.refresh(document)
|
||||||
|
|
||||||
|
return {
|
||||||
|
"message": "Cover letter generated successfully",
|
||||||
|
"document_id": document.id,
|
||||||
|
"content": cover_letter_content
|
||||||
|
}
|
||||||
405
src/backend/api/job_applications.py
Normal file
405
src/backend/api/job_applications.py
Normal file
@@ -0,0 +1,405 @@
|
|||||||
|
"""
|
||||||
|
Job Applications API that matches the actual database schema and includes AI features
|
||||||
|
"""
|
||||||
|
from fastapi import APIRouter, Depends, HTTPException, status
|
||||||
|
from sqlalchemy.ext.asyncio import AsyncSession
|
||||||
|
from sqlalchemy import select
|
||||||
|
from pydantic import BaseModel, HttpUrl
|
||||||
|
from typing import List, Optional
|
||||||
|
import structlog
|
||||||
|
|
||||||
|
from ..core.database import get_db
|
||||||
|
from ..models.user import User
|
||||||
|
from ..models.job_application import JobApplication, PriorityLevel, ApplicationStatus
|
||||||
|
from ..models.job_document import JobDocument, DocumentTypeEnum
|
||||||
|
from ..services.ai_service import ai_service
|
||||||
|
from .auth import get_current_user
|
||||||
|
|
||||||
|
logger = structlog.get_logger()
|
||||||
|
router = APIRouter()
|
||||||
|
|
||||||
|
class ApplicationCreate(BaseModel):
|
||||||
|
name: str
|
||||||
|
company_name: str
|
||||||
|
role_title: str
|
||||||
|
job_description: str
|
||||||
|
job_url: Optional[str] = None
|
||||||
|
location: Optional[str] = None
|
||||||
|
priority_level: PriorityLevel = PriorityLevel.MEDIUM
|
||||||
|
|
||||||
|
class ApplicationUpdate(BaseModel):
|
||||||
|
name: Optional[str] = None
|
||||||
|
status: Optional[ApplicationStatus] = None
|
||||||
|
priority_level: Optional[PriorityLevel] = None
|
||||||
|
job_url: Optional[str] = None
|
||||||
|
location: Optional[str] = None
|
||||||
|
|
||||||
|
class ApplicationResponse(BaseModel):
|
||||||
|
id: str
|
||||||
|
name: str
|
||||||
|
company_name: str
|
||||||
|
role_title: str
|
||||||
|
job_url: Optional[str]
|
||||||
|
location: Optional[str]
|
||||||
|
priority_level: PriorityLevel
|
||||||
|
status: ApplicationStatus
|
||||||
|
research_completed: bool
|
||||||
|
resume_optimized: bool
|
||||||
|
cover_letter_generated: bool
|
||||||
|
created_at: str
|
||||||
|
|
||||||
|
@classmethod
|
||||||
|
def from_application(cls, app: JobApplication):
|
||||||
|
return cls(
|
||||||
|
id=str(app.id),
|
||||||
|
name=app.name,
|
||||||
|
company_name=app.company_name,
|
||||||
|
role_title=app.role_title,
|
||||||
|
job_url=app.job_url,
|
||||||
|
location=app.location,
|
||||||
|
priority_level=app.priority_level,
|
||||||
|
status=app.status,
|
||||||
|
research_completed=app.research_completed,
|
||||||
|
resume_optimized=app.resume_optimized,
|
||||||
|
cover_letter_generated=app.cover_letter_generated,
|
||||||
|
created_at=app.created_at.isoformat()
|
||||||
|
)
|
||||||
|
|
||||||
|
class DocumentResponse(BaseModel):
|
||||||
|
id: str
|
||||||
|
document_type: DocumentTypeEnum
|
||||||
|
content: str
|
||||||
|
created_at: str
|
||||||
|
|
||||||
|
@classmethod
|
||||||
|
def from_document(cls, doc: JobDocument):
|
||||||
|
return cls(
|
||||||
|
id=str(doc.id),
|
||||||
|
document_type=doc.document_type,
|
||||||
|
content=doc.content,
|
||||||
|
created_at=doc.created_at.isoformat()
|
||||||
|
)
|
||||||
|
|
||||||
|
@router.post("/", response_model=ApplicationResponse)
|
||||||
|
async def create_application(
|
||||||
|
application_data: ApplicationCreate,
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
):
|
||||||
|
"""Create a new job application"""
|
||||||
|
application = JobApplication(
|
||||||
|
user_id=current_user.id,
|
||||||
|
name=application_data.name,
|
||||||
|
company_name=application_data.company_name,
|
||||||
|
role_title=application_data.role_title,
|
||||||
|
job_description=application_data.job_description,
|
||||||
|
job_url=application_data.job_url,
|
||||||
|
location=application_data.location,
|
||||||
|
priority_level=application_data.priority_level
|
||||||
|
)
|
||||||
|
|
||||||
|
db.add(application)
|
||||||
|
await db.commit()
|
||||||
|
await db.refresh(application)
|
||||||
|
|
||||||
|
logger.info("Application created",
|
||||||
|
user_id=str(current_user.id),
|
||||||
|
application_id=str(application.id),
|
||||||
|
company=application_data.company_name)
|
||||||
|
|
||||||
|
return ApplicationResponse.from_application(application)
|
||||||
|
|
||||||
|
@router.get("/", response_model=List[ApplicationResponse])
|
||||||
|
async def get_applications(
|
||||||
|
status: Optional[ApplicationStatus] = None,
|
||||||
|
priority: Optional[PriorityLevel] = None,
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
):
|
||||||
|
"""Get all applications for the current user"""
|
||||||
|
query = select(JobApplication).where(JobApplication.user_id == current_user.id)
|
||||||
|
|
||||||
|
if status:
|
||||||
|
query = query.where(JobApplication.status == status)
|
||||||
|
if priority:
|
||||||
|
query = query.where(JobApplication.priority_level == priority)
|
||||||
|
|
||||||
|
query = query.order_by(JobApplication.created_at.desc())
|
||||||
|
|
||||||
|
result = await db.execute(query)
|
||||||
|
applications = result.scalars().all()
|
||||||
|
|
||||||
|
return [ApplicationResponse.from_application(app) for app in applications]
|
||||||
|
|
||||||
|
@router.get("/{application_id}", response_model=ApplicationResponse)
|
||||||
|
async def get_application(
|
||||||
|
application_id: str,
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
):
|
||||||
|
"""Get a specific application"""
|
||||||
|
result = await db.execute(
|
||||||
|
select(JobApplication).where(
|
||||||
|
JobApplication.id == application_id,
|
||||||
|
JobApplication.user_id == current_user.id
|
||||||
|
)
|
||||||
|
)
|
||||||
|
application = result.scalar_one_or_none()
|
||||||
|
|
||||||
|
if not application:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_404_NOT_FOUND,
|
||||||
|
detail="Application not found"
|
||||||
|
)
|
||||||
|
|
||||||
|
return ApplicationResponse.from_application(application)
|
||||||
|
|
||||||
|
@router.put("/{application_id}", response_model=ApplicationResponse)
|
||||||
|
async def update_application(
|
||||||
|
application_id: str,
|
||||||
|
update_data: ApplicationUpdate,
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
):
|
||||||
|
"""Update an application"""
|
||||||
|
result = await db.execute(
|
||||||
|
select(JobApplication).where(
|
||||||
|
JobApplication.id == application_id,
|
||||||
|
JobApplication.user_id == current_user.id
|
||||||
|
)
|
||||||
|
)
|
||||||
|
application = result.scalar_one_or_none()
|
||||||
|
|
||||||
|
if not application:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_404_NOT_FOUND,
|
||||||
|
detail="Application not found"
|
||||||
|
)
|
||||||
|
|
||||||
|
update_dict = update_data.dict(exclude_unset=True)
|
||||||
|
for field, value in update_dict.items():
|
||||||
|
setattr(application, field, value)
|
||||||
|
|
||||||
|
await db.commit()
|
||||||
|
await db.refresh(application)
|
||||||
|
|
||||||
|
return ApplicationResponse.from_application(application)
|
||||||
|
|
||||||
|
@router.post("/{application_id}/generate-cover-letter", response_model=DocumentResponse)
|
||||||
|
async def generate_cover_letter(
|
||||||
|
application_id: str,
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
):
|
||||||
|
"""Generate AI cover letter for the application"""
|
||||||
|
# Get the application
|
||||||
|
result = await db.execute(
|
||||||
|
select(JobApplication).where(
|
||||||
|
JobApplication.id == application_id,
|
||||||
|
JobApplication.user_id == current_user.id
|
||||||
|
)
|
||||||
|
)
|
||||||
|
application = result.scalar_one_or_none()
|
||||||
|
|
||||||
|
if not application:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_404_NOT_FOUND,
|
||||||
|
detail="Application not found"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Check if cover letter already exists
|
||||||
|
existing_doc = await db.execute(
|
||||||
|
select(JobDocument).where(
|
||||||
|
JobDocument.application_id == application_id,
|
||||||
|
JobDocument.document_type == DocumentTypeEnum.COVER_LETTER
|
||||||
|
)
|
||||||
|
)
|
||||||
|
if existing_doc.scalar_one_or_none():
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_400_BAD_REQUEST,
|
||||||
|
detail="Cover letter already exists for this application"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Generate cover letter using AI
|
||||||
|
try:
|
||||||
|
ai_result = await ai_service.generate_cover_letter(
|
||||||
|
job_description=application.job_description,
|
||||||
|
company_name=application.company_name,
|
||||||
|
role_title=application.role_title,
|
||||||
|
user_name=current_user.full_name
|
||||||
|
)
|
||||||
|
|
||||||
|
# Create document
|
||||||
|
document = JobDocument(
|
||||||
|
application_id=application.id,
|
||||||
|
document_type=DocumentTypeEnum.COVER_LETTER,
|
||||||
|
content=ai_result["content"]
|
||||||
|
)
|
||||||
|
|
||||||
|
db.add(document)
|
||||||
|
|
||||||
|
# Update application flags
|
||||||
|
application.cover_letter_generated = True
|
||||||
|
if application.status == ApplicationStatus.DRAFT:
|
||||||
|
application.status = ApplicationStatus.COVER_LETTER_READY
|
||||||
|
|
||||||
|
await db.commit()
|
||||||
|
await db.refresh(document)
|
||||||
|
|
||||||
|
logger.info("Cover letter generated",
|
||||||
|
user_id=str(current_user.id),
|
||||||
|
application_id=application_id,
|
||||||
|
model_used=ai_result["model_used"])
|
||||||
|
|
||||||
|
return DocumentResponse.from_document(document)
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error("Cover letter generation failed",
|
||||||
|
error=str(e),
|
||||||
|
application_id=application_id)
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
|
||||||
|
detail="Failed to generate cover letter"
|
||||||
|
)
|
||||||
|
|
||||||
|
@router.post("/{application_id}/optimize-resume")
|
||||||
|
async def optimize_resume(
|
||||||
|
application_id: str,
|
||||||
|
resume_content: str,
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
):
|
||||||
|
"""Generate optimized resume for the application"""
|
||||||
|
# Get the application
|
||||||
|
result = await db.execute(
|
||||||
|
select(JobApplication).where(
|
||||||
|
JobApplication.id == application_id,
|
||||||
|
JobApplication.user_id == current_user.id
|
||||||
|
)
|
||||||
|
)
|
||||||
|
application = result.scalar_one_or_none()
|
||||||
|
|
||||||
|
if not application:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_404_NOT_FOUND,
|
||||||
|
detail="Application not found"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Generate optimized resume using AI
|
||||||
|
try:
|
||||||
|
ai_result = await ai_service.generate_resume_optimization(
|
||||||
|
current_resume=resume_content,
|
||||||
|
job_description=application.job_description,
|
||||||
|
role_title=application.role_title
|
||||||
|
)
|
||||||
|
|
||||||
|
# Check if optimized resume already exists
|
||||||
|
existing_doc = await db.execute(
|
||||||
|
select(JobDocument).where(
|
||||||
|
JobDocument.application_id == application_id,
|
||||||
|
JobDocument.document_type == DocumentTypeEnum.OPTIMIZED_RESUME
|
||||||
|
)
|
||||||
|
)
|
||||||
|
existing = existing_doc.scalar_one_or_none()
|
||||||
|
|
||||||
|
if existing:
|
||||||
|
# Update existing document
|
||||||
|
existing.content = ai_result["content"]
|
||||||
|
document = existing
|
||||||
|
else:
|
||||||
|
# Create new document
|
||||||
|
document = JobDocument(
|
||||||
|
application_id=application.id,
|
||||||
|
document_type=DocumentTypeEnum.OPTIMIZED_RESUME,
|
||||||
|
content=ai_result["content"]
|
||||||
|
)
|
||||||
|
db.add(document)
|
||||||
|
|
||||||
|
# Update application flags
|
||||||
|
application.resume_optimized = True
|
||||||
|
if application.status == ApplicationStatus.DRAFT:
|
||||||
|
application.status = ApplicationStatus.RESUME_READY
|
||||||
|
|
||||||
|
await db.commit()
|
||||||
|
await db.refresh(document)
|
||||||
|
|
||||||
|
logger.info("Resume optimized",
|
||||||
|
user_id=str(current_user.id),
|
||||||
|
application_id=application_id,
|
||||||
|
model_used=ai_result["model_used"])
|
||||||
|
|
||||||
|
return DocumentResponse.from_document(document)
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error("Resume optimization failed",
|
||||||
|
error=str(e),
|
||||||
|
application_id=application_id)
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
|
||||||
|
detail="Failed to optimize resume"
|
||||||
|
)
|
||||||
|
|
||||||
|
@router.get("/{application_id}/documents", response_model=List[DocumentResponse])
|
||||||
|
async def get_application_documents(
|
||||||
|
application_id: str,
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
):
|
||||||
|
"""Get all documents for an application"""
|
||||||
|
# Verify application belongs to user
|
||||||
|
app_result = await db.execute(
|
||||||
|
select(JobApplication).where(
|
||||||
|
JobApplication.id == application_id,
|
||||||
|
JobApplication.user_id == current_user.id
|
||||||
|
)
|
||||||
|
)
|
||||||
|
if not app_result.scalar_one_or_none():
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_404_NOT_FOUND,
|
||||||
|
detail="Application not found"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Get documents
|
||||||
|
result = await db.execute(
|
||||||
|
select(JobDocument)
|
||||||
|
.where(JobDocument.application_id == application_id)
|
||||||
|
.order_by(JobDocument.created_at.desc())
|
||||||
|
)
|
||||||
|
documents = result.scalars().all()
|
||||||
|
|
||||||
|
return [DocumentResponse.from_document(doc) for doc in documents]
|
||||||
|
|
||||||
|
@router.delete("/{application_id}")
|
||||||
|
async def delete_application(
|
||||||
|
application_id: str,
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
):
|
||||||
|
"""Delete an application and all its documents"""
|
||||||
|
result = await db.execute(
|
||||||
|
select(JobApplication).where(
|
||||||
|
JobApplication.id == application_id,
|
||||||
|
JobApplication.user_id == current_user.id
|
||||||
|
)
|
||||||
|
)
|
||||||
|
application = result.scalar_one_or_none()
|
||||||
|
|
||||||
|
if not application:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_404_NOT_FOUND,
|
||||||
|
detail="Application not found"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Delete documents first (CASCADE should handle this, but being explicit)
|
||||||
|
await db.execute(
|
||||||
|
select(JobDocument).where(JobDocument.application_id == application_id)
|
||||||
|
)
|
||||||
|
|
||||||
|
await db.delete(application)
|
||||||
|
await db.commit()
|
||||||
|
|
||||||
|
logger.info("Application deleted",
|
||||||
|
user_id=str(current_user.id),
|
||||||
|
application_id=application_id)
|
||||||
|
|
||||||
|
return {"message": "Application deleted successfully"}
|
||||||
143
src/backend/api/jobs.py
Normal file
143
src/backend/api/jobs.py
Normal file
@@ -0,0 +1,143 @@
|
|||||||
|
from fastapi import APIRouter, Depends, HTTPException, status, Query
|
||||||
|
from sqlalchemy.ext.asyncio import AsyncSession
|
||||||
|
from sqlalchemy import select, func
|
||||||
|
from pydantic import BaseModel
|
||||||
|
from typing import List, Optional
|
||||||
|
from datetime import datetime
|
||||||
|
|
||||||
|
from ..core.database import get_db
|
||||||
|
from ..models.user import User
|
||||||
|
from ..models.job import Job
|
||||||
|
from .auth import get_current_user
|
||||||
|
|
||||||
|
router = APIRouter()
|
||||||
|
|
||||||
|
class JobCreate(BaseModel):
|
||||||
|
title: str
|
||||||
|
company: str
|
||||||
|
location: Optional[str] = None
|
||||||
|
salary_min: Optional[int] = None
|
||||||
|
salary_max: Optional[int] = None
|
||||||
|
remote_option: bool = False
|
||||||
|
description: str
|
||||||
|
requirements: Optional[str] = None
|
||||||
|
benefits: Optional[str] = None
|
||||||
|
source_url: Optional[str] = None
|
||||||
|
source_platform: Optional[str] = None
|
||||||
|
posted_date: Optional[datetime] = None
|
||||||
|
|
||||||
|
class JobResponse(BaseModel):
|
||||||
|
id: int
|
||||||
|
title: str
|
||||||
|
company: str
|
||||||
|
location: Optional[str]
|
||||||
|
salary_min: Optional[int]
|
||||||
|
salary_max: Optional[int]
|
||||||
|
remote_option: bool
|
||||||
|
description: str
|
||||||
|
requirements: Optional[str]
|
||||||
|
benefits: Optional[str]
|
||||||
|
source_url: Optional[str]
|
||||||
|
posted_date: Optional[datetime]
|
||||||
|
created_at: datetime
|
||||||
|
|
||||||
|
class Config:
|
||||||
|
from_attributes = True
|
||||||
|
|
||||||
|
class JobSearchResponse(BaseModel):
|
||||||
|
id: int
|
||||||
|
title: str
|
||||||
|
company: str
|
||||||
|
location: Optional[str]
|
||||||
|
salary_min: Optional[int]
|
||||||
|
salary_max: Optional[int]
|
||||||
|
remote_option: bool
|
||||||
|
description: str
|
||||||
|
match_score: Optional[float]
|
||||||
|
posted_date: Optional[datetime]
|
||||||
|
|
||||||
|
class Config:
|
||||||
|
from_attributes = True
|
||||||
|
|
||||||
|
@router.post("/", response_model=JobResponse)
|
||||||
|
async def create_job(
|
||||||
|
job_data: JobCreate,
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
):
|
||||||
|
job = Job(**job_data.dict())
|
||||||
|
db.add(job)
|
||||||
|
await db.commit()
|
||||||
|
await db.refresh(job)
|
||||||
|
|
||||||
|
return JobResponse.from_orm(job)
|
||||||
|
|
||||||
|
@router.get("/", response_model=List[JobSearchResponse])
|
||||||
|
async def search_jobs(
|
||||||
|
q: Optional[str] = Query(None, description="Search query"),
|
||||||
|
location: Optional[str] = Query(None, description="Location filter"),
|
||||||
|
remote: Optional[bool] = Query(None, description="Remote work filter"),
|
||||||
|
salary_min: Optional[int] = Query(None, description="Minimum salary"),
|
||||||
|
company: Optional[str] = Query(None, description="Company filter"),
|
||||||
|
limit: int = Query(20, ge=1, le=100),
|
||||||
|
offset: int = Query(0, ge=0),
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
):
|
||||||
|
query = select(Job).where(Job.is_active == True)
|
||||||
|
|
||||||
|
if q:
|
||||||
|
search_filter = func.lower(Job.title).contains(q.lower()) | \
|
||||||
|
func.lower(Job.description).contains(q.lower()) | \
|
||||||
|
func.lower(Job.company).contains(q.lower())
|
||||||
|
query = query.where(search_filter)
|
||||||
|
|
||||||
|
if location:
|
||||||
|
query = query.where(func.lower(Job.location).contains(location.lower()))
|
||||||
|
|
||||||
|
if remote is not None:
|
||||||
|
query = query.where(Job.remote_option == remote)
|
||||||
|
|
||||||
|
if salary_min:
|
||||||
|
query = query.where(Job.salary_min >= salary_min)
|
||||||
|
|
||||||
|
if company:
|
||||||
|
query = query.where(func.lower(Job.company).contains(company.lower()))
|
||||||
|
|
||||||
|
query = query.order_by(Job.posted_date.desc().nullslast(), Job.created_at.desc())
|
||||||
|
query = query.offset(offset).limit(limit)
|
||||||
|
|
||||||
|
result = await db.execute(query)
|
||||||
|
jobs = result.scalars().all()
|
||||||
|
|
||||||
|
return [JobSearchResponse.from_orm(job) for job in jobs]
|
||||||
|
|
||||||
|
@router.get("/{job_id}", response_model=JobResponse)
|
||||||
|
async def get_job(
|
||||||
|
job_id: int,
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
):
|
||||||
|
job = await db.get(Job, job_id)
|
||||||
|
if not job or not job.is_active:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=status.HTTP_404_NOT_FOUND,
|
||||||
|
detail="Job not found"
|
||||||
|
)
|
||||||
|
|
||||||
|
return JobResponse.from_orm(job)
|
||||||
|
|
||||||
|
@router.get("/recommendations/")
|
||||||
|
async def get_job_recommendations(
|
||||||
|
limit: int = Query(10, ge=1, le=50),
|
||||||
|
current_user: User = Depends(get_current_user),
|
||||||
|
db: AsyncSession = Depends(get_db)
|
||||||
|
):
|
||||||
|
# For now, return recent jobs sorted by created date
|
||||||
|
# In a full implementation, this would use AI matching based on user profile
|
||||||
|
query = select(Job).where(Job.is_active == True).order_by(Job.created_at.desc()).limit(limit)
|
||||||
|
|
||||||
|
result = await db.execute(query)
|
||||||
|
jobs = result.scalars().all()
|
||||||
|
|
||||||
|
return [JobSearchResponse.from_orm(job) for job in jobs]
|
||||||
0
src/backend/core/__init__.py
Normal file
0
src/backend/core/__init__.py
Normal file
30
src/backend/core/config.py
Normal file
30
src/backend/core/config.py
Normal file
@@ -0,0 +1,30 @@
|
|||||||
|
from pydantic_settings import BaseSettings
|
||||||
|
from pydantic import Field
|
||||||
|
import os
|
||||||
|
|
||||||
|
class Settings(BaseSettings):
|
||||||
|
database_url: str = Field(
|
||||||
|
default="postgresql+asyncpg://jobforge_user:jobforge_password@localhost:5432/jobforge_mvp",
|
||||||
|
env="DATABASE_URL"
|
||||||
|
)
|
||||||
|
|
||||||
|
claude_api_key: str = Field(env="CLAUDE_API_KEY")
|
||||||
|
openai_api_key: str = Field(env="OPENAI_API_KEY")
|
||||||
|
jwt_secret_key: str = Field(env="JWT_SECRET_KEY")
|
||||||
|
|
||||||
|
debug: bool = Field(default=False, env="DEBUG")
|
||||||
|
log_level: str = Field(default="INFO", env="LOG_LEVEL")
|
||||||
|
|
||||||
|
jwt_algorithm: str = "HS256"
|
||||||
|
jwt_expire_minutes: int = 60 * 24 * 7 # 7 days
|
||||||
|
|
||||||
|
cors_origins: list[str] = [
|
||||||
|
"http://localhost:8501",
|
||||||
|
"http://frontend:8501"
|
||||||
|
]
|
||||||
|
|
||||||
|
class Config:
|
||||||
|
env_file = ".env"
|
||||||
|
case_sensitive = False
|
||||||
|
|
||||||
|
settings = Settings()
|
||||||
49
src/backend/core/database.py
Normal file
49
src/backend/core/database.py
Normal file
@@ -0,0 +1,49 @@
|
|||||||
|
from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession, async_sessionmaker
|
||||||
|
from sqlalchemy.orm import DeclarativeBase
|
||||||
|
from sqlalchemy import text
|
||||||
|
import structlog
|
||||||
|
|
||||||
|
from .config import settings
|
||||||
|
|
||||||
|
logger = structlog.get_logger()
|
||||||
|
|
||||||
|
class Base(DeclarativeBase):
|
||||||
|
pass
|
||||||
|
|
||||||
|
engine = create_async_engine(
|
||||||
|
settings.database_url,
|
||||||
|
echo=settings.debug,
|
||||||
|
pool_pre_ping=True,
|
||||||
|
pool_recycle=300
|
||||||
|
)
|
||||||
|
|
||||||
|
AsyncSessionLocal = async_sessionmaker(
|
||||||
|
engine,
|
||||||
|
class_=AsyncSession,
|
||||||
|
expire_on_commit=False
|
||||||
|
)
|
||||||
|
|
||||||
|
async def get_db():
|
||||||
|
async with AsyncSessionLocal() as session:
|
||||||
|
try:
|
||||||
|
yield session
|
||||||
|
except Exception:
|
||||||
|
await session.rollback()
|
||||||
|
raise
|
||||||
|
finally:
|
||||||
|
await session.close()
|
||||||
|
|
||||||
|
async def init_db():
|
||||||
|
try:
|
||||||
|
async with engine.begin() as conn:
|
||||||
|
await conn.execute(text("CREATE EXTENSION IF NOT EXISTS vector"))
|
||||||
|
logger.info("Database extensions initialized")
|
||||||
|
|
||||||
|
from ..models import user, application, job, document
|
||||||
|
async with engine.begin() as conn:
|
||||||
|
await conn.run_sync(Base.metadata.create_all)
|
||||||
|
logger.info("Database tables created")
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error("Database initialization failed", error=str(e))
|
||||||
|
raise
|
||||||
64
src/backend/main.py
Normal file
64
src/backend/main.py
Normal file
@@ -0,0 +1,64 @@
|
|||||||
|
from fastapi import FastAPI, HTTPException
|
||||||
|
from fastapi.middleware.cors import CORSMiddleware
|
||||||
|
from fastapi.responses import JSONResponse
|
||||||
|
import structlog
|
||||||
|
from contextlib import asynccontextmanager
|
||||||
|
|
||||||
|
from .core.config import settings
|
||||||
|
from .core.database import init_db
|
||||||
|
from .api import auth, applications, jobs, documents, ai_documents
|
||||||
|
|
||||||
|
logger = structlog.get_logger()
|
||||||
|
|
||||||
|
@asynccontextmanager
|
||||||
|
async def lifespan(app: FastAPI):
|
||||||
|
logger.info("Starting Job Forge backend...")
|
||||||
|
await init_db()
|
||||||
|
logger.info("Database initialized")
|
||||||
|
yield
|
||||||
|
logger.info("Shutting down Job Forge backend...")
|
||||||
|
|
||||||
|
app = FastAPI(
|
||||||
|
title="Job Forge API",
|
||||||
|
description="AI-Powered Job Application Assistant",
|
||||||
|
version="1.0.0",
|
||||||
|
lifespan=lifespan
|
||||||
|
)
|
||||||
|
|
||||||
|
app.add_middleware(
|
||||||
|
CORSMiddleware,
|
||||||
|
allow_origins=["http://localhost:8501", "http://frontend:8501"],
|
||||||
|
allow_credentials=True,
|
||||||
|
allow_methods=["*"],
|
||||||
|
allow_headers=["*"],
|
||||||
|
)
|
||||||
|
|
||||||
|
@app.get("/health")
|
||||||
|
async def health_check():
|
||||||
|
return {"status": "healthy", "service": "job-forge-backend"}
|
||||||
|
|
||||||
|
@app.get("/")
|
||||||
|
async def root():
|
||||||
|
return {"message": "Job Forge API", "version": "1.0.0"}
|
||||||
|
|
||||||
|
app.include_router(auth.router, prefix="/api/auth", tags=["Authentication"])
|
||||||
|
app.include_router(applications.router, prefix="/api/applications", tags=["Applications"])
|
||||||
|
app.include_router(jobs.router, prefix="/api/jobs", tags=["Jobs"])
|
||||||
|
app.include_router(documents.router, prefix="/api/documents", tags=["Documents"])
|
||||||
|
app.include_router(ai_documents.router, prefix="/api/ai", tags=["AI Document Generation"])
|
||||||
|
|
||||||
|
@app.exception_handler(HTTPException)
|
||||||
|
async def http_exception_handler(request, exc):
|
||||||
|
logger.error("HTTP exception", status_code=exc.status_code, detail=exc.detail)
|
||||||
|
return JSONResponse(
|
||||||
|
status_code=exc.status_code,
|
||||||
|
content={"detail": exc.detail}
|
||||||
|
)
|
||||||
|
|
||||||
|
@app.exception_handler(Exception)
|
||||||
|
async def general_exception_handler(request, exc):
|
||||||
|
logger.error("Unhandled exception", error=str(exc))
|
||||||
|
return JSONResponse(
|
||||||
|
status_code=500,
|
||||||
|
content={"detail": "Internal server error"}
|
||||||
|
)
|
||||||
0
src/backend/services/__init__.py
Normal file
0
src/backend/services/__init__.py
Normal file
222
src/backend/services/ai_service.py
Normal file
222
src/backend/services/ai_service.py
Normal file
@@ -0,0 +1,222 @@
|
|||||||
|
"""
|
||||||
|
AI Service for Job Forge - Handles document generation and AI processing
|
||||||
|
"""
|
||||||
|
import structlog
|
||||||
|
from typing import Dict, Optional
|
||||||
|
import anthropic
|
||||||
|
import openai
|
||||||
|
from ..core.config import settings
|
||||||
|
|
||||||
|
logger = structlog.get_logger()
|
||||||
|
|
||||||
|
class AIService:
|
||||||
|
def __init__(self):
|
||||||
|
self.claude_client = None
|
||||||
|
self.openai_client = None
|
||||||
|
|
||||||
|
# Initialize Claude client if API key is available
|
||||||
|
if settings.claude_api_key:
|
||||||
|
self.claude_client = anthropic.Anthropic(api_key=settings.claude_api_key)
|
||||||
|
|
||||||
|
# Initialize OpenAI client if API key is available
|
||||||
|
if settings.openai_api_key:
|
||||||
|
self.openai_client = openai.AsyncOpenAI(api_key=settings.openai_api_key)
|
||||||
|
|
||||||
|
async def generate_cover_letter(
|
||||||
|
self,
|
||||||
|
job_description: str,
|
||||||
|
company_name: str,
|
||||||
|
role_title: str,
|
||||||
|
user_name: str,
|
||||||
|
user_resume: Optional[str] = None
|
||||||
|
) -> Dict[str, str]:
|
||||||
|
"""
|
||||||
|
Generate a personalized cover letter using AI
|
||||||
|
"""
|
||||||
|
try:
|
||||||
|
# Construct the prompt
|
||||||
|
prompt = f"""
|
||||||
|
You are a professional career coach helping someone write a compelling cover letter.
|
||||||
|
|
||||||
|
JOB DETAILS:
|
||||||
|
- Company: {company_name}
|
||||||
|
- Role: {role_title}
|
||||||
|
- Job Description: {job_description}
|
||||||
|
|
||||||
|
USER INFORMATION:
|
||||||
|
- Name: {user_name}
|
||||||
|
{f"- Resume/Background: {user_resume[:1000]}..." if user_resume else ""}
|
||||||
|
|
||||||
|
TASK:
|
||||||
|
Write a professional, personalized cover letter that:
|
||||||
|
1. Shows genuine interest in the specific role and company
|
||||||
|
2. Highlights relevant skills from the job description
|
||||||
|
3. Demonstrates understanding of the company's needs
|
||||||
|
4. Uses a professional but engaging tone
|
||||||
|
5. Is 3-4 paragraphs long
|
||||||
|
6. Includes a strong opening and closing
|
||||||
|
|
||||||
|
Format the response as a complete cover letter without any meta-commentary.
|
||||||
|
"""
|
||||||
|
|
||||||
|
# Try Claude first, fallback to OpenAI
|
||||||
|
if self.claude_client:
|
||||||
|
logger.info("Generating cover letter with Claude")
|
||||||
|
response = self.claude_client.messages.create(
|
||||||
|
model="claude-3-haiku-20240307",
|
||||||
|
max_tokens=1000,
|
||||||
|
messages=[
|
||||||
|
{"role": "user", "content": prompt}
|
||||||
|
]
|
||||||
|
)
|
||||||
|
content = response.content[0].text
|
||||||
|
model_used = "claude-3-haiku"
|
||||||
|
|
||||||
|
elif self.openai_client:
|
||||||
|
logger.info("Generating cover letter with OpenAI")
|
||||||
|
response = await self.openai_client.chat.completions.create(
|
||||||
|
model="gpt-3.5-turbo",
|
||||||
|
messages=[
|
||||||
|
{"role": "system", "content": "You are a professional career coach helping write cover letters."},
|
||||||
|
{"role": "user", "content": prompt}
|
||||||
|
],
|
||||||
|
max_tokens=1000,
|
||||||
|
temperature=0.7
|
||||||
|
)
|
||||||
|
content = response.choices[0].message.content
|
||||||
|
model_used = "gpt-3.5-turbo"
|
||||||
|
|
||||||
|
else:
|
||||||
|
# Fallback to template-based generation
|
||||||
|
logger.warning("No AI API keys available, using template")
|
||||||
|
content = self._generate_template_cover_letter(
|
||||||
|
company_name, role_title, user_name, job_description
|
||||||
|
)
|
||||||
|
model_used = "template"
|
||||||
|
|
||||||
|
return {
|
||||||
|
"content": content,
|
||||||
|
"model_used": model_used,
|
||||||
|
"prompt": prompt[:500] + "..." if len(prompt) > 500 else prompt
|
||||||
|
}
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error("AI cover letter generation failed", error=str(e))
|
||||||
|
# Fallback to template
|
||||||
|
content = self._generate_template_cover_letter(
|
||||||
|
company_name, role_title, user_name, job_description
|
||||||
|
)
|
||||||
|
return {
|
||||||
|
"content": content,
|
||||||
|
"model_used": "template-fallback",
|
||||||
|
"prompt": "Template fallback due to AI service error"
|
||||||
|
}
|
||||||
|
|
||||||
|
async def generate_resume_optimization(
|
||||||
|
self,
|
||||||
|
current_resume: str,
|
||||||
|
job_description: str,
|
||||||
|
role_title: str
|
||||||
|
) -> Dict[str, str]:
|
||||||
|
"""
|
||||||
|
Optimize resume for specific job requirements
|
||||||
|
"""
|
||||||
|
try:
|
||||||
|
prompt = f"""
|
||||||
|
You are an expert resume writer helping optimize a resume for a specific job.
|
||||||
|
|
||||||
|
CURRENT RESUME:
|
||||||
|
{current_resume}
|
||||||
|
|
||||||
|
TARGET JOB:
|
||||||
|
- Role: {role_title}
|
||||||
|
- Job Description: {job_description}
|
||||||
|
|
||||||
|
TASK:
|
||||||
|
Optimize this resume by:
|
||||||
|
1. Highlighting relevant skills mentioned in the job description
|
||||||
|
2. Reordering sections to emphasize most relevant experience
|
||||||
|
3. Using keywords from the job posting
|
||||||
|
4. Maintaining truthfulness - only reorganize/reword existing content
|
||||||
|
5. Keeping the same general structure and format
|
||||||
|
|
||||||
|
Return the optimized resume without meta-commentary.
|
||||||
|
"""
|
||||||
|
|
||||||
|
if self.claude_client:
|
||||||
|
response = self.claude_client.messages.create(
|
||||||
|
model="claude-3-haiku-20240307",
|
||||||
|
max_tokens=2000,
|
||||||
|
messages=[
|
||||||
|
{"role": "user", "content": prompt}
|
||||||
|
]
|
||||||
|
)
|
||||||
|
content = response.content[0].text
|
||||||
|
model_used = "claude-3-haiku"
|
||||||
|
|
||||||
|
elif self.openai_client:
|
||||||
|
response = await self.openai_client.chat.completions.create(
|
||||||
|
model="gpt-3.5-turbo",
|
||||||
|
messages=[
|
||||||
|
{"role": "system", "content": "You are an expert resume writer."},
|
||||||
|
{"role": "user", "content": prompt}
|
||||||
|
],
|
||||||
|
max_tokens=2000,
|
||||||
|
temperature=0.5
|
||||||
|
)
|
||||||
|
content = response.choices[0].message.content
|
||||||
|
model_used = "gpt-3.5-turbo"
|
||||||
|
|
||||||
|
else:
|
||||||
|
content = f"Resume optimization for {role_title}\n\n{current_resume}\n\n[AI optimization would be applied here with API keys configured]"
|
||||||
|
model_used = "template"
|
||||||
|
|
||||||
|
return {
|
||||||
|
"content": content,
|
||||||
|
"model_used": model_used,
|
||||||
|
"prompt": prompt[:500] + "..." if len(prompt) > 500 else prompt
|
||||||
|
}
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error("Resume optimization failed", error=str(e))
|
||||||
|
return {
|
||||||
|
"content": f"Optimized Resume for {role_title}\n\n{current_resume}",
|
||||||
|
"model_used": "template-fallback",
|
||||||
|
"prompt": "Template fallback due to AI service error"
|
||||||
|
}
|
||||||
|
|
||||||
|
def _generate_template_cover_letter(
|
||||||
|
self,
|
||||||
|
company_name: str,
|
||||||
|
role_title: str,
|
||||||
|
user_name: str,
|
||||||
|
job_description: str
|
||||||
|
) -> str:
|
||||||
|
"""
|
||||||
|
Generate a basic template cover letter when AI services are unavailable
|
||||||
|
"""
|
||||||
|
# Extract a few keywords from job description
|
||||||
|
keywords = []
|
||||||
|
common_skills = ["python", "javascript", "react", "sql", "aws", "docker", "git", "api", "database"]
|
||||||
|
for skill in common_skills:
|
||||||
|
if skill.lower() in job_description.lower():
|
||||||
|
keywords.append(skill.title())
|
||||||
|
|
||||||
|
skills_text = f" with expertise in {', '.join(keywords[:3])}" if keywords else ""
|
||||||
|
|
||||||
|
return f"""Dear Hiring Manager,
|
||||||
|
|
||||||
|
I am writing to express my strong interest in the {role_title} position at {company_name}. Based on the job description, I am excited about the opportunity to contribute to your team{skills_text}.
|
||||||
|
|
||||||
|
Your requirements align well with my background and experience. I am particularly drawn to this role because it represents an excellent opportunity to apply my skills in a dynamic environment while contributing to {company_name}'s continued success.
|
||||||
|
|
||||||
|
I would welcome the opportunity to discuss how my experience and enthusiasm can benefit your team. Thank you for considering my application, and I look forward to hearing from you.
|
||||||
|
|
||||||
|
Best regards,
|
||||||
|
{user_name}
|
||||||
|
|
||||||
|
---
|
||||||
|
[Generated by Job Forge AI Assistant - Configure API keys for enhanced personalization]"""
|
||||||
|
|
||||||
|
# Create a singleton instance
|
||||||
|
ai_service = AIService()
|
||||||
0
src/frontend/__init__.py
Normal file
0
src/frontend/__init__.py
Normal file
235
src/frontend/callbacks.py
Normal file
235
src/frontend/callbacks.py
Normal file
@@ -0,0 +1,235 @@
|
|||||||
|
from dash import Input, Output, State, callback, clientside_callback
|
||||||
|
import dash_mantine_components as dmc
|
||||||
|
import httpx
|
||||||
|
import structlog
|
||||||
|
|
||||||
|
from pages.home import create_home_page
|
||||||
|
from pages.auth import create_login_page
|
||||||
|
|
||||||
|
logger = structlog.get_logger()
|
||||||
|
|
||||||
|
def register_callbacks(app, config):
|
||||||
|
@app.callback(
|
||||||
|
Output("page-content", "children"),
|
||||||
|
Output("header-actions", "children"),
|
||||||
|
Input("url", "pathname"),
|
||||||
|
State("auth-store", "data")
|
||||||
|
)
|
||||||
|
def display_page(pathname, auth_data):
|
||||||
|
# Check if user is authenticated
|
||||||
|
is_authenticated = auth_data and auth_data.get("token")
|
||||||
|
|
||||||
|
if not is_authenticated:
|
||||||
|
# Show login page for unauthenticated users
|
||||||
|
if pathname == "/login" or pathname is None or pathname == "/":
|
||||||
|
return create_login_page(), []
|
||||||
|
else:
|
||||||
|
return create_login_page(), []
|
||||||
|
|
||||||
|
# Authenticated user navigation
|
||||||
|
header_actions = [
|
||||||
|
dmc.Button(
|
||||||
|
"Logout",
|
||||||
|
id="logout-btn",
|
||||||
|
variant="outline",
|
||||||
|
color="red",
|
||||||
|
leftIcon="tabler:logout"
|
||||||
|
)
|
||||||
|
]
|
||||||
|
|
||||||
|
# Route to different pages
|
||||||
|
if pathname == "/" or pathname is None:
|
||||||
|
return create_home_page(), header_actions
|
||||||
|
elif pathname == "/jobs":
|
||||||
|
return create_jobs_page(), header_actions
|
||||||
|
elif pathname == "/applications":
|
||||||
|
return create_applications_page(), header_actions
|
||||||
|
elif pathname == "/documents":
|
||||||
|
return create_documents_page(), header_actions
|
||||||
|
elif pathname == "/profile":
|
||||||
|
return create_profile_page(), header_actions
|
||||||
|
else:
|
||||||
|
return create_home_page(), header_actions
|
||||||
|
|
||||||
|
@app.callback(
|
||||||
|
Output("auth-store", "data"),
|
||||||
|
Output("auth-alerts", "children"),
|
||||||
|
Input("login-submit", "n_clicks"),
|
||||||
|
State("login-email", "value"),
|
||||||
|
State("login-password", "value"),
|
||||||
|
prevent_initial_call=True
|
||||||
|
)
|
||||||
|
def handle_login(n_clicks, email, password):
|
||||||
|
if not n_clicks or not email or not password:
|
||||||
|
return None, []
|
||||||
|
|
||||||
|
try:
|
||||||
|
response = httpx.post(
|
||||||
|
f"{config.auth_url}/login",
|
||||||
|
json={"email": email, "password": password},
|
||||||
|
timeout=10.0
|
||||||
|
)
|
||||||
|
|
||||||
|
if response.status_code == 200:
|
||||||
|
token_data = response.json()
|
||||||
|
auth_data = {
|
||||||
|
"token": token_data["access_token"],
|
||||||
|
"email": email
|
||||||
|
}
|
||||||
|
|
||||||
|
success_alert = dmc.Alert(
|
||||||
|
"Login successful! Redirecting...",
|
||||||
|
title="Success",
|
||||||
|
color="green",
|
||||||
|
duration=3000
|
||||||
|
)
|
||||||
|
|
||||||
|
return auth_data, success_alert
|
||||||
|
else:
|
||||||
|
error_alert = dmc.Alert(
|
||||||
|
"Invalid email or password",
|
||||||
|
title="Login Failed",
|
||||||
|
color="red"
|
||||||
|
)
|
||||||
|
return None, error_alert
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error("Login error", error=str(e))
|
||||||
|
error_alert = dmc.Alert(
|
||||||
|
"Connection error. Please try again.",
|
||||||
|
title="Error",
|
||||||
|
color="red"
|
||||||
|
)
|
||||||
|
return None, error_alert
|
||||||
|
|
||||||
|
@app.callback(
|
||||||
|
Output("auth-store", "data", allow_duplicate=True),
|
||||||
|
Output("auth-alerts", "children", allow_duplicate=True),
|
||||||
|
Input("register-submit", "n_clicks"),
|
||||||
|
State("register-email", "value"),
|
||||||
|
State("register-password", "value"),
|
||||||
|
State("register-password-confirm", "value"),
|
||||||
|
State("register-first-name", "value"),
|
||||||
|
State("register-last-name", "value"),
|
||||||
|
State("register-phone", "value"),
|
||||||
|
prevent_initial_call=True
|
||||||
|
)
|
||||||
|
def handle_register(n_clicks, email, password, password_confirm, first_name, last_name, phone):
|
||||||
|
if not n_clicks:
|
||||||
|
return None, []
|
||||||
|
|
||||||
|
# Validation
|
||||||
|
if not all([email, password, first_name, last_name]):
|
||||||
|
error_alert = dmc.Alert(
|
||||||
|
"All required fields must be filled",
|
||||||
|
title="Validation Error",
|
||||||
|
color="red"
|
||||||
|
)
|
||||||
|
return None, error_alert
|
||||||
|
|
||||||
|
if password != password_confirm:
|
||||||
|
error_alert = dmc.Alert(
|
||||||
|
"Passwords do not match",
|
||||||
|
title="Validation Error",
|
||||||
|
color="red"
|
||||||
|
)
|
||||||
|
return None, error_alert
|
||||||
|
|
||||||
|
try:
|
||||||
|
user_data = {
|
||||||
|
"email": email,
|
||||||
|
"password": password,
|
||||||
|
"first_name": first_name,
|
||||||
|
"last_name": last_name
|
||||||
|
}
|
||||||
|
if phone:
|
||||||
|
user_data["phone"] = phone
|
||||||
|
|
||||||
|
response = httpx.post(
|
||||||
|
f"{config.auth_url}/register",
|
||||||
|
json=user_data,
|
||||||
|
timeout=10.0
|
||||||
|
)
|
||||||
|
|
||||||
|
if response.status_code == 200:
|
||||||
|
# Auto-login after successful registration
|
||||||
|
login_response = httpx.post(
|
||||||
|
f"{config.auth_url}/login",
|
||||||
|
json={"email": email, "password": password},
|
||||||
|
timeout=10.0
|
||||||
|
)
|
||||||
|
|
||||||
|
if login_response.status_code == 200:
|
||||||
|
token_data = login_response.json()
|
||||||
|
auth_data = {
|
||||||
|
"token": token_data["access_token"],
|
||||||
|
"email": email
|
||||||
|
}
|
||||||
|
|
||||||
|
success_alert = dmc.Alert(
|
||||||
|
"Registration successful! Welcome to Job Forge!",
|
||||||
|
title="Success",
|
||||||
|
color="green",
|
||||||
|
duration=3000
|
||||||
|
)
|
||||||
|
|
||||||
|
return auth_data, success_alert
|
||||||
|
|
||||||
|
error_alert = dmc.Alert(
|
||||||
|
"Registration failed. Email may already be in use.",
|
||||||
|
title="Registration Failed",
|
||||||
|
color="red"
|
||||||
|
)
|
||||||
|
return None, error_alert
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error("Registration error", error=str(e))
|
||||||
|
error_alert = dmc.Alert(
|
||||||
|
"Connection error. Please try again.",
|
||||||
|
title="Error",
|
||||||
|
color="red"
|
||||||
|
)
|
||||||
|
return None, error_alert
|
||||||
|
|
||||||
|
@app.callback(
|
||||||
|
Output("auth-store", "clear_data"),
|
||||||
|
Input("logout-btn", "n_clicks"),
|
||||||
|
prevent_initial_call=True
|
||||||
|
)
|
||||||
|
def handle_logout(n_clicks):
|
||||||
|
if n_clicks:
|
||||||
|
return True
|
||||||
|
return False
|
||||||
|
|
||||||
|
# Placeholder functions for other pages
|
||||||
|
def create_jobs_page():
|
||||||
|
return dmc.Container(
|
||||||
|
children=[
|
||||||
|
dmc.Title("Job Search", mb="lg"),
|
||||||
|
dmc.Text("Job search functionality coming soon...")
|
||||||
|
]
|
||||||
|
)
|
||||||
|
|
||||||
|
def create_applications_page():
|
||||||
|
return dmc.Container(
|
||||||
|
children=[
|
||||||
|
dmc.Title("My Applications", mb="lg"),
|
||||||
|
dmc.Text("Application tracking functionality coming soon...")
|
||||||
|
]
|
||||||
|
)
|
||||||
|
|
||||||
|
def create_documents_page():
|
||||||
|
return dmc.Container(
|
||||||
|
children=[
|
||||||
|
dmc.Title("Documents", mb="lg"),
|
||||||
|
dmc.Text("Document management functionality coming soon...")
|
||||||
|
]
|
||||||
|
)
|
||||||
|
|
||||||
|
def create_profile_page():
|
||||||
|
return dmc.Container(
|
||||||
|
children=[
|
||||||
|
dmc.Title("Profile", mb="lg"),
|
||||||
|
dmc.Text("Profile management functionality coming soon...")
|
||||||
|
]
|
||||||
|
)
|
||||||
27
src/frontend/config.py
Normal file
27
src/frontend/config.py
Normal file
@@ -0,0 +1,27 @@
|
|||||||
|
import os
|
||||||
|
from typing import Optional
|
||||||
|
|
||||||
|
class Config:
|
||||||
|
def __init__(self):
|
||||||
|
self.BACKEND_URL = os.getenv("BACKEND_URL", "http://localhost:8000")
|
||||||
|
self.DEBUG = os.getenv("DEBUG", "false").lower() == "true"
|
||||||
|
|
||||||
|
@property
|
||||||
|
def api_base_url(self) -> str:
|
||||||
|
return f"{self.BACKEND_URL}/api"
|
||||||
|
|
||||||
|
@property
|
||||||
|
def auth_url(self) -> str:
|
||||||
|
return f"{self.api_base_url}/auth"
|
||||||
|
|
||||||
|
@property
|
||||||
|
def applications_url(self) -> str:
|
||||||
|
return f"{self.api_base_url}/applications"
|
||||||
|
|
||||||
|
@property
|
||||||
|
def jobs_url(self) -> str:
|
||||||
|
return f"{self.api_base_url}/jobs"
|
||||||
|
|
||||||
|
@property
|
||||||
|
def documents_url(self) -> str:
|
||||||
|
return f"{self.api_base_url}/documents"
|
||||||
0
src/frontend/layouts/__init__.py
Normal file
0
src/frontend/layouts/__init__.py
Normal file
121
src/frontend/layouts/layout.py
Normal file
121
src/frontend/layouts/layout.py
Normal file
@@ -0,0 +1,121 @@
|
|||||||
|
from dash import html, dcc
|
||||||
|
import dash_mantine_components as dmc
|
||||||
|
from dash_iconify import DashIconify
|
||||||
|
|
||||||
|
def create_layout():
|
||||||
|
return dmc.MantineProvider(
|
||||||
|
theme={
|
||||||
|
"fontFamily": "'Inter', sans-serif",
|
||||||
|
"primaryColor": "blue",
|
||||||
|
"components": {
|
||||||
|
"Button": {"styles": {"root": {"fontWeight": 400}}},
|
||||||
|
"Alert": {"styles": {"title": {"fontWeight": 500}}},
|
||||||
|
"AvatarGroup": {"styles": {"truncated": {"fontWeight": 500}}},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
children=[
|
||||||
|
dcc.Store(id="auth-store", storage_type="session"),
|
||||||
|
dcc.Store(id="user-store", storage_type="session"),
|
||||||
|
dcc.Location(id="url", refresh=False),
|
||||||
|
|
||||||
|
html.Div(
|
||||||
|
id="main-content",
|
||||||
|
children=[
|
||||||
|
create_header(),
|
||||||
|
html.Div(id="page-content")
|
||||||
|
]
|
||||||
|
)
|
||||||
|
]
|
||||||
|
)
|
||||||
|
|
||||||
|
def create_header():
|
||||||
|
return dmc.Header(
|
||||||
|
height=70,
|
||||||
|
fixed=True,
|
||||||
|
children=[
|
||||||
|
dmc.Container(
|
||||||
|
size="xl",
|
||||||
|
children=[
|
||||||
|
dmc.Group(
|
||||||
|
position="apart",
|
||||||
|
align="center",
|
||||||
|
style={"height": 70},
|
||||||
|
children=[
|
||||||
|
dmc.Group(
|
||||||
|
align="center",
|
||||||
|
spacing="xs",
|
||||||
|
children=[
|
||||||
|
DashIconify(
|
||||||
|
icon="tabler:briefcase",
|
||||||
|
width=32,
|
||||||
|
color="#228BE6"
|
||||||
|
),
|
||||||
|
dmc.Text(
|
||||||
|
"Job Forge",
|
||||||
|
size="xl",
|
||||||
|
weight=700,
|
||||||
|
color="blue"
|
||||||
|
)
|
||||||
|
]
|
||||||
|
),
|
||||||
|
|
||||||
|
dmc.Group(
|
||||||
|
id="header-actions",
|
||||||
|
spacing="md",
|
||||||
|
children=[
|
||||||
|
dmc.Button(
|
||||||
|
"Login",
|
||||||
|
id="login-btn",
|
||||||
|
variant="outline",
|
||||||
|
leftIcon=DashIconify(icon="tabler:login")
|
||||||
|
)
|
||||||
|
]
|
||||||
|
)
|
||||||
|
]
|
||||||
|
)
|
||||||
|
]
|
||||||
|
)
|
||||||
|
]
|
||||||
|
)
|
||||||
|
|
||||||
|
def create_navigation():
|
||||||
|
return dmc.Navbar(
|
||||||
|
width={"base": 300},
|
||||||
|
children=[
|
||||||
|
dmc.ScrollArea(
|
||||||
|
style={"height": "calc(100vh - 70px)"},
|
||||||
|
children=[
|
||||||
|
dmc.NavLink(
|
||||||
|
label="Dashboard",
|
||||||
|
icon=DashIconify(icon="tabler:dashboard"),
|
||||||
|
href="/",
|
||||||
|
id="nav-dashboard"
|
||||||
|
),
|
||||||
|
dmc.NavLink(
|
||||||
|
label="Job Search",
|
||||||
|
icon=DashIconify(icon="tabler:search"),
|
||||||
|
href="/jobs",
|
||||||
|
id="nav-jobs"
|
||||||
|
),
|
||||||
|
dmc.NavLink(
|
||||||
|
label="Applications",
|
||||||
|
icon=DashIconify(icon="tabler:briefcase"),
|
||||||
|
href="/applications",
|
||||||
|
id="nav-applications"
|
||||||
|
),
|
||||||
|
dmc.NavLink(
|
||||||
|
label="Documents",
|
||||||
|
icon=DashIconify(icon="tabler:file-text"),
|
||||||
|
href="/documents",
|
||||||
|
id="nav-documents"
|
||||||
|
),
|
||||||
|
dmc.NavLink(
|
||||||
|
label="Profile",
|
||||||
|
icon=DashIconify(icon="tabler:user"),
|
||||||
|
href="/profile",
|
||||||
|
id="nav-profile"
|
||||||
|
)
|
||||||
|
]
|
||||||
|
)
|
||||||
|
]
|
||||||
|
)
|
||||||
36
src/frontend/main.py
Normal file
36
src/frontend/main.py
Normal file
@@ -0,0 +1,36 @@
|
|||||||
|
import dash
|
||||||
|
from dash import html, dcc
|
||||||
|
import dash_mantine_components as dmc
|
||||||
|
from dash_iconify import DashIconify
|
||||||
|
import os
|
||||||
|
|
||||||
|
from config import Config
|
||||||
|
from layouts.layout import create_layout
|
||||||
|
from callbacks import register_callbacks
|
||||||
|
|
||||||
|
# Initialize config
|
||||||
|
config = Config()
|
||||||
|
|
||||||
|
# Initialize Dash app
|
||||||
|
app = dash.Dash(
|
||||||
|
__name__,
|
||||||
|
external_stylesheets=[
|
||||||
|
"https://fonts.googleapis.com/css2?family=Inter:wght@100;200;300;400;500;600;700;800;900&display=swap"
|
||||||
|
],
|
||||||
|
suppress_callback_exceptions=True,
|
||||||
|
title="Job Forge - AI-Powered Job Application Assistant"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Set up the layout
|
||||||
|
app.layout = create_layout()
|
||||||
|
|
||||||
|
# Register callbacks
|
||||||
|
register_callbacks(app, config)
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
app.run_server(
|
||||||
|
host="0.0.0.0",
|
||||||
|
port=8501,
|
||||||
|
debug=config.DEBUG,
|
||||||
|
dev_tools_hot_reload=config.DEBUG
|
||||||
|
)
|
||||||
0
src/frontend/pages/__init__.py
Normal file
0
src/frontend/pages/__init__.py
Normal file
164
src/frontend/pages/auth.py
Normal file
164
src/frontend/pages/auth.py
Normal file
@@ -0,0 +1,164 @@
|
|||||||
|
from dash import html, dcc
|
||||||
|
import dash_mantine_components as dmc
|
||||||
|
from dash_iconify import DashIconify
|
||||||
|
|
||||||
|
def create_login_page():
|
||||||
|
return dmc.Container(
|
||||||
|
size="xs",
|
||||||
|
style={"marginTop": "10vh"},
|
||||||
|
children=[
|
||||||
|
dmc.Paper(
|
||||||
|
shadow="lg",
|
||||||
|
radius="md",
|
||||||
|
p="xl",
|
||||||
|
children=[
|
||||||
|
dmc.Group(
|
||||||
|
position="center",
|
||||||
|
mb="xl",
|
||||||
|
children=[
|
||||||
|
DashIconify(
|
||||||
|
icon="tabler:briefcase",
|
||||||
|
width=40,
|
||||||
|
color="#228BE6"
|
||||||
|
),
|
||||||
|
dmc.Title("Job Forge", order=2, color="blue")
|
||||||
|
]
|
||||||
|
),
|
||||||
|
|
||||||
|
dmc.Tabs(
|
||||||
|
id="auth-tabs",
|
||||||
|
value="login",
|
||||||
|
children=[
|
||||||
|
dmc.TabsList(
|
||||||
|
grow=True,
|
||||||
|
children=[
|
||||||
|
dmc.Tab("Login", value="login"),
|
||||||
|
dmc.Tab("Register", value="register")
|
||||||
|
]
|
||||||
|
),
|
||||||
|
|
||||||
|
dmc.TabsPanel(
|
||||||
|
value="login",
|
||||||
|
children=[
|
||||||
|
html.Form(
|
||||||
|
id="login-form",
|
||||||
|
children=[
|
||||||
|
dmc.TextInput(
|
||||||
|
id="login-email",
|
||||||
|
label="Email",
|
||||||
|
placeholder="your.email@example.com",
|
||||||
|
icon=DashIconify(icon="tabler:mail"),
|
||||||
|
required=True,
|
||||||
|
mb="md"
|
||||||
|
),
|
||||||
|
dmc.PasswordInput(
|
||||||
|
id="login-password",
|
||||||
|
label="Password",
|
||||||
|
placeholder="Your password",
|
||||||
|
icon=DashIconify(icon="tabler:lock"),
|
||||||
|
required=True,
|
||||||
|
mb="xl"
|
||||||
|
),
|
||||||
|
dmc.Button(
|
||||||
|
"Login",
|
||||||
|
id="login-submit",
|
||||||
|
fullWidth=True,
|
||||||
|
leftIcon=DashIconify(icon="tabler:login")
|
||||||
|
)
|
||||||
|
]
|
||||||
|
)
|
||||||
|
]
|
||||||
|
),
|
||||||
|
|
||||||
|
dmc.TabsPanel(
|
||||||
|
value="register",
|
||||||
|
children=[
|
||||||
|
html.Form(
|
||||||
|
id="register-form",
|
||||||
|
children=[
|
||||||
|
dmc.Group(
|
||||||
|
grow=True,
|
||||||
|
children=[
|
||||||
|
dmc.TextInput(
|
||||||
|
id="register-first-name",
|
||||||
|
label="First Name",
|
||||||
|
placeholder="John",
|
||||||
|
required=True,
|
||||||
|
style={"flex": 1}
|
||||||
|
),
|
||||||
|
dmc.TextInput(
|
||||||
|
id="register-last-name",
|
||||||
|
label="Last Name",
|
||||||
|
placeholder="Doe",
|
||||||
|
required=True,
|
||||||
|
style={"flex": 1}
|
||||||
|
)
|
||||||
|
]
|
||||||
|
),
|
||||||
|
dmc.TextInput(
|
||||||
|
id="register-email",
|
||||||
|
label="Email",
|
||||||
|
placeholder="your.email@example.com",
|
||||||
|
icon=DashIconify(icon="tabler:mail"),
|
||||||
|
required=True,
|
||||||
|
mt="md"
|
||||||
|
),
|
||||||
|
dmc.TextInput(
|
||||||
|
id="register-phone",
|
||||||
|
label="Phone (Optional)",
|
||||||
|
placeholder="+1 (555) 123-4567",
|
||||||
|
icon=DashIconify(icon="tabler:phone"),
|
||||||
|
mt="md"
|
||||||
|
),
|
||||||
|
dmc.PasswordInput(
|
||||||
|
id="register-password",
|
||||||
|
label="Password",
|
||||||
|
placeholder="Your password",
|
||||||
|
icon=DashIconify(icon="tabler:lock"),
|
||||||
|
required=True,
|
||||||
|
mt="md"
|
||||||
|
),
|
||||||
|
dmc.PasswordInput(
|
||||||
|
id="register-password-confirm",
|
||||||
|
label="Confirm Password",
|
||||||
|
placeholder="Confirm your password",
|
||||||
|
icon=DashIconify(icon="tabler:lock"),
|
||||||
|
required=True,
|
||||||
|
mt="md",
|
||||||
|
mb="xl"
|
||||||
|
),
|
||||||
|
dmc.Button(
|
||||||
|
"Register",
|
||||||
|
id="register-submit",
|
||||||
|
fullWidth=True,
|
||||||
|
leftIcon=DashIconify(icon="tabler:user-plus")
|
||||||
|
)
|
||||||
|
]
|
||||||
|
)
|
||||||
|
]
|
||||||
|
)
|
||||||
|
]
|
||||||
|
),
|
||||||
|
|
||||||
|
html.Div(id="auth-alerts", style={"marginTop": "1rem"})
|
||||||
|
]
|
||||||
|
)
|
||||||
|
]
|
||||||
|
)
|
||||||
|
|
||||||
|
def create_logout_confirmation():
|
||||||
|
return dmc.Modal(
|
||||||
|
title="Confirm Logout",
|
||||||
|
id="logout-modal",
|
||||||
|
children=[
|
||||||
|
dmc.Text("Are you sure you want to logout?"),
|
||||||
|
dmc.Group(
|
||||||
|
position="right",
|
||||||
|
mt="md",
|
||||||
|
children=[
|
||||||
|
dmc.Button("Cancel", id="logout-cancel", variant="outline"),
|
||||||
|
dmc.Button("Logout", id="logout-confirm", color="red")
|
||||||
|
]
|
||||||
|
)
|
||||||
|
]
|
||||||
|
)
|
||||||
185
src/frontend/pages/home.py
Normal file
185
src/frontend/pages/home.py
Normal file
@@ -0,0 +1,185 @@
|
|||||||
|
from dash import html
|
||||||
|
import dash_mantine_components as dmc
|
||||||
|
from dash_iconify import DashIconify
|
||||||
|
|
||||||
|
def create_home_page():
|
||||||
|
return dmc.Container(
|
||||||
|
size="xl",
|
||||||
|
pt="md",
|
||||||
|
children=[
|
||||||
|
dmc.Title("Welcome to Job Forge", order=1, mb="lg"),
|
||||||
|
|
||||||
|
dmc.Grid(
|
||||||
|
children=[
|
||||||
|
dmc.Col(
|
||||||
|
dmc.Card(
|
||||||
|
children=[
|
||||||
|
dmc.Group(
|
||||||
|
children=[
|
||||||
|
DashIconify(
|
||||||
|
icon="tabler:search",
|
||||||
|
width=40,
|
||||||
|
color="#228BE6"
|
||||||
|
),
|
||||||
|
dmc.Stack(
|
||||||
|
spacing=5,
|
||||||
|
children=[
|
||||||
|
dmc.Text("Find Jobs", weight=600, size="lg"),
|
||||||
|
dmc.Text(
|
||||||
|
"Search and discover job opportunities",
|
||||||
|
size="sm",
|
||||||
|
color="dimmed"
|
||||||
|
)
|
||||||
|
]
|
||||||
|
)
|
||||||
|
]
|
||||||
|
),
|
||||||
|
dmc.Button(
|
||||||
|
"Search Jobs",
|
||||||
|
fullWidth=True,
|
||||||
|
mt="md",
|
||||||
|
id="home-search-jobs-btn"
|
||||||
|
)
|
||||||
|
],
|
||||||
|
withBorder=True,
|
||||||
|
shadow="sm",
|
||||||
|
radius="md",
|
||||||
|
p="lg"
|
||||||
|
),
|
||||||
|
span=6
|
||||||
|
),
|
||||||
|
|
||||||
|
dmc.Col(
|
||||||
|
dmc.Card(
|
||||||
|
children=[
|
||||||
|
dmc.Group(
|
||||||
|
children=[
|
||||||
|
DashIconify(
|
||||||
|
icon="tabler:briefcase",
|
||||||
|
width=40,
|
||||||
|
color="#40C057"
|
||||||
|
),
|
||||||
|
dmc.Stack(
|
||||||
|
spacing=5,
|
||||||
|
children=[
|
||||||
|
dmc.Text("Track Applications", weight=600, size="lg"),
|
||||||
|
dmc.Text(
|
||||||
|
"Manage your job applications",
|
||||||
|
size="sm",
|
||||||
|
color="dimmed"
|
||||||
|
)
|
||||||
|
]
|
||||||
|
)
|
||||||
|
]
|
||||||
|
),
|
||||||
|
dmc.Button(
|
||||||
|
"View Applications",
|
||||||
|
fullWidth=True,
|
||||||
|
mt="md",
|
||||||
|
color="green",
|
||||||
|
id="home-applications-btn"
|
||||||
|
)
|
||||||
|
],
|
||||||
|
withBorder=True,
|
||||||
|
shadow="sm",
|
||||||
|
radius="md",
|
||||||
|
p="lg"
|
||||||
|
),
|
||||||
|
span=6
|
||||||
|
),
|
||||||
|
|
||||||
|
dmc.Col(
|
||||||
|
dmc.Card(
|
||||||
|
children=[
|
||||||
|
dmc.Group(
|
||||||
|
children=[
|
||||||
|
DashIconify(
|
||||||
|
icon="tabler:file-text",
|
||||||
|
width=40,
|
||||||
|
color="#FD7E14"
|
||||||
|
),
|
||||||
|
dmc.Stack(
|
||||||
|
spacing=5,
|
||||||
|
children=[
|
||||||
|
dmc.Text("AI Documents", weight=600, size="lg"),
|
||||||
|
dmc.Text(
|
||||||
|
"Generate resumes and cover letters",
|
||||||
|
size="sm",
|
||||||
|
color="dimmed"
|
||||||
|
)
|
||||||
|
]
|
||||||
|
)
|
||||||
|
]
|
||||||
|
),
|
||||||
|
dmc.Button(
|
||||||
|
"Create Documents",
|
||||||
|
fullWidth=True,
|
||||||
|
mt="md",
|
||||||
|
color="orange",
|
||||||
|
id="home-documents-btn"
|
||||||
|
)
|
||||||
|
],
|
||||||
|
withBorder=True,
|
||||||
|
shadow="sm",
|
||||||
|
radius="md",
|
||||||
|
p="lg"
|
||||||
|
),
|
||||||
|
span=6
|
||||||
|
),
|
||||||
|
|
||||||
|
dmc.Col(
|
||||||
|
dmc.Card(
|
||||||
|
children=[
|
||||||
|
dmc.Group(
|
||||||
|
children=[
|
||||||
|
DashIconify(
|
||||||
|
icon="tabler:user",
|
||||||
|
width=40,
|
||||||
|
color="#BE4BDB"
|
||||||
|
),
|
||||||
|
dmc.Stack(
|
||||||
|
spacing=5,
|
||||||
|
children=[
|
||||||
|
dmc.Text("Profile", weight=600, size="lg"),
|
||||||
|
dmc.Text(
|
||||||
|
"Manage your profile and settings",
|
||||||
|
size="sm",
|
||||||
|
color="dimmed"
|
||||||
|
)
|
||||||
|
]
|
||||||
|
)
|
||||||
|
]
|
||||||
|
),
|
||||||
|
dmc.Button(
|
||||||
|
"Edit Profile",
|
||||||
|
fullWidth=True,
|
||||||
|
mt="md",
|
||||||
|
color="violet",
|
||||||
|
id="home-profile-btn"
|
||||||
|
)
|
||||||
|
],
|
||||||
|
withBorder=True,
|
||||||
|
shadow="sm",
|
||||||
|
radius="md",
|
||||||
|
p="lg"
|
||||||
|
),
|
||||||
|
span=6
|
||||||
|
)
|
||||||
|
],
|
||||||
|
gutter="md"
|
||||||
|
),
|
||||||
|
|
||||||
|
dmc.Divider(my="xl"),
|
||||||
|
|
||||||
|
dmc.Title("Recent Activity", order=2, mb="md"),
|
||||||
|
dmc.Card(
|
||||||
|
children=[
|
||||||
|
dmc.Text("No recent activity yet. Start by searching for jobs or uploading your resume!")
|
||||||
|
],
|
||||||
|
withBorder=True,
|
||||||
|
shadow="sm",
|
||||||
|
radius="md",
|
||||||
|
p="lg"
|
||||||
|
)
|
||||||
|
]
|
||||||
|
)
|
||||||
1
tests/__init__.py
Normal file
1
tests/__init__.py
Normal file
@@ -0,0 +1 @@
|
|||||||
|
# Tests package for Job Forge
|
||||||
188
tests/conftest.py
Normal file
188
tests/conftest.py
Normal file
@@ -0,0 +1,188 @@
|
|||||||
|
"""
|
||||||
|
Updated test configuration for Job Forge that matches the actual project structure
|
||||||
|
"""
|
||||||
|
import pytest
|
||||||
|
import asyncio
|
||||||
|
from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine
|
||||||
|
from sqlalchemy.orm import sessionmaker
|
||||||
|
from fastapi.testclient import TestClient
|
||||||
|
from httpx import AsyncClient
|
||||||
|
import os
|
||||||
|
from typing import AsyncGenerator
|
||||||
|
from unittest.mock import AsyncMock, Mock
|
||||||
|
import uuid
|
||||||
|
|
||||||
|
# Fix import paths to match actual structure
|
||||||
|
from src.backend.main import app
|
||||||
|
from src.backend.core.database import get_db, Base
|
||||||
|
from src.backend.models.user import User
|
||||||
|
from src.backend.api.auth import create_access_token, hash_password
|
||||||
|
|
||||||
|
# Test database URL (use a separate test database)
|
||||||
|
TEST_DATABASE_URL = os.getenv(
|
||||||
|
"TEST_DATABASE_URL",
|
||||||
|
"postgresql+asyncpg://jobforge_user:jobforge_password@localhost:5432/jobforge_test"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Test engine
|
||||||
|
test_engine = create_async_engine(TEST_DATABASE_URL, echo=False)
|
||||||
|
TestSessionLocal = sessionmaker(test_engine, class_=AsyncSession, expire_on_commit=False)
|
||||||
|
|
||||||
|
@pytest.fixture(scope="session")
|
||||||
|
def event_loop():
|
||||||
|
"""Create an instance of the default event loop for the test session."""
|
||||||
|
loop = asyncio.get_event_loop_policy().new_event_loop()
|
||||||
|
yield loop
|
||||||
|
loop.close()
|
||||||
|
|
||||||
|
@pytest.fixture(scope="session")
|
||||||
|
async def setup_test_db():
|
||||||
|
"""Set up test database tables."""
|
||||||
|
# Create all tables
|
||||||
|
async with test_engine.begin() as conn:
|
||||||
|
await conn.run_sync(Base.metadata.drop_all)
|
||||||
|
await conn.run_sync(Base.metadata.create_all)
|
||||||
|
|
||||||
|
yield
|
||||||
|
|
||||||
|
# Cleanup
|
||||||
|
async with test_engine.begin() as conn:
|
||||||
|
await conn.run_sync(Base.metadata.drop_all)
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
async def test_db(setup_test_db) -> AsyncGenerator[AsyncSession, None]:
|
||||||
|
"""Create a test database session."""
|
||||||
|
async with TestSessionLocal() as session:
|
||||||
|
try:
|
||||||
|
yield session
|
||||||
|
finally:
|
||||||
|
await session.rollback()
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
def override_get_db(test_db: AsyncSession):
|
||||||
|
"""Override the get_db dependency for testing."""
|
||||||
|
def _override_get_db():
|
||||||
|
return test_db
|
||||||
|
|
||||||
|
app.dependency_overrides[get_db] = _override_get_db
|
||||||
|
yield
|
||||||
|
app.dependency_overrides.clear()
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
def test_client(override_get_db):
|
||||||
|
"""Create a test client."""
|
||||||
|
with TestClient(app) as client:
|
||||||
|
yield client
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
async def async_client(override_get_db):
|
||||||
|
"""Create an async test client."""
|
||||||
|
async with AsyncClient(app=app, base_url="http://test") as client:
|
||||||
|
yield client
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
async def test_user(test_db: AsyncSession):
|
||||||
|
"""Create a test user."""
|
||||||
|
user = User(
|
||||||
|
email="test@jobforge.com",
|
||||||
|
password_hash=hash_password("testpassword123"),
|
||||||
|
full_name="Test User"
|
||||||
|
)
|
||||||
|
|
||||||
|
test_db.add(user)
|
||||||
|
await test_db.commit()
|
||||||
|
await test_db.refresh(user)
|
||||||
|
return user
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
def test_user_token(test_user):
|
||||||
|
"""Create a JWT token for test user."""
|
||||||
|
token_data = {"sub": str(test_user.id)}
|
||||||
|
return create_access_token(data=token_data)
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
def mock_ai_service():
|
||||||
|
"""Mock AI service for testing."""
|
||||||
|
mock = Mock()
|
||||||
|
mock.generate_cover_letter = AsyncMock(return_value={
|
||||||
|
"content": "Dear Hiring Manager,\n\nI am writing to express my interest...\n\nBest regards,\nTest User",
|
||||||
|
"model_used": "mock-ai",
|
||||||
|
"prompt": "Mock prompt for testing"
|
||||||
|
})
|
||||||
|
mock.generate_resume_optimization = AsyncMock(return_value={
|
||||||
|
"content": "Optimized Resume\n\nTest User\nSoftware Engineer\n\nExperience optimized for target role...",
|
||||||
|
"model_used": "mock-ai",
|
||||||
|
"prompt": "Mock resume optimization prompt"
|
||||||
|
})
|
||||||
|
return mock
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
async def multiple_test_users(test_db: AsyncSession):
|
||||||
|
"""Create multiple test users for testing."""
|
||||||
|
users = []
|
||||||
|
for i in range(3):
|
||||||
|
user = User(
|
||||||
|
email=f"user{i}@test.com",
|
||||||
|
password_hash=hash_password("password123"),
|
||||||
|
full_name=f"User {i} Test"
|
||||||
|
)
|
||||||
|
test_db.add(user)
|
||||||
|
users.append(user)
|
||||||
|
|
||||||
|
await test_db.commit()
|
||||||
|
for user in users:
|
||||||
|
await test_db.refresh(user)
|
||||||
|
return users
|
||||||
|
|
||||||
|
# Test data factories
|
||||||
|
class TestDataFactory:
|
||||||
|
"""Factory for creating test data."""
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def user_data(email: str = None, **kwargs):
|
||||||
|
"""Create user test data."""
|
||||||
|
return {
|
||||||
|
"email": email or "test@example.com",
|
||||||
|
"password": "testpassword123",
|
||||||
|
"first_name": "Test",
|
||||||
|
"last_name": "User",
|
||||||
|
**kwargs
|
||||||
|
}
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def cover_letter_request(**kwargs):
|
||||||
|
"""Create cover letter request data."""
|
||||||
|
return {
|
||||||
|
"job_description": "We are looking for a Software Engineer with Python experience",
|
||||||
|
"company_name": "Test Company",
|
||||||
|
"role_title": "Software Engineer",
|
||||||
|
**kwargs
|
||||||
|
}
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def resume_optimization_request(**kwargs):
|
||||||
|
"""Create resume optimization request data."""
|
||||||
|
return {
|
||||||
|
"current_resume": "John Doe\nSoftware Engineer\n\nExperience:\n- Python development\n- Web applications",
|
||||||
|
"job_description": "Senior Python Developer role requiring FastAPI experience",
|
||||||
|
"role_title": "Senior Python Developer",
|
||||||
|
**kwargs
|
||||||
|
}
|
||||||
|
|
||||||
|
# Helper functions
|
||||||
|
async def create_test_user_and_token(db: AsyncSession, email: str):
|
||||||
|
"""Helper to create a user and return auth token."""
|
||||||
|
user = User(
|
||||||
|
email=email,
|
||||||
|
password_hash=hash_password("password123"),
|
||||||
|
full_name="Test User"
|
||||||
|
)
|
||||||
|
|
||||||
|
db.add(user)
|
||||||
|
await db.commit()
|
||||||
|
await db.refresh(user)
|
||||||
|
|
||||||
|
token_data = {"sub": str(user.id)}
|
||||||
|
token = create_access_token(data=token_data)
|
||||||
|
|
||||||
|
return user, token
|
||||||
325
tests/conftest_old.py
Normal file
325
tests/conftest_old.py
Normal file
@@ -0,0 +1,325 @@
|
|||||||
|
# Test configuration for Job Forge
|
||||||
|
import pytest
|
||||||
|
import asyncio
|
||||||
|
import asyncpg
|
||||||
|
from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine
|
||||||
|
from sqlalchemy.orm import sessionmaker
|
||||||
|
from fastapi.testclient import TestClient
|
||||||
|
from httpx import AsyncClient
|
||||||
|
import os
|
||||||
|
from typing import AsyncGenerator
|
||||||
|
from unittest.mock import AsyncMock
|
||||||
|
|
||||||
|
from app.main import app
|
||||||
|
from app.core.database import get_db, Base
|
||||||
|
from app.core.security import create_access_token
|
||||||
|
from app.models.user import User
|
||||||
|
from app.models.application import Application
|
||||||
|
|
||||||
|
|
||||||
|
# Test database URL
|
||||||
|
TEST_DATABASE_URL = os.getenv(
|
||||||
|
"TEST_DATABASE_URL",
|
||||||
|
"postgresql+asyncpg://jobforge:jobforge123@localhost:5432/jobforge_test"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Test engine and session factory
|
||||||
|
test_engine = create_async_engine(TEST_DATABASE_URL, echo=False)
|
||||||
|
TestSessionLocal = sessionmaker(
|
||||||
|
test_engine, class_=AsyncSession, expire_on_commit=False
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture(scope="session")
|
||||||
|
def event_loop():
|
||||||
|
"""Create an instance of the default event loop for the test session."""
|
||||||
|
loop = asyncio.get_event_loop_policy().new_event_loop()
|
||||||
|
yield loop
|
||||||
|
loop.close()
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture(scope="session")
|
||||||
|
async def setup_test_db():
|
||||||
|
"""Set up test database tables."""
|
||||||
|
|
||||||
|
# Create all tables
|
||||||
|
async with test_engine.begin() as conn:
|
||||||
|
await conn.run_sync(Base.metadata.drop_all)
|
||||||
|
await conn.run_sync(Base.metadata.create_all)
|
||||||
|
|
||||||
|
# Enable RLS and create policies
|
||||||
|
await conn.execute("""
|
||||||
|
ALTER TABLE applications ENABLE ROW LEVEL SECURITY;
|
||||||
|
|
||||||
|
DROP POLICY IF EXISTS applications_user_isolation ON applications;
|
||||||
|
CREATE POLICY applications_user_isolation ON applications
|
||||||
|
FOR ALL TO authenticated
|
||||||
|
USING (user_id = current_setting('app.current_user_id')::UUID);
|
||||||
|
|
||||||
|
-- Create vector extension if needed
|
||||||
|
CREATE EXTENSION IF NOT EXISTS vector;
|
||||||
|
""")
|
||||||
|
|
||||||
|
yield
|
||||||
|
|
||||||
|
# Cleanup
|
||||||
|
async with test_engine.begin() as conn:
|
||||||
|
await conn.run_sync(Base.metadata.drop_all)
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
async def test_db(setup_test_db) -> AsyncGenerator[AsyncSession, None]:
|
||||||
|
"""Create a test database session."""
|
||||||
|
|
||||||
|
async with TestSessionLocal() as session:
|
||||||
|
try:
|
||||||
|
yield session
|
||||||
|
finally:
|
||||||
|
await session.rollback()
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
def override_get_db(test_db: AsyncSession):
|
||||||
|
"""Override the get_db dependency for testing."""
|
||||||
|
|
||||||
|
def _override_get_db():
|
||||||
|
return test_db
|
||||||
|
|
||||||
|
app.dependency_overrides[get_db] = _override_get_db
|
||||||
|
yield
|
||||||
|
app.dependency_overrides.clear()
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
def test_client(override_get_db):
|
||||||
|
"""Create a test client."""
|
||||||
|
|
||||||
|
with TestClient(app) as client:
|
||||||
|
yield client
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
async def async_client(override_get_db):
|
||||||
|
"""Create an async test client."""
|
||||||
|
|
||||||
|
async with AsyncClient(app=app, base_url="http://test") as client:
|
||||||
|
yield client
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
async def test_user(test_db: AsyncSession):
|
||||||
|
"""Create a test user."""
|
||||||
|
|
||||||
|
from app.crud.user import create_user
|
||||||
|
from app.schemas.user import UserCreate
|
||||||
|
|
||||||
|
user_data = UserCreate(
|
||||||
|
email="test@jobforge.com",
|
||||||
|
password="testpassword123",
|
||||||
|
first_name="Test",
|
||||||
|
last_name="User"
|
||||||
|
)
|
||||||
|
|
||||||
|
user = await create_user(test_db, user_data)
|
||||||
|
await test_db.commit()
|
||||||
|
return user
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
def test_user_token(test_user):
|
||||||
|
"""Create a JWT token for test user."""
|
||||||
|
|
||||||
|
token_data = {"sub": str(test_user.id), "email": test_user.email}
|
||||||
|
return create_access_token(data=token_data)
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
async def test_application(test_db: AsyncSession, test_user):
|
||||||
|
"""Create a test job application."""
|
||||||
|
|
||||||
|
from app.crud.application import create_application
|
||||||
|
from app.schemas.application import ApplicationCreate
|
||||||
|
|
||||||
|
app_data = ApplicationCreate(
|
||||||
|
company_name="Test Corp",
|
||||||
|
role_title="Software Developer",
|
||||||
|
job_description="Python developer position with FastAPI experience",
|
||||||
|
status="draft"
|
||||||
|
)
|
||||||
|
|
||||||
|
application = await create_application(test_db, app_data, test_user.id)
|
||||||
|
await test_db.commit()
|
||||||
|
return application
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
def mock_claude_service():
|
||||||
|
"""Mock Claude AI service."""
|
||||||
|
|
||||||
|
mock = AsyncMock()
|
||||||
|
mock.generate_cover_letter.return_value = """
|
||||||
|
Dear Hiring Manager,
|
||||||
|
|
||||||
|
I am writing to express my strong interest in the Software Developer position at Test Corp.
|
||||||
|
With my experience in Python development and FastAPI expertise, I am confident I would be
|
||||||
|
a valuable addition to your team.
|
||||||
|
|
||||||
|
Thank you for your consideration.
|
||||||
|
|
||||||
|
Sincerely,
|
||||||
|
Test User
|
||||||
|
"""
|
||||||
|
|
||||||
|
return mock
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
def mock_openai_service():
|
||||||
|
"""Mock OpenAI service."""
|
||||||
|
|
||||||
|
mock = AsyncMock()
|
||||||
|
mock.create_embedding.return_value = [0.1] * 1536 # Mock embedding vector
|
||||||
|
mock.test_connection.return_value = True
|
||||||
|
|
||||||
|
return mock
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
async def multiple_test_users(test_db: AsyncSession):
|
||||||
|
"""Create multiple test users for isolation testing."""
|
||||||
|
|
||||||
|
from app.crud.user import create_user
|
||||||
|
from app.schemas.user import UserCreate
|
||||||
|
|
||||||
|
users = []
|
||||||
|
for i in range(3):
|
||||||
|
user_data = UserCreate(
|
||||||
|
email=f"user{i}@test.com",
|
||||||
|
password="password123",
|
||||||
|
first_name=f"User{i}",
|
||||||
|
last_name="Test"
|
||||||
|
)
|
||||||
|
user = await create_user(test_db, user_data)
|
||||||
|
users.append(user)
|
||||||
|
|
||||||
|
await test_db.commit()
|
||||||
|
return users
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
async def applications_for_users(test_db: AsyncSession, multiple_test_users):
|
||||||
|
"""Create applications for multiple users to test isolation."""
|
||||||
|
|
||||||
|
from app.crud.application import create_application
|
||||||
|
from app.schemas.application import ApplicationCreate
|
||||||
|
|
||||||
|
all_applications = []
|
||||||
|
|
||||||
|
for i, user in enumerate(multiple_test_users):
|
||||||
|
for j in range(2): # 2 applications per user
|
||||||
|
app_data = ApplicationCreate(
|
||||||
|
company_name=f"Company{i}-{j}",
|
||||||
|
role_title=f"Role{i}-{j}",
|
||||||
|
job_description=f"Job description for user {i}, application {j}",
|
||||||
|
status="draft"
|
||||||
|
)
|
||||||
|
application = await create_application(test_db, app_data, user.id)
|
||||||
|
all_applications.append(application)
|
||||||
|
|
||||||
|
await test_db.commit()
|
||||||
|
return all_applications
|
||||||
|
|
||||||
|
|
||||||
|
# Test data factories
|
||||||
|
class TestDataFactory:
|
||||||
|
"""Factory for creating test data."""
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def user_data(email: str = None, **kwargs):
|
||||||
|
"""Create user test data."""
|
||||||
|
return {
|
||||||
|
"email": email or "test@example.com",
|
||||||
|
"password": "testpassword123",
|
||||||
|
"first_name": "Test",
|
||||||
|
"last_name": "User",
|
||||||
|
**kwargs
|
||||||
|
}
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def application_data(company_name: str = None, **kwargs):
|
||||||
|
"""Create application test data."""
|
||||||
|
return {
|
||||||
|
"company_name": company_name or "Test Company",
|
||||||
|
"role_title": "Software Developer",
|
||||||
|
"job_description": "Python developer position",
|
||||||
|
"status": "draft",
|
||||||
|
**kwargs
|
||||||
|
}
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def ai_response():
|
||||||
|
"""Create mock AI response."""
|
||||||
|
return """
|
||||||
|
Dear Hiring Manager,
|
||||||
|
|
||||||
|
I am excited to apply for this position. My background in software development
|
||||||
|
and passion for technology make me an ideal candidate.
|
||||||
|
|
||||||
|
Best regards,
|
||||||
|
Test User
|
||||||
|
"""
|
||||||
|
|
||||||
|
|
||||||
|
# Database utilities for testing
|
||||||
|
async def create_test_user_and_token(db: AsyncSession, email: str):
|
||||||
|
"""Helper to create a user and return auth token."""
|
||||||
|
|
||||||
|
from app.crud.user import create_user
|
||||||
|
from app.schemas.user import UserCreate
|
||||||
|
|
||||||
|
user_data = UserCreate(
|
||||||
|
email=email,
|
||||||
|
password="password123",
|
||||||
|
first_name="Test",
|
||||||
|
last_name="User"
|
||||||
|
)
|
||||||
|
|
||||||
|
user = await create_user(db, user_data)
|
||||||
|
await db.commit()
|
||||||
|
|
||||||
|
token_data = {"sub": str(user.id), "email": user.email}
|
||||||
|
token = create_access_token(data=token_data)
|
||||||
|
|
||||||
|
return user, token
|
||||||
|
|
||||||
|
|
||||||
|
async def set_rls_context(db: AsyncSession, user_id: str):
|
||||||
|
"""Set RLS context for testing multi-tenancy."""
|
||||||
|
|
||||||
|
await db.execute(f"SET app.current_user_id = '{user_id}'")
|
||||||
|
|
||||||
|
|
||||||
|
# Performance testing helpers
|
||||||
|
@pytest.fixture
|
||||||
|
def benchmark_db_operations():
|
||||||
|
"""Benchmark database operations."""
|
||||||
|
|
||||||
|
import time
|
||||||
|
|
||||||
|
class BenchmarkContext:
|
||||||
|
def __init__(self):
|
||||||
|
self.start_time = None
|
||||||
|
self.end_time = None
|
||||||
|
|
||||||
|
def __enter__(self):
|
||||||
|
self.start_time = time.time()
|
||||||
|
return self
|
||||||
|
|
||||||
|
def __exit__(self, *args):
|
||||||
|
self.end_time = time.time()
|
||||||
|
|
||||||
|
@property
|
||||||
|
def duration(self):
|
||||||
|
return self.end_time - self.start_time if self.end_time else None
|
||||||
|
|
||||||
|
return BenchmarkContext
|
||||||
1
tests/integration/__init__.py
Normal file
1
tests/integration/__init__.py
Normal file
@@ -0,0 +1 @@
|
|||||||
|
# Integration tests package
|
||||||
455
tests/integration/test_ai_api_integration.py
Normal file
455
tests/integration/test_ai_api_integration.py
Normal file
@@ -0,0 +1,455 @@
|
|||||||
|
"""
|
||||||
|
Integration tests for AI API endpoints
|
||||||
|
"""
|
||||||
|
import pytest
|
||||||
|
from fastapi.testclient import TestClient
|
||||||
|
from unittest.mock import patch, AsyncMock
|
||||||
|
|
||||||
|
from src.backend.main import app
|
||||||
|
|
||||||
|
|
||||||
|
class TestAIDocumentEndpoints:
|
||||||
|
"""Test AI document generation API endpoints."""
|
||||||
|
|
||||||
|
def test_generate_cover_letter_success(self, test_client, test_user_token):
|
||||||
|
"""Test successful cover letter generation."""
|
||||||
|
headers = {"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
request_data = {
|
||||||
|
"job_description": "We are looking for a Senior Python Developer with FastAPI experience and PostgreSQL knowledge. The ideal candidate will have 5+ years of experience.",
|
||||||
|
"company_name": "TechCorp Industries",
|
||||||
|
"role_title": "Senior Python Developer"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post(
|
||||||
|
"/api/ai/generate-cover-letter",
|
||||||
|
json=request_data,
|
||||||
|
headers=headers
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.json()
|
||||||
|
|
||||||
|
assert "content" in data
|
||||||
|
assert "model_used" in data
|
||||||
|
assert "generation_prompt" in data
|
||||||
|
|
||||||
|
# Verify content includes relevant information
|
||||||
|
assert "TechCorp Industries" in data["content"]
|
||||||
|
assert "Senior Python Developer" in data["content"]
|
||||||
|
assert len(data["content"]) > 100 # Should be substantial
|
||||||
|
|
||||||
|
# Should use template fallback without API keys
|
||||||
|
assert data["model_used"] == "template"
|
||||||
|
|
||||||
|
def test_generate_cover_letter_with_resume(self, test_client, test_user_token):
|
||||||
|
"""Test cover letter generation with user resume included."""
|
||||||
|
headers = {"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
request_data = {
|
||||||
|
"job_description": "Python developer role requiring Django experience",
|
||||||
|
"company_name": "Resume Corp",
|
||||||
|
"role_title": "Python Developer",
|
||||||
|
"user_resume": "John Doe\nSoftware Engineer\n\nExperience:\n- 5 years Python development\n- Django and Flask frameworks\n- PostgreSQL databases"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post(
|
||||||
|
"/api/ai/generate-cover-letter",
|
||||||
|
json=request_data,
|
||||||
|
headers=headers
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.json()
|
||||||
|
|
||||||
|
assert "Resume Corp" in data["content"]
|
||||||
|
# Prompt should reference the resume
|
||||||
|
assert "Resume/Background" in data["generation_prompt"]
|
||||||
|
|
||||||
|
def test_generate_cover_letter_unauthorized(self, test_client):
|
||||||
|
"""Test cover letter generation without authentication."""
|
||||||
|
request_data = {
|
||||||
|
"job_description": "Test job",
|
||||||
|
"company_name": "Test Corp",
|
||||||
|
"role_title": "Test Role"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post("/api/ai/generate-cover-letter", json=request_data)
|
||||||
|
|
||||||
|
assert response.status_code == 403 # HTTPBearer returns 403
|
||||||
|
|
||||||
|
def test_generate_cover_letter_invalid_token(self, test_client):
|
||||||
|
"""Test cover letter generation with invalid token."""
|
||||||
|
headers = {"Authorization": "Bearer invalid.token.here"}
|
||||||
|
request_data = {
|
||||||
|
"job_description": "Test job",
|
||||||
|
"company_name": "Test Corp",
|
||||||
|
"role_title": "Test Role"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post(
|
||||||
|
"/api/ai/generate-cover-letter",
|
||||||
|
json=request_data,
|
||||||
|
headers=headers
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 401
|
||||||
|
|
||||||
|
def test_generate_cover_letter_missing_fields(self, test_client, test_user_token):
|
||||||
|
"""Test cover letter generation with missing required fields."""
|
||||||
|
headers = {"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
request_data = {
|
||||||
|
"job_description": "Test job",
|
||||||
|
# Missing company_name and role_title
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post(
|
||||||
|
"/api/ai/generate-cover-letter",
|
||||||
|
json=request_data,
|
||||||
|
headers=headers
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 422 # Validation error
|
||||||
|
|
||||||
|
def test_optimize_resume_success(self, test_client, test_user_token):
|
||||||
|
"""Test successful resume optimization."""
|
||||||
|
headers = {"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
request_data = {
|
||||||
|
"current_resume": """
|
||||||
|
John Smith
|
||||||
|
Software Engineer
|
||||||
|
|
||||||
|
Experience:
|
||||||
|
- 3 years Python development
|
||||||
|
- Built REST APIs using Flask
|
||||||
|
- Database management with MySQL
|
||||||
|
- Team collaboration and code reviews
|
||||||
|
""",
|
||||||
|
"job_description": "Senior Python Developer role requiring FastAPI, PostgreSQL, and AI/ML experience. Must have 5+ years of experience.",
|
||||||
|
"role_title": "Senior Python Developer"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post(
|
||||||
|
"/api/ai/optimize-resume",
|
||||||
|
json=request_data,
|
||||||
|
headers=headers
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.json()
|
||||||
|
|
||||||
|
assert "content" in data
|
||||||
|
assert "model_used" in data
|
||||||
|
assert "generation_prompt" in data
|
||||||
|
|
||||||
|
# Should include original resume content
|
||||||
|
assert "John Smith" in data["content"]
|
||||||
|
assert "Senior Python Developer" in data["content"]
|
||||||
|
assert data["model_used"] == "template"
|
||||||
|
|
||||||
|
def test_optimize_resume_unauthorized(self, test_client):
|
||||||
|
"""Test resume optimization without authentication."""
|
||||||
|
request_data = {
|
||||||
|
"current_resume": "Test resume",
|
||||||
|
"job_description": "Test job",
|
||||||
|
"role_title": "Test role"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post("/api/ai/optimize-resume", json=request_data)
|
||||||
|
|
||||||
|
assert response.status_code == 403
|
||||||
|
|
||||||
|
def test_test_ai_connection_success(self, test_client, test_user_token):
|
||||||
|
"""Test AI connection test endpoint."""
|
||||||
|
headers = {"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
|
||||||
|
response = test_client.post("/api/ai/test-ai-connection", headers=headers)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.json()
|
||||||
|
|
||||||
|
assert "claude_available" in data
|
||||||
|
assert "openai_available" in data
|
||||||
|
assert "user" in data
|
||||||
|
assert "test_generation" in data
|
||||||
|
|
||||||
|
# Without API keys, should show unavailable but test should succeed
|
||||||
|
assert data["claude_available"] == False
|
||||||
|
assert data["openai_available"] == False
|
||||||
|
assert data["test_generation"] == "success"
|
||||||
|
assert data["model_used"] == "template"
|
||||||
|
assert "content_preview" in data
|
||||||
|
|
||||||
|
def test_test_ai_connection_unauthorized(self, test_client):
|
||||||
|
"""Test AI connection test without authentication."""
|
||||||
|
response = test_client.post("/api/ai/test-ai-connection")
|
||||||
|
|
||||||
|
assert response.status_code == 403
|
||||||
|
|
||||||
|
|
||||||
|
class TestAIAPIErrorHandling:
|
||||||
|
"""Test error handling in AI API endpoints."""
|
||||||
|
|
||||||
|
@patch('src.backend.services.ai_service.ai_service.generate_cover_letter')
|
||||||
|
def test_cover_letter_generation_service_error(self, mock_generate, test_client, test_user_token):
|
||||||
|
"""Test cover letter generation when AI service fails."""
|
||||||
|
# Mock the service to raise an exception
|
||||||
|
mock_generate.side_effect = Exception("AI service unavailable")
|
||||||
|
|
||||||
|
headers = {"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
request_data = {
|
||||||
|
"job_description": "Test job",
|
||||||
|
"company_name": "Error Corp",
|
||||||
|
"role_title": "Test Role"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post(
|
||||||
|
"/api/ai/generate-cover-letter",
|
||||||
|
json=request_data,
|
||||||
|
headers=headers
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 500
|
||||||
|
data = response.json()
|
||||||
|
assert "Failed to generate cover letter" in data["detail"]
|
||||||
|
|
||||||
|
@patch('src.backend.services.ai_service.ai_service.generate_resume_optimization')
|
||||||
|
def test_resume_optimization_service_error(self, mock_optimize, test_client, test_user_token):
|
||||||
|
"""Test resume optimization when AI service fails."""
|
||||||
|
mock_optimize.side_effect = Exception("Service error")
|
||||||
|
|
||||||
|
headers = {"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
request_data = {
|
||||||
|
"current_resume": "Test resume",
|
||||||
|
"job_description": "Test job",
|
||||||
|
"role_title": "Test role"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post(
|
||||||
|
"/api/ai/optimize-resume",
|
||||||
|
json=request_data,
|
||||||
|
headers=headers
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 500
|
||||||
|
data = response.json()
|
||||||
|
assert "Failed to optimize resume" in data["detail"]
|
||||||
|
|
||||||
|
def test_cover_letter_with_large_payload(self, test_client, test_user_token):
|
||||||
|
"""Test cover letter generation with very large job description."""
|
||||||
|
headers = {"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
|
||||||
|
# Create a very large job description
|
||||||
|
large_description = "A" * 50000 # 50KB of text
|
||||||
|
|
||||||
|
request_data = {
|
||||||
|
"job_description": large_description,
|
||||||
|
"company_name": "Large Corp",
|
||||||
|
"role_title": "Big Role"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post(
|
||||||
|
"/api/ai/generate-cover-letter",
|
||||||
|
json=request_data,
|
||||||
|
headers=headers
|
||||||
|
)
|
||||||
|
|
||||||
|
# Should handle large payloads gracefully
|
||||||
|
assert response.status_code in [200, 413, 422] # Success or payload too large
|
||||||
|
|
||||||
|
def test_resume_optimization_empty_resume(self, test_client, test_user_token):
|
||||||
|
"""Test resume optimization with empty resume."""
|
||||||
|
headers = {"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
request_data = {
|
||||||
|
"current_resume": "",
|
||||||
|
"job_description": "Test job description",
|
||||||
|
"role_title": "Test Role"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post(
|
||||||
|
"/api/ai/optimize-resume",
|
||||||
|
json=request_data,
|
||||||
|
headers=headers
|
||||||
|
)
|
||||||
|
|
||||||
|
# Should handle empty resume
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.json()
|
||||||
|
assert "content" in data
|
||||||
|
|
||||||
|
|
||||||
|
class TestAIAPIValidation:
|
||||||
|
"""Test input validation for AI API endpoints."""
|
||||||
|
|
||||||
|
def test_cover_letter_invalid_email_in_description(self, test_client, test_user_token):
|
||||||
|
"""Test cover letter generation with invalid characters."""
|
||||||
|
headers = {"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
request_data = {
|
||||||
|
"job_description": "Job with special chars: <script>alert('xss')</script>",
|
||||||
|
"company_name": "Security Corp",
|
||||||
|
"role_title": "Security Engineer"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post(
|
||||||
|
"/api/ai/generate-cover-letter",
|
||||||
|
json=request_data,
|
||||||
|
headers=headers
|
||||||
|
)
|
||||||
|
|
||||||
|
# Should sanitize or handle special characters
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.json()
|
||||||
|
# The script tag should not be executed (this is handled by the template)
|
||||||
|
assert "Security Corp" in data["content"]
|
||||||
|
|
||||||
|
def test_resume_optimization_unicode_content(self, test_client, test_user_token):
|
||||||
|
"""Test resume optimization with unicode characters."""
|
||||||
|
headers = {"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
request_data = {
|
||||||
|
"current_resume": "José González\nSoftware Engineer\n• 5 años de experiencia",
|
||||||
|
"job_description": "Seeking bilingual developer",
|
||||||
|
"role_title": "Desarrollador Senior"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post(
|
||||||
|
"/api/ai/optimize-resume",
|
||||||
|
json=request_data,
|
||||||
|
headers=headers
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.json()
|
||||||
|
assert "José González" in data["content"]
|
||||||
|
|
||||||
|
def test_cover_letter_null_values(self, test_client, test_user_token):
|
||||||
|
"""Test cover letter generation with null values in optional fields."""
|
||||||
|
headers = {"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
request_data = {
|
||||||
|
"job_description": "Test job description",
|
||||||
|
"company_name": "Null Corp",
|
||||||
|
"role_title": "Null Role",
|
||||||
|
"job_url": None,
|
||||||
|
"user_resume": None
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post(
|
||||||
|
"/api/ai/generate-cover-letter",
|
||||||
|
json=request_data,
|
||||||
|
headers=headers
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.json()
|
||||||
|
assert "Null Corp" in data["content"]
|
||||||
|
|
||||||
|
|
||||||
|
class TestAIAPIPerformance:
|
||||||
|
"""Test performance aspects of AI API endpoints."""
|
||||||
|
|
||||||
|
def test_concurrent_cover_letter_requests(self, test_client, test_user_token):
|
||||||
|
"""Test multiple concurrent cover letter requests."""
|
||||||
|
import threading
|
||||||
|
import time
|
||||||
|
|
||||||
|
headers = {"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
|
||||||
|
def make_request(index):
|
||||||
|
request_data = {
|
||||||
|
"job_description": f"Job description {index}",
|
||||||
|
"company_name": f"Company {index}",
|
||||||
|
"role_title": f"Role {index}"
|
||||||
|
}
|
||||||
|
return test_client.post(
|
||||||
|
"/api/ai/generate-cover-letter",
|
||||||
|
json=request_data,
|
||||||
|
headers=headers
|
||||||
|
)
|
||||||
|
|
||||||
|
# Make 5 concurrent requests
|
||||||
|
start_time = time.time()
|
||||||
|
threads = []
|
||||||
|
results = []
|
||||||
|
|
||||||
|
for i in range(5):
|
||||||
|
thread = threading.Thread(target=lambda i=i: results.append(make_request(i)))
|
||||||
|
threads.append(thread)
|
||||||
|
thread.start()
|
||||||
|
|
||||||
|
for thread in threads:
|
||||||
|
thread.join()
|
||||||
|
|
||||||
|
end_time = time.time()
|
||||||
|
|
||||||
|
# All requests should succeed
|
||||||
|
assert len(results) == 5
|
||||||
|
for response in results:
|
||||||
|
assert response.status_code == 200
|
||||||
|
|
||||||
|
# Should complete in reasonable time (less than 10 seconds for template generation)
|
||||||
|
assert end_time - start_time < 10
|
||||||
|
|
||||||
|
def test_response_time_cover_letter(self, test_client, test_user_token):
|
||||||
|
"""Test response time for cover letter generation."""
|
||||||
|
import time
|
||||||
|
|
||||||
|
headers = {"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
request_data = {
|
||||||
|
"job_description": "Standard Python developer position",
|
||||||
|
"company_name": "Performance Corp",
|
||||||
|
"role_title": "Python Developer"
|
||||||
|
}
|
||||||
|
|
||||||
|
start_time = time.time()
|
||||||
|
response = test_client.post(
|
||||||
|
"/api/ai/generate-cover-letter",
|
||||||
|
json=request_data,
|
||||||
|
headers=headers
|
||||||
|
)
|
||||||
|
end_time = time.time()
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
|
||||||
|
# Template generation should be fast (less than 1 second)
|
||||||
|
response_time = end_time - start_time
|
||||||
|
assert response_time < 1.0
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
class TestAIAPIAsync:
|
||||||
|
"""Test AI API endpoints with async client."""
|
||||||
|
|
||||||
|
async def test_async_cover_letter_generation(self, async_client, test_user_token):
|
||||||
|
"""Test cover letter generation with async client."""
|
||||||
|
headers = {"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
request_data = {
|
||||||
|
"job_description": "Async job description",
|
||||||
|
"company_name": "Async Corp",
|
||||||
|
"role_title": "Async Developer"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = await async_client.post(
|
||||||
|
"/api/ai/generate-cover-letter",
|
||||||
|
json=request_data,
|
||||||
|
headers=headers
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.json()
|
||||||
|
assert "Async Corp" in data["content"]
|
||||||
|
|
||||||
|
async def test_async_resume_optimization(self, async_client, test_user_token):
|
||||||
|
"""Test resume optimization with async client."""
|
||||||
|
headers = {"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
request_data = {
|
||||||
|
"current_resume": "Async resume content",
|
||||||
|
"job_description": "Async job requirements",
|
||||||
|
"role_title": "Async Role"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = await async_client.post(
|
||||||
|
"/api/ai/optimize-resume",
|
||||||
|
json=request_data,
|
||||||
|
headers=headers
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.json()
|
||||||
|
assert "Async resume content" in data["content"]
|
||||||
469
tests/integration/test_api_auth.py
Normal file
469
tests/integration/test_api_auth.py
Normal file
@@ -0,0 +1,469 @@
|
|||||||
|
# Integration tests for API authentication
|
||||||
|
import pytest
|
||||||
|
from fastapi.testclient import TestClient
|
||||||
|
from httpx import AsyncClient
|
||||||
|
import json
|
||||||
|
|
||||||
|
from app.main import app
|
||||||
|
from app.core.security import create_access_token
|
||||||
|
|
||||||
|
|
||||||
|
class TestAuthenticationEndpoints:
|
||||||
|
"""Test authentication API endpoints."""
|
||||||
|
|
||||||
|
def test_register_user_success(self, test_client):
|
||||||
|
"""Test successful user registration."""
|
||||||
|
|
||||||
|
user_data = {
|
||||||
|
"email": "newuser@test.com",
|
||||||
|
"password": "securepassword123",
|
||||||
|
"first_name": "New",
|
||||||
|
"last_name": "User"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post("/api/auth/register", json=user_data)
|
||||||
|
|
||||||
|
assert response.status_code == 201
|
||||||
|
data = response.json()
|
||||||
|
assert data["email"] == "newuser@test.com"
|
||||||
|
assert data["first_name"] == "New"
|
||||||
|
assert data["last_name"] == "User"
|
||||||
|
assert "password" not in data # Password should not be returned
|
||||||
|
assert "access_token" in data
|
||||||
|
assert "token_type" in data
|
||||||
|
|
||||||
|
def test_register_user_duplicate_email(self, test_client, test_user):
|
||||||
|
"""Test registration with duplicate email."""
|
||||||
|
|
||||||
|
user_data = {
|
||||||
|
"email": test_user.email, # Use existing user's email
|
||||||
|
"password": "differentpassword",
|
||||||
|
"first_name": "Duplicate",
|
||||||
|
"last_name": "User"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post("/api/auth/register", json=user_data)
|
||||||
|
|
||||||
|
assert response.status_code == 400
|
||||||
|
data = response.json()
|
||||||
|
assert "email already registered" in data["detail"].lower()
|
||||||
|
|
||||||
|
def test_register_user_invalid_email(self, test_client):
|
||||||
|
"""Test registration with invalid email format."""
|
||||||
|
|
||||||
|
user_data = {
|
||||||
|
"email": "invalid-email",
|
||||||
|
"password": "securepassword123",
|
||||||
|
"first_name": "Invalid",
|
||||||
|
"last_name": "Email"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post("/api/auth/register", json=user_data)
|
||||||
|
|
||||||
|
assert response.status_code == 422 # Validation error
|
||||||
|
|
||||||
|
def test_register_user_weak_password(self, test_client):
|
||||||
|
"""Test registration with weak password."""
|
||||||
|
|
||||||
|
user_data = {
|
||||||
|
"email": "weakpass@test.com",
|
||||||
|
"password": "123", # Too weak
|
||||||
|
"first_name": "Weak",
|
||||||
|
"last_name": "Password"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post("/api/auth/register", json=user_data)
|
||||||
|
|
||||||
|
assert response.status_code == 422 # Validation error
|
||||||
|
|
||||||
|
def test_login_success(self, test_client, test_user):
|
||||||
|
"""Test successful login."""
|
||||||
|
|
||||||
|
login_data = {
|
||||||
|
"username": test_user.email, # FastAPI OAuth2 uses 'username'
|
||||||
|
"password": "testpassword123" # From test_user fixture
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post(
|
||||||
|
"/api/auth/login",
|
||||||
|
data=login_data, # OAuth2 expects form data
|
||||||
|
headers={"Content-Type": "application/x-www-form-urlencoded"}
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.json()
|
||||||
|
assert "access_token" in data
|
||||||
|
assert "token_type" in data
|
||||||
|
assert data["token_type"] == "bearer"
|
||||||
|
|
||||||
|
def test_login_wrong_password(self, test_client, test_user):
|
||||||
|
"""Test login with wrong password."""
|
||||||
|
|
||||||
|
login_data = {
|
||||||
|
"username": test_user.email,
|
||||||
|
"password": "wrongpassword"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post(
|
||||||
|
"/api/auth/login",
|
||||||
|
data=login_data,
|
||||||
|
headers={"Content-Type": "application/x-www-form-urlencoded"}
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 401
|
||||||
|
data = response.json()
|
||||||
|
assert "incorrect" in data["detail"].lower()
|
||||||
|
|
||||||
|
def test_login_nonexistent_user(self, test_client):
|
||||||
|
"""Test login with non-existent user."""
|
||||||
|
|
||||||
|
login_data = {
|
||||||
|
"username": "nonexistent@test.com",
|
||||||
|
"password": "somepassword"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post(
|
||||||
|
"/api/auth/login",
|
||||||
|
data=login_data,
|
||||||
|
headers={"Content-Type": "application/x-www-form-urlencoded"}
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 401
|
||||||
|
|
||||||
|
def test_get_current_user_success(self, test_client, test_user_token):
|
||||||
|
"""Test getting current user with valid token."""
|
||||||
|
|
||||||
|
headers = {"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
response = test_client.get("/api/auth/me", headers=headers)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.json()
|
||||||
|
assert "email" in data
|
||||||
|
assert "id" in data
|
||||||
|
assert "first_name" in data
|
||||||
|
assert "last_name" in data
|
||||||
|
assert "password" not in data # Password should never be returned
|
||||||
|
|
||||||
|
def test_get_current_user_invalid_token(self, test_client):
|
||||||
|
"""Test getting current user with invalid token."""
|
||||||
|
|
||||||
|
headers = {"Authorization": "Bearer invalid.token.here"}
|
||||||
|
response = test_client.get("/api/auth/me", headers=headers)
|
||||||
|
|
||||||
|
assert response.status_code == 401
|
||||||
|
|
||||||
|
def test_get_current_user_no_token(self, test_client):
|
||||||
|
"""Test getting current user without token."""
|
||||||
|
|
||||||
|
response = test_client.get("/api/auth/me")
|
||||||
|
|
||||||
|
assert response.status_code == 401
|
||||||
|
|
||||||
|
def test_get_current_user_expired_token(self, test_client, test_user):
|
||||||
|
"""Test getting current user with expired token."""
|
||||||
|
|
||||||
|
from datetime import timedelta
|
||||||
|
|
||||||
|
# Create expired token
|
||||||
|
token_data = {"sub": str(test_user.id), "email": test_user.email}
|
||||||
|
expired_token = create_access_token(
|
||||||
|
data=token_data,
|
||||||
|
expires_delta=timedelta(seconds=-1) # Expired
|
||||||
|
)
|
||||||
|
|
||||||
|
headers = {"Authorization": f"Bearer {expired_token}"}
|
||||||
|
response = test_client.get("/api/auth/me", headers=headers)
|
||||||
|
|
||||||
|
assert response.status_code == 401
|
||||||
|
|
||||||
|
|
||||||
|
class TestProtectedEndpoints:
|
||||||
|
"""Test protected endpoints require authentication."""
|
||||||
|
|
||||||
|
def test_protected_endpoint_without_token(self, test_client):
|
||||||
|
"""Test accessing protected endpoint without token."""
|
||||||
|
|
||||||
|
response = test_client.get("/api/applications")
|
||||||
|
|
||||||
|
assert response.status_code == 401
|
||||||
|
|
||||||
|
def test_protected_endpoint_with_valid_token(self, test_client, test_user_token):
|
||||||
|
"""Test accessing protected endpoint with valid token."""
|
||||||
|
|
||||||
|
headers = {"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
response = test_client.get("/api/applications", headers=headers)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
|
||||||
|
def test_protected_endpoint_with_invalid_token(self, test_client):
|
||||||
|
"""Test accessing protected endpoint with invalid token."""
|
||||||
|
|
||||||
|
headers = {"Authorization": "Bearer invalid.token"}
|
||||||
|
response = test_client.get("/api/applications", headers=headers)
|
||||||
|
|
||||||
|
assert response.status_code == 401
|
||||||
|
|
||||||
|
def test_create_application_requires_auth(self, test_client):
|
||||||
|
"""Test creating application requires authentication."""
|
||||||
|
|
||||||
|
app_data = {
|
||||||
|
"company_name": "Test Corp",
|
||||||
|
"role_title": "Developer",
|
||||||
|
"job_description": "Test role",
|
||||||
|
"status": "draft"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post("/api/applications", json=app_data)
|
||||||
|
|
||||||
|
assert response.status_code == 401
|
||||||
|
|
||||||
|
def test_create_application_with_auth(self, test_client, test_user_token):
|
||||||
|
"""Test creating application with authentication."""
|
||||||
|
|
||||||
|
app_data = {
|
||||||
|
"company_name": "Auth Test Corp",
|
||||||
|
"role_title": "Developer",
|
||||||
|
"job_description": "Test role with auth",
|
||||||
|
"status": "draft"
|
||||||
|
}
|
||||||
|
|
||||||
|
headers = {"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
response = test_client.post("/api/applications", json=app_data, headers=headers)
|
||||||
|
|
||||||
|
assert response.status_code == 201
|
||||||
|
data = response.json()
|
||||||
|
assert data["company_name"] == "Auth Test Corp"
|
||||||
|
|
||||||
|
|
||||||
|
class TestTokenValidation:
|
||||||
|
"""Test token validation scenarios."""
|
||||||
|
|
||||||
|
def test_malformed_token(self, test_client):
|
||||||
|
"""Test malformed token handling."""
|
||||||
|
|
||||||
|
malformed_tokens = [
|
||||||
|
"Bearer",
|
||||||
|
"Bearer ",
|
||||||
|
"not-a-token",
|
||||||
|
"Bearer not.a.jwt",
|
||||||
|
"NotBearer validtoken"
|
||||||
|
]
|
||||||
|
|
||||||
|
for token in malformed_tokens:
|
||||||
|
headers = {"Authorization": token}
|
||||||
|
response = test_client.get("/api/auth/me", headers=headers)
|
||||||
|
assert response.status_code == 401
|
||||||
|
|
||||||
|
def test_token_with_invalid_signature(self, test_client):
|
||||||
|
"""Test token with invalid signature."""
|
||||||
|
|
||||||
|
# Create a token with wrong signature
|
||||||
|
invalid_token = "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ1c2VyMTIzIiwiZXhwIjoxNjk5OTk5OTk5fQ.invalid_signature"
|
||||||
|
|
||||||
|
headers = {"Authorization": f"Bearer {invalid_token}"}
|
||||||
|
response = test_client.get("/api/auth/me", headers=headers)
|
||||||
|
|
||||||
|
assert response.status_code == 401
|
||||||
|
|
||||||
|
def test_token_missing_required_claims(self, test_client):
|
||||||
|
"""Test token missing required claims."""
|
||||||
|
|
||||||
|
# Create token without required 'sub' claim
|
||||||
|
token_data = {"email": "test@example.com"} # Missing 'sub'
|
||||||
|
token = create_access_token(data=token_data)
|
||||||
|
|
||||||
|
headers = {"Authorization": f"Bearer {token}"}
|
||||||
|
response = test_client.get("/api/auth/me", headers=headers)
|
||||||
|
|
||||||
|
assert response.status_code == 401
|
||||||
|
|
||||||
|
|
||||||
|
class TestAuthenticationFlow:
|
||||||
|
"""Test complete authentication flows."""
|
||||||
|
|
||||||
|
def test_complete_registration_and_login_flow(self, test_client):
|
||||||
|
"""Test complete flow from registration to authenticated request."""
|
||||||
|
|
||||||
|
# 1. Register new user
|
||||||
|
user_data = {
|
||||||
|
"email": "flowtest@test.com",
|
||||||
|
"password": "securepassword123",
|
||||||
|
"first_name": "Flow",
|
||||||
|
"last_name": "Test"
|
||||||
|
}
|
||||||
|
|
||||||
|
register_response = test_client.post("/api/auth/register", json=user_data)
|
||||||
|
assert register_response.status_code == 201
|
||||||
|
|
||||||
|
register_data = register_response.json()
|
||||||
|
registration_token = register_data["access_token"]
|
||||||
|
|
||||||
|
# 2. Use registration token to access protected endpoint
|
||||||
|
headers = {"Authorization": f"Bearer {registration_token}"}
|
||||||
|
protected_response = test_client.get("/api/auth/me", headers=headers)
|
||||||
|
assert protected_response.status_code == 200
|
||||||
|
|
||||||
|
# 3. Login with same credentials
|
||||||
|
login_data = {
|
||||||
|
"username": "flowtest@test.com",
|
||||||
|
"password": "securepassword123"
|
||||||
|
}
|
||||||
|
|
||||||
|
login_response = test_client.post(
|
||||||
|
"/api/auth/login",
|
||||||
|
data=login_data,
|
||||||
|
headers={"Content-Type": "application/x-www-form-urlencoded"}
|
||||||
|
)
|
||||||
|
assert login_response.status_code == 200
|
||||||
|
|
||||||
|
login_token = login_response.json()["access_token"]
|
||||||
|
|
||||||
|
# 4. Use login token to access protected endpoint
|
||||||
|
headers = {"Authorization": f"Bearer {login_token}"}
|
||||||
|
me_response = test_client.get("/api/auth/me", headers=headers)
|
||||||
|
assert me_response.status_code == 200
|
||||||
|
|
||||||
|
me_data = me_response.json()
|
||||||
|
assert me_data["email"] == "flowtest@test.com"
|
||||||
|
|
||||||
|
def test_user_isolation_between_tokens(self, test_client):
|
||||||
|
"""Test that different user tokens access different data."""
|
||||||
|
|
||||||
|
# Create two users
|
||||||
|
user1_data = {
|
||||||
|
"email": "user1@isolation.test",
|
||||||
|
"password": "password123",
|
||||||
|
"first_name": "User",
|
||||||
|
"last_name": "One"
|
||||||
|
}
|
||||||
|
|
||||||
|
user2_data = {
|
||||||
|
"email": "user2@isolation.test",
|
||||||
|
"password": "password123",
|
||||||
|
"first_name": "User",
|
||||||
|
"last_name": "Two"
|
||||||
|
}
|
||||||
|
|
||||||
|
# Register both users
|
||||||
|
user1_response = test_client.post("/api/auth/register", json=user1_data)
|
||||||
|
user2_response = test_client.post("/api/auth/register", json=user2_data)
|
||||||
|
|
||||||
|
assert user1_response.status_code == 201
|
||||||
|
assert user2_response.status_code == 201
|
||||||
|
|
||||||
|
user1_token = user1_response.json()["access_token"]
|
||||||
|
user2_token = user2_response.json()["access_token"]
|
||||||
|
|
||||||
|
# Get user info with each token
|
||||||
|
user1_headers = {"Authorization": f"Bearer {user1_token}"}
|
||||||
|
user2_headers = {"Authorization": f"Bearer {user2_token}"}
|
||||||
|
|
||||||
|
user1_me = test_client.get("/api/auth/me", headers=user1_headers)
|
||||||
|
user2_me = test_client.get("/api/auth/me", headers=user2_headers)
|
||||||
|
|
||||||
|
assert user1_me.status_code == 200
|
||||||
|
assert user2_me.status_code == 200
|
||||||
|
|
||||||
|
user1_data = user1_me.json()
|
||||||
|
user2_data = user2_me.json()
|
||||||
|
|
||||||
|
# Verify users are different
|
||||||
|
assert user1_data["email"] != user2_data["email"]
|
||||||
|
assert user1_data["id"] != user2_data["id"]
|
||||||
|
|
||||||
|
|
||||||
|
class TestRateLimiting:
|
||||||
|
"""Test rate limiting on authentication endpoints."""
|
||||||
|
|
||||||
|
def test_login_rate_limiting(self, test_client, test_user):
|
||||||
|
"""Test rate limiting on login attempts."""
|
||||||
|
|
||||||
|
login_data = {
|
||||||
|
"username": test_user.email,
|
||||||
|
"password": "wrongpassword"
|
||||||
|
}
|
||||||
|
|
||||||
|
# Make multiple failed login attempts
|
||||||
|
responses = []
|
||||||
|
for _ in range(10): # Assuming rate limit is higher than 10
|
||||||
|
response = test_client.post(
|
||||||
|
"/api/auth/login",
|
||||||
|
data=login_data,
|
||||||
|
headers={"Content-Type": "application/x-www-form-urlencoded"}
|
||||||
|
)
|
||||||
|
responses.append(response)
|
||||||
|
|
||||||
|
# All should return 401 (wrong password) but not rate limited
|
||||||
|
for response in responses:
|
||||||
|
assert response.status_code == 401
|
||||||
|
# If rate limiting is implemented, some responses might be 429
|
||||||
|
|
||||||
|
def test_registration_rate_limiting(self, test_client):
|
||||||
|
"""Test rate limiting on registration attempts."""
|
||||||
|
|
||||||
|
# Make multiple registration attempts
|
||||||
|
responses = []
|
||||||
|
for i in range(5):
|
||||||
|
user_data = {
|
||||||
|
"email": f"ratelimit{i}@test.com",
|
||||||
|
"password": "password123",
|
||||||
|
"first_name": "Rate",
|
||||||
|
"last_name": f"Limit{i}"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post("/api/auth/register", json=user_data)
|
||||||
|
responses.append(response)
|
||||||
|
|
||||||
|
# Most should succeed (assuming reasonable rate limits)
|
||||||
|
successful_responses = [r for r in responses if r.status_code == 201]
|
||||||
|
assert len(successful_responses) >= 3 # At least some should succeed
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
class TestAsyncAuthEndpoints:
|
||||||
|
"""Test authentication endpoints with async client."""
|
||||||
|
|
||||||
|
async def test_async_register_user(self, async_client):
|
||||||
|
"""Test user registration with async client."""
|
||||||
|
|
||||||
|
user_data = {
|
||||||
|
"email": "async@test.com",
|
||||||
|
"password": "asyncpassword123",
|
||||||
|
"first_name": "Async",
|
||||||
|
"last_name": "User"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = await async_client.post("/api/auth/register", json=user_data)
|
||||||
|
|
||||||
|
assert response.status_code == 201
|
||||||
|
data = response.json()
|
||||||
|
assert data["email"] == "async@test.com"
|
||||||
|
assert "access_token" in data
|
||||||
|
|
||||||
|
async def test_async_login_user(self, async_client, test_user):
|
||||||
|
"""Test user login with async client."""
|
||||||
|
|
||||||
|
login_data = {
|
||||||
|
"username": test_user.email,
|
||||||
|
"password": "testpassword123"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = await async_client.post(
|
||||||
|
"/api/auth/login",
|
||||||
|
data=login_data,
|
||||||
|
headers={"Content-Type": "application/x-www-form-urlencoded"}
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.json()
|
||||||
|
assert "access_token" in data
|
||||||
|
|
||||||
|
async def test_async_protected_endpoint(self, async_client, test_user_token):
|
||||||
|
"""Test protected endpoint with async client."""
|
||||||
|
|
||||||
|
headers = {"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
response = await async_client.get("/api/auth/me", headers=headers)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.json()
|
||||||
|
assert "email" in data
|
||||||
1
tests/unit/__init__.py
Normal file
1
tests/unit/__init__.py
Normal file
@@ -0,0 +1 @@
|
|||||||
|
# Unit tests package
|
||||||
405
tests/unit/test_ai_service.py
Normal file
405
tests/unit/test_ai_service.py
Normal file
@@ -0,0 +1,405 @@
|
|||||||
|
"""
|
||||||
|
Unit tests for AI document generation service
|
||||||
|
"""
|
||||||
|
import pytest
|
||||||
|
from unittest.mock import AsyncMock, Mock, patch
|
||||||
|
import asyncio
|
||||||
|
|
||||||
|
from src.backend.services.ai_service import AIService, ai_service
|
||||||
|
|
||||||
|
|
||||||
|
class TestAIService:
|
||||||
|
"""Test AI Service functionality."""
|
||||||
|
|
||||||
|
def test_ai_service_initialization(self):
|
||||||
|
"""Test AI service initializes correctly."""
|
||||||
|
service = AIService()
|
||||||
|
|
||||||
|
# Without API keys, clients should be None
|
||||||
|
assert service.claude_client is None
|
||||||
|
assert service.openai_client is None
|
||||||
|
|
||||||
|
@patch('src.backend.services.ai_service.settings')
|
||||||
|
def test_ai_service_with_claude_key(self, mock_settings):
|
||||||
|
"""Test AI service initialization with Claude API key."""
|
||||||
|
mock_settings.claude_api_key = "test-claude-key"
|
||||||
|
mock_settings.openai_api_key = None
|
||||||
|
|
||||||
|
with patch('src.backend.services.ai_service.anthropic.Anthropic') as mock_anthropic:
|
||||||
|
service = AIService()
|
||||||
|
|
||||||
|
mock_anthropic.assert_called_once_with(api_key="test-claude-key")
|
||||||
|
assert service.claude_client is not None
|
||||||
|
assert service.openai_client is None
|
||||||
|
|
||||||
|
@patch('src.backend.services.ai_service.settings')
|
||||||
|
def test_ai_service_with_openai_key(self, mock_settings):
|
||||||
|
"""Test AI service initialization with OpenAI API key."""
|
||||||
|
mock_settings.claude_api_key = None
|
||||||
|
mock_settings.openai_api_key = "test-openai-key"
|
||||||
|
|
||||||
|
with patch('src.backend.services.ai_service.openai.AsyncOpenAI') as mock_openai:
|
||||||
|
service = AIService()
|
||||||
|
|
||||||
|
mock_openai.assert_called_once_with(api_key="test-openai-key")
|
||||||
|
assert service.claude_client is None
|
||||||
|
assert service.openai_client is not None
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_generate_cover_letter_template_fallback(self):
|
||||||
|
"""Test cover letter generation with template fallback."""
|
||||||
|
service = AIService()
|
||||||
|
|
||||||
|
result = await service.generate_cover_letter(
|
||||||
|
job_description="Python developer position requiring FastAPI skills",
|
||||||
|
company_name="Tech Corp",
|
||||||
|
role_title="Senior Python Developer",
|
||||||
|
user_name="John Doe"
|
||||||
|
)
|
||||||
|
|
||||||
|
assert "content" in result
|
||||||
|
assert "model_used" in result
|
||||||
|
assert "prompt" in result
|
||||||
|
|
||||||
|
assert result["model_used"] == "template"
|
||||||
|
assert "Tech Corp" in result["content"]
|
||||||
|
assert "Senior Python Developer" in result["content"]
|
||||||
|
assert "John Doe" in result["content"]
|
||||||
|
assert "Dear Hiring Manager" in result["content"]
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_generate_cover_letter_with_claude(self):
|
||||||
|
"""Test cover letter generation with Claude API."""
|
||||||
|
service = AIService()
|
||||||
|
|
||||||
|
# Mock Claude client
|
||||||
|
mock_claude = Mock()
|
||||||
|
mock_response = Mock()
|
||||||
|
mock_response.content = [Mock(text="Generated cover letter content")]
|
||||||
|
mock_claude.messages.create.return_value = mock_response
|
||||||
|
service.claude_client = mock_claude
|
||||||
|
|
||||||
|
result = await service.generate_cover_letter(
|
||||||
|
job_description="Python developer position",
|
||||||
|
company_name="Test Company",
|
||||||
|
role_title="Developer",
|
||||||
|
user_name="Test User"
|
||||||
|
)
|
||||||
|
|
||||||
|
assert result["content"] == "Generated cover letter content"
|
||||||
|
assert result["model_used"] == "claude-3-haiku"
|
||||||
|
assert "prompt" in result
|
||||||
|
|
||||||
|
# Verify Claude API was called correctly
|
||||||
|
mock_claude.messages.create.assert_called_once()
|
||||||
|
call_args = mock_claude.messages.create.call_args
|
||||||
|
assert call_args[1]["model"] == "claude-3-haiku-20240307"
|
||||||
|
assert call_args[1]["max_tokens"] == 1000
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_generate_cover_letter_with_openai(self):
|
||||||
|
"""Test cover letter generation with OpenAI API."""
|
||||||
|
service = AIService()
|
||||||
|
|
||||||
|
# Mock OpenAI client
|
||||||
|
mock_openai = AsyncMock()
|
||||||
|
mock_response = Mock()
|
||||||
|
mock_response.choices = [Mock(message=Mock(content="OpenAI generated content"))]
|
||||||
|
mock_openai.chat.completions.create.return_value = mock_response
|
||||||
|
service.openai_client = mock_openai
|
||||||
|
|
||||||
|
result = await service.generate_cover_letter(
|
||||||
|
job_description="Software engineer role",
|
||||||
|
company_name="OpenAI Corp",
|
||||||
|
role_title="Engineer",
|
||||||
|
user_name="AI User"
|
||||||
|
)
|
||||||
|
|
||||||
|
assert result["content"] == "OpenAI generated content"
|
||||||
|
assert result["model_used"] == "gpt-3.5-turbo"
|
||||||
|
assert "prompt" in result
|
||||||
|
|
||||||
|
# Verify OpenAI API was called correctly
|
||||||
|
mock_openai.chat.completions.create.assert_called_once()
|
||||||
|
call_args = mock_openai.chat.completions.create.call_args
|
||||||
|
assert call_args[1]["model"] == "gpt-3.5-turbo"
|
||||||
|
assert call_args[1]["max_tokens"] == 1000
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_generate_cover_letter_with_user_resume(self):
|
||||||
|
"""Test cover letter generation with user resume included."""
|
||||||
|
service = AIService()
|
||||||
|
|
||||||
|
result = await service.generate_cover_letter(
|
||||||
|
job_description="Python developer position",
|
||||||
|
company_name="Resume Corp",
|
||||||
|
role_title="Developer",
|
||||||
|
user_name="Resume User",
|
||||||
|
user_resume="John Doe\nSoftware Engineer\n5 years Python experience"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Should include resume information in prompt
|
||||||
|
assert "Resume/Background" in result["prompt"]
|
||||||
|
assert result["model_used"] == "template"
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_generate_resume_optimization_template(self):
|
||||||
|
"""Test resume optimization with template fallback."""
|
||||||
|
service = AIService()
|
||||||
|
|
||||||
|
current_resume = "John Smith\nDeveloper\n\nExperience:\n- 3 years Python\n- Web development"
|
||||||
|
|
||||||
|
result = await service.generate_resume_optimization(
|
||||||
|
current_resume=current_resume,
|
||||||
|
job_description="Senior Python Developer requiring FastAPI",
|
||||||
|
role_title="Senior Python Developer"
|
||||||
|
)
|
||||||
|
|
||||||
|
assert "content" in result
|
||||||
|
assert "model_used" in result
|
||||||
|
assert "prompt" in result
|
||||||
|
|
||||||
|
assert result["model_used"] == "template"
|
||||||
|
assert "Senior Python Developer" in result["content"]
|
||||||
|
assert current_resume in result["content"]
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_generate_resume_optimization_with_ai_error(self):
|
||||||
|
"""Test resume optimization when AI service fails."""
|
||||||
|
service = AIService()
|
||||||
|
|
||||||
|
# Mock Claude client that raises an exception
|
||||||
|
mock_claude = Mock()
|
||||||
|
mock_claude.messages.create.side_effect = Exception("API Error")
|
||||||
|
service.claude_client = mock_claude
|
||||||
|
|
||||||
|
result = await service.generate_resume_optimization(
|
||||||
|
current_resume="Test resume",
|
||||||
|
job_description="Test job",
|
||||||
|
role_title="Test role"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Should fallback to template
|
||||||
|
assert result["model_used"] == "template-fallback"
|
||||||
|
assert "Test resume" in result["content"]
|
||||||
|
|
||||||
|
def test_template_cover_letter_generation(self):
|
||||||
|
"""Test template cover letter generation."""
|
||||||
|
service = AIService()
|
||||||
|
|
||||||
|
content = service._generate_template_cover_letter(
|
||||||
|
company_name="Template Corp",
|
||||||
|
role_title="Template Role",
|
||||||
|
user_name="Template User",
|
||||||
|
job_description="Python, JavaScript, React, SQL, AWS, Docker experience required"
|
||||||
|
)
|
||||||
|
|
||||||
|
assert "Template Corp" in content
|
||||||
|
assert "Template Role" in content
|
||||||
|
assert "Template User" in content
|
||||||
|
assert "Dear Hiring Manager" in content
|
||||||
|
|
||||||
|
# Should extract and include relevant skills
|
||||||
|
assert "Python" in content or "Javascript" in content
|
||||||
|
|
||||||
|
def test_template_cover_letter_no_matching_skills(self):
|
||||||
|
"""Test template cover letter when no skills match."""
|
||||||
|
service = AIService()
|
||||||
|
|
||||||
|
content = service._generate_template_cover_letter(
|
||||||
|
company_name="No Skills Corp",
|
||||||
|
role_title="Mysterious Role",
|
||||||
|
user_name="Skill-less User",
|
||||||
|
job_description="Experience with proprietary technology XYZ required"
|
||||||
|
)
|
||||||
|
|
||||||
|
assert "No Skills Corp" in content
|
||||||
|
assert "Mysterious Role" in content
|
||||||
|
assert "Skill-less User" in content
|
||||||
|
# Should not include skill text when no matches
|
||||||
|
assert "with expertise in" not in content
|
||||||
|
|
||||||
|
|
||||||
|
class TestAIServiceIntegration:
|
||||||
|
"""Test AI service integration and edge cases."""
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_concurrent_cover_letter_generation(self):
|
||||||
|
"""Test concurrent cover letter generation requests."""
|
||||||
|
service = AIService()
|
||||||
|
|
||||||
|
# Create multiple concurrent requests
|
||||||
|
tasks = [
|
||||||
|
service.generate_cover_letter(
|
||||||
|
job_description=f"Job {i} description",
|
||||||
|
company_name=f"Company {i}",
|
||||||
|
role_title=f"Role {i}",
|
||||||
|
user_name=f"User {i}"
|
||||||
|
)
|
||||||
|
for i in range(5)
|
||||||
|
]
|
||||||
|
|
||||||
|
results = await asyncio.gather(*tasks)
|
||||||
|
|
||||||
|
# All should complete successfully
|
||||||
|
assert len(results) == 5
|
||||||
|
for i, result in enumerate(results):
|
||||||
|
assert f"Company {i}" in result["content"]
|
||||||
|
assert f"Role {i}" in result["content"]
|
||||||
|
assert result["model_used"] == "template"
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_cover_letter_with_empty_inputs(self):
|
||||||
|
"""Test cover letter generation with empty inputs."""
|
||||||
|
service = AIService()
|
||||||
|
|
||||||
|
result = await service.generate_cover_letter(
|
||||||
|
job_description="",
|
||||||
|
company_name="",
|
||||||
|
role_title="",
|
||||||
|
user_name=""
|
||||||
|
)
|
||||||
|
|
||||||
|
# Should handle empty inputs gracefully
|
||||||
|
assert "content" in result
|
||||||
|
assert result["model_used"] == "template"
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_cover_letter_with_very_long_inputs(self):
|
||||||
|
"""Test cover letter generation with very long inputs."""
|
||||||
|
service = AIService()
|
||||||
|
|
||||||
|
long_description = "A" * 10000 # Very long job description
|
||||||
|
|
||||||
|
result = await service.generate_cover_letter(
|
||||||
|
job_description=long_description,
|
||||||
|
company_name="Long Corp",
|
||||||
|
role_title="Long Role",
|
||||||
|
user_name="Long User"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Should handle long inputs
|
||||||
|
assert "content" in result
|
||||||
|
assert result["model_used"] == "template"
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_resume_optimization_with_special_characters(self):
|
||||||
|
"""Test resume optimization with special characters."""
|
||||||
|
service = AIService()
|
||||||
|
|
||||||
|
resume_with_special_chars = """
|
||||||
|
José González
|
||||||
|
Software Engineer
|
||||||
|
|
||||||
|
Experience:
|
||||||
|
• 5 years of Python development
|
||||||
|
• Expertise in FastAPI & PostgreSQL
|
||||||
|
• Led team of 10+ developers
|
||||||
|
"""
|
||||||
|
|
||||||
|
result = await service.generate_resume_optimization(
|
||||||
|
current_resume=resume_with_special_chars,
|
||||||
|
job_description="Senior role requiring team leadership",
|
||||||
|
role_title="Senior Developer"
|
||||||
|
)
|
||||||
|
|
||||||
|
assert "content" in result
|
||||||
|
assert "José González" in result["content"]
|
||||||
|
assert result["model_used"] == "template"
|
||||||
|
|
||||||
|
|
||||||
|
class TestAIServiceConfiguration:
|
||||||
|
"""Test AI service configuration and settings."""
|
||||||
|
|
||||||
|
@patch('src.backend.services.ai_service.settings')
|
||||||
|
def test_ai_service_singleton(self, mock_settings):
|
||||||
|
"""Test that ai_service is a singleton instance."""
|
||||||
|
# The ai_service should be the same instance
|
||||||
|
from src.backend.services.ai_service import ai_service as service1
|
||||||
|
from src.backend.services.ai_service import ai_service as service2
|
||||||
|
|
||||||
|
assert service1 is service2
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_error_handling_in_ai_generation(self):
|
||||||
|
"""Test error handling in AI generation methods."""
|
||||||
|
service = AIService()
|
||||||
|
|
||||||
|
# Mock a client that raises an exception
|
||||||
|
service.claude_client = Mock()
|
||||||
|
service.claude_client.messages.create.side_effect = Exception("Network error")
|
||||||
|
|
||||||
|
result = await service.generate_cover_letter(
|
||||||
|
job_description="Test job",
|
||||||
|
company_name="Error Corp",
|
||||||
|
role_title="Error Role",
|
||||||
|
user_name="Error User"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Should fallback gracefully
|
||||||
|
assert result["model_used"] == "template-fallback"
|
||||||
|
assert "Error Corp" in result["content"]
|
||||||
|
|
||||||
|
def test_prompt_construction(self):
|
||||||
|
"""Test that prompts are constructed correctly."""
|
||||||
|
service = AIService()
|
||||||
|
|
||||||
|
# This is tested indirectly through the template generation
|
||||||
|
content = service._generate_template_cover_letter(
|
||||||
|
company_name="Prompt Corp",
|
||||||
|
role_title="Prompt Engineer",
|
||||||
|
user_name="Prompt User",
|
||||||
|
job_description="Looking for someone with strong prompting skills"
|
||||||
|
)
|
||||||
|
|
||||||
|
assert "Prompt Corp" in content
|
||||||
|
assert "Prompt Engineer" in content
|
||||||
|
assert "Prompt User" in content
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.mark.integration
|
||||||
|
class TestAIServiceWithRealAPIs:
|
||||||
|
"""Integration tests for AI service with real APIs (requires API keys)."""
|
||||||
|
|
||||||
|
@pytest.mark.skipif(
|
||||||
|
not hasattr(ai_service, 'claude_client') or ai_service.claude_client is None,
|
||||||
|
reason="Claude API key not configured"
|
||||||
|
)
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_real_claude_api_call(self):
|
||||||
|
"""Test actual Claude API call (only runs if API key is configured)."""
|
||||||
|
result = await ai_service.generate_cover_letter(
|
||||||
|
job_description="Python developer position with FastAPI",
|
||||||
|
company_name="Real API Corp",
|
||||||
|
role_title="Python Developer",
|
||||||
|
user_name="Integration Test User"
|
||||||
|
)
|
||||||
|
|
||||||
|
assert result["model_used"] == "claude-3-haiku"
|
||||||
|
assert len(result["content"]) > 100 # Should be substantial content
|
||||||
|
assert "Real API Corp" in result["content"]
|
||||||
|
|
||||||
|
@pytest.mark.skipif(
|
||||||
|
not hasattr(ai_service, 'openai_client') or ai_service.openai_client is None,
|
||||||
|
reason="OpenAI API key not configured"
|
||||||
|
)
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_real_openai_api_call(self):
|
||||||
|
"""Test actual OpenAI API call (only runs if API key is configured)."""
|
||||||
|
# Temporarily disable Claude to force OpenAI usage
|
||||||
|
original_claude = ai_service.claude_client
|
||||||
|
ai_service.claude_client = None
|
||||||
|
|
||||||
|
try:
|
||||||
|
result = await ai_service.generate_cover_letter(
|
||||||
|
job_description="Software engineer role requiring Python",
|
||||||
|
company_name="OpenAI Test Corp",
|
||||||
|
role_title="Software Engineer",
|
||||||
|
user_name="OpenAI Test User"
|
||||||
|
)
|
||||||
|
|
||||||
|
assert result["model_used"] == "gpt-3.5-turbo"
|
||||||
|
assert len(result["content"]) > 100
|
||||||
|
assert "OpenAI Test Corp" in result["content"]
|
||||||
|
finally:
|
||||||
|
ai_service.claude_client = original_claude
|
||||||
458
tests/unit/test_application_service.py
Normal file
458
tests/unit/test_application_service.py
Normal file
@@ -0,0 +1,458 @@
|
|||||||
|
# Unit tests for application service
|
||||||
|
import pytest
|
||||||
|
from unittest.mock import AsyncMock, patch, MagicMock
|
||||||
|
from datetime import datetime
|
||||||
|
import uuid
|
||||||
|
|
||||||
|
from app.schemas.application import ApplicationCreate, ApplicationUpdate
|
||||||
|
from app.crud.application import (
|
||||||
|
create_application,
|
||||||
|
get_application_by_id,
|
||||||
|
get_user_applications,
|
||||||
|
update_application,
|
||||||
|
delete_application
|
||||||
|
)
|
||||||
|
from app.services.ai.claude_service import ClaudeService
|
||||||
|
from app.models.application import ApplicationStatus
|
||||||
|
|
||||||
|
|
||||||
|
class TestApplicationCRUD:
|
||||||
|
"""Test application CRUD operations."""
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_create_application_success(self, test_db, test_user):
|
||||||
|
"""Test successful application creation."""
|
||||||
|
|
||||||
|
app_data = ApplicationCreate(
|
||||||
|
company_name="Google",
|
||||||
|
role_title="Senior Python Developer",
|
||||||
|
job_description="Python developer role with ML focus",
|
||||||
|
status="draft"
|
||||||
|
)
|
||||||
|
|
||||||
|
application = await create_application(test_db, app_data, test_user.id)
|
||||||
|
|
||||||
|
assert application.company_name == "Google"
|
||||||
|
assert application.role_title == "Senior Python Developer"
|
||||||
|
assert application.status == ApplicationStatus.DRAFT
|
||||||
|
assert application.user_id == test_user.id
|
||||||
|
assert application.id is not None
|
||||||
|
assert application.created_at is not None
|
||||||
|
assert application.updated_at is not None
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_create_application_with_ai_generation(self, test_db, test_user, mock_claude_service):
|
||||||
|
"""Test application creation with AI cover letter generation."""
|
||||||
|
|
||||||
|
app_data = ApplicationCreate(
|
||||||
|
company_name="Microsoft",
|
||||||
|
role_title="Software Engineer",
|
||||||
|
job_description="Full-stack developer position with React and Python",
|
||||||
|
status="draft"
|
||||||
|
)
|
||||||
|
|
||||||
|
with patch('app.services.ai.claude_service.ClaudeService', return_value=mock_claude_service):
|
||||||
|
application = await create_application(test_db, app_data, test_user.id)
|
||||||
|
|
||||||
|
assert application.company_name == "Microsoft"
|
||||||
|
assert application.cover_letter is not None
|
||||||
|
assert len(application.cover_letter) > 100
|
||||||
|
assert "Dear Hiring Manager" in application.cover_letter
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_get_application_by_id_success(self, test_db, test_application):
|
||||||
|
"""Test getting application by ID."""
|
||||||
|
|
||||||
|
retrieved_app = await get_application_by_id(
|
||||||
|
test_db, test_application.id, test_application.user_id
|
||||||
|
)
|
||||||
|
|
||||||
|
assert retrieved_app is not None
|
||||||
|
assert retrieved_app.id == test_application.id
|
||||||
|
assert retrieved_app.company_name == test_application.company_name
|
||||||
|
assert retrieved_app.user_id == test_application.user_id
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_get_application_by_id_wrong_user(self, test_db, test_application):
|
||||||
|
"""Test getting application by ID with wrong user (RLS test)."""
|
||||||
|
|
||||||
|
wrong_user_id = str(uuid.uuid4())
|
||||||
|
|
||||||
|
retrieved_app = await get_application_by_id(
|
||||||
|
test_db, test_application.id, wrong_user_id
|
||||||
|
)
|
||||||
|
|
||||||
|
# Should return None due to RLS policy
|
||||||
|
assert retrieved_app is None
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_get_application_nonexistent(self, test_db, test_user):
|
||||||
|
"""Test getting non-existent application."""
|
||||||
|
|
||||||
|
fake_id = str(uuid.uuid4())
|
||||||
|
|
||||||
|
retrieved_app = await get_application_by_id(
|
||||||
|
test_db, fake_id, test_user.id
|
||||||
|
)
|
||||||
|
|
||||||
|
assert retrieved_app is None
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_get_user_applications(self, test_db, test_user):
|
||||||
|
"""Test getting all applications for a user."""
|
||||||
|
|
||||||
|
# Create multiple applications
|
||||||
|
app_data_list = [
|
||||||
|
ApplicationCreate(
|
||||||
|
company_name=f"Company{i}",
|
||||||
|
role_title=f"Role{i}",
|
||||||
|
job_description=f"Description {i}",
|
||||||
|
status="draft"
|
||||||
|
)
|
||||||
|
for i in range(3)
|
||||||
|
]
|
||||||
|
|
||||||
|
created_apps = []
|
||||||
|
for app_data in app_data_list:
|
||||||
|
app = await create_application(test_db, app_data, test_user.id)
|
||||||
|
created_apps.append(app)
|
||||||
|
|
||||||
|
await test_db.commit()
|
||||||
|
|
||||||
|
# Get user applications
|
||||||
|
user_apps = await get_user_applications(test_db, test_user.id)
|
||||||
|
|
||||||
|
assert len(user_apps) >= 3 # At least the 3 we created
|
||||||
|
|
||||||
|
# Verify all returned apps belong to user
|
||||||
|
for app in user_apps:
|
||||||
|
assert app.user_id == test_user.id
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_update_application_success(self, test_db, test_application):
|
||||||
|
"""Test successful application update."""
|
||||||
|
|
||||||
|
update_data = ApplicationUpdate(
|
||||||
|
company_name="Updated Company",
|
||||||
|
status="applied"
|
||||||
|
)
|
||||||
|
|
||||||
|
updated_app = await update_application(
|
||||||
|
test_db, test_application.id, update_data, test_application.user_id
|
||||||
|
)
|
||||||
|
|
||||||
|
assert updated_app.company_name == "Updated Company"
|
||||||
|
assert updated_app.status == ApplicationStatus.APPLIED
|
||||||
|
assert updated_app.updated_at > updated_app.created_at
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_update_application_wrong_user(self, test_db, test_application):
|
||||||
|
"""Test updating application with wrong user."""
|
||||||
|
|
||||||
|
wrong_user_id = str(uuid.uuid4())
|
||||||
|
update_data = ApplicationUpdate(company_name="Hacked Company")
|
||||||
|
|
||||||
|
updated_app = await update_application(
|
||||||
|
test_db, test_application.id, update_data, wrong_user_id
|
||||||
|
)
|
||||||
|
|
||||||
|
# Should return None due to RLS policy
|
||||||
|
assert updated_app is None
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_delete_application_success(self, test_db, test_application):
|
||||||
|
"""Test successful application deletion."""
|
||||||
|
|
||||||
|
app_id = test_application.id
|
||||||
|
user_id = test_application.user_id
|
||||||
|
|
||||||
|
deleted = await delete_application(test_db, app_id, user_id)
|
||||||
|
|
||||||
|
assert deleted is True
|
||||||
|
|
||||||
|
# Verify application is deleted
|
||||||
|
retrieved_app = await get_application_by_id(test_db, app_id, user_id)
|
||||||
|
assert retrieved_app is None
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_delete_application_wrong_user(self, test_db, test_application):
|
||||||
|
"""Test deleting application with wrong user."""
|
||||||
|
|
||||||
|
wrong_user_id = str(uuid.uuid4())
|
||||||
|
|
||||||
|
deleted = await delete_application(
|
||||||
|
test_db, test_application.id, wrong_user_id
|
||||||
|
)
|
||||||
|
|
||||||
|
# Should return False due to RLS policy
|
||||||
|
assert deleted is False
|
||||||
|
|
||||||
|
|
||||||
|
class TestApplicationStatusTransitions:
|
||||||
|
"""Test application status transitions."""
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_status_transition_draft_to_applied(self, test_db, test_application):
|
||||||
|
"""Test status transition from draft to applied."""
|
||||||
|
|
||||||
|
# Initial status should be draft
|
||||||
|
assert test_application.status == ApplicationStatus.DRAFT
|
||||||
|
|
||||||
|
update_data = ApplicationUpdate(status="applied")
|
||||||
|
updated_app = await update_application(
|
||||||
|
test_db, test_application.id, update_data, test_application.user_id
|
||||||
|
)
|
||||||
|
|
||||||
|
assert updated_app.status == ApplicationStatus.APPLIED
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_status_transition_applied_to_interview(self, test_db):
|
||||||
|
"""Test status transition from applied to interview."""
|
||||||
|
|
||||||
|
# Create application in applied status
|
||||||
|
app_data = ApplicationCreate(
|
||||||
|
company_name="Interview Corp",
|
||||||
|
role_title="Developer",
|
||||||
|
job_description="Developer role",
|
||||||
|
status="applied"
|
||||||
|
)
|
||||||
|
|
||||||
|
from tests.conftest import TestDataFactory
|
||||||
|
user_data = TestDataFactory.user_data("interview@test.com")
|
||||||
|
|
||||||
|
# Create user and application
|
||||||
|
from app.crud.user import create_user
|
||||||
|
from app.schemas.user import UserCreate
|
||||||
|
|
||||||
|
user = await create_user(test_db, UserCreate(**user_data))
|
||||||
|
application = await create_application(test_db, app_data, user.id)
|
||||||
|
await test_db.commit()
|
||||||
|
|
||||||
|
# Update to interview status
|
||||||
|
update_data = ApplicationUpdate(status="interview")
|
||||||
|
updated_app = await update_application(
|
||||||
|
test_db, application.id, update_data, user.id
|
||||||
|
)
|
||||||
|
|
||||||
|
assert updated_app.status == ApplicationStatus.INTERVIEW
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_invalid_status_transition(self, test_db, test_application):
|
||||||
|
"""Test invalid status value."""
|
||||||
|
|
||||||
|
update_data = ApplicationUpdate(status="invalid_status")
|
||||||
|
|
||||||
|
with pytest.raises(ValueError):
|
||||||
|
await update_application(
|
||||||
|
test_db, test_application.id, update_data, test_application.user_id
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
class TestApplicationFiltering:
|
||||||
|
"""Test application filtering and searching."""
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_filter_applications_by_status(self, test_db, test_user):
|
||||||
|
"""Test filtering applications by status."""
|
||||||
|
|
||||||
|
# Create applications with different statuses
|
||||||
|
statuses = ["draft", "applied", "interview", "rejected"]
|
||||||
|
applications = []
|
||||||
|
|
||||||
|
for status in statuses:
|
||||||
|
app_data = ApplicationCreate(
|
||||||
|
company_name=f"Company-{status}",
|
||||||
|
role_title="Developer",
|
||||||
|
job_description="Test role",
|
||||||
|
status=status
|
||||||
|
)
|
||||||
|
app = await create_application(test_db, app_data, test_user.id)
|
||||||
|
applications.append(app)
|
||||||
|
|
||||||
|
await test_db.commit()
|
||||||
|
|
||||||
|
# Test filtering (this would require implementing filter functionality)
|
||||||
|
all_apps = await get_user_applications(test_db, test_user.id)
|
||||||
|
|
||||||
|
# Verify we have applications with different statuses
|
||||||
|
app_statuses = {app.status for app in all_apps}
|
||||||
|
assert len(app_statuses) >= 3 # Should have multiple statuses
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_search_applications_by_company(self, test_db, test_user):
|
||||||
|
"""Test searching applications by company name."""
|
||||||
|
|
||||||
|
companies = ["Google", "Microsoft", "Apple", "Amazon"]
|
||||||
|
|
||||||
|
for company in companies:
|
||||||
|
app_data = ApplicationCreate(
|
||||||
|
company_name=company,
|
||||||
|
role_title="Developer",
|
||||||
|
job_description="Test role",
|
||||||
|
status="draft"
|
||||||
|
)
|
||||||
|
await create_application(test_db, app_data, test_user.id)
|
||||||
|
|
||||||
|
await test_db.commit()
|
||||||
|
|
||||||
|
# Get all applications
|
||||||
|
all_apps = await get_user_applications(test_db, test_user.id)
|
||||||
|
|
||||||
|
# Verify we can find specific companies
|
||||||
|
company_names = {app.company_name for app in all_apps}
|
||||||
|
assert "Google" in company_names
|
||||||
|
assert "Microsoft" in company_names
|
||||||
|
|
||||||
|
|
||||||
|
class TestApplicationValidation:
|
||||||
|
"""Test application data validation."""
|
||||||
|
|
||||||
|
def test_application_create_validation(self):
|
||||||
|
"""Test ApplicationCreate schema validation."""
|
||||||
|
|
||||||
|
# Valid data
|
||||||
|
valid_data = {
|
||||||
|
"company_name": "Valid Company",
|
||||||
|
"role_title": "Software Developer",
|
||||||
|
"job_description": "Great opportunity",
|
||||||
|
"status": "draft"
|
||||||
|
}
|
||||||
|
|
||||||
|
app_create = ApplicationCreate(**valid_data)
|
||||||
|
assert app_create.company_name == "Valid Company"
|
||||||
|
assert app_create.status == "draft"
|
||||||
|
|
||||||
|
def test_application_create_invalid_data(self):
|
||||||
|
"""Test ApplicationCreate with invalid data."""
|
||||||
|
|
||||||
|
# Missing required fields
|
||||||
|
with pytest.raises(ValueError):
|
||||||
|
ApplicationCreate(company_name="Company") # Missing role_title
|
||||||
|
|
||||||
|
# Invalid status
|
||||||
|
with pytest.raises(ValueError):
|
||||||
|
ApplicationCreate(
|
||||||
|
company_name="Company",
|
||||||
|
role_title="Role",
|
||||||
|
status="invalid_status"
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_application_update_validation(self):
|
||||||
|
"""Test ApplicationUpdate schema validation."""
|
||||||
|
|
||||||
|
# Partial update should work
|
||||||
|
update_data = ApplicationUpdate(company_name="New Company")
|
||||||
|
assert update_data.company_name == "New Company"
|
||||||
|
|
||||||
|
# Update with valid status
|
||||||
|
update_data = ApplicationUpdate(status="applied")
|
||||||
|
assert update_data.status == "applied"
|
||||||
|
|
||||||
|
|
||||||
|
class TestConcurrentApplicationOperations:
|
||||||
|
"""Test concurrent operations on applications."""
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_concurrent_application_updates(self, test_db, test_application):
|
||||||
|
"""Test concurrent updates to same application."""
|
||||||
|
|
||||||
|
import asyncio
|
||||||
|
|
||||||
|
async def update_company_name(name_suffix):
|
||||||
|
update_data = ApplicationUpdate(
|
||||||
|
company_name=f"Updated Company {name_suffix}"
|
||||||
|
)
|
||||||
|
return await update_application(
|
||||||
|
test_db, test_application.id, update_data, test_application.user_id
|
||||||
|
)
|
||||||
|
|
||||||
|
# Perform concurrent updates
|
||||||
|
tasks = [
|
||||||
|
update_company_name(i) for i in range(3)
|
||||||
|
]
|
||||||
|
|
||||||
|
results = await asyncio.gather(*tasks, return_exceptions=True)
|
||||||
|
|
||||||
|
# At least one update should succeed
|
||||||
|
successful_updates = [r for r in results if not isinstance(r, Exception)]
|
||||||
|
assert len(successful_updates) >= 1
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_concurrent_application_creation(self, test_db, test_user):
|
||||||
|
"""Test concurrent application creation for same user."""
|
||||||
|
|
||||||
|
import asyncio
|
||||||
|
|
||||||
|
async def create_test_application(index):
|
||||||
|
app_data = ApplicationCreate(
|
||||||
|
company_name=f"Concurrent Company {index}",
|
||||||
|
role_title=f"Role {index}",
|
||||||
|
job_description="Concurrent test",
|
||||||
|
status="draft"
|
||||||
|
)
|
||||||
|
return await create_application(test_db, app_data, test_user.id)
|
||||||
|
|
||||||
|
# Create multiple applications concurrently
|
||||||
|
tasks = [create_test_application(i) for i in range(5)]
|
||||||
|
results = await asyncio.gather(*tasks, return_exceptions=True)
|
||||||
|
|
||||||
|
# All creations should succeed
|
||||||
|
successful_creations = [r for r in results if not isinstance(r, Exception)]
|
||||||
|
assert len(successful_creations) == 5
|
||||||
|
|
||||||
|
# Verify all have different IDs
|
||||||
|
app_ids = {app.id for app in successful_creations}
|
||||||
|
assert len(app_ids) == 5
|
||||||
|
|
||||||
|
|
||||||
|
class TestApplicationBusinessLogic:
|
||||||
|
"""Test application business logic."""
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_application_timestamps_on_update(self, test_db, test_application):
|
||||||
|
"""Test that updated_at timestamp changes on update."""
|
||||||
|
|
||||||
|
original_updated_at = test_application.updated_at
|
||||||
|
|
||||||
|
# Wait a small amount to ensure timestamp difference
|
||||||
|
import asyncio
|
||||||
|
await asyncio.sleep(0.01)
|
||||||
|
|
||||||
|
update_data = ApplicationUpdate(company_name="Timestamp Test Company")
|
||||||
|
updated_app = await update_application(
|
||||||
|
test_db, test_application.id, update_data, test_application.user_id
|
||||||
|
)
|
||||||
|
|
||||||
|
assert updated_app.updated_at > original_updated_at
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_application_cover_letter_generation_trigger(self, test_db, test_user, mock_claude_service):
|
||||||
|
"""Test that cover letter generation is triggered appropriately."""
|
||||||
|
|
||||||
|
with patch('app.services.ai.claude_service.ClaudeService', return_value=mock_claude_service):
|
||||||
|
|
||||||
|
# Create application without job description
|
||||||
|
app_data = ApplicationCreate(
|
||||||
|
company_name="No Description Corp",
|
||||||
|
role_title="Developer",
|
||||||
|
status="draft"
|
||||||
|
)
|
||||||
|
|
||||||
|
app_without_desc = await create_application(test_db, app_data, test_user.id)
|
||||||
|
|
||||||
|
# Should not generate cover letter without job description
|
||||||
|
assert app_without_desc.cover_letter is None
|
||||||
|
|
||||||
|
# Create application with job description
|
||||||
|
app_data_with_desc = ApplicationCreate(
|
||||||
|
company_name="With Description Corp",
|
||||||
|
role_title="Developer",
|
||||||
|
job_description="Detailed job description here",
|
||||||
|
status="draft"
|
||||||
|
)
|
||||||
|
|
||||||
|
app_with_desc = await create_application(test_db, app_data_with_desc, test_user.id)
|
||||||
|
|
||||||
|
# Should generate cover letter with job description
|
||||||
|
assert app_with_desc.cover_letter is not None
|
||||||
|
assert len(app_with_desc.cover_letter) > 50
|
||||||
375
tests/unit/test_auth_endpoints.py
Normal file
375
tests/unit/test_auth_endpoints.py
Normal file
@@ -0,0 +1,375 @@
|
|||||||
|
"""
|
||||||
|
Unit tests for authentication endpoints
|
||||||
|
"""
|
||||||
|
import pytest
|
||||||
|
from fastapi.testclient import TestClient
|
||||||
|
from unittest.mock import patch
|
||||||
|
import uuid
|
||||||
|
|
||||||
|
from src.backend.main import app
|
||||||
|
from src.backend.models.user import User
|
||||||
|
from src.backend.api.auth import hash_password, verify_password, create_access_token
|
||||||
|
|
||||||
|
class TestAuthenticationAPI:
|
||||||
|
"""Test authentication API endpoints."""
|
||||||
|
|
||||||
|
def test_register_user_success(self, test_client, test_db):
|
||||||
|
"""Test successful user registration."""
|
||||||
|
user_data = {
|
||||||
|
"email": "newuser@test.com",
|
||||||
|
"password": "securepassword123",
|
||||||
|
"first_name": "New",
|
||||||
|
"last_name": "User"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post("/api/auth/register", json=user_data)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.json()
|
||||||
|
assert data["email"] == "newuser@test.com"
|
||||||
|
assert data["full_name"] == "New User"
|
||||||
|
assert data["first_name"] == "New"
|
||||||
|
assert data["last_name"] == "User"
|
||||||
|
assert data["is_active"] == True
|
||||||
|
assert "id" in data
|
||||||
|
# Password should never be returned
|
||||||
|
assert "password" not in data
|
||||||
|
assert "password_hash" not in data
|
||||||
|
|
||||||
|
def test_register_user_duplicate_email(self, test_client, test_user):
|
||||||
|
"""Test registration with duplicate email fails."""
|
||||||
|
user_data = {
|
||||||
|
"email": test_user.email, # Use existing user's email
|
||||||
|
"password": "differentpassword",
|
||||||
|
"first_name": "Duplicate",
|
||||||
|
"last_name": "User"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post("/api/auth/register", json=user_data)
|
||||||
|
|
||||||
|
assert response.status_code == 400
|
||||||
|
data = response.json()
|
||||||
|
assert "already registered" in data["detail"].lower()
|
||||||
|
|
||||||
|
def test_register_user_invalid_email(self, test_client):
|
||||||
|
"""Test registration with invalid email format."""
|
||||||
|
user_data = {
|
||||||
|
"email": "invalid-email-format",
|
||||||
|
"password": "securepassword123",
|
||||||
|
"first_name": "Invalid",
|
||||||
|
"last_name": "Email"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post("/api/auth/register", json=user_data)
|
||||||
|
|
||||||
|
assert response.status_code == 422 # Validation error
|
||||||
|
|
||||||
|
def test_register_user_missing_fields(self, test_client):
|
||||||
|
"""Test registration with missing required fields."""
|
||||||
|
user_data = {
|
||||||
|
"email": "incomplete@test.com",
|
||||||
|
# Missing password, first_name, last_name
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post("/api/auth/register", json=user_data)
|
||||||
|
|
||||||
|
assert response.status_code == 422
|
||||||
|
|
||||||
|
def test_login_success(self, test_client, test_user):
|
||||||
|
"""Test successful login."""
|
||||||
|
login_data = {
|
||||||
|
"email": test_user.email,
|
||||||
|
"password": "testpassword123"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post("/api/auth/login", json=login_data)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.json()
|
||||||
|
assert "access_token" in data
|
||||||
|
assert "token_type" in data
|
||||||
|
assert data["token_type"] == "bearer"
|
||||||
|
|
||||||
|
# Verify token structure
|
||||||
|
token = data["access_token"]
|
||||||
|
assert len(token.split('.')) == 3 # JWT has 3 parts
|
||||||
|
|
||||||
|
def test_login_wrong_password(self, test_client, test_user):
|
||||||
|
"""Test login with incorrect password."""
|
||||||
|
login_data = {
|
||||||
|
"email": test_user.email,
|
||||||
|
"password": "wrongpassword"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post("/api/auth/login", json=login_data)
|
||||||
|
|
||||||
|
assert response.status_code == 401
|
||||||
|
data = response.json()
|
||||||
|
assert "incorrect" in data["detail"].lower()
|
||||||
|
|
||||||
|
def test_login_nonexistent_user(self, test_client):
|
||||||
|
"""Test login with non-existent user."""
|
||||||
|
login_data = {
|
||||||
|
"email": "nonexistent@test.com",
|
||||||
|
"password": "somepassword"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post("/api/auth/login", json=login_data)
|
||||||
|
|
||||||
|
assert response.status_code == 401
|
||||||
|
|
||||||
|
def test_login_invalid_email_format(self, test_client):
|
||||||
|
"""Test login with invalid email format."""
|
||||||
|
login_data = {
|
||||||
|
"email": "not-an-email",
|
||||||
|
"password": "somepassword"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post("/api/auth/login", json=login_data)
|
||||||
|
|
||||||
|
assert response.status_code == 422 # Validation error
|
||||||
|
|
||||||
|
def test_get_current_user_success(self, test_client, test_user_token):
|
||||||
|
"""Test getting current user with valid token."""
|
||||||
|
headers = {"Authorization": f"Bearer {test_user_token}"}
|
||||||
|
response = test_client.get("/api/auth/me", headers=headers)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.json()
|
||||||
|
assert "email" in data
|
||||||
|
assert "id" in data
|
||||||
|
assert "full_name" in data
|
||||||
|
assert "is_active" in data
|
||||||
|
# Ensure sensitive data is not returned
|
||||||
|
assert "password" not in data
|
||||||
|
assert "password_hash" not in data
|
||||||
|
|
||||||
|
def test_get_current_user_invalid_token(self, test_client):
|
||||||
|
"""Test getting current user with invalid token."""
|
||||||
|
headers = {"Authorization": "Bearer invalid.token.here"}
|
||||||
|
response = test_client.get("/api/auth/me", headers=headers)
|
||||||
|
|
||||||
|
assert response.status_code == 401
|
||||||
|
|
||||||
|
def test_get_current_user_no_token(self, test_client):
|
||||||
|
"""Test getting current user without authorization header."""
|
||||||
|
response = test_client.get("/api/auth/me")
|
||||||
|
|
||||||
|
assert response.status_code == 403 # FastAPI HTTPBearer returns 403
|
||||||
|
|
||||||
|
def test_get_current_user_malformed_header(self, test_client):
|
||||||
|
"""Test getting current user with malformed authorization header."""
|
||||||
|
malformed_headers = [
|
||||||
|
{"Authorization": "Bearer"},
|
||||||
|
{"Authorization": "NotBearer validtoken"},
|
||||||
|
{"Authorization": "Bearer "},
|
||||||
|
{"Authorization": "invalid-format"}
|
||||||
|
]
|
||||||
|
|
||||||
|
for headers in malformed_headers:
|
||||||
|
response = test_client.get("/api/auth/me", headers=headers)
|
||||||
|
assert response.status_code in [401, 403]
|
||||||
|
|
||||||
|
|
||||||
|
class TestPasswordUtilities:
|
||||||
|
"""Test password hashing and verification utilities."""
|
||||||
|
|
||||||
|
def test_hash_password(self):
|
||||||
|
"""Test password hashing function."""
|
||||||
|
password = "testpassword123"
|
||||||
|
hashed = hash_password(password)
|
||||||
|
|
||||||
|
assert hashed != password # Should be hashed
|
||||||
|
assert len(hashed) > 0
|
||||||
|
assert hashed.startswith('$2b$') # bcrypt format
|
||||||
|
|
||||||
|
def test_verify_password_correct(self):
|
||||||
|
"""Test password verification with correct password."""
|
||||||
|
password = "testpassword123"
|
||||||
|
hashed = hash_password(password)
|
||||||
|
|
||||||
|
assert verify_password(password, hashed) == True
|
||||||
|
|
||||||
|
def test_verify_password_incorrect(self):
|
||||||
|
"""Test password verification with incorrect password."""
|
||||||
|
password = "testpassword123"
|
||||||
|
wrong_password = "wrongpassword"
|
||||||
|
hashed = hash_password(password)
|
||||||
|
|
||||||
|
assert verify_password(wrong_password, hashed) == False
|
||||||
|
|
||||||
|
def test_hash_different_passwords_different_hashes(self):
|
||||||
|
"""Test that different passwords produce different hashes."""
|
||||||
|
password1 = "password123"
|
||||||
|
password2 = "password456"
|
||||||
|
|
||||||
|
hash1 = hash_password(password1)
|
||||||
|
hash2 = hash_password(password2)
|
||||||
|
|
||||||
|
assert hash1 != hash2
|
||||||
|
|
||||||
|
def test_hash_same_password_different_hashes(self):
|
||||||
|
"""Test that same password produces different hashes (salt)."""
|
||||||
|
password = "testpassword123"
|
||||||
|
|
||||||
|
hash1 = hash_password(password)
|
||||||
|
hash2 = hash_password(password)
|
||||||
|
|
||||||
|
assert hash1 != hash2 # Should be different due to salt
|
||||||
|
# But both should verify correctly
|
||||||
|
assert verify_password(password, hash1) == True
|
||||||
|
assert verify_password(password, hash2) == True
|
||||||
|
|
||||||
|
|
||||||
|
class TestJWTTokens:
|
||||||
|
"""Test JWT token creation and validation."""
|
||||||
|
|
||||||
|
def test_create_access_token(self):
|
||||||
|
"""Test JWT token creation."""
|
||||||
|
data = {"sub": str(uuid.uuid4()), "email": "test@example.com"}
|
||||||
|
token = create_access_token(data)
|
||||||
|
|
||||||
|
assert isinstance(token, str)
|
||||||
|
assert len(token.split('.')) == 3 # JWT format: header.payload.signature
|
||||||
|
|
||||||
|
def test_create_token_with_different_data(self):
|
||||||
|
"""Test that different data creates different tokens."""
|
||||||
|
data1 = {"sub": str(uuid.uuid4()), "email": "user1@example.com"}
|
||||||
|
data2 = {"sub": str(uuid.uuid4()), "email": "user2@example.com"}
|
||||||
|
|
||||||
|
token1 = create_access_token(data1)
|
||||||
|
token2 = create_access_token(data2)
|
||||||
|
|
||||||
|
assert token1 != token2
|
||||||
|
|
||||||
|
def test_token_contains_expiration(self):
|
||||||
|
"""Test that created tokens contain expiration claim."""
|
||||||
|
from jose import jwt
|
||||||
|
from src.backend.core.config import settings
|
||||||
|
|
||||||
|
data = {"sub": str(uuid.uuid4())}
|
||||||
|
token = create_access_token(data)
|
||||||
|
|
||||||
|
# Decode without verification to check claims
|
||||||
|
decoded = jwt.get_unverified_claims(token)
|
||||||
|
assert "exp" in decoded
|
||||||
|
assert "sub" in decoded
|
||||||
|
|
||||||
|
|
||||||
|
class TestUserModel:
|
||||||
|
"""Test User model properties and methods."""
|
||||||
|
|
||||||
|
def test_user_full_name_property(self):
|
||||||
|
"""Test that full_name property works correctly."""
|
||||||
|
user = User(
|
||||||
|
email="test@example.com",
|
||||||
|
password_hash="hashed_password",
|
||||||
|
full_name="John Doe"
|
||||||
|
)
|
||||||
|
|
||||||
|
assert user.full_name == "John Doe"
|
||||||
|
assert user.first_name == "John"
|
||||||
|
assert user.last_name == "Doe"
|
||||||
|
|
||||||
|
def test_user_single_name(self):
|
||||||
|
"""Test user with single name."""
|
||||||
|
user = User(
|
||||||
|
email="test@example.com",
|
||||||
|
password_hash="hashed_password",
|
||||||
|
full_name="Madonna"
|
||||||
|
)
|
||||||
|
|
||||||
|
assert user.full_name == "Madonna"
|
||||||
|
assert user.first_name == "Madonna"
|
||||||
|
assert user.last_name == ""
|
||||||
|
|
||||||
|
def test_user_multiple_last_names(self):
|
||||||
|
"""Test user with multiple last names."""
|
||||||
|
user = User(
|
||||||
|
email="test@example.com",
|
||||||
|
password_hash="hashed_password",
|
||||||
|
full_name="John van der Berg"
|
||||||
|
)
|
||||||
|
|
||||||
|
assert user.full_name == "John van der Berg"
|
||||||
|
assert user.first_name == "John"
|
||||||
|
assert user.last_name == "van der Berg"
|
||||||
|
|
||||||
|
def test_user_is_active_property(self):
|
||||||
|
"""Test user is_active property."""
|
||||||
|
user = User(
|
||||||
|
email="test@example.com",
|
||||||
|
password_hash="hashed_password",
|
||||||
|
full_name="Test User"
|
||||||
|
)
|
||||||
|
|
||||||
|
assert user.is_active == True # Default is True
|
||||||
|
|
||||||
|
|
||||||
|
class TestAuthenticationEdgeCases:
|
||||||
|
"""Test edge cases and error conditions."""
|
||||||
|
|
||||||
|
def test_register_empty_names(self, test_client):
|
||||||
|
"""Test registration with empty names."""
|
||||||
|
user_data = {
|
||||||
|
"email": "empty@test.com",
|
||||||
|
"password": "password123",
|
||||||
|
"first_name": "",
|
||||||
|
"last_name": ""
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post("/api/auth/register", json=user_data)
|
||||||
|
|
||||||
|
# Should still work but create empty full_name
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.json()
|
||||||
|
assert data["full_name"] == " " # Space between empty names
|
||||||
|
|
||||||
|
def test_register_very_long_email(self, test_client):
|
||||||
|
"""Test registration with very long email."""
|
||||||
|
long_email = "a" * 250 + "@test.com" # Very long email
|
||||||
|
user_data = {
|
||||||
|
"email": long_email,
|
||||||
|
"password": "password123",
|
||||||
|
"first_name": "Long",
|
||||||
|
"last_name": "Email"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post("/api/auth/register", json=user_data)
|
||||||
|
|
||||||
|
# Should handle long emails (within DB constraints)
|
||||||
|
if len(long_email) <= 255:
|
||||||
|
assert response.status_code == 200
|
||||||
|
else:
|
||||||
|
assert response.status_code in [400, 422]
|
||||||
|
|
||||||
|
def test_register_unicode_names(self, test_client):
|
||||||
|
"""Test registration with unicode characters in names."""
|
||||||
|
user_data = {
|
||||||
|
"email": "unicode@test.com",
|
||||||
|
"password": "password123",
|
||||||
|
"first_name": "José",
|
||||||
|
"last_name": "González"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post("/api/auth/register", json=user_data)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.json()
|
||||||
|
assert data["full_name"] == "José González"
|
||||||
|
assert data["first_name"] == "José"
|
||||||
|
assert data["last_name"] == "González"
|
||||||
|
|
||||||
|
def test_case_insensitive_email_login(self, test_client, test_user):
|
||||||
|
"""Test that email login is case insensitive."""
|
||||||
|
# Try login with different case
|
||||||
|
login_data = {
|
||||||
|
"email": test_user.email.upper(),
|
||||||
|
"password": "testpassword123"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = test_client.post("/api/auth/login", json=login_data)
|
||||||
|
|
||||||
|
# This might fail if email comparison is case-sensitive
|
||||||
|
# The actual behavior depends on implementation
|
||||||
|
assert response.status_code in [200, 401]
|
||||||
368
tests/unit/test_auth_service.py
Normal file
368
tests/unit/test_auth_service.py
Normal file
@@ -0,0 +1,368 @@
|
|||||||
|
# Unit tests for authentication service
|
||||||
|
import pytest
|
||||||
|
from unittest.mock import AsyncMock, patch
|
||||||
|
from datetime import datetime, timedelta
|
||||||
|
|
||||||
|
from app.core.security import (
|
||||||
|
create_access_token,
|
||||||
|
verify_token,
|
||||||
|
hash_password,
|
||||||
|
verify_password,
|
||||||
|
get_current_user
|
||||||
|
)
|
||||||
|
from app.schemas.user import UserCreate
|
||||||
|
from app.crud.user import create_user, authenticate_user
|
||||||
|
|
||||||
|
|
||||||
|
class TestPasswordHashing:
|
||||||
|
"""Test password hashing functionality."""
|
||||||
|
|
||||||
|
def test_hash_password_creates_hash(self):
|
||||||
|
"""Test that password hashing creates a hash."""
|
||||||
|
|
||||||
|
password = "testpassword123"
|
||||||
|
hashed = hash_password(password)
|
||||||
|
|
||||||
|
assert hashed != password
|
||||||
|
assert len(hashed) > 50 # bcrypt hashes are long
|
||||||
|
assert hashed.startswith("$2b$")
|
||||||
|
|
||||||
|
def test_verify_password_correct(self):
|
||||||
|
"""Test password verification with correct password."""
|
||||||
|
|
||||||
|
password = "testpassword123"
|
||||||
|
hashed = hash_password(password)
|
||||||
|
|
||||||
|
assert verify_password(password, hashed) is True
|
||||||
|
|
||||||
|
def test_verify_password_incorrect(self):
|
||||||
|
"""Test password verification with incorrect password."""
|
||||||
|
|
||||||
|
password = "testpassword123"
|
||||||
|
wrong_password = "wrongpassword"
|
||||||
|
hashed = hash_password(password)
|
||||||
|
|
||||||
|
assert verify_password(wrong_password, hashed) is False
|
||||||
|
|
||||||
|
def test_hash_same_password_different_hashes(self):
|
||||||
|
"""Test that hashing the same password twice gives different hashes."""
|
||||||
|
|
||||||
|
password = "testpassword123"
|
||||||
|
hash1 = hash_password(password)
|
||||||
|
hash2 = hash_password(password)
|
||||||
|
|
||||||
|
assert hash1 != hash2
|
||||||
|
assert verify_password(password, hash1) is True
|
||||||
|
assert verify_password(password, hash2) is True
|
||||||
|
|
||||||
|
|
||||||
|
class TestJWTTokens:
|
||||||
|
"""Test JWT token functionality."""
|
||||||
|
|
||||||
|
def test_create_access_token(self):
|
||||||
|
"""Test creating access token."""
|
||||||
|
|
||||||
|
data = {"sub": "user123", "email": "test@example.com"}
|
||||||
|
token = create_access_token(data=data)
|
||||||
|
|
||||||
|
assert isinstance(token, str)
|
||||||
|
assert len(token) > 100 # JWT tokens are long
|
||||||
|
assert "." in token # JWT format has dots
|
||||||
|
|
||||||
|
def test_create_token_with_expiry(self):
|
||||||
|
"""Test creating token with custom expiry."""
|
||||||
|
|
||||||
|
data = {"sub": "user123"}
|
||||||
|
expires_delta = timedelta(minutes=30)
|
||||||
|
token = create_access_token(data=data, expires_delta=expires_delta)
|
||||||
|
|
||||||
|
# Verify token was created
|
||||||
|
assert isinstance(token, str)
|
||||||
|
assert len(token) > 100
|
||||||
|
|
||||||
|
def test_verify_valid_token(self):
|
||||||
|
"""Test verifying a valid token."""
|
||||||
|
|
||||||
|
data = {"sub": "user123", "email": "test@example.com"}
|
||||||
|
token = create_access_token(data=data)
|
||||||
|
|
||||||
|
payload = verify_token(token)
|
||||||
|
|
||||||
|
assert payload["sub"] == "user123"
|
||||||
|
assert payload["email"] == "test@example.com"
|
||||||
|
assert "exp" in payload
|
||||||
|
|
||||||
|
def test_verify_invalid_token(self):
|
||||||
|
"""Test verifying an invalid token."""
|
||||||
|
|
||||||
|
invalid_token = "invalid.token.here"
|
||||||
|
|
||||||
|
with pytest.raises(Exception): # Should raise an exception
|
||||||
|
verify_token(invalid_token)
|
||||||
|
|
||||||
|
def test_verify_expired_token(self):
|
||||||
|
"""Test verifying an expired token."""
|
||||||
|
|
||||||
|
data = {"sub": "user123"}
|
||||||
|
# Create token that expires immediately
|
||||||
|
expires_delta = timedelta(seconds=-1)
|
||||||
|
token = create_access_token(data=data, expires_delta=expires_delta)
|
||||||
|
|
||||||
|
with pytest.raises(Exception): # Should raise an exception
|
||||||
|
verify_token(token)
|
||||||
|
|
||||||
|
|
||||||
|
class TestUserCRUD:
|
||||||
|
"""Test user CRUD operations."""
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_create_user_success(self, test_db):
|
||||||
|
"""Test successful user creation."""
|
||||||
|
|
||||||
|
user_data = UserCreate(
|
||||||
|
email="newuser@test.com",
|
||||||
|
password="securepassword123",
|
||||||
|
first_name="New",
|
||||||
|
last_name="User"
|
||||||
|
)
|
||||||
|
|
||||||
|
user = await create_user(test_db, user_data)
|
||||||
|
|
||||||
|
assert user.email == "newuser@test.com"
|
||||||
|
assert user.first_name == "New"
|
||||||
|
assert user.last_name == "User"
|
||||||
|
assert user.password_hash != "securepassword123" # Should be hashed
|
||||||
|
assert user.id is not None
|
||||||
|
assert user.created_at is not None
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_create_user_duplicate_email(self, test_db):
|
||||||
|
"""Test creating user with duplicate email."""
|
||||||
|
|
||||||
|
# Create first user
|
||||||
|
user_data1 = UserCreate(
|
||||||
|
email="duplicate@test.com",
|
||||||
|
password="password123",
|
||||||
|
first_name="First",
|
||||||
|
last_name="User"
|
||||||
|
)
|
||||||
|
await create_user(test_db, user_data1)
|
||||||
|
await test_db.commit()
|
||||||
|
|
||||||
|
# Try to create second user with same email
|
||||||
|
user_data2 = UserCreate(
|
||||||
|
email="duplicate@test.com",
|
||||||
|
password="password456",
|
||||||
|
first_name="Second",
|
||||||
|
last_name="User"
|
||||||
|
)
|
||||||
|
|
||||||
|
with pytest.raises(Exception): # Should raise integrity error
|
||||||
|
await create_user(test_db, user_data2)
|
||||||
|
await test_db.commit()
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_authenticate_user_success(self, test_db):
|
||||||
|
"""Test successful user authentication."""
|
||||||
|
|
||||||
|
# Create user
|
||||||
|
user_data = UserCreate(
|
||||||
|
email="auth@test.com",
|
||||||
|
password="testpassword123",
|
||||||
|
first_name="Auth",
|
||||||
|
last_name="User"
|
||||||
|
)
|
||||||
|
user = await create_user(test_db, user_data)
|
||||||
|
await test_db.commit()
|
||||||
|
|
||||||
|
# Authenticate user
|
||||||
|
authenticated_user = await authenticate_user(
|
||||||
|
test_db, "auth@test.com", "testpassword123"
|
||||||
|
)
|
||||||
|
|
||||||
|
assert authenticated_user is not None
|
||||||
|
assert authenticated_user.email == "auth@test.com"
|
||||||
|
assert authenticated_user.id == user.id
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_authenticate_user_wrong_password(self, test_db):
|
||||||
|
"""Test authentication with wrong password."""
|
||||||
|
|
||||||
|
# Create user
|
||||||
|
user_data = UserCreate(
|
||||||
|
email="wrongpass@test.com",
|
||||||
|
password="correctpassword",
|
||||||
|
first_name="Test",
|
||||||
|
last_name="User"
|
||||||
|
)
|
||||||
|
await create_user(test_db, user_data)
|
||||||
|
await test_db.commit()
|
||||||
|
|
||||||
|
# Try to authenticate with wrong password
|
||||||
|
authenticated_user = await authenticate_user(
|
||||||
|
test_db, "wrongpass@test.com", "wrongpassword"
|
||||||
|
)
|
||||||
|
|
||||||
|
assert authenticated_user is None
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_authenticate_user_nonexistent(self, test_db):
|
||||||
|
"""Test authentication with non-existent user."""
|
||||||
|
|
||||||
|
authenticated_user = await authenticate_user(
|
||||||
|
test_db, "nonexistent@test.com", "password123"
|
||||||
|
)
|
||||||
|
|
||||||
|
assert authenticated_user is None
|
||||||
|
|
||||||
|
|
||||||
|
class TestAuthenticationIntegration:
|
||||||
|
"""Test authentication integration with FastAPI."""
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_get_current_user_valid_token(self, test_db, test_user):
|
||||||
|
"""Test getting current user with valid token."""
|
||||||
|
|
||||||
|
# Create token for test user
|
||||||
|
token_data = {"sub": str(test_user.id), "email": test_user.email}
|
||||||
|
token = create_access_token(data=token_data)
|
||||||
|
|
||||||
|
# Mock the database dependency
|
||||||
|
with patch('app.core.security.get_db') as mock_get_db:
|
||||||
|
mock_get_db.return_value.__aenter__.return_value = test_db
|
||||||
|
|
||||||
|
current_user = await get_current_user(token, test_db)
|
||||||
|
|
||||||
|
assert current_user.id == test_user.id
|
||||||
|
assert current_user.email == test_user.email
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_get_current_user_invalid_token(self, test_db):
|
||||||
|
"""Test getting current user with invalid token."""
|
||||||
|
|
||||||
|
invalid_token = "invalid.token.here"
|
||||||
|
|
||||||
|
with pytest.raises(Exception):
|
||||||
|
await get_current_user(invalid_token, test_db)
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_get_current_user_nonexistent_user(self, test_db):
|
||||||
|
"""Test getting current user for non-existent user."""
|
||||||
|
|
||||||
|
# Create token for non-existent user
|
||||||
|
token_data = {"sub": "non-existent-id", "email": "fake@test.com"}
|
||||||
|
token = create_access_token(data=token_data)
|
||||||
|
|
||||||
|
with pytest.raises(Exception):
|
||||||
|
await get_current_user(token, test_db)
|
||||||
|
|
||||||
|
|
||||||
|
class TestSecurityValidation:
|
||||||
|
"""Test security validation functions."""
|
||||||
|
|
||||||
|
def test_password_strength_validation(self):
|
||||||
|
"""Test password strength requirements."""
|
||||||
|
|
||||||
|
# This would test password strength if implemented
|
||||||
|
weak_passwords = [
|
||||||
|
"123",
|
||||||
|
"password",
|
||||||
|
"abc",
|
||||||
|
"12345678"
|
||||||
|
]
|
||||||
|
|
||||||
|
strong_passwords = [
|
||||||
|
"SecurePassword123!",
|
||||||
|
"MyStr0ngP@ssw0rd",
|
||||||
|
"C0mpl3xP@ssw0rd!"
|
||||||
|
]
|
||||||
|
|
||||||
|
# Note: Implement password strength validation if needed
|
||||||
|
assert True # Placeholder
|
||||||
|
|
||||||
|
def test_email_validation(self):
|
||||||
|
"""Test email format validation."""
|
||||||
|
|
||||||
|
valid_emails = [
|
||||||
|
"test@example.com",
|
||||||
|
"user.name@domain.co.uk",
|
||||||
|
"user+tag@example.org"
|
||||||
|
]
|
||||||
|
|
||||||
|
invalid_emails = [
|
||||||
|
"invalid-email",
|
||||||
|
"user@",
|
||||||
|
"@domain.com",
|
||||||
|
"user@domain"
|
||||||
|
]
|
||||||
|
|
||||||
|
# Note: Email validation is typically handled by Pydantic
|
||||||
|
assert True # Placeholder
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_rate_limiting_simulation(self):
|
||||||
|
"""Test rate limiting for authentication attempts."""
|
||||||
|
|
||||||
|
# This would test rate limiting if implemented
|
||||||
|
# Simulate multiple failed login attempts
|
||||||
|
|
||||||
|
failed_attempts = []
|
||||||
|
for i in range(5):
|
||||||
|
# Mock failed authentication attempt
|
||||||
|
failed_attempts.append(f"attempt_{i}")
|
||||||
|
|
||||||
|
assert len(failed_attempts) == 5
|
||||||
|
# In real implementation, would test that after X failed attempts,
|
||||||
|
# further attempts are rate limited
|
||||||
|
|
||||||
|
|
||||||
|
class TestTokenSecurity:
|
||||||
|
"""Test token security features."""
|
||||||
|
|
||||||
|
def test_token_contains_required_claims(self):
|
||||||
|
"""Test that tokens contain required claims."""
|
||||||
|
|
||||||
|
data = {"sub": "user123", "email": "test@example.com"}
|
||||||
|
token = create_access_token(data=data)
|
||||||
|
|
||||||
|
payload = verify_token(token)
|
||||||
|
|
||||||
|
# Check required claims
|
||||||
|
assert "sub" in payload
|
||||||
|
assert "exp" in payload
|
||||||
|
assert "iat" in payload
|
||||||
|
assert payload["sub"] == "user123"
|
||||||
|
|
||||||
|
def test_token_expiry_time(self):
|
||||||
|
"""Test token expiry time is set correctly."""
|
||||||
|
|
||||||
|
data = {"sub": "user123"}
|
||||||
|
expires_delta = timedelta(minutes=30)
|
||||||
|
token = create_access_token(data=data, expires_delta=expires_delta)
|
||||||
|
|
||||||
|
payload = verify_token(token)
|
||||||
|
|
||||||
|
# Check expiry is approximately correct (within 1 minute tolerance)
|
||||||
|
exp_time = datetime.fromtimestamp(payload["exp"])
|
||||||
|
expected_exp = datetime.utcnow() + expires_delta
|
||||||
|
time_diff = abs((exp_time - expected_exp).total_seconds())
|
||||||
|
|
||||||
|
assert time_diff < 60 # Within 1 minute tolerance
|
||||||
|
|
||||||
|
def test_token_uniqueness(self):
|
||||||
|
"""Test that different tokens are generated for same data."""
|
||||||
|
|
||||||
|
data = {"sub": "user123", "email": "test@example.com"}
|
||||||
|
|
||||||
|
token1 = create_access_token(data=data)
|
||||||
|
token2 = create_access_token(data=data)
|
||||||
|
|
||||||
|
# Tokens should be different due to different iat (issued at) times
|
||||||
|
assert token1 != token2
|
||||||
|
|
||||||
|
# But both should decode to similar payload (except iat and exp)
|
||||||
|
payload1 = verify_token(token1)
|
||||||
|
payload2 = verify_token(token2)
|
||||||
|
|
||||||
|
assert payload1["sub"] == payload2["sub"]
|
||||||
|
assert payload1["email"] == payload2["email"]
|
||||||
Reference in New Issue
Block a user