# JobForge MVP - Testing Strategy & Guidelines **Version:** 1.0.0 MVP **Target Audience:** Development Team **Testing Framework:** pytest + manual testing **Last Updated:** July 2025 --- ## ๐ŸŽฏ Testing Philosophy ### MVP Testing Approach - **Pragmatic over Perfect:** Focus on critical path testing rather than 100% coverage - **Backend Heavy:** Comprehensive API testing, lighter frontend testing for MVP - **Manual Integration:** Manual testing of full user workflows - **AI Mocking:** Mock external AI services for reliable testing - **Database Testing:** Test data isolation and security policies ### Testing Pyramid for MVP ``` โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ” โ”‚ Manual E2E โ”‚ โ† Full user workflows โ”‚ Testing โ”‚ โ”œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ค โ”‚ Integration โ”‚ โ† API endpoints + database โ”‚ Tests โ”‚ โ”œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ค โ”‚ Unit Tests โ”‚ โ† Business logic + utilities โ”‚ โ”‚ โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜ ``` --- ## ๐Ÿงช Unit Testing (Backend) ### Test Structure ``` tests/ โ”œโ”€โ”€ unit/ โ”‚ โ”œโ”€โ”€ services/ โ”‚ โ”‚ โ”œโ”€โ”€ test_auth_service.py โ”‚ โ”‚ โ”œโ”€โ”€ test_application_service.py โ”‚ โ”‚ โ””โ”€โ”€ test_document_service.py โ”‚ โ”œโ”€โ”€ agents/ โ”‚ โ”‚ โ”œโ”€โ”€ test_research_agent.py โ”‚ โ”‚ โ”œโ”€โ”€ test_resume_optimizer.py โ”‚ โ”‚ โ””โ”€โ”€ test_cover_letter_generator.py โ”‚ โ””โ”€โ”€ helpers/ โ”‚ โ”œโ”€โ”€ test_validators.py โ”‚ โ””โ”€โ”€ test_formatters.py โ”œโ”€โ”€ integration/ โ”‚ โ”œโ”€โ”€ test_api_auth.py โ”‚ โ”œโ”€โ”€ test_api_applications.py โ”‚ โ”œโ”€โ”€ test_api_documents.py โ”‚ โ””โ”€โ”€ test_database_policies.py โ”œโ”€โ”€ fixtures/ โ”‚ โ”œโ”€โ”€ test_data.py โ”‚ โ””โ”€โ”€ mock_responses.py โ”œโ”€โ”€ conftest.py โ””โ”€โ”€ pytest.ini ``` ### Sample Unit Tests #### Authentication Service Test ```python # tests/unit/services/test_auth_service.py import pytest from unittest.mock import AsyncMock, patch from src.backend.services.auth_service import AuthenticationService from src.backend.models.requests import RegisterRequest class TestAuthenticationService: @pytest.fixture def auth_service(self, mock_db): return AuthenticationService(mock_db) @pytest.mark.asyncio async def test_register_user_success(self, auth_service): # Arrange register_data = RegisterRequest( email="test@example.com", password="SecurePass123!", full_name="Test User" ) # Act user = await auth_service.register_user(register_data) # Assert assert user.email == "test@example.com" assert user.full_name == "Test User" assert user.id is not None # Password should be hashed assert user.password_hash != "SecurePass123!" assert user.password_hash.startswith("$2b$") @pytest.mark.asyncio async def test_register_user_duplicate_email(self, auth_service, existing_user): # Arrange register_data = RegisterRequest( email=existing_user.email, # Same email as existing user password="SecurePass123!", full_name="Another User" ) # Act & Assert with pytest.raises(DuplicateEmailError): await auth_service.register_user(register_data) @pytest.mark.asyncio async def test_authenticate_user_success(self, auth_service, existing_user): # Act auth_result = await auth_service.authenticate_user( existing_user.email, "correct_password" ) # Assert assert auth_result.success is True assert auth_result.user.id == existing_user.id assert auth_result.access_token is not None assert auth_result.token_type == "bearer" @pytest.mark.asyncio async def test_authenticate_user_wrong_password(self, auth_service, existing_user): # Act auth_result = await auth_service.authenticate_user( existing_user.email, "wrong_password" ) # Assert assert auth_result.success is False assert auth_result.user is None assert auth_result.access_token is None ``` #### AI Agent Test with Mocking ```python # tests/unit/agents/test_research_agent.py import pytest from unittest.mock import AsyncMock, patch from src.agents.research_agent import ResearchAgent class TestResearchAgent: @pytest.fixture def research_agent(self, mock_claude_client): return ResearchAgent(mock_claude_client) @pytest.mark.asyncio @patch('src.agents.research_agent.web_search') async def test_analyze_job_description(self, mock_web_search, research_agent): # Arrange job_description = """ We are seeking a Senior Python Developer with 5+ years experience. Must have Django, PostgreSQL, and AWS experience. """ mock_claude_response = { "content": [{ "text": """ { "required_skills": ["Python", "Django", "PostgreSQL", "AWS"], "experience_level": "Senior (5+ years)", "key_requirements": ["Backend development", "Database design"], "nice_to_have": ["Docker", "Kubernetes"] } """ }] } research_agent.claude_client.messages.create.return_value = mock_claude_response # Act analysis = await research_agent.analyze_job_description(job_description) # Assert assert "Python" in analysis.required_skills assert "Django" in analysis.required_skills assert analysis.experience_level == "Senior (5+ years)" assert len(analysis.key_requirements) > 0 @pytest.mark.asyncio async def test_research_company_info(self, research_agent): # Test company research with mocked web search company_name = "Google" # Mock web search results with patch('src.agents.research_agent.web_search') as mock_search: mock_search.return_value = { "results": [ { "title": "Google - About", "content": "Google is a multinational technology company...", "url": "https://about.google.com" } ] } company_info = await research_agent.research_company_info(company_name) assert company_info.company_name == "Google" assert len(company_info.recent_news) >= 0 assert company_info.company_description is not None ``` --- ## ๐Ÿ”— Integration Testing ### API Integration Tests ```python # tests/integration/test_api_applications.py import pytest from httpx import AsyncClient from src.backend.main import app class TestApplicationsAPI: @pytest.mark.asyncio async def test_create_application_success(self, auth_headers): async with AsyncClient(app=app, base_url="http://test") as client: # Arrange application_data = { "company_name": "Google", "role_title": "Senior Developer", "job_description": "We are seeking an experienced developer with Python skills...", "location": "Toronto, ON", "priority_level": "high" } # Act response = await client.post( "/api/v1/applications", json=application_data, headers=auth_headers ) # Assert assert response.status_code == 201 data = response.json() assert data["company_name"] == "Google" assert data["role_title"] == "Senior Developer" assert data["status"] == "draft" assert data["name"] == "google_senior_developer_2025_07_01" # Auto-generated @pytest.mark.asyncio async def test_create_application_validation_error(self, auth_headers): async with AsyncClient(app=app, base_url="http://test") as client: # Arrange - missing required fields application_data = { "company_name": "", # Empty company name "job_description": "Short" # Too short (min 50 chars) } # Act response = await client.post( "/api/v1/applications", json=application_data, headers=auth_headers ) # Assert assert response.status_code == 400 error = response.json() assert "company_name" in error["error"]["details"] assert "job_description" in error["error"]["details"] @pytest.mark.asyncio async def test_list_applications_user_isolation(self, auth_headers, other_user_auth_headers): async with AsyncClient(app=app, base_url="http://test") as client: # Create application as user 1 await client.post( "/api/v1/applications", json={ "company_name": "User1 Company", "role_title": "Developer", "job_description": "Job description for user 1 application..." }, headers=auth_headers ) # List applications as user 2 response = await client.get( "/api/v1/applications", headers=other_user_auth_headers ) # Assert user 2 cannot see user 1's applications assert response.status_code == 200 data = response.json() assert len(data["applications"]) == 0 # Should be empty for user 2 ``` ### Database Policy Tests ```python # tests/integration/test_database_policies.py import pytest from src.backend.database.connection import get_db_connection class TestDatabasePolicies: @pytest.mark.asyncio async def test_rls_user_isolation(self, test_user_1, test_user_2): async with get_db_connection() as conn: # Set context as user 1 await conn.execute( "SET LOCAL app.current_user_id = %s", str(test_user_1.id) ) # Create application as user 1 result = await conn.execute(""" INSERT INTO applications (user_id, name, company_name, role_title, job_description) VALUES (%s, 'test_app', 'Test Co', 'Developer', 'Test job description...') RETURNING id """, str(test_user_1.id)) app_id = result.fetchone()[0] # Switch context to user 2 await conn.execute( "SET LOCAL app.current_user_id = %s", str(test_user_2.id) ) # Try to access user 1's application as user 2 result = await conn.execute( "SELECT * FROM applications WHERE id = %s", str(app_id) ) # Assert user 2 cannot see user 1's application assert len(result.fetchall()) == 0 @pytest.mark.asyncio async def test_document_cascade_delete(self, test_user, test_application): async with get_db_connection() as conn: # Set user context await conn.execute( "SET LOCAL app.current_user_id = %s", str(test_user.id) ) # Create document await conn.execute(""" INSERT INTO documents (application_id, document_type, content) VALUES (%s, 'research_report', 'Test research content') """, str(test_application.id)) # Delete application await conn.execute( "DELETE FROM applications WHERE id = %s", str(test_application.id) ) # Verify documents were cascaded deleted result = await conn.execute( "SELECT COUNT(*) FROM documents WHERE application_id = %s", str(test_application.id) ) assert result.fetchone()[0] == 0 ``` --- ## ๐ŸŽญ Test Fixtures & Mocking ### Pytest Configuration ```python # conftest.py import pytest import asyncio from unittest.mock import AsyncMock from src.backend.database.connection import get_db_connection from src.backend.models.requests import RegisterRequest @pytest.fixture(scope="session") def event_loop(): """Create an instance of the default event loop for the test session.""" loop = asyncio.get_event_loop_policy().new_event_loop() yield loop loop.close() @pytest.fixture async def test_db(): """Provide test database connection with cleanup.""" async with get_db_connection() as conn: # Start transaction trans = await conn.begin() yield conn # Rollback transaction (cleanup) await trans.rollback() @pytest.fixture async def test_user(test_db): """Create test user.""" user_data = { "id": "123e4567-e89b-12d3-a456-426614174000", "email": "test@example.com", "password_hash": "$2b$12$LQv3c1yqBWVHxkd0LHAkCOYz6TtxMQJqhN8", "full_name": "Test User" } await test_db.execute(""" INSERT INTO users (id, email, password_hash, full_name) VALUES (%(id)s, %(email)s, %(password_hash)s, %(full_name)s) """, user_data) return User(**user_data) @pytest.fixture def auth_headers(test_user): """Generate authentication headers for test user.""" token = generate_jwt_token(test_user.id) return {"Authorization": f"Bearer {token}"} @pytest.fixture def mock_claude_client(): """Mock Claude API client.""" mock = AsyncMock() mock.messages.create.return_value = { "content": [{ "text": "Mocked Claude response" }] } return mock @pytest.fixture def mock_openai_client(): """Mock OpenAI API client.""" mock = AsyncMock() mock.embeddings.create.return_value = { "data": [{ "embedding": [0.1] * 1536 # Mock 1536-dimensional embedding }] } return mock ``` ### Test Data Factory ```python # tests/fixtures/test_data.py from datetime import datetime import uuid class TestDataFactory: """Factory for creating test data objects.""" @staticmethod def create_user_data(**overrides): defaults = { "id": str(uuid.uuid4()), "email": "user@example.com", "password_hash": "$2b$12$test_hash", "full_name": "Test User", "created_at": datetime.now(), "updated_at": datetime.now() } return {**defaults, **overrides} @staticmethod def create_application_data(user_id, **overrides): defaults = { "id": str(uuid.uuid4()), "user_id": user_id, "name": "test_company_developer_2025_07_01", "company_name": "Test Company", "role_title": "Software Developer", "job_description": "We are seeking a software developer with Python experience...", "location": "Toronto, ON", "priority_level": "medium", "status": "draft", "research_completed": False, "resume_optimized": False, "cover_letter_generated": False, "created_at": datetime.now(), "updated_at": datetime.now() } return {**defaults, **overrides} @staticmethod def create_document_data(application_id, **overrides): defaults = { "id": str(uuid.uuid4()), "application_id": application_id, "document_type": "research_report", "content": "# Test Research Report\n\nThis is test content...", "created_at": datetime.now(), "updated_at": datetime.now() } return {**defaults, **overrides} ``` --- ## ๐ŸŽฏ Manual Testing Guidelines ### Critical User Workflows #### Workflow 1: Complete Application Creation **Goal:** Test full 3-phase workflow from start to finish **Steps:** 1. **Registration & Login** - [ ] Register new account with valid email/password - [ ] Login with created credentials - [ ] Verify JWT token is received and stored 2. **Application Creation** - [ ] Create new application with job description - [ ] Verify application appears in sidebar - [ ] Check application status is "draft" 3. **Research Phase** - [ ] Click "Research" tab - [ ] Verify research processing starts automatically - [ ] Wait for completion (check processing status) - [ ] Review generated research report - [ ] Verify application status updates to "research_complete" 4. **Resume Optimization** - [ ] Upload at least one resume to library - [ ] Click "Resume" tab - [ ] Start resume optimization - [ ] Verify processing completes successfully - [ ] Review optimized resume content - [ ] Test editing resume content - [ ] Verify application status updates to "resume_ready" 5. **Cover Letter Generation** - [ ] Click "Cover Letter" tab - [ ] Add additional context in text box - [ ] Generate cover letter - [ ] Review generated content - [ ] Test editing cover letter - [ ] Verify application status updates to "cover_letter_ready" **Expected Results:** - All phases complete without errors - Documents are editable and changes persist - Status updates correctly through workflow - Navigation works smoothly between phases #### Workflow 2: Data Isolation Testing **Goal:** Verify users cannot access each other's data **Steps:** 1. **Create two test accounts** - Account A: user1@test.com - Account B: user2@test.com 2. **Create applications in both accounts** - Login as User A, create "Google Developer" application - Login as User B, create "Microsoft Engineer" application 3. **Verify isolation** - [ ] User A cannot see User B's applications in sidebar - [ ] User A cannot access User B's application URLs directly - [ ] Document URLs return 404 for wrong user #### Workflow 3: Error Handling **Goal:** Test system behavior with invalid inputs and failures **Steps:** 1. **Invalid Application Data** - [ ] Submit empty company name (should show validation error) - [ ] Submit job description under 50 characters (should fail) - [ ] Submit invalid URL format (should fail or ignore) 2. **Network/API Failures** - [ ] Temporarily block Claude API access (mock network failure) - [ ] Verify user gets meaningful error message - [ ] Verify system doesn't crash or corrupt data 3. **Authentication Failures** - [ ] Try accessing API without token (should get 401) - [ ] Try with expired token (should redirect to login) - [ ] Try with malformed token (should get error) --- ## ๐Ÿ“Š Test Coverage Goals ### MVP Coverage Targets - **Backend Services:** 80%+ line coverage - **API Endpoints:** 100% endpoint coverage (at least smoke tests) - **Database Models:** 90%+ coverage of business logic - **Critical Paths:** 100% coverage of main user workflows - **Error Handling:** 70%+ coverage of error scenarios ### Coverage Exclusions (MVP) - Frontend components (manual testing only) - External API integrations (mocked) - Database migration scripts - Development utilities - Logging and monitoring code --- ## ๐Ÿš€ Testing Commands ### Running Tests ```bash # Run all tests pytest # Run with coverage report pytest --cov=src --cov-report=html # Run specific test file pytest tests/unit/services/test_auth_service.py # Run tests with specific marker pytest -m "not slow" # Run integration tests only pytest tests/integration/ # Verbose output for debugging pytest -v -s tests/unit/services/test_auth_service.py::TestAuthenticationService::test_register_user_success ``` ### Test Database Setup ```bash # Reset test database docker-compose exec postgres psql -U jobforge_user -d jobforge_mvp_test -c "DROP SCHEMA public CASCADE; CREATE SCHEMA public;" # Run database init for tests docker-compose exec postgres psql -U jobforge_user -d jobforge_mvp_test -f /docker-entrypoint-initdb.d/init.sql ``` --- ## ๐Ÿ› Testing Best Practices ### DO's - โœ… **Test business logic thoroughly** - Focus on services and agents - โœ… **Mock external dependencies** - Claude API, OpenAI, web scraping - โœ… **Test user data isolation** - Critical for multi-tenant security - โœ… **Use descriptive test names** - Should explain what is being tested - โœ… **Test error conditions** - Not just happy paths - โœ… **Clean up test data** - Use fixtures and database transactions ### DON'Ts - โŒ **Don't test external APIs directly** - Too unreliable for CI/CD - โŒ **Don't ignore database constraints** - Test them explicitly - โŒ **Don't hardcode test data** - Use factories and fixtures - โŒ **Don't skip cleanup** - Polluted test data affects other tests - โŒ **Don't test implementation details** - Test behavior, not internals ### Test Organization ```python # Good test structure class TestApplicationService: """Test class for ApplicationService business logic.""" def test_create_application_with_valid_data_returns_application(self): """Should create and return application when given valid data.""" # Arrange # Act # Assert def test_create_application_with_duplicate_name_raises_error(self): """Should raise DuplicateNameError when application name already exists.""" # Arrange # Act # Assert ``` --- ## ๐Ÿ“ˆ Testing Metrics ### Key Testing Metrics - **Test Execution Time:** Target < 30 seconds for full suite - **Test Reliability:** 95%+ pass rate on repeated runs - **Code Coverage:** 80%+ overall, 90%+ for critical paths - **Bug Detection:** Tests should catch regressions before deployment ### Performance Testing (Basic) ```python # Basic performance test example @pytest.mark.asyncio async def test_application_creation_performance(): """Application creation should complete within 2 seconds.""" start_time = time.time() # Create application result = await application_service.create_application(test_data) execution_time = time.time() - start_time assert execution_time < 2.0, f"Application creation took {execution_time:.2f}s" ``` --- *This testing strategy provides comprehensive coverage for the MVP while remaining practical and maintainable. Focus on backend testing for Phase 1, with more sophisticated frontend testing to be added in Phase 2.*