# Testing Guide - Wiki.js MCP Server

This document provides comprehensive testing instructions for the Wiki.js MCP Server.

## Test Suite Overview
The test suite includes:
- 18 unit tests with mocks (fast, no external dependencies)
- Integration tests with real Wiki.js instance (requires live Wiki.js)
- Configuration validation tests
- Mode detection tests
- GraphQL client tests
- Error handling tests
## Prerequisites

### For Unit Tests (Mocked)
- Python 3.10+
- Virtual environment with dependencies installed
- No external services required
### For Integration Tests
- Everything from unit tests, plus:
- Running Wiki.js instance
- Valid API token with permissions
- System configuration file (`~/.config/claude/wikijs.env`)
## Quick Start

### Run All Unit Tests

```bash
cd mcp-servers/wikijs
source .venv/bin/activate
pytest -v
```
**Expected Output:**

```
==================== test session starts ====================
tests/test_config.py::test_load_system_config PASSED [  5%]
tests/test_config.py::test_project_config_override PASSED [ 11%]
...
==================== 18 passed in 0.40s ====================
```
### Run Integration Tests

```bash
# Set up system configuration first
mkdir -p ~/.config/claude
cat > ~/.config/claude/wikijs.env << 'EOF'
WIKIJS_API_URL=http://wikijs.hotport/graphql
WIKIJS_API_TOKEN=your_real_token_here
WIKIJS_BASE_PATH=/hyper-hive-labs
EOF

# Run integration tests
pytest -v -m integration
```
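The `integration` marker must be registered so pytest does not warn about an unknown marker. One way to do that (a sketch; the project may already register it elsewhere) is in `tests/conftest.py`:

```python
# tests/conftest.py (sketch) -- register the custom "integration" marker
# so `pytest -m integration` selects live-Wiki.js tests without warnings.
def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "integration: tests that require a live Wiki.js instance",
    )
```

Unit-test-only runs can then exclude them with `pytest -m "not integration"`.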
## Test Categories

### 1. Configuration Tests (test_config.py)

Tests the hybrid configuration system and mode detection.

**Tests:**

- `test_load_system_config`: System-level config loading
- `test_project_config_override`: Project overrides system
- `test_missing_system_config`: Error when config missing
- `test_missing_required_config`: Validation of required vars
- `test_mode_detection_project`: Project mode detection
- `test_mode_detection_company`: Company mode detection

**Run:**

```bash
pytest tests/test_config.py -v
```
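As an illustration of how the mode-detection tests work, here is a self-contained sketch using pytest's `monkeypatch` fixture. The `detect_mode` helper below is an illustrative stand-in; the real logic lives in `config.py`:

```python
import os


def detect_mode():
    # Illustrative stand-in for the loader in config.py:
    # a WIKIJS_PROJECT variable switches the server into project mode.
    return "project" if os.environ.get("WIKIJS_PROJECT") else "company"


def test_mode_detection_project(monkeypatch):
    monkeypatch.setenv("WIKIJS_PROJECT", "projects/test-project")
    assert detect_mode() == "project"


def test_mode_detection_company(monkeypatch):
    # delenv guards against environment pollution from earlier tests
    monkeypatch.delenv("WIKIJS_PROJECT", raising=False)
    assert detect_mode() == "company"
```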
### 2. Wiki.js Client Tests (test_wikijs_client.py)

Tests the GraphQL client and all Wiki.js operations.

**Tests:**

- `test_client_initialization`: Client setup
- `test_company_mode_initialization`: Company mode setup
- `test_get_full_path_project_mode`: Path construction (project)
- `test_get_full_path_company_mode`: Path construction (company)
- `test_search_pages`: Page search
- `test_get_page`: Single page retrieval
- `test_create_page`: Page creation
- `test_update_page`: Page updates
- `test_list_pages`: List pages with filtering
- `test_create_lesson`: Lessons learned creation
- `test_search_lessons`: Lesson search
- `test_graphql_error_handling`: Error handling

**Run:**

```bash
pytest tests/test_wikijs_client.py -v
```
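The client tests stay off the network by mocking the GraphQL execution layer. A minimal sketch of the pattern, using a stub client whose shapes are illustrative (the real client's query strings and response handling differ):

```python
import asyncio
from unittest.mock import AsyncMock


class StubWikiJSClient:
    """Illustrative stand-in for the real client; only the transport is mocked."""

    def __init__(self, execute):
        self._execute_query = execute

    async def search_pages(self, query):
        data = await self._execute_query(
            "query ($q: String!) { pages { search(query: $q) { results { title } } } }",
            {"q": query},
        )
        return data["pages"]["search"]["results"]


async def demo():
    # AsyncMock replaces the entire HTTP/GraphQL round trip.
    execute = AsyncMock(
        return_value={"pages": {"search": {"results": [{"title": "Test Page"}]}}}
    )
    client = StubWikiJSClient(execute)
    results = await client.search_pages("test")
    assert results == [{"title": "Test Page"}]
    execute.assert_awaited_once()


asyncio.run(demo())
```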
## Integration Testing

### Setup Integration Environment

**Step 1: Configure Wiki.js**

Create a test namespace in Wiki.js:

```
/test-integration/
├── projects/
│   └── test-project/
│       ├── documentation/
│       └── lessons-learned/
└── shared/
```
**Step 2: Configure System**

```bash
cat > ~/.config/claude/wikijs.env << 'EOF'
WIKIJS_API_URL=http://wikijs.hotport/graphql
WIKIJS_API_TOKEN=your_token_here
WIKIJS_BASE_PATH=/test-integration
EOF
```

**Step 3: Configure Project**

```bash
# In test directory
cat > .env << 'EOF'
WIKIJS_PROJECT=projects/test-project
EOF
```
### Run Integration Tests

```bash
# Run all integration tests
pytest -v -m integration

# Run specific integration test
pytest tests/test_wikijs_client.py::test_create_page -v -m integration
```
### Integration Test Scenarios

**Scenario 1: Page Lifecycle**

- Create page with `create_page`
- Retrieve with `get_page`
- Update with `update_page`
- Search for page with `search_pages`
- Cleanup (manual via Wiki.js UI)

**Scenario 2: Lessons Learned Workflow**

- Create lesson with `create_lesson`
- Search lessons with `search_lessons`
- Add tags with `tag_lesson`
- Verify searchability

**Scenario 3: Mode Detection**

- Test in project mode (with `WIKIJS_PROJECT`)
- Test in company mode (without `WIKIJS_PROJECT`)
- Verify path scoping
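The path scoping being verified can be illustrated with a small helper. This is a reconstruction from the expected results shown elsewhere in this guide, not the actual path-construction code in `wikijs_client.py`:

```python
def full_path(base_path, page_path, project=None):
    # Project mode prefixes base + project; company mode uses the base only.
    parts = [base_path.strip("/")]
    if project:
        parts.append(project.strip("/"))
    parts.append(page_path.strip("/"))
    return "/" + "/".join(parts)


# Project mode (WIKIJS_PROJECT=projects/test-project):
print(full_path("/hyper-hive-labs", "documentation/test-api", "projects/test-project"))
# -> /hyper-hive-labs/projects/test-project/documentation/test-api

# Company mode (no WIKIJS_PROJECT):
print(full_path("/hyper-hive-labs", "documentation/test-api"))
# -> /hyper-hive-labs/documentation/test-api
```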
## Manual Testing

### Test 1: Create and Retrieve Page

```bash
# Start MCP server
python -m mcp_server.server

# In another terminal, send MCP request
echo '{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "create_page",
    "arguments": {
      "path": "documentation/test-api",
      "title": "Test API Documentation",
      "content": "# Test API\\n\\nThis is a test page.",
      "tags": "api,testing",
      "publish": true
    }
  }
}' | python -m mcp_server.server
```
**Expected Result:**

```json
{
  "success": true,
  "page": {
    "id": 123,
    "path": "/hyper-hive-labs/projects/test-project/documentation/test-api",
    "title": "Test API Documentation"
  }
}
```
### Test 2: Search Lessons

```bash
echo '{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "search_lessons",
    "arguments": {
      "query": "validation",
      "tags": "testing,claude-code",
      "limit": 10
    }
  }
}' | python -m mcp_server.server
```

**Expected Result:**

```json
{
  "success": true,
  "count": 2,
  "lessons": [...]
}
```
### Test 3: Mode Detection

**Project Mode:**

```bash
# Create .env with WIKIJS_PROJECT
echo "WIKIJS_PROJECT=projects/test-project" > .env

# Start server and check logs
python -m mcp_server.server 2>&1 | grep "mode"
```

**Expected Log:**

```
INFO:Running in project mode: projects/test-project
```

**Company Mode:**

```bash
# Remove .env
rm .env

# Start server and check logs
python -m mcp_server.server 2>&1 | grep "mode"
```

**Expected Log:**

```
INFO:Running in company-wide mode (PMO)
```
## Test Data Management

### Cleanup Test Data

After integration tests, clean up test pages in Wiki.js.

Via the Wiki.js UI:

1. Navigate to /test-integration/
2. Select test pages
3. Delete

Or via GraphQL (advanced):

```bash
curl -X POST http://wikijs.hotport/graphql \
  -H "Authorization: Bearer $WIKIJS_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "mutation { pages { delete(id: 123) { responseResult { succeeded } } } }"
  }'
```
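For scripted cleanup, a parameterized version of the same mutation is safer than interpolating page IDs into the query string. A sketch that builds the request payload (the mutation shape mirrors the curl example above):

```python
import json


def build_delete_payload(page_id: int) -> str:
    # GraphQL variables avoid string interpolation of the page ID.
    query = (
        "mutation ($id: Int!) { pages { delete(id: $id) "
        "{ responseResult { succeeded } } } }"
    )
    return json.dumps({"query": query, "variables": {"id": page_id}})


print(build_delete_payload(123))
```

The resulting string can be POSTed to the `/graphql` endpoint with the same `Authorization` header as the curl example.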
### Test Data Fixtures

For repeatable testing, create fixtures:

```python
# tests/conftest.py
import pytest


@pytest.fixture
async def test_page():
    """Create a test page and clean up after."""
    client = WikiJSClient(...)
    page = await client.create_page(
        path="test/fixture-page",
        title="Test Fixture",
        content="# Test",
    )
    yield page
    # Cleanup after test
    await client.delete_page(page["id"])
```

Note: async fixtures like this require the `pytest-asyncio` plugin.
## Continuous Integration

### GitHub Actions / Gitea Actions

```yaml
name: Test Wiki.js MCP Server

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'

      - name: Install dependencies
        working-directory: mcp-servers/wikijs
        run: |
          python -m venv .venv
          source .venv/bin/activate
          pip install -r requirements.txt

      - name: Run unit tests
        working-directory: mcp-servers/wikijs
        run: |
          source .venv/bin/activate
          pytest -v

      # Integration tests (optional, requires Wiki.js instance)
      - name: Run integration tests
        if: ${{ secrets.WIKIJS_API_TOKEN != '' }}
        working-directory: mcp-servers/wikijs
        env:
          WIKIJS_API_URL: ${{ secrets.WIKIJS_API_URL }}
          WIKIJS_API_TOKEN: ${{ secrets.WIKIJS_API_TOKEN }}
          WIKIJS_BASE_PATH: /test-integration
        run: |
          source .venv/bin/activate
          pytest -v -m integration
```
## Debugging Tests

### Enable Verbose Logging

```bash
# Show DEBUG-level logs during the run
pytest -v -s --log-cli-level=DEBUG
```

### Run Single Test with Debugging

```bash
# Run specific test with print statements visible
pytest tests/test_config.py::test_load_system_config -v -s

# Use pytest debugger
pytest tests/test_config.py::test_load_system_config --pdb
```
### Inspect GraphQL Queries

Add logging to see the actual GraphQL queries:

```python
# In wikijs_client.py
async def _execute_query(self, query: str, variables: Optional[Dict[str, Any]] = None):
    logger.info(f"GraphQL Query: {query}")
    logger.info(f"Variables: {variables}")
    # ... rest of method
```
## Test Coverage

### Generate Coverage Report

```bash
pip install pytest-cov

# Run with coverage
pytest --cov=mcp_server --cov-report=html

# Open report
open htmlcov/index.html
```

**Target Coverage:** 90%+ for all modules
## Performance Testing

### Benchmark GraphQL Operations

```python
import asyncio
import time


async def benchmark_search():
    client = WikiJSClient(...)
    start = time.perf_counter()  # monotonic clock, better suited to timing
    results = await client.search_pages("test")
    elapsed = time.perf_counter() - start
    print(f"Search took {elapsed:.3f}s ({len(results)} results)")


asyncio.run(benchmark_search())
```
**Expected Performance:**
- Search: < 500ms
- Get page: < 200ms
- Create page: < 1s
- Update page: < 500ms
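These budgets can be turned into assertions. A tiny hypothetical helper (`assert_within` is not part of the project) that times a coroutine against a budget:

```python
import asyncio
import time


async def assert_within(budget_s, coro):
    # Await the coroutine and fail if it exceeds its latency budget.
    start = time.perf_counter()
    result = await coro
    elapsed = time.perf_counter() - start
    assert elapsed < budget_s, f"took {elapsed:.3f}s, budget {budget_s:.3f}s"
    return result


async def demo():
    async def fake_search():
        await asyncio.sleep(0.01)  # stands in for a real search_pages call
        return ["page"]

    # e.g. the 500 ms search budget above
    results = await assert_within(0.5, fake_search())
    assert results == ["page"]


asyncio.run(demo())
```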
## Common Test Failures

### 1. Configuration Not Found

**Error:**

```
FileNotFoundError: System config not found: ~/.config/claude/wikijs.env
```

**Solution:**

```bash
mkdir -p ~/.config/claude
cat > ~/.config/claude/wikijs.env << 'EOF'
WIKIJS_API_URL=http://wikijs.hotport/graphql
WIKIJS_API_TOKEN=test_token
WIKIJS_BASE_PATH=/test
EOF
```
### 2. GraphQL Connection Error

**Error:**

```
httpx.ConnectError: Connection refused
```

**Solution:**

- Verify Wiki.js is running
- Check that `WIKIJS_API_URL` is correct
- Ensure the `/graphql` endpoint is accessible
### 3. Permission Denied

**Error:**

```
ValueError: Failed to create page: Permission denied
```

**Solution:**

- Regenerate API token with write permissions
- Check Wiki.js user/group permissions
- Verify base path exists and is accessible
### 4. Environment Variable Pollution

**Error:**

```
AssertionError: assert 'project' == 'company'
```

**Solution:**

```python
# In test, clear environment
monkeypatch.delenv('WIKIJS_PROJECT', raising=False)
```
## Best Practices

- **Isolate Tests:** Each test should be independent
- **Mock External Calls:** Use mocks for unit tests
- **Clean Up Resources:** Delete test pages after integration tests
- **Use Fixtures:** Reuse common setup/teardown
- **Test Error Cases:** Not just happy paths
- **Document Assumptions:** Comment what tests expect
- **Consistent Naming:** Follow the `test_<what>_<scenario>` pattern
## Next Steps
After testing passes:
- Review code coverage report
- Add integration tests for edge cases
- Document any new test scenarios
- Update CI/CD pipeline
- Create test data fixtures for common scenarios
## Support

For testing issues:

- Check test logs: `pytest -v -s`
- Review Wiki.js logs
- Verify configuration files
- See the main README.md troubleshooting section