# Metrics and Telemetry Guide

## Overview
The wikijs-python-sdk includes built-in metrics collection for monitoring performance and reliability.
## Basic Usage
```python
from wikijs import WikiJSClient

# Create client with metrics enabled (default)
client = WikiJSClient(
    "https://wiki.example.com",
    auth="your-api-key",
    enable_metrics=True,
)

# Perform operations
pages = client.pages.list()
page = client.pages.get(123)

# Get metrics
metrics = client.get_metrics()
print(f"Total requests: {metrics['total_requests']}")
print(f"Error rate: {metrics['error_rate']:.2f}%")
print(f"Avg latency: {metrics['latency']['avg']:.2f}ms")
print(f"P95 latency: {metrics['latency']['p95']:.2f}ms")
```
## Available Metrics

### Counters
- `total_requests`: Total API requests made
- `total_errors`: Total errors (4xx and 5xx responses)
- `total_server_errors`: Server errors only (5xx responses)
### Latency Statistics
All latency statistics are reported in milliseconds:

- `min`: Minimum request duration
- `max`: Maximum request duration
- `avg`: Average request duration
- `p50`: 50th percentile (median)
- `p95`: 95th percentile
- `p99`: 99th percentile
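Percentiles like these are commonly computed by sorting the recorded request durations and taking the value at the matching rank. A minimal nearest-rank sketch for illustration (the SDK's internal implementation may differ):

```python
def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile of a list of durations (ms)."""
    if not samples:
        return 0.0
    ordered = sorted(samples)
    # Map pct (0-100) to an index into the sorted samples
    index = min(len(ordered) - 1, int(len(ordered) * pct / 100))
    return ordered[index]

durations = [45.2, 98.5, 127.3, 312.7, 523.8]
print(percentile(durations, 95))  # -> 523.8
```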
### Error Rate
- Percentage of failed requests, computed as `total_errors / total_requests × 100` (in the example below, 3 errors over 150 requests gives 2.0%)
## Example Output
```json
{
  "total_requests": 150,
  "total_errors": 3,
  "error_rate": 2.0,
  "latency": {
    "min": 45.2,
    "max": 523.8,
    "avg": 127.3,
    "p50": 98.5,
    "p95": 312.7,
    "p99": 487.2
  },
  "counters": {
    "total_requests": 150,
    "total_errors": 3,
    "total_server_errors": 1
  },
  "gauges": {}
}
```
## Advanced Usage

### Custom Metrics
```python
from wikijs.metrics import get_metrics

metrics = get_metrics()

# Increment a custom counter
metrics.increment("custom_operation_count", 5)

# Set a gauge value
metrics.set_gauge("cache_hit_rate", 87.5)

# Get statistics
stats = metrics.get_stats()
```
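Assuming custom counters and gauges surface under the `counters` and `gauges` keys shown in Example Output above, they can be read back from the stats dict:

```python
print(stats["counters"]["custom_operation_count"])  # 5
print(stats["gauges"]["cache_hit_rate"])            # 87.5
```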
### Reset Metrics
```python
from wikijs.metrics import get_metrics

metrics = get_metrics()
metrics.reset()
```
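A common pattern is to snapshot, export, and reset on a fixed interval so each window's statistics stand alone. A minimal sketch, where `export_snapshot` is a hypothetical hand-off to your monitoring pipeline:

```python
import threading

from wikijs.metrics import get_metrics

def export_snapshot(stats: dict) -> None:
    # Placeholder: replace with a push to your monitoring backend
    print(stats)

def export_and_reset(interval_seconds: float = 60.0) -> None:
    """Snapshot the current window, hand it off, then start a fresh one."""
    metrics = get_metrics()
    export_snapshot(metrics.get_stats())
    metrics.reset()
    # Re-arm the timer for the next window
    threading.Timer(interval_seconds, export_and_reset, args=(interval_seconds,)).start()
```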
## Monitoring Integration
Metrics can be exported to monitoring systems:
```python
import json

# Export metrics as JSON
metrics_json = json.dumps(client.get_metrics())

# Send to a monitoring service
# send_to_datadog(metrics_json)
# send_to_prometheus(metrics_json)
```
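As one concrete route, the nested dict returned by `get_metrics()` can be flattened into Prometheus' text exposition format. A minimal sketch, assuming the dict shape shown in Example Output; the `wikijs_` metric names are illustrative, not part of the SDK:

```python
def to_prometheus_text(stats: dict) -> str:
    """Flatten the metrics dict into Prometheus text exposition lines."""
    lines = []
    for name, value in stats.get("counters", {}).items():
        lines.append(f"wikijs_{name} {value}")
    for name, value in stats.get("gauges", {}).items():
        lines.append(f"wikijs_{name} {value}")
    for stat, value in stats.get("latency", {}).items():
        lines.append(f'wikijs_latency_ms{{stat="{stat}"}} {value}')
    return "\n".join(lines) + "\n"

print(to_prometheus_text(client.get_metrics()))
```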
## Best Practices
- Monitor error rates regularly
- Set up alerts for high latency (e.g., p95 above a threshold; see the sketch after this list)
- Track trends over time
- Reset metrics periodically
- Export to external monitoring systems
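For the alerting item above, a minimal threshold check, assuming `notify` is a hypothetical hook into your alerting system and 500 ms is an illustrative threshold:

```python
ALERT_P95_MS = 500.0  # illustrative threshold; tune for your deployment

def notify(message: str) -> None:
    # Placeholder: replace with a call to your alerting system
    print(message)

stats = client.get_metrics()
if stats["latency"]["p95"] > ALERT_P95_MS:
    notify(f"p95 latency {stats['latency']['p95']:.1f}ms exceeds {ALERT_P95_MS}ms")
```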