# Test Runner Guide

## Overview

The YouTube Summarizer Test Runner is a comprehensive testing framework that provides intelligent test discovery, parallel execution, multiple report formats, and advanced features for maintaining high code quality.

## Quick Start

### Setup

```bash
# Run the setup script (from project root)
./scripts/setup_test_env.sh

# Activate virtual environment
source venv/bin/activate

# Run all tests
./run_tests.sh run-all
```

### Basic Usage

```bash
# Run all tests with coverage
python3 -m backend.test_runner run-all --coverage

# Run only unit tests
python3 -m backend.test_runner run-unit

# Run integration tests
python3 -m backend.test_runner run-integration

# Run tests in parallel
python3 -m backend.test_runner run-all --parallel

# Generate HTML coverage report
python3 -m backend.test_runner run-coverage
```
## Test Runner Architecture

### Core Components

1. **TestRunner** (`backend/test_runner/core/test_runner.py`)
   - Main orchestration engine
   - Supports multiple execution modes
   - Handles parallel processing and result aggregation

2. **TestDiscovery** (`backend/test_runner/core/test_discovery.py`)
   - Intelligent test categorization
   - Dependency analysis
   - Pattern matching for test files

3. **TestExecution** (`backend/test_runner/core/test_execution.py`)
   - pytest and frontend test execution
   - Environment management
   - Real-time progress monitoring

4. **ReportGenerator** (`backend/test_runner/core/reporting.py`)
   - Multiple report formats (HTML, JSON, JUnit, Markdown, CSV)
   - Historical tracking and trend analysis
   - Quality metrics and coverage analysis

5. **Configuration** (`backend/test_runner/config/test_config.py`)
   - Centralized configuration management
   - Environment-specific settings
   - Validation and defaults
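
Taken together, a typical run flows discovery → execution → reporting. The sketch below makes that pipeline concrete; note that the `discover()` and `execute()` method names are hypothetical illustrations, not the modules' documented API:

```python
# Heavily simplified composition sketch; discover() and execute() are
# hypothetical method names used only to illustrate the pipeline.
from backend.test_runner.core.test_discovery import TestDiscovery
from backend.test_runner.core.test_execution import TestExecution
from backend.test_runner.core.reporting import ReportGenerator

async def run_pipeline(config):
    tests = TestDiscovery(config).discover()             # find and categorize tests
    result = await TestExecution(config).execute(tests)  # run via pytest / npm
    await ReportGenerator(config).generate_report(result=result)
    return result
```
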
## Test Categories

The test runner automatically categorizes tests based on file patterns and markers:

### Backend Tests

- **Unit Tests** (`backend/tests/unit/`): Fast, isolated tests with no external dependencies
- **Integration Tests** (`backend/tests/integration/`): Tests with database or API dependencies
- **API Tests**: FastAPI endpoint testing with TestClient
- **Performance Tests**: Load and performance validation
- **Database Tests**: Database-specific functionality

### Frontend Tests

- **Component Tests** (`frontend/src/test/`): React component unit tests
- **Hook Tests**: Custom React hook testing
- **Integration Tests**: Component integration scenarios
- **E2E Tests**: End-to-end user workflows
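
To illustrate what path-based categorization amounts to, here is a minimal sketch; the prefixes mirror the directories above, but the helper itself is hypothetical and the real `TestDiscovery` logic may differ:

```python
from pathlib import Path

# Hypothetical path-prefix categorization; mirrors the directories above.
CATEGORY_PREFIXES = {
    "unit": "backend/tests/unit",
    "integration": "backend/tests/integration",
    "frontend": "frontend/src/test",
}

def categorize(test_file):
    """Map a test file to a category by its path prefix."""
    path = Path(test_file).as_posix()
    for category, prefix in CATEGORY_PREFIXES.items():
        if path.startswith(prefix):
            return category
    return "uncategorized"

assert categorize("backend/tests/unit/test_auth.py") == "unit"
```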

### Test Markers

Tests can be marked with pytest markers for targeted execution:

```python
import pytest


@pytest.mark.unit
@pytest.mark.asyncio
async def test_youtube_service():
    """Fast unit test for YouTube service."""
    pass


@pytest.mark.integration
@pytest.mark.database
def test_user_creation():
    """Integration test requiring database."""
    pass


@pytest.mark.slow
@pytest.mark.performance
def test_large_transcript_processing():
    """Performance test for large data."""
    pass
```
## Command Line Interface

### Primary Commands

#### `run-all`

Executes the complete test suite with comprehensive reporting.

```bash
python3 -m backend.test_runner run-all [options]

Options:
  --coverage          Generate coverage report
  --parallel          Run tests in parallel
  --fail-fast         Stop on first failure
  --reports FORMAT    Report formats (html,json,junit,markdown,csv)
  --verbose           Detailed output
  --timeout SECONDS   Test timeout
```

#### `run-unit`

Executes only unit tests for fast feedback.

```bash
python3 -m backend.test_runner run-unit [options]
```

#### `run-integration`

Executes integration tests with database and external service dependencies.

```bash
python3 -m backend.test_runner run-integration [options]
```

#### `run-coverage`

Specialized coverage analysis with detailed reporting.

```bash
python3 -m backend.test_runner run-coverage [options]

Options:
  --min-coverage NUM   Minimum coverage percentage (default: 80)
  --html               Generate HTML coverage report
  --xml                Generate XML coverage report for CI
```

#### `run-frontend`

Executes frontend tests using npm/Vitest.

```bash
python3 -m backend.test_runner run-frontend [options]
```

#### `discover`

Lists all discovered tests without executing them.

```bash
python3 -m backend.test_runner discover [options]

Options:
  --category CATEGORY   Filter by test category
  --pattern PATTERN     Filter by file pattern
  --verbose             Show test details
```

#### `validate`

Validates the test environment and configuration.

```bash
python3 -m backend.test_runner validate [options]
```

### Advanced Options

#### Filtering and Selection

```bash
# Run tests matching pattern
python3 -m backend.test_runner run-all --pattern "test_auth*"

# Run specific categories
python3 -m backend.test_runner run-all --category unit,api

# Run tests with specific markers
python3 -m backend.test_runner run-all --markers "unit and not slow"
```

#### Parallel Execution

```bash
# Use specific number of workers
python3 -m backend.test_runner run-all --parallel --workers 4

# Auto-detect workers based on CPU cores
python3 -m backend.test_runner run-all --parallel --workers auto
```
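
Presumably `--workers auto` sizes the pool from the machine's CPU count; a minimal sketch of that resolution (a hypothetical helper, not the runner's actual code):

```python
import os

def resolve_workers(workers):
    """Resolve a --workers value; 'auto' maps to the CPU count."""
    if workers == "auto":
        return os.cpu_count() or 1  # cpu_count() may return None
    return max(1, int(workers))
```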

#### Output Control

```bash
# Minimal output
python3 -m backend.test_runner run-all --quiet

# Detailed debugging
python3 -m backend.test_runner run-all --verbose --debug

# Custom output format
python3 -m backend.test_runner run-all --output-format json
```
## Configuration

### Configuration Files

#### `pytest.ini`

Primary pytest configuration with test discovery and markers:

```ini
[tool:pytest]
testpaths = backend/tests
python_files = test_*.py *_test.py
python_classes = Test*
python_functions = test_*

markers =
    unit: Unit tests (fast, isolated)
    integration: Integration tests (database, APIs)
    slow: Tests taking more than 5 seconds
    auth: Authentication-related tests
```

#### `.coveragerc`

Coverage configuration with exclusions and reporting:

```ini
[run]
source = backend
omit = */tests/*, */test_*, */migrations/*

[report]
exclude_lines =
    pragma: no cover
    def __repr__
    raise AssertionError

[html]
directory = test_reports/coverage_html
```

#### Test Configuration (`backend/test_runner/config/test_config.py`)

Runtime configuration with environment-specific settings:

```python
config = TestConfig(
    test_timeout=300,
    parallel_workers=4,
    coverage_threshold=80,
    fail_fast=False,
    reports_dir="test_reports",
    mock_external_apis=True
)
```

### Environment Variables

```bash
# Test database
DATABASE_URL=sqlite:///:memory:
TEST_DATABASE_URL=sqlite:///./test_data/test.db

# Test mode flags
TESTING=true
TEST_MODE=true
MOCK_EXTERNAL_APIS=true

# API keys for testing
ANTHROPIC_API_KEY=test_key
JWT_SECRET_KEY=test_secret_key

# Timeouts and limits
TEST_TIMEOUT=300
NETWORK_TIMEOUT=30
```
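
As an illustration of how such variables can feed runtime configuration (a hypothetical sketch; the real logic lives in `TestConfig.update_from_env` and its key names may differ):

```python
import os

# Hypothetical environment-variable resolution using the defaults above.
def env_overrides():
    return {
        "database_url": os.getenv("DATABASE_URL", "sqlite:///:memory:"),
        "mock_external_apis": os.getenv("MOCK_EXTERNAL_APIS", "true").lower() == "true",
        "test_timeout": int(os.getenv("TEST_TIMEOUT", "300")),
        "network_timeout": int(os.getenv("NETWORK_TIMEOUT", "30")),
    }
```
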
## Report Formats

### HTML Reports

Comprehensive interactive reports with charts and navigation:

```bash
python3 -m backend.test_runner run-all --reports html
# Output: test_reports/test_report.html
```

Features:

- Interactive test results with filtering
- Coverage visualization
- Performance metrics
- Historical trends
- Failure analysis

### JSON Reports

Machine-readable reports for CI/CD integration:

```bash
python3 -m backend.test_runner run-all --reports json
# Output: test_reports/test_report.json
```

Schema:

```json
{
  "summary": {
    "total_tests": 150,
    "passed": 145,
    "failed": 3,
    "skipped": 2,
    "duration": 45.2,
    "coverage_percentage": 87.5
  },
  "suites": [...],
  "failures": [...],
  "coverage": {...}
}
```
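
Because the `summary` block is stable, a CI step can consume the report directly; a minimal sketch using only the fields shown above:

```python
import json
import sys
from pathlib import Path

# Minimal CI gate built on the JSON report schema shown above.
report = json.loads(Path("test_reports/test_report.json").read_text())
summary = report["summary"]

if summary["failed"] > 0 or summary["coverage_percentage"] < 80.0:
    print(f"Quality gate failed: {summary}")
    sys.exit(1)
```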

### JUnit XML

Standard JUnit format for CI systems:

```bash
python3 -m backend.test_runner run-all --reports junit
# Output: test_reports/junit.xml
```

### Markdown Reports

Human-readable reports for documentation:

```bash
python3 -m backend.test_runner run-all --reports markdown
# Output: test_reports/test_report.md
```

### CSV Reports

Data export for analysis and tracking:

```bash
python3 -m backend.test_runner run-all --reports csv
# Output: test_reports/test_report.csv
```
## Advanced Features

### Smart Test Selection

The runner can intelligently select tests based on code changes:

```bash
# Run tests affected by recent changes
python3 -m backend.test_runner run-smart --since HEAD~1

# Run tests for specific files
python3 -m backend.test_runner run-smart --changed-files "backend/services/auth.py"
```
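
Conceptually, smart selection maps each changed source file to the tests that exercise it. A simplified sketch under an assumed naming convention (a hypothetical helper; the real implementation also performs dependency analysis):

```python
from pathlib import Path

def tests_for_changed_file(changed):
    """Hypothetical mapping from a changed file to candidate test files.

    Assumes backend/services/auth.py is covered by tests named
    test_auth*.py; the real runner also analyzes import dependencies.
    """
    stem = Path(changed).stem  # e.g. "auth"
    return [str(p) for p in Path("backend/tests").rglob(f"test_{stem}*.py")]
```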

### Historical Tracking

Track test performance and coverage over time:

```bash
# Generate trend report
python3 -m backend.test_runner run-trends --days 30

# Compare with previous run
python3 -m backend.test_runner run-all --compare-with previous
```
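
One straightforward way to persist such history is an append-only JSONL file of run summaries that trend reports aggregate later; a hedged sketch (the file location and fields are illustrative, not the runner's actual storage format):

```python
import json
import time
from pathlib import Path

HISTORY = Path("test_reports/history.jsonl")  # illustrative location

def record_run(summary):
    """Append one run's summary for later trend analysis."""
    entry = {"timestamp": time.time(), **summary}
    with HISTORY.open("a") as fh:
        fh.write(json.dumps(entry) + "\n")

def coverage_trend():
    """Coverage percentages across recorded runs, oldest first."""
    if not HISTORY.exists():
        return []
    return [json.loads(line)["coverage_percentage"]
            for line in HISTORY.read_text().splitlines()]
```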

### Quality Gates

Enforce quality standards with automatic failure:

```bash
# Fail if coverage below 80%
python3 -m backend.test_runner run-all --min-coverage 80

# Fail if performance regression > 20%
python3 -m backend.test_runner run-all --max-duration-increase 20
```
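
A duration gate reduces to comparing two run summaries; a minimal sketch built on the JSON report format (the file paths are illustrative):

```python
import json
from pathlib import Path

def duration_increase(prev_path, curr_path):
    """Percentage change in total duration between two JSON reports."""
    prev = json.loads(Path(prev_path).read_text())["summary"]["duration"]
    curr = json.loads(Path(curr_path).read_text())["summary"]["duration"]
    return (curr - prev) / prev * 100.0

# Fail the build on a >20% slowdown.
if duration_increase("test_reports/previous.json",
                     "test_reports/test_report.json") > 20.0:
    raise SystemExit("Performance regression exceeds 20%")
```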

### Test Environment Management

Automatic test environment setup and cleanup:

```python
# Custom environment setup
from pathlib import Path

from backend.test_runner.config.test_config import TestConfig
from backend.test_runner.utils.environment import TestEnvironmentManager

config = TestConfig.load_default(Path.cwd())
manager = TestEnvironmentManager()
manager.setup_test_environment(config)
try:
    # Run tests here
    pass
finally:
    manager.cleanup_test_environment()
```
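
Since setup and cleanup always pair, wrapping the manager in a context manager keeps call sites tidy; a small sketch reusing the imports above (the wrapper is illustrative, not part of the runner's API):

```python
from contextlib import contextmanager

@contextmanager
def test_environment(config):
    """Illustrative wrapper pairing setup with guaranteed cleanup."""
    manager = TestEnvironmentManager()
    manager.setup_test_environment(config)
    try:
        yield manager
    finally:
        manager.cleanup_test_environment()

# Usage:
# with test_environment(config):
#     ...run tests...
```
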
## Integration Workflows

### Local Development

```bash
# Quick feedback loop
python3 -m backend.test_runner run-unit --fail-fast

# Pre-commit validation
python3 -m backend.test_runner run-all --coverage --min-coverage 80

# Full validation
python3 -m backend.test_runner run-all --reports html,junit --parallel
```

### Continuous Integration

```yaml
# GitHub Actions example
- name: Run Tests
  run: |
    source venv/bin/activate
    python3 -m backend.test_runner run-all \
      --reports junit,json \
      --coverage \
      --parallel \
      --min-coverage 80
```

### Pre-commit Hooks

```yaml
# .pre-commit-config.yaml
- repo: local
  hooks:
    - id: test-runner
      name: Run unit tests
      entry: python3 -m backend.test_runner run-unit --fail-fast
      language: system
      pass_filenames: false
```
## Troubleshooting

### Common Issues

#### Tests Not Discovered

```bash
# Check test discovery
python3 -m backend.test_runner discover --verbose

# Verify test patterns
python3 -m backend.test_runner discover --pattern "test_*"
```

#### Environment Issues

```bash
# Validate environment
python3 -m backend.test_runner validate

# Check dependencies
python3 -c "from backend.test_runner.utils.environment import check_test_dependencies; print(check_test_dependencies())"
```

#### Performance Issues

```bash
# Disable parallel execution
python3 -m backend.test_runner run-all --no-parallel

# Reduce workers
python3 -m backend.test_runner run-all --workers 2

# Skip slow tests
python3 -m backend.test_runner run-all --markers "not slow"
```

#### Database Issues

```bash
# Use memory database
export DATABASE_URL="sqlite:///:memory:"

# Reset test database
rm -f test_data/test.db
```

### Debug Mode

```bash
# Enable debug logging
python3 -m backend.test_runner run-all --debug --verbose

# Show internal state
python3 -m backend.test_runner run-all --show-internals
```
## Best Practices

### Writing Tests

1. Use appropriate markers (`@pytest.mark.unit`, `@pytest.mark.integration`)
2. Follow naming conventions (`test_*.py`, `Test*` classes)
3. Mock external dependencies in unit tests
4. Use fixtures for common setup (see the sketch after this list)
5. Write descriptive test names and docstrings
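
For point 4, a fixture keeps shared setup out of individual tests; a minimal pytest sketch (the fixture name and payload are illustrative):

```python
import pytest

@pytest.fixture
def sample_transcript():
    """Illustrative shared setup: a minimal transcript payload."""
    return {"video_id": "abc123", "text": "hello world", "language": "en"}

def test_transcript_has_text(sample_transcript):
    assert sample_transcript["text"]
```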

### Test Organization

1. Group related tests in classes
2. Use separate files for different components
3. Keep test files close to source code
4. Use consistent directory structure

### Performance

1. Keep unit tests fast (< 100 ms each)
2. Use parallel execution for large test suites
3. Mock external services and APIs
4. Cache expensive setup operations

### Maintenance

1. Run tests regularly during development
2. Monitor coverage trends
3. Review and update slow tests
4. Clean up obsolete tests
## API Reference

### TestRunner Class

```python
from backend.test_runner.core.test_runner import TestRunner, TestRunnerMode

runner = TestRunner(config=config)
result = await runner.run_tests(
    mode=TestRunnerMode.ALL,
    coverage=True,
    parallel=True,
    reports=[ReportFormat.HTML, ReportFormat.JSON]
)
```

### Configuration API

```python
from backend.test_runner.config.test_config import TestConfig

config = TestConfig.load_from_file("test_config.yaml")
config.update_from_env()
config.validate()
```

### Report Generation API

```python
from backend.test_runner.core.reporting import ReportGenerator

generator = ReportGenerator(config)
report_path = await generator.generate_report(
    result=test_result,
    format_type=ReportFormat.HTML,
    output_path=Path("custom_report.html")
)
```
## Examples

### Custom Test Runner Script

```python
#!/usr/bin/env python3
"""Custom test runner script."""

import asyncio
from pathlib import Path

from backend.test_runner.core.test_runner import TestRunner, TestRunnerMode
from backend.test_runner.config.test_config import TestConfig


async def main() -> int:
    config = TestConfig.load_default(Path.cwd())
    runner = TestRunner(config)

    result = await runner.run_tests(
        mode=TestRunnerMode.UNIT,
        coverage=True,
        parallel=True,
        fail_fast=True
    )

    print(f"Tests: {result.total_tests}")
    print(f"Passed: {result.passed}")
    print(f"Failed: {result.failed}")
    print(f"Coverage: {result.coverage_percentage:.1f}%")

    return 0 if result.success else 1


if __name__ == "__main__":
    raise SystemExit(asyncio.run(main()))
```

### Integration with pytest

```python
# conftest.py
from pathlib import Path

import pytest

from backend.test_runner.config.test_config import TestConfig
from backend.test_runner.utils.environment import TestEnvironmentManager


@pytest.fixture(scope="session", autouse=True)
def setup_test_environment():
    """Set up the test environment for the entire test session."""
    manager = TestEnvironmentManager()
    config = TestConfig.load_default(Path.cwd())

    manager.setup_test_environment(config)
    yield
    manager.cleanup_test_environment()
```
## Contributing

### Adding New Features

1. Update relevant core modules
2. Add comprehensive tests
3. Update CLI interface
4. Update documentation
5. Add examples and troubleshooting

### Testing the Test Runner

```bash
# Self-test the test runner
python3 -m pytest backend/test_runner/tests/ -v

# Integration test with sample project
python3 -m backend.test_runner run-all --config test_configs/sample_project.yaml
```

This test runner provides a robust foundation for maintaining high code quality in the YouTube Summarizer project, with comprehensive testing capabilities, intelligent automation, and detailed reporting.