
# Test Runner Quick Reference

## 🚀 Quick Start

```bash
# Set up the environment (run once)
./scripts/setup_test_env.sh

# Activate the virtual environment
source venv/bin/activate

# Run the full test suite with coverage
./run_tests.sh run-all --coverage

# Run only the fast unit tests
./run_tests.sh run-unit

# Generate an HTML coverage report
./run_tests.sh run-coverage
```

## 🎯 Common Commands

| Command | Description | Usage |
|---------|-------------|-------|
| `run-all` | Complete test suite | `./run_tests.sh run-all --parallel` |
| `run-unit` | Fast unit tests only | `./run_tests.sh run-unit --fail-fast` |
| `run-integration` | Integration tests | `./run_tests.sh run-integration` |
| `run-coverage` | Coverage analysis | `./run_tests.sh run-coverage --html` |
| `run-frontend` | Frontend tests | `./run_tests.sh run-frontend` |
| `discover` | List available tests | `./run_tests.sh discover --verbose` |
| `validate` | Check environment | `./run_tests.sh validate` |

## 📊 Test Categories

- **Unit Tests**: fast, isolated, no external dependencies
- **Integration Tests**: database, API, and external-service tests
- **API Tests**: FastAPI endpoint testing
- **Frontend Tests**: React component and hook tests
- **Performance Tests**: load and performance validation
- **E2E Tests**: end-to-end user workflows

## 📈 Report Formats

- **HTML**: interactive reports with charts (`--reports html`)
- **JSON**: machine-readable output for CI/CD (`--reports json`)
- **JUnit**: standard XML for CI systems (`--reports junit`)
- **Markdown**: human-readable docs (`--reports markdown`)
- **CSV**: data export for analysis (`--reports csv`)

## 🛠️ Advanced Usage

```bash
# Parallel execution with a specific worker count
./run_tests.sh run-all --parallel --workers 4

# Filter tests by name pattern
./run_tests.sh run-all --pattern "test_auth*"

# Run specific categories
./run_tests.sh run-all --category unit,api

# Enforce a minimum coverage threshold
./run_tests.sh run-coverage --min-coverage 85

# Emit multiple report formats at once
./run_tests.sh run-all --reports html,json,junit
```

## 🎯 Test Markers

Use pytest markers to categorize and filter tests:

```python
@pytest.mark.unit           # Fast unit test
@pytest.mark.integration    # Integration test
@pytest.mark.slow           # Slow test (>5 seconds)
@pytest.mark.auth           # Authentication test
@pytest.mark.database       # Database-dependent test
@pytest.mark.asyncio        # Async test
```
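As a sketch, two hypothetical tests using these markers (the test names and bodies are illustrative, not part of the actual suite):

```python
import pytest

# Hypothetical examples -- names and assertions are illustrative only.

@pytest.mark.unit
def test_slugify():
    # Fast and isolated: pure string manipulation, no I/O.
    assert "My Video Title".lower().replace(" ", "-") == "my-video-title"

@pytest.mark.integration
@pytest.mark.slow
def test_full_pipeline():
    # Would exercise the database and external services end to end.
    ...
```

Marked tests can then be selected with pytest's marker expressions, e.g. `pytest -m "unit and not slow"`, or via the runner's `--markers` flag.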

## 📁 File Structure

```text
test_reports/           # Generated reports
├── coverage_html/      # HTML coverage reports
├── junit.xml           # JUnit XML reports
├── test_report.json    # JSON reports
└── test_report.html    # Interactive HTML reports

backend/test_runner/    # Test runner source
├── cli.py              # Command-line interface
├── core/               # Core runner components
├── config/             # Configuration management
└── utils/              # Utilities and helpers

backend/tests/          # Test files
├── unit/               # Unit tests
├── integration/        # Integration tests
└── fixtures/           # Test data and mocks
```

## 🔧 Configuration

### Environment Variables

```bash
DATABASE_URL=sqlite:///:memory:
TESTING=true
MOCK_EXTERNAL_APIS=true
TEST_TIMEOUT=300
```
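These variables can be exported into the current shell before invoking the runner; a minimal POSIX-shell sketch of loading them from a dotenv-style file (the temp file exists only to keep the example self-contained — in practice this would be `.env.test`):

```shell
# Write a throwaway env file just for this demo.
cat > /tmp/demo.env.test <<'EOF'
DATABASE_URL=sqlite:///:memory:
TESTING=true
EOF

set -a                   # auto-export every variable assigned while sourcing
. /tmp/demo.env.test     # load the test settings
set +a

echo "$DATABASE_URL"
```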

### Configuration Files

- `pytest.ini` - pytest configuration and markers
- `.coveragerc` - coverage settings and exclusions
- `.env.test` - test environment variables
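A hypothetical `pytest.ini` registering the markers listed above (the exact contents are an assumption — adjust to the project's real file; `asyncio` is omitted because the pytest-asyncio plugin registers that marker itself):

```ini
[pytest]
testpaths = backend/tests
markers =
    unit: fast, isolated unit test
    integration: integration test (database, API, external services)
    slow: slow test (>5 seconds)
    auth: authentication test
    database: database-dependent test
```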

## Performance Tips

1. Use `--parallel` for faster execution
2. Run unit tests first with `run-unit --fail-fast`
3. Filter tests with `--pattern` or `--category`
4. Skip slow tests with `--markers "not slow"`
5. Use an in-memory database (`sqlite:///:memory:`) for speed

## 🐛 Troubleshooting

| Issue | Solution |
|-------|----------|
| Tests not found | Run `./run_tests.sh discover --verbose` |
| Environment errors | Run `./run_tests.sh validate` |
| Slow execution | Use `--parallel` or `--workers 2` |
| Import errors | Check `PYTHONPATH` and the virtual environment |
| Database locked | Use `sqlite:///:memory:` or remove stale lock files |
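As a sketch of the kinds of checks a command like `validate` might perform (the function and its checks are hypothetical, not the runner's actual logic):

```python
import importlib.util
import os
import sys

def validate_environment(required=("pytest", "coverage"), min_python=(3, 9)):
    """Hypothetical environment check: returns a list of problems found."""
    problems = []
    if sys.version_info < min_python:
        problems.append("Python %d.%d+ required" % min_python)
    for mod in required:
        # find_spec returns None when the package is not importable.
        if importlib.util.find_spec(mod) is None:
            problems.append("missing package: " + mod)
    if not os.environ.get("VIRTUAL_ENV"):
        problems.append("virtual environment not activated")
    return problems
```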

## 📋 CI/CD Integration

```yaml
# Example GitHub Actions step
- name: Run Tests
  run: |
    source venv/bin/activate
    python3 -m backend.test_runner run-all \
      --reports junit,json \
      --coverage --min-coverage 80 \
      --parallel
```

**Need help?** See the complete guide at `docs/TEST_RUNNER_GUIDE.md`.