# Testing Instructions - YouTube Summarizer

This document provides comprehensive testing guidelines, standards, and procedures for the YouTube Summarizer project.

## Table of Contents

1. [Test Runner System](#test-runner-system)
2. [Quick Commands](#quick-commands)
3. [Testing Standards](#testing-standards)
4. [Test Structure](#test-structure)
5. [Unit Testing](#unit-testing)
6. [Integration Testing](#integration-testing)
7. [Frontend Testing](#frontend-testing)
8. [Quality Checklist](#quality-checklist)
9. [Test Runner Development Standards](#test-runner-development-standards)
10. [Troubleshooting](#troubleshooting)
11. [Advanced Features](#advanced-features)

## Test Runner System 🚀

### Overview

The YouTube Summarizer includes a production-ready test runner system with intelligent test discovery, parallel execution, and comprehensive reporting. It currently discovers **229 unit tests** across all project modules and provides ultra-fast feedback during development.

### Quick Commands

```bash
# ⚡ Fast Feedback Loop (Primary Development Workflow)
./run_tests.sh run-unit --fail-fast           # Ultra-fast unit tests (~0.2s)
./run_tests.sh run-specific "test_auth*.py"   # Test specific patterns

# 🔍 Test Discovery & Validation
./run_tests.sh list --category unit           # Show all 229 discovered tests
./scripts/validate_test_setup.py              # Validate test environment

# 📊 Comprehensive Testing
./run_tests.sh run-all --coverage --parallel  # Full suite with coverage
./run_tests.sh run-integration                # Integration & API tests
./run_tests.sh run-coverage --html            # Generate HTML coverage reports
```

### Test Categories Discovered

- **Unit Tests**: 229 tests across all service modules (auth, video, cache, AI, etc.)
- **Integration Tests**: API endpoints, database operations, external service integration
- **Pipeline Tests**: End-to-end workflow validation
- **Authentication Tests**: JWT, session management, security validation

### Test Runner Features

**🎯 Intelligent Test Discovery**

- Automatically categorizes tests by type (unit, integration, API, auth, etc.)
- Analyzes test files using AST parsing for accurate classification (see the sketch below)
- Supports pytest markers for advanced filtering

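The classification step can be pictured as a small walk over each test file's AST that collects `pytest.mark` decorators. The sketch below is illustrative only; the helper name is hypothetical and the real `TestDiscovery` component is considerably more involved:

```python
# Illustrative sketch only: not the project's actual TestDiscovery code.
import ast
from pathlib import Path


def categorize_test_file(path: Path) -> set[str]:
    """Guess categories for a test file from its pytest.mark decorators."""
    tree = ast.parse(path.read_text())
    categories: set[str] = set()
    for node in ast.walk(tree):
        if not isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            continue
        for decorator in node.decorator_list:
            # Matches bare decorators of the form @pytest.mark.<name>
            if (isinstance(decorator, ast.Attribute)
                    and isinstance(decorator.value, ast.Attribute)
                    and decorator.value.attr == "mark"):
                categories.add(decorator.attr)
    return categories or {"unit"}  # default bucket when no marker is present
```
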
**⚡ Performance Optimized**

- Ultra-fast execution: 229 tests discovered in ~0.2 seconds
- Parallel execution support with configurable workers
- Smart caching and dependency management

**📊 Multiple Report Formats**

```bash
# Generate different report formats
./run_tests.sh run-all --reports html,json,junit

# Outputs:
# - test_reports/test_report.html  (Interactive dashboard)
# - test_reports/test_report.json  (CI/CD integration)
# - test_reports/junit.xml         (Standard format)
```

**🔧 Developer Tools**

- One-time setup: `./scripts/setup_test_env.sh`
- Environment validation: `./scripts/validate_test_setup.py`
- Test runner CLI with comprehensive options
- Integration with coverage.py for detailed analysis

## Testing Standards

### Test Runner System Integration

**Production-Ready Test Runner**: The project includes a comprehensive test runner with intelligent discovery, parallel execution, and multi-format reporting.

**Current Test Coverage**: 229 unit tests discovered across all modules, including:

- Video service, authentication, and cache management tests
- AI model service, pipeline, and export service tests

```bash
# Primary Testing Commands (Use These Instead of Direct pytest)
./run_tests.sh run-unit --fail-fast           # Ultra-fast feedback (0.2s discovery)
./run_tests.sh run-all --coverage --parallel  # Complete test suite
./run_tests.sh run-specific "test_auth*.py"   # Test specific patterns
./run_tests.sh list --category unit           # View all discovered tests

# Setup and Validation
./scripts/setup_test_env.sh                   # One-time environment setup
./scripts/validate_test_setup.py              # Validate test environment
```

### Test Coverage Requirements

- Minimum 80% code coverage (see the configuration sketch below)
- 100% coverage for critical paths
- All edge cases tested
- Error conditions covered

```bash
# Run tests with coverage
./run_tests.sh run-all --coverage --html

# Coverage report should show:
# src/services/youtube.py       95%
# src/services/summarizer.py    88%
# src/api/routes.py             92%
```

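The 80% floor is typically enforced through coverage configuration. The snippet below is a hedged sketch of what the project's `.coveragerc` (referenced in the Test Runner Architecture section) might contain; the actual file may differ:

```ini
# Illustrative sketch only; the project's actual .coveragerc may differ.
[run]
source = src
branch = True

[report]
fail_under = 80
show_missing = True
exclude_lines =
    pragma: no cover
    if __name__ == .__main__.:
```
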
## Test Structure

```
tests/
├── unit/                             # 229 tests discovered
│   ├── test_youtube_service.py       # Video URL parsing, validation
│   ├── test_summarizer_service.py    # AI model integration
│   ├── test_cache_service.py         # Caching and performance
│   ├── test_auth_service.py          # Authentication and JWT
│   └── test_*_service.py             # All service modules covered
├── integration/
│   ├── test_api_endpoints.py         # FastAPI route testing
│   └── test_database.py              # Database operations
├── fixtures/
│   ├── sample_transcripts.json       # Test data
│   └── mock_responses.py             # Mock API responses
├── test_reports/                     # Generated by test runner
│   ├── test_report.html              # Interactive dashboard
│   ├── coverage.xml                  # Coverage data
│   └── junit.xml                     # CI/CD integration
└── conftest.py                       # Shared fixtures (see the sketch below)
```

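The shared `conftest.py` at the root of the tree is where fixtures reused across test files usually live. A minimal sketch, assuming fixtures along the lines of the `client` and `mock_transcript` examples shown later in this guide (the project's actual conftest may differ):

```python
# tests/conftest.py (illustrative sketch only)
import pytest
from fastapi.testclient import TestClient

from src.main import app


@pytest.fixture
def client() -> TestClient:
    """Synchronous test client shared by API tests."""
    return TestClient(app)


@pytest.fixture
def mock_transcript() -> list[dict]:
    """Small transcript payload reused across unit tests."""
    return [
        {"text": "Hello world", "start": 0.0, "duration": 2.0},
        {"text": "This is a test", "start": 2.0, "duration": 3.0},
    ]
```
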
## Unit Testing

### Unit Test Example

```python
# tests/unit/test_youtube_service.py
import pytest
from unittest.mock import Mock, patch, AsyncMock

from src.services.youtube import YouTubeService


class TestYouTubeService:
    @pytest.fixture
    def youtube_service(self):
        return YouTubeService()

    @pytest.fixture
    def mock_transcript(self):
        return [
            {"text": "Hello world", "start": 0.0, "duration": 2.0},
            {"text": "This is a test", "start": 2.0, "duration": 3.0}
        ]

    @pytest.mark.asyncio
    @pytest.mark.unit  # Test runner marker for categorization
    async def test_extract_transcript_success(
        self,
        youtube_service,
        mock_transcript
    ):
        with patch('youtube_transcript_api.YouTubeTranscriptApi.get_transcript') as mock_get:
            mock_get.return_value = mock_transcript

            result = await youtube_service.extract_transcript("test_id")

            assert result == mock_transcript
            mock_get.assert_called_once_with("test_id")

    @pytest.mark.unit
    def test_extract_video_id_various_formats(self, youtube_service):
        test_cases = [
            ("https://www.youtube.com/watch?v=abc123", "abc123"),
            ("https://youtu.be/xyz789", "xyz789"),
            ("https://youtube.com/embed/qwe456", "qwe456"),
            ("https://www.youtube.com/watch?v=test&t=123", "test")
        ]

        for url, expected_id in test_cases:
            assert youtube_service.extract_video_id(url) == expected_id
```

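For context, the URL formats exercised in `test_extract_video_id_various_formats` above could be handled by a parser along these lines. This is an illustrative sketch, not the project's actual `YouTubeService.extract_video_id` implementation:

```python
# Illustrative sketch only: one way to satisfy the URL formats tested above.
from urllib.parse import urlparse, parse_qs


def extract_video_id(url: str) -> str | None:
    parsed = urlparse(url)
    # https://youtu.be/<id>
    if parsed.hostname == "youtu.be":
        return parsed.path.lstrip("/") or None
    # https://youtube.com/embed/<id>
    if parsed.path.startswith("/embed/"):
        return parsed.path.split("/")[2] or None
    # https://www.youtube.com/watch?v=<id>(&...)
    return parse_qs(parsed.query).get("v", [None])[0]
```
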
### Test Markers for Intelligent Categorization

```python
# Test markers for intelligent categorization
@pytest.mark.unit          # Fast, isolated unit tests
@pytest.mark.integration   # Database/API integration tests
@pytest.mark.auth          # Authentication and security tests
@pytest.mark.api           # API endpoint tests
@pytest.mark.pipeline      # End-to-end pipeline tests
@pytest.mark.slow          # Tests taking >5 seconds

# Run specific categories
# ./run_tests.sh run-integration        # Runs integration + api marked tests
# ./run_tests.sh list --category unit   # Shows all unit tests
```

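Custom markers such as these are normally registered in `pytest.ini` (listed under Configuration Files later in this guide) so pytest does not warn about unknown marks. A minimal sketch of that registration, assuming the marker set above:

```ini
# Illustrative sketch only; the project's actual pytest.ini may differ.
[pytest]
markers =
    unit: fast, isolated unit tests
    integration: database/API integration tests
    auth: authentication and security tests
    api: API endpoint tests
    pipeline: end-to-end pipeline tests
    slow: tests taking more than 5 seconds
```
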
## Integration Testing

### Integration Test Example

```python
# tests/integration/test_api_endpoints.py
import pytest
from fastapi.testclient import TestClient

from src.main import app


@pytest.fixture
def client():
    return TestClient(app)


class TestSummarizationAPI:
    @pytest.mark.asyncio
    @pytest.mark.integration
    @pytest.mark.api
    async def test_summarize_endpoint(self, client):
        response = client.post("/api/summarize", json={
            "url": "https://youtube.com/watch?v=test123",
            "model": "openai",
            "options": {"max_length": 500}
        })

        assert response.status_code == 200
        data = response.json()
        assert "job_id" in data
        assert data["status"] == "processing"

    @pytest.mark.asyncio
    @pytest.mark.integration
    async def test_get_summary(self, client):
        # First create a summary
        create_response = client.post("/api/summarize", json={
            "url": "https://youtube.com/watch?v=test123"
        })
        job_id = create_response.json()["job_id"]

        # Then retrieve it
        get_response = client.get(f"/api/summary/{job_id}")
        assert get_response.status_code in [200, 202]  # 202 if still processing
```

## Frontend Testing

### Frontend Test Commands


```bash
# Frontend testing (Vitest + React Testing Library)
cd frontend && npm test
cd frontend && npm run test:coverage
```

Frontend test structure:

```
frontend/src/
├── components/
│   ├── SummarizeForm.test.tsx    # Component unit tests
│   ├── ProgressTracker.test.tsx  # React Testing Library
│   └── TranscriptViewer.test.tsx # User interaction tests
├── hooks/
│   ├── useAuth.test.ts           # Custom hooks testing
│   └── useSummary.test.ts        # State management tests
└── utils/
    └── helpers.test.ts           # Utility function tests
```

### Frontend Test Example

```typescript
// frontend/src/components/SummarizeForm.test.tsx
import { describe, it, expect, vi } from 'vitest';
import { render, screen, fireEvent, waitFor } from '@testing-library/react';
import { SummarizeForm } from './SummarizeForm';

// jest-dom matchers (e.g. toBeInTheDocument) are assumed to be registered
// in the Vitest setup file.

describe('SummarizeForm', () => {
  it('should validate YouTube URL format', async () => {
    render(<SummarizeForm onSubmit={vi.fn()} />);

    const urlInput = screen.getByPlaceholderText('Enter YouTube URL');
    const submitButton = screen.getByText('Summarize');

    // Test invalid URL
    fireEvent.change(urlInput, { target: { value: 'invalid-url' } });
    fireEvent.click(submitButton);

    await waitFor(() => {
      expect(screen.getByText('Please enter a valid YouTube URL')).toBeInTheDocument();
    });

    // Test valid URL
    fireEvent.change(urlInput, {
      target: { value: 'https://youtube.com/watch?v=test123' }
    });
    fireEvent.click(submitButton);

    await waitFor(() => {
      expect(screen.queryByText('Please enter a valid YouTube URL')).not.toBeInTheDocument();
    });
  });
});
```

## Quality Checklist

Before marking any task as complete:

- [ ] All tests pass (`./run_tests.sh run-all`)
- [ ] Code coverage > 80% (`./run_tests.sh run-all --coverage`)
- [ ] Unit tests pass with fast feedback (`./run_tests.sh run-unit --fail-fast`)
- [ ] Integration tests validated (`./run_tests.sh run-integration`)
- [ ] Frontend tests pass (`cd frontend && npm test`)
- [ ] No linting errors (`ruff check src/`)
- [ ] Type checking passes (`mypy src/`)
- [ ] Documentation updated
- [ ] Task Master updated
- [ ] Changes committed with a proper commit message

## Test Runner Development Standards

### For AI Agents

When working on this codebase:

1. **Always use the Test Runner**: Use `./run_tests.sh` commands instead of direct pytest
2. **Fast Feedback First**: Start with `./run_tests.sh run-unit --fail-fast` for rapid development
3. **Follow TDD**: Write tests before implementation and validate them with the test runner
4. **Use Markers**: Add proper pytest markers for test categorization
5. **Validate Setup**: Run `./scripts/validate_test_setup.py` when encountering issues
6. **Full Validation**: Use `./run_tests.sh run-all --coverage` before completing tasks

### Development Integration

**Story-Driven Development**

```bash
# 1. Start story implementation
cat docs/stories/2.1.single-ai-model-integration.md

# 2. Fast feedback during development
./run_tests.sh run-unit --fail-fast          # Instant validation

# 3. Test specific modules as you build
./run_tests.sh run-specific "test_anthropic*.py"

# 4. Full validation before story completion
./run_tests.sh run-all --coverage
```

**BMad Method Integration**

- Seamlessly integrates with BMad agent workflows
- Provides fast feedback for a TDD development approach
- Supports continuous validation during story implementation

### Test Runner Architecture

**Core Components**:

- **TestRunner**: Main orchestration engine (400+ lines)
- **TestDiscovery**: Intelligent test categorization (500+ lines)
- **TestExecution**: Parallel execution engine (600+ lines)
- **ReportGenerator**: Multi-format reporting (500+ lines)
- **CLI Interface**: Comprehensive command-line tool (500+ lines)

**Configuration Files**:

- `pytest.ini` - Test discovery and markers
- `.coveragerc` - Coverage configuration and exclusions
- `backend/test_runner/config/` - Test runner configuration

### Required Test Patterns

**Service Tests**

```python
@pytest.mark.unit
@pytest.mark.asyncio
async def test_service_method(self, mock_dependencies):
    # Test business logic with mocked dependencies
    pass
```

**API Tests**

```python
@pytest.mark.integration
@pytest.mark.api
def test_api_endpoint(self, client):
    # Test API endpoints with TestClient
    pass
```

**Pipeline Tests**

```python
@pytest.mark.pipeline
@pytest.mark.slow
@pytest.mark.asyncio
async def test_end_to_end_pipeline(self, full_setup):
    # Test complete workflows
    pass
```

## Troubleshooting

### Common Issues

**Tests not found**

```bash
./run_tests.sh list --verbose        # Debug test discovery
./scripts/validate_test_setup.py     # Check setup
```

**Environment issues**

```bash
./scripts/setup_test_env.sh          # Re-run setup
source venv/bin/activate             # Check virtual environment
```

**Performance issues**

```bash
./run_tests.sh run-all --no-parallel # Disable parallel execution
./run_tests.sh run-unit --fail-fast  # Use fast subset for development
```

**Import errors**

```bash
# Check PYTHONPATH and virtual environment
echo $PYTHONPATH
which python3
pip list | grep -E "(pytest|fastapi)"
```

### Quick Fixes

- **Test discovery issues** → Run the validation script
- **Import errors** → Check PYTHONPATH and the virtual environment
- **Slow execution** → Use the `--parallel` flag or filter tests with `--category`
- **Coverage gaps** → Use `--coverage --html` to identify missing areas

### Debug Commands

```bash
# Test runner debugging
./run_tests.sh run-specific "test_auth" --verbose  # Debug specific tests
./run_tests.sh list --category integration        # List tests by category
./scripts/validate_test_setup.py --verbose         # Detailed environment check

# Performance analysis
time ./run_tests.sh run-unit --fail-fast           # Measure execution time
./run_tests.sh run-all --reports json              # Generate performance data
```

## Advanced Features

### Parallel Execution

```bash
# Enable parallel testing (default: auto-detect cores)
./run_tests.sh run-all --parallel

# Specify the number of workers
./run_tests.sh run-all --parallel --workers 4

# Disable parallel execution for debugging
./run_tests.sh run-all --no-parallel
```

### Custom Test Filters

```bash
# Run tests matching a pattern
./run_tests.sh run-specific "test_youtube" --pattern "extract"

# Run tests by marker combination
./run_tests.sh run-all --markers "unit and not slow"

# Run tests for specific modules
./run_tests.sh run-specific "backend/tests/unit/test_auth_service.py"
```

### CI/CD Integration

```bash
# Generate CI-friendly outputs
./run_tests.sh run-all --coverage --reports junit,json
# Outputs: junit.xml for CI, test_report.json for analysis

# Exit code handling
./run_tests.sh run-all --fail-fast --exit-on-error
# Exits immediately on first failure for CI efficiency
```

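In CI, these commands typically run as one job step, with the JUnit and JSON reports uploaded as artifacts. The workflow below is a sketch only; the runner image, action versions, Python version, and setup step are assumptions, not taken from this project:

```yaml
# Illustrative sketch only; adjust paths and versions to match the project.
name: tests
on: [push, pull_request]

jobs:
  backend-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Set up test environment
        run: ./scripts/setup_test_env.sh
      - name: Run tests with CI-friendly reports
        run: ./run_tests.sh run-all --coverage --reports junit,json --fail-fast --exit-on-error
      - name: Upload test reports
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: test-reports
          path: test_reports/
```
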
---

*This testing guide ensures comprehensive, efficient testing across the YouTube Summarizer project. Always use the test runner system for an optimal development workflow.*