---
description: Single comprehensive agent workflow for Trax development - consolidates all essential patterns
globs: .cursor/rules/*.mdc, src/**/*, tests/**/*, scripts/**/*
alwaysApply: true
---

# Agent Workflow - Trax Development

**⚠️ SINGLE SOURCE OF TRUTH: This is the ONLY rule file agents need to read for Trax development.**

## Core Principles

- **Keep It Simple**: One file, clear patterns, no complex hierarchies
- **Context First**: Always understand what you're building before coding
- **Test First**: Write tests before implementation
- **Quality Built-In**: Enforce standards as you go, not as separate phases
- **Progressive Enhancement**: Start simple, add complexity only when needed

## Quick Decision Tree

### 1. What Type of Request?

```
User Input → Quick Categorization → Action Plan
```

- **Question/How-to**: Answer directly with code examples
- **Implementation Request**: Follow the TDD workflow below
- **Server/Command**: Execute the appropriate command
- **Analysis/Review**: Examine code and provide feedback

### 2. For Implementation Requests: Enhanced TDD Workflow

```
1. Plan (Spec-First) → 2. Understand Requirements → 3. Write Tests → 4. Implement → 5. Validate → 6. Complete
```

## Implementation Workflow

### Step 0: Plan Mode (Spec-First)

```bash
# ✅ DO: Always plan before implementing
# Enter plan mode in Claude Code (Shift+Tab twice)
# Create a detailed spec at .claude/tasks/<feature>.md
```

**Plan Should Include**:

- Requirements breakdown
- Architecture decisions
- Implementation phases
- Test strategy
- Success criteria

### Step 1: Understand Requirements

```bash
# ✅ DO: Get a clear understanding before coding
task-master show <task-id>              # Get task details
./scripts/tm_context.sh get <task-id>   # Get cached context
# Read .claude/context/session.md for current context
```

**What to Look For**:

- What exactly needs to be built?
- What are the inputs/outputs?
- What are the edge cases?
- What dependencies exist?

### Step 2: Write Tests First

```python
# ✅ DO: Write tests that define the behavior
import pytest

def test_feature_works_correctly():
    # Test the happy path
    result = feature(input_data)
    assert result == expected_output

def test_feature_handles_edge_cases():
    # Test edge cases and errors
    with pytest.raises(ValueError):
        feature(invalid_input)
```

**Test Requirements**:

- Cover all requirements
- Include edge cases
- Test error conditions
- Use real test data (no mocks unless necessary)

### Step 3: Implement Minimal Code

```python
# ✅ DO: Write minimal code to pass tests
def feature(input_data):
    # Implement just enough to make tests pass
    if not input_data:
        raise ValueError("Input cannot be empty")

    # Core logic here
    return process_data(input_data)
```

**Implementation Rules**:

- Keep files under 300 lines (350 max if justified)
- Single responsibility per file
- Use protocols/interfaces for services
- Follow existing code patterns

### Step 4: Validate Quality

```bash
# ✅ DO: Run quality checks
uv run pytest                  # All tests must pass
uv run black src/ tests/       # Format code
uv run ruff check --fix src/   # Lint and fix
./scripts/validate_loc.sh      # Check file sizes
```

**Quality Gates**:

- All tests pass
- Code is formatted
- No linting errors
- Files under LOC limits
- Follows project patterns

### Step 5: Complete and Update

```bash
# ✅ DO: Mark complete and update status
# Update the plan with results in .claude/tasks/<feature>.md
# Update .claude/context/session.md with completion
task-master set-status --id=<task-id> --status=done
./scripts/tm_cache.sh update <task-id>
./scripts/update_changelog.sh <task-id> --type=task
```

## File Organization Rules

### Keep Files Small and Focused

```python
# ✅ DO: Single responsibility per file

# transcription_service.py - Only transcription logic
class TranscriptionService:
    def transcribe_audio(self, audio_file):
        # 50-100 lines max
        pass

# audio_processor.py - Only audio processing logic
class AudioProcessor:
    def process_audio(self, audio_data):
        # 50-100 lines max
        pass
```

### Anti-Patterns

```python
# ❌ DON'T: Monolithic files with multiple responsibilities
class MassiveService:
    # Authentication + User Management + Email + File Processing
    # This becomes unmaintainable
    pass
```

## Testing Rules

### Write Tests First

```python
# ✅ DO: The test defines the interface
def test_transcription_service():
    service = TranscriptionService()
    result = service.transcribe_audio("test.wav")
    assert result.text is not None
    assert result.confidence > 0.8

# THEN implement to make the test pass
```

### Test Coverage

```python
# ✅ DO: Cover all scenarios
import pytest

def test_transcription_handles_empty_file():
    service = TranscriptionService()
    with pytest.raises(ValueError):
        service.transcribe_audio("")

def test_transcription_handles_large_file():
    service = TranscriptionService()
    result = service.transcribe_audio("large_audio.wav")
    assert result.processing_time < 30  # Performance requirement
```

## Code Quality Standards

### Python Standards

- **Python Version**: 3.11+ with type hints
- **Formatting**: Black with line length 100
- **Linting**: Ruff with auto-fix enabled
- **Type Checking**: MyPy strict mode

### File Limits

- **Target**: Under 300 lines per file
- **Maximum**: 350 lines (only with clear justification)
- **Split Strategy**: Break into focused modules

### Critical Patterns

- **Backend-First**: Get the data layer right before the UI
- **Download-First**: Never stream media, always download first
- **Real Files Testing**: Use actual audio files, no mocks
- **Protocol-Based**: Use typing.Protocol for service interfaces

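The protocol-based pattern can be sketched as follows; `Transcriber`, `Result`, and `FakeTranscriber` are hypothetical illustrations, not names from the Trax codebase:

```python
from dataclasses import dataclass
from typing import Protocol, runtime_checkable

@dataclass
class Result:
    text: str
    confidence: float

@runtime_checkable
class Transcriber(Protocol):
    """Any class with a matching transcribe_audio method satisfies this protocol."""
    def transcribe_audio(self, audio_file: str) -> Result: ...

class FakeTranscriber:
    # No inheritance needed: structural typing makes this a Transcriber
    def transcribe_audio(self, audio_file: str) -> Result:
        return Result(text=f"transcript of {audio_file}", confidence=0.9)

def run_transcription(service: Transcriber, audio_file: str) -> Result:
    # Callers depend on the protocol, not on a concrete class
    return service.transcribe_audio(audio_file)
```

Because the protocol is structural, test doubles like `FakeTranscriber` need no registration or subclassing, which keeps services swappable.
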
## Common Workflows

### Adding New Feature

```bash
# 1. Get task details
task-master show <task-id>

# 2. Write tests first
# Create a test file with comprehensive test cases

# 3. Implement minimal code
# Write code to pass the tests

# 4. Validate quality
uv run pytest && uv run black src/ tests/ && uv run ruff check --fix

# 5. Complete
task-master set-status --id=<task-id> --status=done
```

### Fixing Bug

```bash
# 1. Reproduce the bug
# Write a test that fails

# 2. Fix the code
# Make the test pass

# 3. Validate
uv run pytest && uv run black src/ tests/ && uv run ruff check --fix src/

# 4. Update status
task-master set-status --id=<task-id> --status=done
```

### Code Review

```bash
# 1. Check file sizes
./scripts/validate_loc.sh

# 2. Run tests
uv run pytest

# 3. Check formatting
uv run black --check src/ tests/
uv run ruff check src/ tests/

# 4. Validate against rules
# Does the code follow project patterns?
# Are files appropriately sized?
# Are tests comprehensive?
```

## Project-Specific Rules

### Audio Processing

- Use distil-large-v3 for M3 optimization
- Convert to 16kHz mono WAV for processing
- Keep memory usage under 2GB for the v1 pipeline
- Process 5-minute audio in under 30 seconds

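The 16kHz mono WAV conversion could be done by shelling out to ffmpeg. This is a sketch assuming ffmpeg is on PATH; the helper names are illustrative, not part of the project:

```python
import subprocess

def ffmpeg_cmd(src: str, dst: str) -> list[str]:
    """Build the ffmpeg command for a 16kHz mono WAV conversion."""
    return [
        "ffmpeg", "-y",   # Overwrite output without prompting
        "-i", src,        # Input file (any format ffmpeg supports)
        "-ar", "16000",   # Resample to 16kHz
        "-ac", "1",       # Downmix to mono
        dst,              # Output path (.wav extension selects WAV)
    ]

def convert_to_16khz_mono(src: str, dst: str) -> None:
    # Raises CalledProcessError if ffmpeg fails, per the error-handling rules
    subprocess.run(ffmpeg_cmd(src, dst), check=True, capture_output=True)
```

Keeping the command construction separate from the subprocess call makes the conversion easy to unit-test without real audio files.
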
### Database Operations

- Use JSONB for flexible data storage
- Follow the repository pattern for data access
- Use transactions for multi-step operations
- Cache frequently accessed data

### CLI Development

- Use Click for command-line interface
- Provide clear help text and examples
- Use consistent error handling patterns
- Support both interactive and batch modes

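A sketch of a Click command following these rules; the `transcribe` command and its options are illustrative, not the actual Trax CLI:

```python
import click

@click.command()
@click.argument("audio_file")
@click.option("--output", "-o", default="transcript.txt", show_default=True,
              help="Where to write the transcript.")
def transcribe(audio_file: str, output: str) -> None:
    """Transcribe AUDIO_FILE and write the result to OUTPUT."""
    if not audio_file.endswith(".wav"):
        # Consistent error handling: clear message, non-zero exit code
        raise click.BadParameter("expected a .wav file", param_hint="AUDIO_FILE")
    click.echo(f"Transcribing {audio_file} -> {output}")

if __name__ == "__main__":
    transcribe()
```

The docstring becomes the `--help` text, and `click.BadParameter` produces a usage error with a non-zero exit code, which suits both interactive and batch use.
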
## Error Handling

### When Things Go Wrong

```python
# ✅ DO: Handle errors gracefully
try:
    result = process_audio(audio_file)
except AudioProcessingError as e:
    logger.error(f"Failed to process {audio_file}: {e}")
    return ErrorResult(error=str(e))
except Exception as e:
    logger.error(f"Unexpected error: {e}")
    return ErrorResult(error="Internal processing error")
```

### Common Issues

- **Missing .env**: Check that `../../.env` exists
- **Import errors**: Run `uv pip install -e ".[dev]"`
- **Type errors**: Run `uv run mypy src/`
- **Formatting issues**: Run `uv run black src/ tests/`

## Performance Guidelines

### Audio Processing

- **Target**: 5-minute audio in <30 seconds
- **Memory**: <2GB for the v1 pipeline
- **Accuracy**: 95%+ for clear audio

### Caching Strategy

- **Embeddings**: 24h TTL
- **Analysis**: 7d TTL
- **Queries**: 6h TTL
- **Compression**: LZ4 for storage efficiency

## Quick Reference Commands

### Development

```bash
uv pip install -e ".[dev]"     # Install dependencies
uv run python src/main.py      # Start development server
uv run pytest                  # Run all tests
uv run black src/ tests/       # Format code
uv run ruff check --fix src/   # Lint and fix
```

### Task Management

```bash
task-master list                                # Show all tasks
task-master next                                # Get next task
task-master show <id>                           # Show task details
task-master set-status --id=<id> --status=done  # Complete task
```

### Quality Validation

```bash
./scripts/validate_loc.sh           # Check file sizes
./scripts/validate_quality.sh <id>  # Comprehensive quality check
./scripts/validate_tests.sh <id>    # Test validation
```

## Anti-Patterns to Avoid

### ❌ DON'T: Skip Understanding

- Jumping straight to coding without requirements
- Not reading task details or context
- Ignoring existing code patterns

### ❌ DON'T: Skip Testing

- Writing code before tests
- Incomplete test coverage
- Not testing edge cases

### ❌ DON'T: Ignore Quality

- Large, monolithic files
- Poor formatting or linting errors
- Not following project patterns

### ❌ DON'T: Over-Engineer

- Complex abstractions when simple works
- Multiple layers when one suffices
- Premature optimization

## Claude Code Integration

### Context Management

Use the filesystem as the ultimate context manager:

```
.claude/
├── tasks/      # Feature plans
├── context/    # Shared context (session.md)
├── research/   # Sub-agent research reports
├── hooks/      # Automation hooks
└── commands/   # Custom slash commands
```

### Hooks for Automation

- **task-complete.sh**: Notification when tasks complete
- **type-check.py**: Real-time type/lint checking on file changes

### Sub-Agent Strategy

- Sub-agents are for RESEARCH ONLY, never implementation
- Research reports are saved to `.claude/research/`
- The main agent reads the reports and implements

### When to Use Task Tool (Sub-Agent)

**✅ USE for token-heavy operations:**

- Reading multiple large files (>500 lines each)
- Searching the entire codebase for patterns
- Analyzing complex dependencies
- Researching documentation
- Web searches for best practices

**❌ KEEP in main agent:**

- All implementation work
- Bug fixes and debugging
- File edits and creation
- Running tests
- Final decision making

**Token Savings:** The Task tool can save 80% of the context window by returning only summaries.

### Custom Commands

- `/tdd-cycle`: Execute the complete TDD workflow
- `/progress`: Show development status
- `/quick-test`: Fast validation of changes
- `/research`: Trigger research agents

## Success Metrics

### Code Quality

- All tests pass
- Files under LOC limits
- No linting errors
- Consistent formatting

### Development Speed

- Clear understanding of requirements
- Tests written first
- Minimal viable implementation
- Quick validation cycles

### Maintainability

- Small, focused files
- Clear separation of concerns
- Consistent patterns
- Good test coverage

---

**Remember**: Keep it simple. Follow the enhanced workflow: Plan → Understand → Test → Implement → Validate → Complete.