# AI Integration Guide
This guide covers the AI-powered task management features integrated into the Directus Task Management system.
## Overview
The AI integration provides natural language task creation, intelligent task breakdown, and context-aware recommendations using OpenAI GPT-4-turbo and LangChain.js.
## Features
- **Natural Language Task Creation**: Convert plain English descriptions into structured tasks
- **Intelligent Task Breakdown**: Automatically break complex tasks into manageable subtasks
- **Context-Aware Suggestions**: Get AI-powered task recommendations based on project state
- **Token Usage Tracking**: Monitor API usage and costs
- **Caching Layer**: Redis-based caching to minimize API calls
- **Rate Limiting**: Prevent API quota exhaustion
## API Endpoints
### Create Task from Natural Language
```http
POST /api/ai/create-task
Content-Type: application/json

{
  "prompt": "Implement user authentication with JWT tokens",
  "projectId": "project-123",
  "context": {
    "projectDescription": "E-commerce platform",
    "currentTasks": ["Database setup", "API structure"],
    "completedTasks": ["Project initialization"],
    "userRole": "developer"
  }
}
```
**Response:**
```json
{
  "success": true,
  "data": {
    "title": "Implement User Authentication",
    "description": "Add JWT-based authentication system to the application",
    "priority": "high",
    "complexity": "major",
    "estimatedHours": 16,
    "acceptanceCriteria": [
      "Users can register with email/password",
      "Users can login and receive JWT token",
      "Protected routes require valid token",
      "Token refresh mechanism works"
    ],
    "tags": ["auth", "security", "jwt"],
    "subtasks": [
      {
        "title": "Setup database schema",
        "description": "Create user tables and indexes",
        "estimatedHours": 2
      }
    ]
  }
}
```
### Break Down Task into Subtasks
```http
POST /api/ai/breakdown/:taskId
Content-Type: application/json

{
  "taskDescription": "Implement user authentication with JWT tokens",
  "context": "Node.js Express application with PostgreSQL database"
}
```
### Get Task Suggestions
```http
GET /api/ai/suggestions/:projectId?projectDescription=E-commerce%20platform&completedTasks=Setup%20database,Create%20API&currentTasks=Build%20UI&goals=Launch%20MVP
```
### Update Task Context
```http
PUT /api/ai/update-context/:taskId
Content-Type: application/json

{
  "feedback": "Task needs more specific acceptance criteria for security requirements"
}
```
### Health Check
```http
GET /api/ai/health
```
### Usage Statistics
```http
GET /api/ai/usage
```
## Configuration
### Environment Variables
```env
# OpenAI Configuration
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4-turbo
OPENAI_MAX_TOKENS=4096
OPENAI_TEMPERATURE=0.7
OPENAI_RETRY_ATTEMPTS=3
OPENAI_RETRY_DELAY_MS=1000
OPENAI_TIMEOUT=30000
# LangChain Configuration
LANGCHAIN_VERBOSE=false
LANGCHAIN_CACHE_ENABLED=true
LANGCHAIN_CACHE_TTL=3600
LANGCHAIN_MAX_CONCURRENCY=5
LANGCHAIN_MEMORY_BUFFER_SIZE=10
# Rate Limiting
AI_MAX_REQUESTS_PER_MINUTE=60
AI_MAX_TOKENS_PER_MINUTE=90000
AI_MAX_REQUESTS_PER_DAY=10000
# Monitoring
AI_TRACK_TOKEN_USAGE=true
AI_LOG_LEVEL=info
AI_METRICS_ENABLED=true
```
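These variables can be read and validated once at startup. Below is a minimal, illustrative loader (the helper name `loadAiConfig` is an assumption, not part of the service code); it applies the defaults shown above and fails fast when the required API key is missing:

```javascript
// Illustrative config loader for the env vars above (loadAiConfig is a
// hypothetical helper, not part of the actual service code).
function loadAiConfig(env = process.env) {
  const num = (key, fallback) => {
    const raw = env[key];
    const n = raw === undefined ? fallback : Number(raw);
    if (Number.isNaN(n)) throw new Error(`Invalid number for ${key}: ${raw}`);
    return n;
  };
  // Fail fast: nothing works without an API key.
  if (!env.OPENAI_API_KEY) throw new Error('OPENAI_API_KEY is required');
  return {
    apiKey: env.OPENAI_API_KEY,
    model: env.OPENAI_MODEL || 'gpt-4-turbo',
    maxTokens: num('OPENAI_MAX_TOKENS', 4096),
    temperature: num('OPENAI_TEMPERATURE', 0.7),
    retryAttempts: num('OPENAI_RETRY_ATTEMPTS', 3),
    retryDelayMs: num('OPENAI_RETRY_DELAY_MS', 1000),
    timeoutMs: num('OPENAI_TIMEOUT', 30000),
  };
}
```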
### Redis Configuration
Redis is used for caching AI responses and rate limiting. Cache TTL values:
- NLP task creation: 1 hour
- Task breakdown: 2 hours
- Task suggestions: 30 minutes
## Service Architecture
### OpenAI Service
- **File**: `src/services/ai/openai.service.ts`
- **Purpose**: Direct integration with OpenAI API
- **Features**:
- Exponential backoff retry logic
- Token counting and cost estimation
- Streaming support
- Error handling with retryable detection
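The retry pattern above can be sketched as follows. This is a minimal illustration, not the service's actual implementation; the parameters mirror `OPENAI_RETRY_ATTEMPTS` and `OPENAI_RETRY_DELAY_MS`, and the `retryable` flag stands in for the service's retryable-error detection:

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Retry fn with exponential backoff: delays of base, 2*base, 4*base, ...
async function withRetry(fn, { attempts = 3, baseDelayMs = 1000 } = {}) {
  let lastError;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Don't retry errors flagged as non-retryable (e.g. bad requests).
      if (err.retryable === false) throw err;
      if (attempt < attempts - 1) {
        await sleep(baseDelayMs * 2 ** attempt);
      }
    }
  }
  throw lastError;
}
```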
### LangChain Service
- **File**: `src/services/ai/langchain.service.ts`
- **Purpose**: Chain-based AI operations for complex workflows
- **Features**:
- Multiple chain patterns (completion, conversation, task breakdown)
- Memory management for conversation context
- JSON schema validation for structured outputs
- Agent patterns for task processing
### AI Task Service
- **File**: `src/services/ai/ai-task.service.ts`
- **Purpose**: High-level AI task management operations
- **Features**:
- Natural language task creation
- Context management with caching
- Rate limiting integration
- Task suggestion algorithms
- Cache warming capabilities
## Error Handling
### Rate Limiting
```json
{
  "success": false,
  "error": "Rate limit exceeded",
  "message": "Too many requests. Please try again later."
}
```
### Validation Errors
```json
{
  "success": false,
  "error": "Validation error",
  "details": [
    {
      "path": ["prompt"],
      "message": "String must contain at least 1 character(s)"
    }
  ]
}
```
### Service Unavailable
```json
{
  "success": false,
  "error": "AI service temporarily unavailable",
  "message": "OpenAI API is not responding. Please try again later."
}
```
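On the client side, these three error shapes can be told apart so that only the transient ones are retried. The helper below is a hedged sketch (`classifyAiError` is not part of the service code); it keys off the `error` strings shown above:

```javascript
// Classify an AI endpoint response body into a retry decision.
// (Illustrative helper; the error strings match the examples above.)
function classifyAiError(status, body) {
  if (body && body.success !== false) return { kind: 'ok' };
  if (status === 429 || body.error === 'Rate limit exceeded') {
    return { kind: 'rate-limited', retryable: true };
  }
  if (body.error === 'Validation error') {
    // Surface field-level details to the user; retrying won't help.
    return { kind: 'validation', retryable: false, details: body.details };
  }
  // Anything else (e.g. "AI service temporarily unavailable") is transient.
  return { kind: 'unavailable', retryable: true };
}
```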
## Best Practices
### Prompt Engineering
1. **Be Specific**: Provide clear, detailed task descriptions
2. **Include Context**: Add project context, current state, and goals
3. **Use Examples**: Reference similar completed tasks when possible
4. **Specify Constraints**: Include technology stack, deadlines, or resource limits
**Good Prompt Example:**
```text
Implement user authentication for our React/Node.js e-commerce platform.
Users should be able to register with email/password, login, and access
protected routes. Use JWT tokens with refresh mechanism. Follow security
best practices for password hashing and token storage.
```
**Poor Prompt Example:**
```text
Add login
```
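The guidance above can also be mechanized. Below is an illustrative helper that assembles a specific, context-rich prompt from the same `context` fields used by `/api/ai/create-task` (the function itself and the `constraints` field are assumptions, not part of the service code):

```javascript
// Build a context-rich prompt from a bare task description.
// Field names follow the /api/ai/create-task context object.
function buildTaskPrompt(description, context = {}) {
  const parts = [description.trim()];
  if (context.projectDescription) parts.push(`Project: ${context.projectDescription}.`);
  if (context.currentTasks?.length) parts.push(`In progress: ${context.currentTasks.join(', ')}.`);
  if (context.completedTasks?.length) parts.push(`Already done: ${context.completedTasks.join(', ')}.`);
  if (context.constraints) parts.push(`Constraints: ${context.constraints}.`);
  return parts.join(' ');
}
```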
### Performance Optimization
1. **Use Caching**: Cache common queries to reduce API calls
2. **Warm Cache**: Pre-populate cache during low-traffic periods
3. **Monitor Usage**: Track token consumption and costs
4. **Batch Operations**: Group related AI operations when possible
### Error Recovery
1. **Graceful Degradation**: Provide fallback behavior when AI services are unavailable
2. **Retry Logic**: Implement exponential backoff for transient failures
3. **User Feedback**: Allow users to provide feedback for continuous improvement
4. **Monitoring**: Set up alerts for service degradation
## Usage Examples
### Creating a Complex Feature
```javascript
// 1. Create the main task from natural language
const createRes = await fetch('/api/ai/create-task', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    prompt: 'Build a real-time chat system with message history, user presence, and file sharing',
    projectId: 'chat-app',
    context: {
      projectDescription: 'Real-time collaboration platform',
      currentTasks: ['User management', 'Basic UI'],
      completedTasks: ['Project setup', 'Database design'],
      userRole: 'full-stack developer'
    }
  })
});
const mainTask = await createRes.json();

// 2. Break the task down into subtasks
const breakdownRes = await fetch(`/api/ai/breakdown/${mainTask.data.id}`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    taskDescription: mainTask.data.description,
    context: 'React frontend with Node.js/Socket.io backend'
  })
});
const breakdown = await breakdownRes.json();

// 3. Get additional suggestions
const suggestionsRes = await fetch(
  '/api/ai/suggestions/chat-app?' + new URLSearchParams({
    projectDescription: 'Real-time collaboration platform',
    completedTasks: 'Project setup,Database design,User management',
    currentTasks: 'Real-time chat system',
    goals: 'Launch beta,Scale to 1000 users'
  })
);
const suggestions = await suggestionsRes.json();
```
### Monitoring and Health Checks
```javascript
// Check AI service health
const health = await (await fetch('/api/ai/health')).json();
console.log('AI Services Status:', health.data);

// Monitor usage and costs
const usage = await (await fetch('/api/ai/usage')).json();
console.log('Token Usage:', usage.data.openai.totalTokensUsed);
console.log('Estimated Cost:', usage.data.openai.estimatedCost);
```
## Troubleshooting
### Common Issues
1. **High Token Usage**
- Solution: Implement more aggressive caching, reduce prompt complexity
- Monitor: Check `/api/ai/usage` regularly
2. **Rate Limiting**
- Solution: Implement request queuing, reduce concurrent requests
- Configure: Adjust `AI_MAX_REQUESTS_PER_MINUTE` environment variable
3. **Poor Task Quality**
- Solution: Improve prompt engineering, add more context
- Feedback: Use `/api/ai/update-context` to provide corrective feedback
4. **Service Timeouts**
- Solution: Increase `OPENAI_TIMEOUT`, implement fallback mechanisms
- Monitor: Check `/api/ai/health` for service status
### Debug Mode
Enable detailed logging:
```env
AI_LOG_LEVEL=debug
LANGCHAIN_VERBOSE=true
```
### Performance Monitoring
Track key metrics:
- Response times for AI operations
- Cache hit/miss ratios
- Token usage trends
- Error rates by endpoint
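The metrics above could be collected with a small in-memory tracker like the one below. This is an illustrative sketch (the `AiMetrics` class is an assumption); a real deployment would export these counters to a metrics backend such as Prometheus or StatsD instead:

```javascript
// In-memory tracker for the key metrics listed above (illustrative only).
class AiMetrics {
  constructor() {
    this.durationsMs = [];
    this.cacheHits = 0;
    this.cacheMisses = 0;
    this.errorsByEndpoint = new Map();
  }
  recordRequest(durationMs, cacheHit) {
    this.durationsMs.push(durationMs);
    if (cacheHit) this.cacheHits += 1;
    else this.cacheMisses += 1;
  }
  recordError(endpoint) {
    this.errorsByEndpoint.set(endpoint, (this.errorsByEndpoint.get(endpoint) || 0) + 1);
  }
  cacheHitRatio() {
    const total = this.cacheHits + this.cacheMisses;
    return total === 0 ? 0 : this.cacheHits / total;
  }
  avgDurationMs() {
    if (this.durationsMs.length === 0) return 0;
    return this.durationsMs.reduce((a, b) => a + b, 0) / this.durationsMs.length;
  }
}
```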
## Cost Management
### Token Usage Optimization
1. **Cache Aggressively**: Use Redis caching for repeated queries
2. **Optimize Prompts**: Remove unnecessary context or verbosity
3. **Use Appropriate Models**: Consider using smaller models for simple tasks
4. **Batch Processing**: Group related operations to reduce overhead
### Cost Monitoring
```javascript
// Set up usage alerts (DAILY_BUDGET_LIMIT is your own threshold)
const usage = await (await fetch('/api/ai/usage')).json();
const dailyCost = usage.data.openai.estimatedCost;

if (dailyCost > DAILY_BUDGET_LIMIT) {
  // Trigger an alert or disable non-critical AI features
  console.warn('Daily AI budget exceeded:', dailyCost);
}
```
## Security Considerations
1. **API Key Protection**: Store OpenAI API keys securely, rotate regularly
2. **Input Validation**: Validate all user inputs before sending to AI services
3. **Output Sanitization**: Sanitize AI responses before displaying to users
4. **Rate Limiting**: Implement user-level rate limiting to prevent abuse
5. **Audit Logging**: Log all AI interactions for security monitoring
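User-level rate limiting (point 4) can be as simple as a fixed-window counter per user. The sketch below is illustrative only; production code would back the counters with Redis so limits hold across server instances:

```javascript
// Fixed-window per-user rate limiter (illustrative, in-memory only).
class UserRateLimiter {
  constructor(maxPerWindow, windowMs = 60000) {
    this.maxPerWindow = maxPerWindow;
    this.windowMs = windowMs;
    this.windows = new Map(); // userId -> { windowStart, count }
  }
  // Returns true if the request is allowed, false if the user is over limit.
  allow(userId, now = Date.now()) {
    const entry = this.windows.get(userId);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.windows.set(userId, { windowStart: now, count: 1 });
      return true;
    }
    if (entry.count >= this.maxPerWindow) return false;
    entry.count += 1;
    return true;
  }
}
```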
## Future Enhancements
Planned improvements:
1. **Fine-tuning**: Custom model training on project-specific data
2. **Multi-model Support**: Integration with additional AI providers
3. **Advanced Agents**: Complex multi-step task automation
4. **Voice Interface**: Speech-to-text task creation
5. **Visual Task Design**: AI-powered task visualization and workflow design