# AI Integration Guide

This guide covers the AI-powered task management features integrated into the Directus Task Management system.

## Overview

The AI integration provides natural language task creation, intelligent task breakdown, and context-aware recommendations using OpenAI GPT-4-turbo and LangChain.js.
## Features

- **Natural Language Task Creation**: Convert plain English descriptions into structured tasks
- **Intelligent Task Breakdown**: Automatically break complex tasks into manageable subtasks
- **Context-Aware Suggestions**: Get AI-powered task recommendations based on project state
- **Token Usage Tracking**: Monitor API usage and costs
- **Caching Layer**: Redis-based caching to minimize API calls
- **Rate Limiting**: Prevent API quota exhaustion
## API Endpoints

### Create Task from Natural Language

```http
POST /api/ai/create-task
Content-Type: application/json

{
  "prompt": "Implement user authentication with JWT tokens",
  "projectId": "project-123",
  "context": {
    "projectDescription": "E-commerce platform",
    "currentTasks": ["Database setup", "API structure"],
    "completedTasks": ["Project initialization"],
    "userRole": "developer"
  }
}
```
Response:

```json
{
  "success": true,
  "data": {
    "title": "Implement User Authentication",
    "description": "Add JWT-based authentication system to the application",
    "priority": "high",
    "complexity": "major",
    "estimatedHours": 16,
    "acceptanceCriteria": [
      "Users can register with email/password",
      "Users can login and receive JWT token",
      "Protected routes require valid token",
      "Token refresh mechanism works"
    ],
    "tags": ["auth", "security", "jwt"],
    "subtasks": [
      {
        "title": "Setup database schema",
        "description": "Create user tables and indexes",
        "estimatedHours": 2
      }
    ]
  }
}
```
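For client code, the response fields shown above can be modeled as a TypeScript interface with a small runtime guard. This is a sketch inferred from the example payload alone; the API does not publish these type names.

```typescript
// Shape of a successful /api/ai/create-task response, inferred from the
// example payload above.
interface AiSubtask {
  title: string;
  description: string;
  estimatedHours: number;
}

interface AiCreatedTask {
  title: string;
  description: string;
  priority: string;
  complexity: string;
  estimatedHours: number;
  acceptanceCriteria: string[];
  tags: string[];
  subtasks: AiSubtask[];
}

interface AiCreateTaskResponse {
  success: boolean;
  data: AiCreatedTask;
}

// Narrowing helper: checks the minimal fields before the payload is used.
function isCreateTaskResponse(value: unknown): value is AiCreateTaskResponse {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    v.success === true &&
    typeof v.data === "object" &&
    v.data !== null &&
    typeof (v.data as Record<string, unknown>).title === "string"
  );
}
```

A guard like this keeps `response.json()` results from flowing into the UI untyped.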
### Break Down Task into Subtasks

```http
POST /api/ai/breakdown/:taskId
Content-Type: application/json

{
  "taskDescription": "Implement user authentication with JWT tokens",
  "context": "Node.js Express application with PostgreSQL database"
}
```
### Get Task Suggestions

```http
GET /api/ai/suggestions/:projectId?projectDescription=E-commerce platform&completedTasks=Setup database,Create API&currentTasks=Build UI&goals=Launch MVP
```
### Update Task Context

```http
PUT /api/ai/update-context/:taskId
Content-Type: application/json

{
  "feedback": "Task needs more specific acceptance criteria for security requirements"
}
```
### Health Check

```http
GET /api/ai/health
```

### Usage Statistics

```http
GET /api/ai/usage
```
## Configuration

### Environment Variables

```bash
# OpenAI Configuration
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4-turbo
OPENAI_MAX_TOKENS=4096
OPENAI_TEMPERATURE=0.7
OPENAI_RETRY_ATTEMPTS=3
OPENAI_RETRY_DELAY_MS=1000
OPENAI_TIMEOUT=30000

# LangChain Configuration
LANGCHAIN_VERBOSE=false
LANGCHAIN_CACHE_ENABLED=true
LANGCHAIN_CACHE_TTL=3600
LANGCHAIN_MAX_CONCURRENCY=5
LANGCHAIN_MEMORY_BUFFER_SIZE=10

# Rate Limiting
AI_MAX_REQUESTS_PER_MINUTE=60
AI_MAX_TOKENS_PER_MINUTE=90000
AI_MAX_REQUESTS_PER_DAY=10000

# Monitoring
AI_TRACK_TOKEN_USAGE=true
AI_LOG_LEVEL=info
AI_METRICS_ENABLED=true
```
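A minimal, testable sketch of how a service might read these variables with the defaults shown above. `loadOpenAiConfig` is illustrative, not the project's actual loader; the environment record is injectable so the function can be tested without `process.env`.

```typescript
// Config loader for the OpenAI variables listed above. Defaults mirror
// the sample values from the environment-variable block.
interface OpenAiConfig {
  apiKey: string;
  model: string;
  maxTokens: number;
  temperature: number;
  retryAttempts: number;
  retryDelayMs: number;
  timeoutMs: number;
}

function loadOpenAiConfig(env: Record<string, string | undefined>): OpenAiConfig {
  // Parse a numeric variable, falling back to the documented default.
  const num = (key: string, fallback: number): number => {
    const raw = env[key];
    const parsed = raw === undefined ? NaN : Number(raw);
    return Number.isFinite(parsed) ? parsed : fallback;
  };

  const apiKey = env.OPENAI_API_KEY;
  if (!apiKey) throw new Error("OPENAI_API_KEY is required");

  return {
    apiKey,
    model: env.OPENAI_MODEL ?? "gpt-4-turbo",
    maxTokens: num("OPENAI_MAX_TOKENS", 4096),
    temperature: num("OPENAI_TEMPERATURE", 0.7),
    retryAttempts: num("OPENAI_RETRY_ATTEMPTS", 3),
    retryDelayMs: num("OPENAI_RETRY_DELAY_MS", 1000),
    timeoutMs: num("OPENAI_TIMEOUT", 30000),
  };
}
```

Failing fast on a missing API key surfaces misconfiguration at startup rather than on the first AI request.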
### Redis Configuration

Redis is used for caching AI responses and rate limiting. Cache TTL values:

- NLP Task Creation: 1 hour
- Task Breakdown: 2 hours
- Task Suggestions: 30 minutes
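The TTLs above can be centralized in one lookup, paired with a deterministic cache key so identical prompts hit the same Redis entry. The key format here is illustrative, not the service's actual scheme.

```typescript
// TTL selection for the three cached operations above (values in seconds).
type AiOperation = "create-task" | "breakdown" | "suggestions";

const CACHE_TTL_SECONDS: Record<AiOperation, number> = {
  "create-task": 3600, // NLP task creation: 1 hour
  breakdown: 7200,     // task breakdown: 2 hours
  suggestions: 1800,   // task suggestions: 30 minutes
};

// Deterministic cache key: same operation + project + input always maps
// to the same Redis entry. encodeURIComponent keeps the key Redis-safe.
function cacheKey(op: AiOperation, projectId: string, input: string): string {
  return `ai:${op}:${projectId}:${encodeURIComponent(input)}`;
}
```

With a Redis client this would typically be used as `SET key value EX CACHE_TTL_SECONDS[op]`.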
## Service Architecture

### OpenAI Service

- File: `src/services/ai/openai.service.ts`
- Purpose: Direct integration with the OpenAI API
- Features:
  - Exponential backoff retry logic
  - Token counting and cost estimation
  - Streaming support
  - Error handling with detection of retryable failures
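The exponential backoff behavior listed above can be sketched as a small retry helper. Assumed semantics: the delay doubles per attempt starting from `OPENAI_RETRY_DELAY_MS`; the actual service implementation may differ.

```typescript
// Delay for a given zero-based attempt: base, 2x base, 4x base, ...
function backoffDelayMs(attempt: number, baseDelayMs = 1000): number {
  return baseDelayMs * 2 ** attempt;
}

// Retry an async operation with exponential backoff. `sleep` is injectable
// so tests can run without real delays.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 1000,
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Back off before every attempt except the last failed one.
      if (attempt < attempts - 1) await sleep(backoffDelayMs(attempt, baseDelayMs));
    }
  }
  throw lastError;
}
```

With the documented defaults (`OPENAI_RETRY_ATTEMPTS=3`, `OPENAI_RETRY_DELAY_MS=1000`), a failing call waits 1s, then 2s, before the final attempt.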
### LangChain Service

- File: `src/services/ai/langchain.service.ts`
- Purpose: Chain-based AI operations for complex workflows
- Features:
  - Multiple chain patterns (completion, conversation, task breakdown)
  - Memory management for conversation context
  - JSON schema validation for structured outputs
  - Agent patterns for task processing
### AI Task Service

- File: `src/services/ai/ai-task.service.ts`
- Purpose: High-level AI task management operations
- Features:
  - Natural language task creation
  - Context management with caching
  - Rate limiting integration
  - Task suggestion algorithms
  - Cache warming capabilities
## Error Handling

### Rate Limiting

```json
{
  "success": false,
  "error": "Rate limit exceeded",
  "message": "Too many requests. Please try again later."
}
```

### Validation Errors

```json
{
  "success": false,
  "error": "Validation error",
  "details": [
    {
      "path": ["prompt"],
      "message": "String must contain at least 1 character(s)"
    }
  ]
}
```

### Service Unavailable

```json
{
  "success": false,
  "error": "AI service temporarily unavailable",
  "message": "OpenAI API is not responding. Please try again later."
}
```
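Client code can use the `error` strings in these payloads to decide whether a retry is worthwhile: rate limits and outages are transient, while validation errors will fail identically on every attempt. A sketch (the strings are taken verbatim from the examples above):

```typescript
// Error payload shape shared by the three responses above.
interface AiErrorResponse {
  success: false;
  error: string;
  message?: string;
  details?: unknown[];
}

// Transient failures are retryable; validation failures are not, since the
// same input will be rejected again.
function isRetryableAiError(res: AiErrorResponse): boolean {
  return (
    res.error === "Rate limit exceeded" ||
    res.error === "AI service temporarily unavailable"
  );
}
```

Pairing this check with exponential backoff avoids burning the daily request quota on inputs that can never succeed.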
## Best Practices

### Prompt Engineering

- **Be Specific**: Provide clear, detailed task descriptions
- **Include Context**: Add project context, current state, and goals
- **Use Examples**: Reference similar completed tasks when possible
- **Specify Constraints**: Include technology stack, deadlines, or resource limits

Good prompt example:

```text
Implement user authentication for our React/Node.js e-commerce platform.
Users should be able to register with email/password, login, and access
protected routes. Use JWT tokens with refresh mechanism. Follow security
best practices for password hashing and token storage.
```

Poor prompt example:

```text
Add login
```
### Performance Optimization

- **Use Caching**: Cache common queries to reduce API calls
- **Warm Cache**: Pre-populate the cache during low-traffic periods
- **Monitor Usage**: Track token consumption and costs
- **Batch Operations**: Group related AI operations when possible

### Error Recovery

- **Graceful Degradation**: Provide fallback behavior when AI services are unavailable
- **Retry Logic**: Implement exponential backoff for transient failures
- **User Feedback**: Allow users to provide feedback for continuous improvement
- **Monitoring**: Set up alerts for service degradation
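Graceful degradation from the list above can be as simple as a wrapper that returns a plain default when the AI call fails, while recording the failure for monitoring. A sketch; `logger` stands in for whatever logging the application uses.

```typescript
// Run an AI-backed operation; on any failure, log and return the fallback
// so the feature degrades instead of erroring out.
async function withFallback<T>(
  aiOperation: () => Promise<T>,
  fallback: T,
  logger: (msg: string) => void = console.warn,
): Promise<T> {
  try {
    return await aiOperation();
  } catch (err) {
    logger(`AI operation failed, using fallback: ${String(err)}`);
    return fallback;
  }
}
```

For example, when `/api/ai/create-task` is unavailable, the fallback could be a bare task built directly from the user's prompt, with no AI-generated breakdown.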
## Usage Examples

### Creating a Complex Feature

```javascript
// 1. Create main task from natural language
const createResponse = await fetch('/api/ai/create-task', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    prompt: 'Build a real-time chat system with message history, user presence, and file sharing',
    projectId: 'chat-app',
    context: {
      projectDescription: 'Real-time collaboration platform',
      currentTasks: ['User management', 'Basic UI'],
      completedTasks: ['Project setup', 'Database design'],
      userRole: 'full-stack developer'
    }
  })
});
const mainTask = await createResponse.json();

// 2. Break down into subtasks
const breakdownResponse = await fetch(`/api/ai/breakdown/${mainTask.data.id}`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    taskDescription: mainTask.data.description,
    context: 'React frontend with Node.js/Socket.io backend'
  })
});
const breakdown = await breakdownResponse.json();

// 3. Get additional suggestions
const suggestionsResponse = await fetch(
  `/api/ai/suggestions/chat-app?` + new URLSearchParams({
    projectDescription: 'Real-time collaboration platform',
    completedTasks: 'Project setup,Database design,User management',
    currentTasks: 'Real-time chat system',
    goals: 'Launch beta,Scale to 1000 users'
  })
);
const suggestions = await suggestionsResponse.json();
```
### Monitoring and Health Checks

```javascript
// Check AI service health
const healthResponse = await fetch('/api/ai/health');
const health = await healthResponse.json();
console.log('AI Services Status:', health.data);

// Monitor usage and costs
const usageResponse = await fetch('/api/ai/usage');
const usage = await usageResponse.json();
console.log('Token Usage:', usage.data.openai.totalTokensUsed);
console.log('Estimated Cost:', usage.data.openai.estimatedCost);
```
## Troubleshooting

### Common Issues

- **High Token Usage**
  - Solution: Implement more aggressive caching, reduce prompt complexity
  - Monitor: Check `/api/ai/usage` regularly
- **Rate Limiting**
  - Solution: Implement request queuing, reduce concurrent requests
  - Configure: Adjust the `AI_MAX_REQUESTS_PER_MINUTE` environment variable
- **Poor Task Quality**
  - Solution: Improve prompt engineering, add more context
  - Feedback: Use `/api/ai/update-context` to provide training data
- **Service Timeouts**
  - Solution: Increase `OPENAI_TIMEOUT`, implement fallback mechanisms
  - Monitor: Check `/api/ai/health` for service status
### Debug Mode

Enable detailed logging:

```bash
AI_LOG_LEVEL=debug
LANGCHAIN_VERBOSE=true
```
### Performance Monitoring

Track key metrics:

- Response times for AI operations
- Cache hit/miss ratios
- Token usage trends
- Error rates by endpoint
## Cost Management

### Token Usage Optimization

- **Cache Aggressively**: Use Redis caching for repeated queries
- **Optimize Prompts**: Remove unnecessary context or verbosity
- **Use Appropriate Models**: Consider using smaller models for simple tasks
- **Batch Processing**: Group related operations to reduce overhead
### Cost Monitoring

```javascript
// Set up usage alerts (DAILY_BUDGET_LIMIT is an application-defined threshold)
const usageResponse = await fetch('/api/ai/usage');
const usage = await usageResponse.json();
const dailyCost = usage.data.openai.estimatedCost;

if (dailyCost > DAILY_BUDGET_LIMIT) {
  // Trigger an alert or disable non-critical AI features
  console.warn('Daily AI budget exceeded:', dailyCost);
}
```
## Security Considerations

- **API Key Protection**: Store OpenAI API keys securely and rotate them regularly
- **Input Validation**: Validate all user inputs before sending them to AI services
- **Output Sanitization**: Sanitize AI responses before displaying them to users
- **Rate Limiting**: Implement user-level rate limiting to prevent abuse
- **Audit Logging**: Log all AI interactions for security monitoring
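Input validation can be sketched as a small checker that produces the same `path`/`message` shape as the Validation Errors example earlier. Hand-rolled here for illustration; the service itself may use a schema library instead.

```typescript
// One issue per failed field, matching the error payload shown in the
// Error Handling section.
interface ValidationIssue {
  path: string[];
  message: string;
}

// Validate the /api/ai/create-task body before it reaches the AI service.
// Only the two required string fields from the endpoint example are checked.
function validateCreateTaskBody(body: {
  prompt?: unknown;
  projectId?: unknown;
}): ValidationIssue[] {
  const issues: ValidationIssue[] = [];
  if (typeof body.prompt !== "string" || body.prompt.length < 1) {
    issues.push({ path: ["prompt"], message: "String must contain at least 1 character(s)" });
  }
  if (typeof body.projectId !== "string" || body.projectId.length < 1) {
    issues.push({ path: ["projectId"], message: "String must contain at least 1 character(s)" });
  }
  return issues;
}
```

Rejecting empty or non-string input before the OpenAI call both hardens the endpoint and avoids spending tokens on requests that cannot produce a useful task.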
## Future Enhancements

Planned improvements:

- **Fine-tuning**: Custom model training on project-specific data
- **Multi-model Support**: Integration with additional AI providers
- **Advanced Agents**: Complex multi-step task automation
- **Voice Interface**: Speech-to-text task creation
- **Visual Task Design**: AI-powered task visualization and workflow design