todozi
v0.1.3
AI/Human task collaboration system
Todozi JavaScript SDK
A comprehensive task management and AI-powered productivity system built with Node.js. This SDK provides structured parsing, execution, and management of various types of content including tasks, memories, ideas, errors, and more.
Overview
Todozi is a sophisticated task management platform that combines traditional task tracking with AI-powered features. The system provides:
- Task Management: Create, update, and track tasks with rich metadata
- AI Agent System: 26 pre-configured agents for automated task handling
- Semantic Search: Find similar tasks and content using embeddings
- Memory Management: Store and retrieve contextual memories
- Idea Tracking: Capture and organize ideas with importance ratings
- Queue System: Prioritize and process tasks in workflows
- API Management: Secure API key generation and authentication
- Content Extraction: Extract structured data from unstructured text
Core Modules
Agent Management
The Agent Management System provides comprehensive functionality for managing AI agents, their assignments, and lifecycle within the task management platform. This system enables intelligent task assignment by matching agent capabilities and specializations with task requirements.
Key Features:
- Agent lifecycle management (create, update, delete) with UUID-based identification
- Agent assignment to tasks with status tracking (Assigned, InProgress, Completed)
- Agent availability management (Available, Busy, Inactive)
- Specialization and capability-based agent filtering
- Performance statistics tracking with completion rate calculations
- Agent assignment parsing from formatted text
- Best agent selection algorithm based on requirements
- Agent status updates and lifecycle management
- Assignment history tracking
Architecture:
The system uses a singleton AgentManager class that maintains a Map of agents (keyed by agent ID) and an array of assignments. Agents can be filtered by specialization, capability, and availability status. The system supports finding the best agent for a given task based on required specializations and preferred capabilities using a scoring algorithm that considers availability, specialization match, and capability match.
Data Models:
- Agent: Core agent entity with id, name, description, capabilities, specializations, metadata (status), and timestamps
- AgentUpdate: Builder-pattern class for constructing agent updates with a fluent interface
- AgentStatistics: Aggregated statistics with completion rate, total assignments, and status distribution
- AgentAssignment: Assignment entity linking agents to tasks with project context, timestamps, and status
- AgentStatus: Enum for agent status (Available, Busy, Inactive)
- AssignmentStatus: Enum for assignment status (Assigned, InProgress, Completed, Failed)
Usage Example:
```javascript
const agentManager = new AgentManager();
await agentManager.loadAgents();

// Create a new agent
const agentId = await agentManager.createAgent({
  name: "Research Assistant",
  description: "AI agent specialized in research tasks",
  capabilities: ["research", "analysis", "data-collection"],
  specializations: ["academic", "scientific", "technical"],
  metadata: {
    status: "available"
  }
});

// Update agent using builder pattern
const update = AgentUpdate.new()
  .name("Advanced Research Assistant")
  .description("Enhanced research capabilities")
  .capabilities(["research", "analysis", "data-collection", "writing"])
  .specializations(["academic", "scientific", "technical", "medical"])
  .status(AgentStatus.Available);
await agentManager.updateAgent(agentId, update);

// Find best agent for a task
const bestAgent = agentManager.findBestAgent("academic", "research");
if (bestAgent) {
  await agentManager.assignTaskToAgent(taskId, bestAgent.id, projectId);
  console.log(`Task assigned to ${bestAgent.name}`);
}

// Get agent assignments
const assignments = agentManager.getAgentAssignments(agentId);
console.log(`Agent has ${assignments.length} assignments`);

// Complete assignment
await agentManager.completeAgentAssignment(taskId);

// Get statistics
const stats = agentManager.getAgentStatistics();
console.log(`Total agents: ${stats.total_agents}`);
console.log(`Available: ${stats.available_agents}`);
console.log(`Completion rate: ${stats.completionRate()}%`);
```

Agent Selection Algorithm:
The findBestAgent method uses a scoring algorithm:
- Filters agents by required specialization
- Scores agents based on capability match (preferred capability gets bonus)
- Prioritizes available agents over busy ones
- Returns the highest-scoring agent or null if none found
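The scoring steps above can be sketched as a small standalone function (a simplified illustration over plain agent objects; the actual AgentManager implementation and score weights may differ):

```javascript
// Sketch of the findBestAgent scoring algorithm described above.
// Assumes plain agent objects with status, specializations, and capabilities.
function findBestAgent(agents, requiredSpecialization, preferredCapability) {
  const candidates = agents
    // Step 1: filter by required specialization
    .filter((a) => a.specializations.includes(requiredSpecialization))
    // Step 2: score by availability and capability match
    .map((a) => {
      let score = 0;
      if (a.status === 'available') score += 2; // prioritize available agents
      if (a.capabilities.includes(preferredCapability)) score += 1; // capability bonus
      return { agent: a, score };
    })
    // Step 3: highest score first
    .sort((x, y) => y.score - x.score);
  return candidates.length > 0 ? candidates[0].agent : null;
}
```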
Performance: Operations are O(1) for most agent lookups using hash maps, with O(n) for filtering operations. Agent selection is O(n log n) due to sorting. The system is optimized for in-memory operations with efficient data structures.
Design Patterns: Singleton pattern for AgentManager, Builder pattern for AgentUpdate, Repository pattern for data access, Factory pattern for agent creation, Strategy pattern for agent selection.
Storage Integration: Agents are loaded from storage on initialization. The system supports default agent creation if no agents exist. All agent operations can be persisted through the storage layer.
Error Handling: The system throws TodoziError for invalid operations such as assigning to unavailable agents, updating non-existent agents, or completing non-existent assignments.
API Key Management
The Todozi API Key Management System handles secure API key generation, authentication, and lifecycle management with a dual-key authentication approach for enhanced security.
Key Features:
- API key generation with cryptographically secure random identifiers (32-byte URL-safe tokens)
- User-based API key association with UUID identifiers
- Public/private key authentication mechanism for layered security
- Key activation/deactivation management without deletion
- Persistent storage using JSON files with atomic operations
- Error handling with custom error types
- Admin-level authentication requiring both keys
- Dual-indexed collections for O(1) lookups by user_id or public_key
- Key cloning to prevent external modification
Architecture:
The system uses an ApiKeyManager class that manages an ApiKeyCollection. Each API key has a public key (32-byte identifier) and private key (32-byte authentication token). Keys can be associated with users and have admin privileges. The system supports activation/deactivation without deletion, allowing temporary key suspension. The collection maintains dual indices for efficient lookups.
Data Models:
- ApiKey: Core API key entity with user_id, public_key, private_key, and active status
- ApiKeyCollection: Collection manager with dual indexing (_keys_by_user_id, _keys_by_public)
- ApiKeyManager: Main manager class (currently minimal; functions are exported directly)
Usage Example:
```javascript
// Create a new API key without user association
const newKey = createApiKey();
console.log('Public Key:', newKey.publicKey);
console.log('Private Key:', newKey.privateKey); // Store securely!

// Create user-specific key
const userKey = createApiKeyWithUserId('user123');
console.log('User ID:', userKey.userId);

// Retrieve API key by user ID
const retrievedKey = getApiKey('user123');
console.log('Retrieved Public Key:', retrievedKey.publicKey);

// Retrieve by public key
const keyByPublic = getApiKeyByPublic(userKey.publicKey);

// List all keys
const allKeys = listApiKeys();
console.log(`Total keys: ${allKeys.length}`);

// List only active keys
const activeKeys = listActiveApiKeys();
console.log(`Active keys: ${activeKeys.length}`);

// Authenticate using key pair
const [userId, isAdmin] = checkApiKeyAuth(
  userKey.publicKey,
  userKey.privateKey
);
console.log(`Authenticated user: ${userId}, Admin: ${isAdmin}`);

// Deactivate key (temporary suspension)
deactivateApiKey('user123');

// Reactivate key
activateApiKey('user123');

// Remove key permanently
const removedKey = removeApiKey('user123');
```

Security Considerations:
- Keys use cryptographically secure random generation via crypto.randomBytes() or equivalent
- 32-byte keys provide 256 bits of entropy (recommended for security)
- Private keys should NEVER be exposed in logs, error messages, or API responses
- Keys are cloned before returning to prevent external modification
- File permissions should be restricted (chmod 600) for API key storage files
- Keys stored in JSON files - ensure proper file system security
- Consider key rotation policies for long-term security
Performance: O(1) for key creation and lookups using dual-indexed dictionaries, O(n) for listing operations. File I/O operations are performed on each key management action. The system uses efficient hash map lookups for both user_id and public_key indices.
Storage: Default storage location is {storageDir}/api/api_keys.json. The system uses atomic file operations to prevent corruption. Keys are serialized as JSON arrays with all key properties.
Error Handling: The system throws TodoziError.ValidationError for operations on non-existent keys or invalid key pairs. All errors include descriptive messages for debugging.
Design Patterns: Factory pattern for key creation, Repository pattern for collection management, Decorator pattern for key cloning, Singleton pattern for storage access.
Base Tool Framework
A comprehensive framework for defining, managing, and executing tools in AI applications, particularly with language models like Ollama. This framework provides a structured approach to tool creation with validation, resource management, and standardized result handling.
Key Features:
- Tool parameter definitions with type validation and required/optional flags
- Resource locking system for concurrent operation safety (FilesystemWrite, FilesystemRead, Git, Memory, Shell, Network)
- Tool definitions with categories, descriptions, and metadata
- Tool result handling with success/error states, execution time tracking, and metadata
- Comprehensive error types (ValidationError, PermissionError, FileNotFound, TimeoutError, ResourceError, NetworkError, SecurityError, InternalError)
- Tool registry for managing tool collections with registration and execution
- Ollama format conversion for function calling API compatibility
- Parameter validation with type checking and constraint validation
- Error handler utilities for consistent error processing
Architecture:
The framework consists of several core classes: ToolParameter for parameter definitions with type and validation metadata, ToolDefinition for complete tool specifications including parameters and resource locks, ToolResult for standardized execution results, ToolError for tool-specific errors with detailed metadata, ErrorHandler for centralized error processing, and ToolRegistry for managing collections of tools. Tools extend a base Tool abstract class and must implement definition() and execute() methods.
Data Models:
- ToolParameter: Parameter definition with name, type, description, required flag, and optional default value
- ToolDefinition: Complete tool specification with name, description, parameters array, category, and resource_locks array
- ToolResult: Execution result with success flag, output string, error string, execution_time_ms, optional metadata, and optional recovery_context
- ToolError: Custom error with message, error_type (ErrorType enum), and optional details object
- ResourceLock: Enum for resource types (FilesystemWrite, FilesystemRead, Git, Memory, Shell, Network)
- ErrorType: Enum for error categories
- Tool: Abstract base class for all tools
- ToolRegistry: Registry for managing and executing tools
Usage Example:
```javascript
// Define a custom tool
class FileReadTool extends Tool {
  definition() {
    return new ToolDefinition(
      'file_read',
      'Read the contents of a file',
      [
        new ToolParameter('path', 'string', 'Path to the file to read', true),
        new ToolParameter('encoding', 'string', 'File encoding', false, 'utf8')
      ],
      'File Operations',
      [ResourceLock.FilesystemRead]
    );
  }

  async execute(kwargs) {
    const startTime = Date.now();
    try {
      // Validate required parameters
      const validationError = ErrorHandler.validateRequiredParams(
        kwargs,
        ['path']
      );
      if (validationError) return validationError;

      // Validate string parameter
      const pathValidation = ErrorHandler.validateStringParam(
        kwargs.path,
        'path',
        1,
        1000
      );
      if (pathValidation) return pathValidation;

      // Read the file
      const fs = require('fs');
      const content = fs.readFileSync(kwargs.path, kwargs.encoding || 'utf8');
      return ErrorHandler.createSuccessResult(
        content,
        Date.now() - startTime,
        { fileSize: content.length }
      );
    } catch (error) {
      return ErrorHandler.handleError(error, 'FileReadTool.execute');
    }
  }
}

// Register and use tools
const registry = new ToolRegistry();
registry.register(new FileReadTool());

// Get tool definitions in Ollama format
const ollamaDefs = registry.getToolDefinitions();
console.log(JSON.stringify(ollamaDefs, null, 2));

// Execute tool
const result = await registry.executeTool('file_read', {
  path: '/path/to/file.txt',
  encoding: 'utf8'
});
if (result.success) {
  console.log('File content:', result.output);
  console.log(`Execution time: ${result.execution_time_ms}ms`);
} else {
  console.error('Error:', result.error);
}

// Check if tool exists
if (registry.hasTool('file_read')) {
  console.log('Tool is registered');
}

// Get all tools
const allTools = registry.getAllTools();
console.log(`Registered tools: ${allTools.length}`);
```

Ollama Format Conversion: Tools can be converted to Ollama function calling format:
```javascript
const ollamaFormat = toolDefinition.toOllamaFormat();
// Returns:
// {
//   type: "function",
//   function: {
//     name: "tool_name",
//     description: "tool_description",
//     parameters: {
//       type: "object",
//       properties: { ... },
//       required: [ ... ]
//     }
//   }
// }
```

Error Handling:
The framework provides comprehensive error handling through ErrorHandler class:
- handleError(error, context): Converts errors to ToolResult
- validateRequiredParams(kwargs, required_params): Validates required parameters
- validateStringParam(value, param_name, min_length, max_length, pattern): Validates string parameters
- createSuccessResult(output, execution_time_ms, metadata): Creates a success result
- createErrorResult(error_msg, execution_time_ms, error_type, metadata): Creates an error result
Performance: O(1) for tool registration and retrieval using Maps. Parameter validation is O(n) where n is the number of parameters. Tool execution time depends on implementation. The registry uses efficient hash map lookups.
Design Patterns: Factory pattern for tool creation (static new() methods), Registry pattern for tool management, Template Method pattern for tool interface (abstract Tool class), Builder pattern for complex objects (helper functions), Comprehensive error handling strategy with ErrorHandler.
Resource Locking: The resource lock system helps prevent race conditions and unauthorized access. Tools declare required locks, and the system can enforce these locks during execution to ensure safe concurrent operations.
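The lock declaration and enforcement idea can be sketched as a small all-or-nothing lock manager (a hypothetical enforcement layer; the framework's actual locking mechanism may differ):

```javascript
// Sketch of resource-lock enforcement around tool execution.
// A registry could call acquire() with a tool's resource_locks before
// execute() and release() afterward.
class LockManager {
  constructor() {
    this.held = new Set(); // currently held resource locks
  }

  // Acquire all requested locks or none, so a tool never starts
  // with a partial set of its declared resources
  acquire(locks) {
    if (locks.some((l) => this.held.has(l))) return false; // conflict
    locks.forEach((l) => this.held.add(l));
    return true;
  }

  release(locks) {
    locks.forEach((l) => this.held.delete(l));
  }
}
```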
Helper Functions:
- createToolParameter(name, type, description, required): Creates a parameter without a default
- createToolParameterWithDefault(name, type, description, required, defaultValue): Creates a parameter with a default
- createToolDefinition(name, description, category, parameters): Creates a definition without locks
- createToolDefinitionWithLocks(name, description, category, parameters, resource_locks): Creates a definition with locks
Chunking System
The Todozi Code Generation System manages and orchestrates code generation tasks through a structured chunking approach with hierarchical levels, dependency management, and comprehensive state tracking for complex software development workflows.
Key Features:
- Hierarchical code chunking with predefined levels (Project: 100 tokens, Module: 500 tokens, Class: 1000 tokens, Method: 300 tokens, Block: 100 tokens)
- Dependency management between code chunks with graph-based relationships
- Project state tracking with line count, module completion, and global variables
- Context window management for maintaining generation context (previous/current class, imports, function signatures, error patterns)
- Chunk status tracking through defined lifecycle (Pending, InProgress, Completed, Validated, Failed)
- XML-based chunk definition parsing from formatted text
- Token estimation for chunks based on code content
- Dependency chain resolution for ordered processing
- Ready chunks identification (dependencies satisfied)
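The level/token-limit mapping and token estimation above can be illustrated with a small sketch (the ~4 characters-per-token ratio is a common heuristic assumed here, not necessarily the SDK's formula):

```javascript
// Sketch of the chunking levels and their token limits as listed above.
const CHUNK_LEVELS = {
  Project: { tokenLimit: 100 },
  Module: { tokenLimit: 500 },
  Class: { tokenLimit: 1000 },
  Method: { tokenLimit: 300 },
  Block: { tokenLimit: 100 },
};

// Naive token estimate: roughly 4 characters per token (assumed heuristic)
function estimateTokens(code) {
  return Math.ceil(code.length / 4);
}

function fitsLevel(code, level) {
  return estimateTokens(code) <= CHUNK_LEVELS[level].tokenLimit;
}
```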
Architecture:
The system uses a CodeGenerationGraph that manages CodeChunk objects with dependencies stored as a directed graph. Each chunk has a ChunkingLevel (with token limits and descriptions) and ChunkStatus (lifecycle state). The system maintains ProjectState for overall project tracking (total lines, max lines, current module, dependencies, completed/pending modules, global variables) and ContextWindow for generation context (previous/current class, next planned items, global vars in scope, imports, function signatures, error patterns).
Data Models:
- ChunkingLevel: Enum defining hierarchical levels with token limits, descriptions, and examples
- ChunkStatus: Enum for chunk lifecycle states (Pending, InProgress, Completed, Validated, Failed)
- CodeChunk: Individual code unit with chunkId, status, dependencies array, code string, tests string, validated flag, level, estimatedTokens, and timestamps
- ProjectState: Project state manager with totalLines, maxLines, currentModule, dependencies array, completedModules set, pendingModules set, and globalVariables Map
- ContextWindow: Context manager with previousClass, currentClass, nextPlanned, globalVarsInScope array, importsUsed array, functionSignatures Map, and errorPatternsSeen array
- CodeGenerationGraph: Main orchestrator managing the chunks Map, projectState, and contextWindow
Usage Example:
```javascript
const graph = new CodeGenerationGraph(10000);

// Add chunks with dependencies
graph.addChunk('project-setup', ChunkingLevel.Project, []);
graph.addChunk('database-module', ChunkingLevel.Module, ['project-setup']);
graph.addChunk('user-class', ChunkingLevel.Class, ['database-module']);
graph.addChunk('user-create-method', ChunkingLevel.Method, ['user-class']);

// Get ready chunks (dependencies satisfied)
const readyChunks = graph.getReadyChunks();
console.log('Ready chunks:', readyChunks);

// Get next chunk to work on
const nextChunkId = graph.getNextChunkToWorkOn();
if (nextChunkId) {
  const chunk = graph.getChunk(nextChunkId);
  console.log(`Processing: ${chunk.chunkId} at level ${chunk.level.toString()}`);

  // Update chunk code
  const generatedCode = `class User {\n  constructor(name) {\n    this.name = name;\n  }\n}`;
  graph.updateChunkCode(nextChunkId, generatedCode);

  // Update tests
  const generatedTests = `describe('User', () => {\n  it('should create user', () => {\n    // test\n  });\n});`;
  graph.updateChunkTests(nextChunkId, generatedTests);

  // Mark as completed
  graph.markChunkCompleted(nextChunkId);

  // Mark as validated
  graph.markChunkValidated(nextChunkId);
}

// Get dependency chain
const chain = graph.getDependencyChain('user-create-method');
// Returns: ['project-setup', 'database-module', 'user-class', 'user-create-method']

// Get chunks by level
const moduleChunks = graph.getChunksByLevel(ChunkingLevel.Module);

// Get project summary
const summary = graph.getProjectSummary();
console.log(summary);

// Update project state
graph.projectState.addCompletedModule('database-module');
graph.projectState.addPendingModule('api-module');
graph.projectState.setGlobalVariable('DATABASE_URL', 'postgresql://localhost:5432/mydb');
graph.projectState.incrementLines(150);

// Update context window
graph.contextWindow.addImport('express');
graph.contextWindow.addFunctionSignature('createUser', 'function createUser(userData)');
graph.contextWindow.setCurrentClass('UserController');
graph.contextWindow.addErrorPattern('TypeError: Cannot read property');
```

Chunk Format Parsing:
Chunks can be parsed from XML-like format:
<chunk>chunk_id; level; description; dependencies; code</chunk>
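A minimal parser for this format might look like the following (a sketch only; the SDK's parseChunkingFormat handles more edge cases, and this version assumes the code field itself contains no semicolons):

```javascript
// Minimal parser for the <chunk>...</chunk> format shown above.
function parseChunk(text) {
  const match = text.match(/<chunk>([\s\S]*?)<\/chunk>/);
  if (!match) return null;
  // Fields are semicolon-separated: id; level; description; dependencies; code
  const [chunkId, level, description, dependencies, code] = match[1]
    .split(';')
    .map((part) => part.trim());
  return {
    chunkId,
    level,
    description,
    // Dependencies are comma-separated; an empty field means no dependencies
    dependencies: dependencies ? dependencies.split(',').map((d) => d.trim()) : [],
    code,
  };
}
```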
Performance: O(1) for chunk creation, O(n×d) for ready chunks calculation where n is chunks and d is average dependencies, O(n+e) for dependency chain building where e is total dependencies (graph traversal). The system uses efficient graph traversal algorithms with visited set tracking to prevent cycles.
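The ready-chunk and dependency-chain computations described above can be sketched over a plain dependency map (illustrative only; CodeGenerationGraph's internal representation may differ):

```javascript
// A chunk is "ready" when all of its dependencies are completed.
function getReadyChunks(deps, completed) {
  return Object.keys(deps).filter(
    (id) => !completed.has(id) && deps[id].every((d) => completed.has(d))
  );
}

// Depth-first traversal producing dependencies before dependents;
// the visited set prevents infinite loops on cycles.
function dependencyChain(deps, id, visited = new Set()) {
  if (visited.has(id)) return [];
  visited.add(id);
  const chain = [];
  for (const dep of deps[id] || []) {
    chain.push(...dependencyChain(deps, dep, visited));
  }
  chain.push(id);
  return chain;
}
```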
Design Patterns: Factory pattern for chunk creation (ChunkingLevel.fromString()), Composite pattern for chunk relationships (graph structure), State pattern for chunk lifecycle (ChunkStatus), Builder pattern for parsing (parseChunkingFormat), Observer pattern for state updates (automatic updatedAt timestamps).
Optimization Recommendations:
- Cache ready chunks result and invalidate only when chunk statuses change
- Create reverse dependency index for faster lookup
- Batch updates to reduce timestamp updates
- Lazy parse chunk content only when needed
Error Handling: The system throws TodoziError for invalid chunking levels, missing dependencies, and parsing failures. All operations validate inputs and provide descriptive error messages.
CLI Handler
The Todozi Handler provides comprehensive CLI-based functionality for managing tasks, projects, agents, and AI-powered features through a unified command interface with extensive command support and integration capabilities.
Key Features:
- Task and project management with full CRUD operations
- API key generation, authentication, and lifecycle management
- Queue-based task processing with session management
- AI agent management and assignment
- Semantic search across all data types (tasks, memories, ideas, errors, training data)
- Memory and idea management with full feature set
- Training data collection and management
- Error tracking, resolution, and analytics
- Embedding model management and configuration
- Chat message processing with structured content extraction
- Backup and restore functionality
- Statistics and analytics reporting
- GUI/TUI launch support
Architecture:
The TodoziHandler class processes all commands and manages application state. It integrates with storage systems, supports API key management, queue processing, semantic search, and all other system features. The system uses a command pattern for all operations, providing a consistent interface. Each command type has a dedicated handler method that processes the command and returns results.
Command Categories:
- Task Commands: Add, List, Show, Update, Complete, Delete tasks
- Project Commands: Create, List, Show, Update projects
- API Commands: Register, List, Check, Activate, Deactivate, Remove API keys
- Queue Commands: Plan, List, Start, End queue sessions
- Server Commands: Start, Status, Endpoints server operations
- Search Commands: Unified search across all content types
- Chat Commands: Process messages and extract structured content
- Error Commands: Create, List, Show, Resolve, Delete errors
- Training Commands: Create, List, Show, Delete training data
- Agent Commands: Create, List, Show, Update, Delete agents
- Memory Commands: Create, List, Search memories
- Idea Commands: Create, List, Search ideas
- Embedding Commands: Configure and manage embedding models
Usage Example:
```javascript
const handler = await TodoziHandler.new(storage);

// Create task
await handler.handleAddCommand({
  type: 'Task',
  action: 'Implement authentication',
  time: '4 hours',
  priority: 'HIGH',
  project: 'auth-system',
  status: 'TODO',
  assignee: 'HUMAN',
  tags: 'security,authentication',
  context: 'Need to implement JWT-based authentication'
});

// List tasks with filters
await handler.handleListCommand({
  type: 'Tasks',
  project: 'auth-system',
  status: 'TODO',
  priority: 'HIGH'
});

// Update task
await handler.handleUpdateCommand('task-123', {
  status: 'IN_PROGRESS',
  progress: 50,
  context: 'Started implementation, working on JWT token generation'
});

// Complete task
await handler.completeTask('task-123');

// API key management
await handler.handleApiCommand({
  type: 'Register',
  userId: 'user-123'
});
await handler.handleApiCommand({
  type: 'List',
  activeOnly: true
});

// Queue management
await handler.handleQueueCommand({
  type: 'Plan',
  taskName: 'Database migration',
  taskDescription: 'Migrate user data to new schema',
  priority: 'HIGH',
  projectId: 'data-migration'
});
await handler.handleQueueCommand({
  type: 'Start',
  queueItemId: 'queue-item-456'
});

// Unified search
await handler.handleSearchAllCommand({
  type: 'SearchAll',
  query: 'authentication',
  types: 'tasks,memories,ideas'
});

// Chat processing
const content = await handler.processChatMessageExtended(
  "Meeting notes: <todozi>Review budget; 1 week; high; finance; todo</todozi>",
  'user-123'
);
console.log('Extracted tasks:', content.tasks);

// Error management
await handler.handleErrorCommand({
  type: 'CreateError',
  title: 'Database Connection Failed',
  description: 'Could not connect to database',
  severity: 'high',
  category: 'storage'
});

// Statistics
await handler.handleStatsCommand({
  type: 'Stats'
});

// Launch GUI
await handler.launchGui();
```

Command Processing Flow:
- Command received and validated
- Routed to appropriate handler method
- Handler processes command with storage/service integration
- Results formatted and returned
- Errors handled with custom error types
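The O(1) command routing described in this flow can be sketched with a type-to-handler map (hypothetical class and handler names; the real TodoziHandler wires this up differently):

```javascript
// Sketch of command-pattern routing: each command type maps to a handler,
// so dispatch is a single Map lookup.
class CommandRouter {
  constructor() {
    this.handlers = new Map();
  }

  register(type, handler) {
    this.handlers.set(type, handler);
  }

  async dispatch(command) {
    const handler = this.handlers.get(command.type);
    if (!handler) throw new Error(`Unknown command type: ${command.type}`);
    return handler(command); // handler processes and returns the result
  }
}
```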
Performance: O(1) for task creation, O(n) for searches and listings. The system uses efficient data structures and caching for frequently accessed data. Command routing is O(1) using command type mapping.
Design Patterns: Command pattern for operations (each command is an object), Factory pattern for entity creation, Builder pattern for complex updates (TaskUpdate), Strategy pattern for different storage/AI models, Singleton pattern for configuration, Facade pattern for unified interface.
Integration Points:
- Storage layer for data persistence
- Embedding service for semantic search
- API key management for authentication
- Agent management for task assignment
- Error management for error tracking
- Queue system for workflow processing
Embedding Service
The Todozi Embedding Service provides comprehensive semantic search and embedding management for task management and knowledge organization using sentence-transformers models and advanced similarity algorithms.
Key Features:
- Semantic search across all content types (Task, Tag, Memory, Idea, Chunk, Feel, Train, Error, Summary, Reminder, Tdz)
- Content clustering and similarity detection with configurable thresholds
- Embedding caching with TTL management and LRU eviction
- Hierarchical clustering capabilities with depth control
- Drift tracking for content evolution over time
- Performance profiling and diagnostics with timing metrics
- Multi-model embedding support with model comparison
- Batch processing for multiple embeddings
- Hybrid search combining semantic and keyword search
- Multi-query search with aggregation strategies (Average, Weighted, Max)
- Content-type agnostic unified embedding approach
Architecture:
The TodoziEmbeddingService manages embeddings using an LRU cache with TTL-based expiration. It supports multiple embedding models through the EmbeddingModel wrapper and provides similarity search using cosine similarity. The system can cluster content hierarchically and track embedding drift over time. The service integrates with tag management and storage systems for persistence.
Data Models:
- TodoziEmbeddingService: Main service class managing embeddings and operations
- TodoziEmbeddingConfig: Configuration with model_name, dimensions, similarity_threshold, max_results, cache_ttl_seconds, enable_clustering, and clustering_threshold
- TodoziEmbeddingCache: Cache entry with vector, content_type, content_id, text_content, tags, created_at, and ttl_seconds
- SimilarityResult: Search result with content_id, content_type, similarity_score, text_content, tags, and metadata
- ClusteringResult: Cluster result with cluster_id, content_items array, cluster_center vector, cluster_size, and average_similarity
- LRUEmbeddingCache: LRU cache implementation with memory limits
- EmbeddingModel: Model wrapper for sentence-transformers
Usage Example:
```javascript
const service = await TodoziEmbeddingService.new(config);

// Initialize with device selection ("cpu" or "cuda")
await service.initialize("cpu");

// Generate single embedding
const embedding = service.generate_embedding("Complete documentation");

// Batch processing
const texts = ["Task 1", "Task 2", "Task 3"];
const embeddings = service.generate_embeddings_batch(texts);

// Add task with embedding
await service.addTask({
  id: 'task-001',
  action: 'Complete documentation',
  context_notes: 'Write comprehensive documentation',
  priority: 'High',
  status: 'InProgress',
  tags: ['documentation', 'writing']
});

// Find similar tasks
const similar = await service.findSimilarTasks(
  'Write technical documentation',
  10
);
similar.forEach(result => {
  console.log(`${result.content_id}: ${result.similarity_score}`);
});

// Semantic search across content types
const results = await service.semanticSearch(
  'machine learning concepts',
  ['Task', 'Memory', 'Idea'],
  20
);

// Clustering
const clusters = await service.clusterContent();
clusters.forEach(cluster => {
  console.log(`Cluster ${cluster.cluster_id}: ${cluster.cluster_size} items`);
  console.log(`Average similarity: ${cluster.average_similarity}`);
});

// Hierarchical clustering
const hierarchical = await service.hierarchicalClustering(
  ['Task', 'Idea'],
  3 // max depth
);

// Hybrid search (semantic + keyword)
const hybridResults = await service.hybridSearch(
  'project planning',
  ['schedule', 'timeline', 'milestone'],
  ['Task', 'Memory'],
  0.7, // 70% semantic weight
  15
);

// Multi-query search
const multiResults = await service.multiQuerySearch(
  ['software development', 'coding practices', 'programming techniques'],
  'Average', // aggregation strategy
  ['Task', 'Idea'],
  10
);

// Drift tracking
const driftReport = await service.trackEmbeddingDrift(
  'content-001',
  'Updated content text here...'
);
console.log(`Drift: ${driftReport.drift_percentage}%`);

// Performance profiling
const metrics = await service.profileSearchPerformance(
  'machine learning',
  20 // iterations
);
console.log(`Average time: ${metrics.avg_time_ms}ms`);

// Statistics
const stats = await service.getStats();
console.log(`Total embeddings: ${stats.total_embeddings}`);
console.log(`Cache size: ${stats.cache_size}`);
```

Configuration Options:
- model_name: Pre-trained model identifier (default: "sentence-transformers/all-MiniLM-L6-v2")
- dimensions: Expected embedding dimensions (default: 384)
- similarity_threshold: Minimum similarity score (default: 0.7)
- max_results: Maximum search results (default: 50)
- cache_ttl_seconds: Cache time-to-live (default: 86400)
- enable_clustering: Enable clustering (default: true)
- clustering_threshold: Clustering similarity threshold (default: 0.8)
Performance: O(n) for semantic search (linear scan of cache), O(n²) for clustering (pairwise similarity), O(d) for cosine similarity where d is vector dimensions (typically 384). Cache lookups are O(1). Batch processing reduces overhead for multiple embeddings.
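The O(d) similarity comparison referenced above is standard cosine similarity over embedding vectors, which can be written directly:

```javascript
// Cosine similarity between two equal-length embedding vectors.
// One pass over the d dimensions computes the dot product and both norms.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  const denom = Math.sqrt(normA) * Math.sqrt(normB);
  return denom === 0 ? 0 : dot / denom; // guard against zero vectors
}
```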
Security: Embeddings contain semantic information that could potentially be reverse-engineered. Sensitive content should have shorter TTL values. The system implements proper access control and input validation. Cache entries include original text content which should be protected.
Caching Strategy:
- LRU cache with memory limits
- TTL-based expiration
- Automatic cleanup of expired entries
- Memory size estimation for cache entries
- Cache hit/miss tracking
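The combined TTL + LRU strategy above can be sketched using a Map's insertion-order iteration (a simplified model; the service's LRUEmbeddingCache also tracks memory size and hit/miss counts):

```javascript
// Sketch of an LRU cache with TTL-based expiration.
// A JS Map iterates in insertion order, so the first key is the
// least-recently-used entry once get() refreshes recency.
class TTLLRUCache {
  constructor(maxEntries, ttlMs) {
    this.maxEntries = maxEntries;
    this.ttlMs = ttlMs;
    this.map = new Map();
  }

  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, { value, expires: Date.now() + this.ttlMs });
    if (this.map.size > this.maxEntries) {
      // Evict the least-recently-used entry (first key in the Map)
      this.map.delete(this.map.keys().next().value);
    }
  }

  get(key) {
    const entry = this.map.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) { // TTL-based expiration
      this.map.delete(key);
      return undefined;
    }
    // Refresh recency by moving the entry to the end of the Map
    this.map.delete(key);
    this.map.set(key, entry);
    return entry.value;
  }
}
```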
Design Patterns: Service pattern for centralized operations, Factory pattern for service initialization, Strategy pattern for aggregation methods, Cache pattern for LRU management, Builder pattern for result construction.
Error Management
The Todozi Error Management System provides structured error definitions, centralized error management, and robust error parsing capabilities with comprehensive error lifecycle tracking and resolution management.
Key Features:
- Custom error class extending JavaScript Error with structured metadata
- Centralized error registry with full lifecycle management
- Error parsing from formatted text strings with validation
- Multiple error types (Validation, Storage, Config, IO, JSON, UUID, Chrono, Dialoguer, HLX, Reqwest, Dir, Embedding, API, Candle, NotImplemented, Serialization)
- Error resolution tracking with resolution notes and timestamps
- Comprehensive error metadata (severity, category, source, context, tags)
- Error severity parsing and validation (low, medium, high, critical)
- Error category parsing and validation (network, database, authentication, validation, logic, UI, API, system, other)
- UUID generation for error identification
Architecture:
The system uses a TodoziError class with static factory methods for different error types. The ErrorManager class maintains a registry of errors (Map-based) with resolution tracking. Error parsing functions can extract structured error information from formatted text. The system supports error severity and category validation with normalization.
Data Models:
- TodoziError: Custom error class extending Error with type and details properties
- ErrorManager: Error registry manager with create, resolve, and query methods
- Error object structure: id (UUID), title, description, severity, category, source, context, tags, resolved (boolean), resolved_at (Date), created_at (Date)
Usage Example:
// Create specific errors using factory methods
throw TodoziError.taskNotFound(123);
throw TodoziError.projectNotFound('my-project');
throw TodoziError.feelingNotFound(456);
throw TodoziError.invalidPriority('extreme');
throw TodoziError.invalidStatus('invalid-status');
throw TodoziError.invalidAssignee('invalid-assignee');
throw TodoziError.invalidProgress(150); // Must be 0-100
throw TodoziError.validation('Invalid input format');
throw TodoziError.storage('Storage operation failed');
throw TodoziError.config('Configuration error');
throw TodoziError.io('I/O operation failed');
throw TodoziError.json(new Error('JSON parse error'));
throw TodoziError.uuid(new Error('UUID generation failed'));
throw TodoziError.chrono(new Error('Date parsing failed'));
throw TodoziError.dialoguer(new Error('User input error'));
throw TodoziError.hlx(new Error('HLX format error'));
throw TodoziError.reqwest(new Error('HTTP request failed'));
throw TodoziError.dir('Directory operation failed');
throw TodoziError.embedding('Embedding operation failed');
throw TodoziError.api('API operation failed');
throw TodoziError.candle('Candle library error');
throw TodoziError.notImplemented('Feature not implemented');
throw TodoziError.serialization('Serialization failed');
// Error manager
const errorManager = new ErrorManager();
// Create and register error
const errorId = await errorManager.createError({
title: 'Database Connection Failed',
description: 'Could not connect to the main database',
severity: 'high',
category: 'storage',
source: 'DatabaseManager.connect',
context: 'Connection timeout after 30 seconds',
tags: ['database', 'connection', 'timeout']
});
// Get unresolved errors
const unresolved = errorManager.getUnresolvedErrors();
console.log(`Unresolved errors: ${unresolved.length}`);
// Resolve error with resolution note
await errorManager.resolveError(
errorId,
'Database connection restored after restarting database service'
);
// Parse error from formatted text
const errorText = "<error>Network Timeout;Connection to server timed out;high;network;api_client;Request ID: 12345;timeout,connection</error>";
const parsedError = parseErrorFormat(errorText);
console.log('Parsed error:', parsedError);
// Parse and validate severity
const severity = parseErrorSeverity('high'); // Returns normalized severity
// Parse and validate category
const category = parseErrorCategory('network'); // Returns normalized category
Error Format:
Errors can be parsed from XML-like format:
<error>title; description; severity; category; source; context; tags</error>
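A hedged sketch of how this semicolon-delimited format could be parsed; the real parseErrorFormat may validate and normalize fields differently:

```javascript
// Illustrative parser for the <error>...</error> format shown above.
// Not the SDK's parseErrorFormat; field handling is an assumption.
function parseErrorSketch(text) {
  const match = text.match(/<error>([\s\S]*?)<\/error>/);
  if (!match) throw new Error('Invalid error format: missing <error> tags');
  const parts = match[1].split(';').map(p => p.trim());
  if (parts.length < 4) {
    throw new Error('Invalid error format: expected at least title; description; severity; category');
  }
  const [title, description, severity, category, source = '', context = '', tags = ''] = parts;
  return {
    title,
    description,
    severity: severity.toLowerCase(),   // normalized like parseErrorSeverity
    category: category.toLowerCase(),   // normalized like parseErrorCategory
    source,
    context,
    tags: tags ? tags.split(',').map(t => t.trim()) : []
  };
}
```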
Error Types:
- TaskNotFound: Task with given ID not found
- ProjectNotFound: Project with given name not found
- FeelingNotFound: Feeling with given ID not found
- InvalidPriority: Invalid priority value
- InvalidStatus: Invalid status value
- InvalidAssignee: Invalid assignee value
- InvalidProgress: Invalid progress value (must be 0-100)
- ValidationError: General validation error
- StorageError: Storage operation error
- ConfigError: Configuration error
- IoError: Input/output error
- JsonError: JSON parsing error
- UuidError: UUID generation/parsing error
- ChronoError: Date/time parsing error
- DialoguerError: User input error
- HlxError: HLX format error
- ReqwestError: HTTP request error
- DirError: Directory operation error
- EmbeddingError: Embedding operation error
- ApiError: API operation error
- CandleError: Candle library error
- NotImplemented: Feature not implemented error
- SerializationError: Serialization error
Performance: O(1) for error creation and registration using Map storage, O(n) for retrieving unresolved errors (filtering), O(1) for error resolution (Map lookup and update). UUID generation is O(1).
Design Patterns: Factory pattern for error creation (static factory methods), Registry pattern for error management (ErrorManager), Custom Error Extension pattern (extends Error), Parser pattern for text parsing (parseErrorFormat).
Error Lifecycle:
- Error created and registered with UUID
- Error tracked in registry with unresolved status
- Error can be queried and filtered
- Error resolved with resolution note and timestamp
- Resolved errors remain in registry for audit trail
Security: Error messages should not expose sensitive information. The system provides structured error information that can be filtered before exposing to end users. Error details should be logged securely for debugging.
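One way to apply this filtering is an allow-list over the error object structure before serializing a response; the function name and chosen fields are illustrative, not part of the SDK:

```javascript
// Keep only user-safe fields; drop source/context, which may leak
// internal details (stack locations, connection strings, request IDs).
function toPublicError(err) {
  const { id, title, severity, category } = err;
  return { id, title, severity, category };
}
```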
Content Extraction
The Todozi Content Extraction System extracts structured data from unstructured text content using AI-powered APIs with support for multiple content types and output formats.
Key Features:
- Multi-format input (inline text or file paths)
- AI-powered extraction using Todozi API endpoints ("plan" and "strategic")
- Structured output formats (JSON, CSV, Markdown)
- Auto-embedding of extracted content to project files
- Human-readable checklist generation for review
- History tracking with comprehensive logs to mega log file
- Support for extracting tasks, memories, ideas, errors, and training data
- Configuration loading from .todozi/tdz.hlx file
- API key management through getTdzApiKey() function
Architecture:
The system uses extractContent and strategyContent functions that call Todozi API endpoints. Extracted content is parsed into structured objects (ExtractResponse containing arrays of tasks, memories, ideas, errors, training_data) and formatted according to output preferences. The system supports both "plan" endpoint for general extraction and "strategic" endpoint for strategic planning content.
Data Models:
- ExtractResponse: Container for all extracted content types
- ExtractedTask: Task entity with action, time, priority, project, status, assignee, tags
- ExtractedMemory: Memory entity with moment, meaning, reason, importance, term
- ExtractedIdea: Idea entity with idea text, share level, importance
- ExtractedError: Error entity with title, description, severity, category
- ExtractedTrainingData: Training data entity with prompt, completion, data_type
Usage Example:
// Extract from inline content
const result = await extractContent(
"Meeting notes: Review Q4 budget (High priority, due next week)",
null,
'json',
true // Generate human-readable checklist
);
// Extract from file
const fileResult = await extractContent(
null,
'/path/to/meeting-notes.txt',
'md',
false
);
// Strategic content analysis
const strategic = await strategyContent(
"Company Q4 Strategy: Expand into European markets (High importance, 18-month timeline)",
null,
'json',
true
);
// CSV output
const csvResult = await extractContent(
"Project kickoff: Design UI mockups (High priority, due Friday) - John",
null,
'csv',
false
);
// The result contains structured data
// JSON format:
// {
// tasks: [...],
// memories: [...],
// ideas: [...],
// errors: [...],
// training_data: [...],
// raw_tags: [...]
// }
Output Formats:
- JSON: Structured JSON with all extracted entities as arrays
- CSV: Comma-separated values format for spreadsheet import
- Markdown: Human-readable markdown format
- Human Checklist: Formatted checklist for review (when human=true)
Configuration:
The system requires a configuration file at ~/.todozi/tdz.hlx:
{
"registration": {
"user_id": "user-123",
"fingerprint": "device-fingerprint-456"
}
}
API Integration:
- Uses getTdzApiKey() for API key retrieval
- Makes HTTP POST requests to Todozi API endpoints
- Handles authentication via Bearer token
- Processes API responses and extracts structured content
Performance: one network round trip per request plus O(n) content processing, where n is the content size. File operations are O(1) for single files. Formatting is O(m), where m is the number of extracted items. The system includes retry logic for transient failures.
Security: API key management through secure key retrieval, content privacy through Todozi API, secure file system permissions, HTTPS communication for all API requests. Input validation and sanitization are implemented. File paths are validated to prevent directory traversal.
History Logging: Extracted tasks are automatically logged to ~/.todozi/history/mega_log.jsonl for audit and analysis. The system maintains comprehensive logs of all extracted content.
Error Handling: The system handles API request failures, configuration loading errors, file system errors, and parsing errors with descriptive error messages. All errors are wrapped in TodoziError for consistent error handling.
Design Patterns: Facade pattern for simplified API, Factory pattern for response creation, Strategy pattern for different endpoints, Singleton pattern for configuration access.
Idea Management
The Todozi Idea Management System provides comprehensive functionality for organizing, categorizing, and managing ideas within a collaborative environment with sharing controls, importance ratings, and advanced analytics.
Key Features:
- Idea CRUD operations with UUID-based identification
- Tag-based organization and filtering with case-insensitive matching
- Sharing control (Public, Team, Private) for collaboration
- Importance ranking (Breakthrough=5, High=4, Medium=3, Low=2, VeryLow=1)
- Full-text search across ideas, tags, and context fields
- Analytics and statistics with percentage calculations
- Text-based idea format parsing from XML-like markup
- Recent ideas retrieval with limit support
- Tag statistics and usage tracking
- Filtering by importance, share level, and tags
Architecture:
The IdeaManager class maintains a Map of ideas with UUID keys and a separate Map for tag indexing (idea_tags). Ideas have sharing levels, importance ratings, tags, and context. The system provides extensive filtering and search capabilities with case-insensitive matching. Statistics are calculated on-demand and include percentage distributions.
Data Models:
- Idea: Core idea entity with id (UUID), idea (text), share (ShareLevel), importance (IdeaImportance), tags (array), context (string), created_at, updated_at
- IdeaUpdate: Builder pattern class for constructing idea updates
- IdeaStatistics: Aggregated statistics with percentage methods
- ShareLevel: Enum for sharing levels (Public, Team, Private)
- IdeaImportance: Enum for importance levels (VeryLow=1, Low=2, Medium=3, High=4, Breakthrough=5)
Usage Example:
const ideaManager = new IdeaManager();
// Create idea
const ideaId = await ideaManager.createIdea({
idea: "Implement microservices architecture",
share: ShareLevel.TEAM,
importance: IdeaImportance.HIGH,
tags: ["architecture", "scalability"],
context: "System performance review"
});
// Update using builder pattern
const update = IdeaUpdate.new()
.withIdea("Advanced microservices architecture with service mesh")
.withImportance(IdeaImportance.BREAKTHROUGH)
.withTags(["architecture", "scalability", "microservices", "service-mesh"])
.withContext("Critical system redesign for scalability");
await ideaManager.updateIdea(ideaId, update);
// Search ideas
const results = ideaManager.searchIdeas("microservices");
// Filter by importance
const breakthrough = ideaManager.getBreakthroughIdeas();
const highImportance = ideaManager.getIdeasByImportance(IdeaImportance.HIGH);
// Filter by share level
const publicIdeas = ideaManager.getPublicIdeas();
const teamIdeas = ideaManager.getTeamIdeas();
const privateIdeas = ideaManager.getPrivateIdeas();
// Filter by tag
const architectureIdeas = ideaManager.getIdeasByTag("architecture");
// Get recent ideas
const recent = ideaManager.getRecentIdeas(10);
// Get all tags
const allTags = ideaManager.getAllTags();
// Get tag statistics
const tagStats = ideaManager.getTagStatistics();
for (const [tag, count] of tagStats) {
console.log(`${tag}: ${count} ideas`);
}
// Statistics
const stats = ideaManager.getIdeaStatistics();
console.log(`Total ideas: ${stats.totalIdeas}`);
console.log(`Public: ${stats.publicPercentage()}%`);
console.log(`Team: ${stats.teamPercentage()}%`);
console.log(`Private: ${stats.privatePercentage()}%`);
console.log(`Breakthrough: ${stats.breakthroughPercentage()}%`);
Idea Format Parsing:
Ideas can be parsed from text format:
<idea>idea text; share level; importance; tags; context</idea>
Performance: O(1) for CRUD operations using hash maps, O(n×m) for search where n is ideas and m is average text length, O(n) for filtering operations, O(n log n) for recent ideas (sorting required), O(n×t) for tag statistics where t is average tags per idea.
Design Patterns: Builder pattern for IdeaUpdate (fluent interface), Repository pattern for IdeaManager (data access abstraction), Factory pattern for parsing (parseIdeaFormat), Strategy pattern for filtering (different filter methods).
Storage: Ideas are stored in memory. Persistence should be handled by the storage layer integration. The system maintains tag indexes for efficient tag-based queries.
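The tag index described above can be sketched as a Map from normalized tag to a Set of idea IDs, which gives O(1) lookup per tag and cheap usage counts; class and method names are illustrative, not the SDK's internals:

```javascript
// Minimal tag index sketch: case-insensitive tag → Set of idea ids.
class TagIndex {
  constructor() { this.byTag = new Map(); }
  add(ideaId, tags) {
    for (const tag of tags) {
      const key = tag.toLowerCase(); // case-insensitive matching
      if (!this.byTag.has(key)) this.byTag.set(key, new Set());
      this.byTag.get(key).add(ideaId);
    }
  }
  get(tag) {
    return [...(this.byTag.get(tag.toLowerCase()) || [])];
  }
  statistics() {
    // tag → usage count, analogous to getTagStatistics()
    return new Map([...this.byTag].map(([t, ids]) => [t, ids.size]));
  }
}
```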
Models & Data Structures
The Models module provides comprehensive data structures and enums for the entire Todozi system with full type safety, validation, and consistent patterns across all entities.
Key Features:
- Enum-like classes for type safety (Priority, Status, Assignee, MemoryImportance, MemoryTerm, MemoryType, ShareLevel, IdeaImportance, ItemStatus, ErrorSeverity, ErrorCategory, TrainingDataType, ProjectStatus, AgentStatus, AssignmentStatus, QueueStatus, SummaryPriority, ReminderPriority, ReminderStatus)
- Core data models with comprehensive metadata (Task, Project, Agent, Memory, Idea, ApiKey, Error, TrainingData, Tag, Reminder, Summary)
- Builder patterns for updates with fluent interfaces
- Collection classes for managing groups of entities
- Comprehensive metadata support with timestamps and tracking
- Validation logic for all entity types
- Serialization support for persistence
Architecture: The module defines enum classes for all system constants with string values and parsing methods, core entity classes with full metadata and validation, builder classes for updates with method chaining, and collection classes for managing groups of entities. All classes follow consistent patterns for validation, serialization, and lifecycle management.
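The string-valued enum pattern with parsing can be sketched like this; the exact Priority class shape is an assumption based on the values and fromStr behavior documented below:

```javascript
// Enum-like class with string values and O(1)-style normalized parsing.
class Priority {
  static LOW = new Priority('low');
  static MEDIUM = new Priority('medium');
  static HIGH = new Priority('high');
  static CRITICAL = new Priority('critical');
  static URGENT = new Priority('urgent');
  constructor(value) { this.value = value; }
  toString() { return this.value; }
  // Parse with normalization; throws on unknown values
  static fromStr(s) {
    const all = [Priority.LOW, Priority.MEDIUM, Priority.HIGH, Priority.CRITICAL, Priority.URGENT];
    const match = all.find(p => p.value === String(s).toLowerCase().trim());
    if (!match) throw new Error(`Invalid priority: ${s}`);
    return match;
  }
}
```

Because the instances are static singletons, parsed values can be compared with strict equality (`Priority.fromStr("high") === Priority.HIGH`).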
Key Classes:
- Task: Core task entity with id, userId, action, time, priority, parentProject, status, assignee, tags, dependencies, contextNotes, progress, embeddingVector, timestamps
- TaskUpdate: Builder for task updates with fluent interface
- TaskFilters: Filtering criteria for task queries
- TaskCollection: Container for managing task collections
- Project: Project entity with name, description, status, tasks list, and statistics
- ProjectTaskContainer: Specialized container for project-based task management
- Agent: AI agent with id, name, description, capabilities, specializations, behaviors, constraints, metadata, timestamps
- Memory: Memory entity with moment, meaning, reason, importance, term, type, tags, timestamps
- Idea: Idea entity with idea text, share level, importance, tags, context, timestamps
- ApiKey: API key with user_id, public_key, private_key, active status
- ApiKeyCollection: Collection manager for API keys
- Error: Error entity with title, description, severity, category, source, context, tags, resolved flag, timestamps
- TrainingData: Training data with prompt, completion, data_type
- Tag: Tag entity with name, description, color, category, usage_count
- Reminder: Reminder with content, remind_at, priority, status, tags
- Summary: Summary with content, priority, context, tags
Usage Example:
// Create task using factory method
const task = Task.new(
userId,
"Complete documentation",
"4 hours",
Priority.HIGH,
"project-123",
Status.TODO
);
// Create full task with all fields
const fullTask = Task.newFull(
userId,
"Complete documentation",
"4 hours",
Priority.HIGH,
"project-123",
Status.TODO,
Assignee.HUMAN,
["documentation", "writing"],
[],
"Write comprehensive API documentation",
0
);
// Update using builder
const update = TaskUpdate.new()
.withAction("Complete API documentation")
.withStatus(Status.IN_PROGRESS)
.withProgress(50)
.withContext("Started writing introduction section")
.withTags(["documentation", "writing", "api"]);
task.update(update);
// Check task status
if (task.isCompleted()) {
console.log("Task is completed");
}
if (task.isActive()) {
console.log("Task is active");
}
// Complete task
task.complete();
// Enum parsing
const priority = Priority.fromStr("high"); // Returns Priority.HIGH
const status = Status.fromStr("in_progress"); // Returns Status.IN_PROGRESS
const assignee = Assignee.AI; // or Assignee.HUMAN, Assignee.COLLABORATIVE, Assignee.AGENT("agent-id")
Enum Classes:
- Priority: Low, Medium, High, Critical, Urgent
- Status: Todo, Pending, InProgress, Blocked, Review, Done, Completed, Cancelled, Deferred
- Assignee: Ai, Human, Collaborative, Agent(id)
- MemoryImportance: Low, Medium, High, Critical
- MemoryTerm: Short, Long
- MemoryType: Standard, Secret, Human, Emotional(emotion)
- ShareLevel: Public, Private, Team
- IdeaImportance: VeryLow, Low, Medium, High, Breakthrough
- ItemStatus: Active, Archived, Deleted
- ErrorSeverity: Low, Medium, High, Critical
- ErrorCategory: Network, Database, Authentication, Validation, Logic, UI, API, System, Other
- TrainingDataType: Instruction, Question, Conversation, Code, Documentation
- ProjectStatus: Active, Archived, Completed
- AgentStatus: Available, Busy, Inactive
- AssignmentStatus: Assigned, InProgress, Completed, Failed
- QueueStatus: Pending, Active, Completed, Cancelled
- SummaryPriority: Low, Medium, High, Critical
- ReminderPriority: Low, Medium, High, Critical
- ReminderStatus: Pending, Active, Completed, Cancelled
Validation:
- Progress must be between 0-100 (inclusive)
- Status and priority validated through enum parsing
- Automatic timestamp updates on modifications
- Required field validation on creation
- Type validation for all fields
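The progress rule above can be sketched as a small validator; the function name is illustrative, not an exported SDK helper:

```javascript
// Enforce the documented 0-100 inclusive range for task progress.
function validateProgress(progress) {
  if (typeof progress !== 'number' || !Number.isInteger(progress) || progress < 0 || progress > 100) {
    throw new Error(`Invalid progress ${progress}: must be an integer between 0 and 100 inclusive`);
  }
  return progress;
}
```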
Performance: O(1) for entity creation and updates, O(n) for collection operations. Enum parsing is O(1) with string mapping.
Design Patterns: Enum pattern for constants (string-based enums), Builder pattern for updates (fluent interfaces), Factory pattern for creation (static new() methods), Repository pattern for collections (data access abstraction), Validation pattern for input validation.
Serialization: All models support JSON serialization for persistence. The system handles serialization/deserialization automatically through the storage layer.
Todozi Core
The Todozi Core module provides structured parsing, execution, and management of various content types using a custom XML-like markup language with comprehensive support for all system entities and execution workflows.
Key Features:
- Custom XML-like markup parsing with validation
- Content type parsing (tasks, memories, ideas, errors, training data, feelings)
- Task execution for different assignee types (AI, Human, Collaborative, Agent)
- Workflow processing with dependency resolution
- Enum definitions for all system constants with parsing support
- Shorthand tag transformation for simplified syntax
- Extended chat message processing with multiple content types
- JSON example processing for structured data extraction
- Helper functions for enum parsing and validation
- Tag transformation utilities
Architecture: The module provides parsing functions for each content type that extract structured data from formatted text. Execution functions handle different task types with appropriate workflows. Helper functions transform shorthand tags, validate enums, and provide utility operations. The system supports both single content extraction and batch processing from chat messages.
Content Format Parsing:
- Tasks: <todozi>action; time; priority; project; status; assignee</todozi>
- Memories: <memory>moment; meaning; reason; importance; term; tags; context</memory>
- Ideas: <idea>idea text; share level; importance; tags; context</idea>
- Errors: <error>title; description; severity; category; source; context; tags</error>
- Training Data: <train>prompt; completion; data_type</train>
- Feelings: <feel>feeling; intensity; context; tags</feel>
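Since every format above uses the same tag-wrapped shape, batch extraction from a chat message can be sketched as one generic scan per tag name; the helper below is illustrative and returns raw bodies, whereas the real parsing functions return structured objects:

```javascript
// Collect the raw bodies of every <tag>...</tag> occurrence in a message.
// Sketch only; processChatMessage additionally parses each body.
function extractTagBodies(message, tag) {
  const re = new RegExp('<' + tag + '>([\\s\\S]*?)</' + tag + '>', 'g');
  return [...message.matchAll(re)].map(m => m[1].trim());
}
```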
Usage Example:
// Parse formatted task
const taskText = "<todozi>Complete docs; 4 hours; high; project-123; todo; human</todozi>";
const task = parseTodoziFormat(taskText);
console.log(task.action); // "Complete docs"
console.log(task.priority); // Priority.HIGH
// Parse memory
const memoryText = "<memory>First day; New beginning; Career start; high; long; career,new; Starting new job</memory>";
const memory = parseMemoryFormat(memoryText);
// Parse idea
const ideaText = "<idea>Microservices architecture; team; high; architecture,scalability; System redesign</idea>";
const idea = parseIdeaFormat(ideaText);
// Parse error
const errorText = "<error>Connection Failed; Database timeout; high; network; db_client; Request ID: 123; timeout,connection</error>";
const error = parseErrorFormat(errorText);
// Parse training data
const trainText = "<train>What is a task?; A task is a unit of work; instruction</train>";
const training = parseTrainingDataFormat(trainText);
// Parse feeling
const feelText = "<feel>excited; 8; Starting new project; work,new</feel>";
const feeling = parseFeelingFormat(feelText);
// Process chat message (extracts all content types)
const message = `
Meeting notes:
<todozi>Review budget; 1 week; high; finance; todo</todozi>
<memory>Budget meeting; Important discussion; Q4 planning; high; short; meeting,budget</memory>
<idea>Automated budget tracking; team; medium; automation,finance; Budget management</idea>
`;
const extracted = processChatMessage(message);
console.log('Tasks:', extracted.tasks);
console.log('Memories:', extracted.memories);
console.log('Ideas:', extracted.ideas);
// Extended processing with additional metadata
const extended = processChatMessageExtended(message, 'user-123');
console.log('Extended extraction:', extended);
// Process JSON examples
const jsonExamples = [
{ action: "Task 1", priority: "high" },
{ action: "Task 2", priority: "medium" }
];
const processed = processJsonExamples(jsonExamples);
// Execute tasks
await executeTask(task); // Generic execution
await executeAiTask(task); // AI-specific execution
await executeHumanTask(task); // Human-specific execution
await executeCollaborativeTask(task); // Collaborative execution
await executeAgentTask(task, 'agent-123'); // Agent-specific execution
// Process workflow
const workflow = [
{ action: "Setup", dependencies: [] },
{ action: "Implement", dependencies: ["Setup"] },
{ action: "Test", dependencies: ["Implement"] }
];
await processWorkflow(workflow);
// Transform shorthand tags
const shorthand = "Complete docs [high] [project-123]";
const transformed = transformShorthandTags(shorthand);
// Converts to: "<todozi>Complete docs; ASAP; high; project-123; todo; human</todozi>"
// Parse enums
const priority = parseEnum("high", Priority); // Returns Priority.HIGH
const status = parseEnum("in_progress", Status); // Returns Status.IN_PROGRESS
Helper Functions:
- transformShorthandTags(text): Converts shorthand syntax to full format
- parseEnum(value, enumClass): Parses enum values with validation
- processChatMessage(message): Extracts all content types from a message
- processChatMessageExtended(message, userId): Extended processing with user context
- processJsonExamples(examples): Processes structured JSON examples
- processWorkflow(workflow): Executes a workflow with dependency resolution
Execution Functions:
- executeTask(task): Generic task execution
- executeAiTask(task): AI agent execution
- executeHumanTask(task): Human execution workflow
- executeCollaborativeTask(task): Collaborative execution
- executeAgentTask(task, agentId): Specific agent execution
Parsing Functions:
- parseTodoziFormat(text): Parse task format
- parseMemoryFormat(text): Parse memory format
- parseIdeaFormat(text): Parse idea format
- parseErrorFormat(text): Parse error format
- parseTrainingDataFormat(text): Parse training data format
- parseFeelingFormat(text): Parse feeling format
Performance: O(n) for parsing where n is text length, O(1) for enum parsing with string mapping, O(n×d) for workflow processing where d is dependencies.
Design Patterns: Parser pattern for content extraction (dedicated parsing functions), Strategy pattern for execution (different execution methods), Factory pattern for object creation (parsing returns objects), Template Method pattern for workflow processing.
Error Handling: All parsing functions validate input and throw TodoziError for invalid formats. Enum parsing includes validation and normalization. Execution functions handle errors gracefully with proper error reporting.
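Dependency-ordered execution, as in processWorkflow, can be sketched by repeatedly running any step whose dependencies have completed; the function below is a sketch under that assumption, and the real implementation may schedule steps differently:

```javascript
// Execute workflow steps in dependency order; detects unsatisfiable cycles.
// Sketch only; `execute` stands in for the SDK's task execution functions.
async function processWorkflowSketch(steps, execute) {
  const done = new Set();
  const pending = [...steps];
  while (pending.length > 0) {
    // Find any step whose dependencies have all completed
    const idx = pending.findIndex(s => s.dependencies.every(d => done.has(d)));
    if (idx === -1) throw new Error('Circular or unsatisfiable dependencies');
    const step = pending.splice(idx, 1)[0];
    await execute(step);    // run the step (e.g. executeTask)
    done.add(step.action);  // mark complete so dependents become runnable
  }
}
```

This is O(n×d) in the number of steps and dependencies, matching the performance note above.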
Server
The Todozi Enhanced Server provides a comprehensive RESTful API for managing tasks, agents, training data, memories, and more with integrated AI capabilities, supporting multiple protocols and advanced features.
Key Features:
- RESTful API endpoints for all system resources with full CRUD operations
- Multiple protocol support (HTTP, HTTPS, gRPC, WebSocket)
- AI agent system with 26 pre-configured agents for automated task handling
- Training data management for AI model improvement
- Semantic search and similarity detection across all content types
- Time tracking and analytics with detailed metrics
- Memory and idea management with full feature set
- Queue-based workflow processing with session management
- API key authentication with role-based access control
- Real-time updates via WebSocket connections
- Comprehensive error handling and logging
- Request validation and sanitization
- Rate limiting and throttling
- CORS support for cross-origin requests
- Health check and status endpoints
- Metrics and monitoring endpoints
Architecture: The server uses a controller pattern with separate controllers for each resource type. Business logic is encapsulated in service classes. Data access is abstracted through storage interfaces. The system supports multiple protocols (HTTP/HTTPS for REST, gRPC for high-performance RPC, WebSocket for real-time) and includes comprehensive error handling, middleware for authentication and validation, and request/response transformation.
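A bearer-token authentication middleware like the one described can be sketched with an Express-style (req, res, next) signature; the signature and validator are assumptions for the sketch, and the server's actual middleware and role-based checks may differ:

```javascript
// Reject requests lacking a valid "Authorization: Bearer <key>" header.
function bearerAuth(validateKey) {
  return (req, res, next) => {
    const header = req.headers['authorization'] || '';
    const [scheme, token] = header.split(' ');
    if (scheme !== 'Bearer' || !token || !validateKey(token)) {
      res.statusCode = 401; // unauthorized: do not reach the controller
      return res.end(JSON.stringify({ error: 'Invalid or missing API key' }));
    }
    next(); // authenticated: continue to the controller
  };
}
```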
Key Endpoints:
- Tasks: /api/tasks - GET (list), POST (create), GET /:id (get), PUT /:id (update), DELETE /:id (delete)
- Projects: /api/projects - GET (list), POST (create), GET /:name (get), PUT /:name (update)
- Agents: /api/agents - GET (list), POST (create), GET /:id (get), PUT /:id (update), DELETE /:id (delete)
- Training Data: /api/training - GET (list), POST (create), GET /:id (get), DELETE /:id (delete)
- Memory: /api/memory - GET (list), POST (create), GET /:id (get), PUT /:id (update), DELETE /:id (delete)
- Ideas: /api/ideas - GET (list), POST (create), GET /:id (get), PUT /:id (update), DELETE /:id (delete)
- Search: /api/search - POST (semantic search across all types)
- Queue: /api/queue - GET (list), POST (plan), POST /:id/start (start), POST /:id/end (end)
- Health: /api/health - GET (health check)
- Status: /api/status - GET (server status and metrics)
- Endpoints: /api/endpoints - GET (list all available endpoints)
Usage Example:
// Start server
const server = new TodoziServer({
port: 8636,
host: '0.0.0.0',
enableHttps: false,
enableGrpc: true,
enableWebSocket: true,
corsEnabled: true,
rateLimitEnabled: true
});
await server.start();
// Create task via API
const response = await fetch('http://localhost:8636/api/tasks', {
method: 'POST',
headers: {
'Authorization': `Bearer ${apiKey}`,
'Content-Type': 'application/json'
},
body: JSON.stringify({
action: 'Complete documentation',
priority: 'high',
project: 'project-123',
status: 'todo',
assignee: 'human',
time: '4 hours'
})
});
const task = await response.json();
// List tasks with filters
const listResponse = await fetch('http://localhost:8636/api/tasks?project=project-123&status=todo', {
headers: {
'Authorization': `Bearer ${apiKey}`
}
});
const tasks = await listResponse.json();
// Update task
await fetch(`http://localhost:8636/api/tasks/${task.id}`, {
method: 'PUT',
headers: {
'Authorization': `Bearer ${apiKey}`,
'Content-Type': 'application/json'
},
body: JSON.stringify({
status: 'in_progress',
progress: 50
})
});
// Semantic search
const searchResponse = await fetch('http://localhost:8636/api/search', {
method: 'POST',
head