officellm
v1.1.6
A TypeScript library for multi-model agentic architecture with managers and workers
OfficeLLM
A powerful TypeScript framework for building multi-agent AI systems with continuous execution. Coordinate specialized AI workers that autonomously use tools and collaborate to complete complex tasks.
Features
- Multi-Agent Architecture: Manager coordinates specialized worker agents
- Continuous Execution: Agents autonomously work until task completion
- User-Defined Tools: Bring your own tool implementations
- Multiple LLM Providers: OpenAI, Anthropic, Google Gemini, OpenRouter
- Memory System: Store and retrieve conversation history with In-Memory or Redis storage
- Type-Safe: Full TypeScript support with Zod schemas
- Flexible: Easy to extend and customize
Installation
```bash
npm install officellm
```

How It Works
Continuous Execution Model
OfficeLLM implements a continuous execution loop where:
- Manager Agent analyzes tasks and calls worker agents
- Worker Agents use their tools to complete subtasks
- Execution continues until the manager determines completion
- Completion signal: Manager responds without calling more workers
```
User Task → Manager → Worker (uses tools) → Manager → Worker → ... → Final Result
```

Key Concepts
- Manager: Orchestrates the entire workflow, delegates to workers
- Workers: Specialized agents with specific tools and expertise
- Tools: Functions that workers can call (YOU provide implementations)
- Completion: Detected when agents stop calling tools
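The completion-detection loop described above can be sketched in plain TypeScript. This is a simplified model for illustration, not the library's actual internals: `AgentTurn`, `runLoop`, and the scripted manager are all hypothetical names.

```typescript
// Simplified model of the continuous execution loop. AgentTurn stands in
// for one LLM response: either a worker delegation or a plain text reply.
type AgentTurn =
  | { kind: 'call_worker'; worker: string; subtask: string }
  | { kind: 'respond'; text: string };

// Drive the manager until it responds without calling a worker --
// that response is the completion signal.
function runLoop(
  managerStep: (history: string[]) => AgentTurn,
  workers: Record<string, (subtask: string) => string>,
  maxIterations = 20,
): string {
  const history: string[] = [];
  for (let i = 0; i < maxIterations; i++) {
    const turn = managerStep(history);
    if (turn.kind === 'respond') return turn.text; // no tool call => done
    const result = workers[turn.worker](turn.subtask);
    history.push(`${turn.worker}: ${result}`);
  }
  throw new Error('Iteration limit reached'); // safety valve, see Safety Features
}

// Scripted manager: delegate once, then summarize without calling workers.
const result = runLoop(
  (history) =>
    history.length === 0
      ? { kind: 'call_worker', worker: 'researcher', subtask: 'find facts' }
      : { kind: 'respond', text: `Summary of ${history.length} worker result(s)` },
  { researcher: (t) => `results for "${t}"` },
);
```

The iteration cap mirrors the library's `maxIterations` safety limit: a manager that never stops delegating is cut off rather than looping forever.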
Tool Implementations
IMPORTANT: You MUST provide tool implementations for your workers. The framework provides the skeleton; you provide the functionality.
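Conceptually, the framework only needs to route a tool call by name to the function you registered. A plausible sketch of that dispatch step (hypothetical, not the library's actual code; real implementations are typically async, but this version is synchronous to stay minimal):

```typescript
// Hypothetical dispatch sketch: look up the user-supplied implementation by
// tool name and fail loudly when none is registered.
type ToolImpl = (args: Record<string, unknown>) => string;

function dispatchTool(
  implementations: Record<string, ToolImpl>,
  name: string,
  args: Record<string, unknown>,
): string {
  const impl = implementations[name];
  // A clear error when an implementation is missing, instead of a silent no-op.
  if (!impl) throw new Error(`No implementation registered for tool "${name}"`);
  return impl(args);
}

// Usage with a stubbed web_search implementation:
const impls: Record<string, ToolImpl> = {
  web_search: (args) => `searched: ${String(args.query)}`,
};
```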
Example: Web Search Tool
```typescript
import { z } from 'zod';

const researchWorker = {
  name: 'researcher',
  tools: [
    {
      name: 'web_search',
      description: 'Search the web for information',
      parameters: z.object({
        query: z.string(),
        limit: z.number().default(10),
      }),
    },
  ],
  toolImplementations: {
    web_search: async (args) => {
      // YOUR implementation - integrate with Google, Bing, etc.
      const results = await yourSearchAPI(args.query, args.limit);
      return formatResults(results);
    },
  },
};
```

Example: Database Query Tool
```typescript
const dataWorker = {
  name: 'data_analyst',
  tools: [
    {
      name: 'query_database',
      description: 'Query the database',
      parameters: z.object({
        sql: z.string(),
      }),
    },
  ],
  toolImplementations: {
    query_database: async (args) => {
      // YOUR implementation
      const results = await database.query(args.sql);
      return JSON.stringify(results);
    },
  },
};
```

Memory System
OfficeLLM includes an extensible memory system to store conversation history:
In-Memory Storage
```typescript
const office = new OfficeLLM({
  memory: {
    type: 'in-memory',
    maxConversations: 1000, // Optional limit
  },
  // ... rest of config
});
```

Redis Storage
```typescript
const office = new OfficeLLM({
  memory: {
    type: 'redis',
    host: 'localhost',
    port: 6379,
    password: 'secret', // Optional
    ttl: 86400, // 24 hours
  },
  // ... rest of config
});
```

Querying Memory
```typescript
const memory = office.getMemory();

// Get all conversations
const conversations = await memory.queryConversations();

// Filter by agent type
const managerConvs = await memory.queryConversations({
  agentType: 'manager',
});

// Get statistics
const stats = await memory.getStats();

// Always close when done
await office.close();
```

Custom Memory Providers
Easily add new storage backends (PostgreSQL, MongoDB, etc.) by extending BaseMemory and using registerMemory(). See documentation for details.
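To give a feel for what a custom backend involves, here is a self-contained sketch over a plain `Map`. The `ConversationRecord` shape and method signatures below are assumptions chosen to match the query API shown above; the real `BaseMemory` contract and `registerMemory()` usage are defined in the library documentation.

```typescript
// Illustrative only: the real BaseMemory interface lives in the library docs.
// The record shape and method signatures here are assumed for the sketch.
interface ConversationRecord {
  id: string;
  agentType: 'manager' | 'worker';
  messages: string[];
}

// A minimal provider implemented over an in-process Map.
class MapMemory {
  private store = new Map<string, ConversationRecord>();

  async save(record: ConversationRecord): Promise<void> {
    this.store.set(record.id, record);
  }

  async queryConversations(filter?: { agentType?: string }): Promise<ConversationRecord[]> {
    const all = [...this.store.values()];
    return filter?.agentType ? all.filter((c) => c.agentType === filter.agentType) : all;
  }

  async getStats(): Promise<{ count: number }> {
    return { count: this.store.size };
  }
}
```

A real backend (PostgreSQL, MongoDB, etc.) would replace the `Map` with client calls while keeping the same async surface.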
Configuration
Manager Configuration
```typescript
{
  name: 'Manager Name',
  description: 'What the manager does',
  provider: {
    type: 'gemini' | 'openai' | 'anthropic' | 'openrouter',
    apiKey: 'your-api-key',
    model: 'model-name',
    temperature: 0.7,
  },
  systemPrompt: 'Instructions for the manager...',
  maxIterations: 20, // Optional: Max iterations before stopping (default: 20)
  contextWindow: 10, // Optional: Number of recent messages to keep (default: 10)
  tools: [], // Optional: Custom tools for the manager
  toolImplementations: {}, // Optional: Tool implementations for the manager
  restrictedWorkers: [], // Optional: Worker names to exclude from delegation
}
```

Worker Configuration
```typescript
{
  name: 'Worker Name',
  description: 'What the worker does',
  provider: { /* LLM config */ },
  systemPrompt: 'Instructions for the worker...',
  tools: [
    {
      name: 'tool_name',
      description: 'What the tool does',
      parameters: zodSchema,
    },
  ],
  toolImplementations: {
    tool_name: async (args) => {
      // YOUR implementation
      return 'result';
    },
  },
  maxIterations: 25, // Optional: Max iterations before stopping (default: 25)
  contextWindow: 10, // Optional: Number of recent messages to keep (default: 10)
  restrictedTools: [], // Optional: Tool names to exclude from this worker
}
```

System Prompts Best Practices
Manager Prompts
```typescript
systemPrompt: `You are a project manager.

Workflow:
1. Analyze the task
2. Call appropriate workers
3. Review worker results
4. Continue calling workers as needed
5. When complete, provide summary WITHOUT calling more workers

IMPORTANT: Signal completion by responding without tool calls`
```

Worker Prompts
```typescript
systemPrompt: `You are a specialist.

Workflow:
1. Use your tools to complete the task
2. Call tools as needed (tools return complete results)
3. Review tool results - don't repeat the same call
4. When done, provide results WITHOUT calling more tools

IMPORTANT: Signal completion by responding without tool calls`
```

Examples
See the examples/ directory for complete examples:
- real-world-demo.ts - Real-world example
- memory-demo.ts - Memory system usage examples
Advanced Features
Context Window Management
Control memory usage by limiting conversation history:
```typescript
const worker = {
  name: 'analyst',
  contextWindow: 15, // Keep only the last 15 messages + system prompt
  // ... rest of config
};
```

The context window automatically maintains the system prompt and keeps only the most recent N messages, preventing unbounded memory growth during long conversations.
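The trimming rule amounts to a few lines. This is a simplified sketch of the behavior described above, not the library's actual code; `Message` and `trimContext` are hypothetical names:

```typescript
// Sketch of context-window trimming: always keep system messages,
// plus only the last N non-system messages.
interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

function trimContext(messages: Message[], contextWindow: number): Message[] {
  const system = messages.filter((m) => m.role === 'system');
  const rest = messages.filter((m) => m.role !== 'system');
  return [...system, ...rest.slice(-contextWindow)];
}
```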
Restricted Tools and Workers
Control which tools workers can use and which workers the manager can delegate to:
```typescript
// Restrict specific tools from a worker
const worker = {
  name: 'researcher',
  tools: [searchTool, writeTool, deleteTool],
  restrictedTools: ['deleteTool'], // This worker cannot use deleteTool
  // ... rest of config
};
```
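The restriction is name-based filtering: tools whose names appear in `restrictedTools` are simply withheld from the worker. A hypothetical sketch of that step (not the library's actual code):

```typescript
// Drop any tool whose name appears in the restricted list before
// the tool set is handed to the model.
interface ToolSpec {
  name: string;
  description: string;
}

function allowedTools(tools: ToolSpec[], restricted: string[]): ToolSpec[] {
  return tools.filter((t) => !restricted.includes(t.name));
}
```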
```typescript
// Restrict the manager from delegating to specific workers
const manager = {
  name: 'project_manager',
  restrictedWorkers: ['experimental_worker'], // Won't delegate to this worker
  // ... rest of config
};
```

Manager Tools
Managers can have their own tools in addition to delegating to workers:
```typescript
const manager = {
  name: 'smart_manager',
  tools: [
    {
      name: 'check_status',
      description: 'Check system status',
      parameters: z.object({ system: z.string() }),
    },
  ],
  toolImplementations: {
    check_status: async (args) => {
      // Manager's own tool implementation
      return `Status of ${args.system}: OK`;
    },
  },
  // ... rest of config
};
```

Safety Features
- Iteration Limits: Prevents infinite loops (Manager: 20, Workers: 25, configurable)
- Context Window: Automatic message history limiting to prevent memory issues
- Error Handling: Graceful error catching at all levels
- Missing Tools: Clear error messages when implementations are missing
- Restricted Access: Fine-grained control over tool and worker access
Contributing
See CONTRIBUTING.md for development guidelines.
License
MIT
