
officellm v1.1.6 · Published

A TypeScript library for multi-model agentic architecture with managers and workers

Downloads: 1,136

Readme

OfficeLLM

A powerful TypeScript framework for building multi-agent AI systems with continuous execution. Coordinate specialized AI workers that autonomously use tools and collaborate to complete complex tasks.

Features

  • Multi-Agent Architecture: Manager coordinates specialized worker agents
  • Continuous Execution: Agents autonomously work until task completion
  • User-Defined Tools: Bring your own tool implementations
  • Multiple LLM Providers: OpenAI, Anthropic, Google Gemini, OpenRouter
  • Memory System: Store and retrieve conversation history with In-Memory or Redis storage
  • Type-Safe: Full TypeScript support with Zod schemas
  • Flexible: Easy to extend and customize

Installation

npm install officellm

How It Works

Continuous Execution Model

OfficeLLM implements a continuous execution loop where:

  1. Manager Agent analyzes tasks and calls worker agents
  2. Worker Agents use their tools to complete subtasks
  3. Execution continues until the manager determines completion
  4. Completion signal: Manager responds without calling more workers

User Task → Manager → Worker (uses tools) → Manager → Worker → ... → Final Result
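The control flow above can be sketched in a few lines. This is a simplified illustration of the loop, not the library's internals: the `managerStep`/`workerStep` stubs stand in for real LLM calls, and the "no tool call means done" check mirrors the completion signal described in step 4.

```typescript
// Simplified sketch of the continuous execution loop (NOT the library's
// internal code): a manager delegates to workers until it replies without
// a tool call, which signals completion.
type AgentReply = { toolCall?: { worker: string; task: string }; text: string };

// Stub "manager": delegates twice, then signals completion by
// responding without a tool call.
function managerStep(history: string[]): AgentReply {
  if (history.length < 2) {
    return {
      toolCall: { worker: 'researcher', task: `subtask ${history.length + 1}` },
      text: '',
    };
  }
  return { text: `Final result from ${history.length} subtasks` };
}

// Stub "worker": a real worker would call its tools here.
function workerStep(task: string): string {
  return `completed ${task}`;
}

function runLoop(): string {
  const history: string[] = [];
  for (let i = 0; i < 20; i++) {            // iteration limit as a safety net
    const reply = managerStep(history);
    if (!reply.toolCall) return reply.text; // no tool call => completion
    history.push(workerStep(reply.toolCall.task));
  }
  throw new Error('iteration limit reached');
}

console.log(runLoop()); // manager stops after two delegations
```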

Key Concepts

  • Manager: Orchestrates the entire workflow, delegates to workers
  • Workers: Specialized agents with specific tools and expertise
  • Tools: Functions that workers can call (YOU provide implementations)
  • Completion: Detected when agents stop calling tools

Tool Implementations

IMPORTANT: You MUST provide tool implementations for your workers. The framework provides the skeleton; you provide the functionality.

Example: Web Search Tool

import { z } from 'zod';

const researchWorker = {
  name: 'researcher',
  tools: [
    {
      name: 'web_search',
      description: 'Search the web for information',
      parameters: z.object({
        query: z.string(),
        limit: z.number().default(10),
      }),
    },
  ],
  toolImplementations: {
    web_search: async (args) => {
      // YOUR implementation - integrate with Google, Bing, etc.
      const results = await yourSearchAPI(args.query, args.limit);
      return formatResults(results);
    },
  },
};

Example: Database Query Tool

const dataWorker = {
  name: 'data_analyst',
  tools: [
    {
      name: 'query_database',
      description: 'Query the database',
      parameters: z.object({
        sql: z.string(),
      }),
    },
  ],
  toolImplementations: {
    query_database: async (args) => {
      // YOUR implementation
      const results = await database.query(args.sql);
      return JSON.stringify(results);
    },
  },
};

Memory System

OfficeLLM includes an extensible memory system to store conversation history:

In-Memory Storage

const office = new OfficeLLM({
  memory: {
    type: 'in-memory',
    maxConversations: 1000, // Optional limit
  },
  // ... rest of config
});

Redis Storage

const office = new OfficeLLM({
  memory: {
    type: 'redis',
    host: 'localhost',
    port: 6379,
    password: 'secret', // Optional
    ttl: 86400, // 24 hours
  },
  // ... rest of config
});

Querying Memory

const memory = office.getMemory();

// Get all conversations
const conversations = await memory.queryConversations();

// Filter by agent type
const managerConvs = await memory.queryConversations({ 
  agentType: 'manager' 
});

// Get statistics
const stats = await memory.getStats();

// Always close when done
await office.close();

Custom Memory Providers

Easily add new storage backends (PostgreSQL, MongoDB, etc.) by extending BaseMemory and using registerMemory(). See documentation for details.
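To make the extension point concrete, here is an illustrative sketch of a custom provider. The real `BaseMemory` interface and `registerMemory()` signature may differ, so a stand-in interface with assumed method names is used here; a PostgreSQL or MongoDB backend would implement the same shape against its own client.

```typescript
// Illustrative sketch only — the real BaseMemory API may differ; check the
// package documentation. The interface below is a stand-in with assumed
// method names.
interface ConversationRecord {
  id: string;
  agentType: string;
  messages: string[];
}

abstract class BaseMemoryLike {
  abstract save(record: ConversationRecord): Promise<void>;
  abstract queryConversations(filter?: { agentType?: string }): Promise<ConversationRecord[]>;
  abstract close(): Promise<void>;
}

// A Map-backed provider; a real backend would issue database calls in
// place of the Map operations.
class MapMemory extends BaseMemoryLike {
  private store = new Map<string, ConversationRecord>();

  async save(record: ConversationRecord): Promise<void> {
    this.store.set(record.id, record);
  }

  async queryConversations(filter?: { agentType?: string }): Promise<ConversationRecord[]> {
    const all = [...this.store.values()];
    return filter?.agentType
      ? all.filter(r => r.agentType === filter.agentType)
      : all;
  }

  async close(): Promise<void> {
    this.store.clear();
  }
}
```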

Configuration

Manager Configuration

{
  name: 'Manager Name',
  description: 'What the manager does',
  provider: {
    type: 'gemini' | 'openai' | 'anthropic' | 'openrouter',
    apiKey: 'your-api-key',
    model: 'model-name',
    temperature: 0.7,
  },
  systemPrompt: 'Instructions for the manager...',
  maxIterations: 20, // Optional: Max iterations before stopping (default: 20)
  contextWindow: 10, // Optional: Number of recent messages to keep (default: 10)
  tools: [], // Optional: Custom tools for the manager
  toolImplementations: {}, // Optional: Tool implementations for manager
  restrictedWorkers: [], // Optional: Worker names to exclude from delegation
}

Worker Configuration

{
  name: 'Worker Name',
  description: 'What the worker does',
  provider: { /* LLM config */ },
  systemPrompt: 'Instructions for the worker...',
  tools: [
    {
      name: 'tool_name',
      description: 'What the tool does',
      parameters: zodSchema,
    },
  ],
  toolImplementations: {
    tool_name: async (args) => {
      // YOUR implementation
      return 'result';
    },
  },
  maxIterations: 25, // Optional: Max iterations before stopping (default: 25)
  contextWindow: 10, // Optional: Number of recent messages to keep (default: 10)
  restrictedTools: [], // Optional: Tool names to exclude from this worker
}

System Prompts Best Practices

Manager Prompts

systemPrompt: `You are a project manager.

Workflow:
1. Analyze the task
2. Call appropriate workers
3. Review worker results
4. Continue calling workers as needed
5. When complete, provide summary WITHOUT calling more workers

IMPORTANT: Signal completion by responding without tool calls`

Worker Prompts

systemPrompt: `You are a specialist.

Workflow:
1. Use your tools to complete the task
2. Call tools as needed (tools return complete results)
3. Review tool results - don't repeat the same call
4. When done, provide results WITHOUT calling more tools

IMPORTANT: Signal completion by responding without tool calls`

Examples

See the examples/ directory for complete examples:

  • real-world-demo.ts - Real world example
  • memory-demo.ts - Memory system usage examples

Advanced Features

Context Window Management

Control memory usage by limiting conversation history:

const worker = {
  name: 'analyst',
  contextWindow: 15, // Keep only last 15 messages + system prompt
  // ... rest of config
};

The context window automatically maintains the system prompt and keeps only the most recent N messages, preventing unbounded memory growth during long conversations.
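The trimming behavior described above can be sketched as a small pure function. This is a conceptual illustration, not the library's code: keep every system message, then keep only the most recent N of the remaining messages.

```typescript
type Message = { role: 'system' | 'user' | 'assistant'; content: string };

// Conceptual sketch of context-window trimming (not the library's internal
// code): the system prompt always survives; everything else is limited to
// the most recent `contextWindow` messages.
function trimToWindow(messages: Message[], contextWindow: number): Message[] {
  const system = messages.filter(m => m.role === 'system');
  const rest = messages.filter(m => m.role !== 'system');
  return [...system, ...rest.slice(-contextWindow)];
}
```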

Restricted Tools and Workers

Control which tools workers can use and which workers the manager can delegate to:

// Restrict specific tools from a worker
const worker = {
  name: 'researcher',
  tools: [searchTool, writeTool, deleteTool],
  restrictedTools: ['deleteTool'], // This worker cannot use deleteTool
  // ... rest of config
};

// Restrict manager from delegating to specific workers
const manager = {
  name: 'project_manager',
  restrictedWorkers: ['experimental_worker'], // Won't delegate to this worker
  // ... rest of config
};

Manager Tools

Managers can now have their own tools in addition to delegating to workers:

const manager = {
  name: 'smart_manager',
  tools: [
    {
      name: 'check_status',
      description: 'Check system status',
      parameters: z.object({ system: z.string() }),
    },
  ],
  toolImplementations: {
    check_status: async (args) => {
      // Manager's own tool implementation
      return `Status of ${args.system}: OK`;
    },
  },
  // ... rest of config
};

Safety Features

  • Iteration Limits: Prevents infinite loops (Manager: 20, Workers: 25, configurable)
  • Context Window: Automatic message history limiting to prevent memory issues
  • Error Handling: Graceful error catching at all levels
  • Missing Tools: Clear error messages when implementations are missing
  • Restricted Access: Fine-grained control over tool and worker access
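The "missing tools" check can be illustrated with a short lookup helper. This is a hypothetical sketch, not the framework's actual error handling; the real error message and mechanism will differ.

```typescript
// Hypothetical illustration of a missing-tool safety check — not the
// framework's actual code. Failing fast with a named error beats a vague
// runtime failure when a worker calls an unimplemented tool.
type ToolImpl = (args: unknown) => Promise<string>;

function resolveTool(name: string, impls: Record<string, ToolImpl>): ToolImpl {
  const impl = impls[name];
  if (!impl) {
    throw new Error(`No implementation provided for tool "${name}"`);
  }
  return impl;
}
```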

Contributing

See CONTRIBUTING.md for development guidelines.

License

MIT

Support