
@fondation-io/agents

v1.1.0


Multi-agent orchestration system built on AI SDK v5 - handoffs, routing, and coordination for any AI provider


@fondation-io/agents

🔱 Fork Notice: This is part of the @fondation-io/ai-sdk-tools fork.


Multi-agent orchestration for AI SDK v5. Build intelligent workflows with specialized agents, automatic handoffs, and seamless coordination. Works with any AI provider.

npm install @fondation-io/agents ai zod

Why Multi-Agent Systems?

Complex tasks benefit from specialized expertise. Instead of a single model handling everything, break work into focused agents:

Customer Support: Triage → Technical Support → Billing
Content Pipeline: Research → Writing → Editing → Publishing
Code Development: Planning → Implementation → Testing → Documentation
Data Analysis: Collection → Processing → Visualization → Insights

Benefits

  • Specialization - Each agent focuses on its domain with optimized instructions and tools
  • Context Preservation - Full conversation history maintained across handoffs
  • Provider Flexibility - Use different models for different tasks (GPT-4 for analysis, Claude for writing)
  • Programmatic Routing - Pattern matching and automatic agent selection
  • Production Ready - Built on AI SDK v5 with streaming, error handling, and observability

When to Use Agents

Use multi-agent systems when:

  • Tasks require distinct expertise (technical vs. creative vs. analytical)
  • Workflow has clear stages that could be handled independently
  • Different models excel at different parts of the task
  • You need better control over specialized behavior

Use single model when:

  • Task is straightforward and can be handled by general instructions
  • No clear separation of concerns
  • Response time is critical (multi-agent adds orchestration overhead)

Core Concepts

Agent

An AI with specialized instructions, tools, and optional context. Each agent is configured with a language model and system prompt tailored to its role.

Memory & Conversation History

Agents automatically load conversation history from storage when memory is enabled. This creates a clean separation of concerns:

  • Frontend: Sends only the new user message
  • Backend: Loads conversation history from storage
  • Storage: Single source of truth for all conversations

This approach:

  • Reduces network payload (no need to send full history)
  • Provides consistent context across requests
  • Enables server-side control of context window size via lastMessages config
  • Integrates seamlessly with @fondation-io/memory providers
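Under this model, the client-side request stays minimal. The following sketch illustrates the idea; the field names and endpoint are assumptions, not a prescribed wire format:

```typescript
// Illustrative: the frontend sends only the new user message plus a chat id;
// the backend resolves prior history from storage (field names are assumptions).
interface ChatRequestBody {
  chatId: string;
  message: { role: 'user'; content: string };
}

function buildChatRequestBody(chatId: string, text: string): ChatRequestBody {
  return { chatId, message: { role: 'user', content: text } };
}

// e.g. fetch('/api/chat', { method: 'POST',
//   body: JSON.stringify(buildChatRequestBody('chat-789', 'What about pricing?')) })
```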

Handoffs

Agents can transfer control to other agents while preserving conversation context. Handoffs include the reason for transfer and any relevant context.

Enhanced Context Management: The system now supports OpenAI-style context filtering during handoffs, allowing you to control exactly what information is passed between agents using HandoffInputFilter functions.

Orchestration

Automatic routing between agents based on:

  • Programmatic matching: Pattern-based routing with matchOn
  • LLM-based routing: The orchestrator agent decides which specialist to invoke
  • Hybrid: Combine both for optimal performance
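To make the programmatic path concrete, here is a standalone sketch of how matchOn-style matching against string and RegExp patterns could work. This is illustrative only, not the library's actual internals:

```typescript
// Illustrative sketch of matchOn-style routing (not the library's internals).
type MatchPattern = string | RegExp;

// True when any pattern matches: strings match case-insensitively as
// substrings, RegExps via .test().
function matchesAny(message: string, patterns: MatchPattern[]): boolean {
  const lower = message.toLowerCase();
  return patterns.some((p) =>
    typeof p === 'string' ? lower.includes(p.toLowerCase()) : p.test(message)
  );
}

// Pick the first agent whose patterns match; null means "no programmatic
// match", at which point an LLM-based router could take over.
function routeProgrammatically<T extends { name: string; matchOn?: MatchPattern[] }>(
  message: string,
  agents: T[]
): T | null {
  return agents.find((a) => a.matchOn && matchesAny(message, a.matchOn)) ?? null;
}
```

Because no model call is involved, this kind of matching resolves instantly; the LLM router only runs when every pattern misses.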

Enhanced Features (v0.3.0+)

Context Management & Handoff Filtering

The package now includes OpenAI-style context management with AgentRunContext and HandoffInputFilter support:

import { Agent, handoff, removeAllTools, keepLastNMessages } from '@fondation-io/agents';

// Configure handoffs with context filtering
const specialist = new Agent({
  name: 'Specialist',
  model: openai('gpt-4o'),
  instructions: 'Specialized instructions...',
});

const orchestrator = new Agent({
  name: 'Orchestrator',
  model: openai('gpt-4o'),
  instructions: 'Route to specialists...',
  handoffs: [
    // Remove all tool calls when handing off
    handoff(specialist, {
      inputFilter: removeAllTools,
      onHandoff: async (runContext) => {
        console.log('Handing off to specialist');
      },
    }),
    // Keep only last 10 messages for context windowing
    handoff(anotherSpecialist, {
      inputFilter: keepLastNMessages(10),
    }),
  ],
});

Agent Communication During Handoffs

Agents automatically share context through conversationMessages during handoffs. No separate tools needed.

Working Memory

Working memory automatically loads and provides update capability when enabled:

const agent = createAgent({
  memory: {
    workingMemory: { 
      enabled: true,
      scope: 'user', // Persists across all chats for this user
      template: 'Custom template...'
    }
  }
});

When enabled:

  • Working memory loads automatically into system instructions
  • Agent gets updateWorkingMemory tool to update preferences/context
  • Updates persist in storage via the memory provider

Pre-built Handoff Filters

  • removeAllTools() - Remove all tool-related messages
  • keepLastNMessages(n) - Keep only the last N messages for context windowing
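The semantics of these two filters can be sketched in a few lines. The shapes below are simplified assumptions (the real HandoffInputData type is richer), but they show the behavior:

```typescript
// Simplified shapes for illustration; not the library's actual types.
interface Msg { role: string; content: string; toolCallId?: string }
interface HandoffInputData { messages: Msg[] }
type HandoffInputFilter = (data: HandoffInputData) => HandoffInputData;

// removeAllTools: drop tool-related messages so the receiving agent
// sees only the plain conversation.
const removeAllToolsSketch: HandoffInputFilter = (data) => ({
  ...data,
  messages: data.messages.filter(
    (m) => m.role !== 'tool' && m.toolCallId === undefined
  ),
});

// keepLastNMessages: window the context to the most recent n messages.
const keepLastNMessagesSketch = (n: number): HandoffInputFilter => (data) => ({
  ...data,
  messages: data.messages.slice(-n),
});
```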

Quick Start

Basic: Single Agent

import { Agent } from '@fondation-io/agents';
import { openai } from '@ai-sdk/openai';

const agent = new Agent({
  name: 'Assistant',
  model: openai('gpt-4o'),
  instructions: 'You are a helpful assistant.',
});

// Generate response
const result = await agent.generate({
  prompt: 'What is 2+2?',
});

console.log(result.text); // "4"

Handoffs: Two Specialists

import { Agent } from '@fondation-io/agents';
import { openai } from '@ai-sdk/openai';

// Create specialized agents
const mathAgent = new Agent({
  name: 'Math Tutor',
  model: openai('gpt-4o'),
  instructions: 'You help with math problems. Show step-by-step solutions.',
});

const historyAgent = new Agent({
  name: 'History Tutor',
  model: openai('gpt-4o'),
  instructions: 'You help with history questions. Provide context and dates.',
});

// Create orchestrator with handoff capability
const orchestrator = new Agent({
  name: 'Triage',
  model: openai('gpt-4o'),
  instructions: 'Route questions to the appropriate specialist.',
  handoffs: [mathAgent, historyAgent],
});

// LLM decides which specialist to use
const result = await orchestrator.generate({
  prompt: 'What is the quadratic formula?',
});

console.log(`Handled by: ${result.finalAgent}`); // "Math Tutor"
console.log(`Handoffs: ${result.handoffs.length}`); // 1

Orchestration: Auto-Routing

Use programmatic routing for instant agent selection without LLM overhead:

const mathAgent = new Agent({
  name: 'Math Tutor',
  model: openai('gpt-4o'),
  instructions: 'You help with math problems.',
  matchOn: ['calculate', 'math', 'equation', /\d+\s*[\+\-\*\/]\s*\d+/],
});

const historyAgent = new Agent({
  name: 'History Tutor',
  model: openai('gpt-4o'),
  instructions: 'You help with history questions.',
  matchOn: ['history', 'war', 'civilization', /\d{4}/], // Years
});

const orchestrator = new Agent({
  name: 'Smart Router',
  model: openai('gpt-4o-mini'), // Efficient for routing
  instructions: 'Route to specialists. Fall back to handling general questions.',
  handoffs: [mathAgent, historyAgent],
});

// Automatically routes to mathAgent based on pattern match
const result = await orchestrator.generate({
  prompt: 'What is 15 * 23?',
});

Advanced Patterns

Streaming with UI

For Next.js route handlers and real-time UI updates:

// app/api/chat/route.ts
import { Agent } from '@fondation-io/agents';
import { openai } from '@ai-sdk/openai';

const supportAgent = new Agent({
  name: 'Support',
  model: openai('gpt-4o'),
  instructions: 'Handle customer support inquiries.',
  handoffs: [technicalAgent, billingAgent],
});

export async function POST(req: Request) {
  const { message, chatId } = await req.json();

  return supportAgent.toUIMessageStream({
    message, // Only pass the new user message
    context: { chatId }, // Storage will provide conversation history
    maxRounds: 5, // Max handoffs
    maxSteps: 10, // Max tool calls per agent
    onEvent: async (event) => {
      if (event.type === 'agent-handoff') {
        console.log(`Handoff: ${event.from} → ${event.to}`);
      }
    },
  });
}

Tools and Context

import { tool } from 'ai';
import { z } from 'zod';

const calculatorTool = tool({
  description: 'Perform calculations',
  parameters: z.object({
    expression: z.string(),
  }),
  execute: async ({ expression }) => {
    return eval(expression); // Use safe-eval in production
  },
});

const agent = new Agent({
  name: 'Calculator Agent',
  model: openai('gpt-4o'),
  instructions: 'Help with math using the calculator tool.',
  tools: {
    calculator: calculatorTool,
  },
  maxTurns: 20, // Max tool call iterations
});

Context-Aware Agents

Use typed context for team/user-specific behavior:

interface TeamContext {
  teamId: string;
  userId: string;
  preferences: Record<string, string>;
}

const agent = new Agent<TeamContext>({
  name: 'Team Assistant',
  model: openai('gpt-4o'),
  instructions: (context) => {
    return `You are helping team ${context.teamId}. 
    User preferences: ${JSON.stringify(context.preferences)}`;
  },
});

// Pass context when streaming
agent.toUIMessageStream({
  message: userMessage, // New user message
  context: {
    teamId: 'team-123',
    userId: 'user-456',
    chatId: 'chat-789', // For conversation history
    preferences: { theme: 'dark', language: 'en' },
  },
});

Custom Routing Function

const expertAgent = new Agent({
  name: 'Expert',
  model: openai('gpt-4o'),
  instructions: 'Handle complex technical questions.',
  matchOn: (message) => {
    const complexity = calculateComplexity(message);
    return complexity > 0.7;
  },
});

Multi-Provider Setup

Use the best model for each task:

import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import { google } from '@ai-sdk/google';

const researchAgent = new Agent({
  name: 'Researcher',
  model: anthropic('claude-3-5-sonnet-20241022'), // Excellent reasoning
  instructions: 'Research topics thoroughly.',
});

const writerAgent = new Agent({
  name: 'Writer',
  model: openai('gpt-4o'), // Great at creative writing
  instructions: 'Create engaging content.',
});

const editorAgent = new Agent({
  name: 'Editor',
  model: google('gemini-1.5-pro'), // Strong at review
  instructions: 'Review and improve content.',
  handoffs: [writerAgent], // Can send back for rewrites
});

const pipeline = new Agent({
  name: 'Content Manager',
  model: openai('gpt-4o-mini'), // Efficient orchestrator
  instructions: 'Coordinate content creation.',
  handoffs: [researchAgent, writerAgent, editorAgent],
});

Guardrails

Control agent behavior with input/output validation:

const agent = new Agent({
  name: 'Moderated Agent',
  model: openai('gpt-4o'),
  instructions: 'Answer questions helpfully.',
  inputGuardrails: [
    async (input) => {
      if (containsProfanity(input)) {
        return { 
          pass: false, 
          action: 'block',
          message: 'Input violates content policy',
        };
      }
      return { pass: true };
    },
  ],
  outputGuardrails: [
    async (output) => {
      if (containsSensitiveInfo(output)) {
        return { 
          pass: false, 
          action: 'modify',
          modifiedOutput: redactSensitiveInfo(output),
        };
      }
      return { pass: true };
    },
  ],
});

Tool Permissions

Control which tools agents can access:

const agent = new Agent({
  name: 'Restricted Agent',
  model: openai('gpt-4o'),
  instructions: 'Help with tasks.',
  tools: {
    readData: readDataTool,
    writeData: writeDataTool,
    deleteData: deleteDataTool,
  },
  permissions: {
    allowed: ['readData', 'writeData'], // deleteData blocked
    maxCallsPerTool: {
      writeData: 5, // Limit writes
    },
  },
});

Usage Tracking

Automatically track token usage and costs across all agents with global configuration:

import { configureUsageTracking, extractOpenRouterUsage } from '@fondation-io/agents';

// Configure once at app startup
configureUsageTracking({
  onUsage: async (event) => {
    // Automatically tracks all agent.generate() and agent.stream() calls
    await db.usage.create({
      agentName: event.agentName,
      sessionId: event.sessionId,
      tokens: event.usage?.totalTokens || 0,
      cost: extractOpenRouterUsage({ providerMetadata: event.providerMetadata })?.cost || 0,
      timestamp: new Date(),
    });
  },
  onError: (error, event) => {
    console.error('Tracking failed:', error);
  }
});

// Then use agents normally - tracking happens automatically
const result = await agent.generate({ prompt: "Hello" });
const stream = agent.stream({ prompt: "Hello" });

Key features:

  • Works with all AI providers (OpenAI, Anthropic, OpenRouter, etc.)
  • Tracks multi-agent handoffs with full chain context
  • Async, non-blocking (never delays responses)
  • Includes session context and custom metadata
  • Type-safe event structure

⚠️ OpenRouter Cost Tracking

When using OpenRouter, you must enable usage accounting to track costs:

import { openrouter } from '@openrouter/ai-sdk-provider';

const agent = new Agent({
  name: 'Assistant',
  model: openrouter('anthropic/claude-3.5-haiku', {
    usage: { include: true }  // ← REQUIRED for cost tracking with OpenRouter
  }),
  instructions: 'You are helpful.',
});

Without usage: { include: true }, OpenRouter will not provide cost information and tracking will show $0. This is specific to OpenRouter - other providers don't require this configuration.

See the Usage Tracking Guide for complete documentation and examples.

Complex Multi-Agent Example

Here's a real-world example: determining if a user can afford a Tesla Model Y by combining web research and financial analysis:

import { Agent, handoff, removeAllTools, keepLastNMessages } from '@fondation-io/agents';
import { openai } from '@ai-sdk/openai';

// Research Specialist - gathers current product information
const researchSpecialist = new Agent({
  name: 'Research Specialist',
  model: openai('gpt-4o-mini'),
  instructions: `You research current product information and pricing.
Provide detailed findings for other agents.`,
  tools: {
    webSearch: webSearchTool,
  },
});

// Financial Analyst - evaluates affordability
const financialAnalyst = new Agent({
  name: 'Financial Analyst', 
  model: openai('gpt-4o-mini'),
  instructions: `You analyze financial affordability based on user data and research.
Use previous conversation context to provide comprehensive analysis.`,
  tools: {
    getFinancialData: getFinancialDataTool,
  },
});

// Main Assistant with configured handoffs
const assistant = new Agent({
  name: 'Assistant',
  model: openai('gpt-4o-mini'),
  instructions: `Help users determine if they can afford major purchases.
Coordinate research and financial analysis for comprehensive answers.`,
  handoffs: [
    // Research first, remove tool calls to keep context clean
    handoff(researchSpecialist, {
      inputFilter: removeAllTools,
    }),
    // Then financial analysis, keep recent context
    handoff(financialAnalyst, {
      inputFilter: keepLastNMessages(10),
    }),
  ],
});

// Usage: "Can I afford a Tesla Model Y?"
// 1. Assistant → Research Specialist (searches for Tesla pricing)
// 2. Research Specialist → Financial Analyst (reads pricing, gets user's financial data)
// 3. Financial Analyst provides comprehensive affordability analysis

API Reference

Agent Class

class Agent<TContext extends Record<string, unknown> = Record<string, unknown>>

Constructor Options:

  • name: string - Unique agent identifier
  • model: LanguageModel - AI SDK language model
  • instructions: string | ((context: TContext) => string) - System prompt
  • tools?: Record<string, Tool> - Available tools
  • handoffs?: Agent[] - Agents this agent can hand off to
  • maxTurns?: number - Maximum tool call iterations (default: 10)
  • temperature?: number - Model temperature
  • matchOn?: (string | RegExp)[] | ((message: string) => boolean) - Routing patterns
  • onEvent?: (event: AgentEvent) => void - Lifecycle event handler
  • inputGuardrails?: InputGuardrail[] - Pre-execution validation
  • outputGuardrails?: OutputGuardrail[] - Post-execution validation
  • permissions?: ToolPermissions - Tool access control

Methods:

// Generate response (non-streaming)
async generate(options: {
  prompt: string;
  messages?: ModelMessage[];
}): Promise<AgentGenerateResult>

// Stream response (AI SDK stream)
stream(options: {
  prompt?: string;
  messages?: ModelMessage[];
}): AgentStreamResult

// Stream as UI messages (Next.js route handler)
toUIMessageStream(options: {
  message: UIMessage; // New user message - history loaded from storage
  strategy?: 'auto' | 'manual';
  maxRounds?: number;
  maxSteps?: number;
  context?: TContext;
  onEvent?: (event: AgentEvent) => void;
  beforeStream?: (ctx: { writer: UIMessageStreamWriter }) => boolean | Promise<boolean>;
  // ... AI SDK stream options
}): Response

// Get handoff agents
getHandoffs(): Agent[]

Utility Functions

// Create handoff instruction
createHandoff(
  targetAgent: string,
  context?: string,
  reason?: string
): HandoffInstruction

// Check if result is handoff
isHandoffResult(result: unknown): result is HandoffInstruction

// Create handoff tool for AI SDK
createHandoffTool(agents: Agent[]): Tool

// Create configured handoff with filtering
handoff<TContext>(agent: Agent<TContext>, config?: HandoffConfig<TContext>): ConfiguredHandoff<TContext>

// Get transfer message for handoff
getTransferMessage<TContext>(agent: Agent<TContext>): string

Handoff Filters

// Remove all tool-related messages
removeAllTools(data: HandoffInputData): HandoffInputData

// Keep only last N messages
keepLastNMessages(n: number): HandoffInputFilter

Context Management

// Run context for workflow state
class AgentRunContext<TContext = Record<string, unknown>> {
  context: TContext;
  metadata: Record<string, unknown>;
  constructor(context?: TContext);
  toJSON(): object;
}

// Execution context
createExecutionContext<T>(options: {
  context?: T;
  writer?: UIMessageStreamWriter;
  metadata?: Record<string, unknown>;
}): ExecutionContext<T>

// Routing utilities
matchAgent(message: string, agents: Agent[]): Agent | null
findBestMatch(message: string, agents: Agent[]): Agent | null

// Streaming utilities
writeAgentStatus(writer: UIMessageStreamWriter, status: {
  status: 'executing' | 'routing' | 'completing';
  agent: string;
}): void

Event Types

type AgentEvent =
  | { type: 'agent-start'; agent: string; round: number }
  | { type: 'agent-step'; agent: string; step: StepResult }
  | { type: 'agent-finish'; agent: string; round: number }
  | { type: 'agent-handoff'; from: string; to: string; reason?: string }
  | { type: 'agent-complete'; totalRounds: number }
  | { type: 'agent-error'; error: Error }
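Because the union is discriminated on type, a handler gets full narrowing in each branch. A sketch (with the union duplicated locally and the agent-step / StepResult variant elided for brevity):

```typescript
// Local copy of the event union for illustration; the agent-step variant
// (which carries a StepResult) is elided here.
type AgentEvent =
  | { type: 'agent-start'; agent: string; round: number }
  | { type: 'agent-finish'; agent: string; round: number }
  | { type: 'agent-handoff'; from: string; to: string; reason?: string }
  | { type: 'agent-complete'; totalRounds: number }
  | { type: 'agent-error'; error: Error };

// Switching on `type` narrows `event` to the matching variant.
function describeEvent(event: AgentEvent): string {
  switch (event.type) {
    case 'agent-start':
      return `round ${event.round}: ${event.agent} started`;
    case 'agent-finish':
      return `round ${event.round}: ${event.agent} finished`;
    case 'agent-handoff':
      return `${event.from} → ${event.to}${event.reason ? ` (${event.reason})` : ''}`;
    case 'agent-complete':
      return `done after ${event.totalRounds} round(s)`;
    case 'agent-error':
      return `error: ${event.error.message}`;
  }
}
```

A handler like this pairs naturally with the onEvent option on the Agent constructor.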

OpenRouter Support

@fondation-io/agents includes native support for OpenRouter, providing access to 300+ AI models through a single API with built-in cost tracking and type safety.

Quick Start

import { openrouter } from '@openrouter/ai-sdk-provider';
import { Agent, extractOpenRouterUsage, formatCost } from '@fondation-io/agents';

const agent = new Agent({
  name: 'Assistant',
  model: openrouter('openai/gpt-4o-mini'),
  instructions: 'You are a helpful assistant.'
});

const result = await agent.generate({
  prompt: 'Explain TypeScript in one sentence.'
});

// Extract usage metrics
const usage = extractOpenRouterUsage(result);
if (usage) {
  console.log('Cost:', formatCost(usage.cost));
  console.log('Tokens:', usage.totalTokens);
}

Features

  • Type Definitions - OpenRouterUsage, OpenRouterMetadata, OpenRouterProviderOptions
  • Usage Extraction - Works with generateText(), streamText(), agent.generate(), and agent.stream()
  • Formatting Utilities - formatCost(), formatTokens(), summarizeUsage()
  • Budget Tracking - UsageAccumulator class for multi-request cost monitoring with automatic enforcement
  • TypeScript Support - Full autocomplete for providerOptions and type-safe metadata access
  • 7 Examples - Comprehensive examples covering all use cases from basic to advanced

Streaming Usage

With streaming, usage is only available in the onFinish callback:

import { streamText } from 'ai';

let usage = null;

const result = streamText({
  model: openrouter('openai/gpt-4o'),
  prompt: 'Hello',
  onFinish: (event) => {
    usage = extractOpenRouterUsage(event);
  }
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

if (usage) {
  console.log('Cost:', formatCost(usage.cost));
}

With Agent.stream()

Usage tracking also works with agent.stream():

const agent = new Agent({
  name: 'Assistant',
  model: openrouter('anthropic/claude-3.5-haiku'),
  instructions: 'You are a helpful assistant.'
});

const stream = agent.stream({
  prompt: 'Explain async/await in JavaScript.',
  onFinish: async (event) => {
    const usage = extractOpenRouterUsage(event);
    if (usage) {
      console.log('Cost:', formatCost(usage.cost));
      console.log('Tokens:', usage.totalTokens);
    }
  }
} as any); // Type assertion until AgentStreamOptions includes onFinish

for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}

Budget Monitoring

import { UsageAccumulator } from '@fondation-io/agents';

const accumulator = new UsageAccumulator({ maxCost: 1.00 });

for (const prompt of prompts) {
  const result = await streamText({
    model: openrouter('openai/gpt-3.5-turbo'),
    prompt,
    onFinish: (event) => {
      const usage = extractOpenRouterUsage(event);
      if (usage) {
        accumulator.add(usage); // Throws if budget exceeded
      }
    }
  });

  for await (const chunk of result.textStream) {
    process.stdout.write(chunk);
  }
}

console.log(accumulator.summarize({ detailed: true }));

Documentation

  • Native Support Guide: /docs/guides/openrouter-native-support.md
  • Comprehensive Guide: /docs/openrouter-integration.md
  • Examples: packages/agents/src/examples/openrouter/

Integration with Other Packages

With @fondation-io/cache

Cache expensive tool calls across agents:

import { createCached } from '@fondation-io/cache';
import { Redis } from '@upstash/redis';

const cached = createCached({ cache: Redis.fromEnv() });

const agent = new Agent({
  name: 'Data Agent',
  model: openai('gpt-4o'),
  instructions: 'Analyze data.',
  tools: {
    analyze: cached(expensiveAnalysisTool),
  },
});

With @fondation-io/artifacts

Stream structured artifacts from agents:

import { artifact } from '@fondation-io/artifacts';
import { tool } from 'ai';
import { z } from 'zod';

const ReportArtifact = artifact('report', z.object({
  title: z.string(),
  sections: z.array(z.object({
    heading: z.string(),
    content: z.string(),
  })),
}));

const reportAgent = new Agent({
  name: 'Report Generator',
  model: openai('gpt-4o'),
  instructions: 'Generate structured reports.',
  tools: {
    createReport: tool({
      description: 'Create a report',
      parameters: z.object({
        title: z.string(),
      }),
      execute: async function* ({ title }) {
        const report = ReportArtifact.stream({ title, sections: [] });
        
        yield { text: 'Generating report...' };
        
        await report.update({ 
          sections: [{ heading: 'Introduction', content: '...' }],
        });
        
        yield { text: 'Report complete', forceStop: true };
      },
    }),
  },
});

With @fondation-io/devtools

Debug agent execution in development:

import { AIDevTools } from '@fondation-io/devtools';

const agent = new Agent({
  name: 'Debug Agent',
  model: openai('gpt-4o'),
  instructions: 'Test agent.',
  onEvent: (event) => {
    console.log('[Agent Event]', event);
  },
});

// In your app
export default function App() {
  return (
    <>
      <YourChatInterface />
      <AIDevTools />
    </>
  );
}

Examples

Real-world implementations in /apps/example/src/ai/agents/:

  • Triage Agent - Route customer questions to specialists
  • Financial Agent - Multi-step analysis with artifacts
  • Code Review - Analyze → Test → Document workflow
  • Multi-Provider - Use different models for different tasks

Contributing

Contributions are welcome! See the contributing guide for details.

License

MIT

Acknowledgments

This package is part of the @fondation-io/ai-sdk-tools fork of the original AI SDK Tools created by the Midday team.

All credit for the original implementation goes to the original authors.