
@paean-ai/agents

LLM-agnostic agent development framework for building AI agents with tool execution, session management, and MCP support.

Features

  • LLM-agnostic — Works with any OpenAI-compatible API (GLM, Qwen, DeepSeek, Moonshot, OpenAI, etc.)
  • Tool execution — Zod-based function tools with automatic JSON Schema generation
  • Session management — In-memory session store with TTL and automatic cleanup
  • MCP support — Cloud MCP servers via Streamable HTTP + desktop MCP bridge
  • Sub-agents — Delegate tasks to specialized sub-agents via AgentTool
  • Streaming — Full SSE streaming support for real-time responses
  • Minimal dependencies — Only zod as a required dependency

Installation

npm install @paean-ai/agents zod

Quick Start

import { LlmAgent, OpenAILlm, InMemoryRunner, FunctionTool } from '@paean-ai/agents';
import { z } from 'zod';

// Create an LLM instance (any OpenAI-compatible API)
const model = new OpenAILlm({
  model: 'glm-4-flash',
  apiKey: process.env.GLM_API_KEY!,
  baseURL: 'https://open.bigmodel.cn/api/paas/v4',
});

// Define tools
const getWeather = new FunctionTool({
  name: 'getWeather',
  description: 'Get current weather for a city',
  parameters: z.object({
    city: z.string().describe('City name'),
  }),
  execute: async (args) => {
    return { city: args.city, temperature: 22, condition: 'sunny' };
  },
});

// Create an agent
const agent = new LlmAgent({
  name: 'assistant',
  model,
  instruction: 'You are a helpful assistant.',
  tools: [getWeather],
});

// Run the agent
const runner = new InMemoryRunner({ agent, appName: 'my-app' });

for await (const event of runner.runAsync({
  userId: 'user-1',
  sessionId: 'session-1',
  newMessage: { role: 'user', content: 'What is the weather in Beijing?' },
})) {
  if (event.type === 'content' && !event.partial) {
    console.log('Agent:', event.content);
  }
}

Core Concepts

LLM Providers

OpenAILlm works with any OpenAI-compatible API by changing the baseURL:

| Provider | Model | baseURL |
|----------|-------|---------|
| GLM (Zhipu) | glm-4-flash | https://open.bigmodel.cn/api/paas/v4 |
| Qwen (DashScope) | qwen-plus | https://dashscope.aliyuncs.com/compatible-mode/v1 |
| DeepSeek | deepseek-chat | https://api.deepseek.com |
| Moonshot | moonshot-v1-8k | https://api.moonshot.cn/v1 |
| OpenAI | gpt-4o | https://api.openai.com/v1 |

To add support for a non-OpenAI-compatible provider, extend BaseLlm.
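Extending BaseLlm is the standard adapter pattern. The library's actual abstract surface isn't documented in this README, so the sketch below uses stand-in names (BaseLlmSketch, generate, the request/response shapes) purely to illustrate the shape of such an adapter:

```typescript
// Illustrative only: the real BaseLlm in @paean-ai/agents may expose
// different method names and types.
interface LlmRequest {
  messages: { role: string; content: string }[];
}

interface LlmResponse {
  content: string;
}

// Stand-in for the library's BaseLlm: one abstract entry point that
// concrete providers implement with their own wire format.
abstract class BaseLlmSketch {
  constructor(public readonly model: string) {}
  abstract generate(request: LlmRequest): Promise<LlmResponse>;
}

// A hypothetical adapter for a non-OpenAI-compatible provider.
class CustomProviderLlm extends BaseLlmSketch {
  async generate(request: LlmRequest): Promise<LlmResponse> {
    // In a real adapter: translate the framework's request into the
    // provider's format, call its API, and map the reply back.
    // Echoed here for brevity.
    const last = request.messages[request.messages.length - 1];
    return { content: `echo: ${last.content}` };
  }
}
```

The point is that the rest of the framework (agents, runner, tool loop) only talks to the abstract class, so a new provider means one subclass, not changes elsewhere.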

Tools

Define tools using FunctionTool with Zod schemas:

import { FunctionTool } from '@paean-ai/agents';
import { z } from 'zod';

const searchProducts = new FunctionTool({
  name: 'searchProducts',
  description: 'Search product catalog',
  parameters: z.object({
    query: z.string().describe('Search query'),
    maxResults: z.number().optional().describe('Max results to return'),
  }),
  execute: async (args, context) => {
    // Access session state via context.invocationContext.session.state
    const results = await db.search(args.query, args.maxResults);
    return results;
  },
});

Dynamic Toolsets

Load tools dynamically based on runtime context:

import { BaseToolset } from '@paean-ai/agents';

class RoleBasedToolset extends BaseToolset {
  async getTools(context) {
    const isAdmin = context?.invocationContext.session.state.isAdmin;
    return isAdmin ? [adminTool1, adminTool2] : [basicTool];
  }
}
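The selection logic itself is plain session-state inspection. A self-contained sketch of the same idea, with simplified stand-in shapes for the tool and context types (not the library's actual types):

```typescript
// Simplified stand-ins for illustration; the library's Tool and
// ToolContext types are richer than this.
interface Tool {
  name: string;
}

interface Context {
  invocationContext: { session: { state: Record<string, unknown> } };
}

const basicTool: Tool = { name: 'search' };
const adminTool: Tool = { name: 'deleteUser' };

// Same branching as RoleBasedToolset.getTools above: admins see the
// full set, everyone else gets the basic tools only.
function getToolsForRole(context?: Context): Tool[] {
  const isAdmin = Boolean(context?.invocationContext.session.state.isAdmin);
  return isAdmin ? [basicTool, adminTool] : [basicTool];
}
```

Because the toolset is queried at runtime, the same agent definition can expose different capabilities per session without rebuilding the agent.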

Sub-Agents

Delegate tasks to specialized sub-agents:

const researchAgent = new LlmAgent({
  name: 'researcher',
  model,
  instruction: 'You are a research specialist.',
  tools: [webSearch, summarize],
});

const mainAgent = new LlmAgent({
  name: 'assistant',
  model,
  instruction: 'You are a helpful assistant. Delegate research tasks.',
  subAgents: [researchAgent],
});

MCP Integration

Connect to MCP servers for tool discovery and execution:

import { MCPToolset } from '@paean-ai/agents';

const mcpTools = new MCPToolset({
  servers: [
    {
      name: 'my-mcp-server',
      url: 'https://mcp.example.com/sse',
      headers: { Authorization: 'Bearer ...' },
    },
  ],
});

const agent = new LlmAgent({
  name: 'assistant',
  model,
  instruction: 'You are a helpful assistant.',
  tools: [mcpTools],
});

Session Management

Sessions persist conversation history and state across turns:

const runner = new InMemoryRunner({ agent, appName: 'my-app' });

// First turn
for await (const event of runner.runAsync({
  userId: 'user-1',
  sessionId: 'conv-123',
  newMessage: { role: 'user', content: 'My name is Alice.' },
})) { /* ... */ }

// Second turn (same session — agent remembers context)
for await (const event of runner.runAsync({
  userId: 'user-1',
  sessionId: 'conv-123',
  newMessage: { role: 'user', content: 'What is my name?' },
  stateDelta: { lastSeen: Date.now() },
})) { /* ... */ }

For custom storage backends (Redis, PostgreSQL), extend BaseSessionService.
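The BaseSessionService interface isn't documented in this README, but the TTL behavior the built-in in-memory store describes boils down to tracking a last-touched timestamp per session and evicting on read. A concept sketch (class and method names below are illustrative, not the library's API):

```typescript
// Concept sketch of a TTL session store, similar in spirit to the
// built-in in-memory service. Not the library's BaseSessionService API.
interface Session {
  id: string;
  state: Record<string, unknown>;
  updatedAt: number;
}

class TtlSessionStore {
  private sessions = new Map<string, Session>();

  constructor(private ttlMs: number) {}

  // `now` is injectable to make expiry testable; defaults to wall clock.
  get(id: string, now: number = Date.now()): Session | undefined {
    const session = this.sessions.get(id);
    if (!session) return undefined;
    // Evict sessions idle longer than the TTL.
    if (now - session.updatedAt > this.ttlMs) {
      this.sessions.delete(id);
      return undefined;
    }
    return session;
  }

  put(id: string, state: Record<string, unknown>, now: number = Date.now()): void {
    this.sessions.set(id, { id, state, updatedAt: now });
  }
}
```

A Redis-backed implementation would replace the Map with `SET ... EX <ttl>` and let Redis handle expiry natively.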

Architecture

┌──────────────────────────────────────────────────────┐
│                        Runner                        │
│  ┌───────────┐  ┌────────────┐  ┌─────────────────┐  │
│  │  Session  │  │  LlmAgent  │  │  Event Stream   │  │
│  │  Service  │  │            │  │                 │  │
│  └─────┬─────┘  └──────┬─────┘  └────────┬────────┘  │
│        │               │                 │           │
│  ┌─────┴───────────────┴─────────────────┴────────┐  │
│  │              Tool Execution Loop               │  │
│  │  LLM Call → Parse → Execute Tools → Feed Back  │  │
│  └────────────────────┬───────────────────────────┘  │
│                       │                              │
│  ┌────────────────────┴───────────────────────────┐  │
│  │             LLM Provider (BaseLlm)             │  │
│  │  ┌───────────┐  ┌────────┐  ┌─────────────┐    │  │
│  │  │ OpenAILlm │  │ Gemini │  │ Anthropic   │    │  │
│  │  │ (GLM,Qwen)│  │ (P2)   │  │ (P3)        │    │  │
│  │  └───────────┘  └────────┘  └─────────────┘    │  │
│  └────────────────────────────────────────────────┘  │
└──────────────────────────────────────────────────────┘

Roadmap

  • [x] Phase 1: OpenAI-compatible LLM support (GLM, Qwen, DeepSeek, etc.)
  • [x] Phase 1: FunctionTool with Zod schemas
  • [x] Phase 1: In-memory session management with TTL
  • [x] Phase 1: MCP toolset (Streamable HTTP)
  • [x] Phase 1: Local MCP bridge for desktop clients
  • [x] Phase 1: Sub-agent delegation via AgentTool
  • [x] Phase 1: SSE streaming support
  • [ ] Phase 2: Gemini native adapter (Content/Part format)
  • [ ] Phase 3: Anthropic native adapter (Messages format)
  • [ ] Phase 4: Provider auto-detection from model name / baseURL
  • [ ] Multi-turn sub-agent execution
  • [ ] Built-in tool result summarization
  • [ ] OpenTelemetry tracing support

License

Apache-2.0