# @paean-ai/agents
LLM-agnostic agent development framework for building AI agents with tool execution, session management, and MCP support.
## Features
- LLM-agnostic — Works with any OpenAI-compatible API (GLM, Qwen, DeepSeek, Moonshot, OpenAI, etc.)
- Tool execution — Zod-based function tools with automatic JSON Schema generation
- Session management — In-memory session store with TTL and automatic cleanup
- MCP support — Cloud MCP servers via Streamable HTTP + desktop MCP bridge
- Sub-agents — Delegate tasks to specialized sub-agents via AgentTool
- Streaming — Full SSE streaming support for real-time responses
- Minimal dependencies — Only `zod` as a required dependency
## Installation

```bash
npm install @paean-ai/agents zod
```

## Quick Start
```typescript
import { LlmAgent, OpenAILlm, InMemoryRunner, FunctionTool } from '@paean-ai/agents';
import { z } from 'zod';

// Create an LLM instance (any OpenAI-compatible API)
const model = new OpenAILlm({
  model: 'glm-4-flash',
  apiKey: process.env.GLM_API_KEY!,
  baseURL: 'https://open.bigmodel.cn/api/paas/v4',
});

// Define tools
const getWeather = new FunctionTool({
  name: 'getWeather',
  description: 'Get current weather for a city',
  parameters: z.object({
    city: z.string().describe('City name'),
  }),
  execute: async (args) => {
    return { city: args.city, temperature: 22, condition: 'sunny' };
  },
});

// Create an agent
const agent = new LlmAgent({
  name: 'assistant',
  model,
  instruction: 'You are a helpful assistant.',
  tools: [getWeather],
});

// Run the agent
const runner = new InMemoryRunner({ agent, appName: 'my-app' });
for await (const event of runner.runAsync({
  userId: 'user-1',
  sessionId: 'session-1',
  newMessage: { role: 'user', content: 'What is the weather in Beijing?' },
})) {
  if (event.type === 'content' && !event.partial) {
    console.log('Agent:', event.content);
  }
}
```

## Core Concepts
### LLM Providers

`OpenAILlm` works with any OpenAI-compatible API by changing the `baseURL`:
| Provider | Model | baseURL |
|----------|-------|---------|
| GLM (Zhipu) | glm-4-flash | https://open.bigmodel.cn/api/paas/v4 |
| Qwen (DashScope) | qwen-plus | https://dashscope.aliyuncs.com/compatible-mode/v1 |
| DeepSeek | deepseek-chat | https://api.deepseek.com |
| Moonshot | moonshot-v1-8k | https://api.moonshot.cn/v1 |
| OpenAI | gpt-4o | https://api.openai.com/v1 |
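The table values plug straight into the constructor shown in the Quick Start. For example, a DeepSeek configuration (assuming `DEEPSEEK_API_KEY` is set in your environment):

```typescript
import { OpenAILlm } from '@paean-ai/agents';

// Same client class, different endpoint — per the provider table above.
const model = new OpenAILlm({
  model: 'deepseek-chat',
  apiKey: process.env.DEEPSEEK_API_KEY!,
  baseURL: 'https://api.deepseek.com',
});
```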
To add support for a non-OpenAI-compatible provider, extend `BaseLlm`.
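`BaseLlm`'s abstract interface isn't reproduced in this README, but most of a custom adapter is request/response translation. As a hedged illustration (a standalone helper, not part of the library's API), here is the kind of message mapping an Anthropic-style adapter would need: the Messages API takes the system prompt as a top-level field rather than as a message role.

```typescript
// Simplified stand-in types for illustration only.
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

// Map OpenAI-style chat messages to an Anthropic Messages request body:
// system prompts move to the top-level `system` field, and only
// user/assistant turns stay in the `messages` array.
function toAnthropicRequest(messages: ChatMessage[], model: string) {
  const system = messages
    .filter((m) => m.role === 'system')
    .map((m) => m.content)
    .join('\n');
  const turns = messages
    .filter((m) => m.role !== 'system')
    .map((m) => ({ role: m.role, content: m.content }));
  return { model, max_tokens: 1024, ...(system ? { system } : {}), messages: turns };
}
```

A real adapter would also translate tool definitions and streamed responses, but the role remapping above is the core of the request side.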
### Tools

Define tools using `FunctionTool` with Zod schemas:
```typescript
import { FunctionTool } from '@paean-ai/agents';
import { z } from 'zod';

const searchProducts = new FunctionTool({
  name: 'searchProducts',
  description: 'Search product catalog',
  parameters: z.object({
    query: z.string().describe('Search query'),
    maxResults: z.number().optional().describe('Max results to return'),
  }),
  execute: async (args, context) => {
    // Access session state via context.invocationContext.session.state
    const results = await db.search(args.query, args.maxResults);
    return results;
  },
});
```

### Dynamic Toolsets
Load tools dynamically based on runtime context:
```typescript
import { BaseToolset, BaseTool } from '@paean-ai/agents';

class RoleBasedToolset extends BaseToolset {
  async getTools(context) {
    const isAdmin = context?.invocationContext.session.state.isAdmin;
    return isAdmin ? [adminTool1, adminTool2] : [basicTool];
  }
}
```

### Sub-Agents
Delegate tasks to specialized sub-agents:
```typescript
const researchAgent = new LlmAgent({
  name: 'researcher',
  model,
  instruction: 'You are a research specialist.',
  tools: [webSearch, summarize],
});

const mainAgent = new LlmAgent({
  name: 'assistant',
  model,
  instruction: 'You are a helpful assistant. Delegate research tasks.',
  subAgents: [researchAgent],
});
```

## MCP Integration
Connect to MCP servers for tool discovery and execution:
```typescript
import { MCPToolset } from '@paean-ai/agents';

const mcpTools = new MCPToolset({
  servers: [
    {
      name: 'my-mcp-server',
      url: 'https://mcp.example.com/sse',
      headers: { Authorization: 'Bearer ...' },
    },
  ],
});

const agent = new LlmAgent({
  name: 'assistant',
  model,
  instruction: 'You are a helpful assistant.',
  tools: [mcpTools],
});
```

## Session Management
Sessions persist conversation history and state across turns:
```typescript
const runner = new InMemoryRunner({ agent, appName: 'my-app' });

// First turn
for await (const event of runner.runAsync({
  userId: 'user-1',
  sessionId: 'conv-123',
  newMessage: { role: 'user', content: 'My name is Alice.' },
})) { /* ... */ }

// Second turn (same session — agent remembers context)
for await (const event of runner.runAsync({
  userId: 'user-1',
  sessionId: 'conv-123',
  newMessage: { role: 'user', content: 'What is my name?' },
  stateDelta: { lastSeen: Date.now() },
})) { /* ... */ }
```

For custom storage backends (Redis, PostgreSQL), extend `BaseSessionService`.
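`BaseSessionService`'s exact method signatures aren't shown here, so as a hedged sketch, this self-contained class illustrates only the TTL bookkeeping and automatic cleanup such a backend needs (names like `TtlSessionStore` are illustrative, not the library's API):

```typescript
// Illustrative session record; the library's real Session type will differ.
interface Session {
  id: string;
  events: unknown[];
  state: Record<string, unknown>;
  expiresAt: number;
}

class TtlSessionStore {
  private sessions = new Map<string, Session>();

  // `now` is injectable so expiry logic can be tested deterministically.
  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  getOrCreate(id: string): Session {
    const existing = this.sessions.get(id);
    if (existing && existing.expiresAt > this.now()) {
      existing.expiresAt = this.now() + this.ttlMs; // sliding expiry on access
      return existing;
    }
    const fresh: Session = { id, events: [], state: {}, expiresAt: this.now() + this.ttlMs };
    this.sessions.set(id, fresh);
    return fresh;
  }

  // Periodic cleanup; a real backend might drive this with setInterval
  // or rely on the store's native expiry (e.g. Redis EXPIRE).
  sweep(): number {
    let removed = 0;
    for (const [id, s] of this.sessions) {
      if (s.expiresAt <= this.now()) {
        this.sessions.delete(id);
        removed++;
      }
    }
    return removed;
  }
}
```

A Redis-backed version would replace the `Map` with keyed reads/writes and let the server handle expiry, keeping the same get-or-create contract.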
## Architecture
```
┌────────────────────────────────────────────────────┐
│                       Runner                       │
│ ┌──────────┐  ┌────────────┐  ┌─────────────────┐  │
│ │ Session  │  │  LlmAgent  │  │  Event Stream   │  │
│ │ Service  │  │            │  │                 │  │
│ └────┬─────┘  └─────┬──────┘  └───────┬─────────┘  │
│      │              │                 │            │
│ ┌────┴──────────────┴─────────────────┴─────────┐  │
│ │              Tool Execution Loop              │  │
│ │ LLM Call → Parse → Execute Tools → Feed Back  │  │
│ └───────────────────────┬───────────────────────┘  │
│                         │                          │
│ ┌───────────────────────┴───────────────────────┐  │
│ │             LLM Provider (BaseLlm)            │  │
│ │ ┌──────────┐  ┌────────┐  ┌────────────┐      │  │
│ │ │ OpenAILlm│  │ Gemini │  │ Anthropic  │      │  │
│ │ │(GLM,Qwen)│  │  (P2)  │  │    (P3)    │      │  │
│ │ └──────────┘  └────────┘  └────────────┘      │  │
│ └───────────────────────────────────────────────┘  │
└────────────────────────────────────────────────────┘
```

## Roadmap
- [x] Phase 1: OpenAI-compatible LLM support (GLM, Qwen, DeepSeek, etc.)
- [x] Phase 1: FunctionTool with Zod schemas
- [x] Phase 1: In-memory session management with TTL
- [x] Phase 1: MCP toolset (Streamable HTTP)
- [x] Phase 1: Local MCP bridge for desktop clients
- [x] Phase 1: Sub-agent delegation via AgentTool
- [x] Phase 1: SSE streaming support
- [ ] Phase 2: Gemini native adapter (Content/Part format)
- [ ] Phase 3: Anthropic native adapter (Messages format)
- [ ] Phase 4: Provider auto-detection from model name / baseURL
- [ ] Multi-turn sub-agent execution
- [ ] Built-in tool result summarization
- [ ] OpenTelemetry tracing support
## License
Apache-2.0
