MiniAgent
A minimal, extensible TypeScript Agent framework. Simple by default, powerful when needed.
Quick Start
npm install @piaoxianguo/miniagent
import { MiniAgent, LLMEngineManager, MessageType } from "@piaoxianguo/miniagent";
import { AnthropicEngine } from "@piaoxianguo/miniagent/engine/anthropic";
import { z } from "zod";
// 1. Set up the LLM engine
const engines = new LLMEngineManager();
engines.register("anthropic", AnthropicEngine);
// 2. Create the agent
const agent = new MiniAgent(engines, {
model: {
provider: "anthropic",
model: "claude-sonnet-4-20250514",
apiKey: process.env.ANTHROPIC_API_KEY!,
baseUrl: "",
},
models: new Map(),
plugins: new Map(),
paths: { sessiondir: "./sessions" },
});
// 3. Print streaming output
agent.on("llm:chunk", ({ chunk }) => {
if (chunk.type === "text-delta") process.stdout.write(chunk.text);
});
// 4. Register a tool — that's it
agent.register({
name: "get_weather",
description: "Get the current weather for a city",
parameters: z.object({
city: z.string().describe("City name"),
}),
execute: async (args) => `${args.city}: Sunny, 25°C`,
});
// 5. Run
const messages = await agent.run({
id: crypto.randomUUID(),
type: MessageType.User,
content: "What's the weather in Beijing?",
});
That's a fully working agent with streaming output and tool use. No boilerplate, no configuration files.
Design Philosophy
MiniAgent is built on one principle: a minimal core with free assembly.
The core does exactly one thing — the agent loop (collect context → call LLM → execute tools → repeat). Everything else is a pluggable component you register through the same register() method:
┌─────────────────────────────────┐
│ MiniAgent │
│ │
register() ───► │ Tool ───────────── execute() │
◄ │ ContextProvider ── collect() │
◄ │ ContextProcessor ─ process() │
◄ │ MessageNotifier ── notify() │
◄ │ ErrorHandler ───── handle() │
◄ │ ToolApprover ───── approve() │
◄ │ ... │
│ │
└─────────────────────────────────┘
- Schema-Driven Types — All data structures are defined as Zod schemas. TypeScript types are derived automatically. Runtime validation comes for free (see the sketch after this list).
- Auto-Detection — Components are identified by Zod schema validation, not manual type tags. You register a tool, a provider, or a processor — the agent knows what it is.
- Plugin Over Framework — No inheritance hierarchies, no abstract base classes. Just plain objects that satisfy the right schema.
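To make the first two points concrete, here is a minimal sketch of the schema-driven pattern using Zod directly. The schema name, its fields, and the helper function are hypothetical illustrations of the technique, not the package's internal definitions:
import { z } from "zod";
// The Zod schema is the single source of truth for the data shape...
const ToolLikeSchema = z.object({
  name: z.string(),
  description: z.string(),
  // z.custom keeps the function check simple and Zod-version-agnostic.
  execute: z.custom<(args: unknown) => Promise<unknown>>((v) => typeof v === "function"),
});
// ...the TypeScript type is derived from it, never written by hand...
type ToolLike = z.infer<typeof ToolLikeSchema>;
// ...and auto-detection at registration is just runtime schema validation.
function looksLikeTool(candidate: unknown): candidate is ToolLike {
  return ToolLikeSchema.safeParse(candidate).success;
}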
Tools and Interfaces
Tool
A tool is the simplest thing to define — a name, a description, a Zod parameter schema, and an execute function:
const myTool: Tool = {
name: "read_file",
description: "Read the contents of a file",
parameters: z.object({
path: z.string().describe("Absolute file path"),
}),
execute: async (args) => {
return fs.readFile(args.path, "utf-8");
},
};
agent.register(myTool);
ToolProvider
When you need to dynamically provide multiple tools (e.g. connecting to an MCP server), implement ToolProvider:
const provider: ToolProvider = {
async getTools(): Promise<Tool[]> {
// Dynamically discover and return tools
return [tool1, tool2, tool3];
},
};
agent.register(provider);
LLMRequire
Some components need access to the LLM (e.g. a context compressor that summarizes old messages). Implement LLMRequire and the agent will inject the LLMRequest at registration time:
const compressor = {
priority: -1000,
llm: null as LLMRequest | null,
async setLLMRequest(llm: LLMRequest) {
this.llm = llm;
},
async collect() {
// Use this.llm to summarize old messages...
return [summaryMessage];
},
};
ContextProvider
Inject additional context messages into every turn. Sorted by priority:
const provider = {
priority: 0,
async collect() {
return [
{ id: crypto.randomUUID(), type: MessageType.System, content: "You are a helpful assistant." },
];
},
};
ContextProcessor
Transform the message list before it's sent to the LLM. Return Action objects to delete, replace, or inject messages:
const processor = {
priority: 100,
async process(messages) {
return [
{ type: ActionType.Delete, targetId: "old-message-id" },
{ type: ActionType.Replace, targetId: "msg-id", message: newMessage },
{ type: ActionType.AddFirst, message: systemMsg },
{ type: ActionType.AddLast, message: footerMsg },
];
},
};
Other Interfaces
| Interface | Purpose |
|-----------|---------|
| MessageNotifier | Called every time a new message is created |
| ErrorHandler | Handle errors within the agent loop (retry, fallback, etc.) |
| ToolApprover | Human-in-the-loop approval before tool execution |
| AfterTurnProcessor | Run logic after each agent run completes |
| ConfigNotifier | Notified when model config changes |
| PersistRequire | Receive the Store instance for persistence |
| TurnContextConsumer | Receive the full context of each turn |
| TurnContextAppender | Prepend messages before other context providers |
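As an example from this table, a ToolApprover gates tool execution for human-in-the-loop flows. The sketch below assumes approve() receives the pending tool call and returns a boolean; check the package's ToolApprover schema for the exact parameter and return types:
// Hedged sketch: the approve() signature here is an assumption, not the confirmed API.
const approver = {
  async approve(toolCall: { name: string; arguments: unknown }) {
    // Auto-approve read-only tools; deny everything else in this example policy.
    return ["read", "glob", "grep"].includes(toolCall.name);
  },
};
agent.register(approver);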
LLMRequest and LLMEngine
MiniAgent separates LLM interaction into two layers:
- LLMRequest — The interface the agent calls: streamInvoke(messages, modelConfig, tools) → LLMStreamHandle. This is the contract.
- LLMEngine — The interface an engine implements: streamGenerate(messages, tools) → LLMStreamHandle. The ModelConfig is bound at construction time.
- LLMEngineManager — The default LLMRequest implementation. It manages engine constructors, creates engines per ModelConfig, and caches them with LRU eviction.
MiniAgent ──calls──► LLMRequest (interface)
│
LLMEngineManager (default impl)
│
┌──────┴──────┐
LLMEngine LLMEngine
(Anthropic) (OpenAI) ...
Built-in Engines
import { LLMEngineManager } from "@piaoxianguo/miniagent";
import { AnthropicEngine } from "@piaoxianguo/miniagent/engine/anthropic";
import { OpenAIEngine } from "@piaoxianguo/miniagent/engine/openai";
import { OpenAICompatibleEngine } from "@piaoxianguo/miniagent/engine/openai-compatible";
import { GLMEngine } from "@piaoxianguo/miniagent/engine/glm";
import { GLMCodePlanEngine } from "@piaoxianguo/miniagent/engine/glm-codeplan";
const engines = new LLMEngineManager();
engines.register("anthropic", AnthropicEngine);
engines.register("openai", OpenAIEngine);
engines.register("openai-compatible", OpenAICompatibleEngine);
engines.register("glm", GLMEngine);
engines.register("glm-codeplan", GLMCodePlanEngine);Implement the LLMEngine interface to add your own:
interface LLMEngine {
streamGenerate(messages: Message[], tools: Tool[]): LLMStreamHandle<LLMResponse>;
}
Blueprint and Assembly
For real-world applications, you don't want to register every component manually. MiniAgent provides a Blueprint system for declarative agent assembly.
Blueprint
A blueprint is a declarative description of what an agent needs:
interface AgentBlueprint {
uses: string[]; // List of component IDs to include
}
Registry and Assembler
Register component factories, then assemble an agent from a blueprint:
import { AgentAssembler, AgentBlueprintRegistry } from "@piaoxianguo/miniagent";
// Register factories
const registry = new AgentBlueprintRegistry();
registry.register("tool.read", () => readTool);
registry.register("tool.write", () => writeTool);
registry.register("plugin.mcp", () => new McpPlugin());
registry.register("plugin.skill", () => new SkillPlugin());
// Assemble
const assembler = new AgentAssembler(registry);
const agent = await assembler.assemble({
llm: engines,
config: agentConfig,
blueprint: { uses: ["tool.read", "tool.write", "plugin.mcp", "plugin.skill"] },
capabilities: { tool: { deny: ["bash"] } }, // Optional: control visibility
});
Capability System
Blueprints work with a capability system to control what tools, plugins, and subagents are visible:
const capabilities = {
tool: { allow: ["read", "glob", "grep"], deny: ["bash"] },
mcp: {
server: { allow: ["filesystem"] },
tool: { deny: ["mcp__filesystem__write_file"] },
},
skill: { allow: ["*"] },
subagent: { deny: ["dangerous-agent"] },
};
Factory Function
For simpler cases, use createMiniAgent with the use array — a flat list of tools, providers, modules, or setup functions:
import { createMiniAgent } from "@piaoxianguo/miniagent";
const agent = createMiniAgent({
llm: engines,
config: agentConfig,
use: [
readTool,
myToolProvider,
myContextProvider,
(agent) => {
agent.on("llm:chunk", ({ chunk }) => {
if (chunk.type === "text-delta") process.stdout.write(chunk.text);
});
},
],
});
Built-in Tools
| Tool | Description | Docs |
|------|-------------|------|
| read | Read file contents or list directory entries | read.md |
| write | Write content to a file (creates parent dirs) | write.md |
| edit | Exact string replacement in files | edit.md |
| glob | Find files by glob pattern (**/*.ts, etc.) | glob.md |
| grep | Search file contents with regex | grep.md |
| bash | Execute bash commands with timeout and working directory | bash.md |
| todo | Create, update, delete todo items | todo.md |
| subagent | Delegate tasks to file-defined sub-agents | subagent.md |
| agent-context | Auto-load agent framework config files into context | agent-context.md |
| mcp | MCP client with stdio / SSE / Streamable HTTP transports | mcp.md |
| skill | Load skill instructions from SKILL.md manifests | skill.md |
Built-in CLI
MiniAgent ships with an interactive REPL built with Ink (React for CLI):
npm run chat
On first run, a .cliagent/config.json template is generated. Configure your models and run again:
{
"models": [
{
"name": "claude",
"provider": "anthropic",
"model": "claude-sonnet-4-20250514",
"apiKey": "sk-ant-..."
}
],
"defaultModel": "claude",
"systemPrompt": "You are a helpful assistant."
}
CLI Commands
| Command | Description |
|---------|-------------|
| /models | List configured models |
| /model <provider/model> | Switch active model |
| /tools | List registered tools |
| /history [page] | View conversation history |
| /context | Preview context sent to LLM |
| /compress | Trigger context compression |
| /session | List all sessions |
| /session new | Create a new session |
| /session switch <id> | Switch to a session |
| /session delete <id> | Delete a session |
| /session rename <id> <name> | Rename a session |
| /hitl [on\|off] | Toggle human-in-the-loop |
| /clear | Clear current conversation |
| /system <text> | Update system prompt |
| /quit | Exit |
Events
Full lifecycle events via EventEmitter:
agent.on("run:start", ({ input }) => { /* agent run started */ });
agent.on("run:complete", ({ messages }) => { /* agent run finished */ });
agent.on("run:stop", () => { /* agent was stopped */ });
agent.on("run:error", ({ error, turn }) => { /* unhandled error */ });
agent.on("turn:start", ({ turn }) => { /* new turn began */ });
agent.on("turn:end", ({ turn }) => { /* turn finished */ });
agent.on("llm:request", ({ context, tools }) => { /* LLM request about to be made */ });
agent.on("llm:chunk", ({ chunk }) => { /* streaming chunk received */ });
agent.on("llm:response", ({ response }) => { /* full LLM response received */ });
agent.on("tool:execute", ({ toolCall }) => { /* tool execution started */ });
agent.on("tool:result", ({ toolCall, result }) => { /* tool execution completed */ });
agent.on("message:notify", ({ message }) => { /* new message created */ });Agent API
| Method | Description |
|--------|-------------|
| run(input) | Run the agent loop with a user message. Returns all messages. |
| stop() | Stop the running agent loop. |
| register(item) | Register a component (tool, provider, processor, etc.) |
| on(event, listener) | Subscribe to lifecycle events. |
| getMessages() | Get all messages in the session. |
| getMessage(id) | Get a specific message by ID. |
| getToolList() | Get all currently available tools. |
| previewContext() | Preview the context that will be sent to the LLM. |
| setDiscardBefore(id) | Set a watermark to discard messages before the given ID. |
| setModel(config) | Switch to a different model at runtime. |
| setModelByPath(path) | Switch model by provider/model path string. |
| setAutoApprovedTools(names) | Set tools that bypass HITL approval. |
| getConfig() | Get the current agent configuration. |
| getContextCount() | Get cumulative token usage statistics. |
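A short, hedged usage sketch combining a few of these methods (only the method names come from the table above; the awaits and the logged return shapes are assumptions to verify against the actual typings):
// Switch models by provider/model path, then inspect what the next request would contain.
await agent.setModelByPath("anthropic/claude-sonnet-4-20250514");
console.log(await agent.previewContext());
// Let read-only built-in tools bypass human-in-the-loop approval.
agent.setAutoApprovedTools(["read", "glob", "grep"]);
// Stop a long-running loop from elsewhere (e.g. a Ctrl+C handler).
process.on("SIGINT", () => agent.stop());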
Tech Stack
- Runtime: Node.js
- Language: TypeScript (strict, ESM, verbatimModuleSyntax)
- Schema: Zod (beta, v3-compatible API)
- Test: Vitest
- Lint: ESLint (typescript-eslint)
- SDKs: @anthropic-ai/sdk, openai, @modelcontextprotocol/sdk
- Utils: eventemitter3, lru-cache, zod-to-json-schema
License
MIT
