# @connectome/agent-framework

Multi-agent framework with pluggable modules, persistent state, streaming inference, and concurrent tool execution.

## Overview
Agent Framework orchestrates one or more LLM-powered agents that interact with the world through modules (pluggable capability providers). It handles the full lifecycle: event processing, context compilation, inference (streaming or request/response), tool dispatch, and state persistence via Chronicle.
```
External events (Discord, API, MCPL servers, timers)
        ↓
ProcessQueue → Module.onProcess() → EventResponse
        ↓                        ↓
Inference trigger        Messages / state updates
        ↓
Context compilation (with injections from modules + MCPL hooks)
        ↓
Membrane (LLM abstraction) → YieldingStream
        ↓
Tool calls → Module.handleToolCall() → results → stream resumes
        ↓
Agent speech → Module.onAgentSpeech() → external delivery
```

## Quick Start
```typescript
import { AgentFramework, ApiModule, ApiServer } from '@connectome/agent-framework';
import { Membrane, AnthropicAdapter } from 'membrane';

const membrane = new Membrane({ adapter: new AnthropicAdapter() });

const framework = await AgentFramework.create({
  storePath: './data/store',
  membrane,
  agents: [{
    name: 'assistant',
    model: 'claude-sonnet-4-20250514',
    systemPrompt: 'You are a helpful assistant.',
  }],
  modules: [new ApiModule()],
});

framework.start();

const server = new ApiServer(framework, { port: 8765 });
await server.start();
```

## Core Concepts
### Agents
An agent wraps an LLM identity: model, system prompt, context strategy, and tool permissions. Multiple agents can coexist, each with independent context and inference state.
```typescript
{
  name: 'researcher',
  model: 'claude-opus-4-20250514',
  systemPrompt: '...',
  strategy: new AutobiographicalStrategy({ recentWindowTokens: 30000 }),
  allowedTools: ['search', 'read', 'write'], // or 'all'
  triggerSources: ['discord'],               // or 'all'
  maxTokens: 8192,
  maxStreamTokens: 150_000, // input token budget before stream restart
}
```

Inference modes:
- Streaming (default in the framework): `startStreamWithInjections()` returns a `YieldingStream` that emits tokens, tool calls, and completion events. The framework drives the stream, dispatches tools, and resumes automatically.
- Request/response: `runInferenceWithInjections()` for simple complete-and-return usage, with an optional `AbortSignal` for cancellation.
State machine:

```
idle → inferring → streaming ⇄ waiting_for_tools → ready → streaming → ... → idle
                      ↘ (abort) → idle
```

### Modules
Modules are pluggable capability providers. They process events, expose tools, deliver speech, and optionally inject context before inference.
```typescript
interface Module {
  readonly name: string;
  start(ctx: ModuleContext): Promise<void>;
  stop(): Promise<void>;
  getTools(): ToolDefinition[];
  handleToolCall(call: ToolCall): Promise<ToolResult>;
  onProcess(event: ProcessEvent, state: ProcessState): Promise<EventResponse>;

  // Optional
  onAgentSpeech?(agentName: string, content: ContentBlock[], context: SpeechContext): Promise<void>;
  gatherContext?(agentName: string): Promise<ContextInjection[]>;
}
```

An `EventResponse` can add/edit/remove messages, request inference, signal tool changes, and atomically update module state:
```typescript
return {
  addMessages: [{ participant: 'user', content: [{ type: 'text', text: '...' }] }],
  requestInference: true,
  stateUpdate: { lastProcessed: Date.now() },
};
```

Built-in modules:
- `DiscordModule` - Discord bot integration (messages, reactions, threads, typing indicators)
- `ApiModule` - WebSocket API event processing
- `WorkspaceModule` - Mountable filesystem abstraction with Chronicle tree state, filesystem watching, and materialization
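A custom module only needs to implement the interface above. Here is a minimal sketch with simplified stand-in types (the real `ToolDefinition`, `ToolCall`, `ToolResult`, and `EventResponse` shapes come from the framework):

```typescript
// Minimal custom-module sketch. Types are simplified stand-ins,
// not the framework's actual interfaces.
type ToolDefinition = { name: string; description: string };
type ToolCall = { name: string; input: Record<string, unknown> };
type ToolResult = { content: string; isError?: boolean };
type EventResponse = { requestInference?: boolean; stateUpdate?: unknown };

class EchoModule {
  readonly name = "echo";

  getTools(): ToolDefinition[] {
    return [{ name: "echo", description: "Echo the input back" }];
  }

  async handleToolCall(call: ToolCall): Promise<ToolResult> {
    if (call.name !== "echo") {
      return { content: `unknown tool: ${call.name}`, isError: true };
    }
    return { content: String(call.input.text ?? "") };
  }

  async onProcess(): Promise<EventResponse> {
    // Ask the framework to run inference after every event.
    return { requestInference: true };
  }
}
```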
### Event Processing

The framework runs an event loop over a `ProcessQueue`. Each event is processed by all modules, which return `EventResponse` objects. If any response requests inference, the framework compiles context and starts a stream.
Event types: `ExternalMessageEvent`, `ToolCallEvent`, `ToolResultEvent`, `InferenceRequestEvent`, `McplPushEvent`, `McplChannelIncomingEvent`, `TimerFiredEvent`, `ApiMessageEvent`, `ModuleEvent`, `CustomEvent`
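The fan-out step can be sketched like this: each event goes to every module, and inference triggers if any response asks for it. All names and shapes here are illustrative; whether modules actually run sequentially or concurrently is an implementation detail of the framework:

```typescript
// Simplified fan-out sketch: dispatch one event to all modules and
// report whether any EventResponse requested inference.
type ProcessEvent = { type: string; payload?: unknown };
type EventResponse = { requestInference?: boolean };

interface ModuleLike {
  onProcess(event: ProcessEvent): Promise<EventResponse>;
}

async function processEvent(
  event: ProcessEvent,
  modules: ModuleLike[],
): Promise<boolean> {
  const responses = await Promise.all(modules.map((m) => m.onProcess(event)));
  // Inference runs at most once per event, however many modules asked.
  return responses.some((r) => r.requestInference === true);
}
```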
### Context Management

Each agent has a `ContextManager` (from `@connectome/context-manager`) that maintains conversation history with optional compression strategies:
- `PassthroughStrategy` - No compression, raw message replay up to budget
- `AutobiographicalStrategy` - Chunks old messages and summarizes them into diary entries, preserving recent context uncompressed

Before inference, modules and MCPL hooks can inject additional context via `gatherContext()`.
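Injection gathering can be sketched as asking every module that implements the optional hook and concatenating the results. The `ContextInjection` shape and `gatherInjections` helper below are illustrative assumptions, not the real API:

```typescript
// Sketch of pre-inference injection gathering: poll each module's
// optional gatherContext() and flatten the results.
type ContextInjection = { source: string; text: string };

interface InjectingModule {
  name: string;
  gatherContext?(agentName: string): Promise<ContextInjection[]>;
}

async function gatherInjections(
  agentName: string,
  modules: InjectingModule[],
): Promise<ContextInjection[]> {
  const lists = await Promise.all(
    // Modules without the hook contribute nothing.
    modules.map((m) => m.gatherContext?.(agentName) ?? Promise.resolve([])),
  );
  return lists.flat();
}
```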
### Persistence
All state is persisted in a Chronicle store:
| State | Strategy | Description |
|-------|----------|-------------|
| `framework/state` | snapshot | Agent configs |
| `framework/inference-log` | append_log | Raw LLM requests/responses |
| `messages` | (context-manager) | Shared conversation log |
| `agents/{name}/context` | (context-manager) | Per-agent context log |
| `modules/{name}/state` | snapshot | Per-module persistent state |
Chronicle's branching support enables time-travel and what-if exploration across all state.
## MCPL (MCP Live)
Optional host-side implementation of the MCP Live protocol. External servers (game engines, dev tools, etc.) can:
- Push events into agent context
- Hook into inference lifecycle (`beforeInference`/`afterInference`)
- Initiate inference requests
- Publish and observe channels
- Provide tools (namespaced automatically)
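Automatic namespacing can be illustrated as prefixing each server-provided tool with the server's `toolPrefix`. The separator and helper below are assumptions for illustration; the framework's actual naming scheme may differ:

```typescript
// Sketch of MCPL tool namespacing: qualify each server tool name with
// the server's toolPrefix so tools from different servers cannot collide.
type Tool = { name: string; description: string };

function namespaceTools(prefix: string, tools: Tool[]): Tool[] {
  return tools.map((t) => ({ ...t, name: `${prefix}_${t.name}` }));
}
```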
```typescript
{
  mcplServers: [{
    id: 'game-engine',
    command: 'node',
    args: ['./game-server.js'],
    toolPrefix: 'game',
  }],
}
```

## Streaming Lifecycle
- Start: Framework calls `agent.startStreamWithInjections()` → `YieldingStream`
- Drive: Framework iterates stream events (tokens, tool calls, complete, error)
- Tool yield: Stream yields tool calls → framework dispatches them concurrently → collects results
- Resume: Once all results are in, the stream resumes with the tool results
- Budget restart: If input tokens exceed `maxStreamTokens`, the stream is cancelled, context is recompressed, and a fresh stream starts
- Complete: Final response is saved to context and the agent returns to idle
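The concurrent dispatch step can be sketched with `Promise.all`, mapping each failure to an error result so the stream can still resume. Shapes are illustrative, not the framework's real types:

```typescript
// Sketch of concurrent tool dispatch: run all yielded tool calls in
// parallel and convert failures into error results instead of rejecting.
type ToolCall = { id: string; name: string; input: unknown };
type ToolResult = { id: string; content: string; isError?: boolean };

async function dispatchAll(
  calls: ToolCall[],
  run: (call: ToolCall) => Promise<string>,
): Promise<ToolResult[]> {
  return Promise.all(
    calls.map(async (call) => {
      try {
        return { id: call.id, content: await run(call) };
      } catch (err) {
        // A failed tool becomes an error result; the stream still resumes.
        return { id: call.id, content: String(err), isError: true };
      }
    }),
  );
}
```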
## Abort / Cancellation

```typescript
// Cancel in-flight inference for an agent
framework.abortInference('assistant', 'user requested stop');
```

The non-streaming path forwards an `AbortSignal` to `membrane.stream()`, which returns an `AbortedResponse` with partial content. The streaming path cancels the `YieldingStream` directly.
## API Server
WebSocket server for external clients (UIs, scripts, other agents).
```typescript
const server = new ApiServer(framework, { port: 8765 });
await server.start();
```

Commands: `message.send`, `message.list`, `inference.request`, `inference.abort`, `branch.*`, `agent.*`, `module.*`, `store.*`, `inference.tail`/`inspect`/`search`, `events.*`
Also available as an MCP server via the `agent-framework-mcp` binary.
## Observability
Subscribe to trace events for logging, UI updates, or debugging:
```typescript
framework.onTrace((event) => {
  // inference:started, inference:completed, inference:aborted
  // tool:started, tool:completed, tool:failed
  // process:received, process:completed
  // message:added, module:added, module:removed
  console.log(event.type, event);
});
```

Query raw inference logs:

```typescript
const logs = framework.queryInferenceLogs({ agentName: 'assistant', limit: 10 });
```

## Dependencies
| Package | Role |
|---------|------|
| membrane | LLM provider abstraction (Anthropic, Bedrock, OpenRouter) |
| @connectome/context-manager | Context window management and compression |
| chronicle | Branchable persistent event store (Rust + N-API) |
## Development
```bash
npm install
npm run build      # TypeScript compilation
npm run dev        # Watch mode
npm test           # Run tests
npm run typecheck  # Type-check without emitting
```

Requires Node.js >= 20. Chronicle requires a Rust toolchain for native module compilation (`npm run build` in the chronicle directory).
