# @cargo-cult/pi-agent
Stateful agent with tool execution and event streaming. Built on @cargo-cult/pi-ai.
## Installation
```bash
npm install @cargo-cult/pi-agent
```

## Quick Start

```typescript
import { Agent } from "@cargo-cult/pi-agent";
import { getModel } from "@cargo-cult/pi-ai";
const agent = new Agent({
initialState: {
systemPrompt: "You are a helpful assistant.",
model: getModel("anthropic", "claude-sonnet-4-20250514"),
},
});
agent.subscribe((event) => {
if (event.type === "message_update" && event.assistantMessageEvent.type === "text_delta") {
// Stream just the new text chunk
process.stdout.write(event.assistantMessageEvent.delta);
}
});
await agent.prompt("Hello!");Core Concepts
### AgentMessage vs LLM Message
The agent works with `AgentMessage`, a flexible type that can include:

- Standard LLM messages (`user`, `assistant`, `toolResult`)
- Custom app-specific message types via declaration merging

LLMs only understand `user`, `assistant`, and `toolResult`. The `convertToLlm` function bridges this gap by filtering and transforming messages before each LLM call.
### Message Flow
```
AgentMessage[] → transformContext() → AgentMessage[] → convertToLlm() → Message[] → LLM
                 (optional)                            (required)
```

- `transformContext`: Prune old messages, inject external context (see the sketch below)
- `convertToLlm`: Filter out UI-only messages, convert custom types to LLM format
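For example, a minimal `transformContext` that caps the context at the most recent messages. This is a sketch: the sliding-window strategy and the `MAX_MESSAGES` constant are illustrative assumptions, not part of the library.

```typescript
import { Agent } from "@cargo-cult/pi-agent";
import { getModel } from "@cargo-cult/pi-ai";

// Assumption: a plain sliding window is acceptable here. Real pruning should
// keep assistant toolCall / toolResult pairs together, or the LLM will see
// orphaned tool results.
const MAX_MESSAGES = 50; // hypothetical budget; tune per model context window

const agent = new Agent({
  initialState: {
    systemPrompt: "You are a helpful assistant.",
    model: getModel("anthropic", "claude-sonnet-4-20250514"),
  },
  transformContext: async (messages, signal) =>
    messages.length > MAX_MESSAGES ? messages.slice(-MAX_MESSAGES) : messages,
});
```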
## Event Flow
The agent emits events for UI updates. Understanding the event sequence helps build responsive interfaces.
### prompt() Event Sequence
When you call `prompt("Hello")`:

```
prompt("Hello")
├─ agent_start
├─ turn_start
├─ message_start { message: userMessage } // Your prompt
├─ message_end { message: userMessage }
├─ message_start { message: assistantMessage } // LLM starts responding
├─ message_update { message: partial... } // Streaming chunks
├─ message_update { message: partial... }
├─ message_end { message: assistantMessage } // Complete response
├─ turn_end { message, toolResults: [] }
└─ agent_end { messages: [...] }
```

### With Tool Calls
If the assistant calls tools, the loop continues:
prompt("Read config.json")
├─ agent_start
├─ turn_start
├─ message_start/end { userMessage }
├─ message_start { assistantMessage with toolCall }
├─ message_update...
├─ message_end { assistantMessage }
├─ tool_execution_start { toolCallId, toolName, args }
├─ tool_execution_update { partialResult } // If tool streams
├─ tool_execution_end { toolCallId, result }
├─ message_start/end { toolResultMessage }
├─ turn_end { message, toolResults: [toolResult] }
│
├─ turn_start // Next turn
├─ message_start { assistantMessage } // LLM responds to tool result
├─ message_update...
├─ message_end
├─ turn_end
└─ agent_end
```

### continue() Event Sequence
`continue()` resumes from the existing context without adding a new message. Use it to retry after errors.

```typescript
// After an error, retry from current state
await agent.continue();
```

The last message in context must be `user` or `toolResult` (not `assistant`).
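A sketch of a retry helper built on this, assuming a failed run surfaces on `agent.state.error` rather than throwing; the backoff policy and `sleep` helper are illustrative:

```typescript
import type { Agent } from "@cargo-cult/pi-agent";

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function promptWithRetry(agent: Agent, text: string, retries = 3) {
  await agent.prompt(text);
  // Assumption: errors land on agent.state.error instead of rejecting.
  for (let attempt = 1; attempt <= retries && agent.state.error; attempt++) {
    await sleep(1000 * attempt); // linear backoff between attempts
    await agent.continue(); // resume from the existing context
  }
}
```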
### Event Types
| Event | Description |
|-------|-------------|
| `agent_start` | Agent begins processing |
| `agent_end` | Agent completes with all new messages |
| `turn_start` | New turn begins (one LLM call plus tool executions) |
| `turn_end` | Turn completes with the assistant message and tool results |
| `message_start` | Any message begins (user, assistant, toolResult) |
| `message_update` | Assistant messages only; includes `assistantMessageEvent` with the delta |
| `message_end` | Message completes |
| `tool_execution_start` | Tool begins |
| `tool_execution_update` | Tool streams progress |
| `tool_execution_end` | Tool completes |
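For example, a subscriber that drives a minimal console UI off these events (the rendering choices are illustrative; the event fields match the sequences shown above):

```typescript
agent.subscribe((event) => {
  switch (event.type) {
    case "message_update":
      // Assistant only: append just the streamed text chunk.
      if (event.assistantMessageEvent.type === "text_delta") {
        process.stdout.write(event.assistantMessageEvent.delta);
      }
      break;
    case "tool_execution_start":
      console.log(`\n[tool] ${event.toolName} started`);
      break;
    case "tool_execution_end":
      console.log(`[tool] ${event.toolCallId} finished`);
      break;
    case "agent_end":
      console.log(`\n${event.messages.length} new message(s) this run`);
      break;
  }
});
```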
## Agent Options

```typescript
const agent = new Agent({
// Initial state
initialState: {
systemPrompt: string,
model: Model<any>,
thinkingLevel: "off" | "minimal" | "low" | "medium" | "high" | "xhigh",
tools: AgentTool<any>[],
messages: AgentMessage[],
},
// Convert AgentMessage[] to LLM Message[] (required for custom message types)
convertToLlm: (messages) => messages.filter(...),
// Transform context before convertToLlm (for pruning, compaction)
transformContext: async (messages, signal) => pruneOldMessages(messages),
// How to handle queued messages: "one-at-a-time" (default) or "all"
queueMode: "one-at-a-time",
// Custom stream function (for proxy backends)
streamFn: streamProxy,
// Dynamic API key resolution (for expiring OAuth tokens)
getApiKey: async (provider) => refreshToken(),
});
```

## Agent State

```typescript
interface AgentState {
systemPrompt: string;
model: Model<any>;
thinkingLevel: ThinkingLevel;
tools: AgentTool<any>[];
messages: AgentMessage[];
isStreaming: boolean;
streamMessage: AgentMessage | null; // Current partial during streaming
pendingToolCalls: Set<string>;
error?: string;
}
```

Access via `agent.state`. During streaming, `streamMessage` contains the partial assistant message.
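For example, a render function can show the in-flight message alongside the committed transcript. A sketch, assuming the partial message is not yet included in `messages` while streaming:

```typescript
function renderTranscript() {
  const { messages, isStreaming, streamMessage } = agent.state;
  // Assumption: streamMessage is not yet part of messages during streaming.
  const visible = isStreaming && streamMessage ? [...messages, streamMessage] : messages;
  for (const message of visible) {
    console.log(message.role); // rendering is app-specific
  }
}

// Re-render on every event; simple UIs can just redraw from state.
agent.subscribe(() => renderTranscript());
```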
## Methods
### Prompting

```typescript
// Text prompt
await agent.prompt("Hello");
// With images
await agent.prompt("What's in this image?", [
{ type: "image", data: base64Data, mimeType: "image/jpeg" }
]);
// AgentMessage directly
await agent.prompt({ role: "user", content: "Hello", timestamp: Date.now() });
// Continue from current context (last message must be user or toolResult)
await agent.continue();
```

### State Management

```typescript
agent.setSystemPrompt("New prompt");
agent.setModel(getModel("openai", "gpt-4o"));
agent.setThinkingLevel("medium");
agent.setTools([myTool]);
agent.replaceMessages(newMessages);
agent.appendMessage(message);
agent.clearMessages();
agent.reset(); // Clear everything
```

### Control

```typescript
agent.abort(); // Cancel current operation
await agent.waitForIdle(); // Wait for completion
```
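A sketch of cancellation wired to a timeout; the 30-second budget is illustrative, and whether an aborted `prompt()` rejects or resolves early is an assumption to verify:

```typescript
// Assumption: abort() stops the current run; the pending promise may
// reject or resolve early depending on the library's abort semantics.
const timer = setTimeout(() => agent.abort(), 30_000);
try {
  await agent.prompt("Summarize this repository");
} finally {
  clearTimeout(timer);
  await agent.waitForIdle(); // safe point to read final state
}
```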
### Events

```typescript
const unsubscribe = agent.subscribe((event) => {
console.log(event.type);
});
unsubscribe();
```

## Message Queue
Queue messages to inject during tool execution (for user interruptions):

```typescript
agent.setQueueMode("one-at-a-time");
// While agent is running tools
agent.queueMessage({
role: "user",
content: "Stop! Do this instead.",
timestamp: Date.now(),
});
```

When queued messages are detected after a tool completes:
- Remaining tools are skipped with error results
- Queued message is injected
- LLM responds to the interruption
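In a UI input handler you might branch on whether a run is in flight. A sketch; treating `isStreaming` or pending tool calls as the busy signal is an assumption:

```typescript
async function sendUserInput(text: string) {
  const message = { role: "user" as const, content: text, timestamp: Date.now() };
  // Assumption: isStreaming or pendingToolCalls indicates a run in flight.
  if (agent.state.isStreaming || agent.state.pendingToolCalls.size > 0) {
    agent.queueMessage(message); // injected after the current tool completes
  } else {
    await agent.prompt(message);
  }
}
```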
## Custom Message Types
Extend `AgentMessage` via declaration merging:

```typescript
declare module "@cargo-cult/pi-agent" {
interface CustomAgentMessages {
notification: { role: "notification"; text: string; timestamp: number };
}
}
// Now valid
const msg: AgentMessage = { role: "notification", text: "Info", timestamp: Date.now() };
```

Handle custom types in `convertToLlm`:

```typescript
const agent = new Agent({
convertToLlm: (messages) => messages.flatMap(m => {
if (m.role === "notification") return []; // Filter out
return [m];
}),
});
```
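Instead of dropping custom messages, `convertToLlm` can rewrite them into standard LLM messages. A sketch that surfaces notifications as user-role text (whether this fits your prompt design is up to you):

```typescript
const agent = new Agent({
  convertToLlm: (messages) => messages.flatMap((m) => {
    if (m.role === "notification") {
      // Rewrite the custom type into a message the LLM understands.
      return [{ role: "user" as const, content: `[notification] ${m.text}` }];
    }
    return [m];
  }),
});
```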
## Tools

Define tools using `AgentTool`:

```typescript
import { Type } from "@sinclair/typebox";
import fs from "node:fs/promises";
import type { AgentTool } from "@cargo-cult/pi-agent";
const readFileTool: AgentTool = {
name: "read_file",
label: "Read File", // For UI display
description: "Read a file's contents",
parameters: Type.Object({
path: Type.String({ description: "File path" }),
}),
execute: async (toolCallId, params, signal, onUpdate) => {
// Optional: stream progress before the slow work
onUpdate?.({ content: [{ type: "text", text: "Reading..." }], details: {} });
const content = await fs.readFile(params.path, "utf-8");
return {
content: [{ type: "text", text: content }],
details: { path: params.path, size: content.length },
};
},
};
agent.setTools([readFileTool]);
```

### Error Handling
Throw an error when a tool fails. Do not return error messages as content.

```typescript
execute: async (toolCallId, params, signal, onUpdate) => {
if (!fs.existsSync(params.path)) {
throw new Error(`File not found: ${params.path}`);
}
// Return content only on success
return { content: [{ type: "text", text: "..." }] };
}
```

Thrown errors are caught by the agent and reported to the LLM as tool errors with `isError: true`.
## Proxy Usage
For browser apps that proxy through a backend:

```typescript
import { Agent, streamProxy } from "@cargo-cult/pi-agent";
const agent = new Agent({
streamFn: (model, context, options) =>
streamProxy(model, context, {
...options,
authToken: "...",
proxyUrl: "https://your-server.com",
}),
});
```

## Low-Level API
For direct control without the `Agent` class:

```typescript
import { agentLoop, agentLoopContinue } from "@cargo-cult/pi-agent";
import type { AgentContext, AgentLoopConfig, AgentMessage } from "@cargo-cult/pi-agent";
import { getModel } from "@cargo-cult/pi-ai";
const context: AgentContext = {
systemPrompt: "You are helpful.",
messages: [],
tools: [],
};
const config: AgentLoopConfig = {
model: getModel("openai", "gpt-4o"),
convertToLlm: (msgs) => msgs.filter(m => ["user", "assistant", "toolResult"].includes(m.role)),
};
const userMessage: AgentMessage = { role: "user", content: "Hello", timestamp: Date.now() };
for await (const event of agentLoop([userMessage], context, config)) {
console.log(event.type);
}
// Continue from existing context
for await (const event of agentLoopContinue(context, config)) {
console.log(event.type);
}
```

## License
MIT
