# @economic/agents (v1.7.1)
A batteries-included toolkit for building AI agents on Cloudflare Workers. Provides Durable Object base classes for both chat and non-chat agents, with on-demand skill loading, automatic message compaction, conversation management, and audit logging to D1.

For chat agents, extend `ChatAgentHarness` (recommended) or `ChatAgent` (lower-level). For headless agents, extend `Agent`.

For React integration, see `@economic/agents-react`.
## Install

```bash
npm install @economic/agents @cloudflare/ai-chat ai agents
```

## Quick Start

### Server
```typescript
import { openai } from "@ai-sdk/openai";
import { tool } from "ai";
import { z } from "zod";
import { ChatAgentHarness, type AgentToolContext, type Skill } from "@economic/agents";

const searchSkill: Skill = {
  name: "search",
  description: "Web search tools",
  guidance: "Use search_web for any queries requiring up-to-date information.",
  tools: {
    search_web: tool({
      description: "Search the web",
      inputSchema: z.object({ query: z.string() }),
      execute: async ({ query }) => `Results for: ${query}`,
    }),
  },
};

export class MyAgent extends ChatAgentHarness<Env> {
  getModel(ctx: AgentToolContext) {
    return openai("gpt-4o");
  }
  getFastModel() {
    return openai("gpt-4o-mini");
  }
  getSystemPrompt(ctx: AgentToolContext) {
    return "You are a helpful assistant.";
  }
  getSkills(ctx: AgentToolContext) {
    return [searchSkill];
  }
}
```

For lower-level control (custom `onChatMessage` implementations), extend `ChatAgent` directly — see [ChatAgent](#chatagent).
### Wrangler Config

```jsonc
{
  "durable_objects": {
    "bindings": [{ "name": "MyAgent", "class_name": "MyAgent" }],
  },
  "migrations": [{ "tag": "v1", "new_sqlite_classes": ["MyAgent"] }],
}
```

Run `wrangler types` afterwards to generate typed `Env` bindings.
### Client

```typescript
import { useAIChatAgent, type AgentConnectionStatus } from "@economic/agents-react";
import { useState } from "react";

const [connectionStatus, setConnectionStatus] = useState<AgentConnectionStatus>("connecting");

const { agent, chat } = useAIChatAgent({
  agent: "MyAgent",
  host: "localhost:8787",
  chatId: "user_123:session-1",
  toolContext: {},
  connectionParams: { userId: "…" },
  onConnectionStatusChange: setConnectionStatus,
});

const { messages, sendMessage, status, stop } = chat;
```

`chatId` is the Durable Object name — use `userId:uniqueChatId` (see [Providing userId](#providing-userid)).

Note: React hooks are in a separate package. Install with `npm install @economic/agents-react`.
## Harnesses

The server-side API is built around Durable Object base classes — `Agent` for headless workflows and `ChatAgent`/`ChatAgentHarness` for conversational UIs — plus a skill system that lets the LLM load tools on demand.

### ChatAgentHarness

The recommended starting point for chat agents. Extends `ChatAgent` with an opinionated structure: implement abstract methods for model selection, system prompt, tools, and skills. The harness handles `onChatMessage` for you.
```typescript
import { openai } from "@ai-sdk/openai";
import { ChatAgentHarness, type AgentToolContext, type Skill } from "@economic/agents";

interface RequestBody {
  userTier: "free" | "pro";
}

export class MyAgent extends ChatAgentHarness<Env, RequestBody> {
  getModel(ctx: AgentToolContext<RequestBody>) {
    return ctx.userTier === "pro" ? openai("gpt-4o") : openai("gpt-4o-mini");
  }
  getFastModel() {
    return openai("gpt-4o-mini");
  }
  getSystemPrompt(ctx: AgentToolContext<RequestBody>) {
    return "You are a helpful assistant.";
  }
  getTools(ctx: AgentToolContext<RequestBody>) {
    return { myTool };
  }
  getSkills(ctx: AgentToolContext<RequestBody>) {
    return [searchSkill, calculatorSkill];
  }
}
```

- `getModel(ctx)` — returns the primary language model. The context includes the request body, enabling tier-based model selection.
- `getFastModel()` — returns a fast/cheap model for compaction and conversation summarization.
- `getSystemPrompt(ctx)` — returns the system prompt.
- `getTools(ctx)` — returns always-on tools (optional, defaults to `{}`).
- `getSkills(ctx)` — returns skills available for on-demand loading (optional, defaults to `[]`).
- `conversationRetentionDays` — defaults to 90. Set to `undefined` to disable auto-deletion.
#### Binding Name

`ChatAgentHarness` automatically derives the Durable Object binding name from the class name. The binding name in your `wrangler.jsonc` must exactly match your class name:

```typescript
// Class name is "MyAgent"
export class MyAgent extends ChatAgentHarness<Env> {
  /* ... */
}
```

```jsonc
// wrangler.jsonc — binding name must be "MyAgent" to match
{
  "durable_objects": {
    "bindings": [{ "name": "MyAgent", "class_name": "MyAgent" }],
  },
}
```

If the names don't match, the harness won't be able to resolve the binding and will throw at runtime. If you need a different binding name, override the `binding` getter:

```typescript
export class MyAgent extends ChatAgentHarness<Env> {
  protected get binding() {
    return this.env.CUSTOM_BINDING_NAME;
  }
  // ...
}
```
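The name-matching rule can be sketched roughly as follows. This is a hypothetical helper for illustration (`resolveBinding` is not part of the package API); it looks the class name up on the env object and fails loudly when nothing matches:

```typescript
// Hypothetical sketch of class-name-based binding resolution.
type EnvLike = Record<string, unknown>;

function resolveBinding(env: EnvLike, className: string): unknown {
  const binding = env[className];
  if (binding === undefined) {
    // Mirrors the runtime error described above: no binding under that name.
    throw new Error(
      `No Durable Object binding named "${className}"; check that the binding ` +
        `name in wrangler.jsonc matches the class name.`,
    );
  }
  return binding;
}
```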
---
### ChatAgent
Lower-level base class for chat agents. Use when you need full control over `onChatMessage` — custom streaming, multiple LLM calls per turn, or non-standard response formats.
```typescript
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";
import { ChatAgent } from "@economic/agents";
export class MyAgent extends ChatAgent<Env> {
protected get binding() {
return this.env.MyAgent;
}
protected getFastModel() {
return openai("gpt-4o-mini");
}
async onChatMessage(onFinish, options) {
const params = await this.buildLLMParams({
options,
onFinish,
model: openai("gpt-4o"),
system: "You are a helpful assistant.",
skills: [searchSkill],
tools: { alwaysOnTool },
});
return streamText(params).toUIMessageStreamResponse();
}
}
```

- `binding` — abstract getter returning the DO namespace binding. Required on every subclass.
- `getFastModel()` — abstract method returning the fast model for compaction and summarization.
- `maxMessagesBeforeCompaction` — class property to override the default threshold (15). Set to `undefined` to disable.
- `conversationRetentionDays` — class property to auto-delete inactive conversations after N days.
- `this.buildLLMParams()` — pre-fills `messages` and `activeSkills`, and injects `logEvent` into `experimental_context`.
- `getConversations()` / `deleteConversation(id)` — callable methods for listing/deleting a user's conversations.
### Agent

Abstract Durable Object base for non-chat agents. Use it for headless workflows driven from HTTP handlers, schedules, or alarms.

```typescript
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { callable } from "agents";
import { Agent } from "@economic/agents";

export class MyAgent extends Agent<Env> {
  @callable
  async summarize(document: string) {
    const params = await this.buildLLMParams({
      model: openai("gpt-4o"),
      messages: [{ role: "user", content: `Summarise: ${document}` }],
      system: "You are a helpful assistant.",
      skills: [searchSkill],
    });
    const result = await generateText(params);
    return result.text;
  }
}
```

- `this.buildLLMParams()` pre-fills `activeSkills` from DO SQLite and injects `logEvent` into `experimental_context`.
- `this.logEvent(message, payload?)` writes audit events to D1 when `AGENT_DB` is bound, and is a silent no-op otherwise.
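The `logEvent` fallback behaviour might be sketched like this. The `D1Like` interface and `insert` method are illustrative stand-ins, not the real D1 API; the point is only the "write when bound, no-op otherwise" shape:

```typescript
// Hypothetical sketch of logEvent: write to the audit table when a database
// is bound, otherwise do nothing.
interface D1Like {
  insert(table: string, row: Record<string, unknown>): Promise<void>;
}

async function logEvent(db: D1Like | undefined, message: string, payload?: unknown) {
  if (!db) return; // silent no-op when AGENT_DB is not bound
  await db.insert("audit_events", {
    message,
    payload: payload === undefined ? null : JSON.stringify(payload),
    created_at: Date.now(),
  });
}
```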
## Tool context

Pass data via the `body` option of `useAgentChat` (with `useAIChatAgent`, use `toolContext` — it is forwarded as `body`). It arrives as `experimental_context` in tool `execute` functions. Use `AgentToolContext<TBody>` to type it:

```typescript
import type { AgentToolContext } from "@economic/agents";

interface AgentBody {
  authorization: string;
  userId: string;
}

type ToolContext = AgentToolContext<AgentBody>;

// Tool
execute: async (args, { experimental_context }) => {
  const ctx = experimental_context as ToolContext;
  await ctx.logEvent("tool called", { userId: ctx.userId });
  return await fetchSomething(ctx.authorization);
};
```

`logEvent` is a no-op when `AGENT_DB` is not bound.
## JWT Authentication

Authenticate WebSocket connections by implementing `getJwtAuthConfig` on your agent. When defined, JWT verification runs in `onConnect` — failed auth closes the connection, successful auth stores claims in `session`.

```typescript
import type { JWTPayload } from "jose";
import { ChatAgentHarness, type AgentToolContext } from "@economic/agents";

interface Session {
  clientId: string;
  userGuid: string;
  agreementNumber: number;
}

export class MyAgent extends ChatAgentHarness<Env, RequestBody> {
  getJwtAuthConfig(request: Request) {
    const origin = request.headers.get("Origin") ?? "";
    const isStaging = origin.includes("staging");
    return {
      allowedIssuers: isStaging
        ? [/^https:\/\/auth\.staging\.example\.com$/]
        : ["https://auth.example.com"],
      audience: "my-api",
      requiredScopes: ["read"],
      getClaims: (payload: JWTPayload): Session => ({
        clientId: payload.client_id as string,
        userGuid: payload.user_guid as string,
        agreementNumber: payload.agreement_number as number,
      }),
    };
  }

  // Session is available in tool context
  getModel(ctx: AgentToolContext<RequestBody>) {
    console.log(ctx.session); // { clientId, userGuid, agreementNumber }
    return openai("gpt-4o");
  }
}
```

- `allowedIssuers` — array of strings or RegExp patterns for trusted issuers
- `audience` — expected `aud` claim
- `requiredScopes` — optional array of required OAuth scopes
- `getClaims(payload)` — extracts claims from the verified JWT payload

Claims are available as `ctx.session` in `getModel`, `getSystemPrompt`, `getTools`, `getSkills`, and tool `execute` functions.

If `getJwtAuthConfig` is not implemented, no authentication is performed and `ctx.session` is `undefined`.
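The mixed string/RegExp issuer list could be matched along these lines. This is a sketch of the matching rule only (`issuerAllowed` is illustrative); the package performs full JWT verification inside `onConnect`:

```typescript
// Hypothetical helper: a string entry must equal the issuer exactly,
// a RegExp entry must match it.
function issuerAllowed(issuer: string, allowed: Array<string | RegExp>): boolean {
  return allowed.some((entry) =>
    typeof entry === "string" ? entry === issuer : entry.test(issuer),
  );
}
```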
## Source URLs from Tools

Any tool can surface source URLs into the message stream by including a `sources` array in its return value. `buildLLMParams` detects this automatically — no additional wiring needed.

```typescript
execute: async ({ query }) => {
  const data = await fetchResults(query);
  return {
    results: data.results,
    sources: data.results.map((r) => ({ url: r.url, title: r.title })),
  };
};
```

Each source entry has the shape `{ url: string, title?: string }`.
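Detection presumably amounts to checking each tool result for a well-formed `sources` array. A rough, hypothetical sketch (not the package's actual implementation):

```typescript
interface SourceEntry {
  url: string;
  title?: string;
}

// Hypothetical helper: pull a well-formed `sources` array out of an arbitrary
// tool result, dropping entries without a string `url`.
function extractSources(result: unknown): SourceEntry[] {
  if (typeof result !== "object" || result === null) return [];
  const sources = (result as { sources?: unknown }).sources;
  if (!Array.isArray(sources)) return [];
  return sources.filter(
    (s): s is SourceEntry =>
      typeof s === "object" && s !== null && typeof (s as SourceEntry).url === "string",
  );
}
```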
## Skills

Named groups of tools loaded on demand by the LLM. The agent starts with only always-on tools active. When the LLM needs more, it calls `activate_skill`.

```typescript
import { tool } from "ai";
import { z } from "zod";
import type { Skill } from "@economic/agents";

export const calculatorSkill: Skill = {
  name: "calculator",
  description: "Mathematical calculation and expression evaluation",
  guidance:
    "Use the calculate tool for any arithmetic or algebraic expressions. " +
    "Always show the expression you are evaluating.",
  tools: {
    calculate: tool({
      description: "Evaluate a mathematical expression",
      inputSchema: z.object({
        expression: z.string().describe('e.g. "2 + 2", "Math.sqrt(144)"'),
      }),
      execute: async ({ expression }) => {
        const result = new Function(`"use strict"; return (${expression})`)();
        return `${expression} = ${result}`;
      },
    }),
  },
};
```

When skills are provided to `buildLLMParams`, two meta-tools are registered automatically:

- `activate_skill` — loads skills by name, making their tools available for the rest of the conversation. Idempotent. State is persisted to DO SQLite.
- `list_capabilities` — returns active tools, loaded skills, and skills available to load.

The `activate_skill` and `list_capabilities` meta-tools are stripped from message history before persistence.
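The idempotent activation behaviour can be illustrated with a minimal sketch. The `activateSkill` helper is hypothetical; the real meta-tool additionally persists the active set to DO SQLite, which the sketch omits:

```typescript
// Hypothetical sketch: activating an already-active skill is a no-op,
// and unknown skill names are reported rather than added.
function activateSkill(active: Set<string>, known: string[], name: string): string {
  if (!known.includes(name)) return `Unknown skill: ${name}`;
  if (active.has(name)) return `Skill "${name}" is already active.`;
  active.add(name);
  return `Skill "${name}" activated.`;
}
```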
## Audit Logging (D1)

All agent base classes write audit events to a D1 database when `AGENT_DB` is bound. If not bound, `logEvent` is a no-op.

### D1 Setup

- Create a D1 database in the Cloudflare dashboard.
- Run the schema in the D1 console. For `Agent`, use `schema/agent.sql`. For `ChatAgent`/`ChatAgentHarness`, use `schema/chat.sql` (includes the conversations table).
- Bind it in `wrangler.jsonc`:

  ```jsonc
  "d1_databases": [
    { "binding": "AGENT_DB", "database_name": "agents", "database_id": "YOUR_DB_ID" }
  ]
  ```

- For local dev, apply the schema to your local D1 (from your app's directory), e.g. `wrangler d1 execute <database_name> --local --file=node_modules/@economic/agents/schema/chat.sql`. You can wrap that in a `db:setup` npm script if you prefer.
## Providing userId

The client's `chatId` becomes the Durable Object name. Use the format `userId:uniqueChatId` so that the first segment is your stable user id: audit logging and conversations key off `getUserId()`, i.e. the substring before the first `:`. If that segment is empty (e.g. `:chat-1`), the connection is rejected. This is the same convention used in Quick Start (`chatId`).
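The first-segment rule can be sketched as follows. This is a hypothetical helper, not the package's actual `getUserId()`; it assumes the whole string counts as the user id when no `:` is present:

```typescript
// Hypothetical sketch: user id = substring before the first ":",
// rejecting chatIds with an empty first segment.
function getUserIdFromChatId(chatId: string): string {
  const separator = chatId.indexOf(":");
  const userId = separator === -1 ? chatId : chatId.slice(0, separator);
  if (userId.length === 0) {
    throw new Error(`Invalid chatId "${chatId}": expected "userId:uniqueChatId".`);
  }
  return userId;
}
```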
```typescript
import { useAIChatAgent } from "@economic/agents-react";

const { agent, chat } = useAIChatAgent({
  agent: "MyAgent",
  host: "localhost:8787",
  chatId: "148583_matt:conversation-1",
});
```

## Chat Features
Compaction and the conversation list (below) require `getFastModel()` on your subclass.

### Compaction

Compaction summarises older messages before each turn. The full history in DO SQLite is unaffected — compaction is in-memory only. The default threshold is 15 recent messages (`maxMessagesBeforeCompaction` on the class).
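The threshold behaviour amounts to partitioning the history into older messages to summarise and recent messages kept verbatim. A minimal sketch (hypothetical helper, not the package's implementation; the summarisation call itself is omitted):

```typescript
interface Msg {
  role: string;
  content: string;
}

// Hypothetical helper: keep the most recent `maxRecent` messages verbatim;
// everything older is handed to the fast model for summarisation.
// `undefined` disables compaction entirely.
function partitionForCompaction(messages: Msg[], maxRecent: number | undefined) {
  if (maxRecent === undefined || messages.length <= maxRecent) {
    return { toSummarise: [] as Msg[], recent: messages };
  }
  return {
    toSummarise: messages.slice(0, messages.length - maxRecent),
    recent: messages.slice(-maxRecent),
  };
}
```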
```typescript
export class MyAgent extends ChatAgentHarness<Env> {
  getModel() {
    return openai("gpt-4o");
  }
  getFastModel() {
    return openai("gpt-4o-mini");
  }
  getSystemPrompt() {
    return "You are a helpful assistant.";
  }

  // Optional: keep more messages verbatim before summarising (default 15).
  // protected maxMessagesBeforeCompaction = 50;

  // Optional: disable compaction (still uses fastModel for conversation title/summary).
  // protected maxMessagesBeforeCompaction = undefined;
}
```

### Conversations (D1)
`ChatAgent` and `ChatAgentHarness` maintain a conversations table in `AGENT_DB` — one row per Durable Object instance, upserted automatically after every turn. Requires `schema/chat.sql`.

**Automatic title and summary** — on the first turn, a title and summary are generated and inserted. On subsequent turns, only `updated_at` is refreshed. The title and summary are regenerated periodically as the conversation grows.

**Retention** — set `conversationRetentionDays` to auto-delete inactive conversations:
```typescript
export class MyAgent extends ChatAgentHarness<Env> {
  getModel() {
    return openai("gpt-4o");
  }
  getFastModel() {
    return openai("gpt-4o-mini");
  }
  getSystemPrompt() {
    return "You are a helpful assistant.";
  }

  // ChatAgentHarness defaults to 90 days. Override or set to undefined to disable.
  protected conversationRetentionDays = 30;
}
```

When the retention period expires, the D1 row is deleted, WebSocket connections are closed, and the DO's SQLite storage is wiped.
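The expiry condition can be sketched as a simple inactivity check. This is illustrative only (the actual scheduling and cleanup mechanism is not shown here):

```typescript
const DAY_MS = 24 * 60 * 60 * 1000;

// Hypothetical sketch: a conversation is expired when it has been inactive
// for more than `retentionDays` days; undefined disables auto-deletion.
function isExpired(
  updatedAtMs: number,
  retentionDays: number | undefined,
  nowMs: number,
): boolean {
  if (retentionDays === undefined) return false;
  return nowMs - updatedAtMs > retentionDays * DAY_MS;
}
```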
**Querying** — from a connected client:

```typescript
const conversations = await agent.call("getConversations");
```

### Message Ratings (D1)
Users can rate individual messages with thumbs up/down. Ratings are stored in D1 and can be updated if the user changes their mind.

```typescript
// Rate a message (1 = thumbs up, -1 = thumbs down)
await agent.call("rateMessage", [messageId, 1]);

// Change rating
await agent.call("rateMessage", [messageId, -1]);

// Get all ratings for the current conversation
const ratings = await agent.call("getMessageRatings");
// Returns: { "message_id_1": 1, "message_id_2": -1, ... }
```

Ratings are stored per message and upserted on conflict — calling `rateMessage` on an already-rated message updates the rating. Requires `schema/chat.sql`, which includes the `message_ratings` table.
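The upsert semantics boil down to one rating per message id, where re-rating replaces the previous value. A minimal in-memory sketch (the package stores these rows in D1, not in a Map):

```typescript
type Rating = 1 | -1; // 1 = thumbs up, -1 = thumbs down

// Hypothetical sketch of the upsert: insert when unrated, overwrite otherwise.
function rateMessage(ratings: Map<string, Rating>, messageId: string, rating: Rating) {
  ratings.set(messageId, rating);
}
```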
## API Reference

### @economic/agents
| Export | Description |
| ---------------------- | -------------------------------------------------------------------------- |
| Agent | Abstract DO base for non-chat agents with audit logging and buildLLMParams |
| ChatAgent | Abstract chat DO with compaction, conversations, and custom onChatMessage |
| ChatAgentHarness | Opinionated chat harness with getModel/getSystemPrompt/getTools/getSkills |
| buildLLMParams | Standalone function to build streamText/generateText params |
| Skill | Type: named group of tools with optional guidance |
| AgentToolContext | Type: request body merged with session and logEvent for tool context |
| OnChatMessageOptions | Type: options passed to onChatMessage |
| BuildLLMParamsConfig | Type: config for standalone buildLLMParams |
### @economic/agents-react
React hooks are in a separate package. See @economic/agents-react for full documentation.
| Export | Description |
| ----------------------- | ------------------------------------------------------------------------------------------------------------------------- |
| useAIChatAgent | React hook wrapping useAgent + useAgentChat |
| UseAIChatAgentOptions | Type: options for useAIChatAgent (agent, host, chatId, optional basePath, toolContext, connectionParams, …) |
| AgentConnectionStatus | Type: "connecting" \| "connected" \| "disconnected" \| "unauthorized" |
### CLI
| Command | Description |
| -------------------------------------------- | ---------------------------------------- |
| npx @economic/agents generate skill <name> | Scaffold a new skill with tools |
| npx @economic/agents generate tool <name> | Scaffold a new tool (global or in skill) |
## CLI
The package includes a CLI for scaffolding skills and tools.
### Generate a Skill

```bash
npx @economic/agents generate skill weather
```

This will:

- Prompt for a skill description
- Ask for initial tool names (comma-separated)
- Prompt for each tool's description and whether it needs `AgentToolContext`
- Create the skill file at `src/skills/weather/weather.ts`
- Create tool files at `src/skills/weather/tools/*.ts`
- Auto-register the skill in your agent's `getSkills()` method
### Generate a Tool

```bash
npx @economic/agents generate tool geocode
```

This will:

- Prompt for a tool description
- Ask where to create it (global `src/tools/` or within an existing skill)
- Ask whether it needs `AgentToolContext`
- Create the tool file
- Auto-register the tool in your agent's `getTools()` or the skill's `tools` object
### Auto-registration

The CLI automatically detects agent files by scanning `src/` for classes extending `ChatAgentHarness`, `ChatAgent`, or `Agent` from `@economic/agents`. If one agent is found, it's used automatically. If multiple are found, you'll be prompted to select one.

For `ChatAgentHarness`, the CLI modifies `getSkills()` or `getTools()` methods. For `ChatAgent` or `Agent`, it modifies `buildLLMParams()` calls.

If the CLI detects complex patterns (spread operators, function calls, variables), it will print manual registration instructions instead.
## Development

```bash
npm install
npm test
npm pack
```