# @ariaflowagents/core

A framework for structured conversational AI agents (v0.8.1).

AriaFlow core runtime and agent primitives for building structured, multi-agent conversations.
## Install

```bash
npm install @ariaflowagents/core
```

## Requirements

This package is built on top of the Vercel AI SDK v6.

- Peer dependencies: `ai@^6` and `zod@^3`
- Provider packages (example): `@ai-sdk/openai`
## Exports

This package exports:

- Agents: `Agent`, `LLMAgent`, `FlowAgent`, `TriageAgent`, `CompositeAgent`
- Runtime: `Runtime`, `createRuntime`
- Flows: `FlowManager`, `FlowGraph`, `FlowNode`, `createFlowTransition`, `validateFlowConfig`
- Session: `SessionManager`, `SessionStore`, `MemoryStore`, `RedisStore`
- Tools: `createTool`, `createToolWithFiller`, `createHandoffTool`, `createHttpTool`, `createLoadMemoryTool`
- Memory: `InMemoryMemoryService`, `preloadMemoryContext`, `MemoryService` (interface)
- Context Budget: `DEFAULT_CONTEXT_BUDGET`, `computeMessageHistoryBudget`, `truncateToTokenBudget`, `formatMemoryWithBudget`, `estimateTokenCount`
- Handoff Filters: `handoffFilters`, `composeFilters`, `removeToolHistory`, `keepRecentMessages`, `removeKeys`
- System Injections: `InjectionQueue`, `commonInjections`
- Prompts: `PromptTemplateBuilder`, `PromptBuilder`
- Hooks: `HookRunner`, `loggingHooks`, `createMetricsHooks`
- Guards: `ToolEnforcer`, `StopConditions`
- Utils: `createDateParser`, `parseDate`, `parseDateRange`, `formatDateForSpeech`, `formatTimeForSpeech`
Quick start
import { Runtime, createDateParser, type AgentConfig } from '@ariaflowagents/core';
import { openai } from '@ai-sdk/openai';
// Create date parsing tool for natural language dates
const dateParser = createDateParser();
const supportAgent: AgentConfig = {
id: 'support',
name: 'Support Agent',
systemPrompt: 'You are a helpful support agent that can help book appointments.',
model: openai('gpt-4o-mini') as any,
type: 'llm',
tools: {
parse_date: dateParser,
},
// Production default: non-triage agents cannot hand off unless explicitly configured.
canHandoffTo: [],
};
const runtime = new Runtime({
agents: [supportAgent],
defaultAgentId: 'support',
defaultModel: openai('gpt-4o-mini') as any,
});
const run = async () => {
for await (const part of runtime.stream({ input: 'Hello there' })) {
if (part.type === 'text-delta') {
process.stdout.write(part.text);
}
}
};
run();Stream Callback Defaults (Message-Oriented)
`streamCallback` is non-blocking and pluggable (file/http/db/custom sinks). Current defaults are optimized for persistence pipelines:

- no implicit sink (if `sinks` is omitted, the callback is a no-op)
- message mode by default (`input`, `done`, `error`, `tripwire`, tool events, and transition events)
- token deltas off by default
- final assistant text included as `fullText` on terminal events
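Sinks are pluggable, so a custom sink can be supplied alongside (or instead of) the built-in file sink. The exact sink contract is not spelled out in this README, so the `write(event)` shape in this sketch is an assumption for illustration only:

```ts
// Hypothetical sketch: an in-memory sink that buffers terminal message
// events. The write(event) signature is an assumption, not the
// package's documented sink API — check the exported types.
type StreamEvent = { type: string; fullText?: string };

function createArraySink(buffer: StreamEvent[]) {
  return {
    write(event: StreamEvent): void {
      // Persist only terminal events carrying the final assistant text.
      if (event.type === 'done' && event.fullText) {
        buffer.push(event);
      }
    },
  };
}

const buffer: StreamEvent[] = [];
const sink = createArraySink(buffer);
sink.write({ type: 'text-delta' });           // ignored: deltas are off by default
sink.write({ type: 'done', fullText: 'Hi' }); // captured
console.log(buffer.length); // 1
```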
```ts
import { Runtime, createFileStreamSink } from '@ariaflowagents/core';

const runtime = new Runtime({
  agents: [supportAgent],
  defaultAgentId: 'support',
  streamCallback: {
    sinks: [createFileStreamSink({ directory: './transcripts' })],
    // defaults shown explicitly:
    eventMode: 'message',
    emitToolEvents: true,
    emitTransitionEvents: true,
    emitTextDeltas: false,
    emitFinalText: true,
  },
});
```

## Runtime Durability Defaults
The Runtime applies lightweight durability hardening by default:

- session checkpoints are persisted on `tool-result`, `tool-error`, and `flow-transition`
- handoff state changes are checkpointed immediately after routing updates
- replay-friendly per-turn events are stored in `session.workingMemory.runtimeEventLog` (`user`, `assistant_final`, `tool_call`, `tool_result`, `tool_error`, `transition`)

The tool execution context also includes an `idempotencyKey` in `experimental_context`, which can be forwarded to external systems.
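For example, a tool executor could forward that key to an external API so a retried tool call does not double-book. This is a self-contained sketch; the context shape and the dedup strategy are assumptions, not the package's API:

```ts
// Sketch (assumed context shape): dedupe retried tool calls by the
// runtime-provided idempotency key. Real tool executors are async;
// this one is synchronous for brevity.
type ToolContext = { idempotencyKey?: string };

const seenKeys = new Set<string>();

function bookAppointment(
  input: { date: string },
  ctx: ToolContext,
): { booked: boolean; duplicate: boolean } {
  const key = ctx.idempotencyKey ?? input.date;
  if (seenKeys.has(key)) {
    // A retried call with the same key is a no-op.
    return { booked: true, duplicate: true };
  }
  seenKeys.add(key);
  return { booked: true, duplicate: false };
}

const ctx = { idempotencyKey: 'turn-1-call-1' };
const first = bookAppointment({ date: '2026-01-18' }, ctx);
const retry = bookAppointment({ date: '2026-01-18' }, ctx);
console.log(first.duplicate, retry.duplicate); // false true
```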
For high-volume token streaming, opt in:

```ts
streamCallback: {
  sinks: [createFileStreamSink({ directory: './transcripts' })],
  eventMode: 'all',
  emitTextDeltas: true,
}
```

## Routing & Handoffs (Important Defaults)
AriaFlow supports invisible multi-agent routing via a handoff tool.

- A `TriageAgent` can route to specialists via `handoff`.
- Production default: non-triage agents only get the `handoff` tool if you explicitly set `canHandoffTo`.
Example:
const support: AgentConfig = {
id: 'support',
name: 'Support',
type: 'llm',
systemPrompt: 'General support agent.',
model: openai('gpt-4o-mini') as any,
// This agent may route to booking and billing specialists.
canHandoffTo: ['booking', 'billing'],
};Note: the runtime stream includes internal events like { type: 'handoff', ... }. If you are building a UI transcript, do not render these internal events directly to end users.
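A UI layer can implement that filtering by dropping internal event types before rendering. Only `handoff` is confirmed internal by this README; the other type names in this sketch are assumptions:

```ts
// Sketch: keep only user-visible stream parts when building a UI
// transcript. 'handoff' is documented as internal; the other entries
// in INTERNAL_TYPES are illustrative assumptions.
type StreamPart = { type: string; text?: string };

const INTERNAL_TYPES = new Set(['handoff', 'flow-transition', 'tool-call', 'tool-result']);

function visibleParts(parts: StreamPart[]): StreamPart[] {
  return parts.filter((p) => !INTERNAL_TYPES.has(p.type));
}

const transcript = visibleParts([
  { type: 'text-delta', text: 'Let me check' },
  { type: 'handoff' }, // internal: never shown to end users
  { type: 'text-delta', text: ' that for you.' },
]);
console.log(transcript.length); // 2
```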
## Built-in System Guardrails
The Runtime injects a small set of system-level instructions by default (e.g. “no secrets” and “invisible handoffs”) to reduce prompt-injection leakage and prevent user-visible routing language.
These are defense-in-depth guardrails. You should still treat tool inputs/outputs and webhook callbacks as sensitive, and filter what you expose to end users.
## Guides

Guides live in `packages/ariaflow-core/guides/`:

- `GETTING_STARTED.md`
- `RUNTIME.md`
- `FLOWS.md`
- `TOOLS.md`
- `GUARDRAILS.md`
## Related Packages
AriaFlow provides additional packages for specific deployment targets:
| Package | Description | Use When |
|---------|-------------|----------|
| @ariaflowagents/cf-agent | Cloudflare Durable Objects for Runtime and AgentFlowManager | Deploying to Cloudflare Workers |
| @ariaflowagents/hono-server | Hono router for HTTP/WebSocket serving | Running a Node.js or Bun server |
## Cloudflare Workers

Use `@ariaflowagents/cf-agent` for serverless deployment on Cloudflare:

```bash
npm install @ariaflowagents/cf-agent
```

Runtime (multi-agent):
```ts
import { AriaFlowChatAgent } from '@ariaflowagents/cf-agent';

export class MyChatAgent extends AriaFlowChatAgent {
  async createRuntime() {
    return {
      agents: [supportAgent],
      defaultAgentId: 'support',
    };
  }
}
```

Flow (structured conversation):
```ts
import { AriaFlowFlowAgent } from '@ariaflowagents/cf-agent';

export class ReservationAgent extends AriaFlowFlowAgent {
  async createFlowConfig() {
    return {
      initialNode: 'greeting',
      model: openai('gpt-4o-mini') as any,
      nodes: [...],
    };
  }
}
```

See `@ariaflowagents/cf-agent` for full documentation.
## Hono Server

Use `@ariaflowagents/hono-server` for HTTP/WebSocket hosting:

```bash
npm install @ariaflowagents/hono-server
```

Runtime server:
```ts
import { Hono } from 'hono';
import { serve } from '@hono/node-server';
import { createNodeWebSocket } from '@hono/node-ws';
import { Runtime } from '@ariaflowagents/core';
import { createAriaChatRouter } from '@ariaflowagents/hono-server';

const runtime = new Runtime({ agents: [...] });
const app = new Hono();
app.route('/', createAriaChatRouter({ runtime }));
serve({ fetch: app.fetch, port: 3000 });
```

Flow server:
```ts
import { AgentFlowManager } from '@ariaflowagents/core';
import { createAriaFlowRouter } from '@ariaflowagents/hono-server';

const flowManager = new AgentFlowManager({ nodes: [...] });
app.route('/', createAriaFlowRouter({ flowManager, sessionId: 'my-flow' }));
```

See `@ariaflowagents/hono-server` for full documentation.
## Core Concepts

### Runtime (Multi-Agent)

The `Runtime` class orchestrates multiple agents with seamless handoffs:

- TriageAgent: routes requests to the appropriate specialist
- Agent Handoffs: transfer conversation context between agents
- Session Persistence: maintains conversation state
- Flow Snapshot Memory: persists flow progress in `session.workingMemory.flowStateByAgent`
### FlowManager (Single Flow)

The `FlowManager` class manages structured, node-based conversations:

- Flow Nodes: each node has a specific purpose and tools
- State Transitions: tools drive transitions via `createFlowTransition()`
- Declarative Edge Tools: `FlowManager` auto-injects tools for `transitions[].on` edges
- Flow Hooks: observe lifecycle events (`onFlowStart`, `onTransition`, etc.)
- Context Strategies: control memory management (append, reset, summarize)
- Prompt Composition: global role prompt + node prompt (per-node opt-out via `addGlobalPrompt: false`)
- Transition Contracts: `contract.toolOnly` and `contract.requiresUserTurn` are enforced at runtime
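A declarative edge might look like the node below. This is a sketch only: the `id`/`prompt` fields match the flow example later in this README, but the exact `transitions[].on` edge shape (including the `to` field) is an assumption, not a documented schema:

```ts
// Hypothetical node using declarative edges. With transitions[].on,
// FlowManager auto-injects one transition tool per edge, so no
// hand-written transition tool is needed. Field names are assumed.
const collectDateNode = {
  id: 'collect_date',
  prompt: 'What date would you like to book?',
  transitions: [
    { on: 'date_collected', to: 'collect_time' }, // edge tool auto-injected
    { on: 'user_cancelled', to: 'farewell' },
  ],
};
console.log(collectDateNode.transitions.map((t) => t.to));
```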
## Date Parsing Utilities

Natural language date parsing for conversational agents, powered by Chrono:

```ts
import { createDateParser, parseDate, formatDateForSpeech } from '@ariaflowagents/core';

// As a tool for agents
const dateParser = createDateParser();
const result = await dateParser.execute({ text: 'tomorrow at 3pm' });
// { success: true, startDate: '2026-01-18T15:00:00Z', ... }

// Standalone function
const parsed = parseDate('next Friday');
// { date: Date, text: 'next Friday', confidence: 1.0 }

// TTS-friendly formatting
formatDateForSpeech(new Date('2026-01-18')); // "Saturday, January 18, 2026"
```

Supported expressions:
- Relative: "tomorrow", "today", "yesterday", "in 3 days"
- Weekdays: "next Friday", "this weekend", "Monday morning"
- Specific dates: "March 15th", "December 25th, 2026"
- With time: "tomorrow at 3pm", "next Tuesday at 2:30pm"
## Date Parser in Flows

Use the date parser within flow nodes for booking and scheduling:

```ts
import { createDateParser, createFlowTransition } from '@ariaflowagents/core';
import { tool } from 'ai';
import { z } from 'zod';

const dateParserTool = createDateParser();

const bookingFlow = {
  nodes: [
    {
      id: 'collect_date',
      prompt: 'What date would you like to book?',
      tools: {
        parse_date: tool({
          description: 'Parse the date from user input',
          inputSchema: z.object({
            dateText: z.string().describe('Natural language date'),
          }),
          execute: async ({ dateText }) => {
            const result = await dateParserTool.execute({ text: dateText });
            if (result.success) {
              return createFlowTransition('collect_time', {
                date: result.startDate.split('T')[0],
              });
            }
            return { error: 'Could not parse date' };
          },
        }),
      },
    },
    // ... more nodes
  ],
};
```

## Changelog
### Unreleased — Memory System, Context Budget, Handoff Filters (RFC-008)

#### Long-Term Memory

Cross-session memory for agents. Facts from past conversations are ingested, stored, and automatically preloaded into future sessions.

- `MemoryService` interface — pluggable backend for memory storage (`addSessionToMemory`, `searchMemory`, `deleteMemories`)
- `InMemoryMemoryService` — in-process implementation with keyword-based search and idempotent ingestion
- `preloadMemoryContext()` — retrieves relevant memories and formats them as a `## Context from Past Conversations` block injected into the system prompt
- `createLoadMemoryTool()` — gives agents an on-demand tool to search long-term memory mid-conversation
- Runtime integration — new config flags `memoryService`, `preloadMemory: true`, `memoryIngestion: 'onEnd'`
- Store adapters — `RedisMemoryService` (`@ariaflowagents/redis-store`) and `PostgresMemoryService` (`@ariaflowagents/postgres-store`)
```ts
import { Runtime, InMemoryMemoryService, createLoadMemoryTool } from '@ariaflowagents/core';

const runtime = new Runtime({
  agents: [agent],
  defaultAgentId: 'agent',
  memoryService: new InMemoryMemoryService(),
  preloadMemory: true,      // auto-inject past context each turn
  memoryIngestion: 'onEnd', // ingest memories when the session stream ends
});
```

#### Context Budget
Token budget enforcement across all system prompt components (base prompt, memory, working memory, policy injections, message history).

- `ContextBudgetConfig` — configurable token limits per component (`modelContextWindow`, `responseReserve`, `maxLongTermMemoryTokens`, etc.)
- `computeMessageHistoryBudget()` — computes the residual tokens available for message history after all prompt components
- `truncateToTokenBudget()` / `formatMemoryWithBudget()` — helpers for fitting content within token budgets
- `onBeforeModelCall` hook — receives `tokenBreakdown` and `estimatedTokens` for observability
- Budget telemetry — stored in `session.workingMemory.__ariaContextBudget` for inspection
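The residual computation is, at its core, subtraction: whatever the model window does not reserve for the response or spend on prompt components is left for message history. This standalone sketch mirrors that arithmetic; the field names are borrowed from the config above, but the exact formula is an assumption:

```ts
// Sketch of the residual-budget arithmetic behind
// computeMessageHistoryBudget(). Not the package's implementation —
// an illustration of the idea, clamped at zero so a crowded prompt
// never produces a negative budget.
function messageHistoryBudget(opts: {
  modelContextWindow: number;
  responseReserve: number;
  systemPromptTokens: number;
  memoryTokens: number;
  injectionTokens: number;
}): number {
  const reserved =
    opts.responseReserve +
    opts.systemPromptTokens +
    opts.memoryTokens +
    opts.injectionTokens;
  return Math.max(0, opts.modelContextWindow - reserved);
}

const budget = messageHistoryBudget({
  modelContextWindow: 128_000,
  responseReserve: 4_096,
  systemPromptTokens: 1_200,
  memoryTokens: 800,
  injectionTokens: 150,
});
console.log(budget); // 121754
```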
#### Handoff Filters

Context filtering during agent-to-agent handoffs. Control what conversation history and working memory passes between agents.

- `handoffFilters.removeToolHistory` — strips tool call/result messages
- `handoffFilters.keepRecentMessages(n)` — retains only the last N messages
- `handoffFilters.removeKeys(keys)` — removes specific working memory keys
- `composeFilters(...filters)` — chains multiple filters in sequence
- Applied via `AgentRoute.inputFilter` on triage agent routes
```ts
const triageAgent: TriageAgentConfig = {
  type: 'triage',
  routes: [{
    agentId: 'refunds',
    inputFilter: composeFilters(
      handoffFilters.removeToolHistory,
      handoffFilters.keepRecentMessages(5),
    ),
  }],
};
```

#### ContextManager Coordination

`ContextManagerContext` now accepts `maxTokensOverride` — the Runtime passes the computed message history budget so the ContextManager prunes messages to fit within the token budget remaining after system prompt assembly.
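Pruning to a budget can be sketched as walking the history newest-first and keeping the most recent messages that fit. The ~4 characters/token estimate below is a common rough heuristic, not the package's `estimateTokenCount`, and the function is an illustration rather than the ContextManager's actual algorithm:

```ts
// Sketch: keep the most recent messages whose estimated token count
// fits within maxTokensOverride. Rough heuristic estimator only.
type Message = { role: 'user' | 'assistant'; content: string };

const estimate = (text: string): number => Math.ceil(text.length / 4);

function pruneToBudget(messages: Message[], maxTokensOverride: number): Message[] {
  const kept: Message[] = [];
  let used = 0;
  // Walk newest-first so the tail of the conversation survives.
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimate(messages[i].content);
    if (used + cost > maxTokensOverride) break;
    used += cost;
    kept.unshift(messages[i]);
  }
  return kept;
}

const pruned = pruneToBudget(
  [
    { role: 'user', content: 'a'.repeat(400) },      // ~100 tokens, oldest
    { role: 'assistant', content: 'b'.repeat(400) }, // ~100 tokens
    { role: 'user', content: 'c'.repeat(40) },       // ~10 tokens, newest
  ],
  120,
);
console.log(pruned.length); // 2 — newest two fit, the oldest is dropped
```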
#### Examples

New interactive demos in `examples/agents/memory-demo/`:
| File | Description |
|------|-------------|
| run.ts | Multi-session memory chat — facts recalled across sessions |
| validate.ts | Programmatic end-to-end validation (9 checks) |
| context-budget.ts | Token budget enforcement with onBeforeModelCall observability |
| handoff-filters.ts | Multi-agent handoff with context filtering |
| form-filler-extraction-with-memory.ts | Extraction-based form filler with cross-session patient recall |
| form-filler-with-memory.ts | Questionnaire-based form filler with cross-session memory |
#### Tests

- `test/memory/InMemoryMemoryService.test.js` — unit tests for the in-memory store
- `test/memory/preloadMemory.test.js` — preload formatting and budget compliance
- `test/runtime/ContextBudget.test.js` — budget computation and truncation
- `test/runtime/ContextManagerCoordination.test.js` — budget override integration
- `test/runtime/handoffFilters.test.js` — filter composition and edge cases
- `test/runtime/integration-memory-budget.test.js` — full pipeline integration (9 tests)
