
@ariaflowagents/core

v0.8.1

A framework for structured conversational AI agents
@ariaflowagents/core

AriaFlow core runtime and agent primitives for building structured, multi-agent conversations.

Install

npm install @ariaflowagents/core

Requirements

This package is built on top of the Vercel AI SDK v6.

  • Peer deps: ai@^6 and zod@^3
  • Provider packages (example): @ai-sdk/openai

Exports

This package exports:

  • Agents: Agent, LLMAgent, FlowAgent, TriageAgent, CompositeAgent
  • Runtime: Runtime, createRuntime
  • Flows: FlowManager, FlowGraph, FlowNode, createFlowTransition, validateFlowConfig
  • Session: SessionManager, SessionStore, MemoryStore, RedisStore
  • Tools: createTool, createToolWithFiller, createHandoffTool, createHttpTool, createLoadMemoryTool
  • Memory: InMemoryMemoryService, preloadMemoryContext, MemoryService (interface)
  • Context Budget: DEFAULT_CONTEXT_BUDGET, computeMessageHistoryBudget, truncateToTokenBudget, formatMemoryWithBudget, estimateTokenCount
  • Handoff Filters: handoffFilters, composeFilters, removeToolHistory, keepRecentMessages, removeKeys
  • System Injections: InjectionQueue, commonInjections
  • Prompts: PromptTemplateBuilder, PromptBuilder
  • Hooks: HookRunner, loggingHooks, createMetricsHooks
  • Guards: ToolEnforcer, StopConditions
  • Utils: createDateParser, parseDate, parseDateRange, formatDateForSpeech, formatTimeForSpeech

Quick start

import { Runtime, createDateParser, type AgentConfig } from '@ariaflowagents/core';
import { openai } from '@ai-sdk/openai';

// Create date parsing tool for natural language dates
const dateParser = createDateParser();

const supportAgent: AgentConfig = {
  id: 'support',
  name: 'Support Agent',
  systemPrompt: 'You are a helpful support agent that can help book appointments.',
  model: openai('gpt-4o-mini') as any,
  type: 'llm',
  tools: {
    parse_date: dateParser,
  },
  // Production default: non-triage agents cannot hand off unless explicitly configured.
  canHandoffTo: [],
};

const runtime = new Runtime({
  agents: [supportAgent],
  defaultAgentId: 'support',
  defaultModel: openai('gpt-4o-mini') as any,
});

const run = async () => {
  for await (const part of runtime.stream({ input: 'Hello there' })) {
    if (part.type === 'text-delta') {
      process.stdout.write(part.text);
    }
  }
};

run();

Stream Callback Defaults (Message-Oriented)

streamCallback is non-blocking and pluggable (file/http/db/custom sinks).
Current defaults are optimized for persistence pipelines:

  • no implicit sink (if sinks is omitted, callback is a no-op)
  • message mode by default (input, done, error, tripwire, tool events, and transition events)
  • token deltas off by default
  • final assistant text included as fullText on terminal events

import { Runtime, createFileStreamSink } from '@ariaflowagents/core';

const runtime = new Runtime({
  agents: [supportAgent],
  defaultAgentId: 'support',
  streamCallback: {
    sinks: [createFileStreamSink({ directory: './transcripts' })],
    // defaults shown explicitly:
    eventMode: 'message',
    emitToolEvents: true,
    emitTransitionEvents: true,
    emitTextDeltas: false,
    emitFinalText: true,
  },
});

Runtime Durability Defaults

Runtime now applies lightweight durability hardening by default:

  • session checkpoints are persisted on tool-result, tool-error, and flow-transition
  • handoff state changes are checkpointed immediately after routing updates
  • replay-friendly per-turn events are stored in session.workingMemory.runtimeEventLog (user, assistant_final, tool_call, tool_result, tool_error, transition)

Tool execution context also includes idempotencyKey in experimental_context, which can be forwarded to external systems.
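As a sketch of how the per-turn event log can be consumed for replay, the snippet below filters a log down to user-visible turns. The entry shape (an object with a `type` field matching the event names listed above) is an assumption; consult the session store for the actual schema.

```typescript
// Assumed entry shape for session.workingMemory.runtimeEventLog; the
// type names come from the docs above, the rest is illustrative.
type RuntimeEvent = {
  type: 'user' | 'assistant_final' | 'tool_call' | 'tool_result' | 'tool_error' | 'transition';
  payload?: unknown;
};

// Rebuild a minimal transcript (user + final assistant turns) from the
// log, skipping tool and transition events.
function rebuildTranscript(log: RuntimeEvent[]): RuntimeEvent[] {
  return log.filter((e) => e.type === 'user' || e.type === 'assistant_final');
}

const log: RuntimeEvent[] = [
  { type: 'user' },
  { type: 'tool_call' },
  { type: 'tool_result' },
  { type: 'assistant_final' },
  { type: 'transition' },
];
console.log(rebuildTranscript(log).map((e) => e.type));
// → ['user', 'assistant_final']
```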

For high-volume token streaming, opt in:

streamCallback: {
  sinks: [createFileStreamSink({ directory: './transcripts' })],
  eventMode: 'all',
  emitTextDeltas: true,
}

Routing & Handoffs (Important Defaults)

AriaFlow supports invisible multi-agent routing via a handoff tool.

  • A TriageAgent can route to specialists via handoff.
  • Production default: non-triage agents only get the handoff tool if you explicitly set canHandoffTo.

Example:

const support: AgentConfig = {
  id: 'support',
  name: 'Support',
  type: 'llm',
  systemPrompt: 'General support agent.',
  model: openai('gpt-4o-mini') as any,
  // This agent may route to booking and billing specialists.
  canHandoffTo: ['booking', 'billing'],
};

Note: the runtime stream includes internal events like { type: 'handoff', ... }. If you are building a UI transcript, do not render these internal events directly to end users.
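One way to honor that note is to filter stream parts before rendering. The sketch below keeps only text deltas; only the 'text-delta' and 'handoff' type names are confirmed by this README, so treat the union as a partial, hypothetical model of the stream.

```typescript
// Partial model of runtime stream parts; the real stream carries more
// event types than shown here.
type StreamPart =
  | { type: 'text-delta'; text: string }
  | { type: 'handoff'; from?: string; to?: string };

// Render only user-visible text; drop internal routing events.
function visibleText(parts: StreamPart[]): string {
  return parts
    .filter((p): p is Extract<StreamPart, { type: 'text-delta' }> => p.type === 'text-delta')
    .map((p) => p.text)
    .join('');
}

const parts: StreamPart[] = [
  { type: 'text-delta', text: 'Hello! ' },
  { type: 'handoff', from: 'triage', to: 'billing' },
  { type: 'text-delta', text: 'How can I help with billing?' },
];
console.log(visibleText(parts));
// → 'Hello! How can I help with billing?'
```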

Built-in System Guardrails

The Runtime injects a small set of system-level instructions by default (e.g. “no secrets” and “invisible handoffs”) to reduce prompt-injection leakage and prevent user-visible routing language.

These are defense-in-depth guardrails. You should still treat tool inputs/outputs and webhook callbacks as sensitive, and filter what you expose to end users.

Guides

Guides live in packages/ariaflow-core/guides/:

  • GETTING_STARTED.md
  • RUNTIME.md
  • FLOWS.md
  • TOOLS.md
  • GUARDRAILS.md

Related Packages

AriaFlow provides additional packages for specific deployment targets:

| Package | Description | Use When |
|---------|-------------|----------|
| @ariaflowagents/cf-agent | Cloudflare Durable Objects for Runtime and AgentFlowManager | Deploying to Cloudflare Workers |
| @ariaflowagents/hono-server | Hono router for HTTP/WebSocket serving | Running a Node.js or Bun server |

Cloudflare Workers

Use @ariaflowagents/cf-agent for serverless deployment on Cloudflare:

npm install @ariaflowagents/cf-agent

Runtime (multi-agent):

import { AriaFlowChatAgent } from '@ariaflowagents/cf-agent';

export class MyChatAgent extends AriaFlowChatAgent {
  async createRuntime() {
    return {
      agents: [supportAgent],
      defaultAgentId: 'support',
    };
  }
}

Flow (structured conversation):

import { AriaFlowFlowAgent } from '@ariaflowagents/cf-agent';

export class ReservationAgent extends AriaFlowFlowAgent {
  async createFlowConfig() {
    return {
      initialNode: 'greeting',
      model: openai('gpt-4o-mini') as object,
      nodes: [...],
    };
  }
}

See @ariaflowagents/cf-agent for full documentation.

Hono Server

Use @ariaflowagents/hono-server for HTTP/WebSocket hosting:

npm install @ariaflowagents/hono-server

Runtime server:

import { Hono } from 'hono';
import { serve } from '@hono/node-server';
import { createNodeWebSocket } from '@hono/node-ws';
import { Runtime } from '@ariaflowagents/core';
import { createAriaChatRouter } from '@ariaflowagents/hono-server';

const runtime = new Runtime({ agents: [...] });
const app = new Hono();
app.route('/', createAriaChatRouter({ runtime }));

serve({ fetch: app.fetch, port: 3000 });

Flow server:

import { Hono } from 'hono';
import { AgentFlowManager } from '@ariaflowagents/core';
import { createAriaFlowRouter } from '@ariaflowagents/hono-server';

const app = new Hono();
const flowManager = new AgentFlowManager({ nodes: [...] });
app.route('/', createAriaFlowRouter({ flowManager, sessionId: 'my-flow' }));

See @ariaflowagents/hono-server for full documentation.

Core Concepts

Runtime (Multi-Agent)

The Runtime class orchestrates multiple agents with seamless handoffs:

  • TriageAgent: Routes requests to the appropriate specialist
  • Agent Handoffs: Transfer conversation context between agents
  • Session Persistence: Maintains conversation state
  • Flow Snapshot Memory: Persists flow progress in session.workingMemory.flowStateByAgent

FlowManager (Single Flow)

The FlowManager class manages structured, node-based conversations:

  • Flow Nodes: Each node has a specific purpose and tools
  • State Transitions: Tools drive transitions via createFlowTransition()
  • Declarative Edge Tools: FlowManager auto-injects tools for transitions[].on edges
  • Flow Hooks: Observe lifecycle events (onFlowStart, onTransition, etc.)
  • Context Strategies: Control memory management (append, reset, summarize)
  • Prompt Composition: Global role prompt + node prompt (per-node opt-out via addGlobalPrompt: false)
  • Transition Contracts: contract.toolOnly and contract.requiresUserTurn are enforced at runtime
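To make "declarative edge tools" concrete, here is a sketch of the idea: for each transitions[].on edge on a node, a transition tool is injected automatically. The edge shape ({ on, to }) and the go_to_<target> naming scheme are illustrative assumptions, not the library's actual convention.

```typescript
// Hypothetical node shape; only the existence of transitions[].on edges
// is stated in the docs above.
type FlowNodeConfig = {
  id: string;
  transitions?: { on: string; to: string }[];
};

// Derive one tool name per declared edge (illustrative naming only).
function edgeToolNames(node: FlowNodeConfig): string[] {
  return (node.transitions ?? []).map((t) => `go_to_${t.to}`);
}

const collectDate: FlowNodeConfig = {
  id: 'collect_date',
  transitions: [
    { on: 'date_confirmed', to: 'collect_time' },
    { on: 'user_cancelled', to: 'farewell' },
  ],
};
console.log(edgeToolNames(collectDate));
// → ['go_to_collect_time', 'go_to_farewell']
```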

Date Parsing Utilities

Natural language date parsing for conversational agents using Chrono:

import { createDateParser, parseDate, formatDateForSpeech } from '@ariaflowagents/core';

// As a tool for agents
const dateParser = createDateParser();
const result = await dateParser.execute({ text: 'tomorrow at 3pm' });
// { success: true, startDate: '2026-01-18T15:00:00Z', ... }

// Standalone function
const parsed = parseDate('next Friday');
// { date: Date, text: 'next Friday', confidence: 1.0 }

// TTS-friendly formatting
formatDateForSpeech(new Date('2026-01-18')); // "Saturday, January 18, 2026"

Supported expressions:

  • Relative: "tomorrow", "today", "yesterday", "in 3 days"
  • Weekdays: "next Friday", "this weekend", "Monday morning"
  • Specific dates: "March 15th", "December 25th, 2026"
  • With time: "tomorrow at 3pm", "next Tuesday at 2:30pm"

Date Parser in Flows

Use the date parser within flow nodes for booking and scheduling:

import { createDateParser, createFlowTransition } from '@ariaflowagents/core';
import { tool } from 'ai';
import { z } from 'zod';

const dateParserTool = createDateParser();

const bookingFlow = {
  nodes: [
    {
      id: 'collect_date',
      prompt: 'What date would you like to book?',
      tools: {
        parse_date: tool({
          description: 'Parse the date from user input',
          inputSchema: z.object({
            dateText: z.string().describe('Natural language date'),
          }),
          execute: async ({ dateText }) => {
            const result = await dateParserTool.execute({ text: dateText });
            if (result.success) {
              return createFlowTransition('collect_time', { 
                date: result.startDate.split('T')[0] 
              });
            }
            return { error: 'Could not parse date' };
          },
        }),
      },
    },
    // ... more nodes
  ],
};

Changelog

Unreleased — Memory System, Context Budget, Handoff Filters (RFC-008)

Long-Term Memory

Cross-session memory for agents. Facts from past conversations are ingested, stored, and automatically preloaded into future sessions.

  • MemoryService interface — pluggable backend for memory storage (addSessionToMemory, searchMemory, deleteMemories)
  • InMemoryMemoryService — in-process implementation with keyword-based search and idempotent ingestion
  • preloadMemoryContext() — retrieves relevant memories and formats them as a ## Context from Past Conversations block injected into the system prompt
  • createLoadMemoryTool() — gives agents an on-demand tool to search long-term memory mid-conversation
  • Runtime integration — new config flags memoryService, preloadMemory: true, memoryIngestion: 'onEnd'
  • Store adapters — RedisMemoryService (@ariaflowagents/redis-store) and PostgresMemoryService (@ariaflowagents/postgres-store)

import { Runtime, InMemoryMemoryService, createLoadMemoryTool } from '@ariaflowagents/core';

const runtime = new Runtime({
  agents: [agent],
  defaultAgentId: 'agent',
  memoryService: new InMemoryMemoryService(),
  preloadMemory: true,         // auto-inject past context each turn
  memoryIngestion: 'onEnd',    // ingest memories when session stream ends
});

Context Budget

Token budget enforcement across all system prompt components (base prompt, memory, working memory, policy injections, message history).

  • ContextBudgetConfig — configurable token limits per component (modelContextWindow, responseReserve, maxLongTermMemoryTokens, etc.)
  • computeMessageHistoryBudget() — computes residual tokens available for message history after all prompt components
  • truncateToTokenBudget() / formatMemoryWithBudget() — helpers for fitting content within token budgets
  • onBeforeModelCall hook — receives tokenBreakdown and estimatedTokens for observability
  • Budget telemetry — stored in session.workingMemory.__ariaContextBudget for inspection
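The arithmetic behind this can be sketched as: the residual budget for message history is the model context window minus the response reserve and every assembled prompt component. Field names below mirror ContextBudgetConfig, but the actual computeMessageHistoryBudget() signature may differ.

```typescript
// Assumed inputs, named after the ContextBudgetConfig fields above.
type BudgetInput = {
  modelContextWindow: number; // total tokens the model accepts
  responseReserve: number;    // tokens held back for the reply
  systemPromptTokens: number; // base prompt + policy injections
  memoryTokens: number;       // long-term + working memory
};

// Residual tokens available for message history, floored at zero.
function messageHistoryBudget(b: BudgetInput): number {
  const residual =
    b.modelContextWindow - b.responseReserve - b.systemPromptTokens - b.memoryTokens;
  return Math.max(0, residual);
}

console.log(messageHistoryBudget({
  modelContextWindow: 128_000,
  responseReserve: 4_000,
  systemPromptTokens: 2_500,
  memoryTokens: 1_500,
}));
// → 120000
```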

Handoff Filters

Context filtering during agent-to-agent handoffs. Control what conversation history and working memory passes between agents.

  • handoffFilters.removeToolHistory — strips tool call/result messages
  • handoffFilters.keepRecentMessages(n) — retains only the last N messages
  • handoffFilters.removeKeys(keys) — removes specific working memory keys
  • composeFilters(...filters) — chains multiple filters in sequence
  • Applied via AgentRoute.inputFilter on triage agent routes

import { composeFilters, handoffFilters, type TriageAgentConfig } from '@ariaflowagents/core';

const triageAgent: TriageAgentConfig = {
  type: 'triage',
  routes: [{
    agentId: 'refunds',
    inputFilter: composeFilters(
      handoffFilters.removeToolHistory,
      handoffFilters.keepRecentMessages(5),
    ),
  }],
};

ContextManager Coordination

  • ContextManagerContext now accepts maxTokensOverride — the Runtime passes the computed message history budget so the ContextManager prunes messages to fit within the remaining token budget after system prompt assembly
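A minimal sketch of the pruning this coordination enables: given the message-history budget computed by the Runtime, drop the oldest messages until the remainder fits. The chars/4 token estimate is a common rough heuristic, not the library's estimator.

```typescript
type Message = { role: 'user' | 'assistant'; content: string };

// Rough heuristic: ~4 characters per token.
const estimateTokens = (m: Message) => Math.ceil(m.content.length / 4);

// Drop the oldest messages first until the history fits the budget,
// always keeping at least the most recent message.
function pruneToBudget(messages: Message[], maxTokens: number): Message[] {
  const kept = [...messages];
  let total = kept.reduce((sum, m) => sum + estimateTokens(m), 0);
  while (kept.length > 1 && total > maxTokens) {
    total -= estimateTokens(kept.shift()!);
  }
  return kept;
}

const history: Message[] = [
  { role: 'user', content: 'a'.repeat(400) },      // ~100 tokens
  { role: 'assistant', content: 'b'.repeat(400) }, // ~100 tokens
  { role: 'user', content: 'c'.repeat(40) },       // ~10 tokens
];
console.log(pruneToBudget(history, 120).length);
// → 2
```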

Examples

New interactive demos in examples/agents/memory-demo/:

| File | Description |
|------|-------------|
| run.ts | Multi-session memory chat — facts recalled across sessions |
| validate.ts | Programmatic end-to-end validation (9 checks) |
| context-budget.ts | Token budget enforcement with onBeforeModelCall observability |
| handoff-filters.ts | Multi-agent handoff with context filtering |
| form-filler-extraction-with-memory.ts | Extraction-based form filler with cross-session patient recall |
| form-filler-with-memory.ts | Questionnaire-based form filler with cross-session memory |

Tests

  • test/memory/InMemoryMemoryService.test.js — unit tests for in-memory store
  • test/memory/preloadMemory.test.js — preload formatting and budget compliance
  • test/runtime/ContextBudget.test.js — budget computation and truncation
  • test/runtime/ContextManagerCoordination.test.js — budget override integration
  • test/runtime/handoffFilters.test.js — filter composition and edge cases
  • test/runtime/integration-memory-budget.test.js — full pipeline integration (9 tests)