axl
Core SDK for orchestrating agentic systems in TypeScript.
Installation
npm install @axlsdk/axl zod
API
tool(config)
Define a tool with Zod input validation:
import { tool } from '@axlsdk/axl';
import { z } from 'zod';
const calculator = tool({
  name: 'calculator',
  description: 'Evaluate arithmetic expressions',
  input: z.object({ expression: z.string() }),
  handler: ({ expression }) => {
    const result = new Function(`return (${expression})`)();
    return { result };
  },
  retry: { attempts: 3, backoff: 'exponential' },
  sensitive: false,
});
agent(config)
Define an agent with model, system prompt, tools, and handoffs:
import { agent } from '@axlsdk/axl';
const researcher = agent({
  name: 'researcher',
  model: 'openai:gpt-4o',
  system: 'You are a research assistant.',
  tools: [calculator],
  maxTurns: 10,
  timeout: '30s',
  temperature: 0.7,
  version: 'v1.2',
});
Dynamic model and system prompt selection:
const dynamicAgent = agent({
  model: (ctx) => ctx.metadata?.tier === 'premium'
    ? 'openai:gpt-4o'
    : 'openai:gpt-4.1-nano',
  system: (ctx) => `You are a ${ctx.metadata?.role ?? 'general'} assistant.`,
});
workflow(config)
Define a named workflow with typed input/output:
import { workflow } from '@axlsdk/axl';
import { z } from 'zod';
const myWorkflow = workflow({
  name: 'my-workflow',
  input: z.object({ query: z.string() }),
  output: z.object({ answer: z.string() }),
  handler: async (ctx) => {
    const answer = await ctx.ask(researcher, ctx.input.query);
    return { answer };
  },
});
AxlRuntime
Register and execute workflows:
import { AxlRuntime } from '@axlsdk/axl';
const runtime = new AxlRuntime();
runtime.register(myWorkflow);
// Execute
const result = await runtime.execute('my-workflow', { query: 'Hello' });
// Stream
const stream = runtime.stream('my-workflow', { query: 'Hello' });
for await (const event of stream) {
  if (event.type === 'token') process.stdout.write(event.data);
}
// Sessions
const session = runtime.session('user-123');
await session.send('my-workflow', { query: 'Hello' });
await session.send('my-workflow', { query: 'Follow-up' });
const history = await session.history();
Context Primitives
All available on ctx inside workflow handlers:
// Invoke an agent
const answer = await ctx.ask(agent, 'prompt', { schema, retries });
// Run N concurrent tasks
const results = await ctx.spawn(3, async (i) => ctx.ask(agent, prompts[i]));
// Consensus vote
const winner = ctx.vote(results, { strategy: 'majority', key: 'answer' });
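For intuition, a 'majority' strategy over a key can be sketched in plain TypeScript. This is an illustrative sketch only, not the SDK's implementation of ctx.vote():

```typescript
// Count occurrences of each value at `key` and return the most frequent one.
// Illustrative sketch of a 'majority' vote strategy; not the SDK's code.
function majorityVote<T extends Record<string, unknown>>(
  results: T[],
  key: keyof T,
): T[keyof T] {
  const counts = new Map<unknown, number>();
  for (const r of results) {
    counts.set(r[key], (counts.get(r[key]) ?? 0) + 1);
  }
  let winner: unknown;
  let best = 0;
  for (const [value, count] of counts) {
    if (count > best) {
      best = count;
      winner = value;
    }
  }
  return winner as T[keyof T];
}

// Two of three agents agree on '42'.
majorityVote([{ answer: '42' }, { answer: '41' }, { answer: '42' }], 'answer'); // '42'
```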
// Self-correcting validation
const valid = await ctx.verify(
  async (lastOutput, error) => ctx.ask(agent, prompt),
  schema,
  { retries: 3, fallback: defaultValue },
);
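The shape of such a self-correcting loop can be sketched in standalone TypeScript: call a producer, check its output, and on failure feed the last output and the error back into the next attempt. The function and parameter names here are illustrative, not the SDK's implementation:

```typescript
// Self-correcting validation loop (illustrative sketch, not the SDK's code):
// retry the producer, passing it the previous output and error so it can
// correct itself; fall back after retries are exhausted.
async function verifyLoop<T>(
  produce: (lastOutput: T | undefined, error: string | undefined) => Promise<T>,
  check: (value: T) => string | undefined, // error message, or undefined when valid
  opts: { retries: number; fallback: T },
): Promise<T> {
  let last: T | undefined;
  let error: string | undefined;
  for (let attempt = 0; attempt <= opts.retries; attempt++) {
    last = await produce(last, error);
    error = check(last);
    if (error === undefined) return last;
  }
  return opts.fallback; // give up after retries are exhausted
}
```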
// Cost control
const budgeted = await ctx.budget({ cost: '$1.00', onExceed: 'hard_stop' }, async () => {
  return ctx.ask(agent, prompt);
});
// First to complete
const fastest = await ctx.race([
  () => ctx.ask(agentA, prompt),
  () => ctx.ask(agentB, prompt),
], { schema });
// Concurrent independent tasks
const [a, b] = await ctx.parallel([
  () => ctx.ask(agentA, promptA),
  () => ctx.ask(agentB, promptB),
]);
// Map with bounded concurrency
const mapped = await ctx.map(items, async (item) => ctx.ask(agent, item), {
  concurrency: 5,
  quorum: 3,
});
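Bounded concurrency means at most N tasks are in flight at once. A minimal standalone sketch of the idea (illustrative only, not the SDK's ctx.map()):

```typescript
// Map over items with at most `concurrency` tasks running at once.
// Illustrative sketch of bounded concurrency; not the SDK's implementation.
async function boundedMap<T, R>(
  items: T[],
  fn: (item: T) => Promise<R>,
  concurrency: number,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function worker(): Promise<void> {
    // Each worker repeatedly claims the next unprocessed index.
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }
  const workers = Array.from({ length: Math.min(concurrency, items.length) }, worker);
  await Promise.all(workers);
  return results;
}
```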
// Human-in-the-loop
const decision = await ctx.awaitHuman({
  channel: 'slack',
  prompt: 'Approve this action?',
});
// Durable checkpoint
const value = await ctx.checkpoint(async () => expensiveOperation());
OpenTelemetry Observability
Every ctx.* primitive automatically emits a span, with cost-per-span attribution. Install @opentelemetry/api as an optional peer dependency.
import { defineConfig, AxlRuntime } from '@axlsdk/axl';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
const config = defineConfig({
  telemetry: {
    enabled: true,
    serviceName: 'my-app',
    exporter: new OTLPTraceExporter({ url: 'http://localhost:4318/v1/traces' }),
  },
});
const runtime = new AxlRuntime(config);
runtime.initializeTelemetry();
Span model: axl.workflow.execute > axl.agent.ask > axl.tool.call. Also: axl.ctx.spawn, axl.ctx.race, axl.ctx.vote, axl.ctx.budget, axl.ctx.checkpoint, axl.ctx.awaitHuman. Each span includes relevant attributes (cost, duration, token counts, etc.).
When disabled (default), NoopSpanManager provides zero overhead.
import { createSpanManager, NoopSpanManager } from '@axlsdk/axl';
Memory Primitives
Working memory backed by the existing StateStore interface:
// Store and retrieve structured state
await ctx.remember('user-preferences', { theme: 'dark', lang: 'en' });
const prefs = await ctx.recall('user-preferences');
await ctx.forget('user-preferences');
// Scoped to session (default) or global
await ctx.remember('user-profile', data, { scope: 'global' });
const profile = await ctx.recall('user-profile', { scope: 'global' });
Semantic recall requires a vector store and embedder configured on the runtime:
import { AxlRuntime, InMemoryVectorStore, OpenAIEmbedder } from '@axlsdk/axl';
const runtime = new AxlRuntime({
  memory: {
    vector: new InMemoryVectorStore(),
    embedder: new OpenAIEmbedder({ model: 'text-embedding-3-small' }),
  },
});
// In a workflow:
const relevant = await ctx.recall('knowledge-base', {
  query: 'refund policy',
  topK: 5,
});
Vector store implementations: InMemoryVectorStore (testing), SqliteVectorStore (production, requires better-sqlite3).
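For intuition, a toy vector search over embedded documents can be sketched with cosine similarity and a top-k sort. This is an illustrative sketch only, not the SDK's InMemoryVectorStore (which pairs with the configured embedder):

```typescript
// Toy in-memory vector search: rank documents by cosine similarity to a
// query vector and keep the top k. Illustrative only; not the SDK's store.
type Doc = { id: string; vector: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function topK(docs: Doc[], query: number[], k: number): Doc[] {
  return [...docs]
    .sort((x, y) => cosine(y.vector, query) - cosine(x.vector, query))
    .slice(0, k);
}
```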
Agent Guardrails
Input and output validation at the agent boundary:
const safe = agent({
  model: 'openai:gpt-4o',
  system: 'You are a helpful assistant.',
  guardrails: {
    input: async (prompt, ctx) => {
      if (containsPII(prompt)) return { block: true, reason: 'PII detected' };
      return { block: false };
    },
    output: async (response, ctx) => {
      if (isOffTopic(response)) return { block: true, reason: 'Off-topic response' };
      return { block: false };
    },
    onBlock: 'retry', // 'retry' | 'throw' | (reason, ctx) => fallbackResponse
    maxRetries: 2,
  },
});
When onBlock is 'retry', the LLM sees the block reason and self-corrects (same pattern as ctx.verify()). Throws GuardrailError if retries are exhausted or onBlock is 'throw'.
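The retry behavior described above can be sketched as a standalone loop: when an output check blocks, the block reason is fed back into the prompt so the next attempt can self-correct. Names and shapes here are illustrative, not the SDK's implementation:

```typescript
// Guardrail retry loop (illustrative sketch, not the SDK's code): on a
// blocked response, append the block reason to the prompt and retry.
type GuardrailResult = { block: boolean; reason?: string };

async function askWithGuardrail(
  ask: (prompt: string) => Promise<string>,
  outputCheck: (response: string) => GuardrailResult,
  prompt: string,
  maxRetries: number,
): Promise<string> {
  let current = prompt;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await ask(current);
    const verdict = outputCheck(response);
    if (!verdict.block) return response;
    // Feed the block reason back so the model can self-correct.
    current = `${prompt}\n\nYour previous answer was blocked: ${verdict.reason}`;
  }
  throw new Error('GuardrailError: retries exhausted');
}
```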
Session Options
const session = runtime.session('user-123', {
  history: {
    maxMessages: 100, // Trim oldest messages when exceeded
    summarize: true,  // Auto-summarize trimmed messages
  },
  persist: true, // Save to StateStore (default: true)
});
SessionOptions type:
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| history.maxMessages | number | unlimited | Max messages to retain |
| history.summarize | boolean | false | Summarize trimmed messages |
| persist | boolean | true | Persist history to StateStore |
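The trimming behavior can be sketched in standalone TypeScript: drop the oldest messages past the cap and, when summarization is on, collapse them into a single summary entry. This is an illustrative sketch under those assumptions, not the SDK's session code (a real implementation would summarize with an LLM):

```typescript
// Trim a message history to `maxMessages`, optionally collapsing the
// dropped messages into one summary entry. Illustrative sketch only.
type Message = { role: string; content: string };

function trimHistory(
  messages: Message[],
  opts: { maxMessages: number; summarize?: boolean },
): Message[] {
  if (messages.length <= opts.maxMessages) return messages;
  if (opts.summarize) {
    // Reserve one slot for the summary entry so the cap still holds.
    const keep = messages.slice(messages.length - (opts.maxMessages - 1));
    const dropped = messages.length - keep.length;
    return [{ role: 'system', content: `[summary of ${dropped} earlier messages]` }, ...keep];
  }
  return messages.slice(messages.length - opts.maxMessages);
}
```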
Error Hierarchy
import {
  AxlError,            // Base class
  VerifyError,         // Schema validation failed after retries
  QuorumNotMet,        // Quorum threshold not reached
  NoConsensus,         // Vote could not reach consensus
  TimeoutError,        // Operation exceeded timeout
  MaxTurnsError,       // Agent exceeded max tool-calling turns
  BudgetExceededError, // Budget limit exceeded
  GuardrailError,      // Guardrail blocked input or output
  ToolDenied,          // Agent tried to call an unauthorized tool
} from '@axlsdk/axl';
State Stores
import { MemoryStore, SQLiteStore, RedisStore } from '@axlsdk/axl';
// In-memory (default)
const runtime = new AxlRuntime();
// SQLite (requires better-sqlite3)
const runtime = new AxlRuntime({
  state: { store: 'sqlite', sqlite: { path: './data/axl.db' } },
});
// Redis (requires ioredis)
const runtime = new AxlRuntime({
  state: { store: 'redis', redis: { url: 'redis://localhost:6379' } },
});
Provider URIs
Four built-in providers are supported:
# OpenAI — Chat Completions API
openai:gpt-4o # Flagship multimodal
openai:gpt-4o-mini # Fast and affordable
openai:gpt-4.1 # GPT-4.1
openai:gpt-4.1-mini # GPT-4.1 small
openai:gpt-4.1-nano # GPT-4.1 cheapest
openai:gpt-5 # GPT-5
openai:gpt-5-mini # GPT-5 small
openai:gpt-5-nano # GPT-5 cheapest
openai:gpt-5.1 # GPT-5.1
openai:gpt-5.2 # GPT-5.2
openai:o1 # Reasoning
openai:o1-mini # Reasoning (small)
openai:o1-pro # Reasoning (pro)
openai:o3 # Reasoning
openai:o3-mini # Reasoning (small)
openai:o3-pro # Reasoning (pro)
openai:o4-mini # Reasoning (small)
openai:gpt-4-turbo # Legacy
openai:gpt-4 # Legacy
openai:gpt-3.5-turbo # Legacy
# OpenAI — Responses API (same models, better caching, native reasoning)
openai-responses:gpt-4o
openai-responses:o3
# Anthropic
anthropic:claude-opus-4-6 # Most capable
anthropic:claude-sonnet-4-5 # Balanced
anthropic:claude-haiku-4-5 # Fast and affordable
anthropic:claude-sonnet-4 # Previous gen
anthropic:claude-opus-4 # Previous gen
anthropic:claude-3-7-sonnet # Legacy
anthropic:claude-3-5-sonnet # Legacy
anthropic:claude-3-5-haiku # Legacy
anthropic:claude-3-opus # Legacy
anthropic:claude-3-sonnet # Legacy
anthropic:claude-3-haiku # Legacy
# Google Gemini
google:gemini-2.5-pro # Most capable
google:gemini-2.5-flash # Fast
google:gemini-2.5-flash-lite # Cheapest 2.5
google:gemini-2.0-flash # Previous gen
google:gemini-2.0-flash-lite # Previous gen (lite)
google:gemini-3-pro-preview # Next gen (preview)
google:gemini-3-flash-preview # Next gen fast (preview)
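All of the URIs above follow a provider:model scheme, where the model name itself may contain dots and dashes, so only the first ':' separates the two parts. A parsing sketch for illustration (the function name is assumed, not the SDK's resolver):

```typescript
// Split a 'provider:model' URI on the first ':' only, since model names
// like 'gpt-4.1-nano' contain dots and dashes. Illustrative sketch only.
function parseModelUri(uri: string): { provider: string; model: string } {
  const idx = uri.indexOf(':');
  if (idx === -1) throw new Error(`Invalid model URI: ${uri}`);
  return { provider: uri.slice(0, idx), model: uri.slice(idx + 1) };
}

parseModelUri('openai:gpt-4o'); // { provider: 'openai', model: 'gpt-4o' }
parseModelUri('anthropic:claude-sonnet-4-5'); // { provider: 'anthropic', model: 'claude-sonnet-4-5' }
```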