@agentspan-ai/sdk v0.0.13
@agentspan-ai/sdk
TypeScript SDK for building and running AI agents on Agentspan. Define agents and tools in TypeScript, run them durably on the platform with crash recovery, distributed workers, and human-in-the-loop approval.
Quick Start
```sh
npm install @agentspan-ai/sdk zod
```

```ts
import { Agent, AgentRuntime, tool } from '@agentspan-ai/sdk';
import { z } from 'zod';

const getWeather = tool(
  async ({ city }: { city: string }) => ({ city, temp: 72, condition: 'Sunny' }),
  {
    description: 'Get current weather for a city.',
    inputSchema: z.object({ city: z.string() }),
  },
);

const agent = new Agent({
  name: 'weather_agent',
  model: 'openai/gpt-4o',
  instructions: 'You are a helpful weather assistant.',
  tools: [getWeather],
});

const runtime = new AgentRuntime();
const result = await runtime.run(agent, "What's the weather in SF?");
result.printResult();
await runtime.shutdown();
```

Already using Vercel AI SDK?
One import change. Your code stays identical.
```diff
-import { generateText } from 'ai';
+import { generateText } from '@agentspan-ai/sdk/vercel-ai';
```

That's it. `generateText` and `streamText` are intercepted, compiled to an agent execution, and run on Agentspan. Tools, model, prompt, result shape -- all unchanged.
When you need Agentspan-specific features (guardrails, termination, multi-agent handoff), switch to the Agent API. See examples/vercel-ai/README.md for the full before/after.
Already using another framework?
Pass your existing agent objects directly to runtime.run():
```ts
import { Agent } from '@openai/agents';
import { AgentRuntime } from '@agentspan-ai/sdk';

const agent = new Agent({
  name: 'helper', model: 'gpt-4o-mini',
  instructions: 'You are helpful.',
  tools: [getWeather],
});

// Agent format auto-detected
const runtime = new AgentRuntime();
await runtime.run(agent, 'Weather in SF?');
```

```ts
import { LlmAgent } from '@google/adk';
import { AgentRuntime } from '@agentspan-ai/sdk';

const agent = new LlmAgent({
  name: 'helper', model: 'gemini-2.5-flash',
  instruction: 'You are helpful.',
  tools: [getWeather],
});

// Agent format auto-detected
const runtime = new AgentRuntime();
await runtime.run(agent, 'Weather in Tokyo?');
```

```ts
import { createReactAgent } from '@langchain/langgraph/prebuilt';
import { ChatOpenAI } from '@langchain/openai';
import { AgentRuntime } from '@agentspan-ai/sdk';

const graph = createReactAgent({
  llm: new ChatOpenAI({ model: 'gpt-4o-mini' }),
  tools: [searchTool],
});

// Add metadata for extraction
(graph as any)._agentspan = {
  model: 'openai/gpt-4o-mini',
  tools: [searchTool],
  framework: 'langgraph',
};

const runtime = new AgentRuntime();
await runtime.run(graph, 'Search quantum');
```

See per-framework READMEs for complete before/after guides: Vercel AI | OpenAI | Google ADK | LangGraph | LangChain
Features
Streaming
```ts
const stream = await runtime.stream(agent, prompt);
for await (const event of stream) {
  switch (event.type) {
    case 'thinking': console.log(event.content); break;
    case 'tool_call': console.log(event.toolName, event.args); break;
    case 'tool_result': console.log(event.toolName, event.result); break;
    case 'waiting': await stream.approve(); break;
    case 'done': console.log(event.output); break;
  }
}
```

Multi-Agent Strategies
```ts
// Sequential pipeline
const pipeline = researcher.pipe(writer).pipe(editor);

// Parallel (scatter-gather)
const panel = new Agent({ name: 'panel', agents: [analyst1, analyst2], strategy: 'parallel' });

// Handoff (LLM decides which specialist to route to)
const team = new Agent({ name: 'team', agents: [coder, reviewer], strategy: 'handoff' });

// Also: router, round-robin, swarm, manual
```

Guardrails
```ts
import { guardrail, RegexGuardrail, LLMGuardrail } from '@agentspan-ai/sdk';

const piiBlocker = new RegexGuardrail({
  name: 'pii_blocker',
  patterns: ['\\b\\d{3}-\\d{2}-\\d{4}\\b'],
  mode: 'block', onFail: 'raise',
});

const customCheck = guardrail(
  async (content: string) => {
    if (content.includes('secret')) return { passed: false, message: 'Sensitive content' };
    return { passed: true };
  },
  { name: 'custom_check', position: 'output', onFail: 'retry' },
);

const agent = new Agent({ name: 'safe', guardrails: [piiBlocker, customCheck], ... });
```

Human-in-the-Loop
```ts
const handle = await runtime.start(agent, prompt);

// Agent pauses when it hits a tool with approvalRequired: true
const status = await handle.getStatus();
if (status.isWaiting) {
  await handle.approve(); // or handle.reject('reason')
}

const result = await handle.wait();
```

Termination Conditions
```ts
import { TextMention, MaxMessage } from '@agentspan-ai/sdk';

const agent = new Agent({
  name: 'analyst',
  termination: new TextMention('DONE').or(new MaxMessage(10)),
  ...
});
```

Testing
```ts
import { mockRun, expectResult } from '@agentspan-ai/sdk/testing';

const result = await mockRun(agent, 'Write an article', {
  mockTools: { search: async () => ({ results: ['paper1'] }) },
});

expectResult(result)
  .toBeCompleted()
  .toContainOutput('article')
  .toHaveUsedTool('search');
```

Configuration
| Variable | Default | Description |
|----------|---------|-------------|
| AGENTSPAN_SERVER_URL | http://localhost:6767/api | Server API URL |
| AGENTSPAN_API_KEY | -- | Bearer token |
| OPENAI_API_KEY | -- | For OpenAI models |
All config can also be passed to the AgentRuntime constructor.
Examples
157 examples covering every feature:
| Directory | Count | Description |
|-----------|-------|-------------|
| examples/ | 107 | Native Agentspan agents |
| examples/vercel-ai/ | 10 | Vercel AI SDK integration |
| examples/langgraph/ | 10 | LangGraph integration |
| examples/langchain/ | 10 | LangChain integration |
| examples/openai/ | 10 | OpenAI Agents SDK integration |
| examples/adk/ | 10 | Google ADK integration |
```sh
npx tsx examples/01-basic-agent.ts
npx tsx examples/vercel-ai/01-basic-agent.ts
npx tsx examples/langgraph/02-react-with-tools.ts
```

Contributing
We welcome contributions! Please open an issue or PR on GitHub.
```sh
git clone https://github.com/agentspan-ai/agentspan.git
cd agentspan/sdk/typescript
npm install
npm test        # unit tests (no server needed)
npm run lint    # type-check
```

License
MIT
