@runtypelabs/a2a-aisdk-example
v0.3.3
Example A2A (Agent-to-Agent) protocol server for testing and reference implementation
Reference A2A agent implementation with deterministic time tools and LLM-powered chat.
Why time tools?
LLMs can't reliably:
- Tell you what day "next Tuesday" is
- Calculate dates ("30 days from now")
- Convert timezones correctly
- Know what time it is
This agent includes deterministic time skills that compute answers from the system clock — no generation, no hallucination. Other agents can call these tools via A2A to get reliable temporal data.
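As an illustration of the principle (not the package's actual implementation), a deterministic day-of-week skill reduces to plain date arithmetic:

```typescript
// Sketch of a deterministic day-of-week computation: pure Date math
// plus Intl formatting, with no model call involved. Illustrative only;
// the package's real time skills may differ in shape.
function dayOfWeek(isoDate: string, timeZone: string = "UTC"): string {
  const date = new Date(`${isoDate}T00:00:00Z`);
  return new Intl.DateTimeFormat("en-US", { weekday: "long", timeZone }).format(date);
}

console.log(dayOfWeek("2025-02-04")); // "Tuesday"
```

The same input always yields the same output, which is exactly the guarantee an LLM cannot make.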
Quick Start
Run an A2A Server (Echo Mode - for testing)
# Using CLI
npx @runtypelabs/a2a-aisdk-example serve --echo
This starts a server at:
- Agent Card: http://localhost:9999/.well-known/agent-card.json
- A2A Endpoint: http://localhost:9999/a2a
Test a time skill
curl -X POST http://localhost:9999/a2a \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"id": "1",
"method": "message/send",
"params": {
"message": {
"role": "user",
"parts": [{"data": {"date": "2025-02-04"}}]
},
"metadata": { "skill": "time/day_of_week" }
}
}'
Returns:
{ "result": { "day": "Tuesday", ... }, "computed": { "method": "deterministic" } }
Run with LLM
Uses Vercel AI Gateway to access any model provider through a single API key.
# Vercel AI Gateway (recommended — supports any provider)
AI_GATEWAY_API_KEY=xxx npx @runtypelabs/a2a-aisdk-example serve
The default model is alibaba/qwen3.5-flash. Use --model to switch:
AI_GATEWAY_API_KEY=xxx npx @runtypelabs/a2a-aisdk-example serve --model openai/gpt-4o-mini
# OpenAI
OPENAI_API_KEY=sk-xxx npx @runtypelabs/a2a-aisdk-example serve --model gpt-4o-mini --provider openai
# Anthropic
ANTHROPIC_API_KEY=sk-xxx npx @runtypelabs/a2a-aisdk-example serve --model claude-sonnet-4-6 --provider anthropic
Direct provider mode supports only the openai and anthropic providers, and the model name must not include a provider prefix (no /).
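The routing rule above can be reduced to one predicate (an assumed simplification of the CLI's behavior, not its actual code): a / in the model name means the request goes through the AI Gateway, a bare name means direct provider mode.

```typescript
// Assumed simplification of the model-routing rule described above:
// "provider/model" names go through the Vercel AI Gateway, bare names
// go to the direct provider selected with --provider openai|anthropic.
function usesGateway(model: string): boolean {
  return model.includes("/");
}

console.log(usesGateway("alibaba/qwen3.5-flash")); // true  (gateway)
console.log(usesGateway("gpt-4o-mini"));           // false (direct provider)
```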
Test an A2A Endpoint
The server must be running first. Use two terminals:
Terminal 1 — start the server:
# Echo mode (no API key needed)
npx @runtypelabs/a2a-aisdk-example serve --echo
# Or with LLM (requires AI_GATEWAY_API_KEY)
AI_GATEWAY_API_KEY=xxx npx @runtypelabs/a2a-aisdk-example serve
Terminal 2 — run the test:
# Test echo (works with echo mode)
npx @runtypelabs/a2a-aisdk-example test http://localhost:9999
# Test with streaming (chat requires LLM mode + API key)
npx @runtypelabs/a2a-aisdk-example test http://localhost:9999 --stream --skill chat --message "What is AI?"
Run npx @runtypelabs/a2a-aisdk-example --help to see all commands and options.
Test Runtype A2A Surface
npx @runtypelabs/a2a-aisdk-example test-runtype \
--product-id prod_xxx \
--surface-id surf_xxx \
--api-key a2a_xxx \
--environment local \
--message "Hello!"
Available Skills
Time Tools (Deterministic)
| Skill | Description |
| -------------------- | ------------------------------------- |
| time/now | Current time with timezone |
| time/parse | Parse "next Tuesday 3pm" to timestamp |
| time/convert | Convert between timezones |
| time/add | Add days/weeks/months to a date |
| time/diff | Duration between two dates |
| time/day_of_week | What day is this date? |
| time/is_past | Is this timestamp in the past? |
| time/business_days | Add/subtract business days |
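Skills like time/add follow the same pattern; a minimal sketch of the underlying arithmetic (a hypothetical helper, not the package's code):

```typescript
// Hypothetical sketch of the arithmetic behind a skill like time/add:
// UTC-only Date math, so the answer is identical on every run.
function addDays(isoDate: string, days: number): string {
  const d = new Date(`${isoDate}T00:00:00Z`);
  d.setUTCDate(d.getUTCDate() + days);
  return d.toISOString().slice(0, 10); // back to YYYY-MM-DD
}

console.log(addDays("2025-01-31", 30)); // "2025-03-02"
```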
General (LLM-Powered)
| Skill | Description |
| --------- | -------------------- |
| chat | Conversational AI |
| analyze | Content analysis |
| echo | Echo input (testing) |
chat can invoke skills tagged with tool (for example the deterministic time/* skills) through AI SDK tool calling.
Time tools return structured responses with a computed.method: "deterministic" field and usage: "Use this value directly. Do not recalculate." guidance for calling agents.
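For calling agents, the useful invariant is the computed.method flag. A rough TypeScript shape, inferred from the example response earlier in this README (the real schema may carry more fields):

```typescript
// Shape inferred from the example response shown above; illustrative only.
interface TimeSkillResponse {
  result: Record<string, unknown>;
  computed: { method: string };
  usage?: string;
}

// A calling agent can trust the value directly when it was computed
// deterministically rather than generated by a model.
function isDeterministic(r: TimeSkillResponse): boolean {
  return r.computed.method === "deterministic";
}

const sample: TimeSkillResponse = {
  result: { day: "Tuesday" },
  computed: { method: "deterministic" },
};
console.log(isDeterministic(sample)); // true
```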
Example prompt that triggers tool use from chat:
npx @runtypelabs/a2a-aisdk-example test http://localhost:9999 \
--skill chat \
--message "What day of the week is 2026-02-09 in UTC?"
Connecting to Runtype
As an External Agent
Start the A2A server:
npx @runtypelabs/a2a-aisdk-example serve --echo --port 9999
In Runtype Dashboard:
- Go to your Product
- Click "Add Capability" > "Connect External"
- Enter:
  - Agent Card URL: http://localhost:9999/.well-known/agent-card.json
  - A2A Endpoint URL: http://localhost:9999/a2a
- Click "Connect & Add"
Testing Runtype's A2A Surface
- Create an A2A Surface in Runtype Dashboard
- Add capabilities (flows) to the surface
- Generate an API key for the surface
- Test with the CLI:
npx @runtypelabs/a2a-aisdk-example test-runtype \
  --product-id prod_xxx \
  --surface-id surf_xxx \
  --api-key a2a_xxx \
  --environment local
Programmatic Usage
Create a Server
import { createA2AServer } from '@runtypelabs/a2a-aisdk-example'
// Requires AI_GATEWAY_API_KEY env var for gateway models (provider/model format)
const server = createA2AServer({
config: {
name: 'My Agent',
description: 'A helpful AI assistant',
port: 9999,
},
llmConfig: {
provider: 'openai',
model: 'alibaba/qwen3.5-flash', // gateway model — any provider via Vercel AI Gateway
temperature: 0.7,
},
})
await server.start()
// Graceful shutdown
process.on('SIGINT', async () => {
await server.stop()
})
Create a Client
import { A2AClient } from '@runtypelabs/a2a-aisdk-example'
const client = new A2AClient({
baseUrl: 'http://localhost:9999',
})
// Get agent card
const agentCard = await client.getAgentCard()
console.log(
'Skills:',
agentCard.skills.map((s) => s.name)
)
// Send a task
const task = await client.sendTask({
skill: 'chat',
message: {
role: 'user',
parts: [{ type: 'text', text: 'Hello!' }],
},
})
console.log('Response:', task.artifacts?.[0]?.parts?.[0]?.text)
Test Runtype Surface
import { createRuntypeA2AClient } from '@runtypelabs/a2a-aisdk-example'
const client = createRuntypeA2AClient({
productId: 'prod_xxx',
surfaceId: 'surf_xxx',
apiKey: 'a2a_xxx',
environment: 'local', // or 'staging', 'production'
})
// Send streaming task
await client.sendTaskStreaming(
{
skill: 'my-capability',
message: {
role: 'user',
parts: [{ type: 'text', text: 'Analyze this data...' }],
},
},
{
onChunk: (text) => process.stdout.write(text),
onStatus: (status) => console.log('Status:', status),
}
)
CLI Reference
Run npx @runtypelabs/a2a-aisdk-example --help for all commands and options.
serve - Start A2A Server
Usage: a2a-aisdk-example serve [options]
Options:
-p, --port <port> Port to listen on (default: "9999")
-h, --host <host> Host to bind to (default: "localhost")
-n, --name <name> Agent name (default: "Example A2A Agent")
--echo Run in echo mode (no LLM, for testing)
--provider <provider> LLM provider: openai, anthropic (default: "openai")
--model <model> LLM model (default: "alibaba/qwen3.5-flash")
--temperature <temp> LLM temperature (default: "0.7")
test - Test A2A Endpoint
Usage: a2a-aisdk-example test [options] <url>
Arguments:
url Base URL of the A2A endpoint
Options:
-s, --skill <skill> Skill to test (default: "echo")
-m, --message <msg> Message to send (default: "Hello from A2A client!")
--stream Use streaming mode
-k, --api-key <key> API key for authentication
test-runtype - Test Runtype A2A Surface
Usage: a2a-aisdk-example test-runtype [options]
Options:
--product-id <id> Runtype product ID (required)
--surface-id <id> Runtype surface ID (required)
--api-key <key> A2A API key (required)
-e, --environment Environment: production, staging, local (default: "local")
-s, --skill <skill> Skill/capability to test
-m, --message <msg> Message to send (default: "Hello from A2A client!")
--stream Use streaming mode
A2A Protocol
This package implements A2A Protocol v0.3.
Endpoints
- GET /.well-known/agent-card.json: Agent Card discovery (also serves /.well-known/agent.json for backward compatibility)
- POST /a2a: JSON-RPC endpoint
Supported Methods
| Spec Method (preferred) | Legacy Alias | Description |
| --- | --- | --- |
| message/send | tasks/send, SendMessage | Send a message (synchronous) |
| message/stream | tasks/sendSubscribe, SendStreamingMessage | Send a message with SSE streaming |
| tasks/get | GetTask | Get task status |
| tasks/cancel | CancelTask | Cancel a running task |
| ping | | Health check |
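Every method shares the JSON-RPC 2.0 envelope shown in the curl example earlier. A minimal request builder (an illustrative helper, not part of the package's exported API):

```typescript
// Build a JSON-RPC 2.0 request body for POST /a2a. Illustrative;
// the package's A2AClient constructs these envelopes for you.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: string;
  method: string;
  params: unknown;
}

function buildRequest(method: string, params: unknown, id = "1"): JsonRpcRequest {
  return { jsonrpc: "2.0", id, method, params };
}

const ping = buildRequest("ping", {});
console.log(JSON.stringify(ping)); // {"jsonrpc":"2.0","id":"1","method":"ping","params":{}}
```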
Vercel Deployment
Deploy your A2A agent to Vercel for serverless operation.
Option 1: Deploy the vercel-app directory
- In Vercel dashboard, set Root Directory to vercel-app
- Add environment variables:
  - AI_GATEWAY_API_KEY: Vercel AI Gateway key (recommended)
  - AGENT_NAME (optional)
  - ECHO_MODE=true for testing without LLM
  - Or use direct provider keys as fallback: OPENAI_API_KEY / ANTHROPIC_API_KEY
- Deploy
Option 2: Add to Existing Next.js App
Install the package and use the Vercel handlers. Set AI_GATEWAY_API_KEY in your environment for gateway models:
// app/api/a2a/route.ts
import { createA2AHandler } from '@runtypelabs/a2a-aisdk-example/vercel'
export const POST = createA2AHandler({
name: 'My Agent',
llmConfig: { provider: 'openai', model: 'alibaba/qwen3.5-flash' },
})
// app/.well-known/agent-card.json/route.ts
import { createAgentCardHandler } from '@runtypelabs/a2a-aisdk-example/vercel'
export const GET = createAgentCardHandler({
name: 'My Agent',
llmConfig: { provider: 'openai', model: 'alibaba/qwen3.5-flash' },
})
Serverless Limitations
Since Vercel functions are stateless:
- GetTask returns "not available" (no task storage)
- CancelTask returns "not available" (can't cancel in-flight tasks)
- Use message/stream for streaming responses
Development
# Build
pnpm build
# Development mode (watch)
pnpm dev
# Type check
pnpm typecheck
# Clean
pnpm clean
License
MIT
