# Zidane

v1.6.13

An agent that goes straight to the goal.
A minimal TypeScript agent loop built with Bun. Hook into every step using `hookable`. Built to be embedded.
## Quickstart

```sh
bun install
bun run auth   # Anthropic OAuth
bun start --prompt "create a hello world app"
```

## Agent Setup
```ts
import { createAgent, anthropic, basic } from 'zidane'

const agent = createAgent({
  provider: anthropic({ apiKey: 'sk-ant-...' }),
  harness: basic,
})

const stats = await agent.run({ prompt: 'build a REST API' })
console.log(`Done in ${stats.turns} turns`)
```

All options on `createAgent`:
```ts
createAgent({
  provider,                          // required: LLM provider
  session,                           // session for persistence
  harness: basic,                    // tool set (default: noTools)
  behavior: {                        // agent-level defaults
    toolExecution: 'parallel',       // or 'sequential' (default: parallel)
    maxTurns: 50,                    // max loop iterations
    maxTokens: 16384,                // max tokens per LLM response
    thinkingBudget: 10240,           // exact thinking token budget
  },
  execution: createProcessContext(), // where tools run
  mcpServers: [],                    // MCP tool servers
  skills: {},                        // skills configuration
})
```

All options on `agent.run()`:
```ts
await agent.run({
  prompt: 'your task',      // optional when session has existing turns
  model: 'claude-opus-4-6',
  system: 'be concise',
  thinking: 'medium',       // off | minimal | low | medium | high
  behavior: {               // per-run overrides
    maxTurns: 10,
    maxTokens: 4096,
    thinkingBudget: 8192,
  },
  tools: {},                // override tools for this run ({} = no tools)
  images: [],               // base64 images
  signal: abortController.signal,
})
```

`prompt` is optional when a session with existing turns is provided — the agent resumes from the last turn. This supports apps where the user message is persisted to the session before the agent runs (e.g. WebSocket → session → queue → agent).
Precedence: `run.behavior` > `agent.behavior` > `harness.behavior` > hardcoded defaults.
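This precedence rule behaves like a shallow merge from lowest to highest priority. A minimal sketch of that idea (hypothetical helper and names, not zidane's actual implementation):

```ts
// Hypothetical sketch of the precedence rule above: later spreads win.
interface Behavior {
  maxTurns?: number
  maxTokens?: number
  thinkingBudget?: number
}

function resolveBehavior(
  defaults: Behavior,
  harness?: Behavior,
  agent?: Behavior,
  run?: Behavior,
): Behavior {
  // run.behavior > agent.behavior > harness.behavior > hardcoded defaults
  return { ...defaults, ...harness, ...agent, ...run }
}

const resolved = resolveBehavior(
  { maxTurns: 50, maxTokens: 16384 }, // hardcoded defaults
  { maxTurns: 20 },                   // harness.behavior
  { maxTokens: 8192 },                // agent.behavior
  { maxTurns: 10 },                   // run.behavior
)
// resolved: { maxTurns: 10, maxTokens: 8192 }
```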
## CLI

```sh
bun start \
  --prompt "your task" \     # required
  --model claude-opus-4-6 \  # model id
  --provider anthropic \     # anthropic | openrouter | cerebras
  --harness basic \          # tool set
  --system "be concise" \    # system prompt
  --thinking off \           # off | minimal | low | medium | high
  --context process \        # process | docker
  --mcp '{"name":"fs","transport":"stdio","command":"npx","args":["-y","@modelcontextprotocol/server-filesystem","."]}'
```

## Providers
All providers accept runtime credentials via a params object. Env vars are fallbacks.
### Anthropic

```ts
import { anthropic } from 'zidane'

anthropic({ apiKey: 'sk-ant-...' })
anthropic({ access: 'sk-ant-oat-...' }) // OAuth
anthropic({ apiKey: '...', defaultModel: 'claude-sonnet-4-6' })
```

Fallback order: `params.apiKey` > `params.access` > `ANTHROPIC_API_KEY` env > `.credentials.json`
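The fallback chain reads like a nullish-coalescing cascade. A sketch under that assumption (hypothetical helper, not zidane's internal code; the final `.credentials.json` step is elided):

```ts
// Hypothetical sketch of the credential fallback described above.
// The trailing .credentials.json lookup is omitted for brevity.
function resolveAnthropicCredential(
  params: { apiKey?: string; access?: string },
  env: Record<string, string | undefined>,
): string | undefined {
  return params.apiKey ?? params.access ?? env.ANTHROPIC_API_KEY
}

const fromParams = resolveAnthropicCredential({ apiKey: 'sk-ant-x' }, { ANTHROPIC_API_KEY: 'sk-ant-env' })
// fromParams: 'sk-ant-x' — explicit params beat the environment
```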
### OpenRouter

```ts
import { openrouter } from 'zidane'

openrouter({ apiKey: 'sk-or-...', defaultModel: 'google/gemini-pro' })
```

Fallback order: `params.apiKey` > `OPENROUTER_API_KEY` env
### Cerebras

```ts
import { cerebras } from 'zidane'

cerebras({ apiKey: 'csk-...', defaultModel: 'zai-glm-4.7' })
```

Fallback order: `params.apiKey` > `CEREBRAS_API_KEY` env
## Harnesses

Tools are grouped into harnesses. The `basic` harness includes:

| Tool | Description |
|---|---|
| `shell` | Execute shell commands |
| `read_file` | Read file contents |
| `write_file` | Write/create files |
| `list_files` | List directory contents |
| `spawn` | Spawn a sub-agent |
Define a custom harness:

```ts
import { defineHarness, basicTools } from 'zidane'

const harness = defineHarness({
  name: 'researcher',
  system: 'You are a research assistant.',
  tools: { ...basicTools },
})
```

For pure chat with no tools, pass `tools: {}` on a specific run or use the `noTools` harness:

```ts
await agent.run({ prompt: 'just chat', tools: {} })
```

## Thinking
Extended reasoning with named levels or exact token budgets.
| Level | Default budget |
|---|---|
| off | disabled |
| minimal | 1,024 tokens |
| low | 4,096 tokens |
| medium | 10,240 tokens |
| high | 32,768 tokens |
```ts
// Named level
await agent.run({ prompt: 'solve this', thinking: 'high' })

// Exact budget (overrides level default)
await agent.run({ prompt: 'solve this', thinking: 'high', behavior: { thinkingBudget: 50000 } })

// Agent-level default
const agent = createAgent({ provider, harness, behavior: { thinkingBudget: 16384 } })
```

Thinking traces are stored in session turns as `{ type: 'thinking', text }` content blocks and streamed live via the `stream:thinking` hook. Supported by Anthropic (native) and OpenRouter/Cerebras (`reasoning_content`/`reasoning` SSE fields).
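The table's level-to-budget mapping, with an explicit `thinkingBudget` taking precedence over the level's default, could be sketched as follows (assumed helper, not zidane's internals):

```ts
// Hypothetical sketch: named levels map to the default budgets from the
// table above; an explicit thinkingBudget overrides the level's default.
type ThinkingLevel = 'off' | 'minimal' | 'low' | 'medium' | 'high'

const LEVEL_BUDGETS: Record<ThinkingLevel, number> = {
  off: 0,
  minimal: 1024,
  low: 4096,
  medium: 10240,
  high: 32768,
}

function resolveThinkingBudget(level: ThinkingLevel, explicit?: number): number {
  return explicit ?? LEVEL_BUDGETS[level]
}

const budget = resolveThinkingBudget('high', 50000)
// budget: 50000 — the explicit budget wins over the 'high' default of 32768
```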
## Hooks

Every hook receives a mutable context object.

### Turn lifecycle
```ts
agent.hooks.hook('turn:before', (ctx) => {
  // ctx.turn, ctx.turnId, ctx.options (StreamOptions)
})

agent.hooks.hook('turn:after', (ctx) => {
  // ctx.turn, ctx.turnId, ctx.usage, ctx.message (full SessionTurn)
  // Always fires — even if the provider throws mid-stream
  // Turn is guaranteed to be in agent.turns before this fires
})

agent.hooks.hook('usage', (ctx) => {
  // ctx.turn, ctx.turnId, ctx.usage (per-turn)
  // ctx.totalIn, ctx.totalOut (running totals)
})

agent.hooks.hook('agent:done', (ctx) => {
  // ctx.totalIn, ctx.totalOut, ctx.turns, ctx.elapsed, ctx.children?
  // ctx.output — structured output (when behavior.schema is set)
  // Fires on all exit paths: completion, maxTurns, and abort
})
```

### Streaming
```ts
agent.hooks.hook('stream:text', (ctx) => {
  // ctx.delta, ctx.text, ctx.turnId
})

agent.hooks.hook('stream:end', (ctx) => {
  // ctx.text (final), ctx.turnId
  // Only fires when there is text content (not on tool-only turns)
})

agent.hooks.hook('stream:thinking', (ctx) => {
  // ctx.delta, ctx.thinking (accumulated), ctx.turnId
  // Fires when the model streams reasoning traces (Anthropic, OpenRouter)
})
```

### Tool execution
All tool hooks include `turnId` and `callId` for correlation. Typed via `ToolHookContext`.

```ts
agent.hooks.hook('tool:gate', (ctx) => {
  // ctx.turnId, ctx.callId, ctx.name, ctx.input
  if (ctx.name === 'shell' && String(ctx.input.command).includes('rm -rf')) {
    ctx.block = true
    ctx.reason = 'dangerous command'
  }
})

agent.hooks.hook('tool:before', (ctx) => { /* ctx.turnId, ctx.callId, ctx.name, ctx.input */ })
agent.hooks.hook('tool:after', (ctx) => { /* + ctx.result */ })
agent.hooks.hook('tool:error', (ctx) => { /* + ctx.error */ })

agent.hooks.hook('tool:transform', (ctx) => {
  // + ctx.result, ctx.isError — mutate to modify output
  if (ctx.result.length > 5000)
    ctx.result = ctx.result.slice(0, 5000) + '\n... (truncated)'
})
```

MCP tool hooks mirror the same pattern with `server` and `tool` fields. Typed via `McpToolHookContext`.
```ts
agent.hooks.hook('mcp:tool:gate', (ctx) => { /* ctx.turnId, ctx.callId, ctx.server, ctx.tool, ctx.input, ctx.block, ctx.reason */ })
agent.hooks.hook('mcp:tool:before', (ctx) => { /* ctx.turnId, ctx.callId, ctx.server, ctx.tool, ctx.input */ })
agent.hooks.hook('mcp:tool:after', (ctx) => { /* + ctx.result */ })
agent.hooks.hook('mcp:tool:transform', (ctx) => { /* + ctx.result — mutate to modify */ })
agent.hooks.hook('mcp:tool:error', (ctx) => { /* + ctx.error */ })
```

### Context transform
Prune messages before each LLM call:

```ts
agent.hooks.hook('context:transform', (ctx) => {
  if (ctx.messages.length > 30)
    ctx.messages.splice(2, ctx.messages.length - 30)
})
```

## Steering and Follow-up
### Steering

Inject a message while the agent is working. Delivered between tool calls.

```ts
agent.steer('focus only on the tests directory')
```

### Follow-up

Queue messages that extend the conversation after the agent finishes.

```ts
agent.followUp('now write tests for what you built')
```

## Sub-agent Spawning
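Semantically, steering behaves like a queue that the agent loop drains at the next boundary between tool calls. A minimal sketch of that idea (illustrative only, not zidane's implementation):

```ts
// Illustrative sketch: steering messages queue up while the agent works
// and are delivered (drained) at the next boundary between tool calls.
class SteerQueue {
  private pending: string[] = []

  steer(message: string): void {
    this.pending.push(message)
  }

  // Called by the loop between tool calls: returns and clears queued messages.
  drain(): string[] {
    const delivered = this.pending
    this.pending = []
    return delivered
  }
}

const queue = new SteerQueue()
queue.steer('focus only on the tests directory')
const delivered = queue.drain()
// delivered: ['focus only on the tests directory']; the queue is now empty
```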
The `spawn` tool delegates tasks to child agents that run independently.

```ts
import { createSpawnTool, defineHarness, basicTools } from 'zidane'

const harness = defineHarness({
  name: 'orchestrator',
  tools: {
    ...basicTools,
    spawn: createSpawnTool({
      maxConcurrent: 5,
      model: 'claude-haiku-4-5-20251001',
      thinking: 'low',
    }),
  },
})
```

Children inherit the parent's harness and can spawn their own children.
## Interaction Tool

Let the agent pause and request structured input from the outside world. Not included in any harness by default.
```ts
import { createInteractionTool, defineHarness, basicTools } from 'zidane'

const askUser = createInteractionTool({
  name: 'ask_user',
  schema: {
    type: 'object',
    properties: { question: { type: 'string' } },
    required: ['question'],
  },
  onRequest: async (payload) => {
    const answer = await promptUser(payload.question)
    return { answer }
  },
})

const harness = defineHarness({
  name: 'interactive',
  tools: { ...basicTools, ask_user: askUser },
})
```

`onRequest` can be async — the agent waits for the response. Return a string or object (objects are JSON-stringified).
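The string-or-object return convention could be normalized like this (hypothetical helper for illustration, not part of zidane's API):

```ts
// Hypothetical sketch: normalize an onRequest return value into the string
// the model receives — strings pass through, objects are JSON-stringified.
function normalizeInteractionResult(result: string | Record<string, unknown>): string {
  return typeof result === 'string' ? result : JSON.stringify(result)
}

const passthrough = normalizeInteractionResult('yes')
// passthrough: 'yes'
const stringified = normalizeInteractionResult({ answer: 'yes' })
// stringified: '{"answer":"yes"}'
```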
## Sessions

Sessions give an agent persistent turn history and run metadata across calls.

```ts
import { createAgent, createSession, createSqliteStore } from 'zidane'

const store = createSqliteStore({ path: './sessions.db' })
const session = await createSession({ store })
const agent = createAgent({ harness, provider, session })

await agent.run({ prompt: 'hello' })
await session.save()
```

Turns are persisted incrementally after each turn — not as a full save. If the agent crashes, you keep every turn up to the last completed one.
### Storage backends

```ts
import { createMemoryStore, createSqliteStore, createRemoteStore } from 'zidane/session'

createMemoryStore()                                   // in-memory, no persistence
createSqliteStore({ path: './sessions.db' })          // SQLite (Bun built-in)
createRemoteStore({ url: 'https://api.example.com' }) // HTTP REST API
```

### Restoring a session
```ts
import { loadSession } from 'zidane/session'

const session = await loadSession(store, 'my-session')
if (session) {
  const agent = createAgent({ harness, provider, session })
  await agent.run({ prompt: 'continue' })
}
```

### Session hooks

```ts
agent.hooks.hook('session:start', (ctx) => { /* ctx.sessionId, ctx.runId, ctx.prompt */ })
agent.hooks.hook('session:end', (ctx) => { /* ctx.sessionId, ctx.runId, ctx.status, ctx.turnRange */ })
agent.hooks.hook('session:turns', (ctx) => { /* ctx.sessionId, ctx.turns (SessionTurn[]), ctx.count */ })
```

## MCP Servers
Connect any MCP-compatible tool server. Tools are namespaced as `mcp_{server}_{tool}`.

```ts
const agent = createAgent({
  harness,
  provider,
  mcpServers: [
    { name: 'fs', transport: 'stdio', command: 'npx', args: ['-y', '@modelcontextprotocol/server-filesystem', '.'] },
    { name: 'api', transport: 'streamable-http', url: 'http://localhost:3002/mcp' },
  ],
})
```

MCP servers can also be declared on the harness. Connections are lazy (opened on the first `run()`) and reused.
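The `mcp_{server}_{tool}` naming convention matters when filtering tools in hooks; expressed as a one-liner (hypothetical helper for illustration):

```ts
// Hypothetical helper illustrating the mcp_{server}_{tool} naming convention.
const mcpToolName = (server: string, tool: string): string => `mcp_${server}_${tool}`

const name = mcpToolName('fs', 'read_file')
// name: 'mcp_fs_read_file'
```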
## Skills

Reusable instruction packages following the Agent Skills open standard.
### SKILL.md format

```
my-skill/
  SKILL.md
  scripts/      # optional
  references/   # optional
  assets/       # optional
```

```md
---
name: my-skill
description: When to activate this skill.
model: claude-opus-4-6
thinking: low
allowed-tools: Bash Read Write
paths: "src/**/*.ts, test/**/*.ts"
---

Full instructions the agent receives when this skill activates.
```

### Discovery
Scan paths in priority order (first found wins):

1. `{cwd}/.agents/skills`
2. `{cwd}/.zidane/skills`
3. `~/.agents/skills`
4. `~/.zidane/skills`
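The first-found-wins scan could be sketched like this (hypothetical helper with an injected existence check, not zidane's internals):

```ts
// Hypothetical sketch: walk candidate directories in priority order and
// return the first one that exists ("first found wins").
function findSkillsDir(
  candidates: string[],
  exists: (path: string) => boolean,
): string | undefined {
  return candidates.find(exists)
}

const found = findSkillsDir(
  ['./.agents/skills', './.zidane/skills'],
  (p) => p === './.zidane/skills', // pretend only this directory exists
)
// found: './.zidane/skills'
```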
### Configuration

```ts
import { createAgent, defineSkill } from 'zidane'

const agent = createAgent({
  harness,
  provider,
  skills: {
    scan: ['./custom-skills'],
    write: [
      defineSkill({
        name: 'review',
        description: 'Code review guidelines.',
        instructions: 'Review for correctness and test coverage.',
      }),
    ],
    exclude: ['deprecated-skill'],
    enabled: ['review', 'deploy'],
  },
})
```

Instructions support `` !`command` `` for dynamic content — commands run during resolution and their output replaces the placeholder.
## Execution Contexts

An execution context defines where tools run. Defaults to in-process.

### Docker

```ts
import { createAgent, createDockerContext } from 'zidane'

const agent = createAgent({
  harness,
  provider,
  execution: createDockerContext({
    image: 'node:22',
    cwd: '/workspace',
    limits: { memory: 512, cpu: '1.0' },
  }),
})
```

### Sandbox (remote)
Implement `SandboxProvider` for your provider (E2B, Rivet, etc.):

```ts
import { createAgent, createSandboxContext } from 'zidane'

const agent = createAgent({
  harness,
  provider,
  execution: createSandboxContext(myProvider),
})
```

## State Management
```ts
agent.isRunning           // is a run in progress?
agent.turns               // conversation history (SessionTurn[])
agent.abort()             // cancel the current run
agent.reset()             // clear messages and queues
await agent.destroy()     // clean up context + MCP connections
await agent.waitForIdle() // wait for the current run to complete
```

## Message Format
All messages use a canonical format. Providers convert to/from wire formats internally.

```ts
type SessionContentBlock =
  | { type: 'text', text: string }
  | { type: 'image', mediaType: string, data: string }
  | { type: 'tool_call', id: string, name: string, input: Record<string, unknown> }
  | { type: 'tool_result', callId: string, output: string, isError?: boolean }
  | { type: 'thinking', text: string }

interface SessionMessage {
  role: 'user' | 'assistant'
  content: SessionContentBlock[]
}
```

Converters for external interop:
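Using these types, a tool call and its result might look like the following (illustrative values; the types are duplicated from above so the snippet is self-contained):

```ts
// Illustrative example of the canonical format: an assistant turn calling a
// tool, then a user-role turn carrying the tool result.
type SessionContentBlock =
  | { type: 'text', text: string }
  | { type: 'image', mediaType: string, data: string }
  | { type: 'tool_call', id: string, name: string, input: Record<string, unknown> }
  | { type: 'tool_result', callId: string, output: string, isError?: boolean }
  | { type: 'thinking', text: string }

interface SessionMessage {
  role: 'user' | 'assistant'
  content: SessionContentBlock[]
}

const toolCall: SessionMessage = {
  role: 'assistant',
  content: [
    { type: 'text', text: 'Listing the directory first.' },
    { type: 'tool_call', id: 'call_1', name: 'list_files', input: { path: '.' } },
  ],
}

// The result references the call via callId for correlation.
const toolResult: SessionMessage = {
  role: 'user',
  content: [
    { type: 'tool_result', callId: 'call_1', output: 'README.md\nsrc/' },
  ],
}
```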
```ts
import { fromAnthropic, toAnthropic, fromOpenAI, toOpenAI, autoDetectAndConvert } from 'zidane'
```

## Structured Output
Force the agent's final response to match a JSON Schema via provider-level tool forcing.

```ts
const stats = await agent.run({
  prompt: 'Extract the entities',
  behavior: {
    schema: {
      type: 'object',
      properties: { name: { type: 'string' }, age: { type: 'number' } },
      required: ['name', 'age'],
    },
  },
})

console.log(stats.output) // { name: 'Alice', age: 30 }
```

The `output` hook fires when structured output is extracted:
```ts
agent.hooks.hook('output', (ctx) => {
  // ctx.output — the parsed JSON matching the schema
  // ctx.schema — the schema that was enforced
})
```

### Zod v4 integration
Use `zodToJsonSchema` to normalize `z.toJsonSchema()` output for tool schemas:

```ts
import { z } from 'zod'
import { zodToJsonSchema } from 'zidane'

const schema = zodToJsonSchema(z.toJsonSchema(z.object({ name: z.string() })))
```

## Usage Tracking
```ts
const stats = await agent.run({ prompt: 'hello' })

stats.turnUsage // TurnUsage[] — per-turn { input, output, cacheCreation?, cacheRead?, thinking?, cost? }
stats.cost      // total USD cost (if reported by provider)
```

## Types
All types are available from `zidane/types`:

```ts
import type { Agent, SessionTurn, TurnUsage, Provider, ToolDef } from 'zidane/types'

// Hook context types for typed event handlers
import type { ToolHookContext, McpToolHookContext, SessionHookContext, StreamHookContext } from 'zidane/types'
```

## Testing
```sh
bun test
```

495+ tests with mock provider and execution context. No API keys or Docker needed.
## License

ISC
