GoatChain 🐐
A lightweight, extensible TypeScript SDK for building AI agents with streaming support, tool calling, and middleware pattern.
✨ Features
- 🔄 Agentic Loop - Automatic tool calling loop with configurable max iterations
- 📡 Streaming First - Real-time streaming responses with detailed events
- 🧅 Middleware Pattern - Koa-style onion model for extensible hooks
- 🔧 Tool System - Easy-to-use tool registration and execution
- 💾 State Management - Two-level state store (Agent + Session level)
- 📸 Snapshot/Restore - Full persistence support for agents and sessions
- 🎯 TypeScript Native - Full type safety with comprehensive type exports
📦 Installation
```sh
pnpm add goatchain
```

📖 Documentation
For the full documentation, see the docs/ directory.
🧰 CLI
After installing goatchain-cli globally (or using pnpm -s cli in this repo), run:
```sh
goatchain
```

Common options:

- `-k, --api-key <key>` (or set `OPENAI_API_KEY`)
- `-m, --model <id>`
- `--base-url <url>`
- `--max-tokens <n>`
- `--temperature <n>`
Commands:
- `/help` - show help
- `/model <id>` - switch model id (OpenAI)
- `/set <k> <v>` - set request params (e.g. `temperature`, `maxTokens`, `topP`)
- `/unset <k>` - clear a request param
- `/params` - show current request params
- `/base-url <url>` - set base URL
- `/api-key <key>` - set API key (not printed)
- `/tools` - list enabled tools (Read/Write/Edit/Glob/Grep/WebSearch*)
- `/sessions` - list and pick a saved session
- `/use <sessionId>` - restore a saved session (prints recent history)
- `/save` - persist current config/session
- `/status` - show current model/session info
- `/new` - start a new conversation (clears history)
Requires OPENAI_API_KEY in the environment.
Web search (optional):
- Set `SERPER_API_KEY` (or `GOATCHAIN_SERPER_API_KEY`) to enable the builtin WebSearch tool for up-to-date info like weather.
- You can also set it in `./.goatchain/config.json` (workspace-scoped, gitignored):

```json
{ "tools": { "webSearch": { "apiKey": "...", "apiEndpoint": "...", "numResults": 10 } } }
```
Local persistence (workspace-scoped):
- Config and sessions are saved under `./.goatchain/` (auto-created).
- `.goatchain/` is gitignored to avoid accidentally committing secrets.
DeepSeek thinking mode compatibility:
- Some OpenAI-compatible gateways (e.g. DeepSeek thinking mode) require `reasoning_content` to be present on assistant messages that contain `tool_calls` (and may reject empty strings). GoatChain will attach the accumulated thinking content when available.
- If you use DeepSeek via a proxy where GoatChain can't detect it from `baseUrl`/`modelId`, set `openai.compat.requireReasoningContentForToolCalls=true` in `./.goatchain/config.json`.
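The compat rule above can be illustrated with a small standalone sketch (not GoatChain's actual code; the message shape and helper name are simplified for illustration): backfill `reasoning_content` on assistant messages that carry `tool_calls`, using the accumulated thinking content when it is available.

```ts
// Standalone illustration of the compat rule described above: some
// OpenAI-compatible gateways reject assistant messages that carry
// `tool_calls` but have no (or an empty) `reasoning_content` field.
interface AssistantMessage {
  role: 'assistant'
  content: string
  tool_calls?: { id: string, function: { name: string, arguments: string } }[]
  reasoning_content?: string
}

function ensureReasoningContent(
  message: AssistantMessage,
  accumulatedThinking: string,
): AssistantMessage {
  // Only assistant messages with tool calls need the field.
  if (!message.tool_calls?.length)
    return message
  // Attach the accumulated thinking when the field is missing or empty.
  if (!message.reasoning_content && accumulatedThinking)
    return { ...message, reasoning_content: accumulatedThinking }
  return message
}
```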
🏗️ Architecture
```mermaid
classDiagram
  direction TB
  class Agent {
    +id: string
    +name: string
    +systemPrompt: string
    +model: BaseModel
    +tools: ToolRegistry
    +stateStore: StateStore
    +sessionManager: BaseSessionManager
    +stats: AgentStats
    +use(middleware): this
    +createSession(options): BaseSession
    +resumeSession(sessionId, options): BaseSession
    +setModel(modelOrRef): void
  }
  class BaseModel {
    <<abstract>>
    +modelId: string
    +invoke(messages, options): Promise~ChatResponse~
    +stream(messages, options): AsyncIterable~StreamEvent~
  }
  class StateStore {
    <<interface>>
    +savePoint: string
    +deleteOnComplete: boolean
    +saveCheckpoint(checkpoint): Promise~void~
    +loadCheckpoint(sessionId): Promise~AgentLoopCheckpoint~
    +deleteCheckpoint(sessionId): Promise~void~
    +listCheckpoints(agentId): Promise~AgentLoopCheckpoint[]~
  }
  class BaseTool {
    <<abstract>>
    +name: string
    +description: string
    +parameters: JSONSchema
    +execute(args, ctx?): Promise~unknown~
  }
  class ToolRegistry {
    +register(tool): void
    +unregister(name): boolean
    +get(name): BaseTool
    +list(): BaseTool[]
    +toOpenAIFormat(): OpenAITool[]
  }
  class BaseSession {
    <<abstract>>
    +id: string
    +agentId: string
    +status: SessionStatus
    +messages: Message[]
    +usage: Usage
    +configOverride: SessionConfigOverride
    +addMessage(message): void
    +save(): Promise~void~
    +toSnapshot(): SessionSnapshot
    +restoreFromSnapshot(snapshot): void
  }
  class BaseSessionManager {
    <<abstract>>
    +create(agentId, metadata): Promise~BaseSession~
    +get(sessionId): Promise~BaseSession~
    +list(agentId): Promise~BaseSession[]~
    +destroy(sessionId): Promise~void~
  }
  class Middleware {
    <<function>>
    (ctx: AgentLoopState, next: NextFunction) => Promise~void~
  }
  class AgentLoopState {
    +sessionId: string
    +agentId: string
    +messages: Message[]
    +iteration: number
    +pendingToolCalls: ToolCallWithResult[]
    +currentResponse: string
    +shouldContinue: boolean
    +usage: Usage
  }
  Agent --> BaseModel : uses
  Agent --> ToolRegistry : uses
  Agent --> StateStore : uses
  Agent --> BaseSessionManager : uses
  Agent ..> Middleware : applies
  Agent ..> AgentLoopState : manages
  ToolRegistry --> BaseTool : contains
  BaseSessionManager --> BaseSession : manages
```

🚀 Quick Start
A minimal agent-loop example:

```ts
import process from 'node:process'
import { Agent, createModel, createOpenAIAdapter } from 'goatchain'

// Create the model
const model = createModel({
  adapters: [
    createOpenAIAdapter({
      defaultModelId: 'gpt-4o',
      apiKey: process.env.OPENAI_API_KEY!,
    }),
  ],
})

// Create the agent
const agent = new Agent({
  name: 'Simple Assistant',
  systemPrompt: 'You are a helpful assistant.',
  model,
})

// Stream the response
const session = await agent.createSession()
session.send('Hello!')
for await (const event of session.receive()) {
  if (event.type === 'text_delta') {
    process.stdout.write(event.delta)
  }
  else if (event.type === 'done') {
    console.log('\nDone:', event.stopReason)
  }
}
```

📖 Detailed docs: see docs/getting-started.md for more examples and the complete guide.
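Under the hood, the session drives an agentic loop: call the model, execute any requested tool calls, feed the results back, and stop on a final answer or at the iteration cap. The sketch below is a simplified standalone model of that loop (fake synchronous model, no streaming; all names here are illustrative, not GoatChain's API):

```ts
// Simplified standalone sketch of an agentic loop (illustrative only).
// A fake "model" either requests a tool call or returns a final answer.
interface ToolCall { name: string, args: Record<string, unknown> }
interface ModelTurn { text?: string, toolCall?: ToolCall }
type FakeModel = (messages: string[]) => ModelTurn
type Tool = (args: Record<string, unknown>) => string

function runLoop(
  model: FakeModel,
  tools: Record<string, Tool>,
  input: string,
  maxIterations = 10,
): string {
  const messages = [input]
  for (let i = 0; i < maxIterations; i++) {
    const turn = model(messages)
    if (turn.toolCall) {
      // Execute the requested tool and append its result for the next turn.
      const result = tools[turn.toolCall.name](turn.toolCall.args)
      messages.push(`tool:${turn.toolCall.name}:${result}`)
      continue
    }
    // A plain text answer ends the loop.
    return turn.text ?? ''
  }
  throw new Error('Max iterations reached')
}
```

For example, a model that first requests a `weather` tool and then answers from its result completes in two iterations.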
🧅 Middleware Pattern
GoatChain uses a Koa-style onion model for middleware. Each middleware wraps around the core execution:
```
outer:before → inner:before → exec (model.stream) → inner:after → outer:after
```

```ts
// Logging middleware
agent.use(async (state, next) => {
  const start = Date.now()
  console.log(`[${state.iteration}] Before model call`)
  await next() // Execute model stream
  console.log(`[${state.iteration}] After model call (${Date.now() - start}ms)`)
})

// Error handling middleware
agent.use(async (state, next) => {
  try {
    await next()
  }
  catch (error) {
    state.shouldContinue = false
    state.stopReason = 'error'
    state.error = error
  }
})

// Rate limiting middleware
agent.use(async (state, next) => {
  await rateLimiter.acquire()
  await next()
})
```

📡 Event Types
The session receive stream emits the following events:
| Event | Description |
| ----------------- | ------------------------------------- |
| iteration_start | Beginning of a loop iteration |
| text_delta | Partial text response from LLM |
| thinking_start | Thinking phase begins (if supported) |
| thinking_delta | Thinking content delta (if supported) |
| thinking_end | Thinking phase ends (if supported) |
| tool_call_start | Tool call begins |
| tool_call_delta | Tool call arguments delta |
| tool_call_end | Tool call is complete |
| tool_result | Tool execution completed |
| iteration_end | End of a loop iteration (includes usage) |
| done | Stream completed (includes usage) |
| error | Error occurred |
```ts
interface AgentEvent {
  type:
    | 'text_delta'
    | 'tool_call_start'
    | 'tool_call_delta'
    | 'tool_call_end'
    | 'tool_result'
    | 'thinking_start'
    | 'thinking_delta'
    | 'thinking_end'
    | 'error'
    | 'done'
    | 'iteration_start'
    | 'iteration_end'
  // ... event-specific fields
}
```

`iteration_end` and `done` events include an optional `usage: Usage` field with cumulative token counts.
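A consumer typically switches on `event.type`. The standalone sketch below uses minimal event shapes (only a few variants from the table, with assumed payload fields) and a plain async iterable instead of a live session:

```ts
// Standalone sketch of consuming the event stream.
// Event payload fields here are illustrative, not the full SDK shapes.
type DemoEvent =
  | { type: 'text_delta', delta: string }
  | { type: 'tool_result', toolName: string, result: unknown }
  | { type: 'done', stopReason: string, usage?: { totalTokens: number } }
  | { type: 'error', error: Error }

async function collectText(events: AsyncIterable<DemoEvent>): Promise<string> {
  let text = ''
  for await (const event of events) {
    switch (event.type) {
      case 'text_delta':
        text += event.delta // Accumulate streamed text.
        break
      case 'tool_result':
        console.log(`tool ${event.toolName}:`, event.result)
        break
      case 'done':
        return text // Stream finished; usage here is cumulative.
      case 'error':
        throw event.error
    }
  }
  return text
}
```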
💾 Checkpoint & Resume
Built-in checkpoint support for resuming interrupted agent executions:
```ts
import { Agent, FileStateStore } from 'goatchain'

// Create a state store with configuration
const stateStore = new FileStateStore({
  dir: './checkpoints',
  savePoint: 'before', // Save before each iteration
  deleteOnComplete: true, // Clean up after successful completion
})

// The agent saves checkpoints automatically when a stateStore is provided
const agent = new Agent({
  name: 'MyAgent',
  systemPrompt: 'You are helpful.',
  model,
  stateStore,
})

// Run the agent; checkpoints are saved automatically
const session = await agent.createSession()
session.send('Hello')
for await (const event of session.receive()) {
  console.log(event)
}

// If interrupted, resume from the checkpoint
const checkpoint = await stateStore.loadCheckpoint(session.id)
if (checkpoint) {
  const resumed = await agent.resumeSession(session.id)
  for await (const event of resumed.receive()) {
    console.log(event)
  }
}
```

Available state stores:

- `FileStateStore` - file-based persistence
- `InMemoryStateStore` - in-memory (for testing)
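The StateStore contract from the class diagram above fits in a few lines. A standalone in-memory sketch (with a deliberately simplified checkpoint type; the real `AgentLoopCheckpoint` carries more fields):

```ts
// Standalone in-memory StateStore sketch (simplified checkpoint shape).
interface Checkpoint { sessionId: string, agentId: string, iteration: number }

class MemoryStateStore {
  savePoint = 'before'
  deleteOnComplete = true
  private checkpoints = new Map<string, Checkpoint>()

  // One checkpoint per session; saving again overwrites the previous one.
  async saveCheckpoint(checkpoint: Checkpoint): Promise<void> {
    this.checkpoints.set(checkpoint.sessionId, checkpoint)
  }

  async loadCheckpoint(sessionId: string): Promise<Checkpoint | undefined> {
    return this.checkpoints.get(sessionId)
  }

  async deleteCheckpoint(sessionId: string): Promise<void> {
    this.checkpoints.delete(sessionId)
  }

  async listCheckpoints(agentId: string): Promise<Checkpoint[]> {
    return [...this.checkpoints.values()].filter(c => c.agentId === agentId)
  }
}
```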
🔧 Session Management
Sessions represent individual conversations with per-session configuration overrides:
```ts
// Create a session
const session = await sessionManager.create(agent.id, {
  customField: 'value',
})

// Session-level overrides
session.setModelOverride({ modelId: 'gpt-4o-mini' })
session.setSystemPromptOverride('You are a concise assistant.')
session.disableTools(['dangerous_tool'])

// Track session activity
session.addMessage({ role: 'user', content: 'Hello!' })
session.addUsage({ promptTokens: 10, completionTokens: 5, totalTokens: 15 })
session.recordResponse(1500) // ms

// Get a session snapshot for persistence
const snapshot = session.toSnapshot()
```

📁 Project Structure
```
src/
├── index.ts                  # Public exports
├── types/
│   ├── index.ts              # Re-export all types
│   ├── message.ts            # Message types (User, Assistant, Tool, System)
│   ├── event.ts              # Stream event types
│   ├── common.ts             # Shared types (ToolCall, Usage, JSONSchema)
│   └── snapshot.ts           # Snapshot types (Agent, Session)
├── model/
│   ├── index.ts
│   ├── base.ts               # BaseModel abstract class
│   └── types.ts              # Model-specific types
├── state/
│   ├── index.ts
│   ├── stateStore.ts         # StateStore interface
│   ├── FileStateStore.ts     # File-based state storage
│   └── InMemoryStateStore.ts # In-memory state storage
├── tool/
│   ├── index.ts
│   ├── base.ts               # BaseTool abstract class
│   └── registry.ts           # ToolRegistry class
├── session/
│   ├── index.ts
│   ├── base.ts               # BaseSession abstract class
│   └── manager.ts            # BaseSessionManager abstract class
└── agent/
    ├── index.ts
    ├── agent.ts              # Agent class
    ├── types.ts              # Agent types (AgentLoopState, AgentInput, etc.)
    ├── middleware.ts         # Middleware compose function
    └── errors.ts             # Agent-specific errors
```

🛡️ Error Handling
```ts
import { AgentAbortError, AgentMaxIterationsError } from 'goatchain'

// Cancellation support
const controller = new AbortController()

try {
  const session = await agent.createSession({ maxIterations: 5 })
  session.send('Hello', { signal: controller.signal })
  for await (const event of session.receive()) {
    // Handle events...
  }
}
catch (error) {
  if (error instanceof AgentAbortError) {
    console.log('Agent was cancelled')
  }
  else if (error instanceof AgentMaxIterationsError) {
    console.log('Max iterations reached')
  }
}

// Cancel from another context
controller.abort()
```

📖 API Reference
Agent
| Method | Description |
| ------------------------------- | ----------------------- |
| constructor(options) | Create a new agent |
| use(middleware) | Add middleware |
| createSession(options) | Create a session |
| resumeSession(sessionId, options) | Resume a session |
| setModel(modelOrRef) | Switch/pin model at runtime |
Session
| Method | Description |
| ----------------------- | -------------------------------------------- |
| send(input, options?) | Queue input for the session |
| receive(options?) | Stream events, resuming from checkpoint if present |
| messages | Conversation history |
CreateSessionOptions
| Property | Type | Description |
| ---------------- | ------------- | ---------------------------------------- |
| model? | ModelRef | Optional model override for this session |
| maxIterations? | number | Max loop iterations (default: 10) |
| hooks? | ToolHooks | Tool execution hooks |
| requestParams? | object | Model request parameters |
SendOptions
| Property | Type | Description |
| ---------------- | ------------- | ---------------------------------------- |
| signal? | AbortSignal | Cancellation support |
| toolContext? | object | Tool execution context input |
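The `signal` option follows the standard `AbortSignal` contract. A standalone sketch of the cancellation flow, with a fake stream and a local `AbortError` standing in for `AgentAbortError`:

```ts
// Standalone sketch: cancelling a streamed loop via AbortSignal.
// AbortError is a local stand-in for the SDK's AgentAbortError.
class AbortError extends Error {}

async function* fakeStream(signal: AbortSignal): AsyncIterable<string> {
  for (const chunk of ['one', 'two', 'three']) {
    if (signal.aborted)
      throw new AbortError('cancelled') // Stop as soon as the caller aborts.
    yield chunk
  }
}

async function demo(): Promise<string[]> {
  const controller = new AbortController()
  const received: string[] = []
  try {
    for await (const chunk of fakeStream(controller.signal)) {
      received.push(chunk)
      if (received.length === 2)
        controller.abort() // Cancel mid-stream; the next pull throws.
    }
  }
  catch (error) {
    if (!(error instanceof AbortError))
      throw error
  }
  return received
}
```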
🔄 Model Switching
Per-Request Model Override
Temporarily use a different model for a single request:
```ts
const session = await agent.createSession({
  model: { provider: 'openai', modelId: 'gpt-4' }, // Use GPT-4 for this session
})

session.send('Explain quantum physics')
for await (const event of session.receive()) {
  // ...
}
```

Persistent Model Switch
Change the default model for all subsequent requests:
```ts
// Switch model at runtime
model.setModelId('gpt-4')

// All subsequent requests will use the new model
const session = await agent.createSession()
session.send('Hello')
for await (const event of session.receive()) {
  // Uses gpt-4
}
```

Pin Default Model (Overrides Routing)
If your ModelClient supports routing (e.g. created via createModel()), you can pin a specific default model at the agent level:
```ts
agent.setModel({ provider: 'openai', modelId: 'gpt-4' })
```

Multi-Provider Fallback
Configure multiple models with automatic fallback:
```ts
const model = createModel({
  adapters: [
    createOpenAIAdapter({ defaultModelId: 'gpt-4' }),
    createAnthropicAdapter({ defaultModelId: 'claude-3' }),
  ],
  routing: {
    fallbackOrder: [
      { provider: 'openai', modelId: 'gpt-4' }, // Try first
      { provider: 'anthropic', modelId: 'claude-3' }, // Fallback if the first fails
    ],
  },
})
```

📄 License
MIT © Simon He
