# @cuylabs/agent-code

An embeddable AI coding agent library built on top of the Vercel AI SDK.

Designed to drop into larger applications, it uses modern AI SDK patterns for streaming, tool calling, and multi-provider support.
## Features
- 🔌 Embeddable - Drop into any Node.js/TypeScript application
- 🤖 Multi-Provider - Works with OpenAI, Anthropic, Google, and more via Vercel AI SDK
- 🛠️ Coding Tools - Built-in tools for file operations, search, and shell commands
- 📡 Streaming - Real-time streaming of responses and tool execution
- 🔧 Extensible - Easy to add custom tools
- 💾 Session Management - Built-in conversation history tracking
- 🧠 Reasoning Support - Works with reasoning models (o3-mini, etc.)
## Installation

```bash
npm install @cuylabs/agent-code ai @ai-sdk/openai
# or
pnpm add @cuylabs/agent-code ai @ai-sdk/openai
```

## Quick Start
```ts
import { createAgent, defaultCodingTools } from "@cuylabs/agent-code";
import { openai } from "@ai-sdk/openai";

// Create an agent with GPT-4o
const agent = createAgent({
  model: openai("gpt-4o"),
  cwd: process.cwd(),
  tools: defaultCodingTools,
});

// Non-streaming usage
const { response, toolCalls } = await agent.send(
  "session-1",
  "List all TypeScript files in the src directory"
);
console.log(response);

// Streaming usage
for await (const event of agent.chat("session-1", "Fix the bug in utils.ts")) {
  switch (event.type) {
    case "text-delta":
      process.stdout.write(event.text);
      break;
    case "tool-start":
      console.log(`\n🔧 ${event.toolName}...`);
      break;
    case "tool-result":
      console.log(`✅ ${event.toolName} complete`);
      break;
    case "error":
      console.error(`❌ Error:`, event.error);
      break;
  }
}
```

## Built-in Tools
| Tool | Description |
|------|-------------|
| `bash` | Execute shell commands |
| `read` | Read file contents with line numbers |
| `edit` | Edit files by replacing text |
| `write` | Create or overwrite files |
| `grep` | Search file contents with regex |
| `glob` | Find files by glob pattern |
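For intuition, the `edit` tool's replace-text contract can be sketched as a guarded string replacement. This is an illustration of the assumed behavior, not the library's implementation:

```typescript
// Sketch of an edit-style operation: replace the first occurrence of
// oldText with newText, failing loudly when oldText is absent.
// (Assumed semantics for illustration; the real edit tool may differ.)
function applyEdit(content: string, oldText: string, newText: string): string {
  const idx = content.indexOf(oldText);
  if (idx === -1) {
    throw new Error("oldText not found in file");
  }
  return content.slice(0, idx) + newText + content.slice(idx + oldText.length);
}
```

For example, `applyEdit("const a = 1;", "1", "2")` returns `"const a = 2;"`.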
## Custom Tools

Add your own tools using the `Tool.define` helper:
```ts
import { createAgent, Tool } from "@cuylabs/agent-code";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const myTool = Tool.define("my_custom_tool", {
  description: "Does something custom",
  parameters: z.object({
    input: z.string().describe("The input to process"),
  }),
  execute: async (params, ctx) => ({
    title: "Custom Tool",
    output: `Processed: ${params.input}`,
    metadata: {},
  }),
});
```
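Because `execute` is an ordinary async function, a tool's logic can be unit-tested without running an agent. Extracted as a standalone function (a sketch mirroring the example's result shape, not a library API; `ctx` is omitted since it is unused):

```typescript
// The example tool's execute handler as a plain function.
// Result shape copied from the Tool.define example above.
async function executeMyTool(params: { input: string }) {
  return {
    title: "Custom Tool",
    output: `Processed: ${params.input}`,
    metadata: {} as Record<string, unknown>,
  };
}
```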
Then register it with an agent:

```ts
const agent = createAgent({
  model: openai("gpt-4o"),
  cwd: process.cwd(),
});

agent.addTool(myTool);
```

## Using Different Providers
```ts
import { anthropic } from "@ai-sdk/anthropic";
import { google } from "@ai-sdk/google";
import { openai } from "@ai-sdk/openai";

// OpenAI (recommended)
const openaiAgent = createAgent({
  model: openai("gpt-4o"),
  cwd: process.cwd(),
});

// OpenAI with reasoning
const reasoningAgent = createAgent({
  model: openai("o3-mini"),
  cwd: process.cwd(),
  reasoningLevel: "high",
});

// Anthropic
const anthropicAgent = createAgent({
  model: anthropic("claude-sonnet-4-20250514"),
  cwd: process.cwd(),
});

// Google
const googleAgent = createAgent({
  model: google("gemini-2.0-flash"),
  cwd: process.cwd(),
});
```
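If the provider should be switchable at deploy time, one pattern is to select the model factory from configuration. This is a sketch with hypothetical names (`pickModel`, the registry shape); only the `openai`, `anthropic`, and `google` factories shown above come from real packages:

```typescript
// Hypothetical provider registry: maps a config string to a model factory
// and a default model ID. In practice the factories would be the
// @ai-sdk imports shown above.
type ModelFactory = (modelId: string) => unknown;

interface ProviderEntry {
  make: ModelFactory;
  defaultId: string;
}

function pickModel(
  provider: string,
  registry: Record<string, ProviderEntry>,
): unknown {
  const entry = registry[provider];
  if (!entry) {
    throw new Error(`Unknown provider: ${provider}`);
  }
  return entry.make(entry.defaultId);
}
```

A host app might then call `pickModel(process.env.AGENT_PROVIDER ?? "openai", registry)` when constructing the agent.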
## Configuration

```ts
import { createAgent, defaultCodingTools, readTool, grepTool, globTool } from "@cuylabs/agent-code";
import { openai } from "@ai-sdk/openai";

const agent = createAgent({
  // Required: Vercel AI SDK model instance
  model: openai("gpt-4o"),

  // Working directory for file operations (default: process.cwd())
  cwd: "/path/to/project",

  // System prompt (has sensible defaults for coding)
  systemPrompt: "You are a helpful coding assistant...",

  // Temperature (0-1, default varies by provider)
  temperature: 0.7,

  // Max output tokens (default: 32000)
  maxOutputTokens: 16000,

  // Max steps for tool-calling loops (default: 50)
  maxSteps: 25,

  // Reasoning level for reasoning models (default: "off")
  reasoningLevel: "high", // "off" | "low" | "medium" | "high"

  // Tools to enable (defaults to empty; use defaultCodingTools for all)
  tools: defaultCodingTools,
  // Or use a subset for read-only operations:
  // tools: [readTool, grepTool, globTool],
});
```

## Session Management
```ts
// Sessions are managed automatically;
// just use a unique session ID for each conversation.

// List all sessions
const sessions = await agent.listSessions();

// Delete a session
await agent.deleteSession("session-id");

// Get the current session's messages
const messages = agent.getMessages();

// Branch from the current point (for undo/redo)
const branchId = await agent.branch("checkpoint before refactor");
```
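Since session IDs come from the host application, a minimal scheme (an assumption about the embedding app, not part of the library) keeps one stable ID per conversation:

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical helper: one stable session ID per (user, conversation) pair,
// suitable for passing to agent.send / agent.chat.
const sessionIds = new Map<string, string>();

function sessionFor(userId: string, conversation: string): string {
  const key = `${userId}:${conversation}`;
  let id = sessionIds.get(key);
  if (!id) {
    id = `${key}:${randomUUID()}`;
    sessionIds.set(key, id);
  }
  return id;
}
```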
## Events

The streaming `chat()` method yields these events:
```ts
type AgentEvent =
  // Text streaming
  | { type: "text-start" }
  | { type: "text-delta"; text: string }
  | { type: "text-end" }
  // Reasoning (for o3-mini, etc.)
  | { type: "reasoning-start"; id: string }
  | { type: "reasoning-delta"; id: string; text: string }
  | { type: "reasoning-end"; id: string }
  // Tool execution
  | { type: "tool-start"; toolName: string; toolCallId: string; input: unknown }
  | { type: "tool-result"; toolName: string; toolCallId: string; result: unknown }
  | { type: "tool-error"; toolName: string; toolCallId: string; error: string }
  // Progress
  | { type: "step-start"; step: number; maxSteps: number }
  | { type: "step-finish"; step: number; usage?: TokenUsage; finishReason?: string }
  // Completion
  | { type: "error"; error: Error }
  | { type: "complete"; usage?: TokenUsage };
```
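As a sketch, a consumer can fold the event stream into a summary. The reducer below declares local copies of the two event members it uses, matching the union above (in real code, import the types from the package if they are exported):

```typescript
// Local mirror of the event members this reducer consumes.
type StreamEvent =
  | { type: "text-delta"; text: string }
  | { type: "tool-result"; toolName: string };

// Accumulate streamed text and the names of tools that ran.
function summarize(events: StreamEvent[]): { text: string; toolsUsed: string[] } {
  let text = "";
  const toolsUsed: string[] = [];
  for (const ev of events) {
    if (ev.type === "text-delta") {
      text += ev.text;
    } else {
      toolsUsed.push(ev.toolName);
    }
  }
  return { text, toolsUsed };
}
```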
## Examples

See the `examples/` folder for working examples:
| # | File | What it shows |
|---|------|---------------|
| 01 | 01-basic.ts | Basic streaming chat with coding tools |
| 02 | 02-reasoning.ts | Using reasoning models (o4-mini) |
| 03 | 03-read-only.ts | Read-only agent (no bash/edit/write) |
| 04 | 04-custom-tools.ts | Adding your own tools |
```bash
cd packages/agent-code/examples
cp .env.example .env
# Add your OPENAI_API_KEY
npx tsx 01-basic.ts
```

## Architecture
This library is built on:
- @cuylabs/agent-core - Core agent infrastructure (sessions, streaming, tool execution)
- Vercel AI SDK - LLM interaction and multi-provider support
- Zod - Parameter validation for tools
## License
Apache-2.0
