# @avasis-ai/synth
v0.6.0
Synthesize any LLM into a production-grade AI agent. Battle-tested agentic patterns, model-agnostic, TypeScript-first.
```ts
import { Agent, BashTool, FileReadTool } from "@avasis-ai/synth";
import { AnthropicProvider } from "@avasis-ai/synth/llm";

const agent = new Agent({
  model: new AnthropicProvider({ apiKey: process.env.ANTHROPIC_API_KEY! }),
  tools: [BashTool, FileReadTool],
});

for await (const event of agent.run("Create a Python todo app with tests")) {
  if (event.type === "text") process.stdout.write(event.text);
  if (event.type === "tool_use") console.log(`\n  [${event.name}]`);
}
```

## Why Synth
### Bundle Size

### Performance

Measured on Apple M4 Pro, Node 22. Run `npx tsx tests/benchmarks.ts` to reproduce.
## Quick Start
```sh
npm install @avasis-ai/synth @anthropic-ai/sdk zod
```

```ts
import { Agent, BashTool, FileReadTool } from "@avasis-ai/synth";
import { AnthropicProvider } from "@avasis-ai/synth/llm";

const agent = new Agent({
  model: new AnthropicProvider({ apiKey: process.env.ANTHROPIC_API_KEY! }),
  tools: [BashTool, FileReadTool],
});

for await (const event of agent.run("Create a Python todo app with tests")) {
  if (event.type === "text") process.stdout.write(event.text);
  if (event.type === "tool_use") console.log(`\n  [${event.name}]`);
}
```

### Ollama (zero API costs)
```sh
npm install @avasis-ai/synth zod
```

```ts
import { Agent, BashTool } from "@avasis-ai/synth";
import { OllamaProvider } from "@avasis-ai/synth/llm";

const agent = new Agent({
  model: new OllamaProvider({ model: "qwen3:32b" }),
  tools: [BashTool],
  disableTitle: true,
});

const result = await agent.chat("List all TypeScript files in this project");
console.log(result.text);
```

## Architecture
```
@avasis-ai/synth/
|-- Agent                      High-level API (run, chat, structured)
|   |-- agentLoop()            Core: think -> act -> observe -> repeat
|   |   |-- Provider           Any LLM (Anthropic, OpenAI, Ollama, custom)
|   |   |-- ToolRegistry       Lookup, dedup, case-insensitive search
|   |   |-- Orchestrator       Concurrent reads, serial writes
|   |   |-- ContextManager     Compaction + token-weighted pruning
|   |   |-- PermissionEngine   Allow/deny/ask with pattern matching
|   |-- structured<T>()        JSON extraction with Zod + retry
|   |-- structuredViaTool<T>() Structured output via tool injection
|   |-- asTool()               Sub-agent delegation
|   |-- fork()                 Session forking
|   |-- hooks / memory / cost  Lifecycle hooks, persistence, tracking
|
|-- Tools          Bash, FileRead, FileWrite, FileEdit, Glob, Grep, WebFetch
|-- Fuzzy Edit     9-strategy find-and-replace engine
|-- LLM Providers  Anthropic, OpenAI, Ollama (raw fetch), custom
|-- CLI            synth init / synth run
```

## Core Concepts
### Structured Output

```ts
import { z } from "zod";

const schema = z.object({ name: z.string(), age: z.number() });

// JSON extraction with retry
const person = await agent.structured("Extract the person's info", schema);

// Tool injection (higher reliability)
const personViaTool = await agent.structuredViaTool("Extract the person's info", schema);
```

### Sub-Agent Delegation
```ts
const researchTool = await researchAgent.asTool({
  name: "research",
  description: "Research a topic and return a summary",
  allowSubAgents: false,
});

const coderAgent = new Agent({
  model: provider,
  tools: [BashTool, FileWriteTool, researchTool],
});
```

### Custom Tools
```ts
import { defineTool } from "@avasis-ai/synth";
import { z } from "zod";

const WeatherTool = defineTool({
  name: "get_weather",
  description: "Get current weather for a city",
  inputSchema: z.object({
    city: z.string().describe("City name"),
    units: z.enum(["celsius", "fahrenheit"]).optional(),
  }),
  isReadOnly: true,
  isConcurrencySafe: true,
  execute: async ({ city, units = "celsius" }) => {
    const res = await fetch(`https://wttr.in/${city}?format=j1`);
    const data = await res.json();
    return `${city}: ${data.current_condition[0].weatherDesc[0].value}`;
  },
});
```

### Context Management
```ts
const agent = new Agent({
  model: provider,
  tools: [BashTool],
  context: {
    maxTokens: 200_000,
    compactThreshold: 0.85,
    maxOutputTokens: 16_384,
  },
});
```

Multi-layer compaction: snip old messages, summarize them with a compaction pass, and compress large tool outputs with token-weighted pruning. All of this is automatic -- the loop never crashes from context overflow.
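The pruning layer can be sketched independently of the library. A minimal, self-contained illustration (the message shape and the rough 4-characters-per-token estimate are assumptions for the sketch, not Synth's internals):

```typescript
// Sketch: token-weighted pruning of a message history.
type Msg = { role: "user" | "assistant" | "tool"; text: string };

// Rough token estimate: ~4 characters per token.
const estimateTokens = (m: Msg): number => Math.ceil(m.text.length / 4);

// Drop the oldest messages until the history fits the budget,
// always keeping at least the most recent message.
function prune(history: Msg[], maxTokens: number): Msg[] {
  const kept = [...history];
  let total = kept.reduce((n, m) => n + estimateTokens(m), 0);
  while (total > maxTokens && kept.length > 1) {
    total -= estimateTokens(kept.shift()!);
  }
  return kept;
}
```

The real ContextManager layers summarization on top of this, so pruned context is condensed rather than simply lost.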
### Tool Orchestration
Read-only tools run concurrently (up to 10 in parallel); write tools run serially. The orchestrator partitions each batch automatically.
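The partition-and-dispatch idea can be shown in a self-contained sketch (the `ToolCall` shape here is assumed for illustration, not the library's actual type):

```typescript
// Sketch: partition tool calls into a concurrent read batch
// and a serial write queue.
type ToolCall = { name: string; readOnly: boolean; run: () => Promise<string> };

async function orchestrate(calls: ToolCall[], maxParallel = 10): Promise<string[]> {
  const reads = calls.filter((c) => c.readOnly);
  const writes = calls.filter((c) => !c.readOnly);
  const results: string[] = [];
  // Read-only calls run concurrently, in bounded batches.
  for (let i = 0; i < reads.length; i += maxParallel) {
    const batch = reads.slice(i, i + maxParallel).map((c) => c.run());
    results.push(...(await Promise.all(batch)));
  }
  // Write calls run one at a time, in order, so they never race.
  for (const c of writes) results.push(await c.run());
  return results;
}
```

Serializing writes is the conservative choice: two concurrent edits to the same file can corrupt it, while concurrent reads are always safe.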
### Permissions
```ts
const agent = new Agent({
  model: provider,
  tools: [BashTool, FileReadTool, FileWriteTool],
  permissions: {
    allowedTools: ["file_read", "glob", "grep"],
    deniedTools: ["bash"],
    defaultAction: "deny",
  },
});
```

### Model Portability
```ts
import { AnthropicProvider, OpenAIProvider, OllamaProvider } from "@avasis-ai/synth/llm";

// Claude / GPT / Ollama -- swap freely
const claudeAgent = new Agent({ model: claude, tools: [/* ... */] });
const gptAgent = new Agent({ model: gpt, tools: [/* ... */] });
const ollamaAgent = new Agent({ model: ollama, tools: [/* ... */] });
```

## Built-in Tools
| Tool | Description | Read-Only | Concurrent |
|------|-------------|:---------:|:----------:|
| BashTool | Shell commands with timeout + truncation | No | No |
| FileReadTool | File reading with line numbers + pagination | Yes | Yes |
| FileWriteTool | File creation with recursive mkdir | No | No |
| FileEditTool | Fuzzy search-and-replace (9 strategies) | No | No |
| GlobTool | File pattern matching, sorted by mtime | Yes | Yes |
| GrepTool | Regex content search | Yes | Yes |
| WebFetchTool | URL fetch with response truncation | Yes | Yes |
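FileEditTool's nine matching strategies are internal to the library, but the layered-fallback idea can be sketched with the first two obvious layers: an exact match, then a whitespace-insensitive match (this sketch is illustrative, not Synth's actual implementation):

```typescript
// Sketch: layered find-and-replace in the spirit of a fuzzy edit engine.
// Returns the edited source, or null if no strategy matched.
function fuzzyReplace(source: string, find: string, replace: string): string | null {
  // Strategy 1: exact match.
  if (source.includes(find)) return source.replace(find, replace);
  // Strategy 2: whitespace-insensitive match -- collapse runs of
  // whitespace in the needle into a flexible \s+ regex.
  const pattern = find
    .trim()
    .split(/\s+/)
    .map((part) => part.replace(/[.*+?^${}()|[\]\\]/g, "\\$&")) // escape regex chars
    .join("\\s+");
  const re = new RegExp(pattern);
  if (re.test(source)) return source.replace(re, replace);
  return null; // No strategy matched.
}
```

Returning `null` instead of silently editing the wrong span matters: an agent can then retry with a more precise `find` string rather than corrupt the file.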
## Origins
Synth is a clean-room reimplementation. In early 2026, Claude Code's complete source (~2,200 files, 49MB TypeScript) was published on GitHub. Dozens of raw copies appeared. Synth is different -- every line written from scratch after analyzing the patterns from Claude Code, OpenAI Codex (678K LOC Rust), claw-code-parity (71K LOC Rust), and MiroFish (85K LOC Python).
| | Raw Reuploads | Synth |
|---|:---:|:---:|
| Files | 2,200 | 40 |
| Lines of code | ~150,000 | ~3,700 |
| Usable as npm package | No | Yes |
| Works with any LLM | No | Yes |
| Legal to use | No | Yes (MIT) |
| Zero runtime deps | No | Yes |
## Installation

```sh
npm install @avasis-ai/synth zod                    # Core + Ollama
npm install @avasis-ai/synth @anthropic-ai/sdk zod  # + Claude
npm install @avasis-ai/synth openai zod             # + GPT
```

## API
### Agent

```ts
const agent = new Agent({
  model: Provider,
  tools?: Tool[],
  systemPrompt?: string,
  maxTurns?: number,            // default: 100
  disableTitle?: boolean,
  context?: {
    maxTokens?: number,         // default: 200,000
    compactThreshold?: number,  // default: 0.85
    maxOutputTokens?: number,   // default: 16,384
  },
  permissions?: {
    allowedTools?: string[],
    deniedTools?: string[],
    defaultAction?: "allow" | "deny" | "ask",
  },
});
```

- `agent.run(prompt)` -- async generator yielding events
- `agent.chat(prompt)` -- returns `{ text, usage, cost }`
- `agent.structured(prompt, schema)` -- typed JSON extraction
- `agent.structuredViaTool(prompt, schema)` -- structured output via tool injection
- `agent.asTool()` -- wrap the agent as a callable tool for delegation
- `agent.fork()` -- session forking
- `agent.addTool(tool)` -- add a tool at runtime
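Because `run()` is an async generator, its event stream can be consumed, filtered, or collected with plain `for await`. A self-contained sketch using a mock generator in place of `agent.run()` (the event names follow the examples above; the stream contents here are invented):

```typescript
// Sketch: consuming a run()-style event stream.
type AgentEvent =
  | { type: "text"; text: string }
  | { type: "tool_use"; name: string };

// Mock stream standing in for agent.run(prompt).
async function* mockRun(): AsyncGenerator<AgentEvent> {
  yield { type: "text", text: "Creating app" };
  yield { type: "tool_use", name: "bash" };
  yield { type: "text", text: "Done" };
}

// Collect text chunks and tool names from any such stream.
async function collect(stream: AsyncGenerator<AgentEvent>) {
  const text: string[] = [];
  const tools: string[] = [];
  for await (const event of stream) {
    if (event.type === "text") text.push(event.text);
    if (event.type === "tool_use") tools.push(event.name);
  }
  return { text: text.join(""), tools };
}
```

The same pattern works for streaming to a terminal, a websocket, or a UI component: the generator decouples the agent loop from whatever renders its output.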
## Tests

```sh
npm test
```

95 tests, 8 suites, <2 seconds, zero API calls (fully mocked).
## Contributing

- Fork
- Branch (`git checkout -b feature/my-feature`)
- Commit (`git commit -am 'feat: add my feature'`)
- Push (`git push origin feature/my-feature`)
- PR
## License

MIT -- AVASIS AI
