
@avasis-ai/synth

v0.6.0

Published

Synthesize any LLM into a production-grade AI agent. Battle-tested agentic patterns, model-agnostic, TypeScript-first.

Readme



Why Synth

Bundle Size

Performance

Measured on an Apple M4 Pro, Node 22. Run npx tsx tests/benchmarks.ts to reproduce.

Quick Start

npm install @avasis-ai/synth @anthropic-ai/sdk zod
import { Agent, BashTool, FileReadTool } from "@avasis-ai/synth";
import { AnthropicProvider } from "@avasis-ai/synth/llm";

const agent = new Agent({
  model: new AnthropicProvider({ apiKey: process.env.ANTHROPIC_API_KEY! }),
  tools: [BashTool, FileReadTool],
});

for await (const event of agent.run("Create a Python todo app with tests")) {
  if (event.type === "text") process.stdout.write(event.text);
  if (event.type === "tool_use") console.log(`\n  [${event.name}]`);
}

Ollama (zero API costs)

npm install @avasis-ai/synth zod
import { Agent, BashTool } from "@avasis-ai/synth";
import { OllamaProvider } from "@avasis-ai/synth/llm";

const agent = new Agent({
  model: new OllamaProvider({ model: "qwen3:32b" }),
  tools: [BashTool],
  disableTitle: true,
});

const result = await agent.chat("List all TypeScript files in this project");
console.log(result.text);

Architecture

@avasis-ai/synth/
|-- Agent                      High-level API (run, chat, structured)
|   |-- agentLoop()            Core: think -> act -> observe -> repeat
|   |   |-- Provider           Any LLM (Anthropic, OpenAI, Ollama, custom)
|   |   |-- ToolRegistry       Lookup, dedup, case-insensitive search
|   |   |-- Orchestrator       Concurrent reads, serial writes
|   |   |-- ContextManager     Compaction + token-weighted pruning
|   |   |-- PermissionEngine   Allow/deny/ask with pattern matching
|   |-- structured<T>()        JSON extraction with Zod + retry
|   |-- structuredViaTool<T>() Structured output via tool injection
|   |-- asTool()               Sub-agent delegation
|   |-- fork()                 Session forking
|   |-- hooks / memory / cost  Lifecycle hooks, persistence, tracking
|
|-- Tools                      Bash, FileRead, FileWrite, FileEdit, Glob, Grep, WebFetch
|-- Fuzzy Edit                 9-strategy find-and-replace engine
|-- LLM Providers              Anthropic, OpenAI, Ollama (raw fetch), custom
|-- CLI                        synth init / synth run
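To give a flavor of what one fuzzy-edit strategy might look like, here is a hypothetical whitespace-normalizing matcher. This is an illustrative sketch, not the library's actual implementation; the real engine layers nine strategies.

```typescript
// Collapse runs of whitespace so minor indentation drift still matches.
function normalizeWs(s: string): string {
  return s.replace(/\s+/g, " ").trim();
}

// Scan every contiguous line span; replace the first span whose
// whitespace-normalized form equals the normalized search text.
function fuzzyReplace(source: string, find: string, replace: string): string | null {
  const target = normalizeWs(find);
  const lines = source.split("\n");
  for (let start = 0; start < lines.length; start++) {
    for (let end = start; end < lines.length; end++) {
      const span = lines.slice(start, end + 1).join("\n");
      if (normalizeWs(span) === target) {
        lines.splice(start, end - start + 1, replace);
        return lines.join("\n");
      }
    }
  }
  return null; // this strategy found no match; the engine would try the next one
}
```

A strategy like this handles the common case where the model reproduces code with slightly different indentation than the file on disk.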

Core Concepts

Structured Output

import { z } from "zod";

const schema = z.object({ name: z.string(), age: z.number() });

// JSON extraction with retry
const viaJson = await agent.structured("Extract the person's info", schema);

// Tool injection (higher reliability)
const viaTool = await agent.structuredViaTool("Extract the person's info", schema);

Sub-Agent Delegation

const researchTool = await researchAgent.asTool({
  name: "research",
  description: "Research a topic and return a summary",
  allowSubAgents: false,
});

const coderAgent = new Agent({
  model: provider,
  tools: [BashTool, FileWriteTool, researchTool],
});

Custom Tools

import { defineTool } from "@avasis-ai/synth";
import { z } from "zod";

const WeatherTool = defineTool({
  name: "get_weather",
  description: "Get current weather for a city",
  inputSchema: z.object({
    city: z.string().describe("City name"),
    units: z.enum(["celsius", "fahrenheit"]).optional(),
  }),
  isReadOnly: true,
  isConcurrencySafe: true,
  execute: async ({ city, units = "celsius" }) => {
    const res = await fetch(`https://wttr.in/${city}?format=j1`);
    const data = await res.json();
    return `${city}: ${data.current_condition[0].weatherDesc[0].value}`;
  },
});

Context Management

const agent = new Agent({
  model: provider,
  tools: [BashTool],
  context: {
    maxTokens: 200_000,
    compactThreshold: 0.85,
    maxOutputTokens: 16_384,
  },
});

Multi-layer compaction: old messages are snipped, history is summarized via compaction, and large tool outputs are compressed with token-weighted pruning. All of it is automatic -- the loop never crashes from context overflow.
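As a rough sketch of when compaction would trigger under the config above (hypothetical; the library's internal heuristics are more involved):

```typescript
// Hypothetical model of the compaction trigger: compact once token usage
// crosses compactThreshold * maxTokens, e.g. 0.85 * 200_000 = 170_000.
interface ContextConfig {
  maxTokens: number;
  compactThreshold: number; // fraction of maxTokens
}

function shouldCompact(usedTokens: number, cfg: ContextConfig): boolean {
  return usedTokens >= cfg.maxTokens * cfg.compactThreshold;
}
```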

Tool Orchestration

Read-only tools run concurrently (up to 10 in parallel); write tools run serially. The orchestrator partitions each batch automatically.
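Conceptually, each batch of tool calls is split on the read-only flag. A minimal sketch of that partitioning (the `isReadOnly` name mirrors the tool definitions above; the real orchestrator also schedules and caps concurrency):

```typescript
// Split a batch of tool calls into a concurrent (read-only) group
// and a serial (write) group, as the orchestrator section describes.
interface ToolCall {
  name: string;
  isReadOnly: boolean;
}

function partition(calls: ToolCall[]): { concurrent: ToolCall[]; serial: ToolCall[] } {
  return {
    concurrent: calls.filter((c) => c.isReadOnly),
    serial: calls.filter((c) => !c.isReadOnly),
  };
}
```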

Permissions

const agent = new Agent({
  model: provider,
  tools: [BashTool, FileReadTool, FileWriteTool],
  permissions: {
    allowedTools: ["file_read", "glob", "grep"],
    deniedTools: ["bash"],
    defaultAction: "deny",
  },
});
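A plausible resolution order for the config shape above, sketched as a plain function (hypothetical; the real engine adds pattern matching and interactive "ask" prompts):

```typescript
// Hypothetical allow/deny resolution: explicit deny wins, then explicit
// allow, then the default action. Mirrors the permissions config shape.
type Action = "allow" | "deny" | "ask";

interface Permissions {
  allowedTools?: string[];
  deniedTools?: string[];
  defaultAction?: Action;
}

function resolve(tool: string, p: Permissions): Action {
  if (p.deniedTools?.includes(tool)) return "deny"; // deny always wins
  if (p.allowedTools?.includes(tool)) return "allow";
  return p.defaultAction ?? "ask";
}
```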

Model Portability

import { AnthropicProvider, OpenAIProvider, OllamaProvider } from "@avasis-ai/synth/llm";

// Claude / GPT / Ollama -- swap freely
const claudeAgent = new Agent({ model: claude, tools: [...] });
const gptAgent = new Agent({ model: gpt, tools: [...] });
const ollamaAgent = new Agent({ model: ollama, tools: [...] });

Built-in Tools

| Tool | Description | Read-Only | Concurrent |
|------|-------------|:---------:|:----------:|
| BashTool | Shell commands with timeout + truncation | | |
| FileReadTool | File reading with line numbers + pagination | Yes | Yes |
| FileWriteTool | File creation with recursive mkdir | | |
| FileEditTool | Fuzzy search-and-replace (9 strategies) | | |
| GlobTool | File pattern matching, sorted by mtime | Yes | Yes |
| GrepTool | Regex content search | Yes | Yes |
| WebFetchTool | URL fetch with response truncation | Yes | Yes |

Origins

Synth is a clean-room reimplementation. In early 2026, Claude Code's complete source (~2,200 files, 49MB of TypeScript) was published on GitHub, and dozens of raw copies appeared. Synth is different -- every line was written from scratch after analyzing the patterns in Claude Code, OpenAI Codex (678K LOC Rust), claw-code-parity (71K LOC Rust), and MiroFish (85K LOC Python).

| | Raw Reuploads | Synth |
|---|:---:|:---:|
| Files | 2,200 | 40 |
| Lines of code | ~150,000 | ~3,700 |
| Usable as npm package | No | Yes |
| Works with any LLM | No | Yes |
| Legal to use | No | Yes (MIT) |
| Zero runtime deps | No | Yes |

Installation

npm install @avasis-ai/synth zod                    # Core + Ollama
npm install @avasis-ai/synth @anthropic-ai/sdk zod  # + Claude
npm install @avasis-ai/synth openai zod             # + GPT

API

Agent

const agent = new Agent({
  model: Provider,
  tools?: Tool[],
  systemPrompt?: string,
  maxTurns?: number,           // default: 100
  disableTitle?: boolean,
  context?: {
    maxTokens?: number,        // default: 200,000
    compactThreshold?: number, // default: 0.85
    maxOutputTokens?: number,  // default: 16,384
  },
  permissions?: {
    allowedTools?: string[],
    deniedTools?: string[],
    defaultAction?: "allow" | "deny" | "ask",
  },
});
  • agent.run(prompt) -- async generator yielding events
  • agent.chat(prompt) -- returns { text, usage, cost }
  • agent.structured(prompt, schema) -- typed JSON extraction
  • agent.structuredViaTool(prompt, schema) -- via tool injection
  • agent.asTool() -- wrap as callable tool for delegation
  • agent.fork() -- session forking
  • agent.addTool(tool) -- runtime tool addition

Tests

npm test

95 tests, 8 suites, <2 seconds, zero API calls (fully mocked).

Contributing

  1. Fork
  2. Branch (git checkout -b feature/my-feature)
  3. Commit (git commit -am 'feat: add my feature')
  4. Push (git push origin feature/my-feature)
  5. PR

License

MIT -- AVASIS AI