
@synkro/agents

v0.4.0

Published

AI agent orchestration for @synkro/core — LLM-powered agents with tools, memory, and multi-agent patterns

Downloads

608

Readme

@synkro/agents

AI agent orchestration for Synkro. Build LLM-powered agents with tools, memory, and multi-agent patterns — all on top of Synkro's event-driven workflow engine.

Features

  • ReAct Loop — Agents reason and act in a loop: call LLM, execute tools, repeat until done
  • Tool Execution — Define typed tools with JSON Schema parameters; agents call them automatically
  • Provider Agnostic — Built-in adapters for OpenAI and Anthropic; implement ModelProvider for any LLM
  • Conversation Memory — Redis-backed message history via Synkro's existing TransportManager
  • Synkro Integration — agent.asHandler() bridges any agent into Synkro's event system with locking, dedup, retries, and dead letter queue for free
  • Token Tracking — Built-in usage accumulation with tokenBudget hard stops
  • Safety Guardrails — maxIterations prevents infinite tool loops; tokenBudget caps API spend
  • Zero Dependencies — Providers use native fetch; memory uses Synkro's existing transport

Installation

npm install @synkro/agents @synkro/core

Quick Start

Single Agent

import { createAgent, createTool, OpenAIProvider } from "@synkro/agents";

const searchTool = createTool({
  name: "web_search",
  description: "Search the web for information",
  parameters: {
    type: "object",
    properties: {
      query: { type: "string", description: "Search query" },
    },
    required: ["query"],
  },
  async execute(input) {
    const res = await fetch(`https://api.search.com?q=${encodeURIComponent(input.query)}`);
    return res.json();
  },
});

const agent = createAgent({
  name: "researcher",
  systemPrompt: "You are a research assistant. Use web_search to find information.",
  provider: new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! }),
  model: { model: "gpt-4o", temperature: 0.3 },
  tools: [searchTool],
  maxIterations: 5,
});

const result = await agent.run("What are the latest trends in AI?");
console.log(result.output);
console.log(result.tokenUsage);

With Anthropic

import { createAgent, AnthropicProvider } from "@synkro/agents";

const agent = createAgent({
  name: "writer",
  systemPrompt: "You are a technical writer.",
  provider: new AnthropicProvider({ apiKey: process.env.ANTHROPIC_API_KEY! }),
  model: { model: "claude-sonnet-4-20250514" },
});

const result = await agent.run("Write a summary of event-driven architecture.");

With Gemini

import { createAgent, GeminiProvider } from "@synkro/agents";

const agent = createAgent({
  name: "analyst",
  systemPrompt: "You are a data analyst.",
  provider: new GeminiProvider({ apiKey: process.env.GEMINI_API_KEY! }),
  model: { model: "gemini-2.0-flash" },
});

const result = await agent.run("Summarize the key metrics from this quarter.");

As a Synkro Event Handler

Bridge an agent into Synkro's event system. The agent automatically gets distributed locking, deduplication, retries, and dead letter queue support.

import { Synkro } from "@synkro/core";
import { createAgent, OpenAIProvider } from "@synkro/agents";

const agent = createAgent({
  name: "support-agent",
  systemPrompt: "You answer customer support questions.",
  provider: new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! }),
  model: { model: "gpt-4o" },
  tools: [lookupOrderTool, checkInventoryTool],
});

const synkro = await Synkro.start({
  transport: "redis",
  connectionUrl: "redis://localhost:6379",
  events: [
    { type: "support:request", handler: agent.asHandler() },
  ],
});

// Publish triggers the agent with full Synkro guarantees
await synkro.publish("support:request", { input: "Where is my order #12345?" });

The handler reads payload.input as the agent's input string. If payload.input is not a string, the entire payload is JSON-serialized as input. The agent writes its results back via ctx.setPayload():

{
  agentOutput: "Your order #12345 is...",
  agentStatus: "completed",
  agentTokenUsage: { promptTokens: 150, completionTokens: 80, totalTokens: 230 },
  agentToolCalls: 2,
}

With Conversation Memory

Persist conversation history across runs using Redis (via Synkro's transport layer).

import { Synkro } from "@synkro/core";
import { createAgent, OpenAIProvider, ConversationMemory } from "@synkro/agents";

const synkro = await Synkro.start({
  transport: "redis",
  connectionUrl: "redis://localhost:6379",
});

const memory = new ConversationMemory({
  transport: synkro.transport, // reuses existing Redis connection
  maxMessages: 50,
  ttlSeconds: 3600, // 1 hour
});

const agent = createAgent({
  name: "assistant",
  systemPrompt: "You are a helpful assistant with memory.",
  provider: new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! }),
  model: { model: "gpt-4o" },
  memory,
});

// First run
await agent.run("My name is Alice.", { requestId: "session-1" });

// Second run — agent remembers the conversation
const result = await agent.run("What's my name?", { requestId: "session-1" });
// result.output → "Your name is Alice."

API

createAgent(config): Agent

Creates an agent instance.

type AgentConfig = {
  name: string;
  description?: string;
  systemPrompt: string;
  provider: ModelProvider;
  model: ModelOptions;
  tools?: Tool[];
  memory?: AgentMemory;
  maxIterations?: number;  // default: 10
  tokenBudget?: number;    // max total tokens before stopping
  retry?: RetryConfig;     // reuses @synkro/core's RetryConfig
  onTokenUsage?: (usage: TokenUsage) => void;
};
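For illustration, a configuration sketch combining the optional budget and usage fields. The values here are arbitrary; the retry field is omitted because its shape is defined by @synkro/core's RetryConfig.

```typescript
// Sketch: an agent with a hard token cap and a usage callback.
// Budget and iteration values are illustrative, not recommendations.
import { createAgent, OpenAIProvider } from "@synkro/agents";

const agent = createAgent({
  name: "budgeted-researcher",
  systemPrompt: "You are a concise research assistant.",
  provider: new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! }),
  model: { model: "gpt-4o", maxTokens: 1024 },
  maxIterations: 8,
  tokenBudget: 20_000, // run ends with status "token_budget_exceeded" past this
  onTokenUsage: (usage) => {
    console.log(`tokens so far: ${usage.totalTokens}`);
  },
});
```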

agent.run(input, options?): Promise<AgentRunResult>

Runs the agent's ReAct loop with the given input string.

type AgentRunOptions = {
  requestId?: string;  // correlation ID (auto-generated if omitted)
  payload?: unknown;   // additional context passed to tool execution
};

type AgentRunResult = {
  agentName: string;
  runId: string;
  output: string;
  messages: Message[];
  toolCalls: ToolResult[];
  tokenUsage: TokenUsage;
  status: "completed" | "failed" | "max_iterations" | "token_budget_exceeded";
};

agent.asHandler(): HandlerFunction

Returns a Synkro-compatible HandlerFunction that can be used with synkro.on(), event definitions, or workflow steps.

createTool(tool): Tool

Creates a typed tool definition.

type Tool<TInput, TOutput> = {
  name: string;
  description: string;
  parameters: Record<string, unknown>;  // JSON Schema
  execute: (input: TInput, ctx: AgentContext) => Promise<TOutput>;
};

Tools receive an AgentContext which extends Synkro's HandlerCtx with agent-specific fields (agentName, runId, tokenUsage).
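As a sketch of that context flow, here is a tool whose execute tags its output with the calling agent's run metadata. The types are trimmed to only the fields this example touches, and the tool is invoked directly with a stub context rather than through an agent:

```typescript
// Minimal sketch: a tool whose execute reads agent metadata from ctx.
// AgentContext is reduced to the fields used here.
type TokenUsage = { promptTokens: number; completionTokens: number; totalTokens: number };
type AgentContext = { agentName: string; runId: string; tokenUsage: TokenUsage };

const echoTool = {
  name: "echo",
  description: "Echoes its input, annotated with the calling agent and run.",
  parameters: {
    type: "object",
    properties: { text: { type: "string" } },
    required: ["text"],
  },
  async execute(input: { text: string }, ctx: AgentContext) {
    return { text: input.text, agent: ctx.agentName, run: ctx.runId };
  },
};

// Invoking directly with a stub context:
const out = await echoTool.execute(
  { text: "hello" },
  { agentName: "demo", runId: "run-1", tokenUsage: { promptTokens: 0, completionTokens: 0, totalTokens: 0 } },
);
console.log(out); // { text: "hello", agent: "demo", run: "run-1" }
```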

createDebate(config): { run, asHandler }

Creates a debate orchestration where multiple agents collaborate by discussing a topic over several rounds.

import { createAgent, createDebate, OpenAIProvider } from "@synkro/agents";

const optimist = createAgent({
  name: "optimist",
  systemPrompt: "You argue in favor of the topic, highlighting benefits and opportunities.",
  provider: new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! }),
  model: { model: "gpt-4o" },
});

const critic = createAgent({
  name: "critic",
  systemPrompt: "You challenge assumptions and highlight risks and downsides.",
  provider: new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! }),
  model: { model: "gpt-4o" },
});

const moderator = createAgent({
  name: "moderator",
  systemPrompt: "You are a neutral moderator. Frame debates clearly and synthesize balanced conclusions.",
  provider: new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! }),
  model: { model: "gpt-4o" },
});

const debate = createDebate({
  name: "tech-debate",
  participants: [optimist, critic],
  moderator,       // optional: frames the topic and synthesizes conclusion
  maxRounds: 3,    // default: 3
});

const result = await debate.run("Should we adopt microservices?");
console.log(result.output);    // moderator's synthesis (or last round output if no moderator)
console.log(result.rounds);    // full round-by-round contributions
console.log(result.tokenUsage);

type DebateConfig = {
  name: string;
  participants: Agent[];
  maxRounds?: number;    // default: 3
  moderator?: Agent;     // optional
  onTokenUsage?: (usage: TokenUsage) => void;
};

type DebateResult = {
  topic: string;
  rounds: DebateRound[];
  synthesis: string | undefined;
  output: string;
  tokenUsage: TokenUsage;
  status: "completed" | "failed";
};

Each round, every participant sees the full transcript of all previous contributions. The moderator (if provided) speaks first to frame the debate and last to synthesize the conclusion.

OpenAIProvider

const provider = new OpenAIProvider({
  apiKey: "sk-...",
  baseUrl: "https://api.openai.com/v1",  // optional, for proxies or compatible APIs
});

AnthropicProvider

const provider = new AnthropicProvider({
  apiKey: "sk-ant-...",
  baseUrl: "https://api.anthropic.com/v1",  // optional
});

GeminiProvider

const provider = new GeminiProvider({
  apiKey: "AIza...",
  baseUrl: "https://generativelanguage.googleapis.com/v1beta",  // optional
});

ModelProvider Interface

Implement this interface to use any LLM provider:

interface ModelProvider {
  chat(messages: Message[], options: ModelOptions): Promise<ModelResponse>;
  chatStream?(messages: Message[], options: ModelOptions): AsyncIterable<ModelStreamChunk>;
}
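As a sketch of a conforming implementation (with the relevant types reproduced locally so the example stands alone), here is a canned-response provider, the kind of thing you might use in unit tests. The word-count token estimate is an invention of this example:

```typescript
// Sketch: a canned-response ModelProvider, handy for unit tests.
// Types are copied from the Types section so the example is self-contained.
type Message = { role: "system" | "user" | "assistant" | "tool"; content: string };
type ModelOptions = { model: string; temperature?: number; maxTokens?: number };
type TokenUsage = { promptTokens: number; completionTokens: number; totalTokens: number };
type ModelResponse = {
  content: string;
  usage: TokenUsage;
  finishReason: "stop" | "tool_calls" | "length";
};

class StubProvider {
  constructor(private reply: string) {}

  async chat(messages: Message[], _options: ModelOptions): Promise<ModelResponse> {
    // Rough token estimate: one "token" per whitespace-separated word.
    const promptTokens = messages.reduce((n, m) => n + m.content.split(/\s+/).length, 0);
    const completionTokens = this.reply.split(/\s+/).length;
    return {
      content: this.reply,
      usage: { promptTokens, completionTokens, totalTokens: promptTokens + completionTokens },
      finishReason: "stop",
    };
  }
}

const provider = new StubProvider("stubbed answer");
const res = await provider.chat([{ role: "user", content: "hello there" }], { model: "stub" });
console.log(res.content); // "stubbed answer"
```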

ConversationMemory

Redis-backed conversation memory using Synkro's TransportManager.

const memory = new ConversationMemory({
  transport: transportManager,  // from Synkro instance
  maxMessages: 100,             // default: 100
  ttlSeconds: 86400,            // default: 24 hours
});

AgentMemory Interface

Implement this interface for custom memory backends:

interface AgentMemory {
  addMessage(agentId: string, runId: string, message: Message): Promise<void>;
  getMessages(agentId: string, runId: string): Promise<Message[]>;
  clear(agentId: string, runId: string): Promise<void>;
}
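For example, a minimal in-memory backend satisfying the interface might look like the sketch below. It is suitable for tests or single-process use; unlike ConversationMemory, nothing survives a restart:

```typescript
// Sketch: an in-memory AgentMemory backend keyed by agentId + runId.
type Message = { role: "system" | "user" | "assistant" | "tool"; content: string };

interface AgentMemory {
  addMessage(agentId: string, runId: string, message: Message): Promise<void>;
  getMessages(agentId: string, runId: string): Promise<Message[]>;
  clear(agentId: string, runId: string): Promise<void>;
}

class InMemoryMemory implements AgentMemory {
  private store = new Map<string, Message[]>();

  private key(agentId: string, runId: string): string {
    return `${agentId}:${runId}`;
  }

  async addMessage(agentId: string, runId: string, message: Message): Promise<void> {
    const key = this.key(agentId, runId);
    const messages = this.store.get(key) ?? [];
    messages.push(message);
    this.store.set(key, messages);
  }

  async getMessages(agentId: string, runId: string): Promise<Message[]> {
    return this.store.get(this.key(agentId, runId)) ?? [];
  }

  async clear(agentId: string, runId: string): Promise<void> {
    this.store.delete(this.key(agentId, runId));
  }
}
```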

Types

type Message = {
  role: "system" | "user" | "assistant" | "tool";
  content: string;
  toolCallId?: string;
  toolCalls?: ToolCall[];
};

type ModelOptions = {
  model: string;
  temperature?: number;
  maxTokens?: number;
  tools?: ToolDefinition[];
};

type ModelResponse = {
  content: string;
  toolCalls?: ToolCall[];
  usage: TokenUsage;
  finishReason: "stop" | "tool_calls" | "length";
};

type TokenUsage = {
  promptTokens: number;
  completionTokens: number;
  totalTokens: number;
};

type ToolResult = {
  toolCallId: string;
  name: string;
  result: unknown;
  error?: string;
  durationMs: number;
};

type AgentContext = HandlerCtx & {
  agentName: string;
  runId: string;
  tokenUsage: TokenUsage;
};

License

MIT