
@treppenhaus/chisato

v1.0.4

Lightweight LLM agentic framework for TypeScript

Chisato

GitHub Repository: https://github.com/treppenhaus/chisato

A lightweight, extensible TypeScript framework for building LLM-powered agents with custom actions and pluggable LLM providers.

What's New in Version 1.0.4

  • Flexible Input: AgentLoop.run and Agent.chat now accept string or Message[] history
  • Context Management: Easily inject conversation history or context messages
  • Type Safety: Improved overloads for better TypeScript support
  • Previous features from 1.0.3:
    • Improved termination logic
    • Better context handling
    • AgentLoop System with autonomous task execution
    • Dual LLM Methods: sendAgenticMessage and sendMessage
    • LLM-Driven Decisions
    • Retry System
    • Action Tracking
    • Default Actions: user_output and query_llm

Features

  • Provider Agnostic: Use any LLM service (OpenAI, Anthropic, local models, etc.)
  • Simple API: Easy to use and integrate into your projects
  • Custom Actions: Create your own agent capabilities with a simple interface
  • Automatic Action Handling: Actions are automatically added to system prompts and executed
  • Conversation Loop: Handles multi-turn interactions with automatic action execution
  • Agent Loop: Break down complex tasks into steps and execute them autonomously
  • LLM-Driven: Let the LLM intelligently decide when to use actions
  • Retry Logic: Automatic retries for LLM errors and action failures
  • Full Visibility: Track all actions and retries with callbacks
  • TypeScript First: Full type safety and IntelliSense support

Quick Start

import { Agent, ILLMProvider, IAction, Message } from "chisato";

// 1. Implement your LLM provider
class MyLLMProvider implements ILLMProvider {
  async sendAgenticMessage(
    messages: Message[],
    systemPrompt?: string
  ): Promise<string> {
    // For agentic calls (with actions)
    return await callYourLLM(messages, systemPrompt);
  }

  async sendMessage(
    messages: Message[],
    systemPrompt?: string
  ): Promise<string> {
    // For normal chat (no actions)
    return await callYourLLM(messages, systemPrompt);
  }
}

// 2. Create custom actions
class CalculatorAction implements IAction {
  name = "calculator";
  description = "Perform mathematical calculations";
  parameters = [
    {
      name: "expression",
      type: "string" as const,
      description: "Mathematical expression to evaluate",
      required: true,
    },
  ];

  async execute(params: Record<string, any>): Promise<any> {
    // Note: eval is for demonstration only — never evaluate untrusted input
    const result = eval(params.expression);
    return { result };
  }
}

// 3. Create and configure your agent
const provider = new MyLLMProvider();
const agent = new Agent(provider);

// 4. Register actions
agent.registerAction(new CalculatorAction());

// 5. Start chatting!
const response = await agent.chat("What is 15 * 23?");
console.log(response);

Documentation

Guides

Key Concepts

Two Types of LLM Calls:

  1. Agentic Messages (sendAgenticMessage): Used when the LLM should have access to actions and can decide whether to use them
  2. Normal Messages (sendMessage): Used for simple chat without action capabilities

Retry System:

  • Automatic retry for LLM failures (empty responses, malformed JSON, API errors)
  • Automatic retry for action execution failures
  • Configurable retry limits and backoff strategies
  • Callbacks for monitoring and alerting
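The retry pattern described above can be sketched as a small standalone helper. The names here (withRetry, RetryCallbacks) are illustrative assumptions for this sketch, not chisato's internal API:

```typescript
// Illustration of the retry-with-callbacks pattern, not chisato's internals.
type RetryCallbacks = {
  onRetry?: (attempt: number, error: string) => void;
  onMaxRetries?: (error: string) => void;
};

async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries: number,
  callbacks: RetryCallbacks = {}
): Promise<T> {
  let lastError = "";
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await fn(); // success: hand the result straight back
    } catch (err) {
      lastError = err instanceof Error ? err.message : String(err);
      callbacks.onRetry?.(attempt, lastError); // monitoring hook per attempt
    }
  }
  callbacks.onMaxRetries?.(lastError); // alerting hook after the final failure
  throw new Error(`Failed after ${maxRetries} attempts: ${lastError}`);
}
```

The same shape covers both LLM calls and action executions — only the wrapped function and the retry limit differ.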

Core Concepts

How It Works

  1. You send a message to the agent using agent.chat()
  2. The agent builds a system prompt that includes descriptions of all registered actions
  3. The LLM responds, potentially including action calls in JSON format
  4. The agent parses the response and automatically executes any requested actions
  5. Action results are fed back to the LLM
  6. Steps 3-5 repeat until the LLM provides a final response without actions
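The control flow of steps 1–6 can be sketched as a minimal loop. The JSON shape and helper names below are illustrative assumptions — chisato's actual wire format between agent and LLM is internal to the library:

```typescript
// Minimal sketch of the conversation loop; not chisato's real implementation.
type ActionCall = { action: string; params: Record<string, unknown> };

// Step 4: try to parse an action call out of the LLM's reply.
function parseActionCall(reply: string): ActionCall | null {
  try {
    const parsed = JSON.parse(reply);
    return typeof parsed.action === "string" ? parsed : null;
  } catch {
    return null; // plain text means a final answer, not an action call
  }
}

async function runLoop(
  send: (history: string[]) => Promise<string>,
  execute: (call: ActionCall) => Promise<unknown>,
  userMessage: string,
  maxIterations = 10
): Promise<string> {
  const history = [userMessage];               // step 1: user message
  for (let i = 0; i < maxIterations; i++) {
    const reply = await send(history);         // step 3: LLM responds
    const call = parseActionCall(reply);       // step 4: parse the response
    if (!call) return reply;                   // step 6: final answer, stop
    const result = await execute(call);        // step 4: run the action
    history.push(reply, JSON.stringify(result)); // step 5: feed result back
  }
  throw new Error("maxIterations exceeded");
}
```

This is why the `maxIterations` option exists on Agent: it bounds how many times steps 3–5 can repeat before the loop gives up.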

Creating Custom Actions

Implement the IAction interface:

import { IAction } from "chisato";

class WeatherAction implements IAction {
  name = "get_weather";
  description = "Get current weather for a location";
  parameters = [
    {
      name: "location",
      type: "string" as const,
      description: "City name or coordinates",
      required: true,
    },
  ];

  async execute(params: Record<string, any>): Promise<any> {
    const weather = await fetchWeatherAPI(params.location);
    return {
      temperature: weather.temp,
      condition: weather.condition,
    };
  }
}

agent.registerAction(new WeatherAction());

Agent Loop - Autonomous Task Execution

The AgentLoop class enables autonomous task breakdown and execution:

import { AgentLoop } from "chisato";

const agentLoop = new AgentLoop(provider, {
  includeDefaultActions: true,
  maxSteps: 20,
  maxRetries: 3,
  maxActionRetries: 2,
  onUserOutput: (message) => console.log("Agent:", message),
  onActionExecuted: (action) => console.log("Executed:", action.actionName),
  onActionRetry: (name, attempt, error) =>
    console.log(`Retry ${name}: ${attempt}`),
});

// Register custom actions
agentLoop.registerAction(new SearchAction());
agentLoop.registerAction(new WeatherAction());

// Run a complex task - the LLM decides which actions to use
const result = await agentLoop.run(
  "Search for TypeScript tutorials and summarize"
);

// OR inject history/context
const resultWithContext = await agentLoop.run([
  { role: 'user', content: 'Context: Current location is Berlin.' },
  { role: 'assistant', content: 'Understood.' },
  { role: 'user', content: 'What is the weather like?' }
]);

Retry Configuration

const agentLoop = new AgentLoop(provider, {
  // LLM retry options
  maxRetries: 3, // Retry LLM calls up to 3 times

  onInvalidOutput: (attempt, error, output) => {
    console.log(`LLM retry ${attempt}: ${error}`);
  },

  // Action retry options
  maxActionRetries: 2, // Retry each action up to 2 times

  onActionRetry: (actionName, attempt, error) => {
    console.log(`Action ${actionName} retry ${attempt}: ${error}`);
  },

  onActionMaxRetries: (actionName, error) => {
    console.error(`Action ${actionName} failed permanently: ${error}`);
  },
});

Examples

See the examples directory for complete working examples:

  • basic-usage.ts - Simple agent with a calculator action
  • custom-provider.ts - Example LLM provider implementation
  • agent-loop-example.ts - Comprehensive AgentLoop examples
  • action-retry-example.ts - Demonstrating retry functionality

API Reference

Agent

Main agent class for building conversational agents.

Constructor:

new Agent(provider: ILLMProvider, options?: AgentOptions)

Methods:

  • registerAction(action: IAction): void - Register an action
  • chat(input: string | Message[]): Promise<string> - Send a message or history and get a response
  • getHistory(): Message[] - Get conversation history
  • clearHistory(): void - Clear conversation history

Options:

interface AgentOptions {
  maxIterations?: number; // Maximum conversation loops (default: 10)
  systemPromptPrefix?: string; // Custom system prompt prefix
  maxRetries?: number; // Max LLM retries (default: 3)
  maxActionRetries?: number; // Max action retries (default: 2)
  onInvalidOutput?: (attempt: number, error: string, output: string) => void;
  onActionRetry?: (actionName: string, attempt: number, error: string) => void;
  onActionMaxRetries?: (actionName: string, error: string) => void;
}

AgentLoop

Main class for autonomous task execution with automatic action recognition.

Constructor:

new AgentLoop(provider: ILLMProvider, options?: AgentLoopOptions)

Methods:

  • registerAction(action: IAction): void - Register an action
  • run(task: string | Message[]): Promise<AgentLoopResult> - Execute a task or process history
  • getHistory(): Message[] - Get conversation history
  • getUserOutputs(): string[] - Get all user outputs
  • getActionsExecuted(): ActionExecution[] - Get all executed actions

Options:

interface AgentLoopOptions {
  maxSteps?: number; // Maximum steps (default: 10)
  includeDefaultActions?: boolean; // Include user_output and query_llm (default: true)
  systemPrompt?: string; // Custom system prompt
  maxRetries?: number; // Max LLM retries (default: 3)
  maxActionRetries?: number; // Max action retries (default: 2)
  onStepComplete?: (step: AgentStep) => void;
  onUserOutput?: (message: string) => void;
  onActionExecuted?: (execution: ActionExecution) => void;
  onInvalidOutput?: (attempt: number, error: string, output: string) => void;
  onActionRetry?: (actionName: string, attempt: number, error: string) => void;
  onActionMaxRetries?: (actionName: string, error: string) => void;
}

ILLMProvider

Interface for LLM providers.

Methods:

  • sendAgenticMessage(messages: Message[], systemPrompt?: string): Promise<string> - For agentic calls
  • sendMessage(messages: Message[], systemPrompt?: string): Promise<string> - For normal chat
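A minimal provider can be sketched as follows. The Message and ILLMProvider shapes are redeclared locally (mirroring the reference above) so the snippet runs standalone; in a real project you would import them from chisato, and the echo body would be replaced by an HTTP call to your LLM service:

```typescript
// Local redeclarations mirroring chisato's documented shapes (for a
// standalone sketch only — import the real types from chisato in practice).
type Message = { role: string; content: string };

interface ILLMProvider {
  sendAgenticMessage(messages: Message[], systemPrompt?: string): Promise<string>;
  sendMessage(messages: Message[], systemPrompt?: string): Promise<string>;
}

// Echoes the last user message — a stand-in for a real LLM call.
class EchoProvider implements ILLMProvider {
  async sendAgenticMessage(messages: Message[], systemPrompt?: string): Promise<string> {
    // A real implementation would send the action-bearing systemPrompt too.
    return `agentic: ${messages[messages.length - 1]?.content ?? ""}`;
  }

  async sendMessage(messages: Message[]): Promise<string> {
    return `chat: ${messages[messages.length - 1]?.content ?? ""}`;
  }
}
```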

See ILLMPROVIDER_GUIDE.md for detailed information.

IAction

Interface for actions.

Properties:

  • name: string - Unique action name
  • description: string - What the action does
  • parameters: ParameterDefinition[] - Parameter definitions

Methods:

  • execute(params: Record<string, any>): Promise<any> - Execute the action
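Putting the properties and method together, an action looks like this. IAction and the parameter fields are redeclared locally (with the field set inferred from the examples earlier in this README) so the sketch runs standalone:

```typescript
// Local redeclaration inferred from the examples above — import the real
// IAction from chisato in practice.
type ParameterDefinition = {
  name: string;
  type: "string" | "number" | "boolean"; // assumed set; "string" is what the docs show
  description: string;
  required: boolean;
};

interface IAction {
  name: string;
  description: string;
  parameters: ParameterDefinition[];
  execute(params: Record<string, any>): Promise<any>;
}

class ReverseAction implements IAction {
  name = "reverse_text";
  description = "Reverse the characters of a string";
  parameters: ParameterDefinition[] = [
    { name: "text", type: "string", description: "Text to reverse", required: true },
  ];

  async execute(params: Record<string, any>): Promise<any> {
    // Spread into an array to reverse by code point, then rejoin.
    return { reversed: [...params.text].reverse().join("") };
  }
}
```

Because the name, description, and parameter descriptions are injected into the system prompt, writing them clearly is what lets the LLM decide when to call the action.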

License

ISC

Contributing

Contributions are welcome! Please feel free to submit issues or pull requests on GitHub.