@treppenhaus/chisato
v1.0.4
Lightweight LLM agentic framework for TypeScript
Chisato
GitHub Repository: https://github.com/treppenhaus/chisato
A lightweight, extensible TypeScript framework for building LLM-powered agents with custom actions and pluggable LLM providers.
What's New in Version 1.0.4
- Flexible Input: `AgentLoop.run` and `Agent.chat` now accept `string` or `Message[]` history
- Context Management: Easily inject conversation history or context messages
- Type Safety: Improved overloads for better TypeScript support
- Previous features from 1.0.3:
  - Improved termination logic
  - Better context handling
  - AgentLoop system with autonomous task execution
  - Dual LLM methods: `sendAgenticMessage` and `sendMessage`
  - LLM-driven decisions
  - Retry system
  - Action tracking
  - Default actions: `user_output` and `query_llm`
Features
- Provider Agnostic: Use any LLM service (OpenAI, Anthropic, local models, etc.)
- Simple API: Easy to use and integrate into your projects
- Custom Actions: Create your own agent capabilities with a simple interface
- Automatic Action Handling: Actions are automatically added to system prompts and executed
- Conversation Loop: Handles multi-turn interactions with automatic action execution
- Agent Loop: Break down complex tasks into steps and execute them autonomously
- LLM-Driven: Let the LLM intelligently decide when to use actions
- Retry Logic: Automatic retries for LLM errors and action failures
- Full Visibility: Track all actions and retries with callbacks
- TypeScript First: Full type safety and IntelliSense support
Quick Start
```typescript
import { Agent, ILLMProvider, IAction, Message } from "chisato";

// 1. Implement your LLM provider
class MyLLMProvider implements ILLMProvider {
  async sendAgenticMessage(
    messages: Message[],
    systemPrompt?: string
  ): Promise<string> {
    // For agentic calls (with actions)
    return await callYourLLM(messages, systemPrompt);
  }

  async sendMessage(
    messages: Message[],
    systemPrompt?: string
  ): Promise<string> {
    // For normal chat (no actions)
    return await callYourLLM(messages, systemPrompt);
  }
}

// 2. Create custom actions
class CalculatorAction implements IAction {
  name = "calculator";
  description = "Perform mathematical calculations";
  parameters = [
    {
      name: "expression",
      type: "string" as const,
      description: "Mathematical expression to evaluate",
      required: true,
    },
  ];

  async execute(params: Record<string, any>): Promise<any> {
    // Note: eval is unsafe for untrusted input; use a proper
    // expression parser in production.
    const result = eval(params.expression);
    return { result };
  }
}

// 3. Create and configure your agent
const provider = new MyLLMProvider();
const agent = new Agent(provider);

// 4. Register actions
agent.registerAction(new CalculatorAction());

// 5. Start chatting!
const response = await agent.chat("What is 15 * 23?");
console.log(response);
```
Documentation
Guides
- ILLMProvider Guide - Understanding the two LLM methods
- Using Real LLMs - Integration with OpenAI, Anthropic, Ollama
- Retry System Guide - Handling failures and retries
- AgentLoop Design - How autonomous task execution works
- Quick Reference - Quick API reference
- Architecture - System architecture overview
- Implementation Summary - Implementation details
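As a rough sketch of what an integration with a real LLM service might look like, here is an OpenAI-backed provider using the plain REST API. The model name, endpoint, and constructor shape are assumptions for illustration, not part of Chisato; error handling is elided, and the action descriptions arrive through `systemPrompt` as the framework supplies them:

```typescript
// Sketch of an OpenAI-backed provider exposing Chisato's two methods.
// Model name and endpoint are assumptions; adapt for Anthropic, Ollama, etc.
type Message = { role: "user" | "assistant" | "system"; content: string };

class OpenAIProvider {
  constructor(private apiKey: string, private model = "gpt-4o-mini") {}

  private async call(
    messages: Message[],
    systemPrompt?: string
  ): Promise<string> {
    const res = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${this.apiKey}`,
      },
      body: JSON.stringify({
        model: this.model,
        messages: systemPrompt
          ? [{ role: "system", content: systemPrompt }, ...messages]
          : messages,
      }),
    });
    const data = (await res.json()) as any;
    return data.choices[0].message.content;
  }

  // Chisato injects action descriptions via systemPrompt, so both
  // methods can share the same transport in this sketch.
  sendAgenticMessage(messages: Message[], systemPrompt?: string) {
    return this.call(messages, systemPrompt);
  }

  sendMessage(messages: Message[], systemPrompt?: string) {
    return this.call(messages, systemPrompt);
  }
}
```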
Key Concepts
Two Types of LLM Calls:
- Agentic Messages (`sendAgenticMessage`): Used when the LLM should have access to actions and can decide whether to use them
- Normal Messages (`sendMessage`): Used for simple chat without action capabilities
Retry System:
- Automatic retry for LLM failures (empty responses, malformed JSON, API errors)
- Automatic retry for action execution failures
- Configurable retry limits and backoff strategies
- Callbacks for monitoring and alerting
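The retry behavior described above can be pictured as a small wrapper around any async call. This is a minimal sketch illustrating the concept (with a simple linear backoff), not Chisato's actual implementation:

```typescript
// Minimal sketch of retry-with-backoff; not Chisato's internals.
// Makes one initial attempt plus up to maxRetries retries, invoking
// an optional callback before each retry.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxRetries: number,
  onRetry?: (attempt: number, error: string) => void
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err; // retries exhausted
      onRetry?.(attempt + 1, String(err));
      // linear backoff: wait a little longer before each successive retry
      await new Promise((resolve) => setTimeout(resolve, (attempt + 1) * 100));
    }
  }
}
```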
Core Concepts
How It Works
1. You send a message to the agent using `agent.chat()`
2. The agent builds a system prompt that includes descriptions of all registered actions
3. The LLM responds, potentially including action calls in JSON format
4. The agent parses the response and automatically executes any requested actions
5. Action results are fed back to the LLM
6. Steps 3-5 repeat until the LLM provides a final response without actions
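The loop above can be sketched in a few lines. The helper names and the exact JSON shape are hypothetical (Chisato's real parsing and history types differ); the sketch only shows the control flow of "execute actions until the LLM answers in plain text":

```typescript
// Illustrative sketch of the agent's inner loop; hypothetical names,
// not Chisato's internals.
type ActionCall = { action: string; params: Record<string, unknown> };

function parseActionCall(response: string): ActionCall | null {
  try {
    const parsed = JSON.parse(response);
    return parsed && typeof parsed.action === "string"
      ? (parsed as ActionCall)
      : null;
  } catch {
    return null; // plain text means a final answer
  }
}

async function conversationLoop(
  callLLM: (history: string[]) => Promise<string>,
  executeAction: (call: ActionCall) => Promise<unknown>,
  userMessage: string,
  maxIterations = 10
): Promise<string> {
  const history = [userMessage];
  for (let i = 0; i < maxIterations; i++) {
    const response = await callLLM(history);
    const call = parseActionCall(response);
    if (!call) return response; // no action requested: final answer
    // execute the requested action and feed its result back to the LLM
    const result = await executeAction(call);
    history.push(response, `Action result: ${JSON.stringify(result)}`);
  }
  throw new Error("maxIterations reached without a final response");
}
```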
Creating Custom Actions
Implement the `IAction` interface:
```typescript
import { IAction } from "chisato";

class WeatherAction implements IAction {
  name = "get_weather";
  description = "Get current weather for a location";
  parameters = [
    {
      name: "location",
      type: "string" as const,
      description: "City name or coordinates",
      required: true,
    },
  ];

  async execute(params: Record<string, any>): Promise<any> {
    // fetchWeatherAPI is a placeholder for your own weather client
    const weather = await fetchWeatherAPI(params.location);
    return {
      temperature: weather.temp,
      condition: weather.condition,
    };
  }
}

agent.registerAction(new WeatherAction());
```
Agent Loop - Autonomous Task Execution
The `AgentLoop` class enables autonomous task breakdown and execution:
```typescript
import { AgentLoop } from "chisato";

const agentLoop = new AgentLoop(provider, {
  includeDefaultActions: true,
  maxSteps: 20,
  maxRetries: 3,
  maxActionRetries: 2,
  onUserOutput: (message) => console.log("Agent:", message),
  onActionExecuted: (action) => console.log("Executed:", action.actionName),
  onActionRetry: (name, attempt, error) =>
    console.log(`Retry ${name}: ${attempt}`),
});

// Register custom actions
agentLoop.registerAction(new SearchAction());
agentLoop.registerAction(new WeatherAction());

// Run a complex task - the LLM decides which actions to use
const result = await agentLoop.run(
  "Search for TypeScript tutorials and summarize"
);

// OR inject history/context
const resultWithContext = await agentLoop.run([
  { role: "user", content: "Context: Current location is Berlin." },
  { role: "assistant", content: "Understood." },
  { role: "user", content: "What is the weather like?" },
]);
```
Retry Configuration
```typescript
const agentLoop = new AgentLoop(provider, {
  // LLM retry options
  maxRetries: 3, // Retry LLM calls up to 3 times
  onInvalidOutput: (attempt, error, output) => {
    console.log(`LLM retry ${attempt}: ${error}`);
  },

  // Action retry options
  maxActionRetries: 2, // Retry each action up to 2 times
  onActionRetry: (actionName, attempt, error) => {
    console.log(`Action ${actionName} retry ${attempt}: ${error}`);
  },
  onActionMaxRetries: (actionName, error) => {
    console.error(`Action ${actionName} failed permanently: ${error}`);
  },
});
```
Examples
See the examples directory for complete working examples:
- `basic-usage.ts` - Simple agent with a calculator action
- `custom-provider.ts` - Example LLM provider implementation
- `agent-loop-example.ts` - Comprehensive AgentLoop examples
- `action-retry-example.ts` - Demonstrating retry functionality
API Reference
Agent
Main agent class for building conversational agents.
Constructor:
```typescript
new Agent(provider: ILLMProvider, options?: AgentOptions)
```
Methods:
- `registerAction(action: IAction): void` - Register an action
- `chat(input: string | Message[]): Promise<string>` - Send a message or history and get a response
- `getHistory(): Message[]` - Get conversation history
- `clearHistory(): void` - Clear conversation history
Options:
```typescript
interface AgentOptions {
  maxIterations?: number; // Maximum conversation loops (default: 10)
  systemPromptPrefix?: string; // Custom system prompt prefix
  maxRetries?: number; // Max LLM retries (default: 3)
  maxActionRetries?: number; // Max action retries (default: 2)
  onInvalidOutput?: (attempt: number, error: string, output: string) => void;
  onActionRetry?: (actionName: string, attempt: number, error: string) => void;
  onActionMaxRetries?: (actionName: string, error: string) => void;
}
```
AgentLoop
Main class for autonomous task execution with automatic action recognition.
Constructor:
```typescript
new AgentLoop(provider: ILLMProvider, options?: AgentLoopOptions)
```
Methods:
- `registerAction(action: IAction): void` - Register an action
- `run(task: string | Message[]): Promise<AgentLoopResult>` - Execute a task or process history
- `getHistory(): Message[]` - Get conversation history
- `getUserOutputs(): string[]` - Get all user outputs
- `getActionsExecuted(): ActionExecution[]` - Get all executed actions
Options:
```typescript
interface AgentLoopOptions {
  maxSteps?: number; // Maximum steps (default: 10)
  includeDefaultActions?: boolean; // Include user_output and query_llm (default: true)
  systemPrompt?: string; // Custom system prompt
  maxRetries?: number; // Max LLM retries (default: 3)
  maxActionRetries?: number; // Max action retries (default: 2)
  onStepComplete?: (step: AgentStep) => void;
  onUserOutput?: (message: string) => void;
  onActionExecuted?: (execution: ActionExecution) => void;
  onInvalidOutput?: (attempt: number, error: string, output: string) => void;
  onActionRetry?: (actionName: string, attempt: number, error: string) => void;
  onActionMaxRetries?: (actionName: string, error: string) => void;
}
```
ILLMProvider
Interface for LLM providers.
Methods:
- `sendAgenticMessage(messages: Message[], systemPrompt?: string): Promise<string>` - For agentic calls
- `sendMessage(messages: Message[], systemPrompt?: string): Promise<string>` - For normal chat
See ILLMPROVIDER_GUIDE.md for detailed information.
IAction
Interface for actions.
Properties:
- `name: string` - Unique action name
- `description: string` - What the action does
- `parameters: ParameterDefinition[]` - Parameter definitions
Methods:
- `execute(params: Record<string, any>): Promise<any>` - Execute the action
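Before `execute` runs, the declared parameter definitions can be checked against the incoming params. This is a hypothetical validation helper sketched against the interface shape above, not part of Chisato's public API:

```typescript
// Hypothetical helper matching the IAction parameter shape;
// not part of Chisato's public API.
interface ParameterDefinition {
  name: string;
  type: "string" | "number" | "boolean";
  description: string;
  required: boolean;
}

// Returns a list of validation errors; an empty list means the
// params satisfy the declared definitions.
function validateParams(
  defs: ParameterDefinition[],
  params: Record<string, unknown>
): string[] {
  const errors: string[] = [];
  for (const def of defs) {
    const value = params[def.name];
    if (value === undefined) {
      if (def.required) errors.push(`missing required parameter: ${def.name}`);
      continue;
    }
    if (typeof value !== def.type) {
      errors.push(`parameter ${def.name} should be a ${def.type}`);
    }
  }
  return errors;
}
```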
License
ISC
Contributing
Contributions are welcome! Please feel free to submit issues or pull requests on GitHub.
