# SmallLLM
Minimal LLM framework for Node.js - Build complex AI agents and workflows with TypeScript. A lightweight, expressive alternative to heavy LLM frameworks.
## ✨ Key Features
- 🚀 Lightweight: Minimal core (~200 lines) with zero dependencies except TypeScript
- 🤖 Agent-First: Built-in patterns for autonomous agents, RAG, and multi-agent systems
- 🔄 Checkpoint & Resume: Automatic state persistence with crash recovery
- 🎯 Type-Safe: Full TypeScript support with comprehensive type definitions
- 🏗️ Composable: Easy flow composition and nesting
- ⚡ Async Support: Native async/await with proper error handling
- 🔧 Extensible: Plugin architecture for custom nodes and checkpointers
## 📦 Installation

```bash
npm install smallllm
```

### Build from Source

```bash
git clone https://github.com/AnupKumarJha/small-llm.git
cd small-llm
npm install
npm run build
```

## 🚀 Quick Start
### 1. Basic Node Example

```typescript
import { Node, Flow } from 'smallllm';

class HelloNode extends Node {
  async exec(prepRes: any): Promise<any> {
    return `Hello, ${prepRes.name}!`;
  }
}

class PrintNode extends Node {
  async post(shared: any, prepRes: any, execRes: any): Promise<string> {
    console.log(execRes);
    return 'default';
  }
}

// Create and connect nodes
const hello = new HelloNode();
const print = new PrintNode();
hello.then(print);

// Create flow and run
const flow = new Flow(hello);
await flow.run({ name: 'World' });
```

### 2. LLM-Powered Agent
```typescript
import { LLMNode, Flow, LLMConfig } from 'smallllm';

const llmConfig: LLMConfig = {
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY!,
  model: 'gpt-4o'
};

class SummarizeNode extends LLMNode {
  constructor() {
    super(llmConfig, 'You are a helpful assistant that summarizes text.');
  }

  async prep(shared: any): Promise<any> {
    return shared.text;
  }

  async exec(prepRes: any): Promise<any> {
    const prompt = `Summarize this text:\n\n${prepRes}`;
    return await this.callLLM(prompt);
  }

  async post(shared: any, prepRes: any, execRes: any): Promise<string> {
    shared.summary = execRes;
    return 'default';
  }
}

const summarize = new SummarizeNode();
const flow = new Flow(summarize);

// run() resolves to void; results are written to the shared store
const shared = { text: 'Your long text here...' };
await flow.run(shared);
console.log(shared.summary);
```

### 3. Research Agent with Checkpointing
```typescript
import { Node, Flow, FileCheckpointer } from 'smallllm';

class ResearchAgent extends Node {
  // Agent logic here...
}

// Enable checkpointing for crash recovery
const researchAgent = new ResearchAgent();
const checkpointer = new FileCheckpointer('./checkpoints');
const flow = new Flow(researchAgent, { checkpointer });

// Run with a unique flow ID for resume capability
const sharedData: Record<string, any> = {};
const flowId = `research-${Date.now()}`;
await flow.run(sharedData, flowId);

// Later, resume from the checkpoint
await flow.run(sharedData, flowId);
```

## 🎯 Core Concepts
### Nodes

- **Node**: Base class for all workflow components
- **LLMNode**: Pre-built node with LLM integration
- `prep()`: Prepare data from the shared store
- `exec()`: Execute the main logic (idempotent)
- `post()`: Store results and return the next action
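The prep → exec → post lifecycle can be sketched with a simplified stand-in class. This is an illustration of the pattern, not smallllm's actual base class, and the `run` driver shown is an assumption about how a flow steps a node through its phases:

```typescript
// Simplified stand-in illustrating the prep → exec → post lifecycle.
type SharedData = Record<string, any>;

class WordCountNode {
  // prep: pull input out of the shared store
  async prep(shared: SharedData): Promise<string> {
    return shared.text;
  }

  // exec: pure, idempotent work on the prepared input
  async exec(prepRes: string): Promise<number> {
    return prepRes.split(/\s+/).filter(Boolean).length;
  }

  // post: write results back and choose the next action
  async post(shared: SharedData, prepRes: string, execRes: number): Promise<string> {
    shared.wordCount = execRes;
    return 'default';
  }

  // Hypothetical driver showing how a flow steps through the phases
  async run(shared: SharedData): Promise<string> {
    const prepRes = await this.prep(shared);
    const execRes = await this.exec(prepRes);
    return this.post(shared, prepRes, execRes);
  }
}

const shared: SharedData = { text: 'hello agent world' };
await new WordCountNode().run(shared);
// shared.wordCount === 3
```

Splitting read (prep), compute (exec), and write (post) keeps exec idempotent, which is what makes checkpoint-and-retry safe.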
### Flows

- **Flow**: Orchestrates node execution
- **Action-based routing**: Nodes return actions to control flow
- **Composition**: Flows can contain other flows
- **Checkpointing**: Automatic state persistence
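Action-based routing means the string a node returns selects which node runs next. The sketch below demonstrates the idea with minimal stand-in classes (`MiniNode`, `MiniFlow`, and the `on()` method are illustrative inventions, not smallllm's API):

```typescript
// Minimal stand-ins demonstrating action-based routing.
type Action = string | null | undefined;
type SharedData = Record<string, any>;

abstract class MiniNode {
  successors: Record<string, MiniNode> = {};
  // Register which node runs when this node returns `action`
  on(action: string, node: MiniNode): MiniNode {
    this.successors[action] = node;
    return node;
  }
  abstract run(shared: SharedData): Promise<Action>;
}

class CheckNode extends MiniNode {
  async run(shared: SharedData): Promise<Action> {
    // The returned string is the routing decision
    return shared.value > 10 ? 'big' : 'small';
  }
}

class LabelNode extends MiniNode {
  constructor(private label: string) { super(); }
  async run(shared: SharedData): Promise<Action> {
    shared.result = this.label;
    return null; // no action → flow ends
  }
}

class MiniFlow {
  constructor(private start: MiniNode) {}
  async run(shared: SharedData): Promise<void> {
    let node: MiniNode | undefined = this.start;
    while (node) {
      const action = await node.run(shared);
      node = action ? node.successors[action] : undefined;
    }
  }
}

const check = new CheckNode();
check.on('big', new LabelNode('big number'));
check.on('small', new LabelNode('small number'));

const shared: SharedData = { value: 42 };
await new MiniFlow(check).run(shared);
// shared.result === 'big number'
```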
### Shared Store

Global data structure for node communication:

```typescript
const shared = {
  query: "What is AI?",
  results: [],
  finalAnswer: null
};
```

## 📚 Examples
### 🤖 Agent Patterns
- Research Agent: Web search and analysis agent
- HITL Agent: Human-in-the-loop decision making
- Resume Demo: Checkpoint and crash recovery
### 🔧 Utility Examples
- Hello World: Basic node usage
- LLM Integration: Direct LLM calls
- Error Handling: Retry mechanisms
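A retry mechanism like the one the Error Handling example covers can be sketched as a generic wrapper with exponential backoff. This is an assumed design for illustration, not smallllm's built-in retry API:

```typescript
// Generic retry wrapper with exponential backoff (illustrative sketch).
async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxRetries) {
        // Backoff doubles each attempt: 100ms, 200ms, 400ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}

// Usage: a flaky operation that succeeds on the third attempt
let calls = 0;
const result = await withRetry(async () => {
  calls++;
  if (calls < 3) throw new Error('transient failure');
  return 'ok';
});
```

Wrapping an LLM call this way (inside a node's `exec`, which is idempotent by design) makes transient provider errors recoverable without duplicating side effects.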
Run examples:

```bash
# Research agent
npm run research-agent

# Human-in-the-loop agent
npm run hitl-agent

# Resume functionality demo
npm run demo-resume
```

## 🔧 API Reference
### Classes

#### Node

Base node class for custom logic.

**Methods:**

- `prep(shared: SharedData): Promise<any>` - Prepare input data
- `exec(prepRes: any): Promise<any>` - Execute main logic
- `post(shared: SharedData, prepRes: any, execRes: any): Promise<Action>` - Store results
- `setParams(params: Params): void` - Set node parameters
#### LLMNode

Node with built-in LLM capabilities.

**Constructor:**

```typescript
new LLMNode(llmConfig: LLMConfig, systemPrompt?: string)
```

**Methods:**

- `callLLM(prompt: string): Promise<string>` - Make an LLM call
#### Flow

Workflow orchestrator.

**Constructor:**

```typescript
new Flow(startNode: Node, options?: { checkpointer?: Checkpointer })
```

**Methods:**

- `run(shared: SharedData, flowId?: string): Promise<void>` - Execute the flow
- `then(node: Node): Flow` - Chain nodes sequentially
- `action(actionName: string): Flow` - Add conditional routing
### Types

```typescript
type SharedData = Record<string, any>;
type Action = string | null | undefined;
type Params = Record<string, any>;
type LLMProvider = "openai" | "gemini" | "anthropic";

type LLMConfig = {
  provider: LLMProvider;
  apiKey: string;
  model?: string;
  temperature?: number;
  maxTokens?: number;
};
```

## 🔌 Supported LLM Providers
- OpenAI: GPT-4, GPT-3.5-turbo
- Google Gemini: Gemini 1.5, Gemini Pro
- Anthropic Claude: Claude 3, Claude 2
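For reference, a config per provider might look like the following. The model identifiers are illustrative examples (check each provider's docs for current names), and `LLMConfig` is repeated inline from the Types section above so the snippet stands alone:

```typescript
// LLMConfig repeated inline for a self-contained snippet.
type LLMProvider = "openai" | "gemini" | "anthropic";
type LLMConfig = {
  provider: LLMProvider;
  apiKey: string;
  model?: string;
  temperature?: number;
  maxTokens?: number;
};

// Model names below are examples only, not an exhaustive or current list.
const openaiConfig: LLMConfig = {
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY ?? '',
  model: 'gpt-4o',
};

const geminiConfig: LLMConfig = {
  provider: 'gemini',
  apiKey: process.env.GEMINI_API_KEY ?? '',
  model: 'gemini-1.5-pro',
};

const claudeConfig: LLMConfig = {
  provider: 'anthropic',
  apiKey: process.env.ANTHROPIC_API_KEY ?? '',
  model: 'claude-3-5-sonnet-latest',
};
```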
## 🏗️ Architecture
See ARCHITECTURE.md for detailed design documentation.
## 🤝 Contributing

- Fork the repository
- Create a feature branch: `git checkout -b feature/your-feature`
- Make your changes and add tests
- Run tests: `npm test`
- Submit a pull request
### Development Setup

```bash
npm install
npm run dev    # Watch mode
npm run build
```

## 📄 License
MIT License - see LICENSE file for details.
## 🙏 Acknowledgments
Inspired by PocketFlow - a similar framework for Python.
Built with ❤️ for the AI engineering community
