smol-js
A TypeScript agentic framework inspired by smolagents.
Build AI agents that solve tasks autonomously using the ReAct (Reasoning + Acting) pattern. Agents can write and execute JavaScript code, call tools, run shell commands, delegate to other agents, and orchestrate complex workflows via YAML definitions.
Features
- Three Agent Types:
  - CodeAgent — Writes and executes JavaScript code in a sandboxed VM
  - ToolUseAgent — Uses native LLM function calling (OpenAI-style)
  - TerminalAgent — Executes shell commands on your macOS terminal
- YAML Orchestration: Define complex agent workflows declaratively
- Custom Tool Plugins: Drop standalone .ts tool files into a folder — no build step, no config
- Nested Agents: Manager-worker patterns for hierarchical task delegation
- Sandboxed Execution: JavaScript runs in Node's vm module with state persistence
- Exa.ai Integration: Semantic web search, content extraction, and research automation
- Dynamic Imports: Import npm packages on-the-fly in CodeAgent
- OpenAI-Compatible: Works with OpenRouter, OpenAI, Azure, Anthropic, and local servers
- Streaming: Real-time output streaming from the LLM
- Memory Management: Context-aware with truncate/compact strategies
- Color-Coded Logging: Beautiful terminal output with session logging to disk
Installation
npm install @samrahimi/smol-js

Or run workflows directly without installing:
npx @samrahimi/smol-js workflow.yaml --task "Your task here"

Quick Start
Via CLI (YAML Workflows)
The easiest way to get started is using YAML workflows:
# Run directly with npx (no installation needed)
npx @samrahimi/smol-js workflow.yaml --task "Your task here"
# Or install globally and use the CLI
npm install -g @samrahimi/smol-js
smol-js workflow.yaml --task "Research quantum computing"
# Validate a workflow
npx @samrahimi/smol-js validate workflow.yaml

Example workflow (research-agent.yaml):
name: "Research Agent"
description: "An agent that can search the web and write reports"
model:
modelId: "anthropic/claude-sonnet-4.5"
baseUrl: "https://openrouter.ai/api/v1"
maxTokens: 4000
tools:
search:
type: exa_search
config:
apiKey: "$EXA_API_KEY"
write:
type: write_file
agents:
researcher:
type: ToolUseAgent
tools:
- search
- write
maxSteps: 10
customInstructions: "You are a thorough researcher. Always cite sources."
entrypoint: researcherProgrammatic Usage
import 'dotenv/config';
import { CodeAgent, OpenAIModel } from '@samrahimi/smol-js';
// Create the model (defaults to Claude via OpenRouter)
const model = new OpenAIModel({
modelId: 'anthropic/claude-sonnet-4.5',
});
// Create the agent
const agent = new CodeAgent({
model,
maxSteps: 10,
});
// Run a task
const result = await agent.run('Calculate the first 10 prime numbers');
console.log(result.output); // [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

Agent Types
CodeAgent
Generates and executes JavaScript code to solve tasks. The agent has access to tools as async functions in its execution environment.
import { CodeAgent, OpenAIModel } from '@samrahimi/smol-js';
const agent = new CodeAgent({
model,
tools: [myTool],
maxSteps: 20,
});
await agent.run('Analyze the file data.csv and create a summary');

How it works:
- Agent generates thought + JavaScript code
- Code executes in sandboxed VM
- Results become observations
- Repeats until final_answer() is called (see the sketch below)
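For illustration, here is the kind of code a CodeAgent step might generate for the Quick Start task above. This is hypothetical model output, not framework code; final_answer() is injected by the sandbox, and the declare line exists only so the snippet stands alone:

```typescript
// Hypothetical code a CodeAgent might write for "Calculate the first 10 prime numbers".
// final_answer() is provided by the sandbox at runtime; declared here for type-checking only.
declare function final_answer(value: unknown): void;

const primes: number[] = [];
for (let n = 2; primes.length < 10; n++) {
  if (primes.every((p) => n % p !== 0)) primes.push(n);
}
final_answer(primes); // ends the ReAct loop; becomes result.output
```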
ToolUseAgent
Uses native LLM function calling (OpenAI-style tool calling). Best for LLMs with strong structured-output capabilities.
import { ToolUseAgent, OpenAIModel } from '@samrahimi/smol-js';
const agent = new ToolUseAgent({
model,
tools: [searchTool, readFileTool],
maxSteps: 15,
enableParallelToolCalls: true, // Execute independent tools in parallel
});
await agent.run('Search for recent AI papers and summarize the top 3');

TerminalAgent
Accomplishes tasks by reasoning about and executing shell commands on your terminal. Tuned for macOS — uses zsh, understands BSD conventions, and gives you full visibility into what it's doing.
import { TerminalAgent, OpenAIModel } from '@samrahimi/smol-js';
const agent = new TerminalAgent({
model,
maxSteps: 10,
commandDelay: 5, // seconds before running commands (Ctrl+C window)
maxOutputLength: 8000, // chars of command output fed back to the LLM
});
await agent.run('Set up a new Node.js project with TypeScript and Jest');

How it works:
- Agent reasons about what shell commands to run
- Commands are displayed for your review (with a configurable delay — default 5 seconds — so you can Ctrl+C if anything looks wrong)
- Commands execute sequentially via /bin/zsh
- stdout, stderr, and exit codes are captured and fed back as observations
- Repeats until the agent signals completion with FINAL_ANSWER:
Key behaviours:
- Verbose by default. You see the agent's reasoning, pending commands, and live output at every step.
- Safety delay. Every command batch pauses before execution (5 seconds by default, configurable via commandDelay). Read what's about to run. Press Ctrl+C to abort.
- Error recovery. If a command fails (non-zero exit code), the agent sees the error and tries a different approach.
- No user-assigned tools. TerminalAgent is a pure shell agent. It can only delegate to sub-agents (via the manager-worker pattern) — it doesn't use function-calling tools.
TerminalAgent in YAML
name: system-inspector
description: Gathers system information using shell commands.
model:
  modelId: anthropic/claude-sonnet-4.5
  baseUrl: https://openrouter.ai/api/v1
agents:
  inspector:
    type: TerminalAgent
    maxSteps: 5
    commandDelay: 3
    customInstructions: >
      Gather system information: OS version, CPU, RAM, disk usage,
      and list running processes sorted by CPU. Summarize your findings.
entrypoint: inspector

Run it:
npx @samrahimi/smol-js system-inspector.yaml --task "What are my system specs?"

TerminalAgent as a Sub-Agent (Manager-Worker)
One of the most powerful patterns: a ToolUseAgent manager that delegates shell tasks to a TerminalAgent while handling other work itself.
name: manager-demo
description: >
  A ToolUseAgent manager orchestrates a TerminalAgent (shell commands)
  and a ToolUseAgent file reader. The manager delegates and synthesizes.
model:
  modelId: anthropic/claude-sonnet-4.5
  baseUrl: https://openrouter.ai/api/v1
tools:
  read:
    type: read_file
agents:
  terminal_worker:
    type: TerminalAgent
    description: Runs shell commands to gather system information.
    maxSteps: 3
    customInstructions: >
      Gather the requested system information by running shell commands.
      When you have everything, call final_answer with a clear summary.
  file_worker:
    type: ToolUseAgent
    description: Reads files from the project and summarizes their contents.
    tools:
      - read
    maxSteps: 3
    customInstructions: >
      Read the file(s) you are asked about and return a concise summary
      of the relevant details.
  manager:
    type: ToolUseAgent
    description: >
      Delegates to terminal_worker for shell tasks and file_worker for
      file reads, then synthesizes the results.
    agents:
      - terminal_worker
      - file_worker
    maxSteps: 5
    customInstructions: >
      You are a manager agent. You have no tools of your own — your job
      is to delegate to your sub-agents and combine their results.
      - terminal_worker: use for anything that needs shell commands or live system state
      - file_worker: use for reading files on disk
      Delegate both tasks in parallel if possible, then synthesize a final answer.
entrypoint: manager

Run it:
npx @samrahimi/smol-js examples/js/agents/manager-demo.yaml \
--task "Tell me my macOS version and summarize the smol-js README"Custom Tool Plugins
The custom tool system lets you write standalone tool files that any agent can use — without modifying the framework or writing boilerplate. Drop a .ts file into a folder, point the CLI at it, and it just works.
How It Works
- You write a single .ts file that exports TOOL_METADATA and an execute() function
- You tell the CLI where to find your tools: --custom-tools-folder ./my-tools
- At startup, smol-js scans the folder, discovers your tools, and makes them available in YAML workflows
- When an agent calls your tool, smol-js spawns an isolated Bun process to run it
The tool file is transport-agnostic. It has zero knowledge of how it's invoked. All the process management, argument passing, and result serialization is handled by a harness adapter that ships with the package.
Writing a Custom Tool
A custom tool is a single .ts file. Here's a complete example — WeatherLookup.ts:
/**
* WeatherLookup — fetches weather data from Open-Meteo API.
*
* This file exports two things and nothing else:
* 1. TOOL_METADATA — describes the tool to the smol-js scanner
* 2. execute(args) — the actual function; called by the toolHarness adapter
*
* It has zero knowledge of how it is invoked. The harness handles argument
* deserialization, protocol framing, and process lifecycle.
*
* Dependencies are resolved by Bun at runtime — no package.json or
* npm install needed.
*/
import chalk from 'chalk'; // Bun resolves this from npm automatically
import dayjs from 'dayjs'; // Same here — zero config
// TOOL_METADATA — parsed by the scanner at discovery time.
// The `name` field MUST match this file's basename (WeatherLookup).
export const TOOL_METADATA = {
name: 'WeatherLookup',
description: 'Fetches current weather for a given city using the Open-Meteo API.',
inputs: {
city: {
type: 'string',
description: 'The city to look up (e.g. "London", "Tokyo")',
required: true,
},
units: {
type: 'string',
description: 'Temperature units: "celsius" (default) or "fahrenheit"',
required: false,
default: 'celsius',
},
},
outputType: 'string',
};
// execute — the tool's entry point. Just a function.
// args contains the deserialized parameters passed by the agent.
// Return your result. Throw an Error if something goes wrong.
// console.log() calls are streamed back as output the agent can see.
export async function execute(args: Record<string, unknown>): Promise<string> {
const city = args.city as string;
console.log(chalk.dim(` Geocoding "${city}"...`));
// ... fetch weather data ...
return `Weather in ${city}: 22°C, Partly cloudy`;
}

The Rules
- File name must match TOOL_METADATA.name — if the metadata says WeatherLookup, the file must be WeatherLookup.ts
- Export exactly two things: TOOL_METADATA and execute()
- Dependencies are free — Bun resolves npm packages natively. Just import them. No package.json, no npm install.
- Don't add a main() function. Don't read process.argv. Don't write [TOOL_RESULT] lines. The framework's harness handles all of that.
- execute must be async and return a Promise
- console.log() works — output is streamed back and visible to the agent as context
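For reference, here is a minimal skeleton that follows these rules. The EchoTool name and behaviour are hypothetical; save it as EchoTool.ts inside your --custom-tools-folder:

```typescript
// EchoTool.ts: a minimal, hypothetical custom tool plugin.
// Exports exactly TOOL_METADATA and execute(), as required by the scanner.
export const TOOL_METADATA = {
  name: 'EchoTool', // must match this file's basename
  description: 'Echoes back the provided message.',
  inputs: {
    message: {
      type: 'string',
      description: 'Text to echo back',
      required: true,
    },
  },
  outputType: 'string',
};

export async function execute(args: Record<string, unknown>): Promise<string> {
  const message = args.message as string;
  console.log(`Echoing ${message.length} characters...`); // streamed back to the agent
  return `You said: ${message}`;
}
```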
Using Custom Tools in a Workflow
Reference your tool by its metadata name in the tools section:
name: weather-assistant
description: A helpful weather assistant.
model:
  modelId: anthropic/claude-haiku-4.5
  baseUrl: https://openrouter.ai/api/v1
tools:
  WeatherLookup:
    type: WeatherLookup # Must match TOOL_METADATA.name
agents:
  assistant:
    type: ToolUseAgent
    tools:
      - WeatherLookup
    maxSteps: 5
    customInstructions: >
      You are a friendly weather assistant. Use WeatherLookup to get
      current conditions. Present results conversationally.
entrypoint: assistant

Run it:
npx @samrahimi/smol-js weather-assistant.yaml \
--task "What's the weather in Tokyo?" \
--custom-tools-folder ./custom-tools

Tool-Maker Agent
If you'd rather have an AI write your custom tools, there's a tool-maker agent in the examples:
npx @samrahimi/smol-js examples/js/agents/tool-maker/tool-maker.yaml \
--task "Create a tool that converts currencies using the ExchangeRate API"The tool-maker reads the WeatherLookup reference, understands the contract, generates a new standalone tool file, creates a matching YAML workflow, and outputs a ready-to-run npx command. Drop the output into any --custom-tools-folder and it works.
Configuration
Environment Variables
# API key for LLM provider
OPENAI_API_KEY=sk-or-v1-your-openrouter-key
# or
OPENROUTER_API_KEY=sk-or-v1-your-key
# For Exa.ai tools
EXA_API_KEY=your-exa-api-key

Model Configuration
const model = new OpenAIModel({
modelId: 'anthropic/claude-sonnet-4.5', // Model identifier
apiKey: 'sk-...', // API key (or use env var)
baseUrl: 'https://openrouter.ai/api/v1', // API endpoint
maxTokens: 4096, // Max tokens to generate
temperature: 0.7, // Generation temperature
timeout: 120000, // Request timeout in ms
});

Agent Configuration
const agent = new CodeAgent({
model,
tools: [myTool], // Custom tools
maxSteps: 20, // Max iterations (default: 20)
codeExecutionDelay: 5000, // Safety delay before execution (default: 5000ms)
customInstructions: '...', // Additional system prompt instructions
verboseLevel: LogLevel.INFO, // Logging level
streamOutputs: true, // Stream LLM output in real-time
persistent: false, // Retain memory between run() calls
maxContextLength: 100000, // Token limit for context
memoryStrategy: 'truncate', // 'truncate' or 'compact'
additionalAuthorizedImports: ['lodash'], // npm packages (CodeAgent only)
workingDirectory: '/path/to/dir', // Working dir for fs operations
});

TerminalAgent Configuration
const agent = new TerminalAgent({
model,
maxSteps: 10,
commandDelay: 5, // Seconds before executing commands (default: 5)
maxOutputLength: 8000, // Max chars of output fed back per command (default: 8000)
customInstructions: 'Focus on Python tooling only.',
});

Built-in Tools
- FinalAnswerTool: Return the final result (always available)
- UserInputTool: Prompt for user input
- ReadFileTool: Read file contents
- WriteFileTool: Write to files
- CurlTool: Make HTTP requests
- ExaSearchTool: Semantic web search via Exa.ai
- ExaGetContentsTool: Fetch and extract webpage content
- ExaResearchTool: Multi-step research workflow
- AgentTool: Wrap agents as tools for nested architectures
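A short sketch of wiring a few of these into an agent. It assumes the tool classes are exported from the package root and that the file tools take no constructor arguments (only the Exa tool options are documented, in the Exa.ai Integration section below); model is the OpenAIModel from the Quick Start:

```typescript
// Sketch: passing built-in tools to a ToolUseAgent.
// Assumes ReadFileTool and WriteFileTool need no constructor arguments;
// check the exported types if your version differs.
import { ToolUseAgent, ReadFileTool, WriteFileTool, ExaSearchTool } from '@samrahimi/smol-js';

const agent = new ToolUseAgent({
  model, // the OpenAIModel created in the Quick Start
  tools: [
    new ReadFileTool(),
    new WriteFileTool(),
    new ExaSearchTool({ apiKey: process.env.EXA_API_KEY }),
  ],
  maxSteps: 10,
});

await agent.run('Find the latest Node.js LTS version and save a short note to node-lts.md');
```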
Creating Custom Tools (Class-Based)
For tools that need to live inside your TypeScript project (rather than as standalone plugin files), extend the Tool class:
import { CodeAgent, Tool } from '@samrahimi/smol-js';
import type { ToolInputs } from '@samrahimi/smol-js';
class WeatherTool extends Tool {
readonly name = 'get_weather';
readonly description = 'Get current weather for a city';
readonly inputs: ToolInputs = {
city: {
type: 'string',
description: 'The city name',
required: true,
},
};
readonly outputType = 'object';
async execute(args: Record<string, unknown>): Promise<unknown> {
const city = args.city as string;
const response = await fetch(`https://api.weather.com/${city}`);
return response.json();
}
}
const agent = new CodeAgent({
model,
tools: [new WeatherTool()],
});

Registering Tools for YAML Workflows
import { YAMLLoader, Orchestrator } from '@samrahimi/smol-js';
import { MyCustomTool } from './tools.js';
const loader = new YAMLLoader();
// Register custom tools by type name
loader.registerToolType('my_tool', MyCustomTool);
// Now use in YAML:
// tools:
// custom:
// type: my_tool
// config:
// apiKey: "$MY_API_KEY"
const workflow = loader.loadFromFile('./workflow.yaml');
const orchestrator = new Orchestrator();
await orchestrator.runWorkflow(workflow, 'Your task here');

YAML Workflow System
Define complex agent architectures declaratively:
name: "Multi-Agent Research System"
description: "Manager-worker pattern with specialized agents"
model:
modelId: "anthropic/claude-sonnet-4.5"
baseUrl: "https://openrouter.ai/api/v1"
apiKey: "$OPENROUTER_API_KEY"
tools:
search:
type: exa_search
config:
apiKey: "$EXA_API_KEY"
read:
type: read_file
write:
type: write_file
agents:
# Worker: specialized in research
researcher:
type: ToolUseAgent
tools:
- search
- read
maxSteps: 8
temperature: 0.3
customInstructions: "Be thorough and cite sources."
# Worker: specialized in writing
writer:
type: CodeAgent
tools:
- write
maxSteps: 5
temperature: 0.7
customInstructions: "Create clear, engaging content."
# Worker: runs shell commands
sysadmin:
type: TerminalAgent
maxSteps: 5
customInstructions: "Handle any system or environment setup tasks."
# Manager: delegates to all three workers
manager:
type: ToolUseAgent
agents:
- researcher
- writer
- sysadmin
maxSteps: 10
customInstructions: "Coordinate research, writing, and system tasks. Delegate appropriately."
entrypoint: managerRun it:
npx @samrahimi/smol-js research-workflow.yaml --task "Write a report on quantum computing"

Nested Agents (Manager-Worker Pattern)
Use agents as tools for hierarchical task delegation:
import { CodeAgent, ToolUseAgent, TerminalAgent, OpenAIModel, AgentTool, LogLevel } from '@samrahimi/smol-js';
// Create specialized worker agents
const mathAgent = new CodeAgent({
model,
maxSteps: 5,
verboseLevel: LogLevel.OFF,
});
const sysAgent = new TerminalAgent({
model,
maxSteps: 5,
commandDelay: 3,
verboseLevel: LogLevel.OFF,
});
// Wrap workers as tools
const mathExpert = new AgentTool({
agent: mathAgent,
name: 'math_expert',
description: 'Delegate math and calculation tasks',
});
const sysAdmin = new AgentTool({
agent: sysAgent,
name: 'sys_admin',
description: 'Delegate shell commands and system tasks',
});
// Manager delegates to workers
const manager = new ToolUseAgent({
model,
tools: [mathExpert, sysAdmin],
maxSteps: 10,
});
await manager.run('Check disk usage and calculate how many GB are free as a percentage');

Exa.ai Integration
Three tools for web research powered by Exa.ai:
ExaSearchTool
Semantic search with advanced filtering:
import { ExaSearchTool } from '@samrahimi/smol-js';
const searchTool = new ExaSearchTool({
apiKey: process.env.EXA_API_KEY,
numResults: 10,
searchType: 'auto', // 'auto', 'neural', or 'keyword'
});

ExaGetContentsTool
Extract clean webpage content:
import { ExaGetContentsTool } from '@samrahimi/smol-js';
const contentTool = new ExaGetContentsTool({
apiKey: process.env.EXA_API_KEY,
textOnly: true,
});

ExaResearchTool
Agentic web research that writes comprehensive reports:
import { ExaResearchTool } from '@samrahimi/smol-js';
const researchTool = new ExaResearchTool({
apiKey: process.env.EXA_API_KEY,
model: 'exa-research', // or 'exa-research-fast', 'exa-research-pro'
});
// The Exa Research API is an asynchronous research agent that:
// 1. Plans the research approach
// 2. Executes searches across the web
// 3. Extracts and analyzes facts from sources
// 4. Synthesizes findings into a markdown report with citations
// 5. Returns the complete report (typically 20-90 seconds)
await agent.run('Use exa_research to write a report on quantum computing breakthroughs in 2024');

Built-in Capabilities (CodeAgent)
The CodeAgent sandbox includes:
| Category | Available globals |
|----------|-------------------|
| Output | console.log(), console.error(), print() |
| HTTP | fetch(), URL, URLSearchParams |
| File System | fs.* (readFileSync, writeFileSync, etc.) |
| Path | path.* (join, resolve, dirname, etc.) |
| Data | JSON, Buffer, TextEncoder, TextDecoder |
| Math | Math.*, parseInt(), parseFloat() |
| Types | Object, Array, Map, Set, Date, RegExp, Promise |
| Timers | setTimeout(), setInterval() |
| Final | final_answer(value) - Return the result |
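For illustration, here is a snippet a CodeAgent might run inside the sandbox, exercising several of the globals above. The URL and filename are hypothetical, and the declare lines only stand in for bindings the sandbox injects:

```typescript
// Hypothetical sandbox code: fetch JSON, persist it with fs/path, then finish.
// fetch, fs, path, JSON, and final_answer are injected by the sandbox;
// the declarations below exist only so this snippet stands alone.
declare function final_answer(value: unknown): void;
declare const fs: typeof import('fs');
declare const path: typeof import('path');

const res = await fetch('https://api.example.com/report.json'); // hypothetical endpoint
const data = (await res.json()) as Record<string, unknown>;
const outFile = path.join('.', 'report.json');
fs.writeFileSync(outFile, JSON.stringify(data, null, 2));
final_answer(`Saved ${Object.keys(data).length} top-level fields to ${outFile}`);
```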
Dynamic npm Imports
const agent = new CodeAgent({
model,
additionalAuthorizedImports: ['lodash', 'dayjs', 'uuid'],
});
// Agent can now write:
// const _ = await importPackage('lodash');
// const result = _.uniq([1, 2, 2, 3]);

Packages are fetched from the jsDelivr CDN and cached in ~/.smol-js/packages/.
Examples
See the examples/js/ directory for complete examples:
- main.ts: Main demo with custom tools and YAML workflows
- custom-tools.ts: Custom tool implementations (TimestampTool, TextStatsTool, SlugifyTool)
- agents/: YAML workflow definitions
  - custom-tool-workflow.yaml: Weather assistant using the standalone custom tool plugin system
  - manager-demo.yaml: Manager + TerminalAgent worker + file-reader worker
  - tool-maker/tool-maker.yaml: AI agent that generates custom tool files
  - bloomberg.yaml: Bloomberg research workflow
  - policy.yaml: Policy analysis workflow
  - simple-test.yaml: Simple test workflow
And the custom-tools/ directory at the repo root contains example standalone tool plugins:
- WeatherLookup.ts: Fetches weather from Open-Meteo (demonstrates npm deps, streaming output, error handling)
Run an example:
cd examples/js
npm install
npx tsx main.ts

Memory Management
Agents track all execution steps and manage context automatically:
const agent = new CodeAgent({
model,
maxContextLength: 100000, // Token limit
memoryStrategy: 'truncate', // or 'compact'
persistent: false, // Retain memory between run() calls
});
// Non-persistent (default): Fresh memory each run
await agent.run('Task 1');
await agent.run('Task 2'); // Doesn't remember Task 1
// Persistent: Maintains conversation context
const persistentAgent = new CodeAgent({ model, persistent: true });
await persistentAgent.run('Remember this: X = 42');
await persistentAgent.run('What is X?'); // Remembers X = 42

Memory Strategies:
- truncate: Removes oldest steps when over token limit
- compact: Uses LLM to summarize older steps
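A short sketch combining these options for a long-lived assistant where summarizing beats dropping steps (option names are from Agent Configuration above; values are illustrative):

```typescript
// Sketch: a persistent agent that compacts (LLM-summarizes) older steps
// instead of truncating them once maxContextLength is exceeded.
const assistant = new CodeAgent({
  model,
  persistent: true,          // keep memory across run() calls
  maxContextLength: 60000,   // illustrative token budget
  memoryStrategy: 'compact', // summarize older steps rather than drop them
});
```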
Session Logging
All sessions are logged to ~/.smol-js/:
- session-<timestamp>.log — Full session transcript with color codes
- packages/ — Cached npm packages from dynamic imports
API Reference
Agent Base Class
abstract class Agent {
constructor(config: AgentConfig)
run(task: string, reset?: boolean): Promise<RunResult>
stop(): void
reset(): void
addTool(tool: Tool): void
removeTool(name: string): boolean
getMemory(): AgentMemory
}

CodeAgent
class CodeAgent extends Agent {
constructor(config: CodeAgentConfig)
getExecutor(): LocalExecutor
}

ToolUseAgent
class ToolUseAgent extends Agent {
constructor(config: ToolUseAgentConfig)
}
interface ToolUseAgentConfig extends AgentConfig {
enableParallelToolCalls?: boolean;
}

TerminalAgent
class TerminalAgent extends Agent {
constructor(config: TerminalAgentConfig)
}
interface TerminalAgentConfig extends AgentConfig {
commandDelay?: number; // Seconds before executing (default: 5)
maxOutputLength?: number; // Max chars of output per command (default: 8000)
}

RunResult
interface RunResult {
output: unknown; // Final answer
steps: MemoryStep[]; // Execution history
tokenUsage: TokenUsage; // Token counts
duration: number; // Total time in ms
}
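A short sketch of inspecting the result of a run, using only the fields above (the exact shapes of MemoryStep and TokenUsage are defined by the library):

```typescript
// Sketch: reading the RunResult returned by agent.run().
const result = await agent.run('Summarize the project README');

console.log('Final answer:', result.output);
console.log('Steps taken:', result.steps.length);
console.log('Token usage:', result.tokenUsage);
console.log(`Completed in ${(result.duration / 1000).toFixed(1)}s`);
```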
Tool

abstract class Tool {
abstract readonly name: string;
abstract readonly description: string;
abstract readonly inputs: ToolInputs;
abstract readonly outputType: string;
abstract execute(args: Record<string, unknown>): Promise<unknown>;
setup(): Promise<void>;
call(args: Record<string, unknown>): Promise<unknown>;
toCodePrompt(): string; // For CodeAgent
toOpenAITool(): OpenAITool; // For ToolUseAgent
}

Orchestrator
class Orchestrator {
constructor(config?: { verbose?: boolean })
loadWorkflow(filePath: string): Workflow
loadWorkflowFromString(yaml: string): Workflow
runWorkflow(workflow: Workflow, task: string): Promise<void>
runAgent(agent: Agent, task: string): Promise<void>
getEventLog(): OrchestrationEvent[]
}
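A sketch of driving a workflow through the Orchestrator directly, using only the methods above (the workflow path and task are hypothetical):

```typescript
// Sketch: load a YAML workflow, run it, then inspect the event log.
import { Orchestrator } from '@samrahimi/smol-js';

const orchestrator = new Orchestrator({ verbose: true });
const workflow = orchestrator.loadWorkflow('./research-agent.yaml'); // hypothetical path
await orchestrator.runWorkflow(workflow, 'Summarize recent fusion energy news');

console.log(orchestrator.getEventLog()); // orchestration events recorded during the run
```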
YAMLLoader

class YAMLLoader {
registerToolType(typeName: string, toolClass: typeof Tool): void
registerToolInstance(name: string, tool: Tool): void
loadFromFile(filePath: string): Workflow
loadFromString(yaml: string): Workflow
}

CLI Reference
# Run a workflow
npx @samrahimi/smol-js <workflow.yaml> [options]
npx @samrahimi/smol-js run <workflow.yaml> [options]
# Validate a workflow
npx @samrahimi/smol-js validate <workflow.yaml>
# Options
--task, -t <task> Task description (prompted if not provided)
--custom-tools-folder <path> Path to folder containing standalone tool plugins
--quiet, -q Reduce output verbosity
--help, -h Show help message

Security Considerations
- Sandboxed Execution: Code runs in Node's vm module, isolated from the main process
- Authorized Imports: Only explicitly allowed npm packages can be imported in CodeAgent
- File System Isolation: fs operations are restricted to configured working directory
- Execution Delay: Configurable delay before code/command execution allows user interruption (Ctrl+C)
- Timeout Protection: Code execution has a configurable timeout (default: 30s); shell commands time out after 2 minutes
- Isolated Tool Processes: Custom tool plugins run in separate Bun processes — a misbehaving tool can't affect the main framework
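These controls map onto ordinary configuration options. A sketch of tightening them (option names are from Agent Configuration above; values are illustrative):

```typescript
// Sketch: a more locked-down CodeAgent.
const agent = new CodeAgent({
  model,
  codeExecutionDelay: 10000,               // 10-second review window before code runs
  additionalAuthorizedImports: ['dayjs'],  // explicit allow-list for dynamic imports
  workingDirectory: '/tmp/agent-sandbox',  // confine fs operations to one directory
});
```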
Comparison with Python smolagents
| Feature | Python smolagents | smol-js |
|---------|------------------|---------|
| Code execution | Python interpreter | Node.js vm module |
| Imports | import statement | await importPackage() |
| Tool definition | @tool decorator | Class extending Tool |
| Nested agents | ManagedAgent | AgentTool |
| Async support | Optional | All tools are async |
| HTTP requests | Requires tool | Built-in fetch() |
| Remote executors | E2B, Docker, etc. | Local only |
| Agent types | CodeAgent, ToolCallingAgent | CodeAgent, ToolUseAgent, TerminalAgent |
| YAML workflows | ❌ | ✅ |
| Exa.ai integration | ❌ | ✅ Built-in |
| Custom tool plugins | ❌ | ✅ Standalone .ts files |
Contributing
Contributions are welcome! Please follow OOP principles and open an issue or PR on GitHub.
# Clone and install
git clone https://github.com/samrahimi/smol.git
cd smol/smol-js
npm install
# Build
npm run build
# Run tests
npm test
# Lint
npm run lint
npm run lint:fix
# Type check
npm run typecheck

License
MIT
Credits
This is a TypeScript framework inspired by smolagents by Hugging Face, with additional features including YAML orchestration, ToolUseAgent, TerminalAgent, custom tool plugins, and Exa.ai integration.
