# atomus-ai
The atomic toolkit for AI agents. Zero external dependencies. Tiny bundle.
Everything you need to build, secure, and scale AI agents — from parsing LLM outputs to managing conversation memory, from retry with backoff to Ed25519-based cryptographic identity.
## Why atomus-ai?
Building AI agents means solving the same problems over and over:
- LLMs return JSON wrapped in markdown code blocks
- API calls fail with rate limits and need smart retry
- Token counting requires heavy dependencies (tiktoken is 3MB+)
- Tool schemas differ between OpenAI, Claude, and MCP
- Conversation history grows beyond context windows
- Streaming responses need accumulation and parsing
atomus-ai solves all of these in one package. Zero dependencies. Tree-shakeable. Works everywhere Node.js runs.
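The first problem on the list is the canonical one: the model returns valid JSON, but wrapped in a markdown code fence. The usual fix is to prefer the fenced payload and fall back to the raw text. A sketch of that technique (not `parseJSON`'s actual implementation):

```javascript
// Extract JSON from a reply that may wrap it in a markdown code fence.
const FENCE = '\u0060\u0060\u0060' // three backticks, escaped so this README renders

function extractJSON(reply) {
  // Prefer the contents of a fenced block if one is present
  const fenced = reply.match(new RegExp(FENCE + '(?:json)?\\s*([\\s\\S]*?)' + FENCE))
  return JSON.parse((fenced ? fenced[1] : reply).trim())
}

extractJSON('Here is the result:\n' + FENCE + 'json\n{"score": 95}\n' + FENCE)
// => { score: 95 }
```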
```sh
npm install atomus-ai
```

## Modules
| Module | What it does | Size |
|--------|-------------|------|
| parse | Extract JSON, XML tags, key-value fields from LLM outputs | ~1KB |
| retry | Exponential backoff, rate limiter, circuit breaker | ~1KB |
| tokens | Token estimation without heavy tokenizers (within ~5% of exact counts) | ~1KB |
| cost | LLM pricing for 20+ models, budget tracking | ~1KB |
| schema | Tool schema builder for OpenAI, Claude, MCP formats | ~1KB |
| memory | Sliding window conversation memory with token budgets | ~1KB |
| stream | SSE parsing, delta accumulation for streaming APIs | ~1KB |
| shield | Ed25519 identity, AES-256-GCM, prompt injection protection | ~2KB |
| agent | Base agent class, tool execution, multi-agent swarm | ~2KB |
## Quick Start
```js
import {
  parseJSON,
  retry,
  estimateTokens,
  Schema,
  ConversationMemory,
  Agent,
  AgentIdentity,
} from 'atomus-ai'
```

## Parse LLM outputs
````js
import { parseJSON, parseTag, parseFields } from 'atomus-ai/parse'

// Handles ```json code blocks, raw JSON, malformed JSON
const data = parseJSON('Here is the result:\n```json\n{"score": 95}\n```')
// => { score: 95 }

// Extract XML-tagged content
const answer = parseTag('<answer>42</answer>', 'answer')
// => "42"

// Parse "field: value" format
const fields = parseFields('Name: Atomus\nVersion: 1.0\nStatus: Active')
// => { name: "Atomus", version: "1.0", status: "Active" }
````

## Retry with exponential backoff
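What `maxRetries` and `initialDelay` imply is a doubling delay schedule, usually capped and jittered in production. A sketch of the schedule (the 30s cap here is an illustrative assumption, not `retry`'s documented default):

```javascript
// Delays before each retry attempt: initialDelay * 2^n, capped at capMs.
// Production implementations usually add random jitter so many clients
// don't retry in lockstep after a shared outage.
function backoffDelays(maxRetries, initialDelay = 1000, capMs = 30000) {
  return Array.from({ length: maxRetries }, (_, i) =>
    Math.min(initialDelay * 2 ** i, capMs)
  )
}

backoffDelays(3, 1000)
// => [1000, 2000, 4000]
```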
```js
import { retry, RateLimiter, CircuitBreaker } from 'atomus-ai/retry'

// Auto-retries on rate limits, timeouts, 5xx errors
const response = await retry(
  () => fetch('https://api.anthropic.com/v1/messages', { method: 'POST', body }),
  { maxRetries: 3, initialDelay: 1000 }
)

// Rate limiter
const limiter = new RateLimiter(60, 60000) // 60 requests per minute
await limiter.acquire()

// Circuit breaker for failing services
const breaker = new CircuitBreaker(5, 30000) // 5 failures = 30s cooldown
const result = await breaker.execute(() => callExternalAPI())
```

## Token estimation (no heavy dependencies)
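A dependency-free estimator typically leans on the rule of thumb that English text averages about four characters per token. A sketch of that heuristic (this is the generic technique, not necessarily `tokens`' exact formula):

```javascript
// Rough token estimate: ~4 characters per token for English text.
// The 4-chars-per-token ratio is a common heuristic, not an exact count.
function roughTokens(text) {
  const characters = text.length
  const words = text.trim() === '' ? 0 : text.trim().split(/\s+/).length
  return { tokens: Math.ceil(characters / 4), characters, words }
}

roughTokens('Your text here')
// => { tokens: 4, characters: 14, words: 3 }
```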
```js
import { estimateTokens, chunkByTokens, truncateToTokens } from 'atomus-ai/tokens'

const { tokens, words } = estimateTokens('Your text here', 'claude')
// => { tokens: 4, characters: 14, words: 3 }

// Split text into chunks that fit token limits
const chunks = chunkByTokens(longDocument, 4000, 'gpt-4o')

// Truncate to fit
const short = truncateToTokens(longText, 1000, 'claude')
```

## LLM cost estimation
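Cost estimation is plain arithmetic once token counts exist: providers quote prices per million tokens, with separate input and output rates. A sketch with placeholder rates (the numbers below are illustrative, always check the provider's current pricing):

```javascript
// Cost is (tokens / 1e6) * rate, computed separately for input and output.
// The per-million rates passed in are placeholders, not real pricing.
function estimateCostSketch(inputTokens, outputTokens, inPerMillion, outPerMillion) {
  const inputCost = (inputTokens / 1e6) * inPerMillion
  const outputCost = (outputTokens / 1e6) * outPerMillion
  return { inputCost, outputCost, totalCost: inputCost + outputCost }
}

estimateCostSketch(5, 2000, 3, 15).totalCost
// 5/1e6 * 3 + 2000/1e6 * 15 = 0.000015 + 0.03 = 0.030015
```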
```js
import { estimateCost, BudgetTracker } from 'atomus-ai/cost'

const cost = estimateCost('Analyze this document...', 'claude-sonnet-4', 2000)
// => { totalCost: 0.000045, inputTokens: 5, outputTokens: 2000, ... }

// Track spending
const budget = new BudgetTracker(10.00) // $10 budget
budget.record(cost.totalCost)
console.log(`Remaining: $${budget.remaining}`)
```

## Build tool schemas
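The multi-format builder matters because providers expect the same JSON Schema parameters in different envelopes: OpenAI nests them under `function.parameters`, while Claude's Messages API takes a flat object with `input_schema`. Hand-rolled here for comparison (this is the shape the `build*` methods presumably emit, not their verified output):

```javascript
// One JSON Schema parameter definition, two provider envelopes.
const params = {
  type: 'object',
  properties: {
    query: { type: 'string', description: 'Search query' },
    limit: { type: 'integer', description: 'Max results', default: 10 },
  },
  required: ['query'],
}

// OpenAI tool format: nested under "function"
const openaiTool = {
  type: 'function',
  function: { name: 'web_search', description: 'Search the web', parameters: params },
}

// Claude tool format: flat, with "input_schema"
const claudeTool = {
  name: 'web_search',
  description: 'Search the web',
  input_schema: params,
}
```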
```js
import { Schema } from 'atomus-ai/schema'

const searchTool = Schema.create('Search the web for information')
  .string('query', 'Search query')
  .integer('limit', 'Max results', { default: 10, required: false })

// Output for Claude
searchTool.buildClaude('web_search')

// Output for OpenAI
searchTool.buildOpenAI('web_search')

// Output for MCP
searchTool.buildMCP('web_search')
```

## Conversation memory
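Sliding-window memory drops the oldest turns until the rest fits the budget, keeping the most recent context. A sketch using a character budget in place of token counts (this illustrates the general technique, not `ConversationMemory`'s exact behavior, which also pins the system message and reserves output room):

```javascript
// Keep the most recent messages whose combined length fits the budget.
function slidingWindow(messages, budget) {
  const kept = []
  let used = 0
  // Walk backwards from the newest message, stop when the budget is spent
  for (let i = messages.length - 1; i >= 0; i--) {
    const len = messages[i].content.length
    if (used + len > budget) break
    kept.unshift(messages[i])
    used += len
  }
  return kept
}

slidingWindow(
  [{ content: 'aaaa' }, { content: 'bbbb' }, { content: 'cccc' }],
  9
)
// => keeps only the last two messages (8 of 9 budget used)
```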
```js
import { ConversationMemory } from 'atomus-ai/memory'

const memory = new ConversationMemory({
  maxTokens: 100000,
  model: 'claude',
  systemMessage: 'You are a helpful assistant.',
  reserveOutput: 4000,
})

memory.add('user', 'What is quantum computing?')
memory.add('assistant', 'Quantum computing uses...')

// Automatically fits within the token budget
const messages = await memory.getMessages()
```

## Streaming
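Streaming APIs speak server-sent events: each chunk arrives as a `data:` line carrying a JSON payload, and OpenAI ends the stream with `data: [DONE]`. A minimal accumulator over pre-split lines (real parsers must also buffer partial lines across network chunks; the payload shape here is OpenAI's chat-completions delta format):

```javascript
// Accumulate text deltas from SSE "data:" lines (OpenAI-style payloads).
function accumulateSSE(lines) {
  let text = ''
  for (const line of lines) {
    if (!line.startsWith('data:')) continue
    const payload = line.slice(5).trim()
    if (payload === '[DONE]') break // end-of-stream sentinel
    const delta = JSON.parse(payload).choices?.[0]?.delta?.content
    if (delta) text += delta
  }
  return text
}

accumulateSSE([
  'data: {"choices":[{"delta":{"content":"Hel"}}]}',
  'data: {"choices":[{"delta":{"content":"lo"}}]}',
  'data: [DONE]',
])
// => "Hello"
```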
```js
import { processClaudeStream, processOpenAIStream } from 'atomus-ai/stream'

// Claude streaming
const text = await processClaudeStream(response, (delta) => {
  process.stdout.write(delta.delta) // Real-time output
})

// OpenAI streaming
const text2 = await processOpenAIStream(response, (delta) => {
  process.stdout.write(delta.delta)
})
```

## Security (Shield)
```js
import { AgentIdentity, Cipher, sanitizeInput } from 'atomus-ai/shield'

// Create agent identity (Ed25519)
const agent = new AgentIdentity()
const signed = agent.createSignedMessage({ action: 'transfer', amount: 100 })
const valid = agent.verifySignedMessage(signed) // true

// Encrypt sensitive data (AES-256-GCM)
const cipher = new Cipher('my-secret-key')
const encrypted = cipher.encrypt('sensitive data')
const decrypted = cipher.decrypt(encrypted)

// Protect against prompt injection
const safe = sanitizeInput(userInput)
```

## Build agents
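Tool execution inside an agent loop is essentially a dispatch table: the model names a tool and supplies parameters, the runtime looks up the handler and awaits it. A sketch of that mechanism (names and shapes here are illustrative, not `Agent`'s internal API):

```javascript
// Minimal tool registry and dispatcher
const tools = new Map()

function registerTool(name, handler) {
  tools.set(name, handler)
}

async function dispatch(call) {
  const handler = tools.get(call.name)
  if (!handler) throw new Error(`Unknown tool: ${call.name}`)
  return handler(call.params)
}

registerTool('search', async ({ query }) => ({ results: [`result for ${query}`] }))

// dispatch({ name: 'search', params: { query: 'AI papers' } })
// resolves to { results: ['result for AI papers'] }
```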
```js
import { Agent, AgentSwarm } from 'atomus-ai/agent'

const agent = new Agent({
  name: 'researcher',
  systemPrompt: 'You are a research assistant.',
  model: 'claude',
  enableIdentity: true,
  llmCall: async (messages, tools) => {
    // Your LLM API call here
    return { content: 'Response from LLM' }
  },
})

agent.tool('search', 'Search the web', async (params) => {
  return { results: ['...'] }
})

const response = await agent.run('Find the latest AI research papers')

// Multi-agent coordination
const swarm = new AgentSwarm()
swarm.add(researcher)
swarm.add(coder)
swarm.add(reviewer)
const results = await swarm.broadcast('Analyze this codebase')
```

## Tree-shaking
Import only what you need — unused modules are eliminated:
```js
// Only imports ~1KB of code
import { parseJSON } from 'atomus-ai/parse'

// Only imports ~1KB of code
import { retry } from 'atomus-ai/retry'
```

## Comparison
| Feature | atomus-ai | langchain | ai (Vercel) | tiktoken |
|---------|-----------|-----------|-------------|----------|
| Bundle size | ~10KB | 2MB+ | 200KB+ | 3MB+ |
| Dependencies | 0 | 50+ | 10+ | 1 |
| Token counting | Yes (~5%) | Via tiktoken | No | Exact |
| Tool schemas | Multi-format | Own format | Own format | No |
| Streaming | Universal | Yes | Yes | No |
| Security | Ed25519 + AES | No | No | No |
| Agent framework | Yes | Yes | No | No |
| Tree-shakeable | Yes | Partial | Yes | No |
## Funding
If atomus-ai saves you time, consider sponsoring the project.
Built by Padrao Bitcoin — building the atomic layer for AI.
## License
MIT
