@node-llm/core
v1.6.2
A provider-agnostic LLM core for Node.js, inspired by ruby-llm.
The production-grade LLM engine for Node.js. Provider-agnostic by design.
@node-llm/core provides a single, unified API for interacting with 540+ models across all major providers. It is built for developers who need stable infrastructure, standard streaming, and automated tool execution without vendor lock-in.
🚀 Key Features
- Unified API: One interface for OpenAI, Anthropic, Gemini, DeepSeek, OpenRouter, and Ollama.
- Automated Tool Loops: Recursive tool execution handled automatically—no manual loops required.
- Streaming + Tools: Seamlessly execute tools and continue the stream with the final response.
- Structured Output: Native Zod support for rigorous schema validation (.withSchema()).
- Multimodal Engine: Built-in handling for Vision, Audio (Whisper), and Video (Gemini).
- Security-First: Integrated circuit breakers for timeouts, max tokens, and infinite tool loops.
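The "automated tool loop" idea above can be sketched in a few lines: keep asking the model, execute any tools it requests, feed the results back, and stop when it answers in plain text or a call cap trips. The types and names below are hypothetical stand-ins, not the library's real API; a self-contained sketch only.

```typescript
// Hypothetical shapes -- NOT @node-llm/core's real types.
type ToolCall = { name: string; args: unknown };
type ModelTurn = { content?: string; toolCalls?: ToolCall[] };

// Minimal automated tool loop with a maxToolCalls circuit breaker.
function runToolLoop(
  model: (history: string[]) => ModelTurn,
  tools: Record<string, (args: unknown) => string>,
  maxToolCalls = 5
): string {
  const history: string[] = [];
  let calls = 0;
  while (true) {
    const turn = model(history);
    // Plain-text answer: the loop is done.
    if (!turn.toolCalls || turn.toolCalls.length === 0) {
      return turn.content ?? "";
    }
    // Execute each requested tool and append its result to the history.
    for (const call of turn.toolCalls) {
      if (++calls > maxToolCalls) {
        throw new Error(`Tool call limit (${maxToolCalls}) exceeded`);
      }
      history.push(`${call.name} -> ${tools[call.name](call.args)}`);
    }
  }
}

// Mock model: requests the weather tool once, then answers with its result.
const answer = runToolLoop(
  (history) =>
    history.length === 0
      ? { toolCalls: [{ name: "weather", args: "Paris" }] }
      : { content: `It is ${history[0].split(" -> ")[1]} in Paris.` },
  { weather: () => "sunny" }
);
console.log(answer); // "It is sunny in Paris."
```

The same cap is what the library's maxToolCalls option (shown under Security Circuit Breakers below) configures: without it, a model that keeps requesting tools would loop forever.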
📋 Supported Providers
| Provider   | Supported Features                                               |
| :--------- | :--------------------------------------------------------------- |
| OpenAI     | Chat, Streaming, Tools, Vision, Audio, Images, Reasoning (o1/o3) |
| Anthropic  | Chat, Streaming, Tools, Vision, PDF Support (Claude 3.5)         |
| Gemini     | Chat, Streaming, Tools, Vision, Audio, Video, Embeddings         |
| DeepSeek   | Chat (V3), Reasoning (R1), Streaming + Tools                     |
| OpenRouter | 540+ models via a single API with automatic capability detection |
| Ollama     | Local LLM inference with full Tool and Vision support            |
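The provider-agnostic design in the table above boils down to one interface with interchangeable backends. The mock below illustrates the idea only; the providers and `createMockLLM` helper are hypothetical, not the library's internals.

```typescript
// Hypothetical mock -- illustrates provider-agnostic dispatch, not the
// real @node-llm/core internals.
interface Provider {
  complete(prompt: string): string;
}

// Two stand-in providers behind the same interface.
const providers: Record<string, Provider> = {
  openai: { complete: (p) => `[openai] ${p}` },
  anthropic: { complete: (p) => `[anthropic] ${p}` },
};

// Application code talks to one interface; swapping vendors is a
// one-line config change -- the "no vendor lock-in" claim above.
function createMockLLM(name: string): Provider {
  const provider = providers[name];
  if (!provider) throw new Error(`Unknown provider: ${name}`);
  return provider;
}

console.log(createMockLLM("openai").complete("hi"));    // [openai] hi
console.log(createMockLLM("anthropic").complete("hi")); // [anthropic] hi
```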
⚡ Quick Start
Installation
```bash
npm install @node-llm/core
```
Basic Chat & Streaming
NodeLLM automatically reads your API keys from environment variables (e.g., OPENAI_API_KEY).
```typescript
import { createLLM } from "@node-llm/core";

const llm = createLLM({ provider: "openai" });

// 1. Standard Request
const res = await llm.chat("gpt-4o").ask("What is the speed of light?");
console.log(res.content);

// 2. Real-time Streaming
for await (const chunk of llm.chat().stream("Tell me a long story")) {
  process.stdout.write(chunk.content);
}
```
Structured Output (Zod)
Stop parsing markdown. Get typed objects directly.
```typescript
import { z } from "@node-llm/core";

const PlayerSchema = z.object({
  name: z.string(),
  powerLevel: z.number(),
  abilities: z.array(z.string())
});

const chat = llm.chat("gpt-4o-mini").withSchema(PlayerSchema);
const response = await chat.ask("Generate a random RPG character");
console.log(response.parsed.name); // Fully typed!
```
🛡️ Security Circuit Breakers
NodeLLM protects your production environment with four built-in safety pillars:
```typescript
const llm = createLLM({
  requestTimeout: 15000, // 15s DoS Protection
  maxTokens: 4096,       // Cost Protection
  maxRetries: 3,         // Retry Storm Protection
  maxToolCalls: 5        // Infinite Loop Protection
});
```
💾 Ecosystem
Looking for persistence? Use @node-llm/orm.
- Automatically saves chat history to PostgreSQL/MySQL/SQLite via Prisma.
- Tracks tool execution results and API metrics (latency, cost, tokens).
📚 Full Documentation
Visit node-llm.eshaiju.com for the complete documentation.
License
MIT © NodeLLM Contributors
