@node-llm/orm


Database persistence layer for NodeLLM. Automatically tracks chats, messages, tool calls, and API requests.

Read the Full Documentation | View Example App

Features

  • Automatic Persistence - Messages, tool calls, and API metrics saved automatically
  • Streaming Support - Real-time token delivery with askStream()
  • Provider Agnostic - Works with any NodeLLM provider (OpenAI, Anthropic, Gemini, OpenRouter, etc.)
  • Type Safe - Full TypeScript support with Prisma
  • Audit Trail - Complete history of tool executions and API calls
  • Flexible - Prisma adapter included, other ORMs can be added

Installation

npm install @node-llm/orm @node-llm/core @prisma/client
npm install -D prisma

Quick Start

Option 1: Using the CLI (Recommended)

# Generate schema.prisma automatically
npx node-llm-orm init

# Create and apply migration
npx prisma migrate dev --name init
npx prisma generate

Option 2: Manual Setup

1. Add the Schema

Copy the reference schema into your project:

cp node_modules/@node-llm/orm/schema.prisma prisma/schema.prisma

Or manually add the models to your existing prisma/schema.prisma:

model LlmChat {
  id           String       @id @default(uuid())
  model        String?
  provider     String?
  instructions String?      @db.Text
  metadata     String?      @db.Text
  createdAt    DateTime     @default(now())
  updatedAt    DateTime     @updatedAt

  messages     LlmMessage[]
  requests     LlmRequest[]
}

model LlmMessage {
  id           String        @id @default(uuid())
  chatId       String
  role         String
  content      String?       @db.Text
  // ... see schema.prisma for full definition
}

model LlmToolCall {
  id         String   @id @default(uuid())
  messageId  String
  toolCallId String
  name       String
  arguments  String   @db.Text
  // ... see schema.prisma for full definition
}

model LlmRequest {
  id           String   @id @default(uuid())
  chatId       String
  messageId    String?
  provider     String
  model        String
  // ... see schema.prisma for full definition
}

2. Run Migration

npx prisma migrate dev --name init
npx prisma generate

3. Use the ORM

import { PrismaClient } from "@prisma/client";
import { createLLM } from "@node-llm/core";
import { createChat } from "@node-llm/orm/prisma";

const prisma = new PrismaClient();
const llm = createLLM({ provider: "openai" });

// Create a new chat
const chat = await createChat(prisma, llm, {
  model: "gpt-4",
  instructions: "You are a helpful assistant."
});

// Ask a question (automatically persisted)
const response = await chat.ask("What is the capital of France?");
console.log(response.content); // "The capital of France is Paris."

// View conversation history
const messages = await chat.messages();
console.log(messages); // [{ role: 'user', content: '...' }, { role: 'assistant', content: '...' }]

Architecture

The ORM tracks five core entities:

| Model           | Purpose              | Example                                     |
| --------------- | -------------------- | ------------------------------------------- |
| LlmAgentSession | Agent persistence    | Links Agent class to Chat (v0.5.0+)         |
| LlmChat         | Session container    | Holds model, provider, system instructions  |
| LlmMessage      | Conversation history | User queries and assistant responses        |
| LlmToolCall     | Tool executions      | Function calls made by the assistant        |
| LlmRequest      | API metrics          | Token usage, latency, cost per API call     |
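Because these are plain Prisma models, you can also query across them directly. A minimal sketch, assuming the relation fields from the schema above (messages, requests) and Prisma's default camelCase client accessors:

// Fetch a chat together with its conversation history and API metrics
const fullChat = await prisma.llmChat.findUnique({
  where: { id: chat.id },
  include: {
    messages: true, // LlmMessage rows
    requests: true  // LlmRequest rows
  }
});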

Data Flow

User Input
    ↓
Chat.ask()
    ↓
┌────────────────────────────────────┐
│ 1. Create User Message (DB)        │
│ 2. Create Assistant Message (DB)   │
│ 3. Fetch History (DB)              │
│ 4. Call LLM API                    │
│    ├─ onToolCallEnd → ToolCall (DB)│
│    └─ afterResponse → Request (DB) │
│ 5. Update Assistant Message (DB)   │
└────────────────────────────────────┘
    ↓
Return Response

Agent Sessions (v0.5.0+)

For stateful agents with persistence, use AgentSession. This follows the "Code Wins" principle:

  • Model, Tools, Instructions → from Agent class (code)
  • Message History → from database

Define an Agent (in @node-llm/core)

import { Agent, Tool, z } from "@node-llm/core";

class LookupOrderTool extends Tool {
  static definition = {
    name: "lookup_order",
    description: "Look up order status",
    parameters: z.object({ orderId: z.string() })
  };
  async execute({ orderId }: { orderId: string }) {
    // Replace with a real lookup against your order system
    return { status: "shipped", eta: "Tomorrow" };
  }
}

class SupportAgent extends Agent {
  static model = "gpt-4.1";
  static instructions = "You are a helpful support agent.";
  static tools = [LookupOrderTool];
}

Create & Resume Sessions

import { createAgentSession, loadAgentSession } from "@node-llm/orm/prisma";

// Create a new persistent session
const session = await createAgentSession(prisma, llm, SupportAgent, {
  metadata: { userId: "user_123", ticketId: "TKT-456" }
});

await session.ask("Where is my order #789?");
console.log(session.id); // "sess_abc123" - save this!

// Resume later (even after code upgrades)
const resumed = await loadAgentSession(prisma, llm, SupportAgent, "sess_abc123");
await resumed.ask("Can you cancel it?");

Code Wins Principle

When you deploy a code change (new model, updated tools), resumed sessions use the new configuration:

| Aspect       | Source      | Why                             |
| ------------ | ----------- | ------------------------------- |
| Model        | Agent class | Immediate upgrades              |
| Tools        | Agent class | Only code can execute functions |
| Instructions | Agent class | Deploy prompt fixes immediately |
| History      | Database    | Sacred, never modified          |
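In practice, this means a redeploy is enough to upgrade sessions that are already in flight. A sketch reusing loadAgentSession from above (the stored session id is hypothetical):

// Deploy a change to SupportAgent (e.g., a new static model or an extra tool),
// then resume an existing session:
const resumed = await loadAgentSession(prisma, llm, SupportAgent, storedSessionId);

// `resumed` now runs with the new model, tools, and instructions from code,
// while its message history is loaded unchanged from the database.
await resumed.ask("Any update on my order?");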

Schema Addition

Add LlmAgentSession to your Prisma schema:

model LlmAgentSession {
  id         String   @id @default(uuid())
  agentClass String   // For validation (e.g., 'SupportAgent')
  chatId     String   @unique
  metadata   Json?    // Session context (userId, ticketId)
  createdAt  DateTime @default(now())
  updatedAt  DateTime @updatedAt

  chat       LlmChat  @relation(fields: [chatId], references: [id], onDelete: Cascade)

  @@index([agentClass])
  @@index([createdAt])
}

// Add relation to LlmChat
model LlmChat {
  // ... existing fields
  agentSession LlmAgentSession?
}
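With the model in place, sessions can be queried like any other Prisma model. A sketch, assuming Prisma's default llmAgentSession client accessor:

// List the ten most recent SupportAgent sessions
// (served by the @@index([agentClass]) and @@index([createdAt]) indexes above)
const recent = await prisma.llmAgentSession.findMany({
  where: { agentClass: "SupportAgent" },
  orderBy: { createdAt: "desc" },
  take: 10
});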

Advanced Usage

Streaming Responses

For real-time UX, use askStream() to yield tokens as they arrive:

import { createChat } from "@node-llm/orm/prisma";

const chat = await createChat(prisma, llm, {
  model: "gpt-4"
});

// Stream tokens in real-time
for await (const token of chat.askStream("Tell me a story")) {
  process.stdout.write(token); // Print each token immediately
}

// Message is automatically persisted after streaming completes
const messages = await chat.messages();
console.log(messages[messages.length - 1].content); // Full story

React/Next.js Streaming Example

// app/api/chat/route.ts
import { createChat, loadChat } from "@node-llm/orm/prisma";

// `prisma` and `llm` are initialized once, as in the Quick Start example

export async function POST(req: Request) {
  const { message, chatId } = await req.json();

  const chat = chatId
    ? await loadChat(prisma, llm, chatId)
    : await createChat(prisma, llm, { model: "gpt-4" });

  const stream = new ReadableStream({
    async start(controller) {
      for await (const token of chat.askStream(message)) {
        controller.enqueue(new TextEncoder().encode(token));
      }
      controller.close();
    }
  });

  return new Response(stream);
}
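On the client, the response body can be consumed with the standard ReadableStream reader API. A minimal browser-side sketch (the /api/chat path matches the route above):

const res = await fetch("/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ message: "Tell me a story" })
});

const reader = res.body!.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // Append each decoded chunk to the UI as it arrives
  console.log(decoder.decode(value, { stream: true }));
}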

Custom Table Names

If you have existing tables with different names (e.g., AssistantChat instead of LlmChat), you can specify custom table names:

import { createChat, loadChat } from "@node-llm/orm/prisma";

const tableNames = {
  chat: "assistantChat",
  message: "assistantMessage",
  toolCall: "assistantToolCall",
  request: "assistantRequest"
};

// Create chat with custom table names
const chat = await createChat(prisma, llm, { model: "gpt-4" }, tableNames);

// Load chat (must use same table names)
const loaded = await loadChat(prisma, llm, chatId, tableNames);

Note: The table names you specify must correspond to models in your Prisma schema — "assistantChat" is the Prisma client accessor for a model named AssistantChat. For example:

model AssistantChat {
  // ... fields
  @@map("assistantChat") // Optional: map to different database table name
}

With Tools

import { createChat } from "@node-llm/orm/prisma";
import { searchTool } from "./tools/search";

const chat = await createChat(prisma, llm, {
  model: "gpt-4",
  tools: [searchTool]
});

await chat.ask("Search for NodeLLM documentation");
// Tool calls are automatically persisted to ToolCall table

Loading Existing Chats

import { loadChat } from "@node-llm/orm/prisma";

const chat = await loadChat(prisma, llm, "chat-id-123");
if (chat) {
  await chat.ask("Continue our conversation");
}

Querying Metrics

// Get all API requests for a chat
const requests = await prisma.llmRequest.findMany({
  where: { chatId: chat.id },
  orderBy: { createdAt: "desc" }
});

console.log(
  `Total tokens: ${requests.reduce((sum, r) => sum + r.inputTokens + r.outputTokens, 0)}`
);
console.log(`Total cost: $${requests.reduce((sum, r) => sum + (r.cost || 0), 0)}`);
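For per-model breakdowns, Prisma's groupBy can aggregate the same fields. A sketch, assuming the inputTokens, outputTokens, and cost columns used above:

// Token usage and cost per model, across all chats
const byModel = await prisma.llmRequest.groupBy({
  by: ["model"],
  _sum: { inputTokens: true, outputTokens: true, cost: true },
  _count: true
});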

Custom Fields & Metadata

You can add custom fields (like userId, projectId, tenantId) to your Prisma schema and pass them directly to createChat. The library will pass these fields through to the generic Prisma create call.

1. Update your Prisma Schema:

model LlmChat {
  // ... standard fields
  metadata     Json?      // Use Json type for flexible storage
  userId       String?    // Custom field
  projectId    String?    // Custom field
}

2. Pass fields to createChat:

const chat = await createChat(prisma, llm, {
  model: "gpt-4",
  instructions: "You are consistent.",
  // Custom fields passed directly
  userId: "user_123",
  projectId: "proj_abc",
  // Metadata is passed as-is (native JSON support)
  metadata: {
    source: "web-client",
    tags: ["experiment-a"]
  }
});
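Those custom columns then work in ordinary Prisma queries. A sketch using the userId field added above:

// All chats belonging to a given user, newest first
const userChats = await prisma.llmChat.findMany({
  where: { userId: "user_123" },
  orderBy: { createdAt: "desc" }
});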

Persistence Configuration

By default, the ORM persists everything: messages, tool calls, and API requests. If you don't need certain tables (e.g., you're building a minimal app without tool tracking), you can disable specific persistence features:

const chat = await createChat(prisma, llm, {
  model: "gpt-4",
  persistence: {
    toolCalls: false, // Skip LlmToolCall table
    requests: false // Skip LlmRequest table
  }
});

Use Cases:

  • Minimal Schema: Only create LlmChat and LlmMessage tables
  • Privacy: Disable request logging for sensitive applications
  • Performance: Reduce database writes for high-throughput scenarios

Note: Message persistence is always enabled (required for conversation history).

Environment Variables

The ORM respects NodeLLM's provider configuration:

# Chat Provider
NODELLM_PROVIDER="openrouter"
NODELLM_MODEL="google/gemini-2.0-flash-001"

# Embedding Provider (optional, for RAG apps)
NODELLM_EMBEDDING_PROVIDER="openai"
NODELLM_EMBEDDING_MODEL="text-embedding-3-small"
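Since these are plain environment variables, you can also read them explicitly when creating chats. A sketch with a hard-coded fallback (the fallback value is an assumption, not a library default):

// Prefer the configured model, fall back to a fixed one
const chat = await createChat(prisma, llm, {
  model: process.env.NODELLM_MODEL ?? "gpt-4"
});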

Roadmap

  • [ ] TypeORM adapter
  • [ ] Drizzle adapter
  • [ ] Migration utilities
  • [ ] Analytics dashboard
  • [ ] Export/import conversations

License

MIT