
🤝 TrustCallJS

TypeScript port of trustcall - Utilities for validated tool calling and extraction with retries using LLMs.

Built on top of @langchain/langgraph.

Installation

npm install trustcalljs @langchain/langgraph @langchain/core

Why TrustCallJS?

Tool calling makes it easier to compose LLM calls within reliable software systems, but LLMs today can be error-prone and inefficient in two common scenarios:

  1. Populating complex, nested schemas - LLMs often make validation errors on deeply nested structures
  2. Updating existing schemas without information loss - Regenerating entire objects can lose important data

TrustCallJS solves these problems using JSONPatch to correct validation errors, reducing costs and improving reliability.
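
For readers unfamiliar with JSONPatch (RFC 6902), a correction is a small list of operations that target only the invalid fields. A minimal illustration (the values are hypothetical, not actual TrustCallJS output):

// Suppose the model produced { name: "Alice", age: "thirty" } and
// age failed Zod validation. One replace operation fixes it in place:
const patch = [
  { op: "replace", path: "/age", value: 30 },
];
// Applying the patch yields { name: "Alice", age: 30 } without
// regenerating the fields that were already valid.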

Quick Start

import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";
import { createExtractor } from "trustcalljs";

// Define your schema
const UserInfo = z.object({
  name: z.string().describe("User's full name"),
  age: z.number().describe("User's age in years"),
}).describe("UserInfo");

// Create the extractor
const llm = new ChatOpenAI({ model: "gpt-4o-mini" });
const extractor = createExtractor(llm, {
  tools: [UserInfo],
});

// Extract structured data - simplest form with a string
const result = await extractor.invoke(
  "My name is Alice and I'm 30 years old"
);

console.log(result.responses[0]);
// { name: "Alice", age: 30 }

Input Formats

The extractor supports multiple input formats:

// 1. Simple string (converted to HumanMessage internally)
const result = await extractor.invoke("My name is Alice and I'm 30");

// 2. Single BaseMessage
import { HumanMessage } from "@langchain/core/messages";
const result = await extractor.invoke(
  new HumanMessage("My name is Alice and I'm 30")
);

// 3. Array of BaseMessage (LangGraph MessagesValue compatible)
const result = await extractor.invoke({
  messages: [new HumanMessage("My name is Alice and I'm 30")],
});

// 4. OpenAI-style message dict format
const result = await extractor.invoke({
  messages: [{ role: "user", content: "My name is Alice and I'm 30" }],
});

// 5. With existing data for updates
const result = await extractor.invoke({
  messages: [{ role: "user", content: "Change my age to 31" }],
  existing: { UserInfo: { name: "Alice", age: 30 } },
});

Features

Complex Schema Extraction

TrustCallJS handles complex, deeply nested schemas that often cause validation errors:

const TelegramPreferences = z.object({
  communication: z.object({
    telegram: z.object({
      preferredEncoding: z.enum(["morse", "standard"]),
      paperType: z.string().optional(),
    }),
    semaphore: z.object({
      flagColor: z.string(),
    }),
  }),
}).describe("TelegramPreferences");

const extractor = createExtractor(llm, {
  tools: [TelegramPreferences],
});

// Even with complex schemas, TrustCallJS will retry and fix validation errors
const result = await extractor.invoke({
  messages: `Extract preferences from: 
    User: I'd like morse code on daredevil paper`,
});

Updating Existing Data

Update existing schemas without losing information:

const UserPreferences = z.object({
  name: z.string(),
  favoriteColors: z.array(z.string()),
  settings: z.object({
    notifications: z.boolean(),
    theme: z.enum(["light", "dark"]),
  }),
}).describe("UserPreferences");

const existing = {
  UserPreferences: {
    name: "Alice",
    favoriteColors: ["blue", "green"],
    settings: {
      notifications: true,
      theme: "light",
    },
  },
};

const extractor = createExtractor(llm, {
  tools: [UserPreferences],
  enableUpdates: true,
});

const result = await extractor.invoke({
  messages: [{ role: "user", content: "I now prefer dark theme and add purple to my colors" }],
  existing,
});

// Result preserves existing data while applying updates:
// {
//   name: "Alice",
//   favoriteColors: ["blue", "green", "purple"],
//   settings: { notifications: true, theme: "dark" }
// }

Validation and Retries

TrustCallJS automatically:

  • Validates tool call outputs against your schemas
  • Generates JSONPatch operations to fix validation errors
  • Retries with corrections up to a configurable maximum

const extractor = createExtractor(llm, {
  tools: [MySchema],
});

// Configure max retry attempts
const result = await extractor.invoke(
  { messages: "..." },
  { configurable: { max_attempts: 5 } }
);

// Check how many attempts were needed
console.log(`Extraction completed in ${result.attempts} attempts`);

API Reference

createExtractor(llm, options)

Creates an extractor runnable.

Parameters:

  • llm: A LangChain chat model (e.g., ChatOpenAI, ChatAnthropic)
  • options: Extractor configuration
    • tools: Array of Zod schemas, structured tools, or functions
    • toolChoice?: Force a specific tool to be used
    • enableInserts?: Allow creating new schemas when updating (default: false)
    • enableUpdates?: Allow updating existing schemas (default: true)
    • enableDeletes?: Allow deleting existing schemas (default: false)
    • existingSchemaPolicy?: How to handle existing data whose schema name doesn't match any provided tool; true raises an error (default: true)

Returns: An extractor with invoke() and stream() methods.
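
A sketch combining a few of these options; it assumes toolChoice takes the tool's name as a string and that the defaults behave as described above:

const extractor = createExtractor(llm, {
  tools: [UserInfo, UserPreferences],
  toolChoice: "UserInfo", // assumption: the tool's name as a string
  enableInserts: true,    // allow brand-new records alongside updates
  enableDeletes: false,   // keep the default: never drop existing data
});

const outputs = await extractor.invoke("My name is Alice and I'm 30");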

ExtractionOutputs

interface ExtractionOutputs {
  messages: AIMessage[];      // The AI messages with tool calls
  responses: unknown[];       // Validated schema instances
  responseMetadata: Array<{   // Metadata about each response
    id: string;
    jsonDocId?: string;
  }>;
  attempts: number;           // Number of extraction attempts
}
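
Based on this interface, consuming the outputs might look like the following; it assumes responseMetadata[i] lines up by index with the tool call that produced responses[i]:

const result = await extractor.invoke("My name is Alice and I'm 30");

// Pair each validated object with its metadata entry (assumed 1:1 by index).
result.responses.forEach((response, i) => {
  const meta = result.responseMetadata[i];
  console.log(`tool call ${meta.id}:`, response);
});
console.log(`succeeded after ${result.attempts} attempt(s)`);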

ValidationNode

A standalone validation node for use in custom graphs:

import { ValidationNode } from "trustcalljs";

const validator = new ValidationNode([UserInfo, Preferences], {
  formatError: (error, call, schema) => `Custom error: ${error.message}`,
});

// `messages` is an array of AI messages whose tool calls should be validated
const result = await validator.invoke({ messages });
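
A hedged sketch of wiring the node into a custom graph; it assumes ValidationNode conforms to LangGraph's node signature and that your state carries a messages channel:

import { StateGraph, MessagesAnnotation, START, END } from "@langchain/langgraph";

// Assumption: the validator can be added directly as a node that reads
// and writes the `messages` channel of the graph state.
const graph = new StateGraph(MessagesAnnotation)
  .addNode("validate", validator)
  .addEdge(START, "validate")
  .addEdge("validate", END)
  .compile();

const state = await graph.invoke({ messages });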

How It Works

  1. Initial Extraction: The LLM generates tool calls based on input
  2. Validation: Tool calls are validated against Zod schemas
  3. Error Detection: Validation errors are detected and formatted
  4. Patch Generation: The LLM generates JSONPatch operations to fix errors
  5. Application: Patches are applied to the original arguments
  6. Retry: The process repeats until validation passes or the maximum number of attempts is reached
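
Traced concretely (all values hypothetical), one cycle of the loop looks like this:

// 1. Initial extraction:  UserInfo({ name: "Alice", age: "thirty" })
// 2. Validation:          Zod rejects age (expected number, received string)
// 3. Error detection:     the failure is formatted and fed back to the LLM
// 4. Patch generation:    [{ op: "replace", path: "/age", value: 30 }]
// 5. Application:         arguments become { name: "Alice", age: 30 }
// 6. Retry:               validation now passes, so the loop exits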

This approach is more efficient than regenerating entire outputs because:

  • Only the failing parts are regenerated
  • Existing correct data is preserved
  • Fewer output tokens are needed for fixes

License

MIT