
@synstack/llm

v2.3.18

Immutable, chainable, and type-safe wrapper of Vercel's AI SDK.

Installation

pnpm add @synstack/llm ai zod
yarn add @synstack/llm ai zod
npm install @synstack/llm ai zod

To add models, install the appropriate provider package:

pnpm add @ai-sdk/openai # or @ai-sdk/[provider-name]
yarn add @ai-sdk/openai # or @ai-sdk/[provider-name]
npm install @ai-sdk/openai # or @ai-sdk/[provider-name]

Features

Completion Building

The completion builder provides a type-safe API to configure LLM completions:

import {
  completion,
  systemMsg,
  userMsg,
  assistantMsg,
  filePart,
} from "@synstack/llm"; // or @synstack/synscript/llm
import { openai } from "@ai-sdk/openai";

const baseCompletion = completion
  .model(openai("gpt-4"))
  .maxTokens(20)
  .temperature(0.8);

const imageToLanguagePrompt = (imagePath: string) => [
  systemMsg`
    You are a helpful assistant that can identify the language of the text in the image.
  `,
  userMsg`
    Here is the image: ${filePart.fromPath(imagePath)}
  `,
  assistantMsg`
    The language of the text in the image is
  `,
];

const imageToLanguageAgent = (imagePath: string) =>
  baseCompletion.prompt(imageToLanguagePrompt(imagePath)).generateText();

Model Configuration

  • model(): Set the language model
  • maxTokens(): Set maximum tokens to generate
  • temperature(): Set temperature (0-1)
  • topP(), topK(): Configure sampling parameters
  • frequencyPenalty(), presencePenalty(): Adjust output diversity
  • seed(): Set random seed for deterministic results
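The configuration methods above chain off a single builder; each call returns a new immutable completion. A minimal sketch (the model id and parameter values are illustrative, not recommendations):

```typescript
import { completion } from "@synstack/llm";
import { openai } from "@ai-sdk/openai";

// Each method returns a new completion, so intermediate
// configurations can be reused without mutation.
const tunedCompletion = completion
  .model(openai("gpt-4"))
  .maxTokens(256)
  .temperature(0.2) // low temperature for more deterministic output
  .topP(0.9)
  .frequencyPenalty(0.5)
  .seed(42); // fixed seed for reproducible sampling
```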

Flow Control

  • maxSteps(): Maximum number of sequential LLM calls
  • maxRetries(): Number of retry attempts
  • stopSequences(): Define sequences that stop generation
  • abortSignal(): Cancel ongoing completions
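abortSignal() accepts a standard AbortSignal, so cancellation can be wired with an AbortController. A sketch of combining the flow-control methods above (the timeout duration, stop sequence, and model id are illustrative):

```typescript
import { completion } from "@synstack/llm";
import { openai } from "@ai-sdk/openai";

const controller = new AbortController();

const guardedCompletion = completion
  .model(openai("gpt-4"))
  .maxSteps(3) // at most 3 sequential LLM calls
  .maxRetries(2) // retry transient failures twice
  .stopSequences(["END"]) // stop generating when "END" appears
  .abortSignal(controller.signal);

// Cancel the completion if it runs longer than 30 seconds;
// clear the timer once the completion resolves.
const timeout = setTimeout(() => controller.abort(), 30_000);
```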

Generation Methods

  • generateText(): Generate text completion
  • streamText(): Stream text completion
  • generateObject(): Generate structured object
  • streamObject(): Stream structured object

Message Building

Messages can be built using template strings with various features:

  • Interpolate promises or arrays of promises in your template strings as if they were synchronous
  • Format your prompt for readability with automatic trimming and padding removal
  • Type-safe template values that prevent invalid prompt content
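The features above can be sketched in one message: per the list, promises can be embedded directly in the template string and are awaited for you, and the leading indentation is trimmed automatically. fetchUserQuestion is a hypothetical async helper, not part of the package:

```typescript
import { userMsg, filePart } from "@synstack/llm";

// Hypothetical helper returning Promise<string>, for illustration only.
declare function fetchUserQuestion(): Promise<string>;

// The promise and the file part are interpolated as if synchronous;
// indentation and surrounding padding are stripped from the final prompt.
const msg = userMsg`
  Question: ${fetchUserQuestion()}
  Attachment: ${filePart.fromPath("./diagram.png")}
`;
```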

Template-based message builders for different roles:

// System messages
systemMsg`
  You are a helpful assistant.
`;

// User messages with support for text, images and files
userMsg`
  Here is the image: ${filePart.fromPath("./image.png")}
`;

// Assistant messages with support for text and tool calls
assistantMsg`
  The language of the text in the image is
`;

Advanced Message Configuration

The package provides customization options for messages with provider-specific settings:

// User message with cache control
const cachedUserMsg = userMsg.cached`
  Here is the image: ${filePart.fromPath("./image.png")}
`;

// Custom provider options for user messages
const customUserMsg = userMsgWithOptions({
  providerOptions: { anthropic: { cacheControl: { type: "ephemeral" } } },
})`Hello World`;

// Custom provider options for assistant messages
const customAssistantMsg = assistantMsgWithOptions({
  providerOptions: { openai: { cacheControl: { type: "ephemeral" } } },
})`Hello World`;

// Custom provider options for system messages
const customSystemMsg = systemMsgWithOptions({
  providerOptions: { anthropic: { system_prompt_behavior: "default" } },
})`Hello World`;

File Handling

The filePart utility provides methods to load files and images, with automatic MIME-type detection:

// Load from file system path
filePart.fromPath(path, mimeType?)

// Load from base64 string
filePart.fromBase64(base64, mimeType?)

// Load from URL
filePart.fromUrl(url, mimeType?)

Tool Usage

Tools can be configured in completions for function calling with type safety:

import { z } from "zod";

const searchCompletion = baseCompletion
  .tools({
    search: {
      description: "Search for information",
      parameters: z.object({
        query: z.string(),
      }),
    },
  })
  .activeTools(["search"])
  .toolChoice("auto"); // or "none", "required", or { type: "tool", toolName: "search" }

Model Middlewares

The library provides middleware utilities to enhance model behavior:

import { includeAssistantMessage, cacheCalls } from "@synstack/llm/middleware";
import { fsCache } from "@synstack/fs-cache";

// `cache` is an @synstack/fs-cache instance (see that package for setup);
// `baseModel` is any AI SDK language model instance.

// Apply middlewares to a completion
const wrappedCompletion = baseCompletion
  .middlewares([includeAssistantMessage]) // Include last assistant message in output
  .prependMiddlewares([cacheCalls(cache)]); // Cache model responses

// Apply middlewares directly to the model
const modelWithAssistant = includeAssistantMessage(baseModel);
const modelWithCache = cacheCalls(cache)(baseModel);

  • middlewares(): Replace the middleware list
  • prependMiddlewares(): Add middlewares to the beginning of the chain, executed first
  • appendMiddlewares(): Add middlewares to the end of the chain, executed last

For more details on the available options, refer to Vercel's AI SDK documentation.