
llm-polyglot

v2.6.0

A universal LLM client: adapters for various LLM providers that adhere to a single interface (the OpenAI SDK), so you can use providers like Anthropic through the same OpenAI interface and receive responses transformed into the same shape.

Readme

llm-polyglot extends the OpenAI SDK to provide a consistent interface across different LLM providers. Use the same familiar OpenAI-style API with Anthropic, Google, and others.

Provider Support

Native API Support Status:

| Provider API | Status | Chat | Basic Stream | Functions/Tool calling | Function streaming | Notes |
|--------------|--------|------|--------------|------------------------|--------------------|-------|
| OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ | Direct SDK proxy |
| Anthropic | ✅ | ✅ | ✅ | ❌ | ❌ | Claude models |
| Google | ✅ | ✅ | ✅ | ✅ | ❌ | Gemini models + context caching |
| Azure | 🚧 | ✅ | ✅ | ❌ | ❌ | OpenAI model hosting |
| Cohere | ❌ | - | - | - | - | Not supported |
| AI21 | ❌ | - | - | - | - | Not supported |

Stream Types:

  • Basic Stream: Simple text streaming
  • Partial JSON Stream: Progressive JSON object construction during streaming
  • Function Stream: Streaming function/tool calls and their results

OpenAI-Compatible Hosting Providers:

These providers use the OpenAI SDK format, so they work directly with the OpenAI client configuration:

| Provider | How to Use | Available Models |
|----------|------------|------------------|
| Together | Use OpenAI client with Together base URL | Mixtral, Llama, OpenChat, Yi, others |
| Anyscale | Use OpenAI client with Anyscale base URL | Mistral, Llama, others |
| Perplexity | Use OpenAI client with Perplexity base URL | pplx-* models |
| Replicate | Use OpenAI client with Replicate base URL | Various open models |
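
For example, pointing the stock OpenAI SDK at one of these endpoints is all that's needed. A minimal sketch (the base URL and model id below are illustrative assumptions; check the provider's docs for current values):

import OpenAI from "openai";

// Together's OpenAI-compatible endpoint (assumed URL; verify against Together's docs)
const together = new OpenAI({
  apiKey: process.env.TOGETHER_API_KEY,
  baseURL: "https://api.together.xyz/v1"
});

const completion = await together.chat.completions.create({
  model: "mistralai/Mixtral-8x7B-Instruct-v0.1", // illustrative model id
  messages: [{ role: "user", content: "Hello!" }]
});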

Installation

# Base installation
npm install llm-polyglot openai

# Provider-specific SDKs (as needed)
npm install @anthropic-ai/sdk    # For Anthropic
npm install @google/generative-ai # For Google/Gemini

Basic Usage

import { createLLMClient } from "llm-polyglot";

// Initialize provider-specific client
const client = createLLMClient({
  provider: "anthropic" // or "google", "openai", etc.
});

// Use consistent OpenAI-style interface
const completion = await client.chat.completions.create({
  model: "claude-3-opus-20240229",
  messages: [{ role: "user", content: "Hello!" }],
  max_tokens: 1000
});
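
Because responses follow the OpenAI SDK shape regardless of provider, reading the reply is the same everywhere:

console.log(completion.choices[0]?.message?.content);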

Provider-Specific Features

Anthropic

The llm-polyglot library provides support for Anthropic's API, including standard chat completions, streaming chat completions, and function calling. Both input parameters and responses match those of the OpenAI SDK exactly; for more detailed documentation, see the OpenAI docs: https://platform.openai.com/docs/api-reference

The Anthropic SDK is required when using the Anthropic provider; we only use the types it provides.

  bun add @anthropic-ai/sdk

const client = createLLMClient({ provider: "anthropic" });

// Standard completion
const response = await client.chat.completions.create({
  model: "claude-3-opus-20240229",
  messages: [{ role: "user", content: "Hello!" }]
});

// Streaming
const stream = await client.chat.completions.create({
  model: "claude-3-opus-20240229",
  messages: [{ role: "user", content: "Hello!" }],
  stream: true
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}

// Tool/Function calling
const result = await client.chat.completions.create({
  model: "claude-3-opus-20240229",
  messages: [{ role: "user", content: "Analyze this data" }],
  tools: [{
    type: "function",
    function: {
      name: "analyze",
      parameters: {
        type: "object",
        properties: {
          sentiment: { type: "string" }
        }
      }
    }
  }]
});

Google (Gemini)

The llm-polyglot library provides support for Google's Gemini API including:

  • Standard chat completions with OpenAI-compatible interface
  • Streaming chat completions with delta updates
  • Function/tool calling with automatic schema conversion
  • Context caching for token optimization (requires paid API key)
  • Grounding support with Google Search integration
  • Safety settings and model generation config
  • Session management for stateful conversations
  • Automatic response transformation with source attribution

The Google generative-ai SDK is required when using the google provider:

  bun add @google/generative-ai

For all of the above functionality, the schema matches OpenAI's format; we translate the OpenAI params spec into Gemini's model spec.

Basic Usage

const client = createLLMClient({ provider: "google" });

// Standard completion
const completion = await client.chat.completions.create({
  model: "gemini-1.5-flash-latest",
  messages: [{ role: "user", content: "Hello!" }],
  max_tokens: 1000
});

// With grounding (Google Search)
const groundedCompletion = await client.chat.completions.create({
  model: "gemini-1.5-flash-latest",
  messages: [{ role: "user", content: "What are the latest AI developments?" }],
  groundingThreshold: 0.7,
  max_tokens: 1000
});

// With safety settings
const safeCompletion = await client.chat.completions.create({
  model: "gemini-1.5-flash-latest",
  messages: [{ role: "user", content: "Tell me a story" }],
  additionalProperties: {
    safetySettings: [{
      category: "HARM_CATEGORY_HARASSMENT",
      threshold: "BLOCK_MEDIUM_AND_ABOVE"
    }]
  }
});

// With session management
const sessionCompletion = await client.chat.completions.create({
  model: "gemini-1.5-flash-latest",
  messages: [{ role: "user", content: "Remember this: I'm Alice" }],
  additionalProperties: {
    sessionId: "user-123"
  }
});
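
Streaming follows the same pattern as the other providers; a minimal sketch mirroring the Anthropic streaming example above (assuming the same Gemini model):

// Streaming (delta updates, same OpenAI-style interface)
const stream = await client.chat.completions.create({
  model: "gemini-1.5-flash-latest",
  messages: [{ role: "user", content: "Hello!" }],
  stream: true
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}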

Context Caching

Context Caching is a feature specific to Gemini that helps cut down on duplicate token usage by allowing you to create a cache with a TTL:

// Create a cache
const cache = await client.cacheManager.create({
  model: "gemini-1.5-flash-8b",
  messages: [{ role: "user", content: "Context to cache" }],
  ttlSeconds: 3600 // Cache for 1 hour
});

// Use the cached context
const completion = await client.chat.completions.create({
  model: "gemini-1.5-flash-8b",
  messages: [{ role: "user", content: "Follow-up question" }],
  additionalProperties: {
    cacheName: cache.name
  }
});

Function/Tool Calling

const completion = await client.chat.completions.create({
  model: "gemini-1.5-flash-latest",
  messages: [{ role: "user", content: "Analyze this data" }],
  tools: [{
    type: "function",
    function: {
      name: "analyze",
      parameters: {
        type: "object",
        properties: {
          sentiment: { type: "string" }
        }
      }
    }
  }],
  tool_choice: {
    type: "function",
    function: { name: "analyze" }
  }
});
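
Since the response is transformed into the OpenAI shape, the resulting tool call can be read the usual OpenAI way (a minimal sketch):

const toolCall = completion.choices[0]?.message?.tool_calls?.[0];
if (toolCall?.type === "function") {
  // Arguments arrive as a JSON string, per the OpenAI response format
  const args = JSON.parse(toolCall.function.arguments);
  console.log(toolCall.function.name, args);
}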

Error Handling
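
Error shapes vary by provider, so a plain try/catch around the call is the portable baseline. A minimal sketch, assuming only standard promise rejection behavior rather than any llm-polyglot-specific error API:

try {
  const completion = await client.chat.completions.create({
    model: "claude-3-opus-20240229",
    messages: [{ role: "user", content: "Hello!" }]
  });
  console.log(completion.choices[0]?.message?.content);
} catch (error) {
  // Provider SDKs throw their own error classes; inspect before retrying
  console.error("Completion failed:", error);
}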