
@kreuzberg/liter-llm-wasm

v1.1.1


WebAssembly bindings for liter-llm — universal LLM API client with 142+ providers. Rust-powered.


WebAssembly

Universal LLM API client for browsers and WebAssembly runtimes. Access 142+ LLM providers with portable deployment across browsers, Deno, and Cloudflare Workers.

Installation

Package Installation

Install via one of the supported package managers:

npm:

npm install @kreuzberg/liter-llm-wasm

pnpm:

pnpm add @kreuzberg/liter-llm-wasm

yarn:

yarn add @kreuzberg/liter-llm-wasm

System Requirements

  • Modern browser with WebAssembly support, or Deno 1.0+, or Cloudflare Workers
  • API keys via environment variables or runtime configuration

Quick Start

Basic Chat

Send a message to any provider using the provider/model prefix:

import init, { LlmClient } from "@kreuzberg/liter-llm-wasm";

await init();

const client = new LlmClient({ apiKey: "sk-..." });
const response = await client.chat({
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(response.choices[0].message.content);

Common Use Cases

Streaming Responses

Stream tokens in real time:

import init, { LlmClient } from "@kreuzberg/liter-llm-wasm";

await init();

const client = new LlmClient({ apiKey: "sk-..." });
const stream = await client.chatStream({
  model: "openai/gpt-4o-mini",
  messages: [{ role: "user", content: "Hello" }],
});
// stream is a ReadableStream
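Since the README only says the result is a ReadableStream, here is a minimal sketch of draining one with a reader. The chunk format (plain strings) is an assumption; the real stream may yield Uint8Array or structured delta objects, so adapt the accumulation step accordingly:

```javascript
// Sketch: consuming a ReadableStream chunk by chunk with a reader.
// Assumes string chunks; the actual chunk type from chatStream is not
// documented here and may differ.
async function readAll(stream) {
  const reader = stream.getReader();
  let text = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    text += value;
  }
  return text;
}

// Demo with a synthetic stream standing in for client.chatStream(...):
const demo = new ReadableStream({
  start(controller) {
    for (const chunk of ["Hel", "lo", "!"]) controller.enqueue(chunk);
    controller.close();
  },
});

demo && readAll(demo).then((text) => console.log(text)); // prints "Hello!"
```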


Features

Supported Providers (142+)

Route to any provider using the provider/model prefix convention:

| Provider | Example Model |
|----------|---------------|
| OpenAI | openai/gpt-4o, openai/gpt-4o-mini |
| Anthropic | anthropic/claude-3-5-sonnet-20241022 |
| Groq | groq/llama-3.1-70b-versatile |
| Mistral | mistral/mistral-large-latest |
| Cohere | cohere/command-r-plus |
| Together AI | together/meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo |
| Fireworks | fireworks/accounts/fireworks/models/llama-v3p1-70b-instruct |
| Google Vertex | vertexai/gemini-1.5-pro |
| Amazon Bedrock | bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0 |


Key Capabilities

  • Provider Routing -- Single client for 142+ LLM providers via provider/model prefix

  • Unified API -- Consistent chat, chat_stream, embeddings, list_models interface

  • Streaming -- Real-time token streaming via chat_stream

  • Tool Calling -- Function calling and tool use across all supporting providers

  • Type Safe -- Schema-driven types compiled from JSON schemas

  • Secure -- API keys never logged or serialized, managed via environment variables

  • Observability -- Built-in OpenTelemetry with GenAI semantic conventions

  • Error Handling -- Structured errors with provider context and retry hints
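The retry hints mentioned above might be consumed with a small wrapper like the following. This is a sketch only: the error fields used here (`status`, `retryAfterMs`) are illustrative assumptions, not the library's documented error shape.

```javascript
// Sketch of a retry wrapper driven by structured error fields.
// `err.status` and `err.retryAfterMs` are hypothetical names standing in
// for whatever retry hints the library actually attaches.
async function withRetry(fn, { maxRetries = 3 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const retryable = err && err.status === 429 && attempt < maxRetries;
      if (!retryable) throw err;
      // Prefer the provider's hint; otherwise back off exponentially.
      const delay = err.retryAfterMs ?? 2 ** attempt * 100;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Usage would be `await withRetry(() => client.chat({ ... }))`, retrying only on rate-limit style errors and rethrowing everything else.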

Performance

Built on a compiled Rust core for speed and safety:

  • Provider resolution at client construction -- zero per-request overhead
  • Configurable timeouts and connection pooling
  • Zero-copy streaming with SSE and AWS EventStream support
  • API keys wrapped in secure memory, zeroed on drop

Provider Routing

Route to 142+ providers using the provider/model prefix convention:

openai/gpt-4o
anthropic/claude-3-5-sonnet-20241022
groq/llama-3.1-70b-versatile
mistral/mistral-large-latest

See the provider registry for the full list.
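As the Together AI and Fireworks examples show, model IDs can themselves contain slashes, so the prefix convention implies splitting on the first slash only. A minimal sketch of that parsing rule (the library's actual parser is not shown here):

```javascript
// Sketch: split a "provider/model" route on the FIRST slash only,
// because model IDs may contain slashes themselves
// (e.g. together/meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo).
function parseRoute(route) {
  const i = route.indexOf("/");
  if (i < 0) throw new Error(`expected provider/model, got: ${route}`);
  return { provider: route.slice(0, i), model: route.slice(i + 1) };
}

console.log(parseRoute("openai/gpt-4o"));
// { provider: 'openai', model: 'gpt-4o' }
```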

Proxy Server

liter-llm also ships as an OpenAI-compatible proxy server with Docker support:

docker run -p 4000:4000 -e LITER_LLM_MASTER_KEY=sk-your-key ghcr.io/kreuzberg-dev/liter-llm

See the proxy server documentation for configuration, CLI usage, and MCP integration.
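Since the proxy is OpenAI-compatible, a client would presumably POST to the standard `/v1/chat/completions` path with a bearer token. A sketch of building such a request for `fetch` (the port matches the `docker run` example above; the path is assumed from the OpenAI convention, not confirmed by this README):

```javascript
// Sketch: build a fetch request against the proxy's assumed
// OpenAI-compatible endpoint. Path and auth scheme follow the OpenAI
// convention; verify against the proxy server documentation.
function buildChatRequest(baseUrl, apiKey, model, content) {
  return {
    url: `${baseUrl}/v1/chat/completions`,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model,
        messages: [{ role: "user", content }],
      }),
    },
  };
}

// const { url, options } = buildChatRequest(
//   "http://localhost:4000", "sk-your-key", "openai/gpt-4o", "Hello!");
// const res = await fetch(url, options);
```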

Documentation

Part of kreuzberg.dev.

Contributing

Contributions are welcome! See CONTRIBUTING.md for guidelines.

Join our Discord community for questions and discussion.

License

MIT -- see LICENSE for details.