@adia-ai/llm (v0.5.4)

Provider-agnostic LLM client. Three adapters (anthropic / openai / gemini) behind a single chat() + streamChat() facade. Works in the browser (via proxyUrl) and in Node; used by AdiaUI's chat-shell and the A2UI generation pipeline.

Install

npm install @adia-ai/llm

Usage

import { chat, streamChat } from '@adia-ai/llm';

// Direct API call (apiKey owned by the caller)
const reply = await chat({
  apiKey: 'sk-...',
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello' }],
});

// Streaming
for await (const chunk of streamChat({
  apiKey: 'sk-...',
  model: 'claude-haiku-4-5-20251001',
  messages: [{ role: 'user', content: 'Hello' }],
})) {
  if (chunk.type === 'text') process.stdout.write(chunk.text);
}

Browser dev-mode warning (since v0.4.3): if createAdapter() resolves an apiKey while running in a browser, the bridge emits a one-shot masked console.warn (e.g. sk-ant-a…Fiw-) noting that the key will be sent in request headers. That's fine for local dev, but never deploy this shape. The warning is deduplicated via window.__adia_llm_key_warning_shown, so it fires at most once per page load. For production, use proxy mode (next section) to keep the key server-side.
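For illustration, a minimal sketch of that masking-and-dedup behavior. The flag name and mask shape are taken from above; the helper names (maskKey, warnOnceBrowserKey) are hypothetical, not the package's internals:

function maskKey(key) {
  // Keep a short prefix and tail, elide the middle: "sk-ant-a…Fiw-"
  return `${key.slice(0, 8)}…${key.slice(-4)}`;
}

function warnOnceBrowserKey(apiKey) {
  if (typeof window === 'undefined') return;        // Node: nothing to warn about
  if (window.__adia_llm_key_warning_shown) return;  // dedup: one warn per page load
  window.__adia_llm_key_warning_shown = true;
  console.warn(
    `[@adia-ai/llm] API key ${maskKey(apiKey)} will be sent in request headers from the browser. ` +
    'Fine for local dev; use proxyUrl in production.'
  );
}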

Browser proxy mode

proxyUrl routes through a server-side proxy so the API key never reaches the browser. The client supports two proxy shapes and auto-detects which to use based on the URL:

Smart proxy (provider-neutral body)

The default. Pass any proxyUrl that doesn't match the passthrough pattern below, typically your own backend route like /api/chat. The client speaks a single provider-neutral protocol; the proxy holds the API key and dispatches internally to the right upstream adapter.

for await (const chunk of streamChat({
  proxyUrl: '/api/chat',
  provider: 'openai',          // optional — auto-detected from model
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello' }],
})) { /* ... */ }

The body sent to the proxy:

{
  "provider": "openai",
  "model": "gpt-4o-mini",
  "messages": [{ "role": "user", "content": "Hello" }],
  "system": "...optional...",
  "maxTokens": 4096,
  "temperature": 0.7,
  "stream": true
}

The proxy reformats per upstream provider and pipes SSE bytes verbatim. The reference smart-proxy implementation is at packages/llm/server.js in the chat-ui repo (route: POST /api/chat, plus /api/generate, /api/generate/reset, /api/convert-html for the A2UI generation pipeline). It is not shipped with the npm package — it's a development convenience for the in-repo apps.
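The contract is small enough to sketch. Below is a minimal Express route built on this package's own streamChat (the Production deployment section confirms that pairing); the env-var names and the SSE framing sent back to the client are assumptions, not the documented wire format, and unlike the reference this sketch re-serializes parsed chunks rather than piping upstream SSE bytes verbatim:

// Smart-proxy sketch (assumed shape; packages/llm/server.js is the real reference).
import express from 'express';
import { streamChat } from '@adia-ai/llm';

// Server-side keys per provider; env-var names are illustrative.
const KEYS = {
  anthropic: process.env.ANTHROPIC_API_KEY,
  openai: process.env.OPENAI_API_KEY,
  gemini: process.env.GEMINI_API_KEY,
};

const app = express();
app.use(express.json());

app.post('/api/chat', async (req, res) => {
  // Provider-neutral body, as documented above.
  const { provider, model, messages, system, maxTokens, temperature } = req.body;
  res.setHeader('Content-Type', 'text/event-stream');
  for await (const chunk of streamChat({
    apiKey: KEYS[provider], provider, model, messages, system, maxTokens, temperature,
  })) {
    res.write(`data: ${JSON.stringify(chunk)}\n\n`); // framing here is an assumption
  }
  res.end();
});

app.listen(3000);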

Passthrough proxy (real upstream body)

When proxyUrl matches /api/llm/<provider>/<rest> (the Vite-dev shape used by chat-ui apps), the client switches to passthrough mode. The proxy is "dumb" — it just rewrites the URL to the real upstream (https://api.<provider>.com/<rest>) and forwards bytes unchanged. The client sends the real upstream body shape plus the adapter's normal auth headers.

This is auto-detected — you don't pick it explicitly. If you mounted a Vite proxy like:

// vite.config.js
import { defineConfig } from 'vite';

export default defineConfig({
  server: {
    proxy: {
      '/api/llm/anthropic': {
        target: 'https://api.anthropic.com',
        changeOrigin: true, // rewrite the Host header so the upstream accepts the request
        rewrite: (p) => p.replace(/^\/api\/llm\/anthropic/, ''),
      },
    },
  },
});

…then passing proxyUrl: '/api/llm/anthropic/v1/messages' will produce a request the upstream understands directly.
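With that proxy in place, the call mirrors the direct-call examples above. Note that per the table below the client still attaches the adapter's own auth header in this mode, so apiKey is still passed (this example assumes the same option names as the earlier snippets):

for await (const chunk of streamChat({
  proxyUrl: '/api/llm/anthropic/v1/messages', // matches the passthrough pattern
  apiKey: 'sk-ant-...',                       // client-held: passthrough sends the adapter's normal auth headers
  model: 'claude-haiku-4-5-20251001',
  messages: [{ role: 'user', content: 'Hello' }],
})) {
  if (chunk.type === 'text') process.stdout.write(chunk.text);
}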

| Shape | URL pattern | Body | Auth header | Use when |
|---|---|---|---|---|
| Smart | /api/chat (anything non-passthrough) | provider-neutral | none (server holds key) | You control the proxy and want one route across providers |
| Passthrough | /api/llm/<provider>/<rest> | real upstream shape | adapter's own (e.g. x-api-key) | You're using Vite/nginx URL-rewrite and don't want server-side dispatch |

Detection lives in adapters/index.js — regex /\/api\/llm\/[a-z]+(\/|$)/.
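The same check, written out as a standalone predicate (the regex is verbatim from above; the export shape inside adapters/index.js isn't documented here):

const PASSTHROUGH_RE = /\/api\/llm\/[a-z]+(\/|$)/;

PASSTHROUGH_RE.test('/api/llm/anthropic/v1/messages'); // true  -> passthrough mode
PASSTHROUGH_RE.test('/api/chat');                      // false -> smart-proxy mode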

Production deployment

Neither server.js (smart proxy reference) nor any Vite/nginx URL rewrite (passthrough reference) is shipped by the npm package — both are development-time conveniences for the in-repo apps. Production consumers must deploy their own proxy: a small server that holds your provider API key(s) and either:

  1. Speaks the smart-proxy contract — accepts the provider-neutral body documented above and dispatches per-provider. See packages/llm/server.js for a reference implementation (Express + chat/streamChat from this package, ~150 LOC).
2. Speaks the passthrough contract: exposes /api/llm/<provider>/<rest> and forwards to https://api.<provider>.com/<rest> with the real API-key header injected server-side. See the Vite config snippet above for the shape; a 50-line nginx or Express proxy works fine (a sketch follows below).

Either contract works — the client auto-detects which one your proxy implements by URL shape. Pick the one that matches your existing infrastructure.
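As a sketch of option 2, an Express passthrough in the documented URL shape. The upstream map, header names, and env vars are assumptions for illustration; a fuller proxy would also forward the client's remaining headers (e.g. anthropic-version) rather than just content-type:

// Passthrough-proxy sketch (assumed shape): forward /api/llm/<provider>/<rest>
// to the real upstream, injecting the API-key header server-side.
import express from 'express';
import { Readable } from 'node:stream';

const UPSTREAMS = {
  anthropic: { base: 'https://api.anthropic.com', header: 'x-api-key', value: process.env.ANTHROPIC_API_KEY },
  openai: { base: 'https://api.openai.com', header: 'authorization', value: `Bearer ${process.env.OPENAI_API_KEY}` },
};

const app = express();

app.all('/api/llm/:provider/*', express.raw({ type: '*/*' }), async (req, res) => {
  const up = UPSTREAMS[req.params.provider];
  if (!up) return res.status(404).end();
  const upstream = await fetch(`${up.base}/${req.params[0]}`, { // req.params[0] is <rest>
    method: req.method,
    headers: {
      'content-type': req.headers['content-type'] ?? 'application/json',
      [up.header]: up.value, // key injected server-side, never in the browser
    },
    body: ['GET', 'HEAD'].includes(req.method) ? undefined : req.body,
  });
  res.status(upstream.status);
  res.setHeader('content-type', upstream.headers.get('content-type') ?? 'application/json');
  Readable.fromWeb(upstream.body).pipe(res); // bytes unchanged, so SSE still streams
});

app.listen(3001);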

Subpath exports

| Subpath | Purpose |
|---------|---------|
| @adia-ai/llm | Default: chat, streamChat, createClient |
| @adia-ai/llm/bridge | createAdapter — wraps the facade in the A2UI pipeline's adapter interface |
| @adia-ai/llm/stub | StubLLMAdapter — deterministic adapter for tests |
| @adia-ai/llm/adapters/anthropic | Direct adapter object |
| @adia-ai/llm/adapters/openai | Direct adapter object |
| @adia-ai/llm/adapters/gemini | Direct adapter object |
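The corresponding import shapes, for reference. Whether the per-provider adapters are default or named exports isn't stated in this README, so the last line is an assumption:

import { chat, streamChat, createClient } from '@adia-ai/llm';
import { createAdapter } from '@adia-ai/llm/bridge';
import { StubLLMAdapter } from '@adia-ai/llm/stub';
import anthropicAdapter from '@adia-ai/llm/adapters/anthropic'; // assumed default export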