# @adia-ai/llm

Provider-agnostic LLM client. Three adapters (anthropic / openai / gemini) behind a single `chat()` + `streamChat()` facade. Used by AdiaUI's chat-shell and the A2UI generation pipeline; works in the browser (with `proxyUrl`) and in Node.
## Install

```bash
npm install @adia-ai/llm
```

## Usage
```js
import { chat, streamChat } from '@adia-ai/llm';

// Direct API call (apiKey owned by the caller)
const reply = await chat({
  apiKey: 'sk-...',
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello' }],
});

// Streaming
for await (const chunk of streamChat({
  apiKey: 'sk-...',
  model: 'claude-haiku-4-5-20251001',
  messages: [{ role: 'user', content: 'Hello' }],
})) {
  if (chunk.type === 'text') process.stdout.write(chunk.text);
}
```

**Browser dev-mode warning** (since v0.4.3): if `createAdapter()` resolves an `apiKey` while running in a browser, the bridge emits a one-shot masked `console.warn` (e.g. `sk-ant-a…Fiw-`) noting that the key will be sent in request headers. Fine for local dev — never deploy this shape. The warning is dedup'd via `window.__adia_llm_key_warning_shown` (one warn per page load). For production, use proxy mode (next section) to keep the key server-side.
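For illustration, the documented behaviour boils down to something like the sketch below. Only the `window.__adia_llm_key_warning_shown` flag and the masked-key shape come from the docs; the function name and message text are made up:

```js
// Sketch of the documented behaviour, not the bridge's actual source.
// Masking keeps a short prefix and suffix, e.g. 'sk-ant-a…Fiw-'.
function warnOnceAboutBrowserKey(apiKey) {
  if (typeof window === 'undefined') return;        // Node: nothing to warn about
  if (window.__adia_llm_key_warning_shown) return;  // dedup: one warn per page load
  window.__adia_llm_key_warning_shown = true;
  const masked = `${apiKey.slice(0, 8)}…${apiKey.slice(-4)}`;
  console.warn(
    `[@adia-ai/llm] ${masked} will be sent in request headers from this browser. ` +
      'Fine for local dev; use proxyUrl to keep the key server-side in production.'
  );
}
```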
## Browser proxy mode

`proxyUrl` routes through a server-side proxy so the API key never reaches the browser. The client supports two proxy shapes and auto-detects which to use based on the URL:
### Smart proxy (provider-neutral body)

The default. Send any `proxyUrl` that doesn't match the passthrough pattern below — typically your own backend route like `/api/chat`. The client speaks a single provider-neutral protocol; the proxy holds the API key and dispatches internally to the right upstream adapter.
```js
for await (const chunk of streamChat({
  proxyUrl: '/api/chat',
  provider: 'openai', // optional — auto-detected from model
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello' }],
})) { /* ... */ }
```

The body sent to the proxy:
```json
{
  "provider": "openai",
  "model": "gpt-4o-mini",
  "messages": [{ "role": "user", "content": "Hello" }],
  "system": "...optional...",
  "maxTokens": 4096,
  "temperature": 0.7,
  "stream": true
}
```

The proxy reformats per upstream provider and pipes SSE bytes verbatim. The reference smart-proxy implementation is at `packages/llm/server.js` in the chat-ui repo (route: `POST /api/chat`, plus `/api/generate`, `/api/generate/reset`, `/api/convert-html` for the A2UI generation pipeline). It is not shipped with the npm package — it's a development convenience for the in-repo apps.
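For orientation, a stripped-down server in the same spirit might look like the following. This is a sketch only, assuming Express and per-provider keys in env vars (`OPENAI_API_KEY` etc. are assumptions); unlike the reference, it re-serializes chunks as SSE rather than piping upstream bytes verbatim:

```js
// Minimal smart-proxy sketch (illustrative; not the shipped server.js).
// Accepts the provider-neutral body and dispatches via this package.
import express from 'express';
import { chat, streamChat } from '@adia-ai/llm';

const app = express();
app.use(express.json());

app.post('/api/chat', async (req, res) => {
  const { provider = 'openai', stream, ...params } = req.body;
  // Assumption: keys live in env vars named OPENAI_API_KEY, ANTHROPIC_API_KEY, ...
  const apiKey = process.env[`${provider.toUpperCase()}_API_KEY`];

  if (!stream) {
    res.json(await chat({ ...params, provider, apiKey }));
    return;
  }
  res.setHeader('Content-Type', 'text/event-stream');
  for await (const chunk of streamChat({ ...params, provider, apiKey })) {
    res.write(`data: ${JSON.stringify(chunk)}\n\n`); // re-serialized, unlike the reference
  }
  res.end();
});

app.listen(3000);
```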
### Passthrough proxy (real upstream body)

When `proxyUrl` matches `/api/llm/<provider>/<rest>` (the Vite-dev shape used by chat-ui apps), the client switches to passthrough mode. The proxy is "dumb" — it just rewrites the URL to the real upstream (`https://api.<provider>.com/<rest>`) and forwards bytes unchanged. The client sends the real upstream body shape plus the adapter's normal auth headers.
This is auto-detected — you don't pick it explicitly. If you mounted a Vite proxy like:

```js
// vite.config.js
server: {
  proxy: {
    '/api/llm/anthropic': {
      target: 'https://api.anthropic.com',
      rewrite: (p) => p.replace(/^\/api\/llm\/anthropic/, ''),
    },
  },
},
```

…then passing `proxyUrl: '/api/llm/anthropic/v1/messages'` will produce a request the upstream understands directly.
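With that proxy mounted, a call in passthrough mode looks like the direct-call examples above, with the proxy path as `proxyUrl` (in dev the adapter still attaches its normal auth header, hence the `apiKey` here):

```js
import { streamChat } from '@adia-ai/llm';

// Passthrough is inferred from the URL shape; no explicit mode flag.
for await (const chunk of streamChat({
  proxyUrl: '/api/llm/anthropic/v1/messages',
  apiKey: 'sk-ant-...', // dev only: sent via the adapter's normal auth header
  model: 'claude-haiku-4-5-20251001',
  messages: [{ role: 'user', content: 'Hello' }],
})) {
  if (chunk.type === 'text') process.stdout.write(chunk.text);
}
```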
| Shape | URL pattern | Body | Auth header | Use when |
|---|---|---|---|---|
| Smart | /api/chat (anything non-passthrough) | provider-neutral | none (server holds key) | You control the proxy and want one route across providers |
| Passthrough | /api/llm/<provider>/<rest> | real upstream shape | adapter's own (e.g. x-api-key) | You're using Vite/nginx URL-rewrite and don't want server-side dispatch |
Detection lives in `adapters/index.js` — regex `/\/api\/llm\/[a-z]+(\/|$)/`.
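Applied standalone, the rule behaves like this:

```js
const PASSTHROUGH_RE = /\/api\/llm\/[a-z]+(\/|$)/;

PASSTHROUGH_RE.test('/api/chat');                      // false → smart proxy
PASSTHROUGH_RE.test('/api/llm/anthropic/v1/messages'); // true  → passthrough
```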
## Production deployment

Neither `server.js` (smart proxy reference) nor any Vite/nginx URL rewrite (passthrough reference) is shipped by the npm package — both are development-time conveniences for the in-repo apps. Production consumers must deploy their own proxy: a small server that holds your provider API key(s) and either:

- **Speaks the smart-proxy contract** — accepts the provider-neutral body documented above and dispatches per-provider. See `packages/llm/server.js` for a reference implementation (Express + `chat`/`streamChat` from this package, ~150 LOC).
- **Speaks the passthrough contract** — exposes `/api/llm/<provider>/<rest>` and forwards to `https://api.<provider>.com/<rest>` with the real API-key header injected server-side. See the Vite config snippet above for the shape; a 50-line nginx or Express proxy works fine (a sketch follows this list).
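A passthrough sketch under those assumptions. The URL rewrite follows the documented pattern; the Express 4 routing, header names, and env var names are illustrative:

```js
// Passthrough proxy sketch: rewrite /api/llm/<provider>/<rest> to the real
// upstream and inject the API key server-side. Illustrative, not shipped code.
import express from 'express';
import { Readable } from 'node:stream';

const app = express();
app.use(express.raw({ type: '*/*' })); // forward body bytes untouched

// Assumption: per-provider auth headers and env var names.
const AUTH = {
  anthropic: { 'x-api-key': process.env.ANTHROPIC_API_KEY,
               'anthropic-version': '2023-06-01' },
  openai: { authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
};

app.all(/^\/api\/llm\/([a-z]+)\/.+/, async (req, res) => {
  const [, provider, rest] = req.path.match(/^\/api\/llm\/([a-z]+)\/(.+)$/);
  const upstream = await fetch(`https://api.${provider}.com/${rest}`, {
    method: req.method,
    headers: {
      'content-type': req.headers['content-type'] ?? 'application/json',
      ...AUTH[provider],
    },
    body: ['GET', 'HEAD'].includes(req.method) ? undefined : req.body,
  });
  res.status(upstream.status);
  res.setHeader('content-type',
    upstream.headers.get('content-type') ?? 'application/octet-stream');
  if (!upstream.body) return res.end();
  Readable.fromWeb(upstream.body).pipe(res); // SSE/JSON bytes forwarded unchanged
});

app.listen(8787);
```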
Either contract works — the client auto-detects which one your proxy implements by URL shape. Pick the one that matches your existing infrastructure.
## Subpath exports
| Subpath | Purpose |
|---------|---------|
| @adia-ai/llm | Default: chat, streamChat, createClient |
| @adia-ai/llm/bridge | createAdapter — wraps the facade in the A2UI pipeline's adapter interface |
| @adia-ai/llm/stub | StubLLMAdapter — deterministic adapter for tests |
| @adia-ai/llm/adapters/anthropic | Direct adapter object |
| @adia-ai/llm/adapters/openai | Direct adapter object |
| @adia-ai/llm/adapters/gemini | Direct adapter object |
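For reference, the subpaths resolve like so. The names come from the table; whether the adapter modules use named or namespace exports isn't documented here, so the last line is an assumption:

```js
import { chat, streamChat, createClient } from '@adia-ai/llm';
import { createAdapter } from '@adia-ai/llm/bridge';          // A2UI adapter interface
import { StubLLMAdapter } from '@adia-ai/llm/stub';           // deterministic, for tests
import * as anthropic from '@adia-ai/llm/adapters/anthropic'; // direct adapter object (shape assumed)
```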
