# @lenylvt/pi-ai

Unified LLM API for Anthropic, GitHub Copilot, OpenAI Codex, and OpenRouter.
Unified LLM client layer for the four built-in providers kept in this fork:

- `anthropic`
- `github-copilot`
- `openai-codex`
- `openrouter`

The package ships built-in model catalogs for those providers only. Custom models and custom providers are still possible through the generic `Model<Api>` and registry APIs.
## Install

```sh
bun add @lenylvt/pi-ai
```

TypeBox exports are re-exported from `@lenylvt/pi-ai`: `Type`, `Static`, and `TSchema`.
## Built-in Providers

| Provider | Auth | Notes |
|----------|------|-------|
| `anthropic` | OAuth or API key | Claude Pro/Max or `ANTHROPIC_API_KEY` |
| `github-copilot` | OAuth or token | `COPILOT_GITHUB_TOKEN`, `GH_TOKEN`, or `GITHUB_TOKEN` |
| `openai-codex` | OAuth | ChatGPT Plus/Pro subscription |
| `openrouter` | API key | `OPENROUTER_API_KEY` |
## Built-in APIs

- `anthropic-messages`
- `openai-completions`
- `openai-responses`
- `openai-codex-responses`

`github-copilot` uses a mix of `anthropic-messages`, `openai-completions`, and `openai-responses` depending on the selected model. `openrouter` uses `openai-completions`. `openai-codex` uses `openai-codex-responses`.
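As a rough illustration of dispatching on these API identifiers, here is a minimal sketch. The `Api` union is transcribed from the list above; the endpoint paths are the conventional ones for each wire format, not values taken from this package:

```typescript
// Sketch only: mapping each built-in API identifier to the endpoint path
// conventionally used by that wire format. The package's own routing is
// internal and may differ.
type Api =
  | "anthropic-messages"
  | "openai-completions"
  | "openai-responses"
  | "openai-codex-responses";

function endpointPath(api: Api): string {
  switch (api) {
    case "anthropic-messages":
      return "/v1/messages"; // Anthropic Messages API
    case "openai-completions":
      return "/v1/chat/completions"; // OpenAI chat-completions-style API
    case "openai-responses":
    case "openai-codex-responses":
      return "/v1/responses"; // OpenAI Responses-style APIs
  }
}
```

The exhaustive `switch` means adding a new member to `Api` becomes a compile-time error until it is handled.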
## Quick Start

```ts
import { complete, getModel, Type, type Context, type Tool } from "@lenylvt/pi-ai";

const model = getModel("anthropic", "claude-sonnet-4-5");

const tools: Tool[] = [
  {
    name: "get_time",
    description: "Get the current time",
    parameters: Type.Object({
      timezone: Type.Optional(Type.String()),
    }),
  },
];

const context: Context = {
  systemPrompt: "You are a helpful assistant.",
  messages: [{ role: "user", content: "What time is it?", timestamp: Date.now() }],
  tools,
};

const message = await complete(model, context);
console.log(message.content);
```

## Providers and Models
```ts
import { getModel, getModels, getProviders } from "@lenylvt/pi-ai";

console.log(getProviders());
// ["anthropic", "github-copilot", "openai-codex", "openrouter"]

const anthropicModels = getModels("anthropic");
const codex = getModel("openai-codex", "gpt-5.4");
const openRouter = getModel("openrouter", "openai/gpt-5.1-codex");
```

## Environment Variables
| Provider | Environment Variable(s) |
|----------|-------------------------|
| `anthropic` | `ANTHROPIC_API_KEY` or `ANTHROPIC_OAUTH_TOKEN` |
| `github-copilot` | `COPILOT_GITHUB_TOKEN`, `GH_TOKEN`, or `GITHUB_TOKEN` |
| `openrouter` | `OPENROUTER_API_KEY` |

`openai-codex` does not use a static API key in this fork. Authenticate with OAuth.
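For the Copilot row of the table, a resolver honoring that precedence might look like the following sketch (illustrative only; the package's own `getEnvApiKey()` covers every provider):

```typescript
// Illustrative sketch of the Copilot token precedence from the table above:
// COPILOT_GITHUB_TOKEN first, then GH_TOKEN, then GITHUB_TOKEN.
function copilotTokenFromEnv(
  env: Record<string, string | undefined> = process.env,
): string | undefined {
  return env.COPILOT_GITHUB_TOKEN ?? env.GH_TOKEN ?? env.GITHUB_TOKEN;
}
```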
You can inspect environment-based auth with `getEnvApiKey()`:

```ts
import { getEnvApiKey } from "@lenylvt/pi-ai";

const anthropicKey = getEnvApiKey("anthropic");
const openRouterKey = getEnvApiKey("openrouter");
```

## OAuth
OAuth helpers are exported from `@lenylvt/pi-ai/oauth`.

### CLI Login

```sh
bunx @lenylvt/pi-ai login
bunx @lenylvt/pi-ai login anthropic
bunx @lenylvt/pi-ai login github-copilot
bunx @lenylvt/pi-ai login openai-codex
```

### Programmatic OAuth
```ts
import {
  getOAuthApiKey,
  loginAnthropic,
  loginGitHubCopilot,
  loginOpenAICodex,
  type OAuthCredentials,
} from "@lenylvt/pi-ai/oauth";
```

Stored credentials are the caller's responsibility.
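Since the package leaves credential storage to you, one minimal approach is a JSON file with restrictive permissions. The `StoredCredentials` shape below is a hypothetical stand-in for illustration; the real `OAuthCredentials` type exported from `@lenylvt/pi-ai/oauth` may differ:

```typescript
import { mkdtempSync, readFileSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Hypothetical credential shape for illustration only; the actual
// OAuthCredentials type from @lenylvt/pi-ai/oauth may differ.
interface StoredCredentials {
  accessToken: string;
  refreshToken?: string;
  expiresAt?: number;
}

// Persist credentials as JSON, readable only by the current user (0600).
function saveCredentials(path: string, creds: StoredCredentials): void {
  writeFileSync(path, JSON.stringify(creds), { mode: 0o600 });
}

function loadCredentials(path: string): StoredCredentials {
  return JSON.parse(readFileSync(path, "utf8")) as StoredCredentials;
}

// Example round trip via a temp file:
const path = join(mkdtempSync(join(tmpdir(), "pi-ai-creds-")), "creds.json");
saveCredentials(path, { accessToken: "tok" });
```

In production you would likely prefer the OS keychain or an encrypted store over a plain file.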
## Thinking / Reasoning

Use `streamSimple()` or `completeSimple()` with a unified reasoning level:

```ts
import { completeSimple, getModel } from "@lenylvt/pi-ai";

const model = getModel("openrouter", "openai/gpt-5.1-codex");

const response = await completeSimple(
  model,
  {
    messages: [{ role: "user", content: "Refactor this function", timestamp: Date.now() }],
  },
  {
    reasoning: "medium",
  },
);
```

## Streaming
```ts
import { getModel, stream } from "@lenylvt/pi-ai";

const model = getModel("github-copilot", "gpt-5");

const s = stream(model, {
  messages: [{ role: "user", content: "Summarize this diff", timestamp: Date.now() }],
});

for await (const event of s) {
  if (event.type === "text_delta") {
    process.stdout.write(event.delta);
  }
}
```

## Custom Models
Custom models still work. Use `openai-completions` or `anthropic-messages` for the common proxy patterns supported by this fork.

```ts
import type { Model } from "@lenylvt/pi-ai";

const openRouterModel: Model<"openai-completions"> = {
  id: "openai/gpt-5.4-mini",
  name: "OpenRouter GPT-5.4 Mini",
  api: "openai-completions",
  provider: "openrouter",
  baseUrl: "https://openrouter.ai/api/v1",
  reasoning: true,
  input: ["text"],
  cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
  contextWindow: 128000,
  maxTokens: 16384,
  compat: {
    supportsDeveloperRole: true,
    supportsReasoningEffort: false,
  },
};
```

## Public Exports
Provider-specific option types kept in this fork:

- `AnthropicOptions`
- `OpenAICompletionsOptions`
- `OpenAIResponsesOptions`
- `OpenAICodexResponsesOptions`
## Notes

- `openai-codex` requires ChatGPT Plus or Pro.
- `github-copilot` can require enabling some models in VS Code before the token can use them.
- `openrouter` is the source of truth for its model availability; use the published model list bundled with each release.
## License

MIT
