# ai-sdk-provider-env
A dynamic, environment-variable-driven provider for Vercel AI SDK. Resolves AI provider configuration from env var conventions at runtime, so you can switch models without touching code.
## Motivation
Using multiple AI providers with Vercel AI SDK means importing each SDK, configuring API keys and base URLs, and wiring everything together — per provider, per project. Switching providers requires code changes.
ai-sdk-provider-env eliminates this boilerplate. Define provider configurations through environment variables, resolve them at runtime. Add a new provider by setting env vars, switch models by changing a string — no code changes needed.
## Features
- Resolve provider config (base URL, API key, compatibility mode) from environment variables automatically
- Built-in presets for popular providers, so you only need to set an API key
- Supports OpenAI, Anthropic, Google Gemini, and any OpenAI-compatible API
- Implements `ProviderV3`, plugs directly into `createProviderRegistry`
- Provider instances are cached, so there is no redundant initialization
- Fully customizable: custom fetch, env-based headers, custom separator, code-based configs
## Installation
```bash
pnpm add ai-sdk-provider-env
```

Install provider SDKs as needed:

```bash
pnpm add @ai-sdk/openai             # for OpenAI
pnpm add @ai-sdk/anthropic          # for Anthropic
pnpm add @ai-sdk/google             # for Google AI Studio (Gemini)
pnpm add @ai-sdk/openai-compatible  # for generic OpenAI-compatible APIs
```

## Quick Start
```ts
import { createProviderRegistry, generateText } from 'ai'
import { envProvider } from 'ai-sdk-provider-env'

const registry = createProviderRegistry({
  env: envProvider(),
})

// Use a preset: only API_KEY is required
// OPENAI_API_KEY=sk-xxx (OPENAI_PRESET=openai is optional — auto-detected)
const model = registry.languageModel('env:openai/gpt-4o')
const { text } = await generateText({ model, prompt: 'Hello!' })
```

Any env var prefix is a config set. Two endpoints? Two prefixes, zero code changes:
```bash
# .env
FAST_BASE_URL=https://fast-api.example.com/v1
FAST_API_KEY=key-fast
SMART_BASE_URL=https://smart-api.example.com/v1
SMART_API_KEY=key-smart
```

```ts
const draft = await generateText({
  model: registry.languageModel('env:fast/llama-3-8b'),
  prompt: 'Write a story',
})
const review = await generateText({
  model: registry.languageModel('env:smart/gpt-4o'),
  prompt: `Review this: ${draft.text}`,
})
```

## Environment Variable Convention
The model ID format is `{configSet}/{modelId}`. The config set name maps to an env var prefix (uppercased).

With the default separator `_`, a config set reads these variables (`[MYAI]` = your config set name, uppercased):
| Variable | Required | Description |
|---|---|---|
| `[MYAI]_API_KEY` | Yes | API key |
| `[MYAI]_BASE_URL` | Yes (unless a preset is set or auto-detected) | API base URL |
| `[MYAI]_PRESET` | No | Built-in preset name (e.g. `openai`) |
| `[MYAI]_COMPATIBLE` | No | Compatibility mode (default: `openai-compatible`) |
| `[MYAI]_HEADERS` | No | Custom HTTP headers (JSON format) |
When `_PRESET` is set, `_BASE_URL` and `_COMPATIBLE` become optional and fall back to the preset's values.
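As a sketch of the naming convention (this mirrors, but is not, the library's internal lookup code):

```typescript
// Sketch of the convention: config set name, uppercased, joined to the
// variable suffix by the separator (default '_').
function envVarName(configSet: string, suffix: string, separator = '_'): string {
  return `${configSet.toUpperCase()}${separator}${suffix}`
}

envVarName('myai', 'API_KEY')          // 'MYAI_API_KEY'
envVarName('openai', 'BASE_URL', '__') // 'OPENAI__BASE_URL'
```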
Compatibility modes:
| Value | Behavior |
|---|---|
| openai | Uses @ai-sdk/openai |
| anthropic | Uses @ai-sdk/anthropic |
| gemini | Uses @ai-sdk/google |
| openai-compatible | Uses @ai-sdk/openai-compatible with the config set name as the provider name (default) |
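For example, a self-hosted OpenAI-compatible endpoint (the hostname and key below are placeholders) needs nothing beyond a base URL and key, since `openai-compatible` is the default mode:

```bash
# .env ("local" becomes both the config set name and the provider name)
LOCAL_BASE_URL=http://localhost:8000/v1
LOCAL_API_KEY=placeholder-key
# LOCAL_COMPATIBLE is omitted: openai-compatible is the default
```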
## Built-in Presets
| Preset name | Base URL | Compatible |
|---|---|---|
| openai | https://api.openai.com/v1 | openai |
| anthropic | https://api.anthropic.com | anthropic |
| google | https://generativelanguage.googleapis.com/v1beta | gemini |
| deepseek | https://api.deepseek.com | openai-compatible |
| zhipu | https://open.bigmodel.cn/api/paas/v4 | openai-compatible |
| groq | https://api.groq.com/openai/v1 | openai-compatible |
| together | https://api.together.xyz/v1 | openai-compatible |
| fireworks | https://api.fireworks.ai/inference/v1 | openai-compatible |
| mistral | https://api.mistral.ai/v1 | openai-compatible |
| moonshot | https://api.moonshot.cn/v1 | openai-compatible |
| perplexity | https://api.perplexity.ai | openai-compatible |
| openrouter | https://openrouter.ai/api/v1 | openai-compatible |
| siliconflow | https://api.siliconflow.cn/v1 | openai-compatible |
## Preset Auto-Detect
`presetAutoDetect` is enabled by default. When the config set name exactly matches a built-in preset name, the preset is applied automatically — no `_PRESET` env var needed. Only an API key is required:
```bash
# OPENROUTER_API_KEY is all you need
OPENROUTER_API_KEY=sk-or-xxx
```

```ts
const provider = envProvider()

// Works — openrouter preset auto-detected from config set name
const model = provider.languageModel('openrouter/some-model')
```

Explicit `_PRESET` and `_BASE_URL` env vars always take precedence over auto-detect. To disable this behavior:

```ts
envProvider({ presetAutoDetect: false })
```

## API Reference
### `envProvider(options?)`

Returns a `ProviderV3` instance.

```ts
import { envProvider } from 'ai-sdk-provider-env'

const provider = envProvider(options)
```

Options (`EnvProviderOptions`):
| Option | Type | Default | Description |
|---|---|---|---|
| separator | string | '_' | Separator between the prefix and the variable name |
| configs | Record<string, ConfigSetEntry> | undefined | Explicit config sets (takes precedence over env vars) |
| defaults | EnvProviderDefaults | undefined | Global defaults applied to all providers (can be overridden per config set) |
| presetAutoDetect | boolean | true | Auto-apply a built-in preset when the config set name matches. Set to false to require explicit _PRESET configuration. |
EnvProviderDefaults:
| Option | Type | Default | Description |
|---|---|---|---|
| fetch | typeof globalThis.fetch | undefined | Custom fetch implementation passed to all created providers |
| headers | Record<string, string> | undefined | Default HTTP headers for all providers (overridden by config-set headers) |
`ConfigSetEntry`:

```ts
interface ConfigSetEntry {
  apiKey: string
  preset?: string
  baseURL?: string
  compatible?: 'openai' | 'anthropic' | 'gemini' | 'openai-compatible' // default: 'openai-compatible'
  headers?: Record<string, string>
}
```

Model ID format:

```
{configSet}/{modelId}
```

Examples: `openai/gpt-4o`, `anthropic/claude-sonnet-4-20250514`, `myapi/some-model`.
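The split presumably happens at the first `/` (an illustration under that assumption, not the library's code), which would keep any further slashes as part of the model name:

```typescript
// Illustration only: split a model ID at the first '/', so slashes
// inside the model name (e.g. OpenRouter-style IDs) stay intact.
function parseModelId(id: string): { configSet: string; modelId: string } {
  const slash = id.indexOf('/')
  if (slash === -1) throw new Error(`invalid model id: ${id}`)
  return { configSet: id.slice(0, slash), modelId: id.slice(slash + 1) }
}

parseModelId('openai/gpt-4o') // { configSet: 'openai', modelId: 'gpt-4o' }
```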
## Advanced Usage
### Custom separator

If single underscores conflict with your naming scheme, use double underscores or any other string:

```ts
const provider = envProvider({ separator: '__' })
// Now reads: OPENAI__BASE_URL, OPENAI__API_KEY, OPENAI__PRESET, OPENAI__COMPATIBLE
```

### Code-based configs
Skip env vars entirely and pass config directly. This takes the highest precedence:

```ts
const provider = envProvider({
  configs: {
    openai: {
      baseURL: 'https://api.openai.com/v1',
      apiKey: process.env.OPENAI_KEY!,
      compatible: 'openai',
    },
    claude: {
      baseURL: 'https://api.anthropic.com',
      apiKey: process.env.ANTHROPIC_KEY!,
      compatible: 'anthropic',
    },
    deepseek: {
      preset: 'deepseek',
      apiKey: process.env.DEEPSEEK_KEY!,
    },
  },
})

const model = provider.languageModel('openai/gpt-4o')
```

### Custom fetch
Pass a custom fetch implementation to all providers. Useful for proxies, logging, or test mocks:

```ts
const provider = envProvider({ defaults: { fetch: myCustomFetch } })
```
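For instance, `myCustomFetch` could be a thin logging wrapper (a sketch; any function matching fetch's signature works):

```typescript
// A minimal logging wrapper around the global fetch.
const myCustomFetch: typeof globalThis.fetch = async (input, init) => {
  const started = Date.now()
  const response = await fetch(input, init)
  console.log(`[fetch] ${response.status} ${String(input)} in ${Date.now() - started}ms`)
  return response
}
```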
### Default headers

Set HTTP headers that apply to all providers. Per-config-set headers (from env vars or code configs) override defaults with the same key:

```ts
const provider = envProvider({
  defaults: {
    headers: { 'X-App-Name': 'my-app', 'X-Request-Source': 'server' },
  },
})
```

### Custom headers via env vars

Set per-config-set HTTP headers using the `HEADERS` env var. The value must be valid JSON:

```bash
OPENAI_HEADERS={"X-Custom":"value","X-Request-Source":"my-app"}
```

These headers are merged into every request made by that config set's provider. When combined with `defaults.headers`, config-set headers take precedence for the same key.
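The precedence behaves like a shallow merge with config-set headers spread last (a sketch of the observable behavior, not the library's implementation):

```typescript
const defaultHeaders = { 'X-App-Name': 'my-app', 'X-Request-Source': 'server' }
const configSetHeaders = { 'X-Request-Source': 'my-app', 'X-Custom': 'value' }

// Config-set headers win for duplicate keys; defaults survive otherwise.
const merged = { ...defaultHeaders, ...configSetHeaders }
// { 'X-App-Name': 'my-app', 'X-Request-Source': 'my-app', 'X-Custom': 'value' }
```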
## Using with `createProviderRegistry`
`envProvider()` implements `ProviderV3`, so it works directly with `createProviderRegistry`:

```ts
import { createProviderRegistry, generateText } from 'ai'
import { envProvider } from 'ai-sdk-provider-env'

const registry = createProviderRegistry({
  env: envProvider(),
})

// Language model
const model = registry.languageModel('env:openai/gpt-4o')

// Embedding model
const embedder = registry.embeddingModel('env:openai/text-embedding-3-small')

// Image model
const imageModel = registry.imageModel('env:openai/dall-e-3')

const { text } = await generateText({
  model,
  prompt: 'Hello!',
})
```

The model ID format inside the registry is `{registryKey}:{configSet}/{modelId}`. With the setup above, `env:openai/gpt-4o` means config set `openai`, model `gpt-4o`.
You can also mount multiple providers side by side:
```ts
import { createOpenAI } from '@ai-sdk/openai'

const registry = createProviderRegistry({
  env: envProvider(),
  openai: createOpenAI({ apiKey: process.env.OPENAI_API_KEY }),
})
```