
ai-sdk-provider-env

v0.2.0

A dynamic, environment-variable-driven provider for Vercel AI SDK. Resolves AI provider configuration from env var conventions at runtime, so you can switch models without touching code.

Motivation

Using multiple AI providers with Vercel AI SDK means importing each SDK, configuring API keys and base URLs, and wiring everything together — per provider, per project. Switching providers requires code changes.

ai-sdk-provider-env eliminates this boilerplate. Define provider configurations through environment variables, resolve them at runtime. Add a new provider by setting env vars, switch models by changing a string — no code changes needed.

Features

  • Resolve provider config (base URL, API key, compatibility mode) from environment variables automatically
  • Built-in presets for popular providers, so you only need to set an API key
  • Supports OpenAI, Anthropic, Google Gemini, and any OpenAI-compatible API
  • Implements ProviderV3, plugs directly into createProviderRegistry
  • Provider instances are cached, no redundant initialization
  • Fully customizable: custom fetch, env-based headers, custom separator, code-based configs

Installation

pnpm add ai-sdk-provider-env

Install provider SDKs as needed:

pnpm add @ai-sdk/openai            # for OpenAI
pnpm add @ai-sdk/anthropic         # for Anthropic
pnpm add @ai-sdk/google            # for Google AI Studio (Gemini)
pnpm add @ai-sdk/openai-compatible # for generic OpenAI-compatible APIs

Quick Start

import { createProviderRegistry, generateText } from 'ai'
import { envProvider } from 'ai-sdk-provider-env'

const registry = createProviderRegistry({
  env: envProvider(),
})

// Use a preset: only API_KEY is required
// OPENAI_API_KEY=sk-xxx  (OPENAI_PRESET=openai is optional — auto-detected)
const model = registry.languageModel('env:openai/gpt-4o')

const { text } = await generateText({ model, prompt: 'Hello!' })

Any env var prefix is a config set. Two endpoints? Two prefixes, zero code changes:

# .env
FAST_BASE_URL=https://fast-api.example.com/v1
FAST_API_KEY=key-fast

SMART_BASE_URL=https://smart-api.example.com/v1
SMART_API_KEY=key-smart

const draft = await generateText({
  model: registry.languageModel('env:fast/llama-3-8b'),
  prompt: 'Write a story',
})

const review = await generateText({
  model: registry.languageModel('env:smart/gpt-4o'),
  prompt: `Review this: ${draft.text}`,
})

Environment Variable Convention

The model ID format is {configSet}/{modelId}. The config set name maps to an env var prefix (uppercased).

With the default separator _, a config set reads these variables ([MYAI] = your config set name, uppercased):

| Variable | Required | Description |
|---|---|---|
| [MYAI]_API_KEY | Yes | API key |
| [MYAI]_BASE_URL | Yes (unless preset is set or auto-detected) | API base URL |
| [MYAI]_PRESET | No | Built-in preset name (e.g. openai) |
| [MYAI]_COMPATIBLE | No | Compatibility mode (default: openai-compatible) |
| [MYAI]_HEADERS | No | Custom HTTP headers (JSON format) |

When PRESET is set, BASE_URL and COMPATIBLE become optional and fall back to the preset's values.
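
As a sketch, the naming convention amounts to uppercasing the config set name and joining it to each field with the separator. The helper below is hypothetical, not part of the package API:

```typescript
// Hypothetical helper illustrating the env var naming convention.
function envVarNames(configSet: string, separator = '_') {
  const prefix = configSet.toUpperCase()
  return {
    apiKey: `${prefix}${separator}API_KEY`,
    baseURL: `${prefix}${separator}BASE_URL`,
    preset: `${prefix}${separator}PRESET`,
    compatible: `${prefix}${separator}COMPATIBLE`,
    headers: `${prefix}${separator}HEADERS`,
  }
}

// envVarNames('myai') yields MYAI_API_KEY, MYAI_BASE_URL, and so on.
```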

Compatibility modes:

| Value | Behavior |
|---|---|
| openai | Uses @ai-sdk/openai |
| anthropic | Uses @ai-sdk/anthropic |
| gemini | Uses @ai-sdk/google |
| openai-compatible | Uses @ai-sdk/openai-compatible with the config set name as the provider name (default) |

Built-in Presets

| Preset name | Base URL | Compatible |
|---|---|---|
| openai | https://api.openai.com/v1 | openai |
| anthropic | https://api.anthropic.com | anthropic |
| google | https://generativelanguage.googleapis.com/v1beta | gemini |
| deepseek | https://api.deepseek.com | openai-compatible |
| zhipu | https://open.bigmodel.cn/api/paas/v4 | openai-compatible |
| groq | https://api.groq.com/openai/v1 | openai-compatible |
| together | https://api.together.xyz/v1 | openai-compatible |
| fireworks | https://api.fireworks.ai/inference/v1 | openai-compatible |
| mistral | https://api.mistral.ai/v1 | openai-compatible |
| moonshot | https://api.moonshot.cn/v1 | openai-compatible |
| perplexity | https://api.perplexity.ai | openai-compatible |
| openrouter | https://openrouter.ai/api/v1 | openai-compatible |
| siliconflow | https://api.siliconflow.cn/v1 | openai-compatible |

Preset Auto-Detect

presetAutoDetect is enabled by default. When the config set name exactly matches a built-in preset name, the preset is applied automatically — no _PRESET env var needed. Only an API key is required:

# OPENROUTER_API_KEY is all you need
OPENROUTER_API_KEY=sk-or-xxx

const provider = envProvider()

// Works — openrouter preset auto-detected from config set name
const model = provider.languageModel('openrouter/some-model')

Explicit _PRESET and _BASE_URL env vars always take precedence over auto-detect. To disable this behavior:

envProvider({ presetAutoDetect: false })
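
The precedence just described can be sketched as follows. This is illustrative, with an abridged preset list, and is not the package's internal code:

```typescript
// Hypothetical sketch of the preset resolution order: an explicit _PRESET value
// wins; otherwise, when auto-detect is on and the config set name matches a
// built-in preset name, that preset applies.
const BUILTIN_PRESETS = new Set(['openai', 'anthropic', 'google', 'openrouter']) // abridged

function resolvePresetName(
  configSet: string,
  explicitPreset: string | undefined,
  presetAutoDetect = true,
): string | undefined {
  if (explicitPreset) return explicitPreset
  if (presetAutoDetect && BUILTIN_PRESETS.has(configSet)) return configSet
  return undefined
}
```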

API Reference

envProvider(options?)

Returns a ProviderV3 instance.

import { envProvider } from 'ai-sdk-provider-env'

const provider = envProvider(options)

Options (EnvProviderOptions):

| Option | Type | Default | Description |
|---|---|---|---|
| separator | string | '_' | Separator between the prefix and the variable name |
| configs | Record<string, ConfigSetEntry> | undefined | Explicit config sets (takes precedence over env vars) |
| defaults | EnvProviderDefaults | undefined | Global defaults applied to all providers (can be overridden per config set) |
| presetAutoDetect | boolean | true | Auto-apply a built-in preset when the config set name matches. Set to false to require explicit _PRESET configuration. |

EnvProviderDefaults:

| Option | Type | Default | Description |
|---|---|---|---|
| fetch | typeof globalThis.fetch | undefined | Custom fetch implementation passed to all created providers |
| headers | Record<string, string> | undefined | Default HTTP headers for all providers (overridden by config-set headers) |

ConfigSetEntry:

interface ConfigSetEntry {
  apiKey: string
  preset?: string
  baseURL?: string
  compatible?: 'openai' | 'anthropic' | 'gemini' | 'openai-compatible' // default: 'openai-compatible'
  headers?: Record<string, string>
}

Model ID format:

{configSet}/{modelId}

Examples: openai/gpt-4o, anthropic/claude-sonnet-4-20250514, myapi/some-model.
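
Splitting a model ID can be sketched like this. The parser is a hypothetical helper and assumes only the first / separates the config set, since model IDs (e.g. OpenRouter-style paths) may themselves contain slashes:

```typescript
// Hypothetical parser for the {configSet}/{modelId} format.
// Only the first '/' separates the config set; the rest is the model ID.
function parseModelId(id: string): { configSet: string; modelId: string } {
  const i = id.indexOf('/')
  if (i <= 0 || i === id.length - 1) {
    throw new Error(`Invalid model ID: ${id}`)
  }
  return { configSet: id.slice(0, i), modelId: id.slice(i + 1) }
}

// parseModelId('myapi/org/some-model') → configSet 'myapi', modelId 'org/some-model'
```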

Advanced Usage

Custom separator

If single underscores conflict with your naming scheme, use double underscores or any other string:

const provider = envProvider({ separator: '__' })

// Now reads: OPENAI__BASE_URL, OPENAI__API_KEY, OPENAI__PRESET, OPENAI__COMPATIBLE

Code-based configs

Skip env vars entirely and pass config directly. This takes the highest precedence:

const provider = envProvider({
  configs: {
    openai: {
      baseURL: 'https://api.openai.com/v1',
      apiKey: process.env.OPENAI_KEY!,
      compatible: 'openai',
    },
    claude: {
      baseURL: 'https://api.anthropic.com',
      apiKey: process.env.ANTHROPIC_KEY!,
      compatible: 'anthropic',
    },
    deepseek: {
      preset: 'deepseek',
      apiKey: process.env.DEEPSEEK_KEY!,
    },
  },
})

const model = provider.languageModel('openai/gpt-4o')

Custom fetch

Pass a custom fetch implementation to all providers. Useful for proxies, logging, or test mocks:

const provider = envProvider({ defaults: { fetch: myCustomFetch } })

Default headers

Set HTTP headers that apply to all providers. Per-config-set headers (from env vars or code configs) override defaults with the same key:

const provider = envProvider({
  defaults: {
    headers: { 'X-App-Name': 'my-app', 'X-Request-Source': 'server' },
  },
})

Custom headers via env vars

Set per-config-set HTTP headers using the HEADERS env var. The value must be valid JSON:

OPENAI_HEADERS={"X-Custom":"value","X-Request-Source":"my-app"}

These headers are merged into every request made by that config set's provider. When combined with defaults.headers, config-set headers take precedence for the same key.
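
This merge precedence amounts to a shallow spread in which config-set headers win for duplicate keys. A minimal sketch, not the package's code:

```typescript
// Hypothetical sketch: defaults first, then config-set headers override same keys.
function mergeHeaders(
  defaults: Record<string, string> = {},
  configSetHeaders: Record<string, string> = {},
): Record<string, string> {
  return { ...defaults, ...configSetHeaders }
}

// mergeHeaders({ 'X-App-Name': 'my-app' }, { 'X-App-Name': 'override' })
// keeps 'override' for X-App-Name and leaves other default keys intact.
```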

Using with createProviderRegistry

envProvider() implements ProviderV3, so it works directly with createProviderRegistry:

import { createProviderRegistry, generateText } from 'ai'
import { envProvider } from 'ai-sdk-provider-env'

const registry = createProviderRegistry({
  env: envProvider(),
})

// Language model
const model = registry.languageModel('env:openai/gpt-4o')

// Embedding model
const embedder = registry.embeddingModel('env:openai/text-embedding-3-small')

// Image model
const imageModel = registry.imageModel('env:openai/dall-e-3')

const { text } = await generateText({
  model,
  prompt: 'Hello!',
})

The model ID format inside the registry is {registryKey}:{configSet}/{modelId}. With the setup above, env:openai/gpt-4o means config set openai, model gpt-4o.

You can also mount multiple providers side by side:

import { createOpenAI } from '@ai-sdk/openai'

const registry = createProviderRegistry({
  env: envProvider(),
  openai: createOpenAI({ apiKey: process.env.OPENAI_API_KEY }),
})