@x12i/ai-providers-router

A unified LLM provider router that routes requests to installed provider packages using the ProviderModule architecture.

This router:

  • Supports OpenRouter Mode: access to 353+ models from 67 providers via catalog-driven routing
  • Chooses a provider/model (and optionally a fallback chain)
  • Loads ProviderModules from installed provider packages (lazy import)
  • Uses router-side adapters to convert requests to ProviderSDKCallSpec
  • Executes via ProviderModule.execute() / stream() / submitBatch()
  • Parses responses using router-side adapters
  • Returns standardized responses with lossless rawResponse

Architecture

  • ProviderModule: Provider packages export ProviderModules that implement @x12i/ai-provider-interface
  • Router Adapters: Router-side adapters convert router requests to ProviderSDKCallSpec and parse responses
  • Capability Gating: Router gates execution by provider.capabilities.modes.sync/stream/batch (ProviderModule is the source of truth; see the sketch below)
  • Execution Semantics: Router owns execution semantics (timeoutMs, retries, idempotencyKey, signal)

Important: This router never installs provider packages at runtime.
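
To make the gating rule concrete, here is a minimal sketch of the check described above; the ProviderModule shape follows @x12i/ai-provider-interface, and the function name and error message are illustrative, not part of the package API.

type Mode = 'sync' | 'stream' | 'batch';

// Sketch of capability gating: ProviderModule capabilities are the source of truth.
function assertModeSupported(
  provider: { id: string; capabilities: { modes: Record<Mode, boolean> } },
  mode: Mode,
): void {
  if (!provider.capabilities.modes[mode]) {
    throw new Error(`Provider "${provider.id}" does not support mode "${mode}"`);
  }
}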


Install

npm i @x12i/ai-providers-router

Install at least one provider package (examples):

npm i @x12i/ai-provider-openai
npm i @x12i/ai-provider-anthropic
npm i @x12i/ai-provider-google
npm i @x12i/ai-provider-xai
npm i @x12i/ai-provider-groq

For OpenRouter mode: Only @x12i/ai-provider-openai is required to access 353 models from 67 providers through OpenRouter's unified API.


Provider IDs (canonical)

Core Providers:

  • openai → OpenAI
  • anthropic → Claude
  • google → Gemini
  • xai → Grok (xAI)
  • groq → GroqCloud (Llama/Mixtral/OSS models)
  • kimi → Moonshot/Kimi (if installed)

OpenRouter Mode (67 providers supported):

  • openrouter → OpenRouter (unified gateway to all providers)
  • All provider names work seamlessly (automatic routing through OpenRouter)
  • Access to 353+ models from providers like Meta, Mistral, Cohere, Perplexity, and many more

Grok ≠ Groq

  • Grok is xAI (xai)
  • Groq is GroqCloud (groq)

OpenRouter Mode

OpenRouter is a unified API gateway that provides access to multiple AI models from different providers. When OpenRouter mode is enabled, all provider calls automatically route through OpenRouter while maintaining a seamless API experience.

Key Features

  • Comprehensive Model Catalog: Access 353 models from 67 providers using catalog data automatically loaded from OpenRouter APIs
  • Seamless API: Use the same provider names ("openai", "grok", "anthropic", etc.) - no code changes needed
  • Smart Provider Inference: Uses catalog data to automatically infer providers from model names (e.g., "gpt-4o" → "openai")
  • Model Validation: Validates models against available OpenRouter catalog and warns about invalid models
  • Provider Aliases: Supports vendor mappings (e.g., xai models route to grok provider)
  • Model Name Mapping: Automatically converts provider + model to OpenRouter format (e.g., provider: "openai" + model: "gpt-4o" → "openai/gpt-4o")
  • Access any OpenRouter model: Call models even without direct provider packages (e.g., "meta-llama/llama-3-70b-instruct")
  • Unified Reasoning API: Cross-vendor reasoning support with effort control and visibility options (see Reasoning Integration)
  • No ai-io-normalizer: OpenRouter responses are parsed directly (faster, simpler)

OpenRouter Mode - Completely Automatic

OpenRouter mode works automatically - no code changes required!

Simply set the OPEN_ROUTER_KEY environment variable:

export OPEN_ROUTER_KEY=sk-or-your-openrouter-api-key-here

That's it! OpenRouter mode is completely automatic and works with:

  • Factory initialization: await createRouter() - automatically registers OpenRouter provider module
  • Manual initialization: new LLMProviderRouter() - automatically detects OpenRouter mode via environment variable
  • Any provider name: Use config.provider: "openai", "grok", "anthropic", etc. - all route through OpenRouter automatically

How it works:

  • When OPEN_ROUTER_KEY is set, the router automatically detects OpenRouter mode (see the sketch after this list)
  • All provider requests (openai, grok, anthropic, etc.) automatically route through OpenRouter
  • No need to register individual provider modules - OpenRouter handles everything
  • Works seamlessly whether you use createRouter() or manual new LLMProviderRouter() initialization
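
A rough sketch of the detection logic implied above (an assumption, not the actual source):

// Assumed detection: key present, not an unresolved placeholder, no explicit opt-out.
const key = process.env.OPEN_ROUTER_KEY;
const openRouterMode =
  !!key &&
  !key.startsWith('ENV.') &&              // see the troubleshooting checklist below
  process.env.USE_OPENROUTER !== 'false'; // explicit opt-out wins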

To disable OpenRouter mode explicitly:

export USE_OPENROUTER=false

Note: When OpenRouter mode is enabled, direct provider packages are not registered to avoid conflicts. All calls route through OpenRouter using the integrated catalog data (.metadata/openrouter_catalog_with_vendor_mapping.json).

Troubleshooting:

If you see errors like "No provider specified and no providers registered":

  1. ✅ Check that OPEN_ROUTER_KEY is set: echo $OPEN_ROUTER_KEY
  2. ✅ Verify the key is valid (not empty, doesn't start with "ENV.")
  3. ✅ Ensure config.provider is specified in your request (e.g., config: { provider: "openai", model: "gpt-4o" })
  4. ✅ The OpenRouter adapter is always registered - no additional setup needed

The router will automatically use OpenRouter mode when these conditions are met! A quick preflight check is sketched below.
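
A hypothetical preflight helper mirroring checks 1-3 (not part of the package API):

// Hypothetical helper: fail fast with the same checks as the list above.
function preflightOpenRouter(provider?: string): void {
  const key = process.env.OPEN_ROUTER_KEY;
  if (!key) throw new Error('OPEN_ROUTER_KEY is not set');
  if (key.startsWith('ENV.')) throw new Error('OPEN_ROUTER_KEY looks like an unresolved "ENV." placeholder');
  if (!provider) throw new Error('Specify config.provider, e.g. { provider: "openai", model: "gpt-4o" }');
}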

Usage Examples

Example 1: Using provider names (seamless - no code changes needed):

import { createRouter, type AIRouterRequest } from '@x12i/ai-providers-router';

const router = await createRouter();

// Works exactly the same whether OpenRouter mode is on or off
const req: AIRouterRequest = {
  request: {
    messages: [{ role: 'user', content: 'Hello!' }],
    config: { model: 'gpt-4o' },
  },
  provider: 'openai',  // Still use "openai" - router handles routing
  mode: 'sync',
};

const res = await router.invoke(req);
// Model automatically mapped to "openai/gpt-4o" when using OpenRouter

Example 2: Provider inference (no provider specified):

// Router infers provider from model name
const req: AIRouterRequest = {
  request: {
    messages: [{ role: 'user', content: 'Hello!' }],
    config: { model: 'gpt-4o' },  // Infers "openai" from "gpt-4o"
  },
  // provider not specified - router infers "openai"
  mode: 'sync',
};

const res = await router.invoke(req);

Example 3: Using OpenRouter model format directly:

// Call any OpenRouter-supported model using OpenRouter's format
const req: AIRouterRequest = {
  request: {
    messages: [{ role: 'user', content: 'Hello!' }],
    config: { model: 'anthropic/claude-3-opus' },  // Direct OpenRouter format
  },
  provider: 'openrouter',  // Use "openrouter" provider
  mode: 'sync',
};

const res = await router.invoke(req);

Example 4: Accessing models without provider packages:

// Access Meta Llama models without installing @x12i/ai-provider-meta
const req: AIRouterRequest = {
  request: {
    messages: [{ role: 'user', content: 'Hello!' }],
    config: { model: 'meta-llama/llama-3-70b-instruct' },
  },
  provider: 'openrouter',
  mode: 'sync',
};

const res = await router.invoke(req);

Example 5: Using diverse models from different providers:

// Anthropic Claude models
const claudeReq: AIRouterRequest = { request: { messages: [{ role: 'user', content: 'Hello!' }], config: { model: 'claude-3-opus' } }, provider: 'anthropic', mode: 'sync' };

// Google Gemini models
const geminiReq: AIRouterRequest = { request: { messages: [{ role: 'user', content: 'Hello!' }], config: { model: 'gemini-pro' } }, provider: 'google', mode: 'sync' };

// Groq models (GroqCloud, not xAI's Grok)
const groqReq: AIRouterRequest = { request: { messages: [{ role: 'user', content: 'Hello!' }], config: { model: 'llama-3-70b-8192' } }, provider: 'groq', mode: 'sync' };

// All automatically route through OpenRouter when mode is enabled
const results = await Promise.all([
  router.invoke(claudeReq),
  router.invoke(geminiReq),
  router.invoke(groqReq),
]);

How OpenRouter Mode Works

  1. Request Interceptor: When OpenRouter mode is enabled, a request interceptor:

    • Preserves the original provider name (e.g., "openai", "grok") in request.config.provider
    • Routes the request to "openrouter" provider
    • Infers provider from model name if not specified
  2. Model Name Mapping: The OpenRouterAdapter:

    • Reads the original provider from request.config.provider
    • Maps model names: "gpt-4o" + provider: "openai" → "openai/gpt-4o" (sketched after this list)
    • Handles models already in OpenRouter format (with /) as-is
  3. Response Parsing: Responses are parsed directly from OpenAI formats (no ai-io-normalizer):

    • Chat Completions: Extracts choices[0].message.content for text
    • Responses API (v1): Handles output array with text and encrypted reasoning items
    • Extracts usage for token counts from both formats
    • Adds status: 'completed' for compatibility
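
A minimal sketch of the mapping rule from step 2 (assumed behavior; the real OpenRouterAdapter also consults the catalog):

// Sketch of model-name mapping (illustrative).
function toOpenRouterModel(provider: string, model: string): string {
  if (model.includes('/')) return model;   // already in OpenRouter "vendor/model" format
  return `${provider}/${model}`;           // e.g. "openai" + "gpt-4o" -> "openai/gpt-4o"
}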

Provider Inference Rules

When no provider is specified, the router uses catalog data to intelligently infer providers from model names. This includes:

  • Exact Model Matching: Recognizes all 353 OpenRouter models by their exact IDs

  • Alias Support: Handles model aliases from the catalog

  • Vendor Mapping: Maps vendor IDs to provider slugs (e.g., xai → grok)

  • Fallback Patterns: Uses legacy pattern matching when catalog data is unavailable (see the sketch after this list):

    • gpt-*, o1-*, openai/* → "openai"
    • claude-*, anthropic/* → "anthropic"
    • grok-*, xai/* → "grok"
    • gemini-*, google/* → "google"
    • llama-*, meta-llama/* → "meta"
    • Default → "openai" (most common case)
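
The fallback patterns might look roughly like this (a sketch, not the actual implementation):

// Sketch of legacy fallback inference (illustrative only).
function inferProviderFallback(model: string): string {
  if (model.startsWith('gpt-') || model.startsWith('o1-') || model.startsWith('openai/')) return 'openai';
  if (model.startsWith('claude-') || model.startsWith('anthropic/')) return 'anthropic';
  if (model.startsWith('grok-') || model.startsWith('xai/')) return 'grok';
  if (model.startsWith('gemini-') || model.startsWith('google/')) return 'google';
  if (model.startsWith('llama-') || model.startsWith('meta-llama/')) return 'meta';
  return 'openai'; // default: most common case
}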

Model Validation & Catalog Features

The router automatically validates models against the OpenRouter catalog (a short sketch follows the lists below):

  • Model Availability: Warns when requesting models not available in OpenRouter
  • Alias Resolution: Automatically resolves model aliases to canonical OpenRouter IDs
  • Capability Checking: Validates model parameters against supported capabilities
  • Graceful Fallbacks: Falls back to legacy logic if catalog loading fails
  • Format Support: Handles both OpenAI Chat Completions and Responses API v1 formats
  • Encrypted Reasoning: Processes encrypted reasoning traces (model thinking is privacy-protected)
  • Reasoning Parameter Support: Enables reasoning effort levels for compatible models

Catalog Data Sources:

  • 67 Providers: All current OpenRouter providers
  • 353 Models: Complete model catalog with aliases and capabilities
  • Vendor Mappings: Direct API mappings for accurate routing
  • Auto-updating: Uses latest catalog data from OpenRouter APIs
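
To make alias resolution and validation concrete, a hedged sketch assuming a catalog shaped like the integrated JSON file mentioned above (the field names are hypothetical):

// Hypothetical catalog shape; the actual schema may differ.
interface CatalogEntry { id: string; aliases?: string[] }

function resolveModel(catalog: CatalogEntry[], model: string): string | undefined {
  const hit = catalog.find((e) => e.id === model || e.aliases?.includes(model));
  if (!hit) console.warn(`Model "${model}" not in OpenRouter catalog`); // warn, fall back to legacy logic
  return hit?.id; // canonical OpenRouter ID
}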

OpenRouter Configuration

Optional environment variables for OpenRouter rankings:

export OPEN_ROUTER_HTTP_REFERER=https://your-site.com
export OPEN_ROUTER_X_TITLE=Your Site Name

See Environment Variables documentation for details.
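
These presumably map onto OpenRouter's ranking headers; a sketch of the assumed wiring:

// Assumed mapping of the env vars to OpenRouter's HTTP-Referer / X-Title ranking headers.
const rankingHeaders: Record<string, string> = {};
if (process.env.OPEN_ROUTER_HTTP_REFERER) rankingHeaders['HTTP-Referer'] = process.env.OPEN_ROUTER_HTTP_REFERER;
if (process.env.OPEN_ROUTER_X_TITLE) rankingHeaders['X-Title'] = process.env.OPEN_ROUTER_X_TITLE;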


Zero-config router creation

No arguments are required.

import { createRouter } from '@x12i/ai-providers-router';

const router = await createRouter();

Optional router-level config (logging, usage tracking, timeout):

const router = await createRouter({
  logLevel: 'info',
  verbose: false,
  timeoutMs: 60000, // Default timeout for all operations (ERC: AI_PROVIDER_ROUTER_TIMEOUT_MS)
  usageTracker: {
    recordRequest(e) { /* ... */ },
  },
});

Request/Response Types

Router uses its own request/response types:

  • AIRouterRequest (input) - includes unified reasoning controls
  • AIResponse (sync output) - includes unified reasoning response
  • AIStreamEvent (streaming output) - includes reasoning streaming events
  • AIBatchResponse (batch output)

Authoritative trace diagnostics (stable contract)

For downstream orchestration, AIResponse includes stable, provider-agnostic diagnostics (a usage sketch follows this list):

  • response.usage?: { promptTokens; completionTokens; totalTokens }
  • response.metadata (keys when known):
    • metadata.provider: final provider used for the successful call (or last attempt)
    • metadata.modelUsed: the actual model that served the response
    • metadata.maxTokensRequested: final effective generation cap applied (if determinable)
    • metadata.costUsd: normalized USD cost (if computable)
    • metadata.requestIds: { routerRequestId, providerRequestId?, openrouterRequestId? }
    • metadata.timing: { startedAt, endedAt, durationMs } (provider-call timing)
    • metadata.latencyMs: alias for metadata.timing.durationMs
    • metadata.attempts[]: ordered attempts across retries + fallbacks (authoritative execution trace)
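
For example, a downstream consumer might read the trace like this (every key may be absent, as noted above):

// Given a router and req from the examples above:
const res: AIResponse = await router.invoke(req);
const { provider, modelUsed, costUsd, latencyMs, attempts } = res.metadata ?? {};
console.log(`served by ${provider}/${modelUsed} in ${latencyMs} ms (cost: ${costUsd ?? 'n/a'} USD)`);
for (const attempt of attempts ?? []) {
  console.log(attempt); // ordered execution trace across retries + fallbacks
}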

Unified Reasoning API

import type { AIRouterRequest, AIResponse } from '@x12i/ai-providers-router';

// Request reasoning with extended effort levels
config: {
  reasoning: {
    effort: 'high',        // or 'low', 'medium', 'high', 'xhigh' (xhigh normalized to high)
    maxTokens: 2000,        // optional: for Anthropic/Gemini models (max_tokens mode)
    visibility: 'trace',     // or 'none', 'summary' (best-effort; downgraded if not returned)
    onUnsupported: 'downgrade'  // or 'error' (throws), 'ignore' (silent)
  }
}

// Access unified reasoning response
response.reasoning.artifacts.encrypted  // Encrypted reasoning traces
response.reasoning.applied.effort       // What was actually applied (may differ from requested)
response.reasoning.applied.visibility  // What visibility was actually returned
response.reasoning.availability        // Model capability flags
response.reasoning.warnings             // Any downgrade/normalization warnings

Reasoning Features:

  • Effort Control: low, medium, high, xhigh (xhigh auto-normalized to high)
  • Max Tokens Control: Direct maxTokens budget for Anthropic/Gemini models
  • Encrypted Traces: Access encrypted reasoning artifacts (ciphertext not decryptable by user; only metadata/prefix logged)
  • Summary Visibility: Human-readable reasoning summary (best-effort; returned only if provider returns reasoning_details with reasoning.summary; otherwise downgraded with warning)
  • Trace Visibility: Encrypted or readable reasoning traces (best-effort; satisfied by either reasoning.encrypted artifacts or reasoning.text chunks; downgraded if not available)
  • Model Detection: Automatic detection of reasoning-capable models via JSON registry (cross-vendor support)
  • Extended Support: Works with OpenAI o-series models (o1, o3, o4 series - 10+ models), xAI Grok models, Anthropic Claude reasoning models, and Google Gemini reasoning models
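
Putting the fragments above together, a minimal end-to-end reasoning call might look like this (a sketch; the model choice is illustrative):

import { createRouter, type AIResponse } from '@x12i/ai-providers-router';

const router = await createRouter();

const res: AIResponse = await router.invoke({
  request: {
    messages: [{ role: 'user', content: 'Plan a 3-step rollout.' }],
    config: {
      model: 'openai/o3-mini', // reasoning-capable per the registry below
      reasoning: { effort: 'high', visibility: 'summary', onUnsupported: 'downgrade' },
    },
  },
  provider: 'openai',
  mode: 'sync',
});

console.log(res.reasoning?.applied.effort); // effort actually applied
console.log(res.reasoning?.warnings);       // downgrade/normalization warnings, if any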

Supported Models: Currently detected via router-owned JSON registry (.metadata/reasoning-support.json):

  • OpenAI o-series (openai/o* pattern): openai/o1, openai/o1-pro, openai/o3, openai/o3-mini, openai/o3-pro, openai/o3-deep-research, openai/o3-mini-high, openai/o4-mini, openai/o4-mini-deep-research, openai/o4-mini-high
  • xAI Grok (x-ai/grok* pattern): x-ai/grok-4.1-fast and other reasoning-enabled Grok models
  • Anthropic Claude (anthropic/claude* pattern): Reasoning-enabled Claude models (uses max_tokens mode)
  • Google Gemini (google/gemini* pattern): Reasoning-enabled Gemini models (uses max_tokens mode)

ℹ️ Note: Summary/trace visibility are best-effort and depend on what the provider actually returns in reasoning_details. If the provider doesn't return the requested visibility type, the router downgrades to none and adds a VISIBILITY_DOWNGRADED warning. Encrypted reasoning artifacts are not decryptable by the user; only metadata (id, format, index) and a ciphertext prefix (first 32 chars) are logged for debugging. Many other vendors have reasoning-capable models (Amazon Nova, Aion Labs, Alibaba Tongyi, AllenAI OLMO, Arcee AI, Baidu ERNIE, ByteDance Seed, DeepCogito, MoonshotAI Kimi, Qwen, THUDM GLM, and more), including models with "thinking" or "thought" capabilities, but they are not yet implemented. See Reasoning Supported Models for the complete list.

See Reasoning Integration Guide and Reasoning Supported Models for complete documentation.


Sync call

import { createRouter, type AIRouterRequest, type AIResponse } from '@x12i/ai-providers-router';

const router = await createRouter();

const req: AIRouterRequest = {
  request: {
    inputData: 'Write 3 bullets about routers.',
    config: {
      maxTokens: 200,
      temperature: 0.7,
      model: 'gpt-4o-mini',
    },
  },
  provider: 'openai',
  mode: 'sync',
  exec: {
    timeoutMs: 60000, // Optional: override default timeout
    idempotencyKey: 'optional-key', // Optional: for idempotent requests
  },
};

const res: AIResponse = await router.invoke(req);

console.log(res.outputText); // Normalized text (optional)
console.log(res.rawResponse); // Lossless raw response (always present)
console.log(res.usage); // Token usage

Streaming call

const streamReq: AIRouterRequest = {
  ...req,
  mode: 'stream',
};

for await (const ev of router.stream(streamReq)) {
  if (ev.type === 'provider_raw') {
    // Raw provider event (always emitted for debugging)
    console.log('Raw event:', ev.raw);
  } else if (ev.type === 'output_text_delta') {
    // Normalized text delta
    process.stdout.write(ev.delta);
  } else if (ev.type === 'completed') {
    // Final response
    console.log('Final:', ev.response.outputText);
  } else if (ev.type === 'error') {
    console.error('Error:', ev.error);
  }
}

Batch requests

Batch requests use the batch API (gated by ProviderModule capabilities):

const items = [
  { request: { inputData: 'First request', config: { model: 'gpt-4o-mini' } } },
  { request: { inputData: 'Second request', config: { model: 'gpt-4o-mini' } } },
];

const batchResult = await router.createBatch('openai', items, {
  timeoutMs: 120000, // Optional: override default timeout
  idempotencyKey: 'optional-key', // Optional
});

console.log(batchResult.items); // Array of results
console.log(batchResult.rawBatch); // Lossless raw batch response

Note: Batch is only available if provider.capabilities.modes.batch === true. The router gates execution by ProviderModule capabilities, not by transformer supports flags.


How it works (high level)

  1. Router receives an AIRouterRequest

  2. Request Interceptors (if OpenRouter mode enabled):

    • Preserve original provider name for model mapping
    • Route requests to OpenRouter provider
    • Infer provider from model name if not specified
  3. Router loads ProviderModule from installed provider package (lazy import)

  4. Router checks provider.capabilities.modes to gate execution

  5. Router-side adapter converts request to ProviderSDKCallSpec

    • OpenRouterAdapter: Maps provider + model to OpenRouter format (e.g., "openai/gpt-4o")
  6. Router calls ProviderModule:

    • provider.execute(spec) (sync)
    • provider.stream(spec) (streaming)
    • provider.submitBatch(specs) (batch)
  7. Router-side adapter parses ProviderSDKExecResult to AIResponse

    • OpenRouterAdapter: Parses OpenAI Chat Completions format directly (no ai-io-normalizer)
  8. Router returns standardized response with lossless rawResponse


Provider packages are required

If you call a provider that is not installed, the router throws a clear error with install instructions.

Exception: When OpenRouter mode is enabled, you only need @x12i/ai-provider-openai installed (OpenRouter uses OpenAI-compatible API). You can access any of the 353 models from 67 providers without installing individual provider packages.

Supported Providers in OpenRouter Mode:

  • All major providers: OpenAI, Anthropic, Google, xAI (Grok), Groq, Meta, Mistral, Cohere, etc.
  • 67 total providers from the OpenRouter catalog
  • 353 models with full capability support

Examples:

  • Provider openai requires @x12i/ai-provider-openai
  • Provider grok requires @x12i/ai-provider-grok
  • OpenRouter mode: Only requires @x12i/ai-provider-openai to access all OpenRouter-supported models

This router will never auto-install packages.
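
Since a missing package surfaces as a thrown error, callers can handle it explicitly; the message shown is illustrative:

try {
  await router.invoke({
    request: { inputData: 'Hello!', config: { model: 'grok-4.1-fast' } },
    provider: 'grok',
    mode: 'sync',
  });
} catch (err) {
  // Expect a clear error with install instructions, e.g. "npm i @x12i/ai-provider-grok"
  console.error((err as Error).message);
}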


License

ISC