
ai-io-normalizer

Standard I/O contract + adapter interface for LLM provider SDKs.

This package defines the single standard request shape and the single set of standard response shapes that every provider adapter must implement, so the rest of the system can speak one unified contract across providers (OpenAI, Anthropic, Gemini, xAI/Grok, Groq, Kimi, etc.) and across execution modes (sync, stream, async jobs, batch).

This package is SDK-only: adapters call provider SDK clients. It does not perform routing, fallback, HTTP transport, or package installation.


What this package provides

  • Standard request type: AdapterRequest
  • Standard response types:
    • AdapterSyncResponse
    • AdapterStreamResponse (streaming via AsyncIterable<StreamEvent>)
    • AdapterAsyncAcceptedResponse (native async jobs)
    • AdapterErrorResponse
  • Adapter interface: LLMProviderAdapter
  • Unified tool calling model (tool definitions + tool calls + tool result messages)
  • Per-provider capabilities JSON schema (one JSON file per provider adapter package)
  • Consistent error taxonomy and usage reporting
  • Optional raw payload capture (fullRawRequest, fullRawResponse) gated by request options

Non-goals

This package does not:

  • Route between providers or implement fallback chains
  • Auto-install missing provider packages
  • Make direct HTTP calls (SDK-only)
  • Emulate async/batch job storage internally (native provider support only)

Install

npm i ai-io-normalizer

You'll also need to install the provider SDKs you want to use:

npm i @openai/openai @anthropic-ai/sdk @google/generative-ai

Usage

Creating an adapter

Use the createAdapter factory function with a provider SDK client:

import { createAdapter } from 'ai-io-normalizer';
import OpenAI from '@openai/openai';

// Create OpenAI adapter
const openaiClient = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const adapter = createAdapter('openai', { client: openaiClient });

// Create Anthropic adapter
import Anthropic from '@anthropic-ai/sdk';
const anthropicClient = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
const anthropicAdapter = createAdapter('anthropic', { client: anthropicClient });

Complete example: Sync request

import { createAdapter } from 'ai-io-normalizer';
import OpenAI from '@openai/openai';
import type { AdapterRequest } from 'ai-io-normalizer';

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const adapter = createAdapter('openai', { client });

const req: AdapterRequest = {
  kind: 'llm.request',
  mode: 'sync',
  target: {
    provider: 'openai',
    model: 'gpt-4o',
  },
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' },
  ],
  config: {
    maxOutputTokens: 100,
    temperature: 0.7,
  },
};

const res = await adapter.invoke(req);

if (res.ok && res.mode === 'sync') {
  console.log(res.output.text);
  console.log(res.usage);
  console.log(res.metadata);
} else if (!res.ok) {
  console.error(res.error);
}

Complete example: Streaming

const streamReq: AdapterRequest = {
  ...req,
  mode: 'stream',
};

const streamRes = await adapter.invoke(streamReq);

if (streamRes.ok && streamRes.mode === 'stream') {
  for await (const ev of streamRes.stream) {
    if (ev.type === 'content.delta' && ev.deltaText) {
      process.stdout.write(ev.deltaText);
    }
    if (ev.type === 'response.completed') {
      console.log('\n\nUsage:', ev.usage);
    }
    if (ev.type === 'error') {
      console.error('Stream error:', ev.error);
    }
  }
}

Complete example: With tools

const reqWithTools: AdapterRequest = {
  kind: 'llm.request',
  mode: 'sync',
  target: {
    provider: 'openai',
    model: 'gpt-4o',
  },
  messages: [
    { role: 'user', content: 'What is the weather in San Francisco?' },
  ],
  tools: [
    {
      name: 'getWeather',
      description: 'Get weather by city name',
      inputSchema: {
        type: 'object',
        properties: {
          city: { type: 'string' },
        },
        required: ['city'],
      },
    },
  ],
  toolChoice: 'auto',
};

const res = await adapter.invoke(reqWithTools);

if (res.ok && res.mode === 'sync') {
  const toolCalls = res.output.messages[0]?.toolCalls;
  if (toolCalls && toolCalls.length > 0) {
    // Execute tools and send results back
    for (const toolCall of toolCalls) {
      // executeTool is your own function that runs the named tool with its arguments
      const result = await executeTool(toolCall.name, toolCall.arguments);
      
      // Send tool result as a follow-up message
      const followUpReq: AdapterRequest = {
        ...reqWithTools,
        messages: [
          ...reqWithTools.messages,
          {
            role: 'tool',
            toolCallId: toolCall.id,
            content: JSON.stringify(result),
          },
        ],
      };
      
      const finalRes = await adapter.invoke(followUpReq);
      if (finalRes.ok && finalRes.mode === 'sync') {
        console.log(finalRes.output.text);
      }
    }
  } else {
    console.log(res.output.text);
  }
}

Using adapter identity

Each adapter exposes its identity:

const adapter = createAdapter('openai', { client });

console.log(adapter.identity);
// {
//   providerId: 'openai',
//   supportedApiVariants: ['openai.chat_completions', 'openai.responses'],
//   defaultApiVariant: 'openai.chat_completions',
//   capabilitiesSchemaVersion: '2025-12-29'
// }

Error handling

const res = await adapter.invoke(req);

if (!res.ok) {
  switch (res.error.code) {
    case 'PROVIDER_RATE_LIMIT':
      // Retry with backoff
      break;
    case 'VALIDATION_FAILED':
      // Fix request and retry
      break;
    case 'UNSUPPORTED':
      // Feature not supported by this provider/apiVariant
      break;
    case 'PROVIDER_REQUEST_FAILED':
      if (res.error.retriable) {
        // Retry
      }
      break;
  }
}

Core Concepts

Adapter = transformer + SDK executor

An adapter:

  1. receives a standard request
  2. chooses an SDK API variant (if multiple exist) using its provider capabilities map
  3. calls the provider SDK client
  4. returns a standard response

Adapters are provider-specific in implementation, but can be used by any consumer (router, tests, other packages).
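Step 2 can also be skipped by the caller: target.apiVariant pins the SDK API variant explicitly instead of letting the adapter pick one from its capabilities map. A minimal sketch, reusing the request from the sync example above and assuming 'openai.responses' is listed as a supported variant for the OpenAI adapter (see the adapter identity example):

// Pin the SDK API variant instead of letting the adapter choose one
const pinnedRes = await adapter.invoke({
  ...req,
  target: {
    provider: 'openai',
    model: 'gpt-4o',
    apiVariant: 'openai.responses',
  },
});

if (pinnedRes.ok && pinnedRes.mode === 'sync') {
  console.log(pinnedRes.output.text);
}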


Standard Input: AdapterRequest

AdapterRequest is the only accepted input shape.

Key properties:

  • mode: 'sync' | 'stream' | 'async' | 'batch'
  • target: { provider, model, apiVariant? }
  • messages: standardized message array
  • tools + toolChoice: optional tool calling
  • config: canonical generation config
  • async / batch: execution options (native provider support only)

Example:

import type { AdapterRequest } from 'ai-io-normalizer';

const req: AdapterRequest = {
  kind: 'llm.request',
  mode: 'sync',
  target: {
    provider: 'openai',
    model: 'gpt-5.2',
    // apiVariant optional (adapter chooses if omitted)
  },
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain transformers in 3 bullets.' },
  ],
  config: {
    maxOutputTokens: 250,
    reasoning: { effort: 'medium' },
  },
};

Standard Output: responses

Adapters return one of:

  • AdapterSyncResponse (final result)
  • AdapterStreamResponse (streaming events)
  • AdapterAsyncAcceptedResponse (native async job handle)
  • AdapterErrorResponse (standard error)

Sync example

const res = await adapter.invoke(req);

if (res.ok && res.mode === 'sync') {
  console.log(res.output.text);
  console.log(res.usage);
  console.log(res.metadata);
}

Streaming example

const streamRes = await adapter.invoke({ ...req, mode: 'stream' });

if (streamRes.ok && streamRes.mode === 'stream') {
  for await (const ev of streamRes.stream) {
    if (ev.type === 'content.delta' && ev.deltaText) process.stdout.write(ev.deltaText);
    if (ev.type === 'error') console.error(ev.error);
  }
}

Async jobs example (native support only)

const asyncRes = await adapter.invoke({
  ...req,
  mode: 'async',
  async: { preferAsync: true },
});

if (asyncRes.ok && asyncRes.mode === 'async') {
  // poll until complete
  const polled = await adapter.getJob?.(asyncRes.job.jobId);
  console.log(polled);
}

Raw payload capture: fullRawRequest / fullRawResponse

If enabled, the adapter may include the provider-native payloads as top-level fields:

  • fullRawRequest
  • fullRawResponse

They are not placed in metadata.

Enable with:

const req: AdapterRequest = {
  ...reqBase,
  config: {
    ...reqBase.config,
    providerOptions: {
      openai: { includeRaw: true },
    },
  },
};
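
When capture is enabled, the raw payloads can be read back off the response. A minimal sketch, assuming a sync request with includeRaw set as above:

const rawRes = await adapter.invoke(req);

if (rawRes.ok && rawRes.mode === 'sync') {
  // Provider-native payloads; only present when raw capture was requested
  console.log(rawRes.fullRawRequest);
  console.log(rawRes.fullRawResponse);
}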

Tool calling (unified)

Tool definition

const req: AdapterRequest = {
  // ...
  tools: [
    {
      name: 'getWeather',
      description: 'Get weather by city name',
      inputSchema: {
        type: 'object',
        properties: { city: { type: 'string' } },
        required: ['city'],
      },
    },
  ],
  toolChoice: 'auto',
};

Tool calls returned by the model

If the provider returns tool calls, they appear in:

  • output.messages[0].toolCalls
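
A short sketch, reusing the sync response pattern from the examples above:

if (res.ok && res.mode === 'sync') {
  const toolCalls = res.output.messages[0]?.toolCalls ?? [];
  for (const call of toolCalls) {
    console.log(call.id, call.name, call.arguments);
  }
}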

Tool result messages

Tool results must be sent back as role: 'tool' messages and MUST include toolCallId:

const toolResultMsg = {
  role: 'tool',
  toolCallId: 'call_123',
  name: 'getWeather',
  content: [{ type: 'json', value: { city: 'Tel Aviv', tempC: 26 } }],
};

Adapter dependencies (SDK-only)

Adapters are SDK-only: they require an initialized provider SDK client.

export type AdapterDeps = {
  client: unknown; // provider SDK client instance (required)
  logger?: {
    debug?: (...a: any[]) => void;
    info?: (...a: any[]) => void;
    warn?: (...a: any[]) => void;
    error?: (...a: any[]) => void;
  };
};

If client is missing, adapters must return PROVIDER_CONFIG_MISSING.
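
The optional logger lets an adapter emit its debug and warning output through your own logging setup. A minimal sketch:

import OpenAI from '@openai/openai';
import { createAdapter } from 'ai-io-normalizer';

const adapter = createAdapter('openai', {
  client: new OpenAI({ apiKey: process.env.OPENAI_API_KEY }),
  logger: {
    warn: (...args) => console.warn('[openai-adapter]', ...args),
    error: (...args) => console.error('[openai-adapter]', ...args),
  },
});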


Per-provider capabilities JSON (one per provider)

Each provider adapter package must include a capabilities JSON file:

  • capabilities/<providerId>.json

Examples:

  • capabilities/openai.json
  • capabilities/xai.json (Grok)
  • capabilities/groq.json (GroqCloud)

The adapter uses this to:

  • pick default apiVariant
  • validate feature support (tools, streaming, structured output, etc.)
  • normalize config and emit warnings

This package defines the expected capabilities schema.
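
From the consumer's side, capability gaps surface through the standard error taxonomy rather than provider-specific failures. A sketch of the hypothetical case where the chosen provider/apiVariant has no tool-calling support in its capabilities JSON:

const res = await adapter.invoke(reqWithTools);

if (!res.ok && res.error.code === 'UNSUPPORTED') {
  // Tool calling is not allowed by this provider/apiVariant's capabilities file
}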


MCP note

MCP (Model Context Protocol) is not part of adapters.

Adapters only support tool calling via standard tools and toolCalls. If you use MCP, it should live in a tool runtime that:

  • discovers MCP tools
  • converts them into tools[]
  • executes tool calls via MCP
  • returns tool results as role='tool' messages

Adapters remain unchanged.
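
A compact sketch of that separation, with hypothetical helpers (discoverMcpTools and callMcpTool stand in for your MCP client; they are not part of this package):

// Hypothetical tool runtime: MCP stays outside the adapter.
const mcpTools = await discoverMcpTools();

const req: AdapterRequest = {
  kind: 'llm.request',
  mode: 'sync',
  target: { provider: 'openai', model: 'gpt-4o' },
  messages: [{ role: 'user', content: 'What is the weather in San Francisco?' }],
  // MCP tool descriptors converted into the standard tools[] shape
  tools: mcpTools.map((t) => ({
    name: t.name,
    description: t.description,
    inputSchema: t.inputSchema,
  })),
  toolChoice: 'auto',
};

const res = await adapter.invoke(req);

if (res.ok && res.mode === 'sync') {
  for (const call of res.output.messages[0]?.toolCalls ?? []) {
    // Execute through MCP, then send the result back as a role: 'tool' message
    // with toolCallId: call.id (see "Tool result messages" above).
    const result = await callMcpTool(call.name, call.arguments);
  }
}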


Package exports

This package exports:

  • Types: All standard types (AdapterRequest, responses, events, errors, etc.)
  • Factory: createAdapter(providerId, deps) - Create adapter instances
  • Adapters: Individual adapter classes (OpenAIAdapter, AnthropicAdapter, etc.)
  • Base class: BaseProviderAdapter - For creating custom adapters
  • Validation: Validation utilities (validateRequest, etc.)
  • Capabilities: loadCapabilities() and ProviderCapabilities type
  • Interface: LLMProviderAdapter interface

Example imports

import {
  // Types
  type AdapterRequest,
  type AdapterSyncResponse,
  type AdapterStreamResponse,
  type LLMProviderAdapter,
  type AdapterDeps,
  
  // Factory
  createAdapter,
  
  // Individual adapters (optional, for advanced usage)
  OpenAIAdapter,
  AnthropicAdapter,
  
  // Utilities
  validateRequest,
  loadCapabilities,
} from 'ai-io-normalizer';

License

ISC