ai-io-normalizer
v6.0.4
Standard I/O contract + adapter interface for LLM provider SDKs.
This package defines the single standard request and single standard response shape that all provider adapters must implement, so the rest of the system can speak one unified contract across providers (OpenAI, Anthropic, Gemini, xAI/Grok, Groq, Kimi, etc.) and across execution modes (sync, stream, async jobs, batch).
This package is SDK-only: adapters call provider SDK clients directly.
It does not perform routing, fallback, HTTP transport, or package installation.
What this package provides
- ✅ Standard request type: `AdapterRequest`
- ✅ Standard response types: `AdapterSyncResponse`, `AdapterStreamResponse` (streaming via `AsyncIterable<StreamEvent>`), `AdapterAsyncAcceptedResponse` (native async jobs), `AdapterErrorResponse`
- ✅ Adapter interface: `LLMProviderAdapter`
- ✅ Unified tool calling model (tool definitions + tool calls + tool result messages)
- ✅ Per-provider capabilities JSON schema (one JSON file per provider adapter package)
- ✅ Consistent error taxonomy and usage reporting
- ✅ Optional raw payload capture (`fullRawRequest`, `fullRawResponse`) gated by request options
Non-goals
This package does not:
- Route between providers or implement fallback chains
- Auto-install missing provider packages
- Make direct HTTP calls (SDK-only)
- Emulate async/batch job storage internally (native provider support only)
Install
```sh
npm i ai-io-normalizer
```

You'll also need to install the provider SDKs you want to use:

```sh
npm i @openai/openai @anthropic-ai/sdk @google/generative-ai
```

Usage
Creating an adapter
Use the `createAdapter` factory function with a provider SDK client:

```typescript
import { createAdapter } from 'ai-io-normalizer';
import OpenAI from '@openai/openai';
import Anthropic from '@anthropic-ai/sdk';

// Create an OpenAI adapter
const openaiClient = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const adapter = createAdapter('openai', { client: openaiClient });

// Create an Anthropic adapter
const anthropicClient = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
const anthropicAdapter = createAdapter('anthropic', { client: anthropicClient });
```

Complete example: Sync request
```typescript
import { createAdapter } from 'ai-io-normalizer';
import OpenAI from '@openai/openai';
import type { AdapterRequest } from 'ai-io-normalizer';

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const adapter = createAdapter('openai', { client });

const req: AdapterRequest = {
  kind: 'llm.request',
  mode: 'sync',
  target: {
    provider: 'openai',
    model: 'gpt-4o',
  },
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' },
  ],
  config: {
    maxOutputTokens: 100,
    temperature: 0.7,
  },
};

const res = await adapter.invoke(req);

if (res.ok && res.mode === 'sync') {
  console.log(res.output.text);
  console.log(res.usage);
  console.log(res.metadata);
} else if (!res.ok) {
  console.error(res.error);
}
```

Complete example: Streaming
```typescript
const streamReq: AdapterRequest = {
  ...req,
  mode: 'stream',
};

const streamRes = await adapter.invoke(streamReq);

if (streamRes.ok && streamRes.mode === 'stream') {
  for await (const ev of streamRes.stream) {
    if (ev.type === 'content.delta' && ev.deltaText) {
      process.stdout.write(ev.deltaText);
    }
    if (ev.type === 'response.completed') {
      console.log('\n\nUsage:', ev.usage);
    }
    if (ev.type === 'error') {
      console.error('Stream error:', ev.error);
    }
  }
}
```

Complete example: With tools
```typescript
const reqWithTools: AdapterRequest = {
  kind: 'llm.request',
  mode: 'sync',
  target: {
    provider: 'openai',
    model: 'gpt-4o',
  },
  messages: [
    { role: 'user', content: 'What is the weather in San Francisco?' },
  ],
  tools: [
    {
      name: 'getWeather',
      description: 'Get weather by city name',
      inputSchema: {
        type: 'object',
        properties: {
          city: { type: 'string' },
        },
        required: ['city'],
      },
    },
  ],
  toolChoice: 'auto',
};

const res = await adapter.invoke(reqWithTools);

if (res.ok && res.mode === 'sync') {
  const assistantMsg = res.output.messages[0];
  const toolCalls = assistantMsg?.toolCalls;
  if (toolCalls && toolCalls.length > 0) {
    // Execute tools and send results back (executeTool is your own implementation)
    for (const toolCall of toolCalls) {
      const result = await executeTool(toolCall.name, toolCall.arguments);
      // Send the assistant tool-call message plus the tool result as follow-up messages,
      // so the provider can match the result to its tool call
      const followUpReq: AdapterRequest = {
        ...reqWithTools,
        messages: [
          ...reqWithTools.messages,
          assistantMsg,
          {
            role: 'tool',
            toolCallId: toolCall.id,
            content: JSON.stringify(result),
          },
        ],
      };
      const finalRes = await adapter.invoke(followUpReq);
      if (finalRes.ok && finalRes.mode === 'sync') {
        console.log(finalRes.output.text);
      }
    }
  } else {
    console.log(res.output.text);
  }
}
```

Using adapter identity
Each adapter exposes its identity:

```typescript
const adapter = createAdapter('openai', { client });

console.log(adapter.identity);
// {
//   providerId: 'openai',
//   supportedApiVariants: ['openai.chat_completions', 'openai.responses'],
//   defaultApiVariant: 'openai.chat_completions',
//   capabilitiesSchemaVersion: '2025-12-29'
// }
```

Error handling
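Because `res.error.code` and `res.error.retriable` are consistent across providers, retry policy can be centralized instead of handled per call site. The wrapper below is a sketch, not part of the package; it assumes only the `ok` and `error` fields used in this README:

```typescript
// Hypothetical retry helper (not part of ai-io-normalizer).
// Assumes only the response fields shown in this README.
type InvokeResult = { ok: boolean; error?: { code: string; retriable?: boolean } };

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function invokeWithRetry<T extends InvokeResult>(
  invoke: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let res = await invoke();
  for (let attempt = 1; attempt < maxAttempts && !res.ok; attempt++) {
    const err = res.error;
    const shouldRetry =
      err?.code === 'PROVIDER_RATE_LIMIT' ||
      (err?.code === 'PROVIDER_REQUEST_FAILED' && err.retriable === true);
    if (!shouldRetry) break;
    await sleep(baseDelayMs * 2 ** (attempt - 1)); // exponential backoff
    res = await invoke();
  }
  return res;
}
```

Usage would then be `const res = await invokeWithRetry(() => adapter.invoke(req));`, with non-retriable codes falling through to the handling below.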
```typescript
const res = await adapter.invoke(req);

if (!res.ok) {
  switch (res.error.code) {
    case 'PROVIDER_RATE_LIMIT':
      // Retry with backoff
      break;
    case 'VALIDATION_FAILED':
      // Fix the request and retry
      break;
    case 'UNSUPPORTED':
      // Feature not supported by this provider/apiVariant
      break;
    case 'PROVIDER_REQUEST_FAILED':
      if (res.error.retriable) {
        // Retry
      }
      break;
  }
}
```

Core Concepts
Adapter = transformer + SDK executor
An adapter:
- receives a standard request
- chooses an SDK API variant (if multiple exist) using its provider capabilities map
- calls the provider SDK client
- returns a standard response
Adapters are provider-specific in implementation, but can be used by any consumer (router, tests, other packages).
Standard Input: AdapterRequest
`AdapterRequest` is the only accepted input shape.

Key properties:

- `mode`: `'sync' | 'stream' | 'async' | 'batch'`
- `target`: `{ provider, model, apiVariant? }`
- `messages`: standardized message array
- `tools` + `toolChoice`: optional tool calling
- `config`: canonical generation config
- `async` / `batch`: options (native support only)
```typescript
import type { AdapterRequest } from 'ai-io-normalizer';

const req: AdapterRequest = {
  kind: 'llm.request',
  mode: 'sync',
  target: {
    provider: 'openai',
    model: 'gpt-5.2',
    // apiVariant optional (the adapter chooses if omitted)
  },
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain transformers in 3 bullets.' },
  ],
  config: {
    maxOutputTokens: 250,
    reasoning: { effort: 'medium' },
  },
};
```

Standard Output: responses
Adapters return one of:
- `AdapterSyncResponse` (final result)
- `AdapterStreamResponse` (streaming events)
- `AdapterAsyncAcceptedResponse` (native async job handle)
- `AdapterErrorResponse` (standard error)
Sync example
```typescript
const res = await adapter.invoke(req);

if (res.ok && res.mode === 'sync') {
  console.log(res.output.text);
  console.log(res.usage);
  console.log(res.metadata);
}
```

Streaming example
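If you only need the final text, the stream can also be folded into a single result rather than printed incrementally. This helper is illustrative, not part of the package, and assumes only the event fields these examples use (`type`, `deltaText`, `usage`, `error`):

```typescript
// Illustrative collector; assumes only the stream event fields used in this README.
type MiniStreamEvent =
  | { type: 'content.delta'; deltaText?: string }
  | { type: 'response.completed'; usage?: unknown }
  | { type: 'error'; error: unknown };

async function collectStream(stream: AsyncIterable<MiniStreamEvent>) {
  let text = '';
  let usage: unknown;
  for await (const ev of stream) {
    if (ev.type === 'content.delta' && ev.deltaText) text += ev.deltaText;
    if (ev.type === 'response.completed') usage = ev.usage;
    if (ev.type === 'error') throw ev.error;
  }
  return { text, usage };
}
```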
```typescript
const streamRes = await adapter.invoke({ ...req, mode: 'stream' });

if (streamRes.ok && streamRes.mode === 'stream') {
  for await (const ev of streamRes.stream) {
    if (ev.type === 'content.delta' && ev.deltaText) process.stdout.write(ev.deltaText);
    if (ev.type === 'error') console.error(ev.error);
  }
}
```

Async jobs example (native support only)
```typescript
const asyncRes = await adapter.invoke({
  ...req,
  mode: 'async',
  async: { preferAsync: true },
});

if (asyncRes.ok && asyncRes.mode === 'async') {
  // Poll until complete
  const polled = await adapter.getJob?.(asyncRes.job.jobId);
  console.log(polled);
}
```

Raw payload capture: `fullRawRequest` / `fullRawResponse`
If enabled, the adapter may include the provider-native payloads as top-level fields:

- `fullRawRequest`
- `fullRawResponse`

They are not placed in `metadata`.
Enable with:
```typescript
const req: AdapterRequest = {
  ...reqBase,
  config: {
    ...reqBase.config,
    providerOptions: {
      openai: { includeRaw: true },
    },
  },
};
```

Tool calling (unified)
Tool definition
```typescript
const req: AdapterRequest = {
  // ...
  tools: [
    {
      name: 'getWeather',
      description: 'Get weather by city name',
      inputSchema: {
        type: 'object',
        properties: { city: { type: 'string' } },
        required: ['city'],
      },
    },
  ],
  toolChoice: 'auto',
};
```

Tool calls returned by the model
If the provider returns tool calls, they appear in `output.messages[0].toolCalls`.
Tool result messages
Tool results must be sent back as `role: 'tool'` messages and MUST include `toolCallId`:

```typescript
const toolResultMsg = {
  role: 'tool',
  toolCallId: 'call_123',
  name: 'getWeather',
  content: [{ type: 'json', value: { city: 'Tel Aviv', tempC: 26 } }],
};
```

Adapter dependencies (SDK-only)
Adapters are SDK-only: they require an initialized provider SDK client.
```typescript
export type AdapterDeps = {
  client: unknown; // provider SDK client instance (required)
  logger?: {
    debug?: (...a: any[]) => void;
    info?: (...a: any[]) => void;
    warn?: (...a: any[]) => void;
    error?: (...a: any[]) => void;
  };
};
```

If `client` is missing, adapters must return `PROVIDER_CONFIG_MISSING`.
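For instance, a deps object that forwards adapter logs to the console with a prefix. The `AdapterDeps` shape is copied from above; the `makeDeps` helper and the prefix convention are just one possible pattern, not part of the package:

```typescript
// AdapterDeps shape copied from this README; makeDeps itself is a hypothetical helper.
type AdapterDeps = {
  client: unknown;
  logger?: {
    debug?: (...a: any[]) => void;
    info?: (...a: any[]) => void;
    warn?: (...a: any[]) => void;
    error?: (...a: any[]) => void;
  };
};

function makeDeps(client: unknown, prefix = '[openai-adapter]'): AdapterDeps {
  return {
    client,
    logger: {
      debug: (...a) => console.debug(prefix, ...a),
      info: (...a) => console.info(prefix, ...a),
      warn: (...a) => console.warn(prefix, ...a),
      error: (...a) => console.error(prefix, ...a),
    },
  };
}
```

The resulting object can be passed straight to `createAdapter('openai', makeDeps(client))`.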
Per-provider capabilities JSON (one per provider)
Each provider adapter package must include a capabilities JSON file:
`capabilities/<providerId>.json`
Examples:

- `capabilities/openai.json`
- `capabilities/xai.json` (Grok)
- `capabilities/groq.json` (GroqCloud)

The adapter uses this file to:

- pick the default `apiVariant`
- validate feature support (tools, streaming, structured output, etc.)
- normalize config and emit warnings
This package defines the expected capabilities schema.
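As a rough illustration only, a capabilities file might look like the sketch below. The `providerId`, `supportedApiVariants`, and `defaultApiVariant` keys mirror the adapter identity shown earlier; the `features` block is a guess at shape, so consult the schema this package ships for the actual field names.

```json
{
  "providerId": "openai",
  "supportedApiVariants": ["openai.chat_completions", "openai.responses"],
  "defaultApiVariant": "openai.chat_completions",
  "features": {
    "tools": true,
    "streaming": true,
    "structuredOutput": true,
    "asyncJobs": false
  }
}
```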
MCP note
MCP (Model Context Protocol) is not part of adapters.
Adapters only support tool calling via standard `tools` and `toolCalls`.
If you use MCP, it should live in a tool runtime that:

- discovers MCP tools
- converts them into `tools[]`
- executes tool calls via MCP
- returns tool results as `role: 'tool'` messages
Adapters remain unchanged.
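The "converts them into `tools[]`" step above can be sketched as a pure mapping. Both shapes here are assumptions: the MCP descriptor fields (`name`, `description`, `inputSchema`) approximate what MCP clients expose, and the target shape mirrors the `tools[]` entries shown earlier in this README.

```typescript
// Hypothetical MCP tool descriptor; treat this shape as an assumption.
type McpTool = { name: string; description?: string; inputSchema: object };

// Target shape mirrors the tools[] entries shown earlier in this README.
type ToolDefinition = { name: string; description: string; inputSchema: object };

function mcpToolsToDefinitions(mcpTools: McpTool[]): ToolDefinition[] {
  return mcpTools.map((t) => ({
    name: t.name,
    description: t.description ?? '', // MCP descriptions are optional
    inputSchema: t.inputSchema,
  }));
}
```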
Package exports
This package exports:
- Types: all standard types (`AdapterRequest`, responses, events, errors, etc.)
- Factory: `createAdapter(providerId, deps)` - creates adapter instances
- Adapters: individual adapter classes (`OpenAIAdapter`, `AnthropicAdapter`, etc.)
- Base class: `BaseProviderAdapter` - for creating custom adapters
- Validation: validation utilities (`validateRequest`, etc.)
- Capabilities: `loadCapabilities()` and the `ProviderCapabilities` type
- Interface: the `LLMProviderAdapter` interface
Example imports
```typescript
import {
  // Types
  type AdapterRequest,
  type AdapterSyncResponse,
  type AdapterStreamResponse,
  type LLMProviderAdapter,
  type AdapterDeps,
  // Factory
  createAdapter,
  // Individual adapters (optional, for advanced usage)
  OpenAIAdapter,
  AnthropicAdapter,
  // Utilities
  validateRequest,
  loadCapabilities,
} from 'ai-io-normalizer';
```

License
ISC
