ai-requests-adapter (v4.0)
A complete AI API adapter ecosystem that provides bidirectional transformation between unified formats and provider-specific APIs:
- Request Adaptation: unified input → transformToVendorRequest() → provider API calls
- Response Normalization: provider responses → normalizeProviderResponse() → unified output
- Model Fixing: automatic alias resolution and normalization for model names
Supports OpenAI, Anthropic, Google, Groq, xAI, and Moonshot with metadata-driven configuration to eliminate "model config broke again" issues.
Why v4.0
v4.0 completes the bidirectional AI API ecosystem:
- Request Adaptation (v3.0): Transform unified requests to provider-specific API calls
- Response Normalization (v4.0): Transform provider responses back to unified format
- Model Fixing (v4.0): Automatic model alias resolution and normalization
- Metadata-Driven: All behavior defined in JSON registries, not code
- Complete Symmetry: Request → Response round-trip compatibility
Unified Input Form (what you pass in)
The unified input is intentionally OpenAI Chat Completions–like:
type UnifiedChatCompletionsLikeRequest = {
model: string;
messages: Array<{ role: "system"|"user"|"assistant"|"tool"; content: string }>;
max_tokens?: number;
temperature?: number;
top_p?: number;
frequency_penalty?: number;
presence_penalty?: number;
stop?: string[] | string;
// optional reasoning knob (used when target supports it)
reasoning_effort?: "none"|"minimal"|"low"|"medium"|"high";
// vendor-specific passthrough namespace
extra?: Record<string, any>;
};
Unified Output Form (responses)
The library also normalizes provider responses back to a unified chat-completions-style format:
type UnifiedChatCompletionsLikeResponse = {
schema: "chat_completions_like_v1";
id?: string;
created?: number;
model?: string;
choices: Array<{
index: number;
message: {
role: "assistant";
content: string;
tool_calls?: any[];
};
finish_reason?: string;
}>;
usage?: {
prompt_tokens?: number;
completion_tokens?: number;
total_tokens?: number;
reasoning_tokens?: number;
cached_tokens?: number;
};
meta: {
provider: string;
apiVariant: string;
mode: "standard" | "stream" | "batch";
raw?: any;
};
warnings?: string[];
};
Installation
npm install ai-requests-adapter
Testing
The package includes a comprehensive test suite covering:
- Request Transformation - Message transformation, parameter filtering, streaming support
- Response Normalization - Provider response parsing, unified output formatting
- Model Fixing - Alias resolution, normalization, suffix stripping
- Multi-provider compatibility - OpenAI, Anthropic, Google, Groq, xAI, Moonshot
- Capabilities resolution - Model fallback rules, API variant selection
- Error handling - Unknown providers/models, invalid inputs
- Integration tests - End-to-end request/response transformations
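For a flavor of these tests, here is a hypothetical sketch using Node's built-in test runner (the actual suite's runner and layout may differ):
// Hypothetical sketch; the real test suite may be structured differently.
import { test } from 'node:test';
import assert from 'node:assert/strict';
import { transformToVendorRequest } from 'ai-requests-adapter';

test('maps max_tokens to max_output_tokens for the OpenAI Responses API', () => {
  const result = transformToVendorRequest(
    {
      model: 'gpt-5.2',
      messages: [{ role: 'user', content: 'Hi' }],
      max_tokens: 50,
      reasoning_effort: 'medium'
    },
    { provider: 'openai' }
  );
  // Mirrors the Quick Start example: gpt-5.2 resolves to the Responses API variant
  assert.equal(result.apiVariant, 'responses');
  assert.equal(result.request.max_output_tokens, 50);
});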
Run tests with:
npm test
Run tests with coverage:
npm run test:coverage
Quick Start
Request Transformation
import { transformToVendorRequest } from 'ai-requests-adapter';
// Your unified request (chat-completions style)
const request = {
model: 'gpt-5.2',
messages: [
{ role: 'system', content: 'You are helpful.' },
{ role: 'user', content: 'Hello!' }
],
max_tokens: 200,
reasoning_effort: 'medium'
};
// Transform to vendor-specific payload
const { provider, apiVariant, request: vendorRequest, response, warnings } = transformToVendorRequest(
request,
{
provider: 'openai',
responseMode: 'standard' // or 'stream' for streaming responses
}
);
console.log(apiVariant); // "responses"
console.log(vendorRequest); // Ready-to-send OpenAI Responses API payload
console.log(response); // { mode: "standard", protocol: undefined }
console.log(warnings); // Any parameter filtering warnings
Model Fixing (Automatic Alias Resolution)
import { fixAndTransformToVendorRequest } from 'ai-requests-adapter';
// Model aliases are resolved automatically
const request = {
model: 'gpt5', // ← This gets fixed to 'gpt-5'
messages: [{ role: 'user', content: 'Hello!' }],
max_tokens: 100
};
const result = fixAndTransformToVendorRequest(request, { provider: 'openai' });
console.log(result.request.model); // 'gpt-5' (automatically fixed)
console.log(result.modelFix?.output); // 'gpt-5'
console.log(result.modelFix?.resolution); // 'alias'
Response Normalization
import { normalizeProviderResponse, loadResponseMaps, responseRegistry } from 'ai-requests-adapter';
// Raw provider response (example: OpenAI)
const rawResponse = {
id: "chatcmpl-123",
choices: [{
message: { role: "assistant", content: "Hello!" }
}],
usage: { prompt_tokens: 10, completion_tokens: 5, total_tokens: 15 }
};
// Normalize to unified format
const maps = loadResponseMaps(responseRegistry);
const normalized = normalizeProviderResponse(rawResponse, {
provider: 'openai',
apiVariant: 'chat_completions'
}, maps);
console.log(normalized.schema); // "chat_completions_like_v1"
console.log(normalized.choices[0].message.content); // "Hello!"
console.log(normalized.usage?.total_tokens); // 15
Streaming Support
The transformer handles streaming responses by:
- Adding streaming flags to the request payload (e.g., stream: true)
- Indicating the response mode ("standard" or "stream")
- Specifying the protocol ("sse", "jsonl", etc.) for proper parsing
Your gateway then routes to the appropriate response handler:
if (result.response.mode === 'stream') {
// Use streaming parser for result.response.protocol
switch (result.response.protocol) {
case 'sse':
return handleSSEStream(vendorResponse);
case 'jsonl':
return handleJSONLStream(vendorResponse);
default:
return handleStandardResponse(vendorResponse);
}
} else {
// Use standard JSON parser
return handleStandardResponse(vendorResponse);
}
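handleSSEStream and its siblings are your own handlers, not part of this package. A minimal SSE consumer sketch, assuming vendorResponse is a fetch() Response carrying an SSE body (all names here are hypothetical):
// Hypothetical SSE consumer, not part of this package.
async function handleSSEStream(vendorResponse: Response): Promise<string> {
  const reader = vendorResponse.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  let text = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop() ?? ''; // keep a partial trailing line for the next chunk
    for (const line of lines) {
      if (!line.startsWith('data: ')) continue; // SSE payloads arrive on data lines
      const data = line.slice(6).trim();
      if (data === '[DONE]') return text; // OpenAI-style stream terminator
      const chunk = JSON.parse(data);
      // Delta shape varies by provider; this assumes a chat-completions-style chunk
      text += chunk.choices?.[0]?.delta?.content ?? '';
    }
  }
  return text;
}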
Complete Workflow (Request → Response)
import {
fixAndTransformToVendorRequest,
normalizeProviderResponse,
loadResponseMaps,
responseRegistry
} from 'ai-requests-adapter';
// 1. Transform request (with automatic model fixing)
const requestResult = fixAndTransformToVendorRequest({
model: 'gpt5', // ← Automatically resolved to 'gpt-5'
messages: [{ role: 'user', content: 'Hello!' }],
max_tokens: 100
}, { provider: 'openai' });
console.log(requestResult.modelFix?.resolution); // 'alias'
// 2. Send to vendor API (simulated)
const vendorResponse = await callVendorAPI(requestResult.provider, requestResult.request);
// 3. Normalize response back to unified format
const maps = loadResponseMaps(responseRegistry);
const normalizedResponse = normalizeProviderResponse(vendorResponse, {
provider: requestResult.provider,
apiVariant: requestResult.apiVariant
}, maps);
console.log(normalizedResponse.choices[0].message.content); // Unified format
Streaming Example
// Request streaming response
const { provider, apiVariant, request: streamingRequest, response } = transformToVendorRequest(
{
model: 'gpt-4o',
messages: [{ role: 'user', content: 'Tell me a story' }],
max_tokens: 1000,
temperature: 0.7
},
{
provider: 'openai',
responseMode: 'stream' // Enable streaming
}
);
console.log(response); // { mode: "stream", protocol: "sse" }
console.log(streamingRequest.stream); // true (added by transformer)
// Now use the appropriate streaming parser for the SSE protocol
API Reference
Core Functions
Request Transformation
transformToVendorRequest(unifiedRequest, target, registry?, options?)
Transform unified requests to provider-specific API payloads.
import { transformToVendorRequest } from 'ai-requests-adapter';
const result = transformToVendorRequest(
unifiedRequest, // UnifiedChatCompletionsLikeRequest
target, // Target (provider + optional apiVariant)
registry?, // Optional custom registry
options? // TransformOptions with model fixing
);
Returns: BuiltRequest
{
provider: ProviderKey;
apiVariant: string;
request: Record<string, any>;
response: { mode: "standard" | "stream"; protocol?: "sse" | "jsonl" };
modelFix?: FixModelResult; // If model fixing was applied
warnings?: string[];
}
fixAndTransformToVendorRequest(unifiedRequest, target, aliases?, normalization?, fixOptions?, registry?)
Convenience function that combines model fixing with request transformation.
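When the alias/normalization arguments are omitted, the bundled registries are used, as in the Quick Start:
import { fixAndTransformToVendorRequest } from 'ai-requests-adapter';

const result = fixAndTransformToVendorRequest(
  { model: 'gpt5', messages: [{ role: 'user', content: 'Hi' }], max_tokens: 100 },
  { provider: 'openai' }
);
// result.modelFix?.output === 'gpt-5' (resolved via alias, as documented above)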
Model Fixing
fixModelName(provider, rawModel, aliasesJson, normalizationJson, options?)
Resolve model aliases and normalize model names.
import { fixModelName, modelAliases, modelNormalization } from 'ai-requests-adapter';
const result = fixModelName('openai', 'gpt5', modelAliases, modelNormalization);
// result.output = 'gpt-5'
// result.resolution = 'alias'
Returns: FixModelResult
{
input: string;
normalizedInput: string;
output: string; // canonical model name
changed: boolean;
resolution: "exact" | "alias" | "normalized" | "suffix_stripped" | "unresolved";
warning?: string;
}
Response Normalization
normalizeProviderResponse(rawResponse, context, maps)
Transform provider responses to unified chat-completions format.
import { normalizeProviderResponse, loadResponseMaps, responseRegistry } from 'ai-requests-adapter';
const maps = loadResponseMaps(responseRegistry);
const result = normalizeProviderResponse(rawResponse, {
provider: 'openai',
apiVariant: 'chat_completions'
}, maps);
Returns: UnifiedChatCompletionsLikeResponse
{
schema: "chat_completions_like_v1";
id?: string;
created?: number;
model?: string;
choices: Array<{
index: number;
message: NormalizedChatMessage;
finish_reason?: string;
}>;
usage?: UnifiedUsage;
meta: {
provider: string;
apiVariant: string;
mode: "standard" | "stream" | "batch";
raw?: any;
};
warnings?: string[];
}
Target Specification
type Target = {
provider: ProviderKey;
apiVariant?: string; // If omitted, uses model default from registry
};
type ProviderKey = 'openai' | 'anthropic' | 'google' | 'groq' | 'xai' | 'moonshot_kimi';
Provider Examples
OpenAI (Complete Round-trip)
// 1. Transform request with model fixing
const requestResult = fixAndTransformToVendorRequest(
{
model: 'gpt-5.2',
messages: [
{ role: 'system', content: 'You are helpful.' },
{ role: 'user', content: 'Hello' }
],
max_tokens: 200,
reasoning_effort: 'medium'
},
{ provider: 'openai' }
);
// Request result: apiVariant = "responses"
// requestResult.request = {
// model: 'gpt-5.2',
// instructions: 'You are helpful.',
// input: [{ role: 'user', content: 'Hello' }],
// max_output_tokens: 200,
// reasoning: { effort: 'medium' }
// }
// 2. Normalize response (simulated OpenAI response)
const simulatedResponse = {
id: "resp-123",
model: "gpt-5.2",
output_text: "Hello! How can I help you today?",
finish_reason: "completed",
usage: {
input_tokens: 15,
output_tokens: 8,
total_tokens: 23,
output_tokens_details: { reasoning_tokens: 5 }
}
};
const normalized = normalizeProviderResponse(simulatedResponse, {
provider: 'openai',
apiVariant: 'responses'
}, loadResponseMaps(responseRegistry));
// Normalized result:
// {
// schema: "chat_completions_like_v1",
// id: "resp-123",
// model: "gpt-5.2",
// choices: [{
// index: 0,
// message: { role: "assistant", content: "Hello! How can I help you today?" },
// finish_reason: "completed"
// }],
// usage: {
// prompt_tokens: 15,
// completion_tokens: 8,
// total_tokens: 23,
// reasoning_tokens: 5
// }
// }
GPT-4-Turbo with Chat Completions API
const result = transformToVendorRequest(
  {
    model: 'gpt-4-turbo',
    messages: [
      { role: 'system', content: 'You are helpful.' },
      { role: 'user', content: 'Hello' }
    ],
    max_tokens: 100,
    temperature: 0.7,
    frequency_penalty: 0.1
  },
  { provider: 'openai' }
);
console.log(result.apiVariant); // "chat_completions"
console.log(result.request);
/*
{
  model: 'gpt-4-turbo',
  messages: [
    { role: 'system', content: 'You are helpful.' },
    { role: 'user', content: 'Hello' }
  ],
  max_tokens: 100,
  temperature: 0.7,
  frequency_penalty: 0.1
}
*/
Anthropic (Messages API)
const { request } = transformToVendorRequest(
{
model: 'claude-sonnet-4-5-20250929',
messages: [
{ role: 'system', content: 'You are helpful.' },
{ role: 'user', content: 'Hello' }
],
max_tokens: 200,
temperature: 0.7
},
{ provider: 'anthropic' }
);
// Result: apiVariant = "messages"
// request = {
// system: 'You are helpful.',
// messages: [{ role: 'user', content: 'Hello' }],
// max_tokens: 200,
// temperature: 0.7
// }
Google Gemini (Native API)
const { request } = transformToVendorRequest(
{
model: 'gemini-2.0-flash',
messages: [
{ role: 'system', content: 'You are helpful.' },
{ role: 'user', content: 'Hello' }
],
max_tokens: 200,
temperature: 0.7
},
{ provider: 'google' }
);
// Result: apiVariant = "gemini_generateContent"
// request = {
// systemInstruction: { role: 'system', parts: [{ text: 'You are helpful.' }] },
// contents: [{ role: 'user', parts: [{ text: 'Hello' }] }],
// generationConfig: { maxOutputTokens: 200, temperature: 0.7 }
// }
Groq (OpenAI Compatible)
const { request } = transformToVendorRequest(
{
model: 'llama-3.3-70b-versatile',
messages: [
{ role: 'user', content: 'Hello' }
],
max_tokens: 200,
temperature: 0.7
},
{ provider: 'groq' }
);
// Result: apiVariant = "openai_chat_completions"
// request = {
// model: 'llama-3.3-70b-versatile',
// messages: [{ role: 'user', content: 'Hello' }],
// max_tokens: 200,
// temperature: 0.7
// }
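xAI / Moonshot (OpenAI Compatible)
xAI and Moonshot use the same OpenAI-compatible chat-completions shape as Groq. A quick sketch (the model name here is illustrative):
const { request, apiVariant } = transformToVendorRequest(
  {
    model: 'grok-3',
    messages: [{ role: 'user', content: 'Hello' }],
    max_tokens: 200
  },
  { provider: 'xai' }
);
// Result: apiVariant = "openai_chat_completions"
// request mirrors the Groq payload above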
Metadata Registries
All provider behavior is defined in JSON registries, not hardcoded in TypeScript:
Request Registry (.metadata/llm-request-config-registry.json)
Handles request transformation:
- API variant selection (Responses vs Chat Completions, etc.)
- Parameter mapping (max_tokens → max_output_tokens)
- Capability filtering (unsupported params are dropped)
- Conditional rules (temperature only when reasoning_effort='none')
- Fallback resolution (exact → pattern → provider defaults)
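For a sense of what these rules look like, here is a purely hypothetical entry; the actual schema ships in .metadata/llm-request-config-registry.json and may differ:
// Hypothetical shape for illustration only, not the package's actual schema
const exampleEntry = {
  provider: 'openai',
  apiVariant: 'responses',
  params: {
    rename: { max_tokens: 'max_output_tokens' },      // parameter mapping
    drop: ['frequency_penalty', 'presence_penalty'],  // capability filtering
    conditional: [
      // e.g. only forward temperature when reasoning is disabled
      { param: 'temperature', when: { reasoning_effort: 'none' } }
    ]
  }
};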
Response Registry (.metadata/llm-response-config-registry.json)
Handles response normalization:
- Field extraction using JSONPath expressions
- Text joining for multi-part content
- Usage mapping from provider-specific to unified format
- Metadata preservation and error handling
Model Aliases & Normalization
Automatic model name resolution:
- Alias mapping (gpt5 → gpt-5)
- Normalization rules (case, whitespace, underscores)
- Suffix stripping for date-based model variants
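A quick sketch of these behaviors via fixModelName. The first call is the documented alias case; the second input is illustrative, and its exact output depends on the shipped normalization rules:
import { fixModelName, modelAliases, modelNormalization } from 'ai-requests-adapter';

// Documented alias case: 'gpt5' resolves to 'gpt-5' (resolution: 'alias')
fixModelName('openai', 'gpt5', modelAliases, modelNormalization);

// Illustrative normalization case (case/whitespace/underscores);
// the resolved output depends on the shipped rules
fixModelName('openai', ' GPT_5 ', modelAliases, modelNormalization);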
Usage
import {
capabilities, // Request registry
modelAliases, // Model aliases
modelNormalization, // Normalization rules
responseRegistry // Response registry
} from 'ai-requests-adapter';
// All registries are pre-loaded and ready to use
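These exports can also be passed explicitly. For example, supplying the bundled request registry by hand, which (per the transformToVendorRequest signature above) should be equivalent to omitting it:
import { transformToVendorRequest, capabilities } from 'ai-requests-adapter';

const result = transformToVendorRequest(
  { model: 'gpt-4-turbo', messages: [{ role: 'user', content: 'Hi' }] },
  { provider: 'openai' },
  capabilities // explicit registry; omitting it uses the same bundled default
);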
Gateway Integration
A complete bidirectional AI API gateway with unified request/response formats:
import {
fixAndTransformToVendorRequest,
normalizeProviderResponse,
loadResponseMaps,
responseRegistry
} from 'ai-requests-adapter';
const responseMaps = loadResponseMaps(responseRegistry);
app.post('/api/chat', async (req, res) => {
try {
// 1. Transform unified request to vendor API (with model fixing)
const requestResult = fixAndTransformToVendorRequest(
req.body, // UnifiedChatCompletionsLikeRequest
{ provider: req.body.provider || 'openai' }
);
// 2. Route to vendor SDK
let rawResponse;
switch (requestResult.provider) {
case 'openai':
rawResponse = requestResult.apiVariant === 'responses'
? await openai.responses.create(requestResult.request)
: await openai.chat.completions.create(requestResult.request);
break;
case 'anthropic':
rawResponse = await anthropic.messages.create(requestResult.request);
break;
case 'google':
rawResponse = await google.generativeAI.generateContent(requestResult.request);
break;
// ... other providers
}
// 3. Normalize vendor response to unified format
const normalizedResponse = normalizeProviderResponse(rawResponse, {
provider: requestResult.provider,
apiVariant: requestResult.apiVariant
}, responseMaps);
// 4. Return unified response
res.json({
...normalizedResponse,
modelFix: requestResult.modelFix, // Include model fixing info if any
requestWarnings: requestResult.warnings // Include any request transformation warnings
});
} catch (error) {
res.status(500).json({ error: error.message });
}
});
This gateway provides:
- Automatic model alias resolution (gpt5 → gpt-5)
- Unified request format → provider-specific API calls
- Provider responses → Unified response format
- Complete metadata tracking (provider, API variant, warnings, etc.)
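From the client's side, a call to the /api/chat route above might look like this (the request body follows the unified input form):
// Hypothetical client call to the gateway sketched above
const res = await fetch('/api/chat', {
  method: 'POST',
  headers: { 'content-type': 'application/json' },
  body: JSON.stringify({
    provider: 'anthropic',
    model: 'claude-sonnet-4-5-20250929',
    messages: [{ role: 'user', content: 'Hello' }],
    max_tokens: 200
  })
});
const unified = await res.json(); // UnifiedChatCompletionsLikeResponse plus modelFix/requestWarnings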
Provider Support Matrix
| Provider | API Variants | Message Format | Token Param | Streaming | Request ✅ | Response ✅ |
|----------|-------------|----------------|-------------|----------|------------|-------------|
| OpenAI | responses, chat_completions | Standard + developer role merging | max_output_tokens, max_tokens | ✅ (Responses only) | ✅ | ✅ |
| Anthropic | messages | System as top-level field | max_tokens | ✅ | ✅ | ✅ |
| Google | generateContent, streamGenerateContent | Native contents format | generationConfig.maxOutputTokens | ✅ | ✅ | ✅ |
| Groq | openai_chat_completions | Standard | max_tokens | ✅ | ✅ | ✅ |
| xAI | openai_chat_completions | Standard | max_tokens | ✅ | ✅ | ✅ |
| Moonshot | openai_chat_completions | Standard | max_tokens | ✅ | ✅ | ✅ |
Legend:
- Request ✅: Unified input → Provider API transformation
- Response ✅: Provider API response → Unified output normalization
Contributing
To add a new provider:
- Create a provider config JSON entry in the registry
- Add message transformation logic in transformUnifiedRequest()
- Update API variants with token/stop parameter mappings
License
MIT
