mega-translator
Unified API translation library for Anthropic Messages API ↔ OpenAI Chat Completions API ↔ OpenAI Responses API with smart auto-detection, validation, and token counting.
What's New in v3.0
- 🚀 Single translate() function - Auto-detects, translates, validates, fixes, and counts tokens in one call
- 🔢 Smart Token Counting - Works with any format (OpenAI Chat, Responses API, Anthropic)
- 🎯 Bulletproof Format Detection - Handles all edge cases including Responses API
Features
- 🔄 Bidirectional Translation - Convert requests/responses between Anthropic and OpenAI
- 🚀 Unified API - Single translate() function handles everything (NEW in v3.0)
- 📊 OpenAI Responses API - Auto-converts to Chat Completions format (NEW in v3.0)
- 🧠 Reasoning Model Support - Auto-detection and normalization for o1, o3, o4-mini, gpt-5
- 🔍 Auto-Format Detection - Automatically detect OpenAI, Responses API, or Anthropic format
- 🛠️ Auto-Fix Utilities - Validate and fix function names, descriptions, stop sequences
- 🛠️ Tool Calling - Full support for function/tool calling in both directions
- 📡 Streaming - Convert SSE streams between both APIs (100% OpenAI format match)
- 🖼️ Multimodal - Handle images (base64 & URL) and mixed content
- 🔢 Token Counting - Accurate token counting for all formats with base200k tokenizer
- ⚠️ Smart Warnings - Track feature losses and approximations
- 📝 TypeScript - Full type safety with IntelliSense
- ☁️ Azure OpenAI - Compatible with Azure OpenAI deployments
Installation
npm install mega-translator
Quick Start
v3.0 Unified API (Recommended)
The new translate() function handles everything in a single call:
import { translate } from 'mega-translator';
// Auto-detect format, translate, validate, fix, and count tokens
const result = translate(anyRequest, {
targetFormat: 'openai', // 'openai' | 'anthropic' | 'auto'
countTokens: true, // Enable token counting
autoFix: true, // Auto-fix invalid values
});
console.log(result.data); // Translated & fixed request
console.log(result.originalFormat); // 'openai', 'openai_responses', or 'anthropic'
console.log(result.targetFormat); // 'openai' or 'anthropic'
console.log(result.tokens?.input); // Token count (if countTokens: true)
console.log(result.warnings); // Any fixes applied
console.log(result.wasFixed); // Whether any fixes were applied
Using the mega Object
import { mega } from 'mega-translator';
// All-in-one translate
const result = mega.translate(request, { targetFormat: 'openai' });
// Shorthand methods
const openaiResult = mega.toOpenAI(anthropicRequest);
const anthropicResult = mega.toAnthropic(openaiRequest);
// Validate and fix without changing format
const fixed = mega.fix(request);
// Count tokens for any format
const tokens = mega.countTokens(anyRequest);
Handling OpenAI Responses API
The translator automatically handles OpenAI Responses API format:
import { translate } from 'mega-translator';
// Responses API format (uses 'input' instead of 'messages')
const responsesApiRequest = {
model: 'gpt-4o',
input: 'Hello, how are you?',
instructions: 'You are a helpful assistant.',
tools: [{ type: 'function', name: 'get_weather', parameters: {...} }]
};
// Automatically converted to Chat Completions format
const result = translate(responsesApiRequest, { targetFormat: 'openai' });
console.log(result.originalFormat); // 'openai_responses'
console.log(result.data.messages); // Converted to messages array
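The flat tool definition above is likewise rewritten into the nested Chat Completions shape, as described in the 3.0.0 changelog (a sketch of the expected output):
console.log(result.data.tools?.[0]);
// → { type: 'function', function: { name: 'get_weather', parameters: { ... } } }
Token Counting (Fixed in v3.0)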
Token counting now works with all formats:
import { translate, countTokensFor } from 'mega-translator';
// Method 1: With translate()
const result = translate(anyRequest, { countTokens: true });
console.log(result.tokens?.input); // Input tokens
console.log(result.tokens?.estimatedOutput); // Estimated output tokens
// Method 2: Direct count
const tokens = countTokensFor(anyRequest);
console.log(tokens.input, tokens.estimatedOutput);
// Works with all formats:
countTokensFor(openaiChatRequest); // ✅ OpenAI Chat Completions
countTokensFor(responsesApiRequest); // ✅ OpenAI Responses API
countTokensFor(anthropicRequest); // ✅ Anthropic Messages API
Legacy API (Still Supported)
The v2.x API is still available but deprecated. Use translate() instead.
Request Translation (Deprecated)
import { translateRequest } from 'mega-translator';
// OpenAI → Anthropic
const { data } = translateRequest.openaiToAnthropic({
model: 'gpt-4',
messages: [
{ role: 'system', content: 'You are a helpful assistant.' },
{ role: 'user', content: 'Hello!' }
]
});
// Anthropic → OpenAI
const { data } = translateRequest.anthropicToOpenai({
model: 'claude-sonnet-4-5',
max_tokens: 1024,
messages: [{ role: 'user', content: 'Hello!' }]
});
Auto-Fix Request (Deprecated)
import { autoFixRequest } from 'mega-translator';
// Use translate() instead:
// translate(request, { targetFormat: 'openai', autoFix: true })
const result = autoFixRequest({
model: 'gpt-4o',
messages: [{ role: 'user', content: 'Hello!' }],
tools: [{
type: 'function',
function: {
name: 'invalid name with spaces!', // Will be auto-fixed
description: 'A'.repeat(2000), // Will be truncated to 1024 chars
}
}],
stop: ['a', 'b', 'c', 'd', 'e', 'f'], // Will be truncated to 4
});
console.log(result.data); // Fixed request
console.log(result.originalFormat); // 'openai'
console.log(result.wasFixed); // true
console.log(result.warnings); // Array of applied fixes
Format Detection (NEW in 2.0)
import { detectRequestFormat, isOpenAIFormat, isAnthropicFormat } from 'mega-translator';
const detection = detectRequestFormat(request);
console.log(detection.format); // 'openai' | 'anthropic' | 'unknown'
console.log(detection.confidence); // 'high' | 'medium' | 'low'
console.log(detection.reasons); // Array of detection reasons
// Type guards
if (isOpenAIFormat(request)) {
// TypeScript knows request is OpenAIRequestParams
}
Reasoning Model Support (NEW in 2.0)
import { models, translateRequest } from 'mega-translator';
// Detect reasoning models
console.log(models.isReasoningModel('o1')); // true
console.log(models.isReasoningModel('o3-mini')); // true
console.log(models.isReasoningModel('gpt-5')); // true
console.log(models.isReasoningModel('gpt-4o')); // false
// Auto-normalize for reasoning models
const result = translateRequest.anthropicToOpenai({
model: 'o1',
messages: [
{ role: 'user', content: 'Hello!' }
],
system: 'You are helpful.', // Will become 'developer' role
max_tokens: 1000, // Will become max_completion_tokens
temperature: 0.7, // Will be stripped (not supported)
});
// Or use normalizeForReasoningModel directly
const { request, warnings } = models.normalizeForReasoningModel({
model: 'o3',
messages: [
{ role: 'system', content: 'You are helpful.' },
{ role: 'user', content: 'Hello!' }
],
temperature: 0.7,
top_p: 0.9,
});
// system → developer role
// temperature, top_p → stripped
Response Translation
import { translateResponse } from 'mega-translator';
// Anthropic → OpenAI (100% OpenAI format match)
const { data } = translateResponse.anthropicToOpenai(anthropicResponse);
// Returns exact OpenAI format with chatcmpl- prefixed IDs
// OpenAI → Anthropic
const { data } = translateResponse.openaiToAnthropic(openaiResponse);
Streaming Translation
import { AnthropicToOpenAIStreamConverter } from 'mega-translator';
const converter = new AnthropicToOpenAIStreamConverter();
for await (const line of anthropicStream) {
if (line.startsWith('data: ')) {
const event = JSON.parse(line.slice(6));
const chunk = converter.convert(event);
if (chunk) console.log(chunk);
}
}
Token Counting
import { tokenCounter } from 'mega-translator';
// Count tokens in text
const tokens = tokenCounter.countTokens("Hello, world!");
// Count request tokens
const { inputTokens, estimatedOutputTokens } =
tokenCounter.countOpenAIRequest({
model: 'gpt-4',
messages: [{ role: 'user', content: 'Hello!' }],
max_tokens: 100
});
// Count full conversation
const { inputTokens: convInput, outputTokens: convOutput, totalTokens } =
tokenCounter.countConversation(request, response);
// Calculate cost ($3 per 1M input, $15 per 1M output)
const cost = (convInput / 1_000_000) * 0.003 +
(convOutput / 1_000_000) * 0.015;
API Reference
v3.0 Unified API
translate(request, options?)
The single entry point for all mega-translator operations.
import { translate, TranslateOptions, TranslateResult } from 'mega-translator';
const result = translate(request, {
targetFormat: 'openai', // 'openai' | 'anthropic' | 'auto' (default: 'auto')
countTokens: true, // Enable token counting (default: false)
validate: true, // Validate the request (default: true)
autoFix: true, // Auto-fix invalid values (default: true)
normalizeForModel: true, // Normalize for reasoning models (default: true)
preserveOriginal: false, // Return a copy instead of mutating (default: false)
});
// Returns:
interface TranslateResult<T> {
data: T; // Translated/fixed request
originalFormat: DetectedFormat; // 'openai' | 'openai_responses' | 'anthropic'
targetFormat: 'openai' | 'anthropic'; // Output format
tokens?: { input: number; estimatedOutput: number }; // Token counts
warnings: TranslationWarning[]; // All warnings/fixes
wasFixed: boolean; // Whether any fixes were applied
}
translateToOpenAI(request, options?)
Shorthand for translate(request, { targetFormat: 'openai', ...options }).
import { translateToOpenAI } from 'mega-translator';
const result = translateToOpenAI(anthropicRequest, { countTokens: true });
translateToAnthropic(request, options?)
Shorthand for translate(request, { targetFormat: 'anthropic', ...options }).
import { translateToAnthropic } from 'mega-translator';
const result = translateToAnthropic(openaiRequest);
fix(request, options?)
Validate and fix a request without changing format.
import { fix } from 'mega-translator';
const result = fix(request); // Same format, but validated and fixed
countTokensFor(request)
Count tokens for any request format.
import { countTokensFor, TokenCount } from 'mega-translator';
const tokens: TokenCount = countTokensFor(anyRequest);
console.log(tokens.input, tokens.estimatedOutput);
mega Object
Convenience object with all v3.0 functions:
import { mega } from 'mega-translator';
// Request API
mega.translate(request, options); // Same as translate()
mega.toOpenAI(request, options); // Same as translateToOpenAI()
mega.toAnthropic(request, options); // Same as translateToAnthropic()
mega.fix(request, options); // Same as fix()
mega.countTokens(request); // Same as countTokensFor()
// Response API
mega.translateResponse(response, options); // Unified response translation
mega.responseToOpenAI(response); // Response to OpenAI format
mega.responseToAnthropic(response); // Response to Anthropic format
// Streaming API
mega.createStreamConverter({ targetFormat: 'openai' }); // Create converter
mega.convertStream(stream, { targetFormat: 'openai' }); // Convert stream
// Detection API
mega.detectResponse(response); // Detect response format
mega.isOpenAIResponse(response); // Check if OpenAI format
mega.isAnthropicResponse(response); // Check if Anthropic format
Response Translation API (NEW in v3.0)
translateResponse(response, options?)
Unified response translation with auto-detection:
import { translateResponse, TranslateResponseOptions, TranslateResponseResult } from 'mega-translator';
const result = translateResponse(anyResponse, {
targetFormat: 'openai', // 'openai' | 'anthropic' | 'auto' (default: 'auto')
thinkingHandling: 'strip', // 'strip' | 'include' | 'metadata' (default: 'strip')
includeWarnings: true, // Include warnings in result (default: true)
});
// Returns:
interface TranslateResponseResult<T> {
data: T; // Translated response
originalFormat: 'openai' | 'anthropic'; // Detected format
targetFormat: 'openai' | 'anthropic'; // Output format
warnings: TranslationWarning[]; // Warnings/approximations
wasTranslated: boolean; // Whether format changed
}
translateResponseToOpenAI(response, options?)
Shorthand for translateResponse(response, { targetFormat: 'openai', ...options }).
import { translateResponseToOpenAI } from 'mega-translator';
const result = translateResponseToOpenAI(anthropicResponse);
console.log(result.data); // OpenAI format response
translateResponseToAnthropic(response, options?)
Shorthand for translateResponse(response, { targetFormat: 'anthropic', ...options }).
import { translateResponseToAnthropic } from 'mega-translator';
const result = translateResponseToAnthropic(openaiResponse);
console.log(result.data); // Anthropic format response
Streaming API (NEW in v3.0)
createStreamConverter(options)
Create a unified stream converter that auto-detects the source format:
import { createStreamConverter, UnifiedStreamConverter } from 'mega-translator';
const converter = createStreamConverter({
targetFormat: 'openai', // 'openai' | 'anthropic'
thinkingHandling: 'strip', // 'strip' | 'include' (default: 'strip')
});
// Use in your streaming loop
for await (const event of sourceStream) {
const converted = converter.convert(event);
if (converted) {
yield converted; // Converted to target format
}
}
// Get detected source format after first event
console.log(converter.getSourceFormat()); // 'openai' or 'anthropic'
// Reset for reuse
converter.reset();
convertStream(stream, options)
Convert a full async stream to the target format:
import { convertStream } from 'mega-translator';
// Anthropic stream → OpenAI format
const openaiStream = convertStream(anthropicStream, { targetFormat: 'openai' });
for await (const chunk of openaiStream) {
console.log(chunk); // OpenAI format chunks
}
// OpenAI stream → Anthropic format
const anthropicEvents = convertStream(openaiStream, { targetFormat: 'anthropic' });
for await (const events of anthropicEvents) {
// May be a single event or an array of events
console.log(events);
}
Response Format Detection (NEW in v3.0)
detectResponseFormat(response)
Detect whether a response is in OpenAI or Anthropic format:
import { detectResponseFormat, DetectedResponseFormat } from 'mega-translator';
const result = detectResponseFormat(response);
console.log(result.format); // 'openai' | 'anthropic' | 'unknown'
console.log(result.confidence); // 'high' | 'medium' | 'low'
console.log(result.reasons); // Array of detection reasons
console.log(result.isStreaming); // false for non-streaming responses
detectStreamEventFormat(event)
Detect format of a streaming event:
import { detectStreamEventFormat } from 'mega-translator';
const result = detectStreamEventFormat(streamEvent);
console.log(result.format); // 'openai' | 'anthropic' | 'unknown'
console.log(result.isStreaming); // true
isOpenAIResponse(response) / isAnthropicResponse(response)
Type guards for response format:
import { isOpenAIResponse, isAnthropicResponse } from 'mega-translator';
if (isOpenAIResponse(response)) {
// response has OpenAI format (choices array, etc.)
}
if (isAnthropicResponse(response)) {
// response has Anthropic format (type: 'message', etc.)
}
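A common pattern is normalizing every incoming response to a single shape; a minimal sketch built from the guards above (normalizeResponse is an illustrative name):
import { isAnthropicResponse, translateResponseToOpenAI } from 'mega-translator';
// Normalize any response to the OpenAI Chat Completions shape
function normalizeResponse(response: unknown) {
  if (isAnthropicResponse(response)) {
    return translateResponseToOpenAI(response).data;
  }
  return response; // already OpenAI (or unknown) format
}
Legacy API (Deprecated)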
Note: The following APIs are deprecated and will be removed in v4.0. Use translate() instead.
Auto-Fix & Detection
autoFixRequest(request, options?)
Automatically detect format, translate if needed, validate, and fix a request.
import { autoFixRequest } from 'mega-translator';
const result = autoFixRequest(request, {
targetFormat: 'openai', // Target format (optional, keeps original if not specified)
normalizeForModel: true, // Apply reasoning model fixes (default: true)
validateTools: true, // Validate tool definitions (default: true)
validateStopSequences: true, // Validate stop sequences (default: true)
});
// Returns:
{
data: OpenAIRequestParams, // Fixed request
originalFormat: 'openai', // Detected original format
targetFormat: 'openai', // Output format
warnings: TranslationWarning[], // All fixes applied
wasFixed: boolean, // Whether any fixes were applied
}
detectRequestFormat(request)
Detect whether a request is in OpenAI or Anthropic format.
import { detectRequestFormat } from 'mega-translator';
const result = detectRequestFormat(request);
// Returns: { format: 'openai' | 'anthropic' | 'unknown', confidence: 'high' | 'medium' | 'low', reasons: string[] }
isOpenAIFormat(request) / isAnthropicFormat(request)
Type guards for format detection.
import { isOpenAIFormat, isAnthropicFormat } from 'mega-translator';
if (isOpenAIFormat(request)) {
// request is typed as OpenAIRequestParams
}
Validation Functions (NEW in 2.0)
validate.functionName(name)
Validate and fix function names (max 64 chars, alphanumeric + underscore/hyphen only).
import { validate } from 'mega-translator';
const result = validate.functionName('invalid name with spaces!');
// { valid: false, fixed: 'invalid_name_with_spaces', warning: '...' }
validate.functionDescription(description)
Validate and fix function descriptions (max 1024 chars).
const result = validate.functionDescription('A'.repeat(2000));
// { valid: false, fixed: 'AAA...', warning: 'Description truncated...' }
validate.stopSequences(stop)
Validate and fix stop sequences (max 4).
const result = validate.stopSequences(['a', 'b', 'c', 'd', 'e', 'f']);
// { valid: false, fixed: ['a', 'b', 'c', 'd'], warning: 'Stop sequences truncated...' }
validate.toolDefinition(tool)
Validate and fix a complete tool definition.
const { tool, warnings } = validate.toolDefinition({
type: 'function',
function: {
name: 'invalid name!',
description: 'Test',
}
});
Reasoning Model Utilities (NEW in 2.0)
models.isReasoningModel(model)
Check if a model is a reasoning model (o1, o3, o4-mini, gpt-5).
import { models } from 'mega-translator';
models.isReasoningModel('o1'); // true
models.isReasoningModel('o3-mini'); // true
models.isReasoningModel('gpt-5'); // true
models.isReasoningModel('gpt-4o'); // false
models.isO3O4Model(model)
Check if model is o3 or o4 series (stricter restrictions).
models.isGpt5Model(model)
Check if model is GPT-5 series.
models.normalizeForReasoningModel(request)
Normalize a request for reasoning model compatibility.
const { request, warnings } = models.normalizeForReasoningModel({
model: 'o1',
messages: [
{ role: 'system', content: 'You are helpful.' },
{ role: 'user', content: 'Hello!' }
],
temperature: 0.7,
max_tokens: 1000,
});
// Applies:
// - system → developer role
// - max_tokens → max_completion_tokens
// - Strips: temperature, top_p, presence_penalty, frequency_penalty, logprobs, logit_bias
// - n > 1 → n = 1
// - stop (for o3/o4) → stripped
models.getUnsupportedParameters(model)
Get list of parameters not supported by a reasoning model.
models.getUnsupportedParameters('o1');
// ['temperature', 'top_p', 'presence_penalty', 'frequency_penalty', 'logprobs', 'logit_bias']
models.getUnsupportedParameters('o3');
// [...above, 'stop']
Request Translation
translateRequest.openaiToAnthropic(request, options?)
Convert OpenAI Chat Completions request to Anthropic Messages format.
Parameters:
- request: OpenAIRequestParams - OpenAI request object
- options?: TranslationOptions - Optional configuration
Returns: TranslationResult<AnthropicRequestParams>
What it does:
- Extracts system messages → system parameter
- Handles developer role (converts back to system)
- Converts content to content blocks
- Translates tool_calls → tool_use blocks
- Maps tool role → tool_result content
- Validates message alternation
- Normalizes temperature (0-2 → 0-1; see the sketch below)
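A minimal sketch of the temperature normalization, assuming the 0-2 → 0-1 mapping is a linear halving (the README states only the range change, so the exact scaling is an assumption):
import { translateRequest } from 'mega-translator';
const { data } = translateRequest.openaiToAnthropic({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hi' }],
  temperature: 1.4, // OpenAI range 0-2
});
// Under linear scaling, data.temperature would be 0.7 (Anthropic range 0-1)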
translateRequest.anthropicToOpenai(request, options?)
Convert Anthropic Messages request to OpenAI Chat Completions format.
Parameters:
- request: AnthropicRequestParams - Anthropic request object
- options?: TranslationOptions - Optional configuration
Returns: TranslationResult<OpenAIRequestParams>
What it does:
- Moves system parameter → first message (or developer role for reasoning models)
- Converts content blocks → string/array
- Handles tool_use blocks → tool_calls
- Processes thinking blocks (strip/include)
- Auto-normalizes for reasoning models
Response Translation
translateResponse.anthropicToOpenai(response, options?)
Convert Anthropic Messages response to OpenAI Chat Completions format.
Returns: TranslationResult<OpenAIResponse>
100% OpenAI Format:
- Exact field names and structure
- chatcmpl- prefixed IDs
- system_fingerprint generation
- All required usage fields
translateResponse.openaiToAnthropic(response, options?)
Convert OpenAI Chat Completions response to Anthropic Messages format.
Returns: TranslationResult<AnthropicResponse>
Streaming
AnthropicToOpenAIStreamConverter
Convert Anthropic SSE events to OpenAI stream chunks.
const converter = new AnthropicToOpenAIStreamConverter();
const chunk = converter.convert(anthropicEvent);
OpenAIToAnthropicStreamConverter
Convert OpenAI stream chunks to Anthropic SSE events.
const converter = new OpenAIToAnthropicStreamConverter();
const events = converter.convert(openaiChunk);
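A fuller loop, mirroring the Anthropic→OpenAI example earlier (a sketch; openaiStream is a placeholder, and as with convertStream the result may be a single event or an array):
const converter = new OpenAIToAnthropicStreamConverter();
for await (const chunk of openaiStream) {
  const events = converter.convert(chunk);
  if (!events) continue; // nothing emitted for this chunk
  for (const event of Array.isArray(events) ? events : [events]) {
    console.log(event); // Anthropic SSE event
  }
}
Token Counting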
v3.0 Smart Token Counting
import { countTokensFor, countRequestTokens } from 'mega-translator';
// Smart counting - auto-detects format
const tokens = countTokensFor(anyRequest); // Works with any format
// Or use countRequestTokens directly
const { inputTokens, estimatedOutputTokens } = countRequestTokens(anyRequest);
tokenCounter Object (Deprecated)
import { tokenCounter } from 'mega-translator';
// String tokens
tokenCounter.countTokens("Hello, world!");
// Smart request counting (v3.0 - handles all formats)
tokenCounter.countRequest(anyRequest);
// Format-specific (deprecated - use countRequest instead)
tokenCounter.countOpenAIRequest(request);
tokenCounter.countResponsesApiRequest(responsesRequest); // NEW in v3.0
tokenCounter.countAnthropicRequest(request);
// Response counting
tokenCounter.countOpenAIResponse(response);
tokenCounter.countAnthropicResponse(response);
// Full conversation
tokenCounter.countConversation(request, response);
Returns:
{
inputTokens: number;
estimatedOutputTokens: number;
}
Translation Options
{
strictMode?: boolean; // Throw on unsupported features (default: false)
includeWarnings?: boolean; // Include warnings in result (default: true)
stripUnsupported?: boolean; // Remove unsupported parameters (default: false - try to include first)
defaultMaxTokens?: number; // Default max_tokens for Anthropic (default: 4096)
thinkingHandling?: 'strip' | 'include' | 'metadata';
normalizeForReasoningModels?: boolean; // Auto-fix for o1/o3/gpt-5 (default: true)
autoFixTools?: boolean; // Auto-fix tool definitions (default: true)
autoFixStopSequences?: boolean; // Auto-fix stop sequences (default: true)
}
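For example, a strict translation that throws on feature loss instead of emitting warnings (a sketch using the options above; openaiRequest is a placeholder):
import { translateRequest } from 'mega-translator';
const { data, warnings } = translateRequest.openaiToAnthropic(openaiRequest, {
  strictMode: true, // throw on unsupported features instead of warning
  defaultMaxTokens: 2048, // used when the OpenAI request omits max_tokens
  thinkingHandling: 'strip',
});
OpenAI 2025 API Parameters (NEW in 2.0)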
This library supports all latest OpenAI API parameters:
| Parameter | Type | Description |
|-----------|------|-------------|
| reasoning_effort | 'none'\|'minimal'\|'low'\|'medium'\|'high'\|'xhigh' | Reasoning intensity for o1/o3/gpt-5 |
| developer role | Message role | System instructions for reasoning models |
| max_completion_tokens | number | Max output tokens (replaces max_tokens for reasoning) |
| modalities | ('text'\|'audio')[] | Output modalities |
| audio | { voice, format } | Audio output configuration |
| prediction | { type, content } | Predicted output for faster responses |
| store | boolean | Store conversation for fine-tuning |
| metadata | Record<string,string> | Custom metadata |
| service_tier | 'auto'\|'default'\|'flex'\|'priority' | Service tier selection |
| stream_options | { include_usage } | Streaming options |
| web_search_options | { search_context_size, user_location } | Web search config |
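A sketch of a Chat Completions request exercising several of these parameters through the unified translate() entry point (values are illustrative; reasoning-model parameters are handled per the constraints listed later):
import { translate } from 'mega-translator';
const result = translate({
  model: 'o3',
  messages: [
    { role: 'developer', content: 'Answer briefly.' }, // developer role for reasoning models
    { role: 'user', content: 'Hello' },
  ],
  reasoning_effort: 'high',
  max_completion_tokens: 512,
  service_tier: 'auto',
  metadata: { trace_id: 'abc123' },
}, { targetFormat: 'openai' });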
Advanced Examples
Tool Calling with Auto-Fix
import { autoFixRequest } from 'mega-translator';
const result = autoFixRequest({
model: 'gpt-4o',
messages: [
{ role: 'user', content: 'What is the weather in SF?' }
],
tools: [{
type: 'function',
function: {
name: 'get weather data!', // Invalid: has space and !
description: 'A'.repeat(2000), // Too long
parameters: {
type: 'object',
properties: {
location: { type: 'string' }
}
}
}
}]
});
// result.data.tools[0].function.name === 'get_weather_data'
// result.data.tools[0].function.description.length === 1024
// result.warnings contains details of all fixes
Reasoning Model Auto-Normalization
import { translateRequest } from 'mega-translator';
const { data, warnings } = translateRequest.anthropicToOpenai({
model: 'o3-mini',
system: 'You are a helpful coding assistant.',
messages: [{ role: 'user', content: 'Write a function' }],
max_tokens: 4000,
temperature: 0.7, // Will be stripped
top_p: 0.9, // Will be stripped
stop: ['END'], // Will be stripped for o3
});
// data.messages[0].role === 'developer'
// data.max_completion_tokens === 4000
// data.temperature === undefined
// warnings explain each change
Cost Estimation
import { tokenCounter } from 'mega-translator';
const CLAUDE_SONNET_PRICING = {
input: 0.003, // $3 per 1M tokens
output: 0.015 // $15 per 1M tokens
};
const { inputTokens, outputTokens } =
tokenCounter.countConversation(request, response);
const cost =
(inputTokens / 1_000_000) * CLAUDE_SONNET_PRICING.input +
(outputTokens / 1_000_000) * CLAUDE_SONNET_PRICING.output;
console.log(`Cost: $${cost.toFixed(6)}`);
Feature Translation Matrix
| Feature | OpenAI → Anthropic | Anthropic → OpenAI |
|---------|-------------------|-------------------|
| Text messages | ✅ Full support | ✅ Full support |
| System messages | ✅ Extracted to system | ✅ First message |
| Developer role | ✅ Converted to system | ✅ Preserved for reasoning models |
| Images (base64) | ✅ Full support | ✅ Full support |
| Images (URL) | ✅ Full support (NEW) | ✅ Full support (NEW) |
| Tool definitions | ✅ Full support | ✅ Full support (auto-fixed) |
| Tool calls | ✅ → tool_use | ✅ → tool_calls |
| Tool results | ✅ → tool_result | ✅ → tool role |
| Streaming | ✅ SSE conversion | ✅ SSE conversion |
| Temperature | ✅ Normalized 0-2 → 0-1 | ✅ Direct passthrough |
| max_tokens | ⚠️ Optional → Required | ✅ Direct passthrough |
| max_completion_tokens | N/A | ✅ For reasoning models |
| reasoning_effort | ✅ → thinking.budget_tokens (NEW) | ✅ Passed through |
| Thinking blocks | N/A | ⚠️ Stripped by default → reasoning_tokens |
| response_format | ⚠️ Approximated via system prompt (NEW) | N/A |
| Extended thinking | ✅ reasoning_effort → thinking (NEW) | ✅ thinking → reasoning_effort (NEW) |
| Reasoning models | ✅ Auto-normalized | ✅ Auto-normalized |
| Advanced Tool Use | ✅ x_anthropic_* passthrough (NEW) | ✅ Preserved in metadata (NEW) |
| defer_loading | ✅ Pass via x_anthropic_defer_loading | ✅ Preserved |
| allowed_callers | ✅ Pass via x_anthropic_allowed_callers | ✅ Preserved |
| input_examples | ✅ Pass via x_anthropic_input_examples | ✅ Preserved |
| code_execution tool | ⚠️ No equivalent | ⚠️ Removed with warning |
| PTC caller field | N/A | ⚠️ Warning with info |
| PTC container | N/A | ⚠️ Warning with session info |
Legend:
- ✅ Full support
- ⚠️ Partial support or feature loss
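For the extended-thinking rows, a sketch of the O→A direction using the budget mapping documented in the 2.1.0 changelog below (the exact shape of the emitted thinking object is an assumption):
import { translateRequest } from 'mega-translator';
const { data } = translateRequest.openaiToAnthropic({
  model: 'o3',
  messages: [{ role: 'user', content: 'Plan a trip' }],
  reasoning_effort: 'medium', // medium → 8192 budget tokens per the changelog
});
// Expected: data.thinking.budget_tokens === 8192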
Reasoning Model Constraints
When using reasoning models (o1, o3, o4-mini, gpt-5), the following parameters are automatically handled:
| Parameter | Behavior |
|-----------|----------|
| system role | Converted to developer role |
| max_tokens | Converted to max_completion_tokens |
| temperature | Stripped (not supported) |
| top_p | Stripped (not supported) |
| presence_penalty | Stripped (not supported) |
| frequency_penalty | Stripped (not supported) |
| logprobs | Stripped (not supported) |
| logit_bias | Stripped (not supported) |
| n > 1 | Set to 1 (not supported) |
| stop (o3/o4 only) | Stripped (not supported) |
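These constraints surface through the unified API as well; a minimal sketch, assuming the defaults shown earlier (normalizeForModel: true) and the behavior in the table above (exact warning contents may differ):
import { translate } from 'mega-translator';
const result = translate({
  model: 'o1',
  messages: [
    { role: 'system', content: 'Be terse.' },
    { role: 'user', content: 'Hi' },
  ],
  temperature: 0.5,
  max_tokens: 256,
}, { targetFormat: 'openai' });
// Expected per the table above:
// result.data.messages[0].role === 'developer'
// result.data.max_completion_tokens === 256
// temperature stripped; result.warnings records each change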
Type Definitions
The package exports full TypeScript definitions:
import type {
// v3.0 Types
TranslateOptions, // NEW - Options for translate()
TranslateResult, // NEW - Result from translate()
TokenCount, // NEW - Token count result
// Core Types
OpenAI, // OpenAI types namespace
Anthropic, // Anthropic types namespace
TranslationOptions,
TranslationResult,
TranslationWarning,
AutoFixOptions,
AutoFixResult,
FormatDetectionResult,
DetectedFormat, // 'openai' | 'openai_responses' | 'anthropic' | 'unknown'
} from 'mega-translator';
Error Handling
import { TranslationError } from 'mega-translator';
try {
const result = translateRequest.openaiToAnthropic(request);
} catch (error) {
if (error instanceof TranslationError) {
console.error('Translation failed:', error.message);
console.error('Field:', error.field);
console.error('Value:', error.value);
}
}
Contributing
Contributions welcome! Please:
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Ensure all tests pass: npm test
- Submit a pull request
License
MIT © 2024
Support
- Issues: GitHub Issues
- Documentation: This README
- Examples: /examples directory
Changelog
3.0.0 (2026-01-01)
Major release with unified API, smart token counting, and response translation
- 🚀 Unified translate() API - Single function that does everything
  - Auto-detects format (OpenAI Chat, Responses API, Anthropic)
  - Translates between formats
  - Validates and auto-fixes invalid values
  - Normalizes for reasoning models
  - Counts tokens (optional)
  - Returns comprehensive result with warnings
- 🔢 Smart Token Counting - Fixed crashes, now works with all formats
  - countTokensFor() - Smart counting for any request format
  - countRequestTokens() - Same as above, different name
  - countResponsesApiRequestTokens() - Direct Responses API counting
  - No more "undefined is not an object" errors
- 📊 OpenAI Responses API Support
  - Auto-detects input field (Responses API) vs messages (Chat)
  - Converts flat tool format to nested format
  - Handles instructions, max_output_tokens, etc.
- 📡 Unified Response Translation (NEW)
  - translateResponse() - Auto-detect and translate responses
  - translateResponseToOpenAI() / translateResponseToAnthropic() - Format-specific
  - detectResponseFormat() - Detect response format with confidence
  - isOpenAIResponse() / isAnthropicResponse() - Type guards
- 🌊 Unified Streaming API (NEW)
  - createStreamConverter() - Create auto-detecting stream converter
  - convertStream() - Convert full async streams
  - UnifiedStreamConverter class - Flexible streaming conversion
  - Auto-detects source format on first event
- 🎯 New Exports
  - Request: translate(), translateToOpenAI(), translateToAnthropic()
  - Response: translateResponse(), translateResponseToOpenAI(), translateResponseToAnthropic()
  - Streaming: createStreamConverter(), convertStream(), UnifiedStreamConverter
  - Detection: detectResponseFormat(), detectStreamEventFormat(), isOpenAIResponse(), isAnthropicResponse()
  - Utilities: fix(), countTokensFor()
  - Types: TranslateOptions, TranslateResult, TranslateResponseOptions, TranslateResponseResult, StreamConverterOptions, TokenCount
  - mega object - Convenience wrapper for all v3.0 functions
- 🧪 Comprehensive Tests - 154 tests covering all functionality
- ⚠️ Deprecated APIs (still work, will be removed in v4.0)
  - translateRequest.openaiToAnthropic() → use translate(req, { targetFormat: 'anthropic' })
  - translateRequest.anthropicToOpenai() → use translate(req, { targetFormat: 'openai' })
  - translateRequest.autoFix() → use translate(req, { autoFix: true })
  - tokenCounter.countOpenAIRequest() → use countTokensFor() or tokenCounter.countRequest()
  - translateResponseObj → use translateResponse() instead
2.2.0 (2025-01-01)
- 🛠️ Advanced Tool Use Support - Support for Anthropic's advanced-tool-use-2025-11-20 beta
  - Tool Search Tool (defer_loading: true)
    - Pass through via x_anthropic_defer_loading field in OpenAI tool definitions
    - Enables dynamic tool discovery for up to 10,000 tools
  - Programmatic Tool Calling (PTC) (allowed_callers)
    - Pass through via x_anthropic_allowed_callers field
    - Supports ['direct', 'code_execution_20250825'] values
  - Tool Use Examples (input_examples)
    - Pass through via x_anthropic_input_examples field
    - Provides concrete usage patterns for tools
  - Code Execution Tool (code_execution_20250825)
    - Filtered out during A→O translation with warning (no OpenAI equivalent)
  - PTC Response Fields
    - caller field in tool_use blocks tracked and warned
    - container session info tracked and warned
- 📋 New Types
  - CodeExecutionTool interface for PTC
  - ToolUseCaller interface for caller tracking
  - ContainerInfo interface for PTC sessions
  - ToolDefinition union type (Tool | CodeExecutionTool)
- 🔧 Streaming Updates
  - getContainer() method to access PTC container info
  - getToolsWithCaller() method to track tools invoked via PTC
2.1.0 (2025-01-01)
- 🖼️ Image URL Support - Full bidirectional support for remote image URLs
  - O→A: Remote URLs converted to Anthropic's source.type: "url" format
  - A→O: URL sources converted to OpenAI's image_url.url format
  - Base64 images continue to work as before
- 🧠 Extended Thinking Translation - Bidirectional reasoning_effort ↔ thinking mapping
  - O→A: reasoning_effort → thinking.budget_tokens
    - minimal → 1024 tokens, low → 2048, medium → 8192, high → 16384, xhigh → 32768
  - A→O: thinking.budget_tokens → reasoning_effort (inverse mapping)
- 📋 response_format Translation - JSON mode approximation via system prompt
  - json_object → Adds JSON-only instruction to system prompt
  - json_schema → Includes schema definition in system prompt
  - Generates warning about approximation
- 📡 Streaming Enhancements
  - Thinking deltas can now be included as <thinking> tags (optional)
  - reasoning_tokens calculated from thinking blocks
  - system_fingerprint and service_tier added to stream chunks
  - Cache tokens mapped to prompt_tokens_details.cached_tokens
- 🔧 Response Enhancements
  - A→O: service_tier field added (default: "default")
  - A→O: reasoning_tokens calculated from thinking blocks
  - Better usage details mapping for cache tokens
2.0.0 (2025-01-01)
- 🧠 Reasoning Model Support - Full support for o1, o3, o4-mini, and gpt-5 models
  - Auto-detection with isReasoningModel(), isO3O4Model(), isGpt5Model()
  - Auto-normalization with normalizeForReasoningModel()
  - Converts system → developer role
  - Converts max_tokens → max_completion_tokens
  - Strips unsupported parameters (temperature, top_p, etc.)
- 🔍 Auto-Format Detection - Automatically detect OpenAI vs Anthropic format
  - detectRequestFormat() with confidence levels
  - isOpenAIFormat() / isAnthropicFormat() type guards
- 🛠️ Auto-Fix Utilities - One function that does everything
  - autoFixRequest() - detect, translate, validate, fix
  - Validates function names (64 char limit, alphanumeric pattern)
  - Validates descriptions (1024 char limit)
  - Truncates stop sequences (max 4)
- 📡 100% OpenAI Format Match - Response translation matches OpenAI exactly
  - chatcmpl- prefixed IDs
  - system_fingerprint generation
  - All required fields present
- 🆕 OpenAI 2025 API Parameters - Support for 30+ new parameters
  - developer role, reasoning_effort, modalities, audio
  - prediction, store, metadata, service_tier
  - stream_options, web_search_options
  - max_completion_tokens, prompt_cache_key, etc.
- ☁️ Azure OpenAI Compatibility - Works with Azure OpenAI deployments
- ⚠️ BREAKING: Default stripUnsupported changed to false (try to include params first)
1.0.2 (2024-12-17)
- 🐛 Bug Fix: Fixed duplicate tool_call_id error when multiple tool results share the same ID
  - Tool results with duplicate tool_use_id are now automatically merged
  - Prevents "Duplicate value(s) for 'tool_call.id'" errors in OpenAI/Gemini API calls
- 🐛 Bug Fix: Fixed JSON Schema compatibility issues with Gemini API
  - Automatically strips $schema and additionalProperties fields from tool parameters
  - Ensures tool definitions work with strict OpenAI-compatible providers
  - No functional loss - these are metadata fields that don't affect tool behavior
1.0.0 (2024-12-14)
- ✅ Initial release
- ✅ Bidirectional request/response translation
- ✅ Streaming support
- ✅ Tool calling support
- ✅ Token counting with base200k
- ✅ Full TypeScript support
- ✅ Comprehensive test coverage
