# mega-translator

Bidirectional translation library for the Anthropic Messages API ↔ OpenAI Chat Completions API, with token counting support.
## Features
- 🔄 Bidirectional Translation - Convert requests/responses between Anthropic and OpenAI
- 🛠️ Tool Calling - Full support for function/tool calling in both directions
- 📡 Streaming - Convert SSE streams between both APIs
- 🖼️ Multimodal - Handle images and mixed content
- 🔢 Token Counting - Approximate token counting (within ±2-5%) with a base200k-style tokenizer
- ⚠️ Smart Warnings - Track feature losses and approximations
- 📝 TypeScript - Full type safety with IntelliSense
## Installation

```bash
npm install mega-translator
```

## Quick Start
### Request Translation

```typescript
import { translateRequest } from 'mega-translator';

// OpenAI → Anthropic
const { data: anthropicRequest } = translateRequest.openaiToAnthropic({
  model: 'gpt-4',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' }
  ]
});

// Anthropic → OpenAI
const { data: openaiRequest } = translateRequest.anthropicToOpenai({
  model: 'claude-sonnet-4-5',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Hello!' }]
});
```

### Response Translation
```typescript
import { translateResponse } from 'mega-translator';

// Anthropic → OpenAI
const { data: asOpenai } = translateResponse.anthropicToOpenai(anthropicResponse);

// OpenAI → Anthropic
const { data: asAnthropic } = translateResponse.openaiToAnthropic(openaiResponse);
```

### Streaming Translation
```typescript
import { AnthropicToOpenAIStreamConverter } from 'mega-translator';

const converter = new AnthropicToOpenAIStreamConverter();

for await (const line of anthropicStream) {
  if (line.startsWith('data: ')) {
    const event = JSON.parse(line.slice(6));
    const chunk = converter.convert(event);
    if (chunk) console.log(chunk);
  }
}
```

### Token Counting
```typescript
import { tokenCounter } from 'mega-translator';

// Count tokens in text
const tokens = tokenCounter.countTokens('Hello, world!');

// Count request tokens
const { inputTokens, estimatedOutputTokens } =
  tokenCounter.countOpenAIRequest({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Hello!' }],
    max_tokens: 100
  });

// Count a full conversation
const { inputTokens: promptTokens, outputTokens, totalTokens } =
  tokenCounter.countConversation(request, response);

// Calculate cost ($3 input / $15 output per 1M tokens for Claude Sonnet)
const cost = (promptTokens / 1_000_000) * 3 +
  (outputTokens / 1_000_000) * 15;
```

## API Reference
### Request Translation

#### `translateRequest.openaiToAnthropic(request, options?)`

Convert an OpenAI Chat Completions request to Anthropic Messages format.

**Parameters:**
- `request: OpenAIRequestParams` - OpenAI request object
- `options?: TranslationOptions` - Optional configuration

**Returns:** `TranslationResult<AnthropicRequestParams>`

**What it does:**
- Extracts `system` messages → `system` parameter
- Converts content to content blocks
- Translates `tool_calls` → `tool_use` blocks
- Maps `tool` role → `tool_result` content
- Validates message alternation
- Normalizes temperature (0-2 → 0-1)
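The temperature normalization and system-message extraction above can be sketched as standalone functions. This is a hypothetical illustration of the mapping, not the library's actual internals:

```typescript
type OpenAIMessage = { role: string; content: string };

// OpenAI temperature ranges over 0-2; Anthropic expects 0-1.
// Dividing by 2 and clamping guards against out-of-range inputs.
function normalizeTemperature(openaiTemp: number): number {
  return Math.min(Math.max(openaiTemp / 2, 0), 1);
}

// System messages are lifted out of the message list and joined into the
// top-level `system` parameter; the remaining messages pass through.
function extractSystem(messages: OpenAIMessage[]): {
  system: string;
  messages: OpenAIMessage[];
} {
  const system = messages
    .filter((m) => m.role === 'system')
    .map((m) => m.content)
    .join('\n');
  return { system, messages: messages.filter((m) => m.role !== 'system') };
}
```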
#### `translateRequest.anthropicToOpenai(request, options?)`

Convert an Anthropic Messages request to OpenAI Chat Completions format.

**Parameters:**
- `request: AnthropicRequestParams` - Anthropic request object
- `options?: TranslationOptions` - Optional configuration

**Returns:** `TranslationResult<OpenAIRequestParams>`

**What it does:**
- Moves `system` parameter → first message
- Converts content blocks → string/array
- Handles `tool_use` blocks → `tool_calls`
- Processes thinking blocks (strip/include)
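The system handling in this direction can be sketched the same way. Again a hypothetical illustration; `systemToFirstMessage` is not an exported function:

```typescript
type Message = { role: string; content: string };

// The Anthropic top-level `system` parameter becomes the leading
// OpenAI system message; an absent system leaves messages untouched.
function systemToFirstMessage(
  system: string | undefined,
  messages: Message[]
): Message[] {
  if (!system) return messages;
  return [{ role: 'system', content: system }, ...messages];
}
```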
### Response Translation

#### `translateResponse.anthropicToOpenai(response, options?)`

Convert an Anthropic Messages response to OpenAI Chat Completions format.

**Returns:** `TranslationResult<OpenAIResponse>`

#### `translateResponse.openaiToAnthropic(response, options?)`

Convert an OpenAI Chat Completions response to Anthropic Messages format.

**Returns:** `TranslationResult<AnthropicResponse>`
### Streaming

#### `AnthropicToOpenAIStreamConverter`

Convert Anthropic SSE events to OpenAI stream chunks.

```typescript
const converter = new AnthropicToOpenAIStreamConverter();
const chunk = converter.convert(anthropicEvent);
```

#### `OpenAIToAnthropicStreamConverter`

Convert OpenAI stream chunks to Anthropic SSE events.

```typescript
const converter = new OpenAIToAnthropicStreamConverter();
const events = converter.convert(openaiChunk);
```

### Token Counting
#### `tokenCounter.countTokens(text: string): number`

Count tokens in a plain text string using the base200k tokenizer.

#### `tokenCounter.countOpenAIRequest(request)`

Count tokens in an OpenAI request.

**Returns:**

```typescript
{
  inputTokens: number;
  estimatedOutputTokens: number;
}
```

#### `tokenCounter.countAnthropicRequest(request)`

Count tokens in an Anthropic request.

**Returns:**

```typescript
{
  inputTokens: number;
  estimatedOutputTokens: number;
}
```

#### `tokenCounter.countOpenAIResponse(response)`
#### `tokenCounter.countAnthropicResponse(response)`

Count tokens in responses.

**Returns:**

```typescript
{
  inputTokens: number;
  outputTokens: number;
  totalTokens: number;
}
```

#### `tokenCounter.countConversation(request, response)`

Count tokens for a full conversation (works with both formats).

**Returns:**

```typescript
{
  inputTokens: number;
  outputTokens: number;
  totalTokens: number;
}
```

### Translation Options
```typescript
{
  strictMode?: boolean;       // Throw on unsupported features (default: false)
  includeWarnings?: boolean;  // Include warnings in result (default: true)
  stripUnsupported?: boolean; // Remove unsupported parameters (default: true)
  defaultMaxTokens?: number;  // Default max_tokens for Anthropic (default: 4096)
  thinkingHandling?: 'strip' | 'include' | 'metadata';
}
```

## Advanced Examples
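As one illustration of these options, `defaultMaxTokens` matters because Anthropic requires `max_tokens` while OpenAI treats it as optional. A minimal sketch of how such a default might be applied (hypothetical helper, not the library's code; 4096 mirrors the documented default):

```typescript
// Fill in max_tokens when the source request omitted it: prefer the
// request's own value, then the option, then the 4096 fallback.
function applyMaxTokens(
  req: { max_tokens?: number },
  opts: { defaultMaxTokens?: number } = {}
): { max_tokens: number } {
  return { ...req, max_tokens: req.max_tokens ?? opts.defaultMaxTokens ?? 4096 };
}
```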
### Tool Calling

```typescript
import { translateRequest } from 'mega-translator';

const { data } = translateRequest.openaiToAnthropic({
  model: 'gpt-4',
  messages: [
    { role: 'user', content: 'What is the weather in SF?' }
  ],
  tools: [{
    type: 'function',
    function: {
      name: 'get_weather',
      description: 'Get current weather',
      parameters: {
        type: 'object',
        properties: {
          location: { type: 'string' }
        },
        required: ['location']
      }
    }
  }]
});

// Tool use response
const response = {
  role: 'assistant',
  content: null,
  tool_calls: [{
    id: 'call_123',
    type: 'function',
    function: {
      name: 'get_weather',
      arguments: '{"location":"San Francisco"}'
    }
  }]
};

// Tool result
const toolResult = {
  role: 'tool',
  tool_call_id: 'call_123',
  content: '{"temperature":72,"condition":"sunny"}'
};
```

### Cost Estimation
```typescript
import { tokenCounter } from 'mega-translator';

const CLAUDE_SONNET_PRICING = {
  input: 3,   // $3 per 1M input tokens
  output: 15  // $15 per 1M output tokens
};

const { inputTokens, outputTokens } =
  tokenCounter.countConversation(request, response);

const cost =
  (inputTokens / 1_000_000) * CLAUDE_SONNET_PRICING.input +
  (outputTokens / 1_000_000) * CLAUDE_SONNET_PRICING.output;

console.log(`Cost: $${cost.toFixed(6)}`);
```

### Input Validation
```typescript
import { tokenCounter } from 'mega-translator';

const MAX_CONTEXT = 200_000; // Claude Sonnet 4.5
const MAX_OUTPUT = 8_192;
const SAFETY_MARGIN = 100;

function validateRequest(request) {
  const { inputTokens, estimatedOutputTokens } =
    tokenCounter.countAnthropicRequest(request);
  const totalEstimated = inputTokens + estimatedOutputTokens;

  if (totalEstimated > MAX_CONTEXT) {
    throw new Error(`Request too large: ${totalEstimated} > ${MAX_CONTEXT}`);
  }
  if (inputTokens > MAX_CONTEXT - MAX_OUTPUT - SAFETY_MARGIN) {
    throw new Error('Not enough room for output');
  }
  return { inputTokens, estimatedOutputTokens };
}
```

### Usage Tracking
```typescript
import { tokenCounter } from 'mega-translator';

class UsageTracker {
  private totalInput = 0;
  private totalOutput = 0;

  track(request, response) {
    const { inputTokens, outputTokens } =
      tokenCounter.countConversation(request, response);
    this.totalInput += inputTokens;
    this.totalOutput += outputTokens;
  }

  // Prices are in dollars per 1M tokens
  getCost(inputPricePerMillion, outputPricePerMillion) {
    return (this.totalInput / 1_000_000) * inputPricePerMillion +
      (this.totalOutput / 1_000_000) * outputPricePerMillion;
  }

  getStats() {
    return {
      totalInput: this.totalInput,
      totalOutput: this.totalOutput,
      totalTokens: this.totalInput + this.totalOutput
    };
  }
}

const tracker = new UsageTracker();
tracker.track(req1, res1);
tracker.track(req2, res2);
console.log(tracker.getStats());
```

## Feature Translation Matrix
| Feature | OpenAI → Anthropic | Anthropic → OpenAI |
|---------|-------------------|-------------------|
| Text messages | ✅ Full support | ✅ Full support |
| System messages | ✅ Extracted to system | ✅ First message |
| Images | ✅ Base64 only | ✅ Base64 conversion |
| Tool definitions | ✅ Full support | ✅ Full support |
| Tool calls | ✅ → tool_use | ✅ → tool_calls |
| Tool results | ✅ → tool_result | ✅ → tool role |
| Streaming | ✅ SSE conversion | ✅ SSE conversion |
| Temperature | ✅ Normalized 0-2 → 0-1 | ✅ Direct passthrough |
| max_tokens | ⚠️ Optional → Required | ✅ Direct passthrough |
| Thinking blocks | N/A | ⚠️ Stripped by default |
| response_format | ⚠️ Not supported | N/A |
| seed | ⚠️ Not supported | N/A |
| logprobs | ⚠️ Not supported | N/A |
Legend:
- ✅ Full support
- ⚠️ Partial support or feature loss
## Token Counting Details

### What Gets Counted
- Messages: Role name + content + formatting overhead (~4 tokens/message)
- Images: ~85 tokens per image (base cost)
- Tools: Full JSON definition
- Tool calls: Name + input/arguments
- System messages: Full content
- Thinking blocks: Full text (Claude extended thinking)
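A back-of-envelope version of these counting rules, with a crude word-based stand-in for the real tokenizer (the helper and its constants are illustrative only):

```typescript
const PER_MESSAGE_OVERHEAD = 4; // ~4 tokens of formatting per message
const PER_IMAGE_BASE = 85;      // ~85 tokens base cost per image

// Crude stand-in for the tokenizer: real counting uses base200k.
function roughTextTokens(text: string): number {
  return Math.ceil(text.split(/\s+/).filter(Boolean).length * 1.3);
}

function estimateMessageTokens(
  msgs: { content: string; images?: number }[]
): number {
  return msgs.reduce(
    (sum, m) =>
      sum +
      PER_MESSAGE_OVERHEAD +
      roughTextTokens(m.content) +
      (m.images ?? 0) * PER_IMAGE_BASE,
    0
  );
}
```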
### Accuracy
Token counts are approximate (±2-5% of actual API usage) due to:
- Using cl100k_base as proxy for base200k
- Internal message formatting differences
- Special tokens
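Given the ±2-5% error band, it can be prudent to pad estimates before comparing them against hard context limits. A tiny hypothetical helper:

```typescript
// Inflate a token estimate by a percentage margin, rounding up so the
// padded value never understates usage.
function withSafetyMargin(tokens: number, marginPct = 5): number {
  return Math.ceil(tokens * (1 + marginPct / 100));
}
```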
Model Context Limits
| Model | Context | Max Output | |-------|---------|-----------| | Claude Sonnet 4.5 | 200,000 | 8,192 | | Claude Haiku 4.5 | 200,000 | 8,192 | | GPT-4 Turbo | 128,000 | 4,096 | | GPT-4 | 8,192 | 4,096 |
## Examples

See the `/examples` directory for complete working examples:
- `basic-usage.ts` - Basic request/response translation
- `tool-calling.ts` - Tool/function calling examples
- `complete-coverage.ts` - All 8 translation scenarios
- `token-counting.ts` - Token counting and cost estimation

Run examples:

```bash
npm install
npm run build
npx tsx examples/token-counting.ts
```

## Type Definitions
The package exports full TypeScript definitions:

```typescript
import type {
  OpenAI,             // OpenAI types namespace
  Anthropic,          // Anthropic types namespace
  TranslationOptions,
  TranslationResult,
  TranslationWarning
} from 'mega-translator';
```

## Error Handling
```typescript
import { TranslationError } from 'mega-translator';

try {
  const result = translateRequest.openaiToAnthropic(request);
} catch (error) {
  if (error instanceof TranslationError) {
    console.error('Translation failed:', error.message);
    console.error('Field:', error.field);
    console.error('Value:', error.value);
  }
}
```

## Contributing
Contributions welcome! Please:
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Ensure all tests pass: `npm test`
- Submit a pull request
## License

MIT © 2024

## Support

- Issues: GitHub Issues
- Documentation: This README
- Examples: `/examples` directory
## Changelog

### 1.0.2 (2024-12-17)

- 🐛 Bug Fix: Fixed duplicate `tool_call_id` error when multiple tool results share the same ID
  - Tool results with duplicate `tool_use_id` are now automatically merged
  - Prevents "Duplicate value(s) for 'tool_call.id'" errors in OpenAI/Gemini API calls
- 🐛 Bug Fix: Fixed JSON Schema compatibility issues with the Gemini API
  - Automatically strips `$schema` and `additionalProperties` fields from tool parameters
  - Ensures tool definitions work with strict OpenAI-compatible providers
  - No functional loss - these are metadata fields that don't affect tool behavior
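The duplicate-ID merge described in this entry can be sketched roughly as follows (a hypothetical reconstruction, not the actual implementation):

```typescript
type ToolResult = { tool_use_id: string; content: string };

// Collapse tool results that share a tool_use_id into one entry,
// concatenating their content, so downstream APIs never see duplicate IDs.
function mergeToolResults(results: ToolResult[]): ToolResult[] {
  const merged = new Map<string, string>();
  for (const r of results) {
    const prev = merged.get(r.tool_use_id);
    merged.set(r.tool_use_id, prev === undefined ? r.content : prev + '\n' + r.content);
  }
  return [...merged].map(([tool_use_id, content]) => ({ tool_use_id, content }));
}
```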
### 1.0.0 (2024-12-14)
- ✅ Initial release
- ✅ Bidirectional request/response translation
- ✅ Streaming support
- ✅ Tool calling support
- ✅ Token counting with base200k
- ✅ Full TypeScript support
- ✅ Comprehensive test coverage
