
mega-translator


Bidirectional translation library between the Anthropic Messages API and the OpenAI Chat Completions API, with token counting support.

Features

  • 🔄 Bidirectional Translation - Convert requests/responses between Anthropic and OpenAI
  • 🛠️ Tool Calling - Full support for function/tool calling in both directions
  • 📡 Streaming - Convert SSE streams between both APIs
  • 🖼️ Multimodal - Handle images and mixed content
  • 🔢 Token Counting - Accurate token counting with base200k tokenizer
  • ⚠️ Smart Warnings - Track feature losses and approximations
  • 📝 TypeScript - Full type safety with IntelliSense

Installation

npm install mega-translator

Quick Start

Request Translation

import { translateRequest } from 'mega-translator';

// OpenAI → Anthropic
const { data: anthropicRequest } = translateRequest.openaiToAnthropic({
  model: 'gpt-4',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' }
  ]
});

// Anthropic → OpenAI
const { data: openaiRequest } = translateRequest.anthropicToOpenai({
  model: 'claude-sonnet-4-5',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Hello!' }]
});

Response Translation

import { translateResponse } from 'mega-translator';

// Anthropic → OpenAI
const { data: openaiFormat } = translateResponse.anthropicToOpenai(anthropicResponse);

// OpenAI → Anthropic
const { data: anthropicFormat } = translateResponse.openaiToAnthropic(openaiResponse);

Streaming Translation

import { AnthropicToOpenAIStreamConverter } from 'mega-translator';

const converter = new AnthropicToOpenAIStreamConverter();

for await (const line of anthropicStream) {
  if (line.startsWith('data: ')) {
    const event = JSON.parse(line.slice(6));
    const chunk = converter.convert(event);
    if (chunk) console.log(chunk);
  }
}

Token Counting

import { tokenCounter } from 'mega-translator';

// Count tokens in text
const tokens = tokenCounter.countTokens("Hello, world!");

// Count request tokens
const { inputTokens, estimatedOutputTokens } =
  tokenCounter.countOpenAIRequest({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Hello!' }],
    max_tokens: 100
  });

// Count full conversation
const { inputTokens: totalInput, outputTokens, totalTokens } =
  tokenCounter.countConversation(request, response);

// Calculate cost ($3/M input, $15/M output)
const cost = (totalInput / 1_000_000) * 0.003 +
             (outputTokens / 1_000_000) * 0.015;

API Reference

Request Translation

translateRequest.openaiToAnthropic(request, options?)

Convert OpenAI Chat Completions request to Anthropic Messages format.

Parameters:

  • request: OpenAIRequestParams - OpenAI request object
  • options?: TranslationOptions - Optional configuration

Returns: TranslationResult<AnthropicRequestParams>

What it does:

  • Extracts system messages → system parameter
  • Converts content to content blocks
  • Translates tool_calls → tool_use blocks
  • Maps tool role → tool_result content
  • Validates message alternation
  • Normalizes temperature (0-2 → 0-1)
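The system-message extraction and temperature normalization above can be sketched in a self-contained way. This mirrors the documented behavior but is not the library's actual implementation, and linear halving is only one plausible temperature mapping:

```typescript
// Illustrative sketch of two documented transformations (not the library's code):
// 1) extract OpenAI system messages into a top-level `system` parameter
// 2) normalize temperature from OpenAI's 0-2 range to Anthropic's 0-1 range

interface SimpleMessage { role: string; content: string }

function extractSystem(
  messages: SimpleMessage[]
): { system?: string; messages: SimpleMessage[] } {
  const systemParts = messages
    .filter(m => m.role === 'system')
    .map(m => m.content);
  const rest = messages.filter(m => m.role !== 'system');
  return systemParts.length > 0
    ? { system: systemParts.join('\n'), messages: rest }
    : { messages: rest };
}

// Linear halving, clamped to [0, 1]; the library may use a different scheme.
function normalizeTemperature(openaiTemp: number): number {
  return Math.min(1, Math.max(0, openaiTemp / 2));
}
```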

translateRequest.anthropicToOpenai(request, options?)

Convert Anthropic Messages request to OpenAI Chat Completions format.

Parameters:

  • request: AnthropicRequestParams - Anthropic request object
  • options?: TranslationOptions - Optional configuration

Returns: TranslationResult<OpenAIRequestParams>

What it does:

  • Moves system parameter → first message
  • Converts content blocks → string/array
  • Handles tool_use blocks → tool_calls
  • Processes thinking blocks (strip/include)
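The reverse move, lifting a top-level system parameter into a leading system message, can be sketched the same way (again an illustration of the documented behavior, not the library's code):

```typescript
// Illustrative sketch (not the library's code): move an Anthropic-style
// top-level `system` string to a leading OpenAI system message.

interface SimpleMessage { role: string; content: string }

function systemToFirstMessage(
  system: string | undefined,
  messages: SimpleMessage[]
): SimpleMessage[] {
  if (!system) return messages;
  return [{ role: 'system', content: system }, ...messages];
}
```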

Response Translation

translateResponse.anthropicToOpenai(response, options?)

Convert Anthropic Messages response to OpenAI Chat Completions format.

Returns: TranslationResult<OpenAIResponse>

translateResponse.openaiToAnthropic(response, options?)

Convert OpenAI Chat Completions response to Anthropic Messages format.

Returns: TranslationResult<AnthropicResponse>

Streaming

AnthropicToOpenAIStreamConverter

Convert Anthropic SSE events to OpenAI stream chunks.

const converter = new AnthropicToOpenAIStreamConverter();
const chunk = converter.convert(anthropicEvent);

OpenAIToAnthropicStreamConverter

Convert OpenAI stream chunks to Anthropic SSE events.

const converter = new OpenAIToAnthropicStreamConverter();
const events = converter.convert(openaiChunk);
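Feeding raw SSE lines into either converter requires stripping the `data: ` prefix and parsing JSON, as in the Quick Start loop. A self-contained sketch of that parsing step (the converter call itself is omitted; the `[DONE]` sentinel handling is an assumption based on the OpenAI stream format):

```typescript
// Parse one SSE line into an event object; return null for non-data
// lines, empty payloads, and the OpenAI-style `[DONE]` terminator.
function parseSSELine(line: string): unknown | null {
  if (!line.startsWith('data: ')) return null;
  const payload = line.slice('data: '.length).trim();
  if (payload === '' || payload === '[DONE]') return null;
  return JSON.parse(payload);
}
```

Each parsed event would then be handed to `converter.convert(event)` as shown in the Quick Start.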

Token Counting

tokenCounter.countTokens(text: string): number

Count tokens in a plain text string using base200k tokenizer.

tokenCounter.countOpenAIRequest(request)

Count tokens in OpenAI request.

Returns:

{
  inputTokens: number;
  estimatedOutputTokens: number;
}

tokenCounter.countAnthropicRequest(request)

Count tokens in Anthropic request.

Returns:

{
  inputTokens: number;
  estimatedOutputTokens: number;
}

tokenCounter.countOpenAIResponse(response)

tokenCounter.countAnthropicResponse(response)

Count tokens in responses.

Returns:

{
  inputTokens: number;
  outputTokens: number;
  totalTokens: number;
}

tokenCounter.countConversation(request, response)

Count tokens for full conversation (works with both formats).

Returns:

{
  inputTokens: number;
  outputTokens: number;
  totalTokens: number;
}

Translation Options

{
  strictMode?: boolean;         // Throw on unsupported features (default: false)
  includeWarnings?: boolean;    // Include warnings in result (default: true)
  stripUnsupported?: boolean;   // Remove unsupported parameters (default: true)
  defaultMaxTokens?: number;    // Default max_tokens for Anthropic (default: 4096)
  thinkingHandling?: 'strip' | 'include' | 'metadata';
}
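As a self-contained illustration, the documented option shape and its stated defaults can be written out as a typed object. This mirrors the comments above; the package's exported TranslationOptions type is authoritative:

```typescript
// Mirrors the documented option shape; the package exports the real type.
interface TranslationOptionsSketch {
  strictMode?: boolean;
  includeWarnings?: boolean;
  stripUnsupported?: boolean;
  defaultMaxTokens?: number;
  thinkingHandling?: 'strip' | 'include' | 'metadata';
}

// Defaults as documented above (thinkingHandling has no stated default).
const documentedDefaults: Required<Omit<TranslationOptionsSketch, 'thinkingHandling'>> = {
  strictMode: false,
  includeWarnings: true,
  stripUnsupported: true,
  defaultMaxTokens: 4096,
};
```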

Advanced Examples

Tool Calling

import { translateRequest } from 'mega-translator';

const { data } = translateRequest.openaiToAnthropic({
  model: 'gpt-4',
  messages: [
    { role: 'user', content: 'What is the weather in SF?' }
  ],
  tools: [{
    type: 'function',
    function: {
      name: 'get_weather',
      description: 'Get current weather',
      parameters: {
        type: 'object',
        properties: {
          location: { type: 'string' }
        },
        required: ['location']
      }
    }
  }]
});

// Tool use response
const response = {
  role: 'assistant',
  content: null,
  tool_calls: [{
    id: 'call_123',
    type: 'function',
    function: {
      name: 'get_weather',
      arguments: '{"location":"San Francisco"}'
    }
  }]
};

// Tool result
const toolResult = {
  role: 'tool',
  tool_call_id: 'call_123',
  content: '{"temperature":72,"condition":"sunny"}'
};

Cost Estimation

import { tokenCounter } from 'mega-translator';

const CLAUDE_SONNET_PRICING = {
  input: 0.003,   // $3 per 1M tokens
  output: 0.015   // $15 per 1M tokens
};

const { inputTokens, outputTokens } =
  tokenCounter.countConversation(request, response);

const cost =
  (inputTokens / 1_000_000) * CLAUDE_SONNET_PRICING.input +
  (outputTokens / 1_000_000) * CLAUDE_SONNET_PRICING.output;

console.log(`Cost: $${cost.toFixed(6)}`);

Input Validation

import { tokenCounter } from 'mega-translator';

const MAX_CONTEXT = 200_000;  // Claude Sonnet 4.5
const MAX_OUTPUT = 8_192;
const SAFETY_MARGIN = 100;

function validateRequest(request) {
  const { inputTokens, estimatedOutputTokens } =
    tokenCounter.countAnthropicRequest(request);

  const totalEstimated = inputTokens + estimatedOutputTokens;

  if (totalEstimated > MAX_CONTEXT) {
    throw new Error(`Request too large: ${totalEstimated} > ${MAX_CONTEXT}`);
  }

  if (inputTokens > MAX_CONTEXT - MAX_OUTPUT - SAFETY_MARGIN) {
    throw new Error('Not enough room for output');
  }

  return { inputTokens, estimatedOutputTokens };
}

Usage Tracking

import { tokenCounter } from 'mega-translator';

class UsageTracker {
  private totalInput = 0;
  private totalOutput = 0;

  track(request, response) {
    const { inputTokens, outputTokens } =
      tokenCounter.countConversation(request, response);

    this.totalInput += inputTokens;
    this.totalOutput += outputTokens;
  }

  getCost(inputPrice, outputPrice) {
    return (this.totalInput / 1_000_000) * inputPrice +
           (this.totalOutput / 1_000_000) * outputPrice;
  }

  getStats() {
    return {
      totalInput: this.totalInput,
      totalOutput: this.totalOutput,
      totalTokens: this.totalInput + this.totalOutput
    };
  }
}

const tracker = new UsageTracker();
tracker.track(req1, res1);
tracker.track(req2, res2);
console.log(tracker.getStats());

Feature Translation Matrix

| Feature | OpenAI → Anthropic | Anthropic → OpenAI |
|---------|-------------------|-------------------|
| Text messages | ✅ Full support | ✅ Full support |
| System messages | ✅ Extracted to system | ✅ First message |
| Images | ✅ Base64 only | ✅ Base64 conversion |
| Tool definitions | ✅ Full support | ✅ Full support |
| Tool calls | ✅ → tool_use | ✅ → tool_calls |
| Tool results | ✅ → tool_result | ✅ → tool role |
| Streaming | ✅ SSE conversion | ✅ SSE conversion |
| Temperature | ✅ Normalized 0-2 → 0-1 | ✅ Direct passthrough |
| max_tokens | ⚠️ Optional → Required | ✅ Direct passthrough |
| Thinking blocks | N/A | ⚠️ Stripped by default |
| response_format | ⚠️ Not supported | N/A |
| seed | ⚠️ Not supported | N/A |
| logprobs | ⚠️ Not supported | N/A |

Legend:

  • ✅ Full support
  • ⚠️ Partial support or feature loss

Token Counting Details

What Gets Counted

  • Messages: Role name + content + formatting overhead (~4 tokens/message)
  • Images: ~85 tokens per image (base cost)
  • Tools: Full JSON definition
  • Tool calls: Name + input/arguments
  • System messages: Full content
  • Thinking blocks: Full text (Claude extended thinking)
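A back-of-the-envelope estimate of the fixed overheads listed above can be sketched as follows. The real counter tokenizes the actual content; this only sums the per-item constants, which are the approximations stated in this section:

```typescript
// Rough fixed-overhead estimate using the documented approximations:
// ~4 tokens of formatting per message, ~85 base tokens per image.
const MESSAGE_OVERHEAD = 4;
const IMAGE_BASE_COST = 85;

function estimateFixedOverhead(messageCount: number, imageCount: number): number {
  return messageCount * MESSAGE_OVERHEAD + imageCount * IMAGE_BASE_COST;
}
```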

Accuracy

Token counts are approximate (±2-5% of actual API usage) due to:

  • Using cl100k_base as proxy for base200k
  • Internal message formatting differences
  • Special tokens

Model Context Limits

| Model | Context | Max Output |
|-------|---------|-----------|
| Claude Sonnet 4.5 | 200,000 | 8,192 |
| Claude Haiku 4.5 | 200,000 | 8,192 |
| GPT-4 Turbo | 128,000 | 4,096 |
| GPT-4 | 8,192 | 4,096 |
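The limits above can be turned into a simple budget check, in the spirit of the earlier Input Validation example. The numbers are copied from the table; the string keys are illustrative, not identifiers the library defines:

```typescript
// Context/output limits copied from the table above; keys are illustrative.
const MODEL_LIMITS: Record<string, { context: number; maxOutput: number }> = {
  'claude-sonnet-4-5': { context: 200_000, maxOutput: 8_192 },
  'claude-haiku-4-5':  { context: 200_000, maxOutput: 8_192 },
  'gpt-4-turbo':       { context: 128_000, maxOutput: 4_096 },
  'gpt-4':             { context: 8_192,   maxOutput: 4_096 },
};

// True if inputTokens still leaves room for a full-length response.
function fitsInContext(model: string, inputTokens: number): boolean {
  const limits = MODEL_LIMITS[model];
  if (!limits) throw new Error(`Unknown model: ${model}`);
  return inputTokens + limits.maxOutput <= limits.context;
}
```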

Examples

See the /examples directory for complete working examples:

  • basic-usage.ts - Basic request/response translation
  • tool-calling.ts - Tool/function calling examples
  • complete-coverage.ts - All 8 translation scenarios
  • token-counting.ts - Token counting and cost estimation

Run examples:

npm install
npm run build
npx tsx examples/token-counting.ts

Type Definitions

The package exports full TypeScript definitions:

import type {
  OpenAI,           // OpenAI types namespace
  Anthropic,        // Anthropic types namespace
  TranslationOptions,
  TranslationResult,
  TranslationWarning
} from 'mega-translator';

Error Handling

import { TranslationError } from 'mega-translator';

try {
  const result = translateRequest.openaiToAnthropic(request);
} catch (error) {
  if (error instanceof TranslationError) {
    console.error('Translation failed:', error.message);
    console.error('Field:', error.field);
    console.error('Value:', error.value);
  }
}

Contributing

Contributions welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Add tests for new functionality
  4. Ensure all tests pass: npm test
  5. Submit a pull request

License

MIT © 2024

Support

  • Issues: GitHub Issues
  • Documentation: This README
  • Examples: /examples directory

Changelog

1.0.2 (2024-12-17)

  • 🐛 Bug Fix: Fixed duplicate tool_call_id error when multiple tool results share the same ID
    • Tool results with duplicate tool_use_id are now automatically merged
    • Prevents "Duplicate value(s) for 'tool_call.id'" errors in OpenAI/Gemini API calls
  • 🐛 Bug Fix: Fixed JSON Schema compatibility issues with Gemini API
    • Automatically strips $schema and additionalProperties fields from tool parameters
    • Ensures tool definitions work with strict OpenAI-compatible providers
    • No functional loss - these are metadata fields that don't affect tool behavior

1.0.0 (2024-12-14)

  • ✅ Initial release
  • ✅ Bidirectional request/response translation
  • ✅ Streaming support
  • ✅ Tool calling support
  • ✅ Token counting with base200k
  • ✅ Full TypeScript support
  • ✅ Comprehensive test coverage