
openmodex-sdk v0.1.1

OpenModex Node.js / TypeScript SDK

The official Node.js SDK for the OpenModex API. Provides a typed, ergonomic client for chat completions, legacy completions, embeddings, and model discovery with built-in streaming, retries, and OpenModex-specific features like intelligent routing and semantic caching.

Installation

npm install openmodex

Requires Node.js 18+ (uses native fetch).

Quick start

import OpenModex from 'openmodex';

const client = new OpenModex({
  apiKey: process.env.OPENMODEX_API_KEY, // or pass directly: 'omx_sk_...'
});

const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(response.choices[0].message.content);

Streaming

const stream = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Write a haiku about code.' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}

The stream: true overload returns an AsyncIterable<ChatCompletionChunk> that you can consume with for await...of.
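If you need the full reply as a single string, the chunks can be folded into an accumulator. A minimal sketch — the `ChatCompletionChunk` type below is a pared-down stand-in for the SDK's type, reduced to the fields used here:

```typescript
// Minimal stand-in for the SDK's chunk type -- only the fields used below.
type ChatCompletionChunk = {
  choices: Array<{ delta?: { content?: string } }>;
};

// Accumulate every streamed delta into one string.
async function collectStream(
  stream: AsyncIterable<ChatCompletionChunk>,
): Promise<string> {
  let text = '';
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content ?? '';
  }
  return text;
}
```

Pass in the object returned by a `stream: true` call, e.g. `const text = await collectStream(stream);`.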

OpenModex-specific features

Intelligent routing

Route requests to the best provider based on cost, latency, or quality:

const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Explain quantum computing.' }],
  routing: {
    strategy: 'cost_optimized',      // 'cost_optimized' | 'latency_optimized' | 'quality_optimized'
    fallback: ['claude-3.5-sonnet'], // server-side fallback chain
    allow_upgrade: true,             // allow routing to a better model at the same price
  },
});

Semantic caching

const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'What is 2+2?' }],
  cache: {
    enabled: true,
    ttl: 3600,   // seconds
  },
});

console.log(response.openmodex?.cache_hit); // true if served from cache

Response metadata

Every response includes OpenModex metadata when available:

const { openmodex } = response;
if (openmodex) {
  console.log(openmodex.request_id);       // unique request ID
  console.log(openmodex.provider);         // provider that served the request
  console.log(openmodex.model_used);       // actual model used
  console.log(openmodex.routing_strategy); // routing strategy applied
  console.log(openmodex.cache_hit);        // whether the response was cached
  console.log(openmodex.latency_ms);       // end-to-end latency
}

Model discovery

// List all models
const models = await client.models.list();

// Get a specific model
const model = await client.models.retrieve('openai/gpt-4o');
console.log(model.pricing, model.quality_scores);

// Compare models side by side
const comparison = await client.models.compare([
  'openai/gpt-4o',
  'anthropic/claude-3.5-sonnet',
]);
console.log(comparison.highlights?.cheapest);

Embeddings

const result = await client.embeddings.create({
  model: 'text-embedding-3-small',
  input: 'Hello world',
});
console.log(result.data[0].embedding);
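Embedding vectors are usually compared with cosine similarity, which the SDK itself does not need to provide — it is a few lines of plain TypeScript. A sketch:

```typescript
// Cosine similarity between two equal-length embedding vectors.
// Returns a value in [-1, 1]; 1 means the vectors point the same way.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('vector length mismatch');
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Apply it to two `result.data[n].embedding` arrays from the same embedding model to rank texts by semantic closeness.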

Legacy completions

const result = await client.completions.create({
  model: 'gpt-3.5-turbo-instruct',
  prompt: 'Once upon a time',
  max_tokens: 100,
});
console.log(result.choices[0].text);

Configuration

const client = new OpenModex({
  apiKey: 'omx_sk_...',
  baseURL: 'https://api.openmodex.com/v1', // default
  timeout: 60_000,                         // request timeout in ms (default: 30000)
  maxRetries: 3,                           // automatic retries on 5xx (default: 2)
  defaultModel: 'gpt-4o',                  // used when the request omits a model
  fallbackModels: ['claude-3.5-sonnet'],   // client-side fallback chain
  defaultHeaders: {                        // sent with every request
    'X-Custom-Header': 'value',
  },
});

Error handling

import { APIError } from 'openmodex';

try {
  await client.chat.completions.create({ ... });
} catch (err) {
  if (err instanceof APIError) {
    console.log(err.statusCode);  // 401, 429, 500, ...
    console.log(err.code);        // API error code
    console.log(err.message);     // human-readable message
    console.log(err.isRateLimited);
    console.log(err.isAuthError);
  }
}
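Rate-limit errors (429) are usually worth retrying with backoff on the caller's side, in addition to the client's built-in `maxRetries`. A sketch of such a wrapper — it keys off an `isRateLimited` flag like the one `APIError` exposes, duck-typed here so the helper stays self-contained:

```typescript
// Retry an async call when the thrown error looks rate-limited,
// sleeping with exponential backoff between attempts.
async function withRateLimitRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const rateLimited = (err as { isRateLimited?: boolean })?.isRateLimited;
      if (!rateLimited || attempt >= maxAttempts) throw err;
      // Back off: 500 ms, 1000 ms, 2000 ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}
```

Wrap any SDK call in a closure, e.g. `await withRateLimitRetry(() => client.chat.completions.create(params))`.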

OpenAI SDK compatibility

OpenModex exposes an OpenAI-compatible API. If you are already using the OpenAI Node SDK, you can point it at OpenModex:

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'omx_sk_...',
  baseURL: 'https://api.openmodex.com/v1',
});

Or switch to the OpenModex SDK for access to routing, caching, model comparison, and richer metadata.

License

MIT