openmodex-sdk
v0.1.1
OpenModex Node.js / TypeScript SDK
The official Node.js SDK for the OpenModex API. Provides a typed, ergonomic client for chat completions, legacy completions, embeddings, and model discovery with built-in streaming, retries, and OpenModex-specific features like intelligent routing and semantic caching.
Installation
npm install openmodex
Requires Node.js 18+ (uses native fetch).
Quick start
import OpenModex from 'openmodex';
const client = new OpenModex({
apiKey: process.env.OPENMODEX_API_KEY, // or pass directly: 'omx_sk_...'
});
const response = await client.chat.completions.create({
model: 'gpt-4o',
messages: [{ role: 'user', content: 'Hello!' }],
});
console.log(response.choices[0].message.content);
Streaming
const stream = await client.chat.completions.create({
model: 'gpt-4o',
messages: [{ role: 'user', content: 'Write a haiku about code.' }],
stream: true,
});
for await (const chunk of stream) {
process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}
The stream: true overload returns an AsyncIterable<ChatCompletionChunk> that you can consume with for await...of.
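Each chunk carries an incremental delta, so assembling the full text is a matter of concatenating `delta.content` across the iterable. A self-contained sketch (the `ChunkLike` interface and the stubbed generator are stand-ins for the SDK's real `ChatCompletionChunk` stream, not part of the SDK):

```typescript
// Minimal chunk shape mirroring the delta fields used above (an assumption,
// modeled on the streaming example; the SDK's real type is ChatCompletionChunk).
interface ChunkLike {
  choices: { delta?: { content?: string } }[];
}

// Concatenate streamed deltas from any AsyncIterable of chunk-shaped objects.
async function collectText(stream: AsyncIterable<ChunkLike>): Promise<string> {
  let text = '';
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content ?? '';
  }
  return text;
}

// Stubbed stream standing in for a real stream: true API call.
async function* fakeStream(): AsyncGenerator<ChunkLike> {
  for (const piece of ['Hello', ', ', 'world']) {
    yield { choices: [{ delta: { content: piece } }] };
  }
}

collectText(fakeStream()).then((text) => console.log(text)); // "Hello, world"
```

The same `collectText` works unchanged on the real stream, since it only relies on the AsyncIterable contract described above.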
OpenModex-specific features
Intelligent routing
Route requests to the best provider based on cost, latency, or quality:
const response = await client.chat.completions.create({
model: 'gpt-4o',
messages: [{ role: 'user', content: 'Explain quantum computing.' }],
routing: {
strategy: 'cost_optimized', // 'cost_optimized' | 'latency_optimized' | 'quality_optimized'
fallback: ['claude-3.5-sonnet'], // server-side fallback chain
allow_upgrade: true, // allow routing to a better model at the same price
},
});
Semantic caching
const response = await client.chat.completions.create({
model: 'gpt-4o',
messages: [{ role: 'user', content: 'What is 2+2?' }],
cache: {
enabled: true,
ttl: 3600, // seconds
},
});
console.log(response.openmodex?.cache_hit); // true if served from cache
Response metadata
Every response includes OpenModex metadata when available:
const { openmodex } = response;
if (openmodex) {
console.log(openmodex.request_id); // unique request ID
console.log(openmodex.provider); // provider that served the request
console.log(openmodex.model_used); // actual model used
console.log(openmodex.routing_strategy); // routing strategy applied
console.log(openmodex.cache_hit); // whether the response was cached
console.log(openmodex.latency_ms); // end-to-end latency
}
Model discovery
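Models are addressed by namespaced IDs of the form provider/model, as the examples below show. A tiny helper to split such an ID (hypothetical, not part of the SDK):

```typescript
// Split a namespaced model ID like 'openai/gpt-4o' into its parts.
// Hypothetical convenience helper; the SDK accepts the full ID as-is.
function parseModelId(id: string): { provider: string; model: string } {
  const slash = id.indexOf('/');
  if (slash === -1) return { provider: '', model: id }; // bare IDs like 'gpt-4o'
  return { provider: id.slice(0, slash), model: id.slice(slash + 1) };
}

console.log(parseModelId('openai/gpt-4o')); // { provider: 'openai', model: 'gpt-4o' }
```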
// List all models
const models = await client.models.list();
// Get a specific model
const model = await client.models.retrieve('openai/gpt-4o');
console.log(model.pricing, model.quality_scores);
// Compare models side by side
const comparison = await client.models.compare([
'openai/gpt-4o',
'anthropic/claude-3.5-sonnet',
]);
console.log(comparison.highlights?.cheapest);
Embeddings
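Embedding vectors are typically compared with cosine similarity. A small self-contained helper (plain math, not an SDK call) that can be applied to the vectors returned below:

```typescript
// Cosine similarity between two equal-length embedding vectors:
// dot(a, b) / (|a| * |b|), in the range [-1, 1].
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [0, 1])); // 0 (orthogonal)
```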
const result = await client.embeddings.create({
model: 'text-embedding-3-small',
input: 'Hello world',
});
console.log(result.data[0].embedding);
Legacy completions
const result = await client.completions.create({
model: 'gpt-3.5-turbo-instruct',
prompt: 'Once upon a time',
max_tokens: 100,
});
console.log(result.choices[0].text);
Configuration
const client = new OpenModex({
apiKey: 'omx_sk_...',
baseURL: 'https://api.openmodex.com/v1', // default
timeout: 60_000, // request timeout in ms (default: 30000)
maxRetries: 3, // automatic retries on 5xx (default: 2)
defaultModel: 'gpt-4o', // used when request omits model
fallbackModels: ['claude-3.5-sonnet'], // client-side fallback chain
defaultHeaders: { // sent with every request
'X-Custom-Header': 'value',
},
});
Error handling
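Retries and errors interact: the client retries 5xx responses up to maxRetries times before surfacing an error. A self-contained sketch of that behavior with exponential backoff (a hypothetical helper illustrating the idea, not the SDK's actual implementation):

```typescript
// Retry a request-like function on 5xx up to maxRetries extra attempts,
// doubling the delay between attempts. Illustrative only; the real client
// handles this internally when you set maxRetries.
async function withRetries<T>(
  fn: () => Promise<{ status: number; value?: T }>,
  maxRetries = 2,       // mirrors the SDK default
  baseDelayMs = 250,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    const res = await fn();
    if (res.status >= 200 && res.status < 300) return res.value as T;
    // 4xx errors are not retried; neither is a 5xx once attempts run out.
    if (res.status < 500 || attempt >= maxRetries) {
      throw new Error(`request failed with status ${res.status} after ${attempt + 1} attempt(s)`);
    }
    await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
  }
}
```

Once retries are exhausted, the SDK throws an APIError you can inspect as shown below.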
import { APIError } from 'openmodex';
try {
await client.chat.completions.create({ ... });
} catch (err) {
if (err instanceof APIError) {
console.log(err.statusCode); // 401, 429, 500, ...
console.log(err.code); // API error code
console.log(err.message); // human-readable message
console.log(err.isRateLimited);
console.log(err.isAuthError);
}
}
OpenAI SDK compatibility
OpenModex is API-compatible with the OpenAI API. If you are already using the OpenAI Node SDK, you can point it at OpenModex:
import OpenAI from 'openai';
const client = new OpenAI({
apiKey: 'omx_sk_...',
baseURL: 'https://api.openmodex.com/v1',
});
Or switch to the OpenModex SDK for access to routing, caching, model comparison, and richer metadata.
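Because the API follows the OpenAI wire format, the underlying HTTP request is easy to build by hand. A sketch of the request either SDK sends (the /chat/completions path and Bearer auth header are assumed from the OpenAI compatibility claim, not verified against OpenModex docs):

```typescript
// Build an OpenAI-style chat completions request for the OpenModex endpoint.
// Path and header names follow the OpenAI convention (an assumption here).
function buildChatRequest(apiKey: string, model: string, content: string) {
  return {
    url: 'https://api.openmodex.com/v1/chat/completions',
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ model, messages: [{ role: 'user', content }] }),
  };
}

const req = buildChatRequest('omx_sk_...', 'gpt-4o', 'Hello!');
console.log(req.url); // https://api.openmodex.com/v1/chat/completions
```

Passing `req` fields to fetch (native in Node.js 18+) reproduces what the SDKs do under the hood, minus retries and typing.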
License
MIT
