@tokenrouter/sdk
v1.0.6
TypeScript/JavaScript SDK for TokenRouter - Intelligent LLM Routing API
# TokenRouter Node.js SDK
Official Node.js/TypeScript SDK for TokenRouter routing. The SDK exposes only the routing endpoints:

- `client.create(...)` → native routing (`/route`)
- `client.chat.completions.create(...)` → OpenAI-compatible chat completions (`/v1/chat/completions`)
- `client.completions.create(...)` → OpenAI legacy completions (`/v1/completions`)

All calls are BYOK (bring your own key): provide your TokenRouter API key, and configure provider keys in TokenRouter itself.
## Install

```bash
npm install @tokenrouter/sdk
```

## Quick Start (Native Route)
```ts
import { TokenRouterClient } from '@tokenrouter/sdk';

const client = new TokenRouterClient({
  apiKey: process.env.TOKENROUTER_API_KEY!,
  baseUrl: 'https://api.tokenrouter.io', // or http://localhost:8000
});

const response = await client.create({
  model: 'auto',
  mode: 'balanced',
  model_preferences: ['gpt-4o', 'gpt-4o-mini'],
  messages: [
    { role: 'developer', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' },
  ],
});

console.log(response.choices[0].message.content);
```

## Endpoints
### Native Route (/route)

Non-streaming:
```ts
const resp = await client.create({
  model: 'auto',
  mode: 'balanced',
  model_preferences: ['gpt-4o', 'gpt-4o-mini'],
  messages: [
    { role: 'developer', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' },
  ],
});

console.log(resp.choices[0].message.content);
```

Streaming:
```ts
const stream = (await client.create({
  model: 'auto',
  stream: true,
  messages: [
    { role: 'developer', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Stream a short greeting.' },
  ],
})) as AsyncIterable<any>;

for await (const chunk of stream) {
  const delta = chunk?.choices?.[0]?.delta;
  if (delta?.content) process.stdout.write(delta.content);
}
```

### Chat Completions (/v1/chat/completions)
OpenAI-compatible chat completions.

Non-streaming:
```ts
const chat = await client.chat.completions.create({
  model: 'auto',
  mode: 'balanced',
  model_preferences: ['gpt-4o', 'gpt-4o-mini'],
  messages: [
    { role: 'developer', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' },
  ],
});

console.log(chat.choices[0].message.content);
```

Streaming:
```ts
const chatStream = (await client.chat.completions.create({
  model: 'auto',
  stream: true,
  messages: [
    { role: 'developer', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' },
  ],
})) as AsyncIterable<any>;

for await (const chunk of chatStream) {
  const delta = chunk?.choices?.[0]?.delta;
  if (delta?.content) process.stdout.write(delta.content);
}
```

### Legacy Completions (/v1/completions)
Returns the OpenAI legacy text completion response as a plain JSON object.
```ts
const tc = await client.completions.create({
  prompt: 'Say this is a test',
  model: 'auto',
  mode: 'balanced',
});

console.log(tc.choices?.[0]?.text);

// Streaming
const tstream = (await client.completions.create({
  prompt: 'Stream text completion please',
  model: 'auto',
  stream: true,
})) as AsyncIterable<any>;

for await (const chunk of tstream) {
  const text = chunk?.choices?.[0]?.text;
  if (text) process.stdout.write(text);
}
```

## Errors
```ts
import {
  AuthenticationError,
  RateLimitError,
  InvalidRequestError,
  APIConnectionError,
} from '@tokenrouter/sdk';

try {
  const r = await client.chat.completions.create({
    messages: [{ role: 'user', content: 'Hello' }],
    model: 'auto',
  });
  console.log(r.choices[0].message.content);
} catch (e: any) {
  if (e instanceof RateLimitError) console.log('Retry after', e.retryAfter);
  else if (e instanceof AuthenticationError) console.log('Invalid API key');
  else if (e instanceof InvalidRequestError) console.log('Invalid request');
  else if (e instanceof APIConnectionError) console.log('Connection error');
  else console.log('Unexpected error', e);
}
```

## Environment
```bash
export TOKENROUTER_API_KEY=tr_your-api-key

# Optional
export TOKENROUTER_BASE_URL=https://api.tokenrouter.io
```
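The typed error classes in the Errors section make it easy to retry rate-limited calls. Below is a minimal, SDK-agnostic retry sketch. The `isRateLimit` and `retryAfterSeconds` adapter callbacks are placeholders of our own: with this SDK you would pass `(e) => e instanceof RateLimitError` and `(e: any) => e.retryAfter` (assuming `retryAfter` is a delay in seconds; check the actual unit before relying on it).

```ts
// Minimal retry helper sketch: retries a call when a rate-limit error is
// thrown, waiting for the server-suggested delay, or exponential backoff
// (1s, 2s, 4s, ...) when no delay is provided. The two adapter callbacks
// keep this sketch independent of any SDK internals.
async function withRetry<T>(
  fn: () => Promise<T>,
  opts: {
    retries?: number; // maximum retries after the first attempt (default 3)
    isRateLimit: (e: unknown) => boolean;
    retryAfterSeconds: (e: unknown) => number | undefined;
  }
): Promise<T> {
  const retries = opts.retries ?? 3;
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (e) {
      // Give up on non-rate-limit errors or when the budget is spent.
      if (attempt >= retries || !opts.isRateLimit(e)) throw e;
      const waitMs = (opts.retryAfterSeconds(e) ?? 2 ** attempt) * 1000;
      await new Promise((resolve) => setTimeout(resolve, waitMs));
    }
  }
}
```

With the client from Quick Start: `await withRetry(() => client.create({ model: 'auto', messages }), { isRateLimit: (e) => e instanceof RateLimitError, retryAfterSeconds: (e: any) => e.retryAfter })`.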