@hyperscalesdkdev/sdk
v1.0.0
Hyperscale SDK — client library wrapping OpenRouter inference API
Hyperscale SDK
Client library for the Hyperscale API — a branded interface over OpenRouter inference. Use 400+ AI models (OpenAI, Anthropic, Google, Meta, Mistral, etc.) through a single integration with Hyperscale authentication, billing, and routing.
Install
npm install @hyperscale/sdk

Quick start
import { HyperscaleClient } from "@hyperscale/sdk";
const client = new HyperscaleClient({
  apiKey: process.env.HYPERSCALE_API_KEY!,
  baseURL: "https://api.hyperscale.ai/v1",
});
// Non-streaming
const response = await client.chat.completions.create({
  model: "anthropic/claude-3-5-sonnet",
  messages: [{ role: "user", content: "Hello!" }],
  max_tokens: 1024,
});
console.log(response.choices[0].message.content);
// Streaming
const stream = await client.chat.completions.create({
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Tell me a story." }],
  stream: true,
  max_tokens: 1024,
});
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}

Features
- Chat completions — streaming and non-streaming, OpenAI-compatible request/response
- Model discovery — models.list() with optional category filter and ~5 min cache
- Usage & cost — usage on completion responses; generations.get(id) for post-hoc stats; keys.getCredits() for balance
- Errors — typed exceptions (ValidationError, AuthError, InsufficientCreditsError, RateLimitError, UpstreamError) with automatic retry on 429/5xx
- OpenRouter model variants — use suffixes in model: :free, :nitro, :floor, :online, :thinking
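The retry behavior on 429/5xx described above can be pictured as exponential backoff around a single request. This is a hedged sketch, not the SDK's internal code — `withRetry`, `maxRetries`, and `baseDelayMs` are illustrative names:

```typescript
// Sketch of retry-on-429/5xx with exponential backoff.
// Illustrative only; the SDK's actual retry internals may differ.
async function withRetry<T>(
  op: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 250,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await op();
    } catch (err) {
      const status = (err as { status?: number }).status;
      const retryable =
        status === 429 || (status !== undefined && status >= 500);
      if (!retryable || attempt >= maxRetries) throw err;
      // Backoff doubles each attempt: 250 ms, 500 ms, 1000 ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
}
```

Non-retryable errors (e.g. a 401 from a bad key) are rethrown immediately rather than retried.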
API overview
| Method | Description |
|--------|-------------|
| client.chat.completions.create(params) | Chat completion (use stream: true for streaming) |
| client.models.list({ category? }) | List models (cached) |
| client.models.endpoints(author, slug) | Providers for a model |
| client.generations.get(id) | Token/cost stats for a generation |
| client.keys.getKeyInfo() | Key rate limit and usage |
| client.keys.getCredits() | Account credit balance |
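The "~5 min cache" behind models.list() behaves like a simple TTL cache keyed by the filter. A minimal sketch of that idea, assuming a plain in-memory map — this is illustrative, not the SDK's actual data structure:

```typescript
// Minimal TTL cache sketch, in the spirit of models.list()'s ~5 min cache.
// Illustrative only; the SDK's internal caching may differ.
class TTLCache<V> {
  private entries = new Map<string, { value: V; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  get(key: string): V | undefined {
    const hit = this.entries.get(key);
    if (!hit) return undefined;
    if (Date.now() >= hit.expiresAt) {
      this.entries.delete(key); // expired: evict and report a miss
      return undefined;
    }
    return hit.value;
  }

  set(key: string, value: V): void {
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}
```

If the cache key is the category filter, an unfiltered `models.list()` and a `models.list({ category: "..." })` call would be cached independently.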
Testing
1. Unit tests (no API key needed)
From the repo root:
npm install
npm test

This runs Vitest on tests/ (SSE parsing, error mapping). Use npm run test:watch for watch mode.
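The SSE parsing those tests exercise follows the common OpenAI-style stream format: each event arrives as a `data: <json>` line, and `data: [DONE]` terminates the stream. A hedged sketch of a parser for that format (not necessarily the SDK's exact code):

```typescript
// Sketch of OpenAI-style SSE parsing: each event is a "data: <json>" line;
// "data: [DONE]" is the end-of-stream sentinel. Illustrative only.
function parseSSEChunks(raw: string): unknown[] {
  const events: unknown[] = [];
  for (const line of raw.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue; // skip blank lines and comments
    const payload = trimmed.slice("data:".length).trim();
    if (payload === "[DONE]") break; // stream finished
    events.push(JSON.parse(payload));
  }
  return events;
}
```

A real streaming client parses incrementally as network chunks arrive (buffering partial lines), but the per-line logic is the same.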
2. Live API test
After building, you can hit a real API to verify the SDK end-to-end.
Build the SDK
npm run build

Set your API key (and optional base URL)
- Hyperscale gateway:
  HYPERSCALE_API_KEY=your-hyperscale-key
  Optionally HYPERSCALE_BASE_URL=https://api.hyperscale.ai/v1
- OpenRouter directly (e.g. for dev):
  HYPERSCALE_API_KEY=your-openrouter-key
  HYPERSCALE_BASE_URL=https://openrouter.ai/api/v1

Linux / macOS:
export HYPERSCALE_API_KEY=your-key
export HYPERSCALE_BASE_URL=https://openrouter.ai/api/v1 # optional

Windows (PowerShell):
$env:HYPERSCALE_API_KEY="your-key"
$env:HYPERSCALE_BASE_URL="https://openrouter.ai/api/v1" # optional

Windows (CMD):
set HYPERSCALE_API_KEY=your-key
Run the example
node examples/quick-test.mjs

It runs a non-streaming and a streaming chat call (using a free model) and lists a few models. If any step fails, it prints the error and exits with code 1.
Error handling
import {
  HyperscaleClient,
  AuthError,
  InsufficientCreditsError,
  RateLimitError,
} from "@hyperscale/sdk";
try {
  const r = await client.chat.completions.create({ /* ... */ });
} catch (e) {
  if (e instanceof AuthError) { /* invalid key */ }
  if (e instanceof InsufficientCreditsError) { /* add credits */ }
  if (e instanceof RateLimitError) { /* back off; SDK retries by default */ }
}

License
MIT
