# Metrx SDK
TypeScript SDK for integrating with the Metrx gateway. The gateway is a Cloudflare Worker that proxies LLM API calls with cost tracking, rate limiting, and request metadata.
## Installation

```bash
npm install @metrxbot/sdk
```

## Quick Start
### Initialize the Client
```ts
import { MetrxbotClient } from '@metrxbot/sdk';

const client = new MetrxbotClient({
  apiKey: process.env.METRX_API_KEY,
  gatewayUrl: 'https://gateway.metrxbot.com', // optional, defaults to this
  defaultAgentId: 'my-agent', // optional
  timeout: 30000, // optional, in milliseconds
});
```

### Chat Completions (OpenAI-compatible)
```ts
const response = await client.chat({
  model: 'gpt-4',
  messages: [
    { role: 'user', content: 'What is 2+2?' }
  ],
  customerId: 'user-123', // optional, for tracking
  sessionId: 'session-456', // optional, for tracking
});

console.log(response.choices[0].message.content);
console.log(`Cost: ${response._meta.costMicrocents} microcents`);
console.log(`Latency: ${response._meta.latencyMs}ms`);
```

### Streaming Chat Completions
```ts
const stream = client.chatStream({
  model: 'gpt-4',
  messages: [
    { role: 'user', content: 'Write a haiku' }
  ]
});

for await (const chunk of stream) {
  if (chunk.choices?.[0]?.delta?.content) {
    process.stdout.write(chunk.choices[0].delta.content);
  }
}
```

### Anthropic Messages API
```ts
const response = await client.messages({
  model: 'claude-3-sonnet',
  max_tokens: 1024,
  messages: [
    { role: 'user', content: 'Hello, Claude!' }
  ]
});

console.log(response.content[0].text);
```

### Embeddings
```ts
const response = await client.embeddings({
  model: 'text-embedding-3-small',
  input: 'Hello world'
});

console.log(response.data[0].embedding);
```

### Health Check
```ts
const health = await client.health();

console.log(health.status); // 'ok', 'degraded', or 'error'
console.log(health.version);
```

### List Available Models
```ts
const models = await client.models();

console.log(models.data.map(m => m.id));
```

## Configuration
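As a quick reference, here is a fully-populated configuration object matching the `MetrxbotConfig` interface below. This is a sketch: every value is illustrative, and only `apiKey` is required.

```ts
// Illustrative configuration; in practice this object is passed to
// `new MetrxbotClient(...)`. Key, agent ID, and provider are made up.
const config = {
  apiKey: 'mk_live_example',                  // required (illustrative key)
  gatewayUrl: 'https://gateway.metrxbot.com', // optional; this is the default
  defaultAgentId: 'support-bot',              // optional, illustrative
  defaultProvider: 'openai',                  // optional, illustrative
  timeout: 15000,                             // optional; overrides the 30s default
};

console.log(Object.keys(config).length); // 5
```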
### Client Options
```ts
interface MetrxbotConfig {
  // Required: Your Metrx API key
  apiKey: string;

  // Optional: Gateway URL (defaults to https://gateway.metrxbot.com)
  gatewayUrl?: string;

  // Optional: Default agent ID for all requests
  defaultAgentId?: string;

  // Optional: Default LLM provider (openai, anthropic, etc.)
  defaultProvider?: string;

  // Optional: Request timeout in milliseconds (defaults to 30000)
  timeout?: number;
}
```

### Request Options
All API methods accept these additional optional parameters:
```ts
interface RequestOptions {
  // Override the default agent ID for this request
  agentId?: string;

  // Customer/end-user ID for tracking
  customerId?: string;

  // Session ID for tracking
  sessionId?: string;

  // Provider API key (if required by provider)
  providerKey?: string;

  // Force a specific provider for this request
  provider?: string;
}
```

## Error Handling
The SDK provides specific error classes for different failure scenarios:
```ts
import {
  MetrxbotError,
  AuthenticationError,
  RateLimitError,
  GatewayError,
  ValidationError,
  TimeoutError,
} from '@metrxbot/sdk';

try {
  const response = await client.chat({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Hello' }]
  });
} catch (error) {
  if (error instanceof AuthenticationError) {
    console.error('API key is invalid');
  } else if (error instanceof RateLimitError) {
    console.error(`Rate limited. Retry after ${error.retryAfter}s`);
  } else if (error instanceof GatewayError) {
    console.error('Gateway is experiencing issues');
  } else if (error instanceof ValidationError) {
    console.error('Invalid request parameters');
  } else if (error instanceof TimeoutError) {
    console.error('Request timed out');
  } else {
    console.error('Unknown error:', error);
  }
}
```

## Response Metadata
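Costs in `_meta` are reported in microcents: one cent is 1,000,000 microcents, so dollars = microcents / 100,000,000. A hypothetical helper (not part of the SDK) makes the unit concrete:

```ts
// Hypothetical convenience helper -- not exported by the SDK.
// 1 USD = 100 cents = 100,000,000 microcents.
function microcentsToUSD(microcents: number): number {
  return microcents / 100_000_000;
}

console.log(microcentsToUSD(250_000)); // 0.0025
```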
All responses include a `_meta` field with gateway metadata:
```ts
interface MetrxbotMeta {
  // Gateway response latency in milliseconds
  latencyMs: number;

  // Cost of the request in microcents
  costMicrocents: number;

  // X-Request-ID header from gateway
  requestId?: string;
}
```

## Supported Environments
- Node.js 18+
- Deno
- Modern browsers (with native fetch support)
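In browsers there is no `process.env`, so the API key has to reach the client another way. One common pattern is to fetch a short-lived key from your own backend rather than bundling a long-lived one; the helper below is a self-contained sketch of that idea (the function, endpoint, and key format are all hypothetical, not part of the SDK):

```ts
// Hypothetical pattern for browser use: obtain a short-lived key from your
// own backend instead of embedding METRX_API_KEY in the bundle.
async function fetchApiKey(): Promise<string> {
  // In a real app this would be something like:
  //   return (await fetch('/api/metrx-key')).json()
  return 'mk_short_lived_example'; // stubbed so the sketch runs standalone
}

fetchApiKey().then((apiKey) => {
  console.log(apiKey.startsWith('mk_')); // true
  // ...then construct the client as usual: new MetrxbotClient({ apiKey })
});
```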
## License
MIT
