# Proxle SDK for Node.js

Drop-in replacements for OpenAI, Anthropic, and other LLM provider clients, with built-in observability, caching, and cost tracking through the Proxle proxy.
## Installation

```bash
npm install proxle
```

Install the provider SDK you need (optional peer dependencies):

```bash
# For OpenAI / Azure OpenAI
npm install openai

# For Anthropic
npm install @anthropic-ai/sdk
```

Cohere and Gemini clients are built in and require no additional dependencies.
## Quick Start

```typescript
import { OpenAI } from 'proxle';

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
  proxyKey: process.env.PROXLE_API_KEY!,
});

const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
  metadata: { feature: 'chat_assistant', userId: 'user_123' },
});

console.log(response.choices[0].message.content);
```

## Configuration
Environment Variables
PROXLE_API_KEY=pk_live_xxx # Required: Your Proxle API key
PROXLE_URL=https://... # Optional: Defaults to https://api.proxle.devConstructor Options
All provider clients accept proxyKey and proxyUrl as constructor options. Constructor values take precedence over environment variables.
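That precedence can be sketched as a simple resolution chain (illustrative only — `resolveProxyUrl` is not part of the SDK's public API, and the URLs are placeholders):

```typescript
// Assumed resolution order for the proxy URL:
// constructor option > PROXLE_URL environment variable > default endpoint.
function resolveProxyUrl(option?: string, envValue?: string): string {
  return option ?? envValue ?? 'https://api.proxle.dev';
}

console.log(resolveProxyUrl(undefined, undefined));
// default endpoint
console.log(resolveProxyUrl(undefined, 'https://eu.proxle.example'));
// environment variable is used
console.log(resolveProxyUrl('https://proxy.internal.example', 'https://eu.proxle.example'));
// constructor option wins
```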
## Provider Examples

### OpenAI

```typescript
import { OpenAI } from 'proxle';

const client = new OpenAI({
  apiKey: 'sk-...',
  proxyKey: 'pk_live_...',
});

// Chat completions
const chat = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
  metadata: { feature: 'chat' },
});

// Embeddings
const embedding = await client.embeddings.create({
  model: 'text-embedding-ada-002',
  input: 'Hello world',
  metadata: { feature: 'search' },
});
```

### Anthropic
```typescript
import { Anthropic } from 'proxle';

const client = new Anthropic({
  apiKey: 'sk-ant-...',
  proxyKey: 'pk_live_...',
});

// Messages
const message = await client.messages.create({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Hello!' }],
  metadata: { feature: 'summarizer' },
});

// Streaming
const stream = client.messages.stream({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Hello!' }],
  metadata: { feature: 'chat' },
});
```

### Azure OpenAI
```typescript
import { AzureOpenAI } from 'proxle';

const client = new AzureOpenAI({
  apiKey: 'azure-key',
  azureEndpoint: 'https://my-resource.openai.azure.com',
  apiVersion: '2024-02-01',
  proxyKey: 'pk_live_...',
});

const response = await client.chat.completions.create({
  model: 'gpt-4o', // deployment name
  messages: [{ role: 'user', content: 'Hello!' }],
  metadata: { feature: 'azure_chat' },
});
```

### Cohere
```typescript
import { Cohere } from 'proxle';

const client = new Cohere({
  apiKey: 'cohere-key',
  proxyKey: 'pk_live_...',
});

// Chat
const chat = await client.chat({
  message: 'Hello!',
  model: 'command-r-plus',
  metadata: { feature: 'chat' },
});

// Generate
const gen = await client.generate({
  prompt: 'Write a poem about AI',
  model: 'command',
  metadata: { feature: 'generation' },
});

// Embed
const embed = await client.embed({
  texts: ['Hello world'],
  model: 'embed-english-v3.0',
  inputType: 'search_document',
  metadata: { feature: 'search' },
});
```

### Gemini
```typescript
import { Gemini } from 'proxle';

const client = new Gemini({
  apiKey: 'gemini-key',
  proxyKey: 'pk_live_...',
});

// Generate content
const response = await client.generateContent({
  model: 'gemini-1.5-pro',
  contents: [{ role: 'user', parts: [{ text: 'Hello!' }] }],
  metadata: { feature: 'chat' },
});

// Embed content
const embedding = await client.embedContent({
  model: 'text-embedding-004',
  content: { parts: [{ text: 'Hello world' }] },
  metadata: { feature: 'search' },
});

// Count tokens
const tokens = await client.countTokens({
  model: 'gemini-1.5-pro',
  contents: [{ parts: [{ text: 'Hello!' }] }],
});
```

## Metadata for Cost Attribution
Pass `metadata` on any provider method to track costs by feature, user, or any custom dimension:

```typescript
const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
  metadata: {
    feature: 'chat_assistant',
    userId: 'user_123',
    environment: 'production',
  },
});
```

Metadata appears in the Proxle dashboard for cost breakdowns and analytics.
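When many calls share the same dimensions, a small helper can merge per-call fields into shared defaults before passing the result as `metadata`. This is a usage pattern sketch, not an SDK feature; `withDefaults` and the values below are illustrative:

```typescript
// Merge shared default dimensions with per-call fields.
// Per-call fields override defaults on key collision.
type Metadata = Record<string, string>;

function withDefaults(defaults: Metadata) {
  return (extra: Metadata = {}): Metadata => ({ ...defaults, ...extra });
}

const meta = withDefaults({ environment: 'production', userId: 'user_123' });

// Pass the merged object as the `metadata` option on any provider call:
console.log(meta({ feature: 'chat_assistant' }));
// { environment: 'production', userId: 'user_123', feature: 'chat_assistant' }
```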
## Streaming

Streaming works transparently through the proxy for all wrapped providers:

```typescript
// OpenAI streaming
const stream = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
  stream: true,
  metadata: { feature: 'streaming_chat' },
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}

// Cohere SSE streaming
const sseStream = await cohereClient.chat({
  message: 'Hello!',
  stream: true,
  metadata: { feature: 'cohere_stream' },
});

for await (const event of sseStream) {
  console.log(event);
}
```

## TypeScript
Full TypeScript type definitions are included. Import types directly:

```typescript
import type {
  OpenAIConfig,
  AnthropicConfig,
  AzureConfig,
  LightweightClientConfig,
  Metadata,
  ProxyResponseMeta,
} from 'proxle';
```

## Error Handling
- **Missing proxy key**: throws `ProxleConfigError`
- **Missing provider SDK**: throws an `Error` with install instructions
- **Provider API errors**: passed through from the official SDK (OpenAI/Anthropic) or as HTTP errors (Cohere/Gemini)

```typescript
import { ProxleConfigError } from 'proxle';

try {
  const client = new OpenAI({ apiKey: 'sk-...' }); // no proxyKey
} catch (e) {
  if (e instanceof ProxleConfigError) {
    console.error('Config error:', e.message);
  }
}
```

## Requirements
- Node.js >= 18.0.0
- `openai` >= 4.0.0 (optional, for OpenAI/Azure wrappers)
- `@anthropic-ai/sdk` >= 0.18.0 (optional, for Anthropic wrapper)

## License

MIT
