@endlessriver/optimaiz
v1.0.10 · Client SDK for interacting with the Optimaiz logging & trace system.
Unified tracing, feedback, and cost analytics for LLM-based apps.
Drop-in SDK to track prompts, responses, cost, errors, and feedback across OpenAI, LangChain, Sarvam, Gemini, and more. Visit https://optimaiz.io to observe, analyze, optimize, and comply.
📦 Installation
```sh
npm install @endlessriver/optimaiz
```
🛠️ Initialization
```ts
import { OptimaizClient } from "@endlessriver/optimaiz";

const optimaiz = new OptimaizClient({
  token: process.env.OPTIMAIZ_API_KEY!,
});
```

🚀 Basic Usage: call
The call function provides a unified way to interact with the Optimaiz API, handling all the complexity of model selection, caching, and optimization behind the scenes.
```ts
const { response, status } = await optimaiz.call({
  promptTemplate: [
    {
      type: "text",
      role: "user",
      value: "Summarize this: {text}"
    }
  ],
  promptVariables: {
    text: "Your text to summarize"
  },
  tools: [weatherTool], // Optional: include tools for function calling
  modelParams: {
    temperature: 0.7
  },
  threadId: "summary_thread",
  userId: "user_123",
  agentId: "Tool:LLM"
});
```

The function returns:
- `data`: the model's response, including a `traceId`
- `error`: the relevant error, or `null` on success
Key benefits of using call:
- ✅ Automatic model selection based on your needs
- ✅ Built-in error handling and logging
- ✅ Intelligent caching for faster responses
- ✅ Automatic trace generation and management
- ✅ Seamless integration with the Optimaiz platform
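A common way to consume a `data`/`error`-style result like the one described above is an early-return guard. The sketch below models that shape locally for illustration only; the `CallResult` type and `unwrap` helper are hypothetical and not part of the SDK:

```ts
// Hypothetical result shape mirroring the data/error pair described
// above (error is null on success). Not the SDK's actual type.
type CallResult<T> = { data: T | null; error: Error | null };

// Throw on failure, otherwise hand back the payload.
function unwrap<T>(result: CallResult<T>): T {
  if (result.error) throw result.error;
  return result.data as T;
}

const ok: CallResult<string> = { data: "summary text", error: null };
const summary = unwrap(ok); // "summary text"
```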
🚀 Basic Usage: wrapLLMCall
```ts
const { response, traceId } = await optimaiz.wrapLLMCall({
  provider: "openai",
  model: "gpt-4o",
  promptTemplate: [{ role: "user", type: "text", value: "Summarize this" }],
  promptVariables: {},
  tools: [weatherTool], // Optional: include tools for function calling
  call: () => openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Summarize this" }],
    tools: optimaiz.convertToolsToProviderFormat([weatherTool], "openai"), // Convert to provider format
  }),
});
```

This handles:
- ✅ Start trace
- ✅ Append raw response
- ✅ Finalize trace with latency
- ✅ Log any errors
⚙️ Advanced Usage with IDs
```ts
const { response } = await optimaiz.wrapLLMCall({
  traceId: "trace_123",
  agentId: "agent:translator",
  userId: "user_456",
  flowId: "translate_email",
  threadId: "email_translation",
  sessionId: "session_2025_06_01_user_456",
  provider: "openai",
  model: "gpt-4o",
  promptTemplate: [{ role: "user", type: "text", value: "Translate to French: {text}" }],
  promptVariables: { text: "Hello, how are you?" },
  call: () => openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Translate to French: Hello, how are you?" }],
  }),
});
```

🧩 Manual Usage (Start, Append, Finalize)
Sometimes you need lower-level control (e.g., multiple responses, partial logs).
🔹 Start a trace manually
```ts
await optimaiz.startTrace({
  traceId: "trace_xyz",
  agentId: "imageAnalyzer",
  userId: "user_999",
  flowId: "caption_image",
  promptTemplate: [
    { role: "user", type: "image", value: "https://cdn.site/image.png" },
    { role: "user", type: "text", value: "What's in this image?" }
  ],
  promptVariables: {},
  provider: "openai",
  model: "gpt-4o"
});
```

🔹 Append a model response
```ts
await optimaiz.appendResponse({
  traceId: "trace_xyz",
  rawResponse: response,
  provider: "openai",
  model: "gpt-4o"
});
```

🔹 Finalize the trace
```ts
await optimaiz.finalizeTrace("trace_xyz");
```

❌ Log an Error to a Trace
```ts
await optimaiz.logError("trace_abc123", {
  message: "Timeout waiting for OpenAI response",
  code: "TIMEOUT_ERROR",
  details: {
    timeout: "30s",
    model: "gpt-4o",
    retryAttempt: 1,
  },
});
```

🔧 Example Usage
```ts
try {
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Summarize this" }],
  });

  await optimaiz.appendResponse({
    traceId,
    rawResponse: response,
    provider: "openai",
    model: "gpt-4o",
  });

  await optimaiz.finalizeTrace(traceId);
} catch (err: any) {
  await optimaiz.logError(traceId, {
    message: err.message,
    code: err.code || "UNCAUGHT_EXCEPTION",
    details: err.stack,
  });
  throw err;
}
```

🛠️ Tool Management
Optimaiz supports comprehensive tool/function calling with provider-agnostic interfaces and automatic format conversion.
Standard Tool Definition
```ts
const weatherTool: StandardToolDefinition = {
  name: "get_weather",
  description: "Get current weather for a location",
  parameters: {
    type: "object",
    properties: {
      location: {
        type: "string",
        description: "City name or coordinates"
      },
      unit: {
        type: "string",
        enum: ["celsius", "fahrenheit"],
        description: "Temperature unit"
      }
    },
    // Required fields are listed here, JSON Schema style
    required: ["location"]
  },
  category: "weather",
  tags: ["api", "external"]
};
```

Tool Prompt Helper
```ts
const { promptTemplate, promptVariables } = optimaiz.generatePromptFromTools({
  toolInfo: [weatherTool],
  toolInput: { name: "get_weather", arguments: { location: "Delhi" } },
});
```

Tool Format Conversion
```ts
// Convert standard tools to a provider-specific format
const openaiTools = optimaiz.convertToolsToProviderFormat([weatherTool], "openai");
const anthropicTools = optimaiz.convertToolsToProviderFormat([weatherTool], "anthropic");

// Convert provider tools back to the standard format
const standardTools = optimaiz.convertProviderToolsToStandard(openaiTools, "openai");

// Validate tool definitions
const validation = optimaiz.validateTools([weatherTool]);
if (!validation.valid) {
  console.error("Tool validation errors:", validation.errors);
}
```

Tool Execution Tracking
```ts
// Track a tool execution
await optimaiz.addToolExecution({
  traceId: "trace_123",
  toolId: "weather_api_1",
  toolName: "get_weather",
  executionTime: new Date(),
  duration: 150, // milliseconds
  success: true,
  result: { temperature: 25, unit: "celsius" }
});

// Add tool results to a trace
await optimaiz.addToolResults({
  traceId: "trace_123",
  toolResults: [{
    toolCallId: "call_1",
    name: "get_weather",
    result: { temperature: 25, unit: "celsius" }
  }]
});
```

🔄 Compose Prompts from Template
```ts
const { prompts, promptTemplate, promptVariables } = optimaiz.composePrompts(
  [
    { role: "system", content: "You are a poet." },
    { role: "user", content: "Write a haiku about {topic}" },
  ],
  { topic: "the ocean" }
);
```

📂 Integration Examples
✅ OpenAI SDK
```ts
const userPrompt = "Summarize this blog about AI agents";

await optimaiz.wrapLLMCall({
  provider: "openai",
  model: "gpt-4o",
  agentId: "summarizer",
  userId: "user_123",
  promptTemplate: [{ role: "user", type: "text", value: userPrompt }],
  promptVariables: {},
  tools: [weatherTool], // Optional: include tools
  call: () => openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: userPrompt }],
    tools: optimaiz.convertToolsToProviderFormat([weatherTool], "openai"),
  }),
});
```

✅ LangChain
```ts
const prompt = PromptTemplate.fromTemplate("Tell me a joke about {topic}");
const formatted = await prompt.format({ topic: "elephants" });

await optimaiz.wrapLLMCall({
  provider: "openai",
  model: "gpt-4o",
  agentId: "joke-bot",
  userId: "user_321",
  flowId: "joke-generation",
  promptTemplate: [{ role: "user", type: "text", value: "Tell me a joke about {topic}" }],
  promptVariables: { topic: "elephants" },
  call: () => langchainModel.invoke(formatted),
});
```

✅ Sarvam AI (Audio)
```ts
await optimaiz.wrapLLMCall({
  provider: "sarvam",
  model: "shivang",
  agentId: "transcriber",
  userId: "user_999",
  flowId: "transcribe",
  promptTemplate: [{ role: "user", type: "audio", value: "https://cdn.site/audio.wav" }],
  promptVariables: {},
  call: () => sarvam.speechToText({ url: "https://cdn.site/audio.wav" }),
});
```

✅ Gemini (Google Vertex AI)
```ts
await optimaiz.wrapLLMCall({
  provider: "google",
  model: "gemini-pro",
  promptTemplate: [{ role: "user", type: "text", value: "Write a haiku about the ocean." }],
  promptVariables: {},
  call: () => gemini.generateContent({
    contents: [{ role: "user", parts: [{ text: "Write a haiku about the ocean." }] }],
  }),
});
```

📊 Field Scope & Best Practices
| Field | Scope | Used for... | Example Value |
|--------------|---------------|--------------------------------------------------|-----------------------------|
| traceId | Per action | Track 1 LLM/tool call | trace_a9f3 |
| flowId | Per task | Multi-step task grouping | flow_generate_poem |
| agentId | Per trace | Identify AI agent handling task | calendarAgent |
| threadId | Per topic | Group related flows by theme/intent | thread_booking |
| sessionId | Per session | Temporal or login-bound grouping | session_2025_06_01_user1 |
| userId | Global | Usage, feedback, and cost attribution | user_321 |
✅ Use These for Full Insight:
- `agentId`: enables per-agent cost & prompt optimization
- `userId`: enables user behavior analytics & pricing insights
- `flowId`: helps trace multi-step user tasks
- `traceId`: use like a span for one prompt/response
- `threadId`, `sessionId`: group related interactions over time or by topic
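To keep these scopes consistent across a multi-step task, it can help to mint all IDs in one place. Below is a small standalone sketch of that idea; the `makeIdFactory` helper and the exact ID formats are hypothetical, not part of the SDK:

```ts
// Hypothetical helper (not part of the SDK): mints a consistent set
// of Optimaiz IDs for one multi-step task, so every step shares
// flowId/threadId/sessionId while each LLM call gets its own traceId.
type TraceIds = {
  traceId: string;
  flowId: string;
  threadId: string;
  sessionId: string;
  userId: string;
};

function makeIdFactory(userId: string, thread: string, flow: string) {
  const day = new Date().toISOString().slice(0, 10).replace(/-/g, "_");
  const sessionId = `session_${day}_${userId}`;
  const flowId = `flow_${flow}`;
  const threadId = `thread_${thread}`;
  let step = 0;
  // Each call returns a fresh traceId but the same flow/thread/session.
  return (): TraceIds => ({
    traceId: `trace_${flow}_${++step}`,
    flowId,
    threadId,
    sessionId,
    userId,
  });
}

const nextIds = makeIdFactory("user_321", "booking", "generate_poem");
const step1 = nextIds(); // traceId: "trace_generate_poem_1"
const step2 = nextIds(); // traceId: "trace_generate_poem_2", same flowId
```

Each `nextIds()` result can then be spread into a `wrapLLMCall` invocation so the dashboard groups all steps of the task together.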
✨ Optimaiz Features
- ✅ Works with OpenAI, Gemini, Sarvam, Mistral, LangChain, Anthropic
- 🧠 RAG and function/tool-call support
- 🛠️ Provider-agnostic tool management with automatic format conversion
- 🔍 Token usage + latency tracking
- 📉 Cost and model metadata logging
- 🧪 Error + feedback logging
- 🔄 Templated prompt builder + tool integration support
- 🧩 Full control via start/append/finalize, or simple `wrapLLMCall`
🛡️ Error Handling
Optimaiz provides comprehensive error handling with specific error types for different scenarios.
Error Types
```ts
import {
  OptimaizError,
  OptimaizAuthenticationError,
  OptimaizValidationError,
  OptimaizServerError
} from '@endlessriver/optimaiz';

try {
  const result = await optimaiz.call({
    promptTemplate: [{ role: "user", type: "text", value: "Hello" }],
    promptVariables: {}
  });
} catch (error) {
  if (OptimaizClient.isAuthenticationError(error)) {
    // Handle authentication errors (401)
    console.error('Auth error:', error.message);
  } else if (OptimaizClient.isValidationError(error)) {
    // Handle validation errors (400)
    console.error('Validation error:', error.message, error.details);
  } else if (OptimaizClient.isServerError(error)) {
    // Handle server errors (500)
    console.error('Server error:', error.message);
  } else {
    // Handle other errors
    console.error('Unknown error:', error.message);
  }
}
```

Common Error Scenarios
| Error Type | Status | Common Causes | Solution |
|----------------|--------|----------------------------------------|----------|
| Authentication | 401 | Invalid/missing token | Check API key validity; add a provider key in the Optimaiz Max section |
| Authorization | 403 | Invalid app token | Verify token permissions |
| Validation | 400 | Missing required fields, invalid tools | Check request format |
| Server | 500 | Database/LLM provider issues | Retry or contact support |
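Since only server-side (5xx) failures are worth retrying, a backoff wrapper can sit around any call. This is an illustrative sketch, not part of the SDK; it only assumes that errors carry a numeric `status` field, as described under Error Properties:

```ts
// Illustrative retry helper (not part of the SDK): retries a call
// when it fails with a 5xx status, using exponential backoff.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      lastError = err;
      // Auth (401/403) and validation (400) errors will fail again
      // no matter how often we retry, so rethrow them immediately.
      if (typeof err?.status !== "number" || err.status < 500) throw err;
      if (attempt < maxAttempts) {
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
      }
    }
  }
  throw lastError;
}
```

Usage might look like `await withRetry(() => optimaiz.call({ ... }))`, letting 4xx errors surface immediately while transient 5xx failures are retried.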
Error Properties
All Optimaiz errors include:
- `message`: human-readable error message
- `status`: HTTP status code
- `details`: additional error details (if available)
- `type`: error type for programmatic handling
🔗 Get Started
- Install: `npm install @endlessriver/optimaiz`
- Add your API key: `process.env.OPTIMAIZ_API_KEY`
- Use `wrapLLMCall()` for LLM/tool calls
- Pass `userId`, `agentId`, and `flowId` for best observability
- Analyze and improve prompt cost, user flow, and LLM performance
Need hosted dashboards, insights, or tuning support?
Visit 👉 https://optimaiz.io
