# @metrio-ai/client
The official JavaScript SDK for MetrioAI, providing a convenient way to interact with the MetrioAI API from JavaScript and TypeScript applications.
## Installation
```bash
# Using npm
npm install @metrio-ai/client

# Using yarn
yarn add @metrio-ai/client

# Using pnpm
pnpm add @metrio-ai/client
```

## Quick Start
```javascript
import { MetrioAI } from '@metrio-ai/client';

// Initialize the SDK with your API key
const client = new MetrioAI({
  apiKey: 'your-api-key',
  // Optional: customize configuration
  baseUrl: 'https://api.metrio.ai', // Default API endpoint
  maxRetries: 3,   // Number of retry attempts for failed requests
  retryDelay: 500  // Delay between retries (milliseconds)
});

// You can also set your API key via environment variables:
// METRIOAI_API_KEY=your-api-key
// METRIOAI_BASE_URL=https://api.metrio.ai (optional)
```

### CommonJS
```javascript
const { MetrioAI } = require('@metrio-ai/client');

const client = new MetrioAI({ apiKey: 'your-api-key' });
```

## Features
The SDK provides the following main functionalities:
### Get Available Providers
```javascript
const providers = await client.providers();
console.log(providers.providers); // ['openai', 'anthropic', 'gemini', 'xai', ...]
```

### Get Available Models for a Provider
```javascript
const models = await client.models('openai');
console.log(models.models); // ['gpt-3.5-turbo', 'gpt-4', ...]
```

### Chat Completion
Send a conversation to the API:
```javascript
const messages = [
  {
    role: 'user',
    content: {
      type: 'text',
      text: 'What is the capital of France?'
    }
  }
];

const response = await client.chatCompletion({
  projectId: 'test-project',
  promptId: 1,
  messages: messages,
  // Optional parameters
  variables: [{ name: 'customVar', value: 'customValue' }],
  tag: 'custom-tag'
});

console.log(response.response);      // The AI-generated response
console.log(response.input_tokens);  // Number of input tokens
console.log(response.output_tokens); // Number of output tokens
console.log(response.elapsed_time);  // Time taken to generate the response
console.log(response.logId);         // Log ID for conversation history (if applicable)
```

### Evaluate a Prompt
Test a prompt with specific model settings:
```javascript
const evalResponse = await client.evaluate({
  projectId: 'test-project',
  promptId: 1,
  modelProvider: 'openai',
  modelName: 'gpt-3.5-turbo',
  modelSettings: {
    temperature: 0.7,
    maxTokens: 1000,
    topP: 0.9,
    topK: 40,
    frequencyPenalty: 0,
    presencePenalty: 0
  },
  messages: [
    {
      role: 'user',
      content: {
        type: 'text',
        text: 'What is the capital of France?'
      }
    }
  ],
  // Optional: specify output format
  outputFormat: 'json'
});
```

### System Messages
Include system messages for context:
```javascript
const messages = [
  {
    role: 'system',
    content: {
      type: 'text',
      text: 'You are a helpful assistant.'
    }
  },
  {
    role: 'user',
    content: {
      type: 'text',
      text: 'What is the capital of France?'
    }
  }
];

const completion = await client.chatCompletion({
  projectId: 'test-project',
  promptId: 1,
  messages: messages,
  tag: 'geography-questions'
});
```

### MCP Tools Requiring Approval
When using MCP (Model Context Protocol) tools that require human approval before execution, you can specify which tools need approval and handle the approval workflow:
```javascript
// Step 1: Initial request with tools requiring approval
const initialResponse = await client.chatCompletion({
  projectId: 'test-project',
  promptId: 1,
  messages: [
    {
      role: 'user',
      content: {
        type: 'text',
        text: 'Send a message to the team on Slack about the deployment'
      }
    }
  ],
  // Specify which MCP tools require human approval
  toolsRequiringApproval: ['send-slack-message', 'post-to-channel']
});

// The response will include a logId if approval is needed
console.log(initialResponse.logId); // e.g., 12345

// Step 2: After the user approves, continue the conversation with the logId
const approvalResponse = await client.chatCompletion({
  projectId: 'test-project',
  promptId: 1,
  messages: [
    {
      role: 'user',
      content: {
        type: 'text',
        text: 'Approved. Please proceed with sending the Slack message.'
      }
    }
  ],
  // Include the logId to retrieve stored conversation history
  logId: initialResponse.logId
});

// MetrioAI will retrieve the stored history and continue execution
console.log(approvalResponse.response); // "Message sent to #general channel successfully"
```

#### How It Works
1. **Specify Tools Requiring Approval**: Use the `toolsRequiringApproval` parameter to list the MCP tool names that need human approval before execution.
2. **Receive Log ID**: When approval is needed, the response includes a `logId` that identifies the conversation state.
3. **Send Approval**: Make a follow-up call with the user's approval message and the `logId`.
4. **Continue Execution**: MetrioAI retrieves the stored conversation history and continues with the approved tool execution.
This workflow ensures that sensitive operations (like sending messages, making API calls, or modifying systems) require explicit human approval before proceeding.
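The four steps above can be collapsed into a single helper. The sketch below is hypothetical and not part of the SDK: `runWithApproval`, `ChatClient`, and the `approve` callback are illustrative names, and the real request/response shapes may differ from these minimal interfaces.

```typescript
// Hypothetical helper sketching the approval workflow described above.
// `ChatClient` is a minimal stand-in for the SDK client; all names are illustrative.
interface ChatResult { response: string; logId?: number; }

interface ChatClient {
  chatCompletion(params: {
    projectId: string;
    promptId: number;
    messages: { role: string; content: { type: string; text: string } }[];
    toolsRequiringApproval?: string[];
    logId?: number;
  }): Promise<ChatResult>;
}

async function runWithApproval(
  client: ChatClient,
  base: { projectId: string; promptId: number },
  userText: string,
  tools: string[],
  approve: (logId: number) => Promise<boolean>
): Promise<ChatResult> {
  const msg = (text: string) =>
    [{ role: 'user', content: { type: 'text', text } }];

  // Step 1: initial request listing the tools that need human approval
  const first = await client.chatCompletion({
    ...base,
    messages: msg(userText),
    toolsRequiringApproval: tools
  });

  // No logId means no approval was required; the run already finished
  if (first.logId === undefined) return first;

  // Steps 2-3: ask a human; stop here if they decline
  if (!(await approve(first.logId))) return first;

  // Step 4: resume the stored conversation via the logId
  return client.chatCompletion({
    ...base,
    messages: msg('Approved. Please proceed.'),
    logId: first.logId
  });
}
```

The helper simply threads `logId` from the first response into the follow-up call; everything else (history storage, tool execution) happens server-side.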
## Streaming Responses
The SDK supports streaming responses for chat completions:
```javascript
// Enable streaming with a callback function
const completion = await client.chatCompletion({
  projectId: 'test-project',
  promptId: 1,
  messages: messages,
  stream: true // Enable streaming
}, (chunk) => {
  // This callback is called for each chunk of the response
  console.log('Received chunk:', chunk.chunk);
});

// The completion variable will contain the complete response when streaming is done
console.log('Final response:', completion.response);
```

The SDK handles both SSE-formatted responses (lines starting with `data:`) and plain-text responses:
- SSE-formatted chunks are parsed as JSON and provide full metadata (tokens, timing)
- Plain text lines are treated as response chunks with just the text content
- The final response combines all chunks into a complete response
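As an illustration of those rules, the per-line handling might look roughly like this. This is a sketch only: `parseLine` is a hypothetical helper, not an SDK export, and the actual payload field names may differ.

```typescript
// Illustrative sketch of the chunk-handling rules described above.
interface ParsedChunk {
  chunk: string;                   // text content of this chunk
  meta?: Record<string, unknown>;  // tokens, timing, etc. (SSE chunks only)
}

function parseLine(line: string): ParsedChunk | null {
  const trimmed = line.trim();
  if (trimmed === '') return null; // ignore blank keep-alive lines
  if (trimmed.startsWith('data:')) {
    // SSE-formatted: JSON payload carrying the text plus full metadata
    const { chunk, ...meta } = JSON.parse(trimmed.slice(5).trim());
    return { chunk, meta };
  }
  // Plain text: the whole line is a response chunk with no metadata
  return { chunk: trimmed };
}
```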
## Multimodal Support
The SDK supports multimodal inputs (text, images, PDFs):
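The `data` field in the examples below is a base64 string. In Node.js, a local file can be encoded with the standard library; the helper name and file path here are illustrative, not part of the SDK.

```typescript
import { readFileSync } from 'node:fs';

// Base64-encode a local file for the `data` field of a message.
// (Illustrative helper; not an SDK export.)
function encodeFileBase64(path: string): string {
  return readFileSync(path).toString('base64');
}

// e.g. { type: 'image', mime: 'image/jpeg', data: encodeFileBase64('photo.jpg') }
```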
### Image Input
```javascript
const completion = await client.chatCompletion({
  projectId: 'test-project',
  promptId: 1,
  messages: [
    {
      role: 'user',
      content: {
        type: 'image',
        mime: 'image/jpeg',
        data: 'base64encodedimagedata...'
      }
    }
  ]
});
```

### PDF Input
```javascript
const completion = await client.chatCompletion({
  projectId: 'test-project',
  promptId: 1,
  messages: [
    {
      role: 'user',
      content: {
        type: 'pdf',
        mime: 'application/pdf',
        data: 'base64encodedpdfdata...'
      }
    }
  ]
});
```

## Error Handling
The SDK provides robust error handling with detailed error information:
```javascript
import { MetrioAI, MetrioApiError } from '@metrio-ai/client';

try {
  const response = await client.chatCompletion({
    projectId: 'test-project',
    promptId: 1,
    messages: [{
      role: 'user',
      content: {
        type: 'text',
        text: 'Hello'
      }
    }]
  });
} catch (error) {
  if (error instanceof MetrioApiError) {
    console.error('API Error:', error.message);
    console.error('Status Code:', error.statusCode);
    console.error('Response Data:', error.responseData);
    console.error('Request Data:', error.requestData);
  } else {
    console.error('Unexpected error:', error);
  }
}
```

## Automatic Retries
The SDK automatically retries failed requests that are considered retryable: network errors, timeouts, and 5xx server errors.
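That retryability test can be sketched roughly as follows. This is illustrative only; the SDK's internal check is not published and may differ.

```typescript
// Rough sketch of the retryability rule; not the SDK's actual implementation.
function isRetryable(error: { statusCode?: number }): boolean {
  // Network errors and timeouts surface without an HTTP status code
  if (error.statusCode === undefined) return true;
  // 5xx responses are transient server failures worth retrying
  return error.statusCode >= 500 && error.statusCode < 600;
}
```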
```javascript
// Configure retry behavior when creating the client
const client = new MetrioAI({
  apiKey: 'your-api-key',
  maxRetries: 5,   // Maximum retry attempts (default: 3)
  retryDelay: 1000 // Base delay between retries in ms (default: 500)
});
```

## TypeScript Support
This SDK is fully written in TypeScript and provides comprehensive type definitions:
```typescript
import {
  MetrioAI,
  MetrioApiError,
  RunParams,
  RunResponse,
  Message,
  StreamChunk,
  ProvidersResponse,
  ModelsResponse
} from '@metrio-ai/client';

// Type-safe usage example
const params: RunParams = {
  projectId: 'test-project',
  promptId: 1,
  messages: [{
    role: 'user',
    content: {
      type: 'text',
      text: 'Hello'
    }
  }],
  toolsRequiringApproval: ['send-slack-message'],
  logId: 12345
};

try {
  const response: RunResponse = await client.chatCompletion(params);
  if (response.logId) {
    console.log('Conversation ID for approval workflow:', response.logId);
  }
} catch (error) {
  if (error instanceof MetrioApiError) {
    // Handle API error with typed properties
  }
}
```

### Available Types
| Type | Description |
|------|-------------|
| `MetrioAIOptions` | Configuration options for the client |
| `RunParams` | Parameters for chat completion requests |
| `EvalParams` | Parameters for evaluation requests |
| `RunResponse` | Response from chat completion or evaluation |
| `StreamChunk` | A chunk from a streaming response |
| `Message` | A single message in a conversation |
| `MessageContent` | Content of a message (text or file) |
| `ProvidersResponse` | Response from the providers endpoint |
| `ModelsResponse` | Response from the models endpoint |
| `ModelSettings` | Model configuration settings |
| `Variable` | Variable for template substitution |
## Requirements

- Node.js 18.0.0 or higher
- Native `fetch` API support (built into Node.js 18+)
## License
This SDK is licensed under the BSD 3-Clause License.
## Support
For questions or issues, please contact [email protected].
