@synova-cloud/sdk v2.0.2
Synova Cloud SDK for Node.js
Official Node.js SDK for the Synova Cloud API.
Installation
npm install @synova-cloud/sdk
# or
yarn add @synova-cloud/sdk
# or
pnpm add @synova-cloud/sdk
Quick Start
import { SynovaCloudSdk } from '@synova-cloud/sdk';
const client = new SynovaCloudSdk('your-api-key');
// Execute a prompt
const response = await client.prompts.execute('prm_abc123', {
provider: 'openai',
model: 'gpt-4o',
variables: { name: 'World' },
});
console.log(response.content);
Configuration
import { SynovaCloudSdk } from '@synova-cloud/sdk';
const client = new SynovaCloudSdk('your-api-key', {
baseUrl: 'https://api.synova.cloud', // Custom API URL
timeout: 30000, // Request timeout in ms (default: 30000)
debug: true, // Enable debug logging
retry: {
maxRetries: 3, // Max retry attempts (default: 3)
strategy: 'exponential', // 'exponential' or 'linear' (default: 'exponential')
initialDelayMs: 1000, // Initial retry delay (default: 1000)
maxDelayMs: 30000, // Max retry delay (default: 30000)
backoffMultiplier: 2, // Multiplier for exponential backoff (default: 2)
},
});
API Reference
Prompts
Get Prompt
Retrieve a prompt template by ID.
// Get latest version (default)
const prompt = await client.prompts.get('prm_abc123');
// Get by tag
const prompt = await client.prompts.get('prm_abc123', { tag: 'production' });
// Get specific version
const prompt = await client.prompts.get('prm_abc123', { version: '1.2.0' });
Execute Prompt
Execute a prompt with an LLM provider.
const response = await client.prompts.execute('prm_abc123', {
provider: 'openai', // Required: 'openai', 'anthropic', 'google', etc.
model: 'gpt-4o', // Required: model ID
variables: { // Optional: template variables
userMessage: 'Hello!',
},
});
console.log(response.content); // LLM response text
console.log(response.usage); // { inputTokens, outputTokens, totalTokens }
Execute with Tag or Version
// Execute specific tag
const response = await client.prompts.execute('prm_abc123', {
provider: 'anthropic',
model: 'claude-sonnet-4-20250514',
tag: 'production',
variables: { topic: 'AI' },
});
// Execute specific version
const response = await client.prompts.execute('prm_abc123', {
provider: 'openai',
model: 'gpt-4o',
version: '1.2.0',
variables: { topic: 'AI' },
});
With Model Parameters
const response = await client.prompts.execute('prm_abc123', {
provider: 'openai',
model: 'gpt-4o',
variables: { topic: 'TypeScript' },
parameters: {
temperature: 0.7,
maxTokens: 1000,
topP: 0.9,
},
});
With Your Own API Key
You can pass your own LLM provider API key directly in the request:
// OpenAI, Anthropic, Google, DeepSeek
const response = await client.prompts.execute('prm_abc123', {
provider: 'openai',
model: 'gpt-4o',
apiKey: 'sk-your-openai-key',
variables: { topic: 'TypeScript' },
});
// Azure OpenAI (requires endpoint)
const response = await client.prompts.execute('prm_abc123', {
provider: 'azure_openai',
model: 'my-gpt4-deployment',
apiKey: 'your-azure-key',
azureEndpoint: 'https://my-resource.openai.azure.com',
variables: { topic: 'TypeScript' },
});With Conversation History
const response = await client.prompts.execute('prm_chat456', {
provider: 'anthropic',
model: 'claude-sonnet-4-20250514',
variables: { topic: 'TypeScript' },
messages: [
{ role: 'user', content: 'What is TypeScript?' },
{ role: 'assistant', content: 'TypeScript is a typed superset of JavaScript...' },
{ role: 'user', content: 'How do I use generics?' },
],
});
Image Generation
const response = await client.prompts.execute('prm_image123', {
provider: 'openai',
model: 'dall-e-3',
variables: { description: 'A sunset over mountains' },
});
if (response.type === 'image') {
for (const file of response.files) {
console.log('Generated image:', file.url);
console.log('MIME type:', file.mimeType);
}
}
Structured Output (Typed Responses)
Get typed and validated responses from LLMs using JSON Schema.
First, install optional peer dependencies:
npm install class-validator class-transformer
Define a response class with decorators:
import { IsString, IsArray, IsNumber, Min, Max } from 'class-validator';
import { Description, Example, ArrayItems } from '@synova-cloud/sdk';
class TopicDto {
@IsString()
@Description('Article title for SEO')
@Example('10 Ways to Improve SQL Performance')
title: string;
@IsString()
@Description('Short description')
description: string;
@IsArray()
@ArrayItems(String)
@Description('SEO keywords')
keywords: string[];
@IsNumber()
@Min(1)
@Max(10)
@Description('Priority from 1 to 10')
priority: number;
}
Execute with responseClass to get a typed response:
const topic = await client.prompts.execute('prm_abc123', {
provider: 'openai',
model: 'gpt-4o',
responseClass: TopicDto,
});
// topic is typed as TopicDto
console.log(topic.title); // string
console.log(topic.keywords); // string[]
console.log(topic.priority); // number
Works with executeByTag and executeByVersion too:
const topic = await client.prompts.executeByTag('prm_abc123', 'production', {
provider: 'openai',
model: 'gpt-4o',
responseClass: TopicDto,
});
Disable validation if needed:
const topic = await client.prompts.execute('prm_abc123', {
provider: 'openai',
model: 'gpt-4o',
responseClass: TopicDto,
validate: false, // Skip class-validator validation
});
Available Schema Decorators:
| Decorator | Description |
|-----------|-------------|
| @Description(text) | Adds description to help LLM |
| @Example(...values) | Adds example values |
| @Default(value) | Sets default value |
| @ArrayItems(Type) | Sets array item type |
| @Format(format) | Sets string format (email, uri, uuid, date-time) |
| @Nullable() | Marks as nullable |
| @SchemaMin(n) | Minimum number value |
| @SchemaMax(n) | Maximum number value |
| @SchemaMinLength(n) | Minimum string length |
| @SchemaMaxLength(n) | Maximum string length |
| @SchemaPattern(regex) | Regex pattern for string |
| @SchemaMinItems(n) | Minimum array length |
| @SchemaMaxItems(n) | Maximum array length |
| @SchemaEnum(values) | Allowed enum values |
| @AdditionalProperties(value) | Allow dynamic keys on objects (for arrays, applies to items) |
Observability
Track and group your LLM calls using traces and spans. Each execution creates a span, and multiple spans can be grouped into a trace using sessionId.
Session-Based Tracing
Use sessionId to group related calls (e.g., a conversation) into a single trace:
const sessionId = 'chat_user123_conv1';
// First message - creates new trace
const response1 = await client.prompts.execute('prm_abc123', {
provider: 'openai',
model: 'gpt-4o',
sessionId,
variables: { topic: 'TypeScript' },
});
console.log(response1.traceId); // trc_xxx
console.log(response1.spanId); // spn_xxx
// Follow-up - same sessionId = same trace, new span
const response2 = await client.prompts.execute('prm_abc123', {
provider: 'openai',
model: 'gpt-4o',
sessionId,
messages: [
{ role: 'assistant', content: response1.content },
{ role: 'user', content: 'Tell me more' },
],
});
// response2.traceId === response1.traceId (same trace)
// response2.spanId !== response1.spanId (new span)
Response Properties
Every execution returns observability IDs:
| Property | Type | Description |
|----------|------|-------------|
| spanDataId | string | Execution data ID (messages, response, usage) |
| traceId | string | Trace ID (groups related calls) |
| spanId | string | Span ID (this specific call) |
Custom Span Tracking
Track tool calls, retrieval operations, and custom logic as spans within a trace.
Manual approach:
// Create span
const span = await client.spans.create(traceId, {
type: 'tool',
toolName: 'fetch_weather',
toolArguments: { city: 'NYC' },
parentSpanId: generationSpanId,
});
// Execute
const weather = await fetchWeather('NYC');
// End span
await client.spans.end(span.id, {
status: 'completed',
toolResult: weather,
});
Wrapper approach:
// wrapTool() - for tools
const weather = await client.spans.wrapTool(
{ traceId, toolName: 'fetch_weather', parentSpanId },
{ city: 'NYC' },
async (args) => fetchWeather(args.city),
);
// wrap() - for custom/retriever/embedding
const docs = await client.spans.wrap(
{ traceId, type: 'retriever', name: 'vector_search' },
{ query: 'how to...', topK: 5 },
async (input) => vectorDb.search(input.query),
);
Wrappers automatically handle errors and set status: 'error' with the error message.
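Conceptually, each wrapper runs the function inside a try/catch, records the outcome on the span, and always ends it. A self-contained sketch of that pattern (illustrative only; `wrapSpan` and `SpanRecord` are not SDK exports):

```typescript
// Illustrative sketch of the wrapper pattern; not the SDK's internal code.
type SpanStatus = 'completed' | 'error';

interface SpanRecord {
  status?: SpanStatus;
  result?: unknown;
  errorMessage?: string;
}

// Run fn, record success or failure on the span record, and rethrow errors
// so the caller still sees them.
async function wrapSpan<T>(record: SpanRecord, fn: () => Promise<T>): Promise<T> {
  try {
    const result = await fn();
    record.status = 'completed';
    record.result = result;
    return result;
  } catch (err) {
    record.status = 'error';
    record.errorMessage = err instanceof Error ? err.message : String(err);
    throw err; // the span records the failure; the error still propagates
  }
}
```

The SDK's wrappers follow the same shape, except the span is created and ended server-side via the API.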
Span Types
| Type | Use Case |
|------|----------|
| generation | LLM calls (auto-created by execute()) |
| tool | Tool/function calls |
| retriever | RAG document retrieval |
| embedding | Embedding generation |
| custom | Any custom operation |
Viewing Traces
View your traces in the Synova Cloud Dashboard under the Observability section. Each trace shows:
- All spans (LLM calls) in the session
- Input/output for each span
- Token usage and latency
- Error details if any
Models
List All Models
const { providers } = await client.models.list();
for (const provider of providers) {
console.log(`${provider.displayName}:`);
for (const model of provider.models) {
console.log(` - ${model.displayName} (${model.id})`);
}
}
Filter Models
// Filter by type
const textModels = await client.models.list({ type: 'text' });
const imageModels = await client.models.list({ type: 'image' });
// Filter by capability
const visionModels = await client.models.list({ capability: 'vision' });
// Filter by provider
const openaiModels = await client.models.list({ provider: 'openai' });
Get Models by Provider
const models = await client.models.getByProvider('anthropic');
for (const model of models) {
console.log(`${model.displayName}: context=${model.limits.contextWindow}`);
}
Get Specific Model
const model = await client.models.get('openai', 'gpt-4o');
console.log('Capabilities:', model.capabilities);
console.log('Context window:', model.limits.contextWindow);
console.log('Pricing:', model.pricing);
Files
Upload Files
Upload files for use in prompt execution (e.g., images for vision models).
const result = await client.files.upload(
[file1, file2], // File[] or Blob[]
{ projectId: 'prj_abc123' }
);
for (const file of result.data) {
console.log(`Uploaded: ${file.originalName}`);
console.log(` ID: ${file.id}`);
console.log(` URL: ${file.url}`);
console.log(` Size: ${file.size} bytes`);
}
Use Uploaded Files in Messages
// Upload an image
const uploadResult = await client.files.upload([imageFile], { projectId: 'prj_abc123' });
const fileId = uploadResult.data[0].id;
// Use in prompt execution with vision model
const response = await client.prompts.execute('prm_vision123', {
provider: 'openai',
model: 'gpt-4o',
messages: [
{
role: 'user',
content: 'What is in this image?',
files: [{ fileId }],
},
],
});
console.log(response.content);
Error Handling
The SDK provides typed errors for different failure scenarios:
import {
SynovaCloudSdk,
ExecutionSynovaError,
ValidationSynovaError,
AuthSynovaError,
NotFoundSynovaError,
RateLimitSynovaError,
ServerSynovaError,
TimeoutSynovaError,
NetworkSynovaError,
ApiSynovaError,
} from '@synova-cloud/sdk';
try {
const response = await client.prompts.execute('prm_abc123', {
provider: 'openai',
model: 'gpt-4o',
responseClass: TopicDto,
});
} catch (error) {
// LLM execution error (rate limit, invalid key, context too long, etc.)
if (error instanceof ExecutionSynovaError) {
console.error(`LLM error [${error.code}]: ${error.message}`);
console.error(`Provider: ${error.provider}`);
console.error(`Retryable: ${error.retryable}`);
if (error.retryAfterMs) {
console.error(`Retry after: ${error.retryAfterMs}ms`);
}
}
// Validation error (response doesn't match class-validator constraints)
if (error instanceof ValidationSynovaError) {
console.error('Validation failed:');
for (const v of error.violations) {
console.error(` ${v.property}: ${Object.values(v.constraints).join(', ')}`);
}
}
// API errors
if (error instanceof AuthSynovaError) {
console.error('Invalid API key');
} else if (error instanceof NotFoundSynovaError) {
console.error('Resource not found');
} else if (error instanceof RateLimitSynovaError) {
console.error(`Rate limited. Retry after: ${error.retryAfterMs}ms`);
} else if (error instanceof ServerSynovaError) {
console.error(`Server error: ${error.message}`);
} else if (error instanceof TimeoutSynovaError) {
console.error(`Request timed out after ${error.timeoutMs}ms`);
} else if (error instanceof NetworkSynovaError) {
console.error(`Network error: ${error.message}`);
} else if (error instanceof ApiSynovaError) {
console.error(`API error [${error.code}]: ${error.message}`);
console.error(`Request ID: ${error.requestId}`);
}
}
Error Properties
All API errors (AuthSynovaError, NotFoundSynovaError, RateLimitSynovaError, ServerSynovaError, and ApiSynovaError) include:
| Property | Type | Description |
|----------|------|-------------|
| code | string | Error code (e.g., "common.validation") |
| httpCode | number | HTTP status code |
| message | string | Human-readable error message |
| requestId | string | Request ID for debugging |
| timestamp | string | When the error occurred |
| path | string? | API endpoint path |
| method | string? | HTTP method |
| details | unknown? | Additional details (e.g., validation errors) |
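For logging, those shared fields can be condensed into one line with a small helper (a sketch over a plain object mirroring the table above; `formatApiError` is not an SDK export):

```typescript
// Plain shape mirroring the shared API-error fields listed above.
interface ApiErrorFields {
  code: string;
  httpCode: number;
  message: string;
  requestId: string;
  timestamp: string;
  path?: string;
  method?: string;
}

// Condense the shared fields into a single log line.
function formatApiError(e: ApiErrorFields): string {
  const location = e.method && e.path ? ` ${e.method} ${e.path}` : '';
  return `[${e.code}] HTTP ${e.httpCode}${location}: ${e.message} (request ${e.requestId})`;
}
```

Including the request ID in every log line makes it easy to hand a failing request to support for debugging.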
Validation Errors
When validation fails, details contains an array of field errors:
try {
await client.llmProviderKeys.create({ provider: 'invalid' as any, apiKey: '' });
} catch (error) {
if (error instanceof ApiSynovaError && error.code === 'common.validation') {
console.log('Validation errors:', error.details);
// [{ field: "provider", message: "must be a valid enum value", code: "FIELD_PROVIDER_INVALID" }]
}
}
Retry Behavior
The SDK automatically retries requests on:
- Rate limit errors (429) - uses the Retry-After header if available
- Server errors (5xx)
- Network errors
- Timeout errors
Non-retryable errors (fail immediately):
- Authentication errors (401)
- Not found errors (404)
- Client errors (4xx)
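For responses that carry an HTTP status, the classification above boils down to a simple status-code check, roughly like this (an illustrative sketch, not the SDK's internal logic; network and timeout failures never reach an HTTP status and are retried separately):

```typescript
// Rough sketch of retry classification by HTTP status code.
// Not the SDK's internal code; shown only to make the rules concrete.
function isRetryableStatus(status: number): boolean {
  if (status === 429) return true; // rate limited: retry, honoring Retry-After when present
  if (status >= 500) return true;  // server errors are treated as transient
  return false;                    // 401, 404, and other 4xx fail immediately
}
```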
Retry Strategies
Exponential Backoff (default):
delay = initialDelayMs * backoffMultiplier^(attempt-1)
Example with defaults: 1000ms, 2000ms, 4000ms...
Linear:
delay = initialDelayMs * attempt
Example with defaults: 1000ms, 2000ms, 3000ms...
Both strategies add ±10% jitter to prevent a thundering herd of simultaneous retries.
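The two schedules and the jitter can be sketched as follows (a standalone illustration using the documented defaults, not the SDK's internal implementation):

```typescript
type RetryStrategy = 'exponential' | 'linear';

interface RetryConfig {
  strategy: RetryStrategy;
  initialDelayMs: number;
  maxDelayMs: number;
  backoffMultiplier: number;
}

// Base delay for a 1-indexed attempt, capped at maxDelayMs, before jitter.
function baseDelayMs(attempt: number, cfg: RetryConfig): number {
  const raw =
    cfg.strategy === 'exponential'
      ? cfg.initialDelayMs * Math.pow(cfg.backoffMultiplier, attempt - 1)
      : cfg.initialDelayMs * attempt;
  return Math.min(raw, cfg.maxDelayMs);
}

// Apply +/-10% jitter so concurrent clients do not retry in lockstep.
function withJitter(delayMs: number): number {
  return Math.round(delayMs * (0.9 + Math.random() * 0.2));
}

const defaults: RetryConfig = {
  strategy: 'exponential',
  initialDelayMs: 1000,
  maxDelayMs: 30000,
  backoffMultiplier: 2,
};

console.log([1, 2, 3].map((n) => baseDelayMs(n, defaults))); // [ 1000, 2000, 4000 ]
```

Note that `maxDelayMs` caps the exponential schedule, so with the defaults the delay never exceeds 30 seconds no matter how many attempts remain.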
Custom Logger
You can provide a custom logger that implements the ISynovaLogger interface:
import { SynovaCloudSdk, ISynovaLogger } from '@synova-cloud/sdk';
const customLogger: ISynovaLogger = {
debug: (message, ...args) => myLogger.debug(message, ...args),
info: (message, ...args) => myLogger.info(message, ...args),
warn: (message, ...args) => myLogger.warn(message, ...args),
error: (messageOrError, ...args) => {
if (messageOrError instanceof Error) {
myLogger.error(messageOrError.message, messageOrError, ...args);
} else {
myLogger.error(messageOrError, ...args);
}
},
};
const client = new SynovaCloudSdk('your-api-key', {
debug: true,
logger: customLogger,
});
TypeScript
The SDK is written in TypeScript and provides full type definitions:
import type {
// Config
ISynovaConfig,
ISynovaRetryConfig,
TSynovaRetryStrategy,
ISynovaLogger,
// Prompts
ISynovaPrompt,
ISynovaPromptVariable,
ISynovaGetPromptOptions,
// Execution
ISynovaExecuteOptions, // includes sessionId
ISynovaExecuteTypedOptions,
ISynovaExecuteResponse, // includes spanDataId, traceId, spanId
ISynovaExecutionUsage,
ISynovaExecutionError,
// Messages
ISynovaMessage,
TSynovaMessageRole,
TSynovaResponseType,
// Spans
ISynovaSpan,
ISynovaSpanData,
ISynovaCreateSpanOptions,
ISynovaEndSpanOptions,
ISynovaWrapOptions,
ISynovaWrapToolOptions,
TSynovaSpanType,
TSynovaSpanStatus,
TSynovaSpanLevel,
// Files
ISynovaFileAttachment,
ISynovaFileThumbnails,
ISynovaUploadedFile,
ISynovaUploadResponse,
ISynovaUploadOptions,
// Models
ISynovaModel,
ISynovaModelCapabilities,
ISynovaModelLimits,
ISynovaModelPricing,
ISynovaProvider,
ISynovaModelsResponse,
ISynovaListModelsOptions,
TSynovaModelType,
// Schema
IJsonSchema,
TJsonSchemaType,
TJsonSchemaFormat,
TClassConstructor,
// Errors
IValidationViolation,
} from '@synova-cloud/sdk';
CommonJS
The SDK supports both ESM and CommonJS:
const { SynovaCloudSdk } = require('@synova-cloud/sdk');
const client = new SynovaCloudSdk('your-api-key');
Requirements
- Node.js 18+ (uses the native fetch API)
Optional Peer Dependencies
For structured output with typed responses:
npm install class-validator class-transformer
These are optional: the SDK works without them, but the responseClass feature requires them.
License
MIT
