@lunos/client v1.5.0
# Lunos AI Client Library
Official TypeScript client library for the Lunos AI API, a comprehensive AI proxy service supporting multiple AI providers including OpenAI, Anthropic, Google, and more.
## Features
- 🤖 Multi-Provider Support: Access OpenAI, Anthropic, Google, and other AI models through a single API
- 💬 Chat Completions: Full support for chat conversations with streaming
- 🎨 Image Generation: Create images with DALL-E, Midjourney, and other models
- 🔊 Audio Generation: Text-to-speech with multiple voices and formats
- 📊 Embeddings: Generate and work with text embeddings
- 🔍 Model Discovery: Browse and search available models
- ⚡ TypeScript First: Full TypeScript support with comprehensive type definitions
- 🛡️ Error Handling: Robust error handling with specific error types
- 🔄 Retry Logic: Automatic retry with exponential backoff
- 📁 File Operations: Built-in utilities for saving generated content
- 🎯 SOLID Principles: Well-architected, extensible, and maintainable code
## Installation

```bash
npm install @lunos/client
# or
yarn add @lunos/client
# or
pnpm add @lunos/client
```

## Quick Start

```typescript
import { LunosClient } from "@lunos/client";

// Initialize the client
const client = new LunosClient({
  apiKey: "your-api-key-here",
  baseUrl: "https://api.lunos.tech",
  appId: "my-application", // Optional: for analytics and usage tracking
});

// Chat completion
const response = await client.chat.createCompletion({
  model: "openai/gpt-4.1",
  messages: [{ role: "user", content: "Hello, how are you?" }],
});

console.log(response.choices[0].message.content);
```

## Application ID (App ID) for Analytics
The client supports an optional appId parameter for analytics and usage tracking. This helps track which applications are using the API and provides insights into usage patterns.
```typescript
// Set a default appId for all requests
const client = new LunosClient({
  apiKey: "your-api-key",
  appId: "my-application", // Default appId for all requests
});

// Override the appId for specific requests
const response = await client.chat.createCompletion({
  model: "openai/gpt-4.1",
  messages: [{ role: "user", content: "Hello!" }],
  appId: "chat-feature", // Specific appId for this request
});

// appId is supported across all AI generation endpoints
const imageResponse = await client.image.generate({
  prompt: "A beautiful sunset",
  model: "openai/dall-e-3",
  appId: "image-generator",
});

const audioResponse = await client.audio.textToSpeech({
  text: "Hello world",
  voice: "alloy",
  appId: "audio-service",
});

const embedding = await client.embedding.embedText(
  "Sample text",
  "openai/text-embedding-3-small",
  "embedding-tool"
);
```

The appId is included as the `X-App-ID` header in all requests and stored in the `query_history` table for per-application analytics and usage tracking.
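A request-level appId overrides the client-wide default. The resolution can be sketched as follows; `resolveAppId` is a hypothetical helper for illustration, not part of the client API:

```typescript
// Illustrative sketch of appId resolution: a request-level appId
// overrides the client-wide default; if neither is set, no
// X-App-ID header is sent. (resolveAppId is hypothetical, not a
// client export.)
function resolveAppId(
  clientDefault?: string,
  requestAppId?: string
): Record<string, string> {
  const appId = requestAppId ?? clientDefault;
  return appId ? { "X-App-ID": appId } : {};
}
```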
## API Reference
### Client Configuration
```typescript
import { LunosClient, LunosConfig } from "@lunos/client";
const config: Partial<LunosConfig> = {
  apiKey: "your-api-key",
  baseUrl: "https://api.lunos.tech",
  timeout: 30000,
  retries: 3,
  retryDelay: 1000,
  fallback_model: "openai/gpt-4o", // Optional fallback model
  appId: "my-application", // Optional: for analytics and usage tracking
  debug: false,
  headers: {
    "Custom-Header": "value",
  },
};

const client = new LunosClient(config);
```

### Chat Completions
```typescript
// Simple chat completion
const response = await client.chat.createCompletion({
  model: "openai/gpt-4.1",
  messages: [{ role: "user", content: "Write a short story about a robot." }],
  temperature: 0.7,
  max_tokens: 500,
  appId: "chat-feature", // Optional: override the default appId for this request
});
```

```typescript
// Streaming chat completion with a callback
let fullResponse = "";
const stream = await client.chat.createCompletionWithStream(
  {
    model: "openai/gpt-4.1",
    messages: [{ role: "user", content: "Write a poem about AI." }],
  },
  (chunk) => {
    fullResponse += chunk;
    process.stdout.write(chunk);
  }
);
```

```typescript
// Streaming chat completion without a callback (returns a stream)
const stream = await client.chat.createCompletionStream({
  model: "openai/gpt-4.1",
  messages: [{ role: "user", content: "Explain quantum computing." }],
});

// Process the stream manually
const reader = stream.getReader();
const decoder = new TextDecoder();
let buffer = "";

try {
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() || "";

    for (const line of lines) {
      if (line.startsWith("data: ")) {
        const data = line.slice(6);
        if (data === "[DONE]") break;
        try {
          const parsed = JSON.parse(data);
          if (parsed.choices?.[0]?.delta?.content) {
            process.stdout.write(parsed.choices[0].delta.content);
          }
        } catch (e) {
          // Ignore malformed SSE chunks
        }
      }
    }
  }
} finally {
  reader.releaseLock();
}
```

```typescript
// Convenience methods
const userResponse = await client.chat.chatWithUser(
  "What is the capital of France?",
  "openai/gpt-4o"
);

const systemResponse = await client.chat.chatWithSystem(
  "You are a helpful assistant.",
  "What is 2+2?",
  "openai/gpt-4.1"
);
```

### Fallback Model Support
The client supports automatic fallback to alternative models when the primary model fails after retries. This is useful for ensuring high availability and graceful degradation.
```typescript
// Per-request fallback model
const response = await client.chat.createCompletion({
  model: "openai/gpt-4-turbo", // Primary model
  fallback_model: "openai/gpt-4o", // Fallback model
  messages: [{ role: "user", content: "Explain quantum computing." }],
  temperature: 0.5,
  max_tokens: 300,
});

// Client-level fallback model configuration
const clientWithFallback = client.withFallbackModel("openai/gpt-4o");
const fallbackResponse = await clientWithFallback.chat.createCompletion({
  model: "openai/gpt-4-turbo", // Falls back to gpt-4o if needed
  messages: [{ role: "user", content: "What is machine learning?" }],
});

// Streaming with a fallback model
const stream = await client.chat.createCompletionStream({
  model: "openai/gpt-4-turbo",
  fallback_model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Write a story." }],
  stream: true,
});
```

How it works:
- When a model-related error occurs after all retry attempts, the client automatically tries the fallback model
- Fallback is triggered for errors containing keywords like "model not found", "model unavailable", etc.
- Debug mode will log when fallback models are used
- Both regular and streaming requests support fallback models
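As an illustration of the pattern described above, a user-level helper might look like this; `withModelFallback` is a hypothetical sketch, not the client's internal implementation (the client handles this automatically via `fallback_model`):

```typescript
// Illustrative sketch of the fallback pattern: run the request with
// the primary model, and if it fails with a model-related error,
// retry once with the fallback model. Not the client's internals.
async function withModelFallback<T>(
  run: (model: string) => Promise<T>,
  primary: string,
  fallback: string
): Promise<T> {
  try {
    return await run(primary);
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err);
    // Only fall back on model-related errors (e.g. "model not found")
    if (/model (not found|unavailable)/i.test(message)) {
      return run(fallback);
    }
    throw err;
  }
}
```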
### Image Generation

```typescript
// Generate an image
const image = await client.image.generateImage({
  model: "openai/dall-e-3",
  prompt: "A beautiful sunset over mountains",
  size: "1024x1024",
  quality: "hd",
});

// Convenience methods
const simpleImage = await client.image.generate(
  "A futuristic city skyline",
  "openai/dall-e-3"
);

const sizedImage = await client.image.generateWithSize(
  "A cat playing with yarn",
  512,
  512,
  "openai/dall-e-2"
);

const hdImage = await client.image.generateHD(
  "A detailed portrait of a dragon",
  "openai/dall-e-3"
);

// Generate multiple images
const images = await client.image.generateMultiple(
  "A flower in different seasons",
  4,
  "openai/dall-e-3"
);
```

### Audio Generation
```typescript
// Text-to-speech
const audio = await client.audio.generateAudio({
  model: "openai/tts-1",
  input: "Hello, this is a test of text to speech.",
  voice: "alloy",
  response_format: "mp3",
});

// Save to a file
await client.audio.generateAudioToFile(
  {
    input: "Hello world",
    voice: "alloy",
    model: "openai/tts-1",
  },
  "./output/hello.mp3"
);

// Convenience methods
const speech = await client.audio.textToSpeech(
  "Hello, how are you today?",
  "alloy",
  "openai/tts-1"
);

const fastSpeech = await client.audio.textToSpeechWithSpeed(
  "This is a test of speed control.",
  1.5,
  "nova",
  "openai/tts-1"
);
```

### Audio Transcription
```typescript
// Transcribe audio
const transcription = await client.audio.transcribeAudio({
  file: audioBuffer, // Buffer or base64 string
  model: "openai/whisper-1",
  response_format: "verbose_json",
});

// Transcribe from a file
const fileTranscription = await client.audio.transcribeFromFile(
  "./audio/recording.mp3",
  "openai/whisper-1"
);
```

### Embeddings
```typescript
// Create embeddings
const embedding = await client.embedding.createEmbedding({
  model: "openai/text-embedding-3-small",
  input: "This is a sample text for embedding.",
});

// Convenience methods
const textEmbedding = await client.embedding.embedText(
  "This is a sample text.",
  "openai/text-embedding-3-small"
);

const embeddings = await client.embedding.embedMultiple(
  ["First text", "Second text", "Third text"],
  "openai/text-embedding-3-small"
);
```
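Embedding vectors are compared with cosine similarity or Euclidean distance. As a reference point, here is a standalone sketch of the standard formulas (not the library's code, though the `EmbeddingService` helpers below presumably compute the same quantities):

```typescript
// Standard definitions of the two metrics, shown standalone for
// reference. Cosine similarity is in [-1, 1]; identical directions
// give 1, orthogonal vectors give 0.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function euclideanDistance(a: number[], b: number[]): number {
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    const d = a[i] - b[i];
    sum += d * d;
  }
  return Math.sqrt(sum);
}
```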
```typescript
import { EmbeddingService } from "@lunos/client";

// Calculate similarity between two embedding vectors
const similarity = EmbeddingService.cosineSimilarity(embedding1, embedding2);
const distance = EmbeddingService.euclideanDistance(embedding1, embedding2);
```

### Model Information
```typescript
// Get all models
const models = await client.models.getModels();

// Get models by capability
const chatModels = await client.models.getChatModels();
const imageModels = await client.models.getImageModels();
const audioModels = await client.models.getAudioModels();
const embeddingModels = await client.models.getEmbeddingModels();

// Get a specific model
const gpt4 = await client.models.getModelById("openai/gpt-4.1");

// Check model capabilities
const supportsChat = await client.models.supportsCapability(
  "openai/gpt-4.1",
  "chat"
);

// Search models
const searchResults = await client.models.searchModels("gpt");
```

### Error Handling
```typescript
import { LunosError, APIError, ValidationError } from "@lunos/client";

try {
  const response = await client.chat.createCompletion({
    model: "openai/gpt-4.1",
    messages: [{ role: "user", content: "Hello" }],
  });
} catch (error) {
  if (error instanceof LunosError) {
    console.error("Lunos API Error:", error.message);
    console.error("Status:", error.status);
    console.error("Code:", error.code);
  } else {
    console.error("Unexpected error:", error);
  }
}
```

### Streaming
```typescript
import { StreamProcessor } from "@lunos/client";

// Process a streaming response
const stream = await client.chat.createCompletionStream({
  model: "openai/gpt-4.1",
  messages: [{ role: "user", content: "Write a story." }],
});

const processor = new StreamProcessor();
await processor.processStream(stream, (chunk) => {
  console.log("Received chunk:", chunk);
});
```

### File Operations
```typescript
import { FileUtils } from "@lunos/client";

// Save a buffer to a file
await FileUtils.saveBufferToFile(audioBuffer, "./output/audio.mp3");

// Read a file as a buffer
const buffer = await FileUtils.readFileAsBuffer("./input/image.png");

// Convert a file to base64
const base64 = await FileUtils.fileToBase64("./input/audio.wav");

// Convert base64 to a buffer
const decoded = FileUtils.base64ToBuffer(base64String);
```

### Client Configuration Methods
```typescript
// Create clients with modified configuration
const debugClient = client.withDebug();
const timeoutClient = client.withTimeout(60000);
const customHeadersClient = client.withHeaders({
  "X-Custom-Header": "value",
});
const fallbackClient = client.withFallbackModel("openai/gpt-4o");

// Update configuration in place
client.updateConfig({
  timeout: 60000,
  debug: true,
});

// Health check
const isHealthy = await client.healthCheck();

// Get usage information
const usage = await client.getUsage();
```

## Advanced Usage
### Custom Fetch Implementation

```typescript
import fetch from "node-fetch";

const client = new LunosClient({
  apiKey: "your-api-key",
  // node-fetch's types differ slightly from the built-in fetch
  fetch: fetch as unknown as typeof globalThis.fetch,
});
```

### Retry Configuration
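Retries use exponential backoff between attempts (see Features). Assuming the common doubling schedule `delay = retryDelay * 2 ** attempt` (an illustrative assumption; the client's exact schedule is not documented here), the waits look like:

```typescript
// Hypothetical illustration of an exponential backoff schedule.
// The client's actual schedule may add jitter or caps.
function backoffDelays(retries: number, retryDelay: number): number[] {
  return Array.from({ length: retries }, (_, attempt) => retryDelay * 2 ** attempt);
}

// backoffDelays(5, 2000) → [2000, 4000, 8000, 16000, 32000]
```

With the configuration below (`retries: 5`, `retryDelay: 2000`), the final attempt would wait 32 seconds under this schedule.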
```typescript
const client = new LunosClient({
  apiKey: "your-api-key",
  retries: 5,
  retryDelay: 2000,
});
```

### Request Cancellation
```typescript
const controller = new AbortController();

const response = await client.chat.createCompletion(
  {
    model: "openai/gpt-4.1",
    messages: [{ role: "user", content: "Hello" }],
  },
  {
    signal: controller.signal,
  }
);

// Cancel the request
controller.abort();
```

### Validation
```typescript
import { ValidationUtils } from "@lunos/client";

// Validate a chat request
ValidationUtils.validateChatCompletionRequest({
  messages: [{ role: "user", content: "Hello" }],
});

// Validate an image generation request
ValidationUtils.validateImageGenerationRequest({
  prompt: "A beautiful landscape",
});
```

## Error Types
- `LunosError`: base error class
- `APIError`: API-specific errors
- `ValidationError`: input validation errors
- `AuthenticationError`: authentication failures
- `RateLimitError`: rate limit exceeded
- `NetworkError`: network-related errors
## TypeScript Support
The library is built with TypeScript and provides comprehensive type definitions:
```typescript
import type {
  ChatMessage,
  ChatCompletionRequest,
  ChatCompletionResponse,
  ImageGenerationRequest,
  AudioGenerationRequest,
  EmbeddingRequest,
  Model,
} from "@lunos/client";
```

## Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
## License
MIT License - see LICENSE for details.
## Support
- Documentation: https://lunos.tech/docs
- Issues: GitHub Issues
- Email: [email protected]
## Changelog
### 1.3.0

**Breaking Changes**

- Model API Structure Overhaul:
  - Updated speech generation response logic.
  - Added a PCM-to-WAV conversion utility.
  - The `Model` type and all related API responses have changed.
  - All code, types, and examples now use the new model structure.
  - All references to model grouping, categories, and latest models have been removed.
- Audio Transcription:
  - All audio transcription functions and types have been removed (not available in Lunos).
**Improvements**

- Documentation:
  - Updated all code examples and documentation to use the new model structure and fields.
  - Added a new section describing the model object structure with a sample JSON.
  - Updated all example files and their README to match the new API.
- Validation:
  - Voice validation for Google TTS now supports a comprehensive list of voices.
- General:
  - Improved error handling and parameter validation throughout the client.
### 1.2.0

- Added fallback model support for automatic model switching on errors
- Added `fallback_model` parameter to chat completion requests
- Added `withFallbackModel()` method to client configuration
- Enhanced error handling with model-specific fallback logic
- Added validation for fallback model configuration
- Updated documentation with fallback model examples
### 1.1.0
- Enhanced error handling and retry logic
- Improved TypeScript type definitions
- Added comprehensive validation utilities
- Updated dependencies and build configuration
### 1.0.0
- Initial release
- Full TypeScript support
- Chat completions with streaming
- Image generation
- Audio generation and transcription
- Embeddings
- Model discovery
- Comprehensive error handling
- File operations utilities
