mars-llm
v1.0.4
Node.js client for Azure OpenAI using credentials from Azure Key Vault with streaming support
Mars LLM - Azure OpenAI Client with Key Vault Integration
A Node.js client that retrieves Azure OpenAI credentials from Azure Key Vault and provides both streaming and non-streaming chat completion capabilities.
Features
- 🔐 Secure credential retrieval from Azure Key Vault
- 🚀 Support for both streaming and non-streaming chat completions
- 🖼️ NEW: Image analysis support (JPG, PNG, GIF, WebP)
- 🎭 NEW: Multimodal chat (text + images)
- 🔧 Multiple Azure authentication methods
- 📝 TypeScript-ready with proper error handling
- 🎯 Simple and intuitive API
Prerequisites
- Node.js 16+
- Azure subscription with:
  - Azure Key Vault instance
  - Azure OpenAI service
- Proper authentication configured
Installation
Option 1: Install from npm (Recommended)

```bash
npm install mars-llm
```

Option 2: Clone and build locally

- Clone or download this project
- Install dependencies:

  ```bash
  npm install
  ```

- Set the required environment variable:

  ```bash
  export AZURE_KEY_VAULT_URL=https://your-keyvault.vault.azure.net/
  ```

- Configure your Key Vault secrets (see the Configuration section below)
- For local development, run `az login` to authenticate
Configuration
Azure Key Vault Secrets
Your Azure Key Vault must contain the following secrets:
| Secret Name | Description | Example Value |
|-------------|-------------|---------------|
| MARS-API-KEY | Your Azure OpenAI API key | abc123... |
| MARS-DEPLOYMENT | Your Azure OpenAI deployment name | gpt-4o |
| MARS-ENDPOINT | Your Azure OpenAI endpoint URL | https://your-resource.openai.azure.com/ |
| MARS-API-VERSION | API version | 2024-02-15-preview |
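Assuming you have the Azure CLI installed and write access to the vault, the secrets above can be created with `az keyvault secret set`; the vault name and values below are placeholders:

```bash
az keyvault secret set --vault-name your-vault --name MARS-API-KEY --value "<your-openai-api-key>"
az keyvault secret set --vault-name your-vault --name MARS-DEPLOYMENT --value "gpt-4o"
az keyvault secret set --vault-name your-vault --name MARS-ENDPOINT --value "https://your-resource.openai.azure.com/"
az keyvault secret set --vault-name your-vault --name MARS-API-VERSION --value "2024-02-15-preview"
```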
Environment Variables (Required)
You must set the Key Vault URL via environment variable:
```bash
export AZURE_KEY_VAULT_URL=https://your-keyvault.vault.azure.net/
```

Or on Windows:

```cmd
set AZURE_KEY_VAULT_URL=https://your-keyvault.vault.azure.net/
```

This environment variable is required for the client to work.
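The resolution order (an explicit constructor argument first, then the environment variable) can be sketched as follows; `resolveVaultUrl` is a hypothetical helper for illustration, not part of the mars-llm API:

```js
// Hypothetical helper illustrating how the Key Vault URL is resolved:
// an explicit argument wins, otherwise the AZURE_KEY_VAULT_URL
// environment variable is used, otherwise construction fails.
function resolveVaultUrl(explicitUrl) {
  const url = explicitUrl || process.env.AZURE_KEY_VAULT_URL;
  if (!url) {
    throw new Error(
      'Key Vault URL missing: pass it to the constructor or set AZURE_KEY_VAULT_URL'
    );
  }
  return url;
}

process.env.AZURE_KEY_VAULT_URL = 'https://your-keyvault.vault.azure.net/';
console.log(resolveVaultUrl()); // falls back to the environment variable
console.log(resolveVaultUrl('https://other.vault.azure.net/')); // explicit URL wins
```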
Authentication
This client uses `DefaultAzureCredential`, which automatically tries the following authentication methods in order:
1. Azure CLI (Recommended for local development)
Run `az login` before using the client - no additional configuration needed.
2. Managed Identity (For Azure-hosted applications)
Automatically works when running on Azure services - no configuration needed.
3. Environment Variables (For service principal if needed)
Set AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, AZURE_TENANT_ID if required.
4. Azure PowerShell
Uses Azure PowerShell context if available.
5. Interactive Browser
Falls back to interactive authentication if other methods fail.
Usage
Basic Usage
```js
// If installed from npm
const AzureOpenAIClient = require('mars-llm');

// If using locally
// const AzureOpenAIClient = require('./azure-openai-client');

async function example() {
  const client = new AzureOpenAIClient();

  // Simple chat
  const response = await client.chat('Hello, how are you?');
  console.log(response.content);
}
```

Advanced Usage
```js
const AzureOpenAIClient = require('./azure-openai-client');

async function advancedExample() {
  const client = new AzureOpenAIClient();

  // Chat with custom options
  const response = await client.chat('Explain quantum computing', {
    maxTokens: 200,
    temperature: 0.7,
    model: 'gpt-4o'
  });

  console.log('Response:', response.content);
  console.log('Usage:', response.usage);
}
```

Streaming Chat
```js
const AzureOpenAIClient = require('./azure-openai-client');

async function streamingExample() {
  const client = new AzureOpenAIClient();

  // Streaming chat with a callback
  await client.chatStream(
    'Tell me a story about AI',
    (chunk, fullContent) => {
      process.stdout.write(chunk); // Print each chunk as it arrives
    },
    {
      maxTokens: 500,
      temperature: 0.8
    }
  );
}
```

Using Prompt and Text (Python-style)
```js
const AzureOpenAIClient = require('./azure-openai-client');

async function promptTextExample() {
  const client = new AzureOpenAIClient();

  const prompt = "Summarize the following text";
  const text = "Your long text content here...";

  // Non-streaming
  const response = await client.chatCompletion(prompt, text, {
    maxTokens: 150,
    temperature: 0.5
  });

  // Streaming
  await client.chatCompletionStream(
    prompt,
    text,
    (chunk) => process.stdout.write(chunk),
    { maxTokens: 150 }
  );
}
```

Image Analysis Examples (NEW!)
```js
const AzureOpenAIClient = require('./azure-openai-client');

async function imageExample() {
  const client = new AzureOpenAIClient();

  // Analyze an image from a file
  const response1 = await client.chatWithImages(
    'What do you see in this image?',
    './my-image.jpg',
    {
      maxTokens: 300,
      imageDetail: 'high' // 'low', 'high', or 'auto'
    }
  );

  // Analyze multiple images
  const response2 = await client.chatWithImages(
    'Compare these images',
    ['./image1.jpg', './image2.jpg']
  );

  // Use a base64-encoded image
  const response3 = await client.chat('Describe this image', {
    images: [
      {
        type: 'base64',
        data: 'iVBORw0KGgoAAAANSUhEUgAAAAEAAAAB...',
        mimeType: 'image/png'
      }
    ]
  });

  // Use an image URL
  const response4 = await client.chat('What is shown here?', {
    images: [
      {
        type: 'url',
        data: 'https://example.com/image.jpg'
      }
    ]
  });

  // Streaming with images
  await client.chatStream(
    'Create a story based on this image',
    (chunk) => process.stdout.write(chunk),
    {
      images: [{ type: 'file', data: './story-image.jpg' }]
    }
  );
}
```

API Reference
Constructor
```js
const client = new AzureOpenAIClient(keyVaultUrl);
```

- `keyVaultUrl` (optional): Key Vault URL. If not provided, the `AZURE_KEY_VAULT_URL` environment variable is used. One of these must be set.
Methods
chat(message, options)
Simple chat completion without streaming.
Parameters:
- `message` (string): The message to send
- `options` (object): Optional configuration
  - `model` (string): Model name (default: `'gpt-4o'`)
  - `maxTokens` (number): Maximum tokens (default: `50`)
  - `temperature` (number): Temperature (default: `0.7`)
  - `images` (array): Array of image objects (NEW!)

Returns: Promise resolving to a response object containing `content`, `usage`, `model`, and `finishReason`
chatStream(message, onChunk, options)
Streaming chat completion.
Parameters:
- `message` (string): The message to send
- `onChunk` (function): Callback invoked for each chunk: `(chunk, fullContent) => {}`
- `options` (object): Optional configuration (same as `chat`, including `images`)

Returns: Promise resolving to a response object containing `content` and `model`
chatCompletion(prompt, text, options)
Chat completion with separate prompt and text (Python-style).
chatCompletionStream(prompt, text, onChunk, options)
Streaming chat completion with separate prompt and text.
chatWithImages(message, imagePaths, options) (NEW!)
Convenience method for image analysis.
Parameters:
- `message` (string): The message about the image(s)
- `imagePaths` (string|array): Single image path or array of image paths
- `options` (object): Optional configuration
  - `imageDetail` (string): `'low'`, `'high'`, or `'auto'` (default: `'auto'`)
Image Object Format
```js
{
  type: 'file' | 'base64' | 'url',
  data: 'path/to/image.jpg' | 'base64string' | 'https://example.com/image.jpg',
  mimeType: 'image/png', // required for base64
  detail: 'low' | 'high' | 'auto' // optional, default: 'auto'
}
```

Supported Image Formats
- Formats: PNG, JPEG, GIF, WebP
- Input Types: Local files, Base64 strings, URLs
- Detail Levels:
  - `low`: Faster processing, lower cost
  - `high`: More detailed analysis, higher cost
  - `auto`: The model decides based on the image
Running the Examples
Basic example:

```bash
npm start
```

Comprehensive demo:

```bash
node example.js
```

Image analysis examples:

```bash
# Image analysis demonstration
node samples/multimodal-example.js

# Simple image test
node samples/simple-multimodal-test.js
```

Troubleshooting
Common Issues
- Authentication Error: Run `az login` for local development
- Key Vault Access Denied: Verify your Azure account has Key Vault access permissions
- Secret Not Found: Check that secrets exist in Key Vault with correct names
- Invalid Endpoint: Ensure your Azure OpenAI endpoint URL is correct
- Image Analysis Not Working: Ensure your deployment uses GPT-4o (required for image support)
- File Not Found: Check that image file paths are correct and accessible
Debug Tips
- Test Azure CLI authentication: `az login && az account show`
- Test Key Vault access: `az keyvault secret show --vault-name your-vault --name MARS-API-KEY`
- Check Key Vault URL: Verify the correct Key Vault URL is used
- Test image analysis: Run `node samples/simple-multimodal-test.js`
Image Analysis Considerations
- File Size Limits: Large images consume more tokens
- Cost: Image requests cost more than text-only requests
- Model Support: Requires GPT-4o deployment (not GPT-3.5 or GPT-4)
- Performance: Image detail level affects processing speed and cost
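As a rough pre-flight check on file size, base64 encoding inflates an image's raw bytes by a factor of about 4/3 (every 3 raw bytes become 4 encoded characters, rounded up). A small sketch; the 20 MB cap is an assumed application-level limit for illustration, not a documented mars-llm constant:

```js
// Assumed application-level cap on the encoded payload, for illustration only.
const MAX_IMAGE_BYTES = 20 * 1024 * 1024;

// Size of the base64-encoded payload for a given raw byte count.
function base64PayloadBytes(rawBytes) {
  return Math.ceil(rawBytes / 3) * 4;
}

// Whether a raw image of this size fits under the assumed cap once encoded.
function fitsLimit(rawBytes) {
  return base64PayloadBytes(rawBytes) <= MAX_IMAGE_BYTES;
}

console.log(base64PayloadBytes(3 * 1024 * 1024)); // 3 MiB raw → 4194304 encoded bytes
```

Checking size before encoding avoids paying for an oversized request that the service would reject anyway.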
Security Best Practices
- Use managed identities for production deployments
- Use Azure CLI for local development
- Rotate API keys regularly
- Limit Key Vault access permissions
- Monitor Key Vault access logs
License
MIT License
