llm-utils
Shared LLM utility library for text-only LLM calls across multiple providers.
Features
- Unified API for OpenAI, Anthropic (Claude), and Google (Gemini)
- Auto-detection of provider from environment variables
- Simple, consistent interface across providers
- Error handling and retries
- JSON extraction utilities
Installation
npm install @arclabs561/llm-utils
Usage
Basic Usage
import { callLLM, detectProvider } from '@arclabs561/llm-utils';
// Auto-detect provider from environment
const provider = detectProvider();
if (provider) {
  const response = await callLLM('Your prompt here', provider.provider, provider.apiKey);
  console.log(response);
}
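Error Handling
The features list notes built-in error handling and retries. Requests that still fail after the library's retries presumably throw, so it is worth catching errors at the call site. A minimal sketch (this README does not document the retry count or error shape, so the comment below is an assumption):
import { callLLM, detectProvider } from '@arclabs561/llm-utils';
const provider = detectProvider();
if (provider) {
  try {
    const response = await callLLM('Your prompt here', provider.provider, provider.apiKey);
    console.log(response);
  } catch (err) {
    // Reached once the library's internal retries are exhausted (assumption).
    console.error('LLM call failed:', err);
  }
}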
Tiered Model Selection
The library supports two model tiers: simple (fast and cheap) and advanced (higher quality, but slower and more expensive).
import { callLLM, detectProvider, MODEL_TIERS } from '@arclabs561/llm-utils';
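// MODEL_TIERS is exported alongside callLLM; judging by the name, it maps tier
// names to per-provider model IDs (assumption -- it is unused in this example).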
const provider = detectProvider();
// Use simple models (default) - fast and cheap
const fastResponse = await callLLM('Prompt', provider.provider, provider.apiKey, {
  tier: 'simple' // gpt-4o-mini, claude-haiku, gemini-flash
});
// Use advanced models - higher quality, slower, more expensive
const qualityResponse = await callLLM('Prompt', provider.provider, provider.apiKey, {
  tier: 'advanced' // gpt-4o, claude-sonnet, gemini-pro
});
Using LLMClient Class
import { LLMClient } from '@arclabs561/llm-utils';
// Simple tier (default)
const client = new LLMClient({ tier: 'simple' });
const response = await client.complete('Your prompt');
// Advanced tier
const advancedClient = new LLMClient({ tier: 'advanced' });
const qualityResponse = await advancedClient.complete('Your prompt');
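JSON Extraction
The features list also mentions JSON extraction utilities, but their export names are not shown in this README. The sketch below uses a hand-rolled stand-in that pulls the first JSON object out of a model response; check the package's type definitions for the built-in helper:
import { callLLM, detectProvider } from '@arclabs561/llm-utils';
// Hand-rolled stand-in (not a package export): grab the first {...} span
// and parse it, tolerating prose around the JSON.
function extractFirstJson(text) {
  const match = text.match(/\{[\s\S]*\}/);
  if (!match) return null;
  try {
    return JSON.parse(match[0]);
  } catch {
    return null;
  }
}
const provider = detectProvider();
if (provider) {
  const raw = await callLLM(
    'Reply with only a JSON object like {"sentiment": "positive"}. Text: "Great library!"',
    provider.provider,
    provider.apiKey
  );
  console.log(extractFirstJson(raw)); // e.g. { sentiment: 'positive' }
}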
Environment Variables
The library auto-detects providers from these environment variables:
- GEMINI_API_KEY - Google Gemini
- OPENAI_API_KEY - OpenAI
- ANTHROPIC_API_KEY - Anthropic Claude
- VLM_PROVIDER - Explicit provider selection (gemini, openai, claude)
- API_KEY - Fallback (defaults to gemini)
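For example, with only OPENAI_API_KEY set, detection resolves to OpenAI, and the returned object carries the two fields used throughout the examples above (shape inferred from this README's usage; verify against the type definitions):
import { detectProvider } from '@arclabs561/llm-utils';
// Assumes OPENAI_API_KEY is the only key set in the environment.
const provider = detectProvider();
if (!provider) {
  throw new Error('No LLM API key found in the environment');
}
console.log(provider.provider); // 'openai' (inferred; matches the key set above)
console.log(Boolean(provider.apiKey)); // true -- the detected key is attached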
License
MIT
