@verydia/llms
LLM abstractions, provider adapters, and client factory for Verydia agents.
Features
- Unified LLM Client Interface: Provider-agnostic LlmClient interface from @verydia/llm-core
- Provider Adapters: Concrete implementations for major LLM providers
- Factory Pattern: Simple factory functions for creating clients
- Streaming Support: Built-in streaming for real-time responses
- Embeddings & Tool Calling: Support for embeddings and function calling where available
- Type-Safe: Full TypeScript support with comprehensive types
Installation
pnpm add @verydia/llms
Quick Start
Using the Factory
import { createLLMClient } from "@verydia/llms";
const client = createLLMClient({
provider: "openai",
apiKey: process.env.OPENAI_API_KEY!,
defaultModel: "gpt-4o-mini",
});
const response = await client.call({
model: "gpt-4o-mini",
messages: [
{ role: "system", content: "You are a helpful assistant." },
{ role: "user", content: "Hello!" },
],
});
console.log(response.content);
Provider-Specific Clients
import { createOpenAIClient, createAnthropicClient } from "@verydia/llms";
// OpenAI
const openai = createOpenAIClient({
apiKey: process.env.OPENAI_API_KEY!,
defaultModel: "gpt-4o-mini",
});
// Anthropic
const anthropic = createAnthropicClient({
apiKey: process.env.ANTHROPIC_API_KEY!,
defaultModel: "claude-3-5-sonnet-20241022",
});
Streaming
const client = createOpenAIClient({
apiKey: process.env.OPENAI_API_KEY!,
});
for await (const chunk of client.stream({
model: "gpt-4o-mini",
messages: [{ role: "user", content: "Tell me a story" }],
})) {
process.stdout.write(chunk.delta);
}
Supported Providers
| Provider | Status | Streaming | Embeddings | Tool Calling |
|----------|--------|-----------|------------|--------------|
| OpenAI | ✅ Implemented | ✅ | ✅ | ✅ |
| Anthropic | ✅ Implemented | ✅ | - | ✅ |
| Google Gemini | ✅ Implemented | ✅ | ✅ | ✅ |
| Mistral | ✅ Implemented | ✅ | ✅ | ✅ |
| AWS Bedrock | ⚠️ Requires AWS SDK | - | - | - |
| Ollama | ✅ Implemented | ✅ | ✅ | - |
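Because stream is optional on the LlmClient interface (see API Reference below) and, per the table, not every provider implements it, portable code can feature-detect before streaming. A minimal sketch:
import { createLLMClient } from "@verydia/llms";
const client = createLLMClient({
  provider: "openai",
  apiKey: process.env.OPENAI_API_KEY!,
});
const request = {
  model: "gpt-4o-mini",
  messages: [{ role: "user" as const, content: "Hello!" }],
};
if (client.stream) {
  // Streaming is available: print tokens as they arrive.
  for await (const chunk of client.stream(request)) {
    process.stdout.write(chunk.delta);
  }
} else {
  // Fall back to a single blocking call.
  const response = await client.call(request);
  console.log(response.content);
}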
Configuration
OpenAI
const client = createOpenAIClient({
apiKey: process.env.OPENAI_API_KEY!,
baseUrl: "https://api.openai.com/v1", // Optional
defaultModel: "gpt-4o-mini",
timeoutMs: 30000, // Optional
modelAliases: {
// Optional
"gpt-4o-mini": "gpt-4o-mini-2024-07-18",
},
});
Anthropic
const client = createAnthropicClient({
apiKey: process.env.ANTHROPIC_API_KEY!,
baseUrl: "https://api.anthropic.com", // Optional
defaultModel: "claude-3-5-sonnet-20241022",
timeoutMs: 30000, // Optional
});
Gemini
const client = createGeminiClient({
apiKey: process.env.GEMINI_API_KEY!,
defaultModel: "gemini-1.5-flash",
timeoutMs: 30000, // Optional
});
Mistral
const client = createMistralClient({
apiKey: process.env.MISTRAL_API_KEY!,
defaultModel: "mistral-large-latest",
timeoutMs: 30000, // Optional
});
Ollama
const client = createOllamaClient({
baseUrl: process.env.OLLAMA_BASE_URL ?? "http://localhost:11434",
defaultModel: "llama2",
timeoutMs: 30000, // Optional
});
API Reference
createLLMClient(config)
Creates an LLM client from configuration.
Parameters:
- config.provider: Provider type ("openai" | "anthropic" | ...)
- config.apiKey: API key for the provider
- config.defaultModel: Optional default model
- config.baseUrl: Optional base URL override
- config.timeoutMs: Optional request timeout
- config.modelAliases: Optional model alias mapping
Returns: LlmClient
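A sketch combining all of the documented options in one call (the alias mapping and timeout are illustrative values, not defaults):
const client = createLLMClient({
  provider: "anthropic",
  apiKey: process.env.ANTHROPIC_API_KEY!,
  baseUrl: "https://api.anthropic.com", // optional override
  defaultModel: "claude-3-5-sonnet-20241022",
  timeoutMs: 30000, // optional request timeout
  modelAliases: {
    // optional: route a friendly name to a pinned model version
    sonnet: "claude-3-5-sonnet-20241022",
  },
});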
LlmClient Interface
interface LlmClient {
readonly id: string;
call(request: LlmRequest): Promise<LlmResponse>;
stream?(request: LlmRequest): AsyncIterable<LlmStreamChunk>;
}
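Because LlmClient is a plain interface, a test double or custom backend only needs call (stream is optional). A minimal sketch, assuming the request/response types below are exported from @verydia/llms alongside the factory:
import type { LlmClient, LlmRequest, LlmResponse } from "@verydia/llms";
// Stub client that echoes the last message back; handy in unit tests.
const echoClient: LlmClient = {
  id: "echo",
  async call(request: LlmRequest): Promise<LlmResponse> {
    const last = request.messages[request.messages.length - 1];
    return { content: last?.content ?? "", finishReason: "stop" };
  },
};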
Request/Response Types
interface LlmRequest {
model: string;
messages: Array<{
role: "system" | "user" | "assistant" | "tool";
content: string;
}>;
temperature?: number;
maxTokens?: number;
topP?: number;
metadata?: Record<string, unknown>;
}
interface LlmResponse {
content: string;
usage?: {
promptTokens: number;
completionTokens: number;
totalTokens: number;
};
finishReason?: string;
providerMetadata?: Record<string, unknown>;
}
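A sketch tying the optional request fields to the response shape (client is any LlmClient from the factory; usage is only present when the provider reports token counts):
const response = await client.call({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Summarize this in one sentence." }],
  temperature: 0.2, // lower = more deterministic
  maxTokens: 100,
  topP: 0.9,
});
if (response.usage) {
  const { promptTokens, completionTokens, totalTokens } = response.usage;
  console.log(`tokens: ${promptTokens} + ${completionTokens} = ${totalTokens}`);
}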
Advanced Usage
Using with LLM Registry
The package also provides an LlmRegistry for managing multiple providers:
import { LlmRegistry, createDefaultLlmRegistry } from "@verydia/llms";
const registry = createDefaultLlmRegistry();
// Use model refs like "openai:gpt-4o-mini"
const model = registry.createModelFromRef("openai:gpt-4o-mini", {
apiKey: process.env.OPENAI_API_KEY!,
});
const result = await model.invoke([
{ role: "user", content: "Hello!" },
]);
Router-Based Model Selection
import { createLlmRouter } from "@verydia/llms";
const router = createLlmRouter({
profiles: {
fast: { provider: "openai", model: "gpt-4o-mini" },
smart: { provider: "openai", model: "gpt-4" },
},
});
const messages = [{ role: "user" as const, content: "Hello!" }];
const result = await router.invoke("fast", messages);
Architecture
This package integrates three layers:
- @verydia/llm-core: Core types and interfaces
- @verydia/providers: Provider adapters and registry
- @verydia/llms: High-level factory and routing
See PLAN_STAGE_17.md for implementation details.
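Because every adapter satisfies the same LlmClient contract, cross-cutting behavior can wrap any provider uniformly. A minimal sketch, assuming the core types are importable from @verydia/llm-core as described above:
import type { LlmClient, LlmRequest, LlmResponse } from "@verydia/llm-core";
// Wraps any client (OpenAI, Anthropic, Ollama, ...) with timing logs.
function withTiming(inner: LlmClient): LlmClient {
  return {
    id: `${inner.id}+timing`,
    async call(request: LlmRequest): Promise<LlmResponse> {
      const start = Date.now();
      const response = await inner.call(request);
      console.log(`[${inner.id}] ${request.model} took ${Date.now() - start}ms`);
      return response;
    },
  };
}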
Contributing
To add a new provider:
- Implement ProviderAdapter in @verydia/providers/src/adapters/ (a hypothetical sketch follows below)
- Add a config helper (e.g., createXyzConfig)
- Update the factory in @verydia/llms/src/factory.ts
- Add tests
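The ProviderAdapter contract itself is defined in @verydia/providers and is not reproduced in this README, so the skeleton below is hypothetical: the xyz name, config shape, and file path are illustrative, and the real adapter interface may differ. What a new adapter must ultimately produce is something satisfying the documented LlmClient interface:
// @verydia/providers/src/adapters/xyz.ts (hypothetical provider "xyz")
import type { LlmClient, LlmRequest, LlmResponse } from "@verydia/llm-core";
export interface XyzConfig {
  apiKey: string;
  baseUrl?: string;
  defaultModel?: string;
  timeoutMs?: number;
}
export function createXyzClient(config: XyzConfig): LlmClient {
  return {
    id: "xyz",
    async call(request: LlmRequest): Promise<LlmResponse> {
      // Translate the provider-agnostic request into the xyz wire format,
      // call the HTTP API using config, and map the reply onto LlmResponse.
      throw new Error("not implemented");
    },
  };
}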
License
MIT
