# @verydia/providers (v0.1.0)
Universal provider registry and adapter layer for LLM providers.
## Overview
This package provides:

- A unified `ProviderAdapter` interface for LLM providers
- A `ProviderRegistry` for managing and resolving providers
- Production-ready adapters for major LLM providers
- Type-safe provider configuration and model resolution
- Mock-based testing with no real API calls
## Implemented Adapters
### ✅ OpenAI (`openaiAdapter`)
Full implementation with:
- Chat completions
- SSE streaming
- Embeddings (text-embedding-3-small, text-embedding-3-large, etc.)
- Tool/function calling
- Configurable base URL, timeout, organization
### ✅ Anthropic (`anthropicAdapter`)
Full implementation with:
- Chat completions (Claude 3.5 Sonnet, etc.)
- SSE streaming
- Tool calling (Claude's tool use format)
- System message handling (Anthropic requirement)
- Configurable API version
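Anthropic's Messages API takes the system prompt as a top-level `system` field rather than as a `system` role inside `messages`, so the adapter has to hoist it out before sending. A minimal sketch of that transformation (the helper name and shapes are illustrative, not the adapter's actual internals):

```ts
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Hypothetical helper: pull system messages out of the array and join them
// into Anthropic's top-level `system` string, leaving only user/assistant turns.
function toAnthropicShape(messages: ChatMessage[]): {
  system: string | undefined;
  messages: ChatMessage[];
} {
  const system = messages
    .filter((m) => m.role === "system")
    .map((m) => m.content)
    .join("\n");
  return {
    system: system.length > 0 ? system : undefined,
    messages: messages.filter((m) => m.role !== "system"),
  };
}

const shaped = toAnthropicShape([
  { role: "system", content: "You are terse." },
  { role: "user", content: "Hi" },
]);
console.log(JSON.stringify(shaped));
```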
### ✅ Google Gemini (`geminiAdapter`)
Full implementation with:
- Chat completions (Gemini 1.5 Flash, Pro, etc.)
- SSE streaming
- Embeddings
- Function declarations for tool calling
- Uses Google's generativelanguage API
### ✅ Mistral (`mistralAdapter`)
Full implementation with:
- Chat completions (Mistral Large, Medium, Small)
- SSE streaming
- Embeddings
- Tool/function calling
- OpenAI-compatible API format
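Because Mistral's chat endpoint mirrors OpenAI's request shape, an adapter can reuse one payload builder and only swap the base URL. A rough sketch (the endpoint URLs follow the providers' public docs; the builder itself is illustrative):

```ts
// Both chat APIs accept the same JSON body; only the endpoint differs.
const CHAT_ENDPOINTS = {
  openai: "https://api.openai.com/v1/chat/completions",
  mistral: "https://api.mistral.ai/v1/chat/completions",
} as const;

type Msg = { role: string; content: string };

// Omit optional fields entirely instead of sending `temperature: undefined`.
function buildChatPayload(model: string, messages: Msg[], temperature?: number) {
  return { model, messages, ...(temperature !== undefined ? { temperature } : {}) };
}

const body = buildChatPayload(
  "mistral-small-latest",
  [{ role: "user", content: "Hi" }],
  0.7
);
console.log(CHAT_ENDPOINTS.mistral, JSON.stringify(body));
```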
### ✅ Ollama (`ollamaAdapter`)
Full implementation with:
- Chat completions (local models: Llama 2, Mistral, etc.)
- Streaming
- Embeddings
- Configurable base URL (default: http://localhost:11434)
- No API key required (local deployment)
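Ollama speaks plain HTTP on localhost with no authentication, so an adapter only needs the base URL. A sketch of the request it would send to Ollama's `/api/chat` endpoint (the endpoint and body shape follow Ollama's public API; the builder is illustrative):

```ts
type OllamaMsg = { role: string; content: string };

// Build the URL and JSON body for a non-streaming chat call against a
// local Ollama server. No API key: the local API has no auth layer.
function buildOllamaChatRequest(
  model: string,
  messages: OllamaMsg[],
  baseUrl = "http://localhost:11434"
) {
  return {
    url: `${baseUrl}/api/chat`,
    body: { model, messages, stream: false },
  };
}

const req = buildOllamaChatRequest("llama2", [{ role: "user", content: "Hi" }]);
console.log(req.url); // http://localhost:11434/api/chat
```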
### ⚠️ AWS Bedrock (`bedrockAdapter`)

Placeholder that still requires AWS SDK integration:

- Throws a clear error directing users to the Anthropic adapter or the AWS SDK
- Ready for a future implementation with SigV4 request signing
## Key Concepts

### `ProviderAdapter` Interface

Each provider implements the `ProviderAdapter` interface:

```ts
interface ProviderAdapter<C extends ProviderBaseConfig> {
  readonly kind: ProviderKind;
  resolveModelId(config: C, logicalModel: string): string;
  invokeChat(config: C, req: ProviderChatRequest): Promise<ProviderChatResult>;
  streamChat?(config: C, req: ProviderChatRequest): AsyncIterable<ProviderStreamChunk>;
  embedding?(config: C, req: ProviderEmbeddingRequest): Promise<ProviderEmbeddingResult>;
  toolCall?(config: C, req: ProviderToolCallRequest): Promise<ProviderToolCallResult>;
}
```

### `ProviderRegistry`
The registry manages provider instances and resolves model references:
```ts
const registry = new ProviderRegistry();
registry.register(openaiConfig);

const result = await registry.invokeChat("openai:gpt-4", {
  messages: [{ role: "user", content: "Hello!" }]
});
```

## Usage Examples
### OpenAI
```ts
import { openaiAdapter, createOpenAIConfig } from "@verydia/providers";

const config = createOpenAIConfig({
  apiKey: process.env.OPENAI_API_KEY!,
  modelAliases: {
    "gpt-4o-mini": "gpt-4o-mini-2024-07-18"
  }
});

const result = await openaiAdapter.invokeChat(config, {
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
  temperature: 0.7
});

console.log(result.text);
```

### Anthropic
```ts
import { anthropicAdapter, createAnthropicConfig } from "@verydia/providers";

const config = createAnthropicConfig({
  apiKey: process.env.ANTHROPIC_API_KEY!,
});

const result = await anthropicAdapter.invokeChat(config, {
  model: "claude-3-5-sonnet-20241022",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Explain TypeScript." }
  ],
});

console.log(result.text);
```

### Streaming
```ts
for await (const chunk of openaiAdapter.streamChat!(config, {
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Tell me a story" }],
})) {
  process.stdout.write(chunk.delta);
  if (chunk.done) {
    console.log("\n[Stream complete]");
  }
}
```

## Testing Strategy
All adapters use mock-based testing with no real API calls:
- ✅ No network calls in tests: All HTTP requests are mocked
- ✅ Fixture-based responses: Tests use pre-defined response fixtures
- ✅ Type safety verification: Tests ensure correct type transformations
- ✅ CI-safe: No API keys required, instant test execution
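One common way to keep adapter tests offline is to stub the global `fetch` with a fixture-backed fake. A sketch of that pattern, assuming a runtime with the WHATWG `Response` global (Node 18+); the helper and fixture shape are illustrative, not the package's actual test utilities:

```ts
// A fetch-shaped function that always resolves to a canned JSON fixture,
// so adapter code under test never touches the network.
function createMockFetch(fixture: unknown): typeof fetch {
  return (async () =>
    new Response(JSON.stringify(fixture), {
      status: 200,
      headers: { "content-type": "application/json" },
    })) as typeof fetch;
}

// In a test you would assign it globally, e.g.:
//   globalThis.fetch = createMockFetch(openaiChatFixture);
async function demo() {
  const mock = createMockFetch({ choices: [{ message: { content: "Hello!" } }] });
  const res = await mock("https://example.test/v1/chat/completions");
  const json: any = await res.json();
  return json.choices[0].message.content;
}

demo().then((text) => console.log(text)); // prints "Hello!"
```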
Example test structure:

```ts
describe("OpenAI Adapter", () => {
  it("should have correct type signature", () => {
    expect(openaiAdapter.kind).toBe("openai");
    expect(typeof openaiAdapter.invokeChat).toBe("function");
  });

  it("should resolve model aliases", () => {
    const config = createOpenAIConfig({
      apiKey: "test",
      modelAliases: { "gpt-4o-mini": "gpt-4o-mini-2024-07-18" }
    });
    const resolved = openaiAdapter.resolveModelId(config, "gpt-4o-mini");
    expect(resolved).toBe("gpt-4o-mini-2024-07-18");
  });
});
```

## Architecture
```
@verydia/providers
├── src/
│   ├── types.ts              # Core interfaces
│   ├── index.ts              # ProviderRegistry
│   ├── builtinProviders.ts   # Default configs
│   └── adapters/
│       ├── index.ts            # Adapter exports
│       ├── openaiAdapter.ts    # OpenAI implementation
│       ├── anthropicAdapter.ts
│       ├── geminiAdapter.ts
│       ├── mistralAdapter.ts
│       ├── bedrockAdapter.ts
│       └── ollamaAdapter.ts
└── test/
    ├── providers.test.ts       # Registry tests
    └── openaiAdapter.test.ts   # Adapter tests
```

## For Contributors
### Adding a New Provider

1. Create `src/adapters/yourProviderAdapter.ts`
2. Implement the `ProviderAdapter` interface
3. Export it from `src/adapters/index.ts`
4. Add a config creator function (e.g., `createYourProviderConfig`)
5. Add tests in `test/yourProviderAdapter.test.ts`
6. Update this README
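As a rough illustration of these steps, here is a toy adapter against a pared-down version of the interface. The types are simplified and the "echo" behavior is invented for the sketch; a real adapter would call the provider's HTTP API:

```ts
// Simplified stand-ins for the package's types, just enough for the sketch.
interface BaseConfig { modelAliases?: Record<string, string> }
interface ChatRequest { model: string; messages: { role: string; content: string }[] }
interface ChatResult { text: string }

interface MiniAdapter<C extends BaseConfig> {
  readonly kind: string;
  resolveModelId(config: C, logicalModel: string): string;
  invokeChat(config: C, req: ChatRequest): Promise<ChatResult>;
}

// Toy "echo" provider: resolves aliases from config, echoes the last message.
const echoAdapter: MiniAdapter<BaseConfig> = {
  kind: "echo",
  resolveModelId: (config, logicalModel) =>
    config.modelAliases?.[logicalModel] ?? logicalModel,
  invokeChat: async (_config, req) => ({
    text: req.messages[req.messages.length - 1].content,
  }),
};

const cfg: BaseConfig = { modelAliases: { fast: "echo-v2" } };
console.log(echoAdapter.resolveModelId(cfg, "fast")); // prints "echo-v2"
```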
### Adapter Implementation Checklist

- [ ] Implement `invokeChat()` (required)
- [ ] Implement `streamChat()` if the provider supports streaming
- [ ] Implement `embedding()` if the provider supports embeddings
- [ ] Implement `toolCall()` if the provider supports function calling
- [ ] Add proper error handling and timeout support
- [ ] Add cost estimation in usage metadata
- [ ] Write mock-based tests (no real API calls)
- [ ] Document provider-specific quirks
## License
Apache-2.0
