@agentskit/adapters
v0.9.1
Provider adapters for AgentsKit.
Connect to any LLM provider — and swap between them — without touching your app code.
Tags: ai · agents · llm · agentskit · openai · anthropic · claude · gemini · chatgpt · ollama · embeddings · providers
Why adapters
- Vendor independence — switch from OpenAI to Anthropic to a local Ollama model by changing one line; your hooks, runtime, and tools stay untouched
- 20+ providers included — Anthropic, OpenAI, Gemini, Ollama, DeepSeek, Grok, Kimi, Mistral, Cohere, Together, Groq, Fireworks, OpenRouter, Hugging Face, LM Studio, vLLM, llama.cpp, LangChain, Vercel AI SDK, and any raw ReadableStream
- Embedder functions built in — the same adapter pattern covers text embeddings, so you can reuse provider config for both chat and RAG
- One-line local AI — `ollama({ model: 'llama3.1' })` for fully offline agents with no API key required
Install
```sh
npm install @agentskit/adapters
```

Quick example

```ts
import { anthropic, openai, ollama } from '@agentskit/adapters'
import { createRuntime } from '@agentskit/runtime'

// Switch provider by swapping one line
const adapter = anthropic({ apiKey: process.env.ANTHROPIC_API_KEY, model: 'claude-sonnet-4-6' })
// const adapter = openai({ apiKey: process.env.OPENAI_API_KEY, model: 'gpt-4o' })
// const adapter = ollama({ model: 'llama3.1' })

const runtime = createRuntime({ adapter })
const result = await runtime.run('Summarize the latest AI news')
console.log(result.content)
```

Embeddings (for RAG)
Use the same package for vector embeddings — wire `openaiEmbedder`, `geminiEmbedder`, or `ollamaEmbedder` into `@agentskit/rag`:

```ts
import { openaiEmbedder } from '@agentskit/adapters'
import { createRAG } from '@agentskit/rag'
import { fileVectorMemory } from '@agentskit/memory'

const rag = createRAG({
  embed: openaiEmbedder({ apiKey: process.env.OPENAI_API_KEY! }),
  store: fileVectorMemory({ path: './vectors' }),
})
```

Features
- Providers: Anthropic, OpenAI, Gemini, Ollama, DeepSeek, Grok, Kimi, Mistral, Cohere, Together, Groq, Fireworks, OpenRouter, Hugging Face, LM Studio, vLLM, llama.cpp, LangChain, LangGraph, Vercel AI SDK, generic ReadableStream
- Embedders: `openaiEmbedder`, `geminiEmbedder`, `ollamaEmbedder`, `deepseekEmbedder`, `grokEmbedder`, `kimiEmbedder`, `createOpenAICompatibleEmbedder`
- All adapters satisfy the `Adapter` contract v1 (ADR 0001) — substitutable anywhere in the ecosystem
- Custom adapter authoring via `createAdapter()`
- Higher-order adapters: `createRouter` (cost/latency/classifier routing), `createEnsembleAdapter` (fan-out + merge), `createFallbackAdapter` (ordered try-next)
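To make the custom-authoring bullet concrete: an adapter is just a value satisfying the shared contract, so anything with the right shape drops in wherever a provider adapter would. The sketch below is purely illustrative and self-contained; the real `Adapter` contract lives in `@agentskit/core` (ADR 0001), and its actual names, chunk types, and method signatures will differ from the hypothetical ones assumed here.

```ts
// Hypothetical minimal contract: NOT the real AgentsKit Adapter interface.
interface Chunk { type: 'text' | 'done'; content?: string }
interface Adapter {
  run(prompt: string): AsyncIterable<Chunk>
}

// A toy custom adapter: echoes the prompt upper-cased as a single text chunk.
function shoutAdapter(): Adapter {
  return {
    async *run(prompt) {
      yield { type: 'text', content: prompt.toUpperCase() }
      yield { type: 'done' }
    },
  }
}

// A consumer that only depends on the contract, not on any provider.
async function collect(adapter: Adapter, prompt: string): Promise<string> {
  let out = ''
  for await (const chunk of adapter.run(prompt)) {
    if (chunk.type === 'text') out += chunk.content ?? ''
  }
  return out
}

collect(shoutAdapter(), 'hello').then(console.log) // "HELLO"
```

Because `collect` sees only the contract, the toy adapter is substitutable for a real provider adapter anywhere in this sketch's ecosystem, which is the property the contract exists to guarantee.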
Higher-order adapters
```ts
import { createRouter, anthropic, openai } from '@agentskit/adapters'

// Auto-pick cheapest capable candidate per request.
const router = createRouter({
  candidates: [
    { id: 'haiku', adapter: anthropic({ model: 'claude-haiku-4-5' }), cost: 0.25 },
    { id: 'sonnet', adapter: anthropic({ model: 'claude-sonnet-4-6' }), cost: 3 },
    { id: 'gpt-mini', adapter: openai({ model: 'gpt-4o-mini' }), cost: 0.15 },
  ],
})
```

See Adapter router, Ensemble, and Fallback chain.
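The ordered try-next behavior of `createFallbackAdapter` can be sketched in a few lines. This is a self-contained illustration of the idea, not the package's implementation; the `Adapter` type here is a deliberately simplified stand-in.

```ts
// Simplified stand-in for an adapter: a prompt goes in, a string comes out.
type Adapter = (prompt: string) => Promise<string>

// Try each adapter in order; return the first success, rethrow the last failure.
function fallback(adapters: Adapter[]): Adapter {
  return async (prompt) => {
    let lastError: unknown
    for (const adapter of adapters) {
      try {
        return await adapter(prompt)
      } catch (err) {
        lastError = err // remember the failure and try the next candidate
      }
    }
    throw lastError
  }
}

// Usage: a failing primary, a working backup.
const primary: Adapter = async () => { throw new Error('rate limited') }
const backup: Adapter = async (p) => `backup answered: ${p}`

const adapter = fallback([primary, backup])
adapter('hello').then((out) => console.log(out)) // "backup answered: hello"
```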
Ecosystem
| Package | Role |
|---------|------|
| `@agentskit/core` | `Adapter`, `EmbedFn`, types |
| `@agentskit/runtime` | Headless `createRuntime` |
| `@agentskit/rag` | `createRAG` + embedders |
| `@agentskit/memory` | Vector + chat memory backends |
Testing adapters
Three built-in utilities let you test agents without hitting a real LLM.
mockAdapter — deterministic responses
```ts
import { mockAdapter } from '@agentskit/adapters'

const adapter = mockAdapter({
  response: [
    { type: 'text', content: 'Hello!' },
    { type: 'done' },
  ],
})
```

Pass a function to make responses request-aware, or pass an array of arrays to return different chunks on each call (sequenced mode). Use the optional `history` array to capture every request for assertions.
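As a rough, self-contained illustration of the sequenced mode described above (the real `mockAdapter` internals, and its behavior once the sequence runs out, may differ), an array-of-arrays source that yields a different chunk list per call looks like this:

```ts
type Chunk = { type: string; content?: string }

// Hand out the next chunk list on every call, repeating the final
// one once the sequence is exhausted (this sketch's choice).
function sequenced(responses: Chunk[][]): () => Chunk[] {
  let call = 0
  return () => {
    const i = Math.min(call, responses.length - 1)
    call++
    return responses[i]
  }
}

const next = sequenced([
  [{ type: 'text', content: 'first' }, { type: 'done' }],
  [{ type: 'text', content: 'second' }, { type: 'done' }],
])

console.log(next()[0].content) // "first"
console.log(next()[0].content) // "second"
console.log(next()[0].content) // "second" (sequence exhausted)
```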
recordingAdapter + inMemorySink — capture real calls
```ts
import { recordingAdapter, inMemorySink, anthropic } from '@agentskit/adapters'

const sink = inMemorySink()
const adapter = recordingAdapter(
  anthropic({ apiKey: process.env.ANTHROPIC_API_KEY!, model: 'claude-sonnet-4-6' }),
  sink,
)
// Runs the real LLM and captures every chunk to sink.fixture
```

replayAdapter — replay captured fixtures
```ts
import { replayAdapter } from '@agentskit/adapters'
import fixture from './fixture.json'

const adapter = replayAdapter(fixture) // no network calls
```

Typical workflow: record once in dev → commit JSON fixture → replay in CI.
License
MIT — see LICENSE.
