# @coopah/bentley-provider-ollama

v0.3.0
Ollama LLM provider for Bentley. Run local models (Llama, Mistral, Gemma, etc.) with your Bentley agents.
## Install

```sh
pnpm add @coopah/bentley-provider-ollama
```

Make sure Ollama is running locally or at the specified base URL.
## Dependencies

- `@coopah/bentley-core`
- `@ai-sdk/openai` `^3.0.41` (Ollama exposes an OpenAI-compatible API)
- `ai` `^6.0.116`
## Usage

```ts
import { createBentley } from "@coopah/bentley-core";
import { bentleyOllamaPlugin } from "@coopah/bentley-provider-ollama";

const bentley = createBentley({
  plugins: [
    bentleyOllamaPlugin({ baseUrl: "http://localhost:11434/v1" }),
    // or: bentleyOllamaPlugin() — defaults to http://localhost:11434/v1
  ],
});
```

Then reference Ollama models in your shell config (e.g. `llama3.2`, `mistral`, `gemma2`).
## API

- `bentleyOllamaPlugin(options?)` — `BentleyPlugin` that registers Ollama as an LLM provider
- `createBentleyOllamaProvider(baseUrl?)` — Low-level factory returning `(modelId: string) => LanguageModel`
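The low-level factory's curried shape — configure a base URL once, then resolve models by id — can be sketched as follows. This is a simplified, hypothetical stand-in: the real `createBentleyOllamaProvider` returns an AI SDK `LanguageModel` (via `@ai-sdk/openai`), whereas the `LanguageModelRef` type and `createOllamaProvider` name below are illustration only.

```typescript
// Hypothetical sketch of the factory shape: (baseUrl?) => (modelId) => model handle.
// The real factory wraps @ai-sdk/openai and returns a LanguageModel; a plain
// object stands in here so the shape is visible without any dependencies.
const DEFAULT_BASE_URL = "http://localhost:11434/v1";

interface LanguageModelRef {
  provider: string;
  modelId: string;
  baseUrl: string;
}

function createOllamaProvider(
  baseUrl: string = DEFAULT_BASE_URL
): (modelId: string) => LanguageModelRef {
  // Each call with a model id yields a handle bound to the same base URL.
  return (modelId) => ({ provider: "ollama", modelId, baseUrl });
}

const provider = createOllamaProvider();
const model = provider("llama3.2");
console.log(model.baseUrl); // "http://localhost:11434/v1"
```

Binding the base URL once and reusing the returned function for every model id is why the plugin only needs a single `baseUrl` option.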
## Configuration

```ts
interface OllamaPluginOptions {
  baseUrl?: string; // Default: "http://localhost:11434/v1"
}
```

## Related Packages
| Package | Role |
|---------|------|
| `@coopah/bentley-core` | Core runtime (required) |
| `@coopah/bentley-provider-openai` | OpenAI provider |
| `@coopah/bentley-provider-anthropic` | Anthropic provider |
| `@coopah/bentley-provider-copilot` | GitHub Copilot provider |
## License
MIT
