# @gsep/adapters-llm-ollama

Ollama adapter for GSEP — use local LLMs with self-evolving prompts.

No API keys needed. Run any open-source model locally.
## Installation

```bash
npm install @gsep/core @gsep/adapters-llm-ollama
```

### Prerequisites

Install and run Ollama:

```bash
# Install Ollama, then:
ollama pull llama3
ollama serve
```

## Usage
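Before wiring up the adapter, you can confirm the Ollama server is reachable. The `/api/tags` endpoint of Ollama's HTTP API lists the models you have pulled locally:

```shell
# Should return JSON listing pulled models if the server is running
curl http://localhost:11434/api/tags
```

If this fails to connect, make sure `ollama serve` is running (or that the desktop app is open).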
```ts
import { PGA } from '@gsep/core';
import { OllamaAdapter } from '@gsep/adapters-llm-ollama';

const pga = new PGA({
  llm: new OllamaAdapter({
    model: 'llama3',
  }),
});
```

## Configuration
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| model | string | required | Ollama model name |
| baseURL | string | 'http://localhost:11434' | Ollama server URL |
| temperature | number | 0.7 | Temperature (0-2) |
| timeout | number | 120000 | Request timeout (ms) |
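Putting the table together, a fully specified adapter might look like the sketch below. The values shown are the documented defaults, except `model`, which is required:

```typescript
import { OllamaAdapter } from '@gsep/adapters-llm-ollama';

const llm = new OllamaAdapter({
  model: 'llama3',                   // required: any model pulled into Ollama
  baseURL: 'http://localhost:11434', // default Ollama server URL
  temperature: 0.7,                  // default; valid range 0-2
  timeout: 120_000,                  // default request timeout in ms (2 minutes)
});
```

In practice you usually only need `model`; override `baseURL` when Ollama runs elsewhere, and raise `timeout` for large models with slow first-token latency.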
## Supported Models

Any model available in Ollama, including:

- `llama3` / `llama3.1` / `llama3.2`
- `mistral` / `mixtral`
- `deepseek-r1`
- `phi3` / `phi4`
- `qwen2` / `qwen2.5`
- `gemma2`
- …and hundreds more
## Remote Ollama

Connect to Ollama running on a remote server:

```ts
const llm = new OllamaAdapter({
  model: 'llama3',
  baseURL: 'http://your-gpu-server:11434',
});
```

## License

MIT
