@aigne/ollama
AIGNE Ollama SDK for integrating with locally hosted AI models via Ollama within the AIGNE Framework.
Introduction
@aigne/ollama provides seamless integration between the AIGNE Framework and locally hosted AI models served through Ollama. It lets developers use open-source language models running locally via Ollama in their AIGNE applications, exposing the same consistent interface used across the framework while keeping inference private and available offline.
Features
- Ollama Integration: Direct connection to a local Ollama instance
- Local Model Support: Support for a wide variety of open-source models hosted via Ollama
- Chat Completions: Support for the chat completions API with all available Ollama models
- Streaming Responses: Support for streaming responses for more responsive applications
- Type-Safe: Comprehensive TypeScript typings for all APIs and models
- Consistent Interface: Compatible with the AIGNE Framework's model interface
- Privacy-Focused: Run models locally without sending data to external API services
- Full Configuration: Extensive configuration options for fine-tuning behavior
Installation
Using npm
npm install @aigne/ollama @aigne/core
Using yarn
yarn add @aigne/ollama @aigne/core
Using pnpm
pnpm add @aigne/ollama @aigne/core
Prerequisites
Before using this package, you need to have Ollama installed and running on your machine with at least one model pulled. Follow the instructions on the Ollama website to set up Ollama.
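As a minimal sketch (assuming a default local installation; exact commands may vary by platform and Ollama version), you can pull a model and start the server with the Ollama CLI:
# Download a model to use with this package (llama3 is this package's default)
ollama pull llama3
# Start the Ollama server if it is not already running (serves on http://localhost:11434 by default)
ollama serve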
Basic Usage
import { OllamaChatModel } from "@aigne/ollama";
const model = new OllamaChatModel({
  // Specify base URL (defaults to http://localhost:11434)
  baseURL: "http://localhost:11434",
  // Specify Ollama model to use (defaults to 'llama3')
  model: "llama3",
  modelOptions: {
    temperature: 0.8,
  },
});

const result = await model.invoke({
  messages: [{ role: "user", content: "Tell me what model you're using" }],
});

console.log(result);
/* Output:
{
  text: "I'm an AI assistant running on Ollama with the llama3 model.",
  model: "llama3"
}
*/
Streaming Responses
import { isAgentResponseDelta } from "@aigne/core";
import { OllamaChatModel } from "@aigne/ollama";
const model = new OllamaChatModel({
  baseURL: "http://localhost:11434",
  model: "llama3",
});

const stream = await model.invoke(
  {
    messages: [{ role: "user", content: "Tell me what model you're using" }],
  },
  { streaming: true },
);

let fullText = "";
const json = {};

for await (const chunk of stream) {
  if (isAgentResponseDelta(chunk)) {
    const text = chunk.delta.text?.text;
    if (text) fullText += text;
    if (chunk.delta.json) Object.assign(json, chunk.delta.json);
  }
}

console.log(fullText); // Output: "I'm an AI assistant running on Ollama with the llama3 model."
console.log(json); // { model: "llama3" }
License
Elastic-2.0
