
ollama-llm-bridge · v0.0.8 · 12 downloads

Ollama LLM Bridge

Universal Ollama LLM Bridge supporting multiple models (Llama, Gemma, etc.) with a unified interface.

🚀 Features

  • Universal Ollama Support: Single package supporting all Ollama models
  • Model Auto-Detection: Automatically resolves appropriate model implementation
  • Type Safety: Full TypeScript support with comprehensive type definitions
  • Streaming Support: Native streaming API support
  • Multi-Modal: Image support for compatible models (Llama 3.2+)
  • Error Handling: Robust error handling with standardized error types
  • Extensible: Easy to add new model support
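The "Model Auto-Detection" feature above can be illustrated with a small, self-contained sketch. This is not the package's actual implementation — the `detectModelFamily` helper and `ModelFamily` type are hypothetical names — but prefix matching on the model ID is one plausible way a bridge could route `llama3.2`, `gemma3n:latest`, and `gpt-oss-20:b` to their respective implementations:

```typescript
// Hypothetical sketch of model auto-detection via model-ID prefix matching.
// Names here are illustrative, not the package's real API.
type ModelFamily = 'llama' | 'gemma' | 'gpt-oss';

const FAMILY_PREFIXES: Array<[string, ModelFamily]> = [
  ['llama', 'llama'],
  ['gemma', 'gemma'],
  ['gpt-oss', 'gpt-oss'],
];

function detectModelFamily(modelId: string): ModelFamily {
  const id = modelId.toLowerCase();
  for (const [prefix, family] of FAMILY_PREFIXES) {
    // The first matching prefix decides which model implementation handles the ID.
    if (id.startsWith(prefix)) return family;
  }
  throw new Error(`Unsupported model: ${modelId}`);
}
```

Throwing on an unknown ID mirrors the `ModelNotSupportedError` behavior described in the Error Handling section below.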

📦 Installation

# pnpm (recommended)
pnpm add ollama-llm-bridge llm-bridge-spec ollama zod

# npm
npm install ollama-llm-bridge llm-bridge-spec ollama zod

# yarn
yarn add ollama-llm-bridge llm-bridge-spec ollama zod

🏗️ Architecture

This package follows the Abstract Model Pattern inspired by the bedrock-llm-bridge:

ollama-llm-bridge/
├── models/
│   ├── base/AbstractOllamaModel     # Abstract base class
│   ├── llama/LlamaModel            # Llama implementation
│   ├── gemma/GemmaModel            # Gemma implementation
│   └── gpt-oss/GptOssModel        # GPT-OSS implementation
├── bridge/OllamaBridge             # Main bridge class
├── factory/                        # Factory functions
└── utils/error-handler             # Error handling
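The Abstract Model Pattern in the tree above can be sketched as follows. This is a simplified illustration under assumed names — the real `AbstractOllamaModel` base class will have a richer interface — showing how each concrete model declares its own capabilities while sharing common behavior:

```typescript
// Simplified sketch of the Abstract Model Pattern; not the package's exact API.
abstract class AbstractOllamaModel {
  constructor(protected readonly modelId: string) {}

  // Each concrete model reports its own capabilities.
  abstract getCapabilities(): { multiModal: boolean; streaming: boolean };
}

class LlamaModel extends AbstractOllamaModel {
  getCapabilities() {
    // Per the feature list, multi-modal input is available from Llama 3.2 onward.
    return { multiModal: this.modelId.startsWith('llama3.2'), streaming: true };
  }
}

class GemmaModel extends AbstractOllamaModel {
  getCapabilities() {
    return { multiModal: false, streaming: true };
  }
}
```

Adding a new model then amounts to one new subclass plus a factory entry, which is what makes the single-package design easy to extend.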

🎯 Quick Start

Basic Usage

import { createOllamaBridge } from 'ollama-llm-bridge';

// Create bridge with auto-detected model
const bridge = createOllamaBridge({
  host: 'http://localhost:11434',
  model: 'llama3.2', // or 'gemma3n:latest' or 'gpt-oss-20:b'
  temperature: 0.7,
});

// Simple chat
const response = await bridge.invoke({
  messages: [{ role: 'user', content: [{ type: 'text', text: 'Hello!' }] }],
});

console.log(response.choices[0].message.content[0].text);

Streaming

// Streaming chat
const stream = bridge.invokeStream({
  messages: [{ role: 'user', content: [{ type: 'text', text: 'Tell me a story' }] }],
});

for await (const chunk of stream) {
  const text = chunk.choices[0]?.message?.content?.[0]?.text;
  if (text) {
    process.stdout.write(text);
  }
}

Multi-Modal (Llama 3.2+)

const response = await bridge.invoke({
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What do you see in this image?' },
        { type: 'image', data: 'base64_encoded_image_data' },
      ],
    },
  ],
});

🔧 Factory Functions

Main Factory

import { createOllamaBridge } from 'ollama-llm-bridge';

const bridge = createOllamaBridge({
  host: 'http://localhost:11434',
  model: 'llama3.2', // Required
  temperature: 0.7,
  num_predict: 4096,
});

Convenience Factories

import {
  createLlamaBridge,
  createGemmaBridge,
  createGptOssBridge,
  createDefaultOllamaBridge,
} from 'ollama-llm-bridge';

// Llama with defaults
const llamaBridge = createLlamaBridge({
  model: 'llama3.2', // Optional, defaults to 'llama3.2'
  temperature: 0.8,
});

// Gemma with defaults
const gemmaBridge = createGemmaBridge({
  model: 'gemma3n:7b', // Optional, defaults to 'gemma3n:latest'
  num_predict: 1024,
});

// GPT-OSS with defaults
const gptOssBridge = createGptOssBridge({
  model: 'gpt-oss-20:b', // Optional, defaults to 'gpt-oss-20:b'
});

// Default configuration (Llama 3.2)
const defaultBridge = createDefaultOllamaBridge({
  temperature: 0.5, // Override defaults
});

📋 Supported Models

Llama Models

  • llama3.2 (with multi-modal support)
  • llama3.1
  • llama3
  • llama2
  • llama

Gemma Models

  • gemma3n:latest
  • gemma3n:7b
  • gemma3n:2b
  • gemma2:latest
  • gemma2:7b
  • gemma2:2b
  • gemma:latest
  • gemma:7b
  • gemma:2b

GPT-OSS Models

  • gpt-oss-20:b

⚙️ Configuration

interface OllamaBaseConfig {
  host?: string; // Default: 'http://localhost:11434'
  model: string; // Required: Model ID
  temperature?: number; // 0.0 - 1.0
  top_p?: number; // 0.0 - 1.0
  top_k?: number; // Integer >= 1
  num_predict?: number; // Max tokens to generate
  stop?: string[]; // Stop sequences
  seed?: number; // Seed for reproducibility
  stream?: boolean; // Default: false
}
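A sketch of how the documented defaults (`host: 'http://localhost:11434'`, `stream: false`) might be merged with a caller's config. The `withDefaults` helper is hypothetical, and the interface is trimmed to a few fields for brevity:

```typescript
// Hypothetical defaults-merging helper; interface trimmed for illustration.
interface OllamaBaseConfig {
  host?: string;        // Default: 'http://localhost:11434'
  model: string;        // Required: Model ID
  temperature?: number; // 0.0 - 1.0
  stream?: boolean;     // Default: false
}

function withDefaults(config: OllamaBaseConfig): OllamaBaseConfig {
  return {
    host: 'http://localhost:11434', // documented default
    stream: false,                  // documented default
    ...config,                      // caller-supplied fields win
  };
}
```

Spreading the caller's config last means any explicitly set field overrides the defaults, while `model` remains required by the type system.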

🎭 Model Capabilities

// Get model capabilities
const capabilities = bridge.getMetadata();

console.log(capabilities);
// {
//   name: 'Llama',
//   version: '3.2',
//   description: 'Ollama Llama Bridge',
//   model: 'llama3.2',
//   contextWindow: 8192,
//   maxTokens: 4096
// }

// Check model features
const features = bridge.model.getCapabilities();
console.log(features.multiModal); // true for Llama 3.2+
console.log(features.streaming); // true for all models
console.log(features.functionCalling); // false (coming soon)

🚦 Error Handling

The bridge provides comprehensive error handling with standardized error types:

import { NetworkError, ModelNotSupportedError, ServiceUnavailableError } from 'llm-bridge-spec';

try {
  const response = await bridge.invoke(prompt);
} catch (error) {
  if (error instanceof NetworkError) {
    console.error('Network issue:', error.message);
  } else if (error instanceof ModelNotSupportedError) {
    console.error('Unsupported model:', error.requestedModel);
    console.log('Supported models:', error.supportedModels);
  } else if (error instanceof ServiceUnavailableError) {
    console.error('Ollama server unavailable. Retry after:', error.retryAfter);
  }
}

🔄 Model Switching

// Create bridge with initial model
const bridge = createOllamaBridge({ model: 'llama3.2' });

// Switch to different model at runtime
bridge.setModel('gemma3n:latest');

// Get current model
console.log(bridge.getCurrentModel()); // 'gemma3n:latest'

// Get supported models
console.log(bridge.getSupportedModels());

🧪 Testing

# Run unit tests
pnpm test

# Run tests with coverage
pnpm test:coverage

# Run e2e tests (requires running Ollama server)
pnpm test:e2e

📊 Comparison with Previous Packages

| Feature          | llama3-llm-bridge    | gemma3n-llm-bridge   | ollama-llm-bridge   |
| ---------------- | -------------------- | -------------------- | ------------------- |
| Code Duplication | ❌ High              | ❌ High              | ✅ Eliminated       |
| Model Support    | 🔶 Llama only        | 🔶 Gemma only        | ✅ Universal        |
| Architecture     | 🔶 Basic             | 🔶 Basic             | ✅ Abstract Pattern |
| Extensibility    | ❌ Limited           | ❌ Limited           | ✅ Easy to extend   |
| Maintenance      | ❌ Multiple packages | ❌ Multiple packages | ✅ Single package   |

🔮 Roadmap

  • [ ] Function Calling Support
  • [ ] Batch Processing
  • [ ] More Ollama Models (CodeLlama, Mistral, etc.)
  • [ ] Custom Model Plugins
  • [ ] Performance Optimizations

🤝 Contributing

This project follows the Git Workflow Guide.

  1. Issues: file new feature requests or bug reports in GitHub Issues
  2. Create a branch: git checkout -b feature/core-new-feature
  3. TODO-driven development: commit each task as its own TODO unit
    git commit -m "✅ [TODO 1/3] Add new model support"
  4. Quality checks: always run before committing
    pnpm lint && pnpm test:ci && pnpm build
  5. Open a PR: create a Pull Request on GitHub
  6. Code review: Squash Merge after approval

📄 License

MIT License - see the LICENSE file for details.

🙏 Acknowledgments


Made with ❤️ by the LLM Bridge Team