
@sweetoburrito/backstage-plugin-ai-assistant-backend-module-embeddings-provider-ollama v0.3.7

The embeddings-provider-ollama backend module for the ai-assistant plugin.

@sweetoburrito/backstage-plugin-ai-assistant-backend-module-embeddings-provider-ollama

An embeddings provider module that lets the Backstage AI Assistant backend create vector embeddings using Ollama-hosted models (local Ollama server or Ollama Cloud).

This README explains how the provider works, when to use it, configuration options, and how to wire it into your Backstage backend.

Features

  • Converts text or documents into numeric vector embeddings using an Ollama model.
  • Exposes a provider implementation compatible with the AI Assistant backend, so embeddings services can be swapped without changing the rest of the app.
  • Requires minimal configuration for local or remote Ollama endpoints, with optional API key support.

When to use

Use this module when you run an Ollama embeddings-capable model and want the AI Assistant to build semantic search indices, vector stores, or provide retrieval-augmented generation (RAG) capabilities in Backstage. It's a good fit for local development with Ollama or when using an Ollama-hosted endpoint.

Configuration

Add the provider configuration to your Backstage app-config.yaml or app-config.local.yaml under aiAssistant.embeddings.ollama.

Minimum configuration keys (example):

aiAssistant:
  embeddings:
    ollama:
      baseUrl: 'http://localhost:11434'
      model: 'nomic-embed-text'
      apiKey: ${OLLAMA_API_KEY}

Field descriptions:

  • baseUrl - The base URL of your Ollama service. For a local Ollama server this is typically http://localhost:11434. For Ollama Cloud or a proxied endpoint, use the full base URL. When Ollama sits behind a web UI that proxies it under a /ollama route, set the base URL to that route, e.g. http://localhost:11434/ollama.
  • model - The name of the model to use for generating embeddings. The model must support embeddings (check your Ollama model documentation for supported capabilities).
  • apiKey - (Optional) An API key for Ollama Cloud or any endpoint that requires authentication. Mark this value as secret in Backstage configuration when applicable.

The exact keys available and required depend on your Ollama setup. Check the provider's config.d.ts in the package for the canonical types used by the module.
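Before wiring the module into Backstage, it can help to confirm that your configured endpoint and model actually produce embeddings. The sketch below is an assumption-laden illustration, not part of this package: it targets Ollama's /api/embed REST endpoint directly, and `nomic-embed-text` stands in for whichever embeddings-capable model you have pulled.

```typescript
// Sanity-check sketch (not part of this module): build and send a request
// to Ollama's /api/embed endpoint. The model name below is an assumption --
// substitute the embeddings-capable model you actually use.

interface EmbedRequest {
  url: string;
  body: { model: string; input: string };
}

// Pure helper: assemble the request from the same values you would put
// under aiAssistant.embeddings.ollama in app-config.yaml.
function buildEmbedRequest(
  baseUrl: string,
  model: string,
  input: string,
): EmbedRequest {
  return {
    // Tolerate a trailing slash on baseUrl.
    url: `${baseUrl.replace(/\/$/, '')}/api/embed`,
    body: { model, input },
  };
}

// Fire the request (requires a running Ollama server; Node 18+ for fetch).
async function embed(req: EmbedRequest): Promise<number[][]> {
  const res = await fetch(req.url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(req.body),
  });
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
  const data = (await res.json()) as { embeddings: number[][] };
  return data.embeddings;
}
```

If the call returns a non-empty vector, the same baseUrl and model values should work in the module's configuration.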

Install

Install the module into your Backstage backend workspace:

yarn workspace backend add @sweetoburrito/backstage-plugin-ai-assistant-backend-module-embeddings-provider-ollama

Wire the provider into your backend

Add the provider module import to your backend entrypoint (usually packages/backend/src/index.ts):

// packages/backend/src/index.ts

// other backend modules...
backend.add(import('@sweetoburrito/backstage-plugin-ai-assistant-backend'));

// Add the Ollama embeddings provider
backend.add(
  import(
    '@sweetoburrito/backstage-plugin-ai-assistant-backend-module-embeddings-provider-ollama'
  ),
);

Restart your backend after adding the provider so it registers with the AI Assistant plugin.
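For context, a minimal entrypoint might look like the sketch below. This assumes the standard new Backstage backend system from @backstage/backend-defaults; a real index.ts will register many more plugins.

```
// packages/backend/src/index.ts -- minimal sketch, assuming the new
// Backstage backend system (@backstage/backend-defaults).
import { createBackend } from '@backstage/backend-defaults';

const backend = createBackend();

// AI Assistant backend plugin
backend.add(import('@sweetoburrito/backstage-plugin-ai-assistant-backend'));

// Ollama embeddings provider module
backend.add(
  import(
    '@sweetoburrito/backstage-plugin-ai-assistant-backend-module-embeddings-provider-ollama'
  ),
);

backend.start();
```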