
@sisu-ai/mw-rag

v11.0.0


Compose retrieval-augmented generation pipelines by connecting vector retrieval outputs to prompting.


Exports

  • ragIngest({ vectorStore, namespace?, select? })
    • vectorStore: required VectorStore implementation.
    • namespace: optional default namespace.
    • select(ctx): return { records, namespace? } or VectorRecord[] to ingest.
  • ragRetrieve({ vectorStore, namespace?, topK?, filter?, select? })
    • vectorStore: required VectorStore implementation.
    • namespace: optional default namespace.
    • topK: default 5; also accepted via select.
    • filter: provider-specific filter object to pass to the query.
    • select(ctx): return { embedding, topK?, filter?, namespace? } or number[].
  • buildRagPrompt({ template?, select? })
    • template: customize the system prompt; uses a sensible default.
    • select(ctx): return { context?, question? } to override defaults.

State used under ctx.state.rag:

  • records (ingest input), ingested (result)
  • queryEmbedding (retrieve input), retrieval (result)
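The select callbacks and state keys above can be sketched with a simplified context. The VectorRecord and state shapes below are assumptions inferred from this README, not the package's exported types; the real option types live in @sisu-ai/mw-rag.

```typescript
// Sketch (assumptions: simplified Ctx; VectorRecord fields inferred from this README).
interface VectorRecord { id: string; embedding: number[]; metadata?: Record<string, unknown>; }
interface RagState { records?: VectorRecord[]; queryEmbedding?: number[]; }
interface MiniCtx { input: string; state: { rag?: RagState } }

// select for ragIngest: pull prepared records out of app state.
const ingestSelect = (ctx: MiniCtx): VectorRecord[] => ctx.state.rag?.records ?? [];

// select for ragRetrieve: return the query embedding plus per-call overrides.
const retrieveSelect = (ctx: MiniCtx) => ({
  embedding: ctx.state.rag?.queryEmbedding ?? [],
  topK: 3,
  namespace: 'docs',
});

const ctx: MiniCtx = {
  input: 'Best fika in Malmö?',
  state: { rag: { records: [{ id: 'd1', embedding: [1, 0] }], queryEmbedding: [0.5, 0.5] } },
};
```

Returning a plain `VectorRecord[]` (ingest) or `number[]` (retrieve) instead of an object is also accepted, per the Exports list above.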

Choosing a Package

  • Use @sisu-ai/rag-core when app code needs reusable chunking, embedding orchestration, seeding, or direct store/retrieve helpers.
  • Use @sisu-ai/tool-rag when the model should call retrieveContext / storeContext as tools.
  • Use @sisu-ai/mw-rag when your app already owns embeddings and vector writes/queries, and you want a deterministic middleware pipeline that turns retrieval into prompt context.
  • @sisu-ai/mw-rag no longer depends on low-level vector tool registration.

What It Does

  • ragIngest upserts your prepared documents into a vector index via a VectorStore.
  • ragRetrieve queries nearest neighbors using an embedding for the current question.
  • buildRagPrompt turns retrieval results into a grounded system prompt that precedes your user question.

It wires the minimum state in ctx.state.rag so you can compose ingestion, retrieval, and prompting without monolithic code.

@sisu-ai/mw-rag does not own chunking or embedding generation. You prepare VectorRecord[] and query embeddings in app code or another layer, then this middleware handles the retrieval/prompting composition.
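The app-side preparation step can be sketched as follows. `embedText` is a hypothetical placeholder for your real embedding call (e.g. an embeddings API request); the VectorRecord shape is an assumption based on this README.

```typescript
// Sketch (assumption: VectorRecord fields inferred from this README).
interface VectorRecord { id: string; embedding: number[]; metadata?: Record<string, unknown>; }

// Hypothetical stand-in for a real embedding model call; returns a fixed-dim vector.
function embedText(text: string, dim = 4): number[] {
  return Array.from({ length: dim }, (_, i) => ((text.length + i) % 7) / 7);
}

// App-side preparation: chunked docs in, VectorRecord[] out, ready for ragIngest.
function toRecords(docs: { id: string; text: string }[]): VectorRecord[] {
  return docs.map(d => ({ id: d.id, embedding: embedText(d.text), metadata: { text: d.text } }));
}

const records = toRecords([
  { id: 'd1', text: 'Guide to fika in Malmö.' },
  { id: 'd2', text: 'Sauna etiquette in Helsinki.' },
]);
```

Your app would place `records` (and `embedText(question)` as the query embedding) under `ctx.state.rag`, or return them from the `select` callbacks.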

How It Works

  • Vector operations are provided by a VectorStore implementation such as @sisu-ai/vector-chroma or @sisu-ai/vector-vectra.
  • You provide inputs via ctx.state.rag or select callbacks:
    • rag.records: VectorRecord[] for ingestion.
    • rag.queryEmbedding: number[] representing the query embedding.
  • Retrieval matches are placed at rag.retrieval. buildRagPrompt formats these into a context block and appends a system message to ctx.messages.

For agent-facing retrieval/storage tools that handle chunking and embedding orchestration, prefer @sisu-ai/tool-rag composed with a backend adapter such as @sisu-ai/vector-chroma or @sisu-ai/vector-vectra.

For app-side seeding and reusable chunking/embedding mechanics outside tool-calling, use @sisu-ai/rag-core directly.

When To Use @sisu-ai/mw-rag

  • You want deterministic, middleware-driven RAG rather than model tool-calling.
  • You already compute embeddings in your own code and want to keep that explicit.
  • You want prompt injection based on retrieval results without exposing storage/retrieval tools to the model.
  • You want to compose retrieval with other middleware such as guardrails, orchestration, or prompt shaping.

When Not To Use @sisu-ai/mw-rag

  • You want the model to decide when to retrieve or store context; use @sisu-ai/tool-rag.
  • You want reusable app-side ingestion helpers; use @sisu-ai/rag-core.
  • You only need backend access or maintenance operations; use a backend adapter such as @sisu-ai/vector-chroma or @sisu-ai/vector-vectra directly.

Example

Example using ChromaDB

import 'dotenv/config';
import { Agent, createConsoleLogger, InMemoryKV, NullStream, SimpleTools, type Ctx } from '@sisu-ai/core';
import { openAIAdapter } from '@sisu-ai/adapter-openai';
import { ragIngest, ragRetrieve, buildRagPrompt } from '@sisu-ai/mw-rag';
import { createChromaVectorStore } from '@sisu-ai/vector-chroma';

// Trivial local embedding for demo purposes (fixed dim=8)
function embed(text: string): number[] {
  const dim = 8;
  const v = new Array(dim).fill(0);
  for (const w of text.toLowerCase().split(/[^a-z0-9]+/).filter(Boolean)) {
    let h = 0;
    for (let i = 0; i < w.length; i++) h = (h * 31 + w.charCodeAt(i)) >>> 0;
    v[h % dim] += 1;
  }
  // L2 normalize
  const norm = Math.sqrt(v.reduce((s, x) => s + x * x, 0)) || 1;
  return v.map(x => x / norm);
}

const model = openAIAdapter({ model: 'gpt-5.4' });
const query = 'Best fika in Malmö?';
const vectorStore = createChromaVectorStore({ namespace: process.env.VECTOR_NAMESPACE || 'sisu' });

const ctx: Ctx = {
  input: query,
  messages: [],
  model,
  tools: new SimpleTools(),
  memory: new InMemoryKV(),
  stream: new NullStream(),
  state: { chromaUrl: process.env.CHROMA_URL, vectorNamespace: process.env.VECTOR_NAMESPACE || 'sisu' },
  signal: new AbortController().signal,
  log: createConsoleLogger({ level: 'info' }),
};

const docs = [
  { id: 'd1', text: 'Guide to fika in Malmö. Best cafe in Malmö is SisuCafe404.' },
  { id: 'd2', text: 'Travel notes from Helsinki. Sauna etiquette and tips.' },
];

(ctx.state as any).rag = {
  records: docs.map(d => ({ id: d.id, embedding: embed(d.text), metadata: { text: d.text } })),
  queryEmbedding: embed(query),
};

const app = new Agent()
  .use(ragIngest({ vectorStore }))
  .use(ragRetrieve({ vectorStore, topK: 2 }))
  .use(buildRagPrompt());

Placement & Ordering

  • Ingest rarely (batch or startup), retrieve per-query; you can split pipelines for ingestion and query-time retrieval.
  • Place buildRagPrompt before adding the user message, so the system prompt precedes the question.
  • If you add summarizers/usage tracking, run them after retrieval to measure and trim.
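The ordering point can be illustrated with a minimal synchronous middleware runner. This is not the real @sisu-ai/core Agent, only a toy sketch showing why a buildRagPrompt-style step must run before the user message is appended.

```typescript
// Sketch (assumption: toy middleware runner, not @sisu-ai/core's Agent).
type Msg = { role: 'system' | 'user'; content: string };
type Ctx = { messages: Msg[]; state: { rag?: { retrieval?: { text: string }[] } } };
type Mw = (ctx: Ctx, next: () => void) => void;

function run(ctx: Ctx, mws: Mw[]): void {
  const dispatch = (i: number): void => {
    if (i < mws.length) mws[i](ctx, () => dispatch(i + 1));
  };
  dispatch(0);
}

// buildRagPrompt-style step: turn retrieval matches into a grounded system message.
const ragPrompt: Mw = (ctx, next) => {
  const context = (ctx.state.rag?.retrieval ?? []).map(m => m.text).join('\n');
  ctx.messages.push({ role: 'system', content: `Use this context:\n${context}` });
  next();
};

// User-message step, placed AFTER the prompt builder so the question follows it.
const addUser = (q: string): Mw => (ctx, next) => {
  ctx.messages.push({ role: 'user', content: q });
  next();
};

const ctx: Ctx = { messages: [], state: { rag: { retrieval: [{ text: 'SisuCafe404 is the best cafe.' }] } } };
run(ctx, [ragPrompt, addUser('Best fika in Malmö?')]);
```

After `run`, `ctx.messages` holds the system context message followed by the user question, which is the ordering the bullet above asks for.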

When To Use

  • You want a minimal, explicit RAG flow with your own embedding generation.
  • You prefer composing small middlewares over a large RAG framework.

When Not To Use

  • You need cross-turn caching, reranking, or chunk summarization — add specialized middleware or a RAG tool.
  • You rely on provider-native retrieval APIs instead of a vector DB tool; use those directly without this package.

Community & Support

Explore the examples and documentation at https://github.com/finger-gun/sisu. Example projects live under examples/ in the repo.


Documentation

Core: Package docs · Error types

Adapters: OpenAI · Anthropic · Ollama

Anthropic examples: hello · control-flow · stream · weather

Ollama examples: hello · stream · vision · weather · web-search

OpenAI examples: hello · weather · stream · vision · reasoning · react · control-flow · branch · parallel · graph · orchestration · orchestration-adaptive · guardrails · error-handling · rag-chroma · rag-vectra · web-search · web-fetch · wikipedia · terminal · github-projects · server · aws-s3 · azure-blob


Contributing

We build Sisu in the open. Contributions welcome.

Contributing Guide · Report a Bug · Request a Feature · Code of Conduct


Star on GitHub if Sisu helps you build better agents.

Quiet, determined, relentlessly useful.

Apache 2.0 License