
react-native-rag

v0.2.0


Private, local RAGs. Supercharge LLMs with your own knowledge base.


React Native RAG


:rocket: Features

  • Modular: Use only the components you need. Choose from LLM, Embeddings, VectorStore, and TextSplitter.
  • Extensible: Create your own components by implementing the LLM, Embeddings, VectorStore, and TextSplitter interfaces.
  • Multiple Integration Options: Whether you prefer a simple hook (useRAG), a powerful class (RAG), or direct component interaction, the library adapts to your needs.
  • On-device Inference: Powered by @react-native-rag/executorch, allowing for private and efficient model execution directly on the user's device.
  • Vector Store Persistence: Includes support for SQLite with @react-native-rag/op-sqlite to save and manage vector stores locally.
  • Semantic Search Ready: Easily implement powerful semantic search in your app by using the VectorStore and Embeddings components directly.

:earth_africa: Real-World Example

React Native RAG is powering Private Mind, a privacy-first mobile AI app available on App Store and Google Play.

:package: Installation

npm install react-native-rag

You will also need an embeddings model and a large language model. We recommend using @react-native-rag/executorch for on-device inference. To use it, install the following packages:

npm install @react-native-rag/executorch react-native-executorch

For persisting vector stores, you can use @react-native-rag/op-sqlite:

npm install @react-native-rag/op-sqlite

:iphone: Quickstart - Example App

For a complete demonstration of how to use the library, check out the example app.

:books: Usage

We offer three ways to integrate RAG, depending on your needs.

1. Using the useRAG Hook

The easiest way to get started, and a good fit for simple use cases where you want to set up RAG quickly.

import React from 'react';
import { Text } from 'react-native';

import { useRAG, MemoryVectorStore } from 'react-native-rag';
import {
  ALL_MINILM_L6_V2,
  ALL_MINILM_L6_V2_TOKENIZER,
  LLAMA3_2_1B_QLORA,
  LLAMA3_2_1B_TOKENIZER,
  LLAMA3_2_TOKENIZER_CONFIG,
} from 'react-native-executorch';
import {
  ExecuTorchEmbeddings,
  ExecuTorchLLM,
} from '@react-native-rag/executorch';

const vectorStore = new MemoryVectorStore({
  embeddings: new ExecuTorchEmbeddings({
    modelSource: ALL_MINILM_L6_V2,
    tokenizerSource: ALL_MINILM_L6_V2_TOKENIZER,
  }),
});

const llm = new ExecuTorchLLM({
  modelSource: LLAMA3_2_1B_QLORA,
  tokenizerSource: LLAMA3_2_1B_TOKENIZER,
  tokenizerConfigSource: LLAMA3_2_TOKENIZER_CONFIG,
});

const App = () => {
  const rag = useRAG({ vectorStore, llm });
  return <Text>{rag.response}</Text>;
};

2. Using the RAG Class

For more control over components and configuration.

import React, { useEffect, useState } from 'react';
import { Text } from 'react-native';

import { RAG, MemoryVectorStore } from 'react-native-rag';
import {
  ExecuTorchEmbeddings,
  ExecuTorchLLM,
} from '@react-native-rag/executorch';
import {
  ALL_MINILM_L6_V2,
  ALL_MINILM_L6_V2_TOKENIZER,
  LLAMA3_2_1B_QLORA,
  LLAMA3_2_1B_TOKENIZER,
  LLAMA3_2_TOKENIZER_CONFIG,
} from 'react-native-executorch';

const App = () => {
  const [rag, setRag] = useState<RAG | null>(null);
  const [response, setResponse] = useState<string | null>(null);

  useEffect(() => {
    const initializeRAG = async () => {
      const embeddings = new ExecuTorchEmbeddings({
        modelSource: ALL_MINILM_L6_V2,
        tokenizerSource: ALL_MINILM_L6_V2_TOKENIZER,
      });

      const llm = new ExecuTorchLLM({
        modelSource: LLAMA3_2_1B_QLORA,
        tokenizerSource: LLAMA3_2_1B_TOKENIZER,
        tokenizerConfigSource: LLAMA3_2_TOKENIZER_CONFIG,
        responseCallback: setResponse,
      });

      const vectorStore = new MemoryVectorStore({ embeddings });
      const ragInstance = new RAG({ llm, vectorStore });

      await ragInstance.load();
      setRag(ragInstance);
    };
    initializeRAG();
  }, []);

  return <Text>{response}</Text>;
};

3. Using RAG Components Separately

For advanced use cases requiring fine-grained control.

This is the recommended approach if you want to implement semantic search in your app: use the VectorStore and Embeddings classes directly.

import React, { useEffect, useState } from 'react';
import { Text } from 'react-native';

import { MemoryVectorStore } from 'react-native-rag';
import {
  ExecuTorchEmbeddings,
  ExecuTorchLLM,
} from '@react-native-rag/executorch';
import {
  ALL_MINILM_L6_V2,
  ALL_MINILM_L6_V2_TOKENIZER,
  LLAMA3_2_1B_QLORA,
  LLAMA3_2_1B_TOKENIZER,
  LLAMA3_2_TOKENIZER_CONFIG,
} from 'react-native-executorch';

const App = () => {
  const [embeddings, setEmbeddings] = useState<ExecuTorchEmbeddings | null>(null);
  const [llm, setLLM] = useState<ExecuTorchLLM | null>(null);
  const [vectorStore, setVectorStore] = useState<MemoryVectorStore | null>(null);
  const [response, setResponse] = useState<string | null>(null);

  useEffect(() => {
    const initialize = async () => {
      // Instantiate and load the Embeddings Model
      // NOTE: Calling load on VectorStore will automatically load the embeddings model
      // so loading the embeddings model separately is not necessary in this case.
      const embeddings = await new ExecuTorchEmbeddings({
        modelSource: ALL_MINILM_L6_V2,
        tokenizerSource: ALL_MINILM_L6_V2_TOKENIZER,
      }).load();

      // Instantiate and load the Large Language Model
      const llm = await new ExecuTorchLLM({
        modelSource: LLAMA3_2_1B_QLORA,
        tokenizerSource: LLAMA3_2_1B_TOKENIZER,
        tokenizerConfigSource: LLAMA3_2_TOKENIZER_CONFIG,
        responseCallback: setResponse,
      }).load();

      // Instantiate and initialize the Vector Store
      const vectorStore = await new MemoryVectorStore({ embeddings }).load();

      setEmbeddings(embeddings);
      setLLM(llm);
      setVectorStore(vectorStore);
    };
    initialize();
  }, []);

  return <Text>{response}</Text>;
};
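
To make the component-level example concrete: a memory vector store keeps (embedding, document) pairs and ranks stored documents by cosine similarity between their vectors and the query's. The self-contained sketch below illustrates the idea only; `TinyVectorStore` and its method names are invented for illustration and are not the library's actual implementation, which computes real embeddings via the Embeddings component.

```typescript
// Conceptual sketch of an in-memory vector store: store (vector, document)
// pairs, then rank documents by cosine similarity to a query vector.
// NOT react-native-rag's implementation; names here are illustrative only.

type Entry = { vector: number[]; document: string };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

class TinyVectorStore {
  private entries: Entry[] = [];

  add(vector: number[], document: string): void {
    this.entries.push({ vector, document });
  }

  // Return the k documents whose vectors are most similar to the query.
  similaritySearch(query: number[], k: number): string[] {
    return [...this.entries]
      .sort(
        (x, y) =>
          cosineSimilarity(query, y.vector) - cosineSimilarity(query, x.vector)
      )
      .slice(0, k)
      .map((e) => e.document);
  }
}
```

In the real library, the Embeddings component produces these vectors from text, so adding a document and querying both go through the embeddings model first.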

:jigsaw: Using Custom Components

Bring your own components by creating classes that implement the LLM, Embeddings, VectorStore, and TextSplitter interfaces. This allows you to use any model or service that fits your needs.
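
As a sketch of what a custom component might look like, here is a simple fixed-size character splitter. The `TextSplitter` interface shape shown below is an assumption for illustration; consult the library's type definitions for the actual signature before implementing your own.

```typescript
// Assumed interface shape for illustration -- check react-native-rag's
// exported types for the real TextSplitter signature.
interface TextSplitter {
  splitText(text: string): Promise<string[]>;
}

// Splits text into fixed-size character chunks with overlap, a common
// baseline chunking strategy for RAG pipelines.
class CharacterSplitter implements TextSplitter {
  constructor(
    private chunkSize: number = 500,
    private chunkOverlap: number = 50
  ) {}

  async splitText(text: string): Promise<string[]> {
    const chunks: string[] = [];
    const step = this.chunkSize - this.chunkOverlap;
    for (let start = 0; start < text.length; start += step) {
      chunks.push(text.slice(start, start + this.chunkSize));
      // Stop once the current chunk reaches the end of the text.
      if (start + this.chunkSize >= text.length) break;
    }
    return chunks;
  }
}
```

A class like this could then be passed wherever the library accepts a TextSplitter, alongside your choice of LLM, Embeddings, and VectorStore.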

:electric_plug: Plugins

  • @react-native-rag/executorch: On-device inference powered by React Native ExecuTorch.
  • @react-native-rag/op-sqlite: SQLite-backed persistence for vector stores.

:handshake: Contributing

Contributions are welcome! See the contributing guide to learn about the development workflow.

:page_facing_up: License

MIT

React Native RAG is created by Software Mansion

Software Mansion is a software agency that has been building web and mobile apps since 2012. We are core React Native contributors and experts in dealing with all kinds of React Native issues. We can help you build your next dream product – Hire us.
