
react-native-nobodywho v1.1.0

Run LLMs locally with offline inference — React Native bindings

NobodyWho React Native

NobodyWho is a React Native library for running large language models locally and offline on iOS and Android.

Free to use in commercial projects under the EUPL-1.2 license — no API key required. Supports text, vision, embeddings, RAG & function calling.

Quick Start

Install the library:

npm install react-native-nobodywho
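
Because the library ships native code, standard React Native linking applies. Assuming a bare (non-Expo) React Native project, you will typically also need to install the iOS pods after adding the package (general React Native practice, not specific to this library):

cd ios && pod install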

Supported Model Format

This library uses the GGUF format — a binary format optimized for fast loading and efficient LLM inference. A wide selection of GGUF models is available on Hugging Face.

Compatibility notes:

  • Most GGUF models will work, but some may fail due to formatting issues.
  • For mobile devices, models under 1 GB tend to run smoothly. As a general rule, the device should have available RAM of at least twice the model file size (see the sketch below). Note that available RAM differs from total RAM: iOS typically reserves around 1–2 GB for the kernel and system processes, while Android overhead varies by manufacturer, from roughly 2 GB on stock Android (e.g. Pixel devices) to 2–4 GB on Samsung, Xiaomi, and Oppo devices due to additional services.
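
As a rough illustration of that rule of thumb (the 2× heuristic and the per-OS overhead figures come from the notes above; the helper itself is a hypothetical sketch, not part of the library):

// Heuristic only: checks the "available RAM >= 2x model file size" rule of thumb.
function modelLikelyFits(
  modelFileBytes: number,
  totalRamBytes: number,
  osOverheadBytes: number, // ~1-2 GB on iOS, ~2-4 GB on Android (see above)
): boolean {
  const availableRam = totalRamBytes - osOverheadBytes;
  return availableRam >= 2 * modelFileBytes;
}

const GB = 1024 ** 3;
// Example: a 700 MB model on a 4 GB iPhone with ~1.5 GB reserved by the OS
modelLikelyFits(0.7 * GB, 4 * GB, 1.5 * GB); // true: 2.5 GB available >= 1.4 GB needed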

Minimum recommended specs:

  • iOS: iPhone 11 or newer with at least 4 GB of RAM.
  • Android: Snapdragon 855 / Adreno 640 / 6 GB RAM or better.

Model Loading

Models can be loaded from a local file path or downloaded automatically from Hugging Face:

import { Model } from "react-native-nobodywho";

// Download from Hugging Face (cached automatically)
const model = await Model.load({
  modelPath: "hf://NobodyWho/Qwen_Qwen3-0.6B-GGUF/Qwen_Qwen3-0.6B-Q4_K_M.gguf",
});

// Or load from a local file
const localModel = await Model.load({ modelPath: "/path/to/model.gguf" });

Downloaded models are cached on disk and reused on subsequent loads.

Chat

import { Chat } from "react-native-nobodywho";

const chat = await Chat.fromPath({
  modelPath: "hf://NobodyWho/Qwen_Qwen3-0.6B-GGUF/Qwen_Qwen3-0.6B-Q4_K_M.gguf",
  systemPrompt: "You are a helpful assistant.",
});

// Stream tokens
for await (const token of chat.ask("Is water wet?")) {
  console.log(token);
}

// Or get the full response
const response = await chat.ask("Is water wet?").completed();

See the Chat documentation for details.

Tool Calling

Give your LLM the ability to interact with the outside world by defining tools:

import { Chat, Tool } from "react-native-nobodywho";

function getWeatherForCity(city: string): string {
  // Stub for the example; a real implementation would look up weather for `city`.
  return JSON.stringify({ city, temp: 22, condition: "sunny" });
}

const getWeather = new Tool({
  name: "get_weather",
  description: "Get the current weather for a city",
  parameters: [
    { name: "city", type: "string", description: "The city name" },
  ],
  call: getWeatherForCity,
});

const chat = await Chat.fromPath({
  modelPath: "/path/to/model.gguf",
  tools: [getWeather],
});

const response = await chat.ask("What's the weather in Paris?").completed();

See the Tool Calling documentation for more.


Sampling

The model outputs a probability distribution over possible tokens. A sampler determines how the next token is selected from that distribution. You can configure sampling to improve output quality or constrain outputs to a specific format (e.g. JSON):

import { Chat, SamplerPresets } from "react-native-nobodywho";

const chat = await Chat.fromPath({
  modelPath: "/path/to/model.gguf",
  sampler: SamplerPresets.temperature(0.2), // Lower = more deterministic
});
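
The preset above only adjusts temperature. For the JSON constraint mentioned earlier, constrained decoding is usually expressed as a grammar- or schema-based sampler; the preset name below is purely hypothetical (check the Sampling documentation for the actual API) and only illustrates the shape such a call might take:

import { Chat, SamplerPresets } from "react-native-nobodywho";

// Hypothetical preset — the library's real constrained-decoding API may differ.
const jsonChat = await Chat.fromPath({
  modelPath: "/path/to/model.gguf",
  sampler: SamplerPresets.json(), // assumption: restricts output to valid JSON
});

const fruits = await jsonChat.ask("List three fruits as a JSON array.").completed();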

See the Sampling documentation for more.


Vision & Hearing

Provide image and audio information to your LLM.

To enable this, you need two model files:

  • A multimodal LLM, so the LLM can consume image tokens and/or audio tokens
  • A matching projection model, which converts images into image tokens and/or audio into audio tokens (usually has mmproj in the name)

Pass the projection model when loading your model, then use Prompt to compose prompts that mix text and images:

import { Chat, Prompt } from "react-native-nobodywho";

const chat = await Chat.fromPath({
  modelPath: "/path/to/vision-model.gguf",
  projectionModelPath: "/path/to/mmproj.gguf",
});

const response = await chat
  .ask(
    new Prompt([
      Prompt.Text("Tell me what you see in the image and what you hear in the audio."),
      Prompt.Image("/path/to/dog.png"),
      Prompt.Audio("/path/to/sound.mp3"),
    ]),
  )
  .completed();

You can pass multiple images/audio files and interleave text between them. If the model performs poorly, try reordering the text, audio and image parts; this can make a noticeable difference. If images consume too much context, increase contextSize (see the sketch below) or preprocess images with compression.
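
contextSize is the option named above for making room for image tokens. As a sketch, assuming it is accepted as a load-time option alongside the model paths (verify in the docs where the option actually lives):

import { Chat } from "react-native-nobodywho";

const roomierChat = await Chat.fromPath({
  modelPath: "/path/to/vision-model.gguf",
  projectionModelPath: "/path/to/mmproj.gguf",
  contextSize: 8192, // assumption: load-time option; larger values leave room for image tokens
});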

See the Vision & Hearing documentation for model recommendations and advanced tips.