NobodyWho React Native
NobodyWho is a React Native library for running large language models locally and offline on iOS and Android.
Free to use in commercial projects under the EUPL-1.2 license — no API key required. Supports text, vision, embeddings, RAG & function calling.
- Documentation — React Native & other frameworks documentation
- Starter example app — Test this library in 5 minutes
- Discord — Get help, share ideas, and connect with other developers
- GitHub Issues — Report bugs
- GitHub Discussions — Ask questions and request features
Quick Start
Install the library:
npm install react-native-nobodywho
Supported Model Format
This library uses the GGUF format — a binary format optimized for fast loading and efficient LLM inference. A wide selection of GGUF models is available on Hugging Face.
Compatibility notes:
- Most GGUF models will work, but some may fail due to formatting issues.
- For mobile devices, models under 1 GB tend to run smoothly. As a general rule, the device should have at least twice as much available RAM as the model file size (see the sketch below this list). Note that available RAM differs from total RAM — iOS typically reserves around 1–2 GB for the kernel and system processes, while Android overhead varies by manufacturer: roughly 2 GB on stock Android (e.g. Pixel devices) and 2–4 GB on Samsung, Xiaomi, and Oppo devices due to additional services.
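As a rough sketch of that twice-the-file-size rule of thumb (the byte figures below are illustrative placeholders, not values reported by the library or the OS):
// Fit check: available RAM should be at least ~2x the model file size.
// Both numbers are illustrative placeholders.
const modelFileBytes = 700 * 1024 * 1024; // e.g. a ~700 MB quantized model
const availableRamBytes = 2.5 * 1024 * 1024 * 1024; // RAM left after system overhead
const fitsComfortably = availableRamBytes >= 2 * modelFileBytes;
console.log(fitsComfortably ? "Model should run comfortably" : "Pick a smaller model or quantization");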
Minimum recommended specs:
- iOS: iPhone 11 or newer with at least 4 GB of RAM.
- Android: Snapdragon 855 / Adreno 640 / 6 GB RAM or better.
Model Loading
Models can be loaded from a local file path or downloaded automatically from HuggingFace:
import { Model } from "react-native-nobodywho";
// Download from HuggingFace (cached automatically)
const model = await Model.load({
modelPath: "hf://NobodyWho/Qwen_Qwen3-0.6B-GGUF/Qwen_Qwen3-0.6B-Q4_K_M.gguf",
});
// Or load from a local file
const localModel = await Model.load({ modelPath: "/path/to/model.gguf" });
Downloaded models are cached on disk and reused on subsequent loads.
Chat
import { Chat } from "react-native-nobodywho";
const chat = await Chat.fromPath({
modelPath: "hf://NobodyWho/Qwen_Qwen3-0.6B-GGUF/Qwen_Qwen3-0.6B-Q4_K_M.gguf",
systemPrompt: "You are a helpful assistant.",
});
// Stream tokens
for await (const token of chat.ask("Is water wet?")) {
console.log(token);
}
// Or get the full response
const response = await chat.ask("Is water wet?").completed();
See the Chat documentation for details.
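Follow-up questions can reuse the same chat instance; the sketch below assumes the chat keeps the conversation history between ask() calls:
// Continue the conversation from the chat created above
// (assumes previous turns are retained by the chat instance)
const answer = await chat.ask("Is water wet?").completed();
const followUp = await chat.ask("Summarize your previous answer in one sentence.").completed();
console.log(followUp);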
Tool Calling
Give your LLM the ability to interact with the outside world by defining tools:
import { Chat, Tool } from "react-native-nobodywho";
// Plain function used as the tool implementation (returns a fixed result for demonstration)
function getWeatherForCity(city: string): string {
  return JSON.stringify({ city, temp: 22, condition: "sunny" });
}
const getWeather = new Tool({
name: "get_weather",
description: "Get the current weather for a city",
parameters: [
{ name: "city", type: "string", description: "The city name" },
],
call: getWeatherForCity,
});
const chat = await Chat.fromPath({
modelPath: "/path/to/model.gguf",
tools: [getWeather],
});
const response = await chat.ask("What's the weather in Paris?").completed();
See the Tool Calling documentation for more.
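Tools with several parameters follow the same pattern. The sketch below is hypothetical: it assumes number parameters are supported and that arguments are passed to the call function in the order they are declared.
import { Tool } from "react-native-nobodywho";
// Hypothetical multi-parameter tool (exchange rates are made up for the example)
function convertCurrency(amount: number, currency: string): string {
  const rates: Record<string, number> = { EUR: 0.92, GBP: 0.79 };
  return JSON.stringify({ converted: amount * (rates[currency] ?? 1), currency });
}
const convert = new Tool({
  name: "convert_currency",
  description: "Convert an amount in USD to another currency",
  parameters: [
    { name: "amount", type: "number", description: "Amount in USD" },
    { name: "currency", type: "string", description: "Target currency code, e.g. EUR" },
  ],
  call: convertCurrency,
});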
Sampling
The model outputs a probability distribution over possible tokens. A sampler determines how the next token is selected from that distribution. You can configure sampling to improve output quality or constrain outputs to a specific format (e.g. JSON):
import { Chat, SamplerPresets } from "react-native-nobodywho";
const chat = await Chat.fromPath({
modelPath: "/path/to/model.gguf",
sampler: SamplerPresets.temperature(0.2), // Lower = more deterministic
});
See the Sampling documentation for more.
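To see the effect of temperature, you can compare two chats over the same prompt; the values and prompt below are only illustrative, using the same SamplerPresets.temperature preset shown above:
import { Chat, SamplerPresets } from "react-native-nobodywho";
// Same model, same prompt, different temperature values
const precise = await Chat.fromPath({
  modelPath: "/path/to/model.gguf",
  sampler: SamplerPresets.temperature(0.2), // focused, near-deterministic answers
});
const creative = await Chat.fromPath({
  modelPath: "/path/to/model.gguf",
  sampler: SamplerPresets.temperature(1.0), // more varied, exploratory answers
});
console.log(await precise.ask("Name a color.").completed());
console.log(await creative.ask("Name a color.").completed());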
Vision & Hearing
Provide image and audio information to your LLM.
To enable this, you need two model files:
- A multimodal LLM, so the model can consume image tokens and/or audio tokens
- A matching projection model, which converts images to image tokens and/or audio to audio tokens (usually has mmproj in the name)
Pass the projection model when loading your model, then use Prompt to compose prompts that mix text and images:
import { Chat, Prompt } from "react-native-nobodywho";
const chat = await Chat.fromPath({
modelPath: "/path/to/vision-model.gguf",
projectionModelPath: "/path/to/mmproj.gguf",
});
const response = await chat
.ask(
new Prompt([
Prompt.Text("Tell me what you see in the image and what you hear in the audio."),
Prompt.Image("/path/to/dog.png"),
Prompt.Audio("/path/to/sound.mp3"),
]),
)
.completed();
You can pass multiple images/audio files and interleave text between them. If the model performs poorly, try reordering the text, audio, and image parts — this can make a noticeable difference. If images consume too much context, increase contextSize or preprocess images with compression.
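For example, a prompt that interleaves text with two images (the file paths are placeholders) could look like this:
// Interleave text parts with multiple images; ordering can affect output quality
const comparison = await chat
  .ask(
    new Prompt([
      Prompt.Text("Here is the first photo:"),
      Prompt.Image("/path/to/before.png"),
      Prompt.Text("And here is the second photo:"),
      Prompt.Image("/path/to/after.png"),
      Prompt.Text("Describe the differences between the two photos."),
    ]),
  )
  .completed();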
See the Vision & Hearing documentation for model recommendations and advanced tips.
