
react-brai

v2.0.9

The zero-latency WebGPU runtime for React. Run Llama-3, Phi-3, and Gemma directly in the browser with privacy-first local inference.

REACT-BRAI

OVERVIEW

react-brai is the easiest way to integrate local, private Large Language Models (LLMs) into your React applications. It powers in-browser AI inference using WebGPU, allowing you to run models like Llama-3 and Qwen directly on the client: no API keys, no server costs, and complete privacy.

FEATURES

  • 100% Client-Side: Runs entirely in the browser using WebGPU.
  • Zero Server Costs: No cloud API bills.
  • Privacy First: User data never leaves their device.
  • Streaming Support: Real-time token generation.
  • Progress Tracking: Built-in hooks for download status.
  • Model Agnostic: Supports MLC-compiled models (Llama 3, Qwen, etc.).

INSTALLATION

npm install react-brai

or

yarn add react-brai

Note: Requires a browser with WebGPU support (Chrome 113+, Edge, Brave).
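
Since WebGPU is not available in every browser yet, it can be worth feature-detecting before calling loadModel. A minimal sketch using the standard navigator.gpu API (useWebGPUSupport is a hypothetical helper, not part of react-brai):

import { useEffect, useState } from "react";

// Hypothetical helper: resolves to true/false once the GPU adapter check completes.
export function useWebGPUSupport() {
  const [supported, setSupported] = useState(null); // null = still checking

  useEffect(() => {
    if (!("gpu" in navigator)) {
      setSupported(false);
      return;
    }
    navigator.gpu
      .requestAdapter()
      .then((adapter) => setSupported(adapter !== null))
      .catch(() => setSupported(false));
  }, []);

  return supported;
}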

QUICK START

import { useEffect, useState } from "react";
import { useLocalAI } from "react-brai";

export default function ChatComponent() {
  // Destructure the hook state and methods
  const { 
    loadModel, 
    isReady, 
    chat, 
    response, 
    isLoading, 
    progress 
  } = useLocalAI();

  const [input, setInput] = useState("");

  // 1. Load the model on mount
  useEffect(() => {
    loadModel("Llama-3.2-3B-Instruct-q4f16_1-MLC", { 
      contextWindow: 4096 
    });
  }, []);

  // 2. Handle sending messages
  const handleSend = () => {
    chat([
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: input }
    ]);
  };

  return (
    <div>
      {/* Loading State */}
      {!isReady && (
        <div>
          Loading Model: {Math.round((progress?.progress || 0) * 100)}%
          <p>{progress?.text}</p>
        </div>
      )}

      {/* Chat Interface */}
      {isReady && (
        <>
          <div>{response || "AI is thinking..."}</div>
          
          <input 
            value={input} 
            onChange={(e) => setInput(e.target.value)} 
            disabled={isLoading}
          />
          <button onClick={handleSend} disabled={isLoading}>
            Send
          </button>
        </>
      )}
    </div>
  );
}

API REFERENCE: useLocalAI()

The core hook that manages the Web Worker and WebGPU engine.

RETURN VALUES:

  1. isReady (boolean)
    • True when the model is fully loaded and ready for inference.
  2. isLoading (boolean)
    • True while the model is currently generating a response.
  3. response (string)
    • The real-time streaming text output from the model.
  4. progress (object)
    • Download status. Contains { progress: number, text: string }.
  5. loadModel(modelId, config)
    • Initializes the engine.
    • modelId (string): The MLC model ID (e.g., "Llama-3.2-3B-Instruct-q4f16_1-MLC").
    • config (object):
      • contextWindow (number): Max tokens (default 2048).
  6. chat(messages)
    • Sends a prompt to the model (a multi-turn usage sketch follows this list).
    • messages (array): List of message objects [{ role: "user", content: "..." }].
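
As referenced above, here is a sketch of multi-turn conversation state built only on the useLocalAI() surface documented in this list. useConversation is a hypothetical helper name, and appending the assistant's reply back into history is left to your integration, since this README does not document a completion event:

import { useState } from "react";
import { useLocalAI } from "react-brai";

export function useConversation(systemPrompt = "You are a helpful assistant.") {
  const { chat, response, isLoading } = useLocalAI();
  const [history, setHistory] = useState([
    { role: "system", content: systemPrompt },
  ]);

  const send = (text) => {
    const next = [...history, { role: "user", content: text }];
    setHistory(next);
    chat(next); // resend the full transcript so the model keeps context
  };

  return { send, history, response, isLoading };
}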

REQUIREMENTS & SERVER CONFIG

  1. HTTPS: WebGPU requires a secure context (HTTPS) or localhost.

  2. Headers: Multi-threaded inference relies on cross-origin isolation, so if the hook is not working in your app, configure your development server (Vite/Next.js) to serve files with the following headers:

    // vite.config.js or next.config.js
    headers: {
      'Cross-Origin-Embedder-Policy': 'require-corp',
      'Cross-Origin-Opener-Policy': 'same-origin',
    }
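
For reference, here is one way those headers might be wired up; a sketch, not the package's official config. In Vite, dev-server headers live under server.headers:

// vite.config.js
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    headers: {
      "Cross-Origin-Embedder-Policy": "require-corp",
      "Cross-Origin-Opener-Policy": "same-origin",
    },
  },
});

In Next.js, the equivalent is the async headers() export in next.config.js:

// next.config.js
module.exports = {
  async headers() {
    return [
      {
        source: "/(.*)", // apply to every route
        headers: [
          { key: "Cross-Origin-Embedder-Policy", value: "require-corp" },
          { key: "Cross-Origin-Opener-Policy", value: "same-origin" },
        ],
      },
    ];
  },
};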