
controller-chat

v1.1.1

Unbranded conversational search widget - keyword-only or optional AI (Llama 3)

controller-chat

Conversational search widget for React—unbranded, configurable, and works with or without a backend. Use keyword-only search on your local data, or plug in your own AI/LLM (e.g. Llama 3 via Ollama) for natural-language answers.

No hardcoded backends, no cloud credentials. You pass your API URLs (or omit them for keyword-only mode).


Demo

redlightcam homepage with the controller search assistant—natural language search over events, showcase, and more.

Installation

npm install controller-chat

Quick Start

Option 1: Keyword-only (no backend)

Works out of the box—no API setup. Searches your data array locally.

import { ControllerChat } from 'controller-chat';
import 'controller-chat/styles.css';

<ControllerChat
  context="events"
  data={myEvents}
  onResultClick={(result) => navigate(`/events/${result.id}`)}
  viewAllUrl="/events"
  welcomeMessages={["How can I help find events?"]}
  suggestionChips={[
    { label: 'Upcoming', query: 'upcoming events' },
    { label: 'This Weekend', query: 'this weekend' }
  ]}
/>

Option 2: With your own AI backend

Point the widget at your own API endpoints. Your backend handles RAG, LLM, or whatever you use.

<ControllerChat
  context="events"
  data={myEvents}
  controllerApiUrl="/api/controller"   // Your RAG/search API
  chatApiUrl="/api/chat"               // Your streaming chat API
  chatApiEnabled={true}
  onResultClick={(result) => navigate(result.url)}
  getAboutResponse={() => "We organize local car events."}
/>

The widget sends requests to the URLs you provide. You host and control the backend—nothing is built into the package.
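In keyword mode, a controllerApiUrl backend can be as small as a text matcher. A minimal sketch, assuming the request/response shapes above (handleControllerQuery and the field list are illustrative names, not package exports):

```javascript
// Sketch of a controllerApiUrl handler: accepts { context, query } and
// returns the { text, results } shape the widget expects.
function handleControllerQuery({ query, context }, items) {
  const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
  const results = items.filter((item) =>
    terms.some((t) =>
      ['title', 'name', 'description'].some((f) =>
        (item[f] || '').toLowerCase().includes(t)
      )
    )
  );
  return {
    text: results.length
      ? `Found ${results.length} ${context} matching "${query}".`
      : `No ${context} found for "${query}".`,
    results,
  };
}

// Example data and query (illustrative):
const items = [
  { title: 'Cars & Coffee', description: 'Saturday morning meetup' },
  { title: 'Track Day', description: 'Open lapping session' },
];
const res = handleControllerQuery({ query: 'coffee', context: 'events' }, items);
console.log(res.results.length); // 1
```

Wire this up behind a POST route in whatever server framework you use; the widget only cares about the JSON shapes.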


Llama 3 & Lightweight LLMs

controller-chat pairs well with Ollama and Llama 3 for local, privacy-friendly AI search—no API keys, no cloud calls.

Why lightweight LLMs?

| Benefit | Description |
|---------|-------------|
| Privacy | Data stays on your machine or your server |
| Cost | No per-token API fees |
| Speed | Smaller models (1B–8B) run on laptops and small VMs |
| Offline | Works without internet once models are downloaded |

Installing Ollama & Llama 3

  1. Install Ollama (Mac, Windows, Linux): ollama.com

    # Linux
    curl -fsSL https://ollama.com/install.sh | sh
  2. Pull Llama 3 (choose one for your hardware):

    ollama pull llama3.2:1b   # ~1.3GB - fastest, runs on almost anything
    ollama pull llama3.2:3b   # ~2GB - good balance
    ollama pull llama3.2      # ~2GB - 3B instruction-tuned (default)
    ollama pull llama3        # ~4.7GB - 8B, more capable
  3. Run Ollama (if not running as a service):

    ollama serve
  4. Point your backend at http://localhost:11434 (or your Ollama host). Your backend calls the Ollama API; controller-chat calls your backend.
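The last step is your backend calling Ollama. A minimal sketch of the request body such a backend might POST to Ollama's /api/chat endpoint (buildOllamaRequest and the system prompt are illustrative assumptions, not part of this package):

```javascript
// Hypothetical helper: shape the body your backend would POST to
// http://localhost:11434/api/chat (Ollama's chat endpoint).
function buildOllamaRequest(message, context, conversationHistory = []) {
  return {
    model: 'llama3.2',  // any model you pulled with `ollama pull`
    stream: true,       // Ollama streams response chunks when true
    messages: [
      { role: 'system', content: `You answer questions about ${context}.` },
      ...conversationHistory, // prior { role, content } turns from the widget
      { role: 'user', content: message },
    ],
  };
}

const body = buildOllamaRequest('any car meets this weekend?', 'events');
console.log(body.model, body.messages.length); // llama3.2 2
```

Your backend then relays Ollama's streamed tokens back to controller-chat in the stream format described under "API URLs" below.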

Model size guide

| Model | Size | Use case |
|-------|------|----------|
| llama3.2:1b | ~1.3GB | Embedded, Raspberry Pi, low-spec |
| llama3.2:3b | ~2GB | Laptops, small VMs, fast responses |
| llama3 (8B) | ~4.7GB | Higher quality, needs 8GB+ RAM |


Peer Dependencies

  • react >= 17
  • react-dom >= 17

API URLs (what you provide)

| URL | Method | Purpose |
|-----|--------|---------|
| controllerApiUrl | POST | Fast search—RAG, keyword, or hybrid. Body: { context, query, conversationHistory }. Response: { text, results }. |
| chatApiUrl | POST | Streaming chat. Body: { message, context, sessionId, conversationHistory }. Stream: data: {"type":"token","content":"..."} then data: {"type":"done","sources":[...]}. |
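The chatApiUrl stream format can be exercised with plain JavaScript. A sketch of parsing those data: lines and joining the token events (parseChatStreamLine is an illustrative name; the widget handles this internally when chatApiUrl is set):

```javascript
// Parse one SSE-style line of the chatApiUrl stream into an event object,
// or null for lines that are not data frames.
function parseChatStreamLine(line) {
  if (!line.startsWith('data: ')) return null;
  return JSON.parse(line.slice('data: '.length));
}

// Example stream, as described in the table above:
const lines = [
  'data: {"type":"token","content":"Hel"}',
  'data: {"type":"token","content":"lo"}',
  'data: {"type":"done","sources":[]}',
];

const events = lines.map(parseChatStreamLine);
const text = events
  .filter((e) => e && e.type === 'token')
  .map((e) => e.content)
  .join('');
console.log(text); // Hello
```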

Use relative paths like /api/controller and proxy them in your app (Vite, Next.js, etc.) to your backend. The package never knows your infrastructure.
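For example, a minimal Vite proxy that forwards /api/* to a local backend (port 3001 is an assumption; point it at wherever your server runs):

```javascript
// vite.config.js - sketch: relative /api/* paths in the widget props get
// proxied to your backend during development.
import { defineConfig } from 'vite';

export default defineConfig({
  server: {
    proxy: {
      '/api': 'http://localhost:3001',
    },
  },
});
```

Next.js rewrites or any reverse proxy work the same way; the widget only ever sees the relative URL.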

Context & extensibility

context accepts any string—not just the built-in ones. Pass whatever fits your domain.

| Context | Keyword-only behavior |
|---------|-----------------------|
| events | Event-specific: filters past dates, deduplicates, "list all" support |
| showcase, products, software | Pre-tuned suggestion chips; generic item search (title, name, description, tags) |
| Any other string | Same as showcase: generic item search. Use suggestionChips to customize quick actions |

Custom contexts (e.g. articles, recipes, inventory): your backend receives the context in every request. Use it to route queries, switch RAG collections, or tailor responses. The client fallback searches data using title, name, description, tags, category—format your items accordingly.

<ControllerChat
  context="recipes"
  data={myRecipes}
  suggestionChips={[
    { label: 'Desserts', query: 'dessert recipes' },
    { label: 'Quick meals', query: 'under 30 minutes' },
  ]}
  viewAllUrl="/recipes"
/>

Props

| Prop | Type | Default | Description |
|------|------|---------|-------------|
| context | string | 'events' | Search context—any string. Built-in: events, showcase, products, software. Custom: pass your domain (e.g. recipes, articles). |
| data | Array | [] | Items to search (events, products, etc.) |
| inline | boolean | false | Inline mode (no floating button) |
| onResultClick | (result) => void | - | Called when user clicks a result |
| onResultsChange | (results) => void | - | Called when results change |
| viewAllUrl | string | - | URL for "View all" link |
| controllerApiUrl | string \| null | null | Your RAG/search API URL |
| chatApiUrl | string \| null | null | Your streaming chat API URL |
| chatApiEnabled | boolean | true | Enable chat when chatApiUrl is set |
| getAboutResponse | () => string | - | Response for "about" queries |
| aboutPhrases | string[] | - | Phrases that trigger about response |
| suggestionChips | Array<{label, query}> | - | Quick-action chips |
| welcomeMessages | string[] | - | Random welcome message |
| placeholder | string | 'What are you looking for?' | Input placeholder |
| emptyStateMessage | string | - | Message when no results |
| title | string | 'Search' | Header title |
| logoUrl | string \| null | null | Logo image URL |
| autocompleteSuggestions | string[] | [] | Extra autocomplete hints |

Programmatic open

window.dispatchEvent(new Event('controller-open'));

Examples & Resources

If you found this useful, please ⭐ the repo and share where you found it!

Credits

By THE RISE COLLECTION

License

ISC