controller-chat
v1.1.1
Conversational search widget for React—unbranded, configurable, and works with or without a backend. Use keyword-only search on your local data, or plug in your own AI/LLM (e.g. Llama 3 via Ollama) for natural-language answers.
No hardcoded backends, no cloud credentials. You pass your API URLs (or omit them for keyword-only mode).
Demo

redlightcam homepage with the controller search assistant—natural language search over events, showcase, and more.
Installation
```sh
npm install controller-chat
```

Quick Start
Option 1: Keyword-only (no backend)
Works out of the box—no API setup. Searches your data array locally.
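The local fallback matches items against common text fields (title, name, description, tags; see the context table below). As an illustration only, not the package's actual implementation, a keyword match over those fields looks roughly like this:

```javascript
// Illustrative keyword filter over the fields the widget's client-side
// fallback searches: title, name, description, tags. Not the real internals.
function keywordSearch(items, query) {
  const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
  return items.filter((item) => {
    const haystack = [item.title, item.name, item.description, ...(item.tags ?? [])]
      .filter(Boolean)
      .join(' ')
      .toLowerCase();
    // Every query term must appear somewhere in the item's text fields.
    return terms.every((t) => haystack.includes(t));
  });
}

const myEvents = [
  { id: 1, title: 'Sunset Cruise', description: 'Weekend car meet', tags: ['weekend'] },
  { id: 2, title: 'Track Day', description: 'Open lapping session' },
];

keywordSearch(myEvents, 'weekend meet'); // matches only the 'Sunset Cruise' event
```

Format your `data` items with these fields and the zero-config mode works immediately.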
```jsx
import { ControllerChat } from 'controller-chat';
import 'controller-chat/styles.css';

<ControllerChat
  context="events"
  data={myEvents}
  onResultClick={(result) => navigate(`/events/${result.id}`)}
  viewAllUrl="/events"
  welcomeMessages={["How can I help find events?"]}
  suggestionChips={[
    { label: 'Upcoming', query: 'upcoming events' },
    { label: 'This Weekend', query: 'this weekend' }
  ]}
/>
```

Option 2: With your own AI backend
Point the widget at your own API endpoints. Your backend handles RAG, LLM, or whatever you use.
```jsx
<ControllerChat
  context="events"
  data={myEvents}
  controllerApiUrl="/api/controller" // Your RAG/search API
  chatApiUrl="/api/chat"             // Your streaming chat API
  chatApiEnabled={true}
  onResultClick={(result) => navigate(result.url)}
  getAboutResponse={() => "We organize local car events."}
/>
```

The widget sends requests to the URLs you provide. You host and control the backend—nothing is built into the package.
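Your `controllerApiUrl` endpoint receives `{ context, query, conversationHistory }` and responds with `{ text, results }` (see the API URLs table below). A minimal sketch of such a handler, assuming an Express-style backend; the substring match is a placeholder for your real RAG or keyword pipeline:

```javascript
// Pure handler logic for the controllerApiUrl contract:
// body { context, query, conversationHistory } -> { text, results }.
function handleControllerQuery(body, itemsByContext) {
  const { context, query } = body;
  const items = itemsByContext[context] ?? [];
  const q = (query ?? '').toLowerCase();
  const results = items.filter((item) =>
    `${item.title ?? ''} ${item.description ?? ''}`.toLowerCase().includes(q)
  );
  return {
    text: results.length
      ? `Found ${results.length} result(s) for "${query}".`
      : `No matches for "${query}".`,
    results,
  };
}

// Wiring into Express (sketch):
// app.post('/api/controller', (req, res) => {
//   res.json(handleControllerQuery(req.body, { events: eventsFromDb }));
// });
```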
Llama 3 & Lightweight LLMs
controller-chat pairs well with Ollama and Llama 3 for local, privacy-friendly AI search—no API keys, no cloud calls.
Why lightweight LLMs?
| Benefit | Description |
|---------|-------------|
| Privacy | Data stays on your machine or your server |
| Cost | No per-token API fees |
| Speed | Smaller models (1B–8B) run on laptops and small VMs |
| Offline | Works without internet once models are downloaded |
Installing Ollama & Llama 3
Install Ollama (Mac, Windows, Linux): ollama.com
```sh
# Linux
curl -fsSL https://ollama.com/install.sh | sh
```

Pull Llama 3 (choose one for your hardware):

```sh
ollama pull llama3.2:1b # ~1.3GB - fastest, runs on almost anything
ollama pull llama3.2:3b # ~2GB - good balance
ollama pull llama3.2    # ~2GB - 3B instruction-tuned (default)
ollama pull llama3      # ~4.7GB - 8B, more capable
```

Run Ollama (if not running as a service):

```sh
ollama serve
```

Point your backend at `http://localhost:11434` (or your Ollama host). Your backend calls the Ollama API; controller-chat calls your backend.
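On the backend side, talking to Ollama is a single HTTP call to its `/api/generate` endpoint. A sketch, assuming a local Ollama instance; `answerWithLlama` is a hypothetical helper name, not part of controller-chat:

```javascript
// Build a non-streaming request for Ollama's /api/generate endpoint.
const OLLAMA_HOST = 'http://localhost:11434';

function buildOllamaRequest(model, prompt) {
  return {
    url: `${OLLAMA_HOST}/api/generate`,
    body: { model, prompt, stream: false },
  };
}

// Hypothetical backend helper: send the request and return the answer text.
async function answerWithLlama(question) {
  const { url, body } = buildOllamaRequest('llama3.2', question);
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  const data = await res.json(); // Ollama returns { response, ... }
  return data.response;
}
```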
Model size guide
| Model | Size | Use case |
|-------|------|----------|
| llama3.2:1b | ~1.3GB | Embedded, Raspberry Pi, low-spec |
| llama3.2:3b | ~2GB | Laptops, small VMs, fast responses |
| llama3 (8B) | ~4.7GB | Higher quality, needs 8GB+ RAM |
Peer Dependencies
- react >= 17
- react-dom >= 17
API URLs (what you provide)
| URL | Method | Purpose |
|-----|--------|---------|
| controllerApiUrl | POST | Fast search—RAG, keyword, or hybrid. Body: { context, query, conversationHistory }. Response: { text, results }. |
| chatApiUrl | POST | Streaming chat. Body: { message, context, sessionId, conversationHistory }. Stream: data: {"type":"token","content":"..."} then data: {"type":"done","sources":[...]}. |
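The stream rows above are standard server-sent-event `data:` lines. A sketch of formatting them on your backend; the `sseLine` helper name is made up for illustration:

```javascript
// Format a chat event as an SSE "data:" line in the shape controller-chat
// expects: a series of token events, then a final done event with sources.
function sseLine(event) {
  return `data: ${JSON.stringify(event)}\n\n`;
}

sseLine({ type: 'token', content: 'Hello' });
// → 'data: {"type":"token","content":"Hello"}\n\n'
sseLine({ type: 'done', sources: [] });
// → 'data: {"type":"done","sources":[]}\n\n'
```

In an Express handler you would set `Content-Type: text/event-stream` and `res.write()` each line as tokens arrive from your LLM.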
Use relative paths like /api/controller and proxy them in your app (Vite, Next.js, etc.) to your backend. The package never knows your infrastructure.
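For example, a Vite dev-server proxy might look like this (the backend port 3001 is an assumption; use whatever your server listens on):

```typescript
// vite.config.ts — forward /api/* to your backend during development.
import { defineConfig } from 'vite';

export default defineConfig({
  server: {
    proxy: {
      '/api': {
        target: 'http://localhost:3001',
        changeOrigin: true,
      },
    },
  },
});
```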
Context & extensibility
context accepts any string—not just the built-in ones. Pass whatever fits your domain.
| Context | Keyword-only behavior |
|---------|------------------------|
| events | Event-specific: filters past dates, deduplicates, "list all" support |
| showcase, products, software | Pre-tuned suggestion chips; generic item search (title, name, description, tags) |
| Any other string | Same as showcase: generic item search. Use suggestionChips to customize quick actions |
Custom contexts (e.g. articles, recipes, inventory): your backend receives the context in every request. Use it to route queries, switch RAG collections, or tailor responses. The client fallback searches data using title, name, description, tags, category—format your items accordingly.
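Since the context arrives with every request, one simple backend pattern is to map it to a search collection. The collection names here are illustrative, not part of the package:

```javascript
// Map the widget's `context` string to a backend search collection.
const COLLECTIONS = {
  events: 'events_index',
  recipes: 'recipes_index',
};

function resolveCollection(context) {
  // Unknown contexts fall back to a general-purpose index.
  return COLLECTIONS[context] ?? 'general_index';
}

resolveCollection('recipes');  // → 'recipes_index'
resolveCollection('articles'); // → 'general_index'
```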
```jsx
<ControllerChat
  context="recipes"
  data={myRecipes}
  suggestionChips={[
    { label: 'Desserts', query: 'dessert recipes' },
    { label: 'Quick meals', query: 'under 30 minutes' },
  ]}
  viewAllUrl="/recipes"
/>
```

Props
| Prop | Type | Default | Description |
|------|------|---------|-------------|
| context | string | 'events' | Search context—any string. Built-in: events, showcase, products, software. Custom: pass your domain (e.g. recipes, articles). |
| data | Array | [] | Items to search (events, products, etc.) |
| inline | boolean | false | Inline mode (no floating button) |
| onResultClick | (result) => void | - | Called when user clicks a result |
| onResultsChange | (results) => void | - | Called when results change |
| viewAllUrl | string | - | URL for "View all" link |
| controllerApiUrl | string \| null | null | Your RAG/search API URL |
| chatApiUrl | string \| null | null | Your streaming chat API URL |
| chatApiEnabled | boolean | true | Enable chat when chatApiUrl is set |
| getAboutResponse | () => string | - | Response for "about" queries |
| aboutPhrases | string[] | - | Phrases that trigger about response |
| suggestionChips | Array<{label, query}> | - | Quick-action chips |
| welcomeMessages | string[] | - | Pool of welcome messages; one is shown at random |
| placeholder | string | 'What are you looking for?' | Input placeholder |
| emptyStateMessage | string | - | Message when no results |
| title | string | 'Search' | Header title |
| logoUrl | string \| null | null | Logo image URL |
| autocompleteSuggestions | string[] | [] | Extra autocomplete hints |
Programmatic open
```js
window.dispatchEvent(new Event('controller-open'));
```

Examples & Resources
- Live demo: redlightcam.co (Events & Showcase pages)
- Homepage: therisecollection.co/portfolio/controller
- Ollama: ollama.com
- Llama models: ollama.com/library/llama3.2
If you found this useful, please ⭐ the repo and share where you found it!
License
ISC
