rawan-bee-bot
v1.3.2
A flexible, AI-powered chatbot UI component for React, supporting multiple AI providers (OpenAI, Gemini, self-hosted LLMs).
rawan-bee-bot
A React chatbot UI you drop into any app: a bottom Messages launcher, a glass-style chat panel, and built-in wiring for OpenAI, Google Gemini, or a self-hosted HTTP endpoint (e.g. Ollama).
Preview
(screenshot omitted)
This document is written for people installing and using the package, not for library authors.
Who this is for
- You use React 17+ and want a ready-made chat UI (no design system required).
- You want to point the widget at OpenAI, Gemini, or your own backend URL.
- You want optional `localStorage` history and optional hooks to log messages to your server.
Requirements
| Requirement | Notes |
|-------------|--------|
| React & React DOM | >= 17. Listed as peer dependencies — install them in your app. |
| Browser fetch | Used for all AI calls. |
| localStorage | Only if persistMessages is true (default). |
The package bundles its own runtime dependencies (e.g. styled-components). You do not need to add styled-components yourself unless you want to.
Installation
```bash
npm install rawan-bee-bot
```

Ensure React is installed in your project:

```bash
npm install react react-dom
```

Add it to your app (3 steps)
1. Import the default component.
2. Render it once (usually near the root of your layout, e.g. inside App).
3. Set aiProvider and the options for that provider (see below).
Minimal example (self-hosted / Ollama)
```jsx
import ChatbotUI from "rawan-bee-bot";

export default function App() {
  return (
    <ChatbotUI
      aiProvider="selfhosted"
      backendUrl="http://localhost:11434/api/generate"
    />
  );
}
```

Minimal example (OpenAI)
```tsx
import ChatbotUI from "rawan-bee-bot";

export default function App() {
  return (
    <ChatbotUI
      aiProvider="openai"
      apiKey={process.env.REACT_APP_OPENAI_API_KEY!}
    />
  );
}
```

Use your bundler’s way of reading env vars (`import.meta.env` in Vite, etc.).
What the UI does
- A fixed bar at the bottom (on wide screens: a tab on the bottom-right) labeled Messages. Clicking it opens the panel.
- The panel shows the conversation, an input field, and a send button.
- Escape closes the panel (when it is open).
- Appearance defaults to a neutral, glassy look and follows system light/dark (`prefers-color-scheme`). You can override colors with `theme` and the font with `fontFamily`.
Choosing an AI provider
Pick one aiProvider value. The widget sends the user’s text to the matching backend.
| aiProvider | When to use | Required props | Default endpoint (if you omit backendUrl) |
|----------------|-------------|----------------|---------------------------------------------|
| "selfhosted" | Local LLM / custom HTTP API compatible with the request shape below | None (API key not used) | http://localhost:11434/api/generate |
| "openai" | OpenAI Chat Completions | apiKey | https://api.openai.com/v1/chat/completions |
| "gemini" | Google Gemini | apiKey | Gemini generateContent URL for gemini-2.0-flash |
backendUrl (optional): Full URL to POST to instead of the default. Use this for proxies or alternate hosts.
Self-hosted (Ollama-style) details
The built-in client POSTs JSON like:

```json
{ "model": "mistral", "prompt": "<user message>" }
```

and expects a streaming-style response body whose lines are JSON objects that may include a `response` string field (typical Ollama `/api/generate` behavior).
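For reference, those streamed lines can be stitched together like this (a minimal sketch of the response shape, not the package’s internal code; the helper name is made up):

```typescript
// Join the `response` fields of an Ollama-style NDJSON body into one string.
// Hypothetical helper for illustration -- not exported by rawan-bee-bot.
function joinOllamaResponse(body: string): string {
  return body
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => {
      try {
        const obj = JSON.parse(line) as { response?: string };
        return obj.response ?? "";
      } catch {
        return ""; // tolerate malformed or partial lines
      }
    })
    .join("");
}
```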
Important: The model name is currently fixed to "mistral" in this package. If you need another model, use a proxy that maps the request, or open an issue / fork to make the model configurable.
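Until the model becomes configurable, a tiny Node proxy can rewrite the request body on the way through. A sketch (the port, target URL, and model name are assumptions for your setup; the env-var guard just keeps the file importable without starting a server):

```typescript
import http from "node:http";

const TARGET = "http://localhost:11434/api/generate"; // assumed Ollama URL
const MODEL = "llama3"; // the model you actually want

// Swap the widget's hard-coded "mistral" for MODEL, preserving other fields.
function rewriteModel(body: string, model: string): string {
  const parsed = JSON.parse(body) as Record<string, unknown>;
  return JSON.stringify({ ...parsed, model });
}

if (process.env.RUN_PROXY) {
  http
    .createServer(async (req, res) => {
      let raw = "";
      for await (const chunk of req) raw += chunk;
      const upstream = await fetch(TARGET, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: rewriteModel(raw, MODEL),
      });
      res.writeHead(upstream.status, { "Content-Type": "application/x-ndjson" });
      res.end(await upstream.text()); // buffers the reply; fine for short answers
    })
    .listen(8787);
}
```

Run it with `RUN_PROXY=1` and set `backendUrl="http://localhost:8787"` on the widget.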
OpenAI details
- Model used in code: `gpt-4o-mini`.
- Auth: `Authorization: Bearer <apiKey>` header.
Gemini details
- API key is sent as a query parameter on the request URL (as required by the REST API).
<ChatbotUI /> — all props
Every prop except aiProvider is optional unless your provider needs it.
| Prop | Type | Default | What it does |
|------|------|---------|----------------|
| aiProvider | "openai" \| "gemini" \| "selfhosted" | — | Required. Which backend to call. |
| apiKey | string | undefined | Required for openai and gemini. Not used for selfhosted. |
| backendUrl | string | Provider default (see table above) | Override the full URL for AI requests. |
| title | string | "Messages" | Text in the panel header. |
| placeholder | string | "Type your message…" | Input placeholder. |
| fontFamily | string | System UI stack (see below) | CSS font-family for the whole widget. |
| theme | ChatbotTheme | Built-in light/dark neutral glass | Partial object; merged over defaults. |
| persistMessages | boolean | true | If true, loads/saves message list in localStorage. |
| storageKey | string | "rawan-bee-bot:chatHistory" | localStorage key when persistMessages is true. |
| className | string | undefined | Applied to the outer wrapper <div> around launcher + panel (for your own layout/CSS). |
| onMessageSend | function | undefined | Called after a successful assistant reply (see Callbacks). |
| onSaveMessage | function | undefined | Called when the widget stores a message (user + assistant; see Callbacks). |
| onError | function | undefined | Called with an Error if the AI request fails. |
Callbacks (onSaveMessage, onMessageSend, onError)
All callbacks are optional. They may be async; the widget will await them where used.
onSaveMessage(message, history)
Called:
- After the user message is added (before the AI responds).
- After each assistant message is added (including error messages shown in the UI).
- `message`: the single message that was just appended (`ChatMessage`).
- `history`: the full list including that message.
Use this to mirror chat to your database or analytics. It does not replace localStorage unless you set persistMessages={false}.
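Because `onSaveMessage` also fires for the error lines shown in the UI, you may want to filter those out before mirroring. A sketch (the `/api/chat-log` endpoint and `shouldMirror` helper are hypothetical; substitute your own):

```typescript
type ChatMessage = {
  id: string;
  role: "user" | "assistant" | "system";
  content: string;
  createdAt: string;
  isError?: boolean;
};

// Skip UI error explanations when mirroring chat to your server.
function shouldMirror(message: ChatMessage): boolean {
  return !message.isError;
}

async function onSaveMessage(message: ChatMessage, history: ChatMessage[]) {
  if (!shouldMirror(message)) return;
  await fetch("/api/chat-log", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message, count: history.length }),
  });
}
```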
onMessageSend(message, history)
Called only after a successful assistant reply (not for user-only turns, not for thrown errors).
- `message`: the assistant’s `ChatMessage`.
- `history`: the full transcript including that reply.
onError(error)
Called when the AI call throws (network, HTTP error, missing API key, etc.). The user still sees an error line in the chat when possible.
ChatMessage shape (TypeScript)
```ts
type ChatMessage = {
  id: string;
  role: "user" | "assistant" | "system";
  content: string;
  createdAt: string; // ISO timestamp
  isError?: boolean; // true when content is an error explanation
};
```

Import the types in TypeScript:

```ts
import type { ChatMessage, ChatbotTheme, ChatbotUIProps } from "rawan-bee-bot";
```

Chat history & localStorage
- `persistMessages={true}` (default): on load, the widget reads `storageKey`; after each change, it writes the full message array back.
- `persistMessages={false}`: no read/write; each full page load starts empty unless you add your own persistence later.
- Changing `storageKey` gives you a separate thread per key (e.g. per user or per page).
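For example, a per-user thread can be derived from your own user id (the helper and key layout below are our own convention, not part of the package):

```typescript
// One chat thread per signed-in user: derive storageKey from a user id.
function chatStorageKey(userId: string): string {
  return `rawan-bee-bot:chatHistory:${userId}`;
}

// Usage (JSX):
// <ChatbotUI aiProvider="selfhosted" storageKey={chatStorageKey(user.id)} />
```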
Styling: theme and fontFamily
Defaults
- Colors: neutral, glassy presets; light vs dark follows `prefers-color-scheme` until you override with `theme`.
- Font: a system UI stack (the same value exported as `SYSTEM_UI_FONT_STACK`).
theme — what each key controls
Pass only the keys you need. Unset keys keep the default for the current light/dark preset.
| Key | Affects |
|-----|--------|
| chatButtonBg | Bottom launcher bar background |
| chatButtonTextColor | Launcher text and accent dot |
| headerBg | Panel header background |
| headerTextColor | Header title color |
| closeButtonColor | Close control |
| userMessageBg / userMessageText | User bubbles |
| botMessageBg / botMessageText | Assistant bubbles |
| inputContainerBg | Area behind the input row |
| inputFieldBg / inputFieldText | Input field |
| sendButtonBg / sendButtonTextColor | Send button |
| typingIndicatorColor | “Typing…” text |
Color format: Any valid CSS color string works (#rrggbb, rgb(), rgba(), etc.). For hex colors, some internal helpers can lighten them for gradients; non-hex values are passed through unchanged.
fontFamily
Set a full CSS stack, e.g.:

```jsx
<ChatbotUI
  aiProvider="openai"
  apiKey={key}
  fontFamily={'"Inter", system-ui, sans-serif'}
/>
```

To extend the package default instead of replacing it:

```jsx
import ChatbotUI, { SYSTEM_UI_FONT_STACK } from "rawan-bee-bot";

<ChatbotUI
  aiProvider="selfhosted"
  fontFamily={`"Inter", ${SYSTEM_UI_FONT_STACK}`}
/>
```

Imports and exports
Default export (most apps):

```js
import ChatbotUI from "rawan-bee-bot";
```

Named exports:

```js
import ChatbotUI, { SYSTEM_UI_FONT_STACK } from "rawan-bee-bot";
```

Type-only imports:

```ts
import type {
  AIProviderName,
  ChatbotTheme,
  ChatMessage,
  ChatbotUIProps,
} from "rawan-bee-bot";
```

Full example (theme + callbacks + font)
```tsx
import ChatbotUI, { SYSTEM_UI_FONT_STACK } from "rawan-bee-bot";

export default function App() {
  return (
    <ChatbotUI
      className="my-chatbot-root"
      aiProvider="openai"
      apiKey={process.env.REACT_APP_OPENAI_API_KEY!}
      title="Support"
      placeholder="How can we help?"
      fontFamily={`"Inter", ${SYSTEM_UI_FONT_STACK}`}
      theme={{
        userMessageBg: "#2563eb",
        sendButtonBg: "#0f172a",
        headerBg: "rgba(255, 255, 255, 0.85)",
      }}
      persistMessages
      storageKey="my-app:support-chat"
      onSaveMessage={async (message, history) => {
        await fetch("/api/chat-log", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ message, history }),
        });
      }}
      onError={(err) => console.error(err.message)}
    />
  );
}
```

Troubleshooting
| Problem | Things to check |
|--------|------------------|
| CORS errors | Browser calls go from your site to the AI URL. OpenAI/Gemini allow browser keys in some setups; local Ollama often needs a same-origin proxy or dev server proxy. |
| openai / gemini errors | apiKey set? Env var actually loaded in the build? |
| selfhosted empty or errors | Is the server running? Does URL match? Is the response shape compatible (streaming JSON lines with response)? Model on server is mistral per request body. |
| Wrong font | Set fontFamily explicitly; ensure the font is loaded in your app (e.g. Google Fonts + CSS). |
| Duplicate React | In monorepos, ensure a single React instance (common Vite/webpack resolve.dedupe issue). |
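For the CORS row above, a dev-server proxy is often the quickest fix with local Ollama. A Vite sketch (assuming Vite, the default Ollama port, and that `backendUrl` is passed straight to `fetch`, so a relative URL works; the `/ollama` prefix is our own choice):

```typescript
// vite.config.ts -- keep the widget's requests same-origin in development
// by proxying /ollama to the local Ollama server.
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    proxy: {
      "/ollama": {
        target: "http://localhost:11434",
        changeOrigin: true,
        rewrite: (path) => path.replace(/^\/ollama/, ""),
      },
    },
  },
});

// Then point the widget at the proxy:
// <ChatbotUI aiProvider="selfhosted" backendUrl="/ollama/api/generate" />
```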
Develop this repo locally
```bash
git clone https://github.com/RawanBee/rawan-bee-bot.git
cd rawan-bee-bot
npm install
npm run build
```

Demo app (demo/)

```bash
cd demo
npm install
npm run dev
```

Questions & bugs
Use GitHub Issues for bug reports and feature requests.
License
MIT
