# @wedobrandish/astro-chat
Streaming AI chat widget + chatPost handler for Astro. The browser posts to your Astro route; the route calls an LLM directly (Anthropic or OpenAI-compatible API with your server-side key) or forwards to your own backend (proxy) that returns OpenAI-shaped SSE — same format the widget already parses.
## Quick start (Anthropic, ~15 minutes)
**Install**

```sh
npm install @wedobrandish/astro-chat
```

Peers: `astro` `^4 || ^5 || ^6`, `zod` `^4`. No Anthropic npm SDK required (uses `fetch`).

**Env** (Astro app, server-only — never `PUBLIC_`):

```sh
ANTHROPIC_API_KEY=sk-ant-...
```

**Route** — `src/pages/api/chat.ts`:

```ts
import type { APIRoute } from "astro";
import { chatPost } from "@wedobrandish/astro-chat";
import { anthropic } from "@wedobrandish/astro-chat/providers";

export const prerender = false;

const provider = anthropic({
  apiKey: import.meta.env.ANTHROPIC_API_KEY ?? "",
});

export const POST: APIRoute = async ({ request }) => {
  return chatPost(request, {
    knowledge: {
      businessName: "My Studio",
      businessType: "Design",
      description: "We build brands and websites.",
      faqs: [{ question: "What are your rates?", answer: "We quote per project." }],
    },
    provider,
  });
};
```

**Widget** — add `ChatWidget` (see Widget below) with `apiPath="/api/chat"`.
Icons default to Bootstrap Icons (`bi bi-*`). Optional stylesheet:

```html
<link
  rel="stylesheet"
  href="https://cdn.jsdelivr.net/npm/bootstrap-icons/font/bootstrap-icons.css"
/>
```
## Providers

Import from `@wedobrandish/astro-chat/providers`.
| Factory | Use case |
|---------|----------|
| `anthropic({ apiKey, model?, maxTokens? })` | Claude via Messages API; SSE translated to OpenAI-shaped deltas for the widget. |
| `openai({ apiKey, model?, maxTokens?, baseURL? })` | OpenAI or compatible servers (Ollama, Azure, etc.); stream passed through. |
| `proxy({ url, headers? })` | Your backend — body: `{ messages, site_id, business_context, stream: true }` (Brick / FastAPI path). |
Keys stay server-side (`import.meta.env.*`), never in the browser.
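All three provider paths end in the same wire format. As an illustration of the OpenAI-shaped SSE framing the widget consumes — this is not the widget's actual parser, just a sketch of the format:

```typescript
// Minimal sketch: extract assistant text from OpenAI-shaped SSE lines.
// Illustration of the wire format only, NOT the package's internal parser.
function extractDeltas(sseBody: string): string {
  let text = "";
  for (const line of sseBody.split("\n")) {
    if (!line.startsWith("data: ")) continue;        // SSE data frames only
    const payload = line.slice("data: ".length).trim();
    if (payload === "[DONE]") break;                 // stream terminator
    const chunk = JSON.parse(payload);
    text += chunk.choices?.[0]?.delta?.content ?? ""; // incremental token text
  }
  return text;
}

const body = [
  'data: {"choices":[{"delta":{"content":"Hel"}}]}',
  'data: {"choices":[{"delta":{"content":"lo"}}]}',
  "data: [DONE]",
].join("\n");

console.log(extractDeltas(body)); // "Hello"
```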
## Knowledge schema (`SiteKnowledge`)

Generic sites pass a small object; Brick templates can use `brickConfigToKnowledge` (see Migration below).
```ts
interface SiteKnowledge {
  businessName: string;
  businessType?: string;
  description?: string;
  sections?: Record<string, unknown>;
  faqs?: Array<{ question: string; answer: string }>;
  contact?: { email?: string; phone?: string; address?: string; url?: string; nextSteps?: string[] };
  extraContext?: string;
}
```

`buildSystemPrompt(knowledge)` and `buildSuggestionQueries(knowledge, max?)` are exported from the main entry and from `@wedobrandish/astro-chat/context`.
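To make the schema concrete, here is a rough sketch of how such an object could be flattened into prompt text. This is an illustration only — `sketchSystemPrompt` is a made-up helper, not the package's `buildSystemPrompt`, whose actual output format differs:

```typescript
// Hypothetical flattening of SiteKnowledge fields into prompt text.
// NOT the package's buildSystemPrompt — only a sketch of the idea.
interface SiteKnowledge {
  businessName: string;
  businessType?: string;
  description?: string;
  faqs?: Array<{ question: string; answer: string }>;
  extraContext?: string;
}

function sketchSystemPrompt(k: SiteKnowledge): string {
  const lines = [`You are the assistant for ${k.businessName}.`];
  if (k.businessType) lines.push(`Business type: ${k.businessType}.`);
  if (k.description) lines.push(k.description);
  for (const f of k.faqs ?? []) lines.push(`Q: ${f.question} A: ${f.answer}`);
  if (k.extraContext) lines.push(k.extraContext);
  return lines.join("\n");
}

const prompt = sketchSystemPrompt({
  businessName: "My Studio",
  faqs: [{ question: "Rates?", answer: "Per project." }],
});
```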
## Guardrails

Default behavior (override with `guardrails: { ... }` on `chatPost`):

- Max 24 turns, 8,000 chars per message (after optional HTML strip).
- Per-IP rate limit of 30 requests / 60 s (in-memory LRU, 10k keys). Set `rateLimitPerIP: null` to disable (recommended when your backend already rate-limits, e.g. Brick).
- Block patterns for basic prompt-injection phrases; `stripHtml: true` on input.

Bring your own store for multi-instance deploys: implement `RateLimitStore` from `@wedobrandish/astro-chat/guardrails` (e.g. Redis / Upstash — example in the plan doc).
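For multi-instance deploys the store would live in Redis / Upstash; as a behavioral illustration only, a fixed-window in-memory limiter might look like the sketch below. The class and method names here are made up — check the `RateLimitStore` type exported from `@wedobrandish/astro-chat/guardrails` for the real contract:

```typescript
// Illustrative fixed-window limiter keyed by IP. The actual RateLimitStore
// interface may use different method names; this sketches the behavior,
// not the contract.
class WindowLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();
  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.hits.set(key, { count: 1, windowStart: now }); // start a new window
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit; // reject once over the per-window limit
  }
}

// 30 requests per 60 s, matching the package default.
const limiter = new WindowLimiter(30, 60_000);
```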
## API route (`chatPost`)

### New signature (recommended)

```ts
chatPost(request, {
  knowledge: siteKnowledge,             // or () => siteKnowledge
  provider: anthropic({ apiKey: ... }), // or openai() / proxy()
  siteId?: "tenant-id",
  systemPrompt?: string | ((k: SiteKnowledge) => string),
  guardrails?: Partial<Guardrails>,
});
```

### Legacy (deprecated, still works)

Same as v0.2.x — forwards to your URL with `{ messages, site_id, business_context, stream: true }` and no package-level rate limit:

```ts
chatPost(request, {
  loadConfig: () => loadConfig() as ChatbotSiteConfig,
  apiUrl: import.meta.env.CHATBOT_API_URL,
  siteId: "optional",
});
```

Requires `config.chatbot?.enabled === true` (403 otherwise).
### OpenAI example (`src/pages/api/chat.ts`)

```ts
import type { APIRoute } from "astro";
import { chatPost } from "@wedobrandish/astro-chat";
import { openai } from "@wedobrandish/astro-chat/providers";

export const prerender = false;

const provider = openai({
  apiKey: import.meta.env.OPENAI_API_KEY ?? "",
  // baseURL: "http://127.0.0.1:11434/v1", // Ollama example
});

export const POST: APIRoute = async ({ request }) => {
  return chatPost(request, {
    knowledge: { businessName: "Demo Co", description: "We ship widgets." },
    provider,
  });
};
```

### Proxy / Brick example
```ts
import type { APIRoute } from "astro";
import { chatPost } from "@wedobrandish/astro-chat";
import { proxy } from "@wedobrandish/astro-chat/providers";
import { brickConfigToKnowledge } from "@wedobrandish/astro-chat/adapters/brick";
import { loadConfig } from "../lib/loadConfig";
import type { ChatbotSiteConfig } from "@wedobrandish/astro-chat/types";

export const prerender = false;

export const POST: APIRoute = async ({ request }) => {
  const config = loadConfig() as ChatbotSiteConfig;
  if (!config.chatbot?.enabled) {
    return new Response(JSON.stringify({ error: "Chat is disabled." }), { status: 403 });
  }
  return chatPost(request, {
    knowledge: () => brickConfigToKnowledge(config),
    provider: proxy({ url: import.meta.env.CHATBOT_API_URL! }),
    siteId: "my-tenant-id",
    guardrails: { rateLimitPerIP: null },
  });
};
```

## Widget
In a layout or page:

```astro
---
import ChatWidget from '@wedobrandish/astro-chat/components/ChatWidget.astro';
import { CHAT_BOOTSTRAP_ICON_DEFAULTS, buildSuggestionQueries } from '@wedobrandish/astro-chat';

const knowledge = {
  businessName: 'Acme',
  faqs: [{ question: 'Hours?', answer: '9–5' }],
};
const suggestionQueries = buildSuggestionQueries(knowledge, 4);
---
<ChatWidget
  headerLine1="Online"
  headerLine2={knowledge.businessName}
  welcomeMessage="Hi! How can I help?"
  suggestionQueries={suggestionQueries}
  actionLinks={[{ title: 'Contact', url: '#contact', primary: true }]}
  apiPath="/api/chat"
  icons={{ ...CHAT_BOOTSTRAP_ICON_DEFAULTS, launcher: 'bi bi-chat-heart-fill' }}
/>
```

Use camelCase CSS keys in `styles`. For Brick configs, `buildChatbotSuggestionQueries(config)` remains available (deprecated).
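The `styles` prop takes nested partials for `header`, `body`, `footer`, `teaser`, and `launcher`. A minimal sketch, assuming each partial accepts camelCase CSS properties (the exact accepted keys may vary by release, and the colors and offsets below are illustrative, not defaults):

```astro
---
import ChatWidget from '@wedobrandish/astro-chat/components/ChatWidget.astro';
---
<!-- Hypothetical values for illustration only. -->
<ChatWidget
  apiPath="/api/chat"
  styles={{
    header: { backgroundColor: '#0f172a', color: '#ffffff' },
    launcher: { bottom: '1.5rem', right: '1.5rem' },
  }}
/>
```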
### Props
| Prop | Description |
|------|-------------|
| `headerLine1` | Small header line (e.g. status: "Online"). |
| `headerLine2` | Main header title (e.g. business name). |
| `welcomeMessage` | First assistant message when the panel opens. |
| `suggestionQueries` | `string[]` — quick-send chip labels. |
| `actionLinks` | Optional `{ title, url, primary? }[]`. |
| `assistantAvatarUrl` | Optional image URL for the header avatar. |
| `apiPath` | POST endpoint for SSE chat. Default `/api/chat`. |
| `class` | Extra classes on the root element. |
| `style` | Extra root styles. |
| `styles` | Nested partials for `header`, `body`, `footer`, `teaser`, `launcher`. |
| `icons` | Bootstrap classes or Astro icon components. |
| (slots) | `launcher-icon`, `close-icon`, `send-icon`, `avatar-fallback`. |
| `launcher` | `mode`, `text`, `iconClass`, `ariaLabel`, `button`, etc. |
Full styling and icon notes are unchanged from earlier releases; see sections below for Bootstrap vs Lucide.
### Icons (Bootstrap by default)

- Omit `icons` → defaults (`bi bi-chat-dots-fill`, `bi bi-x-lg`, `bi bi-send-fill`, `bi bi-building`).
- Strings → `icons={{ launcher: 'bi bi-chat-heart-fill', ... }}`.
- Curated lists → `CHAT_BOOTSTRAP_ICON_SUGGESTIONS` and `CHAT_BOOTSTRAP_ICON_DEFAULTS` from `@wedobrandish/astro-chat`.
### Optional: Lucide or other Astro components
| Approach | How |
|----------|-----|
| `icons` + component | `icons={{ close: CloseIcon }}` — pass `CloseIcon`, not `<CloseIcon />`. |
| Named slot | `<CloseIcon slot="close-icon" size={20} />` — slot wins over `icons`. |

`launcher.iconClass` overrides `icons.launcher` only.
### Custom icons (Astro slots)
| Slot | Replaces |
|------|----------|
| `launcher-icon` | Floating trigger |
| `close-icon` | Header close |
| `send-icon` | Send button |
| `avatar-fallback` | Header placeholder when no `assistantAvatarUrl` |
## Migration from 0.2.x

- No code change required — `chatPost(request, { loadConfig, apiUrl })` still works (deprecated).
- Optional: switch to `knowledge` + `proxy({ url })` and `brickConfigToKnowledge` for explicit guardrails and clearer OSS boundaries.
- The prompt shape for generic `SiteKnowledge` differs slightly from the old flat JSON; snapshot-test your system prompt if you rely on a byte-identical `business_context` for a hosted backend.
- Bump the dependency: `"@wedobrandish/astro-chat": "^0.3.0"`.
## FAQ

- **Can I use this without a backend?** Yes — use `anthropic()` or `openai()` with a server-side env key.
- **Self-hosted LLM?** Yes — `openai({ apiKey, baseURL })` pointing at an OpenAI-compatible endpoint.
- **Are conversations stored?** No — the widget sends the full history with each request unless you add storage.
- **Is the API key exposed to the browser?** No — only the Astro server reads env vars.
## Exports

- `@wedobrandish/astro-chat` — `chatPost`, `buildSystemPrompt`, `buildSuggestionQueries`, legacy builders, types, widget helpers
- `@wedobrandish/astro-chat/api` — `chatPost`
- `@wedobrandish/astro-chat/context` — context builders
- `@wedobrandish/astro-chat/types` — `SiteKnowledge`, `ChatbotSiteConfig`, widget types
- `@wedobrandish/astro-chat/validation` — `chatRequestSchema`
- `@wedobrandish/astro-chat/providers` — `anthropic`, `openai`, `proxy`
- `@wedobrandish/astro-chat/guardrails` — guardrail types, `createMemoryRateLimitStore`, helpers
- `@wedobrandish/astro-chat/adapters/brick` — `brickConfigToKnowledge`
- `@wedobrandish/astro-chat/chat-widget-styles` — appearance helpers
- `@wedobrandish/astro-chat/bootstrap-icons` — icon presets
- `@wedobrandish/astro-chat/components/ChatWidget.astro` — UI
## Examples

See `examples/anthropic-minimal`, `examples/openai-minimal`, and `examples/brick-proxy` (install from the repo; they reference this package via `file:../../`).
## Before publishing (maintainers)

- Scope — `@wedobrandish/astro-chat`, `"publishConfig": { "access": "public" }`.
- Dry run — `npm pack --dry-run` (expect `src`, `components`, `assets`, `README.md`, `LICENSE`, `CHANGELOG.md`, `CONTRIBUTING.md`).
- Release — `npm run typecheck && npm test`, then `npm publish`.
## Contributing

See `CONTRIBUTING.md`.

## License

MIT — see `LICENSE`.
