@wedobrandish/astro-chat

Streaming AI chat widget + chatPost handler for Astro. The browser posts to your Astro route; the route calls an LLM directly (Anthropic or OpenAI-compatible API with your server-side key) or forwards to your own backend (proxy) that returns OpenAI-shaped SSE — same format the widget already parses.

Quick start (Anthropic, ~15 minutes)

  1. Install

    npm install @wedobrandish/astro-chat

    Peers: astro ^4 || ^5 || ^6, zod ^4. No Anthropic npm SDK required (uses fetch).

  2. Env (Astro app, server-only — never PUBLIC_):

    ANTHROPIC_API_KEY=sk-ant-...

  3. Route (src/pages/api/chat.ts):

    import type { APIRoute } from "astro";
    import { chatPost } from "@wedobrandish/astro-chat";
    import { anthropic } from "@wedobrandish/astro-chat/providers";
    
    export const prerender = false;
    
    const provider = anthropic({
      apiKey: import.meta.env.ANTHROPIC_API_KEY ?? "",
    });
    
    export const POST: APIRoute = async ({ request }) => {
      return chatPost(request, {
        knowledge: {
          businessName: "My Studio",
          businessType: "Design",
          description: "We build brands and websites.",
          faqs: [{ question: "What are your rates?", answer: "We quote per project." }],
        },
        provider,
      });
    };
  4. Widget — add ChatWidget (see Widget) with apiPath="/api/chat".

Icons default to Bootstrap Icons (bi bi-*). Optional stylesheet:

<link
  rel="stylesheet"
  href="https://cdn.jsdelivr.net/npm/[email protected]/font/bootstrap-icons.css"
/>

Preview

astro-chat: launcher, teaser, and chat panel

Providers

Import from @wedobrandish/astro-chat/providers.

| Factory | Use case |
|---------|----------|
| anthropic({ apiKey, model?, maxTokens? }) | Claude via the Messages API; SSE is translated to OpenAI-shaped deltas for the widget. |
| openai({ apiKey, model?, maxTokens?, baseURL? }) | OpenAI or compatible servers (Ollama, Azure, etc.); the stream is passed through. |
| proxy({ url, headers? }) | Your backend — body: { messages, site_id, business_context, stream: true } (Brick / FastAPI path). |

Keys stay server-side (import.meta.env.*), never in the browser.

Knowledge schema (SiteKnowledge)

Generic sites pass a small object; Brick templates can use brickConfigToKnowledge (see Migration).

interface SiteKnowledge {
  businessName: string;
  businessType?: string;
  description?: string;
  sections?: Record<string, unknown>;
  faqs?: Array<{ question: string; answer: string }>;
  contact?: { email?: string; phone?: string; address?: string; url?: string; nextSteps?: string[] };
  extraContext?: string;
}

buildSystemPrompt(knowledge) and buildSuggestionQueries(knowledge, max?) are exported from the main entry and @wedobrandish/astro-chat/context.
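
For example, both helpers can be driven by the same object the route passes to chatPost. A minimal sketch using the SiteKnowledge shape above (the exact prompt text produced is an implementation detail of the package):

import { buildSystemPrompt, buildSuggestionQueries } from "@wedobrandish/astro-chat/context";
import type { SiteKnowledge } from "@wedobrandish/astro-chat/types";

// Same shape as the interface above.
const knowledge: SiteKnowledge = {
  businessName: "My Studio",
  businessType: "Design",
  description: "We build brands and websites.",
  faqs: [{ question: "What are your rates?", answer: "We quote per project." }],
};

const systemPrompt = buildSystemPrompt(knowledge);              // prompt string derived from the knowledge object
const suggestionQueries = buildSuggestionQueries(knowledge, 4); // up to 4 quick-send chip labels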

Guardrails

Default behavior (override with guardrails: { ... } on chatPost):

  • Max 24 turns, 8000 chars per message (after optional HTML strip).
  • Per-IP rate limit 30 requests / 60s (in-memory LRU, 10k keys). Set rateLimitPerIP: null to disable (recommended when your backend already limits, e.g. Brick).
  • Block patterns for basic prompt-injection phrases; stripHtml: true on input.

Bring your own store for multi-instance deploys: implement RateLimitStore from @wedobrandish/astro-chat/guardrails (e.g. Redis / Upstash — example in plan doc).
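
A rough sketch of an Upstash Redis-backed store for multi-instance deploys. The consume() method below is an illustrative guess at the store's shape, and how the store is wired into guardrails should follow the actual RateLimitStore type, so treat this as a starting point only:

import { Redis } from "@upstash/redis";

// Illustrative only: the real RateLimitStore interface is exported from
// @wedobrandish/astro-chat/guardrails; match its method names and return types.
const redis = Redis.fromEnv();

export const redisStore = {
  async consume(key: string): Promise<boolean> {
    const hits = await redis.incr(`chat:rl:${key}`);
    if (hits === 1) await redis.expire(`chat:rl:${key}`, 60); // open a 60s window on the first hit
    return hits <= 30; // mirror the default 30 requests / 60s budget
  },
};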

API route (chatPost)

New signature (recommended)

chatPost(request, {
  knowledge: siteKnowledge, // or () => siteKnowledge
  provider: anthropic({ apiKey: ... }), // or openai() / proxy()
  siteId?: "tenant-id",
  systemPrompt?: string | ((k: SiteKnowledge) => string),
  guardrails?: Partial<Guardrails>,
});

Legacy (deprecated, still works)

Same as v0.2.x — forwards to your URL with { messages, site_id, business_context, stream: true } and no package-level rate limit:

chatPost(request, {
  loadConfig: () => loadConfig() as ChatbotSiteConfig,
  apiUrl: import.meta.env.CHATBOT_API_URL,
  siteId: "optional",
});

Requires config.chatbot?.enabled === true (403 otherwise).

OpenAI example (src/pages/api/chat.ts)

import type { APIRoute } from "astro";
import { chatPost } from "@wedobrandish/astro-chat";
import { openai } from "@wedobrandish/astro-chat/providers";

export const prerender = false;

const provider = openai({
  apiKey: import.meta.env.OPENAI_API_KEY ?? "",
  // baseURL: "http://127.0.0.1:11434/v1", // Ollama example
});

export const POST: APIRoute = async ({ request }) => {
  return chatPost(request, {
    knowledge: { businessName: "Demo Co", description: "We ship widgets." },
    provider,
  });
};

Proxy / Brick example

import type { APIRoute } from "astro";
import { chatPost } from "@wedobrandish/astro-chat";
import { proxy } from "@wedobrandish/astro-chat/providers";
import { brickConfigToKnowledge } from "@wedobrandish/astro-chat/adapters/brick";
import { loadConfig } from "../lib/loadConfig";
import type { ChatbotSiteConfig } from "@wedobrandish/astro-chat/types";

export const prerender = false;

export const POST: APIRoute = async ({ request }) => {
  const config = loadConfig() as ChatbotSiteConfig;
  if (!config.chatbot?.enabled) {
    return new Response(JSON.stringify({ error: "Chat is disabled." }), { status: 403 });
  }
  return chatPost(request, {
    knowledge: () => brickConfigToKnowledge(config),
    provider: proxy({ url: import.meta.env.CHATBOT_API_URL! }),
    siteId: "my-tenant-id",
    guardrails: { rateLimitPerIP: null },
  });
};

Widget

In a layout or page:

---
import ChatWidget from '@wedobrandish/astro-chat/components/ChatWidget.astro';
import { CHAT_BOOTSTRAP_ICON_DEFAULTS, buildSuggestionQueries } from '@wedobrandish/astro-chat';

const knowledge = {
  businessName: 'Acme',
  faqs: [{ question: 'Hours?', answer: '9–5' }],
};
const suggestionQueries = buildSuggestionQueries(knowledge, 4);
---

<ChatWidget
  headerLine1="Online"
  headerLine2={knowledge.businessName}
  welcomeMessage="Hi! How can I help?"
  suggestionQueries={suggestionQueries}
  actionLinks={[{ title: 'Contact', url: '#contact', primary: true }]}
  apiPath="/api/chat"
  icons={{ ...CHAT_BOOTSTRAP_ICON_DEFAULTS, launcher: 'bi bi-chat-heart-fill' }}
/>

Use camelCase CSS keys in styles. For Brick configs, buildChatbotSuggestionQueries(config) remains available (deprecated).
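
For instance (the section names come from the Props table below; the specific CSS properties are only an illustration of the camelCase form):

<ChatWidget
  apiPath="/api/chat"
  headerLine2="Acme"
  styles={{
    header: { backgroundColor: '#111827', color: '#f9fafb' },
    launcher: { backgroundColor: '#2563eb' },
  }}
/>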

Props

| Prop | Description |
|------|-------------|
| headerLine1 | Small header line (e.g. status: “Online”). |
| headerLine2 | Main header title (e.g. business name). |
| welcomeMessage | First assistant message when the panel opens. |
| suggestionQueries | string[] — quick-send chip labels. |
| actionLinks | Optional { title, url, primary? }[]. |
| assistantAvatarUrl | Optional image URL for the header avatar. |
| apiPath | POST endpoint for SSE chat. Default /api/chat. |
| class | Extra classes on the root element. |
| style | Extra root styles. |
| styles | Nested partials for header, body, footer, teaser, launcher. |
| icons | Bootstrap classes or Astro icon components. |
| (slots) | launcher-icon, close-icon, send-icon, avatar-fallback. |
| launcher | mode, text, iconClass, ariaLabel, button, etc. |

Full styling and icon notes are unchanged from earlier releases; see sections below for Bootstrap vs Lucide.

Icons (Bootstrap by default)

  • Omit icons → defaults (bi bi-chat-dots-fill, bi bi-x-lg, bi bi-send-fill, bi bi-building).
  • Strings: icons={{ launcher: 'bi bi-chat-heart-fill', ... }}.
  • Curated lists: CHAT_BOOTSTRAP_ICON_SUGGESTIONS, CHAT_BOOTSTRAP_ICON_DEFAULTS from @wedobrandish/astro-chat.

Optional: Lucide or other Astro components

| Approach | How |
|----------|-----|
| icons + component | icons={{ close: CloseIcon }} — pass CloseIcon, not <CloseIcon />. |
| Named slot | <CloseIcon slot="close-icon" size={20} /> — the slot wins over icons. |

launcher.iconClass overrides icons.launcher only.

Custom icons (Astro slots)

| Slot | Replaces |
|------|----------|
| launcher-icon | Floating trigger |
| close-icon | Header close |
| send-icon | Send button |
| avatar-fallback | Header placeholder when no assistantAvatarUrl |
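
Putting the two approaches together, a sketch that assumes a Lucide Astro icon package is installed (the lucide-astro import is illustrative; any Astro component works):

---
import ChatWidget from '@wedobrandish/astro-chat/components/ChatWidget.astro';
import { X as CloseIcon, Send as SendIcon } from 'lucide-astro'; // illustrative icon package
---

<ChatWidget apiPath="/api/chat" icons={{ close: CloseIcon }}>
  <!-- A named slot takes precedence over the matching icons entry -->
  <SendIcon slot="send-icon" size={18} />
</ChatWidget>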

Migration from 0.2.x

  • No code change required: chatPost(request, { loadConfig, apiUrl }) still works (deprecated).
  • Optional: switch to knowledge + proxy({ url }) and brickConfigToKnowledge for explicit guardrails and clearer OSS boundaries.
  • Prompt shape for generic SiteKnowledge differs slightly from the old flat JSON; snapshot-test your system prompt if you rely on byte-identical business_context for a hosted backend.
  • Bump dependency: "@wedobrandish/astro-chat": "^0.3.0".

FAQ

  • Can I use this without a backend? Yes — use anthropic() or openai() with a server env key.
  • Self-hosted LLM? Yes — openai({ apiKey, baseURL }) pointing at an OpenAI-compatible endpoint (see the sketch after this list).
  • Are conversations stored? No — the widget sends full history each request unless you add storage.
  • Is the API key exposed to the browser? No — only the Astro server reads env vars.
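
For the self-hosted case, a minimal sketch pointing openai() at a local Ollama server (the port, placeholder key, and model name are assumptions for illustration):

import { openai } from "@wedobrandish/astro-chat/providers";

// Ollama's OpenAI-compatible endpoint typically ignores the API key,
// so a placeholder value is enough; pick a model you have pulled locally.
const provider = openai({
  apiKey: "ollama",
  baseURL: "http://127.0.0.1:11434/v1",
  model: "llama3.1",
});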

Exports

  • @wedobrandish/astro-chat — chatPost, buildSystemPrompt, buildSuggestionQueries, legacy builders, types, widget helpers
  • @wedobrandish/astro-chat/api — chatPost
  • @wedobrandish/astro-chat/context — context builders
  • @wedobrandish/astro-chat/types — SiteKnowledge, ChatbotSiteConfig, widget types
  • @wedobrandish/astro-chat/validation — chatRequestSchema
  • @wedobrandish/astro-chat/providers — anthropic, openai, proxy
  • @wedobrandish/astro-chat/guardrails — guardrail types, createMemoryRateLimitStore, helpers
  • @wedobrandish/astro-chat/adapters/brick — brickConfigToKnowledge
  • @wedobrandish/astro-chat/chat-widget-styles — appearance helpers
  • @wedobrandish/astro-chat/bootstrap-icons — icon presets
  • @wedobrandish/astro-chat/components/ChatWidget.astro — UI

Examples

See examples/anthropic-minimal, examples/openai-minimal, and examples/brick-proxy (installed from the repo; each references this package via file:../../).

Before publishing (maintainers)

  1. Scope: @wedobrandish/astro-chat, "publishConfig": { "access": "public" }.
  2. Dry run: npm pack --dry-run (expect src, components, assets, README.md, LICENSE, CHANGELOG.md, CONTRIBUTING.md).
  3. Release: npm run typecheck && npm test, then npm publish.

Contributing

See CONTRIBUTING.md.

License

MIT — see LICENSE.