# ChatGSD

A conversational UI component where the AI orchestrates the interface. The AI decides when to show calendars, collect payments, or submit forms — chat becomes the control layer.
## Install

```bash
npm install chatgsd
```

## Quick Start
```tsx
import { ChatGSD } from "chatgsd";
import "chatgsd/styles.css";

function App() {
  return (
    <ChatGSD
      endpoint="/api/chat"
      brandName="assistant"
      initialMessage="Hey! How can I help you today?"
    />
  );
}
```

## How It Works
Your backend streams responses to the chat. ChatGSD supports two formats:
### SSE Format (recommended)

Server-Sent Events with JSON payloads. Auto-detected via `Content-Type: text/event-stream`.
```
data: {"type": "text", "content": "Let me "}
data: {"type": "text", "content": "show you "}
data: {"type": "text", "content": "available times."}
data: {"type": "tool_call", "name": "show_slots", "args": {}}
data: {"type": "done"}
```

### Plain Text Format
Raw streaming text with tool calls embedded as markers. The component falls back to this format when the response is not SSE.
```
Let me show you available times.
[TOOL:show_slots]
```

Or with arguments:

```
[TOOL:show_slots({"date": "2024-01-15"})]
```

## Tools
Tools are handlers that run when the AI invokes them. They can render UI, call APIs, update parent state — anything.
```tsx
import { ChatGSD, Tool } from "chatgsd";

const tools: Record<string, Tool> = {
  show_slots: {
    handler: async (args, messages) => {
      // Fetch data, update state, etc.
      return { success: true };
    },
    // `slots` and `selectSlot` come from the surrounding component's
    // state — define them wherever you declare your tools
    renderSuccess: () => (
      <div className="slots">
        {slots.map(slot => (
          <button key={slot.id} onClick={() => selectSlot(slot)}>
            {slot.time}
          </button>
        ))}
      </div>
    ),
  },
  submit_inquiry: {
    terminal: true, // Ends the conversation
    handler: async (args, messages) => {
      await fetch("/api/submit", {
        method: "POST",
        body: JSON.stringify({ messages }),
      });
      return { success: true, message: "Submitted!" };
    },
  },
};
```

Pass the tools to the component:

```tsx
<ChatGSD endpoint="/api/chat" tools={tools} />
```

## Tool Configuration
| Property | Type | Description |
|----------|------|-------------|
| `handler` | `(args, messages) => Promise<ToolResult>` | Called when the tool is invoked |
| `terminal` | `boolean` | If `true`, ends the conversation |
| `renderCalling` | `() => ReactNode` | Custom UI while the tool is running |
| `renderSuccess` | `(result) => ReactNode` | Custom UI on success |
| `renderError` | `(result) => ReactNode` | Custom UI on error |
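As a sketch of how `renderCalling` and `renderError` fit alongside the handler (a hypothetical `check_availability` tool, not part of the library; a `ReactNode` can be a plain string, so no JSX is needed here):

```typescript
// Minimal local stand-ins for chatgsd's types, so this sketch is
// self-contained; in a real app you would `import { Tool } from "chatgsd"`.
interface ToolResult {
  success: boolean;
  message?: string;
}
type RenderResult = string; // a ReactNode can be a plain string

interface Tool {
  handler: (args: unknown, messages: unknown[]) => Promise<ToolResult>;
  terminal?: boolean;
  renderCalling?: () => RenderResult;
  renderSuccess?: (result: ToolResult) => RenderResult;
  renderError?: (result: ToolResult) => RenderResult;
}

// Hypothetical tool covering every render state.
const check_availability: Tool = {
  handler: async () => {
    // A real handler would hit a booking API here.
    return { success: true, message: "3 slots open" };
  },
  renderCalling: () => "Checking availability...",
  renderSuccess: (result) => `Done: ${result.message ?? "ok"}`,
  renderError: (result) => `Could not check: ${result.message ?? "unknown error"}`,
};
```

If `renderCalling` is omitted, the component's default pending UI is shown instead.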
### ToolResult

```ts
interface ToolResult {
  success: boolean;
  message?: string;
}
```

## Props
| Prop | Type | Default | Description |
|------|------|---------|-------------|
| `endpoint` | `string` | `"/api/chat"` | Chat API endpoint |
| `tools` | `Record<string, Tool>` | `{}` | Tool definitions |
| `context` | `Record<string, unknown>` | `{}` | Context sent with each request |
| `headers` | `Record<string, string>` | `{}` | Custom headers for API requests |
| `brandName` | `string` | `"assistant"` | Name shown in the chat |
| `initialMessage` | `string` | `"How can I help?"` | First message displayed |
| `className` | `string` | `""` | Additional CSS class |
| `format` | `"auto" \| "sse" \| "text"` | `"auto"` | Stream format (auto-detects from `Content-Type`) |
| `onMessagesChange` | `(messages) => void` | - | Called when messages update |
| `onComplete` | `(messages) => void` | - | Called when the conversation ends |
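Tying several of these props together — a sketch of a props object you might spread onto the component; the header, context, and brand values are placeholders, not part of the library:

```typescript
// Props for <ChatGSD {...props} />. All values are illustrative.
const props = {
  endpoint: "/api/chat",
  brandName: "acme",
  initialMessage: "Hi! Want to book a call?",
  context: { page: "pricing" },                 // sent with every request
  headers: { Authorization: "Bearer <token>" }, // e.g. for an authenticated API
  format: "auto" as const,                      // let ChatGSD sniff Content-Type
  onComplete: (messages: unknown[]) => {
    console.log(`conversation ended after ${messages.length} messages`);
  },
};
```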
## Backend API

The component expects a streaming endpoint that accepts:
```ts
// Request
{
  messages: Array<{ role: "system" | "user", content: string }>,
  context: Record<string, unknown>
}
```

### SSE Response (Node.js)
```js
app.post("/api/chat", async (req, res) => {
  const { messages, context } = req.body;

  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Cache-Control", "no-cache");
  res.setHeader("Connection", "keep-alive");

  const stream = await openai.chat.completions.create({
    model: "gpt-4",
    stream: true,
    messages: [
      {
        role: "system",
        content: `You are a helpful assistant. Available tools:
- show_slots: Show available meeting times
- submit_inquiry: Submit the inquiry
When you want to use a tool, I'll signal it in the stream.`
      },
      ...messages
    ],
  });

  for await (const chunk of stream) {
    const text = chunk.choices[0]?.delta?.content || "";
    if (text) {
      res.write(`data: ${JSON.stringify({ type: "text", content: text })}\n\n`);
    }
  }

  // Send a tool call if needed (based on your own logic)
  res.write(`data: ${JSON.stringify({ type: "tool_call", name: "show_slots", args: {} })}\n\n`);
  res.write(`data: ${JSON.stringify({ type: "done" })}\n\n`);
  res.end();
});
```

### With Vercel AI SDK
```ts
import { streamText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

export async function POST(req: Request) {
  const { messages, context } = await req.json();

  const result = await streamText({
    model: openai("gpt-4"),
    messages,
    tools: {
      show_slots: tool({
        description: "Show available meeting times",
        parameters: z.object({ date: z.string().optional() }),
      }),
    },
  });

  // The AI SDK streams in a compatible format
  return result.toDataStreamResponse();
}
```

## Styling
Import the default styles:

```ts
import "chatgsd/styles.css";
```

Customize with CSS variables:
```css
:root {
  --chatgsd-bg: #0a0a0a;
  --chatgsd-border: #333;
  --chatgsd-green: #27ca40;
  --chatgsd-yellow: #ffbd2e;
  --chatgsd-red: #ff5f56;
  --chatgsd-text: #e0e0e0;
  --chatgsd-muted: #666;
  --chatgsd-font: ui-monospace, monospace;
}
```

Or write your own styles targeting `.chatgsd-*` classes.
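For example, a light theme scoped through the `className` prop — the `.chat-light` class name is just an illustration, and this assumes the variables cascade down from the component's root element:

```css
/* Applied with <ChatGSD className="chat-light" />; values are illustrative */
.chat-light {
  --chatgsd-bg: #ffffff;
  --chatgsd-text: #1a1a1a;
  --chatgsd-muted: #999;
  --chatgsd-border: #ddd;
}
```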
## Example

Run the example app:

```bash
cd example
npm install
npm run dev
```

## License
MIT
