agentkit-ui
v0.1.1
Drop-in React UI for AI agents — streaming, tool calls, canvas-ready
agentkit-ui
The drop-in React UI for AI agents. Add a full-featured streaming chat interface — with tool call visualization, inline result cards, a canvas panel, metrics, and LLM provider config — in under 5 minutes.
npm install agentkit-ui

What you get
- SSE streaming — token-by-token display, automatic retry with exponential backoff
- Tool registration — defineTool() declares your tools once: the schema goes to the LLM, the canvas/card renders the result
- Inline tool cards — tool results appear as compact cards inside the chat (card: true or your own component)
- Canvas panel — full-width side panel for rich tool visualizations (search results, browser, code output, documents)
- Canvas → agent — users can select results in the canvas and send them back into the conversation
- 5 customization layers — CSS vars, classNames, full component overrides, render props, middleware
- Error resilience — rate-limit countdown, interrupted stream recovery, context length detection
- Headless mode — use just the useAgent hook and build your own UI from scratch
- Built-in metrics — call latency, TTFT, tool durations, thumbs up/down feedback
Quick start
import { AgentUI } from 'agentkit-ui'
import 'agentkit-ui/styles.css'
export default function App() {
return (
<div style={{ height: '600px' }}>
<AgentUI endpoint="/api/agent" />
</div>
)
}

Your backend streams SSE. See Backend setup below.
Installation
npm install agentkit-ui # npm
pnpm add agentkit-ui # pnpm
yarn add agentkit-ui # yarn

Peer dependencies (React 18+ must already be in your project):
npm install react react-dom

Backend setup
Your backend must stream data: <JSON>\n\n lines (standard SSE format), ending with data: [DONE]\n\n.
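As an illustration, each frame in this format can be produced with a one-line helper (a sketch, not part of the library):

```typescript
// Encode one event as an SSE frame in the `data: <JSON>\n\n` format above.
function sseFrame(event: object): string {
  return `data: ${JSON.stringify(event)}\n\n`
}

// The stream terminator expected by the client.
const DONE_FRAME = 'data: [DONE]\n\n'
```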
Event protocol
type StreamEvent =
| { type: 'message_start' }
| { type: 'text_delta'; content: string }
| { type: 'tool_call_start'; id: string; name: string; args: Record<string, any> }
| { type: 'tool_progress'; id: string; message: string }
| { type: 'tool_call_result'; id: string; result: any; status: 'done' | 'error'; error?: string }
| { type: 'message_done'; metadata?: { usage?, model?, latency_ms? } }
| { type: 'error'; message: string; code?: string }
| { type: 'keepalive' }

Minimal Next.js backend
// app/api/agent/route.ts
export async function POST(req: Request) {
const { messages } = await req.json()
const encoder = new TextEncoder()
const stream = new ReadableStream({
async start(controller) {
const emit = (obj: object) =>
controller.enqueue(encoder.encode(`data: ${JSON.stringify(obj)}\n\n`))
emit({ type: 'message_start' })
emit({ type: 'text_delta', content: 'Hello! How can I help?' })
emit({ type: 'message_done' })
controller.enqueue(encoder.encode('data: [DONE]\n\n'))
controller.close()
}
})
return new Response(stream, {
headers: { 'Content-Type': 'text/event-stream', 'Cache-Control': 'no-cache' }
})
}

Backend with tool calls
// app/api/agent/route.ts
export async function POST(req: Request) {
const { messages, tools } = await req.json()
// `tools` is the array of ToolSchema objects sent by AgentKit
// Pass it directly to OpenAI or Anthropic
const response = await openai.chat.completions.create({
model: 'gpt-4o',
messages,
tools: tools.map(t => ({ type: 'function', function: t })),
stream: true,
})
const encoder = new TextEncoder()
const stream = new ReadableStream({
async start(controller) {
const emit = (obj: object) =>
controller.enqueue(encoder.encode(`data: ${JSON.stringify(obj)}\n\n`))
emit({ type: 'message_start' })
for await (const chunk of response) {
const delta = chunk.choices[0]?.delta
if (delta?.content) {
emit({ type: 'text_delta', content: delta.content })
}
if (delta?.tool_calls) {
// Simplified: in real streams, tool-call arguments arrive in fragments
// and must be accumulated across chunks before parsing.
const tc = delta.tool_calls[0]
const args = JSON.parse(tc.function.arguments)
// Emit tool_call_start when you have name + args,
// execute the tool on your backend, then emit the result.
emit({ type: 'tool_call_start', id: tc.id, name: tc.function.name, args })
const result = await executeToolOnBackend(tc.function.name, args)
emit({ type: 'tool_call_result', id: tc.id, status: 'done', result })
}
}
emit({ type: 'message_done' })
controller.enqueue(encoder.encode('data: [DONE]\n\n'))
controller.close()
}
})
return new Response(stream, {
headers: { 'Content-Type': 'text/event-stream', 'Cache-Control': 'no-cache' }
})
}

Tool registration
defineTool() is the core API. It connects three things:
- Schema → sent to your backend (and to the LLM) on every request
- card → compact inline card rendered inside the chat message list
- canvas → full-width panel rendered beside the chat
import { AgentUI, defineTool } from 'agentkit-ui'
import 'agentkit-ui/styles.css'
const tools = [
defineTool({
name: 'search_products',
description: 'Search the product catalog by keyword and optional filters',
parameters: {
type: 'object',
properties: {
query: { type: 'string', description: 'Search keyword' },
category: { type: 'string', enum: ['electronics', 'clothing', 'books'] },
maxPrice: { type: 'number', description: 'Maximum price in USD' },
},
required: ['query'],
},
card: true, // built-in compact card inline in chat
canvas: ProductCanvas, // full custom canvas panel for details
displayName: 'Product Search',
icon: '🛍',
}),
]
export default function App() {
return (
<div style={{ height: '700px' }}>
<AgentUI endpoint="/api/agent" tools={tools} canvas />
</div>
)
}

card — inline tool cards
Rendered inside the message flow when the tool completes.
// Option 1: built-in DefaultToolCard (auto-summarizes any result)
defineTool({ ..., card: true })
// Option 2: your own React component
import type { ToolCardProps } from 'agentkit-ui'
function ProductResultCard({ toolCall, onAction }: ToolCardProps) {
const results = toolCall.result ?? []
return (
<div className="product-card">
<h4>{results.length} products found</h4>
{results.map((p: any) => (
<div key={p.id}>{p.name} — ${p.price}</div>
))}
<button onClick={() => onAction('open_canvas')}>
View all →
</button>
<button onClick={() => onAction('use_content', { content: results.map((p: any) => p.name).join(', ') })}>
Use in chat
</button>
</div>
)
}
defineTool({ ..., card: ProductResultCard })

ToolCardProps
type ToolCardProps = {
toolCall: ToolCall // includes .result once done, .status, .icon, .displayName
onAction: (
type: 'open_canvas' | 'use_content' | 'send_message',
data?: any
) => void
}

| onAction type | Effect |
|---|---|
| open_canvas | Opens the tool's canvas panel |
| use_content | Sends data.content as a context message to the agent |
| send_message | Sends data.message as a new user message |
canvas — full canvas panel
Rendered in the side panel (slide-in from right). Receives CanvasRendererProps:
type CanvasRendererProps = {
toolCall: ToolCall
onAction: (action: CanvasAction) => void
}

import type { CanvasRendererProps } from 'agentkit-ui'
function ProductCanvas({ toolCall, onAction }: CanvasRendererProps) {
const results = toolCall.result ?? []
return (
<div style={{ padding: 16 }}>
{results.map((p: any) => (
<div key={p.id} onClick={() =>
onAction({ type: 'use_content', toolName: toolCall.name, toolCallId: toolCall.id, data: { content: p.name } })
}>
{p.name}
</div>
))}
</div>
)
}

Built-in canvas fallback
If no canvas is set, AgentKit matches by tool name automatically:
| Tool name pattern | Canvas |
|---|---|
| browser, navigate, open_url, visit, web_browser | Browser canvas |
| search, web_search, google, bing, query | Search canvas |
| write_file, document, create_doc, edit_file | Document canvas |
| code, execute, run_python, run_js, shell, bash | Code canvas |
| anything else | Generic canvas (shows args + result JSON) |
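The matching above amounts to a substring check over the tool name. The sketch below illustrates the table's behavior; it is not the library's actual implementation:

```typescript
// Illustrative mirror of the fallback table; the real matcher may differ.
function fallbackCanvas(toolName: string): 'browser' | 'search' | 'document' | 'code' | 'generic' {
  const n = toolName.toLowerCase()
  const hit = (words: string[]) => words.some(w => n.includes(w))
  if (hit(['browser', 'navigate', 'open_url', 'visit'])) return 'browser'
  if (hit(['search', 'google', 'bing', 'query'])) return 'search'
  if (hit(['write_file', 'document', 'create_doc', 'edit_file'])) return 'document'
  if (hit(['code', 'execute', 'run_python', 'run_js', 'shell', 'bash'])) return 'code'
  return 'generic'
}
```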
<AgentUI> props reference
Connection
| Prop | Type | Description |
|---|---|---|
| endpoint | string | Your backend SSE URL (recommended for production) |
| provider | 'openai' \| 'anthropic' \| 'ollama' | Direct provider mode (dev/prototyping only) |
| apiKey | string | API key for direct mode |
| model | string | Model name |
| baseUrl | string | Custom base URL (Ollama, proxies) |
| headers | Record<string, string> | Extra headers on every request |
| context | Record<string, any> | Static context passed to every request body |
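For example, a production setup might combine endpoint with headers and context. The header value and context fields here are placeholders, not required names:

```typescript
<AgentUI
  endpoint="/api/agent"
  headers={{ Authorization: `Bearer ${sessionToken}` }} // placeholder token variable
  context={{ userId: currentUser.id, locale: 'en-US' }} // merged into every request body
/>
```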
Features
| Prop | Type | Default | Description |
|---|---|---|---|
| tools | ToolDefinition[] | [] | Tool definitions — schema + card/canvas |
| canvas | boolean | true | Enable the canvas panel |
| metrics | boolean | false | Show debug metrics panel (dev only) |
| debug | boolean | false | Show raw JSON debug panel |
| suggestions | string[] | [] | Suggestion chips on the empty state |
| title | string | 'Agent' | Header title |
| description | string | — | Subtitle shown on the empty state |
| placeholder | string | 'Message...' | Input placeholder |
| height | string \| number | '100%' | Root element height |
Storage & config
| Prop | Type | Default | Description |
|---|---|---|---|
| storage | 'localStorage' \| 'sessionStorage' \| false | false | Persist message history |
| storageKey | string | 'agentkit-messages' | Storage key for messages |
| configKey | string | — | Storage key suffix for LLM config |
| showConfig | boolean | — | Force-show the config wizard |
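For instance, to persist history per user (the key shown is illustrative):

```typescript
<AgentUI
  endpoint="/api/agent"
  storage="localStorage"
  storageKey={`chat-${currentUser.id}`} // one history per user (placeholder variable)
  configKey="main"                      // scopes the saved LLM config
/>
```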
Timeouts
| Prop | Type | Default | Description |
|---|---|---|---|
| timeout | number | 30000 | Request timeout in ms |
| toolTimeout | number | 30000 | Per-tool timeout in ms |
| slowThreshold | number | 5000 | ms before entering slow state |
| staleStreamThreshold | number | 3000 | ms of stream silence before stale state |
Callbacks
<AgentUI
onMessage={(msg) => console.log(msg)}
onToolCall={(tool) => console.log(tool.name)}
onError={(err) => console.error(err.code, err.message)}
onAuthError={() => router.push('/login')}
onFeedback={(messageId, rating) => saveFeedback(messageId, rating)}
onMetrics={(m) => sendToAnalytics({ latency: m.avgLatency, success: m.successRate })}
onSlowResponse={() => showToast('Taking a bit longer...')}
onStaleStream={() => showToast('Still working...')}
/>

Customization
Layer 1 — theme prop (named tokens)
<AgentUI
theme={{
primary: '#10b981', // accent color — buttons, avatars, focus rings
primaryHover: '#059669', // accent hover state
bg: '#ffffff', // root background
surface: '#f8fafc', // card / panel background
text: '#0f172a', // primary text
textMuted: '#64748b', // secondary / placeholder text
border: '#e2e8f0', // default border
radius: '4px', // base border radius
font: 'Inter, sans-serif', // UI font stack
fontMono: 'JetBrains Mono, monospace',
shadow: '0 4px 16px rgba(0,0,0,0.08)',
// Semantic
success: '#10b981',
warning: '#f59e0b',
error: '#ef4444',
// Component-level
userBubble: '#10b981', // user message bubble color (defaults to primary)
userBubbleText: '#ffffff', // text inside user bubble
headerBg: '#0f172a', // header bar background
cardBg: '#ffffff', // inline tool card background
cardBorder: '#e2e8f0', // inline tool card border
}}
/>

Layer 2 — cssVars prop (escape hatch)
For any CSS variable not covered by theme, pass arbitrary --ak-* values:
<AgentUI
cssVars={{
'--ak-shadow-lg': '0 20px 60px rgba(0,0,0,0.25)',
'--ak-r-lg': '20px',
'--ak-surface-2': '#f0f4ff',
}}
/>

All available CSS variables (for use in your own stylesheets):
--ak-primary Accent color
--ak-primary-h Accent hover
--ak-primary-light Accent tint (chip hover, selected bg)
--ak-bg Root / page background
--ak-surface Card / panel background
--ak-surface-2 Elevated surface (code blocks, table rows)
--ak-border Default border
--ak-border-2 Stronger border
--ak-text Primary text
--ak-muted Secondary / placeholder text
--ak-success Green (done state)
--ak-success-bg Light green background
--ak-warning Orange (slow / interrupted)
--ak-warning-bg Light orange background
--ak-error Red (errors)
--ak-error-bg Light red background
--ak-r Base border radius
--ak-r-sm Small radius (badges)
--ak-r-lg Large radius (modals, bubbles)
--ak-r-pill Pill radius (chips, tags)
--ak-font UI font stack
--ak-mono Monospace font (code blocks)
--ak-shadow-sm Subtle shadow
--ak-shadow Default shadow
--ak-shadow-lg Large shadow (modals)
--ak-user-bubble User message bubble color
--ak-user-bubble-text User message text color
--ak-header-bg Header bar background
--ak-card-bg Inline tool card background
--ak-card-border Inline tool card border

Or override in CSS directly on any parent element:
.my-chat-container {
--ak-primary: #10b981;
--ak-r: 4px;
--ak-header-bg: #0f172a;
}

Layer 3 — classNames prop
Inject extra CSS classes onto internal elements without replacing base styles:
<AgentUI
classNames={{
root: 'my-root',
header: 'my-header',
chat: 'my-chat',
messageList: 'my-message-list',
messageUser: 'font-sans bg-blue-50',
messageAssistant: 'bg-gray-50',
messageTool: 'opacity-90',
inputBar: 'my-input',
canvas: 'border-l-2 border-indigo-200',
canvasHeader: 'bg-slate-800 text-white',
status: 'my-status',
suggestions: 'my-suggestions',
modal: 'my-modal',
settings: 'my-settings',
metrics: 'my-metrics',
debug: 'my-debug',
}}
/>

Layer 4 — components prop (full replacement)
Replace any internal component entirely. Your component receives the same typed props:
import type { MessageItemProps, InputBarProps, CanvasRendererProps } from 'agentkit-ui'
function MyMessage({ message, onFeedback, renderMessageActions }: MessageItemProps) {
return (
<div className={`my-msg my-msg--${message.role}`}>
{message.content}
{renderMessageActions?.(message)}
</div>
)
}
function MyInputBar({ onSend, state, placeholder }: InputBarProps) {
const [val, setVal] = React.useState('')
return (
<input
value={val}
placeholder={placeholder}
disabled={state === 'thinking' || state === 'streaming'}
onChange={e => setVal(e.target.value)}
onKeyDown={e => { if (e.key === 'Enter') { onSend(val); setVal('') } }}
/>
)
}
<AgentUI
components={{
MessageItem: MyMessage,
InputBar: MyInputBar,
EmptyState: MyWelcomeScreen,
Header: MyHeader,
// Canvas overrides
SearchCanvas: MySearchCanvas,
BrowserCanvas: MyBrowserCanvas,
DocumentCanvas: MyDocCanvas,
CodeCanvas: MyCodeCanvas,
GenericCanvas: MyGenericCanvas,
}}
/>

Layer 5 — Render props (slot augmentation)
Add content to specific slots without replacing the whole component:
<AgentUI
// Extra buttons inside the input bar
renderExtraControls={() => (
<button onClick={startVoiceInput}>🎤</button>
)}
// Extra action buttons per message
renderMessageActions={(msg) => (
<button onClick={() => shareMessage(msg)}>Share</button>
)}
// Custom tool display inside chat (overrides card)
renderToolDisplay={(tool) => (
tool.name === 'my_tool' ? <MyToolBadge tool={tool} /> : null
)}
// Custom canvas header
renderCanvasHeader={(tool) => (
<span>{tool.displayName} — {tool.elapsedMs}ms</span>
)}
// Custom empty state
renderEmptyState={() => <MyWelcomeScreen />}
/>

Layer 6 — Middleware
Intercept the send pipeline:
<AgentUI
// Modify or cancel outgoing messages
onBeforeSend={(message, context) => {
if (!message.trim()) return null // return null to cancel
return {
message: `[v2] ${message}`,
context: { ...context, userId: currentUser.id },
}
}}
// Override tool rendering before it hits the canvas
onBeforeToolRender={(tool) => {
if (tool.name === 'my_special_tool') return <MyCanvas tool={tool} />
return null // null = use default
}}
// Filter or transform messages before render
transformMessages={(messages) =>
messages.filter(m => !m.metadata?.hidden)
}
/>

Direct provider mode
Call OpenAI, Anthropic, or Ollama directly from the browser — no backend needed.
⚠️ Development and prototyping only — your API key is visible in the browser.
// OpenAI
<AgentUI provider="openai" apiKey="sk-..." model="gpt-4o" />
// Anthropic
<AgentUI provider="anthropic" apiKey="sk-ant-..." model="claude-sonnet-4-6" />
// Ollama (local)
<AgentUI provider="ollama" model="llama3.2" baseUrl="http://localhost:11434" />

Headless usage
Use just the hooks and build your own UI from scratch:
import { useAgent } from 'agentkit-ui'
function MyChat() {
const agent = useAgent({
endpoint: '/api/agent',
storage: 'localStorage',
storageKey: 'my-app',
onToolCall: (tool) => console.log('Tool:', tool.name),
})
return (
<div>
{agent.messages.map(m => (
<div key={m.id} className={`msg msg--${m.role}`}>
{m.content}
</div>
))}
{agent.state === 'thinking' && <Spinner />}
<input onKeyDown={e => {
if (e.key === 'Enter') agent.send(e.currentTarget.value)
}} />
<button onClick={agent.interrupt}>Stop</button>
<button onClick={agent.clear}>Clear</button>
{agent.lastError && <button onClick={agent.retry}>Retry</button>}
<div>State: {agent.state}</div>
{agent.state === 'rate_limited' && (
<div>Retry in {agent.retryCountdown}s</div>
)}
</div>
)
}

useAgent options
| Option | Type | Default | Description |
|---|---|---|---|
| endpoint | string | — | Backend SSE URL |
| provider | string | — | Direct provider (openai, anthropic, ollama) |
| apiKey | string | — | API key for direct mode |
| model | string | — | Model name |
| baseUrl | string | — | Custom base URL |
| headers | object | — | Extra request headers |
| context | object | — | Static context sent in every request |
| tools | ToolSchema[] | — | Tool schemas to forward to backend |
| timeout | number | 30000 | Request timeout (ms) |
| toolTimeout | number | 30000 | Per-tool timeout (ms) |
| slowThreshold | number | 5000 | ms before slow state |
| staleStreamThreshold | number | 3000 | ms stream silence before stale state |
| retry | { attempts: number } | { attempts: 2 } | Retry config |
| storage | string \| false | false | Message persistence |
| storageKey | string | — | Storage key |
| metricsCollector | MetricsCollector | — | Metrics collector instance |
| onMessage | (msg) => void | — | Called on each new message |
| onToolCall | (tool) => void | — | Called on each tool event |
| onError | (err) => void | — | Called on errors |
| onAuthError | () => void | — | Called on 401/403 |
| onSlowResponse | () => void | — | Called when slow threshold exceeded |
| onStaleStream | () => void | — | Called when stale threshold exceeded |
useAgent return values
const {
messages, // AgentMessage[]
state, // AgentState — see below
send, // (message: string, context?: object) => void
retry, // () => void — re-run last failed/interrupted call
interrupt, // () => void — abort current stream
clear, // () => void — clear message history
setContext, // (data: Record<string, any>) => void
isConnected, // boolean
lastError, // AgentError | null
metadata, // last message_done metadata (usage, model, latency_ms)
retryCountdown, // number — seconds remaining in rate_limited state
sessionId, // string — stable session identifier
} = agent

Agent states
idle No active call
thinking Request sent, waiting for first token
streaming Receiving tokens
done Stream completed successfully
error Unrecoverable error
slow Thinking longer than slowThreshold
stale Stream paused longer than staleStreamThreshold
retrying In retry backoff window
rate_limited 429 received, waiting for Retry-After

Metrics
The metrics panel is a developer-only debug tool. End users never see it.
// Development only
<AgentUI metrics={process.env.NODE_ENV === 'development'} />
// Production — use onMetrics callback instead
<AgentUI
onMetrics={(m) => {
analytics.track('agent_call', {
latency: m.avgLatency,
successRate: m.successRate,
toolSuccessRate: m.toolSuccessRate,
})
}}
/>

You can also use MetricsCollector directly:
import { MetricsCollector, useMetrics } from 'agentkit-ui'
const collector = new MetricsCollector()
function MyApp() {
const { metrics } = useMetrics(collector)
return (
<AgentUI
endpoint="/api/agent"
// Pass collector to AgentUI — it tracks everything automatically
// metricsCollector={collector} (via useAgent options in headless mode)
/>
)
}

AgentMetrics shape:
{
totalCalls: number
successRate: number // 0–1
avgLatency: number // ms
p95Latency: number // ms
avgTtft: number // ms (time to first token)
toolSuccessRate: number // 0–1
toolAvgDuration: number // ms
thumbsUpRate: number // 0–1
calls: CallRecord[]
tools: ToolRecord[]
feedback: FeedbackRecord[]
}

Error handling
| Code | Trigger | Behaviour |
|---|---|---|
| rate_limited | HTTP 429 | State → rate_limited, countdown shown, auto-retry after Retry-After header |
| context_length | Provider error | Banner: "Conversation too long — clear history to continue" |
| auth | 401 / 403 | onAuthError callback fired |
| server_error | 5xx | Exponential backoff retry (1s → 2s → 4s → 8s, max 10s) |
| network | Fetch throws | Same retry + isConnected: false |
| interrupted | User clicks Stop | Partial message kept with Retry button |
| timeout | Request exceeds timeout ms | Error state, retry available |
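An onError handler can branch on these codes. The sketch below mirrors the table with a locally defined error shape; the exported AgentError type may carry more fields:

```typescript
// Local mirror of the error shape, for illustration only.
type AgentErrorLike = { code: string; message: string }

// Map an error code from the table above to a user-facing notice.
function errorNotice(err: AgentErrorLike): string {
  switch (err.code) {
    case 'rate_limited': return 'Rate limited. Retrying automatically.'
    case 'context_length': return 'Conversation too long. Clear history to continue.'
    case 'auth': return 'Session expired. Please sign in again.'
    case 'interrupted': return 'Generation stopped. Press Retry to continue.'
    case 'timeout': return 'The request timed out. You can retry.'
    default: return `Something went wrong: ${err.message}`
  }
}
```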
Full example
import { AgentUI, defineTool } from 'agentkit-ui'
import 'agentkit-ui/styles.css'
// Define your tools
const tools = [
defineTool({
name: 'search_products',
description: 'Search the product catalog',
parameters: {
type: 'object',
properties: {
query: { type: 'string', description: 'Search keyword' },
maxPrice: { type: 'number', description: 'Max price in USD' },
},
required: ['query'],
},
card: true, // compact inline card in chat
canvas: ProductCanvas, // full canvas panel for details
displayName: 'Product Search',
icon: '🛍',
}),
]
export default function App() {
return (
<div style={{ height: '700px', borderRadius: 12, overflow: 'hidden' }}>
<AgentUI
endpoint="/api/agent"
tools={tools}
title="Shopping Assistant"
description="Ask me to search products or look up your orders."
suggestions={['Search for laptops under $800', 'Find running shoes']}
canvas
storage="localStorage"
storageKey="my-app"
theme={{
primary: '#6366f1',
radius: '10px',
userBubble: '#6366f1',
headerBg: '#1e1b4b',
}}
cssVars={{ '--ak-shadow-lg': '0 20px 60px rgba(99,102,241,0.2)' }}
onFeedback={(id, rating) => saveFeedback(id, rating)}
onMetrics={(m) => sendToDatadog(m)}
onAuthError={() => router.push('/login')}
onError={(err) => {
if (err.code === 'context_length') showBanner('Conversation too long')
}}
/>
</div>
)
}

TypeScript types
All types are exported:
import type {
// Core
AgentMessage, AgentState, AgentError, AgentErrorCode,
ToolCall, StreamEvent, AgentUsage, AgentResponse,
// Customization
AgentTheme, AgentUIClassNames, AgentUIComponents, AgentUIProps,
// Tool registration
ToolDefinition, ToolSchema, ToolParameterSchema, ToolCardProps,
// Canvas
CanvasType, CanvasAction, CanvasActionType, CanvasRendererProps,
// Hooks
UseAgentOptions, UseAgentReturn,
AgentConfig, AgentConfigMode, UseConfigReturn,
UseMetricsReturn,
// Metrics
AgentMetrics, CallRecord, ToolRecord, FeedbackRecord,
} from 'agentkit-ui'

Requirements
- React 18+
- Node 18+ (build time only)
- A backend that streams SSE — or use direct provider mode for prototyping
License
MIT © Vamshidhar Parupally
