# Langgraph UI Chat Components

`langgraph-ui-components` · v0.0.31

A React component library for building AI chat interfaces with LangChain/LangGraph integration.
## Features
- 🎨 Pre-styled chat UI components - Sidebar, message bubbles, input fields, markdown rendering
- 🔄 Streaming support - Real-time AI response streaming
- 📎 File uploads - Built-in file handling and metadata
- 🎤 Speech-to-text - Built-in microphone button with Whisper transcription support
- 🎭 Custom components - Inject your own React components into chat messages
- 🧩 Provider-based architecture - Flexible state management with React Context
- 📝 TypeScript - Full type definitions included
- 🎨 Tailwind CSS - Pre-built styles, easy to customize
- 🛑 Human-in-the-Loop (HITL) - Built-in interrupt handling for agent approval flows
- 📋 Deep Agent Todos - Built-in "Agent Plan" progress card that tracks multi-step agent tasks in real time
## Installation

```bash
npm install langgraph-ui-components
```

Peer dependencies (install these separately):

```bash
npm install react react-dom @langchain/core @langchain/langgraph @langchain/langgraph-sdk framer-motion lucide-react react-markdown react-spinners rehype-highlight remark-gfm sonner @radix-ui/react-label
```

## Usage
```tsx
import { Sidebar } from 'langgraph-ui-components/components';
import { ChatProvider } from 'langgraph-ui-components/providers';
import 'langgraph-ui-components/styles.css';

function App() {
  return (
    <ChatProvider
      apiUrl="your-api-url"
      assistantId="your-assistant-id"
      identity={{ user_id: "user123", org_id: "org456" }}
    >
      <Sidebar />
    </ChatProvider>
  );
}
```

## Import Paths
Use subpath imports for best auto-import support in VS Code and TypeScript:
```tsx
import { Sidebar, Chat, AskUserInterrupt } from 'langgraph-ui-components/components';
import {
  ChatProvider,
  ChatRuntimeProvider,
  ThreadProvider,
  StreamProvider,
  FileProvider,
  CustomComponentProvider,
  useThread,
  useStreamContext,
  useChatRuntime,
  useFileProvider,
  useCustomComponents,
  useChatSuggestions,
} from 'langgraph-ui-components/providers';
import {
  useTools,
  useModels,
} from 'langgraph-ui-components/hooks';
import type {
  AskUserInterruptProps,
  AskUserResponse,
  Question,
  QuestionAnswer,
  QuestionOption,
} from 'langgraph-ui-components/components';
import 'langgraph-ui-components/styles.css';
```

The root import is also supported for backward compatibility:

```tsx
import { Sidebar, ChatProvider, useStreamContext } from 'langgraph-ui-components';
import type { AskUserInterruptProps, AskUserResponse } from 'langgraph-ui-components';
```

## Speech-to-Text Configuration
To enable speech-to-text with your Whisper API backend, pass the `textToSpeechVoice` prop to the `Sidebar` component:
```tsx
import { Sidebar } from 'langgraph-ui-components/components';
import { ChatRuntimeProvider } from 'langgraph-ui-components/providers';
import 'langgraph-ui-components/styles.css';

function App() {
  return (
    <ChatRuntimeProvider
      apiUrl="your-api-url"
      assistantId="your-assistant-id"
      identity={{
        user_id: "user123",
        org_id: "org456",
      }}
    >
      <Sidebar textToSpeechVoice={{
        apiUrl: "https://domain_url.com/v1/audio/transcriptions",
        apiKey: "your-api-key",
        model: "Systran/faster-whisper-large-v3"
      }}/>
    </ChatRuntimeProvider>
  );
}
```

**Speech-to-Text Props:**

- `textToSpeechVoice.apiUrl` (string): The endpoint URL for your Whisper transcription API.
- `textToSpeechVoice.apiKey` (string): Bearer token for authenticating with your Whisper API backend.
- `textToSpeechVoice.model` (string): The Whisper model to use for transcription (e.g. `"Systran/faster-whisper-large-v3"`).
The microphone button in the ChatInput component will automatically use these settings for audio transcription.
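As a rough sketch of how these three props map onto an OpenAI-compatible transcription request (an assumption based on the `/v1/audio/transcriptions` endpoint shown above; `buildTranscriptionRequest` is illustrative only, not a library export):

```typescript
type TextToSpeechVoice = {
  apiUrl: string; // e.g. "https://domain_url.com/v1/audio/transcriptions"
  apiKey: string; // sent as a Bearer token
  model: string;  // e.g. "Systran/faster-whisper-large-v3"
};

// Illustrative only: shows how the props would translate into an
// OpenAI-compatible multipart transcription request.
function buildTranscriptionRequest(cfg: TextToSpeechVoice) {
  return {
    url: cfg.apiUrl,
    method: "POST" as const,
    headers: { Authorization: `Bearer ${cfg.apiKey}` },
    // multipart/form-data fields: the recorded audio plus the model name
    fields: { model: cfg.model, file: "<recorded-audio-blob>" },
  };
}
```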
## Exported Components
- `Sidebar` - Main chat UI with sidebar navigation
- `Chat` - Standalone chat interface with thread history
- `AskUserInterrupt` - Built-in UI for `ask_user` HITL questions
## Component Props

### Chat Component
The Chat component provides a complete chat interface with thread history and file upload support.
```tsx
import { Chat } from 'langgraph-ui-components/components';

<Chat
  enableToolCallIndicator={true}
  callThisOnSubmit={async () => uploadedFiles}
  handleFileSelect={customFileHandler}
/>
```

**Props:**

- `enableToolCallIndicator?: boolean` - Show visual indicators while AI tools are being executed. Default: `false`
- `callThisOnSubmit?: () => Promise<CallThisOnSubmitResponse | void>` - Custom callback executed before message submission, useful for uploading files to external storage. Return `{ files, contextValues }` to attach files or inject context into the message.
- `handleFileSelect?: (event: React.ChangeEvent<HTMLInputElement>) => void` - Custom file selection handler that overrides the default behavior
- `inputFileAccept?: string` - File types accepted by the file input (e.g. `"image/*,.pdf"`)
- `chatBodyProps?: chatBodyProps` - Customize agent name, avatar, and font size (see chatBodyProps)
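A sketch of how a `callThisOnSubmit` result could be assembled, using the `FileInfo` and `CallThisOnSubmitResponse` shapes from the Types section below (`describeUploads` is a hypothetical helper, and the URL-to-type mapping is an assumption for illustration):

```typescript
// Shapes as documented in the Types section of this README
type FileInfo = {
  fileName: string;
  fileType: string;
  file?: unknown;
  fileData?: string;
  metadata?: Record<string, unknown>;
};
type CallThisOnSubmitResponse = {
  files?: FileInfo[];
  contextValues?: Record<string, unknown>;
};

// Hypothetical helper: turn URLs returned by your own storage backend
// into the { files, contextValues } shape the Chat expects back.
function describeUploads(uploadedUrls: string[]): CallThisOnSubmitResponse {
  const files: FileInfo[] = uploadedUrls.map((url) => ({
    fileName: url.split("/").pop() ?? url,
    fileType: url.endsWith(".pdf") ? "application/pdf" : "application/octet-stream",
    metadata: { url },
  }));
  return { files, contextValues: { upload_count: files.length } };
}
```

A `callThisOnSubmit` callback could then `return describeUploads(await uploadAll(selected))`, where `uploadAll` is your own upload routine.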
### Sidebar Component
The Sidebar component provides a chat interface with collapsible sidebar navigation.
```tsx
import { Sidebar } from 'langgraph-ui-components/components';

<Sidebar
  supportChatHistory={true}
  enableToolCallIndicator={true}
  callThisOnSubmit={async () => uploadedFiles}
  handleFileSelect={customFileHandler}
/>
```

**Props:**

- `supportChatHistory?: boolean` - Enables multi-thread mode, allowing users to switch between conversation threads. When `true`, clicking a thread in ThreadHistory loads that thread's messages. When `false` (default), only a single thread is maintained. Default: `false`
- `enableToolCallIndicator?: boolean` - Show visual indicators while AI tools are being executed. Default: `false`
- `callThisOnSubmit?: () => Promise<{ files?: FileInfo[], contextValues?: Record<string, any> }>` - Custom callback executed before message submission, useful for uploading files to external storage or adding context
- `handleFileSelect?: (event: React.ChangeEvent<HTMLInputElement>) => void` - Custom file selection handler that overrides the default behavior
- `preventSubmit?: boolean` - When `true`, disables all message submission. Useful for read-only or custom submission flows. Default: `false`
- `header?: { title?: string, logoUrl?: string }` - Custom header configuration with title and logo
- `leftPanelContent?: React.ReactNode` - Custom content to display in the left expansion panel
- `leftPanelOpen?: boolean` - External control for the left panel's open state
- `setLeftPanelOpen?: (open: boolean) => void` - External setter for the left panel's open state
- `leftPanelInitialWidth?: number` - Initial width of the left panel in pixels
- `leftPanelClassName?: string` - CSS class name for the left panel container
- `banner?: React.ReactNode` - Optional banner rendered above the chat messages (e.g. an alert or notice)
- `filePreview?: (files: FileInfo[], setFileInput) => React.ReactNode` - Custom file preview renderer for selected files before submission
- `inputFileAccept?: string` - File types accepted by the file input (e.g. `"image/*,.pdf"`)
- `s3_upload?: boolean` - Enable S3 upload mode for file attachments
## Exported Providers

- `ChatProvider` - Core chat state management. Props: `apiUrl`, `assistantId`, `identity?`, `initialMode?` (`"single" | "multi"`, default `"single"`), `customComponents?`, `suspenseFallback?`
- `ChatRuntimeProvider` - Runtime configuration
- `ThreadProvider` - Conversation thread management. Props: `initialMode?` (`"single" | "multi"`, default `"single"`)
- `StreamProvider` - AI streaming responses
- `FileProvider` - File upload handling
- `CustomComponentProvider` - Custom component rendering
## Exported Hooks

All hooks are exported from `langgraph-ui-components/providers` unless noted:
| Hook | Description |
|------|-------------|
| useStreamContext() | Messages, loading state, sendMessage, stop, interrupt — details |
| useThread() | Thread ID, thread list, deleteThread, updateThread, mode — details |
| useChatRuntime() | apiUrl, assistantId, setAssistantId, identity — details |
| useFileProvider() | fileInput: FileInfo[] and setFileInput |
| useCustomComponents() | Register generative UI and interrupt components — details |
| useChatSuggestions() | Opt-in chat suggestions — details |
| useTools() (hooks) | Sidebar tool buttons — details |
| useModels() (hooks) | Model list and selection — details |
### useChatSuggestions Hook
The useChatSuggestions hook enables intelligent, opt-in chat suggestions for your application. It acts as a configuration hook that doesn't return anything but internally registers suggestion settings. The built-in Suggestion component (included in Sidebar) automatically picks up this configuration and displays suggestions only when the hook is used.
#### Key Features
- Opt-in by default - Suggestions only appear when you call this hook
- No return value - Simply call it to enable suggestions
- Context-aware - Pass dependencies for dynamic, contextual suggestions
- Agent integration - Automatically uses agent-provided suggestions when available
#### Basic Usage
```tsx
import { useChatSuggestions } from 'langgraph-ui-components/providers';

function MyComponent() {
  // Simply call the hook - it registers configuration internally
  useChatSuggestions({
    instructions: "Suggest helpful next actions",
    minSuggestions: 1,
    maxSuggestions: 2,
  });
  return <div>Your component content</div>;
}
```

#### Without the Hook
If you don't call useChatSuggestions anywhere in your component tree, no suggestions will be generated or displayed. This makes the feature completely opt-in.
#### Options

- `instructions` (string, optional): Guidance text for suggestion generation. Default: `"Suggest relevant next actions."`
- `minSuggestions` (number, optional): Minimum number of suggestions to display. Default: `2`
- `maxSuggestions` (number, optional): Maximum number of suggestions to display. Default: `4`
Note: The hook returns void - it doesn't provide any return values. The internal Suggestion component handles display and interaction.
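The documented defaults can be summarized in a small sketch (illustrative only; `resolveSuggestionOptions` is not a library export, it just mirrors the defaults listed above):

```typescript
type SuggestionsOptions = {
  instructions?: string;
  minSuggestions?: number;
  maxSuggestions?: number;
};

// Mirrors the documented defaults for useChatSuggestions
function resolveSuggestionOptions(opts: SuggestionsOptions = {}) {
  return {
    instructions: opts.instructions ?? "Suggest relevant next actions.",
    minSuggestions: opts.minSuggestions ?? 2,
    maxSuggestions: opts.maxSuggestions ?? 4,
  };
}
```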
#### Agent Integration
When your agent returns suggestions in the response (via the suggestions field in state), they're automatically used instead of generating defaults:
```json
{
  "messages": [...],
  "suggestions": ["Show part details", "Update configuration", "Get pricing"]
}
```

The system seamlessly switches between agent-provided suggestions and fallback suggestions based on availability.
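The switching behavior described above can be pictured as follows (a sketch of the stated logic, not the library's actual implementation):

```typescript
// Prefer agent-provided suggestions; otherwise fall back to generated ones,
// capped at the configured maximum.
function pickSuggestions(
  agentSuggestions: string[] | undefined,
  fallback: string[],
  maxSuggestions: number
): string[] {
  const source =
    agentSuggestions && agentSuggestions.length > 0 ? agentSuggestions : fallback;
  return source.slice(0, maxSuggestions);
}
```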
#### Context-Aware Suggestions
Pass dependencies as the second argument to generate context-aware suggestions:
```tsx
function ChatInterface() {
  const [lastMessage, setLastMessage] = useState('');

  useChatSuggestions(
    {
      instructions: "Suggest based on conversation context",
      maxSuggestions: 3,
    },
    [lastMessage] // Dependencies trigger context-aware generation
  );

  return <div>...</div>;
}
```

When dependencies change, suggestions are regenerated to match the new context.
### useStreamContext
Access the full streaming state and control functions from anywhere inside the provider tree.
```tsx
import { useStreamContext } from 'langgraph-ui-components/providers';

const {
  messages,          // Message[] — all messages in the conversation
  isLoading,         // boolean — true while the agent is streaming
  interrupt,         // interrupt payload when the agent pauses for human input
  sendMessage,       // send a message programmatically
  submitMessage,     // low-level submit with stream control options
  regenerateMessage, // regenerate an AI response by message ID
  fetchCatalog,      // fetch available agents from /agents/catalog
  stop,              // cancel the current stream
} = useStreamContext();
```

#### sendMessage
Send a message programmatically. The message is appended to the conversation and submitted to the agent.
```tsx
// Simple string
await sendMessage("Hello!");

// With options
await sendMessage("Hello!", {
  type: "human",           // message type, defaults to "human"
  hidden: true,            // hide from UI (useful for system-level triggers)
  id: "custom-id",         // custom message ID instead of an auto-generated UUID
  context: { key: "val" }, // extra context merged with identity
  additional_kwargs: {},   // custom metadata attached to the message
});
```

| Option | Type | Description |
|--------|------|-------------|
| type | Message["type"] | Message type ("human", "system", etc.). Default: "human" |
| hidden | boolean | If true, message is not shown in the chat UI |
| id | string | Custom message ID (auto-generated UUID if omitted) |
| name | string | Required for function/tool messages |
| tool_call_id | string | ID linking this message to a tool call |
| tool_calls | ToolCall[] | Tool calls to attach to the message |
| additional_kwargs | Record<string, unknown> | Custom metadata on the message |
| ui | UIMessage[] | UI components to display alongside the message |
| context | Record<string, unknown> | Context values merged with identity for this message |
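The defaulting behavior in the table can be sketched as follows (illustrative; `applySendDefaults` is not a library export, it only mirrors the documented defaults):

```typescript
import { randomUUID } from "node:crypto";

type SendOptions = {
  type?: string;
  hidden?: boolean;
  id?: string;
};

// Mirrors the documented defaults: type "human", visible in the UI,
// and an auto-generated UUID when no id is supplied.
function applySendDefaults(opts: SendOptions = {}) {
  return {
    type: opts.type ?? "human",
    hidden: opts.hidden ?? false,
    id: opts.id ?? randomUUID(),
  };
}
```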
#### submitMessage
Low-level submit with full control over streaming behavior. Use this when you need non-default stream modes.
```tsx
await submitMessage(messageObject, {
  streamMode: ["values", "updates"],     // which stream modes to use
  streamSubgraphs: true,                 // include subgraph updates
  streamResumable: true,                 // allow stream resumption
  contextValues: { user_role: "admin" }, // extra context for this call
});
```

#### regenerateMessage
Regenerate an AI response. Resumes from the checkpoint before the given message ID.
```tsx
await regenerateMessage(messageId);
```

#### fetchCatalog
Fetch the list of available agents from your API's `/agents/catalog` endpoint.

```tsx
const catalog = await fetchCatalog();
```

#### stop
Cancel the currently active stream.
```tsx
const { stop, isLoading } = useStreamContext();

<button onClick={stop} disabled={!isLoading}>Stop</button>
```

### useThread
Access and manage conversation threads.
```tsx
import { useThread } from 'langgraph-ui-components/providers';

const {
  threadId,         // string | null — current thread ID
  setThreadId,      // switch to a different thread
  threads,          // Thread[] — list of all threads
  getThreads,       // fetch threads from the API
  setThreads,       // directly update the thread list
  configuration,    // ThreadConfiguration — config passed to LangGraph on each call
  setConfiguration, // update thread configuration
  mode,             // "single" | "multi"
  setMode,          // switch between single- and multi-thread modes
  threadsLoading,   // boolean — true while fetching the thread list
  deleteThread,     // delete a thread by ID
  updateThread,     // update thread metadata
} = useThread();
```

#### deleteThread
```tsx
await deleteThread(threadId);
// Removes the thread from the list and clears the current threadId if it was the active one
```

#### updateThread

```tsx
await updateThread(threadId, { title: "My conversation" });
// Updates metadata on the thread and refreshes the thread list
```

#### Thread Configuration
`configuration` is a free-form object passed to the LangGraph API on every stream call. Use it to send per-thread settings your agent reads from `config.configurable`:
```tsx
const { setConfiguration } = useThread();

setConfiguration({
  temperature: 0.7,
  system_prompt: "You are a helpful assistant.",
});
```

#### URL-based Thread Loading
Append `?thread=<threadId>` to the page URL to automatically load a specific thread on mount:
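The library handles this parsing for you on mount; as a sketch of what it amounts to (`getThreadIdFromUrl` is illustrative, not a library export):

```typescript
// Extract the thread ID from a page URL like
// https://yourapp.com/chat?thread=abc123
function getThreadIdFromUrl(href: string): string | null {
  return new URL(href).searchParams.get("thread");
}
```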
```
https://yourapp.com/chat?thread=abc123
```

### useChatRuntime
Access and update the core runtime configuration.
```tsx
import { useChatRuntime } from 'langgraph-ui-components/providers';

const {
  apiUrl,         // string — base API URL
  assistantId,    // string — current assistant/graph ID
  setAssistantId, // switch to a different assistant at runtime
  identity,       // ChatIdentity | null | undefined
} = useChatRuntime();
```

`setAssistantId` is useful when your app lets users pick which agent to talk to:

```tsx
const { setAssistantId } = useChatRuntime();

<button onClick={() => setAssistantId("support_agent")}>Switch to Support</button>
```

### useTools
Manage the sidebar tool buttons (the icon strip on the left panel).
```tsx
import { useTools } from 'langgraph-ui-components/hooks';

const {
  tool,                // CustomTool[] — built-in tools (Search, Chat)
  addTool,             // add a custom tool button
  userDefinedTools,    // CustomTool[] — tools added via addTool
  setUserDefinedTools, // directly replace user-defined tools
} = useTools();
```

#### Adding a Custom Tool
```tsx
import { useEffect } from 'react';
import { useTools } from 'langgraph-ui-components/hooks';
import { Download } from 'lucide-react';

function MyApp() {
  const { addTool } = useTools();

  useEffect(() => {
    addTool({
      label: "Export",
      icon: <Download />,
      alt: "Export conversation", // tooltip text
      onClick: () => handleExport(),
    });
  }, []);
}
```

#### CustomTool Type
```tsx
type CustomTool = {
  label: string;            // display name
  icon: React.ReactElement; // icon component (e.g. from lucide-react)
  alt?: string;             // tooltip text
  onClick: () => void;      // click handler
};
```

### useModels
Fetch and manage model selection. Calls GET /agents/models on mount and persists the selection to localStorage.
```tsx
import { useModels } from 'langgraph-ui-components/hooks';

const {
  models,           // ModelOption[] — available models
  selectedModel,    // string — currently selected model ID
  setSelectedModel, // update selection (also persists to localStorage)
  loading,          // boolean — true while fetching
} = useModels();

// ModelOption type
type ModelOption = {
  id: string;
  name: string;
};
```

Models with "embed" or "rerank" in their ID are automatically filtered out. The selection persists under the localStorage key `"agent-chat:selected-model"` and is safe for SSR environments.
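The filtering rule can be sketched as follows (assuming plain substring matching on the model ID; the library's exact matching rules may differ):

```typescript
type ModelOption = { id: string; name: string };

// Drop embedding and reranking models, keeping only chat-capable models,
// as described above.
function filterChatModels(models: ModelOption[]): ModelOption[] {
  return models.filter(
    (m) => !m.id.includes("embed") && !m.id.includes("rerank")
  );
}
```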
## Custom Components
You can inject custom React components into chat messages using the CustomComponentProvider. Components are registered by name and can be referenced in message content.
### Registering Components via Props

Pass initial components as the `initialComponents` prop to `CustomComponentProvider`:
```tsx
import { CustomComponentProvider } from 'langgraph-ui-components/providers';

const MyCustomButton = ({ text }) => <button>{text}</button>;

function App() {
  return (
    <CustomComponentProvider
      initialComponents={{
        'my-button': MyCustomButton,
      }}
    >
      {/* Your app */}
    </CustomComponentProvider>
  );
}
```

### Registering Components Programmatically
Use the `registerComponent` method from the `useCustomComponents` hook:

```tsx
import { useEffect } from 'react';
import { useCustomComponents } from 'langgraph-ui-components/providers';

function RegisterComponent() {
  const { registerComponent } = useCustomComponents();

  useEffect(() => {
    registerComponent('my-button', ({ text }) => <button>{text}</button>);
  }, [registerComponent]);

  return null;
}
```

### Additional Methods
- `registerComponents(components)`: Register multiple components at once.
- `unregisterComponent(name)`: Remove a registered component.
## Human-in-the-Loop (HITL) Interrupts
LangGraph agents can pause mid-execution and ask a human to review or approve an action before continuing. This library has built-in support for rendering these interrupt requests with custom UI.
### How It Works

1. Your agent raises an interrupt with an `actionRequests` payload
2. The library detects `stream.interrupt` and looks up a registered interrupt component by tool name
3. Your component receives the interrupt data and action callbacks
4. Calling one of the action callbacks resumes the agent
### Agent-side Interrupt Format
Your LangGraph agent should raise an interrupt with this shape:
```python
from langgraph.types import interrupt

interrupt({
    "actionRequests": [
        {
            "name": "send_email",  # must match the name you register
            "args": {"to": "[email protected]", "subject": "Hello"},
            "description": "Send a welcome email"  # optional
        }
    ],
    "reviewConfigs": [
        {
            "actionName": "send_email",
            "allowedDecisions": ["approve", "reject", "edit"]
        }
    ]
})
```

### Registering an Interrupt Component
Use registerInterruptComponent from useCustomComponents() to register a component for a specific tool name:
```tsx
import { useCustomComponents } from 'langgraph-ui-components/providers';
import type { InterruptComponentProps } from 'langgraph-ui-components/providers';
import { useEffect } from 'react';

function SendEmailInterrupt({ interrupt, actions }: InterruptComponentProps) {
  const request = interrupt.actionRequests[0];

  return (
    <div className="border rounded p-4">
      <h3>Approve Action: {request.name}</h3>
      <pre>{JSON.stringify(request.args, null, 2)}</pre>
      <div className="flex gap-2 mt-3">
        <button onClick={() => actions.approve()}>Approve</button>
        <button onClick={() => actions.reject("Not needed")}>Reject</button>
        <button onClick={() => actions.edit({ subject: "Updated subject" })}>
          Edit & Approve
        </button>
      </div>
    </div>
  );
}

function App() {
  const { registerInterruptComponent } = useCustomComponents();

  useEffect(() => {
    // Register for the tool name that matches your agent's interrupt
    registerInterruptComponent('send_email', SendEmailInterrupt);
  }, [registerInterruptComponent]);

  return <Sidebar />;
}
```

You can also register via `ChatProvider`'s `customComponents` prop if you prefer a props-based setup (note: that prop is for generative UI components; for interrupts, use `registerInterruptComponent` as above).
### ask_user Interrupts (Built-in)
For question/answer style interruptions, the library includes a ready-to-use AskUserInterrupt renderer.
If your interrupt payload has `type: "ask_user"` and a `questions` array, ChatBody renders the built-in component automatically.
#### Agent-side format (ask_user)
```python
from langgraph.types import interrupt

interrupt({
    "type": "ask_user",
    "questions": [
        {
            "header": "Environment",
            "question": "Which environment should I deploy to?",
            "options": [
                {"label": "Staging", "description": "Safe pre-production run"},
                {"label": "Production", "description": "Live users"}
            ],
            "allowFreeformInput": True,
            "multiSelect": False,
        }
    ]
})
```

#### TypeScript types
```tsx
import type {
  AskUserInterruptProps,
  AskUserResponse,
  Question,
  QuestionAnswer,
  QuestionOption,
} from 'langgraph-ui-components/components';

// also available from the root:
// import type { AskUserInterruptProps, AskUserResponse } from 'langgraph-ui-components';
```

`AskUserInterruptProps` shape:
```tsx
interface AskUserInterruptProps {
  questions: Question[];
  onSubmit: (response: AskUserResponse) => void;
}
```

#### Overriding ask_user UI
You can override the default card by registering an interrupt component under the ask_user key:
```tsx
import { useEffect } from 'react';
import { useCustomComponents } from 'langgraph-ui-components/providers';
import type { InterruptComponentProps } from 'langgraph-ui-components/providers';

function MyAskUserInterrupt({ interrupt, actions }: InterruptComponentProps) {
  return (
    <div>
      <h3>Need your input</h3>
      <pre>{JSON.stringify(interrupt.questions, null, 2)}</pre>
      <button
        onClick={() =>
          actions.edit({
            answers: {
              Environment: { selected: ["Staging"], freeText: null, skipped: false },
            },
          })
        }
      >
        Submit
      </button>
    </div>
  );
}

function RegisterAskUserInterrupt() {
  const { registerInterruptComponent } = useCustomComponents();

  useEffect(() => {
    registerInterruptComponent('ask_user', MyAskUserInterrupt);
  }, [registerInterruptComponent]);

  return null;
}
```

### InterruptComponentProps
```tsx
interface InterruptComponentProps {
  interrupt: {
    actionRequests: Array<{
      name: string;
      args: Record<string, unknown>;
      description?: string;
    }>;
    reviewConfigs: Array<{
      actionName: string;
      allowedDecisions: string[];
      argsSchema?: Record<string, unknown>;
    }>;
  };
  actions: {
    /** Resume the agent with approval */
    approve: () => void;
    /** Resume the agent with rejection */
    reject: (reason?: string) => void;
    /** Resume the agent with edited arguments */
    edit: (editedArgs: Record<string, unknown>) => void;
  };
}
```

### useCustomComponents — Interrupt Methods
- `registerInterruptComponent(toolName, component)` — Register a component to render when the agent interrupts for the given tool name
- `unregisterInterruptComponent(toolName)` — Remove a registered interrupt component
### Accessing Interrupt State Directly
You can also access the raw interrupt from the stream context if you need to build custom logic:
```tsx
import { useStreamContext } from 'langgraph-ui-components/providers';

function MyComponent() {
  const { interrupt, isLoading } = useStreamContext();

  if (!isLoading && interrupt) {
    console.log('Agent paused:', interrupt.value);
  }
}
```

## Deep Agent Todos
When your agent works through a multi-step plan, it can expose a todos list in its LangGraph state. The library automatically picks this up and renders an "Agent Plan" card above the agent's response — showing a progress bar, the currently active step, and a status-icon list of all tasks.
### What it looks like
```
┌─────────────────────────────────────────┐
│ ✦ Agent Plan                 2 / 4 done │
│ Now: Analysing purchase history         │
│ ████████░░░░░░░░░░░░ 50%                │
│                                         │
│ ✓ Fetch customer profile                │
│ ✓ Load order history                    │
│ ⟳ Analysing purchase history            │
│ ○ Generate recommendations              │
└─────────────────────────────────────────┘
```

Status icons: ✓ completed · ⟳ in_progress (spinning) · ○ pending
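The numbers on the card derive directly from the `todos` array. A sketch of that derivation (illustrative only, not the component's actual code):

```typescript
type TodoItem = {
  id: string;
  content: string;
  status: "pending" | "in_progress" | "completed";
};

// Derive the "2 / 4 done" count, the percent bar, and the "Now: ..."
// subtitle from a todos list.
function todoProgress(todos: TodoItem[]) {
  const done = todos.filter((t) => t.status === "completed").length;
  const now = todos.find((t) => t.status === "in_progress")?.content ?? null;
  const percent = todos.length === 0 ? 0 : Math.round((done / todos.length) * 100);
  return { done, total: todos.length, percent, now };
}
```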
### Agent-side setup (Python)
Add a todos field to your LangGraph state and update it as your agent progresses through steps:
```python
from typing import TypedDict, Literal
from langgraph.graph import StateGraph

class TodoItem(TypedDict):
    id: str
    content: str
    status: Literal["pending", "in_progress", "completed"]

class AgentState(TypedDict):
    messages: list
    todos: list[TodoItem]

def plan_node(state: AgentState):
    return {
        "todos": [
            {"id": "1", "content": "Fetch customer profile", "status": "pending"},
            {"id": "2", "content": "Load order history", "status": "pending"},
            {"id": "3", "content": "Analyse purchase history", "status": "pending"},
            {"id": "4", "content": "Generate recommendations", "status": "pending"},
        ]
    }

def step_1(state: AgentState):
    # Mark step 1 completed and step 2 in_progress
    return {
        "todos": [
            {"id": "1", "content": "Fetch customer profile", "status": "completed"},
            {"id": "2", "content": "Load order history", "status": "in_progress"},
            {"id": "3", "content": "Analyse purchase history", "status": "pending"},
            {"id": "4", "content": "Generate recommendations", "status": "pending"},
        ]
    }

# ... continue updating todos in each node
```

The UI reads `todos` directly from the LangGraph state values — no extra configuration is needed on the frontend.
### How the UI handles it
- **During streaming** — the `todos` list updates live as the agent emits new state values. The progress bar and "Now: …" subtitle reflect the current `in_progress` item.
- **After streaming** — the final todos are frozen against the message group, so the card persists correctly when scrolling back through history.
- **Subgraph support** — when `streamSubgraphs: true`, the library scans nested update events for non-empty `todos` arrays so subgraph nodes don't accidentally overwrite the parent's todo state.
- **Empty arrays** — if an event contains `todos: []` (common in subgraph value events that don't own the todos state), the previous todos are preserved rather than cleared.
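The empty-array rule amounts to a small merge step. A sketch of the described behavior (not the library's actual implementation):

```typescript
type TodoItem = {
  id: string;
  content: string;
  status: "pending" | "in_progress" | "completed";
};

// Keep the previous todos when an event carries none (or an empty array),
// so subgraph value events cannot wipe the parent's plan.
function mergeTodos(prev: TodoItem[], incoming?: TodoItem[]): TodoItem[] {
  return incoming && incoming.length > 0 ? incoming : prev;
}
```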
### TodoItem type
```tsx
import type { TodoItem } from 'langgraph-ui-components';

type TodoItem = {
  id: string;
  content: string;
  status: "pending" | "in_progress" | "completed";
  updatedAt?: string | number | Date; // raw JSON value from stream, not normalized
  run_id?: string;
  runId?: string;
  messageId?: string;
  checkpoint?: string;
};
```

### Enabling tool call indicators alongside todos
`enableToolCallIndicator` must be set to `true` for the Agent Plan card to appear, since the card is part of the agent message rendering pipeline (note that this prop defaults to `false`, so enable it explicitly):
```tsx
<Sidebar enableToolCallIndicator={true} />
// or
<Chat enableToolCallIndicator={true} />
```

## chatBodyProps
The `chatBodyProps` prop on both `Chat` and `Sidebar` customizes how agent messages are displayed:
```tsx
<Sidebar
  chatBodyProps={{
    agentName: "Aria",
    agentAvatarUrl: "https://example.com/avatar.png",
    fontSize: "15px",
  }}
/>
```

**Props:**

- `agentName?: string` — Display name shown above agent messages. Default: `"Agent"`
- `agentAvatarUrl?: string` — URL for the agent avatar image
- `fontSize?: string` — Font size for message text (e.g. `"14px"`, `"1rem"`)
## Types

Full TypeScript definitions are available for:

- `ChatIdentity` — user/org identity + auth token
- `ChatRuntimeContextValue` — `useChatRuntime()` return type
- `FileInfo` — `{ fileName, fileType, file?, fileData?, metadata? }`
- `SuggestionsOptions` — options for `useChatSuggestions`
- `SuggestionConfig` — internal suggestion config shape
- `ThreadMode` — `"single" | "multi"`
- `ThreadConfiguration` — `Record<string, unknown>` passed to LangGraph config
- `ThreadContextType` — `useThread()` return type
- `StateType` — `{ messages, ui?, suggestions?, todos? }`
- `TodoItem` — `{ id, content, status, updatedAt?, run_id?, runId?, messageId?, checkpoint? }`
- `CustomComponentContextValue` — `useCustomComponents()` return type
- `InterruptComponentProps` — props for HITL interrupt components
- `CustomTool` — `{ label, icon, alt?, onClick }`
- `ModelOption` — `{ id, name }`
- `ChatProps` — base props shared by Chat and Sidebar
- `ChatSidebarProps` — Sidebar-specific props (extends ChatProps)
- `ChatUIProps` — Chat-specific props (extends ChatProps)
- `CallThisOnSubmitResponse` — `{ files?, contextValues? }`
- `chatBodyProps` — `{ agentName?, agentAvatarUrl?, fontSize? }`
- `headerProps` — `{ title?, logoUrl? }`
- `textToSpeechVoice` — `{ apiUrl, apiKey, model }`
