@convai/web-sdk v0.2.4
@convai/web-sdk
JavaScript/TypeScript SDK for building AI voice assistants with real-time audio/video streaming. Drop-in widgets for React and Vanilla JavaScript/TypeScript with customizable UI components.
📑 Table of Contents
- Installation
- Quick Start
- Core Concepts
- React SDK
- Vanilla SDK
- Video & Screen Share
- Building Custom UIs
- API Reference
- Getting Credentials
- TypeScript Support
- Support
Installation
npm install @convai/web-sdk

Peer Dependencies (React only):

npm install react@^18.0.0 react-dom@^18.0.0

Quick Start
React - ConvaiWidget
The ConvaiWidget is a complete, pre-built chat interface with voice/video capabilities.
import { useConvaiClient, ConvaiWidget } from "@convai/web-sdk/react";
function App() {
// Initialize the Convai client
const convaiClient = useConvaiClient({
apiKey: "your-api-key",
characterId: "your-character-id",
});
return <ConvaiWidget convaiClient={convaiClient} />;
}

That's it! The widget auto-connects on first user interaction and handles all UI/audio for you.
🤖 For AI Code Generators (v0, Lovable, Bolt, etc.)
If you're using an AI coding assistant to add Convai to your project, use this exact template to avoid errors:
Copy-Paste Template (Works Every Time)
import { useConvaiClient, ConvaiWidget } from "@convai/web-sdk/react";
export default function App() {
// Step 1: Create the client with your credentials
const convaiClient = useConvaiClient({
apiKey: "your-api-key-here",
characterId: "your-character-id-here"
});
// Step 2: Pass ONLY the client to the widget
return <ConvaiWidget convaiClient={convaiClient} />;
}

Common Mistakes AI Tools Make
❌ DON'T DO THIS:
// Wrong: Passing props directly to ConvaiWidget
<ConvaiWidget apiKey="..." characterId="..." />
// Wrong: Stringifying the client
<ConvaiWidget convaiClient={JSON.stringify(convaiClient)} />
// Wrong: Spreading client properties
<ConvaiWidget {...convaiClient} />
// Wrong: Using client in string context
const info = `Client: ${convaiClient}`; // "Cannot convert object to primitive value"
// Wrong: Passing client through env vars
const client = process.env.CONVAI_CLIENT; // This won't work

✅ DO THIS:
// Correct: Client created in component, passed as object
const convaiClient = useConvaiClient({
apiKey: "your-api-key",
characterId: "your-character-id"
});
return <ConvaiWidget convaiClient={convaiClient} />;

If You Get "Cannot convert object to primitive value"
This error means you're using the client object in a primitive context. Check for:
String concatenation:
// ❌ Wrong
console.log("Client: " + convaiClient);
// ✅ Correct
console.log("Connected:", convaiClient.state.isConnected);

Template literals:

// ❌ Wrong
const text = `Client: ${convaiClient}`;
// ✅ Correct
const text = `Status: ${convaiClient.state.agentState}`;

Using as a key:

// ❌ Wrong
<div key={convaiClient}>...</div>
// ✅ Correct
<div key="convai-widget">...</div>
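To see why this error appears, here is a minimal, self-contained sketch. It uses `Object.create(null)` as a stand-in for any object that cannot be coerced to a string, which is how the client object behaves in a primitive context:

```javascript
// Stand-in for an object that cannot be coerced to a primitive,
// similar to using the client object in a string context.
const opaque = Object.create(null); // no Object.prototype, so no toString()

let threw = false;
try {
  const text = `Client: ${opaque}`; // forces primitive coercion
} catch (err) {
  threw = true; // TypeError: Cannot convert object to primitive value
}
console.log(threw); // true
```

The fix is always the same: log or interpolate a specific primitive property (e.g. `convaiClient.state.agentState`), never the client object itself.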
Complete Working Example for AI Tools
import React from 'react';
import { useConvaiClient, ConvaiWidget } from "@convai/web-sdk/react";
function App() {
// Initialize the client
const convaiClient = useConvaiClient({
apiKey: "your-api-key-here",
characterId: "your-character-id-here",
enableVideo: false, // Set to true if you need video
});
// Optional: Add debug logging
console.log("Convai client initialized:", !!convaiClient);
console.log("Connection status:", convaiClient?.state?.isConnected);
return (
<div style={{ width: '100vw', height: '100vh' }}>
<ConvaiWidget convaiClient={convaiClient} />
</div>
);
}
export default App;

Prompt for AI Code Generators
Use this prompt to get correct code:
Add Convai voice chat to this app using @convai/web-sdk.
CRITICAL RULES:
1. Import: import { useConvaiClient, ConvaiWidget } from "@convai/web-sdk/react";
2. Create client INSIDE component: const convaiClient = useConvaiClient({ apiKey: "...", characterId: "..." });
3. Pass ONLY client to widget: <ConvaiWidget convaiClient={convaiClient} />
4. DO NOT pass apiKey or characterId directly to ConvaiWidget
5. DO NOT stringify, spread, or destructure the client object
6. DO NOT use the client object in string contexts
Example:
const convaiClient = useConvaiClient({ apiKey: "KEY", characterId: "ID" });
return <ConvaiWidget convaiClient={convaiClient} />;

Video & Screen Share with AI Tools
If you need video/screen share, add these TWO changes:
// Step 1: Add enableVideo to client config
const convaiClient = useConvaiClient({
apiKey: "your-api-key",
characterId: "your-character-id",
enableVideo: true // ← Required for video features
});
// Step 2: Show controls in widget
<ConvaiWidget
convaiClient={convaiClient}
showVideo={true} // ← Shows video button
showScreenShare={true} // ← Shows screen share button
/>

Without enableVideo: true, video and screen share will NOT work even if you show the buttons.
Vanilla JS/TS - ConvaiWidget
import { ConvaiClient, createConvaiWidget } from "@convai/web-sdk/vanilla";
// Create client with configuration
const client = new ConvaiClient({
apiKey: "your-api-key",
characterId: "your-character-id",
});
// Create and mount widget - auto-connects on first user click
const widget = createConvaiWidget(document.body, {
convaiClient: client,
});
// Cleanup when done
widget.destroy();

Core Concepts
The Architecture
┌─────────────────────────────────────────────────┐
│ ConvaiWidget (UI Layer) │
│ ├─ Chat Interface │
│ ├─ Voice Mode │
│ └─ Video/Screen Share UI │
└─────────────────────────────────────────────────┘
▼
┌─────────────────────────────────────────────────┐
│ ConvaiClient (Core Logic) │
│ ├─ Connection Management │
│ ├─ Message Handling │
│ ├─ State Management │
│ └─ Audio/Video Controls │
└─────────────────────────────────────────────────┘
▼
┌─────────────────────────────────────────────────┐
│ WebRTC Room (Communication Layer) │
│ ├─ Real-time Audio/Video Streaming │
│ ├─ Track Management │
│ └─ Network Communication │
└─────────────────────────────────────────────────┘
▼
┌─────────────────────────────────────────────────┐
│ AudioRenderer (Critical for Playback) │
│ ├─ Attaches audio tracks to DOM │
│ ├─ Manages audio elements │
│ └─ Enables bot voice playback │
└─────────────────────────────────────────────────┘

Key Principles
- ConvaiClient - The brain. Manages connection, state, and communication with Convai servers.
- AudioRenderer - CRITICAL: Without this, you won't hear the bot. It renders audio to the user's speakers.
- ConvaiWidget - The complete UI. Uses both ConvaiClient and AudioRenderer internally.
- Connection Type - Determines capabilities:
  - "audio" (default) - Audio only
  - "video" - Audio + Video + Screen Share
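The flag-to-connection-type mapping can be sketched as a one-line helper (hypothetical, not an SDK export; it simply mirrors the documented behavior of enableVideo):

```javascript
// Mirrors the documented mapping: enableVideo true → "video", otherwise "audio".
function connectionType(config) {
  return config.enableVideo ? "video" : "audio";
}

console.log(connectionType({ enableVideo: true })); // "video"
console.log(connectionType({}));                    // "audio"
```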
React SDK
useConvaiClient Hook
Purpose: Returns a fully configured ConvaiClient instance with reactive state updates.
When to Use: Every React app using Convai needs this hook.
What It Does:
- Creates and manages a ConvaiClient instance
- Provides reactive state (connection, messages, activity)
- Handles connection lifecycle
- Exposes audio/video/screen share controls
import { useConvaiClient } from "@convai/web-sdk/react";
function ChatbotWrapper() {
const convaiClient = useConvaiClient({
apiKey: "your-api-key",
characterId: "your-character-id",
enableVideo: false, // Default: audio only
startWithAudioOn: false, // Mic starts muted
});
// Access reactive state
const { state, chatMessages, userTranscription, isBotReady } = convaiClient;
// Use controls
const handleMute = () => convaiClient.audioControls.muteAudio();
const handleSend = () =>
convaiClient.sendUserTextMessage("Hello, character!");
return (
<div>
<p>Status: {state.agentState}</p>
<p>Messages: {chatMessages.length}</p>
<button onClick={handleMute}>Mute</button>
<button onClick={handleSend}>Send</button>
</div>
);
}

AudioRenderer Component
Purpose: Renders remote audio tracks to the user's speakers.
⚠️ CRITICAL: Without AudioRenderer, you will NOT hear the bot's voice.
When to Use:
- Always when building custom UIs
- Already included in ConvaiWidget (no need to add separately)
How It Works:
- Attaches to the WebRTC room
- Automatically creates <audio> elements for remote participants (the bot)
- Manages audio playback lifecycle
import { useConvaiClient, AudioRenderer } from "@convai/web-sdk/react";
function CustomChatUI() {
const convaiClient = useConvaiClient({
apiKey: "your-api-key",
characterId: "your-character-id",
});
return (
<div>
{/* CRITICAL: This component renders bot audio to speakers */}
<AudioRenderer />
{/* Your custom UI */}
<div>
{convaiClient.chatMessages.map((msg) => (
<div key={msg.id}>{msg.content}</div>
))}
</div>
</div>
);
}

AudioContext
Purpose: Provides the WebRTC Room to child components.
When to Use: When building deeply nested custom UIs that need direct access to the audio room.
How It Works: React Context that holds the active WebRTC room.
import { useConvaiClient, AudioRenderer, AudioContext } from "@convai/web-sdk/react";
import { useContext } from "react";
function ChatbotWrapper() {
const convaiClient = useConvaiClient({
/* config */
});
return (
<AudioContext.Provider value={convaiClient.room}>
<AudioRenderer />
<ChildComponent />
</AudioContext.Provider>
);
}
function ChildComponent() {
const room = useContext(AudioContext);
// Access WebRTC room directly
console.log("Room state:", room?.state);
return <div>Child has access to Room</div>;
}

React Exports Reference
// Components
import { ConvaiWidget } from "@convai/web-sdk/react";
// Hooks
import { useConvaiClient, useCharacterInfo } from "@convai/web-sdk/react";
// Audio Rendering (Critical)
import { AudioRenderer, AudioContext } from "@convai/web-sdk/react";
// Core Client (for advanced usage)
import { ConvaiClient } from "@convai/web-sdk/react";
// Types
import type {
ConvaiConfig,
ConvaiClientState,
ChatMessage,
IConvaiClient,
AudioControls,
VideoControls,
ScreenShareControls,
} from "@convai/web-sdk/react";

Vanilla SDK
ConvaiClient Class
Purpose: Core client for managing Convai connections in vanilla JavaScript/TypeScript.
When to Use: Any non-React application or when you need full control.
What It Provides:
- Connection management
- Message handling
- State management (via events)
- Audio/video/screen share controls
import { ConvaiClient } from "@convai/web-sdk/vanilla";
const client = new ConvaiClient({
apiKey: "your-api-key",
characterId: "your-character-id",
});
// Connect
await client.connect();
// Listen to events
client.on("stateChange", (state) => {
console.log("Agent state:", state.agentState);
});
client.on("message", (message) => {
console.log("New message:", message.content);
});
// Send messages
client.sendUserTextMessage("Hello!");
// Control audio
await client.audioControls.muteAudio();
await client.audioControls.unmuteAudio();
// Disconnect
await client.disconnect();

AudioRenderer Class
Purpose: Manages audio playback for vanilla JavaScript/TypeScript applications.
⚠️ CRITICAL: Without this, you will NOT hear the bot's voice.
When to Use:
- Always when building custom vanilla UIs
- Already included in the vanilla ConvaiWidget (no need to add separately)
How It Works:
- Attaches to the WebRTC room
- Automatically creates hidden <audio> elements
- Manages audio playback for remote participants (the bot)
import { ConvaiClient, AudioRenderer } from "@convai/web-sdk/vanilla";
const client = new ConvaiClient({
apiKey: "your-api-key",
characterId: "your-character-id",
});
await client.connect();
// CRITICAL: Create AudioRenderer to hear bot audio
const audioRenderer = new AudioRenderer(client.room);
// Your custom UI logic...
// Cleanup
audioRenderer.destroy();
await client.disconnect();

Vanilla Exports Reference
// Widget
import { createConvaiWidget, destroyConvaiWidget } from "@convai/web-sdk/vanilla";
// Core Client
import { ConvaiClient } from "@convai/web-sdk/vanilla";
// Audio Rendering (Critical)
import { AudioRenderer } from "@convai/web-sdk/vanilla";
// Types
import type {
VanillaWidget,
VanillaWidgetOptions,
ConvaiConfig,
ConvaiClientState,
ChatMessage,
IConvaiClient,
} from "@convai/web-sdk/vanilla";

Video & Screen Share
Critical Requirements
⚠️ IMPORTANT: Video and Screen Share features require TWO configuration changes:
1. Set enableVideo: true in Client Configuration
This sets the connection type to "video" which enables video capabilities.
2. Set showVideo and/or showScreenShare in Widget Props
This shows the UI controls for video/screen share.
Without both, video features will NOT work.
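As a quick sanity check, the two-part requirement can be expressed as a predicate (a hypothetical helper, not an SDK export, written against the config and props described above):

```javascript
// Video/screen-share UI is functional only when the client config enables
// video AND the widget actually shows a control for it.
function videoFeaturesUsable(clientConfig, widgetProps) {
  const enabled = Boolean(clientConfig.enableVideo);
  const shown = Boolean(widgetProps.showVideo || widgetProps.showScreenShare);
  return enabled && shown;
}

console.log(videoFeaturesUsable({ enableVideo: true }, { showVideo: true })); // true
console.log(videoFeaturesUsable({}, { showVideo: true }));                    // false
```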
Enabling Video
React
import { useConvaiClient, ConvaiWidget } from "@convai/web-sdk/react";
function App() {
// ✅ STEP 1: Enable video in client config
const convaiClient = useConvaiClient({
apiKey: "your-api-key",
characterId: "your-character-id",
enableVideo: true, // ← REQUIRED for video
startWithVideoOn: false, // Camera off by default
});
return (
<ConvaiWidget
convaiClient={convaiClient}
showVideo={true} // ← STEP 2: Show video controls
showScreenShare={false} // Optional: hide screen share
/>
);
}

Vanilla
import { ConvaiClient, createConvaiWidget } from "@convai/web-sdk/vanilla";
// ✅ STEP 1: Enable video in client config
const client = new ConvaiClient({
apiKey: "your-api-key",
characterId: "your-character-id",
enableVideo: true, // ← REQUIRED for video
startWithVideoOn: false,
});
const widget = createConvaiWidget(document.body, {
convaiClient: client,
showVideo: true, // ← STEP 2: Show video controls
showScreenShare: false,
});

Manual Video Control
// Enable camera
await convaiClient.videoControls.enableVideo();
// Disable camera
await convaiClient.videoControls.disableVideo();
// Toggle camera
await convaiClient.videoControls.toggleVideo();
// Check state
console.log(convaiClient.isVideoEnabled);
// Set video quality
await convaiClient.videoControls.setVideoQuality("high"); // 'low' | 'medium' | 'high'
// Get available cameras
const devices = await convaiClient.videoControls.getVideoDevices();
// Switch camera
await convaiClient.videoControls.setVideoDevice(deviceId);

Enabling Screen Share
Screen sharing requires enableVideo: true (connection type must be "video").
React
import { useConvaiClient, ConvaiWidget } from "@convai/web-sdk/react";
function App() {
// ✅ STEP 1: Enable video (required for screen share)
const convaiClient = useConvaiClient({
apiKey: "your-api-key",
characterId: "your-character-id",
enableVideo: true, // ← REQUIRED for screen share
});
return (
<ConvaiWidget
convaiClient={convaiClient}
showVideo={true} // Optional: show video controls
showScreenShare={true} // ← STEP 2: Show screen share controls
/>
);
}

Vanilla
import { ConvaiClient, createConvaiWidget } from "@convai/web-sdk/vanilla";
// ✅ STEP 1: Enable video (required for screen share)
const client = new ConvaiClient({
apiKey: "your-api-key",
characterId: "your-character-id",
enableVideo: true, // ← REQUIRED for screen share
});
const widget = createConvaiWidget(document.body, {
convaiClient: client,
showVideo: true,
showScreenShare: true, // ← STEP 2: Show screen share controls
});

Manual Screen Share Control
// Start screen share
await convaiClient.screenShareControls.enableScreenShare();
// Start screen share with audio
await convaiClient.screenShareControls.enableScreenShareWithAudio();
// Stop screen share
await convaiClient.screenShareControls.disableScreenShare();
// Toggle screen share
await convaiClient.screenShareControls.toggleScreenShare();
// Check state
console.log(convaiClient.isScreenShareActive);

Building Custom UIs
Custom Chat Interface
Use the chatMessages array from ConvaiClient to build your own chat UI.
React Example
import { useConvaiClient, AudioRenderer } from "@convai/web-sdk/react";
import { useState } from "react";
function CustomChatUI() {
const convaiClient = useConvaiClient({
apiKey: "your-api-key",
characterId: "your-character-id",
});
const { chatMessages, state } = convaiClient;
const [inputValue, setInputValue] = useState("");
const handleSend = () => {
if (inputValue.trim() && state.isConnected) {
convaiClient.sendUserTextMessage(inputValue);
setInputValue("");
}
};
return (
<div>
{/* CRITICAL: AudioRenderer for bot voice */}
<AudioRenderer />
{/* Chat Messages */}
<div className="chat-container">
{chatMessages.map((msg) => {
const isUser = msg.type.includes("user");
const displayMessage =
msg.type === "user-llm-text" || msg.type === "bot-llm-text";
if (!displayMessage) return null;
return (
<div
key={msg.id}
className={isUser ? "user-message" : "bot-message"}
>
<span className="sender">
{isUser ? "You" : "Character"}
</span>
<p>{msg.content}</p>
<span className="timestamp">
{new Date(msg.timestamp).toLocaleTimeString()}
</span>
</div>
);
})}
</div>
{/* Input */}
<div className="input-container">
<input
type="text"
value={inputValue}
onChange={(e) => setInputValue(e.target.value)}
onKeyPress={(e) => e.key === "Enter" && handleSend()}
placeholder="Type a message..."
disabled={!state.isConnected}
/>
<button onClick={handleSend} disabled={!state.isConnected}>
Send
</button>
</div>
{/* Status Indicator */}
<div className="status">
{state.isConnecting && "Connecting..."}
{state.isConnected && state.agentState}
{!state.isConnected && "Disconnected"}
</div>
</div>
);
}

Vanilla Example
import { ConvaiClient, AudioRenderer } from "@convai/web-sdk/vanilla";
const client = new ConvaiClient({
apiKey: "your-api-key",
characterId: "your-character-id",
});
await client.connect();
// CRITICAL: Create AudioRenderer for bot voice
const audioRenderer = new AudioRenderer(client.room);
const chatContainer = document.getElementById("chat-container");
const inputElement = document.getElementById("message-input");
const sendButton = document.getElementById("send-button");
// Render messages
client.on("messagesChange", (messages) => {
chatContainer.innerHTML = "";
messages.forEach((msg) => {
const isUser = msg.type.includes("user");
const displayMessage =
msg.type === "user-llm-text" || msg.type === "bot-llm-text";
if (!displayMessage) return;
const messageDiv = document.createElement("div");
messageDiv.className = isUser ? "user-message" : "bot-message";
const sender = document.createElement("span");
sender.textContent = isUser ? "You" : "Character";
sender.className = "sender";
const content = document.createElement("p");
content.textContent = msg.content;
const timestamp = document.createElement("span");
timestamp.textContent = new Date(msg.timestamp).toLocaleTimeString();
timestamp.className = "timestamp";
messageDiv.appendChild(sender);
messageDiv.appendChild(content);
messageDiv.appendChild(timestamp);
chatContainer.appendChild(messageDiv);
});
// Auto-scroll
chatContainer.scrollTop = chatContainer.scrollHeight;
});
// Send message
sendButton.addEventListener("click", () => {
const text = inputElement.value.trim();
if (text && client.state.isConnected) {
client.sendUserTextMessage(text);
inputElement.value = "";
}
});
inputElement.addEventListener("keypress", (e) => {
if (e.key === "Enter") {
sendButton.click();
}
});
// Cleanup
// audioRenderer.destroy();
// await client.disconnect();

Audio Visualizer
Create real-time audio visualizers using the WebRTC room's audio tracks.
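Both examples below reduce the analyser's byte frequency data to a normalized 0-1 level. That step can be isolated as a small pure function (extracted here for clarity; the arithmetic matches the examples):

```javascript
// Average the byte frequency bins (0-255 each, as returned by
// AnalyserNode.getByteFrequencyData) and normalize to the 0..1 range.
function normalizedAudioLevel(dataArray) {
  if (dataArray.length === 0) return 0;
  const sum = dataArray.reduce((a, b) => a + b, 0);
  return sum / dataArray.length / 255;
}

console.log(normalizedAudioLevel(new Uint8Array([255, 255, 255, 255]))); // 1
console.log(normalizedAudioLevel(new Uint8Array([0, 0, 0, 0])));         // 0
```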
React Example
import { useConvaiClient } from "@convai/web-sdk/react";
import { useEffect, useRef, useState } from "react";
function AudioVisualizer() {
const convaiClient = useConvaiClient({
apiKey: "your-api-key",
characterId: "your-character-id",
});
const canvasRef = useRef<HTMLCanvasElement>(null);
const [audioLevel, setAudioLevel] = useState(0);
useEffect(() => {
if (!convaiClient.room) return;
let animationId: number;
let analyzer: AnalyserNode | null = null;
let dataArray: Uint8Array | null = null;
const setupAnalyzer = async () => {
const audioContext = new AudioContext();
// Get remote participant (bot)
const remoteParticipants = Array.from(
convaiClient.room.remoteParticipants.values()
);
if (remoteParticipants.length === 0) return;
const participant = remoteParticipants[0];
const audioTracks = Array.from(
participant.audioTrackPublications.values()
);
if (audioTracks.length === 0) return;
const audioTrack = audioTracks[0].track;
if (!audioTrack) return;
// Get MediaStream from track
const mediaStream = new MediaStream([audioTrack.mediaStreamTrack]);
// Create analyzer
const source = audioContext.createMediaStreamSource(mediaStream);
analyzer = audioContext.createAnalyser();
analyzer.fftSize = 256;
source.connect(analyzer);
dataArray = new Uint8Array(analyzer.frequencyBinCount);
// Animate
const animate = () => {
if (!analyzer || !dataArray) return;
analyzer.getByteFrequencyData(dataArray);
// Calculate average volume
const sum = dataArray.reduce((a, b) => a + b, 0);
const average = sum / dataArray.length;
const normalizedLevel = average / 255;
setAudioLevel(normalizedLevel);
// Draw visualization
drawVisualizer(dataArray);
animationId = requestAnimationFrame(animate);
};
animate();
};
const drawVisualizer = (dataArray: Uint8Array) => {
const canvas = canvasRef.current;
if (!canvas) return;
const ctx = canvas.getContext("2d");
if (!ctx) return;
const width = canvas.width;
const height = canvas.height;
ctx.clearRect(0, 0, width, height);
const barWidth = (width / dataArray.length) * 2.5;
let x = 0;
for (let i = 0; i < dataArray.length; i++) {
const barHeight = (dataArray[i] / 255) * height;
ctx.fillStyle = `rgb(${barHeight + 100}, 50, 150)`;
ctx.fillRect(x, height - barHeight, barWidth, barHeight);
x += barWidth + 1;
}
};
if (convaiClient.state.isConnected) {
setupAnalyzer();
}
return () => {
if (animationId) cancelAnimationFrame(animationId);
};
}, [convaiClient.room, convaiClient.state.isConnected]);
return (
<div>
<canvas
ref={canvasRef}
width={800}
height={200}
style={{ border: "1px solid #ccc" }}
/>
<div>Audio Level: {(audioLevel * 100).toFixed(0)}%</div>
<div>
Bot is {convaiClient.state.isSpeaking ? "speaking" : "silent"}
</div>
</div>
);
}

Vanilla Example
import { ConvaiClient, AudioRenderer } from "@convai/web-sdk/vanilla";
const client = new ConvaiClient({
apiKey: "your-api-key",
characterId: "your-character-id",
});
await client.connect();
// CRITICAL: AudioRenderer for playback
const audioRenderer = new AudioRenderer(client.room);
const canvas = document.getElementById("visualizer") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;
let analyzer: AnalyserNode | null = null;
let dataArray: Uint8Array | null = null;
let animationId: number;
// Setup analyzer
const audioContext = new AudioContext();
const remoteParticipants = Array.from(client.room.remoteParticipants.values());
const participant = remoteParticipants[0];
const audioTracks = Array.from(participant.audioTrackPublications.values());
const audioTrack = audioTracks[0].track;
const mediaStream = new MediaStream([audioTrack.mediaStreamTrack]);
const source = audioContext.createMediaStreamSource(mediaStream);
analyzer = audioContext.createAnalyser();
analyzer.fftSize = 256;
source.connect(analyzer);
dataArray = new Uint8Array(analyzer.frequencyBinCount);
// Animate
function animate() {
if (!analyzer || !dataArray) return;
analyzer.getByteFrequencyData(dataArray);
// Clear canvas
ctx.clearRect(0, 0, canvas.width, canvas.height);
const barWidth = (canvas.width / dataArray.length) * 2.5;
let x = 0;
for (let i = 0; i < dataArray.length; i++) {
const barHeight = (dataArray[i] / 255) * canvas.height;
ctx.fillStyle = `rgb(${barHeight + 100}, 50, 150)`;
ctx.fillRect(x, canvas.height - barHeight, barWidth, barHeight);
x += barWidth + 1;
}
animationId = requestAnimationFrame(animate);
}
animate();
// Cleanup
// cancelAnimationFrame(animationId);
// audioRenderer.destroy();
// await client.disconnect();

Message Types
All messages from convaiClient.chatMessages have a type field:
type ChatMessageType =
| "user" // User's sent message (raw)
| "user-transcription" // Real-time speech-to-text from user
| "user-llm-text" // User text processed by LLM (final)
| "convai" // Character's response (raw)
| "bot-llm-text" // Character's LLM-generated text (final)
| "bot-emotion" // Character's emotional state
| "emotion" // Generic emotion
| "behavior-tree" // Behavior tree response
| "action" // Action execution
  | "interrupt-bot"; // Interrupt message

For Chat UIs, filter to:
const displayMessages = chatMessages.filter(
(msg) => msg.type === "user-llm-text" || msg.type === "bot-llm-text"
);

API Reference
Configuration
interface ConvaiConfig {
/** Your Convai API key from convai.com dashboard (required) */
apiKey: string;
/** The Character ID to connect to (required) */
characterId: string;
/**
* End user identifier for speaker management (optional).
* When provided: enables long-term memory and analytics
* When not provided: anonymous mode, no persistent memory
*/
endUserId?: string;
/** Custom Convai API URL (optional) */
url?: string;
/**
* Enable video capability (default: false).
* If true, connection_type will be "video" (supports audio, video, screenshare).
* If false, connection_type will be "audio" (audio only).
* ⚠️ REQUIRED for video and screen share features.
*/
enableVideo?: boolean;
/**
* Start with video camera on when connecting (default: false).
* Only works if enableVideo is true.
*/
startWithVideoOn?: boolean;
/**
* Start with microphone on when connecting (default: false).
* If false, microphone stays off until user enables it.
*/
startWithAudioOn?: boolean;
/**
* Enable text-to-speech audio generation (default: true).
*/
ttsEnabled?: boolean;
}

Connection Management
// Connect
await convaiClient.connect({
apiKey: "your-api-key",
characterId: "your-character-id",
enableVideo: true,
});
// Disconnect
await convaiClient.disconnect();
// Reconnect
await convaiClient.reconnect();
// Reset session (clear conversation history)
convaiClient.resetSession();
// Check connection state
console.log(convaiClient.state.isConnected);
console.log(convaiClient.state.isConnecting);
console.log(convaiClient.state.agentState); // 'disconnected' | 'connected' | 'listening' | 'thinking' | 'speaking'
console.log(convaiClient.isBotReady); // Bot ready to receive messages

Messaging
// Send text message
convaiClient.sendUserTextMessage("Hello, how are you?");
// Send trigger message (invoke character action)
convaiClient.sendTriggerMessage("greet", "User entered the room");
// Interrupt character's current response
convaiClient.sendInterruptMessage();
// Update context
convaiClient.updateTemplateKeys({ user_name: "John" });
convaiClient.updateDynamicInfo({ text: "User is browsing products" });
// Access messages
console.log(convaiClient.chatMessages);
// Access real-time user transcription
console.log(convaiClient.userTranscription);

Audio Controls
// Mute/unmute microphone
await convaiClient.audioControls.muteAudio();
await convaiClient.audioControls.unmuteAudio();
await convaiClient.audioControls.toggleAudio();
// Check mute state
console.log(convaiClient.isAudioMuted);
// Get available microphones
const devices = await convaiClient.audioControls.getAudioDevices();
// Set microphone
await convaiClient.audioControls.setAudioDevice(deviceId);
// Monitor audio level
convaiClient.audioControls.startAudioLevelMonitoring();
convaiClient.audioControls.on("audioLevelChange", (level) => {
console.log("Audio level:", level); // 0 to 1
});
convaiClient.audioControls.stopAudioLevelMonitoring();

Video Controls
⚠️ Requires enableVideo: true in config.
// Enable/disable camera
await convaiClient.videoControls.enableVideo();
await convaiClient.videoControls.disableVideo();
await convaiClient.videoControls.toggleVideo();
// Check video state
console.log(convaiClient.isVideoEnabled);
// Set video quality
await convaiClient.videoControls.setVideoQuality("high"); // 'low' | 'medium' | 'high'
// Get available cameras
const devices = await convaiClient.videoControls.getVideoDevices();
// Switch camera
await convaiClient.videoControls.setVideoDevice(deviceId);

Screen Share Controls
⚠️ Requires enableVideo: true in config.
// Start/stop screen share
await convaiClient.screenShareControls.enableScreenShare();
await convaiClient.screenShareControls.enableScreenShareWithAudio();
await convaiClient.screenShareControls.disableScreenShare();
await convaiClient.screenShareControls.toggleScreenShare();
// Check screen share state
console.log(convaiClient.isScreenShareActive);

Getting Credentials
- Visit convai.com and create an account
- Navigate to your dashboard
- Create a new character or use an existing one
- Copy your API Key from the dashboard
- Copy your Character ID from the character details
TypeScript Support
All exports are fully typed:
React:
import type {
// Configuration
ConvaiConfig,
// State
ConvaiClientState,
// Messages
ChatMessage,
ChatMessageType,
// Client
IConvaiClient,
ConvaiClient,
// Controls
AudioControls,
VideoControls,
ScreenShareControls,
} from "@convai/web-sdk/react";

Vanilla:
import type {
// Configuration
ConvaiConfig,
// State
ConvaiClientState,
// Messages
ChatMessage,
ChatMessageType,
// Client
IConvaiClient,
ConvaiClient,
// Controls
AudioControls,
VideoControls,
ScreenShareControls,
// Widget
VanillaWidget,
VanillaWidgetOptions,
} from "@convai/web-sdk/vanilla";

Support
- Documentation: API Reference
- Forum: Convai Forum
- Website: convai.com
- Issues: GitHub Issues
