# Unith React SDK

`@unith-ai/react` (v1.6.3): React hooks for Unith AI digital humans.
A React hooks library for building complex digital human experiences that run on Unith AI.
## Prerequisite

Before using this library, you need a Unith AI account and a digital human, and you should take note of your API key. You can create an account here in minutes!
## Installation

Install the package with your package manager of choice:

```bash
npm install @unith-ai/react
# or
yarn add @unith-ai/react
# or
pnpm add @unith-ai/react
```

## Usage
This library provides React hooks for integrating Unith AI digital humans into your React applications.

### `useConversation` Hook

The `useConversation` hook manages the digital human conversation state and provides methods to control the session.
```jsx
import { useConversation } from '@unith-ai/react';

function MyComponent() {
  const conversation = useConversation({
    orgId: "your-org-id",
    headId: "your-head-id",
    apiKey: "your-api-key",
  });

  // Use conversation methods and state
}
```

### Configuration
The hook accepts a configuration object with the following properties:

#### Required Parameters

- `orgId` - Your organization ID
- `headId` - The digital human head ID to use
- `apiKey` - API key for authentication

#### Optional Parameters

- `mode` - Conversation mode (default: `"default"`)
- `language` - Language code for the conversation (default: browser language)
- `allowWakeLock` - Prevent the screen from sleeping during the conversation (default: `true`)
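A full configuration sketch combining the required and optional parameters listed above; the values are placeholders, not working credentials.

```typescript
// Every field below comes from the parameter lists above; the values
// are illustrative placeholders.
const config = {
  // required
  orgId: "your-org-id",
  headId: "your-head-id",
  apiKey: "your-api-key",
  // optional
  mode: "default",      // conversation mode
  language: "en-US",    // overrides the browser language
  allowWakeLock: true,  // keep the screen awake while the conversation runs
};
```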
### Returned Values

The hook returns an object containing methods and state:

#### Methods

- `startDigitalHuman(element, options?)` - Initialize and start the digital human
  - `element`: `HTMLElement` - DOM element where the video will be rendered
  - `options`: `Partial<ConversationEvents>` - Optional event callbacks
  - Returns: `Promise<string | undefined>` - The user ID
- `getBackgroundVideo()` - Retrieve the idle background video URL
  - Returns: `Promise<string>` - Video URL
- `startSession()` - Start the conversation session and begin audio & video playback
  - Returns: `Promise<void>`
- `sendMessage(text)` - Send a text message to the digital human
  - `text`: `string` - Message text to send
  - Returns: `Promise<void>`
- `stopResponse()` - Stop the current response from the digital human
  - Returns: `Promise<void>`
- `toggleMuteStatus()` - Toggle the mute status of the audio output
  - Returns: `number | undefined` - New volume (0 for muted, 1 for unmuted)
- `keepSession()` - Send a keep-alive event to prevent session timeout
  - Returns: `Promise<void>`
- `initializeMicrophone()` - Initialize the microphone for voice input
  - Returns: `Promise<void>`
- `getUserId()` - Get the current user's ID
  - Returns: `string | undefined`
- `endSession()` - End the conversation session and clean up resources
  - Returns: `Promise<void>`
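The typical call order of these methods can be sketched against a minimal stand-in for the object the hook returns. The `Conversation` type below is an illustrative subset written from the method list above, not the SDK's own type.

```typescript
// Illustrative subset of the hook's return value (assumed shape).
type Conversation = {
  startDigitalHuman: (element: unknown) => Promise<string | undefined>;
  startSession: () => Promise<void>;
  sendMessage: (text: string) => Promise<void>;
  endSession: () => Promise<void>;
};

// Sketch of the usual lifecycle: connect, start, chat, clean up.
async function runConversation(conversation: Conversation, element: unknown) {
  await conversation.startDigitalHuman(element); // attach video and connect
  await conversation.startSession();             // begin audio/video playback
  await conversation.sendMessage("Hello!");      // talk to the digital human
  await conversation.endSession();               // clean up when finished
}
```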
#### State

- `status`: `"connecting" | "connected" | "disconnecting" | "disconnected"` - Current WebSocket connection status
- `isConnected`: `boolean` - True if status is `"connected"`
- `isDisconnected`: `boolean` - True if status is `"disconnected"`
- `isNotConnected`: `boolean` - True if status is not `"connected"`
- `sessionStarted`: `boolean` - True if the session has been started
- `mode`: `"listening" | "speaking" | "thinking" | "stopping"` - Current conversation mode
- `isSpeaking`: `boolean` - True if mode is `"speaking"`
- `messages`: `MessageEventData[]` - Array of conversation messages
- `messageCounter`: `number` - Count of messages sent
- `userId`: `string | null` - Current user's unique identifier
- `headInfo`: `ConnectHeadType | null` - Information about the digital human
  - `name`: `string` - Digital human head name
  - `phrases`: `string[]` - Phrases set during digital human creation
  - `language`: `string` - Language code set during digital human creation
  - `avatar`: `string` - Static image URL for the digital human
- `microphoneAccess`: `boolean` - True if microphone access was granted
- `isMuted`: `boolean` - True if audio is muted
- `timeOutWarning`: `boolean` - True when the session timeout warning is active
- `timeOutBanner`: `boolean` - True when the session has timed out
- `capacityError`: `boolean` - True if a capacity error occurred
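How the boolean convenience flags relate to `status` and `mode` can be written out from the descriptions above. This is an illustration of the documented relationships, not the library's internal code.

```typescript
// Status and Mode unions as documented above.
type Status = "connecting" | "connected" | "disconnecting" | "disconnected";
type Mode = "listening" | "speaking" | "thinking" | "stopping";

// Derived flags, per the state descriptions.
const isConnected = (status: Status) => status === "connected";
const isDisconnected = (status: Status) => status === "disconnected";
const isNotConnected = (status: Status) => status !== "connected";
const isSpeaking = (mode: Mode) => mode === "speaking";

// A common guard: only send while connected and listening.
const canSendMessage = (status: Status, mode: Mode) =>
  isConnected(status) && mode === "listening";
```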
### Basic Example

```jsx
import { useConversation } from '@unith-ai/react';
import { useRef, useEffect } from 'react';

function DigitalHumanChat() {
  const videoRef = useRef(null);
  const conversation = useConversation({
    orgId: "your-org-id",
    headId: "your-head-id",
    apiKey: "your-api-key",
  });

  useEffect(() => {
    if (videoRef.current) {
      conversation.startDigitalHuman(videoRef.current, {
        onConnect: ({ userId, headInfo, microphoneAccess }) => {
          console.log('Connected:', userId);
        },
        onMessage: ({ timestamp, sender, text, visible }) => {
          console.log('Message:', text);
        },
        onError: ({ message, endConversation, type }) => {
          console.error('Error:', message);
        },
      });
    }
  }, []);

  const handleSendMessage = () => {
    conversation.sendMessage("Hello!");
  };

  const handleStartSession = () => {
    conversation.startSession();
  };

  return (
    <div>
      <div ref={videoRef} style={{ width: '100%', height: '500px' }} />
      {conversation.isConnected && !conversation.sessionStarted && (
        <button onClick={handleStartSession}>Start Conversation</button>
      )}
      {conversation.sessionStarted && (
        <button onClick={handleSendMessage}>Send Message</button>
      )}
      <div>
        {conversation.messages.map((msg, index) => (
          <div key={index}>
            <strong>{msg.sender}:</strong> {msg.text}
          </div>
        ))}
      </div>
    </div>
  );
}
```

### Advanced Example with Event Callbacks
```jsx
import { useConversation } from '@unith-ai/react';
import { useRef, useEffect, useState } from 'react';

function AdvancedChat() {
  const videoRef = useRef(null);
  const [inputText, setInputText] = useState('');
  const conversation = useConversation({
    orgId: "your-org-id",
    headId: "your-head-id",
    apiKey: "your-api-key",
    mode: "default",
    language: "en-US",
  });

  useEffect(() => {
    if (videoRef.current) {
      conversation.startDigitalHuman(videoRef.current, {
        onConnect: ({ userId, headInfo, microphoneAccess }) => {
          console.log('Connected with user ID:', userId);
          console.log('Digital human:', headInfo.name);
        },
        onMessage: ({ timestamp, sender, text, visible }) => {
          console.log(`[${sender}] ${text}`);
        },
        onSpeakingStart: () => {
          console.log('Digital human started speaking');
        },
        onSpeakingEnd: () => {
          console.log('Digital human finished speaking');
        },
        onTimeoutWarning: () => {
          console.log('Session will timeout soon');
        },
        onTimeout: () => {
          console.log('Session timed out');
        },
        onError: ({ message, type }) => {
          if (type === 'toast') {
            alert(message);
          }
        },
      });
    }
  }, []);

  const handleSendMessage = async (e) => {
    e.preventDefault();
    if (inputText.trim()) {
      await conversation.sendMessage(inputText);
      setInputText('');
    }
  };

  const handleKeepSession = () => {
    conversation.keepSession();
  };

  return (
    <div>
      <div ref={videoRef} style={{ width: '100%', height: '500px' }} />
      <div>
        <p>Status: {conversation.status}</p>
        <p>Mode: {conversation.mode}</p>
        {conversation.isSpeaking && <p>Digital human is speaking...</p>}
      </div>
      {conversation.isConnected && !conversation.sessionStarted && (
        <button onClick={() => conversation.startSession()}>
          Start Conversation
        </button>
      )}
      {conversation.timeOutWarning && (
        <div>
          <p>Your session will timeout soon</p>
          <button onClick={handleKeepSession}>Keep Session Active</button>
        </div>
      )}
      {conversation.sessionStarted && (
        <form onSubmit={handleSendMessage}>
          <input
            type="text"
            value={inputText}
            onChange={(e) => setInputText(e.target.value)}
            disabled={conversation.mode !== 'listening'}
            placeholder="Type your message..."
          />
          <button type="submit" disabled={conversation.mode !== 'listening'}>
            Send
          </button>
          <button type="button" onClick={() => conversation.toggleMuteStatus()}>
            {conversation.isMuted ? 'Unmute' : 'Mute'}
          </button>
          {conversation.isSpeaking && (
            <button type="button" onClick={() => conversation.stopResponse()}>
              Stop Response
            </button>
          )}
        </form>
      )}
      <div>
        <h3>Messages ({conversation.messageCounter})</h3>
        {conversation.messages.map((msg, index) => (
          msg.visible && (
            <div key={index}>
              <strong>{msg.sender}:</strong> {msg.text}
              <small> ({msg.timestamp.toLocaleTimeString()})</small>
            </div>
          )
        ))}
      </div>
    </div>
  );
}
```

### Message Structure
Messages in the conversation follow this structure:

```ts
interface MessageEventData {
  timestamp: Date;
  sender: "user" | "ai";
  text: string;
  visible: boolean;
}
```

### Event Callbacks
When calling `startDigitalHuman`, you can pass event callbacks:

- `onConnect({ userId, headInfo, microphoneAccess })` - Called when the WebSocket connection is established
- `onDisconnect()` - Called when the connection is closed
- `onStatusChange({ status })` - Called when the connection status changes
- `onMessage({ timestamp, sender, text, visible })` - Called when a message is received or sent
- `onMuteStatusChange({ isMuted })` - Called when the mute status changes
- `onSpeakingStart()` - Called when the digital human starts speaking
- `onSpeakingEnd()` - Called when the digital human finishes speaking
- `onStoppingEnd()` - Called when a response is manually stopped
- `onTimeout()` - Called when the session times out due to inactivity
- `onTimeoutWarning()` - Called before the session times out
- `onKeepSession({ granted })` - Called when a keep-alive request is processed
- `onError({ message, endConversation, type })` - Called when an error occurs
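A common pattern for `onMessage` is to accumulate payloads into local state and render only the visible ones. The helpers below are an illustrative sketch assuming the `MessageEventData` shape shown above; they are not part of the SDK.

```typescript
// MessageEventData as documented above.
interface MessageEventData {
  timestamp: Date;
  sender: "user" | "ai";
  text: string;
  visible: boolean;
}

// Append a message immutably so React re-renders on state updates.
function appendMessage(
  messages: MessageEventData[],
  incoming: MessageEventData,
): MessageEventData[] {
  return [...messages, incoming];
}

// Build a transcript from the messages the user should actually see.
function visibleTranscript(messages: MessageEventData[]): string[] {
  return messages
    .filter((m) => m.visible)
    .map((m) => `${m.sender}: ${m.text}`);
}
```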
### Error Handling

Handle errors using the `onError` callback:

```jsx
conversation.startDigitalHuman(videoRef.current, {
  onError: ({ message, endConversation, type }) => {
    if (type === "toast") {
      // Show a toast notification
      showToast(message);
      if (endConversation) {
        // End the session when the error is unrecoverable
        conversation.endSession();
      }
    } else if (type === "modal") {
      // Show a modal dialog
      showModal(message);
    }
  },
});
```

### Getting Background Video
Retrieve the idle background video URL for welcome screens:

```jsx
const { getBackgroundVideo } = useConversation({
  orgId: "your-org-id",
  headId: "your-head-id",
  apiKey: "your-api-key",
});

useEffect(() => {
  async function loadBackgroundVideo() {
    const videoUrl = await getBackgroundVideo();
    // Use videoUrl for your background/welcome screen
  }
  loadBackgroundVideo();
}, []);
```

### TypeScript Support
Full TypeScript types are included with the library. Import types as needed:

```ts
import { useConversation } from '@unith-ai/react';
import type {
  HeadConfigOptions,
  MessageEventData,
  Status,
  Mode,
} from '@unith-ai/react';
```

### Best Practices
- **Call `startSession()` after user interaction** - This ensures the audio context is properly initialized, especially on mobile browsers
- **Handle the listening mode** - Only send messages when `mode === "listening"` to avoid interrupting the digital human
- **Clean up on unmount** - The hook automatically calls `endSession()` on unmount, but you can call it manually if needed
- **Use `keepSession()`** - Respond to `onTimeoutWarning` by calling `keepSession()` to extend the session
- **Handle errors gracefully** - Always implement the `onError` callback to handle connection and capacity errors
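One way to act on the keep-alive advice above is a handler factory that extends the session on each timeout warning, up to a cap so an abandoned tab does not hold the session forever. This is a sketch; only the `keepSession` callback it receives corresponds to the hook method of the same name, and the cap is an assumption of this example.

```typescript
// Returns an onTimeoutWarning handler that calls keepSession, but at most
// maxExtensions times (the cap is this example's policy, not the SDK's).
function makeTimeoutWarningHandler(
  keepSession: () => Promise<void>,
  maxExtensions = 3,
) {
  let extensions = 0;
  return async () => {
    if (extensions < maxExtensions) {
      extensions += 1;
      await keepSession();
    }
    // Past the cap, let the session time out naturally.
  };
}
```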
## Development

Please refer to the README.md file in the root of this repository.
