Unith Core Client TypeScript SDK
An SDK for building complex digital human experiences with JavaScript/TypeScript that run on Unith AI.
Prerequisites
Before using this library, you're expected to have an account on Unith AI, to have created a digital human, and to have taken note of your API key. You can create an account here in minutes!
Installation
Install the package in your project through package manager.
npm install @unith-ai/core-client
# or
yarn add @unith-ai/core-client
# or
pnpm install @unith-ai/core-client
Usage
This library is designed for use in plain JavaScript applications or as a foundation for framework-specific implementations. Before integrating it, check whether a dedicated library exists for your particular framework. That said, it is compatible with any project built on JavaScript.
Initialize Digital Human
First, initialize the Conversation instance:
const conversation = await Conversation.startDigitalHuman(options);
This will establish a WebSocket connection and initialize the digital human with realtime audio & video streaming capabilities.
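The examples in this README assume Conversation is a named export of the package (an assumption; check the exports of your installed version):
import { Conversation } from "@unith-ai/core-client";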
Session Configuration
The options passed to startDigitalHuman specify how the session is established:
const conversation = await Conversation.startDigitalHuman({
orgId: "your-org-id",
headId: "your-head-id",
element: document.getElementById("video-container"), // HTML element for video output
apiKey: "your-api-key",
allowWakeLock: true,
...callbacks,
});
Required Parameters
- orgId - Your organization ID
- headId - The digital human head ID to use
- apiKey - API key for authentication (default: "")
- element - HTML element where the video will be rendered
Optional Parameters
- mode - Conversation mode (default: "default")
- language - Language code for the conversation (default: browser language)
- allowWakeLock - Prevent screen from sleeping during conversation (default: true)
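Taken together, the options above correspond roughly to the following shape. This is a sketch for orientation only; the SDK's actual exported type name (if any) may differ:
interface StartDigitalHumanOptions {
  orgId: string;
  headId: string;
  apiKey: string;
  element: HTMLElement;
  mode?: string;
  language?: string;
  allowWakeLock?: boolean;
  // plus the callbacks documented in the next section, e.g. onConnect, onMessage, onError
}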
Callbacks
Register callbacks to handle various events (a combined example follows this list):
- onConnect ({userId, headInfo, microphoneAccess}) - Called when the WebSocket connection is established
  - userId (String) - Unique identifier for the user's session.
  - headInfo (ConnectHeadType) - Object with data about the digital human.
    - name (String) - Digital human head name.
    - phrases (String[]) - Array of phrases set during digital human creation.
    - language (String) - Language code set during digital human creation.
    - avatar (String) - Static image URL for the digital human.
  - microphoneAccess (Boolean) - True if microphone access was granted, False otherwise.
- onDisconnect () - Called when the connection is closed
- onStatusChange ({status}) - Called when the connection status changes
  - status ("connecting" | "connected" | "disconnecting" | "disconnected") - Current WebSocket connection status.
- onMessage ({ timestamp, sender, text, visible }) - Called when the WebSocket receives a message or sends a response.
  - timestamp (Date) - Timestamp when the message was received/sent.
  - sender ("user" | "ai") - Who the message came from.
  - text (String) - Message text.
  - visible (Boolean) - Flag that you can use to control the visibility of the message. Sometimes the message arrives before the video response starts playing; in such cases this is usually false. Listen to the onSpeakingStart event to change visibility when the video response starts playing.
- onMuteStatusChange - Called when mute status changes
- onSpeakingStart - Called when the digital human starts speaking
- onSpeakingEnd - Called when the digital human finishes speaking
- onStoppingEnd - Called when a response is manually stopped
- onTimeout - Called when the session times out due to inactivity
- onTimeoutWarning - Called before the session times out. This event warns you that the customer's session is about to end; you can call the keepSession method to extend it.
- onKeepSession - Called when a keep-alive request is processed
- onError - Called when an error occurs
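As a combined sketch, callbacks are passed alongside the session options. The handler bodies below are illustrative only, and renderChatBubble stands in for your own UI code:
const conversation = await Conversation.startDigitalHuman({
  orgId: "your-org-id",
  headId: "your-head-id",
  apiKey: "your-api-key",
  element: document.getElementById("video-container"),
  onConnect: ({ userId, headInfo, microphoneAccess }) => {
    console.log("Connected as", userId, "to", headInfo.name);
    if (!microphoneAccess) console.warn("Microphone access was denied");
  },
  onStatusChange: ({ status }) => {
    console.log("Connection status:", status);
  },
  onMessage: ({ timestamp, sender, text, visible }) => {
    // Only render once the message should be shown (see visible above)
    if (visible) renderChatBubble(sender, text, timestamp);
  },
});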
Getting Background Video
Retrieve the idle background video URL for use in welcome screens or widget mode:
const videoUrl = await Conversation.getBackgroundVideo({
orgId: "your-org-id",
headId: "your-head-id",
});
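The returned URL can then be attached to a video element, e.g. for an idle loop on a welcome screen (a minimal sketch; the element id is an assumption):
const videoEl = document.getElementById("welcome-video") as HTMLVideoElement | null;
if (videoEl) {
  videoEl.src = videoUrl;
  videoEl.loop = true;
  videoEl.muted = true; // idle loops are typically played silently
  await videoEl.play();
}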
Instance Methods
startSession()
Start the conversation session and begin audio & video playback:
await conversation.startSession();
This method should be called after user interaction to ensure the audio context is properly initialized, especially on mobile browsers.
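Because of this autoplay restriction, you would typically wire startSession to a user gesture such as a button click (the button id here is an assumption):
document.getElementById("start-button")?.addEventListener("click", async () => {
  await conversation.startSession(); // the user gesture unlocks the audio context
});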
sendMessage(message)
Send a text message to the digital human:
conversation.sendMessage("Hello, how are you?");
keepSession()
Sends a keep-alive event to prevent session timeout:
conversation.keepSession();
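keepSession pairs naturally with the onTimeoutWarning callback. A minimal sketch (the callback only fires after startDigitalHuman resolves, so referencing conversation inside it is safe):
const conversation = await Conversation.startDigitalHuman({
  // ...other options...
  onTimeoutWarning: () => {
    // Extend the customer's session before it times out due to inactivity
    conversation.keepSession();
  },
});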
stopCurrentResponse()
Stop the current response from the digital human:
conversation.stopCurrentResponse();
This clears both audio and video queues and returns the digital human to idle state.
toggleMuteStatus()
Toggle the mute status of the audio output:
const volume = await conversation.toggleMuteStatus();
console.log("New volume:", volume); // 0 for muted, 1 for unmutedgetUserId()
getUserId()
Get the current user's ID:
const userId = conversation.getUserId();
endSession()
End the conversation session and clean up resources:
await conversation.endSession();
This closes the WebSocket connection, releases the wake lock, and destroys audio/video outputs.
Message Structure
Messages sent to and from the digital human follow this structure:
interface Message {
timestamp: Date;
sender: SpeakerType;
text: string;
visible: boolean;
}
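SpeakerType itself is not shown in this README; based on the onMessage documentation above, it presumably corresponds to:
type SpeakerType = "user" | "ai";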
Error Handling
Always handle errors appropriately:
try {
const conversation = await Conversation.startDigitalHuman({
orgId: "your-org-id",
headId: "your-head-id",
element: videoElement,
onError: ({ message, endConversation, type }) => {
if (type === "toast") {
// Show toast notification
showToast(message);
if (endConversation) {
// Restart the session
restartSession();
}
} else if (type === "modal") {
// Show modal dialog
showModal(message);
}
},
});
} catch (error) {
console.error("Failed to start digital human:", error);
}
TypeScript Support
Full TypeScript types are included.
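For example, message handling can be typed against the Message interface above. A sketch, assuming Message is exported from the package entry point:
import type { Message } from "@unith-ai/core-client"; // assumed export

const history: Message[] = [];

function recordMessage(message: Message) {
  // Keep only messages that should be shown to the user
  if (message.visible) {
    history.push(message);
  }
}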
Development
Please refer to the README.md file in the root of this repository.
