
@unith-ai/core-client

v1.5.0


Core TypeScript SDK for building digital human experiences with Unith AI


Unith Core Client TypeScript SDK

An SDK for building complex digital human experiences in JavaScript/TypeScript, running on Unith AI.

Prerequisites

Before using this library, you need an account on Unith AI, a digital human you have created, and your API key. You can create an account here in minutes!

Installation

Install the package with your preferred package manager:

npm install @unith-ai/core-client
# or
yarn add @unith-ai/core-client
# or
pnpm add @unith-ai/core-client

Usage

This library is designed for plain JavaScript applications, or as a foundation for framework-specific implementations. Before integrating it directly, check whether a dedicated library exists for your framework; that said, it is compatible with any JavaScript-based project.

Initialize Digital Human

First, initialize the Conversation instance:

import { Conversation } from "@unith-ai/core-client";

const conversation = await Conversation.startDigitalHuman(options);

This establishes a WebSocket connection and initializes the digital human with real-time audio and video streaming capabilities.

Session Configuration

The options passed to startDigitalHuman specify how the session is established:

const conversation = await Conversation.startDigitalHuman({
  orgId: "your-org-id",
  headId: "your-head-id",
  element: document.getElementById("video-container"), // HTML element for video output
  apiKey: "your-api-key",
  allowWakeLock: true,
  ...callbacks,
});

Required Parameters

  • orgId - Your organization ID
  • headId - The digital human head ID to use
  • apiKey - API key for authentication
  • element - HTML element where the video will be rendered

Optional Parameters

  • mode - Conversation mode (default: "default")
  • language - Language code for the conversation (default: browser language)
  • allowWakeLock - Prevent screen from sleeping during conversation (default: true)

Callbacks

Register callbacks to handle various events:

  • onConnect ({userId, headInfo, microphoneAccess}) - Called when the WebSocket connection is established
    • userId String Unique identifier for the user's session.
    • headInfo ConnectHeadType Object with data about the digital human.
      • name String Digital human head name.
      • phrases String[] Array of phrases set during digital human creation.
      • language String Language code set during digital human creation.
      • avatar String Static image URL for the digital human.
    • microphoneAccess Boolean True if microphone access was granted, false otherwise.
  • onDisconnect () - Called when the connection is closed
  • onStatusChange ({status}) - Called when the connection status changes
    • status "connecting" | "connected" | "disconnecting" | "disconnected" The current WebSocket connection status.
  • onMessage ({ timestamp, sender, text, visible }) - Called when the WebSocket receives a message or sends a response.
    • timestamp Date Timestamp when the message was received or sent.
    • sender "user" | "ai" Who the message came from.
    • text String Message text.
    • visible Boolean Flag you can use to control the message's visibility. A message can arrive before its video response starts playing; in that case this is usually false. Listen for the onSpeakingStart event and reveal the message when the video response starts playing.
  • onMuteStatusChange - Called when mute status changes
  • onSpeakingStart - Called when the digital human starts speaking
  • onSpeakingEnd - Called when the digital human finishes speaking
  • onStoppingEnd - Called when a response is manually stopped
  • onTimeout - Called when the session times out due to inactivity
  • onTimeoutWarning - Called before the session times out, warning you that the user's session is about to end. Call the keepSession method to extend the session.
  • onKeepSession - Called when a keep-alive request is processed
  • onError - Called when an error occurs

Getting Background Video

Retrieve the idle background video URL for use in welcome screens or widget mode:

const videoUrl = await Conversation.getBackgroundVideo({
  orgId: "your-org-id",
  headId: "your-head-id",
});

Instance Methods

startSession()

Start the conversation session and begin audio & video playback:

await conversation.startSession();

This method should be called after user interaction to ensure audio context is properly initialized, especially on mobile browsers.

sendMessage(message)

Send a text message to the digital human:

conversation.sendMessage("Hello, how are you?");

keepSession()

Sends a keep-alive event to prevent session timeout:

conversation.keepSession();

stopCurrentResponse()

Stop the current response from the digital human:

conversation.stopCurrentResponse();

This clears both audio and video queues and returns the digital human to idle state.

toggleMuteStatus()

Toggle the mute status of the audio output:

const volume = await conversation.toggleMuteStatus();
console.log("New volume:", volume); // 0 for muted, 1 for unmuted

getUserId()

Get the current user's ID:

const userId = conversation.getUserId();

endSession()

End the conversation session and clean up resources:

await conversation.endSession();

This closes the WebSocket connection, releases the wake lock, and destroys audio/video outputs.

Message Structure

Messages sent to and from the digital human follow this structure:

interface Message {
  timestamp: Date;
  sender: SpeakerType;
  text: string;
  visible: boolean;
}
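Since messages can arrive with visible set to false until the matching video starts playing (see the onMessage callback above), a small transcript store can defer displaying them. This is an illustrative sketch, not part of the SDK; the SpeakerType union is assumed to match the "user" | "ai" values from onMessage:

```typescript
// Assumed to match the SDK's SpeakerType; the real export may differ.
type SpeakerType = "user" | "ai";

interface Message {
  timestamp: Date;
  sender: SpeakerType;
  text: string;
  visible: boolean;
}

// Hypothetical helper: collect messages from onMessage and reveal the
// hidden ones once onSpeakingStart fires.
class Transcript {
  private messages: Message[] = [];

  add(message: Message): void {
    this.messages.push(message);
  }

  // Call this from onSpeakingStart to show deferred messages.
  revealPending(): void {
    for (const message of this.messages) message.visible = true;
  }

  visibleMessages(): Message[] {
    return this.messages.filter((m) => m.visible);
  }
}
```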

Error Handling

Always handle errors appropriately:

try {
  const conversation = await Conversation.startDigitalHuman({
    orgId: "your-org-id",
    headId: "your-head-id",
    element: videoElement,
    onError: ({ message, endConversation, type }) => {
      if (type === "toast") {
        // Show toast notification
        showToast(message);
        if (endConversation) {
          // Restart the session
          restartSession();
        }
      } else if (type === "modal") {
        // Show modal dialog
        showModal(message);
      }
    },
  });
} catch (error) {
  console.error("Failed to start digital human:", error);
}

TypeScript Support

Full TypeScript types are included.
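For example, you can import the class and its types directly from the package. The exact export names below are an assumption based on the identifiers used in this README; check the package's type declarations for the authoritative list:

```typescript
// Hypothetical import sketch; export names may differ.
import { Conversation } from "@unith-ai/core-client";
import type { Message } from "@unith-ai/core-client";
```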

Development

Please refer to the README.md file in the root of this repository.