
@unith-ai/react

v1.6.3

React hooks for Unith AI digital humans

Unith React SDK

A React hooks library for building complex digital human experiences that run on Unith AI.

Prerequisites

Before using this library, you'll need an account on Unith AI, a digital human you've created, and its API key. You can create an account in minutes!

Installation

Install the package with your package manager of choice:

npm install @unith-ai/react
# or
yarn add @unith-ai/react
# or
pnpm add @unith-ai/react

Usage

This library provides React hooks for integrating Unith AI digital humans into your React applications.

useConversation Hook

The useConversation hook manages the digital human conversation state and provides methods to control the session.

import { useConversation } from '@unith-ai/react';

function MyComponent() {
  const conversation = useConversation({
    orgId: "your-org-id",
    headId: "your-head-id",
    apiKey: "your-api-key",
  });

  // Use conversation methods and state
}

Configuration

The hook accepts a configuration object with the following properties:

Required Parameters
  • orgId - Your organization ID
  • headId - The digital human head ID to use
  • apiKey - API key for authentication
Optional Parameters
  • mode - Conversation mode (default: "default")
  • language - Language code for the conversation (default: browser language)
  • allowWakeLock - Prevent screen from sleeping during conversation (default: true)

Returned Values

The hook returns an object containing methods and state:

Methods
  • startDigitalHuman(element, options?) - Initialize and start the digital human
    • element HTMLElement - DOM element where the video will be rendered
    • options Partial<ConversationEvents> - Optional event callbacks
    • Returns: Promise<string | undefined> - The user ID
  • getBackgroundVideo() - Retrieve the idle background video URL
    • Returns: Promise<string> - Video URL
  • startSession() - Start the conversation session and begin audio & video playback
    • Returns: Promise<void>
  • sendMessage(text) - Send a text message to the digital human
    • text string - Message text to send
    • Returns: Promise<void>
  • stopResponse() - Stop the current response from the digital human
    • Returns: Promise<void>
  • toggleMuteStatus() - Toggle the mute status of the audio output
    • Returns: number | undefined - New volume (0 for muted, 1 for unmuted)
  • keepSession() - Send keep-alive event to prevent session timeout
    • Returns: Promise<void>
  • initializeMicrophone() - Initialize microphone for voice input
    • Returns: Promise<void>
  • getUserId() - Get the current user's ID
    • Returns: string | undefined
  • endSession() - End the conversation session and clean up resources
    • Returns: Promise<void>
State
  • status "connecting" | "connected" | "disconnecting" | "disconnected" - Current WebSocket connection status
  • isConnected boolean - True if status is "connected"
  • isDisconnected boolean - True if status is "disconnected"
  • isNotConnected boolean - True if status is not "connected"
  • sessionStarted boolean - True if session has been started
  • mode "listening" | "speaking" | "thinking" | "stopping" - Current conversation mode
  • isSpeaking boolean - True if mode is "speaking"
  • messages MessageEventData[] - Array of conversation messages
  • messageCounter number - Count of messages sent
  • userId string | null - Current user's unique identifier
  • headInfo ConnectHeadType | null - Information about the digital human
    • name string - Digital human head name
    • phrases string[] - Phrases set during digital human creation
    • language string - Language code set during digital human creation
    • avatar string - Static image URL for digital human
  • microphoneAccess boolean - True if microphone access was granted
  • isMuted boolean - True if audio is muted
  • timeOutWarning boolean - True when session timeout warning is active
  • timeOutBanner boolean - True when session has timed out
  • capacityError boolean - True if a capacity error occurred

Basic Example

import { useConversation } from '@unith-ai/react';
import { useRef, useEffect } from 'react';

function DigitalHumanChat() {
  const videoRef = useRef(null);
  const conversation = useConversation({
    orgId: "your-org-id",
    headId: "your-head-id",
    apiKey: "your-api-key",
  });

  useEffect(() => {
    if (videoRef.current) {
      conversation.startDigitalHuman(videoRef.current, {
        onConnect: ({ userId, headInfo, microphoneAccess }) => {
          console.log('Connected:', userId);
        },
        onMessage: ({ timestamp, sender, text, visible }) => {
          console.log('Message:', text);
        },
        onError: ({ message, endConversation, type }) => {
          console.error('Error:', message);
        },
      });
    }
  }, []);

  const handleSendMessage = () => {
    conversation.sendMessage("Hello!");
  };

  const handleStartSession = () => {
    conversation.startSession();
  };

  return (
    <div>
      <div ref={videoRef} style={{ width: '100%', height: '500px' }} />
      
      {conversation.isConnected && !conversation.sessionStarted && (
        <button onClick={handleStartSession}>Start Conversation</button>
      )}
      
      {conversation.sessionStarted && (
        <button onClick={handleSendMessage}>Send Message</button>
      )}
      
      <div>
        {conversation.messages.map((msg, index) => (
          <div key={index}>
            <strong>{msg.sender}:</strong> {msg.text}
          </div>
        ))}
      </div>
    </div>
  );
}

Advanced Example with Event Callbacks

import { useConversation } from '@unith-ai/react';
import { useRef, useEffect, useState } from 'react';

function AdvancedChat() {
  const videoRef = useRef(null);
  const [inputText, setInputText] = useState('');
  
  const conversation = useConversation({
    orgId: "your-org-id",
    headId: "your-head-id",
    apiKey: "your-api-key",
    mode: "default",
    language: "en-US",
  });

  useEffect(() => {
    if (videoRef.current) {
      conversation.startDigitalHuman(videoRef.current, {
        onConnect: ({ userId, headInfo, microphoneAccess }) => {
          console.log('Connected with user ID:', userId);
          console.log('Digital human:', headInfo.name);
        },
        onMessage: ({ timestamp, sender, text, visible }) => {
          console.log(`[${sender}] ${text}`);
        },
        onSpeakingStart: () => {
          console.log('Digital human started speaking');
        },
        onSpeakingEnd: () => {
          console.log('Digital human finished speaking');
        },
        onTimeoutWarning: () => {
          console.log('Session will timeout soon');
        },
        onTimeout: () => {
          console.log('Session timed out');
        },
        onError: ({ message, type }) => {
          if (type === 'toast') {
            alert(message);
          }
        },
      });
    }
  }, []);

  const handleSendMessage = async (e) => {
    e.preventDefault();
    if (inputText.trim()) {
      await conversation.sendMessage(inputText);
      setInputText('');
    }
  };

  const handleKeepSession = () => {
    conversation.keepSession();
  };

  return (
    <div>
      <div ref={videoRef} style={{ width: '100%', height: '500px' }} />
      
      <div>
        <p>Status: {conversation.status}</p>
        <p>Mode: {conversation.mode}</p>
        {conversation.isSpeaking && <p>Digital human is speaking...</p>}
      </div>

      {conversation.isConnected && !conversation.sessionStarted && (
        <button onClick={() => conversation.startSession()}>
          Start Conversation
        </button>
      )}

      {conversation.timeOutWarning && (
        <div>
          <p>Your session will timeout soon</p>
          <button onClick={handleKeepSession}>Keep Session Active</button>
        </div>
      )}

      {conversation.sessionStarted && (
        <form onSubmit={handleSendMessage}>
          <input
            type="text"
            value={inputText}
            onChange={(e) => setInputText(e.target.value)}
            disabled={conversation.mode !== 'listening'}
            placeholder="Type your message..."
          />
          <button type="submit" disabled={conversation.mode !== 'listening'}>
            Send
          </button>
          <button type="button" onClick={() => conversation.toggleMuteStatus()}>
            {conversation.isMuted ? 'Unmute' : 'Mute'}
          </button>
          {conversation.isSpeaking && (
            <button type="button" onClick={() => conversation.stopResponse()}>
              Stop Response
            </button>
          )}
        </form>
      )}

      <div>
        <h3>Messages ({conversation.messageCounter})</h3>
        {conversation.messages.map((msg, index) => (
          msg.visible && (
            <div key={index}>
              <strong>{msg.sender}:</strong> {msg.text}
              <small> ({msg.timestamp.toLocaleTimeString()})</small>
            </div>
          )
        ))}
      </div>
    </div>
  );
}

Message Structure

Messages in the conversation follow this structure:

interface MessageEventData {
  timestamp: Date;
  sender: "user" | "ai";
  text: string;
  visible: boolean;
}
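As a quick illustration, a helper like the following (hypothetical, not part of the library) could turn the messages array into a plain-text transcript, skipping hidden messages:

```javascript
// Hypothetical helper: format visible messages as a plain-text transcript.
// Assumes objects shaped like MessageEventData above; nothing here is library code.
function formatTranscript(messages) {
  return messages
    .filter((msg) => msg.visible)
    .map((msg) => `[${msg.timestamp.toISOString()}] ${msg.sender}: ${msg.text}`)
    .join('\n');
}

// Exercised with sample data:
const transcript = formatTranscript([
  { timestamp: new Date('2024-01-01T12:00:00Z'), sender: 'user', text: 'Hello!', visible: true },
  { timestamp: new Date('2024-01-01T12:00:01Z'), sender: 'ai', text: '(internal)', visible: false },
  { timestamp: new Date('2024-01-01T12:00:02Z'), sender: 'ai', text: 'Hi there!', visible: true },
]);
// transcript contains only the two visible lines
```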

Event Callbacks

When calling startDigitalHuman, you can pass event callbacks:

  • onConnect({userId, headInfo, microphoneAccess}) - Called when the WebSocket connection is established
  • onDisconnect() - Called when the connection is closed
  • onStatusChange({status}) - Called when connection status changes
  • onMessage({timestamp, sender, text, visible}) - Called when a message is received or sent
  • onMuteStatusChange({isMuted}) - Called when mute status changes
  • onSpeakingStart() - Called when the digital human starts speaking
  • onSpeakingEnd() - Called when the digital human finishes speaking
  • onStoppingEnd() - Called when a response is manually stopped
  • onTimeout() - Called when the session times out due to inactivity
  • onTimeoutWarning() - Called before the session times out
  • onKeepSession({granted}) - Called when a keep-alive request is processed
  • onError({message, endConversation, type}) - Called when an error occurs
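All callbacks are optional. As a hedged sketch (only the callback names and payload shapes come from the list above; the helper itself is hypothetical), a callbacks object that records every event for debugging might look like this:

```javascript
// Hypothetical sketch: build a callbacks object that records events into `log`,
// useful while debugging. The callback names match the list above.
function makeRecordingCallbacks(log) {
  return {
    onConnect: ({ userId }) => log.push(`connect:${userId}`),
    onDisconnect: () => log.push('disconnect'),
    onStatusChange: ({ status }) => log.push(`status:${status}`),
    onMessage: ({ sender, text }) => log.push(`message:${sender}:${text}`),
    onSpeakingStart: () => log.push('speaking:start'),
    onSpeakingEnd: () => log.push('speaking:end'),
    onTimeoutWarning: () => log.push('timeout:warning'),
    onError: ({ message }) => log.push(`error:${message}`),
  };
}

// Invoke two callbacks directly to show the payload shapes:
const eventLog = [];
const recording = makeRecordingCallbacks(eventLog);
recording.onConnect({ userId: 'user-123', headInfo: null, microphoneAccess: false });
recording.onMessage({ timestamp: new Date(), sender: 'ai', text: 'Hi', visible: true });
// eventLog is now ['connect:user-123', 'message:ai:Hi']
```

In a real component you would pass this object as the second argument to startDigitalHuman.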

Error Handling

Handle errors using the onError callback:

conversation.startDigitalHuman(videoRef.current, {
  onError: ({ message, endConversation, type }) => {
    if (type === "toast") {
      // Show toast notification
      showToast(message);
      if (endConversation) {
        // The error is fatal: end the session and clean up
        conversation.endSession();
      }
    } else if (type === "modal") {
      // Show modal dialog
      showModal(message);
    }
  },
});

Getting Background Video

Retrieve the idle background video URL for welcome screens:

import { useEffect } from 'react';
import { useConversation } from '@unith-ai/react';

const { getBackgroundVideo } = useConversation({
  orgId: "your-org-id",
  headId: "your-head-id",
  apiKey: "your-api-key",
});

useEffect(() => {
  async function loadBackgroundVideo() {
    const videoUrl = await getBackgroundVideo();
    // Use videoUrl for your background/welcome screen
  }
  loadBackgroundVideo();
}, []);

TypeScript Support

Full TypeScript types are included with the library. Import types as needed:

import { useConversation } from '@unith-ai/react';
import type { 
  HeadConfigOptions, 
  MessageEventData, 
  Status, 
  Mode 
} from '@unith-ai/react';

Best Practices

  1. Call startSession() after user interaction - This ensures audio context is properly initialized, especially on mobile browsers
  2. Handle the listening mode - Only send messages when mode === "listening" to avoid interrupting the digital human
  3. Clean up on unmount - The hook automatically calls endSession() on unmount, but you can call it manually if needed
  4. Use keepSession() - Respond to onTimeoutWarning by calling keepSession() to extend the session
  5. Handle errors gracefully - Always implement the onError callback to handle connection and capacity errors
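Practice 4 can be wrapped in a small helper. The sketch below is hypothetical (it works against any conversation-shaped object exposing keepSession(); it is not library code) and only extends the session while the page is actually visible:

```javascript
// Hypothetical sketch: wire onTimeoutWarning to keepSession so the session is
// extended automatically while the tab is visible. `conversation` is any object
// exposing keepSession(); nothing here is library code.
function makeKeepAliveCallbacks(conversation) {
  return {
    onTimeoutWarning: () => {
      // Only extend the session when the user is still looking at the page.
      // (typeof check keeps this runnable outside a browser.)
      if (typeof document === 'undefined' || document.visibilityState === 'visible') {
        conversation.keepSession();
      }
    },
    onTimeout: () => {
      console.log('Session timed out');
    },
  };
}

// Exercised against a minimal mock:
const keepAliveCalls = [];
const mockConversation = { keepSession: () => keepAliveCalls.push('keepSession') };
const keepAlive = makeKeepAliveCallbacks(mockConversation);
keepAlive.onTimeoutWarning();
// keepAliveCalls now contains 'keepSession' unless run in a hidden browser tab
```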

Development

Please refer to the README.md file in the root of this repository.