
@open-avatar/livekit-react

v0.1.1


@open-avatar/livekit-react

A React component that renders a 3D talking-head avatar driven by your LiveKit voice agent. It uses the agent’s audio track for real-time lip-sync and optional gestures, and works with @met4citizen/talkinghead and @met4citizen/headaudio.

Author: jempf · Source

Features

  • Real-time lip-sync – Avatar mouth is driven by the voice agent’s audio via a neural lip-sync model (HeadAudio).
  • Agent state overlay – Shows listening, speaking, thinking, etc.
  • Configurable avatar – Body style, camera framing, mood, and optional gestures.
  • Loading and error states – Built-in spinner and error message.

Installation

npm install @open-avatar/livekit-react

Peer dependencies (install if not already present):

npm install react react-dom @livekit/components-react livekit-client

Quick start

LiveKitAvatar must be used inside a LiveKit room where a voice agent is (or will be) connected. It uses useVoiceAssistant() from @livekit/components-react to get the agent’s audio and state.

Minimal example:

import { LiveKitRoom } from '@livekit/components-react';
import { LiveKitAvatar } from '@open-avatar/livekit-react';

const AVATAR_MODEL_URL = 'https://your-cdn.com/path/to/avatar-model.glb';

export function VoiceAgentPage() {
  const token = '…'; // Get from your backend (e.g. /api/token)
  const serverUrl = 'wss://your-livekit-server.livekit.cloud';

  return (
    <LiveKitRoom
      token={token}
      serverUrl={serverUrl}
      connect={true}
      audio={true}
      video={false}
    >
      <div style={{ width: '100%', height: '400px' }}>
        <LiveKitAvatar modelUrl={AVATAR_MODEL_URL} />
      </div>
    </LiveKitRoom>
  );
}
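The token in the example above comes from your own backend. As a rough sketch of that step (the /api/token endpoint, its query parameters, and the response shape are assumptions, not part of this package):

```typescript
// Hypothetical helper for requesting a LiveKit access token from your
// backend. The /api/token route, its query parameters, and the
// { token } response shape are assumptions -- adapt to your server.
function buildTokenUrl(base: string, room: string, identity: string): string {
  const url = new URL('/api/token', base);
  url.searchParams.set('room', room);
  url.searchParams.set('identity', identity);
  return url.toString();
}

async function fetchToken(base: string, room: string, identity: string): Promise<string> {
  const res = await fetch(buildTokenUrl(base, room, identity));
  if (!res.ok) throw new Error(`Token request failed: ${res.status}`);
  const { token } = await res.json();
  return token;
}
```

You would typically call fetchToken once on page load (or in a React effect) and pass the result to LiveKitRoom.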

Full example with controls

You can combine the avatar with LiveKit’s VoiceAssistantControlBar for a full voice UI:

import { LiveKitRoom, VoiceAssistantControlBar } from '@livekit/components-react';
import { LiveKitAvatar } from '@open-avatar/livekit-react';

const AVATAR_MODEL_URL = 'https://your-cdn.com/path/to/avatar-model.glb';

export function VoiceAgentPage() {
  const token = '…';
  const serverUrl = 'wss://your-livekit-server.livekit.cloud';

  return (
    <LiveKitRoom
      token={token}
      serverUrl={serverUrl}
      connect={true}
      audio={true}
      video={false}
    >
      <div className="flex flex-col h-screen">
        <div className="flex-1 relative min-h-[300px]">
          <LiveKitAvatar
            modelUrl={AVATAR_MODEL_URL}
            bodyStyle="F"
            cameraView="upper"
            avatarMood="neutral"
            enableGestures={true}
          />
        </div>
        <VoiceAssistantControlBar />
      </div>
    </LiveKitRoom>
  );
}

API

LiveKitAvatar

| Prop | Type | Default | Description |
|------|------|---------|-------------|
| modelUrl | string | required | URL to the 3D avatar model (e.g. .glb) used by @met4citizen/talkinghead. |
| bodyStyle | 'M' \| 'F' | 'F' | Body style of the avatar. |
| cameraView | 'upper' \| 'full' \| 'head' | 'upper' | Framing of the avatar (upper body, full body, or head only). |
| avatarMood | string | 'neutral' | Mood/expression preset. |
| enableGestures | boolean | true | Whether to trigger gestures (e.g. look-at-camera, hand gestures) when the agent starts speaking. |
| className | string | — | Optional CSS class for the wrapper div. |
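For reference, the props table above corresponds roughly to the following TypeScript shape (a sketch based on the documentation; the actual exported types may differ, so check the package’s .d.ts files):

```typescript
// Hypothetical shape of the LiveKitAvatar props, reconstructed from the
// props table -- not necessarily the package's actual exported types.
type BodyStyle = 'M' | 'F';
type CameraView = 'upper' | 'full' | 'head';

interface LiveKitAvatarProps {
  modelUrl: string;          // required: URL to the .glb avatar model
  bodyStyle?: BodyStyle;     // default: 'F'
  cameraView?: CameraView;   // default: 'upper'
  avatarMood?: string;       // default: 'neutral'
  enableGestures?: boolean;  // default: true
  className?: string;        // optional wrapper class
}

// Defaults as documented in the table:
const avatarDefaults = {
  bodyStyle: 'F' as BodyStyle,
  cameraView: 'upper' as CameraView,
  avatarMood: 'neutral',
  enableGestures: true,
};
```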

Avatar model URL

  • Must be a URL to a model compatible with @met4citizen/talkinghead (e.g. GLB with the expected rig/blendshapes).
  • The component loads the model at runtime; ensure the URL is CORS-accessible from the browser.

Avatar sources (examples)

You can use 3D avatar models from these kinds of sources (ensure the format and rig are compatible with @met4citizen/talkinghead):

  • Avaturn – Create realistic 3D avatars and export as GLB. For lip-sync and facial animation, use T2-type avatars; they support ARKit blendshapes and visemes (separate mouth/eyes) and work well for talking-head use. T1 avatars are more photorealistic but have static faces. See Avaturn docs (bodies).
  • Met4 Citizen – Pre-made or exportable avatars designed for the Met4 Citizen / Talking Head stack.
  • Your own pipeline – Any GLB (or compatible format) that matches the rig and morph targets expected by @met4citizen/talkinghead.

Host the exported model on a CDN or your own server and pass the public URL as modelUrl.

Agent state overlay

When the avatar is ready, a small overlay shows the current voice assistant state: Listening, Speaking, Processing, Connecting, Ready, Disconnected, Failed, or Buffering.
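The overlay labels correspond to the agent state reported by useVoiceAssistant(). A hypothetical mapping, for illustration only (the component’s internal mapping may differ):

```typescript
// Hypothetical mapping from the agent state string reported by
// useVoiceAssistant() to an overlay label. This is a sketch, not the
// component's actual implementation.
const stateLabels: Record<string, string> = {
  listening: 'Listening',
  speaking: 'Speaking',
  thinking: 'Processing',
  connecting: 'Connecting',
  disconnected: 'Disconnected',
};

function overlayLabel(state: string): string {
  // Fall back to a generic label for states not covered above.
  return stateLabels[state] ?? 'Buffering';
}
```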

Requirements

  • React ≥ 18
  • LiveKit voice agent in the room (e.g. livekit-agents ≥ 0.9.0)
  • Browser: modern browser with WebGL and AudioWorklet support (Chrome, Firefox, Safari, Edge)
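If you want to guard against unsupported browsers before mounting the avatar, a minimal capability check for the requirements above (WebGL and AudioWorklet) might look like this; the function name and approach are illustrative, not part of the package:

```typescript
// Hypothetical capability check for the browser requirements above
// (WebGL + AudioWorklet). Returns false outside a browser environment.
function supportsAvatar(): boolean {
  const doc = (globalThis as any).document;
  if (!doc) return false; // not running in a browser
  const canvas = doc.createElement('canvas');
  const hasWebGL = !!(canvas.getContext('webgl2') || canvas.getContext('webgl'));
  const hasAudioWorklet = typeof (globalThis as any).AudioWorkletNode !== 'undefined';
  return hasWebGL && hasAudioWorklet;
}
```

You could use this to render a fallback UI instead of LiveKitAvatar when it returns false.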

License

See repository for license information. Created by jempf.