# @open-avatar/livekit-react

v0.1.1

A React component that renders a **3D talking-head avatar** driven by your [LiveKit](https://livekit.io) voice agent. It uses the agent’s audio track for real-time lip-sync and optional gestures, and works with `@met4citizen/talkinghead` and `@met4citizen/headaudio`.
## Features

- **Real-time lip-sync** – the avatar’s mouth is driven by the voice agent’s audio via a neural lip-sync model (HeadAudio).
- **Agent state overlay** – shows listening, speaking, thinking, etc.
- **Configurable avatar** – body style, camera framing, mood, and optional gestures.
- **Loading and error states** – built-in spinner and error message.
## Prerequisites

- A LiveKit room with a voice agent in it (e.g. `livekit-agents` ≥ 0.9.0).
- A 3D avatar model URL compatible with `@met4citizen/talkinghead` (e.g. from Met4 Citizen or your own exported model).
## Installation

```bash
npm install @open-avatar/livekit-react
```

Peer dependencies (install if not already present):

```bash
npm install react react-dom @livekit/components-react livekit-client
```

## Quick start
`LiveKitAvatar` must be used inside a LiveKit room where a voice agent is (or will be) connected. It uses `useVoiceAssistant()` from `@livekit/components-react` to get the agent’s audio and state.
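The examples below assume you already have a LiveKit access token from your backend. As a hedged sketch of the client side of that exchange (the `/api/token` route, its query parameters, and the `{ token }` response shape are assumptions — substitute whatever your server actually exposes):

```typescript
// Sketch: fetch a LiveKit access token from your own backend.
// The `/api/token` path, query parameters, and `{ token }` response
// shape are assumptions — adapt them to your server.

// Build the request URL (kept as a separate pure function).
function tokenRequestUrl(room: string, identity: string): string {
  const params = new URLSearchParams({ room, identity });
  return `/api/token?${params.toString()}`;
}

async function fetchToken(room: string, identity: string): Promise<string> {
  const res = await fetch(tokenRequestUrl(room, identity));
  if (!res.ok) throw new Error(`Token request failed: ${res.status}`);
  const { token } = (await res.json()) as { token: string };
  return token;
}
```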
Minimal example:
```tsx
import { LiveKitRoom } from '@livekit/components-react';
import { LiveKitAvatar } from '@open-avatar/livekit-react';

const AVATAR_MODEL_URL = 'https://your-cdn.com/path/to/avatar-model.glb';

export function VoiceAgentPage() {
  const token = '…'; // Get from your backend (e.g. /api/token)
  const serverUrl = 'wss://your-livekit-server.livekit.cloud';

  return (
    <LiveKitRoom
      token={token}
      serverUrl={serverUrl}
      connect={true}
      audio={true}
      video={false}
    >
      <div style={{ width: '100%', height: '400px' }}>
        <LiveKitAvatar modelUrl={AVATAR_MODEL_URL} />
      </div>
    </LiveKitRoom>
  );
}
```

## Full example with controls
You can combine the avatar with LiveKit’s `VoiceAssistantControlBar` for a complete voice UI:
```tsx
import { LiveKitRoom, VoiceAssistantControlBar } from '@livekit/components-react';
import { LiveKitAvatar } from '@open-avatar/livekit-react';

const AVATAR_MODEL_URL = 'https://your-cdn.com/path/to/avatar-model.glb';

export function VoiceAgentPage() {
  const token = '…';
  const serverUrl = 'wss://your-livekit-server.livekit.cloud';

  return (
    <LiveKitRoom
      token={token}
      serverUrl={serverUrl}
      connect={true}
      audio={true}
      video={false}
    >
      <div className="flex flex-col h-screen">
        <div className="flex-1 relative min-h-[300px]">
          <LiveKitAvatar
            modelUrl={AVATAR_MODEL_URL}
            bodyStyle="F"
            cameraView="upper"
            avatarMood="neutral"
            enableGestures={true}
          />
        </div>
        <VoiceAssistantControlBar />
      </div>
    </LiveKitRoom>
  );
}
```

## API
### LiveKitAvatar
| Prop | Type | Default | Description |
|------|------|---------|-------------|
| `modelUrl` | `string` | required | URL to the 3D avatar model (e.g. `.glb`) used by `@met4citizen/talkinghead`. |
| `bodyStyle` | `'M' \| 'F'` | `'F'` | Body style of the avatar. |
| `cameraView` | `'upper' \| 'full' \| 'head'` | `'upper'` | Framing of the avatar (upper body, full body, or head only). |
| `avatarMood` | `string` | `'neutral'` | Mood/expression preset. |
| `enableGestures` | `boolean` | `true` | Whether to trigger gestures (e.g. look-at-camera, hand gestures) when the agent starts speaking. |
| `className` | `string` | — | Optional CSS class for the wrapper div. |
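For reference, the table above corresponds roughly to a props interface like the following. This is a sketch inferred from the table, not the package’s exported type — check the package’s own type declarations for the authoritative shape:

```typescript
// Sketch of the props shape implied by the table above — illustrative,
// not the package's exported type.
interface LiveKitAvatarProps {
  modelUrl: string;                        // required, URL to a .glb model
  bodyStyle?: 'M' | 'F';                   // default 'F'
  cameraView?: 'upper' | 'full' | 'head';  // default 'upper'
  avatarMood?: string;                     // default 'neutral'
  enableGestures?: boolean;                // default true
  className?: string;                      // optional wrapper class
}

// The defaults listed in the table, collected in one place.
const defaultAvatarProps: Omit<Required<LiveKitAvatarProps>, 'modelUrl' | 'className'> = {
  bodyStyle: 'F',
  cameraView: 'upper',
  avatarMood: 'neutral',
  enableGestures: true,
};
```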
### Avatar model URL

- Must be a URL to a model compatible with `@met4citizen/talkinghead` (e.g. a GLB with the expected rig/blendshapes).
- The component loads the model at runtime; ensure the URL is CORS-accessible from the browser.
## Avatar sources (examples)

You can use 3D avatar models from sources like these (ensure the format and rig are compatible with `@met4citizen/talkinghead`):
- **Avaturn** – create realistic 3D avatars and export them as GLB. For lip-sync and facial animation, use T2-type avatars; they support ARKit blendshapes and visemes (separate mouth/eyes) and work well for talking-head use. T1 avatars are more photorealistic but have static faces. See the Avaturn docs (bodies).
- **Met4 Citizen** – pre-made or exportable avatars designed for the Met4 Citizen / Talking Head stack.
- **Your own pipeline** – any GLB (or compatible format) that matches the rig and morph targets expected by `@met4citizen/talkinghead`.
Host the exported model on a CDN or your own server and pass its public URL as `modelUrl`.
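A cheap client-side sanity check can catch obviously wrong `modelUrl` values before they reach the component. This sketch only validates the URL’s shape; it cannot verify CORS headers or rig compatibility, which still require a real request and a compatible model:

```typescript
// Sketch: sanity-check a modelUrl before handing it to the avatar component.
// Shape-only validation — it cannot verify CORS headers or the model's rig.
function isLikelyModelUrl(url: string): boolean {
  try {
    const u = new URL(url);
    const httpLike = u.protocol === 'https:' || u.protocol === 'http:';
    return httpLike && u.pathname.toLowerCase().endsWith('.glb');
  } catch {
    return false; // not an absolute URL
  }
}
```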
## Agent state overlay
When the avatar is ready, a small overlay shows the current voice assistant state: Listening, Speaking, Processing, Connecting, Ready, Disconnected, Failed, or Buffering.
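If you want to mirror these labels elsewhere in your UI, they can be modeled as a simple union. This is a sketch based on the list above — the package does not necessarily export such a type:

```typescript
// Sketch: the overlay labels listed above as a union type, so your own UI
// can stay in sync with them. Illustrative only — not an exported type.
type OverlayLabel =
  | 'Listening'
  | 'Speaking'
  | 'Processing'
  | 'Connecting'
  | 'Ready'
  | 'Disconnected'
  | 'Failed'
  | 'Buffering';

const overlayLabels: OverlayLabel[] = [
  'Listening', 'Speaking', 'Processing', 'Connecting',
  'Ready', 'Disconnected', 'Failed', 'Buffering',
];
```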
## Requirements

- React ≥ 18
- A LiveKit voice agent in the room (e.g. `livekit-agents` ≥ 0.9.0)
- Browser: a modern browser with WebGL and AudioWorklet support (Chrome, Firefox, Safari, Edge)
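The browser requirements above can be feature-detected before mounting the avatar, e.g. to show a fallback UI instead. A minimal sketch (it deliberately returns `false` outside a browser, such as during server-side rendering):

```typescript
// Sketch: feature-detect WebGL and AudioWorklet before mounting the avatar.
// Returns false in non-browser environments (e.g. server-side rendering).
function supportsAvatar(): boolean {
  const g = globalThis as any;
  if (typeof g.document === 'undefined') {
    return false; // SSR / non-browser environment
  }
  const canvas = g.document.createElement('canvas');
  const webgl = !!(canvas.getContext('webgl2') || canvas.getContext('webgl'));
  const audioWorklet = typeof g.AudioWorkletNode !== 'undefined';
  return webgl && audioWorklet;
}
```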
## License

See the repository for license information. Created by jempf.
