# @omote/r3f (v0.3.3)
React Three Fiber adapter for Omote AI characters. Drop-in <OmoteAvatar> component with lip sync, gaze tracking, emotion blending, and animation support.
## Installation
```shell
npm install @omote/r3f @omote/three @omote/core @react-three/fiber @react-three/drei three three-stdlib
```

Required peer dependencies: `@omote/three` (>=0.3.0), `@react-three/drei` (>=9.0.0), `three-stdlib` (>=2.0.0), `three` (>=0.150.0), `react` (>=18.0.0).
## Quick Start
```tsx
import { Suspense } from 'react';
import { Canvas } from '@react-three/fiber';
import { OmoteAvatar } from '@omote/r3f';

function App() {
  return (
    <Canvas>
      <ambientLight />
      <Suspense fallback={null}>
        <OmoteAvatar src="/models/avatar.glb" />
      </Suspense>
    </Canvas>
  );
}
```

**Important:** `<OmoteAvatar>` uses R3F's `useLoader`, which triggers React Suspense. Always wrap it in a `<Suspense>` boundary.
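If you want a visible loading indicator instead of an empty fallback, drei's `useProgress` and `Html` can drive one. A minimal sketch (the percentage display is illustrative, not part of @omote/r3f):

```tsx
import { Suspense } from 'react';
import { Canvas } from '@react-three/fiber';
import { Html, useProgress } from '@react-three/drei';
import { OmoteAvatar } from '@omote/r3f';

// Shows drei's global loader progress while the GLB (and any other assets) load.
function Loader() {
  const { progress } = useProgress();
  return <Html center>{progress.toFixed(0)}% loaded</Html>;
}

function App() {
  return (
    <Canvas>
      <Suspense fallback={<Loader />}>
        <OmoteAvatar src="/models/avatar.glb" />
      </Suspense>
    </Canvas>
  );
}
```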
## Lip Sync (TTS Playback)
```tsx
import { Suspense } from 'react';
import { Canvas } from '@react-three/fiber';
import { OmoteAvatar, usePlaybackPipeline } from '@omote/r3f';

function AvatarWithLipSync() {
  const pipeline = usePlaybackPipeline({
    lamModelUrl: '/models/lam.onnx',
    autoLoad: true,
  });
  return <OmoteAvatar src="/models/avatar.glb" frameRef={pipeline.frameRef} />;
}

function App() {
  return (
    <Canvas>
      <Suspense fallback={null}>
        <AvatarWithLipSync />
      </Suspense>
    </Canvas>
  );
}
```

## Conversational Avatar
```tsx
import { useRef, Suspense } from 'react';
import { Canvas } from '@react-three/fiber';
import { OmoteAvatar, type OmoteAvatarRef } from '@omote/r3f';
import { createKokoroTTS } from '@omote/core';

function ConversationalAvatar() {
  const avatarRef = useRef<OmoteAvatarRef>(null);

  async function handleReady() {
    await avatarRef.current?.connectVoice({
      mode: 'local',
      tts: createKokoroTTS(),
      onTranscript: async (text) => {
        const res = await fetch('/api/chat', { method: 'POST', body: text });
        return await res.text();
      },
    });
  }

  return (
    <OmoteAvatar
      ref={avatarRef}
      src="/models/avatar.glb"
      onReady={handleReady}
    />
  );
}

function App() {
  return (
    <Canvas>
      <Suspense fallback={null}>
        <ConversationalAvatar />
      </Suspense>
    </Canvas>
  );
}
```

## Props Reference
### `<OmoteAvatar>`
| Prop | Type | Default | Description |
|------|------|---------|-------------|
| src | string | required | URL to .glb avatar model |
| frameRef | RefObject<Float32Array \| null> | — | 52-channel blendshape frame from pipeline hooks |
| rawFrameRef | RefObject<Float32Array \| null> | — | Raw (unscaled) blendshape frame |
| compositorConfig | FaceCompositorConfig | — | FaceCompositor configuration |
| emotion | string \| EmotionWeights \| null | — | Emotion label ('happy', 'sad', etc.) or weights object |
| stateRef | RefObject<ConversationalState> | — | Conversational state ('idle', 'listening', 'speaking') |
| isSpeakingRef | RefObject<boolean> | — | Speaking state for blink suppression |
| gaze | GazeConfig | { enabled: true, target: 'camera' } | Gaze tracking configuration |
| animations | AnimationSource[] | — | External FBX animation sources |
| currentAnimation | string | — | Active animation ID (triggers crossfade) |
| boneFilter | BoneFilterConfig | DEFAULT_BONE_FILTER | Bones excluded from animation playback |
| avatarScale | number | 1 | Uniform scale multiplier |
| onReady | (event: AvatarReadyEvent) => void | — | Called when avatar is loaded and configured |
| onAnimationChange | (id: string) => void | — | Called when animation changes |
| onAnimationFinished | (event) => void | — | Called when a non-looping animation finishes |
All standard `<group>` props (position, rotation, scale, etc.) are also supported.
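The `emotion` prop accepts either a label or a weights object, which makes client-side blending possible. As a sketch, here is a hypothetical helper that interpolates between two weight maps before passing the result in (`EmotionWeights` is modeled as a plain label-to-weight map; the package's actual type may differ):

```typescript
// Hypothetical stand-in for the package's EmotionWeights type.
type EmotionWeights = Record<string, number>;

// Linearly interpolate between two emotion weight maps (t in [0, 1]).
function blendEmotions(a: EmotionWeights, b: EmotionWeights, t: number): EmotionWeights {
  const out: EmotionWeights = {};
  for (const key of new Set([...Object.keys(a), ...Object.keys(b)])) {
    out[key] = (a[key] ?? 0) * (1 - t) + (b[key] ?? 0) * t;
  }
  return out;
}

// e.g. <OmoteAvatar emotion={blendEmotions({ happy: 1 }, { surprised: 1 }, 0.25)} />
const blended = blendEmotions({ happy: 1 }, { surprised: 1 }, 0.25);
// blended.happy === 0.75, blended.surprised === 0.25
```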
## useOmoteAvatar (Headless)
For custom rendering pipelines without the <OmoteAvatar> component:
```tsx
import { useFrame } from '@react-three/fiber';
import { useGLTF } from '@react-three/drei';
import { useOmoteAvatar } from '@omote/r3f';

const gltf = useGLTF('/models/avatar.glb');
const { update, compositor, animationControls, parts, playAnimation } = useOmoteAvatar({
  scene: gltf.scene,
  embeddedAnimations: gltf.animations,
  frameRef: pipeline.frameRef, // from a pipeline hook, e.g. usePlaybackPipeline
  gaze: { enabled: true, target: 'camera' },
});

// Drive the avatar manually from your own render loop
useFrame((state, delta) => {
  update(delta, state.camera, myGroup.current?.rotation.y);
});
```

## Pipeline Hooks
| Hook | Description |
|------|-------------|
| usePlaybackPipeline | TTS audio playback with A2E lip sync |
| useTTSPlayback | High-level speak(text) with lip sync (wraps TTSSpeaker) |
| useMicLipSync | Microphone input with real-time lip sync |
| useListener | Speech listener with VAD + ASR (wraps SpeechListener) |
All pipeline hooks return `frameRef` and `rawFrameRef`; pass them directly to `<OmoteAvatar>`.
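Each frame is a flat `Float32Array` of 52 blendshape weights. As a sketch of consuming it outside the avatar, e.g. to drive custom UI (the channel index below is a hypothetical example; the actual channel ordering is defined by the pipeline and is not documented here):

```typescript
const JAW_OPEN = 17; // hypothetical channel index, for illustration only

// Safely read one blendshape channel; returns 0 before the first frame arrives.
function channelValue(frame: Float32Array | null, index: number): number {
  if (!frame || index < 0 || index >= frame.length) return 0;
  return frame[index];
}

const frame = new Float32Array(52);
frame[JAW_OPEN] = 0.8;
const open = channelValue(frame, JAW_OPEN); // ~0.8 (float32 precision)
const idle = channelValue(null, JAW_OPEN);  // 0 when no frame is available yet
```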
### useTTSPlayback
```tsx
import { useTTSPlayback } from '@omote/r3f';

const { frameRef, speak, isSpeaking, stop } = useTTSPlayback({
  tts: myTTSBackend, // required: any TTSBackend (KokoroTTSInference, etc.)
  profile: { mouth: 1.0, jaw: 1.0, brows: 0.6 },
});

// Speak text with lip sync
await speak("Hello world!");

// Pass frameRef to OmoteAvatar
<OmoteAvatar src="/avatar.glb" frameRef={frameRef} />
```

### useListener
```tsx
import { useListener } from '@omote/r3f';

const { transcript, start, stop, isListening } = useListener({
  config: {
    models: {
      senseVoice: { modelUrl: '/models/sensevoice/model.int8.onnx' },
      vad: { modelUrl: '/models/silero-vad.onnx' },
    },
  },
  autoLoad: true,
});
```

## Utilities
| Export | Description |
|--------|-------------|
| preloadAvatar(url) | Preload a GLB model for instant display |
| preloadAnimation(url) | Preload an FBX animation file |
| resolveEmotion(label) | Convert emotion string to EmotionWeights |
| discoverScene(scene) | Pure function: traverse scene graph, find bones + morph targets |
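To illustrate what `discoverScene` does conceptually, here is a simplified traversal over a plain node tree (the `Node` shape below is a stand-in for three.js objects, not the package's actual types, and `discover` is a hypothetical sketch rather than the real implementation):

```typescript
// Minimal stand-in for a three.js scene node (illustrative only).
interface Node {
  name: string;
  isBone?: boolean;
  morphTargetDictionary?: Record<string, number>;
  children: Node[];
}

// Pure traversal: collect bone names and meshes that carry morph targets.
function discover(root: Node): { bones: string[]; morphMeshes: string[] } {
  const bones: string[] = [];
  const morphMeshes: string[] = [];
  const walk = (node: Node) => {
    if (node.isBone) bones.push(node.name);
    if (node.morphTargetDictionary) morphMeshes.push(node.name);
    node.children.forEach(walk);
  };
  walk(root);
  return { bones, morphMeshes };
}

const scene: Node = {
  name: 'root',
  children: [
    { name: 'Head', isBone: true, children: [] },
    { name: 'Face', morphTargetDictionary: { jawOpen: 0 }, children: [] },
  ],
};
const found = discover(scene); // { bones: ['Head'], morphMeshes: ['Face'] }
```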
## Browser Compatibility

See `@omote/core` browser support for WebGPU/WASM details.
## Migration from v0.1.x
See MIGRATION.md for a detailed guide on breaking changes.
## License
MIT
