# @omote/babylon (v0.3.3)

Babylon.js adapter for the Omote AI Character SDK.
## Install

```shell
npm install @omote/babylon @omote/core @babylonjs/core @babylonjs/loaders
```

## Quick Start
```ts
import '@babylonjs/loaders'; // registers the glTF loader as a side effect
import { SceneLoader } from '@babylonjs/core';
import { OmoteAvatar } from '@omote/babylon';
import { MicLipSync } from '@omote/core';

// Load the avatar model
const result = await SceneLoader.ImportMeshAsync('', '/models/', 'avatar.glb', scene);

// Create the avatar
const avatar = new OmoteAvatar({ target: result.meshes[0], scene });

// Wire a pipeline
const mic = new MicLipSync({ /* config */ });
avatar.connectFrameSource(mic);

// Update loop (or use autoUpdate: true in the constructor)
scene.registerBeforeRender(() => {
  avatar.update(engine.getDeltaTime() / 1000, scene.activeCamera);
});
```

## API
### OmoteAvatar

Full-featured avatar class with CharacterController (compositor + gaze + life layer).
| Method | Description |
|--------|-------------|
| `update(delta, camera, avatarRotationY?)` | Call each frame (or use `autoUpdate: true`) |
| `connectFrameSource(source)` | Wire any pipeline |
| `disconnectFrameSource()` | Disconnect the current frame source |
| `setFrame(blendshapes)` | Direct blendshape input |
| `setEmotion(emotion)` | Set emotion (string preset or weights) |
| `setSpeaking(speaking)` | Drive mouth animation intensity |
| `setState(state)` | Set conversational state (idle, listening, thinking, speaking) |
| `setAudioEnergy(energy)` | Set audio energy level (0-1, drives emphasis) |
| `setCamera(camera)` | Set camera for gaze tracking (required with `autoUpdate`) |
| `reset()` | Reset all state (smoothing, life layer, emotions) |
| `dispose()` | Clean up resources |

Accessors: `compositor`, `parts`, `hasMorphTargets`, `mappedBlendshapeCount`
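As a sketch of what a custom pipeline behind `connectFrameSource(source)` could look like, here is a stand-in frame source that emits a sinusoidal `jawOpen` blendshape. The `FrameSource` interface and the callback-based `onFrame` wiring are assumptions for illustration; the real `@omote/core` contract may differ.

```typescript
// Hypothetical frame-source shape; the actual @omote/core interface may differ.
interface FrameSource {
  onFrame(cb: (blendshapes: Record<string, number>) => void): void;
}

// A minimal sine-wave source that opens and closes the jaw —
// a stand-in for MicLipSync when no microphone is available.
class SineJawSource implements FrameSource {
  private cb: ((bs: Record<string, number>) => void) | null = null;

  onFrame(cb: (bs: Record<string, number>) => void): void {
    this.cb = cb;
  }

  // Call once per frame with elapsed time in seconds.
  tick(timeSeconds: number): void {
    const jawOpen = 0.5 * (1 + Math.sin(timeSeconds * 2 * Math.PI)); // 0..1
    this.cb?.({ jawOpen });
  }
}

const src = new SineJawSource();
const frames: number[] = [];
src.onFrame((bs) => frames.push(bs.jawOpen));
for (let t = 0; t < 1; t += 0.25) src.tick(t);
console.log(frames.map((v) => v.toFixed(2)).join(',')); // → "0.50,1.00,0.50,0.00"
```

A source like this is handy for exercising morph-target mapping without audio hardware; swap in `MicLipSync` for live microphone input.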
## Voice Integration

`connectVoice()` combines speaker + listener + interruption handling:
```ts
import { OmoteAvatar } from '@omote/babylon';
import { createKokoroTTS } from '@omote/core';

const avatar = new OmoteAvatar({ target: rootMesh, scene });

await avatar.connectVoice({
  mode: 'local',
  tts: createKokoroTTS({ defaultVoice: 'af_heart' }),
  interruptionEnabled: true,
  onTranscript: async (text) => {
    const res = await fetch('/api/chat', { method: 'POST', body: text });
    return await res.text();
  },
});

// Or use the individual APIs:
await avatar.connectSpeaker(ttsBackend, { lam: createA2E() });
await avatar.speak('Hello!');
```

## Listener (Speech Recognition)
```ts
import { OmoteAvatar } from '@omote/babylon';

const avatar = new OmoteAvatar({ target: rootMesh, scene });

// Connect the listener (mic + VAD + ASR)
await avatar.connectListener();

avatar.onTranscript((result) => {
  console.log('Transcript:', result.text);
});

await avatar.startListening();
```

## Auto Update
Use `autoUpdate: true` to skip manual `update()` calls:
```ts
const avatar = new OmoteAvatar({
  target: rootMesh,
  scene,
  autoUpdate: true,
});

// A camera is still required for gaze tracking
avatar.setCamera(scene.activeCamera);
```

## License
MIT
