LipSyncEngine.js
High-quality lip-sync animation from audio in the browser
WebAssembly port of Rhubarb Lip Sync with TypeScript support.
Features

- High Performance - Runs natively in the browser via WebAssembly
- Accurate - Uses PocketSphinx speech recognition for precise phoneme detection
- Small Bundle - Only ~80KB JavaScript + 2.2MB WASM + models
- TypeScript - Full type definitions included
- Framework-Agnostic - Works with React, Vue, Svelte, vanilla JS, and any framework
- Web Workers - Non-blocking analysis with a worker pool
- Streaming Support - Real-time chunk processing for live audio
- Complete API - Audio utilities, format conversion, microphone recording
- Browser-Native - No server required; runs entirely client-side
Installation
```bash
npm install lip-sync-engine
```

Quick Start
Vanilla JavaScript / TypeScript
```js
import { analyze, recordAudio } from 'lip-sync-engine';

// Record audio from microphone
const { pcm16 } = await recordAudio(5000); // 5 seconds

// Analyze
const result = await analyze(pcm16, {
  dialogText: "Hello world", // Optional, improves accuracy
  sampleRate: 16000
});

// Use mouth cues for animation
result.mouthCues.forEach(cue => {
  console.log(`${cue.start}s - ${cue.end}s: ${cue.value}`);
  // Output: 0.00s - 0.35s: X
  //         0.35s - 0.50s: D
  //         0.50s - 0.85s: B
  //         ...
});
```
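To drive an animation from the cues, sync against the audio playback clock. The following is a minimal illustrative sketch, not part of the library API: the `dialog.mp3` URL, the `#mouth` `<img>` element, and the per-viseme `mouthImages` sprite map are all assumptions.

```js
import { analyze, loadAudio } from 'lip-sync-engine';

// Hypothetical sprite map: one image per viseme (X plus A-H).
const mouthImages = {
  X: 'x.png', A: 'a.png', B: 'b.png', C: 'c.png', D: 'd.png',
  E: 'e.png', F: 'f.png', G: 'g.png', H: 'h.png'
};

const { pcm16 } = await loadAudio('dialog.mp3');
const { mouthCues } = await analyze(pcm16, { sampleRate: 16000 });

const audio = new Audio('dialog.mp3');          // assumed playback element
const mouth = document.querySelector('#mouth'); // assumed <img> element

function render() {
  const t = audio.currentTime;
  // Linear scan is fine for short clips; use a moving index for long ones.
  const cue = mouthCues.find(c => t >= c.start && t < c.end);
  mouth.src = mouthImages[cue ? cue.value : 'X'];
  if (!audio.ended) requestAnimationFrame(render);
}

audio.play();
requestAnimationFrame(render);
```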
React

```jsx
import { useState, useEffect, useRef } from 'react';
import { LipSyncEngine, recordAudio } from 'lip-sync-engine';

function useLipSyncEngine() {
  const [result, setResult] = useState(null);
  const lipSyncEngineRef = useRef(LipSyncEngine.getInstance());

  useEffect(() => {
    lipSyncEngineRef.current.init();
    return () => lipSyncEngineRef.current.destroy();
  }, []);

  const analyze = async (pcm16, options) => {
    const result = await lipSyncEngineRef.current.analyze(pcm16, options);
    setResult(result);
  };

  return { analyze, result };
}

function MyComponent() {
  const { analyze, result } = useLipSyncEngine();

  const handleRecord = async () => {
    const { pcm16 } = await recordAudio(5000);
    await analyze(pcm16, { dialogText: "Hello world" });
  };

  return (
    <div>
      <button onClick={handleRecord}>Record & Analyze</button>
      {result && <div>Found {result.mouthCues.length} mouth cues!</div>}
    </div>
  );
}
```

See examples/react for a complete example.
Vue
```vue
<script setup lang="ts">
import { ref, onMounted, onUnmounted } from 'vue';
import { LipSyncEngine, recordAudio } from 'lip-sync-engine';

const result = ref(null);
const lipSyncEngine = LipSyncEngine.getInstance();

onMounted(() => lipSyncEngine.init());
onUnmounted(() => lipSyncEngine.destroy());

const handleRecord = async () => {
  const { pcm16 } = await recordAudio(5000);
  result.value = await lipSyncEngine.analyze(pcm16, { dialogText: "Hello world" });
};
</script>

<template>
  <div>
    <button @click="handleRecord">Record & Analyze</button>
    <div v-if="result">Found {{ result.mouthCues.length }} mouth cues!</div>
  </div>
</template>
```

See examples/vue for a complete example.
Svelte
```svelte
<script>
import { writable } from 'svelte/store';
import { LipSyncEngine, recordAudio } from 'lip-sync-engine';

const result = writable(null);
const lipSyncEngine = LipSyncEngine.getInstance();
lipSyncEngine.init();

async function handleRecord() {
  const { pcm16 } = await recordAudio(5000);
  const res = await lipSyncEngine.analyze(pcm16, { dialogText: "Hello world" });
  result.set(res);
}
</script>

<button on:click={handleRecord}>Record & Analyze</button>

{#if $result}
  <div>Found {$result.mouthCues.length} mouth cues!</div>
{/if}
```

See examples/svelte for a complete example.
Documentation
- Getting Started
- API Reference
- Streaming Analysis Guide - Real-time chunk processing
- Framework Examples: see examples/react, examples/vue, and examples/svelte
Mouth Shapes (Visemes)
LipSyncEngine.js generates 9 mouth shapes based on Preston Blair's phoneme categorization:
| Shape | Description | Example Sounds |
|-------|-------------|----------------|
| X | Closed/Rest | Silence |
| A | Open | ah, aa, aw |
| B | Lips together | p, b, m |
| C | Rounded | sh, ch, zh |
| D | Tongue-teeth | th, dh, t, d, n, l |
| E | Slightly open | eh, ae, uh |
| F | F/V sound | f, v |
| G | Open back | k, g, ng |
| H | Wide open | ee, ih, ey |
How It Works
- Speech Recognition - PocketSphinx analyzes audio to detect phonemes
- Phoneme Mapping - Phonemes are mapped to Preston Blair mouth shapes
- Timing Optimization - Animation is smoothed for natural transitions
- JSON Output - Returns timestamped mouth shape cues (see the sample below)
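Put together, the result for the Quick Start phrase looks roughly like this (timings are illustrative, copied from the example above; the metadata values are made up):

```js
const result = {
  mouthCues: [
    { start: 0.00, end: 0.35, value: "X" }, // silence before speech
    { start: 0.35, end: 0.50, value: "D" },
    { start: 0.50, end: 0.85, value: "B" }
    // ...
  ],
  metadata: { duration: 1.2, sampleRate: 16000, dialogText: "Hello world" }
};
```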
API Overview
Core Functions
```js
// Simple one-off analysis
import { analyze } from 'lip-sync-engine';
const result = await analyze(pcm16, options);

// Async analysis (non-blocking)
import { analyzeAsync } from 'lip-sync-engine';
const result = await analyzeAsync(pcm16, options);

// Using the main class
import { LipSyncEngine } from 'lip-sync-engine';
const lipSyncEngine = LipSyncEngine.getInstance();
await lipSyncEngine.init();
const result = await lipSyncEngine.analyze(pcm16, options);
```

Streaming Analysis (Real-Time)
```js
import { WorkerPool } from 'lip-sync-engine';

const pool = WorkerPool.getInstance(4);
await pool.init({ /* paths */ });
await pool.warmup(); // Pre-create workers

// Create streaming analyzer
const stream = pool.createStreamAnalyzer({
  dialogText: "Expected dialog",
  sampleRate: 16000
});

// Add chunks as they arrive from a WebSocket, MediaRecorder, etc.
for await (const chunk of audioStream) {
  stream.addChunk(chunk); // Non-blocking!
}

// Get all results in order
const results = await stream.finalize();
```

See the Streaming Analysis Guide for complete usage patterns.
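One way to feed live microphone audio into a stream analyzer is via the Web Audio API plus the float32ToInt16 helper. This is a sketch under stated assumptions: it uses the deprecated (but simple) ScriptProcessorNode rather than an AudioWorklet, assumes the browser honors the 16 kHz AudioContext sample rate, and assumes addChunk accepts Int16Array PCM.

```js
import { WorkerPool, float32ToInt16 } from 'lip-sync-engine';

const pool = WorkerPool.getInstance(4);
await pool.init({ /* paths */ });
const stream = pool.createStreamAnalyzer({ sampleRate: 16000 });

// Capture the microphone at 16 kHz.
const media = await navigator.mediaDevices.getUserMedia({ audio: true });
const ctx = new AudioContext({ sampleRate: 16000 });
const source = ctx.createMediaStreamSource(media);

// ScriptProcessorNode keeps the sketch short; an AudioWorklet is the modern route.
const processor = ctx.createScriptProcessor(4096, 1, 1);
processor.onaudioprocess = (e) => {
  const float32 = e.inputBuffer.getChannelData(0);
  stream.addChunk(float32ToInt16(float32)); // assumed: Int16Array PCM chunks
};
source.connect(processor);
processor.connect(ctx.destination);

// When recording stops:
// processor.disconnect();
// const results = await stream.finalize();
```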
Audio Utilities
```js
import {
  recordAudio,
  loadAudio,
  audioBufferToInt16,
  float32ToInt16,
  resample
} from 'lip-sync-engine';

// Record from microphone
const { pcm16, audioBuffer } = await recordAudio(5000); // 5 seconds

// Load from file or URL
const { pcm16, audioBuffer } = await loadAudio('audio.mp3');

// Convert formats
const int16 = audioBufferToInt16(audioBuffer, 16000);
const int16 = float32ToInt16(float32Array);
const resampled = resample(float32Array, 44100, 16000);
```
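For example, to analyze a file the user picks, one might hand loadAudio an object URL. This is a hypothetical pattern: the `#picker` input is assumed, and so is loadAudio's ability to fetch blob: URLs.

```js
import { loadAudio, analyze } from 'lip-sync-engine';

// Assumed markup: <input type="file" id="picker" accept="audio/*">
document.querySelector('#picker').addEventListener('change', async (e) => {
  const file = e.target.files[0];
  const url = URL.createObjectURL(file); // make the File fetchable by URL
  try {
    const { pcm16 } = await loadAudio(url);
    const result = await analyze(pcm16, { sampleRate: 16000 });
    console.log(`Found ${result.mouthCues.length} mouth cues`);
  } finally {
    URL.revokeObjectURL(url);
  }
});
```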
Types

```ts
interface MouthCue {
  start: number;  // seconds
  end: number;    // seconds
  value: string;  // X, A, B, C, D, E, F, G, or H
}

interface LipSyncEngineResult {
  mouthCues: MouthCue[];
  metadata?: {
    duration: number;
    sampleRate: number;
    dialogText?: string;
  };
}

interface LipSyncEngineOptions {
  dialogText?: string; // Improves accuracy significantly
  sampleRate?: number; // Default: 16000 (recommended)
}
```

Development
```bash
# Install dependencies
npm install

# Build WASM module
npm run build:wasm

# Build TypeScript
npm run build:ts

# Build everything
npm run build

# Type check
npm run typecheck

# Clean build artifacts
npm run clean
```

License
MIT License - see LICENSE
Credits
- Original Rhubarb Lip Sync by Daniel Wolf
- PocketSphinx for speech recognition
- Preston Blair for phoneme categorization system
Issues
Report issues at https://github.com/biolimbo/lip-sync-engine/issues
