aniface v0.4.0
Aniface
🎭 Try Live Demo | 📦 npm Package
Animate 3D avatars with real-time facial tracking using MediaPipe and Three.js
A powerful, easy-to-use library that brings your 3D avatar models to life with real-time facial expressions captured from a webcam.
Features
- 🎭 Real-time facial tracking - Powered by MediaPipe Face Landmarker
- 🎨 Automatic blendshape mapping - Works with ARKit-compatible models
- 🎯 Simple API - Get started in minutes
- 🔌 Flexible input modes - Built-in detection, custom sources, or direct data push
- 🎮 Multiple rigging systems - Support for Ready Player Me, QuickRig, and custom models
- 🔧 Highly customizable - Fine-tune every aspect of rendering and tracking
- 📦 TypeScript support - Full type definitions included
- ⚡ Performance optimized - Efficient rendering and blendshape updates
Installation
```sh
npm install aniface three @mediapipe/tasks-vision
```
Quick Start
```js
import { Aniface } from 'aniface'

// Get your video and canvas elements
const videoElement = document.getElementById('webcam')
const canvasElement = document.getElementById('avatar')

// Create the avatar
const avatar = new Aniface({
  videoElement,
  canvasElement,
  modelPath: '/path/to/your/model.glb',
  onReady: () => console.log('Avatar ready!'),
  onError: (error) => console.error('Error:', error)
})

// Initialize and start
await avatar.initialize()
avatar.start()

// When done
avatar.stop()
avatar.destroy()
```
Manual Landmark Input (Advanced)
Aniface supports two input modes:
1. Built-in MediaPipe Detection (Default)
The standard way - provide a video element and Aniface handles everything:
```js
const avatar = new Aniface({
  videoElement: document.getElementById('webcam'),
  canvasElement: document.getElementById('avatar'),
  modelPath: '/models/avatar.glb'
})

await avatar.initialize()
avatar.start() // Automatic processing loop
```
2. Custom MediaPipe Configuration
Use your own MediaPipe Face Landmarker instance with custom settings:
```js
import { FaceLandmarker, FilesetResolver } from '@mediapipe/tasks-vision'

const avatar = new Aniface({
  canvasElement: document.getElementById('avatar'),
  modelPath: '/models/avatar.glb'
  // No videoElement needed when using custom MediaPipe
})
await avatar.initialize()

// Create MediaPipe instance with custom configuration
const vision = await FilesetResolver.forVisionTasks(
  'https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm'
)
const myLandmarker = await FaceLandmarker.createFromOptions(vision, {
  baseOptions: { modelAssetPath: '/models/face_landmarker.task' }, // FaceLandmarker model file
  runningMode: 'VIDEO',
  minFaceDetectionConfidence: 0.7,
  minFacePresenceConfidence: 0.7,
  minTrackingConfidence: 0.7
  // ... other custom options
})

// Manual animation loop with custom MediaPipe
const video = document.getElementById('webcam')
function animate() {
  requestAnimationFrame(animate)
  const results = myLandmarker.detectForVideo(video, performance.now())
  avatar.processLandmarkData(results)
}
animate()
```
Use Cases for Custom MediaPipe
- Custom detection thresholds: Fine-tune confidence levels for your specific use case
- Performance optimization: Adjust settings for better FPS on lower-end devices
- Custom delegates: Use GPU acceleration or WebAssembly optimizations
- Multiple faces: Process multiple faces from the same video stream
- Data processing: Apply custom filters or transformations before rendering
- Integration: Combine with other MediaPipe tasks (hand tracking, pose detection)
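For the data-processing use case above, a filter can sit between `detectForVideo` and `processLandmarkData`. Here is a minimal sketch of exponential smoothing applied to blendshape scores to reduce frame-to-frame jitter; `smoothBlendshapes` and the `alpha` value are illustrative helpers, not part of the Aniface API:

```js
// Exponentially smooth blendshape scores between frames.
// `prev` holds the last smoothed score per blendshape name;
// alpha = 1 means no smoothing, smaller values smooth more.
function smoothBlendshapes(categories, prev, alpha = 0.5) {
  return categories.map((c) => {
    const last = prev.get(c.categoryName) ?? c.score
    const smoothed = alpha * c.score + (1 - alpha) * last
    prev.set(c.categoryName, smoothed)
    return { ...c, score: smoothed }
  })
}

// A raw blink score of 1.0 after a previous 0.0 becomes 0.5
const prevScores = new Map([['eyeBlinkLeft', 0.0]])
const smoothed = smoothBlendshapes(
  [{ categoryName: 'eyeBlinkLeft', score: 1.0 }],
  prevScores
)
console.log(smoothed[0].score) // 0.5
```

In a manual animation loop you would smooth `results.faceBlendshapes[0].categories` before handing the results to `avatar.processLandmarkData(results)`.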
Configuration Options
Basic Configuration
```ts
new Aniface({
  // Video element (optional if using processLandmarkData for manual updates)
  videoElement?: HTMLVideoElement,

  // Canvas element (always required)
  canvasElement: HTMLCanvasElement,

  // Path to GLB/GLTF model (always required)
  modelPath: string,

  // Optional callbacks
  onReady: () => void,
  onError: (error: Error) => void,
  onLandmarksDetected: (results) => void,
  onNoFaceDetected: () => void
})
```
Examples
Head-only Avatar (Raccoon avatar)
```js
const avatar = new Aniface({
  videoElement,
  canvasElement,
  modelPath: './raccoon_head_small.glb',
  cameraConfig: {
    fov: 60,
    enableControls: false,
    enableZoom: false
  },
  blendshapeMultipliers: {
    eyeBlinkLeft: 1.2,
    eyeBlinkRight: 1.2,
    jawOpen: 1.0,
    mouthSmileLeft: 1.1,
    mouthSmileRight: 1.1
  },
  modelOptions: {
    scale: 1.0
  },
  onReady: () => console.log('Avatar ready!'),
  onError: (error) => console.error('Error:', error)
})

await avatar.initialize()
avatar.start()
```
Half-body Avatar (Ready Player Me avatar)
```js
const avatar = new Aniface({
  videoElement,
  canvasElement,
  modelPath: 'https://models.readyplayer.me/[YOUR_ID].glb?morphTargets=ARKit&useHands=false',
  cameraConfig: {
    fov: 60,
    enableControls: false,
    enableZoom: false
  },
  blendshapeMultipliers: {
    eyeBlinkLeft: 1.3,
    eyeBlinkRight: 1.3,
    browInnerUp: 1.2,
    browOuterUpLeft: 1.2,
    browOuterUpRight: 1.2,
    jawOpen: 1.0,
    mouthSmileLeft: 1.0,
    mouthSmileRight: 1.0
  },
  lightingConfig: {
    ambientIntensity: 1.2,
    directionalIntensity: 1.5,
    directionalPosition: [2, 3, 3]
  },
  modelOptions: {
    scale: 1.8,
    center: true,
    autoRotate: false,
    rotation: 0,
    fullBodyAvatar: true
  },
  onReady: () => console.log('Avatar ready!'),
  onError: (error) => console.error('Error:', error)
})

await avatar.initialize()
avatar.start()
```
QuickRig / Different Rigging Systems
Different 3D models use different coordinate systems for bone rotations. Use axisMapping to specify your model's rigging system:
```js
const avatar = new Aniface({
  videoElement,
  canvasElement,
  modelPath: './quickrig_avatar.glb',
  cameraConfig: {
    fov: 60,
    position: [0, 1.65, 1.2],
    target: [0, 1.6, 0]
  },
  modelOptions: {
    scale: 1.8,
    center: true,
    autoRotate: false,
    rotation: 0,
    position: [0, -0.5, 0],
    fullBodyAvatar: true,
    axisMapping: 'quickrig' // Specify QuickRig coordinate system
  },
  blendshapeMultipliers: {
    eyeBlinkLeft: 1.3,
    eyeBlinkRight: 1.3,
    jawOpen: 1.0
  },
  onReady: () => console.log('Avatar ready!'),
  onError: (error) => console.error('Error:', error)
})

await avatar.initialize()
avatar.start()
```
Available axis mappings:
- `'standard'` (default): For Ready Player Me and most manually rigged models
- `'quickrig'`: For QuickRig rigged models
How to know which to use:
- If your avatar looks in the wrong directions when you move your head, try switching the `axisMapping`
- Check your model's bone naming (e.g., `QuickRigCharacter_Head` suggests `'quickrig'`)
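The bone-naming check above can be turned into a small heuristic you run after loading the model. This is an illustrative sketch, not an Aniface API; `pickAxisMapping` is a hypothetical helper, and you would gather the bone names by traversing the loaded GLTF scene (e.g., collecting `object.name` where `object.isBone` with Three.js):

```js
// Guess the axisMapping from a model's bone names.
// QuickRig prefixes its bones with "QuickRigCharacter_";
// anything else falls back to 'standard'. Heuristic only —
// verify visually and switch if head movement looks wrong.
function pickAxisMapping(boneNames) {
  return boneNames.some((name) => name.startsWith('QuickRigCharacter_'))
    ? 'quickrig'
    : 'standard'
}

console.log(pickAxisMapping(['QuickRigCharacter_Head', 'QuickRigCharacter_Neck'])) // quickrig
console.log(pickAxisMapping(['Head', 'Neck', 'Spine'])) // standard
```

The result can be passed straight into `modelOptions.axisMapping` when constructing the avatar.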
Advanced Configuration Examples
Blendshape Customization
Fine-tune expression intensity:
```js
const avatar = new Aniface({
  videoElement,
  canvasElement,
  modelPath: '/models/avatar.glb',
  // Adjust specific blendshapes
  blendshapeMultipliers: {
    eyeBlinkLeft: 1.2,   // Make blinks more pronounced
    eyeBlinkRight: 1.2,
    jawOpen: 0.8,        // Reduce mouth opening
    mouthSmileLeft: 1.5, // Exaggerate smiles
    mouthSmileRight: 1.5
  }
})
```
Camera & Lighting
```js
const avatar = new Aniface({
  videoElement,
  canvasElement,
  modelPath: '/models/avatar.glb',
  // Camera settings
  cameraConfig: {
    fov: 60,                  // Field of view
    position: [0, 1.65, 1.2], // Camera position [x, y, z] (default: [0, 0, 1.5])
    target: [0, 1.6, 0],      // Camera lookAt target [x, y, z] (default: [0, 0, 0])
    enableControls: true,     // Orbit controls
    enableZoom: false         // Disable zoom
  },
  // Lighting settings
  lightingConfig: {
    ambientIntensity: 0.6,        // Ambient light intensity
    directionalIntensity: 0.9,    // Main light intensity
    directionalPosition: [5, 5, 5] // Light position
  }
})
```
Model Options
Configure model positioning, scaling, and rigging system:
```js
const avatar = new Aniface({
  videoElement,
  canvasElement,
  modelPath: '/models/avatar.glb',
  modelOptions: {
    scale: 1.5,             // Uniform scale factor
    center: true,           // Center model at origin
    autoRotate: true,       // Apply automatic Y rotation
    rotation: Math.PI,      // Rotation in radians (default: 180°)
    position: [0, -0.5, 0], // Position offset [x, y, z]
    fullBodyAvatar: true,   // Enable for full-body models (vs head-only)
    axisMapping: 'standard' // Rigging system: 'standard' or 'quickrig'
  }
})
```
Axis Mapping Options:
- `'standard'`: Ready Player Me, most manual rigs (default)
- `'quickrig'`: QuickRig rigged models
💡 Tip: If your avatar moves in wrong directions, try changing axisMapping.
MediaPipe Configuration
Fine-tune facial detection and tracking:
```js
const avatar = new Aniface({
  videoElement,
  canvasElement,
  modelPath: '/models/avatar.glb',
  // Optional: Adjust facial tracking sensitivity
  landmarkConfig: {
    minDetectionConfidence: 0.5, // 0-1: Lower = more detections (default: 0.5)
    minTrackingConfidence: 0.5   // 0-1: Lower = smoother tracking (default: 0.5)
  }
})
```
Configuration tips:
- Lower confidence values = more sensitive detection but may be less stable
- Higher confidence values = more stable but may miss subtle movements
- WASM files load from a CDN automatically - no `wasmPath` needed unless self-hosting
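If you do self-host, serve the contents of `@mediapipe/tasks-vision/wasm` from your own origin and point Aniface at them. The sketch below assumes `wasmPath` sits alongside the other landmark options; the exact option location is an assumption here, so check the package's type definitions:

```js
const avatar = new Aniface({
  videoElement,
  canvasElement,
  modelPath: '/models/avatar.glb',
  landmarkConfig: {
    // Assumed option name/location — verify against the TypeScript types
    wasmPath: '/mediapipe/wasm'
  }
})
```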
Model Requirements
Your 3D model must have:
- ARKit-compatible blendshapes - Standard facial animation targets
- GLB or GLTF format - Three.js compatible
- Proper rigging - Face mesh with morph targets
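One way to sanity-check a model against these requirements is to verify that a few core ARKit blendshape names appear among its morph targets. The helper below is an illustrative sketch, not part of Aniface; in the browser you would collect the names from each mesh's `morphTargetDictionary` after loading the GLB with Three.js `GLTFLoader`:

```js
// A handful of the 52 standard ARKit blendshape names used for
// basic expressions (blinks, jaw, smile).
const REQUIRED = [
  'eyeBlinkLeft',
  'eyeBlinkRight',
  'jawOpen',
  'mouthSmileLeft',
  'mouthSmileRight'
]

// Returns the required names missing from a model's morph target list.
function missingArkitBlendshapes(morphTargetNames) {
  const have = new Set(morphTargetNames)
  return REQUIRED.filter((name) => !have.has(name))
}

console.log(missingArkitBlendshapes(['eyeBlinkLeft', 'jawOpen']))
// [ 'eyeBlinkRight', 'mouthSmileLeft', 'mouthSmileRight' ]
```

An empty result suggests the model will at least blink and smile; a non-empty one usually means the model was exported without ARKit morph targets.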
Supported Rigging Systems
Aniface supports multiple rigging conventions:
- Ready Player Me - Use default settings or `axisMapping: 'standard'`
- QuickRig - Use `axisMapping: 'quickrig'`
- Custom models - Try `'standard'` first, switch to `'quickrig'` if movements are incorrect
Common Blendshape Names
Aniface maps MediaPipe blendshapes to these standard names:
- eyeBlinkLeft, eyeBlinkRight
- jawOpen, jawLeft, jawRight
- mouthSmileLeft, mouthSmileRight
- browInnerUp, browOuterUpLeft, browOuterUpRight
- And many more... (52 total)
API Reference
Aniface Class
Methods
```ts
// Initialize the avatar system
await avatar.initialize(): Promise<void>

// Start facial tracking and animation
avatar.start(): void

// Stop the animation loop
avatar.stop(): void

// Clean up all resources
avatar.destroy(): void

// Update canvas size (call on window resize)
avatar.updateSize(width: number, height: number): void

// Check if avatar is running
avatar.running: boolean

// Check if avatar is initialized
avatar.ready: boolean
```
License
MIT - see LICENSE file for details
