@sage-rsc/talking-head-react
A powerful React component library for creating interactive 3D talking avatars with realistic lip-sync, multiple text-to-speech services, and curriculum-based learning capabilities.
✨ Features
- 🎭 3D Avatar Rendering - Support for GLB/GLTF avatar models with full body or head-only modes
- 🎤 Real-time Lip-Sync - Automatic lip synchronization with audio using viseme-based animation
- 🔊 Multiple TTS Services - Edge TTS, ElevenLabs, Deepgram, Google Cloud, Azure, and Browser TTS
- 📚 Curriculum Learning - Built-in curriculum system with lessons, questions, and code examples
- 🎬 Animation Support - FBX animation support and code-based body movements
- ⏯️ Playback Control - Pause, resume, and stop speech functionality
- 🎯 Interactive Mode - Manual control over curriculum progression
- 💻 Code IDE Integration - Simulate code typing and execution in an IDE
- 🎨 Zero UI - Pure components, no built-in UI elements - full control over styling
📦 Installation
```bash
npm install @sage-rsc/talking-head-react
```

Peer Dependencies
```bash
npm install react react-dom three
```

Requirements:
- React 18.0.0+ or 19.0.0+
- React DOM 18.0.0+ or 19.0.0+
- Three.js 0.150.0+
🚀 Quick Start
Simple Talking Avatar
The simplest way to get started - just pass text and the avatar speaks:
```jsx
import React, { useRef } from 'react';
import { SimpleTalkingAvatar } from '@sage-rsc/talking-head-react';
function App() {
const avatarRef = useRef(null);
const handleSpeak = () => {
avatarRef.current?.speakText("Hello! I'm a talking avatar.");
};
return (
<div style={{ width: '100vw', height: '100vh' }}>
<SimpleTalkingAvatar
ref={avatarRef}
avatarUrl="/avatars/brunette.glb"
avatarBody="F"
mood="happy"
showFullAvatar={false}
onReady={() => console.log('Avatar ready!')}
/>
<button onClick={handleSpeak}>Speak</button>
</div>
);
}
```

Full-Featured Avatar
For advanced control with animations and custom configurations:
```jsx
import React, { useRef } from 'react';
import { TalkingHeadAvatar } from '@sage-rsc/talking-head-react';
function App() {
const avatarRef = useRef(null);
const animations = {
teaching: "/animations/Arguing.fbx",
correct: "/animations/Happy.fbx",
incorrect: "/animations/Disappointed.fbx"
};
return (
<div style={{ width: '100vw', height: '100vh' }}>
<TalkingHeadAvatar
ref={avatarRef}
avatarUrl="/avatars/brunette.glb"
avatarBody="F"
mood="happy"
ttsService="elevenlabs"
ttsApiKey="your-api-key"
ttsVoice="21m00Tcm4TlvDq8ikWAM"
showFullAvatar={false}
bodyMovement="gesturing"
animations={animations}
onReady={() => {
avatarRef.current?.speakText("Welcome!");
}}
/>
</div>
);
}
```

Curriculum Learning
Complete learning system with lessons, questions, and automatic progression:
```jsx
import React, { useRef } from 'react';
import { CurriculumLearning } from '@sage-rsc/talking-head-react';
function App() {
const curriculumData = {
curriculum: {
title: "Introduction to Programming",
description: "Learn the basics of programming",
language: "en",
modules: [
{
title: "Module 1: Variables",
lessons: [
{
title: "JavaScript Variables",
avatar_script: "Let's learn about variables.",
body: "Variables store data values in your program.",
questions: [
{
type: "multiple_choice",
question: "Which keyword creates a constant variable?",
options: ["let", "const", "var"],
answer: "const",
explanation: "The 'const' keyword creates a constant variable."
}
]
}
]
}
]
}
};
const avatarConfig = {
avatarUrl: "/avatars/brunette.glb",
avatarBody: "F",
mood: "happy",
showFullAvatar: false
};
return (
<div style={{ width: '100vw', height: '100vh' }}>
<CurriculumLearning
curriculumData={curriculumData}
avatarConfig={avatarConfig}
autoStart={true}
onLessonComplete={(data) => {
console.log('Lesson completed:', data);
}}
/>
</div>
);
}
```

📖 Components
SimpleTalkingAvatar
A lightweight component for simple text-to-speech scenarios. Perfect when you just need an avatar to speak text.
Props:
| Prop | Type | Default | Description |
|------|------|---------|-------------|
| text | string | null | Text to speak (optional, can use speakText method) |
| avatarUrl | string | "/avatars/brunette.glb" | URL/path to GLB avatar file |
| avatarBody | string | "F" | Avatar body type ('M' or 'F') |
| mood | string | "neutral" | Initial mood ('happy', 'sad', 'neutral', 'excited') |
| ttsLang | string | "en" | Text-to-speech language code |
| ttsService | string | null | TTS service ('edge', 'elevenlabs', 'deepgram', 'google', 'azure', 'browser') |
| ttsVoice | string | null | TTS voice ID |
| ttsApiKey | string | null | TTS API key (ElevenLabs, Deepgram, Google, or Azure) |
| bodyMovement | string | "idle" | Body movement type ('idle', 'gesturing', 'dancing') |
| movementIntensity | number | 0.5 | Movement intensity (0-1) |
| showFullAvatar | boolean | false | Whether to show full body avatar |
| cameraView | string | "upper" | Camera view ('upper', 'full') |
| autoSpeak | boolean | false | Automatically speak text prop when ready |
| onReady | function | () => {} | Callback when avatar is ready |
| onError | function | () => {} | Callback for errors |
| onSpeechEnd | function | () => {} | Callback when speech ends |
| className | string | "" | Additional CSS classes |
| style | object | {} | Additional inline styles |
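For example, a minimal sketch combining several of these props; the avatar path is a placeholder, and autoSpeak makes the component speak the text prop as soon as it is ready:

```jsx
// Sketch only: prop values are illustrative.
<SimpleTalkingAvatar
  text="Welcome to the course!"
  autoSpeak={true}
  avatarUrl="/avatars/brunette.glb"
  mood="excited"
  cameraView="full"
  ttsService="edge"
  ttsLang="en"
  onSpeechEnd={() => console.log('Finished speaking')}
/>
```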
Ref Methods:
| Method | Parameters | Description |
|--------|------------|-------------|
| speakText(text, options) | text: string, options: object | Make avatar speak text |
| pauseSpeaking() | - | Pause current speech |
| resumeSpeaking() | - | Resume paused speech |
| stopSpeaking() | - | Stop all speech |
| resumeAudioContext() | - | Resume audio context (for user interaction) |
| isPaused() | - | Check if currently paused |
| setMood(mood) | mood: string | Change avatar mood |
| setBodyMovement(movement) | movement: string | Change body movement |
| playAnimation(name) | name: string | Play FBX animation |
| playReaction(type) | type: string | Play reaction animation |
| playCelebration() | - | Play celebration animation |
| setShowFullAvatar(show) | show: boolean | Toggle full body mode |
| isReady | - | Boolean indicating if avatar is ready |
Example:
```javascript
const avatarRef = useRef(null);
// Speak text
avatarRef.current?.speakText("Hello world!", {
lipsyncLang: 'en',
onSpeechEnd: () => console.log('Done speaking')
});
// Pause/Resume
avatarRef.current?.pauseSpeaking();
avatarRef.current?.resumeSpeaking();
// Change mood
avatarRef.current?.setMood("happy");
```

TalkingHeadAvatar
Full-featured avatar component with advanced controls, animations, and TTS configuration.
Props: Same as SimpleTalkingAvatar plus:
| Prop | Type | Default | Description |
|------|------|---------|-------------|
| animations | object | {} | Object mapping animation names to FBX file paths |
| onLoading | function | () => {} | Callback for loading progress |
Ref Methods: Same as SimpleTalkingAvatar plus:
| Method | Parameters | Description |
|--------|------------|-------------|
| setTimingAdjustment(rate) | rate: number | Adjust animation timing (e.g., 1.05 for 5% slower) |
| setMovementIntensity(intensity) | intensity: number | Set movement intensity (0-1) |
| playRandomDance() | - | Play random dance animation |
| lockAvatarPosition() | - | Lock avatar position |
| unlockAvatarPosition() | - | Unlock avatar position |
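A brief sketch of these additional methods, assuming avatarRef is attached to a mounted TalkingHeadAvatar (values are illustrative):

```javascript
// Sketch only: adjust timing and movement, then trigger a dance.
avatarRef.current?.setTimingAdjustment(1.05);  // slow animation timing by ~5%
avatarRef.current?.setMovementIntensity(0.8);  // movement intensity in the 0-1 range
avatarRef.current?.playRandomDance();          // play a random dance animation
avatarRef.current?.lockAvatarPosition();       // keep the avatar anchored in place
```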
CurriculumLearning
Complete learning system with curriculum management, questions, and code examples.
Props:
| Prop | Type | Default | Description |
|------|------|---------|-------------|
| curriculumData | object | null | Curriculum data object (see structure below) |
| avatarConfig | object | {} | Avatar configuration (same as SimpleTalkingAvatar props) |
| animations | object | {} | Animation files mapping |
| autoStart | boolean | false | Automatically start teaching when ready |
| onLessonStart | function | () => {} | Callback when lesson starts |
| onLessonComplete | function | () => {} | Callback when lesson completes |
| onQuestionAnswer | function | () => {} | Callback when question is answered |
| onCurriculumComplete | function | () => {} | Callback when curriculum completes |
| onCustomAction | function | () => {} | Callback for custom actions (interactive mode) |
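For example, the progress callbacks might be wired up like this (a sketch; the exact callback payloads are not documented here, so the handlers simply log whatever they receive):

```jsx
<CurriculumLearning
  curriculumData={curriculumData}
  avatarConfig={avatarConfig}
  onLessonStart={(data) => console.log('Lesson started:', data)}
  onQuestionAnswer={(data) => console.log('Question answered:', data)}
  onCurriculumComplete={(data) => console.log('Curriculum complete:', data)}
/>
```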
Ref Methods:
| Method | Parameters | Description |
|--------|------------|-------------|
| startTeaching() | - | Start teaching current lesson |
| startQuestions() | - | Start asking questions |
| handleAnswerSelect(answer) | answer: string/boolean | Submit answer to current question |
| nextQuestion() | - | Move to next question |
| previousQuestion() | - | Move to previous question |
| nextLesson() | - | Move to next lesson |
| previousLesson() | - | Move to previous lesson |
| completeLesson() | - | Complete current lesson |
| completeCurriculum() | - | Complete entire curriculum |
| resetCurriculum() | - | Reset curriculum to beginning |
| getState() | - | Get current curriculum state |
| pauseSpeaking() | - | Pause avatar speech |
| resumeSpeaking() | - | Resume avatar speech |
| isPaused() | - | Check if paused |
| speakText(text, options) | text: string, options: object | Make avatar speak text |
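A minimal sketch of driving the component through its ref, assuming curriculumRef is attached to a mounted CurriculumLearning:

```javascript
// Sketch only: submit an answer to the current question, then inspect progress.
curriculumRef.current?.handleAnswerSelect("const");
const state = curriculumRef.current?.getState();
console.log('Current curriculum state:', state);
```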
Curriculum Data Structure:
```javascript
{
curriculum: {
title: "Course Title",
description: "Course description",
language: "en",
modules: [
{
title: "Module Title",
lessons: [
{
title: "Lesson Title",
avatar_script: "What the avatar will say",
body: "Lesson content text",
code_example: { // Optional
code: "console.log('Hello');",
language: "javascript",
description: "Code example description",
autoRun: true,
typingSpeed: 50
},
questions: [ // Optional
{
type: "multiple_choice", // or "true_false" or "code_test"
question: "Question text?",
options: ["Option 1", "Option 2", "Option 3"], // For multiple_choice
answer: "Option 1", // or true/false for true_false
explanation: "Why this answer is correct"
}
]
}
]
}
]
}
}
```

Interactive Mode:
For manual control over curriculum progression:
```jsx
const curriculumRef = useRef(null);
// Handle custom actions
const handleCustomAction = (action) => {
switch (action.type) {
case 'teachingComplete':
// Teaching finished, enable "Start Questions" button
break;
case 'questionStart':
// Question started, show question UI
break;
case 'answerFeedbackComplete':
// Answer feedback finished, enable "Next Question" button
break;
case 'allQuestionsComplete':
// All questions done, enable "Complete Lesson" button
break;
case 'lessonCompleteFeedbackDone':
// Lesson completion feedback done, enable "Next Lesson" button
break;
case 'codeExampleReady':
// Code example ready, trigger IDE typing animation
break;
}
};
<CurriculumLearning
ref={curriculumRef}
curriculumData={curriculumData}
avatarConfig={avatarConfig}
autoStart={false} // Manual control
onCustomAction={handleCustomAction}
/>
// Control progression manually
curriculumRef.current?.startTeaching();
curriculumRef.current?.startQuestions();
curriculumRef.current?.nextQuestion();
curriculumRef.current?.completeLesson();
```

🎤 Text-to-Speech Services
Edge TTS (Default)
Free, no API key required:
```jsx
<SimpleTalkingAvatar
ttsService="edge"
ttsVoice="en-US-AriaNeural"
/>
```

ElevenLabs
High-quality voices, requires API key:
```jsx
<SimpleTalkingAvatar
ttsService="elevenlabs"
ttsApiKey="your-api-key"
ttsVoice="21m00Tcm4TlvDq8ikWAM"
/>
```

Deepgram
Fast, high-quality TTS, requires API key:
```jsx
<SimpleTalkingAvatar
ttsService="deepgram"
ttsApiKey="your-api-key"
ttsVoice="aura-asteria-en" // Options: aura-thalia-en, aura-asteria-en, aura-orion-en, etc.
/>
```

Browser TTS
Uses browser's built-in speech synthesis:
```jsx
<SimpleTalkingAvatar
ttsService="browser"
/>
```

Google Cloud / Azure
```jsx
<SimpleTalkingAvatar
ttsService="google" // or "azure"
ttsApiKey="your-api-key"
ttsVoice="en-US-Wavenet-D"
/>
```

🎬 Animations
FBX Animations
Provide animation mappings via the animations prop:
```jsx
const animations = {
teaching: "/animations/Arguing.fbx",
correct: "/animations/Happy.fbx",
incorrect: "/animations/Disappointed.fbx",
lessonComplete: "/animations/Step.fbx"
};
<TalkingHeadAvatar animations={animations} />
```

Code-Based Animations
Built-in body movements:
- `idle` - Idle animation
- `gesturing` - Teaching gestures
- `dancing` - Dance animation
- `happy` - Happy mood animation
- `sad` - Sad mood animation
```javascript
avatarRef.current?.setBodyMovement("gesturing");
avatarRef.current?.playReaction("happy");
avatarRef.current?.playCelebration();
```

📚 Question Types
Multiple Choice
```javascript
{
type: "multiple_choice",
question: "What is a variable?",
options: ["A container", "A function", "A loop"],
answer: "A container",
explanation: "A variable is a container that stores data."
}
```

True/False
```javascript
{
type: "true_false",
question: "JavaScript is a compiled language.",
answer: false,
explanation: "JavaScript is an interpreted language."
}
```

Code Test
```javascript
{
type: "code_test",
question: "Write a function that adds two numbers.",
testCriteria: {
type: "function",
functionName: "add",
testCases: [
{ input: [2, 3], expectedOutput: 5 },
{ input: [10, 20], expectedOutput: 30 }
]
},
explanation: "The function should return the sum of two numbers."
}
```

💻 Code Examples
Include code examples in lessons for IDE integration:
```javascript
{
title: "JavaScript Variables",
avatar_script: "Let's learn about variables.",
body: "Variables store data values.",
code_example: {
code: "let name = 'Alice';\nconst age = 25;\nconsole.log(name);",
language: "javascript", // "javascript", "python", "java", "html"
description: "Basic variable declarations",
autoRun: true, // Automatically run after typing
typingSpeed: 50 // Characters per second
}
}
```

Listen for the codeExampleReady event in interactive mode:
```javascript
const handleCustomAction = (action) => {
if (action.type === 'codeExampleReady') {
// Trigger IDE typing animation
handleCodeExample(action.codeExample);
}
};
```

🎯 Use Cases
- Educational Platforms - Interactive learning with curriculum management
- Virtual Assistants - Conversational avatars with TTS
- Code Tutorials - Step-by-step coding lessons with IDE integration
- Training Simulations - Interactive training with questions and feedback
- Presentation Tools - Animated presentations with talking avatars
🔧 Configuration
TTS Configuration
Configure default TTS service in your app:
```javascript
import { getActiveTTSConfig } from '@sage-rsc/talking-head-react';
const config = getActiveTTSConfig();
// Returns current TTS configuration
```

Avatar Configuration
Common avatar settings:
```javascript
const avatarConfig = {
avatarUrl: "/avatars/brunette.glb",
avatarBody: "F", // "M" or "F"
mood: "happy",
ttsService: "elevenlabs",
ttsApiKey: "your-key",
ttsVoice: "voice-id",
showFullAvatar: false, // false = head only, true = full body
bodyMovement: "gesturing",
movementIntensity: 0.7
};
```

📝 Examples
Check the example-*.jsx files in the repository for complete examples:
- `example-simple-avatar.jsx` - Simple text-to-speech usage
- `example-interactive-mode.jsx` - Manual curriculum control
- `example-with-code-ide.jsx` - Code IDE integration
- `example-with-api-key.jsx` - TTS service configuration
🐛 Troubleshooting
Audio Not Playing
If you see a "Web Audio API suspended" error:
```javascript
// The component automatically resumes audio context on user interaction
// But you can also manually resume:
avatarRef.current?.resumeAudioContext();
```

Avatar Not Loading
- Ensure the avatar file path is correct
- Check the browser console for loading errors (or use the onError callback, as sketched below)
- Verify GLB file is valid
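To catch load failures in code rather than only in the console, the documented onError callback can be used (a minimal sketch):

```jsx
// Sketch only: log any avatar loading error passed to onError.
<SimpleTalkingAvatar
  avatarUrl="/avatars/brunette.glb"
  onError={(err) => console.error('Avatar failed to load:', err)}
/>
```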
Lip-Sync Not Working
- Ensure the avatar model has viseme morph targets
- Check that lipsyncLang matches your TTS language (see the snippet below)
- Verify the TTS service is working correctly
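The lip-sync language can be set per call through the speakText options, as shown in the ref-method example above (a minimal sketch):

```javascript
// Sketch only: keep lipsyncLang in sync with the language your TTS service speaks.
avatarRef.current?.speakText("Hello there!", { lipsyncLang: 'en' });
```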
📄 License
MIT
🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
📞 Support
For issues, questions, or feature requests, please open an issue on GitHub.
