livetalking-streaming-avatar v1.0.1
# LiveTalking Streaming Avatar SDK
A powerful TypeScript SDK for creating interactive AI avatars with real-time communication, compatible with HeyGen's API structure.
## Features
- 🎭 Multiple Avatar Models: Support for wav2lip, musetalk, ernerf, and ultralight
- 🔄 Real-time Communication: WebRTC-based streaming with low latency
- 🎤 Voice Chat: Bidirectional audio communication
- 🛡️ Authentication: JWT-based secure authentication
- 📱 Cross-platform: Works in browsers and Node.js environments
- 🔧 TypeScript: Full type safety and IntelliSense support
## Installation

```bash
npm install @livetalking/streaming-avatar
```

## Quick Start
```typescript
import { StreamingAvatar, AvatarQuality, StreamingEvents } from '@livetalking/streaming-avatar';

// Initialize the SDK
const avatar = new StreamingAvatar({
  serverUrl: 'http://localhost:8010' // Your LiveTalking server URL
});

// Create and start avatar session
async function startAvatar() {
  try {
    const session = await avatar.createStartAvatar({
      avatarId: 'wav2lip256_avatar1',
      quality: AvatarQuality.Medium,
      voice: {
        voiceId: 'en-US-AriaNeural',
        rate: 1.0,
        emotion: 'friendly'
      }
    });
    console.log('Avatar session created:', session.sessionId);
  } catch (error) {
    console.error('Failed to start avatar:', error);
  }
}

// Listen for events
avatar.on(StreamingEvents.STREAM_READY, (stream) => {
  // Attach video stream to video element
  const videoElement = document.getElementById('avatar-video') as HTMLVideoElement;
  videoElement.srcObject = stream;
  videoElement.play();
});

avatar.on(StreamingEvents.AVATAR_START_TALKING, () => {
  console.log('Avatar started talking');
});

avatar.on(StreamingEvents.AVATAR_STOP_TALKING, () => {
  console.log('Avatar stopped talking');
});

// Make avatar speak
async function speak(text: string) {
  try {
    await avatar.speak(text);
  } catch (error) {
    console.error('Failed to make avatar speak:', error);
  }
}

// Start voice chat
async function startVoiceChat() {
  try {
    await avatar.startVoiceChat();
    console.log('Voice chat started');
  } catch (error) {
    console.error('Failed to start voice chat:', error);
  }
}

// Initialize
startAvatar();
```

## API Reference
### StreamingAvatar Class

#### Constructor

```typescript
new StreamingAvatar(config: LiveTalkingConfig)
```

- `config.serverUrl`: Your LiveTalking server URL
- `config.token`: Optional pre-created authentication token
- `config.apiKey`: Optional API key for authentication
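As a reference for the fields above, the config shape can be sketched as a standalone interface (a hypothetical stand-in for illustration only; the SDK exports the real type as `LiveTalkingConfig`):

```typescript
// Sketch of the documented config fields. The SDK ships the real
// LiveTalkingConfig type; this stand-in is only for illustration.
interface LiveTalkingConfigSketch {
  serverUrl: string; // LiveTalking server URL (required)
  token?: string;    // optional pre-created JWT
  apiKey?: string;   // optional API key
}

// Both authentication styles are valid configs:
const withToken: LiveTalkingConfigSketch = {
  serverUrl: 'http://localhost:8010',
  token: 'your-jwt-token'
};

const withApiKey: LiveTalkingConfigSketch = {
  serverUrl: 'http://localhost:8010',
  apiKey: 'your-api-key'
};
```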
#### Methods

##### `createStartAvatar(request: StartAvatarRequest): Promise<StartAvatarResponse>`
Creates and starts a new avatar streaming session.
```typescript
const session = await avatar.createStartAvatar({
  avatarId: 'wav2lip256_avatar1',
  quality: AvatarQuality.High,
  voice: {
    voiceId: 'en-US-AriaNeural',
    rate: 1.2,
    emotion: 'excited'
  },
  stt: {
    provider: 'whisper',
    language: 'en-US'
  }
});
```

##### `speak(text: string | SpeakRequest): Promise<void>`
Makes the avatar speak the provided text.
```typescript
// Simple usage
await avatar.speak('Hello, world!');

// Advanced usage
await avatar.speak({
  text: 'Hello, world!',
  sessionId: 'custom-session-id'
});
```

##### `interrupt(): Promise<void>`
Interrupts the current avatar speech.
```typescript
await avatar.interrupt();
```

##### `startVoiceChat(): Promise<void>`
Starts bidirectional voice communication.
```typescript
await avatar.startVoiceChat();
```

##### `stopVoiceChat(): Promise<void>`
Stops voice communication.
```typescript
await avatar.stopVoiceChat();
```

##### `closeConnection(): Promise<void>`
Closes the connection and ends the session.
```typescript
await avatar.closeConnection();
```

### Events
The SDK uses EventEmitter for real-time communication:
```typescript
import { StreamingEvents } from '@livetalking/streaming-avatar';

avatar.on(StreamingEvents.STREAM_READY, (stream: MediaStream) => {
  // Video stream is ready
});

avatar.on(StreamingEvents.AVATAR_START_TALKING, () => {
  // Avatar started speaking
});

avatar.on(StreamingEvents.AVATAR_STOP_TALKING, () => {
  // Avatar stopped speaking
});

avatar.on(StreamingEvents.CONNECTION_ESTABLISHED, () => {
  // WebRTC connection established
});

avatar.on(StreamingEvents.STREAM_DISCONNECTED, () => {
  // Connection lost
});

avatar.on(StreamingEvents.ERROR, (error: Error) => {
  // Handle errors
});
```

## Configuration Options
### Avatar Quality
```typescript
enum AvatarQuality {
  Low = 'low',       // 480p, lower bandwidth
  Medium = 'medium', // 720p, balanced
  High = 'high'      // 1080p, higher bandwidth
}
```

### Voice Configuration
```typescript
interface VoiceConfig {
  voiceId: string;   // Voice identifier (e.g., 'en-US-AriaNeural')
  rate?: number;     // Speech rate (0.5 - 2.0)
  emotion?: string;  // Emotion: 'neutral', 'happy', 'sad', etc.
  language?: string; // Language code (e.g., 'en-US')
}
```

### STT Configuration
```typescript
interface STTConfig {
  provider: 'whisper' | 'deepgram' | 'azure';
  language?: string; // Language code
  model?: string;    // Model variant
}
```

## Error Handling
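Errors in this section are distinguished by matching on their message text. One option is to pull that matching into a small helper (a hypothetical helper, not part of the SDK) that can back both a try/catch path and a `StreamingEvents.ERROR` listener:

```typescript
// Hypothetical helper (not part of the SDK): buckets errors by message
// text, mirroring the try/catch example in this section.
type ErrorKind = 'authentication' | 'session' | 'unknown';

function classifyError(error: unknown): ErrorKind {
  const message = error instanceof Error ? error.message : String(error);
  if (message.includes('authentication')) return 'authentication';
  if (message.includes('session')) return 'session';
  return 'unknown';
}
```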
```typescript
try {
  await avatar.createStartAvatar({
    avatarId: 'wav2lip256_avatar1',
    quality: AvatarQuality.Medium
  });
} catch (error) {
  // In strict TypeScript the catch variable is `unknown`, so narrow it first
  const message = error instanceof Error ? error.message : String(error);
  if (message.includes('authentication')) {
    console.error('Authentication failed');
  } else if (message.includes('session')) {
    console.error('Session creation failed');
  } else {
    console.error('Unknown error:', error);
  }
}
```

## Browser Compatibility
- Chrome 80+
- Firefox 75+
- Safari 13+
- Edge 80+
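Since streaming depends on WebRTC and microphone capture, a quick capability check before starting a session can avoid confusing failures on older browsers (a sketch; the `globalThis` lookups keep it compile-safe outside the DOM):

```typescript
// Checks for the browser APIs the SDK relies on: WebRTC peer
// connections and getUserMedia for voice chat.
function isBrowserSupported(): boolean {
  const g = globalThis as any;
  return typeof g.RTCPeerConnection === 'function' &&
    typeof g.navigator?.mediaDevices?.getUserMedia === 'function';
}

if (!isBrowserSupported()) {
  console.warn('This browser lacks the WebRTC/media APIs required by the SDK.');
}
```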
## Server Requirements
This SDK requires a LiveTalking server running with the streaming API enabled. See the LiveTalking repository for server setup instructions.
## License
Apache-2.0
## Support
For issues and questions:
- GitHub Issues: https://github.com/lipku/LiveTalking/issues
- Documentation: https://github.com/lipku/LiveTalking#readme
