# muz-avatar

v1.0.39

A comprehensive React avatar component library with 2D illustrated faces, 3D interactive avatars, speaking capabilities, and multiple rendering variants. Built with Three.js for 3D avatars and the Web Speech API for voice synthesis.
## ✨ Features

- 🎨 **5 Rendering Variants** - `face` (illustrated), `initials`, `gradient`, `pixel`, `beam`
- 🎭 **3D Dynamic Avatars** - Fully interactive 3D characters with emotions and gestures
- 📐 **Gender-Specific Proportions** - Distinct, realistic facial structures, jawlines, and lip proportions based on gender
- 🗣️ **Speaking Avatars** - Text-to-speech with realistic lip synchronization
- 🎛️ **Manual Animation Controls** - Full external control over mouth (`customMouthOpen`), eyes (`customEyeBlink`), and head tilt
- 👤 **Full Customization** - Hair styles, outfits, accessories, colors, face shapes
- 🎯 **Avatar Groups** - Stacked avatars with overflow count
- 🪝 **React Hooks** - `useAvatar`, `useSpeakingAvatar`, `useAvatarChat`
- ⚛️ **Next.js Ready** - Works in the App Router out of the box
- 📱 **Responsive** - Fully responsive and accessible
- 🔧 **TypeScript** - Complete type safety
## 📦 Installation

```bash
npm install muz-avatar
# or
yarn add muz-avatar
# or
pnpm add muz-avatar
```

### Peer Dependencies

For 3D avatars, you'll also need:

```bash
npm install three @react-three/fiber @react-three/drei
```

## 🚀 Quick Start

### Basic 2D Avatar
```tsx
import { Avatar } from 'muz-avatar';

export default function App() {
  return (
    <Avatar
      variant="face"
      name="Alice Johnson"
      gender="female"
      hairColor="#8B4513"
      eyeColor="#4169E1"
      size={80}
    />
  );
}
```

### 3D Speaking Avatar
```tsx
import { DynamicAvatar, useSpeakingAvatar } from 'muz-avatar';

function SpeakingAvatarComponent() {
  const { isSpeaking, speak, stop } = useSpeakingAvatar();

  return (
    <div>
      <DynamicAvatar
        name="Alice"
        gender="female"
        hairColor="#8B4513"
        eyeColor="#4169E1"
        size={200}
        isSpeaking={isSpeaking}
        speakText={isSpeaking ? "Hello, I'm speaking!" : ""}
        onSpeechEnd={() => console.log('Speech ended')}
      />
      <button onClick={() => speak("Hello world!")}>Speak</button>
      <button onClick={stop}>Stop</button>
    </div>
  );
}
```

## 📚 Components
### `<Avatar>` (2D)

A basic 2D avatar with multiple rendering variants.

#### Props
| Prop | Type | Default | Description |
|---|---|---|---|
| name | string | '' | Used to derive initials and deterministic colors |
| src | string | — | Image URL; falls back to selected variant on error |
| size | number | 40 | Width and height in pixels |
| shape | 'circle' \| 'square' \| 'rounded' | 'circle' | Container shape |
| variant | 'face' \| 'initials' \| 'gradient' \| 'pixel' \| 'beam' | 'initials' | Rendering style |
| bgColor | string | auto | Override background color |
| textColor | string | auto | Override text/icon color |
| fontSize | number | auto | Override initials font size |
| className | string | — | Extra CSS class |
| style | React.CSSProperties | — | Extra inline styles |
| alt | string | name | Alt/aria-label text |
| onError | () => void | — | Called when src fails to load |
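The non-face variants need nothing beyond a `name`. A minimal sketch exercising the props above (the names, URL, and colors here are arbitrary placeholders, not part of the library):

```tsx
import { Avatar } from 'muz-avatar';

// The same name yields the same deterministic colors,
// so these render consistently across reloads.
export function VariantRow() {
  return (
    <div style={{ display: 'flex', gap: 8 }}>
      <Avatar variant="initials" name="Alice Johnson" size={48} />
      <Avatar variant="gradient" name="Alice Johnson" size={48} shape="rounded" />
      <Avatar variant="pixel" name="Alice Johnson" size={48} shape="square" />
      <Avatar variant="beam" name="Alice Johnson" size={48} />
      {/* With src set, the selected variant is the fallback if the image fails */}
      <Avatar
        src="https://example.com/alice.png" // hypothetical URL
        variant="initials"
        name="Alice Johnson"
        size={48}
        onError={() => console.log('image failed, falling back to initials')}
      />
    </div>
  );
}
```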
#### Face Variant Props
| Prop | Type | Default | Description |
|---|---|---|---|
| gender | 'male' \| 'female' \| 'neutral' | 'neutral' | Gender for default styling |
| hairColor | CSS color | '#4A3728' | Hair, eyebrow, and eyelash color |
| eyeColor | CSS color | '#5D7FA3' | Iris color |
| skinTone | CSS color | auto | Face, neck, and ear color |
| hairStyle | 'short' \| 'long' \| 'curly' \| 'bald' \| 'ponytail' \| 'manbun' | gender-based | Hair style |
| accessory | 'none' \| 'glasses' | 'none' | Face accessory |
| outfit | 'casual' \| 'formal' | 'formal' | Clothing style |
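Putting the face-variant props together, a minimal sketch (the name, hair color, and `skinTone` value are illustrative assumptions, not library defaults):

```tsx
import { Avatar } from 'muz-avatar';

// An illustrated face: long dark hair, glasses, casual outfit.
export function FaceExample() {
  return (
    <Avatar
      variant="face"
      name="Maya"
      gender="female"
      hairStyle="long"
      hairColor="#2C1810"
      eyeColor="#5D7FA3"
      skinTone="#E8B98A" // assumed example value; omit to use the automatic tone
      accessory="glasses"
      outfit="casual"
      size={96}
    />
  );
}
```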
### `<DynamicAvatar>` (3D)

An advanced 3D avatar with speaking capabilities and emotions.

#### Core Props
| Prop | Type | Default | Description |
|---|---|---|---|
| name | string | — | Avatar name |
| size | number | 200 | Avatar size in pixels |
| shape | 'circle' \| 'square' \| 'rounded' | 'circle' | Container shape |
#### Customization Props
| Prop | Type | Default | Description |
|---|---|---|---|
| gender | 'male' \| 'female' \| 'neutral' | 'neutral' | Gender |
| skinTone | CSS color | '#FCC89C' | Skin color |
| hairColor | CSS color | '#8B5A2B' | Hair color |
| hairStyle | 'short' \| 'long' \| 'curly' \| 'bald' \| 'ponytail' \| 'manbun' | gender-based | Hair style |
| eyeColor | CSS color | '#4169E1' | Eye color |
| eyeSize | number (0.5-1.5) | 1 | Eye size multiplier |
| faceShape | 'round' \| 'oval' \| 'square' \| 'heart' | 'round' | Face shape |
| outfit | 'casual' \| 'formal' \| 'sporty' \| 'professional' | 'casual' | Outfit style |
| outfitColor | CSS color | '#1E3A5F' | Outfit color |
| accessory | 'none' \| 'glasses' \| 'earrings' \| 'hat' \| 'headphones' | 'none' | Accessory |
| accessoryColor | CSS color | '#FFD700' | Accessory color |
#### Speaking & Animation Props
| Prop | Type | Default | Description |
|---|---|---|---|
| isSpeaking | boolean | false | Controls speaking animation |
| speakText | string | — | Text to speak when isSpeaking is true |
| voiceName | string | — | Specific speech synthesis voice |
| onSpeechEnd | () => void | — | Callback when speech ends |
#### Interactive Features
| Prop | Type | Default | Description |
|---|---|---|---|
| enableMic | boolean | true | Enable microphone interactions |
| enableBlinking | boolean | true | Enable natural eye blinking |
| enableHeadMovement | boolean | true | Enable subtle head movements |
| enableGestures | boolean | true | Enable emotional gestures |
#### Animation Controls
| Prop | Type | Default | Description |
|---|---|---|---|
| customMouthOpen | number (0-1) | — | Override mouth openness |
| customEyeBlink | number (0-1) | — | Override eye blinking |
| customHeadTilt | number | — | Override head tilt |
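These overrides let an external animation loop drive the face entirely. A sketch, assuming `customEyeBlink={0}` means eyes fully open and that the sine-wave mapping is purely illustrative:

```tsx
import React, { useEffect, useState } from 'react';
import { DynamicAvatar } from 'muz-avatar';

// Drives the mouth from a requestAnimationFrame loop,
// bypassing the built-in speaking/blinking animation.
export function PuppetAvatar() {
  const [mouthOpen, setMouthOpen] = useState(0);

  useEffect(() => {
    let frame: number;
    const tick = (t: number) => {
      // Map a ~2 Hz sine wave into the 0-1 range the prop expects.
      setMouthOpen((Math.sin(t / 80) + 1) / 2);
      frame = requestAnimationFrame(tick);
    };
    frame = requestAnimationFrame(tick);
    return () => cancelAnimationFrame(frame);
  }, []);

  return (
    <DynamicAvatar
      name="Puppet"
      size={200}
      customMouthOpen={mouthOpen}
      customEyeBlink={0}     // assumption: 0 = eyes open
      customHeadTilt={0.1}   // slight constant tilt
      enableBlinking={false} // avoid fighting the manual overrides
    />
  );
}
```

Disabling `enableBlinking` while overriding `customEyeBlink` keeps the built-in idle animation from competing with your values.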
### `<AvatarGroup>`

Stack multiple avatars with an overflow count.

```tsx
import { AvatarGroup } from 'muz-avatar';

<AvatarGroup
  avatars={[
    { variant: 'face', name: 'Alice', gender: 'female', hairColor: '#8B4513' },
    { variant: 'face', name: 'Bob', gender: 'male', hairColor: '#2C1810' },
    { name: 'Carol' },
    { name: 'David' },
    { name: 'Eve' }, // David and Eve collapse into "+2" when max={3}
  ]}
  max={3}
  size={48}
  overlap={10}
/>
```

#### Props
| Prop | Type | Default | Description |
|---|---|---|---|
| avatars | AvatarProps[] | — | Array of avatar configurations |
| max | number | 5 | Max visible avatars before overflow |
| size | number | 40 | Uniform size for all avatars |
| overlap | number | 8 | Horizontal overlap in pixels |
| shape | 'circle' \| 'square' \| 'rounded' | 'circle' | Shape for all avatars |
## 🪝 Hooks

### `useSpeakingAvatar`

Manages speaking state and text-to-speech for avatars.

```tsx
import { DynamicAvatar, useSpeakingAvatar } from 'muz-avatar';

function SpeakingComponent() {
  const { isSpeaking, mouthOpen, speak, stop, voices } = useSpeakingAvatar({
    voiceName: 'Samantha',
    rate: 1.0,
    pitch: 1.0,
    onEnd: () => console.log('Speech ended'),
  });

  return (
    <div>
      <DynamicAvatar
        isSpeaking={isSpeaking}
        customMouthOpen={mouthOpen}
        // ...other props
      />
      <button onClick={() => speak("Hello world!")}>Speak</button>
      <button onClick={stop}>Stop</button>
      <p>Status: {isSpeaking ? 'Speaking' : 'Silent'}</p>
      <p>Mouth openness: {mouthOpen.toFixed(2)}</p>
    </div>
  );
}
```

#### Options
| Option | Type | Default | Description |
|---|---|---|---|
| voiceName | string | — | Specific voice name from browser |
| rate | number | 1.0 | Speech rate (0.1-10) |
| pitch | number | 1.0 | Speech pitch (0-2) |
| onEnd | () => void | — | Callback when speech ends |
#### Returns
| Property | Type | Description |
|---|---|---|
| isSpeaking | boolean | True while speaking |
| mouthOpen | number | Mouth openness 0-1, updated at 60fps |
| speak | (text: string) => void | Start speaking text |
| stop | () => void | Stop speaking immediately |
| voices | string[] | Available voice names |
### `useAvatar`

Get deterministic avatar data for custom UI.

```tsx
import { useAvatar } from 'muz-avatar';

function ProfileCard({ name }: { name: string }) {
  const { initials, bgColor, textColor, gradient, dataURL } = useAvatar(name, {
    size: 64,
    shape: 'circle',
  });

  return (
    <div style={{ background: gradient, padding: 16, borderRadius: 12 }}>
      <img src={dataURL} alt={name} width={64} height={64} />
      <p style={{ color: textColor }}>{initials}</p>
    </div>
  );
}
```

### `useAvatarChat`
Advanced chat functionality with AI integration.

```tsx
import { DynamicAvatar, useAvatarChat } from 'muz-avatar';

function ChatComponent() {
  const { messages, sendMessage, isTyping, error } = useAvatarChat({
    apiKey: 'your-api-key',
    model: 'gpt-4',
    systemPrompt: 'You are a helpful assistant.',
  });

  return (
    <div>
      <DynamicAvatar />
      <div>
        {messages.map(msg => (
          <div key={msg.id}>{msg.content}</div>
        ))}
      </div>
      <input
        onKeyDown={(e) => {
          if (e.key === 'Enter') {
            sendMessage(e.currentTarget.value);
            e.currentTarget.value = '';
          }
        }}
      />
    </div>
  );
}
```

## 🎨 Customization Data
Access the pre-defined customization options:

```tsx
import { AvatarCustomization } from 'muz-avatar';

console.log('Hair styles:', AvatarCustomization.hairStyles);
// ['short', 'long', 'curly', 'bald', 'ponytail', 'manbun']

console.log('Outfits:', AvatarCustomization.outfits);
// ['casual', 'formal', 'sporty', 'professional']

console.log('Accessories:', AvatarCustomization.accessories);
// ['none', 'glasses', 'earrings', 'hat', 'headphones']
```

## 🎭 Examples
### Complete Speaking Avatar

```tsx
import React, { useState } from 'react';
import { DynamicAvatar, useSpeakingAvatar } from 'muz-avatar';
import 'muz-avatar/styles.css';

function InteractiveAvatar() {
  const [text, setText] = useState('Hello! How are you today?');
  const { isSpeaking, speak, stop } = useSpeakingAvatar();

  return (
    <div style={{ textAlign: 'center', padding: 20 }}>
      <DynamicAvatar
        name="Assistant"
        gender="female"
        hairColor="#8B4513"
        eyeColor="#4169E1"
        skinTone="#FCC89C"
        hairStyle="long"
        outfit="professional"
        accessory="glasses"
        size={300}
        isSpeaking={isSpeaking}
        speakText={isSpeaking ? text : ""}
        enableGestures={true}
      />
      <div style={{ marginTop: 20 }}>
        <input
          value={text}
          onChange={(e) => setText(e.target.value)}
          placeholder="Enter text to speak"
          style={{ padding: 10, marginRight: 10 }}
        />
        <button
          onClick={() => speak(text)}
          disabled={isSpeaking}
          style={{ padding: 10, marginRight: 10 }}
        >
          {isSpeaking ? 'Speaking...' : 'Speak'}
        </button>
        <button onClick={stop} style={{ padding: 10 }}>
          Stop
        </button>
      </div>
    </div>
  );
}
```

### Custom Speech & Manual Lip-Sync Control
For advanced use cases, such as integrating external AI voice streams or driving the raw Web Speech API yourself, you can take manual control of the avatar's lip-sync by passing the `customMouthOpen` prop.
```tsx
import React, { useState } from 'react';
import { DynamicAvatar } from 'muz-avatar';

function CustomSpeakingAvatar() {
  const [mouthOpen, setMouthOpen] = useState(0);
  const [isSpeaking, setIsSpeaking] = useState(false);

  const handleSpeak = () => {
    setIsSpeaking(true);
    const text = "This is a demonstration of manual lip-sync control using the customMouthOpen prop.";
    const utterance = new SpeechSynthesisUtterance(text);
    utterance.onend = () => {
      setIsSpeaking(false);
      setMouthOpen(0);
    };

    // Manual mouth animation loop: randomize mouth openness to simulate
    // speech, and stop once the synthesizer goes quiet.
    const animateMouth = () => {
      const interval = setInterval(() => {
        if (!window.speechSynthesis.speaking) {
          clearInterval(interval);
          setMouthOpen(0);
          return;
        }
        setMouthOpen(Math.random() * 0.8 + 0.2);
      }, 100);
    };

    window.speechSynthesis.speak(utterance);
    animateMouth();
  };

  return (
    <div style={{ textAlign: 'center' }}>
      <DynamicAvatar
        name="CustomSync"
        gender="male"
        size={250}
        customMouthOpen={mouthOpen}
        isSpeaking={isSpeaking}
      />
      <button onClick={handleSpeak} disabled={isSpeaking}>
        {isSpeaking ? 'Speaking...' : 'Speak Manually'}
      </button>
    </div>
  );
}
```

### Avatar Customization Panel
```tsx
import React, { useState } from 'react';
import { DynamicAvatar, AvatarCustomization } from 'muz-avatar';

function CustomizationPanel() {
  const [config, setConfig] = useState({
    hairStyle: 'long',
    outfit: 'casual',
    accessory: 'none',
  });

  return (
    <div>
      <DynamicAvatar
        name="Custom Avatar"
        gender="female"
        {...config}
        size={200}
      />
      <div>
        <label>Hair Style:</label>
        <select
          value={config.hairStyle}
          onChange={(e) => setConfig({ ...config, hairStyle: e.target.value })}
        >
          {AvatarCustomization.hairStyles.map(style => (
            <option key={style} value={style}>{style}</option>
          ))}
        </select>
        <label>Outfit:</label>
        <select
          value={config.outfit}
          onChange={(e) => setConfig({ ...config, outfit: e.target.value })}
        >
          {AvatarCustomization.outfits.map(outfit => (
            <option key={outfit} value={outfit}>{outfit}</option>
          ))}
        </select>
        {/* More controls... */}
      </div>
    </div>
  );
}
```

## 🌐 Server-Side Utilities
Import from `muz-avatar/utils` for pure functions with no React dependency:

```ts
import {
  generateAvatarSVG,
  generateAvatarDataURL,
  getAvatarColors,
  getInitials,
  generateGradient,
} from 'muz-avatar/utils';

const svg = generateAvatarSVG('Jane Doe', 64, 'circle');
const dataURL = generateAvatarDataURL('Jane Doe', 64, 'rounded');
const { bg, text } = getAvatarColors('Jane Doe');
const initials = getInitials('Jane Doe'); // "JD"
```

## 📱 Next.js App Router
All components include the `"use client"` directive, so they can be rendered directly from Server Components:

```tsx
// app/page.tsx - Server Component
import { Avatar, DynamicAvatar } from 'muz-avatar';

export default function Page() {
  return (
    <div>
      <Avatar name="Alice" variant="face" gender="female" />
      <DynamicAvatar name="Bob" gender="male" size={150} />
    </div>
  );
}
```

To call the hooks, use a Client Component:

```tsx
'use client';
import { useSpeakingAvatar } from 'muz-avatar';
```

## 🎨 CSS Imports

For optimal styling, import the stylesheet once in your app:

```tsx
import "muz-avatar/styles.css";
```

## 📋 TypeScript
All types are fully exported:

```ts
import type {
  AvatarProps,
  AvatarGroupProps,
  DynamicAvatarProps,
  AvatarShape,
  AvatarVariant,
  AvatarGender,
  AvatarHairStyle,
  AvatarAccessory,
  AvatarOutfit,
  AvatarMood,
  AvatarEmotion,
  AvatarFaceShape,
  UseAvatarReturn,
  UseSpeakingAvatarOptions,
  UseSpeakingAvatarReturn,
  UseAvatarChatOptions,
  UseAvatarChatReturn,
} from 'muz-avatar';
```

## 🔧 Browser Support
- **Modern browsers**: Chrome 88+, Firefox 85+, Safari 14+, Edge 88+
- **Web Speech API**: required for speaking features
- **WebGL**: required for 3D avatars
- **React 17+**: required for all components
## 📄 License
MIT
## 🚀 Getting Started Checklist

- Install the package: `npm install muz-avatar`
- Install the peer dependencies (for 3D): `npm install three @react-three/fiber @react-three/drei`
- Import the CSS: `import "muz-avatar/styles.css"`
- Start with a basic `<Avatar>`, or jump straight to `<DynamicAvatar>`
- Add speaking with the `useSpeakingAvatar` hook
- Customize with the `AvatarCustomization` data

Ready to build amazing avatar experiences!
