
@marcosremar/cabecao

v1.3.4


Modern React 3D avatar component with real-time chat, WebSocket support, and lip-sync capabilities.

Features

  • 🎭 Real-time 3D Avatar: Powered by React Three Fiber and Three.js
  • 🎤 Voice Chat: WebSocket-based real-time audio processing
  • 👄 Lip-sync: Automatic mouth movement synchronization
  • 😊 Facial Expressions: Dynamic facial expressions based on content
  • 🎨 Customizable: Easy configuration for gaze direction, animations, and more
  • 📦 Streaming: Efficient chunk-based audio streaming
  • 🔊 VAD: Voice Activity Detection for hands-free interaction

Installation

npm install @marcosremar/cabecao

Peer Dependencies

Make sure to install the required peer dependencies:

npm install @react-three/drei @react-three/fiber @ricky0123/vad-react leva react react-dom socket.io-client three

Basic Usage

import { Cabecao } from '@marcosremar/cabecao';

function App() {
  return (
    <Cabecao 
      wsUrl="ws://localhost:4002"
      apiUrl="http://localhost:4001"
    />
  );
}

Configuration

Props

| Prop | Type | Default | Description |
|------|------|---------|-------------|
| wsUrl | string | "http://localhost:4002" | WebSocket server URL |
| apiUrl | string | "http://localhost:4001" | REST API server URL |
| r2Url | string | undefined | Cloudflare R2 URL for models |
| modelPath | string | undefined | Custom model path |
| showControls | boolean | false | Show Leva controls |
| autoStartVAD | boolean | false | Auto-start voice detection |
| showStartButton | boolean | true | Show start button |
| vadEnabled | boolean | true | Enable voice activity detection |
| gazeConfig | object | See below | Eye gaze configuration |
| style | object | {} | Custom CSS styles |
| className | string | undefined | CSS class name |

Gaze Configuration

const gazeConfig = {
  enabled: true,
  talking0: {
    rightIntensity: 0.15,  // 0-1, how much to look right
    downIntensity: 0.08    // 0-1, how much to look down
  }
};

<Cabecao gazeConfig={gazeConfig} />

Advanced Usage

Custom Model

<Cabecao 
  modelPath="/path/to/custom/model.glb"
  r2Url="https://your-r2-bucket.com"
  wsUrl="ws://your-websocket-server"
/>

With Custom Styling

<Cabecao 
  style={{
    width: '100%',
    height: '500px',
    borderRadius: '12px'
  }}
  className="my-avatar"
  showControls={true}
/>

WebSocket API

The component expects a WebSocket server that handles the following events:

Events Sent

  • chat: Audio data with sample rate
{
  audio: Float32Array,
  sampleRate: 16000
}
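The payload above uses a 16 kHz sample rate, while browser microphones typically capture at 44.1 or 48 kHz. A minimal sketch of a downsampling step (the helper name and the naive averaging approach are assumptions, not part of this package's API; production code would apply a low-pass filter first):

```javascript
// Hypothetical helper: naively downsample mic audio (e.g. 48 kHz)
// to the 16 kHz rate shown in the `chat` payload. Each group of
// `ratio` input samples is averaged into one output sample.
function downsampleTo16k(samples, inputRate, targetRate = 16000) {
  const ratio = inputRate / targetRate; // e.g. 48000 / 16000 = 3
  const outLength = Math.floor(samples.length / ratio);
  const out = new Float32Array(outLength);
  for (let i = 0; i < outLength; i++) {
    const start = Math.floor(i * ratio);
    const end = Math.floor((i + 1) * ratio);
    let sum = 0;
    for (let j = start; j < end; j++) sum += samples[j];
    out[i] = sum / (end - start);
  }
  return out;
}
```

The result could then be sent as `socket.emit('chat', { audio: downsampleTo16k(mic, 48000), sampleRate: 16000 })`.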

Events Received

  • audio-chunk: Audio response with visemes
{
  text: "Hello there!",
  audio: "data:audio/webm;codecs=opus;base64,UklGRiQF...",
  visemes: [
    { v: "X", start: 0, end: 100 },
    { v: "H", start: 100, end: 200 }
  ],
  animation: "Talking_0",
  facialExpression: "smile"
}
  • chat-error: Error handling
{
  error: "Error message"
}
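The `audio` field of `audio-chunk` is a base64 data URL. Before playback it has to be split into its MIME type and raw bytes; a minimal Node-side sketch (the helper name is an assumption; in the browser you would use `atob` and a `Blob` instead of `Buffer`):

```javascript
// Hypothetical helper: split the `audio` data URL from an
// `audio-chunk` event into its MIME type and decoded bytes.
function parseAudioDataUrl(dataUrl) {
  // Allow MIME types with parameters, e.g. "audio/webm;codecs=opus".
  const match = /^data:([^;,]+(?:;[^,]+)?);base64,(.*)$/.exec(dataUrl);
  if (!match) throw new Error('Not a base64 data URL');
  const [, mimeType, base64] = match;
  return { mimeType, bytes: Buffer.from(base64, 'base64') };
}
```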

Animations

Supported animations:

  • Idle - Default idle animation
  • Talking_0 - Primary talking animation
  • Talking_1 - Secondary talking animation
  • Talking_2 - Tertiary talking animation

Facial Expressions

Supported expressions:

  • default - Neutral expression
  • smile - Happy/positive expression
  • sad - Sad/negative expression
  • surprised - Surprised expression
  • angry - Angry expression

Visemes

The component supports standard visemes A-H and X for lip-sync:

  • A - Bilabial sounds (P, B, M)
  • B - Velar sounds (K, G)
  • C - Vowel I
  • D - Vowel A
  • E - Vowel O
  • F - Vowel U
  • G - Fricative sounds (F, V)
  • H - Dental sounds (TH, T, D)
  • X - Silence
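During playback, the component has to map the current audio time onto the `visemes` intervals from the `audio-chunk` event. A minimal sketch of that lookup (the helper name is an assumption; `start`/`end` are taken to be milliseconds, matching the example payload above):

```javascript
// Hypothetical playback helper: given the `visemes` array from an
// `audio-chunk` event and the current playback time in milliseconds,
// return the viseme to display, falling back to 'X' (silence)
// when no interval matches.
function visemeAt(visemes, timeMs) {
  for (const { v, start, end } of visemes) {
    if (timeMs >= start && timeMs < end) return v;
  }
  return 'X';
}
```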

Example Backend Integration

// Express + Socket.IO server example
const express = require('express');
const { Server } = require('socket.io');
const http = require('http');

const app = express();
const server = http.createServer(app);
const io = new Server(server);

io.on('connection', (socket) => {
  socket.on('chat', async (data) => {
    const { audio, sampleRate } = data;
    
    // Process audio, generate response
    const response = await processAudio(audio, sampleRate);
    
    // Send chunk with visemes
    socket.emit('audio-chunk', {
      text: response.text,
      audio: response.audioBase64,
      visemes: response.visemes,
      animation: response.animation,
      facialExpression: response.facialExpression
    });
  });
});

server.listen(4002);

Development

# Clone the repository
git clone https://github.com/marcosremar/cabecao-npm.git
cd cabecao-npm

# Install dependencies
npm install --legacy-peer-deps

# Build the package
npm run build

# Test locally
npm link

Contributing

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

MIT © Marcos Remar

Changelog

v1.1.0

  • ✨ Added WebSocket support for real-time communication
  • 🎯 Configurable gaze direction for better eye contact
  • 🎭 Dynamic facial expressions based on content
  • 📦 Streaming audio chunks for improved performance
  • 🔧 Enhanced configuration options

v1.0.0

  • 🎉 Initial release
  • 🎭 Basic 3D avatar with lip-sync
  • 🎤 Voice chat capabilities
  • 📱 Responsive design