
@meetcore/expression

v1.0.0

Published

Face Expression Analysis Engine - TypeScript module for React


Face Expression Analysis Engine - TypeScript Module

TypeScript module for face expression analysis using MediaPipe FaceLandmarker. Converted from Python for use with React and browser environments.

Features

  • Action Unit (AU) Detection: Detects 21+ facial action units from MediaPipe blendshapes
  • Gaze Tracking: Calculates attention vector combining head rotation and iris position
  • Voice Activity Detection: Visual detection of speech using mouth geometry
  • Baseline Management: Dynamic baseline calculation for individual users
  • Temporal Analysis: Classifies expressions as micro or macro
  • Scoring Engine: Calculates emotional dimensions (engagement, approval, resistance, etc.)
  • Smooth Processing: EMA filtering to eliminate flicker and noise (see the sketch after this list)
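
The EMA (exponential moving average) smoothing mentioned above blends each new measurement with the running average, so single-frame spikes are damped. A minimal sketch of the idea; the createEmaFilter helper and the alpha value are illustrative, not part of the package API:

// Illustrative EMA smoother, not the engine's internal implementation
function createEmaFilter(alpha: number) {
  let previous: number | null = null;
  return (value: number): number => {
    previous = previous === null ? value : alpha * value + (1 - alpha) * previous;
    return previous;
  };
}

const smoothAu12 = createEmaFilter(0.3);
console.log(smoothAu12(0.8)); // 0.8 (first sample passes through)
console.log(smoothAu12(0.2)); // 0.62 (the sudden dip is damped)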

Installation

In Your React Project

npm install @meetcore/expression @mediapipe/tasks-vision
# or
yarn add @meetcore/expression @mediapipe/tasks-vision

Download the MediaPipe Model

Download the face_landmarker.task file from:

  • https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task

Place it at public/models/face_landmarker.task in your React project.

What You Need to Provide

The module needs one main thing:

  1. MediaPipe FaceLandmarker: an initialized MediaPipe instance, passed as a parameter

Usage

Basic Setup

import { createFaceExpressionEngine } from '@meetcore/expression';
import { FaceLandmarker, FilesetResolver } from '@mediapipe/tasks-vision';
import facsConfig from './config/FACS_IA_decision_ready_v1.json';

// 1. Initialize the MediaPipe FaceLandmarker (REQUIRED)
const filesetResolver = await FilesetResolver.forVisionTasks(
  'https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm'
);

const faceLandmarker = await FaceLandmarker.createFromOptions(
  filesetResolver,
  {
    baseOptions: {
      modelAssetPath: '/models/face_landmarker.task', // Path to the model
    },
    outputFaceBlendshapes: true, // CRITICAL: must be true
    runningMode: 'VIDEO', // Video mode for continuous processing
    numFaces: 1,
  }
);

// 2. Create the engine (the MediaPipe instance is passed as a parameter)
const engine = createFaceExpressionEngine(
  {
    facsConfig: facsConfig, // REQUIRED: JSON config with the FACS rules
    windowSeconds: 4.0, // Optional: defaults to 4.0
    fps: 30, // Optional: defaults to 30
    // thresholdsConfig is optional; defaults are used if omitted
  },
  faceLandmarker // REQUIRED: the MediaPipe FaceLandmarker goes here
);

// 3. Set up callbacks
engine.onResult((result) => {
  console.log('Frame result:', result.aus, result.meta);
});

engine.onDecision((decision) => {
  console.log('Decision:', decision.dominant_dimension, decision.dominant_value);
});

// 4. Process frames (processFrame is async)
const video = document.getElementById('video') as HTMLVideoElement;
await engine.processFrame(video);

React Hook Example

import { useEffect, useRef, useState } from 'react';
import { createFaceExpressionEngine, type FrameResult } from '@meetcore/expression';
import { FaceLandmarker, FilesetResolver } from '@mediapipe/tasks-vision';

export function useFaceExpression() {
  const [result, setResult] = useState<FrameResult | null>(null);
  const engineRef = useRef<ReturnType<typeof createFaceExpressionEngine> | null>(null);
  const videoRef = useRef<HTMLVideoElement | null>(null);

  useEffect(() => {
    async function init() {
      // Load FACS config
      const facsConfig = await fetch('/config/FACS_IA_decision_ready_v1.json')
        .then(r => r.json());

      // Initialize MediaPipe
      const filesetResolver = await FilesetResolver.forVisionTasks(
        'https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm'
      );

      const faceLandmarker = await FaceLandmarker.createFromOptions(
        filesetResolver,
        {
          baseOptions: {
            modelAssetPath: '/models/face_landmarker.task',
          },
          outputFaceBlendshapes: true,
          runningMode: 'VIDEO',
          numFaces: 1,
        }
      );

      // Create engine
      const engine = createFaceExpressionEngine(
        {
          facsConfig,
          windowSeconds: 4.0,
          fps: 30,
        },
        faceLandmarker
      );

      engine.onResult(setResult);

      engineRef.current = engine;

      // Start video processing
      if (videoRef.current) {
        const video = videoRef.current;
        video.addEventListener('loadeddata', () => {
          processVideo(video, engine);
        });
      }
    }

    init();
  }, []);

  function processVideo(video: HTMLVideoElement, engine: ReturnType<typeof createFaceExpressionEngine>) {
    function frame() {
      if (video.readyState === video.HAVE_ENOUGH_DATA) {
        void engine.processFrame(video); // async; results arrive via onResult, so fire-and-forget is fine
      }
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);
  }

  return { result, videoRef, engineRef };
}
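
A sketch of how the hook might be consumed in a component. The getUserMedia wiring is illustrative; the hook itself only exposes result, videoRef, and engineRef:

import { useEffect } from 'react';

export function FaceExpressionDemo() {
  const { result, videoRef } = useFaceExpression();

  useEffect(() => {
    // Attach the user's camera to the video element the hook processes
    navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
      if (videoRef.current) {
        videoRef.current.srcObject = stream;
      }
    });
  }, [videoRef]);

  return (
    <div>
      <video ref={videoRef} autoPlay muted playsInline />
      <pre>{result ? JSON.stringify(result.aus, null, 2) : 'Warming up...'}</pre>
    </div>
  );
}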

Configuration

Thresholds Config

import { defaultThresholdsConfig } from '@meetcore/expression';

// Customize config
const customConfig = {
  ...defaultThresholdsConfig,
  baseline: {
    window_size: 120,
    warmup_frames: 60,
  },
};
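
The customized thresholds can then be handed to the engine through the optional thresholdsConfig option; the field name is taken from the setup comments above, so treat this as a sketch rather than a verified signature:

const engine = createFaceExpressionEngine(
  {
    facsConfig,
    thresholdsConfig: customConfig, // optional; defaults apply when omitted
  },
  faceLandmarker
);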

FACS Config

Load the FACS configuration JSON file:

import facsConfig from './config/FACS_IA_decision_ready_v1.json';

API Reference

createFaceExpressionEngine

Main orchestration function. Returns an engine object with methods.

Usage

const engine = createFaceExpressionEngine(options, faceLandmarker);

Methods

  • processFrame(frame: HTMLVideoElement | HTMLImageElement | HTMLCanvasElement | ImageData): Process a single frame
  • calibrate(currentAus: ActionUnits): Calibrate neutral face baseline
  • resetCalibration(): Reset calibration
  • getCurrentDecision(): Get current decision result
  • getTimeSeriesAnalyzer(): Get time series analyzer instance
  • clearBuffer(): Clear window buffer
  • onResult(callback: (result: FrameResult) => void): Set frame result callback
  • onDecision(callback: (decision: ExpressionResult) => void): Set decision update callback
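
For example, a calibration flow might capture the AUs from a frame where the user holds a neutral expression and pass them to calibrate. A sketch based on the signatures above, assuming ActionUnits is exported alongside the engine factory:

import { type ActionUnits } from '@meetcore/expression';

// Keep the most recent AUs around so a UI action can trigger calibration
let latestAus: ActionUnits | null = null;
engine.onResult((result) => {
  latestAus = result.aus;
});

function calibrateNeutralFace() {
  if (latestAus) {
    engine.calibrate(latestAus); // use the current frame as the neutral baseline
  }
}

function restartSession() {
  engine.resetCalibration(); // discard the neutral baseline
  engine.clearBuffer();      // drop the temporal window
}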

FrameResult

interface FrameResult {
  aus: ActionUnits;           // Detected action units
  meta: MetaSignals;          // Gaze, head pose, voice activity
  rotPenalty?: number;        // Rotation penalty (0-1)
  isStable: boolean;          // Whether baseline is stable
  currentDecision?: ExpressionResult; // Current emotional dimension decision
}
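
Because the baseline is computed per user, a consumer will typically gate on isStable (and optionally rotPenalty) before trusting per-frame values; the 0.5 cutoff below is illustrative:

engine.onResult((result) => {
  if (!result.isStable) return; // baseline still warming up
  if ((result.rotPenalty ?? 0) > 0.5) return; // head rotated too far (illustrative cutoff)
  console.log('Usable frame:', result.aus, result.meta);
});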

ExpressionResult

interface ExpressionResult {
  dominant_dimension: string;  // e.g., "engagement", "approval", "resistance"
  dominant_value: number;      // -100 to +100
  active_combos: string[][];   // Triggered combo rules (array of arrays of tags)
  scores: Record<string, number>; // All dimension scores
  recommended_actions: string[];  // Recommended actions
  questions: string[];         // Suggested questions
}

Action Units Detected

The engine detects the following Action Units:

  • AU1: Inner Brow Raiser
  • AU2: Outer Brow Raiser
  • AU4: Brow Lowerer
  • AU5: Upper Lid Raiser
  • AU6: Cheek Raiser
  • AU7: Lid Tightener
  • AU9: Nose Wrinkler
  • AU10: Upper Lip Raiser
  • AU12: Lip Corner Puller (Smile)
  • AU14: Dimpler
  • AU15: Lip Corner Depressor
  • AU17: Chin Raiser
  • AU18: Lip Puckerer
  • AU20: Lip Stretcher
  • AU23: Lip Tightener
  • AU24: Lip Pressor
  • AU25: Lips Part
  • AU26: Jaw Drop
  • AU28: Lip Suck
  • AU43: Eyes Closed
  • AU45: Blink
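
These AUs are derived from MediaPipe's face blendshapes (hence outputFaceBlendshapes: true in the setup). The sketch below uses real MediaPipe blendshape category names, but the mapping is only indicative; the engine's actual rules live in its FACS config:

// Illustrative only: not the engine's real blendshape-to-AU rules
function ausFromBlendshapes(scores: Map<string, number>) {
  const get = (name: string) => scores.get(name) ?? 0;
  return {
    AU1: get('browInnerUp'),                                        // Inner Brow Raiser
    AU2: Math.max(get('browOuterUpLeft'), get('browOuterUpRight')), // Outer Brow Raiser
    AU12: Math.max(get('mouthSmileLeft'), get('mouthSmileRight')),  // Lip Corner Puller
    AU26: get('jawOpen'),                                           // Jaw Drop
    AU45: Math.max(get('eyeBlinkLeft'), get('eyeBlinkRight')),      // Blink
  };
}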

Emotional Dimensions

The scoring engine calculates scores for:

  • engagement: Engagement/attention (tendency to participate)
  • affiliation: Affection/approach (tendency to bond/welcome)
  • approval: Approval/alignment with proposal
  • resistance: Resistance/blocking (tendency to disagree/close)
  • stress_load: Cognitive load/stress (overload/pressure)
  • aversion: Aversion/disgust/contempt (stimulus rejection)
  • incongruence_risk: Risk of incongruence (speech vs signals) / retention
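
A decision consumer can read these dimensions off ExpressionResult.scores. In the sketch below, the scores are assumed to share the -100 to +100 range of dominant_value, and the 50 threshold is illustrative:

engine.onDecision((decision) => {
  for (const [dimension, value] of Object.entries(decision.scores)) {
    console.log(`${dimension}: ${value.toFixed(1)}`);
  }
  // React when resistance clearly dominates (illustrative threshold)
  if (decision.dominant_dimension === 'resistance' && decision.dominant_value > 50) {
    console.log('Suggested follow-up questions:', decision.questions);
  }
});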

License

Private project