
facecap

v1.0.1

Face autocapture module with quality check, face angle estimation via solvePnP, and guideline UI overlay


Plug-and-play face autocapture library powered by Google's BlazeFace TFLite model (via @litertjs/core) with solvePnP-based face-angle estimation, per-frame quality gating, and a built-in canvas overlay UI.



✨ Features

  • 🎯 Face Detection — BlazeFace short-range TFLite model running in-browser via WASM
  • 📐 Pose Estimation — solvePnP (opencv.js) converts 6 facial keypoints to yaw / pitch / roll angles
  • ✅ Quality Gating — Rejects frames with multiple faces, low confidence, wrong size, or off-angle pose
  • 📸 Auto-Capture — Captures a configurable number of face images after N consecutive quality-passing frames
  • 🎨 Overlay UI — Draws a guide oval (or face silhouette), bounding boxes, landmarks, and a status banner
  • 🔁 Full Lifecycle — init → start → stop → destroy with safe re-initialization support
  • 📦 ESM + CJS — Ships both module formats with TypeScript declarations

📦 Installation

npm install facecap

Peer runtime: opencv.js is loaded from a CDN at runtime by default (configurable via opencvJsUrl). No extra npm install is needed.


🚀 Quick Start

<!-- 1. Add a container element -->
<div id="camera" style="width: 320px;"></div>

// 2. Initialize and start capture in your module script
import { FaceAutocapture, FaceStatus } from 'facecap';

const capture = new FaceAutocapture({
  container: '#camera',
  captureCount: 1,
});

capture.onCapture = (result) => {
  console.log(`Captured image ${result.index}`, result.blob);
  const url = URL.createObjectURL(result.blob);
  document.querySelector<HTMLImageElement>('#preview')!.src = url;
};

capture.onStatusChange = (status, message) => {
  console.log(`[${status}] ${message}`);
};

await capture.init();   // loads WASM + model + opencv.js
await capture.start();  // opens webcam + begins detection loop

⚙️ Configuration

All options are optional — sensible defaults are applied automatically.

new FaceAutocapture({
  // Required
  container: '#camera',               // CSS selector or HTMLElement

  // Model / runtime
  modelPath?: string,                 // URL to .tflite model (default: bundled)
  wasmBasePath?: string,              // @litertjs/core WASM base URL
  opencvJsUrl?: string,               // opencv.js CDN URL

  // Capture behaviour
  captureCount?: number,              // Number of images to capture  (default: 1)
  captureIntervalSec?: number,        // Minimum seconds between captures (default: 0.5)
  stableFramesRequired?: number,      // Consecutive quality-OK frames before capture (default: 3)
  captureMimeType?: string,           // 'image/jpeg' | 'image/png'  (default: 'image/jpeg')
  captureQuality?: number,            // 0–1 quality for lossy formats (default: 0.99)

  // Quality thresholds
  minConfidence?: number,             // Minimum detection score 0–1  (default: 0.75)
  maxYawAngle?: number,               // Max left/right head turn °   (default: 15)
  maxPitchAngle?: number,             // Max up/down head tilt °      (default: 15)
  maxRollAngle?: number,              // Max clockwise head roll °    (default: 15)
  minFaceAreaRatio?: number,          // Min face area / frame area   (default: 0.24)
  maxFaceAreaRatio?: number,          // Max face area / frame area   (default: 0.30)

  // Output
  outputWidth?: number,               // Capture canvas width px      (default: 480)
  outputHeight?: number,              // Capture canvas height px     (default: 640)
  facingMode?: 'user' | 'environment',// Camera facing               (default: 'user')

  // Overlay UI
  showFaceGuide?: 'oval' | 'face',    // Guide shape style            (default: 'oval')
  showBoundingBox?: boolean,          // Draw detection bbox          (default: true)
  showLandmarks?: boolean,            // Draw 6 keypoint dots         (default: false)
});
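The quality thresholds above combine into a single per-frame gate: a frame counts toward capture only when every check passes. As a rough sketch of that logic (illustrative only — the type and function names here are not facecap's internals):

```typescript
// Illustrative types mirroring the quality-threshold options above.
interface QualityThresholds {
  minConfidence: number;
  maxYawAngle: number;
  maxPitchAngle: number;
  maxRollAngle: number;
  minFaceAreaRatio: number;
  maxFaceAreaRatio: number;
}

interface FrameInfo {
  faceCount: number;                        // detected faces in the frame
  score: number;                            // detection confidence, 0–1
  yaw: number; pitch: number; roll: number; // head angles in degrees
  faceArea: number;                         // face bbox area, px²
  frameArea: number;                        // full frame area, px²
}

// A frame passes the gate only when every individual check passes.
function passesQualityGate(f: FrameInfo, t: QualityThresholds): boolean {
  const areaRatio = f.faceArea / f.frameArea;
  return (
    f.faceCount === 1 &&
    f.score >= t.minConfidence &&
    Math.abs(f.yaw) <= t.maxYawAngle &&
    Math.abs(f.pitch) <= t.maxPitchAngle &&
    Math.abs(f.roll) <= t.maxRollAngle &&
    areaRatio >= t.minFaceAreaRatio &&
    areaRatio <= t.maxFaceAreaRatio
  );
}
```

With the defaults listed above, a single centered face filling 24–30% of the frame at near-frontal pose would pass; anything else is rejected with the matching FaceStatus.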

📡 Events / Callbacks

onCapture

Fired each time a frame passes all quality checks and an image is captured.

capture.onCapture = (result: CaptureResult) => {
  result.blob       // Blob — the captured face image
  result.index      // number — 1-based capture index
  result.detection  // Detection — raw bbox & keypoints
  result.angle      // FaceAngle — { yaw, pitch, roll } in degrees
  result.quality    // QualityResult — individual check flags
  result.timestamp  // number — ms since epoch
};
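With captureCount greater than 1, onCapture fires once per image. A small collector can gather the results and hand them off together once all have arrived — a hedged sketch (the helper below is not part of facecap's API; it is generic so it works with any payload):

```typescript
// Collects `expected` results, then invokes `done` exactly once with all of them.
function makeCollector<T>(expected: number, done: (all: T[]) => void): (result: T) => void {
  const collected: T[] = [];
  return (result: T) => {
    collected.push(result);
    if (collected.length === expected) done([...collected]);
  };
}

// Hypothetical wiring against the callback above:
// capture.onCapture = makeCollector(3, (results) => {
//   console.log('all captures ready:', results.length);
// });
```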

onStatusChange

Fired when the face status changes (debounced to avoid flicker).

capture.onStatusChange = (status: FaceStatus, message: string) => {
  // status is one of:
  // FaceStatus.NO_FACE | MULTIPLE_FACES | FACE_TOO_SMALL | FACE_TOO_LARGE
  // FACE_NOT_STRAIGHT | LOW_CONFIDENCE | HOLD_STILL | CAPTURING
  // READY | COMPLETE
};
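The debouncing mentioned above can be approximated with a frame-count stabilizer: a new status is only reported after it has been observed for N consecutive frames, so a single noisy frame never flips the banner. This is an illustrative sketch, not facecap's internal debounce:

```typescript
// Reports a status change only after it persists for `stableFrames`
// consecutive observations, suppressing single-frame flicker.
class StatusStabilizer<S> {
  private current: S;
  private candidate: S | null = null;
  private count = 0;

  constructor(initial: S, private stableFrames = 3) {
    this.current = initial;
  }

  // Feed one per-frame status; returns the (possibly unchanged) stable status.
  update(next: S): S {
    if (next === this.current) {
      // Back to the stable status: drop any pending candidate.
      this.candidate = null;
      this.count = 0;
    } else if (next === this.candidate) {
      if (++this.count >= this.stableFrames) {
        this.current = next;
        this.candidate = null;
        this.count = 0;
      }
    } else {
      // New candidate status: start counting from 1.
      this.candidate = next;
      this.count = 1;
    }
    return this.current;
  }
}
```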

onDetection

Fired every frame with raw detection data (useful for custom analytics).

capture.onDetection = (detections: Detection[], angle: FaceAngle | null) => {
  // Called at the camera frame rate
};
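As a concrete analytics use of onDetection, you might smooth the per-frame yaw readings with a fixed-window rolling mean. A hedged sketch (the helper and wiring are illustrative, not part of the library):

```typescript
// Fixed-window rolling mean, e.g. for smoothing per-frame yaw readings.
class RollingMean {
  private values: number[] = [];
  constructor(private windowSize = 30) {}

  // Add a value, evicting the oldest if the window is full; returns the mean.
  push(v: number): number {
    this.values.push(v);
    if (this.values.length > this.windowSize) this.values.shift();
    return this.mean();
  }

  mean(): number {
    if (this.values.length === 0) return 0;
    return this.values.reduce((a, b) => a + b, 0) / this.values.length;
  }
}

// Hypothetical wiring (capture is a FaceAutocapture instance):
// const yawTrend = new RollingMean(30);
// capture.onDetection = (detections, angle) => {
//   if (angle) console.log('avg yaw:', yawTrend.push(angle.yaw));
// };
```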

🔄 Lifecycle

new FaceAutocapture(config)
        │
        ▼
   capture.init()      ← loads WASM + TFLite model + opencv.js
        │
        ▼
   capture.start()     ← opens webcam, begins requestAnimationFrame loop
        │
        ▼
  [detection loop]     ← detects → quality check → overlay → capture
        │
        ▼
   capture.stop()      ← pauses loop, releases webcam
        │
        ▼
   capture.destroy()   ← removes DOM elements, releases model

You can safely call init() + start() again after destroy().
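The legal transitions in the diagram above amount to a tiny state machine; this sketch (illustrative only, not the library's internals) shows what "safe re-initialization" implies, including the destroyed → init edge:

```typescript
type LifecycleState = 'created' | 'ready' | 'running' | 'stopped' | 'destroyed';
type LifecycleAction = 'init' | 'start' | 'stop' | 'destroy';

// Allowed transitions mirroring init → start → stop → destroy,
// plus re-initialization after destroy.
const TRANSITIONS: Record<string, LifecycleState> = {
  'created:init': 'ready',
  'ready:start': 'running',
  'running:stop': 'stopped',
  'stopped:start': 'running',
  'running:destroy': 'destroyed',
  'stopped:destroy': 'destroyed',
  'destroyed:init': 'ready', // safe re-initialization
};

class Lifecycle {
  state: LifecycleState = 'created';

  apply(action: LifecycleAction): LifecycleState {
    const next = TRANSITIONS[`${this.state}:${action}`];
    if (!next) throw new Error(`illegal ${action} from state ${this.state}`);
    return (this.state = next);
  }
}
```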


🗂️ Exported Types

import type {
  AutocaptureConfig,      // Constructor config object
  Detection,              // { box, keypoints, score }
  BoundingBox,            // { x, y, width, height }
  Point2D,                // { x, y }
  FaceAngle,              // { yaw, pitch, roll }
  QualityResult,          // { passed, checks, message }
  CaptureResult,          // { blob, index, detection, angle, quality, timestamp }
  OnCaptureCallback,
  OnStatusChangeCallback,
  OnDetectionCallback,
} from 'facecap';

import { FaceStatus, KeypointIndex, STATUS_MESSAGES } from 'facecap';

🏗️ Architecture

The library is built with a 5-layer architecture for maintainability:

Application  ──►  Core (Orchestrator)
                     ├── Domain Layer      (business logic: capture, status, smoothing)
                     ├── Infrastructure    (I/O: DOM, webcam, ML pipeline)
                     ├── Processors        (pure math: preprocess, decode, NMS, pose)
                     └── UI Layer          (canvas overlay rendering)
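To make the Processors layer concrete, its NMS step can be illustrated with standard IoU-based greedy non-maximum suppression over detection boxes — a generic sketch of the technique, not facecap's exact implementation:

```typescript
interface ScoredBox { x: number; y: number; width: number; height: number; score: number; }

// Intersection-over-union of two axis-aligned boxes.
function iou(a: ScoredBox, b: ScoredBox): number {
  const x1 = Math.max(a.x, b.x);
  const y1 = Math.max(a.y, b.y);
  const x2 = Math.min(a.x + a.width, b.x + b.width);
  const y2 = Math.min(a.y + a.height, b.y + b.height);
  const inter = Math.max(0, x2 - x1) * Math.max(0, y2 - y1);
  const union = a.width * a.height + b.width * b.height - inter;
  return union > 0 ? inter / union : 0;
}

// Greedy NMS: keep the highest-scoring box, drop boxes that overlap it heavily.
function nms(boxes: ScoredBox[], iouThreshold = 0.5): ScoredBox[] {
  const sorted = [...boxes].sort((p, q) => q.score - p.score);
  const kept: ScoredBox[] = [];
  for (const box of sorted) {
    if (kept.every((k) => iou(k, box) < iouThreshold)) kept.push(box);
  }
  return kept;
}
```

After NMS, a frame that still contains more than one box maps naturally to the MULTIPLE_FACES status above.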

🧑‍💻 Full Example

<!DOCTYPE html>
<html>
<body>
  <div id="camera" style="width: 320px; margin: auto;"></div>
  <img id="preview" style="display:none; margin: auto; width: 320px;" />

  <script type="module">
    import { FaceAutocapture, FaceStatus } from 'facecap';

    const cam = new FaceAutocapture({
      container: '#camera',
      captureCount: 3,
      stableFramesRequired: 5,
      showFaceGuide: 'oval',
      showLandmarks: true,
    });

    cam.onCapture = ({ blob, index }) => {
      console.log(`📸 Captured ${index}`);
      if (index === 1) {
        document.getElementById('preview').src = URL.createObjectURL(blob);
        document.getElementById('preview').style.display = 'block';
      }
    };

    cam.onStatusChange = (status, msg) => {
      if (status === FaceStatus.COMPLETE) console.log('✅ All done!');
    };

    await cam.init();
    await cam.start();
  </script>
</body>
</html>

🔒 Browser Permissions

The library accesses the camera via navigator.mediaDevices.getUserMedia. The browser will prompt the user for permission. This requires:

  • A secure context (https:// or localhost)
  • A supported browser (Chrome, Firefox, Safari, Edge)
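Before calling init(), you may want to verify these preconditions up front. A hedged sketch (the predicate and its input shape are illustrative, not part of facecap; in a browser you would feed it window.isSecureContext and navigator.mediaDevices):

```typescript
// Minimal capability check; the environment is passed in so the logic
// stays testable outside a browser.
interface CaptureEnv {
  isSecureContext: boolean;
  hasGetUserMedia: boolean;
}

function canAutocapture(env: CaptureEnv): { ok: boolean; reason?: string } {
  if (!env.isSecureContext) {
    return { ok: false, reason: 'requires a secure context (https:// or localhost)' };
  }
  if (!env.hasGetUserMedia) {
    return { ok: false, reason: 'getUserMedia is not supported in this browser' };
  }
  return { ok: true };
}

// In the browser (illustrative):
// const env = {
//   isSecureContext: window.isSecureContext,
//   hasGetUserMedia: !!navigator.mediaDevices?.getUserMedia,
// };
// if (canAutocapture(env).ok) await capture.init();
```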

📄 License

MIT © Kuson