

react-native-esanusi-sensor-pose

v1.0.2

Published

Real-time pose detection for React Native using Google ML Kit — frame processor plugin for react-native-vision-camera v4 with skeleton overlay, pose classification, and recording API.


react-native-esanusi-sensor-pose

A high-performance React Native library for real-time pose detection, skeleton visualization, pose classification, and session recording, powered by Google ML Kit and react-native-vision-camera v4.

Features

  • ✅ Real-time detection of 33 body landmarks via Google ML Kit
  • ✅ <Camera> wrapper — zero boilerplate, async worklet pipeline
  • ✅ <PoseOverlay> — live skeleton renderer (joints + bones, no extra deps)
  • ✅ classifyPose() — detects standing, squatting, lunging, T-arms, and arms-raised poses
  • ✅ usePoseRecorder — record sessions, one-shot snapshots, JSON export
  • ✅ mirrorX prop for front-camera coordinate correction
  • ✅ Configurable minLandmarkConfidence threshold
  • ✅ Expo config plugin for managed workflow
  • ✅ React Native New Architecture compatible
  • ✅ Full TypeScript support

⚠️ Single-person limitation: Google ML Kit Pose Detection processes one person per frame on mobile. If multiple people are in the frame, only the most prominent subject is returned. This is a platform constraint, not a bug.


Installation

npm install react-native-esanusi-sensor-pose
# or
yarn add react-native-esanusi-sensor-pose

Peer dependencies

npm install react-native-vision-camera react-native-worklets-core

iOS

cd ios && pod install

Minimum iOS deployment target: 15.5

Android

Minimum SDK: 24 (Android 7.0)

Babel

Add react-native-worklets-core and the required syntax proposal plugins to your babel.config.js:

module.exports = {
  plugins: [
    ['react-native-worklets-core/plugin'],
    ['@babel/plugin-proposal-optional-chaining'],
    ['@babel/plugin-proposal-nullish-coalescing-operator'],
  ],
};

(Note: Depending on your React Native version, the proposal plugins may already be included in @react-native/babel-preset; add them only if you encounter Metro bundler syntax errors.)

Expo (Managed Workflow)

Add the plugin to app.json or app.config.js:

{
  "plugins": ["react-native-esanusi-sensor-pose"]
}

The config plugin automatically configures the ML Kit Maven/CocoaPods dependencies and injects the required Camera permissions (android.permission.CAMERA for Android and NSCameraUsageDescription for iOS).

Then run npx expo prebuild to regenerate native files.


Quick Start

import { useState } from 'react';
import {
  Camera,
  PoseOverlay,
  classifyPose,
} from 'react-native-esanusi-sensor-pose';
import { useCameraDevice } from 'react-native-vision-camera';
import { useWindowDimensions, View, StyleSheet } from 'react-native';

function App() {
  const device = useCameraDevice('back');
  const { width, height } = useWindowDimensions();
  const [poses, setPoses] = useState([]);
  const [frameSize, setFrameSize] = useState({ width: 1, height: 1 });

  if (!device) return null;

  return (
    <View style={{ flex: 1 }}>
      <Camera
        style={StyleSheet.absoluteFill}
        device={device}
        isActive
        poseDetectionOptions={{
          performanceMode: 'fast',
          detectorMode: 'stream',
        }}
        poseDetectionCallback={(p, f) => {
          setPoses(p);
          setFrameSize({ width: f.width, height: f.height });
        }}
      />
      <PoseOverlay
        poses={poses}
        frameWidth={frameSize.width}
        frameHeight={frameSize.height}
        viewWidth={width}
        viewHeight={height}
      />
    </View>
  );
}

API

<Camera>

Drop-in replacement for VisionCamera's <Camera> with built-in async pose detection.

| Prop | Type | Default | Description |
| --- | --- | --- | --- |
| poseDetectionOptions | PoseDetectionOptions | — | Detection configuration |
| poseDetectionCallback | (poses: Pose[], frame) => void | required | Called with detected poses on each frame |
| mirrorX | boolean | false | Flip X coordinates — use true for front camera |

<PoseOverlay>

Renders a skeleton on top of the camera preview. All coordinates are automatically scaled from camera pixel space to screen space.

| Prop | Type | Default | Description |
| --- | --- | --- | --- |
| poses | Pose[] | required | Poses from poseDetectionCallback |
| frameWidth | number | required | Camera frame width in pixels |
| frameHeight | number | required | Camera frame height in pixels |
| viewWidth | number | required | Overlay view width (e.g. useWindowDimensions().width) |
| viewHeight | number | required | Overlay view height |
| dotColor | string | '#00FF88' | Joint circle colour |
| boneColor | string | '#00FF88' | Bone line colour |
| dotSize | number | 10 | Joint circle diameter (px) |
| boneWidth | number | 2 | Bone line thickness (px) |
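The frame-to-view scaling that the overlay performs can be sketched as a pure function. This is an illustrative mapping only, assuming plain stretch scaling with no aspect-ratio letterboxing; the actual renderer may differ. The mirrorX parameter mimics the front-camera correction described for <Camera>:

```typescript
interface Point { x: number; y: number; }

// Illustrative frame→view mapping, similar in spirit to what <PoseOverlay>
// does internally. Assumption: simple stretch scaling on each axis.
function frameToView(
  pt: Point,
  frameWidth: number, frameHeight: number,
  viewWidth: number, viewHeight: number,
  mirrorX = false, // front-camera correction, like the <Camera> mirrorX prop
): Point {
  const x = pt.x * (viewWidth / frameWidth);
  return {
    x: mirrorX ? viewWidth - x : x,
    y: pt.y * (viewHeight / frameHeight),
  };
}
```

This is useful if you want to draw custom graphics (labels, angle arcs) aligned with the skeleton that <PoseOverlay> renders.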

usePoseDetector(options?)

Lower-level hook; returns { detectPose } for use in a custom useFrameProcessor.

PoseDetectionOptions

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| performanceMode | 'fast' \| 'accurate' | 'fast' | Use 'accurate' for static images |
| detectorMode | 'stream' \| 'single' | 'stream' | Use 'single' for photos |
| minLandmarkConfidence | number | 0.5 | Filter out low-confidence landmarks (0.0–1.0) |


Pose Classification

import {
  classifyPose,
  isStanding,
  isSquatting,
  getAngle,
  getDistance,
} from 'react-native-esanusi-sensor-pose';

// Classify automatically
const label = classifyPose(poses[0]);
// → 'standing' | 'squatting' | 'arms-raised' | 't-arms' | 'lunging' | 'unknown'

// Or use individual classifiers
if (isSquatting(poses[0])) {
  /* ... */
}

// Geometry helpers
const kneeAngle = getAngle(pose.leftHip!, pose.leftKnee!, pose.leftAnkle!); // degrees
const shoulderWidth = getDistance(pose.leftShoulder!, pose.rightShoulder!); // pixels

| Classifier | What it detects |
| --- | --- |
| isStanding | Upright body, both knee angles > 150° |
| isSquatting | Both knees 60°–130°, hips dropped |
| isTArms | Arms extended horizontally (T-pose) |
| isArmsRaised | Both wrists above shoulder line |
| isLunging | Asymmetric knee bend (>40° difference) |
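The labels returned by classifyPose() can drive simple exercise logic. Below is a sketch of a squat rep counter; the counter itself is application code, not part of the library, and only consumes the label strings documented above. A rep is counted on each squatting → standing transition:

```typescript
// Labels as documented for classifyPose().
type PoseLabel =
  | 'standing' | 'squatting' | 'arms-raised' | 't-arms' | 'lunging' | 'unknown';

// Minimal rep counter: remembers whether a squat has been seen,
// and counts a rep when the subject returns to standing.
function createRepCounter() {
  let down = false;
  let count = 0;
  return {
    update(label: PoseLabel): number {
      if (label === 'squatting') {
        down = true;
      } else if (label === 'standing' && down) {
        down = false;
        count += 1; // completed one squat → stand cycle
      }
      return count;
    },
    get reps() { return count; },
  };
}
```

In practice you would call counter.update(classifyPose(poses[0])) inside your poseDetectionCallback.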


Recording & Snapshot API

import { usePoseRecorder } from 'react-native-esanusi-sensor-pose';

const recorder = usePoseRecorder({ maxCaptures: 300 });

// In your pose callback:
recorder.record(poses, { width: frame.width, height: frame.height });

// Controls
recorder.startRecording();
recorder.stopRecording();

// One-shot capture (regardless of recording state)
recorder.snapshot(poses, frameSize, 'squat rep 3');

// Export
const json = recorder.exportJSON();
// { exportedAt, captureCount, captures: [{ id, timestamp, poses, frameWidth, frameHeight, label }] }

recorder.clearCaptures();
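Because exportJSON() returns plain data, it can be post-processed anywhere. A small example that tallies captures per label; the interfaces below are inferred from the export shape documented above and the id type is an assumption:

```typescript
// Shape inferred from the documented exportJSON() output; id type is assumed.
interface Capture {
  id: string;
  timestamp: number;
  poses: unknown[];
  frameWidth: number;
  frameHeight: number;
  label?: string;
}

interface PoseExport {
  exportedAt: number;
  captureCount: number;
  captures: Capture[];
}

// Count how many captures carry each label (snapshots are often labeled).
function countByLabel(exp: PoseExport): Record<string, number> {
  const out: Record<string, number> = {};
  for (const c of exp.captures) {
    const key = c.label ?? 'unlabeled';
    out[key] = (out[key] ?? 0) + 1;
  }
  return out;
}
```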

Pose Object

A Pose contains all 33 ML Kit landmarks. Each landmark is optional and is only present when its inFrameLikelihood exceeds minLandmarkConfidence.

interface PoseLandmark {
  x: number; // pixel X
  y: number; // pixel Y
  z?: number; // depth
  inFrameLikelihood?: number; // 0.0 – 1.0
}

Face (11): nose, leftEye, rightEye, leftEyeInner, leftEyeOuter, rightEyeInner, rightEyeOuter, leftEar, rightEar, leftMouth, rightMouth

Upper body (6): leftShoulder, rightShoulder, leftElbow, rightElbow, leftWrist, rightWrist

Hands (6): leftPinky, rightPinky, leftIndex, rightIndex, leftThumb, rightThumb

Lower body (10): leftHip, rightHip, leftKnee, rightKnee, leftAnkle, rightAnkle, leftHeel, rightHeel, leftFootIndex, rightFootIndex
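Since every landmark is optional, guard before doing geometry on them. The helper below is a stand-alone, illustrative angle computation, not the library's getAngle implementation; only the landmark names come from the lists above:

```typescript
interface PoseLandmark {
  x: number; // pixel X
  y: number; // pixel Y
  z?: number; // depth
  inFrameLikelihood?: number; // 0.0 – 1.0
}

// Only the subset of landmarks used here, for illustration.
type Pose = Partial<Record<'leftHip' | 'leftKnee' | 'leftAnkle', PoseLandmark>>;

function kneeAngleOrNull(pose: Pose): number | null {
  const { leftHip, leftKnee, leftAnkle } = pose;
  if (!leftHip || !leftKnee || !leftAnkle) return null; // landmark missing → bail out
  // Angle at the knee between the knee→hip and knee→ankle vectors, in degrees.
  const a1 = Math.atan2(leftHip.y - leftKnee.y, leftHip.x - leftKnee.x);
  const a2 = Math.atan2(leftAnkle.y - leftKnee.y, leftAnkle.x - leftKnee.x);
  const deg = Math.abs((a1 - a2) * (180 / Math.PI));
  return deg > 180 ? 360 - deg : deg;
}
```

The same guard-then-compute pattern applies to the library's own getAngle and getDistance helpers, since the non-null assertions in the earlier example (pose.leftHip! etc.) will throw if a landmark was filtered out.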


React Native New Architecture

This library uses VisionCamera Frame Processor Plugins, not a TurboModule. Frame processor plugins are registered natively and accessed via VisionCameraProxy.initFrameProcessorPlugin, and they are fully compatible with both the old and new React Native architectures.

To enable the new architecture, follow the VisionCamera New Architecture guide.


Performance Tips

  1. performanceMode: 'fast' for real-time streams (30+ FPS)
  2. detectorMode: 'stream' for video
  3. pixelFormat: 'yuv' on the camera (set automatically by <Camera>)
  4. Avoid heavy JS work in the poseDetectionCallback; offload with Worklets.createRunOnJS()
  5. Use minLandmarkConfidence: 0.6–0.7 to skip jittery low-confidence landmarks
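Putting tips 1, 2, and 5 together, a typical real-time configuration might look like the following. The 0.65 threshold is a suggestion within the recommended 0.6–0.7 range, not a library default:

```typescript
// Suggested real-time options for poseDetectionOptions on <Camera>.
const poseDetectionOptions = {
  performanceMode: 'fast' as const,   // tip 1: prioritize frame rate
  detectorMode: 'stream' as const,    // tip 2: continuous video streams
  minLandmarkConfidence: 0.65,        // tip 5: suppress jittery low-confidence landmarks
};
```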

Requirements

| | Minimum |
| --- | --- |
| iOS | 15.5 |
| Android | SDK 24 (7.0) |
| React Native | 0.74+ |
| react-native-vision-camera | 4.0+ |
| react-native-worklets-core | 1.0+ |


Contributing

See CONTRIBUTING.md.

License

MIT — Made with create-react-native-library