react-native-esanusi-sensor-pose

v1.0.2
A high-performance React Native library for real-time pose detection, skeleton visualization, pose classification, and session recording, powered by Google ML Kit and react-native-vision-camera v4.
Features
- ✅ Real-time detection of 33 body landmarks via Google ML Kit
- ✅ `<Camera>` wrapper — zero boilerplate, async worklet pipeline
- ✅ `<PoseOverlay>` — live skeleton renderer (joints + bones, no extra deps)
- ✅ `classifyPose()` — detects standing, squatting, lunging, T-arms, arms-raised
- ✅ `usePoseRecorder` — record sessions + one-shot snapshots + JSON export
- ✅ `mirrorX` prop for front-camera coordinate correction
- ✅ Configurable `minLandmarkConfidence` threshold
- ✅ Expo config plugin for managed workflow
- ✅ React Native New Architecture compatible
- ✅ Full TypeScript support
⚠️ Single-person limitation: Google ML Kit Pose Detection processes one person per frame on mobile. If multiple people are in the frame, only the most prominent subject is returned. This is a platform constraint, not a bug.
Installation
```sh
npm install react-native-esanusi-sensor-pose
# or
yarn add react-native-esanusi-sensor-pose
```

Peer dependencies

```sh
npm install react-native-vision-camera react-native-worklets-core
```

iOS

```sh
cd ios && pod install
```

Minimum iOS deployment target: 15.5
Android
Minimum SDK: 24 (Android 7.0)
Babel
Add react-native-worklets-core and the required syntax-proposal plugins to your babel.config.js:

```js
module.exports = {
  plugins: [
    ['react-native-worklets-core/plugin'],
    ['@babel/plugin-proposal-optional-chaining'],
    ['@babel/plugin-proposal-nullish-coalescing-operator'],
  ],
};
```

Note: depending on your React Native version, the proposal plugins may already be included in @react-native/babel-preset, but add them if you encounter Metro bundler syntax errors.
Expo (Managed Workflow)
Add the plugin to app.json or app.config.js:
```json
{
  "plugins": ["react-native-esanusi-sensor-pose"]
}
```

The config plugin automatically configures the ML Kit Maven/CocoaPods dependencies and injects the required camera permissions (android.permission.CAMERA for Android, NSCameraUsageDescription for iOS).

Then run npx expo prebuild to regenerate native files.
Quick Start
```jsx
import React, { useState } from 'react';
import { useWindowDimensions, View, StyleSheet } from 'react-native';
import { useCameraDevice } from 'react-native-vision-camera';
import {
  Camera,
  PoseOverlay,
  classifyPose,
} from 'react-native-esanusi-sensor-pose';

function App() {
  const device = useCameraDevice('back');
  const { width, height } = useWindowDimensions();
  const [poses, setPoses] = useState([]);
  const [frameSize, setFrameSize] = useState({ width: 1, height: 1 });

  if (!device) return null;

  return (
    <View style={{ flex: 1 }}>
      <Camera
        style={StyleSheet.absoluteFill}
        device={device}
        isActive
        poseDetectionOptions={{
          performanceMode: 'fast',
          detectorMode: 'stream',
        }}
        poseDetectionCallback={(p, f) => {
          setPoses(p);
          setFrameSize({ width: f.width, height: f.height });
        }}
      />
      <PoseOverlay
        poses={poses}
        frameWidth={frameSize.width}
        frameHeight={frameSize.height}
        viewWidth={width}
        viewHeight={height}
      />
    </View>
  );
}
```

API
<Camera>
Drop-in replacement for VisionCamera's <Camera> with built-in async pose detection.
| Prop | Type | Default | Description |
| ----------------------- | -------------------------------- | ------------ | ------------------------------------------------ |
| poseDetectionOptions | PoseDetectionOptions | — | Detection configuration |
| poseDetectionCallback | (poses: Pose[], frame) => void | required | Called with detected poses on each frame |
| mirrorX | boolean | false | Flip X coordinates — use true for front camera |
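The `mirrorX` prop conceptually reflects each landmark's X coordinate across the frame's vertical midline, so front-camera poses line up with the mirrored preview. A minimal sketch of that transform (`mirrorLandmarkX` and `Point2D` are hypothetical names, not part of the library's API):

```ts
interface Point2D {
  x: number;
  y: number;
}

// Reflect a landmark across the vertical midline of a frame that is
// `frameWidth` pixels wide; Y is unchanged.
function mirrorLandmarkX(p: Point2D, frameWidth: number): Point2D {
  return { x: frameWidth - p.x, y: p.y };
}

// A landmark at x = 100 in a 640-px-wide frame maps to x = 540.
console.log(mirrorLandmarkX({ x: 100, y: 200 }, 640)); // { x: 540, y: 200 }
```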
<PoseOverlay>
Renders a skeleton on top of the camera preview. All coordinates are automatically scaled from camera pixel space to screen space.
| Prop | Type | Default | Description |
| ------------- | -------- | ------------ | ------------------------------------------------------- |
| poses | Pose[] | required | Poses from poseDetectionCallback |
| frameWidth | number | required | Camera frame width in pixels |
| frameHeight | number | required | Camera frame height in pixels |
| viewWidth | number | required | Overlay view width (e.g. useWindowDimensions().width) |
| viewHeight | number | required | Overlay view height |
| dotColor | string | '#00FF88' | Joint circle colour |
| boneColor | string | '#00FF88' | Bone line colour |
| dotSize | number | 10 | Joint circle diameter (px) |
| boneWidth | number | 2 | Bone line thickness (px) |
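The frame-to-screen scaling the overlay performs can be sketched as below. This assumes a plain per-axis stretch (the library's actual mapping may additionally handle aspect-ratio cropping); `frameToView` is a hypothetical helper, not an exported function:

```ts
// Map a point from camera pixel space (frameWidth × frameHeight) into
// overlay view space (viewWidth × viewHeight) by scaling each axis
// independently.
function frameToView(
  x: number,
  y: number,
  frameWidth: number,
  frameHeight: number,
  viewWidth: number,
  viewHeight: number
): { x: number; y: number } {
  return {
    x: (x / frameWidth) * viewWidth,
    y: (y / frameHeight) * viewHeight,
  };
}

// A landmark at the centre of a 1280×720 frame lands at the centre of a
// 390×844 view.
console.log(frameToView(640, 360, 1280, 720, 390, 844)); // { x: 195, y: 422 }
```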
usePoseDetector(options?)
Lower-level hook; returns { detectPose } for use in a custom useFrameProcessor.
PoseDetectionOptions
| Option | Type | Default | Description |
| ----------------------- | ---------------------- | ---------- | --------------------------------------------- |
| performanceMode | 'fast' \| 'accurate' | 'fast' | Use 'accurate' for static images |
| detectorMode | 'stream' \| 'single' | 'stream' | Use 'single' for photos |
| minLandmarkConfidence | number | 0.5 | Filter out low-confidence landmarks (0.0–1.0) |
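A typical real-time configuration combining the options above might look like this (the values shown are the documented defaults, except the stricter confidence threshold suggested in the performance tips):

```ts
import type { PoseDetectionOptions } from 'react-native-esanusi-sensor-pose';

const options: PoseDetectionOptions = {
  performanceMode: 'fast',    // prioritise frame rate over landmark precision
  detectorMode: 'stream',     // track the subject across video frames
  minLandmarkConfidence: 0.6, // drop jittery low-confidence landmarks
};
```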
Pose Classification
```ts
import {
  classifyPose,
  isStanding,
  isSquatting,
  getAngle,
  getDistance,
} from 'react-native-esanusi-sensor-pose';

// Classify automatically
const label = classifyPose(poses[0]);
// → 'standing' | 'squatting' | 'arms-raised' | 't-arms' | 'lunging' | 'unknown'

// Or use individual classifiers
if (isSquatting(poses[0])) {
  /* ... */
}

// Geometry helpers
const kneeAngle = getAngle(pose.leftHip!, pose.leftKnee!, pose.leftAnkle!); // degrees
const shoulderWidth = getDistance(pose.leftShoulder!, pose.rightShoulder!); // pixels
```

| Classifier | What it detects |
| -------------- | -------------------------------------- |
| isStanding | Upright body, both knee angles > 150° |
| isSquatting | Both knees 60°–130°, hips dropped |
| isTArms | Arms extended horizontally (T-pose) |
| isArmsRaised | Both wrists above shoulder line |
| isLunging | Asymmetric knee bend (>40° difference) |
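The joint angles these classifiers compare are the standard "angle at the middle point" construction. A sketch of that geometry (`angleAt` and `Landmark` are hypothetical names illustrating what a helper like getAngle computes; the library's internals may differ):

```ts
interface Landmark {
  x: number;
  y: number;
}

// Angle at vertex `b` formed by segments b→a and b→c, in degrees (0–180).
// Degenerate input (coincident points) is not handled.
function angleAt(a: Landmark, b: Landmark, c: Landmark): number {
  const v1 = { x: a.x - b.x, y: a.y - b.y };
  const v2 = { x: c.x - b.x, y: c.y - b.y };
  const dot = v1.x * v2.x + v1.y * v2.y;
  const mag = Math.hypot(v1.x, v1.y) * Math.hypot(v2.x, v2.y);
  return (Math.acos(dot / mag) * 180) / Math.PI;
}

// A fully extended leg (hip, knee, ankle collinear) reads ≈180°, comfortably
// above the isStanding threshold of 150°.
const straight = angleAt({ x: 0, y: 0 }, { x: 0, y: 100 }, { x: 0, y: 200 });
console.log(straight); // ≈ 180
```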
Recording & Snapshot API
```ts
import { usePoseRecorder } from 'react-native-esanusi-sensor-pose';

const recorder = usePoseRecorder({ maxCaptures: 300 });

// In your pose callback:
recorder.record(poses, { width: frame.width, height: frame.height });

// Controls
recorder.startRecording();
recorder.stopRecording();

// One-shot capture (regardless of recording state)
recorder.snapshot(poses, frameSize, 'squat rep 3');

// Export
const json = recorder.exportJSON();
// { exportedAt, captureCount, captures: [{ id, timestamp, poses, frameWidth, frameHeight, label }] }

recorder.clearCaptures();
```

Pose Object
All 33 ML Kit landmarks. Each is optional and only present if inFrameLikelihood exceeds minLandmarkConfidence.
```ts
interface PoseLandmark {
  x: number; // pixel X
  y: number; // pixel Y
  z?: number; // depth
  inFrameLikelihood?: number; // 0.0 – 1.0
}
```

Face (11): nose, leftEye, rightEye, leftEyeInner, leftEyeOuter, rightEyeInner, rightEyeOuter, leftEar, rightEar, leftMouth, rightMouth
Upper body (6): leftShoulder, rightShoulder, leftElbow, rightElbow, leftWrist, rightWrist
Hands (6): leftPinky, rightPinky, leftIndex, rightIndex, leftThumb, rightThumb
Lower body (10): leftHip, rightHip, leftKnee, rightKnee, leftAnkle, rightAnkle, leftHeel, rightHeel, leftFootIndex, rightFootIndex
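Because every landmark is optional, downstream code should null-check before computing geometry. A minimal sketch, with the Pose shape simplified to just the two fields read here (`shoulderWidth` and `PartialPose` are hypothetical names for illustration):

```ts
interface PoseLandmark {
  x: number;
  y: number;
  z?: number;
  inFrameLikelihood?: number;
}

// Simplified pose: only the landmarks this example uses.
interface PartialPose {
  leftShoulder?: PoseLandmark;
  rightShoulder?: PoseLandmark;
}

// Returns shoulder width in pixels, or null when either landmark was
// filtered out by minLandmarkConfidence.
function shoulderWidth(pose: PartialPose): number | null {
  const { leftShoulder, rightShoulder } = pose;
  if (!leftShoulder || !rightShoulder) return null;
  return Math.hypot(
    rightShoulder.x - leftShoulder.x,
    rightShoulder.y - leftShoulder.y
  );
}

console.log(
  shoulderWidth({
    leftShoulder: { x: 100, y: 50 },
    rightShoulder: { x: 220, y: 50 },
  })
); // 120
console.log(shoulderWidth({})); // null
```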
React Native New Architecture
This library uses VisionCamera Frame Processor Plugins — not a TurboModule. Frame processor plugins are registered natively and accessed via VisionCameraProxy.initFrameProcessorPlugin. They are fully compatible with both the old and new React Native architectures.
To enable the new architecture, follow the VisionCamera New Architecture guide.
Performance Tips
- Use `performanceMode: 'fast'` for real-time streams (30+ FPS)
- Use `detectorMode: 'stream'` for video
- Use `pixelFormat: 'yuv'` on the camera (set automatically by `<Camera>`)
- Avoid heavy JS work in the `poseDetectionCallback`; offload with `Worklets.createRunOnJS()`
- Use `minLandmarkConfidence: 0.6–0.7` to skip jittery low-confidence landmarks
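One common way to keep JS-side work light is to process only every Nth callback. A hypothetical frame-skipping wrapper (not part of the library's API), sketched as a pure function:

```ts
// Wrap a callback so it only fires on every Nth invocation; calls in
// between are dropped. Useful when the detector runs at full camera
// frame rate but the UI only needs a few updates per second.
function everyNth<T>(n: number, fn: (arg: T) => void): (arg: T) => void {
  let count = 0;
  return (arg: T) => {
    if (count++ % n === 0) fn(arg);
  };
}

// Fire the expensive handler on every 3rd frame only.
const calls: number[] = [];
const throttled = everyNth<number>(3, (frame) => calls.push(frame));
for (let i = 0; i < 9; i++) throttled(i);
console.log(calls); // [0, 3, 6]
```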
Requirements
| | Minimum |
| -------------------------- | ------------ |
| iOS | 15.5 |
| Android | SDK 24 (7.0) |
| React Native | 0.74+ |
| react-native-vision-camera | 4.0+ |
| react-native-worklets-core | 1.0+ |
Contributing
See CONTRIBUTING.md.
License
MIT — Made with create-react-native-library
