# react-native-mediapipe-posedetection
High-performance pose detection for React Native using Google's MediaPipe models with optimized frame processing for smooth real-time tracking.
## ⚠️ New Architecture Required
This library only supports React Native's New Architecture (Turbo Modules). You must enable the New Architecture in your app to use this library.
## Features

- ✨ Real-time pose detection via `react-native-vision-camera`
- 🎯 33 pose landmarks per detected person
- 🚀 Optimized for performance (~15 FPS throttling to prevent memory issues)
- 📱 iOS & Android support
- 🔥 GPU acceleration support
- 🎨 Static image detection support
- 🪝 Easy-to-use React hooks
## Requirements

- **React Native:** 0.74.0 or higher
- **New Architecture:** Must be enabled
- **iOS:** 12.0+
- **Android:** API 24+
- **Dependencies:**
  - `react-native-vision-camera` ^4.0.0 (for real-time detection)
  - `react-native-worklets-core` ^1.0.0 (for frame processing)
## Installation

```sh
npm install react-native-mediapipe-posedetection react-native-vision-camera react-native-worklets-core
# or
yarn add react-native-mediapipe-posedetection react-native-vision-camera react-native-worklets-core
```

### Enable New Architecture
If you haven't already enabled the New Architecture in your React Native app:
#### Android

In your `android/gradle.properties`:

```properties
newArchEnabled=true
```

#### iOS
In your `ios/Podfile`:

```ruby
use_frameworks! :linkage => :static
$RNNewArchEnabled = true
```

Then reinstall pods:

```sh
cd ios && pod install
```

## iOS Setup
1. Download the MediaPipe pose landmarker model (e.g., `pose_landmarker_lite.task`)
2. Add it to your Xcode project
3. Ensure it's included in the "Copy Bundle Resources" build phase
## Android Setup
The MediaPipe dependencies are automatically included. Place your model file in `android/app/src/main/assets/`.
## Usage
### Real-time Pose Detection with Camera

```tsx
import {
  usePoseDetection,
  RunningMode,
  Delegate,
  KnownPoseLandmarks,
} from 'react-native-mediapipe-posedetection';
import { Camera, useCameraDevice } from 'react-native-vision-camera';

function PoseDetectionScreen() {
  const device = useCameraDevice('back');
  const poseDetection = usePoseDetection(
    {
      onResults: (result) => {
        // result.landmarks contains detected pose keypoints
        console.log('Number of poses:', result.landmarks.length);
        if (result.landmarks[0]?.length > 0) {
          const nose = result.landmarks[0][KnownPoseLandmarks.nose];
          console.log('Nose position:', nose.x, nose.y);
        }
      },
      onError: (error) => {
        console.error('Detection error:', error.message);
      },
    },
    RunningMode.LIVE_STREAM,
    'pose_landmarker_lite.task',
    {
      numPoses: 1,
      minPoseDetectionConfidence: 0.5,
      minPosePresenceConfidence: 0.5,
      minTrackingConfidence: 0.5,
      delegate: Delegate.GPU,
    }
  );

  if (!device) return null;

  return (
    <Camera
      style={{ flex: 1 }}
      device={device}
      isActive={true}
      frameProcessor={poseDetection.frameProcessor}
      onLayout={poseDetection.cameraViewLayoutChangeHandler}
    />
  );
}
```

### Static Image Detection

```ts
import {
  PoseDetectionOnImage,
  Delegate,
} from 'react-native-mediapipe-posedetection';

async function detectPoseInImage(imagePath: string) {
  const result = await PoseDetectionOnImage(
    imagePath,
    'pose_landmarker_lite.task',
    {
      numPoses: 2, // Detect up to 2 people
      minPoseDetectionConfidence: 0.5,
      delegate: Delegate.GPU,
    }
  );

  console.log('Detected poses:', result.landmarks.length);
  console.log('Inference time:', result.inferenceTime, 'ms');
}
```
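To get an image path at runtime, you can pair this with any image picker. A minimal sketch assuming `react-native-image-picker` (a separate package, not a dependency of this library):

```ts
import { launchImageLibrary } from 'react-native-image-picker';

// Let the user pick a photo, then run detection on it.
// Assumes the detectPoseInImage function from the example above.
async function pickAndDetect() {
  const response = await launchImageLibrary({ mediaType: 'photo' });
  const uri = response.assets?.[0]?.uri;
  if (uri != null) {
    // Pickers may return a file:// URI; strip the scheme if the
    // native detector expects a plain path on your platform.
    await detectPoseInImage(uri.replace('file://', ''));
  }
}
```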
### Using the MediapipeCamera Component

For a simpler setup, use the provided `MediapipeCamera` component:

```tsx
import { MediapipeCamera } from 'react-native-mediapipe-posedetection';

function App() {
  return (
    <MediapipeCamera
      style={{ flex: 1 }}
      cameraPosition="back"
      onResults={(result) => {
        console.log('Pose detected:', result.landmarks);
      }}
      poseDetectionOptions={{
        numPoses: 1,
        minPoseDetectionConfidence: 0.5,
      }}
    />
  );
}
```

## API Reference
### `usePoseDetection(callbacks, runningMode, model, options)`
Hook for real-time pose detection.
**Parameters:**

- `callbacks`: `DetectionCallbacks<PoseDetectionResultBundle>`
  - `onResults: (result: PoseDetectionResultBundle) => void` - Called when poses are detected
  - `onError: (error: DetectionError) => void` - Called on detection errors
- `runningMode`: `RunningMode`
  - `RunningMode.LIVE_STREAM` - For camera/video input
  - `RunningMode.VIDEO` - For video file processing
  - `RunningMode.IMAGE` - For static images (use `PoseDetectionOnImage` instead)
- `model`: `string` - Path to the MediaPipe model file (e.g., `'pose_landmarker_lite.task'`)
- `options`: `Partial<PoseDetectionOptions>` (optional)
  - `numPoses`: `number` - Maximum number of poses to detect (default: 1)
  - `minPoseDetectionConfidence`: `number` - Minimum confidence threshold (default: 0.5)
  - `minPosePresenceConfidence`: `number` - Minimum presence threshold (default: 0.5)
  - `minTrackingConfidence`: `number` - Minimum tracking threshold (default: 0.5)
  - `shouldOutputSegmentationMasks`: `boolean` - Include segmentation masks (default: false)
  - `delegate`: `Delegate.CPU | Delegate.GPU | Delegate.NNAPI` - Processing delegate (default: GPU)
  - `mirrorMode`: `'no-mirror' | 'mirror' | 'mirror-front-only'` - Camera mirroring
  - `fpsMode`: `'none' | number` - Additional FPS throttling (default: 'none')
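Putting the defaults together, a fully specified options object might look like the sketch below (`mirrorMode` has no documented default, so its value here is only illustrative; the `PoseDetectionOptions` type is assumed to be exported):

```ts
import { Delegate, type PoseDetectionOptions } from 'react-native-mediapipe-posedetection';

// Every documented option, using the defaults listed above where given.
const options: Partial<PoseDetectionOptions> = {
  numPoses: 1,
  minPoseDetectionConfidence: 0.5,
  minPosePresenceConfidence: 0.5,
  minTrackingConfidence: 0.5,
  shouldOutputSegmentationMasks: false,
  delegate: Delegate.GPU,
  mirrorMode: 'mirror-front-only', // illustrative; default not documented
  fpsMode: 'none',
};
```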
**Returns:** `MediaPipeSolution`

- `frameProcessor` - VisionCamera frame processor
- `cameraViewLayoutChangeHandler` - Layout change handler
- `cameraDeviceChangeHandler` - Camera device change handler
- `cameraOrientationChangedHandler` - Orientation change handler
- `resizeModeChangeHandler` - Resize mode handler
- `cameraViewDimensions` - Current camera view dimensions
### `PoseDetectionOnImage(imagePath, model, options)`
Detect poses in a static image.
**Parameters:**

- `imagePath`: `string` - Path to the image file
- `model`: `string` - Path to the MediaPipe model file
- `options`: Same as `usePoseDetection` options

**Returns:** `Promise<PoseDetectionResultBundle>`
### Result Structure

```ts
interface PoseDetectionResultBundle {
  inferenceTime: number; // Milliseconds
  size: { width: number; height: number };
  landmarks: Landmark[][]; // Array of poses, each with 33 landmarks
  worldLandmarks: Landmark[][]; // 3D world coordinates
  segmentationMasks?: Mask[]; // Optional segmentation masks
}

interface Landmark {
  x: number; // Normalized 0-1
  y: number; // Normalized 0-1
  z: number; // Depth (relative)
  visibility?: number; // Confidence 0-1
  presence?: number; // Presence confidence 0-1
}
```
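Because `visibility` is optional and per-landmark, you may want to drop low-confidence points before rendering a skeleton. A minimal sketch (the 0.5 threshold is an arbitrary example; the `Landmark` type is assumed to be exported, otherwise declare it locally from the interface above):

```ts
import type { Landmark } from 'react-native-mediapipe-posedetection';

// Keep only the landmarks the model is reasonably confident about.
function visibleLandmarks(pose: Landmark[], threshold = 0.5): Landmark[] {
  return pose.filter((lm) => (lm.visibility ?? 0) >= threshold);
}
```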
## Landmark Indices

Use `KnownPoseLandmarks` for easy landmark access:

```ts
import { KnownPoseLandmarks } from 'react-native-mediapipe-posedetection';

const landmarks = result.landmarks[0];
const nose = landmarks[KnownPoseLandmarks.nose];
const leftShoulder = landmarks[KnownPoseLandmarks.leftShoulder];
const rightWrist = landmarks[KnownPoseLandmarks.rightWrist];
```

Available landmarks:
- **Face:** `nose`, `leftEye`, `rightEye`, `leftEar`, `rightEar`, `mouthLeft`, `mouthRight`
- **Upper body:** `leftShoulder`, `rightShoulder`, `leftElbow`, `rightElbow`, `leftWrist`, `rightWrist`
- **Hands:** `leftPinky`, `rightPinky`, `leftIndex`, `rightIndex`, `leftThumb`, `rightThumb`
- **Lower body:** `leftHip`, `rightHip`, `leftKnee`, `rightKnee`, `leftAnkle`, `rightAnkle`
- **Feet:** `leftHeel`, `rightHeel`, `leftFootIndex`, `rightFootIndex`
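These indices combine naturally with basic trigonometry on the normalized coordinates. For example, a sketch estimating the left elbow angle (`angleAt` is a hypothetical helper, not part of the library; the `Landmark` type is assumed to be exported):

```ts
import { KnownPoseLandmarks, type Landmark } from 'react-native-mediapipe-posedetection';

// Hypothetical helper: the 2D angle at vertex b formed by a-b-c, in degrees.
function angleAt(a: Landmark, b: Landmark, c: Landmark): number {
  const ab = Math.atan2(a.y - b.y, a.x - b.x);
  const cb = Math.atan2(c.y - b.y, c.x - b.x);
  const deg = Math.abs((ab - cb) * (180 / Math.PI));
  return deg > 180 ? 360 - deg : deg;
}

// Shoulder-elbow-wrist gives the elbow flexion angle for one pose.
function leftElbowAngle(pose: Landmark[]): number {
  return angleAt(
    pose[KnownPoseLandmarks.leftShoulder],
    pose[KnownPoseLandmarks.leftElbow],
    pose[KnownPoseLandmarks.leftWrist]
  );
}
```

Note that `x`/`y` are normalized to the image size, so angles computed this way are only approximate unless you first rescale by the frame's aspect ratio.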
## Performance Optimizations
This library includes critical performance optimizations for React Native's New Architecture:
### ⚡ Automatic Frame Throttling
To prevent memory pressure and crashes, the library automatically throttles:
- Frame processing to ~15 FPS (MediaPipe detection calls)
- Event emissions to ~15 FPS (JavaScript callbacks)
This dual-layer throttling ensures:
- ✅ Stable memory usage
- ✅ No crashes during extended use
- ✅ Smooth pose detection experience
- ✅ Efficient battery usage
The throttling is transparent and requires no configuration. 15 FPS is sufficient for smooth pose tracking in most use cases.
### 🎯 Additional FPS Control
For even more control, use the `fpsMode` option:

```ts
usePoseDetection(callbacks, RunningMode.LIVE_STREAM, 'model.task', {
  fpsMode: 10, // Process frames at 10 FPS
});
```

## Migration from Old Architecture
If you were using a previous version that supported the Bridge architecture:
1. Upgrade React Native to 0.74.0 or higher
2. Enable the New Architecture (see installation instructions)
3. Rebuild your app completely:

```sh
# iOS
cd ios && pod install && cd ..

# Android
cd android && ./gradlew clean && cd ..
```
The API remains the same, so your application code shouldn't need changes.
## Troubleshooting
"MediapipePosedetection TurboModule is not available"
**Cause:** New Architecture is not enabled or not properly configured.

**Solution:**

- Verify `newArchEnabled=true` in `android/gradle.properties`
- Verify `$RNNewArchEnabled = true` in `ios/Podfile`
- Clean and rebuild your app
- Ensure you're using React Native 0.74.0+
"Failed to create detector"
**Cause:** Model file not found or invalid configuration.

**Solution:**
- Verify the model file path is correct
- Ensure the model file is included in your app bundle
- Check that the model file is a valid MediaPipe pose landmarker model
### Memory Warnings / Crashes on iOS
**Solution:** The library automatically throttles to prevent this. If you still experience issues:

- Ensure you're on the latest version
- Reduce `numPoses` to 1
- Set `shouldOutputSegmentationMasks: false`
- Use `Delegate.CPU` instead of `Delegate.GPU` if GPU memory is limited
### Poor Performance
**Solutions:**

- Use `pose_landmarker_lite.task` instead of `pose_landmarker_full.task`
- Set `fpsMode: 10` to lower the frame-processing rate
- Reduce `numPoses` if you don't need to detect multiple people
- Enable GPU acceleration: `delegate: Delegate.GPU`
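Combining these tips, a performance-leaning setup might look like this sketch (called inside a component; the callbacks are placeholders):

```ts
import {
  usePoseDetection,
  RunningMode,
  Delegate,
} from 'react-native-mediapipe-posedetection';

// Lite model, single pose, no masks, GPU delegate, extra throttling to 10 FPS.
const poseDetection = usePoseDetection(
  {
    onResults: (result) => console.log('poses:', result.landmarks.length),
    onError: (error) => console.warn(error.message),
  },
  RunningMode.LIVE_STREAM,
  'pose_landmarker_lite.task',
  {
    numPoses: 1,
    shouldOutputSegmentationMasks: false,
    delegate: Delegate.GPU,
    fpsMode: 10,
  }
);
```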
## Example App
Check out the `example` directory for a complete working app demonstrating:
- Real-time pose detection
- Pose landmark visualization
- Camera controls
- Performance optimization
Run the example:

```sh
cd example
yarn install
yarn ios # or yarn android
```

## Contributing
See the contributing guide to learn about the development workflow and how to contribute to the repository.
## Credits
This library is based on the work from react-native-mediapipe by cdiddy77. The pose detection module code was taken from that repository and upgraded to support React Native's New Architecture (Turbo Modules).
## License
MIT
Made with create-react-native-library
