# react-native-mediapipe-vision

v1.0.0

MediaPipe frame processor for VisionCamera in React Native.
## Installation

```sh
npm install react-native-mediapipe-vision
# or
yarn add react-native-mediapipe-vision
```

### iOS Setup

```sh
cd ios
pod install
```

This package requires:

- React Native >= 0.60.0
- VisionCamera >= 3.0.0
- iOS 13.0 or higher
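VisionCamera 3's frame processors are built on `react-native-worklets-core`, which requires its Babel plugin to be registered. If your app does not already configure it, a typical `babel.config.js` looks like the sketch below (this follows the VisionCamera 3 docs and is not specific to this package; adjust it to your existing config):

```javascript
// babel.config.js -- assumes react-native-worklets-core is installed,
// which VisionCamera v3 frame processors depend on.
module.exports = {
  presets: ['module:metro-react-native-babel-preset'],
  plugins: ['react-native-worklets-core/plugin'],
};
```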
## MediaPipe Models

To use this module, you need to add the MediaPipe model files to your project:

- Download the required models from the MediaPipe website
- Add the models to your application bundle
- For iOS: add them to your Xcode project's main bundle

Models needed:

- `selfie_segmenter.tflite` - for segmentation
- `pose_landmarker_lite.task` - for pose detection
- `face_detection_short_range.tflite` - for face detection
- `hand_landmarker.task` - for hand landmark detection
## Usage

```tsx
import { Camera, useCameraDevice, useFrameProcessor } from 'react-native-vision-camera';
import { MediaPipeProcessors, useMediaPipeProcessor } from 'react-native-mediapipe-vision';

export default function App() {
  const device = useCameraDevice('back');

  // Method 1: Using the custom hook
  const frameProcessor = useMediaPipeProcessor('pose');

  // Method 2: Creating your own frame processor
  const customFrameProcessor = useFrameProcessor((frame) => {
    'worklet';
    // Process different MediaPipe models in one frame processor
    const poseResults = MediaPipeProcessors.pose(frame);
    const faceResults = MediaPipeProcessors.face(frame);
    console.log('Pose results:', poseResults);
    console.log('Face results:', faceResults);
  }, []);

  if (device == null) return null;

  // Pass either frame processor to the Camera component
  return (
    <Camera
      style={{ flex: 1 }}
      device={device}
      isActive={true}
      frameProcessor={frameProcessor}
    />
  );
}
```

## Available Processors
This module provides four MediaPipe processors:

- `segmentation` - performs image segmentation
- `pose` - detects body pose landmarks
- `face` - detects faces and facial landmarks
- `hands` - detects hand landmarks

Each processor returns structured data with detection results.
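The exact shape of each result object depends on the underlying MediaPipe task. The sketch below assumes a minimal, hypothetical pose-result shape (the `Landmark` and `PoseResult` types are illustrative, not this package's public API) and shows a pure helper you could call on a processor's output:

```typescript
// Hypothetical result shapes -- the real types come from the package's
// typings and may differ.
interface Landmark {
  x: number;          // normalized [0, 1] image coordinate
  y: number;          // normalized [0, 1] image coordinate
  visibility: number; // confidence that the landmark is visible
}

interface PoseResult {
  landmarks: Landmark[];
}

// Pure, worklet-safe helper: count landmarks above a visibility threshold.
export function countVisibleLandmarks(result: PoseResult, threshold = 0.5): number {
  return result.landmarks.filter((lm) => lm.visibility > threshold).length;
}
```

Inside a frame processor you could, for example, call `countVisibleLandmarks(MediaPipeProcessors.pose(frame))` and only act on frames where enough of the body is visible.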
## License

MIT
