# expo-face-detection

v1.0.20

Expo native module for face detection, liveness detection, and face recognition on Android. Uses MTCNN for face detection and MobileFaceNet for face embeddings.
## Features

- **Face Detection** - Detect multiple faces with bounding boxes and landmarks using MTCNN
- **Liveness Detection** - Anti-spoofing to detect fake/printed faces
- **Face Registration** - Extract 192-dimensional face embeddings for storage
- **Face Matching** - Compare faces against registered embeddings
- **Native Camera View** - Real-time face processing without JS bridge overhead
## Requirements
- Expo SDK 54+
- Android only (iOS not supported)
- Managed workflow with custom dev client
## Installation

```sh
npm install expo-face-detection
```

Add to your app.json:

```json
{
  "expo": {
    "plugins": ["expo-face-detection"]
  }
}
```

Build your custom dev client:

```sh
npx expo prebuild
npx expo run:android
```

## API Reference
### Face Detection

#### detectFaces(imageBase64, cropFaces?)

Detect all faces in an image.

```ts
import * as FaceDetection from 'expo-face-detection';

const result = await FaceDetection.detectFaces(imageBase64, false);
// result: {
//   faces: [{ box, landmarks, confidence }],
//   faceCount: number,
//   hasFaces: boolean,
//   processingTimeMs: number,
//   frameWidth: number,
//   frameHeight: number
// }
```

#### detectLargestFace(imageBase64)

Detect only the largest face in an image.

```ts
const face = await FaceDetection.detectLargestFace(imageBase64);
// face: { box, landmarks, confidence } | null
```

### Liveness Detection
#### checkLiveness(imageBase64)

Check if the detected face is from a live person (anti-spoofing).

```ts
const result = await FaceDetection.checkLiveness(imageBase64);
// result: {
//   faceDetected: boolean,
//   isLive: boolean,
//   livenessScore: number,   // lower is more likely live
//   sharpness: number,       // image sharpness score
//   isSharp: boolean,
//   faceBox: { left, top, right, bottom } | null,
//   confidence: number,
//   processingTimeMs: number,
//   errorMessage: string | null
// }
```

### Face Registration
**Important:** This module does NOT store embeddings. Your app is responsible for storing them (e.g., on your server or in a database).

#### extractEmbedding(imageBase64)

Extract a 192-dimensional face embedding from a single image.

```ts
const result = await FaceDetection.extractEmbedding(imageBase64);
if (result.success) {
  // Store the embedding on your server
  await api.saveUserEmbedding(userId, result.embedding);
}
// result: {
//   success: boolean,
//   embedding: number[] | null,  // 192-dimensional array
//   faceBox: { left, top, right, bottom } | null,
//   processingTimeMs: number,
//   errorMessage: string | null
// }
```

#### registerFace(frontBase64, leftBase64, rightBase64)

Register a face using 3 photos for better accuracy. Returns an averaged embedding.

```ts
const result = await FaceDetection.registerFace(
  frontPhotoBase64,
  leftPhotoBase64,
  rightPhotoBase64
);
if (result.success) {
  // Store the averaged embedding
  await api.registerUser(userId, result.embedding);
}
```

### Face Matching
#### setTargetEmbedding(embedding)

Set the target embedding for face matching.

```ts
// Fetch the embedding from your server
const userEmbedding = await api.getUserEmbedding(userId);
FaceDetection.setTargetEmbedding(userEmbedding);
```

#### hasTarget()

Check if a target embedding is set.

```ts
const hasTarget = FaceDetection.hasTarget(); // boolean
```

#### clearTarget()

Clear the current target embedding.

```ts
FaceDetection.clearTarget();
```

#### processFrame(imageBase64)

Match a face against the target embedding.

```ts
const result = await FaceDetection.processFrame(imageBase64);
// result: {
//   faceDetected: boolean,
//   isMatch: boolean,
//   confidence: number,  // 0-1, higher is better
//   distance: number,    // L2 distance, lower is better
//   faceBox: { left, top, right, bottom } | null,
//   processingTimeMs: number,
//   errorMessage: string | null
// }
```

### Threshold Configuration
```ts
// Face detection
FaceDetection.setMinFaceRatio(0.2);                  // 0.05-0.5, default: 0.2
FaceDetection.setDetectionConfidenceThreshold(0.6);  // 0-1, default: 0.6

// Liveness detection
FaceDetection.setLivenessThreshold(0.2);  // default: 0.2
FaceDetection.setSharpnessThreshold(50);  // default: 50

// Face matching
FaceDetection.setMatchThreshold(1.1);     // L2 distance, default: 1.1
```

## Native Camera View

For real-time face processing, use the native camera view. Frames are processed entirely in native code without crossing the JS bridge.

The camera view supports two modes:

- `matching` (default) - Live face verification against a target embedding
- `enrollment` - Capture 3 photos (front, left, right) to create a face embedding
### Matching Mode

```tsx
import { FaceDetectionCameraView } from 'expo-face-detection';

<FaceDetectionCameraView
  style={{ flex: 1 }}
  mode="matching"
  enableMatching={true}
  enableLiveness={false}
  targetEmbedding={userEmbedding}
  matchThreshold={1.1}
  cameraFacing="front"
  onMatchResult={({ nativeEvent }) => {
    if (nativeEvent.isMatch) {
      console.log(`Match! Confidence: ${nativeEvent.confidence}`);
    }
  }}
  onFaceDetected={({ nativeEvent }) => {
    console.log('Face detected:', nativeEvent.faceBox);
  }}
  onError={({ nativeEvent }) => {
    console.error('Error:', nativeEvent.error);
  }}
/>
```

### Enrollment Mode
Native camera enrollment uses the same Camera2 pipeline for both enrollment and live matching, ensuring consistent embeddings. This is recommended over using expo-image-picker for enrollment.
```tsx
import React, { useState } from 'react';
import { View, Button, Text } from 'react-native';
import { FaceDetectionCameraView } from 'expo-face-detection';

function EnrollmentScreen({ onComplete }) {
  const [capturePhoto, setCapturePhoto] = useState(false);
  const [instruction, setInstruction] = useState('');
  const [photosRemaining, setPhotosRemaining] = useState(3);

  const handleEnrollmentStatus = ({ nativeEvent }) => {
    // Called continuously with the current status
    setInstruction(nativeEvent.instruction);
    setPhotosRemaining(nativeEvent.photosRemaining);
    // nativeEvent: {
    //   currentPhotoIndex: 0,    // 0, 1, 2
    //   photoLabel: "front",     // "front", "left", "right"
    //   instruction: "Look straight at camera",
    //   photosRemaining: 3,      // 3, 2, 1, 0
    //   readyToCapture: true,    // true if face detected
    //   faceDetected: true,
    //   isLive: true,
    //   livenessScore: 0.1,
    //   faceBox: { left, top, right, bottom }
    // }
  };

  const handleEnrollmentCapture = ({ nativeEvent }) => {
    // Called when a photo is captured
    console.log(`Captured ${nativeEvent.photoLabel} (${nativeEvent.photoIndex + 1}/3)`);
    setCapturePhoto(false); // Reset the capture trigger
    // nativeEvent: {
    //   photoIndex: 0,         // 0, 1, 2
    //   photoLabel: "front",   // "front", "left", "right"
    //   totalPhotos: 3,
    //   success: true,
    //   faceDetected: true,
    //   isLive: true,
    //   livenessScore: 0.1,
    //   faceBox: { left, top, right, bottom }
    // }
  };

  const handleEnrollmentComplete = ({ nativeEvent }) => {
    // Called after all 3 photos are captured
    if (nativeEvent.success) {
      // Save the embedding to your server
      onComplete(nativeEvent.embedding);
    } else {
      console.error('Enrollment failed:', nativeEvent.errorMessage);
    }
    // nativeEvent: {
    //   success: true,
    //   embedding: number[],   // 192-dimensional averaged embedding
    //   photoCount: 3,
    //   processingTimeMs: 250,
    //   errorMessage: null
    // }
  };

  return (
    <View style={{ flex: 1 }}>
      <FaceDetectionCameraView
        style={{ flex: 1 }}
        mode="enrollment"
        capturePhoto={capturePhoto}
        cameraFacing="front"
        onEnrollmentStatus={handleEnrollmentStatus}
        onEnrollmentCapture={handleEnrollmentCapture}
        onEnrollmentComplete={handleEnrollmentComplete}
        onError={({ nativeEvent }) => {
          console.error('Error:', nativeEvent.error);
        }}
      />
      <View style={{ padding: 20 }}>
        <Text>{instruction}</Text>
        <Text>Photos remaining: {photosRemaining}</Text>
        <Button
          title="Capture"
          onPress={() => setCapturePhoto(true)}
        />
      </View>
    </View>
  );
}
```

### Resetting Enrollment
To restart the enrollment process (e.g., if the user wants to re-capture photos):

```tsx
const [resetEnrollment, setResetEnrollment] = useState(false);

// Trigger a reset, then remember to set the flag back to false
setResetEnrollment(true);
setTimeout(() => setResetEnrollment(false), 100);

<FaceDetectionCameraView
  mode="enrollment"
  resetEnrollment={resetEnrollment}
  // ... other props
/>
```

### Props
| Prop | Type | Default | Description |
|------|------|---------|-------------|
| mode | 'matching' \| 'enrollment' | 'matching' | Camera mode |
| enableMatching | boolean | false | Enable face matching (matching mode) |
| enableLiveness | boolean | false | Enable liveness detection |
| targetEmbedding | number[] | - | 192-d embedding for matching |
| matchThreshold | number | 1.1 | L2 distance threshold |
| cameraFacing | 'front' \| 'back' | 'front' | Camera to use |
| capturePhoto | boolean | false | Trigger photo capture (enrollment mode) |
| resetEnrollment | boolean | false | Reset enrollment to start over |
| onMatchResult | function | - | Called with match results (matching mode) |
| onFaceDetected | function | - | Called when face detected |
| onEnrollmentStatus | function | - | Called with enrollment status updates |
| onEnrollmentCapture | function | - | Called when enrollment photo captured |
| onEnrollmentComplete | function | - | Called when all 3 photos captured |
| onError | function | - | Called on errors |
### Enrollment Events
**onEnrollmentStatus** - Called continuously while in enrollment mode

```ts
interface EnrollmentStatusEvent {
  currentPhotoIndex: number;  // 0, 1, 2
  photoLabel: string;         // "front", "left", "right"
  instruction: string;        // User instruction text
  photosRemaining: number;    // 3, 2, 1, 0
  readyToCapture: boolean;    // true if conditions met
  faceDetected: boolean;
  isLive?: boolean;
  livenessScore?: number;
  faceBox?: { left, top, right, bottom } | null;
}
```

**onEnrollmentCapture** - Called after each photo capture

```ts
interface EnrollmentCaptureEvent {
  photoIndex: number;    // 0, 1, 2
  photoLabel: string;    // "front", "left", "right"
  totalPhotos: number;   // 3
  success: boolean;
  faceDetected: boolean;
  isLive?: boolean;
  livenessScore?: number;
  faceBox?: { left, top, right, bottom } | null;
  errorMessage?: string;
}
```

**onEnrollmentComplete** - Called when all 3 photos are captured

```ts
interface EnrollmentCompleteEvent {
  success: boolean;
  embedding?: number[];  // 192-d averaged & normalized embedding
  photoCount: number;    // Number of photos used
  processingTimeMs: number;
  errorMessage?: string;
}
```

### Why Use Native Camera Enrollment?
Using native camera enrollment instead of expo-image-picker provides:
- Same camera pipeline - Both enrollment and matching use the identical Camera2 API, ensuring consistent image processing
- Better embedding consistency - No differences in color correction, compression, or preprocessing between enrollment and verification
- Guided capture - Real-time feedback shows user instructions and face detection status
- Liveness during enrollment - Optional anti-spoofing checks during photo capture
- Higher match accuracy - Embeddings extracted from the same pipeline produce more reliable matches
## Complete Example

### Full Registration and Verification Flow
```tsx
import React, { useState } from 'react';
import { View, Button, Text, Alert, StyleSheet } from 'react-native';
import { FaceDetectionCameraView } from 'expo-face-detection';

type Screen = 'home' | 'enroll' | 'verify';

export default function App() {
  const [screen, setScreen] = useState<Screen>('home');
  const [savedEmbedding, setSavedEmbedding] = useState<number[] | null>(null);

  // Enrollment state
  const [capturePhoto, setCapturePhoto] = useState(false);
  const [instruction, setInstruction] = useState('');
  const [photosRemaining, setPhotosRemaining] = useState(3);

  // Verification state
  const [isVerifying, setIsVerifying] = useState(false);

  // ===== ENROLLMENT HANDLERS =====
  const handleEnrollmentStatus = ({ nativeEvent }) => {
    setInstruction(nativeEvent.instruction);
    setPhotosRemaining(nativeEvent.photosRemaining);
  };

  const handleEnrollmentCapture = ({ nativeEvent }) => {
    setCapturePhoto(false);
    Alert.alert('Photo Captured', `${nativeEvent.photoLabel} (${nativeEvent.photoIndex + 1}/3)`);
  };

  const handleEnrollmentComplete = ({ nativeEvent }) => {
    if (nativeEvent.success) {
      // In a real app, save this to your server
      setSavedEmbedding(nativeEvent.embedding);
      Alert.alert('Enrollment Complete', 'Face registered successfully!');
      setScreen('home');
    } else {
      Alert.alert('Error', nativeEvent.errorMessage);
    }
  };

  // ===== VERIFICATION HANDLER =====
  const handleMatchResult = ({ nativeEvent }) => {
    if (nativeEvent.isMatch && nativeEvent.confidence > 0.7) {
      setIsVerifying(false);
      Alert.alert('Verified!', `Confidence: ${(nativeEvent.confidence * 100).toFixed(1)}%`);
    }
  };

  // ===== SCREENS =====
  if (screen === 'enroll') {
    return (
      <View style={styles.container}>
        <FaceDetectionCameraView
          style={styles.camera}
          mode="enrollment"
          capturePhoto={capturePhoto}
          cameraFacing="front"
          onEnrollmentStatus={handleEnrollmentStatus}
          onEnrollmentCapture={handleEnrollmentCapture}
          onEnrollmentComplete={handleEnrollmentComplete}
          onError={({ nativeEvent }) => Alert.alert('Error', nativeEvent.error)}
        />
        <View style={styles.controls}>
          <Text style={styles.instruction}>{instruction}</Text>
          <Text>Photos remaining: {photosRemaining}</Text>
          <Button title="Capture Photo" onPress={() => setCapturePhoto(true)} />
          <Button title="Cancel" onPress={() => setScreen('home')} />
        </View>
      </View>
    );
  }

  if (screen === 'verify') {
    return (
      <View style={styles.container}>
        <FaceDetectionCameraView
          style={styles.camera}
          mode="matching"
          enableMatching={isVerifying}
          targetEmbedding={savedEmbedding!}
          matchThreshold={1.1}
          cameraFacing="front"
          onMatchResult={handleMatchResult}
          onError={({ nativeEvent }) => Alert.alert('Error', nativeEvent.error)}
        />
        <View style={styles.controls}>
          <Button
            title={isVerifying ? 'Stop Verification' : 'Start Verification'}
            onPress={() => setIsVerifying(!isVerifying)}
          />
          <Button title="Back" onPress={() => { setIsVerifying(false); setScreen('home'); }} />
        </View>
      </View>
    );
  }

  // HOME SCREEN
  return (
    <View style={styles.homeContainer}>
      <Text style={styles.title}>Face Recognition Demo</Text>
      <Button title="Enroll Face" onPress={() => setScreen('enroll')} />
      <Button
        title="Verify Face"
        onPress={() => setScreen('verify')}
        disabled={!savedEmbedding}
      />
      {!savedEmbedding && <Text>Please enroll first</Text>}
    </View>
  );
}

const styles = StyleSheet.create({
  container: { flex: 1 },
  camera: { flex: 1 },
  controls: { padding: 20, gap: 10 },
  homeContainer: { flex: 1, justifyContent: 'center', alignItems: 'center', gap: 20 },
  title: { fontSize: 24, fontWeight: 'bold' },
  instruction: { fontSize: 16, fontWeight: '500' },
});
```

## Technical Details
### Models
| Model | File | Input Size | Purpose |
|-------|------|------------|---------|
| P-Net | pnet.tflite | 12x12 | First stage face detection |
| R-Net | rnet.tflite | 24x24 | Second stage refinement |
| O-Net | onet.tflite | 48x48 | Final stage + landmarks |
| MobileFaceNet | MobileFaceNet.tflite | 112x112 | 192-d face embedding |
| FaceAntiSpoofing | FaceAntiSpoofing.tflite | 256x256 | Liveness detection |
### MTCNN Pipeline

1. P-Net (Proposal Network): Generates candidate face regions at multiple scales
2. R-Net (Refine Network): Filters candidates and refines bounding boxes
3. O-Net (Output Network): Final refinement + 5-point facial landmarks
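The multi-scale proposal stage can be sketched in a few lines. The constants below (minimum face size 40 px, pyramid factor 0.709) are the common MTCNN defaults and are assumed here for illustration; this module's internal values may differ.

```typescript
// Illustrative MTCNN image-pyramid scales: P-Net slides its 12x12 window
// over the image resized by each of these factors.
function pyramidScales(
  width: number,
  height: number,
  minFaceSize = 40,   // assumed default
  factor = 0.709      // assumed default
): number[] {
  const scales: number[] = [];
  // Rescale so the smallest detectable face maps onto P-Net's 12x12 input
  let scale = 12 / minFaceSize;
  let minSide = Math.min(width, height) * scale;
  while (minSide >= 12) {
    scales.push(scale);
    scale *= factor;   // shrink by the pyramid factor at each level
    minSide *= factor;
  }
  return scales;
}

const scales = pyramidScales(640, 480);
console.log(scales.length, scales[0]); // number of levels, first scale = 12/40 = 0.3
```

Smaller `minFaceSize` values add pyramid levels (more detections of small faces, more compute), which is the native analogue of `setMinFaceRatio` above.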
### Face Embedding
- Model: MobileFaceNet
- Output: 192-dimensional L2-normalized vector
- Comparison: L2 (Euclidean) distance
- Threshold: ~1.1 for same person (lower = stricter)
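The comparison described above is easy to reproduce client-side. The sketch below is an illustration of the decision rule (Euclidean distance between L2-normalized vectors against the 1.1 default), not the module's actual native code:

```typescript
// L2-normalize an embedding (MobileFaceNet output is already normalized;
// shown here for completeness).
function l2Normalize(v: number[]): number[] {
  const norm = Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return v.map((x) => x / norm);
}

// Euclidean (L2) distance between two embeddings.
function l2Distance(a: number[], b: number[]): number {
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    const d = a[i] - b[i];
    sum += d * d;
  }
  return Math.sqrt(sum);
}

// The decision rule: lower distance = closer match.
function isSamePerson(a: number[], b: number[], threshold = 1.1): boolean {
  return l2Distance(a, b) < threshold;
}
```

For L2-normalized vectors the distance ranges from 0 (identical) to 2 (opposite), which is why the 1.1 default sits near the middle of that range.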
### Liveness Detection
- Model: FaceAntiSpoofing (tree-based classifier)
- Sharpness: Laplacian variance filter
- Score: Lower values indicate live face
- Threshold: ~0.2 (values below = live)
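The sharpness gate is a standard blur test: apply a Laplacian filter to the grayscale image and measure the variance of the response (blurry images have weak edges, hence low variance). A minimal sketch follows; the module's exact kernel and scaling are internal, so treat this as illustrative only:

```typescript
// Variance of the 4-neighbour Laplacian response over a grayscale image
// (row-major pixel values, e.g. 0-255). Higher variance = sharper image.
function laplacianVariance(gray: number[], width: number, height: number): number {
  const responses: number[] = [];
  for (let y = 1; y < height - 1; y++) {
    for (let x = 1; x < width - 1; x++) {
      const i = y * width + x;
      // 4-neighbour Laplacian kernel: [[0,1,0],[1,-4,1],[0,1,0]]
      const lap = gray[i - width] + gray[i + width] + gray[i - 1] + gray[i + 1] - 4 * gray[i];
      responses.push(lap);
    }
  }
  const mean = responses.reduce((s, v) => s + v, 0) / responses.length;
  return responses.reduce((s, v) => s + (v - mean) ** 2, 0) / responses.length;
}
```

A perfectly flat frame scores 0, while a frame with fine detail scores high; `setSharpnessThreshold` moves the cut-off between the two.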
### Performance
- Face detection: ~50-100ms per frame
- Embedding extraction: ~30-50ms
- Liveness check: ~40-60ms
- Matching: ~5-10ms
Performance varies based on device, image size, and number of faces.
## Data Flow

### Native Camera Enrollment (Recommended)
```
Registration (native camera):

  FaceDetectionCameraView (mode="enrollment")
  3 photos captured natively: front / left / right
        │
        ▼
  onEnrollmentComplete ──► embedding (192-d, averaged & normalized)
        │
        ▼
  Your server / database

Verification (native camera):

  Your server / database ──► targetEmbedding prop
        │
        ▼
  FaceDetectionCameraView (mode="matching", enableMatching={true})
        │
        ▼
  onMatchResult ──► { isMatch: true, confidence: 0.9 }

✓ Same Camera2 pipeline for enrollment & matching
✓ Consistent image processing = better accuracy
```

### Alternative: Image-Based Registration
```
Registration (using expo-image-picker or similar):

  3 photos ──► registerFace() ──► embedding (192-d)
                                        │
                                        ▼
                                  Your server / database

Verification:

  Fetch embedding ──► setTargetEmbedding()
  Camera frames  ──► processFrame() or CameraView
        │
        ▼
  { isMatch: true, confidence: 0.9 }

⚠ Different camera pipelines may affect match accuracy
```

## Permissions
The config plugin automatically adds:

```xml
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" android:required="false" />
<uses-feature android:name="android.hardware.camera.autofocus" android:required="false" />
```

You still need to request runtime camera permission in your app:

```ts
import { Camera } from 'expo-camera';

const { status } = await Camera.requestCameraPermissionsAsync();
```

## Troubleshooting
### "No face detected"

- Ensure good lighting
- Face should be at least 20% of the image width (adjustable via `setMinFaceRatio`)
- Face should be clearly visible and not occluded

### "Image too blurry"

- Hold the camera steady
- Ensure adequate lighting
- Adjust `setSharpnessThreshold` if needed

### Match threshold tuning

- Stricter (fewer false positives): lower the threshold (e.g., 0.9)
- Looser (fewer false negatives): raise the threshold (e.g., 1.3)
- The default of 1.1 is a good balance

### Performance issues

- Use lower-resolution images for detection
- Process frames at intervals rather than every frame
- Use the native camera view for real-time processing
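For the image-based API, "process frames at intervals" can be as simple as a timestamp-and-busy guard. A minimal throttle sketch (the commented `FaceDetection.processFrame` call is the module API from above; the helper itself is illustrative):

```typescript
// Run an async job at most once per `intervalMs`, skipping calls that
// arrive while a job is in flight. Returns true if the job ran.
function makeFrameThrottle(intervalMs: number) {
  let lastRun = 0;
  let busy = false;
  return async (job: () => Promise<void>): Promise<boolean> => {
    const now = Date.now();
    if (busy || now - lastRun < intervalMs) return false; // skipped
    busy = true;
    lastRun = now;
    try {
      await job();
    } finally {
      busy = false;
    }
    return true; // job ran
  };
}

// Usage sketch: at most ~3 detections per second
// const throttle = makeFrameThrottle(300);
// const onFrame = (base64: string) =>
//   throttle(() => FaceDetection.processFrame(base64).then(() => {}));
```

The busy flag also prevents overlapping calls when a single frame takes longer than the interval to process.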
## License
MIT
