# rn-face-sdk

v1.0.4, published on npm.
React Native SDK for Face Detection, Face Recognition, Face Verification, and Liveness Detection — works on Android & iOS, completely FREE, with minimal third-party dependencies.
## 🏗️ Architecture Overview

```
rn-face-sdk
├── src/
│   ├── index.js                ← Main JS API (detectFaces, recognizeFace, verifyFace, detectLiveness)
│   ├── index.d.ts              ← TypeScript types
│   └── utils/
│       ├── faceHash.js         ← Persistent geometry-based face hash (no server needed)
│       └── livenessAnalyzer.js ← Active liveness challenge logic
├── android/                    ← Native Android module (Google ML Kit — FREE, on-device)
└── ios/                        ← Native iOS module (Apple Vision — FREE, on-device)
```

### Zero-Cost Strategy

| Platform | Face Detection Library       | Cost | Internet Required? |
|----------|------------------------------|------|--------------------|
| Android  | Google ML Kit Face Detection | FREE | ❌ No |
| iOS      | Apple Vision Framework       | FREE | ❌ No |
| Both     | Geometric embedding (JS)     | FREE | ❌ No |
## 📦 Installation

### Step 1 — Install the SDK

```sh
# If published to npm:
npm install rn-face-sdk

# Or install from local path during development:
npm install ../rn-face-sdk
```

### Step 2 — Android Setup
1. Add ML Kit to your app's `android/app/build.gradle`:

```groovy
dependencies {
    // Already in the SDK's build.gradle, but verify:
    implementation 'com.google.mlkit:face-detection:16.1.5'
    implementation 'com.google.code.gson:gson:2.10.1'
}
```

2. Register the package in `MainApplication.java` (React Native < 0.73):
```java
import com.rnfacesdk.RNFaceSDKPackage;

@Override
protected List<ReactPackage> getPackages() {
  return Arrays.<ReactPackage>asList(
      new MainReactPackage(),
      new RNFaceSDKPackage() // ← add this
  );
}
```

React Native 0.73+ uses auto-linking — no manual step needed.
3. Add the camera (and storage) permissions to `android/app/src/main/AndroidManifest.xml`:

```xml
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
```

### Step 3 — iOS Setup
1. Run `pod install`:

```sh
cd ios && pod install
```

2. Add the camera and photo library usage descriptions to `ios/<YourApp>/Info.plist`:

```xml
<key>NSCameraUsageDescription</key>
<string>We need camera access to detect and verify your face.</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>We need photo library access for face processing.</string>
```

3. Enable Swift in your Xcode project:
- Open Xcode → your project → Build Settings
- Set `Swift Language Version` to `Swift 5`
- If the project has no Swift files yet, Xcode will prompt to create a bridging header — accept it.
## 🚀 API Usage

### Import

```js
import FaceSDK, {
  detectFaces,
  recognizeFace,
  registerFace,
  verifyFace,
  detectLiveness,
  LivenessSession,
  getRandomChallenge,
} from 'rn-face-sdk';
```

### 1. Face Detection
Detect all faces in an image. Returns bounding boxes, landmarks, head angles, and eye/smile probabilities.
```js
const faces = await detectFaces(base64Image, {
  landmarksEnabled: true,        // Get eye, nose, mouth positions
  classificationEnabled: true,   // Get eye open / smile probability
  performanceMode: 'ACCURATE',   // or 'FAST'
});

console.log(faces[0].faceHash);               // Persistent geometry hash
console.log(faces[0].boundingBox);            // { x, y, width, height }
console.log(faces[0].landmarks.leftEye);      // { x, y }
console.log(faces[0].headEulerAngleY);        // Yaw (head turn)
console.log(faces[0].leftEyeOpenProbability); // 0.0–1.0
```

### 2. Face Recognition
Analyse a face and get a stable unique identifier based on permanent facial geometry. The same face returns the same `faceHash` every time.
```js
const result = await recognizeFace(base64Image);

console.log(result.faceHash);   // "a3f2b1c4-d5e6f7a8-..." — same across sessions
console.log(result.confidence); // 0.0–1.0
console.log(result.embedding);  // Float array for comparison
```

### 3. Register a Face
Store a user's face embedding on-device (encrypted via SharedPreferences/UserDefaults).
```js
const result = await registerFace('user_123', base64Image);
// { userId: 'user_123', faceHash: '...', registered: true }
```

### 4. Face Verification
Check if the person in the camera matches a registered user.
```js
const result = await verifyFace('user_123', base64Image, {
  threshold: 0.75, // Similarity threshold (0–1). Default: 0.75
});

console.log(result.isMatch);    // true / false
console.log(result.similarity); // 0.0–1.0
```
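Under the hood, verification reduces to comparing two embedding vectors against the threshold. As a reference, here is a minimal cosine-similarity check in plain JS — a sketch of the general technique, not necessarily the SDK's internal metric:

```js
// Cosine similarity between two embedding vectors (no dependencies).
// Illustrative only: the SDK's actual comparison is an implementation detail.
function cosineSimilarity(a, b) {
  if (a.length !== b.length) throw new Error('Embedding length mismatch');
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function isMatch(similarity, threshold = 0.75) {
  return similarity >= threshold;
}
```

A `threshold` of 0.75 then simply means: accept when the similarity score is at least 0.75.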
### 5. Liveness Detection

Confirm the face is a real live person (not a photo, video, or mask).
```js
// Passive + Active combined (recommended)
const result = await detectLiveness(base64Image, {
  mode: 'BOTH', // 'PASSIVE' | 'ACTIVE' | 'BOTH'
});

console.log(result.isLive);     // true / false
console.log(result.confidence); // 0.0–1.0
console.log(result.challenge);  // Which challenge was used
console.log(result.failReason); // Why it failed (if it did)
```

### 6. Active Liveness Session (Multi-Frame)
For real-time camera-based liveness, use `LivenessSession` across multiple frames:
```js
import { LivenessSession, getRandomChallenge } from 'rn-face-sdk';

const challenge = getRandomChallenge(); // 'BLINK' | 'SMILE' | 'TURN_LEFT' | ...
const session = new LivenessSession(challenge, 8000); // 8s timeout

// Show the user what to do:
// "Please blink" / "Please smile" / "Please turn your head left"

// In your camera frame loop (e.g., every 200ms):
const result = session.processFrame(detectedFace);

if (result.completed) {
  console.log('Liveness verified!', result.confidence);
} else if (result.failed) {
  console.log('Failed:', result.message);
} else {
  console.log('Waiting:', result.message);
  // Display session.getTimeRemaining() to the user
}
```
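To wire `processFrame` into a periodic camera loop, one convenient pattern is a small promise-based driver with injected dependencies. This helper is hypothetical (not part of the SDK); it only assumes the `processFrame` / `completed` / `failed` result shape shown above:

```js
// Drive a liveness session from a periodic frame source.
// `session` needs a processFrame(face) method; `nextFace` returns the latest
// detected face (e.g. from detectFaces on a camera frame). Both are injected,
// so the loop itself has no native dependencies.
function runLivenessLoop(session, nextFace, { intervalMs = 200, onUpdate } = {}) {
  return new Promise((resolve) => {
    const timer = setInterval(() => {
      const result = session.processFrame(nextFace());
      if (onUpdate) onUpdate(result); // e.g. update the on-screen prompt
      if (result.completed || result.failed) {
        clearInterval(timer);
        resolve(result);
      }
    }, intervalMs);
  });
}
```

Usage would then be `const result = await runLivenessLoop(session, getLatestFace);`, with `onUpdate` driving the prompt and countdown UI.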
### 7. Real-Time Events

Subscribe to native camera events (when the SDK is integrated with the camera view):
```js
const sub = addFaceEventListener('onFaceDetected', (face) => {
  console.log('Face appeared:', face.faceHash);
});

// Cleanup
sub.remove();
```

## 🔑 How the Persistent Face Hash Works
The `faceHash` is computed purely from facial geometry ratios — no server, no ML model required:
- Inter-ocular distance (IOD) — distance between eyes (scale baseline)
- Eye-to-nose ratio — vertical distance from eye midpoint to nose / IOD
- Mouth width ratio — mouth width / IOD
- Nose-to-mouth ratio — distance from nose to mouth midpoint / IOD
- Face aspect ratio — bounding box width / height
- Relative eye positions — eyes normalised within bounding box
All values are ratios (scale-invariant). The same face at different distances, lighting conditions, or expressions produces the same hash (within natural variation).
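As an illustration of how such scale-invariant ratios can be turned into a stable signature, here is a hypothetical sketch — the SDK's actual `faceHash.js` may choose different ratios, quantisation, and encoding:

```js
// Sketch: scale-invariant geometric signature from facial landmarks.
// Hypothetical helper for illustration, not the SDK's implementation.
function geometricSignature(landmarks, boundingBox) {
  const dist = (p, q) => Math.hypot(p.x - q.x, p.y - q.y);
  const mid = (p, q) => ({ x: (p.x + q.x) / 2, y: (p.y + q.y) / 2 });

  const iod = dist(landmarks.leftEye, landmarks.rightEye); // scale baseline
  const eyeMid = mid(landmarks.leftEye, landmarks.rightEye);
  const mouthMid = mid(landmarks.mouthLeft, landmarks.mouthRight);

  const ratios = [
    dist(eyeMid, landmarks.noseBase) / iod,                // eye-to-nose ratio
    dist(landmarks.mouthLeft, landmarks.mouthRight) / iod, // mouth width ratio
    dist(landmarks.noseBase, mouthMid) / iod,              // nose-to-mouth ratio
    boundingBox.width / boundingBox.height,                // face aspect ratio
  ];

  // Quantise so small measurement noise maps to the same bucket.
  return ratios.map((r) => r.toFixed(2)).join('-');
}
```

Quantising before encoding is what absorbs small frame-to-frame measurement noise; the coarser the rounding, the more tolerant (and the less discriminative) the resulting hash.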
## 🛡️ Liveness Detection Strategy

**Passive** (no user action required):

- Head pose naturalness check (perfectly flat = possible photo)
- Face size sufficiency check
- Landmark presence and symmetry analysis
- Eye openness naturalness (frozen fully-open = possible photo)

**Active** (user must perform an action):
| Challenge | What it checks |
|-----------|---------------|
| BLINK | Drop-then-rise in eye open probability across frames |
| SMILE | smilingProbability rises above 0.75 |
| TURN_LEFT | headEulerAngleY drops below -18° |
| TURN_RIGHT | headEulerAngleY rises above 18° |
| NOD | headEulerAngleX exceeds ±12° |
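The BLINK row above — a drop-then-rise in eye-open probability across frames — can be sketched as a tiny state machine (hypothetical and simplified; the SDK's `livenessAnalyzer.js` also tracks timeouts and other state):

```js
// Minimal drop-then-rise detector over per-frame eye-open probabilities.
const OPEN = 0.8;   // eye considered open above this probability
const CLOSED = 0.3; // eye considered closed below this probability

function detectBlink(eyeOpenProbabilities) {
  let sawOpen = false;   // eyes were open at some point...
  let sawClosed = false; // ...then closed...
  for (const p of eyeOpenProbabilities) {
    if (!sawOpen && p > OPEN) sawOpen = true;
    else if (sawOpen && !sawClosed && p < CLOSED) sawClosed = true;
    else if (sawOpen && sawClosed && p > OPEN) return true; // ...then reopened
  }
  return false;
}
```

A static photo keeps the probability pinned high (or low) on every frame, so it never completes the open → closed → open transition.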
## 📁 File Storage

Registered face embeddings are stored entirely on-device:

- Android: `SharedPreferences` (app-private, not accessible to other apps)
- iOS: `UserDefaults` (app-sandboxed)
No data leaves the device. No server required.
## 🔧 Advanced: Swap to Neural Embeddings (Optional, Still Free)

For higher-accuracy verification, you can upgrade the embedding step to use TFLite MobileFaceNet (free, on-device):

- Add `org.tensorflow:tensorflow-lite:2.13.0` to your Android Gradle dependencies
- Download a MobileFaceNet `.tflite` model (Apache 2.0 license, free)
- Replace `computeGeometricEmbedding()` in `RNFaceSDKModule.java` with TFLite inference
- The JS API stays identical — no changes needed in your app
## 📋 Requirements

|              | Minimum              |
|--------------|----------------------|
| React Native | 0.70+                |
| Android      | API 21 (Android 5.0) |
| iOS          | 13.0                 |
| Node.js      | 16+                  |
## 📄 License
MIT — free to use in personal and commercial projects.
