@moveris/react
v2.9.0
React SDK for Moveris Live API
@moveris/react
React SDK for Moveris Liveness Detection API. Provides ready-to-use components and hooks for adding liveness verification to your web application.
Installation
pnpm add @moveris/react
# or
npm install @moveris/react
# or
yarn add @moveris/react
Quick Start
Option 1: All-in-One Component (Recommended)
The simplest way to add liveness detection:
import { MoverisProvider, LivenessView } from '@moveris/react';
function App() {
return (
<MoverisProvider apiKey="mv_your_api_key">
<LivenessView
model="mixed-10-v2"
endpoint="fast-check-crops"
enableDetection
showCapturedFrames
sessionId="optional-fixed-id" // auto-generated if omitted
onResult={(result) => {
console.log('Verdict:', result.verdict); // 'live' or 'fake'
console.log('Score:', result.score); // 0-100
}}
onError={(error) => console.error(error)}
showResult
/>
</MoverisProvider>
);
}
Option 2: Custom Implementation with Hooks
For full control over the UI:
import { useRef } from 'react';
import { MoverisProvider, useLiveness, useCamera, useFrameCapture } from '@moveris/react';
function LivenessCheck() {
const videoRef = useRef<HTMLVideoElement>(null);
const { start, stop, state, result, progress, captureFrame } = useLiveness({
model: 'mixed-10-v2',
endpoint: 'fast-check-crops',
sessionId: 'optional-fixed-id', // auto-generated if omitted
onResult: (result) => console.log(result),
});
const { isActive, startCamera, stopCamera } = useCamera();
useFrameCapture({
videoRef,
onFrame: (frame) => captureFrame(frame.pixels),
enabled: state === 'capturing',
});
return (
<div>
<video ref={videoRef} autoPlay playsInline muted />
<p>State: {state}</p>
<p>
Progress: {progress.current}/{progress.total}
</p>
<button onClick={start}>Start Verification</button>
<button onClick={stop}>Cancel</button>
</div>
);
}
Provider Setup
Wrap your application with MoverisProvider to configure the SDK:
import { MoverisProvider } from '@moveris/react';
function App() {
return (
<MoverisProvider
apiKey="mv_your_api_key"
model="mixed-10-v2"
baseUrl="https://api.moveris.com"
debug={false}
>
<YourApp />
</MoverisProvider>
);
}
MoverisProvider Props
| Prop | Type | Default | Description |
| ---------- | ---------------- | --------------------------- | ---------------------------------- |
| apiKey | string | required | Your Moveris API key |
| model | FastCheckModel | 'mixed-10-v2' | Default model (see Models section) |
| baseUrl | string | 'https://api.moveris.com' | API base URL |
| debug | boolean | false | Enable console logging |
| children | ReactNode | required | Child components |
Components
LivenessView
All-in-one component that combines camera, overlay, controls, and result display.
<LivenessView
model="mixed-10-v2"
endpoint="fast-check-crops"
enableDetection
showCapturedFrames
showOverlay={true}
showControls={true}
showResult={true}
autoStartCamera={true}
onResult={(result) => console.log(result)}
onError={(error) => console.error(error)}
onReset={() => console.log('Reset')}
/>
Props
| Prop | Type | Default | Description |
| ------------------------ | ------------------------------------- | -------------------- | ------------------------------------------------------------------------------------------------------------------------------- |
| model | FastCheckModel | 'mixed-10-v2' | Model (see Models section) |
| source | FrameSource | 'live' | Frame source: 'live' or 'media' |
| frameCount | number | based on model | Number of frames to capture |
| endpoint | LivenessEndpoint | 'fast-check-crops' | API endpoint (see Endpoints section) |
| sequentialStreaming | boolean | false | Send frames one-by-one (only for fast-check-stream) |
| sessionId | string | auto-generated | Fixed session ID for all API requests (useful for debugging/testing) |
| onResult | (result) => void | - | Called on successful verification |
| onError | (error) => void | - | Called on error |
| onReset | () => void | - | Called when reset is triggered |
| onStateChange | (state) => void | - | Called when state changes |
| onFrameCaptured | (frame: CapturedFrame) => void | - | Called each time a frame is captured |
| onDetectionUpdate | (summary: DetectionSummary) => void | - | Detection pipeline results (requires enableDetection) |
| showOverlay | boolean | true | Show face guide overlay |
| showControls | boolean | true | Show start/stop buttons |
| showResult | boolean | true | Show result after verification |
| showCapturedFrames | boolean | false | Show collapsible panel with captured frame thumbnails |
| autoStartCamera | boolean | true | Start camera automatically |
| enableDetection | boolean | false | Enable detection pipeline (Face+Gaze+Hand+Landmarks) |
| backgroundSegmentation | boolean \| { color?: string } | false | Replace background in captured crops with solid colour (default: #767676). Shows a bg-seg badge in the overlay when active. |
| feedback | string \| null | auto | Override feedback message |
| ovalState | OvalGuideState | auto | Override oval guide state |
| canCapture | boolean | auto | Override capture control |
| className | string | - | Container CSS class |
| style | CSSProperties | - | Container inline styles |
| cameraClassName | string | - | Camera CSS class |
| cameraStyle | CSSProperties | - | Camera inline styles |
| controlsClassName | string | - | Controls CSS class |
| controlsStyle | CSSProperties | - | Controls inline styles |
| renderControls | (props) => ReactNode | - | Custom controls renderer |
| renderResult | (result) => ReactNode | - | Custom result renderer |
| statusMessages | Record<string, string> | - | Custom status messages |
Endpoint Options
| Value | Description |
| --------------------- | --------------------------------------------------------------------------------------------- |
| 'fast-check-crops' | Batch upload of pre-cropped 224×224 face images (client-side crop). Default. Recommended. |
| 'fast-check-stream' | Stream frames individually (parallel or sequential) |
| 'fast-check' | Batch upload — send all frames in a single request (server-side detection) |
| 'hybrid-50' | 50-frame hybrid model with physiological features (93.8% accuracy) |
| 'hybrid-150' | 150-frame hybrid model, highest security (96.2% accuracy) |
| 'verify' | Spatial feature-based detection (50+ frames) |
Streaming Modes
When using fast-check-stream, frames can be sent in two modes:
// Parallel (default) — accumulates all frames, then sends them all at once
<LivenessView endpoint="fast-check-stream" />
// Sequential — sends each frame immediately, waits for response before next
<LivenessView endpoint="fast-check-stream" sequentialStreaming />
Frame Capture Mode (automatic)
The component automatically selects the right capture mode based on endpoint:
- fast-check / fast-check-stream: Full video frames (640x480 JPEG) — server does face detection
- fast-check-crops: Face-cropped frames (224x224 PNG) — client-side face detection + crop
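The selection rule above is simple enough to sketch as a pure function; selectCaptureMode is an illustrative name, not an SDK export:

```typescript
// Hypothetical sketch of the endpoint → capture-mode rule described above.
// The real LivenessView applies this internally; names here are illustrative.
type LivenessEndpoint =
  | 'fast-check'
  | 'fast-check-stream'
  | 'fast-check-crops'
  | 'hybrid-50'
  | 'hybrid-150'
  | 'verify';

type CaptureMode = 'crop' | 'full';

function selectCaptureMode(endpoint: LivenessEndpoint): CaptureMode {
  // Only fast-check-crops expects client-side 224x224 face crops;
  // every other endpoint receives full frames and detects faces server-side.
  return endpoint === 'fast-check-crops' ? 'crop' : 'full';
}
```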
Detection Pipeline
When enableDetection is true, the component runs a full detection pipeline matching cognito-check:
- Face Detection: Confidence-based face positioning
- Gaze Detection: Head pose (yaw/pitch) validation
- Hand Occlusion: Detects hand covering face
- Eye Region Quality: Glasses glare is non-blocking (warning forwarded to API); hidden or shadowed eyes trigger an automatic capture restart
- Landmark Validation: Validates facial landmark positions
- Camera Angle: Detects camera too low (face near top of frame) or too high (face near bottom). Uses face center Y + MediaPipe landmark vertical ratio (forehead/nose/chin perspective distortion) for higher confidence. Mobile: blocking — user can reposition the device. Desktop: advisory only — feedback shown but capture not stopped since a mounted webcam may not be repositionable.
Frames are only captured when all detectors pass, ensuring high-quality submissions. The detection starts blocking capture immediately (before the first cycle completes) to prevent low-quality frames from slipping through.
Captured Frames Panel
When showCapturedFrames is true, a collapsible panel appears below the result:
- Toggle: "Show captured frames (N)" / "Hide captured frames (N)"
- Grid mode (≤ 10 frames): Thumbnail grid with frame indices
- List mode (> 10 frames): Compact list with index, timestamp, and size
- Auto-detects PNG (crop) vs JPEG (full frame) format
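The format auto-detection and grid/list switch described above can be sketched as follows; both helper names are hypothetical, and the SDK's internals may differ:

```typescript
// Illustrative sketch: detect PNG vs JPEG from the file's magic bytes and
// pick the panel layout from the frame count, as described above.
function detectImageFormat(bytes: Uint8Array): 'png' | 'jpeg' | 'unknown' {
  // PNG files start with the signature 89 50 4E 47; JPEG files start with FF D8.
  if (bytes.length >= 4 && bytes[0] === 0x89 && bytes[1] === 0x50 && bytes[2] === 0x4e && bytes[3] === 0x47) {
    return 'png';
  }
  if (bytes.length >= 2 && bytes[0] === 0xff && bytes[1] === 0xd8) {
    return 'jpeg';
  }
  return 'unknown';
}

function panelMode(frameCount: number): 'grid' | 'list' {
  // Thumbnail grid for 10 or fewer frames, compact list beyond that.
  return frameCount <= 10 ? 'grid' : 'list';
}
```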
Custom Controls Example
<LivenessView
onResult={handleResult}
renderControls={({ state, start, stop, reset }) => (
<div className="flex gap-4 mt-4">
{state === 'idle' && <button onClick={start}>Start Verification</button>}
{state === 'capturing' && <button onClick={stop}>Cancel</button>}
{(state === 'complete' || state === 'error') && <button onClick={reset}>Try Again</button>}
</div>
)}
/>
Tailwind CSS Example
<LivenessView
className="max-w-xl mx-auto p-4"
cameraClassName="rounded-2xl shadow-lg overflow-hidden"
controlsClassName="mt-4 flex justify-center gap-4"
onResult={handleResult}
/>
LivenessModal
Modal wrapper for liveness verification. Supports inline and modal display modes.
import { LivenessModal } from '@moveris/react';
// Inline mode — always visible
<LivenessModal
mode="inline"
model="mixed-10-v2"
endpoint="fast-check-stream"
enableDetection
showCapturedFrames
sessionId="optional-fixed-id" // auto-generated if omitted
onResult={(result) => console.log(result)}
/>
// Modal mode — triggered by button
<LivenessModal
mode="modal"
triggerButtonText="Verify Identity"
model="mixed-10-v2"
endpoint="fast-check-stream"
enableDetection
showCapturedFrames
sessionId={sessionIdFromUrl} // e.g. from URL query param
onResult={handleResult}
onClose={() => console.log('Closed')}
/>
Props
| Prop | Type | Default | Description |
| -------------------- | --------------------- | ---------------------- | --------------------------------- |
| mode | 'inline' \| 'modal' | 'modal' | Display mode |
| title | string | 'Human Verification' | Modal title |
| subtitle | string | auto | Modal subtitle |
| triggerButtonText | string | 'Start Verification' | Button text (modal mode) |
| sessionId | string | auto-generated | Fixed session ID for API requests |
| showCloseButton | boolean | true | Show close button |
| enableDetection | boolean | false | Enable detection pipeline |
| showCapturedFrames | boolean | false | Show captured frames panel |
| onDetectionUpdate | (summary) => void | - | Detection pipeline callback |
| onFrameCaptured | (frame) => void | - | Per-frame capture callback |
| onClose | () => void | - | Called when modal closes |
| onResult | (result) => void | - | Called on verification complete |
| ... | | | Inherits all LivenessView props |
CapturedFramesPanel
Standalone component for displaying captured frame thumbnails. Use this when building custom UIs with hooks.
import { useState } from 'react';
import { CapturedFramesPanel, type CapturedFrame } from '@moveris/react';
const [frames, setFrames] = useState<CapturedFrame[]>([]);
<CapturedFramesPanel frames={frames} defaultCollapsed={true} />;
Props
| Prop | Type | Default | Description |
| ------------------ | ----------------- | ------------ | ----------------------- |
| frames | CapturedFrame[] | required | List of captured frames |
| defaultCollapsed | boolean | true | Panel starts collapsed |
LivenessCamera
Camera component with ref access for advanced control.
import { useRef } from 'react';
import { LivenessCamera, type LivenessCameraRef } from '@moveris/react';
const cameraRef = useRef<LivenessCameraRef>(null);
<LivenessCamera
ref={cameraRef}
autoStart={true}
facingMode="user"
onCameraStart={() => console.log('Camera started')}
onCameraError={(error) => console.error(error)}
>
<LivenessOverlay state="capturing" progress={50} />
</LivenessCamera>;
// Manual control
cameraRef.current?.start();
cameraRef.current?.stop();
const video = cameraRef.current?.getVideoElement();
Props
| Prop | Type | Default | Description |
| --------------- | ------------------------- | -------- | ------------------------- |
| autoStart | boolean | true | Start camera on mount |
| facingMode | 'user' \| 'environment' | 'user' | Camera facing mode |
| onCameraStart | () => void | - | Called when camera starts |
| onCameraStop | () => void | - | Called when camera stops |
| onCameraError | (error) => void | - | Called on camera error |
| className | string | - | CSS class |
| style | CSSProperties | - | Inline styles |
| children | ReactNode | - | Overlay content |
Ref Methods
| Method | Description |
| ------------------- | -------------------------------- |
| start() | Start the camera |
| stop() | Stop the camera |
| getVideoElement() | Get the underlying video element |
LivenessOverlay
Face guide overlay with status and progress.
import { LivenessOverlay } from '@moveris/react';
<LivenessOverlay state="capturing" progress={45} feedback="Hold still" ovalState="good" />;
Props
| Prop | Type | Default | Description |
| ---------------- | ------------------------ | ------------ | --------------------------- |
| state | LivenessState | required | Current verification state |
| progress | number | 0 | Progress percentage (0-100) |
| feedback | string \| null | - | User feedback message |
| ovalState | OvalGuideState | 'no_face' | Oval guide visual state |
| statusMessages | Record<string, string> | - | Custom status messages |
| className | string | - | CSS class |
| style | CSSProperties | - | Inline styles |
OvalGuideState
| Value | Color | Description |
| ----------- | ------ | ---------------------------- |
| 'no_face' | Red | No face detected |
| 'poor' | Orange | Poor alignment |
| 'good' | Yellow | Good alignment |
| 'perfect' | Green | Perfect alignment, capturing |
LivenessButton
Styled button for liveness actions.
import { LivenessButton } from '@moveris/react';
<LivenessButton variant="start" onClick={handleStart} disabled={isProcessing} />;
LivenessResult
Display verification result.
import { LivenessResult } from '@moveris/react';
<LivenessResult result={verificationResult} showDetails={true} />;
Hooks
useLiveness
Main hook for liveness detection workflow. Handles frame accumulation, streaming, and API submission.
const {
state, // 'idle' | 'capturing' | 'uploading' | 'processing' | 'complete' | 'error'
result, // LivenessResult | null
error, // Error | null
progress, // { current: number, total: number }
frames, // CapturedFrame[] - Accumulated frames
isStreamSending, // boolean - True when a sequential stream frame is in-flight
start, // () => void
stop, // () => void
reset, // () => void
captureFrame, // (base64: string) => void
submit, // () => Promise<void> - Manual submit
} = useLiveness({
model: 'mixed-10-v2',
source: 'live',
endpoint: 'fast-check-stream',
sequentialStreaming: false,
sessionId: 'optional-fixed-id', // auto-generated if omitted
onResult: (result) => console.log(result),
onError: (error) => console.error(error),
onStateChange: (state) => console.log('State:', state),
});
Config Options
| Option | Type | Default | Description |
| --------------------- | ------------------ | -------------------- | --------------------------------------------------------------------------------------------------------------------- |
| model | FastCheckModel | 'mixed-10-v2' | Verification model |
| source | FrameSource | 'live' | Frame source |
| frameCount | number | based on model | Frames to capture |
| endpoint | LivenessEndpoint | 'fast-check-crops' | API endpoint |
| sequentialStreaming | boolean | false | Sequential frame sending (fast-check-stream only) |
| sessionId | string | auto-generated | Fixed session ID for all API requests |
| bgSegmentation | boolean | false | Sends bg_segmentation in the API payload. Set to match your useSmartFrameCapture backgroundSegmentation option. |
| onResult | (result) => void | - | Success callback |
| onError | (error) => void | - | Error callback |
| onStateChange | (state) => void | - | State change callback |
Submission Behavior
| Endpoint | sequentialStreaming | Behavior |
| ------------------- | --------------------- | ------------------------------------------------------- |
| fast-check-stream | false (default) | Accumulates all frames, sends them all in parallel |
| fast-check-stream | true | Sends each frame immediately, waits for response |
| fast-check | n/a | Accumulates all frames, sends in a single batch request |
| fast-check-crops | n/a | Accumulates cropped frames, sends in a single batch |
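A minimal sketch of the two fast-check-stream strategies, assuming a per-frame sender; submitFrames and sendFrame are illustrative names, not SDK exports:

```typescript
// Hypothetical sketch of the two streaming modes described above:
// sequential sends one frame at a time; parallel sends all frames at once.
async function submitFrames(
  frames: string[],
  sendFrame: (frame: string) => Promise<void>,
  sequential: boolean,
): Promise<void> {
  if (sequential) {
    // Sequential: wait for each response before sending the next frame.
    for (const frame of frames) await sendFrame(frame);
  } else {
    // Parallel (default): accumulate, then send every frame concurrently.
    await Promise.all(frames.map((frame) => sendFrame(frame)));
  }
}
```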
Session ID
All components and hooks accept an optional sessionId. When provided, the same ID is used for every API request in the session, making it easy to correlate client and server logs.
// From URL query parameter
const params = new URLSearchParams(window.location.search);
const sessionId = params.get('sessionId') || undefined;
// Component usage
<LivenessView sessionId={sessionId} model="mixed-10-v2" onResult={handleResult} />
<LivenessModal sessionId={sessionId} mode="modal" onResult={handleResult} />
// Hook usage
const liveness = useLiveness({ sessionId, model: 'mixed-10-v2', onResult: handleResult });
If sessionId is not provided (or undefined), a new UUID v4 is generated automatically each time start() is called.
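The fallback can be sketched like this; resolveSessionId is a hypothetical helper, and node:crypto's randomUUID stands in here for the browser's crypto.randomUUID:

```typescript
// Illustrative sketch of the sessionId fallback: reuse a provided ID,
// otherwise mint a fresh UUID v4 per call (the SDK does this on start()).
import { randomUUID } from 'node:crypto';

function resolveSessionId(provided?: string): string {
  return provided ?? randomUUID();
}
```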
useCamera
Hook for camera access and control.
const {
stream, // MediaStream | null
isActive, // boolean
error, // Error | null
permissionStatus, // 'prompt' | 'granted' | 'denied' | 'unavailable'
startCamera, // () => Promise<void>
stopCamera, // () => void
switchCamera, // () => Promise<void>
} = useCamera({
autoStart: true,
facingMode: 'user',
});
Config Options
| Option | Type | Default | Description |
| ------------- | ------------------------- | -------- | --------------------- |
| autoStart | boolean | false | Start camera on mount |
| facingMode | 'user' \| 'environment' | 'user' | Camera facing |
| constraints | MediaTrackConstraints | - | Custom constraints |
useSmartFrameCapture
Intelligent frame capture with built-in face detection, quality assessment, and alignment feedback. Matches the cognito-check capture flow: 100ms detection interval, color-coded oval feedback, and automatic frame capture when all quality conditions pass.
const {
state, // 'idle' | 'detecting' | 'capturing' | 'complete'
progress, // { current: number, total: number, quality: string }
feedback, // string | null - User guidance message
ovalState, // 'no_face' | 'poor' | 'good' | 'perfect'
frames, // CapturedFrame[]
start, // () => void
stop, // () => void
reset, // () => void
restart, // () => void — reset all state and immediately resume capturing
} = useSmartFrameCapture({
videoRef,
targetFrames: 10,
captureMode: 'full',
onFrameCapture: (frame, index, total) => console.log('Captured:', index),
onComplete: (frames) => console.log('All frames:', frames),
onError: (error) => console.error(error),
});
Config Options
| Option | Type | Default | Description |
| ------------------------ | --------------------------------------------- | ------------ | ---------------------------------------------------------------------------------------------------------------------------------- |
| videoRef | RefObject<HTMLVideoElement> | required | Video element ref |
| targetFrames | number | 10 | Number of frames to capture |
| captureIntervalMs | number | 100 | Detection/capture interval in ms (100 = 10 FPS, 50 = ~20 FPS streaming) |
| blurThreshold | number | auto | Blur rejection threshold (110 desktop, 60 mobile — lower = more permissive) |
| captureMode | 'crop' \| 'full' | 'crop' | Frame capture mode (see below) |
| detectionGate | () => boolean | - | External gate — must return true to allow capture |
| detectFace | (video) => Promise<FaceBoundingBox \| null> | built-in | Custom face detection function |
| onFrameCapture | (frame, index, total) => void | - | Called per frame capture |
| onComplete | (frames) => void | - | Called when all frames captured |
| onError | (error) => void | - | Error callback |
| onQualityUpdate | (quality) => void | - | Quality state callback |
| backgroundSegmentation | boolean \| { color?: string } | false | Replace background in captured crops. Only active with captureMode: 'crop'. Lazily initialises MediaPipe segmenter on first use. |
Capture Modes
| Mode | Output | Use With | Description |
| -------- | ------------ | --------------------------------- | ------------------------------------------ |
| 'crop' | 224x224 PNG | fast-check-crops | Face-cropped frame (client-side detection) |
| 'full' | 640x480 JPEG | fast-check, fast-check-stream | Full video frame (server-side detection) |
Important:
fast-check and fast-check-stream expect full video frames because the server performs its own face detection. Only fast-check-crops expects pre-cropped 224x224 images. LivenessView selects the correct mode automatically based on the endpoint.
Detection Gate
Use detectionGate to integrate external detectors (e.g., gaze, hand occlusion):
const detectionPassedRef = useRef(true);
// Updated by DetectionManager loop
useSmartFrameCapture({
videoRef,
detectionGate: () => detectionPassedRef.current,
onFrameCapture: (frame) => captureFrame(frame.pixels),
});
When the gate returns false, the frame is silently skipped — the internal counter does NOT increment.
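This skip-without-counting behavior can be modeled with a small simulation (illustrative only; not the SDK's internal code):

```typescript
// Model of the gate semantics described above: a frame only counts toward the
// target when the gate passes; a false gate skips it without incrementing.
function simulateCapture(gateResults: boolean[], targetFrames: number): number {
  let captured = 0;
  for (const gatePassed of gateResults) {
    if (captured >= targetFrames) break;
    if (!gatePassed) continue; // silently skipped, counter unchanged
    captured++;
  }
  return captured;
}
```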
useDetectionPipeline
Encapsulates the full gaze + eye-region detection pipeline so apps don't need to manage GazeDetectorImpl or EyeRegionDetectorImpl directly. Use this hook alongside useSmartFrameCapture to gate frame capture on detection quality.
import { useRef, useCallback } from 'react';
import { useDetectionPipeline, useSmartFrameCapture } from '@moveris/react';
// Break the circular dependency: useDetectionPipeline needs restart(),
// but restart() comes from useSmartFrameCapture.
const restartRef = useRef<() => void>(() => {});
const onRestartNeeded = useCallback(() => restartRef.current(), []);
const { detectionGate, getWarnings } = useDetectionPipeline({
videoRef,
enabled: isSessionOpen,
onRestartNeeded,
onGazeFeedback: (message) => setGazeFeedback(message), // optional
onEyeWarning: (message) => showEyeWarningToast(message), // optional
});
const { state, progress, start, stop, restart } = useSmartFrameCapture({
videoRef,
targetFrames: 10,
captureMode: 'full',
detectionGate,
onComplete: (frames) => {
const warnings = getWarnings(); // collect at submission time
submitFrames(frames, warnings);
},
});
// Wire restart after the hook call
restartRef.current = restart;
Options
| Option | Type | Default | Description |
| ----------------- | ----------------------------- | -------- | ------------------------------------------------------------------------ |
| videoRef | RefObject<HTMLVideoElement> | required | Video element to run detection on |
| enabled | boolean | required | Activates/deactivates the detection loop |
| onRestartNeeded | () => void | required | Called when hidden/shadowed eyes are detected (triggers capture restart) |
| intervalMs | number | 200 | Detection interval in milliseconds |
| onGazeFeedback | (message: string) => void | — | Called with each gaze feedback update (empty string = clear) |
| onEyeWarning | (message: string) => void | — | Called with the eye failure reason just before onRestartNeeded fires |
Return Value
| Property | Type | Description |
| --------------- | ---------------- | --------------------------------------------------------- |
| detectionGate | () => boolean | Pass to useSmartFrameCapture.detectionGate |
| getWarnings | () => string[] | Returns accumulated session warnings; call at submit time |
Detection Behaviour
| Condition | Behaviour |
| -------------------- | ---------------------------------------------------------------------------------------------- |
| Gaze off-camera | detectionGate returns false; onGazeFeedback called with hint |
| Gaze restored | detectionGate returns true; onGazeFeedback('') clears message |
| Glasses glare | Non-blocking — "User was wearing glasses" added to warnings |
| Hidden/shadowed eyes | onEyeWarning(reason) fired first, then onRestartNeeded() — triggers restart() on capture |
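A sketch of this flow under stated assumptions: SessionWarnings and handleEyeResult are hypothetical names modeling the table rows, not SDK exports.

```typescript
// Illustrative model of the behavior table: glare is non-blocking and only
// adds a (deduplicated) session warning; eye failures fire onEyeWarning
// first, then onRestartNeeded.
class SessionWarnings {
  private warnings = new Set<string>();
  add(message: string): void {
    this.warnings.add(message); // Set dedupes repeat warnings
  }
  collect(): string[] {
    return [...this.warnings];
  }
}

function handleEyeResult(
  result: { ok: boolean; glare: boolean; reason?: string },
  warnings: SessionWarnings,
  onEyeWarning: (message: string) => void,
  onRestartNeeded: () => void,
): boolean {
  if (result.glare) warnings.add('User was wearing glasses'); // non-blocking
  if (!result.ok) {
    onEyeWarning(result.reason ?? 'Eyes not clearly visible'); // fired first
    onRestartNeeded(); // then the restart is requested
    return false;
  }
  return true;
}
```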
useFaceDetection
Hook for real-time face detection (uses MediaPipe CDN by default).
import { useMemo } from 'react';
import { useFaceDetection, createMediaPipeAdapter } from '@moveris/react';
const faceDetector = useMemo(() => createMediaPipeAdapter(), []);
const {
faces, // FaceDetectionResult[]
isDetecting, // boolean
error, // Error | null
startDetection, // () => void
stopDetection, // () => void
} = useFaceDetection({
videoRef,
adapter: faceDetector,
onFaceDetected: (faces) => console.log('Faces:', faces),
});
Detection Pipeline
The SDK includes a full detection pipeline matching cognito-check's architecture:
import {
DetectionManager,
FaceDetectorImpl,
GazeDetectorImpl,
HandOcclusionDetectorImpl,
EyeRegionDetectorImpl,
} from '@moveris/react';
// Create and initialize
const manager = new DetectionManager();
const gazeDetector = new GazeDetectorImpl();
const eyeRegionDetector = new EyeRegionDetectorImpl();
eyeRegionDetector.setGazeDetector(gazeDetector);
manager.register(new FaceDetectorImpl());
manager.register(gazeDetector);
manager.register(new HandOcclusionDetectorImpl());
manager.register(eyeRegionDetector);
await manager.initializeAll();
// Run detection — pipeline: Face → Gaze → Hand → Eye Region
const summary = await manager.runAll(videoElement, Date.now());
console.log(summary.allPassed); // boolean
console.log(summary.faceBox); // FaceBoundingBox | null
console.log(summary.results); // Map<string, DetectionResult>
EyeRegionDetectorImpl
Blocks frame capture when eyes are not properly visible. Uses MediaPipe face landmarks (from GazeDetector) to locate the eye regions, then extracts pixel data and runs brightness, contrast, and specular highlight analysis.
| Detection | Failure message |
| ---------------------- | ------------------------------------------------- |
| Shadowed eyes | "Eyes are in shadow - improve lighting" |
| Overexposed eyes | "Eye region overexposed - reduce lighting" |
| Glasses glare | "Glare detected - adjust angle or remove glasses" |
| Occluded / featureless | "Eyes not clearly visible" |
The detector reports per-eye metadata (brightness, contrast, glare ratio, pass/fail) for detailed UI display.
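As a rough illustration of what such per-eye statistics could look like on a grayscale patch — mean brightness, RMS contrast, and the fraction of near-saturated "glare" pixels. All names and thresholds here are made up for the example and are not the detector's actual values:

```typescript
// Hypothetical per-eye metrics over a grayscale patch (values 0-255).
interface EyeMetrics {
  brightness: number; // mean pixel value
  contrast: number;   // standard deviation (RMS contrast)
  glareRatio: number; // fraction of near-saturated pixels
  pass: boolean;
}

function analyzeEyeRegion(gray: number[]): EyeMetrics {
  const n = gray.length;
  const brightness = gray.reduce((sum, v) => sum + v, 0) / n;
  const variance = gray.reduce((sum, v) => sum + (v - brightness) ** 2, 0) / n;
  const contrast = Math.sqrt(variance);
  const glareRatio = gray.filter((v) => v >= 250).length / n;
  // Fail on shadow (too dark), overexposure, a flat featureless region,
  // or heavy specular glare. Thresholds are illustrative only.
  const pass = brightness >= 40 && brightness <= 220 && contrast >= 10 && glareRatio <= 0.2;
  return { brightness, contrast, glareRatio, pass };
}
```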
Context Hooks
useMoverisContext
Access the provider context directly.
const { client, model, debug } = useMoverisContext();
useLivenessClient
Get the LivenessClient instance.
const client = useLivenessClient();
// Use client directly
const result = await client.fastCheck(frames, { model: '10' });
Frame Processing Utilities
captureVideoFrame
Capture a full video frame (640x480 JPEG). Includes optional blur detection.
import { captureVideoFrame } from '@moveris/react';
const frame = captureVideoFrame(videoElement, index, startTime, {
format: 'jpeg',
quality: 0.85,
blurThreshold: 110,
});
// Returns: CapturedFrame | null
captureFaceCroppedFrame
Capture a face-cropped 224x224 PNG frame.
import { captureFaceCroppedFrame } from '@moveris/react';
const frame = captureFaceCroppedFrame(videoElement, index, startTime, faceBoundingBox, {
blurThreshold: 110,
});
// Returns: CapturedFrame | null
analyzeBlur / analyzeLighting / checkFrameQuality
import { analyzeBlur, analyzeLighting, checkFrameQuality } from '@moveris/react';
const blur = analyzeBlur(imageData); // { isBlurry, variance }
const light = analyzeLighting(videoEl, faceBox); // { isGood, brightness }
const quality = checkFrameQuality(videoEl, faceBox); // combined check
isFaceInOval / isFaceFullyVisible
import { isFaceInOval, isFaceFullyVisible, DEFAULT_OVAL_REGION } from '@moveris/react';
const inOval = isFaceInOval(faceBbox, DEFAULT_OVAL_REGION);
// { isInOval, alignmentScore, feedback }
const visible = isFaceFullyVisible(faceBbox, frameWidth, frameHeight);
// { isVisible, reason }
calculateFaceCropRegion
import { calculateFaceCropRegion } from '@moveris/react';
const region = calculateFaceCropRegion(faceBbox, videoWidth, videoHeight);
// { x, y, size } — square crop region in pixel coordinates
Models
Active models (recommended):
| Model | Frames | Description |
| ---------------- | ------ | --------------------------------- |
| 'mixed-10-v2' | 10 | Fast verification, lowest latency |
| 'mixed-30-v2' | 30 | Balanced speed and accuracy |
| 'mixed-60-v2' | 60 | Higher accuracy |
| 'mixed-90-v2' | 90 | High accuracy |
| 'mixed-120-v2' | 120 | Highest accuracy, slower |
Legacy models (still supported):
| Model | Frames | Description |
| ------- | ------ | ---------------------------- |
| '10' | 10 | Standard — fast verification |
| '50' | 50 | Standard — balanced |
| '250' | 250 | Standard — highest accuracy |
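Since every model id in the tables above embeds its frame count ('mixed-10-v2' → 10, '250' → 250), the default can be derived by parsing the id; defaultFrameCount is a hypothetical helper, not an SDK export:

```typescript
// Illustrative sketch: derive the default frame count from a model id by
// taking its first run of digits, per the model tables above.
function defaultFrameCount(model: string): number {
  const match = model.match(/\d+/);
  if (!match) throw new Error(`Unrecognized model: ${model}`);
  return Number(match[0]);
}
```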
Feedback Messages
Get localized user feedback.
import { getFeedbackMessage, getStatusMessage, DEFAULT_LOCALE, ES_LOCALE } from '@moveris/react';
// English (default)
getFeedbackMessage('no_face'); // "No face detected"
// Spanish
getFeedbackMessage('no_face', ES_LOCALE); // "No se detecta rostro"
Type Definitions
import type {
// Component Props
LivenessViewProps,
LivenessModalProps,
CapturedFramesPanelProps,
LivenessCameraProps,
LivenessCameraRef,
LivenessOverlayProps,
LivenessButtonProps,
LivenessResultProps,
// Hook Types
UseLivenessConfig,
UseLivenessReturn,
UseCameraReturn,
UseSmartFrameCaptureOptions,
UseSmartFrameCaptureReturn,
UseDetectionPipelineOptions,
UseDetectionPipelineReturn,
UseFaceDetectionOptions,
UseFaceDetectionReturn,
// Core Types
LivenessResult,
LivenessState,
LivenessEndpoint,
FastCheckModel,
FrameSource,
CapturedFrame,
// Detection Types
DetectionResult,
DetectionSummary,
FaceBoundingBox,
HeadPose,
// Analysis Types
BlurAnalysis,
LightingAnalysis,
FrameQualityResult,
OvalGuideState,
} from '@moveris/react';
Browser Compatibility
| Browser | Minimum Version |
| ------- | --------------- |
| Chrome | 88+ |
| Firefox | 78+ |
| Safari | 14+ |
| Edge | 88+ |
Requirements:
- Camera access (getUserMedia API)
- HTTPS (required for camera access in production)
Error Handling
import { LivenessApiError } from '@moveris/react';
<LivenessView
onError={(error) => {
if (error instanceof LivenessApiError) {
switch (error.code) {
case 'insufficient_frames':
console.log(`Need ${error.required} frames, got ${error.received}`);
break;
case 'invalid_key':
console.log('Invalid API key');
break;
case 'timeout':
console.log('Request timed out');
break;
default:
console.log(error.message);
}
} else {
console.error(error);
}
}}
/>;
License
MIT
