# @moveris/shared

> v3.0.0

Core business logic for the Moveris Live SDK. This package provides the HTTP client, types, utilities, and frame management for liveness detection.
## Installation

```bash
pnpm add @moveris/shared
# or
npm install @moveris/shared
# or
yarn add @moveris/shared
```

## Quick Start
```typescript
import { LivenessClient, generateSessionId } from '@moveris/shared';

// Create the client
const client = new LivenessClient({
  apiKey: 'mv_your_api_key',
  baseUrl: 'https://api.moveris.com', // optional, uses default
});

// Perform a fast liveness check
const result = await client.fastCheck(frames, {
  sessionId: 'my-session-id', // optional — auto-generated if omitted
  model: 'mixed-10-v2',
  source: 'live',
});

console.log(result.verdict); // 'live' or 'fake'
console.log(result.confidence); // 0-1
console.log(result.score); // 0-100
```

## API Reference
### LivenessClient

The main client for interacting with the Moveris Liveness API.

#### Constructor
```typescript
new LivenessClient(config: LivenessClientConfig)
```

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `apiKey` | `string` | required | Your Moveris API key |
| `baseUrl` | `string` | `'https://api.moveris.com'` | API base URL |
| `modelVersion` | `ModelVersion` | - | Model version alias for the `X-Model-Version` header (e.g. `'latest'`) |
| `timeout` | `number` | `30000` | Request timeout in milliseconds |
| `enableRetry` | `boolean` | `true` | Enable automatic retry with exponential backoff |
| `customFetch` | `typeof fetch` | `fetch` | Custom fetch implementation (e.g. for React Native) |
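
With `enableRetry: true`, failed requests are retried with exponential backoff. The package's internal retry logic isn't shown here; the following is a minimal sketch of the general strategy, with illustrative values (`withRetry`, the 500 ms base delay, and the doubling schedule are assumptions, not the library's actual code):

```typescript
// Hypothetical sketch of exponential-backoff retry; the real client's
// delays and retry conditions may differ.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        // Delay doubles each attempt: 500 ms, 1000 ms, ...
        const delay = baseDelayMs * 2 ** attempt;
        await new Promise((resolve) => setTimeout(resolve, delay));
      }
    }
  }
  throw lastError;
}
```
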
#### Methods

##### `fastCheck(frames, options)`

Perform a fast liveness check with server-side face detection. Ideal for quick verification with 10-250 frames.
```typescript
const result = await client.fastCheck(frames, {
  sessionId: 'optional-uuid',
  model: 'mixed-10-v2',
  source: 'live', // 'live' | 'media'
});
```

**Parameters:**

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `sessionId` | `string` | auto-generated | Unique session identifier (UUID) |
| `model` | `FastCheckModel` | `'mixed-10-v2'` | Model to use (see Models section) |
| `modelVersion` | `ModelVersion` | - | Version alias sent via the `X-Model-Version` header (e.g. `'latest'`) |
| `frameCount` | `number` | - | Frame count for alias-based model resolution |
| `source` | `FrameSource` | `'live'` | Frame source: `'live'` (camera) or `'media'` (recorded video) |

**Returns:** `Promise<LivenessResult>`
##### `fastCheckCrops(crops, options)`

Perform a fast liveness check with pre-cropped 224x224 face images. Use this when you handle face detection client-side. Expects `CropData[]` (no timestamps, only index and pixels).
```typescript
import type { CropData } from '@moveris/shared';

const crops: CropData[] = capturedFrames.map((f) => ({
  index: f.index,
  pixels: f.pixels, // base64 224x224 PNG
}));

const result = await client.fastCheckCrops(crops, {
  model: '10',
  source: 'live',
  bgSegmentation: true, // optional — sent as bg_segmentation in payload
});
```

##### `streamFrame(frame, options)`
Send a single frame to the streaming endpoint. Returns the API response, which may include a partial or final result. Used internally by `useLiveness` for sequential streaming.
```typescript
const response = await client.streamFrame(frame, {
  sessionId: 'uuid',
  model: '10',
  source: 'live',
  frameIndex: 0,
  totalFrames: 10,
});
```

##### `verify(frames, options)`

Verify liveness using spatial feature-based detection. Requires a minimum of 50 frames.
```typescript
const result = await client.verify(frames, {
  fps: 30,
  frameWidth: 640,
  frameHeight: 480,
});
```

##### `hybridCheck(frames, options)`

Perform a hybrid liveness check combining a CNN with physiological features. Requires a minimum of 50 frames.

```typescript
const result = await client.hybridCheck(frames, { fps: 30 });
```

##### `hybrid50(frames, options)`

50-frame hybrid model with 93.8% balanced accuracy.

```typescript
const result = await client.hybrid50(frames, { fps: 30 });
```

##### `hybrid150(frames, options)`

150-frame hybrid model with 96.2% balanced accuracy. Best for high-security scenarios.

```typescript
const result = await client.hybrid150(frames, { fps: 30 });
```

##### `health()`

Check API health status.

```typescript
const health = await client.health();
console.log(health.status); // 'ok'
console.log(health.models_loaded); // ['10', '50', '250']
```

##### `getJobResult(jobId)`

Get job status for async processing.

```typescript
const job = await client.getJobResult('job-uuid');
console.log(job.status); // 'queued' | 'processing' | 'complete' | 'failed'
```

##### `waitForJobResult(jobId, timeout)`

Long-poll for job completion (max 120 seconds).

```typescript
const job = await client.waitForJobResult('job-uuid', 30);
```
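
`waitForJobResult` long-polls server-side, but the same behavior can be approximated client-side by polling `getJobResult` until a terminal status. A rough sketch, assuming a generic `fetchStatus` callback in place of the real client method:

```typescript
// Hypothetical polling loop; the real waitForJobResult holds the request
// open on the server instead of polling repeatedly.
type JobStatus = 'queued' | 'processing' | 'complete' | 'failed';

async function pollUntilDone(
  fetchStatus: () => Promise<{ status: JobStatus }>,
  intervalMs = 1000,
  timeoutMs = 120_000,
): Promise<{ status: JobStatus }> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const job = await fetchStatus();
    // 'complete' and 'failed' are terminal states
    if (job.status === 'complete' || job.status === 'failed') return job;
    if (Date.now() >= deadline) throw new Error('Job polling timed out');
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```
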
## Types

### FastCheckModel

Model selection for the liveness detection speed/accuracy trade-off.

```typescript
type FastCheckModel =
  | 'mixed-10-v2'
  | 'mixed-30-v2'
  | 'mixed-60-v2'
  | 'mixed-90-v2'
  | 'mixed-120-v2' // Active mixed-v2 models (recommended)
  | '10'
  | '50'
  | '250' // Legacy standard models
  | (string & object); // Forward-compatible with future model IDs
```

**Active models (recommended):**

| Value | Frames | Description |
| ---------------- | ------ | ------------------------------ |
| 'mixed-10-v2' | 10 | Fast verification, low latency |
| 'mixed-30-v2' | 30 | Balanced speed and accuracy |
| 'mixed-60-v2' | 60 | Higher accuracy |
| 'mixed-90-v2' | 90 | High accuracy |
| 'mixed-120-v2' | 120 | Highest accuracy, slower |

**Legacy models (still supported):**

| Value | Frames | Description |
| ------- | ------ | ------------------------------------------------ |
| '10' | 10 | Standard — fast (alias: 'fast') |
| '50' | 50 | Standard — balanced (alias: 'spatial') |
| '250' | 250 | Standard — highest accuracy (alias: 'spatial') |

**Hybrid-v2 models (alias: `'hybrid'`):**

| Value | Frames | Description |
| ----------------- | ------ | ---------------------------- |
| 'hybrid-v2-10' | 10 | Hybrid v2 — fast |
| 'hybrid-v2-30' | 30 | Hybrid v2 — balanced |
| 'hybrid-v2-60' | 60 | Hybrid v2 — higher accuracy |
| 'hybrid-v2-90' | 90 | Hybrid v2 — high accuracy |
| 'hybrid-v2-100' | 100 | Hybrid v2 — high accuracy |
| 'hybrid-v2-120' | 120 | Hybrid v2 — highest accuracy |
| 'hybrid-v2-125' | 125 | Hybrid v2 — highest accuracy |
| 'hybrid-v2-250' | 250 | Hybrid v2 — maximum accuracy |

**Deprecated models (v1 — sunset 2026-09-01):**

| Value | Frames | Replacement |
| ------------- | ------ | ---------------- |
| 'mixed-10' | 10 | 'mixed-10-v2' |
| 'mixed-30' | 30 | 'mixed-30-v2' |
| 'mixed-60' | 60 | 'mixed-60-v2' |
| 'mixed-90' | 90 | 'mixed-90-v2' |
| 'mixed-120' | 120 | 'mixed-120-v2' |
| 'mixed-150' | 150 | — |
| 'mixed-250' | 250 | — |
### FrameSource

Source of the captured frames.

```typescript
type FrameSource = 'media' | 'live';
```

| Value | Description |
| --------- | ----------------------- |
| 'live' | Real-time camera feed |
| 'media' | Pre-recorded video file |
### Verdict

Liveness determination result.

```typescript
type Verdict = 'live' | 'fake';
```

| Value | Description |
| -------- | ---------------------------------------------------- |
| 'live' | Real person detected |
| 'fake' | Spoofing attempt detected (photo, video, mask, etc.) |
### LivenessState

State machine for the liveness detection flow.

```typescript
type LivenessState =
  | 'idle' // Ready to start
  | 'capturing' // Capturing frames from camera
  | 'uploading' // Uploading frames to API
  | 'processing' // API is processing frames
  | 'complete' // Verification complete
  | 'error'; // Error occurred
```

### LivenessResult
Result returned from liveness verification.

```typescript
interface LivenessResult {
  verdict: Verdict; // 'live' or 'fake'
  confidence: number; // Confidence score (0-1)
  score: number; // Liveness score (0-100)
  sessionId: string; // Session identifier
  processingMs: number; // Server processing time
  framesProcessed: number; // Number of frames analyzed
  deprecation?: DeprecationInfo; // Present when the X-Moveris-Model-Resolved header is returned
}
```

### CapturedFrame
An individual frame captured from the camera.

```typescript
interface CapturedFrame {
  index: number; // Frame sequence number (0-based)
  timestampMs: number; // Timestamp from capture start
  pixels: string; // Base64-encoded image data
}
```

### CropData
Pre-cropped face image for the fast-check-crops endpoint. Omits `timestampMs` since timing is not needed for cropped submissions.

```typescript
interface CropData {
  index: number; // Frame sequence number (0-based)
  pixels: string; // Base64-encoded 224x224 PNG image
}
```

### ModelVersion
Version alias sent via the `X-Model-Version` request header for server-side model resolution.

```typescript
type ModelVersion = 'latest' | 'v2' | 'v1' | 'fast' | 'spatial' | 'hybrid' | (string & object);
```

| Value | Resolves to |
| ----------- | ----------------------------------- |
| 'latest' | Current production mixed-v2 model |
| 'v2' | Pinned mixed-v2 version |
| 'v1' | Legacy mixed-v1 models (deprecated) |
| 'fast' | Model '10' |
| 'spatial' | Models '50' / '250' |
| 'hybrid' | Hybrid-v2 models |
When `X-Model-Version` is set, the API also requires `frame_count` in the request body. Pass it via the `frameCount` option.
### DeprecationInfo

Parsed from API response headers when the server resolves a model.

```typescript
interface DeprecationInfo {
  deprecated: boolean; // true when the model is deprecated
  resolvedModel: string; // Concrete model id (X-Moveris-Model-Resolved)
  deprecatedModel?: string; // Deprecated model id (X-Moveris-Deprecated-Model)
  sunsetDate?: string; // ISO date of removal (Sunset header)
  suggestedModel?: string; // Replacement model (X-Moveris-Suggested-Model)
}
```

## Model Versioning & Deprecation
The client supports model alias resolution via the `X-Model-Version` header and automatically parses deprecation headers from API responses.
```typescript
// Use aliases for automatic server-side resolution
const client = new LivenessClient({
  apiKey: 'mv_your_api_key',
  modelVersion: 'latest', // sent as X-Model-Version header on every request
});

const result = await client.fastCheck(frames, {
  model: 'mixed-30-v2',
  frameCount: 30, // required when modelVersion is set
});

// Deprecation info is automatically parsed from response headers
if (result.deprecation?.deprecated) {
  console.warn(`Model "${result.deprecation.resolvedModel}" is deprecated.`);
  console.warn(`Migrate to "${result.deprecation.suggestedModel}".`);
  console.warn(`Sunset date: ${result.deprecation.sunsetDate}`);
}

// Per-call override (takes precedence over client-level modelVersion)
const result2 = await client.fastCheck(frames, {
  modelVersion: 'v2',
  frameCount: 10,
});
```

### Valid Frame Counts
When using model version aliases, `frameCount` must be one of:

```typescript
import { VALID_FRAME_COUNTS } from '@moveris/shared';
// [10, 30, 60, 90, 120]
```

## Utilities
### generateSessionId()

Generate a UUID v4 for session identification.

```typescript
import { generateSessionId } from '@moveris/shared';

const sessionId = generateSessionId();
// e.g., 'a1b2c3d4-e5f6-4a7b-8c9d-0e1f2a3b4c5d'
```

**Session ID injection:** Every `LivenessClient` method (`fastCheck`, `fastCheckCrops`, `streamFrame`, `verify`, `hybrid50`, `hybrid150`) accepts an optional `sessionId` in its options object. If provided, that ID is used for the request; if omitted, a new UUID is generated automatically. This is useful for debugging, testing, or correlating client requests with server logs. In `@moveris/react`, the `sessionId` can be injected via the `useLiveness` hook config or the `LivenessView`/`LivenessModal` props.
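
For reference, a UUID v4 has its version and variant bits fixed while the rest is random. A minimal sketch of how such an ID can be generated (illustrative only; `makeSessionId` is a stand-in, and `generateSessionId`'s actual implementation may use the Web Crypto API rather than `Math.random`):

```typescript
// Sketch of UUID v4 generation (not the package's actual code).
function makeSessionId(): string {
  return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, (c) => {
    const r = (Math.random() * 16) | 0;
    // 'y' positions carry the RFC 4122 variant bits (8, 9, a, or b)
    const v = c === 'x' ? r : (r & 0x3) | 0x8;
    return v.toString(16);
  });
}
```
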
### toFrameData(frames)

Convert a `CapturedFrame` array to the API format.

```typescript
import { toFrameData } from '@moveris/shared';

const apiFrames = toFrameData(capturedFrames);
```

### toHybridFrameData(frames)

Convert frames to the hybrid endpoint format.

```typescript
import { toHybridFrameData } from '@moveris/shared';

const hybridFrames = toHybridFrameData(capturedFrames);
```

## Frame Buffer Management
### FrameBuffer

Manages a sliding-window buffer of frames with automatic cleanup.

```typescript
import { FrameBuffer } from '@moveris/shared';

const buffer = new FrameBuffer({
  maxSize: 10, // Maximum frames to keep
  maxMemoryBytes: 4 * 1024 * 1024, // 4MB limit
});

// Add frames
buffer.add(frame);

// Get all frames
const frames = buffer.getAll();

// Acknowledge processed frames
buffer.acknowledge(5); // Remove frames up to index 5

// Clear buffer
buffer.clear();
```
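
The sliding-window behavior can be illustrated with a stripped-down sketch. This is not the package's `FrameBuffer`; it shows only size-based eviction, omitting `maxMemoryBytes` accounting and `acknowledge()`:

```typescript
// Minimal sliding-window buffer sketch: once the window is full,
// adding a frame evicts the oldest one.
interface Frame {
  index: number;
  pixels: string;
}

class SlidingBuffer {
  private frames: Frame[] = [];

  constructor(private maxSize: number) {}

  add(frame: Frame): void {
    this.frames.push(frame);
    // Evict the oldest frame when over capacity
    if (this.frames.length > this.maxSize) this.frames.shift();
  }

  getAll(): Frame[] {
    return [...this.frames];
  }
}
```
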
### FrameQueue

FIFO queue for frame processing.

```typescript
import { FrameQueue } from '@moveris/shared';

const queue = new FrameQueue({ maxSize: 50 });
queue.enqueue(frame);
const next = queue.dequeue();
const peeked = queue.peek();
```

## Error Handling
### LivenessApiError

Custom error class for API errors.

```typescript
import { LivenessApiError } from '@moveris/shared';

try {
  await client.fastCheck(frames);
} catch (error) {
  if (error instanceof LivenessApiError) {
    console.log(error.code); // 'insufficient_frames'
    console.log(error.message); // 'Not enough frames provided'
    console.log(error.statusCode); // 400
    console.log(error.required); // 10
    console.log(error.received); // 5
  }
}
```

**Error Codes:**

| Code | Description |
| --------------------- | -------------------------- |
| insufficient_frames | Not enough frames provided |
| invalid_model | Invalid model specified |
| invalid_key | Invalid API key |
| missing_field | Required field missing |
| timeout | Request timeout |
| network_error | Network connectivity issue |
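
Branching on `error.code` rather than parsing messages keeps error handling robust. As an illustration, here is a hypothetical error class with the same fields as `LivenessApiError` above, plus a helper that treats only transient codes as retryable (`ApiError` and `isRetryable` are sketches, not package exports):

```typescript
// Hypothetical API error carrying a machine-readable code, mirroring
// the fields documented above.
class ApiError extends Error {
  constructor(
    message: string,
    public code: string,
    public statusCode?: number,
    public required?: number,
    public received?: number,
  ) {
    super(message);
    this.name = 'ApiError';
  }
}

// Callers can branch on the code instead of parsing the message text
function isRetryable(err: unknown): boolean {
  return err instanceof ApiError && (err.code === 'timeout' || err.code === 'network_error');
}
```
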
## Feedback Messages

Localized user feedback for the liveness detection UI.

```typescript
import { getFeedbackMessage, getStatusMessage, DEFAULT_LOCALE, ES_LOCALE } from '@moveris/shared';

// Get localized feedback
const message = getFeedbackMessage('face_not_centered', ES_LOCALE);
// "Centra tu rostro en el óvalo"

// Get status message
const status = getStatusMessage('capturing', DEFAULT_LOCALE);
// "Hold still..."
```

### Feedback Keys

| Key | English | Description |
| ------------------- | ------------------------------ | ---------------------------- |
| no_face | "No face detected" | Face detection failed |
| face_not_centered | "Center your face in the oval" | Face outside guide |
| too_close | "Move back a little" | Face too close to camera |
| too_far | "Move closer" | Face too far from camera |
| poor_lighting | "Improve lighting" | Insufficient light |
| hold_still | "Hold still" | Movement detected |
| capturing | "Capturing..." | Frame capture in progress |
| processing | "Processing..." | API verification in progress |
| success | "Verification complete" | Successful completion |
| failed | "Verification failed" | Failed verification |
| eyes_not_visible | "Eyes not clearly visible" | Eye region featureless |
| eyes_shadowed | "Eyes are in shadow…" | Eye region too dark |
| eyes_overexposed | "Eye region overexposed…" | Eye region too bright |
| glasses_glare | "Glare detected…" | Specular highlights on eyes |
| eye_quality_poor | "Eye region quality is poor" | Generic eye quality failure |
| camera_angle_low | "Raise camera to eye level" | Camera below face (Y < 0.3) |
| camera_angle_high | "Lower camera to eye level" | Camera above face (Y > 0.7) |
| camera_tilted | "Hold camera level" | Eye line deviates > 15° |
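
A lookup like `getFeedbackMessage` can be modeled as a keyed dictionary with an English fallback. The sketch below covers only a few keys from the table and assumes a locale is a partial key-to-string map; the package's actual locale shape may differ:

```typescript
// Sketch of localized feedback lookup with an English fallback.
type FeedbackKey = 'no_face' | 'face_not_centered' | 'too_close' | 'too_far';

const EN: Record<FeedbackKey, string> = {
  no_face: 'No face detected',
  face_not_centered: 'Center your face in the oval',
  too_close: 'Move back a little',
  too_far: 'Move closer',
};

function feedbackMessage(
  key: FeedbackKey,
  locale: Partial<Record<FeedbackKey, string>> = {},
): string {
  // Fall back to English when the locale has no translation for the key
  return locale[key] ?? EN[key];
}
```
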
## Oval Guide States

Visual feedback states for the face positioning guide.

```typescript
import { getOvalGuideState, OVAL_GUIDE_COLORS } from '@moveris/shared';

const state = getOvalGuideState(alignmentScore);
// Returns: 'no_face' | 'poor' | 'good' | 'perfect'

const color = OVAL_GUIDE_COLORS[state];
// 'red' | 'orange' | 'yellow' | 'green'
```

| State | Color | Alignment Score |
| --------- | ------ | ---------------- |
| no_face | Red | No face detected |
| poor | Orange | < 0.5 |
| good | Yellow | 0.5 - 0.8 |
| perfect | Green | > 0.8 |
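
The mapping from alignment score to state follows directly from the table. A sketch of the threshold logic (the exact boundary handling inside `getOvalGuideState` is an assumption):

```typescript
// Threshold logic implied by the table above; `guideState` is a sketch,
// not the package's implementation.
type GuideState = 'no_face' | 'poor' | 'good' | 'perfect';

function guideState(alignmentScore: number | null): GuideState {
  if (alignmentScore === null) return 'no_face'; // no face detected
  if (alignmentScore > 0.8) return 'perfect';
  if (alignmentScore >= 0.5) return 'good';
  return 'poor';
}
```
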
## Eye Region Quality Analysis

Pre-request eye-region quality gate: platform-agnostic functions that analyze RGBA pixel data from eye regions to detect shadows, glare, occlusion, and poor visibility.

```typescript
import {
  checkEyeRegionQuality,
  analyzeEyeRegionBrightness,
  analyzeEyeRegionContrast,
  detectSpecularHighlights,
  EYE_QUALITY_THRESHOLDS,
} from '@moveris/shared';

// Combined quality check (recommended)
const quality = checkEyeRegionQuality(eyeRegionPixels);
console.log(quality.passed); // boolean
console.log(quality.brightness); // 0-255
console.log(quality.contrast); // standard deviation of luminance
console.log(quality.hasGlare); // boolean
console.log(quality.glareRatio); // 0-1
console.log(quality.message); // user-facing feedback or null

// Individual checks
const brightness = analyzeEyeRegionBrightness(pixels);
const contrast = analyzeEyeRegionContrast(pixels);
const glareRatio = detectSpecularHighlights(pixels);
```

| Check | Threshold | Condition |
| ----------- | --------------------- | --------------------------------------- |
| Shadowed | minBrightness: 40 | Average luminance too low |
| Overexposed | maxBrightness: 230 | Average luminance too high |
| Glare | maxGlareRatio: 0.15 | >15% of pixels are specular highlights |
| Occluded | minContrast: 12 | Standard deviation of luminance too low |
Glare detection uses a relative threshold (mean brightness × 2.5) so sensitivity adapts to ambient lighting, avoiding false positives in bright environments.
Custom thresholds can be passed as a second argument to checkEyeRegionQuality().
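
The relative glare rule can be sketched directly over RGBA data. This illustrates the mean-brightness-times-2.5 approach described above; it is not the package's `detectSpecularHighlights` implementation, and the Rec. 601 luminance weights are an assumption:

```typescript
// Sketch: fraction of pixels far brighter than the scene average.
function glareRatio(rgba: Uint8ClampedArray): number {
  const n = rgba.length / 4;
  const luma = new Float64Array(n);
  let sum = 0;
  for (let i = 0; i < n; i++) {
    // Rec. 601 luminance from the R, G, B channels (alpha ignored)
    const l = 0.299 * rgba[i * 4] + 0.587 * rgba[i * 4 + 1] + 0.114 * rgba[i * 4 + 2];
    luma[i] = l;
    sum += l;
  }
  const mean = sum / n;
  // A pixel counts as a specular highlight when it is much brighter than
  // the average; capping at 255 keeps the cutoff valid in bright scenes
  const threshold = Math.min(mean * 2.5, 255);
  let bright = 0;
  for (let i = 0; i < n; i++) if (luma[i] >= threshold) bright++;
  return bright / n;
}
```
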
## Eye Region Landmarks

Extract bounding boxes for the left and right eye regions from MediaPipe 468-point face mesh landmarks.

```typescript
import { getEyeRegionBounds, EYE_LANDMARK_INDICES, validateFaceLandmarks } from '@moveris/shared';

const bounds = getEyeRegionBounds(landmarks);
if (bounds) {
  console.log(bounds.leftEye); // { x, y, width, height } (normalized 0-1)
  console.log(bounds.rightEye); // { x, y, width, height } (normalized 0-1)
}
```

## Frame Analysis Utilities
Utilities for assessing frame quality before submission.

```typescript
import {
  isFaceFullyVisible,
  isFaceInOval,
  calculateFaceCropRegion,
  calculateAdaptiveCropMultiplier,
  detectFaceRoll,
  detectCameraAngle,
} from '@moveris/shared';

// Check if the face is fully visible in the frame
const visibility = isFaceFullyVisible(faceBbox, frameWidth, frameHeight);
console.log(visibility.isVisible); // true/false
console.log(visibility.reason); // 'left_edge' | 'top_edge' | etc.

// Check if the face is within the oval guide
const inOval = isFaceInOval(faceBbox, ovalRegion);
console.log(inOval.isInOval); // true/false
console.log(inOval.alignmentScore); // 0-1

// Calculate the crop region for the face (224x224, aligned with cognito-check constants)
const cropRegion = calculateFaceCropRegion(faceBbox, frameWidth, frameHeight);
// { x, y, size } — square crop region in pixel coordinates

// Calculate the adaptive crop multiplier based on face size
const multiplier = calculateAdaptiveCropMultiplier(faceBbox, frameWidth, frameHeight);
// Returns 2.5–4.0x (matched to cognito-check for ~33% face coverage in 224×224 crop)

// Detect face roll (device tilt) from eye-corner landmarks
const rollResult = detectFaceRoll(landmarks); // requires ≥364 landmarks
// { roll: number (degrees), tooTilted: boolean }

// Detect the camera's vertical angle from the forehead/nose/chin landmark ratio
const angleResult = detectCameraAngle(landmarks); // requires ≥153 landmarks
// { ratio: number, cameraAbove: boolean, cameraBelow: boolean }
// ratio ~1.0 at eye level; >1.35 = camera above user; <0.75 = camera below user
```
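
Given a computed ratio, the classification reduces to the two thresholds noted above. A trivial sketch (how `detectCameraAngle` derives the ratio from the forehead/nose/chin landmarks is not reproduced here):

```typescript
// Classification sketch based on the documented ratio thresholds;
// `classifyCameraAngle` is illustrative, not a package export.
function classifyCameraAngle(ratio: number): { cameraAbove: boolean; cameraBelow: boolean } {
  return {
    cameraAbove: ratio > 1.35, // camera looking down at the user
    cameraBelow: ratio < 0.75, // camera looking up at the user
  };
}
```
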
## Crop Constants (aligned with cognito-check)

These constants control how face crops are generated for the fast-check-crops endpoint:

| Constant | Value | Description |
| -------------------------------- | ----- | ------------------------------------------------ |
| IDEAL_CROP_MULTIPLIER | 2.0 | Default crop region = 2x face size |
| MIN_CROP_MULTIPLIER | 1.8 | Minimum crop (face very close) |
| MAX_CROP_MULTIPLIER | 2.5 | Maximum crop (face very far) |
| FACE_CENTER_VERTICAL_OFFSET | 0.05 | Slight upward shift to include forehead for rPPG |
| TARGET_FACE_PERCENTAGE_IN_CROP | 0.5 | Face should occupy ~50% of 224x224 crop |
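
To show how these constants interact, here is a hedged sketch of deriving a square crop size from a face bounding box (`cropSize` is illustrative; the real `calculateFaceCropRegion` also handles centering and the vertical offset):

```typescript
// Sketch: square crop = face size x multiplier, clamped to the table's
// bounds and to the smaller frame dimension.
const MIN_CROP = 1.8;
const MAX_CROP = 2.5;

function cropSize(faceSize: number, multiplier: number, frameMin: number): number {
  // Keep the multiplier within [MIN_CROP, MAX_CROP]
  const m = Math.min(Math.max(multiplier, MIN_CROP), MAX_CROP);
  // Never exceed the smaller frame dimension
  return Math.min(Math.round(faceSize * m), frameMin);
}
```
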
## Model Configuration

```typescript
import { MODEL_CONFIGS, isDeprecatedModel, getActiveModels } from '@moveris/shared';

// Access model metadata
const config = MODEL_CONFIGS['mixed-10-v2'];
console.log(config.minFrames); // 10
console.log(config.deprecated); // false
console.log(config.aliases); // ['v2', 'latest']

// Check whether a model is deprecated
if (isDeprecatedModel('mixed-10')) {
  const mc = MODEL_CONFIGS['mixed-10'];
  console.log(`Deprecated! Sunset: ${mc.sunsetDate}, replace with ${mc.replacement}`);
}

// Get only active (non-deprecated) models
const active = getActiveModels();
// ['mixed-10-v2', 'mixed-30-v2', ..., '10', '50', '250', 'hybrid-v2-10', ...]
```

## Configuration Constants
```typescript
import { DEFAULT_ENDPOINT, AUTH_CONFIG, RETRY_CONFIG, FRAME_CONFIG } from '@moveris/shared';

console.log(DEFAULT_ENDPOINT); // 'https://api.moveris.com'
console.log(AUTH_CONFIG.apiKeyHeader); // 'X-API-Key'
console.log(RETRY_CONFIG.maxAttempts); // 3
console.log(FRAME_CONFIG.maxBufferSize); // 10
```

## TypeScript Support
This package is written in TypeScript and provides full type definitions.
```typescript
import type {
  LivenessResult,
  LivenessState,
  LivenessConfig,
  FastCheckModel,
  ModelVersion,
  DeprecationInfo,
  ModelConfig,
  FrameSource,
  Verdict,
  CapturedFrame,
  CropData,
  LivenessClientConfig,
  DetectionResult,
  DetectionSummary,
  FaceBoundingBox,
  HeadPose,
  FaceLandmarkPoint,
  LandmarkValidationResult,
  EyeRegionQuality,
  EyeQualityThresholds,
  EyeRegionBounds,
  EyeRegionsBounds,
} from '@moveris/shared';
```

## License

MIT