@aware/face-capture
Framework-agnostic face capture SDK for biometric authentication. Works with vanilla JavaScript, React, Angular, Vue, and any other framework.
Features
- KnomiWeb-compatible - Generates identical payloads to legacy KnomiWeb SDK
- Framework Agnostic - Works with any JavaScript framework or vanilla JS
- Multiple Providers - Pluggable face detection (face-api.js, MediaPipe, custom)
- Built-in Encryption - PKCS#1 v1.5 RSA + AES-256-CBC encryption via forge.js
- Brightscreen Capture - Flash sequence for enhanced image capture
- Responsive - Works on desktop and mobile browsers
- Web Component - Use as <aware-face-capture> in any HTML, with modal or embedded modes
- Tree-Shakeable - Only bundle what you use
- Multiple Formats - ESM, CommonJS, UMD, and Web Component
- State Management - Built-in capture state handling with retry logic
- Manual Fallback - Automatic manual capture button after quality timeout
Installation
npm install @aware/face-capture
Or use via CDN:
<script src="https://unpkg.com/@aware/face-capture/dist/browser.min.js"></script>
Runtime Dependencies
The SDK dynamically loads the following dependencies from CDN when needed:
- forge.js - For PKCS#1 v1.5 encryption (loaded automatically when encryption is enabled)
- face-api.js - For face detection when using faceapi provider (loaded if not present)
- MediaPipe - For face detection when using mediapipe provider (loaded if not present)
These are loaded on-demand to minimize initial bundle size.
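If you prefer to serve these dependencies yourself, you can load them before initializing the SDK; because face-api.js is only fetched when it is not already present, the copy you load first should be picked up. A minimal sketch (the self-hosted path is a placeholder):
```javascript
// Pre-load a self-hosted copy of face-api.js so the SDK's
// "load if not present" check finds it already on the page.
// '/vendor/face-api.min.js' is a placeholder for your own hosting path.
function loadScript(src) {
  return new Promise((resolve, reject) => {
    const script = document.createElement('script');
    script.src = src;
    script.onload = () => resolve();
    script.onerror = () => reject(new Error(`Failed to load ${src}`));
    document.head.appendChild(script);
  });
}

await loadScript('/vendor/face-api.min.js');
```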
Quick Start
Vanilla JavaScript
import { AwareFaceCapture } from '@aware/face-capture';
const sdk = new AwareFaceCapture({
outputFormat: 'knomiweb',
brightscreen: false, // true for 3-frame flash sequence, false for normal capture
frameCount: 1, // number of frames when brightscreen is false (default: 1)
encryption: {
enabled: true,
publicKey: 'YOUR_RSA_PUBLIC_KEY',
method: 'pkcs1' // optional, defaults to 'pkcs1'
}
});
// Initialize with video element
await sdk.initialize(document.getElementById('video'));
// Subscribe to events
sdk.on('feedback', (event) => console.log('Feedback:', event.message));
sdk.on('quality', (result) => console.log('Quality:', result));
sdk.on('progress', (progress) => console.log('Progress:', progress));
sdk.on('error', (error) => console.error('Error:', error));
// Capture face
const result = await sdk.capture();
// Or manually stop capture
sdk.stopCapture();
// Clean up when done
sdk.destroy();
// Send to your backend
await fetch('/api/verify', {
method: 'POST',
body: JSON.stringify(result.payload)
});
Web Component
<!-- Embedded mode (default) -->
<aware-face-capture
provider="faceapi"
output-format="knomiweb"
brightscreen="true"
frame-count="1"
encryption="true"
public-key="YOUR_RSA_PUBLIC_KEY"
mode="embedded"
auto-start="false"
show-button="true"
button-text="Start Face Capture"
quality="true"
hold-still-ms="2000"
detection-frequency="200"
username="[email protected]">
</aware-face-capture>
<!-- Modal mode -->
<aware-face-capture
mode="modal"
brightscreen="false"
frame-count="2"
encryption="true"
public-key="YOUR_RSA_PUBLIC_KEY">
</aware-face-capture>
<script>
const element = document.querySelector('aware-face-capture');
// Listen to events
element.addEventListener('capture-complete', (event) => {
console.log('Captured:', event.detail);
});
element.addEventListener('ready', () => {
console.log('SDK initialized');
});
element.addEventListener('state-change', (event) => {
console.log('State changed:', event.detail);
});
element.addEventListener('retry-requested', (event) => {
console.log('Retry attempt:', event.detail.attempt);
});
// Programmatic control
await element.start(); // Start capture
element.stop(); // Stop capture
await element.setPublicKey('NEW_KEY'); // Update public key
// State management
element.setState('processing', { message: 'Custom processing...' });
element.showSuccess('Capture successful!', { autoClose: 3000 });
element.showError('Something went wrong');
element.showWarning('Please adjust lighting');
element.reset(); // Reset to initial state
element.complete(); // Mark as complete and cleanup
// Configure state behavior
element.setStateConfig({
autoShowProcessing: true,
processingTimeout: 30000,
retryLimit: 3,
freezeFrameOnCapture: true,
keepCameraForRetry: true
});
</script>
React
import { AwareFaceCapture } from '@aware/face-capture';
import { useEffect, useRef } from 'react';
function FaceCapture({ onCapture }) {
const videoRef = useRef(null);
const sdkRef = useRef(null);
useEffect(() => {
const initSDK = async () => {
sdkRef.current = new AwareFaceCapture({
outputFormat: 'knomiweb',
livenessMode: 'active'
});
await sdkRef.current.initialize(videoRef.current);
};
initSDK();
return () => sdkRef.current?.destroy();
}, []);
const handleCapture = async () => {
const result = await sdkRef.current.capture();
onCapture(result);
};
return (
<>
<video ref={videoRef} autoPlay muted />
<button onClick={handleCapture}>Capture</button>
</>
);
}
Vue
<template>
<div>
<video ref="videoEl" autoplay muted></video>
<button @click="capture">Capture</button>
</div>
</template>
<script>
import { AwareFaceCapture } from '@aware/face-capture';
export default {
data() {
return {
sdk: null
};
},
async mounted() {
this.sdk = new AwareFaceCapture({
outputFormat: 'knomiweb',
livenessMode: 'passive'
});
await this.sdk.initialize(this.$refs.videoEl);
},
beforeUnmount() {
this.sdk?.destroy();
},
methods: {
async capture() {
const result = await this.sdk.capture();
this.$emit('captured', result);
}
}
};
</script>
Angular
import { Component, ElementRef, ViewChild, OnInit, OnDestroy } from '@angular/core';
import { AwareFaceCapture } from '@aware/face-capture';
@Component({
selector: 'app-face-capture',
template: `
<video #videoElement autoplay muted></video>
<button (click)="capture()">Capture</button>
`
})
export class FaceCaptureComponent implements OnInit, OnDestroy {
@ViewChild('videoElement') videoElement!: ElementRef<HTMLVideoElement>;
private sdk: AwareFaceCapture | null = null;
async ngOnInit() {
this.sdk = new AwareFaceCapture({
outputFormat: 'knomiweb',
livenessMode: 'active'
});
await this.sdk.initialize(this.videoElement.nativeElement);
}
ngOnDestroy() {
this.sdk?.destroy();
}
async capture() {
const result = await this.sdk?.capture();
console.log('Captured:', result);
}
}
Configuration
const config = {
// Face detection provider
provider: 'faceapi', // 'faceapi' | 'mediapipe' | custom provider instance
// Output format
outputFormat: 'knomiweb', // 'knomiweb' | 'raw' | 'custom'
// Enable brightscreen capture (3 frames with flashes)
brightscreen: false, // true for brightscreen, false for normal capture
// Enable quality checks
quality: true,
// Number of frames to capture (only used when brightscreen is false)
frameCount: 1, // default: 1, can be any number for multi-frame capture
// Hold still duration (ms)
holdStillMs: 2000,
// Detection frequency (ms)
detectionFrequency: 200,
// Flash duration for brightscreen mode (ms)
flashDuration: 800,
// Username for metadata
username: '[email protected]',
// Workflow configuration (auto-detected if not specified)
workflow: 'hotel4', // Specific workflow override
workflowCategory: 'hotel', // 'charlie' | 'delta' | 'hotel' | 'foxtrot'
securityLevel: 4, // 2 (high usability) | 4 (balanced) | 6 (high security)
// Video element ID (if pre-existing)
videoElementId: 'myVideoElement',
// Encryption configuration
encryption: {
enabled: true,
publicKey: 'YOUR_RSA_PUBLIC_KEY',
method: 'pkcs1' // Uses forge.js for PKCS#1 v1.5
},
// Custom formatter function
customFormatter: (frames) => {
return { custom: 'payload', frames };
}
};
API Reference
AwareFaceCapture
Main SDK class.
Methods
- constructor(config) - Create a new instance with the given configuration
- initialize(videoElement?) - Initialize camera and face detection; creates a video element if not provided
- capture() - Start the capture process; returns a promise with a CaptureOutput result
- stopCapture() - Manually stop the capture process
- setPublicKey(key) - Set or update the RSA public key for encryption
- calculateROI(width, height) - Calculate the region of interest for face detection
- on(event, handler) - Subscribe to events; returns an unsubscribe function
- destroy() - Clean up all resources and event listeners
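For example, on() can be used for temporary subscriptions since it returns an unsubscribe function; a small sketch using only the calls listed above:
```javascript
const sdk = new AwareFaceCapture({ outputFormat: 'knomiweb' });
await sdk.initialize(); // creates a video element since none is passed

// on() returns an unsubscribe function, so listeners can be removed later
const unsubscribe = sdk.on('feedback', (event) => {
  console.log('Feedback:', event.message);
});

const result = await sdk.capture();
unsubscribe();  // stop receiving feedback events
sdk.destroy();  // release the camera and remaining listeners
```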
Events
- feedback - UI feedback messages: { message: string, color: 'red' | 'yellow' | 'green' }
- quality - Quality check results: { passed: boolean, score: number, feedback: QualityFeedback[], details: object }
- progress - Capture progress updates: { stage: string, progress: number, message?: string }
- error - Error events: { code: string, message: string, details?: any }
- complete - Fired when capture completes, with the output data
Web Component Methods
Lifecycle Methods
- start() - Initialize and start capture
- stop() - Stop capture and clean up
- capture() - Perform capture and return the result
- destroy() - Complete cleanup
State Management
- setState(state, options) - Set the capture state ('idle' | 'capturing' | 'processing' | 'success' | 'error' | 'warning')
- showSuccess(message?, options?) - Show the success state
- showError(message?, options?) - Show the error state
- showWarning(message?, options?) - Show the warning state
- reset() - Reset to the initial state
- complete() - Mark as complete and clean up
- setStateConfig(config) - Configure state behavior
Configuration
- setPublicKey(key) - Update the encryption public key
- waitForReady() - Wait for initialization to complete
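A short sketch combining the two, assuming the element was declared with auto-start disabled:
```javascript
const element = document.querySelector('aware-face-capture');

// Wait for initialization, set (or rotate) the encryption key, then start
await element.waitForReady();
await element.setPublicKey('YOUR_RSA_PUBLIC_KEY');
await element.start();
```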
Web Component Attributes
- provider - Face detection provider ('faceapi' | 'mediapipe')
- output-format - Output format ('knomiweb' | 'raw' | 'custom')
- brightscreen - Enable brightscreen capture ('true' | 'false')
- encryption - Enable encryption ('true' | 'false')
- public-key - RSA public key for encryption
- frame-count - Number of frames to capture
- quality - Enable quality checks ('true' | 'false')
- username - Username for metadata
- workflow - Specific workflow override (e.g., 'charlie4', 'hotel6', 'foxtrot2')
- workflow-category - Workflow category ('charlie' | 'delta' | 'hotel' | 'foxtrot')
- security-level - Security level ('2' | '4' | '6')
- flash-duration - Flash duration for brightscreen (ms)
- hold-still-ms - Hold still duration (ms)
- detection-frequency - Face detection frequency (ms)
- mode - Display mode ('embedded' | 'modal')
- auto-start - Auto-start capture ('true' | 'false')
- show-button - Show the start button ('true' | 'false')
- button-text - Custom button text
Web Component Events
- capture-complete - Fired when capture completes, with the result
- ready - Fired when the SDK is initialized
- state-change - Fired on state transitions
- retry-requested - Fired when a retry is requested
- action - Fired when a state action button is clicked
- timeout - Fired on processing timeout
- reset - Fired when a reset occurs
- complete - Fired when marked complete
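The action and timeout events are not shown in the Quick Start; a sketch wiring them up. Note that the exact shape of event.detail for action is an assumption here, not something documented above:
```javascript
const element = document.querySelector('aware-face-capture');

// Fired when a state action button (e.g. 'retry' or 'done' from setState)
// is clicked. Assumption: event.detail identifies the clicked action.
element.addEventListener('action', (event) => {
  console.log('Action clicked:', event.detail);
});

// Fired when the processing state exceeds processingTimeout
element.addEventListener('timeout', () => {
  element.showError('Processing timed out, please try again');
});
```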
Output Formats
KnomiWeb Format (Encrypted)
{
"encrypted": true,
"payload": {
"p": "base64_encrypted_payload",
"key": "base64_encrypted_aes_key",
"iv": "base64_initialization_vector"
}
}
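A backend that holds the matching RSA private key can unwrap this structure itself. A minimal Node.js sketch, assuming the three fields are base64-encoded, key unwraps via PKCS#1 v1.5 to a 32-byte AES key, and p is the AES-256-CBC ciphertext of the unencrypted payload shown below:
```javascript
const crypto = require('crypto');

// Illustrative server-side decryption of the encrypted payload above
function decryptPayload({ p, key, iv }, privateKeyPem) {
  // Unwrap the AES key with the RSA private key (PKCS#1 v1.5 padding)
  const aesKey = crypto.privateDecrypt(
    { key: privateKeyPem, padding: crypto.constants.RSA_PKCS1_PADDING },
    Buffer.from(key, 'base64')
  );
  // Decrypt the payload body with AES-256-CBC
  const decipher = crypto.createDecipheriv('aes-256-cbc', aesKey, Buffer.from(iv, 'base64'));
  const plaintext = Buffer.concat([
    decipher.update(Buffer.from(p, 'base64')),
    decipher.final()
  ]);
  return JSON.parse(plaintext.toString('utf8'));
}
```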
KnomiWeb Format (Unencrypted)
{
"encrypted": false,
"payload": {
"video": {
"client_version": "Aware Face Capture SDK v2.0.0",
"meta_data": {
"client_device_brand": "Unknown",
"client_device_model": "Browser",
"client_os_version": "Unknown",
"username": ""
},
"workflow_data": {
"frames": [
{
"data": "base64_image",
"tags": [], // For brightscreen: [], ["DARKENED"], ["BRIGHTENED"]
"timestamp": 1234567890
}
],
"rotation": 0,
"timestamp": 1234567890,
"workflow": "foxtrot4" // Auto-detected based on device type and frame count
}
}
}
}
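For quick debugging, the interesting fields can be pulled out of an unencrypted output; a small helper based purely on the structure above:
```javascript
// Summarize an unencrypted KnomiWeb output like the JSON shown above
function summarizeOutput(output) {
  const workflowData = output.payload.video.workflow_data;
  return {
    workflow: workflowData.workflow,            // e.g. "foxtrot4"
    frameCount: workflowData.frames.length,
    tags: workflowData.frames.map((frame) => frame.tags)
  };
}
```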
Raw Format
{
"images": ["base64_image1", "base64_image2"],
"frames": [
{
"data": "base64_image",
"timestamp": 1234567890,
"tags": [],
"quality": { // Optional quality data if quality checks enabled
"passed": true,
"score": 100,
"feedback": [],
"details": {
"faceDetected": true,
"faceCount": 1,
"position": "good",
"size": "good",
"blur": 0,
"brightness": 100,
"eyesOpen": true
}
}
}
]
}
Custom Format
When using a custom formatter function, the output is whatever your function returns:
const config = {
outputFormat: 'custom',
customFormatter: (frames) => {
// frames is an array of Frame objects
return {
customField: 'customValue',
processedFrames: frames.map(f => f.data),
metadata: {
timestamp: Date.now(),
frameCount: frames.length
}
};
}
};
Brightscreen Capture Mode
Brightscreen mode captures three frames with different screen flashes:
- Normal frame - Captured with normal lighting (no tag)
- Darkened frame - Captured with black screen overlay (tag: "DARKENED")
- Brightened frame - Captured with white screen overlay (tag: "BRIGHTENED")
The flash sequence:
- Each flash lasts 800ms by default (configurable via flashDuration)
- Frames are captured at the midpoint of each flash
- The total brightscreen capture takes ~2.4 seconds
const sdk = new AwareFaceCapture({
brightscreen: true, // Enable brightscreen capture
flashDuration: 800 // Customize flash duration
});
// For multi-frame capture without brightscreen
const multiFrameSdk = new AwareFaceCapture({
brightscreen: false,
frameCount: 5 // Capture 5 frames without flash sequence
});
Quality Feedback
The SDK provides real-time quality feedback during capture:
Quality Feedback Types
- NO_FACE_DETECTED - No face found in the frame
- MULTIPLE_FACES - More than one face detected
- MOVE_LEFT / MOVE_RIGHT / MOVE_UP / MOVE_DOWN - Position adjustment needed
- FACE_TOO_FAR / FACE_TOO_CLOSE - Distance adjustment needed
- EYES_CLOSED - Eyes appear to be closed
- EYES_COVERED - Eyes are obscured
- IMAGE_BLURRY - Image lacks sharpness
- INSUFFICIENT_LIGHTING - Lighting is too poor
- NOT_LOOKING_STRAIGHT - Face orientation is off
- PROCESSING_ERROR - Internal processing error
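A sketch that turns these codes into user-facing hints during capture. The exact shape of the entries in the quality event's feedback array is not documented above, so the code below assumes each entry is either a code string or an object with a type field:
```javascript
// Map a subset of the feedback codes to user-facing hints
const HINTS = {
  NO_FACE_DETECTED: 'Center your face in the frame',
  FACE_TOO_FAR: 'Move closer to the camera',
  FACE_TOO_CLOSE: 'Move further from the camera',
  IMAGE_BLURRY: 'Hold still',
  INSUFFICIENT_LIGHTING: 'Find better lighting'
};

sdk.on('quality', (result) => {
  if (result.passed) return;
  result.feedback.forEach((item) => {
    // Assumption: entries are code strings or objects carrying a type field
    const code = typeof item === 'string' ? item : item.type;
    if (HINTS[code]) console.log(HINTS[code]);
  });
});
```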
Manual Capture Fallback
If quality checks are enabled and the user cannot meet quality requirements within 5 seconds, a manual capture button automatically appears, allowing users to bypass quality checks.
Backend Integration
Knomi Backend
const result = await sdk.capture();
await fetch('https://your-knomi-server/analyzeEncrypted', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(result.payload)
});
AwareID Backend
const result = await sdk.capture();
await fetch('https://your-awareid-server/enrollment/addFace', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${token}`
},
body: JSON.stringify({
enrollmentToken: enrollmentToken,
faceLivenessData: result.payload
})
});
Browser Support
- Chrome 90+
- Firefox 88+
- Safari 14+
- Edge 90+
- Mobile Safari 14+
- Chrome Android 90+
Requirements
- WebRTC camera access (getUserMedia)
- Canvas API
- ES2020 support
- Web Crypto API (optional - falls back to forge.js)
The SDK automatically checks browser compatibility on load and warns about missing features.
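The SDK performs this check itself, but if you want to gate your own UI before loading it, here is a rough pre-flight check along the same lines (plain browser APIs, not part of the SDK):
```javascript
// Rough feature check mirroring the requirements listed above
function supportsFaceCapture() {
  const hasCamera = !!(navigator.mediaDevices && navigator.mediaDevices.getUserMedia);
  const hasCanvas = !!document.createElement('canvas').getContext('2d');
  const hasWebCrypto = !!(window.crypto && window.crypto.subtle); // optional: forge.js fallback
  return { hasCamera, hasCanvas, hasWebCrypto, ok: hasCamera && hasCanvas };
}

if (!supportsFaceCapture().ok) {
  console.warn('This browser does not appear to support face capture');
}
```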
Face Detection Providers
FaceAPI Provider
The face-api.js provider loads models from CDN by default. You can override the model path:
const sdk = new AwareFaceCapture({
provider: 'faceapi'
});
// Models are loaded from:
// - CDN: https://cdn.jsdelivr.net/gh/justadudewhohacks/face-api.js@master/weights
// - Or from your awareid_host if configured
// - Or from local /models directory
Required models:
- tiny_face_detector_model
- face_landmark_68_tiny_model
MediaPipe Provider
The MediaPipe provider loads models from Google's CDN:
const sdk = new AwareFaceCapture({
provider: 'mediapipe'
});
// Automatically loads from:
// https://storage.googleapis.com/mediapipe-models/face_detector/
Custom Provider
You can implement a custom provider by extending BaseFaceDetectionProvider:
import { BaseFaceDetectionProvider } from '@aware/face-capture';
class MyCustomProvider extends BaseFaceDetectionProvider {
async initialize() { /* ... */ }
async detect(input) { /* ... */ }
async checkQuality(frame, roi) { /* ... */ }
dispose() { /* ... */ }
}
const sdk = new AwareFaceCapture({
provider: new MyCustomProvider()
});
## State Management
The Web Component includes a comprehensive state management system:
### States
- `idle` - Initial state, no capture in progress
- `capturing` - Actively capturing from camera
- `processing` - Processing captured data
- `success` - Capture completed successfully
- `error` - Error occurred during capture
- `warning` - Warning state (non-fatal issue)
### State Configuration
```javascript
const element = document.querySelector('aware-face-capture');
element.setStateConfig({
autoShowProcessing: true, // Show processing state automatically
processingTimeout: 30000, // Timeout for processing state (ms)
retryLimit: 3, // Maximum retry attempts
freezeFrameOnCapture: true, // Show last frame during processing
keepCameraForRetry: true // Keep camera active for retry
});
```

### State Transitions

```javascript
// Manual state control
element.setState('processing', {
message: 'Analyzing face...',
timeout: 10000
});
element.setState('success', {
message: 'Verification complete!',
autoClose: 3000, // Auto close after 3 seconds
actions: [
{ label: 'Done', action: 'done', primary: true },
{ label: 'Retry', action: 'retry' }
]
});
```

TypeScript Support
Full TypeScript definitions are included:
import {
AwareFaceCapture,
CaptureConfig,
CaptureOutput,
Frame,
OutputFormat,
QualityResult,
FeedbackEvent,
ProgressEvent,
ErrorEvent,
CaptureState,
StateOptions
} from '@aware/face-capture';
// All types are fully typed
const config: CaptureConfig = {
provider: 'faceapi',
brightscreen: true,
frameCount: 1,
// ...
};
Workflow System
The SDK automatically selects the appropriate workflow based on device type and security requirements:
Workflow Categories
- charlie: Mobile phone front camera (3 frames with SDK, 1 without)
- delta: Mobile phone back camera (3 frames with SDK, 1 without)
- hotel: Web camera (always 1 frame)
- foxtrot: Mobile phone front camera (always 1 frame)
Security Levels
Each workflow has three security variants:
- 2: High usability, lower security
- 4: Balanced usability and security (recommended default)
- 6: High security, lower usability
Automatic Detection
If no workflow is specified, the SDK automatically selects:
- Desktop (Web Camera): hotel4
- Mobile (Front Camera): foxtrot4 for a single frame, charlie4 for multiple frames
- Mobile (Back Camera): delta4
Manual Configuration
// Let SDK auto-detect
const autoConfig = new AwareFaceCapture({
// Workflow will be auto-selected based on device
});
// Specify category and security level
const categoryConfig = new AwareFaceCapture({
workflowCategory: 'hotel',
securityLevel: 6 // High security
});
// Override with specific workflow
const specificConfig = new AwareFaceCapture({
workflow: 'charlie2' // Specific workflow
});
Development
# Install dependencies
npm install
# Build library
npm run build
# Watch mode
npm run dev
# Run tests
npm test
# Type check
npm run type-check
# Lint
npm run lint
Package Exports
The package provides multiple entry points:
// Main SDK
import { AwareFaceCapture } from '@aware/face-capture';
// Specific providers
import { FaceApiProvider } from '@aware/face-capture/providers/faceapi';
import { MediaPipeProvider } from '@aware/face-capture/providers/mediapipe';
// Web Component only
import '@aware/face-capture/web-component';
// Utilities
import { BrowserDetection } from '@aware/face-capture';
import { KnomiWebFormatter } from '@aware/face-capture';
import { KnomiWebEncryption } from '@aware/face-capture';
License
Aware
Support
For issues and questions, please visit: https://github.com/awareinc/aware-face-capture/issues
