capacitor-plugin-camera-forked v3.1.125

A custom Capacitor camera plugin with additional features.
capacitor-plugin-camera
A capacitor camera plugin.
Supported Platforms
- Android (based on CameraX)
- iOS (based on AVCaptureSession)
- Web (based on getUserMedia with Dynamsoft Camera Enhancer)
Versions
For Capacitor 5, use versions 1.x.
For Capacitor 6, use versions 2.x.
For Capacitor 7, use versions 3.x.
Install
npm install capacitor-plugin-camera
npx cap sync

Get Bitmap/UIImage via Reflection
If you are developing a plugin, you can use reflection to get the camera frames as Bitmap or UIImage on the native side.
Java:
Class<?> cls = Class.forName("com.tonyxlh.capacitor.camera.CameraPreviewPlugin");
Method m = cls.getMethod("getBitmap");
Bitmap bitmap = (Bitmap) m.invoke(null);

Objective-C:
- (UIImage*)getUIImage{
UIImage *image = ((UIImage* (*)(id, SEL))objc_msgSend)(objc_getClass("CameraPreviewPlugin"), sel_registerName("getBitmap"));
return image;
}

You have to call saveFrame beforehand.
Declare Permissions
To use the camera and microphone, we need to declare permissions.
Add the following to Android's AndroidManifest.xml:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />

Add the following to iOS's Info.plist:
<key>NSCameraUsageDescription</key>
<string>For camera usage</string>
<key>NSMicrophoneUsageDescription</key>
<string>For video recording</string>

FAQ
Why can't I see the camera?
For native platforms, the plugin puts the native camera view behind the webview and sets the webview as transparent so that we can display HTML elements above the camera.
You may need to add the style below on your app's HTML or body element to avoid blocking the camera view:
ion-content {
--background: transparent;
}

In dark mode, it is necessary to set the --ion-background-color property. You can do this with the following code:
document.documentElement.style.setProperty('--ion-background-color', 'transparent');

API
- initialize(...)
- getResolution()
- setResolution(...)
- getAllCameras()
- getSelectedCamera()
- selectCamera(...)
- setScanRegion(...)
- setZoom(...)
- setFocus(...)
- setDefaultUIElementURL(...)
- setElement(...)
- startCamera()
- stopCamera()
- takeSnapshot(...)
- detectBlur(...)
- saveFrame()
- takeSnapshot2(...)
- takePhoto(...)
- toggleTorch(...)
- getOrientation()
- startRecording()
- stopRecording(...)
- setLayout(...)
- requestCameraPermission()
- requestMicroPhonePermission()
- isOpen()
- addListener('onPlayed', ...)
- addListener('onOrientationChanged', ...)
- removeAllListeners()
- Interfaces
- Type Aliases
initialize(...)
initialize(options?: { quality?: number | undefined; } | undefined) => Promise<void>

| Param | Type |
| ------------- | ---------------------------------- |
| options | { quality?: number; } |
getResolution()
getResolution() => Promise<{ resolution: string; }>

Returns: Promise<{ resolution: string; }>
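getResolution resolves to a string. Assuming it uses the common "WIDTHxHEIGHT" form (e.g. "1920x1080" — the exact format is an assumption, check the value on your device), a small helper can turn it into numbers:

```typescript
// Parse a resolution string such as "1920x1080" into numeric parts.
// The "WIDTHxHEIGHT" format is an assumption; verify against your device.
function parseResolution(resolution: string): { width: number; height: number } | null {
  const match = /^(\d+)x(\d+)$/.exec(resolution.trim());
  if (!match) return null; // unexpected format
  return { width: Number(match[1]), height: Number(match[2]) };
}
```

Typical use: `const { resolution } = await CameraPreview.getResolution(); const size = parseResolution(resolution);`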
setResolution(...)
setResolution(options: { resolution: number; }) => Promise<void>

| Param | Type |
| ------------- | ------------------------------------ |
| options | { resolution: number; } |
getAllCameras()
getAllCameras() => Promise<{ cameras: string[]; }>

Returns: Promise<{ cameras: string[]; }>
getSelectedCamera()
getSelectedCamera() => Promise<{ selectedCamera: string; }>

Returns: Promise<{ selectedCamera: string; }>
selectCamera(...)
selectCamera(options: { cameraID: string; }) => Promise<void>

| Param | Type |
| ------------- | ---------------------------------- |
| options | { cameraID: string; } |
setScanRegion(...)
setScanRegion(options: { region: ScanRegion; }) => Promise<void>

| Param | Type |
| ------------- | -------------------------------------------------------------- |
| options | { region: ScanRegion; } |
setZoom(...)
setZoom(options: { factor: number; }) => Promise<void>

| Param | Type |
| ------------- | -------------------------------- |
| options | { factor: number; } |
setFocus(...)
setFocus(options: { x: number; y: number; }) => Promise<void>

| Param | Type |
| ------------- | -------------------------------------- |
| options | { x: number; y: number; } |
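A common use for setFocus is tap-to-focus. A minimal sketch of converting a tap inside the preview element into the x/y values for setFocus — normalized 0..1 coordinates are an assumption here, so confirm the expected range for your plugin version:

```typescript
// Convert a tap position inside the preview element into normalized
// coordinates for setFocus. The 0..1 range is an assumption; verify it.
function tapToFocusPoint(
  tapX: number, // clientX of the touch/click
  tapY: number, // clientY of the touch/click
  rect: { left: number; top: number; width: number; height: number } // preview bounding rect
): { x: number; y: number } {
  const clamp = (v: number) => Math.min(1, Math.max(0, v));
  return {
    x: clamp((tapX - rect.left) / rect.width),
    y: clamp((tapY - rect.top) / rect.height),
  };
}
```

In a click handler: `await CameraPreview.setFocus(tapToFocusPoint(e.clientX, e.clientY, el.getBoundingClientRect()));`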
setDefaultUIElementURL(...)
setDefaultUIElementURL(url: string) => Promise<void>

Web Only

| Param | Type |
| --------- | ------------------- |
| url | string |
setElement(...)
setElement(ele: any) => Promise<void>

Web Only

| Param | Type |
| --------- | ---------------- |
| ele | any |
startCamera()
startCamera() => Promise<void>

stopCamera()

stopCamera() => Promise<void>

takeSnapshot(...)

takeSnapshot(options: { quality?: number; checkBlur?: boolean; }) => Promise<{ base64: string; confidence?: number; boundingBoxes?: number[][]; isBlur?: boolean; detectionMethod?: string; }>

Take a snapshot as base64.
| Param | Type |
| ------------- | ------------------------------------------------------- |
| options | { quality?: number; checkBlur?: boolean; } |
Returns: Promise<{ base64: string; confidence?: number; boundingBoxes?: number[][]; isBlur?: boolean; detectionMethod?: string; }>
detectBlur(...)
detectBlur(options: { image: string; }) => Promise<{ isBlur: boolean; blurConfidence: number; sharpConfidence: number; method?: string; boundingBoxes?: number[][]; objectCount?: number; wordCount?: number; readableWords?: number; }>

Analyze an image for blur detection with detailed confidence scores.
| Param | Type |
| ------------- | ------------------------------- |
| options | { image: string; } |
Returns: Promise<{ isBlur: boolean; blurConfidence: number; sharpConfidence: number; method?: string; boundingBoxes?: number[][]; objectCount?: number; wordCount?: number; readableWords?: number; }>
saveFrame()
saveFrame() => Promise<{ success: boolean; }>

Save a frame internally. Android and iOS only.
Returns: Promise<{ success: boolean; }>
takeSnapshot2(...)
takeSnapshot2(options: { canvas: HTMLCanvasElement; maxLength?: number; }) => Promise<{ scaleRatio?: number; }>

Take a snapshot onto a canvas. Web Only
| Param | Type |
| ------------- | ------------------------------------------------- |
| options | { canvas: HTMLCanvasElement; maxLength?: number; } |
Returns: Promise<{ scaleRatio?: number; }>
takePhoto(...)
takePhoto(options: { pathToSave?: string; includeBase64?: boolean; }) => Promise<{ path?: string; base64?: string; blob?: Blob; confidence?: number; }>

| Param | Type |
| ------------- | -------------------------------------------------------------- |
| options | { pathToSave?: string; includeBase64?: boolean; } |
Returns: Promise<{ path?: string; base64?: string; blob?: Blob; confidence?: number; }>
toggleTorch(...)
toggleTorch(options: { on: boolean; }) => Promise<void>

| Param | Type |
| ------------- | ----------------------------- |
| options | { on: boolean; } |
getOrientation()
getOrientation() => Promise<{ orientation: 'PORTRAIT' | 'LANDSCAPE'; }>

Get the orientation of the device.
Returns: Promise<{ orientation: 'PORTRAIT' | 'LANDSCAPE'; }>
startRecording()
startRecording() => Promise<void>

stopRecording(...)

stopRecording(options: { includeBase64?: boolean; }) => Promise<{ path?: string; base64?: string; blob?: Blob; }>

| Param | Type |
| ------------- | ----------------------------------------- |
| options | { includeBase64?: boolean; } |
Returns: Promise<{ path?: string; base64?: string; blob?: Blob; }>
setLayout(...)
setLayout(options: { top: string; left: string; width: string; height: string; }) => Promise<void>

| Param | Type |
| ------------- | -------------------------------------------------------------------------- |
| options | { top: string; left: string; width: string; height: string; } |
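setLayout takes its four values as strings. A minimal sketch of building that object from fractional values — that percentage strings such as "50%" are accepted is an assumption (pixel strings may also work), so verify against your plugin version:

```typescript
// Build the setLayout options from fractions (0..1), expressed as
// percentage strings. Accepting "NN%" strings is an assumption here.
function layoutFromFractions(
  top: number, left: number, width: number, height: number
): { top: string; left: string; width: string; height: string } {
  const pct = (v: number) => `${(v * 100).toFixed(0)}%`;
  return { top: pct(top), left: pct(left), width: pct(width), height: pct(height) };
}
```

For example, `await CameraPreview.setLayout(layoutFromFractions(0, 0, 1, 0.5));` would place the preview in the top half of the screen.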
requestCameraPermission()
requestCameraPermission() => Promise<void>

requestMicroPhonePermission()

requestMicroPhonePermission() => Promise<void>

isOpen()

isOpen() => Promise<{ isOpen: boolean; }>

Returns: Promise<{ isOpen: boolean; }>
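The permission and lifecycle methods above combine into a simple startup flow. A minimal sketch, with the plugin surface passed in as a parameter so the flow is easy to test (in an app you would pass the imported CameraPreview object; method names follow the API list above):

```typescript
// Subset of the plugin surface used by the startup flow.
interface CameraLike {
  requestCameraPermission(): Promise<void>;
  initialize(options?: { quality?: number }): Promise<void>;
  isOpen(): Promise<{ isOpen: boolean }>;
  startCamera(): Promise<void>;
}

// Request permission, initialize, then start the camera if not already open.
async function openCamera(camera: CameraLike, quality = 85): Promise<void> {
  await camera.requestCameraPermission(); // ask before touching the camera
  await camera.initialize({ quality });
  const { isOpen } = await camera.isOpen();
  if (!isOpen) {
    await camera.startCamera();
  }
}
```

Injecting the plugin object keeps the flow unit-testable with a fake implementation; in production code you can simply call `openCamera(CameraPreview)`.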
addListener('onPlayed', ...)
addListener(eventName: 'onPlayed', listenerFunc: onPlayedListener) => Promise<PluginListenerHandle>

| Param | Type |
| ------------------ | ------------------------------------------------------------- |
| eventName | 'onPlayed' |
| listenerFunc | onPlayedListener |
Returns: Promise<PluginListenerHandle>
addListener('onOrientationChanged', ...)
addListener(eventName: 'onOrientationChanged', listenerFunc: onOrientationChangedListener) => Promise<PluginListenerHandle>

| Param | Type |
| ------------------ | ------------------------------------------------------------------------------------- |
| eventName | 'onOrientationChanged' |
| listenerFunc | onOrientationChangedListener |
Returns: Promise<PluginListenerHandle>
removeAllListeners()
removeAllListeners() => Promise<void>

Interfaces
ScanRegion
measuredByPercentage: 0 means the values are in pixels; 1 means they are percentages.
| Prop | Type |
| -------------------------- | ------------------- |
| left | number |
| top | number |
| right | number |
| bottom | number |
| measuredByPercentage | number |
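A minimal sketch of resolving a ScanRegion to pixel coordinates for a given frame size. That percentage values run 0-100 (rather than 0-1) is an assumption; verify it for your plugin version:

```typescript
// ScanRegion as documented: left/top/right/bottom plus measuredByPercentage
// (0 = values are pixels, 1 = values are percentages).
interface ScanRegion {
  left: number;
  top: number;
  right: number;
  bottom: number;
  measuredByPercentage: number;
}

// Resolve a region to pixel coordinates. The 0-100 percent range is assumed.
function regionToPixels(region: ScanRegion, frameWidth: number, frameHeight: number): ScanRegion {
  if (region.measuredByPercentage === 0) return { ...region }; // already pixels
  return {
    left: Math.round((region.left / 100) * frameWidth),
    top: Math.round((region.top / 100) * frameHeight),
    right: Math.round((region.right / 100) * frameWidth),
    bottom: Math.round((region.bottom / 100) * frameHeight),
    measuredByPercentage: 0,
  };
}
```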
PluginListenerHandle
| Prop | Type |
| ------------ | ----------------------------------------- |
| remove | () => Promise<void> |
Type Aliases
onPlayedListener
(result: { resolution: string; }): void
onOrientationChangedListener
(): void
Blur Detection
The plugin includes blur detection capabilities using TensorFlow Lite models with Laplacian variance fallback, providing consistent results across all platforms.
Analyze Existing Images
Use the detectBlur method to analyze any base64 image with detailed confidence scores:
// Analyze an existing image (base64 string or data URL)
const result = await CameraPreview.detectBlur({
image: "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQEASABIAAD..."
// or just the base64 string: "/9j/4AAQSkZJRgABAQEASABIAAD..."
});
console.log('Is Blurry:', result.isBlur); // boolean: true/false
console.log('Blur Confidence:', result.blurConfidence); // 0.0-1.0 (higher = more blurry)
console.log('Sharp Confidence:', result.sharpConfidence); // 0.0-1.0 (higher = more sharp)
// Use confidence scores for advanced logic
if (result.blurConfidence > 0.7) {
console.log('High confidence this image is blurry');
} else if (result.sharpConfidence > 0.8) {
console.log('High confidence this image is sharp');
} else {
console.log('Uncertain blur status - manual review needed');
}

Basic Usage (Capture + Detection)
// Take a snapshot with blur detection
const result = await CameraPreview.takeSnapshot({
quality: 85,
checkBlur: true // Optional, defaults to false for performance
});
console.log('Base64:', result.base64);
if (result.blurScore !== undefined) {
console.log('Blur Score:', result.blurScore);
// Implement your own blur threshold logic
const threshold = 50.0; // Adjust based on your quality requirements
const isBlurry = result.blurScore < threshold;
if (isBlurry) {
console.log('Image appears to be blurry');
} else {
console.log('Image appears to be sharp');
}
}

Performance Control
Blur detection is disabled by default for optimal performance. Enable it only when needed:
// Blur detection OFF (default) - faster performance
const result = await CameraPreview.takeSnapshot({ quality: 85 });
// Blur detection ON - includes blur analysis
const resultWithBlur = await CameraPreview.takeSnapshot({
quality: 85,
checkBlur: true
});

Understanding Blur Results
New detectBlur Method (Recommended):
- Returns standardized confidence scores (0.0-1.0 range) across all platforms
- blurConfidence: Higher values indicate more blur (>0.7 = likely blurry)
- sharpConfidence: Higher values indicate more sharpness (>0.8 = likely sharp)
- isBlur: Simple boolean result based on confidence thresholds
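The threshold guidance above can be wrapped in a small helper that classifies a detectBlur result, leaving an explicit "uncertain" band for manual review:

```typescript
type BlurVerdict = "blurry" | "sharp" | "uncertain";

// Classify a detectBlur result using the documented thresholds:
// blurConfidence > 0.7 => likely blurry, sharpConfidence > 0.8 => likely sharp.
function classifyBlur(result: { blurConfidence: number; sharpConfidence: number }): BlurVerdict {
  if (result.blurConfidence > 0.7) return "blurry";
  if (result.sharpConfidence > 0.8) return "sharp";
  return "uncertain";
}
```

Usage: `const verdict = classifyBlur(await CameraPreview.detectBlur({ image }));`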
Legacy takeSnapshot Method:
- Higher values = Sharper images
- Lower values = Blurrier images
- Threshold guidelines:
  - iOS: Consider values below 0.001 as blurry
  - Android/Web: Consider values below 50-100 as blurry
  - Adjust thresholds based on your specific quality requirements
When to Use Which Method
Use detectBlur for:
- Analyzing already captured images
- Batch processing multiple images
- Getting detailed confidence scores for advanced decision logic
- Post-processing workflows
- When you need consistent confidence values across platforms
Use takeSnapshot with checkBlur: true for:
- Real-time capture with immediate blur feedback
- Simple blur detection during image capture
- When you only need a basic blur/sharp indication
Performance Impact
| Platform | Without Blur Detection | With Blur Detection | Overhead |
| -------- | ---------------------- | ------------------- | -------- |
| iOS      | 100-120ms              | 120-145ms           | ~20%     |
| Android  | 80-120ms               | 100-145ms           | ~21%     |
| Web      | 60-100ms               | 85-140ms            | ~40%     |
Implementation Notes
- TensorFlow Lite models for advanced blur detection with high accuracy
- Laplacian variance fallback when TFLite models unavailable
- Pixel sampling for performance optimization
- Hardware acceleration on iOS with Core Image
- Client-side threshold logic for maximum flexibility
- Cross-platform algorithm consistency
- Dual API: takeSnapshot for capture + detection, detectBlur for analyzing existing images
