rn-executorch-card-scanner
v0.1.1
Credit/debit card scanner using ExecuTorch EasyOCR (CRAFT + CRNN) and Vision Camera for React Native
On-device credit/debit card scanner for React Native using ExecuTorch EasyOCR (CRAFT text detector + CRNN recognizer) and Vision Camera.
Runs entirely on-device — no server, no API keys, no data leaves the phone.
How it works
- Vision Camera captures photos at a configurable interval
- ExecuTorch OCR (EasyOCR) detects and recognizes text regions
- Smart parser extracts card number, expiry, holder name, and bank name
- Accumulator requires multiple consistent readings before locking a field (reduces noise)
- Returns result when essential fields are locked, or on timeout with partial data
Based on the react-native-executorch library by Software Mansion, with card-specific parsing and accumulation logic built on top. See docs/react-native-executorch.md for the full reference.
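The accumulation step above can be sketched roughly as follows. This is a minimal illustration of the idea, not the package's actual updateAccumulator API; the FieldState shape and tick function are assumptions for the sketch.

```typescript
// Hypothetical sketch: a field "locks" only after the same value is
// observed on N consecutive readings, which filters out one-off OCR noise.
type FieldState = { candidate: string | null; ticks: number; locked: boolean };

function tick(state: FieldState, reading: string | null, requiredTicks = 2): FieldState {
  if (state.locked || reading === null) return state; // nothing new to do
  if (reading === state.candidate) {
    // Same value seen again: count it, lock once the threshold is reached.
    const ticks = state.ticks + 1;
    return { candidate: reading, ticks, locked: ticks >= requiredTicks };
  }
  // Different value: restart the count with the new candidate.
  return { candidate: reading, ticks: 1, locked: requiredTicks <= 1 };
}
```

A higher requiredTicks trades scan time for confidence, which is exactly the knob the requiredTicks config option exposes.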
Known limitations
- Low-contrast cards: Cards without raised/embossed numbers or with low contrast between text and background may not parse reliably
- Memory: ~1.4GB RAM while scanner is active (OCR models are large)
- APK size: Adds ~180MB to your APK (OCR model weights)
- First run: Models (~45MB) are downloaded from HuggingFace and cached locally
- Performance: ~1s on Galaxy S24, ~2.8s on iPhone SE 3 per inference cycle
Contributions to improve accuracy are welcome — the modular architecture makes it easy to swap in better models.
Requirements
- React Native >= 0.81 (New Architecture)
- iOS >= 17.0, Android >= 13
- Expo: custom dev build required (no Expo Go)
- Metro config: add .pte and .bin to assetExts
Installation
# Peer dependencies (install these in your app)
npm install react-native-vision-camera react-native-executorch
# If using Expo:
npm install @react-native-executorch/expo-adapter expo-file-system expo-asset
# This package
npm install rn-executorch-card-scanner
Metro config
Add .pte and .bin to your metro.config.js asset extensions:
const { getDefaultConfig } = require('expo/metro-config');

const config = getDefaultConfig(__dirname);
config.resolver.assetExts.push('pte', 'bin');

module.exports = config;
Quick start — drop-in component
import { useState } from 'react';
import { Modal, Button } from 'react-native';
import { CardScannerView, type ScannedCard } from 'rn-executorch-card-scanner';

function MyScreen() {
  const [visible, setVisible] = useState(false);
  const [card, setCard] = useState<ScannedCard | null>(null);
  return (
    <>
      <Button title="Scan Card" onPress={() => setVisible(true)} />
      <Modal visible={visible} animationType="slide">
        <CardScannerView
          config={{ debug: true, timeout: 60 }}
          onResult={(result) => { setCard(result); setVisible(false); }}
          onClose={() => setVisible(false)}
        />
      </Modal>
    </>
  );
}
Advanced — custom UI with hook
import { useCardScanner } from 'rn-executorch-card-scanner';
import { Camera, useCameraDevice } from 'react-native-vision-camera';

function MyCustomScanner() {
  const device = useCameraDevice('back');
  const scanner = useCardScanner({
    timeout: 90,
    scanInterval: 1500,
    requiredTicks: 3,
    debug: true,
    onOCRText: (text) => console.log('Live OCR:', text),
  });

  // Render your own UI using scanner.displayFields, scanner.countdown, etc.
  // Pass scanner.cameraRef to your Camera component:
  return (
    <Camera ref={scanner.cameraRef} device={device} isActive={scanner.isScanning} photo />
  );
}
Configuration
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| timeout | number | 120 | Seconds before giving up and returning partial results |
| scanInterval | number | 1000 | Milliseconds between capture attempts |
| requiredTicks | number | 2 | Consecutive identical values needed to lock a field |
| debug | boolean | false | Enable [EOCR-*] console logs |
| ocrModel | any | OCR_ENGLISH | OCR model config — swap for other ExecuTorch OCR models |
| onOCRText | (text: string) => void | — | Callback with live formatted OCR text |
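For reference, here is a config object exercising every option in the table, with illustrative non-default values. OCR_ENGLISH is replaced by a placeholder string so the snippet stands alone; in an app, import it from react-native-executorch instead.

```typescript
// Illustrative config covering every option in the table above.
// Placeholder standing in for the OCR_ENGLISH export of react-native-executorch.
const OCR_ENGLISH = 'OCR_ENGLISH';

const config = {
  timeout: 60,            // seconds before returning partial results
  scanInterval: 1500,     // milliseconds between capture attempts
  requiredTicks: 3,       // consecutive identical reads needed to lock a field
  debug: true,            // enables [EOCR-*] console logs
  ocrModel: OCR_ENGLISH,  // any model config accepted by useOCR
  onOCRText: (text: string) => console.log('Live OCR:', text),
};
```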
Exports
| Export | Type | Description |
|--------|------|-------------|
| CardScannerView | Component | Drop-in scanner with camera + overlay |
| useCardScanner | Hook | Core logic for custom UI |
| parseCardFromDetections | Function | Pure parser — pass OCR detections, get card fields |
| updateAccumulator | Function | Pure accumulator state machine |
| fixDigits | Function | OCR character-to-digit correction |
| CHAR_TO_DIGIT | Constant | Character mapping table (customizable) |
| BANNED_WORDS | Constant | Words filtered from holder name detection |
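The character-to-digit correction deserves a brief illustration: OCR often misreads digits on embossed cards as look-alike letters. The sketch below shows the idea behind fixDigits and CHAR_TO_DIGIT; the package's actual mapping table and function signature may differ.

```typescript
// Minimal sketch of OCR look-alike correction, in the spirit of the
// exported fixDigits/CHAR_TO_DIGIT. The mapping here is illustrative.
const CHAR_TO_DIGIT_SKETCH: Record<string, string> = {
  O: '0', o: '0', D: '0',
  I: '1', l: '1', '|': '1',
  Z: '2', z: '2',
  S: '5', s: '5',
  B: '8',
  g: '9', q: '9',
};

function fixDigitsSketch(raw: string): string {
  // Replace each look-alike character with its digit; pass others through.
  return [...raw].map((ch) => CHAR_TO_DIGIT_SKETCH[ch] ?? ch).join('');
}
```

For example, fixDigitsSketch('4O42 l234') yields '4042 1234'. Since CHAR_TO_DIGIT is exported as a customizable constant, the same kind of table can be tuned for fonts your cards actually use.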
Swapping OCR models
This package uses two models from the EasyOCR project:
| Model | Type | Description |
|-------|------|-------------|
| CRAFT | Detector | Finds text regions in the image (heatmap-based) |
| CRNN | Recognizer | Reads text from detected regions |
Browse available models and language packs at the EasyOCR Model Hub.
The ocrModel config accepts any model config compatible with react-native-executorch's useOCR hook:
import { OCR_ENGLISH } from 'react-native-executorch';

// Default English
<CardScannerView config={{ ocrModel: OCR_ENGLISH }} ... />

// Or use RECOGNIZER_LATIN_CRNN for broader Latin support
import { RECOGNIZER_LATIN_CRNN } from 'react-native-executorch';

<CardScannerView config={{ ocrModel: RECOGNIZER_LATIN_CRNN }} ... />
If a better model becomes available, just pass it — the parsing logic is model-agnostic.
Testing with Android emulator
Sample card images are included in docs/sample-cards/ for testing on the Android emulator:
- Open the emulator's Settings > Camera
- Add a sample card image to the wall option
- Open the card scanner in your app
- In the camera view, hold Alt and use WASD keys to walk to the room where the card is displayed on the wall
- Point the camera at the card to test scanning
License
MIT
