# react-native-speech-to-text-vin

v0.1.0

Speech-to-text native module for React Native (iOS & Android).
Speech-to-text native module for React Native, using:

- iOS: SFSpeechRecognizer (Speech framework) + AVAudioEngine
- Android: SpeechRecognizer + RecognizerIntent

Provides simple start/stop APIs and events for partial and final transcription, with basic silence auto-stop on both platforms.
## Installation

### 1. Add the package

Using npm:

```sh
npm install react-native-speech-to-text-vin
```
### 2. iOS setup

From your React Native project:

```sh
cd ios
pod install
cd ..
```
#### iOS permissions

In your app's Info.plist, add:

```xml
<key>NSSpeechRecognitionUsageDescription</key>
<string>We need speech recognition to convert your voice to text.</string>
<key>NSMicrophoneUsageDescription</key>
<string>We need access to your microphone for speech recognition.</string>
```

Without these, iOS will crash or deny access when starting recognition.
### 3. Android setup

Usually, autolinking is enough. Just rebuild your app:

```sh
npx react-native run-android
```

#### Android permissions

In android/app/src/main/AndroidManifest.xml, ensure you have:

```xml
<uses-permission android:name="android.permission.RECORD_AUDIO" />
```

The JS API requests the runtime microphone permission using PermissionsAndroid.
## Usage

### Basic example (hooks)

```tsx
import React, { useEffect, useState } from 'react';
import { View, Text, Button } from 'react-native';
import {
  requestAuthorization,
  start,
  stop,
  addResultListener,
  addPartialResultListener,
  addErrorListener,
} from 'react-native-speech-to-text-vin';

export default function SpeechExample() {
  const [partial, setPartial] = useState('');
  const [finalText, setFinalText] = useState('');
  const [status, setStatus] = useState<'idle' | 'listening'>('idle');

  useEffect(() => {
    let resultSub: any;
    let partialSub: any;
    let errorSub: any;

    (async () => {
      const authStatus = await requestAuthorization();
      console.log('Speech auth status:', authStatus);

      resultSub = addResultListener(event => {
        // { value: string }
        setFinalText(event.value);
        setPartial('');
        setStatus('idle');
      });

      partialSub = addPartialResultListener(event => {
        // { value: string }
        setPartial(event.value);
        setStatus('listening');
      });

      errorSub = addErrorListener(event => {
        console.warn('Speech error:', event.error);
        setStatus('idle');
      });
    })();

    return () => {
      stop();
      resultSub?.remove();
      partialSub?.remove();
      errorSub?.remove();
    };
  }, []);

  const handleStart = async () => {
    try {
      await start();
      setStatus('listening');
    } catch (e) {
      console.warn('Failed to start speech recognition', e);
    }
  };

  const handleStop = async () => {
    try {
      await stop();
      setStatus('idle');
    } catch (e) {
      console.warn('Failed to stop speech recognition', e);
    }
  };

  return (
    <View style={{ padding: 16 }}>
      <Text>Status: {status}</Text>

      <Text style={{ marginTop: 16 }}>Partial:</Text>
      <Text>{partial}</Text>

      <Text style={{ marginTop: 16 }}>Final:</Text>
      <Text>{finalText}</Text>

      <View style={{ flexDirection: 'row', marginTop: 24, gap: 12 }}>
        <Button title="Start" onPress={handleStart} />
        <Button title="Stop" onPress={handleStop} />
      </View>
    </View>
  );
}
```
## API

All functions are imported from 'react-native-speech-to-text-vin'.

### requestAuthorization(): Promise&lt;string&gt;

Requests speech/microphone authorization.

- iOS: uses SFSpeechRecognizer.requestAuthorization. Returns one of: "authorized" | "denied" | "restricted" | "notDetermined" | "unknown".
- Android: uses PermissionsAndroid.request for RECORD_AUDIO. Returns "authorized" if granted, otherwise "denied".

Example:

```ts
const status = await requestAuthorization();
if (status !== 'authorized') {
  // Show message / handle denied
}
```
### start(): Promise&lt;void&gt;

Starts speech recognition.

- On iOS, starts AVAudioEngine and SFSpeechRecognizer.
- On Android, checks the RECORD_AUDIO permission, verifies availability, and calls SpeechRecognizer.startListening.

Auto-stops after a period of silence (implemented natively on both platforms).

Rejects with an error if:

- the permission is missing (Android), or
- speech recognition is not available, or
- native setup fails.
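Putting requestAuthorization() and start() together, a typical pre-flight gate might look like the sketch below. The helper name startIfAuthorized is hypothetical (not part of the library); the two functions are taken as parameters here so the sketch is self-contained, but in an app you would pass this package's requestAuthorization and start exports directly.

```typescript
// Hypothetical pre-flight helper: only start recognition once authorization
// is confirmed. The status strings match those documented above.
type AuthStatus =
  | 'authorized'
  | 'denied'
  | 'restricted'
  | 'notDetermined'
  | 'unknown';

async function startIfAuthorized(
  requestAuthorization: () => Promise<AuthStatus>,
  start: () => Promise<void>,
): Promise<boolean> {
  const status = await requestAuthorization();
  if (status !== 'authorized') {
    // e.g. show a message or point the user at system settings
    return false;
  }
  await start();
  return true;
}
```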
### stop(): Promise&lt;void&gt;

Stops recognition if running and cleans up internal state. You should also call this in cleanup (e.g. a useEffect return).
### addResultListener(listener)

Listens for final recognition results.

- Event payload: { value: string }
- Emitted when the native recognizer thinks the utterance is complete (or after the silence timeout).
- Returns a subscription with .remove().

Example:

```ts
const sub = addResultListener(event => {
  console.log('Final:', event.value);
});

// later
sub.remove();
```
### addPartialResultListener(listener)

Listens for partial / intermediate recognition results.

- Event payload: { value: string }
- Can fire multiple times as the user speaks.

### addErrorListener(listener)

Listens for recognition errors.

- Event payload: { error: string }
## Platform behaviour details

### iOS

Uses:

- SFSpeechRecognizer with shouldReportPartialResults = true
- an AVAudioEngine input tap

Silence handling:

- A 2-second timer resets whenever results come in.
- If no new callbacks arrive for ~2 seconds:
  - if there is a lastTranscript, onSpeechResults is emitted with it;
  - recording is stopped.

### Android

Uses:

- SpeechRecognizer + RecognizerIntent.ACTION_RECOGNIZE_SPEECH
- EXTRA_PARTIAL_RESULTS enabled

Silence handling:

- A 4-second timeout resets on partial results.
- When it fires:
  - if there is a lastPartialText and no final result has been sent, onSpeechResults is emitted;
  - SpeechRecognizer.stopListening() is called.
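The silence handling on both platforms follows the same shape: each recognition callback refreshes a timestamp, and when the timeout elapses with no new results, the last transcript is flushed as a final result and recognition stops. A plain-TypeScript model of that logic (class and method names are illustrative only, not the library's native internals):

```typescript
// Illustrative model of the native silence auto-stop: results refresh a
// timestamp; once the timeout elapses with no new results, the last
// transcript is flushed as the final result and listening should stop.
class SilenceTracker {
  private lastTranscript = '';
  private lastEventAt = 0;

  constructor(
    private readonly timeoutMs: number, // ~2000 on iOS, ~4000 on Android
    private readonly onFinal: (text: string) => void,
  ) {}

  // Call on every (partial) recognition result.
  onResult(text: string, nowMs: number): void {
    this.lastTranscript = text;
    this.lastEventAt = nowMs;
  }

  // Call periodically; returns true when the caller should stop listening.
  check(nowMs: number): boolean {
    if (this.lastTranscript && nowMs - this.lastEventAt >= this.timeoutMs) {
      this.onFinal(this.lastTranscript);
      this.lastTranscript = '';
      return true;
    }
    return false;
  }
}
```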
## Troubleshooting

### Native module not linked / not found

If you see:

```
SpeechToText native module not linked
```

then autolinking might have failed. Try:

- reinstalling node modules
- cd ios && pod install
- cleaning and rebuilding the iOS / Android app

### iOS build errors about Speech/AVFoundation

Make sure you have:

```swift
import Speech
import AVFoundation
```

(already provided in this library's Swift code), and that your app's iOS deployment target is compatible with the podspec (≥ iOS 11.0).

### Android permission denied

Ensure both of the following:

- RECORD_AUDIO is declared in AndroidManifest.xml.
- You call requestAuthorization() before start() and handle "denied".