@3senseai/react-native-system-speech-output
React Native bridge for system text-to-speech on iOS and Android.
Features
- Speak plain text on iOS and Android
- SSML support on iOS 16+
- iOS support for voiceIdentifier
- iOS support for preferAssistiveTechnologySettings
- iOS support for useSystemAudioSession
- Android support for pitch
- Android support for voiceName
- Cross-platform listVoices()
- Cross-platform addProgressListener() with { utteranceId, start, end }
- Android speech annotations for phones, emails, and URLs
- Backward-compatible React Native New Architecture support
Install
```shell
npm install @3senseai/react-native-system-speech-output
cd ios && pod install && cd ..
```

Because this package contains native iOS and Android code, rebuild the app after installing it.
New Architecture
This package ships both:
- a legacy native module implementation for bridge-based apps
- a TurboModule Codegen spec in
src/NativeSystemSpeechOutput.ts
Android uses split oldarch / newarch sources with a shared implementation.
iOS uses conditional compilation with RCT_NEW_ARCH_ENABLED.
Generated Codegen artifacts are not checked into this repository. They are generated by the consuming app during build.
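To consume the TurboModule spec, the New Architecture must be enabled in the app itself. A minimal sketch using the stock React Native toggles (these flags are standard RN configuration, not specific to this package; on recent RN versions the New Architecture is already the default):

```shell
# Android: enable the New Architecture in the consuming app.
# In android/gradle.properties:
#   newArchEnabled=true

# iOS: reinstall pods with the flag set so Codegen artifacts are generated.
RCT_NEW_ARCH_ENABLED=1 pod install --project-directory=ios
```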
API
isAvailable(): Promise<boolean>
Returns whether the native speech bridge is available.
listVoices(): Promise<VoiceInfo[]>
Returns available voices.
On Android, VoiceInfo.name maps to the value you pass back as voiceName.
On iOS, VoiceInfo.name maps to the value you pass back as voiceIdentifier.
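One way to feed a listVoices() result back into speak(...) is a small platform switch. A sketch, assuming only the VoiceInfo.name mapping described above; voiceOptionFor is a hypothetical helper, not part of this package, and in a real app the platform argument would come from Platform.OS:

```typescript
// Minimal VoiceInfo shape assumed from this README; the real type may
// carry additional fields (language, quality, etc.).
type VoiceInfo = { name: string };

// Android consumes VoiceInfo.name as voiceName;
// iOS consumes VoiceInfo.name as voiceIdentifier.
function voiceOptionFor(
  platform: "ios" | "android",
  voice: VoiceInfo
): { voiceName: string } | { voiceIdentifier: string } {
  return platform === "android"
    ? { voiceName: voice.name }
    : { voiceIdentifier: voice.name };
}

console.log(voiceOptionFor("android", { name: "en-au-x-aub-local" }));
console.log(voiceOptionFor("ios", { name: "com.apple.voice.compact.en-AU.Karen" }));
```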
speak(text, options?): Promise<boolean>
Supported options:
```typescript
type SpeakOptions = {
  language?: string | null;
  rate?: number | null;
  pitch?: number | null; // Android
  voiceName?: string | null; // Android
  ssml?: string | null; // iOS
  voiceIdentifier?: string | null; // iOS
  preferAssistiveTechnologySettings?: boolean | null; // iOS
  useSystemAudioSession?: boolean | null; // iOS
};
```

stop(): Promise<boolean>
Stops current speech immediately.
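Since several SpeakOptions fields above are platform-specific, it can help to assemble them per platform before calling speak(...). A hedged sketch; optionsFor and its default values are illustrative only, not part of this package's API, and in an app the platform argument would be Platform.OS:

```typescript
// Option names mirror the SpeakOptions type documented above.
type SpeakOptions = {
  language?: string | null;
  rate?: number | null;
  pitch?: number | null; // Android
  voiceName?: string | null; // Android
  ssml?: string | null; // iOS
  voiceIdentifier?: string | null; // iOS
  preferAssistiveTechnologySettings?: boolean | null; // iOS
  useSystemAudioSession?: boolean | null; // iOS
};

// Illustrative defaults; only platform-appropriate knobs are set,
// since e.g. pitch is Android-only per the table above.
function optionsFor(platform: "ios" | "android"): SpeakOptions {
  const base: SpeakOptions = { language: "en-AU", rate: 0.95 };
  return platform === "android"
    ? { ...base, pitch: 1.1 }
    : { ...base, preferAssistiveTechnologySettings: true };
}

console.log(optionsFor("android"));
console.log(optionsFor("ios"));
```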
addStateListener(listener)
Subscribes to speech state updates.
addProgressListener(listener)
Subscribes to speech progress updates with this shape:
```typescript
type SpeechProgressEvent = {
  utteranceId: string;
  start: number;
  end: number;
};
```

start is inclusive and end is exclusive; both are offsets into the original text passed to speak(...).
Example
```typescript
import SpeechOutput from "@3senseai/react-native-system-speech-output";

const text = "Hello world from speech output.";

const progressSub = SpeechOutput.addProgressListener((event) => {
  console.log("progress:", event.utteranceId, event.start, event.end);
  console.log("spoken range:", text.slice(event.start, event.end));
});

const stateSub = SpeechOutput.addStateListener((event) => {
  console.log("state:", event.state);
});

const available = await SpeechOutput.isAvailable();
const voices = await SpeechOutput.listVoices();

if (available) {
  await SpeechOutput.speak(text, {
    language: "en-AU",
    rate: 0.95,
  });
}

// later
progressSub.remove();
stateSub.remove();
```

Local testing
Recommended flow:
- Create a fresh React Native app.
- Install this package from a local path.
- Turn New Architecture on explicitly in the test app.
- Run pod install on iOS.
- Build both iOS and Android.
For example:
```shell
npx @react-native-community/cli@latest init SpeechOutputNewArchTest --version 0.84.0
cd SpeechOutputNewArchTest
npm install ../react-native-system-speech-output
```

Then enable the New Architecture in the app and rebuild.
License
MIT
