# react-native-davoice

v1.0.12
React Native on-device speech package for:

- Speaker identification / speaker verification, with onboarding and real-time verification
- Speech-to-text / real-time ASR: supports all languages, plus real-time speaker verification and isolation
- Voice cloning / text-to-speech: clone any voice in any language
- On-device text-to-speech: human-like speech whose quality rivals top cloud providers; supports all cloned voices
- Smooth audio flow between all voice components, with audio-session handling across the flows
- React Native support for iOS and Android
It supports iOS and Android and is designed for apps that need a local voice pipeline with native audio-session handling.

This package pairs well with the react-native-wakeword npm package, which covers real-time wake-word / keyword / hotword detection (with speaker verification and isolation) and other always-listening flows.
## Features

- On-device TTS for React Native
- On-device STT for React Native
- Unified `speech/` API that coordinates STT and TTS together
- Native event support for speech results, partials, volume, and TTS completion
- License activation and validation APIs
- Support for local model paths, bundled assets, and asset `require(...)`
- WAV / audio playback helpers in the unified speech API
- iOS microphone and speech-recognition permission helpers
## Package Layout

The package exposes three entry points:

- `react-native-davoice`: default TTS entry point.
- `react-native-davoice/stt`: standalone speech-to-text API.
- `react-native-davoice/speech`: unified API for apps that use both TTS and STT together.
## Installation

```sh
npm install react-native-davoice
```

or

```sh
yarn add react-native-davoice
```

For iOS:

```sh
cd ios
pod install
```

React Native autolinking is supported.
## When To Use Which API

- Use `react-native-davoice/speech` if your app uses both STT and TTS and you want one bridge managing the flow.
- Use `react-native-davoice` if you only need TTS.
- Use `react-native-davoice/stt` if you only need speech recognition.
## Quick Start

### Unified Speech API

```ts
import Speech from 'react-native-davoice/speech';

const model = require('./assets/models/model_ex_ariana.dm');

await Speech.setLicense('YOUR_LICENSE_KEY');

await Speech.initAll({
  locale: 'en-US',
  model,
});

Speech.onSpeechResults = (event) => {
  console.log('results', event.value);
};

Speech.onSpeechPartialResults = (event) => {
  console.log('partial', event.value);
};

Speech.onFinishedSpeaking = () => {
  console.log('finished speaking');
};

await Speech.start('en-US');
await Speech.speak('Hello from DaVoice', 0, 1.0);
```

### TTS Only
```ts
import { DaVoiceTTSInstance } from 'react-native-davoice';

const tts = new DaVoiceTTSInstance();

await tts.setLicense('YOUR_LICENSE_KEY');
await tts.initTTS({
  model: require('./assets/models/model_ex_ariana.dm'),
});

tts.onFinishedSpeaking(() => {
  console.log('done');
});

await tts.speak('Hello world', 0);
```

### STT Only
```ts
import STT from 'react-native-davoice/stt';

await STT.setLicense('YOUR_LICENSE_KEY');

STT.onSpeechResults = (event) => {
  console.log(event.value);
};

await STT.start('en-US');
```

## Unified Speech API
The unified API is intended for real voice flows where STT and TTS are part of the same experience.
Common methods:

- `initAll({ locale, model, timeoutMs?, onboardingJsonPath? })`
- `destroyAll()`
- `start(locale, options?)`
- `stop()`
- `cancel()`
- `speak(text, speakerId?, speed?)`
- `stopSpeaking()`
- `playWav(pathOrURL, markAsLast?)`
- `playPCM(data, { sampleRate, channels?, interleaved?, format?, markAsLast? })`
- `playBuffer({ base64, sampleRate, channels?, interleaved?, format, markAsLast? })`
- `pauseMicrophone()`
- `unPauseMicrophone()`
- `pauseSpeechRecognition()`
- `unPauseSpeechRecognition(times)`
- `isAvailable()`
- `isRecognizing()`
- `setLicense(licenseKey)`
- `isLicenseValid(licenseKey)`
Unified events:

- `onSpeechStart`
- `onSpeechRecognized`
- `onSpeechEnd`
- `onSpeechError`
- `onSpeechResults`
- `onSpeechPartialResults`
- `onSpeechVolumeChanged`
- `onFinishedSpeaking`
- `onNewSpeechWAV` (Android-only remote STT flow event)
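As a sketch of how these methods and events can work together, the example below pauses recognition while the app is speaking so the recognizer does not transcribe its own TTS output. The pattern is an assumption built from the listed API; in particular, the `1` passed to `unPauseSpeechRecognition` is an illustrative guess at the `times` parameter.

```ts
import Speech from 'react-native-davoice/speech';

// Pause recognition before speaking, resume once playback finishes.
// Wraps the onFinishedSpeaking callback in a Promise so callers can await it.
async function speakWithoutSelfTranscription(text: string): Promise<void> {
  await Speech.pauseSpeechRecognition();
  await new Promise<void>((resolve) => {
    Speech.onFinishedSpeaking = () => resolve();
    Speech.speak(text, 0, 1.0); // speakerId 0, normal speed, as in Quick Start
  });
  await Speech.unPauseSpeechRecognition(1); // `times` value is an assumption
}
```

This kind of guard is typically needed whenever the microphone stays open during TTS playback.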
### iOS Permission Helpers

The unified API also exposes iOS permission helpers:

- `hasIOSMicPermissions()`
- `requestIOSMicPermissions(waitTimeout)`
- `hasIOSSpeechRecognitionPermissions()`
- `requestIOSSpeechRecognitionPermissions(waitTimeout)`
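A minimal sketch of using these helpers before starting recognition on iOS; the 5000 ms `waitTimeout` value is an illustrative assumption, not a documented default.

```ts
import { Platform } from 'react-native';
import Speech from 'react-native-davoice/speech';

// Ensure microphone and speech-recognition permissions are granted on iOS
// before calling Speech.start(). Returns true when both are available.
async function ensureIOSPermissions(): Promise<boolean> {
  if (Platform.OS !== 'ios') return true; // Android handles permissions separately

  if (!(await Speech.hasIOSMicPermissions())) {
    await Speech.requestIOSMicPermissions(5000); // wait up to 5 s for the dialog
  }
  if (!(await Speech.hasIOSSpeechRecognitionPermissions())) {
    await Speech.requestIOSSpeechRecognitionPermissions(5000);
  }

  return (
    (await Speech.hasIOSMicPermissions()) &&
    (await Speech.hasIOSSpeechRecognitionPermissions())
  );
}
```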
## TTS API

TTS is exposed from the package root:

```ts
import { DaVoiceTTSInstance } from 'react-native-davoice';
```

Available methods:

- `initTTS({ model })`
- `setLicense(licenseKey)`
- `isLicenseValid(licenseKey)`
- `speak(text, speakerId?)`
- `stopSpeaking()`
- `destroy()`
- `onFinishedSpeaking(callback)`
## STT API

```ts
import STT from 'react-native-davoice/stt';
```

Available methods:

- `start(locale, options?)`
- `stop()`
- `cancel()`
- `destroy()`
- `isAvailable()`
- `isRecognizing()`
- `setLicense(licenseKey)`
- `isLicenseValid(licenseKey)`

Available event handlers:

- `onSpeechStart`
- `onSpeechRecognized`
- `onSpeechEnd`
- `onSpeechError`
- `onSpeechResults`
- `onSpeechPartialResults`
- `onSpeechVolumeChanged`
## License API

All entry points expose the same license helpers:

```ts
await Speech.setLicense(key);
await Speech.isLicenseValid(key);

await tts.setLicense(key);
await tts.isLicenseValid(key);

await STT.setLicense(key);
await STT.isLicenseValid(key);
```

For the unified `speech/` entry point, `setLicense` applies the license to both STT and TTS under the hood.
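One way to combine these helpers is to validate the key before applying it and initializing any models, as in this sketch (the error handling and the assumption that initialization fails without a valid license are illustrative):

```ts
import Speech from 'react-native-davoice/speech';

// Validate the license key up front so a bad key fails fast with a clear
// error instead of surfacing later as an initialization failure.
async function activate(key: string): Promise<void> {
  if (!(await Speech.isLicenseValid(key))) {
    throw new Error('DaVoice license key is invalid');
  }
  await Speech.setLicense(key); // applies to both STT and TTS for speech/
}
```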
## Models And Assets

Model arguments can be provided as:

- a local file path
- a bundled asset via `require(...)`
- a file URL
- in some APIs, a remote URL

Typical model formats used by this package include `.dm` and `.onnx`.

If your app bundles model files with Metro, make sure your React Native asset configuration includes the extensions you use.
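A sketch of that Metro configuration, assuming a recent React Native template that uses `@react-native/metro-config`:

```javascript
// metro.config.js
const { getDefaultConfig } = require('@react-native/metro-config');

const config = getDefaultConfig(__dirname);

// Treat model files as bundleable assets so that
// require('./assets/models/model_ex_ariana.dm') resolves at build time.
config.resolver.assetExts = [...config.resolver.assetExts, 'dm', 'onnx'];

module.exports = config;
```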
## Works Well With react-native-wakeword

If your product needs wake word, keyword spotting, or an always-listening front end, pair this package with react-native-wakeword.

A common production setup is:

- `react-native-wakeword` for wake-word detection and wake-phase audio control.
- `react-native-davoice/speech` for STT, TTS, and the active voice session.

That separation works well for assistants, hands-free flows, and full on-device voice UX.
## Platform Notes

### iOS

- Run `pod install` after adding or updating the package.
- Microphone permission is required.
- Speech recognition permission may also be required for STT flows.

### Android

- Autolinking is supported.
- The package includes native Android bridge registration.
- `onNewSpeechWAV` is Android-only.
## Example

This README is aligned with the companion DaVoice React Native example app, which demonstrates:

- TTS model selection
- STT flows
- combined speech orchestration
- integration alongside `react-native-wakeword`
## Troubleshooting

### Native module not found

Make sure:

- the package is installed in `node_modules`
- iOS pods are installed
- the app was rebuilt after installation

### Model file cannot be resolved

Check:

- the model path is correct
- the asset was bundled correctly
- Metro is configured to include the model extension

### TTS or STT fails to initialize

Check:

- the license was set before initialization
- the model file exists on device
- required permissions were granted
## Support

For licensing, production integration, or custom deployments:

- Website: https://davoice.io
- Email: [email protected]

## License

MIT for the React Native wrapper. Native model/runtime licensing may require a commercial DaVoice license depending on your deployment.
