
vcr-pkg-lib-vosk

v0.1.13 • Published

Speech recognition module for react native using Vosk library

Downloads: 6

react-native-vosk - React ASR (Automated Speech Recognition)

Speech recognition module for react native using Vosk library

Installation

Library

npm install -S react-native-vosk

Models

Vosk uses prebuilt models to perform speech recognition offline. You have to download the model(s) that you need from the official Vosk website. Avoid models that are too heavy: the computation time required to load them into your app could lead to a bad user experience. Then, unzip the model into your app folder. If you only need the iOS version, put the model folder wherever you want and import it as described below. If you need both iOS and Android to work, you can avoid copying the model twice by importing it from the Android assets folder in Xcode. Just do as follows:

Android

In Android Studio, open the project manager, right-click on your project folder and choose New > Folder > Assets Folder.

(Screenshot: Android Studio assets folder creation)

Then put the model folder inside the created assets folder. In your file tree it should be located at android/app/src/main/assets. So, if you downloaded the French model named model-fr-fr, you should be able to access it at android/app/src/main/assets/model-fr-fr. In Android Studio, your project structure should look like this:

(Screenshot: Android Studio final project structure)

You can import as many models as you want.

iOS

In order for the project to work, you're going to need the iOS library. Mail [email protected] to get the libraries. You will receive a libvosk.xcframework file (a folder for non-Mac users); copy it into the ios folder of the module (node_modules/react-native-vosk/ios/libvosk.xcframework). Then run, in your project root:

npm run pods

In Xcode, right-click on your project folder, and click on "Add files to [your project name]".

(Screenshot: adding files to the project in Xcode)

Then navigate to your model folder. You can navigate to your Android assets folder as mentioned before and choose your model there; this avoids having the model copied twice in your project. If you don't use the Android build, you can put the model wherever you want and select it.

(Screenshot: choosing the model folder in Xcode)

That's all: the model folder should appear in your project. When you click on it, your project target should be checked (see below).

(Screenshot: Xcode full settings)

Usage

import Vosk from 'react-native-vosk';

// ...

const vosk = new Vosk();

vosk.loadModel('model-fr-fr').then(() => {
    // we can use promises...
    vosk
        .start()
        .then((res: any) => {
            console.log('Result is: ' + res);
        });

    // ... or events
    const resultEvent = vosk.onResult((res: any) => {
      console.log('An onResult event has been caught: ' + res.data);
    });

    // Don't forget to call resultEvent.remove() when you no longer need it
}).catch((e) => {
    console.error(e);
});

Note that the start() method will ask for the audio recording permission.

Complete example...

Methods

| Method | Argument | Return | Description |
|---|---|---|---|
| loadModel | path: string | Promise | Loads the voice model used for recognition; required before using the start method |
| start | grammar: string[] or none | Promise | Starts the voice recognition and returns the recognized text as a promised string; you can restrict recognition to specific words using the grammar argument (ex: ["left", "right"]) according to Kaldi's documentation |
| stop | none | none | Stops the recognition |
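As a quick sketch of the grammar argument described above (the model name and command list are illustrative, and "[unk]" is the Vosk convention for catching out-of-grammar speech):

```typescript
import Vosk from 'react-native-vosk';

const vosk = new Vosk();

// Restricting recognition to a small grammar improves accuracy for
// command-style interfaces: only the listed words can be returned.
// '[unk]' lets out-of-grammar speech come back as unknown instead of
// being forced onto one of the commands.
const commands = ['left', 'right', 'stop', '[unk]'];

vosk.loadModel('model-fr-fr')
  .then(() => vosk.start(commands))
  .then((res: any) => console.log('Recognized command: ' + res))
  .catch((e: any) => console.error(e));
```

This pattern is a good fit when your UI only reacts to a few known commands; for free-form dictation, call start() with no argument as in the Usage section.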

Events

| Method | Promise return | Description |
|---|---|---|
| onResult | The recognized word as a string | Triggers on voice recognition result |
| onFinalResult | The recognized word as a string | Triggers if stopped using the stop() method |
| onError | The error that occurred, as a string or exception | Triggers if an error occurred |
| onTimeout | The string "timeout" | Triggers on timeout |

Example

const resultEvent = vosk.onResult((res) => {
    console.log('An onResult event has been caught: ' + res.data);
});

resultEvent.remove();

Don't forget to remove the event listener once you no longer need it.
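In a React component, a natural place for this cleanup is a useEffect teardown. A minimal sketch, assuming the import from the Usage section (the component name is illustrative):

```typescript
import { useEffect } from 'react';
import Vosk from 'react-native-vosk';

const vosk = new Vosk();

export function RecognizerScreen(): null {
  useEffect(() => {
    // Subscribe when the component mounts...
    const resultEvent = vosk.onResult((res: any) => {
      console.log('Recognized: ' + res.data);
    });
    // ...and remove the listener when it unmounts, so no callback
    // fires against an unmounted component.
    return () => resultEvent.remove();
  }, []);
  return null;
}
```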

Contributing

See the contributing guide to learn how to contribute to the repository and the development workflow.

License

MIT