
nwhisper

v0.3.0


Native Node.js bindings for OpenAI's Whisper using whisper.cpp. High-performance local speech-to-text with custom model support.


nwhisper

Native Node.js bindings for OpenAI's Whisper using whisper.cpp. High-performance local speech-to-text with custom model path support.

MIT License

Features

  • Custom model path support: use your own trained models by providing a custom model file path
  • Automatically converts audio to 16 kHz WAV, the format the Whisper model requires
  • Outputs transcripts as .txt, .srt, .vtt, .json, .wts, or .lrc
  • Optimized for CPU, including Apple Silicon (ARM)
  • Word-level timestamp precision
  • Split on word rather than on token (optional)
  • Translate from the source language to English (optional)
  • Backward compatible with nodejs-whisper
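
Each output format corresponds to one boolean flag in whisperOptions (see the Types section). As a rough illustration, a hypothetical helper, not part of the nwhisper API, mapping enabled flags to the file extensions they produce:

```typescript
// Hypothetical mapping from whisperOptions output flags to the file
// extensions whisper.cpp writes. Not part of the nwhisper API.
const OUTPUT_EXTENSIONS: Record<string, string> = {
  outputInText: '.txt',
  outputInSrt: '.srt',
  outputInVtt: '.vtt',
  outputInJson: '.json',
  outputInWords: '.wts',
  outputInLrc: '.lrc',
  outputInCsv: '.csv',
};

// Given a set of flags, list the extensions that would be written.
function enabledExtensions(flags: Record<string, boolean>): string[] {
  return Object.entries(flags)
    .filter(([, on]) => on)
    .map(([flag]) => OUTPUT_EXTENSIONS[flag])
    .filter((ext): ext is string => Boolean(ext));
}

console.log(enabledExtensions({ outputInSrt: true, outputInJson: true }));
// → [ '.srt', '.json' ]
```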

Installation

  1. Install build tools (make and a C/C++ compiler)
sudo apt update
sudo apt install build-essential
  2. Install nwhisper with npm
  npm i nwhisper
  3. Download a whisper model (for standard models)
  npx nwhisper download

Windows Installation

  1. Install MinGW-w64 or MSYS2 (which include make tools)

    • Option 1: Install MSYS2 from https://www.msys2.org/
    • Option 2: Install MinGW-w64 from https://www.mingw-w64.org/
  2. Install nwhisper with npm

npm i nwhisper
  3. Download a whisper model (for standard models)
npx nwhisper download
  • Note: Make sure mingw32-make or make is available in your system PATH.

Usage/Examples

See example/basic.ts (can be run with $ npm run example)

import path from 'path'
import { transcribe } from 'nwhisper'

// Provide the exact path to your audio file.
const filePath = path.resolve(__dirname, 'YourAudioFileName')

// Using standard model
await transcribe(filePath, {
  modelName: 'base.en', // name of the downloaded model
  autoDownloadModelName: 'base.en', // (optional) auto download a model if model is not present
  removeWavFileAfterTranscription: false, // (optional) remove wav file once transcribed
  withCuda: false, // (optional) use cuda for faster processing
  logger: console, // (optional) Logging instance, defaults to console
  whisperOptions: {
    outputInCsv: false, // get output result in csv file
    outputInJson: false, // get output result in json file
    outputInJsonFull: false, // get output result in json file including more information
    outputInLrc: false, // get output result in lrc file
    outputInSrt: true, // get output result in srt file
    outputInText: false, // get output result in txt file
    outputInVtt: false, // get output result in vtt file
    outputInWords: false, // get output result in wts file for karaoke
    translateToEnglish: false, // translate from source language to english
    wordTimestamps: false, // word-level timestamps
    timestamps_length: 20, // amount of dialogue per timestamp pair
    splitOnWord: true, // split on word rather than on token
  },
})

// Using custom models (NEW FEATURES)
// Method 1: Specify model directory
const modelDir = path.join(process.cwd(), '.models')
await transcribe(filePath, {
  modelName: 'tiny.en',
  modelDir: modelDir,
  whisperOptions: {
    outputInSrt: true,
  },
})

// Method 2: Direct file path
const modelPath = path.join(__dirname, 'models', 'my-custom-model.bin')
await transcribe(filePath, {
  modelPath: modelPath,
  whisperOptions: {
    outputInSrt: true,
    language: 'en',
  },
})

// Method 3: Download to and use custom directory
await transcribe(filePath, {
  modelName: 'tiny.en',
  autoDownloadModelName: 'tiny.en',
  modelDir: path.join(__dirname, 'models'), // Download to and use this directory
  whisperOptions: {
    outputInSrt: true,
  },
})

// Model list
const MODELS_LIST = [
  'tiny',
  'tiny.en',
  'base',
  'base.en',
  'small',
  'small.en',
  'medium',
  'medium.en',
  'large-v1',
  'large',
  'large-v3-turbo',
]
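
Before calling transcribe, it can be useful to check a requested name against this list. A minimal sketch (isKnownModel is a hypothetical helper, not part of the nwhisper API):

```typescript
// Standard whisper.cpp model names supported for download (from the list above).
const MODELS_LIST = [
  'tiny', 'tiny.en',
  'base', 'base.en',
  'small', 'small.en',
  'medium', 'medium.en',
  'large-v1', 'large', 'large-v3-turbo',
];

// Hypothetical helper: validate a model name before passing it to transcribe.
function isKnownModel(name: string): boolean {
  return MODELS_LIST.includes(name);
}

console.log(isKnownModel('tiny.en')); // → true
console.log(isKnownModel('gigantic')); // → false
```

Custom models supplied via modelPath or modelDir do not need to appear in this list.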

Types

interface IOptions {
  modelName?: string // Model name (works with directories)
  modelPath?: string // NEW: Direct path to model file
  modelDir?: string // NEW: Directory for models (download & use)
  autoDownloadModelName?: string // Model to auto-download
  removeWavFileAfterTranscription?: boolean
  withCuda?: boolean
  whisperOptions?: WhisperOptions
  logger?: Console
}

interface WhisperOptions {
  outputInCsv?: boolean
  outputInJson?: boolean
  outputInJsonFull?: boolean
  outputInLrc?: boolean
  outputInSrt?: boolean
  outputInText?: boolean
  outputInVtt?: boolean
  outputInWords?: boolean
  translateToEnglish?: boolean
  timestamps_length?: number
  wordTimestamps?: boolean
  splitOnWord?: boolean
}
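
A fully typed options object might look like the following. The trimmed interfaces are copied from above so the snippet stands alone; in real code they come from nwhisper:

```typescript
// Trimmed copies of the nwhisper option types (see Types above),
// reproduced here so this snippet type-checks on its own.
interface WhisperOptions {
  outputInSrt?: boolean;
  wordTimestamps?: boolean;
  splitOnWord?: boolean;
}
interface IOptions {
  modelName?: string;
  modelPath?: string;
  modelDir?: string;
  whisperOptions?: WhisperOptions;
}

const opts: IOptions = {
  modelName: 'base.en',
  whisperOptions: { outputInSrt: true, splitOnWord: true },
};

console.log(opts.whisperOptions?.outputInSrt); // → true
```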

Custom Model Path Usage

The main feature of nwhisper is the ability to use custom model files. This is useful when you have:

  • Fine-tuned models for specific domains
  • Custom trained models
  • Models in different locations than the default

Example with Custom Model

import { transcribe } from 'nwhisper'
import path from 'path'

// Method 1: Specify model directory
const modelDir = path.join(process.cwd(), '.models')
const result = await transcribe('audio.wav', {
  modelName: 'tiny.en',
  modelDir: modelDir,
  whisperOptions: {
    outputInSrt: true,
    language: 'en'
  }
})

// Method 2: Direct file path
const modelPath = path.join(__dirname, 'models', 'my-custom-model.bin')
const result2 = await transcribe('audio.wav', {
  modelPath: modelPath,
  whisperOptions: {
    outputInSrt: true,
    language: 'auto'
  }
})

// Method 3: Download to and use custom directory
const result3 = await transcribe('audio.wav', {
  modelName: 'tiny.en',
  autoDownloadModelName: 'tiny.en',
  modelDir: modelDir, // Download to and use this directory
  whisperOptions: {
    outputInSrt: true,
    language: 'auto'
  }
})

Model Priority

  1. modelPath - Direct file path (highest priority)
  2. modelDir + modelName - Model directory with model name
  3. Standard directory - Default whisper.cpp models (fallback)
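
A minimal sketch of this resolution order follows. resolveModelPath and DEFAULT_MODEL_DIR are hypothetical names for illustration; nwhisper's internal logic may differ:

```typescript
import path from 'path';

// Trimmed copy of the relevant nwhisper options (see Types above).
interface ModelOptions {
  modelPath?: string; // 1. direct file path (highest priority)
  modelDir?: string;  // 2. directory holding ggml-<modelName>.bin
  modelName?: string; // used with modelDir, or with the default directory
}

// Placeholder for the standard whisper.cpp models directory (assumption).
const DEFAULT_MODEL_DIR = '/path/to/whisper.cpp/models';

// Hypothetical helper implementing the priority order described above.
function resolveModelPath(opts: ModelOptions): string {
  if (opts.modelPath) return opts.modelPath;                          // priority 1
  if (opts.modelDir && opts.modelName)
    return path.join(opts.modelDir, `ggml-${opts.modelName}.bin`);    // priority 2
  if (opts.modelName)
    return path.join(DEFAULT_MODEL_DIR, `ggml-${opts.modelName}.bin`); // fallback
  throw new Error('No model specified');
}

console.log(resolveModelPath({ modelDir: '/models', modelName: 'tiny.en' }));
// → /models/ggml-tiny.en.bin
```

Note that the `ggml-` prefix follows the whisper.cpp naming convention mentioned in the notes below.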

Important Notes

  • modelDir serves a dual purpose: when specified, models are downloaded to and loaded from that directory
  • Model files should follow whisper.cpp naming (e.g., ggml-tiny.en.bin)
  • Models must be compatible with the whisper.cpp format

Migration from nodejs-whisper

nwhisper is fully backward compatible with nodejs-whisper. Simply replace the package:

# Remove old package
npm uninstall nodejs-whisper

# Install nwhisper
npm install nwhisper

Function Names

  • Recommended: Use transcribe function for new code
  • Legacy: nodewhisper function is still available but deprecated
// New (recommended)
import { transcribe } from 'nwhisper'
await transcribe('audio.wav', { modelName: 'tiny.en' })

// Legacy (deprecated but still works)
import { nodewhisper } from 'nwhisper'
await nodewhisper('audio.wav', { modelName: 'tiny.en' })

No code changes required for existing functionality!

Run locally

Clone the project

  git clone https://github.com/teomyth/nwhisper

Go to the project directory

  cd nwhisper

Install dependencies

  npm install

Start development mode

  npm run dev

Build project

  npm run build


Feedback

If you have any feedback, please reach out to us at [email protected]


Acknowledgments

This project is a fork of nodejs-whisper by @chetanXpro. We extend our gratitude to the original author for creating the foundation that made nwhisper possible.
