
web-audio-api

v1.5.1


Portable Web Audio API


Portable Web Audio API / polyfill.

  • 100% WPT conformance, no native deps.
  • Audio in OfflineAudioContext renders without speakers.
  • CLI audio scripting – pipe, process, synthesize from terminal.
  • Server-side audio – generate from APIs, bots, pipelines.
  • Tone.js and web audio libs work in Node as-is.
Install

npm install web-audio-api

Use

import { AudioContext } from 'web-audio-api'

const ctx = new AudioContext()
await ctx.resume()

const osc = ctx.createOscillator()
osc.frequency.value = 440
osc.connect(ctx.destination)
osc.start()
// → A440 through your speakers

Built-in speaker output via audio-speaker — no extra setup.

Offline rendering

import { OfflineAudioContext } from 'web-audio-api'

const ctx = new OfflineAudioContext(2, 44100, 44100) // 1 second, stereo
const osc = ctx.createOscillator()
osc.frequency.value = 440
osc.connect(ctx.destination)
osc.start()

const buffer = await ctx.startRendering()
// buffer.getChannelData(0) → Float32Array of 44100 samples
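Once you have the rendered samples, you can inspect them directly — handy for assertions in tests. A package-free sketch: a generated sine standing in for buffer.getChannelData(0), with peak and RMS as quick sanity checks.

```javascript
// Simulate 1 s of a 440 Hz sine at 44100 Hz, standing in for
// the Float32Array returned by buffer.getChannelData(0).
const sampleRate = 44100
const samples = new Float32Array(sampleRate)
for (let i = 0; i < samples.length; i++) {
  samples[i] = Math.sin(2 * Math.PI * 440 * i / sampleRate)
}

// Peak and RMS — quick sanity checks on rendered audio
let peak = 0, sumSq = 0
for (const s of samples) {
  peak = Math.max(peak, Math.abs(s))
  sumSq += s * s
}
const rms = Math.sqrt(sumSq / samples.length)

console.log(peak.toFixed(3)) // 1.000
console.log(rms.toFixed(3))  // 0.707 — a full-scale sine's RMS is 1/√2
```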

Examples

node examples/<name>.js — all parametric. Positional args or key=value with prefix matching (f=440, freq=440 both work). Note names (A4, C#3, Eb5), k for kHz (20k), s/m/h for duration (10m).

| Example | Description |
|---|---|
| Test Signals | |
| tone.js | Reference pitch — sine A4 2s |
| sweep.js | Hear the audible range — 20..20k exp 3s |
| noise.js | White, pink, brown, blue, violet — pink 2s |
| impulse.js | Dirac click — 5 0.5s |
| dtmf.js | Dial a phone number — 5551234 |
| stereo-test.js | Left, right, center — 1k 1s |
| metronome.js | Programmable click — 120..240 10m X-x- |
| Illusions | |
| shepard.js | Pitch that rises forever — up 15s |
| risset-rhythm.js | Beat that accelerates forever — up 120 20s |
| binaural-beats.js | Third tone from two (headphones!) — 200 10 10s |
| missing-fundamental.js | Your brain fills in the note — 100 3s |
| beating.js | Two close frequencies dance — 440 3 5s |
| Synthesis | |
| subtractive-synth.js | Sawtooth → filter sweep → ADSR |
| additive.js | Waveforms from harmonics — square 220 16 3s |
| fm-synthesis.js | DX7 frequency modulation — 440 2 5 3s |
| karplus-strong.js | A string plucked from noise — A4 4s |
| Generative | |
| sequencer.js | Step sequencer — precise timing |
| serial.js | Twelve-tone rows (Webern) — 72 30s |
| gamelan.js | Balinese kotekan — two parts, one melody — 120 20s |
| drone.js | Tanpura shimmer — C3 30s |
| jazz.js | Modal jazz — new every time |
| API | |
| speaker.js | Hello world |
| lfo.js | Tremolo via LFO |
| spatial.js | Sound moving through space |
| worklet.js | Custom AudioWorkletProcessor |
| linked-params.js | One source controlling many gains |
| fft.js | Frequency spectrum |
| render-to-buffer.js | Offline render → buffer |
| process-file.js | Audio file → EQ + compress → render |
| pipe-stdout.js | PCM to stdout — pipe to aplay, sox, etc. |
| mic.js | Live microphone → speakers with RMS meter (requires audio-mic) |
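The argument conventions above (note names, a k suffix for kHz, s/m/h duration suffixes) could be implemented along these lines — a hypothetical helper for illustration, not the package's actual parser:

```javascript
// Sketch of the example-arg conventions: note names (A4, C#3, Eb5),
// "k" for kHz, and s/m/h duration suffixes. Illustrative only.
const NOTE = { C: 0, D: 2, E: 4, F: 5, G: 7, A: 9, B: 11 }

function parseFreq(arg) {
  const note = /^([A-G])([#b]?)(-?\d)$/.exec(arg)
  if (note) {
    const semis = NOTE[note[1]] + (note[2] === '#' ? 1 : note[2] === 'b' ? -1 : 0)
    const midi = semis + (Number(note[3]) + 1) * 12
    return 440 * 2 ** ((midi - 69) / 12) // A4 = MIDI 69 = 440 Hz
  }
  const k = /^([\d.]+)k$/.exec(arg) // 20k → 20000
  return k ? Number(k[1]) * 1000 : Number(arg)
}

function parseDuration(arg) {
  const m = /^([\d.]+)([smh]?)$/.exec(arg)
  const mult = { '': 1, s: 1, m: 60, h: 3600 }[m[2]]
  return Number(m[1]) * mult // seconds
}

console.log(parseFreq('A4'))      // 440
console.log(parseFreq('20k'))     // 20000
console.log(parseDuration('10m')) // 600
```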

Node extensions

Beyond the spec, for Node.js. Not portable to browsers.

  • addModule(fn) — register a processor via callback instead of URL, no file needed
  • sinkId: stream — pipe PCM to any writable: new AudioContext({ sinkId: process.stdout }) then node synth.js | aplay -f cd
  • numberOfChannels, bitDepth — control output format in the constructor.
  • CustomMediaStreamTrack — extends MediaStreamTrack with a public constructor and pushData(chunk, options) to feed audio data (e.g. from a microphone). Prior art: CanvasCaptureMediaStreamTrack. See the mic FAQ.

FAQ

How do I shut a context down?

await ctx.close()

Or with explicit resource management: using ctx = new AudioContext()

Why is there no sound until I call resume()?

Per W3C spec — browsers require a user gesture before audio plays. Call await ctx.resume() to start. OfflineAudioContext doesn't need it.

Does Tone.js work?

Yes. Tone.js uses standardized-audio-context, which needs window.AudioParam etc. for instanceof checks. The polyfill sets that up — just load Tone.js after it:

import 'web-audio-api/polyfill'
const Tone = await import('tone')

Tone.setContext(new AudioContext())
const synth = new Tone.Synth().toDestination()
synth.triggerAttackRelease('C4', '8n')

Tone.js must be a dynamic import() — static imports get hoisted before the polyfill runs. Alternatively, use --import:

node --import web-audio-api/polyfill app.js

Then static import * as Tone from 'tone' works in app.js.

Can I decode audio files?

import { readFileSync } from 'node:fs'

const buffer = await ctx.decodeAudioData(readFileSync('track.mp3'))

WAV, MP3, FLAC, OGG, AAC via audio-decode.

How do I capture a microphone?

In Node, pair audio-mic with CustomMediaStreamTrack:

npm install audio-mic

import { AudioContext, MediaStreamAudioSourceNode, CustomMediaStreamTrack, MediaStream } from 'web-audio-api'
import mic from 'audio-mic'

const ctx = new AudioContext()
await ctx.resume()

const track = new CustomMediaStreamTrack({
  kind: 'audio',
  label: 'mic',
  settings: { channelCount: 1, sampleSize: 16, sampleRate: ctx.sampleRate }
})
const stream = new MediaStream([track])

const src = new MediaStreamAudioSourceNode(ctx, { mediaStream: stream })
src.connect(ctx.destination) // live monitor

const read = mic({ sampleRate: ctx.sampleRate, channels: 1, bitDepth: 16 })
read((err, buf) => {
  if (err || !buf) return
  track.pushData(buf, { channels: 1, bitDepth: 16 })
})

track.pushData() accepts Float32Array, Float32Array[], or interleaved 8/16/32-bit integer PCM buffers. Integer PCM conversion uses pcm-convert. CustomMediaStreamTrack extends MediaStreamTrack — prior art: CanvasCaptureMediaStreamTrack.
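The integer-PCM path boils down to clamp, scale, and interleave. A self-contained sketch of the 16-bit case — roughly the normalization pcm-convert performs, written out for illustration:

```javascript
// Rough sketch of float ⇄ interleaved 16-bit PCM conversion,
// the kind of normalization pcm-convert handles. Illustration only.
function floatToInt16(channels) {
  const frames = channels[0].length
  const out = new Int16Array(frames * channels.length)
  for (let f = 0; f < frames; f++) {
    for (let c = 0; c < channels.length; c++) {
      const s = Math.max(-1, Math.min(1, channels[c][f])) // clamp to [-1, 1]
      out[f * channels.length + c] = Math.round(s * 32767) // interleave
    }
  }
  return out
}

function int16ToFloat(int16, numChannels) {
  const frames = int16.length / numChannels
  const channels = Array.from({ length: numChannels }, () => new Float32Array(frames))
  for (let f = 0; f < frames; f++) {
    for (let c = 0; c < numChannels; c++) {
      channels[c][f] = int16[f * numChannels + c] / 32767 // de-interleave
    }
  }
  return channels
}

const stereo = [new Float32Array([0, 0.5]), new Float32Array([1, -1])]
const pcm = floatToInt16(stereo)
console.log(Array.from(pcm)) // [ 0, 32767, 16384, -32767 ]
```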

See examples/mic.js for a runnable demo with gain and VU meter. To record the graph to a buffer, use OfflineAudioContext.startRendering(). To capture live graph output as a stream, use ctx.createMediaStreamDestination().

Can I install the classes as globals?

import 'web-audio-api/polyfill'
// AudioContext, GainNode, etc. are now global

The polyfill also installs navigator.mediaDevices.getUserMedia({ audio: true }), backed by the optional audio-mic peer dependency. This lets browser mic-capture code run verbatim in Node:

import 'web-audio-api/polyfill'
// npm install audio-mic

const stream = await navigator.mediaDevices.getUserMedia({ audio: true })
const ctx = new AudioContext()
const src = ctx.createMediaStreamSource(stream)
src.connect(ctx.destination)

// stop capture
stream.getAudioTracks()[0].stop()

Without audio-mic installed, getUserMedia rejects with a NotFoundError containing an install hint.

Can I run audio tests in CI?

OfflineAudioContext renders without speakers — pair with any test runner. See render-to-buffer.js.

How fast is it?

All scenarios render faster than real-time. Pure JS matches Rust napi on simple graphs; heavier DSP (convolution, compression) is 2–4× slower — WASM kernels planned. npm run bench:all to measure.
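"Faster than real-time" means the wall-clock time to produce N seconds of samples is under N seconds. A package-free illustration of the measurement itself, timing a plain synthesis loop (the real benchmarks time OfflineAudioContext renders instead):

```javascript
// Package-free illustration of a faster-than-real-time measurement:
// synthesize 1 s of samples in a plain loop, compare wall-clock time
// to the audio duration. Not the package's bench harness.
const sampleRate = 44100
const seconds = 1
const out = new Float32Array(sampleRate * seconds)

const t0 = performance.now()
for (let i = 0; i < out.length; i++) {
  out[i] = Math.sin(2 * Math.PI * 440 * i / sampleRate)
}
const elapsed = (performance.now() - t0) / 1000 // wall-clock seconds

console.log(`rendered ${seconds}s of audio in ${elapsed.toFixed(4)}s`)
console.log(`realtime ratio: ${(seconds / elapsed).toFixed(0)}x`)
```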

Architecture

Pull-based audio graph. AudioDestinationNode pulls upstream via _tick(), 128-sample render quanta per spec. AudioWorklet runs synchronously (no thread isolation). DSP kernels separated from graph plumbing for future WASM swap.

EventTarget ← Emitter ← DspObject ← AudioNode ← concrete nodes
                                    ← AudioParam
EventTarget ← Emitter ← AudioPort ← AudioInput / AudioOutput
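The pull model above can be sketched in a few lines — a toy graph, not the package internals (only the _tick idea and the 128-sample quantum come from the text; the class names here are illustrative):

```javascript
// Toy pull-based audio graph: the destination pulls 128-sample
// render quanta from upstream nodes via _tick(). Illustration only.
const QUANTUM = 128

class ConstantNode {
  constructor(value) { this.value = value }
  _tick() { return new Float32Array(QUANTUM).fill(this.value) }
}

class GainNode {
  constructor(source, gain) { this.source = source; this.gain = gain }
  _tick() {
    const block = this.source._tick() // pull one quantum upstream
    return block.map(s => s * this.gain)
  }
}

class Destination {
  constructor(source) { this.source = source }
  render(blocks) {
    const out = []
    for (let i = 0; i < blocks; i++) out.push(this.source._tick())
    return out
  }
}

const graph = new Destination(new GainNode(new ConstantNode(1), 0.5))
const blocks = graph.render(3)
console.log(blocks.length, blocks[0].length, blocks[0][0]) // 3 128 0.5
```

The upside of pulling rather than pushing is that only nodes reachable from the destination do work, and each quantum is computed exactly once per tick.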


License

MIT