# web-audio-api

v1.2.0
Portable Web Audio API for any JS environment. 98% WPT conformance.
```
npm install web-audio-api
```

## Use

```js
import { AudioContext } from 'web-audio-api'
const ctx = new AudioContext()
await ctx.resume()
const osc = ctx.createOscillator()
osc.frequency.value = 440
osc.connect(ctx.destination)
osc.start()
// → plays through speakers
```

Audio output is built-in via `audio-speaker` — no extra packages needed.
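The oscillator above feeds a 440 Hz sine wave to the destination in 128-sample blocks. As a self-contained sketch (plain JS, no package needed), here is what the first render quantum of that signal looks like:

```js
// Generate one 128-sample render quantum of a 440 Hz sine at 44.1 kHz —
// the same signal the OscillatorNode above produces.
const sampleRate = 44100
const frequency = 440
const quantum = new Float32Array(128)
for (let n = 0; n < quantum.length; n++) {
  quantum[n] = Math.sin(2 * Math.PI * frequency * n / sampleRate)
}
console.log(quantum[0]) // 0 — phase starts at zero
```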
## Offline rendering
```js
import { OfflineAudioContext } from 'web-audio-api'
const ctx = new OfflineAudioContext(2, 44100, 44100) // 1 second, stereo
const osc = ctx.createOscillator()
osc.frequency.value = 440
osc.connect(ctx.destination)
osc.start()
const buffer = await ctx.startRendering()
// buffer.getChannelData(0) → Float32Array of 44100 samples
```

## Custom output stream
For piping to external tools or custom sinks, set `outStream` to any writable:
```js
ctx.outStream = myWritableStream
```

```
node synth.js | aplay -f cd
```
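Any Node writable works as a sink. A minimal self-contained sketch of a hypothetical byte-counting sink — the kind of object `outStream` can be pointed at (the stream usage is standard `node:stream`; only the final wiring line depends on this package):

```js
import { Writable } from 'node:stream'

// Hypothetical sink that counts incoming PCM bytes.
let bytesReceived = 0
const sink = new Writable({
  write(chunk, _enc, cb) {
    bytesReceived += chunk.length
    cb()
  }
})

// Simulate one 128-sample stereo Float32 quantum: 128 samples × 2 ch × 4 bytes.
sink.write(Buffer.alloc(128 * 2 * 4))
console.log(bytesReceived) // 1024

// ctx.outStream = sink  // wire it up as shown above (assumes a running context)
```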
## Polyfill

Register Web Audio API globals for environments that lack them:
```js
import 'web-audio-api/polyfill'
// AudioContext, OfflineAudioContext, GainNode, etc. are now global
```

## Testing audio code
```js
import { OfflineAudioContext } from 'web-audio-api'
import { test } from 'node:test'
import { strictEqual } from 'node:assert'

test('gain halves amplitude', async () => {
  const ctx = new OfflineAudioContext(1, 128, 44100)
  const src = ctx.createConstantSource()
  const gain = ctx.createGain()
  gain.gain.value = 0.5
  src.connect(gain).connect(ctx.destination)
  src.start()
  const buf = await ctx.startRendering()
  strictEqual(buf.getChannelData(0)[0], 0.5)
})
```

## Examples
Run any example with `node examples/<name>.js` — real-time examples play sound through speakers.
| Example | Description |
|---------|-------------|
| speaker.js | Hello world — play a tone |
| sweep.js | Frequency sweep 100Hz → 4kHz |
| subtractive-synth.js | Sawtooth → filter sweep → ADSR |
| noise.js | AudioWorklet noise → bandpass filter |
| lfo.js | Tremolo via LFO modulation |
| spatial.js | PannerNode — sound moves left to right |
| sequencer.js | Step sequencer with precise scheduling |
| worklet.js | AudioWorkletProcessor with custom param |
| linked-params.js | ConstantSourceNode controlling multiple gains |
| fft.js | AnalyserNode — frequency spectrum |
| render-to-buffer.js | OfflineAudioContext → buffer |
| process-file.js | Read audio file → EQ + compress → render |
| pipe-stdout.js | Pipe PCM to system player |
## Alternatives
| | web-audio-api | node-web-audio-api | standardized-audio-context | web-audio-api-rs |
|---|---|---|---|---|
| Language | JS | Rust (napi) | JS | Rust |
| Runs in | Node, Bun, Deno, browser | Node only | Browser only | Rust / WASM |
| Native deps | none | platform binary | none | Rust toolchain |
| WPT compliance | 98% | partial | n/a (wraps native) | partial |
| Install | npm install | npm install (downloads binary) | npm install | cargo add |
Choose this package when you need portable spec-compliant Web Audio in any JS environment — testing, offline rendering, SSR, or lightweight real-time playback. Choose node-web-audio-api when you need maximum DSP throughput on Node.js and can accept the native dependency. Choose standardized-audio-context when you target browsers and need a uniform API across vendor differences. Choose web-audio-api-rs for Rust-native or WASM projects (not an npm package).
Also: web-audio-engine — earlier pure-JS effort (archived 2019), inspiration for this project.
## Benchmarks

Rendering 1 second of audio at 44.1 kHz (`npm run bench:all`):
| Scenario | web-audio-api (JS) | node-web-audio-api (Rust) | Chrome (native) |
|---|---|---|---|
| OscillatorNode | 0.3ms | 0.3ms | 0.4ms |
| Osc → Gain | 0.4ms | 0.2ms | 0.4ms |
| Osc → BiquadFilter | 0.9ms | 0.4ms | 0.5ms |
| DynamicsCompressor | 2.0ms | 0.5ms | 1.2ms |
| ConvolverNode (128-tap) | 4.4ms | 1.6ms | 0.4ms |
| 8-voice polyphony | 2.3ms | 1.8ms | 1.2ms |
All scenarios run faster than real-time. Pure JS matches Rust on simple graphs; heavier DSP (convolution, compression) is 2–4× slower — WASM kernels are planned for these paths.
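Numbers like these come from repeated timed renders, keeping the best run. A self-contained sketch of such a harness (hypothetical — not the package's actual bench script; the `bench` helper and the stand-in workload are illustrative):

```js
// Time a synchronous workload with Node's high-resolution clock,
// keeping the best of several runs, as a bench script might.
function bench(fn, runs = 5) {
  let best = Infinity
  for (let i = 0; i < runs; i++) {
    const t0 = process.hrtime.bigint()
    fn()
    const t1 = process.hrtime.bigint()
    best = Math.min(best, Number(t1 - t0) / 1e6) // nanoseconds → ms
  }
  return best
}

// Stand-in workload: fill 1 s of samples at 44.1 kHz.
const ms = bench(() => new Float32Array(44100).fill(0.5))
console.log(`${ms.toFixed(2)}ms`)
```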
## Limitations
- Performance — pure JS is fast for most use cases but won't match native implementations for sustained heavy real-time DSP (dozens of simultaneous convolver/panner nodes). WASM kernels are planned.
- `outStream` — the only API surface outside the W3C spec. It's the bridge to custom audio output (stdout, streams). Default output uses `audio-speaker` and needs no configuration.
- AudioWorklet threading — runs synchronously on the main thread. Browsers use a separate audio thread. Functionally identical, but no thread isolation.
## FAQ
**How do I stop a context and free its resources?**

```js
await ctx.close() // stops rendering, releases resources
```

Or with explicit resource management:

```js
using ctx = new AudioContext()
```
**Why doesn't audio play until I call resume()?**

Per the W3C spec, browsers require user activation before audio can play. Call `await ctx.resume()` to start, or use OfflineAudioContext, which doesn't need it.
**Does it work with Tone.js?**

```js
import { AudioContext } from 'web-audio-api'
import * as Tone from 'tone'
Tone.setContext(new AudioContext())
```

**Can it decode audio files?**

```js
import { readFileSync } from 'node:fs'
const ctx = new OfflineAudioContext(2, 1, 44100)
const buffer = await ctx.decodeAudioData(readFileSync('track.mp3'))
```

Supports WAV, MP3, FLAC, OGG, AAC, and more.
**How do I use it with code that expects browser globals?**

```js
import 'web-audio-api/polyfill' // registers AudioContext, OfflineAudioContext, etc. as globals
```

**Is it fast enough for real-time use?**

All nodes run faster than real-time on a single thread (`npm run bench`). For heavy real-time workloads (many convolvers/panners), consider node-web-audio-api, which uses Rust.
## Architecture
Pull-based audio graph: AudioDestinationNode pulls upstream via `_tick()` in 128-sample render quanta, per the spec. DSP kernels are separated from graph plumbing for a future WASM swap.
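The pull model can be illustrated with a toy two-node graph (illustrative only — the class names mirror the description above, not the package's actual internals):

```js
// Toy pull-based graph: the downstream node pulls a 128-sample
// quantum from upstream, then processes it in place.
const QUANTUM = 128

class ToySource {
  _tick() {
    // Produce one quantum of constant 1.0 samples.
    return new Float32Array(QUANTUM).fill(1)
  }
}

class ToyGain {
  constructor(input, gain) {
    this.input = input
    this.gain = gain
  }
  _tick() {
    // Pull from upstream, then scale each sample.
    const block = this.input._tick()
    for (let i = 0; i < block.length; i++) block[i] *= this.gain
    return block
  }
}

const out = new ToyGain(new ToySource(), 0.5)._tick()
console.log(out.length, out[0]) // 128 0.5
```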
```
EventTarget ← Emitter ← DspObject ← AudioNode ← concrete nodes
                                  ← AudioParam
EventTarget ← Emitter ← AudioPort ← AudioInput / AudioOutput
```

## License
MIT
