

gregblur


High-quality WebGL2 background blur for video streams.

Implements the full Google Meet technique stack: confidence masks, joint bilateral filtering, mask-weighted Gaussian blur, temporal EMA smoothing, masked downsampling, and foreground-biased compositing, all as a standalone, framework-agnostic library.

Why gregblur?

Most background blur libraries either hand you raw segmentation masks (TensorFlow.js) or lock you into a specific video platform (Twilio, LiveKit, Agora). gregblur sits in the gap: a complete, production-quality blur pipeline that works with any video source.

| Technique               | gregblur | LiveKit OSS | Volcomix | Twilio |
| ----------------------- | -------- | ----------- | -------- | ------ |
| Confidence masks        | Yes      | Yes         | Yes      | Yes    |
| Joint bilateral filter  | Yes      | No          | Yes      | Yes    |
| Temporal smoothing      | Yes      | No          | No       | Yes    |
| Mask-weighted blur      | Yes      | No          | No       | Yes    |
| Masked downsample       | Yes      | No          | No       | No     |
| Foreground-biased matte | Yes      | No          | No       | No     |
| Open source             | Yes      | Yes         | Yes      | No     |
| Framework-agnostic      | Yes      | No          | Yes      | No     |

Install

npm install gregblur

Quick Start

With LiveKit

import { createLiveKitBlurProcessor } from 'gregblur/livekit'

const processor = createLiveKitBlurProcessor({
  blurRadius: 25,
  initialEnabled: true,
  segmentationModel: 'selfie-multiclass-256',
})

await track.setProcessor(processor)

With raw MediaStreamTrack

import { createRawBlurProcessor } from 'gregblur/raw'

const processor = createRawBlurProcessor({ blurRadius: 25 })
const blurredTrack = await processor.start(cameraTrack)

// Use blurredTrack with any WebRTC connection
peerConnection.addTrack(blurredTrack)

Core pipeline (advanced)

import { createGregblurPipeline, createMediaPipeProvider } from 'gregblur'

const provider = createMediaPipeProvider({ model: 'selfie-multiclass-256' })
const pipeline = createGregblurPipeline(provider, { blurRadius: 30 })

await pipeline.init(1280, 720)
pipeline.processFrame(videoElement, performance.now())
const canvas = pipeline.getCanvas()

API

Entry Points

| Import             | What you get                       |
| ------------------ | ---------------------------------- |
| gregblur           | Core pipeline + MediaPipe provider |
| gregblur/livekit   | LiveKit TrackProcessor adapter     |
| gregblur/raw       | Raw MediaStreamTrack processor     |
| gregblur/detect    | Browser capability detection       |

createGregblurPipeline(provider, options?)

Creates the core WebGL2 blur pipeline. You manage frame timing yourself.

Options:

  • blurRadius — Gaussian blur radius (default: 25)
  • bilateralSigmaSpace — Spatial sigma for bilateral filter (default: 4.0)
  • bilateralSigmaColor — Color sigma for bilateral filter (default: 0.1)
  • initialEnabled — Start with blur on (default: true)
  • downsampleFactor — Background resolution divisor (default: 2)
  • temporalBlendFactor — EMA blend with previous mask (default: 0.24)
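
The temporalBlendFactor option is an exponential moving average weight: how much of the previous frame's mask carries over into the current one. A minimal CPU-side sketch of the idea (the real blend runs in a GPU shader; `emaBlend` and `blendMasks` are illustrative helpers, not part of the gregblur API):

```typescript
// Illustrative only: gregblur performs this blend in a shader.
// With factor 0.24, 24% of the previous frame's mask carries over,
// which damps frame-to-frame flicker at mask edges.
function emaBlend(prev: number, current: number, factor: number): number {
  return factor * prev + (1 - factor) * current
}

// Blend whole confidence masks (values in 0.0–1.0).
function blendMasks(
  prev: Float32Array,
  current: Float32Array,
  factor = 0.24,
): Float32Array {
  const out = new Float32Array(current.length)
  for (let i = 0; i < current.length; i++) {
    out[i] = emaBlend(prev[i], current[i], factor)
  }
  return out
}
```

Higher factors give a steadier mask but more "ghosting" when the subject moves quickly; 0.24 is the library's chosen trade-off.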

createMediaPipeProvider(options?)

Default segmentation provider using MediaPipe's selfie segmentation models.

Options:

  • model — 'selfie-multiclass-256' or 'selfie-segmenter' (default: 'selfie-multiclass-256')
  • mediapipeVersion — CDN version (default: '0.10.14')
  • visionBundleUrl — Custom URL for vision_bundle.mjs if you self-host MediaPipe
  • wasmBasePath — Custom WASM path (defaults to the jsDelivr CDN)
  • modelUrl — Custom URL for the segmentation model file
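
If you self-host MediaPipe, the three URL options are typically set together. A sketch of such an options object, the paths below are placeholders for wherever you serve the files, which you would pass to createMediaPipeProvider:

```typescript
// Hypothetical self-hosted paths -- adjust to your own static hosting.
const selfHostedProviderOptions = {
  model: 'selfie-multiclass-256' as const,
  // Serve vision_bundle.mjs and the WASM files yourself instead of jsDelivr:
  visionBundleUrl: '/vendor/mediapipe/vision_bundle.mjs',
  wasmBasePath: '/vendor/mediapipe/wasm',
  // The segmentation model file, mirrored locally:
  modelUrl: '/models/selfie_multiclass_256x256.tflite',
}
```

Self-hosting avoids a third-party CDN dependency, which matters for offline deployments and strict Content-Security-Policy setups.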

createLiveKitBlurProcessor(options?)

Drop-in LiveKit TrackProcessor. Combines the core pipeline with a segmentation provider and track management.

createRawBlurProcessor(options?)

Framework-agnostic processor. Takes a MediaStreamTrack, returns a blurred MediaStreamTrack.

isBlurSupported()

Checks for WebGL2, WebAssembly, and Insertable Streams / canvas fallback support.

Browser Support

  • Chrome (desktop) — full Insertable Streams path
  • Edge (desktop) — full Insertable Streams path
  • Safari (desktop) — canvas captureStream fallback
  • Firefox — canvas captureStream fallback
  • iOS — not supported (no WebGL2 + captureStream combination)
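
The support matrix above boils down to feature detection. The standalone sketch below approximates the kind of checks involved; it is illustrative, not the actual isBlurSupported() implementation, and `detectBlurPath` is a hypothetical name:

```typescript
// Rough sketch of path selection. gregblur/detect performs its own
// checks; this standalone version only illustrates the decision tree.
type BlurPath = 'insertable-streams' | 'canvas' | 'unsupported'

function detectBlurPath(): BlurPath {
  const g = globalThis as any
  const hasWasm = typeof g.WebAssembly === 'object'
  // WebGL2 needs a DOM canvas; without one (e.g. in Node) blur cannot run.
  const hasDom =
    typeof g.document !== 'undefined' &&
    typeof g.document.createElement === 'function'
  if (!hasWasm || !hasDom) return 'unsupported'
  const hasWebGL2 =
    g.document.createElement('canvas').getContext('webgl2') !== null
  if (!hasWebGL2) return 'unsupported'
  // Chrome/Edge expose Insertable Streams; Safari/Firefox take the
  // canvas captureStream fallback instead.
  const hasInsertableStreams =
    typeof g.MediaStreamTrackGenerator === 'function' &&
    typeof g.MediaStreamTrackProcessor === 'function'
  return hasInsertableStreams ? 'insertable-streams' : 'canvas'
}
```

Prefer calling the library's isBlurSupported() before constructing a processor, so unsupported clients (notably iOS) can skip blur gracefully.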

How It Works

The pipeline processes each video frame through 8 GPU stages:

  1. Upload — Camera frame to WebGL texture
  2. Segmentation — MediaPipe produces a soft confidence mask (0.0–1.0)
  3. Bilateral filter — Refines mask edges using frame color as guide
  4. Temporal blend — EMA with previous frame's mask (reduces flicker)
  5. Masked downsample — Half-res with foreground-weighted sampling
  6. Mask-weighted blur — 2-pass separable Gaussian, foreground suppressed
  7. Composite — Smoothstep blend with foreground-biased matte
  8. Output — Rendered to canvas for capture
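
Stage 7's compositing can be pictured per pixel. A hedged CPU sketch, assuming GLSL's smoothstep semantics; the function names and the bias value are illustrative, and gregblur runs this in a shader:

```typescript
// GLSL-style smoothstep: 0 below e0, 1 above e1, smooth Hermite between.
function smoothstep(e0: number, e1: number, x: number): number {
  const t = Math.min(Math.max((x - e0) / (e1 - e0), 0), 1)
  return t * t * (3 - 2 * t)
}

// Composite one channel of one pixel. mask is background confidence
// (1.0 = background, matching the SegmentationProvider contract).
// Biasing both thresholds toward the foreground keeps hair and edges
// sharp at the cost of a slightly tighter matte; 0.1 is illustrative.
function compositePixel(
  original: number,
  blurred: number,
  mask: number,
  bias = 0.1,
): number {
  const matte = smoothstep(0.0 + bias, 1.0 - bias, mask)
  return original * (1 - matte) + blurred * matte
}
```

Low-confidence pixels thus stay fully sharp, high-confidence background is fully blurred, and the smoothstep ramp hides the seam between the two.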

Custom Segmentation Providers

Implement the SegmentationProvider interface to use your own model:

import type { SegmentationProvider } from 'gregblur'

const myProvider: SegmentationProvider = {
  async init(canvas) {
    // Load your model, share the GL context via canvas
  },
  segment(source, timestampMs) {
    // Return { confidenceTexture: WebGLTexture, close(): void }
    // confidenceTexture values: 1.0 = background, 0.0 = person
  },
  destroy() {
    // Cleanup
  },
}

License

Apache-2.0