
parabun-browser-shims · v0.2.0 · 265 downloads

parabun-browser-shims

Browser-compatible shims for the bun:* modules that Parabun's parse-time desugarings import:

| Module | Browser fidelity |
|---|---|
| bun:arena | No-op. Browsers don't expose GC control — arena { body } runs the body inline, same observable behavior. |
| bun:signals | Real implementation. signal / derived / effect / batch / untrack. |
| bun:wrap | Real implementation. Carries the __parabunMemo / __parabunDefer0 / __parabunRange runtime, including .forget() / .clear() / .bypass() cache invalidation. |
| bun:parallel | Web Worker pool (navigator.hardwareConcurrency workers). pmap / preduce dispatch across workers; transparent sequential fallback under CSP or non-browser hosts. |
| bun:simd | WebAssembly SIMD kernels (v128 f32x4). mulScalar / addScalar / add / mul / sum / dot dispatch to WASM; scalar JS fallback when WASM SIMD is unavailable. alloc(n, "f32") returns a Float32Array backed by the WASM linear memory for zero-copy calls. |
| bun:gpu | WebGPU compute shaders for matVecAsync (workgroup reduction), matmulAsync (16×16 tiled), dotAsync (tree reduction). Opt-in via await gpu.initWebGPU(); sync surface stays CPU for drop-in compatibility. Quantized kernels (Q4_K / Q6_K) are on the roadmap. |
| bun:llm | Throws on load with a clear message — a WebGPU GGUF / Llama port is substantial future work. |
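For intuition, the core of a signal / derived / effect system like the one bun:signals exposes fits in a few lines of plain JS. This is an illustrative minimal sketch (no batch, untrack, or nested-effect handling), not the shim's actual source:

```js
// Minimal sketch of a signal/derived/effect system (illustrative only).
let activeEffect = null;

function signal(initial) {
  let value = initial;
  const subscribers = new Set();
  return {
    get() {
      if (activeEffect) subscribers.add(activeEffect); // track the reader
      return value;
    },
    set(next) {
      value = next;
      subscribers.forEach((fn) => fn()); // notify dependents
    },
  };
}

function effect(fn) {
  activeEffect = fn;
  fn(); // first run registers dependencies via get()
  activeEffect = null;
}

function derived(fn) {
  const out = signal(fn());
  effect(() => out.set(fn())); // recompute whenever a dependency changes
  return out;
}
```

Real implementations add dependency cleanup, batching, and cycle protection on top of this shape.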

Language surface that doesn't need a shim — all of these desugar to plain JS: pure, memo (statement and arrow forms, including .forget / .clear / .bypass), |>, ..=, ..!, ..&, .. (range), defer / defer await (compile to ES2024 using), throw as expression.
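As a rough illustration of the runtime shape the memo desugaring leans on, here is a minimal memoization wrapper with .forget / .clear. This is a hypothetical sketch keyed on JSON.stringify of the arguments; the actual __parabunMemo runtime in bun:wrap is more involved:

```js
// Illustrative memo wrapper with .forget / .clear (not the real __parabunMemo).
function memoize(fn) {
  const cache = new Map();
  const wrapped = (...args) => {
    const key = JSON.stringify(args);           // naive cache key for the sketch
    if (!cache.has(key)) cache.set(key, fn(...args));
    return cache.get(key);
  };
  wrapped.forget = (...args) => cache.delete(JSON.stringify(args)); // drop one entry
  wrapped.clear = () => cache.clear();                              // drop all entries
  return wrapped;
}
```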

Install

```sh
npm i parabun-browser-shims
```

Bundler alias — Vite

```ts
// vite.config.ts
import { defineConfig } from "vite";
import { bunAliases } from "parabun-browser-shims";

export default defineConfig({
  resolve: { alias: bunAliases },
});
```

Bundler alias — esbuild

```js
import * as esbuild from "esbuild";
import { bunAliases } from "parabun-browser-shims";

await esbuild.build({
  entryPoints: ["src/app.pts"],
  bundle: true,
  outfile: "dist/app.js",
  alias: bunAliases,
});
```

Bundler alias — Webpack

```ts
// webpack.config.ts
import { bunAliases } from "parabun-browser-shims";

export default {
  resolve: { alias: bunAliases },
};
```

Bundler alias — Rollup

```js
import alias from "@rollup/plugin-alias";
import { bunAliases } from "parabun-browser-shims";

export default {
  plugins: [alias({ entries: bunAliases })],
};
```

WebGPU — opt-in async kernels

The sync gpu.matVec(...) path stays CPU so .pts code that uses it compiles unchanged. Opt into the GPU backend at startup:

```js
import gpu from "bun:gpu";

await gpu.initWebGPU();                      // once per app
const mat = gpu.hold(weights);               // uploads to GPU buffer
const out = await gpu.matVecAsync(mat, q, M, K);
```

gpu.initWebGPU() returns false and the async variants fall back to CPU on browsers without WebGPU (Safari ≤17.3, Firefox without dom.webgpu.enabled). gpu.describe() reports the live backend + any init error.

WebAssembly SIMD — f32 kernels

bun:simd dispatches to v128 kernels compiled from src/simd.wat for inputs of ≥256 elements on WASM SIMD-capable runtimes; smaller inputs and non-Float32Array types take the scalar path. simd.alloc(n, "f32") allocates inside the WASM linear memory — calls on the returned array skip the per-call copy-in.

```js
import simd from "bun:simd";

const a = simd.alloc(1_000_000, "f32");      // Float32Array, wasm-backed
const b = simd.alloc(1_000_000, "f32");
// ...fill...
const d = simd.dot(a, b);                    // no copy, runs v128
```

Non-wasm-backed TypedArrays still work — they're copied into the WASM memory per call.
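Semantically, the scalar fallback path for dot is just a plain loop. An illustrative reference (not the shim's exact code):

```js
// Scalar reference for dot: what the non-SIMD path computes.
function dotScalar(a, b) {
  if (a.length !== b.length) throw new RangeError("length mismatch");
  let acc = 0;
  for (let i = 0; i < a.length; i++) acc += a[i] * b[i];
  return acc;
}
```

The v128 kernel computes the same sum four lanes at a time; results can differ in the last float bits because the accumulation order changes.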

Web Worker pool — pmap / preduce

bun:parallel lazily spins up navigator.hardwareConcurrency workers on first call. Each worker receives the stringified callback, evals it via new Function(...), and processes a contiguous chunk of the input. Outputs are transferred back (TypedArray buffers) to avoid per-chunk copies; inputs are structured-cloned in.

```js
import { pmap, preduce, disposeWorkers } from "bun:parallel";

const out = await pmap(x => x * x, input);  // chunks across workers
const s = await preduce((a, b) => a + b, 0, input);
disposeWorkers();                            // terminate the pool when done
```

Strict CSP (script-src without unsafe-eval) or non-browser hosts skip the pool and run sequentially on the calling thread.
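The contiguous-chunk split itself is simple: one slice per worker, sizes differing by at most one. A sketch of how that partitioning might look, with the worker-pool plumbing omitted (illustrative, not the shim's actual code):

```js
// Split input into `n` contiguous chunks, sizes differing by at most 1.
function chunk(input, n) {
  const chunks = [];
  const base = Math.floor(input.length / n);
  let extra = input.length % n;       // first `extra` chunks get one more element
  let offset = 0;
  for (let i = 0; i < n; i++) {
    const size = base + (extra-- > 0 ? 1 : 0);
    chunks.push(input.slice(offset, offset + size));
    offset += size;
  }
  return chunks;
}
```

slice works on both plain arrays and TypedArrays, so the same split serves pmap over either.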

Roadmap to in-browser LLM inference

The missing pieces for real .pts code doing LLM inference in a browser:

  1. GGUF loader — fetch-backed parser that streams metadata + weights from a URL. Tokenizer metadata (BPE merges / vocab) comes free from the same file.
  2. Quantized matVec kernels — WGSL compute shaders for Q4_K / Q6_K / Q8_0. Each reads packed block-encoded weights, dequantizes on the fly with the block scale + min, multiplies by the input vector, accumulates.
  3. Forward pass — RMSNorm, RoPE, attention, FFN, softmax. All f32 WGSL kernels reusing the compute pipeline pattern matVecAsync uses.
  4. Sampler — argmax is trivial; top-k and nucleus sampling are small CPU-side passes over the final f32 logits vector.
  5. Chat templates — Llama-3 / ChatML / Mistral-Instruct parsed out of the GGUF's tokenizer.chat_template; mostly string interpolation.

None are conceptually blocked. (2) is the critical path and the largest single piece — porting the Parabun native Q4_K kernel to WGSL is the natural first commit. Ping the repo if you want a specific module prioritized.
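Of the pieces above, (4) is the easiest to make concrete: top-k sampling over raw logits is a sort, a softmax over the survivors, and a weighted draw. An illustrative CPU-side sketch (not committed code):

```js
// Top-k sampling over raw logits (illustrative sketch of roadmap item 4).
function sampleTopK(logits, k, rand = Math.random) {
  const indexed = [...logits].map((l, i) => [l, i]);
  indexed.sort((a, b) => b[0] - a[0]);          // highest logits first
  const top = indexed.slice(0, k);
  const max = top[0][0];                         // subtract max for stability
  const exps = top.map(([l]) => Math.exp(l - max));
  const total = exps.reduce((a, b) => a + b, 0);
  let r = rand() * total;                        // weighted draw over the top k
  for (let i = 0; i < top.length; i++) {
    r -= exps[i];
    if (r <= 0) return top[i][1];                // return the token index
  }
  return top[top.length - 1][1];
}
```

With k = 1 this degenerates to argmax; nucleus sampling swaps the fixed k for a cumulative-probability cutoff over the same sorted list.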

Re-compiling the SIMD WASM

The committed src/simd.wasm is compiled from src/simd.wat via wabt. Maintainers editing the WAT run:

```sh
bun install        # installs wabt as a devDependency
bun run build:wasm
```

License

MIT.