@addmaple/base64
v0.3.2
Fast Base64 encoding/decoding using WASM SIMD
@addmaple/base64
Fast Base64 encoding and decoding for browsers using WebAssembly SIMD128.
Overview
@addmaple/base64 provides a high-performance Base64 implementation leveraging WASM SIMD128 instructions. It is designed to be significantly faster than browser-native btoa/atob when processing large buffers.
Note: Node.js `Buffer.toString('base64')` already uses optimized native code and achieves similar throughput. This library is primarily beneficial in browser environments, where `btoa`/`atob` are the only native options.
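For Node.js-only code, the built-in `Buffer` already covers both directions; a minimal sketch using only the global `Buffer`:

```javascript
// Node.js: Base64 round-trip with the built-in Buffer (no WASM needed).
const bytes = Buffer.from([72, 101, 108, 108, 111]); // "Hello"

const b64 = bytes.toString('base64');
console.log(b64); // "SGVsbG8="

const roundTrip = Buffer.from(b64, 'base64');
console.log(roundTrip.toString('utf8')); // "Hello"
```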
Key Features
- 🚀 Extreme Performance: Up to 19x faster than scalar code using 128-bit SIMD registers
- ⚡ Branchless Logic: Minimizes CPU branch mispredictions for consistent, high-speed processing
- 🌐 Universal: Works in modern browsers and Node.js (for isomorphic code)
- 📦 Lightweight: Zero external runtime dependencies
- 📜 TypeScript Support: Full type definitions included
- 🌊 Streaming: Supports the Web Streams API for processing large data sets
- ✅ Compatible: Byte-for-byte identical output to `btoa`/`atob`
Performance
Speedup by Data Size
Shows how SIMD benefits scale with input size.
| Variant | 1 KB | 16 KB | 64 KB | 256 KB | 1 MB |
|---------|------|-------|-------|--------|------|
| scalar | 1.00x | 1.00x | 1.00x | 1.00x | 1.00x |
| autovec | 1.0x | 1.0x | 0.9x | 1.0x | 1.0x |
| explicit-simd | 4.8x | 5.7x | 10.0x | 16.9x | 18.7x |
Performance Summary
| Variant | Throughput | Speedup vs Scalar |
|---------|------------|-------------------|
| scalar | 232 MB/s | 1.00x (baseline) |
| autovec | 227 MB/s | 0.98x |
| explicit-simd | 4,334 MB/s | 18.7x |
Measured on an Apple M4 Pro using `npm run bench`.
Library Comparison (Encoding)
| Size | @addmaple/base64 | base64-simd-wasm-vn | turbobase64 | btoa | Buffer |
|------|------------------|---------------------|-------------|------|--------|
| 1 KB | 780 MB/s | 1,108 MB/s | 18 MB/s | 89 MB/s | 703 MB/s |
| 16 KB | 2,109 MB/s | 2,572 MB/s | 136 MB/s | 159 MB/s | 6,389 MB/s |
| 64 KB | 2,368 MB/s | 784 MB/s | 152 MB/s | 157 MB/s | 6,278 MB/s |
| 256 KB | 2,645 MB/s | 1,730 MB/s | 146 MB/s | 49 MB/s | 1,098 MB/s |
| 1 MB | 2,510 MB/s | 2,484 MB/s | 144 MB/s | 31 MB/s | 4,188 MB/s |
Library Comparison (Decoding)
| Size | @addmaple/base64 | base64-simd-wasm-vn | turbobase64 | atob | Buffer |
|------|------------------|---------------------|-------------|------|--------|
| 1 KB | 113 MB/s | 51 MB/s | 19 MB/s | 67 MB/s | 1,396 MB/s |
| 16 KB | 594 MB/s | 353 MB/s | 50 MB/s | 815 MB/s | 3,245 MB/s |
| 64 KB | 2,425 MB/s | 499 MB/s | 49 MB/s | 769 MB/s | 4,286 MB/s |
| 256 KB | 2,575 MB/s | 492 MB/s | 47 MB/s | 689 MB/s | 3,878 MB/s |
| 1 MB | 2,426 MB/s | 481 MB/s | 46 MB/s | 715 MB/s | 3,897 MB/s |
Key findings:
- vs `base64-simd-wasm-vn`: Similar encoding speed, but 5x faster decoding at ≥64 KB
- vs `turbobase64`: 15–50x faster (turbobase64 uses a JS polyfill, not real WASM SIMD)
- vs `btoa`/`atob`: 3–80x faster at larger sizes
- vs `Buffer`: Buffer is still fastest in Node.js (but unavailable in browsers)

Run `node bench/compare.js` to reproduce.
When to use this library
- 🌐 Browser with large payloads (≈64 KB+). The fixed cost of WASM + SIMD pays off once you're above tens of KB. Up to 18x faster than `btoa`/`atob`.
- 🌊 Streaming in browser. Processing large files or chunked data keeps memory flat while maximizing throughput.
- 🔄 Isomorphic code. Works in both browser and Node.js with the same API, even though Node.js `Buffer` is already fast.

Skip this library if: You're only targeting Node.js. Just use `Buffer.from(data).toString('base64')`, which is already native and fast.
Installation
```sh
npm install @addmaple/base64
```

Usage
Initialization
Before using the library, you must initialize the WASM module. This only needs to be done once.
```js
import { init } from '@addmaple/base64';

await init();
```

Basic Encoding & Decoding
```js
import { encode, decode } from '@addmaple/base64';

const data = new Uint8Array([72, 101, 108, 108, 111]); // "Hello"

// Encode to Base64 (returns Uint8Array of ASCII bytes)
const encoded = encode(data);
const base64String = new TextDecoder().decode(encoded);
console.log(base64String); // "SGVsbG8="

// Decode back to Uint8Array
const decoded = decode(encoded);
console.log(new TextDecoder().decode(decoded)); // "Hello"
```

Working with Strings
You can pass strings directly to `encode` and `decode`. They will be automatically converted to UTF-8 bytes.

```js
const encoded = encode("Hello, World!");
const decoded = decode("SGVsbG8sIFdvcmxkIQ==");
```

URL-Safe Base64
Use the second parameter to toggle URL-safe mode (replaces `+` with `-` and `/` with `_`).

```js
const data = new Uint8Array([0xFB, 0xFF, 0xBF]);
const std = new TextDecoder().decode(encode(data)); // "+/+/"
const urlSafe = new TextDecoder().decode(encode(data, true)); // "-_-_"
```

Browser Usage
Bundlers (recommended): Just `import { init, encode, decode } from '@addmaple/base64';`. The package.json `exports` field automatically picks the browser build.
Direct script (no bundler):
```html
<script type="module">
  import { init, encode, decode } from 'https://cdn.jsdelivr.net/npm/@addmaple/base64/dist/browser.js';

  await init();
  const encoded = encode('hello');
  const decoded = decode(encoded);
  console.log(new TextDecoder().decode(decoded));
</script>
```

Streaming API
Process large files or streams using the Web Streams API.
```js
import { createEncoderStream } from '@addmaple/base64';

const response = await fetch('https://example.com/large-file.bin');
const base64Stream = response.body.pipeThrough(createEncoderStream());

for await (const chunk of base64Stream) {
  console.log('Received chunk of length:', chunk.length);
}
```

See `examples/streaming-example.js` for a complete example.
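Chunk boundaries are the subtle part of streaming Base64: 3 input bytes map to 4 output characters, so an encoder must carry over 0–2 leftover bytes between chunks. A minimal sketch of that carry logic, using a hypothetical helper built on Node's `Buffer` (not this library's actual implementation):

```javascript
// Chunk-aligned Base64 encoding: hold back the 0-2 bytes that don't
// complete a 3-byte group, so chunk boundaries never split a quantum.
// Illustrative sketch only; not this library's internals.
function* encodeBase64Chunks(chunks) {
  let carry = new Uint8Array(0);
  for (const chunk of chunks) {
    const buf = new Uint8Array(carry.length + chunk.length);
    buf.set(carry);
    buf.set(chunk, carry.length);
    const usable = buf.length - (buf.length % 3); // largest multiple of 3
    carry = buf.slice(usable);
    if (usable > 0) yield Buffer.from(buf.subarray(0, usable)).toString('base64');
  }
  if (carry.length > 0) yield Buffer.from(carry).toString('base64');
}

// Encoding in chunks matches encoding the whole buffer at once:
const data = new Uint8Array(100).map((_, i) => i);
const chunked = [...encodeBase64Chunks([data.slice(0, 7), data.slice(7)])].join('');
console.log(chunked === Buffer.from(data).toString('base64')); // true
```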
How It Works
Traditional Base64 implementations process data 1-3 bytes at a time using scalar instructions and heavy branching. @addmaple/base64 uses:
- Parallelism: Uses WASM's 128-bit registers to process 16 bytes at once
- Shuffles: Uses `i8x16.shuffle` for efficient bit rearrangement without manual masks/shifts
- Branchless Mapping: Uses `v128.bitselect` and comparison masks to map values to ASCII in parallel
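For contrast, this is the scalar per-triple logic that the SIMD path parallelizes: each 3-byte group becomes a 24-bit word, sliced into four 6-bit indices into the Base64 alphabet. An illustrative reference implementation, not the library's source:

```javascript
// Scalar Base64 encoding: the baseline the SIMD version vectorizes.
const ALPHABET =
  'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';

function encodeScalar(bytes) {
  let out = '';
  let i = 0;
  for (; i + 3 <= bytes.length; i += 3) {
    // Pack 3 bytes into a 24-bit word, then slice four 6-bit indices.
    const word = (bytes[i] << 16) | (bytes[i + 1] << 8) | bytes[i + 2];
    out += ALPHABET[(word >> 18) & 63] + ALPHABET[(word >> 12) & 63] +
           ALPHABET[(word >> 6) & 63] + ALPHABET[word & 63];
  }
  // Handle the 1- or 2-byte tail with '=' padding.
  const rem = bytes.length - i;
  if (rem === 1) {
    const word = bytes[i] << 16;
    out += ALPHABET[(word >> 18) & 63] + ALPHABET[(word >> 12) & 63] + '==';
  } else if (rem === 2) {
    const word = (bytes[i] << 16) | (bytes[i + 1] << 8);
    out += ALPHABET[(word >> 18) & 63] + ALPHABET[(word >> 12) & 63] +
           ALPHABET[(word >> 6) & 63] + '=';
  }
  return out;
}

console.log(encodeScalar(new Uint8Array([72, 101, 108, 108, 111]))); // "SGVsbG8="
```

The SIMD version performs the same pack-and-slice on many lanes per instruction, which is where the throughput gain comes from.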
Development
Prerequisites
- Rust with the `wasm32-unknown-unknown` target
- Node.js 16+
- wasm-bindgen-lite
Scripts
| Command | Description |
|---------|-------------|
| `npm run build` | Build optimized SIMD and baseline WASM |
| `npm test` | Run tests (21 tests, including btoa/atob compatibility) |
| `npm run bench` | Build variant matrix + SIMD analysis + benchmarks |
Running Benchmarks
```sh
npm run bench
```

This builds 5 variants (scalar, autovec, explicit-encode, explicit-decode, explicit-all), analyzes SIMD instruction usage, runs performance benchmarks, and generates reports:

- `bench_out/report.json` - Machine-readable data
- `bench_out/report.html` - Interactive web report
- `bench_out/report.md` - Markdown for README/docs
Test Coverage
Tests verify:
- Round-trip encoding/decoding for both backends (scalar & SIMD)
- URL-safe encoding/decoding
- Streaming API
- Edge cases (empty input, padding, special characters)
- Compatibility with `btoa`/`atob` (byte-for-byte identical output)
Sponsor
Development of this module was sponsored by addmaple.com — a modern data analysis platform.
License
MIT — see LICENSE for details.
