sab-message-port

v1.0.4

Worker IPC over SharedArrayBuffer — blocking reads without the event loop

Readme

sab-message-port

High-performance IPC for Web Workers and Node.js worker threads over SharedArrayBuffer.

Passes JSON messages between threads through shared memory using Atomics for synchronization — no serialization through postMessage, no copying overhead. Supports both blocking reads (via Atomics.wait in worker threads) and non-blocking async reads (via Atomics.waitAsync, safe on the main thread). Large messages are chunked transparently.

Motivation

The native postMessage API is event-loop driven — a worker can only receive messages by yielding control back to the event loop. This makes it unsuitable for long-running synchronous worker code that needs to communicate mid-execution without returning from its current call stack.

sab-message-port solves this by providing a blocking read (Atomics.wait) that lets a worker pause in place, wait for a message, and resume — with no dependence on the event loop. The worker stays in its own synchronous flow while the main thread sends messages asynchronously from the other side.

Install

npm install sab-message-port

import { SABMessagePort, SABPipe, MWChannel } from 'sab-message-port';

A global build is also available for classic workers and <script> tags:

// Classic worker (non-module)
importScripts('sab-message-port/dist/SABMessagePort.global.min.js');
const { SABMessagePort, SABPipe, MWChannel } = SABMessagePortLib;

Quick Start

Main thread:

import { SABMessagePort } from 'sab-message-port';

const port = new SABMessagePort();
port.postInit(worker);

port.onmessage = (e) => console.log('from worker:', e.data);
port.postMessage({ cmd: 'ping' });

Worker (classic/module):

// import { SABMessagePort } from 'sab-message-port';  // module worker
// or, for a classic worker:
importScripts('sab-message-port/dist/SABMessagePort.global.min.js');
const { SABMessagePort } = SABMessagePortLib;

self.onmessage = (e) => {
  if (e.data.type === 'SABMessagePort') {
    const port = SABMessagePort.from(e.data);

    port.onmessage = (e) => {
      if (e.data.cmd === 'ping') {
        port.postMessage({ reply: 'pong' });
      }
    };
  }
};

Example: Blocking Reads with Interrupt

With native postMessage, a worker can only receive messages by returning to the event loop. If the worker is stuck in a long synchronous loop, incoming messages pile up undelivered. sab-message-port solves this — port.read() blocks in-place via Atomics.wait, and port.tryRead() checks for signals mid-computation, all without yielding to the event loop.

main.js

import { SABMessagePort } from 'sab-message-port';

const worker = new Worker('./worker.js', { type: 'module' });
const port = new SABMessagePort();
port.postInit(worker);

port.onmessage = ({ data }) => console.log(data);

// Start a long task
port.postMessage({ cmd: 'run', task: 'A', iterations: 5_000_000 });

// After 200ms, abort and start a different task
setTimeout(() => {
  port.postMessage({ cmd: 'abort' });
  port.postMessage({ cmd: 'run', task: 'B', iterations: 2_000_000 });
}, 200);

worker.js — entirely synchronous, never yields to the event loop

// import { SABMessagePort } from 'sab-message-port';  // module worker
// or, for a classic worker:
importScripts('sab-message-port/dist/SABMessagePort.global.min.js');
const { SABMessagePort } = SABMessagePortLib;

self.onmessage = (e) => {
  if (e.data.type !== 'SABMessagePort') return;
  const port = SABMessagePort.from(e.data);

  while (true) {
    const task = port.read();            // block until a task arrives
    if (task.cmd !== 'run') continue;

    for (let i = 0; i < task.iterations; i++) {
      /* ... heavy work ... */

      if (i % 500_000 === 0) {
        port.postMessage({ task: task.task, progress: `${(i / task.iterations * 100) | 0}%` });

        if (port.tryRead()?.cmd === 'abort') {   // non-blocking check
          port.postMessage({ task: task.task, aborted: true });
          break;                                  // → back to port.read()
        }
      }
    }
  }
};

SABMessagePort

Bidirectional channel over a single SharedArrayBuffer. Both sides can read and write simultaneously — full duplex. API mirrors the native MessagePort.

All messages must be JSON-serializable (they go through JSON.stringify/JSON.parse internally). Message ordering is FIFO — messages are always delivered in the order they were sent.
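
The idea of carrying JSON through shared memory can be sketched in a few lines. This is a simplified, hypothetical format (length-prefixed UTF-8), not the library's actual wire format:

```javascript
// Hypothetical sketch of JSON-over-shared-memory: length-prefixed UTF-8.
// Not the library's real wire format (which adds batching, chunking, and
// wait/notify signaling on top).
const sab = new SharedArrayBuffer(1024);
const header = new Int32Array(sab, 0, 1); // [0] = payload byte length
const body = new Uint8Array(sab, 4);

function writeMsg(msg) {
  const bytes = new TextEncoder().encode(JSON.stringify(msg));
  body.set(bytes);
  Atomics.store(header, 0, bytes.length); // publish the length last
}

function readMsg() {
  const len = Atomics.load(header, 0);
  // slice() copies out of shared memory into a normal ArrayBuffer,
  // which TextDecoder accepts everywhere.
  return JSON.parse(new TextDecoder().decode(body.slice(0, len)));
}

writeMsg({ hello: 'world', n: 42 });
const roundTripped = readMsg();
console.log(roundTripped);
```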

new SABMessagePort(side = 'a', sizeKB = 256, queueLimit = null)

Creates a new bidirectional port.

| Parameter | Default | Description |
|-----------|---------|-------------|
| side | 'a' | 'a' (initiator) or 'b' (responder). Typically only the initiator is created directly; the responder uses SABMessagePort.from(). |
| sizeKB | 256 | Total buffer size in KB (split in half between the two directions), or an existing SharedArrayBuffer. Each direction gets sizeKB / 2 KB of buffer space. |
| queueLimit | null | Initial per-queue message limit; see port.queueLimit. |

const port = new SABMessagePort();          // 256 KB total (128 KB per direction), side 'a'
const port = new SABMessagePort('a', 512);  // 512 KB total (256 KB per direction)

// Or pass an existing SharedArrayBuffer
const sab = new SharedArrayBuffer(256 * 1024);
const port = new SABMessagePort('a', sab);

Throws if side is not 'a' or 'b'.

SABMessagePort.from(initMsg)

Creates the responder side ('b') from a received init message. The init message must have type: 'SABMessagePort' and a buffer property containing the SharedArrayBuffer.

Throws if initMsg.type !== 'SABMessagePort'.

// Worker side
self.onmessage = (e) => {
  if (e.data.type === 'SABMessagePort') {
    const port = SABMessagePort.from(e.data);
    // ready to send and receive
  }
};

port.postInit(target = null, extraProps = {})

Sends the shared buffer to the other side via postMessage, or returns the arguments for manual sending.

| Parameter | Default | Description |
|-----------|---------|-------------|
| target | null | A Worker, MessagePort, or any object with a postMessage method. If null, returns the arguments instead of sending. |
| extraProps | {} | Additional properties merged into the init message (e.g. { channel: 'rpc' }). |

When target is provided, calls target.postMessage(data, transferList) directly. When target is null, returns [data, transferList] — a two-element array where data is the init message object and transferList is [sharedArrayBuffer].

// Auto-send to worker
port.postInit(worker, { channel: 'rpc' });

// Manual — returns [data, transferList]
const [data, transfer] = port.postInit(null, { channel: 'rpc' });
// data = { type: 'SABMessagePort', buffer: SharedArrayBuffer, channel: 'rpc' }
// transfer = [SharedArrayBuffer]
worker.postMessage(data, transfer);

port.postMessage(msg) → Promise

Queues a JSON-serializable message for sending. Returns a promise that resolves when the message has been written to the shared buffer.

Multiple postMessage() calls made before the writer flushes are batched into a single payload and sent together. The returned promise resolves when the entire batch containing that message is fully written. This means you can fire off multiple postMessage() calls without awaiting — they will be coalesced efficiently.

// Fire-and-forget (message is queued and sent asynchronously)
port.postMessage({ action: 'save', data: [1, 2, 3] });

// Or await to know when it's been written to the buffer
await port.postMessage({ action: 'save', data: [1, 2, 3] });

// Batching: these may all be sent as one payload
port.postMessage({ a: 1 });
port.postMessage({ b: 2 });
port.postMessage({ c: 3 });

Throws if the port has been closed.

port.onmessage

Event-driven reader. Mirrors the MessagePort.onmessage pattern. Setting a handler starts a continuous async read loop; setting null stops it.

The handler receives an event object with a data property containing the message, matching the Web API convention: handler({ data: message }).

port.onmessage = (e) => {
  console.log(e.data); // the received message
};

// Stop listening
port.onmessage = null;

Mutual exclusion: You cannot call read(), asyncRead(), or tryRead() while an onmessage handler is active — doing so throws an error. Set onmessage = null first.

Error resilience: If the handler throws, the error is silently caught and the message loop continues. This ensures one bad message doesn't break the entire channel.

Re-assignment: Assigning a new handler function replaces the current one immediately within the same loop — no gap in delivery and no duplicate loops.
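
The three behaviors above can be sketched as a delivery loop. This is hypothetical code illustrating the contract, not the library's source:

```javascript
// Hypothetical sketch of the onmessage delivery loop: each message goes to
// the *current* handler, a throwing handler is swallowed, and a null
// handler stops the loop. Not the library's actual implementation.
async function deliveryLoop(nextMessage, getHandler) {
  for (;;) {
    const handler = getHandler();
    if (handler === null) return;   // onmessage = null stops the loop
    const msg = await nextMessage();
    if (msg === undefined) return;  // no more messages in this sketch
    try {
      handler({ data: msg });       // Web-style { data } event shape
    } catch (_) {
      // one bad handler invocation must not kill the channel
    }
  }
}

const queue = ['a', 'boom', 'c'];
const seen = [];
const handler = (e) => {
  if (e.data === 'boom') throw new Error('bad message');
  seen.push(e.data);
};
const finished = deliveryLoop(async () => queue.shift(), () => handler);
finished.then(() => console.log(seen)); // loop survived the throw
```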

await port.asyncRead(timeout = Infinity, maxMessages = 1) → message | null | Array

Async read using Atomics.waitAsync. Safe on the main thread. Waits for a message or until timeout expires.

| Parameter | Default | Description |
|-----------|---------|-------------|
| timeout | Infinity | Maximum time to wait in milliseconds. Infinity waits forever. |
| maxMessages | 1 | Maximum number of messages to return. |

Return value depends on maxMessages:

  • maxMessages = 1 (default): Returns a single message, or null if timeout expired with no data.
  • maxMessages > 1: Returns an array of messages ordered newest-first, oldest-last, up to maxMessages items. Returns an empty array [] if timeout expired with no data. You can pop() from the returned array to process messages in send order (FIFO).

If messages are already queued internally from a previous batch, they are returned immediately without waiting.

const msg = await port.asyncRead();          // wait forever, returns single message
const msg = await port.asyncRead(1000);      // wait up to 1s, returns message or null
const msgs = await port.asyncRead(1000, 5);  // up to 5 messages, array (newest first, pop for FIFO)
const msgs = await port.asyncRead(0, 10);    // non-blocking, returns whatever is available now

Throws if the port has been closed, or if onmessage is active.
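
The primitive behind this method is Atomics.waitAsync, which parks a Promise instead of blocking the thread, so it is legal on the main thread. A plain Node.js sketch (not the library itself):

```javascript
// Atomics.waitAsync: the non-blocking counterpart to Atomics.wait.
const sab = new SharedArrayBuffer(8);
const flag = new Int32Array(sab);

// Start waiting for flag[0] to change from 0, with a 1 s timeout.
const waitResult = Atomics.waitAsync(flag, 0, 0, 1000);
// waitResult.async is true when a real wait began; .value is then a Promise.

const settled = Promise.resolve(waitResult.value).then((outcome) => {
  console.log(outcome, Atomics.load(flag, 0)); // → ok 7
  return outcome;
});

// Normally another thread does this; same-thread notify works for the demo.
Atomics.store(flag, 0, 7);
Atomics.notify(flag, 0);
```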

port.read(timeout = Infinity, blocking = true, maxMessages = 1) → message | null | Array

Synchronous read using Atomics.wait. Worker threads only — calling this on the main thread throws (Atomics.wait is not allowed on the main thread).

| Parameter | Default | Description |
|-----------|---------|-------------|
| timeout | Infinity | Maximum time to wait in milliseconds. Ignored when blocking is false. |
| blocking | true | If true, blocks the thread until data arrives or timeout expires. If false, returns immediately. |
| maxMessages | 1 | Maximum number of messages to return. |

Return value depends on maxMessages:

  • maxMessages = 1 (default): Returns a single message, or null if no data is available (timeout or non-blocking).
  • maxMessages > 1: Returns an array of messages ordered newest-first, oldest-last, up to maxMessages items. Returns an empty array [] if no data is available. You can pop() from the returned array to process messages in send order (FIFO).

If messages are already queued internally from a previous batch, they are returned immediately without blocking.

Timeout and multipart messages: The timeout only applies to the initial wait for a message. Once a large (multipart) message begins arriving, the read waits indefinitely for all remaining parts to ensure the message is fully received.

const msg = port.read();                    // block forever until message
const msg = port.read(500);                 // block up to 500ms, null on timeout
const msg = port.read(0, false);            // non-blocking, returns null if empty
const msgs = port.read(1000, true, 5);      // block up to 1s, up to 5 msgs (newest first, pop for FIFO)

Throws if the port has been closed, or if onmessage is active.

port.tryRead(maxMessages = 1) → message | null | Array

Non-blocking read. Equivalent to port.read(0, false, maxMessages). Returns immediately with available data.

| Parameter | Default | Description |
|-----------|---------|-------------|
| maxMessages | 1 | Maximum number of messages to return. |

Return value depends on maxMessages:

  • maxMessages = 1 (default): Returns a single message, or null.
  • maxMessages > 1: Returns an array (newest-first, oldest-last), or an empty array []. pop() to process in FIFO order.

const msg = port.tryRead();       // single message or null
const msgs = port.tryRead(10);    // up to 10 messages (array, newest first) or []

port.tryPeek() → message | null

Non-blocking peek. Returns the next message without removing it from the queue. If the queue is empty, attempts a non-blocking read from the shared buffer first. Returns null if no data is available.

port.queueLimit (getter/setter)

Optional per-queue message count limit. When the queue exceeds the limit, the oldest messages are silently discarded. Applies to both the internal write queue and read queue.

  • null (default): No limit — queues grow without bound.
  • Non-negative integer: Maximum number of messages allowed in each queue.
  • Setting a lower limit immediately trims the existing queue.

const port = new SABMessagePort('a', 256, 5);  // queueLimit=5 via constructor
port.queueLimit = 10;   // change at runtime
port.queueLimit = null;  // disable

The third constructor parameter (queueLimit) sets the initial limit. SABMessagePort.from(initMsg, queueLimit) also accepts it.

port.onQueueOverflow (getter/setter)

Callback invoked before a queue is truncated due to exceeding its queueLimit. Receives the overflowing queue array by reference — the callback can inspect, log, or modify it. After the callback returns, the queue is truncated only if it still exceeds the limit.

port.onQueueOverflow = (queue) => {
  console.warn(`Dropping ${queue.length - port.queueLimit} messages`);
  // Or handle manually: queue.splice(0, queue.length - port.queueLimit);
};

  • null (default): No callback — overflow is silently truncated.
  • Propagates to both internal SABPipe instances (writer and reader).
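
The overflow policy described above can be sketched as follows. This is hypothetical code illustrating the contract (callback first, then trim only if the queue still exceeds the limit), not the library's source:

```javascript
// Hypothetical sketch of the queueLimit / onQueueOverflow contract:
// the callback sees the queue by reference and may trim it itself;
// truncation (drop oldest) only happens if it still exceeds the limit.
function enforceLimit(queue, limit, onQueueOverflow) {
  if (limit === null || queue.length <= limit) return queue;
  if (onQueueOverflow) onQueueOverflow(queue);   // may inspect, log, or trim
  if (queue.length > limit) {
    queue.splice(0, queue.length - limit);       // drop oldest first
  }
  return queue;
}

const q = [1, 2, 3, 4, 5];
enforceLimit(q, 3, (queue) => console.warn(`dropping ${queue.length - 3}`));
console.log(q); // → [3, 4, 5]
```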

port.close()

Disposes both directions. Unblocks any waiting readers/writers by signaling disposal. After closing, all postMessage(), read(), asyncRead(), and tryRead() calls will throw. Calling close() multiple times is safe (subsequent calls are no-ops).

port.buffer → SharedArrayBuffer

The underlying shared buffer.


SABPipe

Unidirectional channel — one end writes, the other reads. Used internally by SABMessagePort, but useful on its own when you only need one-way communication.

All messages must be JSON-serializable. Message ordering is FIFO.

new SABPipe(role, sabOrSize = 131072, byteOffset = 0, sectionSize = null, queueLimit = null)

| Parameter | Default | Description |
|-----------|---------|-------------|
| role | (required) | 'w' (writer) or 'r' (reader). Throws if invalid. |
| sabOrSize | 131072 | Byte size for a new buffer (128 KB), or an existing SharedArrayBuffer. |
| byteOffset | 0 | Starting byte offset in the SAB. |
| sectionSize | null | Section size in bytes. Defaults to the remaining SAB from byteOffset. |
| queueLimit | null | Max messages in the queue. null = unlimited. When exceeded, oldest messages are discarded. |

The writer and reader must share the same SharedArrayBuffer (and same offset/section) to communicate. Role enforcement is strict: the writer can only call postMessage(), and the reader can only call read()/asyncRead()/tryRead()/onmessage. Calling the wrong method throws.

// Writer creates the buffer
const writer = new SABPipe('w');

// Reader attaches to the same buffer
const reader = new SABPipe('r', writer.buffer);

Writer API

writer.postMessage(msg) → Promise

Queues a JSON-serializable message for sending. Returns a promise that resolves when the batch is written. Multiple calls are batched — see SABMessagePort.postMessage for details.

writer.postMessage({ hello: 'world' });

Reader API

All read methods share the same return-value convention:

  • maxMessages = 1 (default): returns a single message or null.
  • maxMessages > 1: returns an array of messages ordered newest-first, oldest-last, or an empty array [] if no data. pop() to process in FIFO order.

reader.read(timeout = Infinity, blocking = true, maxMessages = 1)

Synchronous read. Worker threads only (uses Atomics.wait).

| Parameter | Default | Description |
|-----------|---------|-------------|
| timeout | Infinity | Max wait time in ms. Ignored when non-blocking. |
| blocking | true | If false, returns immediately without waiting. |
| maxMessages | 1 | Max messages to return. |

Timeout only applies to the initial wait. Multipart messages (large payloads that span multiple chunks) always wait for all parts once the first part arrives.

await reader.asyncRead(timeout = Infinity, maxMessages = 1)

Async read using Atomics.waitAsync. Safe on the main thread.

| Parameter | Default | Description |
|-----------|---------|-------------|
| timeout | Infinity | Max wait time in ms. |
| maxMessages | 1 | Max messages to return. |

reader.tryRead(maxMessages = 1)

Non-blocking read. Equivalent to reader.read(0, false, maxMessages). Returns immediately.

reader.tryPeek() → message | null

Non-blocking peek. Returns the next message that would be returned by read() or tryRead(), without removing it from the queue. If the internal queue is empty, attempts a non-blocking read from the shared buffer first. Returns null if no data is available.

const msg = reader.tryPeek();   // peek at next message, or null
const same = reader.tryRead();  // consumes the same message

reader.onmessage

Event-driven handler. Setting a function starts a continuous async read loop; setting null stops it. Handler receives { data: message }. See SABMessagePort.onmessage for full behavior details (mutual exclusion, error resilience, re-assignment).

Shared

pipe.queueLimit (getter/setter)

Optional per-queue message count limit. See SABMessagePort.queueLimit for details.

pipe.onQueueOverflow (getter/setter)

Callback invoked before queue truncation. See SABMessagePort.onQueueOverflow for details.

pipe.close() / pipe.destroy()

Disposes the channel and unblocks any waiting readers/writers. After disposal, all read/write operations throw 'SABPipe disposed'. Safe to call multiple times.

pipe.isDisposed() → boolean

Returns true if the pipe has been disposed (by either side).


MWChannel

A hybrid channel that uses the best transport for each direction: native MessagePort for worker→main (faster, no SharedArrayBuffer overhead) and SABPipe for main→worker (enables blocking reads).

SABMessagePort uses SABPipe in both directions — which means worker→main messages pay the SABPipe serialization cost even though the worker never needs to block on outgoing messages. MWChannel avoids this by using native postMessage for the worker→main direction, where it's typically faster, while keeping SABPipe for the main→worker direction where blocking reads are the whole point.

The worker can also switch between blocking (SABPipe) and non-blocking (MessagePort) receive modes at runtime for the main→worker direction.

Key design:

  • Worker→main: Always uses native MessagePort (faster, no SABPipe overhead).
  • Main→worker: Uses SABPipe by default (enables read()/tryRead() on the worker). Can be switched to native MessagePort when blocking reads aren't needed.
  • The main thread always receives via native MessagePort and never blocks.

new MWChannel(side, sabSizeKB = 128, queueLimit = null)

Creates a new channel.

| Parameter | Default | Description |
|-----------|---------|-------------|
| side | (required) | 'm' (main thread) or 'w' (worker). Throws if invalid. |
| sabSizeKB | 128 | SABPipe buffer size in KB. Main side only. |
| queueLimit | null | Max messages in the queue. null = unlimited. |

const port = new MWChannel('m');         // 128 KB SABPipe buffer
const port = new MWChannel('m', 256);   // 256 KB SABPipe buffer

MWChannel.from(initMsg, queueLimit = null)

Creates the worker side from a received init message. The init message must have type: 'MWChannel'.

The worker starts in blocking mode by default.

self.onmessage = (e) => {
  if (e.data.type === 'MWChannel') {
    const port = MWChannel.from(e.data);
    // ready — defaults to blocking mode
  }
};

port.postInit(target = null, extraProps = {})

Sends the SAB and a MessagePort to the other side. The init message contains { type: 'MWChannel', buffer, port, ...extraProps }. The MessagePort is included in the transfer list.

If target is null, returns [msg, transferList] for manual sending.

// Auto-send to worker
port.postInit(worker, { channel: 'events' });

// Manual
const [msg, transfer] = port.postInit(null);
worker.postMessage(msg, transfer);

port.postMessage(msg)

Main side: Sends via SABPipe (blocking mode) or native MessagePort (nonblocking mode), depending on the current mode. Returns a Promise in blocking mode, undefined in nonblocking mode.

Worker side: Always sends via native MessagePort. Returns undefined.

port.onmessage

Main side: Event handler on the native MessagePort. Always active — the main thread never blocks. Handler receives { data: message }.

Worker side: Only available in nonblocking mode. Delegates to MessagePort.onmessage. Setting onmessage in blocking mode throws — use read()/tryRead() instead.

port.read(timeout = Infinity, blocking = true, maxMessages = 1)

Worker side, blocking mode only. Synchronous read from the SABPipe. Same return-value convention as SABPipe.read(). Throws if called on main side or in nonblocking mode.

port.tryRead(maxMessages = 1)

Worker side, blocking mode only. Non-blocking read from the SABPipe. Throws if called on main side or in nonblocking mode.

port.tryPeek()

Worker side, blocking mode only. Non-blocking peek from the SABPipe. Returns the next message without removing it, or null. Throws if called on main side or in nonblocking mode.

port.asyncRead(timeout = Infinity, maxMessages = 1)

Worker side, blocking mode only. Async read from the SABPipe. Throws if called on main side or in nonblocking mode.

port.setMode(mode)

Switches the transport mode. mode must be 'blocking' or 'nonblocking'. No-op if already in the requested mode.

Main side: Switches the send transport. 'blocking' sends via SABPipe, 'nonblocking' sends via native MessagePort.

Worker side: Switches the receive transport.

Switching to 'blocking':

  1. Detaches the onmessage handler from the native MessagePort.
  2. Sets mode to 'blocking'; read()/tryRead()/asyncRead() become available.

Switching to 'nonblocking':

  1. Drains any pending messages from the SABPipe via tryRead().
  2. Sets mode to 'nonblocking'; onmessage becomes available.
  3. Drained messages are delivered to the onmessage handler when it is set.

Mode switching is not automatic — the programmer is responsible for coordinating both sides. The typical pattern is:

  1. Worker sends an RPC to tell main which mode to use for sending.
  2. Main calls port.setMode(newMode) to switch its send transport.
  3. Worker calls port.setMode(newMode) to switch its receive transport.

port.queueLimit (getter/setter)

Optional per-queue message count limit. Delegates to the underlying SABPipe (_sabWriter on main, _sabReader on worker). Also enforces on the _pendingDrain buffer during mode switches.

port.onQueueOverflow (getter/setter)

Callback invoked before queue truncation. Delegates to the underlying SABPipe and also fires for _pendingDrain overflow. See SABMessagePort.onQueueOverflow for details.

port.close()

Destroys the SABPipe and closes the native MessagePort. All subsequent operations throw.

port.buffer → SharedArrayBuffer

The underlying SABPipe buffer.

MWChannel Usage Example

// === Main thread ===
import { MWChannel } from 'sab-message-port';

const worker = new Worker('./worker.js', { type: 'module' });
const port = new MWChannel('m');
port.postInit(worker, { channel: 'events' });

port.onmessage = (e) => { /* worker→main always arrives here */ };

// Send in blocking mode (default — worker reads via SABPipe)
port.postMessage(events);

// Worker requests nonblocking mode:
port.setMode('nonblocking');
port.postMessage(events);  // now sent via native MessagePort

// === Worker ===
import { MWChannel } from 'sab-message-port';

self.onmessage = (e) => {
  if (e.data.type !== 'MWChannel') return;
  const port = MWChannel.from(e.data);

  // Start in blocking mode (default)
  while (running) {
    const events = port.read();  // blocks via SABPipe
    for (const evt of events) handle(evt);
  }

  // Switch to nonblocking
  port.postMessage({ cmd: 'set_mode', mode: 'nonblocking' }); // tell main
  port.setMode('nonblocking');
  port.onmessage = (e) => handle(e.data);
};

Message Batching

When the writer side calls postMessage() multiple times in quick succession (without awaiting), messages are batched into a single payload and sent together over the shared buffer. On the reader side, these batched messages are unpacked into an internal queue and delivered one at a time.

This means a single read() or asyncRead() call may populate the internal queue with multiple messages. Subsequent reads return immediately from the queue without waiting on the shared buffer. Use maxMessages > 1 to retrieve multiple queued messages in one call.

// Writer side: 3 messages batched into one payload
writer.postMessage({ id: 0 });
writer.postMessage({ id: 1 });
writer.postMessage({ id: 2 });

// Reader side: first read waits for data, gets all 3 into the queue
const msg0 = reader.read();   // { id: 0 } — waited for data
const msg1 = reader.read();   // { id: 1 } — returned immediately from queue
const msg2 = reader.read();   // { id: 2 } — returned immediately from queue

// Or get all at once (newest first — pop() for FIFO)
const msgs = reader.read(Infinity, true, 10); // [{ id: 2 }, { id: 1 }, { id: 0 }]
msgs.pop(); // { id: 0 } — oldest
msgs.pop(); // { id: 1 }
msgs.pop(); // { id: 2 } — newest

Large Messages & Chunking

Messages larger than the buffer's data section are automatically split into chunks (multipart messages) and reassembled on the reader side. This is fully transparent — no API changes needed regardless of message size.

During a multipart read, timeout is suspended: once the first chunk arrives, the reader waits indefinitely for remaining chunks to ensure the full message is received.

The maximum single-chunk payload is bufferSize - 32 bytes (32 bytes are reserved for control fields). For the default 128 KB (131,072-byte) pipe, that's 131,040 bytes per chunk.
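
A worked example of the arithmetic, assuming the 32-byte control header quoted above and the default pipe size:

```javascript
// Chunk-count arithmetic for a large payload through the default pipe.
// Assumes the 32-byte control header described in the chunking section.
const pipeBytes = 128 * 1024;       // default SABPipe section: 131072 bytes
const chunkBytes = pipeBytes - 32;  // 131040 usable payload bytes per chunk
const payloadBytes = 1_000_000;     // a ~1 MB serialized message
const chunks = Math.ceil(payloadBytes / chunkBytes);
console.log(chunks); // → 8
```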


Performance

Benchmarked on a single machine, Node.js worker threads, 1000 messages (~757 KB total):

| Mode | Avg Latency | Throughput |
|------|-------------|------------|
| Blocking read (Atomics.wait) | ~19 µs/msg | ~39 MB/s |
| Async read (Atomics.waitAsync) | ~27 µs/msg | ~27 MB/s |

Sustained throughput (3 second run, ~500 byte messages):

| Mode | Messages/sec | Throughput |
|------|--------------|------------|
| Blocking read | ~59,000 msg/s | ~30 MB/s |
| Async read | ~50,000 msg/s | ~25 MB/s |

Blocking reads are faster because Atomics.wait wakes with lower latency than the async event loop. Use blocking reads in worker threads for maximum performance; use async reads on the main thread or when you need to interleave with other async work.

Round-Trip Comparison

Bidirectional echo test — main sends a message, worker echoes it back, repeat. 1000 round-trips, 118-byte messages. Node.js worker threads.

| Configuration | Avg Latency | Messages/sec | Throughput | Relative |
|---------------|-------------|--------------|------------|----------|
| MWChannel (blocking worker) | ~16 µs/rt | ~63,000 msg/s | ~14 MB/s | 1.00x |
| Native MessagePort | ~18 µs/rt | ~56,000 msg/s | ~13 MB/s | 1.15x |
| SABMessagePort (blocking worker) | ~23 µs/rt | ~43,000 msg/s | ~10 MB/s | 1.49x |
| SABMessagePort (async both sides) | ~26 µs/rt | ~38,000 msg/s | ~9 MB/s | 1.66x |

MWChannel wins because it combines blocking Atomics.wait reads (low wake-up latency) with native MessagePort writes (zero SABPipe overhead for the worker→main direction). SABMessagePort pays the SABPipe cost in both directions.


Round-Trip Comparison (Chrome)

Same test, Chrome 137 with cross-origin isolation headers.

| Configuration | Avg Latency | Messages/sec | Throughput | Relative |
|---------------|-------------|--------------|------------|----------|
| Native MessagePort | ~33 µs/rt | ~30,000 msg/s | ~6.8 MB/s | 1.00x |
| MWChannel (blocking worker) | ~46 µs/rt | ~21,500 msg/s | ~4.9 MB/s | 1.40x |
| SABMessagePort (blocking worker) | ~47 µs/rt | ~21,300 msg/s | ~4.8 MB/s | 1.41x |
| SABMessagePort (async both sides) | ~66 µs/rt | ~15,100 msg/s | ~3.4 MB/s | 1.99x |

In Chrome, native MessagePort is fastest for round-trips. MWChannel and SABMessagePort blocking are nearly identical — Chrome's Atomics.wait wake-up latency is higher than Node.js, reducing the advantage of blocking reads. Async SABMessagePort remains the slowest at ~2x.


Requirements

  • Node.js >= 16 or any browser with SharedArrayBuffer support
  • SharedArrayBuffer requires cross-origin isolation headers in browsers

License

BSD-3-Clause