
sbe-ts

v0.1.0


Zero-allocation Simple Binary Encoding runtime for TypeScript. Reads SBE-encoded binary messages directly from an ArrayBuffer using a flyweight pattern. Fixed primitive fields and composite accessors are fully zero-allocation; VarData accessors return a zero-copy Uint8Array view (one lightweight view-object allocation per call, no data copying).

Install

npm install sbe-ts

What it does

Wraps a binary buffer with a typed flyweight. Reading a field is a single DataView call at a fixed byte offset, and the decoder object is reused across messages via wrap().

Supports the full SBE feature set: fixed primitive fields, composite types, enums, bitsets, repeating groups (including nested groups), and variable-length data fields. GroupIterator<T> provides zero-allocation for...of iteration over repeating groups.

What it does not do: parse XML schemas, generate code, or handle network transport. For code generation from an SBE XML schema, see sbe-ts-cli.
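The decoding model described above can be illustrated with a minimal standalone flyweight. This is a simplified stand-in written for this example, not the library's actual MessageFlyweight:

```typescript
// Minimal illustration of the flyweight model: one DataView, reads at fixed
// byte offsets. A simplified stand-in, not the sbe-ts source.
class TinyFlyweight {
  private view: DataView;
  private offset: number;

  constructor(buffer: ArrayBuffer, offset: number) {
    this.view = new DataView(buffer);
    this.offset = offset;
  }

  // Re-point at another message without allocating a new decoder object.
  wrap(buffer: ArrayBuffer, offset: number): this {
    this.view = new DataView(buffer);
    this.offset = offset;
    return this;
  }

  // Each read is a single DataView call; SBE is little-endian by default.
  getUint32(fieldOffset: number): number {
    return this.view.getUint32(this.offset + fieldOffset, true);
  }

  getInt64(fieldOffset: number): bigint {
    return this.view.getBigInt64(this.offset + fieldOffset, true);
  }
}

// Write a fake 12-byte message: uint32 at offset 0, int64 at offset 4.
const buf = new ArrayBuffer(12);
const w = new DataView(buf);
w.setUint32(0, 42, true);
w.setBigInt64(4, 123456789n, true);

const fw = new TinyFlyweight(buf, 0);
fw.getUint32(0); // → 42
fw.getInt64(4);  // → 123456789n
```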

Core usage

import { MessageFlyweight } from 'sbe-ts';

const decoder = new MessageFlyweight(buffer, 0);

decoder.getUint32(0);   // read 4 bytes at field offset 0
decoder.getInt64(4);    // read 8 bytes at field offset 4 — returns bigint
decoder.getFloat64(12); // read 8 bytes at field offset 12

Re-use the same decoder for every message in the stream — zero allocation after the first new:

while (hasMessages()) {
  decoder.wrap(nextBuffer(), headerSize);
  process(decoder.getUint32(0), decoder.getInt64(4));
}

Performance guide

The library can reach ~220M ops/sec on a ring-buffer feed. Whether you hit that number or 20M depends entirely on three architectural choices in your ingest pipeline.

1. Use a ring buffer — don't allocate per message

// Allocate once at startup
const ringBuffer = new ArrayBuffer(64 * 1024); // 64 KB ring

// Hot loop — wrap() is a single integer assignment, no allocation
while (feed.hasMessages()) {
  const offset = feed.writeNext(ringBuffer);
  decoder.wrap(ringBuffer, offset);
  process(decoder);
}

Allocating a new ArrayBuffer() per message costs a new DataView() construction on every wrap() call. That's the difference between ~165M and 20M ops/sec. Pre-allocate one large buffer and write messages into it at rotating offsets. Use wrapOffset() instead of wrap() when the buffer hasn't changed to skip the identity check and reach ~220M.
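The buffer-identity caching described above can be modeled standalone. The class below is a simplified stand-in for the library's behavior (the real wrap()/wrapOffset() live in MessageFlyweight), and viewRebuilds is added purely to make the allocation behavior observable:

```typescript
// Simplified model of wrap() vs wrapOffset(): the DataView is only rebuilt
// when the underlying buffer identity changes. Not the library's source;
// viewRebuilds exists only for this example.
class CachingReader {
  private buffer: ArrayBuffer;
  private view: DataView;
  private offset = 0;
  viewRebuilds = 0;

  constructor(buffer: ArrayBuffer) {
    this.buffer = buffer;
    this.view = new DataView(buffer);
    this.viewRebuilds++;
  }

  // wrap(): rebuild the DataView only on a genuinely new buffer.
  wrap(buffer: ArrayBuffer, offset: number): this {
    if (buffer !== this.buffer) {
      this.buffer = buffer;
      this.view = new DataView(buffer); // the only allocation on this path
      this.viewRebuilds++;
    }
    this.offset = offset;
    return this;
  }

  // wrapOffset(): caller promises the buffer is unchanged, so this is a
  // single integer assignment.
  wrapOffset(offset: number): this {
    this.offset = offset;
    return this;
  }

  getUint32(fieldOffset: number): number {
    return this.view.getUint32(this.offset + fieldOffset, true);
  }
}

// One 64 KB ring allocated at startup; messages land at rotating offsets.
const ring = new ArrayBuffer(64 * 1024);
const ringWriter = new DataView(ring);
const reader = new CachingReader(ring);

for (let i = 0; i < 1000; i++) {
  const off = (i * 16) % (64 * 1024);
  ringWriter.setUint32(off, i, true); // simulate the feed writing a message
  reader.wrapOffset(off);             // no identity check, no allocation
  if (reader.getUint32(0) !== i) throw new Error("bad read");
}
// reader.viewRebuilds is still 1: the DataView was never re-created.
```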

2. Pre-allocate decoders — one instance per message type

// At startup — allocate once, reuse forever
const marketData = new MarketDataDecoder(RING_BUF, 0);
const orderAck   = new OrderAckDecoder(RING_BUF, 0);

// Per message — wrap() only updates the offset integer
marketData.wrap(ringBuffer, offset);

Creating new MarketDataDecoder() inside a hot loop re-allocates the object and forces V8 to re-derive the hidden class. One instance per type, allocated once.

3. Keep handler functions monomorphic — branch at the dispatch layer

V8 compiles a dedicated machine-code stub for a function that always sees the same object shape. If one function handles multiple decoder types, the stub degenerates to a polymorphic lookup and throughput can drop 3×.

The fix is to keep each handler function dedicated to one decoder type, and do the routing in a thin jump table:

const mktDecoder = new MarketDataDecoder(ringBuf, 0);
const ackDecoder = new OrderAckDecoder(ringBuf, 0);

// Each function is monomorphic — V8 inlines all field accessors
function onMarketData(buf: ArrayBuffer, off: number): void {
  mktDecoder.wrap(buf, off);
  // ... read fields
}
function onOrderAck(buf: ArrayBuffer, off: number): void {
  ackDecoder.wrap(buf, off);
  // ... read fields
}

// O(1) dispatch — the table lookup is thin; work stays in the monomorphic handlers
const handlers = new Array<((buf: ArrayBuffer, off: number) => void) | undefined>(256);
handlers[MarketDataDecoder.TEMPLATE_ID] = onMarketData;
handlers[OrderAckDecoder.TEMPLATE_ID]   = onOrderAck;

while (feed.hasMessages()) {
  const { buf, off } = feed.next();
  const templateId = header.wrap(buf, off).templateId();
  handlers[templateId]?.(buf, off);
}

The dispatch table itself sees multiple function shapes — that's unavoidable. But it does no work except jump. The field accessors, where 99% of the CPU time is spent, are in the monomorphic handler functions and stay fast.


Building your own decoder

Extend MessageFlyweight with named accessors. This is exactly what sbe-ts-cli generate produces:

import { MessageFlyweight } from 'sbe-ts';

export class MarketDataDecoder extends MessageFlyweight {
  static readonly BLOCK_LENGTH = 24;
  static readonly TEMPLATE_ID  = 1;
  static readonly SCHEMA_ID    = 1;
  static readonly VERSION      = 0;

  instrumentId(): number { return this.getUint32(0); }
  price():        bigint  { return this.getInt64(4); }
  quantity():     bigint  { return this.getInt64(12); }
  flags():        number  { return this.getUint32(20); }
}

const decoder = new MarketDataDecoder(buffer, headerSize);
decoder.price(); // direct DataView read at byte 4, no allocation

Composite types

CompositeFlyweight is the base for fixed-length nested structs (e.g., messageHeader). It has the same API as MessageFlyweight; sbe-ts-cli generates composite classes that extend it.

import { CompositeFlyweight } from 'sbe-ts';

export class MessageHeaderDecoder extends CompositeFlyweight {
  static readonly SIZE = 8;
  blockLength(): number { return this.getUint16(0); }
  templateId():  number { return this.getUint16(2); }
  schemaId():    number { return this.getUint16(4); }
  version():     number { return this.getUint16(6); }
}
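To make the byte layout concrete, here is the same 8-byte header encoded and then decoded with a raw DataView. This standalone sketch mirrors the field offsets in MessageHeaderDecoder above; it is an illustration, not library code:

```typescript
// The SBE message header is 8 bytes: four little-endian uint16 fields at
// byte offsets 0, 2, 4, 6 — matching MessageHeaderDecoder's accessors.
const headerBuf = new ArrayBuffer(8);
const hv = new DataView(headerBuf);
hv.setUint16(0, 24, true); // blockLength
hv.setUint16(2, 1, true);  // templateId
hv.setUint16(4, 1, true);  // schemaId
hv.setUint16(6, 0, true);  // version

// Decoding is the mirror image: fixed offsets, little-endian reads.
const blockLength = hv.getUint16(0, true); // → 24
const templateId  = hv.getUint16(2, true); // → 1
```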

using keyword

MessageFlyweight implements Symbol.dispose, so you can use it with TypeScript's using declaration. When the block exits, offset is set to -1 as a use-after-dispose sentinel:

{
  using decoder = new MarketDataDecoder(buffer, 0);
  decoder.price(); // fine
} // decoder[Symbol.dispose]() called — offset set to -1

Requires "lib": ["ES2025", "ESNext.Disposable"] in tsconfig and TypeScript 5.2+.

Benchmark

Measured with a raw Node.js script (no framework overhead) on Node 24, Windows 11. Run node bench-raw.mjs in the runtime package to reproduce.

Ring-buffer pattern — one large ArrayBuffer, messages at different offsets. This is the realistic hot path for market data feeds.

| Scenario | ops/sec | vs JSON.parse |
| --- | --- | --- |
| wrapOffset() — ring — 4× uint32 | ~220M | ~34× |
| wrap() — ring — 4× uint32 | ~165M | ~26× |
| wrap() — ring — 2× uint32 + 2× int64/BigInt | ~80M | ~12× |
| TypedArray — ring — 4× uint32 | ~140M | ~22× |
| JSON.parse — 4 fields | ~6.4M | baseline |

wrapOffset(offset) skips the buffer-identity check and DataView re-creation when you know the buffer hasn't changed — use it in the inner loop for maximum throughput. wrap() is still correct in all cases.

Rotating buffers — one ArrayBuffer per logical message (typical network packet scenario). Each wrap() call allocates a new DataView over the incoming buffer.

| Scenario | ops/sec | vs JSON.parse |
| --- | --- | --- |
| DataView — 4× uint32 | ~20M | ~3× |
| JSON.parse — 4 fields | ~6.4M | baseline |

Note: V8 inlines DataView.getUint32(constantOffset) to a near-direct memory read in the ring-buffer case. TypedArray still benefits non-V8 runtimes and older V8 builds.
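The TypedArray row from the ring-buffer table can be sketched as follows. This assumes message offsets are 4-byte aligned (a Uint32Array requirement) and uses the host's native byte order; it is an illustration, not the benchmark code:

```typescript
// Reading 4× uint32 through a Uint32Array view instead of a DataView.
// Requires 4-byte-aligned message offsets and reads in native byte order
// (little-endian on x86/ARM). Illustrative sketch only.
const alignedRing = new ArrayBuffer(1024);
const u32 = new Uint32Array(alignedRing); // one view over the whole ring

// "Write" a message of four uint32 fields starting at byte offset 16.
const base = 16 >> 2; // byte offset → uint32 index
u32[base + 0] = 7;
u32[base + 1] = 8;
u32[base + 2] = 9;
u32[base + 3] = 10;

// Hot-path read: plain indexed loads, no per-field DataView call.
const field0 = u32[base + 0]; // → 7
const field3 = u32[base + 3]; // → 10
```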

API reference

All methods are inherited by generated decoder/encoder subclasses.

Constructor

new MessageFlyweight(buffer: ArrayBufferLike, offset: number, littleEndian?: boolean)
// littleEndian defaults to true (SBE default)

Buffer management

wrap(buffer: ArrayBufferLike, offset: number): this  // re-point to a new buffer/offset; resets cursor
wrapOffset(offset: number): this                     // fast-path: update offset only, skip identity check
getBuffer(): ArrayBufferLike
getOffset(): number
[Symbol.dispose](): void                             // sets offset to -1

Primitive reads (all take fieldOffset: number)

| Method | Returns | Bytes |
| --- | --- | --- |
| getInt8(o) / getUint8(o) | number | 1 |
| getInt16(o) / getUint16(o) | number | 2 |
| getFloat16(o) | number | 2 |
| getInt32(o) / getUint32(o) | number | 4 |
| getFloat32(o) | number | 4 |
| getInt64(o) / getUint64(o) | bigint | 8 |
| getFloat64(o) | number | 8 |

Primitive writes (fieldOffset, value — all return this for chaining)

Same naming with set prefix. setInt64 / setUint64 take bigint.
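The return-this chaining and the bigint setters can be illustrated with a minimal standalone encoder. This is a stand-in written for the example, not the library's MessageFlyweight:

```typescript
// Minimal standalone encoder illustrating the set*-returns-this chaining
// pattern and the bigint int64 setter. Not the sbe-ts source.
class TinyEncoder {
  private view: DataView;
  private offset: number;

  constructor(buffer: ArrayBuffer, offset: number) {
    this.view = new DataView(buffer);
    this.offset = offset;
  }

  setUint32(fieldOffset: number, value: number): this {
    this.view.setUint32(this.offset + fieldOffset, value, true);
    return this; // returning this enables chaining
  }

  setInt64(fieldOffset: number, value: bigint): this {
    this.view.setBigInt64(this.offset + fieldOffset, value, true);
    return this;
  }

  getUint32(fieldOffset: number): number {
    return this.view.getUint32(this.offset + fieldOffset, true);
  }

  getInt64(fieldOffset: number): bigint {
    return this.view.getBigInt64(this.offset + fieldOffset, true);
  }
}

const msgBuf = new ArrayBuffer(12);
const enc = new TinyEncoder(msgBuf, 0);
enc.setUint32(0, 1001).setInt64(4, -5n); // chained writes

enc.getUint32(0); // → 1001
enc.getInt64(4);  // → -5n
```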

String utilities

import { encodeString, decodeString } from 'sbe-ts';

encodeString(str: string, buf: ArrayBufferLike, offset: number, maxLen: number): void
decodeString(buf: ArrayBufferLike, offset: number, maxLen: number): string
// stops at null byte; pads with zeros on encode
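The described semantics (zero-padded write, null-terminated read) can be sketched standalone. The functions below illustrate the behavior under a UTF-8 assumption; they are not the sbe-ts source:

```typescript
// Standalone sketch of the fixed-length string semantics described above:
// encoding zero-pads up to maxLen, decoding stops at the first null byte.
// Illustration only — not the library implementation.
function encodeStringSketch(str: string, buf: ArrayBuffer, offset: number, maxLen: number): void {
  const bytes = new TextEncoder().encode(str);
  const target = new Uint8Array(buf, offset, maxLen);
  target.fill(0);                        // zero padding
  target.set(bytes.subarray(0, maxLen)); // truncate if too long
}

function decodeStringSketch(buf: ArrayBuffer, offset: number, maxLen: number): string {
  const bytes = new Uint8Array(buf, offset, maxLen);
  let end = bytes.indexOf(0);            // stop at the first null byte
  if (end === -1) end = maxLen;
  return new TextDecoder().decode(bytes.subarray(0, end));
}

const strBuf = new ArrayBuffer(16);
encodeStringSketch("EURUSD", strBuf, 0, 8);
decodeStringSketch(strBuf, 0, 8); // → "EURUSD"
```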

GroupIterator

GroupIterator<T> iterates repeating groups with zero allocation per entry. Generated by sbe-ts-cli — not typically constructed directly.

import { GroupIterator } from 'sbe-ts';
import type { GroupEntry } from 'sbe-ts';

// Generated entry class satisfies GroupEntry:
// interface GroupEntry {
//   wrap(buffer: ArrayBufferLike, offset: number): unknown;
//   absoluteEnd(): number;
// }

const fills = decoder.fills(); // returns GroupIterator<FillsEntry>
for (const entry of fills) {
  console.log(entry.price(), entry.quantity());
  // early break is safe — iterator.return() fast-forwards remaining entries
}
// fills.absoluteEnd() gives the byte position after all entries

Requirements

  • Node 22+ — required for DataView.getFloat16 / setFloat16 (V8 native, no polyfill)
  • TypeScript 5.2+ for using / Symbol.dispose; TypeScript 6+ recommended