shmio

High-performance shared memory library for Node.js with append-only log semantics, designed for event sourcing and inter-process communication.

Features

  • Memory-mapped files with automatic buffer management
  • Append-only log with atomic commits
  • Symmetric frame headers for bidirectional iteration
  • Zero-copy frame access and mutation
  • Native N-API iterator with configurable batch reads
  • Single writer / multi-reader concurrency model
  • Optional debug checks with zero production overhead
  • TypeScript support with full type definitions

Installation

npm install shmio

Quick Start

Writer Process

import { createSharedLog } from 'shmio'
import { Bendec } from 'bendec'

const bendec = new Bendec({
  types: [{
    name: 'LogEvent',
    fields: [
      { name: 'timestamp', type: 'u64' },
      { name: 'level', type: 'u8' },
      { name: 'message', type: 'string' },
    ],
  }],
})

const log = createSharedLog({
  path: '/dev/shm/myapp-events',
  capacityBytes: 16n * 1024n * 1024n, // 16 MiB
  writable: true,
  debugChecks: process.env.SHMIO_DEBUG === 'true',
})

const writer = log.writer!

const frameSize = bendec.getSize('LogEvent')
const frame = writer.allocate(frameSize)
bendec.encodeAs({
  timestamp: BigInt(Date.now()),
  level: 1,
  message: 'Application started',
}, 'LogEvent', frame)

writer.commit()
log.close()

Reader Process

import { createSharedLog } from 'shmio'
import { Bendec } from 'bendec'

const bendec = new Bendec({ /* same schema as writer */ })

const log = createSharedLog({
  path: '/dev/shm/myapp-events',
  writable: false,
})

const iterator = log.createIterator()

const batch = iterator.nextBatch({ maxMessages: 32 })
for (const buffer of batch) {
  const event = bendec.decodeAs(buffer, 'LogEvent')
  console.log(event)
}

iterator.close()
log.close()

API

createSharedLog(options)

Creates (or opens) a memory-mapped append-only log. Options:

createSharedLog({
  path: string,                   // File path (/dev/shm/name for shared memory)
  capacityBytes?: number | bigint, // Desired file size when creating (required if writable=true; optional for read-only)
  writable: boolean,              // Enable writer support
  debugChecks?: boolean,          // Optional integrity checks for writer + iterator
})

Returns a SharedLog with:

  • header — a mutable Bendec wrapper exposing headerSize, dataOffset, and the current size cursor.
  • createIterator(options?) — opens a new native iterator. Pass { startCursor: bigint } to resume from a stored position.
  • writer — available when writable: true. Use it to append frames atomically.
  • close() — release the underlying file descriptor and mapping.
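For example, a reader can persist cursor() between runs and resume with startCursor (a minimal sketch; the path, the RESUME_CURSOR variable, and the string round-trip of the cursor are illustrative assumptions):

import { createSharedLog } from 'shmio'

// Open the existing log read-only.
const log = createSharedLog({
  path: '/dev/shm/myapp-events',
  writable: false,
})

// Resume from a cursor saved by a previous run (stored here as a decimal string).
const saved = process.env.RESUME_CURSOR
const iterator = saved
  ? log.createIterator({ startCursor: BigInt(saved) })
  : log.createIterator()

const batch = iterator.nextBatch({ maxMessages: 64 })
// ... decode the frames as in the Quick Start ...

// Persist the new position somewhere durable for the next run.
const resumeAt = iterator.cursor().toString()
console.log(`next run should resume at ${resumeAt}`)

iterator.close()
log.close()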

ShmIterator

Native iterator instances returned by createIterator() expose:

  • next() — returns the next frame as a Buffer, or null when no new data is committed.
  • nextBatch({ maxMessages, maxBytes, debugChecks }) — pulls multiple frames in one call.
  • cursor() — current read cursor (as bigint). Persist this to resume later.
  • committedSize() — total number of committed bytes visible to readers.
  • seek(position) — jump to an absolute cursor position.
  • close() — release underlying native resources.
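A single-frame read loop built from these methods might look like this (a sketch; decoding is elided, and rewinding assumes that values returned by cursor() are valid seek targets):

import { createSharedLog } from 'shmio'

const log = createSharedLog({ path: '/dev/shm/myapp-events', writable: false })
const iterator = log.createIterator()

// Remember where we started so we can rewind later.
const mark = iterator.cursor()

// Drain everything committed so far, one frame at a time.
let frame = iterator.next()
while (frame !== null) {
  // ... decode `frame` with your serializer ...
  frame = iterator.next()
}

// How far along are we?
console.log(`read up to ${iterator.cursor()} of ${iterator.committedSize()} committed bytes`)

// Rewind to the recorded position and release resources when done.
iterator.seek(mark)
iterator.close()
log.close()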

ShmWriter

When the log is writable, log.writer exposes:

  • allocate(size, { debugChecks }) — reserves a frame buffer for writing.
  • commit() — atomically publishes all allocated frames since the previous commit.
  • close() — releases writer resources.
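Several frames can be allocated and encoded before a single commit(), which publishes them to readers together (a sketch reusing the LogEvent schema from the Quick Start):

import { createSharedLog } from 'shmio'
// `bendec` is the same Bendec instance defined in the Quick Start writer example.

const log = createSharedLog({
  path: '/dev/shm/myapp-events',
  capacityBytes: 16n * 1024n * 1024n,
  writable: true,
})
const writer = log.writer!
const frameSize = bendec.getSize('LogEvent')

for (const message of ['starting', 'ready', 'listening']) {
  // Reserve a frame in shared memory and encode directly into it (zero-copy).
  const frame = writer.allocate(frameSize)
  bendec.encodeAs({
    timestamp: BigInt(Date.now()),
    level: 1,
    message,
  }, 'LogEvent', frame)
}

// One commit makes all three frames visible to readers atomically.
writer.commit()
log.close()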

Architecture

Frame Structure

Each message has symmetric headers for bidirectional iteration:

┌─────────────┬──────────────────┬─────────────┐
│ Leading u16 │   Message Data   │ Trailing u16│
│   (size)    │  (variable len)  │   (size)    │
└─────────────┴──────────────────┴─────────────┘
     2 bytes        N bytes           2 bytes

Both size fields contain the total frame size (N + 4 bytes). This enables:

  • Forward iteration (read leading size, skip forward)
  • Backward iteration (read trailing size, skip backward)
  • Integrity validation (compare both sizes)
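The library handles framing internally, but as an illustration, walking a buffer of frames by hand would look roughly like this (a conceptual sketch that assumes little-endian u16 size fields):

// Conceptual sketch: iterate frames in a Buffer using the symmetric headers.
function* walkFrames(buf: Buffer): Generator<Buffer> {
  let offset = 0
  while (offset + 4 <= buf.length) {
    const total = buf.readUInt16LE(offset)              // leading size = N + 4
    if (total < 4 || offset + total > buf.length) break
    const trailing = buf.readUInt16LE(offset + total - 2)
    if (trailing !== total) throw new Error('frame corrupt: size mismatch')
    yield buf.subarray(offset + 2, offset + total - 2)  // the N-byte payload
    offset += total
  }
}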

Memory Layout

┌─────────────────────────────────────────────────┐
│ Header (24 bytes)                               │
│ - headerSize: u64                               │
│ - dataOffset: u64                               │
│ - size: u64 (current cursor, updated on commit) │
├─────────────────────────────────────────────────┤
│ Event 1: [u16 size][data][u16 size]             │
├─────────────────────────────────────────────────┤
│ Event 2: [u16 size][data][u16 size]             │
├─────────────────────────────────────────────────┤
│ ...                                             │
└─────────────────────────────────────────────────┘
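In practice log.header exposes these values, but reading them directly would look roughly like this (a sketch that assumes little-endian u64 fields at the offsets shown above):

// Conceptual sketch of the 24-byte header.
function readHeader(buf: Buffer) {
  return {
    headerSize: buf.readBigUInt64LE(0),  // bytes reserved for the header
    dataOffset: buf.readBigUInt64LE(8),  // where the first frame begins
    size: buf.readBigUInt64LE(16),       // committed-bytes cursor, advanced on commit
  }
}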

Concurrency Model

Single Writer, Multiple Readers

  • ONE writer process can call writer.commit() — multiple writers will corrupt data
  • MULTIPLE reader processes can read concurrently via independent iterators
  • NO explicit locking — relies on atomic 64-bit writes on x86/x64

Writers must:

  1. Allocate a frame with writer.allocate(size)
  2. Encode the frame payload (e.g., via Bendec)
  3. Call writer.commit() to make events visible atomically

Readers see:

  • Consistent snapshots (all events up to last commit)
  • No partial events (frames become visible only on commit)
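Because commits are atomic, a reader can tail the log by polling next() and backing off when it returns null (a sketch; the 10 ms delay is an arbitrary choice):

import { createSharedLog } from 'shmio'

const log = createSharedLog({ path: '/dev/shm/myapp-events', writable: false })
const iterator = log.createIterator()

async function tail(handle: (frame: Buffer) => void): Promise<never> {
  for (;;) {
    const frame = iterator.next()
    if (frame !== null) {
      handle(frame)  // frames returned here are always complete and committed
    } else {
      // Nothing new has been committed yet; wait briefly before polling again.
      await new Promise((resolve) => setTimeout(resolve, 10))
    }
  }
}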

Debug Mode

Enable comprehensive frame validation during development:

# Enable debug mode
SHMIO_DEBUG=true node your-app.js

# Run tests with validation
SHMIO_DEBUG=true npm test

Debug mode validates:

  • Frame size sanity (must be between 4 bytes and the buffer size)
  • Symmetric frame integrity (leading size == trailing size)
  • Position-aware validation (avoids false positives)

Performance: Zero overhead in production (disabled by default), ~2-5% overhead when enabled.

See DEBUG.md for complete documentation.
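Checks can also be requested per call through the debugChecks option on allocate() and nextBatch(), independent of the environment variable (a sketch; treating the per-call flag as overriding the log-level setting is an assumption):

// `log`, `bendec`, and `iterator` as set up in the Quick Start examples.
const debug = process.env.SHMIO_DEBUG === 'true'

// Per-call validation on the write side...
const frame = log.writer!.allocate(bendec.getSize('LogEvent'), { debugChecks: debug })

// ...and on the read side.
const batch = iterator.nextBatch({ maxMessages: 32, debugChecks: debug })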

Use Cases

Event Sourcing

Perfect for append-only event logs:

// Writer: Event producer
const path = '/dev/shm/event-log'
const bendec = createEventBendec() // your Bendec schema helper
const writerLog = createSharedLog({ path, capacityBytes: 64n * 1024n * 1024n, writable: true })
const messageSize = bendec.getSize('Event')

function recordEvent(type: string, data: Buffer) {
  const frame = writerLog.writer!.allocate(messageSize)
  bendec.encodeAs({
    type,
    timestamp: BigInt(Date.now()),
    data,
  }, 'Event', frame)
  writerLog.writer!.commit()
}

// Reader: Event consumer
const readerLog = createSharedLog({ path, capacityBytes: 64n * 1024n * 1024n, writable: false })
const iterator = readerLog.createIterator()
for (const buffer of iterator.nextBatch({ maxMessages: 32 })) {
  const event = bendec.decodeAs(buffer, 'Event')
  processEvent(event)
}

System Monitoring

Real-time log streaming between processes:

// Logger process
const path = '/dev/shm/log-stream'
const bendec = createLogBendec()
const writerLog = createSharedLog({ path, capacityBytes: 32n * 1024n * 1024n, writable: true })
const writer = writerLog.writer!

function logEntry(level: number, message: string) {
  const frame = writer.allocate(bendec.getSize('LogEntry'))
  bendec.encodeAs({
    timestamp: BigInt(Date.now()),
    level,
    message,
  }, 'LogEntry', frame)
  writer.commit()
}

// Monitor process
const readerLog = createSharedLog({ path, capacityBytes: 32n * 1024n * 1024n, writable: false })
const iterator = readerLog.createIterator()
for (const buffer of iterator.nextBatch({ maxMessages: 100 })) {
  const entry = bendec.decodeAs(buffer, 'LogEntry')
  console.log(`[${entry.level}] ${entry.message}`)
}

Inter-Process Communication

High-speed message passing:

// Producer
const path = '/dev/shm/ipc-channel'
const bendec = createMessageBendec()
const producerLog = createSharedLog({ path, capacityBytes: 8n * 1024n * 1024n, writable: true })
const writer = producerLog.writer!

for (let i = 0; i < 1000; i++) {
  const frame = writer.allocate(bendec.getSize('Message'))
  bendec.encodeAs({
    id: i,
    payload: generateData(),
  }, 'Message', frame)
}
writer.commit()  // Batch commit for performance

// Consumer
const consumerLog = createSharedLog({ path, capacityBytes: 8n * 1024n * 1024n, writable: false })
const iterator = consumerLog.createIterator()
for (const buffer of iterator.nextBatch()) {
  const msg = bendec.decodeAs(buffer, 'Message')
  handleMessage(msg)
}

Performance

Benchmarks

On modern hardware (Intel i7, NVMe SSD):

  • Write throughput: ~250–360k events/sec (64–256 byte payloads)
  • Read throughput: ~400k events/sec (64-byte batched reads)
  • Latency: ~2.8–3.7 µs per event (write+commit), ~2.5 µs per event (read batch)

To run the performance benchmark yourself:

npm run build
node dist/tests/perf/bench.js

The benchmark includes:

  • Write performance across payload sizes (16–1024 bytes)
  • Batched read performance
  • Throughput in events/sec and MB/sec
  • Per-event latency in microseconds and nanoseconds

Best Practices

  1. Batch commits - Group multiple writes before calling commit()
  2. Size buffers appropriately - Balance memory usage vs overflow handling
  3. Use overlap wisely - Should be >= your largest message size
  4. Monitor usage - Compare committedSize() against capacityBytes to avoid exhaustion
  5. Enable debug mode in dev - Catches issues early with zero production cost

Error Handling

try {
  const frame = log.writer!.allocate(bendec.getSize('Event'))
  // ... write event data
  log.writer!.commit()
} catch (err) {
  const message = err instanceof Error ? err.message : String(err)
  if (message.includes('Shared memory exhausted')) {
    // Handle memory full - rotate files or wait for readers
  } else if (message.includes('ERR_SHM_FRAME_CORRUPT')) {
    // Debug mode caught corruption
  } else {
    // Other errors
  }
}

Requirements

  • Node.js 12.x or higher
  • Linux or macOS (mmap support)
  • bendec for serialization
  • rxjs for streaming (optional)

Building

# Install dependencies
npm install

# Build TypeScript and native addon
npm run build

# Run tests
npm test

# Run tests with debug mode
SHMIO_DEBUG=true npm test

Limitations

  1. Platform-specific - Linux/macOS only (requires POSIX mmap)
  2. Single writer - Multiple writers will corrupt data
  3. No automatic cleanup - File remains until explicitly deleted
  4. Fixed size - Cannot grow after creation
  5. No built-in compression - Store data as-is

Troubleshooting

"Shared memory exhausted"

Increase the log capacity (capacityBytes) when creating the file:

const log = createSharedLog({
  path: '/dev/shm/myapp-events',
  capacityBytes: 32n * 1024n * 1024n,
  writable: true,
})

Frame corruption in debug mode

Usually indicates:

  • Multiple writers (violates single writer requirement)
  • Manual buffer manipulation
  • Process crashed mid-write

File already exists

Delete stale files:

rm /dev/shm/myapp-events

Or handle in code:

import { unlinkSync } from 'fs'
try {
  unlinkSync('/dev/shm/myapp-events')
} catch (err) {
  // Ignore if the file doesn't exist
}

License

MIT

Author

Rafal Okninski [email protected]

Repository

https://github.com/hpn777/shmio