shmio
High-performance shared memory library for Node.js with append-only log semantics, designed for event sourcing and inter-process communication.
Features
- Memory-mapped files with automatic buffer management
- Append-only log with atomic commits
- Symmetric frame headers for bidirectional iteration
- Zero-copy frame access and mutation
- Native N-API iterator with configurable batch reads
- Single writer / multi-reader concurrency model
- Optional debug checks with zero production overhead
- TypeScript support with full type definitions
Installation
npm install shmio

Quick Start
Writer Process
import { createSharedLog } from 'shmio'
import { Bendec } from 'bendec'
const bendec = new Bendec({
  types: [{
    name: 'LogEvent',
    fields: [
      { name: 'timestamp', type: 'u64' },
      { name: 'level', type: 'u8' },
      { name: 'message', type: 'string' },
    ],
  }],
})

const log = createSharedLog({
  path: '/dev/shm/myapp-events',
  capacityBytes: 16n * 1024n * 1024n, // 16 MiB
  writable: true,
  debugChecks: process.env.SHMIO_DEBUG === 'true',
})

const writer = log.writer!
const frameSize = bendec.getSize('LogEvent')
const frame = writer.allocate(frameSize)

bendec.encodeAs({
  timestamp: BigInt(Date.now()),
  level: 1,
  message: 'Application started',
}, 'LogEvent', frame)

writer.commit()
log.close()

Reader Process
import { createSharedLog } from 'shmio'
import { Bendec } from 'bendec'
const bendec = new Bendec({ /* same schema as writer */ })
const log = createSharedLog({
  path: '/dev/shm/myapp-events',
  writable: false,
})

const iterator = log.createIterator()
const batch = iterator.nextBatch({ maxMessages: 32 })

for (const buffer of batch) {
  const event = bendec.decodeAs(buffer, 'LogEvent')
  console.log(event)
}

iterator.close()
log.close()

API
createSharedLog(options)
Creates (or opens) a memory-mapped append-only log. Options:
createSharedLog({
  path: string,                    // File path (/dev/shm/name for shared memory)
  capacityBytes?: number | bigint, // Desired file size when creating (required if writable=true; optional for read-only)
  writable: boolean,               // Enable writer support
  debugChecks?: boolean,           // Optional integrity checks for writer + iterator
})

Returns a SharedLog with:
- header — a mutable Bendec wrapper exposing headerSize, dataOffset, and the current size cursor.
- createIterator(options?) — opens a new native iterator. Pass { startCursor: bigint } to resume from a stored position (see the sketch below).
- writer — available when writable: true. Use it to append frames atomically.
- close() — release the underlying file descriptor and mapping.
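For example, a reader that resumes from where it previously stopped can persist the iterator cursor between runs. A minimal sketch; the cursor file location is illustrative:

import { createSharedLog } from 'shmio'
import { existsSync, readFileSync, writeFileSync } from 'fs'

const CURSOR_FILE = '/tmp/myapp-events.cursor' // hypothetical persistence location

const log = createSharedLog({ path: '/dev/shm/myapp-events', writable: false })

// Resume from the stored cursor if present, otherwise start from the beginning.
const iterator = existsSync(CURSOR_FILE)
  ? log.createIterator({ startCursor: BigInt(readFileSync(CURSOR_FILE, 'utf8')) })
  : log.createIterator()

for (const buffer of iterator.nextBatch({ maxMessages: 64 })) {
  // ... decode and handle each frame
}

// Save the cursor so the next run continues from here.
writeFileSync(CURSOR_FILE, iterator.cursor().toString())
iterator.close()
log.close()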
ShmIterator
Native iterator instances returned by createIterator() expose:
- next() — returns the next frame as a Buffer, or null when no new data is committed (see the tailing sketch below).
- nextBatch({ maxMessages, maxBytes, debugChecks }) — pulls multiple frames in one call.
- cursor() — current read cursor (as bigint). Persist this to resume later.
- committedSize() — total number of committed bytes visible to readers.
- seek(position) — jump to an absolute cursor position.
- close() — release underlying native resources.
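A minimal tailing loop built on next(); the polling interval and the handleFrame callback are illustrative:

const iterator = log.createIterator()

// Drain everything committed so far, then poll again; next() returns null when caught up.
const timer = setInterval(() => {
  let frame = iterator.next()
  while (frame !== null) {
    handleFrame(frame) // hypothetical application callback
    frame = iterator.next()
  }
}, 10)

// On shutdown: clearInterval(timer); iterator.close(); log.close()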
ShmWriter
When the log is writable, log.writer exposes:
- allocate(size, { debugChecks }) — reserves a frame buffer for writing.
- commit() — atomically publishes all allocated frames since the previous commit (see the batching sketch below).
- close() — releases writer resources.
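Several frames can be allocated and encoded before a single commit publishes them together. A minimal sketch reusing the Quick Start writer setup; pendingEvents is an illustrative application array:

// Encode one frame per event, then make the whole batch visible atomically.
for (const event of pendingEvents) {
  const frame = writer.allocate(bendec.getSize('LogEvent'))
  bendec.encodeAs(event, 'LogEvent', frame)
}
writer.commit()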
Architecture
Frame Structure
Each message has symmetric headers for bidirectional iteration:
┌─────────────┬──────────────────┬─────────────┐
│ Leading u16 │ Message Data │ Trailing u16│
│ (size) │ (variable len) │ (size) │
└─────────────┴──────────────────┴─────────────┘
    2 bytes         N bytes         2 bytes

Both size fields contain the total frame size (N + 4 bytes). This enables:
- Forward iteration (read leading size, skip forward)
- Backward iteration (read trailing size, skip backward)
- Integrity validation (compare both sizes)
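For illustration, raw frames can also be walked directly, outside the native iterator. A minimal sketch, assuming the u16 size fields are stored little-endian:

// Read one frame starting at `offset`; return the payload and the next frame's offset.
function readFrameForward(data: Buffer, offset: number) {
  const frameSize = data.readUInt16LE(offset)              // leading size = N + 4
  const payload = data.subarray(offset + 2, offset + frameSize - 2)
  const trailing = data.readUInt16LE(offset + frameSize - 2)
  if (trailing !== frameSize) throw new Error('corrupt frame')
  return { payload, nextOffset: offset + frameSize }
}

// Read the frame that ends at `end` (exclusive); return the payload and the previous frame's end.
function readFrameBackward(data: Buffer, end: number) {
  const frameSize = data.readUInt16LE(end - 2)             // trailing size = N + 4
  const payload = data.subarray(end - frameSize + 2, end - 2)
  return { payload, prevEnd: end - frameSize }
}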
Memory Layout
┌──────────────────────────────────────────────────────┐
│ Header (24 bytes) │
│ - headerSize: u64 │
│ - dataOffset: u64 │
│ - size: u64 (current cursor, updated on commit) │
├──────────────────────────────────────────────────────┤
│ Event 1: [u16 size][data][u16 size] │
├──────────────────────────────────────────────────────┤
│ Event 2: [u16 size][data][u16 size] │
├──────────────────────────────────────────────────────┤
│ ... │
└──────────────────────────────────────────────────────┘

Concurrency Model
Single Writer, Multiple Readers
- ONE writer process can call writer.commit() — multiple writers will corrupt data
- MULTIPLE reader processes can read concurrently via independent iterators
- NO explicit locking — relies on atomic 64-bit writes on x86/x64
Writers must:
- Allocate a frame with writer.allocate(size)
- Encode the frame payload (e.g., via Bendec)
- Call writer.commit() to make events visible atomically
Readers see:
- Consistent snapshots (all events up to last commit)
- Never see partial events
Debug Mode
Enable comprehensive frame validation during development:
# Enable debug mode
SHMIO_DEBUG=true node your-app.js
# Run tests with validation
SHMIO_DEBUG=true npm test

Debug mode validates:
- Frame size sanity (must be between 4 bytes and the buffer size)
- Symmetric frame integrity (leading size == trailing size)
- Position-aware validation (avoids false positives)
Performance: Zero overhead in production (disabled by default), ~2-5% overhead when enabled.
See DEBUG.md for complete documentation.
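Debug checks can also be requested per call on the reader side via the nextBatch option listed above; a minimal sketch:

// Ask the iterator to validate symmetric frame headers for this batch of reads.
const batch = iterator.nextBatch({ maxMessages: 32, debugChecks: true })
for (const buffer of batch) {
  // ... decode and handle each frame
}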
Use Cases
Event Sourcing
Perfect for append-only event logs:
// Writer: Event producer
const path = '/dev/shm/event-log'
const bendec = createEventBendec() // your Bendec schema helper
const writerLog = createSharedLog({ path, capacityBytes: 64n * 1024n * 1024n, writable: true })
const messageSize = bendec.getSize('Event')
function recordEvent(type: string, data: Buffer) {
  const frame = writerLog.writer!.allocate(messageSize)
  bendec.encodeAs({
    type,
    timestamp: BigInt(Date.now()),
    data,
  }, 'Event', frame)
  writerLog.writer!.commit()
}
// Reader: Event consumer
const readerLog = createSharedLog({ path, capacityBytes: 64n * 1024n * 1024n, writable: false })
const iterator = readerLog.createIterator()
for (const buffer of iterator.nextBatch({ maxMessages: 32 })) {
  const event = bendec.decodeAs(buffer, 'Event')
  processEvent(event)
}

System Monitoring
Real-time log streaming between processes:
// Logger process
const path = '/dev/shm/log-stream'
const bendec = createLogBendec()
const writerLog = createSharedLog({ path, capacityBytes: 32n * 1024n * 1024n, writable: true })
const writer = writerLog.writer!
function logEntry(level: number, message: string) {
  const frame = writer.allocate(bendec.getSize('LogEntry'))
  bendec.encodeAs({
    timestamp: BigInt(Date.now()),
    level,
    message,
  }, 'LogEntry', frame)
  writer.commit()
}
// Monitor process
const readerLog = createSharedLog({ path, capacityBytes: 32n * 1024n * 1024n, writable: false })
const iterator = readerLog.createIterator()
for (const buffer of iterator.nextBatch({ maxMessages: 100 })) {
  const entry = bendec.decodeAs(buffer, 'LogEntry')
  console.log(`[${entry.level}] ${entry.message}`)
}

Inter-Process Communication
High-speed message passing:
// Producer
const path = '/dev/shm/ipc-channel'
const bendec = createMessageBendec()
const producerLog = createSharedLog({ path, capacityBytes: 8n * 1024n * 1024n, writable: true })
const writer = producerLog.writer!
for (let i = 0; i < 1000; i++) {
  const frame = writer.allocate(bendec.getSize('Message'))
  bendec.encodeAs({
    id: i,
    payload: generateData(),
  }, 'Message', frame)
}
writer.commit() // Batch commit for performance
// Consumer
const consumerLog = createSharedLog({ path, capacityBytes: 8n * 1024n * 1024n, writable: false })
const iterator = consumerLog.createIterator()
for (const buffer of iterator.nextBatch()) {
  const msg = bendec.decodeAs(buffer, 'Message')
  process(msg)
}

Performance
Benchmarks
On modern hardware (Intel i7, NVMe SSD):
- Write throughput: ~250–360k events/sec (64–256 byte payloads)
- Read throughput: ~400k events/sec (64-byte batched reads)
- Latency: ~2.8–3.7 µs per event (write+commit), ~2.5 µs per event (read batch)
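As a rough sketch of how write throughput can be measured in application code (this is not the bundled benchmark; the schema, payload, and iteration count are illustrative):

// Assumes `writer` and `bendec` are set up as in the Quick Start writer example.
const ITERATIONS = 100_000
const frameSize = bendec.getSize('LogEvent')

const start = process.hrtime.bigint()
for (let i = 0; i < ITERATIONS; i++) {
  const frame = writer.allocate(frameSize)
  bendec.encodeAs({ timestamp: BigInt(Date.now()), level: 1, message: 'x' }, 'LogEvent', frame)
  writer.commit()
}
const elapsedNs = Number(process.hrtime.bigint() - start)

console.log(`${Math.round(ITERATIONS / (elapsedNs / 1e9))} events/sec`)
console.log(`${(elapsedNs / ITERATIONS / 1000).toFixed(2)} µs/event`)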
To run the performance benchmark yourself:
npm run build
node dist/tests/perf/bench.js

The benchmark includes:
- Write performance across payload sizes (16–1024 bytes)
- Batched read performance
- Throughput in events/sec and MB/sec
- Per-event latency in microseconds and nanoseconds
Best Practices
- Batch commits - Group multiple writes before calling commit()
- Size buffers appropriately - Balance memory usage vs overflow handling
- Use overlap wisely - Should be >= your largest message size
- Monitor memory - Check getSize() to avoid exhaustion (see the sketch below)
- Enable debug mode in dev - Catches issues early with zero production cost
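One way to monitor usage is to compare the committed byte count (read via an iterator's committedSize()) against the capacity configured at creation. A minimal sketch; the capacity constant, threshold, and use of a dedicated iterator are illustrative assumptions:

// Assumes `log` is an open SharedLog; CAPACITY mirrors the capacityBytes used at creation.
const CAPACITY = 16 * 1024 * 1024
const monitor = log.createIterator()

const usedBytes = Number(monitor.committedSize())
if (usedBytes > CAPACITY * 0.9) {
  // Nearly full: rotate to a new file or slow the writer down.
  console.warn(`shared log is ${((usedBytes / CAPACITY) * 100).toFixed(1)}% full`)
}
monitor.close()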
Error Handling
try {
  const frame = log.writer!.allocate(bendec.getSize('Event'))
  // ... write event data
  log.writer!.commit()
} catch (err) {
  const message = err instanceof Error ? err.message : String(err)
  if (message.includes('Shared memory exhausted')) {
    // Handle memory full - rotate files or wait for readers
  } else if (message.includes('ERR_SHM_FRAME_CORRUPT')) {
    // Debug mode caught corruption
  } else {
    // Other errors
  }
}

Requirements
- Node.js 12.x or higher
- Linux or macOS (mmap support)
- bendec for serialization
- rxjs for streaming (optional)
Building
# Install dependencies
npm install
# Build TypeScript and native addon
npm run build
# Run tests
npm test
# Run tests with debug mode
SHMIO_DEBUG=true npm test

Limitations
- Platform-specific - Linux/macOS only (requires POSIX mmap)
- Single writer - Multiple writers will corrupt data
- No automatic cleanup - File remains until explicitly deleted
- Fixed size - Cannot grow after creation
- No built-in compression - Store data as-is
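Because the file cannot grow and is never cleaned up automatically, a common pattern is to rotate to a fresh log once the current one fills up. A hypothetical sketch; the naming scheme, capacity, and retry logic are illustrative and not part of the library:

import { createSharedLog } from 'shmio'

const CAPACITY = 16n * 1024n * 1024n
let generation = 0
let log = createSharedLog({ path: `/dev/shm/myapp-events.${generation}`, capacityBytes: CAPACITY, writable: true })

// Append one frame, rotating to a new file when the current one is exhausted.
function appendFrame(frameSize: number, encode: (frame: Buffer) => void) {
  try {
    encode(log.writer!.allocate(frameSize))
    log.writer!.commit()
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err)
    if (!message.includes('Shared memory exhausted')) throw err
    // Current file is full: close it, start a new generation, and retry once.
    log.close()
    generation += 1
    log = createSharedLog({ path: `/dev/shm/myapp-events.${generation}`, capacityBytes: CAPACITY, writable: true })
    encode(log.writer!.allocate(frameSize))
    log.writer!.commit()
  }
}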
Troubleshooting
"Shared memory exhausted"
Increase the log capacity (capacityBytes) when creating the file, or rotate to a new file:
const log = createSharedLog({
  path: '/dev/shm/myapp-events',
  capacityBytes: 32n * 1024n * 1024n,
  writable: true,
})

Frame corruption in debug mode
Usually indicates:
- Multiple writers (violates single writer requirement)
- Manual buffer manipulation
- Process crashed mid-write
File already exists
Delete stale files:
rm /dev/shm/myapp-events

Or handle in code:
const fs = require('fs')
try {
  fs.unlinkSync('/dev/shm/myapp-events')
} catch (err) {
  // Ignore if doesn't exist
}

Related Projects
- bendec - Binary encoder/decoder for schemas
- node-addon-api - N-API wrapper used for mmap
License
MIT
Author
Rafal Okninski [email protected]
Repository
https://github.com/hpn777/shmio
