
@ephemeral-broker/core

v1.0.5


Ephemeral Broker

Fast, secure, ephemeral IPC over pipes. Share secrets, tokens, and small state between parallel processes without touching disk or opening ports. Cleans itself up automatically when your process exits.

Production-ready for same-host use. Designed for test coordination, parallel workers, and ephemeral secret distribution.

Core Value Props

  1. One Thing Well — A temporary KV/lease store over pipes. That's it.
  2. Zero Dependencies — Core uses only Node built-ins.
  3. Security First — HMAC auth, size limits, TTL required by default.
  4. Production-Ready — Bounded memory, metrics, health checks, graceful shutdown.
  5. Cross-Platform — Works the same on Mac, Linux, and Windows.

Why

Most modern dev/test environments run into the same problems:

  • Secrets on disk → API keys, STS creds, and OAuth tokens end up written to .env or cache files.
  • Parallel worker collisions → WDIO, Playwright, Jest, etc. spawn many workers with no safe way to share ephemeral state.
  • Lifecycle pollution → bootstrap state lingers after jobs, causing flaky tests and security risks.

Ephemeral Broker solves this:

  • Starts before your process.
  • Exposes a random local pipe (/tmp/…sock or \\.\pipe\…).
  • Brokers secrets/state in memory only.
  • Wipes itself clean on exit.
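The "random local pipe" naming above can be sketched as follows. This is an illustrative helper, not the library's actual code; the real pipe names may differ, but the shape (`/tmp/…sock` on Unix, `\\.\pipe\…` on Windows) is what the broker documents.

```javascript
import os from 'node:os'
import path from 'node:path'
import crypto from 'node:crypto'

// Generate a fresh, unguessable pipe path per run (illustrative sketch).
function randomPipePath() {
  const id = crypto.randomBytes(8).toString('hex')
  return process.platform === 'win32'
    ? `\\\\.\\pipe\\broker-${id}` // Windows named pipe
    : path.join(os.tmpdir(), `broker-${id}.sock`) // Unix domain socket
}
```

Because the name is regenerated on every run, a stale client from a previous run can never accidentally connect to a new broker.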

Install

npm install --save-dev ephemeral-broker

Quickstart

1. Spawn broker with your command

npx ephemeral-broker start -- pnpm test

Broker:

  • generates a random pipe,
  • exports it to child as EPHEMERAL_PIPE,
  • spawns your command,
  • exits + wipes memory when done.

2. Use the client

import { Client } from 'ephemeral-broker'

const client = new Client(process.env.EPHEMERAL_PIPE)

// Set a value with TTL
await client.set('foo', 'bar', 60000)
console.log(await client.get('foo')) // "bar"

// Lease tokens per worker
const token = await client.lease('publisher-api', process.env.WORKER_ID)

3. With adapter (WDIO example)

import { withBrokerTokens } from '@ephemeral-broker/wdio'

export const config = withBrokerTokens(
  {
    tokens: {
      publisher: 'publisher-api-token',
      admin: 'admin-api-token'
    },
    envVars: true
  },
  baseConfig
)

Production Use

Same-Host Only

Ephemeral Broker is production-ready for same-host, same-pod coordination:

Use in production for:

  • Parallel test worker coordination (WDIO, Playwright, Jest)
  • Ephemeral secret distribution (API tokens, STS credentials)
  • Short-lived locks and rendezvous (< 10 minutes)
  • In-process state sharing between parent and child processes

Not suitable for:

  • Cross-host or cross-pod coordination → Use Redis/etcd
  • Long-term persistent storage → Use Redis/database
  • Large datasets (>1GB) → Use Redis/filesystem
  • Mission-critical data that must survive crashes → Use Redis/database
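The lease-based coordination listed above (per-worker tokens, short-lived locks) can be modeled with a minimal in-memory sketch. This is an illustrative model of the semantics, not the broker's implementation — the real broker serves these operations atomically over the pipe.

```javascript
// Illustrative model of per-worker lease semantics with TTL.
class LeaseStore {
  constructor() {
    this.leases = new Map() // workerId -> { key, value, expiresAt }
  }

  // A worker re-leasing before its TTL expires gets its existing value back.
  lease(key, workerId, value, ttlMs) {
    const existing = this.leases.get(workerId)
    if (existing && existing.expiresAt > Date.now()) return existing.value
    this.leases.set(workerId, { key, value, expiresAt: Date.now() + ttlMs })
    return value
  }

  release(workerId) {
    this.leases.delete(workerId)
  }
}
```

The TTL on every lease is what makes crashed workers safe: a worker that dies without calling `release` simply lets its lease expire.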

Configuration

Broker and client options can be set via constructor or environment variables:

Broker:

const broker = new Broker({
  defaultTTL: 600000, // BROKER_DEFAULT_TTL (10 minutes)
  maxItems: 10000, // BROKER_MAX_ITEMS
  maxValueSize: 262144, // BROKER_MAX_VALUE_SIZE (256KB)
  maxRequestSize: 1048576, // BROKER_MAX_REQUEST_SIZE (1MB)
  sweeperInterval: 30000, // BROKER_SWEEPER_INTERVAL (30s)
  requireTTL: true, // BROKER_REQUIRE_TTL
  debug: false, // BROKER_DEBUG
  secret: 'hmac-secret', // BROKER_SECRET
  compression: true, // BROKER_COMPRESSION
  compressionThreshold: 1024, // BROKER_COMPRESSION_THRESHOLD
  logLevel: 'info', // BROKER_LOG_LEVEL (debug, info, warn, error)
  structuredLogging: false, // BROKER_STRUCTURED_LOGGING
  idleTimeout: 0, // BROKER_IDLE_TIMEOUT (disabled by default)
  heartbeatInterval: 0 // BROKER_HEARTBEAT_INTERVAL (disabled by default)
})

Client:

const client = new Client(pipePath, {
  timeout: 5000, // CLIENT_TIMEOUT
  debug: false, // CLIENT_DEBUG
  allowNoTtl: false, // CLIENT_ALLOW_NO_TTL
  secret: 'hmac-secret', // CLIENT_SECRET
  compression: true, // CLIENT_COMPRESSION
  compressionThreshold: 1024 // CLIENT_COMPRESSION_THRESHOLD
})

Environment variable usage:

# Configure via environment
export BROKER_DEFAULT_TTL=600000
export BROKER_MAX_ITEMS=10000
export BROKER_DEBUG=true
export CLIENT_TIMEOUT=10000

Constructor arguments take precedence over environment variables:

const broker = new Broker() // Uses env vars
const broker2 = new Broker({ debug: false }) // Overrides BROKER_DEBUG
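The precedence rule can be sketched as a small resolver: constructor option first, then environment variable, then built-in default. This is an illustrative sketch, not the library's actual option-resolution code.

```javascript
// Resolve one option: constructor value > env var > default (illustrative).
function resolveOption(optValue, envName, fallback) {
  if (optValue !== undefined) return optValue
  const raw = process.env[envName]
  if (raw !== undefined) {
    if (raw === 'true') return true
    if (raw === 'false') return false
    const n = Number(raw)
    return Number.isNaN(n) ? raw : n // numeric env vars arrive as strings
  }
  return fallback
}
```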

Kubernetes Deployment

Run as an in-process library (not a sidecar):

// app.js
import { Broker, Client } from '@ephemeral-broker/core'

const broker = new Broker({ debug: true })
await broker.start()

// Your app uses process.env.EPHEMERAL_PIPE
// Workers inherit the pipe path automatically

Readiness probe (exec):

readinessProbe:
  exec:
    command:
      [
        'node',
        '-e',
        'import("@ephemeral-broker/core").then(m => m.checkReadiness()).then(r => process.exit(r ? 0 : 1))'
      ]
  initialDelaySeconds: 2
  periodSeconds: 5

Or create a probe script:

// probe.js
import { checkReadiness } from '@ephemeral-broker/core'
const ready = await checkReadiness()
process.exit(ready ? 0 : 1)

readinessProbe:
  exec:
    command: ['node', 'probe.js']
  initialDelaySeconds: 2
  periodSeconds: 5

Resource limits:

resources:
  limits:
    memory: '512Mi' # Broker uses ~1.5KB per item
  requests:
    memory: '256Mi'

Monitoring

Health checks:

const health = await client.health()
// {
//   ok: true,
//   status: 'healthy' | 'degraded',
//   capacity: {
//     items: 42,
//     maxItems: 10000,
//     utilization: 0.0042,
//     nearCapacity: false,
//     atCapacity: false,
//     warning: null | 'near_capacity' | 'at_capacity'
//   },
//   memory: { rss, heapUsed, heapTotal },
//   connections: { inFlight, draining }
// }

Stats and metrics:

const stats = await client.stats()
// { items, leases, capacity, memory, uptime }

const metrics = await client.metrics()
// Prometheus format including:
// - ephemeral_broker_capacity_items
// - ephemeral_broker_capacity_max
// - ephemeral_broker_capacity_utilization

Recommended alerts:

# Prometheus AlertManager rules
groups:
  - name: ephemeral-broker
    rules:
      # Capacity alerts
      - alert: BrokerNearCapacity
        expr: ephemeral_broker_capacity_utilization > 0.9
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: 'Broker approaching capacity ({{ $value | humanizePercentage }})'
          description: 'Broker has {{ $value | humanizePercentage }} capacity utilization'

      - alert: BrokerAtCapacity
        expr: ephemeral_broker_capacity_utilization >= 1.0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: 'Broker at full capacity'
          description: 'Broker is rejecting new items (maxItems reached)'

      # Performance alerts
      - alert: BrokerHighErrorRate
        expr: rate(ephemeral_broker_operations_total{result="error"}[5m]) > 0.1
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: 'Broker error rate high ({{ $value | humanize }} errors/sec)'

      - alert: BrokerSweeperStalled
        expr: rate(ephemeral_broker_expired_total[2m]) == 0 and ephemeral_broker_capacity_items > 100
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: 'TTL sweeper may be stalled'
          description: 'No expired items swept in 2 minutes despite high item count'

      # Memory alerts
      - alert: BrokerMemoryHigh
        expr: process_resident_memory_bytes > 512000000 # 512MB
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: 'Broker memory usage high ({{ $value | humanize1024 }}B)'

      # Health alerts
      - alert: BrokerDown
        expr: up{job="ephemeral-broker"} == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: 'Broker is down'

Grafana dashboard queries:

# Capacity utilization over time
ephemeral_broker_capacity_utilization

# Items vs max capacity
ephemeral_broker_capacity_items / ephemeral_broker_capacity_max

# Operation success rate
rate(ephemeral_broker_operations_total{result="success"}[5m]) /
rate(ephemeral_broker_operations_total[5m])

# Compression efficiency
ephemeral_broker_compression_ratio

# Expired items rate
rate(ephemeral_broker_expired_total{type="items"}[5m])

Capacity warnings:

Broker automatically logs warnings when ≥90% full:

[broker] Broker approaching capacity (items=9000 maxItems=10000 utilization=0.9)

Health endpoint returns status: 'degraded' when at 100% capacity.
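The thresholds above can be expressed as a pure function, useful if you want to mirror the broker's warning logic in your own monitoring code. This is a sketch of the documented behavior (≥90% → near capacity, 100% → at capacity), not the broker's internal code.

```javascript
// Classify capacity per the documented thresholds (illustrative).
function capacityWarning(items, maxItems) {
  const utilization = items / maxItems
  if (utilization >= 1) return 'at_capacity'
  if (utilization >= 0.9) return 'near_capacity'
  return null
}
```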

Deployment Patterns

Pattern 1: Test Coordination (WDIO/Playwright)

// wdio.conf.js or playwright.config.js
import { Broker, Client } from '@ephemeral-broker/core'

export async function globalSetup() {
  const broker = new Broker({
    maxItems: 1000,
    defaultTTL: 600000, // 10 minutes
    debug: process.env.CI === 'true'
  })

  await broker.start()

  // Seed shared test data
  const client = new Client(process.env.EPHEMERAL_PIPE)
  await client.set('apiToken', process.env.API_TOKEN, 600000)
  await client.set('testBaseUrl', process.env.TEST_URL, 600000)
}

Pattern 2: CI/CD Pipeline Coordination

// build.js - Main build process
import { Broker, Client } from '@ephemeral-broker/core'

const broker = new Broker({ maxItems: 500 })
await broker.start()
const client = new Client(process.env.EPHEMERAL_PIPE)

// Share build artifacts info between parallel steps
await client.set('buildId', buildId, 3600000)
await client.set('artifactUrls', urls, 3600000)

// Spawn parallel workers
broker.spawn('node', ['deploy-worker.js'])

Pattern 3: Kubernetes Job Coordination

apiVersion: batch/v1
kind: Job
metadata:
  name: test-suite
spec:
  template:
    spec:
      containers:
        - name: test-runner
          image: myapp:test
          env:
            - name: BROKER_MAX_ITEMS
              value: '5000'
            - name: BROKER_DEFAULT_TTL
              value: '600000'
            - name: BROKER_DEBUG
              value: 'true'
          command: ['node', 'run-tests.js']

Pattern 4: Development Hot Reload

// dev-server.js
import { Broker, Client } from '@ephemeral-broker/core'

const broker = new Broker({ maxItems: 100 })
await broker.start()
const client = new Client(process.env.EPHEMERAL_PIPE)

// Share rebuild state across dev tools
await client.set('lastBuild', Date.now(), 60000)
await client.set('changedFiles', changedFiles, 60000)

// Watch processes can query state
const lastBuild = await client.get('lastBuild')

Performance Tuning

Optimize for your workload:

// High-throughput (many small values)
const broker = new Broker({
  maxItems: 50000,
  maxValueSize: 10240, // 10KB
  compression: false, // Skip compression overhead
  sweeperInterval: 60000 // Sweep less frequently
})

// Large values (API responses, fixtures)
const broker = new Broker({
  maxItems: 1000,
  maxValueSize: 1048576, // 1MB
  compression: true,
  compressionThreshold: 512 // Compress aggressively
})

// Low latency (real-time coordination)
const client = new Client(pipePath, {
  timeout: 1000, // Fail fast
  compression: false // Skip compression
})

// Batch operations (CI/CD)
const client = new Client(pipePath, {
  timeout: 10000, // Allow retries
  compression: true
})

Memory planning:

// Approximate memory per item: 1.5KB overhead + value size
// For 10,000 items @ 10KB average value:
// Memory = 10,000 × (1.5KB + 10KB) = ~115MB

const broker = new Broker({
  maxItems: 10000,
  maxValueSize: 10240
})
// Set K8s memory limit to 256Mi (leaves 140MB for Node.js heap)
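The estimate above generalizes to a one-line formula: items × (per-item overhead + average value size). A small helper, assuming the ~1.5KB overhead figure from the benchmarks:

```javascript
// Estimate broker memory in MiB: items * (overhead + avg value size).
// The 1536-byte default overhead comes from the benchmark figure above.
function estimateBrokerMemoryMB(maxItems, avgValueBytes, overheadBytes = 1536) {
  return (maxItems * (overheadBytes + avgValueBytes)) / (1024 * 1024)
}

// 10,000 items at 10KB average works out to roughly 112 MiB,
// consistent with the ~115MB figure above.
```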

TTL strategies:

// Short-lived secrets (refresh frequently)
await client.set('apiToken', token, 60000) // 1 minute

// Test fixtures (persist for test run)
await client.set('testData', data, 600000) // 10 minutes

// Build metadata (persist for deployment)
await client.set('buildInfo', info, 3600000) // 1 hour

CLI Usage

# Simple: run tests with broker
npx ephemeral-broker start -- pnpm test

# With plugin
epb start --plugin @ephemeral-broker/aws-sts -- pnpm test

# Debug mode
epb start --debug --auth $SECRET -- pnpm test

Client API

class Client {
  constructor(pipe?: string, options?: { secret?: string; timeout?: number })

  get(key: string): Promise<any>
  set(key: string, value: any, ttlMs?: number): Promise<void>
  del(key: string): Promise<void>

  lease(key: string, workerId?: string, ttlMs?: number): Promise<any>
  release(workerId?: string): Promise<void>

  stats(): Promise<{ items: number; leases: number; memory: number; uptime: number }>
  ping(): Promise<number>
  health(): Promise<object>
  metrics(): Promise<string>
}

Adapters & Plugins

  • @ephemeral-broker/wdio – WDIO integration (leases tokens per worker, auto-renew, release on exit)
  • @ephemeral-broker/aws-sts – AWS STS plugin (mint + cache creds in memory)

Upcoming:

  • @ephemeral-broker/playwright
  • @ephemeral-broker/jest
  • @ephemeral-broker/testcafe
  • @ephemeral-broker/rate-limit
  • @ephemeral-broker/mock

Publishing Strategy

Phase 1: Core

npm publish ephemeral-broker

Phase 2: Immediate adapter

npm publish @ephemeral-broker/wdio

Phase 3: Common asks

npm publish @ephemeral-broker/aws-sts
npm publish @ephemeral-broker/rate-limit

Phase 4: Ecosystem

  • @ephemeral-broker/playwright
  • @ephemeral-broker/jest
  • @ephemeral-broker/mock

Usage Patterns

  • Testing: WDIO, Playwright, TestCafe, Jest workers share tokens + fixtures
  • CI/CD: distribute secrets, share build state, coordinate artifacts (same runner)
  • Dev Tools: ESLint/Prettier caches, hot reload flags, monorepo build state
  • Security-Sensitive Apps: OAuth broker, AWS STS, temporary creds

Performance

Tested on Apple M1 Max, Node.js v22.17.0:

| Operation | Ops/sec | P50 (ms) | P95 (ms) | P99 (ms) |
| --------- | ------- | -------- | -------- | -------- |
| SET | 12,285 | 0 | 1 | 1 |
| GET | 26,882 | 0 | 0 | 1 |
| DEL | 26,042 | 0 | 0 | 1 |
| PING | 27,027 | 0 | 0 | 1 |

Memory: ~1,566 bytes per item

See BENCHMARKS.md for details and how to run benchmarks on your system.

Alternatives Comparison

vs Redis

| Feature | Ephemeral-Broker | Redis |
| ------------ | --------------------------------------------- | ------------------------------------- |
| Setup | Zero config, auto-start | Install + daemon setup required |
| Transport | Unix domain sockets / Named pipes | TCP (network stack overhead) |
| Performance | 12-27k ops/sec (same-host IPC) | 50-100k ops/sec (TCP) |
| Ports | None (uses pipes) | Requires port (default 6379) |
| Security | Filesystem permissions (0700) + optional HMAC | Requires auth config + firewall rules |
| Persistence | None (ephemeral only) | Optional (RDB/AOF) |
| Multi-host | ❌ Same-host only | ✅ Network accessible |
| Dependencies | Zero (Node.js built-ins) | Redis server + client library |
| Cleanup | Auto on process exit | Manual or systemd management |
| Use Case | Test coordination, parallel workers | Production caching, queues, pub/sub |

When to use ephemeral-broker:

  • Coordinating parallel test workers (WDIO, Playwright, Jest)
  • Sharing ephemeral secrets without disk writes
  • Zero-setup local development
  • Single-host IPC only

When to use Redis:

  • Production caching with persistence
  • Multi-host distributed systems
  • Advanced data structures (sorted sets, streams)
  • Pub/sub messaging across services

vs Filesystem (temp files)

| Feature | Ephemeral-Broker | Filesystem |
| --------------- | ----------------------------- | ------------------------------ |
| Speed | 12-27k ops/sec (in-memory) | 100-1000 ops/sec (disk I/O) |
| Security | No disk writes | Secrets written to disk |
| Cleanup | Automatic on exit | Manual cleanup required |
| Atomicity | Atomic lease/release | Requires file locking |
| Concurrency | High (parallel clients) | Limited (file lock contention) |
| TTL | Built-in automatic expiration | Manual TTL implementation |
| Race Conditions | No (atomic operations) | Yes (TOCTOU, stale locks) |
| Crash Recovery | Clean (no residue) | Stale files/locks remain |

When to use ephemeral-broker:

  • Sharing secrets that must never touch disk
  • Coordinating parallel workers with leases
  • Atomic operations (counters, flags)
  • Fast in-memory state

When to use filesystem:

  • Large datasets (>1GB)
  • Persistent state needed across runs
  • Legacy code using file-based config
  • Cross-process sharing without dependencies

vs SharedArrayBuffer

| Feature | Ephemeral-Broker | SharedArrayBuffer |
| -------------- | ------------------------------- | ---------------------------------- |
| Data Types | JSON (strings, objects, arrays) | Raw bytes only |
| Serialization | Automatic (JSON) | Manual (DataView, TypedArrays) |
| Process Model | Independent processes | Threads/workers in same process |
| Cross-Platform | ✅ Mac, Linux, Windows | ✅ Browser + Node.js |
| Setup | Simple (import + connect) | Complex (worker setup, Atomics) |
| TTL | Built-in | Manual implementation |
| Lease/Release | Built-in atomic operations | Manual with Atomics.wait/notify |
| Type Safety | Structured data (JSON) | Byte manipulation only |
| Memory Model | Isolated processes | Shared memory with race conditions |

When to use ephemeral-broker:

  • Coordinating separate processes (not threads)
  • Structured data (tokens, config objects)
  • Simple API for key/value + leases
  • Cross-process without shared memory complexity

When to use SharedArrayBuffer:

  • High-frequency updates (>100k ops/sec)
  • Raw binary data (buffers, TypedArrays)
  • Workers in same Node.js process
  • Lock-free algorithms with Atomics

Summary

Ephemeral-Broker is optimized for:

  • ✅ Same-host parallel process coordination
  • ✅ Zero-setup ephemeral state (no daemon)
  • ✅ No disk writes (secrets stay in memory)
  • ✅ No ports (uses Unix sockets / Named pipes)
  • ✅ Automatic cleanup on exit

Not suitable for:

  • ❌ Production caching (use Redis)
  • ❌ Multi-host coordination (use Redis/etcd)
  • ❌ Large datasets >1GB (use filesystem/database)
  • ❌ Ultra-high throughput >100k ops/sec (use SharedArrayBuffer)
  • ❌ Persistent state across runs (use database)

Troubleshooting

Error: EADDRINUSE or stale socket

Symptom: Broker fails to start with Error: listen EADDRINUSE

Cause: A previous broker process crashed and left a socket file in /tmp/ (Unix) or a named pipe handle (Windows).

Solution:

# Unix/Mac: Remove stale socket files
rm -f /tmp/broker-*.sock

# Windows: Named pipes clean up automatically, but check for hung processes
tasklist | findstr node
taskkill /F /PID <process_id>

The broker automatically detects and removes stale sockets on Unix systems (see broker.js:24-38), but if you see this error, manually clean up the socket files.

Error: EPERM (Windows)

Symptom: Permission denied when creating or connecting to named pipes on Windows

Cause: Windows named pipes require consistent elevation context. If the broker runs elevated (admin) but the client doesn't, or vice versa, connections will fail.

Solution:

  1. Run both broker and clients in the same elevation context (both elevated or both normal)
  2. In CI/CD, ensure all processes run with the same permissions
  3. Avoid runas or sudo for individual commands; elevate the entire terminal session instead

Error: Timeouts / ECONNREFUSED

Symptom: Client times out or gets "connection refused" errors

Cause:

  • Broker not started before client connects
  • EPHEMERAL_PIPE environment variable not set or incorrect
  • Broker crashed silently

Solutions:

// 1. Verify EPHEMERAL_PIPE is set
console.log('Pipe:', process.env.EPHEMERAL_PIPE)
// Should output: /tmp/broker-xxxxx.sock (Unix) or \\.\pipe\broker-xxxxx (Windows)

// 2. Increase client timeout (default: 5000ms)
const client = new Client(pipe, { timeout: 10000 })

// 3. Check broker is running
await client.ping() // Should return number (latency in ms)

// 4. Enable debug mode to see connection attempts
const broker = new Broker({ debug: true })
const client = new Client(pipe, { debug: true })

Common causes:

  • Test framework started before globalSetup completed
  • Broker stopped too early (before all workers finished)
  • Using wrong pipe path (hardcoded instead of process.env.EPHEMERAL_PIPE)

Memory Usage Climbing

Symptom: Broker memory usage grows continuously, eventually hitting limits

Cause:

  • TTL not set on keys (data never expires)
  • Too many unique keys being created
  • Large values being stored (exceeds intended use case)

Solutions:

// 1. Always set TTL (broker enforces this by default)
await client.set('key', 'value', 60000) // 60 second TTL

// 2. Monitor memory with stats endpoint
const stats = await client.stats()
console.log('Items:', stats.items)
console.log('Memory:', stats.memory.heapUsed)

// 3. Reduce maxItems limit to prevent unbounded growth
const broker = new Broker({ maxItems: 1000 }) // Default: 10,000

// 4. Reduce sweeper interval to clean up expired items faster
const broker = new Broker({ sweeperInterval: 10000 }) // 10s (default: 30s)

// 5. Set smaller TTLs for temporary data
await client.set('temp-data', value, 5000) // 5 seconds

Error: too_large

Symptom: Client gets too_large error when setting values

Cause: Value or request exceeds size limits

Solution:

// Default limits:
// - maxRequestSize: 1 MB
// - maxValueSize: 256 KB

// Increase limits if needed (not recommended for ephemeral data)
const broker = new Broker({
  maxRequestSize: 5 * 1024 * 1024, // 5 MB
  maxValueSize: 1 * 1024 * 1024 // 1 MB
})

// Or: Split large data into smaller chunks (TTL is still required by default)
const chunks = splitIntoChunks(largeData, 200 * 1024) // 200 KB chunks
for (let i = 0; i < chunks.length; i++) {
  await client.set(`data-chunk-${i}`, chunks[i], 60000)
}
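One possible implementation of the `splitIntoChunks` helper used above — it is hypothetical, not part of the library:

```javascript
// Split a string into fixed-size chunks (hypothetical helper, not library code).
function splitIntoChunks(str, chunkSize) {
  const chunks = []
  for (let i = 0; i < str.length; i += chunkSize) {
    chunks.push(str.slice(i, i + chunkSize))
  }
  return chunks
}
```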

Note: Ephemeral-broker is designed for small, temporary data (tokens, session IDs, config flags). For large datasets, use Redis or filesystem storage.

Workers Can't Connect in Parallel Tests

Symptom: Some workers connect successfully, others timeout or fail

Cause:

  • Race condition: workers start before broker exports EPHEMERAL_PIPE
  • Workers inherit different environment variables

Solution:

// 1. Use framework-specific global setup hooks
// Playwright
export default defineConfig({
  globalSetup: async () => {
    const broker = new Broker()
    await broker.start() // Exports EPHEMERAL_PIPE to all workers
    return async () => broker.stop()
  }
})

// Jest
export default async function globalSetup() {
  const broker = new Broker()
  await broker.start()
  global.__BROKER__ = broker
}

// WebdriverIO
export const config = {
  async onPrepare() {
    broker = new Broker()
    await broker.start()
  }
}

// 2. Add retry logic in workers
let client
for (let i = 0; i < 5; i++) {
  try {
    client = new Client(process.env.EPHEMERAL_PIPE)
    await client.ping()
    break
  } catch (err) {
    if (i === 4) throw err
    await new Promise(resolve => setTimeout(resolve, 100 * (i + 1)))
  }
}

HMAC Authentication Failures

Symptom: Client gets auth_failed error

Cause: Client and broker using different secrets, or secret not set

Solution:

// 1. Ensure same secret on both sides
const secret = process.env.EPHEMERAL_SECRET || 'my-test-secret'
const broker = new Broker({ secret })
const client = new Client(pipe, { secret })

// 2. Or disable auth for local testing
const broker = new Broker() // No secret = no auth
const client = new Client(pipe) // No secret = no auth

// 3. In CI/CD, set EPHEMERAL_SECRET as environment variable
// GitHub Actions:
// env:
//   EPHEMERAL_SECRET: ${{ secrets.EPHEMERAL_SECRET }}

Security note: Always use HMAC authentication in CI/CD environments. Only disable for local development.

Getting Help

If you encounter issues not covered here:

  1. Enable debug mode: { debug: true } on both broker and client
  2. Check EPHEMERAL_PIPE value: echo $EPHEMERAL_PIPE
  3. Verify broker is running: await client.ping()
  4. Check broker stats: await client.stats()
  5. Open an issue at: https://github.com/kwegrz/ephemeral-broker/issues

Security

For detailed security information, see SECURITY.md.

Key security features:

  • ✅ Ephemeral state (broker dies → secrets vanish)
  • ✅ No disk writes (memory-only storage)
  • ✅ Random pipe names (generated fresh on every run)
  • ✅ Unix socket permissions (0700, owner-only)
  • ✅ Optional HMAC authentication (timing-safe)
  • ✅ Required TTL (prevents memory leaks)
  • ✅ Size limits (prevents DoS)
  • ✅ Zero external dependencies (supply chain safety)

Threat model: Protects against accidental disk persistence, unauthorized local access, memory exhaustion, and stale data leakage. Not designed for multi-host security or root-level attackers.
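The "timing-safe HMAC" item above can be illustrated with Node built-ins. This is a sketch of the general technique, assuming SHA-256 over the raw payload; the broker's actual wire format and signing scheme may differ.

```javascript
import crypto from 'node:crypto'

// Sign a payload with HMAC-SHA256 (illustrative sketch).
function sign(payload, secret) {
  return crypto.createHmac('sha256', secret).update(payload).digest('hex')
}

// Verify using a constant-time comparison to avoid timing side channels.
function verify(payload, signature, secret) {
  const a = Buffer.from(signature, 'hex')
  const b = Buffer.from(sign(payload, secret), 'hex')
  // timingSafeEqual throws on length mismatch, so check lengths first
  return a.length === b.length && crypto.timingSafeEqual(a, b)
}
```

The constant-time comparison is the important part: an ordinary `===` on signatures leaks how many leading bytes matched, which an attacker on the same host could exploit.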

Why This Will Succeed

  1. Solves real pain — tokens on disk, port conflicts, worker collisions
  2. Simple mental model — just a temp KV/lease store over a pipe
  3. Easy adoption — npx ephemeral-broker start -- your-command
  4. Framework-agnostic — not tied to WDIO or any specific stack
  5. Safe defaults — TTL, auth, size limits, heartbeat all built-in

This isn't another heavy service. It's essential infrastructure in ~800 LOC.