
@harperfast/symphony

v0.1.0


High-performance TLS proxy with SNI routing — Rust/napi-rs, Linux


symphony

High-performance TLS termination proxy with SNI-based routing, written in Rust (via napi-rs) and exposed as a Node.js native module.

Designed for Linux (x64 + arm64, glibc + musl); it also runs on macOS. Pre-built binaries are published for all targets.


Overview

symphony sits in front of your services and:

  • Terminates TLS per route using per-route certificates (falls back to a listener-level default cert)
  • Routes by SNI hostname — exact matches, wildcard prefixes (*.example.com), and a catch-all default
  • Proxies TCP — either terminating TLS (decrypt + forward plaintext) or passing raw TLS bytes through
  • Balances over Unix Domain Sockets (UDS) using least-connections weighted by thread CPU utilisation, with optional IP session affinity
  • Limits routes with per-route token-bucket rate caps to prevent any one route from starving others
  • Protects connections with per-IP token-bucket rate limiting, concurrency limits, CIDR allowlist/blocklist, JA3 fingerprint blocking, TLS handshake timeout, and SNI-required enforcement
  • Suspends routes — hold incoming connections and fire an event; your code decides whether to proxy or reject each one
  • Hot-swaps routes and protection config without restarting or dropping existing connections
  • Scales to ~1 million concurrent connections via SO_REUSEPORT, tokio's multi-thread runtime, and lock-free data structures

Installation

npm install @harperfast/symphony

Pre-built binaries are downloaded automatically for your platform during install. No Rust toolchain required.


Quick start

import { SymphonyProxy } from '@harperfast/symphony';
import { readFileSync } from 'node:fs';

const proxy = new SymphonyProxy({
  listeners: [{ port: 443 }],
  routes: [
    {
      sni: 'api.example.com',
      upstreams: [{ kind: 'tcp', host: '127.0.0.1', port: 3000 }],
      terminateTls: true,
      cert: {
        certChain: readFileSync('/etc/ssl/api.pem', 'utf8'),
        privateKey: readFileSync('/etc/ssl/api-key.pem', 'utf8'),
      },
    },
  ],
});

await proxy.start();
console.log('proxy listening on :443');

Configuration reference

ProxyConfig

| Field | Type | Default | Description |
|---|---|---|---|
| listeners | ListenerConfig[] | required | One entry per listening address |
| routes | RouteConfig[] | required | SNI routing table |
| workerThreads | number | CPU count | Tokio worker threads; also controls SO_REUSEPORT socket count per listener |
| readBufferSize | number | 65536 | Internal copy buffer size in bytes |

ListenerConfig

| Field | Type | Default | Description |
|---|---|---|---|
| host | string | '0.0.0.0' | Bind address |
| port | number | required | Bind port |
| defaultCert | CertConfig | — | Fallback cert for routes without their own cert |
| mtls | MtlsConfig | — | Listener-level mTLS, used when a route doesn't override it |
| maxConnections | number | 0 (unlimited) | Drop new connections when active count reaches this |
| idleTimeoutMs | number | 60000 | Close connections that have been silent for this many ms |
| protection | ProtectionConfig | — | IP-level protection |

RouteConfig

| Field | Type | Default | Description |
|---|---|---|---|
| sni | string | required | Hostname for exact match, or '*.suffix' for wildcard, or '' for default |
| upstreams | Upstream[] | required | Destination(s); multiple UDS upstreams are load-balanced |
| terminateTls | boolean | required | true = decrypt TLS; false = TCP passthrough |
| cert | CertConfig | — | Per-route cert, overrides listener defaultCert |
| mtls | MtlsConfig | — | Per-route mTLS, overrides listener mtls |
| suspended | boolean | false | Hold connections and emit 'suspended' events |
| suspendTimeoutMs | number | 30000 | Drop held connections after this ms if not resolved |
| maxConnectionsPerSecond | number | — | Route-wide new-connection rate cap (token bucket). Connections are silently dropped when exhausted. |
| burst | number | maxConnectionsPerSecond | Token bucket burst ceiling for the route rate limit |
| sourceAddressHeader | 'proxyProtocol' \| 'xForwardedFor' \| 'none' | 'proxyProtocol' for UDS, 'none' for TCP | How the real client IP is forwarded to the upstream. See Source address forwarding. |

Upstream

// TCP upstream
{ kind: 'tcp', host: string, port: number }

// Unix Domain Socket upstream
{
  kind: 'uds',
  path: string,
  ipAffinity?: boolean,      // route same-IP connections to same socket
  ipAffinityTtlMs?: number,  // evict affinity entry after this ms idle (default 300000)
  pid?: number,              // Linux PID of the worker process (enables CPU monitoring)
  tid?: number,              // Linux TID of the worker thread (must be set with pid)
}

CertConfig

{ certChain: string | Buffer, privateKey: string | Buffer }

Both fields accept PEM-encoded strings or Buffers. The cert chain may include intermediate certificates.

MtlsConfig

{ clientCaCert: string | Buffer, requireClientCert?: boolean }

requireClientCert defaults to true. Set to false to accept connections without a client cert while still validating those that do present one.

ProtectionConfig

| Field | Type | Default | Description |
|---|---|---|---|
| rateLimit | { connectionsPerSecond, burst? } | — | Token bucket per source IP |
| maxConcurrentPerIp | number | 0 (unlimited) | Max simultaneous connections per source IP |
| allowlist | string[] | [] | CIDRs that bypass all checks |
| blocklist | string[] | [] | CIDRs that are always blocked |
| ja3Blocklist | string[] | [] | JA3 MD5 hex fingerprints to block (32 chars each) |
| tlsHandshakeTimeoutMs | number | 10000 | Abort slow TLS handshakes |
| requireSni | boolean | false | Reject connections without an SNI extension |


TLS & mTLS

Per-route certificates

Each route can have its own certificate. Routes without a cert use the listener's defaultCert.

const proxy = new SymphonyProxy({
  listeners: [{
    port: 443,
    defaultCert: { certChain: wildcardCert, privateKey: wildcardKey },
  }],
  routes: [
    // Uses its own cert
    { sni: 'special.example.com', cert: { certChain: specialCert, privateKey: specialKey }, ... },
    // Falls back to listener defaultCert
    { sni: '*.example.com', ... },
  ],
});

mTLS

const proxy = new SymphonyProxy({
  listeners: [{
    port: 443,
    mtls: { clientCaCert: readFileSync('ca.pem', 'utf8'), requireClientCert: true },
  }],
  routes: [
    {
      sni: 'internal.example.com',
      terminateTls: true,
      cert: { certChain, privateKey },
      // Inherits listener mTLS; or override per-route:
      // mtls: { clientCaCert: ..., requireClientCert: false },
    },
  ],
});

TLS passthrough

Set terminateTls: false to forward raw TLS bytes to the upstream without decryption. No cert needed.

{ sni: 'passthrough.example.com', terminateTls: false, upstreams: [{ kind: 'tcp', host: '10.0.0.5', port: 443 }] }

Routing

Routes are checked in order: exact match → wildcard suffix → default (empty sni).

routes: [
  { sni: 'api.example.com', ... },        // exact
  { sni: '*.example.com', ... },          // matches foo.example.com, bar.example.com
  { sni: '', ... },                        // catch-all default
]
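The lookup can be modelled as a small pure function. This is an illustrative sketch of the matching rules above, not symphony's internal Rust implementation; note that in this sketch the bare apex ('example.com') does not match '*.example.com':

```typescript
interface Route {
  sni: string;
}

// Resolve a route for an SNI hostname: exact match first, then the
// longest matching wildcard suffix, then the default route (empty sni).
function matchRoute<T extends Route>(routes: T[], hostname: string): T | undefined {
  const exact = routes.find((r) => r.sni === hostname);
  if (exact) return exact;

  // '*.example.com' matches 'foo.example.com' but not 'example.com' itself.
  const wildcard = routes
    .filter((r) => r.sni.startsWith('*.') && hostname.endsWith(r.sni.slice(1)))
    .sort((a, b) => b.sni.length - a.sni.length)[0];
  if (wildcard) return wildcard;

  return routes.find((r) => r.sni === '');
}
```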

Suspended routes

Use suspended routes to inspect or authorize connections before proxying them:

proxy.on('suspended', async (conn) => {
  // conn.id, conn.sni, conn.peerIp, conn.peerPort, conn.listener
  const allowed = await checkAuthority(conn);

  if (allowed) {
    proxy.resolveConnection(conn.id, {
      upstreams: [{ kind: 'tcp', host: '127.0.0.1', port: 3000 }],
      terminateTls: false,
    });
  } else {
    proxy.resolveConnection(conn.id, null); // reject — TCP close
  }
});

// Route declared as suspended
{ sni: 'gated.example.com', suspended: true, upstreams: [], terminateTls: true, cert: { ... } }

Connections not resolved within suspendTimeoutMs are dropped automatically. Calling resolveConnection with an unknown or already-expired ID is a no-op.
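`checkAuthority` in the snippet above is your own code, not part of symphony's API. One hypothetical shape, shown purely for illustration, is an async lookup against a set of approved hostnames:

```typescript
// Shape of the connection object passed to the 'suspended' handler.
interface SuspendedConn {
  id: number;
  sni: string;
  peerIp: string;
  peerPort: number;
}

// Hypothetical authorizer: allow a connection only if its SNI hostname is
// approved. In practice this might be a database lookup or a billing check.
const approvedHosts = new Set(['gated.example.com']);

async function checkAuthority(conn: SuspendedConn): Promise<boolean> {
  return approvedHosts.has(conn.sni);
}
```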


UDS load balancing

Provide multiple uds upstreams for a route. symphony picks the socket with the lowest score, where score is:

score = active_connections × 1000 + cpu_utilisation_permille

Active connections are the primary factor; CPU utilisation (0–1000, representing 0–100%) is a tiebreaker that steers new connections away from overloaded threads when connection counts are equal.

upstreams: [
  { kind: 'uds', path: '/run/app/worker-0.sock' },
  { kind: 'uds', path: '/run/app/worker-1.sock' },
  { kind: 'uds', path: '/run/app/worker-2.sock' },
]
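The selection rule can be sketched as follows (a simplified model of the scoring formula above, not the lock-free Rust implementation):

```typescript
interface SocketStats {
  path: string;
  activeConnections: number;
  cpuPermille: number; // 0–1000, i.e. 0–100% thread CPU utilisation
}

// score = active_connections * 1000 + cpu_utilisation_permille
// Lowest score wins; CPU only breaks ties between equal connection counts.
function pickSocket(sockets: SocketStats[]): SocketStats {
  return sockets.reduce((best, s) =>
    s.activeConnections * 1000 + s.cpuPermille <
    best.activeConnections * 1000 + best.cpuPermille
      ? s
      : best,
  );
}
```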

IP session affinity

Add ipAffinity: true to any UDS upstream entry to pin source IPs to the same socket:

upstreams: [
  { kind: 'uds', path: '/run/app/worker-0.sock', ipAffinity: true, ipAffinityTtlMs: 300000 },
  { kind: 'uds', path: '/run/app/worker-1.sock', ipAffinity: true },
]

The same ipAffinity / ipAffinityTtlMs values apply to all sockets in the set (values from the first entry are used for the shared balancer).

Thread CPU utilisation monitoring

When each UDS upstream serves a known worker thread, symphony can read its CPU utilisation from /proc/{pid}/task/{tid}/stat and incorporate it into socket selection:

upstreams: [
  { kind: 'uds', path: '/run/app/worker-0.sock', pid: 12345, tid: 12346 },
  { kind: 'uds', path: '/run/app/worker-1.sock', pid: 12345, tid: 12347 },
  { kind: 'uds', path: '/run/app/worker-2.sock', pid: 12345, tid: 12348 },
]

Symphony samples /proc/{pid}/task/{tid}/stat every 250 ms and computes the thread's CPU utilisation over the interval. Sockets without pid/tid keep a CPU score of 0 and fall back to pure least-connections. Sampling stops gracefully when pid is gone (process exit, crash) — those slots simply keep their last measured value.
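The derivation of a utilisation figure from two samples can be sketched like this, assuming the standard stat layout from proc(5) (utime and stime are fields 14 and 15; the comm field may contain spaces, so the line is split after the last ')'):

```typescript
// Cumulative CPU ticks (utime + stime) from one /proc/{pid}/task/{tid}/stat line.
function cpuTicks(statLine: string): number {
  const rest = statLine.slice(statLine.lastIndexOf(')') + 1).trim().split(/\s+/);
  const utime = Number(rest[11]); // field 14 overall
  const stime = Number(rest[12]); // field 15 overall
  return utime + stime;
}

// Utilisation in permille over a sampling interval:
// delta ticks / (interval in seconds * clock ticks per second), scaled to 0–1000.
function utilisationPermille(prev: number, curr: number, intervalMs: number, hz = 100): number {
  return Math.min(1000, Math.round(((curr - prev) / (intervalMs / 1000) / hz) * 1000));
}
```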


Per-route rate limiting

Use maxConnectionsPerSecond on a route to cap the rate of new connections accepted for that route, independent of source IP. This prevents a single busy route from starving other routes under high load:

routes: [
  {
    sni: 'api.example.com',
    maxConnectionsPerSecond: 500,  // route-wide cap; burst defaults to this value
    burst: 1000,                   // allow short bursts up to 1000 conn/s
    upstreams: [{ kind: 'uds', path: '/run/app/api.sock' }],
    terminateTls: true,
    cert: { certChain, privateKey },
  },
  {
    sni: 'admin.example.com',
    maxConnectionsPerSecond: 20,
    upstreams: [{ kind: 'uds', path: '/run/app/admin.sock' }],
    terminateTls: true,
    cert: { certChain, privateKey },
  },
]

Connections that exceed the limit are silently dropped (TCP RST). This is a global token bucket per route — not per IP. For per-IP rate limiting use protection.rateLimit.
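The token-bucket behaviour can be illustrated with a minimal model (a sketch for intuition, not symphony's Rust implementation):

```typescript
// Minimal token bucket: `rate` tokens refill per second, capped at `burst`;
// each new connection consumes one token. Time is passed in explicitly
// (in seconds) to keep the sketch deterministic.
class TokenBucket {
  private tokens: number;
  private last = 0;

  constructor(private rate: number, private burst: number) {
    this.tokens = burst;
  }

  tryAcquire(now: number): boolean {
    this.tokens = Math.min(this.burst, this.tokens + (now - this.last) * this.rate);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```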


Source address forwarding

Use sourceAddressHeader on a route to control how the real client IP is communicated to the upstream. This only applies when terminateTls: true (TLS is terminated by the proxy).

| Value | Behaviour |
|---|---|
| 'proxyProtocol' | Sends a PROXY protocol v1 header (PROXY TCP4 <src-ip> <dst-ip> <src-port> 0\r\n) before any application data. Default for UDS upstreams. |
| 'xForwardedFor' | Reads the first chunk of the HTTP request, inserts an X-Forwarded-For header after the request line, then copies the rest verbatim. No per-request parsing overhead for keep-alive connections. |
| 'none' | Does not forward source address information. Default for TCP upstreams. |

PROXY protocol (default for UDS)

Most backends that consume PROXY protocol (nginx, HAProxy, HarperDB) read the header once per connection before parsing application data.

{
  sni: 'api.example.com',
  upstreams: [{ kind: 'uds', path: '/run/app/worker.sock' }],
  terminateTls: true,
  cert: { certChain, privateKey },
  // sourceAddressHeader: 'proxyProtocol',  // this is already the default for UDS
}
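If you are writing your own UDS backend, the v1 header is a single ASCII line that arrives before any application bytes. A sketch of splitting it off the first chunk, following the format shown above (with the destination port fixed to 0):

```typescript
interface ProxyV1 {
  srcIp: string;
  dstIp: string;
  srcPort: number;
  rest: Buffer; // application bytes following the header
}

// Strip a PROXY protocol v1 line from the front of the first chunk received
// on a connection: "PROXY TCP4 <src-ip> <dst-ip> <src-port> 0\r\n".
function stripProxyV1(chunk: Buffer): ProxyV1 {
  const end = chunk.indexOf('\r\n');
  if (end < 0 || !chunk.subarray(0, 6).equals(Buffer.from('PROXY '))) {
    throw new Error('missing PROXY protocol header');
  }
  const [, , srcIp, dstIp, srcPort] = chunk.subarray(0, end).toString('ascii').split(' ');
  return { srcIp, dstIp, srcPort: Number(srcPort), rest: chunk.subarray(end + 2) };
}
```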

X-Forwarded-For (for Bun and other HTTP backends)

Bun's built-in HTTP server does not support PROXY protocol. Use 'xForwardedFor' instead — symphony injects the header into the first HTTP request of each connection:

{
  sni: 'app.example.com',
  upstreams: [{ kind: 'uds', path: '/run/bun/worker.sock' }],
  terminateTls: true,
  cert: { certChain, privateKey },
  sourceAddressHeader: 'xForwardedFor',
}

In your Bun server:

Bun.serve({
  unix: '/run/bun/worker.sock',
  fetch(req) {
    const clientIp = req.headers.get('x-forwarded-for');
    // ...
  },
});

Protection

Recommended starting values for public-facing deployments

protection: {
  rateLimit: { connectionsPerSecond: 50, burst: 100 },
  maxConcurrentPerIp: 200,
  allowlist: ['10.0.0.0/8', '172.16.0.0/12', '192.168.0.0/16'],
  requireSni: true,
  tlsHandshakeTimeoutMs: 5000,
}
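CIDR matching works the usual way; a sketch of how an IPv4 allowlist entry such as '10.0.0.0/8' is evaluated (for intuition only, IPv4 without validation):

```typescript
// Pack a dotted-quad IPv4 address into an unsigned 32-bit integer.
function ipToInt(ip: string): number {
  return ip.split('.').reduce((acc, octet) => (acc << 8) | Number(octet), 0) >>> 0;
}

// True when `ip` falls inside `cidr`, e.g. ipInCidr('10.1.2.3', '10.0.0.0/8').
function ipInCidr(ip: string, cidr: string): boolean {
  const [net, bitsStr] = cidr.split('/');
  const bits = Number(bitsStr);
  const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(net) & mask);
}
```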

JA3 blocking

Collect JA3 fingerprints from your logs (a ja3 field will be available in future log integrations) and add known-bad clients:

ja3Blocklist: [
  'e7d705a3286e19ea42f587b344ee6865', // example known-bad scanner
]
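A JA3 fingerprint is the MD5 digest of the client hello parameters (TLS version, ciphers, extensions, curves, and point formats, joined with ',' and '-'). If your source gives you raw JA3 strings rather than digests, they can be converted to the 32-character hex form the blocklist expects:

```typescript
import { createHash } from 'node:crypto';

// Hash a raw JA3 string ("TLSVersion,Ciphers,Extensions,Curves,PointFormats")
// into the 32-character MD5 hex digest used in ja3Blocklist entries.
function ja3Hash(raw: string): string {
  return createHash('md5').update(raw).digest('hex');
}
```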

Hot-swapping protection config

Protection config is per-listener and not currently hot-swappable via updateConfig (listeners would need to restart). To update protection, restart with a new config. Route changes do not require listener restarts.


Metrics & monitoring

const m = proxy.metrics();
// m.activeConnections — connections being proxied right now
// m.blockedConnections — total blocked since start
// m.pendingSuspended — connections currently held waiting for resolveConnection()

const blocked = proxy.blockedIps();
// blocked.rateLimited — IPs with depleted token buckets
// blocked.concurrencyLimited — IPs at their maxConcurrentPerIp limit
// blocked.cidrBlocklist — the configured static CIDR blocklist

setInterval(() => {
  console.log('active:', proxy.metrics().activeConnections);
}, 10_000);

Hot config updates

// Replace the entire route table atomically — in-flight connections are unaffected.
proxy.updateConfig({
  routes: newRoutes,
});

What can be hot-swapped: routes (destinations, TLS certs, suspension state).

What requires a restart: listeners (bind address, port, protection config, idle timeout).


Building from source

Requirements: Rust stable (1.70+), Node.js 18+, @napi-rs/cli.

npm install
npm run build:debug    # builds a dev .node file
npm run build          # release build (LTO, stripped)

Cross-compilation

Use the napi-rs Docker images (same ones used in CI):

# x64 musl (Alpine)
docker run --rm -v $(pwd):/build -w /build \
  ghcr.io/napi-rs/napi-rs/nodejs-rust:lts-alpine \
  npm run build -- --target x86_64-unknown-linux-musl

# arm64 glibc
docker run --rm -v $(pwd):/build -w /build \
  ghcr.io/napi-rs/napi-rs/nodejs-rust:lts-debian-aarch64 \
  npm run build -- --target aarch64-unknown-linux-gnu

Linux kernel tuning

To reach ~1 million concurrent connections, the following system settings are required.

File descriptor limits

# Per-process (set before starting Node)
ulimit -n 2097152

# System-wide persistent — /etc/security/limits.conf
*  soft  nofile  2097152
*  hard  nofile  2097152

symphony attempts to raise RLIMIT_NOFILE automatically at startup (to 2 × maxConnections + 1024), but the hard limit must be raised by the OS first.

Kernel networking

# /etc/sysctl.d/99-symphony.conf

# TCP connection tracking
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_reuse = 1

# Socket buffers (tune to your bandwidth)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# Accept queue depth per socket
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535

# Max open files system-wide
fs.file-max = 4194304

Apply with:

sudo sysctl --system

musl note

On musl-libc systems (Alpine), the hard RLIMIT_NOFILE is often capped at 1048576 rather than the glibc default of 1073741816. symphony will log a warning if the desired limit exceeds the hard limit and fall back to the hard limit.