portok
v1.0.9
Zero-downtime deployment proxy - routes traffic through a stable port to internal app instances with health-gated switching
Portok
A lightweight Node.js "switchboard" proxy that enables zero-downtime deployments by routing a stable public port to an internal app instance running on a random port, switching only when the new instance is healthy.
Features
- Zero-downtime switching: Health-gated port switching with connection draining
- Auto-rollback: Automatic rollback if the new port becomes unhealthy
- WebSocket support: Full HTTP and WebSocket proxying
- Lightweight metrics: Built-in metrics without heavy dependencies
- Security: Token-based auth, IP allowlist, rate limiting
- CLI: Easy-to-use command-line interface
Quick Start
Installation
Global Installation (Recommended):
# Install globally
npm install -g portok
# Now you can use portok and portokd commands from anywhere
portok --help
portokd --help
Start the Daemon
# Required environment variables
export LISTEN_PORT=3000
export INITIAL_TARGET_PORT=8080
export ADMIN_TOKEN=your-secret-token
# Start portokd
node portokd.js
Use the CLI
# Check status
portok status --token your-secret-token
# Switch to new port
portok switch 8081 --token your-secret-token
# Check metrics
portok metrics --token your-secret-token
# Check health
portok health --token your-secret-token
Note: If not installed globally, use ./portok.js instead of portok.
Configuration
All configuration is via environment variables:
| Variable | Default | Description |
|----------|---------|-------------|
| INSTANCE_NAME | default | Instance identifier (for logging/state file naming) |
| LISTEN_PORT | 3000 | Port the proxy listens on |
| INITIAL_TARGET_PORT | (required) | Initial backend port to proxy to |
| STATE_FILE | /var/lib/portok/<instance>.json | Path to persist state |
| HEALTH_PATH | /health | Health check endpoint path |
| HEALTH_TIMEOUT_MS | 5000 | Health check timeout |
| DRAIN_MS | 30000 | Connection drain period after switch |
| ROLLBACK_WINDOW_MS | 60000 | Time window for auto-rollback monitoring |
| ROLLBACK_CHECK_EVERY_MS | 5000 | Health check interval during rollback window |
| ROLLBACK_FAIL_THRESHOLD | 3 | Consecutive failures before rollback |
| ADMIN_TOKEN | (required) | Token for admin endpoint authentication |
| ADMIN_ALLOWLIST | 127.0.0.1,::1 | Comma-separated list of allowed IPs |
| ADMIN_UNIX_SOCKET | (optional) | Unix socket path for admin endpoints |
Performance Tuning
| Variable | Default | Description |
|----------|---------|-------------|
| FAST_PATH | 0 | Enable minimal metrics mode for maximum throughput |
| UPSTREAM_KEEPALIVE | 1 | Enable keep-alive for upstream connections (critical) |
| UPSTREAM_MAX_SOCKETS | 1024 | Maximum sockets per upstream host |
| UPSTREAM_KEEPALIVE_MSECS | 1000 | Keep-alive ping interval in ms |
| SERVER_KEEPALIVE_TIMEOUT | 5000 | Server keep-alive timeout in ms |
| SERVER_HEADERS_TIMEOUT | 6000 | Headers timeout (must be > keepAliveTimeout) |
| ENABLE_XFWD | 1 | Add X-Forwarded-* headers to proxied requests |
| DEBUG_UPSTREAM | 0 | Track upstream socket creation in /__metrics |
| VERBOSE_ERRORS | 0 | Log full error stacks (disable in production) |
Admin Endpoints
All admin endpoints require the x-admin-token header.
GET /__status
Returns current proxy status.
curl -H "x-admin-token: your-token" http://127.0.0.1:3000/__status
Response:
{
"activePort": 8080,
"drainUntil": null,
"lastSwitch": {
"from": 8081,
"to": 8080,
"at": "2024-01-15T10:30:00.000Z",
"reason": "manual",
"id": "uuid-here"
}
}
GET /__metrics
Returns proxy metrics.
curl -H "x-admin-token: your-token" http://127.0.0.1:3000/__metrics
Response:
{
"startedAt": "2024-01-15T10:00:00.000Z",
"inflight": 5,
"inflightMax": 100,
"totalRequests": 50000,
"totalProxyErrors": 2,
"statusCounters": {
"2xx": 49500,
"3xx": 100,
"4xx": 398,
"5xx": 2
},
"rollingRps60": 125.5,
"health": {
"activePortOk": true,
"lastCheckedAt": "2024-01-15T10:29:55.000Z",
"consecutiveFails": 0
},
"lastProxyError": null
}
POST /__switch?port=PORT
Switch to a new target port. Performs health check before switching.
curl -X POST -H "x-admin-token: your-token" \
"http://127.0.0.1:3000/__switch?port=8081"
Success response (200):
{
"success": true,
"message": "Switched to port 8081",
"switch": {
"from": 8080,
"to": 8081,
"at": "2024-01-15T10:30:00.000Z",
"reason": "manual",
"id": "uuid-here"
}
}
Failure response (409 - health check failed):
{
"error": "Health check failed",
"message": "Port 8081 did not respond with 2xx at /health"
}
GET /__health
Check health of the current active port.
curl -H "x-admin-token: your-token" http://127.0.0.1:3000/__health
CLI Reference
portok <command> [options]
Management Commands:
init Initialize portok (creates dirs, installs systemd service)
add <name> Create a new service instance
remove <name> Remove a service instance (stops, disables, deletes config/state)
clean Remove ALL portok data (configs, states, systemd service)
list List all configured instances and their status
Service Control Commands:
start <name> Start a portok service (systemctl start portok@<name>)
stop <name> Stop a portok service
restart <name> Restart a portok service
enable <name> Enable service at boot
disable <name> Disable service at boot
logs <name> Show service logs (journalctl)
Operational Commands:
status Show current proxy status
metrics Show proxy metrics
switch <port> Switch to a new target port
health Check health of active port
Options:
--url <url> Daemon URL (default: http://127.0.0.1:3000)
--instance <name> Target instance by name (reads /etc/portok/<name>.env)
--token <token> Admin token (or PORTOK_TOKEN env var)
--json Output as JSON
--help Show help
Options for 'add' command:
--port <port> Listen port (default: random 3000-3999)
--target <port> Target port (default: random 8000-8999)
--health <path> Health check path (default: /health)
--force Overwrite existing config
Options for 'remove' command:
--force Skip confirmation prompt
--keep-state Keep state file (/var/lib/portok/<name>.json)
Options for 'clean' command:
--force Skip confirmation prompt (required)
Options for 'logs' command:
--follow, -f Follow log output
--lines, -n Number of lines to show (default: 50)
Quick Start with CLI
# 1. Initialize portok (creates /etc/portok and /var/lib/portok)
sudo portok init
# 2. Create a new service
sudo portok add api --port 3001 --target 8001
# 3. Start the service
sudo portok start api
# 4. Enable at boot
sudo portok enable api
# 5. Check status
portok status --instance api
# 6. List all services
portok list
Environment Variables for CLI
- PORTOK_URL: Default daemon URL
- PORTOK_TOKEN: Admin token
Examples
# Initialize portok (run once)
sudo portok init
# Create services with specific ports
sudo portok add api --port 3001 --target 8001
sudo portok add web --port 3002 --target 8002
# Service management
sudo portok start api
sudo portok stop api
sudo portok restart api
sudo portok enable api # Enable at boot
sudo portok disable api # Disable at boot
# Remove a service (stops, disables, removes config and state)
sudo portok remove api --force
sudo portok remove api --force --keep-state # Keep state file
# View logs
portok logs api
portok logs api --follow # Follow log output
portok logs api -n 100 # Show last 100 lines
# List all instances with status
portok list
# Check status by instance name
portok status --instance api
# Get metrics as JSON
portok metrics --instance api --json
# Switch to new port
portok switch 8081 --instance api
# Health check (exits 0 if healthy, 1 if unhealthy)
portok health --instance api && echo "OK" || echo "FAIL"
# Direct URL mode (without instance)
portok status --url http://127.0.0.1:3000 --token your-token
systemd Service
Example systemd unit file (/etc/systemd/system/portokd.service):
[Unit]
Description=Portok Zero-Downtime Proxy
After=network.target
[Service]
Type=simple
User=www-data
WorkingDirectory=/opt/portok
ExecStart=/usr/bin/node /opt/portok/portokd.js
Restart=always
RestartSec=5
# Environment
Environment=LISTEN_PORT=3000
Environment=INITIAL_TARGET_PORT=8080
Environment=STATE_FILE=/var/lib/portok/state.json
Environment=ADMIN_TOKEN=your-secret-token
Environment=HEALTH_PATH=/health
Environment=DRAIN_MS=30000
Environment=ROLLBACK_WINDOW_MS=60000
Environment=ROLLBACK_CHECK_EVERY_MS=5000
Environment=ROLLBACK_FAIL_THRESHOLD=3
# Security hardening
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/lib/portok
[Install]
WantedBy=multi-user.target
Enable and start:
sudo systemctl daemon-reload
sudo systemctl enable portokd
sudo systemctl start portokd
Testing
Tests run in Docker for Linux compatibility:
# Run all tests
docker compose run --rm test
# Run specific test file
docker compose run --rm test npm test -- --test-name-pattern="proxy"
# Development shell
docker compose run --rm dev
Or run locally (requires Node.js 20+):
npm test
Benchmarks
Benchmarks measure proxy performance:
# Run all benchmarks in Docker
docker compose run --rm bench
# Quick benchmark (shorter duration)
docker compose run --rm bench npm run bench -- --quick
# Output JSON for CI
docker compose run --rm bench npm run bench -- --json > results.json
Benchmark scenarios:
| Benchmark | Description |
|-----------|-------------|
| Throughput | Maximum requests/sec with 100 connections |
| Latency | Latency percentiles (p50, p95, p99) |
| Connections | Scaling with 10-500 concurrent connections |
| Switching | Switch latency and request loss |
| Baseline | Direct vs proxied overhead comparison |
| Keep-Alive | Validates keep-alive performance (RPS >= 70% of direct) |
Multi-Instance Setup
Portok supports running multiple isolated instances on the same host, each managing a different application. This is the recommended approach for multi-app deployments.
Directory Structure
/etc/portok/
├── api.env # Config for "api" instance
├── web.env # Config for "web" instance
└── worker.env # Config for "worker" instance
/var/lib/portok/
├── api.json # State file for "api" instance
├── web.json # State file for "web" instance
└── worker.json # State file for "worker" instance
Instance Configuration
Each instance has its own env file at /etc/portok/<instance>.env:
Example: /etc/portok/api.env
# Required
LISTEN_PORT=3001
INITIAL_TARGET_PORT=8001
ADMIN_TOKEN=api-secret-token-change-me
# Optional (defaults shown)
HEALTH_PATH=/health
HEALTH_TIMEOUT_MS=5000
DRAIN_MS=30000
ROLLBACK_WINDOW_MS=60000
ROLLBACK_CHECK_EVERY_MS=5000
ROLLBACK_FAIL_THRESHOLD=3
Example: /etc/portok/web.env
LISTEN_PORT=3002
INITIAL_TARGET_PORT=8002
ADMIN_TOKEN=web-secret-token-change-me
systemd Template Unit
The portok init command automatically installs and configures the systemd template:
# Initialize Portok (creates dirs, installs systemd service)
sudo portok init
# Create a new instance
sudo portok add api --port 3001 --target 8001
# Start and enable
sudo portok start api
sudo portok enable api
# Check status
sudo portok status api
portok logs api --follow
Node.js Installation Methods
System-wide Node.js (Recommended for Production):
# Install via OS package manager
sudo apt install nodejs # Debian/Ubuntu
sudo yum install nodejs # RHEL/CentOS
# Standard init
sudo portok init
nvm/fnm/volta (Development or when system node unavailable):
# Use --nvm flag for less restrictive security settings
sudo portok init --nvm
# Or specify custom node path
sudo portok init --node-path=/custom/path/to/node
Preview changes before applying:
sudo portok init --dry-run
Diagnosing Issues
Use portok doctor to check your installation:
portok doctor
Example output:
Portok Doctor - Diagnosing installation...
✓ Node.js binary
/usr/local/bin/node (v20.19.6)
✓ System Node.js
/usr/local/bin/node (v20.19.6)
✓ Config directory
/etc/portok (2 config files)
✓ State directory
/var/lib/portok (writable)
✓ systemd service
/etc/systemd/system/[email protected]
✓ ExecStart node
/usr/local/bin/node
✓ ProtectHome
ProtectHome=true
✓ systemctl
Available
✓ Running instances
2 running, 0 failed
All checks passed. Portok is ready to use.
Security Hardening
The default [email protected] includes production-grade security settings:
| Setting | Value | Purpose |
|---------|-------|---------|
| ProtectHome | true | Block access to home directories |
| ProtectSystem | strict | Make /usr, /boot, /etc read-only |
| PrivateTmp | true | Private /tmp per service |
| NoNewPrivileges | true | Prevent privilege escalation |
| ReadWritePaths | /var/lib/portok | Only the state directory is writable |
For nvm users, portok init --nvm uses ProtectHome=read-only to allow ~/.nvm access.
Manual systemd Setup (Advanced)
For manual setup without using portok init:
# Copy the template (choose one)
sudo cp [email protected] /etc/systemd/system/ # Production (system node)
sudo cp [email protected] /etc/systemd/system/[email protected] # NVM variant
# Edit paths in the service file
sudo nano /etc/systemd/system/[email protected]
# Update: ExecStart, WorkingDirectory, User, Group
# Create directories
sudo mkdir -p /etc/portok /var/lib/portok
sudo chown $(whoami):$(id -gn) /var/lib/portok
# Reload systemd
sudo systemctl daemon-reload
# Start instances
sudo systemctl start portok@api
CLI with Multi-Instance
Use --instance to target a specific instance:
# Target by instance name (reads /etc/portok/<name>.env)
portok status --instance api
portok metrics --instance web
portok switch 8081 --instance api
portok health --instance web --json
# Explicit URL/token still works (and overrides env file)
portok status --url http://127.0.0.1:3001 --token api-secret-token
Instance Isolation
Each instance is fully isolated:
- Separate state files: No shared state between instances
- Separate tokens: Each instance has its own admin token
- Separate metrics: Metrics are per-instance
- Separate rollback monitors: Each instance tracks its own rollback window
- Separate rate limits: Rate limiting is per-instance
Multi-Instance Architecture
┌─────────────────────────────────────────────────────────────┐
│ Load Balancer / Nginx │
└───────────┬─────────────────────────┬───────────────────────┘
│ │
▼ ▼
┌───────────────────────┐ ┌───────────────────────┐
│ portok@api (:3001) │ │ portok@web (:3002) │
│ State: api.json │ │ State: web.json │
└───────────┬───────────┘ └───────────┬───────────┘
│ │
▼ ▼
┌───────────────────────┐ ┌───────────────────────┐
│ api-v1 (:8001) │ │ web-v1 (:8002) │
│ api-v2 (:8011) │ │ web-v2 (:8012) │
└───────────────────────┘ └───────────────────────┘
Architecture
┌─────────────────────────────────────────────────────────┐
│ Client Traffic │
└─────────────────────────┬───────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────┐
│ portokd (LISTEN_PORT) │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────────┐ │
│ │ Proxy │ │ Admin │ │ Health Monitor │ │
│ │ (http-proxy)│ │ Endpoints │ │ (auto-rollback)│ │
│ └──────┬──────┘ └─────────────┘ └─────────────────┘ │
│ │ │
│ ┌──────┴──────┐ │
│ │Socket Tracker│ ← Maps connections to ports for drain │
│ └──────┬──────┘ │
└─────────┼───────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────┐
│ 127.0.0.1:ACTIVE_PORT │
│ (Your App) │
└─────────────────────────────────────────────────────────┘
Zero-Downtime Deployment Flow
- Deploy the new app version on a random port (e.g., 54321)
- The new app starts and exposes a /health endpoint
- Call portok switch 54321 or POST /__switch?port=54321
- Portok health-checks the new port
- If healthy: traffic switches and old connections drain
- If the new port fails during the rollback window: automatic rollback
- The old app can be stopped after the drain period
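The switch step above can be scripted. The sketch below is a hypothetical deploy helper, not shipped with portok; it assumes Node 18+ for the global fetch and a daemon on the default admin URL.

```javascript
// deploy-switch sketch (hypothetical helper): ask portok's admin API to flip
// traffic to a freshly started instance. A 409 means the new port failed its
// health check and the old port keeps serving.
async function switchTraffic(adminUrl, token, newPort) {
  const res = await fetch(`${adminUrl}/__switch?port=${newPort}`, {
    method: 'POST',
    headers: { 'x-admin-token': token },
  });
  return { status: res.status, body: await res.json() };
}

// Run directly only when a token is configured, e.g.:
//   PORTOK_TOKEN=... node deploy-switch.js 54321
if (process.env.PORTOK_TOKEN) {
  switchTraffic(
    process.env.PORTOK_URL || 'http://127.0.0.1:3000',
    process.env.PORTOK_TOKEN,
    Number(process.argv[2]),
  ).then(({ status, body }) => {
    console.log(status === 200 ? `ok: ${body.message}` : `refused: ${body.message}`);
    process.exitCode = status === 200 ? 0 : 1;
  });
}
```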
Performance Notes
Portok is optimized for high throughput and low latency proxy operations.
FAST_PATH Mode (Recommended for High-Traffic)
Enable FAST_PATH=1 for maximum throughput in production or benchmarks:
export FAST_PATH=1
This disables expensive metrics (status counters, rolling RPS) while keeping essential counters (totalRequests, inflight, proxyErrors).
Keep-Alive (Critical)
The upstream keep-alive agent is critical for performance. Without it, every request opens a new TCP connection which adds ~0.5-2ms latency and significantly limits throughput.
Keep-alive is enabled by default (UPSTREAM_KEEPALIVE=1). Do not disable it in production.
Performance Validation
Run the validation benchmark to verify performance:
# Quick validation (3s)
FAST_PATH=1 node bench/validate.js --quick
# Full validation (10s)
FAST_PATH=1 node bench/validate.js
# Manual autocannon test
# Direct:
npx autocannon -c 50 -d 10 http://127.0.0.1:<APP_PORT>/
# Proxied:
npx autocannon -c 50 -d 10 http://127.0.0.1:<PROXY_PORT>/
Acceptance Criteria:
- RPS >= 30% of direct (http-proxy adds inherent overhead)
- Added p50 latency <= 10ms
- p99 latency <= 50ms
Typical Results (localhost, FAST_PATH=1):
- Direct: ~28,000 RPS, p50=1ms
- Proxied: ~13,000 RPS, p50=3ms
- Socket reuse: 800-2000x (confirms keep-alive working)
Debug Upstream Connections
Enable DEBUG_UPSTREAM=1 to track upstream socket creation in /__metrics:
export DEBUG_UPSTREAM=1
This exposes upstreamSocketsCreated in metrics to verify keep-alive is working.
Optimization Summary
| Optimization | Impact |
|--------------|--------|
| FAST_PATH mode | Minimal per-request overhead |
| Keep-alive agent | 10-20x throughput vs no keep-alive |
| Connection header stripping | Ensures upstream keep-alive works |
| Minimal URL parsing | No allocations in hot path |
| res.once() listeners | Auto-cleanup, no memory leaks |
| Socket reuse tracking | DEBUG_UPSTREAM confirms keep-alive |
Security
- Token authentication: All admin endpoints require the x-admin-token header
- Timing-safe comparison: Token validation uses crypto.timingSafeEqual
- IP allowlist: Admin endpoints restricted by IP (default: localhost only)
- Rate limiting: 10 requests/minute per IP for admin endpoints
- SSRF protection: Target host fixed to 127.0.0.1
License
MIT
