
@velocityai/velocity

v0.1.3

Published

WebSocket multiplexer and batching proxy for agent loops

Downloads

198

Readme

VELOCITY

velocity is a TypeScript CLI proxy that sits between an agent and a WebSocket server to reduce frame count and byte overhead while guarding against latency regression. It is package-first and terminal-first: there is no built-in web dashboard UI.
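The core trade the proxy makes can be sketched in a few lines. This is a minimal illustration of windowed frame batching, not velocity's actual implementation: frames arriving within the batch window ride upstream in one envelope, trading a small queue delay for fewer frames on the wire.

```typescript
// Minimal sketch of windowed batching (illustrative, not velocity's code).
type Frame = { id: number; payload: string };

class BatchWindow {
  private queue: Frame[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private windowMs: number,
    private send: (batch: Frame[]) => void, // e.g. socket.send(encode(batch))
  ) {}

  enqueue(frame: Frame): void {
    this.queue.push(frame);
    if (this.timer === null) {
      // The first frame opens the window; everything arriving before it
      // closes is sent in the same envelope.
      this.timer = setTimeout(() => this.flush(), this.windowMs);
    }
  }

  flush(): void {
    if (this.timer !== null) clearTimeout(this.timer);
    this.timer = null;
    if (this.queue.length > 0) {
      this.send(this.queue);
      this.queue = [];
    }
  }
}
```

Features like the priority lane or backpressure guard described below amount to calling something like flush() early instead of waiting for the window to close.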

Quickstart (Package-first)

npx @velocityai/velocity proxy --target ws://localhost:4000
npx @velocityai/velocity doctor
npx @velocityai/velocity bootstrap

Or install globally:

npm install -g @velocityai/velocity
velocity proxy --target ws://localhost:4000
velocity doctor
velocity bootstrap

Node control-plane SDK from the same package:

import { VelocityControlClient } from "@velocityai/velocity/control-plane-sdk";

Core commands

velocity proxy --target ws://localhost:4000
velocity canary --target ws://localhost:4000
velocity doctor
velocity bootstrap
velocity stats
velocity stats --watch
velocity stats --watch --interval-ms 1000 --tenant-limit 20
velocity replay .velocity/traces/<trace>.jsonl
velocity bench
velocity bench-ci --fail-on-regression

Performance presets (CLI-only):

velocity proxy --target ws://localhost:4000 --performance-profile low-latency
velocity proxy --target ws://localhost:4000 --performance-profile high-throughput
velocity proxy --target ws://localhost:4000 --target-pool ws://10.0.0.11:4000,ws://10.0.0.12:4000
velocity proxy --target ws://localhost:4000 --runtime-control-plane-endpoint http://127.0.0.1:4200

What it does

  • Batches logical frames inside a configurable window.
  • Adapts the batch window to a p95 latency budget.
  • Negotiates capabilities with upstream via hello/hello-ack handshake.
  • Falls back to JSON-RPC batch merge in passthrough mode for non-velocity upstreams.
  • --safe-mode applies conservative runtime defaults (stricter latency guard, disabled risky optimizations).
  • Per-tenant circuit breaker opens on sustained guard breaches and forces passthrough.
  • Session rollback auto-switches to passthrough after repeated guard breaches.
  • velocity canary assigns a tenant subset to safe-mode and auto-promotes clean tenants to full mode.
  • Queue limits protect the proxy under bursty load (--max-inbound-queue, --max-outstanding-batches).
  • Semantic coalescing deduplicates identical in-flight JSON-RPC tool calls and fans out responses by request id.
  • Priority lane bypasses batching for critical methods (cancel/abort/interrupt/final/error style calls).
  • Streaming-aware flush path sends stream/token/delta style traffic with low queue delay.
  • Backpressure guard reacts when WebSocket buffered bytes exceed --max-socket-backpressure-bytes.
  • Encodes transport envelopes with MessagePack.
  • Optionally compresses envelopes with zstd.
  • Payload-aware compression gates avoid wasting bytes on small/low-gain payloads.
  • Optionally emits delta-only downstream updates when smaller.
  • Records per-frame metrics and trace files.
  • Rich terminal stats via velocity stats (including --json and --verbose modes).
  • Optional Prometheus-style endpoint at /metrics when running long-lived service deployments.
  • Structured log output (--log-format json) for ingestion pipelines.
  • Optional OTLP HTTP export (--otlp-http-endpoint) for enterprise observability stacks.
  • Auto-fallback: temporarily disables batching if queueing delay starts hurting RTT.
  • Includes bench and bench-ci for multi-profile direct vs proxied validation.
  • Includes perf:bakeoff script for ws vs uWebSockets.js transport baseline checks.
  • Optional OPA policy hook supports per-tenant allow/deny and rate limit decisions.
  • Durable control-plane store with default JSON-file engine and optional SQLite engine for tenant policy + distributed token-bucket checks.
  • Optional Valkey-backed distributed rate-limit buckets via --valkey-url.
  • Optional NATS runtime/control-plane event propagation via --nats-url.
  • Optional listener engine selection (ws or uWebSockets.js when installed).
  • Optional JWT authentication + OpenFGA authorization checks for enterprise access control.
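Of the list above, semantic coalescing is the least obvious mechanism, so here is a rough sketch of the idea under stated assumptions (the class and method names are illustrative, not velocity's internals): identical in-flight JSON-RPC calls, keyed by method plus serialized params, collapse into one upstream request, and the single response fans back out under each caller's own request id.

```typescript
// Illustrative sketch of semantic coalescing for JSON-RPC tool calls.
type RpcRequest = { id: number; method: string; params: unknown };
type RpcResponse = { id: number; result: unknown };

class Coalescer {
  // Waiting request ids, keyed by method + serialized params.
  private inflight = new Map<string, number[]>();
  // Maps the upstream request id back to its coalescing key.
  private idToKey = new Map<number, string>();

  constructor(
    private sendUpstream: (req: RpcRequest) => void,
    private deliver: (res: RpcResponse) => void,
  ) {}

  request(req: RpcRequest): void {
    const key = `${req.method}:${JSON.stringify(req.params)}`;
    const waiters = this.inflight.get(key);
    if (waiters) {
      waiters.push(req.id); // duplicate in-flight call: join the waiting set
      return;
    }
    this.inflight.set(key, [req.id]);
    this.idToKey.set(req.id, key);
    this.sendUpstream(req); // only the first caller's request goes upstream
  }

  onUpstreamResponse(res: RpcResponse): void {
    const key = this.idToKey.get(res.id);
    if (key === undefined) {
      this.deliver(res); // not a coalesced call; pass through
      return;
    }
    this.idToKey.delete(res.id);
    const waiters = this.inflight.get(key) ?? [];
    this.inflight.delete(key);
    // Fan the single result out under each waiter's own request id.
    for (const id of waiters) this.deliver({ id, result: res.result });
  }
}
```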

Bootstrap

Create local template config files for fast setup:

velocity bootstrap

This writes:

  • velocity.config.json
  • .env.velocity.example

Use --out-dir to target another directory and --force to overwrite.

The control plane defaults to JSON-file persistence (stable; no experimental runtime warnings):

velocity control-plane --store-engine json --state-file .velocity/control-plane-state.json

SQLite is still available when explicitly selected:

velocity control-plane --store-engine sqlite --db-path .velocity/control-plane.db

Distributed mode (shared rate-limit buckets + event propagation):

velocity control-plane --store-engine json --state-file .velocity/control-plane-state.json --valkey-url redis://127.0.0.1:6379 --nats-url nats://127.0.0.1:4222
velocity proxy --target ws://localhost:4000 --rate-limit-control-plane-endpoint http://127.0.0.1:4200 --nats-url nats://127.0.0.1:4222

Hot runtime tuning (no proxy restart):

curl -X PUT http://127.0.0.1:4200/v1/runtime/profile \
  -H "content-type: application/json" \
  -d '{"batchWindowMs":2,"minBatchWindowMs":0,"maxBatchWindowMs":8,"latencyBudgetMs":20,"enableDelta":true}'

velocity proxy --target ws://localhost:4000 --runtime-control-plane-endpoint http://127.0.0.1:4200
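The same hot-tuning call can be issued from Node without curl. The endpoint path and body keys below are taken from the curl example above; the helper function name is ours, not part of the package:

```typescript
// Build the PUT /v1/runtime/profile request shown above with curl.
// Keys mirror the control-plane example; the helper name is illustrative.
type RuntimeProfile = {
  batchWindowMs: number;
  minBatchWindowMs: number;
  maxBatchWindowMs: number;
  latencyBudgetMs: number;
  enableDelta: boolean;
};

function buildRuntimeProfileRequest(baseUrl: string, profile: RuntimeProfile) {
  return {
    url: `${baseUrl}/v1/runtime/profile`,
    init: {
      method: "PUT",
      headers: { "content-type": "application/json" },
      body: JSON.stringify(profile),
    },
  };
}

// const { url, init } = buildRuntimeProfileRequest("http://127.0.0.1:4200", {
//   batchWindowMs: 2, minBatchWindowMs: 0, maxBatchWindowMs: 8,
//   latencyBudgetMs: 20, enableDelta: true,
// });
// await fetch(url, init);
```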

Proxy options

velocity proxy \
  --target ws://localhost:4000 \
  --listener-engine ws \
  --performance-profile balanced \
  --listener-max-payload-bytes 104857600 \
  --upstream-handshake-timeout-ms 10000 \
  --upstream-max-payload-bytes 104857600 \
  --no-upstream-per-message-deflate \
  --heartbeat-interval-ms 25000 \
  --heartbeat-timeout-ms 10000 \
  --target-pool ws://10.0.0.11:4000,ws://10.0.0.12:4000 \
  --target-pool-ewma-alpha 0.2 \
  --host 127.0.0.1 \
  --port 4100 \
  --batch-window-ms 10 \
  --min-batch-window-ms 0 \
  --max-batch-window-ms 20 \
  --latency-budget-ms 40 \
  --zstd \
  --zstd-min-bytes 512 \
  --zstd-min-gain-ratio 0.03 \
  --delta \
  --metrics-port 9464 \
  --rate-limit-control-plane-endpoint http://127.0.0.1:4200

Observability

Default CLI flow (no web UI):

velocity stats
velocity stats --json
velocity stats --verbose
velocity stats --watch

velocity stats includes a tenant breakdown section by default (top tenants by frame volume). It also reports agent loop KPIs (loop turn avg, frames per turn avg, queue delay p95) for end-to-end loop tuning.
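For interpreting the queue delay p95 figure: it is a standard percentile over recent samples. A minimal nearest-rank version (how velocity computes it internally is not documented here):

```typescript
// Nearest-rank p95 over a window of queue-delay samples (illustrative).
function p95(samplesMs: number[]): number {
  if (samplesMs.length === 0) return 0;
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const rank = Math.ceil(0.95 * sorted.length); // nearest-rank, 1-based
  return sorted[rank - 1];
}
```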

Optional service-mode telemetry endpoint:

velocity proxy --target ws://localhost:4000 --metrics-port 9464

Then scrape when needed:

curl http://127.0.0.1:9464/metrics

This exports counters and gauges for frame/byte reduction, latency, loop KPIs, queue overflow events, backpressure events, safety rollbacks, and access-control deny paths.

For OTLP, point to your collector base URL (the proxy posts to /v1/logs):

velocity proxy --target ws://localhost:4000 --otlp-http-endpoint http://localhost:4318 --log-format json

CI gate

npm run bench:ci
npm run bench:certify
npm run package:smoke

npm run bench:ci fails CI when any profile exceeds the configured p95/avg latency regression thresholds or misses the minimum byte reduction. bench-ci is deterministic by default, using seeded jitter and median aggregation across repeated runs. npm run bench:certify additionally compares current results against ci/bench/baseline-report.json for baseline-aware regression checks; its p95 guard applies both percentage and absolute-millisecond slack to reduce flakiness from low-latency measurement noise. npm run package:smoke validates that the packed npm tarball installs and runs velocity --version and velocity doctor in a clean temp project.
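The dual-slack p95 guard amounts to the following check. The threshold values here are illustrative defaults of our own, not velocity's configuration: a regression only fails the gate when it exceeds both the relative and the absolute slack, so sub-millisecond jitter on fast paths cannot flake the check.

```typescript
// Sketch of a baseline-aware p95 guard with percentage + absolute slack.
// Slack values are illustrative, not velocity's defaults.
function p95Regressed(
  currentMs: number,
  baselineMs: number,
  pctSlack = 0.1,  // tolerate 10% relative drift
  absSlackMs = 2,  // and 2 ms absolute drift
): boolean {
  const delta = currentMs - baselineMs;
  // Fail only when the regression clears BOTH slacks.
  return delta > baselineMs * pctSlack && delta > absSlackMs;
}
```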

Advanced

Advanced infra deployment assets are still available:

  • GitHub required checks: .github/workflows/required-gate.yml
  • OPA sample policy bundle: deploy/opa/
  • Docker/Kubernetes manifests: deploy/docker-compose.prod.yml, deploy/k8s/
  • HA add-ons for event/state infrastructure: deploy/k8s/nats.yaml, deploy/k8s/valkey.yaml
  • Control-plane OpenAPI + SDKs: openapi/control-plane.yaml, sdk/
  • Envoy edge config + JWT fronting reference: deploy/envoy/
  • OTel collector templates (service mode only): deploy/otel/

Contract checks

Use the OpenAPI contract as the source of truth and validate SDK/CLI parity:

npm run contract:check

This check verifies operation coverage across:

  • openapi/control-plane.yaml
  • sdk/typescript/src/index.ts
  • sdk/python/velocity_control_sdk/client.py
  • sdk/python/velocity_control_sdk/cli.py

SDK packaging

  • Node control-plane SDK (bundled in CLI package): @velocityai/velocity/control-plane-sdk
  • Python SDK package: sdk/python/pyproject.toml (velocityai-cli)
  • Python CLI entrypoint: velocity-control

Build locally:

(cd sdk/typescript && npm install && npm run build)
(cd sdk/python && python -m pip install build && python -m build)

Release

Tagged releases (v*) trigger .github/workflows/release.yml:

  • verify gates (npm run release:verify)
  • publish CLI to npm
  • publish Python SDK to PyPI
  • publish container image to GHCR

Go-live runbook: docs/go-live-checklist.md. First-team setup: docs/first-15-minutes.md.

Deployment tiers

  • Tier 1 (single node): run velocity proxy + velocity control-plane with JSON store.
  • Tier 2 (HA cluster): Kubernetes manifests in deploy/k8s/ with OPA + OTel + HPA.
  • Tier 3 (enterprise edge): add Envoy fronting and policy/auth integrations using deploy/envoy/ + control-plane APIs.

Latency objective

VELOCITY is designed to reduce end-to-end latency, not just frame count. The adaptive fallback disables batching for a cooldown period when queue delay exceeds a threshold relative to observed round-trip time.
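The fallback described above can be sketched as a tiny state machine. The ratio and cooldown values are illustrative, not velocity's defaults: when queue delay grows past a fraction of the observed RTT, batching is disabled until a cooldown expires.

```typescript
// Illustrative sketch of the adaptive batching fallback.
// Threshold ratio and cooldown are assumptions, not velocity's defaults.
class BatchingFallback {
  private disabledUntil = 0;

  constructor(
    private queueDelayRatio = 0.5, // queue delay > 50% of RTT trips fallback
    private cooldownMs = 5_000,
  ) {}

  observe(queueDelayMs: number, rttMs: number, nowMs: number): void {
    if (queueDelayMs > rttMs * this.queueDelayRatio) {
      // Queueing is hurting round-trip time: pause batching for a cooldown.
      this.disabledUntil = nowMs + this.cooldownMs;
    }
  }

  batchingEnabled(nowMs: number): boolean {
    return nowMs >= this.disabledUntil;
  }
}
```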

Metrics and traces are stored in .velocity/ by default.