
@robertsvendsen/node-works

v0.11.1

A node framework built for simplicity and performance

Node-works

Heavily influenced by Laravel and Express.

This framework handles at least double the concurrency of Express – up to 5 times more connections – due to the underlying WebAssembly HTTP module. Implicitly, this also means more speed.

Note: The performance cost of using dependency injection is huge – up to 2/3 of the time per request on simple requests. On more power-hungry requests, the relative cost may improve.

Important when upgrading to version >= 0.6.0: The Express backend supports a smaller set of functionality.

Important when upgrading to version >= 0.8.0: The Express backend has no support for streaming.

Important when upgrading to version >= 0.9.0: The Express backend has been removed. Use node/uws as the backend framework.

Requirements

  • Node v20 or newer.

Install

Three steps to get started.

  1. Install dependencies: yarn

  2. Download the dependent wasm binaries for your platform: yarn download

  3. Start serving: yarn dev

If yarn download fails:

Find the missing binary and download it manually from: https://github.com/uNetworking/uWebSockets.js/tree/binaries

Example: wget https://github.com/uNetworking/uWebSockets.js/blob/binaries/uws_linux_x64_83.node?raw=true

Features

  • Modular (everything is written as modules)
  • Built-in WAF (DoS, DDoS, CSRF, IP blacklist and more)
  • Super fast, high-performance node framework (aims to be as fast as nginx, or faster)
  • Middlewares (supports Express middlewares)
  • Services (inspired by Laravel)
  • Dependency injection (Awilix)
  • Controllers (domain controllers for you to quickly set up new endpoints)
  • Views (Swig templates)
  • Built-in rate limiter
  • Built-in router (static and regex matching)
  • Built-in session management
  • Supports PHP as a runtime (.php files can be served through php-fpm)
  • Supports uWS as the HTTP server (the Express backend has been removed as of 0.9.0)

Modules

ORM: https://vincit.github.io/objection.js/guide/getting-started.html

Routes

Register your routes in config/routes.js

  • For parameterized routes, use regex with double backslashes (escape every backslash: '\' => '\\')
  • Static routes are always matched before regex routes (except for the catch-all).
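
Because route patterns live inside JavaScript string literals, each backslash has to be escaped once more. A quick illustration (not framework code) of why '\\d+' is the right spelling:

```javascript
// In a JS string literal, '\\d+' produces the two characters \d,
// which a router can then compile into a regular expression.
const pattern = '\\d+';                     // the string \d+
const re = new RegExp('^' + pattern + '$'); // equivalent to /^\d+$/

console.log(re.test('42'));  // true  – digits match
console.log(re.test('abc')); // false – letters don't
```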

Static routes

'GET /test/dbquery': {
  controller: TestController,
  method: 'dbQuery',
},

Parameterized routes

'GET /api/v1/books/:id': {
  controller: BooksController,
  method: 'show',
  parameters: { id: '\\d+' },
},

'GET /:directory/:file': {
  controller: FileController,
  method: 'download',
  parameters: { directory: ['\/*\\w*\/', '*'], file: '\\w+\\.\\w+' },
},

PHP route example

'GET /:phpFile': {
  controller: ExternalRuntimeController,
  method: 'execPhp',
  parameters: { phpFile: ['.*.php'] },
},

Test it with this:

$ curl -X POST -H "Content-Type: application/x-www-form-urlencoded" -d "formKey=valueform" --cookie "USER_TOKEN=Yes" http://localhost:3005/index.php?test=form

Services

Services are singleton classes. They can be dependency-injected into all controller methods by their registered name/key.

  • Register your service in config/services.js
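
As a rough sketch of the singleton behavior (illustrative only – MailService and registerService are made-up names, not the framework's actual API; the real framework registers services in config/services.js and resolves them through Awilix):

```javascript
// Minimal singleton-registry sketch: each service class is
// instantiated at most once, and every injection gets that instance.
class MailService {
  constructor() { this.sent = []; }
  send(to, body) { this.sent.push({ to, body }); }
}

const registry = new Map();
function registerService(name, Cls) {
  // Instantiate only on first registration: services are singletons.
  if (!registry.has(name)) registry.set(name, new Cls());
  return registry.get(name);
}

const a = registerService('mailService', MailService);
const b = registerService('mailService', MailService);
console.log(a === b); // true – same instance every time
```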

Caveats

  • Inside the controllers you don't have a lexical scope – there is no 'this'. To get the controller instance, dependency-inject it with a destructuring parameter: use { controller } (or just controller) in your controller method parameters.
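
A hypothetical sketch of what that looks like (the single-dependencies-object calling convention shown here is an assumption about the resolver, not the framework's documented internals):

```javascript
// Controller methods receive one dependencies object; the controller
// instance must be destructured out of it, since 'this' is not bound
// when the resolver invokes the method.
class TestController {
  dbQuery({ controller, request }) {
    // 'controller' stands in for 'this'
    return `${controller.constructor.name} handled ${request.url}`;
  }
}

// Simulating how a DI resolver might invoke the method:
const instance = new TestController();
const deps = { controller: instance, request: { url: '/test/dbquery' } };
console.log(instance.dbQuery(deps)); // "TestController handled /test/dbquery"
```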

TODO

  • Test PUT, POST, DELETE
  • Test POST Json
  • Test GET multipart form
  • Test POST File
  • Test WAF
  • Test Rate limiter
  • Streaming responses
  • Error responses must match the content type requested in the Accept header.

Performance tweaking

  • Do performance tweaking again, because it's 200k vs 44k when returning the request early, before it reaches the kernel.
  • As far as I know, scoped DI uses a lot of the per-request time.
  • The router probably accounts for some as well.
  • Do profiling

Benchmarks

Managed to reach half the speed of Nginx serving the same content. But it is much faster to use this Node server directly than going through Nginx first. Using this as a gateway for php-fpm is 2x slower than Nginx -> fpm. There are some screenshots of the benchmarks; TODO: add them to this repo.

Update 31.03.2023:

After testing this on my 16 core AMD 7950X I got some interesting results.

node-works (0.5.4, router with regex, physical, auto workers, 32 threads)
Bombarding http://platonpc6:8000 with 10000000 request(s) using 100 connection(s)
10000000 / 10000000 [==================================================================================================================================================================================================] 100.00% 241416/s 41s
Done!
Statistics        Avg      Stdev        Max
Reqs/sec    241909.04   11952.28  272448.09
Latency      411.13us   135.26us    36.87ms
HTTP codes:
1xx - 0, 2xx - 10000000, 3xx - 0, 4xx - 0, 5xx - 0
others - 0
Throughput:   103.11MB/s

The interesting part is that the load distribution between node workers goes from 80% down to 15%, quite evenly spread.
It seems like almost no load for them. That indicates that my test machine, a Ryzen 3 3100, couldn't create enough load with bombardier.
Then again, bombardier does saturate the CPU at around 450%, but looking at the throughput here
– I think I need to do a speed test between the machines.

Nginx (nginx version: nginx/1.22.1)
robert@platonpc5:~/go/bin$ ./bombardier -c 100 -n 10000000 http://platonpc6:80
Bombarding http://platonpc6:80 with 10000000 request(s) using 100 connection(s)
10000000 / 10000000 [==================================================================================================================================================================================================] 100.00% 208203/s 48s
Done!
Statistics        Avg      Stdev        Max
Reqs/sec    208228.69    8590.20  214424.95
Latency      477.54us   172.77us    32.53ms
HTTP codes:
1xx - 0, 2xx - 10000000, 3xx - 0, 4xx - 0, 5xx - 0
others - 0
Throughput:   111.40MB/s


node-works (with express as backend, same settings as the other node-works)
./bombardier -c 100 -n 10000000 http://platonpc6:8000
Bombarding http://platonpc6:8000 with 10000000 request(s) using 100 connection(s)
 10000000 / 10000000 [================================================================================================================================================================================================] 100.00% 122761/s 1m21s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec    123025.80   15415.93  167395.90
  Latency      810.69us     1.00ms   117.40ms
  HTTP codes:
    1xx - 0, 2xx - 10000000, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
  Throughput:    65.82MB/s


So NOW I have done another test, with less data output – actually only a single dot (".").

node-works (uws, auto, 32 threads)
Bombarding http://platonpc6:8000/loadtest with 10000000 request(s) using 100 connection(s)
 10000000 / 10000000 [==================================================================================================================================================================================================] 100.00% 247334/s 40s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec    248071.48   12875.44  267839.63
  Latency      400.93us   123.37us    33.66ms
  HTTP codes:
    1xx - 0, 2xx - 10000000, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
  Throughput:    44.47MB/s

But it still doesn't generate much load on the server. The bottleneck must still be the bombardier client.
I'm going to start two bombardiers (./bombardier -c 100 -n 10000000 http://platonpc6:8000/loadtest) at the same time.
That is weird: I got the same result on both, and approximately the same result in total, split 50/50. I am struggling to find the bottleneck here.
I'm going to test from another computer at the same time. It is about 30% slower than this one, so I will benchmark it first.
Okay, so this new computer (platonpc4) is terribly slow, OR the network in between slows it down. It only generates about 11k more RPS.
It helped anyway, because I ran the test from the Ryzen 3 3100 at the same time as well, and it got the same result.
So that means it's not the server, and not node-works, that is the bottleneck. It actually looks like the network, but between the Ryzen 3 3100 and the server there is only a single TP-Link Gbit switch.

Why 100 connections? Anything above 30 doesn't seem to matter; it caps at the same number beyond that.

Now I tested with both platonpc4 (i5 3570K) and platonpc5 (Ryzen 3):
the i5 generated roughly 95k and the Ryzen approximately 202k, just under 300k in total against nginx.

Going to test again against node-works; nginx has no trouble delivering this, creating almost no load at all.

Okay, so now this is interesting. Testing against nginx from platonpc4 alone gives me 180k RPS at 1000 connections.

Nah, that was wrong. It didn't respond with the real response (the "."), it responded with an error: "Too many open files".
It still peaks at around 200k.

Testing against node-works now:

platonpc4 coming out with around 224k on 500 connections
platonpc5 coming out with around 243k on 500 connections

Both together should give roughly 500k. They don't: it ends at 153k on both while running together.
node-works doesn't saturate, but there is a lot of softirqd activity.


robert@platonpc5:~/go/bin$ ./bombardier -c 500 -n 10000000 http://192.168.2.192:8000/loadtest
Bombarding http://192.168.2.192:8000/loadtest with 10000000 request(s) using 500 connection(s)
 10000000 / 10000000 [=================================================================================================================================================================================================] 100.00% 154207/s 1m4s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec    154403.33   23670.92  266861.49
  Latency        3.23ms     9.44ms      1.48s
  HTTP codes:
    1xx - 0, 2xx - 10000000, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
  Throughput:    28.28MB/s


robert@PlatonPC4 ~/tmp $ ./bombardier-linux-amd64 -c 500 -n 10000000 http://192.168.2.192:8000/loadtest
Bombarding http://192.168.2.192:8000/loadtest with 10000000 request(s) using 500 connection(s)
 10000000 / 10000000 [=================================================================================================================================================================================================] 100.00% 152257/s 1m5s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec    152507.64   22731.04  247298.64
  Latency        3.27ms     9.14ms      1.69s
  HTTP codes:
    1xx - 0, 2xx - 10000000, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
  Throughput:    27.92MB/s

bwm-ng shows 97MB/s; I think it's being network-capped somewhere, and that would mean about 300k RPS.

Looking at the CPU load, it looks like it should be possible to get node-works up to 1 million RPS, if the network, kernel limits etc. play ball!
But currently it's not CPU-limited, at least not on the 7950X.