
Throtty v1.0.3 - Yet another rolling window rate limiter

Throtty is an efficient rate limiter for Node.js. It is useful when you need to rate limit or throttle API clients, or any other task that needs to be rate limited. It can run in standalone mode using in-memory storage, or be backed by a Redis server.

Features

  • Based on rolling windows, with a minimum delay between successive requests.
  • Atomic. When backed by Redis, atomicity is guaranteed with the help of transactions.
  • Concurrency proof. When backed by Redis, Throtty correctly handles multiple requests performed in parallel.
  • Distributed. When backed by Redis, multiple Throtty instances can run on different hosts.

Installation

npm install throtty --save

Usage

const throtty = require('throtty');
const redisClient = require('redis').createClient();

const rateLimiter = throtty({
    interval: 10000, // Required. Rolling window in milliseconds.
    threshold: 3, // Required. How many requests are allowed per rolling window?
    delay: 1000, // Required. Minimum delay between two successive requests, in milliseconds.
    redis: redisClient, // Optional. Redis client. If not provided, in-memory storage is used.
    promisify: true, // Optional. When true, `checkRateAsync` will be the promisified version of `checkRate`.
});

// using a callback
rateLimiter.checkRate('user-1234-some-action', function (err, res) {
    if (err) {
        // handle error
    } else if (res.allowed) {
        // accept request
    } else {
        // reject request
    }
});

// using a promise (when the promisify option is set to true)
rateLimiter.checkRateAsync('user-1234-some-action')
    .then((res) => {
        if (res.allowed) {
            // accepted request

        } else {
            // reject request

        }
    }).catch((err) => {
        // error handling

    });

// using throtty as a rate-limiting middleware (in Koa)
app.use(async (ctx, next) => {
    const res = await rateLimiter.checkRateAsync('user-1234-some-action');
    if (res.allowed) return next();
    throw new TooManyRequestsError();
});

Advanced usage

rateLimiter.checkRate('user-1234-some-action', function (err, res) {
    // error handling
    // ...

    const allowed = res.allowed; // Boolean. True when the request is allowed, false otherwise.
    const {
        wait, // Number. Time to wait in milliseconds before performing a new request.
        thresholdViolation, // Boolean. Whether the current request was rejected because of a threshold violation.
        delayViolation, // Boolean. Whether the current request was rejected because of a delay violation.
        rolls, // Number. How many requests have been performed so far.
        remaining, // Number. How many requests remain in the current window.
    } = res.details;
});
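The `wait` field is handy for telling clients when to retry, e.g. in an HTTP `Retry-After` header. A minimal sketch (the `res` literal below is a hypothetical example shaped like the result documented above, not real limiter output):

```javascript
// Convert a Throtty-style result into an HTTP Retry-After value in seconds.
// `res` mirrors the callback result shape documented above.
function retryAfterSeconds(res) {
    if (res.allowed) return 0; // allowed requests need no waiting
    // Round up so clients never retry too early.
    return Math.ceil(res.details.wait / 1000);
}

// Hypothetical rejected result, for illustration only.
const res = {
    allowed: false,
    details: { wait: 2500, thresholdViolation: true, delayViolation: false, rolls: 3, remaining: 0 },
};

console.log(retryAfterSeconds(res)); // → 3
```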

The algorithm

Suppose we want to rate limit API requests on some busy service, or user requests to specific endpoints. For example, we want to limit requests to our service to 1000 per user (token) per hour.

The rolling/sliding window in our case is one hour.


-------------|--|------|------|-|----|----------|-----|--------|--------  Time
             ^
             request

                           |<--------------------------------->|
                                 Rolling window == 1 hour

const interval = 60 * 60 * 1000000; // One hour in microseconds (rolling window)
const threshold = 1000;

To count the requests a user performed in the last hour, counting back from now, we need to remember the timestamp of each request.

const rolls = {}; // storage of user's timestamps

Each time a user performs a request, we first save its timestamp. Request timestamps are saved per key in an ordered list (array).

const now = microtime.now(); // time in microseconds
const key = 'user-2345-some-action';
rolls[key] = rolls[key] || [];
rolls[key].push(now);

To get the count of user requests in the last hour:

const from = now - interval;
rolls[key] = rolls[key].filter(timestamp => timestamp > from);
const count = rolls[key].length; 

From this point we can check whether the count of the user's requests in the last hour exceeds the maximum allowed requests per hour, and accept or reject the request accordingly.

const thresholdExceeded = count > threshold;

This should give you a basic idea about the algorithm being implemented by this package.
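Putting the steps above together, here is a minimal, self-contained sketch of the in-memory variant. It is illustrative only: it uses milliseconds via Date.now() rather than microtime, checks the threshold before recording the request, and omits the minimum-delay check.

```javascript
// Minimal in-memory sliding-window limiter, assembled from the steps above.
const interval = 60 * 60 * 1000; // rolling window: one hour in milliseconds
const threshold = 1000;          // max requests per window
const rolls = {};                // per-key list of request timestamps

function checkRate(key, now = Date.now()) {
    // Drop timestamps that fell out of the rolling window.
    rolls[key] = (rolls[key] || []).filter((ts) => ts > now - interval);
    const count = rolls[key].length;
    if (count >= threshold) {
        return { allowed: false, details: { rolls: count, remaining: 0 } };
    }
    rolls[key].push(now); // record the accepted request
    return { allowed: true, details: { rolls: count + 1, remaining: threshold - count - 1 } };
}
```

The filter-on-every-call approach is the simplest way to expire old timestamps; the Redis-backed variant achieves the same effect atomically inside a transaction.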

Considerations

Because of limitations of the JavaScript engine, setTimeout and setInterval timers are allowed to lag arbitrarily and are not guaranteed to fire at the exact time. They tend to drift, and delays due to CPU load are expected.

With that in mind, even when the rate limiter asks you to wait a certain amount of time, there is no guarantee that this timing will be honored. Therefore, the details.wait value provided by this package in callbacks should be treated as an estimate, not an exact figure.

Contributing

So you are interested in contributing to this project? Please see CONTRIBUTING.md.

License

MIT