@hitchy/plugin-rate-limiter
Hitchy plugin for limiting request rates
Installation
npm install @hitchy/plugin-rate-limiter

Usage
This plugin provides a service for generating customized policy routing handlers suitable for limiting the request rate of a Hitchy-based service. Thus, you can integrate it with your service as part of its policy routing configuration.
In the configuration file config/policies.js you could use the plugin like this:
module.exports = function() {
    const { RateLimiter } = this.runtime.service;

    return {
        policies: {
            "POST /api": [
                RateLimiter.limitPerSecond( 10 ),
                RateLimiter.limitPerMinute( 300 ),
            ],
        }
    };
};

A consuming configuration file must comply with the common module pattern to gain access to the services provided by this plugin, as demonstrated in the second line of this example.
Currently, rate limiting works per process only. In most cases this is not a problem, since Hitchy handles all requests in a single process due to the nature of JavaScript/Node.js. It becomes an issue when running several instances of your Hitchy-based application, e.g. behind a load balancer. In those cases we advise configuring rate limiting at the load balancer as well.
API
limitPerSecond
RateLimiter.limitPerSecond( count, { queueSize = 0, retryAfter = 10, perClient = false } )

This generator creates a handler accepting up to the given count of requests within a rolling second. Additional requests are rejected with HTTP status code 503, suggesting to retry after the given number of seconds. Each of these parameters must be given as a number.
By defining a positive queue size, a corresponding number of requests is deferred before additional requests are rejected.
If option perClient is true, rate limits are applied per requesting client, causing separate queues to be managed accordingly.
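For illustration, a per-second limit can be combined with a small deferral queue and per-client accounting like this; the route and numbers below are purely hypothetical:

module.exports = function() {
    const { RateLimiter } = this.runtime.service;

    return {
        policies: {
            // accept 5 requests per rolling second and client, defer up to 10
            // further requests, reject beyond that with 503 and Retry-After: 2
            "POST /api/upload": [
                RateLimiter.limitPerSecond( 5, { queueSize: 10, retryAfter: 2, perClient: true } ),
            ],
        }
    };
};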
limitPerMinute
RateLimiter.limitPerMinute( count, { retryAfter = 120, perClient = false } )

This function creates a handler accepting up to the given count of requests within a rolling minute. Additional requests are rejected with HTTP status code 503, suggesting to retry after the given number of seconds. Each of these parameters must be given as a number.
This helper method does not support defining a deferral queue, because using one with a per-minute rate is unusual and counterintuitive to a client expecting requests to be processed within seconds at most. You might use the underlying RateLimiter.limitPerTime() to combine a limit per minute with a deferral queue, as sketched below.
If option perClient is true, rate limits are applied per requesting client.
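As a sketch of that approach, the following policy assumes the timeframe option is given in seconds (see limitPerTime below), so a value of 60 yields a rolling minute; the route and numbers are hypothetical:

module.exports = function() {
    const { RateLimiter } = this.runtime.service;

    return {
        policies: {
            // roughly a per-minute limit of 100 requests, but with a queue
            // deferring up to 20 excess requests before rejecting with 503
            "POST /api/report": [
                RateLimiter.limitPerTime( 100, { timeframe: 60, queueSize: 20, retryAfter: 120 } ),
            ],
        }
    };
};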
limitPerHour
RateLimiter.limitPerHour( count, { retryAfter = 900, perClient = false } )

This function creates a handler accepting up to the given count of requests within a rolling hour. Additional requests are rejected with HTTP status code 503, suggesting to retry after the given number of seconds. Each of these parameters must be given as a number.
This helper method does not support defining a deferral queue, because using one with a per-hour rate is unusual and counterintuitive to a client expecting requests to be processed within seconds at most. You might use the underlying RateLimiter.limitPerTime() to combine a limit per hour with a deferral queue.
If option perClient is true, rate limits are applied per requesting client.
limitPerTime
RateLimiter.limitPerTime( count, {
    timeframe = 1,
    queueSize = 0,
    retryAfter = 10,
    peerIdentifier = undefined,
    perClient = false,
} )

This function is the underlying generator used by the other helper functions listed above. It creates a handler accepting up to the given count of requests within a rolling timeframe whose length is given in seconds. Additional requests are rejected with HTTP status code 503, suggesting to retry after the given number of seconds. Each of these parameters must be given as a number.
By defining a positive queue size, a corresponding number of requests is deferred before rejection.
If option perClient is true, rate limits are applied per requesting client. However, this option is ignored if a custom callback has been provided as peerIdentifier. That callback is invoked with the incoming request's descriptor as its argument and is expected to return a string naming the peer or its class of peers; it may return a promise for that string, too.
Warning! Combine peer identification or per-client rate limiting with queue sizes with great care, as it may cause runtime issues due to high memory consumption.
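As an illustration of a custom peerIdentifier, the following sketch groups requests by an API-key header; the route, header name, and numbers are hypothetical, and it assumes the request descriptor exposes headers in the usual Node.js fashion:

module.exports = function() {
    const { RateLimiter } = this.runtime.service;

    return {
        policies: {
            // group peers by an API key instead of the default per-client
            // handling; requests lacking the header share one "anonymous" bucket
            "GET /api/search": [
                RateLimiter.limitPerTime( 50, {
                    timeframe: 60,
                    retryAfter: 30,
                    peerIdentifier: request => request.headers["x-api-key"] || "anonymous",
                } ),
            ],
        }
    };
};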
