
lite-fifo v0.3.3: Lightweight, optimized, and efficient implementations of the FIFO (queue) data structure

lite-fifo

Lightweight and efficient queue implementations

This package aims to provide zero-dependency implementations of a queue, focusing on:

  • memory footprint (RAM)
  • throughput (ops/sec)

The production code is dependency-free; the only dependencies are for testing.

Installation

npm install lite-fifo

Usage

const { ChunkedQueue } = require('lite-fifo');

const queue = new ChunkedQueue();
queue.enqueue(123);
queue.enqueue(45);
queue.enqueue(67);

console.log(queue.toJSON());
// => [ 123, 45, 67 ]

const temp = queue.dequeue(); // holds 123
console.log(queue.toJSON());
// => [ 45, 67 ]

API

Classes

  • LinkedQueue
  • CyclicQueue (bounded)
  • DynamicCyclicQueue (unbounded)
  • ChunkedQueue
  • DynamicArrayQueue

All of these classes support the following methods:

Methods

enqueue (item)

Add an item to the queue.
Bounded implementations might throw an exception if the capacity is exceeded.

dequeue ()

Return the first inserted (or the "oldest") item in the queue and remove it from the queue.
Calling this on an empty queue throws an exception.

clear ()

Clear the queue.

size ()

Return the current size of the queue.

peekFirst ()

Return the first inserted (or the "oldest") item in the queue, without removing it from the queue.
Calling this on an empty queue throws an exception.

peekLast ()

Return the last inserted (or the "newest") item in the queue, without removing it from the queue.
Calling this on an empty queue throws an exception.

[Symbol.iterator] ()

Iterate over the items in the queue without changing the queue.
Iteration order is the insertion order: the first inserted item is returned first.
In essence, this supports JS iteration of the pattern for (let x of queue) { ... }.
Example:

const queue = new ChunkedQueue();
queue.enqueue(123);
queue.enqueue(45);
for (let item of queue) {
  console.log(item);
}
// ==> output would be:
// 123
// 45
// and the queue would remain unchanged

drainingIterator ()

Iterate over the items in the queue.
Every iterated item is removed from the queue.
Iteration order is the insertion order: first inserted item would be returned first.
Example:

const queue = new ChunkedQueue();
queue.enqueue(123);
queue.enqueue(45);
for (let item of queue.drainingIterator()) {
  console.log(item);
}
console.log(`size = ${queue.size()}`);
// ==> output would be:
// 123
// 45
// size = 0

copyTo (arr, startIndex)

Copy the items of the queue into the given array arr, starting at index startIndex. startIndex is optional and defaults to 0.
The first copied element is the first item inserted to the queue, and so forth.
No return value.

toArray ()

Create an array with the same size as the queue, populate it with the items in the queue, keeping the iteration order, and return it.

toJSON ()

Return a JSON representation (as a string) of the queue.
The queue is represented as an array: first item in the array is the first one inserted to the queue and so forth.

Common implementations and mistakes

Array + push + shift

A very common implementation of a queue looks like this:

class DynamicArrayQueue { /* DON'T USE THIS CODE, IT DOESN'T SCALE */
    constructor() {
        this._arr = [];
    }
    enqueue(item) {
        this._arr.push(item);
    }
    dequeue() {
        return this._arr.shift();
    }
}

The time complexity of the dequeue operation is O(n). At small scale we wouldn't notice, but at high scale, say 300000 items, this implementation achieves only 5 (five!) ops per second. Complexity matters.
The bottom line: this implementation is a mistake.
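To make the contrast concrete, here is a minimal sketch (illustrative only, not part of lite-fifo) of the same API with an explicit head index instead of shift(), which makes dequeue O(1) amortized:

```javascript
// Illustrative sketch: same API as the naive version, but dequeue is
// O(1) amortized. A head index replaces Array.prototype.shift(), and
// the consumed prefix is trimmed occasionally so memory is reclaimed.
class IndexedArrayQueue {
  constructor() {
    this._arr = [];
    this._head = 0; // index of the next item to dequeue
  }
  enqueue(item) {
    this._arr.push(item);
  }
  dequeue() {
    if (this._head >= this._arr.length) throw new Error('queue is empty');
    const item = this._arr[this._head++];
    // Trim the consumed prefix once it dominates the array.
    if (this._head > 1024 && this._head * 2 > this._arr.length) {
      this._arr = this._arr.slice(this._head);
      this._head = 0;
    }
    return item;
  }
  size() {
    return this._arr.length - this._head;
  }
}

const q = new IndexedArrayQueue();
q.enqueue(1); q.enqueue(2); q.enqueue(3);
console.log(q.dequeue(), q.dequeue(), q.size()); // 1 2 1
```
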

Linked List

A linked list implementation for a queue behaves very well in terms of time complexity: O(1).
On the memory side, the provided implementation, LinkedQueue, introduces an optimization: instead of relying on a doubly-linked-list, it relies on a singly-linked-list.
However, even with this optimization, the memory footprint of LinkedQueue is the highest (see the benchmark table below).
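For illustration, a singly-linked-list queue can be sketched as follows (a minimal sketch of the general design, not lite-fifo's actual source):

```javascript
// Sketch of a singly-linked-list queue: O(1) enqueue/dequeue with a
// single next pointer per node (head for dequeue, tail for enqueue).
class SimpleLinkedQueue {
  constructor() {
    this._head = null; // next node to dequeue
    this._tail = null; // last node enqueued
    this._size = 0;
  }
  enqueue(item) {
    const node = { item, next: null };
    if (this._tail) this._tail.next = node;
    else this._head = node;
    this._tail = node;
    this._size++;
  }
  dequeue() {
    if (!this._head) throw new Error('queue is empty');
    const { item } = this._head;
    this._head = this._head.next;
    if (!this._head) this._tail = null;
    this._size--;
    return item;
  }
  size() {
    return this._size;
  }
}

const q = new SimpleLinkedQueue();
q.enqueue('a'); q.enqueue('b');
console.log(q.dequeue(), q.dequeue(), q.size()); // a b 0
```

Note that every item carries an extra wrapper object and pointer, which is where the high RAM footprint in the benchmark table comes from.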

Better implementations

Linked List of Ring Buffers

A ring buffer, or a cyclic queue, is a bounded data structure backed by an array. It's very fast, but bounded.
We can, however, introduce a new data structure, ChunkedQueue: a LinkedQueue whose items are fixed-size cyclic queues (chunks).
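The idea can be sketched as follows (an illustrative toy with a tiny chunk size, not lite-fifo's actual source; a real implementation would use much larger chunks):

```javascript
// Sketch of the ChunkedQueue idea: a linked list of fixed-size chunks.
// Enqueue fills the tail chunk and links a new one when it is full;
// dequeue drains the head chunk and drops it once exhausted, so memory
// is allocated and released in chunk-sized blocks.
const CHUNK = 4; // tiny, for illustration

class SimpleChunkedQueue {
  constructor() {
    this._head = this._tail = { items: [], next: null };
    this._headIdx = 0; // read position inside the head chunk
  }
  enqueue(item) {
    if (this._tail.items.length === CHUNK) {
      this._tail.next = { items: [], next: null };
      this._tail = this._tail.next;
    }
    this._tail.items.push(item);
  }
  dequeue() {
    if (this._headIdx === this._head.items.length && this._head.next) {
      this._head = this._head.next; // drop the exhausted chunk
      this._headIdx = 0;
    }
    if (this._headIdx >= this._head.items.length) throw new Error('queue is empty');
    return this._head.items[this._headIdx++];
  }
}

const q = new SimpleChunkedQueue();
for (let i = 1; i <= 6; i++) q.enqueue(i);
const out = [];
for (let i = 0; i < 6; i++) out.push(q.dequeue());
console.log(out.join(',')); // 1,2,3,4,5,6
```

Dequeue stays O(1), while per-item overhead is far lower than one linked-list node per item.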

DynamicCyclicQueue

Same as a cyclic queue, but it can exceed the initial length of the underlying array.
How? When it's full, the next enqueue operation triggers a re-order of the underlying array and then expands it with push operations. This process is O(1) amortized, so this is a generic queue that can be used in any scenario.
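A minimal sketch of this grow-on-full behavior (illustrative only, not lite-fifo's actual source):

```javascript
// Sketch of a dynamic cyclic (ring) queue: indices wrap around a fixed
// array, and when the array is full, the ring is re-ordered so the
// oldest item sits at index 0, then the array is expanded. Growth is
// O(n) but rare, so enqueue stays O(1) amortized.
class SimpleDynamicRing {
  constructor(capacity = 4) {
    this._arr = new Array(capacity);
    this._head = 0; // index of the oldest item
    this._size = 0;
  }
  enqueue(item) {
    if (this._size === this._arr.length) {
      // Re-order: rotate so the oldest item is at index 0, then expand.
      this._arr = this._arr.slice(this._head).concat(this._arr.slice(0, this._head));
      this._head = 0;
      this._arr.length *= 2; // double the capacity
    }
    this._arr[(this._head + this._size) % this._arr.length] = item;
    this._size++;
  }
  dequeue() {
    if (this._size === 0) throw new Error('queue is empty');
    const item = this._arr[this._head];
    this._arr[this._head] = undefined; // release the reference
    this._head = (this._head + 1) % this._arr.length;
    this._size--;
    return item;
  }
  size() { return this._size; }
}

const q = new SimpleDynamicRing(2);
q.enqueue(1); q.enqueue(2); q.enqueue(3); // grows past the initial capacity
console.log(q.dequeue(), q.dequeue(), q.dequeue()); // 1 2 3
```
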

The Benchmark

Methodology

The scenario being checked is the following:
set P = 100000
enqueue 30P items
dequeue 5P
do 6 times:
  enqueue 1P
  dequeue 5P

Apply this scenario T times (set T=20) for every relevant class (see table below), measure RAM used and ops/sec.
Remove best and worst results (in terms of ops/sec), and take the average (mean) from the rest of the results.

Note: we took a very large value for P; otherwise, complexity-related issues wouldn't show up.
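The scenario can be sketched as a small harness, parameterized by P so it runs at any scale (an illustrative harness, not the repository's actual benchmark code, which also measures RAM):

```javascript
// Runs the benchmark scenario against any queue class that implements
// enqueue/dequeue: enqueue 30P, dequeue 5P, then 6 rounds of
// (enqueue 1P, dequeue 5P). Returns the op count and throughput.
function runScenario(QueueClass, P) {
  const queue = new QueueClass();
  const start = process.hrtime.bigint();
  let ops = 0;
  for (let i = 0; i < 30 * P; i++) { queue.enqueue(i); ops++; }
  for (let i = 0; i < 5 * P; i++) { queue.dequeue(); ops++; }
  for (let round = 0; round < 6; round++) {
    for (let i = 0; i < 1 * P; i++) { queue.enqueue(i); ops++; }
    for (let i = 0; i < 5 * P; i++) { queue.dequeue(); ops++; }
  }
  const seconds = Number(process.hrtime.bigint() - start) / 1e9;
  return { ops, opsPerSec: ops / seconds };
}

// Example with a trivial pointer-based queue and a small P:
class DemoQueue {
  constructor() { this.arr = []; this.head = 0; }
  enqueue(x) { this.arr.push(x); }
  dequeue() { return this.arr[this.head++]; }
}
console.log(runScenario(DemoQueue, 1000).ops); // 71000
```

The full benchmark repeats this T times per class and averages after discarding the best and worst runs.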

Results

| Class Name         | Ops/Sec | RAM used (MB) |
|:-------------------|--------:|--------------:|
| DynamicArrayQueue  |       5 |             8 |
| ChunkedQueue       |   31800 |            28 |
| DynamicCyclicQueue |   27100 |           102 |
| LinkedQueue        |   29800 |           143 |

Analysis

  1. The naive implementation, DynamicArrayQueue, is so slow that it can't be considered an option.
  2. The fastest implementation is ChunkedQueue, which also has the lowest RAM usage.
  3. The common LinkedQueue implementation is not the fastest one, even with O(1) time complexity, and it's the most wasteful in terms of RAM usage.
  4. DynamicCyclicQueue and LinkedQueue have quite similar results: the former has lower RAM usage and the latter performs a bit better.

Suggestions

  • Use the provided ChunkedQueue for a generic solution

License

MIT © Ron Klein