
@deersheep330/function-pipeline

v1.0.5

javascript function pipeline for complex load test


function-pipeline

function-pipeline makes it easy to load test complex, multi-step processes whose individual calls may be unstable.

Use Case

Imagine your boss wants you to load test the following process on a system:

  • (Step 1): The user has to log in to the system.
  • (Step 2): The user uploads a file to the system. e.g. A file containing tons of numbers.
  • (Step 3): The user waits for the system to process the uploaded file. e.g. The system has to parse all the numbers in the file.
  • (Step 4): The user can simultaneously send multiple requests to the system to perform customized operations on the processed data and wait for the results. e.g. Send 3 requests asking the system to calculate stddev, median and avg at the same time.

Your boss wants to know how many users the system can support performing this process before it collapses.

So you're load testing a process that consists of multiple API calls instead of a single API call.
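Before reaching for the library, the four steps above can be sketched with plain Promises. All of the request stubs below are hypothetical stand-ins for real API calls, not part of this package:

```javascript
// A plain-Promise sketch of the four-step process under test.
// Every function here is a stub standing in for a real API call.
const delay = (ms) => new Promise((res) => setTimeout(res, ms));

const login = async () => { await delay(5); return { cookie: 'session-1' }; };
const upload = async (cookie) => { await delay(5); return { fileId: 42 }; };
const waitForProcessing = async (fileId) => { await delay(5); return { ready: true }; };
const stat = (name) => async () => { await delay(5); return { [name]: Math.random() }; };

async function runScenario() {
  const { cookie } = await login();          // Step 1: log in
  const { fileId } = await upload(cookie);   // Step 2: upload a file
  await waitForProcessing(fileId);           // Step 3: wait for processing
  // Step 4: three customized operations fired at the same time
  return Promise.all([stat('stddev')(), stat('median')(), stat('avg')()]);
}

runScenario().then((results) => console.log(results.length)); // logs 3
```

Hand-rolling retries, start-overs and error handling around a flow like this is exactly the boilerplate the pipeline below removes.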

Getting Started

Install this module using npm:

npm i @deersheep330/function-pipeline

Import this module:

const { FunctionPipeline, OnError } = require('@deersheep330/function-pipeline');

Usage

  • Get a pipeline instance
let pipeline = new FunctionPipeline()
  • Define a step by adding functions into the pipeline

A step is defined by calling the add(onError, ...functions) method of the pipeline instance.

You can pass an arbitrary number of functions to the "add" method to define a step that contains multiple functions. These functions are all called at the same time, and the step finishes if and only if every function has either resolved or rejected.

The onError argument can be OnError.RETRY, OnError.START_OVER or OnError.CONTINUE.

For OnError.RETRY, if any function in this step is rejected, the pipeline re-runs this step.

For OnError.START_OVER, if any function in this step is rejected, the pipeline re-runs from the first step.

For OnError.CONTINUE, if any function in this step is rejected, the error is ignored, which means the pipeline proceeds to the next step once all the functions in the step have returned, whether they resolved or rejected.

The following code defines a 3-step pipeline. Each step contains only one function:

  • (Step 1): login
  • (Step 2): download
  • (Step 3): logout

This pipeline runs these 3 steps sequentially.

If Step 1 is rejected, it tries Step 1 again.

If Step 2 is rejected, it starts over from Step 1.

If Step 3 is rejected, it ignores the error and continues.

Calling the perform() method of the pipeline instance starts running the defined steps.

let login = async () => { await request('/login') }
let download = async () => { await request('/download') }
let logout = async () => { await request('/logout') }

pipeline.add(OnError.RETRY, login)
        .add(OnError.START_OVER, download)
        .add(OnError.CONTINUE, logout)
        .perform()
  • Define a step that runs multiple functions in parallel

By passing multiple functions to a single "add" call, they are all invoked at the same time, and the step finishes only when every function has either resolved or rejected. This means these functions run in parallel instead of sequentially.

let task1 = async () => { await doSomething() }
let task2 = async () => { await doSomething() }
let task3 = async () => { await doSomething() }

pipeline.add(OnError.CONTINUE, task1, task2, task3).perform()
  • Function parameters

What if a function depends on another function's result?

e.g. There are two functions, login and download: "login" returns a cookie, and "download" requires a logged-in user, so it needs the cookie returned by "login".

let login = async () => { await request('/login') }
let download = async (cookie) => { await request('/download', cookie) }

FunctionPipeline already takes care of this for you :)

Once a function in the pipeline resolves, its resolved value is stored in a dictionary on the pipeline instance. (So the resolved value is required to be a key-value pair.)

If a step contains a function that has arguments, the argument names are parsed and looked up in the dictionary, and the values found are automatically passed to the function.

So, back to our example: the "login" and "download" functions just need a little modification to follow FunctionPipeline's design:

// "login" needs to return a key-value pair, and the key has to exactly match "download"'s argument name
// the key-value pair is stored in the pipeline's dictionary
let login = async () => { let cookieVal = await request('/login'); return { cookie: cookieVal } }

// after parsing, an argument named "cookie" is found
// the dictionary is automatically searched for a key named "cookie"
// if it's found, its value is passed to this "download" function
let download = async (cookie) => { await request('/download', cookie) }
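The source doesn't document how the argument names are extracted, but a common way to implement this kind of injection in JavaScript is the Function.prototype.toString trick. The sketch below is an assumption about the technique, not this library's actual code:

```javascript
// Hedged sketch of argument-name injection, NOT the library's real internals.
// Extract a function's parameter names from its source text.
function argNames(fn) {
  const src = fn.toString();
  const args = src.slice(src.indexOf('(') + 1, src.indexOf(')'));
  return args.split(',').map((s) => s.trim()).filter(Boolean);
}

const results = {}; // stands in for the pipeline's value dictionary

// Look each parameter name up in the dictionary, call the function with the
// found values, then merge any key-value result back into the dictionary.
async function callWithInjection(fn) {
  const values = argNames(fn).map((name) => results[name]);
  const resolved = await fn(...values);
  if (resolved && typeof resolved === 'object') Object.assign(results, resolved);
  return resolved;
}

const login = async () => ({ cookie: 'abc' });
const download = async (cookie) => `downloaded with ${cookie}`;

callWithInjection(login)
  .then(() => callWithInjection(download))
  .then((out) => console.log(out)); // logs "downloaded with abc"
```

Note that this approach breaks if the code is minified (parameter names get renamed), which is worth keeping in mind when bundling load-test scripts.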
  • Fetching logs

There's an event emitter on the pipeline instance that emits different kinds of events, so you can get the real-time progress and status of the pipeline:

let pipeline = new FunctionPipeline()

// verbose logs: current step, and each function's resolution or rejection
pipeline.emitter.on('log', function(data) {
    console.log(data)
})
// records of test results: the time consumed by each function
pipeline.emitter.on('record', function(data) {
    console.log(data)
})
// error logs only: each function's rejection reason
pipeline.emitter.on('err', function(data) {
    console.log(data)
})

// build the pipeline and run it
await pipeline.add(OnError.RETRY, login)
              .add(OnError.RETRY, upload)
              .perform()

Demo

You can find more detailed examples in this project's test code.