function-pipeline
function-pipeline makes it easy to perform very complex load tests in which individual steps may fail and have to be retried.
Use Case
Imagine your boss wants you to load test the following process on a system:
- (Step 1): The user has to log in to the system.
- (Step 2): The user uploads a file to the system. e.g. A file containing tons of numbers.
- (Step 3): The user waits for the system to process the uploaded file. e.g. The system has to parse all the numbers in the file.
- (Step 4): The user can simultaneously send multiple requests to the system to perform customized operations on the processed data and wait for the results. e.g. Send 3 requests asking the system to calculate stddev, median and avg at the same time.
Your boss wants to know how many users the system can support performing this process before it collapses.
So you're load-testing a process that consists of multiple API calls instead of a single API call.
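Using the add() / perform() API introduced in the Usage section below, this process could be sketched roughly as follows. This is only a sketch: login, uploadFile, waitForProcessing and the three calc functions are hypothetical stand-ins for your own API calls, and the OnError choices are just one plausible configuration.
let pipeline = new FunctionPipeline()
pipeline.add(OnError.RETRY, login)                              // Step 1: retry login on failure
        .add(OnError.START_OVER, uploadFile)                    // Step 2: on failure, start over from login
        .add(OnError.RETRY, waitForProcessing)                  // Step 3: retry until the file is processed
        .add(OnError.CONTINUE, calcStddev, calcMedian, calcAvg) // Step 4: 3 simultaneous requests
        .perform()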
Getting Started
Install this module using npm:
npm i @deersheep330/function-pipeline
Import this module:
const { FunctionPipeline, OnError } = require('@deersheep330/function-pipeline');
Usage
- Get a pipeline instance
let pipeline = new FunctionPipeline()
- Define a step by adding functions to the pipeline
A step is defined by calling the add(onError, ...functions) method of the pipeline instance.
You can pass an arbitrary number of functions to the add method to define a step that contains multiple functions. These functions are all called at the same time, and the step finishes if and only if all of the functions have either resolved or rejected.
The onError argument can be OnError.RETRY, OnError.START_OVER or OnError.CONTINUE.
For OnError.RETRY, if any function in this step is rejected, the pipeline re-runs this step.
For OnError.START_OVER, if any function in this step is rejected, the pipeline re-runs from the first step.
For OnError.CONTINUE, if any function in this step is rejected, the error is ignored. This means the pipeline proceeds to the next step once all the functions in the step have returned, whether they resolved or rejected.
The following code defines a 3-step pipeline in which each step contains only one function:
- (Step 1): login
- (Step 2): download
- (Step 3): logout
This pipeline runs the 3 steps sequentially.
If Step 1 is rejected, the pipeline retries Step 1.
If Step 2 is rejected, the pipeline starts over from Step 1.
If Step 3 is rejected, the pipeline ignores the error and continues.
Calling the perform() method of the pipeline instance starts running the defined steps.
let login = async () => { await request('/login') }
let download = async () => { await request('/download') }
let logout = async () => { await request('/logout') }
pipeline.add(OnError.RETRY, login)
.add(OnError.START_OVER, download)
.add(OnError.CONTINUE, logout)
.perform()
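Note that add returns the pipeline instance, so steps can be chained, and perform() returns a promise, so it can be awaited (see the Fetching logs example below).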
- Define a step that runs multiple functions in parallel
By passing multiple functions to a single add call, the functions are all called at the same time, and the step does not finish until every function has either resolved or rejected. In other words, the functions run in parallel instead of sequentially.
let task1 = async () => { await doSomething() }
let task2 = async () => { await doSomething() }
let task3 = async () => { await doSomething() }
pipeline.add(OnError.CONTINUE, task1, task2, task3).perform()
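This is exactly what Step 4 of the use case above needs: the 3 requests calculating stddev, median and avg can be passed to a single add call so they run at the same time.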
- Function parameters
What if a function depends on another function's result?
e.g. There are two functions: login & download. "login" returns a cookie, and "download" requires a logged-in user, so it needs the cookie returned by "login".
let login = async () => { await request('/login') }
let download = async (cookie) => { await request('/download', cookie) }
FunctionPipeline already takes care of this for you :)
Once a function in the pipeline resolves, its resolved value is stored in a dictionary inside the pipeline instance. (This requires the resolved value to be a key-value pair.)
If a step contains a function that takes arguments, the argument names are parsed and looked up in the dictionary, and the values found are automatically passed into the function.
So back to our example: the "login" and "download" functions just need a little modification to follow FunctionPipeline's design:
// "login" needs to return a key-value pair, and the key has to exactly match "download"'s argument name
// the key-value pair would be stored in a dictionary
let login = async () => { let cookieVal = await request('/login'); resolve({ cookie: cookieVal }) }
// after parsing, an argument named "cookie" is found
// the pipeline automatically looks up the dictionary for a key named "cookie"
// if it's found, the value is passed to this "download" function
let download = async (cookie) => { await request('/download', cookie) }
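Putting the two together, a minimal sketch (reusing the pipeline API shown above):
// "login" resolves { cookie: ... }, so "cookie" lands in the pipeline's dictionary;
// when the "download" step runs, its "cookie" argument is filled in automatically
let pipeline = new FunctionPipeline()
pipeline.add(OnError.RETRY, login)
        .add(OnError.START_OVER, download)
        .perform()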
- Fetching logs
The pipeline instance contains an event emitter which emits different kinds of events, so you can get the real-time progress and status of the pipeline:
let pipeline = new FunctionPipeline()
// verbose logs: the current step, functions resolved or rejected
pipeline.emitter.on('log', function(data) {
console.log(data)
})
// records of test results: the time consumed by each function
pipeline.emitter.on('record', function(data) {
console.log(data)
})
// error logs only: functions' rejection reasons
pipeline.emitter.on('err', function(data) {
console.log(data)
})
// build the pipeline and run it
await pipeline.add(OnError.RETRY, login)
.add(OnError.RETRY, upload)
.perform()
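Since perform() can be awaited, simulating many concurrent users is just a matter of running many pipelines at once. A minimal sketch, assuming buildPipeline is your own helper that assembles the steps shown above:
// hypothetical helper: build one pipeline per simulated user
let buildPipeline = () => {
    let pipeline = new FunctionPipeline()
    pipeline.add(OnError.RETRY, login)
            .add(OnError.RETRY, upload)
    return pipeline
}
// run 100 virtual users concurrently and wait for all of them to finish
await Promise.all(Array.from({ length: 100 }, () => buildPipeline().perform()))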
Demo
You can find a more detailed example in this project's test code.