
@blocklet/benchmark

v0.0.46 · Easy to benchmark your server and analyze the results with AI

中文文档 (Chinese documentation)

A powerful, flexible HTTP API benchmarking tool tailored for Blocklet and general Node.js services. Supports multiple modes (RPS, concurrency), ramp-up testing, AI-powered analysis, and outputs performance charts and logs.

📦 Installation

npm install -g @blocklet/benchmark

Or use it directly via npx:

npx @blocklet/benchmark

🚀 Quick Start

Step 1: Initialize Config File

npx @blocklet/benchmark init --type server

Other available types:

  • discuss-kit
  • tool
  • You can also combine them: --type server,tool

This will generate a benchmark.yml file in your current directory.

Step 2: Run the Benchmark

npx @blocklet/benchmark run

Options:

| Option   | Description                          | Default       |
| -------- | ------------------------------------ | ------------- |
| --config | Path to config file                  | benchmark.yml |
| --format | Output format: row, json, or table   | table         |
| --mode   | Benchmark mode: rps, concurrent, all | all           |

🧩 Configuration

Here's a sample benchmark.yml and explanation of the fields:

origin: https://example.blocklet.dev
concurrency: 100
timelimit: 20
ramp: 20
data:
  loginToken: your-login-token
  teamDid: your-team-did
  userDid: your-user-did
body: '{"example": true}'
logError: true
logResponse: false
aiAnalysis:
  enable: true
  language: en
  techStack: node.js
  model: gpt-4o
apis:
  - name: Get User Info
    api: /api/user/info
    method: GET
    assert:
      id: not-null
  - name: Update Status
    api: /api/status
    method: POST
    body: '{"status": "ok"}'
    assert:
      success: true

Top-Level Fields

| Field       | Description                                                               |
| ----------- | ------------------------------------------------------------------------- |
| origin      | Base URL of the API server                                                |
| concurrency | Number of concurrent users                                                |
| timelimit   | Duration of the test per mode (in seconds)                                |
| ramp        | (Optional) Ramp step used to gradually increase concurrency               |
| data        | Dynamic values injected into API paths or headers                         |
| body        | Default request body                                                      |
| logError    | Print error logs to the console                                           |
| logResponse | Print full API responses                                                  |
| aiAnalysis  | Enable GPT-powered result interpretation (requires OPENAI_CLIENT in .env) |
| sitemap     | (Optional) Load API definitions from a remote sitemap endpoint            |
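As an illustration of how ramp could interact with concurrency, one plausible reading is that the test steps up by the ramp amount until the target concurrency is reached. The `rampSteps` helper below is purely illustrative and not part of the CLI:

```javascript
// Hypothetical sketch: derive the concurrency levels a ramp-up test
// might walk through. `rampSteps` is illustrative, not part of the tool.
function rampSteps(concurrency, ramp) {
  const steps = [];
  for (let c = ramp; c < concurrency; c += ramp) steps.push(c);
  steps.push(concurrency); // always finish at the target concurrency
  return steps;
}

console.log(rampSteps(100, 20)); // [ 20, 40, 60, 80, 100 ]
```

With `concurrency: 100` and `ramp: 20` from the sample config, this would walk through five increasing load levels.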

API List (apis)

Each item defines one endpoint to test:

| Field  | Description                                                           |
| ------ | --------------------------------------------------------------------- |
| name   | Human-readable name of the test case                                  |
| api    | API path (joined with origin)                                         |
| method | HTTP method (GET, POST, etc.)                                         |
| body   | Request body (for POST/PUT)                                           |
| assert | Assertions on the response (supports not-null, null, or fixed values) |
| only   | If true, run only this endpoint                                       |
| skip   | If true, skip this endpoint                                           |
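To make the assert semantics concrete, here is a minimal sketch of how the three documented forms (not-null, null, and a fixed expected value) could be checked against a response body. `checkAssertions` is a hypothetical helper, not the tool's actual implementation:

```javascript
// Illustrative sketch of the three documented assertion forms:
// "not-null", "null", and a fixed expected value (e.g. success: true).
function checkAssertions(response, assertions) {
  return Object.entries(assertions).every(([key, expected]) => {
    const actual = response[key];
    if (expected === 'not-null') return actual !== null && actual !== undefined;
    if (expected === 'null') return actual === null || actual === undefined;
    return actual === expected; // fixed value comparison
  });
}

checkAssertions({ id: 42, success: true }, { id: 'not-null', success: true }); // true
checkAssertions({ id: null }, { id: 'not-null' }); // false
```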

🌐 Using sitemap to Auto-Load API Definitions

To simplify and centralize API configuration, @blocklet/benchmark supports loading APIs dynamically from a remote sitemap. This allows you to avoid manually writing all your API definitions in the benchmark.yml file, and instead retrieve them from a maintained endpoint.

🧩 Sitemap Configuration

You can enable and configure the sitemap in your benchmark.yml like this:

sitemap:
  enable: true
  url: 'https://your-server-url.com/sitemap'

  • enable: Set to true to activate the feature.
  • url: URL of the remote endpoint that returns the sitemap JSON.

📌 If enable is set to false, or the request to the sitemap fails, the tool falls back to the apis defined in your benchmark.yml file.
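The documented fall-through can be sketched as follows. `loadApis` and the injected `fetchSitemap` function are hypothetical, shown only to illustrate the behavior described above, not the tool's internals:

```javascript
// Illustrative sketch of the documented fallback: prefer the remote
// sitemap when enabled and reachable, otherwise use the local `apis`.
async function loadApis(config, fetchSitemap) {
  if (config.sitemap && config.sitemap.enable) {
    try {
      const sitemap = await fetchSitemap(config.sitemap.url);
      if (Array.isArray(sitemap.apis)) return sitemap.apis;
    } catch (err) {
      // Request failed: fall through to the local definitions.
    }
  }
  return config.apis || [];
}
```

Injecting the fetcher makes the fallback path easy to exercise with a stub instead of a live endpoint.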


📝 Expected Sitemap Response Format

The remote endpoint should return a JSON response with the following structure:

{
  "apis": [
    {
      "name": "/api/example",
      "api": "/api/example"
    },
    {
      "name": "/api/full",
      "api": "/api/full",
      "method": "GET",
      "cookie": "login_token=$$loginToken",
      "format": "json",
      "headers": {
        "Content-Type": "application/json; charset=utf-8"
      },
      "skip": false,
      "only": false,
      "body": {},
      "assert": {}
    }
  ],
  "data": {
    "key": "option use some data"
  }
}
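The cookie field above uses a $$loginToken placeholder, which suggests that values from the data section are substituted into request definitions. Here is a minimal sketch of such a substitution; `injectData` is hypothetical, and only the $$key token syntax comes from the example above:

```javascript
// Illustrative sketch: replace $$key tokens with values from `data`,
// as suggested by "login_token=$$loginToken" in the sitemap example.
function injectData(template, data) {
  return template.replace(/\$\$(\w+)/g, (match, key) =>
    key in data ? String(data[key]) : match // leave unknown tokens untouched
  );
}

injectData('login_token=$$loginToken', { loginToken: 'abc123' });
// → 'login_token=abc123'
```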

📊 Output

All results are saved to the benchmark-output folder:

  • benchmark.log: All logs
  • 0-benchmark-raw.yml: Raw result file
  • *.png: Chart images (RPS, latency percentiles)
  • console output: A summary table of all benchmark results

If aiAnalysis is enabled and OPENAI_CLIENT is set in .env, a GPT-powered summary of the test will be provided in the console.

📘 License

MIT License