
autoaiload

v14.7.1

Published

The ultimate, future-ready CLI for smart load testing. Built with zero external dependencies, it offers multi-platform support, accurate real-time results, and powerful terminal styling. Features include configurable test stages and detailed network phase latency breakdowns.

Downloads

9

Readme

🚀 autoaiload - The Definitive Command-Line Interface for Modern Load Testing

Project Banner

🌟 Overview: A New Paradigm in Performance Testing

autoaiload, a flagship tool from the Smart Tell line, represents a revolutionary approach to performance and load testing. Engineered from the ground up with a zero-dependency architecture and leveraging the full power of native Node.js modules, it offers an unparalleled, frictionless, and highly reliable solution for analyzing web servers, APIs, and microservices. This tool goes far beyond simply generating load; it provides a comprehensive ecosystem with real-time analytics, detailed final reports, and intelligent, actionable recommendations to ensure your applications are robust and production-ready. Whether you're a developer needing a quick sanity check or a DevOps engineer building a robust CI/CD pipeline, autoaiload is designed to fit your needs.


✨ Core Principles & Key Features: The autoaiload Advantage

The design of autoaiload is guided by a few core principles that differentiate it from other tools. Our focus is on performance, usability, and providing deep, meaningful insights into your application's behavior under stress.

1. The Dynamic Live Dashboard

The live dashboard is the heart of the autoaiload experience. As soon as a test begins, your terminal transforms into a real-time monitoring station, providing a dynamic visualization of critical metrics:

  • Progress Bar: A clean, customizable progress bar that tracks the overall test duration.
  • ASCII Charts: Real-time graphs for both Requests Per Second (RPS) and Latency, allowing you to instantly observe performance trends and identify anomalies as they happen.
  • Status Code Heatmap: A color-coded heatmap that shows the distribution of HTTP status codes (2xx, 3xx, 4xx, 5xx). This gives you an immediate visual cue for where errors are originating.

This feature is designed to provide immediate feedback, enabling you to intervene or stop a test if a critical issue is discovered early.
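As a rough illustration of the zero-dependency rendering approach, here is a minimal progress-bar sketch using only core Node.js APIs. The function name and formatting are hypothetical, not autoaiload's internal API.

```javascript
// Illustrative sketch of a terminal progress bar like the one on the live
// dashboard, built with no external dependencies.
function renderProgressBar(elapsed, total, width = 30) {
  const ratio = Math.min(elapsed / total, 1);
  const filled = Math.round(ratio * width);
  const bar = '█'.repeat(filled) + '░'.repeat(width - filled);
  return `[${bar}] ${(ratio * 100).toFixed(0)}%`;
}

// Redraw in place once per second with a carriage return, e.g.:
// process.stdout.write('\r' + renderProgressBar(elapsedSeconds, 60));
```

The same overwrite-in-place technique (carriage return plus `process.stdout.write`) is enough to drive simple ASCII charts and heatmaps as well.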

Live Dashboard Screenshot

2. Advanced Reporting and Analysis

The final report is autoaiload's most powerful feature. It is meticulously structured to provide a wealth of information, moving beyond simple averages to give you a complete performance profile.

  • Latency Percentiles: This section is crucial for understanding tail latency. Metrics like p50 (median), p90, and p99 reveal the experience of your slowest users. A high p99 latency often indicates that a small, but significant, portion of your user base is experiencing poor performance.
  • Network Phase Latency: A detailed breakdown of the time spent in each network phase: DNS Lookup, TCP Connect, and TLS Handshake. This is invaluable for diagnosing network-related issues that are external to your application's code.
  • Status Code & Error Breakdown: A detailed table that lists every HTTP status code received, along with its count and percentage rate. This helps you quickly distinguish between client-side (4xx) and server-side (5xx) errors.
  • Intelligent Recommendations: The tool analyzes the test data and provides tailored, actionable advice. For example, if it detects high DNS latency, it might suggest "Investigate your DNS provider or caching strategy."
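To make the percentile metrics concrete, here is a small nearest-rank sketch of how p50/p90/p99 can be derived from raw per-request latencies. This mirrors the metrics in the report, not autoaiload's exact implementation.

```javascript
// Nearest-rank percentile: the smallest sample covering p% of all samples.
function percentile(latenciesMs, p) {
  const sorted = [...latenciesMs].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

const samples = [12, 18, 25, 31, 40, 55, 72, 90, 140, 300];
console.log(percentile(samples, 50)); // median
console.log(percentile(samples, 99)); // tail latency
```

Note how a single slow outlier dominates p99 while barely moving the median, which is exactly why the report surfaces both.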

3. Multi-Stage Load Profiles: Simulating Reality

Real-world traffic is rarely static. It often involves a ramp-up, a peak period, and a cool-down. autoaiload allows you to simulate these complex traffic patterns using a stages configuration. You can define multiple stages, each with its own duration, RPS, and concurrent user count. This functionality enables you to:

  • Ramp-up: Gradually increase load to see how your system behaves as it warms up.
  • Peak Load: Sustain a high load for a period to test stability and resilience.
  • Cool-down: Reduce the load to observe how quickly your system recovers after a peak.
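The stage objects here have the same shape as the JSON passed to the `-s` flag in the usage examples. As a simplified model (not autoaiload's actual scheduler), a runner could expand the stages into a flat per-second RPS schedule:

```javascript
// Sketch only: expand a multi-stage profile into a per-second RPS target.
function buildSchedule(stages) {
  const schedule = [];
  for (const { duration, rps } of stages) {
    for (let s = 0; s < duration; s++) schedule.push(rps);
  }
  return schedule; // schedule[i] = target RPS during second i
}

const stages = [
  { duration: 60, rps: 100 },                  // ramp-up / warm load
  { duration: 120, rps: 500, concurrent: 50 }, // sustained peak
  { duration: 30, rps: 50 },                   // cool-down
];
const schedule = buildSchedule(stages);
```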

4. The Zero-Dependency Advantage

autoaiload is built exclusively on native Node.js modules. This design choice provides several key benefits:

  • Minimal Footprint: The installation size is extremely small.
  • No Conflicts: You'll never face issues with external library dependencies or version mismatches.
  • Portability: The tool is highly portable and has been thoroughly tested on all major platforms, including Windows, macOS, Linux, and even mobile terminals like Termux.

🏃 Usage Guide: From Simple to Advanced

Getting started is easy, but autoaiload offers a powerful command-line interface for advanced automation.

Installation

# Global Installation via npm
# This is the recommended way to use the tool
npm install -g autoaiload

# Using npx for a one-off run without global installation
npx autoaiload

1. Interactive Mode

For beginners or quick ad-hoc tests, simply run the command without any flags. The tool will guide you through the setup with a series of prompts.

autoaiload

2. Command-Line Arguments (For Automation & CI/CD)

For scripted tests, you can provide all configurations directly via flags. This is perfect for integration into CI/CD pipelines.

# A basic load test with a GET request
autoaiload --url https://api.example.com --duration 60 --rps 250 --concurrent 100

# A POST request with a JSON payload
autoaiload -u https://api.example.com/login -m POST --jsonBody './payloads/login.json' -d 120 -c 50 -H "Content-Type: application/json"

# A complex multi-stage test profile
# This JSON array is passed as a string
autoaiload -u https://api.example.com/data -s '[{"duration":60,"rps":100},{"duration":120,"rps":500,"concurrent":50}]'

📊 Sample Final Report

Here is an example of a detailed report generated by autoaiload after a test run.

autoaiload - Final Report

Test run on: 2025-08-09 10:46:00
URL: https://api.example.com/data
Total Duration: 120s
--------------------------------------------------------------------------------

📊 High-Level Summary
- Total Requests: 18000
- Success Rate: 99.8%
- Error Rate: 0.2%
- Average Latency: 45.23ms

--------------------------------------------------------------------------------

📈 Latency Percentiles (p-values)
- p50 (Median): 35.1ms
- p90: 58.7ms
- p95: 72.4ms
- p99: 155.6ms

--------------------------------------------------------------------------------

🌐 Network Phase Latency
- DNS Lookup: 1.2ms
- TCP Connect: 5.8ms
- TLS Handshake: 12.3ms

--------------------------------------------------------------------------------

🔴 Status Code Breakdown
| Status Code | Count | Rate    |
|-------------|-------|---------|
| 200 OK      | 17964 | 99.8%   |
| 500 Error   | 36    | 0.2%    |

--------------------------------------------------------------------------------

💡 Intelligent Recommendations
- High p99 latency (155.6ms) suggests a performance bottleneck affecting a small percentage of requests. Investigate slow database queries or resource contention.
- The presence of 500 errors indicates server-side issues under load. Review server logs during the test period to identify the root cause.

--------------------------------------------------------------------------------

Final Report Screenshot
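For illustration, the status-code table in the sample report can be derived from raw counts like this. This is a sketch of the calculation, not autoaiload's code.

```javascript
// Turn raw status-code counts into the count/rate rows shown in the report.
function statusBreakdown(counts) {
  const total = Object.values(counts).reduce((a, b) => a + b, 0);
  return Object.entries(counts).map(([code, count]) => ({
    code: Number(code),
    count,
    rate: `${((count / total) * 100).toFixed(1)}%`,
  }));
}

const rows = statusBreakdown({ 200: 17964, 500: 36 });
```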


🧑‍💻 The Developer's Roadmap & Contribution

autoaiload is an open-source project from the Smart Tell line, and its future is shaped by the community. We are constantly working on new features and improvements.

Roadmap Highlights

  • Advanced Reporting: Integration with popular visualization tools like Grafana.
  • Protocol Support: Extending support beyond HTTP/HTTPS to include other protocols like gRPC and WebSockets.
  • Scalability: Enhancing the tool's ability to run distributed load tests from multiple machines.

Contribution Guidelines

We highly value and encourage contributions from the community. If you are passionate about performance and want to help, please follow these steps:

  1. Fork the repository on GitHub.
  2. Clone your forked repository.
  3. Create a new branch (git checkout -b feature/your-awesome-feature-name).
  4. Make your changes and ensure tests pass.
  5. Commit your changes with a clear and descriptive message.
  6. Push to your new branch.
  7. Open a Pull Request with a detailed description of the changes you've made.

📜 License

This project is open-source and is licensed under the MIT License.


📧 Contact Information