Automatically manage configuration files.







Getting Started

The package itself requires npm; however, it can be used to manage files for any type of project.

First we need to set up the (hopefully) only manually managed configuration file, called .roboconfig.json.

This file could for example contain:

{
  "@blackflux/robo-config-plugin": {
    "tasks": [],
    "variables": {}
  }
}

where @blackflux/robo-config-plugin is a specific robo-config plugin, tasks contains the tasks to run from that plugin, variables contains the variables those tasks require, and exclude (optional) lists files that should not be touched by any task.
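As a sketch, a populated configuration might look like the following (the task name and variable are hypothetical; consult your plugin's documentation for the real ones):

```json
{
  "@blackflux/robo-config-plugin": {
    "tasks": [
      "editor/setup-editorconfig"
    ],
    "variables": {
      "projectName": "my-project"
    },
    "exclude": [
      "README.md"
    ]
  }
}
```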

To sync the configuration into the project we have two options:

Option A: Sync through test (preferred)

First install robo-config and any plugins referenced in the configuration file, e.g.

$ npm install --save-dev robo-config @blackflux/robo-config-plugin

Then create a test similar to

const expect = require('chai').expect;
const robo = require('robo-config');

describe('Running Robo Config', () => {
  it('Applying Configuration', () => {
    // applies the configured tasks; an empty result means the project
    // is in sync with the configuration
    expect(robo()).to.deep.equal([]);
  });
});

Option B: Sync through CLI

// TODO: still needs to be implemented

But why...?

Why does this package even exist? Let's face it: without npm and micro-services this repo would probably not exist. npm has encouraged us developers to create a new repo and package for every re-usable code snippet. This is great from the re-usability perspective; however, it means that a single developer might actively maintain many repos.

Most maintenance tasks (automated repository configuration, automated tests, automated dependency updates, automated versioning or releases) can be done by simply adding a configuration file to the repo and activating the corresponding service. That's great, but what happens when:

  • A nasty bug is discovered in one of the config files?
  • A provider changes their configuration file (format)?
  • A major language version was released and tests should also be run against it?
  • A cool new service popped up and one should really use it?

How does one ensure changes propagate to all relevant repos? If you've never had to manually batch-update a few dozen repos with the same change, you're lucky; I can tell you it's not fun. Either you do them all at the same time (let's hope it was the right change) or you inadvertently forget to apply the change to some repos. That's where this package comes in!

Simply pick the plugin(s) and task(s) that are most appropriate for your repo, or create your own. Changes propagate to your repos as dependencies are updated, giving you full control over when they are applied.

Writing your own Plugin

Writing your own robo-config plugin is easy and gives you the most control. However, it is recommended to use popular plugins for basic configuration management and only write your own plugin for the cases they don't cover. A full example can be found here.

A plugin is an npm package that exposes an object containing the following keys:

  • name: Fully qualified npm package name of the plugin
  • taskDir: Absolute path to the plugin tasks
  • reqDir: Absolute path to the plugin dependency definitions
  • varDir: Absolute path to the plugin variable definitions
  • targetDir: Absolute path to the plugin target definitions
  • docDir: Absolute path to the automatically maintained internal plugin documentation

The folder structures are as follows:


taskDir

This directory is the core of every robo-config plugin.

At the top level it contains only sub-directories, which we call "task directories" since they are used to group tasks. For example, a task directory editor might indicate tasks related to the editor used in the project that uses robo-config.

Each task directory then contains task files and a snippets folder.

The snippets folder contains raw configuration files or parts thereof which are applied using tasks and merge strategies. Snippet files can contain variables which need to be provided when a task references the snippet.

There are three types of task files:

  • @containerTaskName.json: Public container task files. They do not specify any action themselves, but reference other tasks.
  • #containerTaskName.json: Private container task files. Same as public container task files, but only usable within the plugin.
  • actionableTaskName.json: Actionable task files, which contain a single task definition, referencing snippets.

Container Tasks

Container task names always start with an @ or # symbol. Only container tasks starting with an @ are usable from outside your plugin.

A container task definition file contains the following keys:

  • tasks: Array of task names. These can be relative, as task, or reference another task directory, as taskDirectory/task
  • description: High level description of what this container task does.
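A container task definition might look like this sketch (the task names are hypothetical):

```json
{
  "tasks": [
    "setup-editorconfig",
    "git/setup-gitignore"
  ],
  "description": "Set up baseline editor and git configuration."
}
```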

Actionable Tasks

Actionable task names must not start with an @ or # symbol. They can only be used by container tasks.

Actionable task definition files contain the following keys:

  • target: The relative file path to the target file in the project that robo-config is used in.
  • format (optional): Indicates the format of the target file (e.g. the file extension might be dat, but the content xml). Automatically deduced by default. See smart-fs for supported formats.
  • resolve: Whether or not to resolve the snippet.
  • strategy: One of the available merge strategies. These are detailed below.
  • create (optional): When set to false, no action is taken if the file does not already exist.
  • pretty (optional): By default files are written in pretty-mode. Can be set to false to deactivate. Note that files are only written when the logical content changes.
  • snippets: Array of snippets. A snippet is either the name of the snippet file (if no variables are present) or an object containing a variables object and the snippet file name as name.
  • requires: Array of dependencies that this task has. For example, when managing the .gitignore file this should contain git.
  • purpose: Description of what the task accomplishes, provided as an Array. Each entry corresponds to a new line in markdown.
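Putting these keys together, an actionable task definition might look like this sketch (the snippet names, variable, and strategy name are hypothetical; see your plugin and the merge-strategy documentation for real values):

```json
{
  "target": ".gitignore",
  "strategy": "merge-lines",
  "resolve": false,
  "snippets": [
    "default-ignores",
    { "name": "node-ignores", "variables": { "buildDir": "dist" } }
  ],
  "requires": ["git"],
  "purpose": ["Maintain a sensible default .gitignore."]
}
```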

Templating: Snippet files can contain mustache templates. This is indicated by the file name ending in .mustache. Parsing and variable substitution of mustache templates happens before other parsing and variable resolution.

Local and Global Variables

Variables are specified as ${variableName}.

They can be placed as local variables anywhere in the snippet file (e.g. in the key of an object).

Local variables must be defined in every task that is using the snippet. Variable values can be strings or any other json structure.

The definitions for local variables can contain variables themselves, which are global variables. These are required to be filled in by the maintainer of the project using robo-config and need to be documented.

Variables can also be used in the target of an actionable task. These are also global variables.
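As an illustration, a snippet file package-scripts.json (file name and variables made up) might contain:

```json
{
  "name": "${projectName}",
  "scripts": {
    "${scriptName}": "mocha"
  }
}
```

Each task referencing this snippet must then provide values for projectName and scriptName, either directly or in terms of global variables. Note that a variable can appear in an object key as well as in a value.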


reqDir

Contains a definition file $$REQ$$.json for every global dependency $$REQ$$. Each file contains the following entries:

  • description: Short description of this dependency.
  • details: Array containing detailed description of this dependency and how it's used. Each line corresponds to a new line in markdown.
  • website: Related website for this dependency.


varDir

Contains a definition file $$VAR$$.json for every global variable $$VAR$$. Each file contains the following entries:

  • description: Short description of what is expected for this variable.
  • details: Array containing longer description of what is expected and high level "why". Each line corresponds to a new line in markdown.
  • type: The expected variable type.
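As a sketch, a definition file projectName.json (the variable name is hypothetical) could contain:

```json
{
  "description": "Name of the project.",
  "details": [
    "Used to populate package metadata and generated documentation."
  ],
  "type": "string"
}
```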


targetDir

Contains a definition file $$TARGET$$.json for every target $$TARGET$$. Each file contains the following entries:

  • description: Short description of what this target is used for in the project.
  • details: Array containing longer description of what target is used for. Each line corresponds to a new line in markdown.
  • format: Array of possible file formats.


docDir

The folder structure is automatically managed and updated by the plugin tests. You should never need to touch it.

It is very useful when previewing the configuration your plugin will generate.

To ensure it stays synchronized you should set up a test. See below for details.

Merge Strategies

There are several merge strategies available, and more will be added over time. For documentation see here.


Tests

To ensure your plugin is in a valid state, you should set up tests like so:

const path = require('path');
const expect = require('chai').expect;
const { load } = require('robo-config');
const plugin = require('./path/to/plugin');

describe('Testing Plugin', () => {
  it('Documenting Plugin Tasks', () => {
    // regenerates the docDir documentation; an empty result indicates
    // no pending changes (the exact call may vary between versions)
    expect(load(plugin).syncDocs()).to.deep.equal([]);
  });

  it('Testing Plugin Tasks', () => {
    expect(load(plugin).test(path.join(__dirname, 'path', 'to', 'mock', 'projects'))).to.deep.equal({
      'task-dir/task-name': []
    });
  });
});

where projects will contain a customizable "project-like folder" for each task.


Variable Escaping

Variables used in your snippets can be escaped as $\{escapedVar}. This is converted into ${escapedVar} before the snippet is applied. Handy when configuration files need to contain variables of the same format.

File Guessing

In almost all cases you don't need to, and should not, specify the file extension of a task/file you're using; it will be picked up automatically.

Using Tasks Multiple Times

Tasks can be used multiple times with different variables by defining them as an object, containing the keys name and variables.
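For example, in .roboconfig.json the same task could be listed twice with different variables (the task name and variables are hypothetical):

```json
{
  "@blackflux/robo-config-plugin": {
    "tasks": [
      { "name": "docs/add-badge", "variables": { "label": "build" } },
      { "name": "docs/add-badge", "variables": { "label": "coverage" } }
    ],
    "variables": {}
  }
}
```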


Modifiers

Modifiers can be used on variables as ${var|MOD}. Currently the following modifiers MOD are available:

  • UPPER: upper case the variable content
  • TITLE: title case the variable content
  • LOWER: lower case the variable content
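For instance, assuming a variable projectName with the value my project (illustrative):

```
${projectName|UPPER}  ->  MY PROJECT
${projectName|TITLE}  ->  My Project
${projectName|LOWER}  ->  my project
```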