
metamod

v0.3.1

metamod

Composable and cacheable processing units

npm i metamod

Why?

Metamod is a library to create performant and debuggable processing pipelines.

The goal is to divide (and conquer) complex tasks into the smallest processing units possible, called "mods".

Each mod is basically a simple (potentially async) function that takes requests and tokens (more on that later) and returns new requests and tokens. The requests are like parameters triggering the use of one or many mods for a specific "request type" (for example: "compile a code from TypeScript to JavaScript").

Tokens are stored values that can be reused by any mod later, without the need to add data to the requests. This is especially useful for configuration values. Mods explicitly declare which tokens they need (with readTokens) and which tokens they update.

Mods can be pure (no side effects) or impure (with side effects). Pure mods are cached: they are executed only once for a given combination of request id, request data, token values and package versions, even across pipeline runs! Impure mods are not cached and are executed every time they are called; typically, mods that read or write files on the filesystem are impure. Mod authors specify whether a mod is pure or impure with the sideEffects flag.
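
The caching of pure mods can be pictured as memoization over a composite key. Here is a minimal, self-contained sketch of that idea (an illustration of the described behavior, not Metamod's actual implementation; the real process function is async and the real cache persists between runs):

```javascript
// Sketch: a pure-mod cache keyed by mod name, request id, request data,
// and the values of the tokens the mod declares via readTokens.
const cache = new Map()

function runPureMod(mod, request, tokens) {
  const readValues = (mod.readTokens ?? []).map(name => tokens.get(name))
  const key = JSON.stringify([mod.name, request.id, request.data, readValues])
  if (cache.has(key)) return cache.get(key) // cache hit: skip re-execution
  const result = mod.process({ requests: [request], tokens })
  cache.set(key, result)
  return result
}

let calls = 0
const upperMod = {
  name: 'upper',
  readTokens: [],
  process: ({ requests }) => {
    calls++
    return requests[0].data.content.toUpperCase()
  },
}

const tokens = new Map()
runPureMod(upperMod, { id: 'a.txt', data: { content: 'hi' } }, tokens)
runPureMod(upperMod, { id: 'a.txt', data: { content: 'hi' } }, tokens)
console.log(calls) // the mod ran only once
```

Changing the request data or any declared token value produces a different key, so the mod runs again, which is why mods must declare the tokens they read.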

Mods are called automatically by Metamod in sequence for a given request id. All mods registered to a pipeline, either directly or via plugins, will be automatically called for requests of the corresponding type.

If multiple requests have the same id, they are grouped together and the mods are called only once for the group. This is useful for example to merge multiple content together.
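
The grouping described above can be sketched as follows (assumed semantics, not the library's code): requests sharing an id are merged into a single mod invocation that receives all of them.

```javascript
// Sketch: group pending requests by id so a mod is invoked once per
// group, receiving every request with that id at the same time.
function groupById(requests) {
  const groups = new Map()
  for (const request of requests) {
    if (!groups.has(request.id)) groups.set(request.id, [])
    groups.get(request.id).push(request)
  }
  return [...groups.values()]
}

const pending = [
  { type: 'merge-content', id: 'index.html', data: { part: '<header>' } },
  { type: 'merge-content', id: 'index.html', data: { part: '<footer>' } },
  { type: 'merge-content', id: 'about.html', data: { part: '<main>' } },
]

const groups = groupById(pending)
console.log(groups.length) // 2 groups: index.html and about.html

// A merging mod would then see the whole group at once:
const merged = groups[0].map(r => r.data.part).join('')
console.log(merged) // '<header><footer>'
```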

Requests are processed in parallel and continuously as new requests are created by mods. (Note that multithreading is not supported yet, but it's planned. Until then, all processing is bound by the JavaScript thread.) Mods can however mark certain requests with the waitForNextStep flag so they are queued until all current (and potentially future) requests are processed. Then the pipeline moves onto a new "step" if there are pending requests with the waitForNextStep flag. This can be useful for example to wait for all source files to be processed before bundling them together.
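
One way to picture the waitForNextStep flag (a simplified sketch of the described behavior, not the actual scheduler, which also handles requests created while a step runs):

```javascript
// Sketch: requests flagged with waitForNextStep are deferred to a later
// step, which only begins once the current step's requests have drained.
function planSteps(requests) {
  const steps = []
  let current = requests
  while (current.length > 0) {
    const now = current.filter(r => !r.waitForNextStep)
    const deferred = current
      .filter(r => r.waitForNextStep)
      .map(r => ({ ...r, waitForNextStep: false }))
    steps.push(now)
    current = deferred
  }
  return steps
}

const steps = planSteps([
  { type: 'read-file', id: 'a.txt' },
  { type: 'read-file', id: 'b.txt' },
  { type: 'bundle', id: 'app', waitForNextStep: true },
])
console.log(steps.length) // 2 steps: the reads first, then the bundle
```

This matches the bundling example above: the bundle request waits in step two until every source file has been read in step one.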

Getting started

Note: currently Metamod is very bare-bones and low-level. Higher-level utility features are planned to make it easier to use for certain use cases, such as processing files.

Anatomy of a mod

A mod is described by an object with a name, a requestType and other properties, plus a process function, which is where the actual work happens.

Example of an impure mod (it reads a file, so it relies on Node's fs/promises and path modules):

{
  name: 'demo-read-file',
  requestType: 'read-file',
  sideEffects: true,
  readTokens: ['srcDir'],
  process: async ({ requests, tokens }) => {
    const { id } = requests[0]
    const content = await fs.readFile(path.resolve(tokens.get('srcDir'), id), 'utf8')
    return {
      requests: [
        {
          type: 'process-file',
          id,
          data: {
            content,
          },
        },
      ],
    }
  },
}

Example of a pure (thus cacheable) mod:

{
  name: 'demo-process-file',
  requestType: 'process-file',
  sideEffects: false,
  process: async ({ requests }) => {
    const { id, data } = requests[0]
    const content = data.content.toUpperCase()
    return {
      requests: [
        {
          type: 'write-file',
          id,
          data: {
            content,
          },
        },
      ],
    }
  },
}

You can use the createRequest(type, id, data?) helper to create requests:

import { createRequest } from 'metamod'

const request = createRequest('read-file', 'hello.js', { content: 'console.log(`hello`)' })

The process function receives requests and tokens as input, and you can return an object with new requests and new tokens as output:

return {
  requests: [
    {
      type: 'read-file',
      id: 'foo.txt',
    }
  ],
  // Can also be an array of tokens { id: string, value: any }[]
  tokens: {
    srcDir: 'src',
    outDir: 'dist',
  },
}

Defining a pipeline

A pipeline is a list of mods (or plugins that add mods). It also has a special init function, called with an optional configuration object, that lets the pipeline bootstrap the initial requests and tokens. This init function is always called at the beginning of a pipeline run.

import { definePipeline } from 'metamod'

const pipeline = definePipeline({
  name: 'demo',
  version: '0.0.0',
  init: (config) => {
    return {
      requests: config.entryFiles.map(file => ({
        type: 'read-file',
        id: file,
      })),
      tokens: {
        srcDir: config.srcDir,
        outDir: config.outDir,
      },
    }
  },
  mods: [
    // Mods go here
  ],
  plugins: [
    // Plugins go here
  ],
})

Running a pipeline

Use the runPipeline function to run a pipeline. It takes a pipeline and a configuration object, and returns a promise that resolves when the pipeline is done.

import { runPipeline } from 'metamod'

await runPipeline(pipeline, {
  srcDir: 'input',
  outDir: 'output',
  entryFiles: ['foo.txt'],
})

Debugging

Note: debugging features are still a work in progress. For example, a fully interactive UI is planned.

You can enable logging with the DEBUG environment variable:

DEBUG=metamod* node my-pipeline.mjs

Reusing mods

You can create generic mods by passing the request parameters (or even the entire requests) as request data:

import fs from 'node:fs'
import { createRequest, defineMod } from 'metamod'

export const ReadFileMod = defineMod<{
  requestType: string
}>({
  name: 'read-file',
  requestType: 'read-file',
  sideEffects: true,
  async process ({ requests }) {
    const { id, data } = requests[0]
    const { requestType } = data
    if (fs.existsSync(id)) {
      const content = await fs.promises.readFile(id, 'utf8')
      return {
        requests: [
          createRequest(requestType, id, { content }),
        ],
      }
    }
  },
})

You can even create a helper function to create requests tailored to this mod:

export function requestReadFile (id: string, nextRequestType: string) {
  return createRequest('read-file', id, {
    requestType: nextRequestType,
  })
}
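
For illustration, assuming createRequest simply returns a plain { type, id, data } object, the helper produces a request like this (a local stand-in for createRequest is shown so the sketch is self-contained):

```javascript
// Stand-in for metamod's createRequest, assumed here to return a plain
// { type, id, data } object; shown only to make the example runnable.
const createRequest = (type, id, data) => ({ type, id, data })

function requestReadFile (id, nextRequestType) {
  return createRequest('read-file', id, {
    requestType: nextRequestType,
  })
}

const request = requestReadFile('foo.txt', 'process-file')
console.log(request)
// { type: 'read-file', id: 'foo.txt', data: { requestType: 'process-file' } }
```

The ReadFileMod above then reads data.requestType from the request and forwards the file content to that next request type.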