
@tylt/cli v0.0.4

Tylt CLI — command-line interface for containerized pipeline execution

@tylt/cli

Command-line interface for the Tylt containerized pipeline engine.

Installation

npx @tylt/cli run pipeline.yaml

Or install globally:

npm install -g @tylt/cli
tylt run pipeline.yaml

Usage

The run command accepts a pipeline file path, a directory, or nothing (it defaults to the current directory). When given a directory, tylt looks for pipeline.yml, pipeline.yaml, or pipeline.json, in that order.

tylt run                           # auto-detect pipeline file in cwd
tylt run examples/geodata/        # run from a directory
tylt run pipeline.yaml            # run a specific file
tylt run --json                   # JSON mode (for CI/CD)
tylt run --workdir /tmp/builds    # custom workdir

Interactive step execution

Execute individual steps without a full pipeline file:

tylt exec my-workspace -f step.yaml --step greet
tylt cat my-workspace greet greeting.txt
tylt exec my-workspace -f step.yaml --step greet --ephemeral
tylt exec my-workspace -f process.yaml --step process --input greet
tylt exec my-workspace -f process.yaml --step process --input data=greet
tylt rm-step my-workspace greet

Inspecting runs

Each step execution produces a run with artifacts, logs (stdout/stderr), and metadata:

tylt show my-pipeline
tylt logs my-pipeline download
tylt logs my-pipeline download --stream stderr
tylt inspect my-pipeline download
tylt inspect my-pipeline download --json
tylt export my-pipeline download ./output-dir

Managing workspaces

tylt list
tylt ls --json
tylt prune my-pipeline
tylt rm my-build other-build
tylt clean

Commands

| Command | Description |
|---------|-------------|
| run [pipeline] | Execute a pipeline (file, directory, or cwd) |
| attach <workspace> | Attach to a running pipeline in a workspace |
| exec <workspace> -f <step-file> | Execute a single step in a workspace |
| cat <workspace> <step> [path] | Read or list artifact content from a step's latest run |
| show <workspace> | Show steps and runs in a workspace |
| logs <workspace> <step> | Show stdout/stderr from the last run |
| inspect <workspace> <step> | Show run metadata (meta.json) |
| export <workspace> <step> <dest> | Extract artifacts to the host filesystem |
| prune <workspace> | Remove old runs not referenced by current state |
| list (alias ls) | List workspaces (with disk sizes) |
| rm <workspace...> | Remove one or more workspaces |
| rm-step <workspace> <step> | Remove a step's run and state entry |
| clean | Remove all workspaces |

Global Options

| Option | Description |
|--------|-------------|
| --workdir <path> | Workspaces root directory (default: ./workdir) |
| --json | Structured JSON logs instead of the interactive UI |

Run Options

| Option | Alias | Description |
|--------|-------|-------------|
| --workspace <name> | -w | Workspace name for caching |
| --force [steps] | -f | Skip cache for all steps, or for a comma-separated list |
| --dry-run | | Validate the pipeline, compute fingerprints, and show what would run, without executing |
| --target <steps> | -t | Execute only these steps and their dependencies (comma-separated) |
| --concurrency <n> | -c | Max parallel step executions (default: CPU count) |
| --env-file <path> | | Load environment variables from a dotenv file for all steps |
| --verbose | | Stream container logs in real time (interactive mode) |
| --detach | -d | Run the pipeline in the background (daemon mode) |
| --attach | | Force in-process execution (overrides detach config) |

Exec Options

| Option | Alias | Description |
|--------|-------|-------------|
| --file <path> | -f | Step definition file (YAML or JSON; required) |
| --step <id> | | Step ID (overrides the file's id/name) |
| --input <specs...> | | Input steps (e.g. extract or data=extract) |
| --ephemeral | | Stream stdout to the terminal and discard the run |
| --force | | Skip the cache check |
| --verbose | | Stream container logs in real time |

Pipeline Format

Pipeline files can be written in YAML or JSON. Steps can be raw (explicit image/cmd) or kit-based (using uses).

Raw Steps

name: my-pipeline
steps:
  - id: download
    image: alpine:3.19
    cmd: [sh, -c, "echo hello > /output/hello.txt"]
  - id: process
    image: alpine:3.19
    cmd: [cat, /input/download/hello.txt]
    inputs: [{ step: download }]

Kit Steps

steps:
  - id: transform
    uses: node
    with: { script: transform.js, src: src/app }
  - id: analyze
    uses: python
    with: { script: analyze.py, src: scripts }
  - id: extract
    uses: shell
    with: { packages: [unzip], run: "unzip /input/transform/archive.zip -d /output/" }
    inputs: [{ step: transform }]

Custom Kits

Beyond the built-in kits (shell, node, python), you can write your own as JS modules.

A kit exports a default function that receives the step's with parameters and returns {image, cmd} (plus optional setup, env, caches, mounts, sources):

// kits/rust.js
export default function (params) {
  return {
    image: `rust:${params.version ?? '1'}`,
    cmd: ['cargo', 'run'],
    sources: [{host: params.src ?? '.', container: '/app'}]
  }
}
steps:
  - id: build
    uses: rust
    with: { version: '1.77', src: ./project/ }

Kit resolution order:

  1. .tylt.yml aliases — mapped name → file path or npm specifier
  2. kits/<name>/index.js — local directory
  3. kits/<name>.js — local file
  4. Built-in kits — shell, node, python
  5. npm module — for scoped packages (@org/kit-name)

.tylt.yml

Place a .tylt.yml file at the project root to declare kit aliases:

kits:
  geo: ./kits/geo.js
  ml: "@myorg/tylt-kit-ml"   # npm specifiers must be quoted: @ cannot start a plain YAML scalar

This lets you reference external kits (local files or npm packages) by short names in uses.
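A step can then reference the alias by its short name in uses; here the ml alias from above is used (the kit's with parameters are illustrative):

```yaml
steps:
  - id: train
    uses: ml                      # resolves via .tylt.yml to @myorg/tylt-kit-ml
    with: { script: train.py }    # hypothetical kit parameter
```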

You can also set a default execution mode:

detach: true   # tylt run launches a daemon and returns immediately

When detach: true, tylt run starts the pipeline in a background daemon. Use --attach to override and run in-process.

Pipeline and Step Identity

  • id — Machine identifier (alphanum, dash, underscore). Used for caching, state, artifacts.
  • name — Human-readable label. Used for display.
  • At least one must be defined. If id is missing, it is derived from name via slugification.
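For instance, a pipeline that defines only a name gets a derived id (assuming a straightforward lowercase-and-dash slugification):

```yaml
name: My Data Pipeline    # no explicit id
# derived id: my-data-pipeline (used for the workspace, caching, and artifacts)
steps:
  - id: hello
    image: alpine:3.19
    cmd: [echo, hello]
```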

Step Options

| Field | Type | Description |
|-------|------|-------------|
| id | string | Step identifier (at least one of id/name required) |
| name | string | Human-readable display name |
| image | string | Docker image (required for raw steps) |
| cmd | string[] | Command to execute (required for raw steps) |
| setup | SetupSpec | Optional setup phase |
| uses | string | Kit name (required for kit steps) |
| with | object | Kit parameters |
| inputs | InputSpec[] | Previous steps to mount as read-only |
| env | Record | Environment variables |
| envFile | string | Path to a dotenv file (relative to the pipeline file) |
| outputPath | string | Output mount point (default: /output) |
| mounts | MountSpec[] | Host directories to bind-mount (read-only) |
| sources | MountSpec[] | Host directories copied into the container's writable layer |
| caches | CacheSpec[] | Persistent caches to mount |
| if | string | JEXL condition expression; the step is skipped when it evaluates to false |
| timeoutSec | number | Execution timeout |
| retries | number | Number of retry attempts on transient failure |
| retryDelayMs | number | Delay between retries (default: 5000) |
| allowFailure | boolean | Continue the pipeline if the step fails |
| allowNetwork | boolean | Enable network access |
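Combining several of these fields, a network-enabled step with a timeout and retry policy might look like this (the values and URL are illustrative):

```yaml
steps:
  - id: fetch-metrics
    image: alpine:3.19
    cmd: [sh, -c, "wget -qO /output/metrics.json https://example.com/metrics"]
    allowNetwork: true    # steps have no network access unless enabled
    timeoutSec: 120       # fail the step after 2 minutes
    retries: 3            # retry transient failures
    retryDelayMs: 10000   # wait 10 s between attempts
    allowFailure: true    # keep the pipeline going even if all attempts fail
```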

Dependencies and Parallel Execution

Steps declare dependencies via inputs. Independent steps run in parallel up to --concurrency.

steps:
  - id: download
    # ...
  - id: process-a
    inputs: [{ step: download }]
  - id: process-b
    inputs: [{ step: download }]
  # process-a and process-b run in parallel
  - id: merge
    inputs: [{ step: process-a }, { step: process-b }]

Inputs

  • Mounted under /input/{stepName}/
  • copyToOutput: true copies content to output before execution
  • optional: true allows the step to run even if the dependency failed or was skipped
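A step combining these options could be sketched as follows (the exact field placement inside inputs is assumed from the bullets above):

```yaml
- id: merge
  image: alpine:3.19
  cmd: [sh, -c, "cat /input/download/*.txt > /output/merged.txt"]
  inputs:
    - step: download
      copyToOutput: true   # seed /output with download's artifacts first
    - step: enrich
      optional: true       # run even if enrich failed or was skipped
```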

Targeted Execution

tylt run pipeline.yaml --target merge
tylt run pipeline.yaml --target process-a,process-b

Conditional Steps

Use if: with a JEXL expression evaluated against env:

- id: deploy
  if: env.NODE_ENV == "production"
  uses: shell
  with:
    run: echo "Deploying..."

Host Mounts

Mount host directories as read-only:

mounts:
  - host: src/app       # relative path (from pipeline file directory)
    container: /app     # absolute path

Sources

Copy host directories into the container's writable layer:

sources:
  - host: src/app
    container: /app

Use sources when the step needs to write alongside source files (e.g. node_modules). Use mounts for read-only access.
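The distinction shows up in a single step that needs both: a writable copy of the app source (so npm can create node_modules) plus read-only configuration (paths are illustrative):

```yaml
- id: install
  image: node:20
  cmd: [sh, -c, "cd /app && npm ci"]
  sources:
    - host: src/app       # copied in; npm can write node_modules next to it
      container: /app
  mounts:
    - host: config        # bind-mounted read-only
      container: /config
```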

Caches

Persistent read-write directories shared across steps and executions:

caches:
  - name: pnpm-store
    path: /root/.local/share/pnpm/store

Setup Phase

Optional phase that runs before the main command, used by kits for dependency installation:

setup:
  cmd: [sh, -c, "apt-get update && apt-get install -y curl"]
  caches:
    - name: apt-cache
      path: /var/cache/apt
      exclusive: true
  allowNetwork: true

Caching & Workspaces

Workspaces enable caching across runs. The workspace ID is determined by:

  1. CLI flag --workspace (highest priority)
  2. Pipeline id (explicit or derived from name)

Steps are skipped when image, command, setup command, env, inputs, and mounts haven't changed.

Troubleshooting

Docker not found

docker --version
docker ps

Permission denied (Linux)

sudo usermod -aG docker $USER
newgrp docker

Workspace disk full

tylt list
tylt rm old-workspace-id
tylt clean

Force re-execution

tylt run --force
tylt run --force download,process