@tylt/cli
v0.0.4
Command-line interface for the Tylt containerized pipeline engine.
Installation
```sh
npx @tylt/cli run pipeline.yaml
```
Or install globally:
```sh
npm install -g @tylt/cli
tylt run pipeline.yaml
```
Usage
The `run` command accepts a pipeline file path, a directory, or nothing (defaults to the current directory). When given a directory, tylt looks for `pipeline.yml`, `pipeline.yaml`, or `pipeline.json`, in that order.
```sh
tylt run                         # auto-detect pipeline file in cwd
tylt run examples/geodata/       # run from a directory
tylt run pipeline.yaml           # run a specific file
tylt run --json                  # JSON mode (for CI/CD)
tylt run --workdir /tmp/builds   # custom workdir
```
Interactive step execution
Execute individual steps without a full pipeline file:
```sh
tylt exec my-workspace -f step.yaml --step greet
tylt cat my-workspace greet greeting.txt
tylt exec my-workspace -f step.yaml --step greet --ephemeral
tylt exec my-workspace -f process.yaml --step process --input greet
tylt exec my-workspace -f process.yaml --step process --input data=greet
tylt rm-step my-workspace greet
```
Inspecting runs
Each step execution produces a run with artifacts, logs (stdout/stderr), and metadata:
```sh
tylt show my-pipeline
tylt logs my-pipeline download
tylt logs my-pipeline download --stream stderr
tylt inspect my-pipeline download
tylt inspect my-pipeline download --json
tylt export my-pipeline download ./output-dir
```
Managing workspaces
```sh
tylt list
tylt ls --json
tylt prune my-pipeline
tylt rm my-build other-build
tylt clean
```
Commands
| Command | Description |
|---------|-------------|
| run [pipeline] | Execute a pipeline (file, directory, or cwd) |
| attach <workspace> | Attach to a running pipeline in a workspace |
| exec <workspace> -f <step-file> | Execute a single step in a workspace |
| cat <workspace> <step> [path] | Read or list artifact content from a step's latest run |
| show <workspace> | Show steps and runs in a workspace |
| logs <workspace> <step> | Show stdout/stderr from last run |
| inspect <workspace> <step> | Show run metadata (meta.json) |
| export <workspace> <step> <dest> | Extract artifacts to the host filesystem |
| prune <workspace> | Remove old runs not referenced by current state |
| list (alias ls) | List workspaces (with disk sizes) |
| rm <workspace...> | Remove one or more workspaces |
| rm-step <workspace> <step> | Remove a step's run and state entry |
| clean | Remove all workspaces |
Global Options
| Option | Description |
|--------|-------------|
| --workdir <path> | Workspaces root directory (default: ./workdir) |
| --json | Structured JSON logs instead of interactive UI |
Run Options
| Option | Alias | Description |
|--------|-------|-------------|
| --workspace <name> | -w | Workspace name for caching |
| --force [steps] | -f | Skip cache for all steps, or a comma-separated list |
| --dry-run | | Validate pipeline, compute fingerprints, show what would run without executing |
| --target <steps> | -t | Execute only these steps and their dependencies (comma-separated) |
| --concurrency <n> | -c | Max parallel step executions (default: CPU count) |
| --env-file <path> | | Load environment variables from a dotenv file for all steps |
| --verbose | | Stream container logs in real-time (interactive mode) |
| --detach | -d | Run pipeline in background (daemon mode) |
| --attach | | Force in-process execution (override detach config) |
Exec Options
| Option | Alias | Description |
|--------|-------|-------------|
| --file <path> | -f | Step definition file (YAML or JSON, required) |
| --step <id> | | Step ID (overrides file's id/name) |
| --input <specs...> | | Input steps (e.g. extract or data=extract) |
| --ephemeral | | Stream stdout to terminal and discard the run |
| --force | | Skip cache check |
| --verbose | | Stream container logs in real-time |
Pipeline Format
Pipeline files can be written in YAML or JSON. Steps can be raw (explicit `image`/`cmd`) or kit-based (using `uses`).
Raw Steps
```yaml
name: my-pipeline
steps:
  - id: download
    image: alpine:3.19
    cmd: [sh, -c, "echo hello > /output/hello.txt"]
  - id: process
    image: alpine:3.19
    cmd: [cat, /input/download/hello.txt]
    inputs: [{ step: download }]
```
Kit Steps
```yaml
steps:
  - id: transform
    uses: node
    with: { script: transform.js, src: src/app }
  - id: analyze
    uses: python
    with: { script: analyze.py, src: scripts }
  - id: extract
    uses: shell
    with: { packages: [unzip], run: "unzip /input/transform/archive.zip -d /output/" }
    inputs: [{ step: transform }]
```
Custom Kits
Beyond the built-in kits (shell, node, python), you can write your own as JS modules.
A kit exports a default function that receives the `with` parameters and returns `{ image, cmd }` (plus optional `setup`, `env`, `caches`, `mounts`, `sources`):
```js
// kits/rust.js
export default function (params) {
  return {
    image: `rust:${params.version ?? '1'}`,
    cmd: ['cargo', 'run'],
    sources: [{ host: params.src ?? '.', container: '/app' }]
  }
}
```
```yaml
steps:
  - id: build
    uses: rust
    with: { version: '1.77', src: ./project/ }
```
Kit resolution order:
1. `.tylt.yml` aliases — mapped name → file path or npm specifier
2. `kits/<name>/index.js` — local directory
3. `kits/<name>.js` — local file
4. Built-in — `shell`, `node`, `python`
5. npm module — for scoped packages (`@org/kit-name`)
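As an illustration, the lookup order can be sketched as a small resolver. The function shape, option names, and return values here are assumptions for illustration, not tylt's actual internals:

```js
// Hypothetical sketch of kit-name resolution following the documented order.
const BUILTINS = new Set(['shell', 'node', 'python']);

function resolveKit(name, { aliases = {}, localKits = [] } = {}) {
  if (aliases[name]) return { type: 'alias', target: aliases[name] };    // 1. .tylt.yml aliases
  if (localKits.includes(`kits/${name}/index.js`))
    return { type: 'local', target: `kits/${name}/index.js` };          // 2. local directory
  if (localKits.includes(`kits/${name}.js`))
    return { type: 'local', target: `kits/${name}.js` };                // 3. local file
  if (BUILTINS.has(name)) return { type: 'builtin', target: name };     // 4. built-in
  if (name.startsWith('@')) return { type: 'npm', target: name };       // 5. scoped npm package
  throw new Error(`Unknown kit: ${name}`);
}
```

Under this sketch, an alias always wins over a local `kits/` file of the same name, and only scoped names fall through to npm.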
.tylt.yml
Place a `.tylt.yml` file at the project root to declare kit aliases:
```yaml
kits:
  geo: ./kits/geo.js
  ml: "@myorg/tylt-kit-ml"  # quoted: a bare @ cannot start a plain YAML scalar
```
This lets you reference external kits (local files or npm packages) by short names in `uses`.
You can also set a default execution mode:
```yaml
detach: true  # tylt run launches a daemon and returns immediately
```
When `detach: true`, `tylt run` starts the pipeline in a background daemon. Use `--attach` to override and run in-process.
Pipeline and Step Identity
- `id` — Machine identifier (alphanumerics, dashes, underscores). Used for caching, state, and artifacts.
- `name` — Human-readable label. Used for display.

At least one of the two must be defined. If `id` is missing, it is derived from `name` via slugification.
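A minimal sketch of what such a derivation might look like — the exact slugification rules are an assumption here, not tylt's actual implementation:

```js
// Hypothetical slugification: lowercase, collapse disallowed runs to a dash.
function slugify(name) {
  return name
    .toLowerCase()
    .replace(/[^a-z0-9_-]+/g, '-') // anything outside [a-z0-9_-] becomes a dash
    .replace(/^-+|-+$/g, '');      // trim leading/trailing dashes
}
```

Under these assumed rules, a pipeline with `name: Data Export!` and no `id` would be identified as `data-export`.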
Step Options
| Field | Type | Description |
|-------|------|-------------|
| id | string | Step identifier (at least one of id/name required) |
| name | string | Human-readable display name |
| image | string | Docker image (required for raw steps) |
| cmd | string[] | Command to execute (required for raw steps) |
| setup | SetupSpec | Optional setup phase |
| uses | string | Kit name (required for kit steps) |
| with | object | Kit parameters |
| inputs | InputSpec[] | Previous steps to mount as read-only |
| env | Record | Environment variables |
| envFile | string | Path to a dotenv file (relative to pipeline file) |
| outputPath | string | Output mount point (default: /output) |
| mounts | MountSpec[] | Host directories to bind mount (read-only) |
| sources | MountSpec[] | Host directories copied into the container's writable layer |
| caches | CacheSpec[] | Persistent caches to mount |
| if | string | JEXL condition expression — step is skipped when false |
| timeoutSec | number | Execution timeout |
| retries | number | Number of retry attempts on transient failure |
| retryDelayMs | number | Delay between retries (default: 5000) |
| allowFailure | boolean | Continue pipeline if step fails |
| allowNetwork | boolean | Enable network access |
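As a sketch, a raw step combining several of these fields might look like the following (the image, URL, and field values are illustrative only):

```yaml
- id: fetch
  image: alpine:3.19
  cmd: [sh, -c, "wget -O /output/data.csv \"$DATA_URL\""]
  env: { DATA_URL: "https://example.com/data.csv" }
  timeoutSec: 120
  retries: 2
  retryDelayMs: 1000
  allowFailure: true    # downstream steps still run if fetch fails
  allowNetwork: true    # required here, since the step downloads data
```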
Dependencies and Parallel Execution
Steps declare dependencies via inputs. Independent steps run in parallel up to --concurrency.
```yaml
steps:
  - id: download
    # ...
  - id: process-a
    inputs: [{ step: download }]
  - id: process-b
    inputs: [{ step: download }]
  # process-a and process-b run in parallel
  - id: merge
    inputs: [{ step: process-a }, { step: process-b }]
```
Inputs
- Mounted under `/input/{stepName}/`
- `copyToOutput: true` copies content to output before execution
- `optional: true` allows the step to run even if the dependency failed or was skipped
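For example, both options can appear in a single step's `inputs` list (the step names here are illustrative):

```yaml
- id: merge
  image: alpine:3.19
  cmd: [sh, -c, "cat /input/download/*.txt > /output/merged.txt"]
  inputs:
    - step: download
      copyToOutput: true   # seed /output with download's artifacts first
    - step: enrich
      optional: true       # merge still runs if enrich failed or was skipped
```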
Targeted Execution
```sh
tylt run pipeline.yaml --target merge
tylt run pipeline.yaml --target process-a,process-b
```
Conditional Steps
Use `if:` with a JEXL expression evaluated against `env`:
```yaml
- id: deploy
  if: env.NODE_ENV == "production"
  uses: shell
  with:
    run: echo "Deploying..."
```
Host Mounts
Mount host directories as read-only:
```yaml
mounts:
  - host: src/app     # relative path (from pipeline file directory)
    container: /app   # absolute path
```
Sources
Copy host directories into the container's writable layer:
```yaml
sources:
  - host: src/app
    container: /app
```
Use `sources` when the step needs to write alongside source files (e.g. `node_modules`). Use `mounts` for read-only access.
Caches
Persistent read-write directories shared across steps and executions:
```yaml
caches:
  - name: pnpm-store
    path: /root/.local/share/pnpm/store
```
Setup Phase
Optional phase that runs before the main command, used by kits for dependency installation:
```yaml
setup:
  cmd: [sh, -c, "apt-get update && apt-get install -y curl"]
  caches:
    - name: apt-cache
      path: /var/cache/apt
      exclusive: true
  allowNetwork: true
```
Caching & Workspaces
Workspaces enable caching across runs. The workspace ID is determined by:
1. CLI flag `--workspace` (highest priority)
2. Pipeline `id` (explicit or derived from `name`)
A step is skipped when its image, command, setup command, env, inputs, and mounts haven't changed since the last run.
Troubleshooting
Docker not found
```sh
docker --version
docker ps
```
Permission denied (Linux)
```sh
sudo usermod -aG docker $USER
newgrp docker
```
Workspace disk full
```sh
tylt list
tylt rm old-workspace-id
tylt clean
```
Force re-execution
```sh
tylt run --force
tylt run --force download,process
```