
@bensandee/tooling

v0.57.3

CLI to bootstrap and maintain standardized TypeScript project tooling.

Installation

pnpm add -D @bensandee/tooling

# Or run directly
pnpm dlx @bensandee/tooling repo:sync

Conventions

The tool auto-detects project structure, CI platform, project type, and Docker packages from the filesystem. .tooling.json stores overrides only — omitted fields use detected defaults. Runtime commands (docker:build, checks:run, release:changesets) work without running repo:sync first.

| Convention | Detection | Default | Override via |
| ----------------- | ----------------------------------------------------- | ---------------------------------------- | ------------------------------------ |
| Project structure | pnpm-workspace.yaml present | single | structure in .tooling.json |
| CI platform | .github/workflows/ or .forgejo/workflows/ dir | none | ci in .tooling.json |
| Project type | Dependencies in package.json (react, node, library) | default | projectType in .tooling.json |
| Docker packages | Dockerfile or docker/Dockerfile in package dirs | — | docker map in .tooling.json |
| Formatter | Existing prettier config detected | oxfmt | formatter in .tooling.json |
| Release strategy | Existing release config detected | monorepo: changesets, single: simple | releaseStrategy in .tooling.json |
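The detection rules above are pure filesystem probes, so a minimal sketch might look like the following (function names and return values are illustrative, not the tool's actual API):

```typescript
import { existsSync, mkdirSync, mkdtempSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Hypothetical sketch: each default is read straight off the filesystem,
// so .tooling.json only needs to store differences from what is detected.
function detectStructure(root: string): "monorepo" | "single" {
  return existsSync(join(root, "pnpm-workspace.yaml")) ? "monorepo" : "single";
}

function detectCi(root: string): "github" | "forgejo" | "none" {
  if (existsSync(join(root, ".github/workflows"))) return "github";
  if (existsSync(join(root, ".forgejo/workflows"))) return "forgejo";
  return "none";
}

// Demo against a throwaway directory with only a Forgejo workflow dir.
const root = mkdtempSync(join(tmpdir(), "proj-"));
mkdirSync(join(root, ".forgejo/workflows"), { recursive: true });
const structure = detectStructure(root); // no pnpm-workspace.yaml → "single"
const ci = detectCi(root);               // → "forgejo"
console.log(structure, ci);
```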

CLI commands

Project management

| Command | Description |
| --------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| tooling repo:sync [dir] | Detect, generate, and sync project tooling (idempotent). First run prompts for release strategy, CI platform (if not detected), and formatter (if Prettier found). Subsequent runs are non-interactive. |
| tooling repo:sync --check [dir] | Dry-run drift detection. Exits 1 if files would change. CI-friendly. |
| tooling checks:run | Run project checks (build, docker:build, typecheck, lint, test, format, knip, tooling:check, docker:check). Flags: --skip, --add, --fail-fast, --verbose. |

Flags: --yes (accept all defaults), --no-ci, --no-prompt, --eslint-plugin

checks:run

Runs checks in order: build, docker:build, typecheck, lint, test, format (--check), knip, tooling:check, docker:check. Checks without a matching script in package.json are silently skipped.
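The silent-skip rule can be sketched as a simple filter of the check order against the scripts actually present in package.json (the scripts object below is illustrative):

```typescript
// Hypothetical sketch of the "skip checks with no matching script" rule:
// only check names that exist in package.json scripts are run, in order.
const checkOrder = [
  "build", "docker:build", "typecheck", "lint", "test",
  "format", "knip", "tooling:check", "docker:check",
];

// Example package.json scripts (illustrative)
const scripts: Record<string, string> = {
  build: "tsc -b",
  lint: "eslint .",
  test: "vitest run",
};

const toRun = checkOrder.filter((name) => name in scripts);
console.log(toRun); // → [ 'build', 'lint', 'test' ]
```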

The --skip flag supports glob patterns via picomatch:

# Skip all docker steps
tooling checks:run --skip 'docker:*'

# Skip specific checks
tooling checks:run --skip build,knip

The --add flag appends extra checks (must be defined in package.json):

tooling checks:run --add e2e

The generated ci:check script defaults to pnpm check --skip 'docker:*' since CI environments typically lack Docker support.

Custom blocks in workflows

Generated workflow files are regenerated on each repo:sync run. To preserve custom additions (extra steps, environment variables, etc.), wrap them in custom block markers:

- name: Run checks
  run: pnpm ci:check
# @tooling:custom
- name: Upload coverage
  uses: actions/upload-artifact@v4
  with:
    name: coverage
    path: coverage/
# @tooling:endcustom

Custom blocks are anchored to the nearest preceding non-blank line. When repo:sync regenerates the workflow, it extracts custom blocks from the existing file, generates fresh content, then re-inserts each block after its anchor line. If the anchor line is no longer present (e.g. a step was renamed), the block is appended at the end of the file.
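The anchor-and-reinsert behavior can be sketched as plain line manipulation. The marker strings match the README; the extraction and merge logic below is an illustrative approximation, not the tool's actual implementation:

```typescript
// Hypothetical sketch of the custom-block merge: extract marked blocks with
// their anchor (nearest preceding non-blank line), then re-insert each block
// after that anchor in the freshly generated file, appending when the anchor
// is no longer present.
const BEGIN = "# @tooling:custom";
const END = "# @tooling:endcustom";

type Block = { anchor: string; block: string[] };

function extractBlocks(lines: string[]): Block[] {
  const blocks: Block[] = [];
  for (let i = 0; i < lines.length; i++) {
    if (lines[i].trim() !== BEGIN) continue;
    let a = i - 1;
    while (a >= 0 && lines[a].trim() === "") a--; // nearest non-blank line
    const block: string[] = [];
    for (i++; i < lines.length && lines[i].trim() !== END; i++) block.push(lines[i]);
    blocks.push({ anchor: a >= 0 ? lines[a] : "", block });
  }
  return blocks;
}

function reinsert(fresh: string[], blocks: Block[]): string[] {
  const out = [...fresh];
  for (const { anchor, block } of blocks) {
    const at = out.indexOf(anchor);
    const wrapped = [BEGIN, ...block, END];
    if (at === -1) out.push(...wrapped);    // anchor renamed: append at end
    else out.splice(at + 1, 0, ...wrapped); // otherwise insert after anchor
  }
  return out;
}

const existing = ["  run: pnpm ci:check", BEGIN, "- name: Upload coverage", END];
const generated = ["  run: pnpm ci:check"]; // freshly regenerated workflow
const merged = reinsert(generated, extractBlocks(existing));
console.log(merged.join("\n"));
```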

To skip a workflow file entirely, add # @bensandee/tooling:ignore as the first line — repo:sync will leave the file untouched.

Repository setup

| Command | Description |
| ------------------------- | -------------------------------------------------------------------------------------------------------------------------------------- |
| tooling setup:secrets | Configure CI secrets (Forgejo or GitHub). Auto-detects platform from package.json repository URL. Flags: --dry-run, --no-docker. |
| tooling forgejo:secrets | Manage Forgejo Actions secrets directly. Subcommands: set, list, delete. |

setup:secrets detects the hosting platform from the repository field in package.json. For Forgejo repositories, it prompts for username and password, creates an access token with appropriate scopes, then sets RELEASE_TOKEN (and Docker secrets if Docker packages are detected). For GitHub repositories, it prompts for an existing Personal Access Token and uses gh secret set to configure secrets.

Release management

| Command | Description |
| -------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| tooling release:changesets | Changesets version/publish for Forgejo CI. Flags: --dry-run, --verbose. Env: FORGEJO_SERVER_URL, FORGEJO_REPOSITORY, RELEASE_TOKEN. |
| tooling release:simple | Streamlined release using commit-and-tag-version. Flags: --release-as, --first-release, --prerelease, --verbose. |
| tooling release:trigger | Trigger a release workflow. |
| tooling forgejo:create-release | Create a Forgejo release from a tag. |
| tooling changesets:merge | Merge a changesets version PR. |
| tooling webhook:send <url> | POST a structured JSON release payload with HMAC-SHA256 auth. Flags: --tag, --repository, --server-url, --actor, --prerelease. Secret from WEBHOOK_SECRET env var. |
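HMAC-SHA256 signing of a JSON payload, as used by webhook:send, can be sketched with node:crypto. The payload shape and the idea of sending the hex digest in a header are illustrative assumptions, not the tool's documented wire format:

```typescript
import { createHmac } from "node:crypto";

// Hypothetical sketch: sign the serialized payload with the shared secret
// (WEBHOOK_SECRET in the real tool); the receiver recomputes the digest
// over the raw request body and compares.
const secret = "example-secret"; // illustrative; comes from env in practice
const payload = JSON.stringify({ tag: "v1.2.3", repository: "bensandee/tooling" });
const signature = createHmac("sha256", secret).update(payload).digest("hex");

// Receiver side: recompute and compare.
const expected = createHmac("sha256", secret).update(payload).digest("hex");
console.log(signature === expected); // → true
```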

release:simple

Uses commit-and-tag-version under the hood. Version bumps are auto-detected from Conventional Commits:

| Commit prefix | Bump | Example |
| ------------------------- | ----- | ------------------------------------ |
| fix: | patch | fix: handle null response |
| feat: | minor | feat: add retry logic |
| feat!: / fix!: etc. | major | feat!: drop v1 API |
| BREAKING CHANGE: in body | major | Any type with breaking change footer |
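The mapping in the table can be sketched as a small classifier (the real parsing inside commit-and-tag-version is more involved; this is only an approximation of the rules above):

```typescript
// Hypothetical sketch of Conventional Commits bump detection.
function bumpFor(subject: string, body = ""): "major" | "minor" | "patch" | null {
  // "type!:" or "type(scope)!:" and BREAKING CHANGE footers force a major bump.
  if (/^[a-z]+(\(.+\))?!:/.test(subject) || body.includes("BREAKING CHANGE:")) return "major";
  if (subject.startsWith("feat")) return "minor";
  if (subject.startsWith("fix")) return "patch";
  return null; // chore:, docs:, etc. produce no release on their own
}

console.log(bumpFor("fix: handle null response")); // → patch
console.log(bumpFor("feat: add retry logic"));     // → minor
console.log(bumpFor("feat!: drop v1 API"));        // → major
```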

Override auto-detection with CLI flags:

# Force a major bump
tooling release:simple --release-as major

# Force a specific version
tooling release:simple --release-as 2.0.0

# Create a prerelease
tooling release:simple --release-as major --prerelease beta  # → 2.0.0-beta.0

The generated release workflow exposes these as optional workflow_dispatch inputs (bump and prerelease), so bumps can also be controlled from the CI UI.

Release assets

Upload arbitrary build artifacts (tarballs, binaries, zips, etc.) to each release.

Convention (zero config): Add a build:release-assets script to any package.json and have it write artifacts into a release-assets/ directory alongside the script:

{
  "scripts": {
    "build:release-assets": "mkdir -p release-assets && tar -czf release-assets/app.tar.gz dist"
  }
}

On repo:sync, release-assets/ is added to .gitignore and a notice is printed confirming the convention was detected. In CI, the generated release workflow runs tooling release:assets, which builds every detected package and uploads every file in each release-assets/ directory to the current tag's release. The script runs from the package directory, so monorepo packages can each emit their own set of artifacts.

If the script runs but writes no files to release-assets/, release:assets fails with an actionable error rather than silently skipping the upload.
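The empty-directory guard can be sketched as follows. The function name and error wording are illustrative, not the tool's actual code:

```typescript
import { mkdirSync, mkdtempSync, readdirSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Hypothetical sketch of the release:assets guard: after build:release-assets
// runs, an empty release-assets/ directory is an error, not a silent no-op.
function collectAssets(pkgDir: string): string[] {
  const dir = join(pkgDir, "release-assets");
  const files = readdirSync(dir);
  if (files.length === 0) {
    throw new Error(`build:release-assets ran but wrote no files to ${dir}`);
  }
  return files.map((f) => join(dir, f));
}

// Demo: a package whose script produced nothing.
const pkg = mkdtempSync(join(tmpdir(), "pkg-"));
mkdirSync(join(pkg, "release-assets"));
let failed = false;
try {
  collectAssets(pkg);
} catch {
  failed = true; // empty directory → actionable error
}
console.log(failed); // → true
```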

Override (non-conventional layouts): For artifacts that don't fit the convention, declare them explicitly in .tooling.json:

{
  "releaseAssets": [
    { "file": "dist/app.tar.gz", "command": ["pnpm", "build"], "name": "app.tar.gz" }
  ]
}

| Field | Description |
| --------- | ---------------------------------------------------------------------------------------- |
| file | Path to the artifact, relative to the project root (required). |
| command | argv run before upload (optional; omit for pre-built files). Not interpreted by a shell. |
| name | Asset name in the release (optional; defaults to the file's basename). |

Convention and overrides coexist — a repo can use both.

Works for both changesets and simple release strategies, on Forgejo (via API upload) and GitHub (via gh release upload --clobber). The asset step in the workflow is gated on the release having actually been published, so failed releases never upload stale artifacts.

Coverage tracking

| Command | Description |
| -------------------------- | --------------------------------------------------------------------------------------------- |
| tooling coverage:summary | Print a labeled per-package summary of the latest run. --json for machine-readable form. |
| tooling coverage:check | Compare the latest run against coverage-baseline.json; exit non-zero on regression. |
| tooling coverage:record | Append the latest run to a bst/coverage-metrics orphan branch (CI: push to default branch). |
| tooling coverage:status | Show recent coverage trend. Read-only: fetches the orphan branch, no checkout. |

The feature has two halves: a CI history that captures one record per commit on the default branch (visible to anyone with access to the repo), and a local ratchet that the agent (or developer) consults before declaring a task complete.

How it works

repo:sync enables coverage tracking automatically when a vitest.config.ts is present. The generated vitest.config.ts includes the json-summary reporter so each pnpm test:coverage writes coverage/coverage-summary.json alongside the human-readable HTML report. A generated coverage.yml workflow runs on push to the default branch and pipes:

pnpm test:coverage  →  coverage:summary  →  coverage:record  →  coverage:status

so every CI log ends with the trend table for the last 10 records.

The history file (coverage-history.jsonl) lives on a dedicated bst/coverage-metrics orphan branch — never on main. This sidesteps branch protection (no privileged token needed beyond contents: write) and keeps git log main clean. coverage:record uses pure git plumbing (hash-object → mktree → commit-tree → push --force-with-lease) so the working tree is never touched, and concurrent CI runs are handled safely via compare-and-swap with retry.
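The plumbing-only append can be sketched against a throwaway repository. This shows only the local half (blob → tree → commit → ref); the real command additionally fetches the existing history, appends to it, and pushes with --force-with-lease plus retry:

```typescript
import { execFileSync } from "node:child_process";
import { mkdtempSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Hypothetical sketch: build a commit containing coverage-history.jsonl
// without ever touching the working tree, then point an orphan-style ref at it.
const repo = mkdtempSync(join(tmpdir(), "cov-"));
const git = (...args: string[]): string =>
  execFileSync("git", ["-C", repo, ...args], { encoding: "utf8" }).trim();
const gitIn = (input: string, ...args: string[]): string =>
  execFileSync("git", ["-C", repo, ...args], { input, encoding: "utf8" }).trim();

git("init", "-q", ".");
git("config", "user.email", "ci@example.com");
git("config", "user.name", "ci");

const record = JSON.stringify({ lines: 91.2, branches: 84 }) + "\n"; // illustrative record
const blob = gitIn(record, "hash-object", "-w", "--stdin");          // write blob directly
const tree = gitIn(`100644 blob ${blob}\tcoverage-history.jsonl\n`, "mktree");
const commit = git("commit-tree", tree, "-m", "coverage: record");   // parentless commit
git("update-ref", "refs/heads/bst/coverage-metrics", commit);

const shown = git("cat-file", "-p", "bst/coverage-metrics:coverage-history.jsonl");
console.log(shown);
```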

The local ratchet

A tracked coverage-baseline.json at the repo root captures the four overall totals (lines/statements/branches/functions) plus per-package numbers. coverage:check compares the latest run against it and exits non-zero if any total drops more than the configured tolerancePp (default 0.1). When all four are flat-or-up, --update-baseline rewrites the file so it can be staged with the change.
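The ratchet comparison can be sketched as follows (type and function names are illustrative; only the tolerance semantics mirror the description above):

```typescript
// Hypothetical sketch of coverage:check: compare the four overall totals
// against the baseline, flagging any drop larger than tolerancePp.
type Totals = { lines: number; statements: number; branches: number; functions: number };

function findRegressions(baseline: Totals, latest: Totals, tolerancePp = 0.1): string[] {
  return (Object.keys(baseline) as (keyof Totals)[])
    .filter((k) => latest[k] < baseline[k] - tolerancePp)
    .map((k) => `${k}: ${baseline[k]} -> ${latest[k]}`);
}

const baseline = { lines: 92.0, statements: 91.5, branches: 85.0, functions: 90.0 };
const latest   = { lines: 92.05, statements: 91.5, branches: 84.3, functions: 90.2 };

const regressions = findRegressions(baseline, latest);
console.log(regressions); // only branches dropped beyond the 0.1pp tolerance
// exit non-zero on regression: process.exitCode = regressions.length ? 1 : 0
```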

repo:sync also inserts a <!-- @tooling:coverage-rule --> managed block into CLAUDE.md describing the workflow:

Before declaring a task complete that touched files under packages/*/src/, run pnpm test:coverage then pnpm exec bst coverage:check. If any total dropped, add tests until parity is restored — or, if a touched file is genuinely not worth covering, exclude it via coverage.exclude in vitest.config.ts and explain why in the commit message. If all four are flat-or-up, run pnpm exec bst coverage:check --update-baseline and stage coverage-baseline.json.

Skip the rule for docs-only, config-only, dependency-bump, or test-only changes.

Bootstrap

On a clean default branch, generate the initial baseline:

pnpm test:coverage
pnpm exec bst coverage:check --update-baseline --no-fail-on-missing-baseline
git add coverage-baseline.json

The first CI run on the default branch creates the orphan bst/coverage-metrics branch automatically. From any clone, inspect the trend with:

pnpm exec bst coverage:status --limit 20

Opt-out

Disable the feature entirely:

{ "coverage": false }

Or keep the local ratchet but skip the CI history append:

{ "coverage": { "history": "none" } }

| Override field | Default | Description |
| -------------------- | ------------------------ | ----------------------------------------------------------------------- |
| history | "orphan-branch" | "none" disables CI history; the local ratchet still works |
| historyBranch | "bst/coverage-metrics" | Override the orphan branch name |
| tolerancePp | 0.1 | Tolerance in percentage points before coverage:check flags a drop |
| gateOnPullRequests | false | Run coverage on PR builds too (deferred follow-up; currently a no-op) |

Per-package re-sum

The trimmed record's packages map is computed by aggregating files whose path starts with packages/<name>/, so monorepo packages get their own line/branch percentages without per-package vitest configs. Single-package repos collapse to one entry keyed by the directory basename. Per-package numbers are recorded but not gated — only the four overall totals control the ratchet.
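The re-sum can be sketched as grouping file entries by path prefix. The field names below are illustrative, not the exact shape of coverage-summary.json:

```typescript
// Hypothetical sketch of the per-package re-sum: group files by the
// packages/<name>/ prefix and re-sum line counts per group.
type FileCov = { path: string; lines: { covered: number; total: number } };

function perPackage(files: FileCov[]): Record<string, number> {
  const sums: Record<string, { covered: number; total: number }> = {};
  for (const f of files) {
    const m = /^packages\/([^/]+)\//.exec(f.path);
    const key = m ? m[1] : "root"; // single-package repos collapse to one entry
    sums[key] ??= { covered: 0, total: 0 };
    sums[key].covered += f.lines.covered;
    sums[key].total += f.lines.total;
  }
  return Object.fromEntries(
    Object.entries(sums).map(([k, s]) => [k, Math.round((100 * s.covered / s.total) * 10) / 10]),
  );
}

const pct = perPackage([
  { path: "packages/cli/src/index.ts", lines: { covered: 90, total: 100 } },
  { path: "packages/cli/src/run.ts", lines: { covered: 40, total: 100 } },
  { path: "packages/core/src/detect.ts", lines: { covered: 75, total: 100 } },
]);
console.log(pct); // → { cli: 65, core: 75 }
```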

Docker

| Command | Description |
| ------------------------ | ------------------------------------------------------------------- |
| tooling docker:build | Build Docker images for discovered Docker packages. |
| tooling docker:publish | Build, tag, and push Docker images to a registry. |
| tooling docker:check | Start a Compose stack, run health checks, run smoketest, tear down. |

Docker packages are discovered automatically. Any package with a Dockerfile or docker/Dockerfile is a Docker package. Image names are derived as {root-package-name}-{package-name}, build context defaults to . (project root). For single-package repos, Dockerfile or docker/Dockerfile at the project root is checked.

When Docker packages are present, repo:sync generates a publish workflow (.forgejo/workflows/publish.yml or .github/workflows/publish.yml) triggered via workflow_dispatch for manual runs. For the simple release strategy, docker publishing is also added as a step in the release workflow so it runs automatically after each release.

Overrides

To override defaults, add a docker entry to .tooling.json:

{
  "docker": {
    "server": {
      "dockerfile": "packages/server/docker/Dockerfile",
      "context": "."
    }
  }
}

The context field defaults to "." (project root) when omitted. Versions for tagging are read from each package's own package.json.

Per-package build args

Each docker entry may declare buildArgs — a map of ARG_NAME to value that is passed to that package's docker build as --build-arg ARG_NAME=<value>. Args flow to the configured package only, so they don't trigger "unused build arg" warnings on sibling images. They apply to both docker:build and docker:publish, so CI-published images match local builds.

{
  "docker": {
    "frontend": {
      "dockerfile": "packages/frontend/docker/Dockerfile",
      "context": ".",
      "buildArgs": {
        "VITE_BASE_PATH": "${BASE_PATH:-/}"
      }
    }
  }
}

Values support ${VAR} and ${VAR:-default} expansion against process.env. Empty strings are treated as unset (POSIX :- semantics). Unset variables with no default expand to "" and log a warning. When the CLI also receives --build-arg via -- pass-through, the CLI value comes after the configured value and wins on duplicate ARG_NAME.
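The expansion semantics can be sketched as a small substitution function. This is an illustrative approximation of the rules described above, not the tool's implementation:

```typescript
// Hypothetical sketch of ${VAR} / ${VAR:-default} expansion: empty strings
// count as unset (POSIX :- semantics), and an unset variable with no
// default expands to "".
function expand(value: string, env: Record<string, string | undefined>): string {
  return value.replace(/\$\{([A-Za-z_][A-Za-z0-9_]*)(?::-([^}]*))?\}/g, (_, name, def) => {
    const v = env[name];
    if (v !== undefined && v !== "") return v; // set and non-empty
    if (def !== undefined) return def;         // empty or unset → default
    return "";                                 // unset, no default (warn in real tool)
  });
}

console.log(expand("${BASE_PATH:-/}", {}));                    // → /
console.log(expand("${BASE_PATH:-/}", { BASE_PATH: "" }));     // → /   (empty = unset)
console.log(expand("${BASE_PATH:-/}", { BASE_PATH: "/app" })); // → /app
console.log(expand("${MISSING}", {}));                         // → (empty string)
```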

docker:build

Builds all discovered packages, or a single package with --package:

# Build all packages with docker config
tooling docker:build

# Build a single package (useful as an image:build script)
tooling docker:build --package packages/server

# Pass extra args to docker build
tooling docker:build -- --no-cache --build-arg FOO=bar

To give individual packages a standalone image:build script for local testing:

{
  "scripts": {
    "image:build": "pnpm exec tooling docker:build --package ."
  }
}

Flags: --package <dir> (build a single package), --verbose

docker:publish

Runs docker:build for all packages, then logs in to the registry, tags each image with semver variants from its own version field, pushes all tags, and logs out.

Tags generated per package: latest, vX.Y.Z, vX.Y, vX

Each package is tagged independently using its own version, so packages in a monorepo can have different release cadences. Packages without a version field are rejected at publish time.
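The tag set per package can be sketched from the version field alone (function name illustrative; the "reject when version is missing or not semver" behavior mirrors the description above):

```typescript
// Hypothetical sketch of per-package tag derivation: latest, vX.Y.Z, vX.Y, vX.
function tagsFor(version: string): string[] {
  const m = /^(\d+)\.(\d+)\.(\d+)$/.exec(version);
  if (!m) throw new Error(`package version "${version}" is not valid semver`);
  const [, major, minor, patch] = m;
  return ["latest", `v${major}.${minor}.${patch}`, `v${major}.${minor}`, `v${major}`];
}

console.log(tagsFor("1.4.2")); // → [ 'latest', 'v1.4.2', 'v1.4', 'v1' ]
```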

Flags: --dry-run (build and tag only, skip login/push/logout), --verbose

Required CI variables:

| Variable | Type | Description |
| --------------------------- | -------- | --------------------------------------------------------------------- |
| DOCKER_REGISTRY_HOST | variable | Registry hostname (e.g. code.orangebikelabs.com) |
| DOCKER_REGISTRY_NAMESPACE | variable | Full namespace for tagging (e.g. code.orangebikelabs.com/bensandee) |
| DOCKER_USERNAME | secret | Registry username |
| DOCKER_PASSWORD | secret | Registry password |

Forgejo setup: On Forgejo, DOCKER_USERNAME is your Forgejo account username, and DOCKER_PASSWORD can reuse the same token as RELEASE_TOKEN. The token needs write permissions on the org, package, and repository scopes. These permissions should be set for the user if the package is under a user namespace (e.g. bensandee), or the organization if it's under an org namespace (e.g. orangebikelabs).

Config file

.tooling.json stores overrides only — fields where the project differs from what convention/detection produces. A fully conventional project has {} or no .tooling.json at all.

Available override fields:

| Field | Type | Default (detected) |
| -------------------- | ------- | -------------------------------------------------------------------------------- |
| structure | string | "monorepo" if pnpm-workspace.yaml present, else "single" |
| useEslintPlugin | boolean | true |
| formatter | string | "prettier" if config found, else "oxfmt" |
| setupVitest | boolean | true |
| ci | string | Detected from workflow dirs, else "none" |
| setupRenovate | boolean | true |
| releaseStrategy | string | Detected from existing config, else monorepo: "changesets", single: "simple" |
| projectType | string | Auto-detected from package.json deps |
| detectPackageTypes | boolean | true |

Debug logging

All CLI commands support a --verbose flag for detailed debug output. Alternatively, set TOOLING_DEBUG=true as an environment variable — useful in CI workflows:

env:
  TOOLING_DEBUG: "true"

Debug output is prefixed with [debug] and includes exec results (exit codes, stdout/stderr), compose configuration details, container health statuses, and retry attempts.

Library API

The "." export provides type-only exports for programmatic use:

import type {
  ProjectConfig,
  GeneratorResult,
  GeneratorContext,
  Generator,
  DetectedProjectState,
  LegacyConfig,
} from "@bensandee/tooling";

| Type | Description |
| ---------------------- | ----------------------------------------------------------------------------------------------- |
| ProjectConfig | User config shape (persisted in .tooling.json) |
| GeneratorContext | Context passed to generator functions (exists, read, write, remove, confirmOverwrite) |
| GeneratorResult | Result from a generator (created/updated/skipped files) |
| Generator | Generator function signature: (ctx: GeneratorContext) => Promise<GeneratorResult> |
| DetectedProjectState | Detected existing project state (package manager, CI, etc.) |
| LegacyConfig | Legacy config detection for migration |

Docker check

docker:check verifies that a Docker Compose stack starts correctly, passes health checks, and (optionally) survives a smoketest command. The lifecycle is: build images, compose up, poll health checks, run smoketest, compose down.

Smoketests

A smoketest is a command that runs against the live Docker stack after all containers are healthy and HTTP health checks pass. This is useful for running integration tests, API contract checks, or any validation that requires the full stack to be running.

Auto-detection: If the root package.json has a "test:smoke" script, it is automatically used as the smoketest command — no configuration needed.

Manual configuration: For non-conventional command names, configure via .tooling.json:

{
  "dockerCheck": {
    "smoketest": ["pnpm", "test:integration"],
    "smoketestCwd": "."
  }
}

If the smoketest command exits non-zero, docker:check dumps container logs and fails with reason "smoketest-failed".

Library API

The @bensandee/tooling/docker-check export provides utilities for checking Docker Compose stacks programmatically.

Quick start

import { createRealExecutor, runDockerCheck } from "@bensandee/tooling/docker-check";
import type { CheckConfig } from "@bensandee/tooling/docker-check";

const config: CheckConfig = {
  compose: {
    cwd: "./deploy",
    composeFiles: ["docker-compose.yaml"],
    services: ["api", "db"],
  },
  buildCommand: ["pnpm", "image:build"],
  healthChecks: [
    {
      name: "API",
      url: "http://localhost:3000/health",
      validate: async (res) => res.ok,
    },
  ],
  smoketest: ["pnpm", "test:smoke"],
  timeoutMs: 120_000,
  pollIntervalMs: 5_000,
};

const result = await runDockerCheck(createRealExecutor(), config);
if (!result.success) {
  console.error(result.reason, result.message);
}

Exports

| Export | Description |
| -------------------------------------- | ---------------------------------------------------------------------------- |
| runDockerCheck(executor, config) | Full lifecycle: build, compose up, health check polling, smoketest, teardown |
| createRealExecutor() | Production executor (real shell, fetch, timers) |
| composeUp(executor, config) | Start compose services |
| composeDown(executor, config) | Stop and remove compose services |
| composeLogs(executor, config) | Stream compose logs |
| composePs(executor, config) | List running containers |
| checkHttpHealth(executor, check) | Run a single HTTP health check |
| getContainerHealth(executor, config) | Check container-level health status |

Types

| Type | Description |
| --------------------- | ------------------------------------------------------------------------------------------ |
| CheckConfig | Full check config (compose settings, build command, health checks, smoketest, timeouts) |
| ComposeConfig | Docker Compose settings (cwd, compose files, env file, services) |
| HttpHealthCheck | Health check definition (name, URL, validate function) |
| CheckResult | Result: { success: true, elapsedMs } or { success: false, reason, message, elapsedMs } |
| DockerCheckExecutor | Side-effect abstraction (exec, fetch, timers) for testability |
| ContainerInfo | Container status info from composePs |