
@safetnsr/autotune

v0.1.0


GPU-poor autoresearch — calibrate any scoring tool against ground truth


autotune

GPU-poor autoresearch. calibrate any scoring tool by iterating parameters against ground truth.

inspired by karpathy/autoresearch — the same pattern (iterate, measure, keep or discard) applied without a GPU.

what it does

you have a scoring tool with tunable parameters. you have a dataset with expected results. autotune:

  1. mutates your parameters
  2. runs your evaluation
  3. measures the metric
  4. keeps improvements, discards failures
  5. repeats

no machine learning. no GPU. just systematic parameter search with a clear metric.
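the five steps above boil down to a plain keep-or-discard loop. a minimal sketch (hypothetical names, not autotune's internals — `mutate` and `score` stand in for your parameter perturbation and evaluation):

```javascript
// Keep-or-discard loop: mutate params, score them, keep only improvements.
function hillClimb(params, score, mutate, iterations) {
  let best = { params, metric: score(params) };
  for (let i = 0; i < iterations; i++) {
    const candidate = mutate(best.params); // 1. mutate parameters
    const metric = score(candidate);       // 2-3. run eval, measure
    if (metric > best.metric) {
      best = { params: candidate, metric }; // 4. keep improvement
    }                                       // else: discard, retry from best
  }
  return best;
}

// toy usage: walk x toward the optimum at x = 3
const result = hillClimb(
  { x: 0 },
  (p) => -((p.x - 3) ** 2),        // metric peaks at x = 3
  (p) => ({ x: p.x + 1 }),         // deterministic "mutation" for the demo
  10
);
console.log(result.params.x, result.metric); // 3 -0
```

note that failed candidates are always generated from the best-so-far, not from the failed attempt — that is what makes the search a hill climb rather than a random walk.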

quickstart

npx @safetnsr/autotune init

this creates three files:

  • autotune.json — defines your parameters and their ranges
  • params.json — your current parameter values
  • eval.js — your evaluation script (you implement this)

edit eval.js to run your tool against a dataset and output a metric. then:

npx @safetnsr/autotune run --iterations 100

autotune will iterate 100 times, logging each attempt and keeping only improvements.
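a hypothetical eval.js might look like the following. this assumes the contract is "read params.json, run your tool, print the metric to stdout" — check the scaffolded file for the exact interface. params are inlined here so the sketch is self-contained:

```javascript
// eval.js sketch (hypothetical): score a toy "tool" against labeled data.
// In real use, read the current values autotune wrote:
//   const params = JSON.parse(fs.readFileSync('params.json', 'utf-8'));
const params = { weight: 3, bias: 0 };

const dataset = [
  { input: 2, expected: 7 },
  { input: 5, expected: 16 },
];

// the "tool" being calibrated: a linear model with tunable weight and bias
const predict = (x) => params.weight * x + params.bias;

// metric: negative mean absolute error, so higher is better
const mae =
  dataset.reduce((s, d) => s + Math.abs(predict(d.input) - d.expected), 0) /
  dataset.length;
console.log(-mae); // prints -1
```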

example: calibrating a code quality scorer

we used autotune to calibrate vet against 43 public repos. correlation went from -0.32 to +0.83 across 13 iterations.

{
  "name": "vet-calibration",
  "params": "thresholds.json",
  "eval": "bash run-calibration.sh",
  "metric": "correlation",
  "higherIsBetter": true,
  "maxIterations": 50,
  "strategy": "hill-climb",
  "paramDefs": [
    {
      "path": "category_weights",
      "type": "weight-group",
      "step": 0.05,
      "members": [
        "category_weights.security",
        "category_weights.integrity",
        "category_weights.debt",
        "category_weights.deps"
      ]
    },
    {
      "path": "integrity.empty_catch_error_penalty",
      "type": "number",
      "min": 1,
      "max": 15,
      "step": 1
    }
  ]
}

full writeup: from -0.32 to +0.83

use cases

  • scoring algorithms — calibrate weights and thresholds against labeled data
  • prompt optimization — iterate on prompt templates, measure output quality
  • config tuning — optimize build configs, deploy settings, rate limits
  • any parameter search — if you can measure it, you can autotune it

how it works

autotune uses hill-climbing with random restarts:

  1. pick 1-3 random parameters
  2. mutate each by 1-3 steps in a random direction
  3. run eval, extract metric
  4. if metric improved → keep new params
  5. if metric worsened → revert to previous best
  6. repeat

results are logged as JSONL in .autotune/ — every iteration, every change, every metric.
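because the log is JSONL, post-hoc analysis is a one-liner per record. a sketch of scanning a run for its best iteration (the field names `iteration` and `metric` are assumptions, not a documented schema — inspect your own .autotune/ files for the real shape):

```javascript
// Hypothetical: scan an autotune JSONL log for the best-scoring iteration.
import fs from 'node:fs';

function bestIteration(logPath) {
  return fs
    .readFileSync(logPath, 'utf-8')
    .trim()
    .split('\n')
    .map((line) => JSON.parse(line)) // one JSON record per line
    .reduce((best, rec) => (rec.metric > best.metric ? rec : best));
}
```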

cli

autotune init                     # scaffold project
autotune run [--iterations N]     # run optimization loop
autotune status                   # show latest run summary
autotune help                     # show help

programmatic

import fs from 'node:fs';
import { run } from '@safetnsr/autotune';

const config = JSON.parse(fs.readFileSync('autotune.json', 'utf-8'));
const summary = run(config, process.cwd());

console.log(`improved from ${summary.startMetric} to ${summary.bestMetric}`);

license

MIT