
livepaper-cli

v0.1.1


Publish ML papers as interactive web pages with replicable results


Livepaper

Publish research papers as interactive web pages where every number is linked to a replication prompt. Readers click a value, copy the prompt into any AI coding agent, and reproduce the result.

How it works

A livepaper has two source files:

  • paper.md — your paper in markdown, with markers on quantitative values
  • specs.yaml — replication prompts for each marked value

Markers look like standard markdown links:

Our model achieves [98.8%](#=clf_lr_acc) accuracy on the benchmark.

In the built page, "98.8%" is highlighted. Click it, get a popover with the replication prompt and a copy button. Paste into Claude Code, Cursor, Codex, or any agent.

Values not yet verified use pending markers:

Training took [3 days](#=) on 8 A100s.
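Because markers reuse standard markdown link syntax, they are easy to extract mechanically. A minimal sketch of such a parser (hypothetical code, not livepaper-cli's actual implementation):

```python
import re

# Matches [value](#=id) markers; an empty id means the value is pending.
# Illustrative only -- not the CLI's real parser.
MARKER_RE = re.compile(r"\[([^\]]+)\]\(#=([A-Za-z0-9_]*)\)")

def find_markers(markdown: str):
    """Return (value, spec_id_or_None) for every marker in the text."""
    return [
        (m.group(1), m.group(2) or None)
        for m in MARKER_RE.finditer(markdown)
    ]

text = "Achieves [98.8%](#=clf_lr_acc) accuracy. Training took [3 days](#=)."
print(find_markers(text))
# → [('98.8%', 'clf_lr_acc'), ('3 days', None)]
```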

Install

npm install -g livepaper-cli

Or use without installing:

npx livepaper-cli <command>

The installed CLI command is livepaper, not livepaper-cli.

Workflow

1. Initialize from a LaTeX paper

livepaper init paper/paper.tex

Creates livepaper/ with INSTRUCTION.md — an agent playbook for converting your paper.

2. Let the agent do the work

Open your AI coding agent and say:

Please follow livepaper/INSTRUCTION.md to build my livepaper.

The agent will:

  • Convert your LaTeX to livepaper/paper.md with markers on every number
  • Explore your repo to find how each result was produced
  • Fill in livepaper/specs.yaml with replication prompts
  • Ask you when it can't figure something out

3. Build the interactive page

livepaper build

Open livepaper/dist/index.html in your browser.

Commands

livepaper init <tex>

Scaffold livepaper/ and generate INSTRUCTION.md.

livepaper init paper/paper.tex
livepaper init paper/paper.tex --force  # overwrite existing

livepaper build

Build the interactive page from paper.md + specs.yaml.

livepaper build                    # outputs to livepaper/dist/
livepaper build --out ./public     # custom output dir
livepaper build --watch            # rebuild on changes

livepaper check

Validate markers against specs without building.

livepaper check
✓ 25 markers found in livepaper/paper.md
✓ 25 specs found in livepaper/specs.yaml
✓ All markers have matching specs
✓ No orphan specs
✓ All IDs unique
✓ All {var} references resolved
· 7 pending markers (#=)

25/25 ready. 7 pending.
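The checks above boil down to set comparisons between the marker ids found in paper.md and the spec keys in specs.yaml. A hypothetical sketch of that cross-check (not the CLI's actual code; pending markers carry no id and are only counted, not matched):

```python
# Illustrative cross-check between markers and specs.
marker_ids = {"clf_lr_acc", "clf_rf_acc"}  # ids parsed from paper.md
spec_ids = {"clf_lr_acc", "clf_rf_acc"}    # keys in specs.yaml, minus _templates
pending_count = 7                          # markers written as [value](#=)

missing = marker_ids - spec_ids  # markers with no matching spec
orphans = spec_ids - marker_ids  # specs no marker references

print(f"missing: {sorted(missing)}, orphans: {sorted(orphans)}, pending: {pending_count}")
# → missing: [], orphans: [], pending: 7
```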

Authoring format

Markers in paper.md

[value](#=id)    verified — has a spec in specs.yaml
[value](#=)      pending — not yet verified

Tables and figures: mark the caption, not individual cells.

[**Table 1: Results by model.**](#=table1)

| Model | Accuracy |
|-------|----------|
| GPT-4 | 84.7%    |

Specs in specs.yaml

Two fields, prompt and note, are reserved; everything else is a template variable:

clf_lr_acc:
  prompt: "Run: python eval/classify.py. Report logistic_regression.accuracy from the JSON output."
  note: "5-fold stratified CV, seed 42."

Use YAML anchors to DRY up repeated patterns:

_templates:
  classify: &clf
    prompt: "Run: python eval/classify.py. Report {model}.{metric} from the JSON output."

clf_lr_acc:
  <<: *clf
  model: logistic_regression
  metric: accuracy

clf_rf_acc:
  <<: *clf
  model: random_forest
  metric: accuracy

{var} references are expanded at build time. Only prompt and note reach the reader.
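The expansion step can be pictured with plain dictionaries: after YAML merge keys (<<) are resolved, each spec is a flat mapping, and the non-reserved fields are substituted into the {var} slots of the reserved ones. A rough sketch under those assumptions (not the real build code):

```python
# One spec after YAML merge-key resolution, as a flat dict.
spec = {
    "prompt": "Run: python eval/classify.py. Report {model}.{metric} from the JSON output.",
    "model": "logistic_regression",
    "metric": "accuracy",
}

RESERVED = {"prompt", "note"}

def expand(spec):
    """Substitute non-reserved fields into {var} slots; keep only prompt/note."""
    variables = {k: v for k, v in spec.items() if k not in RESERVED}
    return {k: spec[k].format(**variables) for k in RESERVED if k in spec}

print(expand(spec)["prompt"])
# → Run: python eval/classify.py. Report logistic_regression.accuracy from the JSON output.
```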

Project config (livepaper.yaml)

title: "Your Paper Title"
setup: |
  Help me navigate this replicable livepaper. Clone https://github.com/org/repo and cd into it.
  Run: pip install -r requirements.txt
  You are now ready to replicate any result.

The setup field is shown in a banner at the top of the built page with a copy button.

Repo structure

your-paper-repo/
├── paper/                    # your original LaTeX (untouched)
│   └── paper.tex
├── livepaper/                # created by livepaper init
│   ├── INSTRUCTION.md        # agent playbook
│   ├── paper.md              # markdown + markers
│   ├── specs.yaml            # replication prompts
│   ├── livepaper.yaml        # project config
│   ├── assets/figures/       # figure images
│   └── dist/                 # built output
│       ├── index.html
│       └── specs.json
└── ...                       # other necessary files

License

MIT