# Livepaper
Publish research papers as interactive web pages where every number is linked to a replication prompt. Readers click a value, copy the prompt into any AI coding agent, and reproduce the result.
## How it works
A livepaper has two source files:
- `paper.md`: your paper in markdown, with markers on quantitative values
- `specs.yaml`: replication prompts for each marked value
Markers look like standard markdown links:
```markdown
Our model achieves [98.8%](#=clf_lr_acc) accuracy on the benchmark.
```

In the built page, "98.8%" is highlighted. Click it, get a popover with the replication prompt and a copy button. Paste into Claude Code, Cursor, Codex, or any agent.
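The marker syntax can be sketched as a tiny parser. This is a hypothetical illustration in Python, not the CLI's actual parsing code:

```python
import re

# Hypothetical sketch: extract livepaper-style markers from markdown.
# A marker is a markdown link whose target starts with "#=".
MARKER = re.compile(r"\[([^\]]+)\]\(#=(\w*)\)")

def find_markers(markdown: str):
    """Return (value, spec_id) pairs; spec_id is None when the marker has no id."""
    return [(value, spec_id or None) for value, spec_id in MARKER.findall(markdown)]

text = "Our model achieves [98.8%](#=clf_lr_acc) accuracy on the benchmark."
print(find_markers(text))  # [('98.8%', 'clf_lr_acc')]
```

Ordinary markdown links are left alone; only `#=` targets are treated as markers.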
Values not yet verified use pending markers:
```markdown
Training took [3 days](#=) on 8 A100s.
```

## Install
```bash
npm install -g livepaper-cli
```

Or use without installing:
```bash
npx livepaper-cli <command>
```

The CLI command is still `livepaper`.
## Workflow
### 1. Initialize from a LaTeX paper
```bash
livepaper init paper/paper.tex
```

Creates `livepaper/` with `INSTRUCTION.md`, an agent playbook for converting your paper.
### 2. Let the agent do the work
Open your AI coding agent and say:
```text
Please follow livepaper/INSTRUCTION.md to build my livepaper.
```
The agent will:
- Convert your LaTeX to `livepaper/paper.md` with markers on every number
- Explore your repo to find how each result was produced
- Fill in `livepaper/specs.yaml` with replication prompts
- Ask you when it can't figure something out
### 3. Build the interactive page
```bash
livepaper build
```

Open `livepaper/dist/index.html` in your browser.
## Commands
### `livepaper init <tex>`
Scaffold `livepaper/` and generate `INSTRUCTION.md`.
```bash
livepaper init paper/paper.tex
livepaper init paper/paper.tex --force   # overwrite existing
```

### `livepaper build`
Build the interactive page from `paper.md` + `specs.yaml`.
```bash
livepaper build                  # outputs to livepaper/dist/
livepaper build --out ./public   # custom output dir
livepaper build --watch          # rebuild on changes
```

### `livepaper check`
Validate markers against specs without building.
```text
$ livepaper check
✓ 25 markers found in livepaper/paper.md
✓ 25 specs found in livepaper/specs.yaml
✓ All markers have matching specs
✓ No orphan specs
✓ All IDs unique
✓ All {var} references resolved
· 7 pending markers (#=)
25/25 ready. 7 pending.
```

## Authoring format
### Markers in `paper.md`
- `[value](#=id)`: verified, has a spec in `specs.yaml`
- `[value](#=)`: pending, not yet verified

Tables and figures: mark the caption, not individual cells.
```markdown
[**Table 1: Results by model.**](#=table1)

| Model | Accuracy |
|-------|----------|
| GPT-4 | 84.7%    |
```

### Specs in `specs.yaml`
Two reserved fields (`prompt` and `note`); everything else is a template variable:
```yaml
clf_lr_acc:
  prompt: "Run: python eval/classify.py. Report logistic_regression.accuracy from the JSON output."
  note: "5-fold stratified CV, seed 42."
```

Use YAML anchors to DRY up repeated patterns:
```yaml
_templates:
  classify: &clf
    prompt: "Run: python eval/classify.py. Report {model}.{metric} from the JSON output."

clf_lr_acc:
  <<: *clf
  model: logistic_regression
  metric: accuracy

clf_rf_acc:
  <<: *clf
  model: random_forest
  metric: accuracy
```

`{var}` references are expanded at build time. Only `prompt` and `note` reach the reader.
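Conceptually, build-time expansion treats every non-reserved key as a variable and substitutes it into the reserved fields. A minimal sketch in Python, assuming `str.format`-style substitution; this is not the CLI's actual implementation:

```python
# Hypothetical sketch of {var} expansion for a single spec entry.
# Reserved fields are kept; every other key is treated as a template variable.
RESERVED = ("prompt", "note")

def expand(spec: dict) -> dict:
    variables = {k: v for k, v in spec.items() if k not in RESERVED}
    return {k: spec[k].format(**variables) for k in RESERVED if k in spec}

spec = {
    "prompt": "Run: python eval/classify.py. Report {model}.{metric} from the JSON output.",
    "model": "logistic_regression",
    "metric": "accuracy",
}
print(expand(spec)["prompt"])
```

After YAML anchor merging, an entry like `clf_lr_acc` above reduces to a flat dict of this shape, so only the expanded `prompt` (and `note`, if present) reaches the reader.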
## Project config (`livepaper.yaml`)
```yaml
title: "Your Paper Title"
setup: |
  Help me navigate this replicable livepaper. Clone https://github.com/org/repo and cd into it.
  Run: pip install -r requirements.txt
  You are now ready to replicate any result.
```

The `setup` field is shown in a banner at the top of the built page with a copy button.
## Repo structure
```text
your-paper-repo/
├── paper/               # your original LaTeX (untouched)
│   └── paper.tex
├── livepaper/           # created by livepaper init
│   ├── INSTRUCTION.md   # agent playbook
│   ├── paper.md         # markdown + markers
│   ├── specs.yaml       # replication prompts
│   ├── livepaper.yaml   # project config
│   ├── assets/figures/  # figure images
│   └── dist/            # built output
│       ├── index.html
│       └── specs.json
└── ...                  # other necessary files
```

## License
MIT
