receiptscc

v1.0.6

Verify your citations say what you claim. One command.


The Problem

GPTZero found 100 hallucinated citations across 51 papers at NeurIPS 2024. Those are the fake ones.

Nobody is counting the real papers that don't say what authors claim.

Your manuscript says: "Smith et al. achieved 99% accuracy on all benchmarks"

The actual paper says: "We achieve 73% accuracy on the standard benchmark"

Not fraud. Just human memory + exhaustion + LLM assistance = systematic misquotation.

receipts catches this before your reviewers do.


What is this?

Give it your paper. Give it the PDFs you cited. It reads both. Tells you what's wrong.

Runs inside Claude Code (Anthropic's terminal assistant). One command. ~$0.50-$5 per paper.

Built by an MD/PhD student who got tired of manually re-checking citations at 2am before deadlines.


Before You Start

You need two things:

1. Node.js

Check if you have it:

node --version

If you see a version number, you're good. If you see "command not found", download Node.js from nodejs.org and install it.
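A successful check prints a version string, for example (your number will differ):

v20.11.0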

2. Anthropic API Key or Pro/Max Plan

You need one of these to use Claude Code:

  • API key: Get one at console.anthropic.com. Requires a payment method.
  • Pro or Max plan: If you subscribe to Claude Pro ($20/mo) or Max ($100/mo), you can use Claude Code without a separate API key.

Setup (5 minutes)

Step 1: Open your terminal

Mac: Press Cmd + Space, type Terminal, press Enter

Windows: Press Win + X, click "Terminal" or "PowerShell"

Linux: Press Ctrl + Alt + T


Step 2: Install Claude Code

Copy this command and paste it into your terminal:

npm install -g @anthropic-ai/claude-code

Wait for it to finish.
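To confirm the install worked, you can ask the CLI for its version (this assumes npm placed the claude binary on your PATH):

claude --version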


Step 3: Install receiptscc

Copy and run this:

npx receiptscc

You will see a receipt banner. That means it worked. You only do this once.


Step 4: Set up your paper folder

Create a folder with your paper and sources:

thesis/
├── my_paper.pdf          ← your paper (any name)
└── sources/              ← create this folder
    ├── smith_2020.pdf    ← PDFs you cited
    ├── jones_2021.pdf
    └── chen_2019.pdf

Put your paper in the folder. Create a subfolder called sources. Put the PDFs you cited inside sources.
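If you prefer to do this from the terminal, here is a minimal sketch (the paths and file names are just examples; substitute your own):

mkdir -p ~/Desktop/thesis/sources
mv my_paper.pdf ~/Desktop/thesis/
mv smith_2020.pdf jones_2021.pdf chen_2019.pdf ~/Desktop/thesis/sources/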


Step 5: Open Claude Code

Navigate to your paper folder and start Claude Code:

cd ~/Desktop/thesis
claude

Windows users: Replace ~/Desktop/thesis with your actual path, like C:\Users\YourName\Desktop\thesis

The first time you run claude, it will ask you to authenticate: sign in with your Pro/Max account, or paste your API key.
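If you'd rather not paste the key interactively, Claude Code can also read it from the standard ANTHROPIC_API_KEY environment variable (macOS/Linux shell shown; the key below is a placeholder):

export ANTHROPIC_API_KEY="sk-ant-your-key-here"
claude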


Step 6: Run receipts

Now you are inside Claude Code. Type this command:

/receipts

Important: The /receipts command only works inside Claude Code. If you type it in your regular terminal, it will not work.

receipts will read your paper, read your sources, and check every citation. When it finishes, it creates a file called RECEIPTS.md in your folder with the results.
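Once the report exists, you can read it straight from the terminal; RECEIPTS.md is plain Markdown, so any pager or editor works:

less RECEIPTS.md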


What You Get

A detailed verdict for each citation showing exactly what's wrong and how to fix it. Here's an example:

Verdict: Reference 1 — ADJUST

Citation: Srivastava et al. (2014). Dropout: A Simple Way to Prevent Neural Networks from Overfitting. JMLR.

Summary: Reference 1 is cited four times in the manuscript. Two citations are accurate: the general description of dropout and the direct quote about co-adaptations. However, two citations contain errors that require correction.


Instance 2 — Section "Dropout Regularization", paragraph 2

Manuscript claims:

"According to Srivastava et al., the optimal dropout probability is p=0.5 for all layers, which they found to work well across a wide range of networks and tasks."

Source states:

"In the simplest case, each unit is retained with a fixed probability p independent of other units, where p can be chosen using a validation set or can simply be set at 0.5, which seems to be close to optimal for a wide range of networks and tasks."

Source also states:

"All dropout nets use p=0.5 for hidden units and p=0.8 for input units."

Assessment: NOT SUPPORTED

Discrepancy: The manuscript claims "p=0.5 for all layers" but the source explicitly states p=0.5 for HIDDEN units and p=0.8 for INPUT units.


Instance 3 — Section "Dropout Regularization", paragraph 2

Manuscript claims:

"Using this approach, they achieved an error rate of 0.89% on MNIST, demonstrating state-of-the-art performance at the time."

Source states:

"Error rates can be further improved to 0.94% by replacing ReLU units with maxout units."

Assessment: NOT SUPPORTED

Discrepancy: The manuscript claims 0.89% error rate, but the source states 0.94%. The figure 0.89% does not appear in the source.


Required Corrections

  1. Change p=0.5 for all layers → p=0.5 for hidden units and p=0.8 for input units
  2. Change 0.89% → 0.94%

| Status | Meaning |
|--------|---------|
| VALID | Citation is accurate |
| ADJUST | Small fix needed |
| INVALID | Source doesn't support claim |


Cost

| Paper Size | Citations | Haiku 3.5 | Sonnet 4 | Opus 4.5 |
|------------|-----------|-----------|----------|----------|
| Short | 10 | ~$0.50 | ~$2 | ~$9 |
| Medium | 25 | ~$1.30 | ~$5 | ~$24 |
| Full | 50 | ~$3 | ~$11 | ~$56 |
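A rough rule of thumb from the table: Haiku works out to about $0.05 per citation, Sonnet to about $0.20 ($5 / 25), and Opus to about $1, so you can estimate your cost as (number of citations) × (per-citation rate) for the model you pick.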

Use Haiku for drafts. Opus for final submission.


Lightweight Install

receipts adds only 29 tokens to your Claude Code context:

| Component | What it is | Tokens |
|-----------|------------|--------|
| /receipts | The command definition | 13 |
| receipts-verifier | Agent template for verification | 16 |

That's the install footprint: two tiny files. The actual verification work uses Claude's normal token budget (hence the ~$0.50-$5 cost per paper).


Troubleshooting

"npm: command not found"

You need Node.js. Download it from nodejs.org.

"bash: /receipts: No such file or directory"

You typed /receipts in your regular terminal. You need to type it inside Claude Code. First run claude to start Claude Code, then type /receipts.

"No manuscript found"

Make sure your PDF is in the root folder, not inside a subfolder.

"No sources directory"

Create a folder called exactly sources (lowercase) and put your cited PDFs inside.
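From inside your paper folder, this creates the folder and confirms the layout (assuming a Unix-style shell):

mkdir -p sources    # creates it if it doesn't already exist
ls                  # you should see your paper PDF and sources/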

Claude Code asks for an API key

Either get an API key at console.anthropic.com, or subscribe to Claude Pro/Max at claude.ai.


License

MIT


Disclaimer

By using this tool, you confirm that you have the legal right to upload and process all documents you provide. This includes ensuring compliance with copyright laws, publisher terms, and institutional policies. Many academic papers are protected by copyright; you are responsible for verifying you have appropriate permissions (e.g., personal copies for research, open-access publications, or institutional access rights).

This tool is provided "as is" without warranty of any kind. The author assumes no liability for any claims arising from your use of this software or the documents you process with it. Use of this tool is also subject to Anthropic's Terms of Service.