quackscore

v0.1.9

Analyze GitHub activity and generate a Duck RPG-style developer card

🦆 quackscore

"Every PR merged is a feather in your cap. Or your tail. Ducks aren't picky."

quackscore turns your GitHub pull request history into a Duck RPG-style developer card. It fetches your merged PRs, scores them, classifies them with an LLM, and generates a slick local HTML report — because your contributions deserve more than a green square.

npx quackscore create <your-github-username>

That's it. Quack.


What it does

  • Fetches your merged pull requests from GitHub
  • Uses an LLM to classify each PR (complexity, area, type, technologies)
  • Awards points and levels based on your contribution history
  • Generates a local HTML developer card you can be proud of
  • Stores history locally so incremental updates are fast
  • Separates PR analysis from profile summary regeneration for fine-grained control

Requirements

  • Node.js 18+
  • QUACKSCORE_GH_TOKEN — a GitHub personal access token. See GitHub fine-grained token setup.
  • QUACKSCORE_LLM_API_KEY — API key for your chosen LLM provider. See LLM provider setup. Not required for Ollama or unauthenticated LiteLLM proxies.

Install

npm install -g quackscore

Or run without installing:

npx quackscore --help

Supported LLM providers

| Provider | Value | Notes |
|---|---|---|
| Anthropic | anthropic | |
| OpenAI | openai | |
| Google | google | |
| OpenRouter | openrouter | |
| MiniMax | minimax | |
| Ollama (local) | ollama | No API key needed |
| LiteLLM proxy | litellm | Routes to Azure OpenAI, Bedrock, and more |

All remote providers use QUACKSCORE_LLM_API_KEY. Ollama needs no key. LiteLLM's key is optional (only needed if your proxy requires auth).

Need setup help? See LLM provider setup.


Quick start

1. Set environment variables

Need a GitHub token first? Follow GitHub fine-grained token setup.

export QUACKSCORE_GH_TOKEN=<your-github-token>
export QUACKSCORE_LLM_API_KEY=<your-llm-api-key>
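
If you want to fail fast before a long run, you can assert the variables are set up front. A small sketch — the token values below are illustrative placeholders, not real credentials:

```shell
# Illustrative placeholder values -- substitute your real tokens.
export QUACKSCORE_GH_TOKEN="ghp_exampletoken"
export QUACKSCORE_LLM_API_KEY="sk-examplekey"

# Fail early with a clear message if either variable is unset or empty.
# (Skip the LLM key check for Ollama or an unauthenticated LiteLLM proxy.)
: "${QUACKSCORE_GH_TOKEN:?QUACKSCORE_GH_TOKEN is required}"
: "${QUACKSCORE_LLM_API_KEY:?QUACKSCORE_LLM_API_KEY is required for remote providers}"
echo "environment looks good"
```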

2. Pick your LLM provider

Need help choosing or configuring a provider? See LLM provider setup.

quackscore init --provider anthropic --model claude-haiku-4-5-20251001

Config is saved to ~/.quackscore/config.json.
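
For reference, the saved config is plain JSON. A sketch of the kind of shape it may take — the field names here are assumptions for illustration; `quackscore init` writes the real file for you:

```shell
# Sketch only: approximate shape of the JSON that `init` saves.
# Field names are assumptions -- let `quackscore init` write the real file.
mkdir -p /tmp/quackscore-example
cat > /tmp/quackscore-example/config.json <<'EOF'
{
  "provider": "anthropic",
  "model": "claude-haiku-4-5-20251001"
}
EOF
```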

3. Generate your card

quackscore create <github-username>

This always rebuilds from scratch — fetches all merged PRs, analyzes each one, generates your profile summary, and produces a fresh card. Any existing local profile is overwritten.

Filter by org or repo:

quackscore create <github-username> --organisation <org> --repository <repo>

4. Update later (only new PRs)

quackscore update <github-username>

Fetches only PRs merged since the last run, analyzes them, and updates your stats, level, and charts. Your existing profile summary is left untouched — use update-summary when you want to regenerate it.

5. Refresh your profile summary

quackscore update-summary <github-username>

Regenerates the RPG title and written summary by sending your full PR history to the LLM. Does not re-fetch or re-analyze any PRs. Useful after a batch of update runs, or whenever you want a fresh take on your narrative.
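
After a batch of update runs, the two commands chain naturally. A sketch of a convenience wrapper — `refresh_card` is a hypothetical helper name, not a built-in quackscore command:

```shell
# refresh_card: incremental PR update followed by a summary rebuild.
# Hypothetical convenience wrapper -- not part of quackscore itself.
refresh_card() {
  quackscore update "$1" && quackscore update-summary "$1"
}
```

Call it as `refresh_card <github-username>` once the tokens above are exported; the summary step only runs if the update succeeds.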

6. Re-open a saved card

quackscore show <github-username>

7. Leaderboard

quackscore leaderboard

8. Try it without credentials

quackscore mock-report

Generates a demo card from 20 hardcoded PRs. No GitHub or LLM key needed.


Other provider examples

# OpenAI
quackscore init --provider openai --model gpt-4o-mini

# Google
quackscore init --provider google --model gemini-2.0-flash

# OpenRouter
quackscore init --provider openrouter --model anthropic/claude-haiku-4

# MiniMax
quackscore init --provider minimax --model MiniMax-M2.7

# Ollama (local)
quackscore init --provider ollama --model llama3.2

# LiteLLM proxy — Azure OpenAI
quackscore init --provider litellm --model azure/gpt-4o-mini --base-url http://localhost:4000

# LiteLLM proxy — AWS Bedrock
quackscore init --provider litellm --model bedrock/claude-3-5-sonnet --base-url http://localhost:4000

LiteLLM lets you run a single proxy that routes to Azure OpenAI, AWS Bedrock, Hugging Face, and many other backends. Start it with:

pip install litellm
litellm --model azure/gpt-4o-mini  # or whichever backend you want

Then set your key if the proxy requires auth:

export QUACKSCORE_LLM_API_KEY=<your-litellm-master-key>

Data storage

Everything lives in ~/.quackscore/:

| File | Contents |
|---|---|
| config.json | Provider and model settings |
| <username>.json | Analyzed PR data |
| <username>.html | Your generated developer card |
| leaderboard.json | Local leaderboard cache |


Notes

  • The GitHub token is required regardless of LLM setup.
  • API keys are read from environment variables only — never written to disk.
  • create always rebuilds from scratch; update only fetches and analyzes new PRs without touching the existing summary; update-summary regenerates the summary without touching PRs.
  • Add --diagnostics to any command for detailed internal tracing.