pull-request-score


pull-request-score collects and aggregates pull request data from GitHub. It was built to help teams understand how code moves through their repositories and to highlight opportunities for process improvements. The project exposes both a CLI for quick analysis and a library for building custom workflows.

Why does this exist?

Tracking cycle time, review responsiveness and CI reliability across many pull requests is tedious. This package automates the heavy lifting so you can focus on interpreting the numbers. It works against the public GitHub API and GitHub Enterprise installations and is designed to scale to large organizations with many concurrent pull requests.

Requirements

  • Node.js 18 or newer
  • pnpm package manager

Features

  • Comprehensive metrics – see the Metric Reference for the full list.
  • Support for GitHub Enterprise via the --base-url option.
  • CLI and library usage for flexibility.
  • Label based filtering so monorepo users can target a specific team or category of work.
  • Ticket ID extraction from PR titles like BOSS-1252 to capture the team prefix and ticket number (see Parsing ticket IDs below).
  • Optional throttling helper for environments with strict API limits.

Parsing ticket IDs

import { parseTicket, hasTicket } from 'pull-request-score'

parseTicket('BOSS-1252 fix bug')
// => { team: 'BOSS', number: 1252 }

hasTicket('no ticket here')
// => false
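
For instance, you can bucket PR titles by team prefix. A minimal sketch, assuming parseTicket returns a falsy value when a title has no ticket ID:

import { parseTicket } from 'pull-request-score'

// Group PR titles by team prefix; titles without a ticket land in 'untracked'.
// Assumes parseTicket returns null/undefined for titles without a ticket ID.
const titles = ['BOSS-1252 fix bug', 'BOSS-1301 refactor auth', 'typo fix']
const byTeam: Record<string, string[]> = {}
for (const title of titles) {
  const ticket = parseTicket(title)
  const key = ticket ? ticket.team : 'untracked'
  byTeam[key] = byTeam[key] ?? []
  byTeam[key].push(title)
}
// byTeam => { BOSS: ['BOSS-1252 fix bug', 'BOSS-1301 refactor auth'], untracked: ['typo fix'] }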

Documentation

Browse the website for usage guides and API details at https://owner.github.io/pull-request-score/.

Installation

pnpm add pull-request-score

When you only need a quick report you can run the bundled gh-pr-metrics CLI without installing the package by pointing npx at it:

npx --package pull-request-score gh-pr-metrics octocat/hello-world --since 30d

Using the CLI

npx gh-pr-metrics <owner/repo> --since 30d --token YOUR_TOKEN \
  --base-url https://github.mycompany.com/api/v3 --progress

Options

  • --since <duration> – look-back period; values like 30d or 2w are supported. Defaults to 90d.
  • --token <token> – GitHub token; alternatively, set the GH_TOKEN environment variable.
  • --base-url <url> – API root when running against GitHub Enterprise.
  • --format <json|csv> – output format. Defaults to JSON.
  • --output <path|stdout|stderr> – where to write metrics. Defaults to stdout.
  • --progress – show fetching progress on stderr.
  • --dry-run – print the options that would be used and exit.
  • --include-labels <a,b> – only include pull requests that have any of the given labels.
  • --exclude-labels <a,b> – skip pull requests that have any of the given labels.

Label filters make it easy for large enterprises to slice metrics per team when many groups share a monorepo. Omitting the filters aggregates over all pull requests.

Examples

Output metrics as CSV:

npx gh-pr-metrics myorg/app --format csv --token MY_TOKEN \
  --output metrics.csv

Generate metrics for just the team-a labeled pull requests:

npx gh-pr-metrics myorg/app --since 7d --token MY_TOKEN \
  --include-labels team-a

Exclude work in progress PRs:

npx gh-pr-metrics myorg/app --exclude-labels wip --token MY_TOKEN
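
Because reports default to JSON on stdout, they compose with standard tools. A hedged example, assuming the JSON output exposes top-level keys named after the metrics (an assumption about the output shape, not documented behavior):

npx gh-pr-metrics myorg/app --token MY_TOKEN | jq '.mergeRate'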

Library Usage

All functionality exposed by the CLI is also available programmatically.

import {
  collectPullRequests,
  calculateMetrics,
  calculateCycleTime,
  calculateReviewMetrics,
  calculateCiMetrics,
  writeOutput,
} from "pull-request-score";

// Fetch the last 30 days of pull requests, keeping only those labeled team-a.
const prs = await collectPullRequests({
  owner: "octocat",
  repo: "hello-world",
  since: new Date(Date.now() - 30 * 86_400_000).toISOString(),
  auth: "YOUR_TOKEN",
  includeLabels: ["team-a"],
});

// Aggregate metrics across every fetched pull request.
const metrics = calculateMetrics(prs, {
  outsizedThreshold: 1500, // cutoff for flagging outsized PRs (see the metric reference)
  staleDays: 60, // age after which an open PR counts as stale
});

// Per-PR calculations are also available.
console.log(calculateCycleTime(prs[0]));
console.log(calculateReviewMetrics(prs[0]));
console.log(calculateCiMetrics(prs[0]));
writeOutput(metrics, { format: "json" });

See docs/rate-limiter.md for details on pacing API requests and docs/write-output.md for controlling output formats. For custom metrics see the Plugin API docs.
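
The rate limiter described in docs/rate-limiter.md is the supported way to pace requests. Purely to illustrate the idea, and not the package's API, a library-agnostic sketch of fixed-delay pacing might look like this:

// Library-agnostic pacing sketch (NOT the package's rate-limiter API):
// space successive requests by a fixed delay to stay under API limits.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms))

async function paced<T>(tasks: Array<() => Promise<T>>, delayMs = 500): Promise<T[]> {
  const results: T[] = []
  for (const task of tasks) {
    results.push(await task()) // one request at a time
    await sleep(delayMs)       // then wait before the next
  }
  return results
}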

Enterprise Usage

Enterprises can weight every available metric to derive an overall pull request health score. The example below connects to a GitHub Enterprise server, enables comment quality analysis and combines all metrics into a single number.

import {
  collectPullRequests,
  calculateMetrics,
  scoreMetrics,
} from 'pull-request-score'

const prs = await collectPullRequests({
  owner: 'my-org',
  repo: 'my-repo',
  baseUrl: 'https://github.mycompany.com/api/v3',
  auth: process.env.GH_TOKEN,
  since: new Date(Date.now() - 30 * 86_400_000).toISOString(),
})

const metrics = calculateMetrics(prs, { enableCommentQuality: true })

// Helpers: convert 0-1 ratios to percentages and total up per-key tallies.
const pct = (v: number) => v * 100
const sum = (obj: Record<string, number>) =>
  Object.values(obj).reduce((a, b) => a + b, 0)

// Negative weights penalize a metric; positive weights reward it.
const enterpriseScore = scoreMetrics(metrics, [
  { metric: 'cycleTime', weight: -0.05 },
  { metric: 'pickupTime', weight: -0.05 },
  { metric: 'mergeRate', weight: 0.05, normalize: pct },
  { metric: 'closedWithoutMergeRate', weight: -0.05, normalize: pct },
  { metric: 'reviewCoverage', weight: 0.05, normalize: pct },
  { metric: 'averageCommitsPerPr', weight: -0.05 },
  { metric: 'outsizedPrs', weight: -0.05, fn: m => m.outsizedPrs.length },
  { metric: 'buildSuccessRate', weight: 0.05, normalize: pct },
  { metric: 'averageCiDuration', weight: -0.05 },
  { metric: 'stalePrCount', weight: -0.05 },
  { metric: 'hotfixFrequency', weight: -0.05, normalize: pct },
  { metric: 'prBacklog', weight: -0.05 },
  { metric: 'prCountPerDeveloper', weight: 0.05, fn: m => Object.keys(m.prCountPerDeveloper).length },
  { metric: 'reviewCounts', weight: 0.05, fn: m => sum(m.reviewCounts) },
  { metric: 'commentCounts', weight: 0.05, fn: m => sum(m.commentCounts) },
  { metric: 'commenterCounts', weight: 0.05, fn: m => sum(m.commenterCounts) },
  { metric: 'discussionCoverage', weight: 0.05, normalize: pct },
  { metric: 'commentDensity', weight: 0.05, normalize: pct },
  { metric: 'commentQuality', weight: 0.05, normalize: pct },
])

console.log(`Enterprise score: ${enterpriseScore}`)
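
A natural extension, sketched here rather than documented by the package, is scoring several repositories with one rule set and ranking them:

// Hypothetical: rank repositories by health score with a shared,
// deliberately small rule set. Reuses pct from the example above.
const rules = [
  { metric: 'mergeRate', weight: 0.5, normalize: pct },
  { metric: 'reviewCoverage', weight: 0.5, normalize: pct },
]
const since = new Date(Date.now() - 30 * 86_400_000).toISOString()

const ranked = await Promise.all(
  ['my-org/api', 'my-org/web'].map(async (slug) => {
    const [owner, repo] = slug.split('/')
    const prs = await collectPullRequests({ owner, repo, auth: process.env.GH_TOKEN, since })
    return { slug, score: scoreMetrics(calculateMetrics(prs), rules) }
  }),
)
ranked.sort((a, b) => b.score - a.score) // highest score first
console.table(ranked)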

Development

pnpm install
pnpm test

Contributions are welcome! The project is intentionally small and easy to extend with additional metrics.

See docs/metric-reference.md for definitions of all available metrics.

scoreMetrics

After calculating metrics you can derive a single numeric score by combining them with custom weights. Each rule contributes weight × (optionally normalized) metric value, and the contributions are summed.

import { scoreMetrics } from 'pull-request-score'

const score = scoreMetrics(metrics, [
  { weight: 0.6, metric: 'mergeRate' },
  { weight: 0.4, metric: 'reviewCoverage' },
])
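
With hypothetical inputs of mergeRate = 0.9 and reviewCoverage = 0.8, that rule set works out to:

// Weighted sum, matching the semantics shown in the final example below:
// 0.6 * 0.9 + 0.4 * 0.8 = 0.54 + 0.32 = 0.86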

Rules may also use custom functions to leverage any metric data:

const score = scoreMetrics(metrics, [
  { weight: 1, metric: 'mergeRate' },
  { weight: -0.1, fn: m => m.prBacklog },
])

Metrics can be normalized before weighting using the normalize option. This is useful for converting 0–1 ratios onto a 0–100 scale:

const score = scoreMetrics(metrics, [
  { weight: 0.5, metric: 'mergeRate', normalize: v => v * 100 },
  { weight: 0.5, metric: 'reviewCoverage', normalize: v => v * 100 },
])

To include comments in your score, combine discussionCoverage and commentQuality (enable with enableCommentQuality in calculateMetrics):

const score = scoreMetrics(metrics, [
  { weight: 0.5, metric: 'discussionCoverage', normalize: v => v * 100 },
  { weight: 0.5, metric: 'commentQuality', normalize: v => v * 100 },
])

To reward raw comment volume across all PRs you can supply a custom rule:

const totalComments = (m: any) =>
  Object.values(m.commentCounts).reduce((a, b) => a + b, 0)

const score = scoreMetrics(metrics, [
  { weight: 0.7, fn: totalComments },
  { weight: 0.3, metric: 'commentQuality', normalize: v => v * 100 },
])

More advanced transforms can convert ranges of values to discrete scores. The createRangeNormalizer helper makes this easy:

import { scoreMetrics, createRangeNormalizer } from 'pull-request-score'

const normalizePickupTime = createRangeNormalizer(
  [
    { max: 4, score: 100 },
    { max: 6, score: 80 },
    { max: 12, score: 60 },
  ],
  40, // fallback score when the value exceeds every max
)

const score = scoreMetrics({ pickupTime: 5 }, [
  { weight: 1, metric: 'pickupTime', normalize: normalizePickupTime },
])
// => 80 (5 falls in the ≤ 6 bucket)

You can reuse the normalizer alongside others for a combined score:

const metrics = { pickupTime: 2, mergeRate: 0.95 }

const normalizePickupTime = createRangeNormalizer(
  [
    { max: 4, score: 100 },
    { max: 6, score: 80 },
    { max: 12, score: 60 },
  ],
  40,
)

const pct = (v: number) => Math.round(v * 100)

const score = scoreMetrics(metrics, [
  { weight: 0.5, metric: 'pickupTime', normalize: normalizePickupTime },
  { weight: 0.5, metric: 'mergeRate', normalize: pct },
])
// => 97.5 (0.5 * 100 + 0.5 * 95)

Multiple rules can even target the same metric; here, two rounding strategies are averaged:

scoreMetrics({ mergeRate: 0.955 }, [
  { weight: 0.5, metric: 'mergeRate', normalize: v => Math.floor(v * 100) },
  { weight: 0.5, metric: 'mergeRate', normalize: v => Math.ceil(v * 100) },
])
// => 95.5 (0.5 * 95 + 0.5 * 96)