
claude-attribution v1.9.0

AI code attribution tracking for Claude Code and GitHub Copilot sessions
claude-attribution

AI code attribution for Claude Code and GitHub Copilot. After every git commit, a one-line summary appears in your terminal:

[claude-attribution] a3f1b2c — 142 AI / 38 human / 4 mixed lines (77% AI)

When a PR is opened, full metrics — model usage, token counts when available, tool calls, and attribution percentages — are injected automatically into the PR body. No copy-paste, no manual tracking.

Quick start:

npm install -g claude-attribution
claude-attribution install ~/Code/your-repo
claude-attribution init --ai    # if repo was built with Claude Code; use --human otherwise
git add .claude/settings.json .github/workflows/claude-attribution-pr.yml .gitignore
git commit -m "chore: install claude-attribution hooks"
git push

Using Copilot or @claude (claude-code-action)? Local Copilot CLI sessions are parsed from ~/.copilot/session-state/..., and bot commits are auto-detected as 100% AI when no local session data exists. See Copilot CLI Support and AI Actor Attribution.

Requirements: Bun (preferred) or Node 18+, and gh (GitHub CLI) authenticated for the /pr command.


claude-attribution measures which lines in a commit were written by an AI assistant vs. a human, using checkpoint snapshots and line-level SHA-256 comparison when available, with explicit fallbacks for Copilot CLI sessions and hosted AI bot commits.


GitHub Actions Requirements

Up to three workflows are installed into repos that use this tool — one always, two optionally. Each requires specific GitHub Actions to be allowed in your org. If your org enforces an action allowlist, you must approve the third-party action before the workflows will run.

Workflows and the actions they use

| Workflow file | Trigger | Install | GitHub-owned actions | Third-party actions |
|---|---|---|---|---|
| claude-attribution-pr.yml | PR opened / pushed | Always | actions/checkout@v4, actions/github-script@v7 | oven-sh/setup-bun@v2 |
| claude-attribution-export.yml | PR merged | Optional (prompted) | actions/checkout@v4 | oven-sh/setup-bun@v2 |
| claude-attribution-gha.yml | Every push | Optional (claude-code-action detected) | actions/checkout@v4 | oven-sh/setup-bun@v2 |

GitHub-owned actions (actions/*) are pre-approved in all orgs by default.

oven-sh/setup-bun@v2 is a third-party action from the Bun team. It installs the Bun runtime so the claude-attribution CLI can execute without falling back to npx tsx. If your org requires explicit action approvals, add oven-sh/setup-bun to the allowlist at:

Settings → Actions → General → Allow actions and reusable workflows → add oven-sh/setup-bun@*

Workflow permissions

| Workflow | contents | pull-requests | Why |
|---|---|---|---|
| claude-attribution-pr.yml | read | write | Reads git history; writes metrics into the PR body |
| claude-attribution-export.yml | read | — | Reads git notes and exports metrics via OTLP/webhook |
| claude-attribution-gha.yml | write | — | Pushes attribution git notes back to origin |

Required secrets

| Secret / Variable | Workflow | Required | Notes |
|---|---|---|---|
| GITHUB_TOKEN | All | Automatic | Provided by GitHub Actions; no setup needed |
| OTEL_EXPORTER_OTLP_ENDPOINT | claude-attribution-export.yml | One of these | OTLP/HTTP base URL (e.g. https://otlp.datadoghq.com, https://otlp-gateway-<zone>.grafana.net/otlp, http://localhost:4318). Unset = dry-run mode. |
| OTEL_EXPORTER_OTLP_HEADERS | claude-attribution-export.yml | Depends on backend | Comma-separated key=value auth headers (e.g. DD-Api-Key=xxx for Datadog, x-honeycomb-team=xxx for Honeycomb) |
| DATADOG_API_KEY | claude-attribution-export.yml | Datadog shortcut | Org-level secret; auto-configures the Datadog OTLP endpoint without needing OTEL_EXPORTER_OTLP_ENDPOINT. |
| DATADOG_SITE | claude-attribution-export.yml | No | Org-level variable; used with DATADOG_API_KEY. Defaults to datadoghq.com. |
| METRICS_WEBHOOK_URL | claude-attribution-export.yml | No | Posts a flat JSON payload to any HTTP endpoint on each PR merge. Can be used alongside OTLP. |


For Repo Maintainers: Installing Into a Repo

Prerequisites

  • Bun is the preferred runtime (curl -fsSL https://bun.sh/install | bash)
  • If Bun isn't available, tsx also works: npm install -g tsx. The tool auto-detects the runtime (bun → tsx → npx tsx).

One-time setup (per developer machine)

npm install -g claude-attribution

Install into a repo (per repo, per developer)

Step 1 — Run the installer:

claude-attribution install ~/Code/your-repo

Step 2 — Declare your attribution baseline (init):

This step tells the tool whether the codebase was written by Claude or by humans before this install. It only needs to be run once.

# Repo was built entirely with Claude Code — mark all files as AI-written:
claude-attribution init --ai

# Repo is human-written, or a mix — confirm the default (no note written):
claude-attribution init --human

# Not sure? Run with no flag — same as --human, prints a confirmation:
claude-attribution init

# Repo has mixed history — mark commits after a specific date as AI-written:
claude-attribution init --ai-since 2025-01-01

Why this matters: Without init, the codebase-wide AI% starts at 0% and grows only from new commits. If your repo is all Claude Code, run init --ai now or the metrics will be misleading until the entire codebase has been re-committed line by line.

Step 3 — Commit and push:

git add .claude/settings.json .github/workflows/claude-attribution-pr.yml .gitignore
git commit -m "chore: install claude-attribution hooks"
git push

# If you ran init --ai, also push the minimap notes:
git push origin refs/notes/claude-attribution-map

The installer makes the following changes to the target repo:

.claude/settings.json — merges six Claude Code hooks:

| Event | Hook | What it does |
|-------|------|--------------|
| PreToolUse (Edit/Write/MultiEdit/NotebookEdit) | pre-tool-use.ts | Snapshot file content before Claude touches it |
| PostToolUse (all tools) | post-tool-use.ts | Snapshot file after Claude writes + log tool call |
| PostToolUse (Bash) | post-bash.ts | Detect gh pr create and inject metrics into the new PR |
| SubagentStart / SubagentStop | subagent.ts | Log subagent activity |
| Stop | stop.ts | No-op; registered for future use |

.git/hooks/post-commit — runs attribution after every commit. If the repo already has a post-commit hook from Husky or another tool, the call is appended rather than replacing it. For Lefthook repos, the installer prints the config snippet to add manually.

Best-effort notes sync — after each commit, the post-commit hook writes the attribution notes locally and then tries to push refs/notes/claude-attribution and refs/notes/claude-attribution-map to origin. The commit note now carries durable session metadata too (model usage, notable tool counts, agent counts, skills, and session timing), so PR metrics can be rebuilt in CI even when local .claude logs and transcripts are unavailable. If a notes push is rejected because the remote ref moved first, it fetches the remote notes ref, runs git notes merge, and retries the push. No pre-push hook or remote.origin.push refspec is installed, so plain git push does not fail just because notes metadata raced elsewhere.

.github/workflows/claude-attribution-pr.yml — GitHub Actions workflow that fires on every PR open and push. Injects metrics into the PR body automatically for PRs created outside Claude (Copilot, manual gh pr create, GitHub UI). Skips injection if the local post-bash hook already injected metrics on opened; always updates on synchronize (new commits).

.github/workflows/claude-attribution-export.yml — fires on every PR merge. Exports AI attribution metrics via OTLP/HTTP or a generic webhook. Supports Datadog, Grafana Cloud, Splunk Observability, New Relic, Honeycomb, and any OpenTelemetry Collector. Runs in dry-run mode (prints payload to stdout, exits 0) when no export destination is configured. See Metrics Export.

.github/workflows/claude-attribution-gha.yml (optional — installed when claude-code-action is detected) — fires on every push. Writes a 100% AI git note for commits made by known AI actors (e.g. @claude via claude-code-action, Copilot coding agent). Silent no-op for human commits. See AI Actor Attribution.

.claude/commands/ — installs three slash commands:

  • /metrics — generate a PR metrics report
  • /start — mark the start of a new ticket session
  • /pr — create a PR with metrics embedded

.gitignore — adds .claude/logs/ so tool usage logs don't end up in version control.

Attribution minimap — detailed options

The attribution minimap tracks cumulative AI% across the entire codebase, carrying attribution forward across sessions and developers. For new commits it is updated automatically. For the history that predates the install, you declare the baseline once using claude-attribution init.

There are three options depending on the history of your repo:

Option 1 — Repo was built entirely with Claude Code (--ai):

claude-attribution init --ai
git push origin refs/notes/claude-attribution-map

Marks every currently tracked file as AI-written at HEAD. After this, PR metrics will show:

Codebase: ~100% AI (4150 / 4150 lines)
This PR: 184 lines changed (4% of codebase) · 77% Claude edits · 142 AI lines

Option 2 — Repo is human-written, or a mix (--human / no flag):

claude-attribution init          # or: claude-attribution init --human

Prints a confirmation that the default human baseline is already in effect — no note is written. Attribution accumulates naturally as Claude writes new code going forward.

Option 3 — Already had claude-attribution installed before v1.2.0: The minimap feature didn't exist before v1.2.0 — per-session notes are intact but the codebase-wide signal was missing. Run init --ai now if the repo is all Claude Code, or do nothing (human default) if it's a mix:

claude-attribution init --ai     # only if repo was built 100% with Claude Code
git push origin refs/notes/claude-attribution-map

Re-installing

If you reinstall claude-attribution globally (e.g. after upgrading), you can now refresh every tracked repo in one shot:

claude-attribution update

The installer records each repo installed with claude-attribution v1.8.0 or later in a per-user registry at ~/.claude/claude-attribution/installed-repos.json. update re-runs the installer for every still-valid tracked repo in that registry, skips repos already on the current CLI version, and prunes paths that no longer exist or are no longer git repos. Repos installed before v1.8.0 will not appear there until you run claude-attribution install for them once on the newer CLI.

To force a reinstall even when the recorded version already matches, use:

claude-attribution update --force

You can still re-run the installer manually for a single repo when needed:

claude-attribution install ~/Code/your-repo

Uninstalling

To remove claude-attribution from a repo:

claude-attribution uninstall ~/Code/your-repo

This removes:

  • the hooks from .claude/settings.json
  • .git/hooks/post-commit
  • the slash commands
  • .github/workflows/claude-attribution-pr.yml, claude-attribution-export.yml, and claude-attribution-gha.yml (if present)
  • any legacy pre-push hooks (for example .husky/pre-push or .git/hooks/pre-push), if present
  • any legacy remote.origin.push notes refspecs from git config
  • the repo's entry in the global installed-repo registry used by claude-attribution update

Attribution state (.claude/attribution-state/) and logs (.claude/logs/) are left in place.


For Developers: Day-to-Day Usage

What changes for you

Nothing changes in how you work. The hooks run silently in the background. After each git commit you'll see a one-line summary in the terminal:

[claude-attribution] a3f1b2c — 142 AI / 38 human / 4 mixed lines (77% AI)

That's it. No other interaction required.

Starting a new Jira ticket

When you check out a new branch for a new ticket, run /start in Claude Code:

/start

This writes a timestamp to .claude/attribution-state/session-start. The /metrics command uses this to scope tool counts, token usage, and attribution data to only the activity after this marker — so a long-running Claude Code session doesn't inflate metrics across multiple tickets.

Getting metrics into your PR

Metrics are injected automatically — no command needed.

When Claude creates the PR (asks Claude to open a PR, uses /pr): the post-bash hook fires immediately after gh pr create succeeds, injects full local metrics (token usage, tool counts, attribution) into the PR body before Claude continues.

When you create the PR yourself (gh pr create, GitHub UI, Copilot): the claude-attribution-pr.yml GitHub Actions workflow fires on opened and rebuilds metrics from durable git notes when available. If only CI-visible data exists, it falls back to attribution-only metrics because local session logs are not available in CI.

On every new push to an open PR: the workflow fires on synchronize and updates the attribution percentages to reflect new commits.

The metrics block injected into the PR body looks like (when the cumulative minimap exists):

AI Coding Metrics

Codebase: ~77% AI (3200 / 4150 lines)
This PR: 184 lines changed (4% of codebase) · 77% AI edits · 142 AI lines
Session: 12 prompts · 24m total (18m AI · 6m human)
Assistant runtime: Claude Code (claude-sonnet-4-6)

| Model | Calls | Input | Output | Cache |
|-------|-------|-------|--------|-------|
| Sonnet | 45 | 120K | 35K | 10K |
| Total | 45 | 120K | 35K | 10K |

Estimated cost: ~$1.23

External tools: WebSearch ×2

Before running init --ai (or on a fresh install with no minimap), the headline falls back to the session-only view:

AI contribution: ~77% (142 of 184 committed lines)

For Copilot CLI sessions, the same block is rendered with provider-aware differences:

  • assistant runtime shows as GitHub Copilot CLI
  • model usage shows Known Tokens instead of Claude-style input/output/cache columns
  • cost is shown as unavailable unless durable local billing data exists

The block is wrapped in HTML comments for idempotent updates — re-running replaces the existing block rather than appending:

<!-- claude-attribution metrics -->
...metrics content...
<!-- /claude-attribution metrics -->
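A minimal sketch of how such a marker-based idempotent update can work (hypothetical helper, not the tool's actual code): replace the existing block if the markers are present, otherwise append one.

```typescript
const START = "<!-- claude-attribution metrics -->";
const END = "<!-- /claude-attribution metrics -->";

// Replace an existing marker-wrapped block, or append one if absent.
function upsertMetricsBlock(prBody: string, metrics: string): string {
  const block = `${START}\n${metrics}\n${END}`;
  const re = new RegExp(`${START}[\\s\\S]*?${END}`);
  // Use a replacer function so `$` in metrics text is never interpreted
  // as a regex replacement pattern.
  return re.test(prBody) ? prBody.replace(re, () => block) : `${prBody}\n\n${block}`;
}
```

Re-running with fresh metrics swaps the block contents in place, so the PR body never accumulates duplicate metric sections.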

Manual option

If you need to create a PR with metrics outside of Claude, use the /pr slash command or CLI directly:

claude-attribution pr "feat: PROJ-1234 add user authentication"
claude-attribution pr "feat: my feature" --draft
claude-attribution pr "feat: my feature" --base develop

Requires gh to be installed and authenticated (gh auth status).

Viewing metrics without creating a PR

To see the metrics output without creating a PR, use /metrics or run directly:

claude-attribution metrics

The output is markdown you paste into your PR description:

## AI Coding Metrics

**Codebase: ~77% AI** (3200 / 4150 lines)
**This PR:** 184 lines changed (4% of codebase) · 77% AI edits · 142 AI lines
**Session:** 12 prompts · 24m total (18m AI · 6m human)
**Assistant runtime:** GitHub Copilot CLI (v1.0.15 · gpt-5.4)

| Model | Calls | Known Tokens |
|-------|-------|--------------|
| gpt-5.4 | 12 | 48K |
| **Total** | 12 | 48K |

**Estimated cost:** unavailable — Copilot session data does not expose enough local billing data to estimate spend reliably.

Running with a specific session ID

If you have multiple sessions and want metrics for a specific one:

claude-attribution metrics <session-id>

Claude session IDs are shown in .claude/logs/tool-usage.jsonl. Copilot CLI session IDs are the directory names under ~/.copilot/session-state/.

Copilot CLI support

claude-attribution now supports local GitHub Copilot CLI sessions without adding a separate Copilot-specific install flow.

How it works:

  1. The existing repo install still provides the post-commit hook and PR workflow.
  2. On commit or /metrics, the tool first tries the normal Claude-local data sources.
  3. If Claude session data is unavailable, it looks for a matching Copilot CLI session under ~/.copilot/session-state/<session-id>/events.jsonl.
  4. It matches sessions by repo path, branch, and recency, then extracts:
    • prompt count
    • AI vs human active time
    • tool usage
    • skill usage
    • agent usage
    • dominant model / known token counts
  5. That normalized session data is attached to the git note so PR metrics can still be rebuilt later from durable note metadata.

Optional prompt markers:

  • If your Copilot CLI workflow includes prompts beginning with start work and create pr, those are used as a tighter timing window.
  • If not, the tool falls back to the broader repo/branch session window automatically.

Limitations:

  • Copilot CLI local session-state currently does not expose enough durable billing data to estimate cost reliably, so cost is rendered as unavailable.
  • Copilot CLI does not provide Claude-style file checkpoints, so attribution may fall back to commit/diff heuristics when no checkpoints exist.

Checking raw attribution data

Attribution results are stored as git notes and queryable directly:

# View per-commit attribution for the last commit
git notes --ref=claude-attribution show HEAD

# List all attributed commits in the repo
git notes --ref=claude-attribution list

# View the cumulative codebase minimap (all files, AI% totals)
git notes --ref=refs/notes/claude-attribution-map show HEAD | jq .totals

# Check codebase AI% quickly
git notes --ref=refs/notes/claude-attribution-map show HEAD | jq .totals.pctAi

Example output:

{
  "commit": "a3f1b2c",
  "session": "abc-123-...",
  "branch": "feature/PROJ-1234",
  "timestamp": "2026-03-26T15:32:00.000Z",
  "files": [
    { "path": "src/components/Foo.tsx", "ai": 82, "human": 10, "mixed": 2, "total": 94, "pctAi": 87 }
  ],
  "totals": { "ai": 142, "human": 38, "mixed": 4, "total": 184, "pctAi": 77 },
  "modelUsage": [
    { "modelFull": "claude-sonnet-4.5", "modelShort": "Sonnet", "calls": 12, "inputTokens": 54000, "outputTokens": 9000, "cacheCreationTokens": 3200, "cacheReadTokens": 18000 }
  ],
  "sessionMetrics": {
    "toolCounts": { "WebSearch": 2, "WebFetch": 1 },
    "agentCounts": { "code-review": 1 },
    "skillNames": ["pr"],
    "humanPromptCount": 6,
    "activeMinutes": 28,
    "aiMinutes": 18,
    "humanMinutes": 10
  }
}

How Attribution Works

The algorithm

Every time Claude writes to a file, two snapshots are captured:

  • Before snapshot — file content before Claude's first edit in the session (saved by the PreToolUse hook, preserved on subsequent edits)
  • After snapshot — file content after Claude's last edit (saved by the PostToolUse hook, updated on every edit)

After you git commit, the post-commit hook runs and compares Claude's last after-snapshot for each changed file against what was actually committed, line by line.

Each committed line is classified:

| Label | Rule | Meaning |
|-------|------|---------|
| AI | Hash in after-snapshot, not in before-snapshot | Claude wrote this line and it survived to the commit unchanged |
| HUMAN | Not in after-snapshot, or existed before Claude touched the file | You wrote it, or it predates the session |
| MIXED | Claude wrote a line at this position but the committed content differs | Claude wrote it, you modified it before committing |

Empty lines are always HUMAN — they carry no attribution signal.
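These rules can be compressed into a short sketch. This is illustrative only: plain trimmed strings stand in for the tool's 16-character SHA-256 line hashes, and the MIXED check is the positional best-effort comparison described under Known Limitations.

```typescript
type Label = "AI" | "HUMAN" | "MIXED";

// Classify one committed line against the before/after snapshots.
function classifyLine(
  line: string,
  index: number,        // position of the line in the committed file
  before: Set<string>,  // trimmed lines present before Claude's first edit
  after: string[],      // file content after Claude's last edit
): Label {
  const trimmed = line.trim();
  if (trimmed === "") return "HUMAN";       // empty lines carry no signal
  if (before.has(trimmed)) return "HUMAN";  // predates the session
  const afterSet = new Set(after.map((l) => l.trim()));
  if (afterSet.has(trimmed)) return "AI";   // Claude wrote it, committed unchanged
  // Positional best-effort MIXED: Claude wrote something different at this slot
  if (index < after.length && !before.has(after[index].trim())) return "MIXED";
  return "HUMAN";
}
```

Running this against the worked example below yields HUMAN, HUMAN, AI, MIXED for the four committed lines.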

Worked example

Say a file originally contains two lines:

before: ["const a = 1;", "const b = 2;"]

Claude edits it and the after-snapshot is:

after:  ["const a = 1;", "const b = 2;", "const c = 3;", "export default fn;"]

You then modify fn to main before committing:

committed: ["const a = 1;", "const b = 2;", "const c = 3;", "export default main;"]

Line-by-line classification:

| Line | Before | After | Committed | Label |
|------|--------|-------|-----------|-------|
| const a = 1; | ✓ | ✓ | ✓ | HUMAN (existed before Claude) |
| const b = 2; | ✓ | ✓ | ✓ | HUMAN (existed before Claude) |
| const c = 3; | ✗ | ✓ | ✓ | AI (Claude wrote it, committed unchanged) |
| export default main; | ✗ | hash differs (fn vs main) | ✓ | MIXED (Claude wrote fn, you changed to main) |

Result: 1 AI / 2 HUMAN / 1 MIXED = 25% AI contribution.

Why this is correct (unlike the previous approach)

The old calculate-metrics.sh summed every line Claude ever wrote across all Edit/Write operations and divided by the net git diff. If Claude edited a file four times and you rewrote it entirely, the script claimed ~100% AI. That was wrong.

This tool compares Claude's final written state against what was committed. If you rewrote it entirely, none of your lines hash-match Claude's snapshot → 0% AI. Correct.

Multi-commit workflow

Each commit is attributed independently at commit time (when checkpoints are freshest). The /metrics command aggregates across all commits on the current branch using "last-write-wins" per file: for files touched in multiple commits, the stats from the most recent commit are used. This represents the final state of each file and prevents double-counting when the same file appears in multiple commits.
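The last-write-wins merge can be sketched as below (hypothetical shape, assuming per-commit file stats are iterated oldest to newest):

```typescript
interface FileStats {
  path: string;
  ai: number;
  human: number;
  mixed: number;
}

// Aggregate per-commit stats across a branch: for a file touched in several
// commits, the stats from the most recent commit replace the earlier ones.
function aggregateBranch(commits: FileStats[][]): FileStats[] {
  const latest = new Map<string, FileStats>();
  for (const commit of commits) {
    for (const f of commit) latest.set(f.path, f); // later commit overwrites
  }
  return [...latest.values()];
}
```

Because each file contributes exactly one entry, a file edited in five commits is counted once, at its final state.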

Running /start scopes both tool/token metrics AND attribution data to commits made after the start marker. Use it when beginning a new Jira ticket to get per-ticket metrics.


Known Limitations

  • Binary files are skipped (git shows them as binary, content can't be compared line-by-line).

  • Identical-line content — the algorithm is set-based. Two lines with the same trimmed content produce the same hash. If Claude and a human both write a line like } (a closing brace) and } didn't exist in the before-snapshot, the tool attributes it as AI regardless of who actually wrote it. This is a conservative bias toward AI for identical content — an acceptable trade-off since lines with identical content are indistinguishable without tracking insertion history.

  • MIXED detection is positional (best-effort) — MIXED is detected by checking whether Claude's i-th line was changed in the committed file. If a human inserts or deletes lines above position i, the commit's line positions shift while the after-snapshot's positions don't, causing false MIXED classifications. MIXED is most accurate when human edits are small in-place tweaks (e.g., changing a value on a line Claude wrote) rather than bulk insertions or deletions.

  • Sessions without checkpoints — commits made outside an active Claude session (no current-session file, or checkpoints already cleaned up) are attributed 100% HUMAN for that commit's per-session stats. However, the cumulative minimap carries AI attribution forward for untouched lines from previous sessions — so the codebase-wide AI% is not lost when another developer commits without hooks installed.

  • git commit --amend — when a commit is amended, the original SHA is replaced but the old git note (pointing to the now-orphaned SHA) remains in the notes object store. /metrics reads notes across the entire branch, so an amended commit's lines may appear twice. Avoid amending published commits; if you do, run /metrics knowing totals may be slightly inflated for that commit's files.

  • Multiple sessions on the same branch — if two different Claude sessions both touch the same file, the last write wins. Only the session whose after checkpoint was written most recently contributes to attribution for that file.

  • Hash collisions — uses a 16-character SHA-256 prefix (2^64 space); collision threshold is ~4 billion unique lines (negligible in practice).
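The hash described in that last bullet is straightforward to reproduce (a sketch, assuming Node's crypto module; the real tool's exact normalization may differ beyond the trimming described above):

```typescript
import { createHash } from "node:crypto";

// 16 hex chars = 64 bits of SHA-256. The birthday bound for a 2^64 space is
// about 2^32 (~4 billion) distinct lines before collisions become likely.
function lineHash(line: string): string {
  return createHash("sha256").update(line.trim()).digest("hex").slice(0, 16);
}
```

Trimming before hashing is why two lines with the same trimmed content are indistinguishable, as noted in the identical-line limitation above.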


Security Model

  • The package directory is a trusted location — installed hooks embed an absolute path to bin/claude-attribution at install time. Compromise of the package directory would affect all repos that have hooks installed. Keep it in a location with normal user-only permissions (npm global installs and ~/Code/ clones are both fine).

  • session_id is validated — checkpoint paths include the session ID as a directory component. The tool validates session IDs against [a-zA-Z0-9_-]{1,128} before use, preventing path traversal attacks.

  • Checkpoint directories are chmod 0700: /tmp/claude-attribution/<session>/ is created with owner-only read/write so other users on shared machines cannot access file snapshots.

  • .claude/logs/ is gitignored — the installer adds .claude/logs/ to .gitignore automatically. Tool usage logs contain session IDs and tool inputs that should not be committed to version control.
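The session-ID guard described above amounts to a single allowlist check before the ID is ever used in a path (a sketch; the function name is hypothetical):

```typescript
// Only [a-zA-Z0-9_-], 1-128 chars: an ID that passes can never contain
// "/", "\", or "..", so it cannot traverse out of the checkpoint root.
const SESSION_ID_RE = /^[a-zA-Z0-9_-]{1,128}$/;

function checkpointDir(sessionId: string): string {
  if (!SESSION_ID_RE.test(sessionId)) {
    throw new Error(`invalid session id: ${sessionId}`);
  }
  return `/tmp/claude-attribution/${sessionId}`;
}
```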


Checkpoints and Temp Files

Checkpoints are stored in /tmp/claude-attribution/<session-id>/ as JSON files. They are intentionally not cleaned up when your Claude Code session ends — you may close Claude Code and commit later, and the checkpoints need to survive until you do. The OS clears /tmp/ on reboot, which is sufficient. Stale checkpoints from old sessions are harmless.

To force re-detection of the TypeScript runtime (e.g., after installing Bun): rm /tmp/claude-attribution-runtime


Metrics Export

Attribution metrics are exported automatically on every PR merge via GitHub Actions (.github/workflows/claude-attribution-export.yml). The workflow uses the OpenTelemetry OTLP/HTTP JSON format and supports any OTel-compliant backend.

Metrics exported on each merged PR:

| Metric | Unit | Description |
|--------|------|-------------|
| claude_attribution.ai_lines | lines | Lines written by Claude and committed unchanged |
| claude_attribution.human_lines | lines | Lines written or left unchanged by the developer |
| claude_attribution.total_lines | lines | Total committed lines in the PR |
| claude_attribution.pct_ai | % | Percentage of lines attributed to Claude (this PR) |
| claude_attribution.codebase_pct_ai | % | Cumulative codebase-wide AI% at PR merge time (requires minimap) |
| claude_attribution.codebase_total_lines | lines | Total codebase lines tracked in the minimap |
| claude_attribution.cost_usd | $ | Estimated Claude API cost for this PR (requires v1.5.0+ notes) |
| claude_attribution.input_tokens | tokens | Total input tokens consumed by Claude in this PR |
| claude_attribution.output_tokens | tokens | Total output tokens generated by Claude in this PR |
| claude_attribution.cache_read_tokens | tokens | Cache read tokens in this PR |
| claude_attribution.cache_creation_tokens | tokens | Cache creation tokens in this PR |

Token and cost metrics are only emitted when the git notes contain token data (written by v1.5.0+ hooks). All metrics carry attributes pr, branch, author, tool — enabling per-PR trend analysis.

Supported backends:

| Backend | Configuration |
|---------|---------------|
| Datadog (shortcut) | Set DATADOG_API_KEY secret (and optionally DATADOG_SITE org variable, defaults to datadoghq.com). Endpoint is auto-configured. |
| Datadog (explicit) | OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp.datadoghq.com, OTEL_EXPORTER_OTLP_HEADERS=DD-Api-Key=<key> |
| Grafana Cloud | OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp-gateway-<zone>.grafana.net/otlp, OTEL_EXPORTER_OTLP_HEADERS=Authorization=Basic <base64(user:token)> |
| Splunk Observability | OTEL_EXPORTER_OTLP_ENDPOINT=https://ingest.<realm>.signalfx.com/v2/datapoint/otlp, OTEL_EXPORTER_OTLP_HEADERS=X-SF-Token=<token> |
| New Relic | OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp.nr-data.net, OTEL_EXPORTER_OTLP_HEADERS=api-key=<key> |
| Honeycomb | OTEL_EXPORTER_OTLP_ENDPOINT=https://api.honeycomb.io, OTEL_EXPORTER_OTLP_HEADERS=x-honeycomb-team=<key> |
| OTel Collector | OTEL_EXPORTER_OTLP_ENDPOINT=http://your-collector:4318 |
| Generic webhook | METRICS_WEBHOOK_URL=https://... — POSTs a flat JSON payload with pr, repo, author, branch, ai_lines, human_lines, total_lines, pct_ai, and optionally codebase_pct_ai / codebase_total_lines / cost_usd / input_tokens / output_tokens |

When no destination is configured, the workflow runs in dry-run mode — it prints the OTLP payload to stdout and exits 0. This makes it safe to install and test before secrets are set.

Set secrets at the org level so they apply to all repos that use this tool. Multiple export destinations can be active simultaneously (e.g. both OTEL_EXPORTER_OTLP_ENDPOINT and METRICS_WEBHOOK_URL).
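For the generic webhook, the posted payload contains the flat fields listed in the backends table. All values below are illustrative (the repo name and numbers are made up), and which optional fields appear depends on the minimap and note versions available:

```json
{
  "pr": 123,
  "repo": "acme/web-app",
  "author": "jane",
  "branch": "feature/PROJ-1234",
  "ai_lines": 142,
  "human_lines": 38,
  "total_lines": 184,
  "pct_ai": 77,
  "codebase_pct_ai": 77,
  "codebase_total_lines": 4150,
  "cost_usd": 1.23
}
```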

Model pricing (Claude-only org variables — update when Anthropic changes pricing):

| Variable | Default | Description |
|----------|---------|-------------|
| CLAUDE_PRICE_OPUS_INPUT | 15.00 | $ per 1M input tokens (Claude Opus) |
| CLAUDE_PRICE_OPUS_OUTPUT | 75.00 | $ per 1M output tokens (Claude Opus) |
| CLAUDE_PRICE_SONNET_INPUT | 3.00 | $ per 1M input tokens (Claude Sonnet) |
| CLAUDE_PRICE_SONNET_OUTPUT | 15.00 | $ per 1M output tokens (Claude Sonnet) |
| CLAUDE_PRICE_HAIKU_INPUT | 0.80 | $ per 1M input tokens (Claude Haiku) |
| CLAUDE_PRICE_HAIKU_OUTPUT | 4.00 | $ per 1M output tokens (Claude Haiku) |
| CLAUDE_PRICE_CACHE_READ_MULT | 0.1 | Multiplier applied to the input price for cache reads |
| CLAUDE_PRICE_CACHE_WRITE_MULT | 1.25 | Multiplier applied to the input price for cache writes |

These pricing variables apply to Claude token-based cost estimation only. Copilot metrics currently report cost as unavailable unless a future durable billing/token source is added. Unrecognized Claude model names fall back to Opus pricing. Set these as org-level variables (not secrets) — they're not sensitive.
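Using the default Sonnet rates from the table, the cost arithmetic works out as sketched below. This is an assumption about how the pieces combine (per-million rates, cache multipliers applied to the input price), not the tool's exact code path:

```typescript
// Default Sonnet pricing from the table above ($ per 1M tokens).
const SONNET = { input: 3.0, output: 15.0, cacheReadMult: 0.1, cacheWriteMult: 1.25 };

function estimateCostUsd(u: {
  inputTokens: number;
  outputTokens: number;
  cacheReadTokens: number;
  cacheCreationTokens: number;
}): number {
  const perTok = (n: number, pricePerM: number) => (n / 1_000_000) * pricePerM;
  return (
    perTok(u.inputTokens, SONNET.input) +
    perTok(u.outputTokens, SONNET.output) +
    perTok(u.cacheReadTokens, SONNET.input * SONNET.cacheReadMult) +
    perTok(u.cacheCreationTokens, SONNET.input * SONNET.cacheWriteMult)
  );
}
```

So 1M input tokens cost $3.00 at these rates, while 1M cache-read tokens cost $0.30 (0.1× the input rate).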


AI Actor Attribution (Copilot Bot, @claude GHA)

The local post-commit hook only runs when code is committed on a developer's machine. Commits made server-side by AI bots — such as @claude via claude-code-action or the Copilot coding agent — bypass the local hook entirely and would otherwise appear as 100% human in metrics.

claude-attribution handles these two ways:

1. Auto-detection in metrics (no setup required)

When generating PR metrics, branch commits that have no git note are checked for known AI actor signals:

| Signal | Example |
|---|---|
| Bot author email/name | github-actions[bot], copilot[bot] |
| Co-authored-by: trailer | Co-authored-by: Claude <...> |
| Co-authored-by: trailer | Co-authored-by: GitHub Copilot <...> |

If detected, all non-blank committed lines are counted as AI in the metrics output and the assistant runtime is labeled as Copilot or Claude when that can be inferred. No git note is written — this is a metrics-time synthesis only.
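The detection reduces to matching those two kinds of signals against the commit's author and message. A sketch under that assumption (the real tool's matching rules may be broader):

```typescript
// Heuristic AI-actor check: bot author identity, or an AI co-author trailer.
function looksLikeAiActorCommit(author: string, message: string): boolean {
  const botAuthor = /\b(github-actions|copilot)\[bot\]/i.test(author);
  const aiTrailer = /^Co-authored-by:\s*(Claude|GitHub Copilot)\b/im.test(message);
  return botAuthor || aiTrailer;
}
```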

This hosted/bot path is intentionally conservative:

  • attribution: heuristic but durable enough for PR metrics
  • assistant runtime: best effort
  • time/cost: not available from current hosted signals

Not detectable: Copilot "Commit suggestion" (the button on PR review comments). Those commits are authored as the human who clicked the button — there is no metadata distinguishing them from a human edit. They will count as HUMAN.
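The detection above boils down to a pattern match on the commit author and message. A simplified sketch (the shipped matcher may recognize additional bot identities):

```shell
# Simplified AI-actor check: flags bot authors and Claude/Copilot
# Co-authored-by trailers. Not the tool's actual matcher.
is_ai_actor() {
  printf '%s\n' "$1" |
    grep -qiE '\[bot\]|Co-authored-by: (Claude|GitHub Copilot)'
}

is_ai_actor 'Co-authored-by: Claude <noreply@anthropic.com>' && echo AI
is_ai_actor 'Fix typo in README' || echo HUMAN
```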

2. note-ai-commit command (writes a permanent git note)

For @claude via claude-code-action, you can write a permanent git note at CI time so attribution is durable (it survives metrics regeneration and appears in the Datadog dashboard):

# Write a 100% AI note for HEAD (then push the note):
claude-attribution note-ai-commit --push

# Only write the note if the commit looks like an AI actor commit (safe on every push):
claude-attribution note-ai-commit --if-ai-actor --push

# Write a note for a specific SHA:
claude-attribution note-ai-commit abc1234 --push
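To confirm a note was written, you can read it back from the notes ref the tool uses (refs/notes/claude-attribution, per the workflow below):

```shell
# Show the attribution note attached to HEAD (errors if no note exists).
git notes --ref=claude-attribution show HEAD
```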

Setting up for claude-code-action

If your repo uses anthropics/claude-code-action, run claude-attribution install — it detects the action and offers to install claude-attribution-gha.yml, which records attribution automatically on every AI actor push.

To install the workflow manually, add .github/workflows/claude-attribution-gha.yml:

name: Claude Attribution — AI Actor Commits

on:
  push:
    branches: ["**"]

permissions:
  contents: write

jobs:
  note-ai-commit:
    name: Record AI actor commit attribution
    runs-on: ubuntu-latest   # replace with your self-hosted runner label if needed
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
          token: ${{ secrets.GITHUB_TOKEN }}

      - name: Fetch attribution notes
        run: git fetch origin refs/notes/claude-attribution:refs/notes/claude-attribution || true

      - name: Configure git identity for notes
        run: |
          git config user.email "claude-attribution[bot]@users.noreply.github.com"
          git config user.name "claude-attribution[bot]"

      - uses: oven-sh/setup-bun@v2

      - name: Install claude-attribution
        run: |
          npm install -g --prefix "${HOME}/.npm-global" claude-attribution
          echo "${HOME}/.npm-global/bin" >> "$GITHUB_PATH"

      - name: Record attribution if AI actor commit
        run: claude-attribution note-ai-commit --if-ai-actor --push

The --if-ai-actor flag makes this a silent no-op for human commits — it only fires when the commit author or message matches a known AI actor pattern.
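Notes pushed by the workflow live on refs/notes/claude-attribution, which git does not fetch by default. To inspect them locally:

```shell
# Pull the notes ref explicitly, then show notes alongside the log.
git fetch origin refs/notes/claude-attribution:refs/notes/claude-attribution
git log --notes=claude-attribution -1
```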


OTel Traces (optional)

claude-attribution can export OpenTelemetry traces to any OTLP-compatible backend (Datadog APM, Jaeger, Tempo, etc.). This is opt-in — set one env var to enable it.

Env vars

| Variable | Required | Example | Description |
|----------|----------|---------|-------------|
| OTEL_EXPORTER_OTLP_ENDPOINT | Yes (to enable) | http://localhost:4318 | OTLP/HTTP receiver. Unset = OTel disabled. |
| OTEL_EXPORTER_OTLP_HEADERS | No | DD-Api-Key=abc123 | Comma-separated key=value headers (required for Datadog OTLP intake, not needed for local Agent) |
| OTEL_SERVICE_NAME | No | claude-code | Service name in APM. Defaults to claude-code. |

Datadog via local Agent (Agent with OTLP enabled on port 4318 — no API key needed):

export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318

Datadog OTLP intake (direct, without Agent):

export OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp.datadoghq.com
export OTEL_EXPORTER_OTLP_HEADERS=DD-Api-Key=<your-api-key>
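The OTEL_EXPORTER_OTLP_HEADERS value is a comma-separated list of key=value pairs, each sent as an HTTP header. As a quick illustration of the format (not the exporter's own parser):

```shell
# Split the comma-separated list and render each pair as an HTTP header.
parse_otlp_headers() {
  printf '%s\n' "$1" | tr ',' '\n' | sed 's/=/: /'
}

parse_otlp_headers 'DD-Api-Key=abc123,X-Custom=prod'
# DD-Api-Key: abc123
# X-Custom: prod
```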

What gets traced

Two span types are emitted:

tool_call/{toolName} (child span) — one per tool call. Emitted at the end of each PostToolUse hook invocation. Attributes: tool.name, file.path (if applicable), session.id.

claude-session (root span) — one per commit. Emitted at the end of the post-commit hook. Covers startTime (first tool call in the session) → commit time. Attributes: session.id, git.branch, git.commit, attribution.ai_lines, attribution.human_lines, attribution.mixed_lines, attribution.total_lines, attribution.pct_ai.

How context is persisted

Each hook invocation is a short-lived process that exits immediately. The trace context (traceId, rootSpanId, startTime, etc.) is persisted to .claude/attribution-state/otel-context.json and read by each subsequent hook so all spans share the same trace. The context file is deleted when the root session span is exported at commit time.


Troubleshooting

"No attribution data found" in the metrics output

The post-commit hook may not have run. Check:

  1. Is bun (or tsx) on your PATH in a git hook context? Run which bun from your shell, then check if that path is in .git/hooks/post-commit.
  2. Did you run claude-attribution install <repo> for this specific repo?
  3. Check .claude/logs/attribution.jsonl — if it's empty, the hook isn't firing.

Attribution is 0% AI even though Claude wrote everything

Most likely the commit happened in a different terminal session from where Claude is running. The current-session file in .claude/attribution-state/ needs to match the session that created the checkpoints. Start Claude, make your changes, and commit without switching sessions.

Codebase AI% shows 0% (or very low) even though the repo is all Claude Code

The cumulative minimap hasn't been initialized yet. Run once to backfill:

claude-attribution init --ai
git push origin refs/notes/claude-attribution-map

See Backfilling the attribution minimap for details.

The hook is slowing down commits

The post-commit hook runs attribution after the commit is already recorded — it can't block the commit. If you see a pause, it's likely runtime startup time on a cold start (~100ms for Bun, ~300ms for npx tsx). Subsequent runs use the cached runtime from /tmp/claude-attribution-runtime and are faster.

Runtime cache is stale (wrong runtime used after installing Bun)

Delete the cache file to force re-detection:

rm /tmp/claude-attribution-runtime