
bluebookify

v0.1.1 · Published

Context-aware Bluebook citation formatting powered by LLMs.

Extracts citations from legal text — case citations, statutory references, short forms, and signals — and corrects their formatting to Bluebook standard. Only the citation contexts (not the full document) are sent to the LLM, making it token-efficient and privacy-conscious.

Install

```sh
npm install bluebookify
```

Quick start

```js
import { bluebookify } from 'bluebookify'

const result = await bluebookify(
  'See Marbury v Madison, 5 US 137 (1803). The Court held that...',
  { apiKey: process.env.OPENAI_API_KEY, provider: 'openai' }
)

result.text
// → 'See *Marbury v. Madison*, 5 U.S. (1 Cranch) 137 (1803). The Court held that...'

result.corrections
// → [{ position: 4, original: 'Marbury v Madison, 5 US 137 (1803)', replacement: '*Marbury v. Madison*, 5 U.S. (1 Cranch) 137 (1803)', context: '...' }]

result.unchanged
// → false
```

Providers

Built-in support for any OpenAI-compatible API, plus a native Anthropic adapter.

| Provider | Default model | Notes |
|---|---|---|
| openai | gpt-4o-mini | |
| anthropic | claude-haiku-4-5-20251001 | Native adapter (different API format) |
| gemini | gemini-2.0-flash | OpenAI-compatible endpoint |
| groq | llama-3.3-70b-versatile | |
| together | meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo | |
| mistral | mistral-small-latest | |
| xai | grok-3-mini-fast | |
| deepseek | deepseek-chat | |
| openrouter | (none — must specify model) | |

Override the default model:

```js
const result = await bluebookify(text, {
  apiKey: process.env.OPENAI_API_KEY,
  provider: 'openai',
  model: 'gpt-4o',
})
```

Custom LLM function

Bypass the built-in client entirely:

```js
const result = await bluebookify(text, {
  llm: async (messages) => {
    const res = await myLlmCall(messages)
    return res.text
  },
})
```

Options

| Option | Type | Default | Description |
|---|---|---|---|
| apiKey | string | — | API key for the LLM provider |
| provider | Provider | — | Provider name (maps to base URL + default model) |
| model | string | (per provider) | Model name. Required if no provider default. |
| baseURL | string | — | Custom endpoint URL. Overrides provider mapping. |
| llm | (messages) => Promise<string> | — | Custom LLM function. Overrides apiKey/provider/model. |
| rules | string | "" | Custom rules prepended to the system prompt |
| contextSize | number | 100 | Characters of context on each side of a citation |
| batchSize | number | 20 | Maximum citations per LLM call |

You must provide either apiKey (with provider or model) or llm.

Result

```ts
interface BluebookifyResult {
  text: string          // The corrected text
  corrections: Array<{  // Only citations that were changed
    position: number    // Index in original text
    original: string    // What was there
    replacement: string // What it became
    context: string     // Surrounding snippet for audit
  }>
  unchanged: boolean    // true if nothing was modified
}
```

No citations in text: LLM is not called. Returns immediately with unchanged: true.

All citations already correct: LLM is called (correctness can't be pre-judged), but corrections is empty and unchanged is true.
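Since each correction carries position, original, and replacement, the corrections array maps naturally onto an audit log. The sample result below is hand-written to mirror the interface above; it is not real library output:

```typescript
// Hand-written sample mirroring BluebookifyResult; not actual library output.
interface Correction {
  position: number
  original: string
  replacement: string
  context: string
}

const sample = {
  text: 'See *Marbury v. Madison*, 5 U.S. (1 Cranch) 137 (1803).',
  corrections: [
    {
      position: 4,
      original: 'Marbury v Madison, 5 US 137 (1803)',
      replacement: '*Marbury v. Madison*, 5 U.S. (1 Cranch) 137 (1803)',
      context: 'See Marbury v Madison, 5 US 137 (1803). The Court held...',
    },
  ] as Correction[],
  unchanged: false,
}

// One audit line per changed citation.
const audit = sample.corrections.map(
  (c) => `@${c.position}: "${c.original}" -> "${c.replacement}"`
)
```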

Custom rules

Pass domain-specific rules via the rules option. Works with lexstyle for structured rule management:

```js
import { rules, serialize } from 'lexstyle'
import { bluebookify } from 'bluebookify'

const result = await bluebookify(text, {
  apiKey: process.env.OPENAI_API_KEY,
  provider: 'openai',
  rules: serialize(rules, 'citations'),
})
```

Or pass rules as a plain string:

```js
const result = await bluebookify(text, {
  apiKey: '...',
  provider: 'openai',
  rules: `Italicize case names. Use "U.S." not "US" for reporters.
Use Id. for immediately preceding authority. Use en dashes for page spans.`,
})
```

Design decisions

Extract-and-send. Citations are discrete, identifiable chunks. Regex extracts them with surrounding context, and only those snippets are sent to the LLM. A 10,000-word brief with 15 citations sends ~15 small context windows, not 10,000 words.

Wide context windows. Default context size is 100 characters (vs 50 for dashes) because citations are longer and the LLM needs to see signals, parentheticals, and preceding/following citations to determine short-form vs full-cite relationships.

Signal inclusion. Introductory signals (See, Cf., But see, etc.) immediately before a citation are merged into the extraction, so the LLM can format them as a unit.
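The extraction step described in the last three paragraphs might look roughly like this. The regex below is illustrative only, covering one narrow case-citation shape with an optional leading signal; the library's actual patterns are internal and far broader:

```typescript
// Illustrative only: one narrow case-citation pattern with an optional
// leading signal, plus a context window on each side (default 100 chars).
const CONTEXT = 100
const CITATION =
  /(?:(?:See|Cf\.|But see)\s+)?[A-Z][\w.]*\s+v\.?\s+[A-Z][\w.]*,\s*\d+\s+U\.?S\.?\s+\d+\s*\(\d{4}\)/g

function extract(text: string) {
  const snippets: { match: string; position: number; context: string }[] = []
  for (const m of text.matchAll(CITATION)) {
    const i = m.index!
    snippets.push({
      match: m[0],
      position: i,
      context: text.slice(Math.max(0, i - CONTEXT), i + m[0].length + CONTEXT),
    })
  }
  return snippets
}
```

Only these snippets, not the whole document, would then be batched and sent to the LLM.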

Batch validation. Each batch response is validated against its expected IDs before merging. Missing or unknown correction IDs are caught immediately.
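A sketch of that check, with hypothetical shapes (the real batch format is internal): each extracted citation gets an id, and the response must contain exactly the expected ids.

```typescript
// Hypothetical shapes: throw on any unknown or missing correction id
// before a batch response is merged back into the document.
function validateBatch(expectedIds: number[], returned: { id: number }[]): void {
  const expected = new Set(expectedIds)
  const got = new Set(returned.map((r) => r.id))
  for (const id of got) {
    if (!expected.has(id)) throw new Error(`Unknown correction id: ${id}`)
  }
  for (const id of expected) {
    if (!got.has(id)) throw new Error(`Missing correction id: ${id}`)
  }
}
```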

Robust response parsing. LLM output is parsed via strict JSON first, with a hardened bracket-extraction fallback that skips stray brackets in preamble text.
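That strategy can be sketched as follows; the library's real parser is internal, so treat this as an approximation of the described behavior:

```typescript
// Strict JSON.parse first; on failure, scan for a '[' that opens a
// parseable JSON array, skipping stray brackets in any preamble text.
function parseArray(raw: string): unknown[] {
  try {
    const direct = JSON.parse(raw)
    if (Array.isArray(direct)) return direct
  } catch {
    // fall through to bracket extraction
  }
  const end = raw.lastIndexOf(']')
  for (let i = raw.indexOf('['); i !== -1 && i < end; i = raw.indexOf('[', i + 1)) {
    try {
      const parsed = JSON.parse(raw.slice(i, end + 1))
      if (Array.isArray(parsed)) return parsed
    } catch {
      // stray bracket in preamble; keep scanning
    }
  }
  throw new Error('No JSON array found in LLM response')
}
```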

Development

```sh
npm install
npm test
npm run typecheck
npm run build     # ESM + CJS + .d.ts
```

License

Apache-2.0