content-grade

v1.1.4

AI-powered content analysis CLI. Score any blog post, landing page, or ad copy in under 30 seconds — runs on Claude CLI, no API key needed.

ContentGrade

AI-powered content quality scoring for developers and content teams. Score any blog post, landing page, ad copy, or email — in under 30 seconds. No API keys. No accounts. Runs on your local Claude CLI.


60-second quickstart

# Step 1: See it in action — instant demo, no file needed
npx content-grade

# Step 2: Score your own content
npx content-grade grade README.md
npx content-grade grade ./my-post.md

# Step 3: Grade a headline
npx content-grade headline "Why Most Startups Fail at Month 18"

One requirement: Claude CLI must be installed and logged in (claude login). No API keys, no accounts, your content never leaves your machine.

Free tier: 5 analyses/day. No signup. npx content-grade grade README.md works immediately.

Install globally (recommended — skips the npx download on every run):

npm install -g content-grade
content-grade --help
# More commands
content-grade https://yourblog.com/post    # analyze any URL
content-grade analyze ./post.md --json     # raw JSON for CI pipelines
content-grade analyze ./post.md --quiet    # score number only (for scripts)
content-grade activate                     # unlock Pro: batch mode, 100/day

If the quickstart saved you 10 minutes, star the repo — it helps other developers find it.


Upgrade to Pro

The free tier is real. 5 analyses/day lets you evaluate ContentGrade properly.

Most developers hit the limit when they start trusting it enough to use it on every file — before publishing a post, on every PR, iterating until the score crosses 70. That's when Pro pays for itself.

| | Free | Pro |
|---|---|---|
| Analyses/day | 5 | Unlimited |
| All 6 CLI commands | ✓ | ✓ |
| Web dashboard (6 tools) | ✓ | ✓ |
| Batch mode (batch <dir>) | — | ✓ |
| CI integration (--json exit codes) | ✓ | ✓ |
| Price | $0 | $9/mo |

Upgrade to Pro → $9/mo

Early Adopter? The first 50 users who share their results in Show & Tell get 12 months Pro free. Details →

What Pro unlocks in practice:

# Score your entire blog before the next publish run
content-grade batch ./posts/

# Gate low-quality content in CI (exits 1 if score < 50)
content-grade analyze ./draft.md --quiet

# Iterate on a piece without hitting the wall
content-grade grade ./post.md   # run 10x, no limit

After payment:

  1. License key delivered to your email within 60 seconds
  2. Run content-grade activate and paste the key when prompted
  3. Limit lifts immediately — no restart, no reinstall

content-grade activate
# Enter your license key when prompted
# ✓ Pro activated — unlimited analyses unlocked

Usage Examples

Grade a single file:

npx content-grade article.md

Grade multiple files with a glob pattern:

npx content-grade 'blog/**/*.md'

JSON output for CI integration:

npx content-grade README.md --json
# Returns structured JSON with score, grade, dimensions, and improvements
# Exit code 1 if score < 50 — useful for blocking low-quality merges

What you get

  ╔═══════════════════════════════════════╗
  ║   ContentGrade  ·  Content Analysis   ║
  ╚═══════════════════════════════════════╝

  OVERALL SCORE   72/100  B+   blog post
  ████████████████████████░░░░░░░░░░░░░░░░

  HEADLINE SCORE   ████████████░░░░░░░░ 61/100
  "Why Most Startups Fail"
  ↳ Too vague — the reader doesn't know if this applies to them.

  ──────────────────────────────────────────────────────────
  DIMENSION BREAKDOWN

  Clarity           ████████████████████ 80/100
    ↳ Well-organized with clear section breaks.

  Engagement        ████████████░░░░░░░░ 60/100
    ↳ Opens with a statistic but drops momentum in paragraph 3.

  Structure         ████████████████░░░░ 72/100
    ↳ Good use of subheadings but the conclusion is thin.

  Value Delivery    █████████████░░░░░░░ 65/100
    ↳ Core advice is solid but needs a concrete takeaway.

  VERDICT
  Strong structure undermined by a weak headline — fix that first.

  TOP IMPROVEMENTS
  ● Make the headline specific: who fails, why, and what the fix is
    Fix: "Why 90% of SaaS Startups Fail at Month 18 (and How to Be the 10%)"

  ● Add one data point to support claims in paragraph 2
    Fix: Cite a specific study — "CB Insights found..." beats "many startups..."

  HEADLINE REWRITES
  1. Why 90% of SaaS Startups Fail at Month 18 (and How to Be the 10%)
  2. The Month 18 Startup Trap: What Kills Growth-Stage Companies

  Unlock team features at content-grade.github.io/Content-Grade

Commands

| Command | What it does |
|---------|-------------|
| (no args) | Instant demo on built-in sample content |
| <url> | Fetch and analyze a live URL — npx content-grade https://example.com/blog/post |
| <file> | Analyze a file directly — npx content-grade ./post.md |
| . or <dir> | Scan a directory, find and analyze the best content file |
| demo | Same as no args |
| analyze <file> | Full content audit: score, grade, dimensions, improvements |
| grade <file> | Alias for analyze (e.g. grade README.md) |
| check <file> | Alias for analyze |
| batch <dir> | [Pro] Analyze all .md/.txt files in a directory |
| headline "<text>" | Grade a headline on 4 copywriting frameworks |
| activate | Enter license key to unlock Pro features (batch, 100/day) |
| init | First-run setup: verify Claude CLI, run smoke test |
| start | Launch the full web dashboard (6 tools) |
| telemetry [on\|off] | View or toggle anonymous usage tracking |
| check-updates | Check if a newer version is available on npm |
| help | Full usage and examples |

Flags:

| Flag | What it does |
|------|-------------|
| --json | Raw JSON output — pipe to jq for CI integration |
| --quiet | Score number only — for shell scripts |
| --version | Print version |
| --verbose | Show debug info (model, timing, Claude response length) |
| --no-telemetry | Skip usage tracking for this run |

Global flags:

| Flag | Description |
|------|-------------|
| --no-telemetry | Skip usage tracking for this one invocation |
| --version, -v, -V | Show installed version |
| --help, -h | Show help |


The 6 web dashboard tools

Launch with content-grade start — opens at http://localhost:4000:

| Tool | Route | What it does |
|------|-------|-------------|
| HeadlineGrader | /headline | A/B comparison, scores across 4 frameworks, suggests rewrites |
| PageRoast | /page-roast | Landing page URL audit: hero, social proof, clarity, CRO |
| AdScorer | /ad-scorer | Google/Meta/LinkedIn ad copy scoring |
| ThreadGrader | /thread | Twitter/X thread hook, structure, and shareability score |
| EmailForge | /email-forge | Subject line + body copy for click-through optimization |
| AudienceDecoder | /audience | Twitter handle → audience archetypes and content patterns |

Free tier: 5 analyses/day. Pro ($9/mo): unlimited analyses + batch mode.


Community

COMMUNITY.md — how to report bugs, request features, join the Early Adopter program, and share your work.

| Channel | Best for |
|---------|----------|
| Q&A | Questions, troubleshooting, "why is my score X?" |
| Show & Tell | Workflows, CI integrations, results — claim Early Adopter here |
| Ideas | Feature requests before they become issues |
| Bug reports | Something broken? File it here |
| Feedback | General feedback — what's working, what isn't |

All discussions →

Early Adopter Program — 12 months Pro free

Over 1,000 developers have installed ContentGrade. Zero have claimed an Early Adopter seat yet. That means you're actually first — and there are only 50.

The first 50 users who run ContentGrade on real content and share it get 12 months Pro free ($108 value) plus direct input on what gets built next.

How to claim your seat:

  1. Run ContentGrade on a real file: npx content-grade analyze ./your-post.md
  2. Post in Show & Tell with [Early Adopter] in the title — include your use case, one thing you'd change, and your score
  3. Maintainer responds within 48 hours with Pro access

Short on time? Just open an issue with [Early Adopter] in the title — same deal, lower friction.

Full benefits: EARLY_ADOPTERS.md


Contributing

Already using ContentGrade? That makes you the most valuable contributor. You know what's missing.

Three ways to help (pick what fits your time):

| Effort | Action |
|--------|--------|
| 2 min | Open a discussion — describe a workflow gap or missing feature |
| 20 min | File a bug report with --verbose output |
| 2 hours | Pick a good first issue — scoped tasks with acceptance criteria and a pointer to the right file |

Quick dev setup:

git clone https://github.com/StanislavBG/Content-Grade.git
cd Content-Grade
npm install
node bin/content-grade.js demo  # smoke test — no Claude needed
npm test                        # all tests green

Requires Claude CLI for the analysis commands (claude login once). No API keys. See CONTRIBUTING.md for PR guidelines, code style, and what gets merged.

Contributors who file a useful bug report or land a PR automatically qualify for the Early Adopter program.


Show Us Your Results

Over 1,000 developers have run ContentGrade. We'd love to hear what you're scoring — blog posts, landing pages, ad copy, emails, documentation.

→ Post in Show & Tell (also counts as an Early Adopter claim — 12 months Pro free)

Tell us:

  • What content type? What score did you get?
  • Running it in CI? What threshold are you using?
  • Something that surprised you — good or bad?

Use cases shared here get featured in this README. We'll credit you by username. And yes — every Show & Tell post qualifies for the Early Adopter program.


Requirements

  • Node.js 18+
  • Claude CLI — install from claude.ai/code, then run claude login

Verify Claude is working:

claude -p "say hi"

If this fails, run npx content-grade init for step-by-step diagnostics.



Troubleshooting

Error: Claude CLI not found

ContentGrade requires Claude CLI. Install it:

npm install -g @anthropic-ai/claude-code
claude login

Then verify it works:

claude -p "say hi"

If Claude is installed but not on your PATH, set CLAUDE_PATH:

CLAUDE_PATH=/path/to/claude content-grade analyze ./post.md

Or add it permanently to your shell profile:

export CLAUDE_PATH=/path/to/claude

Analysis times out or hangs

Claude CLI calls can take 10–60 seconds on first run. If it hangs beyond 2 minutes:

  1. Test Claude directly: claude -p "say hi" — if this hangs too, Claude CLI is the issue
  2. Check your internet connection (Claude CLI phones home on first use)
  3. Run content-grade init for a guided diagnostic

Could not parse response as JSON

This means Claude returned a non-JSON response. Common causes:

  • Claude CLI rate-limited your account (try again in a few minutes)
  • The file content was too short or unrecognizable as text content
  • Claude returned an error message instead of scoring output

Run with --verbose to see the raw response:

content-grade analyze ./post.md --verbose

Daily limit reached

Free tier: 5 analyses/day, resets at midnight UTC.

✗ Daily limit reached (5/5 free checks used today).
  Upgrade to Pro for unlimited analyses: https://buy.stripe.com/4gM14p87GeCh9vn9ks8k80a

Options:

  • Wait until midnight UTC for the limit to reset
  • Upgrade to Pro for unlimited analyses: content-grade activate

content-grade: command not found after global install

If npm install -g content-grade succeeded but the command isn't found:

# Find where npm installs global packages (`npm bin` was removed in npm 9)
npm prefix -g

# Add to PATH — for bash/zsh, add this to ~/.bashrc or ~/.zshrc:
export PATH="$(npm prefix -g)/bin:$PATH"

activate says "Could not reach server"

The activation server may be temporarily unreachable. The key is stored locally without online verification — this is normal and your Pro access will still work for CLI tools. Re-run content-grade activate to re-verify once the server is back.


Something else is broken

Run the built-in diagnostic:

content-grade init

This checks: Claude CLI on PATH, live smoke test, production build status. It will tell you exactly what's wrong.

Still stuck? Open an issue: github.com/Content-Grade/Content-Grade/issues

CLI reference

analyze / check

content-grade analyze <file>
content-grade check <file>

Analyzes written content from any plain-text file. Detects content type automatically (blog, landing page, email, ad copy, social thread) and calibrates scoring accordingly.

File limits:

  • Maximum size: 500 KB
  • Minimum length: 20 characters
  • Binary files are rejected — use .md, .txt, .mdx, or any plain-text format
  • Files over 6,000 characters are truncated for analysis (the full file is still read)
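
In scripts, a local pre-flight that mirrors these limits can skip files that would be rejected before one of the day's free analyses is spent. A hedged sketch (`precheck` is a hypothetical helper, not a ContentGrade command):

```shell
# Pre-flight mirroring the documented file limits: 500 KB cap,
# 20-character minimum, plain text only.
precheck() {
  f=$1
  size=$(wc -c < "$f" | tr -d ' ')
  [ "$size" -le 512000 ] || { echo "too large"; return 1; }  # 500 KB cap
  [ "$size" -ge 20 ]     || { echo "too short"; return 1; }  # 20-char minimum
  # Crude binary sniff: a NUL byte in the first KiB means not plain text
  raw=$(head -c 1024 "$f" | wc -c | tr -d ' ')
  txt=$(head -c 1024 "$f" | tr -d '\0' | wc -c | tr -d ' ')
  [ "$raw" -eq "$txt" ]  || { echo "binary"; return 1; }
  echo "ok"
}
```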

Output sections:

  1. Overall score (0–100) with letter grade (A+ to F)
  2. Headline score with 1-2 sentence feedback
  3. Dimension breakdown — Clarity, Engagement, Structure, Value Delivery
  4. One-line verdict
  5. Strengths list
  6. Top improvements with specific, actionable fixes
  7. 2 headline rewrite suggestions

# Examples
content-grade analyze ./blog-post.md
content-grade check ./email-draft.txt
content-grade analyze ~/landing-page-copy.md

headline / grade

content-grade headline "<text>"
content-grade grade "<text>"

Grades a single headline on 4 direct-response copywriting frameworks:

| Framework | Max points | What it checks |
|-----------|-----------|----------------|
| Rule of One | 30 | Single focused idea, one reader, one promise |
| Value Equation | 30 | Benefit clarity, specificity, reader desire |
| Readability | 20 | Length, word choice, scan-ability |
| Proof/Promise | 20 | Credibility signals, specificity, claims |

Output: overall score, per-framework breakdown with notes, 2 stronger rewrites.

content-grade headline "How I 10x'd My Conversion Rate in 30 Days"
content-grade headline "The $0 Marketing Strategy That Got Us 10,000 Users"

init / setup

content-grade init

Three-step diagnostic:

  1. Checks Claude CLI is on PATH (or CLAUDE_PATH)
  2. Runs a live smoke test against Claude
  3. Checks whether the production web dashboard build exists

Use this when something isn't working — it tells you exactly what to fix.


telemetry

content-grade telemetry          # show current status
content-grade telemetry off      # opt out — persisted permanently
content-grade telemetry on       # opt back in

Anonymous usage tracking (command name + score range, no file contents, no PII). Enabled by default; telemetry off writes a local flag file and is honored by all future invocations.
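
The persisted opt-out can be pictured as a flag file written once and checked on every later run. A hedged sketch; the default path shown is an assumption for illustration, not the documented location:

```shell
# Hypothetical default location of the opt-out flag (assumption)
DEFAULT_FLAG="$HOME/.config/content-grade/telemetry-off"

telemetry_enabled() {  # tracking is on unless the flag file exists
  [ ! -f "$1" ]
}
telemetry_off() { mkdir -p "$(dirname "$1")" && : > "$1"; }
telemetry_on()  { rm -f "$1"; }
```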

One-time opt-out per invocation:

content-grade analyze ./post.md --no-telemetry

Pass --no-telemetry to any command to skip tracking for that run only without changing the saved preference.


start / serve

content-grade start

Starts the production web server at http://localhost:4000 and opens your browser. Requires a production build (run npm run build first if self-hosting).

Port: controlled by PORT environment variable (default: 4000).


demo

content-grade demo
# or just:
npx content-grade

Runs a full analysis on built-in sample content so you see exactly what ContentGrade does before analyzing your own work.


Directory scan

content-grade .
content-grade ./posts
content-grade /path/to/content

Scans up to 2 directory levels deep for .md, .txt, and .mdx files. Picks the best candidate (prefers files with names like post, article, blog, draft, copy over README.md; larger files score higher). Then runs analyze on it.
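
The selection heuristic can be pictured as a small ranking function: content-like names beat README, and larger files beat smaller ones. A hedged sketch with an assumed weight, not the actual implementation:

```shell
# Rank a candidate file: a large name bonus for content-like names,
# then file size as the tiebreaker. The bonus value is an assumption.
rank_candidate() {
  name=$1; size=$2; bonus=0
  case "$name" in
    *post*|*article*|*blog*|*draft*|*copy*) bonus=1000000 ;;
  esac
  echo $((bonus + size))
}
```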



check-updates

content-grade check-updates

Checks npm for the latest published version of content-grade and compares it against the installed version. If an update is available, prints the upgrade command.

  ⚠  Update available: v1.0.4  →  v1.0.5
  Update with:
    npm install -g content-grade    (global install)
    npm install content-grade@latest (local install)

activate

content-grade activate
content-grade activate <your-license-key>

Unlocks Pro features by storing your license key locally. Can be run interactively (prompts for the key) or non-interactively by passing the key as an argument — useful for automated setups:

# Interactive
content-grade activate

# Non-interactive (CI/CD or scripted setup)
CONTENT_GRADE_KEY=your-key-here content-grade analyze ./post.md

The key is validated against the server and saved to ~/.config/content-grade/config.json. If the server is unreachable (corporate proxy, offline), the key is stored locally anyway with a warning.

Pro unlocks:

  • 100 analyses/day (vs 50 free)
  • content-grade batch <dir> — analyze entire directories at once
  • A/B headline comparison (web dashboard)
  • Landing page URL audit, ad copy scoring, email optimizer

Get a license: content-grade.github.io/Content-Grade/

Configuration

All configuration is through environment variables — no config file needed.

| Variable | Default | Description |
|----------|---------|-------------|
| PORT | 4000 | Web dashboard server port |
| PUBLIC_URL | http://localhost:4000 | Base URL for the server |
| CLAUDE_PATH | claude | Path to Claude CLI binary if not on PATH |
| STRIPE_SECRET_KEY | (empty) | Stripe secret key — leave empty to show "Coming Soon" on upgrade prompts |
| STRIPE_PRICE_CONTENTGRADE_PRO | (empty) | Stripe Price ID for Pro subscription |
| STRIPE_PRICE_AUDIENCEDECODER | (empty) | Stripe Price ID for AudienceDecoder one-time purchase |
| STRIPE_WEBHOOK_SECRET | (empty) | Stripe webhook signing secret |
| VITE_API_URL | /api | Vite dev server API proxy (development only) |

Copy .env.example to .env and fill in what you need:

cp .env.example .env

Stripe variables are entirely optional — all tools work without them; upgrade prompts show "Coming Soon" instead of a checkout button.
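
For reference, a minimal `.env` for a self-hosted setup might look like this (placeholder values; every line is optional):

```shell
# .env — leave Stripe keys empty to keep the "Coming Soon" upgrade prompts
PORT=4000
PUBLIC_URL=http://localhost:4000
CLAUDE_PATH=/usr/local/bin/claude
STRIPE_SECRET_KEY=
STRIPE_PRICE_CONTENTGRADE_PRO=
STRIPE_WEBHOOK_SECRET=
```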


Self-hosting / Development

git clone https://github.com/StanislavBG/Content-Grade
cd Content-Grade
npm install
npm run dev     # hot reload: server at :4000, client at :3000

To build for production:

npm run build   # builds client to dist/ and server to dist-server/
npm start       # runs the production server

Project layout:

bin/               CLI entry point (content-grade.js)
docs/              REST API reference (api.md)
server/            Fastify API server
  routes/          API route handlers (demos, stripe)
  services/        Claude integration, Stripe helpers
  db.ts            SQLite setup (usage tracking, subscriptions)
src/               React web dashboard
  views/           6 tool views (HeadlineGrader, PageRoast, etc.)
tests/
  unit/            Unit tests
  integration/     Server integration tests

REST API: The web dashboard exposes a full REST API at http://localhost:4000. See docs/api.md for the complete reference — every endpoint, request shape, response schema, and cURL examples.

Run tests:

npm test              # run all tests
npm run test:watch    # watch mode
npm run test:coverage # coverage report

Output format

When content-grade analyze runs, Claude returns structured JSON that is rendered to the terminal. The raw schema:

{
  "content_type": "blog | email | ad | social | landing",
  "total_score": 72,
  "grade": "B+",
  "headline": {
    "text": "Why Most Startups Fail",
    "score": 61,
    "feedback": "Too vague — the reader doesn't know if this applies to them."
  },
  "dimensions": {
    "clarity":        { "score": 80, "note": "Well-organized with clear section breaks." },
    "engagement":     { "score": 60, "note": "Opens with a statistic but drops momentum in paragraph 3." },
    "structure":      { "score": 72, "note": "Good use of subheadings but the conclusion is thin." },
    "value_delivery": { "score": 65, "note": "Core advice is solid but needs a concrete takeaway." }
  },
  "one_line_verdict": "Strong structure undermined by a weak headline — fix that first.",
  "strengths": [
    "Well-organized with clear section breaks",
    "Specific statistics in the opening hook"
  ],
  "improvements": [
    {
      "issue": "Headline is too vague",
      "priority": "high",
      "fix": "Why 90% of SaaS Startups Fail at Month 18 (and How to Be the 10%)"
    },
    {
      "issue": "Paragraph 2 makes claims without evidence",
      "priority": "medium",
      "fix": "Cite a specific study — 'CB Insights found...' beats 'many startups...'"
    }
  ],
  "headline_rewrites": [
    "Why 90% of SaaS Startups Fail at Month 18 (and How to Be the 10%)",
    "The Month 18 Startup Trap: What Kills Growth-Stage Companies"
  ]
}
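
For quick scripting against this schema without a jq dependency, the `total_score` field can be pulled with sed. A sketch assuming the field name shown above:

```shell
# Print the first "total_score" value found in JSON piped on stdin
json_total_score() {
  sed -n 's/.*"total_score":[ ]*\([0-9][0-9]*\).*/\1/p' | head -n 1
}
```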

For content-grade headline, the schema:

{
  "score": 74,
  "grade": "B",
  "verdict": "Strong promise but lacks the proof element that separates a 74 from an 88.",
  "scores": {
    "rule_of_one":        { "score": 22, "max": 30, "note": "Single idea present but promise is vague." },
    "value_equation":     { "score": 22, "max": 30, "note": "Dream outcome clear; likelihood undermined by '10x' without proof." },
    "readability":        { "score": 16, "max": 20, "note": "Short, no jargon — reads well." },
    "proof_promise_plan": { "score": 14, "max": 20, "note": "Timeframe adds proof; missing the 'how'." }
  },
  "rewrites": [
    "How I Doubled Conversions in 30 Days by Fixing One Headline",
    "The 3 Headline Fixes That Grew My Conversions 10x (Backed by 847 A/B Tests)"
  ]
}

Scoring methodology

ContentGrade uses Claude as a scoring engine with calibrated anchor examples that force use of the full 0–100 range. This prevents the common AI problem of all scores bunching in the 60–80 band.

Score thresholds (for analyze):

| Score | What it means |
|-------|--------------|
| 90–100 | Exceptional — data authority, curiosity gap, ultra-specific, publication-ready |
| 80–89 | Strong — clear value equation, good proof, specific promise |
| 65–79 | Good — readable, structured, some specificity gaps |
| 50–64 | Adequate — readable but vague, generic structure |
| 35–49 | Poor — clichés, no hook, commodity topic |
| 0–34 | Very weak — no structure, zero reader benefit, or content too short |
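
For scripts that want the band label rather than the raw number, the thresholds above translate directly. A convenience sketch, not part of the CLI:

```shell
# Map a 0-100 analyze score to its band label, per the table above
score_band() {
  s=$1
  if   [ "$s" -ge 90 ]; then echo "Exceptional"
  elif [ "$s" -ge 80 ]; then echo "Strong"
  elif [ "$s" -ge 65 ]; then echo "Good"
  elif [ "$s" -ge 50 ]; then echo "Adequate"
  elif [ "$s" -ge 35 ]; then echo "Poor"
  else echo "Very weak"
  fi
}
```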

Dimensions:

  • Clarity — Organization, section breaks, readability. Does the reader always know where they are?
  • Engagement — Hook strength, momentum, reader retention. Would someone read to the end?
  • Structure — Heading hierarchy, flow, conclusion strength. Is there a logical arc?
  • Value Delivery — Concrete advice, takeaways, actionable insights. Does the reader leave with something useful?

Content-type calibration: The scoring prompt knows it's reading a blog post vs. an email vs. ad copy. A 200-word email doesn't get penalized for "thin structure" the way a blog post would.

Headline framework calibration anchors:

| Score range | What it takes |
|-------------|--------------|
| 95+ | Data authority + curiosity gap + ultra-specificity + clear reader identity |
| 88–92 | Strong proof + promise + plan, low jargon, 6–10 words |
| 75–84 | Clear value equation, objection handling, some specificity |
| 65–74 | Clear differentiator or positioning, but vague on proof |
| 50–64 | Decent structure but generic or lacking urgency |
| 35–49 | Standard cliché phrasing, no hook element |
| 0–34 | Commodity topic, passive voice, zero benefit stated |


Programmatic use (REST API)

Start the server with content-grade start, then call any tool via HTTP:

Grade a headline

curl -s -X POST http://localhost:4000/api/demos/headline-grader \
  -H 'Content-Type: application/json' \
  -d '{"headline": "Why Most Startups Fail at Month 18"}' | jq .

Response:

{
  "total_score": 74,
  "grade": "B",
  "diagnosis": "Strong promise but lacks the proof element that separates a 74 from an 88.",
  "framework_scores": {
    "rule_of_one":        { "score": 22, "max": 30, "note": "..." },
    "value_equation":     { "score": 22, "max": 30, "note": "..." },
    "readability":        { "score": 16, "max": 20, "note": "..." },
    "proof_promise_plan": { "score": 14, "max": 20, "note": "..." }
  },
  "rewrites": ["...", "..."],
  "usage": { "remaining": 2, "limit": 3, "isPro": false }
}

Compare two headlines (Pro)

curl -s -X POST http://localhost:4000/api/demos/headline-grader/compare \
  -H 'Content-Type: application/json' \
  -d '{
    "headlineA": "Why Most Startups Fail",
    "headlineB": "Why 90% of SaaS Startups Fail at Month 18",
    "email": "[email protected]"
  }' | jq .

Audit a landing page URL (Pro)

curl -s -X POST http://localhost:4000/api/demos/page-roast \
  -H 'Content-Type: application/json' \
  -d '{"url": "https://example.com", "email": "[email protected]"}' | jq .

Score ad copy

curl -s -X POST http://localhost:4000/api/demos/ad-scorer \
  -H 'Content-Type: application/json' \
  -d '{
    "adCopy": "Stop wasting money on ads that don'\''t convert.",
    "platform": "facebook",
    "email": "[email protected]"
  }' | jq .

Grade a Twitter thread

curl -s -X POST http://localhost:4000/api/demos/thread-grader \
  -H 'Content-Type: application/json' \
  -d '{
    "threadText": "1/ Most people think productivity is about working more hours...\n\n2/ They'\''re wrong.",
    "email": "[email protected]"
  }' | jq .

Generate an email

curl -s -X POST http://localhost:4000/api/demos/email-forge \
  -H 'Content-Type: application/json' \
  -d '{
    "product": "A CLI tool that grades content quality",
    "audience": "developers who write technical blog posts",
    "goal": "cold_outreach",
    "tone": "casual",
    "email": "[email protected]"
  }' | jq .

Rate limit response (when daily free limit is hit):

{
  "gated": true,
  "isPro": false,
  "remaining": 0,
  "limit": 50,
  "message": "Free daily limit reached (50/day). Get 100 analyses/day with Pro"
}

See docs/api.md for the full reference — every endpoint, complete request/response shapes, and more cURL examples.


Workflows & integrations

GitHub Actions — CI content quality gate

Fail the build if content quality drops below a threshold. Useful for content-heavy repos (marketing site, docs, blog).

# .github/workflows/content-quality.yml
name: Content quality check

on:
  pull_request:
    paths:
      - 'content/**'
      - 'blog/**'
      - '*.md'

jobs:
  grade:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install Claude CLI
        run: npm install -g @anthropic-ai/claude-code

      - name: Login to Claude (non-interactive)
        run: echo "${{ secrets.ANTHROPIC_API_KEY }}" | claude login --api-key
        # Note: Claude CLI requires authentication. Store your API key in GitHub secrets.

      - name: Grade changed content
        run: |
          for f in $(git diff --name-only origin/main...HEAD | grep -E '\.(md|txt|mdx)$'); do
            echo "=== Grading $f ==="
            npx content-grade analyze "$f" --no-telemetry
          done

Note: Claude CLI must be authenticated. In CI, this requires an Anthropic API key. Set ANTHROPIC_API_KEY in your repository secrets and use it to log in.


Pre-commit hook

Grade content before it enters the repo. Add this to .git/hooks/pre-commit (make it executable with chmod +x .git/hooks/pre-commit):

#!/bin/sh
# Content quality pre-commit hook
# Grades staged .md/.txt files and blocks commit if any score below threshold

THRESHOLD=40

staged=$(git diff --cached --name-only --diff-filter=ACMR | grep -E '\.(md|txt|mdx)$')

if [ -z "$staged" ]; then
  exit 0
fi

echo "Running ContentGrade on staged files..."

for file in $staged; do
  # Only check files over 100 characters (skip tiny files)
  chars=$(wc -c < "$file" 2>/dev/null || echo 0)
  if [ "$chars" -lt 100 ]; then
    continue
  fi

  echo "  Grading $file..."
  npx content-grade analyze "$file" --no-telemetry
done

echo "✓ Content quality check complete."
exit 0

To block low-scoring commits, parse the score from output and compare against $THRESHOLD. The current output format prints OVERALL SCORE 72/100 — grep for the number and exit 1 if below threshold.
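
That parse-and-gate step can be sketched as follows, assuming the `OVERALL SCORE NN/100` line format shown earlier (the exact output may change between versions):

```shell
THRESHOLD=40

# Pull the first "NN/100" that follows "OVERALL SCORE"
score_from_output() {
  sed -n 's|.*OVERALL SCORE[^0-9]*\([0-9][0-9]*\)/100.*|\1|p' | head -n 1
}

# In the hook, replace this sample line with the captured analyze output
sample="  OVERALL SCORE   72/100  B+   blog post"
score=$(printf '%s\n' "$sample" | score_from_output)

if [ "${score:-0}" -lt "$THRESHOLD" ]; then
  echo "Score $score is below $THRESHOLD, blocking commit"
  exit 1
fi
```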


Batch processing a directory

Grade every content file in a directory and save results:

# Grade all markdown files and save output
find ./content -name '*.md' -not -name 'README.md' | while read -r f; do
  echo "--- $f ---"
  npx content-grade analyze "$f" --no-telemetry
  echo ""
done > content-grades-$(date +%Y%m%d).txt

# Or use the built-in directory scan (picks the best candidate):
npx content-grade ./content

Comparing scores over time

Track score progression across revisions:

# Score the current version of a post
npx content-grade analyze ./blog/my-post.md --no-telemetry > scores/my-post-$(date +%Y%m%d).txt

# Compare two versions side-by-side
diff scores/my-post-20240101.txt scores/my-post-20240201.txt

# Quick numeric comparison
grep "OVERALL SCORE" scores/my-post-*.txt

For structured tracking, use the REST API (run content-grade start first):

# Score via API, extract just the number
curl -s -X POST http://localhost:4000/api/demos/headline-grader \
  -H 'Content-Type: application/json' \
  -d '{"headline": "Your Post Title Here"}' \
  | jq '.total_score'

Log to a CSV:

DATE=$(date +%Y-%m-%d)
HEADLINE="Your Post Title Here"
SCORE=$(curl -s -X POST http://localhost:4000/api/demos/headline-grader \
  -H 'Content-Type: application/json' \
  -d "{\"headline\": \"$HEADLINE\"}" | jq '.total_score')

echo "$DATE,$HEADLINE,$SCORE" >> headline-scores.csv

Pricing

| | Free | Pro |
|-|------|-----|
| Price | Free forever | $19/month |
| Analyses/day (CLI) | Unlimited | Unlimited |
| Analyses/day (web dashboard, per tool) | 3 | 100 |
| HeadlineGrader (single grade) | ✓ | ✓ |
| HeadlineGrader (A/B comparison) | — | ✓ |
| PageRoast (URL audit) | — | ✓ |
| AdScorer | — | ✓ |
| ThreadGrader | — | ✓ |
| EmailForge | — | ✓ |
| AudienceDecoder | — | One-time purchase |


How it works

ContentGrade uses your local Claude CLI — the same binary you use for claude -p commands. When you run analyze, it:

  1. Reads your file and detects content type (blog, email, ad, landing page, social)
  2. Calls claude -p --no-session-persistence --model claude-sonnet-4-6 with a structured scoring prompt
  3. Parses the JSON response and renders the scored output in your terminal

No data is sent to any external service. No account required. Your content stays on your machine.

The web dashboard works the same way — every tool call goes through claude -p running locally on your server.


Community

ContentGrade is built in public. The community is the roadmap.

COMMUNITY.md — how to report bugs, request features, and claim an Early Adopter seat.

GitHub Discussions

Join the conversation →

| Category | What it's for |
|----------|--------------|
| Q&A | "Why is my score lower than expected?" — questions get answered here |
| Show & Tell | Share your workflow, integration, or results — early adopters post here |
| Ideas | Feature requests and suggestions before they become issues |

Early Adopter Program — 50 seats

EARLY_ADOPTERS.md — The first 50 users who install and share feedback get:

  • Pro tier free for 12 months ($19/mo value)
  • Direct feedback channel with the maintainer
  • Name in CONTRIBUTORS.md (permanent)
  • Roadmap preview + design input before features ship
  • 30-minute onboarding call (optional)

One action to claim: run ContentGrade, then post in Show & Tell with [Early Adopter] in the title.

Champions Program

Power users who go beyond early adoption get early access to @beta releases, a private Champions discussion thread, and an SVG badge for their profile. See CHAMPIONS.md for selection criteria and current members.

Contributing

All contributions get a review within 48 hours.

First contribution in ~15 minutes:

git clone https://github.com/StanislavBG/Content-Grade.git
cd Content-Grade
npm install
npm test         # run the suite before you start

Open a PR against main. Branch name format: fix/short-description or feat/short-description.

What's most needed right now:

| Area | Examples |
|------|---------|
| Scoring rules | Catch passive voice, jargon density, weak CTAs — add a new rule to the prompt |
| CLI output | Better rendering in narrow terminals, CI-friendly formatting |
| Bug fixes | Start with a good first issue — each has acceptance criteria and the exact file to touch |
| Documentation | Fix unclear copy, add real-world examples, document edge cases |
| Integrations | GitHub Actions workflows, pre-commit hook improvements, VS Code tasks |

Not sure where to start? Browse good first issue — scoped, completable in a few hours, no prior codebase knowledge required.

See CONTRIBUTING.md for full code style, test guidelines, and PR checklist.

Roadmap

ROADMAP.md — what's built, what's next, what's planned, and what we're deliberately not doing. Open a Discussion to challenge or extend it.

Feedback Loop

FEEDBACK_LOOP.md — how feedback flows into product decisions. Monthly structured check-ins, 48h triage SLA, and roadmap sync. Use the monthly feedback template to share your experience.

Contributors

CONTRIBUTORS.md — everyone who has improved ContentGrade. Code, bug reports, and real-world feedback all count. Early Adopters and Champions are listed here permanently.

How did you find ContentGrade?

If you found this tool useful, we'd love to know where you came across it — a Reddit thread, a blog post, a colleague's recommendation? Drop a note in GitHub Discussions → Q&A with the subject "How I found ContentGrade". This is 100% optional and takes 30 seconds. It helps us understand which communities to focus on and which content resonates.

Use ContentGrade in your project

Add a badge to show your content meets the bar:

[![content-grade](https://img.shields.io/badge/content--grade-passing-brightgreen)](https://www.npmjs.com/package/content-grade)

Score-specific badge: https://img.shields.io/badge/content--grade-{SCORE}%2F100-brightgreen
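
The `{SCORE}` placeholder can be filled from a shell variable, for example after parsing your own analyze output:

```shell
# Build a score-specific badge URL from a numeric score
SCORE=72
BADGE_URL="https://img.shields.io/badge/content--grade-${SCORE}%2F100-brightgreen"
echo "$BADGE_URL"
```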

See docs/social-proof/built-with-badge.md for SVG assets and CI integration examples.


Legal

License

MIT