
LHCI AI Assistant

AI-powered companion tool for Lighthouse CI that analyzes performance regressions, explains root causes, and suggests fixes.

Features

  • AI-Powered Analysis: Uses GitHub Copilot, OpenAI, or local heuristics to analyze Lighthouse results
  • Regression Detection: Automatically identifies performance regressions by comparing against baselines
  • Root Cause Analysis: Explains why performance degraded based on code changes and Lighthouse audits
  • Auto-Fix Suggestions: Generates specific code fixes for common performance issues
  • GitHub Integration: Posts analysis as PR comments and fetches code diffs for context
  • Multiple Output Formats: Terminal (colored), JSON, Markdown

Installation

npm install -g lhci-ai-assistant
# or
pnpm add -g lhci-ai-assistant

Quick Start

  1. Run Lighthouse CI to collect reports:
lhci collect --url https://your-site.com
  2. Analyze with AI:
# Using local heuristics (no API key required)
lhci-ai analyze

# Using GitHub Copilot (requires gh auth login or GITHUB_TOKEN)
gh auth login  # One-time setup
lhci-ai analyze --provider copilot

# Using OpenAI
lhci-ai analyze --provider openai --openai-key $OPENAI_API_KEY

Commands

lhci-ai analyze

Analyze Lighthouse results with AI assistance.

lhci-ai analyze [options]

Options:
  --provider <name>      AI provider: copilot | openai | local (default: local)
  --github-token <token> GitHub token for Copilot/PR access
  --openai-key <key>     OpenAI API key
  --auto-fix             Generate auto-fix suggestions
  --post-comment         Post analysis to GitHub PR
  --pr-number <number>   PR number for posting comments
  --repo <repo>          GitHub repository (owner/repo)
  --baseline-strategy <strategy> Baseline strategy: latest | same-url | median | pXX (default: same-url)
  --output <format>      Output format: terminal | json | markdown (default: terminal)
  --config <path>        Config file path

Baseline strategies:

  • latest: compare against the second-most-recent report.
  • same-url: compare against the most recent prior run for the same URL (falling back to latest).
  • median: compare against a median baseline from multiple prior runs (same URL when available).
  • pXX (for example p75, p90): compare against a percentile baseline from multiple prior runs. For stricter regression guarding, scores use the requested percentile while timing metrics use the complementary percentile.
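
For example, to compare the current run against a 75th-percentile baseline built from prior runs, pass the strategy name directly (all flags as documented above):

# Percentile baseline for stricter regression guarding
lhci-ai analyze --baseline-strategy p75

# Or simply compare against the most recent prior report
lhci-ai analyze --baseline-strategy latest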

lhci-ai check

Quick threshold check for CI/CD gates.

lhci-ai check [options]

Options:
  --performance <score>     Minimum performance score (0-100)
  --accessibility <score>   Minimum accessibility score (0-100)
  --best-practices <score>  Minimum best practices score (0-100)
  --seo <score>             Minimum SEO score (0-100)

Example:

# Fail if performance drops below 90%
lhci-ai check --performance 90

lhci-ai init

Display configuration setup instructions.

Configuration

Create a .lhci-ai.js or add to your existing .lhcirc.js:

module.exports = {
  ai: {
    provider: 'local',        // 'copilot' | 'openai' | 'local'
    baselineStrategy: 'same-url', // 'latest' | 'same-url' | 'median' | 'p75'
    autoFix: true,
    outputFormat: 'terminal',
  },
  thresholds: {
    performance: 0.9,
    accessibility: 0.9,
    bestPractices: 0.9,
    seo: 0.9,
  },
};

Environment Variables

  • GITHUB_TOKEN - GitHub token for PR comments (Copilot uses GitHub CLI auth)
  • OPENAI_API_KEY - OpenAI API key
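
A minimal shell setup using these variables might look like the following (the token values are placeholders):

# Placeholders: substitute real credentials or CI secrets
export GITHUB_TOKEN=ghp_your_token_here   # used for PR comments and Copilot token auth
export OPENAI_API_KEY=sk_your_key_here    # used by the OpenAI provider

lhci-ai analyze --provider openai --openai-key "$OPENAI_API_KEY" --post-comment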

Copilot Authentication

The Copilot provider uses the official GitHub Copilot SDK, which authenticates via:

  1. GitHub CLI (recommended): Run gh auth login once
  2. Environment variable: Set GITHUB_TOKEN with a token that has Copilot access

CI/CD Integration

GitHub Actions

name: Lighthouse CI

on: [push, pull_request]

jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm ci

      - name: Build
        run: npm run build

      - name: Run Lighthouse CI
        run: |
          npm install -g @lhci/cli lhci-ai-assistant
          lhci collect --url http://localhost:3000
          lhci-ai analyze --provider copilot --post-comment
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

Programmatic API

import { analyze, quickCheck, loadLighthouseReports } from 'lhci-ai-assistant';

// Run full analysis
const result = await analyze({
  provider: 'local',
  output: 'json',
  autoFix: true,
});

console.log(result.summary);
console.log(result.regressions);
console.log(result.recommendations);

// Quick threshold check
const check = await quickCheck({
  performance: 0.9,
  accessibility: 0.9,
});

if (!check.passed) {
  console.log('Threshold failures:', check.failures);
  process.exit(1);
}

How It Works

  1. Load Reports: Reads Lighthouse JSON reports from the .lighthouseci/ directory
  2. Extract Metrics: Parses Core Web Vitals (FCP, LCP, TBT, CLS) and category scores
  3. Compare: Identifies regressions and improvements vs baseline
  4. Fetch Context: Retrieves code changes from GitHub PR diff
  5. Analyze: Uses AI or heuristics to identify root causes
  6. Recommend: Generates prioritized, actionable recommendations
  7. Output: Formats results for terminal, JSON, or PR comments
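
Putting those steps together in a pull-request context, a typical invocation might look like this (the repository name and PR number below are placeholders):

# Collect fresh reports, then analyze against a baseline and post the result to the PR
lhci collect --url https://your-site.com
lhci-ai analyze \
  --provider copilot \
  --repo your-org/your-repo \
  --pr-number 123 \
  --post-comment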

Supported Metrics

  • Performance Score: Overall Lighthouse performance score
  • FCP: First Contentful Paint
  • LCP: Largest Contentful Paint
  • TBT: Total Blocking Time
  • CLS: Cumulative Layout Shift
  • Speed Index: Visual progress speed
  • TTI: Time to Interactive

Auto-Fix Patterns

The tool can detect and suggest fixes for:

  • Missing loading="lazy" on images
  • Missing image dimensions (CLS fix)
  • Scripts without defer or async
  • Missing preconnect hints
  • Missing font-display: swap
  • Render-blocking resources
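
To generate those suggestions and capture them in a reviewable form, combine the documented flags, for example:

# Emit auto-fix suggestions as Markdown (suitable for pasting into a PR or issue)
lhci-ai analyze --auto-fix --output markdown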

Contributing

See CONTRIBUTING.md for development setup and guidelines.

License

MIT - see LICENSE

Related Projects