
@viewgraph/core

v0.9.7 · 3,194 downloads

MCP server for AI-powered UI capture, auditing, and annotation

Browser extension + MCP server for AI-powered UI capture, auditing, and annotation.

ViewGraph captures structured DOM snapshots from any web page and exposes them to AI coding assistants via the Model Context Protocol. Agents can query page structure, audit accessibility, find missing test IDs, compare captures, track regressions, and act on human annotations - all through 41 MCP tools and 12 prompt templates.

Works with any MCP-compatible agent: Kiro, Claude Code, Cursor, Windsurf, Cline, Aider, and more. No agent-specific code - pure MCP protocol. Tools that don't support MCP can read .viewgraph.json capture files directly from disk.
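As a minimal sketch of what "reading capture files directly" could look like: the snippet below scans a capture-shaped object for interactive elements that lack a `data-testid`. The field names (`nodes`, `tag`, `attrs`, `selector`) are illustrative assumptions, not the documented `.viewgraph.json` schema.

```javascript
// Sketch: find interactive elements missing a data-testid in a capture-shaped
// object. The schema here is an assumption for illustration only; a real tool
// would JSON.parse a .viewgraph.json file from disk first.
const capture = {
  url: "https://example.com/login",
  nodes: [
    { selector: "#login-btn", tag: "button", attrs: { "data-testid": "login" } },
    { selector: "#email",     tag: "input",  attrs: {} },
    { selector: "nav.main",   tag: "nav",    attrs: {} },
  ],
};

const INTERACTIVE = new Set(["button", "input", "a", "select", "textarea"]);

function missingTestIds(cap) {
  return cap.nodes
    .filter((n) => INTERACTIVE.has(n.tag) && !("data-testid" in n.attrs))
    .map((n) => n.selector);
}

console.log(missingTestIds(capture)); // [ '#email' ]
```

In practice the agent would do this through the MCP tools; direct file reads are the fallback for tools without MCP support.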

Components

| Component | Description |
|---|---|
| server/ | MCP server - 41 query/analysis/request tools, WebSocket collab, baselines |
| extension/ | Chrome/Firefox extension - DOM capture, annotate, 21 enrichment collectors, multi-export |
| packages/playwright/ | Playwright fixture - capture structured DOM snapshots during E2E tests |
| power/ | Kiro Power assets - 3 hooks, 9 prompts, 3 steering docs, MCP config |

How It Works

ViewGraph runs alongside your project as a standalone tool. It does not embed into your codebase or require changes to your application. It works with any web app regardless of backend technology (Python, Ruby, Java, Go, PHP, etc.).

Your app (any language) --> serves HTML --> Browser renders it --> Extension captures DOM
                                                                        |
                                                                        v
Kiro / Claude / Cursor  <-- MCP protocol <-- ViewGraph server <-- .viewgraph.json files

The extension captures the DOM from Chrome or Firefox. The server reads those capture files and exposes them to your AI agent via MCP. Your agent then uses this context to modify your source code - it never injects into or manipulates the running application directly.

Getting Started

Prerequisites: Node.js 22+, npm 9+, Chrome 116+ or Firefox 109+

# 1. Install the browser extension from Chrome Web Store or Firefox Add-ons (links above)

# 2. Add to your AI agent's MCP config (~/.kiro/settings/mcp.json):
{
  "mcpServers": {
    "viewgraph": { "command": "npx", "args": ["-y", "@viewgraph/core"] }
  }
}

# 3. Capture: click the ViewGraph toolbar icon on any page
# 4. Ask your agent: "Fix the annotations from my last review"

The server runs automatically via npx - no install needed. It auto-creates .viewgraph/captures/ and learns your URL pattern from the first capture.

GitHub Releases always carries the latest version: Chrome/Firefox store reviews can delay updates by days or weeks, while GitHub Releases has the newest extension ZIPs and changelog immediately. For the bleeding edge, download from GitHub.

Alternative: npm install -g @viewgraph/core for explicit version pinning, then run viewgraph-init from each project folder to configure URL patterns and capture routing.

Uninstall: npx @viewgraph/core uninstall from your project folder. Removes MCP config, steering docs, prompts, and hooks. Optionally removes capture data. Uninstall guide.

The extension sidebar opens with Review (annotate and comment) and Inspect (network errors, console issues) tabs. Export via Send to Agent (MCP), Copy Markdown (Jira/GitHub), or Download Report (ZIP).

For detailed setup with screenshots, browser-specific instructions, and multi-project configuration, see the Quick Start Guide.

Try the demo: Open docs/demo/index.html - a login page with 8 planted bugs. Annotate, send to Kiro, watch them get fixed. Walkthrough.

Workflows

ViewGraph supports four broad workflows, described below. For the full list of 23 problems it solves, see Why ViewGraph?.

For developers with AI agents

  1. Open your app in the browser, click the ViewGraph icon
  2. Click elements or shift+drag regions, add comments describing what to fix
  3. Check the Inspect tab for network errors or console issues
  4. Click Send to Agent - annotations bundle with the full DOM capture + enrichment data
  5. Ask your agent to fix the issues - it has full DOM context

For testers and reviewers (no AI agent needed)

The extension works standalone. No MCP server required.

  1. Open the app in the browser, click the ViewGraph icon
  2. Click or shift+drag to select problem areas, add comments
  3. Export:
    • Copy Markdown - paste into Jira/Linear/GitHub (includes network failures, console errors, viewport breakpoint)
    • Download Report - ZIP with markdown, screenshots, network.json, console.json

For teams

A tester annotates and exports to markdown. A developer annotates and sends to Kiro. A reviewer compares captures against baselines. Same tool, same workflow, same format - the only difference is where the output goes. See Why ViewGraph? for the full list of review, release, and platform workflows.

For test automation teams

Capture structured DOM snapshots during Playwright E2E tests, or generate tests from browser captures:

  • Generate tests from captures: Capture a page with the extension, ask your agent @vg-tests - it generates a complete Playwright test file with correct locators for every interactive element. 20-30 minutes of manual inspection reduced to one prompt.
  • Capture during tests: Add await viewgraph.capture('checkout-page') to existing tests. The agent can then diff captures between runs, audit accessibility, and detect structural regressions.
  • Annotate from tests: await viewgraph.annotate('#email', 'Missing aria-label') flags issues for the agent to fix with full DOM context.

See @viewgraph/playwright for setup, API, and examples.
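The "diff captures between runs" idea above can be sketched as comparing the selector sets of two capture runs to flag structural regressions. The node shape is a simplifying assumption for illustration, not ViewGraph's actual format or API.

```javascript
// Sketch: diff two capture runs by selector set to surface structural
// regressions. Node shape is an assumption, not ViewGraph's real schema.
function diffCaptures(before, after) {
  const a = new Set(before.map((n) => n.selector));
  const b = new Set(after.map((n) => n.selector));
  return {
    removed: [...a].filter((s) => !b.has(s)), // present before, gone now
    added: [...b].filter((s) => !a.has(s)),   // new in the latest run
  };
}

const run1 = [{ selector: "#checkout" }, { selector: "#promo-banner" }];
const run2 = [{ selector: "#checkout" }, { selector: "#pay-now" }];

console.log(diffCaptures(run1, run2));
// { removed: [ '#promo-banner' ], added: [ '#pay-now' ] }
```

A removed selector that a test locator depends on is exactly the kind of regression the agent can catch before the test suite fails.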

Capture Accuracy

ViewGraph's capture accuracy is measured automatically against 150 diverse real-world websites using a bulk capture experiment. The experiment runs ViewGraph's DOM traverser via Puppeteer, then compares the output against live DOM ground truth across 7 dimensions.

Latest results (Set A - Breadth, 48 sites across 12 categories, 4 rendering types, 6 writing systems):

| Dimension | Median | What it measures |
|---|---|---|
| Composite | 92.1% | Weighted combination of all dimensions |
| Selector accuracy | 99.7% | VG's CSS selectors resolve to real DOM elements |
| Testid recall | 100.0% | All data-testid elements captured |
| Interactive recall | 97.9% | Buttons, links, inputs captured |
| Bbox accuracy | 100.0% | Bounding boxes preserved through serialization |
| Semantic recall | 88.2% | Landmark elements (nav, main, header) captured |
| Text match | 53.1% | visibleText matches element text (see note) |

Full methodology, per-site breakdowns, and run history: scripts/experiments/bulk-capture/
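For intuition on the "weighted combination" composite, here is a minimal weighted-mean sketch. The weights below are invented for illustration; the real weighting lives in the experiment scripts, not in this snippet.

```javascript
// Sketch: weighted composite over per-dimension scores. Weights are
// hypothetical; see the bulk-capture experiment scripts for the real ones.
function composite(scores, weights) {
  let num = 0, den = 0;
  for (const [dim, w] of Object.entries(weights)) {
    num += (scores[dim] ?? 0) * w;
    den += w;
  }
  return num / den;
}

const scores  = { selector: 99.7, testid: 100.0, interactive: 97.9 };
const weights = { selector: 0.5, testid: 0.25, interactive: 0.25 }; // hypothetical

console.log(composite(scores, weights).toFixed(2));
```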

Token Efficiency (v3 Format)

ViewGraph v3 (format v2.4.0) reduces agent token consumption by up to 97% compared to full-capture approaches:

| Optimization | Measured Impact | Method |
|---|---|---|
| Action Manifest | 80-85% reduction on interactive queries | Pre-joined flat index with short refs |
| Style dedup | 50% dedup rate across 175 captures | Shared style table, hash-based refs |
| Default omission | 41.8% of style values removed | Browser defaults filtered at serialization |
| Container merging | 30-50% fewer nodes | D2Snap-aligned empty wrapper removal |
| observationDepth | 96% reduction at interactive-only | Agent chooses depth per request |
| File-backed receipts | 99.8% reduction on transmission | ~200 token receipt instead of full JSON |
| JSON Patch diffs | 50-1500x compression | RFC 6902 patches for sequential captures |

All measurements from experiments on 175 real captures across 4 projects + 48 diverse websites. See scripts/experiments/ for methodology and results.
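For a feel of how RFC 6902 patches compress sequential captures: a one-operation patch replaces a re-send of the whole document. The sketch below handles only shallow "replace"/"add" operations and skips RFC 6902 path escaping; a real implementation would use a full JSON Patch library.

```javascript
// Sketch: apply a tiny RFC 6902 patch to a capture-shaped object. Only
// "replace"/"add" are handled and ~0/~1 path escapes are ignored; this is an
// illustration, not a spec-complete implementation.
function applyPatch(doc, patch) {
  const out = structuredClone(doc);
  for (const op of patch) {
    const parts = op.path.split("/").slice(1); // "/a/b" -> ["a", "b"]
    let target = out;
    for (const p of parts.slice(0, -1)) target = target[p];
    if (op.op === "replace" || op.op === "add") {
      target[parts.at(-1)] = op.value;
    }
  }
  return out;
}

const capture = { url: "/cart", nodes: { "#total": { text: "$40" } } };
const patch = [{ op: "replace", path: "/nodes/#total/text", value: "$42" }];

console.log(applyPatch(capture, patch).nodes["#total"].text); // $42
```

A few dozen bytes of patch versus a multi-kilobyte capture is where the 50-1500x range in the table comes from: the compression depends entirely on how little changed between captures.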

Format spec: docs/architecture/viewgraph-v2-format.md | Research: docs/architecture/viewgraph-v3-format-agentic-enhancements.md

Documentation

Companion Tools

ViewGraph sees the UI. TracePulse feels the backend. Together with Chrome DevTools MCP, they form the three-layer agentic debugging stack: backend verification (TracePulse) + browser verification (Chrome DevTools MCP) + visual verification (ViewGraph).

Acknowledgments

ViewGraph's capture format was inspired by Element to LLM (E2LLM) by insitu.im - the first browser extension to frame DOM capture as a structured perception layer for AI agents. The core insight - that agents need a purpose-built intermediate representation, not raw HTML - came from E2LLM. ViewGraph extended these foundations through deep format research that produced 20 improvement proposals across token efficiency, accessibility, enrichment, and bidirectional MCP integration. Full comparison.

ViewGraph's security assessment was conducted using the AWS Labs Threat Modeling MCP Server by Aidin Ferdowsi (AWS). The tool's structured STRIDE analysis and Threat Composer integration produced the 9-threat, 9-mitigation model that drove ViewGraph's HMAC auth implementation, prompt injection defenses, and seven rounds of security reviews. Full threat model.

License

AGPL-3.0 - see COPYING for the full license text.

Copyright (c) 2026 Sourjya S. Sen. See ADR-009 for licensing rationale.