
yo-bug v0.3.4

MCP Server for visual test feedback in vibe coding — QA capability as a protocol

yo-bug 🐛

"Yo, bug!" — Point at bugs, AI fixes them.

MCP Server that gives AI coding assistants QA superpowers. One install, then your AI handles the entire test-feedback-fix loop.

In vibe coding, the bottleneck is testing: humans find bugs but struggle to describe them. yo-bug solves this by letting users point, click, and annotate — while the AI automatically receives element locations, console errors, network failures, action recordings, and annotated screenshots.

What it does

| For the Human | For the AI |
|---|---|
| Click a broken element → done | Gets: CSS selector + computed styles + React/Vue component name |
| Draw a circle on a screenshot → done | Gets: annotated screenshot as image content |
| Just use the app normally | Gets: last 100 user actions (clicks, inputs, navigation) |
| Check off a test list | Gets: which tests passed/failed with linked feedback |

The AI drives the entire workflow through MCP tools. Humans never need to learn commands, configure proxies, or modify their code.
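To make the "full context" in the table above concrete, here is a sketch (not yo-bug's actual code) of how a stable CSS selector for a clicked element might be derived; the real SDK captures much more (computed styles, framework component names), and the `ElementLike` shape here is purely illustrative.

```typescript
// Minimal element model for the sketch — real code would walk the DOM.
interface ElementLike {
  tag: string;
  id?: string;
  classes: string[];
  parent?: ElementLike;
  indexInParent?: number; // position among same-tag siblings (0-based)
}

// Walk up from the clicked element, stopping at the first id (unique anchor).
function buildSelector(el: ElementLike): string {
  const parts: string[] = [];
  let cur: ElementLike | undefined = el;
  while (cur) {
    if (cur.id) {
      parts.unshift(`#${cur.id}`); // an id is unique enough: stop here
      break;
    }
    let part = cur.tag;
    if (cur.classes.length) part += "." + cur.classes.join(".");
    if (cur.indexInParent !== undefined) part += `:nth-of-type(${cur.indexInParent + 1})`;
    parts.unshift(part);
    cur = cur.parent;
  }
  return parts.join(" > ");
}
```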

Install

```bash
npx yo-bug install

# That's it. Your AI now has QA superpowers.
```

This auto-detects your AI tool (Claude Code / Cursor / Windsurf) and writes the MCP config. One time, done forever.
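For reference, the written MCP config entry typically looks something like the sketch below. The exact filename and fields depend on the AI tool (Claude Code, for example, reads a project-level `.mcp.json`), and the precise `command`/`args` yo-bug registers are an assumption here, not taken from its source.

```json
{
  "mcpServers": {
    "yo-bug": {
      "command": "npx",
      "args": ["yo-bug"]
    }
  }
}
```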

How it works

```
AI writes code
    → AI calls start_test_session()
    → Browser opens with test overlay injected (zero code changes)
    → AI pushes a test checklist (8 QA dimensions, 40+ sub-scenarios)
    → Human tests, clicks problems, checks off items
    → AI calls list_feedbacks() → sees everything
    → AI fixes → calls resolve_feedback() → browser asks human to verify
    → Loop until done
```

MCP Tools (9 total)

Session Control

| Tool | Description |
|---|---|
| `start_test_session(port?, open?)` | Start test mode: auto-detect dev server, launch reverse proxy with SDK injection, open browser |
| `stop_test_session()` | Stop test mode, return session summary (feedback stats, checklist results, weak dimensions) |

Feedback

| Tool | Description |
|---|---|
| `list_feedbacks(status?, type?, limit?)` | List submitted feedback (filter by status: open/verify/resolved) |
| `get_feedback(id)` | Full details: element info, console errors, network errors, action steps, annotated screenshot |
| `resolve_feedback(id)` | Mark as fixed → pushes verification request to browser → human confirms |

Test Checklist

| Tool | Description |
|---|---|
| `create_checklist(title, items)` | Push a structured test checklist to the browser. Items have step, expected result, priority, and dimension |
| `get_checklist_status()` | See which items passed/failed and any user feedback |

Test History

| Tool | Description |
|---|---|
| `save_test_record(module, ...)` | Save test results per module. Accumulates history for future reference |
| `get_test_history(module)` | Get historical test records. Shows frequently failing scenarios |

8 QA Test Dimensions

The `create_checklist` tool embeds professional QA methodology, guiding the AI to systematically cover:

  1. Happy path — Core functionality works end-to-end
  2. Empty/boundary — Empty inputs, special chars, max length, zero/negative values
  3. Error states — Offline, server errors, timeouts, recovery
  4. Duplicate ops — Double-click, re-submit, concurrent requests
  5. State recovery — Refresh, back/forward, deep links, tab close/reopen
  6. Loading/async — Loading states, failed loads, stale data
  7. Responsive — 375px mobile width, touch targets, overflow
  8. Interaction detail — Tab order, Enter/Escape, disabled states, focus

Each dimension has detailed sub-scenario templates that the AI selects from based on actual code changes.
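To illustrate, a checklist payload built from the fields named above (step, expected result, priority, dimension) might look like the sketch below; the field names and dimension identifiers are assumptions, not yo-bug's exact schema.

```typescript
// Illustrative checklist item shape — real schema may differ.
interface ChecklistItem {
  step: string;       // what the human should do
  expected: string;   // what they should observe
  priority: "high" | "medium" | "low";
  dimension: string;  // one of the 8 QA dimensions
}

const checklist: { title: string; items: ChecklistItem[] } = {
  title: "Checkout form",
  items: [
    { step: "Submit a valid order", expected: "Confirmation page shown", priority: "high", dimension: "happy-path" },
    { step: "Submit with an empty cart", expected: "Inline validation error", priority: "medium", dimension: "empty-boundary" },
    { step: "Double-click the Pay button", expected: "Only one charge created", priority: "high", dimension: "duplicate-ops" },
  ],
};
```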

Three Feedback Modes

Users choose how to report issues:

| Mode | Shortcut | How it works |
|---|---|---|
| Quick | Alt+Q | Click an element → instantly flagged with full context. One click, done. |
| Describe | Alt+D | Click an element → fill a form with problem type and description |
| Screenshot | Alt+S | Drag to select a screen region → annotate with arrows/shapes/text → submit |

Other shortcuts: Alt+X toggle test mode, Esc exit.

Auto-Captured Context

Every feedback submission automatically includes:

  • Console errors — last 50 console.error/console.warn entries with stack traces
  • Network failures — failed fetch/XHR requests with status, URL, duration
  • Unhandled exceptions — `window.onerror` + `unhandledrejection`
  • Action recording — last 100 user actions (clicks, inputs, keypresses, navigation) with timestamps
  • Element info — CSS selector, tag, text content, bounding rect, computed styles, React/Vue component name
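Putting the bullets above together, the context attached to one feedback item might be shaped roughly like this; field names are illustrative, not yo-bug's exact payload.

```typescript
// Sketch of a feedback context object — shapes are assumptions.
interface FeedbackContext {
  consoleErrors: { level: "error" | "warn"; message: string; stack?: string }[]; // last 50
  networkFailures: { url: string; status: number; durationMs: number }[];
  unhandledErrors: string[]; // window.onerror + unhandledrejection messages
  actions: { type: "click" | "input" | "keypress" | "navigation"; at: number }[]; // last 100, timestamped
  element?: {
    selector: string;
    tag: string;
    text: string;
    rect: { x: number; y: number; width: number; height: number };
    componentName?: string; // React/Vue component, when detectable
  };
}

const sample: FeedbackContext = {
  consoleErrors: [{ level: "error", message: "Cannot read properties of undefined", stack: "at Cart.render" }],
  networkFailures: [{ url: "/api/cart", status: 500, durationMs: 312 }],
  unhandledErrors: [],
  actions: [{ type: "click", at: 1700000000000 }],
  element: {
    selector: "#app > button.buy",
    tag: "button",
    text: "Buy",
    rect: { x: 0, y: 0, width: 80, height: 32 },
    componentName: "BuyButton",
  },
};
```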

Verify-Fix Flow

When the AI calls `resolve_feedback()`, instead of silently marking the item done:

  1. Browser shows a "Verify Fix" card to the user
  2. User clicks "Fixed" or "Still broken"
  3. Status updates accordingly — AI knows if the fix actually worked
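The flow above can be sketched as a tiny state machine. The status values match the ones `list_feedbacks` filters on (open/verify/resolved); the transition logic itself is an assumption about yo-bug's internals.

```typescript
type Status = "open" | "verify" | "resolved";
type Event = "resolve_feedback" | "user_fixed" | "user_still_broken";

function nextStatus(current: Status, event: Event): Status {
  if (current === "open" && event === "resolve_feedback") return "verify";  // AI claims a fix
  if (current === "verify" && event === "user_fixed") return "resolved";    // human confirms
  if (current === "verify" && event === "user_still_broken") return "open"; // back to the AI
  return current; // events that don't apply in this state are ignored
}
```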

i18n

The SDK auto-detects the page's `<html lang="...">` attribute:

  • zh-* → Chinese interface
  • Everything else → English interface

MCP tool descriptions are in English (the AI translates to the user's language naturally).
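The detection rule above is simple enough to sketch exactly (assumed implementation, not yo-bug's source):

```typescript
// zh-* → Chinese interface; anything else (including a missing lang) → English.
function pickLocale(htmlLang: string | null): "zh" | "en" {
  return htmlLang?.toLowerCase().startsWith("zh") ? "zh" : "en";
}
```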

Architecture

```
Browser → Reverse Proxy (localhost:3695) → Dev Server (localhost:5173)
              │
              ├─ Auto-injects SDK into HTML responses
              ├─ WebSocket passthrough (HMR works normally)
              ├─ Feedback API (POST/GET)
              ├─ Checklist API (push/poll/update)
              └─ Verify API (push/confirm)
```
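The SDK-injection step amounts to rewriting HTML responses to load the overlay script. A minimal sketch of that transform (the script URL is made up, and the real proxy also has to handle streaming bodies and `Content-Length`):

```typescript
// Insert the SDK <script> tag just before </body>, or append if absent.
function injectSdk(html: string, sdkUrl = "/__yo-bug/sdk.js"): string {
  const tag = `<script src="${sdkUrl}"></script>`;
  const idx = html.lastIndexOf("</body>");
  if (idx === -1) return html + tag;
  return html.slice(0, idx) + tag + html.slice(idx);
}
```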

```
MCP Server (stdio) → AI Tool (Claude Code / Cursor / Windsurf)
              │
              ├─ start/stop_test_session → controls proxy lifecycle
              ├─ feedback tools → reads user submissions
              ├─ checklist tools → pushes test plans, reads results
              └─ history tools → persists test records per module
```

The proxy auto-detects dev server framework (Vite, Next.js, CRA, Webpack, Nuxt, Angular, Svelte, Astro) and port. If the dev server is already running, it connects. If not, it starts one.
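One plausible way that framework auto-detection could work is a lookup over `package.json` dependencies — an assumption here; yo-bug may also probe config files or running ports. Order matters, since a Next.js or SvelteKit project often also depends on Vite:

```typescript
// Framework → tell-tale dependency → conventional default port.
const FRAMEWORKS: [name: string, dep: string, defaultPort: number][] = [
  ["Next.js", "next", 3000],
  ["Nuxt", "nuxt", 3000],
  ["Astro", "astro", 4321],
  ["SvelteKit", "@sveltejs/kit", 5173],
  ["Angular", "@angular/core", 4200],
  ["CRA", "react-scripts", 3000],
  ["Vite", "vite", 5173], // most generic match last
];

function detectFramework(deps: Record<string, string>): { name: string; port: number } | null {
  for (const [name, dep, port] of FRAMEWORKS) {
    if (dep in deps) return { name, port };
  }
  return null;
}
```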

Security

  • All data stays local (~/.yo-bug/)
  • Feedback IDs validated against path traversal
  • Input fields whitelist-filtered and length-limited
  • Network interceptor uses exact pathname matching (no substring false positives)
  • No data sent to any external service
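Why exact pathname matching matters, in a sketch: a substring check on the URL would also capture unrelated requests whose paths merely contain the endpoint name. The `/__yo-bug/feedback` path here is an assumed endpoint, used only for illustration.

```typescript
// Exact pathname comparison via the URL parser — query strings and
// longer paths that merely contain the prefix don't match.
function isFeedbackApi(url: string, base = "http://localhost:3695"): boolean {
  return new URL(url, base).pathname === "/__yo-bug/feedback";
}
```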

Requirements

  • Node.js >= 18
  • Any MCP-compatible AI tool
  • Any web application running in a browser

License

MIT