
topsclaude v1.1.1

topsClaude

Automated peer review for Claude Code — end-to-end Jira → GitLab workflow with 7 specialized review agents.

Overview

topsClaude automates the full peer review cycle:

  1. Fetches your Jira tickets in "Peer Review" status (or GitLab MRs assigned to you)
  2. Extracts the merge request from the ticket
  3. Runs 6 specialized review agents in parallel against the MR diff
  4. Posts findings as comments to the GitLab MR
  5. Approves the MR or requests changes
  6. Transitions the Jira ticket
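Under the hood, steps 4 and 6 map onto well-known REST endpoints (the plugin drives them through MCP servers, not raw curl). A dry-run sketch that only prints the requests it would make; every host name, ID, and ticket key below is hypothetical:

```shell
# Hypothetical coordinates: replace with your instance, project, MR, and ticket
GITLAB_HOST="gitlab.example.com"
PROJECT_ID=42
MR_IID=7
JIRA_HOST="jira.example.com"
ISSUE_KEY="PROJ-123"

# Step 4 posts findings via the GitLab MR notes API, roughly:
#   curl -X POST -H "PRIVATE-TOKEN: $TOKEN" --data-urlencode "body=<finding>" <url>
echo "POST https://${GITLAB_HOST}/api/v4/projects/${PROJECT_ID}/merge_requests/${MR_IID}/notes"

# Step 6 moves the ticket via the Jira transitions API (transition IDs vary per workflow):
#   curl -X POST -H "Content-Type: application/json" -d '{"transition":{"id":"..."}}' <url>
echo "POST https://${JIRA_HOST}/rest/api/2/issue/${ISSUE_KEY}/transitions"
```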

Everything runs inside Claude Code with one command: /topsclaude:peer-review.

Quick Start

# Install (one-time)
npx topsclaude@latest

# Then in Claude Code
/topsclaude:peer-review

Agents

1. comment-analyzer

Focus: Code comment accuracy and maintainability

Analyzes:

  • Comment accuracy vs actual code
  • Documentation completeness
  • Comment rot and technical debt
  • Misleading or outdated comments

When to use:

  • After adding documentation
  • Before finalizing PRs with comment changes
  • When reviewing existing comments

Triggers:

"Check if the comments are accurate"
"Review the documentation I added"
"Analyze comments for technical debt"

2. pr-test-analyzer

Focus: Test coverage quality and completeness

Analyzes:

  • Behavioral vs line coverage
  • Critical gaps in test coverage
  • Test quality and resilience
  • Edge cases and error conditions

When to use:

  • After creating a PR
  • When adding new functionality
  • To verify test thoroughness

Triggers:

"Check if the tests are thorough"
"Review test coverage for this PR"
"Are there any critical test gaps?"

3. silent-failure-hunter

Focus: Error handling and silent failures

Analyzes:

  • Silent failures in catch blocks
  • Inadequate error handling
  • Inappropriate fallback behavior
  • Missing error logging

When to use:

  • After implementing error handling
  • When reviewing try/catch blocks
  • Before finalizing PRs with error handling

Triggers:

"Review the error handling"
"Check for silent failures"
"Analyze catch blocks in this PR"

4. type-design-analyzer

Focus: Type design quality and invariants

Analyzes:

  • Type encapsulation (rated 1-10)
  • Invariant expression (rated 1-10)
  • Type usefulness (rated 1-10)
  • Invariant enforcement (rated 1-10)

When to use:

  • When introducing new types
  • During PR creation with data models
  • When refactoring type designs

Triggers:

"Review the UserAccount type design"
"Analyze type design in this PR"
"Check if this type has strong invariants"

5. code-reviewer

Focus: General code review for project guidelines

Analyzes:

  • CLAUDE.md compliance
  • Style violations
  • Bug detection
  • Code quality issues

When to use:

  • After writing or modifying code
  • Before committing changes
  • Before creating pull requests

Triggers:

"Review my recent changes"
"Check if everything looks good"
"Review this code before I commit"

6. code-simplifier

Focus: Code simplification and refactoring

Analyzes:

  • Code clarity and readability
  • Unnecessary complexity and nesting
  • Redundant code and abstractions
  • Consistency with project standards
  • Overly compact or clever code

When to use:

  • After writing or modifying code
  • After passing code review
  • When code works but feels complex

Triggers:

"Simplify this code"
"Make this clearer"
"Refine this implementation"

Note: This agent preserves functionality while improving code structure and maintainability.

7. peer-reviewer (orchestrator)

Focus: End-to-end MR review coordination

What it does:

  • Fetches the MR diff via GitLab MCP
  • Spawns all 6 review agents above in parallel
  • Aggregates findings by severity (critical / major / minor)
  • Posts comments directly to the GitLab MR
  • Approves or requests changes
  • Transitions the linked Jira ticket

Invoked by: /topsclaude:peer-review command (you rarely call it directly).

Usage Patterns

Individual Agent Usage

Simply ask questions that match an agent's focus area, and Claude will automatically trigger the appropriate agent:

"Can you check if the tests cover all edge cases?"
→ Triggers pr-test-analyzer

"Review the error handling in the API client"
→ Triggers silent-failure-hunter

"I've added documentation - is it accurate?"
→ Triggers comment-analyzer

Comprehensive PR Review

For thorough PR review, ask for multiple aspects:

"I'm ready to create this PR. Please:
1. Review test coverage
2. Check for silent failures
3. Verify code comments are accurate
4. Review any new types
5. General code review"

This will trigger all relevant agents to analyze different aspects of your PR.

Proactive Review

Claude may proactively use these agents based on context:

  • After writing code → code-reviewer
  • After adding docs → comment-analyzer
  • Before creating PR → Multiple agents as appropriate
  • After adding types → type-design-analyzer

Installation

Install

npx topsclaude@latest

You'll be prompted for:

  1. Install scope — Global (~/.claude, available in every project) or Local (./.claude, current project only)
  2. Secret key — Ask your team lead if you don't have it

After the install completes, restart Claude Code and run /topsclaude:peer-review.

Non-interactive install (for CI)

TOPSCLAUDE_KEY=your-secret-key npx topsclaude@latest --global --yes

Flags:

  • --global / -g — install to ~/.claude (default)
  • --local / -l — install to ./.claude
  • --yes / -y — skip all prompts
  • --uninstall / -u — remove topsClaude
  • --help / -h — show usage

Update

Just run the installer again — it auto-detects existing installs and upgrades:

npx topsclaude@latest

Or from inside Claude Code:

/topsclaude:update

Uninstall

npx topsclaude@latest --uninstall

This reads the manifest at ~/.claude/.topsclaude-manifest.json and cleanly removes every file the installer wrote — nothing else is touched.
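The real manifest format is an installer implementation detail, but the remove-only-what-you-wrote idea is easy to sketch. A self-contained illustration using a throwaway directory and a simplified newline-list manifest (the actual ~/.claude/.topsclaude-manifest.json is JSON and richer than this):

```shell
set -eu
root="$(mktemp -d)"
mkdir -p "$root/agents/topsclaude" "$root/commands/topsclaude"
touch "$root/agents/topsclaude/code-reviewer.md" \
      "$root/commands/topsclaude/peer-review.md" \
      "$root/settings.json"   # unrelated file: must survive the uninstall

# Simplified manifest: one installed path per line
cat > "$root/manifest.txt" <<EOF
agents/topsclaude/code-reviewer.md
commands/topsclaude/peer-review.md
EOF

# Remove exactly what the manifest lists, nothing else
while IFS= read -r rel; do
  rm -f "$root/$rel"
done < "$root/manifest.txt"

[ ! -e "$root/agents/topsclaude/code-reviewer.md" ] && \
[ -e "$root/settings.json" ] && echo "manifest files removed; settings.json untouched"
```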

Verify Installation

After installing, you should see these files on disk:

ls ~/.claude/agents/topsclaude/
#   code-reviewer.md  code-simplifier.md  comment-analyzer.md
#   peer-reviewer.md  pr-test-analyzer.md  silent-failure-hunter.md
#   type-design-analyzer.md

ls ~/.claude/commands/topsclaude/
#   peer-review.md  update.md

Inside Claude Code, type / and you should see /topsclaude:peer-review and /topsclaude:update in the autocomplete list.
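The listing above can be turned into a small script that reports anything missing. A sketch; point CLAUDE_DIR at your install (~/.claude or ./.claude). The default here falls back to an empty throwaway directory so the script is runnable anywhere, in which case every file reports as missing:

```shell
base="${CLAUDE_DIR:-$(mktemp -d)}"   # point at ~/.claude or ./.claude
missing=0
for f in \
  agents/topsclaude/code-reviewer.md \
  agents/topsclaude/code-simplifier.md \
  agents/topsclaude/comment-analyzer.md \
  agents/topsclaude/peer-reviewer.md \
  agents/topsclaude/pr-test-analyzer.md \
  agents/topsclaude/silent-failure-hunter.md \
  agents/topsclaude/type-design-analyzer.md \
  commands/topsclaude/peer-review.md \
  commands/topsclaude/update.md
do
  [ -e "$base/$f" ] || { echo "missing: $f"; missing=$((missing + 1)); }
done
echo "$missing of 9 expected files missing"
```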

Agent Details

Confidence Scoring

Agents provide confidence scores for their findings:

  • comment-analyzer: Identifies issues with high confidence in accuracy checks
  • pr-test-analyzer: Rates test gaps 1-10 (10 = critical, must add)
  • silent-failure-hunter: Flags severity of error handling issues
  • type-design-analyzer: Rates 4 dimensions on a 1-10 scale
  • code-reviewer: Scores issues 0-100 (91-100 = critical)
  • code-simplifier: Identifies complexity and suggests simplifications

Output Formats

All agents provide structured, actionable output:

  • Clear issue identification
  • Specific file and line references
  • Explanation of why it's a problem
  • Suggestions for improvement
  • Prioritized by severity

Best Practices

When to Use Each Agent

Before Committing:

  • code-reviewer (general quality)
  • silent-failure-hunter (if you changed error handling)

Before Creating PR:

  • pr-test-analyzer (test coverage check)
  • comment-analyzer (if you added or modified comments)
  • type-design-analyzer (if you added or modified types)
  • code-reviewer (final sweep)

After Passing Review:

  • code-simplifier (improve clarity and maintainability)

During PR Review:

  • Any agent for specific concerns raised
  • Targeted re-review after fixes

Running Multiple Agents

You can request multiple agents to run in parallel or sequentially:

Parallel (faster):

"Run pr-test-analyzer and comment-analyzer in parallel"

Sequential (when one informs the other):

"First review test coverage, then check code quality"

Tips

  • Be specific: Target specific agents for focused review
  • Use proactively: Run before creating PRs, not after
  • Address critical issues first: Agents prioritize findings
  • Iterate: Run again after fixes to verify
  • Don't overuse: Focus on changed code, not the entire codebase

Troubleshooting

Agent Not Triggering

Issue: Asked for review but agent didn't run

Solution:

  • Be more specific in your request
  • Mention the agent type explicitly
  • Reference the specific concern (e.g., "test coverage")

Agent Analyzing Wrong Files

Issue: The agent reviews too many files or the wrong ones

Solution:

  • Specify which files to focus on
  • Reference the PR number or branch
  • Mention "recent changes" or "git diff"

Integration with Workflow

This plugin works great with:

  • build-validator: Run build/tests before review
  • Project-specific agents: Combine with your custom agents

Recommended workflow:

  1. Write code → code-reviewer
  2. Fix issues → silent-failure-hunter (if error handling)
  3. Add tests → pr-test-analyzer
  4. Document → comment-analyzer
  5. Review passes → code-simplifier (polish)
  6. Create PR

Contributing

Source lives at [email protected]:internal/topsClaude.git. File issues or suggestions with the internal team.

License

MIT

Author

Prafulk ([email protected])


Quick Start: npx topsclaude@latest → restart Claude Code → /topsclaude:peer-review