

ship-code

Anti-slop agentic coding workflow for Claude Code. 3 agents, no enterprise theater.

Core philosophy: Slop is an engineering problem, not an LLM problem. If an agent produces bad code, fix the environment — never patch the output.

Install

npx ship-code@latest

Prompts you to install globally (all projects) or locally (this project only). Then restart Claude Code and type /ship-code: to see all commands.

Flags:

npx ship-code@latest --global    # global, no prompt
npx ship-code@latest --local     # project-only, no prompt
npx ship-code@latest --uninstall # remove

Commands

| Command | What it does |
|---|---|
| /ship-code:init | Set up hooks, gates, config, and hard blocks |
| /ship-code:ship | Full flow — interview, plan, generator-evaluator loops |
| /ship-code:plan <desc> | Create feature briefs for what to build |
| /ship-code:loop | Resume execution from the plan |
| /ship-code:queue | Show plan status or add features |
| /ship-code:run <feature> | Run one feature through generator-evaluator |
| /ship-code:verify | Run graded quality evaluation |
| /ship-code:quick <desc> | Small ad-hoc task — gates still enforced |
| /ship-code:help | Show the guide |

How It Works

You describe what to build
        │
        ▼
  Interview — what, why, constraints
        │
        ▼
  Planner creates feature briefs
  (goals + requirements, NOT implementation steps)
        │
        ▼
  For each feature:
  ┌─────────────────────────┐
  │  Generator-Evaluator    │
  │  Loop (max 3 rounds)    │
  │                         │
  │  Generator: explores    │
  │  codebase, implements,  │
  │  runs gates, commits    │
  │         ↓               │
  │  Evaluator: scores on   │
  │  5 dimensions (1-5)     │
  │         ↓               │
  │  Score < 3? → revise    │
  │  Score >= 3? → ship     │
  └─────────────────────────┘
        │
        ▼
  You review. Push when ready.
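The per-feature loop above can be sketched roughly as follows. The `generate` and `evaluate` functions are stand-ins for the real Claude Code subagents, and the feedback format is illustrative — none of these names are ship-code's actual API.

```typescript
// Sketch of the generator-evaluator loop (max 3 rounds). The agent calls
// are injected stubs here; in ship-code they are Claude Code subagents.
type Generate = (brief: string, feedback: string) => string;
type Evaluate = (attempt: string) => number[]; // five rubric scores, 1-5

function runFeature(
  brief: string,
  generate: Generate,
  evaluate: Evaluate,
  maxRounds = 3,
): "shipped" | "escalated" {
  let feedback = "";
  for (let round = 1; round <= maxRounds; round++) {
    const attempt = generate(brief, feedback); // explore, implement, run gates, commit
    const scores = evaluate(attempt);
    if (scores.every((s) => s >= 3)) return "shipped"; // every dimension at or above 3
    feedback = `revise; scores were ${scores.join(", ")}`; // feedback for the next round
  }
  return "escalated"; // out of rounds: stop and ask rather than improvise
}
```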

The 3 Agents

| Agent | Role |
|---|---|
| Planner | Creates feature briefs — what + why, never how |
| Generator | Autonomous builder — explores, decides, implements |
| Evaluator | Adversarial reviewer — graded rubric, not pass/fail |

Why only 3?

Modern AI models don't need micro-managed specs with line numbers and step-by-step instructions. They need:

  • Clear goals (Planner)
  • Autonomy to implement (Generator)
  • Adversarial quality checks (Evaluator)

Everything else — complex queue systems, XML task specs, separate research agents, wave orchestration, sprint contracts — is dead weight that actually limits the model's ability to self-correct.

Evaluator Rubric

Every feature gets scored 1-5 on:

| Dimension | What it measures |
|---|---|
| Correctness | Does it meet requirements? |
| Design | Does it fit existing patterns? |
| Code quality | Is it clean and readable? |
| Test quality | Are tests meaningful? |
| Security | Is it safe? |

  • All >= 3 → SHIP
  • Any = 2 → REVISE (generator gets specific feedback)
  • Any = 1 → REJECT (generator redoes from scratch)
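As a rough sketch, the thresholds above map to a verdict like this (function name and shape are hypothetical, not ship-code's internals):

```typescript
// Hypothetical mapping from the five rubric scores (each 1-5) to a verdict,
// following the thresholds listed above.
type Verdict = "SHIP" | "REVISE" | "REJECT";

function verdictFor(scores: number[]): Verdict {
  if (scores.some((s) => s <= 1)) return "REJECT"; // any 1: redo from scratch
  if (scores.some((s) => s === 2)) return "REVISE"; // any 2: revise with feedback
  return "SHIP"; // all dimensions >= 3
}
```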

Config

.ship/config.json:

{
  "workflow": {
    "parallel_features": true,
    "max_eval_rounds": 3,
    "skip_permissions": true
  }
}
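A minimal sketch of how a tool might read this file with fallbacks — the loader name and the default values are assumptions for illustration, not ship-code's implementation:

```typescript
import { readFileSync } from "node:fs";

// Hypothetical loader for .ship/config.json. Field names match the example
// above; the defaults chosen here are assumptions.
interface WorkflowConfig {
  parallel_features: boolean;
  max_eval_rounds: number;
  skip_permissions: boolean;
}

const DEFAULTS: WorkflowConfig = {
  parallel_features: true,
  max_eval_rounds: 3,
  skip_permissions: true,
};

function loadWorkflowConfig(path = ".ship/config.json"): WorkflowConfig {
  try {
    const parsed = JSON.parse(readFileSync(path, "utf8"));
    return { ...DEFAULTS, ...(parsed.workflow ?? {}) }; // missing keys fall back
  } catch {
    return DEFAULTS; // no readable config file: use defaults
  }
}
```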

The Golden Rules

  1. Never fix bad output. Reset and fix the brief — not the code.
  2. 3 agents, 3 roles. Planner plans, generator builds, evaluator reviews.
  3. Gates before everything. Lint + types + tests pass 100% before any commit.
  4. Quality is graded, not binary. Passing gates is the floor, not the ceiling.
  5. Generator decides the how. Briefs say what and why. Implementation is autonomous.
  6. Escalate, don't improvise. If stuck, stop and ask.

File Structure After /ship-code:init

.ship/
├── config.json        # Settings
├── HARD_BLOCKS.md     # What agents can never do
├── issues.md          # Agent blockers & learnings
└── plan.md            # Feature briefs (created during /ship-code:ship or /ship-code:plan)

License

MIT