
haiku-method

v3.4.0


H·AI·K·U methodology — universal lifecycle orchestration with hat-based workflows, completion criteria, and automatic context preservation.


H·AI·K·U

Human + AI Knowledge Unification — a Claude Code plugin that turns "tell the AI what to do" into a structured workflow with role-based hats, hard quality gates, and completion criteria.

AI can only move as fast as the rails it runs on. H·AI·K·U is the rails.

This is a Claude Code plugin, not a JavaScript library. import and require won't do anything useful — install it via your Claude harness (instructions below). The npm package exists so the plugin can be distributed and version-pinned through standard tooling.

The problem

Most AI workflows are a single prompt and a hope. The model produces 400 lines of plausible-looking code, you skim it, you ship it, and the bug shows up in production. The model wasn't wrong because it was dumb — it was wrong because nothing was checking, and nothing was telling it to slow down.

H·AI·K·U adds the structure that stops fast movement from becoming fast-wrong. Every piece of work runs through stages. Every stage has a hat sequence (planner → builder → verifier, or whatever the work demands). Every stage ends in a review gate. State lives on disk, so a context reset isn't a progress reset.

Install

/plugin marketplace add gigsmart/haiku-method
/plugin install haiku --scope project

Works in Claude Code, Claude Cowork, and other MCP-compatible harnesses (Cursor, Windsurf, Gemini CLI, OpenCode, Kiro).

Other MCP harnesses

Configure your harness's MCP server list with:

{
  "mcpServers": {
    "haiku": {
      "command": "npx",
      "args": ["-y", "haiku-method", "mcp"]
    }
  }
}

The skills (/haiku:start, /haiku:pickup, etc.) are Claude-specific — outside Claude, you drive the workflow by calling MCP tools directly (haiku_run_next, haiku_intent_create, etc.).
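Driving the workflow directly means sending ordinary MCP tool calls over the server configured above. A minimal JSON-RPC request using the standard MCP tools/call method might look like this (the tool name comes from the list above; the empty arguments object is an assumption, not the documented parameter schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "haiku_run_next",
    "arguments": {}
  }
}
```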

Quickstart

/haiku:start              # describe what you want; the plugin scaffolds an intent
/haiku:pickup             # advance the workflow one tick at a time

The orchestrator drives the stage loop. It tells you what to do next; you do it; it advances. When a stage finishes, adversarial review agents try to break the output before the gate opens.

Studios

Studios are the lifecycle templates — pre-built sequences of stages, hats, and gates tuned for a class of work.

| Engineering | Go-to-Market | General Purpose |
|---|---|---|
| Software | Sales | Ideation |
| Data Pipeline | Marketing | Documentation |
| Migration | Customer Success | Project Management |
| Incident Response | Product Strategy | Executive Strategy |
| Compliance | Dev Evangelism | Training |
| Security Assessment | | |
| Quality Assurance | | |
| Hardware Dev | | |
| Game Dev | | |
| Library Dev | | |

Plus support studios for HR, Legal, Finance, Vendor Management, and more. See the full catalog at haikumethod.ai/studios.

The model

Studio > Stage > Unit > Bolt
  • Studio — the lifecycle template
  • Stage — a phase within the studio with its own hat sequence and review gate
  • Unit — a discrete piece of work with explicit completion criteria
  • Bolt — one iteration through the hat sequence on a unit

Studio is not the same as Stage. Unit is not the same as Bolt. The vocabulary is load-bearing — see the paper for the full model.
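One way to internalize the vocabulary is to see the hierarchy as nested types. This is a hypothetical TypeScript sketch: the type names mirror the README, but every field is an illustrative assumption, not the plugin's actual schema.

```typescript
// Hypothetical sketch of the Studio > Stage > Unit > Bolt vocabulary.
// Field names are assumptions for illustration, not the plugin's real schema.

type Hat = "planner" | "builder" | "verifier";
type Gate = "auto" | "ask" | "external" | "await";

interface Bolt {
  // one pass through the hat sequence on a unit
  hatsWorn: Hat[];
}

interface Unit {
  // discrete piece of work with explicit completion criteria
  description: string;
  completionCriteria: string[];
  bolts: Bolt[];
}

interface Stage {
  // phase with its own hat sequence and review gate
  name: string;
  hatSequence: Hat[];
  gate: Gate;
  units: Unit[];
}

interface Studio {
  // the lifecycle template
  name: string;
  stages: Stage[];
}

const software: Studio = {
  name: "Software",
  stages: [{
    name: "build",
    hatSequence: ["planner", "builder", "verifier"],
    gate: "ask",
    units: [{
      description: "wire the review gate",
      completionCriteria: ["tests pass"],
      bolts: [{ hatsWorn: ["planner", "builder", "verifier"] }],
    }],
  }],
};

console.log(software.stages[0].gate); // "ask"
```

Note how the nesting makes the distinctions concrete: a Studio contains Stages, a Stage contains Units, and a Unit accumulates Bolts as the hat sequence iterates over it.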

Why it's different

  • The work fails fast, not slow. Quality gates run after every stage. A broken build doesn't propagate downstream; it bounces back to the hat that caused it.
  • Context-reset proof. State lives on disk in .haiku/intents/<slug>/. Lose your conversation, restart Claude, swap models — the work picks up where it left off.
  • No magic, just file boundaries. Every artifact is a markdown file you can read, grep, diff, and commit. The plugin enforces who can write what; nothing else is hidden.
  • Adversarial review is the default. Every stage spawns review agents whose job is to find what's wrong. Findings become open feedback that has to be addressed (or explicitly rejected) before the gate opens.
  • Hats over agents. Roles are stage-scoped behavioural definitions, not separate agent processes. The same Claude session puts on the planner hat, then the builder hat, then the verifier hat — context flows through.
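The artifacts-as-files point above can be sketched in a few lines of Node/TypeScript. This mocks up an intent directory in a temp folder; the plan.md file name and checklist format are assumptions for illustration, not documented artifact names.

```typescript
// Illustrative only: a mock of the .haiku/intents/<slug>/ idea, showing that
// plain-markdown artifacts are readable with ordinary tooling. "plan.md" and
// the checklist format are assumptions, not the plugin's real artifact layout.
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

const intentDir = fs.mkdtempSync(path.join(os.tmpdir(), "haiku-intent-"));
fs.writeFileSync(
  path.join(intentDir, "plan.md"),
  "# Plan\n- [x] scaffold the intent\n- [ ] wire the verifier hat\n"
);

// Nothing hidden: read the artifact back like any other file on disk.
const plan = fs.readFileSync(path.join(intentDir, "plan.md"), "utf8");
const open = plan.split("\n").filter((line) => line.startsWith("- [ ]"));
console.log(`${open.length} open item(s)`); // "1 open item(s)"
```

Because the state is just files, the context-reset claim follows directly: a fresh session reads the same directory and continues from whatever the artifacts say.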

Review gates

Each stage's gate decides whether work advances:

  • auto — the harness advances automatically (low-risk, machine-verifiable work)
  • ask — local human approval via the review web UI
  • external — blocks until a PR/MR is approved on GitHub or GitLab
  • await — blocks until an external event fires (customer reply, pipeline finish, etc.)

Mix them per stage. Compound gates like [external, ask] let you choose at runtime.


License

Apache 2.0 — use it, fork it, ship it.