
hive-agent

v0.5.5

Published

Hive — Lightweight Feature Tracker for AI Coding Agents. Built on Pi.

Downloads

1,605

Readme

🐝 Hive

Lightweight Feature Tracker for AI Coding Agents

No database, no MCP — just features.md as the single source of truth.

Hive breaks your project into independent features, then runs AI agents to implement them — sequentially or in parallel via zero-conflict git worktrees.

npm install -g hive-agent
hive setup
hive spec "Build a REST API with auth and CRUD"
hive run

How It Works

hive spec "description"     →  features.md (scout → planner → reviewer agents)
hive run                    →  implements features one by one
hive worktree-split --count 3 --fork  →  N worktrees, each with a PROMPT.md + agent
hive worktree-merge wt1     →  merge completed worktree back

The features.md Format

# Features

## Feature 1: User Authentication
- Description: JWT-based login/register endpoints
- Dependencies: none
- Status: pending

## Feature 2: User CRUD
- Description: REST endpoints for user management
- Dependencies: Feature 1
- Status: pending

Status values: pending → in_progress → done

Dependencies are respected — blocked features won't run until their deps are done.
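The gating rule is simple enough to sketch. The following Python sketch is illustrative only (Hive itself is TypeScript, and its actual parser may differ): it reads the Dependencies and Status fields from a features.md and reports which pending features are unblocked.

```python
import re

def parse_features(md: str) -> dict:
    """Parse a features.md into {number: {"deps": [...], "status": ...}}."""
    features = {}
    for block in re.split(r"^## ", md, flags=re.M)[1:]:
        num = int(re.match(r"Feature (\d+)", block).group(1))
        deps = re.search(r"Dependencies: (.+)", block).group(1)
        status = re.search(r"Status: (\w+)", block).group(1)
        # "none" yields no numbers; "Feature 1" yields [1]
        features[num] = {"deps": [int(n) for n in re.findall(r"\d+", deps)],
                         "status": status}
    return features

def ready(features: dict) -> list:
    """A pending feature is ready when every one of its dependencies is done."""
    return [n for n, f in features.items()
            if f["status"] == "pending"
            and all(features[d]["status"] == "done" for d in f["deps"])]

md = """# Features

## Feature 1: User Authentication
- Dependencies: none
- Status: done

## Feature 2: User CRUD
- Dependencies: Feature 1
- Status: pending

## Feature 3: Audit Log
- Dependencies: Feature 2
- Status: pending
"""
print(ready(parse_features(md)))  # → [2]
```

Feature 2 is ready because its only dependency is done; Feature 3 stays blocked until Feature 2 is.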


Installation

1. Install Hive

npm install -g hive-agent

Requires Node.js >= 18 and git.

2. Install Pi (required for LLM commands)

Hive uses Pi as its AI engine. Pi is the agent that actually analyzes your code, generates features, and implements them.

npm install -g @mariozechner/pi-coding-agent

Without Pi, only these commands work (no AI, pure TypeScript):

| Command | What it does |
|---------|-------------|
| hive status | Show feature progress |
| hive next | Show next feature to implement |
| hive setup | Configure API key |
| hive doctor | Check requirements |
| hive worktree-merge | Merge a worktree back |

With Pi + API key, the full workflow is available:

| Command | What it does |
|---------|-------------|
| hive map-codebase | Scout agent analyzes your codebase |
| hive to-features-md | Planner agent generates features.md |
| hive spec "description" | Full chain: scout -> planner -> reviewer |
| hive run | Builder agent implements features one by one |
| hive q "question" | Scout agent answers questions about the code |
| hive worktree-split | Analyzes territories, creates worktrees, generates a PROMPT.md per worktree with agent instructions |

3. Configure your API key

hive setup    # interactive — picks provider, saves API key to ~/.hive/config.json
hive doctor   # check all requirements

Supported providers:

| Provider | Env Var | Notes |
|----------|---------|-------|
| Anthropic (recommended) | ANTHROPIC_API_KEY | Claude models |
| OpenAI | OPENAI_API_KEY | GPT models |
| Google | GOOGLE_API_KEY | Gemini models |
| Ollama (local, free) | OLLAMA_HOST | Runs locally, no API key needed |

You can set the env var directly or use hive setup to save it to ~/.hive/config.json.
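The lookup order implied here (environment variable first, then the config file) can be sketched as follows. This is a Python illustration of the precedence described above, not Hive's actual TypeScript implementation:

```python
import json
import os
from pathlib import Path
from typing import Optional

def resolve_key(name: str,
                config_path: Path = Path.home() / ".hive" / "config.json") -> Optional[str]:
    """Return the API key: the env var wins, then ~/.hive/config.json, else None."""
    if name in os.environ:
        return os.environ[name]
    if config_path.exists():
        return json.loads(config_path.read_text()).get(name)
    return None
```

With this precedence, an exported ANTHROPIC_API_KEY overrides whatever hive setup saved to the config file.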

4. Optional dependencies

| Tool | Required for | Install |
|------|-------------|---------|
| tmux | --fork tmux (parallel agents in any terminal) | brew install tmux (macOS) / sudo apt install tmux (Ubuntu) |
| Warp | --fork warp (parallel agents in Warp panes) | Download from warp.dev |
| Docker Desktop | --sandbox (isolated microVM per agent) | Download from docker.com (requires v4.58+ with sandbox plugin) |


Complete Walkthrough

A real example: you have an Express API project and want to add search, caching, rate limiting and API docs.

Step 1. Enter your project and configure

cd ~/projects/my-api
hive setup          # one-time: picks provider, saves API key

Step 2. Map the codebase

hive map-codebase

The scout agent analyzes your existing code structure — files, patterns, frameworks, conventions.

Output:
  .hive/codebase-map.json   <- full file tree + dependency graph
  .hive/summary.md          <- human-readable summary

Step 3. Generate features.md

hive to-features-md

The planner agent reads the codebase map and generates features.md considering what already exists — reusing patterns, avoiding duplication.

Step 4. Review and adjust

Open features.md and edit manually. Mark what's already done, fix dependencies, add detail:

# Features

## 1. [DONE] Initial Setup
Category: Infra
Already implemented — Express server, DB connection, basic middleware.

## 2. Search Endpoint
Category: API
Dependencies: 1
Full-text search with Elasticsearch.

### Steps
1. Configure Elasticsearch client
2. Create GET /search endpoint
3. Index existing data

## 3. Cache Layer
Category: Infra
Dependencies: 2
Add Redis cache for frequent queries.

## 4. Rate Limiting
Category: API
Dependencies: 1
Add rate limiting middleware to API endpoints.

## 5. API Documentation
Category: Docs
Dependencies: 2, 4
Generate OpenAPI docs with Swagger.

Step 5. Check status

$ hive status

Hive Status
==============================
Progress: [####----------------] 1/5 (20%)
  Done:        1
  In Progress: 0
  Ready:       2
  Blocked:     2

Ready (next up):
  Feature 2: Search Endpoint
  Feature 4: Rate Limiting

Blocked:
  Feature 3: Cache Layer (needs: Feature 2)
  Feature 5: API Documentation (needs: Feature 2, Feature 4)
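The progress bar in that output is easy to reproduce. A small illustrative sketch of the rendering (not Hive's actual code):

```python
def render_bar(done: int, total: int, width: int = 20) -> str:
    """Render a fixed-width bar like [####----------------] 1/5 (20%)."""
    filled = round(width * done / total)
    pct = round(100 * done / total)
    return f"[{'#' * filled}{'-' * (width - filled)}] {done}/{total} ({pct}%)"

print(render_bar(1, 5))  # [####----------------] 1/5 (20%)
```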

Step 6. Split into worktrees and launch agents

This is where Hive shines. You choose how to run the parallel agents:

hive worktree-split --count 2

Hive analyzes feature dependencies and file ownership, then creates the split:

Territory Map:
+----------+-----------------------------+--------------------------+
| Worktree | Features                    | Owned Files              |
+----------+-----------------------------+--------------------------+
| wt1      | 2: Search, 3: Cache         | src/search/*, src/cache/*|
| wt2      | 4: Rate Limit, 5: API Docs  | src/middleware/*, docs/* |
+----------+-----------------------------+--------------------------+

Shared files:
  package.json -> append_only (both can add deps, neither removes)
  src/app.ts   -> deferred (neither modifies; create ROUTES_TO_REGISTER.md)
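The append_only strategy for package.json amounts to a union of dependency maps at merge time: each worktree may add entries, neither removes any. A minimal Python sketch of that idea (the dependencies field matches npm's package.json; the merge logic itself is an assumption for illustration, not Hive's implementation):

```python
def merge_append_only(base: dict, ours: dict, theirs: dict) -> dict:
    """Union the 'dependencies' maps: each side may add, neither removes.

    If both sides added the same new package with different versions,
    'ours' wins — an arbitrary tie-break for this sketch."""
    merged = dict(base.get("dependencies", {}))
    for side in (theirs, ours):
        for pkg, ver in side.get("dependencies", {}).items():
            merged[pkg] = ver
    return {**base, "dependencies": merged}

base = {"name": "my-api", "dependencies": {"express": "^4.19.0"}}
wt1 = {"name": "my-api", "dependencies": {"express": "^4.19.0", "redis": "^4.6.0"}}
wt2 = {"name": "my-api", "dependencies": {"express": "^4.19.0", "swagger-ui-express": "^5.0.0"}}
print(merge_append_only(base, wt1, wt2)["dependencies"])  # union of all three maps
```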

Features marked as [DONE] are skipped.

For each worktree, Hive generates a PROMPT.md in its root containing:

  • Which features to implement (in order)
  • Owned files/dirs — files this agent CAN create and edit
  • Forbidden files — files owned by other worktrees (DO NOT touch)
  • Shared file strategies — how to handle package.json, type aggregators, etc.
  • Integration contracts — stubs for cross-worktree dependencies
  • Project context (tech stack, conventions, testing setup)

The agent just needs to read PROMPT.md and follow the instructions. No extra context needed.

The resulting directory structure:

~/projects/
  my-api/                  <- original project (main branch)
  |-- features.md
  |-- territory_map.json
  my-api-wt1/              <- worktree 1 (branch: my-api-wt1)
  |-- PROMPT.md            <- agent instructions for Features 2, 3
  |-- src/search/...
  my-api-wt2/              <- worktree 2 (branch: my-api-wt2)
  |-- PROMPT.md            <- agent instructions for Features 4, 5
  |-- src/middleware/...

Now you need to launch agents in each worktree. There are 4 modes:


Mode A: Manual (no flags — works everywhere)

hive worktree-split --count 2

Worktrees are created but no agents are launched. You open terminals yourself:

+-----------------------------------------------------+
| Terminal 1                                          |
| $ cd ~/projects/my-api-wt1                          |
| $ claude    # or pi, cursor, etc.                   |
| > "Read PROMPT.md and follow the instructions"      |
+-----------------------------------------------------+
| Terminal 2                                          |
| $ cd ~/projects/my-api-wt2                          |
| $ claude                                            |
| > "Read PROMPT.md and follow the instructions"      |
+-----------------------------------------------------+

Mode B: Warp Terminal (--fork or --fork warp)

hive worktree-split --count 2 --fork

Opens a new Warp window with 2 split panes. Each pane auto-runs the agent in its worktree:

+- Warp Terminal --------------------------------------+
| +- Pane 1 (my-api-wt1) ----------------------------+ |
| | pi 'Read PROMPT.md and implement all assigned    | |
| | features...'                                     | |
| | > Implementing Feature 2: Search Endpoint...     | |
| +- Pane 2 (my-api-wt2) ----------------------------+ |
| | pi 'Read PROMPT.md and implement all assigned    | |
| | features...'                                     | |
| | > Implementing Feature 4: Rate Limiting...       | |
| +--------------------------------------------------+ |
+------------------------------------------------------+

Requires Warp Terminal. You just watch.


Mode C: tmux (--fork tmux — works in any terminal)

hive worktree-split --count 2 --fork tmux

Creates a tmux session with 2 panes in a tiled layout:

+- tmux session: hive ---------------------------------+
| +- Pane 0 (my-api-wt1) ----------------------------+ |
| | pi 'Read PROMPT.md and implement all assigned    | |
| | features...'                                     | |
| +- Pane 1 (my-api-wt2) ----------------------------+ |
| | pi 'Read PROMPT.md and implement all assigned    | |
| | features...'                                     | |
| +--------------------------------------------------+ |
+------------------------------------------------------+

Shortcuts:
  Ctrl+B, arrows   navigate panes
  Ctrl+B, z         zoom/unzoom pane
  Ctrl+B, d         detach (agents keep running in background)

Requires tmux (brew install tmux on macOS, sudo apt install tmux on Ubuntu).


Mode D: Docker Sandbox (--sandbox — full isolation)

hive worktree-split --count 2 --sandbox --fork warp

Each agent runs inside an isolated Docker microVM. Agents can't access your host filesystem beyond their worktree. Implies YOLO mode (no permission prompts).

+- Warp Terminal --------------------------------------+
| +- Pane 1: Docker Sandbox (my-api-wt1) ------------+ |
| | docker sandbox run --name my-api-wt1             | |
| |   -w "/Users/you/projects/my-api-wt1" claude     | |
| | > Implementing Feature 2: Search Endpoint...     | |
| +- Pane 2: Docker Sandbox (my-api-wt2) ------------+ |
| | docker sandbox run --name my-api-wt2             | |
| |   -w "/Users/you/projects/my-api-wt2" claude     | |
| | > Implementing Feature 4: Rate Limiting...       | |
| +--------------------------------------------------+ |
+------------------------------------------------------+

--sandbox combines with any fork mode:

hive worktree-split --count 3 --sandbox --fork tmux    # sandbox + tmux
hive worktree-split --count 3 --sandbox --fork warp    # sandbox + Warp
hive worktree-split --count 3 --sandbox                # sandbox + manual

Requires Docker Desktop 4.58+ with the sandbox plugin.


Mode comparison

| | Manual | --fork warp | --fork tmux | --sandbox |
|---|---|---|---|---|
| Auto-launches agents | no | yes | yes | yes |
| Requires extra install | nothing | Warp | tmux | Docker Desktop |
| Agents isolated | no | no | no | yes (microVM) |
| Background execution | no | no | yes (detach) | no |
| Works in any terminal | yes | no | yes | yes |
| Combinable with sandbox | -- | yes | yes | -- |

Step 7. Merge back

After agents finish, merge worktrees one at a time following the recommended order:

hive worktree-merge my-api-wt1    # merges branch, deletes worktree + branch
hive worktree-merge my-api-wt2    # if sandbox was used, also removes container
hive status                       # should show 100%

Hive Status
==============================
Progress: [####################] 5/5 (100%)
  Done:        5
  In Progress: 0
  Ready:       0
  Blocked:     0

Worktree Splitting

hive worktree-split creates parallel git worktrees with zero-conflict territory mapping. Each worktree gets:

  • A PROMPT.md with its assigned features, owned files/dirs, and boundaries
  • Exclusive file ownership — no two agents edit the same files
  • Shared file strategies (e.g., package.json → append-only)

Flags

| Flag | Default | Description |
|------|---------|-------------|
| --count N | 3 | Number of worktrees (1–10) |
| --fork [warp\|tmux] | off | Auto-launch agents. Default: warp. Use tmux for any terminal |
| --sandbox | off | Run each agent in Docker Sandbox (isolated microVM). Combinable with --fork |
| --auto-approve | off | Skip all confirmation prompts |
| --features "1,2;3,4" | auto | Explicit feature grouping (semicolon-separated groups) |

Launch Modes

Warp Terminal (--fork or --fork warp): Creates a Warp Launch Configuration with N split panes and opens it automatically.

tmux (--fork tmux): Creates a tmux session with N panes in a tiled layout. Works in any terminal.

hive worktree-split --count 3 --fork tmux

Shortcuts: Ctrl+B, arrows (navigate), Ctrl+B, z (zoom), Ctrl+B, d (detach — agents keep running).

Docker Sandbox (--sandbox): Each agent runs in an isolated microVM. Implies YOLO mode (no --auto-approve needed). Requires Docker Desktop 4.58+ with the sandbox plugin.

hive worktree-split --count 3 --sandbox --fork tmux

Merge Order

The territory map includes a recommended merge order. Merge worktrees one at a time:

hive worktree-merge myproject-wt1    # merges, cleans up worktree + branch + sandbox
hive worktree-merge myproject-wt2
hive worktree-merge myproject-wt3

If sandbox was used, worktree-merge automatically removes the sandbox container. If conflicts arise, Hive aborts the merge and shows manual resolution steps.


Configuration

~/.hive/config.json

Created by hive setup. Stores API keys so you don't need env vars:

{
  "ANTHROPIC_API_KEY": "sk-ant-..."
}

territory_map.json

Created by hive worktree-split in your project root. Maps worktrees to features and file territories. Used by hive worktree-merge for merge ordering.
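The exact schema is internal to Hive, but based on what this README describes (worktree-to-feature mapping, file territories, shared-file strategies, merge order), a territory_map.json might look roughly like this hypothetical fragment — field names are assumptions for illustration:

```json
{
  "worktrees": [
    { "name": "my-api-wt1", "features": [2, 3], "owned": ["src/search/", "src/cache/"] },
    { "name": "my-api-wt2", "features": [4, 5], "owned": ["src/middleware/", "docs/"] }
  ],
  "shared": { "package.json": "append_only", "src/app.ts": "deferred" },
  "merge_order": ["my-api-wt1", "my-api-wt2"]
}
```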


License

MIT