
opencode-autoresearch

v3.18.1

Autonomous recursive self-improvement engine for OpenCode and Hermes Agent. Subagent-first iteration loop with mechanical verification.

Auto Research

┌──────────────────────────────────────────────┐
│  ITERATION MODEL        Subagent-first        │
│  ORCHESTRATION          Standing pool         │
│  VERIFICATION           Mechanical metrics    │
│  PERSISTENCE            State + Memory        │
│  META-LEARNING          Strategy adaptation   │
└──────────────────────────────────────────────┘

What It Does

Auto Research is a subagent-first autonomous iteration engine that runs structured improve-verify loops inside OpenCode or Hermes Agent. Unlike simple task runners, it maintains a pool of specialized subagents, persists learnings across iterations, and can run recursive self-improvement loops on its own codebase.

  • Plans experiments from a measurable goal
  • Modifies one focused change per iteration
  • Verifies mechanically — never on intuition alone
  • Keeps or discards based on strict metric improvement
  • Learns from patterns across iterations
  • Repeats until the stop condition is met
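
The keep/discard rule above can be sketched as follows. This is a minimal illustration, not the engine's actual code; `shouldKeep` and `Direction` are hypothetical names chosen to mirror the `--direction` flag:

```typescript
// Minimal sketch of the keep/discard rule: a candidate change survives
// only on strict improvement in the goal metric's direction.
// "shouldKeep" and "Direction" are illustrative names, not the engine's API.
type Direction = "higher" | "lower";

function shouldKeep(baseline: number, candidate: number, direction: Direction): boolean {
  // Ties count as no improvement and are discarded.
  return direction === "higher" ? candidate > baseline : candidate < baseline;
}

console.log(shouldKeep(72.4, 73.1, "higher")); // true: strict gain
console.log(shouldKeep(72.4, 72.4, "higher")); // false: a tie is discarded
```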

The Core Loop

flowchart LR
    A["Plan"] --> B["Modify"]
    B --> C["Verify"]
    C --> D{"Keep?"}
    D -->|yes| E["Learn"]
    D -->|no| B
    E --> F["Memory"]
    F --> A

flowchart TD
    A["Goal + Metric + Verify"] --> B["Baseline"]
    B --> C["Pool Init"]
    C --> D["Iteration N"]
    D --> E["Subagent Context"]
    E --> F["Focused Change"]
    F --> G["Mechanical Verify"]
    G --> H{"Strict Improvement?"}
    H -->|yes| I["Keep + Record"]
    H -->|no| J["Discard + Reset"]
    I --> K{"Stop Condition?"}
    J --> K
    K -->|no| D
    K -->|yes| L["Report + Memory"]

The Self-Improvement Loop

Auto Research can run on itself. The recursive loop adds a meta-orchestrator that runs child improvement loops, measures their outcomes, updates memory and strategy on success, and adapts its approach on failure:

flowchart TD
    A["Meta-Goal: Improve AutoResearch"] --> B["Run Child Loop"]
    B --> C["Measure: Tests pass? Docs improved?"]
    C --> D{"Child Success?"}
    D -->|yes| E["Update Memory + Strategy"]
    D -->|no| F["Adapt Approach"]
    E --> G["Persist Learnings"]
    F --> B
    G --> H["Meta-Report"]
    H --> I{"Meta-Stop?"}
    I -->|no| B
    I -->|yes| J["Archive Run"]

See skills/autoresearch/references/self-improve-loop.md for the full recursive loop specification.

Installation

OpenCode

For OpenCode, paste this one-line install prompt into your agent. This URL is pinned to the immutable v3.18.1 release instructions:

Fetch and follow instructions from https://raw.githubusercontent.com/Maleick/AutoResearch/refs/tags/v3.18.1/INSTALL.md

Recommended plugin install in opencode.json:

{
  "plugin": ["opencode-autoresearch@latest"]
}

For reproducible/pinned installs, see INSTALL.md.

Restart OpenCode, then run the setup wizard:

/autoresearch

Hermes Agent

# 1. Clone AutoResearch
git clone https://github.com/Maleick/AutoResearch.git
cd AutoResearch
npm install

# 2. Install the Hermes skill
mkdir -p ~/.hermes/skills/software-development/autoresearch
cp skills/hermes/autoresearch-prompt.md ~/.hermes/skills/software-development/autoresearch/SKILL.md
cp skills/hermes/INTEGRATION.md ~/.hermes/skills/software-development/autoresearch/REFERENCES.md

# 3. Create a cronjob
hermes cron create \
  --name "autoresearch-loop" \
  --workdir ~/projects/AutoResearch \
  --skill autoresearch-hermes \
  "every 15m" \
  "Run AutoResearch iteration loop. Detect phase from .autoresearch/state.json and execute one phase. Approved verify command: \"npm run test:coverage\". Approved guard command: \"npm run typecheck\"."

See skills/hermes/README.md for full Hermes setup, troubleshooting, and command mapping.

npm CLI (both runtimes)

Global install path:

npm install -g opencode-autoresearch
autoresearch doctor
autoresearch --version

One-time package runner path:

npx opencode-autoresearch doctor

See INSTALL.md for prerequisites, verification, updating, and troubleshooting.

Quick Start

OpenCode

# 1. Add the plugin to opencode.json
# { "plugin": ["opencode-autoresearch@latest"] }

# 2. Restart OpenCode

# 3. Navigate to your project
cd ~/Projects/my-project

# 4. Start Auto Research in OpenCode
/autoresearch

Hermes Agent

# 1. Ensure the skill is installed (see Installation above)

# 2. Initialize state from a trusted shell before enabling unattended cron
autoresearch init \
  --goal "Improve test coverage" \
  --metric "coverage_pct" \
  --direction "higher" \
  --verify "npm run test:coverage" \
  --guard "npm run typecheck" \
  --iterations 20 \
  --mode background

# 3. Start the cronjob
hermes cron resume autoresearch-loop

# 4. Check progress
jq . .autoresearch/state.json

Runtime Surfaces

| Surface | Entry point |
| --- | --- |
| OpenCode | /autoresearch, /autoresearch:plan, /autoresearch:debug, /autoresearch:fix, /autoresearch:learn, /autoresearch:predict, /autoresearch:scenario, /autoresearch:security, /autoresearch:ship |
| Hermes | Cronjob autoresearch-loop (see skills/hermes/README.md) |

Commands

| Command | Purpose | Hermes Equivalent |
| --- | --- | --- |
| /autoresearch | Default improve-verify loop | Cron runs iteration loop |
| /autoresearch:plan | Planning workflow | Subagent task: plan experiments |
| /autoresearch:debug | Debugging workflow | Subagent task: debug failures |
| /autoresearch:fix | Fix workflow | Subagent task: fix issues |
| /autoresearch:learn | Learning workflow | Memory tool + pattern analysis |
| /autoresearch:predict | Prediction workflow | Subagent task: predict outcomes |
| /autoresearch:scenario | Scenario expansion | Subagent task: expand scenarios |
| /autoresearch:security | Security review | Subagent task: security audit |
| /autoresearch:ship | Ship-readiness workflow | Subagent task: ship checks |

CLI Commands

| Command | Purpose |
| --- | --- |
| autoresearch init | Initialize a run |
| autoresearch goal init | Create a GOAL.md goal definition file (interactive or from flags) |
| autoresearch wizard | Generate setup summary |
| autoresearch status | Print run status |
| autoresearch explain | Human-readable run state |
| autoresearch history | Show recent iteration log |
| autoresearch scores | Show score trend history |
| autoresearch badge | Generate score/component badge markdown + SVG |
| autoresearch config | Show runtime configuration |
| autoresearch report | Generate markdown report |
| autoresearch summary | Aggregate stats across runs |
| autoresearch suggest | Suggest next goal from memory |
| autoresearch launch | Launch background run |
| autoresearch stop | Request stop |
| autoresearch resume | Resume background run |
| autoresearch complete | Mark run complete |
| autoresearch record | Record iteration result |
| autoresearch export | Export run data (json/md) |
| autoresearch completion | Generate shell completions |
| autoresearch doctor | Verify installation |
| autoresearch help | Show usage |

autoresearch goal init

Create a GOAL.md goal definition file. Supports interactive wizard, CLI flags, and stdin JSON.

Interactive (TTY):

autoresearch goal init

From flags (non-interactive):

autoresearch goal init \
  --goal "Reduce test failures" \
  --metric failures \
  --direction lower \
  --verify "npm test" \
  --guard "npm run lint"

From a preset template:

autoresearch goal init --template performance   # benchmark_ms / lower / npm run bench
autoresearch goal init --template quality       # test_failures / lower / npm test
autoresearch goal init --template coverage      # coverage_pct / higher / npm run coverage

From stdin JSON (CI / scripted use):

echo '{"goal":"reduce latency","metric":"p99_ms","direction":"lower","verify":"npm run bench"}' \
  | autoresearch goal init

Dry-run preview:

autoresearch goal init --template performance --dry-run

Architecture

flowchart LR
    A["OpenCode /autoresearch"] --> B["CLI"]
    H["Hermes Cronjob"] --> B
    B --> C["Run Manager"]
    C --> D["State JSON"]
    C --> E["Results TSV"]
    C --> F["Subagent Pool"]
    F --> G["Orchestrator"]
    F --> I["Scout"]
    F --> J["Analyst"]
    F --> K["Verifier"]
    F --> L["Synthesizer"]

Runtime Artifacts

| Artifact | Purpose |
| --- | --- |
| .autoresearch/state.json | Checkpoint state for the current run |
| autoresearch-results.tsv | Iteration log |
| autoresearch-report.md | End-of-run report |
| autoresearch-memory.md | Reusable memory for later runs |
| .autoresearch/launch.json | Background launch manifest |
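
For illustration, a checkpoint might carry fields like these. This shape is hypothetical, assembled from the init flags shown elsewhere in this README; the real schema may differ, so inspect your own .autoresearch/state.json:

```json
{
  "goal": "Improve test coverage",
  "metric": "coverage_pct",
  "direction": "higher",
  "verify": "npm run test:coverage",
  "guard": "npm run typecheck",
  "iterations": 20,
  "mode": "background"
}
```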

Examples

See docs/examples/README.md for reproducible run examples with complete state, results, and report artifacts.

Self-Improvement Mode

Run Auto Research on its own codebase:

# Initialize a recursive self-improvement run
autoresearch init \
  --goal "Improve test coverage and documentation" \
  --metric "coverage_pct" \
  --direction "higher" \
  --verify "npm run test:coverage" \
  --guard "npm run typecheck" \
  --mode "background" \
  --iterations "20"

# Check status
autoresearch status

# Resume if stopped
autoresearch resume

The self-improvement loop:

  1. Baselines current state (tests, docs, metrics)
  2. Dispatches subagents to identify improvement opportunities
  3. Makes one focused change per iteration
  4. Verifies mechanically (tests, typechecks, lint)
  5. Keeps strict improvements, discards regressions
  6. Records patterns to autoresearch-memory.md
  7. Adapts strategy when repeated discards occur
  8. Continues until iteration cap or goal met
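
Step 7 above can be sketched like this. It is an illustrative fragment only; `DISCARD_LIMIT` and `nextStrategy` are assumed names, and the real adaptation logic lives inside the engine:

```typescript
// Illustrative sketch of step 7: after several consecutive discarded
// iterations, the orchestrator abandons the current approach.
// DISCARD_LIMIT and "nextStrategy" are hypothetical, not the engine's API.
const DISCARD_LIMIT = 3;

type Outcome = "keep" | "discard";

function nextStrategy(history: Outcome[], current: string): string {
  const recent = history.slice(-DISCARD_LIMIT);
  const stuck = recent.length === DISCARD_LIMIT && recent.every((r) => r === "discard");
  return stuck ? "explore-alternative" : current;
}

console.log(nextStrategy(["keep", "discard", "discard", "discard"], "focused-fix"));
// "explore-alternative": three discards in a row trigger adaptation
```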

Subagent Pool

The standing pool provides specialized roles reused across iterations:

| Role | Purpose |
| --- | --- |
| orchestrator | Owns goal, state, and keep/discard decisions |
| scout | Gathers context and surfaces opportunities |
| analyst | Challenges hypotheses and identifies risks |
| verifier | Runs mechanical verification independently |
| synthesizer | Compiles findings into next iteration plan |
| security_reviewer | Security-focused review variant |
| debugger | Debug workflow specialization |
| release_guard | Ship-readiness verification |
| research_tracker | Pattern tracking across iterations |

Runtime Comparison

| Feature | OpenCode | Hermes |
| --- | --- | --- |
| Entry | /autoresearch slash command | Cronjob or delegate_task |
| Subagents | Standing pool (unlimited) | Batch via delegate_task (max 3) |
| Real-time | Yes | 15-minute cron intervals |
| Slash commands | 8 variants | Separate cron jobs or tasks |
| State | .autoresearch/state.json | Same file format |
| Memory | File-based (autoresearch-memory.md) | memory tool + file |
| Background | autoresearch launch | Native cron |
| Resume | autoresearch resume | Cron continues automatically |

Development

npm run typecheck   # TypeScript strict checks
npm run build       # Compile TypeScript to dist/
npm run test        # Run test suite
npm pack --dry-run  # Preview shipped package contents

Repository Layout

src/                           # TypeScript source (runtime helpers, CLI, subagent pool)
dist/                          # Compiled JavaScript output
commands/                      # OpenCode command surfaces
skills/autoresearch/           # OpenCode skill bundle with references
  references/                  # Workflow and runtime references
    core-principles.md         # Loop discipline
    loop-workflow.md           # Main iteration workflow
    subagent-orchestration.md  # Pool management
    state-management.md        # State semantics
    self-improve-loop.md       # Recursive self-improvement
skills/hermes/                 # Hermes Agent skill bundle
  README.md                    # Hermes setup and usage
  INTEGRATION.md               # Architecture and command mapping
  autoresearch-prompt.md       # Cron prompt template
hooks/                         # Shell hooks for session lifecycle
docs/                          # Install and architecture docs
wiki/                          # GitHub wiki pages
.autoresearch/                 # Runtime state directory
.opencode-plugin/              # Plugin manifest

Notes

  • OpenCode users: install via opencode.json plugin array or npm install -g opencode-autoresearch.
  • Hermes Agent users: install via skill files in skills/hermes/ and create a cronjob.
  • The CLI uses Node.js ESM modules.
  • Self-improvement loops require --mode background for long-running unattended operation.
  • Memory files (autoresearch-memory.md) are portable across runs and repositories.

License

MIT — See LICENSE for details.