
@sndrgrdn/opencode-autoresearch v0.1.2

Autonomous experiment loop plugin for OpenCode - optimize code through iterative experimentation

Autoresearch Plugin for OpenCode

An OpenCode plugin that implements an autonomous keep/discard experiment loop for optimizing code through iterative experimentation.

Features

  • Experiment Tracking: Record and track experiments with metrics in JSONL format
  • Git Integration: Keep experiments as git commits, or discard uncommitted changes behind a confirmation guard
  • Metric Parsing: Automatically parse METRIC name=value output lines
  • Markdown Documentation: Auto-generated human-readable experiment logs
  • Checks Support: Optional validation via autoresearch.checks.sh
  • AI Skill: Included skill provides guided workflows and best practices

Installation

Local Development

Add to your OpenCode configuration:

{
  "plugin": ["file:///path/to/oc-autoresearch"]
}

From npm

{
  "plugin": ["@sndrgrdn/opencode-autoresearch"]
}

Using the Skill

Copy the skill to your OpenCode skills directory:

mkdir -p ~/.config/opencode/skills/autoresearch
cp node_modules/@sndrgrdn/opencode-autoresearch/skills/autoresearch/SKILL.md ~/.config/opencode/skills/autoresearch/

The skill provides guided workflows and best practices for the autoresearch loop.

Tools

init_experiment

Initialize a new autoresearch experiment session.

{
  "name": "optimize-parser",
  "metric_name": "parse_time",
  "metric_unit": "ms",
  "direction": "lower",
  "command": "node benchmark.js",
  "branch": "experiment/parser-opt",
  "files_in_scope": ["src/parser.js", "src/lexer.js"]
}

run_experiment

Execute the experiment command and capture metrics.

{
  "timeout_seconds": 600,
  "checks_timeout_seconds": 300
}

log_experiment

Log the experiment result together with the keep/discard decision.

{
  "run_id": "uuid-from-run",
  "commit": "abc123",
  "metric": 45.2,
  "status": "keep",
  "description": "Refactored parse loop to use iterator",
  "metrics": {
    "memory_mb": 128
  }
}

keep_experiment

Commit the current experiment changes.

{
  "commit_message": "perf(parser): optimize parse loop using iterator"
}

discard_experiment

Discard uncommitted changes (requires confirmation).

{
  "confirmation": "DISCARD"
}

autoresearch_status

Get current session status including metrics and counts.

Workflow

  1. Initialize: init_experiment creates autoresearch.jsonl and autoresearch.md
  2. Baseline: run_experiment to establish baseline metrics
  3. Log: log_experiment to record the baseline
  4. Iterate:
    • Edit code
    • run_experiment to measure
    • log_experiment to record decision
    • keep_experiment or discard_experiment
  5. Status: autoresearch_status to review progress

Metric Format

Experiment commands should print metric lines to standard output in this format:

METRIC parse_time=45.2
METRIC memory_mb=128
METRIC throughput=1000
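Lines without the METRIC prefix are ignored, so the plugin's parsing can be previewed by filtering a command's output for that prefix. A minimal sketch (the sample output is illustrative, not produced by the plugin):

```shell
# Extract the name=value pairs the plugin would pick up from benchmark output.
output=$'warming up...\nMETRIC parse_time=45.2\nMETRIC memory_mb=128\ndone'
printf '%s\n' "$output" | awk '/^METRIC /{ print $2 }'
```

This prints `parse_time=45.2` and `memory_mb=128`, discarding the surrounding noise.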

Files Created

  • autoresearch.jsonl - Append-only event log
  • autoresearch.md - Human-readable experiment notes
  • autoresearch.checks.sh - Optional validation script

Development

# Install dependencies
bun install

# Type check
bun run typecheck

# Build
bun run build

# Smoke test
bun run smoke

# Watch mode
bun run dev

License

MIT