
hone-cli

v0.6.0

Transparent runtime prompt & model optimizer proxy for Claude Code and Anthropic-compatible clients.

Hone

Hone is a local runtime, eval, and policy layer for AI coding agents. It keeps the inline Claude Code proxy as the core workflow, then adds local traces, eval suites, a lightweight Studio, and MCP tools around that runtime.

Install from npm:

npm install -g hone-cli

The package name is hone-cli (the unscoped hone belongs to an unrelated DOM library). The CLI binary is just hone.

Point Claude Code at the proxy:

ANTHROPIC_BASE_URL=http://127.0.0.1:47823

Claude Code keeps working the way you are used to. Hone sits between it and api.anthropic.com, classifies prompts locally, rewrites ambiguous requests when useful, strips legacy scaffolding, applies model-specific API hygiene, places cache breakpoints, and forwards responses as raw bytes.

Why this exists

Prompt Optimizer-style tools are useful for manual prompt work: paste a prompt, optimize, compare, copy it back. Hone’s bet is different: the most valuable prompt optimization happens in the hot path, where developers actually ask coding agents to do work.

Hone is built for that loop:

  1. Inline proxy for Claude Code and Anthropic-compatible clients.
  2. Local Ollama-based classifier/optimizer; no prompt data leaves your machine for optimization.
  3. Per-model profiles for effort, thinking, cache, and forbidden API parameters.
  4. Configurable local traces and evals so improvements are inspectable, not vibes.
  5. Static local Studio and MCP surface for manual workflows without turning the proxy into a SaaS app.

Fail-open is load-bearing. If profile loading, classification, optimization, logging, or tracing fails, Hone forwards the original request.
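The fail-open behavior can be sketched in a few lines. This is a minimal illustration, not Hone's actual implementation; the helper names (`optimize`, `forward`) and the `Req` shape are invented for the example:

```typescript
// Minimal fail-open sketch. If any pipeline stage throws, the original
// request is forwarded untouched instead of failing the call.
type Req = { body: string };

function handleFailOpen(
  req: Req,
  optimize: (r: Req) => Req,
  forward: (r: Req) => string
): string {
  let outgoing = req;
  try {
    // Classification, rewriting, API hygiene, and cache placement all live here.
    outgoing = optimize(req);
  } catch {
    // Fail open: never block the request because the optimizer broke.
    outgoing = req;
  }
  return forward(outgoing);
}
```

The design choice is that the proxy must never be the reason a request fails: a broken optimizer degrades to a plain passthrough.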

Quickstart

Prerequisites: Node 20 or newer, Ollama running locally with an instruction-following model.

npm install -g hone-cli

hone init
hone start

Zero-install alternative:

npx hone-cli init
npx hone-cli start

In another terminal:

export ANTHROPIC_BASE_URL=http://127.0.0.1:47823
claude

hone init checks Ollama, lets you choose a local model, writes .env, saves trace-retention settings to ~/.hone/config.json, offers to add the shell export, and registers the bundled /raw Claude Code plugin.

From source

git clone https://github.com/Haydn-opti/Hone.git
cd Hone
npm install
npm run build
node dist/index.js start

What You Get

Runtime Optimization

  • Legacy scaffolding like “think step by step” is stripped before the target model sees it.
  • Short vague prompts like “fix the bug” get deterministic clarification instead of slipping through the short-prompt fast path.
  • Clear short prompts stay fast and skip the local model round trip.
  • /raw is a true bypass: Hone strips the token and skips all mutation for that request.
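The short-prompt routing above can be sketched as a simple gate. The length threshold and the vagueness pattern here are invented for illustration and are not Hone's actual heuristics:

```typescript
// Illustrative short-prompt gate: short clear prompts skip the local model
// round trip; short vague ones get routed to deterministic clarification.
// The regex and the 80-character cutoff are made-up stand-ins.
const vague = /^(fix|do|make|improve)\b.{0,20}$/i;

function shortPromptRoute(prompt: string): "fast" | "clarify" | "optimize" {
  if (prompt.length > 80) return "optimize"; // long prompts go to the optimizer
  return vague.test(prompt.trim()) ? "clarify" : "fast";
}
```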

Profiles And API Hygiene

Hone ships model profiles for Opus 4.7, Sonnet 4.6, and Haiku 4.5. Requests auto-resolve the matching profile from body.model, including date-suffixed Claude model IDs.

Profiles control:

  • Allowed effort levels and optional effort routing.
  • Forbidden request params to strip before Anthropic rejects them.
  • Thinking settings.
  • Cache breakpoint placement.
  • Local optimizer sampling and ambiguity thresholds.
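Profile resolution from `body.model` can be pictured as a base-name lookup with a date-suffix fallback. The profile table and model IDs below are illustrative placeholders, not Hone's shipped profiles:

```typescript
// Sketch of per-model profile resolution: a date-suffixed Claude model ID
// resolves to the same profile as its base name. Profile contents are fake.
const profiles: Record<string, { effort: string[] }> = {
  "claude-opus-4-7": { effort: ["high"] },
  "claude-sonnet-4-6": { effort: ["low", "high"] },
  "claude-haiku-4-5": { effort: ["low"] },
};

function resolveProfile(model: string): string | undefined {
  if (model in profiles) return model;
  // Strip a trailing -YYYYMMDD date suffix, e.g. "claude-sonnet-4-6-20250929".
  const base = model.replace(/-\d{8}$/, "");
  return base in profiles ? base : undefined;
}
```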

Evals

hone bench remains as a compatibility shortcut, but the forward path is hone eval:

hone eval list
hone eval add my-suite "fix the bug"
ANTHROPIC_API_KEY=... hone eval run coding-smoke
hone eval report ~/.hone/eval_runs/<run>.json

Eval runs compare raw prompts against the same optimized replacement prompt shape the proxy uses. Reports include changed counts, judge deltas, model names, and run metadata.

Transparency

Transparency is load-bearing for trust. You should always know what was sent to Anthropic on your behalf, and you should be able to go back and refine if Hone misread you.

  • Every time Hone rewrites a prompt, the proxy prints a human-readable ✎ hone rewrote block to stderr with the reason, the diff, and a line of metadata. The default mode is diff (only rewrites are shown); change it with hone config set transparency.mode silent|diff|full.
  • hone last prints the most recent rewrite in the same format, including a banner with the trace id and age so a stale read is obvious.
  • hone watch streams live rewrites in a second terminal while you run Claude Code in the first.
  • The bundled Claude Code plugin exposes /hone-last, which shells out to hone last --plain so you can see the last rewrite without leaving the conversation.

The ring that backs hone last, hone watch, and Studio's Live Traces is memory-only. It captures prompts in-flight before the disk retention mode decides whether to persist them, so transparency works even when retention=metadata (the default).
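The in-memory ring behaves like a fixed-capacity queue that drops its oldest entry on overflow. This is an assumed shape for illustration; Hone's internal structure may differ:

```typescript
// Minimal fixed-size ring sketch: the newest N rewrites stay in memory
// regardless of the disk retention mode.
class RewriteRing<T> {
  private buf: T[] = [];
  constructor(private capacity: number) {}

  push(item: T): void {
    this.buf.push(item);
    if (this.buf.length > this.capacity) this.buf.shift(); // evict the oldest
  }

  latest(): T | undefined {
    return this.buf[this.buf.length - 1];
  }

  size(): number {
    return this.buf.length;
  }
}
```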

Local Studio

hone studio

Opens a local-only browser UI at http://127.0.0.1:47824 with nine functional tabs:

  • Optimize — paste a prompt, see ambiguity score, task class, analysis, and a side-by-side diff.
  • Iterate — take an existing prompt plus a goal, get a revised version.
  • Compare — run prompts A and B in parallel through the Hone proxy against your configured model.
  • Live Traces — filterable table of every request the proxy sees, with a detail drawer.
  • Evals — run suites, view runs.
  • Templates — read-only built-in task-class templates, plus the ability to create your own custom templates.
  • Variables — {{name}} key-value store used by Optimize and Iterate.
  • Models — Ollama connectivity test, upstream URL, profile picker.
  • Settings — all persisted config with a common/advanced split and a theme toggle.

Studio is dark by default, respects prefers-color-scheme, and stores theme preference per browser. Mutating requests require a token stored at ~/.hone/studio.token.

MCP

hone mcp start
hone mcp start --transport http

The MCP surface exposes tools for prompt optimization, prompt iteration, eval runs, and trace explanation. It is intentionally local-first and backed by the same runtime pipeline.

Commands

| Command | What it does |
|---|---|
| hone init | Interactive setup for Ollama, .env, retention, shell rc, and plugin |
| hone start | Start the proxy in the foreground |
| hone stop | Send SIGTERM to the running proxy |
| hone status | Print running state and PID |
| hone doctor | Show environment and proxy status |
| hone optimize "<prompt>" | Dry-run classifier + optimizer without Anthropic |
| hone bench | Compatibility 10-prompt eval against Opus |
| hone eval list/add/run/report | Manage and run eval suites |
| hone trace list/show/export/delete | Inspect local trace metadata |
| hone last [--plain] | Print the most recent rewrite with diff (requires proxy running) |
| hone watch [--interval MS] | Stream live rewrites in a second terminal |
| hone config get/set | Read and update ~/.hone/config.json (supports dotted keys like transparency.mode) |
| hone studio | Start the local browser Studio |
| hone mcp start | Start the local MCP server |

Configuration

Config precedence is:

  1. CLI flags
  2. Environment variables / .env
  3. ~/.hone/config.json
  4. Built-in defaults
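The precedence above amounts to a first-defined-wins lookup across the four layers. A minimal sketch, with illustrative layer objects rather than Hone's actual config loader:

```typescript
// Resolve one setting using the documented precedence:
// CLI flags > environment/.env > ~/.hone/config.json > built-in defaults.
function resolveSetting(
  key: string,
  flags: Record<string, string>,
  env: Record<string, string>,
  file: Record<string, string>,
  defaults: Record<string, string>
): string | undefined {
  return flags[key] ?? env[key] ?? file[key] ?? defaults[key];
}
```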

Useful settings:

HONE_PROXY_PORT=47823
HONE_STUDIO_PORT=47824
HONE_MCP_PORT=47825
HONE_UPSTREAM=https://api.anthropic.com
HONE_OLLAMA_URL=http://127.0.0.1:11434
HONE_OLLAMA_MODEL=qwen2.5:3b
HONE_RETENTION=metadata   # metadata | local | disabled
HONE_TRANSPARENCY=diff    # silent | diff | full
HONE_TRANSPARENCY_RING=50 # in-memory ring size (1..1000)
HONE_PROFILE=/path/to/profile.yaml

Retention defaults to metadata. Prompt and response bodies are not stored unless you choose local.

Privacy

Local by default. Prompt classification and rewriting use your configured local model. The proxy forwards requests to Anthropic because that is the endpoint Claude Code already uses.

Trace storage is configurable:

  • metadata: store timings, classes, changed flags, byte counts, warnings, and routing info.
  • local: also store original and optimized prompt bodies locally.
  • disabled: do not store traces.

~/.hone/proxy.log contains byte counts, task classes, timings, and redacted auth headers.

Development

npm install
npm run typecheck
npm test
npm run build
npm run pack:check

The test suite covers parsing, request mutation, raw bypass, short vague prompts, injector behavior, cache markers, profile resolution, hallucination guards, config validation, wrapper preservation, SSE streaming passthrough, and package contents.

Streaming correctness is load-bearing: Claude Code sets stream: true on every request and consumes the response as Server-Sent Events. tests/sse-streaming.test.ts spins up a mock upstream that writes a canonical Anthropic SSE payload in small chunks, runs it through the real proxy, and asserts byte-for-byte equality on the client side. Any future change that buffers, reorders, or re-encodes response bytes fails this test.
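The property that test asserts can be shown in miniature: chunk a payload, forward each chunk untouched, and the reassembled bytes must equal the original. The payload and chunk size below are toy stand-ins, not the canonical fixture the real test uses:

```typescript
// Toy illustration of byte-for-byte SSE passthrough: a compliant proxy
// forwards chunks as-is, so reassembly reproduces the payload exactly.
// Any buffering, reordering, or re-encoding would break the equality.
const payload =
  'event: message_start\ndata: {"type":"message_start"}\n\n' +
  'event: message_stop\ndata: {"type":"message_stop"}\n\n';

function chunk(s: string, size: number): string[] {
  const out: string[] = [];
  for (let i = 0; i < s.length; i += size) out.push(s.slice(i, i + size));
  return out;
}

// Identity forwarding stands in for the proxy's passthrough path.
const forwarded = chunk(payload, 7).map((c) => c);
```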

Positioning

Hone is not trying to be a broad Prompt Optimizer clone. Prompt Optimizer wins at general manual prompt editing across many surfaces. Hone adapts the useful parts (evals, compare workflows, local history, model settings, templates, Studio, and MCP) but keeps the runtime agent loop at the center.

The goal is a repo worth starring because it makes AI coding agents cheaper, safer, more inspectable, and less sloppy without asking developers to change how they work.

Roadmap

Shipped in v0.5.0:

  • True /raw bypass.
  • Configurable trace retention.
  • Trace CLI.
  • Eval suite CLI.
  • Static local Studio.
  • MCP start command.
  • npm package smoke check including the Claude plugin.
  • Broader unit coverage.

Next:

  • Rich browser-side Optimize/Templates editing in Studio.
  • More eval dataset formats and replay mode.
  • Full MCP compliance hardening.
  • Session-aware context advisor.
  • OpenAI-compatible agent-client profiles once the Anthropic/Claude Code path is solid.

License

MIT. See LICENSE.