
never2average-clone v1.1.13

Installable TUI financial analyst app with value creation and pricing research personas.

never2average-clone

never2average-clone is an installable terminal app with two analyst personas:

  • value_creation
  • pricing_research

It includes native tools for:

  • Artificial Analysis
  • Langfuse
  • Local memory search via SQLite
  • OpenMeter via JS REPL, helper skill, and openmeter CLI
  • Frontmatter-based skills loaded from the packaged .agents/skills/ directory

Documentation

Install

Once published, it installs like any normal CLI:

npm install -g never2average-clone

Installation automatically creates:

~/.never2average-clone/
~/.never2average-clone/config.json
~/.never2average-clone/memory.db
~/.never2average-clone/sessions/
~/.never2average-clone/artifacts/actuarial/
~/.never2average-clone/runtimes/actuarial/

For local development from this repo:

npm install
npm link

First Run

Set up the CLI once per machine:

never2average-clone login

That flow stores local config in:

~/.never2average-clone/config.json
~/.never2average-clone/memory.db
~/.never2average-clone/sessions/

The config file is created with local user permissions and keeps secrets on the machine where you run the CLI. A baseline config.json is also created automatically at install/first run, so the path exists before you edit it manually.
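For illustration only, a populated config.json might look like the sketch below. The key names are assumptions loosely mapped from the environment variables documented later; the real schema is whatever the login flow writes.

```json
{
  "provider": "opencode-go",
  "model": "glm-5.1",
  "analyst": "value_creation",
  "dataset": "finops_baseline",
  "openrouterApiKey": "…"
}
```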

Validate Setup

Run:

never2average-clone doctor

That checks:

  • provider credential presence for OpenCode Go or OpenRouter
  • actuarial Python runtime bootstrap
  • MLX availability on Apple Silicon
  • Langfuse connectivity
  • Artificial Analysis connectivity
  • OpenMeter connectivity

Use this to bootstrap MLX support into the managed actuarial runtime:

never2average-clone doctor --install-mlx

That installs mlx-lm[train] into ~/.never2average-clone/runtimes/actuarial/.venv and then verifies mlx_lm.lora --help.

Analyst Selection

Start the value creation analyst:

never2average-clone value-creation-analyst

Start the pricing research analyst:

never2average-clone pricing-research-analyst

Direct aliases also work:

value-creation-analyst
pricing-research-analyst

TUI Shortcuts

  • Start typing / to see matching commands live.
  • Press F2 to switch between chat mode and the audit workspace.
  • In chat mode, the header shows the active task, analyst/model, artifact/dataset, delegation state, checkpoint state, and last repair summary.
  • In audit mode, press ? for audit-only help, and press / or Ctrl-p to open the jump/filter overlay for tasks, child sessions, checkpoints, blocked items, and errors.
  • In audit mode, use j/k or arrow keys to move across tasks, h/l to drill to parent/child context, g/G for first/last task, and - or Ctrl-o to backtrack focus.
  • In audit mode, use 1/2/3/4 to switch detail focus and d to toggle raw detail.
  • In chat mode, use Ctrl-j for multiline input. Cursor movement, word movement, history recall, and line-boundary deletion are supported directly in the composer.
  • Run /memory to open the local memory picker, type to search, and inspect indexed observations.
  • Run /dataset register to add a typed dataset binding for CSV or JSONL training data.
  • Run /dataset or /dataset show to inspect the registered datasets available to the current dataset family.
  • Run /actuarial to open the actuarial artifact picker for the current dataset and analyst.
  • Use /actuarial create, /actuarial show, /actuarial validate, /actuarial compare, and /actuarial use to manage statistical and classical ML model artifacts in-session.
  • Use /actuarial jobs to inspect experiment workspaces for the active artifact.
  • Use /actuarial job create to create an autoresearch-style workspace copy, /actuarial job show to inspect one workspace, and /actuarial job run to execute it and promote improvements.
  • Use /actuarial leaderboard to rank jobs by the active artifact metric.
  • Run /fork to branch the current conversation into a new session and new event log.
  • Run /compact to summarize older history into a fresh session while preserving the recent tail.
  • Auto-compaction is enabled by default and triggers after large sessions by message count or token count.
  • Type /model to inspect the current model and open the provider-aware model picker.
  • Type /model <prefix> to narrow model choices from the provider model catalog.
  • Run /analyst, /provider, or /model with no arguments to open an interactive picker with arrow-key selection and inline filtering.
  • /model also shows whether the visible list came from the provider API or the documented fallback set.

TUI Modes

The fullscreen TUI now has two distinct work surfaces:

  • chat
    • active conversation and command composition
    • compact runtime status rail
    • multiline composer with history and cursor editing
  • audit
    • task tree and child-session inspection
    • checkpoint trail and interrupt state
    • jump/filter navigation across delegated work and error surfaces

Skills

The app uses one canonical skill location:

.agents/skills/

Current packaged skills:

  • openmeter-finops
  • value-creation-analyst
  • pricing-research-analyst

Each skill must be a SKILL.md file with YAML frontmatter including:

  • name
  • description

The runtime always allows access to these skills. The analyst workflows are loaded directly from the packaged skill files rather than being copied into TypeScript prompt constants.
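A minimal SKILL.md might therefore start like this (the description text is illustrative, not the packaged content; the markdown body of the skill follows the frontmatter):

```yaml
---
name: openmeter-finops
description: Illustrative description of what this skill helps the analyst do.
---
```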

Configuration Model

This package follows the same general distribution model as Claude Code:

  • the npm package is public and installable
  • the CLI runs locally on the user’s machine
  • credentials are provided at runtime and stored locally
  • secrets are not bundled into the npm package

You can still override local config with environment variables or a project-local .env file when needed.

Environment Variables

The interactive login flow is the normal path. For automation or overrides, the CLI also supports:

AGENT_PROVIDER=opencode-go
OPENCODE_GO_API_KEY=
OPENROUTER_API_KEY=
AGENT_MODEL=glm-5.1
AGENT_DATASET=finops_baseline
AGENT_ACTUARIAL_MODEL=actuarial_base_v1
AGENT_BASE_URL=https://opencode.ai/zen/go/v1
AGENT_ANALYST=value_creation
AGENT_AUTO_COMPACT_ENABLED=true
AGENT_AUTO_COMPACT_MAX_MESSAGES=24
AGENT_AUTO_COMPACT_MAX_TOKENS=120000
AGENT_AUTO_COMPACT_TAIL_MESSAGES=6
AGENT_MEMORY_DB_PATH=
AGENT_SESSION_DIR=
LANGFUSE_BASE_URL=https://langfuse.example.internal
LANGFUSE_PUBLIC_KEY=
LANGFUSE_SECRET_KEY=
AA_API_KEY=
OPENMETER_BASE_URL=https://openmeter.example.internal
OPENMETER_API_KEY=

OpenMeter CLI

The install also exposes an openmeter executable that uses the same local config:

openmeter meters list
openmeter meters query tokens_total --from 2026-01-01T00:00:00Z --to 2026-02-01T00:00:00Z --subject acme-capital
openmeter customers get acme-capital
openmeter invoices list --status draft

Local Memory

The CLI now ships with a local SQLite-backed memory index. You do not install SQLite separately.

  • The npm package installs sql.js, so there is no native addon build and no prebuild-install warning on install
  • The database file defaults to ~/.never2average-clone/memory.db
  • Each assistant turn is indexed locally
  • The agent automatically retrieves a few relevant memory summaries before new runs
  • Memory observations now also index active actuarial artifact refs so prior model work is queryable by artifact lineage
  • Native agent tools exposed in-session:
    • memory_search
    • memory_get
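Conceptually, the memory index is just a small searchable table of observations. A minimal Python/sqlite3 sketch of the idea follows; the real implementation uses sql.js and its own schema, so the table layout, column names, and matching logic here are all hypothetical:

```python
import sqlite3

# Hypothetical schema: one row per indexed assistant observation.
db = sqlite3.connect(":memory:")
db.execute(
    """CREATE TABLE observations (
        id INTEGER PRIMARY KEY,
        session TEXT,
        artifact_ref TEXT,  -- lets prior model work be queried by artifact lineage
        summary TEXT
    )"""
)
db.executemany(
    "INSERT INTO observations (session, artifact_ref, summary) VALUES (?, ?, ?)",
    [
        ("s1", "actuarial_base_v1", "Validated baseline regression on finops_baseline."),
        ("s2", "actuarial_base_v1", "Compared premium scenarios against backtest."),
        ("s2", None, "Discussed rollout sequencing."),
    ],
)

def memory_search(term: str, limit: int = 5) -> list:
    """Crude substring search standing in for the memory_search tool."""
    rows = db.execute(
        "SELECT summary FROM observations WHERE summary LIKE ? LIMIT ?",
        (f"%{term}%", limit),
    )
    return [summary for (summary,) in rows]

print(memory_search("backtest"))
```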

Actuarial Artifacts

The TUI now treats statistical and classical ML actuarial models as first-class artifacts.

  • Artifact bundles live under ~/.never2average-clone/artifacts/actuarial/
  • Managed Python runtime lives under ~/.never2average-clone/runtimes/actuarial/
  • Each artifact version includes:
    • manifest.json
    • README.md
    • train.py
    • score.py
    • validate.py
    • requirements.txt
    • metrics.json
    • backtest.json
    • artifacts/ for trained outputs
  • Active actuarial artifact is tracked per session and shown in the footer
  • Native agent tools exposed in-session:
    • actuarial_artifact_create
    • actuarial_artifact_list
    • actuarial_artifact_get
    • actuarial_artifact_activate
    • actuarial_artifact_validate
    • actuarial_artifact_compare
    • actuarial_artifact_run
    • actuarial_dataset_register
    • actuarial_dataset_list
    • actuarial_dataset_get
    • actuarial_job_create
    • actuarial_job_list
    • actuarial_job_get
    • actuarial_job_leaderboard
    • actuarial_job_run

The current AGENT_ACTUARIAL_MODEL value is treated as the default artifact id to activate when a matching artifact exists for the active dataset.

Recommended operator flow

/dataset register
/actuarial create
/actuarial show
/actuarial job create
/actuarial job show
/actuarial jobs
/actuarial leaderboard
/actuarial job run
/actuarial validate
/actuarial use

Template families

  • baseline_regression
    • simple train/score/validate bundle
    • best for deterministic statistical or basic regression models
  • autoresearch_sklearn
    • single-artifact research loop with mutable train.py and program.md
    • best for feature, objective, and classical ML iteration
  • mlx_lora_autoresearch
    • Apple Silicon-oriented deep-model finetune loop
    • materializes registered JSONL data into the MLX directory layout
    • runs python -m mlx_lm.lora --train and optional --test inside the managed actuarial runtime
    • based on the official mlx-lm LoRA workflow and a fixed-budget autoresearch-style iteration loop

Each artifact version now carries orchestration metadata:

  • tracked metric name and goal
  • mutable files
  • fixed-budget research hint
  • best promoted job id and score
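As a concrete illustration, these orchestration fields might appear in a manifest roughly like this (field names and values are assumptions mapped from the bullets above, not the actual manifest schema):

```json
{
  "metric": { "name": "validation_loss", "goal": "minimize" },
  "mutableFiles": ["train.py", "program.md"],
  "researchBudget": { "maxJobs": 8 },
  "bestJob": { "id": "job-0004", "score": 0.142 }
}
```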

Dataset bindings

Typed dataset bindings keep artifact creation and job runs deterministic:

  • tabular_csv
    • primaryPath
    • targetColumn
    • featureColumns
  • chat_jsonl
    • trainPath, optional validPath, optional testPath
    • optional chatField
  • completions_jsonl
    • trainPath, optional validPath, optional testPath
    • optional promptField, completionField
  • text_jsonl
    • trainPath, optional validPath, optional testPath
    • optional textField

When you create an artifact from a registered dataset, the TUI pre-seeds target/features/input paths from the binding instead of making the agent guess them.
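Two of the binding shapes above, sketched as typed records in Python (field names follow the bullets; the pre-seeding logic is an assumed simplification of what the TUI does):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TabularCsvBinding:
    primary_path: str
    target_column: str
    feature_columns: list

@dataclass
class ChatJsonlBinding:
    train_path: str
    valid_path: Optional[str] = None
    test_path: Optional[str] = None
    chat_field: Optional[str] = None

def seed_artifact_inputs(binding) -> dict:
    """Pre-seed artifact creation inputs from a binding instead of guessing them."""
    if isinstance(binding, TabularCsvBinding):
        return {
            "input": binding.primary_path,
            "target": binding.target_column,
            "features": binding.feature_columns,
        }
    if isinstance(binding, ChatJsonlBinding):
        return {"train": binding.train_path, "valid": binding.valid_path}
    raise TypeError(f"unsupported binding: {type(binding).__name__}")

seed = seed_artifact_inputs(
    TabularCsvBinding("data/finops.csv", "premium", ["segment", "usage"])
)
print(seed["target"])
```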

Repo Utility Commands

From the repo checkout:

npm run test:actuarial-artifacts
npm run test:live-usecases
npm run test:live-usecases:update-snapshots
npm run test:mlx-parity
npm run test:fullscreen-ui
npm run test:fullscreen-pty
npm run check:connectivity
npm run seed:dummy-data

test:live-usecases now runs a sixteen-scenario live parity matrix:

  • value_creation / premium_improvement
  • value_creation / reallocation_sensitivity
  • value_creation / telemetry_shift_guardrail
  • value_creation / artifact_comparison
  • value_creation / partial_context_triage
  • value_creation / contradictory_signal_repair
  • value_creation / multi_turn_followup_revision
  • pricing_research / roi_pricing_experiment
  • pricing_research / cashflow_rollout
  • pricing_research / segment_packaging
  • pricing_research / competitor_reference
  • pricing_research / partial_context_triage
  • pricing_research / feedback_revision_loop
  • pricing_research / delegated_segment_probe
  • pricing_research / delegated_followup_revision
  • pricing_research / delegated_competitive_reentry

The regression gate compares each scenario against the committed normalized snapshot at scripts/snapshots/live-usecase-parity.json. Update that file only after intentional workflow changes.

Each run also writes a machine-readable report to scripts/reports/live-usecase-parity.latest.json. For each scenario it records:

  • status and snapshot match status
  • required tool usage and forbidden tool use count
  • token usage and duration
  • recoverable tool error count and repair count
  • subagent count and resumed subagent count
  • tool governor count
  • domain/generic/fallback route counts
  • stage visited count
  • missing required steps
  • decision-quality status and experiment-specificity status
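In spirit, the gate is a normalized diff between the fresh report and the committed snapshot. A simplified Python sketch follows; which fields count as volatile, and the normalization itself, are assumptions here, since the real harness defines its own rules:

```python
# Assumed per-run volatile fields that should not fail the gate.
VOLATILE_FIELDS = {"durationMs", "tokenUsage"}

def normalize(report: dict) -> dict:
    """Drop volatile per-run fields so only workflow-shape changes fail the gate."""
    return {k: v for k, v in report.items() if k not in VOLATILE_FIELDS}

def gate(scenario_report: dict, committed_snapshot: dict) -> bool:
    return normalize(scenario_report) == normalize(committed_snapshot)

run = {"status": "pass", "requiredTools": ["memory_search"], "durationMs": 4120}
snap = {"status": "pass", "requiredTools": ["memory_search"], "durationMs": 3988}
print(gate(run, snap))
```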

Release

Use the documented release flow in docs/releasing.md. The short version is:

npm run typecheck
npm run build
npm run test:skills
npm run test:subagent-resume
npm run test:fullscreen-ui
npm run test:fullscreen-pty
npm run test:actuarial-artifacts
npm run test:parallel-analyst-pty
npm run test:live-usecases
npm run test:mlx-parity
npm pack --dry-run
npm publish