# never2average-clone

v1.1.13

Installable TUI financial analyst app with value creation and pricing research personas.
never2average-clone is an installable terminal app with two analyst personas:

- `value_creation`
- `pricing_research`
It includes native tools for:
- Artificial Analysis
- Langfuse
- Local memory search via SQLite
- OpenMeter via JS REPL, helper skill, and the `openmeter` CLI
- Frontmatter-based skills loaded from the packaged `.agents/skills/` directory
## Documentation
- Operator setup
- TUI command reference
- Release and npm publish process
- Codex gap checklist
- Open runtime gaps
## Install

Once published, install it like any normal CLI:

```
npm install -g never2average-clone
```

Install now auto-creates:

```
~/.never2average-clone/
~/.never2average-clone/config.json
~/.never2average-clone/memory.db
~/.never2average-clone/sessions/
~/.never2average-clone/artifacts/actuarial/
~/.never2average-clone/runtimes/actuarial/
```

For local development from this repo:

```
npm install
npm link
```

## First Run
Set up the CLI once per machine:
```
never2average-clone login
```

That flow stores local config in:

```
~/.never2average-clone/config.json
~/.never2average-clone/memory.db
~/.never2average-clone/sessions/
```

The config file is created with local user permissions and keeps secrets on the machine where you run the CLI. A baseline `config.json` is also created automatically at install/first run, so the path exists before you edit it manually.
## Validate Setup

Run:

```
never2average-clone doctor
never2average-clone doctor --install-mlx
```

That checks:

- provider credential presence for OpenCode Go or OpenRouter
- actuarial Python runtime bootstrap
- MLX availability on Apple Silicon
- Langfuse connectivity
- Artificial Analysis connectivity
- OpenMeter connectivity

Use this to bootstrap MLX support into the managed actuarial runtime:

```
never2average-clone doctor --install-mlx
```

That installs `mlx-lm[train]` into `~/.never2average-clone/runtimes/actuarial/.venv` and then verifies `mlx_lm.lora --help`.
## Analyst Selection

Start the value creation analyst:

```
never2average-clone value-creation-analyst
```

Start the pricing research analyst:

```
never2average-clone pricing-research-analyst
```

Direct aliases also work:

```
value-creation-analyst
pricing-research-analyst
```

## TUI Shortcuts
- Start typing `/` to see matching commands live.
- Press `F2` to switch between `chat` mode and the audit workspace.
- In `chat` mode, the header shows the active task, analyst/model, artifact/dataset, delegation state, checkpoint state, and last repair summary.
- In `audit` mode, use `?` for audit-only help and `/` or `Ctrl-p` to open the jump/filter overlay for tasks, child sessions, checkpoints, blocked items, and errors.
- In `audit` mode, use `j`/`k` or arrow keys to move across tasks, `h`/`l` to drill to parent/child context, `g`/`G` for first/last task, and `-` or `Ctrl-o` to backtrack focus.
- In `audit` mode, use `1`/`2`/`3`/`4` to switch detail focus and `d` to toggle raw detail.
- In `chat` mode, use `Ctrl-j` for multiline input. Cursor movement, word movement, history recall, and line-boundary deletion are supported directly in the composer.
- Run `/memory` to open the local memory picker, type to search, and inspect indexed observations.
- Run `/dataset register` to add a typed dataset binding for CSV or JSONL training data.
- Run `/dataset` or `/dataset show` to inspect the registered datasets available to the current dataset family.
- Run `/actuarial` to open the actuarial artifact picker for the current dataset and analyst.
- Use `/actuarial create`, `/actuarial show`, `/actuarial validate`, `/actuarial compare`, and `/actuarial use` to manage statistical and classical ML model artifacts in-session.
- Use `/actuarial jobs` to inspect experiment workspaces for the active artifact.
- Use `/actuarial job create` to create an autoresearch-style workspace copy, `/actuarial job show` to inspect one workspace, and `/actuarial job run` to execute it and promote improvements.
- Use `/actuarial leaderboard` to rank jobs by the active artifact metric.
- Run `/fork` to branch the current conversation into a new session and new event log.
- Run `/compact` to summarize older history into a fresh session while preserving the recent tail.
- Auto-compaction is enabled by default and triggers after large sessions by message count or token count.
- Type `/model` to inspect the current model and open the provider-aware model picker.
- Type `/model <prefix>` to narrow model choices from the provider model catalog.
- Run `/analyst`, `/provider`, or `/model` with no arguments to open an interactive picker with arrow-key selection and inline filtering. `/model` also shows whether the visible list came from the provider API or the documented fallback set.
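The auto-compaction trigger can be sketched in a few lines, using the documented defaults (24 messages, 120000 tokens, 6-message tail). The function names and the summarization step are illustrative assumptions, not the CLI's actual implementation:

```python
# Sketch of the documented auto-compaction rule: compact when a session grows
# past the message or token budget, keeping the most recent tail verbatim.
# Defaults mirror the AGENT_AUTO_COMPACT_* settings; the real CLI then
# summarizes the older history into a fresh session (elided here).
def should_compact(n_messages, n_tokens, max_messages=24, max_tokens=120_000):
    return n_messages > max_messages or n_tokens > max_tokens

def split_for_compaction(messages, tail=6):
    """Older history to summarize, and the recent tail to preserve."""
    return messages[:-tail], messages[-tail:]

msgs = [f"m{i}" for i in range(30)]
if should_compact(len(msgs), 5_000):
    older, recent = split_for_compaction(msgs)
    print(len(older), len(recent))  # 24 6
```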
## TUI Modes

The fullscreen TUI now has two distinct work surfaces:

- `chat`
  - active conversation and command composition
  - compact runtime status rail
  - multiline composer with history and cursor editing
- `audit`
  - task tree and child-session inspection
  - checkpoint trail and interrupt state
  - jump/filter navigation across delegated work and error surfaces
## Skills

The app uses one canonical skill location:

```
.agents/skills/
```

Current packaged skills:

- `openmeter-finops`
- `value-creation-analyst`
- `pricing-research-analyst`

Each skill must be a `SKILL.md` file with YAML frontmatter including:

- `name`
- `description`
The runtime always allows skill access. The analyst workflows are loaded directly from these packaged skill files rather than copied into TypeScript prompt constants.
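A minimal sketch of a packaged skill file, for example at a path like `.agents/skills/openmeter-finops/SKILL.md`, using only the documented required frontmatter fields (the description text and body are illustrative, not the shipped skill's content):

```
---
name: openmeter-finops
description: Illustrative one-line summary of what this skill covers.
---

Markdown instructions for the analyst workflow go here.
```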
## Configuration Model
This package follows the same general distribution model as Claude Code:
- the npm package is public and installable
- the CLI runs locally on the user’s machine
- credentials are provided at runtime and stored locally
- secrets are not bundled into the npm package
You can still override local config with environment variables or a project-local .env file when needed.
## Environment Variables
The interactive login flow is the normal path. For automation or overrides, the CLI also supports:
```
AGENT_PROVIDER=opencode-go
OPENCODE_GO_API_KEY=
OPENROUTER_API_KEY=
AGENT_MODEL=glm-5.1
AGENT_DATASET=finops_baseline
AGENT_ACTUARIAL_MODEL=actuarial_base_v1
AGENT_BASE_URL=https://opencode.ai/zen/go/v1
AGENT_ANALYST=value_creation
AGENT_AUTO_COMPACT_ENABLED=true
AGENT_AUTO_COMPACT_MAX_MESSAGES=24
AGENT_AUTO_COMPACT_MAX_TOKENS=120000
AGENT_AUTO_COMPACT_TAIL_MESSAGES=6
AGENT_MEMORY_DB_PATH=
AGENT_SESSION_DIR=
LANGFUSE_BASE_URL=https://langfuse.example.internal
LANGFUSE_PUBLIC_KEY=
LANGFUSE_SECRET_KEY=
AA_API_KEY=
OPENMETER_BASE_URL=https://openmeter.example.internal
OPENMETER_API_KEY=
```

## OpenMeter CLI
The install also exposes an `openmeter` executable that uses the same local config:
```
openmeter meters list
openmeter meters query tokens_total --from 2026-01-01T00:00:00Z --to 2026-02-01T00:00:00Z --subject acme-capital
openmeter customers get acme-capital
openmeter invoices list --status draft
```

## Local Memory
The CLI now ships with a local SQLite-backed memory index. You do not install SQLite separately.
- The npm package installs `sql.js`, so there is no native addon build and no `prebuild-install` warning on install
- The database file defaults to `~/.never2average-clone/memory.db`
- Each assistant turn is indexed locally
- The agent automatically retrieves a few relevant memory summaries before new runs
- Memory observations now also index active actuarial artifact refs so prior model work is queryable by artifact lineage
- Native agent tools exposed in-session: `memory_search`, `memory_get`
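The index-then-recall pattern can be sketched in a few lines. The real CLI uses `sql.js` with its own schema, ranking, and tools (`memory_search`, `memory_get`), so the table layout, substring matching, and sample rows below are purely illustrative:

```python
import sqlite3

# Illustrative sketch of the local memory pattern: index each assistant turn,
# then recall a few relevant summaries before a new run. This is NOT the
# CLI's actual schema or retrieval strategy.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE memory (summary TEXT)")
turns = [
    "Validated actuarial artifact against the finops_baseline dataset",
    "Discussed pricing experiment rollout for acme-capital",
    "Compared two regression artifacts on loss-ratio backtests",
]
db.executemany("INSERT INTO memory (summary) VALUES (?)", [(t,) for t in turns])

def recall(query: str, k: int = 2) -> list:
    """Return up to k stored summaries containing the query substring."""
    rows = db.execute(
        "SELECT summary FROM memory WHERE summary LIKE '%' || ? || '%' LIMIT ?",
        (query, k),
    )
    return [row[0] for row in rows]

print(recall("artifact"))  # the two artifact-related summaries
```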
## Actuarial Artifacts
The TUI now treats statistical and classical ML actuarial models as first-class artifacts.
- Artifact bundles live under `~/.never2average-clone/artifacts/actuarial/`
- Managed Python runtime lives under `~/.never2average-clone/runtimes/actuarial/`
- Each artifact version includes: `manifest.json`, `README.md`, `train.py`, `score.py`, `validate.py`, `requirements.txt`, `metrics.json`, `backtest.json`, and `artifacts/` for trained outputs
- Active actuarial artifact is tracked per session and shown in the footer
- Native agent tools exposed in-session: `actuarial_artifact_create`, `actuarial_artifact_list`, `actuarial_artifact_get`, `actuarial_artifact_activate`, `actuarial_artifact_validate`, `actuarial_artifact_compare`, `actuarial_artifact_run`, `actuarial_dataset_register`, `actuarial_dataset_list`, `actuarial_dataset_get`, `actuarial_job_create`, `actuarial_job_list`, `actuarial_job_get`, `actuarial_job_leaderboard`, `actuarial_job_run`
The current `AGENT_ACTUARIAL_MODEL` value is treated as the default artifact id to activate when a matching artifact exists for the active dataset.
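As a sketch of that default-activation rule (the helper name and the set-based lookup are illustrative assumptions, not the CLI's internals):

```python
import os

# Sketch of the documented rule: treat AGENT_ACTUARIAL_MODEL as the default
# artifact id to activate, but only when a matching artifact exists for the
# active dataset. Otherwise no default is activated.
def default_artifact(available_ids):
    candidate = os.environ.get("AGENT_ACTUARIAL_MODEL")
    return candidate if candidate and candidate in available_ids else None

os.environ["AGENT_ACTUARIAL_MODEL"] = "actuarial_base_v1"
print(default_artifact({"actuarial_base_v1", "actuarial_exp_v2"}))  # actuarial_base_v1
print(default_artifact({"unrelated_artifact"}))                     # None
```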
### Recommended operator flow

```
/dataset register
/actuarial create
/actuarial show
/actuarial job create
/actuarial job show
/actuarial jobs
/actuarial leaderboard
/actuarial job run
/actuarial validate
/actuarial use
```

### Template families
- `baseline_regression`
  - simple train/score/validate bundle
  - best for deterministic statistical or basic regression models
- `autoresearch_sklearn`
  - single-artifact research loop with mutable `train.py` and `program.md`
  - best for feature, objective, and classical ML iteration
- `mlx_lora_autoresearch`
  - Apple Silicon-oriented deep-model finetune loop
  - materializes registered JSONL data into the MLX directory layout
  - runs `python -m mlx_lm.lora --train` and optional `--test` inside the managed actuarial runtime
  - based on the official `mlx-lm` LoRA workflow and a fixed-budget autoresearch-style iteration loop
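The JSONL materialization step can be sketched as follows. `mlx_lm.lora` reads a data directory containing `train.jsonl` (plus optional `valid.jsonl`/`test.jsonl`), and the prompt/completion record shape is one of the formats `mlx-lm` accepts; the paths and sample record here are illustrative, not what the CLI actually writes:

```python
import json
import pathlib
import tempfile

# Sketch of "materialize registered JSONL data into the MLX directory layout":
# write registered records into <data_dir>/train.jsonl so the managed runtime
# can run `python -m mlx_lm.lora --train --data <data_dir> ...`.
records = [{"prompt": "Estimate the loss ratio for segment A.", "completion": "0.62"}]

data_dir = pathlib.Path(tempfile.mkdtemp()) / "data"
data_dir.mkdir(parents=True)
with open(data_dir / "train.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

print((data_dir / "train.jsonl").exists())  # True
```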
Each artifact version now carries orchestration metadata:
- tracked metric name and goal
- mutable files
- fixed-budget research hint
- best promoted job id and score
### Dataset bindings

Typed dataset bindings keep artifact creation and job runs deterministic:

- `tabular_csv`
  - `primaryPath`, `targetColumn`, `featureColumns`
- `chat_jsonl`
  - `trainPath`, optional `validPath`, optional `testPath`
  - optional `chatField`
- `completions_jsonl`
  - `trainPath`, optional `validPath`, optional `testPath`
  - optional `promptField`, `completionField`
- `text_jsonl`
  - `trainPath`, optional `validPath`, optional `testPath`
  - optional `textField`
When you create an artifact from a registered dataset, the TUI pre-seeds target/features/input paths from the binding instead of making the agent guess them.
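The binding shapes above can be checked with a small helper. The CLI's actual storage format is not specified in this README, so the dict representation and function are illustrative; only the per-type required field names come from the list above:

```python
# Required fields per documented binding type (optional fields omitted).
REQUIRED_FIELDS = {
    "tabular_csv": {"primaryPath", "targetColumn", "featureColumns"},
    "chat_jsonl": {"trainPath"},
    "completions_jsonl": {"trainPath"},
    "text_jsonl": {"trainPath"},
}

def missing_fields(binding_type: str, binding: dict) -> set:
    """Return required fields absent from a dataset binding."""
    return REQUIRED_FIELDS[binding_type] - binding.keys()

example = {
    "primaryPath": "data/portfolio.csv",       # illustrative path
    "targetColumn": "loss_ratio",              # illustrative target
    "featureColumns": ["age_band", "region"],  # illustrative features
}
print(sorted(missing_fields("tabular_csv", example)))  # []
```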
## Repo Utility Commands
From the repo checkout:
```
npm run test:actuarial-artifacts
npm run test:live-usecases
npm run test:live-usecases:update-snapshots
npm run test:mlx-parity
npm run test:fullscreen-ui
npm run test:fullscreen-pty
npm run check:connectivity
npm run seed:dummy-data
```

`test:live-usecases` now runs a sixteen-scenario live parity matrix:

- `value_creation / premium_improvement`
- `value_creation / reallocation_sensitivity`
- `value_creation / telemetry_shift_guardrail`
- `value_creation / artifact_comparison`
- `value_creation / partial_context_triage`
- `value_creation / contradictory_signal_repair`
- `value_creation / multi_turn_followup_revision`
- `pricing_research / roi_pricing_experiment`
- `pricing_research / cashflow_rollout`
- `pricing_research / segment_packaging`
- `pricing_research / competitor_reference`
- `pricing_research / partial_context_triage`
- `pricing_research / feedback_revision_loop`
- `pricing_research / delegated_segment_probe`
- `pricing_research / delegated_followup_revision`
- `pricing_research / delegated_competitive_reentry`
The regression gate compares each scenario against the committed normalized snapshot at `scripts/snapshots/live-usecase-parity.json`. Update that file only after intentional workflow changes.

Each run also writes a machine-readable report to `scripts/reports/live-usecase-parity.latest.json` with per-scenario status, required tool usage, token usage, recoverable tool error count, repair count, subagent count, resumed subagent count, tool governor count, domain/generic/fallback route counts, stage visited count, duration, missing required steps, forbidden tool use count, decision-quality status, experiment-specificity status, and snapshot match status.
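A quick way to consume that report, assuming hypothetical field names since the JSON schema itself is not documented in this README:

```python
# In a repo checkout you would load the documented report path:
#   scripts/reports/live-usecase-parity.latest.json
# The "scenarios" / "scenario" / "status" keys below are assumptions for
# illustration; check the real report for its actual schema.
report = {
    "scenarios": [
        {"scenario": "value_creation / premium_improvement", "status": "pass"},
        {"scenario": "pricing_research / cashflow_rollout", "status": "fail"},
    ]
}

failing = [s["scenario"] for s in report["scenarios"] if s["status"] != "pass"]
print(failing)  # ['pricing_research / cashflow_rollout']
```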
## Release

Use the documented release flow in `docs/releasing.md`. The short version is:

```
npm run typecheck
npm run build
npm run test:skills
npm run test:subagent-resume
npm run test:fullscreen-ui
npm run test:fullscreen-pty
npm run test:actuarial-artifacts
npm run test:parallel-analyst-pty
npm run test:live-usecases
npm run test:mlx-parity
npm pack --dry-run
npm publish
```