
monte-engine

v0.1.10 · Published

Probabilistic life simulation platform

Monte

Give AI agents the judgment layer they need for hard real-life decisions.

Monte turns your personal data into a behavioral decision model that AI agents can consult before making high-stakes calls on your behalf. Instead of guessing what an average user might do, Monte helps an agent simulate how you are likely to think, hesitate, react under pressure, and trade off risk across hundreds of realistic futures.

For career moves, money choices, relocations, relationships, or any irreversible bet, Monte gives agents more than a one-shot answer: it returns outcome distributions, dominant uncertainties, recommended experiments, and evidence-adjusted reruns so decisions can improve as reality unfolds.

Monte also exposes an additive personalization surface so agents can adapt how they explain, plan, and guide work for a specific user without running a simulation first.

Core loop: Data -> Signals -> Persona -> Clones -> Simulation -> Evidence loop

What Monte gives you

  • A judgment engine AI agents can call before making expensive, emotional, or irreversible decisions
  • A deterministic personalization profile agents can call to adapt tone, pacing, structure, and decision framing
  • Outcome distributions instead of a single yes/no answer
  • A persona built from revealed behavioral signals, not just self-reported traits
  • Decision intelligence with dominant uncertainties and recommended experiments
  • Evidence capture plus reruns after the world gives you new information
  • A deterministic benchmark harness for regression-testing the simulation layer

Evidence, explained simply

Evidence means new information you learn in real life after a simulation runs.

Think of the loop like this:

  • Monte simulates a decision
  • reality gives you a new signal
  • you record that signal as evidence
  • Monte reruns the decision with the updated information

Example:

  • original question: Should I take a job offer for 120K a year or keep looking?
  • new evidence: The company gave me a signed offer, but the hiring manager hinted there is little growth in the role.
  • rerun: Monte updates the decision using that new signal instead of pretending nothing changed

In plain English:

  • persona data tells Monte who you are
  • evidence tells Monte what the world just taught you about this specific decision

That is why evidence lives after the first simulation, not before it.
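The whole loop can be sketched as plain data flow. This is a minimal illustration of the concept only; the class names and method signatures here are hypothetical, not Monte's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    # A real-world signal learned after the first simulation ran.
    signal: str
    result: str  # e.g. "positive" or "negative"

@dataclass
class Decision:
    question: str
    evidence: list[Evidence] = field(default_factory=list)

    def record(self, signal: str, result: str) -> None:
        # Step 3 of the loop: capture what reality just taught you.
        self.evidence.append(Evidence(signal, result))

    def rerun_inputs(self) -> dict:
        # Step 4: a rerun uses the original question plus all recorded evidence,
        # instead of pretending nothing changed.
        return {"question": self.question,
                "evidence": [e.signal for e in self.evidence]}

d = Decision("Should I take a job offer for 120K a year or keep looking?")
d.record("Hiring manager hinted there is little growth in the role", "negative")
print(d.rerun_inputs())
```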

Best Data To Feed Monte

Monte works best on traces of what you actually do, not polished summaries of who you think you are.

S tier

  • search_history: what you repeatedly look up when stressed, curious, ambitious, or stuck
  • financial: spending, saving, subscriptions, impulse buys, risk behavior, and constraint signals
  • ai_chat: real conversations with ChatGPT, Claude, or other AI tools where you ask for help, vent, plan, or debate decisions

These are the strongest sources because they capture revealed preferences instead of self-marketing.

A tier

  • notes: journals, reflections, private writing, planning docs, and long-form thought dumps
  • watch_history: what you spend time learning from, doomscrolling through, or aspirationally consuming
  • files: personal docs with genuine authored content, especially .md, .txt, .pdf, .docx, and similar artifacts

These are excellent when they are personal and unfiltered.

B tier

  • social_media: posts, comments, replies, and public writing across Reddit, X, forums, or similar surfaces

Useful, but more performative than private data. People often posture in public.

C tier

  • files that are mostly screenshots, random exports, or generic documents with little actual authored text
  • thin or highly one-sided datasets, even if the format is technically supported

Monte can ingest them, but signal quality is usually weaker.

D tier

  • polished bios, resumes, one-page self-descriptions, and "about me" summaries
  • heavily curated or AI-generated personal summaries
  • tiny datasets that only show one mood, one week, or one context

This kind of input is usually too cleaned up and too self-conscious to model real behavior well.

Practical rule

If you only have time to feed Monte a few things, start with:

  1. search history
  2. financial data
  3. AI chat exports
  4. private notes
  5. watch history

That mix usually gives Monte the fastest jump in behavioral signal quality.
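The shortlist above can be encoded as a simple priority list. This is an illustrative helper showing the recommended feeding order, not part of Monte itself:

```python
# The five-source shortlist from the practical rule; lower index = feed first.
PRIORITY = ["search_history", "financial", "ai_chat", "notes", "watch_history"]

def order_sources(available: set[str]) -> list[str]:
    """Return the user's available sources in recommended feeding order.

    Sources outside the shortlist (e.g. social_media) are omitted here,
    even though Monte can still ingest them.
    """
    return [s for s in PRIORITY if s in available]

print(order_sources({"notes", "financial", "social_media"}))
```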

Current product shape

Monte currently ships as:

  • a Fastify API that can serve the bundled dashboard from the npm package
  • a globally installable Commander CLI, including monte start
  • a repo-local Vite + React dashboard in apps/web for local development and UI iteration
  • BullMQ workers for ingestion, persona builds, and simulation batches
  • Neo4j for graph persistence
  • Redis for cache, live progress, and queue transport
  • MinIO for uploaded source storage

In self-hosted OSS mode, auth is stubbed to a local injected user.

Dashboard Preview

The repo-local dashboard provides a fast five-minute product walkthrough of Monte instead of a terminal recording.

(Screenshots: Monte dashboard overview and Monte dashboard graph.)

For more UI screens, see docs/dashboard.md.

Quickstart

Requirements

  • Node.js 20+
  • Docker and Docker Compose
  • A chat-capable OpenAI-compatible model key
  • An embedding-capable key for persona builds

The simplest setup is OPENROUTER_API_KEY, which can cover both chat and embeddings. For the globally installed CLI, you can either keep using environment variables or store provider credentials once with monte config.

1. Configure the environment

cp .env.example .env

Set at least:

  • NEO4J_PASSWORD
  • OPENROUTER_API_KEY, or equivalent chat plus embedding keys

Optional runtime tuning:

  • SIMULATION_BATCH_SIZE
  • SIMULATION_DECISION_CONCURRENCY
  • SIMULATION_ACTIVE_FRONTIER
  • SIMULATION_CONCURRENCY (legacy alias for decision concurrency)
  • SIMULATION_WORKER_CONCURRENCY
  • SIMULATION_DECISION_BATCH_SIZE
  • SIMULATION_DECISION_BATCH_FLUSH_MS
  • LLM_RPM_LIMIT
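Put together, a minimal `.env` might look like this. The values below are placeholders, not recommendations; only the first two entries are required per the list above:

```shell
# Required
NEO4J_PASSWORD=change-me
OPENROUTER_API_KEY=sk-or-your-key-here

# Optional runtime tuning (illustrative example values only)
SIMULATION_BATCH_SIZE=25
SIMULATION_DECISION_CONCURRENCY=4
LLM_RPM_LIMIT=60
```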

2. Start dependencies and install packages

docker compose up -d neo4j redis minio
npm install

Optional frontend env for the dashboard:

cp apps/web/.env.example apps/web/.env

3. Run the Monte API

npm run dev

The API starts on http://localhost:3000 by default. Swagger docs are available at http://localhost:3000/docs.

4. Run the dashboard

npm run web:dev

The demo UI starts on http://localhost:3001 by default and talks to the API on http://localhost:3000 unless VITE_MONTE_API_BASE_URL is overridden. This two-process flow is for repo-local development with live frontend edits.

5. Install the global CLI

npm install -g monte-engine
monte config set-api http://localhost:3000
monte config set-provider openrouter
monte config set-api-key <your-openrouter-key>

6. Verify the stack

monte doctor
monte doctor --json
monte config show
npm run web:build
monte personalize profile --json

Agent personalization

Once you have ingested data and built a persona, agents can use Monte without starting a simulation:

monte personalize profile --json
monte personalize context "Help me plan next week" --json

The API exposes the same additive surface:

  • GET /personalization/profile
  • POST /personalization/context

For AI Agents

If a user tells you something like:

Install monte-engine. My data is in /temp.

treat that as enough information to start the Monte personalization flow.

If you can fetch raw docs, read the operator guide first.

Default sequence:

npm install -g monte-engine
monte ingest /temp
monte persona build
monte persona status
monte personalize profile --json

If the user gave you a concrete task, continue with:

monte personalize context "Help me with this task" --json

Use these heuristics:

  • If the persona is not ready yet, keep checking monte persona status until it is.
  • Use monte personalize ... for adapting tone, pacing, structure, planning, writing, learning, or decision guidance to the user.
  • Use monte decide ... only when the user explicitly wants a simulation-style judgment call.
  • If monte commands fail because the local API stack is not running yet, fall back to the setup sections in this README and bring the stack up before continuing.

The shortest reliable mental model is:

  1. ingest the user's data
  2. build the persona
  3. wait for readiness
  4. call personalization commands for the current task

Global CLI Install

The primary published package is monte-engine, and the executable on your PATH is monte.

npm install -g monte-engine
monte config set-api http://localhost:3000
monte config set-provider openrouter
monte config set-api-key <your-openrouter-key>
monte doctor

If you use Groq for chat and a separate embedding provider, store both:

monte config set-provider groq
monte config set-api-key <your-groq-key>
monte config set-embedding-key <your-embedding-key>

CLI key storage lives in ~/.monte/config.json. Environment variables still take precedence if both are set.

Monte also publishes a GitHub Packages mirror as @elironb/monte-engine so the package can be associated with this repository and show up in GitHub Packages. GitHub Packages is a secondary distribution path here, not the main install path, and it still requires npm auth against https://npm.pkg.github.com.

For local development inside this repo, use the source-running variant instead:

npm run cli:dev -- doctor

Installed Dashboard And API

If Docker dependencies are already running and your current working directory contains the Monte .env, the globally installed package can start the API, workers, and bundled dashboard with one command:

monte start

Then open http://localhost:3000.

The bundled dashboard now includes a dedicated Graph tab: a clickable scenario DAG with live clone occupancy, edge flow, and sampled trace overlays. It sits alongside the existing overview, persona, live run, results, evidence, and sources surfaces.

Useful variants:

monte start --port 3001
monte start --no-dashboard

Repo contributors should still use npm run dev plus npm run web:dev when they want live backend and frontend reloads at the same time.

Agent Integration

Monte is designed to be usable as a CLI step inside external agent systems like Claude Code, OpenClaw, or Hermes. The agent-facing entrypoint is monte decide.

Preflight:

monte config set-api http://localhost:3000
monte config set-provider openrouter
monte config set-api-key <your-openrouter-key>
monte doctor --json

One-shot decision:

monte decide "should I quit my job to start a company?" --mode standard --wait --json

Async flow:

monte decide "should I move to Berlin for this job?" --mode fast --json
monte simulate progress <simulation-id> --json
monte simulate results <simulation-id> -f json

monte decide --json returns a single JSON object. Without --wait, it returns the queued simulation plus recommended polling commands. With --wait, it also returns a condensed decision bundle and the raw aggregated results payload.
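An agent wrapping the async flow typically needs a polling loop. Here is a generic sketch where `get_progress` stands in for running `monte simulate progress <simulation-id> --json` and parsing the output; the status names mirror the phase list in Progress Reporting and should be verified against real output:

```python
import time

def poll_until_done(get_progress, interval_s=2.0, timeout_s=600.0):
    """Poll a progress callable until the simulation reaches a terminal state.

    `get_progress` should return a dict with a "status" field. The terminal
    statuses assumed here ("completed", "failed") come from the documented
    phase list; confirm them against actual CLI output.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        progress = get_progress()
        if progress.get("status") in ("completed", "failed"):
            return progress
        time.sleep(interval_s)
    raise TimeoutError("simulation did not finish in time")
```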

Bundled Example Persona

Monte now ships a bundled starter persona inside the npm package so you can test the full loop without generating data first.

monte example list
monte example ingest starter
monte persona build
monte persona psychology
monte decide "should I leave my stable product job to join a startup and put $25k into the idea?" --mode fast --wait

If you want the raw filesystem path to the bundled dataset:

monte example path starter

Quick Demo

Monte still ships a synthetic persona generator if you want a fresh persona tailored to a specific description.

monte generate "conservative 40 year old accountant, disciplined saver, risk-averse" -o ./persona-conservative
monte generate "25 year old crypto trader, YOLO mentality, high risk tolerance" -o ./persona-aggressive

Then ingest, build, and simulate each separately, or use compare for an A/B workflow.

Progress Reporting

Simulation progress is phase-aware. Instead of appearing stuck at 95-99%, Monte now surfaces the active phase:

  • queued
  • executing
  • persisting
  • aggregating
  • completed
  • failed

During execution, progress covers 0-90%. Persistence covers 90-96%. Aggregation uses stable end markers at 97-99% so long-tail work is explained rather than looking frozen.
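The phase-to-percentage mapping described above could be approximated like this. This is a reconstruction from the documented ranges, not Monte's actual code:

```python
# Overall-progress ranges per phase, taken from the ranges documented above.
PHASE_RANGES = {
    "queued": (0, 0),
    "executing": (0, 90),
    "persisting": (90, 96),
    "aggregating": (97, 99),
    "completed": (100, 100),
    "failed": (100, 100),  # assumption: terminal phases report 100
}

def overall_percent(phase: str, phase_fraction: float) -> float:
    """Map a phase plus its internal completion fraction to an overall percent."""
    lo, hi = PHASE_RANGES[phase]
    clamped = min(max(phase_fraction, 0.0), 1.0)
    return lo + (hi - lo) * clamped

print(overall_percent("persisting", 0.5))  # 93.0
```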

Monte also batches concurrent LLM decisions by decision node inside each worker batch. Instead of making one remote call per clone per node, Monte can group multiple clones waiting on the same fork into a single structured LLM request. This keeps decision quality LLM-backed while cutting request overhead and rate-limit pressure.

Under the hood, the scheduler is frontier-based rather than whole-clone-concurrency-based. Each worker batch keeps an active frontier of clones in memory, advances them locally until they block on a decision, groups those waiting decisions by node, and only then spends LLM concurrency.

If the provider starts rejecting large batched decision payloads, Monte now adapts by shrinking later batch sizes for that scenario/mode instead of repeating the same oversized request pattern all run long.
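The batching idea can be sketched as grouping waiting clones by decision node. This is a simplified illustration of the scheduling concept, not the real engine:

```python
from collections import defaultdict

def group_waiting_clones(clones):
    """Group clones blocked on a decision by the node they are waiting at,
    so one structured LLM request can cover every clone at the same fork."""
    by_node = defaultdict(list)
    for clone in clones:
        if clone["state"] == "waiting_on_decision":
            by_node[clone["node_id"]].append(clone["id"])
    return dict(by_node)

clones = [
    {"id": "c1", "state": "waiting_on_decision", "node_id": "quit_job"},
    {"id": "c2", "state": "waiting_on_decision", "node_id": "quit_job"},
    {"id": "c3", "state": "advancing", "node_id": "save_more"},
]
print(group_waiting_clones(clones))  # {'quit_job': ['c1', 'c2']}
```

With this grouping, the two clones waiting at the same fork cost one LLM call instead of two, which is the request-overhead saving the section describes.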

Example:

monte simulate "should I buy this house?" --wait
monte simulate progress <simulation-id> --json

Runtime Telemetry

Completed simulations now include runtime telemetry in simulate results -f json, and the human-readable simulate results output shows a short runtime section. This includes:

  • wall-clock duration
  • execution, persistence, and aggregation timing
  • decision concurrency and active frontier usage
  • total LLM decision evaluations
  • batched vs single LLM call counts
  • batch retry, split, and leaf-fallback counts
  • total rate-limiter wait time
  • embedding time
  • slowest decision nodes

Use this to understand whether a run is bottlenecked by chat latency, queueing, retries, or persistence instead of guessing.

Benchmark Snapshot

Monte's benchmark harness is deterministic and seeded, so these numbers are a regression surface rather than a marketing screenshot. The current suite tracks:

  • fixture pass rate
  • calibration mean absolute error
  • static policy regret
  • uncertainty reduction after evidence
  • deterministic stability drift

Current committed snapshot (phase3-v2):

  • Fixtures: 3
  • Pass rate: 100%
  • Calibration MAE: 0.000
  • Policy regret: 0.232
  • Uncertainty reduction: 0.080
  • Max drift: 0.000

You can regenerate the machine-readable snapshot with:

npm run benchmark -- --output examples/benchmarks/latest-benchmark.json

The latest benchmark snapshot is committed under examples/benchmarks/.

Common CLI Workflows

Persona workflow

monte ingest ./path/to/data
monte persona build
monte persona status
monte persona psychology

Simulation workflow

monte simulate "should I quit my job and start a business?" --wait
monte simulate evidence <simulation-id> --recommendation 1 --result positive --signal "Customer interviews converted at 3x the prior rate"
monte simulate rerun <simulation-id> --wait

Development workflow inside this repo

npm run cli:dev -- ingest ./path/to/data
npm run cli:dev -- persona build
npm run cli:dev -- decide "should I do this?" --mode standard --wait --json

Built-in Scenario Types

Monte currently ships 8 scenario types including custom:

  • day_trading
  • startup_founding
  • career_change
  • advanced_degree
  • geographic_relocation
  • real_estate_purchase
  • health_fitness_goal
  • custom

Benchmark Harness

The benchmark harness is a first-class regression surface for the simulation stack. It verifies:

  • calibration error
  • static policy regret
  • uncertainty reduction after evidence
  • deterministic stability drift

Commands:

npm run benchmark:pretty
npm run benchmark -- --output benchmark-suite.json
npm run test:benchmarks

Current fixture corpus:

  • startup_founding_seeded_corpus
  • real_estate_purchase_carry_costs
  • day_trading_edge_discipline

Publish To npm

The npm package name is currently monte-engine, while the installed executable is still monte. GitHub Packages publishing is handled automatically by the release workflow as a mirror at @elironb/monte-engine, so the manual steps below are for npmjs.org only.

Release checklist:

npm login
npm whoami
npm run release:check
npm publish

If you eventually acquire the monte package name on npm, you can rename the package later without changing the CLI binary name.

Project Map

  • src/index.ts -> Fastify bootstrap and route registration
  • src/api/ -> HTTP routes and plugins
  • src/cli/ -> CLI bootstrap, config, and commands
  • src/ingestion/ -> ingestion, extractors, contradictions, queues
  • src/persona/ -> dimension mapping, graph build, compression, psychology, clone generation
  • src/simulation/ -> scenario compilation, engine, aggregation, evidence loop, progress helpers
  • src/benchmarks/ -> seeded benchmark harness
  • tests/ -> Vitest suites
  • docs/architecture.md -> system architecture
  • CONTEXT.md -> durable repo state
  • SKILL.md -> repo-aware coding guidance
  • AGENTS.md -> agent operating rules for this repository

Development Notes

  • Signal extraction is rule-based; do not route extraction through an LLM.
  • Use the openai SDK for provider integrations.
  • If simulation semantics change, rerun the benchmark harness.
  • If architecture or commands change, keep README.md, CONTEXT.md, AGENTS.md, docs/architecture.md, and SKILL.md aligned.
  • The connect / Composio integration exists but is still experimental.