
automatey

v0.1.5

Lean & mean MCP-powered CLI agent — Nemotron, OpenAI, Anthropic, Perplexity

Automatey is a minimal, MCP-powered CLI agent that lets any LLM wield real tools — file ops, shell commands, web search, memory, planning — through a clean ReAct loop. No bloat. No framework lock-in. Just a sharp hook and a fast ship.

══════════════════════════════════════════════════════════════════════
  🤖  automatey — lean & mean agent
══════════════════════════════════════════════════════════════════════
  Provider:  nemotron
  Model:     nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-NVFP4
  Session:   session-2026-03-12
  Sandbox:   /workspace/my-project/sandbox

  Type /help for commands  |  Ctrl+C to exit
  Logs: tail -f ~/.automatey/logs/agent.log
══════════════════════════════════════════════════════════════════════

Features

  • Providers: Nemotron (vLLM/OpenAI-compatible), OpenAI, Anthropic, Perplexity
  • MCP tools: Any stdio or HTTP MCP server; auto-loaded from ./mcp.json or ~/.automatey/mcp.json
  • ReAct loop: Up to 20 tool-call rounds per message (configurable)
  • Chain-of-thought: /think toggle for Nemotron / Anthropic reasoning tokens
  • Sandbox: Isolated directory for agent file I/O and code execution (./sandbox default, gitignored)
  • Sessions: Save/load conversation sessions in ~/.automatey/sessions/
  • Checkpoints: /checkpoint — full conversation snapshots with BM25 keyword search
  • Auto-compact: LLM summarises older context when usage ≥ 80%
  • Skills: Progressive SKILL.md loading from .agents/skills/
  • Planner MCP: Bundled mcp/planner — todos + plans
  • Coder MCP: Bundled mcp/coder — read/write/edit files, run commands, glob files
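
A ReAct tool-call loop of the kind listed above can be sketched in TypeScript (all names here are illustrative, not the package's internals):

```typescript
// Minimal sketch of a ReAct loop: call the LLM, execute any tool calls it
// returns, feed results back, and stop at a plain-text answer or the round cap.
type ToolCall = { name: string; args: Record<string, unknown> };
type LlmReply = { text?: string; toolCalls?: ToolCall[] };

async function reactLoop(
  callLlm: (history: string[]) => Promise<LlmReply>,
  runTool: (call: ToolCall) => Promise<string>,
  userMessage: string,
  maxRounds = 20, // mirrors AGENT_MAX_TOOL_ROUNDS
): Promise<string> {
  const history = [userMessage];
  for (let round = 0; round < maxRounds; round++) {
    const reply = await callLlm(history);
    if (!reply.toolCalls?.length) return reply.text ?? "";
    for (const call of reply.toolCalls) {
      // Append each tool observation so the next LLM call can react to it
      history.push(`tool:${call.name} -> ${await runTool(call)}`);
    }
  }
  return "Stopped: tool-round limit reached.";
}
```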

Quick Start

git clone https://github.com/automatey-org/automatey.git
cd automatey
npm install
cp mcp.example.json mcp.json   # edit to add your API keys
npm run build
node dist/index.js chat

Override provider, model, and sandbox:

node dist/index.js chat --provider openai --model gpt-4o --sandbox ./my-sandbox

Non-interactive one-shot (quiet by default, --verbose to see diagnostics):

node dist/index.js chat --message "List all TODO comments in this repo"
node dist/index.js chat --message "Summarise this file" --verbose

CLI Installation

Option A — npm link (recommended)

npm run build
npm run link:cli   # registers "automatey" globally via symlink

After linking:

automatey chat
automatey --help

Unlink: npm run unlink:cli

Option B — Shell alias

alias automatey="node /path/to/automatey/dist/index.js"   # point at your clone

JSON / JSONC in mcp.json: Standard JSON does not allow // comments. The agent uses a JSONC parser — // line comments and /* */ block comments are fully supported in all mcp.json files.
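
A minimal comment-stripping pass (a sketch only; the agent's actual JSONC parser also handles details such as trailing commas) could look like:

```typescript
// Strip // and /* */ comments from JSONC while respecting string
// literals, so comment markers inside strings (e.g. URLs) survive.
function stripJsonComments(src: string): string {
  let out = "";
  let i = 0;
  let inString = false;
  while (i < src.length) {
    const ch = src[i];
    if (inString) {
      out += ch;
      if (ch === "\\") { out += src[i + 1] ?? ""; i += 2; continue; }
      if (ch === '"') inString = false;
      i++;
    } else if (ch === '"') {
      inString = true; out += ch; i++;
    } else if (ch === "/" && src[i + 1] === "/") {
      while (i < src.length && src[i] !== "\n") i++; // drop to end of line
    } else if (ch === "/" && src[i + 1] === "*") {
      i += 2;
      while (i < src.length && !(src[i] === "*" && src[i + 1] === "/")) i++;
      i += 2; // skip closing */
    } else {
      out += ch; i++;
    }
  }
  return out;
}
```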

MCP Config — mcp.json

The agent looks for MCP config in this order:

  1. ./mcp.json (current working directory / project root)
  2. ~/.automatey/mcp.json (global fallback)
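
That lookup order can be sketched as (hypothetical function name; uses Node's standard fs/os/path modules):

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// Sketch of the two-step lookup: project-local ./mcp.json first,
// then ~/.automatey/mcp.json as the global fallback.
function resolveMcpConfigPath(cwd = process.cwd()): string | null {
  const candidates = [
    path.join(cwd, "mcp.json"),
    path.join(os.homedir(), ".automatey", "mcp.json"),
  ];
  return candidates.find((p) => fs.existsSync(p)) ?? null;
}
```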

Copy mcp.example.json from this repo as your starting point:

cp mcp.example.json mcp.json          # project-local
# OR
cp mcp.example.json ~/.automatey/mcp.json  # global

Example mcp.json:

{
  // JSONC is supported — '//' and '/* */' comments are stripped before parsing
  "mcpServers": {
    "planner": {
      "type": "stdio",
      "command": "node",
      "args": ["./mcp/planner/dist/server.js"]
    },
    "brave-search": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@brave/brave-search-mcp-server", "--transport", "stdio"],
      "env": { "BRAVE_API_KEY": "${env:BRAVE_API_KEY}" },
      "requiresEnv": "BRAVE_API_KEY"
    },
    "memento": {
      "type": "http",
      "url": "http://localhost:3500/mcp",
      "portCheck": true
    }
  }
}

Conditional loading:

  • requiresEnv — skip server if env var is missing (no key = no server, no error)
  • portCheck: true — skip HTTP/SSE server if the URL is unreachable at startup (1 s probe)
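
A sketch of how these two rules might be applied at load time (field names match the example above; the function name is illustrative):

```typescript
// Decide whether an MCP server definition should be loaded:
// skip it if its requiresEnv variable is unset, and probe HTTP
// servers with a short timeout when portCheck is true.
type ServerDef = {
  type: "stdio" | "http";
  url?: string;
  requiresEnv?: string;
  portCheck?: boolean;
};

async function shouldLoad(def: ServerDef): Promise<boolean> {
  if (def.requiresEnv && !process.env[def.requiresEnv]) return false;
  if (def.type === "http" && def.portCheck && def.url) {
    try {
      // 1 s probe, mirroring the startup reachability check
      await fetch(def.url, { method: "HEAD", signal: AbortSignal.timeout(1000) });
    } catch {
      return false;
    }
  }
  return true;
}
```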

Config

Config lives in ~/.automatey/config.json (auto-created on first run):

{
  "provider": "nemotron",
  "llm": {
    "baseUrl": "http://localhost:8002",
    "model": "nvidia/Llama-3.1-Nemotron-Nano-8B-v1"
  }
}

Environment variables (copy .env.defaults to .env to override):

| Variable | Default | Description |
|---|---|---|
| LLM_PROVIDER | nemotron | nemotron \| openai \| anthropic \| perplexity |
| LLM_MODEL | Nemotron NVFP4 | Model ID |
| LLM_BASE_URL | http://localhost:8002 | vLLM / OpenAI-compatible endpoint |
| LLM_MAX_TOKENS | 200000 | Context window budget (chars, ~4/token) |
| LLM_MAX_OUTPUT_TOKENS | 4096 | Per-call output limit |
| TEMPERATURE | 0.1 | Sampling temperature |
| AGENT_MAX_TOOL_ROUNDS | 20 | Max ReAct rounds |
| AGENT_MAX_EMPTY_RETRIES | 2 | Retries on empty LLM response |
| AGENT_COMPACT_THRESHOLD | 0.8 | Auto-compact at 80% context fill |

Commands

| Command | Description |
|---------|-------------|
| /help | Show all commands |
| /model | List / switch model |
| /think [on\|off\|budget N] | Toggle CoT reasoning |
| /save [name] | Save session |
| /load [name] | Load session |
| /servers | Manage MCP connections |
| /config | Show config |
| /cost | Show estimated token usage |
| /compact | Manually compact context via LLM summarization |
| /checkpoint [save\|list\|restore N\|search q\|delete N] | Manage checkpoints |
| /eval <file.jsonl> | Run a JSONL eval file against the current LLM |
| /clear | Clear context |
| /exit | Quit |

Context Management

Auto-compact

When estimated context usage reaches AGENT_COMPACT_THRESHOLD (default 80%), the older portion of the conversation is automatically summarized by the LLM and replaced with a concise summary message. This keeps the token count manageable without discarding knowledge.

Manual compaction: run /compact at any time to trigger the same LLM summarization on demand.
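
The trigger condition can be sketched as (illustrative names; the ~4 chars/token estimate matches the LLM_MAX_TOKENS note above):

```typescript
// Estimate token usage from character count (~4 chars per token)
// and decide whether the context has crossed the compaction threshold.
function estimateTokens(messages: string[]): number {
  const chars = messages.reduce((n, m) => n + m.length, 0);
  return Math.ceil(chars / 4);
}

function needsCompaction(
  messages: string[],
  contextBudgetTokens: number,
  threshold = 0.8, // mirrors AGENT_COMPACT_THRESHOLD
): boolean {
  return estimateTokens(messages) >= contextBudgetTokens * threshold;
}
```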

Checkpoints

Checkpoints are full conversation snapshots saved to ~/.automatey/checkpoints/ as JSON:

/checkpoint save              # save current conversation
/checkpoint list              # list checkpoints (newest first)
/checkpoint restore 2         # restore checkpoint #2 into context
/checkpoint search "bm25"     # BM25 keyword search across all checkpoints
/checkpoint delete 3          # delete checkpoint #3

The BM25 search indexes the full message history of every checkpoint and ranks them by keyword relevance.
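
For illustration, a self-contained BM25 ranker (k1 = 1.5, b = 0.75; a sketch, not the package's actual implementation) looks like:

```typescript
// Rank documents against a query with BM25: term frequency saturated
// by k1, document length normalized by b, weighted by inverse
// document frequency. Returns document indices, best match first.
function bm25Rank(docs: string[], query: string): number[] {
  const k1 = 1.5, b = 0.75;
  const toks = (s: string) => s.toLowerCase().split(/\W+/).filter(Boolean);
  const docToks = docs.map(toks);
  const avgLen = docToks.reduce((n, d) => n + d.length, 0) / docs.length;
  const scores = docs.map((_, i) => {
    let score = 0;
    for (const term of toks(query)) {
      const df = docToks.filter((d) => d.includes(term)).length;
      if (!df) continue;
      const idf = Math.log(1 + (docs.length - df + 0.5) / (df + 0.5));
      const tf = docToks[i].filter((t) => t === term).length;
      score += (idf * tf * (k1 + 1)) /
        (tf + k1 * (1 - b + (b * docToks[i].length) / avgLen));
    }
    return score;
  });
  return scores
    .map((s, i) => [s, i] as const)
    .sort((x, y) => y[0] - x[0])
    .map(([, i]) => i);
}
```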

Built-in MCP Servers

🗂 Coder (mcp/coder)

| Tool | What it does |
|------|-------------|
| read_file | Read file contents with optional line range |
| write_file | Write / create a file |
| edit_file | Replace an exact string in a file |
| execute_command | Run a shell command (default cwd: sandbox) |
| search_text | Grep-style text search |
| list_dir | List directory contents |
| glob_files | Find files by glob pattern (**/*.ts, src/**) |

📋 Planner (mcp/planner)

Todos and multi-step plans persisted to ~/.automatey/planner/.

Docker

Run as a container

# Interactive chat
docker run -it --rm \
  -e LLM_PROVIDER=openai \
  -e OPENAI_API_KEY=sk-... \
  -v ~/.automatey:/home/automatey/.automatey \
  maxgolovanov/automatey chat

# One-shot (non-interactive)
docker run --rm \
  -e LLM_PROVIDER=openai \
  -e OPENAI_API_KEY=sk-... \
  maxgolovanov/automatey --message "Summarise this repo" --verbose

Using Docker Model Runner as the LLM backend

Docker Model Runner (DMR) lets you run LLMs locally via an OpenAI-compatible API on localhost:12434. It ships with Docker Desktop and Docker Engine 28+ — no API key needed.

# Pull and start a model on the host first
docker model pull ai/llama3.2

The networking challenge: when automatey runs in a container, localhost:12434 resolves to the container itself, where nothing is listening. You need to point it at the host.

Option A — host.docker.internal (Docker Desktop on Mac/Windows, Docker Engine 28+ on Linux)

docker run -it --rm \
  -e LLM_PROVIDER=openai \
  -e LLM_BASE_URL=http://host.docker.internal:12434/engines/v1 \
  -e LLM_MODEL=ai/llama3.2 \
  -e LLM_API_KEY=ignored \
  --add-host=host.docker.internal:host-gateway \
  maxgolovanov/automatey chat

--add-host=host.docker.internal:host-gateway is required on Linux Docker Engine (it's automatic on Docker Desktop).

Option B — --network host (Linux only, simplest)

docker run -it --rm \
  --network host \
  -e LLM_PROVIDER=openai \
  -e LLM_BASE_URL=http://localhost:12434/engines/v1 \
  -e LLM_MODEL=ai/llama3.2 \
  -e LLM_API_KEY=ignored \
  maxgolovanov/automatey chat

Option C — Docker Compose (recommended for persistent setups)

# docker-compose.yml
services:
  automatey:
    image: maxgolovanov/automatey
    stdin_open: true
    tty: true
    extra_hosts:
      - "host.docker.internal:host-gateway"  # Linux only; remove on Docker Desktop
    environment:
      LLM_PROVIDER: openai
      LLM_BASE_URL: http://host.docker.internal:12434/engines/v1
      LLM_MODEL: ai/llama3.2
      LLM_API_KEY: ignored
    volumes:
      - ~/.automatey:/home/automatey/.automatey

docker compose run --rm automatey chat

Available DMR models

See docker model list or hub.docker.com/u/ai. Common choices:

| Model | Size | Use case |
|---|---|---|
| ai/llama3.2 | 2GB | Fast, general purpose |
| ai/smollm2 | 500MB | Lightweight, low RAM |
| ai/phi4-mini | 4GB | Strong reasoning |
| ai/qwen2.5-coder | 5GB | Code tasks |

DMR uses llama.cpp under the hood — CPU-only works, GPU (NVIDIA/Apple Silicon) is used automatically when available.
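
To sanity-check that DMR is reachable before wiring it up, you can list models through its OpenAI-compatible API (a sketch; the /engines/v1 base path matches the examples above):

```typescript
// Query DMR's OpenAI-compatible /models endpoint and return model IDs.
// DMR needs no real API key, so any bearer token works.
async function listDmrModels(
  base = "http://localhost:12434/engines/v1",
): Promise<string[]> {
  const res = await fetch(`${base}/models`, {
    headers: { Authorization: "Bearer ignored" },
  });
  if (!res.ok) throw new Error(`DMR unreachable: ${res.status}`);
  const body = (await res.json()) as { data: Array<{ id: string }> };
  return body.data.map((m) => m.id);
}
```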

Development

npm run dev          # tsx watch (no build needed)
npm run build        # tsc + build all MCP servers
npm test             # run the test suite (Vitest)
npm run test:watch   # watch mode

VS Code tasks (Ctrl+Shift+B / Ctrl+Shift+P → Run Task):

  • Build: All
  • Run: CLI (automatey) — builds first, then launches
  • Run: CLI (dev, no build) — tsx, faster iteration
  • Test: All
  • Test: Hello World (verbose)

Tests

Tests  172 passed | 2 skipped
  ├── unit/           command-parser, context-manager, config, session,
  │                   mcp-config, llm-client, skills, coder-server, eval-runner
  └── integration/    openai, anthropic, nemotron, planner,
                      coder-hello-world (all 3 providers), cli-message

The coder-hello-world integration tests drive a full ReAct loop per provider — LLM writes index.js, executes it, output is verified. Results live in sandbox/<provider>/:

[openai]    execute_command output: Hello, World!
[anthropic] execute_command output: Hello, World!
[nemotron]  execute_command output: Hello, World!

Artwork

Logos in extra/logo/ are from the Automatey terminal project,
licensed CC BY 4.0 — Copyright © 2024–2025 Top-5 And Contributors.
Used here with attribution as permitted by the license.

License

MIT — see LICENSE.