newtonclaw

v1.0.10

<div align="center">
  <img src="assets/nClaw.png" width="200" alt="NewtonClaw" />
</div>

npx newtonclaw

NewtonClaw is a self-hosted developer tool that transforms your terminal into a competition command center. It pairs an agentic AI reasoning loop with targeted web scraping (GitHub, LeetCode, Codeforces), MCP-driven academic tooling, and zero-config Google Calendar synchronization -- all executed locally on your machine. Your data never leaves your device.




Why NewtonClaw

Most developer tools either require a cloud backend, expose your credentials to third-party servers, or provide passive dashboards that only display data. NewtonClaw takes a fundamentally different approach:

| Principle | What It Means |
|:----------|:--------------|
| Local-First | All processing -- LLM reasoning, MCP tool execution, credential storage -- happens on your machine. Nothing is proxied through an intermediary server. |
| Agentic, Not Passive | NewtonClaw does not wait for you to check a dashboard. Background cron jobs proactively alert you before deadlines, and an overtake monitor watches your rivals around the clock. |
| Write Once, Run Anywhere | A single core engine powers both the Terminal CLI and the Telegram Bot. Every new capability added to the core is automatically available across both interfaces without duplication. |
| Pluggable Intelligence | Swap between 8 LLM providers (Anthropic, OpenAI, NVIDIA NIM, Gemini, Groq, OpenRouter, Ollama, Custom) with a single command. No code changes. No redeployment. |


Core Capabilities

| Capability | Description | Under the Hood |
|:-----------|:------------|:---------------|
| The Arena | 1v1 XP Battle Reports with "Tale of the Tape" ASCII visualizations. Identifies mutual course branches and calculates exactly how many assignments you need to overtake a rival. | Aggregates Newton XP via MCP, cross-references with public scraper API, applies a gap / 68 XP-per-task formula. |
| Intel & Dossiers | Full developer background analysis: Newton course data, GitHub repo/star counts, LeetCode solve counts and contest ratings, Codeforces ratings. Computes a weighted Power Score. | Concurrent Promise.allSettled scraping across GitHub REST API, LeetCode GraphQL, and Codeforces API. |
| AI Roast / Review | LLM-generated profile analysis. Two modes: a savage comedic roast, or a constructive recruiter-style review. | Feeds minimized profile JSON (bio, skills, projects, experience) into a system-prompted LLM call. |
| Google Calendar Sync | Zero-config OAuth flow. Spawns a temporary local HTTP server on localhost:8080, auto-opens the browser, catches the redirect, and injects Newton lecture schedules as notification-ready Google Calendar events. | googleapis OAuth2 client with auto-refresh token hook, upsert event pattern (patch-or-insert), base32hex event ID generation. |
| MCP Automation | Direct interface to the @newtonschool/newton-mcp binary for schedule fetching, assignment queries, course overviews, and leaderboard data. | Spawns the MCP binary as a child process via StdioClientTransport, with credential injection through a sandboxed HOME directory. |
| Proactive Alerts | Background cron jobs that scan for upcoming deadlines and send formatted Telegram notifications. Includes deduplication logic and cooldown timers to prevent alert spam. | node-cron with 30-minute intervals, 2-hour cooldown cache, AI-formatted alert messages. |
| Rival Watchlist | Persistent local hitlist of developers to track. The overtake monitor runs twice daily and sends instant alerts if a rival surpasses your XP. | JSON-backed watchlist in ~/.newtonclaw/config.json, scheduled 9AM/9PM cron comparisons. |
| Telegram Bot | Mobile companion that mirrors the full CLI feature set through inline keyboards, command handlers, and the same agentic reasoning loop. | Telegraf framework with middleware pipeline, command adapters, and sanitizeForTelegram HTML formatting. |


System Architecture

Hexagonal Design (Ports & Adapters)

NewtonClaw is built on a Hexagonal Architecture (also known as Ports & Adapters). The core engine is completely decoupled from its delivery mechanisms. The CLI and Telegram Bot are interchangeable adapters that plug into the same set of ports -- the agent, the MCP runner, the competition engine, and the background jobs.

This means a new interface (a Discord bot, a web dashboard, a VS Code extension) can be added by writing a thin adapter layer without touching any business logic.
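As a rough sketch of what that port/adapter split might look like (the interface and class names below are illustrative, not NewtonClaw's actual types):

```typescript
// Hypothetical port: any delivery mechanism only needs this shape from the core.
interface AgentPort {
  processUserMessage(userId: string, message: string): Promise<string>;
}

// Hypothetical adapter: a minimal delivery mechanism plugged into the port.
// A Discord bot or web dashboard adapter would follow the same pattern.
class EchoAdapter {
  constructor(private agent: AgentPort) {}
  async handle(userId: string, text: string): Promise<string> {
    return this.agent.processUserMessage(userId, text);
  }
}

// A stub core engine standing in for the real agentic reasoning loop.
const stubAgent: AgentPort = {
  async processUserMessage(_userId, message) {
    return `agent saw: ${message}`;
  },
};
```

The adapter holds no business logic of its own; swapping the stub for the real engine changes nothing in the adapter.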

graph TB
    subgraph "Adapter Layer"
        CLI["Terminal CLI<br/><code>inquirer + chalk</code>"]
        TG["Telegram Daemon<br/><code>telegraf</code>"]
        FUTURE["Future Adapters<br/><code>Discord / Web / VS Code</code>"]
    end

    subgraph "Core Domain"
        AGENT["Agentic Reasoning Loop<br/><code>processUserMessage()</code>"]
        COMPETE["Competition Engine<br/><code>generateCompeteReport()</code>"]
        POWER["Power Level Calculator<br/><code>calculatePowerLevel()</code>"]
        REVIEW["AI Reviewer<br/><code>generateProfileReview()</code>"]
        SYNC["Calendar Sync Engine<br/><code>performSync()</code>"]
    end

    subgraph "Service Layer"
        LLM["LLM Abstraction<br/><code>askBrain()</code>"]
        MCP["MCP Runner<br/><code>executeMcpTool()</code>"]
        SCRAPERS["External Scrapers<br/><code>GitHub / LC / CF</code>"]
        GCAL["Google Calendar<br/><code>googleapis</code>"]
    end

    subgraph "Infrastructure"
        POOL["MCP Connection Pool<br/><code>Singleton + Idle Timeout</code>"]
        STORE["Local JSON Store<br/><code>~/.newtonclaw/config.json</code>"]
        CRON["Background Jobs<br/><code>node-cron</code>"]
    end

    CLI --> AGENT
    TG --> AGENT
    FUTURE -.-> AGENT

    AGENT --> LLM
    AGENT --> MCP

    COMPETE --> MCP
    COMPETE --> SCRAPERS
    POWER --> SCRAPERS
    REVIEW --> LLM
    SYNC --> MCP
    SYNC --> GCAL

    MCP --> POOL
    POOL --> STORE
    CRON --> STORE

    LLM -.-> |"Anthropic / OpenAI / NVIDIA<br/>Gemini / Groq / Ollama"| EXT_LLM["LLM Provider APIs"]
    SCRAPERS -.-> EXT_API["GitHub API / LeetCode GraphQL<br/>Codeforces API / Newton API"]
    GCAL -.-> GOOGLE["Google Calendar API"]
    POOL -.-> BINARY["@newtonschool/newton-mcp<br/>(Spawned Child Process)"]

Data Flow: From Prompt to Execution

When a user sends a message (via CLI or Telegram), here is the exact path it traverses through the system:

sequenceDiagram
    participant U as User
    participant A as Adapter (CLI/Telegram)
    participant AG as Agent Loop
    participant B as askBrain()
    participant LLM as LLM Provider
    participant MCP as executeMcpTool()
    participant BIN as newton-mcp Binary

    U->>A: "What assignments are due this week?"
    A->>AG: processUserMessage(user, message)
    AG->>AG: Build system prompt + inject tool definitions
    AG->>B: askBrain(messages, tools, config)
    B->>LLM: POST /chat/completions (with function schemas)
    LLM-->>B: Response with tool_calls: [get_assignments]
    B-->>AG: Parsed response

    AG->>MCP: executeMcpTool(user, "get_assignments", {})
    MCP->>MCP: Pool.getClientForUser() -- reuse or spawn
    MCP->>BIN: StdioTransport.callTool("get_assignments")
    BIN-->>MCP: Raw MCP content blocks
    MCP->>MCP: Unwrap envelope, parse JSON payload
    MCP-->>AG: Minimized result (via minimizeToolResult)

    AG->>B: Inject tool result into conversation
    B->>LLM: POST /chat/completions (with tool output)
    LLM-->>B: Final natural language response
    B-->>AG: "You have 3 assignments due..."
    AG-->>A: Formatted response
    A-->>U: Rendered output (ANSI colors / Telegram HTML)

Key implementation details in this flow:

  • Tool Loop Cap: The agent iterates up to 5 reasoning turns before forcefully terminating with an overflow warning. This prevents runaway LLM tool-call chains.
  • Result Minimization: Raw MCP tool outputs are aggressively compressed by minimizeToolResult() -- a dedicated module (10KB+ of code) that strips unnecessary fields, truncates arrays, and normalizes data structures to keep context windows lean.
  • Hint System: Before calling a tool, the LLM can invoke get_mcp_tool_hints to retrieve parameter guidance, filtering rules, and usage patterns, improving first-call accuracy.
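The capped tool loop can be sketched as follows (a simplified illustration; callBrain and runTool are placeholder names, not the project's real API):

```typescript
type BrainReply = { toolCall?: string; text?: string };

// Maximum reasoning turns before the loop is forcefully terminated.
const MAX_TURNS = 5;

// Hypothetical loop: keep asking the model until it stops requesting tools,
// but never run more than MAX_TURNS turns.
function runAgentLoop(
  callBrain: (history: string[]) => BrainReply,
  runTool: (name: string) => string,
): { answer: string; turns: number } {
  const history: string[] = [];
  for (let turn = 1; turn <= MAX_TURNS; turn++) {
    const reply = callBrain(history);
    if (!reply.toolCall) {
      // Model produced a final answer; stop early.
      return { answer: reply.text ?? "", turns: turn };
    }
    // Inject the tool result back into the conversation for the next turn.
    history.push(runTool(reply.toolCall));
  }
  // Forceful termination with an overflow warning, as described above.
  return { answer: "[tool-loop limit reached]", turns: MAX_TURNS };
}
```

A model that requests a tool on every turn hits the cap after five turns instead of looping forever.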

The MCP Bridge: Local Process Isolation

The connection to Newton School's data layer is achieved through the Model Context Protocol (MCP). Rather than making direct API calls, NewtonClaw spawns the official @newtonschool/newton-mcp binary as a child process and communicates over stdio.

graph LR
    subgraph "Your Machine"
        NC["NewtonClaw Process"]
        subgraph "Sandboxed Environment"
            FAKE["Fake HOME Directory<br/><code>/tmp/nclaw-{userId}-xxx/</code>"]
            CREDS["Injected Credentials<br/><code>.newton-mcp/credentials.json</code>"]
            BINARY["newton-mcp Binary<br/>(Child Process via npx)"]
        end
        POOL_MGR["McpPool (Singleton)<br/>5-min idle timeout"]
    end

    NC -->|"getClientForUser()"| POOL_MGR
    POOL_MGR -->|"createFakeHome() + writeCredentials()"| FAKE
    FAKE --> CREDS
    POOL_MGR -->|"StdioClientTransport"| BINARY
    BINARY -->|"stdio (JSON-RPC)"| POOL_MGR
    CREDS -.->|"Read by binary on launch"| BINARY

Why this matters:

  1. Process Isolation: Each user session gets its own sandboxed HOME directory. The newton-mcp binary reads credentials from this fake home, ensuring no cross-contamination between sessions.
  2. Connection Pooling: The McpPool singleton maintains active connections with a 5-minute idle timeout. Repeated tool calls reuse the existing child process instead of paying the boot-time penalty each time.
  3. Zombie Cleanup: On startup, the ZombieHunter job scans for orphaned nclaw-* temp directories and lingering MCP processes from previous crashes, terminating them with SIGKILL and cleaning up the filesystem.

The LLM Abstraction Layer

NewtonClaw supports 8 LLM providers through a unified askBrain() interface. The abstraction normalizes all provider-specific quirks (different API formats, tool-calling schemas, streaming behavior) into a single function signature.

graph TD
    BRAIN["askBrain(messages, tools, config)"]

    BRAIN --> |"provider: anthropic"| ANT["Anthropic SDK<br/>Claude 3.5 Sonnet"]
    BRAIN --> |"provider: openai"| OAI["OpenAI SDK<br/>GPT-4o / GPT-4o-mini"]
    BRAIN --> |"provider: nvidia"| NV["NVIDIA NIM<br/>Nemotron Ultra 253B"]
    BRAIN --> |"provider: gemini"| GEM["Google Gemini<br/>Flash Lite"]
    BRAIN --> |"provider: groq"| GRQ["Groq<br/>LLaMA 3.3 70B"]
    BRAIN --> |"provider: openrouter"| OR["OpenRouter<br/>Free Model Tier"]
    BRAIN --> |"provider: ollama"| OLL["Ollama (Local)<br/>100% Offline"]
    BRAIN --> |"provider: custom"| CUST["Custom Endpoint<br/>Any OpenAI-compatible API"]

Each provider includes a strength priority system that ranks models by capability. When auto-selecting a model, the engine defaults to the highest-priority option available for the chosen provider.

| Provider | Default Model | Strength | Key Advantage |
|:---------|:--------------|:---------|:--------------|
| Anthropic | claude-3-5-sonnet | Highest reliability | Best reasoning and tool-call accuracy |
| NVIDIA NIM | nemotron-ultra-253b | Highest raw power | Free credit tier available |
| OpenAI | gpt-4o | High | Industry standard, stable |
| Groq | llama-3.3-70b | High | Fastest inference speed |
| OpenRouter | hermes-3-llama-405b:free | High | Free access to premium models |
| Gemini | gemini-2.0-flash-lite | Moderate | Budget-friendly |
| Ollama | qwen2.5-coder:7b | Variable | Fully offline, maximum privacy |
| Custom | User-defined | Variable | Any OpenAI-compatible endpoint |

Privacy Note: When using Ollama, all LLM processing happens entirely on-device. For cloud providers, API calls are sent directly to the provider -- NewtonClaw never proxies your prompts through any intermediary.
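The strength-priority idea can be sketched as a ranked registry (the entries and priority numbers below are illustrative, not the project's actual constants):

```typescript
// Illustrative registry: a higher priority number means the model is
// preferred when auto-selecting for a provider.
const MODEL_REGISTRY: Record<string, { model: string; priority: number }[]> = {
  anthropic: [
    { model: "claude-3-5-sonnet", priority: 3 },
    { model: "claude-3-haiku", priority: 1 },
  ],
  ollama: [{ model: "qwen2.5-coder:7b", priority: 2 }],
};

// Pick the highest-priority model available for the chosen provider.
function autoSelectModel(provider: string): string | undefined {
  const models = MODEL_REGISTRY[provider] ?? [];
  return models.slice().sort((a, b) => b.priority - a.priority)[0]?.model;
}
```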


Installation

Zero-Install (Recommended)

Run directly without global installation:

npx newtonclaw

Global Installation

npm install -g newtonclaw

On first launch, the Neural Uplink Wizard will guide you through:

  1. Selecting an LLM provider and entering your API key
  2. Authenticating with Newton School via browser-based login
  3. Optionally configuring a Telegram bot token

All configuration is stored locally at ~/.newtonclaw/config.json.


CLI Interface & Command Reference

The terminal interface uses a categorized "bucket" system with contextual separators, single-letter aliases, and built-in fuzzy search. Typing keywords like roast, hitlist, or power will automatically resolve to the correct command.

  You: Type to chat, or select a command!

  --- ARENA & INTEL --------------------------
> /intel   <user> - Analyze, roast, or review a dev
  /arena   <user> - Compare XP or add to hitlist

  --- TOOLS & AUTOMATION ---------------------
  /sync           - Sync Newton to Google Calendar
  /run            - Execute a direct MCP command
  /cron           - Manage proactive background jobs

  --- SYSTEM & CONFIG ------------------------
  /bot            - Setup/configure Telegram Bot
  /engine         - Change LLM provider/model
  /clear          - Clear chat memory

  --- SESSION --------------------------------
  /logout         - Wipe credentials and exit
  /exit           - Close the session

Command Deep Dives

/intel <username> (alias: /i) -- Developer Dossier

Generates a comprehensive intelligence report by pulling data from multiple sources concurrently:

  • Newton School course XP and rank (via MCP)
  • GitHub public repos and star count (via REST API)
  • LeetCode problems solved and contest rating (via GraphQL)
  • Codeforces rating (via REST API)

Computes a weighted Power Score using the formula:

Power = (Newton XP x 0.1) + (LC Solved x 50) + (GH Repos x 20) + (GH Stars x 100) + LC Rating

Sub-modes: roast (comedic AI takedown), review (constructive recruiter analysis), power (raw metric comparison).
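The formula can be transcribed directly (field names are illustrative):

```typescript
// Direct transcription of the Power Score formula above:
// Power = (Newton XP x 0.1) + (LC Solved x 50) + (GH Repos x 20)
//       + (GH Stars x 100) + LC Rating
function powerScore(s: {
  newtonXp: number;
  lcSolved: number;
  ghRepos: number;
  ghStars: number;
  lcRating: number;
}): number {
  return (
    s.newtonXp * 0.1 + s.lcSolved * 50 + s.ghRepos * 20 + s.ghStars * 100 + s.lcRating
  );
}
```

For example, 1000 Newton XP, 10 LeetCode solves, 5 repos, 2 stars, and a 1500 contest rating yields 100 + 500 + 100 + 200 + 1500 = 2400.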


/arena <username> (alias: /a) -- Battle Report

Initiates a 1v1 comparison against any Newton School user. Identifies mutual course branches, renders an ASCII "Tale of the Tape" showing both players' stats side-by-side, and calculates the exact grind delta -- how many assignments (at ~68 XP each) you need to complete to overtake your rival.

Supports adding rivals to a persistent local watchlist for ongoing tracking.
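The grind-delta calculation reduces to a one-liner (a sketch of the math described above, not the engine's exact code):

```typescript
// Assignments needed to overtake a rival, at ~68 XP per assignment.
// Returns 0 when you are already ahead or tied.
function grindDelta(myXp: number, rivalXp: number, xpPerTask = 68): number {
  const gap = rivalXp - myXp;
  return gap <= 0 ? 0 : Math.ceil(gap / xpPerTask);
}
```

A 200 XP deficit, for instance, rounds up to 3 assignments (200 / 68 ≈ 2.94).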


/sync (alias: /s) -- Google Calendar Injection

Triggers the full OAuth flow (if not already authenticated) and syncs all upcoming Newton lecture slots to your Google Calendar. Uses an idempotent upsert pattern -- running /sync multiple times will update existing events rather than creating duplicates.


/engine -- LLM Provider Switcher

Interactive selector for changing the active LLM provider and model. After switching, all subsequent prompts, tool calls, and background AI-formatted alerts will route through the new provider immediately. No restart required.


Telegram Daemon

The Telegram interface is a full-featured mobile companion that exposes the same core engine through Telegraf command handlers and inline keyboard layouts.

Available Commands

| Command | Purpose |
|:--------|:--------|
| /start | Connection health check and status display |
| /menu | Interactive dashboard with inline keyboard navigation |
| /schedule | Monospaced, formatted timetable |
| /sync | Trigger Google Calendar sync from mobile |
| /watch <user> | Add a rival to your persistent watchlist |
| /watchlist | View all tracked rivals with XP data |
| /compete <user> | Generate a Battle Report |
| /power <user> | Calculate a developer's Power Score |
| /roast <user> | AI-generated comedic profile takedown |
| /review <user> | Constructive AI profile review |

Architecture: How the Daemon Works

graph TB
    subgraph "Telegram Bot Process"
        TELEGRAF["Telegraf Instance"]
        MW["Middleware Pipeline<br/><code>userMiddleware</code>"]
        CMD["Command Handlers<br/><code>/start, /menu, /watch, etc.</code>"]
        CATCH["Catch-All Handler<br/>(Free-form messages)"]
        CRON_PA["Cron: Pending Assignments<br/>Every 30 minutes"]
        CRON_OT["Cron: Overtake Monitor<br/>9 AM & 9 PM daily"]
        BOOT["Boot-up Briefing<br/>(Non-blocking)"]
    end

    subgraph "Core Engine"
        AGENT["processUserMessage()<br/>(Shared with CLI)"]
    end

    TELEGRAF --> MW
    MW --> CMD
    MW --> CATCH
    CATCH --> AGENT

    CRON_PA -.-> |"Proactive alerts"| TELEGRAF
    CRON_OT -.-> |"Rival overtake alerts"| TELEGRAF
    BOOT -.-> |"Immediate deadline scan"| TELEGRAF

The daemon includes:

  • Retry Logic: Auto-recovers from Telegram 409 conflicts (concurrent bot instances) with exponential backoff up to 3 attempts.
  • Graceful Shutdown: Handles SIGINT and SIGTERM to cleanly close the Telegraf instance and drain the MCP connection pool.
  • Global Error Handler: Catches unhandled Telegram errors to prevent process crashes, sending a user-facing fallback message.
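The backoff timing for the retry logic could look like this (delays and helper name are illustrative, not the daemon's actual values):

```typescript
// Illustrative exponential backoff schedule: baseMs, 2*baseMs, 4*baseMs, ...
// one delay per retry attempt, up to maxAttempts.
function backoffSchedule(baseMs: number, maxAttempts: number): number[] {
  return Array.from({ length: maxAttempts }, (_, i) => baseMs * 2 ** i);
}
```

With a 1-second base and 3 attempts, a 409 conflict would be retried after 1s, 2s, then 4s before giving up.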

Running 24/7: Persistent Deployment with PM2

By default, the Telegram daemon runs in the foreground and dies when you close your terminal. To keep it running as a persistent background service, use PM2 -- a production-grade Node.js process manager.

Setup

# 1. Install PM2 globally
npm install -g pm2

# 2. Launch the daemon as a managed background process
pm2 start newtonclaw --name "newton-bot" -- telegram

# 3. Verify it is running
pm2 status

Lifecycle Management

# View real-time logs (stdout + stderr)
pm2 logs newton-bot

# Restart the daemon (e.g., after updating config)
pm2 restart newton-bot

# Stop the daemon
pm2 stop newton-bot

# Remove from PM2 process list entirely
pm2 delete newton-bot

Auto-Start on System Boot

To ensure NewtonClaw survives machine reboots:

# Generate the startup script for your OS
pm2 startup

# Save the current process list so PM2 restores it on boot
pm2 save

How It Works Under the Hood

graph LR
    subgraph "Your Machine (Always Running)"
        PM2["PM2 Process Manager"]
        DAEMON["NewtonClaw Daemon<br/><code>newtonclaw telegram</code>"]
        CRON1["Cron: Assignment Scanner<br/>Every 30 min"]
        CRON2["Cron: Overtake Monitor<br/>9 AM & 9 PM"]
        HUNTER["ZombieHunter<br/>(Startup cleanup)"]
    end

    subgraph "External Services"
        TG_API["Telegram Bot API<br/>(Long Polling)"]
        NEWTON["Newton School API<br/>(via MCP Binary)"]
        LLM_API["LLM Provider API"]
    end

    PM2 -->|"Manages lifecycle"| DAEMON
    DAEMON -->|"Long polling"| TG_API
    DAEMON --> CRON1
    DAEMON --> CRON2
    DAEMON -->|"On startup"| HUNTER

    CRON1 -.->|"Fetch assignments"| NEWTON
    CRON1 -.->|"Format alerts"| LLM_API
    CRON1 -.->|"Send notification"| TG_API

    CRON2 -.->|"Fetch rival XP"| NEWTON
    CRON2 -.->|"Overtake alert"| TG_API

What PM2 provides:

  • Auto-restart on crash: If the Node.js process exits unexpectedly, PM2 will relaunch it immediately.
  • Log rotation: Prevents log files from consuming disk space indefinitely.
  • System boot persistence: With pm2 startup + pm2 save, the daemon will automatically restart after a system reboot.
  • Zero-downtime restarts: pm2 restart ensures the new process is up before the old one is killed.

Background Job System

NewtonClaw runs several automated background jobs when the Telegram daemon is active:

Pending Assignments Scanner

  • Schedule: Every 30 minutes
  • Behavior: Queries the MCP binary for all current assignments, filters for those due within the user's configured threshold (default: 2 hours), formats an alert using the active LLM, and sends it via Telegram.
  • Deduplication: Maintains a cache of previously alerted assignment IDs. If the same set of assignments is detected within a 2-hour cooldown window, the alert is suppressed.
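The deduplication logic can be sketched as a keyed cooldown cache (a simplified illustration; the scanner's real cache may differ):

```typescript
// Hypothetical cooldown cache: suppress an alert when the same set of
// assignment IDs was already alerted within the cooldown window.
class AlertCache {
  private lastAlert = new Map<string, number>();
  constructor(private cooldownMs: number) {}

  shouldAlert(assignmentIds: string[], now: number): boolean {
    // Sort so ["a","b"] and ["b","a"] dedupe to the same key.
    const key = assignmentIds.slice().sort().join(",");
    const last = this.lastAlert.get(key);
    if (last !== undefined && now - last < this.cooldownMs) return false;
    this.lastAlert.set(key, now);
    return true;
  }
}
```

The first sighting of a set fires an alert; repeats within the 2-hour window are suppressed, and the alert re-arms once the window expires.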

Overtake Monitor

  • Schedule: 9:00 AM and 9:00 PM daily
  • Behavior: Iterates through the local watchlist, fetches each rival's current XP via the public Newton API, and compares it against the user's XP. If a rival has crossed the user's XP threshold since the last check, an immediate Telegram alert is dispatched.
  • XP Tracking: Quietly updates each rival's lastKnownXp in the local store after every check, regardless of whether an alert was triggered.

Boot-up Briefing

  • Schedule: Once, on daemon startup
  • Behavior: Non-blocking initial deadline scan. If urgent assignments are found, a formatted briefing is sent to Telegram immediately after boot.

Zombie Hunter

  • Schedule: Once, on daemon startup (runs before anything else)
  • Behavior: Cleans up artifacts from previous crashes:
    • Scans /tmp/ for orphaned nclaw-* and newton-user-* directories and removes them.
    • Searches for lingering @newtonschool/newton-mcp processes and terminates them with SIGKILL.

Google Calendar Sync Pipeline

The /sync command executes a multi-stage pipeline to inject Newton School lecture schedules into Google Calendar.

sequenceDiagram
    participant U as User
    participant NC as NewtonClaw
    participant SRV as Local HTTP Server<br/>(localhost:8080)
    participant BR as Browser
    participant G as Google OAuth
    participant GCAL as Google Calendar API
    participant MCP as newton-mcp Binary

    U->>NC: /sync
    NC->>NC: Check for existing refresh_token

    alt First-Time Authentication
        NC->>SRV: Spawn temporary HTTP server
        NC->>BR: Auto-open Google consent URL
        BR->>G: User grants calendar.events scope
        G->>SRV: Redirect with authorization code
        SRV->>NC: Exchange code for tokens
        NC->>NC: Store refresh_token locally
        SRV->>SRV: Self-terminate (3-min timeout)
    end

    NC->>MCP: get_calendar (via MCP)
    MCP-->>NC: Raw lecture slot data

    loop For each lecture_slot event
        NC->>NC: Generate base32hex event ID
        NC->>GCAL: Upsert event (patch or insert)
    end

    NC-->>U: "Synced X events, Y errors, Z skipped"

Key behaviors:

  • Idempotent: Uses deterministic event IDs derived from lecture hashes. Re-running /sync updates existing events rather than creating duplicates.
  • Auto-refresh: The OAuth client hooks into a tokens event listener that automatically persists refreshed credentials to the local store.
  • Selective sync: Only lecture_slot type events are synced. Other event types (assignments, etc.) are counted as "skipped" in the summary.
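One way to get deterministic, Calendar-safe IDs (a sketch of the idea, not the project's exact hashing): Google Calendar event IDs must use the base32hex character set (digits 0-9 and lowercase a-v), and a hex digest (0-9, a-f) is already a valid subset of that alphabet.

```typescript
import { createHash } from "node:crypto";

// Deterministic event ID from a lecture's identifying fields (field choice
// here is hypothetical). The same lecture always hashes to the same ID, so
// re-running the sync patches the existing event instead of inserting a new one.
function lectureEventId(courseId: string, startIso: string): string {
  return createHash("sha256").update(`${courseId}|${startIso}`).digest("hex");
}
```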

Configuration & Data Privacy

All configuration and state is stored in a single JSON file:

~/.newtonclaw/config.json

This file contains:

| Field | Purpose |
|:------|:--------|
| accessToken | Newton School session token (used by MCP binary) |
| llmProvider / llmApiKey | Active LLM engine configuration |
| telegramBotToken | Telegram bot token (from BotFather) |
| googleAuth.refresh_token | Google Calendar OAuth refresh token |
| watchlist[] | Array of tracked rivals with XP snapshots |
| cronPendingAssignments | Background alert configuration (enabled/disabled, threshold hours) |

Privacy guarantees:

  • No telemetry. No analytics. No remote logging.
  • API keys are stored in your home directory and never transmitted anywhere except directly to your chosen LLM provider.
  • The Newton access token is passed to the locally-spawned MCP binary via a sandboxed temporary directory, not via environment variables or network calls.

Technical Stack

| Layer | Technology | Role |
|:------|:-----------|:-----|
| Language | TypeScript (tsx for dev, tsc for production builds) | Type-safe logic across the entire codebase |
| CLI Framework | inquirer + inquirer-autocomplete-standalone | Categorized command menus, fuzzy search, interactive prompts |
| Terminal Styling | chalk + ora + figlet | ANSI colors, multi-state spinners, ASCII art rendering |
| Telegram | telegraf | Bot framework with middleware pipeline and inline keyboards |
| AI / LLM | openai SDK + @anthropic-ai/sdk + @google/generative-ai | Multi-provider LLM abstraction with unified tool-calling |
| MCP | @modelcontextprotocol/sdk + @newtonschool/newton-mcp | Stdio-based tool execution with connection pooling |
| Web Scraping | cheerio + axios | GitHub, LeetCode (GraphQL), and Codeforces profile scraping |
| Calendar | googleapis + google-auth-library + express | OAuth flow with local loopback server |
| Background Jobs | node-cron | Scheduled assignment scanning and rival monitoring |
| Logging | winston | Structured log output |
| Process Mgmt | PM2 (recommended) | Production daemon lifecycle management |


Development Setup

# Clone the repository
git clone https://github.com/DikshantJangra/NewtonClaw.git
cd NewtonClaw/backend

# Install dependencies
npm install

# Run the CLI in development mode (with hot reload)
npm run cli

# Run the Telegram daemon in development mode
npm run bot

# Build for production
npm run build

# Run the production orchestrator
npm run start

Project Structure

backend/
  bin/
    index.js               -- Entry point: mode selector (CLI / Telegram / Reconfig)
  src/
    core/
      agent.ts             -- Agentic reasoning loop (5-turn tool-call chain)
      mcp-runner.ts        -- MCP tool execution + my-stats + leaderboard fetchers
      compete.ts           -- Competition math engine (grind delta calculator)
      power-level.ts       -- Multi-source Power Score aggregator
      ai-reviewer.ts       -- LLM-driven profile roast/review generator
      calendar-sync.ts     -- Google Calendar upsert pipeline
    interfaces/
      cli.ts               -- Terminal adapter (42KB+ of interactive UI logic)
      telegram.ts          -- Telegram adapter (daemon bootstrap + command wiring)
      google-calendar.ts   -- OAuth client, token management, event upsert
      external-scrapers.ts -- GitHub, LeetCode, Codeforces API clients
      newton-scraper.ts    -- Newton School public profile API
    services/
      llm/
        brain.ts           -- Unified askBrain() across all providers
        constants.ts       -- Provider registry with strength-priority rankings
        provider.ts        -- Provider factory
      mcp/
        pool.ts            -- Singleton connection pool with idle timeout
        minimizer.ts       -- MCP result compression (10KB module)
        toolMapper.ts      -- LLM-compatible tool schema generator
        hints.ts           -- Tool usage hint provider
    config/
      store.ts             -- Local JSON config (read/write/watchlist ops)
    jobs/
      pendingAssignments.ts -- Proactive deadline scanner with deduplication
      zombieHunter.ts       -- Orphaned process/directory cleanup
    bot/
      commands/            -- 11 Telegram command handlers
      cron/
        overtake-monitor.ts -- Twice-daily rival XP comparison
      ui/                  -- Telegram formatters and message templates
      middlewares/         -- User context injection
  assets/
    nClaw.png              -- Project logo

Roadmap

NewtonClaw is under active development. The architecture has been designed from the ground up to support modular expansion.

| Phase | Planned Capability | Status |
|:------|:-------------------|:-------|
| v1.1 | Codeforces rating integration into Power Score formula | Planned |
| v1.2 | Discord bot adapter (same core, new interface) | Planned |
| v1.3 | Historical XP tracking with trend graphs | Planned |
| v1.4 | VS Code extension adapter | Planned |
| v2.0 | Team-based competition mode (squad vs. squad) | Planned |
| v2.1 | Assignment auto-submission reminders with Google Calendar integration | Planned |
| v2.2 | Local SQLite migration for richer query support | Planned |

This roadmap is directional, not a commitment. Features may be added, modified, or reprioritized as the tool evolves.


License

MIT License -- Copyright (c) 2026 Dikshant Jangra

Full license text: LICENSE

All generated configurations, hitlists, and API keys are stored exclusively in your local ~/.newtonclaw/ directory. No data is collected, transmitted, or stored by the tool itself.


github.com/DikshantJangra/NewtonClaw