
goto-assistant v0.9.4

Lightweight, self-hosted personal AI assistant

Downloads: 1,502

Readme

goto-assistant

Personal AI assistant that remembers past conversations, runs scheduled tasks, and works on both web and WhatsApp. Supports Claude, OpenAI, and OpenAI-compatible providers (Gemini, Groq, Ollama, etc.).

Quick Start

npx goto-assistant

Open http://localhost:3000 — on first run you'll be redirected to the setup page to configure your API key.

Requirements

  • Node.js 20.11 or later — npx runs the app and most MCP servers
  • uv — uvx runs the time MCP server (Python-based)
  • Anthropic, OpenAI, or OpenAI-compatible API key (Gemini, Groq, Ollama, etc.)

Data Storage

All data (config, conversations, uploads) is stored in ~/.goto-assistant/. To use a custom location: GOTO_DATA_DIR=/path/to/data npx goto-assistant

Custom Port

PORT=3001 npx goto-assistant
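
The two environment variables above can also be combined in a single invocation. A minimal sketch (the data directory path here is an example, not a required location):

```shell
# Run on an alternate port while keeping assistant data in a custom directory
PORT=3001 GOTO_DATA_DIR="$HOME/goto-data" npx goto-assistant
```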

Why goto-assistant?

One command, no Docker, no framework — just MCP. Chat from the web or WhatsApp.

        You
         │
    chat / ask
         │
         ▼
   ┌──────────────────┐
   │    AI Assistant  │
   └──┬──┬──┬──┬──┬──┘
      │  │  │  │  │
      │  │  │  │  │      create / update / run /   ┌──────────────┐
      │  │  │  │  └───── schedule / get results ─▶ │     Cron     │──── ┐
      │  │  │  │                                   ├──────────────┤     │
      │  │  │  └───── remember / recall ────────▶  │    Memory    │   AI tasks
      │  │  │                                      ├──────────────┤   w/ MCP
      │  │  └── recall conversations & task runs ▶ │   Episodic   │◀── access
      │  │                                         ├──────────────┤
      │  └────────── current time ─────────────▶   │     Time     │
      │                                            ├──────────────┤
      └────────── do anything ─────────────────▶   │    Broker    │
                   (search & call tools)           │  ↕ your MCP  │
                                                   │   servers    │
                                                   └──────────────┘

That one npx command gives you an AI assistant that can remember across conversations, search past interactions, and run tasks on a schedule or on-demand — all through the standard MCP protocol. Add any MCP server to extend it further — the built-in broker dynamically discovers and routes your tools.

See it in action

Setup

First run — provider, API key & WhatsApp

Run npx goto-assistant, pick your AI provider, paste your API key, and connect WhatsApp by scanning the QR code — done.

Adding an MCP server

Add MCP servers through the setup wizard. The assistant verifies each server before saving it; for security reasons, verification may take up to a few minutes.

Tasks

Create a task

Ask the assistant to create an on-demand task.

Update a task

Modify task prompts, commands, or settings through chat.

Run a task & compare results

Run tasks on demand and compare results across runs.

Schedule a task

Schedule tasks to run periodically using natural language.

Chat & manage tasks on WhatsApp

Chat with the AI assistant and manage tasks from WhatsApp — the same assistant, on the go.

Data Privacy

goto-assistant connects directly to AI providers using your own API keys. Both Anthropic and OpenAI have clear policies that API data is not used for model training by default:

Anthropic (Commercial Terms; Privacy Center):

"Anthropic may not train models on Customer Content from Services."

"By default, we will not use your inputs or outputs from our commercial products to train our models."

OpenAI (Platform Data Controls; Enterprise Privacy):

"Data sent to the OpenAI API is not used to train or improve OpenAI models (unless you explicitly opt in to share data with us)."

"We do not train our models on your data by default."

Your conversations and data stay between you and the provider's API. All local data is stored on your machine:

  • goto-assistant: conversations, config, uploads, and WhatsApp auth in ~/.goto-assistant/
  • mcp-cron: tasks and results in ~/.mcp-cron/

WhatsApp Integration

Chat with the assistant directly from WhatsApp — no extra apps, no Docker, no webhooks needed.

Uses Baileys (WhatsApp Web multi-device protocol) running in-process. Enable it in the setup wizard or toggle it on the setup page, scan the QR code once, and you're connected. Auth persists across restarts.

Messages go through the same AI pipeline as the web chat. The agent only responds in your self-chat ("Message yourself") — it never replies to other people messaging your number.

Architecture

Browser and WhatsApp clients connect to server.ts (WebSocket + REST), which routes messages through router.ts to the Claude or OpenAI agent SDK. Agents access MCP servers for extended capabilities — user-added servers are accessed through mcp-broker, an FTS5-powered gateway that dynamically discovers and routes tool calls, while built-in servers (cron, memory, messaging, episodic-memory) connect directly. Messaging flows through a channel registry — the mcp-messaging MCP server proxies tool calls to POST /api/messaging/send, which routes to the appropriate channel (WhatsApp, etc.). The episodic-memory MCP server provides full-text search over past conversations and task results using SQLite FTS5, enabling the agent to recall prior interactions.

See docs/architecture.md for the full architecture diagram.

Development Setup

  1. Install dependencies:

     pnpm install

  2. Start the development server:

     pnpm dev

  3. Open http://localhost:3000 — you'll be redirected to the setup page on first run to configure your AI provider and API key.

  4. Lint and test:

     pnpm lint
     pnpm test

Configuration

App configuration is stored in data/config.json (created on first setup). MCP server configuration is stored separately in data/mcp.json. Environment variables override file config:

  • ANTHROPIC_API_KEY — API key for Claude
  • OPENAI_API_KEY — API key for OpenAI (also used for OpenAI-compatible providers)

For OpenAI-compatible providers, set the base URL in the setup page (e.g. https://generativelanguage.googleapis.com/v1beta/openai for Gemini). The app defaults to the Chat Completions API for all third-party base URLs; the Responses API is only used for direct OpenAI and Azure OpenAI endpoints.
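
As a sketch of the env-override behavior described above (the key value is a placeholder, not a working credential):

```shell
# Environment variables take precedence over data/config.json
export OPENAI_API_KEY="sk-your-key-here"
npx goto-assistant
```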

MCP Servers

The assistant comes pre-configured with these MCP servers:

| Server | Package | Capabilities |
|--------|---------|--------------|
| memory (semantic) | @modelcontextprotocol/server-memory | Persistent knowledge graph across conversations |
| time | mcp-server-time | Current time and timezone conversions |
| cron | mcp-cron | Schedule or run on-demand shell commands and AI prompts with access to MCP servers |
| messaging | built-in | Send messages via connected platforms (WhatsApp, more coming) |
| episodic-memory | built-in | Full-text search over past conversations and task results |
| broker | mcp-broker | MCP gateway — dynamically discovers and routes tool calls to user-added servers via FTS5 search |

Add your own through the setup page — either via the form or by asking the setup wizard AI chat — or by editing data/mcp.json directly. Any MCP server that supports stdio transport will work — browse the MCP server directory for more.
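
If you edit data/mcp.json by hand, an entry for a stdio server might look like the sketch below. The exact schema of data/mcp.json isn't documented in this README, so the shape shown (a server name mapped to a command and args) follows the common MCP client convention and should be treated as an assumption — the setup page is the safer route. The filesystem server and path are example values:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    }
  }
}
```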