# Jarvis
A self-hosted AI agent that runs as a background server. Chat with it via a web UI or Telegram, give it tools to run shell commands and manage files, and schedule recurring tasks — all powered by any model on OpenRouter, z.ai, or the Anthropic API.
## Features
- Agent loop — runs tools autonomously, hands off to a fresh context when it hits the iteration limit, and keeps going until the task is done
- Web UI — built-in chat interface served at http://localhost:18008
- Telegram — optional channel adapter; chat from your phone, send photos, get proactive notifications
- Cron scheduler — schedule recurring or one-time tasks in plain English; agent runs them autonomously and can notify you via Telegram
- Skills — Markdown-defined workflows the agent discovers and follows for specific task types
- Custom tools — define tools in JSON (name, description, JS code); the agent picks them up without a restart
- Multi-provider — OpenRouter, z.ai, or Anthropic directly (with prompt caching)
- Persistent sessions — full conversation history per session, sliding context window
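
As a rough sketch of what a custom tool entry might look like — the exact field names and registry layout are assumptions based on the description above (name, description, JS code), not taken from the package docs:

```json
{
  "name": "disk_usage",
  "description": "Report free disk space on the server's root filesystem",
  "code": "const { execSync } = require('child_process'); return execSync('df -h /').toString();"
}
```

Because tools are picked up without a restart, adding an entry like this to the registry should make it available on the agent's next turn.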
## Quick start

```shell
npm i -g @ducci/jarvis
jarvis setup    # configure API key, model, and optionally Telegram
jarvis start    # start the background server (auto-restarts on crash)
jarvis stop     # stop the server
jarvis status   # show PID, uptime, restart count
```

Open http://localhost:18008 to use the chat UI.

## Recommended models
Any OpenRouter model works, but here's what's worth trying right now:
| Model | Provider | Notes |
|---|---|---|
| glm-5 | z.ai directly | Personal pick — strong at coding and tool use, great value |
z.ai tip: z.ai offers a "Coding Plan Pro" subscription that gives you direct, high-rate access to GLM-5. If you do a lot of agentic coding tasks, it's worth it. Run `jarvis setup` and select z.ai as your provider — it will configure the endpoint and model automatically.
Fallback recommendation: set `fallbackModel` to `openrouter/auto` in `settings.json` so failed requests automatically retry on a capable free model.
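
For example, the relevant part of `settings.json` could look like this — only `fallbackModel` and its value come from the recommendation above; the `model` key and its value are illustrative placeholders:

```json
{
  "model": "glm-5",
  "fallbackModel": "openrouter/auto"
}
```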
## Docs
- Setup and configuration
- CLI and server lifecycle
- Agent system
- Telegram channel
- Cron scheduler
- Skills
- Identity and persona
- UI
## Development

```shell
npm run dev    # start server with nodemon (auto-reload)
```

For UI hot-reload, run both the server and the Vite dev server:

```shell
npm run dev                          # server on :18008
cd ui && npm install && npm run dev  # UI on :5173, proxies /api to :18008
```

Build the UI for production:

```shell
cd ui && npm run build    # outputs to ui/dist/, served automatically by the server
```

## Security
Jarvis is designed for local or private server use only. The API has no authentication — do not expose port 18008 to the public internet. The exec tool runs shell commands with the same permissions as the server process.
If you run Jarvis on a VPS, make sure your firewall only allows what's necessary. With ufw:

```shell
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp   # SSH
ufw enable
```

Ports like 18008 stay closed to the outside world — access the UI via an SSH tunnel instead:

```shell
ssh -L 18008:localhost:18008 user@your-vps
```

## Data
All runtime data lives in `~/.jarvis/` and is never stored in the repo:

- `~/.jarvis/.env` — API keys
- `~/.jarvis/data/config/settings.json` — model, port, channel config
- `~/.jarvis/data/conversations/` — session history
- `~/.jarvis/data/tools/tools.json` — tool registry
- `~/.jarvis/data/skills/` — skill definitions
- `~/.jarvis/logs/` — per-session JSONL logs, cron logs, PM2 stdout
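
Since everything lives under one directory, backing up (or migrating) an install is a single archive. This is a sketch, not an official command — the `mkdir -p` guard just makes it safe to run on a machine where Jarvis hasn't created the directory yet:

```shell
# Snapshot all Jarvis state: keys, config, sessions, tools, skills, logs.
mkdir -p "$HOME/.jarvis"                      # no-op on an existing install
BACKUP="jarvis-backup-$(date +%F).tar.gz"
tar czf "$BACKUP" -C "$HOME" .jarvis          # archive relative to $HOME
echo "wrote $BACKUP"
```

Restore on a new machine with `tar xzf jarvis-backup-<date>.tar.gz -C ~` before running `jarvis start`.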
