anvilterm
v0.2.43
AnvilTerm — AI-era web terminal with tabs, image overlays, table detection, and session persistence
AnvilTerm is a terminal built for the AI era. You get a normal terminal that feels like iTerm or Terminal.app — Cmd+V, Ctrl+V, drag-drop, image paste, and Claude Code / Codex / Gemini screenshots all work natively. On top, it ships a swarm engine that spawns Claude, Codex, Gemini, OpenCode, Copilot, or Ollama side-by-side as real CLIs in live tiles; a shared chat room where those agents coordinate; an MCP server that any agent can drive (spawn tabs, type, screenshot, push artifacts); an Arena for head-to-head model benchmarks; and a Forge marketplace for MCP servers + Skills. Real CLIs, real tools, one workspace.
```sh
npx anvilterm
```

Or grab the native macOS app: AnvilTerm.dmg.
Features
Swarm — orchestrate multiple AI agents in parallel
Spawn a team — each agent is its own live terminal tile. They coordinate through a shared SwarmRoom. Send the same prompt to all, or route by capability.
- Real CLIs, not wrappers — agents are literal `claude`, `codex`, `gemini`, `opencode`, `copilot`, `ollama` processes. You see exactly what the TUI sees.
- Auto-grid layout — one agent = one tab, N agents = N tiles, re-flows as agents exit.
- SwarmRoom inter-agent chat — a shared room (`swarm_room_post`, `swarm_room_listen`, `swarm_room_thread`) where the lead delegates and specialists report back. You watch it happen live.
- Lead-led composition — spawn a single lead that picks its own team, routes subtasks, and merges results.
- Capability routing — `swarm_route(capabilities=["multimodal"])` → Gemini. `["reasoning"]` → Claude Opus. `["refactor"]` → Codex. Cost-weighted.
- Per-tile controls — Stop, Close, scrollback export per agent.
- Live token + usage tracking — scraped from each TUI, plus aggregated daily/weekly spend for Claude + Codex + OpenCode.
- One-tap CLI updates — see every AI CLI's version status, update all outdated in one click.
Bundled vendor capability map (`swarm_vendors`):

| Vendor | Strengths | Cost |
|---|---|---|
| claude | reasoning, long-context, coding, planning, refactor | high |
| codex | coding, code-review, research, refactor, test-gen | medium |
| gemini | multimodal, image/video/youtube, image-gen, long-context | medium |
| copilot | shell-command, git, gh-cli | low |
| ollama | local, cheap, private, offline | free |
| shell | deterministic, no-ai | free |
Full launch matrix in SWARM_CLI_REFERENCE.md.
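To make the tool surface concrete, here's a minimal sketch of driving the swarm from any Node script through the official MCP TypeScript SDK. The tool names are the ones listed above; the argument shapes are assumptions — SWARM_CLI_REFERENCE.md has the real schemas.

```js
// Sketch only — tool names from this README, argument shapes assumed.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const client = new Client({ name: "swarm-demo", version: "0.0.1" });
await client.connect(
  new StdioClientTransport({ command: "npx", args: ["-y", "anvilterm-mcp"] })
);

// Ask the router for the best vendor for a multimodal task (→ gemini per the map above).
const routed = await client.callTool({
  name: "swarm_route",
  arguments: { capabilities: ["multimodal"] },
});
console.log(routed.content);

// Post into the shared room so the lead agent — and the human — see it live.
await client.callTool({
  name: "swarm_room_post",
  arguments: { message: "triage: describe ui.png and file issues" },
});
```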
AnvilArena — head-to-head model battles
Two (or more) agents enter. One delivers.
- Pick contestants (Opus 4.7 vs Sonnet 4.6, GPT-5.4 vs Gemini 3, etc.) from the model matrix.
- Paste a prompt. Hit Start.
- Each tile streams redacted live output side-by-side. Each contestant calls `arena_push_artifact` — tiles swap to iframes rendering the deliverable (HTML, SVG, markdown, code).
- Judge quality, speed, correctness. Export the fight as markdown + JSONL for reproducibility, or as a 9:16 battle video for your channel.
- Session stored at `~/.anvil/arena/<id>.jsonl`.
Model Radar — never miss a launch
A live feed of the freshest checkpoints on OpenRouter. Hit the 📡 button in the toolbar.
- Three-tab window — This week / This month / All recent. Top 12 / 25 / 50 entries by drop date, deduped by alias.
- Color-coded chips per row — `CTX 200k` (blue) · `IN $1.00/M` (amber) · `OUT $5.00/M` (orange) · `FREE` (green, replaces in/out when zero) · `# rank` (lavender, gold for #1).
- Three actions per model:
  - ▶ Try in Arena — opens the Arena modal with the model pre-loaded into contestant #1, ready to battle your current driver.
  - ⌨ Use in OpenCode — spawns a fresh terminal tab and runs `opencode -m openrouter/<provider>/<model>` so you land directly in a chat session against the new model.
  - ↗ View on OR — opens the OpenRouter model page in your default browser.
- Latest-alias resolver — OpenRouter's `~<provider>/<family>-latest` aliases (e.g. `~anthropic/claude-haiku-latest`) aren't in opencode's catalog. AnvilTerm token-matches them against the live OR feed and resolves to the newest concrete versioned id (`anthropic/claude-haiku-4.5`) before launching, so the model actually loads instead of falling back to the default (see the sketch below).
- Pulse indicator — the 📡 icon glows when fresh models have dropped since you last checked.
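The resolver idea fits in a few lines. Here's a sketch of the token-matching as I read the description above — not AnvilTerm's actual code; the feed is OpenRouter's public `GET https://openrouter.ai/api/v1/models` endpoint:

```js
// Sketch: resolve "~anthropic/claude-haiku-latest" to the newest concrete id
// in the live OpenRouter feed. Illustrative, not AnvilTerm's implementation.
async function resolveLatestAlias(alias) {
  const m = alias.match(/^~([^/]+)\/(.+)-latest$/);
  if (!m) return alias; // already a concrete versioned id
  const [, provider, family] = m;
  const tokens = family.split("-"); // e.g. ["claude", "haiku"]

  const { data } = await (await fetch("https://openrouter.ai/api/v1/models")).json();
  const newest = data
    .filter((mod) => mod.id.startsWith(`${provider}/`))
    .filter((mod) => tokens.every((t) => mod.id.includes(t)))
    .sort((a, b) => (b.created ?? 0) - (a.created ?? 0))[0]; // latest drop first

  return newest?.id ?? alias; // fall back to the alias if nothing matches
}

// resolveLatestAlias("~anthropic/claude-haiku-latest") → "anthropic/claude-haiku-4.5" (or newer)
```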
Media library — terminals shouldn't be blind to pixels
Hover any path or URL in the scrollback to peek the file in a floating thumbnail. The side Media rail collects every image, video, audio, and doc this session has touched, with one-tap Path · URL · Copy · Insert actions. YouTube embeds, GIFs, MP3 waveforms, PDF tiles — all rendered inline.
MCP server — anvilterm-mcp
Ships a full Model Context Protocol server so any MCP client (Claude Code, Cursor, Windsurf, Codex, Gemini, OpenCode) can drive AnvilTerm.
- Terminal-in-terminal — `terminal_create`, `terminal_write`, `terminal_read`, `terminal_screen`, `terminal_run`, `terminal_close`. Agents spawn their own PTYs, run commands, read output, capture screenshots.
- TUI control — `tui_type`, `tui_paste_ref`, `tui_choose`, `tui_interrupt`. Type into vim, answer a `Y/N` prompt, interrupt a hung Claude session.
- Swarm + Room — `swarm_spawn`, `swarm_route`, `swarm_vendors`, `swarm_room_post`, `swarm_room_listen`, `swarm_room_thread`, `swarm_artifact_save`.
- Arena — `arena_current`, `arena_push_artifact`.
- Lives as a tab — every PTY created via MCP appears as a live tab in the AnvilTerm window, so the human watches (and can intervene) in real time.
- Standalone fallback — if the web server isn't running, the MCP server spawns local PTYs via `node-pty`, so it still works without the UI.
Register in one line (or use anvilterm-doctor to wire every installed agent at once):
```jsonc
// ~/.claude.json
{ "mcpServers": { "anvilterm": { "command": "npx", "args": ["-y", "anvilterm-mcp"] } } }
```

Forge — MCP + Skills marketplace
One-click install any MCP server or Skill into any agent's config. Open from the toolbar.
- Catalog — curated MCP registry entries + community skills, star-ranked, kind-filtered (`mcp`, `skill`, `gemini-extension`).
- Install targets — Claude Code, Codex, Gemini, OpenCode — or "all agents" in one tap. Writes directly to each agent's config with a `_forge: true` tag so Forge can manage updates/removal.
- Installed view — per-agent status chips (ready / unavailable), managed-by-Forge list, externally-installed detection, per-row Remove.
- Skills support — git-clones the repo into `~/.anvil/skills/<owner>/<repo>/` and symlinks into each agent's skills dir (see the sketch below).
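The clone-then-symlink flow is roughly this — a sketch under assumptions; the real per-agent skills directories vary by vendor:

```js
// Sketch of the Skills install described above: clone once, symlink everywhere.
import { execFileSync } from "node:child_process";
import { mkdirSync, symlinkSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

function installSkill(repoUrl, owner, repo, agentSkillsDirs) {
  const dest = join(homedir(), ".anvil", "skills", owner, repo);
  mkdirSync(join(homedir(), ".anvil", "skills", owner), { recursive: true });
  execFileSync("git", ["clone", "--depth", "1", repoUrl, dest]);
  for (const dir of agentSkillsDirs) {
    symlinkSync(dest, join(dir, repo)); // one clone serves every agent
  }
}
```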
Onboarding doctor
One script wires your installed AI CLIs to AnvilTerm:
```sh
# Install once globally — npm exposes every bin (anvilterm, anvilterm-doctor,
# anvilterm-mcp, anvilterm-mcp-install, anvilterm-agent-loop, anvil-showcase).
npm install -g anvilterm
anvilterm-doctor           # diagnose: which vendors are ready
anvilterm-doctor --install # register MCP + install anvil-swarm skill in every installed vendor
anvilterm-doctor --test    # spawn each vendor, verify it can post to a test SwarmRoom

# Or, without a global install — npx must be told which package owns the bin
# because the bin name doesn't match a separate npm package:
npx -p anvilterm anvilterm-doctor
npx -p anvilterm anvilterm-mcp-install
```

Native-feel paste & drag-drop
- Cmd+V → browser paste event → Claude / Codex see the image natively + a copy lands in the media panel (sketch below)
- Ctrl+V → raw `\x16` reaches Claude / Codex → they read the clipboard themselves → media panel also gets it via Rust clipboard read (no browser permission popup)
- Drag file in → path typed at cursor (iTerm-style) + file registered in media panel
- Image paste just works — no bracketed-paste hacks, no interception
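The Cmd+V path is plain web-platform plumbing. A minimal sketch of the shape — the `/api/paste` endpoint name here is hypothetical:

```js
// Sketch of the Cmd+V image path: the browser paste event carries the bitmap,
// so no bracketed-paste interception is needed.
document.addEventListener("paste", async (e) => {
  for (const item of e.clipboardData?.items ?? []) {
    if (!item.type.startsWith("image/")) continue;
    const file = item.getAsFile();          // the pasted image as a File
    const bytes = await file.arrayBuffer();
    // Archive a copy for the media panel (endpoint name is an assumption)...
    await fetch("/api/paste", { method: "POST", body: bytes });
    // ...while the CLI in the focused tab receives the image natively.
  }
});
```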
Multi-tab terminal
Independent shell sessions with persistent PTY. Close the window, reopen, pick up where you left off. 200K char scrollback per tab.
Smart media detection
- Images — pastes, drops, or `ls ~/Desktop/*.png` output → hoverable + clickable inline
- SVG — rendered directly inline (perfect for agent-generated diagrams and UI mockups)
- YouTube — thumbnail on hover, inline player on click
- Video / audio — drop `.mp4` / `.mp3` → inline play. Audio gets a waveform visualizer
- PDF — click to preview in a scrollable card
- Tables — markdown tables detected and openable as a spreadsheet with copy-TSV / download-CSV / export-xlsx (see the sketch below)
- Rich link previews — hover any URL for title + description + image
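For the table path, the copy-TSV conversion is the simplest piece to picture. A sketch — illustrative, not AnvilTerm's actual detector:

```js
// Sketch: flatten a detected markdown table to tab-separated values.
function markdownTableToTsv(md) {
  return md
    .trim()
    .split("\n")
    .filter((row) => !/^\s*\|?[\s|:-]+\|?\s*$/.test(row)) // drop |---|---| divider rows
    .map((row) =>
      row
        .replace(/^\s*\|/, "")   // leading pipe
        .replace(/\|\s*$/, "")   // trailing pipe
        .split("|")
        .map((cell) => cell.trim())
        .join("\t")
    )
    .join("\n");
}
```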
Interactive screenshot (macOS)
Camera button → native screencapture -i crosshair → drag to crop → image both on clipboard AND saved to ~/.anvilterm/screenshots/ with a thumbnail card in the media panel. Also exposed to agents via MCP (terminal_screen).
Session log recording
Red-dot record button captures every PTY output byte. Stop saves:
- `~/.anvilterm/sessions/session-<ts>.log` — ANSI-stripped, human-readable (see the sketch below)
- `~/.anvilterm/sessions/session-<ts>.raw.log` — original colors preserved for `cat` replay
Log card appears in the media panel with Insert (types @path into terminal), Finder (reveal), Open (default text editor), Copy path.
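Producing the human-readable variant from the raw capture is essentially one regex. A simplified sketch — the shipped `stripAnsi` util covered by the unit tests is the authoritative one:

```js
// Sketch: derive the ANSI-stripped log from the raw capture.
import { readFileSync, writeFileSync } from "node:fs";

// CSI sequences (colors, cursor moves) and OSC sequences (titles, hyperlinks).
const ANSI = /\x1b\[[0-9;?]*[ -\/]*[@-~]|\x1b\][^\x07\x1b]*(?:\x07|\x1b\\)/g;

const rawPath = process.argv[2]; // e.g. a session-<ts>.raw.log file
const raw = readFileSync(rawPath, "utf8");
writeFileSync(rawPath.replace(".raw.log", ".log"), raw.replace(ANSI, ""));
```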
Real-time AI usage tracking (community)
Opt-in chips in the toolbar for Claude Code and OpenAI Codex showing real-time consumption of your 5-hour + weekly quotas — same data as Claude's /usage and Codex's /status, pulled from the same OAuth endpoints those CLIs use.
- Pixel-art crab (Claude) + official Codex glyph — state-reactive animations (bob / panic shake / storm pulse)
- macOS menu bar tray — glanceable usage even when the app is unfocused
- Native threshold notifications at 50% / 80% / 95% (debounced per reset window)
- Per-provider convention — Claude shows "% used", Codex shows "% left"
- Live model + effort detection — scrapes Claude/Codex startup banners from the PTY
- Credits tracking — displays pay-as-you-go overage usage ($/€)
- Sparkline history — last 24h of samples in a tiny trend chart
- Usage forecast — linear regression projects when you'll hit 100% before reset
- Polling is non-aggressive — adaptive (10min / 5min / 2min) or user-configurable
Opt-in, zero data sent anywhere except the provider's own servers. More details ↓
Claude Code privacy toggles
Surface everything tucked inside ~/.claude/settings.json as one-click toggles, with daily backups before every write:
- Hide Claude git attribution (`attribution.commit/pr = ""`)
- Disable Claude telemetry (`env.CLAUDE_CODE_ENABLE_TELEMETRY = "0"`)
- Hide email / org (`env.CLAUDE_CODE_HIDE_ACCOUNT_INFO = "1"`)
- Demo mode (`env.IS_DEMO = "1"`)
- Disable non-essential traffic
- Default model / effort dropdown
- Lock effort across `/effort` calls
Built-in AI assistant (Cmd+K)
On-device AI via Ollama. Sees your terminal context, suggests commands, types into TUI apps. Any Ollama model — Gemma 4, Qwen, Llama, Mistral.
Native function calling via Ollama's tools API — Gemma 4, Qwen 2.5/3, Llama 3.1, Mistral Nemo return structured tool_calls that render as Run buttons. Models without tool support fall back to prompt-engineered <exec> tags. Markdown rendering handles headings, lists, tables, --- rules, and auto-promotes single-line shell code blocks to Run buttons.
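Under the hood this is Ollama's standard `/api/chat` tools flow. A minimal sketch — the `run_shell` tool schema here is hypothetical, not AnvilTerm's actual tool definition:

```js
// Sketch: native tool calling against a local Ollama model.
const res = await fetch("http://localhost:11434/api/chat", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({
    model: "qwen3:8b",
    stream: false,
    messages: [{ role: "user", content: "how much disk space is free?" }],
    tools: [{
      type: "function",
      function: {
        name: "run_shell", // hypothetical tool, for illustration
        description: "Run a shell command in the active terminal tab",
        parameters: {
          type: "object",
          properties: { command: { type: "string" } },
          required: ["command"],
        },
      },
    }],
  }),
});

const { message } = await res.json();
for (const call of message.tool_calls ?? []) {
  // Each structured call is what renders as a Run button in the sidebar.
  console.log(`Run: ${call.function.arguments.command}`);
}
```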
Voice input
Push-to-talk with live waveform. Speech-to-text types into the terminal or AI chat.
Dark + light mode
Auto-detects system preference. Full terminal theme customization.
Keyboard shortcuts
| Shortcut | Action |
|---|---|
| Cmd+K | Toggle AI assistant sidebar |
| Cmd+T | New tab |
| Cmd+W | Close tab |
| Cmd+Shift+[ / ] | Prev / next tab |
| Cmd+1 … Cmd+9 | Jump to tab N |
| Cmd+V | Paste (fires browser paste event, media panel archives) |
| Ctrl+V | Paste (sends \x16 to PTY — Claude / Codex pick up clipboard) |
Quick Start
```sh
npx anvilterm            # instant, no install

npm install -g anvilterm
anvilterm --port=8080
anvilterm --no-browser
```

Open http://localhost:3030 (or whatever port you chose). All terminal features work in the browser. The native menu-bar tray, Finder reveal, interactive screenshots, and system notifications are desktop-only (run via Tauri — see below).
Prerequisites
| Requirement | Version | Required | Notes |
|---|---|---|---|
| Node.js | >= 18 | Yes | nodejs.org |
| npm | >= 9 | Yes | Comes with Node.js |
| Ollama | any | Optional | For on-device AI assistant — ollama.com |
| Claude Code | any | Optional | For swarm + usage chip — claude.ai/code |
| OpenAI Codex | any | Optional | For swarm + usage chip — chatgpt.com/codex |
| Gemini CLI | any | Optional | For swarm multimodal agents |
| OpenCode | any | Optional | For swarm low-cost agents |
| GitHub Copilot CLI | any | Optional | For swarm shell-command agents |
Desktop app (macOS)
The desktop build runs the node server as a sidecar inside Tauri, so the webview, any browser tab, and MCP all share one session pool. That means an agent calling swarm_spawn from Claude Code appears as a live tile in the AnvilTerm window you're staring at.
```sh
git clone https://github.com/AnassKartit/anvilterm.git
cd anvilterm
npm install
npm run tauri:dev    # hot-reload dev
npm run tauri:build  # release .dmg + .app in src-tauri/target/release/bundle/
```

A signed + notarized build is produced by scripts/sign-and-notarize.sh (macOS Developer ID required).
Wire AnvilTerm into your AI CLIs
```sh
npx anvilterm-doctor --install
```

That registers the anvilterm MCP server in every installed vendor's config (Claude Code's `~/.claude.json`, Codex's `~/.codex/config.toml`, Gemini's settings, OpenCode's config) and installs the anvil-swarm skill where supported. Restart your CLI — you now have `swarm_spawn`, `terminal_create`, `arena_push_artifact`, and ~20 other tools.
Manual install: see MCP server above.
AI Usage Tracker
Opt-in community feature. In Settings → AI Usage · Community, toggle:
- Claude tracker — reads the OAuth token Claude Code stored locally (macOS Keychain or `~/.claude/.credentials.json`), calls `api.anthropic.com/api/oauth/usage`
- Codex tracker — reads `~/.codex/auth.json`, calls `chatgpt.com/backend-api/wham/usage`
Both endpoints are what those CLIs themselves call when you run /usage or /status. No new credentials are created, no data is sent anywhere except the provider's own servers.
How it looks
- Toolbar chip per provider, small colored pill with the character icon + percentage
- Click → popover with progress bars for each bucket (5h / weekly / Sonnet-only / Opus-only / credits) with reset times
- Right-click ⋯ toggle → detailed view (exact window duration + absolute reset timestamps)
- Mac menu bar tray with the same data
Colors + animations
- Green — <70% → crab bobs gently, cloud still
- Amber — 70–90% → warning glow, soft pulse
- Red — ≥90% → crab panic-shakes, cloud storm-pulses, chip border turns red
- Threshold-crossing flash — chip scale-punches + halo when crossing 70% / 90%
Notifications
Native macOS notifications at 50% / 80% / 95% per bucket, debounced per reset window. Format:
Claude · weekly Sonnet at 82% Resets 2d 14h.
Usage forecast
Linear regression over the last ~24 samples projects when each bucket will hit 100%. If the ETA is before the next reset, it's flagged red ("will hit 100% before reset"); if comfortably after, it's green. Requires 2 samples >1 min apart.
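The math is a plain least-squares fit. A sketch of the projection — my reading of the description; the sample shape and names are assumptions:

```js
// Sketch: fit pct = a + b·t over recent samples, solve for pct = 100.
function forecastHit100(samples /* [{ t: epochMs, pct: 0–100 }, ...] */) {
  if (samples.length < 2) return null;               // needs ≥2 samples
  const n = samples.length;
  const mt = samples.reduce((s, p) => s + p.t, 0) / n;
  const mp = samples.reduce((s, p) => s + p.pct, 0) / n;
  let num = 0, den = 0;
  for (const { t, pct } of samples) {
    num += (t - mt) * (pct - mp);
    den += (t - mt) ** 2;
  }
  if (!den) return null;                             // all samples at one instant
  const slope = num / den;                           // pct per ms
  if (slope <= 0) return null;                       // flat or falling usage
  return mt + (100 - mp) / slope;                    // projected timestamp of 100%
}

// Red if forecastHit100(samples) < nextResetTs, green otherwise.
```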
Live model + effort detection
Claude's configured model and effort aren't externally queryable, so AnvilTerm derives the best guess from a chain of fallbacks (first hit wins):

- PTY banner scraping — catches Claude Code's startup line (`Opus 4.7 (1M context) with xhigh effort`)
- Running `claude` process args — parses `--model` / `--effort` flags from `ps`
- Newest session transcript — reads the freshest `.jsonl` in `~/.claude/projects/<cwd-hash>/`
- Shell rc exports — scans `.zshrc` / `.zshenv` / `.bashrc` for `CLAUDE_CODE_MODEL` / `CLAUDE_CODE_EFFORT_LEVEL`
- `~/.claude/settings.json` — `env` block, then top-level `model` / `effortLevel`
- Plan default — Max → Opus 4.7, Pro → Sonnet 4.6
For Codex, state comes from ~/.codex/config.toml plus PTY banner scraping.
Silent fallback
If you don't have Claude Code / Codex installed, or aren't signed in, the corresponding chip hides itself. No nagging.
Unofficial API disclosure
api.anthropic.com/api/oauth/usage and chatgpt.com/backend-api/wham/usage are undocumented. Anthropic / OpenAI may change them without notice. We only hit them with your own token at an adaptive cadence.
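The adaptive cadence is easy to picture. A sketch — the mapping of thresholds to the 10/5/2-minute intervals is my assumption; the README only lists the intervals:

```js
// Sketch: poll faster as the fullest quota bucket climbs.
function nextPollDelayMs(worstPct) {
  if (worstPct >= 90) return 2 * 60_000;   // hot: 2 min (threshold assumed)
  if (worstPct >= 70) return 5 * 60_000;   // warm: 5 min (threshold assumed)
  return 10 * 60_000;                      // calm: 10 min
}

async function pollLoop(fetchUsage, onUpdate) {
  for (;;) {
    const buckets = await fetchUsage();    // provider-specific usage call
    onUpdate(buckets);
    const worst = Math.max(...buckets.map((b) => b.pct));
    await new Promise((r) => setTimeout(r, nextPollDelayMs(worst)));
  }
}
```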
Adding a third provider
Drop a file in src/usage/providers/. Each provider exports:
```js
exports.myProvider = {
  id, label, short, icon, css,
  invokeUsage, invokeAccount,
  displayMode: 'used' | 'left',
  scanBanner(cleanText),          // → {model, effort} | null
  normalize(response),            // → {buckets, credits}
  planFormat(account, liveState), // → {label, note}
};
```

Register in `src/usage/index.js` + add a matching `#[tauri::command]`. Tray, polling, notifications — all provider-agnostic.
AI Assistant (Cmd+K)
AnvilTerm includes a sidebar AI that connects to Ollama.
```sh
curl -fsSL https://ollama.com/install.sh | sh   # 1. install
ollama pull gemma4:e2b                          # 2. pull a model
# 3. open AnvilTerm, press Cmd+K
```

Recommended models
- gemma4:e2b — fast, 2B params
- gemma4:e4b — balanced, 4B
- gemma4:26b — powerful, MoE
- qwen3:8b — strong reasoning
- llama3:8b — general purpose
Capabilities
- Reads the terminal screen as context
- Suggests + runs commands (click to execute)
- Types into TUI apps (vim, Claude Code, Codex, Gemini CLI)
- Multi-turn conversation with history
- Model picker with pull-to-download
- Thinking-mode toggle per model
Automation API
Every feature is scriptable via window.anvilterm:
```js
await window.anvilterm.exec('echo hello');
await window.anvilterm.waitForText('hello');
const screen = window.anvilterm.getScreen();
await window.anvilterm.newTab('deploy');
```

Full API: `exec()`, `type()`, `getScreen()`, `getScrollback()`, `waitForText()`, `waitForIdle()`, `newTab()`, `switchTab()`, `closeTab()`, `getTabs()`, `resize()`, `togglePanel()`, `toggleOverlays()`, `isConnected()`.
Architecture
```
Client (xterm.js + vanilla JS)
├── Link providers (YouTube, media URLs, local image paths)
├── Media panel (pastes, drops, screenshots, session logs)
├── Swarm grid + SwarmRoom chat tile
├── Arena tiles + artifact iframes
├── Forge marketplace modal
├── AI sidebar (Ollama chat with terminal context)
├── Usage tracker (src/usage/ — modular per-provider)
└── window.anvilterm Automation API
        │
WebSocket (shared by webview + browser tabs + MCP)
        │
Node sidecar (server.js — Express + ws)
├── PTY sessions (persistent, 200K scrollback, subscribe-all)
├── Ollama proxy
├── SwarmRoom (post/listen/thread pub-sub)
├── Arena state + artifact storage
├── Forge catalog + install-to-agent writer
└── Media serving
        │
Tauri native shell (Rust)
├── Spawns node sidecar on 127.0.0.1:<port>, points webview at it
├── Graceful shutdown (kills node child cleanly)
└── Native-only: clipboard image read, interactive screenshot,
    reveal-in-finder, menu-bar tray, native notifications, keychain
        │
MCP client (stdio)
└── bin/anvilterm-mcp.js — same WS protocol, agent gets a live tab
```

Key property: one unified session pool across the native window, any browser tab, and MCP. An agent's tab created via `swarm_spawn` appears live in whichever surface you're watching.
Competitors & landscape
I built AnvilTerm because I couldn't find a terminal that did all of these at once: real multi-agent orchestration with live tiles, media previews in output, cross-vendor usage tracking, MCP-first design, head-to-head benchmarking, and open source.
| Terminal | What it's great at | What I was missing |
|---|---|---|
| Warp | Polished AI agents, block-based commands, team Drive sync, MCP ecosystem, native GPU speed, enterprise-ready | Cloud-tied single-agent AI, no local-only mode, closed source, no cross-vendor benchmarking, no inline video/PDF/audio |
| iTerm2 | The macOS power-user reference — mature, fast, built-in LLM chat | No multi-agent swarm, no MCP server, no cross-vendor usage tracking, macOS only |
| Hyper | JS-extensible Electron terminal, plugin ecosystem | Stagnant, no AI, no media awareness |
| Ghostty | Beautiful native GPU terminal, blazing fast, Zig-based | No AI, no media, pure emulator by design |
| WezTerm / Alacritty / Kitty | Fast GPU-rendered, scriptable, beloved by CLI purists | No AI story, minimalist by choice |
| Tabby | Cross-platform Electron terminal with profile management | No built-in AI, no multi-agent, no MCP |
What's in AnvilTerm that I didn't find elsewhere
One workspace for every AI CLI. Run Claude, Codex, Gemini, OpenCode, Copilot and Ollama side-by-side as real CLIs. Send the same prompt to all, benchmark outputs, or route each task to the best-fit model.
Preview before you spend AI tokens. Every media or attachment is previewable in the terminal itself before it goes to an LLM.
- Images / SVGs — paste or drop, see them in the media panel and inline; confirm, then send.
- Videos — drop `.mp4` / `.webm` → inline player. Preview before piping to AI.
- PDFs — click path → scroll preview. Scan first, feed second.
- YouTube — thumbnail on hover, play on click.
- Audio — drop `.mp3` → waveform. Scrub before sending.
- Markdown tables — detected and exportable as `.xlsx` / TSV / CSV. Validate shape before asking Claude to analyze.
- Interactive screenshot — macOS crosshair → `~/.anvilterm/screenshots/` + clipboard.
Smart AI routing. Live model + effort detection (PTY banner, process args, session transcripts, shell env, settings.json) feeds usage forecasting and swarm_route — so the suggestion reflects what you're actually running.
Local-first AI sidebar — Cmd+K opens an Ollama-backed assistant with native function calling (Gemma 4, Qwen 2.5/3, Llama 3.1, Mistral Nemo). Prompts never leave the machine.
Session log recording — full PTY stream to ~/.anvilterm/sessions/, both ANSI-stripped and raw. Replay, share, audit, or feed back to Claude as @path for context.
Automation API + MCP — window.anvilterm.exec() for DevTools / Playwright. anvilterm-mcp for every MCP client.
None of these are individually revolutionary — inline images exist in Kitty, session recording exists in script(1), Ollama integration exists in iTerm2. I just wanted them in the same terminal, with real multi-agent tooling on top, under a non-predatory license, built around the way I actually work with AI CLIs.
What AnvilTerm isn't
- Not a Warp replacement if you want cloud-hosted team drive
- Not as fast as native GPU terminals for multi-megabyte floods (xterm.js in a WebView is lightweight but not Ghostty-speed)
- Not enterprise-ready today — no SSO, no audit export, no formal compliance posture yet. See roadmap
- Not a CLI purist's terminal — WezTerm / Alacritty / Kitty are still the right choice if you just want a fast, scriptable, no-frills emulator
Longer term: the Control Plane
There's room for an open-core commercial layer — client + server stay open (Apache 2.0 / FSL-1.1), and a separate hosted or self-hostable AnvilTerm Control Plane handles team-wide concerns:
- Team usage dashboards + budget alerts for every AI CLI
- Tamper-evident session log shipping to S3-compatible storage
- SSO (SAML / OIDC), SCIM, RBAC, audit replay
- Fleet policy: on-prem LLM routing, command allowlists, secret redaction
- Signed builds + SBOM for regulated environments
Same pattern as GitLab, Tailscale, Grafana.
Development
```sh
npm install
npm run dev          # Build + watch
npm run build        # One-shot build
npm test             # Unit tests (includes usage-tracker + forge tests)
npm run test:e2e     # Playwright E2E (tests/anvilterm.spec.js, tests/forge.e2e.test.js)
npm run tauri:dev    # Tauri dev mode
npm run tauri:build  # Tauri production build → .dmg + .app
```

Tests
The tracker ships with fixtures from real provider responses + banner strings. Forge has a full Playwright E2E flow (catalog → install → installed view → remove).
Usage Tracker Unit Tests
- `utils` (bucketLevel, fmtPct, stripAnsi, escapeHtml)
- `claude` (scanBanner, normalize, planFormat)
- `codex` (scanBanner, normalize, planFormat)

Tests catch breaking changes when providers change their banner format, API shape, or plan-default mapping.
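For flavor, a banner-regression test might look like this — a sketch with `node:test`; the module path and the exact parsed values are assumptions:

```js
// Sketch of a scanBanner regression test (path and expected values assumed).
import test from "node:test";
import assert from "node:assert/strict";
import { claude } from "../src/usage/providers/claude.js"; // path is an assumption

test("scanBanner catches the Claude Code startup line", () => {
  const parsed = claude.scanBanner("Opus 4.7 (1M context) with xhigh effort");
  assert.deepEqual(parsed, { model: "Opus 4.7", effort: "xhigh" }); // shape per the provider interface
});
```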
Privacy
- Usage tracker is opt-in, off by default
- Each tracker reads creds already on disk — no new auth flows
- OAuth tokens never leave your machine except as a `Bearer` header to the provider's own API
- Session logs stored locally in `~/.anvilterm/sessions/` — never uploaded
- Screenshots stored locally in `~/.anvilterm/screenshots/`
- Pasted images stored locally in `~/.anvilterm/pastes/`
- SwarmRoom messages stored locally in `~/.anvil/rooms/`
- Arena sessions stored locally in `~/.anvil/arena/`
- AI assistant uses local Ollama — no cloud calls
- Daily backup of `~/.claude/settings.json` to `~/.claude/backups/` before AnvilTerm ever writes to it
- MCP server binds to `127.0.0.1` only; no external exposure
Roadmap
- [x] Multi-tab terminal with persistent sessions
- [x] Image / table / YouTube / PDF / SVG / audio / video media detection
- [x] Dark/light mode with system detection
- [x] Voice input
- [x] AI assistant (Ollama)
- [x] Tauri desktop app + sidecar architecture (unified session pool)
- [x] Drag-drop file handling
- [x] Keyboard shortcuts
- [x] Community usage tracker (Claude + Codex)
- [x] macOS menu bar tray
- [x] Native threshold notifications
- [x] Interactive screenshot with clipboard + media archive
- [x] Session log recording
- [x] Claude Code privacy toggles
- [x] Automation API
- [x] CI/CD (GitHub Actions + npm publish)
- [x] MCP server (`anvilterm-mcp`) — terminal + TUI + swarm + arena + screenshot
- [x] Swarm engine — multi-vendor parallel agents + auto-grid + per-tile controls
- [x] SwarmRoom — inter-agent chat
- [x] Capability routing (`swarm_route`)
- [x] Lead-led team composition
- [x] AnvilArena — head-to-head battles with artifact render + export
- [x] Forge — MCP + Skills marketplace
- [x] `anvilterm-doctor` onboarding script
- [x] `anvil-swarm` skill
- [ ] Gemini provider for the usage tracker
- [ ] Tool Presets for Swarm + Arena (pre-wire MCP toolkit per agent)
- [ ] Capability-gap discovery in Forge (suggest missing MCPs for your workflow)
- [ ] Split panes
- [ ] Terminal playback from session logs
- [ ] Collaborative mode (shared session across users)
License
AnvilTerm is a multi-license project — see LICENSE.md for the full table.
- Core terminal, usage tracker, UI, server, Tauri shell — Apache License 2.0 (`LICENSE-APACHE.md`)
- Swarm + multi-agent orchestration + SwarmRoom + arena + related skills — Functional Source License v1.1 with Apache 2.0 Future License (`LICENSE-FSL.md`) — free for any non-competing use; converts to Apache 2.0 two years after release
- Future enterprise control plane — proprietary (not yet shipped)
Files carry an SPDX identifier when their license differs from the default for their directory. Third-party dependencies keep their original licenses (see NOTICE).
