# boten-gemma
v0.2.1
The AI assistant that asks before it acts. Self-hosted, multi-channel, confirmation-first.
## Quick Start
```bash
npm install -g boten-gemma
gemma
```

That's it. On first run, an interactive setup wizard walks you through authentication, channels, and workspace configuration. Then open localhost:18789.
Or try it without installing:

```bash
npx boten-gemma
```

## Why Gemma?
AI agents are powerful, but most give your LLM unrestricted access to your machine. One hallucinated `rm -rf` and you're in trouble. And when something does go wrong, you're staring at a chat log trying to figure out what happened.
Gemma is built around two ideas:
The Confirmation Gate — Every file write, shell command, and API call goes through a gate. You see exactly what the AI wants to do, and you decide whether it happens. This isn't a policy check — the gate is the only code path to tool execution, enforced by module architecture.
Extensible Panels — Build custom React or HTML panels that live inside Gemma's UI. Dashboards, data visualizers, control surfaces — anything you need to work alongside your AI visually. Gemma can even build them for you on the fly.
```
You (Telegram / Web / CLI)
            │
            ▼
┌──────────────────────────────────────┐
│               Gateway                │
│                                      │
│  LLM ──► Agent Loop                  │
│              │                       │
│     ┌────────▼──────────┐            │
│     │ Confirmation Gate │ ← approve / deny here
│     └────────┬──────────┘            │
│              │                       │
│     ┌────────▼──────────┐            │
│     │ Tool Executor     │ only reachable through the gate
│     └───────────────────┘            │
│                                      │
│  ┌────────────────────────────────┐  │
│  │             Panels             │  │
│  │  Custom React/HTML UIs with    │  │
│  │  backend services & SDK        │  │
│  └────────────────────────────────┘  │
└──────────────────────────────────────┘
```

## Features
| | Feature | What it does |
|---|---------|-------------|
| 🔒 | Confirmation Gate | Every tool call needs your approval. Configurable auto-approve rules. Cowboy mode for speed (but exec always asks). Full audit log. |
| 🧩 | Panels | Build custom React/HTML UIs inside Gemma. Backend services with RPC, scoped storage, custom tools. Gemma can scaffold them for you. |
| 📱 | Multi-Channel | Same assistant on Telegram (inline keyboard approvals), Web UI (diff previews), and CLI. Shared sessions. |
| 🤖 | Pluggable LLMs | Gemini (default), OpenAI-compatible (Ollama, LM Studio, vLLM), or Google ADC. |
| 🔌 | MCP Support | Any Model Context Protocol server. Hot-reload at runtime. |
| 📝 | Skills | Teach new behaviors with markdown files. No code required. |
| ⏰ | Cron & Heartbeat | Schedule recurring tasks with a safe-tool whitelist. Periodic check-ins. |
| 🧠 | Memory | Persistent workspace, semantic search via embeddings, session transcripts. |
| 🛠️ | 16 Built-in Tools | File I/O, shell, web search/fetch, cron, panels, config, and more — all gated. |
| 🎨 | First-Run Bootstrap | Gemma interviews you on first launch — learns your name, preferences, and work style. |
## The Confirmation Gate
The gate is not a feature flag or a wrapper. It's a structural guarantee: the tool executor module is imported by exactly one file (`gate.ts`) and never re-exported. There is no other code path to execute a tool.
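As a single-file illustration of that guarantee (a sketch with names of our own choosing, not Gemma's actual source, where the executor lives in its own module):

```typescript
// A self-contained sketch of the gate-only code path. In Gemma the
// executor sits in a separate module imported solely by gate.ts; here
// one file stands in for that structure. All names are illustrative.
type Decision = "allow" | "deny";
interface ToolCall { tool: string; args: string }

// Not exported: nothing outside this module can call it directly.
function executeTool(call: ToolCall): string {
  return `ran ${call.tool}(${call.args})`;
}

// The one and only call site for executeTool.
async function runThroughGate(
  call: ToolCall,
  confirm: (c: ToolCall) => Promise<Decision>,
): Promise<string> {
  if ((await confirm(call)) !== "allow") {
    throw new Error(`denied: ${call.tool}`);
  }
  return executeTool(call);
}

runThroughGate({ tool: "read", args: "README.md" }, async () => "allow")
  .then(console.log); // → ran read(README.md)
```

Because `executeTool` is never exported, the type system and module boundary enforce the invariant; no policy check has to remember to run.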
Auto-approve rules let you skip confirmation for safe operations:

```json
{
  "confirm": {
    "autoApprove": { "internalReads": true },
    "rules": [
      { "action": "allow", "tool": "read" },
      { "action": "allow", "tool": "exec", "args": "git *" },
      { "action": "deny", "tool": "write", "args": "/etc/*" }
    ]
  }
}
```

Rules use glob patterns with deny > ask > allow priority. Read-only tools (glob, grep, ls) are auto-approved by default.
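The deny > ask > allow resolution can be sketched like this (an illustrative re-implementation, not Gemma's actual matcher; unmatched tools are assumed to fall back to "ask"):

```typescript
type Action = "allow" | "deny" | "ask";
interface Rule { action: Action; tool: string; args?: string }

// Convert a simple glob ("git *") into an anchored RegExp.
function globToRegExp(glob: string): RegExp {
  const escaped = glob
    .replace(/[.+^${}()|[\]\\]/g, "\\$&")
    .replace(/\*/g, ".*")
    .replace(/\?/g, ".");
  return new RegExp(`^${escaped}$`);
}

// Resolve with deny > ask > allow priority; no match means "ask".
function resolve(rules: Rule[], tool: string, args = ""): Action {
  const matched = rules.filter(
    (r) => r.tool === tool && (!r.args || globToRegExp(r.args).test(args)),
  );
  for (const action of ["deny", "ask", "allow"] as Action[]) {
    if (matched.some((r) => r.action === action)) return action;
  }
  return "ask";
}

const rules: Rule[] = [
  { action: "allow", tool: "read" },
  { action: "allow", tool: "exec", args: "git *" },
  { action: "deny", tool: "write", args: "/etc/*" },
];

console.log(resolve(rules, "exec", "git status"));  // → allow
console.log(resolve(rules, "exec", "curl evil"));   // → ask (no matching rule)
console.log(resolve(rules, "write", "/etc/hosts")); // → deny
```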
Cowboy mode auto-approves everything except shell commands — `exec` always requires confirmation, even in cowboy mode. Every decision is logged to `~/.gemma/logs/audit.jsonl`.
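Since the audit log is JSONL, it's easy to post-process. A minimal sketch (the field names `tool` and `decision` are our assumption for illustration — check the actual entries in `~/.gemma/logs/audit.jsonl`):

```typescript
// Tally decisions per tool from audit-log lines in JSONL form.
// Sample lines stand in for the real file; field names are assumed.
const sampleLines = [
  '{"ts":"2025-01-01T12:00:00Z","tool":"exec","decision":"approve"}',
  '{"ts":"2025-01-01T12:01:00Z","tool":"write","decision":"deny"}',
  '{"ts":"2025-01-01T12:02:00Z","tool":"exec","decision":"approve"}',
];

function tally(lines: string[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const line of lines) {
    const entry = JSON.parse(line) as { tool: string; decision: string };
    const key = `${entry.tool}:${entry.decision}`;
    counts[key] = (counts[key] ?? 0) + 1;
  }
  return counts;
}

console.log(tally(sampleLines)); // exec:approve → 2, write:deny → 1
```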
## Panels
Panels are custom UIs that run inside Gemma's web interface as sandboxed iframes. Build dashboards, data views, or control surfaces — in plain HTML or React + TypeScript + shadcn/ui.
Ask Gemma to build one:
"Create a panel that shows my project's git log with a graph"
Or build your own with the Panel SDK:
```typescript
const panel = new GemmaPanel();

// Scoped storage — read/write data private to your panel
const data = await panel.readData('settings.json');
await panel.writeData('cache.json', results);

// Send messages through the chat pipeline
panel.sendChat('Summarize the latest git commits');

// Call your panel's backend service
const stats = await panel.callBackend('getStats', { range: '7d' });

// Listen for real-time events from your backend
panel.onEvent('update', (data) => render(data));
```

Panel backends can register their own tools that Gemma can call — still gated through the confirmation system. Panels get automatic theme sync, hot-reload in dev mode, and workspace memory access.
## Channels
### Web UI
React + shadcn/ui with streaming, diff previews for approvals, and panel support.
```bash
gemma start
# → localhost:18789
```

### Telegram
Approve or deny tool calls with inline keyboard buttons, right from your phone.
"telegram": {
"enabled": true,
"token": "bot-token",
"allowFrom": [your-id]
}CLI
Color-coded output, text-based confirmations.
```bash
gemma start --cli
```

## Use Any LLM
```json5
// Gemini (default — free via OAuth, or use API key)
{ "model": { "provider": "gemini", "model": "gemini-2.5-flash" } }

// Ollama (local)
{ "model": { "provider": "openai-compat",
  "openaiCompat": { "baseUrl": "http://localhost:11434/v1", "model": "llama3.2" } } }

// Any OpenAI-compatible server (LM Studio, vLLM, etc.)
{ "model": { "provider": "openai-compat",
  "openaiCompat": { "baseUrl": "http://your-server/v1", "model": "your-model" } } }
```

## Configuration
`gemma.json` with JSON5 comments, `${ENV_VAR}` expansion, and Zod validation:
```json5
{
  "model": { "provider": "gemini", "auth": "api-key" },
  "channels": {
    "web": { "enabled": true },
    "telegram": { "enabled": true, "token": "${TELEGRAM_BOT_TOKEN}" }
  },
  "confirm": { "autoApprove": { "internalReads": true } },
  "mcp": {
    "servers": {
      "filesystem": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-filesystem", "~"] }
    }
  }
}
```

See gemma.example.json for all options.
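The `${ENV_VAR}` expansion step can be sketched as follows (a minimal illustration of the idea; Gemma's actual loader also strips JSON5 comments and validates with Zod):

```typescript
// Expand ${VAR} placeholders in a raw config string from the environment.
// Erroring on a missing variable is our choice for the sketch; the real
// loader may behave differently.
function expandEnv(
  raw: string,
  env: Record<string, string | undefined> = process.env,
): string {
  return raw.replace(/\$\{([A-Z0-9_]+)\}/g, (_, name: string) => {
    const value = env[name];
    if (value === undefined) {
      throw new Error(`missing environment variable: ${name}`);
    }
    return value;
  });
}

const raw = '{ "telegram": { "token": "${TELEGRAM_BOT_TOKEN}" } }';
console.log(expandEnv(raw, { TELEGRAM_BOT_TOKEN: "123:abc" }));
// → { "telegram": { "token": "123:abc" } }
```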
## Deployment
```bash
# Install and run as a background service
npm install -g boten-gemma
gemma setup
gemma daemon install   # launchd (macOS) or systemd (Linux)
gemma daemon start
```

All state lives in ~/.gemma/. Docker is also available in docker/.
## Development
```bash
git clone https://github.com/challehallberg/gemma.git
cd gemma && npm install

npm run dev      # Gateway with auto-reload
npm run dev:ui   # Vite dev server on :5173 with HMR
npm run build    # Full production build
```

## Contributing
Contributions welcome! See the open issues for things to work on.
