@marcopeg/hal
A Telegram bot that provides access to AI coding agents as a personal assistant. Run multiple engines (Claude Code, GitHub Copilot, and more) across multiple projects simultaneously, each with its own dedicated Telegram bot.
Features
- Chat with your AI coding agent via Telegram
- Send images and documents for analysis
- Voice message support with local Whisper transcription
- File sending — the engine can send files back to you
- Multi-engine support — use Claude Code, GitHub Copilot, Codex, OpenCode, Cursor, or Antigravity per project
- Multi-project support — run multiple bots from a single config, each connected to a different independent directory
- Context injection — every message includes system metadata (timestamps, user info, custom values) and supports custom injections via config and per-project hooks (`.mjs`) with hot-reload
- Custom slash commands — add `.mjs` command files per-project or globally; hot-reloaded so the engine can create new commands at runtime
- Skills — `.agents/skills/` entries can be exposed as Telegram slash commands by adding `telegram: true` to their frontmatter
- Persistent conversation sessions per user (availability based on engine)
- Per-project access control, rate limiting, and logging
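As a sketch of the per-project context hooks mentioned above (the file path and export shape here are assumptions, not the documented API — see the Context docs for the real contract):

```javascript
// Hypothetical per-project context hook (e.g. hooks/context.mjs — path assumed).
// The docs say hooks can inject custom values into every message; the exact
// export shape is an assumption here.
function buildCustomContext() {
  return {
    environment: "staging",              // illustrative static value
    localTime: new Date().toISOString(), // merged with HAL's implicit metadata
  };
}

// Demo: the key/value pairs that would be injected alongside system metadata.
const ctx = buildCustomContext();
console.log(Object.keys(ctx).join(","));
```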
How It Works
This tool runs one AI coding agent subprocess per project, each in its configured working directory. You can choose your favourite engine globally, or each project can use a different engine.
The engine reads its standard config files from the project directory:
- `AGENTS.md` — Project-specific instructions and context (filename may depend on engine)
- `.agents/skills/` — Custom skills and slash commands (pattern may depend on engine)
- `.claude/settings.json` — Permissions and tool settings (Claude Code)
- `.mcp.json` — MCP server configurations
You get the full power of your chosen AI coding agent — file access, code execution, configured MCP tools — all accessible through Telegram.
Prerequisites
- Node.js 18+
- At least one supported AI coding CLI installed and authenticated (see Engines)
- A Telegram bot token per project (from @BotFather) — see Telegram
- ffmpeg (required for voice messages) — `brew install ffmpeg` on macOS
Supported Engines 🤖
HAL currently supports six engines: Claude Code, Copilot, Codex, OpenCode, Cursor, and Antigravity. Each engine has its own strengths and limitations; the feature comparison table below is kept up to date:
| Feature | OpenCode | Codex | Claude Code | Copilot | Cursor | Antigravity |
|--------|:--------:|:-----:|:------:|:-------:|:------:|:------------:|
| Instruction file | AGENTS.md | AGENTS.md | CLAUDE.md | AGENTS.md | AGENTS.md | GEMINI.md |
| Main skills folder | .agents/skills/ | .agents/skills/ | .claude/skills/ | .agents/skills/ | .agents/skills/ | .agent/skills/ |
| Per-user session | ✗ | ✗ | ✓ | ✗ | ✗ | ✓ |
| Network access | — | ✓ | — | — | — | — |
| Full disk access | — | ✓ | — | — | — | — |
| YOLO mode | — | ✓ | — | — | — | ✓ |
| Streaming progress | ✗ | ✗ | ✓ | ✗ | ✗ | ✓ |
Read more in the engine docs.
Quick Start
The easiest way to get going is the interactive setup wizard. It creates or completes your config and can start the bot when done.
```shell
# Run the wizard (recommended): it will ask for project dir, bot token, user ID, engine, etc.
npx @marcopeg/hal wiz

# Or just run HAL: if no config exists (or it's incomplete), HAL will suggest running the wizard
npx @marcopeg/hal
npx @marcopeg/hal --config-dir ./workspace
```

You can pre-fill some values so the wizard only asks for the rest (see Setup wizard):

```shell
npx @marcopeg/hal wiz --engine cursor
npx @marcopeg/hal wiz --engine codex --model gpt-5.2-codex
```

Legacy: `npx @marcopeg/hal init` still creates a config from a template (non-interactive) but is deprecated in favour of `wiz`.
Telegram
Before running HAL you need a Telegram bot token and your own Telegram user ID. Both are required to set up your first project.
- Register a bot — Get a bot token from BotFather and add it to your config.
- Find your user ID — Get your numeric user ID and add it to
allowedUserIds.
Configuration
HAL is configured via a config file in the directory where you run the CLI. Use the Setup wizard to create or complete your config interactively; you can run it directly with wiz, and HAL will suggest it if you run start with no or incomplete config. YAML is the recommended format; JSON and JSONC are also supported. See Configuration and Configuration alternatives for details. Full reference:
- Setup wizard — interactive config creation and completion, start-time suggestion, pre-fill flags
- Configuration — config files, `reference.yaml` (all keys), `examples/hal.config.yaml`, env vars, `globals`, `projects` (map), `dataDir`, log files
- Context — context injection (implicit keys, custom context, hooks)
- Commands — built-in command config (`/start`, `/help`, `/reset`, `/clean`, `/model`, `/engine`, `/git`)
- Engines — supported engines, engine config, model list, model defaults, per-engine setup
- Logging — log level, flow, persist, log file paths
- Rate limit — max messages per user per time window
Create a hal.config.yaml in your workspace (or use examples/hal.config.yaml). Use ${VAR_NAME} for secrets and set them in .env.local in the same directory where you run the HAL CLI. See Configuration — Environment variable substitution. Full key reference: docs/config/reference.yaml.
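For example, the secrets referenced with `${VAR_NAME}` might live in an `.env.local` like this (token values are placeholders):

```shell
# .env.local — same directory where you run the HAL CLI
BACKEND_BOT_TOKEN=123456789:AAExampleBackendToken
FRONTEND_BOT_TOKEN=987654321:AAExampleFrontendToken
```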
```yaml
globals:
  engine:
    name: claude
  logging:
    level: info
    flow: true
    persist: false
  rateLimit:
    max: 10
    windowMs: 60000
  access:
    allowedUserIds: [123456789]

projects:
  backend:
    cwd: ./backend
    telegram:
      botToken: "${BACKEND_BOT_TOKEN}"
    logging:
      persist: true

  frontend:
    cwd: ./frontend
    engine:
      name: copilot
      model: gpt-5-mini
    telegram:
      botToken: "${FRONTEND_BOT_TOKEN}"
```

JSON and JSONC are also supported alongside YAML. For a minimal JSON/JSONC example and supported JSONC features (comments, trailing commas), see Configuration alternatives. Use the YAML reference or example and convert if needed.
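As a sketch of what the JSONC shape might look like (assuming a direct mapping from the YAML keys; check Configuration alternatives for the canonical example):

```jsonc
// hal.config.jsonc — comments and trailing commas are allowed
{
  "globals": {
    "engine": { "name": "claude" },
    "access": { "allowedUserIds": [123456789] },
  },
  "projects": {
    "backend": {
      "cwd": "./backend",
      "telegram": { "botToken": "${BACKEND_BOT_TOKEN}" },
    },
  },
}
```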
Bot Commands
| Command | Description |
|----------|-------------------------------------------------------|
| /start | Welcome message |
| /help | Show help information |
| /reset | Wipes out all user data and resets the LLM session |
| /clean | Resets the LLM session |
Custom commands and skills
Add your own slash commands as .mjs files (project or global), or use engine skill folders that HAL exposes as commands. Custom commands can override a skill with the same name. Both are hot-reloaded.
- Custom commands — file locations, handler arguments (`args`, `ctx`, `gram`, `agent`, `projectCtx`), examples.
- Skills — SKILL.md format, per-engine directories, precedence.
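A minimal custom command might look like the following sketch. The handler arguments match the documented names, but the file location and the `gram.reply` helper are assumptions; see the Custom commands docs for the real API.

```javascript
// Hypothetical custom command, e.g. commands/uptime.mjs (the real export shape
// and gram API may differ).
async function uptimeCommand(args, ctx, gram, agent, projectCtx) {
  const seconds = Math.floor(process.uptime());
  const suffix = args.length ? ` (args: ${args.join(" ")})` : "";
  await gram.reply(`Bot process up for ${seconds}s${suffix}`);
  return seconds;
}

// Demo with a mock `gram` standing in for HAL's Telegram helper.
const sent = [];
uptimeCommand(["verbose"], {}, { reply: async (t) => sent.push(t) }, null, null)
  .then(() => console.log(sent[0]));
```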
Voice messages
Voice messages are transcribed locally with Whisper (no audio is sent to external services). See Voice messages for setup (ffmpeg, CMake, nodejs-whisper) and model options.
Sending Files to Users
The engine can send files back through Telegram. Each user has a downloads/ folder under their data directory. The engine is informed of this path in every prompt.
- The engine writes a file to the downloads folder
- The bot detects it after the engine's response completes
- The file is sent via Telegram (as a document)
- The file is deleted from the server after delivery
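From the engine's side, the delivery flow above amounts to simply writing a file into the downloads folder. A sketch (the real per-user path is provided in the prompt metadata; the temp-dir path below is purely illustrative):

```javascript
import { mkdirSync, writeFileSync, existsSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Illustrative stand-in for the per-user downloads directory HAL injects
// into every prompt.
const downloadsDir = join(tmpdir(), "hal-demo-downloads");
mkdirSync(downloadsDir, { recursive: true });

// Any file written here is detected after the engine's response completes,
// sent to the user as a Telegram document, then deleted from the server.
const reportPath = join(downloadsDir, "report.txt");
writeFileSync(reportPath, "Build summary: all checks passed.\n");
console.log(existsSync(reportPath)); // prints true
```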
Development
For local setup, running the bot, and releasing, see Development — requirements, quick start (`pnpm install`, `pnpm start`), examples folder and `.env`, release scripts, and npm token setup for publish.
Security Notice
Important: Conversations with this bot are not end-to-end encrypted. Messages pass through Telegram's servers. Do not share:
- Passwords or API keys
- Personal identification numbers
- Financial information
- Confidential business data
This bot is intended for development assistance only. Treat all conversations as potentially visible to third parties.
License
MIT
This project is forked from the CCP at Telegram.
