

telegram-ai-bridge

Your AI Agents, Fully Managed from Telegram

Create sessions, browse history, switch models, orchestrate multi-agent workflows — all from your phone.

A self-hosted Telegram bridge that gives you full session control over local AI coding agents — Claude Code, Codex, and Gemini.



Why Not Just Use Claude's Built-in Remote Features?

Claude Code now ships Remote Control (Feb 2026) and a Telegram channel plugin (Mar 2026). Both let you talk to Claude from your phone. Neither gives you session management, multi-backend support, or agent-to-agent collaboration.

| What you'd expect from phone control | Remote Control | Channels (TG plugin) | This project |
|--------------------------------------|:-:|:-:|:-:|
| Create new sessions from phone | — | — | /new |
| Browse & resume past sessions | — | — | /sessions /resume /peek |
| Switch models on the fly | — | — | /model with inline buttons |
| Claude + Codex + Gemini backends | Claude only | Claude only | All three, per-chat switchable |
| Tool approval from phone | Partial (limited UI) | Yes | Inline buttons: Allow / Deny / Always / YOLO |
| Multi-agent group collaboration | — | — | A2A bus + shared context |
| Cross-agent relay & fact-checking | — | — | /relay (works in DM + groups) |
| Real-time progress streaming | Terminal output only | — | Tool icons + 3 verbosity levels + summary |
| Rapid message batching | N/A | — | FlushGate: 800ms window, auto-merge |
| Photo / document / voice input | — | Text only | Auto-download + reference in prompt |
| Smart quick-reply buttons | — | — | Yes/No + numbered options (1. 1、 1) formats) |
| Runs as background daemon | Terminal must stay open | Session must be open | LaunchAgent / Docker |
| Survives network interruptions | 10-min timeout kills session | Tied to session lifecycle | SQLite + Redis persistence |
| Group context compression | N/A | N/A | 3-tier: recent full / middle truncated / old keywords |
| Shared context backend | N/A | N/A | SQLite / JSON / Redis (pluggable) |
| Task audit trail | — | — | SQLite: status, cost, duration, approval log |
| Loop guard for bot-to-bot | N/A | N/A | 5-layer: generation + cooldown + rate + dedup + AI |
| Stable release | Yes | Research preview | Yes (v2.2) |

What official tools do better: Remote Control streams full terminal output. Channels relay tool-approval dialogs natively. Claude Code on the web provides cloud compute without local setup. This project optimizes for a different job: persistent, multi-agent session management entirely from Telegram.

How they differ: Remote Control = your phone watches the terminal. Channels = the terminal receives phone messages. This project = your phone IS the terminal.

Supported backends:

| Backend | SDK | Status |
|---------|-----|--------|
| claude | Claude Agent SDK | Recommended |
| codex | Codex SDK | Recommended |
| gemini | Gemini Code Assist API | Experimental |

Core rule: One bot = one backend = one mental model.


Quick Start

```sh
git clone https://github.com/AliceLJY/telegram-ai-bridge.git
cd telegram-ai-bridge
bun install
bun run bootstrap --backend claude
bun run setup --backend claude
bun run check --backend claude
bun run start --backend claude
```

Recommended Deployment

Run separate bots for separate agents:

  • @your-claude-bot → Claude only
  • @your-codex-bot → Codex only
  • @your-gemini-bot → Gemini only (if you explicitly need it)

What This Unlocks

Phone-First Agent Control

Walk away from your desk. Open Telegram. /new starts a fresh session. /resume 3 picks up where you left off. /peek 5 reads a session without touching it. /model switches models on the fly. Full session lifecycle from your phone — no terminal required.

Multi-Agent Collaboration

Put @claude-bot and @codex-bot in the same Telegram group. Ask Claude to review code — Codex reads the reply via shared context and offers its own take. Use /relay codex Do you agree? for explicit cross-checking. Built-in loop guards and circuit breakers prevent runaway bot-to-bot conversations.

Always-On, Self-Hosted

macOS LaunchAgent or Docker keeps the bridge running in the background. Sessions persist in SQLite across restarts and reboots. Code and credentials never leave your machine. Owner-only access by default.


Telegram Commands

Sessions are sticky: messages continue the current session until you explicitly change it.

| Command | Description |
|---------|-------------|
| /new | Start a new session |
| /sessions | List recent sessions |
| /peek <id> | Read-only preview of a session |
| /resume <id> | Rebind current chat to an owned session |
| /model | Pick a model for the current bot |
| /status | Show backend, model, cwd, and session |
| /tasks | Show recent task history |
| /verbose 0\|1\|2 | Change progress verbosity |
| /relay <target> <msg> | Forward a message to another bot and return its reply |
| /a2a status | Show A2A bus status, peer health, and loop guard stats |
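The sticky-session lifecycle behind these commands could be sketched roughly as follows. This is an illustrative model only; class and method names such as SessionRouter are assumptions, not the project's actual API.

```typescript
// Illustrative sketch of sticky session routing: plain messages continue the
// current session; /new, /resume, and /peek manage the binding.
type Session = { id: number; history: string[] };

class SessionRouter {
  private sessions = new Map<number, Session>();
  private currentId: number | null = null;
  private nextId = 1;

  /** /new — create a fresh session and make it current */
  newSession(): Session {
    const s: Session = { id: this.nextId++, history: [] };
    this.sessions.set(s.id, s);
    this.currentId = s.id;
    return s;
  }

  /** /resume <id> — rebind the chat to an existing session */
  resume(id: number): boolean {
    if (!this.sessions.has(id)) return false;
    this.currentId = id;
    return true;
  }

  /** /peek <id> — read-only: returns history without changing the binding */
  peek(id: number): string[] | undefined {
    return this.sessions.get(id)?.history;
  }

  /** Plain messages continue the current session until you switch */
  handleMessage(text: string): Session {
    if (this.currentId === null) this.newSession();
    const s = this.sessions.get(this.currentId!)!;
    s.history.push(text);
    return s;
  }
}
```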


Multi-Bot Group Collaboration

Telegram bots cannot see each other's messages — this is a platform-level limitation. When you put Claude and Codex in the same group, neither can read the other's replies.

This project works around it with a pluggable shared context store. Each bot writes its reply after responding. When another bot is @mentioned, it reads the shared context and includes the other bot's replies in its prompt.

```
You: @claude Review this code
CC:  [reviews code, writes reply to shared store]

You: @codex Do you agree with CC's review?
Codex: [reads CC's reply from shared store, gives opinion]
```

No copy-pasting needed. Built-in limits (30 messages / 3000 tokens / 20-minute TTL) prevent context bloat.
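The write-then-read flow and the bloat limits above could be sketched like this. The interface and method names are inferred from this README, not taken from the project's source; the token count is a crude length-based estimate.

```typescript
// Illustrative in-memory shared-context store: bots write replies after
// responding; a mentioned bot reads peers' replies to build its prompt.
// Limits mirror the README: max messages, rough token cap, and a TTL.
interface SharedEntry { bot: string; text: string; at: number }

class InMemorySharedContext {
  private entries: SharedEntry[] = [];
  constructor(
    private maxMessages = 30,
    private maxTokens = 3000,
    private ttlMs = 20 * 60 * 1000,
  ) {}

  /** Each bot writes its reply after responding. */
  write(bot: string, text: string, now = Date.now()): void {
    this.entries.push({ bot, text, at: now });
    this.prune(now);
  }

  /** A mentioned bot reads the other bots' replies for its prompt. */
  readForPrompt(selfBot: string, now = Date.now()): string[] {
    this.prune(now);
    return this.entries
      .filter(e => e.bot !== selfBot)
      .map(e => `${e.bot}: ${e.text}`);
  }

  private prune(now: number): void {
    // Drop expired entries, then enforce message and rough token caps.
    this.entries = this.entries.filter(e => now - e.at < this.ttlMs);
    while (this.entries.length > this.maxMessages) this.entries.shift();
    const tokens = (s: string) => Math.ceil(s.length / 4); // crude estimate
    while (this.entries.reduce((n, e) => n + tokens(e.text), 0) > this.maxTokens) {
      this.entries.shift();
    }
  }
}
```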

Storage Backend Comparison

| Backend | Dependencies | Concurrency | Best For |
|---------|-------------|-------------|----------|
| sqlite (default) | None (built-in) | WAL mode, single-writer | Single bot, low concurrency |
| json | None (built-in) | Atomic write (tmp+rename) | Zero-dependency deployment |
| redis | ioredis | Native concurrency + TTL | Multi-bot, Docker environment |

Set sharedContextBackend in config.json:

```json
{
  "shared": {
    "sharedContextBackend": "redis",
    "redisUrl": "redis://localhost:6379"
  }
}
```

Note: Bots only respond when explicitly @mentioned or replied to. They don't auto-reply to each other.

A2A: Agent-to-Agent Communication

Beyond passive shared context, A2A lets bots actively respond to each other in group chats. When one bot replies to a user, the A2A bus broadcasts the response to sibling bots. Each sibling independently decides whether to chime in.

```
You:    @claude What's the best way to handle retries?
Claude: [responds with retry pattern advice]
         ↓ A2A broadcast
Codex:  [reads Claude's reply, adds: "I'd also suggest exponential backoff..."]
```

Built-in safety:

  • Loop guard: Max 2 generations of bot-to-bot replies per conversation turn
  • Cooldown: 60s minimum between A2A responses per bot
  • Circuit breaker: Auto-disables unreachable peers after 3 failures
  • Rate limit: Max 3 A2A responses per 5-minute window
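The loop guard, cooldown, and rate limit above could be sketched together as a single gate, using the thresholds from this list. This is an assumed structure for illustration, not the project's real implementation; the circuit breaker for unreachable peers is omitted for brevity.

```typescript
// Illustrative A2A safety gate: generation-depth loop guard, per-bot
// cooldown, and a sliding-window rate limit.
class A2AGuard {
  private lastReplyAt = -Infinity;
  private window: number[] = []; // timestamps of recent A2A responses

  constructor(
    private maxGenerations = 2,      // bot-to-bot reply depth per turn
    private cooldownMs = 60_000,     // min gap between A2A responses
    private rateLimit = 3,           // max responses per rate window
    private rateWindowMs = 5 * 60_000,
  ) {}

  /** Returns true if this bot may respond to a broadcast right now. */
  allow(generation: number, now: number): boolean {
    if (generation >= this.maxGenerations) return false;        // loop guard
    if (now - this.lastReplyAt < this.cooldownMs) return false; // cooldown
    this.window = this.window.filter(t => now - t < this.rateWindowMs);
    if (this.window.length >= this.rateLimit) return false;     // rate limit
    this.lastReplyAt = now;
    this.window.push(now);
    return true;
  }
}
```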

Important: A2A only works in group chats. Private/DM conversations are never broadcast — this prevents cross-bot message leakage between separate DM windows.

Enable in config.json:

```json
{
  "shared": {
    "a2aEnabled": true,
    "a2aPorts": { "claude": 18810, "codex": 18811 }
  }
}
```

Each bot instance listens on its assigned port. Peers are auto-discovered from a2aPorts (excluding self).
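Peer discovery from a2aPorts could look like the following. The assumed behavior is only what the sentence above states: every configured backend except the current bot becomes a peer on localhost; the function name and URL scheme are illustrative.

```typescript
// Illustrative peer discovery: every entry in a2aPorts except the current
// bot is treated as a local peer.
function discoverPeers(
  self: string,
  a2aPorts: Record<string, number>,
): { name: string; url: string }[] {
  return Object.entries(a2aPorts)
    .filter(([name]) => name !== self)
    .map(([name, port]) => ({ name, url: `http://127.0.0.1:${port}` }));
}
```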

/relay — Cross-Bot Point-to-Point Messaging

While A2A broadcast is group-only, /relay works everywhere — including DMs. It sends a message to another bot's AI backend and returns the response directly.

```
/relay codex What do you think of this approach?
```

Aliases for less typing: cc=claude, cx=codex, gm=gemini.

Reply-to forwarding: Long-press a bot's reply and respond with /relay <target> [instruction] — the replied-to message is automatically included in the relay prompt. No copy-pasting needed.

```
Claude:  [reviews your code]
You:     (reply to Claude's message) /relay cx Do you agree with this review?
Codex:   [sees Claude's full review + your instruction, gives opinion]
```

This is ideal for fact-checking and cross-review workflows.


Architecture

```
Telegram bot
  → start.js
  → config.json
  → bridge.js
  → executor (direct | local-agent)
  → backend adapter (claude | codex | gemini)
  → local credentials and session files
```

Each bot instance keeps its own Telegram token, SQLite DBs, credential directory, and model settings.
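The "backend adapter" step in the pipeline above suggests a small contract per backend. The interface below is a hypothetical shape for illustration; the real modules under adapters/ may look quite different.

```typescript
// Hypothetical backend adapter contract: each backend (claude, codex,
// gemini) plugs into the bridge through one shared interface.
interface BackendAdapter {
  readonly name: "claude" | "codex" | "gemini";
  /** Run a prompt within a session; streams progress lines, returns the reply. */
  run(
    sessionId: string,
    prompt: string,
    onProgress?: (line: string) => void,
  ): Promise<string>;
}

// Minimal stub adapter used only to illustrate the contract.
const echoAdapter: BackendAdapter = {
  name: "claude",
  async run(sessionId, prompt, onProgress) {
    onProgress?.(`session ${sessionId}: thinking...`);
    return `echo: ${prompt}`;
  },
};
```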


Configuration

bun run bootstrap --backend claude generates a starter config.json, or copy config.example.json.

```json
{
  "shared": {
    "ownerTelegramId": "123456789",
    "cwd": "/Users/you",
    "httpProxy": "",
    "defaultVerboseLevel": 1,
    "executor": "direct",
    "tasksDb": "tasks.db",
    "sharedContextBackend": "sqlite",
    "sharedContextDb": "shared-context.db",
    "redisUrl": ""
  },
  "backends": {
    "claude": {
      "enabled": true,
      "telegramBotToken": "...",
      "sessionsDb": "sessions.db",
      "model": "claude-sonnet-4-6",
      "permissionMode": "default"
    },
    "codex": {
      "enabled": true,
      "telegramBotToken": "...",
      "sessionsDb": "sessions-codex.db",
      "model": ""
    },
    "gemini": {
      "enabled": false,
      "telegramBotToken": "",
      "sessionsDb": "sessions-gemini.db",
      "model": "gemini-2.5-pro",
      "oauthClientId": "",
      "oauthClientSecret": "",
      "googleCloudProject": ""
    }
  }
}
```

config.json is gitignored. Sessions run until completion — no hard timeout (a soft watchdog logs after 15 minutes without aborting).
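The soft watchdog described above could be sketched as a timer that logs but never cancels. This is an illustrative wrapper under assumed names, not the bridge's actual code.

```typescript
// Illustrative soft watchdog: logs when a task runs past a threshold,
// but the task itself is never aborted.
function withSoftWatchdog<T>(
  task: Promise<T>,
  log: (msg: string) => void,
  thresholdMs = 15 * 60 * 1000,
): Promise<T> {
  const timer = setTimeout(
    () => log(`task still running after ${thresholdMs / 1000}s (not aborting)`),
    thresholdMs,
  );
  // Clear the timer whether the task resolves or rejects.
  return task.finally(() => clearTimeout(timer));
}
```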

Inspect resolved config: bun run config --backend claude (secrets redacted).

Claude:

  • Requires local login state under ~/.claude/
  • Supports permissionMode: default or bypassPermissions

Codex:

  • Requires local login state under ~/.codex/
  • Optional model override; empty string uses Codex defaults

Gemini:

  • Experimental compatibility backend, not primary
  • Requires ~/.gemini/oauth_creds.json, oauthClientId, oauthClientSecret
  • Uses Gemini Code Assist API mode, not full CLI terminal control
  • Recommended only when you intentionally need Gemini support

macOS LaunchAgent

Generate and install the launch agent for each backend:

```sh
./scripts/install-launch-agent.sh --backend claude --install
./scripts/install-launch-agent.sh --backend codex --install
```

The wrapper runs bun run check before bun run start, so bad config fails fast.

Default labels: com.telegram-ai-bridge, com.telegram-ai-bridge-codex, com.telegram-ai-bridge-gemini.

```sh
launchctl print gui/$(id -u)/com.telegram-ai-bridge
launchctl kickstart -k gui/$(id -u)/com.telegram-ai-bridge
tail -f bridge.log
```

If you see 409 Conflict, another process is polling the same bot token.

Docker

```sh
docker build -t telegram-ai-bridge .

docker run -d \
  --name tg-ai-bridge-claude \
  -v $(pwd)/config.json:/app/config.json:ro \
  -v ~/.claude:/root/.claude \
  telegram-ai-bridge --backend claude
```

Swap credential mount and --backend for other backends. See docker-compose.example.yml for a Compose starter.

Project Layout

  • start.js — CLI entry for start, bootstrap, check, setup, config
  • config.js — Config loader and setup wizard
  • bridge.js — Telegram bot runtime
  • sessions.js — SQLite session persistence
  • shared-context.js — Cross-bot shared context entry point
  • shared-context/ — Pluggable backends (SQLite / JSON / Redis)
  • a2a/ — Agent-to-agent communication bus, loop guard, peer health
  • adapters/ — Backend integrations
  • launchd/ — LaunchAgent template for macOS
  • scripts/ — Install wrapper and runtime launcher
  • docker-compose.example.yml — Compose starter

Executors

  • direct — runs the backend adapter in-process (default)
  • local-agent — communicates with a local agent subprocess over JSONL stdio

Set in config.json at shared.executor, or override with BRIDGE_EXECUTOR.
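The resolution order stated above (environment variable over config, defaulting to direct) could be sketched as:

```typescript
// Illustrative executor resolution: BRIDGE_EXECUTOR wins over
// shared.executor, falling back to "direct". Names of the two executors
// come from the README; the function itself is an assumption.
type Executor = "direct" | "local-agent";

function resolveExecutor(
  shared: { executor?: string },
  env: Record<string, string | undefined>,
): Executor {
  const raw = env.BRIDGE_EXECUTOR ?? shared.executor ?? "direct";
  if (raw !== "direct" && raw !== "local-agent") {
    throw new Error(`unknown executor: ${raw}`);
  }
  return raw;
}
```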


How It Fits Together

Three ways to make AI agents talk to each other — different protocols, different scenarios:

| Layer | Protocol | How | Scenario |
|-------|----------|-----|----------|
| Terminal | MCP | Built-in codex mcp-server + claude mcp serve, zero code | CC ↔ Codex direct calls in your terminal |
| Telegram Group | Custom A2A | This project's A2A bus, auto-broadcast | Multiple bots in one group, chiming in |
| Telegram DM | Custom A2A | This project's /relay command | Explicit cross-bot forwarding from phone |
| Server | Google A2A v0.3.0 | openclaw-a2a-gateway | OpenClaw agents across servers |

MCP vs A2A: MCP is a tool-calling protocol (I invoke your capability). A2A is a peer communication protocol (I talk to you as an equal). CC calling Codex via MCP is using Codex as a tool — not two agents chatting.

Terminal: CLI-to-CLI via MCP (No Telegram Needed)

Claude Code and Codex each have a built-in MCP server mode. Register them with each other and they can call each other directly — no bridge, no Telegram, no custom code:

```sh
# In Claude Code: register Codex as an MCP server
claude mcp add codex -- codex mcp-server
```

```toml
# In Codex: register Claude Code as an MCP server (~/.codex/config.toml)
[mcp_servers.claude-code]
type = "stdio"
command = "claude"
args = ["mcp", "serve"]
```

Telegram: This Project

Groups use A2A auto-broadcast. DMs use /relay. See sections above.

Server: openclaw-a2a-gateway

For OpenClaw agents communicating across servers via the Google A2A v0.3.0 standard protocol. A different system entirely — see openclaw-a2a-gateway.

Development

```sh
bun test
```

GitHub Actions runs the same suite on every push and pull request.

License

MIT