hashi-bridge
v1.0.1
HASHI — Universal AI Agent Orchestration Platform. Privacy-first, multi-backend, multi-agent system with no OAuth token storage.
HASHI
About
HASHI is a privacy-first, compliant alternative to OpenClaw designed for a more secure agentic experience. It prioritizes your security by never requiring or storing your Claude, Codex, or Gemini OAuth authentication tokens, ensuring your setup remains fully compliant with current Terms of Service.
Beyond safety, HASHI introduces practical features built for real-world workflows:
- Context Recovery: Use the /handoff command to instantly restore project context when work is lost during conversation compression.
- Multi-Agent Connectivity: Connect and switch between multiple specialized agents through a single WhatsApp account.
HASHI is built to evolve. We are committed to adding the tools and functions the community needs to make AI collaboration safer and more productive.
Project History & Name Origin
The Name: HASHI (ハシ / 橋)
HASHI means "bridge" in Japanese (橋).
The kanji 橋 combines:
- 木 (tree/wood) - the natural foundation
- 喬 (tall) - reaching upward, connecting heights
Project Philosophy:
「橋」は「知」を繋ぎ、「知」は未来を拓く。 The Bridge connects Intellect; Intellect opens the future.
Just as bridges connect distant shores, HASHI connects:
- Human creativity ↔ AI capabilities
- Multiple AI systems ↔ Unified interface
- Present workflows ↔ Future possibilities
Authorship & Credits
HASHI was conceived and designed from scratch by Barry Li (https://barryli.phd), a PhD candidate at the University of Newcastle, Australia.
Coming from a non-technical background with no prior IT experience, Barry built this project through "Vibe-Coding" — every line of code was generated by AI (Claude, Gemini, and Codex) and cross-reviewed by AI. Barry's role was that of System Architect and Director, providing the vision, operational judgment, and iterative direction. This marks Barry's first publishable AI project.
This project would not exist without OpenClaw by Peter Steinberg and the OpenClaw contributors. OpenClaw provided both a cutting-edge AI agent framework and the inspirational ideas that shaped this system. Deep thanks to Peter and all OpenClaw contributors.
Development Codename: bridge-u-f
Throughout the codebase, you'll see references to bridge-u-f - this was the internal development codename used during the project's evolution from OpenClaw.
Why "bridge"? The core metaphor: HASHI connects human intent with AI intelligence, serving as a bridge between natural language requests and computational power.
Why "u-f"?
- u = universal (multi-backend, multi-agent)
- f = flexible (adaptive, modular, extensible)
Quick Technical Overview
HASHI is a universal multi-agent orchestration platform that runs entirely locally. It routes user requests to AI backends (Claude CLI, Codex CLI, Gemini CLI, or OpenRouter API) through a flexible adapter system, eliminating the need to store sensitive OAuth tokens.
Core Components:
- Onboarding - Multi-language guided setup to create your first agent
- Workbench - Local web UI (React + Vite) for multi-agent conversations
- Orchestrator - Central runtime managing agents, memory, skills, and scheduling
- Transports - Connect via Telegram, WhatsApp, or Workbench
- Skills - Modular capabilities (prompts, toggles, actions) that extend agents
- Jobs - Automated scheduling (heartbeats + cron) for periodic agent tasks
What makes HASHI different:
- No Token Storage - Uses CLI backends (gemini, claude, codex) with local authentication, not stored tokens
- Multi-Agent, Single Interface - Chat with multiple specialized agents through one WhatsApp or Telegram account
- Context Recovery - /handoff command instantly restores project context after compression
- Vibe-Coded - Every line written by AI, reviewed by AI, directed by human vision
Installation
See INSTALL.md for detailed installation instructions.
Quick Start (Recommended)
# Clone the repository
git clone https://github.com/Bazza1982/hashi.git
cd hashi
# Install Python dependencies
pip install -r requirements.txt
# Run onboarding (creates your first agent)
python onboarding/onboarding_main.py
# Start HASHI
./bin/bridge-u.sh # Linux (macOS untested)
# or
bin\bridge-u.bat # Windows
# or
python main.py          # Any platform
Prerequisites
- Python 3.10+
- At least one AI backend:
  - Gemini CLI (gemini)
  - Claude Code (claude)
  - Codex CLI (codex)
  - Or an OpenRouter API key
- Optional: Node.js 18+ (for Workbench UI)
Comprehensive Technical Details
Architecture
HASHI uses a Universal Orchestrator pattern where a single Python process manages multiple concurrent agent runtimes:
┌─────────────────────────────────────────────────────────────┐
│ Universal Orchestrator │
│ │
│ ┌────────────────┐ ┌────────────────┐ ┌────────────────┐│
│ │ Agent Runtime │ │ Agent Runtime │ │ Agent Runtime ││
│ │ (Hashiko) │ │ (Assistant) │ │ (Coder) ││
│ └────────────────┘ └────────────────┘ └────────────────┘│
│ ▲ ▲ ▲ │
│ └──────────────────┴──────────────────┘ │
│ │ │
│ ┌───────────────────────────────────────────────────────┐ │
│ │ Flexible Backend Manager │ │
│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐│ │
│ │ │ Gemini │ │ Claude │ │ Codex │ │OpenRouter││ │
│ │ │ Adapter │ │ Adapter │ │ Adapter │ │ Adapter ││ │
│ │ └──────────┘ └──────────┘ └──────────┘ └──────────┘│ │
│ └───────────────────────────────────────────────────────┘ │
│ ▲ │
│ ┌───────────────────────────────────────────────────────┐ │
│ │ Transport Layer │ │
│ │ [Telegram] [WhatsApp] [Workbench API] │ │
│ └───────────────────────────────────────────────────────┘ │
│ │
│ ┌────────────┐ ┌────────────┐ ┌────────────────────┐ │
│ │ Skill │ │ Scheduler │ │ Memory System │ │
│ │ Manager │ │ (Jobs/Cron)│ │ (Vector + Recall) │ │
│ └────────────┘ └────────────┘ └────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
Key Design Principles:
- Backend Agnostic - Agents work with any supported backend; you can switch mid-conversation
- Shared Sessions - Telegram and Workbench share the same agent queues and memory
- Explicit over Automatic - Skills, jobs, and features are user-activated, never magic
- Single Instance - File-based locking prevents multiple HASHI processes from conflicting
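The single-instance guard can be sketched with an atomic lock-file create. This is an illustrative guess at the mechanism, not HASHI's actual implementation; the lock path and PID bookkeeping are invented for the example:

```python
import os

def acquire_single_instance_lock(lock_path: str) -> bool:
    """Atomically create the lock file; return False if another instance holds it."""
    parent = os.path.dirname(lock_path)
    if parent:
        os.makedirs(parent, exist_ok=True)
    try:
        # O_CREAT | O_EXCL fails if the file already exists, so creation is atomic.
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False  # another process already owns the lock
    with os.fdopen(fd, "w") as f:
        f.write(str(os.getpid()))  # record the owning PID for diagnostics
    return True

def release_single_instance_lock(lock_path: str) -> None:
    """Remove the lock on clean shutdown; tolerate a missing file."""
    try:
        os.remove(lock_path)
    except FileNotFoundError:
        pass
```

A crash can leave a stale lock behind, which is why projects using this pattern often also check whether the recorded PID is still alive.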
File Structure
hashi/
├── main.py # Orchestrator entry point
├── agents.json # Agent definitions (name, backend, system prompt)
├── secrets.json # API keys (OpenRouter, etc.)
├── tasks.json # Heartbeat + cron job definitions
├── onboarding/ # Multi-language guided setup
│ ├── onboarding_main.py
│ └── languages/ # 9 languages (en, ja, zh-Hans, zh-Hant, ko, de, fr, ru, ar)
├── orchestrator/ # Core orchestration logic
│ ├── agent_runtime.py # Individual agent runtime (fixed backend)
│ ├── flexible_agent_runtime.py # Flex agent (switchable backend)
│ ├── scheduler.py # Heartbeat + cron job runner
│ ├── skill_manager.py # Skills system
│ ├── bridge_memory.py # Context assembly + memory retrieval
│ ├── memory_index.py # Vector similarity search
│ ├── workbench_api.py # Workbench REST API server
│ └── api_gateway.py # External API gateway (optional)
├── adapters/ # Backend adapters
│ ├── base.py # Abstract base adapter
│ ├── gemini_cli.py
│ ├── claude_cli.py
│ ├── codex_cli.py
│ └── openrouter_api.py
├── transports/ # Communication channels
│ ├── whatsapp.py # WhatsApp transport (whatsapp-web.js)
│ └── chat_router.py # Message routing logic
├── skills/ # Skill library
│ ├── README.md
│ └── [skill_name]/
│ ├── skill.md # Skill definition
│ └── run.py # Action script (optional)
├── workbench/ # Local web UI
│ ├── server/ # Node.js API server
│ └── src/ # React frontend
├── memory/ # Agent memory files
├── state/ # Runtime state
├── logs/ # Log files
└── workspaces/          # Agent working directories
Onboarding System
The onboarding program (onboarding/onboarding_main.py) provides a guided, multi-language setup experience:
Features:
- 9 Languages - English, Japanese, Simplified Chinese, Traditional Chinese, Korean, German, French, Russian, Arabic
- Environment Detection - Automatically detects installed CLI backends (Gemini, Claude, Codex)
- Fallback to OpenRouter - If no CLI is detected, prompts for OpenRouter API key
- Workbench Auto-Launch - Optionally opens Workbench UI after setup
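Environment detection of this kind typically reduces to a PATH lookup. A minimal sketch, assuming the executable names match the engine IDs listed later in this README (the real onboarding logic may probe further, e.g. checking authentication state):

```python
import shutil

def detect_cli_backends() -> list[str]:
    """Return the engine IDs of supported CLI backends found on PATH."""
    # Executable name -> engine ID, per the Supported Backends table below.
    candidates = {"gemini": "gemini-cli", "claude": "claude-cli", "codex": "codex-cli"}
    return [engine for exe, engine in candidates.items() if shutil.which(exe)]

if not detect_cli_backends():
    # Mirrors the documented fallback: prompt for an OpenRouter API key.
    print("No CLI backends found; falling back to OpenRouter API key prompt.")
```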
Onboarding Flow:
- Language selection
- Environment audit (detect Gemini/Claude/Codex CLI)
- If no CLI found → prompt for OpenRouter API key
- Display AI Ethics & Human Well-being Statement
- Create first agent in agents.json
- Create secrets.json with API keys (if needed)
- Launch Workbench
Files Created:
- agents.json - First agent definition
- secrets.json - API keys (OpenRouter, Telegram, etc.)
- .hashi_onboarding_complete - Flag file to prevent re-onboarding
Workbench
The Workbench is a local web interface for multi-agent conversations:
Architecture:
- Frontend - React + Vite, runs on http://localhost:5173
- Backend - Node.js Express server, runs on http://localhost:3003
- Bridge API - Connects to orchestrator at http://127.0.0.1:18800
Features:
- Multi-agent chat interface with agent switching
- Real-time transcript polling
- File and media upload support
- System status display
- Shared sessions with Telegram/WhatsApp
Start/Stop:
./workbench.bat # Start workbench (Windows)
./workbench-ctl.sh start # Start workbench (Linux)
./stop_workbench.bat      # Stop workbench
How It Works:
- Workbench frontend polls orchestrator /api/agents for agent list
- User sends message through Workbench → POST /api/agents/{name}/send
- Orchestrator queues message in agent runtime (same queue as Telegram)
- Backend processes message, streams response
- Workbench polls /api/agents/{name}/transcript for updates
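Assuming the endpoints listed above, a minimal bridge-API client could be sketched as follows. The JSON payload field name ("message") and the transcript response shape are guesses, not confirmed by the source:

```python
import json
import urllib.request

BRIDGE = "http://127.0.0.1:18800"  # orchestrator bridge API, per the Workbench docs

def build_send_request(agent: str, text: str) -> urllib.request.Request:
    """Build the POST that enqueues a user message (field name is an assumption)."""
    return urllib.request.Request(
        f"{BRIDGE}/api/agents/{agent}/send",
        data=json.dumps({"message": text}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def poll_transcript(agent: str):
    """Fetch the agent's transcript; the UI repeats this call on a short timer."""
    with urllib.request.urlopen(f"{BRIDGE}/api/agents/{agent}/transcript") as resp:
        return json.loads(resp.read())

# With HASHI running locally:
# urllib.request.urlopen(build_send_request("hashiko", "hello"))
```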
Connections (Transports)
HASHI supports multiple communication channels through a transport layer:
Telegram
- Default transport, enabled by default
- Requires telegram_bot_token in secrets.json
- Commands: /start, /stop, /restart, /handoff, /skill, etc.
- Supports inline keyboards, file uploads, voice messages
Setup:
- Create bot via @BotFather
- Add telegram_bot_token to secrets.json
- Add your Telegram user ID to the agent's authorized_id in agents.json
WhatsApp
- Uses whatsapp-web.js library
- Requires QR code scan on first launch
- Multi-Agent Support - Route messages to different agents using /agent <name> prefix
Setup:
- Run python scripts/link_whatsapp.py to generate QR code
- Scan QR code with WhatsApp mobile app
- Session saved in .wwebjs_auth/
- Configure routing in the agent's whatsapp_routing settings
WhatsApp Commands:
- /agent hashiko - Switch to the "hashiko" agent
- /agents - List available agents
- Normal messages → routed to the current active agent
Workbench
- Local web UI (see Workbench section above)
- No authentication required (localhost only)
- Shared sessions with Telegram/WhatsApp
Commands
HASHI agents respond to both natural language and structured commands:
Universal Commands (All Agents)
| Command | Description |
|---------|-------------|
| /start | Restart conversation, clear context |
| /stop | Pause agent, stop processing new messages |
| /restart | Hot restart agent runtime |
| /status | Show agent status, backend info, memory usage |
| /handoff | Generate context restoration prompt for new session |
| /export | Export daily transcript as markdown |
| /skill | Access skills system (see Skills section) |
| /help | Show available commands |
Memory Commands
| Command | Description |
|---------|-------------|
| /remember <text> | Store long-term memory |
| /recall <query> | Search memory by semantic similarity |
| /forget <id> | Delete specific memory |
| /memories | List all stored memories |
Job Commands
| Command | Description |
|---------|-------------|
| /heartbeat | Manage heartbeat tasks (periodic checks) |
| /cron | Manage cron jobs (scheduled tasks) |
| /job add | Add new scheduled job |
| /job list | List all jobs |
| /job delete <id> | Delete job |
Skills System
Skills are modular capabilities that extend agent functionality. Every skill is defined by a skill.md file with frontmatter + instructions.
Skill Types
| Type | Behavior | Example |
|------|----------|---------|
| Action | One-shot execution, runs a script | restart_pc, system_status |
| Prompt | Routes user input to a backend/tool | codex, gemini, claude |
| Toggle | Injects instructions while active | TTS, carbon-accounting, academic-writing |
Skill Structure
Each skill lives in skills/<skill_id>/:
skills/
carbon-accounting/
skill.md # Frontmatter + instructions
standards/
ghg-protocol-summary.md
iso14064-notes.md
skill.md Example:
---
id: carbon-accounting
name: Carbon Accounting Expert
type: toggle
description: Activate deep carbon accounting expertise (GHG Protocol, ISO 14064)
---
You now have deep expertise in carbon accounting and GHG reporting.
## Standards
- GHG Protocol Corporate Standard
- ISO 14064-1:2018
- TCFD for climate-related financial disclosure
## Reference files in this skill folder
- `standards/ghg-protocol-summary.md`
- `standards/iso14064-notes.md`
Using Skills
/skill → Show skill grid (Telegram inline keyboard)
/skill help → List all skills
/skill <name> → Show skill info
/skill <name> <prompt> → Run prompt skill with input
/skill <name> on → Enable toggle skill
/skill <name> off       → Disable toggle skill
Toggle Skills in Action:
When a toggle skill is on, its skill.md content is injected into the prompt under the --- ACTIVE SKILLS --- section. This persists across messages until the skill is explicitly turned off.
Action Skills:
Action skills execute a script (run.py or run.sh) and return the output.
Prompt Skills:
Prompt skills route user input to a specific backend or workflow (e.g., codex routes to Codex CLI).
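A minimal sketch of how a skill.md file could be parsed and how a toggle skill's body could be injected under the --- ACTIVE SKILLS --- section. This is an illustration of the mechanism described above, not the actual skill_manager.py code, which surely handles more edge cases:

```python
def parse_skill_md(text: str) -> tuple[dict, str]:
    """Split a skill.md into (frontmatter dict, instruction body)."""
    # Frontmatter sits between the first two "---" markers.
    _, frontmatter, body = text.split("---", 2)
    meta = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.strip()

def inject_active_skills(prompt: str, active_bodies: list[str]) -> str:
    """Prepend active toggle-skill instructions, as the docs describe."""
    if not active_bodies:
        return prompt
    block = "--- ACTIVE SKILLS ---\n" + "\n\n".join(active_bodies)
    return block + "\n\n" + prompt
```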
Job System (Scheduler)
HASHI includes a built-in task scheduler for automated agent actions:
Heartbeats
Heartbeats are periodic checks that run at fixed intervals:
{
"id": "email-check",
"enabled": true,
"agent": "hashiko",
"interval_seconds": 1800,
"prompt": "Check my email for urgent messages and summarize",
"action": "enqueue_prompt"
}
Common Use Cases:
- Email monitoring
- Calendar reminders
- System health checks
- Market/news updates
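A heartbeat runner reduces to an interval check per task. A simplified single-pass sketch of the idea; the _last_run bookkeeping field and the enqueue callback are invented for illustration:

```python
import time

def run_heartbeats(tasks: list[dict], enqueue, now=time.monotonic) -> None:
    """Fire each enabled heartbeat whose interval has elapsed (one scheduler pass)."""
    current = now()
    for task in tasks:
        if not task.get("enabled"):
            continue
        last = task.get("_last_run", 0.0)
        if current - last >= task["interval_seconds"]:
            # "enqueue_prompt" action: push the prompt into the agent's queue.
            enqueue(task["agent"], task["prompt"])
            task["_last_run"] = current
```

A real scheduler would call this in a loop (or from a timer) and persist last-run times across restarts.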
Cron Jobs
Cron jobs run at specific times (HH:MM format):
{
"id": "morning-briefing",
"enabled": true,
"agent": "hashiko",
"time": "08:00",
"prompt": "Provide morning briefing: weather, calendar, top news",
"action": "enqueue_prompt"
}
Common Use Cases:
- Daily reports
- Scheduled backups
- Time-sensitive reminders
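Computing the next firing time for an HH:MM entry is a small exercise; a sketch, ignoring time zones and DST, which the real scheduler may or may not handle differently:

```python
from datetime import datetime, timedelta

def next_run(time_str: str, now: datetime) -> datetime:
    """Next occurrence of an HH:MM wall-clock time strictly after `now`."""
    hour, minute = map(int, time_str.split(":"))
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(days=1)  # already passed today: fire tomorrow
    return candidate
```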
Skill-Based Jobs
Jobs can invoke skills instead of prompts:
{
"id": "daily-backup",
"enabled": true,
"agent": "coder",
"time": "03:00",
"action": "skill:backup_workspace",
"args": ""
}
Managing Jobs
Via Telegram:
/heartbeat → List heartbeat tasks
/cron → List cron jobs
/job add → Add new job (guided)
/job delete <id>  → Delete job
Via tasks.json:
{
"heartbeats": [
{ "id": "...", "enabled": true, "agent": "...", ... }
],
"crons": [
{ "id": "...", "enabled": true, "agent": "...", ... }
]
}
Backend Adapters
HASHI's adapter system provides a unified interface to multiple AI backends:
Supported Backends
| Backend | Engine ID | Requirements |
|---------|-----------|--------------|
| Gemini CLI | gemini-cli | gemini CLI installed and authenticated |
| Claude CLI | claude-cli | claude CLI installed and authenticated |
| Codex CLI | codex-cli | codex CLI installed and authenticated |
| OpenRouter API | openrouter-api | API key in secrets.json |
Adapter Architecture
All adapters inherit from BaseBackendAdapter (adapters/base.py):
class BaseBackendAdapter:
    async def send_request(self, messages, tools, thinking, stream_callback):
        """Send request to backend, stream response"""
    async def cancel_request(self):
        """Cancel in-flight request"""
Key Features:
- Streaming support (token-by-token)
- Tool use (file operations, web search, etc.)
- Thinking mode (extended reasoning)
- Graceful cancellation
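To illustrate the adapter contract, here is a toy adapter written against the interface shown above. The EchoAdapter, its token-by-token streaming, and its cancellation flag are invented for demonstration; real adapters spawn CLI subprocesses or issue HTTP requests:

```python
import asyncio

class BaseBackendAdapter:
    """Abstract interface, mirroring adapters/base.py as described above."""
    async def send_request(self, messages, tools=None, thinking=False, stream_callback=None):
        raise NotImplementedError
    async def cancel_request(self):
        raise NotImplementedError

class EchoAdapter(BaseBackendAdapter):
    """Toy adapter: streams the last user message back, token by token."""
    def __init__(self):
        self._cancelled = False

    async def send_request(self, messages, tools=None, thinking=False, stream_callback=None):
        reply = []
        for token in messages[-1]["content"].split():
            if self._cancelled:
                break  # graceful cancellation: stop mid-stream
            if stream_callback:
                stream_callback(token)
            reply.append(token)
        return " ".join(reply)

    async def cancel_request(self):
        self._cancelled = True

# asyncio.run(EchoAdapter().send_request([{"role": "user", "content": "hi there"}]))
```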
CLI Backends (Gemini, Claude, Codex)
CLI backends spawn a subprocess and communicate via stdin/stdout:
- No OAuth tokens stored
- Uses local CLI authentication (Google account, Anthropic API, OpenAI API)
- Full tool support
- Conversation memory managed by CLI
OpenRouter Backend
OpenRouter adapter uses HTTP API:
- Requires openrouter_api_key in secrets.json
- Supports multiple models via the model parameter
- Stateless (HASHI manages conversation history)
Memory System
HASHI includes a vector-based memory system for long-term context retrieval:
Memory Types
| Type | Storage | Lifetime |
|------|---------|----------|
| Short-term | In-process (agent runtime) | Current session |
| Transcript | memory/<agent>_transcript.json | Permanent, daily rollover |
| Long-term | memory/<agent>_memory.json | User-controlled |
| Vector Index | memory/<agent>_memory_index.json | Auto-synced with long-term |
How It Works
User stores memory:
/remember The user prefers formal academic writing style
Memory is vectorized:
- Text embedded using sentence-transformers (local)
- Vector + text stored in _memory.json
- Index updated in _memory_index.json
Context assembly retrieves relevant memories:
- Current user message is vectorized
- Top-K similar memories retrieved via cosine similarity
- Injected into the prompt under --- RELEVANT LONG-TERM MEMORY ---
User recalls memory:
/recall writing preferences
Returns a ranked list of relevant memories.
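Under the hood, recall is a nearest-neighbor search. A toy sketch with hand-rolled cosine similarity over an in-memory index; the real system embeds text with sentence-transformers rather than using raw vectors, and the index layout here is invented:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec: list[float], index: list[tuple[str, list[float]]], k: int = 3) -> list[str]:
    """Return the k memory texts most similar to the query vector."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

The top-K results are what gets injected under --- RELEVANT LONG-TERM MEMORY --- during context assembly.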
Memory Commands
/remember <text> → Store long-term memory
/recall <query> → Search memories
/forget <id> → Delete memory by ID
/memories         → List all memories
Memory in Prompts
Every agent request includes:
--- SYSTEM IDENTITY ---
{agent.md contents}
--- ACTIVE SKILLS ---
{active toggle skills}
--- RELEVANT LONG-TERM MEMORY ---
{top 3 retrieved memories}
--- RECENT CONTEXT ---
{last 10 conversation turns}
--- NEW REQUEST ---
{user message}
Handoff System
The /handoff command generates a context restoration prompt for recovering work after conversation compression or session loss:
Use Cases:
- Agent conversation hit token limit and compressed
- Switching to a new agent mid-project
- Resuming work after system restart
How It Works:
User: /handoff
Agent: [Generates comprehensive project summary]
--- HANDOFF CONTEXT ---
Project: Building a web scraper for research papers
Status: Parser module complete, need to add citation extraction
Files: src/parser.py (500 lines), tests/ (3 files)
Next: Implement citation regex patterns
Dependencies: beautifulsoup4, requests
---
User copies this output and sends it to a new agent:
User: [Paste handoff context]
Continue building the citation extractor...
The new agent picks up exactly where the previous one left off.
Configuration Files
agents.json
Defines your agents:
{
"global": {
"authorized_id": 123456789,
"whatsapp": {
"enabled": false,
"allowed_numbers": [],
"default_agent": "hashiko"
}
},
"agents": [
{
"name": "hashiko",
"display_name": "Hashiko",
"engine": "gemini-cli",
"model": "gemini-3-flash",
"system_md": "workspaces/hashiko/agent.md",
"workspace_dir": "workspaces/hashiko",
"is_active": true
}
]
}
secrets.json
Stores API keys and tokens:
{
"hashiko": "your_telegram_bot_token",
"openrouter-api_key": "sk-or-v1-...",
"authorized_telegram_id": 123456789
}
tasks.json
Defines scheduled jobs:
{
"heartbeats": [
{
"id": "check-email",
"enabled": true,
"agent": "hashiko",
"interval_seconds": 1800,
"prompt": "Check email for urgent messages"
}
],
"crons": [
{
"id": "morning-brief",
"enabled": true,
"agent": "hashiko",
"time": "08:00",
"prompt": "Morning briefing: weather, calendar, news"
}
]
}
Advanced Features
Multi-Agent WhatsApp Routing
Connect multiple agents to one WhatsApp account:
- Configure WhatsApp routing in each agent's config:
{
"name": "coder",
"whatsapp_enabled": true,
"whatsapp_routing": {
"keywords": ["code", "debug", "fix"],
"priority": 10
}
}
- Use /agent <name> to manually switch agents
- Messages auto-route based on keywords and priority
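The routing rule described above might reduce to a keyword match with a priority tie-break. A hypothetical sketch, since the actual matching logic is not documented here; the function name and the fallback-to-default behavior are assumptions:

```python
def route_message(text: str, agents: list[dict], default: str) -> str:
    """Pick the highest-priority agent whose routing keywords appear in the message."""
    best, best_priority = default, -1
    lowered = text.lower()
    for agent in agents:
        routing = agent.get("whatsapp_routing", {})
        priority = routing.get("priority", 0)
        if priority > best_priority and any(
            kw in lowered for kw in routing.get("keywords", [])
        ):
            best, best_priority = agent["name"], priority
    return best  # no keyword matched: fall back to the default agent
```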
Flexible Backend Switching
Agents can switch backends mid-conversation:
User: Switch to Codex for the next task
Agent: [Switches to codex-cli backend]
Configured in the agent definition as:
{
"name": "flex-agent",
"engine": "flexible",
"default_backend": "gemini-cli",
"fallback_backends": ["claude-cli", "codex-cli"]
}
API Gateway (Optional)
Enable external API access:
./bridge-u.sh --api-gateway
Exposes a REST API on http://localhost:18801:
POST /api/chat
{
"agent": "hashiko",
"message": "Hello",
"user_id": "external_user_123"
}
⚠️ Security Warning: The API Gateway has no authentication. Use firewall rules or a reverse proxy for production.
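Assuming the request shape shown above, a local test call could be built like this (only against a trusted local instance, per the warning; the response format is not documented here, so none is assumed):

```python
import json
import urllib.request

def build_chat_request(agent: str, message: str, user_id: str) -> urllib.request.Request:
    """Build the POST /api/chat request for the optional API Gateway (port 18801)."""
    return urllib.request.Request(
        "http://localhost:18801/api/chat",
        data=json.dumps({"agent": agent, "message": message, "user_id": user_id}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With the gateway running:
# urllib.request.urlopen(build_chat_request("hashiko", "Hello", "external_user_123"))
```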
Debugging and Logs
Log Files
| Log | Location | Contents |
|-----|----------|----------|
| Main orchestrator | logs/bridge_launch.log | Orchestrator startup, agent launches, errors |
| Workbench | state/workbench/logs/ | Workbench server logs |
| Onboarding | onboarding_crash.log | Onboarding errors |
Debug Mode
Enable verbose logging:
export BRIDGE_DEBUG=1
./bridge-u.sh
Common Issues
"bridge-u-f is already running"
- Another instance is active
- Kill it: ./kill-sessions.sh (Linux) or kill_bridge_u_f_sessions.bat (Windows)
"No CLI backends detected"
- Install Gemini/Claude/Codex CLI
- Or provide OpenRouter API key during onboarding
Telegram bot not responding
- Check telegram_bot_token in secrets.json
- Verify the bot token with @BotFather
- Check that authorized_id matches your Telegram user ID
WhatsApp QR code not showing
- Run python scripts/link_whatsapp.py separately
- Check that the firewall allows localhost connections
⚠️ Important Warnings
This is Version 1.0
HASHI version 1.0 is a working prototype built entirely through AI-assisted development ("Vibe-Coding"). While functional and field-tested by the author, it is not production-ready.
Known Limitations:
- Bugs - Expect edge cases and unexpected behavior
- Error Handling - Some error messages may be cryptic
- Performance - Not optimized for high-volume usage
- Security - Local-only deployment recommended; API Gateway has no auth
- Platform Support - Tested on Windows and Linux only; macOS untested
Use with Caution:
- Keep backups of agents.json, secrets.json, and memory/ files
- Do not expose the API Gateway to the public internet without proper authentication
- Test thoroughly before relying on scheduled jobs for critical tasks
- Review agent outputs for sensitive information before sharing
Reporting Issues: If you encounter bugs or unexpected behavior, please report them on the GitHub Issues page with:
- Your OS and Python version
- Backend(s) you're using (Gemini/Claude/Codex/OpenRouter)
- Relevant log excerpts from logs/bridge_launch.log
- Steps to reproduce
Version 1.0 Release
Release Date: March 15, 2026
This marks the first public release of HASHI - a milestone in demonstrating what's possible when human vision directs AI execution.
What's Included in v1.0:
- ✅ Multi-language onboarding (9 languages)
- ✅ Support for 4 backends (Gemini CLI, Claude CLI, Codex CLI, OpenRouter)
- ✅ Telegram + WhatsApp + Workbench transports
- ✅ Skills system (action, prompt, toggle)
- ✅ Job scheduler (heartbeats + cron)
- ✅ Memory system (vector-based retrieval)
- ✅ Handoff context recovery
- ✅ Multi-agent workspace management
Coming in Future Versions:
- Enhanced security (API Gateway authentication)
- Mobile app (iOS/Android)
- Cloud deployment options
- Expanded skill library
- Performance optimizations
- Voice-first interfaces
License
HASHI is released under the MIT License.
You are free to use, modify, and distribute this software. See LICENSE file for full terms.
Support & Community
- GitHub Issues: Report bugs and request features
- Discussions: Ask questions and share tips
- Author: HASHI Team
Built with Vision. Written by AI. Directed by Human. HASHI - The Bridge to the Future of AI Collaboration.
