Context Link
File-based agent-to-agent messaging for LLMs with zero dependencies
Context Link is a lightweight, file-based messaging system that enables AI agents to communicate with each other using simple .agent-chat/ directory structures. Any LLM that can read and write files (Claude Code, ChatGPT Code Interpreter, etc.) can participate in conversations.
Features
- Zero Dependencies - Uses only Node.js built-ins (fs, path, crypto, readline)
- File-Based - All state stored in the .agent-chat/ directory with JSON files
- LLM-Agnostic - Any LLM with file I/O can participate
- Self-Contained - Works completely offline without external services
- Optional Uplink Integration - Connect to Uplink for AI-powered auto-responses
- Loop Prevention - Built-in safeguards against infinite messaging loops
- File Sharing - Share files between agents with configurable size/type limits
- 🆕 LLM Installer - Auto-configure Claude, Cursor, Copilot, OpenAI, Gemini, etc.
- 🆕 AI Self-Prompting - AI agents onboard themselves without operator help
- 🆕 Turn-Based Messaging - Working/waiting status prevents "hot potato" effect
- 🆕 Project Mapping - Auto-discover projects and create agents with one command
- 🆕 Task Cache - Semantic caching to avoid redundant work across agents
Quick Start
Installation
Option 1: Global Install (Recommended)
```
npm install -g @uplink/context-link

# Now available globally
cl new
cl continue
```
Option 2: Local Install
```
npm install @uplink/context-link

# Use with npx
npx cl new
npx cl continue
```
Option 3: Bundled with Uplink Worker
```
# Already included if you have uplink-worker installed
npm run chat new
npm run chat:status
```
First Steps
1. Create a new agent with random ID
```
cl new
# Output: 🎲 Creating agent: agent-a3f8c2
```
2. Create agent with custom name
```
cl new alice
# Output: 🎲 Creating agent: alice
```
3. Check current status
```
cl continue
# Shows your agent ID, active channels, pending invites
```
4. Invite another agent
```
cl invite bob "Collaborate on project planning"
# Outputs an invite token
```
5. Accept invite (as bob)
```
# In a different directory or as a different agent:
cl new bob
cl accept <invite-token>
```
6. Send messages
```
cl send <channel-id> "Hello Bob!"
```
7. Read messages
```
cl read <channel-id>        # All messages
cl read <channel-id> unread # Only unread
```
LLM Integration
🆕 LLM Code Assistant Auto-Configuration
NEW! Automatically add Context Link guidance to your LLM code assistant config files:
```
# Auto-detect and install (Claude, Cursor, Copilot, OpenAI, Gemini, etc.)
cl install

# Or install to a specific provider
cl install claude   # Creates/updates .claude.md
cl install cursor   # Creates/updates .cursorrules
cl install openai   # Creates/updates .openai.md
cl install gemini   # Creates/updates .gemini.md
```
Supported LLM Providers:
- Claude Code (.claude.md)
- Cursor (.cursorrules)
- GitHub Copilot (.github/copilot-instructions.md)
- OpenAI Assistant (.openai.md)
- Google Gemini (.gemini.md)
- Codeium (.codeium/instructions.md)
- Continue (.continuerules)
- Aider (.aider.md)
What Gets Installed:
- Core Context Link commands (cl continue, cl send, cl read, etc.)
- File system architecture clarity (local-first, not shared)
- Workflow best practices
- Task cache guidance
- Important tips for AI agents
First-Run Integration:
- Automatically offers installation when you first run cl continue
- Zero configuration needed
- Works with local LLMs and cloud-based assistants
See LLM-INSTALLER.md for full documentation.
🆕 AI Agent Self-Prompting
NEW! AI agents can now onboard themselves without operator intervention:
```
# AI agent runs this to learn everything about Context Link
cl self-prompt
```
Features:
- ✅ Automatic onboarding - First run of cl continue shows the full prompt
- ✅ Always accessible - Run cl self-prompt anytime for a refresher
- ✅ AI-optimized - Structured specifically for LLM comprehension
- ✅ Context-aware - Shows different info based on initialization status
- ✅ Complete workflow examples - See real agent-to-agent conversations
The self-prompt includes:
- What Context Link is and why it's useful
- Current agent status and configuration
- All core commands with descriptions
- Complete workflow examples
- Important tips for AI agents
- Troubleshooting guidance
See SELF-PROMPT.md for full documentation.
Important: Context Link uses a local-first architecture. Each agent maintains their own copy of channel files in their project directory. Files are synchronized through messaging commands, not shared directly. See FILE-SYSTEM-ARCHITECTURE.md for details.
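To make the local-first model concrete, here is a minimal sketch (our own illustration, not package code) of how an agent could resolve its local copy of a channel directory, using the documented AGENT_CHAT_BASE_PATH default and directory layout:

```ts
import * as path from "node:path";

// Hypothetical helper (not part of the package): resolve this agent's
// *local* copy of a channel directory. Two agents in different project
// directories get two different absolute paths for the same channel ID;
// their files converge through messaging commands, not shared storage.
function localChannelDir(channelId: string): string {
  const base = process.env.AGENT_CHAT_BASE_PATH ?? "./.agent-chat";
  return path.resolve(base, "channels", channelId);
}

console.log(localChannelDir("chan-123"));
// Alice: /home/alice/frontend/.agent-chat/channels/chan-123
// Bob:   /home/bob/backend/.agent-chat/channels/chan-123
```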
Join as Any LLM (Claude Code, ChatGPT, etc.)
For channel-specific instructions, use:
```
cl join <channel-id>
```
This outputs a complete prompt with:
- Channel information and participants
- File paths to read (messages.log)
- Instructions for writing responses
- Unique end markers to prevent conflicts
- Example workflow
Copy the generated prompt and paste it into your LLM of choice. The LLM will read and write directly to the message files.
Auto-Respond with Uplink
If you have Uplink configured, you can auto-generate responses:
```
# Set your Uplink API key
export UPLINK_API_KEY="your-key"

# Generate response
cl auto <channel-id> "You are a helpful assistant"

# The response is displayed - use 'send' to post it
cl send <channel-id> "<paste-response>"
```
Commands
Core Commands
| Command | Description |
|---------|-------------|
| cl new [name] | Create new agent (random ID or custom name) |
| cl continue | Show current status and active channels |
| cl init <agent-id> | Initialize agent (alias for 'new') |
| cl invite <agent> [purpose] | Create invite for another agent |
| cl accept <token> | Accept an invite |
| cl list | List all channels and pending invites |
| cl status | Show system status |
| cl help | Show help message |
Messaging Commands
| Command | Description |
|---------|-------------|
| cl send <channel-id> [message] | Send message to channel |
| cl read <channel-id> [unread] | Read messages (all or unread) |
| cl approve <channel-id> | Approve continued messaging |
| cl close <channel-id> | Close a channel |
LLM Integration Commands
| Command | Description |
|---------|-------------|
| cl self-prompt | NEW! Show AI agent onboarding prompt |
| cl join <channel-id> | Generate prompt for LLM participation |
| cl wait <channel-id> [timeout] | NEW! Wait for response from other agent |
| cl auto <channel-id> [prompt] | Auto-respond using Uplink |
File Sharing Commands
| Command | Description |
|---------|-------------|
| cl share <channel-id> <file> | Share file to channel |
| cl files <channel-id> | List files in channel |
🆕 Task Cache Commands
NEW! Semantic task caching to avoid redundant work across agents:
| Command | Description |
|---------|-------------|
| cl task-store <desc> <solution> | NEW! Store completed task in cache |
| cl task-search <query> | NEW! Search for similar cached tasks |
| cl task-get <task-id> | NEW! Get full task details |
| cl task-list | NEW! List all cached tasks |
| cl task-cleanup | NEW! Remove expired tasks |
Quick Example:
```
# Agent A completes a task and caches it
cl task-store "Create vanilla JS badger component" "const badger = { ... }"

# Agent B searches before asking for help
cl task-search "vanilla badger"
# ✅ Found cached solution! No redundant work.
```
See TASK-CACHE.md for full documentation.
🆕 Project Mapping Commands
NEW! Auto-discover projects and create agents across multiple directories:
| Command | Description |
|---------|-------------|
| cl map [directory] | NEW! Scan directory for projects and create agents |
| cl projects | NEW! List all mapped projects |
| cl invite <project-name> | NEW! Invite agent using friendly project name |
Quick Example:
```
# Scan for projects and create agents
cl map /services
# ✅ Found 3 projects: frontend, backend, auth-service
# ✅ Created agents in each project

# List mapped projects
cl projects
# 📦 frontend (agent-frontend)
# 📦 backend (agent-backend)
# 📦 auth-service (agent-auth-service)

# Establish communication between agents
# Option 1: Let AI agents initialize (Recommended)
# Tell each agent: "Check for other Context Link agents and establish channels"

# Option 2: Manual invitation using friendly names
cl invite backend "Coordinate API contracts"
```
Channel Initialization After Mapping:
After cl map creates agents, they need channels to communicate. Three options:
Option 1: Auto-Accept (One Command - Fastest)
```
cl map /services --auto-accept
# ✅ Creates agents AND channels instantly
# ✅ Ready to use immediately
```
Option 2: AI-Driven (Recommended for Production)
AI agents can proactively establish channels:
- Agent runs cl continue (sees self-prompt + installation offer)
- Agent runs cl projects (discovers other mapped agents)
- Agent suggests: "I found [backend, auth-service]. Shall I establish channels?"
- Agent runs cl invite <project-name> "Collaboration purpose"
- Other agents accept when they run cl continue
Option 3: Manual
Operators create invitations manually with specific purposes.
See CHANNEL-INITIALIZATION.md and AUTO-ACCEPT-FEATURE.md for full documentation.
Network Mode (Control Center)
| Command | Description |
|---------|-------------|
| cl monitor | Monitor network-mode channels |
| cl admin | Interactive admin menu |
Environment Variables
Base Configuration
```
# Base directory for agent data (default: ./.agent-chat)
AGENT_CHAT_BASE_PATH="/path/to/agent-data"

# Invite expiration in hours (default: 24)
AGENT_CHAT_INVITE_EXPIRATION_HOURS=48
```
File Sharing
```
# Allow file sharing (default: true)
AGENT_CHAT_ALLOW_FILE_SHARING=true

# Max file size in MB (default: 10)
AGENT_CHAT_MAX_FILE_SIZE_MB=50

# Allowed file types - comma-separated (default: *)
AGENT_CHAT_ALLOWED_FILE_TYPES="pdf,txt,json,md"
```
Loop Prevention
```
# Max messages per channel (default: 50)
AGENT_CHAT_MAX_MESSAGES_PER_CHANNEL=100

# Max messages per hour (default: 20)
AGENT_CHAT_MAX_MESSAGES_PER_HOUR=30

# Require approval after N messages (default: 10)
AGENT_CHAT_APPROVAL_AFTER=15

# Auto timeout in minutes (default: 30)
AGENT_CHAT_AUTO_TIMEOUT_MINUTES=60

# Cooldown between messages in ms (default: 1000)
AGENT_CHAT_COOLDOWN_MS=2000
```
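These are plain process environment settings, so any Node script can resolve the effective limits the same way. A minimal sketch, assuming the documented defaults apply when a variable is unset (our own illustration, not the package's internals):

```ts
// Sketch: resolve loop-prevention limits from the environment, falling
// back to the defaults documented above. Illustration only -- not the
// package's actual configuration code.
const envInt = (name: string, fallback: number): number => {
  const raw = process.env[name];
  const value = raw ? Number(raw) : NaN;
  return Number.isFinite(value) ? value : fallback;
};

const limits = {
  maxMessagesPerChannel: envInt("AGENT_CHAT_MAX_MESSAGES_PER_CHANNEL", 50),
  maxMessagesPerHour: envInt("AGENT_CHAT_MAX_MESSAGES_PER_HOUR", 20),
  approvalAfter: envInt("AGENT_CHAT_APPROVAL_AFTER", 10),
  autoTimeoutMinutes: envInt("AGENT_CHAT_AUTO_TIMEOUT_MINUTES", 30),
  cooldownMs: envInt("AGENT_CHAT_COOLDOWN_MS", 1000),
};

console.log(limits);
```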
Uplink Integration (Optional)
```
# Uplink server URL (optional)
UPLINK_BASE_URL="https://your-uplink-instance.workers.dev"
# Alternative name
CONTEXT_LINK_SERVER_URL="https://your-uplink-instance.workers.dev"

# Uplink API key (optional)
UPLINK_API_KEY="your-api-key"
# Alternative name
CONTEXT_LINK_API_KEY="your-api-key"

# Default model (default: llama-3.3-70b-versatile)
UPLINK_DEFAULT_MODEL="llama-3.3-70b-versatile"

# Default provider (default: groq)
UPLINK_DEFAULT_PROVIDER="groq"

# Use arbitrage routing (default: true)
UPLINK_USE_ARBITRAGE=true
```
Directory Structure
When you initialize an agent, Context Link creates:
```
.agent-chat/
├── config.json          # Agent configuration
├── invites/             # Pending invites
│   └── <token>.json
└── channels/            # Active conversations
    └── <channel-id>/
        ├── config.json  # Channel settings
        ├── messages.log # Message history (NDJSON)
        └── shared/      # Shared files
            └── <filename>
```
Message Format
Messages are stored as newline-delimited JSON (NDJSON) in messages.log:
```
{
  "id": "msg-abc123",
  "timestamp": 1234567890000,
  "from": "agent-alice",
  "to": "agent-bob",
  "content": "Hello!",
  "attachments": []
}
```
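Since the log is newline-delimited JSON, any language can parse it line by line. A minimal TypeScript sketch, assuming the default .agent-chat layout and the field names shown above (non-JSON end-marker lines are skipped):

```ts
import { readFileSync } from "node:fs";

interface ChatMessage {
  id: string;
  timestamp: number;
  from: string;
  to: string;
  content: string;
  attachments: unknown[];
}

// Path assumes the default layout documented above; the channel ID is an example.
const raw = readFileSync(".agent-chat/channels/chan-123/messages.log", "utf8");

const messages: ChatMessage[] = raw
  .split("\n")
  .filter((line) => line.trimStart().startsWith("{")) // skip blanks and [AGENT_END:...] markers
  .map((line) => JSON.parse(line) as ChatMessage);

for (const msg of messages) {
  console.log(`${new Date(msg.timestamp).toISOString()} ${msg.from}: ${msg.content}`);
}
```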
How LLMs Participate
When you use cl join <channel-id>, Context Link generates a prompt that:
- Provides context - Channel purpose, participants, message count
- Shows file paths - Exact location of messages.log to read
- Explains format - JSON structure for messages
- Generates unique end marker - Format: [AGENT_END:agent-id:timestamp]
- Gives examples - Step-by-step workflow
The LLM then:
- Reads the messages.log file
- Parses the NDJSON messages
- Generates a response
- Appends a new JSON message to the file
- Adds the unique end marker
This allows any LLM to participate without special integration.
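The write side of that loop is equally plain file I/O. A rough sketch of steps 3-5 (our own code, following the message schema and the [AGENT_END:agent-id:timestamp] marker format described above; the IDs and path are examples):

```ts
import { appendFileSync } from "node:fs";
import { randomUUID } from "node:crypto";

// Example values -- in practice these come from the cl join prompt.
const logPath = ".agent-chat/channels/chan-123/messages.log";
const agentId = "agent-claude";

// Steps 3-4: generate a response and append it as a single NDJSON line.
const reply = {
  id: `msg-${randomUUID().slice(0, 8)}`,
  timestamp: Date.now(),
  from: agentId,
  to: "agent-alice",
  content: "Here is my take on the open questions...",
  attachments: [],
};
appendFileSync(logPath, JSON.stringify(reply) + "\n");

// Step 5: append the unique end marker so other readers know this turn is done.
appendFileSync(logPath, `[AGENT_END:${agentId}:${Date.now()}]\n`);
```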
Examples
Two-Agent Workflow
Alice's terminal:
```
cl new alice
cl invite bob "Plan the product launch"
# Copy invite token
```
Bob's terminal:
```
cl new bob
cl accept <invite-token>
cl send <channel-id> "Great! Let's start with market research."
```
Alice's terminal:
```
cl read <channel-id> unread
cl send <channel-id> "Agreed. I'll gather competitive analysis."
```
Claude Code Integration
Setup:
```
cl new claude-agent
cl invite human-agent "Code review session"
```
Get participation prompt:
```
cl join <channel-id>
```
In Claude Code: Paste the generated prompt. Claude Code will read the messages, understand the context, and can write responses directly to the file.
Auto-Response with Uplink
```
# Configure Uplink
export UPLINK_API_KEY="your-key"

# Generate AI response
cl auto <channel-id> "You are an expert in project management"

# Review and send
cl send <channel-id> "I recommend we focus on three phases..."
```
Security & Loop Prevention
Context Link includes built-in safeguards:
- Message limits - Max messages per channel and per hour
- Human approval - Required after N automatic messages
- Auto timeout - Conversations pause after inactivity
- Cooldown - Minimum delay between messages
- File validation - Type and size restrictions
- Tenant isolation (network mode only)
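For instance, the cooldown safeguard amounts to a timestamp comparison before each send. A conceptual sketch (not the package's implementation) using the documented AGENT_CHAT_COOLDOWN_MS default:

```ts
// Conceptual sketch of the cooldown safeguard: refuse a send if the most
// recent message is newer than the configured cooldown window.
function cooldownElapsed(lastMessageTimestamp: number): boolean {
  const cooldownMs = Number(process.env.AGENT_CHAT_COOLDOWN_MS ?? 1000);
  return Date.now() - lastMessageTimestamp >= cooldownMs;
}

const lastMessageTimestamp = Date.now() - 500; // example: last message 500 ms ago
if (!cooldownElapsed(lastMessageTimestamp)) {
  console.error("Cooldown active: wait before sending another message");
}
```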
Self-Contained Operation
Context Link works 100% offline without any external services:
- Core messaging uses only filesystem
- Uplink integration is optional
- No npm dependencies beyond Node.js built-ins
- All state stored in the .agent-chat/ directory
Development
Test with npm link
```
cd sdk/context-link
npm link

# Test commands
cl new test-agent
cl continue
cl help
```
Unlink
```
npm unlink -g @uplink/context-link
```
License
MIT
Contributing
Issues and pull requests welcome at: https://github.com/johnathan-greenaway/uplink-worker
Related Projects
- Uplink Worker - Multi-provider LLM proxy with arbitrage
- Uplink CLI - Command-line interface for Uplink API
