a2a-copilot
GitHub Copilot is a production-grade agent. It already handles multi-step planning, MCP tool execution, context management, and streaming — everything you'd spend months rebuilding from scratch.
a2a-copilot exposes it as a standalone, interoperable agent via the A2A protocol. Drop a JSON config file in, get a fully spec-compliant A2A server out. Any orchestrator that speaks A2A can discover and call it — no Copilot-specific integration code required.
The pattern: MCP is the vertical rail — how agents access tools. A2A is the horizontal rail — how agents talk to each other. This library adds the horizontal rail to GitHub Copilot.
Features:
- Full A2A v0.3.0 protocol — Agent Card, JSON-RPC, REST, SSE streaming
- Powered by GitHub Copilot (GPT-4.1, Claude Sonnet 4.5, and more)
- MCP tool server support — HTTP and stdio transports
- Multi-turn conversations via persistent Copilot sessions
- JSON config file with layered overrides (JSON → env vars → CLI flags)
- Docker-ready with corporate proxy CA support
- TypeScript source with full type declarations
Why not just embed the Copilot SDK directly?
Direct SDK embedding works — but it tightly couples your application to Copilot's session model and integration pattern. Swapping the AI backend means rewriting integration code. Adding a second agent means writing a second bespoke integration.
With the A2A protocol surface:
- Your orchestrator speaks one interface regardless of what's behind it
- Copilot becomes swappable — replace it without changing orchestration logic
- Copilot becomes composable — route tasks to it alongside other A2A agents
- Copilot becomes discoverable — any A2A-compatible system can find it via Agent Card
Works with agent frameworks
This library complements — not replaces — frameworks like LangGraph, Google ADK, Microsoft Agent Framework, and CrewAI. Use those frameworks for orchestration, state, and memory control. Use a2a-copilot as the execution node they call.
LangGraph / ADK / Microsoft Agent Framework
(state, memory, flow control)
↓
A2A Protocol
↓
a2a-copilot
(GitHub Copilot execution)
Quick Start
# Install globally
npm install -g a2a-copilot
# Run the bundled example agent
a2a-copilot --config agents/example/config.json
Or run without installing:
npx a2a-copilot --config agents/example/config.json
⚠️ Authentication required: You must set a GITHUB_TOKEN environment variable or run gh auth login before starting the server. Without valid GitHub credentials the server will fail with an auth error. You also need a GitHub account with Copilot access.
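Once the server is running, you can sanity-check it from any HTTP client. A minimal TypeScript sketch, assuming the default port 3000 and Node 18+ (which ships a global fetch):

```typescript
// Sanity-check a running a2a-copilot server (assumes the default port 3000).
const base = 'http://localhost:3000';

// Discover the agent via its Agent Card
const card = await fetch(`${base}/.well-known/agent-card.json`).then((r) => r.json());
console.log(card.name, card.version);

// Confirm the server is up
const health = await fetch(`${base}/health`);
console.log('healthy:', health.ok);
```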
Architecture
A2A Client (Orchestrator / Inspector / curl)
│
│ JSON-RPC or REST over HTTP
▼
Express Server (a2a-copilot)
│ ├─ /.well-known/agent-card.json → Agent Card
│ ├─ /a2a/jsonrpc → JSON-RPC (message/send, message/sendSubscribe, …)
│ ├─ /a2a/rest → REST handler
│ ├─ /context → Read context.md
│ ├─ /context/build → Trigger context discovery
│ └─ /health → Health check
│
│ @a2a-js/sdk DefaultRequestHandler
▼
CopilotExecutor (AgentExecutor)
│ ├─ SessionManager — contextId → Copilot session
│ ├─ Streaming — delta events → A2A artifact chunks
│ └─ EventPublisher — Copilot events → A2A events
│
│ @github/copilot-sdk
▼
GitHub Copilot
│ ├─ LLM inference (GPT-4.1, Claude Sonnet 4.5, …)
│ └─ MCP tool execution
│
│ MCP Protocol (HTTP / stdio)
▼
MCP Servers (filesystem, custom tools, …)
Installation
# npm
npm install a2a-copilot
# yarn
yarn add a2a-copilot
# pnpm
pnpm add a2a-copilot
Usage
CLI
a2a-copilot --config agents/example/config.json
Full flag reference:
a2a-copilot [options]
--config <path> JSON agent config file
--port <number> Server port (default: 3000)
--hostname <addr> Bind address (default: 0.0.0.0)
--advertise-host <host> Hostname for agent card URLs (default: localhost)
--cli-url <url> External Copilot CLI URL (default: auto)
--model <model> LLM model (default: gpt-4.1)
--workspace <path> Workspace directory
--agent-name <name> Agent display name
--agent-description <desc> Agent description
--stream-artifacts Stream chunks in real time (A2A spec mode)
--no-stream-artifacts Buffer artifacts — Inspector-compatible (default)
--log-level <level> debug | info | warn | error (default: info)
--help Show this help
--version Show version
Programmatic API
import { createA2AServer, resolveConfig } from 'a2a-copilot';
const config = await resolveConfig({ configPath: 'agents/example/config.json' });
const { server, url } = await createA2AServer(config);
console.log(`Agent running at ${url}`);
Configuration
Config is resolved in layers, each one overriding the layer before it: defaults → JSON file → env vars → CLI flags.
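A minimal sketch of the layering, assuming resolveConfig applies the same precedence when called programmatically and that the resolved object mirrors the JSON layout shown below:

```typescript
import { resolveConfig } from 'a2a-copilot';

// config.json sets "server": { "port": 3000 }
process.env.PORT = '8080'; // env var layer overrides the JSON value

const config = await resolveConfig({ configPath: 'agents/example/config.json' });
console.log(config.server.port); // 8080; a --port CLI flag would override both
```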
JSON Config File
Create a config.json (see agents/example/config.json for the fully annotated template):
{
  "agentCard": {
    "name": "My Agent",
    "description": "What my agent does",
    "version": "1.0.0",
    "protocolVersion": "0.3.0",
    "streaming": true,
    "skills": [
      {
        "id": "my-skill",
        "name": "My Skill",
        "description": "Describe the skill",
        "tags": ["example"]
      }
    ]
  },
  "server": {
    "port": 3000,
    "hostname": "0.0.0.0",
    "advertiseHost": "localhost"
  },
  "copilot": {
    "model": "gpt-4.1",
    "streaming": true,
    "systemPrompt": "You are a specialist agent that...",
    "contextFile": "context.md"
  },
  "mcp": {
    "my-tools": {
      "type": "http",
      "url": "http://localhost:8002/mcp"
    }
  },
  "events": {
    "enabled": true,
    "transport": "a2a"
  }
}
Environment Variables
| Variable | Description | Default |
|---|---|---|
| GITHUB_TOKEN | GitHub PAT for headless auth | uses gh CLI |
| PORT | Server port | 3000 |
| HOSTNAME | Bind address | 0.0.0.0 |
| ADVERTISE_HOST | Hostname in agent card URLs | localhost |
| COPILOT_MODEL | LLM model | gpt-4.1 |
| COPILOT_CLI_URL | External Copilot CLI URL | auto |
| WORKSPACE_DIR | Workspace directory | (empty) |
| STREAM_ARTIFACTS | Stream chunks in real time | false |
| LOG_LEVEL | debug|info|warn|error | info |
| AGENT_NAME | Override agent card name | (from config) |
| AGENT_DESCRIPTION | Override agent card description | (from config) |
See .env.example for the full reference.
Bundled Agent Examples
Example Agent (minimal)
./agents/example/start.sh start
./agents/example/start.sh status
./agents/example/start.sh logs
./agents/example/start.sh stop
Runs on port 3000. No external tools. Good starting point for custom agents.
Filesystem Assistant
./agents/filesystem-assistant/start.sh start
Runs on port 3000 and connects to the @modelcontextprotocol/server-filesystem MCP server. The agent can read, write, and search files inside its workspace/ directory.
Creating Your Own Agent
# Copy the example agent
cp -r agents/example agents/my-agent
# Edit the config
$EDITOR agents/my-agent/config.json
# Start it
./agents/my-agent/start.sh start
MCP Tool Servers
HTTP / SSE server
"mcp": {
"my-tools": {
"type": "http",
"url": "http://localhost:8002/mcp"
}
}stdio server (child process)
"mcp": {
"filesystem": {
"type": "stdio",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/workspace"]
}
}Memory Persistence
Give your agent persistent instructions and skills that survive across sessions. Declare them in config.json and they're materialized into the workspace at startup — the Copilot LLM reads them automatically.
Config
{
  "memory": {
    "instructions": "./memory/instructions.md",
    "skills": ["./memory/skills/code-review"]
  }
}
Paths are relative to the directory containing config.json.
Instructions
A markdown file with project-level instructions, coding conventions, safety rules, or behavioral guidelines. Written to .github/copilot-instructions.md in the workspace.
Skills
Each skill is a directory containing a SKILL.md file with YAML frontmatter and optional resource directories:
memory/skills/code-review/
├── SKILL.md        # Required: frontmatter + instructions
├── scripts/        # Optional: helper scripts
├── references/     # Optional: reference docs
└── assets/         # Optional: static files
SKILL.md format:
---
name: code-review
description: Provides code review guidelines and checklists
license: MIT
allowed-tools:
- read_file
- search_files
---
# Code Review Skill
Detailed instructions for the LLM...
The name field must be kebab-case (lowercase + hyphens, max 64 chars). It determines the output directory name under .github/skills/.
Where files are written
| Source | Target in workspace |
|---|---|
| memory.instructions | .github/copilot-instructions.md |
| memory.skills[]/SKILL.md | .github/skills/<name>/SKILL.md |
| memory.skills[]/scripts/ | .github/skills/<name>/scripts/ |
agentCard.skills vs memory.skills
These serve different purposes:
- agentCard.skills — External metadata for orchestrators. Advertised in the agent card for discovery and routing.
- memory.skills — Internal instructions for the LLM. Never exposed externally. Teaches the LLM how to perform tasks.
Both should be maintained — the agent card tells callers what the agent can do, while memory skills tell the LLM how to do it. Descriptions may differ: agent card skills are high-level and marketing-friendly, memory skills are technical and detailed.
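For illustration, a hedged sketch of how the two declarations might pair up for a single capability (written as a TypeScript object for brevity; in config.json it is the same shape in plain JSON, and the id and paths here are hypothetical):

```typescript
const config = {
  agentCard: {
    // What callers see: advertised for discovery and routing
    skills: [
      { id: 'code-review', name: 'Code Review', description: 'Reviews code for quality and safety issues', tags: ['review'] },
    ],
  },
  memory: {
    // What the LLM reads: a directory containing SKILL.md with detailed checklists
    skills: ['./memory/skills/code-review'],
  },
};
```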
Example
See agents/filesystem-assistant/ for a working example with instructions, a skill, and a helper script.
Event Transport (Observability)
Agents emit structured trace events for MCP tool calls, reasoning, and lifecycle. By default, these flow as sideband artifacts through the A2A protocol itself — orchestrators discover them via the urn:x-a2a:trace:v1 extension on the agent card.
Default (A2A sideband)
No config needed. Trace artifacts appear alongside response artifacts and can be filtered by the urn:x-a2a:trace:v1 extension URI.
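For example, a client-side sketch of separating trace artifacts from response artifacts; it assumes each artifact exposes its extension URIs in an extensions array, which may differ from the actual wire shape:

```typescript
const TRACE_EXT = 'urn:x-a2a:trace:v1';

// Hypothetical artifact shape: only the fields needed for filtering
interface ArtifactLike {
  name?: string;
  extensions?: string[];
}

function splitArtifacts(artifacts: ArtifactLike[]) {
  const traces = artifacts.filter((a) => a.extensions?.includes(TRACE_EXT));
  const responses = artifacts.filter((a) => !a.extensions?.includes(TRACE_EXT));
  return { traces, responses };
}
```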
HTTP collector
Route events to an external telemetry endpoint:
{
  "events": {
    "enabled": true,
    "transport": "http",
    "httpUrl": "https://telemetry.example.com/events",
    "httpHeaders": {
      "Authorization": "Bearer ${TELEMETRY_TOKEN}"
    }
  }
}
Custom transport (programmatic)
For Kafka, Redis, or database sinks, use the programmatic API. See the @a2a-wrapper/core README for full details.
Docker
# Build
docker build -t a2a-copilot:latest .
# Run with a config file
docker run -p 3000:3000 \
-e GITHUB_TOKEN=<your-token> \
a2a-copilot:latest --config agents/example/config.json
# Mount a custom agent config
docker run -p 3000:3000 \
-v /host/path/my-agent:/app/agents/my-agent \
-e GITHUB_TOKEN=<your-token> \
a2a-copilot:latest --config agents/my-agent/config.json
Corporate Proxy (Netskope / Zscaler)
Mount your CA certificate into the container and the entrypoint injects it automatically:
docker run -p 3000:3000 \
-v /path/to/corporate-ca.crt:/etc/ssl/certs/corporate-ca.crt:ro \
-e GITHUB_TOKEN=<your-token> \
a2a-copilot:latest --config agents/example/config.json
A2A Protocol
Implements A2A v0.3.0:
| Endpoint | Description |
|---|---|
| GET /.well-known/agent-card.json | Agent identity and capabilities |
| POST /a2a/jsonrpc | JSON-RPC: message/send, message/sendSubscribe |
| POST /a2a/rest | REST equivalent |
| GET /health | Health check |
| POST /context/build | Trigger context discovery |
| GET /context | Read the built context file |
Example JSON-RPC request (message/send):
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "message/send",
  "params": {
    "message": {
      "kind": "message",
      "messageId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
      "role": "user",
      "parts": [{ "kind": "text", "text": "Hello, agent!" }]
    }
  }
}
Streaming uses SSE for real-time status updates and artifact chunks. Set --stream-artifacts for spec-correct chunk streaming or leave it unset (default) for buffered output compatible with the A2A Inspector.
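A minimal TypeScript sketch of sending that request with fetch, assuming the default port 3000; the response envelope comes from the underlying A2A SDK, so it is simply logged here:

```typescript
const res = await fetch('http://localhost:3000/a2a/jsonrpc', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    jsonrpc: '2.0',
    id: 1,
    method: 'message/send',
    params: {
      message: {
        kind: 'message',
        messageId: 'a1b2c3d4-e5f6-7890-abcd-ef1234567890',
        role: 'user',
        parts: [{ kind: 'text', text: 'Hello, agent!' }],
      },
    },
  }),
});
console.log(await res.json()); // JSON-RPC result produced by the agent
```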
External Copilot CLI
For debugging or sharing a single CLI instance across multiple agents:
# Start CLI in headless mode
copilot --headless --port 4321
# Point the wrapper at it
a2a-copilot --config agents/example/config.json --cli-url localhost:4321
Known Issues
Node 22 ESM compatibility
The vscode-jsonrpc package (a transitive dependency of @github/copilot-sdk) lacks an exports map in its package.json. Node 22's stricter ESM resolver rejects the vscode-jsonrpc/node subpath import, causing a startup crash.
A postinstall script is included that automatically patches vscode-jsonrpc/package.json to add the missing exports field. The patch runs on every npm install and is idempotent — it is a no-op on Node 18/20 or when the field already exists.
If you see ERR_MODULE_NOT_FOUND referencing vscode-jsonrpc/node, run npm install again to re-apply the patch.
Related Packages
This package is part of the a2a-wrapper monorepo:
| Package | Description |
|---|---|
| @a2a-wrapper/core | Shared infrastructure (logging, config, server, events, session, CLI) |
| a2a-opencode | A2A wrapper for OpenCode |
Contributing
Contributions are welcome! Please read CONTRIBUTING.md first.
License
MIT © Shashi Kanth
