a2a-mcp-skillmap
v0.2.1
Turn any A2A agent into a first-class MCP tool server — with zero glue code.
Point it at one or more A2A agent URLs and it resolves their skill cards, projects every skill as an MCP tool, and serves the result over stdio or HTTP. Your MCP client sees ordinary tools; the bridge handles everything behind the scenes — validation, task lifecycle, auth, response shaping.
```sh
npx a2a-mcp-skillmap --a2a-url https://agent.example.com
```

That's it. No schemas to hand-map, no wrappers to write, no protocol translation to maintain.
Why this bridge
| Feature | Why it matters |
|---|---|
| One tool per skill, not per agent | Each A2A skill becomes its own MCP tool — research-agent__search, research-agent__summarize. LLMs pick the right one like any other typed function; no fuzzy "agent-of-many-things" wrapper. |
| Token-optimized responses | The default artifact mode strips the A2A envelope and emits only the content — native MCP blocks for text, image, audio, and file. Every token saved is a token the LLM spends on reasoning. |
| Sync-fast, async-safe | Replies within the configured sync budget (default 30 s, tunable via --sync-budget-ms) come back inline; anything slower returns a taskId and three built-in polling tools — task_status, task_result, task_cancel — that actively re-query the agent before responding. No streaming wiring, no hanging calls. |
| Dynamic, not declarative | Skills added, renamed, or re-typed on the A2A side are picked up on the next refresh. No PR to this project, no hand-written adapter. |
| Deterministic by design | Same agent card in → same MCP tools out. Tool names are pure functions of (agentId, skillId), so client tool-caches stay valid across restarts and deployments. |
| Pluggable where it matters | Response projector, tool-naming strategy, storage backends, and auth providers are all swappable interfaces with sensible defaults. |
| SDK-first | Built on the official @modelcontextprotocol/sdk and @a2a-js/sdk — no hand-rolled JSON-RPC framing. Upstream protocol improvements land here automatically. |
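The determinism guarantee above is easy to picture as a pure function. A minimal sketch, assuming a simple slug-and-join strategy (illustrative only, not the bridge's exact naming rules):

```typescript
// Sketch: a deterministic tool name as a pure function of (agentId, skillId).
// The slug rules here are assumptions for illustration.
function toolName(agentId: string, skillId: string): string {
  const slug = (s: string) => s.toLowerCase().replace(/[^a-z0-9_-]+/g, "-");
  return `${slug(agentId)}__${slug(skillId)}`;
}
```

Because the output depends only on the two IDs, restarting the bridge or redeploying it can never shuffle tool names, which is what keeps client tool-caches valid.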
What you get out of the box
- Two transports, same engine: stdio for local MCP clients, Streamable HTTP for networked deployments.
- Session continuity — pass a `sessionId` across calls to maintain multi-turn conversations with agents. The bridge handles A2A context/task threading automatically.
- Four response modes — `artifact` (default, multimodal), `structured` (full canonical + metadata), `compact` (≤ 280-char summary), `raw` (byte-equivalent A2A payload). Switch per deployment. Side-by-side JSON examples in the operator guide.
- Structured JSON logs (pino) with automatic correlation IDs tying every log line, every telemetry event, and every OpenTelemetry span to a single tool invocation.
- OpenTelemetry, optional: call `setOtelTracer(tracer)` and you get spans around every invocation and agent resolution. Zero runtime cost when unused.
- Graceful degradation. One broken agent card never takes down the others. One unsupported skill schema never kills its siblings. Agent refreshes are atomic — the old card keeps serving until the new one validates.
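As a rough sketch of what the four modes do to the same payload (hand-rolled here for illustration; the real projector is one of the swappable interfaces, and the payload shape is an assumption):

```typescript
// Illustrative projection of one text-only A2A payload into the four modes.
type Mode = "artifact" | "structured" | "compact" | "raw";

function project(
  mode: Mode,
  a2aPayload: { artifacts: { text: string }[] },
  meta: Record<string, unknown>
): unknown {
  switch (mode) {
    case "raw":        return a2aPayload;              // byte-equivalent A2A payload
    case "structured": return { ...a2aPayload, meta }; // full canonical + metadata
    case "artifact":   // envelope stripped, native MCP content blocks only
      return a2aPayload.artifacts.map((a) => ({ type: "text", text: a.text }));
    case "compact": {  // hard-capped summary at 280 chars
      const joined = a2aPayload.artifacts.map((a) => a.text).join(" ");
      return joined.length <= 280 ? joined : joined.slice(0, 279) + "…";
    }
  }
}
```

The token savings of `artifact` mode come from the first two branches: everything outside the content blocks is dropped before the LLM ever sees it.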
Quickstart
Three ways to run the bridge, in order of effort.
1. One agent, stdio, no config file
The simplest setup. Point it at a single A2A agent and let it listen on stdin/stdout — that's what an MCP client (Claude Desktop, VS Code, Inspector) expects when it launches the bridge as a child process.
```sh
npx a2a-mcp-skillmap --a2a-url https://agent.example.com
```

Every skill the agent advertises now shows up as a tool in your MCP client.
2. Multiple agents over HTTP, with a config file
When you have more than one agent, need per-agent credentials, or want to serve MCP over the network instead of stdio — use a config file. Save the following as bridge.json (anywhere you like — the path is passed on the command line):
```json
{
  "agents": [
    { "url": "https://research-agent.example.com" },
    {
      "url": "https://compliance-agent.example.com",
      "auth": { "mode": "bearer", "token": "..." }
    }
  ],
  "transport": "http",
  "http": {
    "port": 3000,
    "inboundAuth": { "mode": "bearer", "token": "my-mcp-secret" }
  },
  "responseMode": "artifact"
}
```

Then start the bridge against that file:
```sh
npx a2a-mcp-skillmap --config ./bridge.json
```

The bridge loads both agents (each with its own outbound credential), exposes a single HTTP endpoint at http://localhost:3000/mcp, and requires MCP clients to authenticate with the my-mcp-secret bearer token on the way in.
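The inbound check boils down to comparing the client's Authorization header against the configured token. A hypothetical sketch of that comparison using a constant-time equality check (the bridge's actual middleware may differ):

```typescript
import { timingSafeEqual } from "node:crypto";

// Sketch: validate an inbound "Authorization: Bearer <token>" header.
// Function name and shape are assumptions for illustration.
function checkInboundBearer(authorization: string | undefined, expected: string): boolean {
  const prefix = "Bearer ";
  if (!authorization || !authorization.startsWith(prefix)) return false;
  const got = Buffer.from(authorization.slice(prefix.length));
  const want = Buffer.from(expected);
  // Length check first: timingSafeEqual throws on unequal-length buffers.
  return got.length === want.length && timingSafeEqual(got, want);
}
```

Using `timingSafeEqual` instead of `===` avoids leaking how many leading characters of the token an attacker has guessed correctly.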
3. Embed in your own Node app (programmatic SDK)
```js
import { createBridge, DefaultA2ADispatcher, loadConfig } from 'a2a-mcp-skillmap';

const config = loadConfig({ filePath: './bridge.json' });
const bridge = createBridge(config, {
  dispatcher: new DefaultA2ADispatcher(),
  // swap any default here — projector, naming, stores, auth providers
});

await bridge.start();
// ... bridge.engine.listTools(), bridge.engine.callTool(name, args)
await bridge.stop();
```

Using with MCP clients
VS Code (GitHub Copilot / Kiro)
Add the bridge to your workspace MCP config at .vscode/mcp.json (or .kiro/settings/mcp.json for Kiro):
```json
{
  "mcpServers": {
    "research-agent": {
      "command": "npx",
      "args": [
        "-y",
        "a2a-mcp-skillmap",
        "--a2a-url",
        "https://research-agent.example.com"
      ]
    }
  }
}
```

For multiple agents or auth, point to a config file instead:
```json
{
  "mcpServers": {
    "my-agents": {
      "command": "npx",
      "args": ["-y", "a2a-mcp-skillmap", "--config", "./bridge.json"]
    }
  }
}
```

Restart the MCP server from the command palette and the agent's skills appear as tools in your chat.
Claude Desktop
Add to your Claude Desktop config (~/Library/Application Support/Claude/claude_desktop_config.json on macOS, %APPDATA%\Claude\claude_desktop_config.json on Windows):
```json
{
  "mcpServers": {
    "research-agent": {
      "command": "npx",
      "args": [
        "-y",
        "a2a-mcp-skillmap",
        "--a2a-url",
        "https://research-agent.example.com"
      ]
    }
  }
}
```

Restart Claude Desktop. The agent's skills will appear as available tools in your conversation.
Cursor
Add to your Cursor MCP config at .cursor/mcp.json in your project root:
```json
{
  "mcpServers": {
    "research-agent": {
      "command": "npx",
      "args": [
        "-y",
        "a2a-mcp-skillmap",
        "--a2a-url",
        "https://research-agent.example.com"
      ]
    }
  }
}
```

Tips
- Use `--sync-budget-ms 10000` for interactive use (faster task-handle responses).
- Set `--log-level warn` in MCP client configs to keep stderr quiet.
- For agents requiring auth, use a config file — don't put tokens in args where they may appear in process listings.
How it works
```
┌──────────────┐   MCP (stdio / HTTP)   ┌─────────────────────────┐   A2A (JSON-RPC)   ┌──────────────┐
│  MCP Client  │ ◀────────────────────▶ │    a2a-mcp-skillmap     │ ◀────────────────▶ │  A2A Agent   │
└──────────────┘                        │ ┌─────────────────────┐ │                    └──────────────┘
                                        │ │ AgentRegistry       │ │
                                        │ │ ToolGenerator       │ │  ↻ resolves & refreshes agent cards
                                        │ │ InvocationRuntime   │ │  ↻ validates args (Zod) pre-dispatch
                                        │ │ TaskManager         │ │  ↻ tracks long-running jobs
                                        │ │ ResponseProjector   │ │  ↻ shapes result per mode
                                        │ └─────────────────────┘ │
                                        └─────────────────────────┘
```

All external data — agent cards, skill schemas, MCP tool calls — is validated at ingress and normalized into a canonical internal model before any logic runs. That boundary is why behavior stays deterministic: the engine never sees raw wire data.
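The validate-at-ingress boundary can be pictured as a parse function that either returns the canonical model or throws. The bridge uses Zod for this; the hand-rolled guard below only illustrates the boundary, and `CanonicalSkill` plus its fields are assumptions for illustration:

```typescript
// Sketch: raw wire data is parsed into a canonical type before the engine sees it.
interface CanonicalSkill {
  id: string;
  name: string;
  description: string;
}

function parseSkill(raw: unknown): CanonicalSkill {
  if (typeof raw !== "object" || raw === null) throw new Error("skill card must be an object");
  const r = raw as Record<string, unknown>;
  if (typeof r.id !== "string" || r.id.length === 0) throw new Error("skill.id must be a non-empty string");
  if (typeof r.name !== "string") throw new Error("skill.name must be a string");
  // Normalize: optional fields get canonical defaults, unknown fields are dropped.
  return { id: r.id, name: r.name, description: typeof r.description === "string" ? r.description : "" };
}
```

Everything past this function operates on `CanonicalSkill`, never on the raw card, which is what makes the downstream behavior deterministic.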
Session continuity (multi-turn conversations)
Every tool response includes a sessionId. Pass it back on the next call to maintain conversation context with the same agent — the bridge maps it to the A2A contextId and taskId so the remote agent sees a continuous thread.
```jsonc
// First call — no sessionId
{ "message": "What's the weather in Berlin?" }

// Response includes sessionId
{ "sessionId": "a1b2c3...", "artifacts": [...] }

// Follow-up — pass sessionId back
{ "message": "And tomorrow?", "sessionId": "a1b2c3..." }
```

If a previous task on the same session is still running, the bridge rejects the new call with a SESSION_TASK_RUNNING error and tells the LLM to wait or cancel first — preventing race conditions on the agent side.
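The session guard amounts to allowing one in-flight task per session. A hypothetical sketch of that bookkeeping (`SessionStore`, `beginCall`, and `finishCall` are illustrative names, not the bridge's API):

```typescript
import { randomUUID } from "node:crypto";

type Session = { contextId: string; runningTaskId?: string };

class SessionStore {
  private sessions = new Map<string, Session>();

  // Called at the start of each tool invocation that carries a sessionId.
  beginCall(sessionId: string, taskId: string): Session {
    let s = this.sessions.get(sessionId);
    if (!s) {
      s = { contextId: randomUUID() }; // fresh session gets a new A2A context
      this.sessions.set(sessionId, s);
    }
    if (s.runningTaskId) {
      // Mirrors the SESSION_TASK_RUNNING rejection: one in-flight task per session.
      throw new Error(`SESSION_TASK_RUNNING: task ${s.runningTaskId} still active, wait or cancel first`);
    }
    s.runningTaskId = taskId;
    return s;
  }

  // Called when the task completes, fails, or is cancelled.
  finishCall(sessionId: string): void {
    const s = this.sessions.get(sessionId);
    if (s) s.runningTaskId = undefined;
  }
}
```

The stable `contextId` per session is what lets the remote agent see one continuous thread across calls.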
Sync budget
The sync budget controls how long the bridge waits for an A2A agent to respond before switching to async task polling. Default: 30 000 ms. Set to 0 to wait indefinitely.
```sh
# Wait up to 10 seconds, then return a task handle
npx a2a-mcp-skillmap --a2a-url https://agent.example.com --sync-budget-ms 10000
```

When the budget expires:
- The bridge immediately returns a `taskId` to the MCP client.
- The A2A dispatch continues in the background.
- The LLM can poll via `task_result` or `task_status` — both actively re-query the remote agent and wait briefly before responding, so the LLM doesn't hammer the tool in a tight loop.
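The budget itself is a race between the agent call and a timer. A minimal sketch of the pattern, assuming a hypothetical `withSyncBudget` helper (not the bridge's InvocationRuntime):

```typescript
// Either the work finishes inside the budget (inline result)
// or the caller gets a task handle while the work continues.
type SyncOrTask<T> = { kind: "result"; value: T } | { kind: "task"; taskId: string };

async function withSyncBudget<T>(work: Promise<T>, budgetMs: number, taskId: string): Promise<SyncOrTask<T>> {
  if (budgetMs === 0) return { kind: "result", value: await work }; // 0 = wait indefinitely
  let timer: ReturnType<typeof setTimeout> | undefined;
  const budget = new Promise<SyncOrTask<T>>((resolve) => {
    timer = setTimeout(() => resolve({ kind: "task", taskId }), budgetMs);
  });
  const wrapped = work.then((value): SyncOrTask<T> => ({ kind: "result", value }));
  const outcome = await Promise.race([wrapped, budget]);
  if (timer) clearTimeout(timer); // don't hold the event loop open after a fast reply
  return outcome;
}
```

Note that losing the race does not cancel the underlying promise: the dispatch keeps running, and its eventual result is what the polling tools later return.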
Documentation
- Examples — every supported way to start the bridge (CLI, env, config file, programmatic, embedded in MCP clients), with copy-pasteable snippets.
- API reference — every exported symbol, its parameters, return types, and error conditions.
- CLI reference — every flag, env var, config key, and exit code.
- Operator guide — transport selection, authentication, response modes, session continuity, sync budget, observability, reference performance.
- Contributor guide — dev setup, commit conventions, review process, release process.
- Security — threat model, secret handling, vulnerability reporting.
- Traceability matrix — every requirement mapped to design, code, and tests.
Requirements
- Node.js `>= 20`
- ESM (`"type": "module"`)
- Peer dep `@opentelemetry/api` is optional — only needed if you wire a tracer
Contributing
Issues and PRs welcome. See the contributor guide for setup and conventions. Security issues go through GitHub Security Advisories — please don't open public issues for suspected vulnerabilities.
