
a2a-mcp-skillmap (v0.2.1)

Turn any A2A agent into a first-class MCP tool server — with zero glue code.

Point it at one or more A2A agent URLs and it resolves their skill cards, projects every skill as an MCP tool, and serves the result over stdio or HTTP. Your MCP client sees ordinary tools; the bridge handles everything behind the scenes — validation, task lifecycle, auth, response shaping.

npx a2a-mcp-skillmap --a2a-url https://agent.example.com

That's it. No schemas to hand-map, no wrappers to write, no protocol translation to maintain.


Why this bridge

| Feature | Details |
|---|---|
| One tool per skill, not per agent | Each A2A skill becomes its own MCP tool — research-agent__search, research-agent__summarize. LLMs pick the right one like any other typed function; no fuzzy "agent-of-many-things" wrapper. |
| Token-optimized responses | The default artifact mode strips the A2A envelope and emits only the content — native MCP blocks for text, image, audio, and file. Every token saved is a token the LLM spends on reasoning. |
| Sync-fast, async-safe | Replies within the configured sync budget (default 30 s, tunable via --sync-budget-ms) come back inline; anything slower returns a taskId and three built-in polling tools — task_status, task_result, task_cancel — that actively re-query the agent before responding. No streaming wiring, no hanging calls. |
| Dynamic, not declarative | Skills added, renamed, or re-typed on the A2A side are picked up on the next refresh. No PR to this project, no hand-written adapter. |
| Deterministic by design | Same agent card in → same MCP tools out. Tool names are pure functions of (agentId, skillId), so client tool-caches stay valid across restarts and deployments. |
| Pluggable where it matters | Response projector, tool-naming strategy, storage backends, and auth providers are all swappable interfaces with sensible defaults. |
| SDK-first | Built on the official @modelcontextprotocol/sdk and @a2a-js/sdk — no hand-rolled JSON-RPC framing. Upstream protocol improvements land here automatically. |
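The "pure function of (agentId, skillId)" claim can be sketched in a few lines. This is an illustration of the idea, not the package's actual naming code — the slug rules here are assumptions:

```typescript
// Deterministic tool naming: the same (agentId, skillId) pair always yields
// the same tool name, so client tool-caches survive restarts.
function toolName(agentId: string, skillId: string): string {
  // Assumed normalization: lowercase, collapse disallowed characters to "-".
  const slug = (s: string) => s.toLowerCase().replace(/[^a-z0-9_-]+/g, "-");
  return `${slug(agentId)}__${slug(skillId)}`;
}

console.log(toolName("research-agent", "search")); // research-agent__search
```

Because the mapping has no hidden state (no counters, no timestamps), regenerating the tool list after a redeploy produces byte-identical names.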


What you get out of the box

  • Two transports, same engine: stdio for local MCP clients, Streamable HTTP for networked deployments.
  • Session continuity — pass a sessionId across calls to maintain multi-turn conversations with agents. The bridge handles A2A context/task threading automatically.
  • Four response modes — artifact (default, multimodal), structured (full canonical + metadata), compact (≤ 280-char summary), raw (byte-equivalent A2A payload). Switch per deployment. Side-by-side JSON examples in the operator guide.
  • Structured JSON logs (pino) with automatic correlation IDs tying every log line, every telemetry event, and every OpenTelemetry span to a single tool invocation.
  • OpenTelemetry, optional: setOtelTracer(tracer) and you get spans around every invocation and agent resolution. Zero runtime cost when unused.
  • Graceful degradation. One broken agent card never takes down the others. One unsupported skill schema never kills its siblings. Agent refreshes are atomic — the old card keeps serving until the new one validates.

Quickstart

Three ways to run the bridge, in order of effort.

1. One agent, stdio, no config file

The simplest setup. Point it at a single A2A agent and let it listen on stdin/stdout — that's what an MCP client (Claude Desktop, VS Code, Inspector) expects when it launches the bridge as a child process.

npx a2a-mcp-skillmap --a2a-url https://agent.example.com

Every skill the agent advertises now shows up as a tool in your MCP client.

2. Multiple agents over HTTP, with a config file

When you have more than one agent, need per-agent credentials, or want to serve MCP over the network instead of stdio — use a config file. Save the following as bridge.json (anywhere you like — the path is passed on the command line):

{
  "agents": [
    { "url": "https://research-agent.example.com" },
    {
      "url": "https://compliance-agent.example.com",
      "auth": { "mode": "bearer", "token": "..." }
    }
  ],
  "transport": "http",
  "http": {
    "port": 3000,
    "inboundAuth": { "mode": "bearer", "token": "my-mcp-secret" }
  },
  "responseMode": "artifact"
}

Then start the bridge against that file:

npx a2a-mcp-skillmap --config ./bridge.json

The bridge loads both agents (each with its own outbound credential), exposes a single HTTP endpoint at http://localhost:3000/mcp, and requires MCP clients to authenticate with the my-mcp-secret bearer token on the way in.

3. Embed in your own Node app (programmatic SDK)

import { createBridge, DefaultA2ADispatcher, loadConfig } from 'a2a-mcp-skillmap';

const config = loadConfig({ filePath: './bridge.json' });
const bridge = createBridge(config, {
  dispatcher: new DefaultA2ADispatcher(),
  // swap any default here — projector, naming, stores, auth providers
});

await bridge.start();
// ... bridge.engine.listTools(), bridge.engine.callTool(name, args)
await bridge.stop();

Using with MCP clients

VS Code (GitHub Copilot / Kiro)

Add the bridge to your workspace MCP config at .vscode/mcp.json (or .kiro/settings/mcp.json for Kiro):

{
  "mcpServers": {
    "research-agent": {
      "command": "npx",
      "args": [
        "-y",
        "a2a-mcp-skillmap",
        "--a2a-url",
        "https://research-agent.example.com"
      ]
    }
  }
}

For multiple agents or auth, point to a config file instead:

{
  "mcpServers": {
    "my-agents": {
      "command": "npx",
      "args": ["-y", "a2a-mcp-skillmap", "--config", "./bridge.json"]
    }
  }
}

Restart the MCP server from the command palette and the agent's skills appear as tools in your chat.

Claude Desktop

Add to your Claude Desktop config (~/Library/Application Support/Claude/claude_desktop_config.json on macOS, %APPDATA%\Claude\claude_desktop_config.json on Windows):

{
  "mcpServers": {
    "research-agent": {
      "command": "npx",
      "args": [
        "-y",
        "a2a-mcp-skillmap",
        "--a2a-url",
        "https://research-agent.example.com"
      ]
    }
  }
}

Restart Claude Desktop. The agent's skills will appear as available tools in your conversation.

Cursor

Add to your Cursor MCP config at .cursor/mcp.json in your project root:

{
  "mcpServers": {
    "research-agent": {
      "command": "npx",
      "args": [
        "-y",
        "a2a-mcp-skillmap",
        "--a2a-url",
        "https://research-agent.example.com"
      ]
    }
  }
}

Tips

  • Use --sync-budget-ms 10000 for interactive use (faster task-handle responses).
  • Set --log-level warn in MCP client configs to keep stderr quiet.
  • For agents requiring auth, use a config file — don't put tokens in args where they may appear in process listings.

How it works

┌──────────────┐     MCP (stdio / HTTP)    ┌─────────────────────────┐     A2A (JSON-RPC)     ┌──────────────┐
│  MCP Client  │ ◀───────────────────────▶ │   a2a-mcp-skillmap      │ ◀────────────────────▶ │  A2A Agent   │
└──────────────┘                           │ ┌─────────────────────┐ │                        └──────────────┘
                                           │ │ AgentRegistry       │ │
                                           │ │ ToolGenerator       │ │     ↻ resolves & refreshes agent cards
                                           │ │ InvocationRuntime   │ │     ↻ validates args (Zod) pre-dispatch
                                           │ │ TaskManager         │ │     ↻ tracks long-running jobs
                                           │ │ ResponseProjector   │ │     ↻ shapes result per mode
                                           │ └─────────────────────┘ │
                                           └─────────────────────────┘

All external data — agent cards, skill schemas, MCP tool calls — is validated at ingress and normalized into a canonical internal model before any logic runs. That boundary is why behavior stays deterministic: the engine never sees raw wire data.
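The ingress boundary can be illustrated with a hand-rolled guard. The real bridge validates with Zod (per the diagram above); this standalone sketch with assumed field names only shows the shape of the boundary:

```typescript
// Validate-then-normalize: downstream logic only ever sees CanonicalSkill,
// never raw wire data with missing or mistyped fields.
interface CanonicalSkill {
  id: string;
  description: string;
}

function parseSkillCard(raw: unknown): CanonicalSkill {
  if (typeof raw !== "object" || raw === null) {
    throw new Error("skill card must be an object");
  }
  const r = raw as Record<string, unknown>;
  if (typeof r.id !== "string" || r.id.length === 0) {
    throw new Error("skill card missing id");
  }
  // Normalize optional fields so the engine never branches on wire quirks.
  return {
    id: r.id,
    description: typeof r.description === "string" ? r.description : "",
  };
}
```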


Session continuity (multi-turn conversations)

Every tool response includes a sessionId. Pass it back on the next call to maintain conversation context with the same agent — the bridge maps it to the A2A contextId and taskId so the remote agent sees a continuous thread.

// First call — no sessionId
{ "message": "What's the weather in Berlin?" }

// Response includes sessionId
{ "sessionId": "a1b2c3...", "artifacts": [...] }

// Follow-up — pass sessionId back
{ "message": "And tomorrow?", "sessionId": "a1b2c3..." }

If a previous task on the same session is still running, the bridge rejects the new call with a SESSION_TASK_RUNNING error and tells the LLM to wait or cancel first — preventing race conditions on the agent side.


Sync budget

The sync budget controls how long the bridge waits for an A2A agent to respond before switching to async task polling. Default: 30 000 ms. Set to 0 to wait indefinitely.

# Wait up to 10 seconds, then return a task handle
npx a2a-mcp-skillmap --a2a-url https://agent.example.com --sync-budget-ms 10000

When the budget expires:

  1. The bridge immediately returns a taskId to the MCP client.
  2. The A2A dispatch continues in the background.
  3. The LLM can poll via task_result or task_status — both actively re-query the remote agent and wait briefly before responding, so the LLM doesn't hammer the tool in a tight loop.
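The budget-expiry behavior amounts to racing the dispatch against a timer. A minimal sketch (illustrative, not the bridge's code):

```typescript
// Race the A2A dispatch against the sync budget. On timeout, return a task
// handle; the dispatch promise keeps running in the background.
type Outcome =
  | { kind: "result"; value: string }
  | { kind: "task"; taskId: string };

async function withSyncBudget(
  dispatch: Promise<string>,
  budgetMs: number,
  taskId: string,
): Promise<Outcome> {
  const timeout = new Promise<Outcome>((resolve) =>
    setTimeout(() => resolve({ kind: "task", taskId }), budgetMs),
  );
  const result = dispatch.then((value): Outcome => ({ kind: "result", value }));
  return Promise.race([result, timeout]);
}
```

Note that losing the race does not cancel the dispatch — which is exactly why the polling tools can later retrieve the finished result by taskId.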

Documentation

  • Examples — every supported way to start the bridge (CLI, env, config file, programmatic, embedded in MCP clients), with copy-pasteable snippets.
  • API reference — every exported symbol, its parameters, return types, and error conditions.
  • CLI reference — every flag, env var, config key, and exit code.
  • Operator guide — transport selection, authentication, response modes, session continuity, sync budget, observability, reference performance.
  • Contributor guide — dev setup, commit conventions, review process, release process.
  • Security — threat model, secret handling, vulnerability reporting.
  • Traceability matrix — every requirement mapped to design, code, and tests.

Requirements

  • Node.js >=20
  • ESM ("type": "module")
  • Peer dep @opentelemetry/api is optional — only needed if you wire a tracer

Contributing

Issues and PRs welcome. See the contributor guide for setup and conventions. Security issues go through GitHub Security Advisories — please don't open public issues for suspected vulnerabilities.

License

MIT