launchroad-mcp
v0.13.2
MCP server for LaunchRoad — the coordination layer for AI-powered dev teams.
Gives any MCP-compatible AI coding tool live access to your team's shared brain, knowledge graph, work coordination, and session continuity. Works with Claude Code, Cursor, VS Code (Continue/Cline), ChatGPT Desktop, and any tool that speaks the Model Context Protocol.
Quick start
One command (recommended)
```shell
npm install -g launchroad-mcp
launchroad init
```

`launchroad init` wires Claude Code, Cursor, and VS Code to LaunchRoad in one shot:
- Registers the MCP server in `~/.cursor/mcp.json` and the repo's `.mcp.json`
- Adds `PreToolUse`, `UserPromptSubmit`, and async `Stop` hooks to `~/.claude/settings.json`
- Drops a `.cursor/rules/launchroad.mdc` rule into the current repo if Cursor is detected
- Appends a `CLAUDE.md` section so the next AI session knows what's wired
- Idempotent — running it again replaces the LaunchRoad entries with current values, never duplicates
After the command finishes, restart your editor.
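That replace-not-duplicate behavior is essentially a keyed upsert into the config JSON. A minimal sketch of the idea (the helper name and types are illustrative, not the package's actual code):

```typescript
// Illustrative sketch: registering by key overwrites the existing "launchroad"
// entry instead of appending a duplicate (upsertMcpServer is hypothetical).
type McpServerEntry = { command: string; args: string[]; env?: Record<string, string> };
type McpConfig = { mcpServers?: Record<string, McpServerEntry> };

function upsertMcpServer(config: McpConfig, name: string, entry: McpServerEntry): McpConfig {
  // Replace-by-key makes the operation idempotent: running it twice is the
  // same as running it once.
  return { ...config, mcpServers: { ...(config.mcpServers ?? {}), [name]: entry } };
}

const entry: McpServerEntry = { command: "npx", args: ["-y", "launchroad-mcp"] };
const once = upsertMcpServer({}, "launchroad", entry);
const twice = upsertMcpServer(once, "launchroad", entry);
```

The same pattern applies to the hook blocks in `~/.claude/settings.json`: old LaunchRoad entries are replaced, everything else is left untouched.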
Claude Code (manual)
```shell
claude mcp add launchroad -- npx -y launchroad-mcp
```

Then set environment variables:

```shell
export LAUNCHROAD_TOKEN="<your-token>"
export LAUNCHROAD_API_URL="https://your-launchroad-instance.vercel.app"
```

Get your token from Settings → Integrations in the LaunchRoad web app.
Manual config (Claude Code / Cursor / VS Code)
Add to your MCP config file:
| Tool | Config file |
|------|------------|
| Claude Code | ~/.claude/claude_code_config.json |
| Cursor | ~/.cursor/mcp.json |
| VS Code | varies by extension |
```json
{
  "mcpServers": {
    "launchroad": {
      "command": "npx",
      "args": ["-y", "launchroad-mcp"],
      "env": {
        "LAUNCHROAD_TOKEN": "<your-token>",
        "LAUNCHROAD_API_URL": "https://your-instance.vercel.app"
      }
    }
  }
}
```

Restart your editor after adding.
Pre-edit gate (enforcement, not advice)
The MCP tools below this section are advisory — your AI has to remember to call them. The pre-edit gate flips that: every edit is checked against active claims, file locks, and guardrails before it runs. Conflicts deny; warnings ask the user; otherwise it auto-claims so teammates see what you're touching.
Three install paths depending on your editor — all hit the same backend.
Claude Code (real hook — strongest enforcement)
The hook fires on every edit, so install globally for fast startup (npx adds ~1.5s of resolution per invocation):
```shell
npm install -g launchroad-mcp
```

Then add this to `~/.claude/settings.json`:
```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit|Write|MultiEdit",
        "hooks": [
          {
            "type": "command",
            "command": "LAUNCHROAD_TOKEN=<your-token> LAUNCHROAD_API_URL=https://your-instance.vercel.app launchroad-hook"
          }
        ]
      }
    ]
  }
}
```

If you'd rather not put the token in `settings.json`, export `LAUNCHROAD_TOKEN` and `LAUNCHROAD_API_URL` in your shell rc and just run `launchroad-hook` — the hook reads them from the environment. On a backend error, missing token, non-gated tool, or 5s timeout, it fails open and never blocks your edits.
The hook POSTs the path + repo + tool name to /api/mcp/preedit (~250ms warm) and returns:
- `deny` — hard lock by another member, active claim by another member, or `block` guardrail. Edit is blocked with the conflicting claim_id surfaced.
- `warn` — soft lock or `warn` guardrail. Claude Code asks you before proceeding.
- `allow` — no conflicts. A 30-min soft claim is auto-created so teammates see this file is in play.
To verify: open two Claude Code sessions on the same repo. Session A claims a file. Ask session B to edit it. The hook denies and surfaces the claim_id.
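The deny/warn/allow handling can be sketched as a small decision mapper. This is a hypothetical illustration of the fail-open contract described above, not the shipped hook source:

```typescript
// Hypothetical sketch of the gate's decision logic (not the real hook code).
// Anything unrecognized, including a missing response after the 5s timeout,
// resolves to "allow" so the gate never blocks edits on its own failure.
type Decision = "deny" | "warn" | "allow";
type GateResult = { decision: Decision; claimId?: string };

function resolveGate(response: { decision?: string; claim_id?: string } | null): GateResult {
  if (!response) return { decision: "allow" }; // backend error or timeout: fail open
  switch (response.decision) {
    case "deny": return { decision: "deny", claimId: response.claim_id };
    case "warn": return { decision: "warn", claimId: response.claim_id };
    default:     return { decision: "allow" };
  }
}

const denied = resolveGate({ decision: "deny", claim_id: "c_123" });
const failedOpen = resolveGate(null);
```

In the real hook, `deny` and `warn` are then translated into Claude Code's PreToolUse hook output so the edit is blocked or confirmed with the user.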
Auto-context (every prompt starts hot)
A second hook on the UserPromptSubmit event silently injects compressed team context into every prompt your AI sees: hard guardrails, active claims by teammates, signals targeting you, knowledge graph entries relevant to paths/keywords in the prompt, recent decisions. No tool call, no MCP overhead — your AI just knows.
Add this to the same ~/.claude/settings.json next to the PreToolUse block:
```json
"UserPromptSubmit": [
  {
    "hooks": [
      {
        "type": "command",
        "command": "LAUNCHROAD_TOKEN=<your-token> LAUNCHROAD_API_URL=https://your-instance.vercel.app launchroad-prompt-hook"
      }
    ]
  }
]
```

The hook reads your prompt, extracts mentioned paths + keywords, calls `/api/mcp/preprompt` (~250ms warm server-side), and emits `additionalContext` capped at ~1.5K tokens. Every error path fails open — never blocks a prompt.
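The extract-and-cap step can be sketched with two small helpers. Both are hypothetical illustrations (the real hook's parsing is not shown here), and the 4-characters-per-token ratio is a rough assumption:

```typescript
// Hypothetical sketch of prompt-side context shaping (not the shipped hook).
// Pull path-like tokens out of the prompt, then cap the injected context by a
// rough chars-per-token estimate (assumed ~4 chars per token).
function extractPaths(prompt: string): string[] {
  // Crude "looks like a file path" pattern: word chars, dots, slashes, dashes,
  // ending in a short extension.
  const matches = prompt.match(/[\w./-]+\.\w{1,8}/g) ?? [];
  return [...new Set(matches)]; // dedupe repeated mentions
}

function capContext(context: string, maxTokens = 1500): string {
  const maxChars = maxTokens * 4;
  return context.length <= maxChars ? context : context.slice(0, maxChars);
}
```

Anything the extractor finds is matched against the knowledge graph server-side; the cap keeps the injected `additionalContext` from crowding out the user's own prompt.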
Auto-knowledge capture (every session contributes back)
The third hook on the Stop event runs after the assistant finishes a turn. Async — never blocks the user. It reads the transcript, runs git diff HEAD --numstat to find changed files, and POSTs a knowledge contribution (most-edited file, last assistant message as the summary, total line counts as the raw token estimate). The team's knowledge graph fills itself as devs work.
Add this to the same ~/.claude/settings.json next to the other hook blocks:
```json
"Stop": [
  {
    "hooks": [
      {
        "type": "command",
        "async": true,
        "command": "LAUNCHROAD_TOKEN=<your-token> LAUNCHROAD_API_URL=https://your-instance.vercel.app launchroad-stop-hook"
      }
    ]
  }
]
```

The hook skips silently if there are no uncommitted changes, no git remote, no LaunchRoad token, or the assistant's last message is short (<200 chars). All errors fail open — never log to stdout, never block the next prompt.
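The numstat summarization is straightforward to sketch. This is an illustrative reimplementation of the heuristics described above, not the shipped hook code:

```typescript
// Illustrative sketch of the Stop-hook heuristics (not the real hook code).
// `git diff HEAD --numstat` emits "added<TAB>deleted<TAB>path" per file; the
// most-edited file becomes the knowledge entry's subject and the summed line
// counts serve as a rough size estimate.
function summarizeNumstat(numstat: string): { topFile: string | null; totalLines: number } {
  let topFile: string | null = null;
  let topLines = -1;
  let totalLines = 0;
  for (const line of numstat.trim().split("\n")) {
    const [added, deleted, path] = line.split("\t");
    if (!path) continue; // skip blank lines
    // Binary files show "-" for counts; treat them as 0 changed lines.
    const lines = (Number(added) || 0) + (Number(deleted) || 0);
    totalLines += lines;
    if (lines > topLines) { topLines = lines; topFile = path; }
  }
  return { topFile, totalLines };
}

// Mirror of the documented <200-char threshold for the assistant's last message.
const worthContributing = (lastMessage: string) => lastMessage.length >= 200;
```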
Cursor (rule-driven — same gate, different path)
Cursor doesn't have file-edit hooks, so we enforce via a Cursor rule that tells the AI to call the MCP tool check_before_edit before any write. Same backend, same decisions — just the AI making the call instead of a hook intercepting.
Drop this file at .cursor/rules/launchroad.mdc in your repo (template ships in node_modules/launchroad-mcp/templates/cursor-rule.mdc):
```
---
description: LaunchRoad pre-edit coordination
alwaysApply: true
---
Before any tool call that writes to a file (Edit / Write / MultiEdit /
search_replace / patch / apply), call the LaunchRoad MCP tool
`check_before_edit` with the repo-relative paths you're about to modify.
If the decision is `deny`, do not proceed — report the conflict to the
user with the surfaced claim_id and ask how to handle it. If `warn`,
confirm with the user before proceeding. If `allow`, proceed.
```

Make sure the launchroad MCP server is configured in `~/.cursor/mcp.json` (see top of this README).
VS Code
If you're using the Anthropic Claude Code extension, your ~/.claude/settings.json hook from the section above already applies — same hook, same enforcement. No extra config.
For other VS Code MCP clients (Continue, Cline, etc.), use the same Cursor rule pattern: a system prompt or rule file telling the AI to call check_before_edit first.
What your AI gets
27 tools. Each returns compact, token-efficient responses.
Session continuity
| Tool | What it does |
|------|-------------|
| start_session | Call this first. Returns last session's summary, active claims, pending signals, guardrails, org brain, and team status in one call. |
| end_session | Save what you did, files touched, and pending work. The next session picks up from here. |
Knowledge graph (saves tokens)
| Tool | What it does |
|------|-------------|
| query_codebase_knowledge | Search AI-compressed summaries BEFORE reading raw files. |
| contribute_knowledge | After reading >2K tokens of source, save a summary so future sessions skip the raw read. |
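The >2K-token threshold implies some way to estimate tokens. A common rough heuristic (an assumption here, roughly 4 characters per token; not necessarily what LaunchRoad uses):

```typescript
// Assumed heuristic: ~4 characters per token. Good enough to decide whether a
// source read is big enough to be worth summarizing back into the graph.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);
const worthSummarizing = (source: string): boolean => estimateTokens(source) > 2000;
```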
Coordination
| Tool | What it does |
|------|-------------|
| claim_work | Declare what you're working on + file paths. Auto-checks for conflicts. |
| check_claims | See all active work claims across the team. |
| complete_claim | Mark a claim done or abandoned. |
| lock_files / unlock_files | Soft/hard lock files before editing. |
| check_file_ownership | See who last touched files + lock status. |
| send_signal | Proactive notification to a teammate: needs_input, fyi, review, conflict, unblocked. |
| get_my_signals / resolve_signal | Check and resolve signals targeting you. |
Shared brain
| Tool | What it does |
|------|-------------|
| get_team_context | Read the org brain: decisions, goals, milestones, blockers, notes. |
| log_decision / add_blocker | Write to the org brain. |
| get_guardrails / add_guardrail | Architecture rules every AI must follow. |
| update_my_focus | Set what you're working on so the team can see. |
Communication
| Tool | What it does |
|------|-------------|
| post_update | Post a status update visible to the whole team. |
| ask_team | Ask a question visible to everyone. |
| get_recent_activity | Compact timeline of everything that happened. |
| get_team_status | Current focus + blockers for every teammate. |
Resources (passive context — token-free reads)
Resources let your MCP client attach live coordination context as background reading without spending a tool call. Claude Code surfaces them automatically; other clients can pull them on demand.
| URI | What it is |
|------|-------------|
| launchroad://activity/recent | Last 20 items from the team activity feed |
| launchroad://claims/active | Currently active work claims across the team |
| launchroad://guardrails | All architecture guardrails for the org |
| launchroad://decisions/recent | Durable org-brain entries (decisions, goals, milestones) |
In Claude Code: /mcp → expand launchroad → resources show up under the server. Read them as background; no tool calls.
GitHub integration
| Tool | What it does |
|------|-------------|
| get_recent_commits | Recent commits from connected repos. |
| get_open_prs | Open pull requests. |
| get_pr_diff | Read a PR's diff. |
| get_deployment_status | Vercel deployment status. |
Add to your CLAUDE.md
Copy this into your project's CLAUDE.md so Claude Code knows about LaunchRoad on every session:
```markdown
## LaunchRoad (MCP)

This project uses LaunchRoad for AI team coordination.

### Every session:
1. Call `start_session` first — gets full team context in one call
2. Call `query_codebase_knowledge` before reading raw files
3. Call `end_session` when done — saves context for the next session

### Key tools:
- `contribute_knowledge` — save summaries after reading >2K tokens of source
- `claim_work` — declare what you're editing to prevent conflicts
- `send_signal` — proactively warn teammates about conflicts or share discoveries
- `get_guardrails` — check architecture rules before structural changes
```

Environment variables
| Variable | Required | Description |
|----------|----------|-------------|
| LAUNCHROAD_TOKEN | Yes | Your personal MCP token from the LaunchRoad web app |
| LAUNCHROAD_API_URL | No | API URL (defaults to http://localhost:3000) |
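Resolving those two variables with the documented default might look like this sketch (illustrative; not the package's actual config loader):

```typescript
// Hypothetical config resolution mirroring the table above: token is required,
// API URL falls back to the documented default.
function resolveConfig(env: Record<string, string | undefined>) {
  const token = env.LAUNCHROAD_TOKEN;
  if (!token) throw new Error("LAUNCHROAD_TOKEN is required");
  return {
    token,
    apiUrl: env.LAUNCHROAD_API_URL ?? "http://localhost:3000", // documented default
  };
}
```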
How it works
LaunchRoad never runs an LLM. Your AI tool calls LaunchRoad's MCP tools mid-conversation to read shared context, log decisions, and coordinate with teammates' AIs. Token-efficient responses keep costs low. The knowledge graph compresses codebase summaries so future sessions read summaries instead of raw files.
License
MIT
