# ADM MCP Server

`@admtoolkit/mcp-server` v1.0.3
MCP server for the ADM Toolkit — used by AI Deployment Managers at Cursor to help enterprise customers and prove platform value.
Implements the ADM Toolkit Spec. Evolves incrementally toward full spec compliance.
## GSD Status

| Tier | Status | Notes |
| --------------------- | ------ | ------------------------------------------------------------------- |
| Infrastructure | Done | MongoDB singleton, constants, dotenv loads root .env |
| ROI Calculator | Done | Spec formulas, conservative/base/aggressive scenarios |
| Metrics Builder | Done | adm_metrics_builder with DORA benchmarks, baseline overrides |
| Account Research | Done | adm_account_research (stub; returns cached when MongoDB has data) |
| Deployment Plan | Done | adm_deployment_plan (LLM-powered, needs ANTHROPIC_API_KEY) |
| Presentation Scaffold | Done | adm_presentation_scaffold with brand config, sections, color_accent |
## Environment (Optional)
Copy .env.example to .env and set MONGODB_URI for persistence. Tools work without MongoDB; persistence is added incrementally.
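A minimal `.env` might look like this (the variable names come from this README; the URI and database name are placeholder assumptions — check `.env.example` for the authoritative list):

```ini
# MongoDB connection for persistence (optional)
MONGODB_URI=mongodb://localhost:27017/adm-toolkit
# Required only for the LLM-powered adm_deployment_plan tool
ANTHROPIC_API_KEY=sk-ant-your-key-here
# Optional: debug output on stdio
ADM_MCP_VERBOSE=1
```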
Seed sample accounts (Adobe, Stripe) for testing:
```shell
cd mcp-server && npm run seed
```

Then run adm_account_research for "Adobe" or "Stripe" to see cached data.
## Tools
| Tool | Description |
| ---------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| adm_intro | Get an overview: what the toolkit is, how it works, and example prompts. Call when users ask "what is this?", "help", or need an introduction. |
| adm_calculate_roi | ROI calculator with spec formulas. Pass accountId to persist to MongoDB. Productivity, bug remediation, onboarding savings. Conservative/base/aggressive scenarios. |
| adm_get_metrics | Get DORA and code quality metrics for pilot (200 seats) or expansion (1,200 seats). |
| adm_metrics_builder | Build DORA-based metrics model with benchmarks, baseline overrides, measured improvements. Persists to MongoDB. Includes METR caveat. |
| adm_account_research | Research enterprise account. Returns cached data from MongoDB or stub when web search not configured. |
| adm_deployment_plan | Generate phased deployment plan (Pilot → Expansion → Full). LLM-powered. Requires ANTHROPIC_API_KEY. |
| adm_presentation_scaffold | Scaffold presentation config for an account. Loads account, ROI, metrics from MongoDB when available. Brand, sections, color_accent. |
| adm_get_deployment_phases | Get phased deployment journey: Pilot → Expansion → Enterprise Scale. Target, focus, success criteria per phase. |
| adm_get_competitive_landscape | Cursor vs GitHub Copilot vs Claude Code comparison matrix + "Why Cursor for Enterprise" talking points. |
| adm_get_proof_points | Executive summary stats (94% active, 12.4x ROI, 32% faster) and developer sentiment (NPS, top 3 drivers). |
| adm_search_playbook | Search ADM playbook for objection handling, deployment tips, pilot success criteria, DORA baselines. |
| adm_generate_presentation_config | Generate presentation config for a customer. Custom badge and sections. |
## Install & Run

```shell
cd mcp-server
npm install
npm start   # or: node index.js
```

Runs on stdio (for Cursor integration). Set ADM_MCP_VERBOSE=1 for debug output.
## Cursor Configuration
Option A — Project config: If .cursor/mcp.json exists in this repo, Cursor may pick it up automatically when this project is the open workspace.
Option B — User config: Add to Cursor MCP settings (Settings → MCP → Edit Config):
```json
{
  "mcpServers": {
    "adm-toolkit": {
      "command": "node",
      "args": ["/path/to/cursor-adm-toolkit/mcp-server/index.js"],
      "cwd": "/path/to/cursor-adm-toolkit"
    }
  }
}
```

Replace the path with your actual project path. Restart Cursor after adding.
## Data Sources

- **mcp-server/lib/constants.js** — ROI formulas (spec), DORA benchmarks, Cursor improvement ranges
- **mcp-server/lib/mongodb.js** — MongoDB connection (when MONGODB_URI is set)
- Parent src/data/ — static data:
  - metrics.js — DORA and code quality metrics
  - timeline.js — Deployment phases
  - competitive.js — Comparison matrix
  - features.js — Executive summary stats
Playbook content lives in mcp-server/playbook.js — extend it with objection handling, best practices, and internal knowledge.
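Because the playbook is a plain JavaScript module, extending it is just adding entries. A minimal sketch of how keyword search over such entries could work — the entry shape and matching logic here are illustrative assumptions, not the actual playbook.js implementation:

```javascript
// Hypothetical playbook entry shape; the real entries live in
// mcp-server/playbook.js and may differ.
const playbook = [
  {
    title: 'Objection: "We already have Copilot"',
    tags: ["objection", "copilot", "competitive"],
    body: "Cursor is an AI-native IDE with full codebase context, not a plugin.",
  },
];

// Naive AND-match: every query term must appear in the entry's
// tags, title, or body (case-insensitive).
function searchPlaybook(query) {
  const terms = query.toLowerCase().split(/\s+/);
  return playbook.filter((entry) =>
    terms.every(
      (t) =>
        entry.tags.includes(t) ||
        entry.title.toLowerCase().includes(t) ||
        entry.body.toLowerCase().includes(t)
    )
  );
}
```

With this shape, a query like "objection copilot" matches on tags alone, which keeps entries findable even when the body wording changes.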
## Usage Examples
Once the MCP server is configured in Cursor, you can ask in natural language. The AI will call the appropriate tools. Examples:
### ROI for a prospect
You: "What's the ROI for 2,500 developers at 35% velocity improvement?"
The AI calls adm_calculate_roi and returns something like:
**ROI for 2,500 developers**
• ROI: **11.2x**
• Payback: **1.1 months**
• Annual license: $1.2M
• Total savings: $13.4M
• Net savings: **$12.2M**

You: "Model a conservative scenario: 800 seats, 20% improvement."
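The headline numbers above are internally consistent: ROI is total savings divided by annual license cost, and payback is that cost spread over monthly savings. A hedged sketch of the arithmetic (the actual spec formulas live in mcp-server/lib/constants.js and also model productivity, bug remediation, and onboarding savings separately):

```javascript
// Illustrative ROI arithmetic only; adm_calculate_roi uses the
// spec formulas in lib/constants.js, which break savings into
// productivity, bug remediation, and onboarding components.
function roiSummary({ annualLicense, totalSavings }) {
  const netSavings = totalSavings - annualLicense;
  const roi = totalSavings / annualLicense;                 // 13.4M / 1.2M ≈ 11.2x
  const paybackMonths = (12 * annualLicense) / totalSavings; // ≈ 1.1 months
  return { netSavings, roi, paybackMonths };
}

const r = roiSummary({ annualLicense: 1.2e6, totalSavings: 13.4e6 });
```

Plugging in the example's $1.2M license and $13.4M savings reproduces the 11.2x ROI, 1.1-month payback, and $12.2M net savings shown above.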
### Proof points and competitive
You: "Give me the proof points for a slide."
The AI calls adm_get_proof_points:
**Executive Summary Stats**
• 94% — Active Usage: Weekly active developers on Cursor
• 12.4x — ROI: Return on 1,200-seat deployment
• 32% — Faster: Average PR cycle time improvement
**Developer NPS:** 87
**Top 3 things developers love:**
• "Codebase understanding" — 89% positive
• "Speed of Tab completions" — 86% positive
• "Multi-file Agent mode" — 82% positive

You: "They're comparing us to Copilot. What should I say?"
The AI calls adm_get_competitive_landscape and returns the comparison matrix plus "Why Cursor for Enterprise" talking points.
### Objection handling
You: "How do I handle the 'we already have Copilot' objection?"
The AI calls adm_search_playbook with query "objection copilot":
**Objection: "We already have Copilot"**
Cursor is an AI-native IDE with full codebase context, not a plugin; Copilot works at the file level. For complex multi-repo architectures, codebase-level understanding matters. Show the competitive matrix.
_(tags: objection, copilot, competitive)_

You: "What about security concerns?"
The AI searches for "objection security" and returns the playbook entry on data residency and audit logs.
### Deployment planning
You: "What are the deployment phases and success criteria?"
The AI calls adm_get_deployment_phases:
**Phase 1** — Months 1-3 (200 seats)
Target: 3 volunteer teams across key product areas
Focus: Establish baselines, identify champions, configure Cursor Rules
Success: >80% weekly active usage, positive developer NPS
**Phase 2** — Months 4-8 (1,200 seats)
...

You: "Search the playbook for pilot success criteria."
### Custom presentation
You: "Generate a presentation config for Acme Corp with badge 'Acme'."
The AI calls adm_generate_presentation_config and returns a presentation.js snippet you can drop into the harness.
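For illustration, a generated snippet might have roughly this shape — the badge and sections fields appear elsewhere in this README, but the exact structure returned by adm_generate_presentation_config may differ, and the section names and accent color here are hypothetical:

```javascript
// Hypothetical presentation.js snippet; field names beyond badge,
// sections, and color_accent are assumptions, not the tool's real output.
const presentationConfig = {
  badge: "Acme",
  color_accent: "#4f8ef7", // placeholder accent color
  sections: ["proof_points", "roi", "deployment_phases", "competitive"],
};

module.exports = presentationConfig;
```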
### Pre-meeting workflow
Before a customer call, try:
- "Get proof points and competitive landscape" — one prompt, two tools.
- "Calculate ROI for [their seat count] developers" — tailor the numbers.
- "Search playbook for [their main concern]" — e.g. "objection cost", "deployment champions".
## Use Cases (Quick Reference)

- Pre-meeting prep: adm_get_proof_points, adm_get_competitive_landscape
- ROI modeling: adm_calculate_roi with the prospect's seat count
- Deployment planning: adm_get_deployment_phases, adm_search_playbook for "pilot success"
- Objection handling: adm_search_playbook for "objection copilot", "objection security"
- Custom deck: adm_generate_presentation_config for a prospect
