

cursor-cli-mcp

Use every model in your Cursor subscription from any AI agent. No API keys needed.

An MCP (Model Context Protocol) server that bridges any MCP-compatible tool to Cursor's full model roster — GPT-5.x, Claude 4.x, Gemini 3.x, Grok, and more — using your existing Cursor subscription credits.

Why?

AI coding agents like Claude Code, Codex CLI, and others are powerful — but they're locked to a single model provider. Meanwhile, your Cursor subscription gives you access to 40+ models across OpenAI, Anthropic, Google, and others.

cursor-cli-mcp bridges this gap. It lets any MCP-compatible agent call any Cursor model as a tool, using your subscription. No API keys, no separate billing, no rate limits beyond your plan.

The value

| Without cursor-cli-mcp | With cursor-cli-mcp |
|------------------------|---------------------|
| Each model needs its own API key | Zero API keys — use your Cursor subscription |
| Separate billing per provider | One subscription covers all models |
| Rate limits per API | Use your existing Cursor allowance |
| Locked to one model per agent | Switch models per-task from any agent |

What you get

| Tool | Description |
|------|-------------|
| cursor_prompt | Send any prompt to any Cursor model and get a response |
| cursor_review | Code review using any model — checks bugs, security, performance, quality |
| cursor_review_file | Review a file by path using any model |
| cursor_list_models | List all models available in your subscription |
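
Behind the scenes these are ordinary MCP tools, so a client invokes them with a standard `tools/call` request. A sketch of what that message looks like for cursor_prompt (the prompt text and model id are illustrative; the argument names match the use-case examples below):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "cursor_prompt",
    "arguments": {
      "prompt": "Summarize this diff",
      "model": "gpt-5.4-high"
    }
  }
}
```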

Prerequisites

  1. Cursor installed with an active subscription (cursor.com)
  2. Cursor CLI agent installed at ~/.local/bin/agent (Cursor installs this automatically — run cursor from terminal to verify)
  3. Node.js >= 18

Custom agent path? Set the CURSOR_AGENT_PATH environment variable to override the default location.
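
For example, if your agent binary lives somewhere non-standard, export the override before launching the server (the path below is illustrative):

```shell
# Point the server at a non-default Cursor CLI agent binary.
# This path is an example; use wherever your agent actually lives.
export CURSOR_AGENT_PATH="$HOME/cursor/bin/agent"

# Then launch as usual, e.g.:
# npx -y cursor-cli-mcp
```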

Install

Option 1: npx (no install)

Use directly in your MCP config — no global install needed:

{
  "mcpServers": {
    "cursor-cli": {
      "command": "npx",
      "args": ["-y", "cursor-cli-mcp"]
    }
  }
}

Option 2: Global install

npm install -g cursor-cli-mcp

Then reference it in your MCP config:

{
  "mcpServers": {
    "cursor-cli": {
      "command": "cursor-cli-mcp"
    }
  }
}

Option 3: From source

git clone https://github.com/MazRadwan/cursor-cli-mcp.git
cd cursor-cli-mcp
npm install

Then point your MCP config at the local entry point:

{
  "mcpServers": {
    "cursor-cli": {
      "command": "node",
      "args": ["/path/to/cursor-cli-mcp/index.js"]
    }
  }
}

Setup by client

Claude Code

claude mcp add cursor-cli -- npx -y cursor-cli-mcp

Or add to your project .mcp.json:

{
  "mcpServers": {
    "cursor-cli": {
      "command": "npx",
      "args": ["-y", "cursor-cli-mcp"]
    }
  }
}

Codex CLI

Add to your MCP configuration:

{
  "mcpServers": {
    "cursor-cli": {
      "command": "npx",
      "args": ["-y", "cursor-cli-mcp"]
    }
  }
}

Any MCP-compatible client

cursor-cli-mcp uses stdio transport — it works with any client that supports the MCP standard. Just point your client's MCP server config to npx -y cursor-cli-mcp or the installed binary.

Use cases

Cross-model code review

Use Claude Code as your primary agent, but send reviews to GPT-5.4 or Gemini for a second opinion:

"Review this auth module for security issues"
→ cursor_review_file({ file_path: "/src/auth.ts", model: "gpt-5.4-high" })

Model comparison

Test the same prompt across different models to compare outputs:

→ cursor_prompt({ prompt: "Explain this regex: ...", model: "gpt-5.4-high" })
→ cursor_prompt({ prompt: "Explain this regex: ...", model: "sonnet-4.6-thinking" })

Overflow / fallback

When your primary agent's API hits rate limits or quota, route to Cursor as a fallback without changing your workflow.
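
One way to wire that up in an agent's glue code, as a minimal sketch: a wrapper that reroutes a rate-limited call through a second function. Both callbacks are hypothetical stand-ins for your client's real tool-call API (e.g. the primary provider's call vs. a cursor_prompt call through this server).

```javascript
// Minimal fallback sketch: try the primary provider, and on an HTTP 429
// (rate limit) reroute to a secondary function, e.g. a cursor_prompt
// tool call via cursor-cli-mcp. Both callbacks are hypothetical.
async function withFallback(primary, fallback) {
  try {
    return await primary();
  } catch (err) {
    if (err && err.status === 429) {
      return fallback(); // rate limited: reroute through Cursor
    }
    throw err; // anything else is a real error, so propagate it
  }
}
```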

Specialized tasks

Route tasks to the best model for the job:

  • Deep reasoning → opus-4.6-thinking
  • Fast iteration → gpt-5.4-high-fast
  • Code generation → gpt-5.3-codex-high
  • Planning → sonnet-4.6-thinking

Available models

Run cursor_list_models to see your full list. Typical subscriptions include:

  • OpenAI: GPT-5.4, GPT-5.3 Codex, GPT-5.2, GPT-5.1
  • Anthropic: Claude 4.6 Opus, Claude 4.6 Sonnet, Claude 4.5
  • Google: Gemini 3.1 Pro, Gemini 3 Pro, Gemini 3 Flash
  • Others: Grok, Kimi K2.5, and more

Model availability depends on your Cursor subscription tier.

Configuration

| Environment variable | Default | Description |
|----------------------|---------|-------------|
| CURSOR_AGENT_PATH | ~/.local/bin/agent | Path to the Cursor CLI agent binary |

How it works

flowchart LR
    A["MCP Client<br/>(Claude Code, Codex CLI, etc.)"] -- "stdio / MCP protocol" --> B["cursor-cli-mcp"]
    B -- "exec / agent binary" --> C["Cursor CLI<br/>(40+ models)"]
    C -- response --> B
    B -- response --> A

  1. Your MCP client sends a tool call over stdio
  2. cursor-cli-mcp translates it into a Cursor CLI agent invocation
  3. Cursor routes to the requested model using your subscription
  4. The response flows back through the same path

Contributing

Contributions welcome! Please open an issue or PR.

License

See LICENSE for details.