
@trishchuk/gemini-mcp-server

v1.2.0

MCP server for Gemini CLI integration

An advanced Model Context Protocol (MCP) server that bridges any MCP-compatible client (Claude Desktop, Cursor, Windsurf, etc.) to Google's Gemini CLI.

By acting as a proxy to the Gemini CLI, this server exposes Gemini as a powerful, agentic backend directly within your favorite AI coding assistant. It brings Gemini's robust features—such as model selection, session management, sandbox controls, and batch processing—straight into your workflow through the standard Model Context Protocol.

🌟 Why Use This?

  • Dual-Brain Power: Use Gemini as a "second brain" inside Claude or Cursor. Delegate heavy file-analysis tasks, ask for a second architectural opinion, or run parallel code reviews.
  • Batch Processing: Perform mass refactoring, project-wide code transformations, and multi-file edits using Gemini's native tool-use capabilities (batch-gemini tool).
  • Autonomous Act → Check → Fix Loop: Execute a task, automatically verify the result via a shell command (e.g., npm test), and self-correct on failure using resume-based retries (do-act tool).
  • Session Persistence: Resume long-running, multi-turn conversations with Gemini across isolated tool calls.
  • Full CLI Parity: Full support for Gemini models, approval modes (yolo, auto_edit, plan), sandboxing, extensions, skills, and even nested MCP server management.

🚀 Features & Tools

This server exposes a comprehensive suite of 16+ tools grouped logically into the following categories:

Core Execution & Analysis

  • ask-gemini: Execute a prompt via the Gemini CLI for code analysis and generation. Supports file context, model selection, approval modes, session resuming, and sandbox controls.
  • batch-gemini: Concurrently execute multiple atomic Gemini tasks in batch mode—ideal for mass refactoring or repetitive code transformations across a codebase.
  • do-act: Execute a task with verification-driven retry. Performs a change, verifies it with a user-provided shell command, and uses session history to automatically fix issues if the verification fails.
  • review-changes: Perform code reviews of Git changes (uncommitted changes, branch diffs, specific commits) in read-only mode, with built-in prompt injection protection.
  • brainstorm: Generate creative ideas, architectural designs, or solutions using structured methodologies like SCAMPER, Six Thinking Hats, First Principles, etc.

Session Management

  • resume-session: Resume a previous Gemini CLI session using a session ID or index to continue a train of thought.
  • list-sessions: View and manage (list or delete) active Gemini CLI sessions for the current project.

Control & Execution

  • abort-gemini: Instantly kill all running Gemini CLI child processes. Essential for safely aborting long-running tasks or out-of-control loops.

Extensions & Skills

  • list-extensions: List all installed Gemini CLI extensions.
  • manage-extensions: Install, uninstall, update, enable, or disable Gemini CLI extensions.
  • list-skills: Discover and list available Gemini agent skills.
  • manage-skills: Install, uninstall, enable, disable, or link Gemini agent skills.

Administration

  • manage-mcp: Manage nested MCP servers directly configured within the Gemini CLI itself (list, add, remove, enable, disable).

Diagnostics & Utility

  • health: Check the underlying Gemini CLI health, authentication, and connectivity status.
  • ping: Simple echo test to verify MCP server responsiveness.
  • version: Display detailed version and system information (Gemini CLI, Node.js, OS, and MCP server versions).
  • help: Retrieve standard help documentation directly from the Gemini CLI.

💻 Prerequisites

  • Node.js >= 18.0.0
  • Gemini CLI installed and authenticated (gemini must be available in your system PATH).

📦 Installation

You can use the server immediately via npx, or install it globally via npm:

Global Install (Recommended)

npm install -g @trishchuk/gemini-mcp-server

Using NPX

npx -y @trishchuk/gemini-mcp-server

From Source

git clone https://github.com/x51xxx/gemini-mcp-server.git
cd gemini-mcp-server
npm install
npm run build
# The compiled executable will be at dist/index.js
node dist/index.js

⚙️ Configuration

To connect your AI assistant to the Gemini MCP server, update your client's configuration file.

Claude Desktop

Edit your claude_desktop_config.json (typically at ~/Library/Application Support/Claude/claude_desktop_config.json on macOS or %APPDATA%\Claude\claude_desktop_config.json on Windows):

{
  "mcpServers": {
    "gemini": {
      "command": "npx",
      "args": ["-y", "@trishchuk/gemini-mcp-server"]
    }
  }
}

(If installed globally, replace "command": "npx" and the "args" array with "command": "gemini-mcp".)
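
For reference, a global-install configuration might look like this (this sketch assumes the package exposes a gemini-mcp executable, as the note above suggests):

```json
{
  "mcpServers": {
    "gemini": {
      "command": "gemini-mcp"
    }
  }
}
```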

Cursor / Windsurf / Generic MCP

In Cursor or Windsurf, navigate to Settings > MCP Settings and add a new server:

{
  "mcpServers": {
    "gemini": {
      "command": "npx",
      "args": ["-y", "@trishchuk/gemini-mcp-server"]
    }
  }
}

⚡ Quick Start Examples

Once configured, you can prompt your AI assistant (e.g., Claude) to use Gemini tools seamlessly. Here are examples of how to use the most powerful tools:

  • Using ask-gemini:

    "Ask Gemini to analyze my src/utils/ directory and explain the core logic of commandExecutor.ts. Run it using the gemini-3.1-pro-preview model."

  • Using do-act:

    "Use the do-act tool to implement a new login route in app.js. Use npm run test as the verification command. If the tests fail, let Gemini automatically fix the code up to 3 times."

  • Using batch-gemini:

    "Use batch-gemini to add descriptive JSDoc comments to all exported TypeScript interfaces across the entire src/ directory."

  • Using review-changes:

    "Ask Gemini to review my current uncommitted git changes. Look specifically for security vulnerabilities or performance regressions."

  • Using brainstorm:

    "Use the brainstorm tool to generate 5 distinct architectural approaches for a real-time chat feature, applying the 'Six Thinking Hats' methodology."

🌍 Environment Variables

You can configure the behavior of the server using the following environment variables:

| Variable | Description |
|----------|-------------|
| GEMINI_MCP_CWD | Override the default working directory for all Gemini tool executions. |
| GEMINI_API_KEY | Preferred API key for Gemini CLI authentication. |
| GOOGLE_API_KEY | Alternative/fallback API key for Google AI services. |
| GOOGLE_CLOUD_PROJECT | Google Cloud Project ID (required if using Vertex AI). |
| GOOGLE_GENAI_USE_VERTEXAI | Set to true to enable Google Cloud Vertex AI mode instead of AI Studio. |
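
Since MCP clients launch the server as a child process, these variables are usually set via the env block of the client configuration. A sketch (the path and key values are placeholders):

```json
{
  "mcpServers": {
    "gemini": {
      "command": "npx",
      "args": ["-y", "@trishchuk/gemini-mcp-server"],
      "env": {
        "GEMINI_MCP_CWD": "/path/to/your/project",
        "GEMINI_API_KEY": "your-api-key"
      }
    }
  }
}
```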

🧠 Available Models

By default, the server uses the most capable model available. You can specify different models via tool arguments based on your performance vs. cost needs:

| Model ID | Description |
|----------|-------------|
| gemini-3.1-pro-preview | (Default) Newest, most capable pro model. |
| gemini-3-pro-preview | Previous-generation pro model. |
| gemini-3-flash-preview | Extremely fast model for high-frequency tasks. |
| gemini-2.5-pro | Stable, robust pro model for complex reasoning. |
| gemini-2.5-flash | Stable fast model for general tasks. |
| gemini-2.5-flash-lite | Highly efficient lightweight model for simple operations. |

(Note: Available models may update automatically as Google releases new CLI versions).
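
At the protocol level, model selection arrives as an ordinary MCP tools/call request. A hedged sketch of what a client might send — the request envelope follows the standard MCP JSON-RPC shape, but the argument names (prompt, model) are assumptions based on the feature list above, not a confirmed schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask-gemini",
    "arguments": {
      "prompt": "Explain the error handling in src/commandExecutor.ts",
      "model": "gemini-2.5-flash"
    }
  }
}
```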

🏗️ Architecture Overview

  1. MCP Protocol Layer: The server communicates with clients (Claude/Cursor) over stdio using the official @modelcontextprotocol/sdk.
  2. Tool Registry: Inbound tool execution requests are validated via zod schemas and routed to specific handlers in src/tools/.
  3. Execution Engine: Handlers use cross-spawn to spawn isolated gemini child processes.
  4. Output Parsing: Raw stdout/stderr streams from the CLI are captured, parsed, and formatted into clean Markdown and structured JSON before being returned to the MCP client.
  5. Session State: The server leverages Gemini CLI's native session history on disk, passing session IDs back and forth to maintain context across disparate MCP turns.
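
The execution-engine step (3) and output-parsing step (4) can be sketched in a few lines. This is an illustrative stand-in, not the server's actual code: it uses Node's built-in child_process in place of cross-spawn, and runs echo in place of the real gemini binary.

```typescript
import { spawn } from "node:child_process";

// Spawn a CLI child process, capture stdout/stderr, and resolve when it exits.
function runCli(
  cmd: string,
  args: string[]
): Promise<{ stdout: string; stderr: string; code: number | null }> {
  return new Promise((resolve, reject) => {
    const child = spawn(cmd, args);
    let stdout = "";
    let stderr = "";
    child.stdout.on("data", (chunk: Buffer) => (stdout += chunk.toString()));
    child.stderr.on("data", (chunk: Buffer) => (stderr += chunk.toString()));
    child.on("error", reject); // e.g. binary not found on PATH
    child.on("close", (code) => resolve({ stdout, stderr, code }));
  });
}

// Demo with a harmless placeholder command instead of `gemini`.
runCli("echo", ["hello from the CLI"]).then(({ stdout, code }) => {
  console.log(stdout.trim()); // the captured output
  console.log(code); // 0 on success
});
```

In the real server, the captured streams would then be parsed into Markdown/JSON before being returned over MCP.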

🤝 Contributing

Contributions are welcome! If you want to add new tools, improve output parsing, or fix bugs:

  1. Fork the repository.
  2. Clone your fork: git clone https://github.com/<YOUR_USERNAME>/gemini-mcp-server.git
  3. Install dependencies: npm install
  4. Make your changes in src/.
  5. Run npm run build and npm run lint to make sure everything compiles and passes lint checks.
  6. Submit a Pull Request with a clear description of your changes.

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

Author: Taras Trishchuk [email protected]