# @trishchuk/gemini-mcp-server

v1.2.0
An advanced Model Context Protocol (MCP) server that bridges any MCP-compatible client (Claude Desktop, Cursor, Windsurf, etc.) to Google's Gemini CLI.
By acting as a proxy to the Gemini CLI, this server exposes Gemini as a powerful, agentic backend directly within your favorite AI coding assistant. It brings Gemini's robust features—such as model selection, session management, sandbox controls, and batch processing—straight into your workflow through the standard Model Context Protocol.
## 🌟 Why Use This?

- **Dual-Brain Power:** Use Gemini as a "second brain" inside Claude or Cursor. Delegate heavy file-analysis tasks, ask for a second architectural opinion, or run parallel code reviews.
- **Batch Processing:** Perform mass refactoring, project-wide code transformations, and multi-file edits using Gemini's native tool-use capabilities (`batch-gemini` tool).
- **Autonomous Act → Check → Fix Loop:** Execute a task, automatically verify the result via a shell command (e.g., `npm test`), and self-correct on failure using resume-based retries (`do-act` tool).
- **Session Persistence:** Resume long-running, multi-turn conversations with Gemini across isolated tool calls.
- **Full CLI Parity:** Full support for Gemini models, approval modes (`yolo`, `auto_edit`, `plan`), sandboxing, extensions, skills, and even nested MCP server management.
## 🚀 Features & Tools
This server exposes a comprehensive suite of 16+ tools grouped logically into the following categories:
### Core Execution & Analysis

- `ask-gemini`: Execute a prompt via the Gemini CLI for code analysis and generation. Supports file context, model selection, approval modes, session resuming, and sandbox controls.
- `batch-gemini`: Concurrently execute multiple atomic Gemini tasks in batch mode, ideal for mass refactoring or repetitive code transformations across a codebase.
- `do-act`: Execute a task with verification-driven retry. Performs a change, verifies it with a user-provided shell command, and uses session history to automatically fix issues if the verification fails.
- `review-changes`: Perform code reviews of Git changes (uncommitted changes, branch diffs, specific commits) in read-only mode, with built-in prompt injection protection.
- `brainstorm`: Generate creative ideas, architectural designs, or solutions using structured methodologies such as SCAMPER, Six Thinking Hats, and First Principles.
### Session Management

- `resume-session`: Resume a previous Gemini CLI session using a session ID or index to continue a train of thought.
- `list-sessions`: View and manage (list or delete) active Gemini CLI sessions for the current project.
### Control & Execution

- `abort-gemini`: Instantly kill all running Gemini CLI child processes. Essential for safely aborting long-running tasks or out-of-control loops.
### Extensions & Skills

- `list-extensions`: List all installed Gemini CLI extensions.
- `manage-extensions`: Install, uninstall, update, enable, or disable Gemini CLI extensions.
- `list-skills`: Discover and list available Gemini agent skills.
- `manage-skills`: Install, uninstall, enable, disable, or link Gemini agent skills.
### Administration

- `manage-mcp`: Manage nested MCP servers configured within the Gemini CLI itself (list, add, remove, enable, disable).
### Diagnostics & Utility

- `health`: Check the underlying Gemini CLI health, authentication, and connectivity status.
- `ping`: Simple echo test to verify MCP server responsiveness.
- `version`: Display detailed version and system information (Gemini CLI, Node.js, OS, and MCP server versions).
- `help`: Retrieve standard help documentation directly from the Gemini CLI.
## 💻 Prerequisites

- Node.js >= 18.0.0
- Gemini CLI installed and authenticated (`gemini` must be available on your system `PATH`).
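You can verify both prerequisites from a terminal before configuring any client; a quick sketch (the `gemini` binary name comes from the Prerequisites section above):

```bash
# Confirm Node.js is version 18 or newer
node --version

# Confirm the gemini binary is reachable on PATH
if command -v gemini >/dev/null 2>&1; then
  echo "gemini CLI found at $(command -v gemini)"
else
  echo "gemini CLI not found; install and authenticate it first"
fi
```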
## 📦 Installation

You can use the server immediately via npx, or install it globally via npm:

### Global Install (Recommended)

```bash
npm install -g @trishchuk/gemini-mcp-server
```

### Using NPX

```bash
npx -y @trishchuk/gemini-mcp-server
```

### From Source

```bash
git clone https://github.com/x51xxx/gemini-mcp-server.git
cd gemini-mcp-server
npm install
npm run build
# The compiled executable will be at dist/index.js
node dist/index.js
```

## ⚙️ Configuration
To connect your AI assistant to the Gemini MCP server, update your client's configuration file.
### Claude Desktop

Edit your `claude_desktop_config.json` (typically at `~/Library/Application Support/Claude/claude_desktop_config.json` on macOS or `%APPDATA%\Claude\claude_desktop_config.json` on Windows):

```json
{
  "mcpServers": {
    "gemini": {
      "command": "npx",
      "args": ["-y", "@trishchuk/gemini-mcp-server"]
    }
  }
}
```

If installed globally, you can replace the `npx` command and args with `"command": "gemini-mcp"`.
### Cursor / Windsurf / Generic MCP

In Cursor or Windsurf, navigate to Settings > MCP Settings and add a new server:

```json
{
  "mcpServers": {
    "gemini": {
      "command": "npx",
      "args": ["-y", "@trishchuk/gemini-mcp-server"]
    }
  }
}
```

## ⚡ Quick Start Examples
Once configured, you can prompt your AI assistant (e.g., Claude) to use Gemini tools seamlessly. Here are examples of how to use the most powerful tools:

**Using `ask-gemini`:** "Ask Gemini to analyze my `src/utils/` directory and explain the core logic of `commandExecutor.ts`. Run it using the `gemini-3.1-pro-preview` model."

**Using `do-act`:** "Use the do-act tool to implement a new login route in `app.js`. Use `npm run test` as the verification command. If the tests fail, let Gemini automatically fix the code up to 3 times."

**Using `batch-gemini`:** "Use batch-gemini to add descriptive JSDoc comments to all exported TypeScript interfaces across the entire `src/` directory."

**Using `review-changes`:** "Ask Gemini to review my current uncommitted git changes. Look specifically for security vulnerabilities or performance regressions."

**Using `brainstorm`:** "Use the brainstorm tool to generate 5 distinct architectural approaches for a real-time chat feature, applying the 'Six Thinking Hats' methodology."
## 🌍 Environment Variables

You can configure the server's behavior using the following environment variables:

| Variable | Description |
|----------|-------------|
| `GEMINI_MCP_CWD` | Override the default working directory for all Gemini tool executions. |
| `GEMINI_API_KEY` | Preferred API key for Gemini CLI authentication. |
| `GOOGLE_API_KEY` | Alternative/fallback API key for Google AI services. |
| `GOOGLE_CLOUD_PROJECT` | Google Cloud project ID (required when using Vertex AI). |
| `GOOGLE_GENAI_USE_VERTEXAI` | Set to `true` to use Google Cloud Vertex AI instead of AI Studio. |
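For example, to pin the working directory and supply an API key before launching the server (the values below are placeholders, not real credentials):

```bash
# Placeholder values; substitute your own project path and key.
export GEMINI_MCP_CWD="$HOME/projects/my-app"
export GEMINI_API_KEY="your-api-key-here"

echo "Gemini tools will run in: $GEMINI_MCP_CWD"
```

Many MCP clients also accept an `env` field in the server entry of their config file, which achieves the same effect without touching your shell profile.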
## 🧠 Available Models

By default, the server uses the most capable model available. You can specify a different model via tool arguments based on your performance vs. cost needs:

| Model ID | Description |
|----------|-------------|
| `gemini-3.1-pro-preview` | (Default) Newest, most capable pro model. |
| `gemini-3-pro-preview` | Previous-generation pro model. |
| `gemini-3-flash-preview` | Extremely fast model for high-frequency tasks. |
| `gemini-2.5-pro` | Stable, robust pro model for complex reasoning. |
| `gemini-2.5-flash` | Stable fast model for general tasks. |
| `gemini-2.5-flash-lite` | Highly efficient lightweight model for simple operations. |

(Note: Available models may change as Google releases new CLI versions.)
## 🏗️ Architecture Overview

- **MCP Protocol Layer:** The server communicates with clients (Claude/Cursor) over stdio using the official `@modelcontextprotocol/sdk`.
- **Tool Registry:** Inbound tool execution requests are validated via `zod` schemas and routed to specific handlers in `src/tools/`.
- **Execution Engine:** Handlers use `cross-spawn` to spawn isolated `gemini` child processes.
- **Output Parsing:** Raw stdout/stderr streams from the CLI are captured, parsed, and formatted into clean Markdown and structured JSON before being returned to the MCP client.
- **Session State:** The server leverages the Gemini CLI's native on-disk session history, passing session IDs back and forth to maintain context across disparate MCP turns.
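To make the stdio protocol layer concrete: MCP clients exchange newline-delimited JSON-RPC messages with the server. The sketch below prints the kind of `initialize` request a client writes to the server's stdin; the field values are illustrative and not taken from this project.

```bash
# One JSON-RPC message per line; values here are illustrative.
msg='{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"example-client","version":"0.0.1"}}}'
printf '%s\n' "$msg"
```

In normal use you never craft these messages by hand; the client (Claude Desktop, Cursor, etc.) handles the handshake and tool-call framing for you.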
## 🤝 Contributing

Contributions are welcome! If you want to add new tools, improve output parsing, or fix bugs:

1. Fork the repository.
2. Clone your fork: `git clone https://github.com/<YOUR_USERNAME>/gemini-mcp-server.git`
3. Install dependencies: `npm install`
4. Make your changes in `src/`.
5. Run `npm run build` and `npm run lint` to ensure everything compiles.
6. Submit a Pull Request with a clear description of your changes.
## 📄 License

This project is licensed under the MIT License. See the LICENSE file for details.

Author: Taras Trishchuk [email protected]
