mcp-nano-banana-image
MCP Server for generating images using Google Gemini (Nano Banana) models.
Installation
npm install @modelcontextprotocol/sdk @google/genai zod
npm install -D typescript @types/node
Build
npm run build
Configuration
Environment Variables
| Variable | Description |
|----------|-------------|
| GEMINI_API_KEY | Google AI API Key (recommended) |
| GOOGLE_API_KEY | Alternative API key name |
The SDK automatically reads from GEMINI_API_KEY or GOOGLE_API_KEY.
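If you prefer to wire the key explicitly (for example when embedding the server in another script), the fallback behaves roughly like the sketch below. This is illustrative only and not the package's actual source:
import { GoogleGenAI } from "@google/genai";

// Prefer GEMINI_API_KEY, fall back to GOOGLE_API_KEY (sketch only).
const apiKey = process.env.GEMINI_API_KEY ?? process.env.GOOGLE_API_KEY;
const ai = new GoogleGenAI({ apiKey });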
Claude Desktop Integration
Add to your Claude Desktop config file:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
{
"mcpServers": {
"nano-banana-image": {
"command": "node",
"args": ["/path/to/dist/index.js"],
"env": {
"GEMINI_API_KEY": "your-api-key"
}
}
}
}
Tool: generate_image
Generate images using Google Gemini models.
Input Parameters
| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| prompt | string | Yes | - | Image generation prompt |
| image_path | string | No | - | Reference image path (for image editing) |
| model | string | No | gemini-2.5-flash-image | Gemini model to use |
| output_dir | string | Yes | - | Output directory for generated images |
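Outside of Claude Desktop, a client can also call the tool programmatically over stdio. The following TypeScript sketch uses the MCP TypeScript SDK; the paths, client name, prompt, and API key are placeholders, not values from this package:
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the built server over stdio (path and API key are placeholders).
const transport = new StdioClientTransport({
  command: "node",
  args: ["/path/to/dist/index.js"],
  env: { GEMINI_API_KEY: "your-api-key" },
});

const client = new Client({ name: "example-client", version: "0.0.1" });
await client.connect(transport);

// Call generate_image; image_path and model are optional
// (model defaults to gemini-2.5-flash-image).
const result = await client.callTool({
  name: "generate_image",
  arguments: {
    prompt: "A watercolor painting of a banana",
    output_dir: "/path/to/output",
  },
});
console.log(result.content);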
Output
{
"file_paths": [
"/path/to/output/generated_a1b2c3_1702345678901.png"
],
"usage_metadata": {
"prompt_tokens": 22,
"completion_tokens": 1553,
"total_tokens": 1575,
"thoughts_tokens": 292,
"cache_creation_input_tokens": 0,
"cache_read_input_tokens": 0,
"model": "gemini/gemini-2.5-flash-image"
},
"text_response": "Optional text response from the model"
}
usage_metadata Fields
| Field | Description |
|-------|-------------|
| prompt_tokens | Input token count |
| completion_tokens | Output token count (includes thinking tokens) |
| total_tokens | Total token count |
| thoughts_tokens | Thinking/reasoning token count (Gemini 3 models) |
| cache_creation_input_tokens | Cache creation token count |
| cache_read_input_tokens | Cached content token count |
| model | Model name in LiteLLM format for cost calculation |
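As an example, a client could parse the tool's JSON text output and log token usage before handing the model name to a cost calculator. This is a sketch; resultText and the logging format are placeholders:
// resultText is the JSON payload returned by generate_image (see Output above).
const payload = JSON.parse(resultText);
const { prompt_tokens, completion_tokens, total_tokens, model } = payload.usage_metadata;
console.log(`${model}: ${prompt_tokens} in / ${completion_tokens} out / ${total_tokens} total`);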
Available Models
| Model | Description |
|-------|-------------|
| gemini-2.5-flash-image | Fast image generation (default) |
| gemini-3-pro-image-preview | High quality image generation (preview) |
Testing
npm run build
npx @modelcontextprotocol/inspector node dist/index.js
License
MIT
