@peleke.s/comfyui-mcp v1.0.1

40-tool MCP server for ComfyUI: image generation, ControlNet, IP-Adapter, inpainting, TTS voice cloning, lip-sync video, and AnimateDiff — with smart prompt optimization across 6 model families (Flux, SDXL, SD1.5, Illustrious, Pony, Realistic).
ComfyUI MCP Server

Yes, this image was generated by the tool. Yes, we know AI art is ethically fraught at best and genuinely dangerous at worst. We built this for internal asset generation, not to replace artists. Use responsibly or don't use at all.
When your Mac renders black frames, you learn to distribute.
I wanted Claude to generate images through my local ComfyUI setup. Then I ran SONIC for lip-sync video and got ten seconds of pure black frames. The GPU couldn't keep up.
So I distributed it.
The MCP server runs on Fly.io—stateless, auto-scaling. GPU compute lives on RunPod, pay-per-second. Generated assets go to Supabase with signed URLs. Tailscale meshes it all together securely. What started as "let me generate some images" became a production distributed system because the alternative was a space heater that outputs nothing.
Now Claude can generate images, upscale them, run ControlNet pipelines, do intelligent inpainting/outpainting, transfer styles with IP-Adapter, synthesize speech, and create lip-synced talking head videos—all through natural conversation. 37 tools. 745 tests. No API fees. Full parameter control.
You: "Generate a cyberpunk cityscape at sunset and save it to ./assets/hero.png"
Claude: I'll generate that image for you.
[Calls generate_image with prompt, saves to ./assets/hero.png]
Done! The cyberpunk cityscape has been saved to ./assets/hero.png

Features
- 🎨 Imagine Tool: The ultimate generation tool—describe what you want in natural language and get optimized results with auto-detected model settings
- Smart Prompting: Auto-generates optimized prompts based on your model (Illustrious, Pony, Flux, SDXL, Realistic, SD1.5)
- Pipeline Execution: Chain txt2img → hi-res fix → upscale in a single command
- Quality Presets: From "draft" (fast) to "ultra" (full pipeline with upscaling)
- Text-to-Image: Generate images from text prompts with full parameter control
- Image-to-Image: Transform existing images with AI guidance
- AI Upscaling: Enhance resolution using RealESRGAN and other models
- LoRA Support: Apply custom style and character LoRAs with adjustable weights
- ControlNet: Guide generation with edge detection, depth maps, poses, and more
- IP-Adapter: Transfer style and composition from reference images
- Inpainting: Selectively regenerate masked regions while preserving context
- Outpainting: Extend canvas in any direction with coherent AI generation
- Intelligent Masks: Auto-generate masks using GroundingDINO + SAM segmentation
- Text-to-Speech: Clone voices with F5-TTS from short audio samples
- Lip-Sync Video: Create talking head videos with SONIC
- Portrait Generation: Multi-backend avatar creation (SDXL, Flux GGUF, Flux FP8)
- Model Discovery: List available checkpoints, LoRAs, samplers, and schedulers
- Queue Monitoring: Check generation status and pending jobs
Prerequisites
- ComfyUI running locally
- Node.js 18+
- At least one Stable Diffusion checkpoint model
Required Custom Nodes
Some features require additional ComfyUI custom nodes. Install these via ComfyUI Manager or manually:
| Feature | Required Node | Repository |
|---------|--------------|------------|
| IP-Adapter (style transfer) | ComfyUI_IPAdapter_plus | cubiq/ComfyUI_IPAdapter_plus |
| ControlNet preprocessing | comfyui_controlnet_aux | Fannovel16/comfyui_controlnet_aux |
| Intelligent masks (SAM) | comfyui_segment_anything | storyicon/comfyui_segment_anything |
| Text detection (GroundingDINO) | ComfyUI-GroundingDINO | IDEA-Research/GroundingDINO |
| Lip-sync (SONIC) | ComfyUI-SONIC | smthemex/ComfyUI-SONIC |
| TTS (F5-TTS) | ComfyUI-F5-TTS | chaojie/ComfyUI-F5-TTS |
Quick install with ComfyUI Manager:
- Open ComfyUI web interface
- Click "Manager" button
- Search for node name and install
- Restart ComfyUI
Core features (txt2img, img2img, upscaling, basic ControlNet) work without any custom nodes.
Quick Start
1. Clone and Install
git clone https://github.com/yourusername/comfyui-mcp.git
cd comfyui-mcp
npm install

2. Configure Claude Code
Add to ~/.claude/settings.json (global) or .claude/settings.local.json (project-specific):
{
  "mcpServers": {
    "comfyui": {
      "command": "npx",
      "args": ["-y", "tsx", "/absolute/path/to/comfyui-mcp/src/index.ts"],
      "env": {
        "COMFYUI_URL": "http://localhost:8188",
        "COMFYUI_MODEL": "dreamshaper_8.safetensors"
      }
    }
  }
}

Note: Replace /absolute/path/to/comfyui-mcp with the actual path where you cloned this repo.
3. Start ComfyUI
Ensure ComfyUI is running at the configured URL (default: http://localhost:8188).
4. Restart Claude Code
Important: Claude Code loads MCP servers at startup. You must restart Claude Code (exit and relaunch) after adding the configuration.
5. Generate Images
Start generating:
"Generate a portrait with warm lighting and save it to ./images/portrait.png"

Available Tools
generate_image
Generate an image from a text prompt (txt2img).
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| prompt | string | required | What to generate |
| negative_prompt | string | "bad quality, blurry" | What to avoid |
| width | number | 512 | Image width in pixels |
| height | number | 768 | Image height in pixels |
| steps | number | 28 | Sampling steps |
| cfg_scale | number | 7 | Classifier-free guidance scale |
| sampler | string | "euler_ancestral" | Sampling algorithm |
| scheduler | string | "normal" | Noise scheduler |
| model | string | env default | Checkpoint model name |
| seed | number | random | Random seed for reproducibility |
| loras | array | none | LoRAs to apply (see below) |
| output_path | string | required | Where to save the image |
| upload_to_cloud | bool | true | Upload to cloud storage and return signed URL |
Returns: { success, path, remote_url?, seed, message }
When upload_to_cloud is true and cloud storage is configured, remote_url contains a signed URL valid for 1 hour.
LoRA format:
{
  "name": "style_lora.safetensors",
  "strength_model": 0.8,
  "strength_clip": 0.8
}

img2img
Transform an existing image with AI guidance.
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| prompt | string | required | What to generate |
| input_image | string | required | Filename in ComfyUI input folder |
| denoise | number | 0.75 | 0.0 = no change, 1.0 = full regeneration |
| output_path | string | required | Where to save the result |
| (plus all txt2img params) | | | |
upscale_image
Upscale an image using AI upscaling models.
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| input_image | string | required | Filename in ComfyUI input folder |
| upscale_model | string | "RealESRGAN_x4plus.pth" | Upscaling model |
| target_width | number | native | Optional resize width |
| target_height | number | native | Optional resize height |
| output_path | string | required | Where to save the result |
Discovery Tools
| Tool | Description |
|------|-------------|
| list_models | Available checkpoint models |
| list_loras | Available LoRA adapters |
| list_samplers | Sampling algorithms (euler, dpm++, etc.) |
| list_schedulers | Noise schedulers (normal, karras, etc.) |
| list_upscale_models | Upscaling models (RealESRGAN, etc.) |
| get_queue_status | Running and pending jobs |
🎨 imagine (Recommended!)
The easiest way to generate images. Describe what you want in natural language, and it handles everything: auto-detects your model family, crafts optimized prompts, applies quality presets, and runs the full pipeline.
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| description | string | required | Natural language description of what to generate |
| output_path | string | required | Where to save the final image |
| model | string | env default | Checkpoint model (auto-detected if not set) |
| model_family | string | auto | "illustrious", "pony", "flux", "sdxl", "realistic", "sd15" |
| style | string | none | "anime", "cinematic", "portrait", "landscape", etc. |
| artist_reference | string | none | Artist style, e.g., "studio ghibli" |
| quality | string | "standard" | "draft", "standard", "high", "ultra" |
| loras | array | none | LoRAs to apply |
| seed | number | random | For reproducibility |
| upload_to_cloud | bool | true | Upload to cloud and return signed URL |
Returns: { success, imagePath, remote_url?, seed, prompt, modelFamily, pipelineSteps, settings, message }
Quality presets:
- draft: Fast generation, txt2img only
- standard: Balanced quality (default)
- high: Includes hi-res fix pass
- ultra: Full pipeline with hi-res fix + upscaling
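One plausible way the presets could translate into pipeline flags is a simple lookup, sketched below. The names (planForQuality, hiresFix, upscale) and the step counts are illustrative assumptions, not the server's actual internals:

```typescript
// Hypothetical mapping from quality preset to pipeline plan.
// Step counts and flag names are assumptions for illustration.
type Quality = "draft" | "standard" | "high" | "ultra";

interface PipelinePlan {
  steps: number;      // sampling steps for the base txt2img pass
  hiresFix: boolean;  // add an img2img refinement pass
  upscale: boolean;   // add an AI upscaling step
}

function planForQuality(quality: Quality): PipelinePlan {
  switch (quality) {
    case "draft":
      return { steps: 15, hiresFix: false, upscale: false }; // fast, txt2img only
    case "standard":
      return { steps: 28, hiresFix: false, upscale: false }; // balanced default
    case "high":
      return { steps: 28, hiresFix: true, upscale: false };  // adds hi-res fix
    case "ultra":
      return { steps: 28, hiresFix: true, upscale: true };   // full pipeline
  }
}
```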
execute_pipeline
Run a multi-step generation pipeline: txt2img → hi-res fix → upscale.
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| prompt | string | required | The positive prompt |
| model | string | required | Checkpoint model |
| output_path | string | required | Final output location |
| enable_hires_fix | bool | false | Add img2img refinement pass |
| hires_denoise | number | 0.4 | Denoise for hi-res (0.3-0.5 recommended) |
| enable_upscale | bool | false | Add AI upscaling step |
| (plus all txt2img params) | | | |
craft_prompt
Generate an optimized prompt from a natural description. Useful when you want to see/edit the prompt before generating.
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| description | string | required | Natural language description |
| model_name | string | none | For auto-detection of model family |
| model_family | string | auto | Explicit family override |
| style | string | none | Style preset to apply |
| rating | string | "safe" | Content rating (for Pony models) |
Returns: optimized positive prompt, negative prompt, recommended settings, LoRA suggestions.
get_prompting_guide
Get tips and example prompts for a specific model family.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| model_family | string | yes | "illustrious", "pony", "flux", etc. |
list_prompting_strategies
List all supported model families and their prompting characteristics.
ControlNet Tools
Guide image generation using structural information from reference images.
| Tool | Description |
|------|-------------|
| generate_with_controlnet | Single ControlNet conditioning (canny, depth, pose, etc.) |
| generate_with_multi_controlnet | Combine multiple ControlNet conditions |
| preprocess_control_image | Preview control signal (edge map, skeleton, etc.) |
| generate_with_hidden_image | Embed hidden images using QR Code ControlNet |
| stylize_photo | Transform photos to artistic styles (anime, oil painting) |
| generate_with_pose | Copy exact pose from reference using OpenPose |
| generate_with_composition | Match layout/composition using semantic segmentation |
| list_controlnet_models | List available ControlNet models by type |
Supported control types: canny, depth, openpose, qrcode, scribble, lineart, semantic_seg
IP-Adapter
Transfer style, composition, or character likeness from reference images.
Requires: ComfyUI_IPAdapter_plus custom node
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| prompt | string | required | Text prompt for generation |
| reference_image | string | required | Reference image filename |
| influence | string | "balanced" | "subtle", "balanced", "strong", "dominant" |
| transfer_type | string | "style" | "style", "composition", "face" |
| output_path | string | required | Where to save the result |
Inpainting & Outpainting
Selectively edit regions of images or extend canvas boundaries.
| Tool | Description |
|------|-------------|
| inpaint | Regenerate masked regions (white = regenerate, black = keep) |
| outpaint | Extend canvas in any direction with AI-generated content |
| create_mask | Generate masks using AI segmentation or manual regions |
inpaint parameters:
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| prompt | string | required | What to generate in masked region |
| source_image | string | required | Source image filename |
| mask_image | string | required | Mask image (white = inpaint) |
| denoise_strength | number | 0.75 | 0.0 = no change, 1.0 = full regen |
| output_path | string | required | Where to save result |
outpaint parameters:
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| prompt | string | required | What to generate in extended regions |
| source_image | string | required | Source image filename |
| extend_left/right/top/bottom | number | 0 | Pixels to extend |
| feathering | number | 40 | Edge blending in pixels |
| output_path | string | required | Where to save result |
create_mask parameters:
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| source_image | string | required | Image to create mask from |
| preset | string | none | "hands", "face", "eyes", "body", "background", "foreground" |
| text_prompt | string | none | Custom detection ("red shirt", "the cat") |
| region | object | none | Manual {x, y, width, height} percentages |
| expand_pixels | number | 0 | Grow mask outward |
| feather_pixels | number | 0 | Blur mask edges |
| invert | bool | false | Swap white/black |
| output_path | string | required | Where to save mask |
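To make the manual `region` parameter concrete, here is a small sketch of how a `{x, y, width, height}` percentage rectangle could be rasterized into a binary mask, following the white = regenerate, black = keep convention used by inpaint. The function name and representation are illustrative, not the server's actual code:

```typescript
// Illustrative sketch: turn create_mask's manual `region` (percentages of the
// image) into a flat grayscale mask buffer. 255 = white = inpaint, 0 = black = keep.
interface Region { x: number; y: number; width: number; height: number } // 0-100

function rasterizeRegion(imgW: number, imgH: number, r: Region): Uint8Array {
  const mask = new Uint8Array(imgW * imgH); // zero-initialized: black = keep
  const x0 = Math.floor((r.x / 100) * imgW);
  const y0 = Math.floor((r.y / 100) * imgH);
  const x1 = Math.min(imgW, x0 + Math.ceil((r.width / 100) * imgW));
  const y1 = Math.min(imgH, y0 + Math.ceil((r.height / 100) * imgH));
  for (let y = y0; y < y1; y++) {
    for (let x = x0; x < x1; x++) mask[y * imgW + x] = 255; // white = regenerate
  }
  return mask;
}
```

The `invert` option would then just swap 0 and 255, and `feather_pixels` would blur the hard edge this produces.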
Text-to-Speech (F5-TTS)
Generate speech with voice cloning from short reference audio.
| Tool | Description |
|------|-------------|
| tts_generate | Generate speech from text with cloned voice |
| list_tts_models | Available TTS models |
| list_voices | Available voice samples |
tts_generate parameters:
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| text | string | required | Text to speak |
| voice_reference | string | required | Reference audio for voice cloning |
| voice_reference_text | string | none | Transcript of reference (improves quality) |
| speed | number | 1.0 | Speech speed (0.5-2.0) |
| output_path | string | required | Where to save audio |
Lip-Sync Video
Create talking head videos from portraits and audio.
| Tool | Description |
|------|-------------|
| lipsync_generate | Generate lip-synced video from image + audio |
| talk | Full pipeline: text → TTS → lip-sync → video |
| list_lipsync_models | Available lip-sync models |
| list_avatars | Portrait images in input/avatars/ |
| list_voices_catalog | Voice samples with metadata |
Portrait Generation
Generate AI portraits optimized for lip-sync and avatar use.
| Tool | Description |
|------|-------------|
| create_portrait | Single portrait with style/expression control |
| batch_create_portraits | Generate multiple portraits in batch |
Backend options: sdxl (checkpoints), flux_gguf (quantized), flux_fp8 (full precision)
Health & Diagnostics
| Tool | Description |
|------|-------------|
| check_connection | Full health check with GPU info and latency |
| ping_comfyui | Quick connectivity check |
Usage Examples
Using Imagine (Recommended)
"Imagine a cozy coffee shop with warm lighting and plants,
save to ./assets/coffee_shop.png with high quality"

"Create an anime-style portrait of a warrior princess in a
fantasy setting, style: anime, quality: ultra"

"Generate a professional product photo of a sneaker on white
background using my realistic model, artist reference: apple product photography"

Basic Generation

"Generate a mountain landscape at golden hour, save to ./assets/landscape.png"

With LoRAs

"Create an anime-style portrait using the animeStyle.safetensors LoRA
at 0.8 strength, save to ./output/anime_portrait.png"

Image Transformation

"Take the sketch in ComfyUI's input folder called sketch.png and turn
it into a detailed illustration with 0.7 denoise"

Upscaling

"Upscale hero.png to 4K using RealESRGAN"

Batch Workflow

"Generate 3 variations of a forest scene with different lighting:
1. Misty morning
2. Harsh noon sun
3. Sunset through trees
Save them to ./scenes/forest_*.png"

Architecture
┌─────────────┐    MCP Protocol     ┌─────────────┐     REST/WS     ┌──────────┐
│   Claude    │ ◄─────────────────► │ MCP Server  │ ◄─────────────► │ ComfyUI  │
│  (Client)   │     Tool Calls      │  (Bridge)   │    Workflows    │  (API)   │
└─────────────┘                     └─────────────┘                 └──────────┘

The MCP server:
- Exposes tools to the AI client
- Receives requests with parameters
- Builds ComfyUI workflow JSON
- Queues workflows via REST API
- Monitors progress via WebSocket
- Retrieves and saves generated images
How ComfyUI Workflows Work
ComfyUI represents image generation as a graph of nodes. Each node performs an operation:
CheckpointLoader → CLIPTextEncode → KSampler → VAEDecode → SaveImage
       ↓                               ↑
  LoraLoader(s) ───────────────────────┘

Our server dynamically constructs these graphs based on your parameters. When you specify LoRAs, we inject LoraLoader nodes into the chain. The workflow is submitted as JSON to ComfyUI's /prompt endpoint.
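A simplified sketch of that construction is below. The node ids, the two-element `[node_id, output_index]` references, and class types like `CheckpointLoaderSimple` and `LoraLoader` follow ComfyUI's /prompt JSON format, but this is an illustration of the splicing technique, not the server's actual builder (which also wires negative prompts, latents, seeds, and the VAE/save nodes):

```typescript
// Sketch: build a partial txt2img graph, splicing in one LoraLoader per LoRA
// so the sampler consumes the end of the model/clip chain.
interface Lora { name: string; strength_model: number; strength_clip: number }
type Graph = Record<string, { class_type: string; inputs: Record<string, unknown> }>;

function buildTxt2Img(model: string, prompt: string, loras: Lora[] = []): Graph {
  const g: Graph = {
    "1": { class_type: "CheckpointLoaderSimple", inputs: { ckpt_name: model } },
  };
  // [node_id, output_index] references, starting at the checkpoint loader
  let modelRef: [string, number] = ["1", 0];
  let clipRef: [string, number] = ["1", 1];
  loras.forEach((lora, i) => {
    const id = `lora_${i}`;
    g[id] = {
      class_type: "LoraLoader",
      inputs: {
        lora_name: lora.name,
        strength_model: lora.strength_model,
        strength_clip: lora.strength_clip,
        model: modelRef, // consume the previous end of the chain...
        clip: clipRef,
      },
    };
    modelRef = [id, 0]; // ...and become the new end of it
    clipRef = [id, 1];
  });
  g["pos"] = { class_type: "CLIPTextEncode", inputs: { text: prompt, clip: clipRef } };
  g["sampler"] = {
    class_type: "KSampler",
    inputs: { model: modelRef, positive: ["pos", 0] /* …negative, latent, seed… */ },
  };
  return g; // submitted as {"prompt": g} to ComfyUI's /prompt endpoint
}
```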
Why This Architecture
Some decisions here didn't come from the AI—they came from knowing what breaks at scale:
- Distributed rate limiting (Upstash Redis): In-memory rate limiters fail when you have multiple Fly.io instances. Sliding window algorithm, tier-based limits, per-API-key with IP fallback.
- Quirrel for long-running jobs: Fly.io has connection limits (25 hard, 20 soft). Portrait generation, TTS, and lipsync run async through job queues. Otherwise HTTP timeouts kill requests.
- Storage abstraction layer: Single interface, three implementations (Supabase, GCP, local). Swap providers with an env var. No vendor lock-in.
- Tailscale mesh: RunPod URLs are public. Tailscale adds a security layer between Fly.io and the GPU node.
- Strategy pattern for prompting: Illustrious wants tags, Flux wants natural language, Pony needs score tags. Six model families, six strategies, auto-detected from checkpoint name. The alternative was unmaintainable spaghetti.
The AI suggested "use Playwright locally" for browser automation. That's not remotely plausible in a distributed context. Knowing when to distribute matters.
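The strategy pattern mentioned above can be sketched in a few lines. The family keywords and prompt shapes here are illustrative assumptions, not the actual detection rules in src/prompting/model-detection.ts:

```typescript
// Minimal sketch of per-family prompting strategies, auto-detected
// from the checkpoint filename. Keyword lists are assumptions.
interface PromptStrategy {
  buildPrompt(description: string): string;
}

const tagStrategy: PromptStrategy = {
  // Tag-based families (e.g. Illustrious, Pony) prefer comma-separated tags
  buildPrompt: (d) =>
    `masterpiece, best quality, ${d.toLowerCase().split(" ").join(", ")}`,
};

const naturalStrategy: PromptStrategy = {
  // Flux-style models prefer plain natural-language descriptions
  buildPrompt: (d) => d,
};

function detectStrategy(checkpoint: string): PromptStrategy {
  const name = checkpoint.toLowerCase();
  if (name.includes("flux")) return naturalStrategy;
  if (name.includes("illustrious") || name.includes("pony")) return tagStrategy;
  return naturalStrategy; // fallback for unknown families
}
```

Adding a new family then means adding one strategy object and one detection rule, rather than threading conditionals through every tool.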
Environment Variables
Core
| Variable | Default | Description |
|----------|---------|-------------|
| COMFYUI_URL | http://localhost:8188 | ComfyUI API endpoint |
| COMFYUI_MODEL | (none) | Default checkpoint model |
| COMFYUI_OUTPUT_DIR | /tmp/comfyui-output | Fallback output directory |
Cloud Storage (Optional)
| Variable | Default | Description |
|----------|---------|-------------|
| STORAGE_PROVIDER | (none) | Set to "supabase" to enable cloud uploads |
| SUPABASE_URL | (none) | Your Supabase project URL |
| SUPABASE_SECRET_KEY | (none) | Supabase service role key (sb_secret_...) |
| SUPABASE_BUCKET | generated-assets | Storage bucket name |
HTTP Server
| Variable | Default | Description |
|----------|---------|-------------|
| PORT | 3001 | HTTP server port |
| NODE_ENV | development | Set to "production" for Fly.io |
Project Structure
comfyui-mcp/
├── src/
│ ├── index.ts # MCP server entry point (37 tools)
│ ├── comfyui-client.ts # ComfyUI REST/WebSocket client
│ ├── workflows/
│ │ ├── txt2img.json # Text-to-image template
│ │ ├── img2img.json # Image-to-image template
│ │ ├── upscale.json # Upscaling template
│ │ ├── inpaint.json # Inpainting template
│ │ ├── outpaint.json # Outpainting template
│ │ └── builder.ts # Workflow parameterization & LoRA injection
│ ├── prompting/ # Smart prompt generation system
│ │ ├── generator.ts # Main PromptGenerator class
│ │ ├── model-detection.ts # Auto-detect model family
│ │ └── strategies/ # Per-model prompting strategies
│ ├── storage/ # Cloud storage abstraction
│ │ ├── index.ts # Provider factory
│ │ └── supabase.ts # Supabase implementation
│ └── tools/
│ ├── imagine.ts # 🎨 Main generation tool
│ ├── pipeline.ts # Multi-step pipeline executor
│ ├── controlnet.ts # ControlNet tools
│ ├── ipadapter.ts # IP-Adapter style transfer
│ ├── inpaint.ts # Inpaint/outpaint/mask tools
│ ├── tts.ts # Text-to-speech (F5-TTS)
│ ├── lipsync.ts # Lip-sync video generation
│ ├── avatar.ts # Portrait generation
│ └── health.ts # Connection diagnostics
├── deploy/ # RunPod deployment
│ ├── serverless/ # Serverless handler
│ ├── terraform/ # Infrastructure as code
│ └── scripts/ # Deployment utilities
├── buildlog/ # Development journal
├── .github/workflows/ # CI/CD pipelines
├── vitest.config.ts # Test configuration
├── BUILD_JOURNAL.md # Feature narratives
└── README.md

Development
# Install dependencies
npm install
# Run in development mode
npm run dev
# Build for production
npm run build
# Run production build
npm start
# Run tests
npm test
# Run tests in watch mode
npm run test:watch
# Run tests with coverage
npm run test:coverage

745 tests covering all tools, prompting strategies, storage providers, and pipeline execution.
Troubleshooting
Tools not showing in Claude Code
- Ensure you've restarted Claude Code after adding the MCP configuration
- Check that the path to src/index.ts is absolute and correct
- Verify the server starts manually: npx tsx /path/to/src/index.ts (should print "ComfyUI MCP server running on stdio")
- Try killing all Claude Code instances and restarting fresh
"Connection refused"
Ensure ComfyUI is running at the configured COMFYUI_URL.
"Model not found"
Run list_models to see available checkpoints. Model names must match exactly, including file extension.
"No image in output"
Check ComfyUI's web interface for workflow errors. The queued prompt may have failed due to missing nodes or invalid parameters.
"Node does not exist" errors
Some features require custom nodes. If you see errors like IPAdapterModelLoader does not exist or DWPreprocessor does not exist, install the required custom nodes (see Prerequisites section).
Slow generation
Generation time depends on hardware, model size, and step count. Reduce steps for faster drafts.
LoRA not applying
Verify the LoRA filename with list_loras. Ensure strength values are reasonable (0.5-1.2 typically).
Extending
The codebase is designed for extension:
- Video Generation: ComfyUI supports AnimateDiff—same workflow pattern applies
- Custom nodes: Any ComfyUI custom node can be integrated into workflow templates
- New model families: Add prompting strategies in src/prompting/strategies/
- Additional backends: Extend portrait generation with new model backends
Cloud Deployment
Architecture Overview
flowchart TB
subgraph Client["Client Layer"]
CC[Claude Code<br/>MCP Client]
WEB[Web App<br/>landline-landing]
end
subgraph Service["Service Layer (Fly.io)"]
MCP[comfyui-mcp<br/>MCP Server]
HTTP[HTTP Server<br/>REST API]
end
subgraph GPU["GPU Layer (RunPod)"]
COMFY[ComfyUI<br/>RTX 4090]
end
subgraph Storage["Storage Layer (Supabase)"]
BUCKET[Storage Bucket<br/>generated-assets]
SIGNED[Signed URLs<br/>1hr expiry]
end
CC -->|MCP Protocol| MCP
WEB -->|HTTPS REST| HTTP
MCP -->|Proxy URL| COMFY
HTTP -->|Proxy URL| COMFY
MCP -->|Upload| BUCKET
HTTP -->|Upload| BUCKET
BUCKET -->|Generate| SIGNED
SIGNED -->|View in Browser| WEB
SIGNED -->|View in Browser| CC

How It Works
- Client Request: Claude Code (via MCP) or web apps (via HTTP) send generation requests
- Service Processing: The service builds ComfyUI workflows and queues them
- GPU Execution: RunPod executes the workflow on a rented GPU (RTX 4090)
- Cloud Storage: Results are uploaded to Supabase Storage
- Signed URLs: A 1-hour signed URL is returned for secure browser viewing
RunPod Setup
Don't have a local GPU? Run ComfyUI on RunPod and connect remotely.
1. Create a RunPod pod with the PyTorch template
2. SSH in and run:
   curl -fsSL https://raw.githubusercontent.com/YOUR_REPO/main/deploy/quick-deploy.sh | bash -s -- --dreamshaper
3. Get your pod URL: https://<POD_ID>-8188.proxy.runpod.net
4. Configure locally:
   ./deploy/scripts/configure-local.sh https://<POD_ID>-8188.proxy.runpod.net

See deploy/README.md for detailed instructions.
Cloud Storage (Supabase)
Generated images can be automatically uploaded to Supabase Storage with signed URLs for secure sharing.
Setup:
1. Create a Supabase project
2. Create a storage bucket named generated-assets
3. Add the RLS policy: true (or auth.role() = 'service_role' for restricted access)
4. Set environment variables (see below)

Usage:
- MCP tools (imagine, generate_image) include upload_to_cloud: true by default
- Results include a remote_url field with a 1-hour signed URL
- URLs can be opened directly in browsers for viewing
HTTP Server (Web Integration)
For web apps that can't use MCP directly, an HTTP server is available:
# Start the HTTP server
npm run serve
# or
node dist/http-server.js

Endpoints:
| Endpoint | Method | Description |
|----------|--------|-------------|
| /health | GET | Health check with GPU info |
| /imagine | POST | Natural language image generation |
| /image | POST | Direct txt2img generation |
| /portrait | POST | AI portrait generation |
| /tts | POST | Text-to-speech with F5-TTS |
| /lipsync | POST | Video lip-sync with MuseTalk |
Example:
curl -X POST https://your-service.fly.dev/imagine \
-H "Content-Type: application/json" \
-d '{
"description": "A cyberpunk cityscape at sunset",
"output_path": "/tmp/cyberpunk.png"
}'
# Response:
{
"success": true,
"imagePath": "/tmp/cyberpunk.png",
"signedUrl": "https://xxx.supabase.co/storage/v1/object/sign/...",
"seed": 1234567890,
"message": "✨ Image generated successfully!"
}

Fly.io Deployment
Deploy the HTTP server to Fly.io for production web integration:
# Install flyctl
curl -L https://fly.io/install.sh | sh
# Deploy
fly launch
fly secrets set \
COMFYUI_URL="https://<POD_ID>-8188.proxy.runpod.net" \
COMFYUI_MODEL="novaFurryXL.safetensors" \
SUPABASE_URL="https://xxx.supabase.co" \
SUPABASE_SECRET_KEY="sb_secret_xxx" \
SUPABASE_BUCKET="generated-assets" \
STORAGE_PROVIDER="supabase"
fly deploy

Related
- ComfyUI - The backend we're wrapping
- MCP SDK - The protocol implementation
- Claude Code - Primary MCP client
License
MIT
Full Tutorial: See ARTICLE.md for a detailed walkthrough of building this server from scratch, including architecture decisions and implementation details.
