# adbrand-mcp

**v0.1.2**

Adbrand MCP - AI-powered creative tools for advertising. An MCP (Model Context Protocol) server that provides AI-powered image and video generation using FAL's advanced models.
## Features

- **Image-to-Image Transformation**: Transform existing images
  - Flux i2i models (dev, kontext, general)
  - Style transfer and modifications
  - Adjustable transformation strength (0-1)
- **Text-to-Video Generation**: Generate videos from text prompts
  - Kling 2.1 Master: Premium video generation with fluid motion
  - Vidu Q1: High-quality text-to-video generation
  - Configurable duration (1-30 seconds) and aspect ratios
- **Image-to-Video Transformation**: Animate static images
  - Kling 2.1 Master I2V: Premium image animation
  - Vidu I2V: Standard image-to-video conversion
  - Motion control with prompt guidance and negative prompts
- **Image Enhancement**: Upscale and enhance images with AI
  - ESRGAN: High-quality image upscaling (2x-4x)
- **Flexible Options**:
  - Multiple image sizes: square, portrait (4:3, 16:9), landscape (4:3, 16:9)
  - Batch generation: 1-4 images per request
  - Optional local download with timestamped filenames
  - Customizable output format (PNG/JPEG for images, MP4 for videos)
  - Local file support: automatically uploads local images to FAL storage
- **Model Management**:
  - List all available models by category
  - Easy model configuration via JSON
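As an illustration of the timestamped-filename option above, a download name could be derived along these lines. This is a sketch only: the real logic lives in `src/lib/download.ts`, and the helper's name and exact format here are assumptions.

```typescript
// Hypothetical sketch of building a timestamped filename for a downloaded
// asset. The actual helper in src/lib/download.ts may differ; the name and
// format are illustrative assumptions.
function timestampedFilename(base: string, ext: string): string {
  // e.g. "2024-01-31T12-00-00-000Z" — ISO timestamp with characters that are
  // unsafe in filenames replaced by dashes
  const stamp = new Date().toISOString().replace(/[:.]/g, "-");
  return `${base}-${stamp}.${ext}`;
}

console.log(timestampedFilename("image", "png"));
```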
## Prerequisites

- Node.js 18+ or Bun
- FAL API key (get one at fal.ai)
## Installation

### As a package

```bash
npm install adbrand-mcp
# or
bun add adbrand-mcp
```

### For development

```bash
git clone https://github.com/skunc-ai/adbrand-mcp.git
cd adbrand-mcp
bun install
```

## Project Structure
```
src/
├── api/              # API implementations
│   ├── text2img/     # Text-to-image generation
│   ├── img2img/      # Image-to-image transformation
│   ├── text2video/   # Text-to-video generation
│   └── img2video/    # Image-to-video transformation
├── config/           # Configuration files
│   ├── index.ts      # Config loader
│   └── models.json   # Model definitions
├── lib/              # Utility libraries
│   └── download.ts   # File download utilities
└── mcp-server.ts     # Main MCP server
```

## Setup
### Claude Desktop Configuration

1. Open the Claude Desktop config file:
   - Mac: `~/Library/Application Support/Claude/claude_desktop_config.json`
   - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
2. Add the server configuration:
```json
{
  "mcpServers": {
    "adbrand-mcp": {
      "command": "npx",
      "args": ["adbrand-mcp"],
      "env": {
        "FAL_API_KEY": "your-fal-api-key"
      }
    }
  }
}
```

For local development:
```json
{
  "mcpServers": {
    "adbrand-mcp-dev": {
      "command": "node",
      "args": [
        "/path/to/adbrand-mcp/node_modules/.bin/tsx",
        "/path/to/adbrand-mcp/src/mcp-server.ts"
      ],
      "env": {
        "FAL_API_KEY": "your-fal-api-key"
      }
    }
  }
}
```

3. Restart Claude Desktop
## Available Tools

### text_to_image

Generate AI images from text prompts.

Parameters:

- `prompt` (required): Text prompt for image generation
- `model`: Model endpoint to use (e.g., `fal-ai/flux/schnell`)
- `imageSize`: Image size (`square`, `portrait_4_3`, `portrait_16_9`, `landscape_4_3`, `landscape_16_9`)
- `numImages`: Number of images to generate (1-4)
- `downloadImages`: Download generated images to local disk (true/false)
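For reference, an MCP client invokes a tool with a JSON-RPC `tools/call` request. Here is a sketch of the payload shape: the envelope follows the MCP specification, and the argument names mirror the parameter list above; the specific values are illustrative.

```typescript
// Sketch of a JSON-RPC `tools/call` request for text_to_image. The envelope
// follows the MCP spec; the argument names come from the parameter list above,
// and the values are examples only.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "text_to_image",
    arguments: {
      prompt: "a beautiful sunset",
      model: "fal-ai/flux/schnell",
      imageSize: "landscape_16_9",
      numImages: 2,
      downloadImages: false,
    },
  },
};

console.log(JSON.stringify(request));
```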
### list_models

List all available image generation models organized by category.

Parameters: none.
### image_to_image

Transform an existing image using AI models.

Parameters:

- `imageUrl` (required): URL or local file path of the input image to transform
- `prompt` (required): Text prompt describing the desired transformation
- `model`: Model endpoint to use (e.g., `fal-ai/flux/dev/image-to-image`)
- `strength`: Transformation strength, 0-1 (default: 0.8)
- `downloadImage`: Download the transformed image to local disk (true/false)
### text_to_video

Generate AI videos from text prompts using Kling 2.1 or Vidu.

Parameters:

- `prompt` (required): Text prompt for video generation
- `model`: Model endpoint to use (e.g., `fal-ai/kling-video/v2.1/master/text-to-video`)
- `duration`: Video duration in seconds (1-30, default: 5)
- `aspectRatio`: Video aspect ratio (`16:9`, `9:16`, `1:1`, `4:3`, `3:4`; default: `16:9`)
- `downloadVideo`: Download the generated video to local disk (true/false)
### image_to_video

Transform an image into a video using Kling 2.1 or Vidu.

Parameters:

- `imageUrl` (required): URL or local file path of the input image
- `prompt` (required): Motion description prompt
- `model`: Model endpoint to use (e.g., `fal-ai/kling-video/v2.1/master/image-to-video`)
- `duration`: Video duration in seconds (5 or 10, default: 5)
- `aspectRatio`: Video aspect ratio (`16:9`, `9:16`, `1:1`; default: `16:9`)
- `negativePrompt`: What to avoid in the video
- `cfgScale`: How closely to follow the prompt (0-1, default: 0.5)
- `downloadVideo`: Download the generated video to local disk (true/false)
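Because `duration` here only accepts 5 or 10 and `cfgScale` is bounded, a server could validate these arguments along the following lines. This is purely illustrative; the package's actual validation code is not shown in this README, and the function name is an assumption.

```typescript
// Illustrative validation of image_to_video arguments, based on the parameter
// constraints listed above (duration: 5 or 10; cfgScale: 0-1). The package's
// real validation may differ; this function is a hypothetical sketch.
function validateImageToVideoArgs(args: { duration?: number; cfgScale?: number }) {
  const duration = args.duration ?? 5;
  if (duration !== 5 && duration !== 10) {
    throw new Error("duration must be 5 or 10 seconds");
  }
  const cfgScale = args.cfgScale ?? 0.5;
  if (cfgScale < 0 || cfgScale > 1) {
    throw new Error("cfgScale must be between 0 and 1");
  }
  return { duration, cfgScale };
}
```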
### enhance_image

Enhance and upscale images using AI models.

Parameters:

- `imageUrl` (required): URL or local file path of the image to enhance
- `model`: Enhancement model to use (e.g., `fal-ai/aurasr`)
- `scale`: Upscale factor (2-4, default: 4)
- `downloadImage`: Download the enhanced image to local disk (true/false)
## Configuration

### Model Configuration

Model definitions are stored in `src/config/models.json`. You can customize the available models by editing this file:
```json
{
  "text2img": {
    "flux": {
      "dev": "fal-ai/flux/dev",
      "pro": "fal-ai/flux-pro",
      "kontext": "fal-ai/flux-pro/kontext/text-to-image"
    },
    "imagen": "fal-ai/imagen4/preview"
  },
  "img2img": {
    "flux": {
      "dev": "fal-ai/flux/dev/image-to-image",
      "kontext": "fal-ai/flux-pro/kontext/image-to-image",
      "general": "fal-ai/flux-general/image-to-image"
    }
  },
  "text2video": {
    "kling": {
      "master": "fal-ai/kling-video/v2.1/master/text-to-video"
    },
    "vidu": {
      "q1": "fal-ai/vidu/q1/text-to-video"
    }
  },
  "img2video": {
    "kling": {
      "master": "fal-ai/kling-video/v2.1/master/image-to-video"
    },
    "vidu": {
      "standard": "fal-ai/vidu/image-to-video"
    }
  }
}
```

### Environment Variables

- `FAL_API_KEY`: Your FAL API key (required)
- `DOWNLOAD_PATH`: Custom download directory (optional, defaults to `~/Downloads/adbrand-mcp`)
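A sketch of how these variables might be resolved is below. The actual loader in `src/config/index.ts` is not reproduced here; only the variable names and the documented default path are taken from this README, and the function name is an assumption.

```typescript
import { homedir } from "node:os";
import { join } from "node:path";

// Illustrative resolution of the documented environment variables. The real
// loader in src/config/index.ts may differ; this is a sketch under the
// assumption that missing FAL_API_KEY is a hard error and DOWNLOAD_PATH
// falls back to the documented default.
function resolveConfig(env: Record<string, string | undefined>) {
  const apiKey = env.FAL_API_KEY;
  if (!apiKey) throw new Error("FAL_API_KEY is required");
  const downloadPath =
    env.DOWNLOAD_PATH ?? join(homedir(), "Downloads", "adbrand-mcp");
  return { apiKey, downloadPath };
}
```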
## Development

### API Usage Examples

Each API follows a consistent pattern:
```typescript
// Text to Image
import { textToImage } from './api/text2img';

const result = await textToImage(
  'fal-ai/flux/dev',                 // model endpoint
  { prompt: 'a beautiful sunset' },  // options
  apiKey,                            // FAL API key
  true                               // download images
);
```

```typescript
// Image to Image
import { imageToImage } from './api/img2img';

const result = await imageToImage(
  'fal-ai/flux/dev/image-to-image',
  {
    imageUrl: 'https://example.com/image.jpg',
    prompt: 'transform to watercolor',
    strength: 0.8
  },
  apiKey,
  true
);
```

### Building and Testing
```bash
# Install dependencies
bun install

# Run type checking
bun run typecheck

# Start the MCP server
bun run start

# Run tests
bun test
```

### Testing with Claude Desktop
1. Use the local development configuration above
2. Restart Claude Desktop
3. Test commands:
- "List available models"
- "Generate a text to image of a sunset"
- "Transform this image into a watercolor painting" (with an image URL)
- "Generate a video of a cat playing with a ball"
- "Animate this image with gentle motion" (with an image URL)
Local file examples:
- "Transform /Users/me/photo.jpg into anime style"
- "Enhance /path/to/image.png with 4x upscaling"
- "Create a video from ~/Desktop/portrait.jpg with subtle zoom"
### Manual Testing

```bash
# Set API key
export FAL_API_KEY="your-api-key"

# Start server
bun run start

# In another terminal, send commands:
echo '{"jsonrpc":"2.0","id":1,"method":"tools/list"}' | bun run start
```

## Troubleshooting
- "FAL_KEY is required" error: Make sure
FAL_API_KEYis set in your environment or MCP config - Images not downloading: Check write permissions for the download directory
- Model not found: Verify the model name in
src/config/models.json
## License

MIT
