mcp-prompt-optimizer
v3.0.1
MCP Prompt Optimizer v3.0.0
🚀 Professional cloud-based MCP server for AI-powered prompt optimization with intelligent context detection, template management, team collaboration, enterprise-grade features, and optional personal model configuration. Starting at $2.99/month.
⚠️ v3.0.0 Breaking Changes: API key is now REQUIRED for all operations. Development mode and offline mode have been removed for security.
✨ Key Features
🧠 AI Context Detection - Automatically detects and optimizes for image generation, LLM interaction, technical automation
📁 Template Management - Auto-save high-confidence optimizations, search & reuse patterns
👥 Team Collaboration - Shared quotas, team templates, role-based access
📊 Real-time Analytics - Confidence scoring, usage tracking, optimization insights (Note: Advanced features like Bayesian Optimization and AG-UI are configurable and may provide mock data if disabled in the backend)
☁️ Cloud Processing - Always up-to-date AI models, no local setup required
🎛️ Personal Model Choice - Use your own OpenRouter models via WebUI configuration
🔧 Universal MCP - Works with Claude Desktop, Cursor, Windsurf, Cline, VS Code, Zed, Replit
🚀 Quick Start
1. Get your API key (REQUIRED):
⚠️ Important: An API key is REQUIRED to use this package. Choose your tier:
- 🆓 FREE Tier (sk-local-*): 5 daily optimizations - Get started at https://promptoptimizer-blog.vercel.app/pricing
- ⭐ Paid Tiers (sk-opt-*, sk-team-*): More optimizations, team features, advanced capabilities
2. Install the MCP server:
npm install -g mcp-prompt-optimizer
3. Configure Claude Desktop:
Add to your ~/.claude/claude_desktop_config.json:
{
"mcpServers": {
"prompt-optimizer": {
"command": "npx",
"args": ["mcp-prompt-optimizer"],
"env": {
"OPTIMIZER_API_KEY": "sk-local-your-key-here" // REQUIRED: Use your API key here
}
}
}
}
Note: All API keys are validated against our backend server. Internet connection required (brief caching for reliability).
4. Restart Claude Desktop and start optimizing with AI context awareness!
5. (Optional) Configure custom models - See Advanced Model Configuration below
🎛️ Advanced Model Configuration (Optional)
WebUI Model Selection & Personal OpenRouter Keys
Want to use your own AI models? Configure them in the WebUI first, then the NPM package automatically uses your settings!
Step 1: Configure in WebUI
- Visit Dashboard: https://promptoptimizer-blog.vercel.app/dashboard
- Go to Settings → User Settings
- Add OpenRouter API Key: Get one from OpenRouter.ai
- Select Your Models:
  - Optimization Model: e.g., anthropic/claude-3-5-sonnet (for prompt optimization)
  - Evaluation Model: e.g., google/gemini-pro-1.5 (for quality assessment)
Step 2: Use NPM Package
Your configured models are automatically used by the MCP server - no additional setup needed!
{
"mcpServers": {
"prompt-optimizer": {
"command": "npx",
"args": ["mcp-prompt-optimizer"],
"env": {
"OPTIMIZER_API_KEY": "sk-opt-your-key-here" // Your service API key
}
}
}
}
Model Selection Priority
1. 🎯 Your WebUI-configured models (highest priority)
2. 🔧 Request-specific model (if specified)
3. ⚙️ System defaults (fallback)
Benefits of Personal Model Configuration
✅ Cost Control - Pay for your own OpenRouter usage
✅ Model Choice - Access 100+ models (Claude, GPT-4, Gemini, Llama, etc.)
✅ Performance - Choose faster or more capable models
✅ Consistency - Same models across WebUI and MCP tools
✅ Privacy - Your data goes through your OpenRouter account
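The selection order listed under "Model Selection Priority" above can be sketched as a simple fallback chain. This is an illustration only, not the actual server code; the function name and parameter shape are hypothetical.

```javascript
// Illustrative sketch of the documented priority order (not the real
// implementation): WebUI-configured model first, then a request-specific
// model, then the system default.
function resolveModel({ webuiModel = null, requestModel = null, systemDefault }) {
  return webuiModel ?? requestModel ?? systemDefault;
}
```

For example, a request that specifies `openai/gpt-4o-mini` is still overridden by a model configured in the WebUI, because WebUI settings have the highest priority.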
Example Model Recommendations
For Creative/Complex Prompts:
- Optimization: anthropic/claude-3-5-sonnet
- Evaluation: google/gemini-pro-1.5
For Fast/Simple Optimizations:
- Optimization: openai/gpt-4o-mini
- Evaluation: openai/gpt-3.5-turbo
For Technical/Code Prompts:
- Optimization: anthropic/claude-3-5-sonnet
- Evaluation: anthropic/claude-3-haiku
Important Notes
🔑 Two Different API Keys:
- Service API Key (sk-opt-*): For the MCP service subscription
- OpenRouter API Key: For your personal model usage (configured in WebUI)
💰 Cost Structure:
- Service subscription: Monthly fee for optimization features
- OpenRouter usage: Pay-per-token for your chosen models
🔄 No NPM Package Changes Needed: When you update models in WebUI, the NPM package automatically uses the new settings!
💰 Cloud Subscription Plans
All plans include the same sophisticated AI optimization quality
🎯 Explorer - $2.99/month
- 5,000 optimizations per month
- Individual use (1 user, 1 API key)
- Full AI features - context detection, template management, insights
- Personal model configuration via WebUI
- Community support
🎨 Creator - $25.99/month ⭐ Popular
- 18,000 optimizations per month
- Team features (2 members, 3 API keys)
- Full AI features - context detection, template management, insights
- Personal model configuration via WebUI
- Priority processing + email support
🚀 Innovator - $69.99/month
- 75,000 optimizations per month
- Large teams (5 members, 10 API keys)
- Full AI features - context detection, template management, insights
- Personal model configuration via WebUI
- Advanced analytics + priority support + dedicated support channel
🆓 Free Trial: 5 optimizations with full feature access
🧠 AI Context Detection & Enhancement
The server automatically detects your prompt type and enhances optimization goals:
🎨 Image Generation Context
Detected patterns: --ar, --v, midjourney, dall-e, photorealistic, 4k
Input: "A beautiful landscape --ar 16:9 --v 6"
✅ Enhanced goals: parameter_preservation, keyword_density, technical_precision
✅ Preserves technical parameters (--ar, --v, etc.)
✅ Optimizes quality keywords and visual descriptors
🤖 LLM Interaction Context
Detected patterns: analyze, explain, evaluate, summary, research, paper, analysis, interpret, discussion, assessment, compare, contrast
Input: "Analyze the pros and cons of this research paper and provide a comprehensive evaluation"
✅ Enhanced goals: context_specificity, token_efficiency, actionability
✅ Improves role clarity and instruction precision
✅ Optimizes for better AI understanding
💻 Code Generation Context
Detected patterns: def, function, code, python, javascript, java, c++, return, import, class, for, while, if, else, elif
Input: "def fibonacci(n): return n if n <= 1 else fibonacci(n-1) + fibonacci(n-2)"
✅ Enhanced goals: technical_accuracy, parameter_preservation, precision
✅ Protects code elements and technical syntax
✅ Enhances technical precision and clarity
⚙️ Technical Automation Context
Detected patterns: automate, script, api
Input: "Create a script to automate deployment process"
✅ Enhanced goals: technical_accuracy, parameter_preservation, precision
✅ Protects code elements and technical syntax
✅ Enhances technical precision and clarity
💬 Human Communication Context (Default)
All other prompts get standard optimization for human readability and clarity.
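The routing described above can be approximated by keyword matching. The sketch below is illustrative only; the hosted service performs more advanced server-side analysis, and the pattern lists are the subsets quoted in this README.

```javascript
// Illustrative keyword-based sketch of context routing. Checked in order;
// the first matching pattern set wins, otherwise the default context applies.
const CONTEXT_PATTERNS = [
  ["image_generation", ["--ar", "--v", "midjourney", "dall-e", "photorealistic", "4k"]],
  ["code_generation", ["def ", "function", "import ", "class ", "return "]],
  ["technical_automation", ["automate", "script", "api"]],
  ["llm_interaction", ["analyze", "explain", "evaluate", "summary", "research"]],
];

function detectAiContext(prompt) {
  const lower = prompt.toLowerCase();
  for (const [context, patterns] of CONTEXT_PATTERNS) {
    if (patterns.some((p) => lower.includes(p))) return context;
  }
  return "human_communication"; // default context
}
```

For instance, "A beautiful landscape --ar 16:9 --v 6" routes to image generation because of the `--ar` parameter, while a plain conversational prompt falls through to the human-communication default.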
📊 Enhanced Optimization Features
Professional Optimization (All Users)
🎯 Optimized Prompt
Create a comprehensive technical blog post about artificial intelligence that systematically explores current real-world applications, evidence-based benefits, existing limitations and challenges, and data-driven future implications for businesses and society.
Confidence: 87.3%
Plan: Creator
AI Context: Human Communication
Goals Enhanced: Yes (clarity → clarity, specificity, actionability)
🧠 AI Context Benefits Applied
- ✅ Standard optimization rules applied
- ✅ Human communication optimized
✅ Auto-saved as template (ID: tmp_abc123)
*High-confidence optimization automatically saved for future use*
📋 Similar Templates Found
1. AI Article Writing Template (92.1% similarity)
2. Technical Blog Post Structure (85.6% similarity)
*Use `search_templates` tool to explore your template library*
📊 Optimization Insights
Performance Analysis:
- Clarity improvement: +21.9%
- Specificity boost: +17.3%
- Length optimization: +15.2%
Prompt Analysis:
- Complexity level: intermediate
- Optimization confidence: 87.3%
AI Recommendations:
- Optimization achieved 87.3% confidence
- Template automatically saved for future reference
- Prompt optimized from 15 to 23 words
*Professional analytics and improvement recommendations*
---
*Professional cloud-based AI optimization with context awareness*
💡 Manage account & configure models: https://promptoptimizer-blog.vercel.app/dashboard
📊 Check quota: Use `get_quota_status` tool
🔍 Search templates: Use `search_templates` tool
🔧 Universal MCP Client Support
Claude Desktop
{
"mcpServers": {
"prompt-optimizer": {
"command": "npx",
"args": ["mcp-prompt-optimizer"],
"env": {
"OPTIMIZER_API_KEY": "sk-opt-your-key-here"
}
}
}
}
Cursor IDE
Add to ~/.cursor/mcp.json:
{
"mcpServers": {
"prompt-optimizer": {
"command": "npx",
"args": ["mcp-prompt-optimizer"],
"env": {
"OPTIMIZER_API_KEY": "sk-opt-your-key-here"
}
}
}
}
Windsurf
Configure in IDE settings or add to MCP configuration file.
Other MCP Clients
- Cline: Standard MCP configuration
- VS Code: MCP extension setup
- Zed: MCP server configuration
- Replit: Environment variable setup
- JetBrains IDEs: MCP plugin configuration
- Emacs/Vim/Neovim: MCP client setup
🛠️ Available MCP Tools (for AI Agents & MCP Clients)
These tools are exposed via the Model Context Protocol (MCP) server and are intended for use by AI agents, MCP-compatible clients (like Claude Desktop, Cursor IDE), or custom scripts that interact with the server via stdin/stdout.
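MCP clients normally handle the wire protocol for you. For reference, a tool invocation over stdio follows the standard MCP JSON-RPC `tools/call` shape; an illustrative request for `optimize_prompt` (all values are examples only) looks roughly like this:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "optimize_prompt",
    "arguments": {
      "prompt": "Write a blog post about AI",
      "goals": ["clarity", "specificity"]
    }
  }
}
```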
optimize_prompt
Professional AI optimization with context detection, auto-save, and insights.
{
"prompt": "Your prompt text",
"goals": ["clarity", "specificity"], // Optional: e.g., "clarity", "conciseness", "creativity", "technical_accuracy"
"ai_context": "llm_interaction", // Optional: Auto-detected if not specified. e.g., "code_generation", "image_generation"
"enable_bayesian": true // Optional: Enable Bayesian optimization features (if available in backend)
}
detect_ai_context
Detects the AI context for a given prompt using advanced backend analysis.
{
"prompt": "The prompt text for which to detect the AI context"
}
create_template
Create a new optimization template.
{
"title": "Title of the template",
"description": "Description of the template", // Optional
"original_prompt": "The original prompt text",
"optimized_prompt": "The optimized prompt text",
"optimization_goals": ["clarity"], // Optional: e.g., ["clarity", "conciseness"]
"confidence_score": 0.9, // (0.0-1.0)
"model_used": "openai/gpt-4o-mini", // Optional
"optimization_tier": "llm", // Optional: e.g., "rules", "llm", "hybrid"
"ai_context_detected": "llm_interaction", // Optional: e.g., "code_generation", "image_generation"
"is_public": false, // Optional: Whether the template is public
"tags": ["marketing", "email"] // Optional
}
get_template
Retrieve a specific template by its ID.
{
"template_id": "the-template-id"
}
update_template
Update an existing optimization template.
{
"template_id": "the-template-id",
"title": "New title for the template", // Optional
"description": "New description for the template", // Optional
"is_public": true // Optional: Update public status
// Other fields from 'create_template' can also be updated
}
search_templates
Search your saved template library with AI-aware filtering.
{
"query": "blog post", // Optional: Search term to filter templates by content or title
"ai_context": "human_communication", // Optional: Filter templates by AI context type
"sophistication_level": "advanced", // Optional: Filter by template sophistication level
"complexity_level": "complex", // Optional: Filter by template complexity level
"optimization_strategy": "rules_only", // Optional: Filter by optimization strategy used
"limit": 5, // Optional: Number of templates to return (1-20)
"sort_by": "confidence_score", // Optional: e.g., "created_at", "usage_count", "title"
"sort_order": "desc" // Optional: "asc" or "desc"
}
get_quota_status
Check subscription status, quota usage, and account information.
// No parameters needed
get_optimization_insights (Conditional)
Get advanced Bayesian optimization insights, performance analytics, and parameter tuning recommendations. Note: This tool provides mock data if Bayesian optimization is disabled in the backend.
{
"analysis_depth": "detailed", // Optional: "basic", "detailed", "comprehensive"
"include_recommendations": true // Optional: Include optimization recommendations
}
get_real_time_status (Conditional)
Get real-time optimization status and AG-UI capabilities. Note: This tool provides mock data if AG-UI features are disabled in the backend.
// No parameters needed
🔧 Professional CLI Commands (Direct Execution)
These are direct command-line tools provided by the mcp-prompt-optimizer executable for administrative and diagnostic purposes.
# Check API key and quota status
mcp-prompt-optimizer check-status
# Validate API key with backend
mcp-prompt-optimizer validate-key
# Test backend integration
mcp-prompt-optimizer test
# Run comprehensive diagnostic
mcp-prompt-optimizer diagnose
# Clear validation cache
mcp-prompt-optimizer clear-cache
# Show help and setup instructions
mcp-prompt-optimizer help
# Show version information
mcp-prompt-optimizer version
🏢 Team Collaboration Features
Team API Keys (sk-team-*)
- Shared quotas across team members
- Centralized billing and management
- Team template libraries for consistency
- Role-based access control
- Team usage analytics
Individual API Keys (sk-opt-*)
- Personal quotas and billing
- Individual template libraries
- Personal usage tracking
- Account self-management
🔐 Security & Privacy
- Enterprise-grade security with encrypted data transmission
- API key validation with secure backend authentication
- Quota enforcement with real-time usage tracking
- Professional uptime with 99.9% availability SLA
- GDPR compliant data handling and processing
- No data retention - prompts are processed immediately and not stored
📈 Advanced Features
Automatic Template Management
- Auto-save high-confidence optimizations (above 70% confidence)
- Intelligent categorization by AI context and content type
- Similarity search to find related templates
- Template analytics with usage patterns and effectiveness
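The auto-save rule above (save when confidence exceeds 70%) can be sketched as follows. The helper name is hypothetical; only the threshold comes from this README.

```javascript
// Hypothetical helper illustrating the documented auto-save rule:
// optimizations scoring above 70% confidence are saved as templates.
const AUTO_SAVE_THRESHOLD = 0.7;

function shouldAutoSave(confidenceScore) {
  return confidenceScore > AUTO_SAVE_THRESHOLD;
}
```

The 87.3% optimization in the example output earlier clears this threshold, which is why it was auto-saved as a template.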
Real-time Optimization Insights
- Performance metrics - clarity, specificity, length improvements
- Confidence scoring with detailed analysis
- AI-powered recommendations for continuous improvement
- Usage analytics and optimization patterns Note: Advanced features like Bayesian Optimization and AG-UI Real-time Features are configurable and may provide mock data if disabled in the backend.
Intelligent Context Routing
- Automatic detection of prompt context and intent
- Goal enhancement based on detected context
- Parameter preservation for technical prompts
- Context-specific optimizations for better results
🚀 Getting Started
🏃‍♂️ Fast Start (System Defaults)
- Sign up at promptoptimizer-blog.vercel.app/pricing
- Install the MCP server:
npm install -g mcp-prompt-optimizer
- Configure your MCP client with your API key
- Start optimizing with intelligent AI context detection!
🎛️ Advanced Start (Custom Models)
- Sign up at promptoptimizer-blog.vercel.app/pricing
- Configure WebUI at dashboard with your OpenRouter key & models
- Install the MCP server:
npm install -g mcp-prompt-optimizer
- Configure your MCP client with your API key
- Enjoy enhanced optimization with your chosen models!
🔄 Migration to v3.0.0
⚠️ Breaking Changes from v2.x
IMPORTANT: v3.0.0 includes security enhancements that remove authentication bypasses.
What Changed:
- ❌ OPTIMIZER_DEV_MODE=true no longer works
- ❌ NODE_ENV=development no longer enables mock mode
- ❌ Offline mode has been removed
- ✅ All API keys must be validated against backend server
- ✅ Internet connection required (1-2 hour caching for reliability)
Migration Steps:
# 1. Ensure you have a valid API key
export OPTIMIZER_API_KEY="sk-opt-your-key-here"
# 2. Update to v3.0.0
npm update -g mcp-prompt-optimizer
# 3. Verify it works
mcp-prompt-optimizer --version # Should show 3.0.0
For Developers:
- Mock mode removed - use real test API keys from backend database
- Development keys (sk-dev-*) must be real keys, not mocked
- Offline testing no longer supported - backend connection required
Cache Behavior:
- Primary cache: 1 hour
- Network failure fallback: Up to 2 hours
- After 2 hours: Must reconnect to backend
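The cache windows above can be modeled as a small decision function. This is an illustrative sketch of the documented behavior, not the actual implementation; function and state names are hypothetical.

```javascript
// Illustrative model of the documented cache windows: a validation stays
// fresh for 1 hour, and on network failure the cached result is honored
// for up to 2 hours before the backend must be reachable again.
const HOUR_MS = 60 * 60 * 1000;

function cacheDecision(ageMs, networkAvailable) {
  if (ageMs <= HOUR_MS) return "use-cache";            // primary cache window
  if (!networkAvailable && ageMs <= 2 * HOUR_MS) return "use-stale-cache"; // failure fallback
  return networkAvailable ? "revalidate" : "reject";   // past 2 hours: backend required
}
```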
📞 Support & Resources
- 📚 Documentation: https://promptoptimizer-blog.vercel.app/docs
- 💬 Community Support: GitHub Discussions
- 📧 Email Support: [email protected] (Creator/Innovator)
- 🏢 Enterprise: [email protected]
- 📊 Dashboard & Model Config: https://promptoptimizer-blog.vercel.app/dashboard
- 🔧 Troubleshooting: https://promptoptimizer-blog.vercel.app/docs/troubleshooting
🌟 Why Choose MCP Prompt Optimizer?
✅ Professional Quality - Enterprise-grade optimization with consistent results
✅ Universal Compatibility - Works with 10+ MCP clients out of the box
✅ AI Context Awareness - Intelligent optimization based on prompt type
✅ Personal Model Choice - Use your own OpenRouter models & pay-per-use
✅ Template Management - Build and reuse optimization patterns
✅ Team Collaboration - Shared resources and centralized management
✅ Real-time Analytics - Track performance and improvement over time
✅ Startup Validation - Comprehensive error handling and troubleshooting
✅ Professional Support - From community to enterprise-level assistance
🚀 Professional MCP Server - Built for serious AI development with intelligent context detection, comprehensive template management, personal model configuration, and enterprise-grade reliability.
Get started with 5 free optimizations at promptoptimizer-blog.vercel.app/pricing
