@ririaru/mcp-gpt5-server
v2.1.5
Enhanced MCP server for GPT-5 with advanced features
MCP GPT-5 Server Enhanced v2.0
Enhanced MCP (Model Context Protocol) server for accessing GPT-5 with advanced features including caching, reasoning levels, and verbosity control.
🚀 New Features in v2.0
- TypeScript Support: Full type safety and better IDE integration
- Reasoning Levels: Control GPT-5's reasoning effort (low/medium/high)
- Verbosity Control: Adjust response detail level (concise/normal/verbose)
- Response Caching: Optional caching to reduce API calls
- Enhanced Error Handling: Detailed error types and retry logic
- Web Search Integration: Enable web search capabilities
- Statistics Tracking: Monitor usage and performance
- Configuration Flexibility: Environment variables for all settings
Installation
Quick Setup with Claude Code
```shell
claude mcp add -s user sk_gpt5 "npx @ririaru/mcp-gpt5-server"
```
Manual Installation
```shell
npm install @ririaru/mcp-gpt5-server
```
Usage
Basic Query
```javascript
sk_gpt5("Hello, GPT-5!")
```
Advanced Query with Options
```javascript
sk_gpt5({
  message: "Explain quantum computing",
  reasoning_effort: "high",
  verbosity: "verbose",
  max_tokens: 8000,
  web_search: true
})
```
Cache Management
```javascript
// Get cache statistics
sk_gpt5_cache({ action: "stats" })

// Clear cache
sk_gpt5_cache({ action: "clear" })
```
Usage Statistics
```javascript
sk_gpt5_stats()
```
Configuration
Environment Variables
Create a .env file based on .env.example:
```shell
# API Configuration
GPT5_API_URL=https://mcpgpt5.vercel.app/api/messages
GPT5_DEFAULT_MODEL=gpt-5

# Default Parameters
GPT5_DEFAULT_REASONING=medium   # low, medium, high
GPT5_DEFAULT_VERBOSITY=normal   # concise, normal, verbose
GPT5_DEFAULT_MAX_TOKENS=4096

# Cache Configuration
GPT5_ENABLE_CACHE=true
GPT5_CACHE_EXPIRY=3600000       # 1 hour, in milliseconds

# Debug Mode
GPT5_DEBUG=false
```
API Reference
sk_gpt5 Tool
Main tool for querying GPT-5.
Parameters:
- message (string, required): The message to send
- model (string, optional): Model to use (default: gpt-5)
- reasoning_effort (enum, optional): low/medium/high (default: medium)
- verbosity (enum, optional): concise/normal/verbose (default: normal)
- max_tokens (number, optional): Maximum response tokens
- temperature (number, optional): Sampling temperature (GPT-5 uses 1.0)
- web_search (boolean, optional): Enable web search
- max_thinking_chars (number, optional): Limit thinking characters
- use_cache (boolean, optional): Use a cached response if available
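As a sketch, the parameter list above can be expressed as a TypeScript interface with the documented defaults filled in. The type and helper names below (`Gpt5QueryOptions`, `withDefaults`) are illustrative, not part of the package's public API:

```typescript
// Hypothetical shape of the sk_gpt5 parameters, mirroring the list above.
type ReasoningEffort = "low" | "medium" | "high";
type Verbosity = "concise" | "normal" | "verbose";

interface Gpt5QueryOptions {
  message: string;                     // required
  model?: string;                      // default: "gpt-5"
  reasoning_effort?: ReasoningEffort;  // default: "medium"
  verbosity?: Verbosity;               // default: "normal"
  max_tokens?: number;
  temperature?: number;                // GPT-5 uses 1.0
  web_search?: boolean;
  max_thinking_chars?: number;
  use_cache?: boolean;
}

// Fill in the documented defaults for any omitted fields;
// explicitly provided values take precedence over the defaults.
function withDefaults(opts: Gpt5QueryOptions) {
  return {
    model: "gpt-5",
    reasoning_effort: "medium" as ReasoningEffort,
    verbosity: "normal" as Verbosity,
    ...opts,
  };
}
```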
sk_gpt5_stats Tool
Get client statistics including request count, error rate, and cache usage.
sk_gpt5_cache Tool
Manage the response cache.
Parameters:
- action (enum, required): "clear" or "stats"
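A minimal sketch of the kind of TTL cache that sk_gpt5_cache manages, assuming a simple in-memory Map keyed by request. The `ResponseCache` class is illustrative, not the package's actual internals:

```typescript
// In-memory cache with per-entry expiry; default TTL matches GPT5_CACHE_EXPIRY.
class ResponseCache {
  private entries = new Map<string, { value: string; expiresAt: number }>();

  constructor(private ttlMs: number = 3_600_000) {}

  get(key: string): string | undefined {
    const hit = this.entries.get(key);
    if (!hit) return undefined;
    if (Date.now() > hit.expiresAt) {
      // Expired: evict the stale entry and report a miss.
      this.entries.delete(key);
      return undefined;
    }
    return hit.value;
  }

  set(key: string, value: string): void {
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }

  clear(): void {                       // action: "clear"
    this.entries.clear();
  }

  stats(): { size: number } {           // action: "stats"
    return { size: this.entries.size };
  }
}
```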
Advanced Features
Reasoning Effort Levels
- low: Quick responses with minimal reasoning
- medium: Balanced reasoning and response time
- high: Deep reasoning for complex queries
Verbosity Modes
- concise: Brief, direct responses
- normal: Standard response detail
- verbose: Comprehensive responses with examples
Constraints
- Web search requires at least medium reasoning effort
- Temperature is fixed at 1.0 for GPT-5
- Maximum tokens capped at 10,000
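These constraints could be checked up front along the following lines; `validateOptions` is a hypothetical helper for illustration, not an exported function of the package:

```typescript
type ReasoningEffort = "low" | "medium" | "high";

// Return a list of constraint violations (empty means the options are valid).
function validateOptions(opts: {
  reasoning_effort?: ReasoningEffort;
  web_search?: boolean;
  temperature?: number;
  max_tokens?: number;
}): string[] {
  const errors: string[] = [];
  const effort = opts.reasoning_effort ?? "medium"; // documented default
  if (opts.web_search && effort === "low") {
    errors.push("constraint_error: web_search requires at least medium reasoning effort");
  }
  if (opts.temperature !== undefined && opts.temperature !== 1.0) {
    errors.push("constraint_error: temperature is fixed at 1.0 for GPT-5");
  }
  if (opts.max_tokens !== undefined && opts.max_tokens > 10_000) {
    errors.push("constraint_error: max_tokens is capped at 10,000");
  }
  return errors;
}
```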
Development
Building from Source
```shell
# Install dependencies
npm install

# Build TypeScript
npm run build

# Watch mode for development
npm run dev
```
Testing Locally
```shell
# Run the MCP server
npm start

# Or run the built bridge directly
node dist/mcp-gpt5-bridge.js
```
Deployment
Vercel API Deployment
The api/messages.js file is ready for Vercel deployment:
```shell
vercel deploy
```
NPM Publishing
```shell
npm version patch   # or minor/major
npm publish
```
Error Handling
The server handles various error types:
- api_error: API request failures
- timeout: Request timeouts
- network_error: Network issues
- validation_error: Invalid parameters
- constraint_error: Constraint violations
- cache_error: Cache-related errors
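One way to model the error categories above is a discriminated error class; this is a sketch, and the package's actual error classes may differ:

```typescript
// Union of the documented error categories.
type Gpt5ErrorType =
  | "api_error"
  | "timeout"
  | "network_error"
  | "validation_error"
  | "constraint_error"
  | "cache_error";

// Error carrying its category, so callers can branch on err.type.
class Gpt5Error extends Error {
  constructor(public readonly type: Gpt5ErrorType, message: string) {
    super(`${type}: ${message}`);
    this.name = "Gpt5Error";
  }
}
```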
Performance Optimization
- Response caching reduces API calls
- Automatic retry with exponential backoff
- AbortController for timeout management
- Streaming support with fallback to non-streaming
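The retry and timeout behavior listed above can be sketched as follows. `fetchWithRetry` and its parameters are assumptions for illustration, not the package's exported API:

```typescript
// Automatic retry with exponential backoff, plus an AbortController-based
// timeout on each attempt. Illustrative only; the package's internals may differ.
async function fetchWithRetry(
  url: string,
  body: unknown,
  maxRetries = 3,
  timeoutMs = 30_000,
  baseDelayMs = 1_000
): Promise<Response> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs); // timeout guard
    try {
      const res = await fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(body),
        signal: controller.signal,
      });
      if (res.ok) return res;
      lastError = new Error(`api_error: HTTP ${res.status}`);
    } catch (err) {
      lastError = err; // AbortError (timeout) or a network error
    } finally {
      clearTimeout(timer);
    }
    if (attempt < maxRetries) {
      // Exponential backoff: baseDelayMs, then 2x, 4x, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```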
Requirements
- Node.js 18 or higher
- Claude Code with MCP support
- OpenAI API key (for Vercel deployment)
License
MIT
Contributing
Contributions are welcome! Please feel free to submit issues and pull requests.
Changelog
v2.0.0
- Complete TypeScript rewrite
- Added reasoning effort levels
- Added verbosity control
- Implemented response caching
- Enhanced error handling
- Added statistics tracking
- Web search integration
- Multiple tool support
v1.0.5
- Initial release
- Basic GPT-5 proxy functionality
