@justintabb/openai-second-opinion-mcp
v0.2.0
MCP server that provides an OpenAI second opinion tool for Claude Code
OpenAI Second Opinion MCP Server
An MCP (Model Context Protocol) server that exposes an openai_second_opinion tool, allowing Claude Code to query OpenAI for alternative perspectives, debugging help, architecture reviews, and more.
Quick Start (Recommended)
Use directly with npx - no installation required:
{
"mcpServers": {
"openai-second-opinion": {
"type": "stdio",
"command": "npx",
"args": ["-y", "@justintabb/openai-second-opinion-mcp"],
"env": {
"OPENAI_API_KEY": "sk-your-api-key-here",
"OPENAI_MODEL": "gpt-4o"
}
}
}
}
Add this to your .mcp.json file (in your project or ~/.claude/), replace the API key, and restart Claude Code.
Installation Options
Option 1: npx (Recommended)
No installation needed. Just add the configuration above to your .mcp.json.
Option 2: Global Install
npm install -g @justintabb/openai-second-opinion-mcp
Then in .mcp.json:
{
"mcpServers": {
"openai-second-opinion": {
"type": "stdio",
"command": "openai-second-opinion-mcp",
"env": {
"OPENAI_API_KEY": "sk-your-api-key-here",
"OPENAI_MODEL": "gpt-4o"
}
}
}
}
Option 3: Local Development
git clone https://github.com/justintabb/openai-second-opinion-mcp
cd openai-second-opinion-mcp
npm install
npm run build
Then in .mcp.json:
{
"mcpServers": {
"openai-second-opinion": {
"command": "node",
"args": ["/path/to/openai-second-opinion-mcp/dist/index.js"],
"env": {
"OPENAI_API_KEY": "sk-your-api-key-here",
"OPENAI_MODEL": "gpt-4o"
}
}
}
}
Configuration
Required Environment Variables
| Variable | Description |
|----------|-------------|
| OPENAI_API_KEY | Your OpenAI API key |
| OPENAI_MODEL | Model to use (e.g., gpt-4o, gpt-4-turbo) |
Optional Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| OPENAI_BASE_URL | https://api.openai.com/v1 | Custom API endpoint (for Azure, proxies, etc.) |
| OPENAI_TIMEOUT_MS | 600000 | Request timeout in milliseconds (10 min default for reasoning models) |
| OPENAI_MAX_OUTPUT_TOKENS | 4096 | Maximum tokens in response |
| OPENAI_TEMPERATURE | 0.7 | Temperature setting (0-2) |
| RATE_LIMIT_MAX_CALLS | 3 | Max API calls per window |
| RATE_LIMIT_WINDOW_MS | 60000 | Rate limit window in milliseconds |
| DEBUG | false | Include raw API response in output |
Consensus Mode (Optional) - Google Gemini
To enable Model Consensus mode (calling both OpenAI and Gemini for comparison):
| Variable | Default | Description |
|----------|---------|-------------|
| GEMINI_API_KEY | - | Your Google Gemini API key (enables consensus mode) |
| GEMINI_MODEL | gemini-2.0-flash-exp | Gemini model to use |
| GEMINI_TIMEOUT_MS | 600000 | Request timeout in milliseconds |
| GEMINI_MAX_OUTPUT_TOKENS | 8192 | Maximum tokens in response |
| GEMINI_TEMPERATURE | 0.7 | Temperature setting |
Full Configuration Example (with Consensus Mode)
{
"mcpServers": {
"openai-second-opinion": {
"type": "stdio",
"command": "npx",
"args": ["-y", "@justintabb/openai-second-opinion-mcp"],
"env": {
"OPENAI_API_KEY": "sk-your-openai-key-here",
"OPENAI_MODEL": "gpt-4o",
"OPENAI_TIMEOUT_MS": "600000",
"OPENAI_MAX_OUTPUT_TOKENS": "4096",
"OPENAI_TEMPERATURE": "0.7",
"GEMINI_API_KEY": "your-gemini-key-here",
"GEMINI_MODEL": "gemini-2.0-flash-exp",
"RATE_LIMIT_MAX_CALLS": "5",
"RATE_LIMIT_WINDOW_MS": "60000"
}
}
}
}
Tool Schema
Tool Name
openai_second_opinion
Description
Query OpenAI for a second opinion or deeper analysis; returns a concise, actionable response.
Input Parameters
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| task | string | Yes | The problem statement |
| context | string | No | Relevant surrounding details/logs/code excerpts |
| mode | enum | No | Analysis mode (see below) |
| constraints | string | No | Must-follow requirements |
| output_format | enum | No | plain, bullets, or json |
| json_schema | object | No | Schema for structured JSON responses |
| safety_notes | string | No | Things that must not be suggested |
| consensus | boolean | No | Enable Model Consensus mode (calls both OpenAI + Gemini) |
Analysis Modes
| Mode | Description |
|------|-------------|
| second_opinion | General validation / alternative approach (default) |
| debug | Root cause analysis + experiments |
| architecture_review | 2-3 options with tradeoffs |
| edge_cases | Failure modes, missing requirements |
| security_review | OWASP top 10, auth gaps, injection vectors |
| prompt_review | Clarity, structure, reliability |
Output
{
"answer": "OpenAI's response text",
"model": "gpt-4o",
"usage": {
"prompt_tokens": 150,
"completion_tokens": 300,
"total_tokens": 450
},
"request_id": "req_abc123",
"warnings": []
}
Example Invocations
Debug Mode
{
"task": "Next.js API route intermittently returns 504 behind nginx",
"context": "nginx proxy_read_timeout=60; node logs show occasional long GC pauses; some requests do heavy PDF generation",
"mode": "debug",
"constraints": "Must keep nginx; can change node code",
"output_format": "bullets"
}
Architecture Review
{
"task": "Design a caching strategy for our API",
"context": "REST API with 10k RPM, PostgreSQL backend, some queries take 2-5 seconds",
"mode": "architecture_review",
"output_format": "bullets"
}
Model Consensus Mode
When you need high-confidence validation, enable consensus mode to get responses from both OpenAI and Gemini:
{
"task": "Review this database schema design for a multi-tenant SaaS application",
"context": "PostgreSQL, expecting 1000+ tenants, need row-level security, tenant isolation critical",
"mode": "architecture_review",
"consensus": true
}
Consensus mode output:
{
"consensus_mode": true,
"both_succeeded": true,
"openai_response": {
"answer": "OpenAI's analysis...",
"model": "gpt-4o"
},
"gemini_response": {
"answer": "Gemini's analysis...",
"model": "gemini-2.0-flash-exp"
},
"guidance": "Compare both responses. If they agree on the core approach, proceed with confidence..."
}
Structured JSON Output
{
"task": "Create a debug plan to isolate slow database queries",
"context": "Slow queries in production only; local is fast; RLS enabled",
"mode": "debug",
"output_format": "json",
"json_schema": {
"type": "object",
"properties": {
"hypotheses": { "type": "array", "items": { "type": "string" } },
"experiments": {
"type": "array",
"items": {
"type": "object",
"properties": {
"step": { "type": "string" },
"expected_signal": { "type": "string" }
}
}
},
"quick_wins": { "type": "array", "items": { "type": "string" } }
}
}
}
Security Features
Secret Redaction: Automatically detects and redacts common secret patterns:
- API keys (OpenAI, AWS, Stripe, GitHub, etc.)
- Bearer tokens
- Database connection strings
- Private keys (PEM format)
- Environment variable patterns
- JWTs
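As a rough illustration of how pattern-based redaction like this can work, here is a minimal sketch. The regexes, labels, and the `redactSecrets` name are illustrative assumptions, not the package's actual implementation:

```typescript
// Illustrative sketch of regex-based secret redaction (not the package's actual code).
// Each pattern maps a common secret shape to a replacement label.
const SECRET_PATTERNS: Array<[RegExp, string]> = [
  [/sk-[A-Za-z0-9_-]{20,}/g, "[REDACTED_OPENAI_KEY]"],          // OpenAI-style keys
  [/AKIA[0-9A-Z]{16}/g, "[REDACTED_AWS_KEY]"],                  // AWS access key IDs
  [/Bearer\s+[A-Za-z0-9._~+/=-]+/g, "Bearer [REDACTED_TOKEN]"], // Bearer tokens
  [/postgres(?:ql)?:\/\/\S+/g, "[REDACTED_DB_URL]"],            // DB connection strings
];

function redactSecrets(text: string): string {
  // Apply every pattern in turn, replacing all matches.
  return SECRET_PATTERNS.reduce((acc, [re, label]) => acc.replace(re, label), text);
}
```

A caller would run any user-supplied context through `redactSecrets` before it leaves the process.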
Rate Limiting: In-memory sliding window rate limiter prevents excessive API usage.
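A sliding-window limiter of this kind can be sketched in a few lines; the class and method names below are assumptions for illustration, not the package's actual code:

```typescript
// Illustrative in-memory sliding-window rate limiter.
class SlidingWindowLimiter {
  private calls: number[] = [];

  constructor(private maxCalls: number, private windowMs: number) {}

  // Returns true and records the call if under the limit, false otherwise.
  tryAcquire(now: number = Date.now()): boolean {
    // Drop timestamps that have aged out of the window.
    this.calls = this.calls.filter((t) => now - t < this.windowMs);
    if (this.calls.length >= this.maxCalls) return false;
    this.calls.push(now);
    return true;
  }
}
```

With the defaults (`RATE_LIMIT_MAX_CALLS=3`, `RATE_LIMIT_WINDOW_MS=60000`), a fourth call inside the same minute would be rejected, and capacity frees up as old timestamps slide out of the window.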
No Secret Logging: The OpenAI API key is never logged.
Timeout Protection: All requests have configurable timeouts.
Troubleshooting
Server won't start
- Check that OPENAI_API_KEY and OPENAI_MODEL are set in your .mcp.json
- Ensure Node.js >= 18 is installed
- Try running manually:
npx -y @justintabb/openai-second-opinion-mcp
Rate limit errors
Increase the limit or window in your .mcp.json:
{
"env": {
"RATE_LIMIT_MAX_CALLS": "10",
"RATE_LIMIT_WINDOW_MS": "60000"
}
}
Timeout errors
Increase the timeout:
{
"env": {
"OPENAI_TIMEOUT_MS": "60000"
}
}
Using with Azure OpenAI
Set the base URL to your Azure endpoint:
{
"env": {
"OPENAI_BASE_URL": "https://your-resource.openai.azure.com/openai/deployments/your-deployment",
"OPENAI_API_KEY": "your-azure-key",
"OPENAI_MODEL": "gpt-4o"
}
}
License
MIT
