polydev-perspectives-mcp
v1.0.1
Agentic workflow assistant with CLI integration - get diverse perspectives from multiple LLMs when you are stuck or need enhanced reasoning
Advanced Model Context Protocol platform with comprehensive multi-LLM integration, subscription-based CLI access, OAuth bridges, and rich tooling for AI development.
Features
🤖 Comprehensive LLM Integration
- API-Based Providers: Direct integration with 8+ providers (Anthropic, OpenAI, Google, etc.)
- Subscription-Based CLI Access: Use your existing ChatGPT Plus, Claude Pro, GitHub Copilot subscriptions
- Unified Interface: Single API for all providers with consistent streaming responses
- Auto-Detection: Automatic CLI tool discovery and path configuration
🔧 CLI Provider Support
- Codex CLI: Access GPT-5 with high reasoning effort through your ChatGPT subscription
- Claude Code CLI: Use Claude via Anthropic subscription
- Gemini CLI: Google Cloud authentication integration
- GitHub Copilot: VS Code Language Model API integration
🛠 Advanced Tooling
- Model Context Protocol (MCP): Hosted MCP server with OAuth authentication, similar to Vercel's hosted MCP offering
- Multi-Authentication: Both OAuth and API token support for maximum flexibility
- Process Execution: Cross-platform CLI management with timeout handling
- Path Auto-Discovery: Smart detection of CLI installations across Windows, macOS, Linux
- Real-time Status: Live CLI availability and authentication checking
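The path auto-discovery described above can be sketched as a simple candidate search. This is a minimal illustration; the candidate directories below are guesses for demonstration, not the app's actual search list.

```typescript
// Sketch of cross-platform CLI path discovery; candidate directories
// are illustrative assumptions, not the app's actual configuration.
import { existsSync } from "node:fs";
import path from "node:path";

function candidatePaths(tool: string, platform: string, home: string): string[] {
  const p = platform === "win32" ? path.win32 : path.posix;
  const exe = platform === "win32" ? `${tool}.exe` : tool;
  const dirs =
    platform === "win32"
      ? [p.join(home, "AppData", "Local", "Programs", tool), "C:\\Program Files\\" + tool]
      : ["/usr/local/bin", "/opt/homebrew/bin", p.join(home, ".local", "bin")];
  return dirs.map((dir) => p.join(dir, exe));
}

// Return the first candidate that exists on disk, or null so the UI
// can fall back to prompting for a custom path.
function discoverCli(tool: string): string | null {
  const home = process.env.HOME ?? process.env.USERPROFILE ?? "";
  return candidatePaths(tool, process.platform, home).find(existsSync) ?? null;
}
```

Returning `null` rather than throwing lets the settings UI surface a "configure custom path" prompt instead of failing hard.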
🔒 Security & Authentication
- Encrypted Storage: Browser-based API key encryption
- OAuth Bridges: Secure authentication flows
- Subscription Auth: No API costs - use existing subscriptions
- Local Storage: Keys never leave your device
📊 Monitoring & Analytics
- PostHog Integration: Advanced user analytics and feature tracking
- BetterStack Monitoring: System health and performance monitoring
- Upstash Redis: High-performance caching layer
- Supabase Auth: Robust authentication system
Tech Stack
Frontend
- Framework: Next.js 15 with App Router
- UI Library: React 18 with TypeScript
- Styling: Tailwind CSS with shadcn/ui components
- Icons: Lucide React
- State Management: React hooks with custom providers
LLM Integration
- API Handlers: Custom TypeScript handlers for each provider
- CLI Integration: Cross-platform process execution utilities
- Streaming: Server-Sent Events for real-time responses
- Authentication: Both API key and subscription-based authentication
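As a rough illustration of the Server-Sent Events streaming mentioned above, a client can split each chunk on `data:` lines. The payload shape and the `[DONE]` terminator here are assumptions (a common SSE convention), not this project's documented wire format.

```typescript
// Sketch of extracting data payloads from an SSE text chunk; the
// "[DONE]" terminator is a common convention, assumed here.
function parseSseChunk(chunk: string): string[] {
  const payloads: string[] = [];
  for (const line of chunk.split("\n")) {
    if (line.startsWith("data: ")) {
      const data = line.slice("data: ".length);
      if (data !== "[DONE]") payloads.push(data);
    }
  }
  return payloads;
}
```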
Backend Services
- Analytics: PostHog for user tracking and feature analytics
- Monitoring: BetterStack for system health and logging
- Caching: Upstash Redis for high-performance data caching
- Authentication: Supabase for user management and auth flows
- Database: Supabase PostgreSQL for user data
Security & Storage
- Encryption: Browser SubtleCrypto API for client-side encryption
- Storage: Local browser storage with encrypted API keys
- CORS: Configured for secure cross-origin requests
Development & Deployment
- Package Manager: npm with Node.js 18+
- Build System: Next.js with TypeScript compilation
- Deployment: Vercel with automatic deployments
- Environment: Support for multiple deployment environments
Getting Started
Prerequisites
- Node.js 18+
- npm or yarn package manager
- (Optional) CLI tools for subscription-based access:
- Codex CLI for ChatGPT Plus integration
- Claude Code CLI for Anthropic Pro integration
- Gemini CLI for Google Cloud integration
- VS Code with GitHub Copilot for Copilot integration
Installation
- Clone the repository:

```bash
git clone <repository-url>
cd polydev-website
```

- Install dependencies:

```bash
npm install
```

- Set up environment variables (see the Environment Variables section)
- Start the development server:

```bash
npm run dev
```

- Open the application: navigate to http://localhost:3000 to view the application.
Quick Configuration
- API Key Setup: Go to Settings → API Keys tab to configure traditional API access
- CLI Setup: Go to Settings → CLI Subscriptions tab to set up subscription-based access
- Provider Selection: Choose your preferred LLM provider from the dropdown
- Test Integration: Use the chat interface to test your configuration
Environment Variables
Create a .env.local file with the following variables:
```bash
# Supabase
NEXT_PUBLIC_SUPABASE_URL=your_supabase_url
NEXT_PUBLIC_SUPABASE_ANON_KEY=your_supabase_anon_key
SUPABASE_SERVICE_ROLE_KEY=your_supabase_service_role_key

# PostHog Analytics
NEXT_PUBLIC_POSTHOG_KEY=your_posthog_key
NEXT_PUBLIC_POSTHOG_HOST=https://us.i.posthog.com

# Upstash Redis
UPSTASH_REDIS_REST_URL=your_upstash_redis_url
UPSTASH_REDIS_REST_TOKEN=your_upstash_redis_token

# BetterStack Logging
BETTERSTACK_LOGS_TOKEN=your_betterstack_token
```

CLI Provider Setup
Codex CLI (ChatGPT Plus Integration)
Install Codex CLI:
- Download from OpenAI's official repository
- Ensure you have an active ChatGPT Plus subscription
Authentication:
```bash
codex auth
```

Verify Installation:

```bash
codex --version
```
Claude Code CLI (Anthropic Pro Integration)
Install Claude Code CLI:
- Follow instructions at Claude Code Documentation
- Requires active Claude Pro subscription
Authentication:
```bash
claude login
```

Verify Installation:

```bash
claude --version
```
Gemini CLI (Google Cloud Integration)
Install Google Cloud CLI:
```bash
# macOS
brew install google-cloud-sdk

# Windows
# Download from https://cloud.google.com/sdk/docs/install

# Linux
curl https://sdk.cloud.google.com | bash
```

Authentication:

```bash
gcloud auth login
gcloud auth application-default login
```
GitHub Copilot Integration
- Install VS Code with GitHub Copilot extension
- Authentication: Sign in with your GitHub account that has Copilot access
- Verification: The application will detect VS Code and Copilot availability automatically
API Provider Configuration
Setting Up API Keys
- Navigate to Settings → API Keys tab
- Select your preferred provider from the dropdown
- Enter your API key (encrypted automatically)
- Choose your preferred model
- Test the configuration
Supported API Providers
| Provider | Models | Context Window | Features |
|----------|--------|----------------|----------|
| Anthropic | Claude 3.5 Sonnet, Haiku, Opus | 200K tokens | Best for reasoning and code |
| OpenAI | GPT-4o, GPT-4 Turbo, GPT-3.5 | 128K tokens | Versatile, widely adopted |
| Google Gemini | Gemini 1.5 Pro, Flash | 1M+ tokens | Large context window |
| OpenRouter | 100+ models | Varies | Access to multiple providers |
| Groq | Open-source models | Varies | Ultra-fast inference |
| Perplexity | Search-optimized models | Varies | AI search and reasoning |
| DeepSeek | Reasoning models | Varies | Advanced reasoning capabilities |
| Mistral AI | European AI models | Varies | Strong performance, EU-based |
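The provider list above can be captured in a small TypeScript union plus a narrowing guard for values coming from user input or stored configuration. This is an illustrative sketch; the real `ApiProvider` type in `src/types/api-configuration.ts` may differ.

```typescript
// Illustrative union of the providers listed above; the actual type in
// src/types/api-configuration.ts may differ.
type ApiProvider =
  | "anthropic" | "openai" | "google" | "openrouter"
  | "groq" | "perplexity" | "deepseek" | "mistral";

const PROVIDER_LABELS: Record<ApiProvider, string> = {
  anthropic: "Anthropic",
  openai: "OpenAI",
  google: "Google Gemini",
  openrouter: "OpenRouter",
  groq: "Groq",
  perplexity: "Perplexity",
  deepseek: "DeepSeek",
  mistral: "Mistral AI",
};

// Type guard: narrows an arbitrary string (e.g. from localStorage) to ApiProvider.
function isApiProvider(value: string): value is ApiProvider {
  return value in PROVIDER_LABELS;
}
```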
Usage Examples
Basic Chat Interface
- Configure your preferred provider (API key or CLI)
- Select a model from the dropdown
- Start chatting in the main interface
- Switch providers anytime without losing conversation history
CLI Provider Usage
```typescript
// The application automatically detects CLI availability;
// users can configure custom paths if needed.

// Example: Using Codex CLI for high reasoning
const response = await llmService.createCliMessage(
  'codex',
  'You are a helpful AI assistant',
  [{ role: 'user', content: 'Explain quantum computing' }],
  { temperature: 0.7 }
)
```

API Provider Usage

```typescript
// Standard API key usage
const response = await llmService.createMessage(
  'You are a helpful AI assistant',
  [{ role: 'user', content: 'Write a Python function' }],
  { provider: 'anthropic', model: 'claude-3-5-sonnet-20241022' }
)
```

Architecture Overview
CLI Integration Architecture
```
┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│   Frontend UI   │────│  Process Utils   │────│    CLI Tools    │
│   (React/TS)    │    │    (Node.js)     │    │   (External)    │
└─────────────────┘    └──────────────────┘    └─────────────────┘
        │                       │                       │
        ▼                       ▼                       ▼
┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│   LLM Service   │    │   CLI Handlers   │    │  Subscriptions  │
│  (Unified API)  │    │  (Per Provider)  │    │ (ChatGPT+, etc) │
└─────────────────┘    └──────────────────┘    └─────────────────┘
```

Security Model
- API Keys: Encrypted using browser SubtleCrypto API
- Local Storage: Keys never leave your device
- CLI Authentication: Uses existing subscription authentication
- Process Isolation: CLI processes run in isolated environments
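The process-isolation model above can be sketched with Node's `child_process` module: each CLI call runs as a separate child process and is killed if it exceeds a timeout. This is a minimal illustration, not the repository's actual execution utility.

```typescript
// Sketch of running a CLI provider as an isolated child process with a
// timeout; runCli is an illustrative name, not the app's actual API.
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const execFileAsync = promisify(execFile);

async function runCli(command: string, args: string[], timeoutMs = 30_000): Promise<string> {
  // The `timeout` option sends SIGTERM to the child if it runs too long,
  // preventing hung CLI sessions from blocking the server.
  const { stdout } = await execFileAsync(command, args, { timeout: timeoutMs });
  return stdout.trim();
}
```

Using `execFile` rather than `exec` avoids shell interpolation of arguments, which matters when user-supplied prompts are passed to a CLI.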
Troubleshooting
CLI Provider Issues
Codex CLI not detected:
- Verify ChatGPT Plus subscription is active
- Check installation path: `which codex`
- Re-run authentication: `codex auth`
- Configure custom path in Settings → CLI Subscriptions
Claude Code CLI authentication failed:
- Ensure Claude Pro subscription
- Run `claude login` manually
- Check network connectivity
- Verify CLI version compatibility
Gemini CLI setup issues:
- Install Google Cloud SDK completely
- Run both `gcloud auth` commands
- Enable required APIs in Google Cloud Console
- Check quota limits
GitHub Copilot not available:
- Install VS Code with Copilot extension
- Sign in to GitHub account with Copilot access
- Restart the application
- Check VS Code Language Model API availability
API Provider Issues
API key validation failed:
- Verify the key is correctly copied (no extra spaces)
- Check if the key has required permissions
- Ensure sufficient account credits/quota
- Try regenerating the API key
Connection timeout:
- Check internet connectivity
- Verify firewall settings
- Try different model/provider
- Increase timeout in settings
Model not available:
- Check provider documentation for model availability
- Verify your account tier supports the model
- Try alternative models from the same provider
Development
Adding New CLI Providers
- Create handler in
src/lib/llm/handlers/ - Update types in
src/types/api-configuration.ts - Add configuration to
CLI_PROVIDERSconstant - Update UI in
CliProviderConfiguration.tsx - Register handler in
LLMService
Adding New API Providers
- Create handler implementing the `ApiHandler` interface
- Update the `ApiProvider` type and `PROVIDERS` configuration
- Add API key fields to `ApiConfiguration`
- Update UI components for the new provider
- Add authentication logic
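The steps above can be sketched as follows. The `ApiHandler` shape shown here is a hypothetical reconstruction (method and field names are assumptions), and the `EchoHandler` is a stub standing in for a real provider integration.

```typescript
// Hypothetical ApiHandler contract plus a stub provider; names are
// illustrative, not the repository's actual interface.
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

interface ApiHandler {
  readonly provider: string;
  createMessage(system: string, messages: ChatMessage[]): Promise<string>;
}

class EchoHandler implements ApiHandler {
  readonly provider = "echo";
  async createMessage(system: string, messages: ChatMessage[]): Promise<string> {
    // A real handler would call the provider's API and stream tokens back;
    // this stub just echoes the last user message.
    const last = messages[messages.length - 1];
    return `[${system}] ${last?.content ?? ""}`;
  }
}
```

A new provider then only needs to satisfy this interface and be registered with the service; the rest of the app talks to the unified contract.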
Performance Optimization
CLI Response Optimization
- CLI processes are cached and reused when possible
- Streaming responses reduce perceived latency
- Process timeouts prevent hanging connections
- Cross-platform path detection minimizes setup time
API Response Optimization
- Server-Sent Events for real-time streaming
- Connection pooling for API requests
- Response caching for repeated queries
- Automatic retry logic with exponential backoff
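The retry-with-exponential-backoff behavior above can be sketched in a few lines. Attempt counts and delays here are illustrative defaults, not the app's actual tuning.

```typescript
// Sketch of retry with exponential backoff; maxAttempts and baseDelayMs
// are illustrative defaults.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 250,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        // Delays double each attempt: 250ms, 500ms, 1000ms, ...
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```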
Health Check
The application includes a health check endpoint at /api/health for monitoring purposes.
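With the App Router, such an endpoint is typically a `GET` export in `app/api/health/route.ts`. The response body shown here is an assumption; the actual endpoint may return different fields.

```typescript
// Sketch of an App Router health endpoint (app/api/health/route.ts);
// the body shape is an assumption, not the app's documented response.
export async function GET(): Promise<Response> {
  return Response.json({ status: "ok", timestamp: new Date().toISOString() });
}
```

Monitoring services like BetterStack can poll this route and alert when it stops returning a 200.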
