vscode-ai-model-detector
v2.1.0
Published
Real-time AI model detection with 100% accuracy using Chat Participant API
Maintainers
Readme
VS Code AI Model Detector
Created: September 9, 2025 | Last Updated: September 9, 2025
Repository Setup: August 6, 2025 | Documentation Version: 2.0.0
Phase 3 Status: ✅ COMPLETE - MCP Integration Implemented
🎯 Real-time AI model detection with 100% accuracy + MCP integration - The perfect solution for VS Code users who need precise model detection with enhanced capabilities through Model Context Protocol.
🚀 Breakthrough Achievement
This extension leverages the Chat Participant API breakthrough - using request.model for perfect real-time accuracy that directly accesses VS Code's model dropdown selection. No more guessing, no more file-based detection limitations!
📋 Features
- ✅ 100% Real-time Accuracy: Direct access to VS Code's selected AI model
- 📊 Status Bar Integration: Continuous monitoring with one-click details
- 🤖 Chat Participant:
@modeldetectorfor comprehensive model analysis - 🌉 MCP Integration: Enhanced capabilities through Model Context Protocol
- 🔗 IPC Bridge: Seamless communication between VS Code and MCP server
- 📈 Detection History: Track model usage patterns over time
- ⚙️ Highly Configurable: Customize monitoring intervals and display options
🏆 Why This Extension?
Perfect Accuracy Through API Breakthrough
// 🎯 The breakthrough: Direct model access
public async detectFromChatContext(request: vscode.ChatRequest): Promise<ModelDetectionResult> {
const model = request.model; // 100% ACCURATE - Direct VS Code model access
return { accuracy: 'Perfect', source: 'chat-context', model };
}Multi-Layer Detection Strategy
- Chat Context (100% accurate) - Direct
request.modelaccess - Language Model API (Available models) -
vscode.lm.selectChatModels() - Storage Cache (Historical) - Previous detection results
🎮 Usage
Chat Participant Commands
@modeldetector # Comprehensive model detection
@modeldetector /detect # Detailed model information
@modeldetector /monitor # Enable status bar monitoringKeyboard Shortcuts
Ctrl+Shift+M- Quick model detectionCtrl+Shift+Alt+M- Toggle status bar monitoring
Status Bar Integration
The status bar shows:
- 🤖 Model Icon (vendor-specific)
- Model Name (current selection)
- Click for detailed quick pick menu
🌉 MCP Integration (Phase 3: COMPLETE)
Status: ✅ Implementation Complete - Ready for VS Code Testing
This extension now includes Model Context Protocol (MCP) integration through an IPC bridge, providing enhanced capabilities beyond the core Chat Participant API.
Hybrid Architecture
VS Code Chat Interface
↕️ (Primary: 100% Accurate)
Chat Participant API
↕️ (Secondary: Enhanced Features)
IPC Bridge (Port 3001)
↕️
MCP Server (4 Tools)Enhanced Capabilities
- Real-time Model Detection - Core Chat Participant API (100% accurate)
- Model Capabilities Analysis - Enhanced through MCP tools
- Change Monitoring - Advanced tracking via MCP integration
- Access Validation - Comprehensive verification through bridge
MCP Tools Available
detect_current_model- Enhanced detection with metadataget_model_capabilities- Detailed model specificationsmonitor_model_changes- Real-time change trackingvalidate_model_access- Access verification and testing
Benefits:
- ✅ Preserves 100% accurate Chat Participant breakthrough
- ✅ Adds enhanced MCP capabilities when available
- ✅ Graceful degradation if MCP server unavailable
- ✅ Zero regression in existing functionality
🔧 Installation
Development Setup
# Navigate to the extension directory
cd packages/vscode-ai-model-detector
# Install dependencies
npm install
# Compile TypeScript
npm run compile
# Launch Extension Development Host
npm run watch # In one terminal
F5 # In VS Code (or use Run Extension debug config)Building for Distribution
# Package the extension
npm run package
# This creates: vscode-ai-model-detector-1.0.0.vsix📊 Configuration
Access via Ctrl+, → Search "AI Model Detector"
{
"aiModelDetector.enableStatusBar": true,
"aiModelDetector.statusBarUpdateInterval": 5000,
"aiModelDetector.autoDetectInterval": 5000
}Configuration Options
| Setting | Type | Default | Description |
|---------|------|---------|-------------|
| enableStatusBar | boolean | true | Show/hide status bar item |
| statusBarUpdateInterval | number | 5000 | Update frequency (ms) |
| autoDetectInterval | number | 5000 | Auto-detection interval (ms) |
🏗️ Architecture
Project Structure
vscode-ai-model-detector/
├── src/
│ ├── extension.ts # Main activation & command registration
│ ├── modelDetector.ts # Core detection service (breakthrough)
│ ├── chatParticipant.ts # @modeldetector chat integration
│ ├── statusBar.ts # Continuous monitoring UI
│ └── types.ts # TypeScript interfaces
├── .vscode/ # Debug configuration
├── package.json # Extension manifest
└── tsconfig.json # TypeScript configurationKey Components
ModelDetectorService (Core Breakthrough)
- Primary Method:
detectFromChatContext()- 100% accurate viarequest.model - Fallback Methods: LM API detection, storage cache
- History Tracking: Complete audit trail of detections
Chat Participant Integration
- Participant:
@modeldetectorwith commands/detect,/monitor - Real-time Analysis: Instant model information in chat
- Interactive UI: Buttons and follow-up suggestions
Status Bar Manager
- Continuous Display: Real-time model information
- Vendor Icons: Visual identification (🤖 OpenAI, 🔮 Claude, 🧠 Gemini)
- Quick Access: Click for detailed information and controls
🎯 Technical Implementation
Chat Participant API Breakthrough
The key innovation is leveraging VS Code's Chat Participant API:
// Register chat participant with 100% accurate detection
const participant = vscode.chat.createChatParticipant('modeldetector', async (request, context, stream, token) => {
// 🎯 BREAKTHROUGH: Direct model access
const model = request.model;
// Perfect accuracy - no file parsing, no guessing
const modelInfo = {
id: model.id,
name: model.name,
vendor: model.vendor,
accuracy: 'Perfect',
source: 'chat-context'
};
});Multi-Context Detection Strategy
public async detectCurrentModel(): Promise<ModelDetectionResult> {
// 1. Try chat context (if available) - 100% accurate
// 2. Try LM API - available models
// 3. Use cached result - historical data
// 4. Graceful error handling
}📈 Benefits & Use Cases
For Developers
- Model Debugging: Instantly verify which model is processing requests
- Performance Analysis: Track model usage patterns and response quality
- Context Switching: Monitor model changes during development workflows
For Teams
- Consistency: Ensure all team members use appropriate models
- Compliance: Track model usage for organizational policies
- Training: Help new developers understand model selection
For Productivity
- No Guessing: Always know your current AI assistant
- Quick Reference: Model capabilities at a glance
- Historical Analysis: Understand your AI workflow patterns
🔗 Integration with Emblem-Projects Ecosystem
Tool Launcher Integration
🔍-ai-model-detector.cmd # Quick detection via command lineChat History Correlation
- Compatible: Works alongside VS Code Copilot Chat Extractor
- Enhanced Context: Model information included in chat extraction
- Workflow Integration: Perfect for development documentation
Cross-Repository Usage
- Tools Repository: Enhanced development workflow tracking
- Operations Repository: Client work model verification
- Documentation: Technical specifications with model context
🚀 Future Enhancements
Planned Features
- Cost Tracking: Monitor token usage and estimated costs
- Model Comparison: Side-by-side capability analysis
- Team Dashboard: Shared model usage insights
- API Integration: External monitoring and alerting
Community Requests
- Custom Icons: User-defined model indicators
- Export Data: CSV/JSON export of detection history
- Notifications: Model change alerts and recommendations
🤝 Contributing
- Fork the repository
- Create feature branch:
git checkout -b feature/amazing-feature - Make changes: Follow TypeScript best practices
- Test thoroughly: Use debug configuration for testing
- Submit PR: Include detailed description and testing steps
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
🔗 Related Projects
- VS Code Copilot Chat Extractor: Chat history analysis with model correlation
- MCP Server: Sequential thinking and development methodology
- Emblem-Projects Tools: Complete development workflow optimization
🎯 Perfect Solution: This extension solves the fundamental problem of AI model detection in VS Code through the Chat Participant API breakthrough, providing 100% real-time accuracy without file-based limitations.
📊 Production Ready: Complete TypeScript implementation with comprehensive error handling, configuration options, and professional UI integration.
🔧 Developer Focused: Built by developers, for developers, solving a real workflow visibility challenge with modern VS Code extension architecture.
