CCLM - Claude Code with LM Studio
Use the official Claude Code CLI with local LM Studio models from anywhere in your terminal
Features
✅ Official Claude Code Interface - Full compatibility with Claude Code CLI
✅ Local LM Studio Models - Use any model loaded in LM Studio
✅ Global Command - Run cclm from any directory
✅ Auto Proxy Management - Automatic start/stop of translation proxy
✅ Zero Configuration - Works out of the box with sensible defaults
✅ Configurable - Easy configuration via CLI commands
Prerequisites
- LM Studio - Download from https://lmstudio.ai/
- Node.js 18+ - Download from https://nodejs.org/
Installation
Global Installation (Recommended)
npm install -g cclm
Local Installation
git clone https://github.com/yourusername/cclm
cd cclm
npm install
npm link
Quick Start
Start LM Studio
- Open LM Studio
- Load a model (e.g., Granite 4.0 H 1B, Llama 3.1 8B)
- Go to "Local Server" tab
- Click "Start Server"
Run CCLM
cclm
That's it! You're now using Claude Code with your local LM Studio model.
Usage
Basic Commands
# Start interactive session
cclm
# Show help
cclm --help
# Show version
cclm --version
# Pass arguments to Claude Code
cclm --model my-model
Configuration
# Show current configuration
cclm config show
# Set LM Studio model
cclm config set lmStudioModel granite-4.0-h-1b
# Set LM Studio URL (if using custom port)
cclm config set lmStudioUrl http://localhost:1234/v1
# Enable debug mode
cclm config set debug true
# Change proxy port
cclm config set proxyPort 3001
# Disable auto-start proxy
cclm config set autoStartProxy false
Configuration Options
| Key | Default | Description |
|-----|---------|-------------|
| lmStudioUrl | http://localhost:1234/v1 | LM Studio API URL |
| lmStudioModel | local-model | Model name in LM Studio |
| proxyPort | 3000 | Port for proxy server |
| debug | false | Enable debug logging |
| autoStartProxy | true | Auto-start proxy if not running |
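The keys above map onto the saved config.json file (see Configuration Persistence below). A plausible file containing every default, assuming a flat JSON layout (the actual file shape may differ):

```json
{
  "lmStudioUrl": "http://localhost:1234/v1",
  "lmStudioModel": "local-model",
  "proxyPort": 3000,
  "debug": false,
  "autoStartProxy": true
}
```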
How It Works
┌─────────────┐   Anthropic   ┌──────────────┐    OpenAI    ┌───────────┐
│  cclm CLI   │  API format   │    Proxy     │  API format  │ LM Studio │
│ (Official)  │ ────────────> │ (Translator) │ ───────────> │  Server   │
└─────────────┘               └──────────────┘              └───────────┘
CCLM consists of three parts:
- CCLM Wrapper (the cclm command) - Manages setup and launches the other components
- Proxy Server - Translates between the Anthropic API and the LM Studio (OpenAI) API
- Claude Code CLI - Official Anthropic CLI (installed as a dependency)
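The proxy's core translation step can be sketched as follows. This is an illustrative sketch, not CCLM's actual source: the function name is hypothetical, and it assumes plain-string message content (real requests may carry content blocks, tool calls, and streaming options).

```javascript
// Sketch: translate an Anthropic-style Messages request body into the
// OpenAI-style chat-completions shape an LM Studio server expects.
function anthropicToOpenAI(anthropicReq, lmStudioModel) {
  const messages = [];
  // Anthropic carries the system prompt in a top-level field;
  // OpenAI-compatible servers expect it as the first chat message.
  if (anthropicReq.system) {
    messages.push({ role: 'system', content: anthropicReq.system });
  }
  for (const m of anthropicReq.messages) {
    messages.push({ role: m.role, content: m.content });
  }
  return {
    model: lmStudioModel,
    messages,
    max_tokens: anthropicReq.max_tokens,
    stream: anthropicReq.stream ?? false,
  };
}
```

The reverse direction (an OpenAI response mapped back into an Anthropic-style response) follows the same idea with the fields translated the other way.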
Features in Detail
Full Claude Code Compatibility
All official Claude Code features work:
- ✅ File operations (Read, Write, Edit)
- ✅ Search (Glob, Grep)
- ✅ Terminal commands (Bash)
- ✅ Task management
- ✅ Agents and plugins
- ✅ Streaming responses
- ✅ Function/tool calling
- ✅ MCP servers
- ✅ Slash commands
Auto Proxy Management
The proxy server starts automatically when needed:
# First run - proxy starts automatically
$ cclm
[1/3] Checking LM Studio... ✓
[2/3] Checking proxy server... starting...
✓ Proxy server started
[3/3] Starting Claude Code... ✓
On subsequent runs, if the proxy is still running:
$ cclm
[1/3] Checking LM Studio... ✓
[2/3] Checking proxy server... ✓
[3/3] Starting Claude Code... ✓
Configuration Persistence
Your configuration is saved to config.json in the package directory and persists across updates.
Examples
Use Specific Model
# Configure once
cclm config set lmStudioModel llama-3.1-70b
# Then just run
cclm
Debug Mode
# Enable debug logging
cclm config set debug true
# Run with debug output
cclm
Custom Port
# If LM Studio uses a different port
cclm config set lmStudioUrl http://localhost:8080/v1
# Or if you want the proxy on a different port
cclm config set proxyPort 8000
Working Directory
CCLM works from any directory:
# Navigate to your project
cd ~/projects/my-app
# Run cclm
cclm
# Claude Code operates in the current directory
Troubleshooting
LM Studio Not Detected
Error: LM Studio is not running or not accessible
Solution:
- Verify LM Studio is running
- Check that local server is enabled
- Test manually: curl http://localhost:1234/v1/models
- If using a different port: cclm config set lmStudioUrl http://localhost:YOUR_PORT/v1
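The manual curl check above can also be done from Node.js (18+, which ships a global fetch). A hedged helper, not part of CCLM, that reports whether an OpenAI-compatible /models endpoint answers:

```javascript
// Probe an OpenAI-compatible server the same way the curl check does.
// Resolves true when GET <baseUrl>/models answers with a 2xx status,
// false when the server is down, unreachable, or errors out.
async function lmStudioReachable(baseUrl = 'http://localhost:1234/v1') {
  try {
    const res = await fetch(`${baseUrl}/models`);
    return res.ok;
  } catch {
    return false; // connection refused, DNS failure, etc.
  }
}
```

Example: `lmStudioReachable().then(ok => console.log(ok ? 'LM Studio up' : 'LM Studio down'));`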
Proxy Won't Start
Error: Failed to start proxy server
Solution:
- Check if port 3000 is available: netstat -ano | findstr :3000 (Windows) or lsof -ti:3000 (Mac/Linux)
- Use a different port: cclm config set proxyPort 3001
- Enable debug: cclm config set debug true and check the logs
Command Not Found
Error: cclm: command not found
Solution:
# Verify installation
npm list -g cclm
# Reinstall globally
npm install -g cclm
# Or link locally
cd cclm
npm link
Model Not Responding
Solution:
- Verify the model name matches LM Studio exactly: cclm config set lmStudioModel your-exact-model-name
- Check LM Studio logs for errors
- Try a smaller model or a longer timeout
- Enable debug mode to see requests:
cclm config set debug true
Advanced Usage
Use with Different LLM Providers
CCLM works with any OpenAI-compatible API:
# Ollama
cclm config set lmStudioUrl http://localhost:11434/v1
# LocalAI
cclm config set lmStudioUrl http://localhost:8080/v1
# Text Generation WebUI
cclm config set lmStudioUrl http://localhost:5000/v1
Programmatic Usage
import { spawn } from 'child_process';

// Launch cclm as a child process, inheriting this terminal's stdio
const cclm = spawn('cclm', ['--model', 'my-model'], {
  env: process.env,
  stdio: 'inherit'
});
Multiple Configurations
# Save current config
cp ~/.cclm/config.json ~/.cclm/config-granite.json
# Switch configs
cp ~/.cclm/config-llama.json ~/.cclm/config.json
cclm
Comparison with Alternatives
| Feature | CCLM | Custom CLI | API Direct |
|---------|------|------------|------------|
| Official CLI | ✅ | ❌ | ❌ |
| Full Features | ✅ | ⚠️ Limited | ❌ |
| Auto Updates | ✅ | ❌ | ❌ |
| Easy Setup | ✅ | ⚠️ Medium | ❌ |
| LM Studio | ✅ | ✅ | ✅ |
| Global Command | ✅ | ⚠️ Manual | N/A |
Performance
Typical performance with Granite 4.0 H 1B (1M context):
- Startup: ~3-5 seconds
- First response: ~2-5 seconds
- Streaming: Real-time, word-by-word
- Tool execution: <100ms proxy overhead
- Memory: ~50MB (proxy) + ~2GB (LM Studio model)
Security & Privacy
- ✅ Runs 100% locally - no data leaves your machine
- ✅ No telemetry or tracking
- ✅ Proxy only accessible on localhost
- ✅ Dummy API key (not used externally)
Development
Build from Source
git clone <repository>
cd cclm-package
npm install
npm link
Run Tests
npm test
Debug
# Enable debug mode
cclm config set debug true
# Or set environment variable
DEBUG=true cclm
Changelog
1.0.0 (Initial Release)
- Official Claude Code CLI integration
- LM Studio proxy server
- Auto proxy management
- Configuration system
- Global command installation
License
MIT
Credits
- Claude Code - Official CLI by Anthropic
- LM Studio - Local LLM hosting platform
- CCLM - Integration layer and proxy server
Support
For issues, questions, or contributions:
- Check this README
- Enable debug mode: cclm config set debug true
- Review the proxy logs
- Check LM Studio server status
Enjoy using Claude Code with LM Studio! 🚀
