positron-code v1.0.2
Positron Code

Positron Code is a community fork of Google's Gemini CLI that extends it with support for local/remote Ollama models. It maintains full compatibility with the original Gemini CLI while adding the ability to run powerful language models locally on your own hardware or remote servers.
⚠️ Note: The sandbox Docker image URI is currently being configured. This feature will be available in a future release.
🏠 Ollama Support - Local & Remote AI Models
Positron Code is a fork of Google's Gemini CLI that adds support for Ollama models!
🔗 What is this fork?
This fork extends the original Gemini CLI to work with Ollama, letting you run powerful language models on your own hardware or on remote servers while retaining every original Gemini CLI feature. Perfect for:
- 🔒 Privacy-focused development - Keep your code and conversations 100% local
- 🌐 Offline workflows - Work without internet connectivity
- 🏢 Enterprise environments - Use behind corporate firewalls and intranet
- 💰 Cost-effective - No API costs, unlimited usage with your own models
- 🎛️ Model flexibility - Use any Ollama-supported model (Llama, Qwen, Mistral, CodeLlama, etc.)
🚀 Quick Ollama Setup
Install and start Ollama on your local machine or server:
```bash
# Install Ollama (see https://ollama.ai for other platforms)
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model (example with CodeLlama)
ollama pull codellama:13b

# Start Ollama server
ollama serve
```

Configure Positron Code to use Ollama:
Option A: Using settings file (recommended)
Create `~/.positron/settings.json`:

```json
{
  "selectedAuthType": "use_ollama",
  "ollamaHost": "http://localhost:11434",
  "ollamaModel": "codellama:13b",
  "ollamaEmbeddingModel": "nomic-embed-text"
}
```

Then run the CLI:

```bash
positron
```

Option B: Using environment variables
```bash
# Set environment variables
export AUTH_METHOD=ollama
export OLLAMA_HOST=http://localhost:11434
export OLLAMA_MODEL=codellama:13b
export OLLAMA_EMBEDDING_MODEL=nomic-embed-text

# Run the CLI
positron
```

For remote Ollama servers (e.g. an intranet deployment):
Settings file:
```json
{
  "selectedAuthType": "use_ollama",
  "ollamaHost": "http://server.joeloliver.com:11434",
  "ollamaModel": "positron3:8b",
  "ollamaToken": "your-auth-token"
}
```

The `ollamaToken` field is optional; include it only if your server requires authentication.

Or environment variables:
```bash
export AUTH_METHOD=ollama
export OLLAMA_HOST=http://server.joeloliver.com:11434
export OLLAMA_MODEL=positron3:8b
export OLLAMA_TOKEN=your-auth-token  # Optional: if your server requires authentication

positron
```
🎯 Ollama Benefits
- 🏠 Self-hosted - Complete control over your AI infrastructure
- 🔐 Privacy first - No data leaves your network
- ⚡ Fast inference - Direct access to your local GPU/CPU
- 🔧 Customizable - Use any model that fits your needs
- 📦 Easy deployment - Simple Docker setup for teams
🔄 Compatibility
This fork maintains 100% compatibility with the original Gemini CLI features while adding Ollama support:
- ✅ All original commands and features work unchanged
- ✅ Can switch between Gemini API and Ollama seamlessly
- ✅ Same authentication options (OAuth, API keys) for Gemini
- ✅ MCP servers, tools, and extensions work identically
- ✅ Drop-in replacement - same installation and usage
🚀 Why Use This Fork?
- 🎯 Free tier: 60 requests/min and 1,000 requests/day with personal Google account
- 🧠 Powerful Gemini 2.5 Pro: Access to 1M token context window
- 🔧 Built-in tools: Google Search grounding, file operations, shell commands, web fetching
- 🔌 Extensible: MCP (Model Context Protocol) support for custom integrations
- 💻 Terminal-first: Designed for developers who live in the command line
- 🛡️ Open source: Apache 2.0 licensed
📦 Installation
Quick Install
Run instantly with npx
```bash
# Using npx (no installation required)
npx positron-code
```

Install globally with npm

```bash
npm install -g positron-code
```

Install globally with Homebrew (macOS/Linux)

```bash
# Homebrew formula coming soon
# For now, use npm:
npm install -g positron-code
```

System Requirements
- Node.js version 20 or higher
- macOS, Linux, or Windows
📋 Key Features
Code Understanding & Generation
- Query and edit large codebases
- Generate new apps from PDFs, images, or sketches using multimodal capabilities
- Debug issues and troubleshoot with natural language
Automation & Integration
- Automate operational tasks like querying pull requests or handling complex rebases
- Use MCP servers to connect new capabilities, including media generation with Imagen, Veo or Lyria
- Run non-interactively in scripts for workflow automation
Advanced Capabilities
- Ground your queries with built-in Google Search for real-time information
- Conversation checkpointing to save and resume complex sessions
- Custom context files (GEMINI.md) to tailor behavior for your projects
GitHub Integration
Integrate Gemini CLI directly into your GitHub workflows with Gemini CLI GitHub Action:
- Pull Request Reviews: Automated code review with contextual feedback and suggestions
- Issue Triage: Automated labeling and prioritization of GitHub issues based on content analysis
- On-demand Assistance: Mention @gemini-cli in issues and pull requests for help with debugging, explanations, or task delegation
- Custom Workflows: Build automated, scheduled and on-demand workflows tailored to your team's needs
🔐 Authentication Options
Choose the authentication method that best fits your needs:
Option 0: Ollama (Local Models) 🆕
✨ Best for: Privacy-focused development, offline work, enterprise environments, cost-effective unlimited usage
Benefits:
- 🔒 Complete privacy - No data leaves your network
- 💰 Zero API costs - Unlimited usage with your own hardware
- 🌐 Offline capable - Works without internet connectivity
- 🎛️ Model choice - Use any Ollama-supported model
- ⚡ Local performance - Direct GPU/CPU access
```bash
# Setup Ollama server (one-time)
ollama pull codellama:13b # or any other model
ollama serve

# Configure environment
export AUTH_METHOD=ollama
export OLLAMA_HOST=http://localhost:11434
export OLLAMA_MODEL=codellama:13b

# Start using local models
positron
```

Option 1: OAuth login (Using your Google Account)
✨ Best for: Individual developers as well as anyone who has a Gemini Code Assist License. (see quota limits and terms of service for details)
Benefits:
- Free tier: 60 requests/min and 1,000 requests/day
- Gemini 2.5 Pro with 1M token context window
- No API key management - just sign in with your Google account
- Automatic updates to latest models
Start Gemini CLI, then choose OAuth and follow the browser authentication flow when prompted:

```bash
gemini
```

If you are using a paid Code Assist License from your organization, remember to set the Google Cloud Project:

```bash
# Set your Google Cloud Project
export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_NAME"
gemini
```

Option 2: Gemini API Key
✨ Best for: Developers who need specific model control or paid tier access
Benefits:
- Free tier: 100 requests/day with Gemini 2.5 Pro
- Model selection: Choose specific Gemini models
- Usage-based billing: Upgrade for higher limits when needed
```bash
# Get your key from https://aistudio.google.com/apikey
export GEMINI_API_KEY="YOUR_API_KEY"
gemini
```

Option 3: Vertex AI
✨ Best for: Enterprise teams and production workloads
Benefits:
- Enterprise features: Advanced security and compliance
- Scalable: Higher rate limits with billing account
- Integration: Works with existing Google Cloud infrastructure
```bash
# Get your key from Google Cloud Console
export GOOGLE_API_KEY="YOUR_API_KEY"
export GOOGLE_GENAI_USE_VERTEXAI=true
gemini
```

For Google Workspace accounts and other authentication methods, see the authentication guide.
🚀 Getting Started
Basic Usage
Start in current directory

```bash
gemini
```

Include multiple directories

```bash
gemini --include-directories ../lib,../docs
```

Use specific model

```bash
gemini -m gemini-2.5-flash
```

Non-interactive mode for scripts

```bash
gemini -p "Explain the architecture of this codebase"
```

Quick Examples
Start a new project

```bash
cd new-project/
gemini
> Write me a Discord bot that answers questions using a FAQ.md file I will provide
```

Analyze existing code

```bash
git clone https://github.com/google-gemini/gemini-cli
cd gemini-cli
gemini
> Give me a summary of all of the changes that went in yesterday
```

📚 Documentation
Getting Started
- Quickstart Guide - Get up and running quickly
- Authentication Setup - Detailed auth configuration
- Configuration Guide - Settings and customization
- Keyboard Shortcuts - Productivity tips
Core Features
- Commands Reference - All slash commands (/help, /chat, /mcp, etc.)
- Checkpointing - Save and resume conversations
- Memory Management - Using GEMINI.md context files
- Token Caching - Optimize token usage
Tools & Extensions
- Built-in Tools Overview
- MCP Server Integration - Extend with custom tools
- Custom Extensions - Build your own commands
Advanced Topics
- Architecture Overview - How Gemini CLI works
- IDE Integration - VS Code companion
- Sandboxing & Security - Safe execution environments
- Enterprise Deployment - Docker, system-wide config
- Telemetry & Monitoring - Usage tracking
- Tools API Development - Create custom tools
Configuration & Customization
- Settings Reference - All configuration options
- Theme Customization - Visual customization
- .gemini Directory - Project-specific settings
- Environment Variables
Troubleshooting & Support
- Troubleshooting Guide - Common issues and solutions
- FAQ - Quick answers
- Use the /bug command to report issues directly from the CLI
Using MCP Servers
Configure MCP servers in ~/.positron/settings.json to extend Positron Code with custom tools:
> @github List my open pull requests
> @slack Send a summary of today's commits to #dev channel
> @database Run a query to find inactive users

See the MCP Server Integration guide for setup instructions.
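A server entry in `~/.positron/settings.json` might look like the sketch below. The `github` server name, the npm package, and the token variable are illustrative assumptions; substitute the actual MCP servers you use (see the MCP Server Integration guide for real options):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "your-token" }
    }
  }
}
```

Each entry maps a server name to the command that launches it; once configured, its tools become addressable with the `@name` syntax shown above.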
🤝 Contributing
We welcome contributions! Gemini CLI is fully open source (Apache 2.0), and we encourage the community to:
- Report bugs and suggest features
- Improve documentation
- Submit code improvements
- Share your MCP servers and extensions
See our Contributing Guide for development setup, coding standards, and how to submit pull requests.
Check our Official Roadmap for planned features and priorities.
📖 Resources
- Official Roadmap - See what's coming next
- NPM Package - Package registry
- GitHub Issues - Report bugs or request features
- Security Advisories - Security updates
Uninstall
See the Uninstall Guide for removal instructions.
📄 Legal
- License: Apache License 2.0
- Terms of Service: Terms & Privacy
- Security: Security Policy
