
positron-code

v1.0.2

Published

Positron Code

Positron Code Screenshot

Positron Code is a community fork of Google's Gemini CLI that extends it with support for local/remote Ollama models. It maintains full compatibility with the original Gemini CLI while adding the ability to run powerful language models locally on your own hardware or remote servers.

⚠️ Note: The sandbox Docker image URI is currently being configured. This feature will be available in a future release.

🏠 Ollama Support - Local & Remote AI Models

🔗 What is this fork?

This fork extends the original Gemini CLI to work with Ollama, letting you run language models on your own hardware or on servers inside your network. It retains all of the original Gemini CLI features while adding Ollama support. Perfect for:

  • 🔒 Privacy-focused development - Keep your code and conversations 100% local
  • 🌐 Offline workflows - Work without internet connectivity
  • 🏢 Enterprise environments - Use behind corporate firewalls and on intranets
  • 💰 Cost-effective - No API costs, unlimited usage with your own models
  • 🎛️ Model flexibility - Use any Ollama-supported model (Llama, Qwen, Mistral, CodeLlama, etc.)

🚀 Quick Ollama Setup

  1. Install and start Ollama on your local machine or server:

    # Install Ollama (see https://ollama.ai for other platforms)
    curl -fsSL https://ollama.ai/install.sh | sh
       
    # Pull a model (example with CodeLlama)
    ollama pull codellama:13b
       
    # Start Ollama server
    ollama serve
  2. Configure Positron Code to use Ollama:

    Option A: Using settings file (recommended)

    # Create ~/.positron/settings.json
    {
      "selectedAuthType": "use_ollama",
      "ollamaHost": "http://localhost:11434",
      "ollamaModel": "codellama:13b",
      "ollamaEmbeddingModel": "nomic-embed-text"
    }
       
    # Run the CLI
    positron

    Option B: Using environment variables

    # Set environment variables
    export AUTH_METHOD=ollama
    export OLLAMA_HOST=http://localhost:11434
    export OLLAMA_MODEL=codellama:13b
    export OLLAMA_EMBEDDING_MODEL=nomic-embed-text
       
    # Run the CLI
    positron
  3. For remote Ollama servers (like your intranet setup):

    Settings file:

    {
      "selectedAuthType": "use_ollama",
      "ollamaHost": "http://server.joeloliver.com:11434",
      "ollamaModel": "positron3:8b",
      "ollamaToken": "your-auth-token"  // Optional: if your server requires authentication
    }

    Or environment variables:

    export AUTH_METHOD=ollama
    export OLLAMA_HOST=http://server.joeloliver.com:11434
    export OLLAMA_MODEL=positron3:8b
    export OLLAMA_TOKEN=your-auth-token     # Optional: if your server requires authentication
       
    positron
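
Before pointing the CLI at a remote server, it can help to confirm the server is reachable. The sketch below uses Ollama's `/api/tags` model-listing endpoint; the host value is a placeholder for your own server:

```shell
# Check that an Ollama server answers before launching the CLI.
# /api/tags lists installed models; any 200 response means the server is up.
OLLAMA_HOST="${OLLAMA_HOST:-http://localhost:11434}"
if curl -fsS --max-time 3 "$OLLAMA_HOST/api/tags" >/dev/null 2>&1; then
  msg="ollama reachable at $OLLAMA_HOST"
else
  msg="ollama unreachable at $OLLAMA_HOST"
fi
echo "$msg"
```

If the check fails, verify the host, port, and any firewall rules before debugging the CLI itself.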

🎯 Ollama Benefits

  • 🏠 Self-hosted - Complete control over your AI infrastructure
  • 🔐 Privacy first - No data leaves your network
  • ⚡ Fast inference - Direct access to your local GPU/CPU
  • 🔧 Customizable - Use any model that fits your needs
  • 📦 Easy deployment - Simple Docker setup for teams

🔄 Compatibility

This fork maintains 100% compatibility with the original Gemini CLI features while adding Ollama support:

  • ✅ All original commands and features work unchanged
  • ✅ Can switch between Gemini API and Ollama seamlessly
  • ✅ Same authentication options (OAuth, API keys) for Gemini
  • ✅ MCP servers, tools, and extensions work identically
  • ✅ Drop-in replacement - same installation and usage
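
Switching backends comes down to environment variables. A minimal sketch, using the variable names from the Ollama setup above (the helper functions are illustrative, not part of the CLI):

```shell
# Select the Ollama backend by exporting the variables the CLI reads.
use_ollama() {
  export AUTH_METHOD=ollama
  export OLLAMA_HOST="${OLLAMA_HOST:-http://localhost:11434}"
  export OLLAMA_MODEL="${OLLAMA_MODEL:-codellama:13b}"
}

# Unsetting AUTH_METHOD falls back to the normal Gemini auth flow.
use_gemini() {
  unset AUTH_METHOD OLLAMA_HOST OLLAMA_MODEL
}

use_ollama
echo "backend: ${AUTH_METHOD:-gemini}"   # prints "backend: ollama"
use_gemini
echo "backend: ${AUTH_METHOD:-gemini}"   # prints "backend: gemini"
```

Dropping functions like these into your shell profile makes it easy to flip between local and cloud models per session.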

🚀 Why Use This Fork?

  • 🎯 Free tier: 60 requests/min and 1,000 requests/day with personal Google account
  • 🧠 Powerful Gemini 2.5 Pro: Access to 1M token context window
  • 🔧 Built-in tools: Google Search grounding, file operations, shell commands, web fetching
  • 🔌 Extensible: MCP (Model Context Protocol) support for custom integrations
  • 💻 Terminal-first: Designed for developers who live in the command line
  • 🛡️ Open source: Apache 2.0 licensed

📦 Installation

Quick Install

Run instantly with npx

# Using npx (no installation required)
npx positron-code

Install globally with npm

npm install -g positron-code

Install globally with Homebrew (macOS/Linux)

# Homebrew formula coming soon
# For now, use npm:
npm install -g positron-code

System Requirements

  • Node.js version 20 or higher
  • macOS, Linux, or Windows

📋 Key Features

Code Understanding & Generation

  • Query and edit large codebases
  • Generate new apps from PDFs, images, or sketches using multimodal capabilities
  • Debug issues and troubleshoot with natural language

Automation & Integration

  • Automate operational tasks like querying pull requests or handling complex rebases
  • Use MCP servers to connect new capabilities, including media generation with Imagen, Veo or Lyria
  • Run non-interactively in scripts for workflow automation

Advanced Capabilities

  • Ground your queries with built-in Google Search for real-time information
  • Conversation checkpointing to save and resume complex sessions
  • Custom context files (GEMINI.md) to tailor behavior for your projects
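
A context file is just free-form Markdown the CLI reads as project background. A hypothetical starting point (the headings and rules are illustrative, not a required schema):

```shell
# Create a minimal GEMINI.md context file in the project root.
cat > GEMINI.md <<'EOF'
# Project context

- TypeScript monorepo; packages live under packages/.
- Use vitest for new tests; avoid adding runtime dependencies.
- Never hand-edit generated files under dist/.
EOF
```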

GitHub Integration

Integrate Gemini CLI directly into your GitHub workflows with the Gemini CLI GitHub Action:

  • Pull Request Reviews: Automated code review with contextual feedback and suggestions
  • Issue Triage: Automated labeling and prioritization of GitHub issues based on content analysis
  • On-demand Assistance: Mention @gemini-cli in issues and pull requests for help with debugging, explanations, or task delegation
  • Custom Workflows: Build automated, scheduled and on-demand workflows tailored to your team's needs

🔐 Authentication Options

Choose the authentication method that best fits your needs:

Option 0: Ollama (Local Models) 🆕

✨ Best for: Privacy-focused development, offline work, enterprise environments, cost-effective unlimited usage

Benefits:

  • 🔒 Complete privacy - No data leaves your network
  • 💰 Zero API costs - Unlimited usage with your own hardware
  • 🌐 Offline capable - Works without internet connectivity
  • 🎛️ Model choice - Use any Ollama-supported model
  • ⚡ Local performance - Direct GPU/CPU access
# Setup Ollama server (one-time)
ollama pull codellama:13b  # or any other model
ollama serve

# Configure environment
export AUTH_METHOD=ollama
export OLLAMA_HOST=http://localhost:11434
export OLLAMA_MODEL=codellama:13b

# Start using local models
positron

Option 1: OAuth login (Using your Google Account)

✨ Best for: Individual developers as well as anyone who has a Gemini Code Assist License. (see quota limits and terms of service for details)

Benefits:

  • Free tier: 60 requests/min and 1,000 requests/day
  • Gemini 2.5 Pro with 1M token context window
  • No API key management - just sign in with your Google account
  • Automatic updates to latest models

Start Gemini CLI, then choose OAuth and follow the browser authentication flow when prompted:

gemini

If you are using a paid Code Assist License from your organization, remember to set your Google Cloud project:

# Set your Google Cloud Project
export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_NAME"
gemini

Option 2: Gemini API Key

✨ Best for: Developers who need specific model control or paid tier access

Benefits:

  • Free tier: 100 requests/day with Gemini 2.5 Pro
  • Model selection: Choose specific Gemini models
  • Usage-based billing: Upgrade for higher limits when needed
# Get your key from https://aistudio.google.com/apikey
export GEMINI_API_KEY="YOUR_API_KEY"
gemini

Option 3: Vertex AI

✨ Best for: Enterprise teams and production workloads

Benefits:

  • Enterprise features: Advanced security and compliance
  • Scalable: Higher rate limits with billing account
  • Integration: Works with existing Google Cloud infrastructure
# Get your key from Google Cloud Console
export GOOGLE_API_KEY="YOUR_API_KEY"
export GOOGLE_GENAI_USE_VERTEXAI=true
gemini

For Google Workspace accounts and other authentication methods, see the authentication guide.

🚀 Getting Started

Basic Usage

Start in current directory

gemini

Include multiple directories

gemini --include-directories ../lib,../docs

Use specific model

gemini -m gemini-2.5-flash

Non-interactive mode for scripts

gemini -p "Explain the architecture of this codebase"
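
The `-p` flag makes the CLI usable from scripts. A hedged sketch (the prompt text is illustrative, and the fallback keeps the script working when the CLI is not on PATH):

```shell
# Capture a one-shot, non-interactive response for use in automation.
summary="$(gemini -p "Summarize this repository in one sentence" 2>/dev/null)" \
  || summary="(gemini CLI not available)"
echo "$summary"
```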

Quick Examples

Start a new project

cd new-project/
gemini
> Write me a Discord bot that answers questions using a FAQ.md file I will provide

Analyze existing code

git clone https://github.com/google-gemini/gemini-cli
cd gemini-cli
gemini
> Give me a summary of all of the changes that went in yesterday

📚 Documentation

Getting Started

Core Features

Tools & Extensions

Advanced Topics

Configuration & Customization

Troubleshooting & Support

  • Troubleshooting Guide - Common issues and solutions
  • FAQ - Quick answers
  • Use /bug command to report issues directly from the CLI

Using MCP Servers

Configure MCP servers in ~/.positron/settings.json to extend Positron Code with custom tools:

> @github List my open pull requests
> @slack Send a summary of today's commits to #dev channel
> @database Run a query to find inactive users

See the MCP Server Integration guide for setup instructions.
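
As a sketch, an MCP server entry in the settings file might look like the following. The "github" server and its `npx` command refer to the community GitHub MCP server package and are examples, not shipped defaults; this writes to ./settings.json rather than ~/.positron/ to avoid touching a real config:

```shell
# Example mcpServers block for a settings.json file.
cat > settings.json <<'EOF'
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "your-token" }
    }
  }
}
EOF
```

Each named server becomes addressable in the prompt (e.g. `@github`), as in the examples above.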

🤝 Contributing

We welcome contributions! Gemini CLI is fully open source (Apache 2.0), and we encourage the community to:

  • Report bugs and suggest features
  • Improve documentation
  • Submit code improvements
  • Share your MCP servers and extensions

See our Contributing Guide for development setup, coding standards, and how to submit pull requests.

Check our Official Roadmap for planned features and priorities.

📖 Resources

Uninstall

See the Uninstall Guide for removal instructions.

📄 Legal