Ollama Code

![Ollama Code Screenshot](./docs/assets/ollama-screenshot.png)

Ollama Code is a privacy-focused command-line AI workflow tool, forked from Qwen Code and designed to work with locally hosted Ollama models. It gives you the power of AI-assisted development while keeping your code and data completely under your control.

🔒 Privacy & Data Sovereignty First

Your code never leaves your environment. Unlike cloud-based AI tools, Ollama Code processes everything locally through your own Ollama server, ensuring:

  • Complete Privacy: No data transmission to external services
  • Data Sovereignty: Full control over your models and processing
  • Offline Capability: Work without internet dependency once models are downloaded (see the sketch after this list)
  • Enterprise Ready: Perfect for sensitive codebases and air-gapped environments
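
For air-gapped or offline use, the only step that needs network access is downloading the model weights; after that, inference runs entirely against local files. A minimal sketch of that workflow, assuming a model tag from the recommendations below:

# While online: download the model weights once
ollama pull qwen2.5-coder:14b
# Afterwards, no internet connection is required;
# the server and CLI work entirely from local weights
ollama-code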

⚠️ Quality Considerations

Important: This tool uses local Ollama models, whose capabilities may differ from those of cloud-based models:

  • Smaller models (7B-14B parameters) may provide less accurate results than larger cloud models
  • Response quality varies significantly based on your chosen model and hardware
  • Complex reasoning tasks may require larger models (70B+) for optimal results
  • Consider your use case: Test with your specific workflows to ensure model suitability (a quick smoke test is sketched below)
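
One low-effort way to test suitability is to run a representative prompt through a candidate model directly with the Ollama CLI before committing to it. Swap in your own prompt and model tags:

ollama run qwen2.5-coder:14b "Refactor this loop into a map/filter chain: for (const x of xs) { if (x > 0) ys.push(x * 2) }"
# Run the same prompt against a smaller model to compare quality vs. speed
ollama run llama3.1:8b "Refactor this loop into a map/filter chain: for (const x of xs) { if (x > 0) ys.push(x * 2) }"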

Key Features

  • Code Understanding & Editing - Query and edit large codebases beyond traditional context window limits
  • Workflow Automation - Automate operational tasks like handling pull requests and complex rebases
  • Local Model Support - Works with any Ollama-compatible model (Qwen, Llama, CodeLlama, etc.)
  • Privacy-First Architecture - All processing happens on your infrastructure

Quick Start

Prerequisites

  1. Node.js: Ensure you have Node.js version 20 or higher installed
  2. Ollama Server: Install and run Ollama with your preferred models (both can be verified with the commands below)
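
Both prerequisites can be checked from the shell, assuming node and ollama are already on your PATH:

node --version    # should report v20.0.0 or higher
ollama --version  # confirms the Ollama CLI is available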

Installation

npm install -g @tcsenpai/ollama-code
ollama-code --version

Then run from anywhere:

ollama-code

Or install from source:

git clone https://github.com/tcsenpai/ollama-code.git
cd ollama-code
npm install
npm install -g .

Ollama Server Setup

  1. Install Ollama (if not already installed):

    curl -fsSL https://ollama.com/install.sh | sh
  2. Download a coding model:

    ollama pull qwen2.5-coder:14b  # Recommended for code tasks
    # or
    ollama pull codellama:13b      # Alternative coding model
    # or
    ollama pull llama3.1:8b        # Smaller, faster option
  3. Start Ollama server (a reachability check follows the steps):

    ollama serve
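
With the server running, you can confirm it is reachable before launching the tool. Ollama listens on port 11434 by default:

# The root endpoint responds with "Ollama is running"
curl http://localhost:11434
# List the models the server can serve
curl http://localhost:11434/api/tags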

Configuration

Configure your Ollama connection (the tool auto-detects local Ollama by default):

# Optional: Custom Ollama server
export OLLAMA_BASE_URL="http://localhost:11434/v1"
export OLLAMA_MODEL="qwen2.5-coder:14b"

# Or create ~/.config/ollama-code/config.json:
{
  "baseUrl": "http://localhost:11434/v1",
  "model": "qwen2.5-coder:14b"
}
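
To persist these settings, the config file can be created in one step; a sketch using the same path and keys as the example above:

mkdir -p ~/.config/ollama-code
cat > ~/.config/ollama-code/config.json << 'EOF'
{
  "baseUrl": "http://localhost:11434/v1",
  "model": "qwen2.5-coder:14b"
}
EOF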

Usage Examples

Explore Codebases

cd your-project/
ollama-code
> Describe the main pieces of this system's architecture

Code Development

> Refactor this function to improve readability and performance

Automate Workflows

> Analyze git commits from the last 7 days, grouped by feature and team member
> Convert all images in this directory to PNG format

Popular Tasks

Understand New Codebases

> What are the core business logic components?
> What security mechanisms are in place?
> How does the data flow work?

Code Refactoring & Optimization

> What parts of this module can be optimized?
> Help me refactor this class to follow better design patterns
> Add proper error handling and logging

Documentation & Testing

> Generate comprehensive JSDoc comments for this function
> Write unit tests for this component
> Create API documentation

Recommended Models

For optimal results with coding tasks:

| Model | Size | Best For | Quality | Speed |
|-------|------|----------|---------|-------|
| qwen2.5-coder:14b | 14B | Code generation, refactoring | ⭐⭐⭐⭐ | ⭐⭐⭐ |
| codellama:13b | 13B | Code completion, debugging | ⭐⭐⭐ | ⭐⭐⭐ |
| llama3.1:8b | 8B | General coding, faster responses | ⭐⭐ | ⭐⭐⭐⭐ |
| qwen2.5-coder:32b | 32B | Complex reasoning, best quality | ⭐⭐⭐⭐⭐ | ⭐⭐ |

Project Structure

ollama-code/
├── packages/    # Core packages
├── docs/        # Documentation
├── examples/    # Example code
└── tests/       # Test files

Development & Contributing

See CONTRIBUTING.md to learn how to contribute to the project.

Privacy & Security

  • Local Processing: All AI computations happen on your Ollama server
  • No Telemetry: No usage data is transmitted externally
  • Code Isolation: Your source code never leaves your environment
  • Audit Trail: Full visibility into all AI interactions

Troubleshooting

If you encounter issues, check the troubleshooting guide.

Common issues, with quick diagnostics after the list:

  • Connection refused: Ensure Ollama is running (ollama serve)
  • Model not found: Pull the model first (ollama pull model-name)
  • Slow responses: Consider using smaller models or upgrading hardware
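
Each of these can be checked from the shell, assuming curl and the ollama CLI are installed:

# Connection refused? Check whether the server is up
curl http://localhost:11434        # expects "Ollama is running"
# Model not found? List what is pulled locally, then pull the missing one
ollama list
ollama pull qwen2.5-coder:14b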

Acknowledgments

This project is forked from Qwen Code, which was originally based on Google Gemini CLI. We acknowledge and appreciate the excellent work of both teams. Our contribution focuses on privacy-first local model integration through Ollama.

License

This project is licensed under the terms described in the LICENSE file.