@fsfalmansour/neohub-cli v1.1.0

NeoHub CLI

AI-powered code assistant in your terminal using local Ollama models


Privacy-first AI coding assistant that runs 100% locally. No cloud, no API keys, no data sent anywhere.

✨ Features

  • 🔒 100% Private - All processing happens locally on your machine
  • ⚡ Lightning Fast - No API latency, instant responses
  • 🧠 Smart Model Selection - AI-powered Model Supervisor recommends the best model for each task
  • 🚀 Powerful Models - DeepSeek Coder 33B, CodeLlama 34B, and more
  • 💬 Interactive Chat - Conversational AI assistance
  • ✏️ Code Editing - AI-powered file modifications
  • 🔍 Code Analysis - Review, explain, security, performance analysis

🚀 Quick Start

Prerequisites

  • Node.js 18+ and the latest Ollama (see Requirements below)
  • Ollama running locally (ollama serve) with at least one model pulled

Installation

# Install globally
npm install -g @fsfalmansour/neohub-cli

# Verify installation
neohub --version

First Run

# Initialize configuration
neohub init

# Start chatting with AI
neohub chat

📋 Commands

neohub chat

Start an interactive chat session with AI

neohub chat

Example:

You: Explain async/await in JavaScript
AI: Async/await is syntactic sugar for promises...
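Since everything runs against a local Ollama server, a chat turn presumably reduces to one HTTP request to Ollama's /api/generate endpoint, with no cloud hop. A minimal sketch (the function name is illustrative, not NeoHub's actual internals):

```javascript
// Build a request for Ollama's local /api/generate endpoint.
// (Illustrative sketch - NeoHub's real internals may differ.)
function buildOllamaRequest(prompt, model = "deepseek-coder:33b",
                            baseUrl = "http://localhost:11434") {
  return {
    url: `${baseUrl}/api/generate`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      // stream: false asks Ollama for a single JSON reply
      body: JSON.stringify({ model, prompt, stream: false }),
    },
  };
}

// Usage:
//   const { url, options } = buildOllamaRequest("Explain async/await");
//   const reply = await fetch(url, options).then(r => r.json());
```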

neohub edit

Edit files with AI assistance

neohub edit -f app.js -i "add error handling"

Example:

# Add error handling to a function
neohub edit -f server.js -i "add try-catch to all async functions"

# Refactor code
neohub edit -f utils.js -i "convert to TypeScript"

# Create backup first
neohub edit -f config.js -i "add validation" --backup

neohub analyze

Analyze code for issues, explanations, or improvements

neohub analyze <path> [--type review|explain|security|performance]

Examples:

# Code review
neohub analyze src/app.js --type review

# Security analysis
neohub analyze . --type security

# Performance analysis
neohub analyze lib/ --type performance

# Explain code
neohub analyze components/Header.tsx --type explain

neohub models

List available Ollama models

neohub models

Output:

📦 Available Models

● deepseek-coder:33b (17.53 GB)
● codellama:34b (17.74 GB)
● qwen2.5-coder:1.5b (0.92 GB)

neohub recommend

Get intelligent model recommendations

neohub recommend

The Model Supervisor analyzes:

  • Task type (code generation, review, debugging, etc.)
  • Task complexity
  • Available models
  • Performance history

Recommends the best model for your specific task!

neohub config

Show current configuration

neohub config

neohub completion

Generate shell completion script for tab completion

# Auto-detect your shell
neohub completion

# Specify shell type
neohub completion --shell bash
neohub completion --shell zsh
neohub completion --shell fish

Enable autocomplete:

# Bash - add to ~/.bashrc or ~/.bash_profile
eval "$(neohub completion --shell bash)"

# Zsh - add to ~/.zshrc
eval "$(neohub completion --shell zsh)"

# Fish - add to ~/.config/fish/config.fish
neohub completion --shell fish | source

After enabling, you can:

  • Press TAB to complete commands: neohub ch<TAB> → neohub chat
  • Press TAB to complete options: neohub analyze --type <TAB> → shows review explain security performance
  • Press TAB to complete file paths: neohub edit -f <TAB> → shows available files
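The core of any completion script is prefix-filtering a candidate list; the generated script wires that into the shell's completion machinery. A sketch (command and type lists are copied from this README):

```javascript
// Candidate lists taken from the commands documented above.
const COMMANDS = ["chat", "edit", "analyze", "models", "recommend",
                  "config", "completion", "analytics", "search"];
const ANALYZE_TYPES = ["review", "explain", "security", "performance"];

// Return the candidates that start with the word being typed.
function complete(prefix, candidates = COMMANDS) {
  return candidates.filter(c => c.startsWith(prefix));
}

// e.g. complete("ch") narrows to the chat command;
//      complete("", ANALYZE_TYPES) lists all --type values.
```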

neohub analytics

View usage statistics and analytics

# View analytics dashboard
neohub analytics

# Export analytics data
neohub analytics --export

# Clear analytics data
neohub analytics --clear

# Disable/enable tracking
neohub analytics --disable
neohub analytics --enable

Shows:

  • Total commands executed
  • Success rate
  • Average response time
  • Most used commands
  • Model performance metrics

Privacy: all analytics are stored locally and never sent to the cloud
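Because the data stays local, the dashboard's numbers can be derived from a simple on-disk event log. A sketch of that aggregation (the event field names are assumptions, not NeoHub's actual schema):

```javascript
// Summarize a local analytics log: total commands, success rate,
// average duration, most-used command. (Illustrative field names.)
function summarize(events) {
  const total = events.length;
  const ok = events.filter(e => e.success).length;
  const avgMs = total
    ? events.reduce((sum, e) => sum + e.durationMs, 0) / total
    : 0;
  const counts = {};
  for (const e of events) counts[e.command] = (counts[e.command] || 0) + 1;
  const mostUsed = Object.entries(counts)
    .sort((a, b) => b[1] - a[1])
    .map(([cmd]) => cmd)[0] || null;
  return { total, successRate: total ? ok / total : 0, avgMs, mostUsed };
}
```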

neohub search

Search for code patterns across your project

# Basic search
neohub search "function"

# Case-sensitive search
neohub search "MyClass" --case-sensitive

# Regex search
neohub search "class\s+\w+" --regex

# Search with context
neohub search "TODO" --context-lines 5

# Limit results
neohub search "import" --max-results 20

Options:

  • -i, --case-sensitive - Case sensitive search
  • -w, --whole-word - Match whole words only
  • -r, --regex - Use regex pattern
  • -p, --path <path> - Directory to search in
  • -m, --max-results <number> - Maximum results (default: 100)
  • -c, --context-lines <number> - Context lines (default: 2)
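The options above map naturally onto a line-scanning core: build a regex (escaped unless --regex is given), test each line, and collect hits with surrounding context. A sketch of that loop (not NeoHub's published implementation):

```javascript
// Scan text line by line, returning matches with context lines,
// mirroring the --regex / --case-sensitive / --context-lines options.
function searchLines(text, pattern,
                     { regex = false, caseSensitive = false,
                       contextLines = 2 } = {}) {
  const flags = caseSensitive ? "" : "i";
  const re = regex
    ? new RegExp(pattern, flags)
    // Literal mode: escape regex metacharacters first.
    : new RegExp(pattern.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"), flags);
  const lines = text.split("\n");
  const hits = [];
  lines.forEach((line, i) => {
    if (re.test(line)) {
      hits.push({
        lineNumber: i + 1,
        line,
        context: lines.slice(Math.max(0, i - contextLines),
                             i + contextLines + 1),
      });
    }
  });
  return hits;
}
```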

Configuration (the stored config that neohub config prints):

{
  "ollama": {
    "baseUrl": "http://localhost:11434",
    "model": "deepseek-coder:33b",
    "timeout": 60000
  },
  "preferences": {
    "autoContext": true,
    "maxContextFiles": 10
  }
}
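A configstore-style setup typically merges the user's JSON (like the example above) over built-in defaults, so a partial config file still works. A sketch, with defaults copied from the example rather than from NeoHub's source:

```javascript
// Defaults mirror the example config above; they are illustrative,
// not NeoHub's documented built-ins.
const DEFAULTS = {
  ollama: { baseUrl: "http://localhost:11434",
            model: "deepseek-coder:33b", timeout: 60000 },
  preferences: { autoContext: true, maxContextFiles: 10 },
};

// Shallow-merge each section of the user's config over the defaults.
function loadConfig(userConfig = {}) {
  return {
    ollama: { ...DEFAULTS.ollama, ...(userConfig.ollama || {}) },
    preferences: { ...DEFAULTS.preferences,
                   ...(userConfig.preferences || {}) },
  };
}
```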

🎯 Model Supervisor

NeoHub includes an intelligent Model Supervisor that automatically recommends the best model for each task:

Task-based Recommendations:

  • 📝 Code Generation → DeepSeek Coder 33B (better at generating new code)
  • 🔍 Code Review → CodeLlama 34B (trained on review patterns)
  • ♻️ Refactoring → DeepSeek Coder 33B (understands structure)
  • 🐛 Debugging → CodeLlama 34B (better at finding issues)
  • 📖 Code Explanation → CodeLlama 34B (natural language strength)
  • 🏗️ Architecture → DeepSeek Coder 33B (system design)

🔧 Configuration

Config file location: ~/.config/configstore/neohub.json

Change Ollama URL

# Edit config file or use init
neohub init

Change Default Model

Edit config file:

{
  "ollama": {
    "model": "codellama:34b"
  }
}

📦 Supported Models

NeoHub works with any Ollama model:

Recommended for Coding:

  • deepseek-coder:33b - Best for code generation
  • codellama:34b - Best for code review/explanation
  • qwen2.5-coder:1.5b - Lightweight, fast

Install Models:

ollama pull deepseek-coder:33b
ollama pull codellama:34b

🌟 Use Cases

1. Interactive Coding Assistant

neohub chat
> How do I implement JWT authentication in Express?

2. Bulk Code Analysis

neohub analyze src/ --type security

3. Automated Refactoring

neohub edit -f *.js -i "convert var to const/let"

4. Learning & Exploration

neohub analyze node_modules/react/index.js --type explain

🚀 Why NeoHub?

vs Cloud AI Tools (ChatGPT, Claude, Copilot)

  • Free - No subscription required
  • Private - Code never leaves your machine
  • Offline - Works without internet
  • Unlimited - No rate limits or tokens

vs Other Local AI Tools

  • Model Supervisor - Intelligent model selection
  • Purpose-built - Designed specifically for coding
  • CLI-first - Fast workflow integration
  • Zero config - Works out of the box

🛠️ Requirements

  • Node.js: 18+
  • Ollama: Latest version
  • Disk Space: 2-20GB (depends on models)
  • RAM: 8GB minimum (16GB+ recommended for 33B models)

📊 Performance

Typical Response Times:

  • Code completion: <1s
  • Code review: 2-5s
  • Complex refactoring: 5-10s

Times vary based on model size and hardware

🔗 Links

📄 License

MIT © 2025 Fahad Almansour

🙏 Credits

Built with:


Made with ❤️ for developers who value privacy