@fsfalmansour/neohub-cli · v1.1.0
# NeoHub CLI

AI-powered code assistant in your terminal, using local Ollama models.

A privacy-first AI coding assistant that runs 100% locally. No cloud, no API keys, no data sent anywhere.
## ✨ Features
- 🔒 100% Private - All processing happens locally on your machine
- ⚡ Lightning Fast - No API latency, instant responses
- 🧠 Smart Model Selection - AI-powered Model Supervisor recommends the best model for each task
- 🚀 Powerful Models - DeepSeek Coder 33B, CodeLlama 34B, and more
- 💬 Interactive Chat - Conversational AI assistance
- ✏️ Code Editing - AI-powered file modifications
- 🔍 Code Analysis - Review, explain, security, performance analysis
## 🚀 Quick Start

### Prerequisites
- Node.js 18+
- Ollama - Install from ollama.ai
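As a quick sanity check before installing, you can confirm the Ollama server is reachable. This assumes Ollama is running on its default port, which matches the `baseUrl` shown in the configuration later in this README:

```shell
# List the models the local Ollama server knows about;
# a JSON response confirms the server is up on its default port
curl -s http://localhost:11434/api/tags
```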
### Installation

```bash
# Install globally
npm install -g @fsfalmansour/neohub-cli

# Verify installation
neohub --version
```

### First Run

```bash
# Initialize configuration
neohub init

# Start chatting with AI
neohub chat
```

## 📋 Commands
### neohub chat

Start an interactive chat session with AI.

```bash
neohub chat
```

Example:

```
You: Explain async/await in JavaScript
AI: Async/await is syntactic sugar for promises...
```

### neohub edit
Edit files with AI assistance.

```bash
neohub edit -f app.js -i "add error handling"
```

Examples:

```bash
# Add error handling to a function
neohub edit -f server.js -i "add try-catch to all async functions"

# Refactor code
neohub edit -f utils.js -i "convert to TypeScript"

# Create backup first
neohub edit -f config.js -i "add validation" --backup
```

### neohub analyze
Analyze code for issues, explanations, or improvements.

```bash
neohub analyze <path> [--type review|explain|security|performance]
```

Examples:

```bash
# Code review
neohub analyze src/app.js --type review

# Security analysis
neohub analyze . --type security

# Performance analysis
neohub analyze lib/ --type performance

# Explain code
neohub analyze components/Header.tsx --type explain
```

### neohub models
List available Ollama models.

```bash
neohub models
```

Output:

```
📦 Available Models
● deepseek-coder:33b (17.53 GB)
● codellama:34b (17.74 GB)
● qwen2.5-coder:1.5b (0.92 GB)
```

### neohub recommend
Get intelligent model recommendations.

```bash
neohub recommend
```

The Model Supervisor analyzes:
- Task type (code generation, review, debugging, etc.)
- Task complexity
- Available models
- Performance history
Recommends the best model for your specific task!
### neohub config

Show current configuration.

```bash
neohub config
```

### neohub completion
Generate a shell completion script for tab completion.

```bash
# Auto-detect your shell
neohub completion

# Specify shell type
neohub completion --shell bash
neohub completion --shell zsh
neohub completion --shell fish
```

Enable autocomplete:
```bash
# Bash - add to ~/.bashrc or ~/.bash_profile
eval "$(neohub completion --shell bash)"

# Zsh - add to ~/.zshrc
eval "$(neohub completion --shell zsh)"

# Fish - add to ~/.config/fish/config.fish
neohub completion --shell fish | source
```

After enabling, you can:

- Press TAB to complete commands: `neohub ch<TAB>` → `neohub chat`
- Press TAB to complete options: `neohub analyze --type <TAB>` → shows `review explain security performance`
- Press TAB to complete file paths: `neohub edit -f <TAB>` → shows available files
### neohub analytics

View usage statistics and analytics.

```bash
# View analytics dashboard
neohub analytics

# Export analytics data
neohub analytics --export

# Clear analytics data
neohub analytics --clear

# Disable/enable tracking
neohub analytics --disable
neohub analytics --enable
```

Shows:
- Total commands executed
- Success rate
- Average response time
- Most used commands
- Model performance metrics
**Privacy**: All analytics are stored locally and never sent to the cloud.
### neohub search

Search for code patterns across your project.

```bash
# Basic search
neohub search "function"

# Case-sensitive search
neohub search "MyClass" --case-sensitive

# Regex search
neohub search "class\s+\w+" --regex

# Search with context
neohub search "TODO" --context-lines 5

# Limit results
neohub search "import" --max-results 20
```

Options:

- `-i, --case-sensitive` - Case-sensitive search
- `-w, --whole-word` - Match whole words only
- `-r, --regex` - Use regex pattern
- `-p, --path <path>` - Directory to search in
- `-m, --max-results <number>` - Maximum results (default: 100)
- `-c, --context-lines <number>` - Context lines (default: 2)
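The options combine; for example, a regex search scoped to one directory with extra context, using only the flags documented above (the pattern and directory name are illustrative):

```shell
# Find TODO or FIXME markers under src/, with 3 lines of context per match
neohub search "TODO|FIXME" --regex --path src --context-lines 3
```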
Output of `neohub config`:

```json
{
  "ollama": {
    "baseUrl": "http://localhost:11434",
    "model": "deepseek-coder:33b",
    "timeout": 60000
  },
  "preferences": {
    "autoContext": true,
    "maxContextFiles": 10
  }
}
```

## 🎯 Model Supervisor
NeoHub includes an intelligent Model Supervisor that automatically recommends the best model for each task:
Task-based Recommendations:
- 📝 Code Generation → DeepSeek Coder 33B (better at generating new code)
- 🔍 Code Review → CodeLlama 34B (trained on review patterns)
- ♻️ Refactoring → DeepSeek Coder 33B (understands structure)
- 🐛 Debugging → CodeLlama 34B (better at finding issues)
- 📖 Code Explanation → CodeLlama 34B (natural language strength)
- 🏗️ Architecture → DeepSeek Coder 33B (system design)
## 🔧 Configuration

Config file location: `~/.config/configstore/neohub.json`
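Because the config at that path is plain JSON, settings can also be changed non-interactively. A minimal sketch, assuming `python3` is on your PATH; the seeded default file is for illustration only:

```shell
CONFIG="$HOME/.config/configstore/neohub.json"
mkdir -p "$(dirname "$CONFIG")"
# Seed a default config if none exists yet (illustration only)
[ -f "$CONFIG" ] || printf '{"ollama": {"model": "deepseek-coder:33b"}}\n' > "$CONFIG"

# Switch the default model in place
python3 - "$CONFIG" <<'EOF'
import json, sys

path = sys.argv[1]
with open(path) as f:
    cfg = json.load(f)
cfg.setdefault("ollama", {})["model"] = "codellama:34b"
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
EOF
```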
### Change Ollama URL

```bash
# Edit the config file or re-run init
neohub init
```

### Change Default Model

Edit the config file:
```json
{
  "ollama": {
    "model": "codellama:34b"
  }
}
```

## 📦 Supported Models
NeoHub works with any Ollama model:
Recommended for Coding:
- `deepseek-coder:33b` - Best for code generation
- `codellama:34b` - Best for code review/explanation
- `qwen2.5-coder:1.5b` - Lightweight, fast
Install Models:
```bash
ollama pull deepseek-coder:33b
ollama pull codellama:34b
```

## 🌟 Use Cases
### 1. Interactive Coding Assistant

```bash
neohub chat
> How do I implement JWT authentication in Express?
```

### 2. Bulk Code Analysis

```bash
neohub analyze src/ --type security
```

### 3. Automated Refactoring

```bash
neohub edit -f *.js -i "convert var to const/let"
```

### 4. Learning & Exploration

```bash
neohub analyze node_modules/react/index.js --type explain
```

## 🚀 Why NeoHub?
### vs Cloud AI Tools (ChatGPT, Claude, Copilot)
- ✅ Free - No subscription required
- ✅ Private - Code never leaves your machine
- ✅ Offline - Works without internet
- ✅ Unlimited - No rate limits or tokens
### vs Other Local AI Tools
- ✅ Model Supervisor - Intelligent model selection
- ✅ Purpose-built - Designed specifically for coding
- ✅ CLI-first - Fast workflow integration
- ✅ Zero config - Works out of the box
## 🛠️ Requirements
- Node.js: 18+
- Ollama: Latest version
- Disk Space: 2-20GB (depends on models)
- RAM: 8GB minimum (16GB+ recommended for 33B models)
## 📊 Performance
Typical Response Times:
- Code completion: <1s
- Code review: 2-5s
- Complex refactoring: 5-10s
*Times vary based on model size and hardware.*
## 🔗 Links
- GitHub: fahadalmansour/NeoHub
- npm: @fsfalmansour/neohub-cli
- Issues: Report a bug
## 📄 License
MIT © 2025 Fahad Almansour
## 🙏 Credits
Built with:
- Ollama - Local LLM runtime
- Commander.js - CLI framework
- Inquirer.js - Interactive prompts
Made with ❤️ for developers who value privacy
