ask-gpt-cli (v1.0.0)
A CLI tool to quickly ask questions to various LLM providers (Ollama, OpenAI, OpenRouter) via API without leaving the terminal.
Ask GPT CLI
A simple and fast CLI tool for asking questions to various LLM providers right from your terminal, just by typing `ask`.
If you'd rather not leave the terminal, this tool is for you.
Features
- 🚀 Quick questions to LLMs without leaving your terminal
- 🔄 Support for multiple providers: Ollama, OpenAI, OpenRouter
- ⚙️ Easy configuration management with visual status indicators
- 🌐 Global installation support
- 💡 Simple and intuitive commands
- 🏠 Support for both local and remote Ollama servers
- 📋 List available models for each provider
- 🔧 Advanced configuration options with host/port separation
- ❌ Clear error messages with helpful setup guidance
Installation
Global Installation (Recommended)
```bash
npm install -g ask-gpt-cli
```

Local Installation

```bash
git clone <repository-url>
cd ask-gpt
npm install
npm install -g .
```

Quick Start
- Install the CLI globally
- Configure your preferred provider
- Start asking questions!
```bash
# Configure for Ollama (localhost)
ask config --provider ollama --model llama3.2

# Configure for remote Ollama
ask config --provider ollama --model llama3.2 --host myserver.com

# Configure for OpenAI
ask config --provider openai --model gpt-3.5-turbo --api-key YOUR_API_KEY

# Configure for OpenRouter
ask config --provider openrouter --model meta-llama/llama-3.2-3b-instruct:free --api-key YOUR_API_KEY

# Ask a question
ask "Why is the sky blue?"
```

Usage
Basic Command

```bash
ask "your question here"
```

Examples:

```bash
ask "What is the capital of France?"
ask "Explain quantum computing in simple terms"
ask "Write a Python function to reverse a string"
```

Configuration Commands
Show current configuration
```bash
ask config --show
# or
ask list
```

Set provider

```bash
ask config --provider <ollama|openai|openrouter>
```

Set model

```bash
ask config --model <model-name>
```

Set host and port (for Ollama)

```bash
# For localhost (default port 11434)
ask config --host localhost

# For localhost with custom port
ask config --host localhost --port 8080

# For remote server (no port needed)
ask config --host myserver.com

# For remote server with custom port
ask config --host myserver.com --port 8080
```

Set API key (for OpenAI/OpenRouter)

```bash
ask config --api-key YOUR_API_KEY
```

Multiple configurations at once

```bash
ask config --provider openai --model gpt-4 --api-key YOUR_API_KEY
```

Model Management
List available models

```bash
# List models for current provider
ask models

# List models for specific provider
ask models --provider ollama
ask models --provider openai
ask models --provider openrouter
```

Configuration Management
View current configuration with status
```bash
ask list
```

This shows:
- ✓ Configured settings (green checkmarks)
- ✗ Missing settings (red X marks)
- Active/inactive status for each setting
- Full Ollama URL construction
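The full Ollama URL is built from the configured host and port. A minimal sketch of how such a "smart" builder could behave, based on the rules described in this README (this is an illustration, not the CLI's actual code):

```javascript
// Hypothetical sketch of the smart URL building described in this README;
// the CLI's actual logic may differ.
function buildOllamaUrl(host, port) {
  if (port) return `http://${host}:${port}`; // explicit port always wins
  if (host === 'localhost' || host === '127.0.0.1') {
    return `http://${host}:11434`; // fall back to Ollama's default port
  }
  return `http://${host}`; // remote server: no port needed
}

console.log(buildOllamaUrl('localhost'));          // http://localhost:11434
console.log(buildOllamaUrl('myserver.com'));       // http://myserver.com
console.log(buildOllamaUrl('myserver.com', 8080)); // http://myserver.com:8080
```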
Reset configuration
```bash
# Reset everything
ask reset --all

# Reset specific settings
ask reset --provider
ask reset --model
ask reset --host
ask reset --port
ask reset --api-key
```

Help and Information
Show version

```bash
ask --version
```

Show help

```bash
ask --help
ask <command> --help  # Help for specific commands
```

Supported Providers
Ollama
- Provider: `ollama`
- Default Host: `localhost`
- Default Port: `11434`
- API Key: Not required
- Setup: Make sure Ollama is running locally or remotely
- Models: Fetched dynamically from your Ollama instance
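Dynamic model discovery works by querying Ollama's `GET /api/tags` endpoint (the same endpoint used for the connection test in Troubleshooting). A rough sketch, assuming a default local instance:

```javascript
// Extract model names from the JSON shape returned by GET /api/tags.
function parseOllamaTags(body) {
  return (body.models || []).map((m) => m.name);
}

// Sketch of fetching the installed model list from an Ollama instance.
// Requires Node 18+ for the built-in fetch.
async function listOllamaModels(baseUrl = 'http://localhost:11434') {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) throw new Error(`Ollama returned HTTP ${res.status}`);
  return parseOllamaTags(await res.json());
}
```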
Examples:
```bash
# Local setup
ask config --provider ollama --model llama3.2

# Remote setup
ask config --provider ollama --model llama3.2 --host ai.mycompany.com

# Custom port
ask config --provider ollama --model llama3.2 --host localhost --port 8080
```

OpenAI
- Provider: `openai`
- Popular Models: `gpt-4o`, `gpt-4o-mini`, `gpt-4-turbo`, `gpt-4`, `gpt-3.5-turbo`
- API Key: Required
- Setup: Get an API key from the OpenAI Platform
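Under the hood, an OpenAI-backed question boils down to a single POST to the chat completions endpoint. A sketch of the request a tool like this would send (illustrative, not the CLI's exact code):

```javascript
// Illustrative request shape for OpenAI's chat completions API.
function buildOpenAIRequest(model, apiKey, question) {
  return {
    url: 'https://api.openai.com/v1/chat/completions',
    options: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model,
        messages: [{ role: 'user', content: question }],
      }),
    },
  };
}
// The answer comes back in choices[0].message.content of the response.
```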
Example:
```bash
ask config --provider openai --model gpt-3.5-turbo --api-key sk-your-key-here
```

OpenRouter
- Provider: `openrouter`
- Models: 100+ models available (fetched dynamically)
- API Key: Required
- Setup: Get API key from OpenRouter
- Note: Shows pricing information for each model
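OpenRouter exposes its catalog, pricing included, at `GET https://openrouter.ai/api/v1/models`; each entry carries a `pricing` object with per-token prompt and completion costs. A sketch of turning that response into a pricing-annotated list (hypothetical formatting, not the CLI's exact output):

```javascript
// Format OpenRouter's /api/v1/models response into "id (prompt/completion)" lines.
// The { data: [{ id, pricing: { prompt, completion } }] } shape is an assumption
// based on OpenRouter's public API.
function formatOpenRouterModels(body) {
  return body.data.map(
    (m) => `${m.id} (prompt: ${m.pricing.prompt}, completion: ${m.pricing.completion})`
  );
}
```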
Example:
```bash
ask config --provider openrouter --model meta-llama/llama-3.2-3b-instruct:free --api-key sk-your-key-here
```

Configuration File
The CLI stores configuration in your system's config directory using the conf package. You can find the config file at:
- macOS: `~/Library/Preferences/ask-gpt-cli-nodejs/config.json`
- Linux: `~/.config/ask-gpt-cli-nodejs/config.json`
- Windows: `%APPDATA%\ask-gpt-cli-nodejs\config.json`
Examples
Development Questions

```bash
ask "How do I create a REST API in Node.js?"
ask "What's the difference between let and const in JavaScript?"
ask "Show me a Python function to sort a list"
ask "Explain the difference between SQL and NoSQL databases"
```

General Knowledge

```bash
ask "What is photosynthesis?"
ask "Explain the theory of relativity"
ask "Who painted the Mona Lisa?"
ask "How does machine learning work?"
```

Creative Tasks

```bash
ask "Write a haiku about programming"
ask "Create a story about a robot learning to cook"
ask "Suggest names for a pet cat"
ask "Write a professional email template"
```

Model Comparison

```bash
# Compare responses from different providers
ask config --provider ollama --model llama3.2
ask "Explain blockchain technology"

ask config --provider openai --model gpt-3.5-turbo
ask "Explain blockchain technology"
```

Error Handling
The CLI provides clear, helpful error messages:
Configuration Not Set
When you try to ask a question without configuration:
```
Error: You have not set any configuration yet.

To get started, configure your LLM provider:
  ask config --provider <provider> --model <model> [--api-key <api-key>]

Quick setup examples:
  Ollama:     ask config --provider ollama --model llama3.2
  OpenAI:     ask config --provider openai --model gpt-3.5-turbo --api-key sk-...
  OpenRouter: ask config --provider openrouter --model meta-llama/llama-3.2-3b-instruct:free --api-key sk-...
```

Missing API Key
When an API key is required but not set:

```
Error: API key required for openai.
Set your API key with: ask config --api-key your-api-key
```

Models Command Without Configuration

```
Error: You have not set any configuration yet.
Please run 'ask config --provider <provider> --model <model> --api-key <api-key>' to set it.

Or specify a provider directly:
  ask models --provider ollama
  ask models --provider openai
  ask models --provider openrouter
```

Troubleshooting
Common Issues
"Command not found: ask"
- Make sure you installed globally: `npm install -g ask-gpt-cli`
- Check that npm's global bin directory is in your PATH

Connection errors with Ollama
- Ensure Ollama is running: `ollama serve`
- Check that the host/port is correct: `ask config --show`
- Test the connection: `curl http://localhost:11434/api/tags`

API key errors
- Verify your API key is set: `ask config --show`
- Make sure the API key is valid and has sufficient credits

Model not found
- List available models: `ask models`
- For Ollama, ensure the model is downloaded: `ollama pull llama3.2`
- Check the exact model name with your provider

Configuration issues
- View the current config: `ask list`
- Reset if needed: `ask reset --all`
- Check the config file location (see the Configuration File section)
Debug Steps
1. Check your configuration: `ask list`
2. Test model availability: `ask models`
3. Try a simple question: `ask "Hello, are you working?"`
4. Reset and reconfigure if needed:

```bash
ask reset --all
ask config --provider ollama --model llama3.2
```
Development
Local Development
```bash
git clone <repository-url>
cd ask-gpt
npm install

# Test locally
node bin/ask.js "test question"

# Link for global testing
npm link
```

Project Structure

```
ask-gpt/
├── bin/
│   └── ask.js        # Main CLI script
├── package.json      # Dependencies and metadata
├── README.md         # This file
└── TODO.md           # Development notes
```

Contributing
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Test thoroughly with different providers
- Update documentation if needed
- Submit a pull request
Adding New Providers
To add a new LLM provider:
- Add the provider logic in the main action handler
- Update the help text and examples
- Add configuration options if needed
- Update the models command to support the new provider
- Add documentation and examples
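One way to keep the main action handler manageable is a small provider registry keyed by name; this is a hypothetical shape, not the current structure of `bin/ask.js`:

```javascript
// Hypothetical provider registry: each entry declares whether it needs an
// API key and how it would send a question. Bodies are left as stubs.
const providers = {
  ollama: {
    requiresApiKey: false,
    async ask(config, question) {
      // POST to `${config.url}/api/generate` ...
    },
  },
  openai: {
    requiresApiKey: true,
    async ask(config, question) {
      // POST to https://api.openai.com/v1/chat/completions ...
    },
  },
  // A new provider slots in alongside the existing ones.
};

function getProvider(name) {
  const provider = providers[name];
  if (!provider) throw new Error(`Unknown provider: ${name}`);
  return provider;
}
```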
License
MIT License - see LICENSE file for details.
Support
If you encounter any issues or have suggestions, please open an issue on the GitHub repository.
Changelog
Latest Version
- ✅ Enhanced error messages with helpful setup guidance
- ✅ Added `ask list` command for configuration overview
- ✅ Added `ask models` command to list available models
- ✅ Added `ask reset` command for configuration management
- ✅ Improved Ollama host/port configuration with smart URL building
- ✅ Added support for remote Ollama servers
- ✅ Visual status indicators for configuration
- ✅ Comprehensive help system with examples
- ✅ Better validation and error handling
