
ask-gpt-cli

v1.0.0

Published

A CLI tool to quickly ask questions to various LLM providers (Ollama, OpenAI, OpenRouter) via API without leaving the terminal.

Readme

Ask GPT CLI

A simple and fast CLI tool for quickly asking questions to various LLM providers from your terminal, simply by typing 'ask' followed by your question.

If you'd rather not leave the terminal, or you already use it for most of your tasks, this tool is for you.

Features

  • 🚀 Quick questions to LLMs without leaving your terminal
  • 🔄 Support for multiple providers: Ollama, OpenAI, OpenRouter
  • ⚙️ Easy configuration management with visual status indicators
  • 🌐 Global installation support
  • 💡 Simple and intuitive commands
  • 🏠 Support for both local and remote Ollama servers
  • 📋 List available models for each provider
  • 🔧 Advanced configuration options with host/port separation
  • ❌ Clear error messages with helpful setup guidance

Installation

Global Installation (Recommended)

npm install -g ask-gpt-cli

Local Installation

git clone <repository-url>
cd ask-gpt
npm install
npm install -g .

Quick Start

  1. Install the CLI globally
  2. Configure your preferred provider
  3. Start asking questions!

# Configure for Ollama (localhost)
ask config --provider ollama --model llama3.2

# Configure for remote Ollama
ask config --provider ollama --model llama3.2 --host myserver.com

# Configure for OpenAI
ask config --provider openai --model gpt-3.5-turbo --api-key YOUR_API_KEY

# Configure for OpenRouter
ask config --provider openrouter --model meta-llama/llama-3.2-3b-instruct:free --api-key YOUR_API_KEY

# Ask a question
ask "Why is the sky blue?"

Usage

Basic Command

ask "your question here"

Examples:

ask "What is the capital of France?"
ask "Explain quantum computing in simple terms"
ask "Write a Python function to reverse a string"

Configuration Commands

Show current configuration

ask config --show
# or
ask list

Set provider

ask config --provider <ollama|openai|openrouter>

Set model

ask config --model <model-name>

Set host and port (for Ollama)

# For localhost (default port 11434)
ask config --host localhost

# For localhost with custom port
ask config --host localhost --port 8080

# For remote server (no port needed)
ask config --host myserver.com

# For remote server with custom port
ask config --host myserver.com --port 8080

Set API key (for OpenAI/OpenRouter)

ask config --api-key YOUR_API_KEY

Multiple configurations at once

ask config --provider openai --model gpt-4 --api-key YOUR_API_KEY

Model Management

List available models

# List models for current provider
ask models

# List models for specific provider
ask models --provider ollama
ask models --provider openai
ask models --provider openrouter

Configuration Management

View current configuration with status

ask list

This shows:

  • ✓ Configured settings (green checkmarks)
  • ✗ Missing settings (red X marks)
  • Active/inactive status for each setting
  • Full Ollama URL construction
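The "smart URL building" listed above can be sketched as a small helper. This is a hypothetical reconstruction based on the documented behavior (localhost defaults to Ollama's standard port 11434; remote hosts omit the port unless one is configured), not the tool's actual source; the function name is ours.

```javascript
// Hypothetical sketch of the Ollama URL construction described above.
// Assumption: localhost falls back to Ollama's default port 11434,
// while remote hosts are used as-is unless a port is configured.
function buildOllamaUrl(host, port) {
  const isLocal = host === 'localhost' || host === '127.0.0.1';
  if (port) return `http://${host}:${port}`;
  return isLocal ? `http://${host}:11434` : `http://${host}`;
}

console.log(buildOllamaUrl('localhost'));       // http://localhost:11434
console.log(buildOllamaUrl('localhost', 8080)); // http://localhost:8080
console.log(buildOllamaUrl('myserver.com'));    // http://myserver.com
```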

Reset configuration

# Reset everything
ask reset --all

# Reset specific settings
ask reset --provider
ask reset --model
ask reset --host
ask reset --port
ask reset --api-key

Help and Information

Show version

ask --version

Show help

ask --help
ask <command> --help  # Help for specific commands

Supported Providers

Ollama

  • Provider: ollama
  • Default Host: localhost
  • Default Port: 11434
  • API Key: Not required
  • Setup: Make sure Ollama is running locally or remotely
  • Models: Fetched dynamically from your Ollama instance
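"Fetched dynamically" here means querying the Ollama server itself. A minimal sketch of how that can work, assuming Ollama's public GET /api/tags endpoint (which returns an object with a `models` array, each entry carrying a `name` field); the helper name is an assumption, not the CLI's internals:

```javascript
// Extract model names from an Ollama GET /api/tags response.
function extractModelNames(tagsResponse) {
  return (tagsResponse.models || []).map((m) => m.name);
}

// Against a running Ollama server (network call, shown for context):
// const res = await fetch('http://localhost:11434/api/tags');
// console.log(extractModelNames(await res.json()));

// Works the same on a canned response:
const sample = { models: [{ name: 'llama3.2:latest' }, { name: 'mistral:7b' }] };
console.log(extractModelNames(sample)); // [ 'llama3.2:latest', 'mistral:7b' ]
```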

Examples:

# Local setup
ask config --provider ollama --model llama3.2

# Remote setup
ask config --provider ollama --model llama3.2 --host ai.mycompany.com

# Custom port
ask config --provider ollama --model llama3.2 --host localhost --port 8080
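Under the hood, asking Ollama a question amounts to a single HTTP request. A minimal sketch, assuming Ollama's standard POST /api/generate endpoint (the helper name and the CLI's actual internals are assumptions):

```javascript
// Build a non-streaming Ollama request. The endpoint and payload shape
// follow Ollama's public API; with stream: false, the server returns
// one JSON object whose `response` field holds the answer.
function buildGenerateRequest(baseUrl, model, prompt) {
  return {
    url: `${baseUrl}/api/generate`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ model, prompt, stream: false }),
    },
  };
}

// Usage (network call, shown for context):
// const { url, options } = buildGenerateRequest('http://localhost:11434', 'llama3.2', 'Why is the sky blue?');
// const answer = (await (await fetch(url, options)).json()).response;
```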

OpenAI

  • Provider: openai
  • Popular Models: gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-4, gpt-3.5-turbo
  • API Key: Required
  • Setup: Get API key from OpenAI Platform

Example:

ask config --provider openai --model gpt-3.5-turbo --api-key sk-your-key-here

OpenRouter

  • Provider: openrouter
  • Models: 100+ models available (fetched dynamically)
  • API Key: Required
  • Setup: Get API key from OpenRouter
  • Note: Shows pricing information for each model

Example:

ask config --provider openrouter --model meta-llama/llama-3.2-3b-instruct:free --api-key sk-your-key-here
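The pricing note above maps to OpenRouter's public model list: GET https://openrouter.ai/api/v1/models returns `{ data: [{ id, pricing: { prompt, completion }, ... }] }`. The formatting below is an assumption for illustration, not the CLI's exact output:

```javascript
// Format one OpenRouter model entry with its per-token pricing.
function formatModelLine(model) {
  const p = model.pricing || {};
  return `${model.id}  (prompt: ${p.prompt ?? '?'}, completion: ${p.completion ?? '?'})`;
}

// Canned response in the shape OpenRouter's /api/v1/models returns:
const sample = {
  data: [{ id: 'meta-llama/llama-3.2-3b-instruct:free', pricing: { prompt: '0', completion: '0' } }],
};
for (const m of sample.data) console.log(formatModelLine(m));
```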

Configuration File

The CLI stores configuration in your system's config directory using the conf package. You can find the config file at:

  • macOS: ~/Library/Preferences/ask-gpt-cli-nodejs/config.json
  • Linux: ~/.config/ask-gpt-cli-nodejs/config.json
  • Windows: %APPDATA%\ask-gpt-cli-nodejs\config.json
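The file conf writes is plain JSON, so it can be inspected or edited directly. The exact key names below are an assumption based on the configuration options documented above; a file for a custom-port Ollama setup might look like:

```json
{
  "provider": "ollama",
  "model": "llama3.2",
  "host": "localhost",
  "port": 8080
}
```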

Examples

Development Questions

ask "How do I create a REST API in Node.js?"
ask "What's the difference between let and const in JavaScript?"
ask "Show me a Python function to sort a list"
ask "Explain the difference between SQL and NoSQL databases"

General Knowledge

ask "What is photosynthesis?"
ask "Explain the theory of relativity"
ask "Who painted the Mona Lisa?"
ask "How does machine learning work?"

Creative Tasks

ask "Write a haiku about programming"
ask "Create a story about a robot learning to cook"
ask "Suggest names for a pet cat"
ask "Write a professional email template"

Model Comparison

# Compare responses from different providers
ask config --provider ollama --model llama3.2
ask "Explain blockchain technology"

ask config --provider openai --model gpt-3.5-turbo
ask "Explain blockchain technology"

Error Handling

The CLI provides clear, helpful error messages:

Configuration Not Set

When you try to ask a question without configuration:

Error: You have not set any configuration yet.

To get started, configure your LLM provider:
ask config --provider <provider> --model <model> [--api-key <api-key>]

Quick setup examples:
  Ollama:     ask config --provider ollama --model llama3.2
  OpenAI:     ask config --provider openai --model gpt-3.5-turbo --api-key sk-...
  OpenRouter: ask config --provider openrouter --model meta-llama/llama-3.2-3b-instruct:free --api-key sk-...

Missing API Key

When API key is required but not set:

Error: API key required for openai.
Set your API key with: ask config --api-key your-api-key

Models Command Without Configuration

Error: You have not set any configuration yet.
Please run 'ask config --provider <provider> --model <model> --api-key <api-key>' to set it.

Or specify a provider directly:
  ask models --provider ollama
  ask models --provider openai
  ask models --provider openrouter

Troubleshooting

Common Issues

  1. "Command not found: ask"

    • Make sure you installed globally: npm install -g ask-gpt-cli
    • Check if npm global bin is in your PATH
  2. Connection errors with Ollama

    • Ensure Ollama is running: ollama serve
    • Check if the host/port is correct: ask config --show
    • Test the connection: curl http://localhost:11434/api/tags
  3. API key errors

    • Verify your API key is set: ask config --show
    • Make sure the API key is valid and has sufficient credits
  4. Model not found

    • List available models: ask models
    • For Ollama, ensure the model is downloaded: ollama pull llama3.2
    • Check the exact model name with your provider
  5. Configuration issues

    • View current config: ask list
    • Reset if needed: ask reset --all
    • Check config file location (see Configuration File section)

Debug Steps

  1. Check your configuration:

    ask list
  2. Test model availability:

    ask models
  3. Try a simple question:

    ask "Hello, are you working?"
  4. Reset and reconfigure if needed:

    ask reset --all
    ask config --provider ollama --model llama3.2

Development

Local Development

git clone <repository-url>
cd ask-gpt
npm install

# Test locally
node bin/ask.js "test question"

# Link for global testing
npm link

Project Structure

ask-gpt/
├── bin/
│   └── ask.js          # Main CLI script
├── package.json        # Dependencies and metadata
├── README.md           # This file
└── TODO.md             # Development notes

Contributing

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Make your changes
  4. Test thoroughly with different providers
  5. Update documentation if needed
  6. Submit a pull request

Adding New Providers

To add a new LLM provider:

  1. Add the provider logic in the main action handler
  2. Update the help text and examples
  3. Add configuration options if needed
  4. Update the models command to support the new provider
  5. Add documentation and examples
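One way to keep the steps above contained is to put each provider behind a common interface, so the main action handler and the models command only ever call two functions. This is a hypothetical sketch of that pattern; the names and shapes are assumptions, not the project's actual structure:

```javascript
// Hypothetical provider registry. Each provider supplies the two
// operations the CLI needs: listing models and answering a question.
const providers = {};

function registerProvider(name, impl) {
  if (typeof impl.listModels !== 'function' || typeof impl.ask !== 'function') {
    throw new Error(`Provider "${name}" must implement listModels() and ask()`);
  }
  providers[name] = impl;
}

function getProvider(name) {
  const p = providers[name];
  if (!p) throw new Error(`Unknown provider: ${name}`);
  return p;
}

// A new provider then only touches its own module plus the docs:
registerProvider('echo', {
  listModels: async () => ['echo-1'],
  ask: async (model, prompt) => `[${model}] ${prompt}`,
});
```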

License

MIT License - see LICENSE file for details.

Support

If you encounter any issues or have suggestions, please open an issue on the GitHub repository.

Changelog

Latest Version

  • ✅ Enhanced error messages with helpful setup guidance
  • ✅ Added ask list command for configuration overview
  • ✅ Added ask models command to list available models
  • ✅ Added ask reset command for configuration management
  • ✅ Improved Ollama host/port configuration with smart URL building
  • ✅ Added support for remote Ollama servers
  • ✅ Visual status indicators for configuration
  • ✅ Comprehensive help system with examples
  • ✅ Better validation and error handling