viho
A lightweight CLI tool for managing and interacting with AI models
Features
- Multiple AI model management (OpenAI, Google Gemini)
- Support for OpenAI-compatible APIs
- Support for Google Gemini AI (API and Vertex AI)
- Interactive Q&A with streaming responses
- Continuous chat sessions for multi-turn conversations
- Support for thinking mode (for compatible models)
- Expert mode with domain-specific knowledge
- Configurable API endpoints
- Default model configuration
- Simple and intuitive CLI interface
- Persistent configuration storage
Installation
Install globally via npm:
npm install -g viho
Requirements
- Node.js >= 18.0.0
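Before installing, you can confirm that your Node.js version meets this requirement (a quick check, not specific to viho):
# should print v18.0.0 or higher
node --version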
Quick Start
- Add your first AI model:
viho model add
- Set it as default:
viho model default
- Start asking questions:
viho ask
Commands
Model Management
viho model add
Add a new AI model configuration interactively.
First, you'll select the platform:
- openai - For OpenAI official API
- gemini api - For Google Gemini AI Studio
- gemini vertex - For Google Gemini Vertex AI
- deepseek - For DeepSeek (OpenAI-compatible)
- kimi - For Kimi/Moonshot (OpenAI-compatible)
Then you'll be prompted to enter the required information based on the platform:
For OpenAI-compatible platforms (openai, deepseek, kimi):
- Model name (a custom identifier)
- API key
- Base URL (e.g., https://api.openai.com/v1)
- Model ID (e.g., gpt-4, gpt-4o)
- Thinking mode (enabled/disabled)
For Gemini API:
- Model name (a custom identifier)
- API key (from Google AI Studio)
- Model ID (e.g., gemini-pro, gemini-1.5-flash, gemini-1.5-pro)
For Gemini Vertex:
- Model name (a custom identifier)
- Project ID (your GCP project)
- Location (e.g., us-east1, us-central1)
- Model ID (e.g., gemini-1.5-flash-002, gemini-1.5-pro-002)
viho model add
After adding a model, it will be available for use with viho ask, viho chat, and viho expert commands.
viho model list
List all configured models with detailed information.
viho model list
This command displays:
- Model name, with a (default) tag for the default model
- Platform (openai, deepseek, kimi, gemini api, or gemini vertex)
- Platform-specific configuration details
Example output:
Configured models:
• gpt4 (default)
Platform: openai
Model ID: gpt-4o
Base URL: https://api.openai.com/v1
Thinking: enabled
• deepseek
Platform: deepseek
Model ID: deepseek-chat
Base URL: https://api.deepseek.com/v1
Thinking: enabled
• gemini
Platform: gemini api
Model ID: gemini-1.5-flash
API Key: ***
• gemini-pro
Platform: gemini vertex
Model ID: gemini-1.5-pro-002
Project ID: my-project-123
Location: us-east1
viho model remove
Remove a model configuration:
viho model remove
You'll be presented with a list of your configured models to choose from. If the removed model was set as default, you'll need to set a new default model.
viho model default
Set a default model for chat and ask sessions:
viho model default
You'll be presented with a list of your configured models to choose from. The default model will be used when you run viho ask, viho chat, or viho expert commands without specifying a model name.
Ask
viho ask [modelName]
Ask a question to an AI model.
If no model name is provided, uses the default model:
viho ask
Or specify a model explicitly:
viho ask mymodel
The interface includes:
- Editor-based question input
- Streaming responses
- Visual thinking process (when enabled)
- Colored output for better readability
Chat
viho chat [modelName]
Start a continuous chat session with an AI model for multi-turn conversations.
If no model name is provided, uses the default model:
viho chat
Or specify a model explicitly:
viho chat mymodel
The chat session runs in a loop, allowing you to ask multiple questions continuously without restarting the command. Each question uses the same model configuration but starts a fresh conversation context.
Note: The main difference between viho ask and viho chat:
- viho ask - Single question, exits after receiving the answer
- viho chat - Continuous loop, keeps asking for new questions until manually stopped (Ctrl+C)
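In practice the choice depends on how many follow-ups you expect; both use the same model configuration (only commands documented above are shown):
# one-off question with the default model; exits after the answer
viho ask
# longer working session with a specific model; press Ctrl+C to stop
viho chat mymodel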
Expert Mode
Expert mode allows you to chat with an AI model that has access to domain-specific documentation, making it more knowledgeable about particular libraries or frameworks.
viho expert list
List all available expert resources:
viho expert list
This displays available experts like:
- antd - Ant Design documentation
- daisyui - DaisyUI documentation
viho expert <name> [modelName]
Start an expert chat session with domain-specific knowledge:
viho expert antd
Or specify a model explicitly:
viho expert daisyui mymodel
The expert mode works similarly to viho chat but includes the relevant documentation as context, making the AI more accurate when answering questions about that specific library or framework.
Example:
# Get help with Ant Design
viho expert antd
# Ask: "How do I create a responsive table with sorting?"
# The AI will use Ant Design documentation to provide accurate answers
Configuration
Configuration is stored in ~/viho.json. You can manage all settings through the CLI commands.
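Because the configuration is a plain JSON file in your home directory, you can also inspect it or back it up directly from the shell (the exact contents depend on the models you have added):
# view the current configuration
cat ~/viho.json
# keep a backup before editing it by hand
cp ~/viho.json ~/viho.json.bak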
Example Configuration Structure
{
"models": [
{
"modelName": "mymodel",
"apiKey": "your-api-key",
"baseURL": "https://api.openai.com/v1",
"modelID": "gpt-4",
"modelThinking": "auto"
}
],
"default": "mymodel"
}
Supported Providers
viho supports multiple AI providers and platforms:
OpenAI-Compatible APIs
- OpenAI - Official OpenAI API (GPT-4, GPT-4o, GPT-3.5, etc.)
- DeepSeek - DeepSeek AI with reasoning capabilities
- Kimi - Moonshot AI (Kimi)
- Any other OpenAI-compatible API endpoints
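Because any OpenAI-compatible endpoint goes through the same interactive flow, a self-hosted server can be added the same way. For example, assuming a locally running Ollama instance serving its OpenAI-compatible API (the base URL, model ID, and placeholder key below describe that local setup and are not part of viho itself):
viho model add
# Select platform: openai
# Enter model name: local-llama
# Enter API key: any-placeholder-value (local servers typically ignore it)
# Enter base URL: http://localhost:11434/v1
# Enter model ID: llama3.1
# Thinking mode: disabled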
Google Gemini
Gemini API (via Google AI Studio)
- Ideal for personal development and prototyping
- Get API key from Google AI Studio
Gemini Vertex AI (via Google Cloud)
- Enterprise-grade with advanced features
- Requires Google Cloud project with Vertex AI enabled
- Supports context caching for cost optimization
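The README does not describe how Vertex AI credentials are supplied; Google's Vertex AI SDKs normally pick up Application Default Credentials, so if that applies here, a typical one-time setup with the gcloud CLI looks like this (the project ID is a placeholder):
# enable the Vertex AI API in your project
gcloud services enable aiplatform.googleapis.com --project my-gcp-project
# sign in and create Application Default Credentials
gcloud auth application-default login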
Examples
Adding an OpenAI Model
viho model add
# Select platform: openai
# Enter model name: gpt4
# Enter API key: sk-...
# Enter base URL: https://api.openai.com/v1
# Enter model ID: gpt-4o
# Thinking mode: disabled
Adding a DeepSeek Model
viho model add
# Select platform: deepseek
# Enter model name: deepseek
# Enter API key: sk-...
# Enter base URL: https://api.deepseek.com/v1
# Enter model ID: deepseek-chat
# Thinking mode: enabled
Adding a Kimi Model
viho model add
# Select platform: kimi
# Enter model name: kimi
# Enter API key: sk-...
# Enter base URL: https://api.moonshot.cn/v1
# Enter model ID: moonshot-v1-8k
# Thinking mode: disabled
Adding a Gemini API Model
viho model add
# Select platform: gemini api
# Enter model name: gemini
# Enter API key: your-google-ai-api-key
# Enter model ID: gemini-1.5-flash
Adding a Gemini Vertex AI Model
viho model add
# Select platform: gemini vertex
# Enter model name: gemini-pro
# Enter projectId: my-gcp-project
# Enter location: us-east1
# Enter model ID: gemini-1.5-pro-002
Setting Up for First Use
# Add a model
viho model add
# Set it as default
viho model default
# Start asking questions
viho ask
Dependencies
- qiao-cli - CLI utilities
- qiao-config - Configuration management
- qiao-file - File utilities
- viho-llm - Multi-provider LLM integration (OpenAI, Gemini)
License
MIT
Author
uikoo9 [email protected]
Issues
Report issues at: https://github.com/uikoo9/viho/issues
Homepage
https://github.com/uikoo9/viho
