@ai-of-mine/cli
v1.4.1
Universal LLM Command Line Interface with multi-agent support and task management
🚀 AI CLI - Universal LLM Command Line Interface
Professional AI CLI with multi-agent support, task management, and enterprise-grade features.
🎯 Quick Start (Works in 60 seconds!)
```shell
# Clone and setup
git clone https://github.com/ai-of-mine/aicli.git
cd aicli
./init.sh   # Installs deps, builds, creates config

# Test immediately with LM Studio
npm start "hello world"

# Or install globally
npm install -g .
ai "hello world"
```

That's it! The init script automatically:
- ✅ Installs all dependencies
- ✅ Creates ~/.ai/config.json with working defaults
- ✅ Creates ~/.ai/.env with LM Studio config
- ✅ Builds the entire project
- ✅ Works immediately with LM Studio (localhost:1234)
Alternative manual setup:
```shell
cp .env.example .env
npm install
npm run build
```

✅ KEY FEATURES (September 2025)
Enterprise-grade AI CLI with multi-provider support:
- 🤖 Multi-provider support - LMStudio, IBM Gateway, WatsonX, Bedrock, OpenAI, IBM Research
- 🛠️ Intuitive tools - edit, search, fetch, create_file (natural AI interaction)
- ⚡ Background processes - Start dev servers, long builds with process management
- 🔧 Provider aliases - Use intuitive names (ibm-model-gateway, claude, aws)
- 📁 Professional config - Clean separation of models and API keys
🚀 SUPPORTED PROVIDERS
| Provider Type | Examples | Features |
|---------------|----------|----------|
| Local | LMStudio, Ollama | No API keys needed, privacy, unlimited usage |
| Enterprise | IBM Gateway, WatsonX | GPT-4o, AI Assistant, Mistral via enterprise APIs |
| Cloud | OpenAI, IBM Research, Bedrock | Direct provider access, latest models |
🛠️ AVAILABLE TOOLS
Core File Operations
- edit - Edit files (alias for replace)
- write_file - Create new files
- create_file - Create files (alias)
- read_file - Read file contents
- list_directory - List directory contents
Web & Search
- search - Web search (alias for google_web_search)
- google_web_search - Comprehensive web search
- fetch - URL content (alias for web_fetch)
- web_fetch - Process URLs and content
System Operations
- run_shell_command - Execute shell commands
  - Background support: `run_in_background: true` for dev servers
- save_memory - Save project/user context
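Several of the tool names above are aliases (e.g. `edit` → `replace`). Internally this kind of aliasing can be modeled as a plain lookup table; the sketch below is illustrative only, not the CLI's actual implementation (the mappings themselves are the ones documented in this README):

```javascript
// Tool-name aliases as documented above; the resolver is a hypothetical sketch.
const TOOL_ALIASES = {
  edit: "replace",
  create_file: "write_file",
  search: "google_web_search",
  fetch: "web_fetch",
};

// Canonical names (e.g. read_file) fall through unchanged.
function canonicalToolName(name) {
  return TOOL_ALIASES[name] ?? name;
}
```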
📚 Documentation
- 🏗️ Architecture Documentation
- 🔧 Development Guide
- 🔌 Provider Configuration
- 🧪 Testing Documentation
- 🚀 Deployment Guide
- 📖 User Guides
📦 INSTALLATION
Quick Install (Recommended)
```shell
# Install globally via npm
npm install -g @ai-of-mine/cli

# The 'ai' command is now available globally
ai "hello world"
```

First-Time Setup

```shell
# Initialize configuration on first run
ai "test setup"

# Follow the interactive setup wizard:
# ✅ Choose default model (LMStudio, WatsonX, etc.)
# ✅ Configure API keys
# ✅ Set provider preferences
```

✅ The ai command is automatically available in your terminal!
From Source (Development)
```shell
# Clone and build
git clone https://github.com/ai-of-mine/aicli.git
cd aicli
npm install && npm run build

# Link globally for development
npm link

# Test installation
ai --version
ai "hello world"   # Uses Qwen 3.2 4B by default
```

3. Configuration
API Keys Setup (Required)
```shell
# Copy unified environment configuration
cp .env.example .env

# Edit .env with your API keys:
AI_PROVIDER_IBM_API_KEY=your-ibm-gateway-key          # For IBM Gateway (GPT-4o, AI Assistant)
AI_PROVIDER_WATSONX_API_KEY=your-watsonx-key          # For IBM WatsonX (Mistral)
AI_PROVIDER_OPENAI_API_KEY=sk-your-openai-key         # For direct OpenAI
AI_PROVIDER_ANTHROPIC_API_KEY=sk-ant-your-claude-key  # For direct AI Assistant
AI_PROVIDER_LMSTUDIO_URL=http://localhost:1234/v1     # For local LMStudio
AI_PROVIDER_BEDROCK_API_KEY=your-aws-bearer-token     # For AWS Bedrock
```

Configuration Architecture
```shell
# ~/.ai/config.json - Model definitions (NO API keys)
#   Purpose: Provider configs, model lists, user preferences
#   Example: {"providers": {"lmstudio": {"models": [{"name": "openai/gpt-oss-20b"}]}}}
#   Safe to share: Contains no secrets

# .env - API keys and secrets (SINGLE SOURCE OF TRUTH)
#   Purpose: All API keys with unified AI_PROVIDER_* structure
#   Example: AI_PROVIDER_IBM_API_KEY=your-secret-key
#   Never commit: Add to .gitignore, contains private keys

# Why separated?
# - config.json: Shareable model/provider definitions
# - .env: Private API keys, environment-specific settings
```

4. Test Installation
```shell
# Test basic functionality
ai --help

# Interactive mode with local model
ai --model lmstudio/openai/gpt-oss-20b

# Test with cloud provider
ai --model ibm-model-gateway/sonnet-3-7
```

🚀 QUICK START
First Time Setup
```shell
# 1. Install dependencies
npm install && npm run build

# 2. Set up your preferred model
ai --model lmstudio/qwen/qwen3-4b-2507

# 3. Interactive mode
ai

# 4. Background dev server
echo 'Start npm dev server in background' | ai -y
```

🔧 BACKGROUND PROCESSES
```javascript
// Start development server
run_shell_command({
  "command": "npm run dev",
  "run_in_background": true,
  "description": "Development server"
})

// Background Python server
run_shell_command({
  "command": "python manage.py runserver",
  "run_in_background": true
})
```

🔄 PROVIDER ALIASES
Intuitive provider names supported:
```shell
# These all work thanks to provider aliases:
ai --model ibm-model-gateway/sonnet-3-7   # Resolves to ibm/sonnet-3-7
ai --model claude/sonnet                  # Resolves to anthropic/sonnet
ai --model aws/gpt-oss                    # Resolves to bedrock/gpt-oss
ai --model google-ai/gemini               # Resolves to google/gemini
ai --model local/qwen                     # Resolves to lmstudio/qwen
```

Supported aliases:
- `ibm-model-gateway`, `ibm-gateway` → `ibm`
- `claude`, `anthropic-claude` → `anthropic`
- `aws`, `aws-bedrock` → `bedrock`
- `google-ai`, `vertex-ai` → `google`
- `local`, `lm-studio` → `lmstudio`
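Under the hood, alias resolution is just a name-mapping step before the `provider/model` spec is split. A hypothetical sketch (the alias table mirrors the list above; the `resolveModelSpec` helper is illustrative, not the CLI's code):

```javascript
// Alias table as documented in this README.
const PROVIDER_ALIASES = {
  "ibm-model-gateway": "ibm",
  "ibm-gateway": "ibm",
  "claude": "anthropic",
  "anthropic-claude": "anthropic",
  "aws": "bedrock",
  "aws-bedrock": "bedrock",
  "google-ai": "google",
  "vertex-ai": "google",
  "local": "lmstudio",
  "lm-studio": "lmstudio",
};

// "ibm-model-gateway/sonnet-3-7" → { provider: "ibm", model: "sonnet-3-7" }
// Model names may themselves contain slashes (e.g. "openai/gpt-oss-20b"),
// so only the first segment is treated as the provider.
function resolveModelSpec(spec) {
  const [alias, ...rest] = spec.split("/");
  return { provider: PROVIDER_ALIASES[alias] ?? alias, model: rest.join("/") };
}
```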
⚙️ CONFIGURATION
Requirements
- Node.js 18+
- Git for cloning
- API keys for cloud providers (optional for local LMStudio)
Configuration Files
| File | Purpose | Contains | Shareable |
|------|---------|----------|-----------|
| ~/.ai/config.json | Model definitions | Providers, models, preferences | ✅ Yes (no secrets) |
| .env | API keys & environment | AI_PROVIDER_*_API_KEY values | ❌ No (private keys) |
Key Points:
- NO API keys in config.json - Clean separation of concerns
- ALL API keys in .env - Single source of truth for authentication
- config.json auto-generated - Populated when providers are used
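A minimal sketch of how this separation could be consumed at runtime: model definitions come from `~/.ai/config.json` (no secrets), while the API key is looked up via the `AI_PROVIDER_<NAME>_API_KEY` convention from `.env`. The `resolveProvider` helper and inline config object are hypothetical, not the CLI's actual code:

```javascript
// Shape follows the config.json example earlier in this README.
const config = {
  providers: {
    lmstudio: { models: [{ name: "openai/gpt-oss-20b" }] },
    ibm: { models: [{ name: "sonnet-3-7" }] },
  },
};

function resolveProvider(name) {
  const provider = config.providers[name];
  if (!provider) throw new Error(`Unknown provider: ${name}`);
  // Secrets live only in the environment, never in config.json.
  const apiKey = process.env[`AI_PROVIDER_${name.toUpperCase()}_API_KEY`];
  // Local providers (e.g. lmstudio) may legitimately have no key.
  return { models: provider.models, apiKey };
}
```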
Complete Remote Installation
```shell
# 1. Clone and build
git clone https://github.com/your-repo/aicli.git
cd aicli
npm install
npm run build

# 2. Setup configuration structure
# The CLI will create the ~/.ai/ directory automatically on first run
# You can also create it manually:
mkdir -p ~/.ai/sessions

# 3. Copy and configure API keys
cp .env.example .env
# Edit .env with your actual API keys (see Configuration section above)

# 4. Setup permanent alias
echo "alias ai='$(pwd)/ai'" >> ~/.zshrc
source ~/.zshrc

# 5. Test installation
ai --help
ai --model lmstudio/openai/gpt-oss-20b    # If LMStudio is running
ai --model ibm-model-gateway/sonnet-3-7   # If you have an IBM Gateway key

# 6. Verify configuration structure
ls ~/.ai/                 # Should show: config.json, sessions/
cat ~/.ai/config.json     # Should show: providers, models (NO apiKeys)
grep API_KEY .env         # Should show: AI_PROVIDER_*_API_KEY variables
```

What Gets Created:
```
~/.ai/                  # Global configuration directory
├── config.json         # Providers & models (auto-generated, no secrets)
├── settings.json       # User preferences
└── sessions/           # Session history by project

your-project/.env       # API keys (you create this from .env.example)
```

📚 ADVANCED FEATURES
Tool Synonyms
AI naturally expects intuitive tool names:
```javascript
// These all work (synonyms automatically resolved):
edit({file_path: "test.txt", old_string: "old", new_string: "new"})  // → replace
search({query: "AI tutorials"})                                      // → google_web_search
fetch({prompt: "Summarize https://example.com"})                     // → web_fetch
create_file({file_path: "new.txt", content: "Hello"})                // → write_file
```

Session Management
```shell
ai -C                      # Continue last session
ai /save                   # Save current session
ai /load session-id-here   # Load specific session
```

Model Switching
```shell
ai /model                   # Interactive model selection
ai --model provider/model   # Direct model specification
```

🚨 TROUBLESHOOTING
Installation Issues
```shell
# Build errors after git pull:
npm run build

# Permission denied:
chmod +x ai

# Command not found:
echo "alias ai='$(pwd)/ai'" >> ~/.zshrc && source ~/.zshrc
```

API Key Issues
```shell
# Check if keys are loaded:
source .env && echo $AI_PROVIDER_IBM_API_KEY

# Provider authentication errors:
# 1. Verify the key in your .env file
# 2. Check the provider URL is correct
# 3. Test with curl (see .env.example for curl examples)

# WatsonX tool schema errors:
# If you see "tools[X].function.parameters" validation errors,
# this is a tool schema issue - try switching to LMStudio or IBM Gateway:
ai --model lmstudio/openai/gpt-oss-20b   # Switch to a working model
```

Model Issues
```shell
# Model not found:
ai /model                        # See available models
ai --model lmstudio/model-name   # Use full provider/model format

# Context limit errors:
ai /new                          # Start fresh conversation
ai /compress                     # Compress current context
```

Ready for production use with GREG's AI branding! 🇧🇷
