# Versus CLI (`versus`)
Compare two Linux commands or concepts (A vs B) from inside your terminal, grounded in your machine’s local documentation (man pages, --help, info) and summarized by an LLM backend.
## Demo

```bash
versus curl wget
versus nano vim --backend gemini
versus "git pull" "git fetch" --level beginner
```

## Why this exists
When you’re learning Linux, you constantly ask: “What’s the difference between X and Y?”
Versus answers that in a structured way, using local docs as grounding so the model is less likely to hallucinate.
Features
- Grounded comparisons using
man,--help, andinfo(best-effort) - Multiple backends:
gemini,openai,ollama,mock - Markdown rendering in terminal by default (TTY). Use
--rawfor source markdown - TTL cache to reduce repeated API calls and speed up repeat runs
- Prompt inspection: view the exact prompt without calling any backend
- CI (GitHub Actions) runs tests on every push/PR
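A minimal sketch of what "best-effort" doc collection could look like, assuming a `man` → `--help` → `info` fallback order; the helper names and order here are illustrative assumptions, not the project's actual internals:

```ts
import { execFileSync } from "node:child_process";

// Hypothetical helper: run a command and return stdout, or null on any failure.
function tryRun(cmd: string, args: string[]): string | null {
  try {
    return execFileSync(cmd, args, {
      encoding: "utf8",
      stdio: ["ignore", "pipe", "ignore"],
    });
  } catch {
    return null; // binary missing or non-zero exit: fall through to the next source
  }
}

// Best-effort grounding: man page, then `--help`, then info, truncated to a budget.
function collectDocs(command: string, maxDocChars: number): string {
  const doc =
    tryRun("man", [command]) ??
    tryRun(command, ["--help"]) ??
    tryRun("info", [command]) ??
    ""; // nothing found: the backend answers from general knowledge
  return doc.slice(0, maxDocChars);
}
```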
## Requirements

- Node.js 20+ (matches CI and `engines` in `package.json`)
- Linux/WSL (macOS likely works too if `man` exists)
## Install

### From npm (recommended)

```bash
npm install -g @ramusriram/versus
```

Then run:

```bash
versus nano vim
```

### From source (for development)

```bash
git clone https://github.com/RamuSriram/versus-ai-cli.git
cd versus-ai-cli
npm install
npm link
```

## Quickstart
Backend selection (when `--backend auto`) follows this fallback order (sketched below):

- Uses OpenAI if `OPENAI_API_KEY` is set
- Else uses Gemini if `GEMINI_API_KEY` is set
- Else uses Ollama if it's running locally
- Else falls back to the mock backend
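A minimal sketch of that fallback order; the real resolution lives in the CLI source, and the Ollama liveness probe shown here is an assumption:

```ts
// Hypothetical resolution for `--backend auto`.
async function resolveBackend(): Promise<"openai" | "gemini" | "ollama" | "mock"> {
  if (process.env.OPENAI_API_KEY) return "openai";
  if (process.env.GEMINI_API_KEY) return "gemini";
  try {
    // Ollama's default local endpoint; a failed fetch means it isn't running.
    await fetch("http://localhost:11434/api/tags");
    return "ollama";
  } catch {
    return "mock";
  }
}
```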
Run without API keys (force the mock backend):

```bash
versus nano vim --backend mock
```

Gemini:

```bash
export GEMINI_API_KEY="your_key_here"
versus nano vim --backend gemini
```

OpenAI:

```bash
export OPENAI_API_KEY="your_key_here"
versus nano vim --backend openai
```

Ollama (local):

```bash
# install + run ollama first
versus nano vim --backend ollama --model llama3
```

## Usage
```bash
versus <left> <right> [options]
```

Common options:

- `-b, --backend <backend>`: `auto|openai|gemini|ollama|mock`
- `-m, --model <model>`: provider-specific model name
- `--level <level>`: `beginner|intermediate|advanced`
- `--mode <mode>`: `summary|cheatsheet|table`
- `--format <format>`: `rendered|markdown|json`
- `--raw`: output raw Markdown (disable terminal rendering)
- `--color <mode>`: `auto|always|never` (alias: `--no-color`; also respects `NO_COLOR=1`)
- `--no-cache`: bypass the cache
- `--ttl-hours <n>`: cache TTL in hours (default: `720` = 30 days)
- `--max-doc-chars <n>`: max local-docs characters per side
- `--no-docs`: don't read local docs (LLM general knowledge only)
- `-d, --debug`: show debug metadata

Tip: flags can be passed as `--backend gemini` or `--backend=gemini`.
## Helpful subcommands

### `versus status` (alias: `versus doctor`)

Checks your environment (Node, `man`, cache path) and backend configuration.

```bash
versus status
versus doctor
```

### `versus cache`

Inspect or clear your local cache:

```bash
versus cache
versus cache --clear
```

### `versus prompt`

View the full prompt that would be sent to the backend (no API call). Opens in a pager by default to avoid dumping huge text.

```bash
versus prompt nano vim
```

Other modes:

```bash
versus prompt nano vim --stdout
versus prompt nano vim --editor
versus prompt nano vim --output prompt.txt
```

## Configuration
Optional config file locations (first found wins):

1. `${XDG_CONFIG_HOME:-~/.config}/versus/config.json`
2. `~/.versusrc.json`
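A minimal sketch of that "first found wins" lookup, assuming a hypothetical `loadConfig` helper (not the project's actual loader):

```ts
import { existsSync, readFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

// First existing config file wins; no file means all defaults apply.
function loadConfig(): Record<string, unknown> {
  const xdg = process.env.XDG_CONFIG_HOME ?? join(homedir(), ".config");
  const candidates = [
    join(xdg, "versus", "config.json"),
    join(homedir(), ".versusrc.json"),
  ];
  const found = candidates.find((p) => existsSync(p));
  return found ? JSON.parse(readFileSync(found, "utf8")) : {};
}
```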
Example:

```json
{
  "backend": "gemini",
  "model": "gemini-2.5-flash",
  "level": "intermediate",
  "mode": "summary",
  "ttlHours": 720,
  "maxDocChars": 6000
}
```

## Caching
Cache file: `~/.cache/versus/cache.json`

Use `--no-cache` to force a fresh response, or `--ttl-hours <n>` to change how long entries stay fresh.
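As a rough illustration of the TTL check (a sketch under assumptions; the actual on-disk format of `cache.json` isn't documented here):

```ts
// Hypothetical cache entry: anything older than ttlHours is treated as a miss.
interface CacheEntry {
  createdAt: number; // epoch milliseconds
  value: string;     // the stored comparison output
}

function getFresh(entry: CacheEntry | undefined, ttlHours: number): string | null {
  if (!entry) return null;
  const ageMs = Date.now() - entry.createdAt;
  return ageMs <= ttlHours * 3_600_000 ? entry.value : null; // 3,600,000 ms per hour
}
```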
## Privacy / Data

Versus sends your inputs (and, by default, any collected local docs from `man`/`--help`/`info`) to the selected backend.

- Want zero network calls? Use `--backend mock`.
- Want no local docs included? Use `--no-docs`.
## Troubleshooting

### WSL: `Error: fetch failed`

Some WSL setups prefer IPv6 DNS resolution, which can cause fetch failures. Force IPv4-first resolution:

```bash
export NODE_OPTIONS="--dns-result-order=ipv4first"
```

## Development
```bash
npm test
```

## License
MIT
