@peixl/ifq
v0.11.8
Think it. Ask it. Done. — A zero-dependency AI CLI for your terminal. Ask, translate, explain, commit — all in one keystroke.
A tiny AI companion that lives in your terminal.
No browser. No context switching. No bloat.
Just you, your keyboard, and an answer — instantly.
Now with a persistent chat mode: stay inside ifq, keep asking, and carry forward recent context automatically.
Why ifq?
You're deep in the terminal. You have a question. Don't leave. Don't switch windows. Don't break flow.
Just ask.
```
ifq "what's the difference between rebase and merge"
```

That's it. AI answers, right where you are.
What it does
Ask anything — like having a brilliant friend on speed dial.

```
ifq ask "explain kubernetes in one sentence"
```

Stay in chat — enter the app once, then keep talking without repeating ifq.

```
ifq
ifq > explain why my curl command returns 403
ifq > now rewrite it with headers
ifq > /switch work
ifq > /model gpt-4o
ifq > /exit
```

Decode any command — never Google a cryptic shell command again.

```
ifq explain "find . -name '*.log' -mtime +7 -delete"
```

Generate shell commands — describe what you want, get the command.

```
ifq shell "find all png files larger than 1MB modified in the last week"
```

Review code — instant code review from a diff.

```
git diff | ifq review
git diff --cached | ifq cr
```

Translate instantly — Chinese to English. English to Chinese. Auto-detected.

```
ifq t "这段代码有什么问题"
```

Write commit messages — because you'd rather ship code than describe it.

```
git add .
ifq commit
```

Pipe anything — ifq plays well with the tools you already love.

```
cat error.log | ifq ask "what went wrong"
curl -s api.example.com | ifq ask "summarize this response"
git diff | ifq review
git diff --cached | ifq cr
```

Get started
Two steps. Thirty seconds.
Docs, examples, and release notes live at cli.ifq.ai.
```
npm install -g @peixl/ifq
ifq config --key sk-your-api-key
```

Done. Start asking.
Secure Agent OS prompt store
ifq now ships the full Agent OS prompt-engineering templates inside the npm package, but stores the deployed runtime copies encrypted at rest in your user directory.
What this means:

```
npm install -g @peixl/ifq
ifq
```

On first run, ifq will:
- verify the packaged template manifest
- deploy the Agent OS templates into the secure store
- encrypt them under ~/.ifq/.secure/files/
- read and decrypt them only when needed at runtime
- preserve your local changes during future updates unless you force overwrite
Useful commands:
```
ifq evolve init          # initialize or backfill the Agent OS prompt store
ifq evolve init --force  # force refresh packaged templates
ifq evolve doctor        # check encryption store, manifest integrity, plaintext leftovers
```

You can also provide your own master key:

```
export IFQ_PROMPT_MASTER_KEY=<64-char-hex-or-base64-32-byte-key>
```

If no environment key is provided, ifq generates a local key file at ~/.ifq/.keys/prompt-master.key.
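For illustration, a 32-byte key in the 64-char hex form mentioned above can be generated with Python's standard library (a sketch only — exporting the printed value as IFQ_PROMPT_MASTER_KEY is up to you):

```python
import secrets

# Generate 32 random bytes rendered as 64 hexadecimal characters,
# matching the <64-char-hex> form accepted by IFQ_PROMPT_MASTER_KEY.
key = secrets.token_hex(32)
print(key)
```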
Works with everything
OpenAI, Anthropic (Claude), OpenRouter, DeepSeek, Ollama, any OpenAI-compatible API. Provider is auto-detected from URL, or can be set explicitly.
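A minimal sketch of what URL-based auto-detection could look like (illustrative only; the function name and matching rules are assumptions, not ifq's actual code):

```python
def detect_provider(api_url: str) -> str:
    """Guess the provider from the API base URL (hypothetical logic)."""
    url = api_url.lower()
    if "anthropic.com" in url:
        return "anthropic"
    if "openrouter.ai" in url:
        return "openrouter"
    # Anything else (OpenAI, DeepSeek, Ollama, self-hosted gateways)
    # is treated as OpenAI-compatible.
    return "openai"

print(detect_provider("https://api.anthropic.com/v1"))   # anthropic
print(detect_provider("https://openrouter.ai/api/v1"))   # openrouter
print(detect_provider("https://api.deepseek.com/v1"))    # openai
```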
OpenAI (default)

```
ifq config --key sk-your-openai-key
ifq config --model gpt-4o-mini
```

Anthropic (Claude)

```
ifq config --key sk-ant-your-key
ifq config --url https://api.anthropic.com/v1
ifq config --model claude-sonnet-4-20250514
```

OpenRouter

```
ifq config --key sk-or-your-key
ifq config --url https://openrouter.ai/api/v1
ifq config --model anthropic/claude-sonnet-4-20250514
```

DeepSeek

```
ifq config --url https://api.deepseek.com/v1
ifq config --model deepseek-chat
```

Or use environment variables:

```
export IFQ_API_KEY=sk-...
export IFQ_API_URL=https://api.openai.com/v1
export IFQ_MODEL=gpt-4o-mini
export IFQ_PROVIDER=openai   # optional: openai, anthropic, openrouter
```

Deep Analysis — the killer feature 🧠
Ask one question. Get answers from every model — then a synthesized consensus.
Deep Analysis queries multiple models in parallel (OpenClaw model chain or your configured model), compares their answers, and produces a single expert synthesis. Think of it as a panel of AI experts working for you.
```
ifq deep "is Rust or Go better for microservices in 2025?"
```

What happens behind the scenes:
- Your question is sent to up to 4 models simultaneously
- Each model responds independently
- A synthesis pass compares all answers and produces a consensus
- You get individual perspectives + the final expert synthesis
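The fan-out-and-synthesize flow above can be sketched roughly like this (a toy model, not ifq's implementation; ask_model and synthesize are stand-ins for real API calls):

```python
from concurrent.futures import ThreadPoolExecutor

def ask_model(model: str, question: str) -> str:
    # Stand-in for a real chat-completion call to one provider.
    return f"{model}: answer to {question!r}"

def synthesize(answers: list[str]) -> str:
    # Stand-in for the synthesis pass that compares all answers.
    return "consensus of " + ", ".join(a.split(":")[0] for a in answers)

def deep(question: str, models: list[str]) -> str:
    # Query up to 4 models in parallel, then run one synthesis pass.
    with ThreadPoolExecutor(max_workers=4) as pool:
        answers = list(pool.map(lambda m: ask_model(m, question), models[:4]))
    return synthesize(answers)

print(deep("rust vs go?", ["gpt-4o-mini", "claude", "deepseek-chat"]))
```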
Inside chat mode:
```
/deep explain the trade-offs of event sourcing vs CRUD
```

This is something no single-model tool can do. It's like having a committee of domain experts — instantly.
Web context injection 🌐
Capture a live web page and feed it directly into your next chat message.
```
/web https://docs.example.com/api-reference
explain how the auth flow works
```

The page snapshot becomes invisible context for your very next question — no copy-pasting, no switching windows.
OpenClaw integration
When OpenClaw is installed, ifq automatically detects it and unlocks a full suite of capabilities — including model import, proxy mode, and deep multi-model analysis.
Model management
One command scans all installed AI CLI tools and imports their models into ifq:
```
ifq i                     # Scan & import from all tools
ifq import gemini-flash   # Import a specific model by alias
```

Supported tools:
| Tool | What it extracts |
|------|------------------|
| OpenClaw | Full model chain, aliases, context windows |
| Claude Code | Active model, API endpoint + credentials |
| Codex (OpenAI) | Model catalog from cache, active model |
| Gemini CLI | Detected via OAuth presence |
| OpenCode | Recently used models + providers |
Models with extractable credentials (e.g. Claude Code's API token) are stored as profiles — switching to them auto-applies the correct endpoint and key.
Browse, switch, and probe models interactively:
```
ifq m               # List models with status + interactive selection
ifq m 3             # Switch to model #3
ifq m gemini-flash  # Switch by name/alias
ifq m --probe       # List models + live latency test
```

Inside chat: /m, /m 3, /i, /m probe — same shortcuts work.
Invisible router: When you select an OpenClaw model (e.g. openai-codex/gpt-5.4), ifq automatically routes queries through the OpenClaw agent. When you select a profiled model (e.g. MiniMax-M2.7 from Claude Code), ifq auto-applies the stored credentials — no manual config needed.
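As an illustration, routing by model identifier could look like the following (assumed behavior sketched from the description above; the prefix check and profile lookup are hypothetical, not ifq's real logic):

```python
def route(model: str, profiles: dict[str, dict]) -> dict:
    """Pick a backend and credentials for a selected model (hypothetical)."""
    if "/" in model:
        # OpenClaw chain models look like "agent/model"; route via the agent.
        return {"backend": "openclaw", "model": model}
    if model in profiles:
        # Profiled models carry their own endpoint + key.
        p = profiles[model]
        return {"backend": "direct", "model": model,
                "url": p["url"], "key": p["key"]}
    # Otherwise fall back to the user's configured provider.
    return {"backend": "direct", "model": model}

profiles = {"MiniMax-M2.7": {"url": "https://api.example.com/v1", "key": "sk-x"}}
print(route("openai-codex/gpt-5.4", profiles)["backend"])  # openclaw
print(route("MiniMax-M2.7", profiles)["backend"])          # direct
```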
Proxy mode
Don't have an API key? Use OpenClaw as your AI backend:
```
ifq config --proxy on   # Route all queries through OpenClaw agent
```

When proxy mode is active, ifq falls back to OpenClaw's agent for all chat queries — zero API keys needed.
Command line
```
ifq m                                        # List models + interactive switch
ifq m --probe                                # List models + latency test
ifq i                                        # Scan & import all AI tool models
ifq claw                                     # Status & capabilities
ifq claw agent "summarize my last session"   # Talk to the OpenClaw agent
ifq claw models [--probe]                    # Model chain + auth + aliases + optional probe
ifq claw import [model]                      # Import models (all or by name)
ifq claw skills                              # List available skills
ifq claw skill "web scraping"                # Search ClawHub skills
ifq claw memory "project plan"               # Search semantic memory
ifq claw browser https://example.com         # Navigate browser
ifq claw snapshot                            # Browser page snapshot
ifq claw send <target> <msg>                 # Send via channels
ifq claw docs "mcp setup"                    # Search OpenClaw docs
ifq claw cron                                # List scheduled jobs
ifq deep "question"                          # Multi-model deep analysis
```

Inside chat mode
```
/m [N|name|probe]     List models / switch by number or name / latency probe
/i [model]            Scan & import AI models (or one by name)
/d <question>         Deep analysis shortcut
/claw                 OpenClaw status
/agent <msg>          Run an agent turn
/deep <question>      Multi-model deep analysis
/web <url>            Capture web page for next chat context
/skills               List skills
/skill <query>        Search ClawHub
/memory <query>       Search memory
/browser <url>        Navigate browser
/snapshot             Browser page snapshot
/send <target> <msg>  Send message
/docs <query>         Search docs
/cron                 List cron jobs
/status               Full OpenClaw status
```

When OpenClaw is connected, the AI system prompt is automatically enriched with available capabilities for smarter context-aware responses.
Design principles
- Zero dependencies. Nothing to break. Nothing to audit.
- Multi-provider. OpenAI, Anthropic, OpenRouter — auto-detected from URL.
- Streams by default. Answers appear as they're written.
- 30s connection timeout. No more hanging on bad networks.
- Pipes welcome. Compose with grep, cat, curl — whatever.
- Your key, your model. No middleman. No data collection.
- One config file. ~/.ifqrc, permission 600. That's it.
- Persistent chat memory. The latest 10 messages stay verbatim; older turns are compressed into key memory points.
- Performance-first memory. Older turns are compacted in batches, not on every single round.
- Graceful error recovery. API failures don't crash the chat — you stay in the REPL.
- Config cached. Config file is only re-read when it changes on disk.
- Response timing. See how long each response takes.
Quick reference
| Command | What it does |
|---|---|
| ifq | Enter persistent chat app |
| ifq --session <name> | Enter a named chat session |
| ifq chat [question] | Enter chat app, optionally with a first message |
| ifq chat --session <name> [question] | Enter a named chat session |
| ifq sessions | List local chat sessions |
| ifq delete <name> | Delete a session |
| ifq "question" | Ask anything (shorthand) |
| ifq ask <question> | Ask with explicit subcommand |
| ifq deep <question> | Multi-model deep analysis |
| ifq explain <cmd> | Explain a shell command |
| ifq shell <desc> | Generate a shell command |
| ifq translate <text> | Translate (zh↔en) |
| ifq t <text> | Quick translate |
| ifq commit | Generate commit message |
| ifq review | Code review from diff |
| ifq m | List / switch models (interactive) |
| ifq m --probe | Model list + live latency test |
| ifq i | Scan & import all AI tool models |
| ifq claw import [model] | Import OpenClaw model into ifq |
| ifq config --show | View current config |
| ifq config --proxy on | Enable OpenClaw proxy mode |
| ifq help | Show help |
Chat memory
Interactive chat stores session data locally under ~/.ifq/sessions/<session>.json.
- The most recent 10 messages are kept exactly as-is.
- Older messages are queued and compacted into a rolling summary of goals, facts, decisions, preferences, and open questions.
- The first overflow compacts immediately to seed memory; after that, compaction happens in batches for better performance.
- Memory compaction failures are silently retried on the next turn — your messages are never lost.
- Use
--session <name>to separate work, ops, research, or personal contexts. - Use
/clearinside chat to reset the current session memory. - Use
/sessioninside chat to show the current session and model. - Use
/sessionsorifq sessionsto list local sessions and recent activity. - Use
/switch <name>to switch sessions without leaving chat. - Use
/model <name>to change model on the fly. - Use
/retryto re-run the last message. - Use
/delete <name>orifq delete <name>to remove a session. - Use
/summaryinside chat to inspect the current memory summary.
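The keep-10-verbatim, compact-the-rest scheme described above might look like this in outline (a toy sketch; summarize_batch stands in for the real LLM-backed compaction, and the join logic is invented for illustration):

```python
KEEP_VERBATIM = 10  # most recent messages kept exactly as-is

def summarize_batch(messages: list[str], summary: str) -> str:
    # Stand-in for the LLM pass that folds old turns into key memory points.
    return summary + " | " + "; ".join(messages)

def compact(history: list[str], summary: str) -> tuple[list[str], str]:
    """Keep the newest KEEP_VERBATIM messages; fold the rest into the summary."""
    if len(history) <= KEEP_VERBATIM:
        return history, summary
    overflow, recent = history[:-KEEP_VERBATIM], history[-KEEP_VERBATIM:]
    return recent, summarize_batch(overflow, summary)

history = [f"msg{i}" for i in range(12)]
history, summary = compact(history, "seed")
print(len(history))  # 10
print(summary)       # seed | msg0; msg1
```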
The philosophy
Software should feel light. It should solve real problems in the fewest keystrokes. It should respect your time, your privacy, and your flow.
ifq is built for people who think fast, work fast, and want AI that keeps up.
Beautiful tools make beautiful work.
