@todoforai/subagent v0.1.2
todoforai-subagent
Run a TODO for AI sub-agent (FluidAgent) from the CLI. Pipe stdin in, ask a question, get an answer.
Install
npm install -g @todoforai/subagent
# or in the workspace:
cd api-apps/todoforai-subagent
bun install && bun run build
chmod +x dist/cli.js
ln -s "$(pwd)/dist/cli.js" ~/.local/bin/todoforai-subagent
Auth
export TODOFORAI_API_KEY=sk_...
# or pass --api-key
# or rely on ~/.todoforai/credentials.json (written by `todoforai login`)
Usage
curl https://example.com | todoforai-subagent "What color is this site?"
todoforai-subagent -m openai:openai/gpt-5.4-mini "Translate to Hungarian: hello"
echo "1+1" | todoforai-subagent -s "You are a calculator." "Compute"
Flags
| Flag | Description |
|------|-------------|
| -m, --model | Model (default: openai:openai/gpt-5.4-mini) |
| -s, --sysmsg | System message: `:<preset>`, `@<file>`, or raw text. Presets: :review, :explore, :plan, :summarize |
| --tools | Comma-separated tool names (e.g. read,grep,bash) |
| --timeout | Timeout seconds (default: 120, max: 1800) |
| --api-url | API URL override |
| --api-key | API key override |
| --no-stdin | Ignore piped stdin |
| -h, --help | Show help |
| -v, --version | Show version |
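The `-s/--sysmsg` flag above accepts three forms: a named preset, a file reference, or raw text. A minimal sketch of that resolution logic, assuming hypothetical preset strings (the real texts live inside the package):

```typescript
import { readFileSync } from "node:fs";

// Hypothetical preset texts -- illustrative only, not the package's actual strings.
const PRESETS: Record<string, string> = {
  review: "You are a meticulous code reviewer.",
  explore: "You map codebases and explain their structure.",
  plan: "You produce step-by-step implementation plans.",
  summarize: "You summarize input concisely.",
};

// Resolve a -s/--sysmsg argument: ":<preset>" | "@<file>" | raw text.
function resolveSysmsg(spec: string): string {
  if (spec.startsWith(":")) {
    const preset = PRESETS[spec.slice(1)];
    if (!preset) throw new Error(`Unknown preset: ${spec}`);
    return preset;
  }
  if (spec.startsWith("@")) {
    // "@<file>": read the system message from a file on disk.
    return readFileSync(spec.slice(1), "utf8");
  }
  return spec; // raw system message text
}
```

So `-s :review` expands to a preset, `-s @prompt.txt` reads a file, and anything else is passed through verbatim.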
Companion CLIs
- todoforai-review: diff a repo and ask the agent to review it
- todoforai-explore: ask the agent to map a codebase
- todoforai-summary: summarize files or piped input
Default models per CLI
| CLI | Default model | Rationale |
|-----|---------------|-----------|
| todoforai-subagent | BALANCED_MODEL | General-purpose baseline |
| todoforai-explore | BALANCED_MODEL | Codebase mapping needs reasoning |
| todoforai-review | CLEVER_MODEL | Smartest model — review quality matters |
| todoforai-summary | FAST_MODEL | Fast/cheap — fits summarization workload |
Single source of truth: src/models.ts — bump model strings there, all CLIs follow.
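A sketch of what src/models.ts might export, matching the table above; the concrete model strings for FAST_MODEL and CLEVER_MODEL are assumptions (only the BALANCED_MODEL default string appears in this README):

```typescript
// Hypothetical shape of src/models.ts -- the names mirror the table above,
// but FAST_MODEL and CLEVER_MODEL strings are illustrative assumptions.
export const FAST_MODEL = "openai:openai/gpt-5.4-mini";
export const BALANCED_MODEL = "openai:openai/gpt-5.4-mini"; // the documented default
export const CLEVER_MODEL = "openai:openai/gpt-5.4";

// Per-CLI defaults, as listed in the table above.
export const DEFAULT_MODELS = {
  "todoforai-subagent": BALANCED_MODEL,
  "todoforai-explore": BALANCED_MODEL,
  "todoforai-review": CLEVER_MODEL,
  "todoforai-summary": FAST_MODEL,
} as const;
```

Centralizing the strings this way means a single edit re-points every CLI at a new model.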
How it works
todoforai-subagent sends a POST request to {api-url}/api/v1/subagent/work with the prompt (and any piped stdin); the backend dispatches the job over WebSocket to a connected Julia agent process, which runs FluidAgent.work() and returns the response.
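The client side of that handshake might look like the following sketch; the request body fields (prompt, stdin) and the Bearer auth header are assumptions inferred from the flags above, not a documented wire format:

```typescript
// Build the request for POST {apiUrl}/api/v1/subagent/work.
// Body field names and the Authorization header are illustrative assumptions.
interface WorkRequest {
  url: string;
  init: { method: "POST"; headers: Record<string, string>; body: string };
}

function buildWorkRequest(
  apiUrl: string,
  apiKey: string,
  prompt: string,
  stdin?: string,
): WorkRequest {
  return {
    url: `${apiUrl.replace(/\/$/, "")}/api/v1/subagent/work`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ prompt, stdin }),
    },
  };
}

// Usage sketch (hypothetical endpoint value):
// const { url, init } = buildWorkRequest("https://api.todofor.ai", key, "Review this");
// const res = await fetch(url, init); // backend relays the job to the Julia agent
```

The CLI itself only blocks until the WebSocket round-trip completes or the `--timeout` elapses.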
