# text-gen

Package `text-gen-cli` v0.1.0: a production-grade Bun + TypeScript CLI scaffold.
text-gen is a Bun + TypeScript CLI for provider-agnostic text generation with:
- OpenAI (`text-gen openai`)
- Gemini (`text-gen gemini`)
- Claude (`text-gen claude`)
It supports direct/file/stdin prompts, template interpolation, JSON output, and writing generated text to disk.
## Install and Setup (Bun)

### Requirements

- Bun >= 1.1.0

### Install dependencies

```shell
bun install
```

### Run in development

```shell
bun run dev -- openai --model gpt-4o-mini --user "Say hello"
```

### Build

```shell
bun run build
```

### API Keys
Set provider keys in your shell environment:

```shell
export OPENAI_API_KEY="your_openai_key"
export GEMINI_API_KEY="your_gemini_key"
# or use GOOGLE_API_KEY for Gemini
export GOOGLE_API_KEY="your_google_key"
export ANTHROPIC_API_KEY="your_anthropic_key"
```

Provider key resolution:

- `openai`: `OPENAI_API_KEY`
- `gemini`: `GEMINI_API_KEY` or `GOOGLE_API_KEY`
- `claude`: `ANTHROPIC_API_KEY`
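The resolution order above can be sketched in TypeScript. This is a minimal illustration, not the CLI's actual source; `resolveApiKey` and `KEY_ENV_VARS` are hypothetical names:

```typescript
// Hypothetical sketch of the documented key-resolution order; illustrative only.
type Provider = "openai" | "gemini" | "claude";

const KEY_ENV_VARS: Record<Provider, string[]> = {
  openai: ["OPENAI_API_KEY"],
  // GEMINI_API_KEY takes precedence if both Gemini keys are set
  gemini: ["GEMINI_API_KEY", "GOOGLE_API_KEY"],
  claude: ["ANTHROPIC_API_KEY"],
};

function resolveApiKey(
  provider: Provider,
  env: Record<string, string | undefined>,
): string {
  for (const name of KEY_ENV_VARS[provider]) {
    const value = env[name];
    if (value) return value; // first non-empty key wins
  }
  throw new Error(
    `Missing API key for ${provider}. Fix: set ${KEY_ENV_VARS[provider].join(" or ")}`,
  );
}
```

In practice the CLI reads these from `process.env`; passing the environment in explicitly just keeps the sketch testable.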
## Command Reference

### Global

```shell
text-gen --help
text-gen -h
text-gen --version
text-gen -v
```

### Providers

```shell
text-gen openai [options]
text-gen gemini [options]
text-gen claude [options]
```

Provider help:

```shell
text-gen openai --help
text-gen gemini --help
text-gen claude --help
```

## Full Flag Reference
All provider commands support:

- `--model <id>`: optional model identifier (defaults shown below)
- `--temperature <number>`: sampling temperature (validated to `0..2`)
- `--max-tokens <integer>`: max output tokens (integer, validated `> 0`)
- `--top-p <number>`: nucleus sampling (validated to `0..1`)
- `--system <text>`: inline system prompt
- `--user <text>`: inline user prompt
- `--system-file <path>`: read system prompt from file
- `--user-file <path>`: read user prompt from file
- `--var <key=value>`: template variable, repeatable
- `--json`: print structured JSON payload
- `--output <path>`: write generated text to a file
- `-h, --help`: command help
Rules:

- If omitted, `--model` defaults to:
  - `openai`: `gpt-5.2-chat-latest`
  - `gemini`: `gemini-pro-latest`
  - `claude`: `claude-sonnet-4-20250514`
- `--var` can be repeated; most other flags can only appear once.
- You cannot combine `--system` with `--system-file`.
- You cannot combine `--user` with `--user-file`.
## Prompt Input Modes

User prompt resolution order:

1. `--user <text>`
2. `--user-file <path>`
3. stdin fallback (only if no `--user` or `--user-file` is provided)

System prompt can come from:

- `--system <text>`, or
- `--system-file <path>`
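The user-prompt resolution order can be sketched as follows. This is an illustrative model of the documented behavior, not the CLI's implementation; `readFile` and `readStdin` stand in for real I/O:

```typescript
// Hypothetical sketch of the documented user-prompt resolution order.
interface PromptFlags {
  user?: string;
  userFile?: string;
}

function resolveUserPrompt(
  flags: PromptFlags,
  readFile: (path: string) => string,
  readStdin: () => string | undefined,
): string {
  if (flags.user !== undefined) return flags.user;                    // 1. --user
  if (flags.userFile !== undefined) return readFile(flags.userFile);  // 2. --user-file
  const piped = readStdin();                                          // 3. stdin fallback
  if (piped !== undefined && piped.trim() !== "") return piped;
  throw new Error(
    "Missing user prompt. Provide --user, --user-file, or pipe content via stdin.",
  );
}
```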
### Direct text

```shell
text-gen openai --model gpt-4o-mini --user "Summarize this release note."
```

### File input

```shell
text-gen gemini --model gemini-2.5-flash --system-file prompts/system.txt --user-file prompts/user.txt
```

### stdin fallback

```shell
echo "Write a 1-line tagline for a coffee shop." | text-gen claude --model claude-3-7-sonnet
```

## Template Variables
Use `{{key}}` placeholders in prompt text and pass values via the repeatable `--var key=value` flag.

Example template:

```text
Write a {{tone}} summary about {{topic}}.
```

CLI usage:

```shell
text-gen openai \
  --model gpt-4o-mini \
  --user "Write a {{tone}} summary about {{topic}}." \
  --var tone=concise \
  --var topic=observability
```

File + variables:

```shell
text-gen gemini \
  --model gemini-2.5-flash \
  --user-file prompts/template.txt \
  --var product="text-gen" \
  --var audience=developers
```

## Output Modes
### Default (plain text)

Without `--json`, stdout is only the generated text.

### JSON mode (`--json`)

With `--json`, stdout is:

```json
{
  "success": true,
  "provider": "openai",
  "model": "gpt-4o-mini",
  "text": "Hello world",
  "usage": {
    "inputTokens": 1,
    "outputTokens": 2,
    "totalTokens": 3
  },
  "finish_reason": "stop"
}
```

Notes:

- `usage` and `finish_reason` can be `null` depending on the provider response.
- Raw provider output is not included in the CLI payload.
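For consumers of `--json` output, the payload shape can be described with a TypeScript interface. This is inferred from the example above, not an official type exported by the CLI:

```typescript
// Illustrative shape inferred from the JSON example above;
// not a type exported by the CLI itself.
interface Usage {
  inputTokens: number;
  outputTokens: number;
  totalTokens: number;
}

interface GenerationPayload {
  success: boolean;
  provider: "openai" | "gemini" | "claude";
  model: string;
  text: string;
  usage: Usage | null;          // may be null depending on provider response
  finish_reason: string | null; // may be null depending on provider response
}

// Parsing the CLI's --json stdout:
function parsePayload(stdout: string): GenerationPayload {
  return JSON.parse(stdout) as GenerationPayload;
}
```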
### `--output <path>` behavior

- Always writes the generated `text` to the target file.
- Still prints to stdout (plain text without `--json`, the JSON payload with `--json`).
## Examples

### OpenAI

```shell
text-gen openai \
  --model gpt-4o-mini \
  --system "You are concise." \
  --user "Explain retries in distributed systems in 3 bullets." \
  --temperature 0.2
```

### Gemini

```shell
text-gen gemini \
  --model gemini-2.5-flash \
  --user "Generate 5 release-note headlines for a CLI tool." \
  --top-p 0.9 \
  --max-tokens 120
```

### Claude

```shell
text-gen claude \
  --model claude-3-7-sonnet \
  --user "Rewrite this paragraph for a technical audience." \
  --json
```

### stdin + output file

```shell
cat prompts/user.txt | text-gen openai --model gpt-4o-mini --output generated.txt
```

### File prompt + JSON

```shell
text-gen claude --model claude-3-7-sonnet --user-file prompts/user.txt --json
```

## Troubleshooting
### Missing API keys

Examples:

- OpenAI: `Missing API key for OpenAI. Fix: set OPENAI_API_KEY...`
- Gemini: `Missing API key for Gemini. Fix: set GEMINI_API_KEY or GOOGLE_API_KEY...`
- Claude: `Missing API key for Claude. Fix: set ANTHROPIC_API_KEY...`

Fix by exporting the required key(s) before running.
### Conflicting prompt arguments

If both an inline flag and a file flag are passed for the same field:

- `--user` + `--user-file`
- `--system` + `--system-file`

the CLI fails with a conflict error. Use one source per field.
### Unresolved template variables

If placeholders remain after interpolation, the CLI fails, for example:

```text
Unresolved template variables in user prompt: {{topic}}. Provide values via --var key=value.
```

Add all required `--var key=value` entries.
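The `{{key}}` interpolation and the unresolved-variable check can be sketched together. This is a hypothetical model of the documented behavior, not the CLI's source; `interpolate` is an illustrative name:

```typescript
// Hypothetical sketch of {{key}} interpolation plus the documented
// unresolved-variable check; illustrative only.
function interpolate(template: string, vars: Record<string, string>): string {
  // Replace {{key}} with its value; leave unknown placeholders untouched.
  const result = template.replace(
    /\{\{(\w+)\}\}/g,
    (match, key: string) => vars[key] ?? match,
  );
  const unresolved = result.match(/\{\{\w+\}\}/g);
  if (unresolved) {
    throw new Error(
      `Unresolved template variables in user prompt: ${unresolved.join(", ")}. ` +
        "Provide values via --var key=value.",
    );
  }
  return result;
}
```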
### Flag parsing/validation failures

Examples:

- Empty model value fails, e.g. `--model` without a value.
- Bad number: `Invalid --temperature value "nan". Expected a number.`
- Bad integer: `Invalid --max-tokens value "12.5". Expected an integer.`
- Range checks: `temperature` must be `0..2`, `topP` must be `0..1`, and `maxTokens` must be `> 0`.
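The validation rules above can be sketched as small helpers. These are illustrative, assuming the error strings shown above; they are not the CLI's actual implementation:

```typescript
// Hypothetical sketches of the documented flag validation; illustrative only.
function parseNumberFlag(flag: string, raw: string): number {
  const n = Number(raw);
  if (Number.isNaN(n)) {
    throw new Error(`Invalid ${flag} value "${raw}". Expected a number.`);
  }
  return n;
}

function parseIntegerFlag(flag: string, raw: string): number {
  const n = Number(raw);
  if (!Number.isInteger(n)) {
    throw new Error(`Invalid ${flag} value "${raw}". Expected an integer.`);
  }
  return n;
}

// Range checks: temperature 0..2, topP 0..1 (maxTokens > 0 is analogous).
function assertRange(name: string, value: number, min: number, max: number): void {
  if (value < min || value > max) {
    throw new Error(`${name} must be ${min}..${max}`);
  }
}
```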
### Missing user prompt

If no `--user`, no `--user-file`, and no stdin input:

```text
Missing user prompt. Provide --user, --user-file, or pipe content via stdin...
```
## Development, Scripts, and Tests

- `bun run dev`: run the CLI entrypoint in development
- `bun run build`: build distributable files to `dist/`
- `bun run typecheck`: TypeScript type checks (`tsc --noEmit`)
- `bun run test`: run the test suite (`bun test`)
- `bun run check`: run `typecheck` then `test`

Local verification:

```shell
bun run typecheck
bun test
```