kamuicode
Agent-friendly npm CLI for AI generation from tools like Codex CLI and Claude Code.
The CLI reads your own FAL_KEY from the environment and keeps the agent-facing workflow on the kamuicode command. Generated user code should call kamuicode, not provider SDKs or provider HTTP endpoints directly.
Before any generation work, FAL_KEY must already be set in the local shell or a cloud secret/environment-variable store. If it is missing, agents must stop before creating scripts, package.json, helper files, request folders, ffmpeg fallbacks, or project scaffolding.
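The stop-before-anything rule above can be sketched as a minimal shell preflight (illustrative only; the messages and function name are not part of kamuicode):

```shell
# Minimal preflight sketch: check FAL_KEY before any generation work.
# When the check fails, the correct agent behavior is to create nothing.
fal_key_ready() { [ -n "${FAL_KEY:-}" ]; }

if fal_key_ready; then
  echo "FAL_KEY present, safe to run kamuicode"
else
  echo "FAL_KEY missing: set it first, do not scaffold anything" >&2
fi
```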
One-line prompt for AI agents
After installing, give this to Codex CLI, Claude Code, or another coding agent:
kamuicode prompt bytedance/seedance-2.0/image-to-video

That prints a tiny instruction telling the agent to use kamuicode. Normal kamuicode commands also print a short guide reminder to stderr, so agents see the operating rules in their logs without breaking JSON stdout.
kamuicode guide bytedance/seedance-2.0/image-to-video

Use this for the full bundled guide. Set KAMUICODE_GUIDE=full to print the full guide to stderr on each command, or KAMUICODE_GUIDE=off to suppress the guide reminder.
Install
npm install -g kamuicode

Release Notes
The npm package page renders this README as Markdown, so the latest release notes are shown here directly.
Latest
0.1.20
- Added exhaustive model discovery helpers for agents.
- Added `kamuicode models --categories` to print the bundled category list.
- Added `kamuicode models --all` to page through the current model query or category.
- Added `kamuicode models --complete` to merge the bulk model listing with every known category listing by `endpoint_id`.
- `--complete` reports `sources.category_only` and `sources.bulk_only` so gaps between bulk and category indexes are visible.
0.1.19
- Improved `batch` for long-running parallel jobs.
- `--output-file` now writes a live JSON snapshot as each job completes, not only at the end.
- Added `--output-jsonl <path>` to append each completed job immediately as JSONL.
- Added `--jsonl` to print each completed job to stdout as soon as it finishes.
- Batch result items now include extracted `url` and `urls` fields when URLs are present in the model output.
0.1.18
- Fixed `--auto-upload` handling for bare relative local paths such as `public/images/input.png`.
- URL-like fields now upload existing relative files even when the path does not start with `./` or `../`.
- Missing relative files are still left as normal strings, so prompts and remote URLs are not accidentally rewritten.
0.1.17
- Added retry handling around local file uploads using the official storage client path.
- Retries transient storage failures such as 500/502/503/504, 429, and common network errors.
- Added `--upload-retries`, `--upload-retry-delay`, `KAMUICODE_UPLOAD_RETRIES`, and `KAMUICODE_UPLOAD_RETRY_DELAY`.
- Improved upload failure diagnostics so temporary storage failures are distinguishable from model/run/result failures.
0.1.16
- Moved the latest release notes directly into the README so the npm package page renders them as Markdown.
- Kept `CHANGELOG.md` as the full history for direct package/file access.
0.1.15
- Added `CHANGELOG.md` to the npm package files.
- Added a visible Release Notes section in the README with a public npm/unpkg URL.
- Clarified where users can inspect release notes from the web after publish.
0.1.14
- Added official cloud secret storage guidance for GitHub Actions, Vercel, Cloudflare Workers, AWS Secrets Manager, Google Secret Manager, Azure Key Vault, 1Password, Bitwarden, and Doppler.
- Clarified that cloud secret storage alone does not make `FAL_KEY` available to a local terminal.
- Added an execution matrix for local terminal, GitHub Actions, Vercel, Cloudflare Workers, and cloud runtimes.
Full history is maintained in CHANGELOG.md.
After publish, users can also inspect the raw changelog from the web:
- npm package page: https://www.npmjs.com/package/kamuicode
- latest raw changelog: https://unpkg.com/kamuicode@latest/CHANGELOG.md
- specific raw changelog: https://unpkg.com/[email protected]/CHANGELOG.md
After install, kamuicode prints the minimum setup and AI-agent instruction:
FIRST: configure the key before asking an agent to generate anything.
export FAL_KEY="<your-fal-key>"
Use kamuicode as a one-shot CLI. First check FAL_KEY. If missing, stop before creating files.
kamuicode guide <model-id>

Then set your key locally in your own shell. Do not paste the key into AI chat:

export FAL_KEY="<your-fal-key>"

For persistent local use, add the export to your shell profile such as ~/.zshrc, or use your own local secret manager. Do not put FAL_KEY in project files, generated scripts, README snippets, or chat messages.
For cloud, CI, or hosted agent environments, set FAL_KEY in the platform's Secret or Environment Variable settings before running the agent. The agent should only see FAL_KEY as an environment variable, never as pasted chat text.
Important: putting a key in a cloud secret store does not automatically make it available to your local terminal. kamuicode can read only the environment of the process that runs it.
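A quick way to see the difference is to check the process environment directly (a sketch using only standard tools; the wording of the messages is illustrative):

```shell
# Sketch: confirm FAL_KEY is in the environment of the shell that will launch
# kamuicode. printenv exits nonzero when the variable is absent, so a key that
# exists only in a cloud secret store fails this check locally.
if printenv FAL_KEY >/dev/null 2>&1; then
  echo "FAL_KEY is visible to processes started from this shell"
else
  echo "FAL_KEY is not in this environment; export it or inject it per command" >&2
fi
```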
Use the matching setup for the execution location:
- Local Mac/PC: `export FAL_KEY=...`, or inject it for one command with `op run -- kamuicode ...` / `doppler run -- kamuicode ...`.
- GitHub Actions: map `${{ secrets.FAL_KEY }}` into the workflow `env`.
- Vercel: store `FAL_KEY` in Project Settings -> Environment Variables and run inside that Vercel build/function/runtime.
- Cloudflare Workers: store `FAL_KEY` as a Worker Secret and run inside that Worker runtime.
- AWS/GCP/Azure: retrieve from the cloud secret manager with the platform's IAM/SDK path, then pass it to the `kamuicode` process as `FAL_KEY`.
Cloud secret storage guidance checked against official docs:
- GitHub Actions Secrets: repository, organization, or environment secrets for workflow commands.
  https://docs.github.com/en/actions/reference/security/secrets
- Vercel Environment Variables: project/team environment variables for build and function runtime.
  https://vercel.com/docs/environment-variables
- Cloudflare Workers Secrets: encrypted text bindings for Worker runtime secrets.
  https://developers.cloudflare.com/workers/configuration/secrets/
- AWS Secrets Manager: managed secrets with KMS-backed encryption, IAM controls, TLS retrieval, and rotation support.
  https://aws.amazon.com/documentation-overview/secrets-manager/
- Google Secret Manager: managed service for storing, managing, and accessing text or binary secrets.
  https://cloud.google.com/secret-manager/docs
- Azure Key Vault: centralized management for secrets, keys, and certificates.
  https://learn.microsoft.com/en-us/azure/key-vault/
- 1Password: use Environments or secret references with `op run --` to inject secrets into a subprocess.
  https://developer.1password.com/docs/cli/secrets-environment-variables/
- Bitwarden Secrets Manager: centralized storage, management, and deployment of machine/application secrets.
  https://bitwarden.com/help/secrets-manager-overview/
- Doppler: use project/config secrets and `doppler run --` to inject secrets as environment variables.
  https://docs.doppler.com/docs/secrets
If FAL_KEY is missing, kamuicode stops immediately, prints setup guidance, and tries to open the API key page:
https://fal.ai/dashboard/keys

In that missing-key state, do not let an agent "prepare" a script for later. The correct behavior is no generated shell script, no project setup, no fallback media pipeline, and no workspace file creation.
If a generation request returns Forbidden, kamuicode stops and opens the credits page because this commonly means credits or billing are not ready:
https://fal.ai/dashboard/usage-billing/credits

For local development in this repo:
npm install
npm link

Commands
kamuicode prompt bytedance/seedance-2.0/image-to-video
kamuicode models --q "text to image" --limit 5
kamuicode models --complete --pretty
kamuicode schema bytedance/seedance-2.0/image-to-video
kamuicode run bytedance/seedance-2.0/text-to-video prompt="a cat video" duration:=5
kamuicode submit bytedance/seedance-2.0/text-to-video prompt="a cat video"
kamuicode status bytedance/seedance-2.0/text-to-video <request-id> --logs
kamuicode wait bytedance/seedance-2.0/text-to-video <request-id>
kamuicode result bytedance/seedance-2.0/text-to-video <request-id>
kamuicode upload ./input.png
kamuicode urlize ./input.png --plain
kamuicode batch bytedance/seedance-2.0/image-to-video --input-file jobs.jsonl --concurrency 6
kamuicode research-prompt bytedance/seedance-2.0/image-to-video

Model Discovery
Do not rely on a single bulk models page when choosing a model. Some endpoints are visible only through category listing or exact endpoint lookup.
Use these discovery commands:
kamuicode models --categories --pretty
kamuicode models --all --q "image to video" --pretty
kamuicode models --complete --pretty
kamuicode models --endpoint-id beatoven/music-generation --pretty

`--all` pages through the current query or category. `--complete` fetches the bulk listing, fetches every bundled category, merges by `endpoint_id`, and reports `sources.category_only` / `sources.bulk_only` so index gaps are visible.
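The merge that --complete performs can be illustrated with plain set operations on two fixture endpoint lists (the model IDs below are stand-ins, not real API responses):

```shell
# Sketch of what --complete reports: endpoints present in only one index show
# up as sources.bulk_only / sources.category_only. Fixture lists stand in for
# the real bulk and category API responses.
printf '%s\n' a/model b/model c/model | sort > bulk.txt
printf '%s\n' b/model c/model d/model | sort > category.txt

echo "sources.bulk_only:"
comm -23 bulk.txt category.txt      # lines unique to the bulk listing
echo "sources.category_only:"
comm -13 bulk.txt category.txt      # lines unique to the category listings
```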
For one-shot video generation, do not create a project. Use --plain and download the returned URL:
url="$(npx -y kamuicode run bytedance/seedance-2.0/text-to-video \
prompt="a short cinematic cat video" \
duration=4 \
resolution=720p \
aspect_ratio=16:9 \
--plain)"
curl -L "$url" -o cat-video.mp4

Input syntax
The run and submit commands accept JSON input plus key/value args:
kamuicode run bytedance/seedance-2.0/text-to-video \
prompt="a cat wearing sunglasses" \
image_size.width:=1280 \
image_size.height:=720 \
num_images:=2 \
enable_safety_checker:=true

Rules:

- `key=value` passes a string.
- `key:=value` parses JSON primitives/objects/arrays, so use it for numbers, booleans, arrays, and objects.
- `nested.key:=value` builds nested objects. Bracket form also works when quoted, for example `'image_size[width]':=1280`.
- `--input '{"prompt":"a cat"}'`, `--input-file input.json`, and `--stdin` are supported.
- `--dry-run` prints the resolved input without calling the API.
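Under these rules, the key/value example above should resolve to roughly the JSON below (a sketch of the shape; confirm the real resolution with --dry-run). The same object can be written to a file and passed with --input-file:

```shell
# Sketch of the resolved input for the key/value example above. This is what
# --dry-run should show (approximately); the file can be passed via
# --input-file input.json instead of key/value args.
cat > input.json <<'EOF'
{
  "prompt": "a cat wearing sunglasses",
  "image_size": { "width": 1280, "height": 720 },
  "num_images": 2,
  "enable_safety_checker": true
}
EOF
```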
For local files, upload first and pass the returned URL to the model:
url="$(kamuicode upload ./image.png --plain)"
kamuicode run <model-id> image_url="$url" prompt="..."

Or let run/batch upload local paths for URL-like fields:
kamuicode run bytedance/seedance-2.0/image-to-video \
--auto-upload \
image_url=./cut-sheet.png \
prompt="Use this cut sheet only as a storyboard reference. Output one fullscreen sequence."

Bare relative paths work too when the file exists:
kamuicode run openai/gpt-image-2/edit \
--auto-upload \
image_urls:='["public/images/input.png"]' \
prompt="edit this image"

If you pass a local path to an *_url field without --upload or --auto-upload, kamuicode fails early instead of sending an unusable local path to the model.
kamuicode refuses to upload sensitive-looking files such as .env, .npmrc, SSH keys, and certificate key files by default. If you intentionally need that, use --allow-sensitive-upload or KAMUICODE_ALLOW_SENSITIVE_UPLOAD=1.
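The kind of filename matching involved can be sketched as follows; this list is illustrative only, and the real pattern set lives inside kamuicode and may differ:

```shell
# Illustrative-only sketch of filenames a sensitive-upload guard would refuse.
# The actual patterns kamuicode checks are not reproduced here.
looks_sensitive() {
  case "$(basename "$1")" in
    .env|.env.*|.npmrc|id_rsa|id_ed25519|*.pem|*.key) return 0 ;;
    *) return 1 ;;
  esac
}

if looks_sensitive ./.env; then echo "would refuse: ./.env"; fi
if ! looks_sensitive ./image.png; then echo "ok to upload: ./image.png"; fi
```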
Local uploads use the official storage client path and retry transient storage failures by default:
kamuicode upload ./image.png --plain --upload-retries 5 --upload-retry-delay 1000

Environment defaults are also supported:
export KAMUICODE_UPLOAD_RETRIES=5
export KAMUICODE_UPLOAD_RETRY_DELAY=1000

Queue and concurrency
run now prints progress to stderr by default and prints the final JSON to stdout. Use --quiet only when another program needs clean stdout without progress.
For queued jobs you want to manage yourself:
request="$(kamuicode submit <model-id> prompt="..." | jq -r .request_id)"
kamuicode wait <model-id> "$request" --poll-interval 5000

For multiple jobs, put JSON objects in jobs.jsonl and control parallelism from the CLI:
{"id":"cut-01","input":{"prompt":"...","image_url":"https://..."}}
{"id":"cut-02","input":{"prompt":"...","image_url":"https://..."}}

kamuicode batch bytedance/seedance-2.0/image-to-video --input-file jobs.jsonl --concurrency 6

For long-running batches, write completed outputs as they finish instead of waiting for every job:
kamuicode batch bytedance/seedance-2.0/image-to-video \
--input-file jobs.jsonl \
--concurrency 6 \
--output-file results.live.json \
--output-jsonl results.completed.jsonl

results.live.json is updated after each completion, and results.completed.jsonl receives one line per completed job immediately. Each completed item includes url and urls when URLs are present in the model output.
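Both ends of this pipeline can be scripted with standard tools. The sketch below builds jobs.jsonl from a plain URL list and then extracts an output URL from a fixture completed-jobs line (urls.txt and the fixture line are stand-ins for real data; with jq installed, prefer `jq -r '.url // empty'` over sed):

```shell
# Build jobs.jsonl from a one-URL-per-line list (urls.txt is hypothetical).
printf '%s\n' \
  "https://example.com/frames/cut-01.png" \
  "https://example.com/frames/cut-02.png" > urls.txt

i=0
while IFS= read -r url; do
  i=$((i + 1))
  printf '{"id":"cut-%02d","input":{"prompt":"animate this frame","image_url":"%s"}}\n' \
    "$i" "$url"
done < urls.txt > jobs.jsonl

# Stand-in for one completed line of results.completed.jsonl, then a crude
# no-dependency extraction of its "url" field.
echo '{"id":"cut-01","status":"COMPLETED","url":"https://example.com/out/cut-01.mp4"}' \
  > results.completed.jsonl
sed -n 's/.*"url":"\([^"]*\)".*/\1/p' results.completed.jsonl
```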
Parameter research workflow
For unknown models, do this before spending credits:
kamuicode research-prompt <model-id>
kamuicode schema <model-id>
kamuicode run <model-id> --dry-run prompt="..." num_images:=1

The prompt in prompts/fal-research.md tells agents to check:
- https://api.fal.ai/v1/models?endpoint_id=<model>&expand=openapi-3.0
- https://fal.ai/models/<model>/api
- https://api.fal.ai/v1/models?q=<query>&status=active
- https://fal.ai/docs/model-api-reference
- https://fal.ai/docs/documentation/model-apis/model-arguments
- https://fal.ai/docs/api-reference/client-libraries/javascript
- https://fal.ai/docs/documentation/setting-up/authentication
Output
Most commands print JSON to stdout. The guide reminder, progress, status, and queue logs go to stderr by default, which keeps stdout usable for downstream agent parsing while still showing that generation is moving.
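The stream split can be demonstrated with a stand-in command (`fake_run` below is a stand-in for kamuicode, and its messages are illustrative): JSON stays on stdout for a downstream parser while guide and progress text is diverted:

```shell
# Sketch of the stdout/stderr contract using a stand-in for kamuicode.
fake_run() {
  echo "guide: check FAL_KEY, use kamuicode only" >&2
  echo '{"video":{"url":"https://example.com/out.mp4"}}'
}

json="$(fake_run 2>progress.log)"   # capture JSON, divert progress to a log
echo "$json"                        # clean JSON for downstream parsing
cat progress.log >&2                # human-visible progress, kept off stdout
```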
