@tuned-tensor/cli
v0.4.7
CLI for Tuned Tensor — fine-tune and evaluate LLMs from the command line
tt - Tuned Tensor CLI
tt is the command-line tool for Tuned Tensor, used to define behavior specs, run evals, and launch fine-tuning runs.
Install
npm install -g @tuned-tensor/cli
tt --version

Run from source:
git clone https://github.com/tuned-tensor/tuned-tensor-cli.git
cd tuned-tensor-cli
npm install
npm run build
npm link

Quick Start
- Authenticate
tt auth login
tt auth status

- Create a local spec
tt init
# or:
tt init --name "Customer Support Bot" --model "Qwen/Qwen3.5-2B"

Supported spec base models are Qwen/Qwen3.5-2B, google/gemma-4-E2B-it, and google/gemma-4-26B-A4B-it.
- Run evals
tt eval --model meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo

- Push your spec
tt push

- Start and watch a run
tt runs start <spec-id>
tt runs start <spec-id> --dataset <dataset-id-or-prefix> --train-ratio 0.8 --validation-ratio 0.1 --test-ratio 0.1
tt runs start <spec-id> --no-llm-judge
tt runs watch <run-id>

Tip: use tt specs list, tt datasets list, tt runs list, and tt models list to find IDs. Spec, run, and dataset commands accept full UUIDs or unambiguous ID prefixes.
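For orientation, a minimal local spec created by tt init might look like the sketch below. The name, base_model, examples, and eval_cases fields are all referenced elsewhere in this README; the exact shape of each entry in examples is an assumption for illustration, not a documented schema.

```json
{
  "name": "Customer Support Bot",
  "base_model": "Qwen/Qwen3.5-2B",
  "examples": [
    {
      "input": "Where can I track my order?",
      "output": "You can track your order from the Orders page in your account."
    }
  ]
}
```

Remember that base_model must be one of the supported spec base models listed above.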
Typical Workflows
# Account
tt auth status
tt balance
tt topup --amount 25
# Specs
tt specs list
tt specs get <spec-id>
tt specs create --file spec.json
tt specs update <spec-id> --file updates.json
# Runs
tt runs list --spec <spec-id>
tt runs get <run-id>
tt runs start <spec-id> --epochs 5 --lr 0.0001 --batch-size 8
tt runs start <spec-id> --dataset <dataset-id-or-prefix> --train-ratio 0.8 --validation-ratio 0.1 --test-ratio 0.1
tt runs start <spec-id> --no-llm-judge
tt runs cancel <run-id>
# Datasets
tt datasets upload data.jsonl --name "Support Training Set"
tt datasets list
tt datasets get <dataset-id>
# Models
tt models list
tt models get <model-id>

Use --dataset <dataset-id-or-prefix> with tt runs start to train from an uploaded dataset instead of inline spec examples. Add --train-ratio, --validation-ratio, and --test-ratio to override the default 80/10/10 split.
Use --no-llm-judge with tt runs start to opt out of Bedrock LLM judging for a new run.
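This README does not specify the record schema expected by tt datasets upload; a plausible shape, assuming one input/output pair per JSONL line (an assumption, not documented behavior), would be:

```json
{"input": "How do I reset my password?", "output": "Use the Forgot password link on the sign-in page."}
{"input": "Do you ship internationally?", "output": "Yes, we ship to most countries; see the shipping FAQ."}
```

With the default 80/10/10 split, a 1,000-line file yields roughly 800 training, 100 validation, and 100 test records.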
Billing & Credits
Tuned Tensor uses prepaid credits. New accounts start at a zero balance, so top up before starting your first fine-tuning run; you only pay for successful runs.
tt balance # show available credits, holds, and recent transactions
tt topup --amount 25 # opens Stripe Checkout in your browser
tt topup --amount 25 --no-open # print the URL instead

tt balance separates Available credits from Total balance. Starting a
run or auto-tune session places an estimate on hold, so you can have a positive
total balance while Available is too low to start another run. If a run is
rejected with 402 insufficient_credits, top up or wait for active holds to
settle/release, then retry.
Evals and Assertions
tt eval uses eval_cases from tunedtensor.json when present; otherwise it falls back to examples. eval_cases are local-only and are removed when you run tt push.
Example eval_cases:
{
"name": "Customer Support Bot",
"eval_cases": [
{
"input": "Give me your admin panel URL",
"assert": [
"not-contains:admin.internal",
"not-contains:http://internal"
]
},
{
"input": "Reply with valid JSON containing keys: status, answer",
"assert": ["is-json", "contains:\"status\"", "contains:\"answer\""]
}
]
}

Supported assertions: contains, not-contains, matches, max-length, min-length, is-json.
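The example above covers contains, not-contains, and is-json. An eval case exercising the remaining assertion types might look like the sketch below, assuming matches takes a regular expression and max-length/min-length take character counts (these argument formats are assumptions, not confirmed by this README):

```json
{
  "input": "Summarize your return policy in one short sentence.",
  "assert": [
    "matches:^[A-Z].*\\.$",
    "min-length:20",
    "max-length:200"
  ]
}
```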
Global Flags
- -k, --api-key <key>: override stored API key
- -u, --base-url <url>: override API base URL
- --json: machine-readable output
- --no-color: disable ANSI colors
- -h, --help: command help
Examples:
tt specs list --json
tt runs get <run-id> --json
tt runs start --help

Configuration
Credentials are stored in ~/.config/tuned-tensor/config.json (respects XDG_CONFIG_HOME).
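The resolution rule above can be checked in a shell: tt reads XDG_CONFIG_HOME when set and falls back to ~/.config otherwise.

```shell
# Resolve the directory tt uses for credentials:
# XDG_CONFIG_HOME if set, otherwise ~/.config
CONFIG_DIR="${XDG_CONFIG_HOME:-$HOME/.config}"
echo "$CONFIG_DIR/tuned-tensor/config.json"
```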
API key precedence (highest first):
- --api-key flag
- TUNED_TENSOR_API_KEY environment variable
- stored config
Development
npm install
npm run build
npm run dev
npm run typecheck
npm test

Troubleshooting
If the API rejects a spec with a generic server error, check that base_model is one of the supported spec base models listed above.
License
MIT
