# @lavapayments/cli
CLI for the Lava platform — manage resources and proxy gateway requests from the terminal. Designed for both human developers and AI agents, with structured JSON output, auto-detected agent mode, and full REST API parity.
## Quick Start

```sh
npm install -g @lavapayments/cli
lava login
lava forward https://api.openai.com/v1/chat/completions \
  --data '{"model":"gpt-4","messages":[{"role":"user","content":"hello"}]}'
```

The `forward` command proxies requests through Lava's gateway, enabling usage tracking and billing. Add `--json` for machine-readable output.
## Authentication

The CLI resolves credentials in order: the `--auth` flag, the `LAVA_SECRET_KEY` environment variable, then the config file.
### Browser login (interactive)

```sh
lava login
```

### Non-interactive (CI / agents)

```sh
lava configure --secret-key aks_test_abc123
```

### Environment variable

```sh
export LAVA_SECRET_KEY=aks_test_abc123
lava meters list
```

### Per-command override

```sh
lava meters list --auth aks_test_abc123
```

Keys prefixed `aks_test_` target the sandbox API (`sandbox-api.lavapayments.com`). Credentials are stored at `~/.config/lava/credentials.json` with mode `0600`. Use `lava whoami` to check the current auth state and `lava logout` to remove stored credentials.
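The `0600` storage guarantee above can be illustrated with a short sketch; the file path and contents here are stand-ins, not the CLI's real credential format:

```shell
# Stand-in credentials file (the real CLI writes ~/.config/lava/credentials.json).
f="$(mktemp)"
printf '{"secret_key":"aks_test_abc123"}\n' > "$f"

# Owner read/write only, matching the CLI's 0600 mode.
chmod 600 "$f"

# GNU stat prints the octal mode with -c '%a'; BSD stat uses -f '%Lp'.
mode=$(stat -c '%a' "$f" 2>/dev/null || stat -f '%Lp' "$f")
echo "$mode"   # 600
```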
## Commands
| Group | Commands |
|-------|----------|
| Auth | `login`, `logout`, `configure`, `whoami` |
| Gateway | `forward <url>`, `chat <url>` |
| Meters | `meters list`, `meters get`, `meters create`, `meters update`, `meters delete` |
| Customers | `customers list`, `customers get`, `customers delete`, `customers subscription` |
| Keys | `keys list`, `keys create`, `keys revoke` |
| Webhooks | `webhooks list`, `webhooks get`, `webhooks create`, `webhooks update`, `webhooks delete` |
| Plans | `plans list`, `plans get`, `plans create`, `plans update`, `plans delete` |
| Subscriptions | `subscriptions list`, `subscriptions update`, `subscriptions cancel` |
| Requests | `requests list`, `requests get` |
| Credit Bundles | `credit-bundles list`, `credit-bundles get` |
| Usage | `usage get` |
| Models | `models list` |
| Checkout | `checkout-sessions create` |
| Skills | `skills upload` |
| Setup | `setup mcp` |
Run `lava <command> --help` for flags, arguments, and examples.
## Gateway

The `forward` command proxies any HTTP request through the Lava gateway. Use `--customer` and `--meter` to bill a specific customer:
```sh
lava forward https://api.openai.com/v1/chat/completions \
  --customer cus_abc123 \
  --meter chat-api \
  --data '{"model":"gpt-4","messages":[{"role":"user","content":"hello"}]}' \
  --stream
```

The `chat` command is a convenience wrapper for OpenAI-compatible chat completions:
```sh
lava chat https://api.openai.com/v1/chat/completions \
  --model gpt-4 \
  --message "Explain usage-based billing" \
  --system "You are a helpful assistant" \
  --stream
```

Additional gateway flags: `--forward-auth` (override the forward token), `--provider-key` (BYOK), `-H` (custom headers, repeatable), and `--method` (default `POST`).
With `--json`, gateway responses are wrapped in the standard `{ "data": ... }` envelope. The provider's response is nested inside, so a response with its own `data` field results in double nesting:

```json
{ "data": { "object": "list", "data": [{ "id": "gpt-4o", ... }] } }
```

Use `--jq` to extract the inner response: `lava forward <url> --jq '.data'`
## Skills

Upload a Claude Code skill directory to Lava:
```sh
lava skills upload ./my-skill/
```

The command recursively discovers text files (`.md`, `.txt`, `.py`, `.ts`, `.js`, `.json`, `.yaml`, `.yml`, `.sh`, `.toml`, `.xml`), reads them as UTF-8, and uploads to Lava's API. Directories like `.git/` and `node_modules/` are excluded, and symlinks are skipped.

```sh
# Override the default skill name (derived from directory basename)
lava skills upload ./my-skill/ --name custom-skill-name

# JSON output
lava skills upload ./my-skill/ --json
```

Limits: 50KB per file, 500KB total.
## Setup

Bootstrap the Lava MCP server configuration for Claude Code or Claude Desktop:
```sh
lava setup mcp
```

This writes a `lava` entry to your MCP config file (`~/.claude/mcp.json` by default) using your stored credentials. Existing MCP server entries are preserved.

```sh
# Target Claude Desktop instead
lava setup mcp --target claude-desktop
```

Requires prior authentication via `lava login` or `lava configure`.
## Request Body Input

Commands that create or update resources accept body input via `--data` or per-field flags. Per-field flags take priority over `--data` values.
```sh
# Per-field flags (highest priority)
lava meters create --name "Chat API" --rate-type fixed

# Inline JSON
lava meters create --data '{"name":"Chat API","rate_type":"fixed"}'

# From file
lava meters create --data @meter.json

# From stdin
echo '{"name":"Chat API"}' | lava meters create --data @-
```

## Output
Default output is human-readable text. Use flags to control format:
| Flag | Behavior |
|------|----------|
| (default) | Human-readable text on stdout |
| `--json` | JSON envelope: `{ "data": <result> }` on stdout |
| `--agent-mode` | Implies `--json --quiet`. Auto-activates when `CLAUDE_CODE` or `CURSOR_AGENT` env vars are set |
| `--jq '<expr>'` | Filter JSON output with a dot-path expression (implies `--json`) |
```sh
# Extract just the meter ID
lava meters create --name "Chat API" --jq '.data.id'
```

With `--stream --json`, gateway commands emit NDJSON (one JSON object per line per SSE event). Without `--json`, raw SSE is passed through.
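As an illustration of consuming that NDJSON stream, the sketch below reads one JSON object per line; the sample events are hypothetical, and `sed` stands in for a proper JSON parser such as `jq`:

```shell
# Hypothetical NDJSON events, one complete JSON object per line,
# shaped like `lava chat <url> --stream --json` output.
text=""
while IFS= read -r line; do
  # Pull the streamed delta text out of each event.
  chunk=$(printf '%s' "$line" | sed -n 's/.*"content":"\([^"]*\)".*/\1/p')
  text="$text$chunk"
done <<'EOF'
{"data":{"choices":[{"delta":{"content":"Hel"}}]}}
{"data":{"choices":[{"delta":{"content":"lo"}}]}}
EOF
printf '%s\n' "$text"   # Hello
```

Because each line is a complete JSON object, a plain `read` loop is enough; no SSE framing needs to be parsed.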
## Global Flags
| Flag | Description |
|------|-------------|
| `--auth <key>` | API key override (skips env/config lookup) |
| `--json` | JSON output with `{ "data": ... }` envelope |
| `--agent-mode` | Optimize for AI agents (implies `--json --quiet`) |
| `--quiet` / `-q` | Suppress non-error stderr output |
| `--debug` | Log HTTP request/response details to stderr (auth masked) |
| `--dry-run` | Preview the HTTP request without sending it |
| `--jq '<expr>'` | Filter JSON output (implies `--json`) |
List commands also support:
| Flag | Description |
|------|-------------|
| `--all` | Auto-paginate through all results |
| `--limit <n>` | Maximum items per page |
## Errors

Errors are written to stderr. In JSON mode, errors use a structured envelope:
```json
{
  "error": {
    "type": "auth",
    "code": "unauthorized",
    "message": "Invalid API key",
    "retryable": false
  }
}
```

| Exit Code | Meaning |
|-----------|---------|
| 0 | Success |
| 1 | API or network error |
| 2 | Validation or config error |
| 130 | Interrupted (SIGINT) |
The `retryable` field indicates whether the request can be safely retried (e.g., rate limits, server errors).
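A script can combine the exit code with the `retryable` flag to decide whether to retry; in this sketch, `run_lava` is a hypothetical stub standing in for a real `lava <command> --json` invocation:

```shell
# Hypothetical stub: fails once with a retryable error envelope on stderr,
# then succeeds, mimicking `lava <command> --json` behavior.
calls=0
run_lava() {
  calls=$((calls + 1))
  if [ "$calls" -lt 2 ]; then
    echo '{"error":{"type":"api","code":"rate_limited","message":"Too many requests","retryable":true}}' >&2
    return 1
  fi
  echo '{"data":{"id":"mtr_123"}}'
}

out="$(mktemp)"
err="$(mktemp)"
attempt=1
status=1
while [ "$attempt" -le 3 ]; do
  if run_lava >"$out" 2>"$err"; then
    status=0
    break
  fi
  # Stop unless the error envelope marks the failure as retryable.
  grep -q '"retryable":true' "$err" || break
  attempt=$((attempt + 1))   # a real script would back off here
done
cat "$out"   # {"data":{"id":"mtr_123"}}
```

The key detail is that a nonzero exit alone is not enough to retry; the envelope's `retryable` field distinguishes transient failures from permanent ones.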
## Agent Integration

The CLI auto-detects AI agent environments via the `CLAUDE_CODE` and `CURSOR_AGENT` environment variables, activating `--agent-mode` (JSON output, suppressed diagnostics) without explicit flags.
```sh
# Agents get structured output automatically
LAVA_SECRET_KEY=aks_test_abc123 lava meters list
# stdout: { "data": [...] }

# Explicit opt-in / opt-out
lava meters list --agent-mode
lava meters list --no-agent-mode
```

Combine with `--dry-run` for request preview and `--debug` for HTTP-level inspection. Errors always use the structured `{ "error": ... }` envelope in agent mode.
## Development

Run the CLI locally without building (uses `tsx` for on-the-fly TypeScript execution):
```sh
# From packages/lava-cli/
npm run dev -- meters list --json

# From repo root
npm run cli -- meters list --json
```

When piping JSON output to other programs, use `--silent` to suppress npm's banner from stdout:

```sh
npm run --silent cli -- whoami --json | jq '.data.secret_key_hint'
```

### Testing against a local server
Use `--server-url` to point the CLI at a local dev server instead of production:
```sh
# Start the dev server (separate terminal)
doppler run --config dev -- npm run dev

# Run commands against localhost
npm run cli -- skills upload ./my-skill --server-url http://localhost:3000 --json
npm run cli -- setup mcp --server-url http://localhost:3000 --json
```

`lava setup mcp --server-url http://localhost:3000` writes `http://localhost:3000/mcp` to the MCP config, enabling local MCP testing.
### Tests

```sh
cd packages/lava-cli && npm run test
```

The `npm run doctor` command from the repo root verifies that CLI dependencies are installed and the CLI is runnable.
See CONTRIBUTING.md for development conventions, environment routing details, and adding new commands.
## Related Documentation
For complete documentation on Lava's usage-based billing platform, visit www.lava.so.
