# runpod-cli

v0.1.0
CLI for managing RunPod GPU cloud infrastructure. Self-describing for AI agents via the `--ai` flag; optimized for AI agent workflows.
## Features
- List pods with status, GPU type, region, cost, and uptime
- Start and stop pods on demand
- Create pods with network volume attachment and SSH key injection
- Delete pods with safety confirmation
- List available GPU types with full pricing (secure, community, spot)
- Filter GPUs by minimum VRAM
- List and create network volumes
- Self-describing AI manifest via `--ai` flag
- JSON output mode on all list commands
- Native fetch only - no external HTTP dependencies
## Installation

```bash
npm install -g runpod-cli
```

## Setup

```bash
export RUNPOD_API_KEY=your_api_key_here
```

Get your API key from: https://www.runpod.io/console/user/settings
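Commands fail without a key, so it can be worth checking the environment before a long agent run. A fail-fast sketch in Python (`require_api_key` is a hypothetical pre-flight helper, not part of the CLI; the CLI itself reads `RUNPOD_API_KEY` or accepts `--token`):

```python
import os

def require_api_key() -> str:
    """Return the RunPod API key from the environment, failing fast if unset.

    Hypothetical pre-flight check; the CLI reads RUNPOD_API_KEY itself
    (or accepts --token on any command).
    """
    key = os.environ.get("RUNPOD_API_KEY", "")
    if not key:
        raise RuntimeError("RUNPOD_API_KEY is not set; export it first")
    return key
```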
## Usage
### Pods

```bash
# List all pods
runpod-cli pods

# List as JSON
runpod-cli pods --json

# Start a stopped pod
runpod-cli pod:start <pod-id>

# Stop a running pod
runpod-cli pod:stop <pod-id>

# Create a new pod
runpod-cli pod:create \
  --name my-comfyui \
  --gpu "NVIDIA RTX A6000" \
  --region EU-CZ-1 \
  --volume 52t492qes6 \
  --image runpod/comfyui:latest \
  --disk 50

# Create pod with SSH access
runpod-cli pod:create \
  --gpu "NVIDIA GeForce RTX 3090" \
  --ssh-key "ssh-rsa AAAA..." \
  --cloud COMMUNITY

# Delete a pod (requires --confirm)
runpod-cli pod:delete <pod-id> --confirm
```
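The `--json` flag makes the pod list machine-readable for scripting. A minimal parsing sketch, assuming each entry carries `id` and `status` fields (an illustrative payload, not the confirmed output schema — inspect `runpod-cli pods --json` for the real shape):

```python
import json

# Illustrative payload standing in for `runpod-cli pods --json` output;
# the real field names may differ from this assumed shape.
sample = '[{"id": "abc123", "status": "RUNNING"}, {"id": "def456", "status": "EXITED"}]'

pods = json.loads(sample)
running = [p["id"] for p in pods if p["status"] == "RUNNING"]
print(running)  # → ['abc123']
```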
### GPUs

```bash
# List all GPU types with pricing
runpod-cli gpus

# Filter by minimum VRAM
runpod-cli gpus --vram 24

# Sort by VRAM descending
runpod-cli gpus --sort vram

# Find affordable high-VRAM GPUs
runpod-cli gpus --vram 48 --sort price

# Get JSON for parsing
runpod-cli gpus --json
```
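An agent can filter and sort on the CLI side with `--vram` and `--sort`, or post-process the JSON itself. A sketch of the latter, with an assumed output shape (`id`, `vram`, and `price` are illustrative field names, not the confirmed schema):

```python
import json

# Assumed shape for `runpod-cli gpus --json` output; real field names may differ.
sample = ('[{"id": "NVIDIA RTX A6000", "vram": 48, "price": 0.79},'
          ' {"id": "NVIDIA GeForce RTX 3090", "vram": 24, "price": 0.43}]')

# Keep GPUs with at least 24 GB VRAM, then pick the cheapest.
gpus = [g for g in json.loads(sample) if g["vram"] >= 24]
cheapest = min(gpus, key=lambda g: g["price"])
print(cheapest["id"])  # → NVIDIA GeForce RTX 3090
```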
### Volumes

```bash
# List all network volumes
runpod-cli volumes

# Create a new volume
runpod-cli volume:create \
  --name comfyui-models \
  --size 150 \
  --region EU-CZ-1
```
### Global Options

```bash
# Pass API key directly (overrides env var)
runpod-cli pods --token rp_xxxxxxxxxxxx

# Any command supports --token
runpod-cli pod:start <id> --token rp_xxxxxxxxxxxx
```
## AI Integration

This CLI is self-describing for AI agents:
```bash
# Full JSON schema (all commands, options, types)
runpod-cli --ai

# Brief one-liner per command
runpod-cli --ai brief

# Usage examples for all commands
runpod-cli --ai examples

# Schema for a specific command
runpod-cli --ai pod:create
runpod-cli --ai gpus
```
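An agent consuming the manifest can enumerate commands before invoking any of them. A sketch, assuming a top-level `commands` object keyed by command name (illustrative only; run `runpod-cli --ai` to see the real schema):

```python
import json

# Assumed manifest shape for `runpod-cli --ai`; the real schema may differ.
manifest = ('{"commands": {"pods": {"brief": "List all pods"},'
            ' "pod:create": {"brief": "Create a new pod"}}}')

schema = json.loads(manifest)
names = sorted(schema["commands"])
print(names)  # → ['pod:create', 'pods']
```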
### Claude Code / MCP Usage

Add to your Claude configuration to give AI agents full RunPod control:
```json
{
  "mcpServers": {
    "runpod": {
      "command": "runpod-cli",
      "args": ["--ai"]
    }
  }
}
```

Or in a CLAUDE.md:
```markdown
Use `runpod-cli --ai` to discover available RunPod management commands.
RUNPOD_API_KEY is set in the environment.
```

## pod:create Options Reference
| Option | Default | Description |
|--------|---------|-------------|
| `--name` | auto-generated | Pod name |
| `--gpu` | (required) | GPU type ID (from `runpod-cli gpus`) |
| `--region` | none | Data center ID (e.g. `EU-CZ-1`) |
| `--volume` | none | Network volume ID to attach at `/workspace` |
| `--image` | `runpod/comfyui:latest` | Docker image |
| `--disk` | `20` | Container disk size (GB) |
| `--cloud` | `SECURE` | `SECURE` or `COMMUNITY` |
| `--ports` | `8188/http,8080/http,22/tcp` | Exposed ports |
| `--ssh-key` | none | Public SSH key (injected as `PUBLIC_KEY` env var) |
## License
MIT
