kadi-deploy
v0.19.3
Deploy KADI agents to local Docker/Podman or Akash Network using profiles defined in `agent.json`.
This is a CLI plugin that provides the deploy command. See kadi-by-example for tutorials.
Installation
kadi install kadi-deploy
Quick Reference
| Command | Purpose |
|---------|---------|
| kadi deploy | Deploy using first available profile |
| kadi deploy --profile production | Deploy using a specific profile |
| kadi deploy --autonomous | Fully autonomous deployment (no human interaction) |
| kadi deploy --dry-run | Preview deployment without executing |
| kadi deploy list | List all active deployments |
| kadi deploy list --json | List deployments as JSON |
| kadi deploy down | Tear down an active deployment |
| kadi deploy down --profile <name> | Tear down a specific profile's deployment |
| kadi deploy down --instance <id> | Tear down a specific instance by ID |
| kadi deploy down --label <label> | Tear down a deployment by its label |
| kadi deploy down --autonomous | Tear down Akash deployment without human interaction |
| kadi deploy down --yes | Tear down without confirmation prompt |
Quick Start
1. Add a deploy profile to agent.json
{
"name": "my-agent",
"version": "1.0.0",
"deploy": {
"local": {
"target": "local",
"engine": "docker",
"services": {
"app": {
"image": "my-agent:latest",
"expose": [{ "port": 3000, "as": 3000 }]
}
}
}
}
}
2. Deploy
kadi deploy --profile local
That's it. For Akash Network deployment, see Deploying to Akash.
Configuration & Secrets
kadi deploy uses three configuration sources — agent.json for deploy profiles, config.yml for infrastructure settings, and encrypted vaults for secrets. No .env files needed.
Where Things Live
| What | Where | Purpose |
|------|-------|---------|
| Deploy profiles | agent.json (project root) | Service definitions, target, network, secrets delivery |
| Infrastructure config | config.yml (project or ancestor) | Tunnel server, registry port, container engine |
| Global infrastructure config | ~/.kadi/config.yml | Machine-wide defaults (shared across all projects) |
| Project secrets | secrets.toml (project or ancestor) | Agent-specific API keys, DB passwords — encrypted |
| Infrastructure secrets | ~/.kadi/secrets/config.toml | Tunnel tokens, Akash wallet — encrypted, machine-scoped |
Configuration resolves with walk-up discovery — kadi deploy searches from your CWD upward through parent directories until it finds config.yml or secrets.toml. Global ~/.kadi/ is the final fallback.
Setting Up Secrets
Tunnel token (required for deploying local images to Akash):
# Create the global tunnel vault (one-time)
kadi secret create tunnel -g
# Store the token
kadi secret set KADI_TUNNEL_TOKEN "your-token" -v tunnel
The tunnel vault lives at ~/.kadi/secrets/config.toml (user-level) so it works from any project directory.
Akash wallet (required for autonomous deployment):
# Store in the global vault (already exists by default)
kadi secret set AKASH_WALLET "your twelve or twenty four word mnemonic" -v global
Deployment secrets (shared with deployed containers):
# Create a project-level vault for your agent
kadi secret create my-agent
# Store secrets the deployed container will receive
kadi secret set API_KEY "sk-..." -v my-agent
kadi secret set DB_URL "postgres://..." -v my-agent
Then reference it in your deploy profile's secrets block (see Sharing Secrets with Deployments).
Setting Up config.yml
Create a config.yml in your project root (or any ancestor directory):
tunnel:
server_addr: broker.kadi.build
tunnel_domain: tunnel.kadi.build
server_port: 7000
ssh_port: 2200
mode: frpc
transport: wss
wss_control_host: tunnel-control.kadi.build
deploy:
registry_port: 3000
container_engine: docker # docker | podman
auto_shutdown: true
registry_duration: 600000 # 10 minutes
All values have sensible defaults — you only need config.yml if you want to override something. See config.sample.yml in tunnel-services and deploy-ability for full reference.
Resolution Priority
Each setting resolves independently from highest to lowest priority:
- CLI flags — --engine podman, --network testnet
- Environment variables — KADI_TUNNEL_SERVER, KADI_TUNNEL_TOKEN
- Encrypted vault — secrets.toml via kadi secret
- Project config.yml — walk-up from CWD
- Global ~/.kadi/config.yml — machine-wide defaults
- Built-in defaults — e.g. docker, broker.kadi.build
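This "first defined source wins" cascade can be sketched as follows (tier names and the object shape here are illustrative, not kadi-deploy internals):

```javascript
// Per-setting resolution sketch: walk the tiers from highest to lowest
// priority and return the first defined value.
const TIERS = ["cliFlags", "envVars", "vault", "projectConfig", "globalConfig", "defaults"];

function resolve(setting, sources) {
  for (const tier of TIERS) {
    const value = sources[tier]?.[setting];
    if (value !== undefined) return value; // first tier that defines it wins
  }
  return undefined;
}

const engine = resolve("containerEngine", {
  projectConfig: { containerEngine: "podman" },
  defaults: { containerEngine: "docker" },
});
console.log(engine); // "podman": project config outranks the built-in default
```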
Project-Level vs Global (User-Level)
| Setting | Scope | Location |
|---------|-------|----------|
| KADI_TUNNEL_TOKEN | Global — same token for all projects | ~/.kadi/secrets/config.toml → tunnel vault |
| AKASH_WALLET | Global — your wallet, not project-specific | ~/.kadi/secrets/config.toml → global vault |
| tunnel: config | Global or Project — usually same infrastructure | ~/.kadi/config.yml or project config.yml |
| deploy: config | Project — may differ per project | Project config.yml |
| Agent secrets (API keys, DB creds) | Project — specific to one agent | Project secrets.toml → custom vault |
| Deploy profiles | Project — specific to one agent | agent.json |
Deploying Locally
Local deployment uses Docker or Podman to run your services via docker-compose.
{
"deploy": {
"local": {
"target": "local",
"engine": "docker",
"services": {
"web": {
"image": "nginx:alpine",
"expose": [{ "port": 80, "as": 8080 }]
}
}
}
}
}
kadi deploy --profile local
Local profile options:
| Field | Description |
|-------|-------------|
| target | Must be "local" |
| engine | "docker" or "podman" |
| network | Docker network name (optional) |
| services | Service definitions (see below) |
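For a sense of how an expose mapping becomes a published port, here is a hypothetical translation of a service entry into a compose-style service (compose-side field names follow the Compose spec; the translation itself is a sketch, not kadi-deploy's actual code):

```javascript
// Sketch: map a kadi-deploy service definition onto a compose-style entry.
function toComposeService(svc) {
  const entry = { image: svc.image };
  if (svc.command) entry.command = svc.command;
  if (svc.env) entry.environment = svc.env;
  if (svc.expose) {
    // "as" is the published host port, "port" the container port.
    entry.ports = svc.expose.map(e => `${e.as ?? e.port}:${e.port}`);
  }
  return entry;
}

const web = toComposeService({
  image: "nginx:alpine",
  expose: [{ port: 80, as: 8080 }],
});
console.log(web.ports); // [ '8080:80' ]
```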
Container Engine Resolution
The container engine is resolved using the following priority (highest wins):
- CLI --engine flag — explicit per-command override
- Profile engine — set in the deploy profile in agent.json
- Global config preferences.containerEngine — user default set via kadi config set preferences.containerEngine podman
- Default — docker
This matches the resolution pattern used by kadi broker.
When using Podman, compose commands automatically prefer podman-compose (the standalone Python tool) over podman compose, which avoids issues on systems where podman compose delegates to a broken Docker Desktop shim.
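The engine-to-compose-command choice can be sketched like this (hypothetical helper; hasPodmanCompose stands in for detecting the standalone tool on PATH):

```javascript
// Sketch: pick a compose command for the resolved container engine.
// For Podman, prefer the standalone podman-compose tool when present.
function composeCommand(engine, hasPodmanCompose) {
  if (engine === "podman") {
    return hasPodmanCompose ? ["podman-compose"] : ["podman", "compose"];
  }
  return ["docker", "compose"];
}
```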
Deploying to Akash
Akash Network is a decentralized cloud platform. Deploying to Akash requires:
- A Keplr wallet with AKT tokens
- Scanning a QR code to connect your wallet
- Selecting a provider from the bid list
{
"deploy": {
"production": {
"target": "akash",
"network": "mainnet",
"services": {
"web": {
"image": "nginx:alpine",
"expose": [{ "port": 80, "as": 80, "to": [{ "global": true }] }],
"resources": {
"cpu": 0.5,
"memory": "512Mi",
"ephemeralStorage": "1Gi"
}
}
}
}
}
}
kadi deploy --profile production
The CLI will:
- Generate a QR code - scan with Keplr mobile
- Create/load your deployment certificate
- Submit to Akash and collect bids
- Let you select a provider interactively
- Deploy and show your service endpoints
Akash profile options:
| Field | Description |
|-------|-------------|
| target | Must be "akash" |
| network | "mainnet" or "testnet" |
| services | Service definitions (see below) |
| blacklist | Provider addresses to exclude |
| deposit | Escrow deposit in AKT (default: 5) |
| cert | Path to saved certificate file |
| useRemoteRegistry | Skip local tunnel, assume images are in a public registry |
Autonomous Deployment (Agent-Controlled)
The --autonomous flag enables fully autonomous deployment with zero human interaction. This is designed for AI agents and automation pipelines that need to deploy without prompts, QR codes, or manual bid selection.
Prerequisites
Store your Akash wallet mnemonic in the KADI secrets vault:
kadi secret set AKASH_WALLET "your twelve or twenty four word mnemonic phrase here" -v global
The mnemonic is encrypted at rest using age (ChaCha20-Poly1305) with your OS keychain protecting the master key.
Usage
# Basic autonomous deployment (cheapest bid)
kadi deploy --profile production --autonomous
# Use a specific bid strategy
kadi deploy --autonomous --bid-strategy balanced
# Set a maximum price and require audited providers
kadi deploy --autonomous --bid-max-price 500 --require-audited
# Use a custom secrets vault
kadi deploy --autonomous --secrets-vault my-vault --bid-strategy balanced
What Happens
When --autonomous is set, the CLI:
- Reads wallet mnemonic from the secrets vault (no QR code / Keplr)
- Creates or loads the deployment certificate automatically
- Submits the deployment to Akash Network
- Selects a provider algorithmically based on --bid-strategy
- Shares secrets with the deployment automatically (no approval prompt)
- Reports results including endpoints and lease info
Autonomous Flags
| Flag | Description | Default |
|------|-------------|---------|
| --autonomous | Enable autonomous mode | false |
| --bid-strategy <strategy> | cheapest, most-reliable, or balanced | cheapest |
| --bid-max-price <uakt> | Max price per block in uAKT | No limit |
| --require-audited | Only accept bids from audited providers | false |
| --secrets-vault <vault> | Vault for wallet mnemonic (does NOT affect deployment secrets) | global |
| --auto-approve-secrets | Auto-approve secret sharing (also works in interactive mode) | false |
| --secret-timeout <ms> | How long to wait for the agent to request secrets | 300000 |
Bid Strategies
| Strategy | Behavior |
|----------|----------|
| cheapest | Selects the lowest-price bid. Best for cost-sensitive workloads. |
| most-reliable | Selects the provider with the highest uptime. Best for production. |
| balanced | Weighs price and reliability together. Good default for most cases. |
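As a rough model of how these strategies could rank bids (the scoring below, especially the balanced weighting, is a toy sketch and not kadi-deploy's actual algorithm):

```javascript
// Toy bid-selection sketch. Assumes each bid carries { price, uptime, audited }.
function selectBid(bids, { strategy = "cheapest", maxPrice, requireAudited } = {}) {
  let pool = bids;
  if (maxPrice !== undefined) pool = pool.filter(b => b.price <= maxPrice);
  if (requireAudited) pool = pool.filter(b => b.audited);
  if (pool.length === 0) return null; // nothing acceptable: deployment should abort
  const score = {
    cheapest: b => -b.price,                  // lowest price wins
    "most-reliable": b => b.uptime,           // highest uptime wins
    balanced: b => b.uptime - b.price / 1000, // toy weighting of both
  }[strategy];
  return pool.reduce((best, b) => (score(b) > score(best) ? b : best));
}
```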
Example: Agent Workflow
An AI agent can deploy with a single command:
kadi deploy \
--profile production \
--autonomous \
--bid-strategy balanced \
--bid-max-price 1000 \
--require-audited
The entire flow completes without any human interaction.
Listing Active Deployments
The kadi deploy list command (alias kadi deploy ls) shows all active deployments from the .kadi-deploy.lock file without needing to open it manually.
$ kadi deploy list
Active Deployments (2)
──────────────────────────────────────────────────────────────────────
INSTANCE PROFILE TARGET LABEL DETAILS DEPLOYED
02f5 production akash broker-east dseq=19234567 Mar 10, 2026, 02:15 PM
a3f7 dev local — engine=docker Mar 11, 2026, 09:30 AM
Use `kadi deploy down --instance <id>` or `kadi deploy down --label <label>` to tear down a deployment.
List Flags
| Flag | Description | Default |
|------|-------------|---------|
| -p, --project <path> | Path to project with .kadi-deploy.lock | Current directory |
| --profile <profile> | Filter by profile name | Show all |
| --json | Output as JSON (for scripting / automation) | false |
| --verbose | Show additional details (provider address, services, network) | false |
JSON Output
Use --json for machine-readable output, useful in scripts and CI pipelines:
$ kadi deploy list --json
[
{
"instanceId": "02f5",
"profile": "production",
"target": "akash",
"label": "broker-east",
"deployedAt": "2026-03-10T14:15:00.000Z",
"dseq": 19234567,
"owner": "akash1abc...",
"provider": "akash1xyz...",
"providerUri": "https://provider.example.com",
"network": "mainnet"
}
]
Tearing Down Deployments
The kadi deploy down command tears down an active deployment launched by kadi deploy. It works for both local (Docker/Podman) and Akash deployments.
After a successful deployment, kadi-deploy writes a .kadi-deploy.lock file to the project root that records everything needed to tear it down. The lock file supports multiple simultaneous deployments — you can have a local dev deployment and an Akash production deployment active at the same time.
Multi-Instance Support
Each deployment gets a unique 4-character instance ID (e.g. a3f7) and an optional human-readable label. Use kadi deploy list to see all active deployments.
When multiple deployments are active, you can identify which one to tear down using --profile, --instance, or --label:
# Specify which profile to tear down
kadi deploy down --profile dev
kadi deploy down --profile production
# Tear down by instance ID (4-char hex shown in deploy output and `deploy list`)
kadi deploy down --instance a3f7
# Tear down by label (set during deploy with --label)
kadi deploy down --label my-broker
# If only one deployment is active, it's auto-selected
kadi deploy down
# If multiple are active and no selector given, you'll be prompted to choose
kadi deploy down --yes
Tip: When deploying, use --label to give your deployment a memorable name:
kadi deploy --profile production --label broker-east
Then tear it down easily:
kadi deploy down --label broker-east
Local Teardown
# Tear down local containers
kadi deploy down
# Skip confirmation
kadi deploy down --yes
# Override container engine
kadi deploy down --engine podman
This runs docker compose down --remove-orphans (or podman-compose for Podman) against the compose file recorded in the lock. Engine resolution follows the same priority as deploy (see Container Engine Resolution).
Akash Teardown (Interactive)
# Close Akash deployment via WalletConnect QR
kadi deploy down
The CLI will:
- Display the active deployment details (DSEQ, provider, network)
- Ask for confirmation
- Show a QR code — scan with Keplr mobile
- Call closeDeployment on the blockchain
- Display the transaction hash and refund info
- Delete the lock file
Akash Teardown (Autonomous)
For fully automated teardown without human interaction:
# Autonomous teardown using vault mnemonic
kadi deploy down --autonomous
# With custom vault
kadi deploy down --autonomous --secrets-vault my-wallet-vault
# Tear down a specific profile (required when multiple deployments are active)
kadi deploy down --autonomous --profile production
The autonomous path skips all interactive prompts (confirmation and profile selection), reads the wallet mnemonic from the secrets vault (same as kadi deploy --autonomous), signs the close transaction directly, and refunds the remaining escrow to your wallet.
Note: If multiple deployments are active, --autonomous requires --profile, --instance, or --label to specify which one to tear down (it cannot prompt for selection).
Down Flags
| Flag | Description | Default |
|------|-------------|----------|
| -p, --project <path> | Path to project with .kadi-deploy.lock | Current directory |
| --profile <profile> | Profile name to tear down (prompts if multiple; required in autonomous mode with multiple deployments) | Auto-select |
| --instance <id> | Instance ID to tear down (4-char hex from deploy output or deploy list) | — |
| --label <label> | Tear down the deployment matching this label | — |
| --all | Tear down all active deployments | false |
| --engine <engine> | Override container engine (docker/podman) | From lock file |
| --network <network> | Override Akash network (mainnet/testnet) | From lock file |
| --autonomous | No human interaction — skips confirmation, uses vault mnemonic for Akash | false |
| --secrets-vault <vault> | Vault for wallet mnemonic (autonomous mode) | global |
| -y, --yes | Skip confirmation prompt (implied by --autonomous) | false |
| --verbose | Detailed output | false |
Resolution priority: --instance > --label > --all > --profile > auto-select (single deployment) > interactive prompt.
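That precedence can be read as a simple cascade (a sketch, not the actual implementation):

```javascript
// Selector-resolution sketch for `kadi deploy down`:
// --instance > --label > --all > --profile > auto-select > prompt.
function selectDeployments(deployments, flags = {}) {
  const all = Object.values(deployments);
  if (flags.instance) return all.filter(d => d.instanceId === flags.instance);
  if (flags.label) return all.filter(d => d.label === flags.label);
  if (flags.all) return all;
  if (flags.profile) return all.filter(d => d.profile === flags.profile);
  if (all.length === 1) return all; // single deployment: auto-selected
  return null; // multiple deployments, no selector: prompt (or fail in --autonomous)
}
```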
The Lock File
The .kadi-deploy.lock file is written to the project root after every successful deployment. It uses a v3 format that supports multiple simultaneous deployments — including multiple instances of the same profile — keyed by {profile}:{instanceId}:
{
"version": 3,
"deployments": {
"local:a3f7": {
"instanceId": "a3f7",
"target": "local",
"profile": "local",
"deployedAt": "2026-02-25T12:00:00.000Z",
"local": { "composePath": "...", "engine": "docker", "...": "..." }
},
"akash:b9e2": {
"instanceId": "b9e2",
"target": "akash",
"profile": "akash",
"label": "broker-east",
"deployedAt": "2026-02-25T14:00:00.000Z",
"akash": { "dseq": 12345678, "owner": "akash1...", "...": "..." }
}
}
}
Each entry contains:
- instanceId: Unique 4-character hex identifier (e.g. a3f7)
- label: Optional human-readable label set at deploy time with --label
- Local deployments: compose file path, engine, network, service names, container IDs
- Akash deployments: DSEQ, owner address, provider, network, gseq/oseq
Use kadi deploy list to view active deployments without opening the lock file.
When you tear down a deployment, only that entry is removed. The file is deleted entirely when the last deployment is removed. Existing v1/v2 lock files are transparently migrated to v3 on read.
It is safe to add .kadi-deploy.lock to .gitignore.
If the lock file is missing or was manually deleted, you can still tear down containers directly:
# Local: run compose down manually
docker compose down --remove-orphans
# Akash: close via Akash Console or CLI
# You'll need the DSEQ from the original deployment output
Deployment Secrets
Declare secrets your deployed agent needs in the secrets block of a deploy profile. Before deployment, kadi deploy validates that every required secret exists in the vault and shares them with the container.
Single Vault (Legacy)
{
"deploy": {
"production": {
"target": "akash",
"network": "mainnet",
"services": { "..." : "..." },
"secrets": {
"vault": "my-agent",
"required": ["API_KEY", "DB_URL"],
"optional": ["DEBUG_KEY"],
"delivery": "broker"
}
}
}
}
Multi-Vault
When your agent needs secrets from more than one vault, use the vaults array:
{
"deploy": {
"production": {
"target": "akash",
"network": "mainnet",
"services": { "..." : "..." },
"secrets": {
"vaults": [
{ "vault": "my-agent", "required": ["API_KEY", "API_URL"] },
{ "vault": "infra", "required": ["TUNNEL_TOKEN"], "optional": ["OBSERVABILITY_KEY"] }
],
"delivery": "broker"
}
}
}
}
Each vault entry specifies which secrets to pull from that vault. All vaults are created inside the container and each secret is stored in its designated vault.
Secrets Fields
| Field | Description |
|-------|-------------|
| vault | (Legacy) Single vault name |
| vaults | (Multi-vault) Array of { vault, required?, optional? } entries |
| required | Secret names that must exist before deployment (deploy fails if missing) |
| optional | Secret names shared if available (no error if missing) |
| delivery | "env" (default) — injected as plain env vars. "broker" — E2E encrypted handshake via broker |
Both formats are fully backwards compatible. Existing single-vault configs continue to work unchanged.
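One way such compatibility can work is to normalize both shapes into the vaults array (a sketch, not kadi-deploy's actual code):

```javascript
// Sketch: fold the legacy single-vault form into the multi-vault `vaults`
// array so both formats can be handled by one code path.
function normalizeSecrets(secrets) {
  if (secrets.vaults) return secrets.vaults; // multi-vault: already normalized
  return [{                                  // legacy single-vault form
    vault: secrets.vault,
    required: secrets.required ?? [],
    optional: secrets.optional ?? [],
  }];
}
```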
Service Configuration
Services are defined the same way for both local and Akash deployments:
{
"services": {
"api": {
"image": "myapp:latest",
"command": ["node", "server.js"],
"env": ["PORT=3000", "NODE_ENV=production"],
"expose": [{ "port": 3000, "as": 80, "to": [{ "global": true }] }],
"resources": {
"cpu": 0.5,
"memory": "512Mi",
"ephemeralStorage": "1Gi"
}
}
}
}
Service fields:
| Field | Description |
|-------|-------------|
| image | Container image |
| command | Override container command (array) |
| env | Environment variables (array of KEY=value) |
| expose | Port mappings |
| resources | CPU, memory, storage (Akash only) |
| credentials | Registry credentials for private images |
Private Registry
For private images, add credentials:
{
"services": {
"app": {
"image": "ghcr.io/myorg/private-app:main",
"credentials": {
"host": "ghcr.io",
"username": "github-username",
"password": "github-token"
}
}
}
}
Multi-Service
Services can communicate with each other:
{
"services": {
"frontend": {
"image": "nginx:alpine",
"expose": [{ "port": 80, "as": 80, "to": [{ "global": true }] }]
},
"backend": {
"image": "node:20-alpine",
"expose": [{ "port": 3000, "to": [{ "service": "frontend" }] }]
}
}
}
Local Images on Akash
When deploying local images to Akash, kadi-deploy automatically:
- Starts a temporary container registry
- Pushes your local images to it
- Exposes the registry via tunnel (kadi, ngrok, serveo, or localtunnel)
- Rewrites image URLs in the deployment manifest
- Waits for the provider to pull images
- Shuts down the registry once containers are running
This happens automatically - no configuration needed for local images.
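The manifest-rewrite step can be sketched as follows (the "already qualified" heuristic and the registry hostname are illustrative, not kadi-deploy's actual logic):

```javascript
// Sketch: point unqualified (local) image refs at the temporary
// tunnel-exposed registry; leave registry-qualified refs alone.
function rewriteImage(image, registryHost) {
  const parts = image.split("/");
  // A first path segment containing "." or ":" looks like a registry host.
  const qualified = parts.length > 1 && /[.:]/.test(parts[0]);
  return qualified ? image : `${registryHost}/${image}`;
}
```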
CLI Options
kadi deploy # Use first available profile
kadi deploy --profile production # Use specific profile
kadi deploy --project /path/to/app # Specify project directory
kadi deploy --dry-run # Preview without deploying
kadi deploy --verbose # Detailed output
kadi deploy --yes # Skip confirmation prompts
Override profile settings:
kadi deploy --network testnet # Override Akash network
kadi deploy --engine podman # Override container engine
Autonomous deployment:
kadi deploy --autonomous # No human interaction
kadi deploy --autonomous --bid-strategy balanced # Pick balanced provider
kadi deploy --autonomous --bid-max-price 500 # Cap bid price
kadi deploy --autonomous --require-audited # Audited providers only
kadi deploy --autonomous --secrets-vault myvault # Custom vault
kadi deploy --auto-approve-secrets # Works in interactive mode too
kadi deploy --secret-timeout 120000 # 2 min timeout for secret handshake
List active deployments:
kadi deploy list # Table of all active deployments
kadi deploy ls # Alias for list
kadi deploy list --json # Machine-readable JSON output
kadi deploy list --verbose # Show provider, services, network details
kadi deploy list --profile production # Filter by profile
Tear down deployments:
kadi deploy down # Tear down active deployment
kadi deploy down --label my-broker # Tear down by label
kadi deploy down --instance a3f7 # Tear down by instance ID
kadi deploy down --all # Tear down all deployments
kadi deploy down --yes # Skip confirmation (interactive)
kadi deploy down --autonomous # Fully non-interactive (skips confirmation + QR)
kadi deploy down --autonomous --label prod-east # Autonomous teardown by label
kadi deploy down --autonomous --profile prod # Required when multiple deployments active
kadi deploy down --autonomous --secrets-vault v # Akash: custom vault
kadi deploy down --engine podman # Override container engine
kadi deploy down --verbose # Detailed output
Troubleshooting
Local Deployment
# Check container engine
docker --version
podman --version
# Start Podman machine (macOS)
podman machine start
Akash Deployment
Certificate issues: If deployment fails with a certificate error, add the cert path to your profile:
{
"deploy": {
"production": {
"cert": "~/.kadi/certificate.json"
}
}
}
Preview before deploying:
kadi deploy --profile production --dry-run
Verbose output for debugging:
kadi deploy --profile production --verbose
Migrating from .env to Vaults
As of kadi-deploy v0.19.0, secrets are stored in encrypted vaults (secrets.toml) and config in config.yml — replacing the old .env file approach entirely. The .env fallback still works but is deprecated and will be removed in a future release.
Step 1: Update kadi-deploy
kadi install kadi-deploy
This pulls kadi-deploy v0.19.0 or later along with its updated dependencies.
Step 2: Move tunnel token to a global vault
Previously you had a .env file (often inside abilities/ or your project root) containing:
KADI_TUNNEL_TOKEN=your-token-here
NGROK_AUTH_TOKEN=your-ngrok-token
Move these to an encrypted global vault:
# Create the tunnel vault at user level (~/.kadi/secrets/config.toml)
kadi secret create tunnel -g
# Copy your token value from the old .env file, then:
kadi secret set KADI_TUNNEL_TOKEN "your-token-here" -v tunnel
# If you used ngrok:
kadi secret set NGROK_AUTH_TOKEN "your-ngrok-token" -v tunnel
The global (-g) vault lives at ~/.kadi/secrets/config.toml, accessible from any project directory. You no longer need to copy .env files into ability folders.
Step 3: Move Akash wallet (autonomous deployments only)
If you had AKASH_WALLET in a .env or environment variable:
# The 'global' vault likely already exists — if not:
kadi secret create global -g
# Store the mnemonic
kadi secret set AKASH_WALLET "your twelve or twenty four word mnemonic" -v global
Step 4: Create config.yml (optional)
If you had non-secret settings in .env (tunnel server, ports, etc.), move them to config.yml in your project root:
tunnel:
server_addr: broker.kadi.build
tunnel_domain: tunnel.kadi.build
transport: wss
deploy:
container_engine: dockerAll values have sensible defaults — you only need config.yml if you customized something.
Step 5: Clean up
# Delete the old .env file from your kadi abilities folder
rm ~/.kadi/../abilities/.env # or wherever your .env lived
# Also remove from your project if you had one there
rm /path/to/project/.env # only if it contained nothing but kadi tunnel/deploy secrets
Step 6: Verify
# From any directory — should return your token
kadi secret get KADI_TUNNEL_TOKEN -v tunnel
# From an agent subdirectory — multi-level discovery finds the global vault
cd agents/my-agent
kadi secret get KADI_TUNNEL_TOKEN -v tunnel
# Test a deploy
kadi deploy --dry-run
What changed
| Before (.env) | After (vaults + config.yml) |
|---------------|----------------------------|
| Plaintext .env file in ability folder | Encrypted secrets.toml in ~/.kadi/ (age/ChaCha20-Poly1305) |
| Copy .env into each ability that needs it | Global vault — one location, accessible everywhere |
| Secrets and config mixed in one flat file | Secrets in vault, config in config.yml — separated |
| No encryption, easy to leak | Encrypted at rest, master key in OS keychain |
| Per-project only | Global (~/.kadi/) or per-project — your choice |
Backwards compatibility
The .env fallback is still supported as a tier-3 fallback in configResolver.js:
- process.env (always wins)
- Encrypted vault (secrets.toml)
- .env file walk-up (deprecated — still works)
- config.yml (for non-secret settings)
If you have an existing .env file, it will continue to work. But new setups should use vaults.
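The tiers above amount to a lookup cascade like this sketch (illustrative; the real configResolver.js may differ):

```javascript
// Tiered secret lookup sketch: process.env, then the encrypted vault,
// then the deprecated .env walk-up.
function resolveSecret(name, { env = {}, vault = {}, dotenv = {} } = {}) {
  if (env[name] !== undefined) return env[name];       // 1. process.env always wins
  if (vault[name] !== undefined) return vault[name];   // 2. encrypted vault (secrets.toml)
  if (dotenv[name] !== undefined) return dotenv[name]; // 3. .env walk-up (deprecated)
  return undefined;
}
```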
