convoai
v1.5.3
CLI for Agora ConvoAI Engine — start, manage, and monitor conversational AI agents
ConvoAI CLI
The developer-friendly CLI for Agora Conversational AI Engine
Start, manage, and monitor conversational AI agents from your terminal. One command to launch a voice agent. Full control over its lifecycle, conversation history, and real-time performance.
Features
- Full Agora ConvoAI REST API coverage -- every endpoint, one CLI
- Interactive and non-interactive modes -- prompts when you're exploring, flags when you're scripting
- Instant voice chat -- convoai go launches a voice agent and browser client with zero params
- Real-time agent monitoring -- live status, latency, and conversation turns in the runtime panel
- Built-in presets -- start with OpenAI, Anthropic, Gemini, or Realtime in one flag
- Profile management -- switch between dev, staging, and prod with a single option
- JSON output for scripting -- pipe --json into jq for CI/CD pipelines
- Shell completions -- tab-complete commands in bash, zsh, and fish
- Project-level config -- drop a .convoai.json in your repo for team-shared defaults
- Telephony support (beta) -- initiate and manage phone calls through ConvoAI
Quick Start
# Install globally
npm install -g convoai
# Authenticate with your Agora credentials
convoai auth login
# Zero-params instant voice chat (launches browser + agent in one step)
convoai go
# Or run guided setup first
convoai go --setup
# Override the model on the fly
convoai go --model gpt-4o
# See what's running
convoai agent list
# Check the conversation so far
convoai agent history <agent-id>
# View turn-level latency analytics
convoai agent turns <agent-id>
# Done for now
convoai agent stop <agent-id>
Installation
npm (recommended)
npm install -g convoai
npx (no install)
npx convoai auth login
npx convoai agent start --channel demo --preset openai-mini
From source
git clone https://github.com/AgoraIO/convoai-cli.git
cd convoai-cli
npm install
npm run build
npm link
Authentication
ConvoAI CLI needs three credentials from Agora Console:
| Credential | Description |
|---|---|
| App ID | Your Agora project's application ID |
| Customer ID | REST API customer identifier |
| Customer Secret | REST API customer secret |
Interactive login
convoai auth login
The CLI will prompt for each value, verify connectivity, and save credentials to ~/.config/convoai/config.json.
Non-interactive login
convoai auth login \
--app-id YOUR_APP_ID \
--customer-id YOUR_CUSTOMER_ID \
--customer-secret YOUR_CUSTOMER_SECRET
Login to a named profile
convoai auth login --profile staging
Environment variables
Credentials can also be provided via environment variables. These override any saved config.
export CONVOAI_APP_ID="your-app-id"
export CONVOAI_CUSTOMER_ID="your-customer-id"
export CONVOAI_CUSTOMER_SECRET="your-customer-secret"
Check auth status
convoai auth status
Clear credentials
convoai auth logout
Commands Reference
Every command supports --json for machine-readable output and --profile <name> to target a specific configuration profile.
Commands are organized into scenario groups:
| Group | Commands |
|---|---|
| Start | go, quickstart |
| Agent | agent start, agent stop, agent status, agent list, agent update, agent speak, agent interrupt, agent history, agent turns, agent join |
| Config | config init, config set, config get, config show, config path, auth login, auth logout, auth status |
| More | preset list, preset use, template save/list/show/delete/use, call initiate/hangup/status, token, completion |
go -- Instant voice chat
Zero-params command that launches a voice agent and opens a browser-based voice client in one step. Uses your saved configuration (or prompts for setup if unconfigured).
convoai go [options]
| Flag | Description |
|---|---|
| --setup | Run the guided setup wizard before starting |
| --model <model> | Override the LLM model (e.g. gpt-4o, claude-sonnet-4-20250514) |
Examples:
# Instant voice chat -- no flags needed
convoai go
# Run setup first, then start
convoai go --setup
# Override the model for this session
convoai go --model gpt-4o
# Use a different model provider
convoai go --model claude-sonnet-4-20250514
Agent Management
agent start -- Start a new conversational AI agent
convoai agent start --channel <name> [options]
| Flag | Description |
|---|---|
| -c, --channel <name> | RTC channel name (required) |
| -n, --name <name> | Agent name (auto-generated if omitted) |
| --preset <name> | Use a built-in preset (e.g. openai-mini, anthropic-claude) |
| --model <model> | LLM model name (e.g. gpt-4o, claude-sonnet-4-20250514) |
| --llm-url <url> | Custom LLM API endpoint |
| --llm-key <key> | LLM API key |
| --tts <vendor> | TTS vendor (e.g. microsoft, elevenlabs) |
| --asr <vendor> | ASR vendor (e.g. deepgram) |
| --system-message <msg> | System prompt for the LLM |
| --greeting <msg> | Greeting message spoken when the agent joins |
| --uid <uid> | Agent RTC UID (default: "Agent") |
| --remote-uids <uids> | Comma-separated remote UIDs (default: "*") |
| --idle-timeout <seconds> | Idle timeout in seconds (default: 30) |
| --dry-run | Print the request payload without sending it |
Examples:
# Quickest start -- use a preset
convoai agent start --channel demo --preset openai-mini
# Custom model with a system prompt
convoai agent start \
--channel support-line \
--model gpt-4o \
--system-message "You are a customer support agent for Acme Corp." \
--greeting "Hello! How can I help you today?"
# Use Anthropic Claude
convoai agent start --channel research --preset anthropic-claude
# OpenAI Realtime (multimodal voice-to-voice)
convoai agent start --channel realtime-demo --preset realtime-openai
# Dry run to inspect the request payload
convoai agent start --channel test --preset openai-mini --dry-run
# Interactive mode -- just run without flags and answer the prompts
convoai agent start
agent stop -- Stop a running agent
convoai agent stop <agent-id>
convoai agent stop --all
| Flag | Description |
|---|---|
| -a, --all | Stop all running agents |
| -f, --force | Skip confirmation when using --all |
Examples:
# Stop a single agent
convoai agent stop abc123
# Stop everything (with confirmation prompt)
convoai agent stop --all
# Stop everything without confirmation (for scripts)
convoai agent stop --all --force
agent status -- Query the status of an agent
convoai agent status <agent-id>
Returns the agent's current status, channel, start time, and stop time (if applicable).
agent list -- List agents
convoai agent list [options]
| Flag | Description |
|---|---|
| -s, --state <state> | Filter by state: running, stopped, failed, all (default: running) |
| -c, --channel <name> | Filter by channel name |
| -l, --limit <n> | Maximum results (default: 20) |
Alias: agent ls
Examples:
# List running agents (default)
convoai agent list
# List all agents regardless of state
convoai agent list --state all
# List agents on a specific channel
convoai agent list --channel my-channel
# List failed agents
convoai agent list --state failed --limit 50
agent update -- Update a running agent's configuration
convoai agent update <agent-id> [options]
| Flag | Description |
|---|---|
| --system-message <msg> | Update the system prompt |
| --model <model> | Update the LLM model |
| --max-tokens <n> | Update max tokens |
| --temperature <n> | Update temperature |
| --token <token> | Update the RTC token |
Examples:
# Change the system prompt on the fly
convoai agent update abc123 --system-message "You are now a French tutor."
# Adjust generation parameters
convoai agent update abc123 --temperature 0.3 --max-tokens 256
agent speak -- Instruct an agent to speak
convoai agent speak <agent-id> <text> [options]
| Flag | Description |
|---|---|
| --priority <priority> | INTERRUPT, APPEND, or IGNORE (default: INTERRUPT) |
| --no-interrupt | Prevent user from voice-interrupting this message |
Examples:
# Interrupt current speech and say something
convoai agent speak abc123 "We'll be closing in 5 minutes."
# Append to the speech queue
convoai agent speak abc123 "One more thing..." --priority APPEND
# Non-interruptible announcement
convoai agent speak abc123 "Important: scheduled maintenance at midnight." --no-interrupt
agent interrupt -- Interrupt an agent that is currently speaking
convoai agent interrupt <agent-id>
agent history -- View conversation history
convoai agent history <agent-id> [options]
| Flag | Description |
|---|---|
| --limit <n> | Show only the last N entries |
Examples:
# Full conversation history
convoai agent history abc123
# Last 5 messages
convoai agent history abc123 --limit 5
# Export as JSON
convoai agent history abc123 --json > conversation.json
agent turns -- View turn-level latency analytics
convoai agent turns <agent-id> [options]
| Flag | Description |
|---|---|
| --limit <n> | Number of turns to show (default: 20) |
Displays a table of each conversation turn with end-to-end latency and per-component breakdown (ASR, LLM, TTS), plus averages. Latency values are color-coded: green (<1s), yellow (1-2s), red (>2s).
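The same numbers can feed scripts. A sketch that computes the average end-to-end latency, assuming the `--json` payload exposes a `turns` array with an `e2e_latency_ms` field per turn (the shape used by the jq example in this section):

```shell
# Average end-to-end latency (ms) across the returned turns.
# The field path .turns[].e2e_latency_ms is assumed from the --json output shape.
convoai agent turns abc123 --json \
  | jq '[.turns[].e2e_latency_ms] | add / length'
```

In CI, you could compare the result against a threshold and fail the build on a latency regression.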
Examples:
# See the latest turns
convoai agent turns abc123
# Export turn analytics as JSON for further analysis
convoai agent turns abc123 --json | jq '.turns[] | {turn_id, e2e_latency_ms}'
Call Management (Beta)
Telephony integration for initiating and managing phone calls through ConvoAI agents.
call initiate -- Start a phone call
convoai call initiate [options]
call hangup -- End a phone call
convoai call hangup <call-id>
call status -- Check call status
convoai call status <call-id>
Note: Call commands are in beta. The API surface may change in future releases.
Configuration
config init -- Interactive setup wizard
convoai config init
Walks you through setting up your Agora credentials, LLM provider, and TTS vendor with an interactive wizard. Verifies connectivity when done.
config set -- Set a configuration value
convoai config set <key> <value> [--profile <name>]
Supports dot-notation for nested values.
Examples:
# Set the default LLM model
convoai config set llm.model gpt-4o
# Set ASR language for a profile
convoai config set asr.language en-US --profile production
# Set the TTS voice
convoai config set tts.params.voice_name en-US-AndrewMultilingualNeural
# Set the default region
convoai config set region global
Valid keys:
| Category | Keys |
|---|---|
| Credentials | app_id, customer_id, customer_secret, base_url, region, default_profile |
| LLM | llm.url, llm.api_key, llm.vendor, llm.style, llm.model, llm.greeting_message, llm.failure_message, llm.max_history |
| TTS | tts.vendor, tts.params.key, tts.params.region, tts.params.voice_name, tts.params.speed, tts.params.volume |
| ASR | asr.vendor, asr.language, asr.params.key, asr.params.model, asr.params.language |
config get -- Read a configuration value
convoai config get <key> [--profile <name>]
convoai config get llm.model
# gpt-4o
convoai config get region --profile staging
# cn
config show -- Display the full configuration
convoai config show [--profile <name>] [--json]
Secrets are automatically masked in the output.
config path -- Print the config file path
convoai config path
# ~/.config/convoai/config.json
convoai config path --dir
# ~/.config/convoai
Presets
Presets are built-in configurations that bundle an LLM, TTS, and ASR stack into a single name.
Available presets
| Preset | LLM | TTS | ASR |
|---|---|---|---|
| openai-gpt4o | OpenAI GPT-4o | Microsoft TTS | Deepgram |
| openai-mini | OpenAI GPT-4o-mini | Microsoft TTS | Deepgram |
| anthropic-claude | Anthropic Claude | Microsoft TTS | Deepgram |
| gemini | Google Gemini 2.0 Flash | Microsoft TTS | Deepgram |
| realtime-openai | OpenAI Realtime (multimodal) | Built-in (MLLM) | Built-in (MLLM) |
preset list -- List all presets
convoai preset list
preset use -- Apply a preset to your profile
convoai preset use <name> [--profile <name>]
Saves the preset's LLM, TTS, and ASR settings as your profile defaults. After applying, agent start will use these settings automatically.
# Apply to your default profile
convoai preset use openai-mini
# Apply to a named profile
convoai preset use anthropic-claude --profile research
Other Commands
auth status -- Check authentication status
convoai auth status
auth logout -- Remove saved credentials
convoai auth logout
Configuration
Precedence
Configuration is resolved in this order (highest priority first):
CLI flags > Environment variables > Project .convoai.json > Profile config > Base config
Config file
The global configuration lives at ~/.config/convoai/config.json:
{
"app_id": "your-app-id",
"customer_id": "your-customer-id",
"customer_secret": "your-customer-secret",
"region": "global",
"default_profile": "default",
"profiles": {
"default": {
"llm": {
"vendor": "openai",
"style": "openai",
"api_key": "sk-...",
"model": "gpt-4o-mini"
},
"tts": {
"vendor": "microsoft",
"params": {
"voice_name": "en-US-AndrewMultilingualNeural",
"speed": 1.0
}
},
"asr": {
"vendor": "deepgram",
"language": "en-US",
"params": {
"model": "nova-2"
}
}
}
}
}
Project config
Drop a .convoai.json in your project root to share settings across your team. Project config overrides profile settings.
{
"llm": {
"vendor": "openai",
"model": "gpt-4o",
"system_messages": [
{
"role": "system",
"content": "You are the Acme Corp support assistant. Be helpful and concise."
}
]
},
"tts": {
"vendor": "microsoft",
"params": {
"voice_name": "en-US-JennyNeural"
}
}
}
Commit .convoai.json to version control. Keep secrets out of it -- use environment variables or your personal config profile for credentials.
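Because project config sits below flags and environment variables in the precedence chain, resolution behaves like nested shell fallbacks. An illustrative sketch (these variable names are invented for the illustration; they are not CLI internals):

```shell
# Illustrative only: how layered settings fall through, highest priority first.
FLAG_MODEL=""                              # no --model flag was passed
ENV_MODEL="gpt-4o"                         # e.g. set via an environment variable
PROJECT_MODEL="gpt-4o-mini"                # from .convoai.json in the repo
PROFILE_MODEL="claude-sonnet-4-20250514"   # from the active profile
MODEL="${FLAG_MODEL:-${ENV_MODEL:-${PROJECT_MODEL:-${PROFILE_MODEL}}}}"
echo "$MODEL"   # gpt-4o -- the environment variable wins because no flag was set
```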
Profiles
Profiles let you maintain separate configurations for different environments.
Setup
# Create a development profile
convoai auth login --profile dev
# Create a staging profile
convoai auth login --profile staging
# Create a production profile
convoai auth login --profile prod
Usage
# Start an agent using staging credentials
convoai agent start --channel test --preset openai-mini --profile staging
# List agents in production
convoai agent list --profile prod
# Set the default profile so you don't need --profile every time
convoai config set default_profile staging
Switch via environment variable
export CONVOAI_PROFILE=prod
convoai agent list  # uses prod profile automatically
Scripting and CI/CD
Every command supports --json for machine-readable output, making it straightforward to integrate with pipelines.
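For example, IDs extracted with jq compose directly with other commands. A hypothetical cleanup sweep over CI channels (the `channel` field on each list entry is an assumption here; check your actual `--json` output):

```shell
# Stop every running agent whose channel name starts with "ci-".
convoai agent list --json \
  | jq -r '.data.list[] | select(.channel | startswith("ci-")) | .agent_id' \
  | xargs -r -n1 convoai agent stop   # -r: do nothing if no IDs matched (GNU xargs)
```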
Parse output with jq
# Get the agent ID from a start command
AGENT_ID=$(convoai agent start --channel ci-test --preset openai-mini --json | jq -r '.agent_id')
echo "Started agent: $AGENT_ID"
# Check status
convoai agent status "$AGENT_ID" --json | jq '.status'
# Clean up
convoai agent stop "$AGENT_ID"
List running agents and extract IDs
convoai agent list --json | jq -r '.data.list[].agent_id'
Export conversation history
convoai agent history "$AGENT_ID" --json | jq '.contents[] | "\(.role): \(.content)"'
CI/CD pipeline example
#!/bin/bash
set -euo pipefail
# Credentials from CI secrets
export CONVOAI_APP_ID="$AGORA_APP_ID"
export CONVOAI_CUSTOMER_ID="$AGORA_CUSTOMER_ID"
export CONVOAI_CUSTOMER_SECRET="$AGORA_CUSTOMER_SECRET"
# Start a test agent
AGENT_ID=$(convoai agent start \
--channel "ci-${CI_BUILD_ID}" \
--preset openai-mini \
--json | jq -r '.agent_id')
echo "Agent started: $AGENT_ID"
# Run your integration tests here...
# Tear down
convoai agent stop "$AGENT_ID"
# Stop any orphaned agents from failed runs
convoai agent stop --all --force
Environment Variables
| Variable | Description |
|---|---|
| CONVOAI_APP_ID | Agora App ID (overrides config) |
| CONVOAI_CUSTOMER_ID | Customer ID (overrides config) |
| CONVOAI_CUSTOMER_SECRET | Customer Secret (overrides config) |
| CONVOAI_PROFILE | Active profile name |
| CONVOAI_BASE_URL | Custom API base URL |
| NO_COLOR | Disable colored output (any value) |
Template Files
.convoai.json -- project configuration
Place this file in your project root for team-shared agent defaults.
{
"llm": {
"vendor": "openai",
"model": "gpt-4o",
"system_messages": [
{
"role": "system",
"content": "You are a helpful voice assistant for our application."
}
],
"greeting_message": "Hi there! How can I help you today?",
"max_history": 20,
"params": {
"temperature": 0.7,
"max_tokens": 512
}
},
"tts": {
"vendor": "microsoft",
"params": {
"voice_name": "en-US-AndrewMultilingualNeural",
"speed": 1.0
}
},
"asr": {
"vendor": "deepgram",
"language": "en-US",
"params": {
"model": "nova-2"
}
}
}
Troubleshooting
Missing required credentials: app_id, customer_id, customer_secret
You haven't authenticated yet. Run:
convoai auth login
Or set the environment variables CONVOAI_APP_ID, CONVOAI_CUSTOMER_ID, and CONVOAI_CUSTOMER_SECRET.
--channel is required
The agent start command requires a channel name. Provide it as a flag:
convoai agent start --channel my-channel --preset openai-mini
Or run convoai agent start without flags in an interactive terminal to get prompted.
Agent starts but immediately shows FAILED status
Check your LLM API key and TTS credentials. Use config show to inspect:
convoai config show
Verify the LLM key is set in your profile:
convoai config get llm.api_key
Connection errors or timeouts
- Confirm your region is correct: convoai config get region
- Check if you need a custom base URL for your deployment
- Verify your network can reach the Agora ConvoAI API
Agent stops after idle_timeout seconds
By default, agents time out after 30 seconds of inactivity. Increase it:
convoai agent start --channel demo --preset openai-mini --idle-timeout 300
How do I see the raw API request?
Use --dry-run to print the request payload without sending it:
convoai agent start --channel test --preset openai-mini --dry-run
Colors not showing / garbled output
If your terminal doesn't support ANSI colors, disable them:
NO_COLOR=1 convoai agent list
Contributing
- Fork the repository
- Create a feature branch: git checkout -b my-feature
- Install dependencies: npm install
- Run in dev mode: npm run dev -- agent list
- Check types: npm run lint
- Build: npm run build
- Submit a pull request
The project uses TypeScript with strict mode, Commander.js for command parsing, and Zod for config validation.
