Retell AI CLI
Community-built command-line tool for Retell AI - designed to give AI assistants efficient access to transcripts, agents, and prompts without using context-expensive MCP servers.
Features
- Transcript Management - List, retrieve, and analyze call transcripts
- Agent Management - View and configure Retell AI agents
- Prompt Engineering - Pull, edit, and update agent prompts
- Multi-format Support - Works with Retell LLM and Conversation Flows
- AI-Friendly - JSON output by default for AI coding assistants
- Cross-Shell - Works in bash, fish, zsh, and more
Installation
npm install -g retell-cli
Or use directly with npx (no installation required):
npx retell-cli@latest --help
Quick Start
1. Authenticate
retell login
# Enter your Retell API key when prompted
Your API key will be saved to .retellrc.json in the current directory.
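Because .retellrc.json contains a credential, keep it out of version control. A minimal step, assuming your project is a git repository:
# Exclude the local credential file from git (assumes a git repository)
echo ".retellrc.json" >> .gitignore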
2. List Your Agents
retell agents list
Output:
[
{
"agent_id": "agent_123abc",
"agent_name": "Customer Support Bot",
"response_engine": {
"type": "retell-llm"
}
}
]
3. Analyze a Call Transcript
# List recent calls
retell transcripts list --limit 10
# Analyze a specific call
retell transcripts analyze call_abc123
Output:
{
"call_id": "call_abc123",
"metadata": {
"status": "ended",
"duration_ms": 45000,
"agent_name": "Customer Support Bot"
},
"analysis": {
"summary": "Customer inquired about product pricing",
"sentiment": "positive",
"successful": true
},
"performance": {
"latency_p50_ms": {
"e2e": 500,
"llm": 200,
"tts": 100
}
}
}
4. Manage Agent Prompts
# Pull current prompts
retell prompts pull agent_123abc
# Edit .retell-prompts/agent_123abc/general_prompt.md with your changes
# Check what changed
retell prompts diff agent_123abc
# Dry run to preview changes
retell prompts update agent_123abc --dry-run
# Apply changes
retell prompts update agent_123abc
# Publish the updated agent
retell agent-publish agent_123abc
Authentication
The CLI supports three authentication methods (in order of precedence):
1. Environment Variable (Best for CI/CD)
export RETELL_API_KEY=your_api_key_here
retell agents list
2. Local Config File (Best for Development)
retell login
# Creates .retellrc.json in current directory
The config file format:
{
"apiKey": "your_api_key_here"
}
3. Per-Command Override
RETELL_API_KEY=key_abc123 retell agents list
Note for Fish shell users:
env RETELL_API_KEY=key_abc123 retell agents list
Command Reference
Authentication
retell login
Save your API key to a local config file.
retell login
# Prompts: Enter your Retell API key:
Transcripts
retell transcripts list [options]
List call transcripts with optional filtering.
Options:
-l, --limit <number> - Maximum number of calls to return (default: 50)
Examples:
# List recent calls
retell transcripts list
# List up to 100 calls
retell transcripts list --limit 100
retell transcripts get <call_id>
Get detailed information about a specific call.
Example:
retell transcripts get call_abc123
retell transcripts analyze <call_id>
Analyze a call transcript with structured insights including sentiment, performance metrics, and cost breakdown.
Example:
retell transcripts analyze call_abc123
Agents
retell agents list [options]
List all agents in your account.
Options:
-l, --limit <number> - Maximum number of agents to return (default: 100)
Example:
retell agents list
retell agents info <agent_id>
Get detailed information about a specific agent.
Example:
retell agents info agent_123abc
Prompts
retell prompts pull <agent_id> [options]
Download agent prompts to a local file.
Options:
-o, --output <path> - Output file path (default: .retell-prompts/<agent_id>.json)
Examples:
# Pull to default location
retell prompts pull agent_123abc
# Pull to specific file
retell prompts pull agent_123abc --output my-prompts.json
retell prompts diff <agent_id> [options]
Show differences between local and remote prompts before applying updates.
Options:
-s, --source <path> - Source directory path (default: .retell-prompts)
-f, --fields <fields> - Comma-separated list of fields to return
Examples:
# Compare local and remote prompts
retell prompts diff agent_123abc
# Use custom source directory
retell prompts diff agent_123abc --source ./custom-prompts
# Show only specific fields
retell prompts diff agent_123abc --fields has_changes,changes.general_prompt
Output:
{
"agent_id": "agent_123abc",
"agent_type": "retell-llm",
"has_changes": true,
"changes": {
"general_prompt": {
"old": "You are a helpful assistant...",
"new": "You are a helpful assistant specializing in...",
"change_type": "modified"
}
}
}
retell prompts update <agent_id> [options]
Update agent prompts from a local file.
Options:
-s, --source <path> - Source file path (default: .retell-prompts/<agent_id>.json)
--dry-run - Preview changes without applying them
Examples:
# Dry run first (recommended)
retell prompts update agent_123abc --source my-prompts.json --dry-run
# Apply changes
retell prompts update agent_123abc --source my-prompts.json
Important: After updating prompts, remember to publish the agent:
retell agent-publish agent_123abc
retell agent-publish <agent_id>
Publish a draft agent to make changes live.
Example:
retell agent-publish agent_123abc
Field Selection
Reduce output size and token usage by selecting specific fields:
# Get only call_id and status
retell transcripts list --fields call_id,call_status
# Select nested fields with dot notation
retell transcripts get abc123 --fields metadata.duration,analysis.summary
# Combine with other options
retell agents list --limit 10 --fields agent_id,agent_name
Supported commands:
- All transcript commands (list, get, analyze)
- All agent commands (list, info)
Features:
- Dot notation for nested fields (e.g., metadata.duration)
- Works with arrays
- Reduces token usage by 50-90% for AI workflows (see the size check below)
- Backward compatible (no --fields = full output)
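A quick way to gauge the savings is to compare response sizes with and without --fields; byte counts are only a rough proxy for tokens, and the exact reduction varies by call:
# Full payload vs. field-selected payload (bytes as a rough token proxy)
retell transcripts get call_abc123 | wc -c
retell transcripts get call_abc123 --fields metadata.duration,analysis.summary | wc -c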
Raw Output Mode
Get the unmodified API response instead of enriched analysis:
# Raw API response (useful for debugging)
retell transcripts analyze abc123 --raw
# Combine with field selection for minimal output
retell transcripts analyze abc123 --raw --fields call_id,transcript_object
# Compare raw vs enriched
retell transcripts analyze abc123 --raw > raw.json
retell transcripts analyze abc123 > enriched.json
diff raw.json enriched.json
When to use:
- Debugging issues with API responses
- When tools expect the official Retell API schema
- Accessing new API fields before CLI enrichment support
- Comparing raw data to enriched output for validation
Supported commands:
- transcripts analyze - returns the raw Call Object exactly as documented in the Retell API reference
Note: The --raw flag works seamlessly with --fields for precise data extraction. Raw output returns the official Retell API schema, allowing you to access all fields documented in the API reference.
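To see exactly which top-level fields enrichment adds, you can diff the key sets of the two outputs. A sketch that assumes both responses are JSON objects and a shell with process substitution (bash or zsh):
# Compare top-level keys of the raw Call Object vs. the enriched analysis
diff <(retell transcripts analyze abc123 --raw | jq -r 'keys[]' | sort) \
     <(retell transcripts analyze abc123 | jq -r 'keys[]' | sort)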
Hotspot Detection
Identify conversation issues for focused troubleshooting:
# Find all issues in a call
retell transcripts analyze abc123 --hotspots-only
# Combine with field selection
retell transcripts analyze abc123 --hotspots-only --fields hotspots
# Set custom thresholds
retell transcripts analyze abc123 --hotspots-only --latency-threshold 1500
retell transcripts analyze abc123 --hotspots-only --silence-threshold 3000
Detected issues:
- Latency spikes - When p90 latency exceeds threshold (default: 2000ms)
- Long silences - Gaps between turns exceeding threshold (default: 5000ms)
- Sentiment - Negative sentiment indicators
Use cases:
- Rapid troubleshooting of failed calls
- Prompt iteration and refinement
- Performance monitoring across calls
- AI agent workflow optimization
Note: The --hotspots-only flag works seamlessly with --fields for token efficiency.
Search Transcripts
Find calls with advanced filtering - no need for jq or grep:
# Find all error calls
retell transcripts search --status error
# Find calls for specific agent in date range
retell transcripts search \
--agent-id agent_123 \
--since 2025-11-01 \
--until 2025-11-15
# Combine multiple filters
retell transcripts search \
--status error \
--agent-id agent_123 \
--since 2025-11-01 \
--limit 20
# Use field selection for minimal output
retell transcripts search \
--status error \
--fields call_id,call_status,agent_id
Available filters:
--status - Call status (error, ended, ongoing)
--agent-id - Filter by agent
--since - Calls after date (YYYY-MM-DD or ISO format)
--until - Calls before date (YYYY-MM-DD or ISO format)
--limit - Max results (default: 50)
--fields - Select specific fields (see Field Selection)
AI Agent Workflow Example:
# 1. Find all recent error calls
retell transcripts search --status error --since 2025-11-08 --fields call_id
# 2. For each call, get hotspots
retell transcripts analyze <call_id> --hotspots-only
# 3. No jq or grep needed - direct JSON parsing!
Common Workflows
Analyzing Failed Calls
# List recent calls (look for error status)
retell transcripts list --limit 50 > calls.json
# Filter for failed calls (using jq)
jq '.[] | select(.call_status == "error")' calls.json
# Analyze each failed call
retell transcripts analyze call_xyz789
Bulk Prompt Updates
# Pull prompts for all agents
for agent_id in $(retell agents list | jq -r '.[].agent_id'); do
  retell prompts pull "$agent_id" --output "prompts-${agent_id}.json"
done
# ... edit prompt files ...
# Update all agents
for file in prompts-*.json; do
  agent_id=$(echo "$file" | sed 's/^prompts-//;s/\.json$//')
  retell prompts update "$agent_id" --source "$file"
  retell agent-publish "$agent_id"
done
Daily Performance Monitoring
#!/bin/bash
# Save as: daily-report.sh
# Get all calls from today
retell transcripts search --since "$(date +%F)" --limit 100 > today-calls.json
# Analyze each call and save report
for call_id in $(jq -r '.[].call_id' today-calls.json); do
  retell transcripts analyze "$call_id" > "analysis-${call_id}.json"
done
# Generate summary report (using jq)
echo "Performance Summary:"
jq -s '[.[] | .performance.latency_p50_ms.e2e] | add / length' analysis-*.json
For AI Agents
This CLI was specifically designed for AI assistants to access Retell AI efficiently without the token overhead of MCP servers. All commands output JSON by default, making it perfect for Claude Code, Cursor, Aider, and other AI coding assistants.
Why This Tool Exists
Traditional MCP (Model Context Protocol) servers can consume significant context windows when working with Retell AI data. This CLI provides a lightweight, token-efficient alternative that:
- Reduces token usage by 50-90% with field selection (--fields)
- Provides structured JSON output for easy parsing
- Offers hotspot detection for focused troubleshooting
- Enables safe prompt updates with diff and dry-run features
- Works across all shells (bash, zsh, fish) for maximum compatibility
Example AI Workflow
# AI agent lists all calls and finds issues
retell transcripts list | jq '.[] | select(.call_status == "error")'
# AI analyzes a problematic call
retell transcripts analyze call_123
# AI pulls current prompts
retell prompts pull agent_456
# AI reads and suggests improvements to prompts
# (Edits .retell-prompts/agent_456/general_prompt.md)
# AI shows what changed
retell prompts diff agent_456
# AI explains the changes and uses dry-run to verify
retell prompts update agent_456 --dry-run
# Apply changes
retell prompts update agent_456
retell agent-publish agent_456
Error Format
All errors are returned as JSON for easy parsing:
{
"error": "Descriptive error message",
"code": "ERROR_CODE"
}
Common error codes:
AUTHENTICATION_ERROR - Invalid API key
NOT_FOUND - Resource not found
CUSTOM_LLM_ERROR - Cannot manage custom LLM agents
TYPE_MISMATCH - Prompt file type doesn't match agent type
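In scripts, you can branch on the documented error object instead of scraping text. A minimal sketch, assuming the error JSON is printed to stdout as shown above (adjust if your version writes it to stderr or signals errors via exit code):
# Detect a CLI error object and surface its code (sketch)
out=$(retell agents info agent_123abc)
if echo "$out" | jq -e 'has("error")' > /dev/null; then
  echo "Retell CLI error: $(echo "$out" | jq -r '.code')" >&2
  exit 1
fi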
Troubleshooting
"API key is missing or invalid"
Solution:
- Run retell login to set up authentication
- Or set the RETELL_API_KEY environment variable
- Verify your API key in the Retell dashboard
"Cannot manage custom LLM agents"
Cause: Custom LLM agents use external WebSocket connections and cannot be managed via the API.
Solution: Use the Retell dashboard to manage custom LLM agents.
"Type mismatch" error
Cause: The prompt file type must match the agent's response engine type.
Solution: Check your agent type:
retell agents info <agent_id> | jq '.response_engine.type'
Ensure your prompt file has the correct type (a verification sketch follows the list):
retell-llm - For Retell LLM agents
conversation-flow - For Conversation Flow agents
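A safe sequence before pushing changes is to confirm the remote engine type and preview the update with the documented dry-run flag:
# Check the remote agent type, then preview the prompt update before applying
retell agents info agent_123abc --fields response_engine.type
retell prompts update agent_123abc --dry-run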
Permission denied on config file
Cause: The CLI creates .retellrc.json with restricted permissions (0600) for security.
Solution: Check file ownership and permissions:
ls -la .retellrc.json
# Should show: -rw------- (readable/writable by owner only)
Command not found after installation
Solution: Ensure npm global bin directory is in your PATH:
npm config get prefix
# Add this path to your PATH environment variable
For npm global installs:
export PATH="$(npm config get prefix)/bin:$PATH"
Development
Want to contribute or run the CLI locally? See CONTRIBUTING.md for development setup and guidelines.
# Clone the repository
git clone https://github.com/awccom/retell-cli.git
cd retell-cli
# Install dependencies
npm install
# Build
npm run build
# Run tests
npm test
# Link for local development
npm link
retell --version
Shell Compatibility
The Retell CLI is fully compatible with:
- Bash (GNU Bash 5.x)
- Zsh (5.x)
- Fish (3.x)
See docs/shell-compatibility.md for detailed test results.
Contributing
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
License
MIT License - see LICENSE for details.
Resources
Support
If you encounter any issues or have questions:
- Check the Troubleshooting section
- Review the User Guide
- Search existing issues
- Open a new issue
Built by the community for AI-assisted Retell AI development. Not affiliated with or endorsed by Retell AI.
