pod-cli
v0.3.0
CLI for Promptodex - fetch, render, and execute prompts from the Promptodex registry
pod CLI
The official CLI for Promptodex - a registry where prompts are stored and versioned.
Alias: You can also use `promptodex` instead of `pod` if there's a naming conflict on your system.
Features
- Fetch prompts from the Promptodex registry
- Version support - fetch specific versions with `@version` syntax
- Render templates with variables
- Execute prompts against configured AI models
- Print output to stdout
- Project-level prompt management with `promptodex.json`
- Local caching for faster access
- Interactive setup wizard for easy configuration
Installation
```
npm install -g pod-cli
```

Or use with npx:

```
npx pod-cli summarize
```

Quick Start
1. Configure your API keys
Run the setup wizard:
```
pod config
```

Or create a config file manually at `~/.promptodex/config.json`:

```json
{
  "apiKey": "your-promptodex-api-key",
  "defaultModel": "4.1",
  "vendors": {
    "openai": {
      "apiKey": "sk-your-openai-key"
    },
    "anthropic": {
      "apiKey": "sk-your-anthropic-key"
    },
    "localhost": {
      "port": 11434
    }
  },
  "models": {
    "4.1": {
      "vendor": "openai",
      "model": "gpt-4.1"
    },
    "sonnet": {
      "vendor": "anthropic",
      "model": "claude-sonnet-4"
    },
    "llama": {
      "vendor": "localhost",
      "model": "llama3.2"
    }
  }
}
```

2. Initialize a project (optional)
```
pod init
```

This creates a `promptodex.json` file for managing project-level prompts.
3. Install prompts
```
pod install summarize
```

4. Run a prompt
```
pod summarize
```

Usage
Run a prompt
```
pod <slug>
```

Example:

```
pod summarize
```

Fetch a specific version
Prompts on Promptodex are versioned. Fetch a specific version using `@version` syntax:

```
pod summarize@2
```

This fetches version 2 of the "summarize" prompt. Without `@version`, the latest version is fetched.
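Splitting an argument like `summarize@2` into a slug and a version can be sketched as below. This is an illustrative helper, not the CLI's actual implementation:

```javascript
// Split "slug@version" into its parts. Using lastIndexOf means a slug
// containing "@" elsewhere would still resolve on the final "@".
function parseSlug(arg) {
  const at = arg.lastIndexOf("@");
  if (at <= 0) return { slug: arg, version: null }; // no version suffix
  return { slug: arg.slice(0, at), version: arg.slice(at + 1) };
}

console.log(parseSlug("summarize"));   // { slug: "summarize", version: null }
console.log(parseSlug("summarize@2")); // { slug: "summarize", version: "2" }
```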
Pass variables
Prompts can contain template variables like `{{topic}}`. Pass them with flags:

```
pod summarize --topic dogs
```

Use stdin
Pipe content into the CLI:
```
cat article.md | pod summarize
```

Or:

```
echo "Hello world" | pod translate --language spanish
```

Specify a model
Override the prompt's recommended model:
```
pod summarize --model sonnet
```

View configuration
```
pod show-config
```

Run diagnostics
```
pod doctor
```

Interactive setup
Run the setup wizard to configure your vendors and models:
```
pod config
```

This will walk you through selecting a vendor, entering API keys (or port for localhost), and choosing a default model.
Configuration
The global config file is located at `~/.promptodex/config.json`.
Structure
```json
{
  "apiKey": "your-promptodex-api-key",
  "defaultModel": "4.1",
  "vendors": {
    "openai": {
      "apiKey": "sk-xxx"
    },
    "anthropic": {
      "apiKey": "sk-xxx"
    },
    "xai": {
      "apiKey": "xai-xxx"
    },
    "localhost": {
      "port": 11434
    }
  },
  "models": {
    "4.1": {
      "vendor": "openai",
      "model": "gpt-4.1"
    },
    "sonnet": {
      "vendor": "anthropic",
      "model": "claude-sonnet-4"
    },
    "grok": {
      "vendor": "xai",
      "model": "grok-3"
    },
    "llama": {
      "vendor": "localhost",
      "model": "llama3.2"
    }
  }
}
```

Fields
| Field | Description |
|-------|-------------|
| `apiKey` | (Optional) Promptodex API key for accessing private prompts |
| `defaultModel` | The model alias to use when no model is specified |
| `vendors` | Credentials for each AI provider (API key, or port for localhost) |
| `models` | Model aliases mapping to a vendor and model ID |
Model Resolution
- If you specify `--model`, that alias is used
- Otherwise, if the prompt recommends a model, that model is used
- Otherwise, your `defaultModel` is used
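The resolution order above amounts to a simple fallback chain. A minimal sketch, assuming a `config` object shaped like the file shown earlier (this is not the CLI's actual source):

```javascript
// Resolve a model alias: CLI flag wins, then the prompt's recommended
// model, then the config's defaultModel.
function resolveModel(cliFlag, promptRecommended, config) {
  return cliFlag ?? promptRecommended ?? config.defaultModel;
}

const config = { defaultModel: "4.1" };
console.log(resolveModel("sonnet", "grok", config)); // "sonnet" (flag wins)
console.log(resolveModel(null, "grok", config));     // "grok"   (prompt's pick)
console.log(resolveModel(null, null, config));       // "4.1"    (config default)
```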
Commands
pod <slug> or pod <slug>@<version>
Fetch and execute a prompt from the registry.
Options:
- `--model <alias>` - Override the model to use
- `--<variable> <value>` - Set template variables
- `-v, --verbose` - Show verbose output
Examples:
```
pod summarize                  # Latest version
pod summarize@2                # Specific version
pod summarize --model sonnet   # Override model
pod summarize --topic "AI"     # Pass variables
```

pod init
Initialize a new project in the current directory:
- Creates `promptodex.json` to track installed prompts
- Adds `.promptodex/` to `.gitignore` (if present)
pod install [name] or pod i [name]
Install prompts from the registry:
- `pod install summarize` - Install a specific prompt (latest version)
- `pod install summarize@2` - Install a specific version
- `pod install` - Install all prompts listed in `promptodex.json`

Prompts are cached in `.promptodex/cache/` and version-locked in `promptodex.json`.
pod uninstall <name>
Remove a prompt from the project:
- Removes from `promptodex.json`
- Cleans up cached files in `.promptodex/cache/`
- Removes any compiled skill artifacts (`.promptodex/data/<slug>/` and `skills/<slug>.md`)
pod skill install <slug> or pod skill i <slug>
Install a prompt and compile it into a reusable skill file.
Options:
- `--<variable> <value>` - Set template variables (persisted to `.promptodex/data/<slug>/config.json`)
- `-v, --verbose` - Show verbose output
What it does:
- Installs the prompt (same as `pod install <slug>`).
- Persists the provided variables at `.promptodex/data/<slug>/config.json` along with the pinned version.
- Renders the prompt template using the merged variables and writes the result to `skills/<slug>.md`.
- Warns on missing optional variables and errors on missing required variables. Required vars can be populated by editing the generated `config.json` or running `pod doctor skills`.
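The merge-and-render step can be pictured as follows. The config shape and the precedence (CLI flags overriding persisted values) are assumptions for illustration, not the CLI's documented internals:

```javascript
// Merge persisted variables with CLI flags (flags win), then substitute
// {{variable}} placeholders into the prompt template.
function compileSkill(template, persistedVars, cliVars) {
  const vars = { ...persistedVars, ...cliVars };
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in vars ? String(vars[name]) : match // leave unset vars untouched
  );
}

const template = "Greet {{name}} in a {{tone}} tone.";
console.log(compileSkill(template, { name: "Matt" }, { tone: "formal" }));
// "Greet Matt in a formal tone."
```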
Example:
```
pod skill install greet --name Matt
# → skills/greet.md
```

pod skill rebuild <slug>
Re-fetch the latest version of an installed skill's prompt, preserve any existing variable values, and recompile skills/<slug>.md. Warns when the new version introduces required variables that are not yet set.
pod collection install <slug> or pod collection i <slug>
Install every prompt contained in a collection. Items pinned to a specific version in the collection install that version; items with an empty version ("") always install the latest.
pod collection skill install <slug> or pod collection skill i <slug>
Install every prompt in a collection and compile each one as a skill. Any `--<variable> <value>` flags are applied to every prompt (variables a prompt doesn't declare are ignored). A batch report summarises the ok / warning / error status of each compiled skill.
pod config
Interactive setup wizard to configure:
- Preferred AI vendor (OpenAI, Anthropic, xAI, localhost)
- API key or port
- Default model
pod show-config
Display configuration information including:
- Config file location
- Current settings (with masked API keys)
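Key masking is typically done by keeping only the first and last few characters. The exact format `pod show-config` prints is not specified here; the sketch below is one plausible scheme:

```javascript
// Mask an API key for display: keep a short prefix and suffix, hide the rest.
// Very short keys are fully masked so nothing meaningful leaks.
function maskKey(key) {
  if (!key || key.length <= 8) return "****";
  return key.slice(0, 4) + "..." + key.slice(-4);
}

console.log(maskKey("sk-abcdef1234567890")); // "sk-a...7890"
console.log(maskKey("short"));               // "****"
```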
pod doctor
Run diagnostic checks:
- Config file exists and is valid
- API keys are configured
- Registry is reachable
- Cache directory is writable
pod doctor skills
Scan every installed skill and report variable coverage against the prompt version pinned in each skill's config.json:
- `ok` - all required and optional variables are satisfied (either explicitly or via defaults)
- `warning` - one or more optional variables are missing
- `error` - one or more required variables are missing (exits non-zero)
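The three statuses follow directly from which declared variables are unset. A sketch of that classification, where the `{ name, required }` shape for declared variables is an assumption rather than the CLI's actual data model:

```javascript
// Classify a skill's variable coverage: any missing required variable is an
// error; otherwise any missing optional variable is a warning; else ok.
function skillStatus(declared, provided) {
  const missing = declared.filter((v) => !(v.name in provided));
  if (missing.some((v) => v.required)) return "error";
  if (missing.length > 0) return "warning";
  return "ok";
}

const declared = [
  { name: "topic", required: true },
  { name: "tone", required: false },
];
console.log(skillStatus(declared, { topic: "AI", tone: "dry" })); // "ok"
console.log(skillStatus(declared, { topic: "AI" }));              // "warning"
console.log(skillStatus(declared, {}));                           // "error"
```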
Template Variables
Prompts use `{{variable}}` syntax for templates:
```
Summarize the following about {{topic}}:
{{content}}
```

Pass variables as flags:
```
pod my-prompt --topic "machine learning" --content "Your text here"
```

Or use stdin for content:
```
cat article.md | pod my-prompt --topic "machine learning"
```

Cache
Global Cache
When running prompts directly (without `pod install`), prompts are cached globally at `~/.promptodex/cache/`.
Structure:
```
~/.promptodex/cache/{slug}/{version}.json
```

Project Cache
When using `pod install`, prompts are cached locally in your project at `.promptodex/cache/`.
Structure:
```
.promptodex/cache/{slug}/{version}.json
```

The project's `promptodex.json` tracks which prompts are installed:

```json
{
  "prompts": {
    "summarize": "2",
    "translate": "1"
  }
}
```

Supported Providers
- OpenAI - GPT-4.1, GPT-4o, o1, etc.
- Anthropic - Claude Sonnet 4, Claude Opus 4, Claude 3.5, etc.
- xAI - Grok-3, Grok-2, etc.
- Localhost - Ollama, LMStudio, or any OpenAI-compatible local server
Development
Build
```
npm install
npm run build
```

Run locally
```
node bin/pod.js <slug>
```

Watch mode
```
npm run dev
```

Requirements
- Node.js >= 18
License
MIT
