@gilangjavier/promctl
CLI tool for Prometheus with token optimization for AI agents.
Token Savings
| Operation | Standard API | promctl | Savings |
|-----------|--------------|---------|---------|
| Query metrics | ~2,500 tokens | ~500 tokens | 80% |
| List targets | ~1,800 tokens | ~400 tokens | 78% |
| View alerts | ~2,200 tokens | ~450 tokens | 80% |
| Check health | ~800 tokens | ~200 tokens | 75% |
| Get labels | ~1,500 tokens | ~350 tokens | 77% |
| Average | | | 78% |
For AI Agents
Installation (Ephemeral)
npx @gilangjavier/promctl <command>

No installation required. Runs directly via npx.
Quick Commands
# Query instant metric
npx @gilangjavier/promctl query 'up'
# Query with specific time
npx @gilangjavier/promctl query 'rate(http_requests_total[5m])' --time '2024-01-01T00:00:00Z'
# Range query
npx @gilangjavier/promctl range-query 'up' --start '1h ago' --end 'now' --step '1m'
# List active targets
npx @gilangjavier/promctl targets
# View firing alerts
npx @gilangjavier/promctl alerts
# Check server health
npx @gilangjavier/promctl health
# Get metric labels
npx @gilangjavier/promctl labels job
# List time series
npx @gilangjavier/promctl series '{job="prometheus"}'
# View recording/alerting rules
npx @gilangjavier/promctl rules

Expected Behavior
Exit Codes:
- 0 - Success
- 1 - Error (connection failed, query invalid, config missing)
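These exit codes make promctl straightforward to script. A minimal sketch of exit-code handling, where `promctl_health` is a stand-in for the real `npx @gilangjavier/promctl health` call so the pattern is runnable without a live Prometheus server:

```shell
#!/bin/sh
# Stand-in for `npx @gilangjavier/promctl health`; always succeeds here so the
# branching pattern can be shown end to end. In a real script, replace the
# function body with the actual promctl invocation.
promctl_health() {
  return 0
}

if promctl_health; then
  echo "prometheus healthy"
else
  # promctl exits 1 on connection failure, invalid query, or missing config
  echo "health check failed" >&2
  exit 1
fi
```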
Output Format:
- Default: Human-readable tables
- --json flag: Structured JSON for programmatic use
Profile Selection:
- -p, --profile <name> - Use a specific profile from the config
- -c, --config <path> - Use a custom config file path
Before/After Token Comparison
Before (direct PromQL API calls):
Raw JSON response with metadata, warnings, extra fields
~2,500 tokens for a typical query result

After (promctl):
Clean table output with essential data only
~500 tokens for the same query

For Humans
Installation
npm install -g @gilangjavier/promctl

Setup Profile
Initialize configuration:
promctl init

This creates ~/.config/promctl/profiles.yaml with a sample profile. Edit it:
profiles:
  default:
    url: http://localhost:9090
    token_env: PROMETHEUS_TOKEN  # Use env var
    # OR
    token_file: /path/to/token   # Read from file
    # OR
    token: "bearer-token-here"   # Inline (not recommended)

Token priority (highest first):
- token_env - Environment variable name
- token_file - Path to a file containing the token
- token - Inline token string
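With the sample profile above (which names `PROMETHEUS_TOKEN` via `token_env`), the token can be supplied through the environment. A short sketch; the token value here is a placeholder, not a real credential:

```shell
# Supply the bearer token via the env var named by token_env in profiles.yaml.
# "example-token" is a placeholder; in practice, read it from a secret store.
export PROMETHEUS_TOKEN="example-token"

# promctl would now resolve the token from $PROMETHEUS_TOKEN before falling
# back to token_file or an inline token:
#   npx @gilangjavier/promctl query 'up'
echo "token set: ${PROMETHEUS_TOKEN:+yes}"
```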
All Available Commands
| Command | Description | Options |
|---------|-------------|---------|
| promctl init | Create sample config | -f, --force to overwrite |
| promctl config | Show current config | -c, --config <path> |
| promctl query <expr> | Instant query | -t, --time, --timeout, -j |
| promctl range-query <expr> | Range query | -s, --start, -e, --end, --step, -j |
| promctl series <selector...> | Find time series | -s, --start, -e, --end, -l, --limit |
| promctl labels [label] | List label values | -m, --match, -s, --start, -e, --end |
| promctl targets | List scrape targets | --state <active\|dropped\|any> |
| promctl alerts | List active alerts | -j, --json |
| promctl rules | List recording/alerting rules | -t, --type <alert\|record> |
| promctl health | Check server health | -j, --json |
Common Workflows
Debug a down service:
# Check if service is up
promctl query 'up{job="my-service"}'
# Check error rate
promctl query 'rate(http_requests_total{job="my-service",status=~"5.."}[5m])'
# View targets state
promctl targets --state active

Check error rate over time:
promctl range-query 'rate(http_requests_total{status=~"5.."}[5m])' \
--start '1h ago' \
--end 'now' \
--step '1m'

View all firing alerts:
promctl alerts

Export metrics for analysis:
promctl query 'node_cpu_seconds_total' --json > metrics.json

Token Optimization Details
What Gets Optimized
| Component | Standard Response | promctl Output |
|-----------|-------------------|----------------|
| HTTP headers | Full headers | Stripped |
| JSON metadata | status, data.resultType, nested arrays | Direct values |
| Timestamps | Unix nanoseconds | Human-readable |
| Metric labels | Full label set per sample | Deduplicated columns |
| Empty fields | Null values included | Omitted |
| Error messages | Verbose stack traces | Clean error messages |
Caching Mechanism
promctl implements intelligent caching to reduce redundant API calls:
- Config caching - Profile resolution cached for session duration
- Label metadata - Label names cached with 5-minute TTL
- Target lists - Scraped targets cached for 30 seconds
- Health checks - Status cached for 10 seconds
Cache is in-memory only; no persistent cache files are created.
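The TTL behavior can be pictured with a small shell sketch. This illustrates the idea of the 30-second target-list cache only; it is not promctl's actual implementation, which caches in process memory rather than shell variables:

```shell
#!/bin/sh
# Illustrative TTL cache mimicking the 30-second target-list cache.
# fetch_targets is a stand-in for `npx @gilangjavier/promctl targets`.
TTL=30
CACHE=""
CACHE_AT=0

fetch_targets() {
  echo "target-list"
}

cached_targets() {
  now=$(date +%s)
  if [ -z "$CACHE" ] || [ $((now - CACHE_AT)) -ge "$TTL" ]; then
    CACHE=$(fetch_targets)   # cache miss or expired: refetch
    CACHE_AT=$now
  fi
  echo "$CACHE"
}

cached_targets   # first call populates the cache
cached_targets   # second call within 30s is served from the cache
```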
Contributing
For AI Agents
To suggest improvements or test changes:
- Review the codebase structure in src/
- Identify optimization opportunities
- Document findings in GitHub issues: https://github.com/gilang-javier/promctl/issues
For Humans
Development Setup:
# Clone the repository
git clone https://github.com/gilang-javier/promctl.git
cd promctl
# Install dependencies
npm install
# Build TypeScript
npm run build
# Run tests
npm test
# Run in development mode
npm run dev -- query 'up'

Submitting Changes:
- Fork the repository
- Create a feature branch: git checkout -b feature/my-change
- Make changes and add tests
- Run the test suite: npm test
- Commit with clear messages
- Push and open a Pull Request
Reporting Issues:
Include:
- promctl version (promctl --version)
- Node.js version (node --version)
- Prometheus version
- Steps to reproduce
- Expected vs actual output
License
MIT - See LICENSE file for details.
