
@gilangjavier/promctl

v1.0.0

Prometheus CLI tool

CLI tool for Prometheus with token optimization for AI agents.

Token Savings

| Operation | Standard API | promctl | Savings |
|-----------|--------------|---------|---------|
| Query metrics | ~2,500 tokens | ~500 tokens | 80% |
| List targets | ~1,800 tokens | ~400 tokens | 78% |
| View alerts | ~2,200 tokens | ~450 tokens | 80% |
| Check health | ~800 tokens | ~200 tokens | 75% |
| Get labels | ~1,500 tokens | ~350 tokens | 77% |
| Average | | | 78% |

For AI Agents

Installation (Ephemeral)

npx @gilangjavier/promctl <command>

No installation required. Runs directly via npx.

Quick Commands

# Query instant metric
npx @gilangjavier/promctl query 'up'

# Query with specific time
npx @gilangjavier/promctl query 'rate(http_requests_total[5m])' --time '2024-01-01T00:00:00Z'

# Range query
npx @gilangjavier/promctl range-query 'up' --start '1h ago' --end 'now' --step '1m'

# List active targets
npx @gilangjavier/promctl targets

# View firing alerts
npx @gilangjavier/promctl alerts

# Check server health
npx @gilangjavier/promctl health

# Get metric labels
npx @gilangjavier/promctl labels job

# List time series
npx @gilangjavier/promctl series '{job="prometheus"}'

# View recording/alerting rules
npx @gilangjavier/promctl rules

Expected Behavior

Exit Codes:

  • 0 - Success
  • 1 - Error (connection failed, query invalid, config missing)

Output Format:

  • Default: Human-readable tables
  • --json flag: Structured JSON for programmatic use

Profile Selection:

  • -p, --profile <name> - Use specific profile from config
  • -c, --config <path> - Use custom config file path

Before/After Token Comparison

Before (direct Prometheus HTTP API calls):

  • Raw JSON response with metadata, warnings, extra fields
  • ~2,500 tokens for a typical query result

After (promctl):

  • Clean table output with essential data only
  • ~500 tokens for the same query
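The reduction described above can be sketched in Python. This is an illustration of the idea only, not promctl's actual implementation (promctl is TypeScript); the input shape is the standard Prometheus instant-query JSON response, and the sample values are made up:

```python
import json

# A typical Prometheus instant-query response (standard HTTP API envelope).
raw = {
    "status": "success",
    "data": {
        "resultType": "vector",
        "result": [
            {"metric": {"__name__": "up", "job": "prometheus",
                        "instance": "localhost:9090"},
             "value": [1704067200, "1"]},
            {"metric": {"__name__": "up", "job": "node",
                        "instance": "localhost:9100"},
             "value": [1704067200, "0"]},
        ],
    },
}

def compact(response):
    """Drop the envelope (status, resultType) and keep only name, labels, value."""
    rows = []
    for sample in response["data"]["result"]:
        labels = {k: v for k, v in sample["metric"].items() if k != "__name__"}
        rows.append([sample["metric"].get("__name__", ""), labels, sample["value"][1]])
    return rows

full = len(json.dumps(raw))
slim = len(json.dumps(compact(raw)))
print(f"full payload: {full} chars, compacted: {slim} chars")
```

The compacted form carries the same labels and values in a fraction of the characters; rendering it as a table (as promctl does) shrinks it further by deduplicating repeated label keys into column headers.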

For Humans

Installation

npm install -g @gilangjavier/promctl

Setup Profile

Initialize configuration:

promctl init

This creates ~/.config/promctl/profiles.yaml with a sample profile. Edit it:

profiles:
  default:
    url: http://localhost:9090
    token_env: PROMETHEUS_TOKEN    # Use env var
    # OR
    token_file: /path/to/token     # Read from file
    # OR
    token: "bearer-token-here"     # Inline (not recommended)

Token priority (highest first):

  1. token_env - Environment variable name
  2. token_file - Path to file containing token
  3. token - Inline token string
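That precedence can be sketched as follows. This is a minimal illustration of the documented order, not promctl's source; the profile keys match the YAML above:

```python
import os

def resolve_token(profile: dict):
    """Resolve a bearer token with the documented precedence:
    token_env (env var name), then token_file (path), then inline token."""
    env_name = profile.get("token_env")
    if env_name and os.environ.get(env_name):
        return os.environ[env_name]
    token_file = profile.get("token_file")
    if token_file and os.path.exists(token_file):
        with open(token_file) as f:
            return f.read().strip()
    return profile.get("token")  # inline token, lowest priority

# The inline token is used only when the env var and file are absent.
profile = {"token_env": "PROMETHEUS_TOKEN", "token": "inline-fallback"}
os.environ.pop("PROMETHEUS_TOKEN", None)
print(resolve_token(profile))
```

With PROMETHEUS_TOKEN unset and no token_file, the inline value wins; setting the environment variable would override it.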

All Available Commands

| Command | Description | Options |
|---------|-------------|---------|
| promctl init | Create sample config | -f, --force to overwrite |
| promctl config | Show current config | -c, --config <path> |
| promctl query <expr> | Instant query | -t, --time, --timeout, -j |
| promctl range-query <expr> | Range query | -s, --start, -e, --end, --step, -j |
| promctl series <selector...> | Find time series | -s, --start, -e, --end, -l, --limit |
| promctl labels [label] | List label values | -m, --match, -s, --start, -e, --end |
| promctl targets | List scrape targets | --state <active\|dropped\|any> |
| promctl alerts | List active alerts | -j, --json |
| promctl rules | List recording/alerting rules | -t, --type <alert\|record> |
| promctl health | Check server health | -j, --json |

Common Workflows

Debug a down service:

# Check if service is up
promctl query 'up{job="my-service"}'

# Check error rate
promctl query 'rate(http_requests_total{job="my-service",status=~"5.."}[5m])'

# View targets state
promctl targets --state active

Check error rate over time:

promctl range-query 'rate(http_requests_total{status=~"5.."}[5m])' \
  --start '1h ago' \
  --end 'now' \
  --step '1m'

View all firing alerts:

promctl alerts

Export metrics for analysis:

promctl query 'node_cpu_seconds_total' --json > metrics.json

Token Optimization Details

What Gets Optimized

| Component | Standard Response | promctl Output |
|-----------|-------------------|----------------|
| HTTP headers | Full headers | Stripped |
| JSON metadata | status, data.resultType, nested arrays | Direct values |
| Timestamps | Unix nanoseconds | Human-readable |
| Metric labels | Full label set per sample | Deduplicated columns |
| Empty fields | Null values omitted | Included |
| Error messages | Verbose stack traces | Clean error messages |
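As one concrete example from the table above, turning a raw Unix timestamp into a human-readable string is a small transformation (a sketch; promctl's exact output format may differ):

```python
from datetime import datetime, timezone

def humanize(ts: float) -> str:
    """Render a Unix timestamp (in seconds) as a readable UTC string."""
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")

print(humanize(1704067200))  # 2024-01-01 00:00:00 UTC
```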

Caching Mechanism

promctl implements intelligent caching to reduce redundant API calls:

  1. Config caching - Profile resolution cached for session duration
  2. Label metadata - Label names cached with 5-minute TTL
  3. Target lists - Scraped targets cached for 30 seconds
  4. Health checks - Status cached for 10 seconds

Cache is in-memory only; no persistent cache files are created.
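An in-memory TTL cache of the kind described can be sketched like this. This is illustrative only; the cache keys and TTL values come from the list above, not from promctl's source:

```python
import time

class TTLCache:
    """Minimal in-memory cache with a per-entry time-to-live."""

    def __init__(self):
        self._store = {}  # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]
        self._store.pop(key, None)  # expired or missing: drop it
        return None

    def set(self, key, value, ttl_seconds: float):
        self._store[key] = (time.monotonic() + ttl_seconds, value)

cache = TTLCache()
cache.set("labels", ["job", "instance"], ttl_seconds=300)  # 5-minute TTL
cache.set("health", {"status": "up"}, ttl_seconds=10)      # 10-second TTL
print(cache.get("labels"))
```

Because the store is a plain dict, everything vanishes when the process exits, which matches the "in-memory only, no persistent cache files" behavior.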

Contributing

For AI Agents

To suggest improvements or test changes:

  1. Review the codebase structure in src/
  2. Identify optimization opportunities
  3. Document findings in GitHub issues:
    https://github.com/gilang-javier/promctl/issues

For Humans

Development Setup:

# Clone the repository
git clone https://github.com/gilang-javier/promctl.git
cd promctl

# Install dependencies
npm install

# Build TypeScript
npm run build

# Run tests
npm test

# Run in development mode
npm run dev -- query 'up'

Submitting Changes:

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/my-change
  3. Make changes and add tests
  4. Run the test suite: npm test
  5. Commit with clear messages
  6. Push and open a Pull Request

Reporting Issues:

Include:

  • promctl version (promctl --version)
  • Node.js version (node --version)
  • Prometheus version
  • Steps to reproduce
  • Expected vs actual output

License

MIT - See LICENSE file for details.