
@llm-dev-ops/connector-hub-cli

v0.1.0

Command-line interface for LLM Connector Hub

@llm-dev-ops/cli

Command-line interface for LLM Connector Hub: a unified CLI for interacting with multiple Large Language Model providers.

Installation

npm install -g @llm-dev-ops/cli

Quick Start

  1. Initialize configuration:

    llm-hub config init
  2. Set API keys:

    llm-hub config set providers.openai.apiKey "your-api-key"
    llm-hub config set providers.anthropic.apiKey "your-api-key"
    llm-hub config set providers.google.apiKey "your-api-key"
  3. Test provider connectivity:

    llm-hub providers test openai
  4. Get a completion:

    llm-hub complete "What is TypeScript?" --provider openai
  5. Start interactive chat:

    llm-hub chat --provider anthropic --model claude-3-opus-20240229

Commands

complete

Get a single completion from an LLM provider.

llm-hub complete <prompt> [options]

Options:
  -p, --provider <provider>    LLM provider (openai, anthropic, google) (default: "openai")
  -m, --model <model>          Model to use
  -t, --temperature <number>   Temperature (0-2)
  --max-tokens <number>        Maximum tokens to generate
  --stream                     Stream the response
  --json                       Output as JSON

Examples:

# Basic completion
llm-hub complete "Explain quantum computing"

# With specific provider and model
llm-hub complete "Write a haiku" --provider anthropic --model claude-3-sonnet-20240229

# Stream response
llm-hub complete "Tell me a story" --stream

# JSON output
llm-hub complete "What is AI?" --json

chat

Start an interactive chat session.

llm-hub chat [options]

Options:
  -p, --provider <provider>    LLM provider (openai, anthropic, google) (default: "openai")
  -m, --model <model>          Model to use
  -t, --temperature <number>   Temperature (0-2) (default: 0.7)
  --max-tokens <number>        Maximum tokens to generate (default: 1000)
  --system <message>           System message

Examples:

# Start chat with default provider
llm-hub chat

# Chat with specific provider and model
llm-hub chat --provider anthropic --model claude-3-opus-20240229

# Chat with system message
llm-hub chat --system "You are a helpful coding assistant"

config

Manage CLI configuration.

llm-hub config <subcommand>

Subcommands:
  show              Show current configuration
  set <key> <value> Set a configuration value
  get <key>         Get a configuration value
  init              Initialize configuration interactively

Examples:

# Show current configuration
llm-hub config show

# Set API key
llm-hub config set providers.openai.apiKey "sk-..."

# Get default provider
llm-hub config get defaultProvider

# Interactive setup
llm-hub config init

providers

Manage LLM providers.

llm-hub providers <subcommand>

Subcommands:
  list              List available providers
  test <provider>   Test provider connectivity
  models <provider> List available models for a provider

Examples:

# List all providers
llm-hub providers list

# Test OpenAI connectivity
llm-hub providers test openai

# List Anthropic models
llm-hub providers models anthropic

Configuration

Configuration is stored in ~/.llm-hub/config.json.

Configuration File Format

{
  "defaultProvider": "openai",
  "providers": {
    "openai": {
      "apiKey": "sk-..."
    },
    "anthropic": {
      "apiKey": "sk-ant-..."
    },
    "google": {
      "apiKey": "..."
    }
  },
  "defaults": {
    "temperature": 0.7,
    "maxTokens": 1000
  }
}
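For scripted or CI setups, the config file can also be written directly in the format above rather than through `llm-hub config init`. A minimal sketch (the API key is a placeholder, and the JSON sanity check via `python3` is just one convenient way to validate the file):

```shell
# Bootstrap ~/.llm-hub/config.json by hand, mirroring the documented format.
# Normally `llm-hub config init` creates this interactively.
mkdir -p "$HOME/.llm-hub"
cat > "$HOME/.llm-hub/config.json" <<'EOF'
{
  "defaultProvider": "openai",
  "providers": {
    "openai": { "apiKey": "sk-placeholder" }
  },
  "defaults": { "temperature": 0.7, "maxTokens": 1000 }
}
EOF

# Sanity-check that the file is valid JSON before running the CLI
python3 -m json.tool "$HOME/.llm-hub/config.json" > /dev/null && echo "config OK"
```

After this, `llm-hub config show` should reflect the values written above.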

Environment Variables

You can also set API keys via environment variables:

  • OPENAI_API_KEY - OpenAI API key
  • ANTHROPIC_API_KEY - Anthropic API key
  • GOOGLE_AI_API_KEY - Google AI API key
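In a shell session, the variables above can be exported before invoking the CLI (placeholder values shown; substitute your real keys):

```shell
# Export provider API keys for the current shell session.
# Values here are placeholders, not real credentials.
export OPENAI_API_KEY="sk-placeholder"
export ANTHROPIC_API_KEY="sk-ant-placeholder"
export GOOGLE_AI_API_KEY="placeholder"

# Exported variables are inherited by child processes such as llm-hub
env | grep -E '^(OPENAI|ANTHROPIC|GOOGLE_AI)_API_KEY=' | sort
```

Add the `export` lines to your shell profile (e.g. `~/.bashrc`) to make them persistent.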

Supported Providers

OpenAI

  • Provider ID: openai
  • Models: gpt-4-turbo-preview, gpt-4, gpt-3.5-turbo, etc.

Anthropic (Claude)

  • Provider ID: anthropic
  • Models: claude-3-opus-20240229, claude-3-sonnet-20240229, claude-3-haiku-20240307

Google AI (Gemini)

  • Provider ID: google
  • Models: gemini-pro, gemini-pro-vision

Examples

Basic Usage

# Quick completion
llm-hub complete "What is the capital of France?"

# Stream a longer response
llm-hub complete "Write a short story about a robot" --stream --max-tokens 2000

# Use different provider
llm-hub complete "Explain machine learning" --provider anthropic

Interactive Chat

# Start chat
llm-hub chat

You: Hello!
Assistant: Hi! How can I help you today?

You: What's the weather like?
Assistant: I don't have access to real-time weather data...

You: exit
Goodbye!

Configuration Management

# Initialize config
llm-hub config init

# Set default provider
llm-hub config set defaultProvider anthropic

# Set default temperature
llm-hub config set defaults.temperature 0.8

# View configuration
llm-hub config show

Provider Testing

# Test each provider individually
llm-hub providers test openai
llm-hub providers test anthropic
llm-hub providers test google

# List available models
llm-hub providers models openai

License

MIT OR Apache-2.0