@llm-connector-hub/hub

Main orchestrator package for the LLM Connector Hub. Provides intelligent provider management, caching, rate limiting, circuit breaking, and comprehensive middleware support.

Features

  • Provider Registry: Register and manage multiple LLM providers
  • Smart Provider Selection: Multiple selection strategies (round-robin, priority-based, cost-optimized, latency-optimized, health-based, failover)
  • Caching: In-memory (LRU) and Redis-backed caching with deterministic key generation
  • Health Monitoring: Automatic health checks with configurable thresholds
  • Middleware Pipeline: Extensible middleware system for cross-cutting concerns
  • Circuit Breakers: Prevent cascading failures
  • Rate Limiting: Per-provider rate limiting
  • Automatic Fallback: Retry failed requests with alternative providers

Installation

npm install @llm-connector-hub/hub @llm-connector-hub/core

Quick Start

import { ConnectorHub } from '@llm-connector-hub/hub';
import { OpenAIProvider } from '@llm-connector-hub/providers';

// Create hub with builder pattern
const hub = ConnectorHub.builder()
  .selectionStrategy('latency-optimized')
  .addProvider(
    new OpenAIProvider(),
    { apiKey: process.env.OPENAI_API_KEY },
    { priority: 1, tags: ['fast', 'reliable'] }
  )
  .build();

// Make a completion request
const response = await hub.complete({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(response.choices[0].message.content);

Architecture

ConnectorHub

The main orchestrator class that coordinates all components:

const hub = new ConnectorHub({
  selectionStrategy: 'priority',    // Provider selection strategy
  enableCache: true,                 // Enable caching layer
  enableCircuitBreaker: true,        // Enable circuit breakers
  enableHealthMonitoring: true,      // Enable health monitoring
  enableFallback: true,              // Enable automatic fallback
  maxFallbackAttempts: 2,            // Max fallback attempts
});

Provider Registry

Manages provider registration, lookup, and lifecycle:

hub.registerProvider(provider, config, {
  enabled: true,
  priority: 100,  // Lower = higher priority
  tags: ['production', 'fast'],
});

// Find providers
const providers = hub.getRegistry().find({
  enabled: true,
  tags: ['production'],
  model: 'gpt-4',
});

Selection Strategies

Priority-based (default):

  • Selects provider with lowest priority number
  • Deterministic and predictable

Round-robin:

  • Distributes load evenly across providers
  • Good for load balancing

Latency-optimized:

  • Selects provider with lowest average response time
  • Adapts to current performance

Cost-optimized:

  • Selects cheapest provider for the request
  • Requires pricing metadata

Health-based:

  • Prefers healthy providers
  • Integrates with health monitoring

Failover:

  • Uses primary until it fails
  • Automatically switches to backup
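
In the examples above, the strategy is set when the hub is constructed, either through the builder or the constructor options. A minimal sketch of wiring up two hubs with different strategies, using only calls documented in this README (the variable names are illustrative):

import { ConnectorHub } from '@llm-connector-hub/hub';

// Round-robin: spread requests evenly across registered providers
const balancedHub = new ConnectorHub({ selectionStrategy: 'round-robin' });

// Cost-optimized: relies on pricing metadata being available for providers
const cheapestHub = ConnectorHub.builder()
  .selectionStrategy('cost-optimized')
  .build();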

Caching

Memory Cache (default):

import { MemoryCache } from '@llm-connector-hub/hub';

const cache = new MemoryCache({
  defaultTTL: 3600000,  // 1 hour
  maxSize: 1000,         // Max entries
  evictionStrategy: 'lru',
});

const hub = new ConnectorHub({ cache, enableCache: true });

Redis Cache (optional):

import { RedisCache } from '@llm-connector-hub/hub';

const cache = new RedisCache(
  { defaultTTL: 3600000 },
  { host: 'localhost', port: 6379 }
);

await cache.initialize();
const hub = new ConnectorHub({ cache, enableCache: true });

Cache Key Generation

Deterministic cache keys ensure identical requests hit the cache:

import { CacheKey } from '@llm-connector-hub/hub';

const cacheKey = new CacheKey({
  includeProvider: true,
  includeModel: true,
  includeTemperature: true,
  prefix: 'llm',
  hashAlgorithm: 'sha256',
});

const key = cacheKey.generate(request, 'openai');

Advanced Usage

Custom Selection Strategy

const hub = new ConnectorHub({ selectionStrategy: 'latency-optimized' });

// Manual provider selection
const provider = hub.selectProvider(request);

Health Monitoring

import { HealthMonitor } from '@llm-connector-hub/hub';

const monitor = new HealthMonitor(hub.getRegistry(), {
  checkInterval: 30000,       // Check every 30s
  failureThreshold: 3,        // Mark unhealthy after 3 failures
  recoveryThreshold: 2,       // Mark healthy after 2 successes
  autoDisable: true,          // Auto-disable unhealthy providers
});

// Listen to health changes
monitor.on((results) => {
  console.log('Health check results:', results);
});

// Get health status
const health = monitor.getHealth('openai');
console.log('OpenAI health:', health);

Fallback and Error Handling

const hub = new ConnectorHub({
  enableFallback: true,
  maxFallbackAttempts: 2,
});

try {
  const response = await hub.complete(request);
} catch (error) {
  // All providers failed, including fallbacks
  console.error('All providers failed:', error);
}

Testing

npm test
npm run test:coverage

API Reference

ConnectorHub

Methods:

  • registerProvider(provider, config, options) - Register a provider
  • complete(request, providerName?) - Execute completion request
  • stream(request, providerName?) - Execute streaming request (see the sketch below)
  • selectProvider(request) - Select provider for request
  • getRegistry() - Get provider registry
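
The stream(request, providerName?) method is listed above but not demonstrated elsewhere in this README. The sketch below assumes it resolves to an async iterable of chunks with an OpenAI-style choices[0].delta shape; check the package's type definitions for the actual return type.

// Hypothetical streaming usage; the chunk shape is an assumption, not
// something this README confirms.
const stream = await hub.stream({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Write a haiku about caching.' }],
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices?.[0]?.delta?.content ?? '');
}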

ProviderRegistry

Methods:

  • register(provider, options) - Register provider
  • get(name, index?) - Get provider by name
  • find(filter) - Find providers matching filter
  • markUsed(name, index?) - Mark provider as used

CacheKey

Methods:

  • generate(request, provider?) - Generate cache key
  • generateWithSuffix(request, provider, suffix) - Generate key with suffix
  • validate(key) - Validate key format
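
A short sketch of the suffix and validation helpers, continuing the cacheKey example from the Cache Key Generation section. That validate() returns a boolean is an assumption; the README only says it checks the key format.

const streamKey = cacheKey.generateWithSuffix(request, 'openai', 'stream');

// Assumed: validate() returns true for well-formed keys
if (cacheKey.validate(streamKey)) {
  console.log('Key is well formed:', streamKey);
}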

MemoryCache / RedisCache

Methods:

  • get<T>(key) - Get value
  • set<T>(key, value, ttl?) - Set value
  • has(key) - Check if key exists
  • delete(key) - Delete value
  • clear() - Clear all entries
  • getStats() - Get cache statistics
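
A sketch of using a cache instance directly through the methods above. Treating every method as async is an assumption, made so the same code works for both MemoryCache and the Redis-backed cache; the concrete return shapes of get() and getStats() are not specified in this README.

import { MemoryCache } from '@llm-connector-hub/hub';

const cache = new MemoryCache({ defaultTTL: 60000, maxSize: 100 });

// Store a value with a 30s TTL override
await cache.set('llm:example-key', { answer: 42 }, 30000);

if (await cache.has('llm:example-key')) {
  const value = await cache.get<{ answer: number }>('llm:example-key');
  console.log('cached value:', value);
}

console.log('cache stats:', await cache.getStats());
await cache.clear();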

License

MIT OR Apache-2.0