RCC Server Module

📚 Detailed Architecture Documentation: ARCHITECTURE.md

Overview

The RCC Server Module is an HTTP server component for the RCC (Router-Controlled Computing) framework. It acts as a client input proxy: it accepts client requests, routes them to the appropriate virtual model, and returns the responses, with an extensible middleware system and comprehensive monitoring.

Features

🚀 Core Capabilities

  • HTTP Server: High-performance Express.js-based HTTP server with security middleware
  • Virtual Model Routing: Intelligent request routing based on Claude Code Router rules
  • Middleware Support: Extensible middleware system for request processing
  • WebSocket Support: Real-time bidirectional communication
  • Load Balancing: Multiple strategies (round-robin, weighted, least-connections)
  • Monitoring: Comprehensive metrics and health checking
  • Configuration Management: Flexible configuration system

🔧 Advanced Features

  • Model Capability Detection: Automatic capability matching for virtual models
  • Intelligent Routing: Rule-based routing with priority and condition evaluation
  • Health Monitoring: Real-time health checks and system metrics
  • Error Handling: Comprehensive error handling and recovery
  • Performance Metrics: Request tracking and performance analytics
  • Pipeline Integration: Seamless integration with Pipeline Scheduler for request processing
  • Configuration Management: Dynamic configuration integration with virtual model mapping
  • Fallback Processing: Automatic fallback to direct processing when pipeline fails

Installation

npm install rcc-server

Peer Dependencies

This module requires the following RCC modules:

npm install rcc-basemodule rcc-pipeline rcc-errorhandling rcc-configuration rcc-virtual-model-rules rcc-underconstruction

Quick Start

Basic Server Setup

import { ServerModule } from 'rcc-server';

// Create server instance
const server = new ServerModule();

// Initialize with configuration
await server.initialize({
  port: 3000,
  host: 'localhost',
  cors: {
    origin: ['http://localhost:3000'],
    credentials: true
  },
  compression: true,
  helmet: true,
  rateLimit: {
    windowMs: 60000,
    max: 100
  },
  timeout: 30000,
  bodyLimit: '10mb'
});

// Start the server
await server.start();

console.log('Server is running on http://localhost:3000');
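
When the server runs as a long-lived process, it should also be stopped cleanly on shutdown. Only stop() comes from the rcc-server API (see API Documentation below); the signal wiring shown here is just a common Node.js pattern, not something the module prescribes.

// Gracefully stop the server when the process is asked to exit.
// server.stop() is the documented rcc-server call; the SIGINT/SIGTERM
// handling is ordinary Node.js process handling.
const shutdown = async (signal: string): Promise<void> => {
  console.log(`Received ${signal}, stopping server...`);
  await server.stop();
  process.exit(0);
};

process.on('SIGINT', () => { void shutdown('SIGINT'); });
process.on('SIGTERM', () => { void shutdown('SIGTERM'); });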

Virtual Model Registration

import { VirtualModelConfig } from 'rcc-server';

const modelConfig: VirtualModelConfig = {
  id: 'qwen-turbo',
  name: 'Qwen Turbo',
  provider: 'qwen',
  endpoint: 'https://chat.qwen.ai/api/v1/chat/completions',
  model: 'qwen-turbo',
  capabilities: ['chat', 'streaming', 'tools'],
  maxTokens: 4000,
  temperature: 0.7,
  topP: 1.0,
  priority: 8,
  enabled: true,
  routingRules: [
    {
      id: 'chat-rule',
      name: 'Chat requests',
      condition: 'path:/api/chat',
      weight: 1.0,
      enabled: true,
      priority: 5,
      modelId: 'qwen-turbo'
    }
  ]
};

await server.registerVirtualModel(modelConfig);

Custom Route Registration

import { RouteConfig } from 'rcc-server';

const routeConfig: RouteConfig = {
  id: 'chat-endpoint',
  path: '/api/chat',
  method: 'POST',
  handler: 'chatHandler',
  middleware: ['auth', 'rateLimit'],
  virtualModel: 'qwen-turbo',
  authRequired: true
};

await server.registerRoute(routeConfig);

Pipeline Integration

The RCC Server Module provides seamless integration with the Pipeline Scheduler for advanced request processing capabilities. This integration enables sophisticated request routing, load balancing, and error handling through a unified pipeline architecture.

Complete Integration Setup

import { ServerModule } from 'rcc-server';
import { PipelineScheduler } from 'rcc-pipeline';
import { PipelineSystemConfig } from 'rcc-pipeline';

// Create server instance
const server = new ServerModule();

// Configure server
const serverConfig = {
  port: 3000,
  host: 'localhost',
  // ... other server configuration
};

server.configure(serverConfig);
await server.initialize();
await server.start();

// Create pipeline scheduler configuration
const pipelineConfig: PipelineSystemConfig = {
  pipelines: [
    {
      id: 'qwen-turbo-pipeline',
      name: 'Qwen Turbo Pipeline',
      type: 'ai-model',
      enabled: true,
      priority: 1,
      weight: 3,
      maxConcurrentRequests: 20,
      timeout: 45000,
      config: {
        model: 'qwen-turbo',
        provider: 'qwen',
        maxTokens: 2000,
        temperature: 0.7,
        topP: 0.9
      }
    }
  ],
  loadBalancer: {
    strategy: 'weighted',
    healthCheckInterval: 15000
  },
  scheduler: {
    defaultTimeout: 45000,
    maxRetries: 5,
    retryDelay: 2000
  }
};

// Create and integrate pipeline scheduler
const pipelineScheduler = new PipelineScheduler(pipelineConfig);
await server.setPipelineScheduler(pipelineScheduler);

// Register virtual models
await server.registerVirtualModel({
  id: 'qwen-turbo-virtual',
  name: 'Qwen Turbo Virtual Model',
  provider: 'qwen',
  model: 'qwen-turbo',
  capabilities: ['text-generation', 'chat'],
  maxTokens: 2000,
  temperature: 0.7,
  enabled: true
});

Configuration-to-Pipeline Integration

The server automatically integrates with the Configuration module to provide dynamic virtual model mapping and pipeline generation:

// The server automatically initializes ConfigurationToPipelineModule
// which provides:
// - Virtual model mapping from configuration
// - Pipeline table generation
// - Dynamic pipeline assembly
// - Configuration validation

// Check integration status
const status = server.getStatus();
console.log('Pipeline Integration:', {
  enabled: status.pipelineIntegration.enabled,
  schedulerAvailable: status.pipelineIntegration.schedulerAvailable,
  processingMethod: status.pipelineIntegration.processingMethod,
  fallbackEnabled: status.pipelineIntegration.fallbackEnabled
});

Request Processing Flow

  1. Request Reception: Server receives HTTP request
  2. Virtual Model Routing: Request is routed to appropriate virtual model
  3. Pipeline Execution: Request is processed through Pipeline Scheduler
  4. Fallback Handling: If the pipeline fails, the request falls back to direct processing
  5. Response Generation: Response is formatted and returned to the client

// Example request processing
const request = {
  id: 'test-request',
  method: 'POST',
  path: '/api/chat',
  headers: {
    'Content-Type': 'application/json'
  },
  body: {
    messages: [
      { role: 'user', content: 'Hello!' }
    ]
  },
  timestamp: Date.now(),
  virtualModel: 'qwen-turbo-virtual'
};

// Process request (automatically uses pipeline if available)
const response = await server.handleRequest(request);

// Response includes processing metadata
console.log('Response:', {
  status: response.status,
  processingMethod: response.headers['X-Processing-Method'],
  virtualModel: response.headers['X-Virtual-Model'],
  pipelineId: response.headers['X-Pipeline-Id'],
  executionId: response.headers['X-Execution-Id'],
  processingTime: response.processingTime
});

Error Handling and Fallback

The system provides comprehensive error handling with automatic fallback:

// Pipeline execution errors are automatically handled
try {
  const response = await server.handleRequest(request);
  
  if (response.headers['X-Processing-Method'] === 'direct') {
    // Request was processed via fallback
    console.log('Fallback reason:', response.headers['X-Fallback-Reason']);
  }
} catch (error) {
  // Handle critical errors
  console.error('Request failed:', error);
}

Monitoring and Metrics

Monitor pipeline integration performance:

// Get detailed integration status
const integrationConfig = server.getPipelineIntegrationConfig();
console.log('Pipeline Integration Config:', {
  enabled: integrationConfig.enabled,
  defaultTimeout: integrationConfig.defaultTimeout,
  maxRetries: integrationConfig.maxRetries,
  fallbackToDirect: integrationConfig.fallbackToDirect
});

// Monitor overall system health
const health = await server.getHealth();
console.log('System Health:', {
  status: health.status,
  pipelineIntegration: health.checks.pipeline_integration,
  schedulerHealth: health.checks.pipeline_scheduler
});

API Documentation

ServerModule

The main class that provides all server functionality.

Methods

initialize(config: ServerConfig): Promise<void>

Initialize the server with configuration.

start(): Promise<void>

Start the HTTP server.

stop(): Promise<void>

Stop the HTTP server.

handleRequest(request: ClientRequest): Promise<ClientResponse>

Handle a client request and return a response.

registerVirtualModel(model: VirtualModelConfig): Promise<void>

Register a virtual model for request routing.

registerRoute(route: RouteConfig): Promise<void>

Register a custom route.

getStatus(): ServerStatus

Get current server status.

getHealth(): Promise<HealthStatus>

Get detailed health information.
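
A minimal sketch tying the methods above together. It assumes ServerConfig is exported alongside the other types shown in this README; the configuration values are placeholders taken from the Quick Start example.

import { ServerModule, ServerConfig } from 'rcc-server';

// Placeholder configuration mirroring the Quick Start example.
const config: ServerConfig = {
  port: 3000,
  host: 'localhost',
  cors: { origin: ['http://localhost:3000'], credentials: true },
  compression: true,
  helmet: true,
  rateLimit: { windowMs: 60000, max: 100 },
  timeout: 30000,
  bodyLimit: '10mb'
};

const server = new ServerModule();

try {
  // Bring the server up, then inspect its state.
  await server.initialize(config);
  await server.start();

  console.log('Status:', server.getStatus().status);
  console.log('Health:', (await server.getHealth()).status);
} finally {
  // Always shut the HTTP listener down.
  await server.stop();
}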

VirtualModelConfig

Configuration for virtual models:

interface VirtualModelConfig {
  id: string;                    // Unique identifier
  name: string;                  // Human-readable name
  provider: string;              // Provider name (e.g., 'qwen', 'openai')
  endpoint: string;              // API endpoint URL
  apiKey?: string;               // Optional API key
  model: string;                 // Model name
  capabilities: string[];        // Supported capabilities
  maxTokens: number;             // Maximum token limit
  temperature: number;           // Temperature parameter
  topP: number;                  // Top-p parameter
  priority: number;              // Load balancing priority (1-10)
  enabled: boolean;              // Whether model is enabled
  routingRules: RoutingRule[];   // Routing rules
}

ClientRequest

Request object format:

interface ClientRequest {
  id: string;                    // Unique request ID
  method: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH';
  path: string;                  // Request path
  headers: Record<string, string>; // Request headers
  body?: any;                    // Request body
  query?: Record<string, string>; // Query parameters
  timestamp: number;             // Request timestamp
  clientId?: string;             // Optional client ID
  virtualModel?: string;         // Optional virtual model override
}

Configuration

ServerConfig

Complete server configuration:

interface ServerConfig {
  port: number;                  // Server port
  host: string;                  // Server host
  cors: {                        // CORS configuration
    origin: string | string[];
    credentials: boolean;
  };
  compression: boolean;           // Enable compression
  helmet: boolean;               // Enable security headers
  rateLimit: {                   // Rate limiting
    windowMs: number;
    max: number;
  };
  timeout: number;               // Request timeout (ms)
  bodyLimit: string;             // Request body size limit
}
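
In practice these values often come from the environment rather than being hard-coded. A minimal sketch, again assuming ServerConfig is exported; the environment variable names are illustrative conventions, not something rcc-server reads itself.

import { ServerConfig } from 'rcc-server';

// Illustrative only: rcc-server does not read these variables itself.
const config: ServerConfig = {
  port: Number(process.env.PORT ?? 3000),
  host: process.env.HOST ?? 'localhost',
  cors: {
    origin: (process.env.CORS_ORIGIN ?? 'http://localhost:3000').split(','),
    credentials: true
  },
  compression: true,
  helmet: true,
  rateLimit: {
    windowMs: Number(process.env.RATE_LIMIT_WINDOW_MS ?? 60000),
    max: Number(process.env.RATE_LIMIT_MAX ?? 100)
  },
  timeout: Number(process.env.REQUEST_TIMEOUT_MS ?? 30000),
  bodyLimit: process.env.BODY_LIMIT ?? '10mb'
};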

Routing Rules and Virtual Model Mapping

The server uses routing rules to determine which virtual model should handle each request. When no specific model is requested, the system evaluates all enabled models against their routing rules and selects the first matching candidate.
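
A simplified sketch of that first-match idea, using the 'path:&lt;prefix&gt;' condition format from the examples in this README and assuming VirtualModelConfig and ClientRequest are importable types. The module's real evaluator (its full condition grammar, priority ordering, and weights) is internal and may differ.

import { VirtualModelConfig, ClientRequest } from 'rcc-server';

// Illustrative only: picks the first enabled model with an enabled
// routing rule whose 'path:' condition matches the request path.
function pickVirtualModel(
  request: ClientRequest,
  models: VirtualModelConfig[]
): VirtualModelConfig | undefined {
  return models.find(model =>
    model.enabled &&
    model.routingRules.some(rule =>
      rule.enabled &&
      rule.condition.startsWith('path:') &&
      request.path.startsWith(rule.condition.slice('path:'.length))
    )
  );
}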

Virtual Model Registration

const model = {
  id: 'chat-model',
  name: 'Chat Model',
  provider: 'openai',
  endpoint: 'https://api.openai.com/v1/chat',
  capabilities: ['chat', 'streaming'],
  // ... other config
};

await server.registerVirtualModel(model);

Routing Rules

Virtual models can define routing rules to filter which requests they should handle:

const model = {
  // ... other config
  routingRules: [
    {
      id: 'chat-only',
      name: 'Chat Requests Only',
      condition: 'path:/api/chat',
      weight: 1.0,
      enabled: true,
      priority: 1
    }
  ]
};

Monitoring and Metrics

Health Check

const health = await server.getHealth();
console.log('Server health:', health.status);
console.log('Health checks:', health.checks);

Request Metrics

const metrics = server.getMetrics();
console.log('Total requests:', metrics.length);
console.log('Average response time:', 
  metrics.reduce((sum, m) => sum + m.processingTime, 0) / metrics.length);

Server Status

const status = server.getStatus();
console.log('Server status:', status.status);
console.log('Active connections:', status.connections);
console.log('Virtual models:', status.virtualModels);

Middleware System

Registering Middleware

import { MiddlewareConfig } from 'rcc-server';

const middleware: MiddlewareConfig = {
  name: 'auth',
  type: 'pre',
  priority: 10,
  enabled: true,
  config: {
    secretKey: 'your-secret-key'
  }
};

await server.registerMiddleware(middleware);

Built-in Middleware

The server includes several built-in middleware components (a configuration sketch follows the list):

  • Security: Helmet.js for security headers
  • CORS: Cross-origin resource sharing
  • Compression: Response compression
  • Body Parsing: Request body parsing
  • Rate Limiting: Request rate limiting
  • Request Logging: Detailed request logging
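
The Security, CORS, Compression, Body Parsing, and Rate Limiting middleware correspond to the ServerConfig options documented above; the mapping below is inferred from this README's configuration section, and custom middleware such as 'auth' still goes through registerMiddleware().

// Enabling the built-in middleware via ServerConfig (values are examples).
await server.initialize({
  port: 3000,
  host: 'localhost',
  cors: { origin: ['https://app.example.com'], credentials: true }, // CORS
  helmet: true,                                                     // security headers
  compression: true,                                                // response compression
  rateLimit: { windowMs: 60000, max: 100 },                         // rate limiting
  timeout: 30000,
  bodyLimit: '10mb'                                                 // body parsing limit
});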

Error Handling

The server provides comprehensive error handling:

try {
  const response = await server.handleRequest(request);
  console.log('Request successful:', response);
} catch (error) {
  console.error('Request failed:', error);
  
  // Error response includes:
  // - Error details
  // - Request ID for tracking
  // - Processing time
  // - HTTP status code
}

Development

Building

# Install dependencies
npm install

# Build the module
npm run build

# Run type checking
npm run typecheck

# Run linting
npm run lint

# Run tests
npm test

Testing

# Run all tests
npm test

# Run tests with coverage
npm run test:coverage

# Run tests in watch mode
npm run test:watch

Examples

Check the examples/ directory for complete usage examples.

Performance

The server module is optimized for performance:

  • Non-blocking I/O: Built on Node.js and Express.js
  • Connection Pooling: Efficient connection management
  • Memory Management: Automatic garbage collection and cleanup
  • Load Balancing: Intelligent request distribution
  • Caching: Response caching where appropriate
  • Compression: Automatic response compression

Security

The server includes several security features:

  • Security Headers: Helmet.js for secure headers
  • CORS: Configurable cross-origin resource sharing
  • Rate Limiting: Prevent abuse and DoS attacks
  • Input Validation: Request validation and sanitization
  • Authentication: Optional authentication middleware
  • HTTPS: SSL/TLS support (requires certificate)

Contributing

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Commit your changes: git commit -m 'Add amazing feature'
  4. Push to the branch: git push origin feature/amazing-feature
  5. Open a Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Support

For support, please open an issue on the GitHub Issues page.

Changelog

See CHANGELOG.md for a list of changes and version history.

Built with ❤️ by the RCC Development Team