@sandip1046/rubizz-shared-libs
v1.5.0
Shared libraries and utilities for Rubizz Hotel Inn microservices
Rubizz Shared Libraries
Overview
Rubizz Shared Libraries (@sandip1046/rubizz-shared-libs) is a shared NPM package that provides common utilities, types, configurations, and services for all Rubizz Hotel Inn microservices. It ensures consistency, reduces code duplication, and gives common functionality a single, centralized home.
📦 Published Package: @sandip1046/rubizz-shared-libs
🔗 GitHub Repository: sandip1046/rubizz-shared-libs
🚀 Automated Publishing: GitHub Actions workflow for seamless releases
Package Structure
src/
├── types/ # TypeScript types and interfaces
├── utils/ # Utility functions (including ProtoVerifier)
├── auth/ # Authentication utilities
├── database/ # Database connection utilities
├── logger/ # Logging utilities
├── email/ # Email service utilities
├── validation/ # Validation schemas
├── constants/ # Constants and enums
├── prisma/ # Prisma ORM utilities
├── events/ # Event handling utilities
├── services/ # Service integrations (Redis, etc.)
├── grpc/ # gRPC utilities (HealthCheckService)
├── websocket/ # WebSocket utilities (ConnectionManager)
├── __tests__/ # Test files and setup
└── index.ts # Main export file
dist/ # Compiled JavaScript output
├── *.js # Compiled JavaScript files
├── *.d.ts # TypeScript declaration files
└── *.js.map # Source maps
.github/
└── workflows/
└── publish.yml # Automated publishing workflow
Core Features
1. Authentication & Authorization
- JWT token generation and verification
- Password hashing and comparison
- Role-based access control utilities
- Password reset token management
2. Database Management
- Prisma client initialization and management
- Database connection pooling
- Health check utilities
- Transaction management
3. Redis Integration
- Redis connection management
- Session storage utilities
- Caching helpers
- Message queue operations
- Redis service client for centralized Redis operations
- Support for multiple Redis instances (session, cache, queue)
4. Logging System
- Winston-based centralized logging
- Request/response logging middleware
- Error logging utilities
- Service-specific loggers
5. Email Services
- Nodemailer integration
- Email template management
- Verification and notification emails
- Brevo API integration
6. Validation Schemas
- Joi validation schemas
- Common validation rules
- Request validation utilities
- Error handling
7. Utility Functions
- UUID generation
- Date formatting
- String manipulation
- Pagination helpers
- Retry mechanisms
- Email and phone validation
- Random string generation
- Deep cloning and object utilities
- Proto file verification and validation
8. gRPC Health Check Service
- Standard gRPC Health Checking Protocol implementation
- Service health status management (SERVING, NOT_SERVING, UNKNOWN)
- Unary health check endpoint
- Streaming health watch endpoint
- Automatic service registration
- Health status tracking and monitoring
9. WebSocket Connection Management
- Connection lifecycle management
- Automatic ping/pong health monitoring
- Connection state tracking (CONNECTING, CONNECTED, RECONNECTING, DISCONNECTED, ERROR)
- Token-based reconnection with state preservation
- Automatic cleanup of stale connections
- Connection statistics and monitoring
- User-based connection tracking
- Configurable timeouts and intervals
10. Kafka Utilities
- Dead Letter Queue (DLQ) handler with retry logic
- Event schema validation (JSON Schema)
- Exponential backoff retry strategies
- DLQ event tracking and statistics
- Common event types and interfaces
- Event serialization/deserialization helpers
11. Error Handling
- Standardized error classes (AppError, BadRequestError, NotFoundError, etc.)
- Error code enumeration
- Error metadata support
- Operational vs programming error distinction
- Error normalization and formatting utilities
12. HTTP Client Utilities
- HTTP client with automatic retry and exponential backoff
- Circuit breaker pattern implementation
- Request timeout handling
- Configurable retry strategies
- Circuit breaker state management (CLOSED, OPEN, HALF_OPEN)
13. API Response Formatters
- Standardized success/error response formatting
- Paginated response formatters
- Created/updated/deleted response helpers
- Consistent API response structure
14. Version Management
- Semantic version parsing and formatting
- Version comparison utilities
- Version range satisfaction checking (^, ~, >=, <=, >, <)
- Version increment utilities (major, minor, patch)
15. Resilience & Error Handling
- Circuit Breaker: Enhanced circuit breaker with state management (CLOSED, OPEN, HALF_OPEN)
- Configurable failure thresholds
- Automatic state transitions
- Statistics and monitoring
- Retry Strategies: Multiple retry strategies with different backoff algorithms
- Fixed delay retry
- Exponential backoff
- Linear backoff
- Jitter backoff (with randomization)
- Custom retry predicates
- Bulkhead Pattern: Resource isolation to prevent cascading failures
- Concurrent execution limits
- Queue management
- Timeout handling
- Timeout Utilities: Operation timeout management
- Timeout execution wrappers
- Cancellable timeouts
- Delay utilities
- Graceful Degradation: Fallback strategies for service failures
- Primary/fallback operations
- Cache fallback support
- Default value fallback
- Custom degradation strategies
16. Security Utilities
- Input Sanitization (InputSanitizer)
- XSS prevention with DOMPurify
- HTML entity encoding/decoding
- SQL injection pattern removal
- NoSQL injection prevention
- File path sanitization
- Email and URL sanitization
- Recursive object sanitization
- CSRF Protection (CSRFProtection)
- Token generation and verification
- Signed token support
- Cookie and header-based tokens
- Express middleware integration
- Configurable cookie options
- Security Headers (SecurityHeaders)
- Content Security Policy (CSP) builder
- Strict Transport Security (HSTS)
- X-Frame-Options
- X-Content-Type-Options
- X-XSS-Protection
- Referrer-Policy
- Permissions-Policy
- Default security configurations
17. Observability & Monitoring
- Distributed Tracing (Tracing)
- OpenTelemetry integration
- W3C traceparent format support
- Span creation and management
- Context propagation (extract/inject)
- Automatic error recording
- Metrics Collection (Metrics)
- Counter, Gauge, and Histogram metrics
- HTTP, gRPC, Database, and Kafka metrics
- In-memory metric collector
- Metric aggregation (sum, avg, min, max, count)
- Custom metric collectors support
- Application Performance Monitoring (APM)
- Performance metric recording
- Service health monitoring
- Performance statistics (p50, p95, p99)
- Slow operation detection
- Error operation tracking
- Uptime and throughput tracking
- Alerting (Alerting)
- Alert rule management
- Alert creation and resolution
- Alert acknowledgment
- Custom alert handlers
- Cooldown periods
- Severity levels (critical, warning, info)
18. Testing & Quality
- Jest test framework integration
- TypeScript test configuration
- Comprehensive test coverage
- ESLint code quality checks
- Automated testing in CI/CD
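A typical Jest setup for a contributor, assuming the ts-jest preset listed in the development dependencies (the package's actual jest.config may differ), looks roughly like this:
// jest.config.ts (illustrative sketch)
import type { Config } from 'jest';
const config: Config = {
  preset: 'ts-jest',            // compile TypeScript tests on the fly
  testEnvironment: 'node',
  roots: ['<rootDir>/src'],
  testMatch: ['**/__tests__/**/*.test.ts'],
  collectCoverageFrom: ['src/**/*.ts', '!src/**/*.d.ts'],
};
export default config;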
Installation
# Install the published package
npm install @sandip1046/rubizz-shared-libs
# Or install specific version
npm install @sandip1046/rubizz-shared-libs@^1.0.0
# Or install latest version
npm install @sandip1046/rubizz-shared-libs@latest
Package Information
- Package Name: @sandip1046/rubizz-shared-libs
- Current Version: 1.5.0
- Registry: npmjs.com
- Size: ~45 kB (unpacked)
- Files: 70+ files included
Usage
Basic Setup
import {
Logger,
AuthUtils,
DatabaseManager,
RedisManager,
Utils,
ValidationSchemas,
BrevoEmailService,
RedisClient,
RedisInstanceType,
RedisOperation,
HealthCheckService,
ServingStatus,
WebSocketConnectionManager,
ConnectionState,
ProtoVerifier
} from '@sandip1046/rubizz-shared-libs';
// Initialize logger
const logger = Logger.getInstance('service-name', 'production');
// Initialize authentication
AuthUtils.initialize({
secret: process.env.JWT_SECRET,
expiresIn: process.env.JWT_EXPIRES_IN,
refreshExpiresIn: process.env.JWT_REFRESH_EXPIRES_IN
});
// Initialize database
DatabaseManager.initialize({
host: process.env.DB_HOST,
port: parseInt(process.env.DB_PORT),
database: process.env.DB_NAME,
username: process.env.DB_USERNAME,
password: process.env.DB_PASSWORD,
ssl: process.env.DB_SSL === 'true',
poolSize: parseInt(process.env.DB_POOL_SIZE)
});
// Initialize Redis
RedisManager.initialize({
host: process.env.REDIS_HOST,
port: parseInt(process.env.REDIS_PORT),
password: process.env.REDIS_PASSWORD,
db: parseInt(process.env.REDIS_DB),
retryDelayOnFailover: 100,
maxRetriesPerRequest: 3
});
// Initialize email service
BrevoEmailService.initialize(
process.env.BREVO_API_KEY,
process.env.FROM_EMAIL
);
Authentication Usage
// Generate JWT token
const accessToken = AuthUtils.generateAccessToken({
userId: user.id,
email: user.email,
role: user.role
});
// Verify JWT token
const payload = AuthUtils.verifyToken(token);
// Hash password
const hashedPassword = await AuthUtils.hashPassword(password);
// Compare password
const isValid = await AuthUtils.comparePassword(password, hash);
Database Usage
// Get database connection
const db = DatabaseManager.getConnection();
// Use Prisma client
const user = await db.user.create({
data: {
email: '[email protected]',
firstName: 'John',
lastName: 'Doe'
}
});
// Health check
const isHealthy = await DatabaseManager.healthCheck();
Redis Usage
// Set value
await RedisManager.set('key', 'value', 3600); // 1 hour TTL
// Get value
const value = await RedisManager.get('key');
// Hash operations
await RedisManager.hset('user:123', 'name', 'John');
const name = await RedisManager.hget('user:123', 'name');
Logging Usage
// Basic logging
logger.info('User created', { userId: user.id });
logger.error('Database error', error, { query: 'SELECT * FROM users' });
// Request logging middleware
app.use(Logger.createRequestLogger('service-name'));
// Error logging middleware
app.use(Logger.createErrorLogger('service-name'));
Email Usage
// Send verification email
await BrevoEmailService.sendVerificationEmail(
'[email protected]',
'verification-token'
);
// Send password reset email
await BrevoEmailService.sendPasswordResetEmail(
'[email protected]',
'reset-token'
);
// Send custom email
await BrevoEmailService.sendEmail({
to: '[email protected]',
subject: 'Custom Email',
template: 'custom-template',
params: { name: 'John' }
});
Validation Usage
// Validate request body
const { error, value } = ValidationSchemas.userRegistration.validate(req.body);
if (error) {
return res.status(400).json({ error: error.details[0].message });
}
// Validate query parameters
const { error, value } = ValidationSchemas.pagination.validate(req.query);
Redis Service Client Usage
// Using the Redis Service Client for centralized Redis operations
import { RedisClient, RedisInstanceType } from '@sandip1046/rubizz-shared-libs';
// Initialize Redis client
const redisClient = new RedisClient({
baseUrl: process.env.REDIS_SERVICE_URL || 'http://localhost:3000/api/v1/redis',
timeout: 30000,
retries: 3,
retryDelay: 1000,
});
// Basic operations
const value = await redisClient.get(RedisInstanceType.CACHE, 'user:123');
await redisClient.set(RedisInstanceType.CACHE, 'user:123', 'John Doe', 3600);
// Hash operations
await redisClient.hset(RedisInstanceType.SESSION, 'session:abc', 'userId', '123');
const userId = await redisClient.hget(RedisInstanceType.SESSION, 'session:abc', 'userId');
// List operations
await redisClient.lpush(RedisInstanceType.QUEUE, 'notifications', 'email', 'sms');
const notification = await redisClient.rpop(RedisInstanceType.QUEUE, 'notifications');
// Health checks
const isHealthy = await redisClient.healthCheck();
const stats = await redisClient.getStats(RedisInstanceType.CACHE);
gRPC Health Check Service Usage
import { HealthCheckService, ServingStatus } from '@sandip1046/rubizz-shared-libs';
import * as grpc from '@grpc/grpc-js';
// Initialize health check service
const healthCheckService = new HealthCheckService('my-service', logger);
// Set service status
healthCheckService.setServing(); // Service is healthy
healthCheckService.setNotServing(); // Service is unhealthy
healthCheckService.setUnknown(); // Service status unknown
// Register with gRPC server
const server = new grpc.Server();
healthCheckService.registerHealthService(server);
// Get service status
const status = healthCheckService.getServiceStatus();
console.log('Service status:', status); // SERVING, NOT_SERVING, or UNKNOWN
// Perform health check
const healthStatus = await healthCheckService.performHealthCheck();
console.log('Health status:', healthStatus);
Features:
- Standard grpc.health.v1.Health service implementation
- Supports both unary Check and streaming Watch methods
- Automatic service registration with gRPC server
- Health status tracking and management
- Compatible with Kubernetes health probes
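For reference, another service (or a probe) can query the registered health endpoint with a plain @grpc/grpc-js client. This is a minimal sketch, assuming a local copy of the standard grpc.health.v1 health.proto and an illustrative address:
import * as grpc from '@grpc/grpc-js';
import * as protoLoader from '@grpc/proto-loader';
// Load the standard health-check proto (path is illustrative)
const packageDefinition = protoLoader.loadSync('./proto/health.proto', {
  keepCase: true,
  longs: String,
  enums: String,
  defaults: true,
  oneofs: true,
});
const healthProto = grpc.loadPackageDefinition(packageDefinition) as any;
// Connect to the service that registered HealthCheckService
const client = new healthProto.grpc.health.v1.Health(
  'localhost:50051', // assumed address
  grpc.credentials.createInsecure()
);
// Unary Check; an empty service name asks about the server as a whole
client.Check({ service: '' }, (err: grpc.ServiceError | null, response: { status: string }) => {
  if (err) {
    console.error('Health check failed:', err.message);
    return;
  }
  console.log('Status:', response.status); // e.g. 'SERVING'
});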
WebSocket Connection Manager Usage
import { WebSocketConnectionManager, ConnectionState } from '@sandip1046/rubizz-shared-libs';
import { WebSocket } from 'ws';
import { createServer } from 'http';
const httpServer = createServer();
const logger = Logger.getInstance('my-service', 'production');
// Initialize connection manager
const connectionManager = new WebSocketConnectionManager(logger, {
pingInterval: 30000, // Send ping every 30 seconds
pongTimeout: 10000, // Wait 10 seconds for pong
connectionTimeout: 60000, // Consider connection stale after 60 seconds
maxReconnectAttempts: 5, // Maximum 5 reconnection attempts
cleanupInterval: 60000 // Cleanup stale connections every 60 seconds
});
// Register a new connection
const connectionId = 'conn-123';
const ws = new WebSocket('ws://localhost:3000');
connectionManager.registerConnection(connectionId, ws, {
userId: 'user-456',
sessionId: 'session-789',
deviceId: 'device-abc',
userAgent: 'Mozilla/5.0...',
ipAddress: '192.168.1.1'
});
// Generate reconnect token
const reconnectToken = connectionManager.generateReconnectToken(connectionId);
// Handle reconnection
const newWs = new WebSocket('ws://localhost:3000');
const reconnectResult = connectionManager.handleReconnection(
reconnectToken,
newWs,
'conn-456' // New connection ID
);
if (reconnectResult && reconnectResult.success) {
console.log('Reconnection successful:', reconnectResult.connectionId);
// State is automatically preserved (subscriptions, metadata, etc.)
}
// Get user connections
const userConnections = connectionManager.getUserConnections('user-456');
console.log('User connections:', userConnections);
// Update connection activity
connectionManager.updateActivity(connectionId);
// Get connection statistics
const stats = connectionManager.getStats();
console.log('Connection stats:', stats);
// {
// totalConnections: 10,
// connectionsByUser: 5,
// connectionsByState: {
// CONNECTING: 0,
// CONNECTED: 8,
// RECONNECTING: 1,
// DISCONNECTED: 0,
// ERROR: 1
// },
// averageConnectionDuration: 1234567
// }
// Remove connection
connectionManager.removeConnection(connectionId, 'User disconnected');
// Shutdown (cleanup all connections)
connectionManager.shutdown();
Features:
- Automatic ping/pong health monitoring
- Connection state management
- Token-based reconnection with state preservation
- Automatic cleanup of stale connections
- User-based connection tracking
- Connection statistics and monitoring
- Configurable timeouts and intervals
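The usage example above registers sockets it creates itself; on the server side the manager can be wired into a ws WebSocketServer along these lines (connection IDs via crypto.randomUUID and the metadata fields shown are illustrative assumptions):
import { createServer } from 'http';
import { WebSocketServer } from 'ws';
import { randomUUID } from 'crypto';
import { Logger, WebSocketConnectionManager } from '@sandip1046/rubizz-shared-libs';
const logger = Logger.getInstance('my-service', 'production');
const connectionManager = new WebSocketConnectionManager(logger, {
  pingInterval: 30000,
  pongTimeout: 10000,
  connectionTimeout: 60000,
  maxReconnectAttempts: 5,
  cleanupInterval: 60000
});
const httpServer = createServer();
const wss = new WebSocketServer({ server: httpServer });
wss.on('connection', (ws, req) => {
  const connectionId = randomUUID(); // illustrative ID scheme
  // Track the socket and whatever request metadata is available
  connectionManager.registerConnection(connectionId, ws, {
    userAgent: req.headers['user-agent'] ?? 'unknown',
    ipAddress: req.socket.remoteAddress ?? 'unknown'
  });
  ws.on('close', () => {
    connectionManager.removeConnection(connectionId, 'Socket closed');
  });
});
httpServer.listen(3000);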
Proto File Verification Usage
import { ProtoVerifier } from '@sandip1046/rubizz-shared-libs';
const logger = Logger.getInstance('proto-verifier', 'development');
const protoVerifier = new ProtoVerifier(logger);
// Discover all .proto files in a directory
protoVerifier.discoverProtoFiles('./proto');
// Verify all proto files
const verifiedInfos = protoVerifier.verifyAllProtoFiles();
// Generate verification report
const report = protoVerifier.generateReport();
console.log(report);
// Check if a proto file has health check service
const protoInfo = protoVerifier.parseProtoFile('./proto/health.proto');
if (protoVerifier.hasHealthCheckService(protoInfo)) {
console.log('Health check service found');
}
Features:
- Automatic proto file discovery
- Syntax validation
- Import path validation
- Service and message extraction
- Health check service detection
- Detailed verification reports
Dead Letter Queue Handler Usage
import { DeadLetterQueueHandler, DeadLetterEvent } from '@sandip1046/rubizz-shared-libs';
const logger = Logger.getInstance('my-service', 'production');
const dlqHandler = new DeadLetterQueueHandler(logger, {
maxRetries: 3,
retryDelay: 1000,
retryBackoffMultiplier: 2,
dlqTopicPrefix: 'dlq.',
enableAutoRetry: true,
retryTopicPrefix: 'retry.',
});
// Create DLQ event from failed event
const dlqEvent = dlqHandler.createDeadLetterEvent(
originalEvent,
error,
'order.created',
0 // retry count
);
// Add to DLQ
dlqHandler.addToDLQ(dlqEvent);
// Check if should retry
if (dlqHandler.shouldRetry(dlqEvent)) {
const { retryEvent, retryTopic, retryDelay } = dlqHandler.prepareForRetry(dlqEvent);
// Publish to retry topic after delay
setTimeout(async () => {
await kafkaService.publishEvent(retryTopic, retryEvent);
}, retryDelay);
}
// Get DLQ statistics
const stats = dlqHandler.getDLQStats();
console.log('DLQ Stats:', stats);
// Get DLQ topic name
const dlqTopic = dlqHandler.getDLQTopicName('order.created');
// Returns: 'dlq.order.created'
Features:
- Automatic retry with exponential backoff
- Configurable retry strategies per event type
- DLQ event tracking and statistics
- Manual retry and resolution support
- Integration with Kafka topics
Event Schema Validator Usage
import { EventSchemaValidator, EventSchema } from '@sandip1046/rubizz-shared-libs';
const logger = Logger.getInstance('my-service', 'production');
const validator = new EventSchemaValidator(logger, false); // strictMode: false
// Register event schema
const orderSchema: EventSchema = {
eventType: 'order.created',
version: '1.0',
schema: {
type: 'object',
required: ['eventId', 'eventType', 'orderId', 'amount'],
properties: {
eventId: { type: 'string', format: 'uuid' },
eventType: { type: 'string', enum: ['order.created'] },
orderId: { type: 'string' },
amount: { type: 'number', minimum: 0 },
status: { type: 'string', enum: ['pending', 'confirmed', 'cancelled'] },
},
},
};
validator.registerSchema(orderSchema);
// Validate event
const result = validator.validate(event, 'order.created', '1.0');
if (!result.valid) {
console.error('Validation errors:', result.errors);
// Handle validation errors
} else {
console.log('Event is valid');
if (result.warnings.length > 0) {
console.warn('Validation warnings:', result.warnings);
}
}
// Check if schema exists
if (validator.hasSchema('order.created', '1.0')) {
const schema = validator.getSchema('order.created', '1.0');
console.log('Schema found:', schema);
}
Features:
- JSON Schema validation
- Type checking (string, number, boolean, array, object)
- Format validation (date-time, email, uri, uuid)
- Enum validation
- Number constraints (minimum, maximum)
- String pattern matching
- Array item validation
- Required field checking
- Unknown field warnings
Common Kafka Event Types
import { KafkaEventType, KAFKA_TOPICS, BaseKafkaEvent } from '@sandip1046/rubizz-shared-libs';
// Use predefined event types
const event: BaseKafkaEvent = {
eventId: 'evt-123',
eventType: KafkaEventType.ORDER_CREATED,
serviceName: 'restaurant-service',
timestamp: new Date().toISOString(),
version: '1.0',
};
// Use predefined topics
const topic = KAFKA_TOPICS.ORDER_CREATED; // 'order.created'
Error Handling Usage
import {
AppError,
BadRequestError,
NotFoundError,
ValidationError,
ErrorHandler,
ErrorCode
} from '@sandip1046/rubizz-shared-libs';
// Create custom errors
throw new NotFoundError('User', 'user-123');
throw new BadRequestError('Invalid email format', { field: 'email', value: 'invalid' });
throw new ValidationError('Validation failed', { errors: ['Field required'] });
// Normalize unknown errors
try {
// some operation
} catch (error) {
const appError = ErrorHandler.normalizeError(error);
// Handle appError
}
// Check if error is operational
if (ErrorHandler.isOperationalError(error)) {
// Safe to expose to client
}
// Format error for API response
const errorResponse = ErrorHandler.formatErrorResponse(error, includeStack);
HTTP Client Usage
import { HttpClient, CircuitBreakerState } from '@sandip1046/rubizz-shared-libs';
const logger = Logger.getInstance('my-service', 'production');
const httpClient = new HttpClient(logger, {
baseURL: 'https://api.example.com',
timeout: 30000,
retries: 3,
retryDelay: 1000,
enableCircuitBreaker: true,
circuitBreakerThreshold: 5,
circuitBreakerTimeout: 60000,
});
// Make requests with automatic retry and circuit breaker
try {
const response = await httpClient.get('/users');
console.log(response.data);
} catch (error) {
console.error('Request failed:', error);
}
// Check circuit breaker state
const state = httpClient.getCircuitBreakerState();
if (state === CircuitBreakerState.OPEN) {
console.log('Service unavailable - circuit breaker is open');
}
// Manually reset circuit breaker if needed
httpClient.resetCircuitBreakerManually();
API Response Formatter Usage
import { ResponseFormatter } from '@sandip1046/rubizz-shared-libs';
// Success response
const successResponse = ResponseFormatter.success(data, 'Operation successful', requestId);
// Error response
const errorResponse = ResponseFormatter.error('Operation failed', 'ERROR_CODE', requestId);
// Paginated response
const paginatedResponse = ResponseFormatter.paginated(
items,
page,
limit,
total,
'Items retrieved successfully',
requestId
);
// Created response (201)
const createdResponse = ResponseFormatter.created(newResource, 'Resource created', requestId);
// Updated response
const updatedResponse = ResponseFormatter.updated(updatedResource, 'Resource updated', requestId);
// Deleted response
const deletedResponse = ResponseFormatter.deleted('Resource deleted', requestId);
Version Management Usage
import { VersionManager } from '@sandip1046/rubizz-shared-libs';
// Parse version
const versionInfo = VersionManager.parse('1.2.3-beta.1+build.123');
console.log(versionInfo); // { major: 1, minor: 2, patch: 3, prerelease: 'beta.1', build: 'build.123' }
// Format version
const versionString = VersionManager.format(versionInfo, true); // 'v1.2.3-beta.1+build.123'
// Compare versions
const result = VersionManager.compare('1.2.3', '1.2.4'); // -1 (v1 < v2)
const result2 = VersionManager.compare('2.0.0', '1.9.9'); // 1 (v1 > v2)
// Check version range
VersionManager.satisfies('1.2.3', '^1.0.0'); // true
VersionManager.satisfies('2.0.0', '^1.0.0'); // false
VersionManager.satisfies('1.2.3', '~1.2.0'); // true
VersionManager.satisfies('1.3.0', '~1.2.0'); // false
// Increment version
const nextPatch = VersionManager.increment('1.2.3', 'patch'); // { major: 1, minor: 2, patch: 4 }
const nextMinor = VersionManager.increment('1.2.3', 'minor'); // { major: 1, minor: 3, patch: 0 }
const nextMajor = VersionManager.increment('1.2.3', 'major'); // { major: 2, minor: 0, patch: 0 }
// Get next version string
const nextVersion = VersionManager.getNextVersion('1.2.3', 'patch'); // '1.2.4'
Resilience Utilities Usage
Circuit Breaker
import { ResilienceCircuitBreaker, ResilienceCircuitBreakerState } from '@sandip1046/rubizz-shared-libs';
const logger = Logger.getInstance('my-service', 'production');
const circuitBreaker = new ResilienceCircuitBreaker(logger, {
failureThreshold: 5,
successThreshold: 2,
timeout: 60000, // 1 minute
resetTimeout: 300000, // 5 minutes
monitoringPeriod: 60000, // 1 minute
});
// Execute with circuit breaker protection
try {
const result = await circuitBreaker.execute(
() => externalService.call(),
() => fallbackService.call() // Optional fallback
);
} catch (error) {
console.error('Operation failed:', error);
}
// Check circuit breaker state
const state = circuitBreaker.getState();
if (state === ResilienceCircuitBreakerState.OPEN) {
console.log('Circuit breaker is open - service unavailable');
}
// Get statistics
const stats = circuitBreaker.getStats();
console.log('Circuit breaker stats:', stats);
Retry Strategy
import { ResilienceRetryStrategy, RetryStrategyType } from '@sandip1046/rubizz-shared-libs';
const logger = Logger.getInstance('my-service', 'production');
const retry = new ResilienceRetryStrategy(logger, {
maxRetries: 3,
initialDelay: 1000,
maxDelay: 30000,
strategy: RetryStrategyType.EXPONENTIAL,
retryablePredicate: (error) => {
// Only retry on network errors
return error.message.includes('ECONNREFUSED') || error.message.includes('ETIMEDOUT');
},
onRetry: (attempt, error, delay) => {
console.log(`Retry attempt ${attempt} after ${delay}ms`);
},
});
// Execute with retry
const result = await retry.execute(() => unreliableOperation());
Bulkhead Pattern
import { Bulkhead } from '@sandip1046/rubizz-shared-libs';
const logger = Logger.getInstance('my-service', 'production');
const bulkhead = new Bulkhead(logger, {
maxConcurrent: 10, // Max 10 concurrent operations
maxQueueSize: 50, // Max 50 queued operations
timeout: 30000, // 30 second timeout
});
// Execute with bulkhead isolation
const result = await bulkhead.execute(() => resourceIntensiveOperation());
// Get statistics
const stats = bulkhead.getStats();
console.log('Bulkhead stats:', stats);
Timeout Utilities
import { Timeout } from '@sandip1046/rubizz-shared-libs';
// Execute with timeout
const result = await Timeout.execute(
() => longRunningOperation(),
5000, // 5 second timeout
new Error('Operation timed out')
);
// Create cancellable timeout
const { cancel } = Timeout.createCancellableTimeout(5000, () => {
console.log('Timeout occurred');
});
// Cancel if needed
cancel();
// Delay execution
await Timeout.delay(1000); // Wait 1 second
Graceful Degradation
import { GracefulDegradation } from '@sandip1046/rubizz-shared-libs';
const logger = Logger.getInstance('my-service', 'production');
const degradation = new GracefulDegradation(logger);
// Using builder pattern
const result = await degradation.execute(
GracefulDegradation.builder<string>()
.primary(() => primaryService.getData())
.fallback(() => backupService.getData())
.cacheFallback(() => cacheService.getData())
.defaultValue('default-value')
.shouldUseFallback((error) => error.message.includes('timeout'))
.build()
);
// Or direct strategy
const result2 = await degradation.execute({
primary: () => primaryService.getData(),
fallback: () => backupService.getData(),
defaultValue: 'default-value',
});
Security Utilities Usage
Input Sanitization
import { InputSanitizer } from '@sandip1046/rubizz-shared-libs';
// Sanitize string input
const sanitized = InputSanitizer.sanitizeString(userInput, {
allowHTML: false,
maxLength: 1000,
removeScripts: true,
removeEventHandlers: true,
});
// Sanitize object recursively
const sanitizedObject = InputSanitizer.sanitizeObject(userData);
// Sanitize SQL input
const safeSQL = InputSanitizer.sanitizeSQL(userInput);
// Sanitize file path
const safePath = InputSanitizer.sanitizeFilePath(userPath);
// Sanitize email
const safeEmail = InputSanitizer.sanitizeEmail(userEmail);
// Sanitize URL
const safeURL = InputSanitizer.sanitizeURL(userURL);
// Sanitize NoSQL input
const safeNoSQL = InputSanitizer.sanitizeNoSQL(userData);
// Encode HTML entities
const encoded = InputSanitizer.encodeHtmlEntities('<script>alert("XSS")</script>');
// Result: &lt;script&gt;alert(&quot;XSS&quot;)&lt;/script&gt;
CSRF Protection
import { CSRFProtection } from '@sandip1046/rubizz-shared-libs';
import cookieParser from 'cookie-parser';
const csrf = new CSRFProtection({
secret: process.env.CSRF_SECRET || 'your-secret-key',
cookieName: 'XSRF-TOKEN',
headerName: 'X-XSRF-TOKEN',
cookieOptions: {
httpOnly: false, // Must be false for JavaScript access
secure: true, // HTTPS only in production
sameSite: 'strict',
maxAge: 24 * 60 * 60 * 1000, // 24 hours
},
});
// In Express app setup
app.use(cookieParser());
app.use(csrf.generateTokenMiddleware()); // Generate token for all requests
// Protect routes
app.post('/api/users', csrf.verifyTokenMiddleware(), (req, res) => {
// Protected route
});
// Get token for current request (for forms)
app.get('/api/csrf-token', (req, res) => {
const token = csrf.getToken(req);
res.json({ csrfToken: token });
});
Security Headers
import { SecurityHeaders } from '@sandip1046/rubizz-shared-libs';
// Use default security headers
app.use(SecurityHeaders.middleware(SecurityHeaders.getDefaultConfig()));
// Custom security headers
app.use(SecurityHeaders.middleware({
contentSecurityPolicy: {
defaultSrc: ["'self'"],
scriptSrc: ["'self'", "'unsafe-inline'", 'https://cdn.example.com'],
styleSrc: ["'self'", "'unsafe-inline'"],
imgSrc: ["'self'", 'data:', 'https:'],
connectSrc: ["'self'", 'https://api.example.com'],
},
strictTransportSecurity: {
maxAge: 31536000, // 1 year
includeSubDomains: true,
},
xFrameOptions: 'DENY',
xContentTypeOptions: true,
xXSSProtection: true,
referrerPolicy: 'strict-origin-when-cross-origin',
}));
Observability & Monitoring Usage
Distributed Tracing
import { Tracing } from '@sandip1046/rubizz-shared-libs';
// Initialize tracing
Tracing.initialize('rubizz-user-service');
// Start a span
const span = Tracing.startSpan('user-operation', {
attributes: { 'user.id': '123' },
});
// Execute function within span
const result = await Tracing.withSpan('fetch-user', async (span) => {
span.setAttribute('user.id', userId);
return await fetchUser(userId);
});
// Extract trace context from headers
const context = Tracing.extractContext(req.headers);
if (context) {
// Use context to create child span
const childSpan = Tracing.createChildSpan('child-operation', context);
}
// Inject trace context into headers
const headers = Tracing.injectContext(context);
Metrics Collection
import { Metrics } from '@sandip1046/rubizz-shared-libs';
// Initialize metrics
Metrics.initialize('rubizz-user-service');
// Record counter
Metrics.counter('requests_total', 1, { endpoint: '/users' });
// Record gauge
Metrics.gauge('active_connections', 42);
// Record histogram
Metrics.histogram('request_duration_seconds', 0.5, { method: 'GET' });
// Record HTTP request metrics
Metrics.httpRequest('GET', '/users', 200, 150, { userId: '123' });
// Record gRPC request metrics
Metrics.grpcRequest('UserService', 'GetUser', 0, 50);
// Record database query metrics
Metrics.dbQuery('SELECT', 'users', 25, true);
// Get and aggregate metrics
const allMetrics = Metrics.getMetrics();
const requestCount = Metrics.aggregate('requests_total', 'sum');
Application Performance Monitoring
import { APM } from '@sandip1046/rubizz-shared-libs';
// Initialize APM
APM.initialize({
serviceName: 'rubizz-user-service',
environment: 'production',
version: '1.0.0',
});
// Measure operation performance
const result = await APM.measure('fetch-user', async () => {
return await fetchUser(userId);
}, { userId });
// Record performance metric manually
APM.recordPerformance('database-query', 150, true, undefined, {
query: 'SELECT * FROM users',
});
// Get service health
const health = APM.getServiceHealth();
console.log(health.status); // 'healthy' | 'degraded' | 'unhealthy'
// Get performance statistics
const stats = APM.getPerformanceStats('fetch-user');
console.log(stats.p95); // 95th percentile duration
// Get slow operations
const slowOps = APM.getSlowOperations(1000, 10); // Operations > 1s
// Get error operations
const errors = APM.getErrorOperations(10);
Alerting
import { Alerting } from '@sandip1046/rubizz-shared-libs';
// Register alert handler
Alerting.registerHandler({
handle: async (alert) => {
// Send email, Slack notification, etc.
console.log(`Alert: ${alert.name} - ${alert.message}`);
},
});
// Register alert rule
Alerting.registerRule({
id: 'high-error-rate',
name: 'High Error Rate',
condition: {
metric: 'error_rate',
operator: 'gt',
threshold: 0.1, // 10%
duration: 60000, // 1 minute
aggregation: 'avg',
},
severity: 'critical',
enabled: true,
cooldown: 300000, // 5 minutes
});
// Create alert manually
Alerting.createAlert(
'Service Down',
'User service is not responding',
'critical',
'health-check',
{ service: 'user-service' }
);
// Check rules against metric values
const metricValues = new Map([
['error_rate', 0.15],
['response_time', 5000],
]);
Alerting.checkRules(metricValues);
// Get active alerts
const activeAlerts = Alerting.getActiveAlerts('critical');
// Resolve alert
Alerting.resolveAlert(alertId);
// Acknowledge alert
Alerting.acknowledgeAlert(alertId, '[email protected]');
Development
Prerequisites
- Node.js 18+
- TypeScript 5.3+
- NPM 9+
Setup
# Install dependencies
npm install
# Set up environment variables
cp env.example .env
# Edit .env with your database credentials
# Generate Prisma client
npx prisma generate
# Run database migrations
npx prisma migrate dev --name init
# Build the package
npm run build
# Run tests
npm test
# Lint code
npm run lint
# Fix linting issues
npm run lint:fix
Database Setup
# Generate Prisma client (after schema changes)
npx prisma generate
# Create and apply migrations
npx prisma migrate dev --name migration_name
# Reset database (development only)
npx prisma migrate reset
# Deploy migrations to production
npx prisma migrate deploy
# View database in Prisma Studio
npx prisma studio
Building
npm run build
This will compile TypeScript to JavaScript and generate type definitions in the dist/ directory.
Dependencies
Production Dependencies
- jsonwebtoken - JWT token handling
- bcryptjs - Password hashing
- joi - Validation schemas
- uuid - UUID generation
- moment - Date manipulation
- lodash - Utility functions
- axios - HTTP client
- redis - Redis client
- winston - Logging
- @prisma/client - Database ORM
- nodemailer - Email service
- @grpc/grpc-js - gRPC client/server
- @grpc/proto-loader - Protocol buffer loader
- ws - WebSocket client/server
Development Dependencies
- @types/* - TypeScript type definitions (including @types/ws)
- typescript - TypeScript compiler
- jest - Testing framework
- ts-jest - TypeScript Jest transformer
- eslint - Code linting
- @typescript-eslint/* - TypeScript ESLint rules
Version Management
The package follows semantic versioning (semver):
- Major (1.0.0 → 2.0.0): Breaking changes
- Minor (1.0.0 → 1.1.0): New features, backward compatible
- Patch (1.0.0 → 1.0.1): Bug fixes, backward compatible
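In consuming services, the documented VersionManager.satisfies helper (see Version Management Usage above) mirrors how these ranges resolve; the version numbers below are illustrative:
import { VersionManager } from '@sandip1046/rubizz-shared-libs';
// Caret ranges pick up backward-compatible minor and patch releases
VersionManager.satisfies('1.5.0', '^1.0.0'); // true  - new features, no breaking changes
VersionManager.satisfies('2.0.0', '^1.0.0'); // false - major bump signals breaking changes
// Tilde ranges pick up patch releases only
VersionManager.satisfies('1.0.3', '~1.0.0'); // true  - bug fix
VersionManager.satisfies('1.1.0', '~1.0.0'); // false - new feature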
Automated Publishing
The package uses GitHub Actions for automated publishing:
Method 1: Tag-based Publishing (Recommended)
# Create and push a tag
git tag v1.0.1
git push origin v1.0.1
Method 2: Manual Publishing via GitHub Actions
- Go to Actions
- Select "Publish to NPM" workflow
- Click "Run workflow"
- Choose version type (patch/minor/major)
- Click "Run workflow"
Method 3: Manual Publishing
# Build and publish manually
npm run build
npm publish --access public
Publishing Workflow
The GitHub Actions workflow automatically:
- ✅ Installs dependencies
- ✅ Runs tests (if configured)
- ✅ Runs linting (if configured)
- ✅ Builds the package
- ✅ Publishes to NPM
- ✅ Creates GitHub release
- ✅ Updates version in package.json
Deployment
Render Deployment
The package is designed to work seamlessly with Render:
- No Special Configuration Required: Render automatically installs NPM packages during build
- Environment Variables: Ensure all services have required environment variables
- Version Pinning: Services can pin to specific versions for stability
Service Integration
All Rubizz microservices are configured to use this package:
# In each service's package.json
{
"dependencies": {
"@sandip1046/rubizz-shared-libs": "^1.0.0"
}
}
Updating Services
To update all services to use the latest version:
# Run the update script
cd Server
.\update-services.ps1
# Or manually in each service
npm install @sandip1046/rubizz-shared-libs@latest
Troubleshooting
Common Issues
Package Installation Issues
# Clear npm cache
npm cache clean --force
# Reinstall package
npm install @sandip1046/rubizz-shared-libs@latest
# Check installed version
npm list @sandip1046/rubizz-shared-libs
TypeScript Compilation Issues
# Ensure TypeScript is properly configured
npm run build
# Check for missing dependencies
npm install
# Verify type definitions
npm run build -- --noEmit
Import Issues
// Correct import
import { Logger, AuthUtils } from '@sandip1046/rubizz-shared-libs';
// Incorrect import (old package name)
import { Logger } from 'rubizz-shared-libs'; // ❌
Debug Commands
# Check package contents
npm pack @sandip1046/rubizz-shared-libs
tar -tzf sandip1046-rubizz-shared-libs-1.0.0.tgz
# Verify package.json
npm view @sandip1046/rubizz-shared-libs
# Check registry
npm config get registry
Contributing
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Make your changes
- Add tests for new features
- Update documentation
- Ensure all tests pass (npm test)
- Run linting (npm run lint)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
Development Workflow
# Clone the repository
git clone https://github.com/sandip1046/rubizz-shared-libs.git
cd rubizz-shared-libs
# Install dependencies
npm install
# Make your changes
# ... edit files ...
# Run tests
npm test
# Run linting
npm run lint
# Build the package
npm run build
# Test the build
npm pack
License
MIT License - see LICENSE file for details.
Support
- 📧 Issues: GitHub Issues
- 📚 Documentation: NPM Package
- 🔗 Repository: GitHub Repository
- 📋 Changelog: GitHub Releases
Changelog
v1.5.0 (2025-01-XX)
- ✨ NEW: Observability & Monitoring
- Distributed Tracing (Tracing) - OpenTelemetry integration with W3C traceparent format support
- Metrics Collection (Metrics) - Counter, Gauge, and Histogram metrics with aggregation
- Application Performance Monitoring (APM) - Performance tracking, health monitoring, and statistics
- Alerting (Alerting) - Alert rule management, creation, resolution, and custom handlers
- 📦 Added @opentelemetry/api and @opentelemetry/semantic-conventions dependencies
- 📚 Enhanced documentation with observability best practices and usage examples
- 🔧 Improved monitoring and observability capabilities across microservices
v1.4.0 (2025-01-XX)
- ✨ NEW: Security Utilities
- Input Sanitization (InputSanitizer) - XSS prevention, SQL/NoSQL injection protection
- CSRF Protection (CSRFProtection) - Token-based CSRF protection with Express middleware
- Security Headers (SecurityHeaders) - Comprehensive security headers management
- 📦 Added isomorphic-dompurify dependency for XSS prevention
- 📚 Enhanced documentation with security best practices
- 🔧 Improved input validation and sanitization
v1.3.0 (2025-01-XX)
- ✨ NEW: Resilience Utilities
- Enhanced Circuit Breaker (ResilienceCircuitBreaker) with state management and statistics
- Multiple Retry Strategies (ResilienceRetryStrategy) - Fixed, Exponential, Linear, Jitter
- Bulkhead Pattern for resource isolation
- Timeout utilities for operation timeouts
- Graceful Degradation with fallback strategies
- 📚 Enhanced documentation with resilience patterns
- 🔧 Improved error handling and resilience patterns
v1.2.0 (2025-01-XX)
- ✨ NEW: Error Handling Utilities (AppError, ErrorHandler)
- Standardized error classes (BadRequestError, NotFoundError, ValidationError, etc.)
- Error code enumeration
- Error metadata support
- Operational vs programming error distinction
- Error normalization and formatting utilities
- ✨ NEW: HTTP Client (HttpClient)
- Automatic retry with exponential backoff
- Circuit breaker pattern implementation
- Request timeout handling
- Configurable retry strategies
- Circuit breaker state management
- ✨ NEW: API Response Formatters (ResponseFormatter)
- Standardized success/error response formatting
- Paginated response formatters
- Created/updated/deleted response helpers
- Consistent API response structure
- ✨ NEW: Version Management (VersionManager)
- Semantic version parsing and formatting
- Version comparison utilities
- Version range satisfaction checking
- Version increment utilities
- 📚 Enhanced documentation with new utilities
- 🔧 Improved type definitions and exports
v1.1.0 (2025-01-XX)
- ✨ NEW: WebSocket Connection Manager (WebSocketConnectionManager)
- Connection lifecycle management
- Automatic ping/pong health monitoring
- Token-based reconnection with state preservation
- Connection state tracking
- Automatic cleanup of stale connections
- Connection statistics and monitoring
- ✨ NEW: gRPC Health Check Service (HealthCheckService)
- Standard grpc.health.v1.Health protocol implementation
- Unary and streaming health check endpoints
- Service status management (SERVING, NOT_SERVING, UNKNOWN)
- Automatic service registration
- ✨ NEW: Proto File Verifier (ProtoVerifier)
- Automatic proto file discovery
- Syntax and import validation
- Service and message extraction
- Health check service detection
- Detailed verification reports
- ✨ NEW: Kafka Dead Letter Queue Handler (DeadLetterQueueHandler)
- DLQ event management and tracking
- Automatic retry with exponential backoff
- Configurable retry strategies per event type
- DLQ statistics and monitoring
- Integration with Kafka topics
- ✨ NEW: Event Schema Validator (EventSchemaValidator)
- JSON Schema validation for Kafka events
- Type, format, and constraint validation
- Schema registration and management
- Validation error reporting
- ✨ NEW: Common Kafka Event Types (KafkaEventType, KAFKA_TOPICS)
- Predefined event types and topics
- Type-safe event interfaces
- Common event structures
- 📦 Added ws and @types/ws dependencies
- 📦 Added @grpc/grpc-js and @grpc/proto-loader dependencies
- 📚 Updated documentation with new features
- 🔧 Enhanced exports and type definitions
v1.0.0 (2024-10-09)
- 🎉 Initial release
- ✅ Complete NPM package setup
- ✅ GitHub Actions automated publishing
- ✅ TypeScript support with full type definitions
- ✅ Jest testing framework integration
- ✅ ESLint code quality checks
- ✅ Redis service client integration
- ✅ Comprehensive utility functions
- ✅ Authentication and authorization utilities
- ✅ Database management utilities
- ✅ Logging system with Winston
- ✅ Email service integration
- ✅ Validation schemas with Joi
- ✅ All 15 microservices updated to use the package
