# @upachaarnepal/logger

Enterprise-grade observability and logging module with HTTP tracking, performance metrics, user analytics, and Prometheus integration for the UpachaarNepal healthcare platform.
## Features

### Core Capabilities

- **HTTP Request/Response Logging** - Automatic tracking of all HTTP traffic with latency, headers, body, and user context
- **User Action Tracking** - Log every user action with full context and session information
- **Performance Metrics** - Track service call duration, success rates, and bottlenecks
- **Error Tracking** - Comprehensive error logging with stack traces and severity levels
- **Event-Based Architecture** - Non-blocking, async logging that doesn't slow down your APIs
### Database Support

- **PostgreSQL** - Production-ready relational database
- **TimescaleDB** - Time-series-optimized PostgreSQL with automatic partitioning and compression
- **MongoDB** - Flexible NoSQL database for high write throughput
- **ClickHouse** - Columnar database optimized for analytics at massive scale
### Monitoring & Analytics

- **Prometheus Integration** - Full metrics export with counters, histograms, and gauges
- **Real-time Metrics** - Request rates, latency percentiles, error rates
- **API Analytics** - Most-visited endpoints, slowest APIs, user behavior patterns
- **Service Metrics** - Track performance across all microservices
### Framework Integration

- **Express** - Drop-in middleware for automatic HTTP logging
- **NestJS** - Decorators and interceptors (coming soon)
- **Fastify** - Plugin architecture support (coming soon)
## Installation

```bash
npm install @upachaarnepal/logger
```

### Dependencies
Install the database driver for your chosen database:
```bash
# PostgreSQL / TimescaleDB
npm install pg typeorm

# MongoDB
npm install mongodb

# ClickHouse
npm install @clickhouse/client

# Redis (optional, for caching)
npm install ioredis
```

## Quick Start

### 1. Basic Setup
```typescript
import { logger } from '@upachaarnepal/logger';

await logger.initialize({
  database: {
    type: 'postgres',
    host: 'localhost',
    port: 5432,
    username: 'logger_user',
    password: 'your_password',
    database: 'logger_db',
  },
  logging: {
    enableHttpLogging: true,
    enableUserActions: true,
    enablePerformanceMetrics: true,
    enableErrorTracking: true,
    logLevel: 'info',
    retentionDays: 90,
  },
  prometheus: {
    enabled: true,
    port: 9090,
    path: '/metrics',
    prefix: 'upachaarnepal_',
  },
});
```
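Because writes are asynchronous (and batched when `enableAsync` is on), it's worth draining the logger before your process exits. A minimal sketch using the `flush()` and `shutdown()` methods from the API Reference below:

```typescript
import { logger } from '@upachaarnepal/logger';

// Drain buffered log entries and close database connections on shutdown;
// flush() and shutdown() are documented in the API Reference.
process.on('SIGTERM', async () => {
  await logger.flush();
  await logger.shutdown();
  process.exit(0);
});
```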
### 2. Express Integration

```typescript
import express from 'express';
import { logger, createExpressLogger } from '@upachaarnepal/logger';

const app = express();
app.use(express.json());

// Add logging middleware
app.use(createExpressLogger(logger.getLoggerService(), {
  skipPaths: ['/health', '/metrics'],
  includeRequestBody: true,
  includeResponseBody: true,
  sensitiveFields: ['password', 'token'],
}));

app.listen(3000);
```
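The middleware records requests and responses; if you also want unhandled route errors captured, one option (a sketch, not a built-in feature) is an Express error handler that forwards to the documented `logError()` method:

```typescript
import type { NextFunction, Request, Response } from 'express';
import { logger } from '@upachaarnepal/logger';

// Sketch continuing from the snippet above: forward unhandled route errors
// to the logger, then respond. Register this after all routes; the field
// names follow the Manual Logging example below.
app.use((err: Error, req: Request, res: Response, _next: NextFunction) => {
  void logger.logError({
    errorType: err.name,
    errorMessage: err.message,
    errorStack: err.stack,
    severity: 'high',
    context: { method: req.method, path: req.path },
    level: 'error',
  });
  res.status(500).json({ error: 'Internal Server Error' });
});
```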
### 3. Manual Logging

```typescript
// Log user actions
await logger.logUserAction({
  userId: 'user-123',
  action: 'LOGIN',
  details: { method: 'email', success: true },
  ipAddress: '192.168.1.100',
  level: 'info',
});

// Log performance metrics
await logger.logPerformance({
  service: 'PaymentService',
  operation: 'processPayment',
  duration: 150,
  success: true,
  context: { amount: 100, currency: 'NPR' },
  level: 'info',
});

// Log errors (call this from a catch block so `error` is in scope)
await logger.logError({
  errorType: 'PaymentError',
  errorMessage: 'Payment gateway timeout',
  errorStack: error.stack,
  severity: 'high',
  context: { transactionId: 'txn-123' },
  level: 'error',
});
```
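For performance logs, a small wrapper saves repeating the timing boilerplate. A sketch (the `timed` helper is ours, not part of the package) built on the documented `logPerformance()` call:

```typescript
import { logger } from '@upachaarnepal/logger';

// Hypothetical helper: time an async operation and record it via logPerformance().
async function timed<T>(service: string, operation: string, fn: () => Promise<T>): Promise<T> {
  const start = Date.now();
  try {
    const result = await fn();
    await logger.logPerformance({
      service,
      operation,
      duration: Date.now() - start,
      success: true,
      level: 'info',
    });
    return result;
  } catch (err) {
    await logger.logPerformance({
      service,
      operation,
      duration: Date.now() - start,
      success: false,
      level: 'warn',
    });
    throw err;
  }
}

// Usage: const receipt = await timed('PaymentService', 'processPayment', () => gateway.charge(order));
```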
## Configuration

### Environment Variables

Create a `.env` file:
```bash
# Database Configuration
LOGGER_DB_TYPE=postgres
LOGGER_DB_HOST=localhost
LOGGER_DB_PORT=5432
LOGGER_DB_USERNAME=logger_user
LOGGER_DB_PASSWORD=your_password
LOGGER_DB_NAME=logger_db

# TimescaleDB (if using the timescaledb type)
LOGGER_ENABLE_TIMESCALE=true

# ClickHouse (if using the clickhouse type)
LOGGER_CLICKHOUSE_HOST=http://localhost:8123
LOGGER_CLICKHOUSE_DATABASE=logger_db

# Logging Configuration
LOGGER_ENABLE_HTTP_LOGGING=true
LOGGER_ENABLE_USER_ACTIONS=true
LOGGER_ENABLE_PERFORMANCE_METRICS=true
LOGGER_ENABLE_ERROR_TRACKING=true
LOGGER_LOG_LEVEL=info
LOGGER_RETENTION_DAYS=90

# Performance
LOGGER_ENABLE_ASYNC=true
LOGGER_BATCH_SIZE=100
LOGGER_BATCH_INTERVAL=5000
LOGGER_ENABLE_SAMPLING=false
LOGGER_SAMPLING_RATE=1.0

# Prometheus
LOGGER_ENABLE_PROMETHEUS=true
LOGGER_PROMETHEUS_PORT=9090
LOGGER_PROMETHEUS_PATH=/metrics
LOGGER_PROMETHEUS_PREFIX=upachaarnepal_

# Cache (optional)
LOGGER_ENABLE_CACHE=true
LOGGER_CACHE_PROVIDER=redis
LOGGER_REDIS_URL=redis://localhost:6379
```
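If you keep these settings in a `.env` file, load it before `initialize()` runs. A sketch assuming the widely used `dotenv` package, and assuming the logger reads the `LOGGER_*` variables from `process.env` when no explicit config is passed (the `initialize()` signature makes the config optional; verify this behavior for your version):

```typescript
import 'dotenv/config'; // populate process.env from .env before the logger reads it
import { logger } from '@upachaarnepal/logger';

// With no explicit config, settings are expected to come from the
// LOGGER_* environment variables listed above.
await logger.initialize();
```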
### Programmatic Configuration

```typescript
await logger.initialize({
  database: {
    type: 'timescaledb',
    host: 'localhost',
    port: 5432,
    username: 'logger_user',
    password: 'password',
    database: 'logger_db',
    enableTimescale: true,
  },
  logging: {
    enableHttpLogging: true,
    enableUserActions: true,
    enablePerformanceMetrics: true,
    enableErrorTracking: true,
    logLevel: 'info',
    retentionDays: 90,
  },
  performance: {
    enableAsync: true,
    batchSize: 100,
    batchInterval: 5000,
    enableSampling: false,
    samplingRate: 1.0,
  },
  prometheus: {
    enabled: true,
    port: 9090,
    path: '/metrics',
    prefix: 'upachaarnepal_',
  },
  analytics: {
    enabled: true,
    port: 9091,
    path: '/analytics',
  },
});
```

## Database Setup
### PostgreSQL

```sql
CREATE DATABASE logger_db;
CREATE USER logger_user WITH PASSWORD 'your_password';
GRANT ALL PRIVILEGES ON DATABASE logger_db TO logger_user;
```

### TimescaleDB
```sql
CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;
-- The logger will automatically create hypertables
-- and configure compression/retention policies
```

### MongoDB
```bash
# MongoDB will auto-create the database
# Ensure you have appropriate indexes (created automatically)
```

### ClickHouse
```sql
CREATE DATABASE logger_db;
-- Tables are created automatically by the logger
```

## Prometheus Metrics
The logger exports comprehensive metrics:
### HTTP Metrics

- `upachaarnepal_http_requests_total` - Total HTTP requests
- `upachaarnepal_http_request_duration_ms` - Request duration histogram
- `upachaarnepal_http_request_size_bytes` - Request size histogram
- `upachaarnepal_http_response_size_bytes` - Response size histogram
- `upachaarnepal_http_errors_total` - Total HTTP errors

### User Metrics

- `upachaarnepal_user_actions_total` - Total user actions

### Performance Metrics

- `upachaarnepal_service_calls_total` - Total service calls
- `upachaarnepal_service_call_duration_ms` - Service call duration

### Error Metrics

- `upachaarnepal_errors_total` - Total errors by type and severity

### System Metrics

- `upachaarnepal_logs_processed_total` - Total logs processed
- `upachaarnepal_queue_size` - Current queue sizes
Access metrics at `http://localhost:9090/metrics`.
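A quick way to confirm the exporter is serving data is to curl the endpoint (assuming the default port and path configured above); from Prometheus itself you can then graph rates such as `rate(upachaarnepal_http_requests_total[5m])`:

```bash
# Assumes the default Prometheus settings shown earlier (port 9090, path /metrics)
curl -s http://localhost:9090/metrics | grep upachaarnepal_http_requests_total
```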
## Two Approaches for Working with Logs

### 1. Event-Based Logging (Real-Time)

The logger uses an EventEmitter to emit events in real time as logs are created. Perfect for:
- Real-time alerting and monitoring
- Streaming logs to external systems
- Immediate reactions to errors or slow requests
- Live dashboards and metrics
```typescript
const loggerService = logger.getLoggerService();

// sendAlert, auditLog, performanceAlert, and sendToPagerDuty below are
// your own notification helpers, not part of this package.

// Listen to HTTP logs in real-time
loggerService.on('http-log', (log) => {
  console.log(`HTTP: ${log.method} ${log.path} - ${log.statusCode}`);

  // Alert on slow requests
  if (log.duration > 2000) {
    sendAlert(`Slow request: ${log.path} took ${log.duration}ms`);
  }

  // Alert on errors
  if (log.statusCode >= 500) {
    sendAlert(`Server error on ${log.path}`);
  }
});

// Listen to user actions in real-time
loggerService.on('user-action-log', (log) => {
  console.log(`User ${log.userId} performed ${log.action}`);

  // Track critical actions
  if (log.action === 'USER_DELETED' || log.action === 'ROLE_CHANGED') {
    auditLog(`CRITICAL: ${log.action} by ${log.userId}`);
  }
});

// Listen to performance logs in real-time
loggerService.on('performance-log', (log) => {
  console.log(`${log.service}.${log.operation} took ${log.duration}ms`);

  // Track performance degradation
  if (!log.success || log.duration > 5000) {
    performanceAlert(log);
  }
});

// Listen to errors in real-time
loggerService.on('error-log', (log) => {
  console.log(`Error: ${log.errorType} - ${log.errorMessage}`);

  // Send critical errors to incident management
  if (log.severity === 'critical' || log.severity === 'high') {
    sendToPagerDuty(log);
  }
});
```

### 2. REST API Querying (Historical Data)
Query stored logs for analytics, reporting, and debugging. Perfect for:
- Historical analysis and reporting
- Building dashboards and visualizations
- Debugging past issues
- Compliance and audit trails
```typescript
import { QueryService, AnalyticsService } from '@upachaarnepal/logger';

// Initialize query services; dbConfig is the same database
// configuration object you passed to logger.initialize()
const queryService = QueryService.getInstance(dbConfig);
const analyticsService = AnalyticsService.getInstance(dbConfig);

// Query HTTP logs with filters
const logs = await queryService.queryHttpLogs({
  timeRange: {
    start: new Date('2024-01-01'),
    end: new Date('2024-01-31'),
  },
  filters: {
    statusCode: { $gte: 400 }, // Only errors
    duration: { $gt: 1000 },   // Slower than 1s
  },
  limit: 100,
});

// Get analytics data (startDate/endDate are Date objects you define)
const requestRate = await analyticsService.getRequestRate(startDate, endDate);
const topEndpoints = await analyticsService.getTopEndpoints(startDate, endDate, 10); // Top 10
const errorRate = await analyticsService.getErrorRate(startDate, endDate);

// Advanced queries
const complexQuery = await queryService.advancedQuery({
  logType: 'http',
  timeRange: { start: yesterday, end: today },
  filters: {
    path: { $regex: '/api/users' },
    statusCode: { $in: [500, 502, 503] },
  },
  groupBy: 'path',
  limit: 50,
});
```

See `examples/complete-example.ts` for a full demonstration of both approaches.
## Advanced Features

### Batch Processing

Enable async batch processing for high-throughput scenarios:

```typescript
// in the config object passed to logger.initialize()
performance: {
  enableAsync: true,
  batchSize: 100,
  batchInterval: 5000,
}
```
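Note that with batching enabled, entries can sit in memory for up to `batchInterval` milliseconds before being written. When you need them persisted immediately (in tests, or before shutdown), the documented `flush()` method forces the write:

```typescript
// Force buffered entries to the database instead of waiting for the next batch window.
await logger.flush();
```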
### Sampling

Reduce log volume by sampling:

```typescript
performance: {
  enableSampling: true,
  samplingRate: 0.1, // Log 10% of requests
}
```

### Data Sanitization
Protect sensitive data:

```typescript
createExpressLogger(logger.getLoggerService(), {
  sensitiveHeaders: ['authorization', 'cookie'],
  sensitiveFields: ['password', 'token', 'ssn', 'creditCard'],
  maxBodySize: 10000,
});
```

### Health Checks
```typescript
const health = await logger.health();

console.log({
  status: health.status, // 'healthy' | 'degraded' | 'unhealthy'
  uptime: health.uptime,
  database: health.database.connected,
  logsProcessed: health.metrics.logsProcessed,
  queueSize: health.metrics.queueSize,
});
```
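To expose this to orchestrators such as Kubernetes, you can wrap it in an HTTP endpoint; a sketch (the route path and status mapping are our choices, and `app` is your Express instance):

```typescript
// Sketch: report logger health over HTTP, returning 503 unless fully healthy.
app.get('/logger-health', async (_req, res) => {
  const health = await logger.health();
  res.status(health.status === 'healthy' ? 200 : 503).json(health);
});
```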
Logger
class Logger {
initialize(config?: Partial<LoggerConfig>): Promise<void>
logHttp(entry: HttpLogEntry): Promise<void>
logUserAction(entry: UserActionLogEntry): Promise<void>
logPerformance(entry: PerformanceLogEntry): Promise<void>
logError(entry: ErrorLogEntry): Promise<void>
health(): Promise<HealthCheckResult>
flush(): Promise<void>
shutdown(): Promise<void>
}Types
See `src/types/index.ts` for complete type definitions.
## Performance

### Benchmarks (TimescaleDB)

- **Async Mode**: 10,000+ logs/second
- **Batch Size 100**: ~500 ms flush time
- **Query Performance**: sub-100 ms for most analytics queries
### Optimization Tips
- Use TimescaleDB for time-series data
- Enable async batching for high throughput
- Use ClickHouse for massive scale (billions of logs)
- Enable sampling in non-critical environments
- Set appropriate retention policies
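Putting several of these tips together, a sketch of a high-throughput setup that uses only the options documented above (the specific numbers are illustrative choices, not recommendations from the package):

```typescript
import { logger } from '@upachaarnepal/logger';

// Illustrative high-throughput preset; tune the numbers for your workload.
await logger.initialize({
  database: {
    type: 'timescaledb',      // time-series optimized, per the tips above
    host: 'localhost',
    port: 5432,
    username: 'logger_user',
    password: 'your_password',
    database: 'logger_db',
    enableTimescale: true,
  },
  logging: {
    enableHttpLogging: true,
    enableUserActions: true,
    enablePerformanceMetrics: true,
    enableErrorTracking: true,
    logLevel: 'warn',         // less verbose than 'info'
    retentionDays: 30,        // shorter retention keeps tables lean
  },
  performance: {
    enableAsync: true,        // batch writes off the request path
    batchSize: 500,
    batchInterval: 2000,
    enableSampling: true,
    samplingRate: 0.25,       // keep 25% of logs in non-critical environments
  },
});
```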
## Monitoring with Grafana

Import the provided Grafana dashboard (coming soon) to visualize:
- Request rates and latency percentiles
- Error rates and types
- User activity patterns
- Service performance
- Database health
## Examples
- Basic Usage
- Express Integration (coming soon)
- NestJS Integration (coming soon)
- Analytics Queries (coming soon)
## Architecture

```
┌─────────────────┐
│  Express/HTTP   │
└────────┬────────┘
         │
    ┌────▼────┐
    │ Logger  │
    │ Service │
    └────┬────┘
         │
 ┌───────▼───────┐
 │ Event Emitter │
 └─┬───────────┬─┘
   │           │
┌──▼────────┐ ┌▼────────────┐
│ Prometheus│ │   Batch     │
│  Metrics  │ │  Processor  │
└───────────┘ └──┬──────────┘
                 │
           ┌─────▼─────┐
           │ Database  │
           │ Provider  │
           └─┬───┬───┬─┘
             │   │   │
      ┌──────▼─┐ │   │
      │Postgres│ │   │
      └────────┘ │   │
          ┌──────▼┐  │
          │MongoDB│  │
          └───────┘  │
              ┌──────▼────┐
              │ClickHouse │
              └───────────┘
```

## Contributing
Contributions are welcome! Please read our contributing guidelines.
## License

MIT License - see the LICENSE file for details.
## Support

- **Documentation**: GitHub Wiki
- **Issues**: GitHub Issues
- **Email**: [email protected]
## Roadmap
- [ ] Analytics API with REST endpoints
- [ ] NestJS decorators and interceptors
- [ ] Fastify plugin
- [ ] Grafana dashboard templates
- [ ] Log aggregation and search UI
- [ ] Alerting system
- [ ] Log streaming API
- [ ] Multi-tenant support
Built with ❤️ for the UpachaarNepal healthcare platform
