# Lilac TypeScript Logger

v1.6.0

A comprehensive Node.js process logger for backend applications with Kafka and OpenTelemetry integration.
## Features
- **Structured Logging**: Track function calls, API requests, DB queries, and more
- **Multiple Outputs**: Console, Kafka, and OpenTelemetry support
- **Custom Formatting**: Color-coded output with customizable display order
- **Security**: Automatic sensitive data masking
- **Distributed Tracing**: OpenTelemetry integration for end-to-end tracing
- **Event Streaming**: Kafka integration for centralized log collection
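To illustrate the masking feature conceptually: values of configured keys are replaced before a log line is emitted. The sketch below is a hypothetical illustration of the idea, not the library's actual implementation (the real library configures this via the `maskingKeys` and `enableKeyMasking` options described under Configuration):

```typescript
// Hypothetical sketch of key masking: any value whose key appears in
// maskingKeys is replaced with a mask string before logging.
function maskSensitive(
  record: Record<string, unknown>,
  maskingKeys: Set<string>,
  mask = '***',
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(record)) {
    out[key] = maskingKeys.has(key) ? mask : value;
  }
  return out;
}
```

For example, masking `{ password: 'hunter2', user: 'amy' }` with `maskingKeys = new Set(['password'])` hides the password while leaving `user` intact.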
## Installation

```bash
npm install lilac-typescript
```

## Usage

### Basic Logging
```typescript
import { ProcessLogger } from 'lilac-typescript';

// Initialize logger
const logger = new ProcessLogger();

// Log function calls and results
function processData(input: any) {
  logger.logFunctionCalled('processData', { input });
  const result = transformData(input);
  logger.logFunctionCallResult('processData', { result });
  return result;
}
```

### Kafka Setup
```typescript
const logger = new ProcessLogger({
  enableKafkaLogPublishing: true,
  kafkaConfig: {
    brokerList: ['localhost:9092'],
    clientId: 'my-service',
    kafkaTopics: ['service-logs'],
    messageKey: 'service',
    disconnectAfterSendingMessage: false,
    producerConfig: {
      allowAutoTopicCreation: true,
      transactionTimeout: 30000,
      retry: { maxRetryTime: 30000, retries: 5 },
    },
  },
});
```

### OpenTelemetry Setup
```typescript
const logger = new ProcessLogger({
  enableOpenTelemetryPublishing: true,
  openTelemetryURL: 'http://localhost:4318',
});

// Initialize tracing
await logger.initOpenTelemetryTracing();
```

### Docker Deployment
Start Kafka and OpenTelemetry containers:

```bash
npm run docker:kafka:ot
npm run docker:run:ot
```

## Configuration

### Core Logging Settings
| Parameter | Type | Default | Required | Description |
| -------------------------------- | -------------------------- | ---------------------------------- | -------- | ----------------------------- |
| displayOrder | string[] | ['TIME', 'FUNCTIONNAME', 'BODY'] | No | Order of fields in log output |
| colorsMap | Record<string, ColorSet> | Default color mappings | No | Custom colors for log fields |
| printSeparator | string | \| | No | Separator between log fields |
| enablePrintSeparator | boolean | true | No | Show/hide field separators |
| enablePrintSpaceBetweenLogKeys | boolean | true | No | Add spaces between fields |
| enableLogCounterIncrement | boolean | true | No | Auto-increment log counter |
| maskingKeys | Set<string> | Empty Set | No | Keys to mask in log output |
| enableKeyMasking | boolean | true | No | Enable/disable data masking |
| skipFormatting | boolean | false | No | Skip all formatting if true |
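To show how the display options above interact, here is a rough sketch of how `displayOrder`, `printSeparator`, and the separator/spacing toggles could combine to render a single log line. This is only an illustration of the configuration semantics, not the library's internal formatting code:

```typescript
// Illustrative only: render one log line from named fields, following the
// options in the table above. Field names mirror the default displayOrder
// ['TIME', 'FUNCTIONNAME', 'BODY'].
interface FormatOptions {
  displayOrder: string[];
  printSeparator: string;
  enablePrintSeparator: boolean;
  enablePrintSpaceBetweenLogKeys: boolean;
}

function renderLogLine(fields: Record<string, string>, opts: FormatOptions): string {
  const parts = opts.displayOrder.map((key) => fields[key] ?? '');
  const sep = opts.enablePrintSeparator ? opts.printSeparator : '';
  const pad = opts.enablePrintSpaceBetweenLogKeys ? ' ' : '';
  return parts.join(pad + sep + pad).trim();
}
```

With the defaults, `{ TIME: '12:00', FUNCTIONNAME: 'processData', BODY: '{}' }` would render as `12:00 | processData | {}`.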
### Kafka Integration
| Parameter | Type | Required | Description |
| -------------------------- | --------------- | -------- | ------------------------------ |
| enableKafkaLogPublishing | boolean | No | Master enable switch |
| kafkaConfig | KafkaConfig | Yes* | *Required when enabled |
| kafkaClient | Kafka \| null | No | Pre-configured client instance |
**KafkaConfig fields:**

```typescript
{
  brokerList: string[];                   // Min 1 broker in "host:port" format (required)
  clientId: string;                       // Non-empty string identifier (required)
  kafkaTopics: string[];                  // Min 1 topic name (required)
  disconnectAfterSendingMessage: boolean; // (required)
  producerConfig: ProducerConfig;         // See example below (required)
  messageKey?: string | null;             // Optional publishing key
}
```

**Example ProducerConfig:**

```typescript
{
  allowAutoTopicCreation: true,
  transactionTimeout: 30000,
  retry: {
    maxRetryTime: 30000,
    retries: 5
  }
}
```

**Validation requirements:**

- `brokerList`: Must contain at least 1 valid `"host:port"` entry
- `clientId`: Non-empty string
- `kafkaTopics`: Must contain at least 1 topic name
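The validation rules above can be sketched as a standalone check. This is an illustration of the stated constraints; the library performs its own schema validation, so the function name and error messages here are hypothetical:

```typescript
// Illustrative validation of the KafkaConfig rules listed above.
// Returns an array of error messages; empty means the config passes.
function validateKafkaConfig(cfg: {
  brokerList: string[];
  clientId: string;
  kafkaTopics: string[];
}): string[] {
  const errors: string[] = [];
  const hostPort = /^[^\s:]+:\d+$/; // loose "host:port" check
  if (cfg.brokerList.length < 1 || !cfg.brokerList.every((b) => hostPort.test(b))) {
    errors.push('brokerList must contain at least 1 valid "host:port" entry');
  }
  if (cfg.clientId.trim().length === 0) {
    errors.push('clientId must be a non-empty string');
  }
  if (cfg.kafkaTopics.length < 1) {
    errors.push('kafkaTopics must contain at least 1 topic name');
  }
  return errors;
}
```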
### OpenTelemetry Integration
| Parameter | Type | Required | Description |
| ------------------------------- | --------------------- | -------- | ----------------------- |
| enableOpenTelemetryPublishing | boolean | No | Master enable switch |
| openTelemetryConfig | OpenTelemetryConfig | Yes* | *Required when enabled |
**OpenTelemetryConfig fields:**

```typescript
{
  url: string;                  // Collector endpoint (required)
  scheduledDelayMillis: number; // Min: 1000 (default: 5000)
  maxExportBatchSize: number;   // Min: 1 (default: 100)
  maxQueueSize: number;         // Min: 10 (default: 1000)
  serviceName: string;          // Service identifier
}
```

**Minimum values (enforced by schema validation):**

- `scheduledDelayMillis`: ≥ 1000 ms
- `maxExportBatchSize`: ≥ 1 span
- `maxQueueSize`: ≥ 10 spans
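The minimums above can be expressed as a simple guard. This is a sketch of the stated constraints only; the library enforces them through its own schema validation, and the function below is hypothetical:

```typescript
// Illustrative check of the schema minimums listed above.
// Returns the settings unchanged if valid, otherwise throws.
interface OtelBatchSettings {
  scheduledDelayMillis: number;
  maxExportBatchSize: number;
  maxQueueSize: number;
}

function assertOtelMinimums(cfg: OtelBatchSettings): OtelBatchSettings {
  if (cfg.scheduledDelayMillis < 1000) throw new Error('scheduledDelayMillis must be >= 1000');
  if (cfg.maxExportBatchSize < 1) throw new Error('maxExportBatchSize must be >= 1');
  if (cfg.maxQueueSize < 10) throw new Error('maxQueueSize must be >= 10');
  return cfg;
}
```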
## API Reference

### Core Logging Methods

- `logFunctionCalled(name: string, body: object)`
- `logFunctionCallResult(name: string, result: object)`
- `logException(name: string, error: string)`
- `logDebug(message: string, data: object)`

### Database Logging

- `logDbQueryRequest(query: string, params: object)`
- `logDbQueryResponse(query: string, result: object)`

### Integration Management

- `initOpenTelemetryTracing(): Promise<void>`
- `disconnectKafkaClient(): Promise<void>`
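A common pattern is to pair the request/response methods around a database call. The helper below is a hypothetical sketch against a minimal interface: the two method names come from the API above, but the `DbLogger` interface and `withQueryLogging` wrapper are not part of the library.

```typescript
// Minimal interface capturing the two DB-logging methods from the API above.
interface DbLogger {
  logDbQueryRequest(query: string, params: object): void;
  logDbQueryResponse(query: string, result: object): void;
}

// Hypothetical helper: log the query before execution and its result after.
async function withQueryLogging<T extends object>(
  logger: DbLogger,
  query: string,
  params: object,
  run: () => Promise<T>,
): Promise<T> {
  logger.logDbQueryRequest(query, params);
  const result = await run();
  logger.logDbQueryResponse(query, { result });
  return result;
}
```

In application code this would wrap the actual DB client call, so every query is bracketed by a request and a response log entry.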
## Examples

Example log output:

```text
2023-01-01T12:00:00 [FUNCTION_CALLED] processData {"input":"test"} [SESSION:1234]
2023-01-01T12:00:01 [FUNCTION_CALL_RESULT] processData {"result":"TEST"} [SESSION:1234]
```
## Contributing

Contributions are welcome! Please follow these steps:

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/your-feature`)
3. Commit your changes (`git commit -m 'Add some feature'`)
4. Push to the branch (`git push origin feature/your-feature`)
5. Open a pull request
Please include:
- Description of changes
- Test cases
- Screenshots if applicable
- Updated documentation
## License

Apache 2.0 License - see LICENSE for details.
## Contact
- Author: Amreet Khuntia
- GitHub: AmreetKumarkhuntia
- Issues: Project Issues
