@shaan_mex/logger
v1.0.3

Lightweight async multistream logger for Node.js + Express + TypeScript.
‼️ BEWARE ‼️
This project has only been used in production on a test server, paired with a Vue 3 SPA. Extensive load testing has not been conducted.
During that use, no logger-related error appeared in the SPA application.
🤔 Use this library at your own risk.
Inspired by Pino for its lightweight, asynchronous, and multistream design.
npm install @shaan_mex/logger
npx shaan-logger-init

Features
- Fire-and-forget — logging never blocks your server
- Multistream — write to console, per-level files, and named domain files simultaneously
- Domain routing — route logs to specific transports based on business context
- Async fanout — transports run in parallel, failures are fully isolated
- Auto rotation — daily log files with size-based overflow
- Express middleware — automatic HTTP request logging with requestId propagation via AsyncLocalStorage
- Resilient — logger errors never interrupt your application flow
- JSON only — structured logs ready for downstream analysis
- Dev/prod modes — colorized output in development, raw JSON in production
Requirements
- Node.js >=20.19.0
- Express >=4.0.0 (peer dependency)
- TypeScript >=5.0 (recommended)
Installation
npm install @shaan_mex/logger

Initialize the config file at your project root:

npx shaan-logger-init

This copies logger.config.json to your project root. If the file already exists, it is left untouched.
Quick Start
import 'dotenv/config'
import express from 'express'
import { logger, httpLogger } from '@shaan_mex/logger'
const app = express()
app.use(express.json())
app.use(httpLogger({ inject: ['method', 'url', 'ip'] }))
app.get('/users/:id', (req, res) => {
logger.info({ msg: 'fetching user', domain: 'service', userId: req.params.id })
res.json({ id: req.params.id })
})
app.listen(3000, () => {
logger.info({ msg: 'server started', domain: 'service', port: 3000 })
})

Configuration
All configuration lives in logger.config.json at your project root.
{
"log": {
"enabled": true,
"dir": "./logs",
"console": true,
"file": true,
"minLevel": "trace",
"maxFileSizeMB": 50,
"bufferMaxEntries": 10000
},
"transports": [
{ "type": "console", "muted": false },
{ "type": "level", "prefix": "error-", "level": "error", "muted": false },
{ "type": "level", "prefix": "info-", "level": "info", "muted": false },
{ "type": "level", "prefix": "warn-", "level": "warn", "muted": false },
{
"type": "named",
"name": "trspHttp",
"muted": false,
"prefix": "http-",
"domains": ["http"],
"levels": ["info", "warn", "error"]
},
{
"type": "named",
"name": "trspService",
"muted": false,
"prefix": "service-",
"domains": ["service"]
}
]
}

log settings
| Field | Type | Default | Description |
|---|---|---|---|
| enabled | boolean | true | Global on/off switch — disables all transports |
| dir | string | ./logs | Directory for log files (relative to project root) |
| console | boolean | true | Enable console transport |
| file | boolean | true | Enable all file transports |
| minLevel | string | trace / info | Minimum level — entries below are ignored. Defaults to trace in development, info in production |
| maxFileSizeMB | number | 50 | Max file size before rotation overflow |
| bufferMaxEntries | number | 10000 | Max in-memory buffer size before dropping entries |
Log Levels
Pino-compatible levels, from least to most critical:
| Level | Usage |
|---|---|
| trace | Very detailed debugging — loops, internal state |
| debug | General debugging — parameters, code branches |
| info | Nominal events — startup, connections |
| warn | Unexpected but non-blocking situations |
| error | Recoverable errors — failed request, timeout |
| fatal | Non-recoverable errors — imminent process shutdown |
logger.trace({ msg: 'entered resolveUser', userId: 12 })
logger.debug({ msg: 'cache miss', key: 'user:12' })
logger.info({ msg: 'server started', port: 3000 })
logger.warn({ msg: 'token expiring soon', expiresIn: 300 })
logger.error({ msg: 'db connection failed', err: error.message })
logger.fatal({ msg: 'disk full — shutting down' })

Transports
Console transport
Colorized pretty-print in development, raw JSON in production.
{ "type": "console", "muted": false }

error and fatal are written to stderr, all other levels to stdout.
Level transport
One file per level. Only receives entries matching the exact level.
{ "type": "level", "prefix": "error-", "level": "error", "muted": false }

Generated filename: error-2026-01-28.log
| Option | Type | Required | Description |
|---|---|---|---|
| prefix | string | ✓ | Filename prefix |
| level | PinoLevel | ✓ | Exact level to accept |
| muted | boolean | — | Silence this transport |
Named transport
Receives entries filtered by domains and optionally by levels. Designed for grouping logs by business context.
{
"type": "named",
"name": "trspHttp",
"muted": false,
"prefix": "http-",
"domains": ["http"],
"levels": ["info", "warn", "error"]
}

Generated filename: http-2026-01-28.log
| Option | Type | Required | Description |
|---|---|---|---|
| name | string | ✓ | Transport identifier |
| prefix | string | ✓ | Filename prefix |
| domains | string[] | ✓ | Accepted domains — strict mode |
| levels | PinoLevel[] | — | If absent: all levels accepted |
| muted | boolean | — | Silence this transport |
Domains
Domains control routing to named transports. They are optional metadata that also appear in the JSON output.
// no domain — reaches level transports and console only
logger.info({ msg: 'generic log' })
// single domain
logger.info({ msg: 'request handled', domain: 'http' })
// multi-domain — routed to all matching transports simultaneously
logger.warn({ msg: 'shared timeout', domain: ['http', 'service'], duration: 5000 })

Routing rules (strict mode)
| Entry | Console | Level transport | Named transport with domain filter |
|---|---|---|---|
| No domain | ✓ | ✓ | ✗ |
| Matching domain | ✓ | ✓ | ✓ |
| Non-matching domain | ✓ | ✓ | ✗ |
A log without a domain field never reaches a named transport that has a domain filter. This is intentional — domain is routing, not just metadata.
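The routing table above can be sketched as a predicate. This is an illustration of the documented strict-mode rules, not the library's actual internals — `routesToNamed` and `NamedTransportFilter` are hypothetical names:

```typescript
type PinoLevel = 'trace' | 'debug' | 'info' | 'warn' | 'error' | 'fatal'

interface NamedTransportFilter {
  domains: string[]      // strict mode: the entry must carry at least one of these
  levels?: PinoLevel[]   // absent means all levels are accepted
}

function routesToNamed(
  filter: NamedTransportFilter,
  entry: { level: PinoLevel; domain?: string | string[] }
): boolean {
  // No domain on the entry → never reaches a domain-filtered named transport
  if (entry.domain === undefined) return false
  const domains = Array.isArray(entry.domain) ? entry.domain : [entry.domain]
  const domainMatch = domains.some((d) => filter.domains.includes(d))
  const levelMatch = !filter.levels || filter.levels.includes(entry.level)
  return domainMatch && levelMatch
}
```

Note that a multi-domain entry matches as soon as any one of its domains appears in the transport's filter.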
Express Middleware
Automatically captures each HTTP request and injects a requestId propagated via AsyncLocalStorage.
import { httpLogger } from '@shaan_mex/logger'
app.use(httpLogger())
// with options
app.use(httpLogger({
resolveRequestId: (req) => req.headers['x-request-id'] as string ?? undefined,
inject: ['method', 'url', 'ip', 'userAgent']
}))

Injectable fields
| Field | Description |
|---|---|
| method | HTTP method (GET, POST...) |
| url | Full URL with query string |
| ip | Client IP address |
| userAgent | User-Agent header |
| userId | req.user?.id — if authentication is present |
Default injected fields: ['method', 'url']
Automatic HTTP log entry
At the end of each request (res.on('finish')), the middleware automatically emits:
{
"level": "info",
"msg": "http request",
"timestamp": 1706486400000,
"requestId": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
"status": 200,
"duration": 42,
"method": "GET",
"url": "/users/123",
"ip": "192.168.1.1",
"domain": "http"
}

Level is automatically determined by HTTP status: info (< 400), warn (400–499), error (≥ 500).
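The status-to-level mapping is simple enough to state as a function. This is a sketch of the documented behavior; `levelForStatus` is an illustrative name, not an export of the package:

```typescript
type HttpLogLevel = 'info' | 'warn' | 'error'

function levelForStatus(status: number): HttpLogLevel {
  if (status >= 500) return 'error' // server errors
  if (status >= 400) return 'warn'  // client errors
  return 'info'                     // 1xx–3xx: nominal
}
```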
requestId propagation
The requestId is automatically included in all logs emitted during the request lifecycle:
app.get('/orders/:id', async (req, res) => {
// requestId automatically included via AsyncLocalStorage
logger.info({ msg: 'fetching order', domain: 'service', orderId: req.params.id })
// → { level: 'info', msg: '...', requestId: 'f47ac10b-...', domain: 'service', ... }
res.json({ ok: true })
})

File Rotation
Log files are daily. The filename includes the transport prefix and the current date.
{prefix}{YYYY-MM-DD}.log
{prefix}{YYYY-MM-DD}~1.log ← overflow when file exceeds maxFileSizeMB
{prefix}{YYYY-MM-DD}~2.log ← second overflow

Examples:
error-2026-01-28.log
error-2026-01-28~1.log
http-2026-01-28.log
service-2026-01-28.log

File size is checked before each write via async stat. No manual intervention required.
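The naming scheme above can be reproduced with a small helper. This is a hedged sketch of the documented pattern (`logFileName` is a hypothetical function, not part of the package's API), using UTC dates for illustration:

```typescript
// Build {prefix}{YYYY-MM-DD}.log, with a ~N suffix for size overflow files
function logFileName(prefix: string, date: Date, overflowIndex = 0): string {
  const yyyy = date.getUTCFullYear()
  const mm = String(date.getUTCMonth() + 1).padStart(2, '0')
  const dd = String(date.getUTCDate()).padStart(2, '0')
  const suffix = overflowIndex > 0 ? `~${overflowIndex}` : ''
  return `${prefix}${yyyy}-${mm}-${dd}${suffix}.log`
}
```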
Limitation: in Node.js cluster mode (multi-process), multiple workers may write to the same file simultaneously. In this case, use distinct prefixes per worker or delegate writes to a dedicated process via IPC.
Dev vs Production
Behavior switches via NODE_ENV with no additional configuration.
| Aspect | development | production |
|---|---|---|
| Console format | Colorized pretty-print | Raw JSON |
| Minimum level | trace | info |
| Timestamps | ISO 8601 | Unix ms |
Buffer and Drop
All logs go through an in-memory FIFO buffer before being written. This guarantees that logger.*() calls are non-blocking.
Drop threshold
If the buffer reaches bufferMaxEntries unprocessed entries, new entries are dropped. A warning is emitted to the meta-log:
- On the 1st drop
- Then every 1000 drops
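A bounded FIFO with this drop-warning cadence might look like the following. This is a minimal sketch of the documented behavior, not the package's actual implementation — `BoundedBuffer` is an illustrative name:

```typescript
class BoundedBuffer<T> {
  private queue: T[] = []
  dropCount = 0

  constructor(
    private maxEntries: number,
    private warn: (msg: string) => void // meta-log channel in the real library
  ) {}

  // Non-blocking enqueue: returns false when the entry was dropped
  push(entry: T): boolean {
    if (this.queue.length >= this.maxEntries) {
      this.dropCount++
      // Warn on the 1st drop, then every 1000 drops
      if (this.dropCount === 1 || this.dropCount % 1000 === 0) {
        this.warn(`log buffer full — ${this.dropCount} entries dropped`)
      }
      return false
    }
    this.queue.push(entry)
    return true
  }

  shift(): T | undefined {
    return this.queue.shift()
  }

  get queueLength(): number {
    return this.queue.length
  }
}
```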
Buffer stats
const stats = logger.stats()
// → { queueLength: 42, dropCount: 0 }

Resilience
The library never throws exceptions to your application code. Logging cannot interrupt your business flow.
Meta-logging
Any internal logger error (failing transport, inaccessible file, broken rotation) is redirected to a minimal separate channel:
- Written to stderr — always, synchronously
- Written to ./logs/logger-meta.log — single attempt, no retry, no rotation
This channel has no dependency on the rest of the library — it cannot cause recursion.
Transport isolation
If one transport fails, the others continue uninterrupted. The failure is recorded in the meta-log, invisible to the caller.
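This isolation follows naturally from fanning out with Promise.allSettled (the mechanism named in the changelog). The sketch below shows the principle; `Transport` and `fanout` are illustrative names, not the package's exports:

```typescript
interface Transport {
  write(entry: object): Promise<void>
}

// Run all transports in parallel; a rejected transport never propagates
// to the caller — failures are only counted (and, in the real library,
// recorded in the meta-log).
async function fanout(transports: Transport[], entry: object): Promise<number> {
  const results = await Promise.allSettled(transports.map((t) => t.write(entry)))
  return results.filter((r) => r.status === 'rejected').length
}
```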
Graceful Shutdown
The library automatically registers on SIGTERM and SIGINT. On shutdown, it attempts to flush the buffer before exiting.
// manual flush with custom timeout
await logger.flush({ timeoutMs: 5000 })

| Option | Default | Description |
|---|---|---|
| timeoutMs | 2000 | Max delay before forced exit |
API Reference
Logger methods
logger.trace(data) // level trace
logger.debug(data) // level debug
logger.info(data) // level info
logger.warn(data) // level warn
logger.error(data) // level error
logger.fatal(data) // level fatal
logger.log(entry) // low-level method — timestamp already set
logger.flush() // flush buffer — returns Promise
logger.stats() // returns { queueLength, dropCount }

LogEntry type
interface LogEntry {
level: 'trace' | 'debug' | 'info' | 'warn' | 'error' | 'fatal'
msg: string
timestamp: number // Unix ms
domain?: string | string[]
requestId?: string
[key: string]: unknown // additional free fields
}

Exports
import { logger } from '@shaan_mex/logger' // default instance
import { httpLogger } from '@shaan_mex/logger' // Express middleware
import { createLogger, createConsoleTransport, createLevelTransport, createNamedTransport } from '@shaan_mex/logger'

Custom instance
import { createLogger, createConsoleTransport, createLevelTransport, createNamedTransport } from '@shaan_mex/logger'
const logger = createLogger({
transports: [
createConsoleTransport(),
createLevelTransport({ prefix: 'error-', level: 'error' }),
createNamedTransport({
name: 'trspHttp',
prefix: 'http-',
domains: ['http'],
levels: ['info', 'warn', 'error']
})
]
})

Known Limitations
- Cluster mode: multiple Node.js workers writing to the same log file may cause interleaved writes. Use distinct prefixes per worker or a dedicated logging process via IPC.
- Memory pressure: under heavy load, the in-memory buffer may drop entries when bufferMaxEntries is reached. Monitor logger.stats().dropCount.
- Log order: async fanout does not guarantee write order across transports. Timestamps are set before fanout to preserve emission order within each file.
Changelog
1.0.3
- httpLogger — added exclude option to skip logging on specific route prefixes
app.use(httpLogger({ exclude: ['/_pull'] }))

1.0.2
- README — added warning section, complete usage documentation, known limitations
- scripts/init.js — added npx shaan-logger-init command to copy logger.config.json to project root
- logger.config.json — now distributed with the package as a configuration template
1.0.1
- Fixed missing logger.config.json in published package
1.0.0
Initial release
- Fire-and-forget async buffer with fanout via Promise.allSettled
- Three transport types: console, level (one file per level), named (domain-based routing)
- Daily log rotation with size-based overflow (~1, ~2...)
- Express middleware with requestId propagation via AsyncLocalStorage
- Dev/prod modes — colorized output in development, raw JSON in production
- Resilient meta-logging channel (stderr + logger-meta.log)
- Graceful shutdown with logger.flush() on SIGTERM / SIGINT
- Fully configurable via logger.config.json
License
MIT
