
logixia v1.8.2

TypeScript logger with custom log levels, multi-transport (console, file, DB, analytics), NestJS module, built-in search, request tracing, and zero-dep OpenTelemetry support

logixia


The logging setup you copy-paste into every new project

# The pino route:
npm install pino pino-pretty pino-roll pino-redact pino-nestjs pino-http

# The winston route:
npm install winston winston-daily-rotate-file
# ...then wire 4 separate config objects
# ...then discover there's no built-in DB transport
# ...then discover request tracing is manual
# ...then discover both block your event loop under I/O pressure

# Or:
npm install logixia

logixia ships console + file rotation + database + request tracing + NestJS module + field redaction + log search + OpenTelemetry + plugin API + Prometheus metrics + visual TUI explorer in one package — non-blocking on every transport, zero extra installs.

import { createLogger } from 'logixia';

const logger = createLogger({
  appName: 'api',
  environment: 'production',
  transports: {
    console: { format: 'json' },
    file: { filename: 'app.log', dirname: './logs', maxSize: '50MB' },
    database: { type: 'postgresql', host: 'localhost', database: 'appdb', table: 'logs' },
  },
});

await logger.info('Server started', { port: 3000 });
// Writes to console + file + postgres simultaneously. Non-blocking. Done.

Why logixia?

console.log doesn't scale. pino is fast but leaves database persistence, NestJS integration, log search, and field redaction entirely to plugins. winston is flexible but synchronous and requires substantial boilerplate to get production-ready.

logixia takes a different approach: everything ships built-in, and nothing blocks your event loop.

  • Async by design — every log call is non-blocking, even to file and database transports
  • Built-in database transports — PostgreSQL, MySQL, MongoDB, SQLite; only the matching driver package is needed, no transport plugins to assemble
  • Cloud adapters — AWS CloudWatch (EMF), Google Cloud Logging, and Azure Monitor out of the box
  • NestJS module — plug in with LogixiaLoggerModule.forRoot(), inject anywhere in the DI tree; @LogMethod() for auto-logging method entry/exit
  • File rotation — maxSize, maxFiles, gzip archive, time-based rotation — no extra packages needed
  • Log search — query your in-memory log store without shipping to an external service
  • Field redaction — mask passwords, tokens, and PII before they touch any transport; supports dot-notation paths and regex patterns
  • Request tracing — AsyncLocalStorage-based trace propagation with no manual thread-locals; includes Kafka and WebSocket interceptors
  • Correlation ID propagation — auto-generate and forward X-Correlation-ID through fetch, axios, Kafka, and SQS across microservice boundaries
  • Browser support — tree-shakeable logixia/browser entry point with console and remote batch transports; no Node.js built-ins
  • OpenTelemetry — W3C traceparent and tracestate support, zero extra dependencies
  • Multi-transport — write to console, file, and database concurrently with one log call
  • TypeScript-first — typed log entries, typed metadata, custom-level IntelliSense throughout
  • Adaptive log level — auto-configures based on NODE_ENV and CI environment
  • Custom transports — ship to Slack, PagerDuty, S3, or anywhere else via a simple interface
  • Plugin / extension API — lifecycle hooks (onInit, onLog, onError, onShutdown); plugins can mutate or cancel log entries; register globally or per-logger
  • Prometheus metrics — turn log events into counters, histograms, and gauges with zero code; expose GET /metrics in Prometheus text format; works with any HTTP framework
  • Visual TUI explorer — logixia explore opens a full-screen terminal log browser with real-time search, level filtering, syntax-highlighted JSON detail panel, stack trace rendering, and one-key export to JSON / CSV / NDJSON
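As an illustration of the redaction idea above, here is a from-scratch sketch (a hypothetical `redact` helper, not logixia's actual API) that masks dot-notation paths before an entry would reach any transport:

```typescript
// Hypothetical sketch of dot-notation redaction — not logixia's implementation.
function redact<T>(entry: T, paths: string[], mask = '[REDACTED]'): T {
  const clone = structuredClone(entry) as any;
  for (const path of paths) {
    const keys = path.split('.');
    let node = clone;
    // Walk down to the parent object of the target field
    for (const key of keys.slice(0, -1)) node = node?.[key];
    const leaf = keys[keys.length - 1];
    if (node && leaf in node) node[leaf] = mask;
  }
  return clone;
}

const safe = redact(
  { user: { name: 'Ada', password: 'hunter2' }, token: 'tok_123' },
  ['user.password', 'token']
);
// safe.user.password === '[REDACTED]', safe.user.name untouched
```

Redacting on a clone matters: the caller's original object is never mutated, so the masked copy is what every transport sees.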

Feature comparison

| Feature                              | logixia |    pino     |          winston          | bunyan |
| ------------------------------------ | :-----: | :---------: | :-----------------------: | :----: |
| TypeScript-first                     |   yes   |   partial   |          partial          | partial |
| Async / non-blocking writes          |   yes   |     no      |            no             |   no   |
| NestJS module (built-in)             |   yes   |     no      |            no             |   no   |
| Database transports (built-in)       |   yes   |     no      |            no             |   no   |
| Cloud transports (CW, GCP, Azure)    |   yes   |     no      |            no             |   no   |
| File rotation (built-in)             |   yes   |  pino-roll  | winston-daily-rotate-file |   no   |
| Multi-transport concurrent           |   yes   |     no      |            yes            |   no   |
| Log search                           |   yes   |     no      |            no             |   no   |
| Field redaction (built-in)           |   yes   | pino-redact |            no             |   no   |
| Request tracing (AsyncLocalStorage)  |   yes   |     no      |            no             |   no   |
| Kafka + WebSocket trace interceptors |   yes   |     no      |            no             |   no   |
| Correlation ID propagation           |   yes   |     no      |            no             |   no   |
| Browser / Edge / Bun / Deno support  |   yes   |   partial   |            no             |   no   |
| OpenTelemetry / W3C headers          |   yes   |     no      |            no             |   no   |
| Graceful shutdown / flush            |   yes   |     no      |            no             |   no   |
| Custom log levels                    |   yes   |     yes     |            yes            |  yes   |
| Adaptive log level (NODE_ENV)        |   yes   |     no      |            no             |   no   |
| Plugin / extension API               |   yes   |     no      |            no             |   no   |
| Prometheus metrics extraction        |   yes   |     no      |            no             |   no   |
| Visual TUI log explorer              |   yes   |     no      |            no             |   no   |
| Actively maintained                  |   yes   |     yes     |            yes            |   no   |


Performance

logixia uses fast-json-stringify (a pre-compiled serializer) for JSON output, which is ~59% faster than JSON.stringify. The hot path — level check, redaction decision, and format — relies on caches built once at construction, not on every log call.

| Library | Simple log (ops/sec) | Structured log (ops/sec) | Error log (ops/sec) | p99 latency |
| ------- | -------------------: | -----------------------: | ------------------: | ----------: |
| pino    |            1,258,000 |                  630,000 |             390,000 |    2.5–12µs |
| logixia |              840,000 |                  696,000 |             654,000 |    4.8–10µs |
| winston |              738,000 |                  371,000 |             433,000 |      9–16µs |

logixia is 10% faster than pino on structured logging and 68% faster on error serialization. It beats winston across the board. Pino leads on simple string logs because it uses synchronous direct writes to process.stdout — a trade-off that blocks the event loop under heavy I/O and disappears as soon as you add real metadata.

To reproduce: node benchmarks/run.mjs


Installation

npm install logixia
pnpm add logixia
yarn add logixia
bun add logixia

For database transports, install the relevant driver alongside logixia:

npm install pg          # PostgreSQL
npm install mysql2      # MySQL
npm install mongodb     # MongoDB
npm install sqlite3     # SQLite

Requirements: TypeScript 5.0+, Node.js 18+


Quick start

import { createLogger } from 'logixia';

const logger = createLogger({
  appName: 'api',
  environment: 'production',
});

// ✅ Structured data — machine-readable, searchable, alertable
await logger.info('Server started', { port: 3000 });
await logger.warn('High memory usage', { used: '87%', threshold: '80%' });
await logger.error('Request failed', { orderId: 'ord_123', retryable: true });

// ✅ Pass an Error object directly — logixia serializes the full cause chain
await logger.error(new Error('Connection timeout'));

// ❌ Avoid string interpolation — you lose structured fields
// await logger.info(`Server started on port ${port}`);

No try/catch needed — logixia swallows transport errors internally so a flaky DB or disk-full condition never crashes your app.

Without a transports key, logs go to stdout/stderr. Add a transports key to write to file, database, or anywhere else — all transports run concurrently.

There is also a pre-configured default instance you can import directly:

import { logger } from 'logixia';

await logger.info('Ready');

Core concepts

Log levels

logixia ships with six built-in levels in priority order: error, warn, info, debug, trace, verbose. Logs at or above the configured minimum level are emitted; the rest are dropped.

await logger.error('Something went wrong');
await logger.warn('Approaching rate limit', { remaining: 5 });
await logger.info('Order created', { orderId: 'ord_123' });
await logger.debug('Cache miss', { key: 'user:456' });
await logger.trace('Entering function', { fn: 'processPayment' });
await logger.verbose('Full request payload', { body: req.body });

The error method also accepts an Error object directly — the full cause chain and standard Node.js fields (code, statusCode, errno, syscall) are serialized automatically:

await logger.error(new Error('Connection refused'));

// With extra metadata alongside:
await logger.error(new Error('Payment declined'), { orderId: 'ord_123', retryable: true });

// AggregateError is handled too:
const err = new AggregateError([new Error('A'), new Error('B')], 'Multiple failures');
await logger.error(err);

You can also define custom levels for your domain:

const logger = createLogger({
  appName: 'payments',
  levelOptions: {
    level: 'payment', // show everything down to 'payment'
    levels: {
      error: 0,
      warn: 1,
      info: 2,
      debug: 3,
      // domain-specific levels — lower number = higher priority
      payment: 1, // same priority as warn
      audit: 2, // same priority as info
      kafka: 2,
    },
    colors: {
      error: 'red',
      warn: 'yellow',
      info: 'blue',
      debug: 'green',
      // IntelliSense now suggests 'payment' | 'audit' | 'kafka' as valid keys:
      payment: 'brightYellow',
      audit: 'magenta',
      // kafka omitted → auto-palette assigns a visible color (cyan in this case)
    },
  },
});

// Custom level methods are available immediately, fully typed
await logger.payment('Charge captured', { orderId: 'ord_123', amount: 99.99 });
await logger.audit('Refund approved', { orderId: 'ord_123', by: 'admin' });
await logger.kafka('Message published', { topic: 'order.created' });

// logLevel() is the typed escape hatch — equivalent to the proxy methods above
await logger.logLevel('payment', 'Subscription renewed', { userId: 'usr_456' });

Auto-palette colors — if you omit a color for a custom level, logixia assigns one automatically from the palette magenta → cyan → yellow → green → blue (cycling). No more "uncolored" custom levels blending into terminal noise.

NestJS service IntelliSense — LogixiaLoggerService.create<T>(config) carries the level names into the return type, so the IDE autocompletes service.kafka(...), service.payment(...) and the rest with the correct signatures:

const svc = LogixiaLoggerService.create({
  levelOptions: {
    levels: { error: 0, warn: 1, info: 2, kafka: 3, payment: 4 },
    colors: { error: 'red', kafka: 'magenta' }, // ← IDE suggests 'kafka' | 'payment' here
  },
});

svc.kafka('Producer connected'); // ✅ fully typed, no 'as any'
svc.payment('Charge captured', { txnId }); // ✅

Structured logging

Every log call accepts a metadata object as its second argument — serialized as structured fields alongside the message, never concatenated into a string:

await logger.info('User authenticated', {
  userId: 'usr_123',
  method: 'oauth',
  provider: 'google',
  durationMs: 42,
  ip: '203.0.113.4',
});

Development output (colorized text):

[2025-03-14T10:22:01.412Z] [INFO] [api] [abc123def456] User authenticated {"userId":"usr_123","method":"oauth",...}

Production output (JSON, via format: { json: true }):

{
  "timestamp": "2025-03-14T10:22:01.412Z",
  "level": "info",
  "appName": "api",
  "environment": "production",
  "message": "User authenticated",
  "traceId": "abc123def456",
  "payload": { "userId": "usr_123", "method": "oauth", "provider": "google", "durationMs": 42 }
}

Child loggers

Create child loggers that inherit their parent's configuration and transport setup, but carry their own context string and optional extra fields:

const reqLogger = logger.child('OrderService', {
  requestId: req.id,
  userId: req.user.id,
});

await reqLogger.info('Processing order'); // includes requestId + userId in every entry
await reqLogger.info('Payment confirmed'); // same context, no repetition

Adaptive log level

logixia automatically selects a sensible default level when no explicit level is configured:

| Condition              | Default level |
| ---------------------- | :-----------: |
| NODE_ENV=development   |     debug     |
| NODE_ENV=test          |     warn      |
| NODE_ENV=production    |     info      |
| CI=true                |     info      |
| None of the above      |     info      |

You can override this at any time via the LOGIXIA_LEVEL environment variable:

LOGIXIA_LEVEL=debug node server.js

Or change it at runtime:

logger.setLevel('debug');
console.log(logger.getLevel()); // 'debug'
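Putting the rules together, the precedence can be sketched as a standalone function (a hypothetical helper that mirrors the table above, not logixia's source):

```typescript
// Hypothetical sketch of the adaptive-level precedence — not logixia's source.
type Level = 'error' | 'warn' | 'info' | 'debug' | 'trace' | 'verbose';

function resolveDefaultLevel(env: Record<string, string | undefined>): Level {
  // 1. An explicit LOGIXIA_LEVEL override always wins
  if (env.LOGIXIA_LEVEL) return env.LOGIXIA_LEVEL as Level;
  // 2. Otherwise the default is derived from NODE_ENV
  if (env.NODE_ENV === 'development') return 'debug';
  if (env.NODE_ENV === 'test') return 'warn';
  // 3. production, CI=true, and everything else default to info
  return 'info';
}

console.log(resolveDefaultLevel({ NODE_ENV: 'test' })); // 'warn'
```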

Per-namespace log levels

Child loggers use their context string as a namespace. You can pin different log levels to different namespaces in config, or override them with environment variables at runtime — without redeploying:

const logger = createLogger({
  appName: 'api',
  environment: 'production',
  namespaceLevels: {
    db: 'debug', // child('db') and child('db.queries') → DEBUG
    'db.*': 'debug', // wildcard: all db.* children
    'http.*': 'warn', // only warn+ from HTTP layer
    payment: 'trace', // full trace for payment namespace
  },
});

const dbLogger = logger.child('db'); // resolves to DEBUG
const httpLogger = logger.child('http.req'); // resolves to WARN

Environment variable overrides use the pattern LOGIXIA_LEVEL_<NS> where <NS> is the first segment of the namespace, uppercased:

# Override just the db namespace to trace, without changing anything else:
LOGIXIA_LEVEL_DB=trace node server.js

# Override the payment namespace:
LOGIXIA_LEVEL_PAYMENT=info node server.js
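The resolution order for a child namespace can be sketched as follows (a hypothetical helper illustrating the precedence described above, not logixia's implementation):

```typescript
// Hypothetical namespace → level resolution: env override, then exact match,
// then wildcard, then fallback. Not logixia's actual source.
function resolveNamespaceLevel(
  ns: string,
  namespaceLevels: Record<string, string>,
  env: Record<string, string | undefined> = {},
  fallback = 'info'
): string {
  // 1. LOGIXIA_LEVEL_<NS> (first segment, uppercased) wins at runtime
  const fromEnv = env[`LOGIXIA_LEVEL_${ns.split('.')[0].toUpperCase()}`];
  if (fromEnv) return fromEnv;
  // 2. An exact namespace match beats any wildcard
  if (namespaceLevels[ns]) return namespaceLevels[ns];
  // 3. 'db.*'-style wildcards match any deeper child namespace
  for (const [pattern, level] of Object.entries(namespaceLevels)) {
    if (pattern.endsWith('.*') && ns.startsWith(pattern.slice(0, -1))) return level;
  }
  return fallback;
}

console.log(resolveNamespaceLevel('http.req', { 'http.*': 'warn' })); // 'warn'
```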

Transports

All transports are configured under the transports key and run concurrently on every log call.

Console

const logger = createLogger({
  appName: 'api',
  environment: 'development',
  format: {
    colorize: true, // ANSI colour output
    timestamp: true, // include ISO timestamp
    json: false, // text format; set to true for JSON
  },
  transports: {
    console: {
      level: 'debug', // minimum level for this transport only
    },
  },
});

File with rotation

No extra packages. Rotation by size or time interval, automatic gzip compression, configurable retention — all built-in:

const logger = createLogger({
  appName: 'api',
  environment: 'production',
  transports: {
    file: {
      filename: 'app.log',
      dirname: './logs',
      maxSize: '50MB', // rotate when file reaches this size
      maxFiles: 14, // keep 14 rotated files
      zippedArchive: true, // compress old files with gzip
      format: 'json', // 'json' | 'text' | 'csv'
      batchSize: 100, // buffer up to 100 entries before writing
      flushInterval: 2000, // flush buffer every 2 seconds
    },
  },
});

You can also use time-based rotation via the rotation sub-key:

transports: {
  file: {
    filename: 'app.log',
    dirname: './logs',
    rotation: {
      interval: '1d',      // rotate daily: '1h' | '6h' | '12h' | '1d' | '1w'
      maxFiles: 30,
      compress: true,
    },
  },
},

Multiple file transports are supported — pass an array:

transports: {
  file: [
    { filename: 'app.log',   dirname: './logs', format: 'json' },
    { filename: 'error.log', dirname: './logs', format: 'json', level: 'error' },
  ],
},

Database

Write structured logs directly to your database — batched, non-blocking, with configurable flush intervals:

// PostgreSQL
const logger = createLogger({
  appName: 'api',
  environment: 'production',
  transports: {
    database: {
      type: 'postgresql',
      host: 'localhost',
      port: 5432,
      database: 'appdb',
      table: 'logs',
      username: 'dbuser',
      password: process.env.DB_PASSWORD,
      batchSize: 100, // write in batches of 100
      flushInterval: 5000, // flush every 5 seconds
    },
  },
});

// MongoDB
const logger = createLogger({
  appName: 'api',
  environment: 'production',
  transports: {
    database: {
      type: 'mongodb',
      connectionString: process.env.MONGO_URI,
      database: 'appdb',
      collection: 'logs',
    },
  },
});

// MySQL
const logger = createLogger({
  appName: 'api',
  environment: 'production',
  transports: {
    database: {
      type: 'mysql',
      host: 'localhost',
      database: 'appdb',
      table: 'logs',
      username: 'root',
      password: process.env.MYSQL_PASSWORD,
    },
  },
});

// SQLite — great for local development and small apps
const logger = createLogger({
  appName: 'api',
  environment: 'development',
  transports: {
    database: {
      type: 'sqlite',
      database: './logs/app.sqlite',
      table: 'logs',
    },
  },
});

Multiple database targets are supported — pass an array:

transports: {
  database: [
    { type: 'postgresql', host: 'primary-db', database: 'appdb', table: 'logs' },
    { type: 'mongodb', connectionString: process.env.MONGO_URI, database: 'appdb', collection: 'logs' },
  ],
},
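The batch-and-flush behaviour behind `batchSize` / `flushInterval` can be illustrated with a generic sketch (the pattern, not logixia's internals):

```typescript
// Generic batch-and-flush buffer — illustrates the batchSize / flushInterval
// pattern; not logixia's actual transport code.
class BatchBuffer<T> {
  private buf: T[] = [];
  private timer: ReturnType<typeof setInterval>;

  constructor(
    private flushFn: (batch: T[]) => void,
    private batchSize = 100,
    flushInterval = 5000
  ) {
    // Time-based flush: drain whatever is buffered every flushInterval ms
    this.timer = setInterval(() => this.flush(), flushInterval);
    (this.timer as any).unref?.(); // don't keep the process alive just to flush
  }

  push(entry: T): void {
    this.buf.push(entry);
    // Size-based flush: drain as soon as the batch is full
    if (this.buf.length >= this.batchSize) this.flush();
  }

  flush(): void {
    if (this.buf.length === 0) return;
    const batch = this.buf;
    this.buf = [];
    this.flushFn(batch); // one bulk INSERT / append instead of N small writes
  }

  close(): void {
    clearInterval(this.timer);
    this.flush(); // drain remaining entries on graceful shutdown
  }
}
```

The log call itself is only an array push, which is why a slow database never stalls the event loop; a real transport layers retries and error swallowing on top of this.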

Analytics

logixia includes built-in support for Datadog, Mixpanel, Segment, and Google Analytics. All analytics transports are batched and non-blocking.

Datadog — sends logs, metrics, and traces to your Datadog account:

import { DataDogTransport } from 'logixia';

const logger = createLogger({
  appName: 'api',
  environment: 'production',
  transports: {
    analytics: {
      datadog: {
        apiKey: process.env.DD_API_KEY!,
        site: 'datadoghq.com', // or 'datadoghq.eu', 'us3.datadoghq.com'
        service: 'api',
        env: 'production',
        enableLogs: true,
        enableMetrics: true,
        enableTraces: true,
      },
    },
  },
});

Mixpanel:

transports: {
  analytics: {
    mixpanel: {
      token: process.env.MIXPANEL_TOKEN!,
      enableSuperProperties: true,
      superProperties: { platform: 'web', version: '2.0' },
    },
  },
},

Segment:

transports: {
  analytics: {
    segment: {
      writeKey: process.env.SEGMENT_WRITE_KEY!,
      enableBatching: true,
      flushAt: 20,
      flushInterval: 10_000,
    },
  },
},

Google Analytics:

transports: {
  analytics: {
    googleAnalytics: {
      measurementId: process.env.GA_MEASUREMENT_ID!,
      apiSecret: process.env.GA_API_SECRET!,
      enableEcommerce: false,
    },
  },
},

Multiple transports simultaneously

All configured transports receive every log entry concurrently — no sequential bottleneck:

const logger = createLogger({
  appName: 'api',
  environment: 'production',
  transports: {
    console: { format: 'json' },
    file: { filename: 'app.log', dirname: './logs', maxSize: '100MB' },
    database: {
      type: 'postgresql',
      host: 'localhost',
      database: 'appdb',
      table: 'logs',
    },
    analytics: {
      datadog: {
        apiKey: process.env.DD_API_KEY!,
        service: 'api',
      },
    },
  },
});

// One call → console + file + postgres + datadog. All concurrent. All non-blocking.
await logger.info('Order placed', { orderId: 'ord_789' });

Custom transport

Implement ITransport to send logs anywhere — Slack, PagerDuty, S3, an internal queue:

import type { ITransport, TransportLogEntry } from 'logixia';

class SlackTransport implements ITransport {
  name = 'slack';

  async write(entry: TransportLogEntry): Promise<void> {
    if (entry.level !== 'error' && entry.level !== 'fatal') return;
    await fetch(process.env.SLACK_WEBHOOK_URL!, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        text: `*[${entry.level.toUpperCase()}]* ${entry.message}`,
        attachments: [{ text: JSON.stringify(entry.data, null, 2) }],
      }),
    });
  }

  async close(): Promise<void> {
    // optional cleanup
  }
}

const logger = createLogger({
  appName: 'api',
  environment: 'production',
  transports: {
    custom: [new SlackTransport()],
  },
});

The write method may return void or Promise<void> — both are supported. The TransportLogEntry shape is:

interface TransportLogEntry {
  timestamp: Date;
  level: string;
  message: string;
  data?: Record<string, unknown>;
  context?: string;
  traceId?: string;
  appName?: string;
  environment?: string;
}

Cloud adapters

logixia ships three production-ready cloud transports. All are batched, non-blocking, and implement the same flush() / close() lifecycle as the built-in transports. Import and pass directly to the custom transport array.

AWS CloudWatch

Batches log events and sends them to a CloudWatch Logs stream using PutLogEvents. Supports EMF (Embedded Metric Format) so numeric fields in your log entries are automatically promoted to CloudWatch Metrics — no separate SDK needed.

import { CloudWatchTransport } from 'logixia';

const logger = createLogger({
  appName: 'api',
  environment: 'production',
  transports: {
    custom: [
      new CloudWatchTransport({
        region: 'us-east-1', // or set AWS_REGION env var
        logGroupName: '/app/api',
        logStreamName: 'api-server-1', // defaults to hostname + PID
        // Credentials fall back to AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY env vars,
        // or the EC2/ECS/Lambda metadata service — no hard-coding needed.
        batchSize: 100, // default: 100
        flushIntervalMs: 5000, // default: 5000
        emf: true, // emit numeric fields as CloudWatch Metrics
        level: 'warn', // forward only warn+ to CloudWatch
      }),
    ],
  },
});

With emf: true, any numeric field in your log data is published as a CloudWatch Metric under the Logixia namespace:

await logger.info('Request completed', { duration: 142, statusCode: 200 });
// → CloudWatch Metric: Logixia/duration, Logixia/statusCode
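On the wire, an EMF entry embeds metric directives alongside the log fields. The overall `_aws` structure below follows AWS's Embedded Metric Format specification; the exact fields and dimensions logixia emits are assumptions:

```typescript
// Shape of an EMF-formatted log entry (structure per AWS's EMF spec;
// which fields logixia promotes, and how, is an assumption here).
const emfEntry = {
  _aws: {
    Timestamp: 1710412921412, // epoch millis of the log event
    CloudWatchMetrics: [
      {
        Namespace: 'Logixia',
        Dimensions: [['service']],
        Metrics: [{ Name: 'duration' }, { Name: 'statusCode' }],
      },
    ],
  },
  // The referenced values live at the top level of the same JSON object
  service: 'api',
  message: 'Request completed',
  duration: 142,
  statusCode: 200,
};
```

CloudWatch Logs parses the `_aws` block on ingestion and publishes `duration` and `statusCode` as metrics — no PutMetricData calls needed.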

Google Cloud Logging

Maps logixia levels to GCP severity values (DEBUG, INFO, WARNING, ERROR, CRITICAL), auto-injects logging.googleapis.com/trace for Cloud Trace correlation, and supports Application Default Credentials (ADC) — no service account JSON required when running on GKE / Cloud Run / App Engine.

import { GCPTransport } from 'logixia';

const logger = createLogger({
  appName: 'api',
  environment: 'production',
  transports: {
    custom: [
      new GCPTransport({
        projectId: 'my-gcp-project', // or set GOOGLE_CLOUD_PROJECT env var
        logName: 'projects/my-gcp-project/logs/api',
        resource: { type: 'k8s_container', labels: { cluster_name: 'prod' } },
        // credentials: { client_email: '...', private_key: '...' }
        // Omit to use ADC (recommended on GCP-hosted infrastructure)
        batchSize: 200,
        flushIntervalMs: 5000,
      }),
    ],
  },
});

Azure Monitor

Sends logs to Azure Monitor via the Logs Ingestion API (Data Collection Rule). Uses OAuth2 client-credentials to obtain a bearer token automatically.

import { AzureMonitorTransport } from 'logixia';

const logger = createLogger({
  appName: 'api',
  environment: 'production',
  transports: {
    custom: [
      new AzureMonitorTransport({
        endpoint: 'https://<dce-name>.ingest.monitor.azure.com',
        ruleId: 'dcr-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
        streamName: 'Custom-LogixiaLogs_CL',
        tenantId: process.env.AZURE_TENANT_ID,
        clientId: process.env.AZURE_CLIENT_ID,
        clientSecret: process.env.AZURE_CLIENT_SECRET,
        batchSize: 200,
        flushIntervalMs: 5000,
      }),
    ],
  },
});

All three cloud transports expose flush() and close() so they participate in logixia's graceful shutdown flow automatically.


Request tracing

logixia uses AsyncLocalStorage to propagate trace IDs through your entire async call graph automatically — no passing of context objects, no manual threading.
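The mechanism is Node's built-in AsyncLocalStorage. A from-scratch sketch (independent of logixia) shows why no context object ever needs to be passed:

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

// From-scratch sketch of AsyncLocalStorage-based trace propagation — how
// utilities like runWithTraceId / getCurrentTraceId can be built; not
// logixia's actual source.
const store = new AsyncLocalStorage<{ traceId: string }>();

const getTraceId = (): string | undefined => store.getStore()?.traceId;

function withTraceId<T>(traceId: string, fn: () => T): T {
  // Every sync call, await continuation, and Promise started inside fn
  // sees the same store entry — no parameter threading required.
  return store.run({ traceId }, fn);
}

withTraceId('abc-123', async () => {
  await Promise.resolve();
  console.log(getTraceId()); // 'abc-123' — the context survives the await
});
console.log(getTraceId()); // undefined outside the context
```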

Core trace utilities

import {
  generateTraceId, // create a UUID v4 trace ID
  getCurrentTraceId, // read trace ID from current async context
  runWithTraceId, // run a callback inside a new trace context
  setTraceId, // set trace ID in the CURRENT context (use sparingly)
  extractTraceId, // extract a trace ID from a request-like object
} from 'logixia';

// Generate a new trace ID
const traceId = generateTraceId();
// → 'a3f1c2b4-...'

// Run code inside a trace context — every logger.* call within the callback
// (including across await boundaries and Promise.all) will carry this trace ID
runWithTraceId(traceId, async () => {
  await logger.info('Processing job'); // traceId attached automatically
  await processItems(); // all nested async calls carry it too
});

// Read the trace ID currently in context (returns undefined if none is set)
const current = getCurrentTraceId();

// Extract a trace ID from an incoming request object
const incomingTraceId = extractTraceId(req, {
  header: ['traceparent', 'x-trace-id', 'x-request-id'],
  query: ['traceId'],
});

Express / Fastify middleware

import { traceMiddleware } from 'logixia';

// Zero-config — reads from traceparent / x-trace-id / x-request-id / x-correlation-id
// and generates a UUID v4 if none is present. Sets X-Trace-Id on the response.
app.use(traceMiddleware());

// With custom config:
app.use(
  traceMiddleware({
    enabled: true,
    generator: () => `req_${crypto.randomUUID()}`,
    extractor: {
      header: ['x-trace-id', 'traceparent'],
      query: ['traceId'],
    },
  })
);

// Service layer — no parameters needed, trace ID propagates automatically
class OrderService {
  async createOrder(data: OrderData) {
    await logger.info('Creating order', { items: data.items.length });
    // ^ trace ID is automatically included
    await this.processPayment(data);
  }

  async processPayment(data: OrderData) {
    await logger.info('Processing payment', { amount: data.total });
    // ^ same trace ID, propagated automatically through await
  }
}

The default headers checked for an incoming trace ID (in priority order) are: traceparent, x-trace-id, x-request-id, x-correlation-id, trace-id.
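For the traceparent entry specifically, the W3C format is `version-traceid-parentid-flags`. A minimal parser sketch (logixia's actual extraction may normalise differently):

```typescript
// Minimal W3C traceparent parser — illustrative only; logixia's extractTraceId
// may handle more cases.
// Example header: '00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01'
function traceIdFromTraceparent(header: string): string | undefined {
  const match = /^[\da-f]{2}-([\da-f]{32})-[\da-f]{16}-[\da-f]{2}$/i.exec(header.trim());
  return match?.[1];
}

console.log(traceIdFromTraceparent('00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01'));
// → '0af7651916cd43dd8448eb211c80319c'
```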

NestJS trace middleware

The TraceMiddleware class integrates directly with NestJS's middleware system. LogixiaLoggerModule.forRoot() applies it automatically across all routes — no manual wiring needed:

// Applied automatically by LogixiaLoggerModule.forRoot().
// For manual use in a custom module:

import { MiddlewareConsumer, Module, NestModule } from '@nestjs/common';
import { TraceMiddleware } from 'logixia';

@Module({})
export class AppModule implements NestModule {
  configure(consumer: MiddlewareConsumer) {
    consumer.apply(TraceMiddleware).forRoutes('*');
  }
}

Kafka trace interceptor

Propagates trace IDs through Kafka message handlers. Reads traceId / trace_id / x-trace-id from the message body or headers and runs the handler inside that trace context:

import { KafkaTraceInterceptor } from 'logixia';
import { UseInterceptors, Controller } from '@nestjs/common';
import { EventPattern, Payload } from '@nestjs/microservices';

@UseInterceptors(new KafkaTraceInterceptor())
@Controller()
export class OrdersConsumer {
  @EventPattern('order.created')
  async handle(@Payload() data: OrderCreatedEvent) {
    // getCurrentTraceId() works here — extracted from the Kafka message body/headers
    await logger.info('Processing order event', { orderId: data.orderId });
  }
}

Note: Pass new KafkaTraceInterceptor() (an instance), not the class reference. The interceptor's constructor takes an optional config and is not a NestJS DI provider.

WebSocket trace interceptor

Propagates trace IDs through WebSocket event handlers. Reads traceId from the message body, event payload, or handshake query:

import { WebSocketTraceInterceptor, getCurrentTraceId } from 'logixia';
import { UseInterceptors } from '@nestjs/common';
import { MessageBody, SubscribeMessage, WebSocketGateway } from '@nestjs/websockets';

@UseInterceptors(new WebSocketTraceInterceptor())
@WebSocketGateway({ namespace: '/events', cors: { origin: '*' } })
export class EventsGateway {
  @SubscribeMessage('ping')
  async handlePing(@MessageBody() data: { traceId?: string }) {
    // trace ID propagated from the WS message body / handshake headers
    await logger.info('WS ping received', { traceId: getCurrentTraceId() });
  }
}

Note: Pass new WebSocketTraceInterceptor() (an instance), not the class reference. UseInterceptors must be imported from @nestjs/common, not @nestjs/websockets.


---

## NestJS integration

Drop-in module with zero boilerplate. Registers `TraceMiddleware` for all routes, provides `LogixiaLoggerService`, `KafkaTraceInterceptor`, and `WebSocketTraceInterceptor` via the global DI container.

> **Full working example** — see [`examples/nestjs-app/`](./examples/nestjs-app/) for a complete NestJS app with Docker Compose (Postgres, MongoDB, Kafka, Kafdrop) that exercises every feature: `LogixiaLoggerModule`, `TraceMiddleware`, `LogixiaExceptionFilter`, `HttpLoggingInterceptor`, `WebSocketTraceInterceptor`, `KafkaTraceInterceptor`, `@LogMethod`, `child()` loggers, `timeAsync`, real Kafka producer + consumer.
>
> ```bash
> cd examples/nestjs-app
> cp .env.example .env
> docker compose up -d
> curl http://localhost:3000/health
> ```

```typescript
// app.module.ts
import { Module } from '@nestjs/common';
import { LogixiaLoggerModule } from 'logixia';

@Module({
  imports: [
    LogixiaLoggerModule.forRoot({
      appName: 'nestjs-api',
      environment: process.env.NODE_ENV ?? 'development',
      traceId: true,
      transports: {
        console: {},
        file: { filename: 'app.log', dirname: './logs', maxSize: '50MB' },
      },
    }),
  ],
})
export class AppModule {}
```

Async configuration (for credentials from a config service):

LogixiaLoggerModule.forRootAsync({
  imports: [ConfigModule],
  useFactory: async (config: ConfigService) => ({
    appName: 'nestjs-api',
    environment: config.get('NODE_ENV'),
    traceId: true,
    transports: {
      database: {
        type: 'postgresql',
        host: config.get('DB_HOST'),
        database: config.get('DB_NAME'),
        password: config.get('DB_PASSWORD'),
        table: 'logs',
      },
    },
  }),
  inject: [ConfigService],
});

Inject the logger in any service or controller. Since LogixiaLoggerModule is globally scoped, no per-module import is needed:

// orders.service.ts
import { Injectable } from '@nestjs/common';
import { LogixiaLoggerService } from 'logixia';

@Injectable()
export class OrdersService {
  constructor(private readonly logger: LogixiaLoggerService) {}

  async createOrder(dto: CreateOrderDto) {
    await this.logger.info('Creating order', { userId: dto.userId });
    // ...
  }
}

Feature-scoped child logger — create a logger pre-scoped to a specific context string:

```typescript
// orders.module.ts
import { Module } from '@nestjs/common';
import { LogixiaLoggerModule } from 'logixia';
import { OrdersService } from './orders.service';

@Module({
  imports: [LogixiaLoggerModule.forFeature('OrdersModule')],
  providers: [OrdersService],
})
export class OrdersModule {}
```

```typescript
// orders.service.ts — inject the feature-scoped token
import { Inject, Injectable } from '@nestjs/common';
import { LOGIXIA_LOGGER_PREFIX, LogixiaLoggerService } from 'logixia';

@Injectable()
export class OrdersService {
  constructor(
    @Inject(`${LOGIXIA_LOGGER_PREFIX}ORDERSMODULE`)
    private readonly logger: LogixiaLoggerService
  ) {}
}
```

`LogixiaLoggerService` exposes the full `LogixiaLogger` API: `info`, `warn`, `error`, `debug`, `trace`, `verbose`, `logLevel`, `time`, `timeEnd`, `timeAsync`, `setLevel`, `getLevel`, `setContext`, `child`, `close`, `getCurrentTraceId`, and more.

Custom level proxy methods — every key you add to `levelOptions.levels` automatically becomes a method on the service instance. Use `LogixiaLoggerService.create<T>(config)` (instead of `new`) to get full IntelliSense for those methods:

```typescript
const logger = LogixiaLoggerService.create({
  levelOptions: {
    levels: { error: 0, warn: 1, info: 2, kafka: 2, mysql: 2, payment: 1 },
    colors: {
      error: 'red',
      warn: 'yellow',
      info: 'blue',
      kafka: 'magenta',
      payment: 'brightYellow',
    },
  },
});

// Methods are created at construction time — no casting required
await logger.kafka('Consumer rebalanced', { groupId: 'app-group' });
await logger.mysql('Slow query detected', { query, ms: 1240 });
await logger.payment('Charge captured', { txnId, amount: 99.99 });
```

TraceId — with `traceId: true`, every log line carries a correlation ID automatically. Use `LogixiaContext.run()` to scope a trace to a block (the `TraceMiddleware` does this per request automatically):

```typescript
import { LogixiaContext } from 'logixia';

await LogixiaContext.run({ traceId: req.headers['x-request-id'] }, async () => {
  await logger.info('Request received'); // traceId: "abc-123" in every line
  await logger.kafka('Event published'); // same traceId
});

// Read the active traceId at any point
const traceId = logger.getCurrentTraceId();
```

### `@LogMethod` decorator

Automatically logs method entry, exit, duration, and errors — no manual try/catch or logger.debug calls needed. Works on both sync and async methods. Reads the logger property from the class instance (the NestJS convention).

```typescript
import { Injectable } from '@nestjs/common';
import { LogixiaLoggerService, LogMethod } from 'logixia';

@Injectable()
export class PaymentService {
  constructor(private readonly logger: LogixiaLoggerService) {}

  // Logs entry with args, exit with duration, and errors with full stack trace
  @LogMethod({ level: 'info', logArgs: true, logResult: false })
  async processPayment(orderId: string, amount: number): Promise<void> {
    // your business logic — no try/catch needed for logging
  }

  // Minimal — just tracks duration at debug level
  @LogMethod()
  async fetchExchangeRate(currency: string): Promise<number> {
    return 1.0;
  }
}
```

`@LogMethod` options:

| Option      | Type                             | Default   | Description                                              |
| ----------- | -------------------------------- | --------- | -------------------------------------------------------- |
| `level`     | `'debug' \| 'info' \| 'verbose'` | `'debug'` | Log level for entry / exit messages                      |
| `logArgs`   | `boolean`                        | `true`    | Include method arguments in the entry log                |
| `logResult` | `boolean`                        | `false`   | Include the return value in the exit log                 |
| `logErrors` | `boolean`                        | `true`    | Log errors with stack trace when the method throws      |
| `label`     | `string`                         | auto      | Override the auto-detected `ClassName.methodName` label  |

### `LogixiaExceptionFilter`

A global NestJS exception filter that automatically logs unhandled exceptions — HTTP exceptions as `warn`, everything else as `error` — and returns a consistent JSON error shape. It logs through the injected `LogixiaLoggerService` when one is provided, and still works without it.

```typescript
// main.ts
import { NestFactory } from '@nestjs/core';
import { LogixiaExceptionFilter } from 'logixia';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.useGlobalFilters(new LogixiaExceptionFilter());
  await app.listen(3000);
}
bootstrap();
```

With LogixiaLoggerModule set up, inject the service directly so it logs to your configured transports:

```typescript
import { APP_FILTER } from '@nestjs/core';
import { LogixiaExceptionFilter, LogixiaLoggerService } from 'logixia';

// In AppModule providers:
{
  provide: APP_FILTER,
  useFactory: (logger: LogixiaLoggerService) => new LogixiaExceptionFilter(logger),
  inject: [LogixiaLoggerService],
}
```

The filter returns a consistent structured shape on every error:

```json
{
  "success": false,
  "error": {
    "type": "validation_error",
    "code": "ORD-001",
    "message": "Order amount must be greater than zero.",
    "param": "amount"
  },
  "meta": {
    "request_id": "req_8579ef8ff7a64c3cbc752fd1cb9852df",
    "timestamp": "2026-03-24T16:30:19.217Z",
    "path": "/orders/boom",
    "status": 400
  },
  "debug": {
    "stack": "LogixiaException: Order amount must be greater than zero.\n    at ..."
  }
}
```

And every log line carries `method`, `url`, `status`, `request_id` as structured fields — not a plain string:

```text
WARN  [NestApplication] (traceId) [ORD-001] Order amount must be greater than zero.
      { method: "GET", url: "/orders/boom", status: 400, request_id: "req_..." }

ERROR [NestApplication] (traceId) DB connection timed out after 5000ms
      { method: "GET", url: "/orders/crash", status: 500, request_id: "req_...", error: { ... } }
```

### Correlation ID propagation

```typescript
import { ... } from 'logixia/correlation'
```

In a microservice architecture each incoming request should carry a correlationId that flows through every downstream service call, message queue event, and log line — so you can reconstruct the full request trace in any log aggregator.

logixia ships this as a dedicated logixia/correlation sub-package. It uses the same AsyncLocalStorage store as the main logger, so every logger.* call inside a correlated context automatically includes the ID.
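To picture the mechanism, here is a minimal standalone sketch of how `AsyncLocalStorage` (the Node built-in that logixia builds on) carries an ID through a call chain — illustrative only, not logixia internals:

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

// Illustrative only: AsyncLocalStorage carries a correlation ID through
// a call chain without threading it through every function signature.
const store = new AsyncLocalStorage<{ correlationId: string }>();

function currentId(): string | undefined {
  // Any function called (even deeply) inside run() sees the same store
  return store.getStore()?.correlationId;
}

function demo(): string | undefined {
  let seen: string | undefined;
  store.run({ correlationId: 'req-42' }, () => {
    seen = currentId(); // picked up from the ambient context
  });
  return seen;
}
```

logixia's own `LogixiaContext.run()` and `withCorrelationId()` follow this same pattern.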

#### Correlation Express middleware

```typescript
import { correlationMiddleware } from 'logixia/correlation';

// Zero-config — reads X-Correlation-ID / X-Request-ID from the incoming request.
// Generates a UUID v4 if no header is present.
// Sets X-Correlation-ID on the response.
app.use(correlationMiddleware());

// Custom config:
app.use(
  correlationMiddleware({
    header: 'X-Correlation-ID', // header to read / write. Default: 'X-Correlation-ID'
    generateId: () => crypto.randomUUID(),
    trustIncoming: true, // honour the header from the client. Default: true
    setResponseHeader: true, // echo the ID back in the response. Default: true
  })
);
```

#### Correlation Fastify hook

```typescript
import Fastify from 'fastify';
import { correlationFastifyHook } from 'logixia/correlation';

const app = Fastify();
app.addHook('onRequest', correlationFastifyHook());
```

#### Outbound fetch / axios

Every outbound HTTP call made inside a correlated context automatically carries the X-Correlation-ID header.

fetch:

```typescript
import { correlationFetch } from 'logixia/correlation';

// Drop-in replacement for global fetch — forwards the active correlation ID automatically
const res = await correlationFetch('https://inventory-service/api/items', {
  method: 'GET',
  headers: { Authorization: `Bearer ${token}` },
});
```

axios:

```typescript
import axios from 'axios';
import { createCorrelationAxiosInterceptor } from 'logixia/correlation';

const client = axios.create({ baseURL: 'https://inventory-service' });
createCorrelationAxiosInterceptor(client); // attaches X-Correlation-ID to every request
```

#### Kafka / SQS helpers

```typescript
import {
  buildKafkaCorrelationHeaders, // → { 'X-Correlation-ID': '...', 'X-Request-ID': '...' }
  extractMessageCorrelationId, // read correlationId from a Kafka/SQS message body
  childFromRequest, // create a child logger pre-seeded with request context
  withCorrelationId, // run a callback inside an explicit correlation context
  getCurrentCorrelationId, // read the active correlation ID (or undefined)
  generateCorrelationId, // generate a new UUID v4 correlation ID
} from 'logixia/correlation';

// Kafka producer — attach correlation headers to every message
const producer = kafka.producer();
await producer.send({
  topic: 'orders',
  messages: [{ value: JSON.stringify(order), headers: buildKafkaCorrelationHeaders() }],
});

// Kafka consumer — restore context for the handler
const correlationId = extractMessageCorrelationId(message.value);
withCorrelationId(correlationId, async () => {
  await orderService.process(message.value);
  // all logger.* calls inside carry correlationId automatically
});

// Create a child logger pre-loaded with request identifiers
const reqLogger = childFromRequest(logger, req);
await reqLogger.info('Order created', { orderId: 'ord_123' });
// → log includes correlationId, requestId, originService
```

Standalone context (no HTTP framework):

```typescript
import { withCorrelationId, generateCorrelationId } from 'logixia/correlation';

const id = generateCorrelationId(); // UUID v4

withCorrelationId(id, async () => {
  await processJob(job);
  // all nested logger calls carry id
});
```

### Browser support

```typescript
import { ... } from 'logixia/browser'
```

The `logixia/browser` entry point is a fully tree-shakeable, Node.js-free logger for browsers, Cloudflare Workers, Deno, Bun, and any other non-Node runtime. It has zero imports from `node:fs`, `node:async_hooks`, `node:worker_threads`, or any other Node.js built-in.

```typescript
import { createBrowserLogger } from 'logixia/browser';

const logger = createBrowserLogger({
  appName: 'my-app',
  minLevel: 'info',
  pretty: true, // colorized dev-friendly output via console.group
});

logger.info('App loaded', { route: '/home' });
logger.warn('Feature flag missing', { flag: 'new-checkout' });
logger.error('API call failed', { url: '/api/orders', status: 500 });
```

Browser console transport — uses the native console API and maps levels to their correct methods (console.error, console.warn, console.info, console.debug):

```typescript
import { BrowserLogger, BrowserConsoleTransport } from 'logixia/browser';

const logger = new BrowserLogger({
  appName: 'my-app',
  transports: [new BrowserConsoleTransport({ pretty: true })],
});
```

Remote batch transport — buffers log entries and ships them to a remote endpoint in batches. No `XMLHttpRequest` or Node.js `http` involved — it uses the global `fetch` API:

```typescript
import { BrowserLogger, BrowserRemoteTransport } from 'logixia/browser';

const logger = new BrowserLogger({
  appName: 'my-app',
  transports: [
    new BrowserRemoteTransport({
      endpoint: 'https://logs.my-company.com/ingest',
      batchSize: 20, // flush when 20 entries accumulate
      flushIntervalMs: 5000, // or every 5 seconds, whichever comes first
      headers: { Authorization: `Bearer ${TOKEN}` },
      minLevel: 'warn', // only ship warn+ to the remote endpoint
    }),
  ],
});
```

The following utilities from the main package are also re-exported from logixia/browser (safe for non-Node runtimes):

```typescript
import {
  createTypedLogger, // typed schema-enforced logger factory
  defineLogSchema, // define a compile-time log schema
  createOtelBridge, // OpenTelemetry bridge
  isOtelActive,
  withOtelSpan,
} from 'logixia/browser';
```

### Log redaction

Redact sensitive fields before they reach any transport — passwords, tokens, PII, credit card numbers. Redaction is applied once before dispatch; no transport can accidentally log sensitive data. The original object is never mutated.

Path-based redaction supports dot-notation, `*` (single-segment wildcard), and `**` (any-depth wildcard):

```typescript
const logger = createLogger({
  appName: 'api',
  environment: 'production',
  redact: {
    paths: [
      'password',
      'token',
      'accessToken',
      'refreshToken',
      '*.secret', // any field named 'secret' at one level deep
      'req.headers.*', // all headers
      'user.creditCard', // nested path
      '**.password', // 'password' at any depth
    ],
    censor: '[REDACTED]', // default if omitted
  },
});

await logger.info('User login', {
  username: 'alice',
  password: 'hunter2', // → '[REDACTED]'
  token: 'eyJhbGc...', // → '[REDACTED]'
  user: {
    creditCard: '4111...', // → '[REDACTED]'
    email: '[email protected]', // untouched
  },
});
```

Regex-based redaction — mask patterns in string values across all fields:

```typescript
const logger = createLogger({
  appName: 'api',
  environment: 'production',
  redact: {
    patterns: [
      /Bearer\s+\S+/gi, // Authorization header values
      /sk-[a-z0-9]{32,}/gi, // OpenAI / Stripe secret keys
      /\b\d{4}[\s-]?\d{4}[\s-]?\d{4}[\s-]?\d{4}\b/g, // credit card numbers
    ],
  },
});
```

Both `paths` and `patterns` can be combined in the same config.
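For illustration, here is a sketch that combines both mechanisms in one config, reusing only the options shown above:

```typescript
import { createLogger } from 'logixia';

// Structural redaction for known field paths, plus regex masking for
// secrets that may appear inside arbitrary string values.
const logger = createLogger({
  appName: 'api',
  environment: 'production',
  redact: {
    paths: ['password', '**.token', 'req.headers.*'],
    patterns: [/Bearer\s+\S+/gi],
    censor: '[REDACTED]',
  },
});
```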


### Timer API

Measure the duration of any operation — synchronous or async. The result is logged automatically when the timer ends:

```typescript
// Manual start/stop
logger.time('db-query');
const rows = await db.query('SELECT * FROM orders');
await logger.timeEnd('db-query');
// → logs: Timer 'db-query' finished { duration: '42ms', startTime: '...', endTime: '...' }

// Wrap an async function — timer starts before and stops after, even if the function throws
const result = await logger.timeAsync('process-batch', async () => {
  return await processBatch(items);
});
```

`timeEnd` returns the duration in milliseconds so you can use it in your own logic:

```typescript
const ms = await logger.timeEnd('db-query');
if (ms && ms > 500) {
  await logger.warn('Slow query detected', { durationMs: ms });
}
```

### Field management

Control which fields appear in log output at runtime, without changing config:

```typescript
// Disable fields you don't need in a specific context
logger.disableField('traceId');
logger.disableField('appName');

// Re-enable them
logger.enableField('traceId');

// Check whether a field is currently active
const isOn = logger.isFieldEnabled('timestamp'); // true

// Inspect the current state of all fields
const state = logger.getFieldState();
// → { timestamp: true, level: true, appName: false, traceId: false, ... }

// Reset all fields back to the config defaults
logger.resetFieldState();
```

Available field names: `timestamp`, `level`, `appName`, `service`, `traceId`, `message`, `payload`, `timeTaken`, `context`, `requestId`, `userId`, `sessionId`, `environment`.


### Transport level control

By default, every transport receives every log entry that passes the global level filter. You can narrow a specific transport to only receive a subset of levels:

```typescript
// Only send errors to the database transport — no noise from info/debug
logger.setTransportLevels('database-0', ['error', 'warn']);

// Check what levels a transport is currently configured for
const levels = logger.getTransportLevels('database-0'); // ['error', 'warn']

// List all registered transport IDs
const ids = logger.getAvailableTransports(); // ['console', 'file-0', 'database-0']

// Remove all level overrides — all transports receive everything again
logger.clearTransportLevelPreferences();
```

### Log search

Query your in-memory log history without shipping to Elasticsearch, Datadog, or any external service. Useful in development and lightweight production setups:

```typescript
import { SearchManager } from 'logixia';

const search = new SearchManager({ maxEntries: 10_000 });

// Index a batch of entries (from a file, database query, or any source)
await search.index(logEntries);

// Search by text query, level, and time range
const results = await search.search({
  query: 'payment failed',
  level: 'error',
  from: new Date('2025-01-01'),
  to: new Date(),
  limit: 50,
});
// → sorted by relevance, full metadata included
```

### OpenTelemetry

W3C `traceparent` and `tracestate` headers are extracted from incoming requests and attached to every log entry automatically — enabling correlation between distributed traces and log events in Jaeger, Zipkin, Honeycomb, Datadog, and similar tools:

```typescript
const logger = createLogger({
  appName: 'checkout-service',
  environment: 'production',
  traceId: {
    enabled: true,
    extractor: {
      header: ['traceparent', 'tracestate', 'x-trace-id'],
    },
  },
});

// The traceparent header from the incoming request is stored as the trace ID
// and included in every log entry automatically.
app.post('/checkout', async (req, res) => {
  await logger.info('Checkout initiated', { cartId: req.body.cartId });
  // → log carries the W3C traceparent from the request
});
```
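For reference, a W3C `traceparent` header has the form `version-traceid-parentid-flags` (e.g. `00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01`). A minimal standalone parser sketch — illustrative, not a logixia export:

```typescript
// Parse a W3C traceparent header: version-traceid-parentid-flags.
// Illustrative helper, not part of logixia's API.
interface TraceParent {
  version: string;
  traceId: string;   // 32 lowercase hex chars
  parentId: string;  // 16 lowercase hex chars
  sampled: boolean;  // low bit of the trace-flags byte
}

function parseTraceparent(header: string): TraceParent | null {
  const m = /^([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$/.exec(
    header.trim().toLowerCase()
  );
  if (!m) return null;
  const [, version, traceId, parentId, flags] = m;
  // An all-zero trace-id or parent-id is invalid per the spec
  if (/^0+$/.test(traceId) || /^0+$/.test(parentId)) return null;
  return { version, traceId, parentId, sampled: (parseInt(flags, 16) & 0x01) === 1 };
}
```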

### Graceful shutdown

Ensures all buffered log entries are flushed to every transport before the process exits. Critical for database and analytics transports that batch writes.

The simplest approach is to set `gracefulShutdown: true` in config — logixia registers `SIGTERM` and `SIGINT` handlers automatically:

```typescript
const logger = createLogger({
  appName: 'api',
  environment: 'production',
  gracefulShutdown: true,
  transports: { database: { type: 'postgresql' /* ... */ } },
});
// SIGTERM / SIGINT will flush all transports before exit. No extra code needed.
```

For more control, pass a config object:

```typescript
const logger = createLogger({
  appName: 'api',
  environment: 'production',
  gracefulShutdown: {
    enabled: true,
    timeout: 10_000, // wait up to 10 s; force-exits after
    signals: ['SIGTERM', 'SIGINT', 'SIGHUP'],
  },
  transports: {
    /* ... */
  },
});
```

You can also call `flushOnExit` directly with lifecycle hooks:

```typescript
import { flushOnExit } from 'logixia';

flushOnExit({
  timeout: 5000,
  beforeFlush: async () => {
    // stop accepting new requests
  },
  afterFlush: async () => {
    // any cleanup after all logs are written
  },
});
```

Or flush and close manually — useful in Kubernetes SIGTERM handlers:

```typescript
process.on('SIGTERM', async () => {
  await logger.flush(); // wait for all in-flight writes
  await logger.close(); // close connections, deregister shutdown handlers
  process.exit(0);
});
```

For health monitoring:

```typescript
const { healthy, details } = await logger.healthCheck();
// → { healthy: true, details: { 'database-0': { ready: true, metrics: { logsWritten: 1042, ... } } } }
```

### Plugin / extension API

logixia's plugin system lets you hook into every stage of the log lifecycle without touching core logger code. Plugins are plain objects that implement one or more lifecycle methods.

```typescript
import type { LogixiaPlugin, LogEntry } from 'logixia';
```

#### Writing a plugin

```typescript
const myPlugin: LogixiaPlugin = {
  name: 'my-plugin',

  // Called once when the logger is constructed (global plugins)
  // or when .use() is called (per-logger plugins).
  onInit() {
    console.log('Plugin initialised');
  },

  // Called for every log entry before it is formatted and written.
  // Return the (optionally mutated) entry to let it through,
  // or return null to silently drop it.
  onLog(entry: LogEntry): LogEntry | null {
    // Example: enrich every entry with a deployment tag
    return { ...entry, data: { ...entry.data, deployId: process.env.DEPLOY_ID } };
  },

  // Called whenever logger.error() receives an Error object.
  onError(error: Error, entry?: LogEntry) {
    // Example: forward to a Sentry-compatible sink
    externalErrorTracker.capture(error, { extra: entry?.data });
  },

  // Called during logger.close() — await-able for graceful teardown.
  async onShutdown() {
    await flushBufferedEvents();
  },
};
```

#### Registering plugins globally

Plugins registered on `globalPluginRegistry` are automatically seeded into every logger created after the call.

```typescript
import { usePlugin } from 'logixia';

usePlugin(myPlugin);

// All loggers created from this point forward will run myPlugin.
const logger = createLogger({
  /* ... */
});
```

#### Per-logger plugins

Register or remove plugins on a specific logger instance at any time:

```typescript
const logger = createLogger({ context: 'PaymentService' });

// Register
logger.use(myPlugin);

// Deregister by name
logger.unuse('my-plugin');
```

`use()` is chainable:

```typescript
createLogger({ context: 'api' }).use(metricsPlugin).use(auditPlugin).use(samplerPlugin);
```

#### Cancelling a log entry

Returning `null` from `onLog` drops the entry before it reaches any transport — useful for sampling, deduplication, or environment-based suppression:

```typescript
const devOnlyPlugin: LogixiaPlugin = {
  name: 'dev-only',
  onLog(entry) {
    // Suppress debug/trace entries in production
    if (process.env.NODE_ENV === 'production' && entry.level <= 20) {
      return null; // drop it
    }
    return entry;
  },
};
```

Multiple `onLog` hooks run in registration order. If any hook returns `null`, the pipeline stops and no further hooks are called.
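That short-circuit behaviour can be pictured as a simple loop over the registered hooks — a sketch of the documented semantics, not the library's source:

```typescript
// Illustrative: run onLog hooks in registration order; a null return
// drops the entry and skips every remaining hook.
type Entry = { level: number; message: string; data?: Record<string, unknown> };
type OnLog = (entry: Entry) => Entry | null;

function runOnLogHooks(entry: Entry, hooks: OnLog[]): Entry | null {
  let current: Entry = entry;
  for (const hook of hooks) {
    const next = hook(current);
    if (next === null) return null; // pipeline stops; entry is dropped
    current = next;
  }
  return current;
}

// Example hooks: one enriches, one samples out low-level entries.
const tag: OnLog = (e) => ({ ...e, data: { ...e.data, tagged: true } });
const dropDebug: OnLog = (e) => (e.level <= 20 ? null : e);
```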


### Metrics → Prometheus

`MetricsPlugin` converts log events into Prometheus-compatible counters, histograms, and gauges — no separate instrumentation library required. The `/metrics` endpoint serves the standard Prometheus text exposition format.

```typescript
import { createMetricsPlugin } from 'logixia';
```

#### Quick start (counters)

```typescript
import { createLogger, createMetricsPlugin } from 'logixia';
import http from 'node:http';

const metrics = createMetricsPlugin({
  http_requests_total: {
    type: 'counter',
    help: 'Total HTTP requests processed',
    // Which log fields become Prometheus labels
    labels: ['method', 'status', 'route'],
  },
  auth_failures_total: {
    type: 'counter',
    help: 'Authentication failures',
    labels: ['reason'],
  },
});

const logger = createLogger({ context: 'api' }).use(metrics);

// Every log entry automatically increments the matching counter
await logger.info('request handled', { method: 'GET', status: 200, route: '/users' });
await logger.warn('auth failed', { reason: 'bad-token' });

// Expose /metrics
http.createServer(metrics.httpHandler()).listen(9100);
```

The `/metrics` response looks like:

```text
# HELP logixia_http_requests_total Total HTTP requests processed
# TYPE logixia_http_requests_total counter
logixia_http_requests_total{method="GET",status="200",route="/users"} 1

# HELP logixia_auth_failures_total Authentication failures
# TYPE logixia_auth_failures_total counter
logixia_auth_failures_total{reason="bad-token"} 1
```

#### Histograms

Histograms record the distribution of a numeric field extracted from log entries. Typical use: request latency in milliseconds.

```typescript
const metrics = createMetricsPlugin({
  http_request_duration_ms: {
    type: 'histogram',
    help: 'HTTP request latency in milliseconds',
    // The log field whose numeric value is recorded as the observation
    valueField: 'durationMs',
    labels: ['route', 'method'],
    // Custom bucket boundaries (defaults: [1, 5, 10, 25, 50, 100, 250, 500, 1000, 2500, 5000, 10000])
    buckets: [10, 50, 100, 200, 500, 1000, 5000],
  },
});

await logger.info('request complete', { route: '/api/orders', method: 'POST', durationMs: 142 });
```

Prometheus output includes `_bucket`, `_sum`, and `_count` lines, compatible with `histogram_quantile()`:

```text
# HELP logixia_http_request_duration_ms HTTP request latency in milliseconds
# TYPE logixia_http_request_duration_ms histogram
logixia_http_request_duration_ms_bucket{le="10",route="/api/orders",method="POST"} 0
logixia_http_request_duration_ms_bucket{le="50",route="/api/orders",method="POST"} 0
logixia_http_request_duration_ms_bucket{le="100",route="/api/orders",method="POST"} 0
logixia_http_request_duration_ms_bucket{le="200",route="/api/orders",method="POST"} 1
...
logixia_http_request_duration_ms_bucket{le="+Inf",route="/api/orders",method="POST"} 1
logixia_http_request_duration_ms_sum{route="/api/orders",method="POST"} 142
logixia_http_request_duration_ms_count{route="/api/orders",method="POST"} 1
```

#### Gauges

Gauges track the current value of a numeric field — useful for queue depths, active connections, cache sizes:

```typescript
const metrics = createMetricsPlugin({
  queue_depth: {
    type: 'gauge',
    help: 'Current number of items in the processing queue',
    valueField: 'depth',
    labels: ['queue'],
  },
});

await logger.info('queue snapshot', { queue: 'email', depth: 47 });
```

#### Exposing the `/metrics` endpoint

Plain Node.js `http` module:

```typescript
import http from 'node:http';

http.createServer(metrics.httpHandler()).listen(9100);
// GET http://localhost:9100/ → Prometheus text format
```

Express:

```typescript
import express from 'express';

const app = express();
app.get('/metrics', metrics.expressHandler());
app.listen(3000);
```

Manual render (any framework):

```typescript
// Returns the full Prometheus text string
const text = metrics.render();
res.setHeader('Content-Type', 'text/plain; version=0.0.4; charset=utf-8');
res.end(text);
```

Reset all counters (e.g., between tests):

```typescript
metrics.reset();
```

#### Metric configuration reference

```typescript
interface CounterConfig {
  type: 'counter';
  help?: string; // # HELP line in Prometheus output
  labels?: string[]; // Log entry fields to use as label keys
}

interface HistogramConfig {
  type: 'histogram';
  help?: string;
  valueField: string; // The log field holding the numeric observation
  labels?: string[];
  buckets?: number[]; // Upper-inclusive bucket boundaries (ms or any unit)
}

interface GaugeConfig {
  type: 'gauge';
  help?: string;
  valueField: string; // The log field whose value sets the gauge
  labels?: string[];
}

// Map of Prometheus metric name → config
type MetricsMap = Record<string, CounterConfig | HistogramConfig | GaugeConfig>;
```

All metric names are automatically prefixed with `logixia_` in the output. If a histogram or gauge entry is missing the `valueField`, the entry is still counted but no numeric observation is recorded.
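As a reference for the output shape, here is an illustrative sketch of rendering a single counter sample in the Prometheus text format with the `logixia_` prefix (logixia's renderer does this for you):

```typescript
// Illustrative: render one counter sample in Prometheus text exposition
// format, applying the logixia_ name prefix described above.
function renderCounter(
  name: string,
  help: string,
  labels: Record<string, string>,
  value: number
): string {
  const full = `logixia_${name}`;
  const labelStr = Object.entries(labels)
    .map(([k, v]) => `${k}="${v.replace(/"/g, '\\"')}"`) // escape quotes in label values
    .join(',');
  return [
    `# HELP ${full} ${help}`,
    `# TYPE ${full} counter`,
    `${full}{${labelStr}} ${value}`,
  ].join('\n');
}
```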


### Logger instance API

Complete reference for every method available on a logger instance returned by `createLogger` or `LogixiaLoggerService`:

```typescript
// Log methods
await logger.error(message: string | Error, data?: Record<string, unknown>): Promise<void>
await logger.warn(message: string, data?: Record<string, unknown>): Promise<void>
await logger.info(message: string, data?: Record<string, unknown>): Promise<void>
await logger.debug(message: string, data?: Record<string, unknown>): Promise<void>
await logger.trace(message: string, data?: Record<string, unknown>): Promise<void>
await logger.verbose(message: string, data?: Record<string, unknown>): Promise<void>
await logger.logLevel(level: string, message: string, data?): Promise<void>  // dynamic dispatch

// Timer API
logger.time(label: string): void
await logger.timeEnd(label: string): Promise<number | undefined>          // returns ms
await logger.timeAsync<T>(label: string, fn: () => Promise<T>): Promise<T>

// Level management
logger.setLevel(level: string): void
logger.getLevel(): string

// Context management
logger.setContext(context: string): void
logger.getContext(): string | undefined
logger.child(context: string, data?: Record<string, unknown>): ILogger

// Field management
logger.enableField(fieldName: string): void
logger.disableField(fieldName: string): void
logger.isFieldEnabled(fieldName: string): boolean
logger.getFieldState(): Record<string, boolean>
logger.resetFieldState(): void

// Transport management
logger.getAvailableTransports(): string[]
logger.setTransportLevels(transportId: string, levels: string[]): void
logger.getTransportLevels(transportId: string): string[] | undefined
logger.clearTransportLevelPreferences(): void

// Plugin API
logger.use(plugin: LogixiaPlugin): this         // register
```