pino-graylog-transport
A Pino transport module that sends log messages to Graylog using the GELF (Graylog Extended Log Format) protocol over TCP, TLS, or UDP.
Features
- 🚀 Full support for the Pino transports API
- 📦 GELF (Graylog Extended Log Format) message formatting
- 🔒 TLS, TCP, and UDP protocol support (TLS secure by default)
- 🔧 Configurable facility, host, and port
- 📊 Automatic log level conversion (Pino → Syslog)
- 🏷️ Custom field support with GELF underscore prefix
- ⚡ High-performance async message sending with buffering and reconnection logic
Installation
npm install @alex-michaud/pino-graylog-transport pino
Publishing & Installation
This package is published under the scope @alex-michaud. To install it, run:
npm install @alex-michaud/pino-graylog-transport
Note: the --access public flag applies when publishing a scoped package (npm publish --access public), not when installing it; public scoped packages install without any extra flags.
Quick Start
const pino = require('pino');
const transport = require('@alex-michaud/pino-graylog-transport');
const transportInstance = transport({
host: 'graylog.example.com',
port: 12201,
// TLS is the default for secure transmission over networks
// Use protocol: 'tcp' only for local development (localhost)
protocol: 'tls',
facility: 'my-app',
staticMeta: {
environment: 'production',
service: 'api',
version: '1.0.0'
}
});
const logger = pino(transportInstance);
logger.info('Hello Graylog!');
Configuration Options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| host | string | 'localhost' | Graylog server hostname |
| port | number | 12201 | Graylog GELF input port (standard GELF TCP port) |
| protocol | string | 'tls' | Protocol to use ('tcp', 'tls', or 'udp'). Default is 'tls' for security - uses encrypted connection to prevent exposure of sensitive data. Use 'tcp' only for local development. Use 'udp' for high-throughput scenarios where delivery guarantees are not required. |
| facility | string | hostname | Application/service identifier sent with every message. Used to categorize logs by application in Graylog (e.g., 'api-gateway', 'auth-service'). Sent as _facility additional field per GELF spec. |
| hostname | string | os.hostname() | Host field in GELF messages (the machine/server name) |
| staticMeta | object | {} | Static fields included in every log message (e.g., auth tokens, environment, datacenter). These are sent as GELF custom fields with underscore prefix. |
| maxQueueSize | number | 1000 | Max messages to queue when disconnected |
| onError | function | console.error | Custom error handler |
| onReady | function | undefined | Callback when connection is established |
| autoConnect | boolean | true | If false, don't establish connection on initialization. Only applies to TCP/TLS; UDP always initializes immediately since it's connectionless. |
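The options above can be combined. Here is a hedged sketch of a fuller configuration object using only settings from the table (the host, facility, and hostname values are placeholders, not real endpoints):

```javascript
// Illustrative options object; values are placeholders.
const options = {
  host: 'graylog.example.com',
  port: 12201,
  protocol: 'tls',
  facility: 'checkout-service', // which application sent the log
  hostname: 'web-server-01',    // which machine sent the log
  maxQueueSize: 5000,           // buffer up to 5000 messages while disconnected
  autoConnect: true,            // open the TCP/TLS connection immediately
  onError: (err) => console.error('graylog transport error:', err),
  onReady: () => console.log('connected to Graylog'),
};

console.log(options.protocol); // tls
```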
⚠️ Security Note
The default protocol is tls to ensure logs are transmitted securely over encrypted connections. This is important when:
- Sending logs over untrusted networks (internet, shared corporate networks)
- Including authentication tokens in staticMeta (e.g., 'X-OVH-TOKEN')
- Logging sensitive data (PII, API keys, internal URLs)
Only use protocol: 'tcp' for local development when Graylog is running on localhost.
Using with Authentication Tokens
Some Graylog services require authentication tokens to be sent with every log message. Use staticMeta to include these tokens and any other metadata that should be sent with all log messages:
const transport = require('@alex-michaud/pino-graylog-transport');
// Example: OVH Logs Data Platform
const stream = transport({
host: 'bhs1.logs.ovh.com',
port: 12202,
protocol: 'tls',
staticMeta: {
'X-OVH-TOKEN': 'your-ovh-token-here'
}
});
// Example: Generic cloud provider with token
const stream2 = transport({
host: 'graylog.example.com',
port: 12201,
protocol: 'tls',
staticMeta: {
token: 'your-auth-token',
environment: 'production',
datacenter: 'us-east-1'
}
});
All fields in staticMeta will be included in every GELF message with an underscore prefix (e.g., _X-OVH-TOKEN, _token, _environment).
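The underscore-prefixing behavior can be sketched as a small helper (this is an illustration of the rule, not the package's actual internals):

```javascript
// Map staticMeta keys to GELF additional fields.
// GELF additional fields must start with an underscore.
function toGelfAdditionalFields(staticMeta) {
  const fields = {};
  for (const [key, value] of Object.entries(staticMeta)) {
    fields[`_${key}`] = value;
  }
  return fields;
}

console.log(toGelfAdditionalFields({ 'X-OVH-TOKEN': 'abc', environment: 'production' }));
// { '_X-OVH-TOKEN': 'abc', _environment: 'production' }
```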
Understanding Configuration Fields
facility vs hostname vs staticMeta
These three configuration options serve different purposes:
| Field | Purpose | Example | GELF Field | When to Use |
|-------|---------|---------|------------|-------------|
| facility | Application/service identifier | 'api-gateway', 'auth-service' | _facility | Identify which application/microservice sent the log |
| hostname | Machine/server identifier | 'web-server-01', 'us-east-1a' | host | Identify which machine/container/pod sent the log |
| staticMeta | Context metadata | { token: 'abc', env: 'prod' } | _token, _env | Add authentication tokens or contextual info |
Example: Microservices Architecture
// API Gateway service running on server 1
transport({
facility: 'api-gateway', // What service?
hostname: 'web-server-01', // Which machine?
staticMeta: {
environment: 'production', // Extra context
region: 'us-east-1',
version: '2.1.0'
}
});
// Auth Service running on server 2
transport({
facility: 'auth-service', // What service?
hostname: 'web-server-02', // Which machine?
staticMeta: {
environment: 'production',
region: 'us-east-1',
version: '1.5.3'
}
});
In Graylog, you can then:
- Filter by _facility:api-gateway to see all API gateway logs
- Filter by host:web-server-01 to see all logs from that server
- Filter by _environment:production to see production logs across all services
Local Development with Docker
This repository includes a Docker Compose configuration to run Graylog locally for testing.
Start Graylog
npm run docker:up
# or
docker compose up -d
Wait about 30 seconds for Graylog to fully start, then run the setup script to create the GELF inputs:
npm run docker:setup
Then access the web interface at:
- URL: http://localhost:9005
- Username: admin
- Password: admin
Stop Graylog
npm run docker:down
# or
docker compose down
Usage Examples
Basic Logging
const pino = require('pino');
const transport = require('@alex-michaud/pino-graylog-transport');
// For local development (localhost)
const logger = pino(transport({
host: 'localhost',
port: 12201,
protocol: 'tcp' // Safe to use TCP for localhost
}));
logger.info('Application started');
logger.warn('Warning message');
logger.error('Error message');
Logging with Custom Fields
logger.info({
userId: 123,
action: 'login',
ip: '192.168.1.1'
}, 'User logged in');
// In Graylog, these will appear as: _userId, _action, _ip
Error Logging with Stack Traces
try {
throw new Error('Something went wrong!');
} catch (err) {
logger.error({ err }, 'An error occurred');
// Stack trace will be sent as full_message in GELF
}
TCP Protocol
const logger = pino(transport({
host: 'localhost',
port: 12201,
protocol: 'tcp' // Use TCP instead of TLS
}));
UDP Protocol
const logger = pino(transport({
host: 'localhost',
port: 12201,
protocol: 'udp' // Use UDP for high-throughput, fire-and-forget logging
}));
Note: UDP is connectionless and does not guarantee message delivery. Use it when:
- You need maximum throughput
- Occasional message loss is acceptable
- You're logging to a local Graylog instance
- Network reliability is high
UDP Limitations:
- Messages are limited to 8KB (8192 bytes) per GELF UDP specification
- Messages exceeding this size are rejected (not sent)
- For large messages, use TCP or TLS protocols instead
- No delivery guarantees (fire-and-forget)
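The 8KB cap applies to the encoded payload, so a pre-send size check might look like this (a sketch of the rule, not the transport's actual code path):

```javascript
// GELF over UDP caps a single datagram at 8192 bytes in this transport.
const MAX_UDP_GELF_BYTES = 8192;

function fitsInUdpDatagram(gelfMessage) {
  const payload = JSON.stringify(gelfMessage);
  // Buffer.byteLength measures the encoded size in bytes, not characters.
  return Buffer.byteLength(payload, 'utf8') <= MAX_UDP_GELF_BYTES;
}

console.log(fitsInUdpDatagram({ version: '1.1', short_message: 'ok' }));            // true
console.log(fitsInUdpDatagram({ version: '1.1', short_message: 'x'.repeat(9000) })); // false
```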
Bun Runtime: When running under Bun, the UDP transport automatically uses Bun's native Bun.udpSocket() API for better performance. Falls back to Node's dgram module if unavailable.
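Choosing between Bun.udpSocket() and Node's dgram requires a runtime check; a common approach looks like the following (an assumption about the detection strategy, not necessarily the package's exact check):

```javascript
// Bun exposes a global `Bun` object; Node.js does not.
const isBun = typeof Bun !== 'undefined';
console.log('Running under Bun?', isBun);
```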
Run the Example
# Start Graylog first
npm run docker:up
# Run the example (after installing dependencies)
npm install
node examples/basic.js
# View logs at http://localhost:9005
Testing
Install Dependencies
npm install
Unit Tests
Run the library functionality tests (no external dependencies required):
npm test
# or
npm run test:unit
Integration Tests
Integration tests require a running Graylog instance:
# Start Graylog first
npm run docker:up
npm run docker:setup
# Run integration tests
npm run test:integration
Run All Tests
Run both unit and integration tests:
npm run test:all
Load Tests with k6
Load tests use k6 to simulate high-volume logging scenarios.
Install k6
# macOS
brew install k6
# Linux
sudo gpg -k
sudo gpg --no-default-keyring --keyring /usr/share/keyrings/k6-archive-keyring.gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys C5AD17C747E3415A3642D57D77C6C491D6AC1D69
echo "deb [signed-by=/usr/share/keyrings/k6-archive-keyring.gpg] https://dl.k6.io/deb stable main" | sudo tee /etc/apt/sources.list.d/k6.list
sudo apt-get update
sudo apt-get install k6
# Windows
choco install k6
Run Load Tests
# Start Graylog
npm run docker:up
# Setup Graylog inputs (wait for Graylog to be ready first)
npm run docker:setup
# Start the load test server
npm run start:load-server
# In another terminal, run the k6 load test
npm run test:load
# Or run a quick smoke test (20 seconds)
npm run test:smoke
The load test will:
- Ramp up from 0 to 50 virtual users
- Send thousands of log messages
- Measure throughput and latency
- Verify 95% success rate
Project Structure
pino-graylog-transport/
├── lib/ # Source code
│ ├── index.ts # Main transport entry point
│ ├── gelf-formatter.ts # GELF message formatter
│ └── graylog-transport.ts # TCP/TLS transport
├── test/ # Tests
│ ├── unit/ # Unit tests
│ │ ├── gelf-formatter.test.ts
│ │ ├── graylog-transport.test.ts
│ │ └── integration.test.ts
│ ├── load/ # Load tests
│ │ ├── load-test.ts # k6 load test script
│ │ ├── smoke-test.ts # k6 smoke test script
│ │ └── server.ts # Test server for load testing
│ └── benchmark/ # Performance benchmarks
│ ├── benchmark.ts # Microbenchmark (formatting)
│ ├── comparison-server.ts # Server for pino vs winston test
│ └── comparison-load-test.ts # k6 comparison load test
├── examples/ # Usage examples
│ └── basic.js
├── docker-compose.yml # Graylog local setup
└── package.json
Log Level Mapping
Pino log levels are automatically converted to Syslog severity levels for Graylog:
| Pino Level | Numeric | Syslog Level | Numeric |
|------------|---------|---------------|---------|
| fatal | 60 | Critical | 2 |
| error | 50 | Error | 3 |
| warn | 40 | Warning | 4 |
| info | 30 | Informational | 6 |
| debug | 20 | Debug | 7 |
| trace | 10 | Debug | 7 |
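The mapping above can be expressed as a small lookup (an illustrative sketch of the table, not the package's internal implementation):

```javascript
// Pino numeric levels → Syslog severity codes, per the table above.
const PINO_TO_SYSLOG = {
  60: 2, // fatal  → Critical
  50: 3, // error  → Error
  40: 4, // warn   → Warning
  30: 6, // info   → Informational
  20: 7, // debug  → Debug
  10: 7, // trace  → Debug
};

function toSyslogLevel(pinoLevel) {
  // Falling back to Informational for unknown levels is an assumption here.
  return PINO_TO_SYSLOG[pinoLevel] ?? 6;
}

console.log(toSyslogLevel(50)); // 3
```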
GELF Message Format
The transport converts Pino log objects to GELF format:
- short_message: The log message
- full_message: Stack trace (if present)
- level: Syslog severity level
- timestamp: Unix timestamp
- host: Hostname
- facility: Application/service name
- _*: Custom fields (all Pino log object properties)
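Putting those fields together, a GELF 1.1 payload for a call like logger.info({ userId: 123 }, 'User logged in') might look as follows (field values are illustrative, not captured output from the transport):

```javascript
// Illustrative GELF message; values are examples only.
const gelfMessage = {
  version: '1.1',
  host: 'web-server-01',          // hostname option
  short_message: 'User logged in',
  timestamp: 1735689600.123,      // Unix time in seconds, optional ms fraction
  level: 6,                       // Syslog Informational (Pino info = 30)
  _facility: 'my-app',            // facility option, sent as an additional field
  _userId: 123,                   // Pino log object property, _-prefixed
};

console.log(JSON.stringify(gelfMessage));
```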
Performance
The GELF formatting logic is optimized for speed. Benchmarks were run using mitata to measure the overhead of message transformation (excluding network I/O).
Benchmark Results
| Benchmark | Time | Description |
|-----------|------|-------------|
| JSON.stringify (Raw) | 615 ns | Baseline - just serialization, no transformation |
| pino-graylog-transport | 1.29 µs | Our GELF formatter |
| Manual GELF Construction | 1.88 µs | Simulated naive implementation |
Key Takeaways
- ✅ 31% faster than a naive manual GELF construction approach
- ✅ ~775,000 messages/second theoretical formatting throughput (single-threaded)
- ✅ Negligible overhead: The ~0.67 µs formatting overhead is 1000-100,000x smaller than typical network latency
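The throughput and overhead figures above follow directly from the benchmark times; a quick sanity check of the arithmetic:

```javascript
// 1.29 µs per formatted message → theoretical single-threaded throughput.
const formatTimeSeconds = 1.29e-6;
const throughput = 1 / formatTimeSeconds;
console.log(Math.round(throughput)); // 775194 messages/second

// Formatter time minus the raw JSON.stringify baseline.
const overheadMicros = 1.29 - 0.615; // ≈ 0.675 µs
console.log(overheadMicros);
```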
Run Benchmarks
# Run formatting microbenchmark (no network)
npm run benchmark
Comparison Load Test (pino vs winston)
This test compares the real-world performance of @alex-michaud/pino-graylog-transport against winston + winston-log2gelf:
# Start Graylog
npm run docker:up
npm run docker:setup
# Start the comparison server (in one terminal)
npm run start:comparison-server
# Run the comparison load test (in another terminal)
npm run benchmark:load
The test runs three scenarios in parallel:
- Baseline: No logging (measures pure HTTP overhead)
- Pino: Using @alex-michaud/pino-graylog-transport
- Winston: Using winston + winston-log2gelf
Compare the *_duration metrics to see the logging overhead for each library.
Latest Benchmark Results (January 2026)
Tests performed on a local development machine using k6 (10s duration, 50 VUs).
Node.js Runtime
| Library | Requests/sec | Relative Performance |
|---------|--------------|----------------------|
| Baseline (No logs) | ~2,502 | 100% |
| Pino Graylog | ~1,355 | 54% |
| Winston (log2gelf) | ~1,183 | 47% |
Bun Runtime 🚀
| Library | Requests/sec | Relative Performance |
|---------|--------------|----------------------|
| Baseline (No logs) | ~2,472 | 100% |
| Pino Graylog | ~2,005 | 81% |
| Winston (log2gelf) | ~1,209 | 49% |
Performance Tip: Using the Bun runtime with @alex-michaud/pino-graylog-transport yields a substantial performance boost, maintaining over 80% of the baseline throughput (compared to ~54% on Node.js) while outperforming Winston by a significant margin.
License
MIT — see the LICENSE file for details.
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
Requirements
This package requires Node.js >= 22 (declared in package.json engines). The CI and release workflows prefer the latest Node LTS. If your local Node version is older, upgrade Node (for example, using nvm):
# Install nvm (if not present)
# https://github.com/nvm-sh/nvm#installing-and-updating
# Use latest LTS
nvm install --lts
nvm use --lts