winston-s3-transport
v2.1.2
Winston S3 Transport
Logs generated through Winston can be transferred to an S3 bucket using
winston-s3-transport.
Installation
The easiest way to install winston-s3-transport is with npm.
npm install winston-s3-transport

Alternately, download the source.

git clone https://github.com/stegano/winston-s3-transport.git

Example
[!] The bucket path is created when the log is first created.
// Example - `src/utils/logger.ts`
import winston from "winston";
import { S3StreamTransport } from "winston-s3-transport";
import { v4 as uuidv4 } from "uuid";
import { format } from "date-fns";
type Log = {
  message: {
    userId: string;
  };
};

const s3Transport = new S3StreamTransport<Log>({
  s3ClientConfig: {
    region: "ap-northeast-2",
  },
  s3TransportConfig: {
    bucket: "my-bucket",
    generateGroup: (log: Log) => {
      // Group logs by the `userId` value and store them in memory.
      // If the `userId` value does not exist, use the `anonymous` group.
      return log?.message.userId || "anonymous";
    },
    generateBucketPath: (group: string = "default") => {
      const date = new Date();
      const timestamp = format(date, "yyyyMMddHHmmss");
      const uuid = uuidv4();
      // The bucket path to which the log is uploaded.
      // You can create a bucket path by combining the `group`, `timestamp`, and `uuid` values.
      return `logs/${group}/${timestamp}/${uuid}.log`;
    },
    gzip: true,
  },
});

export const logger = winston.createLogger({
  levels: winston.config.syslog.levels,
  format: winston.format.combine(winston.format.json()),
  transports: [s3Transport],
});

export default logger;

Options
s3ClientConfig
This library internally uses @aws-sdk/client-s3 to upload files to AWS S3.
- Please see the AWS SDK for JavaScript `S3ClientConfig` reference (AWSJavaScriptSDK/s3clientconfig).
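As a sketch of what `s3ClientConfig` can carry: the `region` and `credentials` fields below are standard `@aws-sdk/client-s3` client options, and the environment variable names are the AWS SDK defaults. Explicit `credentials` are only an illustration; when omitted, the SDK falls back to its default credential provider chain (environment variables, shared config file, IAM role).

```typescript
// Hypothetical s3ClientConfig sketch; values are read from the standard
// AWS environment variables, with a fallback region for illustration.
const s3ClientConfig = {
  region: process.env.AWS_REGION ?? "ap-northeast-2",
  // Optional: explicit credentials. Omit this to use the SDK's
  // default credential provider chain instead.
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID ?? "",
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY ?? "",
  },
};
```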
s3TransportConfig
/**
 * bucket
 * AWS S3 bucket name to which logs are uploaded.
 */
bucket: string;
/**
 * generateGroup
 * Returns the group name used to classify a log.
 */
generateGroup?: (log: T) => string;
/**
 * generateBucketPath
 * Returns the S3 bucket path (object key) to which a group's logs are uploaded.
 */
generateBucketPath?: (group: string, log: T) => string;
/**
 * maxBufferSize
 * @default 1024
 */
maxBufferSize?: number;
/**
 * maxBufferCount
 * If the number of buffers exceeds this count, the stream with the most
 * written data is flushed and a new stream is created.
 * @default 50
 */
maxBufferCount?: number;
/**
 * maxFileSize
 * If the buffered data exceeds this size, the stream is automatically
 * flushed and a new stream is created.
 * @default 1024 * 2
 */
maxFileSize?: number;
/**
 * maxFileAge
 * If a file is older than this age (milliseconds), the stream is
 * automatically flushed and a new stream is created.
 * @default 1000 * 60 * 5
 */
maxFileAge?: number;
/**
 * maxIdleTime
 * If no data is written for this duration (milliseconds), the stream is
 * automatically flushed and a new stream is created.
 * @default 1000 * 10
 */
maxIdleTime?: number;
/**
 * gzip
 * If true, log files are gzip-compressed before upload.
 * @default false
 */
gzip?: boolean;
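As one way of combining the rotation options above, the fragment below tunes flushing relative to the documented defaults. The values are illustrative assumptions, not recommendations; size and age units are taken to match the style of the defaults shown above.

```typescript
// Hypothetical tuning sketch for the s3TransportConfig rotation options.
const s3TransportConfig = {
  bucket: "my-bucket",
  maxFileSize: 1024 * 4,      // double the default flush threshold (1024 * 2)
  maxFileAge: 1000 * 60 * 10, // rotate files after 10 minutes (default: 5)
  maxIdleTime: 1000 * 30,     // flush idle streams after 30 s (default: 10 s)
  gzip: true,                 // compress log files before upload
};
```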
Create log using winston in another module
// Example - another module
import logger from "src/utils/logger";

...

// Create a log containing the field `userId`
logger.info({ userId: "user001", ...logs });
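The grouping this triggers can be sketched standalone. The `generateGroup` function below mirrors the config option from the example above, and the `Log` type mirrors the one defined there; neither is part of the library itself.

```typescript
// Log shape matching the example's `Log` type, with `userId` optional.
type Log = { message: { userId?: string } };

// Mirrors the `generateGroup` option: logs that share a `userId` end up
// in the same in-memory group (and therefore the same upload stream).
const generateGroup = (log?: Log): string =>
  log?.message.userId || "anonymous";

const groups = [
  { message: { userId: "user001" } },
  { message: {} }, // no userId -> falls back to "anonymous"
].map((l) => generateGroup(l));
// groups is ["user001", "anonymous"]
```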
Motivation
I made this so that log data can be efficiently partitioned when it is stored in an S3 bucket. When you query vast amounts of S3 data with Athena, partitioned data helps keep costs down.
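For instance, a `generateBucketPath` that emits Hive-style `key=value` path segments lets Athena prune scans by partition. The `group=`/`dt=` key scheme below is an assumption for illustration, not something the library prescribes.

```typescript
// Sketch: build an Athena-friendly, Hive-style partitioned object key.
const generateBucketPath = (group: string = "default"): string => {
  const now = new Date("2024-01-15T09:30:00Z"); // fixed date for illustration
  const dt = now.toISOString().slice(0, 10);    // date partition: "2024-01-15"
  return `logs/group=${group}/dt=${dt}/${now.getTime()}.log`;
};

const key = generateBucketPath("user001");
// e.g. "logs/group=user001/dt=2024-01-15/1705311000000.log"
```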
Contributors ✨
Thanks go to these wonderful people (emoji key):
This project follows the all-contributors specification. Contributions of any kind are welcome!
