Node Server Engine
Framework used to develop Node backend services. This package ships with a lot of features to standardize the creation of services, letting you focus on the business logic.
Features
- 🚀 Express-based - Built on the popular Express.js framework (v4.22)
- 🔒 Multiple Auth Methods - JWT, mTLS, HMAC, and Static token authentication
- 🔌 WebSocket Support - Built-in WebSocket server with message handling (ws v8.18)
- 📊 Database Integration - Sequelize ORM with migrations support (v6.37)
- 📡 Pub/Sub - Google Cloud Pub/Sub integration (v4.11)
- 🔔 Push Notifications - Built-in push notification support
- 🌐 i18n - Internationalization with translation management
- 🔍 ElasticSearch - Full-text search integration (v9.2) with auto-migrations
- 💾 Redis - Advanced Redis client with retry logic, TLS support (ioredis v5.8)
- 🔑 Secret Manager - GCP Secret Manager integration for secure credential management
- 📝 API Documentation - Swagger/OpenAPI documentation support
- 📤 File Uploads - Single and chunked file upload middleware with validation
- 🧪 TypeScript - Written in TypeScript with full type definitions
- ✅ Modern Tooling - ESLint, Prettier, and automated versioning
- 🛡️ Permission System - Role-based access control with case-insensitive matching
- 🔐 Security - HMAC authentication, TLS/mTLS support, input validation
Requirements
- Node.js 18.x or higher
- npm 9.x or higher
- TypeScript 5.x (if contributing)
Contents
- Features
- Requirements
- Install
- Entities
- Middlewares
- Utilities
Install
To start a new service, it is highly recommended that you clone our template. It already includes all the necessary tools and boilerplate.
If you need to install it manually:
npm install node-server-engine
For development dependencies:
npm install --save-dev backend-test-tools
Logging
The server provides structured logging with automatic format detection. In local development, logs are colorized and human-readable. In production (GCP, Kubernetes), logs are JSON formatted for log aggregation systems.
Log Format
Logs automatically detect the environment:
- Local Development: Colorized, concise format with time (HH:MM:SS)
- Production: JSON structured logs for cloud log aggregation
Environment Variables
| Variable | Values | Description | Default |
|----------|--------|-------------|---------|
| LOG_FORMAT | local, json | Force specific log format | Auto-detect |
| DETAILED_LOGS | true, false | Show stack traces and verbose details | false |
| DEBUG | namespace:* | Enable debug logs for specific namespaces | Off |
Examples
Default Local Format (clean, concise):
[21:16:15] INFO SERVER_RUNNING
[21:16:15] INFO Connected to database successfully
[21:16:15] DEBUG POST /auth/login [200] 154ms
[21:20:15] DEBUG GET /users [304] 10ms
[21:20:15] WARNING No bearer token found [unauthorized 401] GET /users
Detailed Logs (DETAILED_LOGS=true):
[21:16:15] DEBUG POST /auth/login [200] 154ms
Data:
{
"responseTime": "154ms",
"contentLength": "1012",
"httpVersion": "1.1"
}
[21:20:15] WARNING No bearer token found [unauthorized 401] GET /users
Stack Trace:
Error: No bearer token found
at middleware (/path/to/authJwt.ts:31:13)
...
src/middleware/authJwt/authJwt.ts:31
Production Format (LOG_FORMAT=json):
{"severity":"INFO","message":"SERVER_RUNNING","timestamp":"2025-12-09T21:16:15.028Z","serviceContext":{"service":"my-service","version":"1.0.0"}}
{"severity":"DEBUG","message":"POST /auth/login [200] 154ms","timestamp":"2025-12-09T21:16:15.182Z"}Usage in Code
import { reportInfo, reportError, reportDebug } from 'node-server-engine';
// Info logging
reportInfo('Server started successfully');
reportInfo({ message: 'User login', data: { userId: '123' } });
// Error logging
reportError(error);
reportError(error, request); // Include HTTP request context
// Debug logging (requires DEBUG env var)
reportDebug({ namespace: 'app:auth', message: 'Token validated' });
Entities
Server
The Server class encapsulates all of the Express boilerplate. Instantiate one per service and initialize it to get started.
import { Server } from 'node-server-engine';
const server = new Server(config);
server.init();
Server Configuration
| Property | Type | Behavior | Default |
| --- | --- | --- | --- |
| port | Number | Port to listen on | process.env.PORT |
| endpoints | Array<Endpoint> | List of endpoints that should be served | [] |
| globalMiddleware | Array<Function> \| Array<{middleware: Function, path: String}> | List of middlewares executed before each endpoint's logic. If given as an object with a path, it is only applied to requests with that base path. | [] |
| errorMiddleware | Array<Function> \| Array<{middleware: Function, path: String}> | List of middlewares executed after each endpoint's logic. If given as an object with a path, it is only applied to requests with that base path. | [] |
| initCallbacks | Array<Function> | List of functions called on server start | [] |
| syncCallbacks | boolean | Forces the init callbacks to run one after the other instead of in parallel | false |
| cron | Array<Object> | List of cron jobs started on server start | [] |
| shutdownCallbacks | Array<Function> | List of functions called on server shutdown | [] |
| checkEnvironment | Object | Schema against which the environment variables are verified; the server terminates if they are not properly set | {} |
| secretManager | SecretManagerOptions | Configuration for GCP Secret Manager integration. Loads secrets at startup before any other initialization. | undefined |
| webSocket.server | Object | Settings to create a WebSocket server. See the ws package documentation for details. | |
| webSocket.client | Object | Settings passed down to SocketClient when a new socket connection is established. | |
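As a hedged illustration of how several of these options combine (the endpoint, middleware, and callbacks are placeholders; check the exact option shapes against the type definitions):
import { Server, Endpoint, EndpointMethod } from 'node-server-engine';
// A trivial health-check endpoint to serve
const healthCheck = new Endpoint({
  path: '/health',
  method: EndpointMethod.GET,
  handler: (req, res) => res.sendStatus(200)
});
// A middleware that only runs for requests under /admin
const adminLogger = (req, res, next) => {
  console.log(`[admin] ${req.method} ${req.path}`);
  next();
};
const server = new Server({
  port: 8080,
  endpoints: [healthCheck],
  globalMiddleware: [{ middleware: adminLogger, path: '/admin' }],
  initCallbacks: [async () => console.log('connecting to dependencies...')],
  syncCallbacks: true, // run init callbacks one after the other
  shutdownCallbacks: [async () => console.log('closing connections...')]
});
server.init();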
Endpoint
Endpoint encapsulates the logic of a single endpoint in a standard way.
The main function of the endpoint is called the handler. This function should only handle pure business logic for a given endpoint.
An endpoint usually has a validator. A validator is a schema that is compared to the incoming request; the request is denied if it contains illegal or malformed arguments. For more details, see the documentation of the underlying package, express-validator.
import { Endpoint, EndpointMethod } from 'node-server-engine';
// A basic handler that returns an HTTP status code of 200 to the client
function handler(req, res) {
res.sendStatus(200);
}
// The request must contain `id` as a query string and it must be a UUID V4
const validator = {
id: {
in: 'query',
isUUID: {
options: 4
}
}
};
// This endpoint can be passed to the Server
new Endpoint({
path: '/demo',
method: EndpointMethod.GET,
handler,
validator
});
Endpoint Configuration
new Endpoint(config)
| Property | Type | Behavior | Default |
| --- | --- | --- | --- |
| path | String | Path at which the endpoint should be served | required |
| method | Method | HTTP method to which the endpoint should respond | required |
| handler | Function | Endpoint handler | required |
| validator | Object | Schema to validate the request against. See documentation for more details. | |
| authType | AuthType | Authentication to use for this endpoint | AuthType.NONE |
| authParams | Object | Options specific to the authentication methods | {} |
| files | Array<Object> | Configuration to upload files. See the specific documentation. | {} |
| middleware | Array<Function> | List of middlewares to run before the handler | [] |
| errorMiddleware | Array<Function> | List of middlewares to run after the handler | [] |
const addNoteEndpoint = new Endpoint({
path: '/note',
method: EndpointMethod.POST,
handler: (req, res, next) => res.json(addNote(req.body)),
authType: EndpointAuthType.AUTH_JWT,
middleware: [checkPermission(['DeleteUser', 'AdminAccess'])],
errorMiddleware: [addNoteErrorMiddleware],
validator: {
id: {
in: 'body',
isUUID: true
},
content: {
in: 'body',
isLength: {
errorMessage: 'content too long',
options: {
max: 150
}
}
}
}
});
Methods
The following HTTP methods are supported:
- Method.GET
- Method.POST
- Method.PUT
- Method.PATCH
- Method.DELETE
- Method.ALL - respond to all requests on a path
Authentication
Endpoints can take an authType and authParams in their configuration to determine their authentication behavior. The following table summarizes their usage.
The server engine exposes an enumeration for auth types.
import { AuthType } from 'node-server-engine';
| AuthType | Description | AuthParams |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| AuthType.NONE | No authentication. All requests are handled | |
| AuthType.JWT | A valid JSON Web Token is required as Bearer token. To be valid it must be properly signed by Auth0, and its payload must match what is set in the environment variables. The user's ID is added to req.user. | |
| AuthType.TLS | Authenticate through mutual TLS. The CA and an optional list of whitelisted host names should be set in the environment variables. | whitelist [Array]: List of certificate Common Names or Alt Names that are permitted to make requests to this endpoint. |
| AuthType.HMAC | Authenticate with a signature in the payload. This authentication is deprecated and should be avoided. | secret [String]: Overrides the secret used for signatures set in environment variables. isGithub [Boolean]: The request is a GitHub webhook and therefore uses their signature system and not the default one. |
| AuthType.STATIC | A valid shared Bearer token is required. The shared token is stored in the STATIC_TOKEN environment variable. | |
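A hedged sketch of how authType and authParams fit together, here using mutual TLS with a whitelist (the path and Common Names are illustrative):
import { Endpoint, EndpointMethod, AuthType } from 'node-server-engine';
// Only clients presenting a certificate with one of these Common Names
// (or Alt Names) may call this endpoint
new Endpoint({
  path: '/internal/sync',
  method: EndpointMethod.POST,
  authType: AuthType.TLS,
  authParams: { whitelist: ['billing-service', 'reporting-service'] },
  handler: (req, res) => res.sendStatus(204)
});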
File Upload Middleware
This middleware handles multipart file uploads in an Express application. It processes files in memory, validates them based on configuration options, and ensures that required files are uploaded.
Usage
The request must be made using multipart/form-data. The uploaded files will be available in req.files.
The following settings can be used on each object in the Endpoint's options.files.
| Property | Type | Description | Default |
| --- | --- | --- | --- |
| key | string | Form key as which the file should be fetched | required |
| maxSize | string | Maximum file size in a human readable format (ex: 5MB) | required |
| mimeTypes | Array<string> | A list of accepted MIME Types | [] |
| required | boolean | Will fail the request and not store the files if this one is missing | false |
| noExtension | boolean | Store the file with no extension | false |
Example
import { Endpoint, AuthType, Method } from 'node-server-engine';
const filesConfig = [
{ key: 'avatar', mimeTypes: ['image/png', 'image/jpeg'], required: true },
{ key: 'document', mimeTypes: ['application/pdf'], maxSize: '5MB' }
];
new Endpoint({
path: '/upload',
method: Method.POST,
authType: AuthType.JWT,
files: filesConfig,
handler: (req, res) => {
res.json({ message: 'Files uploaded successfully', files: req.files });
}
});
Middleware Output
The middleware adds a files array to the req object, containing information about each uploaded file.
[
{
"fieldname": "avatar",
"originalname": "profile.png",
"mimetype": "image/png",
"size": 204800,
"buffer":[]
},
{
"fieldname": "document",
"originalname": "resume.pdf",
"mimetype": "application/pdf",
"size": 512000,
"buffer":[]
}
]
Features
- Supports multiple file uploads.
- Validates file types and sizes.
- Ensures required files are uploaded.
- Uses memory storage (files are not saved to disk).
This middleware simplifies file handling in Express, making it easy to manage uploads while enforcing validation rules.
Multipart File Upload Middleware
This middleware enables chunked file uploads in an Express application. It allows uploading large files by splitting them into smaller chunks, validating them, and merging them once all parts are received.
Usage
The request must be made using multipart/form-data. The uploaded chunks are processed in memory before being stored in temporary directories. Once all chunks are uploaded, they are merged into a single file.
Configuration Options
The following settings can be used when configuring file uploads:
| Property | Type | Description | Default |
| -------- | ------- | --------------------------------------------------- | -------- |
| maxSize | string | Maximum allowed size for each chunk (e.g., "5MB") | No limit |
| required | boolean | Whether the file is mandatory for the request | false |
Expected Request Format
The client must send the following fields in the multipart/form-data request:
| Field | Type | Description |
| ------------- | ------ | ---------------------------------------- |
| file | File | The chunked file data |
| filename | String | Name of the original file |
| uniqueID | String | Unique identifier for the upload session |
| chunkIndex | Number | Current chunk number (0-based index) |
| totalChunks | Number | Total number of chunks for the file |
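For illustration, a hedged client-side sketch that splits a browser File into chunks and sends the fields from the table above (the endpoint path and chunk size are assumptions):
// Browser-side helper: upload a File in fixed-size chunks
async function uploadInChunks(file, uniqueID, chunkSize = 1024 * 1024) {
  const totalChunks = Math.ceil(file.size / chunkSize);
  for (let chunkIndex = 0; chunkIndex < totalChunks; chunkIndex++) {
    const chunk = file.slice(chunkIndex * chunkSize, (chunkIndex + 1) * chunkSize);
    const form = new FormData();
    form.append('file', chunk);
    form.append('filename', file.name);
    form.append('uniqueID', uniqueID);
    form.append('chunkIndex', String(chunkIndex));
    form.append('totalChunks', String(totalChunks));
    await fetch('/upload', { method: 'POST', body: form });
  }
}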
Middleware Output
The middleware adds a multipartFile object to the req object, which contains information about the uploaded file.
Until all chunks have been received, req.multipartFile contains:
{
"isPending": true,
"originalname": "example.pdf",
"uniqueID": "abc123",
"chunkIndex": 2,
"totalChunks": 5
}
When the upload is complete, req.multipartFile contains:
{
"isPending": false,
"originalname": "example.pdf",
"uniqueID": "abc123",
"filePath": "/uploads/completed_files/abc123_example.pdf"
}
Example
import { Endpoint, AuthType, Method } from 'node-server-engine';
const fileConfig = { maxSize: '10MB', required: true };
new Endpoint({
path: '/upload',
method: Method.POST,
authType: AuthType.JWT,
multipartFile: fileConfig,
handler: (req, res) => {
// Respond according to upload progress
if (req.multipartFile.isPending) {
res.json({ status: 'pending', chunkIndex: req.multipartFile.chunkIndex });
} else {
res.json({ status: 'complete', filePath: req.multipartFile.filePath });
}
}
});
Socket Client
The WebSocket server starts automatically if the Server's webSocket option is provided.
Each WebSocket connection creates a new SocketClient instance with built-in authentication, message routing, and connection management.
Features
- JWT Authentication: Secure token-based authentication with automatic renewal
- Message Routing: Type-based message handlers similar to HTTP endpoints
- Connection Health: Built-in ping/pong mechanism with configurable intervals
- Lifecycle Callbacks: Hooks for initialization, authentication, and shutdown
- Error Handling: Automatic error formatting and client notification
Socket Client Options
Options can be set when configuring the Server's webSocket:
import { Server, MessageHandler } from 'node-server-engine';
const server = new Server({
webSocket: {
client: {
handlers: [messageHandler1, messageHandler2],
initCallbacks: (client) => {
console.log(`Client ${client.id} connected`);
},
authCallbacks: (client) => {
const user = client.getUser();
console.log(`User ${user.userId} authenticated`);
},
shutdownCallbacks: (client) => {
console.log(`Client ${client.id} disconnected`);
}
}
}
});
| Parameter | Type | Description | Default |
| --- | --- | --- | --- |
| handlers | Array<MessageHandler> | A list of message handlers to use | required |
| authCallbacks | Function \| Array<Function> | Callbacks called when a client successfully authenticates | [] |
| initCallbacks | Function \| Array<Function> | Callbacks called when the socket client is created | [] |
| shutdownCallbacks | Function \| Array<Function> | Callbacks called when the socket client is destroyed | [] |
Client Properties & Methods
| Property/Method | Type | Description |
| --- | --- | --- |
| id | String | Unique identifier for the connection |
| establishedAt | Date | Timestamp when the connection was established |
| isAuthenticated() | () => Boolean | Check if the client is authenticated |
| getUser() | () => SocketUser \| undefined | Get authenticated user data (userId, deviceId, tokenId, audience) |
| sendMessage() | (type, payload, options) | Send a message to the client |
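For example, a hedged snippet pushing a typed message to an authenticated client (the message type and payload are illustrative):
// Inside a callback or handler that receives a SocketClient instance
if (client.isAuthenticated()) {
  client.sendMessage('notification', { text: 'Build finished' });
}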
Client Authentication
Clients authenticate by sending a message:
// Client-side
websocket.send(JSON.stringify({
type: 'authenticate',
payload: { token: 'your-jwt-token' }
}));
// Server will:
// 1. Verify the JWT token
// 2. Extract user information (userId, deviceId, audience)
// 3. Set authentication status
// 4. Trigger authCallbacks
// 5. Send renewal reminder 1 minute before token expiration
Environment Variables
| Variable | Description | Default |
| --- | --- | --- |
| WS_PING_INTERVAL | Interval between ping checks (seconds) | 30 |
| WEBSOCKET_CLOSE_TIMEOUT | Timeout for graceful close (milliseconds) | 3000 |
Message Handler
Message handlers are similar to Endpoints, but in a WebSocket context. They define how incoming messages should be handled.
import { MessageHandler, Server } from 'node-server-engine';
function handler(payload, client) {
// payload is the message payload in a standard message.
// client is an instance of SocketClient
}
const messageHandler = new MessageHandler(type, handler, options);
new Server({
webSocket: { client: { handlers: [messageHandler] } }
});
| Argument | Type | Description | Default |
| --- | --- | --- | --- |
| type | String | The message type that should be handled | required |
| handler | Function | A function that will run for every message of this type | required |
| options.authenticated | Boolean | A flag indicating that handling this kind of message requires an authenticated client | true |
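A hedged sketch of a concrete handler (the message type and payload shape are illustrative):
import { MessageHandler } from 'node-server-engine';
// Echo chat messages back to the sender, tagging them with the user ID
const chatHandler = new MessageHandler(
  'chat.message',
  (payload, client) => {
    const user = client.getUser();
    client.sendMessage('chat.message', { from: user?.userId, text: payload.text });
  },
  { authenticated: true } // reject messages from unauthenticated clients
);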
Redis
The server engine exposes a Redis client configured for production use with automatic reconnection, retry logic, and optional TLS support.
It is a pre-configured instance of ioredis v5.8.2. See the package documentation for more details.
import { Redis } from 'node-server-engine';
// Initialize (automatically called by Server, or manually)
Redis.init();
// Use Redis commands
await Redis.set('key', 'value');
const value = await Redis.get('key');
await Redis.del('key');
await Redis.expire('key', 3600);
// Get underlying client for advanced use
const client = Redis.getClient();
// Shutdown when done
await Redis.shutdown();
Features
- Automatic Reconnection: Exponential backoff retry strategy (up to 2s delay)
- Error Recovery: Reconnects on READONLY, ECONNRESET, and ETIMEDOUT errors
- Connection Management: Event listeners for connect, ready, error, close, reconnecting
- TLS Support: Automatic TLS configuration when TLS_CA is provided
- Test Mode: Lazy connection in test environments to prevent connection attempts
Environment Variables
| Variable | Description | Default | Required |
| --- | --- | --- | --- |
| REDIS_HOST | Redis server hostname or IP | - | ✓ |
| REDIS_PORT | Redis server port | 6379 | ✗ |
| REDIS_USERNAME | Username for authentication | - | ✗ |
| REDIS_PASSWORD | Password for authentication | - | ✗ |
| TLS_CA | TLS certificate authority for SSL/TLS | - | ✗ |
Configuration Options
You can customize Redis client creation:
import { createRedisClient } from 'node-server-engine';
const client = createRedisClient({
db: 2, // Database index (default: 0)
lazyConnect: true, // Don't connect until first command
enableReadyCheck: false, // Disable ready check
redis: { // Override any ioredis options
connectTimeout: 10000,
commandTimeout: 5000
}
});
SecretManager
The SecretManager entity provides seamless integration with GCP Secret Manager for secure credential management in production environments. It automatically loads secrets at startup, writes file-based secrets (like certificates) to secure temp locations, and falls back to environment variables in development.
import { SecretManager } from 'node-server-engine';
// Initialize (can be done automatically via Server configuration)
await SecretManager.init({
enabled: process.env.NODE_ENV === 'production',
projectId: process.env.GCP_PROJECT_ID,
cache: true,
fallbackToEnv: true,
secrets: [
'SQL_PASSWORD', // Simple env variable
'JWT_SECRET',
{
name: 'PRIVATE_KEY', // File-based secret
type: 'file',
targetEnvVar: 'PRIVATE_KEY_PATH',
filename: 'private-key.pem'
}
]
});
// Get a cached secret value
const password = SecretManager.getSecret('SQL_PASSWORD');
// Fetch a secret on-demand (useful for rotation)
const apiKey = await SecretManager.fetchSecret('API_KEY');
// Reload all secrets
await SecretManager.reload();
// Check initialization status
if (SecretManager.isInitialized()) {
console.log('Secrets loaded');
}
Features
- Automatic Loading: Secrets loaded during server initialization
- Environment Fallback: Uses process.env in development or when secrets are unavailable
- File Support: Writes certificates and keys to temp files with secure permissions (0o600)
- Caching: Optional caching of secret values for performance
- Secret Rotation: On-demand fetching for runtime secret updates
- Lifecycle Management: Automatic cleanup of temp files on shutdown
Server Integration
SecretManager can be configured directly in Server options:
import { Server, SecretManagerOptions } from 'node-server-engine';
const server = new Server({
endpoints: [...],
secretManager: {
enabled: process.env.NODE_ENV === 'production',
projectId: process.env.GCP_PROJECT_ID,
cache: true,
fallbackToEnv: true,
secrets: [
'SQL_PASSWORD',
'JWT_SECRET',
{
name: 'PRIVATE_KEY',
type: 'file',
targetEnvVar: 'PRIVATE_KEY_PATH',
filename: 'private-key.pem'
}
]
}
});
await server.init(); // Secrets loaded before app starts
Configuration Options
| Property | Type | Description | Default |
|---------------|-------------------------------|----------------------------------------------------------------|---------------|
| enabled | boolean | Enable Secret Manager (typically only in production) | false |
| projectId | string | GCP project ID | Required |
| cache | boolean | Cache secret values in memory | true |
| fallbackToEnv | boolean | Fall back to process.env if secret loading fails | true |
| tempDir | string | Directory for file-based secrets | os.tmpdir() |
| secrets | Array<string \| SecretConfig> | List of secrets to load | [] |
Secret Configuration
Secrets can be specified as strings (simple env variables) or objects for advanced configuration:
String format (simple env variable):
'SQL_PASSWORD' // Loads GCP secret "SQL_PASSWORD" → process.env.SQL_PASSWORD
Object format (advanced configuration):
{
name: 'PRIVATE_KEY', // Secret name in GCP Secret Manager
type: 'env' | 'file', // Type: 'env' for variables, 'file' for certificates
targetEnvVar: 'PRIVATE_KEY_PATH', // Env var name (optional for 'env' type)
filename: 'private-key.pem', // Filename for 'file' type (optional)
version: 'latest' // Secret version (default: 'latest')
}
Secret Types
Environment Variable Secrets (type: 'env'):
- Loaded directly into process.env
- Good for passwords, API keys, tokens
- Example: 'SQL_PASSWORD' → process.env.SQL_PASSWORD
File-based Secrets (type: 'file'):
- Written to temp files with secure permissions (0o600)
- Good for certificates, private keys, JSON credentials
- Environment variable points to file path
- Example: PRIVATE_KEY → /tmp/private-key.pem → process.env.PRIVATE_KEY_PATH
Environment Variables
| Variable | Description | Required |
| --- | --- | --- |
| GCP_PROJECT_ID | GCP project containing secrets | ✓ |
| NODE_ENV | Environment (production/development) | - |
Load Results
The init() and reload() methods return load statistics:
const result = await SecretManager.init({...});
console.log(result);
// {
// loaded: 3, // Successfully loaded from Secret Manager
// failed: 0, // Failed to load
// fallback: 1, // Used fallback environment variables
// details: [...] // Detailed information per secret
// }
Security Features
- Secure File Permissions: File-based secrets written with 0o600 (owner read/write only)
- Automatic Cleanup: Temp files deleted on server shutdown
- No Logging: Secret values never logged (only metadata)
- Lifecycle Integration: Registered with LifecycleController for proper cleanup
Example: Multiple Secrets
const secretConfig = {
enabled: process.env.NODE_ENV === 'production',
projectId: 'my-gcp-project',
secrets: [
// Database credentials
'SQL_PASSWORD',
'SQL_USER',
// API keys
'STRIPE_API_KEY',
'SENDGRID_API_KEY',
// Certificate files
{
name: 'TLS_CERT',
type: 'file',
targetEnvVar: 'TLS_CERT_PATH',
filename: 'tls.crt'
},
{
name: 'TLS_KEY',
type: 'file',
targetEnvVar: 'TLS_KEY_PATH',
filename: 'tls.key'
},
// Service account key
{
name: 'GCP_SERVICE_ACCOUNT',
type: 'file',
targetEnvVar: 'GOOGLE_APPLICATION_CREDENTIALS',
filename: 'service-account.json'
}
]
};
await SecretManager.init(secretConfig);
AWSS3
The AWSS3 entity is a generic wrapper around AWS S3 (and S3-compatible services like MinIO, LocalStack). It supports uploads, downloads, streaming, metadata, and deletion with size validation and UUID-based path generation.
Based on @aws-sdk/client-s3 v3.
import { AWSS3 } from 'node-server-engine';
import fs from 'fs';
// Initialize with configuration (optional; otherwise uses environment variables)
AWSS3.init({
region: 'us-east-1',
accessKeyId: 'YOUR_ACCESS_KEY',
secretAccessKey: 'YOUR_SECRET_KEY',
// For S3-compatible services
// endpoint: 'http://localhost:4566',
// forcePathStyle: true
});
// Or rely on environment variables and auto-init
// AWSS3.init();
// Upload a file
const fileStream = fs.createReadStream('photo.jpg');
const uploaded = await AWSS3.upload(
fileStream,
'my-bucket',
{ directory: 'photos', mime: 'image/jpeg' },
{ maxSize: '5MB' },
{ userId: '123' } // optional metadata
);
console.log(uploaded.Key); // photos/<uuid>.jpeg
// Download entire file into memory
const { data, metadata } = await AWSS3.get('my-bucket', uploaded.Key);
console.log(metadata.ContentLength);
// Stream a large file
const { stream } = await AWSS3.download('my-bucket', uploaded.Key);
stream.pipe(res);
// Get a fast stream without metadata
const s = await AWSS3.getFileStream('my-bucket', uploaded.Key);
s.pipe(res);
// Metadata only
const head = await AWSS3.getMetadata('my-bucket', uploaded.Key);
// Delete
await AWSS3.delete('my-bucket', uploaded.Key);
// Generate unique destination path
const key = AWSS3.generateFileDestination({ directory: 'uploads/images', mime: 'image/png' });
Features
- Auto-initialization via environment variables
- Stream-based upload with maxSize validation
- Full download, streaming download, or stream-only access
- UUID-based file naming with directory and extension inference from MIME
- S3-compatible (supports custom endpoint and forcePathStyle)
API Methods
init(config?)
Initialize the S3 client explicitly; otherwise it will lazy-init using environment variables.
AWSS3.init({
region: 'us-east-1',
accessKeyId: '...',
secretAccessKey: '...',
sessionToken: '...', // optional
endpoint: 'http://localhost:9000', // optional (MinIO/LocalStack)
forcePathStyle: true // optional (S3-compatible)
});
upload(stream, bucket, destinationOptions?, uploaderOptions?, metadata?)
Uploads content from a readable stream.
- destinationOptions: { directory?, fileName?, mime?, noExtension? }
- uploaderOptions: { maxSize?: string }, e.g. 5MB, 100KB
- metadata: key/value pairs stored as object metadata
Returns Promise<S3UploadedFile> with keys like Bucket, Key, ETag, Location.
get(bucket, key)
Downloads the full file into memory as Buffer and returns { data, metadata }.
download(bucket, key)
Returns { stream, metadata } for streaming large files efficiently.
getFileStream(bucket, key)
Returns a Readable stream without fetching metadata.
getMetadata(bucket, key)
Returns file metadata via a HEAD request.
delete(bucket, key)
Deletes the object.
generateFileDestination(options?)
Generates a unique key using UUID with optional directory and extension (from mime).
Environment Variables
| Variable | Description | Required |
| --- | --- | --- |
| AWS_REGION | AWS region (e.g., us-east-1) | ✗* |
| AWS_ACCESS_KEY_ID | Access key ID | ✗* |
| AWS_SECRET_ACCESS_KEY | Secret access key | ✗* |
| AWS_SESSION_TOKEN | Session token (temporary creds) | ✗ |
| AWS_S3_ENDPOINT | Custom S3-compatible endpoint (MinIO/LocalStack) | ✗ |
| AWS_S3_FORCE_PATH_STYLE | Use path-style addressing (true/false) | ✗ |
* Not required if you call init() with a config object.
Error Handling
- Throws WebError with status 413 when maxSize is exceeded during upload
- Other errors bubble up from AWS SDK v3 commands
Example:
try {
await AWSS3.upload(stream, 'bucket', {}, { maxSize: '1MB' });
} catch (e) {
if (e.statusCode === 413) {
console.log('File too large');
}
}
Usage in Template Projects
import { AWSS3, Endpoint, Method } from 'node-server-engine';
import { Readable } from 'stream';
new Endpoint({
path: '/upload-s3',
method: Method.POST,
files: [{ key: 'file', maxSize: '10MB', required: true }],
async handler(req, res) {
const file = req.files[0];
const stream = Readable.from(file.buffer);
const result = await AWSS3.upload(
stream,
process.env.S3_UPLOAD_BUCKET,
{ directory: 'user-uploads', mime: file.mimetype }
);
res.json({ key: result.Key, url: result.Location });
}
});
Sequelize
The server engine exposes an SQL ORM that is configured to work with the standard environment variables that are used in our services.
It is a pre-configured instance of sequelize. See the package documentation for more details.
import { Sequelize } from 'node-server-engine';
Sequelize.sequelize;
Sequelize.closeSequelize();
It can be configured through environment variables:
| env | description | default |
| --- | --- | --- |
| SQL_HOST | Host to connect to | |
| SQL_PORT | Port on which SQL is served | 5432 |
| SQL_PASSWORD | Password used to authenticate with the SQL server | |
| SQL_USER | User used to authenticate with the SQL server | |
| SQL_DB | Database to connect to | |
| SQL_TYPE | SQL dialect to connect with | postgres |
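A hedged sketch of defining a model on the pre-configured instance (the model and attributes are illustrative; see the sequelize documentation for the full API):
import { Sequelize } from 'node-server-engine';
import { DataTypes } from 'sequelize';
// Define a model on the shared, pre-configured instance
const User = Sequelize.sequelize.define('User', {
  id: { type: DataTypes.UUID, primaryKey: true, defaultValue: DataTypes.UUIDV4 },
  email: { type: DataTypes.STRING, allowNull: false, unique: true }
});
await User.sync(); // create the table if it does not exist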
Pub/Sub
The engine exposes a PubSub entity that can be used to communicate with Google Cloud Pub/Sub with production-ready configuration including flow control, retry policies, and batch processing.
Based on @google-cloud/pubsub v4.11.0.
import { PubSub } from 'node-server-engine';
/**
* Declare that the service will be publishing to a topic
* This must be done before init() is called
* Any attempt to publish a message without declaring a topic first will fail
* @param {string|Array<string>} topic - The topic(s) to which we will be publishing
* @param {Object} [options] - Publisher configuration options
*/
PubSub.addPublisher(topic, {
enableMessageOrdering: false, // Enable ordering (requires orderingKey when publishing)
batching: {
maxMessages: 100, // Max messages per batch
maxBytes: 1024 * 1024, // Max bytes per batch (1MB)
maxMilliseconds: 100 // Max delay before sending batch
},
retry: {
initialDelayMillis: 100, // Initial retry delay
maxDelayMillis: 60000, // Max retry delay (60s)
delayMultiplier: 1.3 // Exponential backoff multiplier
}
});
/**
* Binds a message handler to a subscription
* If called multiple times, handlers will be chained
* This must be done before init() is called
* The subscription will not be consumed until init() is called
* @param {string} subscription - The subscription to consume
* @param {function|Array<function>} handler - The message handling function(s)
* @param {Object} [options] - Subscriber configuration options
*/
PubSub.addSubscriber(subscription, handler, {
first: false, // Put handler at beginning of chain
isDebezium: false, // Handle Debezium CDC events
ackDeadline: 60, // Acknowledgement deadline (10-600 seconds)
flowControl: {
maxMessages: 1000, // Max concurrent messages
maxBytes: 100 * 1024 * 1024, // Max concurrent bytes (100MB)
allowExcessMessages: true // Allow excess messages if under maxBytes
}
});
/**
* Establish connection with all the declared publishers/subscribers
* Validates topic/subscription existence and permissions
*/
await PubSub.init();
/**
* Send a message through a previously declared publisher
* @param {string} topic - The name of the topic to which the message should be pushed
* @param {Object} message - The actual message (will be JSON stringified)
* @param {Object} [attributes] - Message attributes for filtering
* @param {string} [orderingKey] - Enforce ordering for messages with same key
*/
await PubSub.publish(topic, message, attributes, orderingKey);
/**
* Flush all pending messages and close connections with Pub/Sub
*/
await PubSub.shutdown();
Message Handling
Messages are acknowledged after successful processing. If any handler throws an error, the message is nacked and will be redelivered according to the subscription's retry policy.
// Handler signature
async function messageHandler(payload, attributes, publishedAt) {
// payload: The JSON message content
// attributes: Message attributes (key-value pairs)
// publishedAt: Date when message was published
// Process the message
await processData(payload);
// Message will be ack'd automatically after successful processing
// If an error is thrown, message will be nack'd for redelivery
}
Best Practices
- Flow Control: Configure flowControl based on your service's memory and processing capacity
- Batch Settings: Tune batching for the optimal throughput vs. latency tradeoff
- Retry Policy: Use exponential backoff to handle transient failures gracefully
- Dead Letter Topics: Configure dead letter topics on your subscriptions in GCP Console for failed messages
- Exactly-Once Delivery: Enable exactly-once delivery in GCP Console when creating/updating your subscription (requires ackWithResponse() in handlers)
- Message Ordering: Only enable when strict ordering is required (reduces throughput)
- ACK Deadline: Set based on your handler's processing time (default: 60s)
Environment Variables
Pub/Sub authentication uses Google Cloud Application Default Credentials. Set GOOGLE_APPLICATION_CREDENTIALS to your service account key path, or use Workload Identity in GKE.
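Putting the pieces together, a hedged end-to-end sketch (topic, subscription, and attribute names are illustrative):
import { PubSub } from 'node-server-engine';
// Declare publishers and subscribers before init()
PubSub.addPublisher('user-events');
PubSub.addSubscriber('user-events-subscription', async (payload, attributes) => {
  console.log(`received ${attributes.type} for user ${payload.userId}`);
});
await PubSub.init();
// Publish with an attribute that subscribers can inspect or filter on
await PubSub.publish('user-events', { userId: '123' }, { type: 'USER_CREATED' });
await PubSub.shutdown();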
PushNotification
Communication interface with the push notification service using Pub/Sub.
import { PushNotification } from 'node-server-engine';
// Initialize with optimized settings for push notifications
// Must be called before sending notifications
await PushNotification.init();
/**
* Send a push notification through the push service
* @param {string} userId - ID of the user that should receive the notification
* @param {Object} notification - Notification that should be sent
* @throws {EngineError} If userId is missing or publish fails
*/
await PushNotification.sendPush(userId, {
title: 'Hello!',
body: 'You have a new message',
payload: { type: 'message', messageId: '123' },
priority: true,
ttl: 3600 // Time to live in seconds
});
Notification Options
{
title?: string; // Notification title
body?: string; // Notification body text
payload?: unknown; // Custom data payload
voip?: boolean; // VOIP notification (iOS)
background?: boolean; // Background/data-only notification
token?: string; // Specific device token (optional)
mutable?: boolean; // Content can be mutated by client (iOS)
contentAvailable?: boolean; // Requires client-side processing (iOS)
ttl?: number; // Time to live in seconds
priority?: boolean; // High priority notification
collapseId?: string; // Group notifications with same ID
}
Environment Variables
| Variable | Description | Required |
| -------- | ----------- | -------- |
| PUBSUB_PUSH_NOTIFICATION_QUEUE_TOPIC | Pub/Sub topic name for push notifications | ✓ |
Configuration
The PushNotification entity is pre-configured with optimal settings:
- Batching: Up to 100 messages, 1MB max, 50ms delay for fast delivery
- Retry: Exponential backoff with 100ms initial delay, up to 60s max delay
- No Ordering: Push notifications don't require strict ordering for better throughput
Integration
Your omg-notification-service (or similar) should subscribe to the configured topic to process and deliver push notifications to devices via FCM, APNs, or other push providers.
Localizator
The localizator exposes localization related utilities.
import { Localizator } from 'node-server-engine';
// init() must be called first to load the locale data
// After that, the Localizator regularly synchronizes new data without further calls
await Localizator.init();
// Get the different ways of displaying a user's name
const { regular, formal, casual } = await Localizator.getDisplayNames(
firstName,
lastName,
locale
);
// This should be called when the program shuts down.
await Localizator.shutdown();
| env | description | default |
| --- | --- | --- |
| LOCALES_URL | Base URL where the locales data is stored | required |
ElasticSearch
The ElasticSearch entity provides a managed client for Elasticsearch with automatic migrations, connection management, and TLS support.
Based on @elastic/elasticsearch v9.2.0.
import { ElasticSearch } from 'node-server-engine';
// Initialize (runs migrations automatically)
await ElasticSearch.init();
// Get the client for operations
const client = ElasticSearch.getClient();
// Use Elasticsearch operations
await client.index({
index: 'products',
id: '123',
document: { name: 'Product', price: 99.99 }
});
const result = await client.search({
index: 'products',
query: { match: { name: 'Product' } }
});
// Shutdown when done
await ElasticSearch.shutdown();
Features
- Automatic Migrations: Tracks and runs migrations on startup
- Connection Verification: Pings cluster on initialization
- TLS Support: Optional SSL/TLS configuration
- Retry Logic: Built-in retry mechanism with configurable attempts
- Test Mode: Auto-flushes indices in test environment
- Error Handling: Detailed error reporting with context
Migration System
Create migration files in your specified migration directory:
// migrations/001-create-products-index.ts
import { Client } from '@elastic/elasticsearch';
export async function migrate(client: Client): Promise<void> {
await client.indices.create({
index: 'products',
settings: {
number_of_shards: 1,
number_of_replicas: 1
},
mappings: {
properties: {
name: { type: 'text' },
price: { type: 'double' },
createdAt: { type: 'date' }
}
}
});
}
Migrations are:
- Run automatically on init()
- Tracked in the migrations index
- Executed once per file
- Run in alphabetical order
Environment Variables
| Variable | Description | Required |
| --- | --- | --- |
| ELASTIC_SEARCH_HOST | Elasticsearch cluster URL | ✓ |
| ELASTIC_SEARCH_USERNAME | Username for authentication | ✓ |
| ELASTIC_SEARCH_PASSWORD | Password for authentication | ✓ |
| ELASTIC_SEARCH_MIGRATION_PATH | Absolute path to migrations directory | ✓ |
| TLS_CA | TLS certificate authority for SSL/TLS | ✗ |
Configuration
The client is configured with:
- Max Retries: 3 attempts
- Request Timeout: 30 seconds
- Sniff on Start: Disabled by default (can be enabled for multi-node clusters)
- TLS: Enabled when TLS_CA is provided
GoogleCloudStorage
The GoogleCloudStorage entity provides a simple, generic wrapper for Google Cloud Storage operations. It handles file uploads, downloads, streaming, and deletion with built-in size validation and automatic file path generation.
Based on @google-cloud/storage v7.14.0.
import { GoogleCloudStorage } from 'node-server-engine';
import fs from 'fs';
// Initialize with configuration
GoogleCloudStorage.init({
projectId: 'my-project',
keyFilename: '/path/to/keyfile.json'
});
// Or use environment variables (GC_PROJECT, GOOGLE_APPLICATION_CREDENTIALS)
GoogleCloudStorage.init();
// Upload a file
const fileStream = fs.createReadStream('photo.jpg');
const result = await GoogleCloudStorage.upload(
fileStream,
'my-bucket',
{ directory: 'photos', mime: 'image/jpeg' },
{ metadata: { contentType: 'image/jpeg' } },
{ maxSize: '5MB' }
);
console.log(result.name); // photos/uuid.jpeg
// Download file as Buffer
const { data, metadata } = await GoogleCloudStorage.get('my-bucket', 'photos/image.jpg');
console.log(metadata.size); // File size in bytes
fs.writeFileSync('downloaded.jpg', data);
// Stream a file (for large files)
const { stream, metadata } = await GoogleCloudStorage.download('my-bucket', 'videos/video.mp4');
stream.pipe(res); // Stream to HTTP response
// Get file stream directly (fastest, no metadata)
const stream = GoogleCloudStorage.getFileStream('my-bucket', 'audio/song.mp3');
stream.pipe(res);
// Delete a file
await GoogleCloudStorage.delete('my-bucket', 'temp/old-file.txt');
// Generate unique file paths
const path = GoogleCloudStorage.generateFileDestination({
directory: 'uploads/images',
mime: 'image/jpeg'
});
// Result: 'uploads/images/a1b2c3d4-e5f6-7890-abcd-ef1234567890.jpeg'
Features
- Auto-initialization: Automatically initializes on first use if not manually initialized
- Flexible Configuration: Supports config object, environment variables, or credentials
- File Upload: Stream-based uploads with size validation
- Multiple Download Methods: Full download, streaming, or direct stream access
- Path Generation: Automatic UUID-based file naming with directory and extension support
- Size Validation: Built-in file size limits with human-readable formats (5MB, 100KB, etc.)
- Error Handling: Detailed error reporting with context
- No Project Dependencies: Generic implementation works with any Google Cloud Storage bucket
API Methods
init(config?)
Initialize Google Cloud Storage with configuration. Optional - will auto-initialize with environment variables if not called.
GoogleCloudStorage.init({
projectId: 'my-project-id',
keyFilename: '/path/to/service-account-key.json',
// Or use credentials directly
credentials: {
client_email: '[email protected]',
private_key: '-----BEGIN PRIVATE KEY-----\n...'
},
// For local emulator
apiEndpoint: 'http://localhost:9000'
});
upload(stream, bucket, destinationOptions?, storageOptions?, uploaderOptions?)
Upload a file to Google Cloud Storage.
Parameters:
- stream (Readable): Node.js readable stream of file content
- bucket (string): Bucket name
- destinationOptions (object, optional):
  - directory (string): Subdirectory path (e.g., 'uploads/images')
  - fileName (string): Specific filename (if not provided, generates UUID)
  - mime (string): MIME type for extension detection
  - noExtension (boolean): Don't append file extension
- storageOptions (object, optional): Google Cloud Storage write stream options
- uploaderOptions (object, optional):
  - maxSize (string): Maximum file size (e.g., '5MB', '100KB', '1GB')
Returns: Promise<StorageUploadedFile> - Metadata of uploaded file
Example:
const fileStream = fs.createReadStream('document.pdf');
const result = await GoogleCloudStorage.upload(
fileStream,
'documents-bucket',
{ directory: 'legal/contracts', mime: 'application/pdf' },
{ metadata: { contentType: 'application/pdf' } },
{ maxSize: '10MB' }
);
get(bucket, path)
Download a file and return its content as a Buffer along with metadata.
Parameters:
bucket(string): Bucket namepath(string): File path in the bucket
Returns: Promise<{data: Buffer, metadata: StorageUploadedFile}>
Example:
const { data, metadata } = await GoogleCloudStorage.get('my-bucket', 'photos/image.jpg');
console.log(metadata.contentType); // 'image/jpeg'
console.log(data.length); // File size in bytes
download(bucket, path)
Get a readable stream for a file along with its metadata. Use this for large files or when you need to stream content.
Parameters:
bucket(string): Bucket namepath(string): File path in the bucket
Returns: Promise<{stream: Readable, metadata: StorageUploadedFile}>
Example:
const { stream, metadata } = await GoogleCloudStorage.download('my-bucket', 'videos/large-video.mp4');
console.log(metadata.size); // File size
stream.pipe(response); // Stream to HTTP response
getFileStream(bucket, path)
Get a readable stream for a file without fetching metadata. Fastest option when metadata is not needed.
Parameters:
bucket(string): Bucket namepath(string): File path in the bucket
Returns: Readable - Node.js readable stream
Example:
const stream = GoogleCloudStorage.getFileStream('my-bucket', 'audio/song.mp3');
stream.pipe(response); // Direct streaming
delete(bucket, path)
Delete a file from Google Cloud Storage.
Parameters:
bucket(string): Bucket namepath(string): File path in the bucket
Returns: Promise<void>
Example:
await GoogleCloudStorage.delete('my-bucket', 'temp/old-file.txt');
generateFileDestination(options?)
Generate a unique file path with optional directory and extension.
Parameters:
- options (object, optional):
  - directory (string): Subdirectory path
  - mime (string): MIME type for extension detection
  - noExtension (boolean): Don't append extension
Returns: string - Generated file path
Examples:
// UUID only
GoogleCloudStorage.generateFileDestination();
// → 'a1b2c3d4-e5f6-7890-abcd-ef1234567890'
// With directory and MIME type
GoogleCloudStorage.generateFileDestination({
directory: 'uploads/images',
mime: 'image/jpeg'
});
// → 'uploads/images/a1b2c3d4-e5f6-7890-abcd-ef1234567890.jpeg'
// Without extension
GoogleCloudStorage.generateFileDestination({
directory: 'data',
noExtension: true
});
// → 'data/a1b2c3d4-e5f6-7890-abcd-ef1234567890'
Environment Variables
| Variable | Description | Required |
| --- | --- | --- |
| GC_PROJECT | Google Cloud Project ID | ✗* |
| GOOGLE_APPLICATION_CREDENTIALS | Path to service account key file | ✗* |
* Not required if you call init() with a config object
Error Handling
The entity throws WebError with appropriate status codes:
- 413 (Payload Too Large): File exceeds the maxSize limit
- Other errors are passed through from the Google Cloud Storage SDK
Example:
try {
await GoogleCloudStorage.upload(stream, 'bucket', {}, {}, { maxSize: '1MB' });
} catch (error) {
if (error.statusCode === 413) {
console.log('File too large!');
}
}
Usage in Template Projects
In your node-server-template endpoints:
import { GoogleCloudStorage, Endpoint, Method } from 'node-server-engine';
import { Readable } from 'stream';
new Endpoint({
path: '/upload',
method: Method.POST,
files: [{ key: 'file', maxSize: '5MB', required: true }],
async handler(req, res) {
const file = req.files[0];
const stream = Readable.from(file.buffer);
const result = await GoogleCloudStorage.upload(
stream,
process.env.UPLOAD_BUCKET,
{ directory: 'user-uploads', mime: file.mimetype }
);
res.json({ path: result.name, url: result.mediaLink });
}
});
Translation Manager
The translation manager exposes translation related utilities.
import { TranslationManager } from 'node-server-engine';
// init() must be called first to load the translation data
// After that, the translation manager regularly synchronizes new data without further calls
await TranslationManager.init();
// Translate a string for a given locale
const translatedString = await TranslationManager.translate(
lang,
key,
variables,
tags
);
// Example
const translatedString = await TranslationManager.translate(
'zh-TW',
'email.invitation.body',
{ name: 'John' },
{ link: ['a', 'href="https://www.test.com"'] }
);
// This should be called when the program shuts down.
await TranslationManager.shutdown();
- lang [String]: Locale for which the translation should be fetched (if no data is found, the translation is returned in en-US)
- key [String]: Translation key
- variables [Object]: A key=>value mapping for variable interpolation in strings. (Optional)
- tags [Object]: A key=>value mapping used to wrap parts of the translated string in markup, as in the example above. (Optional)
| env | description | default |
| --- | --- | --- |
| LOCALES_URL | Base URL where the locales data is stored | required |
Error Reporting
The server engine standardizes the way errors are handled and reported. The error classes provided by the Server Engine should always be used when throwing an exception.
Errors are a crucial part of the application: they are what helps us properly debug the program and offer support when needed, as well as what exposes issues to the client.
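As a hedged sketch of what this looks like in practice, an error middleware (standard Express signature) can map errors carrying a statusCode, like the WebError thrown by the storage entities, to an HTTP response and report everything else:
import { reportError } from 'node-server-engine';
// Pass as errorMiddleware on an Endpoint or the Server
function errorHandler(err, req, res, next) {
  if (err.statusCode) {
    // Errors like WebError carry an HTTP status
    res.status(err.statusCode).json({ error: err.message });
    return;
  }
  reportError(err, req); // report with HTTP request context
  res.sendStatus(500);
}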
Log Output Formats
The engine automatically adapts log output based on the environment:
Local Development (Readable Format)
- Colorized output with severity levels
- Formatted timestamps and file locations
- Pretty-printed data objects
- Stack traces with proper formatting
- HTTP request context when available
Production/GCP (JSON Format)
- Structured JSON for log aggregation
- Google Cloud
