Node Server Engine
Framework used to develop Node backend services. This package ships with a lot of features to standardize the creation of services, letting you focus on the business logic.
Features
- 🚀 Express-based - Built on the popular Express.js framework (v4.22)
- 🔒 Multiple Auth Methods - JWT, mTLS, HMAC, and Static token authentication
- 🔌 WebSocket Support - Built-in WebSocket server with message handling (ws v8.18)
- 📊 Database Integration - Sequelize ORM with migrations support (v6.37)
- 📡 Pub/Sub - Google Cloud Pub/Sub integration (v4.11)
- 🔔 Push Notifications - Built-in push notification support
- 🌐 i18n - Internationalization with translation management
- 🔍 ElasticSearch - Full-text search integration (v9.2) with auto-migrations
- 💾 Redis - Advanced Redis client with retry logic, TLS support (ioredis v5.8)
- 📝 API Documentation - Swagger/OpenAPI documentation support
- 📤 File Uploads - Single and chunked file upload middleware with validation
- 🧪 TypeScript - Written in TypeScript with full type definitions
- ✅ Modern Tooling - ESLint, Prettier, and automated versioning
- 🛡️ Permission System - Role-based access control with case-insensitive matching
- 🔐 Security - HMAC authentication, TLS/mTLS support, input validation
Requirements
- Node.js 18.x or higher
- npm 9.x or higher
- TypeScript 5.x (if contributing)
Install
To start a new service, it is highly recommended to clone our template, which already includes all the necessary tools and boilerplate.
If you need to install it manually:
npm install node-server-engine
For development dependencies:
npm install --save-dev backend-test-tools
Logging
The server provides structured logging with automatic format detection. In local development, logs are colorized and human-readable. In production (GCP, Kubernetes), logs are JSON formatted for log aggregation systems.
Log Format
Logs automatically detect the environment:
- Local Development: Colorized, concise format with time (HH:MM:SS)
- Production: JSON structured logs for cloud log aggregation
Environment Variables
| Variable | Values | Description | Default |
|----------|--------|-------------|---------|
| LOG_FORMAT | local, json | Force specific log format | Auto-detect |
| DETAILED_LOGS | true, false | Show stack traces and verbose details | false |
| DEBUG | namespace:* | Enable debug logs for specific namespaces | Off |
Examples
Default Local Format (clean, concise):
[21:16:15] INFO SERVER_RUNNING
[21:16:15] INFO Connected to database successfully
[21:16:15] DEBUG POST /auth/login [200] 154ms
[21:20:15] DEBUG GET /users [304] 10ms
[21:20:15] WARNING No bearer token found [unauthorized 401] GET /users
Detailed Logs (DETAILED_LOGS=true):
[21:16:15] DEBUG POST /auth/login [200] 154ms
Data:
{
"responseTime": "154ms",
"contentLength": "1012",
"httpVersion": "1.1"
}
[21:20:15] WARNING No bearer token found [unauthorized 401] GET /users
Stack Trace:
Error: No bearer token found
at middleware (/path/to/authJwt.ts:31:13)
...
src/middleware/authJwt/authJwt.ts:31
Production Format (LOG_FORMAT=json):
{"severity":"INFO","message":"SERVER_RUNNING","timestamp":"2025-12-09T21:16:15.028Z","serviceContext":{"service":"my-service","version":"1.0.0"}}
{"severity":"DEBUG","message":"POST /auth/login [200] 154ms","timestamp":"2025-12-09T21:16:15.182Z"}Usage in Code
import { reportInfo, reportError, reportDebug } from 'node-server-engine';
// Info logging
reportInfo('Server started successfully');
reportInfo({ message: 'User login', data: { userId: '123' } });
// Error logging
reportError(error);
reportError(error, request); // Include HTTP request context
// Debug logging (requires DEBUG env var)
reportDebug({ namespace: 'app:auth', message: 'Token validated' });
Entities
Server
The server class encapsulates all of the express boilerplate. Instantiate one per service and initialize it to get started.
import { Server } from 'node-server-engine';
const server = new Server(config);
server.init();
Server Configuration
| Property | Type | Behavior | Default |
| --- | --- | --- | --- |
| port | Number | Port to listen on | process.env.PORT |
| endpoints | Array<Endpoint> | List of endpoints that should be served | [] |
| globalMiddleware | Array<Function> or Array<{middleware: Function, path: String}> | Middlewares executed before each endpoint's logic. When given as an object with a path, the middleware is only applied to requests with that base path | [] |
| errorMiddleware | Array<Function> or Array<{middleware: Function, path: String}> | Middlewares executed after each endpoint's logic. When given as an object with a path, the middleware is only applied to requests with that base path | [] |
| initCallbacks | Array<Function> | Functions called on server start | [] |
| syncCallbacks | boolean | Forces the init callbacks to run one after the other instead of in parallel | false |
| cron | Array<Object> | Cron jobs started on server start | [] |
| shutdownCallbacks | Array<Function> | Functions called on server shutdown | [] |
| checkEnvironment | Object | Schema against which the environment variables are verified; the server terminates if they are not properly set | {} |
| webSocket.server | Object | Settings used to create a WebSocket server. See the ws package documentation for details | |
| webSocket.client | Object | Settings passed down to SocketClient when a new socket connection is established | |
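A fuller configuration sketch is shown below; the endpoint, middleware, and callback names are illustrative placeholders, not exports of the engine.
import { Server } from 'node-server-engine';
// healthEndpoint, requestLogger, connectToDatabase and closeDatabase
// are hypothetical values defined elsewhere in your service
const server = new Server({
  port: 8080,
  endpoints: [healthEndpoint],
  globalMiddleware: [{ middleware: requestLogger, path: '/api' }],
  initCallbacks: [connectToDatabase],
  syncCallbacks: true, // run init callbacks sequentially
  shutdownCallbacks: [closeDatabase]
});
server.init();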
Endpoint
Endpoint encapsulates the logic of a single endpoint in a standard way.
The main function of the endpoint is called the handler. This function should only handle pure business logic for a given endpoint.
An endpoint usually has a validator. A validator is a schema that will be compared to the incoming request. The request will be denied if it contains illegal or malformed arguments. For more detail see the documentation of the underlying package express-validator.
import { Endpoint, EndpointMethod } from 'node-server-engine';
// A basic handler that returns an HTTP status code of 200 to the client
function handler(req, res) {
res.sendStatus(200);
}
// The request must contain `id` as a query string and it must be a UUID V4
const validator = {
id: {
in: 'query',
isUUID: {
options: 4
}
}
};
// This endpoint can be passed to the Server
new Endpoint({
path: '/demo',
method: EndpointMethod.GET,
handler,
validator
});
Endpoint Configuration
new Endpoint(config)
| Property | Type | Behavior | Default |
| --- | --- | --- | --- |
| path | String | Path at which the endpoint should be served | required |
| method | Method | HTTP method for which the endpoint should be served | required |
| handler | Function | Endpoint handler | required |
| validator | Object | Schema to validate the request against. See the express-validator documentation for more details | |
| authType | AuthType | Authentication to use for this endpoint | AuthType.NONE |
| authParams | Object | Options specific to the authentication method | {} |
| files | Array<Object> | Configuration for file uploads. See the specific documentation | {} |
| middleware | Array<Function> | List of middlewares to run before the handler | [] |
| errorMiddleware | Array<Function> | List of middlewares to run after the handler | [] |
const addNoteEndpoint = new Endpoint({
path: '/note',
method: EndpointMethod.POST,
handler: (req, res, next) => res.json(addNote(req.body)),
authType: EndpointAuthType.AUTH_JWT,
middleware: [checkPermission(['DeleteUser', 'AdminAccess'])],
errorMiddleware: [addNoteErrorMiddleware],
validator: {
id: {
in: 'body',
isUUID: true
},
content: {
in: 'body',
isLength: {
errorMessage: 'content too long',
options: {
max: 150
}
}
}
}
});
Methods
The following HTTP methods are supported
- Method.GET
- Method.POST
- Method.PUT
- Method.PATCH
- Method.DELETE
- Method.ALL - respond to all requests on a path
Authentication
Endpoints can take an authType and authParams in their configuration to determine their authentication behavior. The following table summarizes their usage.
The server engine exposes an enumeration for auth types.
import {AuthType} from 'node-server-engine';
| AuthType | Description | AuthParams |
| --- | --- | --- |
| AuthType.NONE | No authentication. All requests are handled | |
| AuthType.JWT | A valid JSON Web Token is required as Bearer token. To be valid it must be properly signed by auth0, and its payload must match what is set in the environment variables. The user's ID is added to req.user | |
| AuthType.TLS | Authenticate through mutual TLS. The CA and an optional list of whitelisted host names should be set in the environment variables | whitelist [Array]: List of certificate Common Names or Alt Names that are permitted to make requests to this endpoint |
| AuthType.HMAC | Authenticate with a signature in the payload. This authentication is deprecated and should be avoided | secret [String]: Overrides the signature secret set in the environment variables. isGithub [Boolean]: The request is a GitHub webhook and therefore uses GitHub's signature system instead of the default one |
| AuthType.STATIC | A valid shared Bearer token is required. The shared token is stored in an environment variable (STATIC_TOKEN) | |
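As an example, here is a sketch of an endpoint restricted to whitelisted mTLS peers (the path and Common Names are illustrative):
import { Endpoint, EndpointMethod, AuthType } from 'node-server-engine';
new Endpoint({
  path: '/internal/sync',
  method: EndpointMethod.GET,
  authType: AuthType.TLS,
  // Only certificates with these Common Names / Alt Names are accepted
  authParams: { whitelist: ['billing-service', 'reporting-service'] },
  handler: (req, res) => res.sendStatus(200)
});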
File Upload Middleware
This middleware handles multipart file uploads in an Express application. It processes files in memory, validates them based on configuration options, and ensures that required files are uploaded.
Usage
The request must be made using multipart/form-data. The uploaded files will be available in req.files.
The following settings can be used on each object of the Endpoint's options.files.
| Property | Type | Description | Default |
| --- | --- | --- | --- |
| key | string | Form key under which the file is sent | required |
| maxSize | string | Maximum file size in a human readable format (ex: 5MB) | required |
| mimeTypes | Array<string> | A list of accepted MIME types | [] |
| required | boolean | Fails the request and stores no files if this file is missing | false |
| noExtension | boolean | Store the file with no extension | false |
Example
import { body } from 'express-validator';
import { Endpoint, middleware, AuthType, Method } from 'node-server-engine';
const filesConfig = [
{ key: 'avatar', mimeTypes: ['image/png', 'image/jpeg'], required: true },
{ key: 'document', mimeTypes: ['application/pdf'], maxSize: '5MB' }
];
new Endpoint({
path: '/upload',
method: Method.POST,
authType: AuthType.JWT,
files: filesConfig,
handler: (req, res) => {
res.json({ message: 'Files uploaded successfully', files: req.files });
}
});
Middleware Output
The middleware adds a files array to the req object, which contains information about the uploaded files.
[
{
"fieldname": "avatar",
"originalname": "profile.png",
"mimetype": "image/png",
"size": 204800,
"buffer":[]
},
{
"fieldname": "document",
"originalname": "resume.pdf",
"mimetype": "application/pdf",
"size": 512000,
"buffer":[]
}
]
Features
- Supports multiple file uploads.
- Validates file types and sizes.
- Ensures required files are uploaded.
- Uses memory storage (files are not saved to disk).
This middleware simplifies file handling in Express, making it easy to manage uploads while enforcing validation rules.
Multipart File Upload Middleware
This middleware enables chunked file uploads in an Express application. It allows uploading large files by splitting them into smaller chunks, validating them, and merging them once all parts are received.
Usage
The request must be made using multipart/form-data. The uploaded chunks are processed in memory before being stored in temporary directories. Once all chunks are uploaded, they are merged into a single file.
Configuration Options
The following settings can be used when configuring file uploads:
| Property | Type | Description | Default |
| -------- | ------- | --------------------------------------------------- | -------- |
| maxSize | string | Maximum allowed size for each chunk (e.g., "5MB") | No limit |
| required | boolean | Whether the file is mandatory for the request | false |
Expected Request Format
The client must send the following fields in the multipart/form-data request (a client-side sketch follows the table):
| Field | Type | Description |
| ------------- | ------ | ---------------------------------------- |
| file | File | The chunked file data |
| filename | String | Name of the original file |
| uniqueID | String | Unique identifier for the upload session |
| chunkIndex | Number | Current chunk number (0-based index) |
| totalChunks | Number | Total number of chunks for the file |
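A minimal client-side sketch of this contract, assuming a browser File object and an /upload endpoint (both illustrative):
// Split a File into 1MB chunks and send them sequentially
async function uploadInChunks(file, uniqueID) {
  const chunkSize = 1024 * 1024;
  const totalChunks = Math.ceil(file.size / chunkSize);
  for (let chunkIndex = 0; chunkIndex < totalChunks; chunkIndex++) {
    const form = new FormData();
    form.append('file', file.slice(chunkIndex * chunkSize, (chunkIndex + 1) * chunkSize));
    form.append('filename', file.name);
    form.append('uniqueID', uniqueID);
    form.append('chunkIndex', String(chunkIndex));
    form.append('totalChunks', String(totalChunks));
    await fetch('/upload', { method: 'POST', body: form });
  }
}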
Middleware Output
The middleware adds a multipartFile object to the req object, which contains information about the uploaded file.
Until all chunks have been received, req.multipartFile has the below JSON:
{
"isPending": true,
"originalname": "example.pdf",
"uniqueID": "abc123",
"chunkIndex": 2,
"totalChunks": 5
}
When the upload is complete, req.multipartFile has the below JSON:
{
"isPending": false,
"originalname": "example.pdf",
"uniqueID": "abc123",
"filePath": "/uploads/completed_files/abc123_example.pdf"
}
Example
import { body } from 'express-validator';
import { Endpoint, middleware, AuthType, Method } from 'node-server-engine';
const fileConfig = { maxSize: '10MB', required: true };
new Endpoint({
path: '/upload',
method: Method.POST,
authType: AuthType.JWT,
multipartFile: fileConfig,
handler: (req, res) => {
  // Respond with the current upload state (pending or completed)
  res.json(req.multipartFile);
}
});
Socket Client
The WebSocket server starts automatically if the Server's webSocket option is provided.
Each WebSocket connection creates a new SocketClient instance with built-in authentication, message routing, and connection management.
Features
- JWT Authentication: Secure token-based authentication with automatic renewal
- Message Routing: Type-based message handlers similar to HTTP endpoints
- Connection Health: Built-in ping/pong mechanism with configurable intervals
- Lifecycle Callbacks: Hooks for initialization, authentication, and shutdown
- Error Handling: Automatic error formatting and client notification
Socket Client Options
Options can be set when configuring the Server's webSocket:
import { Server, MessageHandler } from 'node-server-engine';
const server = new Server({
webSocket: {
client: {
handlers: [messageHandler1, messageHandler2],
initCallbacks: (client) => {
console.log(`Client ${client.id} connected`);
},
authCallbacks: (client) => {
const user = client.getUser();
console.log(`User ${user.userId} authenticated`);
},
shutdownCallbacks: (client) => {
console.log(`Client ${client.id} disconnected`);
}
}
}
});
| Parameter | Type | Description | Default |
| --- | --- | --- | --- |
| handlers | Array<MessageHandler> | A list of message handlers to use | required |
| authCallbacks | Function or Array<Function> | Callbacks called when a client successfully authenticates | [] |
| initCallbacks | Function or Array<Function> | Callbacks called when the socket client is created | [] |
| shutdownCallbacks | Function or Array<Function> | Callbacks called when the socket client is destroyed | [] |
Client Properties & Methods
| Property/Method | Type | Description |
| --- | --- | --- |
| id | String | Unique identifier for the connection |
| establishedAt | Date | Timestamp when the connection was established |
| isAuthenticated() | () => Boolean | Check if the client is authenticated |
| getUser() | () => SocketUser or undefined | Get authenticated user data (userId, deviceId, tokenId, audience) |
| sendMessage() | (type, payload, options) | Send a message to the client |
Client Authentication
Clients authenticate by sending a message:
// Client-side
websocket.send(JSON.stringify({
type: 'authenticate',
payload: { token: 'your-jwt-token' }
}));
// Server will:
// 1. Verify the JWT token
// 2. Extract user information (userId, deviceId, audience)
// 3. Set authentication status
// 4. Trigger authCallbacks
// 5. Send renewal reminder 1 minute before token expiration
Environment Variables
| Variable | Description | Default |
| --- | --- | --- |
| WS_PING_INTERVAL | Interval between ping checks (seconds) | 30 |
| WEBSOCKET_CLOSE_TIMEOUT | Timeout for graceful close (milliseconds) | 3000 |
Message Handler
Message handlers are similar to Endpoints, but in a WebSocket context. They define how incoming messages should be handled.
import { MessageHandler, Server } from 'node-server-engine';
function handler(payload, client) {
// payload is the message payload in a standard message.
// client is an instance of SocketClient
}
const messageHandler = new MessageHandler(type, handler, options);
new Server({
webSocket: { client: { handlers: [messageHandler] } }
});
| Argument | Type | Description | Default |
| --- | --- | --- | --- |
| type | String | The message type that should be handled | required |
| handler | Function | A function that will run for every message of this type | required |
| options.authenticated | Boolean | Flag indicating that this message type requires an authenticated client | true |
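A concrete sketch (the chat:message type and echo logic are illustrative):
import { MessageHandler, Server } from 'node-server-engine';
const chatHandler = new MessageHandler(
  'chat:message',
  (payload, client) => {
    // Echo the message back to the sender
    client.sendMessage('chat:message', { text: payload.text, from: client.id });
  },
  { authenticated: true } // reject messages from unauthenticated clients
);
new Server({ webSocket: { client: { handlers: [chatHandler] } } });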
Redis
The server engine exposes a Redis client configured for production use with automatic reconnection, retry logic, and optional TLS support.
It is a pre-configured instance of ioredis v5.8.2. See the package documentation for more details.
import { Redis } from 'node-server-engine';
// Initialize (automatically called by Server, or manually)
Redis.init();
// Use Redis commands
await Redis.set('key', 'value');
const value = await Redis.get('key');
await Redis.del('key');
await Redis.expire('key', 3600);
// Get underlying client for advanced use
const client = Redis.getClient();
// Shutdown when done
await Redis.shutdown();
Features
- Automatic Reconnection: Exponential backoff retry strategy (up to 2s delay)
- Error Recovery: Reconnects on READONLY, ECONNRESET, and ETIMEDOUT errors
- Connection Management: Event listeners for connect, ready, error, close, reconnecting
- TLS Support: Automatic TLS configuration when TLS_CA is provided
- Test Mode: Lazy connection in test environments to prevent connection attempts
Environment Variables
| Variable | Description | Default | Required |
| --- | --- | --- | --- |
| REDIS_HOST | Redis server hostname or IP | - | ✓ |
| REDIS_PORT | Redis server port | 6379 | ✗ |
| REDIS_USERNAME | Username for authentication | - | ✗ |
| REDIS_PASSWORD | Password for authentication | - | ✗ |
| TLS_CA | TLS certificate authority for SSL/TLS | - | ✗ |
Configuration Options
You can customize Redis client creation:
import { createRedisClient } from 'node-server-engine';
const client = createRedisClient({
db: 2, // Database index (default: 0)
lazyConnect: true, // Don't connect until first command
enableReadyCheck: false, // Disable ready check
redis: { // Override any ioredis options
connectTimeout: 10000,
commandTimeout: 5000
}
});
AWSS3
The AWSS3 entity is a generic wrapper around AWS S3 (and S3-compatible services like MinIO, LocalStack). It supports uploads, downloads, streaming, metadata, and deletion with size validation and UUID-based path generation.
Based on @aws-sdk/client-s3 v3.
import { AWSS3 } from 'node-server-engine';
import fs from 'fs';
// Initialize with configuration (optional; otherwise uses environment variables)
AWSS3.init({
region: 'us-east-1',
accessKeyId: 'YOUR_ACCESS_KEY',
secretAccessKey: 'YOUR_SECRET_KEY',
// For S3-compatible services
// endpoint: 'http://localhost:4566',
// forcePathStyle: true
});
// Or rely on environment variables and auto-init
// AWSS3.init();
// Upload a file
const fileStream = fs.createReadStream('photo.jpg');
const uploaded = await AWSS3.upload(
fileStream,
'my-bucket',
{ directory: 'photos', mime: 'image/jpeg' },
{ maxSize: '5MB' },
{ userId: '123' } // optional metadata
);
console.log(uploaded.Key); // photos/<uuid>.jpeg
// Download entire file into memory
const { data, metadata } = await AWSS3.get('my-bucket', uploaded.Key);
console.log(metadata.ContentLength);
// Stream a large file
const { stream } = await AWSS3.download('my-bucket', uploaded.Key);
stream.pipe(res);
// Get a fast stream without metadata
const s = await AWSS3.getFileStream('my-bucket', uploaded.Key);
s.pipe(res);
// Metadata only
const head = await AWSS3.getMetadata('my-bucket', uploaded.Key);
// Delete
await AWSS3.delete('my-bucket', uploaded.Key);
// Generate unique destination path
const key = AWSS3.generateFileDestination({ directory: 'uploads/images', mime: 'image/png' });
Features
- Auto-initialization via environment variables
- Stream-based upload with maxSize validation
- Full download, streaming download, or stream-only access
- UUID-based file naming with directory and extension inference from MIME
- S3-compatible (supports custom endpoint and forcePathStyle)
API Methods
init(config?)
Initialize the S3 client explicitly; otherwise it will lazy-init using environment variables.
AWSS3.init({
region: 'us-east-1',
accessKeyId: '...',
secretAccessKey: '...',
sessionToken: '...', // optional
endpoint: 'http://localhost:9000', // optional (MinIO/LocalStack)
forcePathStyle: true // optional (S3-compatible)
});
upload(stream, bucket, destinationOptions?, uploaderOptions?, metadata?)
Uploads content from a readable stream.
- destinationOptions: { directory?, fileName?, mime?, noExtension? }
- uploaderOptions: { maxSize?: string }, e.g. 5MB or 100KB
- metadata: key/value pairs stored as object metadata
Returns Promise<S3UploadedFile> with keys like Bucket, Key, ETag, Location.
get(bucket, key)
Downloads the full file into memory as Buffer and returns { data, metadata }.
download(bucket, key)
Returns { stream, metadata } for streaming large files efficiently.
getFileStream(bucket, key)
Returns a Readable stream without fetching metadata.
getMetadata(bucket, key)
Returns file metadata via a HEAD request.
delete(bucket, key)
Deletes the object.
generateFileDestination(options?)
Generates a unique key using UUID with optional directory and extension (from mime).
Environment Variables
| Variable | Description | Required |
| --- | --- | --- |
| AWS_REGION | AWS region (e.g., us-east-1) | ✗* |
| AWS_ACCESS_KEY_ID | Access key ID | ✗* |
| AWS_SECRET_ACCESS_KEY | Secret access key | ✗* |
| AWS_SESSION_TOKEN | Session token (temporary creds) | ✗ |
| AWS_S3_ENDPOINT | Custom S3-compatible endpoint (MinIO/LocalStack) | ✗ |
| AWS_S3_FORCE_PATH_STYLE | Use path-style addressing (true/false) | ✗ |
* Not required if you call init() with a config object.
Error Handling
- Throws WebError with status 413 when maxSize is exceeded during upload
- Other errors bubble up from AWS SDK v3 commands
Example:
try {
await AWSS3.upload(stream, 'bucket', {}, { maxSize: '1MB' });
} catch (e) {
if (e.statusCode === 413) {
console.log('File too large');
}
}
Usage in Template Projects
import { AWSS3, Endpoint, Method } from 'node-server-engine';
import { Readable } from 'stream';
new Endpoint({
path: '/upload-s3',
method: Method.POST,
files: [{ key: 'file', maxSize: '10MB', required: true }],
async handler(req, res) {
const file = req.files[0];
const stream = Readable.from(file.buffer);
const result = await AWSS3.upload(
stream,
process.env.S3_UPLOAD_BUCKET,
{ directory: 'user-uploads', mime: file.mimetype }
);
res.json({ key: result.Key, url: result.Location });
}
});
Sequelize
The server engine exposes an SQL ORM that is configured to work with the standard environment variables that are used in our services.
It is a pre-configured instance of sequelize. See the package documentation for more details.
import { Sequelize } from 'node-server-engine';
Sequelize.sequelize;
Sequelize.closeSequelize();
It can be configured through environment variables:
| env | description | default |
| --- | --- | --- |
| SQL_HOST | Host to connect to | |
| SQL_PORT | Port on which SQL is served | 5432 |
| SQL_PASSWORD | Password used to authenticate with the SQL server | |
| SQL_USER | User used to authenticate with the SQL server | |
| SQL_DB | Database to connect to | |
| SQL_TYPE | SQL dialect to use | postgres |
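Since this is a standard sequelize instance, models can be defined on it as usual. A minimal sketch (the model and its fields are illustrative):
import { Sequelize } from 'node-server-engine';
import { DataTypes } from 'sequelize';
// Define a model on the pre-configured instance
const Note = Sequelize.sequelize.define('Note', {
  id: { type: DataTypes.UUID, primaryKey: true, defaultValue: DataTypes.UUIDV4 },
  content: { type: DataTypes.STRING(150), allowNull: false }
});
// e.g. inside an endpoint handler or init callback:
// await Note.create({ content: 'Hello' });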
Pub/Sub
The engine exposes a PubSub entity that can be used to communicate with Google Cloud Pub/Sub.
import { PubSub } from 'node-server-engine';
/**
* Declare that the service will be publishing to a topic
* This must be done before init() is called
* Any attempt to publish a message without declaring a topic first will fail
* @property {string|Array.<string>} topic - The topic(s) to which we will be publishing
*/
PubSub.addPublisher(topic);
/**
* Binds a message handle to a subscription
* If called multiple times, handlers will be chained
* This must be done before init() is called
* The subscription will not be consumed until init() is called
* @property {string} subscription - The subscription to consume
* @property {function|Array.<function>} handler - The message handling function(s)
* @property {boolean} [first] - Puts the handler(s) at the beginning of the handling chain (default: false)
*/
PubSub.addSubscriber(subscription, handler, first);
/**
* Establish connection with all the declared publishers/subscribers
*/
await PubSub.init();
/**
* Send a message through a previously declared publisher
* @property {string} topic - The name of the topic to which the message should be pushed
* @property {Object} message - The actual message (will be JSON stringified)
* @property {Object} [attributes] - Message attributes
* @property {string} [orderingKey] - Ordering key
*/
await PubSub.publish(topic, message, attributes, orderingKey);
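// Example wiring (topic, subscription and payload are illustrative):
// PubSub.addPublisher('user-events');
// PubSub.addSubscriber('user-events-subscription', async (message) => { /* handle */ });
// await PubSub.init();
// await PubSub.publish('user-events', { userId: '123' }, { source: 'signup' });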
/**
* Flush all pending messages and close connections with Pub/Sub
*/
await PubSub.shutdown();
PushNotification
Communication interface with the push service.
import { PushNotification } from 'node-server-engine';
// The entity needs to initialize its pub/sub connections
// Handlers must be declared before this function is called
await PushNotification.init();
/**
* Send a push notification through the push service
* @param {String} userId - ID of the user that should receive the notification
* @param {Object} notification - Notification that should be sent
* @return {void}
*/
await PushNotification.sendPush(userId, notification);
Localizator
The localizator exposes localization related utilities.
import { Localizator } from 'node-server-engine';
// init() should be called first to load the data
// After that, the localizator regularly synchronizes new data without any further calls
await Localizator.init();
// Get the different ways of displaying a user's name
const { regular, formal, casual } = await Localizator.getDisplayNames(
firstName,
lastName,
locale
);
// This should be called when the program shuts down.
await Localizator.shutdown();
| env | description | default |
| --- | --- | --- |
| LOCALES_URL | Base URL where the locales data is stored | required |
ElasticSearch
The ElasticSearch entity provides a managed client for Elasticsearch with automatic migrations, connection management, and TLS support.
Based on @elastic/elasticsearch v9.2.0.
import { ElasticSearch } from 'node-server-engine';
// Initialize (runs migrations automatically)
await ElasticSearch.init();
// Get the client for operations
const client = ElasticSearch.getClient();
// Use Elasticsearch operations
await client.index({
index: 'products',
id: '123',
document: { name: 'Product', price: 99.99 }
});
const result = await client.search({
index: 'products',
query: { match: { name: 'Product' } }
});
// Shutdown when done
await ElasticSearch.shutdown();
Features
- Automatic Migrations: Tracks and runs migrations on startup
- Connection Verification: Pings cluster on initialization
- TLS Support: Optional SSL/TLS configuration
- Retry Logic: Built-in retry mechanism with configurable attempts
- Test Mode: Auto-flushes indices in test environment
- Error Handling: Detailed error reporting with context
Migration System
Create migration files in your specified migration directory:
// migrations/001-create-products-index.ts
import { Client } from '@elastic/elasticsearch';
export async function migrate(client: Client): Promise<void> {
await client.indices.create({
index: 'products',
settings: {
number_of_shards: 1,
number_of_replicas: 1
},
mappings: {
properties: {
name: { type: 'text' },
price: { type: 'double' },
createdAt: { type: 'date' }
}
}
});
}
Migrations are:
- Run automatically on init()
- Tracked in the migrations index
- Executed once per file
- Run in alphabetical order
Environment Variables
| Variable | Description | Required |
| --- | --- | --- |
| ELASTIC_SEARCH_HOST | Elasticsearch cluster URL | ✓ |
| ELASTIC_SEARCH_USERNAME | Username for authentication | ✓ |
| ELASTIC_SEARCH_PASSWORD | Password for authentication | ✓ |
| ELASTIC_SEARCH_MIGRATION_PATH | Absolute path to the migrations directory | ✓ |
| TLS_CA | TLS certificate authority for SSL/TLS | ✗ |
Configuration
The client is configured with:
- Max Retries: 3 attempts
- Request Timeout: 30 seconds
- Sniff on Start: Disabled (use with clusters)
- TLS: Enabled when TLS_CA is provided
GoogleCloudStorage
The GoogleCloudStorage entity provides a simple, generic wrapper for Google Cloud Storage operations. It handles file uploads, downloads, streaming, and deletion with built-in size validation and automatic file path generation.
Based on @google-cloud/storage v7.14.0.
import { GoogleCloudStorage } from 'node-server-engine';
import fs from 'fs';
// Initialize with configuration
GoogleCloudStorage.init({
projectId: 'my-project',
keyFilename: '/path/to/keyfile.json'
});
// Or use environment variables (GC_PROJECT, GOOGLE_APPLICATION_CREDENTIALS)
GoogleCloudStorage.init();
// Upload a file
const fileStream = fs.createReadStream('photo.jpg');
const result = await GoogleCloudStorage.upload(
fileStream,
'my-bucket',
{ directory: 'photos', mime: 'image/jpeg' },
{ metadata: { contentType: 'image/jpeg' } },
{ maxSize: '5MB' }
);
console.log(result.name); // photos/uuid.jpeg
// Download file as Buffer
const { data, metadata } = await GoogleCloudStorage.get('my-bucket', 'photos/image.jpg');
console.log(metadata.size); // File size in bytes
fs.writeFileSync('downloaded.jpg', data);
// Stream a file (for large files)
const { stream, metadata } = await GoogleCloudStorage.download('my-bucket', 'videos/video.mp4');
stream.pipe(res); // Stream to HTTP response
// Get file stream directly (fastest, no metadata)
const stream = GoogleCloudStorage.getFileStream('my-bucket', 'audio/song.mp3');
stream.pipe(res);
// Delete a file
await GoogleCloudStorage.delete('my-bucket', 'temp/old-file.txt');
// Generate unique file paths
const path = GoogleCloudStorage.generateFileDestination({
directory: 'uploads/images',
mime: 'image/jpeg'
});
// Result: 'uploads/images/a1b2c3d4-e5f6-7890-abcd-ef1234567890.jpeg'
Features
- Auto-initialization: Automatically initializes on first use if not manually initialized
- Flexible Configuration: Supports config object, environment variables, or credentials
- File Upload: Stream-based uploads with size validation
- Multiple Download Methods: Full download, streaming, or direct stream access
- Path Generation: Automatic UUID-based file naming with directory and extension support
- Size Validation: Built-in file size limits with human-readable formats (5MB, 100KB, etc.)
- Error Handling: Detailed error reporting with context
- No Project Dependencies: Generic implementation works with any Google Cloud Storage bucket
API Methods
init(config?)
Initialize Google Cloud Storage with configuration. Optional - will auto-initialize with environment variables if not called.
GoogleCloudStorage.init({
projectId: 'my-project-id',
keyFilename: '/path/to/service-account-key.json',
// Or use credentials directly
credentials: {
client_email: '[email protected]',
private_key: '-----BEGIN PRIVATE KEY-----\n...'
},
// For local emulator
apiEndpoint: 'http://localhost:9000'
});
upload(stream, bucket, destinationOptions?, storageOptions?, uploaderOptions?)
Upload a file to Google Cloud Storage.
Parameters:
- stream (Readable): Node.js readable stream of the file content
- bucket (string): Bucket name
- destinationOptions (object, optional):
  - directory (string): Subdirectory path (e.g., 'uploads/images')
  - fileName (string): Specific filename (a UUID is generated if not provided)
  - mime (string): MIME type for extension detection
  - noExtension (boolean): Don't append a file extension
- storageOptions (object, optional): Google Cloud Storage write stream options
- uploaderOptions (object, optional):
  - maxSize (string): Maximum file size (e.g., '5MB', '100KB', '1GB')
Returns: Promise<StorageUploadedFile> - Metadata of uploaded file
Example:
const fileStream = fs.createReadStream('document.pdf');
const result = await GoogleCloudStorage.upload(
fileStream,
'documents-bucket',
{ directory: 'legal/contracts', mime: 'application/pdf' },
{ metadata: { contentType: 'application/pdf' } },
{ maxSize: '10MB' }
);
get(bucket, path)
Download a file and return its content as a Buffer along with metadata.
Parameters:
bucket(string): Bucket namepath(string): File path in the bucket
Returns: Promise<{data: Buffer, metadata: StorageUploadedFile}>
Example:
const { data, metadata } = await GoogleCloudStorage.get('my-bucket', 'photos/image.jpg');
console.log(metadata.contentType); // 'image/jpeg'
console.log(data.length); // File size in bytes
download(bucket, path)
Get a readable stream for a file along with its metadata. Use this for large files or when you need to stream content.
Parameters:
bucket(string): Bucket namepath(string): File path in the bucket
Returns: Promise<{stream: Readable, metadata: StorageUploadedFile}>
Example:
const { stream, metadata } = await GoogleCloudStorage.download('my-bucket', 'videos/large-video.mp4');
console.log(metadata.size); // File size
stream.pipe(response); // Stream to HTTP response
getFileStream(bucket, path)
Get a readable stream for a file without fetching metadata. Fastest option when metadata is not needed.
Parameters:
bucket(string): Bucket namepath(string): File path in the bucket
Returns: Readable - Node.js readable stream
Example:
const stream = GoogleCloudStorage.getFileStream('my-bucket', 'audio/song.mp3');
stream.pipe(response); // Direct streaming
delete(bucket, path)
Delete a file from Google Cloud Storage.
Parameters:
bucket(string): Bucket namepath(string): File path in the bucket
Returns: Promise<void>
Example:
await GoogleCloudStorage.delete('my-bucket', 'temp/old-file.txt');
generateFileDestination(options?)
Generate a unique file path with optional directory and extension.
Parameters:
- options (object, optional):
  - directory (string): Subdirectory path
  - mime (string): MIME type for extension detection
  - noExtension (boolean): Don't append an extension
Returns: string - Generated file path
Examples:
// UUID only
GoogleCloudStorage.generateFileDestination();
// → 'a1b2c3d4-e5f6-7890-abcd-ef1234567890'
// With directory and MIME type
GoogleCloudStorage.generateFileDestination({
directory: 'uploads/images',
mime: 'image/jpeg'
});
// → 'uploads/images/a1b2c3d4-e5f6-7890-abcd-ef1234567890.jpeg'
// Without extension
GoogleCloudStorage.generateFileDestination({
directory: 'data',
noExtension: true
});
// → 'data/a1b2c3d4-e5f6-7890-abcd-ef1234567890'
Environment Variables
| Variable | Description | Required |
| --- | --- | --- |
| GC_PROJECT | Google Cloud Project ID | ✗* |
| GOOGLE_APPLICATION_CREDENTIALS | Path to service account key file | ✗* |
* Not required if you call init() with a config object
Error Handling
The entity throws WebError with appropriate status codes:
- 413 (Payload Too Large): File exceeds the maxSize limit
- Other errors are passed through from the Google Cloud Storage SDK
Example:
try {
await GoogleCloudStorage.upload(stream, 'bucket', {}, {}, { maxSize: '1MB' });
} catch (error) {
if (error.statusCode === 413) {
console.log('File too large!');
}
}
Usage in Template Projects
In your node-server-template endpoints:
import { GoogleCloudStorage, Endpoint, Method } from 'node-server-engine';
import { Readable } from 'stream';
new Endpoint({
path: '/upload',
method: Method.POST,
files: [{ key: 'file', maxSize: '5MB', required: true }],
async handler(req, res) {
const file = req.files[0];
const stream = Readable.from(file.buffer);
const result = await GoogleCloudStorage.upload(
stream,
process.env.UPLOAD_BUCKET,
{ directory: 'user-uploads', mime: file.mimetype }
);
res.json({ path: result.name, url: result.mediaLink });
}
});
Translation Manager
The translation manager exposes translation related utilities.
import { TranslationManager } from 'node-server-engine';
// init() should be called first to load the data
// After that, the translation manager regularly synchronizes new data without any further calls
await TranslationManager.init();
// Fetch a translated string
const translatedString = await TranslationManager.translate(
lang,
key,
variables,
tags
);
// Example
const translatedString = await TranslationManager.translate(
'zh-TW',
'email.invitation.body',
{ name: 'John' },
{ link: ['a', 'href="https://www.test.com"'] }
);
// This should be called when the program shuts down.
await TranslationManager.shutdown();
- lang [String]: Locale for which the translation should be fetched (if no data is found, the translation is returned in en-US)
- key [String]: Translation key
- variables [Object]: A key=>value mapping for variable interpolation in strings (optional)
- tags [Object]: A key=>value mapping for tag interpolation in strings, as in the example above (optional)
| env | description | default |
| --- | --- | --- |
| LOCALES_URL | Base URL where the locales data is stored | required |
Error Reporting
The server engine standardizes the way errors are handled and reported. The error classes provided by the Server Engine should always be used when throwing an exception.
Errors are a crucial part of the application: they are what helps us properly debug the program and offer support when needed, as well as what exposes issues to the client.
Log Output Formats
The engine automatically adapts log output based on the environment:
Local Development (Readable Format)
- Colorized output with severity levels
- Formatted timestamps and file locations
- Pretty-printed data objects
- Stack traces with proper formatting
- HTTP request context when available
Production/GCP (JSON Format)
- Structured JSON for log aggregation
- Google Cloud Error Reporting integration
- Kubernetes pod information
- Service context metadata
Control Log Format
You can override the automatic detection:
# Force readable local format (useful for local Docker)
LOG_FORMAT=local npm start
# Force JSON format (useful for local testing)
LOG_FORMAT=json npm start
Example Local Output:
[2025-12-10T10:30:45.123Z] ERROR src/endpoints/users.ts:42 User not found
Error Code: user-not-found
Status: 404
Data:
{
"userId": "abc-123",
"requestId": "req-456"
}
As a standard, the client receives the following body when an error happens.
// HTTP Status code (400 | 500)
{
errorCode: "some-error", // Machine readable error code
hint: "The selected user does not exist" // (optional) Hint for developers
}
Status codes should be limited to 400 for client errors and 500 for server errors. Other 4XX status codes should be avoided except in very specific cases (ex: authentication errors).
All our custom error classes take a data parameter. This will be logged on the backend and should contain any data that can help to understand the runtime context or the error (ex: user's ID).
Common options
These options are common to each of the error classes described below.
| option | definition | example | default |
| --- | --- | --- | --- |
| message | A message logged on the backend only | "Could not find user in the DB" | required |
| severity | The level at which this error should be logged | Severity.WARNING | Severity.CRITICAL |
| data | Data related to the error, logged on the backend. This should help to understand the runtime context | {userId: 'xf563ugh0'} | |
| error | An error object, when this is a wrapper around an existing error object | | |
Severity
Severity allows us to order errors by their impact on the program. It is important to set severity correctly: backend logs can include hundreds of entries per second, and severity allows us to filter out the most important errors. An enumeration is exposed that includes all the severity levels, as described in the following table.
Log Levels
| severity | definition |
| --- | --- |
| DEBUG | Detailed information about the runtime execution. Used for debugging |
| INFO | Base level of runtime information |
| WARNING | Errors that are expected to happen and do not cause any particular issue (ex: a client made an unauthenticated request) |
| CRITICAL | Errors that are unexpected and cause improper behavior (ex: failed to store some data in the DB) |
| EMERGENCY | Errors that prevent the program from running (ex: some environment variables are not correctly set) |
EngineError
This error class represents errors that happen within the Server Engine. They should be used for server configuration errors or unexpected behaviors. They will always return 500 - {errorCode: 'server-error'}.
import { EngineError, Severity } from 'node-server-engine';
if (!process.env.IMPORTANT_VARIABLE) {
  throw new EngineError({
    message: 'IMPORTANT_VARIABLE is not set',
    severity: Severity.EMERGENCY
  });
}
Engine Errors strictly follow the common options.
WebError
This error class represents errors that happen at runtime and that need specific reporting to the client. Their definition is more complex, but it includes additional data specific to the client.
import { WebError, Severity } from 'node-server-engine';
const user = await User.findOne({ where: { id: userId } });
if (!user) {
  throw new WebError({
    message: 'Could not find user in the DB',
    errorCode: 'unknown-user',
    statusCode: 400,
    severity: Severity.WARNING,
    data: { userId }
  });
}
In addition to the common options, WebError defines some other options specific to error reporting to clients.
| option | definition | example | default |
| --- | --- | --- | --- |
| errorCode | A machine readable error code that will be parsed by the client | unknown-user | required |
| statusCode | HTTP status code of the response | 400 | 500 |
| hint | A human readable error message intended for developers | | |
Middlewares
The server engine exposes a number of middlewares. These can be imported into your project and used globally or per endpoint.
Swagger Docs
This middleware allows the service to connect with the API Documentation Service. It exposes the necessary endpoint for the documentation of this API to be visible by the Documentation Service.
import { Server, middleware } from 'node-server-engine';
new Server({
globalMiddleware: [middleware.swaggerDocs()]
});
Structuring Documentation
The underlying system used is the Open API spec. Most of the data is already generated by the documentation service. The only real need is to document endpoints.
Endpoints should be documented in YAML files that respect the **/*.docs.yaml convention. It is recommended to place them in the same directory as the endpoint definition. Endpoint documentation has to follow the path object spec from Open API.
/hello:
get:
description: Request the API to say Hello
responses:
'200':
description: Replies hello
content:
application/json:
schema:
type: object
properties:
says:
type: string
example: Hello
Schemas and Responses
To avoid repeating the same structure across the documentation manually, one can use schemas and responses components.
Some common components are already defined directly in the API Documentation Service; please check its documentation to avoid duplication.
If you ever need to declare custom components, they simply must follow the structure below.
# Repository root
/src
/docs
/responses
- coolResponse.yaml
/schemas
- bestSchema.yaml
Here is an example definition:
Dog:
type: object
properties:
name:
type: string
example: Rex
owner:
type: string
example: f1982af0-1579-4c56-a138-de1ab4ff39b3
isAGoodBoy:
type: boolean
example: true
required:
- name
- owner
User Resolver
/!\ Must be used in combination with AuthType.JWT
Resolves the user making the request with the user resolver. The user's complete data is added to req.user.
import { Endpoint, middleware, AuthType, Method } from 'node-server-engine';
new Endpoint({
path: '/hello',
method: Method.GET,
authType: AuthType.JWT,
middleware: [middleware.userResolver],
handler: (req, res) => {
res.json({ say: `Hello ${req.user.firstName}` });
}
});
Gemini File Upload
An endpoint can upload a file to Google Gemini AI.
The request must be made as multipart/form-data.
The file should be uploaded under the key file.
The file's data will be available at req.body.fileUri, req.body.mimeType and req.body.originalname.
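A handler-side sketch reading the documented fields (the path is illustrative, and how the upload middleware itself is attached is not shown in this README):
import { Endpoint, Method } from 'node-server-engine';
new Endpoint({
  path: '/gemini-upload',
  method: Method.POST,
  handler: (req, res) => {
    // These fields are populated by the Gemini file upload middleware
    const { fileUri, mimeType, originalname } = req.body;
    res.json({ fileUri, mimeType, originalname });
  }
});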
Check Permission Middleware
⚠️ Must be used in combination with AuthType.JWT
Role-based access control middleware that checks if the authenticated user has at least one of the required permissions. All permission checks are case-insensitive for maximum flexibility.
Features
- ✅ Single or multiple permission checking
- ✅ Case-insensitive permission matching
- ✅ Integration with JWT authentication
- ✅ Clear error messages for debugging
- ✅ TypeScript support
Basic Usage
import { Endpoint, middleware, AuthType, Method } from 'node-server-engine';
// Single permission check
new Endpoint({
path: '/users',
method: Method.GET,
authType: AuthType.JWT,
middleware: [middleware.checkPermission('users:read')],
handler: (req, res) => {
res.json({ message: 'User list' });
}
});
// Multiple permissions (user needs at least ONE)
new Endpoint({
path: '/admin',
method: Method.GET,
authType: AuthType.JWT,
middleware: [middleware.checkPermission(['admin', 'superuser', 'moderator'])],
handler: (req, res) => {
res.json({ message: 'Admin access granted' });
}
});
User Object Structure
The middleware expects req.user to contain a permissions array:
interface User {
id: string;
permissions: string[]; // e.g., ['users:read', 'users:write', 'admin']
}
Examples
// Case-insensitive matching
checkPermission('ADMIN') // Matches: 'admin', 'Admin', 'ADMIN'
checkPermission(['READ', 'write']) // Matches any case variation
// Namespace-style permissions
checkPermission('users:read')
checkPermission(['users:write', 'users:delete'])
// Role-based permissions
checkPermission(['admin', 'moderator'])
Response Codes
- 200: Permission granted, request proceeds
- 403: Permission denied, returned when:
  - No user is authenticated
  - The user has no permissions array
  - The user lacks the required permission(s)
Error Responses
{
"message": "User does not have permissions"
}
{
"message": "Permission denied"
}
Utilities
The server engine ships with a handful of utility functions that are commonly used by servers.
Request
This function is a wrapper around axios (https://github.com/axios/axios). It adds proper error handling for reporting in the logs when a network call fails. It should be used for any requests made by a service.
Refer to the axios documentation for the request configuration.
import { request } from 'node-server-engine';
const { data } = await request({
method: 'get',
url: 'https://www.google.com'
});TLS Request
This function is a wrapper around request. It adds the necessary configuration to easily make requests with TLS. The settings used are based on the environment variables. It will use a request-specific certificate/key if defined, and will otherwise fall back to the ones used by the server.
It is necessary to use this function when calling other services in the cluster. Requests could fail otherwise as the common CA is not set and the client certificate not exposed.
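A usage sketch, assuming the wrapper is exported as tlsRequest (the export name is not confirmed by this README):
import { tlsRequest } from 'node-server-engine'; // export name assumed
// Takes the same configuration object as request()/axios;
// TLS material is picked up from the environment variables
const { data } = await tlsRequest({
  method: 'get',
  url: 'https://other-service.internal/health'
});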