@vvlad1973/base-agent
v1.8.0
JavaScript classes that implement an agent service base class for the BotApp platform
Base class for implementing agent services with multi-worker management capabilities.
Features
- Multiple independent worker threads with individual lifecycle control
- Automatic crash recovery with configurable retry limits
- Interval-based worker execution
- Hierarchical logger tree with automatic child logger creation for workers
- Integration with @vvlad1973/logger-tree for structured logging
- Worker-level logging with multiple severity levels (trace, debug, info, warn, error)
- Event-based architecture for monitoring worker lifecycle
Documentation
- Quick Start - Get started quickly
- Agent Architecture Guide - Detailed guide on agent creation, multi-worker orchestration, and execution flow
- Logger Integration - Logger tree integration guide
- Migration Guide - Migrating from simple-logger
Installation
npm install @vvlad1973/base-agent

Quick Start
Basic Usage
import { BaseAgent } from '@vvlad1973/base-agent';
class MyAgent extends BaseAgent {
protected executeTasksPath = './tasks.js';
constructor(context: any, config: any, logger: any) {
super({
context,
config,
logger,
loggerPath: 'myagent',
loggerOptions: { level: 'info' },
interval: 60 // Run every 60 seconds
});
}
}
const context = { users: usersDAL, scenario: scenarioProcessor };
const config = { agents: { myAgent: { start: true } } };
const agent = new MyAgent(context, config, logger);
await agent.start();

Integration with Existing Logger Tree (Recommended)
When integrating with an application that uses LoggerTree, pass the tree and binder to make the agent part of the hierarchical logging structure:
import { LoggerTree, LoggerBinder } from '@vvlad1973/logger-tree';
import { BaseAgent } from '@vvlad1973/base-agent';
// Application creates LoggerTree once
const { loggerTree, loggerBinder } = createLoggerTree(config);
class MyAgent extends BaseAgent {
protected executeTasksPath = './tasks.js';
constructor(options: AgentOptions) {
super(options);
}
}
const context = { users: usersDAL, scenario: scenarioProcessor };
const config = { agents: { myAgent: { start: true } } };
// Agent becomes part of the tree
const agent = new MyAgent({
context,
config,
loggerTree, // Share the tree
loggerBinder, // Share the binder
loggerPath: 'agents.MyAgent' // Position in hierarchy
});
await agent.start();

This creates the following logger hierarchy:
app (root)
├── Database
├── Users
└── agents
└── MyAgent (agent logger)
└── default (worker logger - created automatically)

Logger Hierarchy
The agent automatically creates a hierarchical logger structure:
- Standalone mode: Agent creates its own LoggerTree (for testing/simple cases)
- Integrated mode: Agent becomes part of an existing LoggerTree (recommended for production)
- Agent gets a logger at the specified loggerPath (default: 'agent')
- Each worker automatically gets a child logger at {loggerPath}.{workerId}
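The `{loggerPath}.{workerId}` rule above can be sketched as a one-line helper. This is illustrative only — the function name and the 'default' fallback are assumptions, not part of the base-agent API:

```typescript
// Illustrative helper (not part of the base-agent API): composes a worker's
// logger path from the agent's loggerPath, per the {loggerPath}.{workerId} rule.
function workerLoggerPath(agentPath: string, workerId: string = 'default'): string {
  return `${agentPath}.${workerId}`;
}

console.log(workerLoggerPath('agents.MyAgent'));             // agents.MyAgent.default
console.log(workerLoggerPath('agents.MyAgent', 'worker-1')); // agents.MyAgent.worker-1
```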
Standalone Mode Example
const agent = new MyAgent({
logger: pino({ level: 'info' }), // Optional external logger
loggerPath: 'agent'
});

Creates isolated hierarchy:
agent (root - isolated tree)
├── default (worker logger)
└── worker-1 (worker logger)

Integrated Mode Example (Recommended)
const { loggerTree, loggerBinder } = createLoggerTree(config);
const agent = new MyAgent({
loggerTree,
loggerBinder,
loggerPath: 'agents.TaskExecutor'
});

Becomes part of application hierarchy:
app (root)
├── Database
├── Users
└── agents
├── TaskExecutor (agent logger)
│ ├── default (worker logger)
│ └── task-finder (worker logger)
└── Scheduler (agent logger)
└── default (worker logger)

See LOGGER_INTEGRATION.md for a detailed integration guide.
Advanced Features
Lifecycle Hooks
Override lifecycle hooks to add custom initialization and cleanup logic:
class MyAgent extends BaseAgent {
protected executeTasksPath = './tasks.js';
constructor(context: any, config: any, logger: any) {
super({ context, config, logger, loggerPath: 'myagent' });
}
protected async onBeforeStart(): Promise<void> {
// Called before agent starts
this.logger.info('Initializing agent resources...');
await this.cleanupStuckTasks();
}
protected async onAfterStart(): Promise<void> {
// Called after agent successfully starts
this.logger.info('Agent is ready');
}
protected async onBeforeStop(): Promise<void> {
// Called before agent stops
this.logger.info('Preparing for shutdown...');
}
protected async onAfterStop(): Promise<void> {
// Called after agent successfully stops
this.logger.info('Agent stopped gracefully');
}
private async cleanupStuckTasks(): Promise<void> {
// Custom cleanup logic
}
}

Required Features Check
The agent automatically checks required features before starting. Configure required features in your config:
const config = {
agents: {
myAgent: {
start: true,
requiredFeatures: {
User: ['use_tasks', 'use_actions'],
Bot: ['advanced_features']
}
}
}
};
const context = {
config: {
features: {
User: { use_tasks: true, use_actions: true },
Bot: { advanced_features: false }
}
}
};
// Agent will throw error because Bot.advanced_features is disabled
await agent.start(); // Error: Required features are not enabled

You can override the checkRequiredFeatures() method for custom logic:
class MyAgent extends BaseAgent {
protected checkRequiredFeatures(): boolean {
// Custom feature checking logic
if (!this.context?.database) {
this.logger.warn('Database connection required');
return false;
}
return super.checkRequiredFeatures();
}
}

Context and Config Access
Access application context and configuration in your agent:
class MyAgent extends BaseAgent {
protected executeTasksPath = './tasks.js';
async doTask(task: any): Promise<void> {
// Access context
const user = await this.context.users.getUserById(task.userId);
// Access config
const interval = this.config.agents?.myAgent?.interval ?? 60;
// Use scenario processor from context
await this.context.scenario.doAction(task.actions, {}, user);
}
}

Worker Architecture
IMPORTANT: You do NOT need to implement your own worker! The package includes a built-in worker (worker.js) that handles all the threading logic. You only need to create simple modules with functions that fetch and process data.
How It Works
The base-agent uses a three-step execution model:
1. fetchObjects() → Returns array of objects to process
2. fetchTasks() → For each object, returns array of tasks (optional)
3. executeTasks() → Executes tasks for each object

The built-in worker handles:
- ✅ Worker thread lifecycle
- ✅ Module loading and imports
- ✅ Calling your functions in the correct order
- ✅ Error handling and recovery
- ✅ Shutdown signals
- ✅ Logging infrastructure
You only provide:
- 📦 Module(s) with your business logic functions
- 🔗 Paths to these modules in your agent class
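The division of labor above can be shown with a rough, in-process sketch of the loop the built-in worker runs over your functions. The real worker.js adds worker threads, interval scheduling, crash recovery, and logging; this only demonstrates the call order, and `runCycle` is an invented name:

```typescript
// In-process sketch of the built-in worker's cycle (illustrative, not the
// actual worker.js): fetchObjects once, then fetchTasks/executeTasks per object.
type ShutdownCheck = () => boolean;

async function runCycle(
  fetchObjects: ((stop: ShutdownCheck) => Promise<any[]>) | undefined,
  fetchTasks: ((obj: any, stop: ShutdownCheck) => Promise<any[]>) | undefined,
  executeTasks: (obj: any, tasks: any[], stop: ShutdownCheck) => Promise<void>,
  stop: ShutdownCheck
): Promise<void> {
  const objects = fetchObjects ? await fetchObjects(stop) : [];
  if (Array.isArray(objects) && objects.length > 0) {
    for (const object of objects) {
      if (stop()) break; // honor shutdown between objects
      const tasks = fetchTasks ? await fetchTasks(object, stop) : [];
      await executeTasks(object, tasks, stop);
    }
  } else {
    // No objects: run fetchTasks/executeTasks once without an object
    const tasks = fetchTasks ? await fetchTasks(undefined, stop) : [];
    await executeTasks(undefined, tasks, stop);
  }
}
```

Running this with stub functions shows fetchObjects called once, then fetchTasks and executeTasks called once per returned object — the three-step model described above.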
Single Worker vs Multiple Workers
Single Worker (default):
- One worker thread executing on a fixed interval
- Simple use case: periodic task processing
- Example: Check queue every 60 seconds
Multiple Workers (advanced):
- Multiple independent worker threads, each with own interval
- Use cases:
- Different priorities (fast/slow processing)
- Parallel processing of different data sets
- Different task types with different schedules
- Example: Critical tasks every 10s, normal tasks every 60s, cleanup every hour
Key Point: All workers in one agent share the same module functions (fetchObjects, fetchTasks, executeTasks), but execute independently on their own intervals. Use workerData.id to differentiate behavior if needed.
For detailed explanation of multi-worker orchestration, see Agent Architecture Guide.
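A hypothetical example of branching on the worker id inside a shared module — the ids ('critical', 'cleanup') and batch sizes are invented for illustration; workerData.id is the value the built-in worker exposes:

```typescript
import { workerData } from 'worker_threads';

// Hypothetical per-worker tuning in a shared module: the ids and batch
// sizes below are examples, not base-agent conventions.
export function batchSizeFor(workerId: string): number {
  switch (workerId) {
    case 'critical': return 10;  // fast worker: small, frequent batches
    case 'cleanup': return 500;  // slow worker: large, infrequent batches
    default: return 100;
  }
}

export async function fetchObjects(shutdownRequested: () => boolean): Promise<any[]> {
  const limit = batchSizeFor(workerData?.id ?? 'default');
  // ...fetch up to `limit` objects for this worker...
  return [];
}
```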
Execution Flow
// Built-in worker.js does this automatically:
// Step 1: Fetch objects (optional)
const objects = await fetchObjects(shutdownRequested);
if (Array.isArray(objects) && objects.length > 0) {
// Step 2 & 3: For each object
for (const object of objects) {
if (shutdownRequested()) break;
const tasks = await fetchTasks(object, shutdownRequested);
await executeTasks(object, tasks, shutdownRequested);
}
} else {
// Step 2 & 3: Without objects
const tasks = await fetchTasks(undefined, shutdownRequested);
await executeTasks(undefined, tasks, shutdownRequested);
}

Multi-Worker Timeline Example:
Agent with 3 workers (fast:10s, medium:30s, slow:60s)
Time: 0s 10s 20s 30s 40s 50s 60s
| | | | | | |
fast: [run] [run] [run] [run] [run] [run] [run]
medium:[run] [run] [run]
slow: [run] [run]
Each worker runs independently in its own thread!

Worker Implementation Patterns
Pattern 1: Simple Task Processing (executeTasks only)
When you just need to process tasks periodically without complex object-task relationships:
// my-agent.ts
import { BaseAgent } from '@vvlad1973/base-agent';
import { join } from 'path';
import { fileURLToPath } from 'url';
import { dirname } from 'path';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
class SimpleTaskAgent extends BaseAgent {
protected executeTasksPath: string;
constructor(options: AgentOptions) {
super(options);
// Point to your module with executeTasks function
this.executeTasksPath = join(__dirname, 'workers', 'process-tasks.js');
}
}

// workers/process-tasks.ts
import { parentPort, workerData } from 'worker_threads';
/**
* Execute tasks
* Called by built-in worker for each execution cycle
*/
export async function executeTasks(
context: any,
tasks: any[],
shutdownRequested: () => boolean
): Promise<void> {
const { database } = workerData.context;
// Fetch tasks from database
const pendingTasks = await database.getPendingTasks();
// Process each task
for (const task of pendingTasks) {
if (shutdownRequested()) break;
parentPort?.postMessage({
type: 'info',
message: `Processing task ${task.id}`
});
await processTask(task);
}
}

Pattern 2: Object-Task Processing (with fetchObjects)
When you need to fetch objects first, then process tasks for each object:
// my-agent.ts
class ObjectTaskAgent extends BaseAgent {
protected fetchObjectsPath: string;
protected executeTasksPath: string;
constructor(options: AgentOptions) {
super(options);
// Two separate modules for clarity
this.fetchObjectsPath = join(__dirname, 'workers', 'fetch-users.js');
this.executeTasksPath = join(__dirname, 'workers', 'process-user.js');
}
}

// workers/fetch-users.ts
import { workerData } from 'worker_threads';
/**
* Fetch objects to process
* Returns array of objects - built-in worker will call executeTasks for each
*/
export async function fetchObjects(
shutdownRequested: () => boolean
): Promise<any[]> {
const { database } = workerData.context;
// Return array of users that need processing
return await database.getActiveUsers();
}

// workers/process-user.ts
import { parentPort, workerData } from 'worker_threads';
/**
* Execute tasks for a user object
* Called by built-in worker for each object from fetchObjects
*/
export async function executeTasks(
user: any,
tasks: any[],
shutdownRequested: () => boolean
): Promise<void> {
const { notifier } = workerData.context;
if (!user) return;
parentPort?.postMessage({
type: 'info',
message: `Processing user ${user.id}`
});
// Process user notifications
await notifier.sendPendingNotifications(user);
}

Pattern 3: Full Object-Task-Execution (all three functions)
When you have complex relationships: objects have multiple tasks, each task needs individual processing:
// my-agent.ts
class FullPipelineAgent extends BaseAgent {
protected fetchObjectsPath: string;
protected fetchTasksPath: string;
protected executeTasksPath: string;
constructor(options: AgentOptions) {
super(options);
// Three separate modules for maximum clarity
this.fetchObjectsPath = join(__dirname, 'workers', 'fetch-projects.js');
this.fetchTasksPath = join(__dirname, 'workers', 'fetch-project-tasks.js');
this.executeTasksPath = join(__dirname, 'workers', 'execute-task.js');
}
}

// workers/fetch-projects.ts
export async function fetchObjects(shutdownRequested: () => boolean): Promise<any[]> {
const { workerData } = await import('worker_threads');
const { database } = workerData.context;
// Step 1: Return all projects that need processing
return await database.getActiveProjects();
}

// workers/fetch-project-tasks.ts
export async function fetchTasks(
project: any,
shutdownRequested: () => boolean
): Promise<any[]> {
const { workerData } = await import('worker_threads');
const { database } = workerData.context;
// Step 2: For each project, get its pending tasks
return await database.getProjectTasks(project.id, { status: 'pending' });
}

// workers/execute-task.ts
import { parentPort, workerData } from 'worker_threads';
export async function executeTasks(
project: any,
tasks: any[],
shutdownRequested: () => boolean
): Promise<void> {
const { executor } = workerData.context;
// Step 3: Execute each task for the project
for (const task of tasks || []) {
if (shutdownRequested()) break;
parentPort?.postMessage({
type: 'info',
message: `Executing task ${task.id} for project ${project.id}`
});
await executor.executeTask(task, project);
}
}

Pattern 4: Real-World Example - Telegram Long Polling
Complete example from @vvlad1973/telegram-bot showing how to implement a long-polling agent:
// TelegramLongPollingAgent.ts
import { BaseAgent } from '@vvlad1973/base-agent';
import { join } from 'path';
import { fileURLToPath } from 'url';
import { dirname } from 'path';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
export class TelegramLongPollingAgent extends BaseAgent {
protected fetchObjectsPath: string;
protected executeTasksPath: string;
constructor(options: TelegramLongPollingAgentOptions) {
const { client, getUpdatesTimeout = 30, limit = 100 } = options;
super({
...options,
context: {
client,
offset: 0,
timeout: getUpdatesTimeout,
limit
},
interval: 1, // Check for updates every second
loggerPath: 'telegram.polling'
});
// Set module paths - built-in worker will load and execute them
this.fetchObjectsPath = join(__dirname, 'workers', 'fetch-updates.js');
this.executeTasksPath = join(__dirname, 'workers', 'process-update.js');
}
}

// workers/fetch-updates.ts
import { parentPort, workerData } from 'worker_threads';
/**
* Fetch Telegram updates via getUpdates API
* Returns array of Update objects
*/
export async function fetchObjects(
shutdownRequested: () => boolean
): Promise<any[]> {
const { client, timeout, limit, offset } = workerData.context;
try {
const response = await client.getUpdates({ offset, timeout, limit });
if (!response.ok) {
parentPort?.postMessage({
type: 'error',
message: `getUpdates failed: ${response.description}`
});
return [];
}
const updates = response.result || [];
// Update offset for next call
if (updates.length > 0) {
const maxUpdateId = Math.max(...updates.map((u: any) => u.update_id));
workerData.context.offset = maxUpdateId + 1;
}
parentPort?.postMessage({
type: 'debug',
message: `Fetched ${updates.length} updates`
});
return updates; // Built-in worker will call executeTasks for each
} catch (error: any) {
parentPort?.postMessage({
type: 'error',
message: `Error fetching updates: ${error.message}`
});
return [];
}
}

// workers/process-update.ts
import { parentPort, workerData } from 'worker_threads';
/**
* Process a single Telegram update
* Called by built-in worker for each update from fetchObjects
*/
export async function executeTasks(
update: any,
tasks: any[],
shutdownRequested: () => boolean
): Promise<void> {
if (!update) return;
const { client } = workerData.context;
try {
parentPort?.postMessage({
type: 'debug',
message: `Processing update ${update.update_id}`
});
// Process update through client
await client.processUpdate(update);
parentPort?.postMessage({
type: 'debug',
message: `Successfully processed update ${update.update_id}`
});
} catch (error: any) {
parentPort?.postMessage({
type: 'error',
message: `Error processing update ${update.update_id}: ${error.message}`,
stack: error.stack
});
}
}

Key Points
- No custom worker needed: The package includes worker.js, which handles all threading
- Module separation: Split logic into separate modules for clarity (optional but recommended)
- Context passing: Pass application context through super({ context: {...} })
- Worker access: Modules access context via workerData.context
- Logging: Use parentPort.postMessage() for structured logging
- Paths: Point fetchObjectsPath, fetchTasksPath, and executeTasksPath at your modules
- Flexibility: Implement only the functions you need (executeTasks is the minimum)
Worker Logging
Workers can send log messages to their specific logger by posting messages to parentPort:
import { parentPort } from 'worker_threads';
// Different log levels
parentPort?.postMessage({ type: 'trace', message: 'Detailed trace message' });
parentPort?.postMessage({ type: 'debug', message: 'Debug information' });
parentPort?.postMessage({ type: 'info', message: 'Informational message' });
parentPort?.postMessage({ type: 'warning', message: 'Warning message' });
parentPort?.postMessage({
type: 'error',
message: 'Error message',
stack: error.stack
});

Context Serialization (Important!)
Understanding Worker Threads and Context Passing
BaseAgent uses Node.js Worker Threads to execute tasks in parallel. When you pass context and config to the agent, these objects must be serializable because they are transferred to worker threads using the Structured Clone Algorithm.
What Can Be Serialized
The structured clone algorithm supports:
| Type | Serializable | Example |
|--------------------------|--------------|-----------------------------------------------------------------------|
| Primitive values | ✅ Yes | string, number, boolean, null, undefined, BigInt |
| Plain objects | ✅ Yes | { name: 'John', age: 30 } |
| Arrays | ✅ Yes | [1, 2, 3], ['a', 'b', 'c'] |
| Date | ✅ Yes | new Date() |
| RegExp | ✅ Yes | /pattern/gi |
| Map, Set | ✅ Yes | new Map(), new Set() |
| ArrayBuffer, TypedArrays | ✅ Yes | Uint8Array, Buffer |
| Error objects | ✅ Yes | new Error('message') |
What Cannot Be Serialized
| Type | Serializable | Why Not |
|------------------|--------------|--------------------------------|
| Functions | ❌ No | Code cannot be cloned |
| Class instances | ❌ No | Methods are functions |
| Symbols | ❌ No | Unique per realm |
| DOM nodes | ❌ No | Browser-specific |
| Promises | ❌ No | Represent async state |
| WeakMap, WeakSet | ❌ No | Weak references not cloneable |
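Node's global structuredClone() (Node 17+) applies the same structured clone algorithm used when transferring workerData to a worker thread, so you can probe the tables above directly. The helper name here is illustrative:

```typescript
// Quick probe of the structured clone algorithm: returns true if `value`
// would survive the transfer to a worker thread, false if it would throw.
function isCloneable(value: unknown): boolean {
  try {
    structuredClone(value);
    return true;
  } catch {
    return false;
  }
}

console.log(isCloneable({ name: 'John', when: new Date(), tags: new Map() })); // true
console.log(isCloneable({ handler: () => {} }));                               // false
```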
Best Practices
❌ Wrong Approach: Passing Class Instances
// DON'T: This will fail!
class UserRepository {
async getUserById(id: number) {
return await this.db.query('SELECT * FROM users WHERE id = ?', [id]);
}
}
const context = {
users: new UserRepository() // ❌ Class instance with methods
};
const agent = new MyAgent({ context });
// Error: DataCloneError: function could not be cloned

✅ Correct Approach 1: Import Dependencies in Worker
Instead of passing class instances, import and create them in your worker modules:
// my-agent.ts
const context = {
dbConfig: {
host: 'localhost',
port: 5432,
database: 'mydb'
} // ✅ Plain object with config
};
const agent = new MyAgent({ context });

// workers/process-tasks.ts
import { workerData } from 'worker_threads';
import { UserRepository } from '../repositories/UserRepository.js';
import { DatabaseClient } from '../database/client.js';
export async function executeTasks(
object: any,
tasks: any[],
shutdownRequested: () => boolean
): Promise<void> {
// ✅ Create instances inside worker
const { dbConfig } = workerData.context;
const db = new DatabaseClient(dbConfig);
const users = new UserRepository(db);
for (const task of tasks || []) {
if (shutdownRequested()) break;
const user = await users.getUserById(task.userId);
await processTask(task, user);
}
}

✅ Correct Approach 2: Singleton Pattern
For shared resources like database connections, use singletons initialized in worker:
// database/connection.ts
let dbConnection: DatabaseClient | null = null;
export function getDatabaseConnection(config: any): DatabaseClient {
if (!dbConnection) {
dbConnection = new DatabaseClient(config);
}
return dbConnection;
}

// workers/process-tasks.ts
import { getDatabaseConnection } from '../database/connection.js';
import { workerData } from 'worker_threads';
export async function executeTasks(
object: any,
tasks: any[],
shutdownRequested: () => boolean
): Promise<void> {
// ✅ Get singleton instance
const db = getDatabaseConnection(workerData.context.dbConfig);
// Process tasks...
}

✅ Correct Approach 3: Pass Simple Data, Reconstruct Objects
Pass only serializable data and reconstruct complex objects in workers:
// my-agent.ts
import { BaseAgent } from '@vvlad1973/base-agent';
const context = {
apiUrl: 'https://api.example.com',
apiKey: process.env.API_KEY,
config: {
timeout: 5000,
retries: 3
} // ✅ Plain serializable data
};
const agent = new MyAgent({ context });

// workers/api-tasks.ts
import { workerData } from 'worker_threads';
import { ApiClient } from '../clients/ApiClient.js';
export async function executeTasks(
object: any,
tasks: any[],
shutdownRequested: () => boolean
): Promise<void> {
const { apiUrl, apiKey, config } = workerData.context;
// ✅ Reconstruct ApiClient from serializable data
const client = new ApiClient(apiUrl, apiKey, config);
for (const task of tasks || []) {
if (shutdownRequested()) break;
await client.execute(task);
}
}

Real-World Example: Telegram Bot
Here's how @vvlad1973/telegram-bot correctly handles this:
// ✅ Correct: Pass API credentials and config
const context = {
client: {
token: process.env.TELEGRAM_TOKEN,
apiUrl: 'https://api.telegram.org'
},
offset: 0,
timeout: 30,
limit: 100
};
const agent = new TelegramLongPollingAgent({ context });

// workers/fetch-updates.ts
import { workerData } from 'worker_threads';
import { TelegramClient } from '../client.js';
export async function fetchObjects(
shutdownRequested: () => boolean
): Promise<any[]> {
const { client } = workerData.context;
// ✅ Create TelegramClient from serializable config
const telegram = new TelegramClient(client.token, client.apiUrl);
const response = await telegram.getUpdates({
offset: workerData.context.offset,
timeout: workerData.context.timeout,
limit: workerData.context.limit
});
return response.result || [];
}

Common Errors and Solutions
Error: "DataCloneError: ... could not be cloned"
Cause: Trying to pass non-serializable object (class instance, function, etc.)
Solution: Pass only plain data, import and create instances in worker
Error: "Cannot read property 'method' of undefined"
Cause: Expecting class instance in worker, but received plain object
Solution: Import class in worker and create instance from plain data
Error: "this.someMethod is not a function"
Cause: Object with methods was serialized, methods were lost
Solution: Never pass objects with methods, reconstruct them in worker
Debugging Tips
- Log context before passing
console.log('Context:', JSON.stringify(context, null, 2));
// If JSON.stringify fails, you have non-serializable data

- Check for circular references
const seen = new WeakSet();
JSON.stringify(context, (key, value) => {
if (typeof value === 'object' && value !== null) {
if (seen.has(value)) {
return '[Circular]';
}
seen.add(value);
}
return value;
});

- Validate in worker
// workers/your-worker.ts
import { workerData } from 'worker_threads';
console.log('Received context:', workerData.context);
console.log('Context type:', typeof workerData.context);
console.log('Context keys:', Object.keys(workerData.context));

Summary
Key Rules:
- ✅ Pass: Plain objects, primitives, arrays, config data
- ❌ Don't pass: Class instances, functions, complex objects with methods
- ✅ Do: Import classes in worker, create instances from config
- ✅ Do: Use singleton pattern for shared resources
- ✅ Do: Test serialization with JSON.stringify(context)
Following these practices ensures your workers receive all necessary data without serialization errors.
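A minimal pre-flight check along these lines (names are illustrative; assumes Node 17+ for structuredClone) can report exactly which top-level context keys would fail the transfer, before the agent ever spawns a worker:

```typescript
// Illustrative pre-flight check: walks the top-level context keys and
// returns the ones that would fail the structured clone transfer.
function findNonCloneableKeys(context: Record<string, unknown>): string[] {
  const bad: string[] = [];
  for (const [key, value] of Object.entries(context)) {
    try {
      structuredClone(value);
    } catch {
      bad.push(key); // this key would trigger a DataCloneError
    }
  }
  return bad;
}

console.log(findNonCloneableKeys({ apiUrl: 'https://api.example.com', fetchFn: () => {} }));
// → ['fetchFn']
```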
API Reference
Constructor Options
interface AgentOptions {
// Application context and configuration
context?: any; // Application context (users, scenario, etc.)
config?: any; // Agent configuration object
// Logger tree options
loggerTree?: LoggerTree; // Optional existing tree
loggerBinder?: LoggerBinder; // Optional binder for tree
loggerPath?: string; // Path in tree (default: 'agent')
loggerOptions?: LoggerOptions; // Logger configuration
// Legacy logger options (used when creating new tree)
logger?: ExternalLogger;
level?: LoggerLevel;
// Worker options
interval?: number; // Default interval in seconds
allowForceStop?: boolean; // Allow force stop (default: true)
}

Methods
Worker Management
- start(id?: string): Promise<string> - Start a worker
- stop(id?: string, forceStop?: boolean): Promise<void> - Stop a worker
- startAll(): Promise<void> - Start all workers
- stopAll(forceStop?: boolean): Promise<void> - Stop all workers
- createWorker(options?: CreateWorkerOptions): string - Create a worker configuration
Configuration
- setInterval(seconds: number, id?: string): void - Set worker interval
- setMaxRestartAttempts(attempts: number | undefined, id?: string): void - Set restart limit
- setAllowForceStop(value: boolean): void - Set force stop permission
Information
- getWorkers(): WorkerInfo[] - Get all worker information
- getWorker(id?: string): WorkerInfo | undefined - Get specific worker info
- isWorkerStarted(id?: string): boolean - Check if a worker is running
- isEnabled(id?: string): boolean - Check if a worker is enabled
Events
- onWorkerCrash(handler): () => void - Subscribe to crash events
- onWorkerRestart(handler): () => void - Subscribe to restart events
- onWorkerFatalError(handler): () => void - Subscribe to fatal error events
Lifecycle Hooks (Override in derived classes)
- protected async onBeforeStart?(): Promise<void> - Called before the agent starts
- protected async onAfterStart?(): Promise<void> - Called after the agent starts
- protected async onBeforeStop?(): Promise<void> - Called before the agent stops
- protected async onAfterStop?(): Promise<void> - Called after the agent stops
Feature Checking (Override in derived classes)
- protected checkRequiredFeatures(): boolean - Check if required features are enabled
Cleanup
- destroy(forceStop?: boolean): Promise<void> - Clean up all resources
Migration from simple-logger
See LOGGER_TREE_MIGRATION.md for detailed migration guide.
License
MIT with Commercial Use
Author
Vladislav Vnukovskiy [email protected]
