# Building nodes with the Auto-Builder SDK

A lightweight TypeScript toolkit for authoring custom Auto-Builder workflow nodes (plugins).
## Features
- 🔒 Sandbox-first execution – nodes run inside a VM2 sandbox by default.
- 🧩 Tiny surface – just extend `BaseNodeExecutor` and export it.
- 📊 Built-in telemetry & test/coverage enforcement.
- 🛠️ `npx auto-builder-sdk init <my-plugin>` scaffolds a ready-to-run plugin.
- 🗄️ Database service integration – access databases through dependency injection without importing drivers directly.
- 📄 Standard pagination utilities – comprehensive pagination support for consistent API and database interactions.
## Quick start
```bash
# create plugin folder & scaffold files
npx auto-builder-sdk init my-awesome-plugin
cd my-awesome-plugin

# develop
npm run dev       # watch mode (tsc)

# run unit tests (coverage must stay ≥ 80 %)
npm test

# publish to npm (prepublishOnly runs tests + coverage)
npm publish --access public
```

## Installation

```bash
npm install auto-builder-sdk
```

## Anatomy of a plugin

```
my-awesome-plugin/
├─ src/
│  └─ index.ts       # exports your node classes
├─ package.json      # flagged with "auto-builder-plugin": true
└─ tsconfig.json
```

## Plugin manifest (`definePlugin`) & node metadata
Each SDK project exports one default plugin manifest created with `definePlugin()`. The object is validated at build time by Zod, so you can't accidentally publish an invalid plugin.
```ts
export default definePlugin({
  name: 'acme-erp',                 // npm name or folder name
  version: '1.2.3',
  main: './dist/index.js',          // compiled entry file
  nodes: ['CreateOrder', 'Ping'],
  engines: { 'auto-builder': '^1.0.0' },
  sandbox: { timeoutMs: 20_000 },   // optional overrides
});
```

Inside a node you can expose extra metadata used by the builder UI:
```ts
static readonly definition = {
  displayName: 'Create Order',     // shown in node palette
  icon: 'shopping-cart',           // lucide icon name
  group: ['action'],
  version: 1,
  description: 'Create a new order in Acme ERP',
  category: 'action',
  inputs: ['main'], outputs: ['main'],
  properties: [ /* … */ ],
};
```

Only `properties` is mandatory; everything else falls back to reasonable defaults if omitted.
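For reference, a typical `properties` entry looks like this (a sketch following the shape used in the dropdown-loader example below; the exact fields vary by property type):

```ts
properties: [
  {
    displayName: 'Order Name',  // label shown in the inspector
    name: 'orderName',          // key under node.parameters
    type: 'string',
    default: '',
    description: 'Human-readable name for the new order',
  },
],
```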
## Minimal node executor
```ts
import { BaseNodeExecutor, definePlugin } from 'auto-builder-sdk';

export class HelloNode extends BaseNodeExecutor {
  static type = 'hello.world';     // machine id (shown in UI)
  readonly nodeType = HelloNode.type;

  async execute(node, input, ctx) {
    return [
      { json: { message: `Hello ${ctx.workflowId}!` }, binary: {} },
    ];
  }
}

export default definePlugin({
  name: 'my-awesome-plugin',
  version: '0.0.1',
  main: './dist/index.js',
  nodes: ['HelloNode'],
  engines: { 'auto-builder': '^1.0.0' },
});
```

## Dynamic parameter options (dropdown loaders)
Auto-Builder supports dynamic dropdowns in the property inspector. Define them by adding `typeOptions.loadOptionsMethod` to a property and implementing a static (or instance) method on your executor class.
```ts
export class HelloNode extends BaseNodeExecutor {
  static readonly type = 'hello.world';
  readonly nodeType = HelloNode.type;

  // 1️⃣ Loader method – can be async and may use credentials/params
  static async listGreetings(): Promise<Array<{ name: string; value: string }>> {
    return [
      { name: 'Hi', value: 'hi' },
      { name: 'Hello', value: 'hello' },
      { name: 'Howdy', value: 'howdy' },
    ];
  }

  // 2️⃣ Reference the method in the property definition
  static readonly definition = {
    inputs: ['main'], outputs: ['main'],
    properties: [
      {
        displayName: 'Greeting',
        name: 'greeting',
        type: 'options',
        default: 'hello',
        typeOptions: {
          loadOptionsMethod: 'listGreetings',
        },
      },
    ],
  } as const;
}
```

The engine discovers the method automatically – no server-side changes required.
## Debugging a plugin node
1. Compile with source maps (`"sourceMap": true, "inlineSources": true` in `tsconfig.json`).
2. Start the backend in debug/watch mode:

   ```bash
   NODE_OPTIONS="--enable-source-maps --inspect=9229" \
   PLUGINS_ENABLED=true \
   PLUGIN_WATCH=true \
   npm run dev
   ```

3. Attach VS Code (Run → "Node.js attach" → `localhost:9229`). Set breakpoints in your TypeScript sources – thanks to source maps they will hit inside the sandbox.
4. Disable the sandbox temporarily (faster debugging):
   - global: `PLUGIN_SAFE_MODE=false npm run dev`
   - per plugin: add `{ "sandbox": { "enabled": false } }` in `definePlugin()`.
## Logging & error handling
```ts
import { log, NodeOperationError, NodeApiError } from 'auto-builder-sdk';

log.info('Fetching data', { url });

if (!apiKey) {
  throw new NodeOperationError(node, 'Missing API key');
}
```

- When the plugin runs inside Auto-Builder, the SDK logger proxies to the main Winston logger, so your messages appear in the service log files and monitoring dashboards.
- In stand-alone tests the logger falls back to `console.*`.
- Throwing either of the SDK error classes lets the engine classify the failure (operation vs API), but it is not mandatory – any `Error` works.
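For instance, `NodeApiError` is the natural choice when an upstream HTTP call fails (a sketch; it assumes `NodeApiError` accepts the same `(node, message)` arguments shown above for `NodeOperationError`):

```ts
// Classified by the engine as an API failure rather than an operation error
const res = await fetch(url, { headers: { Authorization: `Bearer ${apiKey}` } });
if (!res.ok) {
  throw new NodeApiError(node, `Upstream API returned ${res.status}`);
}
```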
## Output formatting with `createFormattedResult`
The SDK provides the `createFormattedResult` and `createFormattedErrorResult` helpers, which automatically format node outputs with proper namespacing and metadata tracking.
### Why use formatted results?
- Data lineage tracking: every result includes an `internal_node_reference` field containing the node ID for debugging and tracing data flow
- Prevents key conflicts: the namespaced format prevents data collisions when multiple nodes of the same type exist
- Preserves input data: all data from previous nodes is carried through to the output
- Consistent metadata: operation details, timestamps, and execution context are added automatically
### Basic usage
```ts
import { BaseNodeExecutor, createFormattedResult, createFormattedErrorResult } from 'auto-builder-sdk';

export class MyApiNode extends BaseNodeExecutor {
  static readonly type = 'myapi.fetch';
  readonly nodeType = MyApiNode.type;

  async execute(node, inputData, context) {
    try {
      // Your node logic here
      const result = await fetchSomeData();

      // Format the result – automatically adds internal_node_reference
      return inputData.map((item, index) =>
        createFormattedResult(item, result, context, index)
      );
    } catch (error) {
      // Format error results – also includes internal_node_reference
      return inputData.map((item, index) =>
        createFormattedErrorResult(item, error as Error, context, index)
      );
    }
  }
}
```

### Result structure
Success result:
```json
{
  "json": {
    "previousNodeData": "preserved",
    "myapi_fetch": {
      "data": { "result": "from API" },
      "internal_node_reference": "node_abc123",
      "_metadata": {
        "operation": "fetch",
        "timestamp": "2025-10-24T10:30:00.000Z",
        "nodeId": "node_abc123",
        "nodeType": "myapi.fetch",
        "dataItemIndex": 0,
        "workflowNodePosition": 1
      }
    }
  },
  "binary": {},
  "pairedItem": { "item": 0 }
}
```

Error result:
```json
{
  "json": {
    "previousNodeData": "preserved",
    "myapi_fetch": {
      "error": "API request failed",
      "success": false,
      "internal_node_reference": "node_abc123",
      "_metadata": {
        "operation": "fetch",
        "failed": true,
        "timestamp": "2025-10-24T10:30:00.000Z",
        "nodeId": "node_abc123",
        "nodeType": "myapi.fetch",
        "dataItemIndex": 0,
        "workflowNodePosition": 1
      }
    }
  },
  "binary": {},
  "pairedItem": { "item": 0 }
}
```

### Advanced options
```ts
// Legacy mode (flat structure, backward compatible)
createFormattedResult(item, result, context, index, { legacyMode: true });

// Include both namespaced and flat formats for gradual migration
createFormattedResult(item, result, context, index, { includeBothFormats: true });
```

### Benefits
- 🔍 Debugging: use `internal_node_reference` to trace data back to specific node executions
- 🔄 Multiple instances: handles multiple nodes of the same type (e.g. `myapi_fetch_1`, `myapi_fetch_2`)
- 📊 Metadata tracking: automatic timestamps, operation names, and execution context
- 🛡️ Error handling: consistent error structure with node tracking
- 🔗 Data lineage: a complete chain of which nodes processed the data
## Importing engine types
All public interfaces are bundled with the SDK – no need to install the `auto-builder` package:

```ts
import type { INode, IExecutionContext, INodeExecutionData } from 'auto-builder-sdk';
```

The declaration file lives at `auto-builder-sdk/dist/auto-builder-sdk/src/ab-types.d.ts` and is kept in sync with the backend on every release.
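These types let you fully annotate an executor instead of relying on the implicitly typed signatures used elsewhere in this README (a sketch; the parameter order follows the examples above):

```ts
import { BaseNodeExecutor } from 'auto-builder-sdk';
import type { INode, IExecutionContext, INodeExecutionData } from 'auto-builder-sdk';

export class TypedEchoNode extends BaseNodeExecutor {
  static readonly type = 'demo.typed-echo';
  readonly nodeType = TypedEchoNode.type;

  async execute(
    node: INode,
    input: INodeExecutionData[],
    ctx: IExecutionContext,
  ): Promise<INodeExecutionData[]> {
    // Echo the first input item back, tagged with the workflow id
    return [{ json: { ...input[0]?.json, workflowId: ctx.workflowId }, binary: {} }];
  }
}
```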
## Credential definitions (authentication)
Added in SDK 0.1.x – no engine changes required.
Nodes can declare the kind of credential(s) they need via the credential registry. A credential definition is just a JSON-ish object that describes the fields users must enter, plus an optional `validate()` function that performs a live API check.
```ts
import { registerCredential, type CredentialDefinition } from 'auto-builder-sdk';

const jiraApiToken: CredentialDefinition = {
  name: 'jira',                 // identifier referenced by nodes
  displayName: 'Jira Cloud',
  properties: [
    { name: 'domain', displayName: 'Domain', type: 'string', required: true },
    { name: 'email', displayName: 'Email', type: 'string', required: true },
    { name: 'apiToken', displayName: 'API Token', type: 'password', required: true },
  ],
  validate: async (data) => {
    const { domain, email, apiToken } = data as Record<string, string>;
    const auth = 'Basic ' + Buffer.from(`${email}:${apiToken}`).toString('base64');
    const res = await fetch(`${domain}/rest/api/3/myself`, { headers: { Authorization: auth } });
    if (!res.ok) throw new Error(`Auth failed (${res.status})`);
  },
};

registerCredential(jiraApiToken);
```

Expose it on your node:
```ts
static readonly definition = {
  // …
  credentials: [{ name: 'jira', required: true }],
};
```

Multiple schemes? Just register multiple definitions (e.g. `jiraOAuth2`, `jiraBasic`) and list them all in `credentials:`; the builder UI will let users pick the type they want.
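For example (a sketch reusing the Jira definition above; the OAuth2 field names are illustrative assumptions):

```ts
registerCredential({
  name: 'jiraOAuth2',
  displayName: 'Jira (OAuth2)',
  properties: [
    { name: 'clientId', displayName: 'Client ID', type: 'string', required: true },
    { name: 'clientSecret', displayName: 'Client Secret', type: 'password', required: true },
  ],
});

// In the node definition, list every scheme the node accepts:
// credentials: [
//   { name: 'jira', required: false },
//   { name: 'jiraOAuth2', required: false },
// ],
```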
When the node runs, you retrieve whatever credential the user selected:

```ts
const creds = await this.getCredentials(node.credentials.jira);
console.log(creds.type); // "jira", "jiraOAuth2", …
```

No core- or UI-level changes are needed – adding a new credential definition is as simple as shipping the file with your plugin.
## Unit test example (Vitest)
```ts
import { expect, it } from 'vitest';
import { HelloNode, makeStubContext, makeStubNode } from '../src';

it('returns greeting', async () => {
  const nodeImpl = new HelloNode();
  const ctx = makeStubContext({ workflowId: 'wf-123' });
  const nodeDef = makeStubNode(HelloNode.type);

  const res = await nodeImpl.execute(nodeDef, [], ctx);
  expect(res[0].json.message).toBe('Hello wf-123!');
});
```

## Testing helpers
The SDK exposes two utilities that remove boilerplate when writing tests:

```ts
import { makeStubContext, makeStubNode } from 'auto-builder-sdk';

const ctx = makeStubContext();
const nodeDef = makeStubNode('my.node');
```

Both helpers accept a partial override, so you can customise only the fields you care about.
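For example (a sketch; passing parameter overrides as a second argument to `makeStubNode` is an assumption based on the partial-override behaviour):

```ts
// Override only the fields the test cares about; the rest keep stub defaults.
const ctx = makeStubContext({ workflowId: 'wf-42', timezone: 'UTC' });
const nodeDef = makeStubNode('my.node', { parameters: { greeting: 'hi' } });
```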
## Coverage & CI script
Every scaffold includes a `vitest.config.ts` and the matching `@vitest/coverage-v8` dev dependency. Two npm scripts are generated:

```jsonc
"test": "vitest",                   // watch mode – fast dev loop
"verify": "vitest run --coverage"   // used by prepublishOnly & CI
```

## Security & sandboxing
- The Auto-Builder engine executes each plugin in a VM2 sandbox, limited by:
  - timeout (default 30 000 ms)
  - memory (default 64 MB)
- Per-plugin overrides via the `sandbox` field:

  ```jsonc
  "sandbox": { "timeoutMs": 10000, "memoryMb": 32 }
  ```

- The global flag `PLUGIN_SAFE_MODE=false` (engine env) disables the sandbox (dev only).
## Database Integration
The SDK provides two ways to access databases from your plugins:
### 1. Shared Prisma Client
The SDK exposes `getDb()`, which returns the same `PrismaClient` instance the Auto-Builder backend already uses. That means plugin nodes can run SQL without opening their own connection pools or adding the `@prisma/client` dependency.
```ts
import { getDb, BaseNodeExecutor } from 'auto-builder-sdk';

export class ListUsersNode extends BaseNodeExecutor {
  static readonly type = 'db.users.list';
  readonly nodeType = ListUsersNode.type;

  async execute() {
    const db = getDb();             // shared PrismaClient
    const rows = await db.user.findMany();
    return [{ json: rows, binary: {} }];
  }
}
```

### 2. Database Service (Recommended)
New in SDK 1.0.12: The Auto-Builder engine provides a comprehensive database service that supports multiple database types through dependency injection. This is the recommended approach as it provides better security, connection pooling, and centralized management.
```ts
import { getDatabaseService, BaseNodeExecutor, registerCredential } from 'auto-builder-sdk';

// Register database credentials
registerCredential({
  name: 'postgres', displayName: 'PostgreSQL',
  properties: [
    { name: 'host', displayName: 'Host', type: 'string', required: true },
    { name: 'port', displayName: 'Port', type: 'number', required: true, default: 5432 },
    { name: 'database', displayName: 'Database', type: 'string', required: true },
    { name: 'username', displayName: 'Username', type: 'string', required: true },
    { name: 'password', displayName: 'Password', type: 'password', required: true },
  ],
});

export class PostgresQueryNode extends BaseNodeExecutor {
  static readonly type = 'postgres.query';
  readonly nodeType = PostgresQueryNode.type;

  static readonly definition = {
    credentials: [{ name: 'postgres', required: true }],
    properties: [
      { name: 'sql', displayName: 'SQL Query', type: 'string', required: true }
    ],
  } as const;

  async execute(node) {
    const { sql } = node.parameters as { sql: string };
    const credentials = await this.getCredentials(node.credentials.postgres);

    // Get the injected database service
    const databaseService = getDatabaseService();

    // Execute the query through the service
    const result = await databaseService.executeQuery(sql, {
      type: 'postgres',
      data: credentials.data
    });

    return [{
      json: {
        rows: result.rows,
        rowCount: result.rowCount,
        executionTime: result.executionTime
      },
      binary: {}
    }];
  }
}
```

Supported Database Types:
- PostgreSQL (`postgres`)
- MySQL (`mysql`)
- Oracle Database (`oracle`)
- Microsoft SQL Server (`mssql`)
- MongoDB (`mongodb`)
- Google BigQuery (`bigquery`)
Benefits of Database Service:
- 🔒 Secure: No direct driver imports in sandboxed plugins
- 🏊 Connection Pooling: Centralized connection management
- 📊 Monitoring: Built-in query logging and metrics
- 🛡️ Validation: Automatic credential and query validation
- 🔄 Consistent: Same interface across all database types
Below are minimal examples for different databases using the legacy pattern (direct driver imports). Note: This approach is deprecated and may not work in sandboxed environments. Use the Database Service approach above instead.
### PostgreSQL / MySQL (using `pg` or `mysql2`) – Legacy Pattern
⚠️ Deprecated: This example uses direct driver imports which may not work in sandboxed environments. Use the Database Service approach shown above instead.
```ts
import { BaseNodeExecutor, registerCredential } from 'auto-builder-sdk';
import { createPool } from 'mysql2/promise'; // or `pg` / `knex`

registerCredential({
  name: 'mysql', displayName: 'MySQL',
  properties: [
    { name: 'host', displayName: 'Host', type: 'string', required: true },
    { name: 'port', displayName: 'Port', type: 'number', required: true, default: 3306 },
    { name: 'database', displayName: 'Database', type: 'string', required: true },
    { name: 'username', displayName: 'Username', type: 'string', required: true },
    { name: 'password', displayName: 'Password', type: 'password', required: true },
  ],
});

export class MySqlQueryNode extends BaseNodeExecutor {
  static readonly type = 'mysql.query';
  readonly nodeType = MySqlQueryNode.type;

  static readonly definition = {
    credentials: [{ name: 'mysql', required: true }],
    properties: [{ name: 'sql', displayName: 'SQL', type: 'string', required: true }],
  } as const;

  async execute(node) {
    const { sql } = node.parameters as { sql: string };
    const creds = await this.getCredentials(node.credentials.mysql);

    const pool = createPool({
      host: creds.data.host,
      port: creds.data.port,
      user: creds.data.username,
      password: creds.data.password,
      database: creds.data.database,
    });

    const [rows] = await pool.query(sql);
    await pool.end();

    return [{ json: rows, binary: {} }];
  }
}
```

### MongoDB (using `mongodb`) – Legacy Pattern
⚠️ Deprecated: This example uses direct driver imports which may not work in sandboxed environments. Use the Database Service approach shown above instead.
```ts
import { MongoClient } from 'mongodb';
import { BaseNodeExecutor, registerCredential } from 'auto-builder-sdk';

registerCredential({
  name: 'mongo', displayName: 'MongoDB',
  properties: [
    { name: 'uri', displayName: 'Connection URI', type: 'string', required: true },
    { name: 'database', displayName: 'Database', type: 'string', required: true },
  ],
});

export class MongoFindNode extends BaseNodeExecutor {
  static readonly type = 'mongo.find';
  readonly nodeType = MongoFindNode.type;

  static readonly definition = {
    credentials: [{ name: 'mongo', required: true }],
    properties: [
      { name: 'collection', displayName: 'Collection', type: 'string', required: true },
      { name: 'query', displayName: 'Query (JSON)', type: 'string', required: true },
    ],
  } as const;

  async execute(node) {
    const { collection, query } = node.parameters as any;
    const creds = await this.getCredentials(node.credentials.mongo);

    const client = await MongoClient.connect(creds.data.uri);
    const docs = await client.db(creds.data.database)
      .collection(collection)
      .find(JSON.parse(query)).toArray();
    await client.close();

    return [{ json: docs, binary: {} }];
  }
}
```

### Microsoft SQL Server (using `mssql`) – Legacy Pattern

⚠️ Deprecated: This example uses direct driver imports which may not work in sandboxed environments. Use the Database Service approach shown above instead.
```ts
import sql from 'mssql';
import { BaseNodeExecutor, registerCredential } from 'auto-builder-sdk';

registerCredential({
  name: 'mssql', displayName: 'MS SQL Server',
  properties: [
    { name: 'server', displayName: 'Server', type: 'string', required: true },
    { name: 'user', displayName: 'User', type: 'string', required: true },
    { name: 'password', displayName: 'Password', type: 'password', required: true },
    { name: 'database', displayName: 'Database', type: 'string', required: true },
  ],
});

export class MsSqlQueryNode extends BaseNodeExecutor {
  static readonly type = 'mssql.query';
  readonly nodeType = MsSqlQueryNode.type;

  static readonly definition = {
    credentials: [{ name: 'mssql', required: true }],
    properties: [{ name: 'sql', displayName: 'SQL', type: 'string', required: true }],
  } as const;

  async execute(node) {
    const { sql: statement } = node.parameters as { sql: string };
    const creds = await this.getCredentials(node.credentials.mssql);

    await sql.connect({
      server: creds.data.server,
      user: creds.data.user,
      password: creds.data.password,
      database: creds.data.database,
      options: { encrypt: true, trustServerCertificate: true },
    });

    const result = await sql.query(statement);
    await sql.close();

    return [{ json: result.recordset, binary: {} }];
  }
}
```

### Other engines / cloud warehouses
The pattern is identical for any future database. Two quick sketches:
• ClickHouse (columnar OLAP)
```ts
import { createClient } from '@clickhouse/client';

registerCredential({
  name: 'clickhouse', displayName: 'ClickHouse',
  properties: [
    { name: 'url', displayName: 'HTTP URL', type: 'string', required: true },
    { name: 'user', displayName: 'User', type: 'string' },
    { name: 'pass', displayName: 'Password', type: 'password' },
  ],
});

// inside execute()
const ch = createClient({
  host: creds.data.url,
  username: creds.data.user,
  password: creds.data.pass,
});
const resultSet = await ch.query({ query: sql, format: 'JSONEachRow' });
const rows = await resultSet.json(); // query() returns a ResultSet – materialise it
```

• BigQuery (Google Cloud)
```ts
import { BigQuery } from '@google-cloud/bigquery';

registerCredential({
  name: 'bigquery', displayName: 'BigQuery',
  properties: [
    { name: 'projectId', displayName: 'Project ID', type: 'string', required: true },
    { name: 'jsonKey', displayName: 'Service Account JSON', type: 'string', required: true, typeOptions: { rows: 8 } },
  ],
});

// inside execute()
const client = new BigQuery({
  projectId: creds.data.projectId,
  credentials: JSON.parse(creds.data.jsonKey),
});
const [rows] = await client.query(sql);
```

Any engine that reuses the Postgres/MySQL wire protocol (CockroachDB, TimescaleDB, Aurora PG/MySQL) can simply adopt the existing credential & node without code changes.
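For instance, the `PostgresQueryNode` above can talk to CockroachDB unchanged (a sketch; the cluster host is illustrative):

```ts
// inside execute() – identical call path as the PostgresQueryNode example;
// the user simply enters CockroachDB connection details into the existing
// 'postgres' credential (e.g. host my-cluster.cockroachlabs.cloud, port 26257).
const result = await databaseService.executeQuery(sql, {
  type: 'postgres',        // CockroachDB speaks the Postgres wire protocol
  data: credentials.data,
});
```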
Tip: For databases with native Prisma support (PostgreSQL, MySQL, SQLite, SQL Server, MongoDB) you can also generate a separate client in your plugin and skip the manual driver code. Use `getDb()` when you want to run queries against the engine's primary database.
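A minimal sketch of that approach, assuming the plugin ships its own `schema.prisma` with a datasource named `db` and a generated client at `./generated/client` (both assumptions; like the other direct-driver examples, this may not run inside the sandbox):

```ts
// Plugin-local Prisma client, generated from the plugin's own schema.prisma
import { PrismaClient } from './generated/client';

// inside execute() – host/user/pass/database come from the node's credentials
const prisma = new PrismaClient({
  datasources: { db: { url: `postgresql://${user}:${pass}@${host}:5432/${database}` } },
});

const orders = await prisma.order.findMany({ take: 10 }); // `order` model is illustrative
await prisma.$disconnect();
```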
## Deskera bearer-token helper (`ctx.getDeskeraToken()`)
New in SDK 0.2.x & Auto-Builder 1.1 – lazy, secure token retrieval
Each node receives an execution context (`IExecutionContext`) instance. The engine now injects an async helper `getDeskeraToken()` that returns a user-/tenant-scoped Bearer token which you can pass to Deskera's REST APIs:

```ts
interface IExecutionContext {
  // … existing fields …
  getDeskeraToken?(): Promise<string>; // optional for typing; always present at runtime
}
```

### Why use it?
- 🔐 No secrets in plugins – the helper is resolved inside the core engine.
- 🏷️ Tenant isolation – token is scoped to the current workflow/user.
- 🚀 Caching – the first call fetches the token from IAM, subsequent calls within the same workflow execution are served from memory.
### Minimal example
```ts
import { BaseNodeExecutor } from 'auto-builder-sdk';

export class ListCustomersNode extends BaseNodeExecutor {
  static readonly type = 'deskera.customers.list';
  readonly nodeType = ListCustomersNode.type;

  async execute(node, input, ctx) {
    const token = await ctx.getDeskeraToken(); // ← one line, done!

    const res = await fetch(
      `${process.env.DESKERA_API_BASE_URL}/customers`,
      { headers: { Authorization: `Bearer ${token}` } },
    );
    if (!res.ok) throw new Error(`Deskera API ${res.status}`);

    const customers = await res.json();
    return [{ json: customers, binary: {} }];
  }
}
```

### Unit-testing the helper
Because BullMQ serialises job payloads, functions are stripped when we bake them into test contexts. The SDK helper `makeStubContext()` accepts an override so you can stub the token call in tests:
```ts
import { expect, it } from 'vitest';
import { makeStubContext, makeStubNode } from 'auto-builder-sdk';
import { ListCustomersNode } from '../src';

it('retrieves customers', async () => {
  const ctx = makeStubContext({
    getDeskeraToken: async () => 'test-token',
  });
  const node = makeStubNode(ListCustomersNode.type);

  const out = await new ListCustomersNode().execute(node, [], ctx);
  expect(out[0].json).toBeTypeOf('object');
});
```

No further changes are required on the engine side – the helper works in both inline and branch (parallel) execution modes.
## Sandboxing options per plugin
VM2 limits have sensible defaults (30 s / 64 MB). Override them globally via env-vars or per plugin in the manifest:
```jsonc
{
  "sandbox": {
    "enabled": true,      // false disables VM2 (DEV *only*)
    "timeoutMs": 10000,   // 10 seconds
    "memoryMb": 32        // 32 MB
  }
}
```

Developers can temporarily set `PLUGIN_SAFE_MODE=false` on the backend to turn all sandboxes off while debugging.
## Resolving parameters – two options
Auto-Builder exposes the same template engine in two different ways so you can pick whatever fits your code best.
### 1. From inside a node executor (recommended for normal nodes)
When your class extends `BaseNodeExecutor`, just use the built-in protected helper:
```ts
import { BaseNodeExecutor } from 'auto-builder-sdk';

export class GreetNode extends BaseNodeExecutor {
  readonly nodeType = 'demo.greet';

  async execute(node, input, ctx) {
    // inherited from BaseNodeExecutor
    const opts = this.resolveParameters(node.parameters, ctx);

    return [
      { json: { greeting: opts.message }, binary: {} },
    ];
  }
}
```

### 2. Anywhere else (tests, helper libs, UI previews)
Need the same logic without subclassing? Import `ParameterResolver`:
```ts
import { ParameterResolver, type IExecutionContext } from 'auto-builder-sdk';

const ctx: IExecutionContext & { itemIndex?: number } = {
  executionId: 'EX123',
  workflowId: 'WF001',
  workflow: {} as any,
  node: {} as any,
  inputData: [{ json: { name: 'Jane' }, binary: {} }],
  runIndex: 0,
  itemIndex: 0,
  mode: 'manual',
  timezone: 'UTC',
  variables: {},
};

const raw = {
  subject: 'Welcome {{ $json.name }}!',
  sentAt: '{{ $now }}',
};

const resolved = ParameterResolver.resolve(raw, ctx);
// → { subject: 'Welcome Jane!', sentAt: '2025-06-28T14:00:00.000Z' }
```

Both methods share exactly the same implementation under the hood, so the resolved output is identical.
## Standard Pagination Utilities
New in SDK 1.0.16: The SDK provides comprehensive pagination utilities to enforce consistent pagination across all nodes and plugins.
### Basic Usage
```ts
import {
  BaseNodeExecutor,
  convertToLimitOffset,
  validatePaginationParams,
  createPaginationResponse,
  PAGINATION_NODE_DEFINITION,
  PAGINATION_SIZE_NODE_DEFINITION,
  type StandardPaginationParams
} from 'auto-builder-sdk';

export class ListUsersNode extends BaseNodeExecutor {
  static readonly type = 'api.users.list';
  readonly nodeType = ListUsersNode.type;

  static readonly definition = {
    properties: [
      // Add standard pagination parameters
      PAGINATION_NODE_DEFINITION,
      PAGINATION_SIZE_NODE_DEFINITION,
      // ... other properties
    ]
  } as const;

  async execute(node) {
    // Extract and validate pagination parameters
    const paginationParams = validatePaginationParams({
      page: node.parameters.page,
      pageSize: node.parameters.pageSize
    });

    // Convert to API-specific format
    const { limit, offset } = convertToLimitOffset(
      paginationParams.page,
      paginationParams.pageSize
    );

    // Make the API call with pagination
    const response = await fetch(`/api/users?limit=${limit}&offset=${offset}`);
    const data = await response.json();

    // Create a standard pagination response
    return [createPaginationResponse(
      data.users,
      paginationParams.page,
      paginationParams.pageSize,
      data.total
    )];
  }
}
```

### Configuration
The pagination system is fully configurable through environment variables:
```bash
# Default values
PAGINATION_DEFAULT_PAGE=1
PAGINATION_DEFAULT_PAGE_SIZE=100

# Validation limits
PAGINATION_MIN_PAGE=1
PAGINATION_MIN_PAGE_SIZE=1
PAGINATION_MAX_PAGE_SIZE=1000

# String processing limits
PAGINATION_MAX_STRING_LENGTH=15
```

### Advanced Configuration
```ts
import { PAGINATION_CONFIG, updatePaginationConfig, resetPaginationConfig } from 'auto-builder-sdk';

// Access the current configuration
console.log(PAGINATION_CONFIG.DEFAULT_PAGE_SIZE); // 100
console.log(PAGINATION_CONFIG.MAX_PAGE_SIZE);     // 1000

// Update the configuration at runtime
updatePaginationConfig({
  DEFAULT_PAGE_SIZE: 50,
  MAX_PAGE_SIZE: 500
});

// Reset to environment-variable defaults
resetPaginationConfig();
```

### API-Specific Pagination
```ts
import { buildApiPaginationParams, parseApiPaginationResponse } from 'auto-builder-sdk';

// For the Jira API
const jiraParams = buildApiPaginationParams(
  { page: 1, pageSize: 50 },
  'jira'
);
// → { startAt: 0, maxResults: 50 }

// For the SharePoint API
const sharepointParams = buildApiPaginationParams(
  { page: 2, pageSize: 100 },
  'sharepoint'
);
// → { top: 100, skip: 100 }

// Parse API responses
const standardResponse = parseApiPaginationResponse(
  jiraApiResponse,
  'jira'
);
```

### Legacy Parameter Conversion
```ts
import { convertLegacyPagination } from 'auto-builder-sdk';

// Convert old parameter names to the standard format
const standardParams = convertLegacyPagination({
  pageNo: 2,
  limit: 25,
  totalCount: 150
});
// → { page: 2, pageSize: 25, totalRecords: 150 }
```

### Database Query Pagination
```ts
import { convertToLimitOffset } from 'auto-builder-sdk';

const { limit, offset } = convertToLimitOffset(page, pageSize);
// limit/offset are validated numbers, so interpolating them here is safe
const sql = `SELECT * FROM users LIMIT ${limit} OFFSET ${offset}`;
```
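Combined with a count query, this plugs straight into `createPaginationResponse` (a sketch following the `PostgresQueryNode` example above; the `users` table is illustrative):

```ts
// inside execute() – databaseService / credentials as in the PostgresQueryNode example
const { limit, offset } = convertToLimitOffset(page, pageSize);

const result = await databaseService.executeQuery(
  `SELECT * FROM users ORDER BY id LIMIT ${limit} OFFSET ${offset}`,
  { type: 'postgres', data: credentials.data },
);
const count = await databaseService.executeQuery(
  'SELECT COUNT(*) AS total FROM users',
  { type: 'postgres', data: credentials.data },
);

return [createPaginationResponse(result.rows, page, pageSize, Number(count.rows[0].total))];
```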
### Benefits

- 🔄 Consistent Interface: all nodes use the same `page`/`pageSize` parameters
- 🛡️ Validation: automatic parameter validation with sensible defaults
- 🔧 API Flexibility: support for REST, GraphQL, OData, Jira, SharePoint formats
- 📊 Metadata: automatic calculation of total pages and next/previous indicators
- 🔄 Backward Compatibility: legacy parameter conversion support
## Temporary File Management
New in SDK 1.0.17: The SDK provides comprehensive temporary file management utilities for handling binary files across plugin nodes with automatic cleanup tracking.
### Why Use Temp File Utilities?
When nodes process binary files (PDFs, images, Excel files, etc.), they often need to:
- Store files temporarily between node executions
- Track which workflow/node created which files
- Clean up files after processing
- Preserve original file metadata
The SDK's temp file utilities provide a standardized, secure way to handle these scenarios.
### Basic Usage
```ts
import {
  createTempFile,
  readTempFile,
  deleteTempFile,
  getTempFileMetadata,
  BaseNodeExecutor,
  NodeOperationError
} from 'auto-builder-sdk';

export class ProcessPdfNode extends BaseNodeExecutor {
  static readonly type = 'file.pdf.process';
  readonly nodeType = ProcessPdfNode.type;

  async execute(node, inputData, context) {
    try {
      // Create a temporary file from a buffer
      const pdfBuffer = Buffer.from(inputData[0].json.fileData, 'base64');
      const tempFile = await createTempFile(
        pdfBuffer,
        context.workflowId,
        node.id,
        'document.pdf',
        { documentType: 'invoice', processedBy: 'ProcessPdfNode' }
      );

      // File is now stored at:  /tmp/auto-builder-temp/file-XXXXX/timestamp-document.pdf
      // Metadata file lives at: /tmp/auto-builder-temp/file-XXXXX/timestamp-document.pdf.meta.json

      return [{
        json: {
          tempFilePath: tempFile.tempFilePath,
          fileSize: tempFile.reference.size,
          originalName: tempFile.reference.originalFileName,
          createdAt: tempFile.reference.createdAt
        },
        binary: {}
      }];
    } catch (error) {
      throw new NodeOperationError(node, `Failed to create temp file: ${(error as Error).message}`);
    }
  }
}
```

### Reading Temporary Files
```ts
export class ReadPdfNode extends BaseNodeExecutor {
  static readonly type = 'file.pdf.read';
  readonly nodeType = ReadPdfNode.type;

  async execute(node, inputData, context) {
    const tempFilePath = inputData[0].json.tempFilePath;

    // Read the file back as a buffer
    const buffer = await readTempFile(tempFilePath);

    // Get the file metadata
    const metadata = await getTempFileMetadata(tempFilePath);

    return [{
      json: {
        content: buffer.toString('base64'),
        size: buffer.length,
        metadata: metadata
      },
      binary: {}
    }];
  }
}
```

### Cleaning Up Files
```ts
export class CleanupNode extends BaseNodeExecutor {
  static readonly type = 'file.cleanup';
  readonly nodeType = CleanupNode.type;

  async execute(node, inputData, context) {
    const tempFilePath = inputData[0].json.tempFilePath;

    // Delete the temp file and its metadata
    await deleteTempFile(tempFilePath);

    return [{
      json: {
        message: 'File cleaned up successfully',
        deletedPath: tempFilePath
      },
      binary: {}
    }];
  }
}
```

### Complete Example: PDF to Text Workflow
```ts
import {
  createTempFile,
  readTempFile,
  deleteTempFile,
  BaseNodeExecutor,
  createFormattedResult,
  createFormattedErrorResult
} from 'auto-builder-sdk';

export class PdfToTextNode extends BaseNodeExecutor {
  static readonly type = 'file.pdf.totext';
  readonly nodeType = PdfToTextNode.type;

  async execute(node, inputData, context) {
    let tempFilePath: string | undefined;

    try {
      // Step 1: Create a temp file from the input
      const pdfData = Buffer.from(inputData[0].json.pdfBase64, 'base64');
      const tempFile = await createTempFile(
        pdfData,
        context.workflowId,
        node.id,
        inputData[0].json.fileName || 'document.pdf',
        { operation: 'pdf-to-text', nodeType: this.nodeType }
      );
      tempFilePath = tempFile.tempFilePath;

      // Step 2: Process the file (extract text from the PDF)
      const buffer = await readTempFile(tempFilePath);
      const text = await this.extractTextFromPdf(buffer);

      // Step 3: Clean up
      await deleteTempFile(tempFilePath);

      // Step 4: Return a formatted result
      return inputData.map((item, index) =>
        createFormattedResult(
          item,
          {
            text,
            originalFileName: tempFile.reference.originalFileName,
            processedAt: new Date().toISOString()
          },
          context,
          index
        )
      );
    } catch (error) {
      // Clean up on error
      if (tempFilePath) {
        await deleteTempFile(tempFilePath).catch(() => {});
      }
      return inputData.map((item, index) =>
        createFormattedErrorResult(item, error as Error, context, index)
      );
    }
  }

  private async extractTextFromPdf(buffer: Buffer): Promise<string> {
    // Your PDF text extraction logic here
    return 'Extracted text from PDF...';
  }
}
```

### File Storage Structure
Temp files are stored in a consistent directory structure:

```
/tmp/auto-builder-temp/
├── file-abc123/
│   ├── 1701234567890-document.pdf            # Actual file
│   └── 1701234567890-document.pdf.meta.json  # Metadata file
├── file-def456/
│   ├── 1701234568900-invoice.xlsx
│   └── 1701234568900-invoice.xlsx.meta.json
...
```

### Metadata File Structure
Each temp file has an associated `.meta.json` file containing:
```json
{
  "workflowId": "wf-123",
  "nodeId": "node-abc",
  "originalFileName": "document.pdf",
  "fileSize": 524288,
  "createdAt": "2025-12-01T10:30:00.000Z",
  "tempFilePath": "/tmp/auto-builder-temp/file-abc123/1701234567890-document.pdf",
  "tempDir": "/tmp/auto-builder-temp/file-abc123",
  "documentType": "invoice",
  "processedBy": "ProcessPdfNode"
}
```

### API Reference
#### `createTempFile(buffer, workflowId, nodeId, fileName, metadata?)`
Creates a temporary file with metadata tracking.

Parameters:

- `buffer: Buffer` – binary data to store
- `workflowId: string` – workflow ID for tracking
- `nodeId: string` – node ID for tracking
- `fileName: string` – original filename (the extension is preserved)
- `metadata?: Record<string, any>` – additional metadata (optional)
Returns a promise of:

```ts
{
  tempFilePath: string;      // Full path to temp file
  tempDir: string;           // Directory containing the file
  metadataPath: string;      // Path to metadata JSON file
  reference: {
    path: string;              // Same as tempFilePath
    size: number;              // File size in bytes
    originalFileName: string;  // Original filename
    createdAt: string;         // ISO timestamp
    workflowId: string;        // Workflow ID
    nodeId: string;            // Node ID
  };
}
```

#### `readTempFile(tempFilePath)`
Reads a temporary file and returns the buffer.

Parameters:

- `tempFilePath: string` – path to the temporary file

Returns: `Promise<Buffer>` – binary data

Throws: an error if the file doesn't exist or can't be read
#### `deleteTempFile(tempFilePath)`
Deletes a temporary file and its metadata.

Parameters:

- `tempFilePath: string` – path to the temporary file

Returns: `Promise<void>`

Note: automatically deletes:

- the temp file itself
- the associated `.meta.json` file
- the parent directory, if empty
#### `getTempFileMetadata(tempFilePath)`
Gets metadata for a temporary file.

Parameters:

- `tempFilePath: string` – path to the temporary file

Returns: `Promise<Record<string, any> | null>` – metadata object, or `null` if not found
### Best Practices
- Always clean up: use try/finally blocks to ensure temp files are deleted even if processing fails:

  ```ts
  let tempFilePath: string | undefined;
  try {
    const tempFile = await createTempFile(...);
    tempFilePath = tempFile.tempFilePath;
    // ... process file
  } finally {
    if (tempFilePath) {
      await deleteTempFile(tempFilePath).catch(() => {});
    }
  }
  ```

- Include workflow context: always pass `workflowId` and `nodeId` for tracking.
- Add custom metadata: use the metadata parameter to track processing state, document types, etc.
- Sanitize filenames: the utilities sanitize filenames automatically, but avoid very long names.
- Check file existence: use `getTempFileMetadata()` to verify a file exists before reading (sketched below).
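A sketch of the existence check from the last tip, combining the helpers documented above:

```ts
// Verify the temp file still exists before reading it
const metadata = await getTempFileMetadata(tempFilePath);
if (!metadata) {
  throw new NodeOperationError(node, `Temp file not found: ${tempFilePath}`);
}
const buffer = await readTempFile(tempFilePath);
```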
### Benefits
- 📁 Organized Storage: consistent directory structure (`/tmp/auto-builder-temp/`)
- 🏷️ Metadata Tracking: every file includes workflow/node context
- 🧹 Easy Cleanup: a single function deletes the file, its metadata, and empty directories
- 🔍 Debugging: file metadata includes creation time, workflow ID, and node ID
- 🛡️ Type Safety: full TypeScript support with proper types
- 📝 Logging: built-in debug logging for file operations
## Telemetry hooks
You can subscribe to runtime metrics:

```ts
import { pluginTelemetry } from 'auto-builder/telemetry/plugin-telemetry';

pluginTelemetry.on('metric', (m) => console.log(m));
```

## Publishing & versioning workflow
1. Bump the version (`npm version patch|minor|major`). Follow semver:
   - Patch – docs/typos & additive helpers.
   - Minor – new capabilities (dynamic loaders, logger).
   - Major – breaking API changes.
2. `npm publish --access public` (the prepublish script runs tests & coverage).
3. In your Auto-Builder deployment, update the dependency:

   ```bash
   npm i auto-builder-sdk@latest   # service repo
   ```

Tip: use Renovate or Dependabot so services stay in sync automatically.
## Best practices
- Always write unit tests with ≥ 80 % coverage – enforced.
- Validate external inputs with `zod` inside `execute()` (see the sketch below).
- Keep network/FS access minimal; prefer the SDK's helpers.
- Publish with semver and respect the peer range in `engines`.
- Document node properties with JSDoc – used by the marketplace doc generator.
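A sketch of the `zod` validation tip inside `execute()` (the schema and parameter names are illustrative):

```ts
import { z } from 'zod';
import { NodeOperationError } from 'auto-builder-sdk';

const paramsSchema = z.object({
  url: z.string().url(),
  retries: z.number().int().min(0).max(5).default(3),
});

// inside execute()
const parsed = paramsSchema.safeParse(this.resolveParameters(node.parameters, ctx));
if (!parsed.success) {
  throw new NodeOperationError(node, `Invalid parameters: ${parsed.error.message}`);
}
const { url, retries } = parsed.data;
```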
## License
MIT
