@inteli.city/node-red-contrib-exec-collection v2.1.0
node-red-contrib-exec-collection
A collection of Node-RED nodes for running scripts and system commands.
Table of Contents
- Nodes
- When to use which node
- exec.queue
- python.queue
- node.queue
- exec.service
- State & Persistence
- Output & Parsing
- python.config
Nodes
| Node | Description |
|---|---|
| exec.queue | Renders a Nunjucks template into a temp file, runs a shell command against it. Fresh process per message. |
| python.queue | Persistent Python worker pool. Each message sends rendered code via stdin. Worker state persists across messages. |
| node.queue | Persistent Node.js worker pool. Each message sends rendered code via stdin. State survives via global.*. |
| exec.service | Runs a long-lived process as a managed service. Streams stdout continuously. Auto-restarts on crash. |
| python.config | Config node storing the Python executable path used by python.queue. |
When to use which node
Use exec.queue when:
- Each execution must be fully isolated — no state between messages
- You need binary output (buffer mode)
- You run shell commands, R scripts, or other interpreters
- You prefer simplicity and predictability over performance
Use python.queue when:
- You want to eliminate process startup cost — workers stay alive between messages
- You run high-frequency Python workloads
- You want to load a model, open a connection, or build state once and reuse it
Use node.queue when:
- Same as python.queue, but for JavaScript
- You want to reuse loaded modules across messages without re-requiring them
- You are working in a JS context and don't want a separate interpreter
Use exec.service when:
- You need a process that runs indefinitely and streams output continuously
- You are watching files, tailing logs, listening on a channel, or polling a system
- You want automatic restart on crash with no intervention
exec.queue
Overview
exec.queue executes arbitrary system commands through a configurable concurrency queue. Each incoming message triggers one execution: a Nunjucks template is rendered into a temporary file, the command is run against that file, and stdout becomes the output message.
It handles:
- Short-lived commands (exec mode — waits for completion)
- Long-running or streaming processes (spawn mode — streams output as it arrives)
- Concurrent executions with backpressure via a queue
- Binary output (buffer mode)
- Cross-platform execution (Linux and Windows)
Execution Pipeline
Every message follows this pipeline:
```
msg received
  → render Nunjucks template → write to temp file ($file)
  → render command string (optional Nunjucks)
  → run command
  → capture stdout → send msg
  → clean up temp files
```

Nothing is shared between concurrent executions. Each job gets its own temp file and its own Nunjucks environment.
Core Concepts
1. Template → $file
The template body is rendered with Nunjucks and written to a temporary file. The path to that file is available as $file (Linux/macOS) or %file% (Windows) inside the command.
```javascript
// Template (JavaScript mode)
const data = require("fs").readFileSync(process.env.INPUT_PATH, "utf8");
console.log(JSON.stringify({ lines: data.split("\n").length }));
```

```
# Command
node $file
```

2. Command
The command string is what runs in the shell. `$file` is always the path to the rendered template file. The command itself can also be a Nunjucks template (enable "Cmd Template" in the node settings).
```
python3 $file
bash $file
Rscript $file
psql postgresql://user:pass@host:5432/db -f $file
```

3. stdout → msg.payload
Whatever the process writes to stdout becomes the output message payload (subject to the selected output mode). Writing to stderr does not produce output messages — it produces node warnings.
4. Queue
Concurrency is controlled by the Queue setting. If Queue = 2, up to 2 commands run simultaneously; additional messages wait. The status badge shows `waiting (executing/concurrency)`.
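The queue rule can be illustrated with a small standalone Python sketch (illustrative only, not the node's implementation): a semaphore of size Queue caps how many jobs execute at once, and extra jobs simply wait for a free slot.

```python
import threading, time

QUEUE = 2                                 # the node's Queue setting
slots = threading.Semaphore(QUEUE)
lock = threading.Lock()
running, max_running = 0, 0

def job(i):
    global running, max_running
    with slots:                           # blocks while QUEUE jobs are active
        with lock:
            running += 1
            max_running = max(max_running, running)
        time.sleep(0.02)                  # simulate the command
        with lock:
            running -= 1

threads = [threading.Thread(target=job, args=(i,)) for i in range(6)]
for t in threads: t.start()
for t in threads: t.join()
print(max_running)  # never exceeds 2
```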
Template Engine (Nunjucks)
The template body uses Nunjucks syntax. All msg values are automatically converted to strings before rendering — no filters required.
| Value type | Renders as |
|---|---|
| String | value as-is |
| Number | string representation (42 → "42") |
| Object / Array | JSON-serialized ({"a":1}) |
| null / undefined | empty string |
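For instance, an object payload reaches your script as its JSON text, which the template code typically parses straight back. A minimal sketch of that round trip, independent of Node-RED:

```python
import json

# When msg.payload is the object {"a": 1}, "{{ payload }}" renders to its JSON text:
rendered = '{"a": 1}'

# so a Python template usually parses it right back into a structure:
data = json.loads(rendered)
print(data["a"])  # 1
```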
The rendering context exposes:
| Variable | Value |
|---|---|
| {{ payload }} | msg.payload (stringified) |
| {{ topic }} | msg.topic |
| Any msg.* | Any top-level message property |
| flow.get("key") | Flow context value |
| global.get("key") | Global context value |
| env | process.env (all environment variables) |
Warning: Nunjucks evaluates `{{ }}` expressions everywhere in the template — including inside `#` Python comments and `//` JS comments. Never put `{{ expr }}` in a comment unless you intend it to be rendered.
asset() Helper
asset(content) creates an additional temporary file containing content and returns its path. The file is cleaned up automatically after the execution finishes.
```
{% set config_path = asset('{"threshold": 0.9, "mode": "strict"}') %}
python3 $file --config {{ config_path }}
```

```python
# Template — Python ($file)
import sys, json
config = json.load(open(sys.argv[sys.argv.index("--config") + 1]))
print(json.dumps({"threshold": config["threshold"]}))
```

Command Templating
When Cmd Template is enabled, the command string is also rendered with Nunjucks before execution:
```
python3 $file --input {{ env.INPUT_DIR }}/{{ payload }}
```

Output Modes
| Mode | Behavior |
|---|---|
| Plain text | stdout as-is (string) |
| Parsed JSON | JSON.parse(stdout) |
| Parsed YAML | js-yaml parse of stdout |
| Parsed XML | xml-js parse of stdout |
| Buffer | raw stdout bytes as a Buffer |
Buffer Mode
Buffer mode captures stdout as raw bytes. msg.payload is a Node.js Buffer. Use this when the process outputs binary data: images, compressed files, protocol frames, etc.
```python
# Template — Python
import sys
with open("/path/to/image.png", "rb") as f:
    sys.stdout.buffer.write(f.read())
```

`splitLine` is not supported in buffer mode.
Execution Modes
exec mode (default) — command runs to completion, stdout is captured and sent as one message.
spawn mode — process streams output as it runs. Each chunk of stdout triggers a message. Use -u for Python to disable output buffering:
```
python3 -u $file
```

stdout vs stderr
stdout is data. stderr is logs.
```python
import sys
print("processing...", file=sys.stderr)  # node warning
print('{"result": 42}')                  # msg.payload
```

Queue Behavior
- Messages arriving while the queue is full wait in line
- Status badge: `waiting (executing/concurrency)`, e.g. `3 (2/2)`
- `msg.stop = true` kills all active processes and drains the queue
- The ⏹ button in the node editor header does the same without redeploying
Process Lifecycle
- Each active process is tracked by PID
- On redeploy or `msg.stop = true`, all tracked processes receive SIGTERM (Linux) or are terminated via `terminate()` (Windows)
- On Linux, the entire process group is signalled (`-pid`) to catch child processes
- Temp files are cleaned up in a `finally` block — removed even if the command fails
Cross-Platform Behavior
| Platform | Shell | Variable |
|---|---|---|
| Linux / macOS | /bin/bash | $file |
| Windows | cmd.exe | %file% |
Examples
1. Run a Python script
```python
# Template — Python
import json
data = json.loads("{{ payload }}")
result = {"length": len(data), "type": type(data).__name__}
print(json.dumps(result))
```

Command: `python3 $file`
Output mode: Parsed JSON
2. Stream logs in real time (spawn mode)
```python
# Template — Python
import time, json
for i in range(10):
    print(json.dumps({"step": i}), flush=True)
    time.sleep(0.5)
```

Command: `python3 -u $file`
Mode: spawn — each print() produces a separate output message.
3. SSH remote execution
```bash
# Template — Bash
echo "hostname: $(hostname)"
df -h /
```

Command: `cat $file | ssh -i /path/to/key user@remote-host bash -s`

python.queue
Overview
python.queue keeps a pool of persistent Python workers alive. Each incoming message renders your Nunjucks template into Python source code, sends it to a free worker via stdin, and returns whatever the code prints as msg.payload.
Mental model: each worker is a persistent Python REPL session.
You are not running a script — you are sending code to a running Python engine. Imports, variables, and objects defined at the top level accumulate in the worker's namespace and are available to every subsequent message on that worker.
Execution Model
Internally, each worker runs an event loop:
```python
while True:
    code = read_next_job_from_stdin()
    exec(code, _ns)   # _ns is a persistent dict — the worker's global scope
    send_stdout_to_node_red()
```

`_ns` starts empty and grows with every execution. Any name defined at the top level of your code — variables, functions, classes, imports — persists in `_ns` for the lifetime of that worker.
This means:
```python
# First message on this worker:
import pandas as pd       # stored in _ns["pd"]
data = pd.DataFrame(...)  # stored in _ns["data"]

# Second message on the same worker:
print(data.shape)         # works — "data" is still in _ns
```

State Boundaries
State is per-worker, not global across all workers.
With Queue > 1:
- Each worker has its own independent `_ns`
- A message routed to worker A cannot see state from worker B
- Execution is non-deterministic — you cannot predict which worker handles a given message
If you need consistent state across messages, use Queue = 1.
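The boundary is easy to demonstrate by simulating two workers as two separate namespace dicts (illustrative only):

```python
# Two workers = two independent namespaces.
worker_a, worker_b = {}, {}

exec("history = ['first']", worker_a)   # message routed to worker A
print("history" in worker_a)            # True
print("history" in worker_b)            # False — worker B never saw that message
```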
Imports
Imports are safe to repeat. Python caches loaded modules internally (sys.modules), so re-importing on every message has no performance cost. That said, write imports explicitly to keep code readable:
```python
import json  # safe — Python returns the cached module
value = json.loads("{{ payload }}")
print(json.dumps({"ok": True}))
```

The common pattern is to guard expensive one-time initialization, not imports:

```python
if "model" not in dir():
    import pickle
    with open("/path/to/model.pkl", "rb") as f:
        model = pickle.load(f)

import json
features = json.loads("{{ payload }}")
print(json.dumps({"prediction": int(model.predict([features])[0])}))
```

Persistent Resources Warning
Warning: Long-lived resources (database connections, file handles, network sockets) stored in `_ns` may become stale. A connection opened on message 1 may be closed, timed out, or broken by message 100.

Always validate or recreate persistent resources:

```python
import psycopg2

if "conn" not in dir() or conn.closed:
    conn = psycopg2.connect("postgresql://user:pass@host/db")

with conn.cursor() as cur:
    cur.execute("SELECT count(*) FROM events WHERE id = %s", ("{{ payload }}",))
    print(cur.fetchone()[0])
```

python.queue vs node.queue
Both nodes share the same queue-and-worker architecture. The key difference is how persistent state is scoped:
| | python.queue | node.queue |
|---|---|---|
| Language | Python | JavaScript |
| Runtime | Configurable via python.config | System Node.js (same as Node-RED) |
| State mechanism | Implicit — _ns dict, like a module's global scope | Explicit — global.* on the vm context |
| Top-level variables | Persist automatically between messages | Do not persist — scoped to the execution |
| Output function | print() | console.log() |
| require() | N/A | Available (Node-RED module environment) |
In python.queue, top-level names persist automatically:

```python
count = count + 1 if "count" in dir() else 1
print(count)
```

In node.queue, top-level `const`/`let`/`var` are scoped to each execution and do not survive. You must use `global.*` explicitly:

```javascript
if (!global.count) global.count = 0;
global.count++;
console.log(global.count);
```

Queue and Worker Lifecycle
The Queue setting controls how many Python workers run concurrently. Workers start lazily on the first incoming message.
| Status | Meaning |
|---|---|
| Blue dot 0 (0/2) | Workers running, all idle |
| Blue ring 0 (2/2) | All workers executing |
| Blue ring 3 (2/2) | 3 messages waiting, both workers busy |
| Grey dot 0 (0/2) | No workers running |
- Idle 20 minutes → all workers are killed; restart on next message
- Worker crash → worker removed; in-flight job fails; replacement created on next message
- Node redeploy / close → all workers killed, pending jobs drained
- ⏹ button in the editor header → kills all workers immediately; confirmation dialog appears when workers are alive
- `msg.stop = true` → same effect from a flow message
Python Executable
python.queue uses the Python binary defined in a linked python.config node. Falls back to python3 if none is linked.
The path can point to a system Python or a virtual environment:
```
/usr/bin/python3
/home/user/myenv/bin/python
```

The environment must already exist with all required packages installed.
Template Engine (Nunjucks)
The template is rendered by Nunjucks before Python sees it. All msg values are automatically converted to strings.
Always wrap string variables in Python quotes:
```python
name = "{{ payload }}"  # correct — renders to: name = "hello"
name = {{ payload }}    # wrong — renders to: name = hello (NameError)
x = {{ payload }}       # correct when payload is a number
```

Warning: Nunjucks evaluates `{{ }}` everywhere — including inside `#` comments. Do not put expressions in comments.
Output
Use print() to produce output. Each print() call produces one message in Delimited mode (default).
Parsing: Delimited — buffers stdout and splits on the delimiter (\n by default).
Parsing: Raw — emits each stdout chunk immediately as a separate message.
stdout vs stderr
```python
import sys
print("debug", file=sys.stderr)  # node warning — not in payload
print('{"result": 42}')          # becomes msg.payload
```

Examples
Accumulate values across messages
```python
import json
if "history" not in dir():
    history = []
history.append("{{ payload }}")
print(json.dumps(history))
```

Output mode: Parsed JSON — msg.payload grows with each message on the same worker.
Load a model once, predict every message
```python
if "model" not in dir():
    import pickle
    with open("/path/to/model.pkl", "rb") as f:
        model = pickle.load(f)

import json
features = json.loads("{{ payload }}")
prediction = model.predict([features])[0]
print(json.dumps({"prediction": int(prediction)}))
```

node.queue
Overview
node.queue keeps a pool of persistent Node.js workers alive. Each incoming message renders your Nunjucks template into JavaScript, sends it to a free worker via stdin, and returns whatever console.log() prints as msg.payload.
Same execution model as python.queue, but runs JavaScript. Workers use the same Node.js runtime as Node-RED, so require() resolves from Node-RED's module environment.
Mental model: each worker is a persistent Node.js vm context.
Persistent State
Each worker's vm context persists across all messages it handles. Top-level const/let/var are scoped to each execution — they do not survive between messages. Use global.* to persist state:
```javascript
if (!global.counter) {
  global.counter = 0;
}
global.counter++;
console.log(global.counter);
```

With Queue > 1, state is per-worker — no shared state between workers.
require()
require is available in the worker context and resolves from Node-RED's module environment:
```javascript
const os = require('os');
console.log(JSON.stringify({ platform: os.platform(), home: os.homedir() }));
```

Output
Use console.log() to produce output.
console.warn() and console.error() → node warnings, not in payload.
Parsing: Delimited — each console.log() call produces one message (splits on \n).
Parsing: Raw — each stdout chunk emitted immediately.
Execution Modes
Synchronous (default)
Code runs in a plain function and completes immediately. No async operations are allowed.
- Safe and deterministic
- Using `await` or returning a Promise causes an explicit error
- Use this mode for pure computation, state manipulation, and synchronous I/O
Asynchronous (Promise-based)
Select Asynchronous (Promise-based) from the Execution dropdown.
The node waits for async work to complete before processing the next message. Your code must ensure all async work completes before execution ends.
- Supports HTTP requests, database calls, file I/O, and any Promise-based API
- Use `return <Promise>` or `await` all async calls before execution ends
Async Usage Guidelines
Always ensure async work completes before execution ends. Prefer:
- `return <Promise>` — the node waits for the returned Promise to resolve
- `await` all async calls — the execution boundary is the end of the async function

✔ Correct — Promise returned:

```javascript
const axios = require('axios');
return axios.get(url).then(r => r.data);
```

✔ Correct — await:

```javascript
const axios = require('axios');
const r = await axios.get(url);
console.log(r.data);
```

✘ Incorrect — async work is not awaited:

```javascript
const axios = require('axios');
axios.get(url).then(r => console.log(r.data));
```

If async work is not completed before execution ends, the message may be lost or execution may fail.
Worker Lifecycle
- Workers start lazily on first message
- After 20 minutes idle → all workers killed; restart on next message
- ⏹ button in editor header → kills all workers immediately; confirmation dialog appears when workers are alive
- `msg.stop = true` → same effect from a flow message
- Node redeploy / close → all workers killed, pending jobs drained
exec.service
Overview
exec.service runs a shell command as a managed, long-lived service. It does not process flow messages — it spawns a process at deploy time and streams stdout continuously as output messages. When the process exits unexpectedly, the node restarts it automatically.
Mental model: you are managing a daemon, not running a command.
Use this node for processes that should always be running: file watchers, log tailers, event listeners, system monitors, persistent workers.
Execution Model
- A single process is spawned immediately on deploy
- No queue — there is only one process at a time
- The node has no input port — it outputs only
- stdout is streamed to output messages using the same Parsing system (Delimited / Raw) as the other nodes
- stderr lines become node warnings
Template and $file
Write code in the Template editor and reference it with $file in the command:
```
python3 -u $file
bash $file
node $file
```

The template is rendered with Nunjucks at process start (and again on every restart). Use {{ flow.get('key') }}, {{ global.get('key') }}, or {{ env.MY_VAR }} to inject values at startup. There is no msg context since the process starts independently of any incoming message.
If no template is configured, the command is run directly.
Restart Behavior
When the process exits unexpectedly:
- The node waits for the configured Restart delay (default: 3000 ms)
- Then spawns a fresh process
Max retries limits how many consecutive failures are tolerated before the node stops trying. Set to 0 for infinite retries (default).
If the process runs stably for 10 seconds, the retry counter resets — so a stable process that occasionally crashes always gets a fresh set of retries.
| Config | Default | Description |
|---|---|---|
| Restart delay | 3000 ms | Wait before restarting after a crash |
| Max retries | 0 | Max consecutive failures before stopping (0 = infinite) |
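The restart accounting described above can be sketched in a few lines of Python (assumed semantics, not the node's actual code — the 10-second stability window and counters are taken from the prose above):

```python
def on_exit(consecutive_failures, max_retries, uptime_seconds):
    """Decide what the node does when the service process exits."""
    if uptime_seconds >= 10:
        consecutive_failures = 0          # a stable run resets the counter
    consecutive_failures += 1
    if max_retries and consecutive_failures > max_retries:
        return "stopped", consecutive_failures
    return "restart_after_delay", consecutive_failures

print(on_exit(0, 3, 120))  # ('restart_after_delay', 1) — crash after a stable run
print(on_exit(3, 3, 2))    # ('stopped', 4) — retries exhausted
print(on_exit(3, 0, 2))    # ('restart_after_delay', 4) — 0 = retry forever
```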
Control Actions
Stop — kills the process and prevents restart. Triggered by:
- The ⏹ button in the editor header
- `POST /exec-service/:id/kill`
Restart — kills the current process (if running) and immediately starts a fresh one, resetting all retry counters. Triggered by:
- The ↺ button in the editor header (confirm dialog if running)
- `POST /exec-service/:id/restart`
The restart action overrides a stopped state — clicking restart on a stopped service will start it.
Status
| Badge | Meaning |
|---|---|
| Blue ring running | Process is alive and streaming |
| Yellow ring restarting (retry N) | Waiting to restart after a crash |
| Grey dot stopped | Manually killed or max retries exceeded |
Output & Parsing
stdout is streamed using the same Parsing system as python.queue and node.queue:
Parsing: Delimited (default) — buffers output and splits on the delimiter (\n by default). Each line produces one message.
Parsing: Raw — emits each stdout chunk immediately. Chunks may not align with line boundaries.
Use Cases
File watcher (inotifywait)
Command: `inotifywait -m -e create,modify --format '{"file":"%w%f","event":"%e"}' /path/to/dir`
Parsing: Delimited
Output: Parsed JSON

Each file event becomes a msg.payload object.
Log streaming
Command: `tail -F /var/log/syslog | grep ERROR`
Parsing: Delimited
Output: Plain text

Each matching log line becomes a msg.payload string.
PostgreSQL LISTEN
Command: `psql postgresql://user:pass@host/db -c "LISTEN my_channel;" -c "SELECT 1" --no-align --tuples-only`
Parsing: Delimited
Output: Plain text

Streams NOTIFY payloads as they arrive.
System monitoring
Command: `while true; do df -h | jc --df; sleep 5; done`
Parsing: Delimited
Output: Parsed JSON

Emits disk usage as a JSON object every 5 seconds.
Python worker with template
Template:
```python
import time, json
while True:
    print(json.dumps({"tick": True}), flush=True)
    time.sleep({{ flow.get('interval') or 1 }})
```

Command: `python3 -u $file`
Parsing: Delimited
Output: Parsed JSON

State & Persistence
exec.queue
No state. Each execution is fully isolated. Nothing survives between messages.
python.queue
State lives in _ns, a persistent Python dict that acts as the worker's global scope. Everything defined at the top level of your code accumulates there.
```
Worker 1 _ns: { "model": <sklearn model>, "pd": <pandas>, "history": [...] }
Worker 2 _ns: { "model": <sklearn model>, "pd": <pandas>, "history": [...] }
```

Workers do not share state with each other.
node.queue
State lives in the worker's vm context, accessible via global.*. Top-level variable declarations (const, let, var) are scoped to each execution and do not persist.
```
Worker 1 global: { counter: 42, db: <connection> }
Worker 2 global: { counter: 17, db: <connection> }
```

exec.service
No persistent application state — the service process manages its own state internally. The node manages the process lifecycle only.
Output & Parsing
All nodes except exec.queue use a shared streaming output system:
Parsing: Delimited (default)
Buffers stdout and splits on the configured delimiter (default: \n). Each complete segment is emitted as a separate message. Incomplete segments at the end of a stream are flushed when the job completes.
This is the correct mode for line-oriented output (most scripts).
Parsing: Raw
Emits each stdout chunk immediately as a separate message with no buffering. Chunks may not align with logical line boundaries — a single print() may produce multiple messages, or a single message may contain multiple lines.
Use Raw mode only when you need the lowest possible latency and can handle partial chunks.
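The difference between the two modes comes down to buffering. A minimal sketch of the Delimited strategy (illustrative, not the node's actual code) shows why it re-assembles lines that arrive split across chunks:

```python
def delimited(chunks, delim="\n"):
    """Buffer stdout chunks and emit one message per complete delimited segment."""
    buf, messages = "", []
    for chunk in chunks:
        buf += chunk
        *complete, buf = buf.split(delim)   # keep the incomplete tail buffered
        messages.extend(complete)
    if buf:                                 # flush the tail when the job completes
        messages.append(buf)
    return messages

# Raw mode would emit these three chunks as-is; Delimited re-assembles the lines:
chunks = ['{"a":', '1}\n{"b"', ':2}\n']
print(delimited(chunks))  # ['{"a":1}', '{"b":2}']
```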
Output format
After splitting (Delimited) or on each chunk (Raw), the segment is parsed according to the selected format:
| Format | Behavior |
|---|---|
| Plain text | value as string (trimmed in Delimited, raw in Raw) |
| Parsed JSON | JSON.parse(segment) |
| Parsed YAML | YAML parse of segment |
| Parsed XML | XML parse of segment |
Parse errors are emitted as node errors and do not produce an output message.
python.config
A config node that stores the Python executable path used by python.queue.
Fields:
- Name — optional label shown in the dropdown
- Python Path — path to the Python binary (required)
On deploy, the node warns if the path does not exist.
python.queue falls back to python3 if no config node is linked.
