
edgesync360-edgehub-logbook-nodejs-sdk

v2.0.12


EdgeHub LogBook SDK for Node.js — structured audit logging for microservices (IEC 62443 / EU CRA)


EdgeHub LogBook — Node.js SDK

v2.0.12 · Structured audit-logging SDK for microservices, designed to align with IEC 62443 / EU CRA, with support for distributed tracing and cross-service trace propagation (HTTP / AMQP / MQTT).


Table of Contents

  1. Installation
  2. Quick Start
  3. Core Concepts
  4. API Usage
  5. Cross-Service Distributed Tracing
  6. Express Middleware
  7. Transports and Retry
  8. Enum Constants Reference

Installation

npm install edgesync360-edgehub-logbook-nodejs-sdk

Peer dependency: requires mongodb >= 5 (when using MongoTransport).


Quick Start

const {
  LogClient, MongoTransport,
  SERVICE, SOURCE, RESOURCETYPE
} = require("edgesync360-edgehub-logbook-nodejs-sdk");

// 1. Create a transport (withRetry is recommended)
const transport = new MongoTransport("mongodb://localhost:27017", "my_db").withRetry();

// 2. Initialize the LogClient (create it only once for the application's lifetime)
const client = new LogClient(transport, {
  serviceName: SERVICE.DM,
  source: SOURCE.EDGE
});

// 3. Register action definitions at startup (an array may be passed to register several service modules at once)
await client.registerActions([
  {
    svc: SERVICE.DM,
    acts: [
      { code: "DEVICE.CREATE", name: "Create device", res_type: RESOURCETYPE.DEVICE }
    ],
    v: "1.0"
  }
]);

// 4. Write a single audit log entry
await client.audit()
  .accountName("[email protected]")
  .firstName("John")
  .lastName("Doe")
  .tenant("t-001", "MyTenant")
  .action("DEVICE.CREATE")
  .categoryConfigurationChanges()
  .resource("dev-001", "MyDevice")
  .info("Device created")
  .reportSuccess();

Core Concepts

LogClient Lifecycle

  • Initialize one LogClient instance at application startup and share it throughout.
  • MongoTransport accepts either a URI (the SDK manages the connection) or an existing MongoClient (managed by the application).

Registering Actions

registerActions maps an operation code (code) to a display name (name) and a resource type (res_type).
When .action("CODE") is called later, the SDK can automatically fill in the resource's res_type, and the svc can be inherited within a Scope.
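The mapping can be pictured as a registry keyed by code. The sketch below is illustrative only — the helper names registerActionDefs and resolveAction are made up for this example and are not the SDK's internal implementation:

```javascript
// Illustrative action registry: maps a code to its display name and
// resource type, so a later .action("CODE") call can fill in the rest.
const registry = new Map();

function registerActionDefs(defs) {
  for (const { svc, acts, v } of defs) {
    for (const act of acts) {
      registry.set(act.code, { svc, name: act.name, res_type: act.res_type, v });
    }
  }
}

function resolveAction(code) {
  const def = registry.get(code);
  if (!def) throw new Error(`Unregistered action code: ${code}`);
  return def;
}

registerActionDefs([{
  svc: "DM",
  acts: [{ code: "DEVICE.CREATE", name: "Create device", res_type: "DEVICE" }],
  v: "1.0"
}]);

console.log(resolveAction("DEVICE.CREATE").res_type); // "DEVICE"
```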

Scope and Distributed Tracing

The SDK uses a Scope object to carry the TraceID, SpanID, User, Tenant, and related information.
A Scope can be parsed automatically from an HTTP request (fromRequest), MQ headers (fromHeaders), or an MQTT 3 payload envelope (unwrapPayloadAndScope).

TraceID Propagation Rules

  • An upstream x-logbook-span-id is stored as this hop's parentID.
  • The service generates a new spanID of its own, preserving the tree structure of the call chain.

API Usage

IEC 62443 Category (required)

  • Every log entry must carry a category; otherwise the SDK refuses to write it.
  • client.logBook() (tp=sys) automatically defaults to category=control_system_events.
  • For audit paths, use the semantic helpers (prefixed with category...):
    • categoryAccessControl()
    • categoryRequestErrors()
    • categoryControlSystemEvents()
    • categoryBackupRestoreEvents()
    • categoryConfigurationChanges()
    • categoryAuditLogEvents()

Audit: Unified Audit Logging

Whether for a one-shot action or a long-running task with multiple steps (start → in progress → end), always use client.audit() to obtain the scoped builder.

Scenario 1: Single-Step Audit (one-shot events)

Use this when the operation is finalized immediately and no start/progress is needed:

await client.audit()
  .fromRequest(req)
  .action("GROUP.CREATE")
  .categoryAccessControl()
  .resource("g-001", "MyGroup")
  .info("Group created successfully")
  .reportSuccess();

Resource Settings

Several ways of identifying the target of an operation are supported; the SDK ensures the ID and Name are stored as strings:

  • .resource(id, name): set both the ID and the name.
  • .resourceId(id): set only the ID; the name defaults to "-".
  • .resourceName(name): set only the name; the ID defaults to "-".
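The defaulting rules above amount to a small normalization step. A minimal sketch (normalizeResource is a hypothetical helper, not an SDK export):

```javascript
// Sketch of the resource defaulting rules: IDs and names are coerced to
// strings, and a missing side falls back to "-".
function normalizeResource(id, name) {
  return {
    id: id === undefined ? "-" : String(id),
    name: name === undefined ? "-" : String(name)
  };
}

console.log(normalizeResource(42, "MyDevice")); // returns { id: "42", name: "MyDevice" }
console.log(normalizeResource("dev-001"));      // returns { id: "dev-001", name: "-" }
```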

Scenario 2: Multi-Step Flow (long-running tasks)

Use this for business flows with a start → in progress → end lifecycle: keep the audit builder in a variable and report in stages. For background tasks, the identity can be supplied manually:

// Build the Scope manually (for background tasks or MQ consumers)
const task = client.audit()
  .accountName("[email protected]")
  .firstName("System").lastName("Agent")
  .tenant("t-001", "MyTenant")
  .action("OTA.UPDATE")
  .categoryConfigurationChanges()
  .resourceId("dev-002"); // only the ID is given; the name defaults to "-"

await task.info("Starting OTA update").start();
// ... business logic ...
await task.info("OTA update in progress").progress();
// ... business logic ...
await task.info("OTA update complete").reportSuccess();
// or .reportFail() / .reportCancel() / .reportPartialSuccess()

LogBook: System-Level Logging

Designed for system-level operations (tp = sys): no hash is computed, and the Who (accountName) may be omitted.

await client.logBook()
  .tenant("t-001", "MyTenant")
  .action("METRIC.COLLECT")
  .resource("node-01", "Edge Node")
  .info("CPU usage reported")
  .reportSuccess();

Cross-Service Distributed Tracing

The SDK propagates tracing information with the following headers (consistent across the Go / Java / Node.js SDKs):

| Header | Description |
|--------|-------------|
| x-logbook-trace-id | Global cross-service TraceID |
| x-logbook-span-id | SpanID of this hop (treated as parentID downstream) |
| x-logbook-account-name | User account |
| x-logbook-tenant-id | Tenant ID |
| x-logbook-tenant-name | Tenant name |
| x-logbook-first-name | User first name |
| x-logbook-last-name | User last name |

Sender: call scoped.toHeaders() to obtain a map and attach it to outgoing HTTP headers or MQ attributes.
Receiver: call audit().fromHeaders(headers) to restore the Scope.
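The round trip can be sketched as flattening a scope into the x-logbook-* headers and rebuilding it on the other side. The helpers scopeToHeaders and headersToScope below are illustrative stand-ins for the SDK's toHeaders()/fromHeaders():

```javascript
// Sender side: flatten the scope into the x-logbook-* headers.
function scopeToHeaders(scope) {
  return {
    "x-logbook-trace-id": scope.traceID,
    "x-logbook-span-id": scope.spanID, // downstream will treat this as parentID
    "x-logbook-account-name": scope.accountName,
    "x-logbook-tenant-id": scope.tenantID,
    "x-logbook-tenant-name": scope.tenantName
  };
}

// Receiver side: rebuild a scope from the headers.
function headersToScope(headers) {
  return {
    traceID: headers["x-logbook-trace-id"],
    parentID: headers["x-logbook-span-id"],
    accountName: headers["x-logbook-account-name"],
    tenantID: headers["x-logbook-tenant-id"],
    tenantName: headers["x-logbook-tenant-name"]
  };
}

const sent = scopeToHeaders({
  traceID: "t-1", spanID: "s-1", accountName: "svc@example",
  tenantID: "ten-1", tenantName: "MyTenant"
});
const received = headersToScope(sent);
console.log(received.parentID); // "s-1" — the sender's span is the receiver's parent
```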


HTTP Server — Automatic Entry / Follower Detection

fromRequest(req) decides between two modes:

  • x-logbook-trace-id header present → Follower mode: inherit the upstream TraceID; the upstream SpanID becomes the parentID.
  • Header absent → Entry mode: read the identity from cookies (EIToken, IFPTenant) and generate a new TraceID.

// http_server.js (Express)
const { LogClient, MongoTransport, SERVICE, SOURCE, RESOURCETYPE } = require("edgesync360-edgehub-logbook-nodejs-sdk");

const sdk = new LogClient(
  new MongoTransport("mongodb://localhost:27017", "my_db").withRetry(),
  { source: SOURCE.INTERNAL }
);

await sdk.registerActions({
  svc: SERVICE.DM,
  acts: [{ code: "DEMO.HTTP", name: "HTTP 範例", res_type: RESOURCETYPE.NONE }],
  v: "1.0"
});

app.all("/service", async (req, res) => {

  const scoped = sdk.audit().fromRequest(req).action("DEMO.HTTP")...;

  await scoped.info("Recognized as the entry node; starting a new trace").start();
  await scoped.info("Inherited tracing context from upstream").progress();
  await scoped.info("Received the downstream response successfully").reportSuccess();

  // When calling a downstream service, propagate the trace context
  const forwardHeaders = {
    "Content-Type": "application/json",
    ...scoped.toHeaders()   // key: this hop's spanID becomes the downstream parentID
  };
  const downstreamResp = await axios.post(DOWNSTREAM_URL, body, { headers: forwardHeaders });

  res.json({ traceId: scoped.scope.traceID, detail: downstreamResp.data });
});

AMQP Consumer (RabbitMQ)

The producer propagates the trace via AMQP message.headers; the consumer restores it with audit().fromHeaders().

// amqp_consumer.js
const amqp = require("amqplib");
const { LogClient, MongoTransport, SERVICE, SOURCE, RESOURCETYPE } = require("edgesync360-edgehub-logbook-nodejs-sdk");

const sdk = new LogClient(
  new MongoTransport("mongodb://localhost:27017", "my_db").withRetry(),
  { source: SOURCE.INTERNAL }
);

await sdk.registerActions({
  svc: SERVICE.DM,
  acts: [{ code: "GROUP.CREATE", name: "Create group", res_type: RESOURCETYPE.GROUP }],
  v: "1.0"
});

const conn = await amqp.connect("amqp://user:pass@rabbitmq-host:5672/vhost");
const ch = await conn.createChannel();
const q = await ch.assertQueue("", { exclusive: true });
await ch.bindQueue(q.queue, "amq.topic", "logbook.demo.#");

ch.consume(q.queue, async (msg) => {
  if (!msg) return;

  // Restore the trace context from the AMQP headers (symmetric with audit().fromRequest())
  const scoped = sdk.audit().fromHeaders(msg.properties.headers);

  await scoped
    .action("GROUP.CREATE")
    .info("Received AMQP message from upstream; logging context restored successfully")
    .reportSuccess();

  console.log(`[Consumer] Trace ID: ${scoped.scope.traceID}`);
  ch.ack(msg);
});

On the AMQP producer side (Go / Java / Node.js), publish with:

// Node.js producer
const headers = scoped.toHeaders();
ch.publish(exchange, routingKey, Buffer.from(body), { headers });

MQTT 3.1.1 Consumer

MQTT 3.1.1 has no User Properties, so the SDK uses a payload envelope scheme:
trace information is embedded in the _logbook field of the JSON payload, and the original business data lives in the remaining fields (or data).
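Conceptually, the envelope is just the business JSON with a _logbook field riding alongside it. The sketch below only illustrates the idea under that assumption — the exact field layout produced by the SDK's wrapPayloadWithTrace/unwrapPayloadAndScope may differ:

```javascript
// Illustrative envelope: trace headers in _logbook, business data in data.
function wrapPayload(scopeMap, dataBuffer) {
  const envelope = { _logbook: scopeMap, data: JSON.parse(dataBuffer.toString()) };
  return Buffer.from(JSON.stringify(envelope));
}

function unwrapPayload(messageBuffer) {
  const { _logbook, data } = JSON.parse(messageBuffer.toString());
  return { scopeMap: _logbook, dataBytes: Buffer.from(JSON.stringify(data)) };
}

const wrapped = wrapPayload(
  { "x-logbook-trace-id": "t-9" },
  Buffer.from(JSON.stringify({ temperature: 21.5 }))
);
const { scopeMap, dataBytes } = unwrapPayload(wrapped);
console.log(scopeMap["x-logbook-trace-id"]); // "t-9"
console.log(JSON.parse(dataBytes.toString()).temperature); // 21.5
```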

// mqtt3_consumer.js
const mqtt = require("mqtt");
const {
  LogClient, MongoTransport,
  SERVICE, SOURCE, RESOURCETYPE,
  unwrapPayloadAndScope
} = require("edgesync360-edgehub-logbook-nodejs-sdk");

const sdk = new LogClient(
  new MongoTransport("mongodb://localhost:27017", "my_db").withRetry(),
  { source: SOURCE.INTERNAL }
);

await sdk.registerActions({
  svc: SERVICE.DM,
  acts: [{ code: "GROUP.CREATE", name: "Create group", res_type: RESOURCETYPE.GROUP }],
  v: "1.0"
});

const client = mqtt.connect("mqtt://broker-host:1883", {
  username: "user", password: "pass", clientId: "nodejs-mqtt3-consumer"
});

client.on("connect", () => client.subscribe("logbook/demo/mqtt3"));

client.on("message", async (topic, message) => {
  // 1. Extract the trace context and the original business data from the payload envelope
  const { scopeMap, dataBytes } = unwrapPayloadAndScope(message);

  // 2. Restore the Scope (symmetric with fromRequest)
  const scoped = sdk.audit().fromHeaders(scopeMap);

  // 3. Continue logging; the TraceID will match the publisher's
  await scoped
    .action("GROUP.CREATE")
    .info("Received MQTT 3 message; cross-service trace restored from the envelope")
    .reportSuccess();

  // dataBytes is the restored business payload (without _logbook)
  const data = JSON.parse(dataBytes.toString());
  console.log("Business data:", data);
});

MQTT 3 publisher side (wrapping the envelope before sending):

// Node.js Publisher
const envelope = scoped.wrapPayloadWithTrace(Buffer.from(JSON.stringify(payload)));
mqttClient.publish("logbook/demo/mqtt3", envelope);

Express Middleware

With middleware(), each request's Scope is automatically stored in AsyncLocalStorage,
so the service layer can write logs without passing req around.

app.use(client.middleware());

// Deep in the service layer (no req object needed)
async function someService() {
  await client.audit()
    .fromScopeContext()   // pulls the Tenant/User/TraceID from AsyncLocalStorage automatically
    .action("DATA.QUERY")
    .info("Querying data")
    .reportSuccess();
}

Transports and Retry

RetryingTransport provides exponential backoff with full jitter; the defaults match the Go / Java SDKs:

| Parameter | Default | Description |
|-----------|---------|-------------|
| maxAttempts | 4 | Maximum number of attempts |
| baseMs | 50 ms | Base backoff |
| maxMs | 500 ms | Maximum backoff per attempt |
| maxTotalMs | 2000 ms | Overall time budget |

// Option 1: create a new connection from a URI
const transport = new MongoTransport("mongodb://localhost:27017", "my_db").withRetry();

// Option 2: customize the retry parameters
const transport = new MongoTransport(uri, db).withRetry({ maxTotalMs: 5000 });

// Option 3: reuse an existing MongoClient (e.g. from Mongoose)
const { MongoClient } = require("mongodb");
const existingClient = mongoose.connection.getClient(); // or new MongoClient(uri)
const transport = new MongoTransport(existingClient, db).withRetry();
// Note: when an existing client is passed in, the SDK does not close the connection itself (shouldClose = false)
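The schedule in the table above — exponential backoff capped per attempt, a uniform "full jitter" draw, and an overall budget — can be sketched as follows. The backoffDelay/withRetry helpers are illustrative, not the SDK's RetryingTransport:

```javascript
// Sketch of exponential backoff with full jitter under the defaults above.
const defaults = { maxAttempts: 4, baseMs: 50, maxMs: 500, maxTotalMs: 2000 };

function backoffDelay(attempt, cfg = defaults) {
  // attempt is 1-based: the cap grows 50, 100, 200, ... up to maxMs
  const cap = Math.min(cfg.maxMs, cfg.baseMs * 2 ** (attempt - 1));
  return Math.random() * cap; // full jitter: uniform in [0, cap)
}

async function withRetry(fn, cfg = defaults) {
  const start = Date.now();
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const elapsed = Date.now() - start;
      if (attempt >= cfg.maxAttempts || elapsed >= cfg.maxTotalMs) throw err;
      await new Promise(r => setTimeout(r, backoffDelay(attempt, cfg)));
    }
  }
}
```

Usage would look like `await withRetry(() => transport.send(entry))`: transient failures are retried with growing, randomized delays until the attempt or time budget is exhausted.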

Enum Constants Reference

const {
  LOG_TYPE,    // OP, SYS, SEC
  CATEGORY,    // IEC 62443 categories
  SEVERITY,    // I (info), W (warn), E (error), C (critical), D (debug)
  RESULT,      // SUCCESS, FAIL, PARTIAL, CANCEL
  SOURCE,      // WEB, EDGE, INTERNAL
  SERVICE,     // DPM, DM, UM, CC, AE, DA, TM, SS, CM, TFM
  RESOURCETYPE // DEVICE, GROUP, USER, TENANT, FILE, ROLE, SETTING ...
} = require("edgesync360-edgehub-logbook-nodejs-sdk");

| Category | Common values |
|----------|---------------|
| SOURCE | WEB (browser/portal), EDGE (edge device), INTERNAL (backend service/MQ) |
| RESOURCETYPE | DEVICE, GROUP, USER, TENANT, FILE, ROLE, SETTING, ALARM ... |
| SERVICE | DPM (device management), DM (data management), UM (users), TM (tenants) ... |


Automatic Audit-Log Backup and Retention (Data Retention)

All logs written through this SDK are stored in monthly dc_audit_log_YYYYMM collections. The central dc-log-service runs an automated backup schedule:

  • Online retention: MongoDB keeps the current month plus the previous 3 months by default (4 full months).
  • Cold-storage backup: at 03:00 on the 5th of each month, the system locks the logs from 4 months before the current date (e.g. a run on 11/5 backs up July's data), packages them completely, and uploads them to Azure Blob for permanent cold storage.
  • Expired-log cleanup: once the upload completes, that month's old data is dropped from MongoDB to free resources.
  • Offline restore: the backups in Blob are standard mongodump .bson files; very old history can be restored manually with mongorestore at any time.
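The naming and "4 months back" arithmetic above can be sketched with plain date math. The helpers collectionFor and backupTargetMonth are illustrative only, not part of the SDK or of dc-log-service:

```javascript
// Monthly collection name for a given date, e.g. dc_audit_log_202511.
function collectionFor(date) {
  const y = date.getUTCFullYear();
  const m = String(date.getUTCMonth() + 1).padStart(2, "0");
  return `dc_audit_log_${y}${m}`;
}

// Collection targeted by the backup job: 4 months before the run date,
// e.g. a run on Nov 5 targets July.
function backupTargetMonth(runDate) {
  const d = new Date(Date.UTC(runDate.getUTCFullYear(), runDate.getUTCMonth() - 4, 1));
  return collectionFor(d);
}

console.log(collectionFor(new Date(Date.UTC(2025, 10, 5))));     // "dc_audit_log_202511"
console.log(backupTargetMonth(new Date(Date.UTC(2025, 10, 5)))); // "dc_audit_log_202507"
```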

License

MIT