# n8n-nodes-privent

v0.3.0
Privent DLP nodes for n8n: session-scoped tokenization, risk scoring, and safe detokenization for AI agent workflows
Early Access. Privent is currently in private rollout. API keys are issued through our access process. Request access →
Official Privent DLP community nodes for n8n. Tokenize PII and secrets in prompts on the way to your AI agents and detokenize them at trusted egress points — without ever exposing raw data to the LLM.
```
[Webhook] → [Privent Session] → [Privent Tokenize] → [OpenAI Chat]
          → [Privent Detokenize] → [Respond]
```

## Why
LLM-powered workflows leak data. A naive `{{ $json.prompt }}` into ChatGPT sends customer emails, card numbers, and API keys straight to a third party.
Privent sits in the middle: it replaces sensitive values with reversible placeholders ([EMAIL_001], [CREDIT_CARD_002]) before the LLM call, then restores them only at sinks you trust.
## Requirements
| Component | Minimum |
|---|---|
| n8n | 1.22.0 |
| Node.js | 20 |
| Privent API key | optional (regex-only mode without it) |
## Installation
In n8n: **Settings → Community Nodes → Install** → enter `n8n-nodes-privent`.
Manual install (self-hosted):
```bash
cd ~/.n8n
npm install n8n-nodes-privent
```

Restart n8n. The Privent nodes appear in the node panel.
## Credential: PriventApi
Create a PriventApi credential before using any Privent node.
| Field | Description | Default |
|---|---|---|
| API Key | Privent Cloud API key — request access. Encrypted at rest by n8n. | — |
| Base URL | Privent Cloud endpoint | https://api.privent.ai |
| Vault Backend | Token storage: memory or redis | memory |
The Privent ML host (GLiNER entity extractor) is auto-routed to `https://ml.privent.ai` and authenticates with the same API key. No extra credential field is required. Self-hosted setups can override this via the `PRIVENT_ML_URL` environment variable on the n8n process (see "Local development" below).
## Nodes

### Privent Session
Opens a Privent session. Place this first in the workflow — every tokenize/detokenize node downstream consumes its sessionId.
Output:
| Field | Type | Description |
|---|---|---|
| sessionId | string | UUID; pass downstream as ={{ $('Privent Session').item.json.sessionId }} |
| traceId | string | Correlation ID for audit logs |
| startedAt | number | Unix ms timestamp |
Parameters:
- Session ID Mode — `auto` (new UUID per execution) or `manual`
- Framework — orchestration label that appears in audit logs (`n8n` / `manual`)
### Privent Tokenize
Detects PII and secrets in a text field and replaces them with [KIND_NNN] tokens.
| Parameter | Description |
|---|---|
| Text Field | Field name to tokenize (e.g. text, prompt) |
| Session ID | sessionId from upstream Privent Session node |
| Detection Mode | local (regex), cloud (ML), auto (local-first, cloud fallback) |
| Review Threshold | Items above this risk score are flagged with _privent.flaggedForReview: true |
| Entity Hints | Detection priority list: email, phone, credit_card, iban, ssn, api_key, jwt, aws_key, ip, url |
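To make the `local` detection mode concrete, here is a minimal sketch of a regex pass that emits `[KIND_NNN]` tokens and a `_privent` metadata block. This is illustrative only, not the actual Privent engine: the patterns, scoring weights, and default threshold are assumptions.

```javascript
// Illustrative sketch of local (regex) detection — not the real
// Privent detection engine. Patterns and weights are assumptions.
const PATTERNS = [
  { kind: "EMAIL", re: /[\w.+-]+@[\w-]+\.[\w.]+/g, weight: 0.5 },
  { kind: "PHONE", re: /\+?\d[\d\s().-]{7,}\d/g, weight: 0.35 },
];

function tokenizeLocal(text, reviewThreshold = 0.7) {
  const entities = [];
  let counter = 0;
  let risk = 0;
  let out = text;
  for (const { kind, re, weight } of PATTERNS) {
    out = out.replace(re, () => {
      counter += 1;
      const token = `${kind}_${String(counter).padStart(3, "0")}`;
      entities.push(token);
      risk = Math.min(1, risk + weight); // crude additive scoring
      return `[${token}]`;
    });
  }
  return {
    text: out,
    _privent: {
      tokenCount: counter,
      riskScore: risk,
      flaggedForReview: risk >= reviewThreshold,
      detectedEntities: entities,
    },
  };
}
```

Run against `"Hi jane@example.com, call +1 555 123 4567."` this produces a shape matching the output documented below.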
Output:

```json
{
  "text": "Hi [EMAIL_001], your number is [PHONE_002].",
  "_privent": {
    "sessionId": "...",
    "tokenCount": 2,
    "riskScore": 0.85,
    "flaggedForReview": true,
    "detectedEntities": ["EMAIL_001", "PHONE_002"]
  }
}
```

### Privent Detokenize
Replaces tokens with their original values. Use this at trusted egress points (databases, internal webhooks, email sends) after the LLM step.
| Parameter | Description |
|---|---|
| Session ID | sessionId from the Privent Session node |
| Target Field | Field to detokenize; * walks every string field (default) |
| Strict Mode | If true, throws when a token would leak to an untrusted sink |
| Trusted Sinks | Allowlist of URL prefixes (e.g. https://api.internal.com) |
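Strict Mode plus Trusted Sinks can be pictured as a prefix check before egress. A minimal sketch, assuming the node throws whenever a still-tokenized payload targets a URL outside the allowlist (the real node's internals may differ):

```javascript
// Illustrative Strict Mode check — assumed behavior, not the node's code.
// Matches the [KIND_NNN] token format documented in this README.
const TOKEN_RE = /\[[A-Z_]+_\d{3}\]/;

function assertSafeEgress(payload, url, trustedSinks) {
  const trusted = trustedSinks.some((prefix) => url.startsWith(prefix));
  if (!trusted && TOKEN_RE.test(payload)) {
    throw new Error(`Privent token would leak to untrusted sink: ${url}`);
  }
  return true;
}
```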
### Privent Risk Check
Scores text for PII and secret risk using the Privent Cloud ML pipeline. Standalone — does not require a Privent Session.
Exposed as a tool to AI Agent nodes (usableAsTool: true).
| Parameter | Description |
|---|---|
| Text Field | Field name to score |
| Session ID | Optional; for audit correlation |
Output:

```json
{
  "privent": {
    "risk_score": 0.92,
    "risk_level": "high",
    "categories": { "pii": 0.95, "financial": 0.1 },
    "model": "privent-risk-v2",
    "latencyMs": 43
  }
}
```

Pipe this into a Switch node to route high-risk inputs through human review.
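A Code-node equivalent of that Switch might look like the sketch below. The 0.8 / 0.4 cutoffs and branch names are illustrative assumptions, not documented Privent thresholds.

```javascript
// Route items by the Risk Check score. Thresholds are assumptions.
function routeByRisk(item) {
  const score = item.privent.risk_score;
  if (score >= 0.8) return "human_review";   // high risk: hold for a person
  if (score >= 0.4) return "tokenize_first"; // medium: mask before the LLM
  return "pass_through";                     // low: send as-is
}
```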
## Local development

Smoke-test the nodes locally against the production Privent stack (`api.privent.ai` + `ml.privent.ai`), using the same code path as an `npm install n8n-nodes-privent` consumer:

```bash
docker compose -f docker-compose.local.yml up
```

Open http://localhost:5678, create a Privent API credential with your API key, and run a workflow. Entity extraction is routed to `https://ml.privent.ai` automatically.
If the ML host is unreachable, the workflow keeps running on regex-only output (silent fail-open).
## Example Workflow
```
[Webhook] → [Privent Session] → [Privent Tokenize]
          → [OpenAI Chat] → [Privent Detokenize] → [Respond to Webhook]
```

- Privent Session — generates a `sessionId`, opens a vault.
- Privent Tokenize — masks every PII / secret in the prompt.
- OpenAI Chat — sees only tokens, never raw data.
- Privent Detokenize — restores tokens in the LLM response.
## Token Format

```
[KIND_NNN]
```

| Example | Meaning |
|---|---|
| [EMAIL_001] | Email address |
| [PHONE_002] | Phone number |
| [CREDIT_CARD_003] | Credit card number |
| [SSN_004] | Social Security Number |
| [API_KEY_005] | API key |
| [JWT_006] | JSON Web Token |
Tokens are session-scoped: the same value always maps to the same token within one session, but token IDs are randomized across sessions to prevent correlation attacks.
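The session-scoping guarantee can be modeled as one mapping per session. A minimal sketch (the real vault also randomizes token IDs across sessions, which is omitted here for brevity):

```javascript
// Illustrative model of session scoping — not the actual vault code.
// One Map per session: the same value always yields the same token
// within that session; a fresh session starts a fresh mapping.
function makeSessionVault() {
  const byValue = new Map();
  let counter = 0;
  return {
    tokenFor(kind, value) {
      if (!byValue.has(value)) {
        counter += 1;
        byValue.set(value, `[${kind}_${String(counter).padStart(3, "0")}]`);
      }
      return byValue.get(value);
    },
  };
}
```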
## Security Properties
- Vault state is never serialized into n8n execution JSON.
- Each credential pair is isolated by SHA-256 hash (multi-tenant safe).
- Vaults auto-expire 60 minutes after last use.
- Fail-open: if Privent Cloud is unreachable, local regex detection takes over rather than blocking the workflow.
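The 60-minute idle expiry can be sketched as a last-use timestamp check. The storage shape and function names here are assumptions, not the package's internals:

```javascript
// Sketch of the 60-minute idle expiry described above.
const VAULT_TTL_MS = 60 * 60 * 1000;

function isExpired(vault, now = Date.now()) {
  return now - vault.lastUsedAt > VAULT_TTL_MS;
}

function touch(vault, now = Date.now()) {
  vault.lastUsedAt = now; // any tokenize/detokenize resets the idle timer
  return vault;
}
```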
## Links
- n8n Setup Guide
- Request access — required to obtain a production API key
## License
Apache-2.0 © Privent AI
Questions? Contact us at [email protected].
