n8n-nodes-promptlock-guard · v1.0.14

AI-powered security guardrails for n8n workflows - analyze, redact, or block content based on HIPAA, GDPR, and PCI compliance frameworks
# PromptLock Guard for n8n

🛡️ AI-powered security guardrails for your n8n workflows
PromptLock Guard is a community node that adds content analysis and sensitive data detection to your n8n automations. Detect prompt injection attacks, identify PII/PHI patterns, and route content based on risk assessment.
## ✨ Features

- 🔍 **Sensitive Data Detection**: Identifies patterns associated with HIPAA, GDPR, and PCI requirements
- 🚦 **4-Output Routing**: Route workflows based on risk assessment (Allow/Flag/Redact/Block)
- 🔒 **Fail-Closed Security**: Secure by default with configurable error handling
- ⚡ **Real-Time Analysis**: Fast API integration with configurable timeouts
- 🎯 **Flexible Targeting**: Support for nested field paths with dot notation
- 📊 **Rich Metadata**: Detailed analysis results attached to each item
> **Note**: Prompt injection detection is always enabled, regardless of which compliance frameworks are selected.
## 🚀 Installation

### Community Nodes (Recommended)

1. In n8n, go to **Settings → Community Nodes**
2. Click **Install a community node**
3. Enter `n8n-nodes-promptlock-guard`
4. Click **Install**
5. Restart n8n
### npm Installation

```bash
# Install globally for n8n
npm install -g n8n-nodes-promptlock-guard

# Or install in your n8n user directory
cd ~/.n8n/
npm install n8n-nodes-promptlock-guard
```

Restart n8n afterwards.

## ⚙️ Setup

### 1. Create Credentials
In n8n, create new **PromptLock API Key** credentials:

- **Base URL**: `https://api.promptlock.io`
- **API Key**: Your PromptLock API key (starts with `ps_`)
- **Header Style**: `X-API-Key` (preferred) or Bearer Token

Get your API key at [promptlock.io](https://promptlock.io)
### 2. Add the Node

- Search for "PromptLock Guard" in the node panel
- Configure:
  - **Text Field**: Path to your text data (e.g., `text`, `payload.message`)
  - **Frameworks**: Select detection frameworks (HIPAA, GDPR, PCI)
  - **Credentials**: Select your PromptLock API Key
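Dot-notation paths like `payload.message` are resolved against each item's JSON. A minimal sketch of how such a lookup works (illustrative only, not this node's actual implementation):

```javascript
// Illustration of dot-notation field lookup, as used by the Text Field
// setting. This is a sketch, not the node's actual implementation.
function resolvePath(obj, path) {
  return path.split('.').reduce(
    (value, key) => (value == null ? undefined : value[key]),
    obj,
  );
}

const item = { payload: { message: 'Patient record for review' } };
resolvePath(item, 'payload.message'); // → 'Patient record for review'
resolvePath(item, 'payload.missing'); // → undefined
```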
### 3. Wire the Outputs

The node provides four distinct outputs:

- ✅ **Allow**: Content is safe, proceed normally
- ⚠️ **Flag**: Content needs review, proceed with caution
- 🔒 **Redact**: Content has been cleaned, use the `cleanText` field
- 🚫 **Block**: Content is blocked, do not proceed
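Conceptually, four-output routing works like an n8n node returning one item array per output connector. A sketch in plain JavaScript (the `action` field and the fail-closed fallback here are illustrative assumptions, not this package's internals):

```javascript
// Sketch of four-output routing: one array of items per output, in the
// order the outputs are declared (Allow/Flag/Redact/Block).
const OUTPUTS = ['allow', 'flag', 'redact', 'block'];

function routeItems(items) {
  const outputs = OUTPUTS.map(() => []);
  for (const item of items) {
    const idx = OUTPUTS.indexOf(item.action);
    // Unknown actions fall through to Block (fail closed)
    outputs[idx === -1 ? 3 : idx].push(item);
  }
  return outputs;
}
```

Routing unknown actions to Block mirrors the fail-closed posture described in the Features list.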
## 📋 Quick Example

```
Webhook → PromptLock Guard
            ├─ Allow  → Process Normally
            ├─ Flag   → Send to Review Queue
            ├─ Redact → Process with Clean Text
            └─ Block  → Return 403 Error
```

## 🔧 Configuration Options

### Core Settings
- **Text Field**: Path to analyze (supports dot notation like `data.message.text`)
- **Detection Frameworks**: Select HIPAA, GDPR, and/or PCI pattern detection
- **Action on High Risk**: Override the server-side policy (inherit/flag/redact/block/score)
### Advanced Settings

- **Write Clean Text To**: Field path for redacted content (default: `cleanText`)
- **Attach Metadata Under**: Field path for analysis results (default: `promptLock`)
- **On API Error**: Error-handling strategy (block/flag/allow/throw)
- **Request Timeout**: API timeout in milliseconds (default: 15000)
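The timeout and error-handling settings interact: with fail-closed handling, a timed-out or failed request is treated as a block. A sketch of that behavior, assuming a hypothetical `callPromptLock()` that performs the HTTP request (not part of this package's API):

```javascript
// Sketch of fail-closed error handling with a request timeout.
// `callPromptLock` is a hypothetical function performing the API call.
async function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('request timed out')), ms);
  });
  try {
    return await Promise.race([promise, timeout]);
  } finally {
    clearTimeout(timer);
  }
}

async function analyzeFailClosed(callPromptLock, timeoutMs = 15000) {
  try {
    return await withTimeout(callPromptLock(), timeoutMs);
  } catch (err) {
    // Fail closed: any error or timeout becomes a block decision
    return { action_taken: 'block', error: err.message };
  }
}
```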
## 📊 Metadata Structure

The node attaches analysis data to each item:

```json
{
  "promptLock": {
    "risk_score": 56,
    "action_taken": "redact",
    "clean_text": "Patient [HIPAA_PERSON_NAME] (SSN: [HIPAA_SSN]) needs treatment",
    "violations": [
      {
        "type": "pii_detection",
        "category": "person_name",
        "confidence": 0.95,
        "position": [8, 18],
        "text": "John Smith",
        "placeholder": "[HIPAA_PERSON_NAME]",
        "compliance_frameworks": ["HIPAA"]
      }
    ],
    "compliance_status": {
      "HIPAA": 2,
      "GDPR": 0,
      "PCI": 0
    },
    "usage": {
      "tokens_analyzed": 8,
      "processing_time_ms": 78
    }
  }
}
```

## 🔒 Security Best Practices
- **On API Error**: Keep as "Block (Fail Closed)" for maximum security
- **Action on High Risk**: Use "Inherit from Policy" to leverage server-side rules
- **Write Clean Text To**: Use a separate field to preserve the original data
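Downstream steps can combine these practices with the attached metadata, e.g. preferring the redacted text while leaving the original field intact. A sketch of reading the metadata in a later Code step (field names follow the example metadata structure in this README and are illustrative):

```javascript
// Sketch: consuming PromptLock metadata downstream. Field names follow
// the example metadata in this README and are illustrative.
function summarize(json, originalField = 'text') {
  const meta = json.promptLock || {};
  const violations = meta.violations || [];
  return {
    risk: meta.risk_score ?? 0,
    hipaaHits: violations.filter(
      (v) => (v.compliance_frameworks || []).includes('HIPAA'),
    ).length,
    // Prefer the redacted text when a redaction was applied,
    // leaving the original field untouched
    text: meta.action_taken === 'redact' ? meta.clean_text : json[originalField],
  };
}
```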
## ⚠️ Important Note
PromptLock is a security tool that helps detect sensitive data patterns and prompt injection attempts. It is not a compliance certification and does not guarantee regulatory compliance. You remain responsible for your own compliance obligations.
## 📞 Support

- Website: [promptlock.io](https://promptlock.io)
- Email: [email protected]
## 📜 License

MIT License - see the LICENSE file for details.

Built by the PromptLock team
