# n8n-nodes-openguardrails

This is an n8n community node that integrates OpenGuardrails - an enterprise-grade, open-source AI safety guardrails platform.
n8n is a fair-code licensed workflow automation platform.
## Features
OpenGuardrails provides comprehensive AI safety protection including:
- Prompt Attack Detection - Detect jailbreaks, prompt injections, and manipulation attempts
- Content Safety - Check for 19 risk categories, including violence, hate speech, and adult content
- Data Leak Detection - Identify privacy violations, commercial secrets, and IP infringement
- Multi-turn Conversation Analysis - Context-aware detection across conversation history
- Ban Policy Enforcement - Automatic blocking of malicious users
- Knowledge Base Responses - Intelligent safe responses for risky content
## Installation
### Community Nodes (Recommended)
- Go to Settings → Community Nodes in your n8n instance
- Click Install and enter `n8n-nodes-openguardrails`
- Click Install to confirm
### Manual Installation
```bash
# Navigate to your n8n installation folder
cd ~/.n8n

# Install the node package
npm install n8n-nodes-openguardrails

# Restart n8n
```

### Docker Installation
If you're running n8n in Docker, add this to your `docker-compose.yml`:

```yaml
environment:
  - N8N_COMMUNITY_PACKAGES=n8n-nodes-openguardrails
```

## Prerequisites
- n8n version 0.200.0 or later
- OpenGuardrails API key (get it from https://api.openguardrails.com)
## Credentials
To use this node, you need to configure your OpenGuardrails API credentials:
- Go to Credentials → New in n8n
- Search for OpenGuardrails API
- Enter your API key (format: `sk-xxai-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`)
- (Optional) Enter a custom API URL if using self-hosted OpenGuardrails
- Click Create
## Operations
The OpenGuardrails node supports four operations:
### 1. Check Content
Check any content for safety issues.
Use Case: Validate user-generated content before processing
Parameters:
- Content: The text to check
- Detection Options:
  - Enable Security Check (prompt attacks)
  - Enable Compliance Check (content safety)
  - Enable Data Security (data leaks)
- User ID (optional, for ban policy)
- Action on High Risk: How to handle risky content
Output:
```json
{
  "action": "pass|reject|replace",
  "risk_level": "none|low|medium|high",
  "categories": ["S9", "S5"],
  "suggest_answer": "Safe alternative response",
  "hit_keywords": ["hack", "attack"],
  "processed_content": "Final content",
  "has_warning": false,
  "was_replaced": false
}
```
### 2. Input Moderation
Specifically designed for moderating user input before sending to AI models or processing systems.
Use Case: Protect AI chatbots from prompt attacks and inappropriate input
Parameters: Same as Check Content
Example Workflow:

```
User Input → Input Moderation → IF (action = pass) → Send to LLM
```
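In the IF node, one way to express the branch condition is a boolean expression on the moderation output:

```
{{ $json.action === 'pass' }}
```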
### 3. Output Moderation
Moderate AI/system output before sending to users.
Use Case: Ensure AI-generated responses are safe and appropriate
Parameters: Same as Check Content
Example Workflow:

```
LLM Response → Output Moderation → IF (action = pass) → Send to User
```
### 4. Conversation Check
Check a multi-turn conversation for safety with full context awareness.
Use Case: Monitor ongoing conversations for emerging safety issues
Parameters:
- Messages: Array of conversation messages with roles (user/assistant/system)
- Detection Options: Same as above
- Action on High Risk: Same as above
Example Input:
```json
{
  "messages": [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi! How can I help?"},
    {"role": "user", "content": "Tell me how to hack..."}
  ]
}
```
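If the conversation history already lives on the incoming item, a preceding Code node (mode: Run Once for Each Item) can assemble the messages array; the `chatHistory` and `text` field names here are illustrative only:

```javascript
// Illustrative sketch: build the messages array from fields on the item.
// `chatHistory` and `text` are hypothetical names -- adapt to your workflow.
const history = $json.chatHistory ?? [];
const messages = [...history, { role: 'user', content: $json.text }];
return { json: { messages } };
```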
## Detection Options
### Enable Security Check
Detects prompt attacks including:
- Jailbreak attempts
- Prompt injection
- Role manipulation
- System prompt extraction
Risk Category: S9
### Enable Compliance Check
Checks for content safety issues across 15 categories:
- S1: Political content
- S2: Sensitive political topics
- S3: Insults to national symbols
- S4: Harm to minors
- S5: Violent crime
- S6: Non-violent crime
- S7: Pornography
- S8: Hate & discrimination
- S10: Profanity
- S14: Harassment
- S15: WMDs
- S16: Self-harm
- S17: Sexual crimes
- S18: Threats
- S19: Professional advice
### Enable Data Security
Detects data leaks:
- S11: Privacy invasion
- S12: Commercial violations
- S13: IP infringement
## Action on High Risk
Configure how to handle risky content:
### Continue with Warning
- Workflow continues
- Output includes warning flags
- Best for: Logging and monitoring
### Stop Workflow
- Workflow execution stops
- Error message returned
- Best for: Strict safety requirements
### Use Safe Response
- Content replaced with safe alternative
- Uses OpenGuardrails knowledge base
- Best for: User-facing applications
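Whichever strategy you choose, downstream nodes can branch on the action field the node emits. A rough Code-node sketch (mode: Run Once for Each Item); the rejection text is a placeholder:

```javascript
// Route on the node's `action` output; the reject message is a placeholder.
switch ($json.action) {
  case 'pass':
    return { json: { text: $json.processed_content } };
  case 'replace':
    return { json: { text: $json.suggest_answer } };
  default: // 'reject'
    return { json: { text: 'Sorry, I cannot help with that request.' } };
}
```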
## Example Workflows
### Example 1: AI Chatbot with Complete Protection
```
1. Webhook (receive user message)
2. OpenGuardrails - Input Moderation
3. IF (action = pass)
   → YES: Continue
   → NO: Return safe response
4. OpenAI Chat
5. OpenGuardrails - Output Moderation
6. IF (action = pass)
   → YES: Return to user
   → NO: Return safe response
```
### Example 2: Content Moderation Pipeline

```
1. Trigger (new content)
2. OpenGuardrails - Check Content
3. Switch (based on action)
   → PASS: Publish content
   → REJECT: Flag for review
   → REPLACE: Publish safe version
```
### Example 3: Multi-channel Safety

```
1. Merge (combine inputs from Slack, Discord, Email)
2. OpenGuardrails - Input Moderation
3. Filter (action = pass)
4. Process safe content
5. Log rejected items
```
## Risk Levels

OpenGuardrails categorizes content into four risk levels:
| Level | Description | Recommended Action |
|-------|-------------|--------------------|
| none | No safety concerns | Allow |
| low | Minor issues, low severity | Allow with monitoring |
| medium | Moderate concerns | Review or substitute |
| high | Serious safety issues | Block or replace |
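If downstream logic prefers numbers to labels, a small Code-node mapping (mode: Run Once for Each Item) works; the numeric values are arbitrary:

```javascript
// Map risk labels to numbers for threshold-style filtering (values arbitrary).
const severity = { none: 0, low: 1, medium: 2, high: 3 }[$json.risk_level] ?? 0;
return { json: { ...$json, severity } };
```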
## Advanced Configuration
### Custom User Tracking
Enable ban policy by providing user IDs:
```
// In Detection Options
User ID: {{ $json.userId }}
```

Users with repeated violations will be automatically banned based on your OpenGuardrails configuration.
### Selective Detection
Disable checks you don't need for better performance:
```
Enable Security Check: true
Enable Compliance Check: false // Skip content safety
Enable Data Security: false   // Skip data leak detection
```

## Error Handling
Use n8n's built-in error handling:
- Enable Continue on Fail in node settings
- Add an Error Trigger node for custom error handling
- Errors include detailed messages about why content was blocked
## Performance Tips
- Batch Processing: Use n8n's batch mode for processing multiple items
- Caching: Cache results for identical content to reduce API calls (see the sketch after this list)
- Selective Detection: Only enable checks you need
- Async Processing: Use n8n's queue mode for high-volume workflows
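For the caching tip, one option is to key verdicts by a content hash stored in workflow static data. A rough Code-node sketch (mode: Run Once for Each Item), assuming the crypto built-in is allowed via `NODE_FUNCTION_ALLOW_BUILTIN=crypto`:

```javascript
// Rough caching sketch: reuse a verdict when identical content reappears.
// Assumes NODE_FUNCTION_ALLOW_BUILTIN=crypto so require('crypto') works.
const { createHash } = require('crypto');

const cache = $getWorkflowStaticData('global');
cache.moderation = cache.moderation ?? {};

const key = createHash('sha256').update($json.content ?? '').digest('hex');

// Pass the key through; a later Code node can store the verdict under it
// after the OpenGuardrails node has run.
return { json: { ...$json, cacheKey: key, cachedVerdict: cache.moderation[key] ?? null } };
```

Note that workflow static data only persists for production (trigger-based) executions, so manual test runs will not retain the cache.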
## API Rate Limits
OpenGuardrails API has the following rate limits:
- Free Tier: 10 requests/second, 1000 requests/day
- Pro Tier: 100 requests/second, unlimited daily
- Enterprise: Custom limits
Rate limit exceeded errors (HTTP 429) can be retried automatically by n8n if you enable Retry On Fail in the node settings.
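If you drive the API from your own scripts instead, a crude client-side throttle keeps sequential calls under the free-tier ceiling; a sketch:

```javascript
// Crude throttle: space sequential calls to stay under ~10 requests/second.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function throttledCheckAll(texts, checkFn, perSecond = 10) {
  const results = [];
  for (const text of texts) {
    results.push(await checkFn(text)); // e.g. the checkContent sketch above
    await sleep(1000 / perSecond);
  }
  return results;
}
```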
## Troubleshooting
Issue: "Invalid API key"
Solution:
- Check that your API key starts with
sk-xxai- - Verify the key is active in your OpenGuardrails dashboard
- Test credentials in the credentials page
Issue: "Connection timeout"
Solution:
- Check your network connection
- Verify API URL is correct (default:
https://api.openguardrails.com) - For self-hosted: ensure service is running and accessible
Issue: "Workflow stops unexpectedly"
Solution:
- Check if "Action on High Risk" is set to "Stop Workflow"
- Review the error message for blocked content details
- Consider using "Continue with Warning" or "Use Safe Response" instead
Issue: "Response parsing error"
Solution:
- Ensure you're using the latest version of the node
- Check that content is properly formatted
- Report issues to: https://github.com/openguardrails/n8n-nodes-openguardrails/issues
## Support
- Documentation: https://www.openguardrails.com/docs
- GitHub Issues: https://github.com/openguardrails/n8n-nodes-openguardrails/issues
- Email: [email protected]
- n8n Community: https://community.n8n.io
## Development
### Setup Development Environment
```bash
# Clone the repository
git clone https://github.com/openguardrails/n8n-nodes-openguardrails.git
cd n8n-nodes-openguardrails

# Install dependencies
npm install

# Build the node
npm run build

# Link for local testing
npm link

# In your n8n installation
cd ~/.n8n
npm link n8n-nodes-openguardrails

# Restart n8n
```

### Run Tests

```bash
npm test
```

### Build

```bash
npm run build
```

### Lint

```bash
npm run lint
npm run lintfix # Auto-fix issues
```

### Publishing to npm
- Update version in `package.json`
- Build the package: `npm run build`
- Test locally: `npm link`
- Publish: `npm publish --access public`
## Contributing
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Run linting: `npm run lintfix`
- Submit a pull request
See CONTRIBUTING.md for detailed guidelines.
## License
## About OpenGuardrails
OpenGuardrails is an enterprise-grade, open-source AI safety platform that provides:
- 🛡️ Comprehensive safety protection (prompt attacks, content safety, data leaks)
- 🌍 Support for 119 languages
- 🏢 Complete on-premise deployment
- 🔧 Two usage modes: API Call & Security Gateway (Proxy)
- 📊 Visual management interface
- 🚀 High performance (3.3B parameter model)
- Model: OpenGuardrails-Text-2510
- License: Apache 2.0
- Website: https://www.openguardrails.com
- Model Repository: https://huggingface.co/openguardrails/OpenGuardrails-Text-2510
## Version History
### 1.0.0 (Initial Release)
- ✅ Check Content operation
- ✅ Input Moderation operation
- ✅ Output Moderation operation
- ✅ Conversation Check operation
- ✅ Configurable detection options
- ✅ Multiple action strategies for risky content
- ✅ User tracking for ban policy
- ✅ Comprehensive error handling
## Roadmap
- [ ] Multimodal support (image + text)
- [ ] Webhook integration for async detection
- [ ] Batch detection API support
- [ ] Custom threshold configuration
- [ ] Statistics and reporting features
Made with ❤️ by the OpenGuardrails team
