n8n-nodes-openguardrails

This is an n8n community node that integrates OpenGuardrails - an enterprise-grade, open-source AI safety guardrails platform.

n8n is a fair-code licensed workflow automation platform.

Features

OpenGuardrails provides comprehensive AI safety protection including:

  • Prompt Attack Detection - Detect jailbreaks, prompt injections, and manipulation attempts
  • Content Safety - Check for 19 risk categories including violence, hate speech, adult content, etc.
  • Data Leak Detection - Identify privacy violations, commercial secrets, and IP infringement
  • Multi-turn Conversation Analysis - Context-aware detection across conversation history
  • Ban Policy Enforcement - Automatic blocking of malicious users
  • Knowledge Base Responses - Intelligent safe responses for risky content

Installation

Community Nodes (Recommended)

  1. Go to Settings → Community Nodes in your n8n instance
  2. Click Install and enter: n8n-nodes-openguardrails
  3. Click Install

Manual Installation

# Navigate to your n8n installation folder
cd ~/.n8n

# Install the node package
npm install n8n-nodes-openguardrails

# Restart n8n

Docker Installation

If you're running n8n in Docker, add this to your docker-compose.yml:

environment:
  - N8N_COMMUNITY_PACKAGES=n8n-nodes-openguardrails

Prerequisites

Credentials

To use this node, you need to configure your OpenGuardrails API credentials:

  1. Go to Credentials → New in n8n
  2. Search for OpenGuardrails API
  3. Enter your API key (format: sk-xxai-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx)
  4. (Optional) Enter a custom API URL if using self-hosted OpenGuardrails
  5. Click Create

Operations

The OpenGuardrails node supports four operations:

1. Check Content

Check any content for safety issues.

Use Case: Validate user-generated content before processing

Parameters:

  • Content: The text to check
  • Detection Options:
    • Enable Security Check (prompt attacks)
    • Enable Compliance Check (content safety)
    • Enable Data Security (data leaks)
    • User ID (optional, for ban policy)
  • Action on High Risk: How to handle risky content

Output:

{
  "action": "pass|reject|replace",
  "risk_level": "none|low|medium|high",
  "categories": ["S9", "S5"],
  "suggest_answer": "Safe alternative response",
  "hit_keywords": ["hack", "attack"],
  "processed_content": "Final content",
  "has_warning": false,
  "was_replaced": false
}
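In an n8n Code node you can branch on this output before the next step. A minimal sketch, assuming the field names from the example output above; the route labels are illustrative, not part of the API:

```javascript
// Route an item based on the Check Content output shape shown above.
// Field names (action, has_warning) follow the example output; the
// returned route names are hypothetical and up to your workflow.
function routeByAction(result) {
  switch (result.action) {
    case 'pass':
      // has_warning is set when risky content was allowed through
      return result.has_warning ? 'publish-with-warning' : 'publish';
    case 'replace':
      // processed_content now holds the safe alternative response
      return 'publish-safe-version';
    case 'reject':
      return 'flag-for-review';
    default:
      throw new Error(`Unexpected action: ${result.action}`);
  }
}
```

A Switch node keyed on the returned string can then fan items out to the appropriate branch.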

2. Input Moderation

Specifically designed for moderating user input before sending to AI models or processing systems.

Use Case: Protect AI chatbots from prompt attacks and inappropriate input

Parameters: Same as Check Content

Example Workflow:

User Input → Input Moderation → IF (action = pass) → Send to LLM

3. Output Moderation

Moderate AI/system output before sending to users.

Use Case: Ensure AI-generated responses are safe and appropriate

Parameters: Same as Check Content

Example Workflow:

LLM Response → Output Moderation → IF (action = pass) → Send to User

4. Conversation Check

Check multi-turn conversation for safety with full context awareness.

Use Case: Monitor ongoing conversations for emerging safety issues

Parameters:

  • Messages: Array of conversation messages with roles (user/assistant/system)
  • Detection Options: Same as above
  • Action on High Risk: Same as above

Example Input:

{
  "messages": [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi! How can I help?"},
    {"role": "user", "content": "Tell me how to hack..."}
  ]
}
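Since the check is context-aware, you typically accumulate turns and send a rolling window rather than the full history. A sketch of that bookkeeping; the window size is an illustrative choice, not an API requirement:

```javascript
// Maintain a rolling window of conversation turns to pass to the
// Conversation Check operation. MAX_MESSAGES is an assumed limit,
// tuned to your own context-length needs.
const MAX_MESSAGES = 10;

function appendTurn(messages, role, content) {
  const next = [...messages, { role, content }];
  // Drop the oldest turns once the window is full
  return next.slice(-MAX_MESSAGES);
}
```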

Detection Options

Enable Security Check

Detects prompt attacks including:

  • Jailbreak attempts
  • Prompt injection
  • Role manipulation
  • System prompt extraction

Risk Category: S9

Enable Compliance Check

Checks for content safety issues across categories including:

  • S1: Political content
  • S2: Sensitive political topics
  • S3: Insults to national symbols
  • S4: Harm to minors
  • S5: Violent crime
  • S6: Non-violent crime
  • S7: Pornography
  • S8: Hate & discrimination
  • S10: Profanity
  • S14: Harassment
  • S15: WMDs
  • S16: Self-harm
  • S17: Sexual crimes
  • S18: Threats
  • S19: Professional advice

Enable Data Security

Detects data leaks:

  • S11: Privacy invasion
  • S12: Commercial violations
  • S13: IP infringement

Action on High Risk

Configure how to handle risky content:

Continue with Warning

  • Workflow continues
  • Output includes warning flags
  • Best for: Logging and monitoring

Stop Workflow

  • Workflow execution stops
  • Error message returned
  • Best for: Strict safety requirements

Use Safe Response

  • Content replaced with safe alternative
  • Uses OpenGuardrails knowledge base
  • Best for: User-facing applications

Example Workflows

Example 1: AI Chatbot with Complete Protection

1. Webhook (receive user message)
2. OpenGuardrails - Input Moderation
3. IF (action = pass)
   → YES: Continue
   → NO: Return safe response
4. OpenAI Chat
5. OpenGuardrails - Output Moderation
6. IF (action = pass)
   → YES: Return to user
   → NO: Return safe response

Example 2: Content Moderation Pipeline

1. Trigger (new content)
2. OpenGuardrails - Check Content
3. Switch (based on action)
   → PASS: Publish content
   → REJECT: Flag for review
   → REPLACE: Publish safe version

Example 3: Multi-channel Safety

1. Merge (combine inputs from Slack, Discord, Email)
2. OpenGuardrails - Input Moderation
3. Filter (action = pass)
4. Process safe content
5. Log rejected items

Risk Levels

OpenGuardrails categorizes content into four risk levels:

| Level | Description | Recommended Action |
|--------|----------------------------|-----------------------|
| none | No safety concerns | Allow |
| low | Minor issues, low severity | Allow with monitoring |
| medium | Moderate concerns | Review or substitute |
| high | Serious safety issues | Block or replace |
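This mapping can be expressed as a small lookup, e.g. inside a Code node feeding a Switch. The route names below are illustrative:

```javascript
// Risk-level-to-action lookup based on the table above.
// The returned route names are hypothetical workflow labels.
function recommendedAction(riskLevel) {
  const actions = {
    none: 'allow',
    low: 'allow-with-monitoring',
    medium: 'review-or-substitute',
    high: 'block-or-replace',
  };
  if (!(riskLevel in actions)) {
    throw new Error(`Unknown risk level: ${riskLevel}`);
  }
  return actions[riskLevel];
}
```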

Advanced Configuration

Custom User Tracking

Enable ban policy by providing user IDs:

// In Detection Options
User ID: {{ $json.userId }}

Users with repeated violations will be automatically banned based on your OpenGuardrails configuration.

Selective Detection

Disable checks you don't need for better performance:

Enable Security Check: true
Enable Compliance Check: false  // Skip content safety
Enable Data Security: false     // Skip data leak detection

Error Handling

Use n8n's built-in error handling:

  1. Enable Continue on Fail in node settings
  2. Add an Error Trigger node for custom error handling
  3. Errors include detailed messages about why content was blocked

Performance Tips

  1. Batch Processing: Use n8n's batch mode for processing multiple items
  2. Caching: Cache results for identical content to reduce API calls
  3. Selective Detection: Only enable checks you need
  4. Async Processing: Use n8n's queue mode for high-volume workflows
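Tip 2 (caching) can be sketched in a few lines. Here `checkContent` is a placeholder for whatever function performs the actual OpenGuardrails API call:

```javascript
// Memoize moderation results for identical content to avoid repeat
// API calls. `checkContent` is a hypothetical async function that
// wraps the real API request.
const moderationCache = new Map();

async function checkWithCache(content, checkContent) {
  if (moderationCache.has(content)) {
    return moderationCache.get(content); // cache hit: no API call
  }
  const result = await checkContent(content);
  moderationCache.set(content, result);
  return result;
}
```

In production you would bound the cache size or add a TTL, since moderation verdicts for a user can change as ban policies update.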

API Rate Limits

OpenGuardrails API has the following rate limits:

  • Free Tier: 10 requests/second, 1000 requests/day
  • Pro Tier: 100 requests/second, unlimited daily
  • Enterprise: Custom limits

Rate limit exceeded errors (429) will be automatically retried by n8n.
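If you call the API outside n8n (where the automatic retry does not apply), a simple exponential backoff handles 429s. A sketch, where `callApi` is a placeholder for your request function and errors are assumed to carry a `status` property:

```javascript
// Retry a request with exponential backoff on 429 responses.
// `callApi` is a hypothetical async request function; the delay
// schedule (500ms, 1s, 2s, ...) is an illustrative default.
async function withRetry(callApi, maxRetries = 3, baseDelayMs = 500) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await callApi();
    } catch (err) {
      if (err.status !== 429 || attempt >= maxRetries) throw err;
      await new Promise((resolve) =>
        setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}
```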

Troubleshooting

Issue: "Invalid API key"

Solution:

  • Check that your API key starts with sk-xxai-
  • Verify the key is active in your OpenGuardrails dashboard
  • Test credentials in the credentials page
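The prefix check from the first bullet can be automated before submitting credentials. This only verifies the documented `sk-xxai-` prefix; it does not validate the key against the API:

```javascript
// Quick sanity check for the key format described above.
// Only the sk-xxai- prefix is verified; key length and charset
// are not assumed here.
function looksLikeOpenGuardrailsKey(key) {
  return typeof key === 'string' && key.startsWith('sk-xxai-');
}
```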

Issue: "Connection timeout"

Solution:

  • Check your network connection
  • Verify API URL is correct (default: https://api.openguardrails.com)
  • For self-hosted: ensure service is running and accessible

Issue: "Workflow stops unexpectedly"

Solution:

  • Check if "Action on High Risk" is set to "Stop Workflow"
  • Review the error message for blocked content details
  • Consider using "Continue with Warning" or "Use Safe Response" instead

Issue: "Response parsing error"

Solution:

  • Ensure you're using the latest version of the node
  • Check that content is properly formatted
  • Report issues to: https://github.com/openguardrails/n8n-nodes-openguardrails/issues

Development

Setup Development Environment

# Clone the repository
git clone https://github.com/openguardrails/n8n-nodes-openguardrails.git
cd n8n-nodes-openguardrails

# Install dependencies
npm install

# Build the node
npm run build

# Link for local testing
npm link

# In your n8n installation
cd ~/.n8n
npm link n8n-nodes-openguardrails

# Restart n8n

Run Tests

npm test

Build

npm run build

Lint

npm run lint
npm run lintfix  # Auto-fix issues

Publishing to npm

  1. Update version in package.json
  2. Build the package: npm run build
  3. Test locally: npm link
  4. Publish: npm publish --access public

Contributing

Contributions are welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Run linting: npm run lintfix
  6. Submit a pull request

See CONTRIBUTING.md for detailed guidelines.

License

Apache-2.0

About OpenGuardrails

OpenGuardrails is an enterprise-grade, open-source AI safety platform that provides:

  • 🛡️ Comprehensive safety protection (prompt attacks, content safety, data leaks)
  • 🌍 Support for 119 languages
  • 🏢 Complete on-premise deployment
  • 🔧 Two usage modes: API Call & Security Gateway (Proxy)
  • 📊 Visual management interface
  • 🚀 High performance (3.3B parameter model)

Model: OpenGuardrails-Text-2510
License: Apache 2.0
Website: https://www.openguardrails.com
Model Repository: https://huggingface.co/openguardrails/OpenGuardrails-Text-2510

Version History

1.0.0 (Initial Release)

  • ✅ Check Content operation
  • ✅ Input Moderation operation
  • ✅ Output Moderation operation
  • ✅ Conversation Check operation
  • ✅ Configurable detection options
  • ✅ Multiple action strategies for risky content
  • ✅ User tracking for ban policy
  • ✅ Comprehensive error handling

Roadmap

  • [ ] Multimodal support (image + text)
  • [ ] Webhook integration for async detection
  • [ ] Batch detection API support
  • [ ] Custom threshold configuration
  • [ ] Statistics and reporting features

Made with ❤️ by the OpenGuardrails team