# n8n-nodes-valiqor
Community nodes for n8n that add AI safety checking, failure analysis, and LLM evaluation to your workflows using Valiqor.
n8n is a fair-code licensed workflow automation platform.
## Nodes
| Node | Description |
|------|-------------|
| Valiqor Safety Check | Check LLM input/output pairs for safety violations across 23 categories (prompt injection, PII, violence, hate speech, and more) |
| Valiqor Failure Analysis | Diagnose why an LLM interaction failed: root cause, severity, evidence, and recommendations |
| Valiqor Eval | Score LLM output quality (hallucination, answer relevance, factual accuracy, coherence, and more) |
## Installation

### Community Nodes (recommended)

- Go to **Settings → Community Nodes** in your n8n instance
- Click **Install a community node**
- Enter `n8n-nodes-valiqor`
- Click **Install**
### Manual

```shell
npm install n8n-nodes-valiqor
```

## Credentials
You need a Valiqor API key to use these nodes.
- Get your API key at app.valiqor.com/api-keys
- In n8n, go to Credentials → New Credential → Valiqor API
- Paste your API key (starts with `vq_`)
- Leave the Base URL as `https://api.valiqor.com` (default)
Don't have an account? Sign up free at app.valiqor.com — no credit card required.
## Node Details

### Valiqor Safety Check
Runs a security audit on LLM input/output pairs. Automatically checks all 23 safety violation categories.
Input: Items with `user_input` and `assistant_response` fields (field names configurable)
Output: Each item gets a `valiqor_safety` object with per-item verdicts and category names:
```json
{
  "valiqor_safety": {
    "is_safe": false,
    "status": "unsafe",
    "input_safety": "safe",
    "top_risk": "Violence",
    "unsafe_output_categories": ["Violence", "Criminal Planning/Confessions"],
    "unsafe_input_categories": [],
    "risk_categories": ["Violence", "Criminal Planning/Confessions"],
    "category_verdicts": {
      "S1 - Violence": "fail",
      "S2 - Sexual": "pass",
      "S3 - Criminal Planning/Confessions": "fail",
      "S4 - Guns and Illegal Weapons": "pass",
      "S5 - Controlled/Regulated Substances": "pass",
      "S6 - Suicide and Self Harm": "pass",
      "S7 - Sexual (minor)": "pass",
      "S8 - Hate/Identity Hate": "pass",
      "S9 - PII/Privacy": "pass",
      "S10 - Harassment": "pass",
      "S11 - Threat": "pass",
      "S12 - Profanity": "pass",
      "S13 - Needs Caution": "pass",
      "S14 - Other": "pass",
      "S15 - Manipulation (Prompt Injection/Jailbreak)": "pass",
      "S16 - Fraud/Deception": "pass",
      "S17 - Malware": "pass",
      "S18 - High Risk Gov Decision Making": "pass",
      "S19 - Political/Misinformation/Conspiracy": "pass",
      "S20 - Copyright/Trademark/Plagiarism": "pass",
      "S21 - Unauthorized Advice": "pass",
      "S22 - Illegal Activity": "pass",
      "S23 - Immoral/Unethical": "pass"
    },
    "batch_id": "uuid",
    "total_items": 1,
    "unsafe_items": 1,
    "unsafe_rate": 100,
    "top_risk_category": "Violence"
  },
  "valiqor_links": {
    "dashboard": "https://app.valiqor.com",
    "api_keys": "https://app.valiqor.com/api-keys",
    "docs": "https://docs.valiqor.com",
    "github": "https://github.com/valiqor",
    "python_sdk": "pip install valiqor"
  }
}
```

Check individual categories in n8n expressions via `{{ $json.valiqor_safety.category_verdicts["S9 - PII/Privacy"] }}`.
Parameters:
| Parameter | Description | Default |
|-----------|-------------|---------|
| Project Name | Valiqor project (auto-created) | n8n-safety-check |
| User Input Field | Field containing user's message | user_input |
| Assistant Response Field | Field containing assistant's response | assistant_response |
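In a downstream IF or Code node, the `valiqor_safety` object can drive routing. A minimal plain-JavaScript sketch — field names follow the sample output above, but the block/review policy itself is illustrative, not part of the node:

```javascript
// Route an item based on the Valiqor Safety Check output.
// Field names follow the sample output above; the policy is illustrative.
function routeSafety(item) {
  const safety = item.valiqor_safety;
  if (!safety) return { action: "skip", reason: "no safety result" };

  if (!safety.is_safe) {
    return {
      action: "block",
      reason: `unsafe output: ${safety.unsafe_output_categories.join(", ")}`,
    };
  }

  // Even a "safe" verdict may warrant review if a sensitive category failed.
  if (safety.category_verdicts["S9 - PII/Privacy"] === "fail") {
    return { action: "review", reason: "PII detected" };
  }

  return { action: "allow", reason: "all categories passed" };
}

// With the unsafe sample payload above:
console.log(
  routeSafety({
    valiqor_safety: {
      is_safe: false,
      unsafe_output_categories: ["Violence", "Criminal Planning/Confessions"],
      category_verdicts: {},
    },
  }).action
); // "block"
```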
### Valiqor Failure Analysis

This is the key differentiator: no other n8n node currently offers automated root-cause analysis. LangSmith and Langfuse tell you *that* something failed; Valiqor tells you it was a `wrong_tool_selected` failure (severity: critical), with evidence drawn from the conversation.
Input: Items with `user_input` and `assistant_response` fields. Optionally include `context` (for RAG) or `tool_calls` (for agents).
Output: Each item gets a `valiqor_failure_analysis` object with failures, warnings, and passes separated:
```json
{
  "valiqor_failure_analysis": {
    "has_failures": true,
    "failure_count": 2,
    "warning_count": 1,
    "pass_count": 8,
    "max_severity": 4,
    "max_severity_label": "Critical",
    "primary_failure": "wrong_tool_selected",
    "primary_failure_name": "Wrong Tool Selected",
    "should_alert": true,
    "should_gate_ci": true,
    "needs_human_review": true,
    "failures": [
      {
        "subcategory": "wrong_tool_selected",
        "subcategory_name": "Wrong Tool Selected",
        "severity": 4,
        "severity_label": "Critical",
        "confidence": 0.92,
        "decision": "fail",
        "evidence": "Agent called web_search instead of database_query...",
        "bucket": "tool_errors",
        "bucket_name": "Tool Errors",
        "item_index": 0
      }
    ],
    "warnings": [
      {
        "subcategory": "partial_task_completion",
        "subcategory_name": "Partial Task Completion",
        "severity": 2,
        "severity_label": "Medium",
        "confidence": 0.78,
        "decision": "unsure",
        "evidence": "Agent completed 2 of 3 requested tasks...",
        "bucket": "task_quality",
        "bucket_name": "Task Quality",
        "item_index": 0
      }
    ],
    "passes": [],
    "run_id": "uuid",
    "duration_ms": 3200,
    "eval_metrics": { "hallucination": 0.05, "coherence": 0.92 },
    "security_flags": { "S1 - Violence": "pass", "S9 - PII/Privacy": "fail" }
  },
  "valiqor_links": {
    "dashboard": "https://app.valiqor.com",
    "api_keys": "https://app.valiqor.com/api-keys",
    "docs": "https://docs.valiqor.com",
    "github": "https://github.com/valiqor",
    "python_sdk": "pip install valiqor"
  }
}
```

Parameters:
| Parameter | Description | Default |
|-----------|-------------|---------|
| Project Name | Valiqor project (auto-created) | n8n-failure-analysis |
| User Input Field | Field containing user's message | user_input |
| Agent Output Field | Field containing agent's response | assistant_response |
| Context Field | Optional: field containing retrieval context | — |
| Tool Calls Field | Optional: field containing tool calls array | — |
| Feature Kind | Type of AI app (generic_llm, rag, agent, agentic_rag) | generic_llm |
| Run Eval | Also run eval metrics alongside | true |
| Run Security | Also run security audit alongside | false |
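The output already includes `should_alert` and `should_gate_ci` flags; if you need custom thresholds, a Code node can triage the result itself. A hedged sketch — the severity scale is taken from the sample output above (4 = Critical), and the notification targets are illustrative:

```javascript
// Map a Failure Analysis result to an alerting decision.
// Severity 4 = "Critical" in the sample output above; thresholds are illustrative.
function triageFailures(analysis) {
  if (!analysis || !analysis.has_failures) {
    return { level: "ok", notify: [] };
  }
  const critical = analysis.failures.filter((f) => f.severity >= 4);
  if (critical.length > 0) {
    return {
      level: "critical",
      notify: ["pagerduty", "slack"],
      summary: critical.map((f) => `${f.subcategory_name}: ${f.evidence}`),
    };
  }
  // Non-critical failures still go to Slack for review.
  return {
    level: "degraded",
    notify: ["slack"],
    summary: [analysis.primary_failure_name],
  };
}
```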
### Valiqor Eval
Score LLM output quality using multiple metrics. Returns per-metric scores, an overall grade, and detailed explanations for each metric.
Input: Items with `user_input` and `assistant_response` fields.
Output: Each item gets a `valiqor_eval` object with scores and explanations:
```json
{
  "valiqor_eval": {
    "scores": {
      "hallucination": 0.05,
      "answerrelevance": 0.91
    },
    "overall_score": 0.85,
    "pass": true,
    "pass_threshold": 0.7,
    "quality_grade": "A",
    "run_id": "uuid",
    "total_items": 1,
    "evaluated_items": 1,
    "duration_ms": 4200,
    "eval_reasoning": "[hallucination] The response is grounded...\n\n[answerrelevance] Directly addresses...",
    "metric_details": {
      "hallucination": {
        "score": 0.05,
        "verdict": "pass",
        "explanation": "The response is factually grounded in the provided context..."
      },
      "answerrelevance": {
        "score": 0.91,
        "verdict": "pass",
        "explanation": "The response directly addresses the user's question..."
      }
    }
  },
  "valiqor_links": {
    "dashboard": "https://app.valiqor.com",
    "api_keys": "https://app.valiqor.com/api-keys",
    "docs": "https://docs.valiqor.com",
    "github": "https://github.com/valiqor",
    "python_sdk": "pip install valiqor"
  }
}
```

Parameters:
| Parameter | Description | Default |
|-----------|-------------|---------|
| Project Name | Valiqor project (auto-created) | n8n-eval |
| User Input Field | Field containing user's message | user_input |
| Assistant Response Field | Field containing assistant's response | assistant_response |
| Context Field | Optional: field for RAG context | — |
| Expected Output Field | Optional: field for golden/expected response | — |
| Metrics | Eval metrics to compute | hallucination, answerrelevance |
| Run Name | Optional name for the eval run | Auto-generated |
| Pass Threshold | Score threshold for pass/fail | 0.7 |
Available Metrics:
| Metric | Best For |
|--------|----------|
| hallucination | Detect fabricated facts in LLM responses |
| answerrelevance | Check if the response answers the question |
| factualaccuracy | Verify facts against provided context |
| coherence | Assess logical flow and consistency |
| fluency | Check grammar, readability, and naturalness |
| completeness | Verify all parts of the question are answered |
| contextprecision | RAG: precision of retrieved context |
| contextrecall | RAG: recall of retrieved context |
| retrieval | RAG: overall retrieval quality |
| intentresolution | Did the agent resolve the user's intent? |
| taskadherence | Does the output follow task instructions? |
| toolcallaccuracy | Agents: were the right tools called correctly? |
| moderation | Content safety and appropriateness |
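When gating on these metrics, note that the per-metric `verdict` field appears to account for each metric's direction (in the sample output, `hallucination` passes at 0.05, where lower is better), so branching on verdicts is safer than comparing raw scores against one threshold. A minimal sketch against the output shape above:

```javascript
// List the metrics that failed in a Valiqor Eval result, for alert messages.
// Uses each metric's "verdict" rather than comparing raw scores, since some
// metrics (like hallucination) score in the opposite direction.
function failingMetrics(evalResult) {
  return Object.entries(evalResult.metric_details || {})
    .filter(([, detail]) => detail.verdict === "fail")
    .map(([name, detail]) => `${name}: ${detail.score}`);
}

// With the passing sample payload above, this returns an empty list.
```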
## Workflow Templates

### Template 1: AI Agent Safety Pipeline
Automatically check every LLM response for safety violations and diagnose failures.
```
[Webhook Trigger] → [OpenAI Agent] → [Valiqor Safety Check] → [IF unsafe → Block + Slack Alert]
                                                            → [IF safe → Valiqor Failure Analysis]
                                                                → [IF critical → Log + Alert]
                                                                → [IF passed → Return Response]
```

Setup:

- Add a Webhook trigger node
- Connect to your OpenAI/LLM node
- Add Valiqor Safety Check → check `{{ $json.valiqor_safety.is_safe }}`
- For safe items, add Valiqor Failure Analysis → check `{{ $json.valiqor_failure_analysis.max_severity }}`
- Route critical failures (severity ≥ 4) to PagerDuty/Slack alerts
### Template 2: AI Quality Gate

Run evaluation metrics on a test dataset on a schedule.

```
[Schedule: daily] → [Fetch test dataset from Sheets] → [Valiqor Eval] → [IF score < threshold → Slack #ai-quality]
                                                                      → [IF passed → Log success]
```

Setup:

- Add a Schedule trigger (e.g., daily at 9am)
- Fetch your test dataset from Google Sheets / Airtable / database
- Add Valiqor Eval with your quality metrics
- Check `{{ $json.valiqor_eval.pass }}` and alert on regressions
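For the alert step, a Code node can aggregate the evaluated items into one message instead of alerting per item. A sketch — field names follow the Valiqor Eval output above; the message format is illustrative:

```javascript
// Summarize a batch of evaluated items into a single quality-gate verdict.
// Each item is expected to carry the valiqor_eval object shown above.
function summarizeEvalRun(items) {
  const failed = items.filter((item) => !item.valiqor_eval.pass);
  return {
    total: items.length,
    failed: failed.length,
    regressed: failed.length > 0,
    message:
      failed.length > 0
        ? `AI quality gate: ${failed.length}/${items.length} items failed`
        : `AI quality gate: all ${items.length} items passed`,
  };
}
```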
### Template 3: LLM Output Monitoring

Continuously monitor LLM outputs in production for safety + quality.

```
[Webhook: LLM output] → [Valiqor Safety Check] → [Valiqor Eval]
                      → [IF unsafe OR low quality → Slack + Google Sheets log]
                      → [IF OK → Continue]
```

## API Endpoints Used
These nodes call existing Valiqor API endpoints — no custom backend changes needed:
| Node | Endpoint |
|------|----------|
| Safety Check | POST /v2/security/audit |
| Failure Analysis | POST /v2/failure-analysis/analyze |
| Eval | POST /v2/evaluate/ |
All endpoints use `Authorization: Bearer <api_key>` authentication.
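If you want to call the API outside n8n, the endpoint path and auth header from the table above are enough to build a request. The request-body shape below is an assumption for illustration only — check docs.valiqor.com for the actual schema:

```javascript
// Build a request for the Safety Check endpoint. The URL path and Bearer
// header come from the table above; the payload shape is an ASSUMPTION.
function buildAuditRequest(apiKey, items, baseUrl = "https://api.valiqor.com") {
  return {
    url: `${baseUrl}/v2/security/audit`,
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ items }), // assumed body shape
    },
  };
}

// Usage: const { url, options } = buildAuditRequest("vq_...", items);
// const res = await fetch(url, options);
```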
## Rate Limits & Quotas
Each node handles quota limits automatically. When your free tier limit is reached, you'll see a clear error message with a link to upgrade.
| Resource | Free Limit |
|----------|-----------|
| Evaluations | 100/month |
| Security Audits | 50 items/month |
| Failure Analysis | 50 items/month |
Upgrade at app.valiqor.com for higher limits.
## Go Further with Valiqor
These n8n nodes cover the most common workflows. For deeper integration, use the Valiqor Python SDK:
```shell
pip install valiqor
```

The SDK gives you:
- Tracing — auto-instrument LangChain, OpenAI, and other LLM frameworks
- Batch evaluation — run evals on thousands of items with parallel processing
- CI/CD integration — add quality gates to your deployment pipeline
- Custom metrics — define your own evaluation criteria
## Useful Links
| Resource | URL |
|----------|-----|
| Get API Key | app.valiqor.com/api-keys |
| Dashboard | app.valiqor.com |
| Documentation | docs.valiqor.com |
| GitHub | github.com/valiqor |
| Python SDK | pip install valiqor — PyPI |
| Website | valiqor.com |
| n8n Community Nodes | docs.n8n.io/integrations/community-nodes/ |
