n8n-nodes-ragmetrics

v0.1.6

RagMetrics: Evaluate AI Agents

RAGMETRICS

The AI Trust Layer

Real-time monitoring and scoring mechanism integrated directly into your LLM pipeline. Monitor, measure, and improve your generative AI outputs with confidence as they happen in production.

RagMetrics is a turnkey AI evaluation service that scores, analyzes, validates, and monitors any AI-generated text in seconds. Plug RagMetrics into your workflow to ensure accuracy, consistency, and quality, without needing extra reviewers. The system evaluates responses against customizable criteria and provides detailed scoring and reasoning.

RagMetrics has been proven to detect hallucinations in AI agents and chatbots.

Visit: https://ragmetrics.ai/live-ai-evaluation

Put your workflows and chatbots under the microscope and understand when they start to derail, hallucinate, or degrade in accuracy.

Requirements

To configure your n8n node, you need a RagMetrics API key; to get one, register for the service at ragmetrics.ai. You will also need an LLM provider API key.

You can configure your account and get your RagMetrics API key at app.ragmetrics.ai.

Understanding the evaluation process

The evaluation process is simple: it compares an answer to a specific question against a ground-truth answer, within a given context. A score and reasoning are provided based on the criterion or criteria being evaluated.

Inputs to the evaluation

  • question: The question that you want to evaluate. Example: What is the capital of France?

  • answer: The answer that is going to be evaluated. Example: Paris.

  • ground_truth_answer: The source of truth, used as a rubric for the evaluation process to score the criterion. Example: The capital of France is Paris.

  • context: Additional information used in the evaluation. The context becomes more relevant for criteria that are not direct comparisons; in the case of hallucination, for example, the context is key to understanding whether the evaluated answer relates to the conversation that has been happening. Example: Paris is the capital and largest city of France, with an estimated population of 2,048,472 in January 2025 in an area of more than 105 km2 (41 sq mi).

  • Conversation ID: A user-created identifier used to track and monitor different bots.

  • type: The type of evaluation: S (simple) or C (conversational).

  • Evaluation Group ID: Obtained in the application at ragmetrics.ai. It has to be preconfigured in the platform with the criteria needed to run the evaluations properly.

Example of the information for an evaluation

From input to the node:

  "question": "What is the capital of France?",
  "answer": "Paris.",
  "ground_truth_answer": "Paris is the capital of France.",
  "context": ""

From configuration:

  "type": "S",
  "conversation_id": "ChatBot1"

After the evaluation completes, almost in real time, the system returns the score for each of the metrics and the reasoning behind it.
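
For illustration only, a returned evaluation could look something like the sketch below; the actual field names and envelope are defined by the RagMetrics API, so treat this shape as an assumption rather than the exact schema:

  {
    "score": 1.0,
    "reasoning": "The answer matches the ground truth: Paris is the capital of France."
  }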

Using the n8n RagMetrics Module

Configuration: The module requires the following settings:

  • RagMetrics API Key: linked to your account at ragmetrics.ai

  • Type of Evaluation: S (simple) or C (conversational)

  • Conversation ID: a user-created identifier to monitor different conversations

  • Evaluation Group ID: obtained from the ragmetrics.ai application; used to monitor different groups of AI agents
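
As a rough sketch, and assuming the configuration serializes with the same key style as the example above (the evaluation_group_id key and its value are hypothetical placeholders), the settings could look like:

  "type": "C",
  "conversation_id": "ChatBot1",
  "evaluation_group_id": "my-agent-group"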

Input: You should provide the node with the fields needed for the evaluation:

  • question: the question to answer

  • answer: the answer that has to be evaluated

  • ground_truth_answer: the source of truth for the answer being evaluated

  • context: additional information to help the evaluation process

The inputs should be in JSON format, as in the sketch below.
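
As a minimal sketch, reusing the values from the example above (the context string is taken from the earlier Paris example), the JSON passed to the node could look like this:

  {
    "question": "What is the capital of France?",
    "answer": "Paris.",
    "ground_truth_answer": "Paris is the capital of France.",
    "context": "Paris is the capital and largest city of France."
  }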

Obtain the results and monitor what is going on with your tools!

Look for more information and examples on our website!

support: [email protected]
website: ragmetrics.ai