
Agentic RAG System with Supabase Integration

An advanced Retrieval-Augmented Generation (RAG) system that uses autonomous AI agents to extend traditional RAG with dynamic query refinement, multi-step reasoning, and tool integration, with the entire workflow automated via n8n.

🚀 Features

  • Agentic Workflow: Autonomous agents that can refine queries and improve results iteratively
  • Multi-step Reasoning: Up to 3 iterations of query refinement for better answers
  • Supabase Integration: Uses pgvector for efficient vector storage and similarity search
  • n8n Automation: Complete workflow automation with webhook endpoints
  • Document Support: Handles PDF and TXT files with intelligent parsing
  • Quality Evaluation: Automatic answer evaluation with scoring and feedback
  • Custom n8n Node: AgenticRAGSupabase node for seamless integration

📁 Project Structure

rag/
├── data/                    # Raw documents (PDF, TXT)
├── vector_store/           # ChromaDB storage
├── n8n_workflows/          # n8n workflow JSON files
├── src/                    # Python source code
│   ├── main.py            # Main orchestrator
│   ├── data_ingestion.py  # Document loading and indexing
│   ├── retriever_tool.py  # Document retrieval tool
│   ├── agent_logic.py     # Agentic workflow logic
│   └── evaluation.py      # Answer quality evaluation
├── nodes/                  # Custom n8n node
├── credentials/           # n8n credentials
├── requirements.txt       # Python dependencies
├── .env                  # Environment configuration
└── README.md             # This file

🛠️ Setup Instructions

1. Install Python Dependencies

cd rag
pip install -r requirements.txt

2. Configure Environment Variables

Edit the .env file with your API keys:

# Required
OPENAI_API_KEY=your_openai_api_key_here
SUPABASE_URL=your_supabase_project_url
SUPABASE_ANON_KEY=your_supabase_anon_key

# Optional
HUGGINGFACE_API_KEY=your_huggingface_api_key_here
N8N_HOST=http://localhost:5678
N8N_API_KEY=your_n8n_api_key_here
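
For reference, here is a minimal sketch of loading these values in Python. It assumes python-dotenv is available (a common companion to .env files; check requirements.txt), so treat the exact loading mechanism as an assumption rather than the project's documented API:

import os
from dotenv import load_dotenv  # assumption: python-dotenv is installed

load_dotenv()  # reads .env from the current working directory

OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]        # required
SUPABASE_URL = os.environ["SUPABASE_URL"]            # required
SUPABASE_ANON_KEY = os.environ["SUPABASE_ANON_KEY"]  # required
N8N_HOST = os.getenv("N8N_HOST", "http://localhost:5678")  # optional, with default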

3. Set Up Supabase

  1. Enable the pgvector extension in your Supabase project
  2. Run the setup SQL to create the required table and function:
-- Enable pgvector extension
CREATE EXTENSION IF NOT EXISTS vector;

-- Create documents table
CREATE TABLE IF NOT EXISTS rag_documents (
  id bigserial primary key,
  content text,
  metadata jsonb,
  embedding vector(384)
);

-- Create search function
CREATE OR REPLACE FUNCTION match_documents (
  query_embedding vector(384),
  match_threshold float,
  match_count int
)
RETURNS table (
  id bigint,
  content text,
  similarity float
)
LANGUAGE sql STABLE
AS $$
  select
    rag_documents.id,
    rag_documents.content,
    1 - (rag_documents.embedding <=> query_embedding) as similarity
  from rag_documents
  where 1 - (rag_documents.embedding <=> query_embedding) > match_threshold
  order by similarity desc
  limit match_count;
$$;
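
With the table and function in place, you can sanity-check retrieval directly from Python. The sketch below uses supabase-py and assumes a 384-dimension embedding model such as all-MiniLM-L6-v2 (consistent with the vector(384) column above); the model choice and threshold are illustrative assumptions, not values taken from the project source:

import os
from supabase import create_client
from sentence_transformers import SentenceTransformer

supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_ANON_KEY"])
model = SentenceTransformer("all-MiniLM-L6-v2")  # produces 384-dim vectors

query_embedding = model.encode("What is the compliance process?").tolist()
response = supabase.rpc("match_documents", {
    "query_embedding": query_embedding,
    "match_threshold": 0.5,  # tune per use case (see Performance Tips)
    "match_count": 5,
}).execute()

for row in response.data:
    print(row["id"], round(row["similarity"], 3), row["content"][:80])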

4. Prepare Documents

Place your PDF and TXT files in the data/ directory:

mkdir -p data
# Copy your documents to the data/ folder

5. Ingest Documents

Run the data ingestion to index your documents:

python src/data_ingestion.py
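
Conceptually, ingestion loads each file, splits it into chunks, embeds the chunks, and writes them to the local ChromaDB store in vector_store/. The sketch below is an illustration of that pipeline, not the project's actual code: the chunk size and collection name are assumptions, and Chroma's default embedder (all-MiniLM-L6-v2) happens to produce 384-dim vectors, matching the vector(384) column on the Supabase side:

import pathlib
import chromadb

CHUNK_SIZE = 500  # characters per chunk; an assumed value

def chunk(text, size=CHUNK_SIZE):
    return [text[i:i + size] for i in range(0, len(text), size)]

client = chromadb.PersistentClient(path="vector_store")
collection = client.get_or_create_collection("rag_documents")  # assumed name

for path in pathlib.Path("data").glob("*.txt"):  # PDFs need a parser (e.g. pypdf)
    pieces = chunk(path.read_text())
    collection.add(
        ids=[f"{path.name}-{i}" for i in range(len(pieces))],
        documents=pieces,  # embedded automatically by Chroma's default model
        metadatas=[{"source": path.name} for _ in pieces],
    )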

6. Test the System

Test with a single query:

python src/main.py --query "What is the compliance process?"

Or run in interactive mode:

python src/main.py --interactive

🔧 n8n Integration

Install Custom Node

  1. Build the custom node:
cd rag
npm install
npm run build
  2. Link the node to your n8n installation:
npm link
cd /path/to/your/n8n
npm link n8n-nodes-agentic-rag-supabase
  3. Restart n8n to load the new node

Import Workflow

  1. Open n8n web interface
  2. Import the workflow from n8n_workflows/agentic_rag_workflow.json
  3. Configure credentials for the AgenticRAGSupabase node
  4. Activate the workflow

Usage via Webhook

Send POST requests to the n8n webhook endpoint:

curl -X POST http://localhost:5678/webhook/agentic-rag \
  -H "Content-Type: application/json" \
  -d '{"query": "What is the compliance process for Northwind Health Plus?"}'
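
The same request from Python, for anyone scripting against the webhook (a trivial sketch using the requests library; the endpoint matches the curl example above):

import requests

resp = requests.post(
    "http://localhost:5678/webhook/agentic-rag",
    json={"query": "What is the compliance process for Northwind Health Plus?"},
)
print(resp.json())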

📊 System Workflow

  1. Document Ingestion: Load documents from data/, split into chunks, store embeddings in ChromaDB
  2. Query Processing: Agent receives query and starts iterative process
  3. Retrieval: Semantic search for relevant documents
  4. Generation: LLM generates answer using retrieved context
  5. Evaluation: Answer quality assessment with scoring
  6. Refinement: If needed, refine the query and repeat, up to 3 iterations (see the sketch after this list)
  7. Response: Return best answer with quality metrics
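
Steps 2-6 form the agent's refinement loop. Here is a minimal sketch of that control flow; the four callables and the quality threshold are illustrative stand-ins for the project's real logic in src/agent_logic.py and src/evaluation.py:

MAX_ITERATIONS = 3       # matches the documented iteration limit
QUALITY_THRESHOLD = 0.8  # an assumed cutoff; the real value may differ

def answer(query, retrieve, generate, evaluate, refine_query):
    """Run the refinement loop; the four callables stand in for the
    project's retrieval, generation, evaluation, and query-rewriting code."""
    best = None
    for _ in range(MAX_ITERATIONS):
        docs = retrieve(query)                # step 3: semantic search
        draft = generate(query, docs)         # step 4: LLM answer from context
        score = evaluate(query, draft, docs)  # step 5: quality score
        if best is None or score > best["score"]:
            best = {"answer": draft, "score": score}
        if score >= QUALITY_THRESHOLD:        # good enough, stop early
            break
        query = refine_query(query, draft, score)  # step 6: rewrite query
    return best                               # step 7: best answer with score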

🎯 Key Features

  • Dynamic Query Rewriting: Automatically improves queries based on evaluation feedback
  • Multi-iteration Processing: Up to 3 attempts to find the best answer
  • Quality Scoring: Comprehensive evaluation across relevance, groundedness, completeness, clarity, and accuracy (a scoring sketch follows this list)
  • Extensible Architecture: Easy to add new tools and capabilities
  • Production Ready: Full n8n integration for scalable deployment
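
As an illustration of the scoring idea, an overall score could simply average the five named dimensions; the equal weighting and 0-1 scale below are assumptions, not the project's actual rubric (which lives in src/evaluation.py):

DIMENSIONS = ("relevance", "groundedness", "completeness", "clarity", "accuracy")

def overall_score(scores: dict) -> float:
    # Equal-weight average; assumes each dimension is scored on a 0-1 scale.
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

print(overall_score({
    "relevance": 0.9, "groundedness": 0.8, "completeness": 0.7,
    "clarity": 0.95, "accuracy": 0.85,
}))  # -> 0.84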

🔍 Usage Examples

Python Direct Usage

from src.agent_logic import AgenticRAGAgent

agent = AgenticRAGAgent()
result = agent.process_query("What are the safety protocols?")
print(result['final_answer'])

n8n Webhook Usage

const response = await fetch('http://localhost:5678/webhook/agentic-rag', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: 'What is the compliance process?' })
});

const result = await response.json();
console.log(result.answer);

🚨 Troubleshooting

  • No documents found: Ensure documents are in the data/ directory and run data ingestion
  • API errors: Check your API keys in the .env file
  • ChromaDB issues: Delete the vector_store/ directory and re-run data ingestion
  • n8n node not appearing: Ensure the node is properly built and linked

📈 Performance Tips

  • Use specific queries for better retrieval results
  • Regularly update your document collection
  • Monitor evaluation scores to identify areas for improvement
  • Adjust similarity thresholds based on your use case

🤝 Contributing

This system is designed to be extensible. You can:

  • Add new document loaders for different file types
  • Implement additional evaluation metrics
  • Create new retrieval strategies
  • Extend the n8n node with more operations

📄 License

MIT License - see LICENSE file for details.