
n8n-nodes-rahyana

Reliable AI chatbot for Rahyana.ir - a simple, memory-enabled AI agent for n8n workflows.

🚀 Features

Core Capabilities

  • Simple Chat - Single input field for user messages
  • Reliable Memory - Session-based conversation memory that persists across requests
  • Tool Support - OpenAI-compatible tools integration
  • Streaming - Real-time streaming responses
  • Debug Output - See exactly what messages are sent to the AI

Ultra-Simple Design

  • 🎯 One Input Field - Just ={{$json.chatInput}}
  • 🧠 Automatic Memory - Built-in conversation tracking
  • 🛠️ Optional Tools - Connect tools input if needed
  • 📤 Clean Output - { "content": "AI response", "sessionId": "..." }
  • Works Immediately - Chat Trigger → Rahyana → Response

📦 Installation

Via npm

npm i n8n-nodes-rahyana

In n8n GUI

  1. Go to Settings → Community Nodes
  2. Click Add and enter: n8n-nodes-rahyana
  3. Click Install

🔑 Credentials Setup

Create credentials named Rahyana API with:

  • API Key (required) - Your Rahyana.ir API key
  • Base URL (default: https://rahyana.ir/api/v1) - API base URL
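
To sanity-check a key outside n8n, a raw request against the base URL might look roughly like the sketch below. The /chat/completions path, Bearer authentication, and OpenAI-style response shape are assumptions based on the node's OpenAI-compatible behavior, not official API documentation.

const baseUrl = 'https://rahyana.ir/api/v1';
const apiKey = process.env.RAHYANA_API_KEY ?? ''; // your Rahyana.ir API key

// Minimal sketch of an authenticated request. Endpoint path, auth header, and
// response shape are assumptions; adjust to the official API docs.
async function chat(userMessage: string): Promise<string> {
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'google/gemini-2.0-flash-lite-001',
      messages: [{ role: 'user', content: userMessage }],
    }),
  });
  if (!res.ok) throw new Error(`Rahyana API error: ${res.status}`);
  const data = await res.json();
  return data.choices?.[0]?.message?.content ?? '';
}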

🔧 Complete Overhaul - Version 0.6.0

What Was Wrong:

  • ❌ Memory was not working - AI couldn't remember conversations
  • ❌ Complex input structure with confusing message arrays
  • ❌ Unreliable session management
  • ❌ Memory being saved in wrong order
  • ❌ No debugging to see what was happening

What Was Fixed:

  • Memory Flow: Load → Add user message → Send to API → Save assistant response (see the sketch after this list)
  • Simple Input: Single chatInput field instead of complex arrays
  • Reliable Sessions: Proper sessionId handling and continuity
  • Debug Output: See exactly what messages are sent to the AI
  • Error Handling: Graceful fallbacks when memory fails
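
In code terms, the corrected ordering amounts to the sketch below. It is a simplified illustration with an in-memory store; loadMemory, saveMemory, and callRahyana are hypothetical stand-ins for the node's internals, not its actual source.

// Simplified illustration of the fixed memory ordering. The helpers are
// hypothetical stand-ins for the node's internals, backed here by a plain Map.
type Message = { role: 'system' | 'user' | 'assistant'; content: string };

const store = new Map<string, Message[]>();
const loadMemory = async (sessionId: string): Promise<Message[]> =>
  store.get(sessionId) ?? [];
const saveMemory = async (sessionId: string, history: Message[]): Promise<void> => {
  store.set(sessionId, history);
};
// Stand-in for the actual API call; the real node sends the full history.
const callRahyana = async (history: Message[]): Promise<Message> => ({
  role: 'assistant',
  content: `echo: ${history[history.length - 1].content}`,
});

async function handleChat(sessionId: string, chatInput: string): Promise<Message> {
  const history = await loadMemory(sessionId);          // 1. Load memory
  history.push({ role: 'user', content: chatInput });   // 2. Add the user message
  const reply = await callRahyana(history);             // 3. Send full history to the API
  history.push(reply);                                  // 4. Save the assistant response
  await saveMemory(sessionId, history);
  return reply;
}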

🎯 Simple Usage

Basic Chatbot (No Setup Required)

Chat Trigger → Rahyana → Respond to Webhook

Configuration:

  • Model: google/gemini-2.0-flash-lite-001
  • Chat Input: ={{$json.chatInput}}
  • System Prompt: (optional) Add if needed
  • Memory: true (default)
  • Session ID: ={{ $json.sessionId }} (default)

Debug Mode: Enable Debug Mode in the node settings to see detailed memory information:

{
  "content": "AI response",
  "sessionId": "session_123",
  "debug": {
    "messagesSent": 5,
    "lastMessage": "what was the number?",
    "memoryKey": "rahyana_memory_session_123",
    "fullConversation": [
      { "role": "user", "content": "hi" },
      { "role": "assistant", "content": "Hi there! How can I help..." },
      { "role": "user", "content": "remember this number..." },
      { "role": "assistant", "content": "Okay, I remember the number..." },
      { "role": "user", "content": "what was the number?" }
    ]
  }
}

Check the n8n console logs for detailed memory debugging information.

Advanced Chatbot with Tools

Chat Trigger → Rahyana (with Tools) → Respond to Webhook

Configuration:

  • Tools: Add your tool definitions
  • Tool Choice: auto (default)

External Memory Integration

Chat Trigger → Load Memory (Redis) → Rahyana (External Memory) → Save Memory (Redis) → Respond

Configuration:

  • Internal Memory: false
  • Memory Input: Connect to your memory source
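
The exact shape the Memory Input expects is not spelled out here, so the Code-node sketch below (placed between the Redis load and Rahyana) is only an assumption: it shows how a JSON-encoded history from Redis could be reshaped into the role/content array the node works with. Field names like memoryMessages are illustrative, not documented.

// Hypothetical Code node between "Load Memory (Redis)" and Rahyana. Assumes the
// Redis value is a JSON-encoded array of { role, content } messages (the shape
// shown in the node's debug output). Field names here are illustrative only.
const trigger = $('Chat Trigger').first().json;
const stored = $input.first().json.propertyName ?? '[]';  // Redis GET result field

return [
  {
    json: {
      chatInput: trigger.chatInput,        // current user message
      sessionId: trigger.sessionId,        // keep session continuity
      memoryMessages: JSON.parse(stored),  // prior { role, content } history
    },
  },
];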

📝 Simple Input Configuration

The node now uses a much simpler input format:

Chat Input:

  • Single Field: Just ={{$json.chatInput}}
  • Automatic Role: User messages are automatically added to the conversation
  • Memory Integration: Previous messages are automatically included from memory

System Prompt:

  • Optional: Add system instructions if needed
  • Automatic: Added to the beginning of the conversation
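
Putting those pieces together, the outgoing message array is assembled from the documented parts in this order: the optional system prompt first, then the messages loaded from memory, then the current chatInput. The sketch below illustrates that ordering with sample data.

// Sketch of how the outgoing message array is built from the documented parts:
// system prompt (optional) + messages loaded from memory + the current chatInput.
type Message = { role: 'system' | 'user' | 'assistant'; content: string };

const systemPrompt = 'You are a helpful assistant.';   // optional
const memory: Message[] = [                            // loaded for this sessionId
  { role: 'user', content: 'hi' },
  { role: 'assistant', content: 'Hi there! How can I help?' },
];
const chatInput = 'remember this number: 123';         // from ={{$json.chatInput}}

const outgoing: Message[] = [
  ...(systemPrompt ? [{ role: 'system' as const, content: systemPrompt }] : []),
  ...memory,
  { role: 'user', content: chatInput },
];
console.log(outgoing);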

How Memory Works:

  1. First Request: User sends "hi"

    • Load memory: [] (empty)
    • Add user message: [{role: "user", content: "hi"}]
    • Save to memory: [{role: "user", content: "hi"}]
    • Send to AI: [{role: "user", content: "hi"}]
    • AI responds: "Hello! How can I help you today?"
    • Save response: [{role: "user", content: "hi"}, {role: "assistant", content: "Hello!"}]
  2. Second Request: User sends "remember 123"

    • Load memory: [{role: "user", content: "hi"}, {role: "assistant", content: "Hello!"}]
    • Add user message: [{role: "user", content: "hi"}, {role: "assistant", content: "Hello!"}, {role: "user", content: "remember 123"}]
    • Save to memory: [full conversation]
    • Send to AI: [full conversation with all messages]
    • AI sees everything and responds appropriately!

🧠 Memory Management

Internal Memory (Default)

  • Automatic: Stores conversation history in workflow static data
  • Session-based: Separate memory for different conversations
  • Context Window: Configurable message limit (default: 20)
  • Persistent: Survives workflow restarts
  • Session ID: Uses {{ $json.sessionId }} from Chat Trigger by default
  • Memory Continuity: Each response includes sessionId for next request
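
A rough picture of session-scoped storage in workflow static data is sketched below. The rahyana_memory_<sessionId> key matches the memoryKey shown in the debug output; the 'global' static-data scope and the trimming logic are assumptions about the implementation, not the node's actual source.

// Sketch of session-scoped memory in n8n workflow static data (inside an
// IExecuteFunctions context). Key pattern matches the memoryKey seen in the
// debug output; scope and trimming details are assumptions.
import type { IExecuteFunctions } from 'n8n-workflow';

type Message = { role: 'system' | 'user' | 'assistant'; content: string };

function loadHistory(this: IExecuteFunctions, sessionId: string, contextWindow = 20): Message[] {
  const staticData = this.getWorkflowStaticData('global');
  const history = (staticData[`rahyana_memory_${sessionId}`] as Message[] | undefined) ?? [];
  return history.slice(-contextWindow);                  // keep only the last N messages
}

function saveHistory(this: IExecuteFunctions, sessionId: string, history: Message[]): void {
  const staticData = this.getWorkflowStaticData('global');
  staticData[`rahyana_memory_${sessionId}`] = history;   // persisted with the workflow
}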

External Memory

  • Flexible: Connect to any memory source (Redis, DB, etc.)
  • Custom Logic: Implement your own memory management
  • Scalable: Handle large-scale deployments

🛠️ Tool Integration

Basic Tools

{
  "name": "get_weather",
  "description": "Get current weather for a location",
  "parameters": {
    "type": "object",
    "properties": {
      "location": {
        "type": "string",
        "description": "City name"
      }
    },
    "required": ["location"]
  }
}

Tool Choice Options

  • Auto: Let the model decide when to use tools
  • None: Disable tool calling
  • Required: Force the model to use a tool
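
On the wire, OpenAI-compatible APIs usually expect each tool wrapped in a type: "function" envelope and the choice expressed as tool_choice. Whether the node applies that wrapping for you is not stated here, so treat the shape below as the general OpenAI-style request format rather than the node's exact payload.

// General OpenAI-style request shape for tools + tool choice. Whether the node
// wraps your tool definition like this internally is an assumption.
const requestBody = {
  model: 'google/gemini-2.0-flash-lite-001',
  messages: [{ role: 'user', content: 'What is the weather in Tehran?' }],
  tools: [
    {
      type: 'function',
      function: {
        name: 'get_weather',
        description: 'Get current weather for a location',
        parameters: {
          type: 'object',
          properties: { location: { type: 'string', description: 'City name' } },
          required: ['location'],
        },
      },
    },
  ],
  tool_choice: 'auto', // or 'none' / 'required'
};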

🔄 Streaming Support

Basic Streaming

{
  "stream": true,
  "messages": [{"role": "user", "content": "Tell me a story"}]
}

Streaming Output

  • Returns { streamed: true, response: ... }
  • Handle streaming in subsequent nodes
  • Perfect for real-time chat applications
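
If you end up consuming the raw stream yourself in a later node or service, OpenAI-compatible endpoints typically emit data: lines containing JSON chunks. The reader below is a generic sketch of that pattern; the endpoint path, auth scheme, and chunk shape are assumptions, not the node's documented output.

// Generic SSE reader for an OpenAI-compatible streaming endpoint. Endpoint path,
// auth scheme, and chunk shape are assumptions, not the node's documented output.
async function streamChat(baseUrl: string, apiKey: string, prompt: string): Promise<void> {
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: 'POST',
    headers: { Authorization: `Bearer ${apiKey}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'google/gemini-2.0-flash-lite-001',
      stream: true,
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop() ?? '';                           // keep any incomplete trailing line
    for (const line of lines) {
      if (!line.startsWith('data: ') || line.includes('[DONE]')) continue;
      const chunk = JSON.parse(line.slice('data: '.length));
      process.stdout.write(chunk.choices?.[0]?.delta?.content ?? '');
    }
  }
}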

📚 Example Workflows

1. Simple Chatbot

Chat Trigger → Rahyana → Respond to Webhook

Perfect for: Basic chatbot functionality

2. Chatbot with Tools

Chat Trigger → Rahyana (Tools) → Respond to Webhook

Perfect for: AI agents with capabilities

3. External Memory Chatbot

Chat Trigger → Load Memory → Rahyana → Save Memory → Respond

Perfect for: Production deployments

4. Streaming Chat

Webhook → Rahyana (Streaming) → Webhook (SSE Response)

Perfect for: Real-time applications

🔧 Configuration Options

Basic Settings

  • Model: Rahyana model slug
  • Messages: Message array with role/content
  • System Prompt: Optional system instructions
  • Temperature: Response randomness (0-2)
  • Max Tokens: Response length limit

Memory Settings

  • Internal Memory: Enable/disable built-in memory
  • Context Window: Number of messages to keep
  • Session ID: Custom session identifier

Advanced Settings

  • Stream: Enable streaming responses
  • Tools: Tool definitions
  • Tool Choice: Tool calling behavior
  • Return Raw: Return full API response
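
As a single reference point, a fully specified setup could look like the snapshot below. The keys mirror the option names listed above; the node's actual internal parameter names may differ.

// Illustrative snapshot of one full configuration, grouping the options listed
// above. Exact internal parameter keys may differ from these display names.
const exampleConfig = {
  model: 'google/gemini-2.0-flash-lite-001',
  chatInput: '={{$json.chatInput}}',
  systemPrompt: 'You are a concise support assistant.',
  temperature: 0.7,          // 0-2
  maxTokens: 1024,
  internalMemory: true,
  contextWindow: 20,         // messages kept per session
  sessionId: '={{ $json.sessionId }}',
  stream: false,
  tools: [],                 // optional tool definitions
  toolChoice: 'auto',
  returnRaw: false,          // return the full API response instead of { content, sessionId }
};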

🛡️ Security Notes

  • API Keys: Stored securely via n8n's encrypted credentials
  • Input Validation: All inputs are validated before API calls
  • Error Handling: Comprehensive error messages without exposing sensitive data
  • Rate Limiting: Built-in retry logic with exponential backoff
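
The node's retry implementation is not published in this README, so the helper below is only a generic sketch of retry with exponential backoff for readers who want to reproduce the behavior elsewhere.

// Generic retry-with-exponential-backoff, as a reference for the behavior the
// README describes. This is not the node's actual implementation.
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 3, baseDelayMs = 500): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        const delay = baseDelayMs * 2 ** attempt;   // 500 ms, 1 s, 2 s, ...
        await new Promise((resolve) => setTimeout(resolve, delay));
      }
    }
  }
  throw lastError;
}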

📖 Model Selection

Browse available models at: https://rahyana.ir/models

Popular models:

  • google/gemini-2.0-flash-lite-001 - Fast, general purpose
  • google/gemini-2.5-flash-image-preview - Image generation
  • openai/gpt-4o - Advanced reasoning
  • anthropic/claude-3.5-sonnet - Creative writing

🤝 Support

  • Documentation: https://rahyana.ir/docs
  • Issues: https://github.com/rahyana-ai/n8n-nodes-rahyana/issues
  • Community: https://community.n8n.io

📄 License

MIT License - see LICENSE file for details.


Built with ❤️ for the n8n community