n8n-nodes-rahyana
v0.6.42
Production-ready n8n community node for Rahyana.ir (chat-first, multimodal, streaming-ready)
Reliable AI chatbot for Rahyana.ir - a simple, memory-enabled AI agent for n8n workflows.
🚀 Features
Core Capabilities
- ✅ Simple Chat - Single input field for user messages
- ✅ Reliable Memory - Proven conversation memory that actually works
- ✅ Tool Support - OpenAI-compatible tools integration
- ✅ Streaming - Real-time streaming responses
- ✅ Debug Output - See exactly what messages are sent to the AI
Ultra-Simple Design
- 🎯 One Input Field - Just ={{$json.chatInput}}
- 🧠 Automatic Memory - Built-in conversation tracking
- 🛠️ Optional Tools - Connect tools input if needed
- 📤 Clean Output - { "content": "AI response", "sessionId": "..." }
- ⚡ Works Immediately - Chat Trigger → Rahyana → Response
📦 Installation
Via npm
npm i n8n-nodes-rahyana
In n8n GUI
- Go to Settings → Community Nodes
- Click Add and enter: n8n-nodes-rahyana
- Click Install
🔑 Credentials Setup
Create credentials named Rahyana API with:
- API Key (required) - Your Rahyana.ir API key
- Base URL (default: https://rahyana.ir/api/v1) - The API base URL
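Under the hood, the node sends these credentials to the Rahyana API. A minimal sketch of how such a request might be assembled - the /chat/completions path is an assumption based on the node's OpenAI-compatible design, and the node handles all of this for you:

```javascript
// Sketch: assemble a chat request against the configured base URL.
// The /chat/completions path is an assumption (OpenAI-compatible convention);
// the node builds and sends this internally.
function buildChatRequest(baseUrl, apiKey, model, messages) {
  return {
    url: `${baseUrl.replace(/\/$/, '')}/chat/completions`,
    options: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ model, messages }),
    },
  };
}

const req = buildChatRequest(
  'https://rahyana.ir/api/v1',
  'MY_API_KEY',
  'google/gemini-2.0-flash-lite-001',
  [{ role: 'user', content: 'hi' }]
);
console.log(req.url); // https://rahyana.ir/api/v1/chat/completions
```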
🔧 Complete Overhaul - Version 0.6.0
What Was Wrong:
- ❌ Memory was not working - AI couldn't remember conversations
- ❌ Complex input structure with confusing message arrays
- ❌ Unreliable session management
- ❌ Memory being saved in wrong order
- ❌ No debugging to see what was happening
What Was Fixed:
- ✅ Memory Flow: Load → Add user → Send to API → Save assistant response
- ✅ Simple Input: Single chatInput field instead of complex arrays
- ✅ Reliable Sessions: Proper sessionId handling and continuity
- ✅ Debug Output: See exactly what messages are sent to the AI
- ✅ Error Handling: Graceful fallbacks when memory fails
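The fixed memory flow above can be sketched as a single function over a session store. Here a plain object stands in for n8n's workflow static data and `callModel` is a stand-in for the actual API call - both are illustrative assumptions, not the node's real internals:

```javascript
// Sketch of the memory flow: Load → Add user → Send to API → Save assistant.
// `store` stands in for workflow static data; `callModel` for the API call.
function chatTurn(store, sessionId, userText, callModel) {
  const key = `rahyana_memory_${sessionId}`;
  const history = store[key] || [];                    // Load memory
  history.push({ role: 'user', content: userText });   // Add user message
  const reply = callModel(history);                    // Send full history to the API
  history.push({ role: 'assistant', content: reply }); // Save assistant response
  store[key] = history;
  return { content: reply, sessionId };
}

// Two turns against a fake model that echoes the last message:
const store = {};
const echo = (msgs) => `You said: ${msgs[msgs.length - 1].content}`;
chatTurn(store, 's1', 'hi', echo);
chatTurn(store, 's1', 'remember 123', echo);
// store.rahyana_memory_s1 now holds 4 messages in conversation order
```

The key point of the fix is the ordering: the assistant response is appended after the user message, so the saved history always alternates correctly.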
🎯 Simple Usage
Basic Chatbot (No Setup Required)
Chat Trigger → Rahyana → Respond to Webhook
Configuration:
- Model: google/gemini-2.0-flash-lite-001
- Chat Input: ={{$json.chatInput}}
- System Prompt: (optional) Add if needed
- Memory: true (default)
- Session ID: ={{ $json.sessionId }} (default)
Debug Mode: Enable Debug Mode in the node settings to see detailed memory information:
{
"content": "AI response",
"sessionId": "session_123",
"debug": {
"messagesSent": 5,
"lastMessage": "what was the number?",
"memoryKey": "rahyana_memory_session_123",
"fullConversation": [
{ "role": "user", "content": "hi" },
{ "role": "assistant", "content": "Hi there! How can I help..." },
{ "role": "user", "content": "remember this number..." },
{ "role": "assistant", "content": "Okay, I remember the number..." },
{ "role": "user", "content": "what was the number?" }
]
}
}
Check the n8n console logs for detailed memory debugging information.
Advanced Chatbot with Tools
Chat Trigger → Rahyana (with Tools) → Respond to Webhook
Configuration:
- Tools: Add your tool definitions
- Tool Choice: auto (default)
External Memory Integration
Chat Trigger → Load Memory (Redis) → Rahyana (External Memory) → Save Memory (Redis) → Respond
Configuration:
- Internal Memory: false
- Memory Input: Connect to your memory source
📝 Simple Input Configuration
The node now uses a much simpler input format:
Chat Input:
- Single Field: Just ={{$json.chatInput}}
- Automatic Role: User messages are automatically added to the conversation
- Memory Integration: Previous messages are automatically included from memory
System Prompt:
- Optional: Add system instructions if needed
- Automatic: Added to the beginning of the conversation
How Memory Works:
First Request: User sends "hi"
- Load memory: [] (empty)
- Add user message: [{role: "user", content: "hi"}]
- Save to memory: [{role: "user", content: "hi"}]
- Send to AI: [{role: "user", content: "hi"}]
- AI responds: "Hello! How can I help you today?"
- Save response: [{role: "user", content: "hi"}, {role: "assistant", content: "Hello!"}]
Second Request: User sends "remember 123"
- Load memory: [{role: "user", content: "hi"}, {role: "assistant", content: "Hello!"}]
- Add user message: [{role: "user", content: "hi"}, {role: "assistant", content: "Hello!"}, {role: "user", content: "remember 123"}]
- Save to memory: [full conversation]
- Send to AI: [full conversation with all messages]
- AI sees everything and responds appropriately!
🧠 Memory Management
Internal Memory (Default)
- Automatic: Stores conversation history in workflow static data
- Session-based: Separate memory for different conversations
- Context Window: Configurable message limit (default: 20)
- Persistent: Survives workflow restarts
- Session ID: Uses {{ $json.sessionId }} from Chat Trigger by default
- Memory Continuity: Each response includes sessionId for the next request
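The context window setting can be pictured as a trim of the stored history before it is sent. This is a sketch of the general technique, not the node's exact internals:

```javascript
// Sketch: keep only the most recent N messages of a conversation
// (the node's default context window is 20).
function applyContextWindow(history, limit = 20) {
  return history.length > limit ? history.slice(-limit) : history;
}

// 25 stored messages → only the last 20 are sent to the model.
const history = Array.from({ length: 25 }, (_, i) => ({
  role: i % 2 === 0 ? 'user' : 'assistant',
  content: `message ${i}`,
}));
const trimmed = applyContextWindow(history);
console.log(trimmed.length);     // 20
console.log(trimmed[0].content); // message 5
```

Trimming from the front keeps the most recent exchanges, which is what the model needs for continuity.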
External Memory
- Flexible: Connect to any memory source (Redis, DB, etc.)
- Custom Logic: Implement your own memory management
- Scalable: Handle large-scale deployments
🛠️ Tool Integration
Basic Tools
{
"name": "get_weather",
"description": "Get current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "City name"
}
},
"required": ["location"]
}
}
Tool Choice Options
- Auto: Let the model decide when to use tools
- None: Disable tool calling
- Required: Force the model to use a tool
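In an OpenAI-compatible request, the tool definition above travels inside a tools array alongside the chosen tool-choice mode. A sketch of that wire format, assuming the OpenAI convention the node is compatible with:

```javascript
// Sketch: wrap the get_weather tool definition in an OpenAI-style request body.
const getWeather = {
  name: 'get_weather',
  description: 'Get current weather for a location',
  parameters: {
    type: 'object',
    properties: {
      location: { type: 'string', description: 'City name' },
    },
    required: ['location'],
  },
};

const body = {
  model: 'google/gemini-2.0-flash-lite-001',
  messages: [{ role: 'user', content: 'Weather in Tehran?' }],
  tools: [{ type: 'function', function: getWeather }],
  tool_choice: 'auto', // or 'none' / 'required'
};
console.log(body.tools[0].function.name); // get_weather
```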
🔄 Streaming Support
Basic Streaming
{
"stream": true,
"messages": [{"role": "user", "content": "Tell me a story"}]
}
Streaming Output
- Returns { streamed: true, response: ... }
- Handle streaming in subsequent nodes
- Perfect for real-time chat applications
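When consuming the stream in a downstream node, the raw data typically arrives as Server-Sent Events. A minimal parser sketch, assuming the common OpenAI-style data: {...} / data: [DONE] framing (the exact framing Rahyana uses should be confirmed against its docs):

```javascript
// Sketch: extract content deltas from an SSE chunk (OpenAI-style framing assumed).
function parseSSEChunk(chunk) {
  const deltas = [];
  for (const line of chunk.split('\n')) {
    if (!line.startsWith('data: ')) continue; // skip comments and blank lines
    const payload = line.slice(6).trim();
    if (payload === '[DONE]') break;          // end-of-stream sentinel
    const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
    if (delta) deltas.push(delta);
  }
  return deltas.join('');
}

const chunk = [
  'data: {"choices":[{"delta":{"content":"Hel"}}]}',
  'data: {"choices":[{"delta":{"content":"lo"}}]}',
  'data: [DONE]',
].join('\n');
console.log(parseSSEChunk(chunk)); // Hello
```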
📚 Example Workflows
1. Simple Chatbot
Chat Trigger → Rahyana → Respond to Webhook
Perfect for: Basic chatbot functionality
2. Chatbot with Tools
Chat Trigger → Rahyana (Tools) → Respond to Webhook
Perfect for: AI agents with capabilities
3. External Memory Chatbot
Chat Trigger → Load Memory → Rahyana → Save Memory → Respond
Perfect for: Production deployments
4. Streaming Chat
Webhook → Rahyana (Streaming) → Webhook (SSE Response)
Perfect for: Real-time applications
🔧 Configuration Options
Basic Settings
- Model: Rahyana model slug
- Messages: Message array with role/content
- System Prompt: Optional system instructions
- Temperature: Response randomness (0-2)
- Max Tokens: Response length limit
Memory Settings
- Internal Memory: Enable/disable built-in memory
- Context Window: Number of messages to keep
- Session ID: Custom session identifier
Advanced Settings
- Stream: Enable streaming responses
- Tools: Tool definitions
- Tool Choice: Tool calling behavior
- Return Raw: Return full API response
🛡️ Security Notes
- API Keys: Stored securely via n8n's encrypted credentials
- Input Validation: All inputs are validated before API calls
- Error Handling: Comprehensive error messages without exposing sensitive data
- Rate Limiting: Built-in retry logic with exponential backoff
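Exponential backoff typically doubles the wait after each failed attempt up to a cap. A sketch of the delay schedule - the node's actual retry parameters are internal, so the base and cap below are illustrative:

```javascript
// Sketch: exponential backoff delay with a cap, as used when retrying
// rate-limited or failed API calls. Base/cap values are illustrative.
function backoffDelay(attempt, baseMs = 500, maxMs = 30000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

console.log(backoffDelay(0));  // 500
console.log(backoffDelay(3));  // 4000
console.log(backoffDelay(10)); // 30000 (capped)
```

In production implementations a random jitter is usually added to each delay so that many clients retrying at once do not hit the API in lockstep.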
📖 Model Selection
Browse available models at: https://rahyana.ir/models
Popular models:
- google/gemini-2.0-flash-lite-001 - Fast, general purpose
- google/gemini-2.5-flash-image-preview - Image generation
- openai/gpt-4o - Advanced reasoning
- anthropic/claude-3.5-sonnet - Creative writing
🤝 Support
- Documentation: https://rahyana.ir/docs
- Issues: https://github.com/rahyana-ai/n8n-nodes-rahyana/issues
- Community: https://community.n8n.io
📄 License
MIT License - see LICENSE file for details.
Built with ❤️ for the n8n community
