n8n-nodes-rooyai-message (v0.3.7)
A production-ready Rooyai Message / Chat Model node for n8n, providing first-class LLM provider integration equivalent to OpenAI, Gemini, or DeepSeek.
🎯 Overview
This custom n8n community node enables you to use Rooyai's LLM API as a message/chat model provider in your n8n workflows. It appears under AI → Language Models → Rooyai Message Model and works seamlessly with:
- ✅ AI Agent
- ✅ Better AI Agent
- ✅ Basic LLM Chain
- ✅ Tools
- ✅ Memory
📦 Installation
Option 1: Install in n8n Custom Directory (Recommended for Testing)
```shell
# Create custom nodes directory if it doesn't exist
mkdir -p ~/.n8n/custom

# Copy the entire dist folder to the custom directory
cp -r ./dist ~/.n8n/custom/n8n-nodes-rooyai-message

# Restart n8n
n8n restart
```

Option 2: Install via npm (Production)
```shell
# In your n8n installation directory
npm install n8n-nodes-rooyai-message

# Restart n8n
n8n restart
```

Option 3: Development Link
```shell
# In this project directory
npm run build
npm link

# In your n8n directory
npm link n8n-nodes-rooyai-message

# Restart n8n
n8n restart
```

🔑 Credentials Setup
- In n8n, navigate to Credentials → Create New Credential
- Search for "Rooyai API"
- Configure the following fields:
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| API Key | Password | ✅ Yes | Your Rooyai API authentication key |
| Base URL | String | ✅ Yes | API endpoint (default: https://rooyai.com/api/v1/chat) |
| Optional Headers | JSON String | ❌ No | Additional headers in JSON format: {"X-Custom": "value"} |
- Click Save to store your credentials
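The Optional Headers field accepts a JSON string that gets merged with the standard authentication headers. A minimal sketch of how that merging could work (the `buildHeaders` helper is illustrative, not the node's actual implementation):

```typescript
// Illustrative helper (not the node's real code): build request headers
// from the credential fields documented above.
function buildHeaders(apiKey: string, optionalHeadersJson?: string): Record<string, string> {
  const headers: Record<string, string> = {
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
  };
  if (optionalHeadersJson && optionalHeadersJson.trim() !== "") {
    let extra: unknown;
    try {
      extra = JSON.parse(optionalHeadersJson);
    } catch {
      throw new Error('Optional Headers must be valid JSON, e.g. {"X-Custom": "value"}');
    }
    if (typeof extra !== "object" || extra === null || Array.isArray(extra)) {
      throw new Error("Optional Headers must be a JSON object");
    }
    // Custom headers win over the defaults on key collisions
    Object.assign(headers, extra as Record<string, string>);
  }
  return headers;
}
```

Invalid JSON in the field should fail loudly at request time rather than being silently dropped.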
🚀 Usage
Basic Chat Completion
- Add Rooyai Message Model node to your workflow
- Select your Rooyai API credentials
- Configure the node:
- Model: Select from dropdown (15 models available)
- Messages: Add user/system/assistant messages
- Temperature: 0.7 (range 0-2)
- Max Tokens: 1024 (optional)
Example Workflow
Start Node → Rooyai Message Model → Output Node

Configuration:
- Model: LLaMa 3.3 70B (from dropdown)
- Messages:
  - Role: system, Content: You are a helpful assistant
  - Role: user, Content: Explain quantum computing in simple terms
- Temperature: 0.7
With AI Agent
Manual Chat Trigger → AI Agent → Rooyai Message Model

The Rooyai Message Model node integrates directly as a language model provider in AI Agent workflows.
With Basic LLM Chain
Start → Basic LLM Chain → Rooyai Message Model → Output

Configure the chain with your prompt template, and it will automatically use Rooyai for text generation.
⚙️ Configuration Options
Model Selection
Select from 15 available Rooyai models via dropdown:
| Model | Description | Best For |
|-------|-------------|----------|
| LLaMa 3.3 70B | Meta's flagship model with 70B parameters | Complex reasoning, detailed analysis |
| DeepSeek R1 | Reasoning-optimized model | Logical tasks, problem-solving |
| DeepSeek v3.1 Nex | Latest DeepSeek with enhancements | General purpose, advanced tasks |
| Qwen3 Coder | Code generation specialist | Programming, technical documentation |
| GPT OSS 120B | Large open-source GPT | Complex tasks, high accuracy |
| GPT OSS 20B | Efficient open-source GPT | Fast responses, good balance |
| TNG R1T Chimera | TNG reasoning architecture | Analytical tasks |
| TNG DeepSeek Chimera | Hybrid TNG-DeepSeek model | Multi-domain tasks |
| Kimi K2 | Moonshot AI's multilingual model | Chinese language, translations |
| GLM 4.5 Air | Lightweight ChatGLM | Fast interactions, efficiency |
| Devstral | Developer-focused model | Coding, debugging, tech docs |
| Mimo v2 Flash | High-speed model | Quick responses, real-time chat |
| Gemma 3 27B | Google Gemma large variant | General purpose, quality |
| Gemma 3 12B | Google Gemma balanced | Good performance/speed ratio |
| Gemma 3 4B | Google Gemma compact | Fastest responses, simple tasks |
Message Roles
- system: Defines AI behavior and context
- user: Human input/questions
- assistant: AI responses (for conversation history)
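A conversation is sent as an ordered array of these role/content pairs: system context first, then the user/assistant turns. A small sketch (the `buildMessages` helper is hypothetical, shown only to illustrate the ordering):

```typescript
type Role = "system" | "user" | "assistant";
interface ChatMessage { role: Role; content: string; }

// Illustrative helper: system prompt first, then conversation turns in order.
function buildMessages(systemPrompt: string, turns: Array<[Role, string]>): ChatMessage[] {
  const messages: ChatMessage[] = [{ role: "system", content: systemPrompt }];
  for (const [role, content] of turns) {
    messages.push({ role, content });
  }
  return messages;
}
```

Assistant messages are only needed when replaying prior conversation history.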
Advanced Options
| Option | Type | Range | Description |
|--------|------|-------|-------------|
| Temperature | Number | 0-2 | Controls randomness (0 = deterministic, 2 = very creative) |
| Max Tokens | Number | 1-32768 | Maximum response length |
| Frequency Penalty | Number | -2 to 2 | Reduces word repetition |
| Presence Penalty | Number | -2 to 2 | Encourages new topics |
| Top P | Number | 0-1 | Nucleus sampling (alternative to temperature) |
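If you set these values programmatically (e.g. via expressions), keeping them inside the documented ranges avoids API-side validation errors. A hedged sketch of clamping to those ranges (the `normalizeOptions` helper is illustrative; the node may validate differently):

```typescript
// Clamp a value into [min, max]
function clamp(value: number, min: number, max: number): number {
  return Math.min(max, Math.max(min, value));
}

// Illustrative: force the advanced options into the ranges from the table above.
function normalizeOptions(opts: { temperature?: number; maxTokens?: number; topP?: number }) {
  return {
    temperature: clamp(opts.temperature ?? 0.7, 0, 2),
    max_tokens: clamp(opts.maxTokens ?? 1024, 1, 32768),
    top_p: clamp(opts.topP ?? 1, 0, 1),
  };
}
```

The defaults used here (0.7, 1024) match the values shown in the Usage section.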
Simplify Output
- Enabled (default): Returns only the assistant's message content as a clean string
- Disabled: Returns the full API response, including usage metadata (e.g. cost_usd)
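Conceptually, the toggle switches between returning the extracted assistant text and the raw response object. A minimal sketch of that behavior (the `simplifyOutput` helper and `RooyaiResponse` type are assumptions for illustration, not the node's actual code):

```typescript
interface RooyaiResponse {
  choices?: Array<{ message?: { content?: string } }>;
  usage?: { cost_usd?: number };
}

// Illustrative: with simplify on, return just the assistant text;
// with simplify off, return the full response including usage metadata.
function simplifyOutput(response: RooyaiResponse, simplify: boolean): string | RooyaiResponse {
  if (!simplify) return response;
  return response.choices?.[0]?.message?.content ?? "";
}
```

Disabling simplification is the easiest way to inspect cost_usd per request.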
🔧 API Integration Details
Request Format
The node sends POST requests to your configured Base URL with:
```json
{
  "model": "gemini-2.0-flash",
  "messages": [
    { "role": "system", "content": "You are helpful" },
    { "role": "user", "content": "Hello!" }
  ],
  "temperature": 0.7,
  "max_tokens": 1024
}
```

Headers:

```
Authorization: Bearer {YOUR_API_KEY}
Content-Type: application/json
{...optional custom headers}
```

Response Parsing
Rooyai returns responses in this format:
```json
{
  "choices": [
    {
      "message": {
        "content": "Hello! How can I assist you today?"
      }
    }
  ],
  "usage": {
    "cost_usd": 0.000123
  }
}
```

The node automatically extracts choices[0].message.content for the final output.
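Putting the two halves together, the request body above can be assembled like this (a sketch only; `buildRequestBody` is a hypothetical helper, but the field names match the documented request format):

```typescript
interface ChatMessage { role: "system" | "user" | "assistant"; content: string; }

// Illustrative: serialize the documented request body for the POST call.
// max_tokens is snake_case on the wire, per the request format above.
function buildRequestBody(
  model: string,
  messages: ChatMessage[],
  temperature = 0.7,
  maxTokens = 1024,
): string {
  return JSON.stringify({ model, messages, temperature, max_tokens: maxTokens });
}
```

The resulting string is what gets POSTed to the configured Base URL with the Authorization and Content-Type headers shown earlier.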
📁 Project Structure
```
n8n-nodes-rooyai-message/
├── credentials/
│   └── RooyaiApi.credentials.ts    # API credentials definition
├── nodes/
│   └── RooyaiMessage/
│       ├── RooyaiMessage.node.ts   # Main node implementation
│       ├── ChatDescription.ts      # Message/chat operations
│       ├── GenericFunctions.ts     # Error handling & utilities
│       ├── RooyaiMessage.node.json # Node metadata
│       └── rooyai.svg              # Node icon
├── dist/                           # Compiled JavaScript output
├── package.json                    # Package metadata & dependencies
├── tsconfig.json                   # TypeScript configuration
├── gulpfile.js                     # Build tasks (icon copying)
└── README.md                       # This file
```

🛠️ Development
Prerequisites
- Node.js 18+
- npm 8+
- TypeScript 5.3+
Build from Source
```shell
# Install dependencies
npm install

# Build the project (compiles TypeScript + copies icons)
npm run build

# Watch mode for development
npm run dev
```

Modifying the API Integration
⚙️ Change Base URL:
Edit credentials/RooyaiApi.credentials.ts, line 20:

```typescript
default: 'https://your-new-endpoint.com/api/v1/chat'
```

⚙️ Modify Response Parsing:
Edit nodes/RooyaiMessage/ChatDescription.ts, lines 140-160 (the postReceive function):

```typescript
// Update to match your API's response structure
const assistantText = item.json?.choices?.[0]?.message?.content || '';
```

⚙️ Add Custom Headers:
Users can add custom headers via the "Optional Headers" credential field without code changes.
✅ Verification
After installation, verify the node:
- Node Appears: Search for "Rooyai" in n8n's "Add Node" menu
- Credentials Work: Create credential and test with valid API key
- Chat Works: Send a test message and receive response
- No Errors: Check n8n logs for any error messages
Expected behavior:
- Node is categorized under AI or Language Models
- Requests sent to configured Base URL
- Responses parsed correctly as strings
- Compatible with AI Agent and LLM Chain nodes
🐛 Troubleshooting
Node doesn't appear in n8n
- Ensure the dist/ folder is copied to ~/.n8n/custom/
- Restart n8n: n8n restart or service n8n restart
- Check the n8n logs: ~/.n8n/logs/n8n.log
"Cannot find credentials" error
- Create "Rooyai API" credentials in n8n UI first
- Ensure API key is valid and not expired
API request fails
- Verify the Base URL is correct: https://rooyai.com/api/v1/chat
- Check that your API key has the proper permissions
- Review error message in n8n execution view
Response parsing error
- Disable Simplify Output to see the raw API response
- Verify that the Rooyai API returns choices[0].message.content
📝 License
MIT
👥 Author
Rooyai
Website: https://rooyai.com
Support: [email protected]
Built with ❤️ for the n8n community
