n8n-nodes-rooyai-message

A production-ready Rooyai Message / Chat Model node for n8n, providing first-class LLM provider integration on par with the built-in OpenAI, Gemini, and DeepSeek nodes.

🎯 Overview

This custom n8n community node enables you to use Rooyai's LLM API as a message/chat model provider in your n8n workflows. It appears under AI → Language Models → Rooyai Message Model and works seamlessly with:

  • ✅ AI Agent
  • ✅ Better AI Agent
  • ✅ Basic LLM Chain
  • ✅ Tools
  • ✅ Memory

📦 Installation

Option 1: Install in n8n Custom Directory (Recommended for Testing)

# Create custom nodes directory if it doesn't exist
mkdir -p ~/.n8n/custom

# Copy the entire dist folder to the custom directory
cp -r ./dist ~/.n8n/custom/n8n-nodes-rooyai-message

# Restart n8n
n8n restart

Option 2: Install via npm (Production)

# In your n8n installation directory
npm install n8n-nodes-rooyai-message

# Restart n8n
n8n restart

Option 3: Development Link

# In this project directory
npm run build
npm link

# In your n8n directory
npm link n8n-nodes-rooyai-message
n8n restart

🔑 Credentials Setup

  1. In n8n, navigate to Credentials → Create New Credential
  2. Search for "Rooyai API"
  3. Configure the following fields:

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| API Key | Password | ✅ Yes | Your Rooyai API authentication key |
| Base URL | String | ✅ Yes | API endpoint (default: https://rooyai.com/api/v1/chat) |
| Optional Headers | JSON String | ❌ No | Additional headers in JSON format: {"X-Custom": "value"} |

  4. Click Save to store your credentials
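
For orientation, the fields above correspond to the credential definition in credentials/RooyaiApi.credentials.ts. Below is a minimal sketch of what such a definition looks like using n8n's ICredentialType interface; the internal property names (apiKey, baseUrl, optionalHeaders) are illustrative and may not match the package's exact source.

import { ICredentialType, INodeProperties } from 'n8n-workflow';

// Sketch of a Rooyai credential type. Property names are illustrative.
export class RooyaiApi implements ICredentialType {
  name = 'rooyaiApi';
  displayName = 'Rooyai API';
  properties: INodeProperties[] = [
    {
      displayName: 'API Key',
      name: 'apiKey',
      type: 'string',
      typeOptions: { password: true }, // rendered as a masked password field
      default: '',
    },
    {
      displayName: 'Base URL',
      name: 'baseUrl',
      type: 'string',
      default: 'https://rooyai.com/api/v1/chat',
    },
    {
      displayName: 'Optional Headers',
      name: 'optionalHeaders',
      type: 'string',
      default: '',
      description: 'Additional headers as a JSON object, e.g. {"X-Custom": "value"}',
    },
  ];
}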

🚀 Usage

Basic Chat Completion

  1. Add Rooyai Message Model node to your workflow
  2. Select your Rooyai API credentials
  3. Configure the node:
    • Model: Select from dropdown (15 models available)
    • Messages: Add user/system/assistant messages
    • Temperature: 0.7 (0-2 range)
    • Max Tokens: 1024 (optional)

Example Workflow

Start Node → Rooyai Message Model → Output Node

Configuration:

  • Model: LLaMa 3.3 70B (from dropdown)
  • Messages:
    • Role: system, Content: You are a helpful assistant
    • Role: user, Content: Explain quantum computing in simple terms
  • Temperature: 0.7

With AI Agent

Manual Chat Trigger → AI Agent → Rooyai Message Model

The Rooyai Message Model node integrates directly as a language model provider in AI Agent workflows.

With Basic LLM Chain

Start → Basic LLM Chain → Rooyai Message Model → Output

Configure the chain with your prompt template, and it will automatically use Rooyai for text generation.

⚙️ Configuration Options

Model Selection

Select from 15 available Rooyai models via dropdown:

| Model | Description | Best For |
|-------|-------------|----------|
| LLaMa 3.3 70B | Meta's flagship model with 70B parameters | Complex reasoning, detailed analysis |
| DeepSeek R1 | Reasoning-optimized model | Logical tasks, problem-solving |
| DeepSeek v3.1 Nex | Latest DeepSeek with enhancements | General purpose, advanced tasks |
| Qwen3 Coder | Code generation specialist | Programming, technical documentation |
| GPT OSS 120B | Large open-source GPT | Complex tasks, high accuracy |
| GPT OSS 20B | Efficient open-source GPT | Fast responses, good balance |
| TNG R1T Chimera | TNG reasoning architecture | Analytical tasks |
| TNG DeepSeek Chimera | Hybrid TNG-DeepSeek model | Multi-domain tasks |
| Kimi K2 | Moonshot AI's multilingual model | Chinese language, translations |
| GLM 4.5 Air | Lightweight ChatGLM | Fast interactions, efficiency |
| Devstral | Developer-focused model | Coding, debugging, tech docs |
| Mimo v2 Flash | High-speed model | Quick responses, real-time chat |
| Gemma 3 27B | Google Gemma large variant | General purpose, quality |
| Gemma 3 12B | Google Gemma balanced | Good performance/speed ratio |
| Gemma 3 4B | Google Gemma compact | Fastest responses, simple tasks |

Message Roles

  • system: Defines AI behavior and context
  • user: Human input/questions
  • assistant: AI responses (for conversation history)
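
As an illustration (not package source), a messages array that uses all three roles might look like this, with the assistant entry replaying the previous turn so the model keeps conversation history:

// Illustrative conversation: system sets behaviour, the assistant entry
// carries the previous reply, and the final user entry asks the follow-up.
const messages = [
  { role: 'system', content: 'You are a concise technical assistant.' },
  { role: 'user', content: 'What is n8n?' },
  { role: 'assistant', content: 'n8n is a workflow automation tool.' },
  { role: 'user', content: 'How do I add a custom node to it?' },
];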

Advanced Options

| Option | Type | Range | Description |
|--------|------|-------|-------------|
| Temperature | Number | 0-2 | Controls randomness (0 = deterministic, 2 = very creative) |
| Max Tokens | Number | 1-32768 | Maximum response length |
| Frequency Penalty | Number | -2 to 2 | Reduces word repetition |
| Presence Penalty | Number | -2 to 2 | Encourages new topics |
| Top P | Number | 0-1 | Nucleus sampling (alternative to temperature) |
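
These options are forwarded in the chat request body. The sketch below assumes Rooyai uses the common OpenAI-style snake_case field names; only temperature and max_tokens are confirmed by the request example further down, so treat the remaining names as assumptions.

// Possible mapping from node options to request-body fields.
// frequency_penalty, presence_penalty and top_p are assumed field names.
const body = {
  model: 'gemini-2.0-flash',
  messages: [{ role: 'user', content: 'Hello!' }],
  temperature: 0.7,        // 0-2, controls randomness
  max_tokens: 1024,        // caps response length
  frequency_penalty: 0,    // -2 to 2, reduces repetition (assumed name)
  presence_penalty: 0,     // -2 to 2, encourages new topics (assumed name)
  top_p: 1,                // 0-1, nucleus sampling (assumed name)
};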

Simplify Output

  • Enabled (default): Returns only the assistant's message content as a clean string
  • Disabled: Returns full API response including usage metadata (cost_usd)
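
Based on the response format documented below, the two settings yield outputs shaped roughly like this:

// Simplify Output enabled (default): just the assistant text as a string.
const simplified = 'Hello! How can I assist you today?';

// Simplify Output disabled: the full API response, including usage metadata.
const full = {
  choices: [{ message: { content: 'Hello! How can I assist you today?' } }],
  usage: { cost_usd: 0.000123 },
};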

🔧 API Integration Details

Request Format

The node sends POST requests to your configured Base URL with:

{
  "model": "gemini-2.0-flash",
  "messages": [
    { "role": "system", "content": "You are helpful" },
    { "role": "user", "content": "Hello!" }
  ],
  "temperature": 0.7,
  "max_tokens": 1024
}

Headers:

Authorization: Bearer {YOUR_API_KEY}
Content-Type: application/json
{...optional custom headers}
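
For sanity-checking the endpoint outside n8n, the same request can be sent from Node.js 18+ (which ships a global fetch). This is a testing sketch, not package code; it assumes your configured Base URL and a valid API key in a ROOYAI_API_KEY environment variable.

// Minimal request matching the documented format (run with Node.js 18+, ESM).
const apiKey = process.env.ROOYAI_API_KEY ?? '';
const baseUrl = 'https://rooyai.com/api/v1/chat';

const response = await fetch(baseUrl, {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${apiKey}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'gemini-2.0-flash',
    messages: [
      { role: 'system', content: 'You are helpful' },
      { role: 'user', content: 'Hello!' },
    ],
    temperature: 0.7,
    max_tokens: 1024,
  }),
});

const data = await response.json();
console.log(data.choices?.[0]?.message?.content);

If this prints an assistant reply, the credentials and Base URL you configured in n8n should work unchanged.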

Response Parsing

Rooyai returns responses in this format:

{
  "choices": [
    {
      "message": {
        "content": "Hello! How can I assist you today?"
      }
    }
  ],
  "usage": {
    "cost_usd": 0.000123
  }
}

The node automatically extracts choices[0].message.content for the final output.

📁 Project Structure

n8n-nodes-rooyai-message/
├── credentials/
│   └── RooyaiApi.credentials.ts    # API credentials definition
├── nodes/
│   └── RooyaiMessage/
│       ├── RooyaiMessage.node.ts   # Main node implementation
│       ├── ChatDescription.ts       # Message/chat operations
│       ├── GenericFunctions.ts      # Error handling & utilities
│       ├── RooyaiMessage.node.json  # Node metadata
│       └── rooyai.svg              # Node icon
├── dist/                           # Compiled JavaScript output
├── package.json                    # Package metadata & dependencies
├── tsconfig.json                   # TypeScript configuration
├── gulpfile.js                     # Build tasks (icon copying)
└── README.md                       # This file

🛠️ Development

Prerequisites

  • Node.js 18+
  • npm 8+
  • TypeScript 5.3+

Build from Source

# Install dependencies
npm install

# Build the project (compiles TypeScript + copies icons)
npm run build

# Watch mode for development
npm run dev

Modifying the API Integration

⚙️ Change Base URL:
Edit credentials/RooyaiApi.credentials.ts, line 20:

default: 'https://your-new-endpoint.com/api/v1/chat'

⚙️ Modify Response Parsing:
Edit nodes/RooyaiMessage/ChatDescription.ts, lines 140-160 (postReceive function):

// Update to match your API's response structure
const assistantText = item.json?.choices?.[0]?.message?.content || '';
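
A fuller sketch of what such a handler can look like is shown below. It uses n8n's declarative-routing types from n8n-workflow, but it illustrates the parsing step rather than reproducing the package's shipped source, so the actual function may differ.

import type {
  IExecuteSingleFunctions,
  IN8nHttpFullResponse,
  INodeExecutionData,
} from 'n8n-workflow';

// Illustrative postReceive-style handler: pulls the assistant text out of each
// returned item and exposes it as a clean json payload.
export async function parseRooyaiResponse(
  this: IExecuteSingleFunctions,
  items: INodeExecutionData[],
  _response: IN8nHttpFullResponse,
): Promise<INodeExecutionData[]> {
  return items.map((item) => {
    const assistantText =
      (item.json as any)?.choices?.[0]?.message?.content ?? '';
    return { json: { content: assistantText } };
  });
}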

⚙️ Add Custom Headers:
Users can add custom headers via the "Optional Headers" credential field without code changes.

✅ Verification

After installation, verify the node:

  1. Node Appears: Search for "Rooyai" in n8n's "Add Node" menu
  2. Credentials Work: Create credential and test with valid API key
  3. Chat Works: Send a test message and receive response
  4. No Errors: Check n8n logs for any error messages

Expected behavior:

  • Node is categorized under AI or Language Models
  • Requests sent to configured Base URL
  • Responses parsed correctly as strings
  • Compatible with AI Agent and LLM Chain nodes

🐛 Troubleshooting

Node doesn't appear in n8n

  • Ensure dist/ folder is copied to ~/.n8n/custom/
  • Restart n8n: n8n restart or service n8n restart
  • Check n8n logs: ~/.n8n/logs/n8n.log

"Cannot find credentials" error

  • Create "Rooyai API" credentials in n8n UI first
  • Ensure API key is valid and not expired

API request fails

  • Verify Base URL is correct: https://rooyai.com/api/v1/chat
  • Check API key has proper permissions
  • Review error message in n8n execution view

Response parsing error

  • Set Simplify Output to false (disabled) to inspect the raw API response
  • Verify Rooyai API returns choices[0].message.content

📝 License

MIT

👥 Author

Rooyai
Website: https://rooyai.com
Support: [email protected]

Built with ❤️ for the n8n community