n8n-nodes-groq-chat
v2.3.7
Custom n8n node for Groq API - A supply node that provides Groq language models for use with Basic LLM Chain and other AI nodes in n8n.
Description
This n8n custom node provides a Groq Chat Model supply node that can be connected to the Basic LLM Chain node or other AI processing nodes in n8n. Groq provides ultra-fast inference for large language models, making it perfect for real-time AI applications.
Features
- ✅ Supply Node: Provides Groq language models as a supply node (compatible with Basic LLM Chain)
- ✅ Dynamic Model Loading: Automatically fetches available models from Groq API
- ✅ Multiple Models: Support for all Groq models (Llama, Mixtral, Gemma, GPT-OSS, etc.)
- ✅ Configurable Parameters: Temperature, max tokens, include reasoning, and other model options
- ✅ Direct SDK Integration: Uses groq-sdk directly for optimal performance
- ✅ Real-time API: Leverages Groq's fast inference engine for low-latency responses
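To make the direct SDK integration concrete, a chat completion request built with the node's default option values might look roughly like the sketch below. The `buildChatRequest` helper and its shape are illustrative, not this package's actual code; the defaults (temperature 0.7, max tokens 4096) mirror the node options described later in this README.

```typescript
// Sketch of the request shape sent through groq-sdk (OpenAI-compatible surface).
// buildChatRequest is an illustrative helper, not part of this package's API.

interface ChatOptions {
  temperature?: number; // 0-1, node default 0.7
  maxTokens?: number;   // node default 4096
}

function buildChatRequest(model: string, prompt: string, opts: ChatOptions = {}) {
  return {
    model,
    messages: [{ role: "user" as const, content: prompt }],
    temperature: opts.temperature ?? 0.7,
    max_tokens: opts.maxTokens ?? 4096,
  };
}

// With groq-sdk installed, the params object would be passed on like this:
//   const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });
//   const completion = await groq.chat.completions.create(
//     buildChatRequest("llama-3.1-8b-instant", "Say hello in one word.")
//   );

console.log(buildChatRequest("llama-3.1-8b-instant", "ping"));
```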
Installation
Via npm (Recommended)
npm install n8n-nodes-groq-chat
Then restart your n8n instance. The node will appear in the node palette under "AI" → "Language Models".
Manual Installation
- Clone this repository or download the source code
- Install dependencies: npm install
- Build the node: npm run build
- Link to your n8n instance:
  - Set the N8N_CUSTOM_EXTENSIONS environment variable to point to this directory, or
  - Copy the dist folder to your n8n custom nodes location
Configuration
1. Create Groq API Credentials
- Go to Credentials in n8n
- Click Add Credential
- Search for "Groq API" and select it
- Enter your API key from Groq Console
- Click Save
2. Add Groq Chat Model Node
- In your workflow, click + to add a node
- Search for "Groq Chat Model"
- Select the node to add it to your workflow
3. Configure the Node
- Select Credentials: Choose your Groq API credentials
- Choose Model: Select a model from the dropdown (models are loaded dynamically from Groq API)
- Configure Options (optional):
- Temperature: Controls randomness (0-1, default: 0.7)
- Max Tokens: Maximum number of tokens to generate (default: 4096)
- Include Reasoning: Whether to include reasoning steps in the response (default: false)
4. Connect to Basic LLM Chain
- Add a Basic LLM Chain node to your workflow
- Connect the Model output from Groq Chat Model to the Model input of Basic LLM Chain
- Configure your chain with prompts and other settings
Usage Example
┌─────────────────┐
│ Groq Chat Model │
│ (Supply Node) │
└────────┬────────┘
│ Model
▼
┌─────────────────┐
│ Basic LLM Chain │
│ (AI Node) │
└─────────────────┘

The Groq Chat Model node provides the language model instance that the Basic LLM Chain uses to process prompts and generate responses.
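The supply pattern above can be illustrated with simplified local types: the supply node does not process workflow items itself; it hands a configured model object to the consuming chain node. The interfaces and names below are stand-ins for illustration, not n8n's actual types or this node's implementation.

```typescript
// Simplified illustration of the supply-node pattern. These interfaces are
// local stand-ins, not n8n's real types.

interface LanguageModel {
  model: string;
  temperature: number;
  invoke(prompt: string): Promise<string>;
}

interface SupplyData {
  response: LanguageModel; // what the Basic LLM Chain receives on its Model input
}

function supplyGroqModel(model: string, temperature = 0.7): SupplyData {
  return {
    response: {
      model,
      temperature,
      // A real implementation would call the Groq API here.
      invoke: async (prompt) => `[${model}] reply to: ${prompt}`,
    },
  };
}

// The consumer (the chain) only sees the supplied model, never the credentials
// or request plumbing behind it:
async function runChain(supplied: SupplyData, prompt: string): Promise<string> {
  return supplied.response.invoke(prompt);
}
```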
Available Models
The node dynamically loads all available models from the Groq API:
Groq model reference (pricing & limits)
The table below summarizes some of the main Groq models (Developer plan values, subject to change – always check the Groq console for the latest numbers):
| Model | Model ID | Speed (tokens/s) | Price per 1M tokens | Rate limits (TPM / RPM) | Context window (tokens) | Max completion tokens | Max file size |
|-------|----------|------------------|----------------------|--------------------------|-------------------------|------------------------|---------------|
| Meta Llama 3.1 8B | llama-3.1-8b-instant | ~560 | $0.05 input / $0.08 output | 250K TPM / 1K RPM | 131,072 | 131,072 | - |
| Meta Llama 3.3 70B | llama-3.3-70b-versatile | ~280 | $0.59 input / $0.79 output | 300K TPM / 1K RPM | 131,072 | 32,768 | - |
| Meta Llama Guard 4 12B | meta-llama/llama-guard-4-12b | ~1200 | $0.20 input / $0.20 output | 30K TPM / 100 RPM | 131,072 | 1,024 | 20 MB |
| OpenAI GPT-OSS 120B | openai/gpt-oss-120b | ~500 | $0.15 input / $0.60 output | 250K TPM / 1K RPM | 131,072 | 65,536 | - |
| OpenAI GPT-OSS 20B | openai/gpt-oss-20b | ~1000 | $0.075 input / $0.30 output | 250K TPM / 1K RPM | 131,072 | 65,536 | - |
| OpenAI Whisper Large V3 | whisper-large-v3 | - | $0.111 per hour | 200K ASH / 300 RPM | - | - | 100 MB |
| OpenAI Whisper Large V3 Turbo | whisper-large-v3-turbo | - | $0.04 per hour | 400K ASH / 400 RPM | - | - | 100 MB |
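As a quick sanity check on the pricing column, per-request cost is (input tokens × input price + output tokens × output price) / 1,000,000. A small helper with prices hard-coded from the table above (they may change, so verify current numbers in the Groq console):

```typescript
// Estimate request cost in USD from the table's per-1M-token prices.
// Prices are copied from the table above and may change over time.
const PRICES_PER_M: Record<string, { input: number; output: number }> = {
  "llama-3.1-8b-instant": { input: 0.05, output: 0.08 },
  "llama-3.3-70b-versatile": { input: 0.59, output: 0.79 },
  "openai/gpt-oss-120b": { input: 0.15, output: 0.6 },
  "openai/gpt-oss-20b": { input: 0.075, output: 0.3 },
};

function estimateCostUSD(modelId: string, inputTokens: number, outputTokens: number): number {
  const p = PRICES_PER_M[modelId];
  if (!p) throw new Error(`no price data for ${modelId}`);
  return (inputTokens * p.input + outputTokens * p.output) / 1_000_000;
}

// e.g. a 10K-token prompt with a 2K-token completion on Llama 3.1 8B:
console.log(estimateCostUSD("llama-3.1-8b-instant", 10_000, 2_000));
```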
Development
Prerequisites
- Node.js 18+
- npm or yarn
Setup
# Clone the repository
git clone <repository-url>
cd n8n-groq
# Install dependencies
npm install
# Build the node
npm run build
Scripts
- npm run build - Build the node
- npm run dev - Watch mode for development
- npm run lint - Run linter
- npm run format - Format code
Troubleshooting
"Failed to fetch models from Groq API"
- Verify your API key is correct in credentials
- Check your internet connection
- Ensure the Groq API is accessible from your n8n instance
"No models found"
- This usually means the API returned an empty list
- Try refreshing the node or check Groq API status
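When either of these errors appears, it can help to query the models endpoint outside n8n to isolate whether the problem is the key, the network, or the node. Groq exposes an OpenAI-compatible models endpoint at https://api.groq.com/openai/v1/models; the sketch below (the `listModels` helper is illustrative, not the node's code) lists model IDs with a valid GROQ_API_KEY set.

```typescript
// Standalone check that the Groq models endpoint is reachable with your key.
// Uses the global fetch available in Node.js 18+.
const GROQ_MODELS_URL = "https://api.groq.com/openai/v1/models";

function buildModelsRequest(apiKey: string): { url: string; headers: Record<string, string> } {
  return {
    url: GROQ_MODELS_URL,
    headers: { Authorization: `Bearer ${apiKey}` },
  };
}

async function listModels(apiKey: string): Promise<string[]> {
  const { url, headers } = buildModelsRequest(apiKey);
  const res = await fetch(url, { headers });
  if (!res.ok) {
    // 401 suggests a bad key; network errors surface before this point.
    throw new Error(`Groq API returned ${res.status}: check your key and network`);
  }
  const body = (await res.json()) as { data: { id: string }[] };
  return body.data.map((m) => m.id);
}

// Uncomment to run against the live API:
// listModels(process.env.GROQ_API_KEY ?? "").then(console.log, console.error);
```

An empty list from this call would confirm the "No models found" case originates from the API rather than from the node.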
License
MIT
Author
maxime-pharmania
