n8n-nodes-groq-chat

v2.3.7


n8n supply node for the Groq API: provides Groq language models for use with the Basic LLM Chain and other AI nodes. Ultra-fast inference with Llama, Mixtral, Gemma, and GPT-OSS models.

Downloads

2,239

n8n-nodes-groq-chat

A custom n8n node for the Groq API: a supply node that provides Groq language models for use with the Basic LLM Chain and other AI nodes in n8n.

Description

This n8n custom node provides a Groq Chat Model supply node that can be connected to the Basic LLM Chain node or other AI processing nodes in n8n. Groq provides ultra-fast inference for large language models, making it perfect for real-time AI applications.

Features

  • Supply Node: Provides Groq language models as a supply node (compatible with Basic LLM Chain)
  • Dynamic Model Loading: Automatically fetches available models from Groq API
  • Multiple Models: Support for all Groq models (Llama, Mixtral, Gemma, GPT-OSS, etc.)
  • Configurable Parameters: Temperature, max tokens, include reasoning, and other model options
  • Direct SDK Integration: Uses groq-sdk directly for optimal performance
  • Real-time API: Leverages Groq's fast inference engine for low-latency responses

Installation

Via npm (Recommended)

npm install n8n-nodes-groq-chat

Then restart your n8n instance. The node will appear in the node palette under "AI" → "Language Models".

Manual Installation

  1. Clone this repository or download the source code
  2. Install dependencies:
    npm install
  3. Build the node:
    npm run build
  4. Link to your n8n instance:
    • Set the N8N_CUSTOM_EXTENSIONS environment variable to point to this directory, or
    • Copy the dist folder to your n8n custom nodes location
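For the environment-variable route, a minimal sketch (the path below is an example; point it at wherever the built node actually lives):

```shell
# Tell n8n where to find custom nodes (example path; adjust to your setup).
export N8N_CUSTOM_EXTENSIONS="$HOME/.n8n/custom/n8n-nodes-groq-chat"
echo "n8n will scan: $N8N_CUSTOM_EXTENSIONS"
```

n8n scans this directory on startup, so restart your n8n instance after setting the variable.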

Configuration

1. Create Groq API Credentials

  1. Go to Credentials in n8n
  2. Click Add Credential
  3. Search for "Groq API" and select it
  4. Enter your API key from Groq Console
  5. Click Save

2. Add Groq Chat Model Node

  1. In your workflow, click + to add a node
  2. Search for "Groq Chat Model"
  3. Select the node to add it to your workflow

3. Configure the Node

  1. Select Credentials: Choose your Groq API credentials
  2. Choose Model: Select a model from the dropdown (models are loaded dynamically from Groq API)
  3. Configure Options (optional):
    • Temperature: Controls randomness (0-1, default: 0.7)
    • Max Tokens: Maximum number of tokens to generate (default: 4096)
    • Include Reasoning: Whether to include reasoning steps in the response (default: false)
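As an illustration of how these options map onto Groq's OpenAI-compatible request parameters, here is a hypothetical helper. The field names `temperature` and `max_tokens` follow Groq's chat-completions API, and the defaults mirror the values above, but the function itself is illustrative and not part of the package:

```typescript
// Hypothetical mapping from the node's UI options to Groq request parameters.
interface NodeOptions {
  temperature?: number; // 0-1 in the node UI
  maxTokens?: number;
}

function toGroqParams(model: string, opts: NodeOptions) {
  // Clamp temperature to the node's documented 0-1 range; default 0.7.
  const temperature = Math.min(Math.max(opts.temperature ?? 0.7, 0), 1);
  return {
    model,
    temperature,
    max_tokens: opts.maxTokens ?? 4096, // node default
  };
}

const params = toGroqParams("llama-3.1-8b-instant", { temperature: 1.4 });
console.log(params); // temperature clamped to 1, max_tokens falls back to 4096
```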

4. Connect to Basic LLM Chain

  1. Add a Basic LLM Chain node to your workflow
  2. Connect the Model output from Groq Chat Model to the Model input of Basic LLM Chain
  3. Configure your chain with prompts and other settings

Usage Example

┌─────────────────┐
│ Groq Chat Model │
│  (Supply Node)  │
└────────┬────────┘
         │ Model
         ▼
┌─────────────────┐
│ Basic LLM Chain │
│  (AI Node)      │
└─────────────────┘

The Groq Chat Model node provides the language model instance that the Basic LLM Chain uses to process prompts and generate responses.

Available Models

The node dynamically loads all available models from the Groq API:

Groq model reference (pricing & limits)

The table below summarizes some of the main Groq models (Developer plan values, subject to change – always check the Groq console for the latest numbers):

| Model | Model ID | Speed (tokens/s) | Price per 1M tokens | Rate limits (TPM / RPM) | Context window (tokens) | Max completion tokens | Max file size |
|-------|----------|------------------|---------------------|-------------------------|-------------------------|-----------------------|---------------|
| Meta Llama 3.1 8B | llama-3.1-8b-instant | ~560 | $0.05 input / $0.08 output | 250K TPM / 1K RPM | 131,072 | 131,072 | - |
| Meta Llama 3.3 70B | llama-3.3-70b-versatile | ~280 | $0.59 input / $0.79 output | 300K TPM / 1K RPM | 131,072 | 32,768 | - |
| Meta Llama Guard 4 12B | meta-llama/llama-guard-4-12b | ~1200 | $0.20 input / $0.20 output | 30K TPM / 100 RPM | 131,072 | 1,024 | 20 MB |
| OpenAI GPT-OSS 120B | openai/gpt-oss-120b | ~500 | $0.15 input / $0.60 output | 250K TPM / 1K RPM | 131,072 | 65,536 | - |
| OpenAI GPT-OSS 20B | openai/gpt-oss-20b | ~1000 | $0.075 input / $0.30 output | 250K TPM / 1K RPM | 131,072 | 65,536 | - |
| OpenAI Whisper Large V3 | whisper-large-v3 | - | $0.111 per hour | 200K ASH / 300 RPM | - | - | 100 MB |
| OpenAI Whisper Large V3 Turbo | whisper-large-v3-turbo | - | $0.04 per hour | 400K ASH / 400 RPM | - | - | 100 MB |
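To sketch what the dynamic model loading does, assume the OpenAI-compatible `{ data: [{ id }] }` shape that Groq's models endpoint returns. The filtering rule below (dropping Whisper transcription models, which are not chat models) is an assumption for illustration, not the package's actual logic:

```typescript
// Sketch: reduce a Groq models-list response to chat-capable model IDs.
interface ModelEntry {
  id: string;
}

function chatModels(response: { data: ModelEntry[] }): string[] {
  return response.data
    .map((m) => m.id)
    .filter((id) => !id.includes("whisper")) // Whisper models are audio-only
    .sort();
}

const sample = {
  data: [
    { id: "llama-3.3-70b-versatile" },
    { id: "whisper-large-v3" },
    { id: "openai/gpt-oss-20b" },
  ],
};
console.log(chatModels(sample)); // ["llama-3.3-70b-versatile", "openai/gpt-oss-20b"]
```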

Development

Prerequisites

  • Node.js 18+
  • npm or yarn

Setup

# Clone the repository
git clone <repository-url>
cd n8n-groq

# Install dependencies
npm install

# Build the node
npm run build

Scripts

  • npm run build - Build the node
  • npm run dev - Watch mode for development
  • npm run lint - Run linter
  • npm run format - Format code

Troubleshooting

"Failed to fetch models from Groq API"

  • Verify your API key is correct in credentials
  • Check your internet connection
  • Ensure the Groq API is accessible from your n8n instance
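A quick way to test connectivity outside n8n is to query Groq's OpenAI-compatible models endpoint directly. This diagnostic sketch assumes curl is available and, optionally, a GROQ_API_KEY in your environment:

```shell
# Diagnostic only: checks whether the Groq models endpoint is reachable.
if [ -n "${GROQ_API_KEY:-}" ]; then
  if curl -sf https://api.groq.com/openai/v1/models \
       -H "Authorization: Bearer $GROQ_API_KEY" > /dev/null; then
    result="reachable"
  else
    result="unreachable"
  fi
else
  result="no-key"
fi
echo "Groq API check: $result"
```

If this succeeds outside n8n but the node still fails, the problem is likely in the credential configuration rather than the network.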

"No models found"

  • This usually means the API returned an empty list
  • Try refreshing the node, or check the Groq API status page

License

MIT

Author

maxime-pharmania