@bergetai/n8n-nodes-berget-ai-chat
v1.1.0
n8n node for Berget AI chat/text models
n8n-nodes-berget-ai-chat
n8n node for Berget AI chat/text models (Llama, Mistral, Qwen, GPT-OSS, etc.)
Installation
Community Nodes (Recommended)
- Open n8n
- Go to Settings > Community Nodes
- Click Install a community node
- Enter: @bergetai/n8n-nodes-berget-ai-chat
- Click Install
Manual Installation
# In your n8n project
npm install @bergetai/n8n-nodes-berget-ai-chat
Local Development
# Clone this repo
git clone <repo-url>
cd n8n-nodes-berget-ai-chat
# Install dependencies
npm install
# Build project
npm run build
# Link locally for development
npm link
cd /path/to/your/n8n/project
npm link @bergetai/n8n-nodes-berget-ai-chat
Configuration
- Add the node to your workflow
- Configure API settings:
- API Key: Your Berget AI API key
- Base URL: https://api.berget.ai/v1 (default)
- Model: Choose from available models
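Under the hood the node talks to the Berget AI API using the base URL and key configured above. As a rough sketch of what such a request looks like, the snippet below builds a chat completion request by hand; the /chat/completions path and Bearer auth header are assumptions based on the usual OpenAI-compatible convention, so verify them against the Berget AI API docs.

```typescript
// Sketch: build a chat completion request against the Berget AI API.
// The /chat/completions path and Bearer Authorization header are assumed
// (OpenAI-compatible convention), not confirmed by this README.
const BASE_URL = "https://api.berget.ai/v1";

function buildChatRequest(apiKey: string, model: string, prompt: string) {
  return {
    url: `${BASE_URL}/chat/completions`,
    method: "POST" as const,
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
    }),
  };
}

// Example with a placeholder key; pass the result to fetch() to send it.
const req = buildChatRequest(
  "YOUR_API_KEY",
  "meta-llama/Llama-3.1-8B-Instruct",
  "Hello!"
);
console.log(req.url); // https://api.berget.ai/v1/chat/completions
```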
Available Models
- Llama 3.1 8B Instruct
- Llama 3.3 70B Instruct
- GLM-4.6
- DeepSeek-OCR (vision model for image analysis)
- Mistral Small 3.1 24B Instruct 2503
- Qwen3 32B
- GPT-OSS-120B
Features
- ✅ Chat completion
- ✅ Streaming support
- ✅ Function calling
- ✅ JSON mode
- ✅ Formatted output
- ✅ System and user messages
- ✅ Temperature and other parameters
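For the function calling and JSON mode features above, the sketch below shows what the request options might look like, assuming the node forwards OpenAI-style tools and response_format parameters to the API; the getWeather tool is a hypothetical example, not something shipped with the node.

```typescript
// Sketch: function calling options, assuming OpenAI-style "tools".
// The getWeather tool below is hypothetical and exists only to show the shape.
const functionCallingOptions = {
  model: "meta-llama/Llama-3.1-8B-Instruct",
  messages: [{ role: "user", content: "What's the weather in Stockholm?" }],
  tools: [
    {
      type: "function",
      function: {
        name: "getWeather",
        description: "Look up the current weather for a city",
        parameters: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
        },
      },
    },
  ],
};

// Sketch: JSON mode, assuming an OpenAI-style "response_format" parameter
// that asks the model to return a well-formed JSON object.
const jsonModeOptions = {
  model: "meta-llama/Llama-3.1-8B-Instruct",
  messages: [{ role: "user", content: "List three colors as JSON." }],
  response_format: { type: "json_object" },
};
```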
Examples
See the examples/ folder for sample workflows showing how to use the node in different scenarios.
Testing
Quick Test
# Test node structure
npm test
# Test with real API
BERGET_API_KEY=your-key npm test
# Link locally for n8n testing
npm run test:local
Example Usage
// Basic chat completion
{
"operation": "chat",
"model": "meta-llama/Llama-3.1-8B-Instruct",
"messages": [
{"role": "user", "content": "Hello!"}
],
"options": {
"temperature": 0.7,
"max_tokens": 100
}
}
Self-Hosted n8n
Interested in running n8n in Sweden without data leaving the EU? Berget AI offers self-hosted n8n solutions in our Kubernetes clusters. Contact us at [email protected] for more information.
