@berriai/n8n-nodes-litellm v0.1.1
n8n-nodes-litellm
An official n8n community node for LiteLLM, providing native integration with the LiteLLM proxy to access 100+ LLM providers through a unified API.
LiteLLM is an open-source LLM proxy/gateway that provides a unified OpenAI-compatible API for 100+ LLM providers including OpenAI, Anthropic, Google, Azure, AWS Bedrock, and more. It handles load balancing, fallbacks, cost tracking, and rate limiting across all your LLM deployments.
Features
- 🚀 100+ LLM Providers - Access OpenAI, Anthropic, Google, Azure, AWS Bedrock, and more through a single interface
- 🔄 Automatic Fallbacks - Configure fallback models for reliability
- 📊 Built-in Observability - Native support for Langfuse, Datadog, and custom metadata
- 💰 Cost Tracking - Team-based usage tracking and analytics
- 🎯 Full OpenAI Compatibility - Standard parameters (temperature, max_tokens, etc.)
- 🔒 Enterprise Ready - Team management, access control, and rate limiting
- ⚡ Streaming Support - Real-time response streaming (coming soon)
Installation
Via n8n Community Nodes (Recommended)
- Open your n8n instance
- Go to Settings → Community Nodes
- Search for @berriai/n8n-nodes-litellm
- Click Install
Manual Installation
```shell
npm install @berriai/n8n-nodes-litellm
```

Then restart your n8n instance.
Prerequisites
This node requires a running LiteLLM proxy server. If you don't have one set up:
Quick Start with LiteLLM Proxy
Install LiteLLM:

```shell
pip install litellm[proxy]
```

Create a config file (litellm_config.yaml):

```yaml
model_list:
  - model_name: gpt-4o-mini
    litellm_params:
      model: gpt-4o-mini
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude-3-opus
    litellm_params:
      model: claude-3-opus-20240229
      api_key: os.environ/ANTHROPIC_API_KEY
```

Start the proxy:

```shell
litellm --config litellm_config.yaml
```
The proxy will start at http://localhost:4000 by default.
📖 Full LiteLLM documentation: https://docs.litellm.ai/docs/proxy/quick_start
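Because the proxy exposes an OpenAI-compatible API, any HTTP client can talk to it at POST /v1/chat/completions. The sketch below builds such a request in Python; BASE_URL and API_KEY are assumptions for a local setup, and actually sending the request requires a running proxy.

```python
# Sketch: assembling a chat-completion request for a local LiteLLM proxy.
# BASE_URL and API_KEY below are placeholder assumptions; adjust them to
# your deployment. Nothing here is sent over the network.
import json

BASE_URL = "http://localhost:4000"   # default LiteLLM proxy address
API_KEY = "sk-your-proxy-key"        # only needed if auth is enabled

def build_chat_request(model, messages):
    """Return (url, headers, body) for an OpenAI-compatible chat call."""
    url = f"{BASE_URL}/v1/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    }
    body = json.dumps({"model": model, "messages": messages})
    return url, headers, body

url, headers, body = build_chat_request(
    "gpt-4o-mini",
    [{"role": "user", "content": "Explain n8n workflows in one sentence"}],
)
print(url)
```

The same payload shape is what the node submits on your behalf once credentials are configured.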
Configuration
Credentials Setup
- In n8n, create a new LiteLLM API credential
- Configure:
  - API Key: Your LiteLLM proxy API key (if authentication is enabled)
  - Base URL: Your LiteLLM proxy URL (default: http://localhost:4000)
Testing Credentials
The credential includes a built-in test that validates connectivity to your LiteLLM proxy by calling the /v1/models endpoint.
Usage
Basic Chat Completion
The simplest use case - send a message and get a response:
```json
{
  "model": "gpt-4o-mini",
  "messages": [
    {
      "role": "user",
      "content": "Explain n8n workflows in one sentence"
    }
  ]
}
```

Multi-Message Conversation
Build conversations with system prompts and message history:
```json
{
  "model": "claude-3-opus",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant specializing in workflow automation."
    },
    {
      "role": "user",
      "content": "How do I connect to a database in n8n?"
    },
    {
      "role": "assistant",
      "content": "You can use database nodes like PostgresDB or MySQL..."
    },
    {
      "role": "user",
      "content": "Can you show me an example?"
    }
  ]
}
```

Advanced Configuration
Use standard OpenAI parameters for fine-tuned control:
```json
{
  "model": "gpt-4",
  "messages": [...],
  "options": {
    "temperature": 0.7,
    "max_tokens": 2000,
    "top_p": 0.9,
    "frequency_penalty": 0.5,
    "presence_penalty": 0.3,
    "stop": ["END", "###"]
  }
}
```

LiteLLM-Specific Features
Fallback Models
Automatically retry failed requests with fallback models:
```json
{
  "model": "gpt-4",
  "messages": [...],
  "liteLLMOptions": {
    "fallbacks": "claude-3-opus,gpt-4o-mini"
  }
}
```

Team & User Tracking
Track usage by team and user for cost allocation:
```json
{
  "model": "gpt-4",
  "messages": [...],
  "liteLLMOptions": {
    "team_id": "engineering",
    "user": "[email protected]",
    "tags": "production,customer-support"
  }
}
```

Custom Metadata for Observability
Send metadata to observability platforms (Langfuse, Datadog, etc.):
```json
{
  "model": "gpt-4",
  "messages": [...],
  "liteLLMOptions": {
    "metadata": {
      "workflow_id": "workflow-123",
      "customer_id": "cust-456",
      "environment": "production",
      "session_id": "sess-789"
    }
  }
}
```

Parameters Reference
Model (Required)
The LLM model to use. Must be configured in your LiteLLM proxy config.
Examples: gpt-4o-mini, claude-3-opus, gemini-pro, llama-3-70b
Messages (Required)
Array of conversation messages. Each message requires:
- role: system, user, or assistant
- content: The message text
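The expected message shape can be sketched as a small validation helper. validate_messages is hypothetical (not part of the node); it only illustrates the constraints listed above.

```python
# Sketch of the message shape the node expects: a list of dicts with a
# "role" of system/user/assistant and a string "content".
# validate_messages is an illustrative helper, not the node's own code.
VALID_ROLES = {"system", "user", "assistant"}

def validate_messages(messages):
    for i, msg in enumerate(messages):
        if msg.get("role") not in VALID_ROLES:
            raise ValueError(f"message {i}: invalid role {msg.get('role')!r}")
        if not isinstance(msg.get("content"), str):
            raise ValueError(f"message {i}: content must be a string")
    return messages

checked = validate_messages([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How do I connect to a database in n8n?"},
])
```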
Options (Optional)
Standard OpenAI-compatible parameters:
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| frequency_penalty | number | 0 | Penalty for token frequency (-2.0 to 2.0) |
| max_tokens | number | 1000 | Maximum tokens to generate |
| presence_penalty | number | 0 | Penalty for token presence (-2.0 to 2.0) |
| stop | string | - | Comma-separated stop sequences |
| stream | boolean | false | Enable streaming (experimental) |
| temperature | number | 0.7 | Sampling temperature (0 to 2) |
| top_p | number | 1 | Nucleus sampling parameter (0 to 1) |
LiteLLM Options (Optional)
LiteLLM-specific features:
| Parameter | Type | Description |
|-----------|------|-------------|
| fallbacks | string | Comma-separated fallback models |
| metadata | JSON | Custom metadata object for observability |
| tags | string | Comma-separated tags for categorization |
| team_id | string | Team ID for cost tracking |
| user | string | User ID for analytics and rate limiting |
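Several of these options (fallbacks, tags) are comma-separated strings in the node UI. A plausible way they map onto structured request fields is sketched below; build_litellm_options is illustrative, not the node's actual implementation.

```python
# Sketch: turning the node's comma-separated option strings into the
# list/dict forms carried in the request body. Field names follow the
# tables above; the helper itself is a hypothetical illustration.
def build_litellm_options(fallbacks=None, tags=None, team_id=None,
                          user=None, metadata=None):
    opts = {}
    if fallbacks:
        opts["fallbacks"] = [m.strip() for m in fallbacks.split(",")]
    if tags:
        opts["tags"] = [t.strip() for t in tags.split(",")]
    if team_id:
        opts["team_id"] = team_id
    if user:
        opts["user"] = user
    if metadata:
        opts["metadata"] = metadata
    return opts

opts = build_litellm_options(
    fallbacks="claude-3-opus,gpt-4o-mini",
    tags="production,customer-support",
    team_id="engineering",
)
print(opts["fallbacks"])  # -> ['claude-3-opus', 'gpt-4o-mini']
```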
Response Format
The node returns the full LiteLLM/OpenAI response:
```json
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-4o-mini",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "This is the AI response..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 50,
    "total_tokens": 70
  }
}
```

Tip: Use the expression {{ $json.choices[0].message.content }} to extract just the response text.
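The n8n expression {{ $json.choices[0].message.content }} performs the same lookup as this Python sketch against a response of the shape shown above:

```python
# Extracting fields from an OpenAI-style chat.completion response.
# The dict below mirrors the sample response in this section.
response = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "model": "gpt-4o-mini",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant",
                        "content": "This is the AI response..."},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 20, "completion_tokens": 50,
              "total_tokens": 70},
}

text = response["choices"][0]["message"]["content"]   # the reply text
total_tokens = response["usage"]["total_tokens"]       # for cost tracking
print(text)  # -> This is the AI response...
```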
Compatibility
- n8n version: 0.220.0 or higher
- Node.js: 18.x or higher
- LiteLLM: 1.0.0 or higher
Troubleshooting
Connection Refused
Problem: Cannot connect to LiteLLM proxy
Solution:
- Verify the LiteLLM proxy is running: curl http://localhost:4000/health
- Check that the Base URL in your credentials matches the proxy address
- Ensure firewall allows connections to the proxy port
Authentication Failed
Problem: 401 Unauthorized error
Solution:
- If your LiteLLM proxy has authentication enabled, provide the API key in credentials
- Check the API key is valid: curl -H "Authorization: Bearer YOUR_KEY" http://localhost:4000/v1/models
Model Not Found
Problem: Model not available error
Solution:
- Verify the model is configured in your litellm_config.yaml
- Check that the model name spelling matches your config exactly
- Restart LiteLLM proxy after config changes
Rate Limiting
Problem: Too many requests error
Solution:
- Configure rate limits in LiteLLM proxy config
- Use the user parameter to track per-user limits
- Implement retry logic in your workflow
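Retry logic for rate-limited calls can be sketched as exponential backoff. call_llm and RateLimitError below are stand-ins (a fake call that fails twice, then succeeds), not part of the node; in a real workflow the equivalent would be a retry loop around the HTTP request.

```python
# Sketch: retry a rate-limited call with exponential backoff.
# RateLimitError stands in for an HTTP 429 response; call_llm is a
# fake that fails twice before succeeding, purely for demonstration.
import time

class RateLimitError(Exception):
    pass

def with_retries(call, attempts=3, base_delay=1.0):
    for attempt in range(attempts):
        try:
            return call()
        except RateLimitError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

state = {"calls": 0}
def call_llm():
    state["calls"] += 1
    if state["calls"] < 3:
        raise RateLimitError("429 Too Many Requests")
    return "ok"

result = with_retries(call_llm, attempts=4, base_delay=0.01)
print(result)  # -> ok
```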
Examples
Workflow Automation: Email Response Generator
```
Trigger (Webhook)
  ↓
LiteLLM Chat Model
  Model: gpt-4o-mini
  System: "Generate professional email responses"
  User: "{{ $json.email_content }}"
  ↓
Send Email
```

Multi-Provider Reliability
```
LiteLLM Chat Model
  Model: gpt-4
  Fallbacks: claude-3-opus,gpt-4o-mini
  ↓
(Automatically tries fallbacks if GPT-4 fails)
```

Cost Tracking by Department
```
LiteLLM Chat Model
  Model: gpt-4
  Team ID: {{ $json.department }}
  User: {{ $json.user_email }}
  Tags: {{ $json.project_name }}
  ↓
(Track costs in LiteLLM dashboard by team/user)
```

Resources
- LiteLLM Documentation: https://docs.litellm.ai
- LiteLLM GitHub: https://github.com/BerriAI/litellm
- n8n Documentation: https://docs.n8n.io
- LiteLLM Discord: https://discord.com/invite/wuPM9dRgDw
Support
- Issues: GitHub Issues
- Questions: n8n Community Forum
- LiteLLM Support: Discord
Contributing
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
License
Maintainers
LiteLLM Team
- Email: [email protected]
- GitHub: @BerriAI
Built with ❤️ by the LiteLLM team
