@rlquilez/n8n-nodes-openai-litellm (v1.0.22)
🚀 n8n-nodes-openai-litellm
A simplified n8n community node for OpenAI-compatible LLM providers with advanced structured JSON metadata injection capabilities.
🙏 Credits
This project is based on the excellent work by rorubyy and their original n8n-nodes-openai-langfuse project. This version has been simplified and refocused to provide a clean, dependency-free solution for structured JSON metadata injection with OpenAI-compatible providers.
Special thanks to rorubyy for the foundation and inspiration! 🎉
✨ Key Features
🎯 Universal Compatibility
- Full support for OpenAI-compatible chat models (`gpt-4o`, `gpt-4o-mini`, `o1-preview`, etc.)
- Seamless integration with LiteLLM and other OpenAI-compatible providers
- Works with Azure OpenAI, LocalAI, and custom APIs
🔧 Structured Metadata Injection
- Inject custom JSON data directly into your LLM requests
- Add structured context for tracking and analysis
- Flexible metadata for projects, environments, workflows, and more
⚡ Simplified Architecture
- No external tracing dependencies
- Quick and easy setup
- Optimized for performance and reliability
📦 NPM Package: @rlquilez/n8n-nodes-openai-litellm
🏢 About n8n: n8n is a fair-code licensed workflow automation platform.
📋 Table of Contents
- 🚀 Installation
- 🔐 Credentials
- ⚙️ Configuration
- 🎯 JSON Metadata
- 🔧 Compatibility
- 📚 Resources
- 📈 Version History
🚀 Installation
Follow the official installation guide for n8n community nodes.
🎯 Community Nodes (Recommended)
For n8n v0.187+, install directly from the UI:
- Go to Settings → Community Nodes
- Click Install
- Enter `@rlquilez/n8n-nodes-openai-litellm` in the "Enter npm package name" field
- Accept the risks of using community nodes
- Select Install
🐳 Docker Installation (Recommended for Production)
A pre-configured Docker setup is available in the docker/ directory:
```shell
# Clone the repository and navigate to the docker/ directory
git clone https://github.com/rlquilez/n8n-nodes-openai-litellm.git
cd n8n-nodes-openai-litellm/docker

# Build the Docker image
docker build -t n8n-openai-litellm .

# Run the container
docker run -it -p 5678:5678 n8n-openai-litellm
```
You can now access n8n at http://localhost:5678
⚙️ Manual Installation
For a standard installation without Docker:
```shell
# Go to your n8n installation directory
cd ~/.n8n

# Install the node
npm install @rlquilez/n8n-nodes-openai-litellm

# Restart n8n to load the node
n8n start
```
🔐 Credentials
This credential is used to authenticate your OpenAI-compatible LLM endpoint.
OpenAI Settings
| Field | Description | Example |
|-------|-------------|---------|
| OpenAI API Key | Your API key for accessing the OpenAI-compatible endpoint | sk-abc123... |
| OpenAI Organization ID | (Optional) Your OpenAI organization ID, if required | org-xyz789 |
| OpenAI Base URL | Full URL to your OpenAI-compatible endpoint | default: https://api.openai.com/v1 |
💡 LiteLLM Compatibility: You can use this node with LiteLLM by setting the Base URL to your LiteLLM proxy endpoint (e.g., `http://localhost:4000/v1`).
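OpenAI-compatible clients generally expect the `/v1` prefix on the base URL, and it is easy to drop when pointing at a proxy. A small sketch of the kind of normalization you might apply when wiring up the credential (hypothetical helper, not part of this node):

```python
def normalize_base_url(url: str) -> str:
    """Ensure an OpenAI-compatible base URL ends with the /v1 prefix."""
    url = url.rstrip("/")
    if not url.endswith("/v1"):
        url += "/v1"
    return url

print(normalize_base_url("http://localhost:4000"))      # http://localhost:4000/v1
print(normalize_base_url("https://api.openai.com/v1/")) # https://api.openai.com/v1
```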
✅ After saving the credential, you're ready to use the node with structured JSON metadata injection.
⚙️ Configuration
This node allows you to inject structured JSON metadata into your OpenAI requests, providing additional context for your model calls.
🎯 JSON Metadata
Supported Fields
| Field | Type | Description |
|-------|------|-------------|
| Custom Metadata (JSON) | object | Custom JSON object with additional context (e.g., project, env, workflow) |
| Session ID | string | Used for trace grouping and session management |
| User ID | string | Optional: for trace attribution and user identification |
🧪 Configuration Example
| Input Field | Example Value |
|-------------|---------------|
| Custom Metadata (JSON) | See example below |
| Session ID | default-session-id |
| User ID | user-123 |
```json
{
  "project": "example-project",
  "env": "dev",
  "workflow": "main-flow",
  "version": "1.0.0",
  "tags": ["ai", "automation"]
}
```
💡 How It Works
The node uses LiteLLM-compatible metadata transmission through the extraBody.metadata parameter, ensuring proper integration with LiteLLM proxies and observability tools.
Metadata Flow:
- Session ID and User ID are automatically added to the custom metadata
- All metadata is transmitted via LiteLLM's standard `extraBody.metadata` parameter
- Compatible with LiteLLM logging, Langfuse, and other observability platforms
- Maintains full compatibility with OpenAI-compatible endpoints
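The merge described above can be sketched as follows. This is a simplified illustration of the documented behavior, not the node's actual implementation; the `langfuse_*` key names follow the Langfuse attribution fields this README describes:

```python
def merge_metadata(custom: dict, session_id: str = "", user_id: str = "") -> dict:
    """Fold the Session ID and User ID fields into the custom metadata
    under Langfuse attribution keys (sketch of the described behavior)."""
    merged = dict(custom)
    if session_id:
        merged["langfuse_session_id"] = session_id
    if user_id:
        merged["langfuse_user_id"] = user_id
    return merged

# Example payload the node would assemble for an OpenAI-compatible endpoint
payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}],
    "metadata": merge_metadata(
        {"project": "example-project", "env": "dev"},
        session_id="default-session-id",
        user_id="user-123",
    ),
}
print(payload["metadata"])
```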
Common Use Cases:
- Session Management: Track conversations across multiple interactions
- User Attribution: Associate requests with specific users
- Project Tracking: Identify which project generated the request
- Environment Control: Differentiate between dev, staging, and production
- Workflow Analysis: Track performance by workflow type
- Debugging: Add unique identifiers for debugging purposes
- Observability: Integration with Langfuse, LiteLLM logging, and custom analytics
🔧 Compatibility
- Minimum n8n version: 1.0.0
- Compatible with:
- Official OpenAI API
- Any OpenAI-compatible LLM (e.g., via LiteLLM, LocalAI, Azure OpenAI)
- All providers that support OpenAI-compatible endpoints
Tested Models
✅ OpenAI Models:
- `gpt-4o`, `gpt-4o-mini`
- `gpt-4-turbo`, `gpt-4`
- `gpt-3.5-turbo`
- `o1-preview`, `o1-mini`
✅ Compatible Providers:
- LiteLLM - Proxy for 100+ LLMs
- Azure OpenAI - Microsoft's enterprise API
- LocalAI - Self-hosted local LLMs
- Ollama - Local models via OpenAI-compatible API
📚 Resources
Official Documentation
- 📖 n8n Community Nodes Documentation
- 🚀 LiteLLM Documentation
- 💬 n8n Community Forum
- 🤖 OpenAI API Documentation
🔗 LiteLLM + Langfuse Configuration
To use this node with LiteLLM and Langfuse for observability, you need to configure your LiteLLM proxy properly:
1. LiteLLM Configuration (config.yaml)
```yaml
model_list:
  - model_name: gpt-4o-mini
    litellm_params:
      model: gpt-4o-mini
      api_key: os.environ/OPENAI_API_KEY

litellm_settings:
  success_callback: ["langfuse"]  # Enable Langfuse logging

# Langfuse environment variables (set these in your environment)
# LANGFUSE_PUBLIC_KEY=pk-xxx
# LANGFUSE_SECRET_KEY=sk-xxx
# LANGFUSE_HOST=https://cloud.langfuse.com (or your self-hosted URL)
```
2. Environment Variables
Set these environment variables where you run LiteLLM:
```shell
export LANGFUSE_PUBLIC_KEY="pk-xxx"
export LANGFUSE_SECRET_KEY="sk-xxx"
export LANGFUSE_HOST="https://cloud.langfuse.com"
export OPENAI_API_KEY="sk-xxx"
```
3. Start LiteLLM Proxy
```shell
litellm --config config.yaml --port 4000
```
4. Configure n8n Node
- Base URL: `http://localhost:4000` (or your LiteLLM proxy URL)
- API Key: Any value (LiteLLM will use the configured API key)
- Metadata: Will be automatically forwarded to Langfuse with fields like:
  - `langfuse_user_id` (from the User ID field)
  - `langfuse_session_id` (from the Session ID field)
  - Custom metadata from the JSON field
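The setup above can be smoke-tested outside n8n. A minimal stdlib-only Python sketch that sends the same kind of metadata-carrying request the node is described to produce; the proxy URL and Langfuse key names follow the configuration above, and the call is best-effort (it prints an error if no proxy is running):

```python
import json
import urllib.request

# Request body mirroring what this node is described to send through a
# LiteLLM proxy (values are the example values from this README).
body = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "ping"}],
    "metadata": {
        "langfuse_user_id": "user-123",
        "langfuse_session_id": "default-session-id",
        "project": "example-project",
    },
}

req = urllib.request.Request(
    "http://localhost:4000/v1/chat/completions",
    data=json.dumps(body).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer anything",  # LiteLLM uses its configured key
    },
)

try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
except OSError as exc:
    print(f"proxy not reachable: {exc}")
```

With the Langfuse callback enabled in `config.yaml`, the `langfuse_user_id` and `langfuse_session_id` fields should appear as user and session attribution on the resulting trace.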
📈 Version History
v1.0.15
- 🔧 Fixed LiteLLM + Langfuse integration - Changed metadata format to work correctly with LiteLLM proxy
- ✅ Proper Langfuse fields - Added `langfuse_user_id` and `langfuse_session_id` for proper trace attribution
- 🎯 Simplified approach - Removed complex `extra_body` approach in favor of direct metadata field
- 📚 Enhanced documentation - Added comprehensive LiteLLM + Langfuse configuration guide
v1.0.14
- 🔧 Enhanced metadata transmission - Added dual approach with both direct `extra_body` and `modelKwargs.extra_body` for maximum compatibility
- 📊 Improved logging - Enhanced console logging to show both `extra_body` and `modelKwargs` configuration
v1.0.13
- 🔧 Multiple transmission approaches - Attempted various methods to ensure metadata reaches LLM endpoint
- 📊 Enhanced debugging - Added comprehensive logging for troubleshooting
v1.0.12
- 🔧 Enhanced metadata transmission - Added dual approach with both direct `extra_body` and `modelKwargs.extra_body` for maximum compatibility
- 📊 Improved logging - Enhanced console logging to show both `extra_body` and `modelKwargs` configuration
- 📚 Documentation - Updated README with comprehensive version history and troubleshooting guide
v1.0.11
- 🔧 Critical Fix: Proper extra_body parameter application - Reorganized ChatOpenAI configuration to prevent options spread from overriding extra_body
- ✅ Enhanced payload transmission - Ensures metadata is properly included in the request payload to LiteLLM/OpenAI endpoints
- 📊 Added detailed logging - Better visibility into extra_body configuration for debugging
v1.0.10
- 📝 Documentation update - Updated version history with v1.0.9 critical fix details
v1.0.9
- 🔧 Critical Fix: Corrected extra_body parameter name - Fixed `extraBody` to `extra_body` to match the LangChain ChatOpenAI API specification
- ✅ Verified metadata transmission - Ensures metadata is properly sent to LiteLLM and OpenAI-compatible endpoints
- 📚 Based on official documentation - Implementation follows LangChain and LiteLLM examples
v1.0.8
- 📝 Enhanced documentation - Updated README with detailed metadata features and version history
- 🎯 Improved use cases - Added comprehensive examples and observability integration details
v1.0.7
- 🔧 Fixed LiteLLM metadata payload transmission - Implemented proper `extra_body.metadata` parameter for LiteLLM compatibility
- 📊 Added Session ID and User ID fields - Separate fields for better trace attribution and session management
- 🎯 Improved metadata structure - Based on LiteLLM documentation and reference implementation
- ✅ Enhanced observability - Better integration with Langfuse and LiteLLM logging systems
v1.0.6
- 🆕 Added Session ID and User ID fields - Separate input fields for better metadata organization
- 🔧 Improved metadata handling - Enhanced processing and logging of metadata values
- 📝 Simplified default JSON example - Cleaner default metadata structure
v1.0.5
- 🔄 Repository synchronization - Updated with latest remote changes
- 📚 Documentation improvements - Enhanced README and node descriptions
v1.0.2
- 🔧 Documentation and examples improvements
- 🎯 Focus on custom JSON metadata injection
- 📝 Documentation completely rewritten
v1.0.1
- 🎨 Updated icons to official OpenAI icons from n8n repository
- 🔧 Minor compatibility fixes
v1.0.0
- 🎉 Initial release with OpenAI-compatible providers
- 📊 Structured JSON metadata injection
- ⚡ Simplified architecture without external tracing dependencies
💝 Contributing
Developed with ❤️ for the n8n community
If this project was helpful, consider giving it a ⭐ on GitHub!
