@sevenlab/n8n-nodes-litellm-chat-model
v1.1.1
n8n community node for LiteLLM — connect any LLM provider via a LiteLLM proxy using a simple JSON options field
An n8n community node for LiteLLM — a proxy that provides a unified OpenAI-compatible API across 100+ LLM providers (OpenAI, Anthropic, Gemini, Azure, Ollama, and more).
Instead of managing separate credentials and node configurations per provider, point this node at your LiteLLM proxy and pass model parameters as plain JSON.
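To illustrate the unified API idea, here is a minimal sketch of the request shape the proxy accepts. The URL, key, and model names are placeholders for your own proxy's values, not real endpoints; the same body works regardless of which provider the model routes to.

```typescript
// Sketch: one OpenAI-compatible request shape for any provider behind a LiteLLM proxy.
// baseUrl and apiKey are placeholders — use your own proxy credentials.
const baseUrl = "https://your-litellm-proxy.example.com/v1";
const apiKey = "sk-litellm-placeholder";

// Builds the pieces of a chat-completions call; switching providers
// is just a matter of changing the model string.
function chatRequest(model: string, userText: string) {
  return {
    url: `${baseUrl}/chat/completions`,
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: {
      model, // whatever model name your proxy exposes
      messages: [{ role: "user", content: userText }],
    },
  };
}
```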
n8n is a fair-code licensed workflow automation platform.
Installation
Follow the installation guide in the n8n community nodes documentation.
Or install directly:
npm install @sevenlab/n8n-nodes-litellm-chat-model
Credentials
You need a running LiteLLM proxy. Configure the following:
- Base URL — the base URL of your LiteLLM proxy (e.g. https://your-litellm-proxy.example.com/v1)
- API Key — your LiteLLM API key
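Once the credentials are set, a quick sanity check is to list the proxy's models (the same endpoint the node's dropdown reads). The sketch below parses the OpenAI-compatible `{ data: [{ id }] }` response shape; the sample object stands in for what a live `GET /models` call with a `Bearer` header would return.

```typescript
// Sketch: extracting model IDs from the proxy's /models response.
// In a live check you would fetch(`${baseUrl}/models`, with an
// Authorization: Bearer header) and pass the parsed JSON here.
interface ModelsResponse {
  data: { id: string }[];
}

function modelIds(res: ModelsResponse): string[] {
  return res.data.map((m) => m.id);
}

// Stand-in for a live response; the model names are illustrative.
const sample: ModelsResponse = {
  data: [{ id: "gpt-4o" }, { id: "claude-3-5-sonnet" }],
};
```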
Usage
- Add the LiteLLM Chat Model sub-node to an AI Agent or Chain node
- Select a model from the dropdown (loaded dynamically from your proxy's /models endpoint)
- Pass any model parameters as JSON in the Options field
Example options:
{
"temperature": 0.7,
"max_tokens": 1000
}
Supported parameters depend on the provider; see your provider's documentation for details.
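The effect of the Options field can be sketched as a simple merge: the parsed JSON is spread into the request body alongside the model and messages, so provider-specific parameters pass through untouched. This is an illustration of the idea, not the node's actual implementation.

```typescript
// Sketch: merging the Options JSON into an OpenAI-compatible request body.
const options = JSON.parse('{ "temperature": 0.7, "max_tokens": 1000 }');

const body = {
  model: "gpt-4o", // placeholder model name
  messages: [{ role: "user", content: "Hello" }],
  ...options, // extra parameters are forwarded to the provider as-is
};
```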
Compatibility
Minimum n8n version: 1.0
