# n8n-nodes-litellm
An n8n community node package for integrating with a LiteLLM proxy. Provides a standalone node for Chat Completions, Responses API calls, and Embeddings, plus a LangChain-compatible Chat Model node for use inside AI Agent and Chain workflows.
## Nodes

### LiteLLM
A general-purpose node with three operations:
| Operation | Endpoint | Description |
|---|---|---|
| Chat Completion | POST /v1/chat/completions | Send messages to any chat model and receive a response |
| Responses | POST /v1/responses | Call OpenAI-compatible responses models such as Copilot Codex / GPT-5 |
| Embeddings | POST /v1/embeddings | Generate vector embeddings for one or more text inputs |
#### Chat Completion parameters
| Parameter | Description |
|---|---|
| Model | Loaded dynamically from your proxy (filtered to chat-capable models) |
| System Prompt | Convenience field — prepended as a system message |
| Messages | Ordered list of role + content pairs (user / assistant / system / tool) |
| Reasoning Effort | low / medium / high — for extended-thinking models (o1, o3, claude-3-7, etc.) |
| Tools | JSON array of function/tool definitions (OpenAI format) |
| Tool Choice | auto / none / required |
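For reference, these parameters map roughly onto the following request body. This is a sketch of the OpenAI-compatible `/v1/chat/completions` schema that LiteLLM proxies, not the node's internals; the base URL, API key, and model name are placeholders.

```typescript
// Sketch of the request the Chat Completion operation effectively issues.
// BASE_URL, API_KEY, and the model name are placeholders, not values from this package.
const BASE_URL = "http://localhost:4000";
const API_KEY = "";

const body = {
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful assistant." }, // System Prompt is prepended here
    { role: "user", content: "Summarize this support ticket." },
  ],
  reasoning_effort: "medium", // only relevant for extended-thinking models
  tools: [],                  // OpenAI-format function/tool definitions
  tool_choice: "auto",
};

const res = await fetch(`${BASE_URL}/v1/chat/completions`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    ...(API_KEY ? { Authorization: `Bearer ${API_KEY}` } : {}),
  },
  body: JSON.stringify(body),
});
const completion = await res.json();
console.log(completion.choices[0].message.content);
```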
#### Embeddings parameters
| Parameter | Description |
|---|---|
| Model | Loaded dynamically (filtered to embedding models) |
| Input | Plain string or JSON array of strings for batch embedding |
| Encoding Format | float (default) or base64 |
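To illustrate how Input and Encoding Format translate to the wire format, here is a minimal sketch of a batch call to `/v1/embeddings` using the standard OpenAI-compatible schema; the model name is illustrative.

```typescript
// Minimal sketch of a batch embeddings request (illustrative model name).
const res = await fetch("http://localhost:4000/v1/embeddings", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "text-embedding-3-small",
    input: ["first document", "second document"], // plain string or JSON array of strings
    encoding_format: "float",                     // or "base64"
  }),
});
const { data } = await res.json();
console.log(data.map((d: { embedding: number[] }) => d.embedding.length)); // one vector per input
```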
#### Responses parameters
| Parameter | Description |
|---|---|
| Model | Loaded dynamically from your proxy (filtered to mode: responses models) |
| Input | Prompt or input text sent to /v1/responses |
| Instructions | Optional system-style instructions |
| Reasoning Effort | low / medium / high for supported reasoning models |
| Tools | JSON array of Responses API tool definitions, including built-in tools and function tools |
| Tool Choice | auto / none / required |
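The Responses operation sends these parameters to `/v1/responses` in the OpenAI Responses API shape. A rough sketch with illustrative values; note that the Responses schema nests the effort under a `reasoning` object.

```typescript
// Sketch of a /v1/responses request; model name, prompt, and instructions are illustrative.
const res = await fetch("http://localhost:4000/v1/responses", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "gpt-5-codex",                          // a mode: responses model from the dropdown
    input: "Refactor this function to be async.",
    instructions: "You are a senior TypeScript reviewer.",
    reasoning: { effort: "high" },                 // Reasoning Effort
    tools: [],                                     // Responses API tool definitions
    tool_choice: "auto",
  }),
});
console.log(await res.json());
```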
### LiteLLM Chat Model
A LangChain-compatible node that exposes the LiteLLM proxy as an AI Language Model. Connect it to the Model input of an AI Agent, Chain, or any other LangChain node.
| Parameter | Description |
|---|---|
| Model | Chat model loaded dynamically from your proxy |
| Temperature | Sampling temperature 0–2 |
| Max Tokens | Maximum tokens to generate |
| Reasoning Effort | Extended thinking level for supported models |
| Max Retries | How many times to retry failed API calls |
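Outside of n8n, an equivalent setup with a generic OpenAI-compatible LangChain client looks roughly like this. This is an assumption-laden sketch (the node's internals may differ); it only shows how the parameters above map onto a LangChain chat model pointed at the proxy.

```typescript
import { ChatOpenAI } from "@langchain/openai";

// Rough, assumed equivalent of what the node configures for an AI Agent's Model input.
const model = new ChatOpenAI({
  model: "gpt-4o",                               // chat model from the dropdown
  temperature: 0.7,                              // Temperature (0–2)
  maxTokens: 1024,                               // Max Tokens
  maxRetries: 2,                                 // Max Retries
  apiKey: process.env.LITELLM_API_KEY ?? "none", // proxy key, if any
  configuration: { baseURL: "http://localhost:4000/v1" },
});

const reply = await model.invoke("Hello from n8n!");
console.log(reply.content);
```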
## Credentials

### LiteLlmApi
| Field | Required | Description |
|---|---|---|
| Base URL | ✅ | URL of your LiteLLM proxy, e.g. http://localhost:4000 |
| API Key | ❌ | Bearer token — leave blank for open (unauthenticated) deployments |
Tip: The credential is entirely optional on both nodes. If you don't attach one, the node will attempt to connect to `http://localhost:4000` without authentication.
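In other words, the credential only contributes a base URL and an optional bearer header. A hypothetical helper showing that resolution (the interface and function names are illustrative, not part of this package):

```typescript
// Illustrative only: how the optional LiteLlmApi credential resolves to request settings.
interface LiteLlmApiCredential {
  baseUrl?: string;
  apiKey?: string;
}

function resolveRequestOptions(cred?: LiteLlmApiCredential) {
  const baseUrl = cred?.baseUrl ?? "http://localhost:4000"; // default when no credential is attached
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  if (cred?.apiKey) {
    headers.Authorization = `Bearer ${cred.apiKey}`; // omitted for open deployments
  }
  return { baseUrl, headers };
}
```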
## Dynamic model discovery
The model dropdowns call GET /model/info at node-configuration time and filter results by the mode field reported by LiteLLM (chat for Chat Completion, responses for the Responses API, embedding for Embeddings). If /model/info is unavailable the nodes fall back to GET /v1/models. You can also type any model name manually.
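Conceptually, the discovery and filtering work like the sketch below. The exact shape of LiteLLM's `/model/info` response beyond the `mode` field is an assumption here; the fallback list from `/v1/models` carries no mode metadata, so it cannot be filtered.

```typescript
// Sketch of the discovery logic described above; field names beyond `mode` are assumed.
async function listModels(
  baseUrl: string,
  mode: "chat" | "responses" | "embedding",
): Promise<string[]> {
  const infoRes = await fetch(`${baseUrl}/model/info`);
  if (infoRes.ok) {
    const { data } = await infoRes.json();
    return data
      .filter((m: any) => m.model_info?.mode === mode) // keep only models matching the operation
      .map((m: any) => m.model_name);
  }
  // Fallback: /v1/models exposes model ids only, with no mode to filter on.
  const listRes = await fetch(`${baseUrl}/v1/models`);
  const { data } = await listRes.json();
  return data.map((m: { id: string }) => m.id);
}
```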
## Installation

### In n8n (community nodes)
- Open Settings → Community Nodes
- Enter `@dkhalife/n8n-nodes-litellm` and click Install
- Restart n8n if prompted
### Manual / self-hosted
```bash
# Inside your n8n custom nodes directory (or the n8n data volume):
npm install @dkhalife/n8n-nodes-litellm
```

## Development
```bash
git clone https://github.com/dkhalife/n8n-nodes-litellm.git
cd n8n-nodes-litellm
npm install
npm run build
```

Link into a local n8n instance:
```bash
# From this repo root:
npm link

# From your n8n installation:
npm link n8n-nodes-litellm
```

## Releasing
New versions are published to npm automatically when a version tag is pushed.
```bash
npm run release
```

This will lint, build, prompt for a version bump, update the changelog, commit, tag, and push, which triggers the publish workflow to publish to npm with provenance.
One-time setup: Configure OIDC trusted publishing on npmjs.com for this package. See the publish workflow header for detailed instructions.
## Requirements
- n8n ≥ 1.0
- Node.js ≥ 18.17
## License
MIT
