
@berriai/n8n-nodes-litellm

v0.1.1


n8n community node for LiteLLM - Access 100+ LLMs with unified API


n8n-nodes-litellm


Official n8n community node for LiteLLM, providing native integration with the LiteLLM proxy to access 100+ LLM providers through a unified API.

LiteLLM is an open-source LLM proxy/gateway that provides a unified OpenAI-compatible API for 100+ LLM providers including OpenAI, Anthropic, Google, Azure, AWS Bedrock, and more. It handles load balancing, fallbacks, cost tracking, and rate limiting across all your LLM deployments.

Features

  • 🚀 100+ LLM Providers - Access OpenAI, Anthropic, Google, Azure, AWS Bedrock, and more through a single interface
  • 🔄 Automatic Fallbacks - Configure fallback models for reliability
  • 📊 Built-in Observability - Native support for Langfuse, Datadog, and custom metadata
  • 💰 Cost Tracking - Team-based usage tracking and analytics
  • 🎯 Full OpenAI Compatibility - Standard parameters (temperature, max_tokens, etc.)
  • 🔒 Enterprise Ready - Team management, access control, and rate limiting
  • Streaming Support - Real-time response streaming (coming soon)

Installation

Via n8n Community Nodes (Recommended)

  1. Open your n8n instance
  2. Go to Settings → Community Nodes
  3. Search for @berriai/n8n-nodes-litellm
  4. Click Install

Manual Installation

npm install @berriai/n8n-nodes-litellm

Then restart your n8n instance.

Prerequisites

This node requires a running LiteLLM proxy server. If you don't have one set up:

Quick Start with LiteLLM Proxy

  1. Install LiteLLM:

    pip install 'litellm[proxy]'
  2. Create a config file (litellm_config.yaml):

    model_list:
      - model_name: gpt-4o-mini
        litellm_params:
          model: gpt-4o-mini
          api_key: os.environ/OPENAI_API_KEY
    
      - model_name: claude-3-opus
        litellm_params:
          model: claude-3-opus-20240229
          api_key: os.environ/ANTHROPIC_API_KEY
  3. Start the proxy:

    litellm --config litellm_config.yaml

The proxy will start at http://localhost:4000 by default.

📖 Full LiteLLM documentation: https://docs.litellm.ai/docs/proxy/quick_start

Configuration

Credentials Setup

  1. In n8n, create a new LiteLLM API credential
  2. Configure:
    • API Key: Your LiteLLM proxy API key (if authentication is enabled)
    • Base URL: Your LiteLLM proxy URL (default: http://localhost:4000)

Testing Credentials

The credential includes a built-in test that validates connectivity to your LiteLLM proxy by calling the /v1/models endpoint.

Usage

Basic Chat Completion

The simplest use case: send a message and get a response.

// Node configuration
{
  "model": "gpt-4o-mini",
  "messages": [
    {
      "role": "user",
      "content": "Explain n8n workflows in one sentence"
    }
  ]
}

Multi-Message Conversation

Build conversations with system prompts and message history:

{
  "model": "claude-3-opus",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant specializing in workflow automation."
    },
    {
      "role": "user",
      "content": "How do I connect to a database in n8n?"
    },
    {
      "role": "assistant",
      "content": "You can use database nodes like PostgresDB or MySQL..."
    },
    {
      "role": "user",
      "content": "Can you show me an example?"
    }
  ]
}
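If you assemble the messages array dynamically (for example in a Code node upstream of this one), a small helper keeps the roles in order. This buildMessages helper is illustrative only, not part of the package:

```javascript
// Illustrative helper for assembling an OpenAI-style messages array.
// Not part of this package -- a sketch for use in an n8n Code node.
function buildMessages(systemPrompt, history, nextUserMessage) {
  const messages = [];
  if (systemPrompt) {
    messages.push({ role: 'system', content: systemPrompt });
  }
  // history: array of prior { role, content } turns
  messages.push(...history);
  messages.push({ role: 'user', content: nextUserMessage });
  return messages;
}

const messages = buildMessages(
  'You are a helpful assistant specializing in workflow automation.',
  [
    { role: 'user', content: 'How do I connect to a database in n8n?' },
    { role: 'assistant', content: 'You can use database nodes like PostgresDB or MySQL...' },
  ],
  'Can you show me an example?'
);
```

The resulting array matches the JSON shown above and can be passed straight to the node's Messages parameter.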

Advanced Configuration

Use standard OpenAI parameters for fine-tuned control:

{
  "model": "gpt-4",
  "messages": [...],
  "options": {
    "temperature": 0.7,
    "max_tokens": 2000,
    "top_p": 0.9,
    "frequency_penalty": 0.5,
    "presence_penalty": 0.3,
    "stop": ["END", "###"]
  }
}

LiteLLM-Specific Features

Fallback Models

Automatically retry failed requests with fallback models:

{
  "model": "gpt-4",
  "messages": [...],
  "liteLLMOptions": {
    "fallbacks": "claude-3-opus,gpt-4o-mini"  // Try these if primary fails
  }
}
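The LiteLLM proxy performs the fallback retries server-side; the sketch below only illustrates the semantics. The `callModel` function is a hypothetical stand-in for a single completion request:

```javascript
// Conceptual sketch of fallback behavior. LiteLLM handles this
// server-side; this code only illustrates the semantics.
// `callModel` is a hypothetical function sending one completion request.
async function completeWithFallbacks(callModel, primaryModel, fallbacks) {
  const candidates = [primaryModel, ...fallbacks.split(',').map((m) => m.trim())];
  let lastError;
  for (const model of candidates) {
    try {
      return await callModel(model); // first success wins
    } catch (err) {
      lastError = err; // fall through to the next candidate
    }
  }
  throw lastError; // every candidate failed
}
```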

Team & User Tracking

Track usage by team and user for cost allocation:

{
  "model": "gpt-4",
  "messages": [...],
  "liteLLMOptions": {
    "team_id": "engineering",
    "user": "[email protected]",
    "tags": "production,customer-support"
  }
}

Custom Metadata for Observability

Send metadata to observability platforms (Langfuse, Datadog, etc.):

{
  "model": "gpt-4",
  "messages": [...],
  "liteLLMOptions": {
    "metadata": {
      "workflow_id": "workflow-123",
      "customer_id": "cust-456",
      "environment": "production",
      "session_id": "sess-789"
    }
  }
}

Parameters Reference

Model (Required)

The LLM model to use. Must be configured in your LiteLLM proxy config.

Examples: gpt-4o-mini, claude-3-opus, gemini-pro, llama-3-70b

Messages (Required)

Array of conversation messages. Each message requires:

  • role: system, user, or assistant
  • content: The message text

Options (Optional)

Standard OpenAI-compatible parameters:

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| frequency_penalty | number | 0 | Penalty for token frequency (-2.0 to 2.0) |
| max_tokens | number | 1000 | Maximum tokens to generate |
| presence_penalty | number | 0 | Penalty for token presence (-2.0 to 2.0) |
| stop | string | - | Comma-separated stop sequences |
| stream | boolean | false | Enable streaming (experimental) |
| temperature | number | 0.7 | Sampling temperature (0 to 2) |
| top_p | number | 1 | Nucleus sampling parameter (0 to 1) |
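Options you leave unset fall back to the defaults in the table. A sketch of that merge behavior (the resolveOptions helper is illustrative, not part of the node):

```javascript
// Defaults taken from the table above; the merge logic is illustrative only.
const OPTION_DEFAULTS = {
  frequency_penalty: 0,
  max_tokens: 1000,
  presence_penalty: 0,
  stream: false,
  temperature: 0.7,
  top_p: 1,
};

// User-supplied options override defaults; everything else keeps its default.
function resolveOptions(userOptions = {}) {
  return { ...OPTION_DEFAULTS, ...userOptions };
}

const resolved = resolveOptions({ temperature: 0.2, max_tokens: 2000 });
```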

LiteLLM Options (Optional)

LiteLLM-specific features:

| Parameter | Type | Description |
|-----------|------|-------------|
| fallbacks | string | Comma-separated fallback models |
| metadata | JSON | Custom metadata object for observability |
| tags | string | Comma-separated tags for categorization |
| team_id | string | Team ID for cost tracking |
| user | string | User ID for analytics and rate limiting |

Response Format

The node returns the full LiteLLM/OpenAI response:

{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-4o-mini",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "This is the AI response..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 50,
    "total_tokens": 70
  }
}

Tip: Use the Expression {{ $json.choices[0].message.content }} to extract just the response text.
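In a Code node, the same extraction can be written in JavaScript. A sketch against the sample response above, with optional chaining so a malformed response yields an empty string instead of throwing:

```javascript
// Sample response shaped like the one shown above.
const response = {
  choices: [
    {
      index: 0,
      message: { role: 'assistant', content: 'This is the AI response...' },
      finish_reason: 'stop',
    },
  ],
  usage: { prompt_tokens: 20, completion_tokens: 50, total_tokens: 70 },
};

// JavaScript equivalent of {{ $json.choices[0].message.content }}.
const text = response.choices?.[0]?.message?.content ?? '';
```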

Compatibility

  • n8n version: 0.220.0 or higher
  • Node.js: 18.x or higher
  • LiteLLM: 1.0.0 or higher

Troubleshooting

Connection Refused

Problem: Cannot connect to LiteLLM proxy

Solution:

  • Verify LiteLLM proxy is running: curl http://localhost:4000/health
  • Check the Base URL in your credentials matches the proxy address
  • Ensure firewall allows connections to the proxy port

Authentication Failed

Problem: 401 Unauthorized error

Solution:

  • If your LiteLLM proxy has authentication enabled, provide the API key in credentials
  • Check the API key is valid: curl -H "Authorization: Bearer YOUR_KEY" http://localhost:4000/v1/models

Model Not Found

Problem: Model not available error

Solution:

  • Verify the model is configured in your litellm_config.yaml
  • Check model name spelling matches your config exactly
  • Restart LiteLLM proxy after config changes

Rate Limiting

Problem: Too many requests error

Solution:

  • Configure rate limits in LiteLLM proxy config
  • Use user parameter to track per-user limits
  • Implement retry logic in your workflow
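On the last point, a hedged sketch of retry logic with exponential backoff (n8n nodes also offer a built-in Retry On Fail setting; this withRetry wrapper is illustrative only):

```javascript
// Illustrative retry wrapper with exponential backoff; not part of this package.
async function withRetry(fn, { retries = 3, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === retries) throw err; // out of attempts, surface the error
      // Exponential backoff: baseDelayMs, 2x, 4x, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}
```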

Examples

Workflow Automation: Email Response Generator

Trigger (Webhook)
  ↓
LiteLLM Chat Model
  Model: gpt-4o-mini
  System: "Generate professional email responses"
  User: "{{ $json.email_content }}"
  ↓
Send Email

Multi-Provider Reliability

LiteLLM Chat Model
  Model: gpt-4
  Fallbacks: claude-3-opus,gpt-4o-mini
  ↓
(Automatically tries fallbacks if GPT-4 fails)

Cost Tracking by Department

LiteLLM Chat Model
  Model: gpt-4
  Team ID: {{ $json.department }}
  User: {{ $json.user_email }}
  Tags: {{ $json.project_name }}
  ↓
(Track costs in LiteLLM dashboard by team/user)

Resources

  • LiteLLM Documentation: https://docs.litellm.ai
  • LiteLLM GitHub: https://github.com/BerriAI/litellm
  • n8n Documentation: https://docs.n8n.io
  • LiteLLM Discord: https://discord.com/invite/wuPM9dRgDw

Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.

License

MIT License

Maintainers

LiteLLM Team


Built with ❤️ by the LiteLLM team